How To Acquire Mobile Data With MD-NEXT From HancomGMD

by Michelle Oh, HancomGMD 

With an ever-increasing range of features and dramatically increased storage capacity, digital devices have become essential to our daily lives. They have proliferated to the point that they are found in virtually every household, and because they store vast amounts of data they are a source of crucial information that serves as evidence in digital forensic investigations.

With the expansion of 5G technology we expect a new chapter in mobile development, bringing innovation in areas such as infrastructure, city management, autonomous vehicles, and consumer mobile internet connectivity. This explosive growth of digital data will confront investigators with highly demanding challenges.

This article covers HancomGMD’s data extraction solution, MD-NEXT. It focuses on how investigators can practically use MD-NEXT for both logical and physical data extraction.

The step-by-step instructions below walk through the mobile data extraction sequence. It is a simple but useful tutorial for investigators who are looking for a tool that addresses all their mobile forensic investigation needs.

I. Data Acquisition Sequence Method – Physical Extraction

Tested device – Samsung Galaxy S7 (SM-G935F) 

Step 1: Connecting the Device and Finding the Model

MD-NEXT provides a user-friendly and intuitive interface to help you easily find and select the device model. After the device has been connected, click the cable icon at the bottom right of the screen and MD-NEXT will automatically detect the model.

MD-NEXT supports more than 15,000 smartphone devices from around the world, including over 500 models from Chinese manufacturers (Huawei, Xiaomi, Oppo, Vivo, etc.). It also supports the extraction of data from IoT devices, AI speakers, smart TVs, drones and vehicle systems.

Advanced Physical Extraction Features: 

  • Bootloader, Fastboot, MTK, QEDL, Custom Image, rooted Android, iOS physical, ADB Pro, DL, JTAG, Chip-off, SD card, USIM, removable media 
  • JTAG pin map viewer and connection scanning with AP 
  • Drone SD card extraction: DJI Phantom, Mavic series / Parrot / Pixhawk 
  • AI speaker chip-off extraction: Amazon Echo, Kakao Mini, Naver Clova 

Step 2: Extraction Method

In the screenshot below you can see the available extraction methods for the selected model. Both logical and physical extractions are available. Selecting ‘BootLoader’ lets you extract all the mobile data, including the filesystem, user data, cache and so on. Then select the build number which matches your phone. A set of simple instructions will lead you through downloading the customized firmware.

Example: Build List for Samsung Galaxy Note 4 (SM-N910S)

MD-NEXT comes with an intuitive graphical user guide for each extraction method.

Samsung Galaxy S7 (SM-G935F) in Download Mode

Step 3: Selecting the Data Partition

When the device enters “Download Mode”, MD-NEXT recognizes and shows the file system to be extracted, and the user can easily expand each partition. You can then schedule the extracted data to be analyzed by MD-RED.

MD-NEXT displays the specific extraction process with a useful menu which shows an extraction progress bar, as well as time and hex data by partition.

Step 4: Exporting the Acquisition Report

After the extraction sequence is completed, the user can generate an ‘Acquisition Report’. This contains detailed information about each extracted partition: file path, file size, size of extracted data, and so on. The integrity of all extracted data is guaranteed by its hash value.

Data preview and save function: 

  • Preview extraction data
  • Hex viewer, data viewer 
  • An image dump can be saved in ‘MDF’ or standard binary file format 
  • Pre-defined extraction file name 
  • Sound alarm for extraction status change 

Specific reporting function for integrity: 

  • Extraction information: hash value, time, method and file name 
  • Supports report formats such as PDF, Excel and HTML 
  • Supports ‘Extracted File List’ generation with a hash value of each file 
  • Supports ‘Witness Document’ generation 
  • Regeneration of ‘Extraction Reports’ 
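
The hash values recorded in the Acquisition Report and the ‘Extracted File List’ are what allow an extraction to be verified later. As a minimal sketch of that verification step (the dump file name is a placeholder, and MD-NEXT records its own hashes; this only shows how an examiner might independently recompute one):

```python
import hashlib

# Recompute the hash of an extracted image dump in 1 MiB chunks so it can be
# compared with the value recorded in the Acquisition Report.
sha256 = hashlib.sha256()
with open("extraction.mdf", "rb") as dump:          # placeholder file name
    for chunk in iter(lambda: dump.read(1024 * 1024), b""):
        sha256.update(chunk)
print("SHA-256:", sha256.hexdigest())
```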

II. Data Acquisition Sequence Method – Logical ‘Android Live Extraction’ 

Tested device – Samsung Galaxy Note 9 (SM-N960N)

Step 1: Device Connection and Logical Live Extraction

Android Live Extraction is a method of extracting the key active files from an Android Phone. After the device is connected, set up the phone as shown in the guide. MD-NEXT will automatically recognize the device.

Step 2: Data Extraction

You can then select the range of data for extraction. MD-NEXT will then start to acquire data from the device. You can check the extraction time and the status for each app on the ‘Extracting’ screen. 

Selective extraction for privacy protection: 

  • Supports selection of partition, file, category and app 
  • Supports selection of file system physical extraction 
  • Supports selection of all logical extraction methods 

Step 3: Exporting the Acquisition Report

The acquisition report contains useful information such as hash value, time, extraction method, and file name. The report can be formatted as PDF, Excel, or HTML.

If you want to learn more about MD-NEXT, please visit our website or send an email to sales@hancomgmd.com 

Click to download the MD-NEXT Product Brochure


How To Use Connection Graphs By Belkasoft For Complex Cases With Multiple Individuals Involved

A proper connection graph is a must if you need to investigate a complex case with numerous individuals using different communication media. In law enforcement, it may be a drug-related case with several dealers and a network of buyers, or a ring of sexual predators. The corporate sector might need graphs to investigate a circle of white-collar criminals stealing their company’s money, or groups responsible for data leaks, breaches, even hacking incidents. 

As a rule, cases of this kind require visualization of profiles together with the connections between them to grasp, analyze, and demonstrate the essence of a case. That is the most efficient way of detecting and understanding links, patterns, and suspicious episodes.

This article is intended to show how one can use the relevant capabilities of Belkasoft Evidence Center (BEC). As such, it will combine a brief ‘how-to’ overview of BEC’s Connection Graph with a scenario based on a group of so-called card fraudsters. The case itself is, in simplified terms, based on a real-life scenario.

What is BEC’s Connection Graph and why do you need it?

In BEC, the Connection Graph is a special window which displays communications between the individuals involved in a particular case.

Connection Graph: Overview

The window depicts people as simple dots (or alternatively as avatars, if available). These dots are in turn interconnected with lines signifying communications between them. The possible channels include phone calls, SMS, chat apps, file transfers, voicemail, emails, etc.

To whom could it be helpful? There are many situations in which this can be useful. 

  • First, cases that involve a single perpetrator and numerous victims (in this case the graph may resemble a star). 
  • Second, cases involving groups of suspects with a single victim (it is likely that the picture will look like a circle). 
  • Third, the most complex scenario with multiple perpetrators and numerous victims (in this case constellations may differ greatly from each other). 

What unites all these cases is that you can benefit from seeing a connection graph for all of them. 
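
To make the idea concrete, here is a minimal sketch (not Belkasoft code) of how such a communication graph can be modelled with the open source networkx library; the names and message counts are invented for illustration:

```python
import networkx as nx

# A single perpetrator contacting many victims produces the 'star' shape
# described above; edge weights stand in for the number of communications.
observed_contacts = [
    ("suspect", "victim-01", 12), ("suspect", "victim-02", 7),
    ("suspect", "victim-03", 3),  ("suspect", "victim-04", 9),
]
G = nx.Graph()
for caller, callee, n_messages in observed_contacts:
    G.add_edge(caller, callee, weight=n_messages)

# Sorting by degree immediately exposes the hub of the star.
print(sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True))
```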

Real-life case: Fraudsters

The case on which this guide is based involved a group of credit card fraudsters. As you know, there are many types of credit card crime: hacked online shopping sites and payment systems, phishing, remote access agents and trojans, skimming, etc. 

The case presented here was not very sophisticated. It was a series of phishing activities with perpetrators pretending to be bank clerks and online shopping account managers. Using fraudulent means, they would contact a victim to lure them into sharing sensitive card data. As for the communication media, the group mainly used PCs with VoIP solutions installed on them. Cell phones were used to a lesser extent. 

Our customer’s initial goal was to take possession of the computer and several other devices that had been used. These devices would then be forensically analyzed. Working with investigators and examiners, the customer set the following goals:

  • Identification of as many victims as possible (the customer knew that some of them had not reported the issue to the police);
  • Identification of the suspect’s accomplices (normally each victim was contacted on several occasions by different people for greater credibility); 
  • Identification of patterns used by the perpetrators. 

How to use BEC Connection Graph: Theory and practice

This section will demonstrate, step by step, how to use BEC’s Connection Graph and, where applicable, will stress what the customer could and did achieve by using it. 

  1. Start with analyzing a device and extracting artifacts with BEC.
  2. Click on ‘View’, then select ‘Connection Graph’. 

To activate the ‘Connection Graph’ window, select the corresponding item in the ‘View’ main menu

3. After that, your graph will be shown almost instantly.

‘Connection Graph’ visualizing communications via entities

As you can see, the picture is complex. The picture our customer saw was also quite sophisticated, since several data sources, with numerous accounts and communication archives, were added to the case.

At the same time, if you take a closer look, the picture will be coherent and well-structured. The list of all persons associated with a case is shown in the left column. In the center, the graph itself is presented. Finally, if you click on any person at the left or any avatar in the middle, detailed information about the selected user will be displayed in the right pane. You can also select a connection; in this case, both persons, along with their communications, will be presented on the right.

  4. The following control elements are available via the icons above the ‘Connection Graph’. They were all used by the customer to examine different facets of the case. 

Let’s take a closer look at them.

5. The first couple of icons are the ‘Entity Graph’ and ‘Contact Graph’. 

‘Entity Graph’ and ‘Contact Graph’ buttons

These alternatives are very useful. BEC’s ‘Entity Graph’ is based on a single phone or device used by an individual. That is to say, each given device is represented by its corresponding graph profile. At the same time, a device may simultaneously offer a lot of communication channels, such as Skype, Viber, e-mails, SMS, etc. Every channel has its own ‘contacts’, i.e. numbers and other user IDs. Sometimes, you may need to narrow your search area to such a ‘contact’; for example, to phone calls exclusively. 

That is when you need to click on ‘Contact Graph’. In this case, the landscape of your case will be divided into and sorted by individual contacts, as in the picture below. 

‘Connection Graph’ visualizing communications via contacts (narrowed view): numerous contacts and app profiles used by Ben Mills

Comparison between ‘Entities’ and ‘Contacts’ was extremely useful for the customer since in this particular case, the criminals used an entire range of contacts and assumed identities to extort card data.

6. The second pair of buttons makes it possible to view a case via avatars (the default) or via dots. 

‘Icons’ and ‘Avatars’ buttons

If you choose to do so, you will see a picture like the screenshot below. The bigger the dot, the more active that person has been. As our customer pointed out, this function was of great importance to him, as it identified the key players in the fraudsters’ group.

‘Dots’ representing entities

7. The next button is also of particular interest to an investigator. It enables you to ‘Detect Communities’. BEC can examine case landscapes and ‘unite’ people who have communicated with each other in groups. In other words, it finds groups of tightly connected people who are likely to know each other very well. 

As you can easily guess, this function makes it possible to identify, on the one hand, circles of victims grouped around perpetrators and, on the other, links between the perpetrators themselves.

Communities share the same color in BEC’s Connection Graph. Pay attention to the highlighted red dot: it represents one of the key actors within the community
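
For readers curious about what community detection involves under the hood, here is a minimal sketch using networkx’s modularity-based detection; it illustrates the general technique rather than BEC’s own algorithm (which is not documented here), and the node names are invented:

```python
import networkx as nx
from networkx.algorithms import community

# Two tight circles of contacts joined by a single edge between the ringleaders.
G = nx.Graph([
    ("fraudster-A", "victim-1"), ("fraudster-A", "victim-2"), ("victim-1", "victim-2"),
    ("fraudster-B", "victim-3"), ("fraudster-B", "victim-4"), ("victim-3", "victim-4"),
    ("fraudster-A", "fraudster-B"),   # the link tying the two communities together
])

# greedy_modularity_communities returns groups of tightly connected nodes,
# roughly the idea behind the 'Detect Communities' button.
for group in community.greedy_modularity_communities(G):
    print(sorted(group))
```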

8. The following control buttons include ‘Rebuild Layout’, which rearranges the angle of your view, ‘Show/Hide Weights’, which shows or hides the intensity of communications, and ‘Zoom In’/‘Zoom Out’ (you can also use the mouse wheel to zoom). When it comes to practical investigations, these methods of customization are useful for reporting and presenting.

Graph Layout action buttons

9. The next button is ‘Add Filters’: when working with the Connection Graph, you can apply filters including profiles, data sources, and contacts.

‘Add Filter’

Via BEC filters you can filter data by profiles, data sources, and contacts

All these filters were actively used by the customer during the investigation. Filtering was indispensable because carding, by its nature, involves a great deal of communication with many people. Each portion had to be examined, proven, and presented individually. At the same time, these portions also had to be examined in their entirety, as a sequence of repeated actors, patterns, and techniques.

10. The last three buttons of the Belkasoft Evidence Center Connection Graph are ‘Filter Selected Nodes’, ‘Filter Selected Nodes and Neighbors’, and ‘Hide Selected Nodes’. These options made it possible to exclude all the irrelevant individuals, i.e. those involved in the case neither as fraudsters nor as victims. Without them, the picture looked much more coherent and clear.

Selected Nodes filters

If you click on ‘Filter Selected Nodes’ after selecting several profiles of interest, you will ‘isolate’ them and discard their surroundings.

‘Filter Selected Nodes’, i.e. all the dots except for those you select are deleted

11. Clicking on ‘Filter Selected Nodes and Neighbors’ will ‘isolate’ and highlight not only a node of your choice, but also its communicative surroundings.

Node with Neighbors

12. Finally, you can ‘Hide Selected Nodes’.

In our case study, these three functions made it possible to get a clear picture, including the most likely ringleader, his associates, and various ‘layers’ and ‘circles’ of victims.  

Conclusion

The Connection Graph in Belkasoft Evidence Center is a handy tool for a digital investigator. You can easily visualize connections between participants in your case, customize the visualization, take different angles and, last but not least, generate persuasive pictures as well as deep and extensive reports (to generate a report, just click ‘Edit’, then ‘Generate Report’). Whether you are dealing with a ring of sexual predators with multiple victims, a secretive illegal organization or a criminal association, this tool will be extremely helpful, as it proved to be while investigating the credit card fraud case described here.

You can: 

  • Isolate suspicious communities 
  • Identify potential ringleaders 
  • Track down their potential associates
  • Exclude obviously uninvolved persons from the picture 
  • Investigate each and every episode of a case associated with every victim 

As our customer told us, ‘initially the case looked like a real mess. The crime itself was not [particularly] sophisticated in technical terms. A group of people used several fake accounts and communication media to extort credit card data from gullible people. As simple as that. The true problem was the structure of the crime. More than that, as we found out, a lot of victims had not reported these crimes. So, it was also our objective to identify them. Regarding all these issues, such as identities, contacts, perpetrators and their leaders, patterns and countless interconnections, the Connection Graph by BEC was invaluable.’ 

To sum up, in a heavily digitalized world any digital forensic expert should have a tool for visualizing high-level communications, especially for cases with multiple devices involved. You can try this tool anytime: a 30-day free trial is available at https://belkasoft.com/get 

Walkthrough: VFC 5.0

What is Virtual Forensic Computing?

Virtual computing transforms investigation of the digital crime scene.

Having access to the ‘digital scene of crime’ can offer huge benefits to an investigator. Whether investigating fraud, murder, child abuse or something else, seeing the computer through the eyes of the suspect can be invaluable. Building a virtual machine (VM) of the suspect’s computer is one easy way to get forensically sound access to the user’s environment. 

A VM allows an investigator to:

  • See the desktop and operating environment just as the user saw it
  • Navigate financial records within the native software (Sage, QuickBooks, Great Plains etc.)
  • Access emails and internet search histories, demonstrate interaction with installed software
  • Determine accessibility of illegal files

“I originally ordered VFC for **PD and have been using it since.… VFC has proven to be invaluable. I first searched for forensic virtualization software in 2008 after assisting with a financial crimes investigation.  

A computer with business records had been seized.  Data files with an obscure file extension were located during forensic examination.  I did not have a compatible viewer and couldn’t verify manually-parsed data. I did locate proprietary accounting software within the suspect image.  

In an effort to view the correctly-formatted data, I contacted the accounting software company and requested a copy of the software.  They generously sent a copy of the accounting program for use in this investigation. I installed the software on my forensic workstation and, after some tweaking, was able to view the formatted data.  While I was eventually able to review the data, the manual process of extracting the data, acquiring the correct version of the proprietary software, and finally hoping it would all work on my forensic workstation was cumbersome, at best.  

I… used VFC to show attorneys and investigators digital evidence as the user would have viewed it. ”  

– Digital Forensic Examiner, Charles County Sheriff’s Office

VFC simplifies the virtualisation process

As virtualisation platforms have improved, building a replica of a suspect’s system has become much easier. What once could take a few days now takes just a few hours if you are lucky. Most of this time is spent fixing driver errors (e.g. human interface device drivers for the mouse and keyboard) and overcoming the infamous blue screen of death (BSOD).

However, with the right tools, investigators can now do all this reliably in just a couple of minutes. 

‘Virtual Forensic Computing’ or ‘VFC’ allows the user to create a VM from a forensic image (or a write-blocked physical hard disk drive), automatically fixing common problems and  typically booting the VM in under a minute. VFC makes the virtualisation process smooth and hassle free. 
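
Because the VM is built from the image (or a write-blocked drive) rather than the original media, the evidence itself is never altered. A minimal sketch of the kind of before-and-after check an examiner might script around that workflow is shown below; the file name is a placeholder and the check is independent of VFC itself:

```python
import hashlib

def file_sha256(path: str, block_size: int = 1 << 20) -> str:
    """Hash a (potentially very large) image file in fixed-size chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

before = file_sha256("suspect_disk.E01")   # placeholder image name
# ... build and run the VM from the image ...
after = file_sha256("suspect_disk.E01")
assert before == after, "source image changed during virtualisation!"
```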

Among VFC’s valued customers, to “VFC a forensic image” has become synonymous with virtualisation since it was first released by MD5 in 2007.

“VFC has become an essential tool in our forensic investigator’s toolkit. It provides investigators an insight into the suspect’s perspective by actually seeing the user’s desktop, settings and user environment. Screen captures from the suspect’s ‎environment add significant weight to the forensic report when describing how the suspect utilized the computer to facilitate the crime. 

VFC is truly a tool that I rely upon and use in all my computer investigations!”

– D/Sgt Vern Crowley, Ontario Provincial Police eCrime Section

A picture speaks a thousand words

Using a VM to replicate the user’s computer, the desktop environment can easily be captured for presentation to a judge or a jury. This helps juries understand the more technical aspects of a report, or enables powerful emotive images to be put before the judging panel. Using VFC, investigators can:

  • Take screenshots and embed them in their reports
  • Record video screen captures of an examination for playback in the courtroom
  • Create portable versions of the VM to demonstrate live in court 

VFC is now used on every continent, in almost every aspect of digital forensic investigations, by law enforcement, military investigation teams, and forensic and cyber investigation teams in both the private and public sectors.

“VFC is a very useful tool for us as the screenshots we can show a jury far outweigh simply writing about a topic.”

– Graham Green – Suffolk Police

“The product is getting better by the day and is one of our main tools – a picture paints a thousand words as they say – very powerful in court…”

– Mark Boast, Forensic Analyst, Suffolk Constabulary, UK

“I imaged a drive which had some positive keywords … and thought I would have a look at it using VFC. The results were extremely impressive. It showed the suspect using Shareaza to download illegal content and also showed the actual folders on his desktop. Makes proving this case really easy.”

– Computer Forensic Investigator, Durham Constabulary, UK

VFC 5.0 launched July 2019

VFC 5.0 integrates the VFC workflow directly into existing forensic analysis tools, making the creation of a VM even easier with its integration components for common forensic analysis tools:

  • EnCase EnScripts 
  • XWF X-Tension files

The integration components are provided with the standard VFC package and can be set up and used within minutes. Similarly, VFC now supports a command line interface to support automated workflows. 

These exciting new features now allow the analyst to launch a VM of their target image directly from within their standard forensic examination suite.

VFC Mount helps reduce common errors

VFC 5.0 now comes with its own mount utility, VFC Mount, to simplify the virtualisation process and remove reliance upon third party tools. VFC Mount currently supports .E01, .EX01, AFF4, .VMDK, .BIN, .IMG, .RAW, and .DD images.

VFC Mount helps reduce instances of common Windows errors when dealing with mounted images such as the very common “The physical disk is already in use” error in VMware.

VFC 5.0 contains numerous other tweaks and upgrades to make the VM-generation more stable and effective. Early feedback has been very positive:

“I have downloaded version 5 and have used it on a couple of occasions recently, I find it more successful in running the VM than version 4, I get less error messages than before especially the one relating to the drive being already in use. So far very happy with the upgrade.”
– Kevin Mount, Queensland Police, Australia

Password bypass (PWB) gives quick access to suspect accounts

VFC also gives the ability to clearly demonstrate that something doesn’t work – for instance, if a suspect insists the password they have provided is correct, VFC provides a quick way to prove them wrong without affecting the original data.

“VFC allows me to try passwords first, show they don’t work, and then bypass …”

– Special Agent, DHS ICE, US

Historically, VFC PWB only worked on local Windows user accounts; however, VFC 5.0 now adds support for Windows 8/10 ‘live’ accounts with the Generic Password Reset (GPR) feature.

New from September 2019 – Windows Live ID Exploit (including PIN accounts)

Generic Password Reset (GPR) tool

New to VFC 5.0, the GPR tool can be used to help make powerful system-level changes. With GPR, the investigator can:

  • List User Accounts (including password status)
  • Bypass security on Windows online (Live ID)
  • Reset account passwords to known values (including PIN accounts)
  • Open a SYSTEM level command prompt (at the logon screen)
  • Easily reboot the guest VM

Early feedback from a select group of active police investigators who were given pre-release access to the Live ID feature has been very positive.

“[VFC5] Was a dream to use. Easy to follow the prompts in the GPR. I converted the live account …, used the GPR password reset, and voila, I got in.

I will be adding some very convincing evidence to my investigation by being able to show the Judge/Jury what the User was seeing instead of just my forensic analysis.”

– Cst. Chad Seidel, Saskatoon Police Service, Canada

Continual investment ensures continued development

With additional support for Linux and other operating systems, VFC has continued to deliver new features since it was introduced. The newest features (for ease of reference) include:

  • Windows ‘Live ID’ (online) password reset feature – gives the user a simple method to get around even the latest in Windows user security
  • VFC Mount – simplifies the user experience and minimizes common VMware problems 
  • Generic Password Reset – gives users a simple and fast way to access a specific account or make system-level changes. It is portable, powerful and user friendly.
  • Command line functionality and included components – seamlessly integrate with EnCase Forensic and X-Ways Forensics, allowing VFC to be used alongside existing, trusted forensic software.
  • 64-bit host system support – brings VFC fully up to date, giving it a rightful place in today’s forensic laboratory

Other significant features include:

  • Standalone Clone VFC VM gives the user the option to export a copy of their VM that can be reviewed by an investigator away from the forensic analyst’s workstation, without the need for a VFC dongle (license). 

“I really like the standalone VM option that VFC has now.  Giving a VM to a case agent to use on a review station has always been an issue.  The standalone VM solves that problem.”

– Special Agent, DHS ICE, US

  • Modify Hardware allows VM hardware to be amended, including adding extra drives or network support 

“The addition to be able to stitch in a second drive is … brilliant … as we are … able to fully replicate the users environment rather than just their Windows installation drive.”

– Paul Ripley – Cleveland Police

  • Password Bypass (PWB) feature for Windows user accounts –  VFC 5.0 has increased the number of discrete PWB routines to over 2000, up considerably from 500 with VFC 4.0.
  • Patch VM / Restore Points  feature  – allows the investigator to patch problematic virtual machines or repair a VM after using the Windows system restore feature to ‘rewind’ a VM to an earlier historic state.
  • The VFC Log File – keeps a forensic log of all steps taken by the software (effectively contemporaneous notes) and makes VFC a powerful weapon in the forensic investigator’s arsenal.
  • Updates and upgrades have enhanced the product more, including further OS support, new password bypass routines and slicker processes. 

Development continues apace at MD5 Ltd; our constant aim is to keep delivering a product that solves even more of our customers’ needs:

“Absolutely loving this version, there hasn’t been a password it hasn’t cracked!

Had absolutely no problems putting accounts offline and changing the passwords either through the password bypass… and it worked with password and PIN protected accounts, even 1 which defaulted to a picture based login it changed the local password without a hitch.

Can’t wait for this one to launch as I suspect it will prove uncommonly useful for us.”

– Peter Bayly, Digital Forensic Investigator, Northumbria Police

Download V5 Windows LiveID full article

To purchase or find out more about VFC visit the website www.vfc.uk.com or email: sales@md5.uk.com

How To Acquire Video Data With MD-VIDEO From HancomGMD

Due to the rapidly growing need for securing safe environments around the world, digital surveillance systems have become ubiquitous. A significant number of new surveillance systems are installed each year, and the importance of acquiring actionable data from these systems is growing across the globe. 

According to a recent statistic, the amount of surveillance video data being recovered annually jumped by 66% between 2017 and 2018. With the volume of video data increasing at such dramatic rates, it’s clear that law enforcement agencies require a solution for acquiring and handling this type of data in a forensically sound way.

Due to the sheer scale of the video data involved in investigations, support for a variety of media formats is a top priority for any video forensic solution. MD-VIDEO supports video data acquired from a number of different devices: IP CCTV systems, dashboard cameras, smartphones, desktops, cameras, camcorders, drones, and wearable devices. Moreover, MD-VIDEO also supports various DVR filesystems, such as those used by HikVision, Dahua, Zhiling, Samsung, Bosch, Honeywell, Sony, and Panasonic.

We are very excited to introduce our video recovery solution, MD-VIDEO. Please consult the following demonstration on video acquisition and recovery. MD-VIDEO is easy to use and is sure to dramatically improve your digital video investigations.

Transcript

1. Choose Evidence 

Image Selection: Click on the opened image and select the ‘Next’ button at the bottom right of the program window. 

2. Select Recovery Option

In this demo, we first identify the active files, and then attempt recovery if necessary. 

3. Enter Case Information

Enter case information in the right-hand pane that appears after clicking the ‘Case’ node. 

4. Active File Reviews 

Click ‘Case’ and then ‘Analysis Results’ to review the active video files that were analyzed. Check the signature and other data of each file using the hex viewer. 

5. Recovery

The recovery operates based on your selected settings. Here, we’re going to recover from the unused area of the file system. Select H.264 from the codec list, and check the ‘Details’ option in the right area. 

Click on the ‘Frame’ node under ‘Analysis Results’. 
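
To give a sense of why the codec selection matters when carving from the unused area, here is a minimal, generic sketch (unrelated to MD-VIDEO’s own recovery engine) that scans a blob of unallocated data for H.264 NAL unit start codes; the input file name is a placeholder:

```python
import re

# H.264 streams are made of NAL units that commonly begin with the start code
# 00 00 00 01; the low 5 bits of the following byte give the NAL unit type.
# Types 7 (SPS), 8 (PPS) and 5 (IDR slice) are useful anchors for rebuilding video.
with open("unallocated.bin", "rb") as f:      # placeholder dump of the unused area
    data = f.read()

for match in re.finditer(b"\x00\x00\x00\x01", data):
    if match.end() >= len(data):
        break
    nal_type = data[match.end()] & 0x1F
    if nal_type in (5, 7, 8):
        print(f"offset {match.start():#010x}: NAL unit type {nal_type}")
```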

6. Create a Video

Select the desired frame and recovery result, and click the ‘Create Video’ button. 

7. Export

Select the desired files and frames in the recovery result, and click the ‘Export’ button. 

Identify the exported file and report. 

8. Save Case

9. Close Case

Lastly, we would like to introduce MD-VIDEO Analysis. MD-VIDEO Analysis allows investigators to quickly identify and sort important video frames from massive amounts of video data. Humans, as well as vehicles such as cars, bicycles, motorcycles and trucks, will be automatically identified. 

Additional features in MD-VIDEO Analysis are currently under development, and will be included in updates very soon. 

Download MD-VIDEO Product Brochure

Visit our website and find out more about MD-Series www.hancomgmd.com

SANS DFIR Summit 2019 – Recap

by Christa Miller, Forensic Focus

Held in Austin, Texas each summer, the SANS Digital Forensics and Incident Response (DFIR) Summit is known for offering in-depth but accessible digital forensic research — and for its laid-back, fun atmosphere.

This year’s summit, which ran from Thursday, July 25 through Friday, July 26, delivered a balanced menu of tool-oriented “how-to” style talks, artifact talks, and some specific incident response insights. Most of the people in the room on Day 1 said they were first-time attendees, though we met up with a number of returning faces, too.

One captivating new feature of this year’s summit: a graphic recorder, Ashton Rodenhiser of Canada-based Mind’s Eye Creative, who took “sketch notes” to visually represent the talks. SANS Senior Instructor and Summit advisory board member Phil Hagen said he engaged Ashton because conference organizers wanted a visual focal point around which people would come together and discuss the talks after the fact — which is exactly what they did!

In addition, David Elcock, executive director of the International Consortium of Minority Cybersecurity Professionals (ICMCP), was on hand to talk about the organization’s work making DFIR and information security more inclusive for women, people of color, people with disabilities, and members of the LGBTQ+ community.

Elcock described the ICMCP’s three core processes: scholarships, networking and mentoring, and career pathing from high school onwards. Part of the organization’s work is to source scholarships, which are underwritten by sponsors such as airlines and banks; and to work with organizations like SANS, which partners with the ICMCP to offer a Diversity Cyber Academy.

Both Rodenhiser’s and Elcock’s involvement reinforced the overall conference theme of community-building, reflected in opening remarks offered by Rob Lee and Phil Hagen. 

Tool Talks

The SANS DFIR Summit’s tool talks aren’t a typical marketing feature set. While they do, like commercial tool webinars or lectures, focus on solving a specific problem, they’re grounded in hours of painstaking research and development. They’re like a blueprint for other practitioners to use as a foundation for their own research.

At this year’s Summit, automation was the key tool theme — reducing workload by allowing computerized processes to run through vast amounts of data to “find evil.” The talks kicked off with a keynote delivered by Troy Larson, Principal Forensic Investigator at Microsoft, and Eric Zimmerman, Senior Director at Kroll Cybersecurity and a SANS Certified Instructor and author.

Their talk, “Troying to Make Forensic Processing EZer,” focused on tool writing and how to scale up from a tool implementation standpoint, using Zimmerman’s Kroll Artifact Parser and Extractor (KAPE) and Larson’s KAPE-reliant EZ triage method as an example.

Zimmerman’s part of the talk relied on his experience developing KAPE, which was released in February as a way to automate key steps of the forensic process. It’s designed as a customizable, extensible toolchain: by Zimmerman’s definition, a set of targets and modules (processes to run) grouped to reduce data and processing to something a human analyst can work with in a thorough, repeatable, scalable, auditable way.

Larson then talked about how KAPE is useful for automating at scale in compromise investigations. The triage method he developed captures a complete disk image as a snapshot at a point in time. In turn, the snapshot can be scanned and its results delivered as structured data for large scale threat telemetry, detection and hunting.

Aimed at people who might actually use KAPE in a SOC environment, Larson’s talk included ideas for people on how to deploy in their own environments, along with the “rich story bed” of triage analytics and ways to organize hundreds of security events, including a graphic analysis of everything a file did or an account touched.

Another “tool talk with broader implications” was delivered later in the day by BlackBag Technologies’ Dr. Joe T. Sylve, Director of Research & Development. Offering a guide to the research & development process, Sylve began by highlighting the importance of research that’s informed by practitioners, because when tool vendors and practitioners aren’t aware of research, it’s useless to the broader community.

At the same time, though, misconceptions about research persist: it’s hard, you lack the right academic qualifications, or skill sets and tools are too disparate. In fact, Sylve told the room, research isn’t any harder than conventional digital forensics, academic qualifications are “no big deal,” and there’s plenty of overlap between skill sets and tools.

With his own research into APFS snapshots as an example, Sylve’s key takeaways included:

  • The best research candidates are those you’re personally invested in, either because of a case, or just because it piques your curiosity.
  • When it comes to applied research, best is to limit its scope by setting attainable goals. Even a small amount of new knowledge is useful; there’s no need to “solve forensics.”
  • If something doesn’t strike you as interesting, that doesn’t necessarily mean research isn’t for you; it may just mean your skills are better suited to a different research area.
  • Dead ends will happen! They may not answer your question, but they do help you narrow it down. This is especially true of large, complicated tasks that would otherwise make it easy to get lost in the weeds.
  • Expand your search as dead ends occur or as one method or another doesn’t lead to answers; be flexible as you learn new things.
  • Sometimes research raises more questions than answers. Calling this “good job security,” Sylve discussed the importance of identifying things you didn’t cover — and then telling others what’s coming. This accountability helps you to hit research milestones, or lets others build on your research.
  • Sylve stressed that just knowing something isn’t useful; it’s important to share with others if for no other reason than validation. Even so, publishing a 20-page paper isn’t necessary; you can post to Twitter, a blog (your own or as a guest), DFIR Review, conference presentations, or even test on a podcast like Forensic Lunch to get reactions before presenting more formally. Each path has barriers to access and reach, so find the one that works best for you.

Brian Olson, Senior Manager of Technical Management at Verizon Media, described how even though it’s not a “security tool” per se, the open source Ansible platform came in useful on a live response because of its adaptability. This was especially important during a live incident response to the systems of a newly acquired company, which were found to have a number of vulnerabilities. 

Designed to automate repetitive IT tasks like configuration management, application deployment, and intra-service orchestration, Ansible’s self-documenting, customizable, scalable features enabled Olson’s team to build a repeatable triage playbook, obtaining volatile data, downloading and labeling files, and uniformly collecting artifacts, among other tasks.

They could then perform stack analysis of processes and webdir files and identify interesting network connections. Over two phases, the team was able to patch the hosts to “stop the bleeding” and remove known malware — and ultimately mature their incident response program.

Distributed evidence collection and analysis was the subject of a talk given by Nick Klein, Director of Klein & Co. and a SANS Certified Instructor, and Mike Cohen, a developer with Velocidex Innovations. They provided an overview of Velociraptor, which fills a need for deep visibility of endpoints — surgically examining endpoints not just for current activity, but also historical context in digital forensic investigations, threat hunting, and breach response.

Few tools, they said, offer scalable, network-wide deep forensic analysis, and Velociraptor, a single operating system-specific executable, can work on both client and server. Designed so that users don’t need to be experts, Velociraptor has no database, libraries, or external dependencies, and is highly customizable. 

Klein and Cohen moved from the artifacts Velociraptor could collect on a single system, to how this capability could help hunt for the same artifacts — for example, event logs, selected NT user hives, or specific forensic evidence such as the use of sysinternals tools or keys — across a network. From there, whatever you can hunt for, you can proactively monitor for, including on DNS (which many organizations don’t log for), each USB device plugged into a machine, or even Office macros.

Klein and Cohen stressed that Velociraptor is a work in progress and that they’re seeking feedback from people using it on real world cases. Learn more at www.velocidex.com!

The summit’s final tool talk was delivered by Elyse Rinne, Software Engineer, and Andy Wick, Senior Principal Architect, both a part of Verizon Media’s “Paranoids” information security team. Their talk focused on using the open source full packet capture system, Moloch, to “find badness.”

Moloch’s capabilities complement what you already have. By inserting it between the network and the internet, you can then store the data on machines with sufficiently large disk space to use later for hunting and incident review. In addition, Rinne and Wick talked about packet hunting, or searching for things within packets themselves.

The Paranoids’ future work will include data visualizations, protocols, cloud, and so on. Meanwhile, Moloch has a large and sustained user community, including an active Slack channel and in-person local meetups. In addition, molochON will be held October 1 in Sunnyvale, California. Learn more at Molo.ch.

Forensic Artifacts

The summit’s artifacts talks ranged widely across platforms: Windows, Mac, iOS, Android, and even email. These talks followed the theme of answering questions that may arise during casework.

Windows Artifacts

The first artifacts discussion went in depth on AmCache investigation. Blanche Lagny, a Digital Forensic Investigator with France’s Agence Nationale de la Sécurité des Systèmes d’Information (ANSSI, the National Agency for the Security of Information Systems), described how the AmCache — a Windows feature since Windows 7 — stores metadata about executed programs.

Although tools like AmCacheParser and RegRipper parse the AmCache, there’s a lack of documentation, and interpretation isn’t as easy as it might appear. Lagny’s published technical report offers this kind of reference.

Covering three different scenarios, Lagny described how AmCache behaves differently across Windows 8, Windows 10 Redstone 1, and Windows 10 Redstone 3, and how because the artifacts keep changing, it’s imperative to look at every file so you don’t miss important information. Overall, AmCache can be considered a “precious asset” in investigations because it stores data about executed binaries, drivers, executables, Office versions, operating system, etc. 
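
As a quick illustration of how such an analysis starts, the hive can be opened with the open source python-registry library. This is a minimal sketch, assuming an acquired copy of Amcache.hve; the value names shown (LowerCaseLongPath, FileId) are examples of fields commonly seen in Windows 10 hives rather than an exhaustive schema:

```python
from Registry import Registry  # python-registry package

amcache = Registry.Registry("Amcache.hve")  # acquired copy of the hive

# Windows 10 builds keep application entries under Root\InventoryApplicationFile;
# older builds used Root\File, so fall back if the newer key is absent.
try:
    apps = amcache.open("Root\\InventoryApplicationFile")
except Registry.RegistryKeyNotFoundException:
    apps = amcache.open("Root\\File")

for entry in apps.subkeys():
    values = {v.name(): v.value() for v in entry.values()}
    print(values.get("LowerCaseLongPath"), values.get("FileId"))
```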

Windows 10 compressed memory was the subject of a presentation by two FireEye Labs Advanced Reverse Engineering (FLARE) reverse engineers, Omar Sardar and Blaine Stancill. They described how modern operating systems compress memory to fit as much into RAM as possible, and why: the system can use multiple cores and perform simultaneous work, and the capability allows flexible kernel deployment.

However, the memory forensics tools Volatility and Rekall couldn’t read compressed pages. Sardar and Stancill described how they integrated their research into both tools, creating a new layer in Volatility and a new address space in Rekall. They rounded out their presentation with an example of how their plugin found critical pieces of information — a new mutex, handles, shell codes, MZ payload signs, and payload strings — about a piece of malware, an orphan file with its own DLL.

Sardar and Stancill hosted the Flare-on.com challenge at Black Hat, and have made their research available on GitHub — look for the Win10_volatility, win10_rekall, flare-vm, and commando-vm scripts. In addition, a video with added commentary from Andrea Fortuna is available.

Apple Artifacts

Bridging the gap between Windows and Mac was a presentation by Nicole Ibrahim, Senior Associate of Cyber Response at KPMG, who focused on MacOS DS_Stores — as she put it, “like shellbags, but for Macs.”

Ibrahim focused on how the existence of this artifact indicates that a given folder was accessed, as well as how access happened, because it requires Finder GUI interaction. The Finder uses .DS_Stores to restore a folder view, of course, but its relevance to an investigation is its indication of how a user interacted with a folder — created, expanded, opened in a new tab, etc.

Ibrahim also went through interesting correlations and caveats including:

  • Window Bounds (point to point of each window corner) – Bwsp: browser window settings if the user moves the Finder window around – treat it like a semi hash to correlate different folder accesses
  • Scroll positions: vertical or horizontal – lets Finder know at what place you were last viewing: what section in folder was user looking at? X and Y axes
  • Trash Put Backs: if you send a file to the trash, name and location (originals) of file sent to trash – so Finder knows where to put restored file; even if moved, original records will follow it
  • Lack of full paths or stored timestamps; at best they provide only a time range
  • Record data is volatile, so you have to carve for .DS_Store files, check local snapshots and Time Machine backups, and then correlate with other .DS_Store files on disk.

Ibrahim’s DSStoreParser is available at her GitHub repository.

The other Apple artifacts talk, delivered by BlackBag Technologies’ Senior Digital Forensics Researcher Dr. Vico Marziale, shed light on the macOS Spotlight Desktop Search Service. The desktop search on OS X, macOS, and iOS, Spotlight indexes file content and metadata. It’s turned on by default, but is largely undocumented by Apple. Marziale’s talk focused on the metadata store, which he said is in some ways reminiscent of the Windows registry.

Why it matters: the data in Spotlight including message contents, email contents, phone numbers, print activity, location, calendar items, etc. can help you pinpoint specific users doing activities — and use timestamps to reconstruct a timeline. Marziale then described what’s known about Spotlight’s complicated internal structure, including the volume level and the user level.

To get data from Spotlight, you can use CLIs including mdutil, mdimport, mdfind, and mdls. You can also use Spotlight_parser and the brand-new beta Illuminate, a free CLI research tool for parsing Spotlight items.  
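
For hands-on research on a live macOS system (as opposed to parsing an acquired metadata store with Spotlight_parser or Illuminate), those built-in CLIs can be driven from a script. A minimal sketch, assuming a macOS analysis machine with the folder of interest mounted; the path and search term are placeholders:

```python
import subprocess

# Ask the Spotlight index for items whose name contains "invoice" under a
# specific folder, then dump the indexed metadata attributes for each hit.
hits = subprocess.run(
    ["mdfind", "-onlyin", "/Volumes/evidence/Users/suspect", "-name", "invoice"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for path in hits[:10]:                       # limit output for readability
    meta = subprocess.run(["mdls", path], capture_output=True, text=True).stdout
    print(path, meta, sep="\n")
```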

Email Artifacts

Arman Gungor, CEO of Metaspike, spoke about the forensic investigation of emails altered on the server. Sharing a real-life scenario, Gungor described how easy it can be to assume that emails are unalterable on a server when in reality, web services and APIs make it easy to modify both message content and headers.

Gungor made the argument to collect metadata not just from messages, but also from the servers on which they’re stored. Server metadata isn’t typically acquired alongside messages, but can contain important clues as to whether a message was altered. When emails are stored in Gmail, it’s also wise to validate message metadata using DomainKeys Identified Mail (DKIM) signatures.

Likewise, it is wise to collect messages’ neighbors (if possible, the entire folder), which provide important context. For example, manipulated messages might show larger gaps between message unique identifiers (UIDs), or an internal date that departs from the expected chronology.
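
As a minimal sketch of the two checks described above, the dkimpy package can validate a stored message’s DKIM signature (it performs a DNS lookup for the signer’s public key), and imaplib can pull server-side UIDs and INTERNALDATE values to look for gaps or chronology breaks; the host, account and file name are placeholders:

```python
import imaplib
import dkim  # dkimpy package

# 1. Validate the DKIM signature on a preserved raw message (RFC 822 bytes).
with open("message.eml", "rb") as f:
    print("DKIM valid:", dkim.verify(f.read()))

# 2. Collect server-side UIDs and INTERNALDATE for a folder, read-only.
conn = imaplib.IMAP4_SSL("imap.example.com")            # placeholder host
conn.login("analyst@example.com", "app-password")       # placeholder credentials
conn.select("INBOX", readonly=True)
_, data = conn.uid("SEARCH", None, "ALL")
for uid in data[0].split():
    _, item = conn.uid("FETCH", uid, "(INTERNALDATE)")
    print(uid.decode(), item[0])
conn.logout()
```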

Mobile Artifacts

Vehicle forensics has been a topic of DFIR discussion for some time, but largely in relation to built-in systems like vehicle event data recorders (EDRs, or “black boxes”) and separate mobile device forensics helping to determine whether someone was driving while distracted.

However, both iOS CarPlay and Android Auto bring the two together in unprecedented ways, integrating messaging, navigation, contacts, calendars, and other data to augment travel. This integration was the topic of a talk by SANS instructors and authors Sarah Edwards, Forensic Specialist at Parsons, and Heather Mahalik, Senior Director of Digital Intelligence at Cellebrite.

“They See Us Rollin’; They Hatin’” described the convergence — and potential correlations — between mobile and vehicle platforms via research on a jailbroken iPhone X and a rooted Samsung.

  • On iOS, GUIDs can be used to correlate which car is doing what for each device; device connections require physical access to the device to correlate across those databases.
  • Messages in an Apple system can be dictated through Siri and, as a result, correlated through the KnowledgeC database (a query sketch follows this list). Other locations in iOS include InteractionsC.db and sms.db. For Android, Google Voice, MMSSMS.db, or other evidence from third-party apps and/or logs.db can help.
  • Bluetooth connections or Android Auto Voice directions don’t mean the driver was hands-free; conversely, it can be difficult to prove driver distraction, especially when a passenger could have sent text messages. 
  • Furthermore, a device doesn’t have to be plugged in to show it was in motion; app usage will reflect its own motion, whether it’s in a car, on a bike, or on another form of transit.
  • Devices can be connected to more than one vehicle, so it’s important to correlate events across vehicles.

Forthcoming research includes the overall Android Auto timeline.

Another mobile forensics presentation, “Tracking Traces of Deleted Applications,” discussed how, sometimes, what’s not on a device can tell you more about device usage than what is there. Christopher Vance, Curriculum Development Manager at Magnet Forensics, and Alexis Brignoni, a forensic examiner and researcher, talked about how to get data from deleted apps that can help investigators and examiners to triage which devices to focus on.

Mirroring the observation Mahalik made in her presentation regarding Android data “sprinkled everywhere,” Vance and Brignoni talked about the different files and databases that could remain in different places — including in the cloud — following apparent app deletion.

Purchase histories, uninstalled apps, data and net usage, and other locations can all contain pieces; because each contains a specific piece of data, correlating them all can be important. Some retain data for longer time periods than others, and the data retained for each deleted app varies. (This is where research methodology is critical: being able to test how and where apps store their data can vary widely from app to app, along with the data that remains available.)

So, while your chances of getting more data are stronger early on, having multiple places to look improves your chances even after some time has passed. They can be especially important when it comes to building timelines around app purchases, potential installs, usage/connection times, and deletion times. Tools like Brignoni’s Python parser, which can be found at his GitHub repository, can help.

Perspectives on Incident Response

One interesting talk that didn’t fit into the other categories was given by Terry Freestone, a Senior Cybersecurity Specialist with Gibson Energy. Freestone spoke about industrial control system (ICS) incident response. Following on from our article, “The Opportunity in the Crisis: ICS Malware Digital Forensics and Incident Response,” Freestone’s insights provided a great primer to anyone interested in pursuing DFIR and infosec careers in ICS.

Freestone began by bridging from other sectors into ICS based on their similarities, including a limited number of breach scenarios, the need to adhere to standards, and the notion that time is money. However, there are some critical differences, largely owing to the fact that ICS’ uptime and safety systems affect the real world — so that when things go wrong, they can REALLY go wrong.

Basic physical safety is a responder’s first priority when arriving at an ICS facility, and it imbues everything you might do. Freestone emphasized how careful communication with facility workers, including their guidance on the use of personal protective equipment and how and where to travel throughout a facility, is imperative.

In fact, it can help lay the foundation for effective incident response. Interviewing facility personnel to get their version of an incident can be crucial, as they know what their facility’s “normal” is, and their observations could in fact be tied to the digital aspects of an investigation.

The closing presentation on Day 1 was a team-based incident response war game designed and moderated by four Google engineers: Matt Linton, Chaos Specialist; Adam Nichols, Security Engineer; Francis Perron, Program Manager of Incident Response; and Heather Smith, Sr. Digital Forensics and Incident Response at Crowdstrike.

In contrast to last year’s exercise, this year’s incident took place almost entirely in the cloud. Smith told us it was designed this way because there isn’t currently a lot of training available for cloud analysis in the IR world — but responders have to get used to seeing it. Playing out a more difficult scenario, she said, can prepare examiners for where the industry is going and what they can proactively do.

During debriefings, players concluded that cloud is a different beast from what they’re used to. They found they had to prepare differently, with different expertise and tools — for example, seeking persistence mechanisms rather than “going through the front door.”

Each team of about 10 people had multiple 15-minute sessions to investigate different aspects of a cloud-oriented attack, with several rounds of questions to answer. Ground rules included using Google’s modified version of the fire/rescue community’s Incident Command System. 

Teams were further constrained by their own organizations’ maturity, based on a capability matrix including tools like binary whitelisting, antivirus, binary jobs extension blacklisting, host IDS/IPS, etc.

The upshot: preplanning for what to do with this much complexity, including what happens when there could be GDPR considerations, is imperative. Preparing for forensics in the cloud means potentially having API access and tools access, as well as the appropriate playbooks for whatever cloud environment your organization is using.

Special Events

Attendees had the chance to submit requests for Eric Zimmerman to write a brand-new tool by summit’s end. The MFT Explorer was the result of that tool challenge, with Zimmerman releasing the tool for community review on Day 2.  

Also on Day 2, Mari DeGrazia, Senior Director of Incident Response at Kroll, moderated live debates, splitting nine SANS instructors into three teams to take on hot topics in digital forensics, incident response, and even a little bit of pop culture. Network vs. host based evidence, Windows vs. Mac analysis, triage data vs. full disk images, multifactor authentication, whether Eric Zimmerman’s tools work, and the pronunciation of “GIF” were all hotly debated!

Following the debates, the annual Forensic 4:cast Awards were announced. If you aren’t on Twitter and/or you’ve been living under a rock, you can find the results here.

Have you encountered any of these tools or artifacts in your investigations, conducted your own research, or want to discuss any of these topics? Be sure to subscribe to our RSS feed to get links to our daily insights, sign up to receive our monthly newsletter, and join in with discussions on the forums. 

How To Create Compelling Image Authentication Reports With Amped Authenticate’s New Projects Feature

How many times have you said or heard: “I’ll believe it when I see it”? This expression reveals our eyes’ dramatic convincing power: when you see something, you tend to believe it’s true much more easily than when you hear or read about it. In the digital age, for most people, this convincing power has seamlessly extended to the pictures they see on their computer or smartphone. Unfortunately, we all know how easy it is to forge images nowadays, to the point that seeing is no longer believing.

Fake images can play a crucial role in so many aspects of our life: politics, information, health, insurance, reputation, social media identity, terrorism. Virtually all aspects of our existence are somehow related to images.

When image authenticity is important for your case, you need the knowledge and the tools to investigate your image’s lifecycle and to reveal possible inconsistencies. Amped Authenticate is the most complete, user-friendly, and documented image forensic suite available, providing more than 40 different tools to work your way from simple visual inspection down to byte-level analysis, metadata inspection, compression analysis, source device identification, and forgery localization. Just look at Amped Authenticate’s Filter panel and you’ll notice how many filters it contains. They are grouped into different categories: Overview, File Analysis, Global Analysis, Camera Identification, and Local Analysis.

Let’s go practical with a case: we are asked to perform image authentication on this digital image, which was submitted by the person shown in the picture to prove he was in a seaside town on a specific day in November 2015.

Once we load the evidence image, simply clicking on a filter name will display its result. We may also load a reference image and view the filter result computed on evidence and reference side by side. This is especially useful when you can obtain an original image from the same device model declared by your evidence image’s metadata. 

To begin with, the Visual Inspection of the image reveals a strange artifact near the subject’s eyebrows. On its own, however, that kind of artifact is probably not compelling enough to rule out the image’s authenticity.

Below you see the File Format filter result for our evidence image: the number of warning messages already suggests that we’re hardly dealing with a camera original image (that is, an image that has never been processed in any way after being captured).

As you may have guessed from the list of filters in the Filters panel, carrying out full image authentication takes some time and needs proper reporting to ensure repeatability without losing your reader in the technical details of what you’ve done. For this reason, Amped Authenticate has recently been empowered with the Projects functionality. This provides an intuitive way for the user to organize their analysis. An Amped Authenticate project is made of a list of bookmarks (defined below), possibly organized into folders. Whenever the user finds an interesting result, they can bookmark it using the Project panel.

But what’s a bookmark? When a bookmark is added, the currently selected filter (including its input and any post-processing parameters), together with the paths to the currently loaded evidence and reference images, is saved as an entry in the Project panel. By default, the bookmark is named after the active filter and the names of the currently loaded evidence and reference images. Folders can be added by right-clicking on the panel and selecting “Add folder”. For example, in the case mentioned before we may bookmark the File Format filter as shown below: notice that we set the bookmark “Status” to “Warning”, which means it will be highlighted in red in the report. 

We should also bookmark the Visual Inspection filter, since it raised some concerns. This time we set the status to “To Check”. Notice the full list of currently created bookmarks and their status is displayed in Authenticate’s Project panel. We decide to create a folder called “Preliminary overview” and move the two bookmarks we’ve just created to this folder, by simply dragging them.

After the preliminary overview, we already have a strong suspicion about image integrity: indeed, elements highlighted by the File Format filter are hardly compatible with a camera original image. Of course, that does not necessarily mean that image authenticity is also compromised! Simply resaving an image using Adobe Photoshop is enough to break integrity, but not enough to break authenticity. We need to investigate further, but first, let’s save the project to file, clicking on the classic “floppy disk” button or hitting CTRL+S on the keyboard. Side note: when we re-load the project, the MD5 of all input files is checked and an error is raised if any inconsistency is found.
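As an aside, that integrity check boils down to recording a hash for each input file and comparing it again on reload. Here is a minimal Python sketch of the general idea (purely illustrative, not Amped Authenticate’s actual implementation; the manifest and file names are hypothetical):

```python
import hashlib
import json
from pathlib import Path

def md5_of(path: Path) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def save_manifest(files, manifest="project_manifest.json"):
    """Record the MD5 of every input file when the project is saved."""
    hashes = {str(p): md5_of(Path(p)) for p in files}
    Path(manifest).write_text(json.dumps(hashes, indent=2))

def verify_manifest(manifest="project_manifest.json"):
    """On reload, return the input files whose MD5 no longer matches."""
    recorded = json.loads(Path(manifest).read_text())
    return [p for p, h in recorded.items() if md5_of(Path(p)) != h]

# Hypothetical usage:
# save_manifest(["evidence.jpg", "reference.jpg"])
# changed = verify_manifest()   # a non-empty list means something was altered
```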

Now that our work is saved and safe, let’s continue our analysis. Working with filters in the File Analysis category, our suspicions are confirmed. For example, the JPEG QT filter shows that the image’s main JPEG Quantization Table is indeed compatible with those used by Adobe Photoshop, while the Exif filter confirms traces of Adobe Photoshop in the Software tag and shows that the last Exif ModifyDate is much later than the CreateDate (we won’t show this picture for brevity’s sake). We bookmark both filters and put them in a folder called Integrity Analysis.

OK, we’ve had enough of metadata and related traces; it is now time to go down to the signal level. We run the JPEG Ghosts Plot, the DCT Plot and the Correlation Plot filters. The first two reveal possible traces of double JPEG compression, which manifest as comb-shaped histograms in the DCT Plot and as a local minimum at quality 68 in the JPEG Ghosts Plot (the current estimated JPEG image quality is instead 96%, as shown in Authenticate’s top bar).

The Correlation Plot, on the other hand, does not reveal any inconsistency. That is how we reported this phase of the analysis in the project (we collapsed the previous folders for better viewability). Notice that when you click on a bookmark, its description will be shown at the bottom for live viewing and editing.

By now, we are quite confident that the image was originally JPEG, it has been processed with Adobe Photoshop and re-saved to JPEG. The last question is: did any tampering occur in the process? To answer this question, we need filters in the Local Analysis category, which produce the so-called forgery localization maps. In this case, two filters (ELA and DCT Map) provide maps where the subject’s face stands out quite evidently compared to the rest, and two other filters (Blocking Artifacts and ADJPEG) provide very compelling maps.
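For readers curious about what a map like ELA is doing conceptually: the image is re-saved as JPEG at a known quality and compared with the original, so regions with a different compression history tend to stand out. Below is a minimal Python sketch of that general idea using Pillow (purely illustrative; Amped Authenticate’s filters are far more sophisticated, and the file name is hypothetical):

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 95, scale: int = 20) -> Image.Image:
    """Return an amplified difference map between an image and a re-saved
    JPEG copy of itself (a basic Error Level Analysis)."""
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a known quality, in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference, amplified so differences are visible.
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda value: min(255, value * scale))

# Hypothetical usage:
# error_level_analysis("evidence.jpg").save("evidence_ela.png")
```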

We should bookmark all these results; as shown below, we decided to set the status of two filters to “Warning” and the status of the other two to “To Check”, to reflect the different “degree of confidence” provided by the maps.

Now we are ready to draw some conclusions on the case. We can include them in our Project by simply creating a folder called “Conclusions” and writing our considerations in the description. Empty folders are a good way to add comments to your project that are not related to any specific filter.

This is how our final Project panel looks:

We are now ready to generate a report. We only need to click on Tools -> Generate Report, and choose the output format (which can be PDF, HTML or DOCX). Here, we’ll use some snapshots to show what we obtained when exporting to PDF. Let’s begin with the report’s header, which includes general information about the project (you can set this up by clicking on the rightmost button on top of the Project panel, or with the CTRL+P keyboard shortcut); information about the software version and operating system; and a clickable table of contents which reflects the project structure.

By clicking on a bookmark (or by simply scrolling through the pages of the document) you’ll see each bookmark report; we provide an example below. As you can see, first the analyst’s comment is presented in a box, whose color reflects the bookmark status (red for Warning, yellow for To Check, green for OK, gray if status is not set). The status is also written explicitly in the box (useful if you can’t print in color). Then the evidence and reference files associated to the bookmark are listed, along with their MD5 hash value. After that, information about the bookmarked filter and its settings is provided. To give you a general idea of the degree of precision of Amped Authenticate’s reporting system, the report for the project we created is 18 pages long. Should you need a less verbose report, you can tell Authenticate not to include the Input and/or Post Processing parameters for the bookmarks of your choice.

There we go! We have carried out a complete analysis which highlighted several concerns about the integrity and authenticity of the image, and we generated an effective forensic report presenting our findings. 

Keep in mind, however, that Amped Authenticate features many more tools than those we’ve seen in this article. There are many filters we couldn’t cover here, as well as tools that automatically search the web for images captured with the same camera model (so they can be used as reference for comparing metadata and file properties), or search for images with similar content through a Google Images reverse search, and much more. A really thorough analysis would have involved some of these tools as well.

To learn more about Amped Authenticate visit: https://ampedsoftware.com/authenticate or contact us at info@ampedsoftware.com

 

How To Acquire Data From A Mac Using MacQuisition


Depending on the digital forensic imaging tool you have available, creating a forensic image of a Mac computer can be either an anxiety-creating situation, or as easy as “1-2-3-START”.  There are several things you must identify ahead of attempting a full disk image of the system. Below are some things to consider:

  1. Type of Mac computer: Identify the serial number / model number; identify whether the Mac has a T2 security chip installed. Are SecureBoot settings enabled to prevent booting from external media?
  2. What file system (HFS+ vs APFS) is currently running on the source Mac?
  3. Is FileVault2 enabled on the source Mac? Do you have the password or Recovery Key available?
  4. Do you need a logical or physical acquisition of the Mac?
  5. Has the owner of the Mac enabled a firmware password on the system?
  6. Is the Mac installed with a fusion drive?
  7. Do you need a RAM image?

Having the answers to the above questions is imperative.  MacQuisition, BlackBag Technologies’ premier imaging tool for Mac computers, can help you answer some of those questions.  MacQuisition can identify if the Mac has a T2 security chip installed, what file system is currently running, if FileVault2 is enabled, and if a firmware password has been enabled.
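If you want a quick first pass at some of these questions from the terminal of a live Mac, the snippet below is a minimal Python sketch that wraps a few standard macOS commands (it assumes you are permitted to run commands on the live system, and it is only a triage aid, not a substitute for MacQuisition’s own checks):

```python
import subprocess

def run(cmd):
    """Run a command and return its text output (empty string on failure)."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        return ""

# Model and serial number of the Mac.
print(run(["system_profiler", "SPHardwareDataType"]))

# FileVault status ("FileVault is On." / "FileVault is Off.").
print(run(["fdesetup", "status"]))

# Disk layout: APFS vs. HFS+ volumes, containers, fusion drives.
print(run(["diskutil", "list"]))

# On T2 Macs, system_profiler may also report the security chip via the
# SPiBridgeDataType data type (worth verifying on your macOS build).
print(run(["system_profiler", "SPiBridgeDataType"]))
```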

Acquiring live vs “cold box”

The days of simply shutting off a computer to collect a forensic image are long gone, especially when you encounter a Mac.  With the increased use of FileVault2 encryption, an examiner must acquire as much logical data on a live Mac as possible because it may be the only time that particular data is accessible.  Running MacQuisition on a live system will immediately identify the presence of FileVault2 encryption. Once identified, an examiner would want to immediately acquire logical data, especially if the FileVault2 password or Recovery Key is unknown.

Live collection: How to acquire logical data

When the MacQuisition dongle is plugged into a running target machine, multiple volumes will appear on the desktop (the number of volumes depends on what version of macOS is running on the target machine).  There are two volumes of interest on the MacQuisition dongle for a live collection. The ‘Application’ volume stores the application and will be used to start MacQuisition. The ‘MQData’ volume is a storage location on the dongle where acquired data can be saved.  The examiner has the option to save data to another external device as well.

To begin a live acquisition, the examiner navigates to the ‘Application’ volume and clicks on ‘MacQuisition’.  The user will be prompted for the admin password at this time and can enter it here if it is known. If the admin password is not known the below prompt will be displayed, and the user can choose to run restricted.

Next the user will see a pop-up regarding FileVault2, if it is detected by MacQuisition.

Once ‘Continue’ is clicked, the user will see the main display for MacQuisition and can enter all the relevant case information as well as change the time zone used for the logs and reports.

From here, you can select whether to do a ‘Data Collection’ (which will export specific folders and files into a folder or sparse image), or image the device.  Below is a screenshot for Data Collection:

There are several locations pre-defined within MacQuisition that are already selected, and the user can simply check or uncheck areas they would like to export.  There is also a button on the bottom left-hand side to ‘Select Files’ should the user want to select a location not already included.

If ‘Image Device’ is selected at the top, the user will see a screen that looks like this:

Physical disks are displayed, and MacQuisition will show APFS containers as well as encrypted volumes (and whether they are unlocked).  Select the disk to image, and choose the appropriate image formats, image segment size, and acquisition hashes. Here are the file formats and segment sizes available to choose from:

*Note: If acquiring a physical image of a T2 chip system, the output format is restricted to AFF4.

Click the plus sign under destination to pick the acquisition storage location.

To acquire RAM from the live Mac, root access is necessary.  If the Mac is logged in under “guest” privileges, acquire RAM from a “cold” box state.  

Cold box acquisition

Obviously, a full physical acquisition of the source Mac’s hard drive(s) is preferred by most examiners, and provides the largest amount of data, including APFS snapshots.  There are two methods an examiner can use to perform such an acquisition. The first method is using a control boot method (Startup Manager). This is accomplished by pressing the Power button and then holding down the Option/Alt key.  Then select the appropriate version to run depending on the source Mac architecture. The second method is acquiring the source Mac while in Target Disk Mode (TDM). This method is recommended for Mac computers installed with the T2 security chip and allows the examiner to obtain a physical image without modifying the SecureBoot settings.  The source Mac (in TDM) is attached through a write-blocker (hardware or software) to the examiner’s forensic Mac computer. Run MacQuisition from the examiner’s forensic Mac computer and follow the same process as described in the live collection how-to above.

In either of the above methods, if a firmware password has been enabled on the computer, it will be identified at this stage by a “padlock” icon.  If the computer is protected with a firmware password, Apple must be served with a legal process to circumvent it. In a corporate environment, the IT department who owns the computer may have a record of it and should be contacted.  This holds true for Recovery Keys as well, since most corporate IT departments keep records of Recovery Keys on systems issued to their employees.

Obtaining the firmware password, FileVault2 password or Recovery Key is imperative.  But when will you need it? Below is a quick reference chart.

FileVault2 password/Recovery Key reference chart:

Output

Once an examiner has decided what method to use to acquire the source Mac (control boot or Target Disk Mode), as well as what to collect (logical or physical images), the next step is to determine where to send the acquisition/image and what filesystem to use for storage.

It is always recommended to stay with the native filesystem of the source you are imaging, but there are situations where the examiner may choose to analyze the acquired Mac data on a Windows-based system.  For physical images, BlackBag Technologies incorporated Paragon© drivers to allow output to NTFS. Although MacQuisition supports output to ExFAT volumes, this is not recommended due to the instability of the drivers used to create them, especially on a Mac.  Improperly ejecting the external drive can corrupt the filesystem, leaving the examiner with an unusable/unrecoverable image file.

Conclusion

At BlackBag we are always looking ahead to how we can enable investigators to make informed decisions with the time and resources available to them. With MacQuisition, we are exploring how we can let on-site personnel view additional relevant content quickly, even before a full image, to make sure they are focused on high-value devices. By giving investigators more information and insights earlier in the collection process, MacQuisition will save customers time and meet changing legal requirements. 

Find out more about MacQuisition and order your own copy at blackbagtech.com.

Four Critical Success Factors In Mobile Forensics


by Mike Dickinson, Deputy Executive Officer at MSAB

The purpose of this paper is to encourage mobile forensic practitioners to consider a wider number of critical factors surrounding their choice and use of mobile forensic tools: specifically, the quality of decoding, the training of users and, ultimately, the preservation of digital data as evidence in court proceedings.

Introduction

There is a tendency in the world of mobile forensic tools to focus on one thing: data acquisition.

Most users tend to focus on purchasing a tool that gets them access to the data. Makes sense, right? Not much point in doing anything else, if you can’t get the data in the first place, and we would agree. But it shouldn’t stop there. There are four critical factors to consider:

  1. Accessing Data
  2. Decoding Data
  3. Data Integrity
  4. Training Users

This white paper focuses on points 2, 3 & 4 on the assumption that point 1 is already self-evident and gets plenty of attention in the marketplace.

The message here is that it doesn’t just stop once you have access, there are still some vitally important matters to consider before presenting your evidence in court.

1. Accessing data

If you do mobile forensics, you know that the hardest thing is getting the data in the first place. It is also the one thing customers are more than willing to pay for when it comes to the commercial aspects of the business.

This is currently the entire business model of Grayshift, for example, with their iPhone tool GrayKey. This tool is a way to get the data. The value of their product is that they have a unique exploit that allows users to bypass the iOS device security to recover the data.

Critically though, you need to purchase another mobile forensic tool in order to decode that data. The Grayshift business model assumes users already have another mobile forensic tool that can ingest their data and decode it to view the contents.

In other words, getting the raw data isn’t enough. You also need to be able to read it. Which leads us to the second priority – decoding.

2. Decoding Data

Why is decoding important? Put simply – time.

Unless you happen to be a digital forensic expert who reads hex binary data natively and has unlimited time to analyse data dumps, you’ll appreciate that some mobile forensic tools can automatically decode data for you.

Not many people are skilled enough to review binary data on a daily basis. But pretty much anyone can look at pictures acquired from a mobile device and work out if they are relevant or not. That is the value of decoding, it means you can quickly see what has been recovered and then determine if there is anything of evidential value on the device.

Disappointingly, we see a trend of users not giving as much thought to the quality of the tool’s decoding, compared to whether or not it can acquire the data in the first place. This is a significant oversight, given that most people only view what is automatically decoded by the tool.

It is almost as if it is assumed that the data presented will always be everything from the mobile device. Further, that every extraction will present the same data regardless of the tool used, if acquired from the same source. A simple comparison test between different digital forensic tools should soon debunk this assumption.

While the original raw data is always there in the extraction, a forensic tool’s ability to decode and present it is a separate matter. That’s because it relies on software engineers’ understanding of the latest data formats, which are changing all the time.

In our comparison tests between tools we have seen significant variances in the data presented based on the same acquisition. The speed of development and frequency of updates in apps, for example, means that the way data is stored is changing all the time and it is an endless task for tools to keep up to date.

The unpleasant truth is that mobile forensic tools often produce different results when you compare the outputs of their decoding. So, all other things being equal, you want to be sure that your tool decodes the most data in the most reliable way.

A true professional knows this and will have access to multiple mobile forensic tools for comparison and validation purposes. If two different tools come up with exactly the same result, the level of confidence in the results is significantly improved.

Equally, if there are variances, then the need for more verification is justified to ensure the integrity of the evidence presented. In serious crimes this should always be done as a matter of course.
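In practice, that cross-tool comparison can start with something as simple as diffing the two tools’ exports of decoded messages. The sketch below shows the idea in Python (the file names and column names are hypothetical; adjust them to whatever your tools actually export):

```python
import csv

def load_records(path, key_fields=("timestamp", "sender", "body")):
    """Load one tool's CSV export of decoded messages as a set of tuples."""
    with open(path, newline="", encoding="utf-8") as f:
        return {tuple(row[field] for field in key_fields) for row in csv.DictReader(f)}

tool_a = load_records("tool_a_messages.csv")
tool_b = load_records("tool_b_messages.csv")

# Anything decoded by only one tool is what needs manual verification
# against the raw extraction.
print("Only in tool A:", len(tool_a - tool_b))
print("Only in tool B:", len(tool_b - tool_a))
print("Decoded by both:", len(tool_a & tool_b))
```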

The challenge of time, however, means that this isn’t always done in every single case. For example, it is neither practical nor proportionate for a non-specialist investigator to spend days studying digital data, for a minor case of shoplifting.

Nevertheless, investigators do need to review the contents of the phone extraction to ensure they are not overlooking evidence of similar crimes or more serious offences that make the case a more serious matter worthy of further investigation.

The simple shoplifter?

Imagine a scenario where you are using a mobile forensic tool without the latest decoding capability for WhatsApp and because it can’t see the contents of the messages, it presents no more evidence. You assume the shoplifting suspect is a one-off case and let him off with a warning and the case never goes to court.

Now imagine the same phone going through another tool that does have the latest support for WhatsApp and the data reveals that the suspect is working for a network of criminals who are dealing in stolen goods in order to fund terrorism.

This is an extreme example to make the case, but hopefully you now appreciate the importance of checking the quality of decoding that a mobile forensic tool offers.

3. Data Integrity

What if you did all that work to generate a report for presentation in court, only to discover it wasn’t usable in court?

Getting past security and encryption to acquire the data is important. Hopefully you now also appreciate the importance of good decoding too, but what about producing it as evidence?

We call this the ‘Chain of Custody.’ That’s because in many courts you need to be able to prove the origin and reliability of the evidence you present in court – from the moment it is first acquired until the day of the trial – to demonstrate that it has not been interfered with or altered in any way.

Most law enforcement users understand the necessity for the preservation of physical evidence. It’s commonly understood that you should preserve and not contaminate DNA evidence. Equally, that you should allow the defence the opportunity to examine the evidence to see if they get different results.

So how does this work in the realm of digital data evidence?

The Principles of Digital Evidence

The best guide written on this topic came from the Association of Chief Police Officers. The Good Practice Guide for Electronic Evidence outlined four principles when dealing with this type of evidence:

Principle 1: No action taken by law enforcement agencies or their agents should change data held on an electronic device or storage media which may subsequently be relied upon in court.

Principle 2: In exceptional circumstances, where a person finds it necessary to access original data held on an electronic device or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.

Principle 3: An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.

Principle 4: The person in charge of the investigation (the case officer) has overall responsibility for ensuring that the law and these principles are followed.

Look at Principle 3 again – an Audit Trail. Does your mobile forensic tool have one?

Seriously, check it out – is there a detailed log of all the processes applied to the device and the results that created the end report?

We know of at least one major tool that does not have an open, accessible audit trail that can be read and understood by an independent expert for the defence. An encrypted audit log of the extraction is not a transparent tool open to enquiry by the court.

Imagine taking all the time to acquire the data, decode it and then prepare a report in order to present the evidence at court – only to have it thrown out because nobody can make sense of what the tool is actually doing?

Impact of Privacy and Data Protection laws

It may seem obvious that by its very nature, the data recovered from a mobile device is often personal data.

Data Protection by design is important when you consider there is a piece of legislation with global reach that impacts law enforcement officers in the European Union, as well as law enforcement officers anywhere in the world, when handling personal data transferred from EU-based authorities.

The European Union’s data protection laws require that personal data be protected so that it is not lost, unintentionally deleted or accessed by unauthorized personnel. And more and more countries, over 100 as of mid-2019, are enacting their own data protection laws and regulations, according to a United Nations tracking study.

The monetary fines for violating data protection laws can be significant – that should focus everyone’s mind on the importance of data protection.

Data Protection by Design

You may be surprised to learn that one of the most popular tools on the digital forensic market stores data in an open file format easily readable in its native format when stored on a computer.

That should be of immediate concern. Consider, for example, an investigation into indecent images, where the file format allows you to see the images natively in Windows or on a USB memory stick or DVD. This type of data should be protected by default.

If you store digital evidence in an open file format, that leaves it open to accidental alteration. How can you show that it has not been interfered with prior to presentation in court? What if someone accidentally dropped images from another case into the wrong folder on the computer where the evidence is stored – how would you know?

Please be sure to check that your digital forensic tool is not susceptible to this basic oversight when considering the issue of data protection and integrity for presentation in court.

4. Training Users

The final piece of the puzzle is training. Your organization probably spent lots of money investing in mobile forensic tools, but does it then invest in the users?

Sadly, too often this seems to be overlooked. The budgets allocated seem to be directed exclusively towards the purchase of products, and training comes as an afterthought.

In times of budget cuts, we appreciate that training may be one of the first areas to be cut, as organizations focus on their immediate short-term need for savings, over the long-term beneficial investment in their staff. It’s natural that tough choices need to be made.

However, the big challenge with mobile forensics is that in order to get the data off a mobile device, you usually need to power it on, and because these are proprietary electronic devices, doing so will alter the state of the device. Thus, from a purely technical perspective, this conflicts with Principle 1 of the Digital Evidence guidance.

However, Principle 2 has the answer – the user must be suitably trained.

That leads us to a very relevant court case in Australia where mobile forensic evidence was ultimately rejected. Not because the evidence was unreliable, far from it – the tools worked perfectly. The evidence was rejected on appeal because the officer presenting the data was unable to adequately explain how the tools worked or to show that he was suitably qualified:

… It was the first time he had experienced the relevant software and he did not have any formal training in its use. It was also his evidence that the software ‘tried to do its best job at doing it’. To my mind this clearly raised questions as to the reliability of the software and of Constable B’s correct use of it. In my view, the prosecution failed to establish that the downloading process was of a type generally accepted by experts as being accurate, and that the particular downloading by Constable B was properly performed.

Hopefully, from this example you can see that it’s vitally important that organizations keep their users suitably qualified to present digital evidence in court.

The last thing anyone wants is for good evidence to be thrown out because it was not presented in the correct procedural manner or because a law enforcement witness was not adequately qualified to present the evidence. The mobile phone market moves fast, and new techniques and solutions are being developed all the time. Keeping up to date is a full-time job.

If you invest in specialist tools, be sure to invest in the operators of these tools as well – to ensure you get best value from your investment.

Conclusion

If you have understood the need to cover all four of these critical areas, your organization will be well on the way to leading the field in terms of best practices for mobile forensics.

The quality of decoding available in the tool, the security of the data recovered and the ability of the user to understand and explain these processes is just as important as data acquisition, when it comes to the bigger picture of getting your case to court.

For more information visit our website: https://www.msab.com.

About The Author

Mike Dickinson has spent half his working career in military/law enforcement and the other half in the private sector supporting these services. His specialism is providing tools and technologies designed to help in the fight against crime. Using his previous experience as a Senior Investigating Officer in the UK Police, Mike is dedicated to helping agencies make best use of their existing assets to improve the prevention and detection of crime through the use of digital forensic technology.


How To Use Text Analytics With Rosoka Integration


Hi, I’m Rich Frawley and I’m the Digital Forensic Specialist with ADF Solutions. Today we are going to explore the text analytics capabilities built into ADF’s digital forensic software with the integration of Rosoka. 

Rosoka brings the power of automated entity extraction and language identification with gisting directly into ADF through a tightly integrated user experience.

Rosoka runs locally on the investigator’s computer, processing documents in over 200 languages to identify entities and locations in unstructured documents. 

This powerful functionality allows frontline officers to speed their intelligence-gathering with the ability to view translated data from structured and unstructured evidence so they can make better informed decisions starting on-scene. 

Rosoka is built into Triage G2, Triage G2 Pro, and is available as a Rosoka add-on for Mobile Device Investigator, Digital Evidence Investigator, and Triage Investigator, or our bundled Pro Tools. 

After a scan completes, the classifier runs and entity extraction follows automatically. This process can be paused and resumed if necessary. Entity extraction runs after image and video classification.  

Once completed, the entities can be filtered in the table where the entities were located, and also in the Entities tab on the specific file.  

When looking at a specific file with entities the “gloss” or “gist” is shown, as well as the original file. 

Along with documents, entities can also be located in tables such as email and messages. 

That’s all for this video — thank you for your time. 

Find out more at ADFSolutions.com

How To Integrate AD Enterprise And The CyberSponse Platform Using The AccessData API


Joe: What you’re looking at right now is the CyberSponse platform itself. As an incident responder, you’re going to spend most of your life either in the Alerts, or Violations, or Incidents page. 

In the Alerts page — what I’m going to do is I’m going to generate a simulated alert where you have an asset that’s been critically infected and you need to do something with AccessData in order to capture the memory. And so with that I’m going to go down and I’m going to run a simulation of AccessData [mumbling]. 

So when I run that, you’re going to see it creates a new alert at the top, where there’s a successful inbound connection. What this does is it creates an alert of an unknown… a specific port, 31337. And that is specifically because you’ve got a connection coming in, inbound to a specific asset. 

And we can look at it down here in the asset records: it’s the CEO’s laptop of the organization.

Well that’s a pretty important connection that you see, that’s happening on your network, that you might have to respond to. So all we have to do in the automation world, if we want to remediate or get a capture from AccessData, is I highlight that asset and I execute a full-access memory [capture].

And what you can see happening over here on the right-hand side is that AccessData is actually running Playbooks in the behind-the-scenes — and I’ll show you those in a second — that actually captures and communicates with AccessData Enterprise; captures the memory of that actual laptop itself. Steve, following my demo, is going to be able to show you that. 

And so what you can see is that the assets are correlated to the events, to the hunts, to the incidents, to the vulnerabilities, around this specific alert that came in. That’s where your ability is to interface with all of those other products that we talked about. 

So it’s curious: what does that Playbook do, and what did we do today?  

So what I’ve done is, if I view the executed Playbook option inside the incident alert record, I can see the comments added at the outset, I’ve waited ten minutes, I’ve been able to grab a second comment, and I’ve been able to execute against a specific asset. And I’m opening a page right now, which is the Playbook engine. 

And that Playbook engine is exactly what ran behind the scenes, and I’ll take you through this real quick. 

By creating that trigger, I created it and I ran the AccessData, and I created a case around AccessData. I added a comment to the record; I added a related asset which is the CEO’s laptop; I waited ten minutes for it to get ready to communicate and build the connection between the two; I executed the volatile memory acquisition; I added a second comment to the record that the acquisition took place; I got the job status back from AccessData; then I added the volatile data to the case that I had retrieved from AccessData itself; and then I added a completion comment. 

So what you can see is all the red icons is where we actually automated and integrated with AccessData, in order to automate the forensic aspect of it. The value of that is you no longer have to jump between products anymore. You can execute and control, almost like a remote-control console, all the executables that need to take place in order for you to capture that data within the incident response record. 

And so those Playbooks — like I said, if you want to do further investigation with that, you can add more approvals, you can add more steps, more connectors to more products. You can take that memory capture and do further evidence collection, data scanning, or extraction of specific indicators of compromise from it; you can do all of that through the platform itself. 

And with that, I’m going to let Steve take over from his part. 

Steve: What Joe just showed us was the automated Playbook, and the data that was grabbed using AccessData Enterprise. What I’m going to show you is the data in the AccessData GUI, so we can actually view it. 

He grabbed — in that Playbook, there was an incoming connection on port 31337. And so we’re going to go ahead and take a look at the sockets that we grabbed, and we’re going to see the connections. 

What we can see here is the CEO’s laptop, and we can see here the established connections, and I can see right away, there’s 31337, there’s an active connection [indecipherable] .43, and there’s an active connection from 10.1. If we go to this in Notepad, we can see the process ID, and the date and time that it happened. 

So now that I know that that’s there, I can go up into the processes and look at it, and see the active processes that were running… and we’ll find somewhere in here is my Notepad; we’ll see that Notepad is actually running on this [indecipherable]; and if we look at the command line of it, the command line looks like a Netcat listener, so it’s -L, so it’s listening on port 31337, executing a command line. 

So the command is right underneath it, so we can see the parent ID is the same process ID of the Notepad, so we definitely know that we’re tracing of this command prompt. And Notepad did spawn that. 

So what we also can do, is take a look at all the network drivers and the users on this machine. And this is all the information that it grabs, so we can open files and the DLLs. 

On top of that, what I’ve done is, I had a scan of the CEO’s laptop, so I can do a comparison from Scan A and Scan B. 

So this is a scan prior to anything happening, so I can left click and then right click, and I can get a quick [delta] of new processes that showed up from Scan A to Scan B. 

So we can see right here, here’s my Notepad that showed up, and we can see the same command line right there, and there’s the cmd.exe, looks like it’s running in the Downloads directory: my user path, Users\Steve\Downloads\cmd.exe, and it looks like it’s running Notepad as well. 

I also can get new connections on here, too. So if I can compare the two scans, same type of thing: anything in red shows up that’s new. There’s my 31337, running my Notepad, and definitely we know we’ve got an issue. 

So from here, because I’m in the GUI, I’ve already done this in the interests of time, I’ve gone and loaded a preview of the machine. 

So from here, I can look at all the files on the machine; I can scroll down through them, there are a hundred and twenty-something thousand files on the box. I can scroll down through them, I can do a timeline, I can do all kinds of different stuff with these files. 

I can get all the properties; I can just click on one of them and it’ll give me the properties of the file, and I can offload the file… I can kill it if I needed to, the process. 

And on top of that, we know we have some executables on here that are bad. So what we can do is do a remote search on the box; we know we have a Notepad running, and I can create an XML template to look for these files being hashed. 

Because I had the processes; I know what the hash is; and if I open this up, you will see here, here’s the hash galley, the XML, like I showed you when we created this, and it’s an executable file. 

So if we just clicked Search on that, it would go through on that remote [indecipherable], and even things I have listed up here, it will search for that. 

Once it finds it, it will come back, and it will show us the machines, the different files that we found. 

So not only did we find my Notepad here, down at the very bottom on that box, we can see that that’s running out of my own directory, my user downloads directory; we can see that there is a Netcat listener running out of the administrator’s directory, there’s a Netcat on here. 

And the key is that there’s a bunch of files, and even here in the recycling bin we’ve found a couple of files that were deleted, that the hash value actually matched up on that. 

So then we could go back over to our processes, under the volatile information, and I could kill one of these processes. So if I take a look at the process, I could run a… if I click on the process then I could kill it, or I could wipe it, and I could add it to the hash values, if I wanted to. 

On top of that, while we’re in here, I ran a module called System Information, which is where we’re going to glean the kind of information for ourselves, for our infected node. It’s going to pull all the apps, it’s going to grab all the prefetch files, so I can come here and take a look… you know, the prefetches, where all the different things are; I can get my user assist files; downloads; URLs; network connections; the shares on that box — so we can see if anything else was opened up on it — jump lists; shortcuts on the machine; I can pull the users onto the box; I can parse out my shellbags on the machine; also it can grab USB information. And that’s all in one nice little GUI. 

Then we can export. Same thing with the volatile data, I can export any of this information that I needed to. 

So also, while we’re doing a preview of it, we can do a lot of things like, you know, I can go and do searching… I can look for prefetch files. I can filter in on a prefetch file, I can find my command line that was executed and look at it in its natural format. 

And there we go: looking at this one here, we can see that this is coming out of Steve’s under Downloads, and it was executed this afternoon, or when the alert happened. 

I can go down and grab the registry files, I can do a timeline on it, see all the machines and anything that was accessed in the last couple of months, last couple of days, couple of hours, by simply creating and writing filters. 

So to cut a long story short, Joe’s system, CyberSponse, got an alert; created an asset through the Playbook; went and used AccessData through the APIs; grabbed all the process information, and the service information and all the volatile data, we’re able to see that we definitely do have an incident, we’re able to investigate it, and remediate it. 

Joe: And that time to respond is where you’re going to get a lot of value, right? As it’s doing a memory capture that takes a few minutes, you’re saving time by executing it remotely before you have to go back, before you go into AccessData Enterprise in order to look at the data itself. So you’re getting ahead of the curve, so to speak: you’re able to streamline your process. 

It’s interesting when you start to automate things: you know how you saw I manually executed it? You can start automating when those executions take place so that you don’t even have to have human intervention. You literally could say “Every time that there’s a known compromise on an end point, automatically grab the memory or drive image of that device, before you even open the alert.” 

So that’s kind of a proactive response that you can’t currently do… that you couldn’t do with any other product combination yet today. 

Steve: Yeah, and you can pass the API’s variables from [the] CyberSponse system. So if you had a specific file name, or something like that, you could go and grab it. You could use the API to grab the individual file; you could do a full disk dump, as Joe said, a memory dump, including the [indecipherable] file; volatile data…. 

A lot of the stuff we can do manually in the GUI, we’ve automated to be able to make those response times a lot faster, and almost instant. 

Learn more at accessdata.com/products-services/api

The Mueller Report – An Amazing Lens Into A Modern Federal Investigation


by Stephen Stewart, CTO, Nuix

Preface: This is NOT about politics. This is all about the data discussed in Volume 1 of The Mueller Report.

I will admit, I am a total geek. When the government released the Mueller Report, I downloaded the PDF and, within a few minutes, ran it through Nuix.

For anyone who works with unstructured data for a living, the document would fall into the category of “Gross Data.” The PDF was a container for 447 JPGs with zero searchable text. Nuix made short work of this, and I was able to quickly OCR the images. Thanks to the auto-detect for rotation I was able to very quickly get good clean text.

From there, I extracted named entities (people and company names, email addresses, etc…) and pulled out a list of shingles (basically a quick way to look for repeating phrases).
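For readers unfamiliar with shingling: it simply slides a fixed-length window of words across the text and counts how often each window repeats. A minimal Python sketch of the idea (illustrative only; Nuix’s implementation is its own, and the input file name is hypothetical):

```python
import re
from collections import Counter

def shingles(text: str, size: int = 5) -> Counter:
    """Count every run of `size` consecutive words (a shingle) in the text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return Counter(
        " ".join(words[i:i + size]) for i in range(len(words) - size + 1)
    )

# Hypothetical usage against OCR'd text:
# with open("mueller_report_ocr.txt", encoding="utf-8") as f:
#     top_phrases = shingles(f.read()).most_common(20)
```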

Named Entities

The Mueller Report contains a wealth of named entities. Here are just a few examples:

People

Email Addresses

Company Names

Shingles

Note: “Number of Items” refers to the number of pages that contained the entity or shingle.

Because I had all of this detail so quickly, I was almost immediately able to get a sense of the document’s content and understand what the data landscape looked like. 

Next Step: Analysis

So, what’s next? I decided to do a little open source analysis and compare the report to things like the publicly available data released by the ICIJ as part of the Panama Papers and the United States Treasury Specially Designated Nationals And Blocked Persons List (SDN).

This was also super easy, since I had already converted the Panama Papers and SDN into a huge search and tag (more on that in another blog post!). I should note: many of the search tokens are over-broad, but it’s still a really interesting exercise…
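“Search and tag” here just means running a long list of names or terms over the text and recording which ones hit and how often. A rough Python sketch of the concept (the file names are hypothetical, and, as noted above, naive matching like this is deliberately over-broad):

```python
import re

def search_and_tag(text, terms):
    """Return how many times each term from a watch list appears in the text."""
    hits = {}
    for term in terms:
        count = len(re.findall(re.escape(term), text, flags=re.IGNORECASE))
        if count:
            hits[term] = count
    return hits

# Hypothetical usage:
# with open("mueller_report_ocr.txt", encoding="utf-8") as f:
#     report_text = f.read()
# with open("sdn_names.txt", encoding="utf-8") as f:
#     watch_list = [line.strip() for line in f if line.strip()]
# tags = search_and_tag(report_text, watch_list)
```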

As a reminder, all of this was extracted from just the final report – not the actual source data!

The report’s description of the actual source data that was part of the investigation reads:

“During its investigation the Office issued more than 2,800 subpoenas under the auspices of a grand jury sitting in the District of Columbia; executed nearly 500 search-and-seizure warrants; obtained more than 230 orders for communications records under 18 U.S.C. § 2703(d); obtained almost 50 orders authorizing use of pen registers; made 13 requests to foreign governments pursuant to Mutual Legal Assistance Treaties; and interviewed approximately 500 witnesses, including almost 80 before a grand jury.”

Within just this short example of the source data, we have a lot to consider, including:

  • 2800 subpoenas: With 87 references to Facebook and the detailed documentation as to the activity of certain profiles, can you assume that the Office was sifting through Facebook, Twitter, and Instagram data?
  • 500 search and seizure warrants: That is bound to generate at least a couple hundred hard drives and mobile devices.
  • 230 2703(d) and 50 “pen registers”: Interesting in that it laser focused on who is talking to whom and the frequency of their communications.
  • 500 witnesses: That is a whole lot of testimony that needs to be checked against all that digital evidence.

Modern Investigations Are Complex

Regardless of your political persuasions, Volume 1 of the Mueller Report offers an amazing look into the complexities of modern investigations and really highlights the importance of being able to handle diverse collections of data about, and created by, humans and then being able to understand the people, objects, locations and events (POLE).

Nuix’s software fits right into the middle of this landscape—helping organizations handle gross data, running hundreds of thousands of searches looking for hidden links, and visualizing relationships across the POLE framework. I’m continually surprised at how powerful and extensible it truly is.

In my next few articles, I’ll dive further into some of the topics I touched on here:

  • What it feels like to be targeted by a Nation State
  • Human-generated data is at the heart of most investigations (even in 2019)
  • Using open source intelligence lists to augment investigations

 

Walkthrough: Talon Ultimate From Logicube


Welcome to Logicube’s tutorial on the Talon Ultimate. Featuring Logicube’s advanced technology, the Talon Ultimate provides high-performance forensic imaging at a price point that fits budget-constrained organisations without sacrificing state-of-the-art features and benefits.

The Talon Ultimate achieves speeds of up to 40GB / minute. The solution images and verifies concurrently to reduce the overall processing time. Image from a single source drive or a network repository to up to four destination drives simultaneously. A gigabit Ethernet port allows you to image to or from a network repository.

Software options available for Talon Ultimate include the Multi-Task option, to activate the second SATA source port and provide the ability to image from multiple source drives simultaneously. The SAS option activates support for SAS drives on the source and destination SATA ports.

The Logical Imaging option allows the investigator to acquire only the specific files needed. This option allows you to view and browse the root directory of a drive and select specific files to image.

We’ll now do a quick product tour of the Talon Ultimate. On the front of the unit you’ll find two USB 2.0 host ports and one USB 3.0 device port. On the rear of the unit we have a gigabit Ethernet port, an HDMI port for connection to a projector, and the DC power port.

On the left, or write-protected side, of the Talon Ultimate, we have one SATA; one Firewire; one USB 3.0; and one PCIe port. A second SATA port can be activated with the Multi-Task option.

On the right, or destination side of Talon Ultimate, we have two SATA; one USB 3.0; and one Firewire port.

The Talon Ultimate allows you to image from a single source drive up to four destinations simultaneously. With the optional Multi-Task feature, you can image from multiple source drives to multiple destinations concurrently.

Users can image from M.2 PCIe drives, including NVMe drives, using optional adaptors connected to the Talon Ultimate’s PCIe source port.

The Talon Ultimate can image directly from laptops, using Logicube’s iSCSI boot disk, and can image Mac computers in target disk mode using the Firewire port.

Talon Ultimate’s user interface is easy to navigate. On the left side of the screen you’ll see all of the various operation icons, and in the middle of the screen are the setting menus for each operation.

You can connect the Talon Ultimate to a network PC and manage all operations remotely.

A Wipe feature allows you to securely sanitise destination drives using a custom pass, a DoD 7-pass wipe, or secure erase.

The File Browser feature provides write-blocked preview and triage of drive contents. Logical access to source or destination drives allows users to view the drive’s partitions and contents.

Audit trail and log files provide detailed information on each task. The log files can be viewed or exported in XML, HTML or PDF format to a USB enclosure.

The Talon Ultimate allows you to secure sensitive evidence data with whole disk drive encryption, and configure custom profiles and passwords.

Once a task has begun, a progress screen appears to provide task information, including speed and the time remaining to completion.

The Talon Ultimate offers advanced technology and exceptional performance with features designed to streamline the evidence collection process, all at a budget-friendly price.

Thank you for your interest in the Talon Ultimate forensic imager. We hope you found this tutorial of interest. For more information visit logicube.com, or contact our sales team at sales@logicube.com.

What Changes Do We Need To See In eDiscovery? Part II


by Harold Burt-Gerrans

Let’s continue from where we left off last time, discussing standardization. If you missed it, Part 1 was all about establishing standards. Now a bit about following standards. This will sound funny to those who know what a rebel I tend to be! Watch out, I’m about to rant…

Following Standards

When there are established standards, they should be followed. I’m mentioning this as a particular issue has caused some grief on a couple of recent occasions and I’ve discovered that a few industry-leading processing software applications have adopted this particular deviation from the standard. For reference supporting my arguments, I’m using the Library of Congress.

For those of you who are non-technical, an EML file is a text file containing an email message and its attachments. An MBOX file is a file containing one or more EML messages, each preceded by a separator line. MBOX is the format for messages exported by applications such as Gmail, when exported from their Vault and Personal Data Backup functions. Yes, I am aware that Gmail also has functions to export as PST, but not every item in Gmail converts to MSG format properly. And folder structured PSTs are structurally different from the multi-labelling system used in Gmail. For eDiscovery purposes, it is safer to export as MBOX – less chance that the smoking gun message fails conversion and is not exported.

For all the techies, an EML file is RFC-5322 formatted data representing a single message (Note that RFC-5322 replaces RFC-2822 which replaced RFC-822) and a MBOX file is a collection of RFC-5322 messages each preceded by a specifically formatted separator line (see RFC-4155).

During eDiscovery processing, it is not acceptable to break an MBOX file into individual files simply by putting each separator line and its following message into individual files and then calling them “somefilename.EML”. In this form, the contents of these individual files are still RFC-4155 and not truly RFC-5322, and hence are NOT valid EML files. A valid EML file requires the removal/restructuring of the separator line.
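For what it’s worth, doing this correctly is not hard. Here is a minimal Python sketch using only the standard-library mailbox module, which consumes the RFC-4155 separator while parsing, so each message written out is a valid RFC-5322 EML (the output directory name is hypothetical):

```python
import mailbox
from pathlib import Path

def split_mbox(mbox_path: str, out_dir: str = "eml_out") -> int:
    """Write each message in an MBOX file out as a standalone .eml file.

    The mailbox module consumes the RFC-4155 'From ' separator line while
    parsing, so the serialized messages are RFC-5322 EMLs rather than
    single-message MBOX fragments."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for i, message in enumerate(mailbox.mbox(mbox_path)):
        (out / f"message_{i:06d}.eml").write_bytes(message.as_bytes())
        count += 1
    return count

# Hypothetical usage:
# split_mbox("gmail_vault_export.mbox")
```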

Here’s an example of what I am complaining about …… Both of these are valid RFC-# files to the specification referenced.

RFC-5322 (EML)

RFC-4155 (MBOX)

Even though the difference between them is very small, the second file cannot be considered a valid EML.

One really good reason that this EML standard should be followed: Relativity’s document viewer does not display invalid EML files.

Another good reason: It’s a defined standard… comply with it!

I’ve heard two opposing arguments from “processors” who believe it is acceptable to call these RFC4155 (MBOX) files EMLs:

a) Many applications, such as Outlook, will open these files;
b) The separator line must be maintained because it is metadata that should be kept.

Here’s my opinion on both arguments: NONSENSE.

a) Just because an application (or several), regardless of popularity, is smart enough to compensate for your lack of ability to follow a standard does not imply that you are doing it right. It’s not a grey area. You’re either following the standard and are valid, or you’re not. Don’t assume that every piece of software will be so forgiving.

b) The separator line is typically made of information contained within the other header fields of the RFC-5322 data. Consequently, it does not provide any metadata that is not available elsewhere. Hence, it can be removed. If you still insist on keeping it, then modify it to be a valid RFC-5322 header line by prefixing it with something like “X-RFC4155: “. In the RFC-4155 (MBOX) example above, it would be a valid EML if the first line were removed or changed to:

If you know that you are one of these processing applications that doesn’t follow the standard, please add this correction to your list of future bug fixes. Enough ranting… on to something new.

De-duplication level during document review

I don’t believe there should be any level of de-duplication other than “Global” during a document review. “None” and “Custodial” are options that should not even be presented. That said, fields like “All Custodians” and/or “All Sources” should be available from your eDiscovery processing software to indicate where multiple copies have occurred. At the end of the review, if there is some distinct metadata item (such as “Not Read” in the case of an email) that is needed for a specific custodian’s version of a document, it should be made available from the processing software when needed, typically at production time. Adding dozens of duplicate documents to the review when most of them will be insignificant just causes more work for the review team, more room for coding inconsistencies and is a waste of (often billable) disk space.
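To make the mechanics concrete, global de-duplication amounts to grouping documents by a content hash across all custodians, keeping a single instance for review, and carrying an “All Custodians” style field alongside it. A minimal Python sketch of the concept (the field names are hypothetical):

```python
import hashlib
from collections import defaultdict

def global_dedupe(documents):
    """Group documents by content hash and keep one review copy per hash.

    `documents` is an iterable of dicts with hypothetical keys
    'doc_id', 'custodian' and 'content' (bytes)."""
    groups = defaultdict(list)
    for doc in documents:
        digest = hashlib.sha256(doc["content"]).hexdigest()
        groups[digest].append(doc)

    review_set = []
    for copies in groups.values():
        primary = copies[0].copy()
        # Preserve where every copy came from instead of reviewing each one.
        primary["all_custodians"] = sorted({c["custodian"] for c in copies})
        primary["duplicate_count"] = len(copies)
        review_set.append(primary)
    return review_set
```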

But what if one copy is privileged and de-duping might remove the non-privileged copy, or worse, remove the privileged copy allowing the accidental production of privileged information? If you have two copies of a document, the document itself cannot be both privileged and not privileged based on its own content. A law firm that has significant experience as Amici Curiae for Privilege Reviews once told me that “A document can, on its own merit, be considered privileged or not, and that due to various family relations, privilege can be lost or gained.” Hence, differences in privilege for these copies can only be the result of their associated families, as in one is an attachment to a privileged email and the other is not. In this case, however, the emails, as separate individual parent documents, will define privilege for the family.

Hey, you’ve made it to the end of part 2. I hope this was a little more exciting than Part 1. Part 3 will be more thought-provoking (at least to techies like me) as we’ll start discussing future data structures to enhance the eDiscovery experience. eDiscovery Utopia, here we come….

About The Author

Harold Burt-Gerrans is Director, eDiscovery and Computer Forensics for Epiq Canada. He has over 15 years’ experience in the industry, mostly with H&A eDiscovery (acquired by Epiq, April 1, 2019).

How To Boot Scan A Mac With APFS And FileVault 2

Hi, I’m Rich Frawley and I’m the Digital Forensic Specialist with ADF Solutions. Today we are going to conduct a boot scan of a MacBook Air that has APFS and FileVault 2 enabled.

At this point you have decided on a search profile or profiles to use and prepared your collection key.

When conducting a boot scan, Digital Evidence Investigator operates in a forensically sound manner: no changes are made to the target media.

Prior to conducting a boot scan, establish how many USB ports are available and determine if the four-port USB hub is required. Two ports are required in order to complete a scan: one for the collection key and one for the authentication key. Once the scan has started, the authentication key can be removed.

As you can see here, I have my collection key inserted; I have my authentication key ready to go; I have my four-port hub; and I also have an external drive, should I want to image this when the scan is complete.

With the MacBook Air, in order to boot to the USB device we will hold the Option key after pushing and releasing the Power button. You can see I have three devices available to me: I have the System drive and my USB device, which is broken down into a Windows boot and an EFI boot; either will work.

When booting to the collection key, Digital Evidence Investigator will automatically launch the application to scan the computer. No user input is normally required within the Windows boot manager.

Once DEI has launched, there are two options available: ‘Scan Computer’ and ‘Image Computer.’ To proceed with the boot scan, click on ‘Scan Computer.’

You can see my target devices: the physical drive up on top, partitions below; and I have my APFS partition, which is encrypted.

If I select this partition it gives me the option to unlock the partition; here’s where I would enter my password or recovery key, and select ‘OK.’ And now my drive is unlocked and ready to scan.

I select my search profile, give it a name, and select ‘Scan.’

You see it’s asking for the license, so I plug in the authentication key. Once the authentication key is recognised, the scan will commence; you can then remove the authentication key and move on to another computer with another collection key and the same authentication key.

Now that the scan has completed, I select ‘OK,’ and I’m given the option to go in to view my results, or to image the drive.

If I select ‘Image’ it gives me the physical drive to image here, and then I connect my external drive that I’m going to save my forensic image to.

Now you can see my source is the internal SSD drive; my destination is the drive I just plugged in; the image name; the format I want to save it to; and then I have some fields that I can fill out down here, pertaining to my specific needs. I can verify the image after it has been completed, and then select ‘Image’ to commence.

That’s all for this video; thank you for your time.

Request a free trial at TryADF.com.

Crimes Against Children Conference 2019 Recap Part I: Technology

by Christa Miller, Forensic Focus 

Celebrating its 31st year, the Crimes Against Children Conference ran from 12-15 July 2019 in Dallas, Texas. The conference kicked off with opening remarks by Lynn M. Davis, President and Chief Executive Officer of the Dallas Children’s Advocacy Center (DCAC), who focused on the DCAC’s new “Save Jane” initiative. Updated with the photos and names of local missing and endangered children, this video project is designed to be shareable and scalable for use by every community in the United States. 

President and co-founder of Watch Systems Mike Cormaci spoke about his company’s sex offender registry management and community notification solution, OffenderWatch, which relies on relationships with the 15,000 users in its network to provide accessible statistics about sex offenders in any given community.

Cormaci was followed by Emily Vacher, Facebook’s Director of Trust & Safety, who oriented her talk around creativity and big thinking to solve child protection problems unconstrained by geographic boundaries. Pointing to the Save Jane initiative, which she called a “living art exhibit,” Vacher talked about the importance of art — not just filmmaking, but also painting, music, and other art forms — to heal and inspire, educate and raise awareness, and even assist in finding children. For example:

  • Director Sasha Joseph Neulinger’s documentary “Rewind,” featured at the Tribeca Film Festival, shows how survivors can reclaim their voices and call others to action.
  • A 25th-anniversary cover of the 1993 Soul Asylum song “Runaway Train” intersperses a music video with images of missing children. Thanks to the National Center for Missing and Exploited Children (NCMEC), these images are regularly updated and even geotargeted based on the location where the video plays. The video has already helped recover children.

Opening statements ended with a keynote by Greg Smith, founder of the Kelsey Smith Foundation, which offers liaison services to help law enforcement communicate with families during an investigation, as well as education for both civilians and law enforcement.

With that, the conference was officially under way. Many of the presentations we attended contained sensitive content that presenters didn’t want exposed to a broad online audience. If you need assistance or have questions about something you read, please be sure to post in our forums, or reach out directly to organizations like the National White Collar Crime Center (NW3C), SEARCH.org, and others that provide training and support.

Mobile Peer-To-Peer Investigations

Cellebrite’s Manager of Technology, Keith Leavitt, talked about mobile device evidence in P2P investigations. In Mississippi, where he retired from the Attorney General’s Office, minimal broadband penetration meant that offenders relied on mobile technology to obtain CSAM.

uTorrent and BitTorrent file sharing clients are both supported on Android, along with DroidG2, which implements Gnutella2 technology even though Gnutella itself isn’t supported on Android. Other P2P clients include eMule, eDonkey, and Shareaza; DroidG2 additionally accesses eDonkey.

The storage size of many mobile devices (plus expandable storage) means you’re likely to see one or more of these apps. However, most forensic tools don’t parse many of the apps, requiring manual examination.

Which data you can obtain depends on the level of extraction you perform: logical, file system, or physical. Because some methods are faster than others, extractions depend on the amount of time you have and the type of data you need. Tools like search term “watch lists” can help to filter extracted data.

Examining P2P apps starts with locating an identifier that provides the path to where the app data resides. In addition you may find other files of importance: log files that may include an IP address, and date/time stamps, all of which can be correlated with other places the IP address shows up.

Leavitt also covered challenges with external storage media. For example, Android makes it possible to mount an SD storage card as internal memory, and ejecting the card could break the device. Meanwhile, micro SD cards might store content in the device’s media download folder, and partial downloads may appear in the “resume” or “incomplete” folder rather than in the torrent folder. Torrent filenames may be bundled in a generic, hard-to-notice ZIP file.

Finally, Leavitt said it’s important to validate how the P2P apps work. Part of explaining how data got on the device, validation is a matter of running the app through a legal torrent such as the “Big Buck Bunny” film. 

Learn more about Cellebrite’s mobile forensics solutions and training here.

Bill Wiltse, President of the Child Rescue Coalition (CRC), co-presented on overcoming investigative challenges with mobile devices and apps via CRC’s investigation tool, the Child Protection System (CPS).

These challenges include attribution to a specific offender; encryption; and ease of access to children. However, the talk also described opportunities associated with geolocating mobile IP addresses that are not resolvable using existing carrier infrastructure. The CPS offers a way for CSAM investigators to identify suspects by correlating these IP addresses with digital forensic examination results.

Another of the talk’s key elements addressed the need for better communication and relationship-building across investigative teams and industry partners. Specifically, when approaching mobile carriers for information, it can help if investigative teams have already established how the carrier’s past help has enabled them to make a rescue. Including them on these kinds of wins can provide an incentive for them to help in the future.

The CRC, a 501(c)(3) nonprofit organization, maintains the CPS database with billions of records. Since 2004, the CRC has focused on information sharing, making data more digestible and better fused together to enable law enforcement to triage and find more suspects, more quickly.

Virtual Currencies, Virtual Reality, And The Dark Web

Eric Huber, Vice President of International and Strategic Initiatives at the National White Collar Crime Center (NW3C), described virtual currency and how it factors in investigations. Huber described different kinds of virtual currency including privacy coins, stablecoins, and even rewards systems like airline and hotel points.

Huber described the relationships between the blockchain ledger, currency mining, the public / private key infrastructure, wallets, and cryptocurrency exchanges, as well as how each part of the process works with the next; and how criminals exploit the network’s distributed nature.

Another more recent broad use case of blockchain technology is “smart contracts.” By turning the blockchain into an operating system, users can create decentralized applications, or dapps.  

What kinds of crimes is cryptocurrency used for? 

  • One of the most glaring examples is ransomware, in which malware encrypts files on a computer or server and ransoms them in exchange for cryptocurrency. 
  • Cryptocurrency is frequently used for dark market purchases, such as drugs or CSAM.
  • Money laundering.
  • Both virtual and physical robbery can happen. A thief can steal private keys, but can also force a crypto transaction at gunpoint.
  • Cryptojacking is when criminals install unauthorized cryptocurrency mining software on a victim’s machine. The software uses the victim’s computer and electricity to mine for cryptocurrency on the criminal’s behalf.
  • Cash smuggling or counterfeit cash transactions might rely on dark market cryptocurrency exchanges, which let users get a box of cash in return for Bitcoin.

Blockchain investigations, therefore, are a mix of forensics, blockchain analysis, and traditional financial investigations. Free block explorers show the public ledger with transaction IDs, while paid cryptoforensic tools let you visualize transactions and even — when paired with legal process — deanonymize suspects. Often you can correlate these pieces of data with a link analysis graph that allows you to follow the money and see where to serve legal process.
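As a rough, purely illustrative sketch of the link analysis idea (the addresses, transaction IDs, and amounts below are made up, and this is not any particular vendor’s method), the snippet builds a small graph from transaction records and walks outward from a suspect address to follow the money:

    # follow_the_money.py - a toy sketch of link analysis over made-up transactions.
    # Every address, txid, and amount here is hypothetical.
    from collections import defaultdict, deque

    transactions = [
        {"txid": "tx1", "from": "addr_suspect", "to": "addr_mixer",    "btc": 1.5},
        {"txid": "tx2", "from": "addr_mixer",   "to": "addr_exchange", "btc": 1.4},
        {"txid": "tx3", "from": "addr_other",   "to": "addr_exchange", "btc": 0.2},
    ]

    # Build an adjacency list: address -> list of (next_address, txid)
    graph = defaultdict(list)
    for tx in transactions:
        graph[tx["from"]].append((tx["to"], tx["txid"]))

    def follow(start):
        """Breadth-first walk of outgoing funds from a starting address."""
        seen, queue = {start}, deque([start])
        while queue:
            addr = queue.popleft()
            for nxt, txid in graph[addr]:
                print(f"{addr} -> {nxt} via {txid}")
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)

    follow("addr_suspect")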

Huber covered different kinds of evidence associated with cryptocurrency transactions, for example:

  • Strings of hexadecimal numbers, which could be addresses or keys used in cryptocurrency transactions.
  • USB keys that could be heavily encrypted hardware wallets. They include Trezor, Ledger, and KeepKey. Electronics recovery dogs can help find these.
  • Software wallets, which could be desktop or mobile apps. Popular options include Electrum, Jaxx, Copay, Exodus, GreenAddress, and Mycelium.
  • Mobile devices specifically designed for blockchain, including the HTC Exodus and Sirin Labs Finney.

Following the trail of cryptocurrency evidence was also the topic of a talk given by Guy Gino, a special agent with Homeland Security Investigations (HSI). He offered a case study on how his team managed to de-anonymize an opioid dealer whose product turned out to have killed 34 people within one week of their receiving and using it.

Investigating the crime like a homicide, not a computer crime, Gino’s team used conventional investigative skills to follow the trail. Starting with a partial dump from a chip-off acquisition of the victim’s locked phone, the team was able to locate a photo of a previous sale with the same characteristics — font, sticker size, and quantity — as the overdose.

They also found screenshots of images from the victim’s access to the darknet through PGP and other onion apps, and from there, were able to identify her on the darknet. Gino’s team then worked their remaining evidence:

  • The logistics of the US postal system, used to ship the drugs.
  • Social media profiles.
  • Darknet marketplace “choke points.”
  • The US Treasury’s FinCEN suspicious activity reports, which helped to identify suspects.

In the end, Gino’s team was able to trace the products the suspect had used for “stealth shipping” to stores in the local area, including lot numbers. Unusual transactions at a handful of those stores enabled them to look at surveillance video to identify suspects and make arrests.

Huber said while some forensic tool vendors support wallets, it’s an area that’s evolving rapidly, and one that investigators need training on to examine data in unsupported apps. Comprehensive resources you can use to learn more about cryptocurrency include Coindesk, Coin Center, the Blockchain Alliance, and Jameson Lopp’s Bitcoin Information & Educational Resources. In addition, the NW3C offers multiple training courses on both financial crimes and cryptocurrency investigations.

Huber also gave a talk about virtual reality (VR), augmented reality (AR), and mixed reality (MR) environments and their relevance to criminal investigations. 

Most people are familiar with AR entertainment applications, including games like Pokemon Go, Minecraft Earth, Jurassic World Alive, and Harry Potter Wizards Unite. However, Huber said there are practical applications to all of these technologies as well. For law enforcement, it could include language translation, facial and voice recognition, and even tactical integrations like shooting simulations.

With price points dropping, virtual reality (VR) is becoming more accessible, in part because in some cases it relies on mobile devices to stream an experience. For example, partnerships between Valve (the company behind the Steam gaming platform) and HTC enable gameplay using HTC’s Vive device, while a partnership between Samsung and Oculus allows the Galaxy S10 to integrate with a VR headset. Lenovo’s Mirage Solo does something similar with Google Daydream.

VR social media enables chatroom participation via avatars in a virtual world. Using VR goggles, users can interact with people in a 3D, completely immersive space. Programs like the NYPD’s Options rely on VR to help youth make better decisions in high-stress situations.

While this could be great for crime scene training and practising courtroom presentation to juries, Huber said, it also has other applications: PornHub already has 2600+ VR videos that break the fourth wall, allowing users to become immersed in the action.

And so, just as the Cornerstone VR program helps child welfare workers to know what it’s like to be a child in an abusive situation, immersive technology could also be used as a tool of abuse. People who create child exploitation content will start to use the methods pioneered by the porn industry and others to create VR child exploitation content.

Huber said you can use mobile device forensic skills to collect data from AR and VR devices, though third-party cloud data collection may also come into play with technology like Facebook’s Oculus Quest and Oculus Rift.

However, one of his most important observations involved video production head mounts and the impact on an investigator’s mental health when reviewing child exploitation evidence from the 3D point of view of either the victim or the offender. On the other hand, 360-degree cameras like the Ricoh Theta can capture entire rooms, thereby potentially finding more evidence that could lead to improved chances of victim rescue and offender identification.

Learn more about the NW3C’s virtual reality training courses here.

Using Technology to Protect Investigators

Griffeye’s Eric Oldenburg spoke to a mixed room of law enforcement investigators and examiners, supervisors, and mental health professionals. Oldenburg’s talk was based on his 15 years of ICAC task force experience — a time before anything was in place to acknowledge or help investigators deal with the stress and trauma of looking at CSAM.

Drawing a comparison to the kinds of physical injuries that police work can incur, Oldenburg described how the task force lost one of its best examiners to burnout before Supporting Heroes in Mental Health Foundational Training (SHIFT) Wellness became available to teach mitigation techniques.

Key takeaways from Oldenburg’s session include:

  • Burnout in this field is unique because it isn’t based on overwork or boredom. Rather, investigators just can’t stand it anymore. However, untreated vicarious trauma means the images stick with investigators, and it’s not enough for affected officers to transfer or leave. Supervisors must be trained to recognize burnout signs and symptoms.
  • Supervisors may not always understand what workers are going through, but they need to have empathy — and to actively educate themselves. Lightly sanitized pictures can help them see that victims aren’t “just 17-year-old girls in pigtails.”
  • Empathy extends to awareness of other factors like toxic work environments, stigma over a PTSD “label,” and stringent workers’ compensation rules, which can mean investigators never get the help they need.
  • Mandatory six-month counseling visits can deflect the burden of getting help to the agency rather than the individual — a more proactive measure than employee assistance programs (EAPs). To get commanders’ buy-in, consider comparing the cost of therapy to the costs of workers’ compensation, turnover, and training new people.
  • Other measures could include “little things” like a certified therapy dog, or even things like comfortable chairs for the digital forensics lab.
  • Terminology changes are important, too. Although it might seem small to talk about “child sex abuse material” or “child exploitation material” rather than “child pornography” or “kiddie porn,” language that normalizes horrific crimes leads to laws and policy that minimize the crimes to “just pictures” or “barely-legal” websites rather than crime scene photos and true victims.

Oldenburg also spoke about the need for supervisors to sign off on purchasing the right technology. Calling it a revelation when he first discovered video analysis software whose sound was turned off by default — to spare investigators the trauma of hearing what was happening — Oldenburg then covered features like deduplication and machine learning, which can be trained to look at many more images than a person can, without getting tired.

Other technology, like Project VIC, encapsulates the findings of many investigators in a hash cloud database so that computers can match known images without investigators ever having to be exposed to them. Submitting new images to the database is a way, Oldenburg said, to protect other investigators from vicarious trauma, as well as to provide intelligence on offenders.

He estimated that three years is the rough average for an investigator to “become unstoppable,” but it’s also the point at which many people move on because they can no longer handle CSAM. Losing investigators at that point, said Oldenburg, means losing their technical aptitude for learning technology and finding victims.

How supervisors can help: focus (and spend money) on technological solutions that empower and protect employees, best equipping them to do their jobs safely and efficiently, and to leave the work some day unharmed.

Tool Talks

Wil Hernandez, a technical engineer at MSAB, talked about how “drowning in data” from ever-changing technology presented a dilemma for investigators on multiple levels. Hernandez talked about some of those challenges, along with opportunities — specifically, multiple sources of evidence that can be correlated, as well as used to spot patterns of life: specific locations, event sequences, travel, people and places of interest, and so forth.

MSAB’s XRY and XAMN tools make these capabilities possible with automatic image recognition, the ability to analyze multiple cases and users together, and Project VIC hash database support, among other features. Read more about how they support child exploitation investigations at MSAB’s website here.

Finally, Jeff Shackelford of PassMark Software provided an introduction to PowerShell as an investigative tool. Useful for when in-field forensic tools don’t work and you don’t want to risk losing live memory or encrypting the drive — or when you need actionable intelligence for initial interviews — PowerShell is a user-friendly, documentable method of capturing data related to wifi connections, SSIDs, and passwords; finding out whether USB ports are enabled in the Windows registry (and being able to change this); discovering the BitLocker status (and keys) on all volumes; and even looking at the clipboard history.

PassMark’s OSForensics tool is another way to utilize PowerShell in the field. Contact info@passmark.com for your 30-day trial license.

Learn more about how to put these pieces together for forensic interviewers and prosecutors in Part II of our CACC 2019 recap!


What Changes Do We Need To See In eDiscovery? Part III

by Harold Burt-Gerrans

Duplicative Documents

At the end of Part 2, I put forth an argument that de-duplication should always be done globally to bring the data set down to just unique documents. And now that you’re convinced (or should have been) that Global De-Duplication is the only way to go, I’m going to completely blow your mind by reversing my opinion and say:

“There should be no de-duplication at all, every document collected should be represented in the document review platform.”

Or at least, that’s what the end users should see…. Not everything is as it appears. Trust in magic.

My observation from most review platforms that I have seen is that they are typically structured around what I call a “Scanning Mentality,” where each document family consists of a lead document and its attachments. Some platforms have progressed to understanding multi-level Grandparent-Parent-Child relationships while others only truly understand a single-level lead-and-attachment structure, but all require an actual document (and potentially extracted/OCR text and page images) for each Document Identifier (“DocID”). A quick scan of the cases we have hosted in Ringtail and Relativity showed that typically the number of unique MD5 hashes is only 70%-85% of the number of documents in the review platform. Because we don’t extract embedded graphics from emails, these are not all logos. Since we almost exclusively de-duplicate globally, typically 15%-30% of the documents in the review sets are children that are exact copies of children attached to other parents. I would be surprised if our statistics are not representative of the industry as a whole.

Some of the problems with having all these additional copies are:

  • difficulty in maintaining consistency across coding, image generation and annotations/redactions for all copies, although this can be partially mitigated by automatic replication processes
  • additional review fees due to increased hosting volumes (sorry boss) and review times
  • potential statistical skewing of text analytics results affecting Content Analytics, Technology Assisted Review, Predictive Coding, Active Learning (and/or any other analytical tool used in automated review processes).

The full correction requires functionality to be developed by the review platform vendors and adjusting their database structures to always reference a single copy of a document. The coding and annotations/redactions done by the review team will have to be tracked by the platform against the underlying document. Architecturally, the change is from a single database table or view to a multi-table structure linking DocIDs and DocID Level coding to a Document and Document Level coding, perhaps by MD5 or SHA1.
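As a hedged sketch of what such a multi-table structure could look like (the table and column names here are illustrative, not any review platform’s actual schema), the SQLite snippet below links per-occurrence DocID records to a single underlying document keyed by hash, so that document-level coding is stored exactly once:

    # multi_table_review.py - a sketch of the proposed DocID/Document split.
    # Table and column names are hypothetical, not any review platform's schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE document_info (          -- one row per unique document
        md5            TEXT PRIMARY KEY,
        extracted_text TEXT,
        privilege      TEXT               -- document-level coding lives here
    );
    CREATE TABLE docid_info (             -- one row per occurrence in a family
        doc_id    TEXT PRIMARY KEY,
        parent_id TEXT,
        md5       TEXT REFERENCES document_info(md5),
        custodian TEXT                    -- occurrence-level metadata lives here
    );
    """)

    # Doc0002 and Doc0004 are the same child attached to two different parents.
    conn.executemany("INSERT INTO document_info VALUES (?, ?, ?)",
                     [("hashA", "child text", None)])
    conn.executemany("INSERT INTO docid_info VALUES (?, ?, ?, ?)",
                     [("Doc0002", "Doc0001", "hashA", "Smith"),
                      ("Doc0004", "Doc0003", "hashA", "Jones")])

    # Coding the underlying document once applies to every occurrence of it.
    conn.execute("UPDATE document_info SET privilege = 'Privileged' WHERE md5 = 'hashA'")
    for row in conn.execute("""
        SELECT d.doc_id, d.parent_id, i.privilege
        FROM docid_info d JOIN document_info i ON d.md5 = i.md5"""):
        print(row)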

Today, when the same child is attached to two different parents, we track all the metadata (coding being “user assigned” metadata) for each of two copies of the child (one for each family). For efficiency, the review platforms should track, for each child, metadata that is specific to that occurrence of the child, and they should track, for a single copy representing both children, metadata that is specific to the document. For flexibility, the review team should have the choice of which level (DocID or Document) each coding field is to be associated with. The examples below illustrate the database tables for three document families under the current tracking of today and the multi-table tracking I propose:

Sample Current Data Structure:

Sample Multi-Table View:

With the above proposed data structures, all review coding, imaging, Text Extraction/OCR, Machine Language Translations and annotations/redactions (and any other document level action) can all be applied against the records in the Document Information table. There will never be inconsistencies between Doc0002, Doc0004 and Doc0005. Today’s platforms often implement work-arounds to solve parts of these issues by automating the copying of coding between duplicates. Unfortunately, this does not usually work for annotations (where there is no control to ensure that the images for each duplicative document are identical – variations in image file type and resolution make it difficult to programmatically replicate annotations) and usually not for rolling ingestions (where new documents are added and some/all are duplicates of existing, coded documents).

Another advantage of this architecture is that it can be applied to all documents, not just child documents, and it can quickly identify stand-alone documents which may be coded inconsistently with duplicates within families. Suppose in the above example, Doc0001 and Doc0003 were both coded privileged. In that scenario, it would be quick to find that Code2 documents are always linked to privileged parents, so the stand-alone version (Doc0005) should be investigated to determine if it should be produced. Since the use of this document content has always been in conjunction with privileged families, perhaps Doc0005 could be considered as “Privileged Gained” (Remember from Part 2 – “A document can, on its own merit, be considered privileged or not, and that due to various family relations, privilege can be lost or gained“). A better approach may be to not produce any stand-alone documents if they are duplicates of family member documents – Those family members are either all not produced due to being withheld as part of privileged families, or at least one copy is already being produced as a member of a nonprivileged family.

Finally, implementation of an architecture such as described will still allow for document reviews to be run exactly as they are today, with the current de-duplication processes and coding processes, by associating all the coding fields to the DocID level. This will gain consistency for redactions and annotations while not disrupting current workflows. Over time, reviews can migrate to Document Level coding where suddenly documents will appear to have been coded because they are duplicates of others. This should be an easy transition as it is similar to what happens today when coding replication processes are used.

Start The Revolution

Document review has progressed from reading boxes of paper to online review using advanced analytics. It’s time that the review platforms stop mimicking the mindset of “1 document = 1 or more physical pages” and move towards “1 document = 1 unique set of information”, where that piece of information may exist in multiple locations. Even though the approach I’ve suggested above is limited to matching exact duplicates, the reality is that the same approach should be expanded even further to encompass textual duplicates, such that Doc0002 above could be an MS-Word document and Doc0004 could be the PDF version of Doc0002. Unfortunately, near-duplicate technologies at this time are not structured to safely enable that level of de-duplication, since typically the only 100% near-duplicates identified tend to be in relation to the pivot document of each near-duplicate group/set. Over time, I’m sure that technology will advance to identify the 100% near-duplicates that exist between documents that are not 100% near-duplicates of the pivots (i.e. two documents both show as 90% near-duplicates of the pivot, but they may also be 100% near-duplicates of each other, and that relationship typically remains unidentified today).

Hopefully, I’m not a lone voice in the wind and others will rally behind my suggestions to shift the emphasis to reviewing “information” instead of “documents”. Everybody yell loudly like me and maybe it will happen.

Hopefully you’ve enjoyed (and not been confused by) Part 3. If you’re not overly technically savvy, it might seem confusing, but I hope most techies think “This seems like database normalization, should have started this in Summation and Concordance in the 90’s”. If you post questions, I’ll try to respond in a timely manner unless they’re addressed in Part 4. Part 4 will build further on the restructuring I’ve presented here and discuss things like restructured coding controls and recursive de-duplication. eDiscovery Utopia, here we come….

About The Author

Harold Burt-Gerrans is Director, eDiscovery and Computer Forensics for Epiq Canada. He has over 15 years’ experience in the industry, mostly with H&A eDiscovery (acquired by Epiq, April 1, 2019).

The Mueller Report Part 3 – Human-Generated Data At The Heart Of Investigations

by Stephen Stewart, CTO, Nuix

Preface: This article is all about the data discussed in Part 1 of this blog series. No political statements are being made.

The Mueller Report is a great window into the relative value of data, both for adversaries and for investigators. In Part 1: The Mueller Report – An Amazing Lens Into a Modern Federal Investigation I covered all of the different types of data collected and analyzed for the report.

  • 2800 subpoenas. With 87 references to Facebook and the detailed documentation about the activity of certain profiles, you can assume that the Office was sifting through Facebook, Twitter, and Instagram data.
  • 500 search and seizure warrants. This is bound to generate at least a couple hundred hard drives and mobile devices.
  • 230 2703(d) orders and 50 “pen registers”. This is interesting because it is laser-focused on who is talking to whom and the frequency of their communications.
  • 500 witnesses. That is a whole lot of testimony that needs to be checked against all the digital evidence.

In Part 2: What It Feels Like To Be Targeted by a Nation State, I covered the types of exfiltrated data:

  • “In total the GRU stole hundreds of thousands of documents from the compromised email accounts and networks.”
  • “Compressed and exfiltrated over 70 gigabytes of data from this file server.”

The Data That Matters

In both instances, the most interesting data is that created by humans. At the end of the day, if you are trying to prove a point you ultimately are trying to answer the same investigative questions: who, what, where, why, when, and how. All of these questions are about people’s behaviors. 

Sure, there’s a ton of interesting stuff found in machine data, but ultimately we live in a world filled with people. People who are doing things, saying things, and in this case communicating things electronically.

The hackers we’re talking about were looking for things that might have been said that could be used for leverage. In the case of the investigation, the Office was looking to corroborate that an event had taken place or that two or more people were communicating. 

As I was reading the Report, I found it interesting how frequently the footnotes referenced “Emails” and “Texts” as the source of evidence. I was curious exactly how many times. So, using my favorite Swiss Army knife for data, I whipped up a quick script and ran it in our software:

NOTE: For you coders out there, I’m sure it can be written more efficiently, but it got the job done.
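The author’s script ran inside Nuix and isn’t reproduced here; as a rough stand-in that illustrates the same idea, the Python sketch below counts how often terms such as “Email” and “Text” appear in footnote lines of a plain-text extraction of the report. The file name and the assumption that footnotes begin with a number are mine, not the author’s.

    # count_footnote_sources.py - a generic stand-in, not the author's Nuix script.
    # Assumes a plain-text extraction of the report where footnotes start with a number.
    import re
    from collections import Counter

    TERMS = ("Email", "Text")
    footnote_line = re.compile(r"^\s*\d+\s")   # assumed footnote pattern

    counts = Counter()
    with open("mueller_report.txt", encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            if footnote_line.match(line):
                for term in TERMS:
                    if term.lower() in line.lower():
                        counts[term] += 1

    for term, n in counts.items():
        print(f"{term}: footnoted {n} times")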

Taking It To The 5 Ws

In the results of my quick script, it turns out “Email” is footnoted 350 times and “Text” is footnoted 113 times. Even with the various footnotes, the Report calls out the threat of new types of encrypted communication, increasing the difficulties of conducting thorough investigations:

“Further, the Office learned that some of the individuals we interviewed or whose conduct we investigated—including some associated with the Trump Campaign—deleted relevant communications or communicated during the relevant period using applications that feature encryption or that do not provide for long-term retention of data or communications records. In such cases, the Office was not able to corroborate witness statements through comparison to contemporaneous communications or fully question witnesses about statements that appeared inconsistent with other known facts.”

At the end of the day it all comes back to understanding who, what, where, why, when, and how. Nuix continues to make it faster and easier for investigators, be they corporate, regulatory, or law enforcement to quickly understand who is talking to whom and the overall dynamics at play across social networks.

Crimes Against Children Conference 2019 Recap Part II: Digital Evidence On Multidisciplinary Teams

By Christa Miller, Forensic Focus

Our first article in this two-part series focused on the technology associated with crimes against children: mobile peer-to-peer software; cryptocurrency used to buy and sell child sexual abuse material (CSAM); the virtual worlds where abuse might take place; and how technology can help reduce investigators’ vicarious trauma.

In Part II, we focus on how the data you capture can be leveraged to work together with forensic interviewers and prosecutors to corroborate evidence and build cases. It’s very much a team effort, where the work you do supports multiple aspects of an entire investigation.

Digital Evidence Use In Forensic Interviews

One of the first ways that digital evidence, such as photos and text messages, has value to an investigation is for forensic interviews. 

Dallas Children’s Advocacy Center forensic interviewers (FIs) Chelsea Zortman and Jessica Parada joined Michael Hernandez, a detective with the Cedar Hill (Texas) Police Department, for a lecture about how to address social media, chat rooms, apps and other digital communication technology during interviews. Separately, Stacey Kreitz, a forensic interview specialist with HSI, also spoke about facilitating forensic interviews of cybercrimes.

Parada said the value of forensic interviewing is to build trust with victims as a means toward building the right evidence for a case. At the same time, though, forensic interviewers may not have a strong command of how digital evidence works, yet need the information to conduct a legally sound interview. A multidisciplinary team (MDT) approach ensures that everyone has what they need to move forward with a case.

Rescuing a survivor is often only the start. Hernandez said FIs have to know what sites the victims are on — whether these are the most popular ones, or less well-known sites like the ASMR community, Care2, GirlsAskGuys, DeviantArt, and others. Classified ads and the dark web can also come into play.

Parada added that children like to know things adults don’t know, so FIs need to be able to tell them what they’ve learned and offer them a safe place to talk about their experiences. However, she added, this requires an FI to have confidence in their knowledge.

How digital forensic examiners can help:

Build relationships with forensic interviewers. Parada acknowledged that these conversations are likely to be difficult; even more so under the pressure of trying to ensure a child’s safety.

Know what protocols the FIs are working under and how their strategy drives what’s needed for evidence. For example, a research-based method like the Prepare and Predict protocol, developed by Homeland Security Investigations (HSI), makes decisions defensible regarding how and why evidence was introduced into the interview.

Know what the team strategy for the interview is going to be, and how it might change. Evidence needs to be prepared in a legally sound way prior to the forensic interview, so the team needs to discuss their strategy before the day of the interview. To facilitate the conversation, MDT and advocacy center should already have a protocol established regarding whether or not images will be sanitized and/or how they will be sanitized. However, federal and local recommendations make it best practice for the images not to be sanitized. This accomplishes two things:

  • The interviewer doesn’t risk establishing an environment where the child feels ashamed about their body.
  • The unsanitized images allow a child to remain in an honest environment with the interviewer and be presented with accurate evidence.

Protect both the team and the evidence by maintaining chain of custody of unsanitized images. Images must be printed by the detective in possession of the evidence, and only for interview purposes. FIs can’t receive the material through email or on a flash drive, and they have to give it back once they’re done.

Provide data prior to the interview. The FI’s role isn’t to interrogate a child or put them in a position to incriminate themselves, but they do need a truthful interview. To that end, an FI can discuss the evidence (data, images, chat logs, etc.) with the child, but only if they have the knowledge that it exists and the ability to review it. Therefore, an FI needs to be familiar with language relating to an app or device used in the allegations/disclosure.

Make sure FIs know what you can and can’t retrieve. With apps like Snapchat, FIs need to know that even if there is limited or no content, the metadata and logs can give them critical timeline and contact information. Zortman pointed out that this is imperative for two reasons:

  1. The team needs the information to determine appropriate interview scheduling. If a child is in immediate danger, an interview can take place right away. Otherwise, communication allows investigators to send any needed preservation letters and warrants for evidence that can be used in the interview.
  2. Understanding general data retrieval information also allows the FI to clarify with a child throughout the interview about what may or may not be found on their device(s).

Give interviewers the right terminology. A good example of this, said Zortman, is the difference between “Finsta” and “Rinsta”: “fake” vs. “real” Instagram accounts, each with different purposes. One is public-facing, one private, used among a small group of friends.

Never assume the interviewer already knows what they need to know. Instead, help them with research, keywords, and other information they can use about how the app or device is used. Otherwise, the interviewer won’t know if the interview subject tells them something erroneous. In other words, said Parada, it’s best practice for FIs to stick to their existing protocols. Since technology is evolving so quickly, it is important for each team to do its research and update their protocols to reflect that. 

Help them understand results from your forensic software. For example, chat timestamps could look wrong because the server was in a different time zone. That can make it easier for victims to deny what happened, so be sure that the interviewer is prepared to address this. Other details include who’s who in different chat bubbles, any lingo or acronyms, and so on.
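On the time zone point above, a quick conversion before the interview can put server timestamps into the victim’s local time; a minimal sketch (the example timestamp and time zone are hypothetical):

    # timestamp_context.py - converting a UTC chat timestamp to local time.
    # The example timestamp and time zone are hypothetical.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    utc_stamp = datetime(2019, 7, 3, 2, 15, tzinfo=timezone.utc)   # as stored by the server
    local = utc_stamp.astimezone(ZoneInfo("America/Chicago"))      # victim's local time

    print("Server (UTC):", utc_stamp.isoformat())
    print("Local time:  ", local.isoformat())   # 2019-07-02T21:15, the previous evening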

Take direction from interviews, too. Especially in emergencies, your preview may be all an interviewer has to work with. However, the initial interview itself can reveal more, such as communication apps you didn’t know to look for, which you need to conduct a deeper examination to find. Or the interview may reveal additional victims or suspects whose devices will need to be analyzed.

Another perspective on what forensic interviewers need from digital evidence came from Jessica Tigert, a forensic interviewer with the 23rd Judicial District of the State of Tennessee, who joined Cellebrite’s Brendan Morgan, VP of Training Ops for the Americas, and Keith Leavitt, Manager of Technology, for a talk about preparing to testify to digital evidence in child exploitation investigations.

Drawing on her experience interviewing 2000 children in six years, Tigert talked about making it a goal to reduce the likelihood of a child having to testify. To that end, she said, the more evidence is available to bring to the interview, the better. Part of her method is to work closely with investigators and digital forensic examiners, asking questions about the evidence to get a better sense of what it is, where it came from, how it got there, and why it’s relevant.

For example, a preview exam can help a forensic interviewer understand what type of contact a suspect had with a child victim. In turn, this information helps them work out how to structure their interview.

When it comes to preparing for court, digital forensic examiners have additional responsibilities above and beyond those listed above:

  • Understand your role and update your curriculum vitae (CV) to reflect it. Leavitt said generally, CVs shouldn’t be any longer than two pages.
  • Understand your tools and how they keep evidence forensically sound; be able to demonstrate your process of recreating and validating findings.
  • Validate your tools, using known datasets to ensure that dates and timestamps are correctly reported, and logging the results. Morgan recommended doing this every six months. The extra work, he said, can save time on the other end.
  • Prepare copies of all reports, notes, emails, and supplemental documents related to the examination. This way, you can provide assistance with exhibit creation, address possible defense expert theories, and even practice for cross-examination.
  • Get your work peer reviewed by colleagues. This ensures accuracy and that protocols were followed, making your work harder to challenge. Leavitt and Morgan also recommended finding local mentors, for instance through a regional US Secret Service Electronic Crimes Task Force (ECTF).
  • When testifying, keep things as simple as possible. Defense attorneys’ job is to try to cast doubt, so while you might be tempted to get very technical to show your understanding, this could backfire and bore the jury. You may even need to draw diagrams to explain something.

Tigert, Leavitt, and Morgan all agreed that working with attorneys means developing positive relationships with prosecutors. Understanding their perspective, including why they ask you to do some things and gathering information on their needs — their timeline, strategy, trial prep schedule, objectives, and so on — can help you to offer creative solutions to build the case.

While personality differences can make this challenging, Morgan acknowledged, it can be a great way to add value to legal teams. To that end, Leavitt advised scheduling proactive “lunch and learn” type meetings that can help investigators, interviewers, child or victim advocates, and attorneys figure out the best ways to work together far in advance of trial.

Leavitt advised attendees to “start prepping for trial as soon as you start your case.” A heavy caseload doesn’t mean you can’t dot your i’s and cross your t’s; in fact, when you get to discovery, being able to show due diligence means the case is unlikely to go to trial.

Cellebrite Academy offers a form of peer review because of its implementation of proctored exams and other strategies. 

Legal Aspects Of Digital Evidence

Joseph Remy, an Assistant Prosecutor with the Burlington County Prosecutor’s Office, and Matthew Osteen, Cyber and Economic Attorney & General Counsel for the National White Collar Crime Center (NW3C), provided an overview of digital forensics for prosecutors.

This talk stressed the need for communication between forensic examiners, who are skilled at handling digital evidence, and prosecutors, who may or may not understand the many technical details involved in digital evidence. Even if the prosecutors do understand these details, judges and juries often don’t, so the better equipped a prosecutor can be to make a case in terms civilians understand, the easier trials will go.

Many of the takeaways from this talk involved “101” level details that many forensic examiners may take for granted in the course of their everyday work, but could end up being the first time an attorney, judge, or juror has had to consider them.

For example, when deciding whether to go to trial, prosecutors need to understand how you maintained evidentiary integrity. Documenting not only chain of custody and which tools you used on a forensic copy, but also whether you found the device damaged or destroyed — and any changes you yourself made — can be crucial particulars. Likewise the methods you used to isolate a device, preview content in the field, or authenticate evidence.

It’s also wise to document any tool testing protocols you use, preferably involving multiple tools and multiple extraction types; prosecutors will want to know how you ensure tools acquire and parse what they say they do. (Hint: part of qualifying you as an expert witness in the United States may involve asking about processes and protocols like this.)

Locked devices may require you to ask for phone number(s), passcode(s), and/or biometrics. Osteen cautioned that, while biometric unlocks can be compelled, US case law goes “back and forth” on compelling passcodes. It’s best to check with a prosecutor in your jurisdiction before attempting this, and to be aware of technology methods such as brute-force and dictionary attacks, GrayKey, and Cellebrite’s Advanced Investigative Services (CAIS) — though changes to operating systems and device security, Osteen added, could inhibit those tools.

Prosecutors are integral to securing search warrants, yet enumerating the “places to be searched and the things to be seized” according to US law can be complex when it comes to digital devices. “Places to be searched” could include synchronized devices and accounts like computers, cloud accounts, and even vehicles, while “things to be seized” could encompass a range of data that may or may not be evidentiary.

To simplify this for the judges signing the warrants, it can help prosecutors (and therefore, forensic examiners) to analogize digital evidence to drugs and guns. For example, a digital device, if properly defined in a warrant, can be likened to a film case used to hide drugs. The point is to show that digital evidence can be found in a location’s smallest crevice, requiring a detailed search.

That detailed search, however, involves devices’ bits and bytes. Unlike drugs, digital storage doesn’t go stale. It’s more volatile, of course, but artifact fragments usually persist, and can potentially include evidence of more than one crime. That’s why the “places to be searched” must be spelled out, the search itself properly limited to a specific crime or crimes, and a new warrant obtained for evidence of each new crime. These practices should ensure the defensibility of the warrant(s).

Everyday analogies are also useful to describe abstract processes when you’re preparing to testify. For example, Remy said, the difference between a physical and a logical extraction can be explained by comparing a search of an entire house, with the rooms inside. Terms that are common to you, like “SQLite databases” and “plists,” also need to be explained to juries and attorneys, even if forensic tools present them in an understandable, “pretty” format.

These issues become even more pressing when it comes time to describe evidence acquired from the cloud. Remy said prosecutors need to know the differences between file storage, host websites, streaming, on-demand software, and data analysis. While the US Clarifying Lawful Overseas Use of Data (CLOUD) Act offers some guidance, in general, where data is stored can present a challenge, making the relationship between investigators and prosecutors more important to cultivate.

Cloud evidence was also the topic of a lecture given by Justin Fitzsimmons, Director of High Tech Training Services at SEARCH Group, Inc. This session, oriented to law enforcement and prosecutors, focused on legal process to use when approaching internet service providers (ISPs) like Google, Amazon, Facebook, Snapchat, and others for data related to user accounts, locations, and specific content.

Fitzsimmons stressed familiarity with different ISPs’ — and third parties’ — terms of service and privacy policies when preparing legal process. Companies like Facebook and Amazon have made important acquisitions in recent years that put data under their umbrellas, so it’s important to consider other potential data sources and their correlations.

When requesting data, Fitzsimmons reiterated the need to show a nexus: why you believe that data will prove your case or identify an offender, including user attributes (and how you define them) that will put a subject behind a device.

During his presentation, Fitzsimmons shared a video showing how Google processes search warrants. If a provider attempts to limit a warrant, he said, you should consider judicial remedies, or add language to the warrant to bolster the nexus between the data you seek and the evidence of the crime.

Fitzsimmons described the data that Apple, Snapchat, and other service providers do retain in the name of user convenience, which could be more than you might realize. Changes to their policies do take place, so it’s important for investigators to stay on top of terms of service and other policies: don’t discount obtaining data just because it hasn’t historically been retained.

SEARCH is best known for its ISP List, but it also offers a wide range of free investigative resources including training, podcasts, technical and legal guides, and even limited technical assistance. Visit search.org for additional information.

Technology is complicated and growing ever more so, and the high stakes of child exploitation investigations can put pressure on everyone. What the common threads across all these talks show is that great communication — both documentation and relationship-building — can relieve a lot of the burden in correctly interpreting digital evidence.

How do you connect and collaborate with other investigative team members in your jurisdiction? Head to the Forensic Focus forums to discuss; join our LinkedIn group; or get in touch with us on Twitter or Facebook!

Uses Of Unmanned Aerial Vehicles (UAVs) In Crime Scene Investigations

by Chirath De Alwis and Chamalka De Silva

Recent advancements in technology have helped many people achieve a better quality of life. Unmanned Aerial Vehicles (UAVs), also known as ‘drones’, are one such advancement, helping society simplify day-to-day activities. Drones are now widely used in many industries, such as agriculture, photography, and transportation. But this same technology can also be used to conduct unethical activities.

Capturing privacy-related content, illegally intercepting other drones, and military use of drones in bombings are some common examples of unethical activities carried out using drones. Investigating each scenario requires a different approach, as the availability of evidence can vary. Therefore, digital forensics investigators need to be able to examine drones in crime scene investigations. This article focuses on the technology behind drones and how drones can be useful in crime scene investigations, helping investigators simplify their digital forensics investigations when looking at drones.

Introduction

An unmanned aerial vehicle (UAV) (or un-crewed aerial vehicle, commonly known as a drone) is an aircraft without a human pilot on board and is a type of unmanned vehicle. UAVs are a component of an unmanned aircraft system (UAS), which includes a UAV, a ground-based controller, and a system of communications between the two. The flight of UAVs may operate with various degrees of autonomy: either under remote control by a human operator or autonomously by onboard computers.

Classification of UAVs

UAVs typically fall into one of six functional categories[1]:

  • Target and decoy
    • providing ground and aerial gunnery of a target that simulates an enemy aircraft or missile
  • Reconnaissance
    • providing battlefield intelligence
  • Combat
    • providing attack capability for high-risk missions 
  • Logistics
    • delivering cargo
  • Research and development
    • improving UAV technologies
  • Civil and commercial UAVs
    • agriculture, aerial photography, data collection

Technologies Used in UAVs

This section describes the latest technologies used in UAVs which will have a forensic value in crime scene investigations. UAVs have many other technologies, but these are the ones with the most forensic value.   

How Drones Work

Drones are made up of lightweight and durable materials, such as fiber and plastic. In order to operate a drone, users require an aircraft (also known as the drone), a controller unit, signal extenders, a battery, and a mobile device. The type of sensors and camera equipment can vary based on the type of drone and its purpose. 

The drone controller unit connects to a mobile device running an application that helps the user view the flight path and navigate. Navigation is controlled using the controller. The signal extender extends the coverage of the drone signal, allowing the drone to fly over longer distances. Once the drone is ready for take-off, the user powers on the drone and connects to the controller using the mobile device. Flight paths, camera view, battery status, and weather information are all displayed on the mobile device attached to the controller. Modern drones have their own applications, supported on both Android and iOS platforms. Flight records are stored inside the application, and users can upload the flight logs to the drone manufacturer’s cloud if required [2].

Technology behind UAVs

Radar Positioning & Return Home

The latest drones have dual Global Navigation Satellite Systems (GNSS), such as GPS and GLONASS [3], and can fly with or without satellite positioning. DJI drones call these modes ‘P-Mode’ (which uses both GPS and GLONASS) and ‘ATTI mode’ (which does not use GPS or GLONASS, leaving the user to control the drone manually).

When the drone is first switched on, it searches and detects GNSS satellites, and saves the GPS coordinates as “Home Point”. High-end GNSS systems use Satellite Constellation technology [3]. Basically, a satellite constellation is a group of satellites working together to give coordinated coverage, and synchronized so that they overlap well in terms of coverage [3]. 

Most of the latest drones have three types of ‘Return to Home’ drone technology, as follows [3]:

  • Pilot-initiated return to home by pressing button on Remote Controller or in an app
  • A low battery level, where the UAV will fly automatically back to the home point
  • Loss of contact between the UAV and Remote Controller, with the UAV flying back automatically to its home point

Obstacle Detection and Collision Avoidance Technology

High-tech drones use four cameras and several sensors (the exact number depends on the type of drone) to detect obstacles in advance and avoid collisions. These sensors continuously scan the surroundings and alert the controller so that collisions can be avoided. Some of the latest drones, such as the Mavic Air, also use this technology during the automatic ‘return to home’ function. These systems fuse one or more of the following sensors to sense and avoid potential collisions [3]:

  • vision sensors
  • ultrasonic
  • infrared
  • lidar
  • time of flight (ToF)
  • monocular vision

No-Fly Zone Drone Technology

Some high-security areas (e.g. airport runways) are restricted for drone flight. These restrictions are put in place by governments and aviation regulators such as the Federal Aviation Administration (FAA), prompting DJI and other manufacturers to introduce a “No-Fly Zone” feature [3]. While flying, the drone uses GPS to detect these restricted areas and stops when it attempts to enter one. If a user tries to launch a drone inside a no-fly zone, the motors will not start, so the drone cannot be flown within the restricted area.

DJI No Fly Zone in USA [4]

GPS ‘Ready To Fly’ Mode Drone Technology

Once the compass is calibrated, the drone searches for GPS satellites. When more than six are found, the drone is allowed to fly in “Ready to Fly” mode [3].

FPV Live Video Transmission Drone Technology

FPV means “First Person View”. A video camera is mounted on the unmanned aerial vehicle and this camera broadcasts the live video to the pilot on the ground [3].  

FPV Over 4G / LTE Networks

In 2016 a new live video option, which transmits over the 4G / LTE network and provides an unlimited range and low latency video, was announced [3]. This is the Sky Drone FPV 2 and comprises a camera module, a data module and a 4G / LTE modem [3].

Range Extender UAV Technology

This is used to extend the range of communication between the smartphone or tablet and the drone in an open, unobstructed area [3]. The transmission distance can reach up to 700 meters. Each range extender has a unique MAC address and network name (SSID) [3].

Drone Range Extender [5]

Operating Systems in Drone Technology

Most unmanned aircraft use Linux, and a few use Microsoft Windows. The Linux Foundation launched a project in 2014 called the Dronecode Project: an open source, collaborative project which brings together existing and future open source unmanned aerial vehicle projects under a nonprofit structure governed by The Linux Foundation [3]. 

Data available in UAVs

In commercial (non-military) drones, the primary available evidence is GPS locations, media files and flight logs. The locations of these files and how to extract them are described in another article [6].

It is important to understand that the flight logs recorded inside the drone are not accessible to the user by default. To access them, the user needs to open the DJI Assistant application on a computer and click “flight data”. This mounts the drone's internal memory, which contains the flight logs in .DAT format, with file names such as “FLY807.DAT”. These logs can also be viewed using the online tool Airdata.com [7].
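
Once the drone's storage has been mounted, investigators will usually want to preserve these .DAT files before examining them. The following Python sketch is only illustrative: the mount path and evidence folder are hypothetical and will differ between systems; it simply copies any FLY*.DAT logs to an evidence folder and records their SHA-256 hashes.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical paths: adjust to wherever DJI Assistant mounts the drone's
# storage and wherever the case evidence is kept.
DRONE_MOUNT = Path("/Volumes/DJI_FLIGHT_DATA")
EVIDENCE_DIR = Path("./case_evidence/flight_logs")

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def preserve_flight_logs() -> None:
    """Copy FLY*.DAT logs off the mounted volume and hash both copies."""
    EVIDENCE_DIR.mkdir(parents=True, exist_ok=True)
    for log in sorted(DRONE_MOUNT.glob("FLY*.DAT")):
        original_hash = sha256_of(log)
        copy_path = EVIDENCE_DIR / log.name
        shutil.copy2(log, copy_path)          # copy2 preserves timestamps
        copied_hash = sha256_of(copy_path)
        status = "OK" if original_hash == copied_hash else "MISMATCH"
        print(f"{log.name}: {original_hash} ({status})")

if __name__ == "__main__":
    preserve_flight_logs()
```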

How Can UAVs Support Digital Forensic Investigations?

There are many cases where drones are used to commit crimes or become part of a crime scene. Investigators should understand the scenario and analyse the evidence accordingly. This section describes the most common scenarios, what information is available in each, and, most importantly, how to start the investigation.

Illegal/Unauthorized Data Capturing

Scenario:

The primary use of drones is capturing video. Some users choose to capture illegal or unauthorized content; for example, people can use drones to film what their neighbors are doing. People also fly drones into unauthorized territory to capture footage; attempts to record activities at Area 51 are a well-known example [8]. Sometimes drones are used purely for information-gathering, in which case the operator may watch the feed live rather than record it.

Potential Evidence:

When investigating crimes like these, the primary evidence is the images or videos captured by the drone; analyzing them shows what the operator recorded. If the operator did not record the footage but watched it in real time, we can analyze the flight logs and verify whether the drone was flying around the suspected area. Since the flight logs contain flight maps, the operator cannot deny that the drone flew over the suspected area.
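
As a sketch of how such a check might look in practice, the Python example below assumes the flight log has already been exported to CSV (for instance via Airdata); the column names ('time', 'latitude', 'longitude') and the example coordinates are assumptions, not a fixed format. It flags every record in which the drone was within a chosen radius of the suspected location.

```python
import csv
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def records_near(csv_path, target_lat, target_lon, radius_m=100.0):
    """Yield (timestamp, distance) for every log row inside the radius.
    Column names are assumptions about the export and should be adjusted."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            d = haversine_m(float(row["latitude"]), float(row["longitude"]),
                            target_lat, target_lon)
            if d <= radius_m:
                yield row["time"], d

if __name__ == "__main__":
    # Example target coordinates; replace with the suspected location.
    for ts, dist in records_near("flight_log_export.csv", 51.5007, -0.1246):
        print(f"{ts}: drone was {dist:.1f} m from the suspected location")
```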

Session Hijacking With Drones

Scenario:

The remote control unit is connected to the drone over a wireless link; the technology used varies with the model. Earlier drones such as the Phantom 3 used their own Wi-Fi connections to connect to their controllers. Some of these communication media and technologies have vulnerabilities that allow attackers to interfere with the signals, making it possible to hijack the session and take full control of the drone. Iran's recent downing of a US military drone is a well-known example [9], and researchers have demonstrated a camera system that can detect and take down drones [10]. These techniques are most commonly used by the military.

Hacking Drone [11]

Potential Evidence:

Once a drone has been taken down or intercepted, the only evidence left to investigate is the flight log recorded on the mobile device attached to the controller. Analyzing it should help investigators understand where the interception happened: the notifications in the flight log should reveal the interference and the point of disconnection. If the drone was fully taken over or taken down, the “Find My Drone” option will not detect it, because the controller can no longer identify the drone from its GPS signals.
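
One simple way to triage a log for such events is to keyword-search the notification messages. The sketch below assumes the notifications have been exported to a CSV with 'time' and 'message' columns; both the column names and the keyword list are illustrative assumptions rather than DJI's actual wording, and any hits should be reviewed in context.

```python
import csv

# Illustrative keywords; the exact wording of app notifications varies by
# manufacturer and app version, so confirm against a known-good log.
SUSPICIOUS_KEYWORDS = (
    "signal lost", "disconnected", "compass error",
    "strong interference", "gps signal weak",
)

def flag_interference(csv_path):
    """Return (timestamp, message) rows whose message matches a keyword.
    Assumes a CSV export with 'time' and 'message' columns."""
    hits = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            text = row["message"].lower()
            if any(k in text for k in SUSPICIOUS_KEYWORDS):
                hits.append((row["time"], row["message"]))
    return hits

if __name__ == "__main__":
    for ts, msg in flag_interference("notifications_export.csv"):
        print(f"{ts}: {msg}")
```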

Stolen Drones 

Scenario:

Sometimes a drone falls into no man's land after colliding with an obstacle. In these cases, someone might steal the drone.

Potential Evidence:

This is more straightforward than the previous scenarios. Here, we can try to locate the drone using its GPS signals: the user can use the “Find My Drone” option to navigate to it [12]. Several cases have been reported in which users have recovered their drones with this technique. When locating a drone this way, if it appears to be moving, this could indicate that someone is carrying it away.

Suicide Drones 

Scenario:

Drones can be flown deliberately into aircraft. Even though drones fly at limited speed, the high speed of a commercial aircraft means a collision with a drone can cause significant damage to the plane. In a recent example of deliberate drone attacks, Houthi rebels claimed responsibility for a drone strike on the world's largest oil processing facility in Saudi Arabia [13].

Potential Evidence:

When a crash occurs, the main potential evidence is the crashed drone itself. Even after a crash, it is sometimes still possible to recover the drone's memory, although this depends entirely on the severity of the impact. If the memory chip is recoverable, investigators can analyze it and retrieve the flight records, which help identify where the drone's journey started and its flight path up to the collision point.

Drone Crash

Scenario:

A drone crash can happen in many ways; internal technical failure and collision with an obstacle are two common scenarios. The second MoD Airbus Zephyr spy drone crash, on an Australian test flight on 9th October 2019, is a recent example [14].

Potential Evidence:

When a drone crashes, the potential evidence is the drone's memory or the flight log on the controller device. Most modern drones can avoid obstacles, and this behaviour is visible in the notifications: when the drone avoids an obstacle, it sends a notification to the controller device, and the controller also receives notifications when a technical issue occurs. This notification information is recorded in the flight logs, and analyzing these messages can show what caused the crash.

Once a drone has crashed, the most important task is to locate it. The crash point varies with wind speed, altitude, ground conditions and various other parameters. Recent research has been conducted into mathematically locating ocean-downed aircraft [15], and the same approach can be adapted to estimate crash points for drones. The information required for such a calculation can be found in the flight log on the controller device.
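
As a very rough illustration of the idea, and not a validated model, the sketch below uses a simple free-fall-plus-drift approximation based on the last logged altitude, ground speed, heading and wind to suggest an initial search offset. Real crash dynamics (drag, terrain, partial thrust) are considerably more complex, so treat the result only as a starting search radius.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def estimate_crash_offset(altitude_m, ground_speed_ms, heading_deg,
                          wind_speed_ms, wind_from_deg):
    """Crude estimate of how far from the last logged position a drone
    might land, ignoring drag and terrain. Returns (north_m, east_m, range_m)."""
    t_fall = math.sqrt(2 * altitude_m / G)          # time to reach the ground
    # Decompose the drone's own motion and the wind (which blows towards
    # wind_from_deg + 180) into north/east components.
    hdg = math.radians(heading_deg)
    wnd = math.radians((wind_from_deg + 180) % 360)
    north = ground_speed_ms * math.cos(hdg) + wind_speed_ms * math.cos(wnd)
    east = ground_speed_ms * math.sin(hdg) + wind_speed_ms * math.sin(wnd)
    drift_north = north * t_fall
    drift_east = east * t_fall
    return drift_north, drift_east, math.hypot(drift_north, drift_east)

if __name__ == "__main__":
    # Hypothetical values taken from the last entries of a flight log.
    n, e, r = estimate_crash_offset(altitude_m=80, ground_speed_ms=12,
                                    heading_deg=45, wind_speed_ms=6,
                                    wind_from_deg=270)
    print(f"Search roughly {r:.0f} m from the last fix ({n:.0f} m N, {e:.0f} m E)")
```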

UAV Anti-Forensics 

When committing crimes, criminals try to hide their digital footprints to evade detection, and they often use anti-forensic techniques to mislead forensic investigators. Understanding these techniques helps investigators avoid reaching false conclusions. This section covers some key anti-forensic techniques.

Altering Timestamps

Timestamps are a vital piece of evidence when conducting a forensic investigation on digital devices: they help identify what happened and when, and they help correlate events. Altering timestamps is therefore attractive to criminals who want to mislead an investigation. Recent research has shown that the timestamps of recorded media files can be manipulated by altering the system time in the Android OS before powering on the UAV; afterwards, all of the files created by the camera carry the modified timestamp [16]. To investigate whether a timestamp has been tampered with, examiners need to use the DJI Vision app or examine the camera log.
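
One practical way to spot such manipulation is to compare the EXIF capture times of the media files against the flight window taken from the flight log. The sketch below is only illustrative: it uses the Pillow library, assumes uppercase .JPG stills in a hypothetical ./dcim folder, and treats any mismatch as an indicator to investigate further rather than proof of tampering.

```python
from datetime import datetime
from pathlib import Path

from PIL import Image  # pip install Pillow

DATETIME_ORIGINAL = 36867  # EXIF tag for capture time (in the Exif sub-IFD)
DATETIME = 306             # EXIF tag for image date/time (in the main IFD)

def exif_capture_time(image_path: Path):
    """Return the EXIF capture time of an image, or None if absent."""
    exif = Image.open(image_path).getexif()
    raw = exif.get_ifd(0x8769).get(DATETIME_ORIGINAL) or exif.get(DATETIME)
    return datetime.strptime(str(raw).strip(), "%Y:%m:%d %H:%M:%S") if raw else None

def flag_out_of_flight_media(media_dir: Path, flight_start: datetime,
                             flight_end: datetime) -> None:
    """Print media whose capture time falls outside the flight window
    taken from the flight log; a mismatch needs further corroboration."""
    for img in sorted(media_dir.glob("*.JPG")):  # assumes uppercase .JPG stills
        taken = exif_capture_time(img)
        if taken is None or not (flight_start <= taken <= flight_end):
            print(f"Check {img.name}: EXIF time {taken} outside flight window")

if __name__ == "__main__":
    # Hypothetical flight window, taken from the controller's flight log.
    flag_out_of_flight_media(Path("./dcim"),
                             datetime(2019, 10, 8, 14, 2),
                             datetime(2019, 10, 8, 14, 27))
```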

Blocking GPS Signals

Since GPS plays a crucial part in investigations, attackers often try to manipulate GPS data. Changing the GPS data in media files does not prevent investigation of GPS records, though, because GPS data is also available in the flight records. The main way to manipulate GPS data is therefore to block the GPS signal itself. Researchers have attempted to disable the GPS module in drones, but the drones were then unable to take off [16]. In follow-up research, they covered the top of the drone with tin foil attached directly over the GPS receiver [16]. With no signal reaching the drone, the camera did not record any timestamps in the media files, and no home point was recorded [16]. Because this blocks the GPS signals, it also means that a user can fly the drone in restricted areas without being stopped.

References

  1. Medium. (2016). UAV Types, Classifications and Purposes. [online] Available at: https://medium.com/@UAVLance/uav-types-classifications-and-purposes-70651867194d [Accessed 6 Oct. 2019].
  2. De Alwis, C. (2019). Crime Scene Investigation of GPS Data in Unmanned Aerial Vehicles (UAVs). [online] Forensic Focus – Articles. Available at: https://articles.forensicfocus.com/2019/10/03/crime-scene-investigation-of-gps-data-in-unmanned-aerial-vehicles-uavs/ [Accessed 8 Oct. 2019].
  3. Corrigan, F. (2019). How Do Drones Work And What Is Drone Technology. [online] DroneZon. Available at: https://www.dronezon.com/learn-about-drones-quadcopters/what-is-drone-technology-or-how-does-drone-technology-work/ [Accessed 8 Oct. 2019].
  4. DJI Official. (2019). DJI – The World Leader in Camera Drones/Quadcopters for Aerial Photography. [online] Available at: https://www.dji.com/flysafe/geo-map [Accessed 4 Oct. 2019].
  5. Amazon.com. (2019). Ultimaxx Copper Parabolic Antenna Signal Range Booster for DJI Phantom 4, P4 pro, P4 Advanced, Phantom 3 Pro, Advanced and 4K Inspire 1 Controller. [online] Available at: https://www.amazon.com/Ultimaxx-Parabolic-Antenna-Advanced-Controller/dp/B0794GSQB7/ref=sr_1_11?keywords=drone+range+extender&qid=1571113640&sr=8-11 [Accessed 9 Oct. 2019].
  6. De Alwis, C. (2019). Crime Scene Investigation of GPS Data in Unmanned Aerial Vehicles (UAVs). [online] Forensic Focus – Articles. Available at: https://articles.forensicfocus.com/2019/10/03/crime-scene-investigation-of-gps-data-in-unmanned-aerial-vehicles-uavs/ [Accessed 8 Oct. 2019].
  7. Airdata.com. (2019). Drone Data Management and Flight Analysis | Airdata UAV. [online] Available at: https://airdata.com/ [Accessed 11 Oct. 2019].
  8. Ronson, J. (2016). This Guy Sent a Drone to Spy on Area 51. [online] Inverse. Available at: https://www.inverse.com/article/12415-this-could-be-the-last-drone-footage-of-area-51-you-ll-ever-see [Accessed 4 Oct. 2019].
  9. Berlinger, J. and Starr, B. (2019). Iran shoots down US drone aircraft. [online] CNN. Available at: https://edition.cnn.com/2019/06/20/middleeast/iran-drone-claim-hnk-intl/index.html [Accessed 6 Oct. 2019].
  10. CNBC. (2017). This camera is built to detect and take down drones. [online] Available at: https://www.cnbc.com/video/2017/10/12/this-camera-is-built-to-detect-and-take-down-drones.html [Accessed 9 Oct. 2019].
  11. Khandelwal, S. (2016). Hacker Hijacks a Police Drone from 2 Km Away with $40 Kit. [online] The Hacker News. Available at: https://thehackernews.com/2016/04/hacking-drone.html [Accessed 9 Oct. 2019].
  12. F, F. (2019). How to use Find My Drone. [online] Forum.dji.com. Available at: https://forum.dji.com/thread-121403-1-1.html [Accessed 9 Oct. 2019].
  13. the Guardian. (2019). Major Saudi Arabia oil facilities hit by Houthi drone strikes. [online] Available at: https://www.theguardian.com/world/2019/sep/14/major-saudi-arabia-oil-facilities-hit-by-drone-strikes [Accessed 11 Oct. 2019].
  14. Corfield, G. (2019). Second MoD Airbus Zephyr spy drone crashes on Aussie test flight. [online] Theregister.co.uk. Available at: https://www.theregister.co.uk/2019/10/09/airbus_zephyr_drone_second_crash_australia/ [Accessed 11 Oct. 2019].
  15. Sites.math.washington.edu. (2015). Lost and Found: Mathematically Locating Ocean Downed Aircraft. [online] Available at: https://sites.math.washington.edu/~morrow/mcm/mcm15/38724paper.pdf [Accessed 8 Oct. 2019].
  16. Maarse, M. and van Ginkel, J. (2016). Digital forensics on a DJI Phantom 2 Vision+ UAV. [online] Os3.nl. Available at: https://www.os3.nl/_media/2015-2016/courses/ccf/ccf_mike_loek.pdf [Accessed 8 Oct. 2019].

About The Authors

Chirath De Alwis is an experienced information security professional with more than five years’ experience in the Information Security domain. He holds a BEng (Hons), a PGDip, and eight professional certifications in cyber security, and is also reading for his MSc specializing in Cyber Security. Currently, Chirath is involved in vulnerability management, threat intelligence, incident handling and digital forensics activities in Sri Lankan cyberspace. You can contact him at chirathdealwis@gmail.com.

Chamalka De Silva is an information security enthusiast currently studying for a BSc (Hons) in Ethical Hacking and Network Security at Coventry University (UK). You can contact him at chamalkamds@gmail.com.

What Changes Do We Need To See In eDiscovery? Part IV


by Harold Burt-Gerrans

In Part 3, I introduced the concept of consolidating duplicates by tracking Metadata at a DocID level and coding and/or document actions at a Document Level. For ease, I’m duplicating part of the example charts here as I will refer back to them to illustrate some of the following discussion.

Sample Current Data Structure:

Sample Multi-Table View:

Family Level Coding

Perhaps you noticed in the above example that Doc0001 is privileged and the attachment Doc0002 is coded as not privileged. Within quality review platforms, this is the proper way to code the documents. At production time, let the family level coding control the production of the specific documents. This type of functionality can be accomplished today.

By coding this way, it becomes obvious which documents drove the coding decisions for Relevance/Responsiveness and/or Privilege. It also prevents the incorrect propagation of coding to other document families when coding replication is enabled (for example, marking an attachment that is not privileged on its own merit as privileged because its parent is privileged can cause a duplicate copy of that attachment elsewhere to become marked as privileged through coding replication).

By using family level coding to control productions, the idea of a non-privileged document gaining privilege does not need to be tracked; it simply follows the disposition of the privileged document within the family. Remember from Part 2:

“A document can, on its own merit, be considered privileged or not, and due to various family relations, privilege can be lost or gained.”

Conversely, the concept of “Privilege Lost” should be tracked. Functionally, “Privilege Lost” = “Not Privileged,” but I suggest that the privilege coding currently used be expanded to include a “Privilege Lost” setting for the benefit of the review teams and Quality Control reviewers. This should be done (today) when coding replication between duplicates is in use, and is even more necessary if a review platform introduces a document storage structure similar to the one I previously proposed (example above).
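
As a rough sketch of what this expanded coding value could look like, and not any review platform's actual schema, the following Python example treats “Privilege Lost” as its own value for reviewers and QC while behaving like “Not Privileged” when deciding what to withhold at production.

```python
from enum import Enum

class Privilege(Enum):
    PRIVILEGED = "Privileged"
    NOT_PRIVILEGED = "Not Privileged"
    PRIVILEGE_LOST = "Privilege Lost"   # distinct for review/QC purposes

def withhold_at_production(privilege: Privilege) -> bool:
    """'Privilege Lost' behaves like 'Not Privileged' when deciding what to
    withhold, but the distinct value is preserved for reviewers and QC."""
    return privilege is Privilege.PRIVILEGED

if __name__ == "__main__":
    for p in Privilege:
        print(f"{p.value:>15}: withheld={withhold_at_production(p)}")
```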

Consider a document, perhaps an email from a client to a lawyer, which would be considered privileged on its own merit. The first reviewer who looks at that email will likely code it as privileged. Subsequently, a second reviewer might find that the same email was attached to a third-party email, which would cause the privilege to be lost. Updating the coding to “Not Privileged” will likely cause work at the QC level, since QC will need to investigate why an obviously privileged-on-its-own-merit document is not coded as such. However, if the QC reviewers find documents coded as “Privilege Lost”, they would immediately know that the coding is due to overriding non-privileged family members or other related documents. To help the QC reviewer, it would also be beneficial to have a coding field containing the DocID(s) of the overriding Document or Family. This is especially important (today) if replication is used, as the overriding family member may not be a member of the family of this copy of the document, as shown below.

Sample Current Data Structure:

Sample Multi-Table View:

I know what you’re thinking… Doc0011’s family would be produced, so Doc0014’s family doesn’t really matter. And you’re right. But that’s a simple example to demonstrate the “Privilege Lost” concept. Now let’s examine another example with a bit more complexity.

Assume that Doc0021 is a privileged email containing a non-privileged attachment, Doc0022. Normally, these two documents would be withheld as a privileged family. But Doc0023 is a non-privileged email at the end of an email thread that includes the body of Doc0021 through a forward or reply. Doc0021 loses privilege as soon as Doc0023 is found and, consequently, should be produced along with Doc0022. The coding should be as follows.

Sample Current Data Structure:

Sample Multi-Table View:

As shown above, privileged emails that are part of a thread ending with an inclusive, non-privileged message should be checked for loss of privilege. Consider that Doc0012 above is part of Doc0011's email thread; as an email in its own right, it may also be part of a completely different thread.
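
To make the mechanics concrete, here is a minimal, hypothetical sketch (using the DocIDs from the example above) of flagging privileged emails as “Privilege Lost” when an inclusive, non-privileged downstream message contains their content, while also recording which document caused the loss, i.e. the kind of overriding-DocID field suggested earlier.

```python
# Illustrative only; a real platform would derive the 'includes' map from
# email threading rather than hard-coding it.
coding = {
    "Doc0021": "Privileged",
    "Doc0022": "Not Privileged",
    "Doc0023": "Not Privileged",
}

# Map each message to the earlier messages whose bodies it includes
# (through replies or forwards).
includes = {
    "Doc0023": ["Doc0021"],
}

def apply_privilege_lost(coding, includes):
    """Mark a privileged document as 'Privilege Lost' when an inclusive,
    non-privileged downstream message contains its content, and record
    which document caused the loss."""
    overridden_by = {}
    for downstream, earlier_docs in includes.items():
        if coding.get(downstream) != "Not Privileged":
            continue
        for doc in earlier_docs:
            if coding.get(doc) == "Privileged":
                coding[doc] = "Privilege Lost"
                overridden_by.setdefault(doc, []).append(downstream)
    return overridden_by

if __name__ == "__main__":
    causes = apply_privilege_lost(coding, includes)
    print(coding)   # Doc0021 is now 'Privilege Lost'
    print(causes)   # {'Doc0021': ['Doc0023']}
```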

As the web of threads grows more complex, and once you consider the effect of identifying textual near-duplicate emails, where virtually identical information may be relayed in completely unrelated messages, it becomes apparent that coding based solely on simple relationships does not necessarily reflect the truth, and that better data structures are required to track the influence documents have on each other. Thus ends Part 4. Nothing discussed so far is too radical, I hope. Part 5 will likely close out my set of posts (unless something requiring another large rant comes along), where I'll add another new concept or two, including Recursive De-Duplication (which I thought would be in this part, but it made this posting too long, so I pushed it to Part 5). Part 5 should tie everything together to present my vision of eDiscovery Utopia (eUtopia?).

About The Author

Harold Burt-Gerrans is Director, eDiscovery and Computer Forensics for Epiq Canada. He has over 15 years’ experience in the industry, mostly with H&A eDiscovery (acquired by Epiq, April 1, 2019).
