
How To Collect And Share Digital Evidence Files With Prosecutors


In this short How To video, digital forensic specialist Rich Frawley will show you how to collect and share digital evidence files with prosecutors and third parties using ADF Software. This video is ideal for learning how to share evidence with prosecutors for review.

If you are tasked with collecting specific files, or collecting all files from a specific location, ADF digital forensic software makes this fast and easy. Collecting and sharing evidence can be useful due to:

  • a legal hold
  • a specific need in your investigation
  • a limitation arising from claimed privilege
  • legal limitations set forth by a judge

This lesson is helpful if, for instance, you need to collect only PDF files from a specific computer, or to collect all files from a specific User Profile. You can accomplish this by creating a Capture to meet the specific requirements and then exporting the files to the ADF Standalone Portable Viewer for analysis or review by another party. Note that ADF report viewers do not require an ADF license to review the files.

To create a New File Collection Capture, go to Setup Scans from the main screen. Click on New Profile, New Capture, and then Collect Files. On this screen you will define the files to collect.

  • Type an existing Capture Group name or a new Capture Group name appropriate to the Capture.
  • Type in a Capture Name which is not already in use.

Pick a File Type: you can specify which file types to include in your search. Searches for All Files or Specific Files are available, and multiple specific file types can be added. If the file type you require does not exist, you can create one by clicking on View on any File Type group and then following the instructions in the Adding a Custom File Type section. In this example we are going to collect all PDF files from the User Profiles.

Select the Capture Options:

  • Only detect files (no collection) – The original files will not be collected, but preview thumbnails of images are created
  • File identification method – Fast identification identifies file types using the file extension only

Use thorough identification for files without extensions when you want file signature analysis applied to files that have no file extension, and fast identification applied to those that do. Thorough identification for all files uses file signature analysis to identify every file. Note: this will increase the time the scan takes to run.
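The difference between the two identification methods can be illustrated in a few lines of Python. This is an illustrative sketch only, not ADF's implementation: fast identification trusts the file extension, while thorough identification checks the file's leading "magic" bytes.

```python
# Sketch: extension-based vs signature-based file identification.
# The signature table below covers only a handful of common types.
MAGIC_SIGNATURES = {
    b"%PDF-": "pdf",                 # PDF documents
    b"\x89PNG\r\n\x1a\n": "png",     # PNG images
    b"\xff\xd8\xff": "jpg",          # JPEG images
    b"PK\x03\x04": "zip",            # ZIP archives (also DOCX, XLSX, ...)
}

def fast_identify(filename: str) -> str:
    """Fast identification: look at the extension only."""
    return filename.rsplit(".", 1)[-1].lower() if "." in filename else ""

def thorough_identify(header: bytes) -> str:
    """Thorough identification: match the file's leading bytes."""
    for magic, ftype in MAGIC_SIGNATURES.items():
        if header.startswith(magic):
            return ftype
    return ""

# A renamed PDF: the extension lies, the signature does not.
print(fast_identify("report.jpg"))         # -> jpg
print(thorough_identify(b"%PDF-1.7 ..."))  # -> pdf
```

This is why thorough identification is slower: every file's contents must be read, not just its name.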

Search selected file types in:

  • Archives – Searches for all selected file types within archives
  • Documents – Searches for all selected file types embedded within Document file types
  • Picture DB files – Searches for all selected Picture file types within thumbcache and thumbs.db files

Next is File Properties, where we can set specific parameters based on file properties for the File Collection.

Select the File Source options:

  • Entire file system – Searches all live files
  • Targeted folders – May be used to limit the extent of the scan, making it run more quickly. These can be used to limit the search to areas where evidential material is likely to exist. In addition, Targeted folders are searched before other folders, and are not searched again if both Targeted folders and Entire file system are selected. Here we are going to target the User Profiles.
  • Files referenced by artifact records – Used to target files referenced by Artifact Captures (e.g. email attachments)
  • Deleted Files – Targets deleted files for which references can still be found in the file system

When all selections are made, select Save. The File Capture is created and available in any Search Profile, to be used in conjunction with other Artifact and File Captures or alone as a standalone File Capture.

Collecting All Files from a Specific Location

Sometimes you will want to collect all files from a specific location. In this example we’ll demonstrate collecting all files from the user Rich Jr. To begin, select or create a Capture Group and Capture Name. Select All Files and then Targeted Folders. Add the path to collect all the files from: /users/Rich Jr/.* (for example).
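Conceptually, this kind of targeted-folder collection is a recursive walk of one directory tree. A hypothetical sketch in Python, using `pathlib` (this mirrors the idea, not ADF's internals; the example path is the article's, used for illustration only):

```python
# Sketch: gather every live file beneath a targeted folder,
# optionally restricted to a filename pattern such as "*.pdf".
from pathlib import Path

def collect_files(root: str, pattern: str = "*") -> list:
    """Recursively collect regular files under root matching pattern."""
    base = Path(root)
    return [p for p in base.rglob(pattern) if p.is_file()]

# e.g. collect_files("/users/Rich Jr", "*")       -> every file in the profile
#      collect_files("/users/Rich Jr", "*.pdf")   -> only PDFs, as in the
#                                                    earlier Capture example
```

Limiting the walk to one profile is exactly why targeted scans run faster than an entire-file-system search.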

When all selections are made, select Save; the File Capture is created and available in any Search Profile, to be used in conjunction with other Artifact and File Captures or alone as a standalone File Capture.

To Share with a Prosecutor or Third Party

Select Report → Stand Alone Viewer → Export

The entire scan results are now available for review and analysis by a prosecutor, another investigator, or another person.

Request a free trial at TryADF.com.


How To Use Amped DVRConv To Quickly Convert And Make Playable Proprietary CCTV Video


by Blake Sawyer, Amped Software

When I worked for the police department, I was constantly pulled in a lot of different directions. To keep a good turnaround time for the nearly 200 requests we had each month, I was constantly looking for tools to automate or ease the load. Amped FIVE was a great resource for most of my daily tasks and managed the entire video evidence workflow. With it I could create lossless playable versions of a proprietary CCTV video, generate stills for a bulletin, clarify details such as logos or license plates, and conduct comparative analysis.

But a lot of the time, the detective or light duty officer just wanted to review the video to see if the event matched the witness testimony. With over 300,000 residents there was no lack of new CCTV or proprietary formats that would come in. To keep our forensic computers free to conduct the more intensive work, we found and implemented a few dedicated machines for conversions using Amped DVRConv.

Using DVRConv, we avoided having to install potentially dangerous proprietary players, and produced files that were easy to play back in VLC or SMPlayer. Most importantly, DVRConv can create playable files that retain all the original pixel information, which is critical for any future work, should it be needed. Below is the quick and easy way to set it up to get great, quick results.

One of the first things you’ll notice is how simplified the DVRConv interface is. Once installed, DVRConv’s default settings will cover maybe 80% of your needs. You can simply drag and drop files into the big white box, and DVRConv will be off and running.

For those who are wondering what is happening in the background, unlike some other tools, there is no mystery. Simply going to the Console Log allows you to see exactly what tools and processes were employed.

In addition, there is a lot of customization that can be done from within the settings. Here are some of the options available.

The very first thing I wanted to point out is in the second tab: Folders.

In addition to dragging and dropping the files into DVRConv, you can also add them to the Input Folder. Often, I will leave the DVRConv Input Folder on the desktop, and then put the Output Folder in another location or on a storage drive. This way I can easily add files to DVRConv, and don’t keep a lot of excess on my desktop. I like to keep things organized, and DVRConv does a great job keeping things in order. Not only can I keep all my Output Files contained in one folder, but I can also add folders to the Input Folder and have the folder structure be retained in the Output Folder.

All of that is great, and super helpful to get the day-to-day files in and out, but the real power of DVRConv is in the Video Options.

The first thing to notice is the preset options. These set the conversion type, output format, and video codec automatically. By default, DVRConv is set to AVI – Copy Stream if Possible. This option carves the video stream out of a proprietary filetype and puts it into an AVI container. I tend to change this option to MKV – Copy Stream if Possible for a couple of reasons.

  1. It allows me to quickly find the converted video, as many proprietary videos claim to use an AVI container, but few use MKV.
  2. MKV is a more forgiving format for these stripped video streams. It is recognized by Windows Media Player, VLC and most video players.

Copy Stream if Possible, or else Transcode is the conversion type I tend to keep whenever possible. It breaks the process down into two stages. Copy Stream if Possible puts the video into a new container whenever that is an option. This does not change the pixel information in the video, which matters for those who may have to testify about a video or send it to be analyzed later.
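The "copy stream" idea can be shown with a generic FFmpeg invocation (DVRConv automates this behind the scenes; the command below is plain FFmpeg, not DVRConv's actual internals). The `-c copy` option rewraps the existing stream into a new container without re-encoding, so no pixel information changes:

```python
# Sketch: build an FFmpeg command that remuxes (stream-copies) a video
# into a new container. The input filename here is invented for illustration.
def remux_command(src: str, dst: str) -> list:
    """FFmpeg command that copies the stream into a new container."""
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

cmd = remux_command("evidence_cam1.dav", "evidence_cam1.mkv")
print(" ".join(cmd))
# ffmpeg -i evidence_cam1.dav -c copy evidence_cam1.mkv
```

Because no decode/encode cycle happens, a stream copy is both fast and lossless, which is exactly the property you want for evidence.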

The “or else Transcode” part of the option comes into play if you run into a truly proprietary video codec whose stream cannot simply be copied. This uses the information we have learned from hundreds of codecs to convert the video into a playable codec, which is decided by you and can be changed based on the destination and intention. If you are looking to convert the video into something that just needs to be reviewed and played everywhere, the default H264 should suffice. For videos that will need to be analyzed later, or where you need to make sure every pixel is exactly copied, Raw (Uncompressed) is your friend. This saves a full, uncompressed picture of every frame for you to review. The downside is that these files can be VERY large, and multiple GB per file is not uncommon.

One other piece of media I commonly deal with is audio, and DVRConv is also set up to deal with audio. You can set the audio the same way you do the video: it can copy an audio channel, or convert it if you need a more playable codec. There is also an option in Audio Codec to ignore audio, but for evidentiary review, I wouldn’t recommend ignoring evidence.

The other Audio option is Separate Audio Stream, to separate audio from the video. I know some departments use this extensively when they need to redact audio from the video, or if they need to work on the audio separately from the video (possibly for clarifications).

Two new settings I wanted to mention are Change Output Frame Rate and Generate Input File Info, which have been added in the latest release.

Because most proprietary videos do not encode their frame rate the way it is done in a standard container, there are times when DVRConv cannot parse the frame rate from the container or video stream. When that happens, it places a default frame rate that is usually generated by FFmpeg, one of the conversion engines within DVRConv. DVRConv has added a way to account for that by making an optional Output Frame Rate. If DVRConv doesn’t find a valid frame rate, DVRConv can replace it with one you choose.
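The fallback logic described above is simple to picture: keep the parsed frame rate when one exists, otherwise substitute the user-chosen default. A hypothetical sketch (the function and values are illustrative, not DVRConv's code):

```python
# Sketch: the "Output Frame Rate" fallback idea. `parsed` stands in for
# whatever rate the conversion engine managed to read from the container;
# None (or a non-positive value) means no valid frame rate was found.
def effective_frame_rate(parsed, fallback=25.0):
    """Return the parsed frame rate, or the user-chosen fallback."""
    if parsed is not None and parsed > 0:
        return parsed
    return fallback

print(effective_frame_rate(29.97))  # valid rate is kept: 29.97
print(effective_frame_rate(None))   # fallback substituted: 25.0
```

Choosing a sensible fallback matters for timing analysis: an arbitrary engine default can make events appear faster or slower than they occurred.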

The other new setting is Generate Input File Info. This will give you the container and stream information from the video, so you can have a log of what was received. This is a great tool for discovery or to help add some data for file validation.

As you can see, DVRConv is a really powerful program, hidden inside a clean, simplified interface. DVRConv is like the goalie on a water polo team: a lot is happening below the water, but everything seems calm from the surface. So many times, I had a person come into my office shocked, after struggling with a video for hours, only for me to have it ready for them in about five minutes. It meant that the next time, they would just bring it straight to me, and eventually to the other two people we had to hire to help.

Find out more about Amped DVRConv.

What Changes Do We Need To See In eDiscovery? Part V


by Harold Burt-Gerrans

Welcome to Part 5. As promised in Part 4, I’ll start by discussing recursive de-duplication.

Recursive De-Duplication: Using Aliases Within De-Duplication

I can’t count the number of times that clients have complained about x.400/x.500 addresses in emails. Unfortunately, if the collected data comes with those address structures and not fred@xyz.com, we’re stuck with using them. Relativity and Ringtail have both introduced “People” or “Entity” type data structures (I’m sure other review tools have as well), but I think these structures are not used to their full potential yet.

Part of the de-duplication process should be to recursively substitute alias values in place of address strings so that multiple copies of the same email can be matched even when one copy has addresses formatted differently from the other copies. This does cause some grief when the use of aliases at a later date determines that two messages already promoted into the review stage are actually duplicates, especially if they have not been coded consistently. Without giving away every idea floating around in my noggin, I’ll just say that there are sound logical methods to deal with these issues if you prepare for them ahead of time.

In conjunction with the introduction of aliases into the processing/de-duplication phases, consideration also needs to be given within the address structure to identify what I will call “Versions” of entities. For example, John Smith may have email addresses “JSmith@xyz.com”, “O=xyz;CN=JSmith”, and “JSmith123@gmail.com”. It should be maintained within the entity structure that the Gmail version of John Smith is not the same version of John Smith indicated by the other two addresses (which are aliases of each other), as each version has its own set of metadata. Additionally, JSmith could be important to the matter and, because he is working two jobs, he has email addresses for the separate jobs. Perhaps the Entity structure should be something like:

A structure similar to the above allows for the grouping of everything linked to John Smith, yet still differentiates between the different ways John Smith is known.

With every ingestion of data into the processing/review system, the Alias tables can be updated (likely manually) and the recursive de-duplication process should be run, substituting the primary email string for secondary email strings for each sub-entity. By using the above structures to update the deduplication, we can now identify that an email from O=xyz;CN=JSmith to fred@abc.com is the same message as another email that is from JSmith@xyz.com to O=abc;CN=Fred and the same as a third email from JSmith@xyz.com to fred@abc.com.
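The substitution step can be sketched in a few lines. This is an illustrative toy, not the implementation of any review platform: normalise every address to its sub-entity's primary string, then compute the de-duplication hash over the normalised message.

```python
# Sketch: alias substitution before de-duplication. The alias table maps
# secondary address strings (e.g. x.500 forms) to the primary address.
import hashlib

ALIASES = {
    "o=xyz;cn=jsmith": "jsmith@xyz.com",
    "o=abc;cn=fred": "fred@abc.com",
}

def normalise(address: str) -> str:
    """Lower-case the address and substitute the primary alias if known."""
    addr = address.lower()
    return ALIASES.get(addr, addr)

def dedup_key(sender: str, recipients: list, body: str) -> str:
    """Hash the message with all addresses in canonical (primary) form."""
    canon = "|".join([
        normalise(sender),
        ",".join(sorted(normalise(r) for r in recipients)),
        body,
    ])
    return hashlib.sha256(canon.encode()).hexdigest()

# The same message with differently formatted addresses hashes identically.
k1 = dedup_key("O=xyz;CN=JSmith", ["fred@abc.com"], "Lunch?")
k2 = dedup_key("JSmith@xyz.com", ["O=abc;CN=Fred"], "Lunch?")
print(k1 == k2)  # True
```

Running the substitution recursively, after each alias-table update, is what lets later ingestions retroactively match messages already in the system.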

Time Zones

First, a fun fact: the earth rotates once every 23 hours, 56 minutes and 4 seconds, not every 24 hours. However, because of the earth’s path around the sun, it takes about 24 hours for a point on the earth to rotate back around to the same position in relation to the sun. Hence, a day = 24 hours. Coincidentally, the difference between 23:56:04 and 24:00:00, divided by 23:56:04, is approximately equal to 1 divided by 365, the number of days in a year (not really a coincidence; mathematically it makes sense that it would be).
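The arithmetic is easy to verify:

```python
# Check the ratio quoted above: (24:00:00 - 23:56:04) / 23:56:04 ~ 1/365.
sidereal = 23 * 3600 + 56 * 60 + 4   # one rotation: 86164 seconds
solar = 24 * 3600                    # one solar day: 86400 seconds
ratio = (solar - sidereal) / sidereal
print(round(1 / ratio))  # ~365, one extra rotation per year
```

The extra rotation is exactly the one "used up" by travelling once around the sun, which is why the match with the length of the year is no coincidence.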

I don’t have the energy, or the interest, to investigate and detail the history of clocks, why the day is in 24 hours of 60 minutes of 60 seconds, or why new days start at midnight instead of sunrise. Regardless, since the beginning of time (Did time actually begin? Doesn’t 1 second before the beginning of time imply that it had already begun?), people have tracked time based on the sun. Long ago, sun-based devices, such as sundials, were invented to quantify units of time based on the movements of the sun. Somewhere along the way, people standardized on sunrises being around 7:00am, working hours of 9:00am to 5:00pm etc., but these were typically locally set times. As things began to be connected across long distances (e.g. by railroads) it became a requirement that the local times were co-ordinated, and eventually, international meetings established Time Zones synchronized to Greenwich Mean Time for worldwide time standardization. As a Canadian, I’m proud to remind everyone that a key player in this was Sir Sandford Fleming, a Scottish Canadian.

While Sandford made it easier for us to schedule things – If it is 1:00am, it is the middle of the night wherever you happen to be, 12:00pm in Ontario is 9:00am on the west coast of North America, etc. – we basically can predict what is happening elsewhere based on adjusting for the time zone. Since it’s midnight in Toronto, it’s 3:00pm in Sydney and people there must be at work, since I would be at work at 3:00pm (Oddly though, some of my western colleagues still think 2:30pm for them is a good time to start a meeting with me).

In eDiscovery, we typically ask what time zone to use for processing data. For regional matters, it’s almost always the time zone of the region. But, for national or international matters, it’s important to take into consideration how the data is used as well as displayed. For example, emails typically store the time in the native file in either UTC or local time with a UTC offset. But the output from processing is typically just a date and time, without UTC reference.

Consequently, if you process the emails from two custodians within their differing time zones, the extracted email dates will show the times as they related to the custodians and will not necessarily be in correct sequence for email threading. Hence, for consistency, it is usually best to process everything for a matter against a specific time zone – perhaps the time zone of the Judge is the best choice. What should be added to the processing environments and carried into the review platform are two or three dates/times:

  • Date/time in the time zone of the matter (as the primary date/time for reference)
  • Date/time in the time zone of the custodian (for local reference)
  • Date/time in the time zone of the review team. This one is optional, as often it is the same as one of the others. It is only useful to co-ordinate times/events from the reviewer’s perspective if they are in a completely different time zone.

A flexible method to do this would be to store all the dates in UTC within the review platform and allow a setting which forces their display as either static in a specific time zone or dynamic in the time zone of the person viewing the document.
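A minimal sketch of that "store UTC, display per viewer" approach, using Python's zoneinfo (the timestamp and zone names are invented for illustration):

```python
# Sketch: one canonical UTC timestamp, rendered either statically in the
# matter's time zone or dynamically in the reviewer's.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Stored once, in UTC, within the review platform.
sent_utc = datetime(2019, 6, 3, 14, 30, tzinfo=timezone.utc)

def display(ts: datetime, tz_name: str) -> str:
    """Render a stored UTC timestamp in a chosen IANA time zone."""
    return ts.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M %Z")

print(display(sent_utc, "America/Toronto"))   # 2019-06-03 10:30 EDT
print(display(sent_utc, "Australia/Sydney"))  # 2019-06-04 00:30 AEST
```

Because only the display layer changes, threading and sorting always operate on the single UTC value, which avoids the out-of-sequence problem described above.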

One other thing to consider when reviewing emails is that the content of a thread may not be from consistent time zones. The date stamp on the most recent internal thread message is put there by the sender, so it is relative to who is sending the message out. In the example below, we processed John’s mailbox, in Ontario.

The interpretation of the timing of the above events changes significantly if:

a) Sara is in the same city as John and is interested in purchasing from John’s company. In this case, John appears to be a little slow at customer service. OR
b) John met Sara, who is in China, on InternationalDating.com. In this case, John replied in 1 minute, not 12 hours. Need I say more….

Just as a small rant… All the time zone problems in eDiscovery would go away if the world standardized on a single time zone. I’m sure I could adapt to a working 3pm – 11pm if my sunrise was at 2pm. And switching to working 2pm – 10pm when it would be Daylight Savings Time. I’m adaptable. I survived Y2K and the switch from Imperial to Canadian Metric.

I thought I might squeeze all my stuff into 5 parts, but it’s surprisingly easy to blow through 1,000 words. I have a little bit more left for Part 6, which should be the last one of the series and presents everything tied into my version of eUtopia.

About The Author

Harold Burt-Gerrans is Director, eDiscovery and Computer Forensics for Epiq Canada. He has over 15 years’ experience in the industry, mostly with H&A eDiscovery (acquired by Epiq, April 1, 2019).

Walkthrough: What’s New In XAMN v4.4


Hello and welcome to this video about what’s new in XAMN 4.4.

I’m going to take you through ten new improvements, as you can see listed here in the latest release of the XAMN application. Let’s get straight on to the product so we keep this video as short as possible for you.

This is the latest version of XAMN 4.4. I’m working on a beta, so some features might change before the final release, but this should be a good indication of what’s coming up.

Let’s start with this file for an iPhone 6. The first thing we’ve done is improve the loading functionality. You can see here there are twelve XRY files to be loaded, and you get feedback on how far the program has got. Loading is also much faster.

The next thing I’d like to point out is that you can see on the left-hand side that we don’t have recently opened files anymore. We’ve improved this to allow for more screen space, so that you can see more of the extractions in this particular case, and have that information available. But if you do want to open another case, you just click on the ‘Open XRY Case’ or XRY file button in the top left, and you can see all the recently opened files there. So that’s a change in XAMN 4.4.

Also a new feature in the Start Case page here is quick views. So if we go into quick views, I can edit these directly in the start tab here – for example, if I wanted to add a classic mode, that’s one of my quick views on the right-hand side. Click ‘Classic mode,’ click ‘OK,’ and you’ll see it appears there. And conversely, if I untick it, it’ll disappear. So you can manage your quick views in XAMN 4.4 straight from the icon at the beginning of the application.

Let’s go to Pictures now and see one of the major improvements that we’ve added there. So we’ll go to Gallery view, so you can see all the pictures that we’ve got. Let’s click on this particular picture of a car that you can see here. I want to show you a new feature in relation to the picture viewer.

So if I open this in the XAMN Picture Viewer application… just drag that over onto the screen… you can get a much larger view of the picture in the application. But if I want to open another picture, I have to double-click on it. That will open a second dialogue box. And if I open a third picture… and so on and so forth; you get the idea. Essentially, you have to open a separate picture viewer for each picture.

Now we’ve had a lot of feedback asking us to improve this process, so we’ve added a new ‘pin’ button here. So simply click the pin button, and now I can scroll through to the next picture simply by selecting it in the gallery view. It’s a much easier and quicker way to deal with images, so you can see them in a full view, if you want to. Great for a second monitor screen view, as well.

That’s the new picture view. To go back to the normal mode, unpin, and then you can see that I need to double-click that to open the next picture. A nice little improvement there in XAMN.

Let’s have a look at another feature we’ve added. We had some feedback in relation to examiners who had to look at indecent images. So we’ve made some changes in the Options menu, and we’ve included a new option: ‘Prevent animated gif files from being played automatically’, as you can see here. What I mean by that: let’s find a gif file to demonstrate.

So an improvement that we’ve made in XAMN previously is that gif files, if there, will preview and display automatically, as you can see this one’s moving. Now if it’s an indecent image, perhaps that’s not appropriate, so you’d like to prevent that from happening, quickly just go to the Options menu in the Detail panel; select ‘Prevent animated gif…’, click ‘OK’, close that down, go back to Pictures. Start again: if we search for that gif file again, now you can see that it’s no longer playing the animation, it just shows you the first frame.

The same applies to Project VIC, where we’ve made some improvements in XAMN 4.4. Previously if you selected this button, you would have got a whole host of options. We’ve now moved that, so the Project VIC button simply runs a review of the extraction; you can see here we can filter on all artifacts, or just ‘Filtered.’ Select by view; I’ll do a quick check against our database. We can see no hits. OK, fine, let’s do that again. Let’s do it on all artifacts, see if we can get a match. And we’ve got thirteen hits in this particular view. So click ‘OK’, XAMN will update the data, and here are hits for Project VIC. And we can select those images if we want. Just to reassure you, this is a fake database, so if we open the picture viewer, these are just normal images that we’re testing in the system here.

And you can see that’s how the matches are displayed. So the images by default are prevented from being displayed. We won’t show them until you want to look at them in detail in the picture viewer.

If you want to change the settings for Project VIC, you can do that now in the Options menu. We have a new section for Project VIC here, where you can decide on the format that you want to use, depending on your region. And also, now you can add multiple databases, so more than one for each region. You can create a new one, or add them here in the Options menu.

OK, we can clear all the filters using the button here. Quick update for the Time filter: we’ve improved ‘Set custom time’. If I click on that filter it defaults to today’s date. It also means if you want to have a ‘from’ and ‘to’, obviously it starts from the current date. It’s much quicker and easier to get the filter in recent time.

Next big discussion point is chat view. So we have this chat thread here, with a discussion on the Kik app with a participant called Johnny Utah. You can see we can flick through that. Now historically the chat view was originally in XAMN Horizon; recently we’ve put it in XAMN Spotlight. And the great news is, in XAMN 4.4, we’re going to put this chat view into XAMN Viewer. What that means, quite simply, is that it’s now free to use for all XAMN users.

So XAMN Viewer now contains the chat view for free. And on top of that we’ve made some minor improvements as well. You can now see the exhibit ID, so you can see where the thread came from. This is from the Apple iPhone 5, which is number six in this case, number four; just to remind you, that’s the reference number we give to the particular exhibit. So that’s included now in chat view.

We’ve also added a new shortcut to PDF. So if this chat view is something that I want to report, I can very quickly click the ‘PDF Report’ button, it goes straight to a PDF and assumes that I want to print out this chat thread. I can click ‘Export’, and we’ll open that folder to see the results. Let’s drag it over here. And here you can see the PDF report with those chats – that’s the screenshot, very quickly printed out just as you see it there. So that’s a nice little touch for the XAMN chat view: a quick shortcut to PDF.

Don’t forget of course, though, if you want to do a more detailed report, select the ‘Report export’ option and you can go through all the artifacts and all the different file formats. So these are our twelve standard file format exports here. And you can choose between all artifacts, those filtered or those selected, as you can see here.

And another new feature to point out: if we go to PDF, perhaps to do a report with pictures, we now number the pictures to make them easier to report. So let’s quickly go back to pictures in the gallery view. Let’s highlight this top row, and go to ‘Export.’ You can see it defaulted to ‘Selected (9 artifacts)’, and I want to do a PDF report of that. Click ‘Next’, and let’s go to ‘Pictures only view.’ We can put eight per page, or nine per page. We’ve selected nine artifacts, let’s put all nine on one page. Click ‘Next.’ And then we can open that up, and here you can see the PDF report that’s been created.

And there’s the original screenshot, and now you can see here we’ve numbered those individually selected nine images; those pictures that we’ve selected.

Great. One other feature I’d like to point out to you: very nice new implementation of screenshot. So now we can take a screenshot of what we’re looking at. Perhaps you’ve created a… let’s create a geographical map view, for a change. This is available in XAMN Horizon. So here’s a picture of several artifacts that have been created on this case. You can see that there’s some pictures there.

Let’s take a screenshot of that. So I can either drag an area – perhaps I just want the map for my report, and that creates a picture which can be saved – or alternatively I can just do a full screen, and then you can see I’ve got the whole screen there. And I can then save the file to a destination of my choice. That’s the new screenshot ability in XAMN 4.4. Great feature there.

Just one point in terms of tagging. You’ll probably be aware that… let’s remove that one… that we can add tags as a filter, and we can tag individual data. So let’s go to the view here. I can individually tag files, so I can mark these as important. And if I wanted to, I can edit the tags and give them all sorts of meanings.

We’ve included the option now to include tags in the export. So in the Extended XML export, the export schema now includes the tag marker information as well as all the other data, so it can be ingested into third-party analysis tools. There is a new extended XML schema available from the MSAB customer portal, detailing that for your third-party vendors.

Another nice little feature: if you wanted to save a subset of this for a third party to review, you can now click on the ‘Save subset’ options, and there’s a new feature here to include XAMN Viewer for free, as part of the package, so that the recipient can both receive the file and also have a reviewing tool to review the data in it.

Then we’ve made an improvement on call data records. If you’re not familiar with that, you’ll see that we have options here to import a binary file, or a UFED file from Cellebrite; but we can also import CDR – call data records.

If I click on that, a wizard appears, and this will allow you to import the telephone records from network service providers, to see if they match the data that you’ve extracted from the handset. So we’re going to browse to a demo file, to show you, and then essentially you just follow the wizard. Click ‘Next’ and it will read the template, and it will say, OK, select the header row, which I’m going to do here. And it’s going to say OK, we think the data starts here, which is correct, so we’ll go ‘Next.’ And then select the end row: perhaps you just want a few of them, so we’re going to select the end row here; click ‘Next’; and then it says OK, data formatting. How are we going to deal with it? And if you need to, you can expand this to get more on the screen, so you can see what we’re looking at here.

And then you basically tell XAMN what to do. Should it ignore this data, or should it treat it? And you can see here we’ve got various bits of information. What’s new in this release is that we can now import the call data tower name. So we can import the cell tower name; classify that; and perhaps we have first cell ID here, so we can import cell ID. So cell ID and cell tower name are two new categories that we can give to the data that we import, along with, you can see traditional ones here: latitude, longitude, perhaps also duration. And then you can mark the data and verify the format, as you see here. So that’s a great new feature in call data records: importing.
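The column-classification step described above amounts to reading a delimited export and mapping each column to a category such as number, duration, cell ID, or the new cell tower name. A hypothetical sketch (the column names and sample rows below are invented for illustration; real provider exports vary widely):

```python
# Sketch: parse a toy CDR export and pull out the two new categories
# mentioned above, cell ID and cell tower name.
import csv
import io

RAW = """number,start,duration,cell_id,tower
5551234,2019-06-01 10:02,00:01:32,30412,Main St North
5555678,2019-06-01 10:15,00:00:47,30413,Harbour East
"""

records = list(csv.DictReader(io.StringIO(RAW)))
for rec in records:
    # In a real import each column would be classified via the wizard;
    # here we simply read the two tower-related fields.
    print(rec["tower"], rec["cell_id"])
```

Once classified, those tower fields are what allow the handset extraction to be cross-checked against the provider's records.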

And last but not least, I’d like to show you a new feature with health data. I’m going to close this file down and open a new case with health data in the app. And you can see the improved loading feedback and the faster opening times. Let’s open the health data here – so I’m just going to move this up. And here we’ve got some health data from the Apple Health app. There are various files along the way, and you can now view heartbeat monitor data: here’s an example.

We’ve added a new feature so you can export this. A customer requested export as a CSV file, so here we’ve got a heartbeat chart that we’d like to export. Click on the ‘Export as a CSV file’ shortcut and create a test file, called ‘test2’. Then open that file, and here’s all the data in the spreadsheet we just exported it to.

Quite simply you would select all the data – and if you’re a wizard in Excel you’ll know how to do this – and then you can insert a chart. Let’s insert a chart view. And there you can see the heartbeat data visually represented in Excel. Hopefully that looks vaguely like something that you saw in relation to this data format.
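Once the heartbeat data is out of XAMN as a CSV, you aren’t limited to Excel. A minimal sketch of summarising it programmatically instead (the column name "Value" is an assumption; check the actual header of your export):

```python
import csv
import io

def heart_rate_summary(csv_text, column="Value"):
    """Return min/max/mean of a heart-rate column from an exported CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rates = [float(row[column]) for row in reader]
    return {
        "min": min(rates),
        "max": max(rates),
        "mean": sum(rates) / len(rates),
    }
```

The same values could then feed a chart in any plotting tool, just as the Excel chart does in the demo.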

OK. So that’s a summary of all the recent new improvements that have been made in XAMN 4.4. Thank you for watching, and if you’d like any more information, please visit our website, www.msab.com.

Cost-Effective Tools For Small Mobile Forensic Labs

by Alex Moeller

As the costs associated with running a mobile device forensics laboratory can be high, this article aims to provide alternative options for small organisations or individuals looking to reduce overheads.

Case Management Tools

There are numerous case management systems available online which are free to download, and the premium features offered by paid software are rarely worth the cost at the small-business stage.

These case management systems, however, are a double-edged sword. Although many have built-in data loss mitigation features such as real-time backup, the feature requires a constant internet connection. This can open up your system to possible attacks and manipulation of case information.

Although lacking in features compared to the online programs, Microsoft Excel [1] is a viable option with which a functional case management system can be designed with little skill. An added bonus of services such as Air Tables [2] is that you can download premade templates into Excel, skipping the fiddly work of fonts and table-making.

Mobile Forensic Tools

Now, this is the big saver part, and as most of us probably know, any decent software used in digital forensics is expensive. So how do you keep costs down?

Building a PC that can handle Cellebrite [3] or XRY [4] will cost you around £500.00 if you’re smart, and while an expensive graphics card is not required, a decent amount of RAM and processing power are.

Write blockers aren’t required unless you wish to perform SD card extractions. The usage of SD cards by mobile phones has generally decreased as a result of their more substantial internal storage capabilities. If you are required to examine an SD card then NIST [5] provides free validation test reports on multiple software write blockers, thus ensuring the most suitable tool is used for the work. 

SIM card readers themselves don’t cost a lot and can be purchased on Amazon for around £10.00. 

Extraction

Mobile phone extraction software can seem expensive, but it doesn’t have to be. The main difference between the more expensive tools and the cheaper ones is ease of use. Tools like Cellebrite and XRY are great at combining lots of different mobile extraction methods into a streamlined and efficient solution. The less expensive tools require slightly more training and time spent becoming familiar with the steps involved, but practice makes perfect. Start with simple tasks, such as extracting only images or texts, until your requirements outgrow the tool; at that point the more expensive software becomes the more viable option.

ADB [6]–[8] is an option for Android devices, but you run the risk of breaking the phone if you don’t learn the correct commands.
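One way to reduce that risk is to build and review each adb command before running it. A minimal sketch (the paths and serial number below are placeholders, not values from any real case):

```python
import subprocess

def adb_pull_command(remote_path, local_dir, serial=None):
    """Build (but do not run) an `adb pull` command.

    `adb pull` copies files off the device and is read-only on the handset;
    reviewing the exact argument list before executing is a simple guard
    against typing a destructive command by mistake.
    """
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]  # target a specific attached device
    cmd += ["pull", remote_path, local_dir]
    return cmd

def run(cmd):
    """Execute a previously reviewed command, capturing its output."""
    return subprocess.run(cmd, capture_output=True, text=True)
```

For example, `run(adb_pull_command("/sdcard/DCIM", "evidence/"))` would copy the camera folder to a local `evidence/` directory on a device with USB debugging enabled.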

Autopsy [9] is an option that should be considered as it is capable of extracting text messages (SMS / MMS), call logs, contacts, and GPS data. The downside to these types of software is that they have limited coverage as each device can have a different OS version. The aforementioned software will therefore only work on specific mobile devices.

A document entitled “Open Source Mobile Device Forensics” authored by Heather Mahalik in 2014 provides further options to consider when looking at open source solutions [10].

Analysis

As with the extraction stage, cheaper options are available for the analysis of data. The presentation of extracted data for analysis is crucial as there is a vast amount of data available to an examiner and it needs to be presented in a logical fashion. 

In most mobile phone extractions, however, large amounts of data are recovered, and so subsequently require a more professional touch. This can be achieved by using software which takes the raw data extracted from the phone as input and outputs graphical displays.

Autopsy [11] has a graphical interface which comes with features like Timeline Analysis, Hash Filtering, File System Analysis and Keyword Searching out of the box, and has the ability to add other modules for extended functionality.

Services like Splunk [12] offer a great way to transform messy looking data sets into clear and understandable models and tables. 

Validation

Validation of tools and methods is a massive, exhaustive process which never seems to end. 

But keep calm and keep validating. 

To ensure reproducibility and repeatability a laboratory must be able to validate results by demonstrating the reliability of the tools used to ascertain those results. For example, if instructed to locate a specific image stored on a mobile device, an examiner should be able to extract an image and confirm the hash checksum. A useful tool to accomplish this task is Jacksum [10], which is free open-source software that calculates a variety of hash checksums and can be incorporated into Microsoft Windows and accessed by simply right-clicking on a file. Another great tool for image analysis is ImageMagick [11], which is also free and can provide detailed analysis of specific aspects of an image.
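If you’d rather not install a separate tool, a hash checksum can also be confirmed with a few lines of Python’s standard library. A minimal sketch:

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 checksum of a file.

    Reads in 1 MiB chunks so large images or full extractions do not need
    to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

Hashing the file before and after an extraction, and comparing the two hex digests, demonstrates that the tool did not alter the data.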

Validation needs to be tackled in an efficient manner with an appropriate strategy that meets your end-user requirements. Mobile phone validation can seem like a daunting task at first, but breaking it down into smaller parts will make it easier. First validating the fundamental features which exist on every make and model of phone such as contacts, SMS messages and call logs can set you on the right path, and the scope can be increased later on.

Validating every phone you encounter would be impractical: it would never end, as new models hit the market faster than we can validate them. Instead, initially focus on a specific type of phone, or do a Google search for the most commonly purchased phones and pick a selection which represents a sample of the market. Commonly used phones can be expensive, so look for second-hand ones and perform a factory reset. Before conducting any tests, perform an extraction of the phone and make a note of any remaining data so it can be ignored in tests.
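The baseline step above (noting residual data so it can be ignored) can be expressed very simply. A sketch, assuming extraction results are represented as lists of artifact identifiers per category (the data shape is an assumption for illustration):

```python
def filter_baseline(extraction, baseline):
    """Remove pre-existing artifacts from a test extraction, per category.

    Both arguments map a category name (e.g. "sms", "contacts") to a list
    of artifact identifiers. Anything already noted in the pre-test
    baseline extraction is dropped, leaving only the seeded test data.
    """
    result = {}
    for category, items in extraction.items():
        known = set(baseline.get(category, []))
        result[category] = [item for item in items if item not in known]
    return result
```

Comparing the filtered result against the data you populated onto the handset then gives a straightforward pass/fail for each fundamental feature.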

Buying new phones should be avoided not only to reduce costs, but also because second-hand devices have the advantage of being more closely aligned with the types of devices used in casework.

Documents published by NIST [13], [14] provide validation results [15] for you to set acceptable pass criteria for your own testing. The FSR has also published guidance regarding validation [16], as has the Scientific Working Group on Digital Forensics [17], [18]. Combining these documents can help provide a solid overview when creating validation plans.

Digital Storage

Digital storage goes hand in hand with a good case management system. It’s crucial that exhibits for a specific case are kept as one and are not lost, and this can be achieved by keeping your case management system in sync with exhibit logs. Exhibit logs should state where an exhibit is being kept and if it has been returned to the instructing party.

The security of physical exhibits is as vital as the safety of any digital exhibits and should be made a priority. Depending on your work environment you will need a safe, stored within an area of restricted access. Ensuring only workstations with no internet capabilities have access to case data, and using only encrypted USB flash drives, will ensure safety from most outside dangers. 

A NAS system can be of great use but can cost a lot, so again, either look for cheaper alternatives like simply swapping out hard drives, or browse eBay until the right one comes up for a reasonable price. 

If that’s too expensive you can build your own, but consider that whatever route you take will require validation testing. Security is yet another key aspect to consider when using a NAS, as you can never be too careful in digital forensics. Most extracted data have the potential to contain viruses or malware which could compromise confidential files. The best way to ensure the safety of these files is to keep the NAS separated from the internet completely, but if you do need to connect to the NAS remotely an article by How-to Geek describes the necessary steps to keep it safe [19].

Report Writing

Case reports need to be free of grammatical errors and should be accessible to the reader. One way of ensuring this is by using software that picks up any grammatical errors found in reports, thus preventing any misunderstandings. Software like Grammarly [20] is free to use and offers a premium option for more advanced grammatical errors that perhaps Microsoft Word might not pick up. However, this and similar software require an internet connection to function, again leaving you open to online attacks. With that being said, a few ways around this are available.

The first option would be to set up a low specification workstation for running internet searches and to operate Word with Grammarly installed. The finished report can then be put onto an encrypted memory stick, thus minimising the risk.

A safer option would be to make some tweaks to the spellcheck available within Word [21] and create your own dictionary of keywords and phrases you wish Word not to pick up on.

Peer Review

Peer reviewing of each other’s work is obviously a free thing to do if you work with someone else with a similar skill set, but if you work alone then you must make some friends who work in your area of expertise. Peer review is essential in ensuring reliability and error mitigation and is advised to ensure compliance with the FSR Codes of Practice [22].

When peer reviewing work, don’t waste time and money (and trees) printing out forms. Try using the comment feature in Word for areas that need addressing. This could also be a good way of recording improvement actions to show how your company finds errors and makes improvements. 

Delivery

Sending confidential documents online can be a risky game, so procedures should be put in place to mitigate against said risks. Tresorit [23] and Sophos [24] provide end-to-end encrypted file-sharing services, and each offers a free trial which should be taken full advantage of before deciding which to commit to.

Transporting important case data via an external device requires security while in transit. This can be achieved by using strong encryption with software such as VeraCrypt [25], a free tool for encrypting hard drives and USB flash drives. 

Conclusion

It’s currently a difficult time for smaller laboratories to compete against larger ones, with ISO 17025 accreditation looming over our already stressful day-to-day lives. The chance to cut costs should be seized at every opportunity, to save money for those accreditation visits and rainy days. Not everything has to be state-of-the-art, cutting-edge tech. If you learn the necessary skills and are prepared to accept fewer flashy features, then try some of these alternative methods instead of forking out cash at every turn. I want my final words in this article to be positive and push for more cooperation between smaller digital forensic laboratories, as I believe that this will not only benefit everyone in setting a higher standard, but will also significantly improve our justice system.

References

[1] Microsoft, ‘Microsoft Excel’. [Online]. Available: https://products.office.com/en-gb/excel. [Accessed: 13-Aug-2019].

[2] Air Tables, ‘Air Tables’. [Online]. Available: https://airtable.com/templates. [Accessed: 13-Aug-2019].

[3] Cellebrite, ‘Cellebrite’. [Online]. Available: https://www.cellebrite.com/en/home/. [Accessed: 15-Aug-2019].

[4] MSAB, ‘MSAB’. [Online]. Available: https://www.msab.com/. [Accessed: 15-Aug-2019].

[5] NIST, ‘DHS Reports — Test Results Software Write Block’. [Online]. Available: https://www.nist.gov/itl/ssd/software-quality-group/computer-forensics-tool-testing-program-cftt/cftt-technical/software. [Accessed: 17-Oct-2019].

[6] Android, ‘Android Debug Bridge (adb)’. [Online]. Available: https://developer.android.com/studio/command-line/adb#copyfiles. [Accessed: 15-Aug-2019].

[7] Chris Hoffman, ‘How to Install and Use ADB, the Android Debug Bridge Utility’. [Online]. Available: https://www.howtogeek.com/125769/how-to-install-and-use-abd-the-android-debug-bridge-utility/. [Accessed: 16-Aug-2019].

[8] Doug Lynch, ‘How to Install ADB on Windows, macOS, and Linux’.

[9] Autopsy, ‘Autopsy’. [Online]. Available: https://www.autopsy.com/. [Accessed: 15-Aug-2019].

[10] Heather Mahalik, ‘Open Source Mobile Device Forensics’, 2014.

[11] Autopsy, ‘Sleuth Kit’. [Online]. Available: https://www.sleuthkit.org/autopsy/. [Accessed: 29-Sep-2019].

[12] Michael Baum, Rob Das, Erik Swan, ‘Splunk’. [Online]. Available: https://www.splunk.com/. [Accessed: 18-Aug-2019].

[13] NIST, ‘NIST (CFTT)’. [Online]. Available: https://www.nist.gov/itl/ssd/software-quality-group/computer-forensics-tool-testing-program-cftt/cftt-technical/mobile. [Accessed: 20-Sep-2019].

[14] NIST, ‘Mobile Device Data Population Setup Guide’. [Online]. Available: https://www.nist.gov/sites/default/files/documents/2017/05/09/mobile_device_data_population_setup_guide.pdf. [Accessed: 15-Sep-2019].

[15] NIST, ‘Test Results for Mobile Device Acquisition Tool Cellebrite’.

[16] FSR, ‘Validation Guidance’. FSR, 2014.

[17] SWGDE, ‘SWGDE Minimum Requirements for Testing Tools used in Digital and Multimedia Forensics’. 2018.

[18] SWGDE, ‘SWGDE Recommended Guidelines for Validation Testing’. 2014.

[19] Craig Lloyd, ‘6 Things You Should Do to Secure Your NAS’. [Online]. Available: https://www.howtogeek.com/350919/6-things-you-should-do-to-secure-your-nas/. [Accessed: 17-Aug-2019].

[20] Grammarly, ‘Grammarly’. [Online]. Available: https://www.grammarly.com. [Accessed: 17-Aug-2019].

[21] Microsoft, ‘Word’. [Online]. Available: https://products.office.com/en-us/word. [Accessed: 13-Aug-2019].

[22] FSR, ‘FSR Codes of Practice and Conduct’. 2017.

[23] Tresorit, ‘Tresorit’. [Online]. Available: https://tresorit.com/. [Accessed: 19-Sep-2019].

[24] Sophos, ‘Sophos’. [Online]. Available: https://www.sophos.com/en-us.aspx. [Accessed: 19-Sep-2019].

[25] Veracrypt, ‘Veracrypt’. [Online]. Available: https://archive.codeplex.com/?p=veracrypt. [Accessed: 05-Sep-2019].

About the Author

Alex Moeller is a Mobile Phone Forensics Examiner at Verden Forensics in Birmingham, UK, and has experience in conducting examinations in a variety of cases, both criminal and civil. He holds a degree in Forensic Computing from Birmingham City University and is currently preparing the laboratory for ISO 17025 accreditation in Mobile Device Forensics. 

How To Export Media Files From BlackLight Into Semantics21

So before we go to export our files from BlackLight to S21, what we will normally do is we will run the hashes against our case. In this case what we’ve done is we’ve already run these hashes against BlackLight, and as you can see, S21 has been run and it’s showing complete. These are the hashes that we’ve already set up and we’ve connected to this hash database through the MySQL interface within BlackLight. Once that is done – and in this case, the hash is done – we can then go over to our media section.

Now I’m going to choose ‘Combined.’ And what this is going to do is it’s going to show all the images, and all the thumbnails, and all the video files, that are part of this case. It’s displaying all of these pictures and videos and thumbnails for us. Now what I want to do is, I want to export all of these pictures and videos from the case, into a format that S21 will understand.

So I’m going to select all of these pictures and videos now. And what I did here was I selected on one, and then I selected Cmd+A, or Ctrl+A on Windows, which will then allow us to select all of the pictures and videos in the case. If I right-click anywhere on this window I can select ‘Export’, ‘Export Data Set’. And within ‘Export Data Set’ you can see S21.

BlackLight will then prepare the files for export. It will create a folder – a directory structure – on my computer, or wherever I choose to save this data. And then it will export the pictures and videos into the data set structure that S21 understands, including creating the XML files that S21 uses for the purposes of ingesting the data back into the end user application.

So now BlackLight is prepared to export the files. I’m going to create a new folder here on my desktop; what I’ve done here is I’ve selected the desktop, in this case. Normally what you’re going to do is, you’re going to have this exported to a place where an investigator can use the end user S21 application. The S21 application is a Windows-only application, so obviously in the real world I could not export it out to my desktop. Usually it would be a connected server, or some network storage that is attached to your analytical computer, whether that is a Windows or a Mac, that you can access and reach remotely from your internal network. So in this case, what I’ve done is I’m exporting these files onto my desktop, and I’m just going to call this ‘BlackLight S21 Export.’ So, BlackLight S21 Export, and then I’m going to hit ‘Create,’ and then I’m going to hit ‘Export.’

Now BlackLight will commence exporting from the case, that includes all thumbnail information, as well as all the movies and all the pictures that are in this case. This normally takes about 10-15 minutes, depending upon the size of the case; it could actually take even longer, if there are millions and millions and millions of pictures within your case.

In this case, BlackLight has already started the export feature. If I click on ‘Export Status’ I can see BlackLight as it’s exporting these files. There are over 37,000 files in this case; in this case we’re up to about 5,800 files.

So BlackLight is now exporting the pictures, videos and thumbnails from the case into the export folder that I’ve created, and it’s going to put it into the format that S21 requires for ingestion into the S21 application.

OK, so our S21 export has completed from BlackLight, and as you can see I’ve saved it to the desktop on this computer. Normally you would be saving it to a location, as I said earlier, that an investigator would be able to get a hold of that information and ingest that information correctly into S21.

In this case what’s happened is, BlackLight has exported the files and at the same time it has run a comparison of the files to the S21 hash database and appended those flags to those files, so that when it is ingested into the S21 application – the end-user application – the flags will be present and S21 can correctly display that data.

And what I’m going to do is I’m going to show you the results of that export file. Here’s the export folder here, showing BlackLight S21 Export. And if I open it up it will tell you the name of the case, Bennett-21 Exam, gives you the date and time of the export. And then within each one you have a volume info text, and a Case Report.xml file. Then of course you have your folders, you have ‘S21M’ for S21 movies; and then ‘S21P’ for pictures.

If I open up the pictures folder, all the files are located in here, in these subfolders. At the very bottom we have a Results.txt file that will give us the results within the exported files; and as you can see, it has pre-categorised 831 files, not surprisingly.

And then we have the S21 index file that contains all the information about each one of the pictures and videos – or in this case, pictures – for that particular file. So that XML file will contain the extended attributes of that file; the metadata of that file; including dates and times, full path, owner, etc. of that file. So that’s all part of this package that is forwarded over to the investigator, who then ingests this into S21.
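An index XML like the one described can be read with a few lines of Python. The real S21 schema is not documented here, so the element names below are a simplified assumption purely to illustrate pulling exported metadata back out:

```python
import xml.etree.ElementTree as ET

# Assumed, simplified index layout -- NOT the actual S21 schema.
SAMPLE = """<index>
  <file>
    <path>S21P/0001/IMG_0001.jpg</path>
    <owner>bennett</owner>
    <modified>2019-08-13T10:24:00</modified>
  </file>
</index>"""

def read_index(xml_text):
    """Return one dict of metadata per <file> entry in the index."""
    root = ET.fromstring(xml_text)
    return [
        {child.tag: child.text for child in file_elem}
        for file_elem in root.findall("file")
    ]
```

A reviewer could use something like this to cross-check the exported file count against the Results.txt summary.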

How To Use AXIOM In Malware Investigations: Part I

Hey everyone, Tara Nelson here with Magnet Forensics. Today I’m going to give a little bit of insight into how AXIOM can help with some of your day-to-day investigations.

In part one of the segment we’re going to talk a little bit about malware investigations, in particular reviewing memory as part of AXIOM. Regardless of the infection, be it a phishing email or a malicious code on a website, or what have you, memory analysis is usually a key component to a malware investigation.

I have a case open in AXIOM Examine, with both an end point and a memory image of an infected machine. This can be super beneficial in your investigation because it allows you to examine multiple pieces of evidence, including memory, in one tool.

So I’m going to switch over to the artifacts view. And you can see over here on the left-hand side that there is a section dedicated to just memory.

As you may know we’ve integrated Volatility, the popular memory analysis tool, into our processing with AXIOM. This includes plugins that you see here on the left: pslist, psscan, malfind, etc. So for all of these, you can review the output in the AXIOM interface.

For the purpose of this video I’m going to focus in on just a few of the basic ones to show how AXIOM can help in your investigations, starting with pslist to show some of the running processes at the time of the collection of the memory.

At first glance, without looking further, a lot of these just look like normal Windows processes. There is one that stands out to me as being potentially suspicious: this process called Fake Intel. That one to me just does not belong at all, so definitely I’m going to flag that as something of interest that I’m going to want to look into further.

If I wanted to see if this process was associated with any network activity, I can look at this plugin called netscan. And you can see that there’s a couple of remote IP addresses associated with this process that might also be malicious, that I might want to look into further.

What I really like about doing memory analysis within AXIOM is being able to see the counts here on the side. So in particular we talked about this pslist output that shows 46 processes. We also have this other output from the plugin psscan that can show hidden processes as well. You can see that there’s one more in that count, as well. That could be a really good indication to me that I might want to look further into hidden processes.

To be able to do that pretty easily, I’m going to use this plugin called psxview. When I click on that, if you’re familiar with the output that Volatility usually puts out in the command line tool you can see that it’s pretty similar, it’s just that we have it in this nice AXIOM interface.

So we have the process name here, and then a couple of columns that show whether or not all of these processes would be seen in the output of the plugins pslist and psscan.

So scrolling down here, you can even see that the one we have already flagged as potentially malicious is being seen in the output of both of those two plugins.

Scrolling down here further, when we get to the bottom we see that there is this other process that could be potentially malicious as well, that is not being shown in the pslist output but is being shown in the psscan output.

So again, this could be a really good indication that this could be a hidden process that I might want to look into further as part of my investigation. So I’m definitely going to flag that as of interest.
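The cross-view idea behind psxview boils down to a set comparison: psscan finds process objects by scanning memory pools, so a PID it reports that the walked pslist misses is a candidate hidden process. A minimal sketch of that logic (the PIDs in the example are made up):

```python
def hidden_processes(pslist_pids, psscan_pids):
    """Cross-view comparison in the spirit of Volatility's psxview.

    pslist walks the OS's linked list of processes, which rootkits can
    unlink entries from; psscan carves process objects directly from
    memory. Anything found only by the scan is worth a closer look.
    """
    return sorted(set(psscan_pids) - set(pslist_pids))
```

This is the same reasoning as comparing the 46 and 47 counts in the two plugin outputs, just made explicit.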

So this is just a really high-level look at a malware investigation as seen through AXIOM and Volatility. Again, you can review the output of any of these plugins here within the AXIOM interface as part of our Volatility integration.

Additionally, be sure to check out part two of this video, where I’ll show additional uses of AXIOM in a malware investigation.

Hopefully this quick overview was helpful. Please feel free to reach out with any questions. Thanks for watching.

How To Boot Scan A Microsoft Surface Pro

Hi, I’m Rich Frawley, and I’m the Digital Forensic Specialist with ADF Solutions. Today we’re going to conduct a boot scan of a Microsoft Surface Pro with BitLocker activated.

At this point you have decided on a search profile, or search profiles, to use and prepared your collection key.

When conducting a boot scan, Digital Evidence Investigator is forensically sound. This means that no changes are made to the target media.

Prior to conducting a boot scan, establish how many USB ports are available, and determine if the four-port USB hub is required.

Two ports are required in order to complete a scan: one for the collection key and one for the authentication key. Once the scan is started, the authentication key can be removed.

The Surface Pro only has one USB port, so I have a four-port hub, the collection key connected, and the authentication key.

With the Surface Pro, in order to boot to the USB device, we’ll hold the ‘volume down’ button while pushing and releasing the power button.

When booting to the collection key, Digital Evidence Investigator will automatically launch the application to scan the computer. No user input is normally required within the Windows boot manager.

Once DEI has launched, there are two options available: Scan Computer and Image Computer. To proceed with the boot scan, click on ‘Scan Computer.’

You can see here the physical device and the BitLocker volume; the search profiles that we have on our collection key; and our scan information.

To get started, we need to enter the credentials for the BitLocker encrypted volume.

Once the volume is decrypted, we can choose the search profile that we want to run; give it a scan name; adjust our date and time if necessary; enter in any custom fields that may be present; and then start our scan.

As you can see, I have my authentication key. In order to start the scan, I present my authentication key. The scan will start. I can now remove the authentication key and move on to another computer with another collection key.

That’s all for this video; thank you for your time.

Get a free trial at www.TryADF.com.


How To Use AXIOM In Malware Investigations: Part II

Hey everyone, Tara Nelson here with Magnet Forensics. Today I’m going to give a little insight into how AXIOM can help with some of your day-to-day investigations. In this video we’re going to talk a little bit about malware investigations.

There is a Part I to this segment, in which I focus on reviewing memory as part of a malware investigation in AXIOM, so if you haven’t seen that yet, I encourage you to go check it out. This video will focus on additional key features that AXIOM has to offer that could also be useful in a malware examination.

To start off, I’ve identified this process of interest, named ‘Fake Intel’, through our Volatility output from memory, that I believe could be malicious.

Because we also have the end point loaded into our case, we can quickly see if there’s any correlating artifacts on the operating system that might be useful to our examination.

So I’m going to go ahead and search for that name up here, and hit Enter. And then I can switch to our operating system artifacts. And you can see that there’s a few places where this process appears on the end point: there are prefetch files; there’s references in Windows event logs – you can see that highlighted when I scroll down here – and here we can also see this Fake Intel executable in the AutoRun Items.

So it’s always pretty key to try and determine how malware maintains persistence on the infected workstation, and one of the common locations that is used is the run key in the user’s registry hive.

So we can see here pretty easily that this potentially malicious executable file, that’s referenced in the run key of this user’s NT registry hive, will be launched each time the user logs in from the location in the user’s Temp folder.
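A simple triage heuristic for Run-key persistence is to flag any value whose command line points into a Temp folder, as this one does. A sketch of that check, assuming the Run-key values have already been parsed out of the NTUSER.DAT hive into a dict (the parsing itself is out of scope, and the example entries are invented):

```python
# Path fragments that are unusual homes for legitimate autoruns.
SUSPICIOUS_FRAGMENTS = ("\\temp\\",)

def suspicious_run_entries(run_values):
    """Flag Run-key values whose command line points into a Temp folder.

    `run_values` maps value names to command lines, as parsed from a
    user's NT registry hive.
    """
    flagged = {}
    for name, command in run_values.items():
        lowered = command.lower()
        if any(fragment in lowered for fragment in SUSPICIOUS_FRAGMENTS):
            flagged[name] = command
    return flagged
```

Anything flagged still needs manual review; plenty of installers legitimately launch from Temp once, but a persistent autorun there is a classic give-away.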

Doing analysis within AXIOM allows us to use some additional features within the tool, such as building a timeline of activity to see the different types of events that happen when this incident occurred.

The timeline in AXIOM includes file system dates and times, as well as the timestamps associated with the artifacts that are parsed out. So I’m going to go ahead and build a timeline out of this modified registry key date and time. I’m going to see what happens one minute before and after, and it’s going to open in the Timeline explorer.

When I click ‘OK’, as you can see as I’m scrolling through, there are artifacts here as part of this timeline that are both from the infected operating system, and we can also see activity from the memory image as well.

So you can really see the advantage of having all your evidence sources in one interface, to be able to correlate all of this data and really get an idea of the events that occurred in your evidence during a malware incident.
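Conceptually, the relative timeline is a merge-and-filter over timestamped events from every evidence source. A minimal sketch of that idea, not AXIOM's implementation (the event shape and sources are assumptions):

```python
from datetime import datetime, timedelta

def pivot_window(events, pivot,
                 before=timedelta(minutes=1), after=timedelta(minutes=1)):
    """Keep events within a window around a pivot time, sorted by time.

    `events` is a list of dicts with at least a "time" key; mixing events
    from the file system, registry and memory image into one list is what
    gives the combined view described above.
    """
    window = [e for e in events if pivot - before <= e["time"] <= pivot + after]
    return sorted(window, key=lambda e: e["time"])
```

Feeding in events from both the operating system image and the memory image, with the modified registry key's timestamp as the pivot, reproduces the one-minute-either-side view shown in the video.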

AXIOM also allows you to build connections, to give you an idea of how artifact attributes in your case are related across all of your evidence items.

So you can see I’m able to build connections off of anything that has this little icon next to it. So I click that, and I build it off of that file name of interest. And now you can see it gives a representation of related artifacts: some are from the memory, and some are also from the operating system as well.

So those are just a couple of tips of how AXIOM can help in a malware examination. We hope you try it out. Thanks for watching, everyone.

How To Integrate LACE Carver With Griffeye Analyze DI Pro

Let’s talk about the exciting new LACE Carver integration with Analyze DI Pro.

Once you have the proper license, you can head over to your Downloads page on MyGriffeye.com and go to the LACE Carver download.

Once the app package has been downloaded, we can go back to Griffeye and install it under Settings –> Plugins –> and click on the ‘Install’ button, selecting the file we just downloaded from the internet.

Once the file is fully extracted and the plugin has been installed you can head over to the Analyze Forensic Marketplace, where we now have LACE Carver integration.

If you click on it, you can get [an] introduction; installation information; and how to use it, as well.

Now let’s open a new case and check out the additional processing features available to us. The first thing you’ll notice is we have an additional selection, Physical Media. The LACE Carver integration allows Griffeye Analyze DI to point directly to a physically connected device.

Notice that when we select the device, we can either look at it on the physical level or the logical level, whichever you prefer. None of my physically connected devices are write-blocked, so I’m going to use a forensic image file that I’ve already created.

Once I select the image file, notice it gives me additional options on how to process this forensic image. If I select ‘Import Forensic Image’, I get the standard Analyze DI Import, which does not get unallocated files. But if I select ‘Carve Forensic Image with LACE’, it handles the entire processing of the E01 file to include valid files and unallocated and deleted files. It also gives me several carving options and an Advanced button if I want to further refine what I’m looking for – it could be images, videos, documents, deleted files, unallocated files, and some other file formats.
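To give a feel for what carving unallocated space means, here is a deliberately naive sketch of signature carving for JPEGs over a raw byte buffer. Real carvers such as the LACE Carver validate structure and handle fragmentation; this only illustrates the basic idea:

```python
JPEG_SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(data):
    """Find SOI..EOI spans in a raw buffer and return each as bytes.

    Works on any bytes object, so it does not care whether the data came
    from allocated files, slack, or unallocated space.
    """
    results = []
    pos = 0
    while (start := data.find(JPEG_SOI, pos)) != -1:
        end = data.find(JPEG_EOI, start)
        if end == -1:
            break  # header with no trailer: likely a truncated fragment
        results.append(data[start:end + len(JPEG_EOI)])
        pos = end + len(JPEG_EOI)
    return results
```

Because the search keys on content rather than file-system records, deleted pictures whose data still sits on disk are recovered too, which is why the carved case below finds so many more files than the standard import.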

Because we chose the integrated LACE Carver to handle the forensic image file import, there’s no need to bring in an additional folder containing carved unallocated files. It’s all contained in the same source ID in this investigation. So, we can continue to process our case as we normally would.

The Integrated LACE Carver will begin to carve the forensic image. Now remember, this is getting valid files as well as deleted and unallocated files. Once the LACE Carver has completed processing the forensic image file, the results will be imported into the Griffeye case, as it normally would. Using the Integrated LACE Carver to process our forensic image, we found 33,804 files as a part of our investigation.

Now let’s take a look at a case I created using the same forensic image file, but selecting the standard import, not using the LACE Carver.

I was only able to find 1,893 files in that forensic image. Now let’s take a look at the information we have within the case, about our files. In the grid view, the unallocated column now contains checkboxes on all the files that were found in unallocated space, as well as the physical file location or physical sector where that file was found.

We also now have the ability to filter files that we found in unallocated space by going over to our filters, the File tab, and to the unallocated filter, and select ‘Is Unallocated’, and now we filter down to just the files we’ve found in unallocated space.

Thanks for watching. If you have any questions or comments, hit us up in the forums or send an email to support@griffeye.com.

How To Save Time With XAMN’s Dynamic Artifact Count Feature


At MSAB, we’re always looking to improve our software and make every product more user-friendly, intuitive, and valuable, and to help save you time.

We’ve recently improved the way that XAMN displays and counts artifacts. Let’s take a look at the new functionality.

We’ve opened this case in XAMN, and from the start we can get a lot of information from just looking at the filter pane on the left. The numbers next to each category tell how many artifacts have been extracted for each of them.

If we press the arrow to open the ‘Messages’ subitem, we can see more numbers next to each subcategory. Let’s see what happens when we add a filter.

For example, I’m only interested in artifacts from last week, so I click on ‘Last week’ in the ‘Time’ filter. Notice that the artifact count changes based on the added filter.

The new dynamic artifact counting feature lets you see in which categories the artifacts from last week are located. We also see that some categories are greyed-out now, which means there are no artifacts in those categories which match the current filter selection, so I don’t need to click on them.

If I unselect the ‘Time’ filter from last week, then the greyed-out categories become available again, and the artifact count changes as well.
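The dynamic counting behaviour described above amounts to recomputing per-category counts against the active filter set. A minimal sketch of that model follows – the data, category names, and filter are hypothetical, and this is not XAMN's actual implementation:

```python
from collections import Counter
from datetime import date

# Hypothetical extracted artifacts: (category, timestamp)
artifacts = [
    ("Messages", date(2019, 9, 2)),
    ("Messages", date(2019, 8, 1)),
    ("Pictures", date(2019, 9, 3)),
    ("Calls",    date(2019, 7, 15)),
]

def category_counts(artifacts, filters=()):
    """Recount artifacts per category under the active filters;
    a category whose count drops to zero would be greyed out."""
    kept = [a for a in artifacts if all(f(a) for f in filters)]
    return Counter(cat for cat, _ in kept)

last_week = lambda a: a[1] >= date(2019, 9, 1)  # stand-in for the 'Last week' filter
print(category_counts(artifacts))               # no filter: every category counted
print(category_counts(artifacts, [last_week]))  # 'Calls' drops out entirely
```

Unselecting the filter is just recomputing with an empty filter list, which is why the full counts reappear instantly.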

The new dynamic artifact counting feature will help save you time when searching for evidence in XAMN.

And remember that you can always clear all selected filters and start from scratch, just by pressing the ‘clear filter selection’ icon at the top of the filter pane.

Visit www.msab.com/products/xamn for more information.

Three Reasons Why Call Detail Records Analysis Is Not “Junk Science”


by Patrick Siewert, Principal Consultant, Pro Digital Forensic Consulting

Since introducing our private sector clients to the impact that cellular call detail records (CDR) analysis & mapping can have on their cases, we’ve had a lot of robust discussions with litigators and clients about the veracity and value of this evidence.  CDR analysis has been used for decades in law enforcement to help prove or disprove the approximate location of criminal defendants in major crimes.  Only in the past several years have civil litigators and insurance companies also been introduced to the value that this evidence can have on their cases and/or claims investigations.  In the time we’ve been conducting CDR analysis, we’ve worked on varying types of cases from criminal prosecution for smaller prosecutors’ offices to domestic litigation to help prove/disprove cohabitation to high-dollar insurance claims to help determine if the claim and associated statements made under oath are verifiable with regard to location.  This specialty offshoot of digital forensics requires constant knowledge updating with regard to carrier practices and specialized training and tools to be able to perform these analyses effectively.

However, mainly among the Criminal Defense Bar, the notion has been put forth that CDR analysis may be “junk science” and therefore potentially unreliable as evidence in legal proceedings.  One high-profile case in which CDR analysis was used to obtain a conviction was the case of State v. Adnan Syed, chronicled in the Serial Podcast.  However, as we’ve seen more recent developments in that case unfold, the “junk science” claim doesn’t necessarily lie with the practice, but rather with the practitioner.  Indeed, even in computer forensics, certain vendors of forensic tools like to claim their tool has been “validated in court”, when in reality it is the examiner and their competence that needs to be validated in court.  The tool (or in this case, the cellular records) is just a dataset that needs to be analyzed competently to be introduced as evidence in a legal proceeding.

To establish that CDR analysis is not “junk science”, here are three salient points that help debunk the myth that these records and their associated analysis are unworthy of evidentiary status.

Reason #1: Cellular Records Are “Pure” Evidence

What do we mean by “pure” evidence?  Consider for a moment other types of digital evidence that are analyzed for use in court, such as the cell phone itself or a computer system.  These items are generally affected by the user to a great degree and therefore can be open to some scrutiny about the weight and value they hold.  Cellular records are only available via court order or search warrant to the cellular provider.  A Verizon Wireless customer cannot call customer support and ask for their cellular call detail records with historical cell site data.  The provider will not release this data absent legal process.  This means the user has very limited (if any) ability to manipulate the data, which makes the evidence about as pure as it gets.

Furthermore, the record-keeper has no vested interest in altering the evidence.  In fact, they have every reason to maintain better, more accurate records!  It is a fact within the cellular industry that CDRs were never meant to be used as evidence in legal proceedings.  CDRs are kept by cellular providers so they can log and analyze their own networks for efficiency and to increase overall customer experience on the network.  Simply put, the records are kept for customer service purposes and cellular companies don’t make money by having poor customer service.  It is a fortunate byproduct that these records may be obtained via legal process and analyzed for potential use in legal proceedings.  This is why cellular providers don’t maintain these records indefinitely, as detailed in our 2017 article Cellular Provider Record Retention Periods.

Name another type of digital evidence that the user never touches and to which they generally don’t have access!

Reason #2: Automated Tools Have Greatly Decreased The Human Error Factor

Back in 2001, when the incident detailed in season 1 of the Serial Podcast occurred, there were few, if any, automated tools with which to conduct CDR analysis.  In modern casework, we have many options for automated tool analysis, including CellHawk, ZetX, CASTviz, Map Link and Pen Link, as well as some others.  Use of automated tools can save time and greatly reduce error, but they come with a few warnings:

  • Not all tools are created equally. If you’re using a tool that is free [to law enforcement], you’re generally getting what you pay for.
  • Don’t rely on the tool to do all of the work. Automated tools are great, but they cannot tell you if someone likely shut their phone off or sent a call to voicemail or left their phone in one location while committing an offense somewhere else.  Only manual analysis of the data and the behavior of the user can help verify these conclusions.
  • VALIDATE! If an automated tool is telling you something, make sure to always refer back to the original record for validation.  If an automated tool is citing a GPS coordinate for location, make sure you validate that there is actually a cell site at that location.

Reason #3: Trained, Experienced Analysts Don’t Deal in “Junk Science”

One of the traps digital forensic examiners of all ilks are susceptible to is the drawing of conclusions not based on fact.  While it’s true that a trained, experienced professional may reach conclusions based upon device activity, those same conclusions have to be rooted in some facts at some point.  The trap that sometimes rears its ugly head is when we reach conclusions that are either outside of our expertise or are not supported by the data.

There are several traps documented in litigation over the course of the life of CDR analysis in legal proceedings that have led to the claim of “junk science”.  Probably the biggest of these (and the one cited in the article linked above and again here) are conclusions about cell site range.  As analysts, we are not cellular engineers and we cannot be engaged in speculation or discussion about the “range” of a particular cell site.  This is why in most cases we approximate location of the target device in the investigation and do not get entwined in discussions about cell site range.  Even if we were fortunate enough to have propagation maps from the cellular provider which detail the effective/optimal range of a cell site, we still won’t draw conclusions about range.  It is not within the expertise of most analysts to discuss range.  That is for a cellular engineer to conclude, not an analyst of cellular records.

There are behaviors and activity that the records can tell us about, however.  A trained analyst can usually tell if the phone was off, or if a call was sent straight to voicemail, or if the phone was left in one location for a prolonged period.

At the heart of the records is user behavior.  Is there a pattern of behavior that is not adhered to during the time of the alleged incident?  Is there link analysis that can be done to confirm likely associates or accomplices?  If there are alleged accomplices, does normal text or call activity cease with these persons during the time frame of the incident?

All of these items and more can help lead a trained, experienced analyst to conclusions with a reasonable degree of certainty, but with most of these items, we require a larger dataset to compare the behavior at the time of an incident with behavior at other times.  An analyst cannot identify these behaviors with 24 or 48 hours’ worth of records.  This is not enough data from which to draw conclusions about behavior.  This is also why we highly advise obtaining at least 30 days of records on either end of the incident, preferably more.  More data is better when it comes to CDR analysis.
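The kind of baseline-versus-incident comparison described above can be sketched in a few lines. Everything here – the numbers, dates, and the "goes silent" test – is hypothetical and purely illustrative of the analytical idea, not any specific tool's method:

```python
from datetime import date

# Hypothetical CDR rows: (call date, contacted number). The alleged
# accomplice is contacted daily across the dataset -- except during
# the incident window, when activity with them abruptly stops.
contact = "555-0101"
records = [(date(2019, 5, d), contact)
           for d in range(1, 29) if d not in (14, 15, 16)]
incident_window = (date(2019, 5, 14), date(2019, 5, 16))

def contact_ceases(records, number, window):
    """True if an otherwise-active contact goes silent inside the window."""
    start, end = window
    inside  = [d for d, n in records if n == number and start <= d <= end]
    outside = [d for d, n in records if n == number and not start <= d <= end]
    return len(outside) > 0 and len(inside) == 0

print(contact_ceases(records, contact, incident_window))  # True
```

Note that the test is only meaningful because the surrounding weeks of data establish a baseline pattern; with 24-48 hours of records there would be no "outside" behavior to compare against, which is exactly the point made above.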

The ultimate test of whether the conclusions drawn from trained, experienced analysis of the records are “junk science” lies with the competence of the analyst.  An analyst who draws conclusions not based on facts is what leads an otherwise valid form of data analysis to be dubbed “junk science”.

Wrapping It Up

In any forensic discipline, there is the possibility for human error or oversight. We’re not infallible, after all, and we can’t be expected to be perfect all the time. But CDR analysis is the only one in which the term “junk science” has been bandied about quite a bit. Deeper inspection of the issues involved in each case where this claim has been made can provide lessons for current and future analysts to read and take heed. It’s when our conclusions span beyond the breadth of our expertise and what the data tells us that we get into trouble. Ultimately, everyone wants to see justice done. If we can use CDR analysis successfully in litigation without reaching past our ability into conclusions that are open to extreme scrutiny, justice will be served.

About Patrick Siewert

Patrick Siewert is the Principal Consultant of Pro Digital Forensic Consulting (www.ProDigital4n6.com), based in Richmond, Virginia.  In 15 years of law enforcement, he investigated hundreds of high-tech crimes, incorporating digital forensics into the investigations, and was responsible for investigating some of the highest jury and plea bargain child exploitation investigations in Virginia court history.  Patrick is a graduate of SCERS, BCERT, the Reid School of Interview & Interrogation and multiple online investigation schools (among others). He continues to hone his digital forensic expertise in the private sector while growing his consulting & investigation business marketed toward litigators, professional investigators and corporations, while keeping in touch with the public safety community as a Law Enforcement Instructor.

How To Use Griffeye Brain – Artificial Intelligence


The Griffeye Brain in Analyze DI Pro version 19.2 brings the power of machine learning and artificial intelligence to help you quickly locate and identify child sex abuse material within your investigations.

In addition, the Griffeye Brain now has improved object detection, allowing for multiple objects to be located within the same image. In this video, we’re going to discuss how to use the newly updated Griffeye Brain plugins with your investigation, to maximise efficiency and decrease time spent searching for relevant files.

The Griffeye Brain can now harness the power of your graphics card or GPU to analyze your case for CSA and objects roughly five times faster than running it on a CPU.

In order to run the Griffeye Brain on your GPU, an NVIDIA graphics card with CUDA support and 4GB of GDDR memory is required. The recommended minimum video card is a GeForce GTX 1080Ti, or a GeForce RTX 2070 or equivalent, with at least 8GB of GDDR memory. Remember to always make sure that you have the latest drivers from NVIDIA installed prior to installing the Griffeye Brain.

If you do not have a GPU or video card that meets the minimum requirements, you can still run the CSA brain on your CPU. The object detection, however, will only run on a GPU.

The Griffeye Brain installation package can be downloaded from our website, in the Downloads section of your Griffeye account. Once you have downloaded the installation package, go ahead and install it.

After installation, open Griffeye Analyze and visit the Analyze Forensic Marketplace. Scroll down to the ‘Plugins’ section, and you will see three entries for the brain plugins:

  • Griffeye Brain CSA – GPU
  • Griffeye Brain CSA – CPU
  • Griffeye Brain Objects – GPU

Go ahead and activate the appropriate plugins you’d like to use.

Don’t forget to take a moment to read the informational tabs in each plugin, as they can answer many common questions you might have.

Let’s go to the main program settings to verify that the plugin is installed and activated.

Once you verify the plugins have been installed and activated, you can now have them run when you create your cases. Simply select the plugins you’d like to run on the ‘After Import Processing’ page as a part of your case creation process.

All of the Griffeye Brain plugins can also be run at any time from within your case, by selecting ‘Rescan Case Against Plugins’, located on the Case Data tab.

Now let’s take a look at how the CSA Brain and Object Classifiers work.

The CSA Brain analyzes all non-categorized images in your case that are at least 100×100 pixels, and assigns each image a score from 1-99. This score represents the probability that the analyzed image contains sex abuse material.

Each analyzed image will then be assigned a bookmark based on its score: CSA Low, CSA Uncertain, or CSA High. You can see the CSA scores assigned by the Griffeye Brain in the grid view. Your case data can also be sorted by CSA score as a part of your workflow.
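The score-to-bookmark mapping described above can be modeled as a simple threshold function. The 1-99 scale comes from the text, but the cutoff values below are hypothetical, chosen only to illustrate the triage idea – they are not Griffeye's actual thresholds:

```python
def csa_bookmark(score: int) -> str:
    """Map a 1-99 CSA probability score to a triage bookmark.
    The cutoffs (33/66) are illustrative, not Griffeye's real values."""
    if not 1 <= score <= 99:
        raise ValueError("score must be between 1 and 99")
    if score < 33:
        return "CSA Low"
    if score <= 66:
        return "CSA Uncertain"
    return "CSA High"

# Sorting case data by score, as the workflow suggests:
images = {"img_001.jpg": 12, "img_002.jpg": 87, "img_003.jpg": 50}
for name, score in sorted(images.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score, csa_bookmark(score))
```

Sorting descending by score puts the highest-probability material first, which is what lets an examiner start with the most likely hits rather than reviewing the case in file order.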

Now let’s take a look at how the Griffeye Brain Object Classifier works. Griffeye Brain Objects analyzes all of the images in your case and attempts to identify one or more detected objects in each image.

All images that contain identified objects are then assigned a bookmark for that object, or object type, making it easier to find images with specific objects in your case.

For example, let’s say you’re looking for images that contain people wearing glasses. You can simply locate the bookmark group for ‘Personal Care – Glasses’ and quickly filter your results to the images where the Griffeye Brain found glasses.

It is important to note that the Griffeye Brain is artificial intelligence and is not always 100% accurate. It should only be treated as a starting point for your investigation. Don’t forget to utilize other filters and functions, such as Face Detection and Analyze Relations, for a thorough analysis of your case.

Thanks for watching. If you have any comments or questions, hit us up in the forums or shoot us an email at support@griffeye.com.

Walkthrough: XRY Photon Manual


XRY Photon is a solution designed for recovering smartphone app data that’s inaccessible through normal extraction techniques. Now the power of XRY Photon has been expanded to cover hundreds of additional apps, with a new manual option.

Before using XRY Photon, always check the XRY device manual first, to see if an app is supported, because that’s always the fastest route.

In this demonstration, we’ll show you how our new manual option works by extracting the conversation from the Instagram app.

First, put the device in flight mode and disable alarms to prevent interruptions.

Next, choose the correct cable and connect the device to your PC, Kiosk or tablet. Open the app and navigate to the content you want to capture.

If the app you wish to examine has a configurable setting to allow screenshots, go to the app settings and make sure that screenshots are enabled. Then start XRY and press the ‘Photon’ extraction option. Fill in the case information, and press ‘Begin Extraction.’

As the process starts, you’ll be asked to provide information, such as which Photon extraction you want.

Next, decide how many frames you want to capture, and in which direction to scroll. XRY starts the process of taking screenshots and keeps going until it’s done. When XRY is done, you will have the option to make more selections.

Open another view in the app to capture, or switch to another app. The screenshots taken by XRY will be available for viewing and analysis in XAMN. Photon also makes written content available as readable text, which enables text search, filtering, and other types of analysis.

Here’s an example of an extraction that’s been opened in XAMN for analysis. You can easily search for a name, number, or other information, to quickly find important insights.

Learn more at MSAB.com.

Can Your Investigation Interpret Emoji?


by Christa Miller, Forensic Focus

Emoji are everywhere — including in your evidence. Used across private-messaging apps and email, social media, and even in passwords and account names, emoji are pictographic representations of objects, moods, and words. They’re a convenient shortcut for users who want to convey tone and emotion in digital communication without using a lot of words.

Preston Farley, a Special Investigator with the Federal Aviation Administration (FAA), believes “emoji will emerge as a prominent form of communication sooner rather than later,” and that there are potential ramifications for digital forensics examiners and investigators when it comes to analyzing and testifying about emoji.

Presenting at the Techno Security and Digital Investigations conference in Myrtle Beach in June 2019, Farley explained that emoji present two distinct challenges.

First, like all language, emoji can complicate communication. The recipient of a text with an emoji may not interpret it the way the sender intended. If that can happen between two people, then it can happen between the investigators, attorneys, judges, and jurors responsible for determining guilt or innocence in a trial.

Emoji can complicate conversation analysis.

Second, emoji in digital evidence can present additional, more technical complications for forensic examiners. Forensic tools don’t always render the emoji in question, or depending on the acquisition device, may not render them in the way the sender viewed them when choosing them.

At that point, Farley said, interpretation can end up being a matter of one witness’ word against another’s. Currently no legal documentation exists to help judges and juries interpret emoji. Some judges may strike emoji evidence entirely, refusing to allow it to be considered alongside other communication. And no scientific standards exist for the collection or analysis of emoji relative to other forms of digital communication.

Emoji: The Technical Basics

Emoji are tiny pictographs that consist of human and animal faces, symbols, flags, food items, plants, leaves, flowers and a variety of other items. First created in 1999 by a Japanese artist, emoji have come to rely on Unicode, which itself was developed in response to the need for non-English character sets. (For a more detailed description of how Unicode came to be, read this brief Medium article.)

Unicode’s code points are expressed in four- to six-digit hexadecimal code, preceded by a U+ (so that the final result is something like U+005449). In his presentation, Farley explained how emoji encodings result from endianness and pictographic languages, as well as the operating systems on which they appear. The Unicode byte order mark — “FE FF” or “FF FE” — tells the recipient computer which endianness to use, defaulting to big endian encoding if no byte order mark is present. 

Currently, more than 1.1 million code points are available in the Unicode code space, which can be encoded in the UTF-8, UTF-16, or UTF-32 encoding forms. Farley said the lexicon of Unicode emoji grows every year, including even obscure symbols such as Egyptian hieroglyphs, alchemy symbols from the Middle Ages, and so on.
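The encoding forms and the byte order mark are easy to see in practice. A short demonstration using the “grinning face” emoji, U+1F600 (standard Python; no assumptions beyond the Unicode behavior described above):

```python
# U+1F600 GRINNING FACE across the three Unicode encoding forms.
emoji = "\U0001F600"

print(f"U+{ord(emoji):04X}")               # U+1F600
print(emoji.encode("utf-8").hex(" "))      # f0 9f 98 80
print(emoji.encode("utf-16-be").hex(" "))  # d8 3d de 00  (surrogate pair, big endian)
print((b"\xff\xfe" + emoji.encode("utf-16-le")).hex(" "))
                                           # ff fe 3d d8 00 de  (BOM "FF FE" = little endian)
print(emoji.encode("utf-32-be").hex(" "))  # 00 01 f6 00
```

This is exactly the byte-level detail an examiner may encounter in raw hex views: the same single code point produces different byte sequences depending on encoding form and endianness, and the leading BOM bytes tell the consumer which order to use.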

Among the operating systems, said Farley, Apple’s Macintosh and iOS offer the best Unicode emoji and libraries. Until Windows 10, Microsoft came in as a “distant second,” and based on his 2018 research, Farley notes font libraries need to be installed for Linux even to make it into the game.

There are scores of emoji to choose from, each with multiple meanings.

Because Unicode is supported by all operating systems, Unicode-based emoji render whether senders and recipients are on Apple, Windows, iOS, or Android platforms. Therein, however, lies the rub: Unicode only suggests — but doesn’t define — an emoji’s appearance. Apart from black-and-white and color standards, how the representation of, say, a “grinning face with smiling eyes” will work is largely up to the authors.

So, while recipients will see a Unicode-encoded emoji regardless of platform, what’s represented across Apple, Google, Windows, Samsung, LG, HTC, Twitter, Facebook, Mozilla, and others might look very different.

A 2018 study on emoji rendering differences stated: “Through a survey of 710 Twitter users who recently posted an emoji-bearing tweet, we found that at least 25% of respondents were unaware that the emoji they posted could appear differently to their followers. Additionally, after being shown how one of their tweets rendered across platforms, 20% of respondents reported that they would have edited or not sent the tweet.”

And that is if the emoji render at all. An emoji encoded on iOS that has no equivalent in Android, for example, might appear only as a blank block. That means that emoji available to a suspect may not render on a victim’s device, so that if the latter is all a forensic examiner has, they may be missing important contextual clues.

How Emoji Are Used

Most people are familiar with emoji from messaging via text and social media. They can contextualize sentiment by adding, say, a smile or a wink; or they might replace words altogether.

The most common usage isn’t the sole usage, however. Farley says Windows offers support for using an emoji as an account name, and among Apple products, emoji can even be used in password creation. 

Of course, usernames and passwords aren’t subject to the same interpretive challenges as a text message or social media post. A somewhat humorous 2017 article described the author’s “ongoing struggle to master emoji”: to use these pictographs in a way that others can understand. 

In part, that’s because emoji have no grammatical structure that might help to standardize their usage. One study concluded:

“In the case of emoji-only sequencing, they appear to lack the characteristics of complex grammar, instead relying on linear patterning motivated by the meanings of the emoji themselves…. In line with this, emoji appear to be effective for communicative multimodal interactions, often using text and image relationships similar to those between speech and gestures.”

Another pair of researchers concurred with the use of emoji to indicate nonverbal gestures. Gawne and McCulloch wrote: “To best understand emoji, we need to appreciate them for their current function…. They are not necessarily composed of meaningful units, nor do they necessarily build up into more complex units of meaning, like language does. Rather, like gesture, emoji are context-sensitive and have far more flexibility in use than language.”

That concept seems to be behind changes to community standards around the use of sexually charged emoji on Facebook and Instagram. Updated in July and enacted in September 2019 in an apparent effort to limit “sexual solicitation,” the standards ban “Suggestive Elements” such as “[commonly used] sexual emoji or emoji strings… alongside an implicit or indirect ask for nude imagery, sex or sexual partners, or sex chat conversations,” according to the New York Post.

Even emoji that look innocuous may have double meanings.

“Communicative multimodal interactions” aren’t all suggestive, of course. In his presentation, Farley said emoji can be useful to communicate high-level ideas without language mastery between less literate people, or those communicating across languages and cultures. 

However, what means one thing in one community can mean something totally different elsewhere. In a LinkedIn article, consultant Martin Nikel wrote that for example: “Emojii use in South Africa shows some significant differences in interpretation to other western cultures. Therefore a conversation between two people from different regional or cultural backgrounds may carry different intent and received understanding.” These issues may need to factor into an investigation.

Emoji And The Law

Eric Goldman, a Santa Clara University law professor and blogger, reported that the number of US cases referring to emoji as evidence increased from 33 in 2017 to 53 in 2018, which accounted for 30 percent of the all-time number of opinion references to emoji. 

Although Goldman noted none of these rulings were “substantive” with regard to emoji, to pair this trend with the research on ambiguous grammar and cross-cultural communication makes it easier to see what Goldman called “lawsuits in the making.” 

A 2017 article at The Fashion Law posed the following questions:

“When a text — such as a text message, email, or social media post — containing an emoji is presented as evidence, is the emoji significant and unambiguous enough to be presented to the jury? Are some emoji significant, but others not? And if they are important, how is a court to share such information with a jury? By sight? By sound?”

Joseph Remy, a prosecutor and National White Collar Crime Center (NW3C) Advisory Board member, says little case law is on the books currently to guide prosecutors, judges, or investigators on emoji evidence, but that other types of cases are instructive.

“Thirty years ago, the issue was with drugs and drug slang,” Remy explains. “Now we’re attacking the same problem, just in a different way.” In other words, emoji are very similar to the kind of coded language used in the drug trade.

Remy refers to a 2002 case, U.S. v. Garcia 291 F.3d 127 (2002), regarding a coded verbal conversation. Prosecutors in that case brought in a government informant to testify that the conversation used code words from the asbestos removal industry — where both he and the defendant worked — to arrange a drug deal.

In Garcia, the defendant’s convictions were vacated because the witness hadn’t laid a sufficient foundation for how the defendant knew the code related to drugs and not asbestos. Citing U.S. v. Yannotti, 541 F.3d 112, 126 n.8 (2d Cir. 2008), the Second Circuit provided key advice to law enforcement regarding lay versus expert testimony: 

“An undercover agent whose infiltration of a criminal scheme has afforded him particular perceptions of its methods of operation may offer helpful lay opinion testimony…even as to co-conspirators’ action that he did not witness directly. By contrast, an investigative agent who offers an opinion about the conduct or statements of conspirators based on his general knowledge of similar conduct learned through other investigations, review of intelligence reports, or other special training… must qualify as an expert.” 

Using Garcia as guidance, Remy says, investigators are commonly asked to establish how their experience — the number of cases they’ve worked that involved particular elements, for instance — leads them to interpret evidence.

The choice of which emoji to use is very subjective.

For example, the communications of child predators and traffickers are frequently coded. Emoji usually represent what the predator wants to do to their victim; or in a drug or human trafficking case, what they’re selling.

However, Remy says that since emoji generally consist only of a small portion of the conversation, they can be interpreted based on an investigator’s experience and context. In other words, an investigator who works enough of the same kinds of cases involving emoji can gain the experience needed to interpret the coded language. 

Even so, depending on the type of case, emoji evidence can be highly subjective, which was the crux of a free-speech case, Elonis v. U.S. 575 U.S. (2015). There, the U.S. Supreme Court held that to convict a person of making threats to others would require proof of the defendant’s subjective intent to threaten, not simply a reasonable person’s objective belief that they had been threatened.

Julia Greenberg, writing for Wired in 2015, stated: 

“For investigators, attorneys, and jurors trying to determine, or prove, the intent of a phrase, as in the Elonis case, it’s often much more complicated than : ) means this past sentence was pleasing. Maybe the person was being polite. Maybe they were trying to dull a blow. Maybe it’s an evil grin and the person is being ironic. Without the context of who is using the symbol, who received it, and an understanding of how those two people—or the people in their community—typically use it, the intent may not immediately be clear.”

Indeed, Greenberg added, “…what we mean when we use language is never crystal clear, and never has been.” Or as Nikel put it for LinkedIn: “Acronyms and turns of phrase, intent, interpretation… how well does the reviewer really understand communication between two parties… even without emoji?”

These issues are important given emoji such as gas pumps, which in 2015-2016, says Remy, referred to marijuana among teens in the northeastern US, but can also refer to a certain sex act in human trafficking cases, and more innocently to flatulence or the simple act of pumping gas.

Yet emoji interpretation is unlikely to require qualified expert testimony. While Remy doesn’t write off the possibility for a case where emoji content is significant or material, Goldman was quoted in The Verge in early 2019 saying: “Emoji usually have dialects. They draw meaning from their context. You could absolutely talk about emoji as a phenomenon, but as for what a particular emoji means, you probably wouldn’t go to a linguist. You would probably go to someone who’s familiar with that community.”

That’s why a digital forensic examiner could testify to what hexadecimal code converts to, says Remy, but not an opinion on what the emoji’s meaning is. That interpretation would, again, be left to the investigator’s ability to put the emoji in the context of the surrounding conversation.
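As a rough illustration of what that hexadecimal-to-emoji testimony covers, the short Python snippet below (written for this article, not drawn from any forensic tool) dumps the code point and raw byte encodings of a single emoji:

```python
# How one emoji is actually stored: one code point, different byte sequences
pistol = "\U0001F52B"  # PISTOL (U+1F52B), rendered as a water gun on many platforms

print(f"U+{ord(pistol):05X}")      # code point: U+1F52B
print(pistol.encode("utf-8"))      # b'\xf0\x9f\x94\xab'
print(pistol.encode("utf-16-le"))  # surrogate pair: b'=\xd8+\xdd'
```

An examiner carving raw data would see one of these byte sequences; which one depends on the encoding of the artifact (UTF-8 for most web and SQLite content, UTF-16 for much of Windows).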

Even so, Farley says: “The domains of knowledge necessary are formidable…. if your case has a significant emoji-related component, you should probably be up to speed on what the forensic basis for them are… [and] plan on and practice being a digital concepts instructor as well.”

Encountering Emoji In Digital Evidence

Websites like EmojiStats.org and emojitracker.com track real-time emoji hits on platforms like Twitter and Facebook Messenger, and while they don’t offer any information regarding the emoji’s context, they can provide a good sense of the emoji you might be most likely to encounter.

Just as with mobile apps, however, you’ll often encounter emoji that are unsupported by forensic tools. Farley’s presentation described how tools, relying as they do on operating systems’ encoding, may not properly render emoji. Even with Unicode support and interpretation, emoji have to be preloaded in sets in order to be rendered. With no built-in palette of what the code means, these emoji can’t be represented on screen.
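Even when a tool draws a placeholder box instead of a glyph, the underlying code point is still recoverable. A minimal sketch using Python’s standard unicodedata module:

```python
import unicodedata

def describe(ch: str) -> str:
    """Return the code point and official Unicode name of a character."""
    return f"U+{ord(ch):05X} {unicodedata.name(ch, '<unassigned>')}"

print(describe("\U0001F600"))  # U+1F600 GRINNING FACE
```

Looking up the official character name this way lets an examiner document exactly which emoji was present, regardless of whether the review platform can render it.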

Emoji meanings also differ across languages, cultures and subcultures.

This can affect how the investigator interprets what was said. For example, a gun could be represented as a water pistol and may or may not match what the user would see.

A potential additional wrinkle is the Unicode Private Use Area (PUA), which Farley said enables private parties to create their own emoji (often associated with proprietary logos). That makes it possible to use emoji no one else is familiar with.

Calling the concept “passive cryptography,” Farley says the PUA requires too deep a level of technical knowledge for most bad actors to leverage it. “Encryption would accomplish nearly the same thing at this point and is much easier to use.”
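The Private Use Area is at least easy to spot programmatically: its code points carry the Unicode general category “Co” and have no assigned name. A short check, sketched in Python:

```python
import unicodedata

def is_private_use(ch: str) -> bool:
    # U+E000-U+F8FF (plus planes 15 and 16) are reserved for private use,
    # which Unicode marks with the general category 'Co'
    return unicodedata.category(ch) == "Co"

print(is_private_use("\uE000"))      # True  (first BMP Private Use Area code point)
print(is_private_use("\U0001F600"))  # False (an assigned emoji)
```

Flagging such characters during review tells the examiner that any rendering shown is vendor-specific and that the sender’s intended glyph may be unrecoverable without the originating app.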

Forensic tool support extends beyond rendering and reporting to keyword searches, and potentially other areas, such as artificial intelligence detection tools. “From a long-term perspective,” Farley adds, “as emoji are based on the Unicode standard, whatever the support ‘traditional’ languages receive, the same level of support should be accorded to emoji as well.” That includes live memory support to help capture emoji used in or as passwords, at least on Apple products.

No vendor representatives were available for comment as to the degree of support their tools do or don’t provide, though Farley speculates, “…there has not been much of a demand for this type of support by the end-user community…. If the [forensic] tools are rendering most of the emoji seen for most of the cases worked and the judicial system isn’t raising a ruckus, then everybody’s happy,” giving vendors no reason to spend limited resources on emoji artifacts.

Remy adds that emoji support may be viewed as “too big a task” across so many platforms, so that vendor support depends on a critical mass of cases demanding it. What would critical mass be? “When the contents of a message in digital format consists of a majority of the conversation in emoji such that the code cannot be interpreted using context, previous interaction and/or experience,” Remy says.

Right now, because those contents can often be interpreted based on other messages, vendor support isn’t needed. Remy believes it’s simply too soon to tell whether emoji communications are just trendy, or whether they’ll catch on. If they do, though, he anticipates that vendor support might take the form of a language pack, available for purchase like any language other than the examiner’s primary selected language.

Could forensic examiners code their own forensic tools? “If examiners are writing code, they must have a good grasp of what the software is doing with the evidence,” Farley says. “If they are unaware of why there are “strange” characters mixed in with their Unicode and they ignore them, that’s a problem.  

“For many examiners, using premade libraries like Pymoji.py will be sufficient because this shows they have a baseline understanding of what they are examining and will have the wherewithal to investigate anomalies such as Unicode characters which don’t render after a Python conversion.”
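One such anomaly worth recognizing is a lone UTF-16 surrogate, which naive byte-level slicing or truncated carving of Windows strings can leave behind. A hypothetical fragment:

```python
# A high surrogate (0xD83D) with its low half cut off, e.g. by truncated carving
fragment = b"\x3d\xd8"

try:
    fragment.decode("utf-16-le")
except UnicodeDecodeError as exc:
    print("undecodable fragment:", exc.reason)
```

A strict decoder refuses such data outright; an examiner who understands why can recognize that an emoji (or other supplementary-plane character) was present but damaged, rather than silently discarding it.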

For the moment, emoji are used largely to enhance conversation, not to conduct it. In other words, they aren’t completely irrelevant as evidence, but because most digital messages still rely on words, they probably aren’t pivotal, either.

Still, as online communication continues to evolve, it makes sense for digital forensic examiners — and the investigators and prosecutors they work with — to be aware of the practical and legal challenges they may encounter in the future.


How To Decrypt BitLocker Volumes With Passware


Decrypting BitLocker volumes or images is challenging due to the various encryption options offered by BitLocker that require different information for decryption.

This article explains BitLocker protectors and talks about the best ways to get the data decrypted, even for computers that are turned off.

BitLocker Encryption Options

Protectors that can be used to encrypt a BitLocker volume include:

  1. TPM (Trusted Platform Module chip)
  2. TPM+PIN
  3. Startup key (on a USB drive)
  4. TPM+PIN+Startup key
  5. TPM+Startup key
  6. Password
  7. Recovery key (numerical password; on a USB drive)
  8. Recovery password (on a USB drive)
  9. Active Directory Domain Services (AD DS) account

To list the protectors of a given BitLocker volume, type the following command in a command-line prompt (cmd):

manage-bde -protectors -get C:
(where C: is the name of the mounted BitLocker-encrypted volume)

The list of protectors will be displayed as follows:

Detailed information on each protector type, in accordance with Microsoft documentation, is provided below:

  1. TPM. BitLocker uses the computer’s TPM to protect the encryption key. If you specify this protector, users can access the encrypted drive as long as it is connected to the system board that hosts the TPM and the system boot integrity is intact. In general, TPM-based protectors can only be associated with an operating system volume.
  2. TPM+PIN. BitLocker uses a combination of the TPM and a user-supplied Personal Identification Number (PIN). A PIN is four to twenty digits or, if you allow enhanced PINs, four to twenty letters, symbols, spaces, or numbers.
  3. Startup key. BitLocker uses input from a USB memory device that contains the external key. It is a binary file with a .BEK extension.
  4. TPM+PIN+Startup key. BitLocker uses a combination of the TPM, a user-supplied PIN, and input from a USB memory device that contains an external key.
  5. TPM+Startup key. BitLocker uses a combination of the TPM and input from a USB memory device that contains an external key.
  6. Password. A user-supplied password is used to access the volume.
  7. Recovery key. A recovery key, also called a numerical password, is stored as a specified file in a USB memory device. It is a sequence of 48 digits separated by dashes.
  8. Active Directory Domain Services account. BitLocker uses domain authentication to unlock data volumes. Operating system volumes cannot use this type of key protector.

Each of these protectors encrypts the BitLocker Volume Master Key (VMK); the VMK, in turn, encrypts the Full Volume Encryption Key (FVEK), which is used to encrypt the volume data.
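Before worrying about protectors at all, an examiner first has to recognize that a volume is BitLocker-encrypted. One reliable indicator is the "-FVE-FS-" OEM signature at byte offset 3 of the volume’s boot sector. The sketch below is an illustration written for this article, not part of Passware:

```python
def is_bitlocker_volume(image_path: str) -> bool:
    """Check a raw volume image for the BitLocker '-FVE-FS-' boot-sector signature."""
    with open(image_path, "rb") as f:
        boot_sector = f.read(512)
    # BitLocker replaces the filesystem OEM ID (e.g. b'NTFS    ') at offset 3
    return boot_sector[3:11] == b"-FVE-FS-"
```

Running this against each partition of a raw image is a quick triage step to decide whether the decryption workflow described here applies.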

Using Memory Images for Instant Decryption of BitLocker Volumes

If a given BitLocker volume is mounted, the VMK resides in RAM.

When Windows displays a standard Windows user login screen, as above, this means that the system BitLocker volume is mounted and the VMK resides in memory. Once a live memory image has been created *, it is possible to use Passware Kit to extract the VMK and decrypt the volume.

When you turn on a computer configured with the default BitLocker settings, Windows reads the encryption key from the TPM chip, mounts the system drive and proceeds with the boot process. In this case the VMK resides in memory as well.

Passware Kit extracts the VMK from the memory image (or hibernation file), converts it to FVEK, and decrypts the BitLocker volume. It also recovers the Recovery key and Startup key protectors, if available. A sample result is displayed below:

As shown on the screenshot above, Passware Kit Forensic displays both the Encryption/Recovery key and Startup key (file) protectors, as well as creates a decrypted copy of the volume.

SUMMARY

To summarize, if the memory image contains the VMK, the volume gets decrypted, regardless of the protector type used to encrypt the volume. By extracting this VMK, it is also possible to recover the protectors (Recovery Key and Startup Key).

However, if the memory image does not contain the VMK (the volume was not mounted during the live memory acquisition, the hibernation file had been overwritten, etc.), it is only possible to decrypt the volume with the Password protector, i.e. to recover the original password (using brute-force or dictionary attacks).

The password recovery process is time-consuming and depends on the password complexity, any knowledge about the password, and the hardware resources available for password recovery, such as GPUs and the availability of distributed computing. Once recovered, the original password can be used to mount the BitLocker volume.

For some volumes, Password might not be among the protectors used, and the volume might be protected with other protectors instead (e.g. Startup key or TPM+PIN). In this case it is impossible to decrypt the volume without a memory image acquired while the volume was mounted, or a hibernation file that contains the VMK.

* It is important to acquire a live memory image correctly in order to preserve residing encryption keys. We recommend using the following third-party tools to acquire memory images: Belkasoft Live RAM Capturer and Magnet RAM Capture, both available free of charge, and Recon by SUMURI for macOS.

Find out more at Passware.com.

Hunting For Attackers’ Tactics And Techniques With Prefetch Files


by Oleg Skulkin

Windows Prefetch files were introduced in Windows XP, and since that time they have helped digital forensics analysts and incident responders to find evidence of execution. 

These files are stored under %SystemRoot%\Prefetch, and are designed to speed up applications’ startup processes. If we look at any prefetch files, we can see that their names consist of two parts: an executable name, and an eight-character hash of the executable’s location. 
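That eight-character suffix is a 32-bit hash of the executable’s uppercase device path. Several version-specific hash variants exist; the simplest, the Vista-era function documented in Joachim Metz’s libscca format notes, can be sketched as follows (later Windows versions use a more involved variant, so this will not reproduce every hash you see):

```python
def prefetch_vista_hash(device_path: str) -> str:
    """Vista-era prefetch path hash (one of several version-specific variants)."""
    # Input is the device path, e.g.
    # \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NOTEPAD.EXE
    hash_value = 314159
    for char in device_path.upper():
        hash_value = (hash_value * 37 + ord(char)) % 2**32
    return f"{hash_value:08X}"

print(prefetch_vista_hash(r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NOTEPAD.EXE"))
```

The practical point: the same executable run from two different locations produces two different prefetch files, which is itself an investigative lead.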

Prefetch files contain various metadata: executable name, run count, volume information, files and directories referenced by the executable, and of course, timestamps. We usually use a Prefetch file’s creation timestamp as the timestamp of the first execution. It also has the embedded timestamp of the last execution, and since version 26 (Windows 8.1), the 7 most recent last run times.
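All of those embedded timestamps are stored as Windows FILETIME values: 100-nanosecond ticks since January 1, 1601 UTC. A small conversion helper (assumed here for illustration, not taken from PECmd):

```python
from datetime import datetime, timedelta, timezone

WINDOWS_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a 64-bit FILETIME (100 ns ticks since 1601-01-01 UTC) to a datetime."""
    return WINDOWS_EPOCH + timedelta(microseconds=filetime // 10)

# The Unix epoch expressed as a FILETIME:
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```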

Let’s take one Prefetch file, parse it with Eric Zimmerman’s PECmd, and look at each part of the output. For demonstration purposes I’ll parse CCLEANER64.EXE-DE05DBE1.pf.

Ok, I’m going to start from the top. First of all, we have the file’s timestamps:

Next we have the executable’s name, its hash path, the executable’s size and the prefetch file version:

Since this is the Windows 10 version, we next see the run count, the most recent run timestamp, and the seven previous last run times:

Next we can see information about the volume, including its serial number and creation timestamp:

And last but not least – referenced files and directories:

The files and directories referenced by the executable are what I want to focus on today. This feature enables us as digital forensic analysts, incident responders or threat hunters to track not only the fact of execution, but also, in some cases, the exact techniques used by adversaries. Attackers use data wiping tools like SDelete quite often nowadays, so the ability to find at least traces of different tactics and techniques like these is a must-have skill for any modern examiner.

Let’s start from the Initial Access tactic (TA0001) and the most common technique – Spearphishing Attachment (T1193). Some APT groups choose weaponized attachments in a creative way. For example, Silence Group used weaponized CHM files in their spearphishing campaigns. Here is another interesting technique – Compiled HTML File (T1223). These files are run with hh.exe, so if we parse its Prefetch file, we can understand what exactly was opened by the victim:

Let’s keep digging into real-world examples and continue to the next tactic – Execution (TA0002), and the CMSTP (T1191) technique. The Microsoft Connection Manager Profile Installer (CMSTP.exe) may be used by attackers for launching malicious scripts. A good example is Cobalt Group. If we parse CMSTP.exe’s prefetch file and look at the ‘files referenced’ section, we can find what exactly was run with it:

So here we have a JavaScript scriptlet saved into 117696489.txt that was run by attackers with CMSTP. 

Let’s continue with execution examples – Regsvr32 (T1117). Regsvr32.exe is commonly used by attackers to execute arbitrary binaries. Here is another example from Cobalt Group – these guys used regsvr32.exe to run scripts, so again, if we look inside the Prefetch file of this executable, we can find the location and name of the executed script:

Next tactics – Persistence (TA0003) and Privilege Escalation (TA0004), with Application Shimming (T1138) as an example. This technique was used by the notorious Carbanak/FIN7 group for persistence. Usually sdbinst.exe is used for working with shim database files (.sdb), so we can use its Prefetch file to uncover a database’s file names and locations:

Here we have not only the file used for installation, but also the name of the installed custom database.

Let’s keep going and look at one of the most typical examples of lateral movement (TA0008) – PsExec, which interacts with the ADMIN$ network share (Windows Admin Shares, T1077). A PSEXESVC service (it may have a random name, of course, as it can be renamed by the attacker using the -r switch) will be created on the target system, and if we parse this executable’s Prefetch file, we can see what exactly was run:

Let’s finish where we started: file deletion (T1107). Many adversaries use the SDelete utility to remove software they used during a number of stages of the attack’s lifecycle. If we look at the sdelete.exe prefetch file, we can clearly see what was deleted with its help:

Of course, this is not a full list of techniques that can be found as a result of prefetch file analysis, but it should be enough to understand that such files can be used not only for finding evidence of execution, but also for uncovering the exact tactics and techniques used by the adversary.

About The Author

Oleg Skulkin is senior digital forensic analyst at Group-IB, one of the global leaders in preventing and investigating high-tech crimes and online fraud. He holds a number of certifications, including GCFA, MCFE, and ACE. Oleg co-authored Windows Forensics Cookbook, Practical Mobile Forensics and Learning Android Forensics, as well as many blog posts and articles on digital forensics and incident response you can find online.

How To Use The Griffeye Intelligence Database


Beginning with version 19, Griffeye Analyze DI Pro and Core will start using the new Griffeye Intelligence Database, or GID, to replace the legacy intelligence manager.

In this video, we’re going to discuss the changes that the GID brings to the Analyze DI interface, and how to use the Griffeye Intelligence Database system within your cases.

First off, let’s create a new case and take a look at the differences in the case creation process, which are very minor.

On the second page of the Import options, you’ll notice that the GID section has replaced the Database section. Analyze DI will now check imported files against all of your GIDs, if selected. Analyze DI will still check your files against any legacy databases you have connected.

On the Exclude page, you now have the ability to exclude files by GID source, in addition to any legacy databases you have connected. This can be done by selecting ‘Exclude files with hits in databases’ and selecting the GID source, or sources, you would like to exclude any file matches from.

These are the only two changes that version 19 brings to the case creation wizard at this time.

Now that our case has finished processing, let’s take a look at the changes to the Analyze DI interface related to the GID.

First off, you will notice that the GID has some new buttons in the ribbon on the Home tab. These buttons are used to manage, update, and migrate to your GID. You can also still access your legacy database manager from the ‘Manage Hash Databases’ button, but keep in mind your legacy databases are now in read-only mode and cannot be changed, with the exception of removing them.

Let’s select the GID management to see the GIDs you are currently connected to.

In this example, I have a local GID populated with hashes and intelligence, but because I’m using DI Pro, I am also connected to a remote GID in Sweden. How to set up and connect additional GIDs for collaboration will be discussed in a future video.

In the Case Data tab, we also see that we now have an additional GID button that will let users rescan their case against any GIDs that they are connected to. We will discuss this button’s functions in a few moments.

Moving to the bottom of the interface, you can also quickly check to see if your GID connections are OK by the GID Status infobar next to the Filters action button. Simply hover over ‘GID Status’ to see a list of GIDs and their connection state. A check mark indicates all your GIDs are in sync and ready to use.

In the Thumbnail view, users can quickly view GID and legacy database matches of individual files by hovering over the ‘Database Match’ icon in the thumbnail header icons. This will list all the databases and GIDs – including sources – that the file matched against.

GID matches also appear in the File Info panel, under the Summary tab, by scrolling all the way down to the GID section.

GID Matches columns are also now present in the grid view, each column representing an individual GID source.

Under Filters, on the Intelligence tab, there is a new quick filter for the GID, where users can filter to file matches based on individual GID sources. Data can also now be sorted by GID source matches as well.

Now let’s talk about the ‘Rescan Against GIDs’ button on the Case Data tab. This button is a two-function button: the top option will rescan your case against all available GIDs and categorize any matched files, but will not override any categorization work you may have already completed.

The bottom button allows you to match files against specific GID sources, overriding categorization for any matched files. It is important to note that using this option will overwrite any manual categorization work you may have already completed.

Click the bottom button, and a menu will appear allowing you to select the GID that you would like to overwrite your categorization from.

After you select a GID, a window will open asking which sources within that GID you would like to match against. This window requires that you read and check the box before performing a rescan. This is to ensure that you have acknowledged that your case categorization may be overwritten for matches in the GID source that you select.

Once you have completed your work and you are ready to update your GIDs with your categorization and intelligence data, select the ‘Update GIDs’ button from the ribbon.

This will allow you to select which GID you would like to update. Note that all of the hashes and intelligence will be placed in a hash source called ‘Case Work’ within the GID you update. Analyze DI will create this source for you when you perform your first GID update. Any future updates from other cases will also be assigned to the ‘Case Work’ source.

You can verify this source was created and updated by selecting the GID Manager and opening the GID you updated.

Thanks for watching. If you have any questions or comments, hit us up in the forums or shoot us an email to support@griffeye.com.

How To Conduct A Live Forensic Scan Of A Windows Computer


Learn how to conduct a Windows live scan with ADF Solutions Digital Evidence Investigator. Two USB ports are required to complete a scan: one for the Collection Key and one for the Authentication Key. Once the scan has started, the Authentication Key can be removed. A USB hub may be used in cases where the target computer only has one USB port.

When running a live scan from a Collection Key it is possible to create a RAM dump of the computer. RAM dumps can then be analyzed with appropriate software (e.g. Volatility).

Our digital forensic investigator uses a 4-port hub, Collection Key, Authentication Key, and the target computer which is up and running with only one USB port available.

  1. Start by inserting the USB Hub with the Collection Key and Authentication Key. In this example, Autoplay is set to open the folder automatically; if this is not set, you will need to open File Explorer and the CKY Device as shown.
  2. Execute the Start.bat file stored on the Collection Key by double clicking on it.
  3. From the main menu click on Create RAM Dump.  The RAM dump will be saved to the collection key within a zip file. Once complete click on the home key to return to the main menu.
  4. At the main menu, to continue with a Live Scan, click on Scan Computer.
  5. Select the Drive(s) or Partition(s) that you would like to scan.
  6. Select the Search Profile.
  7. Change the name or fill out any mandatory fields. Once all fields are completed you can now start the scan by clicking the scan button.
  8. Once the scan has started, the Authentication Key can be removed. You can then use it with another Collection Key to conduct additional scans while onsite.
  9. Once started, the scan activity will be shown with the following:

     

    Progress Bar – Current area and files being scanned (along with estimated percentage complete).

    Matches Log – Real time preview (thumbnail) of File Capture matches collected. Images and Video files are represented by thumbnail images, keyword matches will show the keyword found, all other matches will be represented by an associated icon.

    Capture Results – Cumulative count of capture results.

  10. You can view the results of the scan while the scan is running, and can also refresh, pause, or stop the results while in the viewing pane. You can also return to the scan screen. If you need to stop a scan for any reason, everything you have collected up to that point is on the Collection Key and able to be analyzed. No post-scan collection is required.
  11. You will now have the option to View My Results or go to the Imaging portion of Digital Evidence Investigator should you want to image the target computer.

ADF software includes support and direct access to our digital forensic specialists. Contact us for assistance creating custom search profiles, to request a demo, or to learn more about how ADF tools can help you speed your investigations and reduce forensic backlogs.

Walkthrough: Quin-C Social Analyzer Widget From AccessData


Hello. This is Sven from the technical team here at AccessData. This video will feature the Social Analyzer widget. So let’s get started.

Go to Quin-C and open the grid, just to see how many items we have in our case. To use Quin-C with the Social Analyzer widget, we need to filter the emails.

So, go to Categories, check the ‘Email’ filter box, and see how many items we have.

Now we can close the grid. I will show you some other ways to filter the emails.

Go to Filters. In the Filter widget, we have various subcategories. So we go to Files, we go to Categories, and see? We have the emails over here as well. And at the top, we can see the emails are already marked.

In the header of Quin-C, there are some other icons to give you a hint that the email filter is already applied.

Now we close down the Filter widget, and go to the Social Analyzer. This is where the fun begins.

Usually, the Social Analyzer starts off with a blank screen. Go to the Grid Size box, and make sure the number fits to your added account. With the mouse wheel you can zoom in, and every dot you see represents a contact in your case. If the dot is bigger, there was more communication coming from this account. The arrow shows the direction of the communication.

You can move around the dots, so that the visualisation fits to your needs.

You may have noticed a small little icon in the sidebar as well. This will open an additional window within the Social Analyzer. Go to the top, select ‘Email’, and you will find a list of all email contacts in your case.

We now can search through this list, and make sure that we only see the contacts which are necessary for our research.

Let’s click on one of the emails and see how the Social Analyzer will respond.

Hey, isn’t it cool?! Let’s click another one.

This means we can reduce the view of the Social Analyzer just to fit to our needs and show us only the contacts which are in focus for us. So I removed the search result in the meantime, and on the left-hand side we have all the contacts once again in the big list.

On the right-hand side you may have noticed another arrow in the sidebar. We are clicking this one now, just to see what comes up.

OK, we have a tripartite window. Clicking on an account in the visualisation will bring all the related emails to that new window.

By the way, that Quin icon will centre the visualisation once again, if you may have zoomed in too much.

Let’s continue.

Click on an email, and the viewer will come up straight away to give you an idea what the email is about. You can click through the emails, and the viewer will follow.

To give us more space for the visualisation, we can close the left third of the window. Now we will zoom in a little bit, and pick and choose another account.

The related emails will pop up in the viewer straight away. Now we can go through all these emails: 29 in total. We can open an attachment if we want, and do our research.

If we are done with it, we can close down the viewer to give more space for the visualisation.

If we do a right click in the white background, we can expand the labels out, which means we see all the names and the email addresses of the guys in focus. From here, you can send the visualisation to report, just with another right click.

OK, this is it for the first round. I hope you like what you see.

Find out more at AccessData.com.
