Channel: Digital Forensics – Forensic Focus – Articles

How To Image From A Network Repository Using Logicube’s Forensic Falcon-NEO


Welcome to Logicube’s tutorial on the Falcon-NEO Forensic Imager. The Falcon-NEO allows you to image directly to or from a network repository using SMB or CIFS protocol, and to image from a network location using iSCSI. Two 10GbE ports provide extremely fast network imaging performance. In this episode, we’ll show you how to image from a network repository to a physical drive connected to the Falcon-NEO.

Before creating a network repository on the Falcon-NEO, make sure you have full permissions to the shared resource. We strongly suggest that you contact your network administrator to ensure proper permissions have been set up.

We’ve set up a directory on the C drive of a computer that is connected to the same network as the Falcon-NEO. By right-clicking on the directory, I can verify that I have full permissions to this share.

We’ll now create and mount the repository on the Falcon-NEO. Navigate to the ‘Manage Repositories’ icon in the left-side menu of the Falcon-NEO interface. On the ‘Add/Remove’ tab, you’ll see a list of all available repositories, including all drives attached to the Falcon-NEO destination ports and any network repositories. Tap the ‘Add Repository’ button at the bottom of the screen; tap ‘Name’ to set the name of the repository; tap ‘OK’; then tap ‘Drive’ to select a network share to set as a repository. Then tap ‘Network Source’ to select the network source: either LAN1, LAN2, or ‘Any’.

Next, tap ‘Destination Settings’ to enter the network settings. You’ll need to enter the path of the share, which includes the IP address and share name. Make sure you use the forward slash, not the backward slash. Then enter the domain of the share, if the shared resource is in a domain; if not, use the workgroup name. Then enter a username that has full permissions to the shared resource, both read and write access, and enter the password for that username. Tap ‘OK’.
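The path format is the detail that most often trips people up, since the same share is usually reached from Windows as a UNC path. A quick sketch in Python (the IP address and share name here are hypothetical, not from the tutorial) shows the conversion:

```python
# Hypothetical example: the Windows UNC path to a share, and the
# forward-slash form the Falcon-NEO's path field expects.
windows_unc = r"\\192.168.1.50\evidence"

# Swap every backslash for a forward slash.
falcon_path = windows_unc.replace("\\", "/")

print(falcon_path)  # //192.168.1.50/evidence
```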

Next, tap ‘Role’. The repository can be set as a source, a destination, or both. Once the repository has been set up, it will appear in the repository list. For a repository to remain configured when the Falcon-NEO is turned off, the changes must be saved and loaded to a profile on the Falcon-NEO. Refer to the user’s manual for instructions on how to do this.

Once a repository has been added, you can proceed to use it as a source or destination for whatever imaging task you require. For this task, we’ll select the File To File mode; then we’ll select the source; we’ll select the repository we just set up.

Next, choose your settings. Here you can enter a case name and other information, including an evidence ID. Tap ‘OK’ and set the output format. In this case, we’ll choose ‘Directory Tree.’ For filter settings, we’ll choose not to include any filters, and leave the default settings. Next, choose your hash method, and choose whether or not to verify. Finally, select your destination: in this case, a hard drive that is connected to the Falcon-NEO.

Once all your settings have been chosen, press ‘Start’ and the imaging task will begin. Upon completion of the imaging task, you can review the log file associated with the task by navigating to the ‘Logs’ icon from the left side menu bar. The log file includes all of the chosen settings, and the file information from the network repository that was captured for this task.

Thank you for your interest in the Falcon-NEO. We hope you have found this tutorial informative. To learn more about the Falcon-NEO visit our website at logicube.com, or contact our sales team at sales@logicube.com.


Facebook’s Privacy Manifesto: What Does It Mean For Digital Forensic Investigations?


by Christa Miller, Forensic Focus

Mark Zuckerberg’s new “privacy manifesto” for Facebook marks not just a pivot in terms of how the social network shapes modern-day communication. It also marks what The Verge’s Casey Newton called “the end of the News Feed era.” 

Zuckerberg’s opening statement draws a distinction between the “digital equivalent of a town square” which Facebook and Instagram have helped to build over the past 15 years, and the “digital equivalent of the living room” in which more users prefer to spend time together. Most child exploitation domain experts would be quick to point out, however, that child abuse is far more pervasive in living rooms and other private spaces than it is in town squares.

For example, the United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) cites studies from 2011 and 2014 showing that more than 90% of sexually abused children were abused by someone they knew. That’s consistent with US-based research showing that only about 10% of perpetrators of child sexual abuse are strangers to the child.

Online, of course, Facebook and its properties Instagram and WhatsApp are well known to law enforcement:

  • In late 2018, TechCrunch reported that WhatsApp’s encryption, as well as its lack of human moderation, made it easier for child abusers to use the app’s private groups to share child sexual abuse material (CSAM).
  • Business Insider reported in September 2018 that Instagram’s IGTV service had recommended video content containing CSAM. 
  • Engineering and Technology described Facebook’s efforts in late 2018 to delete more than eight million pieces of content that violated the site’s rules on child nudity and exploitation in the previous three months alone. 
  • In March 2019, Forbes reported NSPCC research showing that Instagram has become the leading platform for child grooming in the United Kingdom. 

This isn’t to say that non-Facebook-owned social media, including Tumblr, YouTube, and others, are immune to similar problems. However, Facebook’s new focus — including its recent announcement regarding its own cryptocurrency — highlights the ongoing tensions between safety and privacy, as well as between law enforcement and the private sector. How might Facebook’s planned shift from public to private sharing affect forensic investigations?

Digital Forensic Artifacts

Zuckerberg’s statement focuses on “private messaging, ephemeral stories, and small groups,” which he notes are the fastest growing areas of online communication. As reasons for this shift, he cites user desire for one-on-one or one-to-few communications, as well as concerns around permanent records of communications.

On the one hand, any content limits hurt investigations. “Most people today don’t jailbreak their $1000 iPhone or root their expensive Androids nearly as often as they used to,” explains Domenica “Lee” Crognale, a SANS Certified Instructor and co-author of the SANS FOR585: Advanced Smartphone Forensics course, “and this alone is keeping us from taking a look at some of these applications forensically.”

Indeed, the constantly evolving handshake between mobile operating systems and apps can make it difficult for forensic vendors to keep up, or to communicate new changes to forensic examiners.

In Crognale’s research, for instance, it’s common for databases or proprietary file formats to be inaccessible if an app developer limits backup data to, say, attachments or pictures. “This means that we are often missing a piece of the puzzle,” she says; in other words, chat, email, and call content.

On the other hand, Crognale says she’s come across very few databases that are encrypted in their entirety. “It is more common to see certain records within a database encrypted (think secret messages),” she says.

Private Messaging

Facebook Messenger and Instagram Direct have enabled private messaging for some time, of course. It’s the extension of WhatsApp’s end-to-end encryption (E2EE) to these services, and to additional planned services such as video chats, payments, and commerce, that are of concern.

Acknowledging “a growing concern among some that technology may be centralizing power in the hands of governments and companies like ours,” Zuckerberg heralds E2EE as a “decentralizing” tool that helps ensure the freedom of dissidents worldwide — as well as acknowledging misuse by “people doing… truly terrible things like child exploitation, terrorism, and extortion.”

Of course, the difference between data at rest and data in motion has been well documented. Joseph Pochron, President of Forensic Technology & Consulting at TransPerfect Legal Solutions, notes that the transition to E2EE is “a positive step, but a lot of the public doesn’t realize WhatsApp makes data at rest accessible via forensic tools.”

Brett Shavers, a former law enforcement investigator and a digital forensics practitioner, concurs. “I have seen instances where encryption was marketed, but not employed as well as [it was] marketed… unless the user has control of the encryption scheme, there is no way to ensure the encryption is truly encryption (that is, without backdoors). With that, law enforcement should never assume that if something is encrypted, that it is impossible to access, because what is inaccessible today may be accessible tomorrow.”

Bringing Facebook and Instagram apps more in line with WhatsApp, then, means existing forensic research on WhatsApp and similar apps — accessible via Shavers’ website, DFIR.training — might lend some clues as to what to expect from future changes.

Forensic vendor tools, meanwhile, offer solutions that allow examiners to get around E2EE in apps by:

  • Obtaining data using the device phone number, or app tokens stored on the mobile device in place of users’ login details.
  • Acquiring and decrypting WhatsApp backups stored in an iOS user’s iCloud or an Android user’s Google Drive account, or directly from the WhatsApp server. These include backups that the examiners themselves can make.

However, these methods require investigators to know some piece of information or to acquire a level of access that isn’t always possible; for example, when the user is uncooperative or deceased, or can’t be found.

In those cases, investigators have to approach the provider for data. Law enforcement can write search warrants; corporate litigants have to subpoena service providers. Preservation orders prevent marked-for-deletion data from being wiped until the legal paper can be served, and the provider generally returns the data within a few weeks or months to the requester.

This process is in flux, says Pochron, thanks to laws like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act, which are changing how providers deal with personally identifiable information (PII).

These laws require providers to enable users to see what data providers retain about them, and to revoke permission for providers to use the data. As a result, users can download their own data, which can in turn be imported into a forensic tool. In many cases investigators can work with victims and even some suspects to accomplish this. 

The same data retention requirements make it easier to request data from a service provider. Under the GDPR, a company has 30 days to respond to a request for the user data it retains; under California’s Privacy Act, as of 2020, a company will have 45 days to respond. (Notably, Apple’s Data & Privacy Portal states that the company may take up to seven days just to verify and respond to a request.)

Pochron says such user data requests can often result in “quite a lot of information.” To that end, data that’s too difficult for forensic examiners to recover from a device may increasingly be available from the cloud — and with a richer dataset. For law enforcement, he stresses, this might make real-time consent to search and collect data increasingly important.

“Ephemera Forensics”

Overall, Pochron doesn’t anticipate that Facebook’s move to E2EE will be much of a game-changer for the industry unless Facebook follows “ephemeral by design” apps like Signal, Wickr, or Telegram.

That’s the condition at the heart of what Zuckerberg calls a “permanence problem”: when large quantities of messages and photos, collected over time, “become a liability” owing to embarrassment or, well, criminal behavior. Facebook may address this through ephemerality, the ability to set content to expire or be archived automatically.

Going further than the platform’s Stories, automatic deletion or expiration could extend anywhere from a month or more to as little as a few seconds or minutes. That’s why ephemeral apps pose what Crognale calls “the biggest problem for data recovery.” “These time-bombed messages are really gone, in most every case that I have examined, when the time limit has elapsed,” she explains.

As a result, ephemeral messages carry two potential implications for forensic examiners, says Pochron. The first is, of course, whether ephemeral content includes “breadcrumbs” that can help build a case, or the “smoking gun” that can make it. That’s of particular concern to criminal investigators.

The second issue is legal. “If a business deploys apps that are ephemeral by nature, then it can’t retain those business records, and won’t be able to comply with legal hold requirements,” Pochron says. He points to a 2017 case, Waymo v. Uber, in which the plaintiffs accused the defendants of using Wickr — an ephemeral messaging app — to avoid such requirements.

So, while preservation orders would be useful in some cases, in others, investigators may have to rely on whether a victim was able to screenshot and store the evidence on their device — though Crognale notes that it’s possible for devices to take screenshots on the user’s behalf. “[For instance] if an incoming call interrupts what you were doing in one application and minimizes your screen,” she says.

Just as WhatsApp forensics could contain clues on how to address encryption of all Facebook and Instagram private messages, examiners may look to research published on Telegram forensics (ResearchGate; ScienceDirect), as well as Snapchat forensics (CarpeIndicium; ScienceDirect; Eidebailly), for clues on how to deal with ephemeral messages.

Crognale’s research shows that sometimes, unsuccessful message transmissions are cached on a device as data meant for destruction — though it’s becoming more common, she adds, for databases to purge information marked for deletion.

The fallback for now: user habits. “Luckily for us, lots of people like to rely on going back to messages… so we can only hope that they choose secret messages over the ephemeral kind, because we have more success with the recovery of secret messages.”

In Shavers’ view, the forensic recovery and analysis of ephemeral data is similar to verbal harassment. “The credibility of the victim and other factors are needed to corroborate what a victim saw on a device before it disappeared, if forensics cannot retrieve it,” he says. Exceptions: if a recording was made, or if metadata exists to help corroborate a complaint.

Small Groups

Another way law enforcement proactively identifies bad actors and their content is through direct participation. ABC7 in Chicago reported in late 2017 that undercover agents had infiltrated a number of Facebook groups set up to deal in drugs and guns. That probe resulted in 50 arrests.

Drugs and guns aren’t the only currency in private Facebook groups, though. The previously cited article from Engineering and Technology, dated October 2018, quoted NCMEC’s chief operating officer, Michelle DeLaune, who commented on the “crucial blind spot [of] encrypted chat apps and secretive ‘dark web’ sites where much of new [CSAM] originates.”

Although participation in CSAM-related groups is often predicated on the sharing of CSAM, which is prohibited by federal law and therefore bars individual agents from that kind of live activity, other technology may be able to help.

Late in 2018 Facebook announced: “In addition to photo-matching technology, we’re using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it’s uploaded.” Additionally, new machine learning-driven software, which Facebook is helping NCMEC to develop, could help to prioritize the tips that are ultimately shared with law enforcement.

Zuckerberg’s statement referred to technology that could detect “patterns of activity or through other means, even when we can’t see the content of the messages.” However, he acknowledged, “… we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.”

Whether investigators themselves will be able to work more proactively using AI and similar technology depends on how fast new software, or new functionality in existing software (for example, current monitoring across peer-to-peer networks), could be developed. Undercover investigations are time-consuming and resource-intensive, and social media monitoring continues to be the subject of much contentious debate. 

To some extent, as described in last year’s press release, Facebook already takes proactive steps to cooperate with law enforcement. Another possibility, as described in the TechCrunch article, could involve other third parties: “[WhatsApp] suggested that on-device scanning for illegal content would have to be implemented by phone makers to prevent its spread without hampering encryption.”

Interoperability Across Facebook’s Three Services

Zuckerberg’s manifesto calls for interoperability, or the ability for users to choose whichever service they prefer to send and receive messages. In other words, an Instagram Direct message might have originated with Facebook Messenger or WhatsApp — even SMS. (Acknowledging that the SMS protocol isn’t currently encrypted, Zuckerberg claims interoperability of SMS with Facebook messages would ensure encryption of these messages.)

Users would be able to maintain account separation across the three services, but to connect senders and recipients as part of an investigation, forensic examiners will need to go a step further. Metadata and any content fragments will have to help piece together who sent what, when and by whom it was received, and which apps were used across devices.

That could be difficult, Pochron says, if Facebook follows Apple’s example of continuity. An iPhone and Mac computer connected via Bluetooth, he says, wouldn’t show a handoff between the two — only that an iMessage was “sent.”

If no clean artifact or metadata exists to show what device a message was sent from, or whether it was received, an examiner may not be able to testify to this level of detail. Crognale says the best an examiner can do in this case is to put the device or account user at a certain spot via network artifacts created at the same time a call was made (or vice versa).

That degree of interoperability could make it easier for bad actors to obfuscate their activity. Zuckerberg’s example, the ability for a user to use WhatsApp to receive messages sent to their Facebook account without sharing their phone number, as they currently do on Facebook Marketplace, could make it more difficult for investigators to tie activity to a particular device.

On the other hand, Crognale says this may not always be the case. Apps run in their own sandbox: like a “mini virtual machine,” she says, spun up for each app on the device. “Some applications contain flags to show where the message originated (i.e. WhatsApp or Instagram) so you can tell which app that user was interacting with when they made the post or sent the message,” she says.

Shavers says interoperability’s “consumer convenience” also helps because even if data isn’t stored in one database, it may be stored in another. “Basically, the more interconnected devices that a criminal may use, the more likely evidence will be strewn in too many locations to not be found by law enforcement,” he says. “Even when some of the evidence may be encrypted, and practically inaccessible, there may typically be enough evidence that is accessible to make a case.”

Even so, while some apps such as Viber “do a very good job at tracking every little thing in the application,” Crognale says, “others track the [bare] minimum.” According to her research, apps like WhatsApp, Instagram, and Facebook Messenger seem to fall somewhere in between.

The Mosaic of Metadata

Another key set of artifacts comes from message metadata. Even without content, metadata can be a powerful tool in helping investigators track who was talking to whom, how frequently, and over what period of time. Geolocation data, attachment data, and other metadata can be so important that in 2012, the United States Supreme Court considered whether the “mosaic” of data from disparate sources could be assembled into a separate search requiring its own warrant. 

As Slate’s Ari Ezra Waldman pointed out, even with the incipient changes Facebook can still collect plenty of metadata, and the integration and expansion of services means that Facebook could “piece together new kinds of data.”

Again looking to WhatsApp as an example, Cosimo Anglano’s 2015 research on artifact correlation showed how WhatsApp artifacts on the Android platform (the contact list, event logs, sent and received files, and others) could be correlated to reconstruct contact lists and message chronologies.

This appears to be the case for other Facebook services too, as a 2018 Recode article about Facebook’s Portal home assistant hardware describes. Built on the Messenger infrastructure, Portal collects “the same types of information (i.e. usage data such as length of calls, frequency of calls) that we collect on other Messenger-enabled devices,” a spokesperson quoted in the piece said.

This may be why Zuckerberg states an intent “to limit the amount of time we store messaging metadata… [and] to collect less personal data in the first place, which is the way WhatsApp was built from the outset.”

The collection of less metadata ostensibly means less evidence, says Shavers, but that doesn’t mean no evidence at all. “Even if a provider collects less data, or stores data for less time,” he explains, “the mere propagation of data across devices and providers means that data exists in so many places that investigators can practically target exactly what they want, if they know what they need.”

For example, he says, metadata from pen registers — phone numbers being dialed or called in, with dates, times, and length of calls — can alone tie suspects together, even without a wiretap warrant to collect the content of those calls.

Likewise email headers in harassment cases, social media connections, and so on. “Metadata is the easiest to get court approval to obtain, easiest technically to obtain, and there is so much of it that in combination with other metadata, builds up quite the file of evidence,” Shavers says.

The major challenge with metadata isn’t its lack of content, says Pochron, but whether a forensic tool can properly map the right fields. What’s available through the provider’s API, versus actual app functionality, can change, or be located differently.

That points to the need to validate forensic tools through manual methods. Crognale says most apps have fields and tables within one or multiple SQLite databases that store artifacts related to social media activity, direct messaging, phone calls, video chats, GPS, even photo editing. “While the amount of different flags may not provide an immediately clear way to identify the actions that have occurred on a device,” she explains, “these artifacts can be easily determined through various testing methods on devices running the same version of the application.”

Investigative Information Sharing and Collaboration

Zuckerberg’s assertion that Facebook “won’t store sensitive data in countries with weak records on human rights like privacy and freedom of expression in order to protect data from being improperly accessed” could make things more difficult for investigators in countries where human trafficking, child exploitation, and other crimes are prevalent.

Whether Facebook’s services could be blocked in these or other countries, as Zuckerberg anticipates, likely means simply that child exploitation investigators will need to remain savvy to the tools bad actors are using apart from Facebook Messenger, Instagram, and WhatsApp.

“This will always be an issue, in both civil and criminal investigations,” Shavers notes. “Even with agreements, [international] cooperation and communication will never be as seamless as it is within one country. In these instances, investigators should always consider that a copy of the data that they need may actually be stored locally (in their own country) by virtue of being on a suspect’s device or a service provider in-country.”

The device itself is still an option, too, says Crognale. “For those devices that we are fortunate enough to be able to jailbreak or root prior to acquiring them, we can usually still find some very interesting data,” she explains.

Much remains uncertain until Facebook starts to make its intended changes, but Pochron says businesses can prepare now by enacting stronger information governance programs and tighter controls over “shadow IT,” or the tendency of employees to use unapproved apps that could result in legal sanctions in the event of litigation.

Meanwhile, forensic examiners in both public and private sectors can take cues from prior research and experience as to what to expect. “Purge by design,” encryption, and interoperability features don’t help, but they may not end up hurting as much as they might appear. “All data is ephemeral by nature in some regard,” says Pochron, and with that in mind, forensic examiners can approach these latest changes with curiosity to learn.

How To Image To A Network Repository With Logicube’s Forensic Falcon-NEO


Welcome to Logicube’s tutorial on the Falcon-NEO forensic imager. The Falcon-NEO allows you to image directly to or from a network repository using SMB or CIFS protocol, or using iSCSI. Two 10GbE ports provide extremely fast network imaging performance. In this episode we’ll show you how to image from a physical drive connected to the Falcon-NEO, to a network repository, using CIFS protocol. Make sure you have full permissions to the shared resource before attempting to create a network repository on the Falcon-NEO. We strongly suggest that you contact your network administrator to ensure proper permissions have been set up.

We have set up a directory on a computer that is connected to the same network as the Falcon-NEO. By right clicking on the directory and checking the share properties, we can verify that we have full permissions to this share. We’ll now create and mount the repository on the Falcon-NEO.

From the left-side menu, tap on the ‘Manage Repositories’ icon. On the ‘Add/Remove’ tab, you’ll see a list of available repositories, including all drives attached to the Falcon-NEO destination ports, and any network repository. Tap the ‘Add Repository’ button at the bottom of the screen, then tap ‘Name’ to set the name of the repository. Then tap ‘Drive’ to select a network share to set as a repository.

Then tap ‘Network Source’ to select the network source – either LAN1, LAN2, or ‘Any’ – then tap ‘Network Destination’ to enter the network settings. Enter the path of the share; this includes the IP address and share name. Use the forward slash, not the backward slash, when entering the path. Enter the domain of the share, if the shared resource is in a domain; otherwise, use the workgroup name. Then enter the username that has full permissions, both read and write access, to the share. Then enter the password for the username.

Next tap ‘Role’; the repository can be set as a source, a destination, or both. Once the repository has been set up, it will appear in the repository list. If you would like the repository to remain configured when the Falcon-NEO is turned off, the changes must be saved and loaded to a profile on the Falcon-NEO. Refer to the user’s manual for instructions on how to do this.

Once a repository has been added, you can use it as a source or destination for any imaging task. For this task, we’ll select the ‘Drive to File’ mode, then we’ll select the source – in this case, we’ll select a hard drive connected to the Falcon-NEO.

Next, choose your settings. Here you can enter a case name and other case identification information, such as an evidence ID. Then set the imaging method. Here we’ll choose E01 and leave the default segment size of 4GB, with compression on. You can then choose whether to image the HPA and DCO areas of the drive, and set the error granularity. Finally, choose the hash method and whether you wish to verify.
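As a rough back-of-the-envelope check (the 500GB source drive here is hypothetical, and compression will usually reduce the actual count), the default 4GB segment size implies roughly one E01 segment file per 4GB of source data:

```python
# Rough estimate of how many E01 segment files a drive produces at the
# default 4 GB segment size, assuming no compression. With compression
# enabled, the real count will usually be lower.
import math

drive_size_gb = 500    # hypothetical source drive capacity
segment_size_gb = 4    # Falcon-NEO default segment size

segments = math.ceil(drive_size_gb / segment_size_gb)
print(segments)  # 125
```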

Then select your destination: in this case, the network repository we just set up. Once all your settings have been chosen, press ‘Start’ and the imaging task will begin.

Upon completion of the imaging task, you can review the log file associated with the task by navigating to the ‘Logs’ icon from the left-side menu bar. The log file includes all of the chosen settings and the file information from the network repository that was captured for this task.

Thank you for your interest in the Falcon-NEO. We hope you found this tutorial informative. To learn more about the Falcon-NEO, visit our website at logicube.com, or contact our sales team at sales@logicube.com.

Leveraging DKIM In Email Forensics


by Arman Gungor

My last article was about using the Content-Length header field in email forensics. While the Content-Length header is very useful, it has a couple of major shortcomings:

  • Most email messages do not have the Content-Length header field populated
  • If the suspect is aware of this data point, the integer value in the Content-Length header field is very easy to modify to make it match the length of the manipulated email payload
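To see why the second point matters, a short sketch (with a made-up message body) shows how trivially a forger can recompute a matching Content-Length after tampering:

```python
# Illustration of why the Content-Length header is a weak tamper check:
# after altering the payload, the "correct" value is a one-line recomputation.
original_body = "Please wire $1,000 to the account below.\r\n"
tampered_body = "Please wire $9,000 to the account below.\r\n"

# A forger simply updates the header to match the new payload length.
new_content_length = len(tampered_body.encode("utf-8"))
print(f"Content-Length: {new_content_length}")
```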

Wouldn’t it be great if there was something more widely used and tamper-resistant? Enter DKIM.

What is DKIM?

DomainKeys Identified Mail (DKIM) is an internet standard that allows an entity to assert responsibility for a message in transit. The entity can be the organization of the author of the message, or a relay.

The signing entity hashes the body of the message and digitally signs it along with a subset of its header fields using its private key. The public key of the signing entity is published as a _domainkey DNS TXT Resource Record for the signer’s domain. The recipient can then retrieve the signer’s public key with a DNS query, and attempt to verify the digital signature to determine whether the signature is valid.
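As a minimal sketch of what a verifier gathers, the snippet below builds the DNS name for the public-key lookup and computes a body hash. The toy body is assumed to be already in canonical form, and the actual RSA signature check over the selected header fields is omitted, since it requires the public key retrieved from DNS:

```python
# Sketch of the inputs a DKIM verifier assembles before checking a signature.
# The signature check itself (the b= tag) is omitted; it needs the signer's
# RSA public key from the _domainkey DNS record.
import base64
import hashlib

domain = "gmail.com"    # from the d= tag
selector = "20161025"   # from the s= tag

# 1. The public key is published as a TXT record at this DNS name:
dns_name = f"{selector}._domainkey.{domain}"

# 2. The bh= tag is the Base64 SHA-256 digest of the canonicalized body.
#    (Toy body, assumed already canonicalized.)
canonical_body = b"Hello, world!\r\n"
bh = base64.b64encode(hashlib.sha256(canonical_body).digest()).decode()

print(dns_name)
print(bh)
```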

TL;DR DKIM gives us the message body and header hashes on a silver platter—digitally signed by the transmitting domain!

Let’s work through an example and manually verify the DKIM signature. Here is a sample message sent via Gmail’s web interface:

Example Message with DKIM Signature

As with many other forensic artifacts, verifying DKIM signatures requires that the suspect message be preserved in its original form. Yahoo and Gmail allow end users to download the raw message via their web interface. In this case, I used Forensic Email Collector to acquire the message in MIME format, which is identical to what we get directly from the provider’s web interface. I’ve then removed a couple of lengthy header fields before taking the screenshot above for clarity.

Let’s start by dissecting the DKIM-Signature header field.

DKIM-Signature

The DKIM-Signature header field contains the signature of the message as well as information about how that signature was computed. Interestingly, the DKIM-Signature header field being created or verified is itself included in the signature calculation—with the exception of the value of its “b=” tag.

The tags of the above DKIM-Signature are as follows:

v= Indicates the version of the DKIM specification. You should expect to see the value “1” in this field as of this writing.

a= The algorithm that was used to create the signature. In this case, it is RSA-SHA256.

c= Indicates the canonicalization algorithms that were used for the header and the body. The canonicalization algorithm determines how the body and the header are prepared for hashing—especially as it relates to tolerance for in-transit modification. We will discuss this further below.

In this case, “relaxed/relaxed” indicates that the relaxed canonicalization algorithm was used for both the header and the body. A single value such as “c=relaxed” would have indicated that “relaxed” was used for the header and “simple” for the body—equivalent to “c=relaxed/simple”.

d= Indicates the domain claiming responsibility for transmitting the message. This is the domain whose DNS we query to get the public key. In this case, the domain is “gmail.com”.

s= Indicates the selector for the domain. In this case, “s=20161025” indicates that we can query the TXT record for 20161025._domainkey.gmail.com to get the public key.

h= This tells us which header fields were included in the signature. In this case, the list is Mime-Version, From, Date, Message-ID, Subject, and To. We will use the same list of header fields when verifying the signature.

bh= This is the hash of the body of the message after it was canonicalized, in Base64 form.

b= The signature data in Base64 form.

Canonicalization

Let’s go over the two canonicalization algorithms so that we can prepare the header and the body correctly for manual DKIM verification.

Simple Canonicalization

The simple algorithm tolerates almost no modification. For the header, the simple algorithm presents the header fields exactly as they are without changing their case, altering whitespace, etc. For the body, the simple algorithm removes any extra empty lines at the end of the message body.

Relaxed Canonicalization

The relaxed algorithm provides better tolerance for in-transit modification. For the header, the relaxed algorithm converts all header field names to lowercase (e.g., “Subject:” -> “subject:”), unfolds all lines, converts all sequences of one or more whitespace characters to a single space, removes all whitespace at the end of each header field value, and removes any whitespace before and after the colon that separates the header field name from the value (e.g., “subject : test” -> “subject:test”). For the body, the relaxed algorithm removes all whitespace at line endings and replaces all whitespace within a line with a single space. Extra empty lines at the end of the message body are also removed.

You can find the authoritative documentation with all of the details here: RFC 6376—Canonicalization.
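As a rough illustration, the relaxed rules can be sketched in Python. This is a simplified sketch, not a full RFC 6376 implementation (the RFC covers additional edge cases such as obsolete folding syntax):

```python
import re

def relaxed_header(name: str, value: str) -> str:
    """Relaxed canonicalization of one header field (RFC 6376, s. 3.4.2)."""
    # Lowercase the field name.
    name = name.lower()
    # Unfold folded lines, collapse whitespace runs to a single space,
    # and strip whitespace at the ends of the value (and around the colon).
    value = re.sub(r"\r\n[ \t]+", " ", value)
    value = re.sub(r"[ \t]+", " ", value).strip()
    return f"{name}:{value}\r\n"

def relaxed_body(body: str) -> str:
    """Relaxed canonicalization of the message body (RFC 6376, s. 3.4.4)."""
    lines = body.split("\r\n")
    # Strip trailing whitespace and collapse internal whitespace per line.
    lines = [re.sub(r"[ \t]+", " ", line.rstrip(" \t")) for line in lines]
    # Remove any extra empty lines at the end of the body.
    while lines and lines[-1] == "":
        lines.pop()
    # A non-empty canonicalized body always ends with a single CRLF.
    return "\r\n".join(lines) + "\r\n" if lines else ""
```

For example, `relaxed_header("Subject", "  Test   message  ")` yields `"subject:Test message\r\n"`.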

DKIM Verification

Now that we know how to interpret the DKIM-Signature field and how to prepare the body and the header for hashing, we can attempt to verify the DKIM signature manually.

Step 1—Body Hash

The first step is to canonicalize the message body, hash it, and compare it to the value reported in the “bh=” tag. A mismatch here means an instant fail—we needn’t proceed further.

The canonicalized version of the message body, using the relaxed algorithm, looks as follows:

Canonicalized Message Body

I set my text editor up to show line breaks and spaces for the above screenshot. Note that the CRLF at the very end remains.

When we hash the above text using SHA-256 (based on the value of the “a=” tag) and convert the result to Base64, we find NuUVBkHAblnFrMSNaWdGtwpjr9poc3wM2sXMhd25sPE=. This matches the body hash that was included in the “bh=” tag of the DKIM-Signature header field.

We can already see how powerful this is. The DKIM-Signature header field contains a hash of the message body, which we can verify ourselves very easily without even fetching the public key of the signing entity.
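Computing the body hash yourself takes only a few lines of Python (a minimal sketch using the standard library):

```python
import base64
import hashlib

def body_hash(canonicalized_body: bytes) -> str:
    """SHA-256 hash of the canonicalized body, Base64-encoded,
    as it would appear in the bh= tag of the DKIM-Signature field."""
    digest = hashlib.sha256(canonicalized_body).digest()
    return base64.b64encode(digest).decode("ascii")
```

As a sanity check, an empty relaxed-canonicalized body hashes to 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= (the SHA-256 of zero bytes).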

Step 2—Signer’s Public Key

The next thing we should do is to query the signer’s domain and fetch their public key. We will need the d=gmail.com and s=20161025 values for this.
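The DNS name to query is assembled mechanically from the “s=” and “d=” values:

```python
def dkim_dns_name(selector: str, domain: str) -> str:
    """DNS name whose TXT record publishes the signer's DKIM public key."""
    return f"{selector}._domainkey.{domain}"
```

You can then query that name with any DNS tool, e.g. `dig TXT 20161025._domainkey.gmail.com`.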

A good resource to use here is the DKIM Record Lookup tool from MxToolbox. When we populate the domain name with “gmail.com” and the selector with “20161025”, we get the following key:

We will use the public key above, along with the signature found in the “b=” tag, and the canonicalized version of the message header to verify the signature.

Step 3—Canonicalize The Message Header

In order to prepare the message header for verification, we need to choose the header fields indicated in the “h=” tag in that order, add the DKIM-Signature header to that list (except for the contents of the “b=” tag), and run it through the canonicalization algorithm (relaxed in this case).

Once the above steps are complete, the canonicalized message header looks as follows:

Message Header after Canonicalization

There are a few things to note here:

  1. The DKIM-Signature header field includes the body hash (the “bh=” tag). So, although we are not including the body itself in the sign/verify process, we are including the hash of the canonicalized body.
  2. The header field names have been converted to lowercase, and whitespace has been adjusted according to the relaxed canonicalization algorithm.
  3. There is no CRLF character at the very end of the text, after the “b=” tag.
  4. The value of the “b=” tag is excluded. This makes sense because the signing entity had no way of knowing the value of the “b=” tag (i.e., the signature data) until after the signature was calculated. So, the value of the “b=” tag could not have been used in the calculation of the signature.
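Blanking the “b=” tag before appending the DKIM-Signature field to the verification input can be done with a small regular expression (a sketch; a production implementation should parse the tag list properly):

```python
import re

def blank_b_tag(dkim_signature_value: str) -> str:
    """Empty the b= tag of a DKIM-Signature value before it is included
    in the canonicalized header block for verification (RFC 6376, s. 3.5).
    The (^|;) anchor avoids matching a 'b=' that happens to occur inside
    the Base64 data of another tag such as bh=."""
    return re.sub(r"(^|;)(\s*)b=[^;]*", r"\1\2b=", dkim_signature_value)
```

For example, `blank_b_tag("v=1; a=rsa-sha256; bh=Ab=cd; b=SIGDATA==")` leaves the bh= value intact and empties only the b= value.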

We now pass the canonicalized header above, the public key we obtained in step 2, and the signature provided in the “b=” tag of the DKIM-Signature field to our signature verification function. I used the RSACryptoServiceProvider.VerifyData() method available in .NET for signature verification. You can use an equivalent in your programming language of choice.

The signature verification process determines the hash value in the signature using the public key and compares it to the hash value of the canonicalized message header. In this case, the two hashes match and the signature is verified.
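To make that comparison concrete, here is a deliberately tiny, textbook-RSA illustration of the sign/verify round trip. The parameters are toy values for illustration only; real DKIM keys are 1024- or 2048-bit and use PKCS#1 v1.5 padding, both omitted here:

```python
import hashlib

# Toy RSA parameters -- illustrative only, trivially breakable.
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def toy_sign(message: bytes) -> int:
    """'Sign' by raising the (reduced) hash to the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def toy_verify(message: bytes, signature: int) -> bool:
    """Recover the hash from the signature with the public exponent and
    compare it to a freshly computed hash -- the same comparison a DKIM
    verifier performs on the canonicalized header."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h
```

Any change to the message after signing changes its hash, so the recovered value no longer matches and verification fails.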

Automation, Anyone?

Although it is valuable to know how to do it by hand, verifying DKIM signatures manually can get tedious. You can use a number of open-source tools to add some automation to your DKIM verification workflow. If you use Perl, you can check out Mail::DKIM::Verifier. If Python is more your thing, dkimpy is also a good option—be mindful of how multiple DKIM-Signature headers are handled.

What Is The Forensic Relevance?

DKIM signatures give us some very powerful information to work with—the cryptographic hash of the message body and a subset of the header fields, signed by the sending domain. Even if one non-whitespace character changes after the message was signed, the DKIM signature would fail verification. When forensically authenticating an email message, a valid DKIM-Signature header and a verified signature indicate that the message body and signed parts of the message header were not modified after the signing entity calculated the DKIM signature.

Could a suspect work around this? A few ways that come to mind are gaining access to the signing entity’s private key to sign on their behalf, manipulating the message body and/or header without changing their hashes (no practical attack against SHA-256 is currently known), or removing the DKIM signatures from the manipulated message. The latter is simple to do, but would also be fairly easy to detect when the suspect message is compared to other messages from the same sender within the same time period.

Another security consideration is the use of the “l=” tag. This optional tag in the DKIM-Signature header field indicates the body length that was included in the cryptographic hash. Absence of this tag indicates that the entire body was hashed. If the “l=” tag was used, the suspect could append fraudulent content beyond the hashed subset of the message body without failing DKIM verification. RFC 6376 has a section on Security Considerations which is an interesting read.

Finally, it is important to note the Authentication-Results: header field Gmail inserted into the message upon receipt (see RFC 7601). This header field indicates, among other things, that DKIM verification was successful at the time Gmail received the message, and contains the first 8 bytes of the signature data in the “header.b=” tag. If the message was manipulated, this can be helpful in determining when the manipulation took place.

Conclusions

Forensic examiners should pay close attention to DKIM signatures when authenticating emails. Adding automated DKIM signature verification to your workflow would be a good starting point. If DKIM verification fails, it is important to know why. Was it because the signing entity’s public key is no longer available in their DNS records, or did the body hashes or header hashes not match?

The DKIM specification has quite a bit of detail, and most tools I have encountered do not appear to have implemented all aspects of it. When working with DKIM, it is important to know the details and be able to perform manual verification as needed, especially to cover edge cases.

References:
RFC 6376: DomainKeys Identified Mail (DKIM) Signatures — https://tools.ietf.org/html/rfc6376
RFC 5863: DomainKeys Identified Mail (DKIM) Development, Deployment, and Operations — https://tools.ietf.org/html/rfc5863
RFC 7601: Message Header Field for Indicating Message Authentication Status — https://tools.ietf.org/html/rfc7601

About The Author

Arman Gungor, CCE, is a digital forensics and eDiscovery expert and the founder of Metaspike. He has over 21 years’ computer and technology experience and has been appointed by courts as a neutral computer forensics expert as well as a neutral eDiscovery consultant.

The Opportunity In The Crisis: ICS Malware Digital Forensics And Incident Response


by Christa Miller, Forensic Focus

Malware aimed at industrial control systems (ICS) is nothing new. Nearly 10 years have passed since Stuxnet first targeted the supervisory control and data acquisition (SCADA) systems and programmable logic controllers (PLCs) associated with centrifuges in Iran’s nuclear program. Since then, Havex, BlackEnergy 2, and Crash Override / Industroyer have targeted various ICS.

Until very recently, targeted attacks on ICS have remained rare. In 2017 Dragos, a provider of industrial security software and services, reported that most malware infections on ICS were accidental.

The following year, Kaspersky Lab likewise reported that most ICS malware infections — including cryptomining, ransomware, remote-access trojans (RATs), spyware, and other threats — were random. Dragos has also reported, however, that targeted ICS intrusions aren’t as rare as first believed.

Attacks on the electrical grid and other ICS have caused concern for safety in hospitals, transportation networks, and other systems. Still, ICS is deliberately designed with failsafes; in other words, if one system fails, independently running safety instrumented systems (SIS) are triggered to shut down, limiting risk to life.

That changed in 2017, when a new form of malware, Triton — also known as TRISIS or HatMan — was found to attack those very safety features.

Triton’s authors, known as XENOTIME, are far-reaching. In April 2019, Motherboard reported that FireEye had been hired to respond to a breach at an undisclosed critical infrastructure facility, and that during the response, they had found traces they could link back to Triton authors. 

So what’s the rub for digital forensics analysts and incident responders?

  1. Your skillset is in demand — and in flux. Not only could you end up using your abilities reactively, to respond to an incident; you might also be tapped to use them proactively, as part of threat hunting.
  2. Because the effectiveness of ICS security best practices themselves is being questioned, any assessments you work on may shift along with the landscape, too.

Triton: Some Background

So named because it targets Triconex SIS built by Schneider Electric, the Triton malware was first discovered in August 2017. It affected systems located at the Saudi Arabian Oil Company (Saudi Aramco)’s Petro Rabigh petrochemical complex.

The attack concerned cybersecurity experts for many reasons:

  • It was the first known instance of a targeted attack on safety control systems that are designed to prevent explosions, hazardous chemical releases, or other threats.
  • Because the attackers had to have had the budget and time to purchase their own controller, the attack pointed to a nation-state.
  • The attack also took advantage of the way the plant conducted its operations; for example, initially targeting systems on a Saturday, when staffing was minimal.

The August attack was actually the second to occur. Incident responders found that an initial shutdown, which occurred two months previously in June on a single controller, had been triggered by the same malware.

That time, the plant called Schneider for support with troubleshooting the controller. According to Kelly Jackson Higgins, writing for Dark Reading in January 2019: “[T]he vendor pulled logs and diagnostics from the machine, checked the machine’s mechanics, and, after later studying the data in its own lab, addressed what it thought was a mechanical issue.” 

The second attack may have successfully fooled the company, too, had it again triggered only a single shutdown. This time, though, the malware had infected six controllers, which all shut down. That made engineers suspicious.

The next clue that something was wrong, reported the New York Times, was discovery of the malware itself. Written to resemble legitimate Schneider logic, the malware contained an error that had triggered the shutdown.

Incident responders quickly discovered multiple signs of an ongoing attack. Antivirus software alerted the organization to Mimikatz-related traffic, according to Higgins at Dark Reading, and Remote Desktop Protocol (RDP) sessions were running on the plant’s engineering workstation from within the network.

Additional findings, as reported by Gregory Hale for the Industrial Safety and Security Source blog, included unknown programs running in the infected controllers’ memory, and Python scripts created on the engineering workstation around the time of the August attack. Later on, suspected beacons to the attackers’ control network were found.

Ultimately, wrote Higgins, responders determined that the DMZ firewall between the information technology (IT) and operational technology (OT) networks was poorly configured. That allowed the attackers to compromise the DMZ itself so they could pivot into the control network.

The primary reason the attackers didn’t tamper with or disable the Triconex safety instrumented systems was largely human error. Julian Gutmanis, the primary responder to the Triton incident, was quoted in a January 2019 article as saying he believed the attackers simply became complacent after nearly two years of persistence. Their mistake left tools on the system, which they later tried to eradicate.

Challenges of Digital Forensics in ICS

What might be called the “industrial internet of things” is a risk, wrote Catalin Cimpanu for Bleeping Computer in 2017, because of the “clear benefits from controlling SCADA equipment over the Internet” in contrast to isolating the equipment from either internet or internal networks.

That’s supported by the Kaspersky reports, which stated that the main source of infection was the internet – accounting for more than one-quarter of attacks across 2018. (Removable storage media and email clients accounted for the rest.)

A facility’s own networks are only one potential target. Attackers could also pivot into an ICS network through a third party, in much the same way the 2013 Target attackers breached their payments system through a heating, ventilation and air conditioning (HVAC) system vendor. 

Forensic analysis of ICS compromises can take nothing for granted, in other words, but even this approach has its challenges. In a presentation for the 2016 SANS Institute ICS Security Summit, Chris Sistrunk, Senior ICS Security Consultant at Mandiant, detailed the “DFIRence” for ICS responses involving embedded devices. Specifically, Sistrunk detailed what kind of volatile and non-volatile data to collect, along with the major differences between a “standard” incident response process, and one for ICS.

Tasks like situational assessments, communication to management, and documentation of findings are similar across environments. However, ICS complicates the return to a normal state of operation for a few reasons:

  • Responders must account for physical processes in addition to digital processes.
  • ICS device constraints make it difficult to remediate and regain control of affected devices.
  • ICS devices also have different protocols, which must be collected manually.
  • Analysis can be tricky because there are no ICS-specific DFIR tools. Instead, Sistrunk noted, responders may have to rely on manual collection.

Who’s a first responder in an ICS facility?

Sistrunk’s presentation broke down ICS response and analysis responsibilities this way:

First responders consist of the ICS engineer or technician, network engineer, and/or vendor, who examine user and event logs to see what they reveal. Checks on firmware, running last known good configurations as well as standard configurations, and communications should all be performed at this stage.

In the analysis stage later on, the vendor, digital forensics specialist, or embedded systems analyst should evaluate embedded operating system files, captured data at rest and in transit, and if possible, volatile memory for code injection and potential rootkits.

What constitutes “first response” and “first responder,” though, seems to be in a state of flux. Gutmanis is on record saying that Schneider, the controllers’ manufacturer, could have detected the attacks two months earlier, following the first attack in June 2017.

In an official statement quoted by Higgins at Dark Reading, however, Schneider insisted this wasn’t the case because plant engineers themselves didn’t suspect a security incident. They “took one Triconex system offline, completely removing the Main Processors, and sent them to Schneider Electric’s Triconex lab in Lake Forest Calif…. Once they were removed from power, the memory was cleared and there was no way to conclude that the failure was the result of a cyber incident.” Schneider’s focus at that point was whether the controllers worked correctly within their safety function, which it determined they did.

The engineers’ actions following the first Triton incident point to a need for greater awareness. In April 2018 for the Harvard Business Review, Andy Bochman, Senior Grid Strategist, National & Homeland Security at the Idaho National Laboratory (INL), argued: “Every employee, from the most senior to the most junior, should be aware of the importance of reacting quickly when a computer system or a machine in their care starts acting abnormally: It might be an equipment malfunction, but it might also indicate a cyberattack.”

Indeed, Sistrunk’s presentation noted that anomalies, whether they consist of increased network activity, strange behavior, or some kind of failure, always require investigation to answer the question of whether they represent a “known bad” or an “unknown bad” and, if unknown, whether to escalate to a security incident.

Part of this is, of course, preventing the kinds of mistakes that can leave systems vulnerable to begin with. For instance, as reported by Dragos, “[t]he Triconex SIS controller had the keyswitch in ‘program mode’ during the time of the attack and the SIS was connected to the operations network against best practices.”

This mistake is an example of the kinds of trade-offs companies make in the name of greater efficiency, lower costs, and reliability. In another example, writes Bochman, to prevent operational disruption, security patches are installed in batches during periodic scheduled downtime, which could take place months after their release.

Moreover, he argues, even perfect best-practices implementation “would be no match for sophisticated hackers, who are well funded, patient, constantly evolving, and can always find plenty of open doors to walk through.”

Towards new best practice frameworks?

Best-practices frameworks like those offered by the National Institute of Standards and Technology’s (NIST) cybersecurity framework and the SANS Institute’s top 20 security controls, writes Bochman, “entail continuously performing hundreds of activities without error. They include mandating that employees use complex passwords and change them frequently, encrypting data in transit, segmenting networks by placing firewalls between them, immediately installing new security patches, limiting the number of people who have access to sensitive systems, vetting suppliers, and so on.”

Bochman points to “numerous high-profile breaches” at companies with “large cybersecurity staffs [which] were spending significant sums on cybersecurity when they were breached” as evidence that adherence to best practices may be a losing battle. “Cyber hygiene is effective against run-of-the-mill automated probes and amateurish hackers,” he writes, “but not so in addressing the growing number of targeted and persistent threats to critical assets posed by sophisticated adversaries.”

Bochman outlines an unconventional INL strategy: consequence-driven, cyber-informed engineering (CCE), which could help companies to prioritize and mitigate damage to targets they might once have deemed unlikely:

“Identify the functions whose failure would jeopardize your business, isolate them from the internet to the greatest extent possible, reduce their reliance on digital technologies to an absolute minimum, and backstop their monitoring and control with analog devices and trusted human beings.”

By forcing organizations to create prioritized (versus comprehensive) inventories of hardware and software assets — something Bochman argues most “fail at” — the lab’s methodology “invariably turns up vulnerable functions or processes that leaders never realized were so vital that their compromise could put the organization out of business.”

Threat hunting: part of the mix

Much of what’s known about Triton is due to painstaking research by Dragos. Among their findings: Triton isn’t very scalable because even within the same product lines, like Triconex, “each SIS is unique and to understand process implications would require specific knowledge of the process. This means that this malware must be modified for each specific victim reducing its scalability.” 

Even so, they added, “the tradecraft displayed is now available as a blueprint to other adversaries looking to target SIS.” At the same time, Bochman wrote: “Information systems now are so complicated that U.S. companies need more than 200 days, on average, just to detect that they have been breached….”

That’s where threat hunting comes in. In an interview with CSO Online’s Roger Grimes, the SANS Institute’s Rob Lee described how:

“Threat hunters are an early warning system. They shorten the threat’s “dwell time,” which is the time from the initial breach until they are detected…. A threat hunter is taking the traditional indicators of compromise (IoC) and instead of passively waiting to detect them, is aggressively going out looking for them.” This activity is especially important since a “[crafty adversary] will avoid tripping the normal intrusion detection defenses.”

Such is the case with Triton’s authors, XENOTIME. Eduard Kovacs, writing for SecurityWeek, described FireEye findings showing how the group’s tools, techniques and procedures focus on maintaining access to compromised systems. Additional XENOTIME data comes from Dragos, which presented on some of their activities at SecurityWeek’s 2018 ICS Cyber Security Conference.

Lee’s recommended path to threat hunting: “… first work as a security analyst and likely graduate into IR and cyber threat intelligence fields.” One of the most advanced skillsets in information security, threat hunting requires security operations and analytics, IR and remediation, attacker methodology, and cyber threat intelligence capabilities. “Combined with a bit of knowledge of attacker methodology and tactics, threat hunting becomes a very coveted skill,” Lee said.

The threat of ICS-targeted malware is chilling, but it also presents extraordinary opportunities for DFIR analysts who have the motivation. Security and vulnerability assessments, comprehensive forensic investigation, and threat hunting are all rapidly growing disciplines within the industry. Emerging techniques such as machine learning add further dimensions.

From Crime To Court: Review Principles For UK Disclosure


by Hans Henseler

UK Law Enforcement agencies are facing significant challenges related to digital evidence disclosure in criminal prosecution cases. Suspects who are charged with a crime must have access to all relevant evidence to ensure a fair trial, even if the evidence can undermine the prosecution. To avoid disclosure errors and ensure that digital evidence is admissible in court, the UK Crown Prosecution Service (CPS) revised the Disclosure Manual in December 2018 and addressed the following topics:

  • Reasonable lines of enquiry
  • Identifying relevant material
  • Disclosure documentation
  • Legal professional privilege (LPP)
  • Copying disclosable material

In this article, I will cover what each of the CPS disclosure manual topics means and provide seven principles that will help streamline the review of digital material so that it better fits within the context of UK Disclosure requirements. Our recommendations are as follows:

  1. Early review of digital material can assist with the evaluation of reasonable lines of enquiry.
  2. Filter digital material proportionally to the facts.
  3. Ensure searches are targeted.
  4. Document your investigation strategy before and during reviewing.
  5. Automatically record all hits that were not examined and document why.
  6. Isolate and or redact LPP material.
  7. Use technology to assist with the disclosure of large quantities of digital evidence.

These principles can be used to augment your investigation workflow and ensure that it complies with the new disclosure guidelines.

1. Introduction to Disclosure

Due to high-profile legal challenges, Law Enforcement agencies in the UK are struggling with how to sufficiently disclose evidence related to criminal prosecution cases[1]. Not only are they dealing with more digital evidence than ever before, there also isn’t a clear guide on how to properly disclose this evidence to ensure that findings are admissible in court. To understand what is causing these problems, it’s important to understand how “Disclosure” in the UK criminal prosecution system is defined:

Disclosure is the process in a criminal case by which someone charged with a crime is provided with copies of, or access to, material from the investigation that is capable of undermining the prosecution case against them and/or assisting their defence. Without this process taking place a trial would not be fair.[2]

Evidence material is collected by the police throughout the course of an investigation and some of this material will be relevant to the case that they are investigating[3]. The Crown Prosecution Service disclosure manual explains that “material may be relevant to an investigation if it appears […] that it has some bearing on any offence under investigation or any person being investigated, or on the surrounding circumstances of the case, unless it is incapable of having any impact on the case.”

Relevant material falls into one of two major categories:

i) Used material that is relied upon by the Crown to inform or uphold the prosecution case. For example: witness statements, interview tapes, photos of injuries, exhibits (e.g. knives, CCTV, drugs), and emails, phone and social media messages, and browser history records used as evidence.

ii) Unused material that is relevant material within the possession of the prosecution but which the prosecution does not intend to use. For example: the crime report itself, custody imaging photographs, copies of bail forms, pre-interview disclosure to a solicitor, PNCB forms, CAD/incident logs, CCTV viewing requests, search warrants, copies of notebooks, PNC printouts on victims, and emails, phone and chat messages and internet browser history records not used as evidence.

2. Disclosure Requirements for Digital Evidence

Chapter 30 of the CPS Disclosure Manual[4] provides an overview on several topics related to digital evidence, specifically focusing on:

  • Reasonable lines of enquiry
  • Identifying relevant material
  • Disclosure documentation
  • Legal professional privilege
  • Copying disclosable material

Other topics mentioned in this chapter include search & seizure, scheduling, defence engagement, retention of seized material and engagement with the court—topics that are not directly relevant for digital evidence review but are certainly interesting for other digital evidence management matters, e.g., forensic case management.

2.1 Reasonable Lines of Enquiry

What amounts to a reasonable line of enquiry will depend on the circumstances of each case. Prosecutors should work closely with investigators, disclosure officers, and digital forensic experts to ensure all reasonable lines of enquiry are followed and that digital material is properly assessed for relevance, revelation, and disclosure.

As with all communication evidence, the prosecution must be able to explain to the defence and the court what they are doing, as well as what they will not be doing.

Transparency on the approach that was taken in every case is of paramount importance. The prosecution should engage in an early dialogue with the defence about what they consider to be reasonable for each case and should make explicit reference to the approach that was taken in a Disclosure Management Document.

The CPS guide to “reasonable lines of enquiry” and communications evidence[5] discusses the analysis of material. In examining the contents of a mobile device download, the investigator may set parameters relating to specific timeframes that are proportionate to the facts. For example, the investigator could focus on a specific window of time, such as from the date the complainant and suspect met to a month after the suspect’s arrest. If there are messages that are potentially undermining/assisting at either end of the window of time searched, then the search should be extended further.

Below is an example of how this might be recorded in the Disclosure Record Sheet (DRS) and Disclosure Management Document (DMD):

The following mobile devices were seized and have been examined, and download reports prepared:

Description: iPhone X

Reference: ABC/1

Obtained from: Suspect

Telephone number: XXXXX XXXXXX

Download report reference: DEF/1

Description: Nokia X

Reference: ABC/2

Obtained from: Complainant

Telephone number: XXXXX XXXXXX

Download report reference: DEF/2

The contents of the download in respect of the telephone ABC/1 between 1/1/2017 and 1/7/2017 have been examined for the following:

Level 2 Mobile Device Examination: 

(i) Logical Capture and Preservation of case defined and verified data (from Handset, Tablet, (U)SIM or Memory Card) using selected tool(s) in a laboratory environment to report that data.

(ii) Physical Capture and Preservation of data (from Handset, Tablet or Memory Card) using selected tool(s) in a laboratory environment to report defined data types.

 The timeframe selected is considered to be a proportionate and reasonable line of enquiry, and represents [e.g. the date that the complainant first met the suspect to a month after the suspect’s arrest].

The device has been examined for communications between the complainant and the suspect that relate to [insert parameters of the search that has been made e.g. communications about the offence or appear relevant to an issue in the case such as previous sexual behaviour between them, issues raised in interview, material that points away from the offence].

2.2 Identifying Relevant Material 

Digital material involved in today’s investigations is likely to be extensive. As a result, there will be a considerable amount of data held on a digital device which may not be ‘relevant’ to the case. The disclosure officer/investigator may utilize key words, time and date filters, sampling or other appropriate search tools and analytical techniques to locate relevant passages, phrases and identifiers. The purpose of applying such techniques is to reduce the total amount of evidence for review and to identify the relevant material on each device.

Where keyword searches are applied, the terms should be agreed with the investigator/disclosure officer. They should be designed to capture not only evidential material but also material likely to pass the test for relevance. Search terms that are too general, or too many search terms, may generate a large number of hits containing material that is not relevant and may complicate the disclosure exercise. It is essential that search terms are selected with care, are not too generic, and are targeted. Here is an example list of search terms for a fraud investigation:

Agency, Aggressive, Aid, Arrange, Asset, Backdate, Bad, Blackmail, Bogus, Bonus, Bribe, Budget, Case Bonus, Counterfeit, Deceive, Embezzle, Ethic, Expense, Fabricate, Fake, False, Falsify, Fictitious, Fine, Fraud, Payoff, Petty Cash, Pressure, Scam, Special Payment, Steal, Whistleblower.
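A keyword sweep of the kind described above can be sketched in a few lines of Python. This is only a conceptual illustration, not any particular forensic tool's implementation; the terms and messages are hypothetical, drawn loosely from the fraud example:

```python
import re

# Hypothetical search terms, drawn from the fraud example above.
SEARCH_TERMS = ["backdate", "bogus", "bribe", "counterfeit", "falsify"]

def flag_relevant(messages, terms):
    """Return (message, matched_terms) pairs for messages hitting any term."""
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    hits = []
    for msg in messages:
        matched = sorted({m.group(0).lower() for m in pattern.finditer(msg)})
        if matched:
            hits.append((msg, matched))
    return hits

messages = [
    "Please backdate the invoice to March.",
    "Lunch at noon?",
    "The bonus was a bribe, plain and simple.",
]
for msg, terms in flag_relevant(messages, SEARCH_TERMS):
    print(terms, "->", msg)
```

Note how even this toy example shows why overly generic terms are a problem: a single broad word added to the list can multiply the hits to be reviewed.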

2.3 Disclosure Documentation 

The disclosure officer should keep a record or log of all digital material seized, imaged and subsequently retained as relevant to the investigation. This log should be shared with the prosecutor to ensure they are aware of the nature and extent of digital material in the case, where the evidence was seized, and what was done with it.

In cases with very large quantities of data, a case record should be made of the strategy and the analytical techniques used to search the data. To build this record, the officer in charge of the investigation must develop a strategy that sets out how the material should be analysed or searched to identify categories of data. This record should include details on the person who completed the investigation, demonstrate that they followed the process, and identify all relevant dates and times related to the case. In such cases, the strategy should record the reasons why certain categories have been searched for (such as names, companies, dates etc).

In every case, it is important that any searching or analytical processing of digital material, as well as the data identified by that process, is properly recorded. So far as practicable, what is required is a record of the terms of the searches or processing that has been carried out. The Disclosure Manual specifically lists the following details that should be recorded:

  1. A record of all searches carried out, including the date of each search and the person(s) who conducted it.
  2. A record of all search words or terms used in each search. Where it is impracticable to record each word or term (such as where Boolean searches, search strings or conceptual searches are used) it is usually sufficient to record each broad category of search.
  3. A log of the key judgements made while refining the search strategy in light of what is found or deciding not to carry out further searches.
  4. Where material relating to a ‘hit’ is not examined, the decision not to examine should be explained in the record of examination or in a statement.
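The four record-keeping points above could be captured in a simple structured log. The sketch below is a hypothetical illustration only, not a prescribed format; all field names and values are invented:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class SearchLogEntry:
    """One record per search, covering the details listed above."""
    conducted_by: str      # person(s) who conducted the search
    conducted_on: date     # date of the search
    terms: List[str]       # search words/terms, or a broad category
    judgement: str = ""    # key judgement made when refining the strategy
    # Hits not examined, each with the reason it was not followed up.
    unexamined_hits: List[str] = field(default_factory=list)

log: List[SearchLogEntry] = [
    SearchLogEntry(
        conducted_by="DC Example",
        conducted_on=date(2019, 3, 1),
        terms=["backdate", "bogus invoice"],
        judgement="Dropped 'payment' as too generic after excessive hits.",
    )
]
print(len(log), "search(es) recorded")
```

A log structured like this can later be exported verbatim into the Disclosure Record Sheet, since each entry already names the person, date, terms and judgements.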

2.4 Legal Professional Privilege

The prosecutor should be aware that digital material may include material which is subject to legal professional privilege (LPP). If such material is seized, the investigator must arrange for it to be isolated from other seized material and any other investigation material in the possession of the investigating authority. It is recommended that investigators allow independent counsel to be present during a search.

Where material potentially subject to LPP is thought to be on a device, analysis of file structures and search terms can be applied to identify the material likely to be LPP. The defence should be invited to assist in the process. Material which responds to the search terms or other techniques and which may be subject to LPP can then be referred to independent counsel.

2.5 Copying Disclosable Material 

When dealing with large quantities of digital evidence that must be disclosed, care must be taken to avoid inadvertently disclosing confidential or sensitive material to the defence.

Where the disclosure officer or investigator have concerns about differential disclosure of confidential information between co-accused, they should bring these concerns to the attention of the prosecutor. The prosecutor should then seek agreement with the defence, and where appropriate the court, as to how disclosure may be made.

It may be appropriate to ask owners of data if any breach of confidentiality will occur should the data be disclosed to any accused.

3. Review Principles

Based on the disclosure requirements summarized above, we have identified seven principles that will be helpful in streamlining the review of digital material so that it better fits within the context of UK disclosure requirements.

  1. The prosecution is encouraged to enter into an early dialogue with the defence regarding their investigation approach. Early access for investigators who are reviewing digital materials will be helpful in assisting a prosecutor to identify which lines of enquiry are reasonable and which ones are not.
  2. When examining the contents of all digital evidence, the investigator may set parameters relating to timeframes that are proportionate to the facts, for example between the date the complainant and suspect met to a month after the suspect’s arrest.
  3. When identifying relevant material through keyword searches, filtering or other search tools or analytical techniques, it is essential that search terms are selected with care, are not too generic, and are targeted.
  4. A record should be made of the strategy and the analytical techniques used to search the data. The record should include details of the person who has carried out the process, and the date and time it was carried out. This record should also include the reasons why certain categories were searched.
  5. A record of all hits that were not examined and the reasons why they were not followed up on must also be kept. While this may be cumbersome it can also help investigators to refine their search strategy and select search terms with care.
  6. Digital material may include material which is subject to legal professional privilege (LPP). If such material is seized, the investigator must arrange for it to be isolated.
  7. When dealing with large quantities of digital evidence that requires disclosure, care must be taken to avoid inadvertently disclosing confidential or sensitive material to the defence.

4. Conclusion

The UK Crown Prosecution Service (CPS) updated several official guidelines with respect to disclosure in December 2018. Disclosure affects all stages of prosecution and is not restricted to review of digital materials alone. However, when reading the chapter on digital material in the CPS disclosure manual, there are several disclosure topics that suggest basic principles for review of digital material. These principles can be used to design new features in existing digital forensics tools and evidence review platforms that assist with the implementation of procedures as described in the disclosure guidelines. Some of these principles may require manual note taking by investigators while other principles could be implemented using automated recording of case activity with associated reporting dashboards that are tailored to disclosure requirements.

About The Author 

Hans Henseler is the Director of Digital Evidence Review at Magnet Forensics. If you’d like to learn more about how Magnet REVIEW can help you analyze all the digital evidence in your investigation and satisfy disclosure requirements, visit MagnetForensics.com.

This article has also been published as a Magnet Forensics whitepaper. A PDF version can be requested here.

References 

[1] See for example https://www.telegraph.co.uk/news/2018/07/12/policeand-prosecution-lawyers-fail-correctly-disclose-evidence/

[2] Copied from “Review of the efficiency and effectiveness of disclosure in the criminal justice system”, UK Attorney General’s Office, November 2018, downloaded from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/756436/Attorney_General_s_Disclosure_Review.pdf

[3] For example, see paragraphs 120-127 of Justice Committee, ‘Disclosure of evidence in criminal cases inquiry’ (July 2018) and the published supporting evidence. https://www.parliament.uk/business/committees/committees-a-z/commons-select/justice-committee/inquiries/parliament-2017/disclosure-criminal-cases-17-19/

[4] Disclosure Manual, CPS, pages 90-94, Revised: 14 December 2018, https://www.cps.gov.uk/legal-guidance/disclosure-manual

[5] Disclosure – A guide to “reasonable lines of enquiry” and communications evidence. CPS publication. July 24, 2018. See https://www.cps.gov.uk/legal-guidance/disclosure-guide-reasonable-lines-enquiry-and-communications-evidence

© 2019 Magnet Forensics Inc. All rights reserved. Magnet Forensics® and related trademarks are the property of Magnet Forensics Inc. and used in countries around the world.

How To Read A Moving Low-Quality License Plate Using Amped FIVE’s Perspective Stabilization And Perspective Super Resolution


Thanks to TV series and movies, people nowadays believe that when it comes to digital images and videos, everything is possible. Some of you may remember the “never-ending enhance” sequence in Blade Runner or the magic zoom they have in CSI. Then we turn to reality, where cameras with poor components, coupled with Digital Video Recorders (DVRs) set to kill the quality to save storage space, mean that you often end up with a bunch of smashed pixels. Sometimes, you are asked to take vital information from them.

Contrary to what you see in movies, there are cases in which there’s just nothing to do, except invent information that is not there – something we must avoid, of course. However, sometimes the information you are looking for is not clearly available in any frame alone, but it may become way more intelligible by wisely integrating multiple frames. In this post, we’ll show you how two of Amped FIVE’s brand new filters, Perspective Stabilization and Perspective Super Resolution, can really be a game changer in such situations.

Imagine we must read the license plate of the vehicle found at the bottom of this video.

By scrolling with the mouse, we can look at the actual pixels composing the license plate (it’s important to be able to see the raw data, without any adjustment, to begin with!). The plate is 11 pixels high, not much, but still something. The car stays in the video for several seconds, so we could try to merge the information from some frames into a single, enhanced picture.

An effective way to strongly reduce the noise in a video is Frame Averaging, which simply computes the mean of the frames pixel by pixel. Unfortunately, your object of interest must be almost still across the frames, otherwise it will smear in the final output. It is thus advisable to apply Local Stabilization to your object before averaging: you select the object in the first frame and let Amped FIVE track it and keep it still in the rest of the video.
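The per-pixel mean behind Frame Averaging can be sketched in a few lines of NumPy. This is a conceptual illustration with synthetic data, not Amped FIVE's implementation:

```python
import numpy as np

def frame_average(frames):
    """Per-pixel mean of a stack of aligned frames; random noise
    falls roughly as 1/sqrt(N) while the static content is preserved."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a still 8x8 scene observed through 50 noisy frames.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, (8, 8))
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(50)]
averaged = frame_average(frames)
# Mean absolute error of a single frame vs. the averaged result.
print(np.abs(frames[0] - scene).mean(), "->", np.abs(averaged - scene).mean())
```

The averaged image is far closer to the true scene than any single frame, which is exactly the effect being exploited; the sketch also makes the failure mode obvious, since any motion between frames would smear the mean.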

If we try this in our example, we notice that simple image stabilization does not help much, as you can see below (we also improved contrast). Why is that?

The problem is that the car is not moving in a straight direction, and the license plate perspective changes. Therefore, even after stabilization, the license plate still changes from one frame to another, and the amount of change depends on its position with respect to the camera. That’s why pixels are more and more blurred as you move from left to right in the license plate.

Fortunately, in Amped FIVE there are now two filters targeting this exact scenario: Perspective Stabilization and Perspective Super Resolution.

Perspective Stabilization

Perspective Stabilization, found under the Stabilization filter category, can stabilize a planar object whose perspective changes across frames. It is based on the concept of image rectification: since we know the license plate is a planar surface, once we know the coordinates of its vertices in all frames we can compute a homography, that is, a mathematical transformation which links the coordinates of vertices in the current frame to the coordinates of the corresponding vertices in the reference frame. In order to compute the license plate homography between two frames, the pixel coordinates of the plate’s vertices are needed. Unfortunately, tracking pixel coordinates with high accuracy in poor-resolution images is not trivial. In fact, until a few months ago, users had to manually click on the license plate’s vertices in all frames they wanted to stabilize, as allowed by the Perspective Registration filter – a boring and exhausting job. With Perspective Stabilization the user now only needs to select the vertices in the first (reference) frame; then, the filter will automatically track their position in the following frames.
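The homography estimation described above can be illustrated with the standard direct linear transform (DLT). The sketch below is a conceptual illustration with hypothetical plate-corner coordinates, not Amped FIVE's code:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst
    via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply a homography to one 2D point using homogeneous coordinates."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w

# Hypothetical plate corners: current frame vs. reference frame.
current = [(10, 5), (60, 8), (58, 25), (9, 22)]
reference = [(0, 0), (50, 0), (50, 20), (0, 20)]
H = homography(current, reference)
print(warp_point(H, (10, 5)))  # ≈ (0.0, 0.0)
```

Four point correspondences are the minimum needed to determine a homography, which is why the filter asks for the plate's four vertices in the reference frame.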

Let’s take a look at how we can use this filter for our case. Once we have selected the area (it is recommended to allow some extra pixels outside the license plate; of course, they must be on the same planar surface), the points will be automatically added to the filter parameters and we can now choose Motion Type, Tracking Method, and Interpolation.

Motion Type refers to the expected type of motion the filter needs to track. More complex motions can give more accurate results but do take longer to compute and may be more sensitive to motion blur or video noise. Perspective is the most general setting and is what we are using in our case.

You can choose from three different types of tracking to suit your project:

  • Static Tracking compares the current frame with the reference frame where the selection of pixels has been set: it offers the most precise stabilization, but may fail if the shape of the region changes too much.
  • Dynamic Tracking compares each frame with the previous one, which can stabilize larger deformations but the position in the stabilized video may drift slightly over time. In practice, it is more robust (works in most situations), but less precise for later frames.
  • Hybrid Tracking compares the current frame with both the first and the previous, allowing for the tracking of large deformations but keeping the object steady. It is the method which usually gives the best compromise of robustness and precision.

In our case, Dynamic Tracking proved to be the best option. Under the Output tab, you can select what type of output you would like from the filter.

  • Stabilize Video will produce a stabilized video; useful if you only need stabilization, if there are further steps in your workflow, or if you plan to use something like Frame Averaging to reduce noise and perform integration.
  • Selection Overlay will draw the warped selection onto the input video.
  • Prepare for Super Resolution will leave the video unaltered but adds the transformation matrix to each frame should you want to use Perspective Super Resolution later in a workflow.

Clicking the ‘Prepare for Super Resolution’ button at the bottom of the Output tab will automatically add the Perspective Super Resolution filter.

Perspective Super Resolution

Perspective Super Resolution works alongside Perspective Stabilization to apply the Super Resolution effect to an object that has been the subject of some perspective disturbance. It is automatically loaded into the Chain History after clicking “Prepare for Super Resolution”.

The general idea behind super-resolution is that when you have many low-resolution observations of an object which moves a bit, you can merge the information to obtain a single observation at higher resolution. In a sense, you are trading in time resolution for pixel resolution. The Perspective Super Resolution filter uses the mathematical transformation previously estimated by the Perspective Stabilization filter to guide such an “information fusion” operation.
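The "trading time resolution for pixel resolution" idea can be illustrated with a naive shift-and-add scheme on synthetic data. Real super-resolution, including Amped FIVE's filter, handles subpixel motion, interpolation and noise far more carefully; this sketch only shows the core intuition:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive super-resolution: place each low-res frame onto a high-res grid
    at its known integer high-res offset, then average overlapping samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::scale, dx::scale] += frame
        cnt[dy::scale, dx::scale] += 1
    cnt[cnt == 0] = 1  # avoid division by zero where no frame contributed
    return acc / cnt

rng = np.random.default_rng(1)
scene = rng.uniform(0, 255, (16, 16))                # "true" high-res plate
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]            # subpixel motion, on the HR grid
frames = [scene[dy::2, dx::2] for dy, dx in shifts]  # four shifted low-res views
recovered = shift_and_add(frames, shifts, 2)
print(np.allclose(recovered, scene))  # True: the four views cover every HR pixel
```

Each low-resolution frame samples the scene at a slightly different offset, so together they cover the high-resolution grid; in real footage the offsets come from the estimated homographies rather than being known in advance.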

We asked for a magnification factor of 5, and obtained this still image:

We compensated for the slight blurring that is typically introduced by integration techniques by applying a slight Optical Deblurring:

Finally, we used the Correct Perspective and Sharpening filters to have a frontal view of the license plate and increase the contrast between characters and the background. As shown below, compared to the original pixels, we obtained quite an improvement!

This case shows how important it is to use the proper tools in the proper order. We showed that a standard stabilization followed by frame integration was not the best choice in this case, because the perspective of the object of interest changes in the video. Thanks to the Perspective Stabilization and Perspective Super Resolution filters, we were able to automatically register frames and merge their information.

Find out more about Amped FIVE at ampedsoftware.com/FIVE.

My Digital Forensics Career Pathway


by Patrick Doody

Let me start by introducing myself. I’m Patrick, 39 years of age and from a working-class background. I’ve lived in London all my life; my parents moved to the UK from Southern Ireland when they were young and started a new life and a family together. I am the younger of two children. From a young age I was constantly questioning how things worked and carried around many unanswered questions in my mind. I guess you could say I had a scientific mind.

When I was around eight years old my parents invested in my first computer, a Sinclair ZX Spectrum 128k; it was top of the range back then. It was an exciting time, and after enjoying many games I soon wanted to know how the games were created and how the computer ran them. I spent many long, tedious hours copying code from manuals to make small programs compile and run; even though the results were very basic, it was very satisfying. I soon discovered technology was not stationary and advancements were constant. This has kept my interest to this very day.

I left school with good GCSEs and progressed through college and decided to go on to university. Unfortunately I was more interested in my own personal development through much socialising and all that comes with it, and it soon became apparent that I wasn’t committed to my studies. I returned home and soon found myself working within the retail industry.

Time passed and I gained much experience in many short-lived careers, still using my free time working on my PC and following the current computer trends. It was in 2014 that I was made redundant and I found myself in a difficult situation. I was receiving government benefits; it was during this period that I joined a local government scheme and started a computer course. Within six months I had completed and passed the European Computer Driving License (ECDL) levels 2 and 3. The passion for learning had returned and in the same year I moved on to a Higher Education Diploma in Computing with the help of the 21+ Loan.

Patrick’s first computer was a Sinclair ZX Spectrum 128k

With this new qualification a door opened for the opportunity to return to university. Even though I was anxious at returning as a mature student, I embraced the opportunity and enrolled onto a BSc degree in Computer Science. After three years and achieving a 2.2 award, I discovered my love for digital forensics. The ability to investigate and extract data from digital devices, even when it had been hidden or deleted, was of great interest. I continued studying in this field and completed an MSc in Cyber Security and Forensics.

Throughout my education I’ve gained a good knowledge and understanding of many computer science fundamentals, including information systems and software security. Within cyber security and forensics, I’ve worked with state-of-the-art technologies involved with security threats and data acquisition using such tools as FTK, X-Ways and EnCase.

On completion of my studies, I continued my involvement in the field by joining digital forensic groups and attended many interesting expos, enabling me to connect and network with people from the industry. It was around this time when I was informed of vacancies within the civil service, and the position for Forensic Practitioner became available at HMRC, as part of their ‘Digital Support and Innovation’ business area.

This was an exciting opportunity to hone my skills in the field of digital forensics. It included working on mobile devices and even vehicles. Extensive training programs were offered to aid my professional development. I decided to put myself forward for the position and my application was successful.

Patrick now works for HMRC

My role involves working in the front line of investigations in a professional laboratory, using a lot of different tools and techniques, and abiding by the HMRC regulations pursuant to the ACPO Good Practice Guide for computer-based electronic evidence: IPA, RIPA, CPIA, GDPR, and PACE. I am part of a large team within the Fraud Investigation Service (FIS) which is responsible for the department’s Civil and Criminal Investigations, tackling the most serious tax evasion and fraud supporting investigations totalling hundreds of millions of pounds each year.

Some cases can take many years to investigate in order to gather enough compelling evidence to be taken to trial. One such recent case involved a small number of suspects who were involved in a £100 million tax fraud. HMRC worked closely with the CPS, and numerous devices were seized. We collected approximately seven terabytes of data, following strict procedures to ensure its integrity was maintained, and we uncovered many electronic documents and files. This data was analysed, and the findings played a vital role in obtaining successful convictions for what came to be known as a ‘Major Attack on the Tax Revenue of the UK’.

I have been enrolled onto industry-renowned courses such as CompTIA A+, as well as courses through the College of Policing, and there is great scope for career development. There is a great future within this industry and I’m looking forward to learning new skills, working more with analysis of data, assisting with live cases which may be taken to court, and maybe even being called up as an Expert Witness.

Returning to study an old passion was the best decision I’ve ever made, and I would urge anyone with an interest to pursue their goals, because with some hard work and commitment anything can be achieved.

Would you like to share your digital forensics career pathway with other readers? Send in your experience to scar@forensicfocus.com to be featured.


How To Use Cross-Case Search With Belkasoft Evidence Center


by Yuri Gubanov

Diving deeper may be the key to the eventual success of a digital forensic investigation. This is true not only when it comes to a single given case, but also when it comes to intersections between different cases. 

Sometimes, a person being investigated may have associates who are problematic, or who have been involved in different forms of misconduct. Consequently, in the course of a digital investigation, investigators may need to examine links between a current case and other opened (or recently archived) cases. A reliable tool is needed to get a clear and coherent picture in its entirety.   

That is why Belkasoft Evidence Center has a ‘Cross-Case Search’ function. This article is intended to demonstrate, step by step, how to use this feature productively. 

General Outline

The aim of Cross-Case Search is to detect intersections between cases. By ‘intersections’ we mean pieces of information identified in the current case that can be linked with relevant data found in other cases chosen at the time of analysis. The resulting matches are subsequently displayed in the ‘Search Results’ screen. 

Belkasoft Evidence Center (BEC) is capable of using the following data types for Cross-Case Search:  

  • Phone numbers 
  • E-mail addresses 
  • Application user identification numbers (UINs) and profile names
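Conceptually, an intersection between cases is a set intersection over these identifier types. The sketch below uses hypothetical case data to illustrate the idea; it is not BEC's implementation:

```python
def cross_case_matches(current_case, other_cases):
    """Report identifiers (phone numbers, emails, app UINs/profile names)
    that the current case shares with each previously processed case."""
    current_ids = {i.lower() for i in current_case["identifiers"]}
    matches = {}
    for name, case in other_cases.items():
        shared = current_ids & {i.lower() for i in case["identifiers"]}
        if shared:
            matches[name] = sorted(shared)
    return matches

current = {"identifiers": ["PhilipLombard1939@gmail.com", "+44 700 900 123"]}
archive = {
    "Case 2017-041": {"identifiers": ["philiplombard1939@gmail.com"]},
    "Case 2018-112": {"identifiers": ["someone@else.org"]},
}
print(cross_case_matches(current, archive))
# -> {'Case 2017-041': ['philiplombard1939@gmail.com']}
```

Note that only the identifiers themselves are compared, which is consistent with the anonymized search database discussed in the Legal Questions section below.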

How To

  1. First, you need to enable Cross-Case Search while adding a data source. Switch on ‘Run cross-case analysis’:

  2. Click ‘Next’ twice (leave the second page at its defaults).

  3. Select an existing case (or cases) for your Cross-Case Search.

Pay Attention: Each case is identified with the following three parameters: 

  • Database icon. If you see this icon, you can select this particular case for a Cross-Case Search.
  • Case name. 
  • Path to the case. 

If you created a case with an earlier version of BEC, it will not be available for Cross-Case Search. However, you can upgrade such cases by opening them with the newest version of BEC.

  4. Once you have launched your Cross-Case Search, BEC starts to search for matches in the selected cases as it processes the current one. In fact, matches can be detected before the current case is completely processed.

  5. An alert icon in the status bar will inform you about the first identified match. Clicking on the icon will lead you to ‘Search Results’.

The sign is shown for the first match only and will stay on the product’s status bar until the next BEC launch.

  6. If you would like to examine your Cross-Case Search results, follow this route: ‘View’ –> ‘Search Results’. You will need the ‘Cross-Case Search’ node. Here you can start examining individual links between cases; an example of such a link, an e-mail address, is shown below.

  • Results associated with your current case can be accessed in the area indicated here as ‘1’. 
  • Matches from the other cases are displayed in the area ‘2’. These matches are items from the other case related to the new one.

In this sample case, the match is PhilipLombard1939@gmail.com.

Legal Questions

Some of our customers, discussing this feature, tell us that they cannot store closed cases, so the question arises: is the Cross-Case Search function useful for them? It is, and here is why:

  • First, this feature is not necessarily to be used on archived cases. You may use other open cases. 
  • Second, you can look into cases which are already examined but not yet deleted because they have not yet gone to trial. Some cases may last for a year or longer, and during this time you still possess these cases’ data.
  • Third, you can run Cross-Case Search on your colleagues’ open cases. 
  • Lastly, even if you delete a certain case, the data required for Cross-Case Search will not be deleted by BEC by default. This data is kept anonymized, meaning that phone numbers and emails are not bound to any person, since this information is not stored in the Cross-Case Search database. You are still able to run a Cross-Case Search analysis and find meaningful results, though you will not be able to open the deleted case. This means that you can launch such a search, without the corresponding images or devices, within the bounds of legality.

Conclusion

Cross-Case Search is an invaluable function to modern-day digital investigators. With Belkasoft’s Cross-Case Search, such a quest for data intersections with other cases is simple, intuitive, and automated. You can see it for yourself by requesting a free trial version of Belkasoft Evidence Center at https://belkasoft.com/get.

Case Study: Extracting And Analyzing Messenger Data With Oxygen Forensic Detective


by Nikola Novak

It’s a great pleasure to share my experience of working with Oxygen Forensic Detective, which was a crucial tool in solving one of my cases.

A father of a minor girl contacted me, worried that his daughter was keeping suspicious company and had probably been consuming marijuana. His wife had accidentally found traces of a substance which looked and smelled like marijuana in their daughter’s clothes, and that led him to this conclusion.

Polygraph testing indicated that the girl was communicating intensively with her friends, mostly via Facebook and mobile messenger apps, so I suggested to her parents that they thoroughly examine her phone, believing they would find incriminating evidence.

The parents contacted me a day later and told me they had examined the messages in all the installed apps and hadn’t found anything that would justify their doubts, so they assumed the messages had been deleted after the polygraph testing.

I suggested a forensic analysis of her mobile phone, a Tesla Smartphone 6.4 Lite. After her parents insisted, the girl reluctantly agreed.

After physical acquisition I focused on analysing the data from the messenger apps, and I noticed that the complete communication history from the Viber app had been deleted.

Photo 1: Recovering deleted Viber messages

Oxygen Forensic Detective successfully retrieved all the deleted data, including 4,652 Viber messages, which were crucial in identifying incriminating contacts and reconstructing her movements. I have to admit I was fascinated by the result, because I didn’t expect that number of retrieved messages.

By using Oxygen Forensic Detective’s analytical tools I quickly filtered the Viber communication and presented the people contacted, the frequency of communication, and the content of the messages to her parents in a simple, clear PDF report.

Photo 2: Analysis of Viber communication

Mobile devices are an important part of our lives and an unavoidable subject of our inquiries. In most of my polygraph tests a trace leads me towards mobile phones, and because of that Oxygen Forensic Detective is a necessary additional tool in my work.

Photo 3: Reconstruction of movements

Find out more about Oxygen Forensic Detective at oxygen-forensic.com.

About Nikola Novak

Novak started his career early, as a student, while working in the investigation agency Protecta as an associate investigator, where he learned the basic tactics of covert surveillance from former secret service agents. Novak later became a private investigator in the agency, and advanced to the level where he managed the private investigations unit department. Novak has over a decade of experience working in the private security industry. In his career Novak has worked on more than 3,000 private investigation cases and conducted more than 1,000 polygraph examinations worldwide.

Mr. Novak is president and founder of Security and Education Association of Serbia, which gathers all the agencies from the private security sector in Serbia and organizes post-secondary education classes and specialized training courses for people who operate in the private security, polygraph and investigations industry. Novak is also founder and director of the European Polygraph Academy, which conducts basic polygraph examiner training using APA standards and continuing education for polygraph examiners worldwide. The European Polygraph Academy is the first polygraph education institution of its kind to be created and operational in continental Europe. Mr. Novak is also a course coordinator for the courses held in the European Polygraph Academy, where he also works as a co-instructor and teaches the students who attend the basic polygraph examiner training course. Mr. Novak is a member of every relevant international association – World Association of Detectives, International Association of Private Detectives, American Polygraph Association, European Polygraph Association, British and European Polygraph Association, National Polygraph Association and European Society of Criminology.

Fighting Crime With Data: Law Enforcement In The 21st Century

by Paul Hamrick, Nuix

Executive Summary

Law enforcement investigations have long been influenced by developments in technology; after all, new technologies create new ways for criminals to profit and new sources of evidence. Law enforcement needs to keep up with the times, dealing with technological developments in areas like firearms, automobiles or, more recently, digital communications.

Over recent years, many cases have been solved after analysing electronic evidence from suspects’ devices. The wealth of information people create merely by using everyday technologies is a treasure trove for investigators to determine when a crime occurred, where it happened, who was present and any number of other connections to real people and their actions.

Investigators, however, are constantly racing to keep up as new technologies emerge, which seems to happen every day. Modern tools and techniques are well-suited to the evolving nature of electronic evidence, powering efficient investigations today and helping law enforcement agencies stay adaptable to the crimes and technologies they will face tomorrow.

Knowing The Evidence

A law enforcement entry team breaches an apartment door and finds a cache of evidence that could help disrupt and dismantle the criminal organisation that it’s had under investigation for months. In a den converted into a money counting room are mobile phones, a laptop computer and a tablet … devices used to communicate with members of the organisation up and down the chain of command. In the bedroom, a smartphone left beside the bed could help identify additional suspects and co-conspirators. The contents of a handwritten ledger could help follow the money trail and connect bank accounts and assets with the targets of the investigation. And stored away in a linen closet, an old desktop computer – long since abandoned for the convenience offered by mobile devices – stands ready to disclose closely held secrets clarifying the organisation’s structure, leadership and historic operations. All in all, a good day for investigators and prosecutors handling the case.

The DNA Of Data

Scenarios like this one are familiar to law enforcement professionals around the world. Today’s criminal investigations routinely involve the discovery, collection and analysis of digital evidence in a variety of types and formats. Across a continuum of targets including theft and fraud, assaults and homicides, drug dealing, money laundering, human trafficking and terrorism, digital evidence has forever changed investigative operations. It provides unparalleled insights into the connections between victims and witnesses, as well as correlating associations between known suspects and previously unknown co-conspirators.

Gone are the days when law enforcement professionals relied solely on documentary evidence and verbal testimony to discern the who, what, when, where and how of a competently completed investigation. Our globally connected society and the ubiquitous presence of devices increase the importance of digital evidence in investigative operations. Devices that include smartphones, tablets, laptops, fitness trackers, GPS devices and virtual assistants make it just as important for investigators to understand how to collect, analyse and interrogate digital evidence as to know how to interact with traditional human sources.

The use of fingerprints to help solve crimes represented a sea change in 19th century law enforcement. Similarly, the introduction of DNA in 20th century investigations was hailed as “the greatest forensic advancement since the advent of fingerprinting.”[1] Now, the DNA of data is proving to be the next evolutionary change in criminal investigations. Data, and the tools to collect, analyse and understand it, are empowering law enforcement throughout the world.

The Value Of Digital Evidence

There is perhaps no case in which the collection, analysis and understanding of digital evidence played a more important role than the hunt for the BTK serial killer. Between 1975 and 2005, investigators worked diligently to identify the individual responsible for the deaths of 10 people in and around Wichita, Kansas.[2]

For 30 years, investigators spent untold hours running investigative leads, speaking with witnesses and mulling over the clues contained in the letters the killer sent to the police, taunting them and their investigative abilities. But on February 16, 2005, using computer forensics, investigators identified Dennis Rader as their primary suspect. Through the analysis of a floppy disk on which Rader penned a letter to the police, investigators discovered metadata that led to Rader’s arrest.

Criminals and their organisations increasingly rely on technology to empower and expand their illegal activities. Transnational crime researcher Louise Shelley observed, “Crime and corruption are no longer limited by geographic boundaries … (T)echnology has transformed the very nature of crime itself.”[3]

The internet, social media and electronic communications provide the anonymity criminals need to obfuscate their identities and nefarious activities. The devices and applications they use to share information and orchestrate their activities are the new tools of the trade … the virtual crowbar with which they open windows of opportunity and exploit the vulnerabilities of the digital world.[4]

In its 2017 annual review, Europol identified anonymity as the most significant challenge investigators face today. The criminal intelligence agency now spends most of its time attempting to combat the digitisation of organised crime. As an example, technology allows burglars to case a street of potential victims, using data gathered from a variety of online sources to understand when the neighborhood is most vulnerable and to time their attacks accordingly.[5]

The data contained on the variety of devices available to criminals has become an increasingly important source of evidence for investigators and prosecutors alike. It is the rare exception where a modern investigation does not include the review of at least some volume of data contained on a digitally connected device or an online platform used by a suspect or a victim. The data may provide insight into communications between co-conspirators, help identify a witness or suspect or assist in determining the geographic location of physical evidence.

The 1994 discovery of the IBM server used by the Cali cartel proved that criminal organisations clearly understood the value of digital data.[6] The cartel’s server contained telephone records important to the cartel for identifying potential informants cooperating with the police. The cartel also used it to track bribes it paid to government officials. Operating like any other multinational enterprise, the cartel leadership was interested in tracking expenditures, understanding profit and loss, and exploiting critical value data that would prove useful to furthering its operational goals.[7]

Only 15 years later, in 2009, investigators in Boston demonstrated the value of digital evidence by using social media, emails, and IP addresses to identify the Craigslist Killer. Relying on digital evidence, officers obtained a search warrant for the suspect’s home, where they found physical evidence linking the suspect to the crimes. Among the evidence was a laptop computer that contained fragments of messages between the suspect and the victims.[8]

Now On Video

Video presents another electronic source law enforcement may rely on when conducting investigations. But video evidence presents its own challenges. Wall Street Journal technology reporter Jennifer Valentino-Devries observed that, while law enforcement may rely on video evidence to “spot a suspicious package in a crowded train station and correlate it to the license plates on a nearby car to find a potential suspect … Much of the information now being used by intelligence agencies and police is in difficult-to-analyse formats, such as video, speech-recordings, text and photos from social networks. The volume of this information can overwhelm the trained resources available to collect, analyse, and assess its value in compliance with chain of custody considerations.”[9]

Look no further than the Boston Marathon bombing in April 2013 to understand the value of video in criminal investigations. Footage from the cameras located around the marathon’s finish line was an essential component of the investigation of the attack that left three dead and 260 injured. With the use of technology solutions, investigators triaged video and still photography from a variety of sources to identify the Tsarnaev brothers, eventually leading to the apprehension and conviction of Dzhokhar Tsarnaev on 30 federal charges, including use of a weapon of mass destruction and the death of an MIT police officer.[10]

More Things Connected

The Internet of Things (IoT) presents the next opportunity for investigators to use data to enhance investigative operations. As criminologist Professor David Wall suggested, “The emergence of (IoT) has further expanded the data flow by increasing the number and variety of devices gathering information. Your smartphone, your daily step counter, even your car, household goods, and house itself are all generating data on your movements and decisions and communicating this data back to their motherships.”[11]

Consider the following examples:

  • During October 2018, investigators used data from a murder victim’s Fitbit, as well as video from a neighbor’s video surveillance camera, to arrest a suspect, the victim’s stepfather. The video linked the suspect to the home in which the victim was killed; the Fitbit was used to estimate the victim’s time of death and to correlate the suspect’s presence at the victim’s home with measurements of the victim’s heart rate.[12]
  • In a similar case, investigators used home surveillance video in the investigation of the disappearance of a 20-year-old woman. After watching hours of video, they identified and apprehended a suspect, while using data transmitted by the victim’s Fitbit and smartphone, and information from her social media accounts, to locate her body.[13]

The Big Data Challenge

According to Statista, in 1984 only 8% of US homes had a computer. By 2015, nearly 87% of homes had one. Today, nearly 70% of all internet users maintain a presence on social media. Many social media users rely on the digital world to get their news, to maintain personal and professional relationships, and to share information, both good and bad. And we know that criminals use social media platforms to further their activities.[14]

Expanding Our Knowledge

Just eighteen years ago, only 25% of the world’s stored information was held in a digital format;[15] at the time, most data were stored on film, paper, analog magnetic tapes, and other non-digital media. Today, the exact opposite is true — nearly all the world’s stored information is digital.[16]

According to IBM, approximately 90% of all the world’s data was generated in the past two years.[17] Eric Schmidt, the former executive chairman of Alphabet, estimated that today we create as much information in 48 hours as we did from the beginning of human civilisation to the year 2003.[18] In his book Law Enforcement Information Technology: A Managerial, Operational and Practitioner Guide, Jim Chu recounts a speech delivered by Tom Steele, the former chief information officer for the Maryland State Police, at the annual meeting of the International Association of Chiefs of Police in 2000. Steele’s warning was prescient: “We are just beginning to realize the significance of what is happening. There is not one area of law enforcement that will go untouched. The very essence of how we do business has been impacted through greater communications and information sharing. Over the next 15 to 20 years, you will see the greatest redirection, reorganization and modification of policing since Sir Robert Peel and the Metropolitan Police.”

Chu also argued that, “(T)he train is leaving the station. In fact, the IT train has jumped the tracks and has taken off on the information superhighway. It is imperative that IT becomes a prime consideration in all aspects of the public safety service delivery chain … increasing the number of criminals being identified, apprehended, and convicted while decreasing administration and operating costs.”[19]

Today, most criminal investigations will include at least some data collected from a few devices used by a small number of suspects. However, investigators often encounter cases in which relevant digital evidence may come from multiple devices and many suspects. Some criminal organisations are even taking the next step and encrypting their information to prevent or delay its access by law enforcement.

In a March 2011 lecture at the Massachusetts Institute of Technology, Janet Napolitano, former Secretary of the US Department of Homeland Security, stated the focus on big data and the acquisition of the tools necessary to understand that data is based on the need to “(d)iscern meaning and information from millions – billions – of data points. And when it comes to our security, this is one of our nation’s most pressing science and engineering challenges.”[20]

Throughout the years since September 11, 2001, the US law enforcement and intelligence communities have undertaken a variety of initiatives to help correlate clues and identify the connections between individuals, organisations, and their activities. The proliferation of online data, including the development of social media networks, has become a dynamic repository for open source information that intelligence and law enforcement analysts have at their command.

Toward Better, Technology-Enabled Investigations

Technology can be a force multiplier; it can also make it harder to conduct effective and efficient criminal investigations.

In the post-9/11 era, the use of intelligence derived from open source and sensitive data repositories is essential to law enforcement operations. It is also essential to preventing crime. But the sheer volume of data available for analysis can potentially overwhelm agencies that lack the financial and personnel resources necessary to recognise the data’s critical value as it relates to protecting public safety and preventing crime.

IBM public safety, intelligence, and counter-fraud specialist Shaun Hipgrave observed that, “According to a recently published study, (a major issue) law enforcement organizations face when combatting fraud is the sheer amount of data generated by everyday business operations. And the amount of data captured in investigations is growing significantly by the day.”[22]

Leveraging Limited Resources

Where resources are constrained, investigators and analysts need effective, cost-efficient, and easy-to-use technology to process and analyse evidence. Otherwise, as criminologists Dale Willits and Jeffrey Nowacki argued, “(P)olice departments adhering to traditional methods of crime fighting (without implementing technology-enabled investigative solutions) are unlikely to effectively track many of these offenses … Moreover, in the absence of specialized computer skills, these crimes are often more difficult to investigate even when they are reported to police.”[23]

Ideally, law enforcement agencies’ technology in support of criminal investigations should:

  • Recognise a wide variety of file types and data formats
  • Connect directly to the sources where the digital evidence is stored, including file shares, email servers and archives, cloud repositories, mobile devices, forensic images, and live data from running devices and databases
  • Process unstructured, structured, and semi-structured data at speed and scale, and with forensic accuracy and precision
  • Automatically identify duplicates and group similar items.
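
The last capability, identifying duplicates, typically rests on cryptographic hashing of file content. A minimal sketch, with illustrative names that are not taken from any vendor's API:

```python
import hashlib
from collections import defaultdict

def group_duplicates(items):
    """Group (name, bytes) pairs by the SHA-256 digest of their content.

    Byte-identical duplicates share a digest; only groups with two or
    more members are returned, since singletons are not duplicates.
    """
    groups = defaultdict(list)
    for name, data in items:
        groups[hashlib.sha256(data).hexdigest()].append(name)
    return {d: names for d, names in groups.items() if len(names) > 1}

# Two byte-identical attachments stored under different names collapse
# into a single review item; renaming a file cannot hide it.
items = [("IMG_0041.jpg", b"\xff\xd8\xfe\x01"),
         ("holiday.jpg",  b"\xff\xd8\xfe\x01"),
         ("receipt.png",  b"\x89PNG")]
dupes = group_duplicates(items)  # one group: IMG_0041.jpg + holiday.jpg
```

Grouping *similar* (rather than identical) items requires perceptual hashing or learned embeddings on top of this exact-match step.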

Technology with these features can provide essential insight into the relevant evidence, helping to identify hidden connections between people, objects, locations, and events. Moreover, deploying advanced investigations technology increases the efficiency and effectiveness of investigators and analysts.

“Today, new types of evidence are helping law enforcement and prosecutors conduct more thorough and accurate investigations. Though the evidence used years ago continues to play a valuable part in a criminal case, the improvements in science and technology are enabling police and prosecutors to solve more crimes more reliably than ever before.”[24]

Criminal Investigations, Technology, And The Journey Ahead

There is no doubt: technology will continue to advance, and criminals will continue to take advantage of technology to further their activities. In 1908, any member of the public could buy a Ford Model T, which meant criminals could use them to get away from the police;[25] however, the New York City Police Department only began using motorised vehicles for patrol in the 1920s.[26] It takes time for law enforcement to embrace technology while remaining within the guardrails of criminal procedure. The rule of law is just as important as infiltrating, disrupting, and dismantling criminal organisations.

But in the 21st century, effectively addressing the asymmetric threats energised by technology innovations means law enforcement agencies must continue to invest in technology and training. Doing so will facilitate a more effective response to investigative tasks today and allow investigators and analysts to prepare for the emerging investigative challenges of tomorrow.

Building A More Complete Investigative Picture

Individual pieces of evidence each tell a limited story when viewed separately. Nuix contextualizes these bits of information and shines a spotlight on the relationships and connections between each person, object, location, and event — known as the POLE framework — connections that investigators have, for years, needed to make for themselves.

Relationships between these four categories of evidence are a catalyst in almost every investigation. Nuix software lets investigators take a step back to understand the broader view of data sources, the patterns they create, and the stories those patterns can tell us.

It starts with understanding what kinds of evidence fall within each of the categories. These are just a few examples, broken down within the elements of the POLE framework:

  • People: Suspects, victims, associates/colleagues, employers, family members
  • Objects: Electronic devices (PC, mobile, USB), email addresses, social media handles, mobile numbers, tickets, weapons
  • Locations: Home addresses, public buildings, landmarks, travel origins and destinations, places of employment
  • Events: Transmission of data, email, physical meetings, crimes, arrests, destruction of data or property
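
In data-structure terms, the POLE model is a labelled graph: entities are nodes and relationships are edges. A toy sketch (in no way Nuix's actual implementation; names and edges are invented) of pulling out everything connected to one suspect:

```python
from collections import defaultdict, deque

# Hypothetical POLE graph: each edge links two (type, name) entities.
edges = [
    (("person", "suspect A"), ("object", "+44 7700 900123")),
    (("object", "+44 7700 900123"), ("event", "call 02:14")),
    (("event", "call 02:14"), ("person", "victim B")),
    (("person", "witness C"), ("location", "warehouse")),
]

def connected_to(start, edges):
    """Breadth-first search over undirected POLE edges."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

cluster = connected_to(("person", "suspect A"), edges)
# Links suspect A to victim B through the phone number and the call
# event, while witness C and the warehouse form a separate component.
```

The value of graph-style tooling is exactly this transitive reach: a person two or three hops away surfaces automatically instead of through manual cross-referencing.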

By intuitively categorizing and visualizing data relationships between pieces of evidence, Nuix software helps investigators identify relationships across POLE elements in greater detail than ever before. This, in turn, lets them uncover the truth faster and more accurately within their investigations and quickly solve once complex, difficult, and time-consuming investigations with relative ease.

About The Author

Paul serves as an investigations subject matter expert and helps Nuix support law enforcement agencies at the federal, state, and local levels, as well as corporate security investigations across a variety of Fortune 500 companies. Paul began his federal law enforcement career in 1986 as a Special Agent with the former U.S. Customs Service, later being appointed as Deputy Assistant Commissioner in the Office of Professional Responsibility at US Customs and Border Protection, the largest law enforcement agency in the US. After a 28-year career in federal law enforcement serving in various leadership capacities in the US Department of Homeland Security, Paul served as the Senior Manager leading investigative operations at General Dynamics Information Technology. He held this role until he joined Nuix in his current capacity.

About Nuix

Nuix delivers the total data intelligence organizations need to overcome the burdens of ediscovery, investigations, risk, compliance, and security in a world overflowing with data. Our intuitive platform processes and analyzes more than 1,000 file formats to reveal the key facts and their context – at any scale, with incredible speed.

References

[1] Aaron P. Stevens, Arresting Crime: Expanding the Scope of DNA Databases in America, Texas Law Review, March 2001

[2] Benjamin H. Smith, The BTK Killer: Then and Now, Oxygen, August 16, 2018

[3] Louise Shelley, Crime and Corruption in the Digital Age, Journal of International Affairs, 1998

[4] David Décary-Hétu & Carlo Morselli, Gang Presence in Social Network Sites, International Journal of Cyber Criminology, December 2011

[5] Europol, Annual Review – An overview of Europol Activities, 2017

[6] PoliceOne, The Technology Secrets of Cocaine Inc., July 20, 2002

[7] Brian Anderson, The Cartel Supercomputer of 1994, Motherboard, September 4, 2014

[8] Dan Clarendon, “Craigslist Killer” Philip Markoff’s Last Act Was to Haunt His Ex-Fiancée, InTouch, June 17, 2018

[9] Jennifer Valentino-Devries, Software finds place in posse: Firms scramble to cash in on law-enforcement demand for data-sifting programs, Wall Street Journal, Eastern Edition, November 4, 2011

[10] Brian Ross, Boston Bombing Day 2: The Improbable Story of How Authorities Found the Bombers in the Crowd, ABC News, April 19, 2016

[11] David Wall, How Big Data Feeds Big Crime, Current History, January 2018

[12] Jason Hanna and Stella Chan, The murder suspect denies it. The victim’s Fitbit tells another story, police say, CNN, October 4, 2018

[13] Nicole Chavez, Mollie Tibbetts case mystified police until a security camera offered a key clue, CNN, August 22, 2018

[14] Statista, Percentage of households in the United States with a computer at home from 1984 to 2015

[15] Rebecca Boyle, All the Digital Data In the World Is Equivalent to One Human Brain, Popular Science, February 11, 2011

[16] Bernard Marr, How Much Data Do We Create Every Day? The Mind-Blowing Stats Everyone Should Read, Forbes, May 21, 2018

[17] Jack Loechner, 90% Of Today’s Data Created In Two Years, MediaPost, December 22, 2016

[18] MG Siegler, Eric Schmidt: Every 2 Days We Create As Much Information As We Did Up To 2003, TechCrunch, August 4, 2010

[19] Jim Chu, Law Enforcement Management Technology: A Managerial, Operational, and Practitioner Guide, 2011

[20] Peter Dizikes, ‘We need to see ahead’, MIT News, March 15, 2011

[21] Serving Dope, Bronx’s Murderous Courtland Avenue Crew Found Guilty, December 11, 2012

[22] Shaun Hipgrave, Smarter Fraud Investigations with Big Data Analytics, Network Security, December 2013

[23] Dale Willits and Jeffrey Nowacki, The Use of Specialized Cybercrime Policing Units: An Organizational Analysis, Criminal Justice Studies, 2016

[24] Kristine Hamann and Rebecca Brown, Secure in Our Convictions: Using New Evidence to Strengthen Prosecution, Chicago, 2018

[25] FBI, A Brief History: The Nation Calls, 1908-1923

[26] Timberly Dinglas, The History of NYC Police Cars, Car Part Kings, April 13, 2015

Industry Roundup: Image Recognition And Categorization

by Christa Miller, Forensic Focus

Image recognition and categorization have never been more in demand, thanks to the spread of extremist propaganda, child sexual abuse material (CSAM), and other illicit activity across the internet.

Because of the sheer amount of material online, investigators assigned to these kinds of cases need ways to recognize it quickly and also to categorize it — to separate known from fresh material. This is particularly important when it comes to neutralizing active threats and rescuing victims — as well as preserving investigators’ mental health by limiting the amount of material they have to see.

Thanks to developments in machine learning and artificial intelligence, a number of vendor products have been able to incorporate rapid recognition or categorization tools into their software. Here, we take a look at what’s available.

Image Recognition & Categorization Technology

Image recognition is often powered by artificial intelligence, which is trained through machine learning to differentiate objects (and, on a higher end, faces) in pictures. Examples include money, weapons, drugs, militant clothing, nudity, etc.

A step or more beyond hashing, image recognition technology focuses on the object or action rather than on whether it’s been seen before, as hashing does. That makes it possible to discern new victims or new crimes, as described in this article from Wired.

Once pictures and videos are identified, of course, it’s important to be able to categorize them according to the degree of criminal activity they represent — an important step in bringing charges. Most often applied to CSAM, categorization helps investigators to separate the known from the unknown, the relevant from the irrelevant, and even animation from real-life imagery.

Categorization resources like Project VIC (in the United States), the Child Abuse Image Database (CAID) in the United Kingdom, INTERPOL’s International Child Sexual Exploitation (ICSE) database, and C4All form a way for investigators to collaborate and share hashed data, breaking down data silos so that everyone can benefit. Many of the tools listed below offer the ability to export new hashed images to these databases.
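
At its core, matching case media against such a shared database reduces to set-membership tests on content hashes. A simplified sketch, assuming a flat JSON list of MD5 digests; the real Project VIC and CAID export formats are much richer, carrying categories, series identifiers and perceptual hashes as well:

```python
import hashlib
import json

def load_hash_set(json_text):
    """Parse a flat list of known-media records into a set of MD5 digests.

    Illustrative format only, not the actual Project VIC / CAID schema.
    """
    return {rec["md5"].lower() for rec in json.loads(json_text)}

def triage(files, known_md5s):
    """Split (name, bytes) case media into known hits and new material."""
    known, new = [], []
    for name, data in files:
        digest = hashlib.md5(data).hexdigest()
        (known if digest in known_md5s else new).append(name)
    return known, new

# A previously catalogued image is recognised without a human ever
# viewing it; only genuinely new material goes to an examiner.
hash_db = json.dumps([{"md5": hashlib.md5(b"known-image").hexdigest()}])
known, new = triage([("old.jpg", b"known-image"), ("new.jpg", b"fresh")],
                    load_hash_set(hash_db))
```

This is what "breaking down data silos" buys in practice: every hash contributed by one agency spares every other agency a manual review of the same file.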

AccessData FTK / Lab 

FTK gave users the ability to import and export Project VIC data in 2017, but more recently — as of FTK v7.1 — AccessData brought in machine learning-powered object and facial recognition. By relying on open-source machine learning technology from Google, the new system allows investigators to feed images into the software, training it in real time to make accurate, precise identifications of what the images contain. By training the system to seek specific individuals or objects within images, users can then filter the results to focus on those pictures.

Autopsy

Autopsy is an open source digital forensics platform, known for being the graphical user interface to the Sleuth Kit command line forensics framework. As open source software, Autopsy is completely free to use and might be a good alternative for smaller labs on a budget that need to investigate child exploitation cases and get fast results.

Primarily developed and supported by Basis Technology, Autopsy is supplemented through community-developed modules. Among them:

  • FDRI—Facial Detection and Recognition in Images. Taking Second Place in the 2018 OSDFCon Module Development Contest, FDRI relies on deep learning for its face detection and face recognition capabilities.
  • Image Classification for Autopsy, a submission to the 2018 contest, automatically classifies the objects — cars, guns, or anything the user selects — it finds in images. Watch the video here and find the source code here.
  • A file-level ingest module, FaceRadar, which detects image files and then scans each one for faces.  
  • An “OpenCV” object detection module, located in Autopsy’s “Experimental” part because it doesn’t come with any trained models. A classification module is planned for a future release. 

Autopsy integrates Project VIC and C4All databases via modules in its Law Enforcement Bundle.  

Belkasoft Evidence Center

Belkasoft Evidence Center supports a variety of image recognition capabilities: faces, text (via optical character recognition), and pornography within pictures and video key frames, as well as forged image detection. As of v9.5, BEC also identifies pictures modified with hand-drawn arrows or other marks that denote potential drug dead-drops or other distribution points.

BEC relies on artificial neural networks for its image recognition. It draws on detection technology for skin, eyes, nose, and mouth features to help accurately identify faces and explicit content; and on de-skew, resolution increase, and other technologies to help identify text.

In addition, a separate Forgery Detection Plugin detects altered or modified JPEG pictures, including those saved at a different compression level, cropped, or with altered content such as exposure tuning.

BlackBag Technologies

BlackBag partnered with Image Analyzer early in 2019 to bring AI-driven image recognition combined with triage and prioritization techniques. As of its 2019 R1 version, BlackLight searches for pornography, weapons, drugs, extremism, gore, alcohol, and even swimwear and underwear, identifying new material not previously hashed. 

By focusing on the detection of visual threats, as opposed to everyday objects, Image Analyzer shortens investigative time. Because it’s built into BlackLight, the solution can be run on pictures and videos even with no Internet connection — and at no additional charge.

The integration allows investigators to examine the riskiest content categories first. This makes it possible for users to start reviewing the images or videos with the likeliest relevance to their case based on the algorithm’s confidence in whether a given content type is present. The available categories will continue to grow as BlackBag cooperates with Image Analyzer to provide user requested categories.

For categorization, BlackLight allows the export of relevant pictures to Project VIC, BlueBear LACE, and C4All to categorize the images. However, it also partners with Semantics 21 to analyze and categorize media, making for a more streamlined approach to categorization.

Cellebrite UFED Analytics 

Separate from its extraction or forensic analysis tools, Cellebrite’s UFED Analytics includes algorithms that identify weapons, drugs, CSAM, adult content, documents, and screenshots. In addition, facial recognition allows investigators to cross-reference and match individual faces across collected pictures and videos.

Once pictures are identified, UFED Analytics automatically categorizes images and individual video frames. Not only does this eliminate manual review of duplicative evidence; it also identifies and correlates unknown or unique images.

The platform’s integration with Project VIC, CAID and other defined hash value databases then allows investigators to match collected evidence against hash values of existing known material, as well as to categorize and export newly discovered material for sharing with other investigators.

Cyan Forensics

Cyan Forensics’ triage tools rely on Contraband Filters to allow investigators to scan pictures and videos on site. These filters replace MD5 hashing, allowing for speedier identification of contraband. Built using original material of extremist and CSAM content, the software allows investigators to identify whether contraband exists without exposing additional material — key if triaging on scene.

Griffeye Analyze DI

One of the most widely known and used image categorization tools, Analyze DI makes use of a new generation of algorithms for its Face Recognition technology. Face Recognition identifies and matches suspects and victims in imported images and videos — even in complex lighting conditions, blurry and noisy streams, or when faces are positioned at an angle.

This enables investigators to break out all unique individuals so they can narrow down, structure and prioritize relevant material, reducing both the time they have to take and their own exposure to the material. In addition, Face Recognition enables users to quickly find images in a case that contain similar faces to a suspect or victim. It enhances Analyze DI’s Analyze Relations link analysis tool by linking images of the same person together to show potential new links.

In addition, Griffeye Brain, a CSAM and object classifier trained on real case data, scans large numbers of previously unseen pictures and footage and suggests images that are likely to depict CSAM.

To categorize the images it finds, Analyze DI incorporates technologies and methodologies produced through Project VIC. Its image and video hashing pre-categorizes known data and stacks duplicates. In addition, Griffeye partners with many leading digital forensic software solutions including tools from ADF Solutions, Amped, BlueBear, Magnet Forensics, and Nuix.

Griffeye DI Core is free for everyone to use.

Magnet AXIOM

Over the past two years, Magnet Forensics has been busy adding image recognition capabilities to its Magnet.AI feature within AXIOM. Magnet.AI scans any artifact that contains image data, including chat and email attachments, web cache data, and video thumbnails.

Potential results include depictions of child sexual abuse, nudity, weapons, and drugs; potential screenshots; money, documents, and personal identification such as driver’s licenses or passports; and vehicles, buildings (exteriors) and drones.

AXIOM integrates with Griffeye Analyze DI and Semantics 21 products, but it also features newly enhanced compatibility with Project VIC and CAID hash sets for redesigned media categorization in its own platform. Like other solutions, it makes it possible for investigators to share new data with the investigative community.

MSAB XAMN

The extraction tool XRY from MSAB automatically recognizes the contents of images during the decoding process and sorts them into categories such as drugs, weapons and people. Images identified as CSAM during this process are hidden from examiners’ eyes, protecting them from trauma. 

Examiners can then immediately zero in on categories of interest when later examining the data in XAMN – the analysis software suite from MSAB. A Location Range Filter gives investigators the ability to focus searches by geographic location.

XAMN is interoperable with the Project VIC database. If new pornographic images are found, examiners can tag them and export the hash values to the Project VIC database.

Nuix

Nuix incorporates Google’s open-source machine-learning capabilities, enabling investigators to rapidly identify images using predefined models to filter images of drugs, guns, money, weapons, cars, adult content and child abuse content.

This streamlines investigations by allowing investigators to quickly differentiate between previously known and verified images, and new images that may provide new clues about unknown victims and violators. Additionally, the system can be trained in real time to accurately and precisely identify what the images contain based on their specific investigative requirements. Nuix also incorporates skin tone analysis and facial detection technologies to further empower investigators.

Investigators can easily use these capabilities to enhance existing workflows and automate the processing and analysis of huge volumes of data. Nuix also facilitates the exchange of intelligence information and allows for the sharing of files, tags, and other metadata from any application or tool by eliminating the need to reprocess data.

Using Nuix, investigators can export unclassified files, using OData, into tools such as Griffeye Analyze DI, then analyze, categorize, and tag these unknown files. Nuix also integrates with Project VIC and CAID.

Oxygen Forensic Detective

In June 2019 Oxygen Forensics announced that integrated technology from Rank One Computing, a leading provider of facial recognition and biometrics technology, would allow for facial recognition within its tool at no additional cost to customers. The capability will allow Oxygen Forensic Detective users to capture and analyze aggregated image and video data extracted from more than 27,000 unique devices.

Oxygen Forensic Detective's seamless integration with Project VIC allows users to identify CSAM using hash sets. The found results are visualized both in the software's Project VIC section, and on a separate tab in File Browser, categorized according to Project VIC classification.

Semantics 21 Laser-i Series

Semantics 21’s suite of products categorizes images from photos and video, and can triage suspect media. Facial identification, victim identification, age recognition, object detection, and nudity detection are all part of the feature set, filtering through media to understand content, its relevance, and whether further review is needed. A real-time intelligence database has full compliance and support for Project VIC (and its UK CAID variant).

T3K-LEAP

T3K’s Law Enforcement Analytical Product (LEAP) is designed for front-line personnel to triage content on smartphones within minutes and without the need for specialist knowledge or training. Its picture recognition uses advanced object recognition to focus on terrorism, human trafficking, smuggling and the trade of illegal goods, and CSAM.

As artificial intelligence advances and becomes easier to deploy, expect more vendors to add solutions — via development or partnership — that make your job easier to do. Watch Forensic Focus for the latest news!

How To Analyze Windows 10 Timeline With Belkasoft Evidence Center


Temporal analysis of events (Timeline) can be beneficial when you want to reconstruct events related to computer incidents, data breaches, or virus attacks taking place on a victim’s computer. 

Historically, digital forensic timeline analysis has been broken down into two parts: 

  • ‘Timeline’ to describe changes associated with temporal file metadata in a file system. In other words, this Timeline is based exclusively on the corresponding file system.
  • ‘Super Timeline’ to describe changes associated with temporal metadata for the maximum number of artifacts possible within both file system and operating system. That is to say, it combines data from both sources.

However, recently Microsoft introduced a new type of Windows artifact: Windows 10 Timeline. It offers new opportunities to investigators, with greater clarity. This article describes these new forensic capabilities with Windows 10 Timeline.  

Windows 10 Timeline

The April 2018 Windows 10 update introduced a new feature called ‘Timeline.’ It enables you to look through a computer user’s activities and to quickly return to previously opened documents, launched programs, and viewed videos and pictures.  To open Windows 10 Timeline, you need to click on an icon located next to the magnifying glass icon, i.e., Task View (or click WinKey + Tab).

After that, thumbnails of programs/documents/sites that have been opened today or some time ago will become available. 

This is how Windows 10 Timeline may look if you press ‘WinKey+Tab.’

Investigators have identified the following two issues related to the new feature: 

  1. Some apps are not displayed in ‘Timeline’ after being opened.
  2. Some data from ‘Timeline’ (earlier data) is transferred to Microsoft Cloud.  

Thus, data obtained via Windows 10 Timeline artifact analysis differs from a customary Timeline and Super Timeline analysis.  

Also, Windows 10 Timeline should not be confused with a timeline generated by forensic utility programs. For example, Timeline created by Belkasoft Evidence Center contains many more entries in comparison with Windows 10 Timeline.

Location of Windows 10 Timeline Database

Windows 10 Timeline info covering user activities is stored in the ‘ActivitiesCache.db’ file with the following path: 

C:\Users\%profile name%\AppData\Local\ConnectedDevicesPlatform\L.%profile name%\

The ‘ActivitiesCache.db’ file is an SQLite database (ver. 3). Just like any other SQLite database, it is characterized by the following two auxiliary files: ‘ActivitiesCache.db-shm’ and ‘ActivitiesCache.db-wal’.

Additional info about deleted records in an SQLite database can be obtained by analyzing the database's unallocated space, freelists, and WAL file. Examining them may increase the volume of your extracted data by 30%. If you would like to learn more about SQLite database analysis, you can read our article ‘Forensic Analysis of SQLite Databases: Free Lists, Write Ahead Log, Unallocated Space and Carving’ available here.

Windows 10 Timeline database files shown by File Explorer in Belkasoft Evidence Center. WAL-file is currently empty, but this is not always the case!

Windows 10 Timeline Anatomy

‘ActivitiesCache.db’ contains the following tables: ‘Activity’, ‘Activity_PackageId’, ‘ActivityAssetCache’, ‘ActivityOperation’, ‘AppSettings’, ‘ManualSequence’, ‘Metadata’.

List of tables

The ‘Activity_PackageId’ and ‘Activity’ tables are of the most significant interest to investigators.  

‘Activity_PackageId’ Table

‘Activity_PackageId’ table

The ‘Activity_PackageId’ table contains records for applications, including paths for executable files. Here is an example string copied from the database: 

Example string copied from the database

You can also find names of executable files as well as expiration time for these records. Values located in the ‘Expiration Time’ column define a moment when the records covering PC user activities will be deleted from this database. They are stored in the ‘Epoch Time’ format.
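Converting such an epoch timestamp to a readable UTC datetime is a one-liner with Python's standard library (the function name here is ours, for illustration):

```python
from datetime import datetime, timezone

def epoch_to_utc(seconds):
    """Convert a Unix epoch timestamp (seconds since 1970-01-01 UTC)
    to a timezone-aware UTC datetime."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# epoch_to_utc(0) -> 1970-01-01 00:00:00+00:00
```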

The records in the ‘Activity_PackageId’ table are stored for a certain period of time, typically covering the previous 30 days. There can be records related to executable files or documents that are no longer present on the hard drive being investigated.

‘Activity’ Table

This contains as many as five fields for time tags!

‘Activity’ table

These are ‘LastModifiedTime’, ‘ExpirationTime’, ‘StartTime’, ‘EndTime’, and ‘LastModifiedOnClient’. Their meanings are as follows (all datetimes in UTC):

  • ‘StartTime’ is the moment when an application was launched.
  • ‘EndTime’ is the moment when the application ceased to be used.
  • ‘ExpirationTime’ is the moment when the record’s storage duration in the database expires.
  • ‘LastModifiedTime’ is the moment when a record covering a PC user activity was last modified (if the activity has been repeated several times).
  • ‘LastModifiedOnClient’ may be empty; it is filled in only when users themselves modify files.
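A minimal sketch of pulling these time tags out of a working copy of ‘ActivitiesCache.db’ with Python's built-in sqlite3 module might look as follows. It assumes the column names described above and plain epoch-seconds values, so treat it as illustrative rather than a parser for every Windows build:

```python
import sqlite3
from datetime import datetime, timezone

def dump_activity_times(db_path):
    """List time tags stored for each record in the 'Activity' table.

    Column names follow the Windows 10 Timeline schema described above;
    timestamps are assumed to be Unix epoch seconds (NULLs become None).
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT StartTime, EndTime, ExpirationTime, LastModifiedTime "
        "FROM Activity"
    ).fetchall()
    conn.close()
    to_utc = lambda ts: (
        datetime.fromtimestamp(ts, tz=timezone.utc) if ts is not None else None
    )
    return [tuple(to_utc(ts) for ts in row) for row in rows]
```

Always run such queries against a copy of the database (together with its -wal and -shm files), never against the original evidence.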

The table is where you can find paths to executable files

You can also find paths to the executable files in this table:

‘…:”F:\\NirSoft\\x64\\USBDeview.exe“…’.

Gary Hunter (@pr3cur50r) stresses in his article ‘Windows 10 Timeline – Initial Review of Forensic Artefacts’ that values of time tags for deleted files do not change. As for modified documents, the values of their time tags do not change instantly, but within 24 hours.

How to Investigate Windows 10 Timeline with Belkasoft Evidence Center

Once you have added a data source (a hard drive, a logical drive, a folder or a file), select ‘Windows Timeline’ in the ‘System Files’ section of the ‘Add data source’ window.

Adding Windows Timeline

Once the data extraction process is complete, the ‘Windows Timeline’ category will be displayed in the ‘Overview.’ Data extraction results from ‘ActivitiesCache.db’ will be shown there.

Data extraction results

Since the ‘ActivitiesCache.db’ file is an SQLite database, Belkasoft Evidence Center is capable of restoring deleted records saved in this file and of extracting additional info from the transaction file (WAL-file).

Using Windows 10 Timeline in Computer Forensics

Data contained in Windows 10 Timeline may be used for issues related to incident response and data breaches.

We will show a data fragment of the ‘ActivitiesCache.db’ file from a computer attacked by hackers as an example. 

Info in the ‘sendspace’ block was concealed deliberately since it led to real malware

As you can see in the screenshot below, the attackers installed some files on the victim’s computer.

Among other things, the following programs were installed: 

  • TeamViewer, a program used for remote computer control
  • Mimikatz, a program used to steal user identity data
  • RDPWrap, a library making it possible for several users to connect to a computer via the RDP protocol.

More than that, analysis of the events stored in Windows 10 Timeline indicates that the attackers opened the TeamViewer log file (TeamViewer14_Logfile.log) via Notepad (notepad.exe). They might have done this to delete or modify the file’s info.

A more convenient way to search for info across the database being analyzed is to use the Belkasoft Evidence Center built-in filters.

For example, one could perform sorting based on executable file names (‘Package Name’).

Results of sorting by package name

As you can see in the screenshot, the following files were launched on the investigated computer: nirlauncher.exe, chromecookiesview.exe, chromepass.exe, folderchangeview.exe, imagecacheviewer.exe, skypelogview.exe. Looks intriguing!

To locate Microsoft Word-related events, let us enter a search word, namely ‘word’, in the filter panel: 

Filtering Windows 10 Timeline by package name in Belkasoft Evidence Center

Once we have used the filter, we can find info about the launch of the ‘winword.exe’ executable file in the sorted results.   

One can also find info about the files that were opened or edited via Microsoft Word by inspecting the timeline event property named ‘Content’:

Conclusion

The new Windows 10 artifact category —namely, Windows 10 Timeline— significantly facilitates investigative activities aimed at reconstructing the events that have taken place on an investigated computer over the previous 30 days. 

More than that, Windows 10 Timeline may provide additional info about the files that have been launched on the examined computer, including deleted ones. This is of particular importance when it comes to incident response and data breaches.

Of course, a reliable tool is needed for such investigations based on Windows 10 Timeline artifacts. Belkasoft Evidence Center can help you with these types of investigations. With BEC’s automated search, recovery of deleted events, analysis of transaction files and SQLite unallocated space, and advanced filters at your disposal, your digital forensics or incident response investigation will be much more efficient.

You can download your free trial of Belkasoft Evidence Center at https://belkasoft.com/get.

Techno Security & Digital Forensics 2019 – San Antonio Sept 30 – Oct 2


From the 30th of September to the 2nd of October 2019, Forensic Focus will be attending the Techno Security & Digital Forensics Conference in San Antonio, TX, USA. If there are any topics you’d particularly like us to cover, or any speakers you think we should interview, please let us know in the comments.

Below is an overview of the subjects and speakers that will be featured at Techno Security. The conference has four tracks: audit / risk management; forensics; information security; and investigations, along with sponsor demos. Forensic Focus will be concentrating on the digital forensics track throughout the event.

Monday September 30th

The program will begin at midday, with Eugene Filipowicz from Kroll giving a forensicator’s guide to fakes, frauds and forgeries. Alongside the rise in ‘fake news’ sites has been a rise in digital forgery of documents as well. Filipowicz will show how to use digital forensic tools to uncover fraudulent documents.

Alongside this, Rob Attoe from Spyder Forensics will be talking about drone forensic analysis and demonstrating the kinds of evidence investigators can expect to encounter in cases involving drones. Meanwhile in the Audit / Risk Management track, Harvey Nusz and Tony Belilovskiy will be hosting an interactive discussion about the US privacy landscape and how it has been affected by the California privacy act.

At 1.15pm, Jamie Clarke will be talking about darknet markets and how much illegal content is easily available there. Particularly focusing on fake IDs and their links to international terrorism, Clarke will highlight the importance of remaining on top of darknet activity in law enforcement investigations.

Trey Amick from Magnet Forensics will be showing attendees how the latest updates to macOS impact digital forensic investigations. He will also look at APFS artifacts and files including KnowledgeC.db, FSEvents, Volume Mount Points, Quarantined Files, and bash history.

At 2.45pm, Vico Marziale from BlackBag will go through the Windows 10 Timeline and talk about its use and application in forensic investigations. Meanwhile Trent Livingston from ESI Analyst will discuss the importance of linking as much data as possible together during digital forensic investigations, which can help to uncover key dates, locations and activities within a case.

The final sessions of the day will include a presentation on chip-off mobile forensics by Dusan Kozusnik, CEO of Compelson; a discussion of US data protection law as it currently stands, and what changes we can anticipate in the future; a look at cyber security threat and forensic intelligence; and a session from Elena Steinke from the Women’s Society of Cyberjutsu, who will be talking about the importance of private and public sectors working together by launching counter-intelligence-like operations in cyberspace.

Tuesday October 1st

The second day of the conference will begin at 8am with a keynote by Roman Yampolskiy from the University of Louisville. Yampolskiy will be discussing how artificial intelligence will impact on the future of cybersecurity, with a particular focus on the rise of AI-enabled cyberattacks and fake forensic evidence.

Following the keynote, at 9.30 speakers from Protiviti will be tackling the thorny issue of how to prove a negative in cybersecurity investigations. Chet Hosmer from Python Forensics will show us how to investigate fake digital photos; and Tarah Melton from Magnet Forensics will do a deep dive into Windows memory analysis.

Taking your forensic analysis to the courtroom and presenting it in a way that makes sense to non-technical members of the public is a challenge at best, and at 10.45 Jeff Shackelford from Passmark Software will talk attendees through how to create a virtual machine from a forensic image to be presented in court.

Jason Roslewich will discuss some unique characteristics of APFS, as well as forensic imaging methods; and over in the info security track will be the intriguingly titled ‘Hack Yourself Before The Hackers Do.’

After lunch, Magnet’s Trey Amick will take to the stage once again to show how investigators can use GrayKey and AXIOM to acquire and parse iOS data that other tools may have missed. Steven Konecny from EisnerAmper will show attendees how to investigate Ponzi schemes in the 21st century; and Charles Giglia, VP of Data Intelligence, will demonstrate the capabilities of CCleaner and discuss whether it signals the end of digital forensics as we know it.

The dark web will again be a topic of discussion on Tuesday afternoon, with Vincent Jung from Media Sonar discussing effective strategies for running darknet investigations. Lee Reiber, COO of Oxygen Forensics, will demonstrate the huge amount of data that can be gleaned from smart fitness trackers such as the Fitbit and the Apple Watch. And Nuix’s Hoke Smith will show attendees how to detect and investigate malicious PowerShell.

At 4.30pm Andy Thompson from CyberArk will be discussing ‘Really Bad SysAdmin Confessions’, talking through some of the worst mistakes we know about in the industry and how we can avoid them. Drones will be the topic of discussion over in the Forensics track, with Greg Dominguez and David Kovar showing how to select and visualise data obtained from UAVs.

In the Info Security track, Steven Chesser will talk about the GDPR and how it can improve process, culture and the bottom line.

The day will be rounded off with a talk from John Wilson on the legal implications of Blockchain technology for digital forensic investigations.

Wednesday October 2nd

The final day of the conference will begin with an early riser session at 8am, in which Stephen Arnold will talk about the latest changes to the dark web and how they will impact on investigations. Although the number of sites on the dark web has decreased, illegal activity is continuously on the rise. Arnold will show how end-to-end encryption, surface web discussion groups, pastebins, and obfuscation tools are playing an important role in illegal dark web activity, and discuss what we can do to address this challenge.

At 9.15am Will Hernandez from MSAB will show attendees how to use the latest exploits to recover data from Android devices; speakers from NTT Security will show how hackers gain unauthorised access to buildings; Michael DaGrossa will talk about the need for artificial intelligence in information security; and Tarah Melton from Magnet Forensics will demonstrate how to tell a digital story using Connections and Timeline in Magnet AXIOM. Meanwhile over in the Investigations track will be a talk about social media, digital evidence and ‘what lurks in the cloud’.

Speakers from Belkasoft will be presenting at 10.30am, looking at different approaches to mobile device acquisition. In the Forensics track will be a discussion about building a successful threat hunting program; and in Info Security we will be looking at cloud security automation.

After lunch, speakers from Ansilio will show how to reveal issues and risks that keyword searches may have missed. Barbara Hewitt from Texas State University will demonstrate how to examine threat avoidance theories; and Gregg Braunton will discuss the importance of having a ‘statement of work’ prepared. Abdul Hassan will be running his ever-popular demonstration of social media analysis in counter-terror investigations.

At 3.15pm Jerry Bui will talk about the challenges associated with running digital forensic investigations in the era of fake news, and how these can be addressed. Ronald Hedges will discuss how cybersecurity and technology can be used by attorneys over in the Info Security track; and the day will draw to a close with Stephen Arnold taking to the stage once more to provide some deeper insights into recent updates on the dark web.

There will also be networking events taking place throughout the conference, which will be advertised during the conference and in the program. Find out more and register to attend here.

Finding And Interpreting Windows Firewall Rules


by Joakim Kävrestad

Determining with whom and in what way a computer has communicated can be important and interesting in several types of examinations. Communications can be an important part of analyzing if and how a computer has been remote controlled or with whom the computer has shared information. It can also be a good way to determine if a computer has been compromised or infected with malware.

If a computer is compromised and controlled remotely by a rogue user, that user needs to have an established connection to the computer. Further, many types of malware are used to steal and send information to someone, and simply need to be connected with a so-called “command and control” server that can control their behavior. A common denominator for everything that needs to communicate is that it has to pass through the firewall. For those of you who are not networking gurus, a firewall is software or a device that acts as a gatekeeper and decides what traffic is allowed to enter and leave a computer or network. Also, it will often log historic connections. Windows-based computers typically use the built-in Windows Firewall, and artifacts associated with it can provide important information to a forensic expert.

In essence, the Windows Firewall examines IP addresses and port numbers in IP-based network traffic to decide what traffic is allowed to enter and leave a computer. Traffic that is determined to be good is allowed to pass through, and traffic that is deemed bad is blocked. IP addresses are the addresses used by computers to communicate over the Internet, and port numbers are used to address traffic to a certain service. Many services are assigned specific port numbers to use for communication. Thus, if a port number assigned to a certain service – say the remote control software TeamViewer – is allowed to pass through the firewall, there is a fair chance that TeamViewer has been installed on the computer. Likewise, some malware is known to use a specific port number, and thus, if a port number associated with that malware is identified, a strong indication of infection is found. You may also analyze the firewall log to see which IP addresses a computer has been communicating with. This information can tell you if the computer has been in touch with someone that it should not be in touch with, and possibly assist in identifying an intrusion.

Analyzing Firewall Rules

When analyzing the Windows Firewall there are essentially two main pieces of information to care about. The first is the current traffic rules: they dictate what ports, IP addresses and applications are allowed or blocked at the moment. The other is the firewall log files, which provide historic data about previous connections. Logging is unfortunately not enabled by default, but it is worth looking for the log file since it will provide a lot of information if present. The firewall rules are located in the Windows registry, in the SYSTEM hive under the following key:

\CurrentControlSet\services\SharedAccess\Parameters\FirewallPolicy\FirewallRules

The data looks as follows; the highlighted rules are dummy rules created for the purpose of this article.

As previously described, the firewall rules decide what traffic is allowed to enter or leave the computer. Each registry value is one rule, and the general structure is attribute1 | attribute2 | … | attributeN. Let’s zoom in on the first underlined rule to see how the rules are structured. The rule begins with

v2.10|Action=Block|Active=TRUE|Dir=In

The very first attribute is a version number, and the second attribute describes if the action of the rule allows or blocks traffic. The third attribute decides if the rule is active or not, and the fourth attribute describes the traffic direction the rule is concerned with. If it says in, the rule is applied to incoming traffic and if it says out the rule is applied to traffic leaving the computer. The rest of it shows the actual matching rules. The matching rules decide what traffic the rule matches and the specified action is applied to all traffic that match the matching rules. The matching rules can include a number of different attributes, the most important for forensic purposes being:

  • Protocol, which decides what protocol the rule should match. A small experiment performed by the author revealed that 6 is TCP, 17 is UDP, and 1 is ICMP.
  • Lport, which decides the local port.
  • Rport, representing the port of the remote computer.
  • LA4 or LA6, which represent the local IPv4 or IPv6 address.
  • RA4 or RA6, which represent the remote IPv4 or IPv6 address.
  • App, which represents the application the rule should match, making it possible to have application-specific rules. For instance, Firefox may be able to communicate using port 80 (used for web traffic), while Skype may not.
  • Name, which is just a name given to the rule.
  • Profile, which determines under which firewall profile the rule is applied. The Windows Firewall has three different profiles (Domain, Private and Public), and a network connection will be assigned a profile. Usually, when a computer is connected to a new network, the user is asked which profile to apply to the connection. 

The way that firewall rules work is that the specified action is applied to all traffic that matches all the matching rules. For the sample rule, all incoming traffic using TCP and destined to the local port 9000 will be a match, and the action says that it will be blocked. Further, it appears as if the absence of a matching attribute is equivalent to a wildcard, meaning that all possible values of that attribute will be considered a match. For instance, our sample rule does not contain the Profile or LA4 attribute, meaning that it matches all profiles and IPv4 addresses. The forensic value of these rules is that they can reveal what programs and services are allowed to communicate through the firewall, and a rule for a service or application is a good indication that the service is or was installed. Further, firewall rules can reveal malware or intrusions, as these sometimes add firewall rules to enable rogue communication with the outside world.
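A rule string of this shape can be split into its attributes with a few lines of Python. The sample rule below is a constructed dummy in the same format as the registry values shown above, and the parser assumes attribute names are unique within a rule:

```python
def parse_firewall_rule(rule):
    """Parse a Windows Firewall registry rule string into a dict.

    Rules are pipe-separated 'Name=Value' attributes; the leading
    version token (e.g. 'v2.10') is stored under the key 'Version'.
    Absent attributes act as wildcards, so the dict simply omits them.
    """
    parts = [p for p in rule.split("|") if p]
    attrs = {"Version": parts[0]}
    for part in parts[1:]:
        name, _, value = part.partition("=")
        attrs[name] = value
    return attrs

# Constructed example in the registry value format:
sample = "v2.10|Action=Block|Active=TRUE|Dir=In|Protocol=6|LPort=9000|"
```

Running `parse_firewall_rule(sample)` yields a dictionary in which, for instance, the key 'Action' maps to 'Block', making it easy to filter a whole hive export for blocked ports or application-specific rules.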

Analyzing Firewall Log Files

The second artifact of importance when analyzing the Windows Firewall is the traffic log. If logging has been enabled it can provide data about historical connections. This log file tracks how the rules have been applied and describes what traffic was allowed through, or blocked by, the firewall. The log file is named pfirewall.log and located in [systemroot]\System32\LogFiles\Firewall. There can also be a file called pfirewall.log.old that contains historical data. The snippet below is part of a firewall log file, and each row is one piece of traffic.

Looking at the first row beginning with a date, the date is followed by an action, in this case ‘allow’, meaning that the traffic was allowed. Next is the protocol, usually TCP or UDP, followed by the source IP and destination IP. In this file, the source IP is the local computer’s IP address and the destination IP is the remote party: the computer the local computer is communicating with. The next values are the source and destination port numbers.

The next interesting part is the final word in the line, ‘send’ or ‘receive’, which shows if the traffic was sent from or received by the local computer. In this case, the full row is interpreted as follows: UDP traffic sent from the local computer to the IP 172.217.22.174 using port number 443 was allowed. This log file can show what remote IP addresses the local computer communicated with, and the port numbers can provide information about what services have been used during communications. As such, it can be used to find remote connections, malware and intrusions. 
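Parsing such a log row programmatically is straightforward. The sketch below pulls out only the fields discussed above and treats everything between the ports and the trailing direction as opaque; the sample line is constructed for illustration, not taken from a real log:

```python
def parse_pfirewall_line(line):
    """Parse one traffic row of pfirewall.log into a dict.

    Extracts the fields discussed above: date, time, action, protocol,
    source/destination IP and port, and the trailing send/receive
    direction. Header lines (starting with '#') and blanks return None.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    fields = line.split()
    return {
        "date": fields[0], "time": fields[1],
        "action": fields[2], "protocol": fields[3],
        "src_ip": fields[4], "dst_ip": fields[5],
        "src_port": fields[6], "dst_port": fields[7],
        "direction": fields[-1],
    }

# Constructed sample row in the pfirewall.log layout:
sample = ("2019-08-01 10:15:02 ALLOW UDP 192.168.1.10 172.217.22.174 "
          "52311 443 0 - - - - - - - SEND")
```

Iterating `parse_pfirewall_line` over a whole file quickly gives you the set of remote IP addresses and ports the computer has talked to.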

About The Author

Joakim Kävrestad is currently teaching and researching in digital forensics and information security at the University of Skövde, in the south of Sweden. Before entering his current position he worked as a Forensic Expert with the Swedish police and led hundreds of forensic examinations in all types of criminal investigations. Joakim freelances as a forensic expert in criminal investigations through his own company and has authored two books on the subject: Guide to Digital Forensics and Fundamentals of Digital Forensics.


Unreal Steganography: Using A VR Application As A Steganography Carrier


by Stuart Wilson

This report focuses on the use of virtual reality as a potential steganography carrier to avoid detection by the forensic analysis applications commonly used within law enforcement. The goal is to show how a virtual reality game/environment can be made with little training, what file types can be stored within it, whether the files can be extracted once the environment has been packaged, and whether forensic tools can analyse the files. This was done by producing a virtual reality environment using Unreal Engine, which, as the comparison of Unity and Unreal in section 3.3 showed, proved easier to use within the given time. Once the virtual reality environment was built, the steganography files needed to be created. This was done using popular tools such as Our Secret, Deep Sound and Open Puff.

Once the steganography files had been created, they were placed into the virtual reality environment and the entire application was packaged. From there, the packaged environment was loaded into EnCase 8 and Internet Evidence Finder for testing. After analysing the virtual reality application within the forensic analysis programs, no evidence was found and only two pieces of steganography data were recovered. Showing that virtual reality applications can be used to store data demonstrates a way to hide potentially harmful data in otherwise inconspicuous files.

1. Introduction

This report will cover several key issues which digital forensic investigators face in today's society. One of the major issues is steganography: the art of hiding information within other information, such as image files, PDFs, audio files and many more. It will also investigate how the growing use of virtual reality and different forms of steganography can be used by criminals or other organisations to hide sensitive information within these digital environments, with little training, using free open-source tools or applications.
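Steganography of this kind is straightforward to reproduce. The sketch below is illustrative only: it operates on a raw byte buffer (a stand-in for pixel data, not any specific image format) and hides a message in the least significant bits of the carrier, changing each carrier byte by at most one.

```python
# Minimal LSB (least-significant-bit) steganography sketch: each bit of
# the secret is written into the lowest bit of a successive carrier byte
# (e.g. a raw pixel value), so the visible change per byte is at most 1.

def hide(carrier: bytes, secret: bytes) -> bytearray:
    out = bytearray(carrier)
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("carrier too small for secret")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the LSB
    return out

def reveal(carrier: bytes, length: int) -> bytes:
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

pixels = bytes(range(256)) * 2      # stand-in for raw image pixel data
stego = hide(pixels, b"secret")
print(reveal(stego, 6))             # b'secret'
```

Because only the lowest bit of each byte changes, the carrier looks unmodified to casual inspection, which is exactly what makes detection by forensic tooling difficult.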

It will also show how these techniques affect digital forensic investigators, including ways to detect whether a file has been subjected to a form of steganography and whether the files are recoverable with the same or different tools used to hide them.

Sections 2 and 3 of this thesis cover the history of virtual reality, including when virtual reality was first developed and its current uses; the technology of virtual reality, augmented reality and mixed reality; and the various types of secondary devices used with virtual reality. Section 4 contains the literature used and reviewed within this thesis, which allowed research to be conducted into steganography and virtual reality. This leads on to the methodologies, covered within section 5, which outlines the methodology used along with the different types of methodologies. From there, research was conducted into steganography, covered within section 6, including the various types of steganography from throughout history to modern-day developments.

This thesis also includes a section dedicated to the building of the experiment, covered within section 7. This section contains information on how to create a virtual reality environment, followed by the storing of steganography files within it. This leads on to the results section, covered in section 8, which outlines the results of the experiment using dedicated forensic analysis software. Section 9 covers the discussion, including what the results mean, what went well while conducting the research and experiment, and what would be changed. Section 10 contains the conclusion, which outlines the overall research of this thesis and the impact it can have within the field of forensic analysis. From this came the reflection and evaluation, covered in sections 11 and 12, which discuss what went well and what didn't, the strengths and weaknesses, and potential improvements for the project. The final section, section 13, covers the future work of the project, including the use of different tools to both create and analyse the virtual reality environment.

1.1 Aims & Objectives

The aims and objectives of this thesis are to outline how the creation of a virtual reality environment can be used as a potential steganography file carrier to elude forensic investigators, how easily steganography files can be created, and how this evidence can be retrieved. This will be done through:

  • Testing the application in various forensic tools such as EnCase 8, Internet Evidence Finder (IEF), Forensic Toolkit (FTK) and Autopsy.
  • The use and accessibility of tutorials on how to generate a virtual reality environment from video-sharing sites such as YouTube.
  • The number of different steganography tools currently available for free.
  • The attempted retrieval of the placed evidence within the environment.

2. History of virtual reality development

Virtual reality as we know it today started back in the early 1960s with the first head-mounted display (HMD), developed by Morton Heilig, called the Telesphere Mask (see figure 1). He also developed the Sensorama Simulator (see figure 2), a stationary device with a seat which not only allowed the user to view a 3D film, but also immersed the user through other senses such as sound, wind, smell, a vibrating chair and touch (Brockwell, 2016). This, however, wasn't commercially successful, due to it only being able to handle small numbers of people at a time and not being portable. According to The Franklin Institute (2018), the term virtual reality was “first used in the mid-1980s when Jaron Lanier, founder of VPL Research, began to develop the gear, including goggles and gloves, needed to experience what he called ‘virtual reality.’”

From these developments, Ivan Sutherland then developed his own head mounted display called the Ultimate Display. However, due to its sheer size and weight, it later became known as The Sword of Damocles (Flores-Arredondo & Assad-Kottner, 2015) (see figure 3). The difference between this and other headsets of the time is that this was the first to use a computer to generate the images shown in the headset, whereas other devices used a camera to display a live feed or a recording. At the time, the images created by the computer and displayed on the HMD were only wireframe rooms and objects, as the technology at the time did not have enough graphical processing power to generate what we perceive today as CGI (computer generated images).

Figure 1 Telesphere Mask (Virtual Reality Society, 2015)

Figure 2 Sensorama Simulator (pc mag, n.d.)

Figure 3 Sword of Damocles (Elaine, 2016)

In recent years HMDs have grown in popularity and become vastly smaller and cheaper than what they once were; take, for example, the Oculus Rift (see figure 4). Here we have a prime example of a popular consumer product which has allowed consumers not only to experience virtual reality created by other people and organisations, but also to create their own virtual worlds and environments. It also includes a motion-tracking stand which creates a sense of realism for users when they turn or move their heads. At the time of writing this report, the Oculus Rift costs around £399, making it the cheapest dedicated VR headset compared to others such as the HTC Vive (see figure 5), which costs £499. However, HTC also offers a Pro edition of the Vive, which allows the wearer to fully experience surround sound and higher-quality images, but at a cost of £561.

Figure 4 Oculus Rift (Amazon, 2018)

Figure 5 HTC Vive (VIVE VR SYSTEM, 2018)

2.1 Difference between VR, AR & MR

2.1.1 Virtual reality

Virtual reality (VR) is a computer-generated environment displayed in such a manner that the user would be immersed in that environment as a player or character, depending on the type of virtual reality used. Virtual reality works by creating a full virtual world in which the user can explore without any natural reality involved. There are several devices which have virtual reality capability as mentioned above but these devices require a computer with a good graphics card output to be able to run. There are some devices which are cordless and do not require a computer to run but instead run on battery power and use mobile phones, such as the Samsung Gear (see figure 6) or the Google Cardboard (see figure 7).

Figure 6 Samsung Gear VR (Samsung, 2015)

Figure 7 Google Cardboard (Google, 2018)

2.1.2 Augmented reality

Augmented reality (AR) was a term first coined by Thomas Caudell and David Mizell back in the 1990s to describe how the head-mounted displays which electricians used at the time worked. In their paper, they described how the augmented reality system worked and stated that “a user looks at a workpiece and sees the exact 3D location of a drill hole is indicated by a bright green arrow, along with the drill size and depth of the hole specified in a text window floating next to the arrow. As the user changes his perspective on the workpiece, the graphical indicator appears to stay in the same physical location.” (Caudell & Mizell, 1992). Moving from this to today's augmented reality technology, it has, like all advancements, become smaller, cheaper and more efficient, and as such this type of technology can now be found in everyday mobile phones.

There has also been the development of dedicated AR devices such as the Microsoft HoloLens, although there is currently a debate over whether it is a mixed reality or an augmented reality device. According to the University of Exeter, augmented reality can work in several ways: one uses marker locations, in which the computer-generated object locks onto specific points and then displays the computer-generated image; the other is markerless, in which the computer-generated image locks onto a specific location and displays. They state that “this method uses a combination of an electronic devices’ accelerometer, compass and location data (such as the Global Positioning System – GPS) to determine the position in the physical world, which way it is pointing and on which axis the device is operating.” (University of Exeter, 2010). At present, several applications stand out for using these features, such as the mobile applications Pokémon Go, Snapchat and Facebook Messenger.

2.1.3 Mixed reality

Mixed reality (MR) is like augmented reality in that it displays computer-generated images onto real surfaces; however, the difference between the two is that mixed reality aims to create virtual objects, such as animals, that are generated within the environment, so that if the user walked out of the area and came back, the animal would still be in that position or location. Augmented reality, by contrast, creates computer-generated items within a local area but doesn't bind them to that area for later use (Johnson, 2016).

When comparing virtual reality with augmented and mixed reality, the differences are obvious, but at the same time each shares similarities with the others. Virtual reality and augmented reality both use CGI for their surroundings and objects, yet one is a complete virtual world (see figure 8) while the other only takes specific aspects from the virtual world and incorporates them into reality (see figure 9). They also differ in how they generate depth of field: augmented reality works by producing a three-dimensional image over a specific area, whereas virtual reality creates an entire three-dimensional environment and controls the view from the user's perspective through the use of a headset and motion tracking.

Figure 8 Virtual Reality Display (Parrish, 2016)

Figure 9 Snapchat AR (Wilson, 2018)

2.2 Development of sensory immersion

Sensory immersion is when the user of a virtual world feels as if they have a sense of presence within that environment, which can be accomplished by using various aspects of reality, such as touch, smell, visuals, sound and taste. By using these aspects of immersion, it gives the virtual world/environment a sense of realism and as a result, has the potential to trick the user’s brain into believing the environment is real. This has many applications in today’s society, such as being able to treat people with various conditions such as social anxiety or post-traumatic stress disorder (PTSD).

There are various products which can give this sense of immersion. Some come with consumer hardware such as the Oculus Rift, which includes haptic-feedback controllers to give the user a physical response and allow them to interact with the virtual environment. Most headsets on the market today offer three out of the five senses (sound, visuals and touch), while other dedicated devices, such as the product offered by thinkgeek.com known as the “VR Sensory Immersion Generator”, allow the user to experience smell, touch, sound and taste (ThinkGeek, 2018).

Download the full dissertation here

About The Author

Stuart Wilson is a security analyst who has worked for Capgemini since 2019, monitoring and researching potential threats to networks and computer systems. Before working for Capgemini, Stuart studied Computer Security with Forensics at Sheffield Hallam University for three years, where he acquired a first-class honours degree, as well as the Cellebrite Certified Officer (CCO) certification.

How To Launch 18 Simultaneous Wiping Sessions And Reach 18TB/h Overall Speed With Atola TaskForce


Thanks to its ability to perform 18 simultaneous imaging sessions, TaskForce is the most capable evidence acquisition unit on the forensics market. Atola's team of engineers has equipped the device with a server-grade motherboard and CPU, allowing TaskForce to multitask at the unprecedented speed of 18 TB/h or even more.

TaskForce has a user-friendly, task-oriented interface, enabling a user to launch each operation in a couple of clicks, control it remotely, and engage other operators in tracking each task from various devices, simply by entering the IP address in the Chrome browser on any device within the same network.  

TaskForce can simultaneously run as many as 18 sessions, wiping drives at their top native speeds when the standard wiping method is chosen. Thus TaskForce turns into a super-powerful forensic wiping station and saves a lot of time when preparing target drives.

TaskForce has 18 ports:

  • 6 SATA 
  • 6 SATA/SAS
  • 4 USB
  • 1 IDE
  • 1 extension slot for Atola Thunderbolt, Apple PCIe SSD, and M.2 NVMe/PCIe/SATA SSD extension modules

Wiping Sessions

To start a wiping session, connect the drives to the TaskForce unit.

Turn the Source switch on each port to Target mode (the light indicator will be turned off).

Click the Wipe icon in the left-side taskbar:

Go to the Select devices panel to choose the drive to be erased:

Adjust the wiping settings:

  • the range of sectors to be wiped;
  • the wiping method;
  • the pattern and its format (HEX/ASCII).

To launch wiping, click Start.

Please remember that wiping is launched individually for each drive. To start multiple wiping sessions, repeat this procedure for each drive you plan to wipe.

You can easily track the progress on the home screen, either directly or remotely, by entering the IP address in the Google Chrome browser on another device within the network. The IP address is displayed on the small screen on the unit's front panel.

To check overall wiping speed, click the Atola logo in the center of the top bar.

As a result, you can run as many as 18 wiping sessions simultaneously and achieve 18 TB/h or more. TaskForce's high-speed wiping allows forensic experts to prepare multiple target drives while spending a minimum amount of time with the devices.

Please note that wiping can take longer if another wiping method is selected: the NIST 800-88 method implies wiping plus re-reading of the wiped range, while DoD 5220.22-M wipes the same range three times.
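For illustration only, the difference between those methods can be sketched at the file level. Note that this is not how TaskForce works: production wipers operate on the raw device, and file-level overwrites do not reliably sanitize modern SSDs.

```python
# Illustrative multi-pass overwrite in the spirit of DoD 5220.22-M
# (three passes: zeros, ones, random), followed by a re-read of the
# wiped range, as NIST 800-88 verification does. File-level only:
# real wiping must address the raw device, not a file.
import os

def wipe_file(path: str, chunk: int = 1 << 20) -> None:
    size = os.path.getsize(path)
    for pattern in (b"\x00", b"\xFF", None):     # None = random pass
        with open(path, "r+b") as f:
            remaining = size
            while remaining:
                n = min(chunk, remaining)
                f.write(os.urandom(n) if pattern is None else pattern * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())                 # force each pass to disk
    with open(path, "rb") as f:                  # verification re-read
        assert len(f.read()) == size
```

The extra passes and the verification re-read are why these methods take longer than a single-pass standard wipe.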

Atola TaskForce creates detailed reports and logs in order to ensure maximum transparency and effectiveness of each operation.   

To check the report, press the Reports button in the top bar to find it in the list, or use the Search bar at the top of the page to find a particular report.

Atola TaskForce is Atola's new forensic evidence acquisition tool, capable of working with both good and damaged media and achieving top imaging speeds on healthy drives. Find out more and purchase your unit at atola.com.

Employee Turnover And Computer Forensic Analysis Best Practices


by Larry Lieb

Organizations have historically struggled to address terminated employees' key evidence sources, such as company-issued laptops, often materially affecting their ability to deal effectively with disputes that arise after an employee leaves the company.

This article will provide a documented, transparent, and repeatable process, with actual tools, to identify and correctly preserve key evidence. There is also a SlideShare which runs through some of the best practices along with case studies, and at the end of the article you can find some highly useful sample record-request handouts.

1. How To Create A Legal Hold External USB Drive To Hold Forensic Images

Preparation of the “Target” Drive to hold the forensic image

Before beginning the forensic imaging process, please prepare a BitLocker-encrypted external USB drive to hold the forensic image. This external USB drive will be known as the "Target" drive, to which the forensic image of a workstation's internal hard drive will be written.

Applying BitLocker Encryption to the Target drive

Once the Target drive is plugged into your workstation, open Windows Explorer and navigate to the Target drive.

Right click on the Target drive, and then left click on “Turn on BitLocker”.

When the below BitLocker menu opens, check the box “Use a password to unlock the drive” 

Enter the same password into the “Enter your password” and “Reenter your password” boxes.  

A good password convention is YYYYMMDD_[COMPANY NAME]; for example, 20181119_[COMPANY NAME].
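As a hypothetical illustration (the company name below is a placeholder), this date-based convention can be generated programmatically:

```python
# Generate a password following the YYYYMMDD_[COMPANY NAME] convention
# described above. "ACME" is a placeholder company name.
from datetime import date
from typing import Optional

def legal_hold_password(company: str, on: Optional[date] = None) -> str:
    d = on or date.today()
    return f"{d:%Y%m%d}_{company}"

print(legal_hold_password("ACME", date(2018, 11, 19)))  # 20181119_ACME
```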

Once you have entered and reentered the password, click “Next”

Save the BitLocker Recovery Key to a file by choosing “Save to a File”

Save the BitLocker Recovery Key file to a folder named "BITLOCKER RECOVERY KEY" on your computer desktop. This BitLocker Recovery Key file will be required to unlock the target drive in the event that the BitLocker password is forgotten.

Select “Encrypt used disk space only” (faster and best for new PCs and drives).

Choose “Compatible mode” (best for drives that can be moved from this device) and click “Next”.

Click “Start encrypting.”

The BitLocker encryption process should take less than one minute to complete.  Once encryption is complete, the following message will appear:

When looking at the “Target” drive in Windows Explorer, one will now see a silver padlock next to the drive indicating that the drive has been successfully BitLocker encrypted:

2. Creating A Forensic Image

In order to create a forensic image of employee workstations, we will use AccessData's FTK Imager forensic imaging tool, which is commonly used by US and international law enforcement professionals.

FTK Imager may be downloaded from AccessData's website.

After you have downloaded FTK Imager Lite Version 3.1.1, please copy the entire software folder to the target drive so that the folder holding the FTK Imager Lite software is at the root of the target drive:

Once you have launched FTK Imager Lite, click on the "File" choice at the top left of the screen, which will bring up a drop-down menu with the below options.  Click on "Create Disk Image…".

When the next menu pops up, click on the fourth choice from the top, “Logical Drive”.

In the “Source Drive Selection” drop-down menu, please select “C:\ – [NTFS]” and then click on the “Finish” button.

In the “Create Image” menu, please make sure that the “Verify images after they are created”, “Precalculate progress statistics” and “Create directory listings of all files in the image after they are created” boxes are checked.  Then, please click on the “Add…” button.

In the “Select Image Type” menu, please select the “E01” option seen below.  Then click on the “Next>” button.

  1. In the “Evidence Item Information” menu, enter the following information:
    1. “Case Number:”  Enter a Case Number, if available, which will be provided by the company’s Counsel.  If there is no case or related lawsuit, please enter the same information found in the below “Evidence Number:” field in the “Case Number:” field.
    2. “Evidence Number:”  Enter a unique number based upon the type of device being imaged.  For example, the first laptop of five laptops being imaged for a specific matter would be “PUSL42125-JDOE”.  NOTE:  “PUSL42125” is the company’s internal workstation tracking number and “JDOE” is the first initial and last name of the employee to whom the workstation was assigned.
    3. “Unique Description:”  Enter the Make, Model and Serial Number of the workstation being forensically imaged here.
    4. “Examiner:”  Enter your first and last name here.
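The evidence-number scheme described above can be captured in a small hypothetical helper:

```python
# Build the evidence number described above: the company's internal
# workstation tracking number, a dash, then the employee's first
# initial and last name.
def evidence_number(tracking_no: str, first_name: str, last_name: str) -> str:
    return f"{tracking_no.upper()}-{first_name[:1].upper()}{last_name.upper()}"

print(evidence_number("PUSL42125", "John", "Doe"))  # PUSL42125-JDOE
```

A consistent, scripted naming convention keeps evidence numbers, folder names, and image filenames in sync across many acquisitions.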

Click “Next>”

On the “Select Image Destination” menu, click on “Browse” at the top of the “Select Image Destination” tab in order to select the folder location that the forensic image file will be saved to.

Select the BitLocker encrypted drive, in the below example “[COMPANY NAME] (G:)”.  Click on the “Make New Folder” button and create a new folder to hold the forensic image file.  In the below example, the forensic image file will be saved to the “PUSL42125-JDOE” folder.

In the “Select Image Destination” menu, in the “Image Filename (Excluding Extension)” box in the second row down from the top, type in the company’s workstation tracking number, then a dash, then the first initial and last name of the employee who used the workstation. In the below example we see “PUSL42125-JDOE”.  This will allow for easy identification of the forensic image.

Change the “Image Fragmentation size (MB)” value to “0”.

Change the “Compression” value to “9”.  

Do not check the box called “Use AD Encryption”.

Click “Finish”.

In the “Create Image” menu, click on the “Start” button to begin the forensic imaging process.  In the screenshot below, one can see that a forensic image file named “PUSL42125-JDOE.E01” will be created on the “G:\” drive in a folder named “PUSL42125-JDOE”. 

Once the forensic image has been successfully created, a window will appear called “Drive/Image Verify Results” as seen in the example below.  

If the “Verify result” value equals “Match”, then a successful bit-for-bit forensic image has been created of the workstation internal hard drive.

All FTK Imager open windows may be closed at this point as the forensic imaging process is successful and complete!

  • “MD5 Hash” – This is a unique value calculated using a standard mathematical algorithm, and is a court-accepted method of determining whether a file is a true forensic copy of another file.
  • “SHA1 Hash” – This is another, more complex unique value calculated using a standard mathematical algorithm, and is likewise a court-accepted method of determining whether a file is a true forensic copy of another file.
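The same digests FTK Imager compares during verification can be recomputed independently with standard tooling. The sketch below computes both hashes in a single streamed pass over a file:

```python
# Compute MD5 and SHA1 digests of a file in one streamed pass, so even
# very large forensic images can be hashed without loading them into RAM.
import hashlib

def file_hashes(path: str, chunk: int = 1 << 20) -> tuple[str, str]:
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            md5.update(block)
            sha1.update(block)
    return md5.hexdigest(), sha1.hexdigest()
```

If hashes computed at acquisition time and at analysis time match, the image has not changed in the interim, which is the basis of the "Verify result: Match" check.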

Please see below for a SlideShare about employee turnover, originally presented as part of a class on best practices.

You can also download resources via the links below:

About The Author

Larry Lieb is a nationally known subject matter expert in the field of computer forensics and electronic discovery.  Larry has testified in both Federal and State courts on the subjects of computer and smartphone forensics.  Larry’s practice focuses on maximizing the limited dollars his clients have available for substantive legal work whilst minimizing wasted expense.

How To Perform Remote Acquisition Of Digital Devices With Belkasoft Evidence Center


Remote acquisition of digital devices is a useful option for modern-day organizations, both commercial and government. The main reasons for this are as follows: 

  • As entities grow, their IT environments tend to become more complex, distributed, and dispersed. 
  • Cost-efficiency may not allow organizations to hire trained IT security employees for all the locations. 
  • Ongoing business processes should not be interrupted; correspondingly, devices cannot be stopped and taken away. 
  • Sensitivity concerns make it preferable to acquire images in a confidential manner, for both investigations and monitoring.  

This is why remote acquisition is a must-have nowadays: it reduces costs, increases transparency, and does not interfere with your workplace climate when it is not needed.    

General Outline

Acquiring a remote device image with Belkasoft Evidence Center (BEC) is straightforward. The process looks like this:  

  • First, you need to deploy an agent to a remote computer. BEC provides you with two deployment options: remote and local. 
  • Second, you can acquire an image of the PC. In addition, you can collect data from RAM and mobile devices connected to the PC.
  • Third, you can schedule an image to be uploaded to the central storage of your choice at a specified time.

How To

Click on the ‘View’ main menu item.

Then click on ‘Remote acquisition’. The following screen will be shown:

BEC’s Remote Acquisition Window

If you have not deployed the agent, do so using the ‘Deploy agent’ button.

Once you have clicked ‘Deploy agent’, there will be two kinds of agent deployment with Belkasoft Evidence Center: 

Two Agent Deployment Options: Remote and Local

You may opt for ‘Remote deployment’ if you have a Windows domain that includes both your main computer and the remote one. In this case, you need to select a network folder and click the upper ‘Generate’ button. BEC will create a script which can be run to deploy your agent.

Otherwise, if you have no Windows domain available, your option is ‘Local deployment’. In this case, you need to choose a folder on your own computer and click the second ‘Generate’ button. As a result, a set of files will be generated which should be transferred to the computer of interest via a network folder, a thumb drive, etc. After that, the agent executable file should be run on that computer.

After you complete the previous stage, you can launch the process of acquisition by clicking on ‘Acquire’ and selecting one of the available agents from the list on the right.

Upon clicking on the ‘Acquire’ button, a list of connected remote computer names is shown along with their IP addresses. Select any to start acquisition.

Images of hard drives, mobile devices, and RAM can be acquired:

Sources for Acquisition

Please remember that hard drives and RAM can be acquired unattended, while mobile devices need to be connected to a computer; it is also necessary to unlock them and follow the standard route such as ‘trust this device’, ‘enable developer mode’, etc.  

Let us assume that you would like to acquire a hard drive image. Once you have clicked on the ‘Drive’ button, you will see the following screen with a range of options: 

Drives to Acquire

  • ‘Source drive’. Here you can choose a physical drive or a logical one (meaning, of course, the remote drives connected to the computer of interest).
  • ‘Destination’. You can select a location for the acquired image on either the remote computer or your local one.
  • ‘File format’, ‘Checksum’ and ‘Split output’ work the same as for a local drive acquisition.

You can schedule your image for uploading. We recommend scheduling it for nighttime, especially if you would like to upload several images at once or one large image; otherwise your (and your colleagues') connection quality may degrade.

Conclusion

Belkasoft Evidence Center makes it easier and more cost-effective to acquire digital images of remote devices. An agent must be deployed on the remote machine. Hard drives, removable drives, and the memory of a running Windows machine can be acquired, and remote acquisition of mobile devices is supported as a unique feature. Uploaded images can then be analyzed with BEC on a central machine.

The acquisition process is short and transparent. Using BEC, organizations with different structures, varying resources, and dispersed locations can get access to the data they need. 

If you would like to try remote acquisition, download BEC at belkasoft.com/get. The trial allows you to deploy one agent with full features.

Career Paths In Digital Forensics: Practical Applications


by Christa Miller, Forensic Focus 

Whether you’re a college or university student trying to plot out your career, an experienced worker figuring out next steps, or a mentor seeking to help either one of them, you may be seeking to answer the question: what can I do in digital forensics?

The digital forensics profession has grown by leaps and bounds over the past three decades, and cybercrime’s proliferation means there’s no shortage of work. In the last ten years alone, mobile devices and the cloud both became evidence storage sources rivaling hard drives. The Internet of Things and artificial intelligence went from obscurity to widely deployed reality. 

Looking back at our recap of the Techno Security and Digital Investigations Conference, we can see the number of options reflected in the presentations — and then some. As Doug Brush and Nathan Mousselli pointed out in their panel discussion on certifications at Techno Security, career paths are opening up today that didn’t even exist five or ten years ago.

Meanwhile, PricewaterhouseCoopers recently listed “the essential eight” emerging technologies shaping the next generation of businesses: artificial intelligence, augmented reality, virtual reality, blockchain, drones, the Internet of Things, robotics, and 3D printing.

Of course, most of us are familiar with all of these technologies, and some are even deployed to automate, streamline, or be a subject of digital forensic investigations. But the PwC report takes them further, describing five emerging themes in which the “essential eight” are “coming together to create the next wave of innovation”: embodied AI, intelligent automation, automating trust, conversational interfaces, and extended reality.

So, while you may frequently see the same device models, the same apps, and the same low-hanging fruit from investigation to investigation, good research skills cover those rare unexpected devices and investigative curveballs.

Forensic examiners who might end up giving courtroom testimony have to be able to demonstrate they know (1) how a given device stores data and (2) how the tools they’re using help them to extract and analyze that data. Training counts for a lot of this, but so does the ability to test devices and apps to see how they store, manage, transmit, protect, and delete data. That’s especially true for emerging technology when training hasn’t quite caught up.

So where should you look today, and where might you look in the future? Several current career paths might lay the right foundation for wherever you end up.

Criminal Investigation

This field is perhaps most impacted by the broad social/cultural shift away from computers to mobile devices. As “smart” technology becomes more accessible in homes, vehicles, and the workplace, a similar shift from mobile to Internet of Things device forensics appears to be the next trend.

In these cases, you’ll be responsible for finding artifacts associated with messaging, web browsing history, photos and video, data storage, and more from a wide variety of apps. You’ll have to be able to put suspects behind keyboards (or devices), understanding that the obvious answer isn’t always the actual one.

Your skills will be put to use at multiple stages of an investigation, from quick evidence extraction to help build leads during an interview, to deeper analysis that helps build timelines and cases. The contact and geolocation data you collect might be useful intelligence for long-term operations. Memory forensics can be important, too — passwords are just one example of the critical data you can find there.

One significant subset of criminal digital forensics investigations is internet crimes against children. Often described as pursuing “the worst of the worst,” this career offers high payoff when you’re able to rescue child victims from predators. With AI-based software helping to reduce some of the immediate secondary trauma of viewing child sexual abuse, and stigma lessening around mental health intervention, this career path could be the ultimate way to serve your community.

The downside of all criminal investigations is that caseloads, and the metrics associated with rapidly clearing them, make it difficult to do much research. If you like research but you don’t necessarily want to make it a hobby you do on your downtime, you may want to look into a career that gives you the time you want.

Academia and Training Instruction

The medical profession’s “see one, do one, teach one” saying applies to digital forensics, too. If, in teaching a method to someone else, you discover you have a knack for instruction, academia might be a good place to burnish your skills.

Focusing on research enables you to be at the forefront of emergent technologies. You may have the chance to develop your own forensic tools and contribute to a broader body of research.

Academia isn’t without its troubles. In the United States, for example, adjunct professors rarely earn a good living and lack the security of tenure. Even on a tenure track, it can take years to actually attain tenure. The environment can be political and competitive, with a poor work/life balance, and many graduate students, Chris Woolston wrote for Nature, report mental health problems.

Still, Woolston added, academia gives you the opportunity to connect with and support other researchers while adding to the body of research that’s so critically needed.

If making a good living is as important to you as research, though, then you may want to look into working for a vendor. A variety of roles include training others — another good way to hone your teaching skills, while researching to stay on top of the latest technology — or being an “evangelist,” taking your original research on the road and online to present at conferences or in webinars.

Corporate Investigations

Another possibility is to perform investigations in the corporate world. This is a growing area because of the need for companies to prevent and mitigate both external and internal threats.

External threats get the most media attention, but it’s really internal threats that pose the greatest risk to companies. Misuse of corporate resources can result in thousands of hours of lost productivity and leave companies vulnerable.

For example, employees who bring work-issued devices home may stream illicit material through a torrent service, download unapproved apps — flouting policy deliberately, inadvertently, or maliciously — exfiltrate sensitive data, or introduce malware onto their systems, among other activities. Any of these can occur in conjunction with theft of intellectual property or trade secrets.

You may deal mainly with computers, tablets, and the cloud rather than smartphones. As such, you’ll need a good command of operating systems, the Windows registry, Windows and/or Mac file systems, major cloud platforms like G Suite and Microsoft Office 365®, and major tablet platforms. Email and browser analysis can also be important.

How much “forensics” is involved, though, depends on the case’s severity and the steps an employee took to cover their tracks. Human resources and legal teams frequently drive these investigations, and likely will have particular requirements.

As an ObserveIT blog discussed last year, detection, determining intent, internal resources, and lack of evidence all present challenges to these investigations. Potentially being limited in your investigations could be frustrating, but then again, pay and benefits are typically higher than in the public sector.

Although minor data breaches might simply involve identifying, isolating, and remediating the suspect machine, bigger breaches involving many sensitive records, significant monetary damages, or nation-state actors might require deeper investigation across dozens or hundreds of systems.

Enter root cause analysis, a subset of corporate investigations that can help stop ongoing attacks and prevent future ones. Larger breaches often involve threats that have proliferated across systems as attackers gain a foothold and then move laterally across a network, seeking access to the “crown jewels” of intellectual property — or aiming to disrupt network operations entirely.

Here again, operating system, registry, and file system artifacts are crucial, as are memory and cloud forensics skills. Network forensics — the examination of how data and users move between systems — demands packet capture and analysis, logfile analysis, and more.
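At its simplest, the logfile analysis mentioned above means scanning event records for patterns worth investigating, such as repeated failed logins from a single source. A minimal sketch, using an invented log format and example IP addresses purely for illustration:

```python
import re
from collections import Counter

# Hypothetical auth-log lines; real formats vary widely by system and tool.
LOG_LINES = [
    "2023-01-05 09:14:02 sshd: Failed password for root from 203.0.113.7",
    "2023-01-05 09:14:05 sshd: Failed password for root from 203.0.113.7",
    "2023-01-05 09:15:11 sshd: Accepted password for alice from 198.51.100.4",
    "2023-01-05 09:16:40 sshd: Failed password for admin from 203.0.113.7",
]

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

def failed_logins_by_source(lines):
    """Count failed login attempts per source IP address."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(2)] += 1
    return counts

# The noisiest source IP is a candidate lead for deeper analysis.
print(failed_logins_by_source(LOG_LINES).most_common(1))
# → [('203.0.113.7', 3)]
```

Real investigations, of course, involve far messier data and purpose-built tools, but the underlying task — reducing thousands of events to a handful of leads — is the same.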

If you have a knack for programming and you’d rather zero in on the infection itself, going even deeper to figure out what makes attacks so effective, then malware forensics might be a great specialty for you. It involves reverse engineering, or delving deep into code to figure out how malware works, including its payload: how it exploits vulnerabilities and hides its own tracks.
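A common first triage step before full reverse engineering is extracting printable strings from a binary, in the spirit of the classic Unix `strings` utility, since embedded URLs and API names often hint at a sample’s behavior. A minimal sketch, where the sample bytes and indicator strings are invented for illustration:

```python
import re

# Made-up byte blob standing in for a malware sample.
SAMPLE = b"\x00\x01MZ\x90\x00http://malicious.example/payload\x00\xffGetProcAddress\x00"

# Runs of four or more printable ASCII characters.
PRINTABLE = re.compile(rb"[\x20-\x7e]{4,}")

def extract_strings(data: bytes):
    """Return decoded printable strings of length >= 4 found in the data."""
    return [m.group().decode("ascii") for m in PRINTABLE.finditer(data)]

# Surfaces a suspicious URL and a Windows API name worth investigating.
print(extract_strings(SAMPLE))
```

Findings like these feed directly into the deeper static and dynamic analysis the paragraph above describes.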

On a proactive level, malware forensics can be used in threat hunting, which both contributes to and makes use of threat intelligence. You can use it to identify tactics, techniques, and procedures (TTPs), the patterns within them, and potentially the threat actors who deploy them.

Private Investigations and Consulting

More than just adultery investigations have gone digital. Private investigators and consultants are called upon to investigate a variety of other crimes. These can include:

  • Consulting for trial attorneys, both for the defense and the prosecution, in preparation for trial
  • Cyberbullying and harassment in schools and workplaces
  • Missing persons, particularly those deemed by law enforcement to be runaways
  • Collision reconstruction, including pairing vehicle “black box” data with other contextual clues, such as date and time stamps from text messages to see whether a driver was texting and driving at the time of collision
  • Working closely with certified fraud examiners to determine means, motive, opportunity, and intent behind the fraudulent activities shown on the books
  • Digital video recorder (DVR) analysis that can show (or disprove) evidence of tampering with a video feed at a crime scene

Private investigators and consultants often start out in law enforcement or the private sector before striking out on their own, bringing their unique experiences to bear in serving other corporations, attorneys, or anyone in need of their skills. While they may start out offering services locally or regionally, demand can grow. Many consultants serve nationally and even internationally.

Government and Counter-Terror Investigations

Those in the military (or reserves) or federal law enforcement may have the opportunity to put digital forensics skills to use in mining electronic media for intelligence purposes. Document and media exploitation (DOMEX) is broken down into its component parts — MEDEX, or exploitation of storage media, and DOCEX, exploitation of documents found within those media.

DOMEX can involve research as well, as mission-critical devices may be encrypted or make use of apps you might never have seen before. Media in foreign languages may require translators to contextualize the information. 

You may be active duty military, or employed by a government contractor to which militaries outsource DOMEX. Either way, expect to be challenged in these roles. While operational security concerns prevent much blogging or discussion of DOMEX exploits, many experts in the community have worked in either MEDEX or DOCEX, which informs the expertise they bring to training and presentations.

The Future of Digital Forensics

As digital forensics continues to evolve into the areas the PWC report talked about — “the next wave of innovation” including embodied AI, intelligent automation, automating trust, conversational interfaces, and extended reality — it’s wise to start asking questions now. What are some of the things we can expect to see in digital forensics in the future? 

  • Writing for D/SRUPTION, Laura Cox noted, “Process automation, chatbots, advanced robotics, autonomous drive technology, and personal companions like Buddy and Jibo could all benefit from embodied intelligence.”
  • Deloitte describes intelligent automation as “the combination of artificial intelligence and automation,” including autonomous vehicle guidance and advanced robotics. 
  • Automated trust, as explained by the MIT Technology Review, combines “blockchain—the distributed ledger technology that forms the basis of the digital currency Bitcoin—with artificial intelligence (AI) and the internet of things (IoT).” This matters to supply chains, which are increasingly complex; to delivering citizen services, as described by PWC India; and to smart contracts, as described by Richard Myers for In the Mesh.
  • Conversational interfaces are perhaps most prevalent in Apple’s Siri, OK Google, Amazon Echo, Microsoft Cortana, etc. You might have seen chatbots pop up on websites you visit, offering assistance. The key, according to John Brownlee at Fast Company: “The idea here is that instead of communicating with a computer on its own inhuman terms—by clicking on icons and entering syntax-specific commands—you interact with it on yours, by just telling it what to do.”
  • Extended reality, or “immersive” technology, encompasses virtual reality (VR) and augmented reality (AR). Writing for Forbes, Joe McKendrick quoted Laurence Morvan, chief corporate social responsibility officer for Accenture, in differentiating the two: VR works for immersive learning, while AR is best for “building technical skills on the job.”

McKendrick went further, however, describing Morvan’s cautions against the misuse — theft and manipulation — of personal data in immersive experiences, fake experiences, the loss of access to critical real-time data, and the potential for technology addiction.

The questions around these technologies are no different from the questions surrounding existing technologies, of course. Where data is stored, how and where it travels, how easy it is to access and analyze, and other questions remain perennial. The biggest challenge remains the rapid advancement of technology — how the answers we don’t have today could affect the questions of tomorrow.

Looking for more about these different options? Make sure you’re subscribed to our RSS feed to get our daily posts, which link to the latest insights in a variety of fields. You can also sign up to receive our monthly newsletter, and join in with discussions on the forums. In addition, Phill Moore’s This Week in 4n6 updates the community on forensic analysis, threat hunting and intelligence, malware analysis, vendor news, and all kinds of events and presentations.
