
The Potential Importance Of Information From Password Managers


by Dr Tristan Jenkinson

There have recently been a number of articles discussing the use of common passwords and encouraging better password practices. Most guidance includes the recommendation not to use the same password for different accounts. This makes sense – it limits the risk of further exposure in the event that one set of details is compromised. To do this, however, we have to remember an increasing number of (potentially complex) passwords, which is not something that comes naturally to most of us.

One way to cope with having many passwords is to use a password manager. This is a program which you access with a “master password” and which stores all of your other passwords for you, the idea being that you only need to remember your master password.

In this series of articles we will explore:

  • how data from password managers can be important to investigations;
  • methods that can be used to investigate password managers; and
  • important considerations when looking to investigate data from password managers.

In this first article we cover the importance that data from password managers can have to an investigation.

Why Data from Password Managers Can Be Vitally Important

There are many different scenarios in which the content of password manager databases could be useful during an investigation. We discuss some of these below.

Passwords

One of the more obvious benefits of password managers is that they contain passwords. Therefore they may provide access to data that has been identified but could not previously be accessed because it was password protected.

Research shows that individuals tend to reuse passwords, or use variants on other passwords, rather than using unique passwords all the time. This means that an investigator could take an export of all of the known passwords from the password manager and create a new dictionary of passwords built from these (and their variants), to try against any known password protected data.
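To illustrate the idea in the simplest possible terms, the short Python sketch below takes a list of exported passwords and writes out a candidate dictionary containing each password plus a handful of common variants. The mutation rules are purely illustrative assumptions about typical user behaviour, not an exhaustive or authoritative set, and a real exercise would more likely use a dedicated wordlist tool with far richer rules.

    # Illustrative sketch only: build a candidate dictionary from passwords
    # exported from a password manager. The variant rules are assumptions
    # about common user habits, not an exhaustive list.
    def variants(password):
        """Yield the password itself plus a few simple mutations."""
        yield password
        yield password.lower()
        yield password.capitalize()
        yield password + "1"
        yield password + "!"
        yield password.replace("a", "@").replace("o", "0")
        for year in ("2018", "2019", "2020"):
            yield password + year

    def build_dictionary(exported_passwords, output_path):
        seen = set()
        with open(output_path, "w", encoding="utf-8") as out:
            for pw in exported_passwords:
                for candidate in variants(pw.strip()):
                    if candidate and candidate not in seen:
                        seen.add(candidate)
                        out.write(candidate + "\n")

    # Example usage with a hypothetical export of one password per line:
    # with open("manager_export.txt", encoding="utf-8") as f:
    #     build_dictionary(f, "candidate_dictionary.txt")

The resulting file can then be fed to whichever password recovery tool is being used against the protected data.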

Data Sources and New Lines of Inquiry

Password managers could contain data sources that were previously unknown. These could be diverse in nature, such as additional email addresses or different cloud storage solutions. Just knowing the additional information exists may be useful. If, in addition, these data sources can subsequently be preserved and investigated, they could be hugely useful to the investigation. 

New data sources could also give rise to entirely new lines of inquiry – for example, an email address may reveal an entirely new identity that was used for fraudulent activity. In this case, other data could then be searched for that alias to identify potential evidence of additional wrongdoing. Alternatively, this could be used to tie a specific alias or identity to an individual, since they held the password for accounts in that name. In corruption cases, it could identify web-based email accounts used to communicate with co-conspirators involved in the alleged corruption.

An important point to raise (and one that will be revisited in part three) is that although the accounts and passwords are available, this does not mean that the relevant data can automatically be collected. In the UK, accessing data such as email or cloud storage without authorisation to do so could be a breach of the Computer Misuse Act. 

Bitcoin Wallets, Cryptocurrencies, and Other Financial Services

If credentials for bitcoin wallets and other cryptocurrency assets are lost, the value of the asset may no longer be accessible. Therefore it is important to ensure that such details are saved in a secure manner – for example within a password manager. This also applies to online banking details for conventional bank accounts.

While other sources (such as internet history and installed applications) may indicate the relevant banks or cryptocurrencies in use, a password manager could provide full account details. This could be key in asset tracing cases, allowing for requests to be made for information from relevant financial institutions, or to identify and investigate the content of cryptocurrency wallets.

There may also be information with regard to other financial institutions, such as loan companies, stockbrokers, trading platforms, foreign exchange and wire transfer services. This information could be important in asset tracing or anti-money laundering cases, where information about the movement of funds may be key.

Not Just Passwords

Many password management tools provide additional storage for “important documents”. This could hold, for example, scans of passports and other key information, in case they were urgently needed and the original documents were not available. 

Such “important documents” may also hold significant value for investigators. Passport information could be helpful to link an alias to an individual, or to connect the content of the password manager to a specific individual. 

Other “important documents” could include property deeds, purchase agreements, shareholdings or important contracts or valuations. These may be of interest in cases where the value of goods under control may be of importance. Alternatively, information on assets could inform an enforcement strategy or an approach to freezing orders to ensure that assets cannot be disposed of or dissipated.

Coming Up

In part two, we will discuss some of the ways in which password managers can be investigated and then in part three, we will look at some issues to bear in mind during such investigations.

About The Author

Dr Tristan Jenkinson is a Director in the eDiscovery Consulting team. He is an expert witness with over twelve years of experience in the digital forensics and electronic disclosure field and has been appointed as an expert directly by parties, as well as being appointed as a single joint expert. Tristan advises clients with regard to forensic data collections, digital forensic investigations and issues related to electronic discovery.


Digital Forensics For National Security Symposium – Alexandria, VA, December 10th-11th


On the 10th and 11th of December 2019, the inaugural Digital Forensics For National Security Symposium will take place in Alexandria, VA, USA. Below is an overview of the subjects and speakers that will be featured at the event.

Tuesday December 10th

Registration will be open from 8:00-8:45am, after which Retired Special Agent Jim Christy will present some opening remarks and welcome everyone to the event.

The initial session will be run by Dr. David B. Muhlhausen from the National Institute of Justice, who will describe some of the initiatives the NIJ are currently using to enhance digital forensic evidence acquisition and analysis. This will include a discussion of open source tools that can be used by law enforcement officers, as well as an update on various collaborations which are currently helping to improve the status of digital forensic investigations.

Jude Sunderbruch will then talk about how digital forensic methods are used in conflict situations, including how serious offences such as major hacking and cyber terrorism can be investigated more effectively.

Following a break in which attendees will be able to visit sponsors and partners in the exhibit hall, John Pettus from the FBI will talk about the importance of using digital forensic techniques and evidence to protect the FBI network from cyber attacks and breaches. It is important to view digital forensics as a discipline that can provide information that can be used to prevent breaches, rather than something that is only useful after the fact; Pettus’ talk will spend some time focusing on how this can happen.

Dr. Cliff Wang from the Army Research Office will then take to the stage to show how advanced computer frameworks can be used to help the Warfighter to outmanoeuvre cyber attacks, and how to mislead and ultimately defeat adversaries in a conflict situation.

Following a lunch break, Lam Nguyen from the Department of Defense will discuss DoD strategy, with particular attention to counterintelligence and counter-terror efforts. The talk will also focus on how to test and validate digital forensic tools for reliability, performance, and reproducibility, underlining the importance of standardisation in today’s world.

Working across all levels of law enforcement is not always easy, particularly when there are gatekeeping measures in place to ensure that information is not shared beyond its authorised limits. However, sharing information about criminal activity, particularly when it comes to cyber threats, can be very important. Matt LaVigna from the National Cyber Forensics & Training Alliance will be speaking on this topic at 2:15pm, discussing some emerging threats and outlining ideas for how law enforcement can collaborate with SMEs to ensure that investigations are as up-to-date and well-informed as possible.

Yong Guan, a Professor at Iowa State University and Cyber Forensics Coordinator at NIST, will then demonstrate a mobile app forensic evidence project for law enforcement practitioners. Following a break for refreshments, Major David B. Bain from the Marine Corps will show how updating the Expeditionary Forensics Exploitation Capability will help teams to collect, analyse and store data more effectively.

The final session of the day will see Dr. Kathryn Siegfried-Spellar from Purdue University demonstrating some toolkits for network forensics and child protection investigations, including chat analysis software that helps investigators to identify child sex offenders.

Wednesday December 11th

Following an opening address from Jim Christy, day two of the conference will begin with a talk by SA Laukik Suthar from NCIS, who will show how digital forensic capabilities can be used in counter-terror investigations and how to identify cyber threats in the Naval domain.

Colonel Zane Jones from the Defense Forensics and Biometric Agency will then demonstrate some current CID initiatives that aid investigators in the analysis of digital devices, as well as looking at new initiatives that might be brought into play in the future. He will also show how soldiers and analysts are currently being trained to take advantage of digital forensic tools, and how this can help them to better understand the cyber battlefield and how it relates to traditional battlefield operations.

A representative from BlackBag Technologies will then discuss how some of their tools are being used by law enforcement, and their potential applications for national security and defense.

At 11:10am there will be a panel discussion, moderated by Linda Grody from the FBI, discussing how to facilitate the analysis and preservation of digital evidence in support of digital forensic investigations, with particular attention to the provision of forensic services for law enforcement agencies at all levels. Child protection, counter-terror, violent crime and national security will be among the topics covered by the panel.

Following a networking lunch, Barbara Guttman from NIST will discuss how investigators can ensure that their digital forensic tools are reliable, and talk about how we might develop tools to test computer forensic software, including the evergreen question of finding appropriate test sets.

The final session of the second day will see Dr. Daniel Gonzales from RAND Corp talking about recent updates in cloud-based digital forensics, including showing how some open-source processing applications can help to reduce the time taken on investigation and analysis.

There will also be networking events taking place throughout the conference, which will be advertised during the conference and in the program. Find out more and register to attend here.

How To Digital Forensic Boot Scan A Mac With APFS


by Rich Frawley 

In this short 3-minute video, ADF’s digital forensic specialist Rich Frawley shows how to boot a MacBook Air (APFS, non-encrypted) with Digital Evidence Investigator.

The ADF digital forensic team is hard at work putting the finishing touches on the complete package.

In the meantime, if FileVault is not an issue, ADF software can boot scan and collect the information investigators need to further an investigation or make a case. It is as simple as pressing and holding the Option key while powering on the Mac. This gives you access to the Startup Manager, which will allow you to execute the ADF software. This is also true for Macs prior to the implementation of APFS: ADF will be able to boot your Mac and get you the relevant information for your case.

Apple T2 Security Chip

But what about the new T2 Security Chip? One of the features of the T2 Security Chip is the ability to use Secure Boot to make sure that only a legitimate, trusted operating system loads at startup. That’s good news since ADF utilizes a legitimate, trusted operating system.

Another feature is the ability to exclude booting from an external device, and this would be important to get an APFS Mac to boot to that trusted operating system. If booting from an external device is not available in the Startup Manager, then by accessing the Startup Security Utility (Authentication Required) the settings can be changed to allow booting. Once this has been accomplished you can now use ADF to boot and conduct a scan of the computer.

With ADF software, you can conduct digital investigations of a suspect Mac in the lab or on-scene – easier, faster and smarter – to:

  • Quickly identify incriminating files and artifacts
  • Easily associate files to victims or a suspect
  • Create comprehensive court-ready reports

Digital Forensic Techniques To Investigate Password Managers


by Dr Tristan Jenkinson

In part one we discussed the importance that data from password managers can have to an investigation. In part two, we look at the aspects an investigation may include from a digital forensics perspective.

How Password Managers Can Be Investigated Using Digital Forensics

Evidence of Usage of Password Management Systems

Finding evidence that a password management tool has been in use could be an important step. It could lead to a request for the relevant details to access the content, or could be used to demonstrate a previous failure to provide details under court order or as part of an agreement. Having an indication of when the manager was installed, how often it was used and when it was last used could be helpful if the usage is questioned.

There are two main types of password managers – those which are locally based and those which are cloud based. The evidence which can be used to demonstrate usage differs slightly in each case.

Evidence for the use of cloud based systems may include webpages accessed, the installation and usage of browser extensions, and potentially related downloads. There would also typically be locally cached (saved) copies of the data contained by the password manager in order to provide offline access. In particular, information about how often, and when, the password manager was accessed through such webpages could be helpful if there are claims that the password manager was not in use, or that the master password has been forgotten.

For locally based password managers, the databases will be stored locally, so their existence and the date on which they were created may be useful. There is also likely to be a specific program installed for accessing the data, so looking at the dates, times and frequency of the program being executed could prove helpful, as sketched below.
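As a minimal sketch of this step – assuming, purely for illustration, a KeePass-style deployment where database files carry extensions such as .kdbx – an investigator working on a mounted forensic image might enumerate candidate databases and their file system timestamps along the following lines. The extensions and mount path are placeholders, and timestamp interpretation always depends on the file system and how the image was mounted.

    # Minimal sketch: locate candidate local password manager databases on a
    # mounted image and record basic file system timestamps. The extensions
    # listed are illustrative assumptions (e.g. .kdbx for KeePass).
    import os
    from datetime import datetime, timezone

    CANDIDATE_EXTENSIONS = (".kdbx", ".kdb", ".opvault", ".agilekeychain")

    def find_password_databases(mount_point):
        for root, dirs, files in os.walk(mount_point):
            # Some products store vaults as bundles (directories), so check both.
            for name in files + dirs:
                if name.lower().endswith(CANDIDATE_EXTENSIONS):
                    path = os.path.join(root, name)
                    stat = os.stat(path)
                    yield {
                        "path": path,
                        "size": stat.st_size,
                        "modified": datetime.fromtimestamp(stat.st_mtime, timezone.utc),
                        "created_or_changed": datetime.fromtimestamp(stat.st_ctime, timezone.utc),
                    }

    # for hit in find_password_databases("/mnt/evidence"):
    #     print(hit)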

The Master Password

Without the master password, it will likely not be possible to view the content of the password manager. There are methods that can be attempted to locate the master password if it has not been provided. 

For example, check web browsers for saved passwords. They may contain an entry specifically for the password manager, or may contain passwords for other accounts or websites that may be the same or similar to the password required for the password manager.
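As one concrete (and hedged) example, Chromium-based browsers keep saved credentials in a SQLite database usually named “Login Data” within the browser profile. The stored password values are encrypted (with DPAPI on Windows or the system keychain on macOS), but the site and username fields alone can show whether an entry for a password manager exists. A minimal sketch, assuming a copy of that database has already been preserved from the image:

    # Minimal sketch: list sites and usernames from a preserved copy of a
    # Chromium "Login Data" database. Password values are encrypted and are
    # not decrypted here; the aim is simply to spot relevant entries.
    import sqlite3

    def list_saved_logins(login_data_copy):
        # Always work on a copy, never on the original evidence file.
        conn = sqlite3.connect(login_data_copy)
        try:
            return conn.execute(
                "SELECT origin_url, username_value FROM logins"
            ).fetchall()
        finally:
            conn.close()

    # for url, user in list_saved_logins("Login Data - Copy"):
    #     print(url, user)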

Exporting Options and Historic Information

Once access is gained, there is a question over how data should be exported. As discussed above, there could be a vast amount of data and an investigator should ensure that all information that they require is exported.

Password manager systems often provide export functions. These typically export password information into a spreadsheet-like file. Care should be taken, though. Some password managers store historic passwords, and the dates on which they were changed – for example, to make users aware that they are reusing a previously used password. This information may not be included in such an export, so this should be considered by the investigator.
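Where the tool in question is KeePass and the master password has been provided, one way to capture history that a flat export might miss is to read the database programmatically. The sketch below uses the third-party pykeepass library and assumes a .kdbx database; the file name and master password shown are placeholders.

    # Sketch assuming a KeePass (.kdbx) database and a known master password.
    # Requires the third-party pykeepass library (pip install pykeepass).
    from pykeepass import PyKeePass

    def dump_entries_with_history(db_path, master_password):
        kp = PyKeePass(db_path, password=master_password)
        for entry in kp.entries:
            print(entry.group.path, entry.title, entry.username)
            # Historic versions of the entry, which a simple export may omit.
            for old in entry.history:
                print("  history:", old.title, old.username, old.mtime)

    # dump_entries_with_history("vault.kdbx", "master-password-here")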

Sections such as secure data storage should also be checked to make sure that information has been exported, and where possible relevant metadata has been preserved. As with any forensic collection exercise, the investigator should take detailed contemporaneous notes.

Live Imaging and Access

If, at the collection/preservation stage, the machine is being imaged live (i.e. while powered on and logged in) then the investigator should consider if the password manager is logged into. If so, the investigator may then need to consider if the content can (and should) be exported. As discussed above, this may require specific considerations with regard to data privacy and data access.

Local Storage and Investigating Purge or Removal of Passwords

As has been noted, both cloud based and local password managers will typically store copies of the database containing the contents of the password manager on the local machine. 

This means that if an image of the machine is taken, and the master password is provided at a later date, in most cases it will be possible to retrieve the content of the password manager at the time of imaging.

This could be of particular interest if there is concern that the content of the password manager has been purged or deleted between the time of imaging and the time that the password is provided.
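One simple way to test for this, sketched below, is to compare the set of entries exported from the database captured at imaging time with the set exported later, once the master password is provided. The sketch assumes both exports are CSV files with “title” and “username” columns; these are placeholder column names that will differ between products.

    # Sketch: flag entries present in the export taken from the imaged copy
    # but missing from a later export, suggesting possible purging.
    # Assumes CSV exports with "title" and "username" columns (placeholders).
    import csv

    def load_entries(csv_path):
        with open(csv_path, newline="", encoding="utf-8") as f:
            return {(row["title"], row["username"]) for row in csv.DictReader(f)}

    def removed_entries(imaged_csv, later_csv):
        return load_entries(imaged_csv) - load_entries(later_csv)

    # for title, username in removed_entries("export_at_imaging.csv", "export_now.csv"):
    #     print("Missing from later export:", title, username)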

This may therefore be a consideration when having conversations about access to the password manager. For example, it may be suggested that the custodian input the master password on a live machine, thereby providing access but not providing the password itself. It is preferable to be provided with the password, so that previously captured data can then be unlocked.

Other Potential Access Points

It is also worth considering alternative access points. Some password managers allow you to share passwords with others. Some allow you to nominate an emergency contact, who can be given access in a specific scenario. These people may also have access to the content of the password manager, so it may be worth considering who such individuals may be. It may be possible to run email searches for invitations to the password manager, or for notifications from the software that is found to be in use, to identify who else may have access.
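As a rough sketch of that idea, a preserved mailbox export could be searched for messages that appear to come from a password manager service. The sender domains below are illustrative placeholders rather than a definitive list, and in practice the search terms would be tailored to whichever product has been identified.

    # Rough sketch: search a preserved mbox export for sharing invitations or
    # emergency-access notifications sent by a password manager service.
    # The domains listed are illustrative placeholders only.
    import mailbox

    CANDIDATE_DOMAINS = ("lastpass.com", "1password.com", "keepersecurity.com")

    def find_manager_notifications(mbox_path):
        for message in mailbox.mbox(mbox_path):
            sender = (message.get("From") or "").lower()
            if any(domain in sender for domain in CANDIDATE_DOMAINS):
                yield message.get("Date"), sender, message.get("Subject")

    # for date, sender, subject in find_manager_notifications("custodian.mbox"):
    #     print(date, sender, subject)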

Business Accounts and iCloud Keychain

Some businesses provide their staff access to password managers to use at work. This means that the business may have the means to access and export the content. Investigators should bear this in mind when considering data sources to be collected and investigated. It may be beneficial to work with internal staff, where appropriate, to collect the content of business based password management systems to avoid risk of the systems being purged if the individuals involved are tipped off. Care should be taken to ensure that the business has the rights to access and collect the data from the password manager, for example through the use of acceptable use policies (discussed further in part three).

Apple iPhones come with an inbuilt password manager – iCloud Keychain. This password manager is then shared across devices using iCloud. If the business provides iPhones for business use, then the iCloud Keychain system could be in use and could be a relevant location to consider – provided that there is legitimate, lawful access available. This is something that investigators should be aware of and may be a helpful source to investigate.

Don’t Forget Hardcopy

Whilst these articles have focused on the use of password manager software, investigators should also consider the use of hardcopy password books. These are in relatively common use and so are something that should be considered for collection and investigation.

As with password managers, specific wording may need to be included in acceptable use policies, agreements between parties or court orders to ensure that they can be preserved and examined where relevant within a business environment.

Coming Up

In part three, we will discuss some of the potential issues that can arise in such investigations and some areas where early consideration may help ease or avoid these issues.

About The Author

Dr Tristan Jenkinson is a Director in the eDiscovery Consulting team at Consilio. He is an expert witness with over twelve years of experience in the digital forensics and electronic disclosure field and has been appointed as an expert directly by parties, as well as being appointed as a single joint expert. Tristan advises clients with regard to forensic data collections, digital forensic investigations and issues related to electronic discovery.

How To Easily And Accurately Play CCTV And Other Proprietary Video With Amped Replay


by Blake Sawyer, Amped Software

For Law Enforcement across the world, one of the biggest hindrances to actionable evidence comes from CCTV. There are sites devoted to providing codecs, of which there are hundreds, and IT departments that spend most of their time managing the many players from each DVR manufacturer. In my old casework, I would handle close to 80-90 video requests a month, most of the time coming from proprietary CCTV video. One tool that helped me as a full-time video examiner was Amped FIVE. Through daily use, we were able to get images and video clarified and out to the officers and media, as well as keep our backlog at a minimum.

That said, there was not a lot of time to devote to tricky video cases because there were so many cases coming in. A tool that recently came out, geared towards empowering investigators and non-technical video staff, would have been a great help. Amped Replay is Amped Software’s newest solution, developed specifically for detectives, patrol officers, and other first responders to conduct a first-level analysis of their video evidence, empowering them to:

  • Convert and play videos from proprietary CCTV/DVR formats, body-worn, dashcam, mobile phones, covert video, drones, social media and more
  • Apply quick corrections to images
  • Redact and annotate images for investigations and media release
  • Produce still images and clips complete with a transparent, easy-to-understand, technical report

Replay is a great tool to get into the hands of officers so that they can easily play proprietary video in a way that is forensically sound, easy to use, and provides quick results. Below is just a quick example of how adding Replay into your department can help IT, the officer, and the video expert.

By installing Replay, the IT department no longer has to manage the hundreds of proprietary players and installs that come from each manufacturer. Replay handles over 350 proprietary video formats, including many of the big names in the industry. All the investigator needs to do is drag-and-drop a file into Replay. It really is just that simple. Replay then goes to work preparing the video for playback, pulling out the camera streams within the file, and adding them to the Play window. In my example below, I have an .exe from a robbery with 4 cameras playable in the proprietary player. Replay identifies all 4 camera videos inside the .exe and adds them to the Play window.

Figure 1: Simply drop a video file into Replay, and it will get it ready for you to watch.

Figure 2: If a video file has multiple videos, Replay breaks them out in the player, and gives you File Information about each video.

From here, you can go through each camera and use the “M” key to create bookmarks of images you may need for your investigation. In this video, Camera 0 shows the suspect enter and exit the store, Camera 1 shows the suspect arrive and leave on foot, Camera 2 shows the beer aisle, and Camera 3 shows the robbery and the suspect. Using the M key and the range selection tools, I can quickly create still images and sub-clips of these events. If I needed to, I could export these clips, send them out to patrol in a bulletin, or send them to the information office for social media.

The problem is that while the entrance camera shows good images of the suspect’s shoes, there are several issues related to the jagged lines of an analog camera, and there is a limited amount of contrast. For that, we will go to the “Enhance” tab. At “Enhance” you can quickly make minor corrections to an image, as well as crop and resize a section. Unlike the things Jack Black can do when hunting down Will Smith, “Enhance” will not be able to do anything questionable or unscientific. In fact, this section is simplified even compared to our more comprehensive Forensic Image and Video Enhancement software, so that information isn’t accidentally misinterpreted by the investigator. In this case, I had to Deinterlace the analog video, fix the aspect ratio, adjust the contrast, crop out most of the image, and enlarge the image.

Figure 3: By pressing the “M” key or hitting the “Bookmark” button, bookmarked images are easily set aside for exporting later.

Figure 4: Clarification tools are easily applied, automated to prevent errors, and reported in the final report.

Next, I need to put out the video of the suspect for social media but want to hide the victim to help protect them. This is as easy as going to the “Annotate” tab and choosing “Hide Selection”. From there, you draw a box around what you want to hide, and then follow the subject with your mouse as it moves. That’s it! The software tracks your mouse and moves the blurred section as you do.

Figure 5: By dragging the Hide Selection box, the user can follow the subject being redacted.

One last thing that is always helpful is to call attention to key pieces of evidence. In this clip, the suspect claims that the weapon is fake and has the orange ring that shows it to be a pellet gun. By using the “Magnify” annotation, we can quickly zoom that section to show it better. Just like with “Hide Selection”, you can track the object with the “Magnify” tool simply by following it with the mouse.

Figure 6: This “Magnify” tool can even be placed off the screen and the canvas will automatically adjust. Tracking works the same way as with the “Hide Selection” tool.

Once this is done, all that’s left is to get these images and clips out for release. It doesn’t help us to simply be able to watch evidence and mark it up if no one else can do that as well. This is easily done in the “Export” tab. You can choose to export your current image or all the bookmarks, and then export the original or annotated clip. All of these options will create images and clips compatible with any standard software.

Figure 7: Options such as file type and quality are automatically set to provide the most compatible versions of the file while preserving quality.

It also does something that is pretty unique, and is a reason why having a purpose-built tool is crucial for Law Enforcement. When anything is exported, it also generates a technical report, which shows the images and videos that were marked up, as well as the original file information and a File Hash. This gives the original video a bit of a digital fingerprint that helps maintain the chain of custody of the files (crucial for authentication). It also tracks and documents the steps taken in Replay so that when the case goes to court, they are easily understandable and repeatable.
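The hash itself is not tied to any one product; as a simple illustration of the chain-of-custody idea, anyone receiving an exported file can independently re-compute its hash and compare it against the value recorded in the report. A minimal Python sketch (the algorithm and file name are assumptions for illustration, since the report will state which algorithm was actually used):

    # Minimal sketch: independently re-compute a file hash and compare it to
    # the value recorded in an export report (chain-of-custody check).
    import hashlib

    def sha256_of_file(path, chunk_size=1024 * 1024):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # print(sha256_of_file("exported_clip.mp4"))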

The idea is that Amped Replay needs to be fast for everyone, not just the video expert, and easy enough to use that your IT team doesn’t have to keep coming by to install updates, drivers, or codecs. With all these features, it makes a great tool for Open Records or Investigator units. It is also a great triage tool to help reduce the backlog in your Forensic Lab, which often gets bogged down in the “I can’t get this to play” doldrums.

If you are interested in seeing Replay for yourself, reach out to us at ampedsoftware.com/contacts or tune into our webinar on December 10th.

How To Help Small Governments To Respond To Ransomware Attacks


by Christa Miller

Ransomware has captured a large share of mainstream media coverage in recent months, due in no small part to attacks that have crippled small local and county governments in the United States. One coordinated attack in particular affected 23 Texas communities in July, and a new interactive map from StateScoop shows all attacks since 2013, updating with new ones as they occur. 

In fact, Recorded Future’s Allan Liska reported in May that “State and local governments were among the first organizations to be hit with ransomware.” His research confirmed that ransomware attacks are on the rise, affecting 48 states and the District of Columbia. CBS News’ Irina Ivanova reported that of the 70 reported ransomware attacks in the first half of 2019, more than 50 targeted cities.

Research from Barracuda, meanwhile, reported that nearly half of the 55 attacked municipalities they studied had populations of fewer than 50,000 residents, while close to a quarter had fewer than 15,000 residents. “Smaller towns are often more vulnerable because they lack the technology or resources to protect against ransomware attacks,” the blog stated. 

Liska’s report additionally found that smaller governments “tend to be more targets of opportunity” than deliberate targets. “Even groups like the teams behind Ryuk and SamSam appear to stumble into these targets,” he wrote. “However, once these groups do realize they are in a state or local government target, they take advantage of the fact by targeting the most sensitive or valuable data to encrypt.”

Moreover, wrote Liska, the “outsized media coverage” devoted to ransomware’s effect on governments “likely creates a perception among attackers that these are potentially profitable targets” in spite of a lower than typical likelihood of a payout.

How ransomware infects and spreads through networks

The most common attack vectors, according to security firm Palo Alto Networks, are emails that contain malicious links or attachments, or malicious or compromised websites that users browse to. Zuly Gonzalez, co-founder and CEO of Light Point Security, says both browser vulnerabilities and user actions can result in browser-based ransomware infections.

“The important thing for folks to know is that many times simply visiting a website is enough. No need to download anything,” she says. “Even typically safe, legitimate websites, like news sites, can infect your computer. And any website that displays ads poses a risk to the user, because ads can serve malware or redirect users to malicious websites that will deliver the malware/ransomware.”

 

That doesn’t mean other vulnerabilities can’t come into play. In Allentown, Pennsylvania, according to the New York Times, a city employee took his work laptop with him, missed critical updates, and then clicked on a phishing email before returning to the office. There, the malware — not ransomware — was free to spread throughout the network. 

In Baltimore, meanwhile, “a city information technology team troubleshooting a separate communications issue with the server inadvertently changed a firewall and left a port… open for about 24 hours, and hackers who were likely running automated scans of networks looking for such vulnerabilities found it and gained access [to the city’s 911 dispatch system],” wrote Kevin Rector for the Baltimore Sun.

The impact of ransomware on small communities

Baltimore’s downed 911 system was only one example of ransomware’s impact. In Orange County, North Carolina, more than 100 computers at the library, the tax department, the planning board, the county register of deeds, and the sheriff’s department were affected. Certain transactions couldn’t be processed, and deputies couldn’t access criminal records — a potential officer safety issue.

In Rockland, Illinois, ransomware that hit the school district took out its phones, website, and student information systems. The district was able to use its Facebook page to communicate with the community, but the attack disrupted the first weeks of school.

Ransomware’s impact can be much broader, however. At Forbes, Chloe Demrovsky wrote: “Because governments manage sensitive information and critical infrastructure, outages could have national security implications, damage the local economy, and harm the general public more broadly.”

A considerable concern, Demrovsky continued, is data integrity of the “tremendous amount of sensitive data” about citizens that governments collect and have access to. “Cyber criminals may have tampered with the information, kept a copy for future use, or could repeat their ask in some way,” she wrote, adding:

“If that data and the accompanying data practices are made public, how will the public react and will trust in institutions crumble further? Even if it doesn’t become public information, what might the hackers do with it or who might be interested in purchasing it?”

Ransomware prevention

Conventional wisdom focuses on prevention. Indeed, the Rockland school district was looking at funding a number of measures including IT upgrades, mandated security awareness training from KnowBe4, and security software renewals.

These measures are among the ransomware defenses listed by the National Credit Union Administration:

  • Educate all staff on ransomware’s risks and how to use email and the web safely.
  • Create regular backups of critical systems and data.
  • Maintain up-to-date firewalls and anti-malware systems and protections.
  • Use web- and email-protection systems and software.
  • Limit the ability of users or IT systems to write onto servers or other systems.
  • Have a robust patch management program.
  • Remove any device suspected of being infected from your systems.

Education can be tricky when the landscape itself keeps changing. For example, says Gonzalez, “[T]here’s no way to know if a website will result in a ransomware infection.” Organizations can implement a browser isolation solution, which “completely isolat[es] all web content off of the user’s computer, thus preventing malware from ever reaching the user’s computer in the first place.”

Browser security, together with email security and good patching practices, are just a few of the ways Gonzalez says a small organization can improve its security, even with a limited budget. The key, she says, is starting small, relying on resources like the CIS Security Controls as a manageable starting point. “It isn’t necessary to implement everything all at once,” she adds.

Whether taxpayers prefer their hard-earned dollars to go towards security measures rather than complying with ransom demands is an open question. ProPublica’s analysis of the attack on Baltimore noted that the city spent $5 million in recovery costs relative to the $76,000 ransom originally demanded.

Still, sensitivity to taxpayers can mean IT systems running on what the New York Times called “motley collections of vintage software,” whose 18-month refresh cycles can make it challenging to implement good patching practices. 

The high cost of hiring cybersecurity professionals is another factor. But this, Gonzalez says, is fixable by hiring young talent. “There are a lot of young, hungry folks looking for entry-level positions,” she explains. “They’re not as experienced as a CISSP, but they are also not as expensive. There’s obviously a tradeoff there, but at least it gives [small entities] a ‘fighting chance.’”

Another way to solve the talent problem is to automate security functions through technology. Even so, says Gonzalez, many security solutions are built for large-scale enterprise environments with dedicated IT and security staff to run them. That can make them overly complicated for smaller entities running on tight budgets. When those entities believe they aren’t targets to begin with, they can tend to underspend on security.

In turn, she says, their hesitation can signal to vendors that security isn’t a priority. Compounding this: the amount a vendor needs to sell to be profitable, versus the amount of time and resources it spends on tech support when a smaller organization doesn’t have in-house IT staff.

Gonzalez acknowledges that vendors can do a better job of “strik[ing] the right balance between delivering a solution that is easy to use and works out of the box and is full-featured and flexible enough to satisfy complex networks and allow for unique enterprise customizations.” In addition, vendors can work harder to educate small organizations about the risks of both targeted and opportunistic attacks.

Are managed services a good answer?

Many small entities rely on third-party managed service providers (MSPs) to host their IT systems because they assume the MSP is creating appropriate backups, patching, and maintaining the systems. However, this can create another layer of vulnerability. Often, MSPs themselves have become targets. 

Gonzalez says relying on an MSP is still better than nothing at all. However, she cautions that broad IT expertise isn’t the same as security or incident response expertise. Few MSPs are willing to offer these specialized services, and frequently, their clients don’t understand the difference. In trusting MSPs to prevent incidents, their clients may not recognize the need for consistent security and incident response services.

Gonzalez thus advises government and small-business leadership to ask “lots of questions” about how long an MSP has handled security, if it has similarly sized clients, the steps they take to safeguard organizations — including their own — and their response plans and scope. Sometimes online reviews might be available.

MSPs may be unable to provide adequate answers. “But from a positive side,” Gonzalez says, “[this] gives DFIR pros an opportunity if they can find the right MSPs to partner with.” Managed security services providers (MSSPs) are another option. They are, however, still a third party, and they may not offer the broad IT support their clients need.

Gonzalez’s opinion: in-house is still best. This isn’t always possible, she acknowledges, but “being able to control your own infrastructure and data reduces your risk [of] loss due to third-party partners.”

Then again, she adds, “Having full control over your data (and limiting the number of 3rd party suppliers you work with) doesn’t do you any good if you are in over your head and leave your network exposed for hackers to get in.”

Responding to a ransomware attack

Most security professionals recognize that incidents of all kinds are a matter of “not if, but when.” In an April 2018 interview with Marketplace, Sophos’ Chester Wisniewski, a principal research scientist, stressed the need for a good disaster recovery plan. “I think organizations have largely focused too much on prevention, which is impossible to do perfectly, and not enough of their resources on being prepared for the bad thing when it happens,” he said.

Google relies on a form of the Incident Command System (ICS), well known among fire and emergency management officials for coordinating response to wildfires, chemical / biological / radiological / nuclear (CBRN) releases, and mass casualty incidents.

Like these kinds of incidents, malware containment is a priority so the infection doesn’t spread. A particularly well-handled example: in Texas, Lubbock County IT staff were credited with rapidly ending the attack by isolating the infected computer from the rest of the network.

It helped, of course, that the initial call was made by a county employee who noticed something wasn’t right, and that the IT staff had the training to know what to do. In addition, CNBC reported, Texas implemented its “Level 2 Escalated Response” — the second highest of the four levels in the state’s alert protocol.

But plenty of other small governments have been caught unawares and even potentially outgunned. Near Chicago, Lake County, Illinois IT staff had to unplug their 64 servers to perform the needed scans.

Matthew Meltzer, a security analyst with Virginia-based vendor and security firm Volexity, concurs that containment involves disconnecting all affected systems from the network. Similarly, clients of a compromised MSP or MSSP would want to disable its access and engage another provider or firm to conduct the investigation, minimizing risk associated with the provider compromise.

At the same time, though, it’s important for first responders to be mindful to preserve evidence. “[This] is a critical step in order to effectively determine the root cause and breadth of an infection,” Meltzer explains.

In fact, containment, he adds, is only truly successful when responders understand the attack vector, because it’s the only way to tell “what kinds of privileges an attacker may have started with, to what systems they may have pivoted… [and] if [the] incident is automated in nature or is being conducted by a human.”

In turn, this helps to determine the infection’s scope — how far it may have spread. Meltzer says, “This determination can quickly take the investigation down drastically different paths.”

Meltzer recommends acquiring memory as soon as possible after the infection, before rebooting the infected machine. Memory could potentially be the only source of evidence in an attack, not only because it could hold key artifacts that couldn’t otherwise be found on the disk, but also because some ransomware variants may encrypt system artifacts such as logs and other disk-based artifacts. 

Meltzer said other artifacts ideally would include log and packet data stored centrally for all inbound, outbound, and lateral traffic, as well as key disk artifacts and memory samples of the affected systems immediately after infection.

Achieving that ideal can often, however, be “wishful thinking,” Meltzer adds, “and you have to adjust your methodologies to collect and analyze whatever relevant data that is available.” Overall, he says the key takeaways for under-resourced organizations should be:

  • Quickly engage a third party who knows how to initiate a proper incident response.
  • Simultaneously begin disconnecting key/critical systems from the local network and internet.
  • Do not reboot infected systems and do not attempt to “clean” (delete) anything.

Meeting the long-term challenges

Ultimately, effectively preventing and responding to ransomware attacks may be a matter of careful relationship-building. “[R]esponders can’t change things that occur before they’re called,” says Meltzer. “[I]nstead responders need to provide very clear instructions to affected organizations early in order to maximize data preservation.”

The time to accomplish this isn’t at the outset of an attack, but rather, well in advance. Meltzer says one way to start building a relationship is through a proactive threat assessment, which has several benefits:

  • It can identify current risks and establish best practices, which can minimize the chance of future incidents occurring.
  • It’s cheaper than a full-blown incident response and any subsequent data recovery or decryption efforts.
  • It offers the opportunity for an organization to get to know an incident response firm before a breach takes place. 

Meltzer advises a thorough selection process, saying that incident response providers with a history of quality research publications and involvement in the information security community will have a leg up over competitors, including MSPs and MSSPs.

Once a firm is retained, clear and upfront service agreements, retainer policy, and other details help to establish trust between organizations. “[Defining] a standard procedure that first responders can leverage in order to efficiently collect evidence can go a long way to preserve data that the second-tier responders will analyze. It also limits the amount of evidence tampered with or destroyed,” Meltzer says.

Meltzer says one of the most valuable things an incident response firm can do is stay connected with small businesses and local governments in the communities they operate in, encouraging these entities to include cyber security incident response in their business continuity and disaster recovery plans. “It is always better to have a plan in place before something catastrophic occurs,” he says.

How To Use Quin-C’s Simple Review Widget


Hello and welcome everybody to this video about Quin-C. Today we will be talking about a widget called Simple Review.

Simple Review is a widget which has been designed for examiners whose everyday job is to run index searches or keyword searches, and to perform tagging, bookmarking, viewing, labelling and exporting of data. So if you are one of those users, Simple Review is going to be very helpful to you, and you can use it in your everyday job to make your work more efficient.

So we will take a look, first of all, at how Simple Review has been designed to run. Simple Review has been designed to run in a full-screen mode. That means that if you are a user who has been assigned the Simple Review widget, you will not see anything else but Simple Review and its interface.

In order for a user to be a Simple Review user, you have to define a separate role which basically will include a specific role setting which can be applied to all the users who you would want to act as Simple Review users.

So let’s take a look at how we can define this role for the simple users. If you go to the Admin widget and Roles, you can create a new role only designated to the Simple Review users. I have got here one role called ‘Simple Review’, and as you would know, there are two very important things to make any user work in Simple Review mode.

First of all, you can only assign them one view, and nothing else should be assigned to them. Along with that, you also have to ensure that their start view is the desktop view. When it comes to assigning the widgets to them, you can only assign them one widget – that is the Simple Review widget – and you do not have to assign them any other widget apart from this. If you do assign them any other widget, they will open Quin-C in the normal Desktop view and will not act like a Simple Review user.

You can choose to include as many widget filters as you want, and the rest of the selections depend upon the users and their administrators. So that is how you define a role for a Simple Review user.

Once you have defined a role for a Simple Review user, the next thing you have to do is assign that role to any user who you would want to act as a Simple Review user. In my case, I have made a user called Harsh, and you can see that he has been assigned the Simple Review role, and has been assigned a few cases out of all the cases which I have.

That is it. Once you have done this, a user will act like a Simple Review user. And when he or she logs in to Quin-C, they will see a totally different Quin-C.

So, let’s see how a Simple Review user works within Quin-C.

If I run Quin-C within Simple Review mode, I will simply log in as the user who has been assigned the role of Simple Review. So this is what Quin-C’s Simple Review interface looks like. It shows you the user who is logged in and the number of cases assigned to that user, and it also welcomes you with three pie charts.

The first pie chart is based on the size of the cases. So, the bigger the case, the bigger a chunk it occupies on the pie chart. The second is based on the file categories you have in your cases, so if you have more PDFs in your cases, that will occupy the bigger chunk of the pie chart. And the third one is based on the tagged objects. So whichever case has the most tagged objects, that will occupy the bigger part of the pie chart again.

If this case list goes really long, you can search for any case by typing in the search bar. Also, you can apply an advanced search to it. You can search for a case by its name; by the creator; creation date; last modification date; and so on. You can make them case-sensitive, exact match, or not searchable.

We have also included two other widgets on this page. So those two widgets are auto-tagging and tasking. Auto-tagging widget allows you to automatically tag for the search terms you are interested in, and for any number of cases which you would like to perform that tagging.

Tasking allows you to search for all your tasks which have been assigned to you. You can search for your tasks just by looking at the Tasking widget, which shows you a list of all the tasks which have been assigned to you; or any comment which has been made along with the tasks.

Once you are done with this screen and you decide to work on the case which you want to, you can select one or two cases you would like to work on. Once you have selected the two cases you would like to work on, you can simply open the cases by clicking this forward button at the top; or you can also click ‘Open’ right at the bottom of the screen.

Once you click on them, the case starts loading, and this is how the case finally ends up loading. This is a very interesting and helpful screen for the Simple Review users. They can see the list of evidence which they are working on in these two cases which they have loaded; they can see all the file categories here; they can see the file status, as you would remember this from FTK; and they can also see the labels which they have applied.

Also, at the bottom of the screen we have placed a few widgets which we believe best suit the workflow of a Simple Review user. They can simply open the widgets from here and use them as and when required.

As explained earlier, Simple Review has been designed to perform the most basic review, tagging, bookmarking and exporting features – but not just that. It also allows you to apply filters and compound filters much more easily and intuitively.

As an example, you can see that in this case – or in fact in these two cases – I have got all of these evidence items, file categories and so on, and if I am only interested in looking at documents, I can simply choose to see the documents and open them. Again, you have two options to see the documents: either you can go forward from the top left of the screen, or you can simply click ‘Open Viewer.’ Either of them works the same way.

Once you click on them, the whole screen then loads the documents from all of these evidence sets, and you will only be left with the documents in this screen. This is again an interesting screen, and a different version of the screens which you would typically see in Quin-C.

On the right of the screen you have a viewer which is always enabled in this view; in the middle you have the grid; and on the left you see all of the filters which you have applied. You can click on any of these documents to see them within the grid, and that is how you can simply see the documents based on the labels which you have applied.

If you have anything to search for, you can simply search here in the search bar. And within the search bar we have this cog wheel, which symbolises advanced search. If you are performing an advanced search, you can choose to apply all of these Boolean variables, or Boolean operators, to it. Also, you can search for features like synonyms, stemming, fuzziness, phonic; and you can also do a Regex search.

You have the ability to run a search from here. So in this case, if I was to run a search for the word ‘vampire’ I can simply search for the word ‘vampire,’ I can type it in the search box and hit Enter, and then it automatically filters down to the documents which have the word ‘vampire’ listed in them.

I can then choose to click on any of these documents, and as you can see, it’s now displaying the first document, which is in the grid, and it also highlights the word ‘vampire’ wherever it’s found there. This is one way of searching and tagging.

But another important way in which you can perform searching is this: if you go back to the previous screen, where it shows you all of your file types, evidence status, etc., here also you have the ability to search for any term. So if I was to search for the same term, ‘vampire,’ here again, it will then filter down to just the items which contain the word ‘vampire,’ and you will see these charts representing where the word ‘vampire’ was found.

So you can see that within the Belkasoft evidence set there are three places where ‘vampire’ has been found, and similarly for the other evidence sets: in Mantooth it has been found nine times, and in Demo just twice.

If I am now interested in looking at the documents which have the word ‘vampire’ in them, but on top of it I want to apply a filter of deleted files – so that means I want to search for the word ‘vampire’ but only in deleted documents – then I can apply these filters and simply move forward. Once I move forward, the evidence list here will show you the filters which you have applied, and the grid will only load the documents which are, first of all, deleted, then part of the document set, and which also have the word ‘vampire’ in them.

So you can see here how it represents this; you can simply then choose to bookmark this document, to label this document, or simply export this document. In order to do this, we have provided a really simple view within the simple viewer.

These three columns are all collapsible. You can simply collapse one by clicking on this bar here towards the left, and then the left bar just collapses. So you can see that this whole column collapses when you click on this arrow.

Again, as soon as you click on it you will realise that the viewer has expanded its capabilities, and if you just expand it a little bit more, you will see that now, here, you have an expanded viewer which on its left shows you the conversation; any document families; the near duplicates to this document; and the properties of this document.

At the same time, on the right, it allows you to apply labels or bookmarks to this document; you can use your coding panel; or you can simply download the document. So if I was to apply a bookmark to this document, I can simply choose the bookmark I wish to apply and bookmark it. It also allows me to write a comment against the bookmark. But in this case we can apply a label instead, and I am going to label this vampire document as part of, let’s say, ‘money.’

So once I click on ‘money,’ you will see that it shows you that the labelling job has begun, and then that’s it, the labelling has been applied. These are one-click labelling and bookmarking, so as soon as you click on it, it will start applying the labels and bookmarks.

Similarly, you have an option to download the document – basically, exporting the document out of the case. As soon as you hit ‘Download this document,’ it will download only the document that you have in the viewer. If you were to click on the family, it would then download the whole family of documents.

So that is how you can simply keep loading up your documents in the grid, keep applying labels, and keep moving next. If I was to click on ‘Previous’ then it will show me the previous item in the grid, and then you can do the same to that document, and then you can see any labels or bookmarks which it has applied already.

So it’s pretty easy: you can simply apply your bookmarks or labels and keep moving next from within the viewer itself. That is why it’s a really easy and simple view for the users to use.

A few other things in this view which you should notice: you can obviously turn off the filter from here, and then it loads the rest of the things within the grid. There are some other widgets which we have included within this viewer as well, and they sit right in the grid. So one of them is Social Analyzer, you’ve got your Maps, Timeline, and you’ve got Thumbnails.

So if there were lots and lots of images which were loaded in the grid, and you were interested in looking at the thumbnails, all you need to do is simply open the Thumbnail widget from here. It loads up like this – you have it on half of your screen but you still have a view of what is behind – and then you will have a view of all the thumbnails in the Thumbnail view.

If you were examining emails in the case, or any kind of communication data, you can simply launch Social Analyzer from here, and you can use Social Analyzer as you would normally use it in Quin-C. It shows you all the communication data it can extract from this case. You can simply choose to parse all the contacts from it, then, based on your own interest, you can choose anybody you are interested in, and as soon as you click on the contact of interest, it develops the communication matrix for it and visualises it on the screen for you.

You can see here how this guy has been communicating with others in this case. You can choose to expand the labels, if you like; if you don’t, you can choose not to expand them. And then you can click on any of these entities and see the emails which were exchanged between these parties.

So again, if there is any sort of timeline filter you want to apply – so that you can look at the data from a specific date and time – you can do that within Simple Review as well.

And then you can look at any data the widget identifies over the map – anything which has geo-coordinates assigned to it – and you can work with that too.

So this is, in a nutshell, what Quin-C Simple Review has been designed for. It’s designed for users to perform their searching; look at the documents; and then simply tag them, bookmark them, or export them within the case. You can also see that we have a report widget here, where you can keep generating your reports if you would like to, and based on your own reports you can take them back to examiners, to your other investigators, or to your administrators.

So this is what Quin-C Simple Review does. We will be looking forward to expanding some capabilities of Simple Review in the near future, but this is how it works as of today. Thank you for watching this video. Have a good rest of your day.

Find out more about Simple Review at AccessData.com.

Considerations When Investigating Data From Password Managers

by Dr Tristan Jenkinson

In part one we discussed the importance that data from password managers could play in an investigation. In part two we then looked at what aspects an investigation may include from a digital forensics perspective. We now discuss some of the potential issues that can arise in such investigations and some areas where early consideration may help ease or avoid these issues.

The Computer Misuse Act

In the UK, if you access a computer without the authority to do so, this would likely constitute a breach of the Computer Misuse Act. This means that if credentials such as an email address and password are identified, while it would be possible to use those credentials to collect the relevant data, it still may not be possible to do so legally without additional steps seeking the relevant authorisation.

Court Orders and Other Agreements

Accessing the content of password managers will usually require the use of a master password. When preparing court orders or agreements between parties, consideration should be given to including specific wording requesting the master password to any and all password management programs used. It may also be wise to consider ensuring that cooperation is required with respect to multifactor authentication, as this can be used to protect password managers (and is discussed further below).

Ensuring that such agreements are in writing could become important if evidence from a forensic investigation later identifies usage of a password manager when it was claimed that none were in use.

Acceptable Use Policies

With the previously noted use of password managers at work, businesses may consider looking to update their acceptable use policies to cover password managers and their content. For example, stating that if password managers are used on company devices, the business has the right to access the data stored by the password manager.

Privacy and the Impact of GDPR

One of the concerns that could be raised with regard to the content of data from password management systems is in respect of privacy.

The information contained within such programs could be considered to contain personal information. Further, it could contain information defined as “special categories of personal data” under the GDPR, which attract additional protections. Such categories of data include racial or ethnic origin, political opinions, religious or philosophical beliefs, genetic or biometric data, health data, or data relating to an individual’s sex life or sexual orientation.

This means that when looking to collect such data, or when looking to word acceptable use policies, agreements between parties or court orders, the relevant legal basis for processing should be specifically considered to avoid the potential of falling foul of the GDPR.

Multi-Factor Authentication

It is worth noting that some password managers have the capability to implement multi-factor authentication. 

This means that an additional piece of information is required to access the system in addition to the master password. Typically this is through a message sent to a mobile device.

If multi-factor authentication is set up, and there is no way to access the relevant additional information (such as the code on a mobile device), then it will likely not be possible to access the content. This should be a consideration if the individual concerned is not available, or may not be cooperative.
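
Many password managers that offer multi-factor authentication use time-based one-time passwords (TOTP) as the second factor rather than a text message. The sketch below is a minimal, purely illustrative TOTP generator in Python using only the standard library; the shared secret shown is hypothetical. The practical point it makes is that without the enrolled device (or the secret stored on it), the rolling six-digit code cannot be reproduced, so the account content stays out of reach.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical shared secret -- in practice this lives only on the enrolled device.
print(totp("JBSWY3DPEHPK3PXP"))
```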

Summary

In summary the content of password managers can be hugely helpful – whether it be finding additional data sources in investigations, identifying further assets and bank accounts to be checked in an asset tracing case, or identifying bitcoin and cryptocurrency wallets that may need to be considered for a freezing order.

While they may not be of interest in most litigation cases, there are scenarios where their content could be absolutely vital to an investigation.

About The Author

Dr Tristan Jenkinson is a Director in the eDiscovery Consulting team at Consilio. He is an expert witness with over twelve years of experience in the digital forensics and electronic disclosure field and has been appointed as an expert directly by parties, as well as being appointed as a single joint expert. Tristan advises clients with regard to forensic data collections, digital forensic investigations and issues related to electronic discovery.


How To Extract Credential Data Using KeyScout

Hello, this is Keith Lockhart from Oxygen Training, and this video is going to discuss the KeyScout application.

The KeyScout application is one of the tools available in the tool suite concept of the Forensic Detective product. KeyScout is a standalone application that can be run locally or on the go – those last two bullets there – and we’ll look at both of those use cases.

But to talk about what KeyScout is and does, we have to venture down the road we do in the boot camp course, about using your powers for good. KeyScout gives a user the power to search through and collect information that might be protected in some shape or fashion by the operating system; by the law in the jurisdiction of use; by morals; by anything.

And when I show you how KeyScout works, especially in the On The Go use case – meaning we’ll take an on-the-go device and put the KeyScout application on that device so it can walk around with us and collect data wherever we might be and bring it back to the lab – you really need to start thinking about what you’re doing, and what you might be collecting, and how that might or might not be a good thing.

So, it’s a tool. I certainly don’t want to deprive anybody of the capability to do this, but I will say: use your powers for good. Don’t get yourself in trouble. This could lead to some scope creep, in terms of what you have access to or not.

However, when you’re in the position to need this information, it is a vital tool for the toolkit. I mean, this could be a time-sensitive, life-saving, “Wow, we need to get some data from this account right now, we don’t know the credentials, however the percentage of success of what we can do goes up greatly if we can get to a machine, run KeyScout against it, and possibly get the credentials back that we need.”

Or maybe some email information, or browser information, or iTunes backups if we can get to the box where we might not have the phone but we have a backup of it. KeyScout allows all kinds of data collection, and we’ll explore that as we move on.

So we’ve sort of already touched on the information gathering component. When we use KeyScout, its whole job is to search. We can configure how that search is going to go; provide different parameters; different depths of search – maybe we don’t have a lot of time. We can limit the things we save from the search – maybe we’re just doing a smash and grab and need credentials really quickly. Or maybe we’re digging through all the media hooked up to a particular machine, to try to find backups or artifacts or mail, or other things that KeyScout will gather for us while it runs.

Important for us to know: the OCPK file, the Oxygen credentials package. One of KeyScout’s primary functions is to feed the Cloud Extractor. What I mean by that is: when you’re trying to access cloud-based data and need credentials to supply the cloud account, KeyScout is what gathers that data for you from a machine.

Or, as you’ll see inside Detective when you finally pull extracted data into a case, the accounts and passwords section is the same information that would be contained in an OCPK file. You’ll see a button in the tool called ‘Export to OCPK.’ We’ll talk about the way you’d do that and what that means. But this is the literal ability to grab the account data to feed the Cloud Extractor.

Or if you’re gathering lots of other data in a grander fashion, with much more space to be taken up, the Oxygen Desktop Backup might contain email archives; iTunes backups; browser artifacts; there’s a whole plethora – there’s our big word for the day – of tools, or areas that KeyScout can collect from.

And this ability, these options to collect from all these different areas, is available in KeyScout version 12 and higher. So if that’s new to you, pay particular attention as we go through this video, and know what you have in front of you when you upgrade to the new Detective.

OK, so to round this conversation out, listen, you can run KeyScout in the lab, which means the use case would be: if you have a target drive that you could mount virtually – you know, a VHD or a VM session, something like that – assign a drive letter to it, you can launch KeyScout against it like it’s a running machine.

Or if you’re out in the field – in a cave, I don’t know where you are – you need the ability to run against the machine right in front of you, whip out that USB drive with KeyScout on it, collect all kinds of things back to that USB and get back to the lab.

OK. Let’s see how this works.

We’ll access KeyScout from the Tools menu of our homescreen in Oxygen Forensic Detective. You’ll see you’ve got two options here: Launch KeyScout, and Add KeyScout to removable media. Let’s just discuss those quickly, because each one is relevant to use cases we just talked about.

Launching KeyScout from here almost seems nonsensical, because you really wouldn’t want to go install Detective on your target machine just so you could run KeyScout. So it’s great for a demonstrative purpose, it’s fantastic. However, the realistic use case is: mount that target drive on your Detective box and you can hit it as a drive letter and collect just like before.

Or you can add it to removable media, like the on-the-go device or USB drive that we talked about, which you can have in your pocket or your go kit. And that’s as simple as hitting the link and navigating to the drive that you want to save KeyScout to.

OK, I’m just going to go ahead and run it from here, like we talked about, from a demonstrable perspective.

OK, so when the interface starts you can see we’ve got a couple of options off the bat. One is in our Settings. It could be that we want to add particular passwords to help us break other password-protected things open, like a keychain; or tell the tool to search different places, or exclude different places, than the defaults.

And, specifically new to the 12 version, are all these selectable tools and applications that we can collect data from or not. Again, depending on your situation. If you’ve got a lot of time, got a lot of space on your USB drive: OK. Or you’re in smash-and-grab mode, and you only need Skype. Or only Firefox. These are all selectable boxes.

And then we can read the About and the Help. And I just want to show you in Help for a second, a quick early synopsis of the credential and application data that we can go after, including the operating systems we can work against, and then a couple of command line options if you want to get crazy during your KeyScout implementation. That’s accessible from the Help.

OK, let’s have a look at the Search. So if we click on ‘Search’ you can see we’ve got several options available to us. ‘Custom’, where you could enact some of those settings that I talked about a minute ago; but there are some defaults: Fast, Optimal, and Full. And you can see a little description of what those are doing underneath.

Let’s see how this works. I’ll click ‘Fast’ and start the search.

KeyScout takes off against the live drives on this machine, and we’ll fast-forward a little bit in time to get to the end of it, and we’ll see what KeyScout comes out with on the other side.

OK. My results are done: three minutes and fourteen seconds checking 96 directories by default. I didn’t make any changes to the settings.

And what did we get? Oh boy. 133 passwords and four tokens. That’s bad all by itself. Out of five different applications, 261MB of data – we’ll have a look. One backup found, that’s 12.5GB. Huh.

Well, let’s look here: Passwords and tokens. Wow. Well, if we just have a quick look at all of the autocomplete passwords and account information on here, this is probably… yeah. Let’s just not spend a lot of time here.

OK. But if we look at the applications – hmm. I’m not a big Internet Explorer guy; it stands to reason there’s nothing there. I am a Chrome guy, which is why there’s 50MB of data there. I do some Mozilla stuff; there’s 31MB of data there. I’m a Skype guy – lots of data there. And Windows Mail. Well, this is just crazy. Hmm.

If I have a look… well, I’ve got FTKJedi account information for Skype; Slylock account information for Skype; that’s in my Google. Yeah, all kinds of internet browser based data.

Let’s see: Internet Explorer, nothing there. But the ability for KeyScout to go out and grab artifact data from all these locations – including, in this case, Windows Mail – we’ve got a Unistore database out there, there could be PSTs; who knows what we’re going to find?

From a backup perspective: that’s the backup of my iPhone 10, 12.5GB. Notice, though: tickbox. Alright? All of these are boxes, available for ticking or not. What are you trying to export or save out, or not? Again, all selectable. If you’re in smash-and-grab mode I probably would not take the iPhone backup; and – well, it depends on what you’re after – if you don’t take all of this and just take the credentials, you’ll have that OCPK file, that credentials package, which lets you take this autocomplete username and password, or any recovered tokens, and feed them to the Cloud Extractor.

But if you’ve got time – maybe, who knows what you’re doing? Select it all.

Put the backups out there, get it out there, and you’ll have that Oxygen Desktop Backup. Takes a minute to save because of all the size, but again, up to you, based on your preference and what you’re trying to get done.

You can see a log of everything that was searched through.

And that’s it: my search is over. I could go back and run another version of the search if I wanted to do it; I could save information out, again depending on your size, space and time – I’m not going to save anything right now.

But let’s look at what happens with our results. Let me cancel that. And I’m just going to close KeyScout altogether at this point and hit my extractions, and let’s see where I have some data that would be OCPK-relevant. How about… Alison’s phone is a good one. Accounts and passwords.

Just to refresh: all I’m doing is going to Alison’s phone and using the Accounts and passwords section, where we aggregate, by account, by password, and then by token capability, everything we can get.

And if you see the cloud icon, that indicates that credentials or tokens could be used to access that data. Look: Extract with Cloud Extractor will send that information straight to the Cloud Extractor; or Save accounts data. And look what it says: Save accounts data to Oxygen credentials package.

A large chunk of analysts’ machines are not hooked up to the internet as a practice. However, somebody is, so if you can’t use this information in conjunction with a Cloud Extractor online, save it out to an OCPK file and give it to someone else who can. And what I mean by that is, just so we have an idea, I’ll just go back to the home screen really quick where I can grab the Cloud Extractor.

And on the main menu of the Cloud Extractor we’ll see the option right there: “Feed me an OCPK file.” Well, it’s probably not that direct as “Feed me,” but it is “Import credentials package.” It’s very straightforward.

And it says “Import credentials file generated by Oxygen Forensic Detective” – which we just saw – “or KeyScout” – which we also just saw. If you click that, it’s looking for that OCPK file.

So you kind of get a 360-degree idea of why KeyScout’s here, what it can do, and how it circles back to: listen, if we don’t get it from a target machine, we’re going to get it from a handset, or a device we pull into Detective. And if we don’t get it from Detective, we’re going to get it from the target machine.

And I think one of the most relevant, or important, use cases we can talk about with this is: a young adult’s missing; their phone is missing with them; we can’t get that. But this is a cooperative recovery effort, so we are given credentials to get into the cloud accounts, where we can go find geolocation that’s real-time for that missing device, which is probably, hopefully, with that missing person.

That’s tried and true. This is a great tool. I know people get leery about scope creep when it comes to the cloud. However, exigent circumstance is not a bad situation. Articulation is not a bad concept. You have the tool available, whether it’s against a mapped drive, or Detective pulling the same information from a handset as extraction information. KeyScout’s capability to acquire that account information, or those web artifacts, or those mails, or those backups: we can’t leave that out of the toolbox.

OK, thanks for spending the time. Hope that’s been helpful. Let us know how it turns out for you. I’ll speak to you later.

Learn more about the Oxygen Forensic KeyScout and many other tools, tips, and workflows with Oxygen Forensic Detective by attending an in-person or online training course. Check the Oxygen Forensics website for course dates, locations and descriptions. 

How To Search For Visual Data With Griffeye Analyze DI

In this video, we’re going to discuss how to use the ‘Search’ function to help you quickly locate files of interest within your case. Analyze DI allows users to search not only for text-based information, but for visual clues as well. Adding visual clues into your workflows can really improve efficiency and help you get to the files you’re interested in even faster.

Let’s look at the search view, where you can review all of your search results. The search view opens automatically when you perform your first search. It can also be toggled on or off from the View tab on the ribbon. This panel looks similar to the Thumbnail view, but is actually a custom thumbnail and grid view just for your search results. Search results can also be quick filtered, just like your main case data can.

The search view is tabbed based on the type of search you’d like to perform. Text search allows you to search for textual data parsed during case processing. Batch searching allows you to bring any list of textual data to search for across your case, such as hash values. Image searching allows you to bring in external images to search for visually related files.
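
Conceptually, a batch search against a list of hash values is just: compute a digest for every file in scope and flag anything whose digest appears in the supplied list. The snippet below is a generic Python illustration of that idea – it is not Analyze DI code, and the folder path and hash values are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical list of known hashes brought in for a batch search (MD5 values here).
known_hashes = {
    "5d41402abc4b2a76b9719d911017c592",
    "098f6bcd4621d373cade4e832627b4f6",
}

def md5_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so large media files never need to fit in memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# Walk a (hypothetical) case export folder and flag files whose hash is on the list.
for file in Path("case_export").rglob("*"):
    if file.is_file() and md5_of(file) in known_hashes:
        print(f"Hash hit: {file}")
```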

Most of the common searches can be accessed by right-clicking a file to bring up the context menu. From here we can see the different types of searches we can perform related to the file you’ve selected. We can search for EXIF related files; similar images; similar faces; relationships; and files located in the same folder as the selected file. Note that the keyboard shortcuts for each search type are also listed.

EXIF related searches can be handy when searching for files based on related camera information like serial number, camera model, software version, and create date. Simply right-click and select the EXIF criteria you’d like to search for, and view your results in Search view.
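
For context, the camera metadata that an EXIF-related search keys on (make, model, software version, create date) can be read straight out of an image file. The sketch below uses the Pillow library to dump those tags; it is a generic illustration rather than how Analyze DI performs the search, and the file name is hypothetical.

```python
from PIL import Image, ExifTags  # pip install Pillow

def read_exif(path: str) -> dict:
    """Return EXIF tags as a {tag_name: value} dict (empty if the image has none)."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = read_exif("evidence_photo.jpg")  # hypothetical file name
for name in ("Make", "Model", "Software", "DateTime"):
    print(name, "=", tags.get(name))
```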

Let’s perform one of the most common types of searches: a visual similarity search.

Similarity searching allows you to search for files that are visually similar in appearance to the selected file, based on colour distribution, contours, textures, and points of interest. To conduct a similarity search, simply select a file and hit Enter.
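
As a rough illustration of what “visually similar” can mean, one simple signal is colour distribution: build a normalised colour histogram for each image and measure how much the histograms overlap. The sketch below shows that single signal in Python with Pillow; production tools such as Analyze DI combine several features (contours, textures, points of interest), so treat this purely as a conceptual example with hypothetical file names.

```python
from PIL import Image  # pip install Pillow

def colour_histogram(path: str, size=(64, 64)):
    """Downscale, then build a normalised RGB histogram as a simple colour signature."""
    img = Image.open(path).convert("RGB").resize(size)
    hist = img.histogram()           # 256 bins per channel -> 768 values
    total = sum(hist)
    return [v / total for v in hist]

def similarity(path_a: str, path_b: str) -> float:
    """Histogram intersection: 1.0 = identical colour distribution, 0.0 = disjoint."""
    a, b = colour_histogram(path_a), colour_histogram(path_b)
    return sum(min(x, y) for x, y in zip(a, b))

# Hypothetical files: score a candidate against a selected image of interest.
print(similarity("selected.jpg", "candidate.jpg"))
```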

Let’s take a look at our results in the Search view. As you perform searches, results will appear in the Recents tab list and can be viewed individually by selecting them. Similarity searches will return 100 results by default, but this can be adjusted via the settings button in the Results panel.

Subsequent searches can also be performed right within your results in the Search view. This can be a very useful tool for locating files with connecting points of interest, but also for locating similar non-pertinent files to quickly exclude them.

From the Context menu, we can also search for similar faces. Analyze will search for similar faces to all the faces detected in the image, split into separate results that can be viewed individually.

Relationship searches can also be performed. This search will find all the files related to the selected file, based on the criteria set in the relationship search settings. Simply select or deselect the relationships you’d like to search for. GPS and date ranges can be customised by double-clicking on them and changing the value.

Relationship search results will be returned in order of the strongest relationships first.

Now let’s perform the same searches by using files external to our case. Analyze DI allows you to select an external image that is not part of your case, and search for visually related files within your case.

From the Search view, select the Image tab. From here, select the type of search you’d like to perform – similarity, visual copies, or similar faces – and then click ‘Add.’ This will prompt you to select the file you would like to perform the visual search against.

First, I’ll perform a similar image search by selecting this red bus.

Now I’ll select a search method for similar faces to match a possible victim’s face.

See how Analyze quickly locates all of the similar faces in your case?

Thanks for watching. If you have any comments or questions, hit us up in the forums or shoot us an email to support@griffeye.com.

Investigating Nonconsensual Intimate Image Sharing

by Christa Miller, Forensic Focus

Nonconsensual intimate image sharing – also known as image-based sexual abuse, nonconsensual pornography, or its original slang, “revenge porn” – has been around since at least the 1980s, but didn’t become a widespread social problem until the internet – and mobile phones – became ubiquitous.

So called because its perpetrators often share intimate images in reaction to a breakup or a fight, “revenge porn” is actually the result of a complex mix of motives. It’s often just as devastating to its victims as physical sexual abuse, and is unevenly policed and enforced. How can investigators and forensic examiners respond?

Defining Image-Based Sexual Abuse

A report published in the United Kingdom, “Shattering Lives and Myths: A Report on Image-Based Sexual Abuse,” describes how a mix of “control, attention seeking, jealousy, obsession, misogyny and lad culture, sexual gratification, a ‘prank’, distress, humiliation, entitlement, and to build up social capital” contribute to nonconsensual intimate image sharing.

Based on findings from more than 50 interviews and part of an international study across the UK, Australia, and New Zealand, the report found that nonconsensual intimate image sharing is often closely correlated with domestic abuse and other forms of coercive control and gender-based violence.

The “Shattering Lives” researchers named and corrected seven myths around nonconsensual intimate image sharing:

  1. It’s a form of sexual assault, not a communications offense.
  2. It’s derived from a complex mix of motivations, often including power and control, not simply a form of revenge.
  3. The repeated proliferation of images across the internet can shatter lives, and victim-survivors can’t “just move on.”
  4. Only some, not all, forms of image-based sexual abuse are currently criminalized.
  5. The police and the criminal justice system aren’t always prepared to respond to non-consensual taking or sharing of sexual images.
  6. Likewise, websites and social media are not always good at taking down the material.
  7. A preventive, educational and restorative approach to criminal justice may be more effective than a punitive response.

The Impact Of Nonconsensual Intimate Image Sharing

The impact of nonconsensual intimate image sharing isn’t unlike that of child sexual abuse material (CSAM). People whose pictures have been nonconsensually shared describe similar fears of being recognized, of their pictures showing up again and again, of being revictimized. 

It’s also not unlike the impact of sexual assault, because of the idea that adult victim-survivors played a role in their own harm – especially when they consent to taking the pictures or having them taken. This can confound a victim’s abuse claim.

In fact, the “Shattering Lives” report found that many “victim-survivors spoke of being pressured into taking and sending images, or having such images taken.” Even this coercion had multiple factors. Grooming, the threat of physical violence, or threats to end the relationship all played roles.

Unsympathetic attitudes towards their experiences can serve to retraumatize victim-survivors, part of what the report described as “social rupture” whose repercussions are felt in daily personal, professional, and digital social spaces.

For example, US Representative Katie Hill resigned her Congressional seat as a result of abuse: “In Hill’s case, it cost her a career, while her ex-husband who allegedly leaked the photos has escaped the same amount of scrutiny,” wrote Gabby Landsverk for The Insider

A professional cost isn’t the only potential consequence for a victim-survivor. “[S]ocial, cultural or ethnic identities, life experiences and life stage impacted the harms, oppressions and consequences [victim-survivors] encountered in the context of image-based sexual abuse,” the “Shattering Lives” report stated.

Legal Remedies

In the United States, no federal law covering nonconsensual intimate image sharing currently exists. (A federal bill, the ENOUGH Act, proposed by US Senator Kamala Harris in 2017, failed to pass.)

Although 46 of the 50 US states, plus the District of Columbia (Washington, DC) and one territory, Guam, have laws specifically prohibiting revenge porn, the laws suffer from inconsistency and confusion. For instance:

  • Some states make nonconsensual pornography a misdemeanor; others make it a felony.
  • In some states the crime is a misdemeanor no matter how many times it’s committed; in others, a second or third offense makes it a felony.
  • Some states categorize it as an invasion of privacy; others as a public decency offense.
  • Some classify the crime differently depending on whether the victim is 18+ years of age.

Finally, some states move more slowly than others to enact and update laws. For example, in the same three-year period it took Oregon state legislators to pass and then update that state’s law, New York’s first law was debated hotly before finally being passed in February 2019. 

That’s also due, however, to the efforts of large tech companies like Google, which lobbied in 2018 to change language in New York’s proposed statute. Prior to the bill’s passage, Google’s efforts to block it were successful. 

In part, that’s because nonconsensual intimate image sharing isn’t as clearly illegal as CSAM. Because consensually posted pictures of consenting adults fall under free speech, a broadly written statute could run afoul of the First Amendment of the US Constitution’s Bill of Rights – and the “Internet’s First Amendment,” Section 230 of the 1996 Communication Decency Act (CDA).

Of course, nonconsensually shared images themselves can “affect a person’s ability to live their life unfettered by harassment,” according to Kateri Gasper, a prosecutor and member of the National White Collar Crime Center (NW3C)’s Prosecutorial & Judicial Advisory Board, “so it’s a very narrow line.”

In the UK, according to the “Shattering Lives” report, motives depend on offenses, and in turn, offenses carry different thresholds: causing distress, intent to cause distress, or knowledge that the action could cause distress. Likewise in the US, where many laws require the government to prove that a perpetrator’s intent was to cause harm.

Another similarity between the two countries’ laws: not all image-based offenses are categorized as sex offenses. In the US, they’re frequently classified as privacy or decency crimes. That’s a problem, says Professor Clare McGlynn, one of the “Shattering Lives” report’s authors, because “[C]omplainants do not have the protections afforded to other sexual offence victims, such as protections against the use of sexual history evidence [at trial].” Further, says Jonithan Funkhouser, a detective with the Lake Oswego (Oregon) Police Department, the perpetrators of nonconsensual intimate image sharing don’t have to register as sex offenders.

Reclassifying image-based crimes to bring the law more in step with other sex offenses, on the other hand, would put consent front and center and also, critically, cover sharing without intent to cause harm: “for purposes of sexual gratification, financial gain, group bonding or a ‘laugh’; threats; or fake images” in the words of the “Shattering Lives” report.

The UK Law Commission is currently reviewing existing legislation, but the Guardian reported that its conclusions won’t be released until 2021. “It is deeply troubling that we will have to wait until 2021/22 for there to be reform to the laws on image-based sexual abuse,” says McGlynn. “The current laws are out of date, piecemeal and clearly failing victims. Many of the gaps in the law are straightforward to remedy, such as criminalising threats to distribute sexual images without consent and granting anonymity to complainants. That the Government has refused to act means that many more people are going to be victimised and without any justice or redress. The delays mean justice is being delayed.”

In New York, before the passage of the 2019 law, Gasper says prosecutors had to “get creative” to prosecute offenders. For example, one defendant was charged and later convicted of felony contempt of court for violating a protection order by contacting the victim through a third party – the internet service provider he used to post to the fake online profiles he’d created in his victim’s name.

Alternatively, when the criminal burden of proof can’t be met or when the law doesn’t provide coverage, civil remedies – suing perpetrators under tort claims – may be possible, though David Bateman, a partner in the law firm K&L Gates and a co-founder of the Cyber Civil Rights Legal Project, says this approach is frequently more expensive and public than victim-survivors can handle. Plus, he says, “In the end, the only relief is monetary; there’s no jail time for an offender.” Victim-survivors who pursue the civil route are generally motivated by a need to prove who did it.

Another limiting factor, according to the “Shattering Lives” report: support in the form of advocacy is lacking, so victim-survivors themselves can struggle to understand their options between criminal and civil law, and to communicate with investigators and legal teams.

Nonprofit organizations like the Cyber Civil Rights Legal Project can sometimes offer some relief by providing pro bono services. Bateman says most times when victim-survivors contact a nonprofit, they aren’t trying to unmask their abusers. Instead, they seek to get the images taken down so they can move on with their lives.

Law Enforcement And Digital Forensics Response

In Bateman’s experience with law enforcement across the country, how authorities respond to revenge-porn reports is “a broad spectrum.” “There are federal versus state jurisdictions, and every locale has its own budget, issues, laws, priorities, and ethos,” Bateman says.

That can make for an inconsistent approach across jurisdictions. Investigators rely on prosecutors to help them understand how to gather evidence, arrest, and charge for offenses, but the many gray areas of nonconsensual intimate image sharing make it more difficult to assess the facts of the case – or to take action.

In the best case, says Gasper, digital forensics examiners may be called on to find email, social media or website screen names, browser histories, and/or social media apps associated with the posted images. Finding out where and how posts were made (e.g. to Facebook via mobile device) is important, and a list of keywords – often included in search warrants – can help. Generally, sites and apps are enumerated in the warrant.

That’s if a warrant is possible to obtain. In Oregon, where the law in 2015 inadvertently created a “loophole” by restricting its language to images posted on “an internet website,” police couldn’t investigate threats to disseminate images via text, email, or printed pictures.

Nor was it a crime to simply show pictures to others. “It’s very difficult to tell victims that what happened to them isn’t illegal,” says Funkhouser, who performs digital forensic examinations for his agency. Told in one case that the images had been deleted, detectives couldn’t get a warrant to search the phone because there was no crime for which to establish probable cause. “The best we could do was to try to get the image off the phone so it wouldn’t be disseminated,” says Funkhouser, “and hope it was the only copy out there.”

Other investigative challenges:

  • Proving the accused is truly the person who posted the pictures, rather than a third party with whom the pictures were shared.
  • Victims frequently don’t realize the pictures might have been shared outside of their relationship. 
  • Cell phones make it even more difficult to trace who posted what. 
  • Difficulty persuading friends and family to save the evidence rather than deleting it in a misguided effort to protect the victim. “It’s actually easier when it’s posted on porn sites,” Gasper says, “[which] are relatively cooperative when you [serve legal process showing] the post was made without permission.”

Subpoenas can help to prove a particular IP address was attached to the account(s) used to send the material, says Gasper, adding that it’s wise to bring in a prosecutor on every nonconsensual intimate image sharing case. That way, they can help with the timely creation and service of subpoenas, cutting down on confusion and overwhelm to move a case forward.

Even so, the subpoena process is arduous and tedious, which in turn runs into resourcing issues. “Most detectives don’t have the luxury of doing a [year-long] investigation for a misdemeanor,” Gasper explains.

In larger cities like New York, for instance, detectives in specialty units tend to focus on felony cases. That leaves the misdemeanors to precinct detectives, who themselves are likewise occupied with homicide and assault cases.

In smaller communities, misdemeanor offenses may not be assigned to a detective at all, says Bateman. “The officer who is assigned therefore has no subpoena power, and that means no forensics.” Most effective, he says, is “when digital forensics is married with subpoena power.” But even then, the “light slap on the wrist” that’s characteristic of most misdemeanor sentences can feel like small comfort to both victim-survivors and investigators.

Still, Bateman has seen progress over the past five years in that law enforcement has learned more about the nonconsensual intimate image sharing problem, statutes have been passed to support enforcement, and in some jurisdictions, it’s what he calls “a real priority,” with dedicated investigators working the cases.

The Need For Training And Education

In Gasper’s experience, the education detectives receive frequently consists of experience. Without it, she says, detectives can quickly become overwhelmed and not know where to start. Her office receives numerous calls from victims seeking help when precinct officers don’t know how to help them.

In a small agency like Lake Oswego’s, Funkhouser says it’s easy to ensure all officers receive legal updates as often as they need. That happens via email, or, if there are many updates, via “roll call” training prior to each shift. Additional training may be available from the multidisciplinary task forces responsible for investigating domestic violence or sexual assault cases.

Indeed, McGlynn says that because image-based sexual abuse is experienced by many victim-survivors as a form of sexual assault, procedures and training relating to sexual assault could be instructive in assisting the police to better tackle nonconsensual intimate image sharing.

But this is only a start. “There is also additional training required to understand the specific modes of perpetration of these abuses and to understand that for some victims, these crimes can have devastating consequences,” she adds. “Society and the police need to better recognise the serious harms of these abuses and take action.”

The “Shattering Lives” report called for specialized, comprehensive training and guidance for police and others in the criminal justice system, together with improved resources and technical support. Comprehensive statutory guidance provides the foundation for this, but programmatically, it needs to become part of the curriculum at police academies and annual training.

Proactive Investigative Responses

With many investigators already delivering presentations to schoolchildren and parent groups on internet safety, adding a unit on consent could help both would-be victims and perpetrators to think twice before taking, sharing, or posting intimate photos.

In the “Shattering Lives” report, in fact, one victim-survivor was quoted regarding the value of education around image-based sexual abuse’s harms and effects, versus its position as a “cool thing to do” among peers.

That’s the kind of education Gasper herself delivers to local schools at all levels, even to very young children, and elsewhere to improve awareness of the issues around consent – which can be given and then revoked, or given for some acts but not for others – or the impact that actions can have.

For example, even consent can be a thorny issue. “Adults talk about wanting privacy as a defense against government intrusion,” Gasper says, “but if you post a picture of yourself, what are you saying about your own privacy? That’s what we want to get them thinking about.”

In Lake Oswego, Funkhouser says the agency’s two school resource officers talk to students regularly about similar issues. The media are also an important part of the police department’s communications strategy, covering new laws as well as awareness and prevention tactics.

Individual investigators can also proactively share their experiences with investigations and the impact to victims. Funkhouser was part of a working group that reconvened in 2018 to update the law after police encountered investigative challenges.

Now that the revision – passed in 2019 and taking effect January 1, 2020 – covers any image distribution, “including through technologies that have not yet been invented,” Funkhouser anticipates different challenges.

“Where the images are stored – the phone, the cloud – will become the issue,” he says. In particular, images distributed through a service like Snapchat, which deletes them after a brief period of time, could make cases harder to investigate. That’s because investigators need to be able to see the image for themselves. “We have to be able to show a jury so they can see that it meets the requirements in the law – that the victim is identifiable, and that the image is intimate,” Funkhouser explains.

Nonconsensual intimate image sharing can be legally complicated to navigate and time-consuming to investigate. Ultimately, however, these cases are as important to helping victims as any other. When called upon to apply your analytical skills to these cases, plan to work closely with prosecutors, advocates, and others to strategize how best to approach the evidence – and to stay on top of important legislative and procedural changes that could affect your approach.

How To Use Social Graph In Oxygen Forensic Detective

Hello, this is Keith Lockhart from the Oxygen Forensic training department, and this video is talking about the Social Graph inside Oxygen Forensic Detective.

To fully understand the Social Graph and the things it can do for you, you kind of have to understand several other facets of your data and how that data is analyzed and categorized inside Detective.

So first we have to figure out how the accounts are coordinated for a user of a device. And not only accounts, but the contacts, because sometimes an account can be a contact, and vice versa. Specifically, we’ll view the profile information for the given user of a device, and it’ll be really interesting when we look at a single device that may have fifteen versions of a user on the device.

Think about this for yourself, for a minute. How many different ways do you communicate with people from your own device? Phone number; text message; Skype; WhatsApp; Line; email. If you really sit back and think about this, I can probably name thirteen or fourteen myself, right off the top of my head. So if we check the profile information inside Detective, we can get a really quick snapshot of accounts tied to the user of the phone.

Then we’re going to have to go look at some of the configuration options; (a) because they’re available now in Detective 12 and higher, and depending on how those configuration options are set, Detective will merge those accounts for the user of that profile – the user of the device – as that extraction data is turned into case data for you to utilise.

That’s going to be really important for you to understand. Because how your configuration is set determines how you will see the aggregation of those contacts, not only in the Social Graph, but everywhere there is the ability to filter on accounts. So this is going to become a critical piece of information, and hopefully education, as you continue to use OFD 12 going forward.

The point will be to derive some function from chaos. And I’ll illustrate that by looking at a completely unmerged set of data in the Social Graph in particular. Because the first time you look at it, you may just sit back in a chair and think, What in the world am I doing? How am I going to tell what’s what here? And our job will obviously be to help determine what’s what there, but we’ll do it with effective merging, recognising the profiles of the target of the device.

And not only one device, but we’re going to delve into what happens when you put multiple devices in the graph. It’s really cool, you can see I’m talking to all these different people, and we’re talking back and forth, and when we’re talking. But it’s even cooler when you can put three or four different devices in the graph, and while those different devices don’t really talk to each other, they all happen to be talking to this one particular contact out here in the middle of nowhere. Who is that? Let’s figure out how and why that person sitting there has common contact amongst multiple devices or users or targets, call it what you will.

So that’s our goal in this video: to figure out how Graph works, and to figure out what configurations inside Detective will get our best bang for the buck when we’re trying to figure out what’s going on in the Social Graph.

OK. Let’s have a look.

OK. So here’s my Detective. I’m just going to jump into my extraction list, and get down to our old human trafficking case, and in this instance, Alison Kelly’s iPhone.

So let me just start, as we were talking about the nuts and bolts of things we needed to understand about the Graph: let’s start here with Alison’s Owner information. So here in this section, you’ll see a lot of information about the data on the device, the extraction and the owner. And I’m literally going to use the Owner section, and while it might look like this to begin with, I’m going to take a look at her full profile.

And just take a look here, and start recognising what’s going on. The full name is Alison Kelly. Here’s another full name for something else: Alison. Here’s an email address: alisonkelly2015 at gmail. There’s a Facebook ID. The phone number of the phone. A Twitter profile. A nickname. Another nickname. An account name. An ID, an ID, an ID, an ID, and I could just go on and on. And it could potentially all be represented in different areas of Detective when you’re doing your analysis.

So (a) great place to start to get a good snapshot of the Owner information; but (b) it would be a great place to help you confirm what you’re seeing, so you’re not losing your mind trying to figure out what’s going on.

For instance, let’s look here. I’ll go to the Calls section for Alison. And when I get here, I’ve got a column 1, column 2 – which is a big grid of data – and column 3, our detail column.

But if I come back here to column 1 where I want to filter things, I have accounts I can filter on. This is a phone number; there’s an Alison, that’s an Alison, there’s an Alison, there’s another Alison. If you don’t understand what’s happening here, you literally sit back and think the tool’s crazy, or you’re crazy. You don’t know what’s going on. Sure, I can dig down on the contacts that Alison’s having discussions with from a call perspective, because I’m in the Calls section right now; or I can even look at the sources of those calls.

And just as a side note, if you’re a Detective 11 user and you remember the old Calls section event log as being the old place you looked at telephone calls, now the Calls section aggregates all types of calls on the device, whether they’re phone calls – still the Event Log designation – or app-specific types of calls. Calls represents everything. So you also have the capability to filter on those sources, where you didn’t have that before, all in an aggregated fashion. But our problem remains. We have a lot of Alisons.

If I go back to the extraction information and go check Messages. Same thing – I can filter by accounts, groups, contacts and sources. If I just expand contacts, that’s great, there’s a bunch of them. Groups, these are group texts, group whatevers, who knows?

But from an accounts perspective, now I’ve got Alison, Alison, alisonkelly2015, alisonkellyNY – what’s going on? Well, we can start making some determination that we have some different Alisons here, because we are looking at the messages section right now, and maybe these accounts are using messages.

OK, let’s go back and just really jump in the rabbit hole. Let me hit the Social Graph for Alison’s phone. Now, based on what we were just talking about, here we go again: oh my gosh, here’s a phone number, there’s a phone number, there’s an Alison, Alison, Alison, Alison Kelly, alisonkelly1015, alisonkellyNY. So I’ve got an Alison. Another Alison. And this Alison. And that one. And this one. And there’s what looks like a centre of a group maybe – here’s one of those phone numbers; here’s an Alison, and there’s… OK. So, for me, this is really dysfunctional. I’m not quite to the matrix mode where I can instantly discern that this is Alison talking to people one way, or another, or another, or another, or another; or several other ways, in one big graph.

I don’t know if I’m good with this, and I especially don’t know if I’m going to be able to explain this to someone else, when it comes to it. When I’m looking through the Social Graph, and I’m seeing all these versions of Alison, and then I’m looking down in Contacts for people, or accounts, or other things that supposedly Alison is talking to, and I see Facebook, or I see KakaoTalk, or I see Telegram; I mean, these aren’t Alison’s friends. Why are they even in there? What is going on? What is this graph trying to tell me? What’s a contact? What’s profile information? How are these things impacting the data we see?

So to help better understand that, let me go out to the configuration options of Detective and have a look at a new specific section, if you haven’t seen it, called Contacts, where I can start to understand and get a look and feel on how Detective is determining what it’s merging together as far as contacts or not.

So there are merge rules; phone number, or account, or email address are the things – the criteria – that Detective would be using to merge together different things, different players, maybe. Maybe different Alisons, so to speak.

And within a device, at a device level, am I doing contacts that are in the same section? Or different sections? Or both? Or neither? And this one is neither, for merging contacts and accounts in the same section or different. Or merging contacts that are in the same section.

Or look: overall, in the same case, if I have the same contacts within the same, or different devices, am I going to merge with different criteria: yes or no?

Within those criteria, don’t worry about cell, home, or work, the labels. Whatever is in those fields, don’t use those as part of your merge. Don’t use the phone number that starts with 112, or in different parts of the world 911, or things that are really not going to be independently unique for a user you are trying to merge together. And noreply@gmail.com is a great email to disregard because it’s probably not having to do with anybody, but maybe a bot mailing something, or who knows what? But that’s not going to be uniquely identifying anybody that we’re after.
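
What the speaker is describing boils down to a dedupe pass: normalise each contact’s identifiers (phone numbers, account names, email addresses), discard identifiers that are not uniquely identifying (field labels, emergency numbers, noreply-style addresses), and then group any contacts that share a remaining identifier. Below is a minimal, generic sketch of that grouping logic in Python – it is not Oxygen’s implementation, and the sample records are hypothetical.

```python
from collections import defaultdict

# Identifiers that should never drive a merge (per the exclusion list described above).
IGNORED = {"112", "911", "noreply@gmail.com"}

# Hypothetical contact records: each has a display name and a set of identifiers.
contacts = [
    {"name": "Alison",        "ids": {"+15551234567", "alisonkelly2015@gmail.com"}},
    {"name": "Alison Kelly",  "ids": {"alisonkelly2015@gmail.com", "alisonkellyNY"}},
    {"name": "alisonkellyNY", "ids": {"alisonkellyNY"}},
    {"name": "Emergency",     "ids": {"911"}},
]

def merge(records):
    """Group records that share any non-ignored identifier (tiny union-find)."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # identifier -> index of the first record that used it
    for i, rec in enumerate(records):
        for ident in rec["ids"] - IGNORED:
            if ident in seen:
                union(i, seen[ident])
            else:
                seen[ident] = i

    groups = defaultdict(list)
    for i, rec in enumerate(records):
        groups[find(i)].append(rec["name"])
    return list(groups.values())

print(merge(contacts))  # the Alison variants collapse into one group; Emergency stays separate
```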

So these settings are default. Let’s go see what happens as a result of these settings. And we’ve kind of had a look already at the result of these settings, but we’re going to check a different setting this time.

So I’ll go back to Alison’s phone, and let’s look at Contacts.

So now I’m looking at the accounts that make up the contacts; the contacts that make up the contacts; the groups that make up the contact information; and all the sources through which these contacts are communicating with each other.

So if I look from an account perspective – let me turn off contacts and groups – and I have accounts: 15. If I look down at the bottom, I have 15 accounts out of a total number of 293. But if I look in here, there’s an Alison; there’s an Alison; there’s an Alison; there’s an Alison; there’s an Alison. Alison, Alison, Alison, Alison, Alison.

Now these are not merged together right now. As part of this video, I have unmerged all these Alisons to make a point. However, if I come up to this ‘Merge contacts’ and automatically merge contacts, look what happens. Detective takes off, does a little work – based, by the way, on those configuration options – and look what happens to our contacts.

OK: now I’ve got an Alison; there’s an Alison; there’s an Alison; that looks like probably an Alison; there’s an Alison, Alison, Alison. Not sure about this one. But look at the difference here. Look how many different versions of Alison have been turned into one Alison.

That’s merging. If I expand, just to show, I’ve got Viber, Facebook, Facebook Messenger, Safari, text messaging – or WhatsApp, I’m sorry; whatever accounts, phones; all these versions of Alison are now this one Alison.

So let’s go have a look at the Social Graph. It looks a little different, a little more refined, because we’ve got a few less Alisons to deal with. But I’m not happy.

So I’m going to go back to Contacts, and I’m going to start doing things like this. I’ll select this Alison, and I’m going to use the Ctrl key to select this one. And I’m pretty sure this is going to be an Alison Kelly as well; that Alison; and heck, on the picture alone, I’m going to do this one; and this one. And, you know, the main one.

And now that I have them all selected, I’m going to go to ‘Merge contacts’ and ‘Merge selected contacts.’ Now look. Now I’ve got one Alison, and all of these different representations of Alison, and the clients, tools or applications she’s using to get things done.

Let’s go back and check the Social Graph. Wow. Big difference.

OK, so I’ve got this Pixie Lott who seems to be the centre of some communication universe, and I’ve got an Alison down here, that happens to be the same common Alison who’s talking to all these different people. Wow, is that a lot easier to deal with? Well, OK. Let’s be clear. For me, that is a lot easier to deal with. I am not trying to figure out which Alison is which anymore, because if you look under my accounts, I’ve got all the Alisons together as one Alison.

Look, I am concerned with the fact that Alison is talking to Stephen Bremer. I don’t care what particular client they were using at this point – I can go figure that out, if I need to – I just need to know those two talked about killing me, or whatever the particular target of the investigation is in that case. But now I’m only dealing with one Alison. Much more effective use of the Social Graph.

Now, how do I see those communications? Let’s pretend I was after this Stephen Bremer.

Well, there’s 84 communications between Stephen – that Stephen Bremer, I think there might be a couple of them – and that Alison. Well, let’s make sure we utilise the best view.

So if I turn on the Communication view: ah. Now I can click on those 84 messages and see them in the Communication view. Or maybe it was Wonder Girls; there’s one communication between Wonder Girls and Alison. Or John Andders, there’s one there. Or gettaxi, there’s one there.

And you can see down below, as we click on individual users, their messages will populate in the message pane, or the communication pane, down below.

Now, a couple of things going on here, let’s do some good practice.

I’ve got a Stephen Bremer. It looks like I’ve got another Stephen Bremer. And I thought I saw – well, you know what? Let’s just filter up here. Bremer. Stephen, Stephen, Stephen. Oh, that looks like a – OK, what is this? The answer is, it’s a group. There is a phone number and a Stephen Bremer together: OK, fair enough. And here is a bremerstephen.

Let’s employ what we just learned. I’m going to go back to Contacts. And Alison… that’s great. Let me put everybody back together. And I’m just going to filter inside contacts here, with Bremer. Aha. So I’ve got this Stephen Bremer, who looks to be already merged to an extent by the tool, based on those configuration settings when you imported this data, but we’ve got some other ones.

That looks like him; that looks like him; I’ll go out on a limb and say that’s him; and this looks like him. OK, here we go. I’ll hold the Ctrl key, and I’ll select this one, this one, that one, and… let me stop there. So, I’m stopping here why?

OK, here’s the thing. I know these… well. I am surmising that these are individual Stephens. This Stephen I am pretty sure is part of a group; we’ll narrow that down in a minute. And while I’d like to say Stephen is guilty about everything, if this is a group about killing everybody on the planet, it probably isn’t fair for me to associate this group with individual Stephen. You know, if I see a group, individuals are individuals from a merging perspective.

And speaking of merging, look at that little chain up here, that indicates merging, right? It’s the same ‘Merge contacts’ icon we have there. We can see it’s already been done.

Alright, so I’ve got these four. Let’s do ‘Merge selected contacts.’ Ah, look. Now look at my Stephen. And now let’s go back to the Social Graph.

OK. So I’ve still got my Pixie Lott which I’m pretty sure – let’s just check – Pixie Lott is a group, indeed. And then – oh, look at this – if I give myself a little real estate, there is the Stephen in the other phone number group which we saw earlier, which we didn’t merge into Stephen, and that’s OK.

So now I’ve got Alison talking to a Stephen Bremer, and I think I’ve got all of the other Stephen Bremers put together. So now there are 94 lines of communication between these two. And I could come down here and sort between Viber and WhatsApp; again, I’m just worried about the fact one of them said “Will write u a bit later” to the other one. That’s the smoking gun, it’s a WhatsApp message, if I really need to figure that out.

Matter of fact, I could select that message and I can… oh my gosh, there it is, it’s the one between Stephen and Alison, it’s my smoking gun. Do what you need to do with it, at that point. You can use Social Graph to filter your way down to communication between people.

Now let’s go back and do something else. Now that we’ve kind of narrowed down our contact and our account problem to where I’ve got one Alison; I understand the groups that are involved; my display is that much clearer. Now let’s go down to the contacts themselves. And Daniele Rizzo, you know, I could check that out, there’s one message: “Hey, let’s switch to Telegram.” OK, maybe that’s super important, or not.

This one has one message: it is a voice message, I think. That’s great. But, you know, gettaxi having two messages to me; those are GetTaxi messages. Facebook is a contact of mine – do we talk a lot? No, not really. That’s a – oh, a confirmation code.

OK. Think about all the multi-factor verifications you go through. Telegram: there’s a Telegram code. WhatsApp: there’s a WhatsApp code. There are a billion of these things on your phone, if you’ve used it for any length of time. But guess what? They’re all communicating, they just don’t happen to be communications we’re interested in.

So let’s do this. Let me come up here to this filter, where I can slide the number of communications I want to see, or not. For instance, let me just bump it up. Show me only things that are greater than one. And look at what happens to all that noise.

Now, some of them might not be noise. I might want to go and look at Daniele’s, or this one, or this one. But the majority of those single messages that could be confirmation codes, or two-factor authentication, are out of our view. Because I’m not… that’s noise, I don’t have time to go through all of this anyway; let’s narrow the focus the best we can.

You know what? Maybe I don’t want to include Viber. Maybe I don’t want to include Skype. Now I can start filtering backward, and filtering things out of the conversation I’m not interested in. So I’ve got… oh, Barbara, Stephen, the Weekend Plans group. The gettaxi stuff, Team Snapchat, and Angela.

This is where the filtering capability of the Social Graph goes crazy. The tool in general goes crazy, because we’re a database. We can filter this to that, to this to that, any way we want.

And let’s see: “Mmmm you have nice plans.” Well, that is a smoking gun. Let me just go ahead and mark that as key evidence while I’m here, and go crazy on it.

Like before, look, maybe Stephen’s my guy. Let me just show Stephen’s contact card so I can still go, maybe determine things like information about Stephen; what kind of communications Stephen has had in general; any statistics I would be interested in about Stephen; inbound or outbound messages; and, you know, if Stephen is part of a group. Hmm. That group, and the Weekend Plans group.

OK. Let’s just do one more thing before we go. So, pretending we enacted all the things we’ve learned, I have Alison Kelly’s data in the Social Graph, and Lars Jason. Lars is actually a conglomerate of several personalities and people, but the point I have right now is, I have got multiple extractions in the Graph for comparison.

Now I have the ability to filter on each one. Turn off Alison; turn off Patrick – which is also Lars, and a few other names – turn them back on; turn them back on.

What I’m not doing right now is showing any type of contact. And because I have multiple extractions, what I can do is look at things like unique contacts between Alison’s world, in the bubble; or Patrick – or Lars, whoever you want to call him – his world. And that’s cool, because they all have good one-to-one relationships. But based on the data in the Graph, my super-interest is going to be this: the commonality between the two.

And now we can see, once we jump outside Alison’s bubble and Lars’ bubble, we have got in common two different people: Homero and Angela. I don’t know what they’re doing. But those happen to be, I don’t know, drug dealers? Bank robber accomplices? Friends? Who knows what they are? But they are two people that happen to be in common between these two targets.

And if you just saw, I left-clicked and highlighted those two people together; so then I can view all their conversations in the pane down below. That is massive power from an analytic standpoint.

OK, thanks for watching. I appreciate you spending time with the video, learning a little bit more about the Social Graph, and I hope to see you in class soon. Take care.

Learn more about the Social Graph and many other tools, tips, and workflows with Oxygen Forensic Detective by attending an in-person or online training course. Check the Oxygen Forensics website for course dates, locations and descriptions. 

Enfuse 2019 – Recap


by Mattia Epifani

The Enfuse Conference, organized by OpenText, took place from the 11th-14th of November 2019 at the Venetian Conference Center in Las Vegas. More than 1,000 attendees from 40 countries were present, coming from different fields like digital forensics, e-discovery, incident response and cybersecurity. Most of the attendees were from the US and Canada, but many people from Central and South America, Asia, Africa and Europe were also present.

Forensic Focus was present for the entire conference and documented it in real time on Twitter. This article is a wrap-up of the conference highlighting some of the interesting talks from the more than 100 available.

11th November 2019 – Day One

The first day was dedicated to OpenText’s partners, and officially started with a “First Timers” session in which OpenText presented the history of the conference and offered a guide to finding your way around the event itself.

Then the Welcome Reception took place at Lagasse’s Stadium in the Venetian resort.

During the reception OpenText announced a donation to Michael’s Angel Paws, a charity organization based in Nevada that provides service dog and therapy dog programs to people with different needs.

12th November 2019 – Day Two

Starting from day two, most of the talks were run in parallel, so we were able to attend only some of them. Below is an overview of the talks we attended: more information on all the sessions is available on the conference website.

The day started with an early session: the “APFS – Review & Updates” talk by Simon Key (Course/Curriculum Developer at OpenText). The presentation gave a high-level technical review of APFS and its examination, including how to process APFS snapshots. Simon has also presented a webinar on this topic, which is available at GuidanceSoftware.com.

Then the opening keynote by Mark J. Barrenechea, CEO and CTO of OpenText, took place. The keynote was entitled “It’s your Edge: Own It” and focused on the continuous integration between different devices and how everything must now be managed, with IT security as the first port of call. The 10 most important value cases were illustrated, including endpoint security, threat intelligence, forensics, e-discovery, secure cloud collaboration, and eSignature. 

After the keynote we attended an interesting talk by Pierson Clair, Associate Managing Director at Kroll. He discussed “What’s New in macOS Security & Forensics”, covering several interesting aspects of macOS investigation, such as extracting and parsing system logs and Unified Logging, processing .FSEventsD, and querying KnowledgeC.db. More information on this session can be found on Kroll’s website.
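
Since KnowledgeC.db is an ordinary SQLite database, it can be explored with standard tooling once it has been extracted. Below is a minimal Python sketch of the kind of application-usage query this artifact supports; the ZOBJECT table, the '/app/usage' stream name and the 978307200-second Core Data epoch offset are assumptions based on commonly documented macOS versions, and the schema may differ on the system under examination.

import sqlite3

# Offset between the Core Data epoch (2001-01-01) and the Unix epoch (1970-01-01).
COREDATA_EPOCH_OFFSET = 978307200

def app_usage(knowledgec_path):
    # List application usage entries from a forensic copy of KnowledgeC.db.
    # Always work on a copy, never the live file; column and stream names
    # are assumptions that vary between macOS versions.
    conn = sqlite3.connect(knowledgec_path)
    try:
        cursor = conn.execute(
            """
            SELECT ZVALUESTRING AS bundle_id,
                   datetime(ZSTARTDATE + ?, 'unixepoch') AS start_utc,
                   datetime(ZENDDATE + ?, 'unixepoch') AS end_utc
            FROM ZOBJECT
            WHERE ZSTREAMNAME = '/app/usage'
            ORDER BY ZSTARTDATE
            """,
            (COREDATA_EPOCH_OFFSET, COREDATA_EPOCH_OFFSET),
        )
        return cursor.fetchall()
    finally:
        conn.close()

for bundle_id, start_utc, end_utc in app_usage("KnowledgeC.db"):
    print(bundle_id, start_utc, end_utc)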

At the end of the first day we attended an interesting session by Manfred Hatzesberger, Director of Professional Development and Training at OpenText, dedicated to creating examination and investigative reports and how to write them in an effective way for different audiences and users.

13th November 2019 – Day Three

Day three started with a presentation by James Eichbaum from MSAB, discussing mobile app anatomy. SQLite databases were discussed, with particular attention to understanding the different types of data found within them; how the WAL and SHM files work; and how they may be the key to a successful investigation. A lab was also conducted demonstrating how to manually recover deleted entries from an SMS database. 
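
By way of illustration (this is not the lab exercise itself, just a minimal Python sketch assuming the database and its -wal/-shm companion files were copied together), the write-ahead log matters because pages holding deleted records may persist in the -wal file, or in freelist pages of the main database, until they are checkpointed or overwritten. The table name 'message' is a placeholder.

import os
import sqlite3

def live_row_count(db_path, table="message"):
    # Open the copied database read-only via a URI so the evidence copy is
    # not modified; 'message' is a placeholder table name, since real SMS
    # databases differ between platforms.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    finally:
        conn.close()

def wal_bytes(db_path):
    # The companion -wal file may still contain copies of pages holding
    # records already deleted from the main database, so its size hints at
    # how much un-checkpointed data remains available for carving.
    wal_path = db_path + "-wal"
    return os.path.getsize(wal_path) if os.path.exists(wal_path) else 0

print(live_row_count("sms.db"), "live rows;", wal_bytes("sms.db"), "bytes of WAL to examine")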

Then the keynote by Muhi Majzoub, Executive Vice President & Chief Product Officer at OpenText, took place. Various interesting topics were discussed here, and some of the latest solutions from OpenText were introduced, like the latest Tableau TX1 Imager; the integration between the Axcelerate platform and Magellan; and the CoreSignature system for eSignature of documents.

Brad Robin from Belkasoft then took to the stage, for a talk entitled “Modern Encrypted Instant Messenger Investigations: Telegram on Mobile Platforms.” He presented an interesting overview of the Telegram application, including a description of the internal structure of the relevant SQLite databases that an analyst can find on Android and iOS devices.

At the end of the day we attended a round table entitled “Collection and Analysis of Ephemeral Data,” where best practices and tools to interface with ephemeral messaging systems and ’email killers’ were discussed. These messaging and file storage systems have made their way into corporate environments, and consultancies, law firms and solo examiners need to be prepared to collect, verify and analyze these data sources for use in investigations and litigations. 

14th November 2019 – Day Four

Day four started with an early session by Lisa Stewart, OpenText’s Manager of EnCase Training, entitled “The Value of Link Files in Forensic Investigations”. The presentation looked at LNK files in Windows and how they may assist in identifying media used, as well as files and folders currently or formerly residing within the computer system. 
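
LNK files follow Microsoft’s published MS-SHLLINK format, and even the fixed 76-byte header records the target’s timestamps and size. The following header-only Python sketch is illustrative rather than a production parser; dedicated tools also recover volume serial numbers, local paths and network share information from the structures that follow the header.

import struct
from datetime import datetime, timedelta, timezone

def filetime_to_utc(filetime):
    # Convert a Windows FILETIME (100-ns ticks since 1601-01-01) to UTC.
    if filetime == 0:
        return None
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)

def parse_lnk_header(path):
    # Read the 76-byte ShellLinkHeader of a .lnk file (MS-SHLLINK).
    with open(path, "rb") as f:
        data = f.read(76)
    (header_size, _clsid, link_flags, file_attrs,
     ctime, atime, wtime, file_size, _icon_index, _show_cmd, _hotkey) = struct.unpack(
        "<I16sIIQQQIiIH", data[:66])
    if header_size != 0x4C:
        raise ValueError("Not a shell link header")
    return {
        "link_flags": hex(link_flags),
        "file_attributes": hex(file_attrs),
        "target_created": filetime_to_utc(ctime),
        "target_accessed": filetime_to_utc(atime),
        "target_modified": filetime_to_utc(wtime),
        "target_size": file_size,
    }

print(parse_lnk_header("example.lnk"))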

The closing keynote was presented by James Clapper, the former US Director of National Intelligence, discussing cyber threat intelligence and digital investigations.

Aside from the talks, we also visited the Expo Hall where OpenText and all the sponsors presented their new products. 

Some of these included:

  • OpenText™ EnCase Forensic features time-saving workflows and updates to indexing and search for improved performance and reliability. Collection of Microsoft OST artifacts is included, and users can now also parse and browse the Apple File System (APFS) snapshot to allow discovery of modified and deleted data. 
  • OpenText™ Tableau Forensic Imager (TX1) provides the ability to pause and resume any forensic imaging job, even after a power cycle.
  • Reveille Software, a go-to solution for watching, visualizing, managing, and protecting enterprise content management platforms.
  • Webroot, providing multi-vector protection for endpoints and networks and threat intelligence services to protect businesses and individuals. 

The next Enfuse conference will be held in Las Vegas from the 29th of September to the 1st of October 2020, in conjunction with OpenText Enterprise World. Anyone interested in attending should consult the official website for details. 

Forensic Focus Legal Update December 2019 – Part I


by Christa Miller 

In cooperation with the National White Collar Crime Center (NW3C) and SEARCH, The National Consortium for Justice Information and Statistics, Forensic Focus is proud to offer a quarterly roundup of the latest legal developments around digital forensic evidence.

Comprising major legislation and case law from around the country, this guide is intended to help our readers get a better understanding of how different countries’ laws are adapting to the rapid pace of technological change.

In Part 1 of this inaugural guide, we cover the following:

  • Data privacy laws come into effect in Kenya and the United States, and are tested in Europe.
  • The key takeaways from the new bilateral information-sharing agreement covering cloud-based data between the U.S. and U.K.
  • Can evidence collection violate ToS? WhatsApp v. NSO Group
  • Fallout from the Coalfire pen testing arrests

Part 2 will cover:

  • United States case law regarding technology — facial recognition, pole cams, geofencing, and third-party DNA databases — when it comes to search and seizure.
  • The “reasonable suspicion” standard in the U.S. when it comes to border searches.

The material published here is for general public education. It is not legal advice for any specific situation. If the reader needs specific legal advice, the reader should consult a qualified lawyer.

More Data Privacy Laws Come Into Effect

In the nearly two years since the European Union (E.U.)’s General Data Protection Regulation (GDPR) went into effect, consumer data privacy has been in the news as companies work to comply — or are fined for noncompliance — and new laws come up for adoption. One of the most significant new laws is the California Consumer Privacy Act (CCPA), the strictest in the U.S.

On the international stage

As a quick recap, the GDPR recognizes a number of privacy rights for citizens of the European Union. According to the website GDPR.eu, these rights include: 

  • The right to be informed
  • The right of access
  • The right to rectification
  • The right to erasure
  • The right to restrict processing
  • The right to data portability
  • The right to object
  • Rights in relation to automated decision making and profiling

Because the recognition of these rights gives citizens “more control over the data they loan to organizations,” those organizations targeting or collecting data on E.U. citizens — regardless of whether they themselves are located in the E.U. — are required to comply with GDPR regulations, including:

  • Accountability for compliance
  • Implementing appropriate technical and organizational security measures
  • “By design and by default” data protection ingrained into everything an organization does
  • The need to obtain and document data subjects’ consent — and to honor withdrawn consent
  • When organizations are allowed to process data
  • The need, in certain organizations, for data protection officers

When it comes to digital forensic investigations, DFIR practitioners need to ensure they have appropriate mandates in place, as well as appropriate policies and procedures, according to Jason Jordaan, Principal Partner at DFIR Labs, who presented on the topic at the SANS DFIR Summit Prague 2017.  

For one, the GDPR mandates that data retention duration must be specified. A discussion forum thread on Forensic Focus reflected that European examiners generally retain data in both criminal and civil proceedings until the case is adjudicated and they’ve been advised they can delete the case data. 

Of course, the GDPR contains provisions for criminal and other legal proceedings under Article 2 (material scope), Article 10 (processing of personal data relating to criminal convictions and offences), Article 23 (rights of the data subject), and Article 49 (transfers of personal data).  

Also on the international stage, Kenya has passed a new GDPR-style data protection law. Intended in part to facilitate information technology investment, the new law is expected to help curtail predatory lending practices. It may also help as Kenya moves to digitize citizen identities.  

Case C-507/17 Google v. Commission nationale de l’informatique et des libertés (CNIL) 

In a case that may be of interest to investigators relying on open source intelligence (OSINT), as well as those working with victims of nonconsensual intimate image sharing or child abuse material, The Court of Justice of the European Union found in September that although Google “must remove links to sensitive personal data from its internet search results in Europe when required, it does not have to scrap them from searches elsewhere in the world.” 

Even though the case was brought in 2016 under Europe’s 2014 “right to be forgotten” precedent, the GDPR now codifies that right in Article 17(2). While Google is notable for being the first major U.S. tech company to be fined for noncompliance with the GDPR, the Court’s decision this year balanced free speech and the public interest against the right to be forgotten.

In the United States: California Consumer Rights Under the CCPA

(Text © Ashton Meyers, NW3C. Used with permission.)

With the CCPA scheduled to take effect January 1, 2020 it is important to understand the rights Californians will gain. Many people fail to realize that each time we use the internet to do things like search the web, shop for merchandise, or conduct research, our privacy is at risk. The CCPA plans to help mitigate this risk by giving consumers more control in how their personal data is collected, handled, stored, and destroyed.

The CCPA will give Californians control over their data by establishing six fundamental rights. These rights include:

1) The right to destroy. Californian consumers will now have the right to request that their data be erased either for being incorrect, or because the company should no longer need access to it.

2) The right to be informed when their personal information is being collected, and for what purpose that information is being used. 

3) The right to opt out of personal data being collected. The organization must inform consumers that they will be collecting personal data, and then they must provide the right to deny that collection.

4) The right to access any personal information that an organization has collected about a consumer. Upon request, an organization must disclose and deliver that personal information free of charge.

5) The right to data portability. When consumers request access to their personal information per the fourth right, it must be provided to them in a portable and readily usable format. 

6) Finally, the CCPA gives all consumers the right to equal services and prices. This is to alleviate any discrimination that consumers choosing to execute their rights may experience. For example, this prevents organizations from charging extra fees or withholding benefits from those who request access to their personal information or opt out of its collection altogether. 

The impact of the CCPA will be farther reaching than just within state borders. When this law takes effect, it will apply to not only California residents, but also any national or international organization that conducts business in the state. If successful, other states may follow suit and enact similar privacy laws. California may also be the testing ground for a future comprehensive federal law. 

All eyes will now be focused on California to watch the implementation of the CCPA. Expect privacy policy updates to flood the internet and new litigation to flood the courts.

Cloud-Based Evidence

The U.S./U.K. Bilateral Data Sharing Agreement

(Text © Matthew Osteen, Esq., NW3C. Used with permission.)

When the Clarifying Lawful Overseas Use of Data (CLOUD) Act was passed in early 2018, Congress had two goals in mind. The first goal was simple and prompted by exigency; Congress needed to give the Stored Communications Act (SCA) extraterritorial jurisdiction. 

A case, United States v. Microsoft, had made its way up to the Supreme Court and threatened to limit law enforcement’s ability to obtain evidence stored on servers located abroad. Federal agents obtained a warrant under the SCA for emails associated with an account, the owner of which was suspected of drug trafficking. Microsoft stored the customer’s emails in a data center located in Dublin, Ireland. Microsoft argued that the SCA did not apply to data stored abroad, as Congress had not affirmatively stated legislative intent for the SCA to have extraterritorial scope.

The CLOUD Act, passed before the Supreme Court could hear the case, provided language explicitly stating an intent for the SCA to apply extraterritorially, thus mooting the case. 

The second goal of the CLOUD Act was to preempt and dispose of foreign domestic barriers to law enforcement access to data. For example, if, in the Microsoft case, Irish law had prevented the disclosure of stored data without a valid Irish warrant, then Microsoft would have had to violate either U.S. law requiring production of the data or Irish law prohibiting it.

To address the potential conflicts of law, the CLOUD Act authorizes the United States to enter into data sharing agreements with foreign countries. The CLOUD Act itself does not provide much guidance on the specifics of any potential data sharing agreements.

On October 7, 2019, the text of the first such data sharing agreement was released. Across 17 pages, the U.S./U.K. bilateral data sharing agreement provides a streamlined process for law enforcement officers from each country to obtain data from the other. 

The Big Takeaways

  1. Conflicts of Law. One of the primary goals of the agreement is to avoid conflicts of laws. What this means is that U.S. law enforcement seeking data under the agreement do not need to worry about the GDPR. Likewise, U.K. law enforcement seeking data under the agreement do not need to worry about the Electronic Communications Privacy Act (ECPA).
  2. “Serious Crimes.” The agreement covers data relevant to investigations of “serious crimes.” The agreement defines “serious crimes” as “an offense that is punishable by a maximum term of imprisonment of at least three years.” It is unclear if crimes can be aggregated.
  3. Legal Process. Data sought under the agreement can be obtained with legal process issued by the requesting country pursuant to the domestic law of the requesting country. This means that the same process used for obtaining data before the agreement will be effective under the agreement.
  4. Where to Serve Process. Orders can be served on the service provider from whom data is sought. However, the order must be transmitted by the designated authority of the requesting country. For the U.S., the Attorney General selects the designated authority; for the U.K., the Secretary of State for the Home Department selects the designated authority. The designated authority is then responsible for reviewing the order for compliance with the agreement, collecting the data from the provider, and passing along the data to law enforcement.
  5. Judicial Review. Under the agreement, service providers can challenge requests for data. Previously, challenges were handled by the courts. Under the agreement, challenges must be made to the designated authority of the country requesting data. If the challenge is not resolved, then the service provider may raise the challenge with the designated authority of its home country. The designated authorities of each country may then confer to mutually resolve the challenge and render a decision.

Overall, the agreement emphasizes the need for international cooperation when law enforcement investigate serious crimes, including terrorism. While many applaud the effort to reduce reliance on Mutual Legal Assistance Treaties (MLATs), privacy advocates have voiced concerns that the agreement eliminates judicial oversight and removes privacy safeguards.

Can evidence collection violate ToS? WhatsApp v. NSO Group

(Text © Benjamin Wright, Esq. Used with permission.)

Facebook sued an Israeli company named NSO Group, claiming that NSO violated the terms of service of the messaging app WhatsApp (a Facebook property). 

Allegedly, NSO had created spyware for government clients, such as police agencies, around the world. The spyware would allow government clients to infect the smartphones of investigative targets so that investigative evidence could be collected. NSO was allegedly running infrastructure that facilitated this infection through the WhatsApp platform. 

NSO maintains that it is operating strictly within the bounds of applicable law.

One of the reasons this case is interesting is that it illustrates that all investigators and people supporting investigators need to read the terms of service for web pages and mobile apps that they may visit or access to gather evidence. Those terms of service may limit or forbid efforts by the investigator, including the collection of evidence. Violation of terms of service could undermine an investigation by an official investigator.

Additional Analysis (Orin Kerr, Twitter): 

“I just read the WhatsApp v. NSO complaint, and it makes an interesting and somewhat novel CFAA claim. NSO hacked the target computers of WhatsApp users. But is that unauthorized access of *WhatsApp’s* servers, enabling WhatsApp to bring suit? As far as I can tell, WhatsApp’s main unauthorized access argument of its computers is this: In routing the malware through WhatsApp’s servers, NSO had to disguise the malware as legit WhatsApp traffic. It’s an interesting legal question of whether that counts as an unauthorized access of WhatsApp’s computer, as compared to of the users ultimately hacked.”

Fallout From The Coalfire Pen Testing Arrests

(Text © Benjamin Wright, Esq. Used with permission.)

Much of the work we do in cybersecurity and digital forensics exists right on the edge between good and bad. A textbook example is the arrest of two employees of Coalfire Labs, a fair-sized cybersecurity firm.

Coalfire had a written contract with the state court administration of the state of Iowa. The contract provided that Coalfire would engage in penetration testing of court systems. This testing included efforts to physically get into county courthouses.

The sheriff of Dallas County, Iowa, arrested two Coalfire employees after they broke into the Dallas County Courthouse around midnight, September 11, 2019. The employees produced a document signed by three officials of Iowa State Court Administration indicating the employees had authority to do what they did.

The sheriff identified numerous problems. The authorization document provided the cell phone numbers for three state court administration officials. When the sheriff’s deputies called cell phones of two of the officials, they gave conflicting information about whether the Coalfire employees were authorized to break into this courthouse so late at night. A third state court official could not be reached because his cell phone number was not correct on the authorization document.

The County indicted the two Coalfire employees for the commission of criminal trespass. See Coalfire Investigation Report, October 9, 2019, Faegre Baker Daniels, Commissioned by Iowa Supreme Court. 

As of late November 2019, the investigation is ongoing and the incident has not been resolved.

Lesson: Prudent investigators create evidence to show they are good guys in a difficult situation. Such evidence includes having strong documentation that clearly leads to the conclusion that the professionals are acting with authorization. 

In documents like contracts for investigative services, failure to choose words accurately and carefully increases legal risk. Choosing words carefully and accurately requires intellectual rigor and professional skepticism. A professional skeptic asks hard questions about whether a contract actually authorizes aggressive steps such as lock-picking, which was one of the Coalfire employees’ techniques even though the Iowa-Coalfire contract did not specifically mention it.

Another problem in the Iowa-Coalfire contract, according to the Coalfire Investigation Report, was whether the State Court Administration had the authority to authorize someone to break into a county courthouse, as that courthouse building is owned by the county rather than the state of Iowa.

A deeper analysis of this case was part of the presentations offered at AwarenessCon 2019 in Adel, Iowa, on November 20, 2019.

The arrest and criminal indictment of Coalfire employees triggered anxiety and conversation within the penetration testing community. As a consequence, a pen test company named TrustedSec published an example clause that it added to its standard contract for physical penetration tests. 

At AwarenessCon, David Kennedy of TrustedSec said the following language is published at the company’s GitHub account as open source contract language contributed to the community so that anyone may use it. 

The clause provides that the client would pay $25,000 for each penetration tester who is criminally charged, in addition to all legal costs. A question for the community is whether this clause makes sense. It increases the risks to a client when the client hires penetration testers. Similar clauses might be appropriate in contracts for all kinds of digital investigators.

Attorneys from different parts of the world who would like to participate in this project are welcome to contact Forensic Focus’ content manager at christa@forensicfocus.com.

Forensic Focus Legal Update December 2019 – Part II: Search And Seizure


by Christa Miller

In cooperation with the National White Collar Crime Center (NW3C) and SEARCH, The National Consortium for Justice Information and Statistics, Forensic Focus offers a quarterly roundup of the latest legal developments around digital forensic evidence.

Comprising major legislation and case law from around the country, this guide is intended to help our readers get a better understanding of how different countries’ laws are adapting to the rapid pace of technological change.

Part 1 of this inaugural guide covered data privacy laws, bilateral sharing of cloud-based evidence, whether evidence collection could violate a company’s terms of service, and fallout from the Coalfire pen testing arrests.

In Part 2, we cover United States case law regarding technology — facial recognition, pole cams, geofencing, and third-party DNA databases — when it comes to search and seizure, along with the “reasonable suspicion” standard when it comes to border searches.

The material published here is for general public education. It is not legal advice for any specific situation. If the reader needs specific legal advice, the reader should consult a qualified lawyer.

Search & Seizure

In the United States, a number of news articles highlight the interplay between digital technology and the Fourth Amendment of the US Constitution, “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” At the end of 2019, technology for review includes facial recognition, pole-mounted surveillance cameras, third-party DNA databases, and geofencing.

Facial Recognition

A bipartisan US Senate bill, the Facial Recognition Technology Warrant Act, seeks to impose some limits on the way federal and other law enforcement agencies use facial recognition technology for ongoing surveillance. To track an individual for longer than 72 hours, whether in real time or through historical records, investigators would need to obtain a search warrant.

Law enforcement in the US is currently unfettered when it comes to surveilling the general public via facial recognition technology. The new bill likens this capability to “other forms of intrusive searches and surveillance, such as searching cellphones, conducting a wiretap, collecting cellphone location information, or installing a location tracking device,” according to a bill summary.

On the other hand, the bill doesn’t cover use of the technology for identification purposes and, moreover, covers an aspect of the technology that isn’t currently in use, according to one US law professor.

Still, federal law enforcement agencies would be required to create testing procedures in conjunction with the National Institute of Standards and Technology. That part of the bill addresses the considerable evidence of facial recognition’s challenges in identifying people based on their gender, age, or ethnicity.  

Pole-Mounted Surveillance Cameras

Law enforcement’s use of surveillance cameras mounted to utility poles outside private residences is nothing new, but recent court decisions set important precedents at both state and federal levels in the United States, building on key Supreme Court decisions in Carpenter v. US, US v. Jones, and Riley v. California. 

In December, Colorado’s state appeals court ruled that four months of “warrantless and continuous pole camera surveillance of defendant’s fenced-in back yard was unreasonable and violated his reasonable expectation of privacy under the Fourth Amendment.” People v. Tafoya, 2019 COA 176, 2019 Colo. App. LEXIS 1799 (Nov. 27, 2019).

That decision followed one from June, when a federal district court in Massachusetts ruled that eight months’ worth of “constant pole camera digital recording of all comings and goings from defendants’ house… violated their reasonable expectation of privacy under Carpenter and chilled freedom of association.” United States v. Moore-Bush, 2019 US Dist. LEXIS 92631 (D. Mass. June 3, 2019). 

TechDirt’s Tim Cushing pointed out that although a 2014 Washington state case, US v. Vargas, resulted in a ruling that just six weeks of video-cam surveillance was unconstitutional, a Sixth Circuit Court of Appeals judge had held in 2016 that ten-week surveillance was no different from an eight-hour period.

The facts of these cases matter: in both Colorado and Massachusetts, cameras controllable in real time were in use, not just passive surveillance, wrote Cushing. In addition, the Colorado camera “also did something the average passerby couldn’t do (I mean, in addition to staring at someone’s house for 13 weeks straight): it could see above the suspect’s six-foot privacy fence to the end of the driveway near the house’s garage and entrance.”

Geofencing

Another test on the use of law enforcement investigative technology has been introduced in Virginia, where a motion to suppress was filed in October in US v. Chatrie. Police wrote a “geofence” warrant “to obtain the cell phone location information of 19 Google users who happened to be in the vicinity of a bank robbery on a Monday afternoon in Richmond,” according to the motion.

Chatrie argues that the warrant is not only overbroad, but also lacks the particularity required by the Fourth Amendment — the enumeration of “places to be searched and things to be seized.”

The motion alleges that the warrant failed to establish probable cause to search Sensorvault, Google’s “large cache of deeply private data”, because “there are no facts to indicate that the bank robber used either [Google phones and services], whether ever or at the time of the robbery…. Instead, based only on Google’s popularity and the prevalence of cell phones generally, law enforcement searched a trove of private location information belonging to 19 unknown Google users who happened to be near a local bank on a Monday evening.”

This was problematic because “[n]ot only can this data reveal private activities in daily life, but it can also show that someone is inside a constitutionally protected space, such as a home, church, or hotel—all of which are in the immediate vicinity of the bank that was robbed in Richmond.”

Third-Party DNA Databases

Law enforcement use of databases like GEDmatch and 23andMe grabbed attention last year, after police matched DNA found at crime scenes with familial DNA profiles to solve the Golden State Killer cold cases. This year, with GEDmatch’s acquisition by crime scene DNA sequencing company Verogen, government access to sensitive health data is becoming a more pressing concern.

For Slate’s Future Tense, Aaron Mak wrote: “It’s not clear whether the database users have standing to challenge a warrant because they aren’t technically the subject of the criminal investigation. But the suspects themselves also might not have standing because it isn’t technically their DNA that law enforcement is trying to access in the database.” 

The New York Times’ Kashmir Hill and Heather Murphy also highlighted the tension between consumer DNA sites’ privacy policies and government requests for information. 

Finally, a Twitter thread around 23andme’s data retention and deletion practices questions whether existing federal and state laboratory and data retention regulations might in fact conflict with the CCPA. 

(Text © Ashton Meyers, NW3C. Used with permission.)

In 2018, when the General Data Protection Regulation (GDPR) became effective in Europe, 23andme had to adjust in order to comply with the new regulations. 23andme not only adjusted their privacy statement and terms of service, they also added enhanced security measures and privacy protections. However, further adjustment will likely be necessary in order to comply with the California Consumer Privacy Act (CCPA) on January 1st, 2020.

23andme already had a jump on these adjustments compared to companies that don’t need to comply with the GDPR. In order to comply with the GDPR, 23andme already provides the right to access and delete your data, as well as offering data portability in an accessible format. Before January 1, it is likely that 23andme will also make minor adjustments to consent forms and take steps to ensure that they are only working with third-party service providers who also comply with the CCPA.

However, 23andme will not only have to comply with the CCPA, but also with federal laws already in place, like the Clinical Laboratories Improvement Act (CLIA) and the Health Insurance Portability and Accountability Act (HIPAA). This means that the right to deletion will only apply to things like deletion of account information and any research involving personal data. To remain compliant with the CLIA, a third-party laboratory will still have to retain personal genetic information, date of birth, and sex.

With the lack of a federal data privacy law there is little the CCPA can do about this and generally, the CCPA shies away from applying to certain health data altogether through exceptions laid out in provisions of the law. Furthermore, if a federal law ever gets passed, it will likely include similar exceptions when applying to regulations already in place like HIPAA and the CLIA.

All this should urge consumers to stamp a “Buyer Beware” on companies like 23andme because their data will not be fully deleted, at least for some time after the request.

DNA is an even bigger issue in Kenya, where the National Integrated Identity Management System (NIIMS) could link biometric data “to everything from identity cards to access to education, health, and social services.” Quartz Africa reports that the move is a response to a corrupt citizenship process and escalating terrorist attacks. Now that Kenya has enacted its own GDPR-style regulation, it remains to be seen how the government will protect its own citizens’ personal data.

Reasonable Suspicion At The US Border

Border searches of digital devices have made headlines in the US several times this year. In August, the 9th Circuit Court of Appeals ruled that officials could conduct suspicionless manual mobile device searches, but that reasonable suspicion was required for a forensic examination. This was backed up by a US District Court in Massachusetts in November. 

(Text © Robert Peters, Esq. Used with permission.)

Historically, border searches have not required probable cause for a search warrant, and many courts have held that even the lower threshold of reasonable suspicion is not required for border searches, where individuals’ personal privacy interests are weighed against government interests in contraband interdiction among others. United States v. Touset, 890 F.3d 1227, 1229 (11th Cir. 2018). 

For example, one court upheld the warrantless forensic preview search of multiple electronic devices, on the grounds that this was a routine border search, and no individualized suspicion was required. United States v. Feiten, 2016 WL 894452, at *4 (E.D. Mich. 2016). 

Reasonable suspicion is a “commonsense, nontechnical” concept involving “factual and practical considerations.” Illinois v. Gates, 462 US 213, 231 (1983). Courts requiring a reasonable suspicion threshold typically engage in a factual analysis of the circumstances surrounding the search. In this line of cases, manually searching a

  • computer (United States v. Ickes, 393 F.3d 501, 503, 505–06 (4th Cir. 2005)) 
  • camcorder (United States v. Linarez–Delgado, 259 F. App’x 506, 508 (3d Cir. 2007)) 
  • laptop (United States v. Arnold, 533 F.3d at 1005, 1008 (9th Cir. 2008)) 
  • or even a floppy disk (United States v. Bunty, 617 F. Supp.2d 359, 363–65 (E.D.Pa. 2008)) 

all constitute routine border searches and are permissible, as are forensic previews (United States v. Stewart, 729 F.3d 517, 521, 525 (6th Cir. 2013)). 

The invasiveness, or lack thereof, of a particular search conducted at the border receives significant judicial attention. In Feiten, the court upheld a border search in part because of the less invasive nature of a forensic preview, and held that the particular forensic tool, OS Triage, implicated fewer privacy concerns than a manual search, given the thumbnail preview feature and the ability to match file names with known child pornography.

That court also found that OS Triage’s ability to maintain file integrity was relevant, particularly since manual searches “can actually alter key evidentiary aspects of each file inspected.” The court therefore characterized law enforcement’s use of OS Triage as “an exercise of electronic restraint” well within the pertinent legal boundaries.

A third 2019 case, United States v. Wanjiku, 919 F.3d 472 (2019), demonstrates one approach in jurisdictions that tend to apply a reasonable suspicion analysis to border searches. In Wanjiku, government actors conducted a forensic preview on the defendant’s phone based on reasonable suspicion, and no warrant was obtained.

The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) filed an amicus brief advocating for the court to impose a probable cause standard for border searches of mobile devices. 

Both the amicus brief and Wanjiku on appeal emphasized the extremely personal content of mobile devices in the modern world, and suggested that warrantless mobile searches functionally permitted dragnet investigations, drawing on language from key decisions such as Riley v. California, 134 S.Ct. 2473 (2014) and Carpenter v. United States, 138 S. Ct. 2206 (2018).

Reasonable suspicion about Wanjiku was developed not only from the initial screening criteria (a US male, between 18 and 60, with a prior arrest, traveling alone from a country with high sex tourism), but also from government databases and publicly available social media data, which created an understandably greater interest in Wanjiku:

  • His prior arrest was for contributing to the delinquency of a minor.
  • This was his third trip in two years to the Philippines.
  • He lacked any business or family ties to the Philippines.
  • His prior flight was booked with the email address “Mr. DONGerous”.
  • His Facebook photo was masked, with the profile having “very young” Facebook friends.

The confluence of these facts resulted in Wanjiku being directed to a secondary inspection site, where his own behavior — bolting from the inspection line, visible nervousness, and “vague and evasive answers” to questions about his spending or travel habits — and additional items found on his person, including hotel receipts, syringes, condoms, and injectable testosterone, added to border agents’ reasonable suspicion.

Importantly, the Seventh Circuit discussed the specific forensic tools deployed and their non-invasive functionality in its analysis. (The court specifically noted that the preview processes were “non-destructive.”) Wanjiku provided the passcode to his phone when the agent claimed authority to search at the border, and the agent manually scrolled through the phone, encountering a few suspect images before turning the phone over to HSI. Cellebrite and XRY were utilized, resulting in the discovery of child pornography images.

Given comparatively little attention by the ACLU, EFF, and Wanjiku, but directly addressed by the Seventh Circuit, is the fact that significant evidence of child sexual abuse material was discovered not only on the mobile device, but also on Wanjiku’s hard drive (discovered by an EnCase forensic preview) and laptop (which underwent a full forensic examination in a lab after contraband was discovered elsewhere).

This fact was relied on by the court to distinguish the case in some ways from Riley and Carpenter; the Seventh Circuit conceded that the US Supreme Court has “recently granted heightened protection to cell phone data,” but held that Riley and Carpenter do not apply to border searches where governmental interests “are at their zenith.”

The Seventh Circuit transparently declined to address whether reasonable suspicion or probable cause applied to border searches, choosing to “avoid entirely the thorny issue” since law enforcement acted in good faith when they searched the devices, reasonably relying on Supreme Court precedent “that required no suspicion for non-destructive border searches of property, and nothing more than reasonable suspicion for highly intrusive border searches of property.” However, the Court did find that the circumstances of the search, including Wanjiku’s unique behavior, clearly constituted reasonable suspicion.

The border search issue is unlikely to go away anytime soon owing to recent developments, such as border agents’ newly granted access to intelligence databases, including social media data. This issue may become more significant if data from device searches — including those in which visa applicants are required to provide their account credentials — start being added to these databases, and will be one to watch.

Attorneys from different parts of the world who would like to participate in this project are welcome to contact Forensic Focus’ content manager at christa@forensicfocus.com.


Industry Roundup: Cloud Forensics


by Christa Miller

Only a few short years ago, the idea of recovering forensic data from the cloud seemed like either troubling overreach, or unnecessarily redundant given the availability of evidence from mobile devices.

As encryption became more prevalent on those devices, however, law enforcement has increasingly come to rely on cloud-based evidence to build cases. The law appears to be catching up, too, with legislation like the United States’ Clarifying Lawful Overseas Use of Data (CLOUD) Act and California’s new GDPR-style Consumer Privacy Act, along with decisions like last September’s Google LLC v. CNIL, in which a European court held that Europe’s “right to be forgotten” only applies within the EU. 

Cloud-based evidence has relevance to civil cases as well as criminal. Business documents are increasingly stored on cloud servers owned by Dropbox, Box.com, Microsoft OneDrive, Google, and many others. Many organizations and individuals also rely on cloud-based messaging platforms like Slack, Microsoft Teams, and Google Hangouts.

However, cloud forensic extractions aren’t as easy as importing data from cloud to tool. In some countries including the United States, this is seen by courts as overreach, with social media and cloud storage accounts — and preferably, specific data types — required to be enumerated and limited to specific date and time ranges in government search warrants.

It can also be challenging to import data in a usable way. Whether it comes directly to the tool from a provider’s API, or via a JSON file export from a provider, data from cloud-based sources typically requires more processing or conversion than data retrieved from an operating system. Each provider has its own data structure to work within its own interface, and even using API developer kits, forensic tools can struggle to normalize different fields for processing, indexing, and search in order to give the data analytic value.

Joseph Pochron, Senior Manager, Forensic & Integrity Services in Privacy & Cyber Response at EY, explains that a native JSON dump from software may not be useful until it’s interpreted. As a result, he says, “Someone needs to review or analyze that data, which really can’t be done until that data’s been converted to something friendlier to the human eye.”
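
As a simple illustration of the kind of conversion Pochron describes, the following Python sketch flattens a JSON message export into a reviewer-friendly CSV. It is a generic example, not any vendor’s workflow, and the field names ('messages', 'timestamp', 'sender', 'text') are placeholders for whatever a particular provider’s export actually contains.

import csv
import json

def json_export_to_csv(json_path, csv_path):
    # Flatten a hypothetical cloud-messaging JSON export into a CSV that a
    # reviewer can open in a spreadsheet; real exports vary by provider.
    with open(json_path, "r", encoding="utf-8") as f:
        export = json.load(f)

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "sender", "text"])
        for message in export.get("messages", []):
            writer.writerow([
                message.get("timestamp", ""),
                message.get("sender", ""),
                message.get("text", ""),
            ])

json_export_to_csv("provider_export.json", "review_set.csv")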

A related issue is the discovery process. Pochron says that rather than producing JSON files for opposing counsel, text or HTML is going to be the preferred native format. “JSON is a poor format for legal discovery,” says Pochron, “and lawyers want data interpretable.” At the same time, he adds, “Big Law” firms will need assistance with converting that data for review.

But cloud providers aren’t the only source of cloud-based data, and the notion of a “native format” is changing as a result. “What’s the native file format of an SMS?” Pochron says by way of example. “The entry? The database? The table entry? The forensic tool needs to normalize this.”

That way, it can capture the metadata along with the content. This is important when it comes to patterns in the metadata, especially when it comes to legal holds, because metadata can show whether data was modified or altered in any way before discovery. “If iMessages are set to delete every 30 days, was this intentional or automated?” Pochron says, adding that historical patterns can help to answer questions whose answers could be very costly for a litigant.

When it comes to APIs, a forensic tool is only as good as what the API makes available. In some cases, this may not be enough data; for example, Slack offers audit log data highly beneficial to forensic investigators and cybersecurity professionals, but this feature is not available to non-enterprise plans. On the flip side, applications like ZenDesk offer limited search capabilities through their APIs, often resulting in a need to export high volumes of data.
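
Where an API offers only limited search, examiners often fall back on bulk, paginated exports and cull the data afterwards. The sketch below shows generic cursor-based pagination against a hypothetical REST endpoint; the URL, bearer token, and response fields are invented for illustration and do not describe Slack’s or ZenDesk’s actual APIs.

import requests

def export_all_records(base_url, token):
    # Pull every record from a hypothetical paginated REST endpoint.
    # Assumes the API returns JSON of the form
    # {"records": [...], "next_cursor": "..."} and accepts a bearer token;
    # adjust to the real provider's documentation before use.
    headers = {"Authorization": f"Bearer {token}"}
    records, cursor = [], None
    while True:
        params = {"limit": 200}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(base_url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload.get("records", []))
        cursor = payload.get("next_cursor")
        if not cursor:
            return records

data = export_all_records("https://api.example.com/v1/audit_logs", "REDACTED_TOKEN")
print(len(data), "records exported")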

Pochron says providers can differ widely in what they offer in terms of granularity, deep dives, and data targeting. For example, a personal social media download may not make it possible to acquire messages from just one person. In contrast, popular email platforms now have built-in e-discovery tools, which offer “very clear, defined platforms for governance and retention,” says Pochron. “You can acquire a wide range of products, as well as run complex searches to cull the dataset at the point of collection.” Furthermore, he adds, at least one has recently added advanced processing capabilities like optical character recognition (OCR).

Pochron anticipates that artificial intelligence (AI), which is already being adopted in this space, will continue to help in this area. He says automation using AI could also help to reduce the time humans would otherwise need to spend scripting, converting, or dealing with other tricky big-data scenarios.

Until then, a number of forensic tool manufacturers make cloud forensic acquisition and analysis possible through dedicated or built-in tools. In this article, we round them up.

AccessData

AccessData’s new AD eDiscovery® 6.2 allows users to quickly collect data in the cloud from Office 365, SharePoint®, OneDrive® for Business and Office 365 Exchange. It is a next-generation e-discovery software product equipped with faster indexing and processing speeds that can be unleashed on data collected from the cloud.

Belkasoft Evidence Center

Belkasoft Evidence Center supports acquisition and analysis of a number of services. Among the supported clouds are iCloud (Calendar, Drive, Photos, etc.), Google Cloud (Drive, Gmail, Keep, and Timeline), WhatsApp, Instagram, and a few dozen webmail services. The product uses various authentication methods, including user credentials, consent screens, and refresh tokens retrieved from digital devices. Email from nearly 30 webmail clients, including Gmail, Yahoo! Mail, Hotmail, and more is available, along with messaging services from WhatsApp, Instagram, and others.
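
For context on what authenticating with a recovered refresh token involves, the following sketch shows a standard OAuth 2.0 refresh-token exchange. It is a generic illustration of the protocol, not a description of Belkasoft’s implementation; Google’s public token endpoint is used as an example, the client values are placeholders, and, as with any cloud acquisition, proper legal authority is required.

import requests

def exchange_refresh_token(refresh_token, client_id, client_secret):
    # Trade a recovered OAuth 2.0 refresh token for a short-lived access
    # token using the standard refresh_token grant type.
    resp = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]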

Cellebrite UFED Cloud Analyzer

UFED Cloud Analyzer was an early entrant to the field of forensic cloud data extraction. Today, the software supports the extraction, preservation, and analysis of data from both public feeds and private accounts among more than 50 social media, messaging, file storage, web page and other sources.

Cellebrite stresses its tool’s ability to help users comply with search and seizure requirements in their countries, whether they’re relying on the subject’s login credentials — either provided by them, or extracted from digital media or personal files — or via some other access method. Public data available for analysis from social media like Facebook, Twitter and Instagram includes shared location information, profiles, images, files, and communications.

Additionally, Chrome and Safari text search history on iOS devices backed up in iCloud, visited pages, voice-search recordings, foreign-language translations from Google web history, and Google Location History are all supported.

The software normalizes data across these sources, making it possible for users to search, filter and sort by Timeline, File Thumbnails, Contacts or Maps. The extraction process is logged, with each piece of extracted data hashed so that it can be compared later on with the original. Finally, cloud extractions are shareable via dynamic reports and exportable into Cellebrite’s Analytics Series or other advanced analytical tools.
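
The verification step itself is conceptually straightforward: compute a digest of each extracted item at collection time, record it in the extraction log, and recompute it later to demonstrate that nothing has changed. A minimal generic sketch, not Cellebrite’s implementation, follows.

import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    # Compute the SHA-256 digest of an extracted item, reading in chunks so
    # large files do not have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value in the extraction log at collection time, then
# recompute and compare it when the evidence is verified later.
print(sha256_of_file("extracted_item.jpg"))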

UFED Cloud Analyzer is backed up by Cellebrite certification training courses including Cellebrite Social Network Investigations (CSNI), Cloud Extraction and Reporting (CLEAR), and Cellebrite Digital Forensics for Legal Pros (CDFL).

Elcomsoft Cloud eXplorer

Elcomsoft focuses on Google with its cloud extraction tool, enabling users to “extract significantly more information than available via Google Takeout,” according to its website. With both Windows and Mac versions, Cloud eXplorer is targeted both to law enforcement and IT security customers who need to access Google Account data to determine whether anything illegal or otherwise illicit has taken place.

Elcomsoft makes it possible for examiners to authenticate an account without a password and bypass two-factor authentication (2FA). This capability is based on Elcomsoft’s binary authentication token workaround, which allowed users to access Apple iCloud backups and synced data without the password.

Over-the-air acquisition is supported, enabling users to acquire user passwords, contacts (including those synced from mobile devices), and Google Drive files. Email is available via the Gmail API. Subjects’ location history — including enhanced mapping data such as routes and places — Hangouts Messages, Google Keep notes, Calendars, and stored Google Photos images are also available. A built-in viewer allows users to search, filter and analyze information.

Advising that some Google Chrome data may be encrypted with an additional password, Elcomsoft states that Cloud eXplorer can decrypt information with the correct password. Available Chrome data includes browsing history, search history and page transitions, synced bookmarks, Web forms, and logins and passwords.

Cloud eXplorer also supports SMS text messages synced from Google Pixel and Pixel XL smartphones running Android 7 (Nougat), as well as from devices running Android 8 (Oreo) and above. Additional mobile data includes call logs and saved Wi-Fi SSIDs and passwords.

Elcomsoft states that Cloud eXplorer requires “no special expertise and no prior training” to obtain cloud-based data.

F-Response Universal with Cloud Collector

F-Response is designed for e-discovery, incident response, and digital forensics professionals in enterprise environments. Its newly launched Universal software (v8) supports collection of remote cloud data stores (Amazon S3, Box.com, Dropbox, Gmail, Google Drive, GSuite, Microsoft Office 365, and OneDrive) to VHD or local files-and-folders format.

F-Response has worked to improve its cloud collection process throughout 2019, including:

  • Handling large Dropbox files and streaming content to disk more rapidly
  • Automatically redirecting the F-Response OAuth Helper callback model to a localhost-bound service to collect data from Google and other providers
  • Modifying the PowerShell script for the Client Credential Flow key generation process for Microsoft Office 365 (see the sketch below)

F-Response Universal collects the data directly to VHD or local share, making for a faster collection with reduced provider throttling.
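
For readers unfamiliar with the Client Credential Flow referenced in the list above, the minimal sketch below shows the underlying OAuth 2.0 grant against the Microsoft identity platform. The tenant and application values are placeholders, and this is a generic illustration rather than F-Response's PowerShell script:

    # Generic sketch of the OAuth 2.0 client credentials grant against the
    # Microsoft identity platform, the flow referenced above for Office 365
    # collection. Tenant and application values are placeholders; this is not
    # F-Response's PowerShell implementation.
    import requests

    def graph_app_token(tenant_id, client_id, client_secret):
        """Return an app-only access token scoped to Microsoft Graph."""
        token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
        resp = requests.post(
            token_url,
            data={
                "grant_type": "client_credentials",
                "client_id": client_id,
                "client_secret": client_secret,
                "scope": "https://graph.microsoft.com/.default",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]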

HancomGMD MD-CLOUD

MD-CLOUD supports acquisition from major cloud services including Google (Drive, Docs, Photos, Calendar, Contacts, Location History, and more), iCloud (Drive, Photos, Reminders, Notes, Calendar, Contacts, etc.), Samsung Cloud (Drive, Photos, IoT), email via IMAP or POP3 (including Gmail), Evernote, Google Takeout, Microsoft OneDrive, Twitter, Instagram, Tumblr, and some eCommerce apps. It also supports data collection from Baidu Cloud in China and Naver Cloud in Korea, as well as IoT data extraction from AI-powered speakers and smart home kits, using both official and unofficial APIs for authentication. A web capturing feature is also supported for collecting data from public web pages without an API.
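
As a generic illustration of the IMAP collection mentioned above, the minimal sketch below preserves mailbox messages as raw .eml files over a read-only IMAP session using Python's standard library; it is not MD-CLOUD's implementation, and the server and credentials are placeholders:

    # Generic sketch of IMAP-based mail preservation using Python's standard
    # library. Server, account and credentials are placeholders; this is not
    # MD-CLOUD's implementation.
    import imaplib
    import os

    def download_inbox(host, user, password, out_dir="mail"):
        """Preserve every INBOX message as a raw .eml file."""
        os.makedirs(out_dir, exist_ok=True)
        with imaplib.IMAP4_SSL(host) as imap:
            imap.login(user, password)
            imap.select("INBOX", readonly=True)  # read-only, so message flags are not altered
            _, data = imap.search(None, "ALL")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                with open(os.path.join(out_dir, f"{num.decode()}.eml"), "wb") as fh:
                    fh.write(msg_data[0][1])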

MD-CLOUD supports authentication via user ID and password, two-factor authentication, Captcha, and stored credentials, along with session tokens acquired with MD-RED, HancomGMD's forensic data analysis software, with which it integrates fully.

MD-CLOUD has a category-based viewer, with each category separated by account holder credentials. An auto-tagging algorithm allows examiners to search and filter items with ease, and users can create and reuse multiple workviews based on different filter and sorting configurations. A Timeline View and Summary Chart let users follow activity flows over time, for example a user sending an email at a particular time and then uploading data to multiple cloud servers.

MD-CLOUD creates reports in PDF and Excel formats, as well as exporting cloud data files.

Magnet AXIOM

Cloud acquisition and analysis capabilities are natively integrated into Magnet AXIOM. In addition to being able to acquire evidence from the most forensically relevant cloud services with user credentials or tokens and keychains from mobile devices, Magnet AXIOM can ingest and analyze warrant returns, publicly available information, and user-generated archives.

AXIOM can be used to:

  • Ingest warrant returns from Apple, Facebook, Instagram, Snapchat, and Google
  • Use publicly available information (posts, followers/following, etc.), from Twitter and Instagram (using a username or hashtag)
  • Ingest and analyze user-requested archive files (e.g. Google Takeout or Facebook "Download My Data"), as shown in the sketch below
  • Access cloud accounts via user credentials from 50+ of the most forensically relevant cloud services, including Apple, Google, Facebook, Microsoft, Slack, Dropbox, and Twitter, with metadata and audit logs
  • Access accounts with third-party tokens and keychains acquired from mobile devices
  • Recover and decrypt iCloud backup data for iOS 11 and iOS 12 backups

Because of the wealth of data available from so many sources, AXIOM additionally makes it possible for users to selectively acquire cloud-based artifacts.
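
To give a sense of what ingesting a user-generated archive involves at its simplest, the minimal sketch below walks a Google Takeout zip and yields its JSON artifacts. It is a generic illustration rather than AXIOM's parser, and the internal layout of Takeout archives varies by service and export date:

    # Generic sketch of walking a user-generated Google Takeout archive and
    # yielding its JSON artifacts. This illustrates archive ingestion only,
    # not AXIOM's parsers; the folder layout of Takeout exports varies.
    import json
    import zipfile

    def takeout_json_records(archive_path):
        """Yield (member_name, parsed_json) for every JSON file in the archive."""
        with zipfile.ZipFile(archive_path) as zf:
            for name in zf.namelist():
                if name.lower().endswith(".json"):
                    try:
                        yield name, json.loads(zf.read(name))
                    except (json.JSONDecodeError, UnicodeDecodeError):
                        continue  # skip malformed or non-UTF-8 entries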

MSAB XRY Cloud

A separate part of the XRY software, XRY Cloud can be used on its own, or as part of the wider MSAB Ecosystem suite of tools.

XRY Cloud relies on mobile device tokens, with or without the device in custody. With the device, the extraction is similar to any other device acquisition; without the device, the user can rely on known credentials to try to acquire data from apps that XRY supports.

From there, XRY Cloud’s “Automatic Mode” allows users to click from recovered app tokens and artifacts straight to the cloud to collect the data (assuming proper legal authority). This capability requires internet connectivity. XRY Cloud stores the data from disparate sources in a single XRY Case File.

Social media and app-based data from services such as Facebook, Google, iCloud, Twitter, Snapchat, WhatsApp, Instagram, and more are all supported, along with cloud storage connected to Facebook, Google, iCloud, Twitter, and Snapchat accounts.

Finally, XRY can decode Android file metadata from apps like Dropbox and Google Drive, regardless of whether file content is available on a device. From there, users can search these cloud-based data sources either via XRY Cloud or by serving the providers with legal orders to search.

Onna

Onna is marketed more as an e-discovery tool than as a digital forensics tool, but when it comes to cloud acquisitions, there may be some overlap. In particular, Onna’s e-discovery focus means it integrates with the most popular enterprise cloud apps, including GSuite, Office 365, and Slack Enterprise. 

Relying on what it calls “pre-trained categories,” Onna immediately identifies certain document types (e.g. contracts) from all unstructured data found within cloud apps. This capability can help with early case assessment.

Natural language processing and optical character recognition (OCR) make processed data fully searchable. Targeted searches are possible via text modifiers and pre-trained categories.
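
As a generic illustration of how OCR makes image content keyword-searchable (a minimal sketch using the pytesseract and Pillow libraries, not Onna's processing pipeline), an examiner might run something like the following:

    # Generic sketch of making image content keyword-searchable via OCR using
    # pytesseract and Pillow. Requires the Tesseract binary to be installed.
    # This illustrates the concept only; it is not Onna's pipeline.
    from PIL import Image
    import pytesseract

    def image_matches(path, keyword):
        """Return True if the OCR'd text of the image contains the keyword."""
        text = pytesseract.image_to_string(Image.open(path))
        return keyword.lower() in text.lower()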

Onna is designed to facilitate real-time collaboration across internal and external legal teams, service providers, and others involved in the e-discovery process. Filtered data can also be exported as CSV, DAT, or custom files for review in Relativity or other platforms.

OpenText

EnCase Forensic’s connectors enable the acquisition of cloud-based evidence, including email and other content, from Microsoft Office 365 and Exchange along with Box, Dropbox, Amazon S3, and Google Drive. On-premise collection is also supported, with EnCase collecting data in the background for direct preservation via an LX01/L01 file.

Courts can sanction organizations if reasonable steps are not taken to preserve electronic data. EnCase eDiscovery enables investigators to precisely collect and preserve potentially relevant data, either on the premises or in the cloud, with a defensible process that ensures strict chain of custody.

Oxygen Forensic Cloud Extractor

Built into Oxygen Forensic Detective at no additional charge, Oxygen Forensic Cloud Extractor supports 77 cloud, social media, and email services, including Microsoft, Google, Samsung Cloud, Huawei Cloud, iCloud, Mi Cloud, Facebook, Twitter, Instagram, Amazon Alexa, WhatsApp, WickrMe, Viber, Line, Telegram, IMAP email servers, and many more.

The process starts with Oxygen Forensic Detective, which automatically finds and decodes both account credentials and tokens from Apple iOS and Android devices, even if this data is securely encrypted. In addition, Oxygen Forensic KeyScout, available at no additional charge within Oxygen Forensic Detective, can collect and decrypt passwords and tokens on Windows, macOS, and Linux machines.

An exclusive cloud-based WhatsApp backup decryption method is available, along with WhatsApp extraction via QR code. iCloud decryption is also supported with Oxygen Forensic Detective.

After validating the credentials and logging in to extract the data, Oxygen Forensic Cloud Extractor can collect according to a specific time range. It provides detailed extraction logs upon completing acquisition.

Following collection, Oxygen Forensic Detective merges the cloud data with other acquired mobile and computer evidence for analysis. The data can be viewed, sorted, and filtered through a number of different views, including maps, social graphs, timelines, facial recognition and image categorization, and others. It can also be exported in PDF, XLS, XML, and other formats, or saved to backup for sharing with colleagues.

Paraben E3 Forensic Platform

Many of the E3 Forensic Platform licenses include wizards that allow users to collect data from the cloud. Paraben's Cloud Import Wizard helps investigators obtain cloud-based data using account credentials either entered manually or imported from data acquired from a mobile device.

E3:DS, one of the E3 Forensic Platform licenses, supports cloud data from the iOS and Android app versions of Facebook, Gmail, Amazon Alexa, Google Locations, and Google Drive. E3 harvests authentication tokens and credentials for these apps in order to collect the data.

Paraben also supports cloud data from both Microsoft Azure and Amazon Web Services.


Collecting cloud-based data walks a delicate line between obtaining readily available evidence needed to build cases, and not running afoul of legal privacy protections or company policy. As always, consult an attorney in your jurisdiction or organization before deploying cloud forensic tools.
