If you're part of a flyaway response team, you've probably arrived at a client's offices only to be pointed at a set of boxes and told to start analyzing. The client may tell you that they did some digging of their own and identified the computers you should start with; or they may have no clue where to begin. Either way, the more endpoints the client has, the more daunting the effort becomes, and the problem compounds when they don't know where to start.

You might be inclined to start with the high-value systems, but you or the client may not know which systems those are, and the attacker may have taken extra steps to remove evidence from them. Any evidence they do contain may require a deep forensic examination, one you won't have time to perform before you've even scoped the incident.

Lower-value systems, meanwhile, are often too plentiful to evaluate in a timely manner. Like emergency medical personnel using color-coded tags to triage patients at the scene of a large-scale disaster, you need a way to determine, first, which systems contain evidence and, second, which to prioritize for deeper examination once you've fully scoped the incident.

Automation helps to find the indicators

To prioritize at scale, you first need to scan many endpoints for evidence. Scans for "easy" indicators of compromise, such as privileged user account activity, known malware, and file names and registry keys drawn from threat intelligence, can be automated with signatures or heuristics.
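As a rough illustration of the file-name side of such a sweep (not how any particular tool implements it), the sketch below walks a directory tree and flags files whose names match a hypothetical threat-intelligence list. The indicator names and patterns are made-up examples.

```python
# Minimal sketch of an automated file-name IOC sweep.
# IOC_FILENAMES and IOC_PATTERNS are hypothetical threat-intel indicators.
import fnmatch
import os

IOC_FILENAMES = {"wce.exe", "evil.dll"}   # exact names (hypothetical)
IOC_PATTERNS = ["*.tmp.exe"]              # wildcard patterns (hypothetical)

def sweep(root):
    """Walk a directory tree and return paths of files matching known indicators."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            lowered = name.lower()
            if lowered in IOC_FILENAMES or any(
                fnmatch.fnmatch(lowered, pattern) for pattern in IOC_PATTERNS
            ):
                hits.append(os.path.join(dirpath, name))
    return hits
```

A real scan would also check registry keys, hashes, and account activity, but the shape is the same: a per-host pass against a shared indicator set.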

Automated scans often complete within a few minutes and give you a sense of where to start. Tools that collect from each host in isolation, however, make it hard to integrate results into a big picture. So once you deploy the needed infrastructure, you also need an easy way to see all of the automated results together.

Priorities are always changing

Correlating results becomes especially important when you consider that network environments are dynamic. In other words, it’s likely that new indicators will crop up as responders proceed with the data collection process. These new indicators mean that you must go back and re-assess previously collected data. This process can shift investigative priority from one or more systems to others.
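The re-assessment step above can be sketched very simply: once collection results are stored centrally, a newly discovered indicator is just a lookup across previously collected artifacts. The host names and artifact names below are hypothetical, and this is a minimal sketch, not any specific tool's design.

```python
# Sketch: re-assess stored collection results when a new indicator arrives.
# Hosts and artifact names are hypothetical examples.
def rescan(collected, new_indicator):
    """Return hosts whose previously collected artifacts match a new indicator."""
    return sorted(
        host for host, artifacts in collected.items()
        if new_indicator in artifacts
    )

collected = {
    "ws-01": {"evil.dll", "report.docx"},
    "ws-02": {"report.docx"},
    "db-01": {"evil.dll"},
}

rescan(collected, "evil.dll")  # ws-01 and db-01 move up the priority list
```

The point is that the data only needs to be collected once; each new indicator shifts priorities by re-querying what you already have.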

Getting a better picture of the incident isn’t as simple as finding new indicators, though. As we previously discussed, it is often challenging for consultants to know what is normal on a customer’s endpoint. Therefore, you must be able to see where else a file or process was seen and whitelist items that are OK in that environment.
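To make the "where else was this seen, and is it whitelisted" idea concrete, here is a minimal sketch that maps each non-whitelisted item to the hosts it appeared on. All host names, item identifiers, and the whitelist are invented for illustration.

```python
# Sketch: correlate per-host findings and suppress environment-whitelisted items.
# Host names and item identifiers are hypothetical.
from collections import defaultdict

def correlate(host_findings, whitelist):
    """Map each non-whitelisted item to the set of hosts it was seen on."""
    seen_on = defaultdict(set)
    for host, items in host_findings.items():
        for item in items:
            if item not in whitelist:
                seen_on[item].add(host)
    return seen_on

findings = {
    "ws-01": {"a1b2", "ffee"},
    "ws-02": {"a1b2"},
    "srv-03": {"c3d4"},
}
whitelist = {"ffee"}  # already vetted as normal in this environment

spread = correlate(findings, whitelist)
# "a1b2" shows up on two hosts, guiding which systems to prioritize next
```

Whitelisting an item once suppresses it everywhere, so each reviewed item cuts down the noise on every other host.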

Finally, it’s important to be able to compare both indicators and whitelisted items across systems. Not only do you need the ability to store collection results in a database that updates as indicators are found; you also need a dashboard that reflects the current priorities. Without this infrastructure, you’re left to manually keep track of which hosts need to be examined — a daunting proposition when you’re facing potentially hundreds of systems.

The ability to prioritize items accurately means you waste less time reviewing new items on each host. At this point, it doesn’t matter whether your initial set of possibly compromised systems was complete. Collecting data from one set of systems, then correlating that data set with data found on other systems, can help you to:

  • Identify whether other endpoints are compromised.
  • Prioritize and track systems.
  • Start to scope the incident.
  • Document findings for later reporting.

The newest version of Cyber Triage, v1.3 released November 18, offers the kind of incident-level correlation and situational awareness that responders need to stay apprised of new threat developments in ongoing investigations. To see how it works, contact us for a demo.