How to Speed Up Incident Response in 2019: Analyze Faster (Part 1)
This post (and the next) will focus on the best strategies to reduce the time it takes to analyze data during incident response. If you’re wondering why we focus on speed in incident response (IR) so much, read “Incident Response KPIs: TIME Is Critical. Here Are Five Reasons Why.”
Let’s begin by defining analysis—its purpose and its challenges.
What Is Incident Response Analysis?
The analysis phase is about answering questions:
- Is the host compromised?
- Was any sensitive data accessed?
- Where did the attacks originate?
By reviewing the collected data to answer critical questions like these, incident responders can determine the key facts of the case and map out a path to remediation.
Each incident has unique characteristics and evidence, and you need to exercise creativity and critical thinking to tackle the particular challenges each set of data presents.
Because of this degree of difficulty, analysis in incident response cannot be fully automated (at least currently). For example, an analysis tool will not have enough context to reason about a user connecting remotely to the desktop of another system. These actions depend on the user’s job functions, which are not typically available to DFIR tools.
Only people can solve these problems because either the tools don’t have the data or the attack sequence has not been seen before.
Instead, the typical role of software during analysis is to aid the examiners and make them more efficient.
Now that we’ve covered what analysis is, the unique challenges it poses, and why it’s difficult to automate, let’s review some general strategies IR professionals use during this phase.
Fundamental Incident Response Analysis Techniques
Find the Usual Suspects
The first analytical strategy responders use is to look for “known bads.”
Past incidents can teach us a great deal, as they are a catalog of identified threats.
When assessing a current case, you should search for artifacts (such as MD5 hash values and IP addresses) that were previously seen and shared. These can come from threat intelligence feeds like Recorded Future. This is called an Indicator of Compromise (IOC) scan.
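At its simplest, an IOC scan is a set-membership check: hash each collected file and compare it, along with observed network addresses, against the known-bad sets. Here is a minimal sketch in Python; the input shapes and indicator sets are hypothetical, for illustration only:

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 of raw file content, matching the hash values shared in feeds."""
    return hashlib.md5(data).hexdigest()

def ioc_scan(files, connections, bad_md5s, bad_ips):
    """Compare collected artifacts against known-bad indicator sets.

    files:       {path: raw bytes} collected from the host (hypothetical shape)
    connections: remote IP strings observed on the host
    """
    hits = []
    for path, data in files.items():
        if md5_of(data) in bad_md5s:
            hits.append(("md5", path))
    for ip in connections:
        if ip in bad_ips:
            hits.append(("ip", ip))
    return hits
```

Real tools do far more (normalizing feeds, handling multiple hash types, scaling to millions of indicators), but the core comparison is this simple.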
You can also scan for signatures that are based on past incidents. Yara rules are commonly shared as a way to scan for malware. They look for certain byte sequences in a file and can be more robust than MD5 signatures because some malware modifies itself so that its MD5 value changes.
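The difference between hash matching and byte-signature matching fits in a few lines. This is a toy illustration of the idea behind Yara-style rules, not real Yara syntax, and the marker bytes are made up:

```python
import hashlib

# Hypothetical marker bytes, in the spirit of a Yara rule: match a byte
# sequence anywhere in the file instead of the file's exact hash.
SIGNATURE = b"\x4d\x5a\x90\x00EVIL"

def signature_match(data: bytes, signature: bytes = SIGNATURE) -> bool:
    return signature in data

# Appending padding changes the file's MD5 but not the signature match:
original = SIGNATURE + b" payload"
padded = original + b"\x00" * 16
```

Here `padded` has a different MD5 than `original`, so a hash-based IOC would miss it, but the signature still matches.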
Interestingly, signatures are how antivirus has historically worked.
Software tools are critical for this technique because they can quickly compare collected data with past indicators. For example, Cyber Triage compares artifacts with blacklists and Yara rules and sends files to ReversingLabs for malware analysis.
Look for the Abnormal
Our second strategy is just as straightforward as our first. Want to find a threat? Be on the lookout for the abnormal or suspicious.
This technique is needed because:
- Intruders change their tooling to evade signatures; for example, they give files random names and append bytes to files to change their MD5 hashes
- The attack methods may never have been seen before
- Some evidence, such as user activity, has no byte signature to scan for
Some examples of abnormal or suspicious signs are:
- A process that has a similar name to a standard Windows process
- An expected process whose memory contents have changed because malware took it over
- Remote logins at odd times or from unexpected locations
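The first sign above, a lookalike process name such as “svch0st.exe,” can be flagged with a simple string-similarity check. A sketch, assuming a hypothetical allow-list of standard process names:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; a real baseline would be far larger.
STANDARD_PROCESSES = {"svchost.exe", "lsass.exe", "csrss.exe", "explorer.exe"}

def lookalike(name, known=STANDARD_PROCESSES, threshold=0.85):
    """Return the standard process name that `name` imitates, or None.

    An exact match is an expected process; a near match (e.g. a single
    substituted character) is worth flagging for human review.
    """
    name = name.lower()
    if name in known:
        return None
    for std in known:
        if SequenceMatcher(None, name, std).ratio() >= threshold:
            return std
    return None
```

Here `lookalike("svch0st.exe")` points back at `svchost.exe`, while an exact or unrelated name returns None. The 0.85 threshold is an assumption you would tune against real data.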
To perform this strategy, the collected data needs to be analyzed (by a human or software) to identify suspicious artifacts. Additional analysis is then needed to better decide if the artifact is a benign or malicious outlier.
For example, if a host has a unique startup item, it could be benign if the user has a unique need for that software, or it could be malicious and point to malware.
What Makes This Hard?
There are two challenges with this approach:
- You can’t know what is abnormal if you don’t know what is “normal.” This is especially difficult for consultants and law enforcement officers because they will not know what processes are standard at a company or what behaviors are normal for a given user. Read our blog on understanding your client’s normal for more details.
- Deciding if it is malicious or benign. This often requires an in-depth analysis of the artifact or using other artifacts as context. Examples of context include using a timeline to see what happened before and after an artifact or to see how a suspicious process was started.
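The timeline context mentioned above can be sketched as a simple window query: given a pivot time (the suspicious artifact's timestamp), pull every collected event within a few minutes of it. The (timestamp, description) tuple format here is a hypothetical stand-in for parsed artifact data:

```python
from datetime import datetime, timedelta

def context_window(artifacts, pivot, window=timedelta(minutes=5)):
    """artifacts: (timestamp, description) tuples collected from the host.
    Returns the events within +/- window of the pivot time, in order,
    showing what happened just before and after a suspicious artifact."""
    return sorted((ts, desc) for ts, desc in artifacts if abs(ts - pivot) <= window)
```

Seeing, say, a PowerShell launch a minute before a suspicious outbound connection is exactly the kind of context that turns an outlier into a conclusion.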
Software tools make the responder more efficient with this technique by automatically flagging the suspicious artifacts and giving them context to make a decision.
Cyber Triage, for instance, analyzes all of the collected data, marks suspicious items, and lets the user get context about a selected item.
Reconstruct the Event
The previous two techniques focus on the system's current state; the next focuses on its past and how it got there. This approach is needed to answer investigative questions such as:
- How did a file get there?
- Who previously logged in?
- Was this file previously executed?
To perform this technique, the responder or software needs to know where evidence of the event can be found; that data is then reviewed and a hypothesis formulated. The confidence in that hypothesis depends on how much supporting or refuting information exists.
What Makes This Hard?
A few things:
- Cases often involve hostile actors who have a vested interest in not being discovered, giving you a disappearing evidence problem. Tracks will be hidden and evidence could be deleted.
- Evidence of these events can be scattered across many places, nearly all of which exist for reasons other than investigation. For example, when a program is executed, various registry locations record it, depending on how it was run. You need to know about all of these places to draw a conclusion.
During this phase, it’s important to seek alternative explanations of the evidence and not stop at the first hypothesis.
For example, imagine you found a malicious file in “\Users\Jdoe\Downloads.” Before you conclude that the Jdoe user actually downloaded it, you should (among other things) compare Jdoe’s login times to the timestamps on the file. It could turn out that:
- Another user was logged in at the same time and had access to that folder
- No user was logged in and a service was running that was exploited
- A scheduled task ran at that time, reached out to a command and control server, and downloaded it.
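The login-time cross-check above boils down to asking which sessions cover the file's timestamp. A sketch, assuming session records have already been parsed out of the event log (the session tuple shape is hypothetical):

```python
from datetime import datetime

def users_logged_in_at(sessions, ts):
    """sessions: (user, login_time, logout_time) tuples reconstructed
    from the event log (hypothetical pre-parsed input).
    Returns the users whose session covers timestamp ts."""
    return {user for user, start, end in sessions if start <= ts <= end}

# If the set is empty at the file's creation time, no interactive user
# was logged in, pointing toward a service or scheduled task instead.
sessions = [
    ("jdoe",   datetime(2019, 3, 1, 9, 0),  datetime(2019, 3, 1, 17, 0)),
    ("asmith", datetime(2019, 3, 1, 12, 0), datetime(2019, 3, 1, 13, 0)),
]
```

A result containing more than one user is exactly the first alternative explanation above: someone besides Jdoe had access at that time.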
Software tools are useful during this approach, because they can consolidate the data from various locations into a single view and show you context.
Cyber Triage collects data about what programs were run from six-plus locations. The data is shown to the user in a single view so that the user doesn’t have to remember all of the locations.
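Consolidating execution evidence like this is essentially a merge-and-sort across artifact sources. A sketch with hypothetical pre-parsed inputs; the source names mirror common Windows execution artifacts:

```python
from datetime import datetime

def consolidate(sources):
    """sources: {artifact_source: [(program, run_time), ...]}, e.g. parsed
    from Prefetch, UserAssist, and other execution artifacts (hypothetical
    pre-parsed input). Returns one chronological view, with each record
    tagged by the artifact it came from."""
    merged = [
        (ts, program, source)
        for source, records in sources.items()
        for program, ts in records
    ]
    return sorted(merged)
```

The responder then reads one timeline instead of mentally merging a half-dozen registry and file-system locations.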
That’s it for our first post on speeding up incident response analysis! This post covered the purpose and common approaches to this stage of the DFIR process.
Next week we’ll take an in-depth look at how to make the analysis phase as fast as possible.
For more on speeding up incident response, check out the rest of our series.