In a number of recently publicized breaches, and probably many other attacks, information that could have enabled the security team to catch and contain the attack was lost in the sheer volume of alerts. Your security team is getting alerts from internal sensors, threat intelligence from multiple sources, and potential indicators of attack or compromise from your security countermeasures. Relying on human analysts as the filter to decode, deduce, and decide what is relevant takes valuable time and can result in long delays between attack, detection, and containment.
I believe that the solution to this volume of data is to build automation and active awareness of the environment into the SIEMs themselves. Security analysts need timely and relevant information to be most effective. Wading through wave after wave of data from a variety of sources, looking for highly credible threat artifacts and correlating them with the organization's inventory of digital assets, is not the best use of these skilled resources. Taking appropriate action may require their knowledge and judgment, but filtering and correlating the flow of data is a rules-based task that can be delegated to adaptive machine algorithms.
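As a rough sketch of that filtering-and-correlation step, consider matching incoming alerts against an asset inventory and a minimum credibility bar, so only alerts that are both believable and tied to a known asset reach an analyst. The field names, threshold, and inventory format here are illustrative assumptions, not any particular SIEM's schema:

```python
# Illustrative sketch: keep only credible alerts that reference a known
# asset, and attach the asset record so the analyst sees business context.
# Field names ("asset", "credibility") and the threshold are assumptions.

MIN_CREDIBILITY = 0.7

ASSET_INVENTORY = {
    "web-01": {"owner": "ecommerce", "criticality": "high"},
    "db-03": {"owner": "finance", "criticality": "high"},
}

def filter_and_correlate(alerts, inventory, min_credibility=MIN_CREDIBILITY):
    """Drop alerts that are low-credibility or reference unknown assets;
    enrich the rest with the matching inventory record."""
    relevant = []
    for alert in alerts:
        asset = inventory.get(alert.get("asset"))
        if asset is None or alert.get("credibility", 0.0) < min_credibility:
            continue
        relevant.append({**alert, "asset_info": asset})
    return relevant

alerts = [
    {"id": 1, "asset": "web-01", "credibility": 0.9},
    {"id": 2, "asset": "laptop-99", "credibility": 0.95},  # unknown asset
    {"id": 3, "asset": "db-03", "credibility": 0.2},       # low credibility
]
print(filter_and_correlate(alerts, ASSET_INVENTORY))
```

Of the three sample alerts, only the first survives: one fails the inventory lookup and one fails the credibility bar. That is exactly the triage work the paragraph above argues should not consume analyst time.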
Threat intelligence comes from a wide range of sources, of varying credibility. I am not proposing that we automate and delegate all of the threat remediation actions. Nor do we want a system that can be gamed by someone with malicious intent, for example by injecting false positives into the intelligence stream to prevent communication between legitimate partners. Incoming threat data includes information on the source and how the data was gathered, whether that is from a public report, sandbox isolation and execution of the code, or activity captured on an infected endpoint. The headers of the threat notices also contain details to verify that the contents of the message have not been tampered with and to enable you to calculate the trust level of the source.
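To make the header checks concrete, here is a minimal sketch of verifying a notice's integrity and looking up a baseline trust level for its source. The shared-key HMAC scheme and the trust table are assumptions for illustration, not the protocol of any real sharing community:

```python
import hashlib
import hmac

# Assumed baseline trust per collection method/source; unknown sources
# start low. These values are illustrative, not a published standard.
SOURCE_TRUST = {
    "vendor-feed": 0.8,
    "public-report": 0.4,
    "sandbox-detonation": 0.9,
}

def verify_notice(body: bytes, signature: str, shared_key: bytes) -> bool:
    """Recompute the HMAC over the notice body and compare it in
    constant time against the signature carried in the header."""
    expected = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def source_trust(source: str) -> float:
    """Baseline trust for the reporting source."""
    return SOURCE_TRUST.get(source, 0.1)

key = b"example-shared-key"
body = b'{"indicator": "44d88612fea8a8f36de82e1278abb02f"}'
sig = hmac.new(key, body, hashlib.sha256).hexdigest()

assert verify_notice(body, sig, key)            # intact notice passes
assert not verify_notice(body + b"x", sig, key)  # tampered body fails
```

A tampered body fails verification before it can influence any score, which is one defense against the false-positive injection described above.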
The trust level of the source and the method of data collection provide the foundation for a threat credibility score. As additional notices come in, they are evaluated to substantiate the initial threat, increasing or decreasing the credibility score appropriately. As vendors, government organizations, or other companies identify suspicious or confirmed threats in their environment, that info can be quickly shared via community-based information sharing and analysis centers. If you receive multiple indicators of a similar threat, you can compound the credibility score. Then, depending on the nature of the threat and the credibility score, you can decide whether this is an issue that can be remediated automatically or one that requires further investigation and the judgment of a security analyst.
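One way to sketch that compounding score: each corroborating notice nudges the score toward 1.0, weighted by the trust of its source, and the final score routes the threat to automatic remediation, analyst review, or plain monitoring. The update rule and the thresholds are illustrative assumptions, not a standard scoring model:

```python
# Illustrative thresholds for routing a scored threat.
AUTO_REMEDIATE_THRESHOLD = 0.9
ANALYST_REVIEW_THRESHOLD = 0.5

def update_credibility(score: float, source_trust: float,
                       corroborates: bool) -> float:
    """Move the score toward 1.0 for corroborating notices and toward
    0.0 for contradicting ones, in proportion to source trust."""
    if corroborates:
        return score + (1.0 - score) * source_trust
    return score * (1.0 - source_trust)

def route(score: float) -> str:
    """Decide whether the threat is handled automatically or by a human."""
    if score >= AUTO_REMEDIATE_THRESHOLD:
        return "auto-remediate"
    if score >= ANALYST_REVIEW_THRESHOLD:
        return "analyst-review"
    return "monitor"

# Three independent reports of a similar threat compound the score.
score = 0.0
for trust in (0.6, 0.5, 0.7):
    score = update_credibility(score, trust, corroborates=True)
print(round(score, 2), route(score))
```

No single medium-trust source clears the auto-remediation bar on its own, but three corroborating reports together do, which matches the compounding behavior described above.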
Another advantage of automating the collection and parsing of this info is the ability to look back in time. Once you have identified the key characteristics of a particular threat, whether those are code samples, hash values, registry changes, or other effects, the system can automatically scan your network for occurrences of the threat over the previous weeks or months, and isolate or eradicate them.
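A retrospective sweep of this kind can be sketched as a query over historical endpoint telemetry for known-bad indicators, here file hashes, within a lookback window. The flat list of records, the field names, and the sample hash are all stand-ins for whatever event store and indicator types a real SIEM would use:

```python
import datetime

# Assumed set of indicator hashes identified for the threat (illustrative).
KNOWN_BAD_HASHES = {
    "44d88612fea8a8f36de82e1278abb02f",
}

def retro_scan(records, bad_hashes, lookback_days=90, now=None):
    """Return historical records whose file hash matches a known
    indicator, limited to the lookback window."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=lookback_days)
    return [
        r for r in records
        if r["file_hash"] in bad_hashes and r["timestamp"] >= cutoff
    ]

now = datetime.datetime(2024, 6, 1, tzinfo=datetime.timezone.utc)
records = [
    {"host": "web-01", "file_hash": "44d88612fea8a8f36de82e1278abb02f",
     "timestamp": now - datetime.timedelta(days=30)},   # inside window
    {"host": "db-03", "file_hash": "44d88612fea8a8f36de82e1278abb02f",
     "timestamp": now - datetime.timedelta(days=120)},  # outside window
    {"host": "web-02", "file_hash": "deadbeef",
     "timestamp": now - datetime.timedelta(days=5)},    # benign hash
]
hits = retro_scan(records, KNOWN_BAD_HASHES, lookback_days=90, now=now)
print([h["host"] for h in hits])  # hosts to isolate or clean
```

Widening the lookback window would also surface the 120-day-old hit, which is the point of the look-back capability: an infection that predates the indicator can still be found and eradicated.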
Every security team I have spoken with is trying to do more with less, and the increasing volume of alerts and the expanding attack surface are certainly contributing to the "more" part. As we are inundated with security event info, we need to quickly filter that flood to focus on what is most credible and most important. Reducing time to detection and time to containment or remediation are the goals, and SIEM automation is at least part of the answer.