The AI (R)evolution: Why Humans Will Always Have a Place in the SOC

By Celeste Fralick on Nov 20, 2019

In cybersecurity, the combination of men, women and machines can do what none can do alone — form a complementary team capable of upholding order and fighting the forces of evil.

The 20th century was uniquely fascinated with the idea of artificial intelligence (AI). From friendly and helpful humanoid machines — think Rosie the Robot maid or C-3PO — to monolithic and menacing machines like HAL 9000 and the infrastructure of the Matrix, AI was a standard fixture in science fiction. Today, as we’ve entered the AI era in earnest, it’s become clear that our visions of AI were far more fantasy than prophecy. But what we did get right was AI’s potential to revolutionize the world around us — in the service of both good actors and bad.

Artificial intelligence has revolutionized just about every industry in which it’s been adopted, including healthcare, the stock markets, and, increasingly, cybersecurity, where it’s being used to both supplement human labor and strengthen defenses. Because of recent developments in machine learning, the tedious work that was once done by humans — sifting through seemingly endless amounts of data looking for threat indicators and anomalies — can now be automated. Modern AI’s ability to “understand” threats, risks, and relationships gives it the ability to filter out a substantial amount of the noise burdening cybersecurity departments and surface only the indicators most likely to be legitimate.
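The noise-filtering idea can be pictured with a toy example. The sketch below (illustrative only, not any vendor's implementation; the `failed_logins` feature and sample data are hypothetical) ranks alerts by how far they deviate from the baseline, so the most anomalous indicators surface first:

```python
# Illustrative sketch: rank security alerts by a simple anomaly score so
# analysts see the most suspicious events first. Real systems use far
# richer features and learned models; this shows only the core idea.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    source_ip: str
    failed_logins: int  # hypothetical feature for this example

def rank_alerts(alerts):
    """Score each alert by how many standard deviations its feature
    sits above the baseline, highest (most anomalous) first."""
    values = [a.failed_logins for a in alerts]
    mu, sigma = mean(values), stdev(values) or 1.0
    scored = [((a.failed_logins - mu) / sigma, a) for a in alerts]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

alerts = [Alert("10.0.0.%d" % i, n) for i, n in enumerate([2, 3, 1, 2, 48])]
top_score, top_alert = rank_alerts(alerts)[0]
print(top_alert.source_ip, round(top_score, 1))
```

One host with 48 failed logins against a baseline of one to three stands out immediately, while the routine events sink to the bottom of the queue.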

The benefits of this are twofold: Threats no longer slip through the cracks because of fatigue or boredom, and cybersecurity professionals are freed to do more mission-critical tasks, such as remediation. AI can also be used to increase visibility across the network. It can scan for phishing by simulating clicks on email links and analyzing word choice and grammar. It can monitor network communications for attempted installation of malware, command and control communications, and the presence of suspicious packets. And it’s helped transform virus detection from a solely signature-based system — which was complicated by issues with reaction time, efficiency, and storage requirements — to the era of behavioral analysis, which can detect signatureless malware, zero-day exploits, and previously unidentified threats.
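To make the word-choice idea concrete, here is a deliberately naive sketch (illustrative only; the word list and scoring are assumptions, not a product feature) of the kind of shallow lexical signal a trained text classifier would pick up automatically:

```python
# Illustrative sketch: score an email for phishing likelihood from simple
# word-choice cues (urgency and credential language). A real ML classifier
# learns thousands of such features rather than a hand-picked list.
import re

URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}

def phishing_score(email_text: str) -> float:
    """Return the fraction of known urgency/credential words present."""
    words = set(re.findall(r"[a-z]+", email_text.lower()))
    return len(words & URGENCY_WORDS) / len(URGENCY_WORDS)

msg = "URGENT: your account is suspended. Verify your password immediately!"
print(round(phishing_score(msg), 2))
```

A benign note like "lunch at noon?" scores zero, while the pressure-laden message above lights up every cue, which is exactly the separation a real classifier exploits at scale.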

But while the possibilities with AI seem endless, the idea that it could eliminate the role of humans in cybersecurity departments is about as far-fetched as the idea of a phalanx of Baymaxes replacing the country’s doctors. While the end goal of AI is to simulate human functions such as problem-solving, learning, planning, and intuition, there will always be things that AI cannot handle (yet), as well as things AI should not handle. The first category includes things like creativity, which cannot be effectively taught or programmed, and thus will require the guiding hand of a human. Expecting AI to effectively and reliably determine the context of an attack may also be an insurmountable ask, at least in the short term, as is the idea that AI could create new solutions to security problems. In other words, while AI can certainly add speed and accuracy to tasks traditionally handled by humans, it is very poor at expanding the scope of such tasks.

There are also tasks that humans currently excel at and that AI could conceivably perform someday. But these are tasks in which humans will always hold a sizable edge, or that AI shouldn't be trusted with: compliance, independently forming policy, analyzing risk, and responding to cyberattacks. In these areas we will always need people to serve as a check on an AI system's judgment, verify its work, and help guide its training.

There’s another reason humans will always have a place in the SOC: to stay ahead of cybercriminals who have begun using AI for their own nefarious ends. Unfortunately, any AI technology that can be used to help can also be used to harm, and over time AI will be every bit as big a boon for cybercriminals as it is for legitimate businesses.

Brute-force attacks, once on the wane due to more sophisticated password requirements, have received a giant boost in the form of AI. The technology combines databases of previously leaked passwords with publicly available social media information. Instead of trying every conceivable password starting with, say, 111111, the attacker's model makes only educated guesses, with a startling degree of success.
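The "educated guess" approach above can be sketched in a few lines. The sample leaked passwords and profile facts below are hypothetical, and real tooling applies far larger wordlists and learned mangling rules; the point is only the contrast with blind enumeration:

```python
# Illustrative sketch: derive candidate passwords from leaked passwords
# plus public profile facts, rather than enumerating every string.
from itertools import product

leaked = ["summer2018", "qwerty"]       # hypothetical breach-corpus entries
profile = ["rex", "1985"]               # e.g. pet name and birth year from social media

def candidates(leaked, profile):
    """Combine known-leaked passwords with common mutations of profile facts."""
    guesses = set(leaked)
    for fact, suffix in product(profile, ["", "!", "123"]):
        guesses.add(fact + suffix)
        guesses.add(fact.capitalize() + suffix)
    return sorted(guesses)

guessed = candidates(leaked, profile)
print(len(guessed))
```

Eleven targeted guesses replace an astronomically large blind search space, which is why password reuse and oversharing make these attacks so effective.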

About the Author

Celeste Fralick

Senior Principal Engineer and Chief Data Scientist, McAfee. Dr. Fralick is responsible for McAfee's technical analytic strategy that integrates into McAfee consumer and enterprise products as well as internal Business Intelligence. Dr. Fralick brings over 36 years of industry experience to McAfee. Prior to Intel's divestiture of McAfee, she was Chief Data Scientist in Intel's ...


