I spoke last week at the RSA Conference in San Francisco on the subject of AI-related threats and opportunities in the cybersecurity field. I asserted that innovations such as AI can strengthen our defenses but can also enhance the effectiveness of a cyber attacker. I also looked at some examples of underlying fragility in AI that give an attacker the opportunity to evade AI-based defenses. Successfully unlocking the potential of AI in cybersecurity requires that we in the cybersecurity industry answer a question: how do we nurture the sparks of AI innovation while recognizing its limitations and how it can be used against us?
We should look to the history of key technological advances to better understand how technology can bring both benefits and challenges. Consider flight in the 20th century. The technology has changed every aspect of our lives, allowing us to move between continents in hours, instead of weeks. Businesses, supply chains, and economies operate globally, and our ability to explore the world and the universe has been forever changed.
But this exact same technology also fundamentally changed warfare. In World War II alone, the strategic bombing campaigns of the Allied and Axis powers killed more than two million people, many of them civilians.
Flight rests on physical principles such as Bernoulli’s Principle, which helps explain why an airplane wing creates lift. Of course, the physics in play has no knowledge of whether the airplane wing is connected to a life-flight rescue mission or to a plane carrying bombs to be dropped on civilian targets.
When Orville Wright was asked in 1948, after the devastation of air power during World War II, whether he regretted inventing the airplane, he answered:
“No, I don’t have any regrets about my part in the invention of the airplane, though no one could deplore more than I do the destruction it has caused. We dared to hope we had invented something that would bring lasting peace to the earth. But we were wrong. I feel about the airplane much the same as I do in regard to fire. That is, I regret all the terrible damage caused by fire, but I think it is good for the human race that someone discovered how to start fires, and that we have learned how to put fire to thousands of important uses.”
Orville’s insight was that technology does not comprehend morality, and that any advance in technology can be used for both beneficial and troubling purposes. This dual use of technology is something our industry has struggled with for years.
Cryptography is a prime example. The exact same algorithm can be used to protect data from theft, or to hold an individual or organization for ransom. This matters more than ever given that we now encrypt 75% of the world’s web traffic, protecting over 150 exabytes of data each month. At the same time, organizations and individuals are enduring record exploitation through ransomware.
The RSA Conference itself was at the epicenter of a debate during the 1990s on whether it was possible to restrict strong encryption to only desirable places, or only desirable functions. At the time, the U.S. government classified strong encryption as a munition, subject to strict export restrictions. But encryption is ultimately just math, and it’s not possible to stop someone from doing math. We must be intellectually honest about our technologies: how they work, what it takes to use them, and when, how, and whether they should be contained.
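To make the “just math” point concrete, here is a toy sketch in Python: a single keystream function (built on SHA-256 purely for illustration, not a production cipher) that both encrypts and decrypts. The identical routine can protect a defender’s backup or lock a victim’s files for ransom; nothing in the math knows the difference. The keys and data below are hypothetical.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    Because XOR is its own inverse, the same call encrypts and decrypts.
    Illustration only; do not use as a real cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

plaintext = b"customer database"

# The defender encrypts a backup to protect it from theft...
protected = keystream_xor(b"defender-key", plaintext)

# ...and an attacker runs the exact same math to hold data for ransom.
ransomed = keystream_xor(b"attacker-key", plaintext)

# Only the holder of the key can reverse either operation.
assert keystream_xor(b"defender-key", protected) == plaintext
```

The dual use is built in: the algorithm has no parameter for intent.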
Our shared challenge in cybersecurity is to capture lightning in a bottle, to seize the promise of advances like flight, while remaining aware of the risks that come with technology. Let’s take a closer look at that aspect.
History repeats itself
Regardless of how you define it, AI is without a doubt the new foundation for cybersecurity defense. The entire industry is tapping into the tremendous power that this technology offers to better defend our environments. It enables detection of threats beyond what we’ve seen in the past, and helps us out-innovate our cyber adversaries. The combination of threat intelligence and artificial intelligence, or human-machine teaming, provides far better security outcomes, faster, than either capability on its own.
Not only does AI enable us to build stronger cyber defense technology, but it also helps us solve other key issues, such as the talent shortage. We can now delegate many tasks to machines, freeing up our human security professionals to focus on the most critical and complex aspects of defending our organizations.
“It’s just math.”
Like encryption, AI is just math. It can enhance criminal enterprises in addition to serving its beneficial purposes. McAfee Chief Data Scientist Celeste Fralick joined me on stage during this week’s keynote to run through some examples of how this math can be applied for good or ill (see the link at the end of this post to view the keynote). From machine-learning-fueled crime-spree predictors, to deepfake videos, to highly effective attack obfuscation, we touch on them all.
It’s important to understand that the cybersecurity industry is very different from other sectors that use AI and machine learning. For a start, in many other industries, there isn’t an adversary trying to confuse the models.
AI is extremely fragile, which is why one focus area of the data science group at McAfee is Adversarial Machine Learning, where we work to better understand how attackers could try to evade or poison machine learning models. We are developing models that are more resilient to attack, using techniques such as feature reduction, adding noise, and distillation.
AI and False Positives: A Warning
We must recognize that this technology, while incredibly powerful, is also very different from what many cybersecurity defenders have worked with historically. To deal with issues such as evasion, models need to be tuned to high levels of sensitivity. That sensitivity makes false positives inherent, and we must fully account for them in our methodology for using the technology.
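The trade-off is easy to see with made-up numbers: lower the detection threshold (raise sensitivity) and you catch more attacks, but benign events start tripping the alarm. The scores below are invented purely to illustrate.

```python
# Hypothetical detector scores for benign and malicious events.
benign    = [0.05, 0.10, 0.20, 0.35, 0.55, 0.60]
malicious = [0.40, 0.65, 0.80, 0.90, 0.95]

def rates(threshold: float):
    """Fraction of benign events wrongly flagged (false-positive rate)
    and fraction of malicious events caught (detection rate)."""
    fp = sum(s >= threshold for s in benign) / len(benign)
    tp = sum(s >= threshold for s in malicious) / len(malicious)
    return fp, tp

for t in (0.3, 0.5, 0.7):
    fp, tp = rates(t)
    print(f"threshold={t:.1f}  detection={tp:.0%}  false positives={fp:.0%}")
```

At the most sensitive setting every attack is caught, but half the benign events fire alerts; at the strictest setting false positives vanish while attacks slip through. There is no free threshold, which is why false positives must be designed into the methodology rather than wished away.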
False positives can have catastrophic results. For an excellent illustration, watch the video of the keynote (linked below) if you haven’t seen it yet. I talk through the quintessential example of how a false positive nearly triggered World War III and nuclear Armageddon.
As with fire and flight, how we manage new innovations is the real story. Recognizing that technology does not have a moral compass is key. Our adversaries will use the technology to make their attacks more effective, and we must move forward with our eyes wide open to all aspects of how the technology will be used: its benefits, its limitations, and how it will be used against us.
Please see the video recording of our keynote speech at RSA Conference 2019: https://www.rsaconference.com/events/us19/presentations/keynote-mcafee