Everywhere you turn today, machine learning and artificial intelligence are being hyped as both a menace to and the savior of the human race. This is perhaps especially true in cybersecurity.
In practice, these alluring terms usually refer to detailed statistical comparisons drawn from massive data collections. Let’s look at the terms themselves:
- Machine Learning describes algorithms that can statistically compare patterns and similarities in a set of data and provide useful information without being explicitly programmed to do so.
- Artificial Intelligence describes programs that go a step further, taking the useful information from machine learning and applying it directly to a problem area to mimic reasoning and problem-solving and make decisions automatically.
- Human-Machine Teaming, which our CTO Steve Grobman urges for cybersecurity, describes automating so many important but routine security tasks that people are freed to focus on strategic analysis and problem-solving.
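To make the machine-learning definition above concrete, here is a toy sketch of an algorithm that classifies files by statistical similarity to known examples rather than by explicitly programmed rules. The feature names and data values are invented for illustration and are not drawn from any real detection product:

```python
# Toy sketch: "learning" a malware verdict from labeled examples instead of
# hand-written rules. Features and training data are invented for illustration.

# Each sample: (entropy_score, packed_flag, suspicious_api_calls), label
TRAINING_DATA = [
    ((0.9, 1, 12), "malicious"),
    ((0.8, 1, 9),  "malicious"),
    ((0.2, 0, 1),  "benign"),
    ((0.3, 0, 0),  "benign"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample):
    """Nearest-neighbor classification: return the label of the closest
    known example, so the verdict comes from the data, not from a rule."""
    _, label = min(TRAINING_DATA, key=lambda item: distance(item[0], sample))
    return label

# A packed, high-entropy file with many suspicious API calls
print(classify((0.85, 1, 10)))  # -> malicious
```

Nothing in `classify` says “high entropy means malware”; that pattern emerges statistically from the labeled data, which is the essential idea behind machine learning at any scale.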
At McAfee we are urging our customers to take a long, comprehensive view of human-machine teaming that looks beyond the current cool-factor buzz. You can make it real, practical, and scalable, but what does that look like? In a recent white paper, “Driving Toward a Better Understanding of Machine Learning,” I offered an analogy to help business people understand this topic. You can download it here.
As a metaphor for malware threats, I introduced the concept of malicious autonomous cars: self-driving cars that have been programmed to do bad things. Posing as taxis, for example, malicious autonomous cars could trick and kidnap people, much the way ransomware masquerades as an email attachment, “kidnaps” your critical user files, and demands payment.
The machines are learning, and to stay secure we must learn as well. Let’s do it together.