According to PwC's Global Economic Crime Survey 2016, cybercrime is now the second most reported economic crime affecting organizations. More concerning still, while other economic crimes have held steady or declined in frequency, cybercrime has been rising steadily everywhere.
Although today's sophisticated tools such as big-data analytics offer effective monitoring against cybercrime, few organizations use these state-of-the-art technologies to detect and prevent economic crime: according to the same PwC survey, only 8% of respondents use internal monitoring approaches such as data analytics.
In what could be a significant boost for these technologies, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, together with a machine learning startup, have demonstrated that their machine learning platform, AI2, can detect 85% of attacks while also reducing false positives by a factor of five. They achieved this by monitoring a web-scale platform that generated millions of log lines per day over three months, about 3.6 billion log lines in total.
How does AI2 work?
According to MIT News, the AI2 platform uses active learning to detect 85% of cyber-attacks as shown below:
1. On the very first day, AI2 combs through the available data and detects suspicious activity.
2. This suspicious activity is then presented to a human expert, who confirms or denies these attacks.
3. The feedback is then incorporated to train a supervised model.
4. On Day 2, this supervised model is used in conjunction with the unsupervised model, after which feedback is again collected and used to update the virtual analyst model.
5. This learning continues and the supervised model is thus continuously enhanced. According to the MIT researchers, the AI2 model continuously generates new models which are refined every few hours, thus improving detection rates significantly and rapidly.
6. According to the researchers, the AI2 model achieves a detection rate upwards of 85%, while an unsupervised machine learning model alone detects roughly 73% of attacks.
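The loop above can be sketched in code. The following is a hypothetical, heavily simplified illustration of an AI2-style active-learning cycle: the feature names, toy outlier score, threshold model, and simulated analyst are all illustrative assumptions, not the researchers' actual implementation.

```python
# Hypothetical sketch of an AI2-style active-learning loop. Feature names,
# scoring rules, and the toy "analyst" are illustrative assumptions only.
import random

random.seed(0)

def make_day():
    """Synthetic daily log events: mostly benign, plus a handful of attacks."""
    events = [{"failed_logins": random.randint(0, 4),
               "bytes_out": random.gauss(100, 20),
               "is_attack": False} for _ in range(200)]
    events += [{"failed_logins": random.randint(8, 12),
                "bytes_out": random.gauss(500, 50),
                "is_attack": True} for _ in range(5)]
    return events

def unsupervised_score(e):
    """Step 1: unsupervised outlier score, i.e. distance from a 'normal' baseline."""
    return abs(e["failed_logins"] - 2) + abs(e["bytes_out"] - 100) / 100

def analyst_label(e):
    """Step 2: simulated human expert who confirms or denies each flagged event."""
    return e["is_attack"]

def train_supervised(labeled):
    """Step 3: toy supervised model, a failed-login threshold learned from feedback."""
    attacks = [e["failed_logins"] for e, y in labeled if y]
    normals = [e["failed_logins"] for e, y in labeled if not y]
    if not attacks or not normals:
        return lambda e: False       # not enough feedback to train yet
    cut = (min(attacks) + max(normals)) / 2
    return lambda e: e["failed_logins"] >= cut

labeled_pool = []
model = lambda e: False              # no supervised model exists on day 1
for day in range(3):                 # steps 4-5: repeat and refine daily
    events = make_day()
    # Rank by supervised flag first, then unsupervised outlier score, and show
    # only the top few events to the analyst (a limited labeling budget).
    ranked = sorted(events,
                    key=lambda e: (model(e), unsupervised_score(e)),
                    reverse=True)
    for event in ranked[:10]:
        labeled_pool.append((event, analyst_label(event)))
    model = train_supervised(labeled_pool)   # retrain on accumulated feedback

held_out = make_day()
caught = sum(model(e) for e in held_out if e["is_attack"])
print(f"attacks flagged on a held-out day: {caught}/5")
```

The key design point the sketch captures is the labeling budget: the analyst never sees all events, only the most suspicious few, yet their feedback steadily sharpens the supervised model that ranks the next day's events.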
MIT researchers acknowledge that the more attacks the system observes, and the more analyst feedback it receives, the more accurate its future predictions become. It is this human-machine interaction that lets AI2 succeed where conventional machine learning models have failed. However, as the algorithms improve and the system's learning ability increases, the reliance on human analysts is expected to diminish; ultimately, the model is expected to reach the same performance levels with little or no human input, at which point it would effectively operate as an unsupervised machine learning system.