Much has been said about the power of AI and how tomorrow’s CISO won’t be able to provide effective cybersecurity without it.
The hype surrounding AI is driven by two factors: the quickening pace of development in natural language capabilities, and the current shortage of qualified cybersecurity professionals.
Before examining the legal recognition AI enjoys today, including in the world of cybersecurity, it may be useful to clarify what AI is and to what extent it actually exists.
What is AI? Does it exist yet?
The term “artificial intelligence” is rather vague from a legal standpoint, and in the legal world, words tend to carry great weight. In France (as in much of the world), the official definition of AI is as follows: “A theoretical and practical interdisciplinary field whose purpose is to understand the mechanisms of cognition and reflection, and their imitation by a material and software device, for purposes of assistance or substitution to human activities.”
AI now has the formal recognition of the European Parliament as a result of the 2018/2088(INI) Motion on a Comprehensive European Industrial Policy on Artificial Intelligence and Robotics, also known as the Ashley Fox Resolution, dated 12 February 2019 (the “Motion”). Interestingly, this resolution specifically mentions the implications of AI for cybersecurity:
“Notes that cybersecurity is an important aspect of AI, especially given the challenges for transparency in high level AI; considers that the technological perspective, including auditing of the source code, and requirements for transparency and accountability should be complemented by an institutional approach dealing with the challenges of introducing AI developed in other countries into the EU single market.”
So, with such official recognition, why do we read everywhere that real AI does not exist yet?
The argument is that, although the goal is to replace the human being, today’s AI can provide only augmented intelligence that assists the human being.
In fact, as far back as the 1956 Dartmouth Artificial Intelligence Conference, John McCarthy and his co-organizers proposed “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
More than 60 years later, machine learning is still not autonomous. But it does exist to a certain degree, and its capabilities, combined with the vast accumulation of data available today, make it possible to create algorithms that perform tasks never automated before. AI, or at least a certain form of it, is now part of our daily lives, and understanding this technology is essential if it is to be accepted and integrated into our societies.
In Part II of this blog, we’ll examine the economic, political and ethical challenges in the development of AI, particularly as they pertain to cybersecurity.