In Part I of this blog, we discussed the debate around AI: what it is, whether it exists, and to what extent it plays a role in our daily lives. In Part II, we turn our focus to the future, and how AI must be developed deliberately and thoughtfully to provide the greatest benefit to humanity.
A strategic, economic and political challenge
There is absolutely no doubt that AI is a strategic, economic and political issue. On Feb. 12, 2019, the European Parliament adopted a comprehensive European industrial policy on AI and robotics. The motion notes that AI promotes innovation, productivity, economic growth and competitiveness; reshapes multiple industrial sectors; and can help address global challenges such as health and the environment. The Parliament's goal is to facilitate the development of AI technologies by implementing a single European market for AI and removing barriers to its deployment, including through the principle of mutual recognition with regard to the cross-border use of smart products. Clearly, the aim is to allow the European Union to compete with the massive investments made by third parties, especially the United States and China.
The motion also recommends creating a European Regulatory Agency for AI and Algorithmic Decision-Making.
A strong initiative to integrate ethics in AI
The motion highlights the importance of deploying a “trusted AI” coupled with ethical principles to enable responsible competitiveness, as this will build user trust and facilitate wider adoption of AI. Parliament believes that the European Union must play a “leading role on the international stage” by establishing itself as a leader in ethical, safe and advanced AI.
The reason the main actors want ethics to be integrated into AI is the need to guarantee, from the design stage, the transparency and explainability of algorithms, in order to prevent any discrimination linked to automated decision-making. This concern is echoed by Article 22 of Regulation (EU) 2016/679 (the “GDPR”): “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
AI is to be designed as a tool that assists humans and remains under their control, aligned with fundamental rights such as dignity, autonomy, self-determination and non-discrimination. The European Commission published draft guidelines on ethics in the field of AI on December 18, 2018.
A Legal Framework for AI
The motion stresses the need to develop a strategic regulatory environment for AI and robotics that encourages both technological innovation and strong user protection. Parliament refers to Europe’s ambition to be a pioneer in this area, hence the importance of “regularly reassessing existing legislation in order to ensure that it is appropriate to its objective as far as Europe is concerned.”
The first issue raised by the Parliament is the need to reconcile the GDPR with the development of AI. The key to developing AI is the trust of its users, and there can be no user trust if personal data is not strongly protected. The EU Parliament argues that “the establishment of an ecosystem of trust in the development of AI technologies should be based on an appropriate data processing framework,” which implies full respect of the EU legal framework concerning the protection of personal data, i.e., the GDPR.
The resolution also stresses the lack of specific provisions on liability and intellectual property, which undermines legal certainty. Liability in the field of AI clearly remains a grey area.
What about AI in the field of Cybersecurity?
The European Parliament highlights that “AI can both be a threat to cybersecurity and the main tool against cyber-attacks.” There is a need to ensure the integrity of the data and algorithms on which AI is based, including “product safety checks by market surveillance authorities and consumer protection rules” that are in place, with appropriate minimum safety standards. At the same time, the motion recognizes that “the deployment of solutions integrating AI for cybersecurity purposes will make it possible to predict threats, prevent them and mitigate” them.
The Parliament again stresses the importance of developing Europe’s own cybersecurity independence by building “its own infrastructure, data centers and systems of cloud computing and its own computer components.” Some even view this as a founding element of European digital sovereignty.
Ultimately, AI may become the next big privacy trend. Just as big data made every single company a data company, many believe that the new era of AI will transform every company into an AI company. When thinking about AI, the first things that pop into people’s minds are autonomous vehicles and smart robots, but the legal and privacy implications are far wider, potentially impacting every single industry, from consumer goods to healthcare to financial services—without forgetting, of course, cybersecurity.
I would bet that, before you know it, we will have an official EU Commission position on the legitimate interest of processing personal data for the sake of AI in the field of cybersecurity. And hopefully ethics—recognized at the international level—will provide the required boundaries for a safe and transparent use of AI.