The European Commission’s Proposal for an Artificial Intelligence Act is designed to create a regulatory framework for the use of AI in the European Union, in conformity with EU values. Following the adoption of the Council’s General Approach on 6 December 2022, the European Parliament is close to agreeing on its internal position on this proposed regulation. The forthcoming three-way negotiations (“trilogue”) between the European Parliament, the Council and the Commission will see EU lawmakers trying to reach a final position on sensitive issues such as the definition of an AI system, biometric identification technologies and how to respond to the rise of sophisticated chatbots such as ChatGPT.
Definition of an “AI system”
A critical point since the beginning of the examination of the Commission proposal, in both the Council and the Parliament, has been the definition of an “AI system”. The Commission proposal defined an “AI system” as “software developed with one or more of the techniques and approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Article 3). Concerned that traditional software would be included, Member States in the Council put forward a narrower definition covering systems developed through machine learning, logic-based and knowledge-based approaches. Within the Parliament, the latest amendments propose a definition similar to the one developed by the Organisation for Economic Co-operation and Development (OECD). An AI system is thus defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments”. The European Parliament also proposed deleting Annex I of the Commission’s proposal, which listed specific AI techniques, such as machine learning, logic-based, knowledge-based and statistical approaches.
Biometric identification and categorisation
Provisions on AI uses for biometric identification and categorisation of people have also proven to be highly contentious. In its proposal, the Commission banned the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, except in limited circumstances such as a targeted search for a missing child or the prevention of a terrorist attack. Similarly, the Council also proposed banning the use of “real-time” remote biometric identification systems in publicly accessible spaces, with specific exceptions for the purposes of law enforcement. However, because Member States proposed to exclude from the Act all AI applications related to national security, defence and the military, they did not ban the use of facial recognition AI systems within these areas.
In Parliament, the Committees for Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) have gone much further by proposing the prohibition of all use of real-time remote biometric identification systems in publicly accessible spaces (including by law enforcement); AI systems using biometric traits to categorise people by using or inferring sensitive or protected attributes (such as race, sexual orientation or religion); and AI models that populate facial recognition databases by indiscriminately scraping facial images from social media profile pictures and CCTV footage. At the same time, the Parliament proposed including as “high-risk” all AI systems used for “post” remote biometric identification of people, as well as AI systems making inferences about people’s personal characteristics on the basis of biometric or biometrics-based data, including emotion recognition systems.
General-purpose AI and chatbots
The recent rise of AI chatbots trained to provide prompt and detailed responses, most notably ChatGPT, has introduced new points of discussion for EU regulators. While the Commission did not include “general purpose AI” technologies in the scope of the proposed Regulation, the Council proposed that general-purpose AI systems that may be used as high-risk AI systems, or as components of high-risk AI systems, would have to comply with specific requirements (risk management, transparency and cybersecurity, among others) eighteen months after the entry into force of the AI Act. The European Parliament is still working on a possible text on general-purpose AI. The IMCO and LIBE Rapporteurs proposed that AI systems generating complex texts without human oversight should be added to the “high-risk” list.
Next steps
While the Council already agreed on its internal position (“general approach”) on 6 December 2022, the Parliament is still entangled in technical and political discussions on the various amendments tabled to the Commission proposal. According to the latest information, the IMCO and LIBE Committees are expected to vote on their joint Report by the end of March 2023. The final Report could then be sent to the Parliament’s plenary session for a final vote between 17 and 20 April 2023. Trilogue negotiations between the Commission, the Council and the Parliament to agree a final text would then begin and could be expected to continue for several months. Once the final Regulation has entered into force (which could be September/October 2023 at the earliest), the Regulation would become applicable 24 months thereafter, which is currently expected to be in the course of 2025.