1 June 2021
On 21 April 2021, the European Commission (EC) published a proposal (Proposed Regulations) which is described as the “first-ever legal framework on [Artificial Intelligence]”, aiming to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The Proposed Regulations are of general interest because they constitute a first attempt to “regulate” AI properly and, if implemented, may become influential worldwide.
AI systems
The Proposed Regulations define an AI system as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The techniques and approaches listed in Annex I include machine learning approaches, logic and knowledge-based approaches, and statistical approaches.
Application
The Proposed Regulations apply to, amongst others:
- providers placing on the market or putting into service AI systems in the European Union (EU);
- users of AI systems located within the EU; and
- providers and users of AI systems that are located outside the EU if the output produced by the system is used in the EU.
Prohibited and high-risk AI systems
The Proposed Regulations prohibit certain AI systems including those that:
- deploy subliminal techniques beyond a person’s consciousness materially to distort the person’s behaviour and cause or be likely to cause that person harm;
- exploit any of the vulnerabilities of a specific group of persons materially to distort the behaviour of a member of that group and cause or be likely to cause that person harm;
- evaluate or classify the trustworthiness of natural persons (social scoring); and
- use ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement.
The Proposed Regulations consider certain AI systems as high-risk including those that are:
- used as safety components of products (or are themselves products) such as machinery, toys, and medical devices; and
- used in areas such as creditworthiness assessment, biometric identification, and the management of critical infrastructure.
Providers of high-risk AI systems must comply with strict requirements, such as having a quality management system and technical documentation in place, and ensuring that their high-risk AI systems undergo a conformity assessment procedure before being placed on the market.
Next Steps
The European Parliament and the Council of the EU must agree on the text of the Proposed Regulations for them to become EU law. This legislative process may take a year or more.
The Proposed Regulations and supporting annexes are accessible here: link.
For further information, please contact:
Simon Deane, Partner, Deacons
simon.deane@deacons.com