Earlier today, the EU adopted the EU AI Act. The only remaining step is its publication in the Official Journal (which is expected to happen in the next month). The Act will come into force 20 days after its publication, although the substantive obligations will then be phased in over a three-year period.
There has been considerable excitement about this new law, but getting to grips with it in practice is challenging. It is a formidable and complex piece of legislation running to 113 articles, 13 annexes and 180 recitals, and for many organisations its effect may be limited to only a subset of products.
With these factors in mind, we set out 10 key points on the new EU AI Act.
1. The Act applies tiered regulation
The obligations under the EU AI Act apply in tiers based on the purpose for which the AI system is intended to be used. The table below summarises those tiers.
| Tier | Example/Description | Regulatory position |
| --- | --- | --- |
| Prohibited | e.g. use of an AI system that applies subliminal techniques to manipulate behaviour or cause harm | Use is prohibited |
| High-risk | e.g. AI safety components integrated into certain products, or use of a stand-alone AI system for recruitment or credit scoring | Subject to significant regulation |
| GPAI (systemic risk) | General-purpose AI models that create systemic risk because of their capabilities, e.g. those trained using more than 10²⁵ floating-point operations (FLOPs) (see the sketch below the table) | Subject to significant regulation |
| GPAI (other) | Other general-purpose AI models | Limited obligations, focusing on documentation and copyright |
| Human interaction | AI systems that interact with humans or create deepfakes | Transparency obligations |
| Other | Other AI systems | Limited regulation, such as AI literacy requirements |
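To make the GPAI threshold more concrete, the sketch below estimates training compute using the commonly cited rule of thumb of roughly six FLOPs per parameter per training token and compares the result with the 10²⁵ FLOPs figure. The heuristic, the function names and the example model are illustrative assumptions only; the Act sets the threshold but does not prescribe this estimation method.

```python
# A minimal sketch: estimating whether a general-purpose model crosses
# the Act's 10^25 FLOPs training-compute threshold. The "6 * parameters
# * training tokens" rule of thumb is an illustrative assumption, not a
# methodology prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70bn-parameter model trained on 15tn tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                 # ~6.30e+24 FLOPs
print(presumed_systemic_risk(70e9, 15e12))  # False: below the threshold
```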
2. Most AI systems will be subject to limited regulation
The overall effect of this tiered approach is that – in practice – many AI systems will likely be subject to only limited regulation.
For example, the list of “prohibited” AI practices is short, focusing on AI systems used to manipulate or exploit individuals, for social scoring or for remote biometric identification. These are unlikely to be relevant to most organisations. The list of “high-risk” AI systems is slightly broader, capturing safety components of certain products and uses such as recruitment or credit assessments, but these categories are still narrowly drawn.
In practice, many organisations are likely to have only a handful of AI systems subject to the strongest tiers of regulation. However, determining with certainty which ones are caught is critical, as both the cost of compliance for high-risk systems and the sanctions for getting it wrong are significant.
3. The overall harmonising effect is arguably deregulatory
The EU AI Act is an EU Regulation (and so directly applicable in every EU Member State) and likely applies maximum harmonisation – i.e. it prevents individual Member States from creating their own AI laws within the scope of the Act.
Arguably, for many organisations it has more of a deregulatory effect: the Act not only applies very limited obligations to most AI systems but also prevents national laws from being created to impose extra obligations. Throughout the adoption process, this harmonising effect was touted as one of the key benefits of the Act.
4. But for “high-risk” systems, the obligations are onerous
There will, however, be very significant new obligations for high-risk systems under the EU AI Act.
The specific obligations vary according to your role. The Act includes the concepts of a “provider” (the person who develops the AI system) and a “deployer” (the person using the AI system). There are also separate obligations for “distributors” and “importers”.
The most burdensome obligations unsurprisingly fall on the “provider” who, amongst other things, must:
- Establish a risk management system with proper risk assessments.
- Ensure the system is trained on high-quality data.
- Put appropriate technical documentation and record keeping arrangements in place and provide instructions for use with the AI system.
- Ensure appropriate human oversight with possibilities to intervene.
- Ensure the system achieves an appropriate level of accuracy, robustness and cyber security.
- Put a quality management system in place for post-market monitoring and draw up an EU declaration of conformity.
- Register the system in the EU database for high-risk AI systems.
In contrast, a “deployer” must comply with a more limited set of obligations. For example, they must:
- Ensure they use the AI system in accordance with the instructions for use.
- Apply suitable human oversight.
- Monitor the operation of the system and keep logs of its operation.
- Inform workers and their representatives before deploying a high-risk AI system in the workplace.
Separate obligations apply to importers and distributors. Importantly, if a deployer, importer or distributor puts their trade mark on an AI system, substantially modifies it or uses it for a high-risk purpose not foreseen by the provider, they will be deemed to be a provider themselves (a simplified decision sketch follows).
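The sketch below expresses that re-classification rule as a simple check. The class and field names are illustrative labels, not terms defined in the Act, and each limb of the test is simplified: the Act attaches further conditions to each.

```python
# A simplified sketch of the re-classification rule described above:
# a deployer, importer or distributor is (broadly) treated as a
# provider in these cases. Field names are illustrative only.

from dataclasses import dataclass

@dataclass
class OperatorActions:
    puts_own_trademark_on_system: bool   # puts their name/trade mark on the system
    substantially_modifies_system: bool  # makes a substantial modification
    repurposes_as_high_risk: bool        # uses it for a high-risk purpose not foreseen by the provider

def deemed_provider(actions: OperatorActions) -> bool:
    """True if the operator takes on provider obligations under this simplified reading."""
    return (
        actions.puts_own_trademark_on_system
        or actions.substantially_modifies_system
        or actions.repurposes_as_high_risk
    )

# Example: a deployer who white-labels a third-party AI system under
# its own brand takes on the provider's obligations.
print(deemed_provider(OperatorActions(True, False, False)))  # True
```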
5. AI systems are defined, but not clearly
“AI systems” are defined as:
“a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
That definition mirrors the OECD’s definition of an AI system, but from a technological perspective it is not entirely clear what it means in practice, and there remains considerable scope for argument as to whether or not a system is “AI”. Having said that:
- There is likely to be a presumption that certain systems are “AI” based on the technology used; for example, a system using machine learning is likely to be presumed to be “AI”.
- The marketing of the system may also be important. If a product is described as being “Powered by AI”, it may be difficult to argue later that it is just a “dumb” product.
Ultimately, like an elephant, the concept of AI may be difficult to describe but you will generally know it when you see it.
6. The territorial scope is exceptionally broad
The territorial scope of the EU AI Act is exceptionally broad and so could potentially capture many international organisations with only a tangential connection to the EU. For example, the EU AI Act applies in a wide range of circumstances, including to:
- Providers of AI systems that are put into service or placed on the market in the EU.
- Deployers of AI systems established in the EU.
- Providers or deployers of AI systems where the output from the system is used in the EU.
Where the provider of a high-risk AI system is established in a third country, they must appoint an authorised representative in the EU.
7. Implementation is phased in over three years
As set out above, the Act now needs to be published in the Official Journal (which is expected to happen in the next month) and will come into force 20 days later.
The key stages after that are as follows (an illustrative date calculation follows the list):
- Prohibited AI practices: These restrictions apply six months after the Act comes into force.
- General-purpose AI: These rules will apply 12 months after the Act comes into force.
- High-risk systems: These rules will start to apply 24 months after the Act comes into force (subject to the proviso below).
- High-risk systems used as a safety component of a product: These rules will start to apply 36 months after the Act comes into force.
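As a worked example, the sketch below computes the key dates from an assumed publication date. The publication date shown is a placeholder only; the actual dates will depend on when the Act is published in the Official Journal.

```python
# Illustrative timeline calculation for the EU AI Act's phase-in.
# The publication date is an assumption for demonstration only.

from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

publication = date(2024, 7, 1)  # hypothetical publication date
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Prohibited AI practices": 6,
    "General-purpose AI rules": 12,
    "High-risk systems (stand-alone)": 24,
    "High-risk systems (product safety components)": 36,
}

print(f"Entry into force: {entry_into_force}")
for name, months in milestones.items():
    print(f"{name}: {entry_into_force + relativedelta(months=months)}")
```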
8. The EU AI Act is just one of many new AI-specific laws
The EU is not the only jurisdiction looking to enact AI-specific laws. Other countries such as China and the US (the latter particularly through state and local legislation such as the Illinois Biometric Information Privacy Act, the New York AI Bias Law and the Colorado Artificial Intelligence Act) are also passing laws in this area.
It is important that, to the extent possible, your compliance plan factors in these new and emerging obligations.
9. Other EU laws will fill in the gaps
While the strongest obligations under the EU AI Act are narrow in scope, it is important to remember that AI systems continue to be heavily regulated under other frameworks, particularly the GDPR, consumer protection and IP law, and that the obligations under the EU AI Act are without prejudice to those other obligations. The EU AI Act will also be supplemented by the proposed AI Liability Directive and the new Product Liability Directive.
This means that even if your new AI system is not prohibited, high-risk or a general-purpose AI, it will not fall into a regulatory lacuna. You will still need to consider compliance with obligations beyond the EU AI Act, including, for example, identifying a legal basis for any training of the system, considering whether the system processes special category personal data, ensuring transparency and completing a data protection impact assessment. The increasingly assertive posture of data protection authorities in relation to artificial intelligence means this may not be a trivial exercise.
10. What should I do now?
While the obligations under the EU AI Act will not apply immediately, it is important to start preparing for these changes now. The scope of this work will vary from organisation to organisation, but for most there are five key steps (a structured inventory sketch follows the list):
- Identify the software and hardware products used within, or provided by, your organisation and determine if any qualify as an “AI system”.
- For those that do, confirm if those products are caught by the very broad territorial test set out in the EU AI Act.
- For those that are potentially subject to the EU AI Act, determine which tier of regulation that product falls into, noting that (as set out above) only a subset are likely to be classified as either prohibited or high risk.
- Determine what obligations your organisation is subject to and, in relation to high-risk systems, identify if you are categorised as a provider, deployer or other (noting there is a considerable difference in the obligations imposed depending on your role).
- Put a plan in place to comply with those obligations and integrate it within your broader digital regulation compliance framework (such as the other obligations in the EU Digital Package). Taking a cross-cutting approach to these new regulations is likely to be more efficient, e.g. by combining the various types of risk assessments required under such legislation.
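One way to operationalise these five steps is a structured AI inventory. The sketch below is a minimal example of such a record; all class, field and value names are illustrative and are not terms defined in the Act.

```python
# A minimal sketch of an AI-inventory record capturing the five
# determinations described above. All names are illustrative.

from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GPAI_SYSTEMIC = "GPAI (systemic risk)"
    GPAI_OTHER = "GPAI (other)"
    TRANSPARENCY = "human interaction / transparency"
    OTHER = "other"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIInventoryRecord:
    product_name: str
    is_ai_system: bool          # step 1: does it meet the Act's definition?
    in_territorial_scope: bool  # step 2: caught by the territorial test?
    tier: Tier                  # step 3: which tier of regulation?
    role: Role                  # step 4: your role for this system
    compliance_plan: str        # step 5: reference to the plan of action

# Example entry for a hypothetical recruitment screening tool.
record = AIInventoryRecord(
    product_name="CV screening tool",
    is_ai_system=True,
    in_territorial_scope=True,
    tier=Tier.HIGH_RISK,
    role=Role.DEPLOYER,
    compliance_plan="Follow instructions for use; human oversight; logging",
)
print(record)
```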