The scope of the new EU AI Act is largely driven by the definition of an “AI system”. That definition is opaque and, unfortunately, the new guidelines from the EU Commission muddy the waters further. We consider whether this matters in practice.
The definition of an “AI system”
The EU AI Act defines an AI system in Article 3(1). It means:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
There are some important points to note about this multifaceted definition:
- No technical grounding: Most elements of this definition have no technical meaning. For example, the key requirement of “autonomy” seems rooted in philosophy, not computer science. Of course, the definition needs to be technologically neutral so that it does not become obsolete in a few years’ time. However, the lack of any technical reference point is challenging.
- A “bright line test” is impossible: As with many legal definitions, some things will definitely be caught, and some definitely not. For example, DeepMind is unlikely to deny that AlphaFold uses an AI system. Equally, AI regulators are unlikely to claim jurisdiction over Frogger. However, there is no “bright line” separating the two, given the difficulty of determining when a very complex program flips from traditional software to AI (the sketch below makes the problem concrete). This inevitable grey area is common to many other areas of law, such as the difference between an uncopyrightable idea and copyrightable expression (“Nobody has ever been able to fix that boundary, and nobody ever can”: Nichols v. Universal Pictures 45 F.2d 119).
- “Varying/May”: It is not clear whether all elements of the definition are mandatory, or to what extent each must be present. For example, a system need only have “varying levels” of autonomy, and it “may” exhibit adaptiveness. This piles vagueness on top of vagueness.
This may be an “elephant” definition. In other words, you know it when you see it. (Or to put it more eloquently, it is “characterised more by recognition when encountered than by definition”, Ivey v Genting [2017] UKSC 67).
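To make the grey area concrete, here is a minimal sketch of our own (a hypothetical payment-flagging task, written in Python with scikit-learn; the scenario and names are illustrative assumptions, not drawn from the Act or the guidelines). It implements the same decision twice: once as rules “defined solely by natural persons”, and once as a model that infers its decision logic from data.

```python
# Purely illustrative: the same decision implemented two ways.
# Hypothetical task: flag a payment as suspicious from its amount
# and the hour of day at which it was made.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Version 1: rules defined solely by a natural person -- on the face
# of recital 12, excluded from the definition.
def flag_rule_based(amount: float, hour: int) -> bool:
    return amount > 10_000 or (hour < 6 and amount > 1_000)

# Version 2: the decision boundary is *inferred* from historical data
# rather than written down by anyone -- squarely an "AI system".
X = np.array([[50, 14], [12_000, 3], [200, 23], [9_000, 2]])  # toy data
y = np.array([0, 1, 0, 1])                                    # 1 = suspicious
model = LogisticRegression().fit(X, y)

def flag_learned(amount: float, hour: int) -> bool:
    return bool(model.predict([[amount, hour]])[0])

print(flag_rule_based(11_000, 3), flag_learned(11_000, 3))
```

Both functions take the same inputs and produce the same kind of output; on the face of the definition, only the second is an “AI system”, because its logic was inferred rather than authored. The difficulty lies in everything in between, for example thresholds originally derived by a statistical process and then hard-coded.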
The EU Commission’s guidelines are opaque
Given these difficulties, the EU Commission’s guidelines have been eagerly awaited, as they offered an opportunity for a more pragmatic approach. For example, the EU Commission could have indicated a presumption that, for the time being, only certain types of technology, such as machine learning systems, are AI.
Unfortunately, the EU Commission has stuck closely to the elements of the original definition and has attempted to define them in ways that are frequently unhelpful and unclear. To take some examples:
- “Autonomy”: This should be a key concept, but the guidelines appear to equate it with whether the system is “manually controlled”, suggesting, for example, that systems that “are capable to operate without any human involvement or intervention” are autonomous. This is difficult to understand, as most computer programs run “automatically” once written – this article is written in Word without any ongoing intervention from Microsoft. Similarly, the guidelines suggest “expert systems” are autonomous, whereas some expert systems are just deterministic, rules-based tools (see the sketch after this list).
- “Adaptiveness”: The guidelines note that the definition states AI systems “may” exhibit adaptiveness. Accordingly, this is not a requirement for an AI system. Is it still an indicator of an AI system? The guidelines are silent.
- “Infers”: The guidelines suggest this term differentiates AI from traditional software (see below), pointing instead towards machine learning systems or “logic and knowledge-based approaches”. There is little clarity on what “logic-based” means in this context: all computer programs use logic.
- “Influence physical or virtual environments”: The guidelines note that to be an AI system there must be an “active impact” on either a physical environment or a virtual one “including digital spaces, data flows, and software ecosystems”. What this means, beyond the system simply providing an output, is not clear.
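The tension between “autonomy”, “logic-based approaches” and the carve-out for human-authored rules is easy to illustrate. The following minimal sketch of our own (a hypothetical triage rule set in Python; not an example from the guidelines) is a classic deterministic “expert system”.

```python
# Purely illustrative: a deterministic, rules-based "expert system"
# (hypothetical triage rules). Once deployed it runs without any human
# involvement or intervention -- "autonomous" on the guidelines'
# reading -- yet every rule was authored by a person in advance and
# the same input always yields the same output.

def triage(temperature_c: float, heart_rate_bpm: int) -> str:
    if temperature_c > 39.0 and heart_rate_bpm > 120:
        return "urgent"
    if temperature_c > 38.0:
        return "review"
    return "routine"

print(triage(39.5, 130))  # always "urgent" -- no inference, no learning
```

On one reading this is a “logic and knowledge-based approach” (and so caught); on another it is “rules defined solely by natural persons to automatically execute operations” (and so excluded). The guidelines offer no reliable way to choose between the two.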
Out of scope
The guidelines refer to the helpful clarification in recital 12 that AI systems should not include: “simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.”
However, even here confusion reigns. For example, the guidelines suggest that the definition excludes systems used for “mathematical optimisation”. This is odd, as many AI tools are inherently just solving optimisation problems (as the sketch at the end of this section shows).
Equally, the suggestion that AI systems do not include “physics-based systems”, such as weather modelling of “complex atmospheric systems”, is difficult to understand. It appears to rest on a subjective assessment of the system’s purpose rather than an objective assessment of the technology.
The guidelines then suggest an exemption for “basic processing…used in a consolidated manner for many years”. Perhaps this harks back to the idea that AI is “anything computers still can’t do” (see What Computers Still Can’t Do: A Critique of Artificial Reason by Hubert L Dreyfus), but it is hardly a principled basis for delineating the term.
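The optimisation point is easy to demonstrate. In the minimal sketch below (our own toy example in Python/NumPy, not taken from the guidelines), “training” a linear model is nothing more than solving a least-squares optimisation problem, so an exclusion for “mathematical optimisation” cuts across machine learning rather than around it.

```python
# Purely illustrative: "training" a simple model IS mathematical
# optimisation. Fitting a linear regression means solving
#     min_w || X @ w - y ||^2
# -- a textbook least-squares optimisation problem.

import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # toy design matrix
y = np.array([1.1, 1.9, 3.2])                        # toy targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)  # an optimiser does the "learning"
print(w)  # the model's learned parameters
```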
Does it matter?
As set out above, there was always going to be a grey area containing systems that sit uncomfortably on the boundary between dumb code and smart AI. The critical question is how large that grey area is and how well it aligns with the harms the law is intended to address. Unfortunately, the EU Commission’s guidelines leave plenty of grey and raise more questions than they answer.
Does this matter in practice? Perhaps not. The EU AI Act is “inch wide; mile deep”. The key obligations are focused on the narrow categories of prohibited, high-risk and general-purpose AI systems (together with limited transparency and literacy obligations).
The sort of sophisticated technology needed to implement those highly regulated use cases will often clearly constitute an “AI system”. Outside those use cases, the question is largely academic.
The Commission’s Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 are available here.
For further information, please contact:
Sonia Cissé, Partner, Linklaters
sonia.cisse@linklaters.com