This September, the European Commission presented its Proposal for an Artificial Intelligence Liability Directive (AILD), the first legislative proposal specifically addressing compensation for damage caused by artificial intelligence. Why is the directive necessary, and what changes would it bring? And would the AILD affect the assessment of copyright infringements caused by artificial intelligence?
Why is the AILD necessary?
Artificial intelligence has developed at a rapid pace in recent years, bringing many new solutions to both business operations and society at large. However, the rise of AI has also brought with it concerns about the damage caused by autonomous systems. AI systems can operate almost completely autonomously, and their decision-making is often based on complex and opaque processes. As a result, the decisions made by AI can come as a complete surprise even to their manufacturers and users. As more and more demanding tasks are assigned to AI systems, the risk of serious and unpredictable damage increases.
Current national liability rules are not entirely suited to handling liability claims for damage caused by AI-enabled products and services. This is because the special characteristics of AI technology, including complexity, autonomy, and opacity, may make it difficult for victims to prove the requirements of a successful liability claim. Currently, liability for compensation generally requires both fault on the part of the defendant and a causal link between the defendant's conduct and the damage. In the case of highly autonomous machines, where human intervention has been minimal or non-existent, proving these requirements can be very difficult or prohibitively expensive for victims.
The AILD will ease the burden of proof in AI-related claims
The European Commission aims to tackle these challenges by introducing two new features in the AILD. Firstly, the directive includes a provision that improves the claimant's right to access information relating to so-called high-risk AI systems. These high-risk AI systems include, for instance, systems used in critical infrastructure, law enforcement, the administration of justice, and public administration.
Under this provision, a court is in certain cases empowered to order the disclosure of 'relevant evidence' about a specific high-risk AI system suspected of having caused damage. To obtain disclosure, the claimant must first make all proportionate attempts to gather the relevant evidence from the defendant. In addition, the claimant must present facts and evidence sufficient to support the plausibility of a claim for damages.
The purpose of this new feature is to ease the victim's burden of proof. Courts are given the power to examine the technical details of AI systems, including their data and risk management mechanisms. In this way, the AI 'black box' can be opened, improving victims' chances of receiving compensation.
As the second new feature, the directive proposal introduces a rebuttable presumption of causality. National courts will be required to presume a causal link between the defendant's fault and the output produced by the AI system, or its failure to produce an output, if the following conditions are met:
- the claimant has demonstrated, or the court has presumed, the fault of the defendant, or of a person for whose behavior the defendant is responsible, consisting in non-compliance with a duty of care laid down in Union or national law that is directly intended to protect against the damage that occurred;
- it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
- the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
The purpose of the presumption of causality is likewise to help victims meet their burden of proof. A rebuttable presumption of causality is a more complex solution than a reversal of the burden of proof, but it still improves the victim's chances of succeeding with justified liability claims. This is primarily because the victim is not required to prove causality as such, only that such causality can be considered reasonably likely.
And how does the AILD interact with the AI Act? The AILD and the AI Act may be regarded as two sides of the same coin. While the AI Act seeks to ensure the safety of artificial intelligence systems and to protect fundamental rights in the field of artificial intelligence, it does not eliminate risks completely. The AILD would complement the AI Act by establishing liability rules for cases where risks materialize and damage occurs. The two instruments therefore work side by side, and the AILD relies on many of the terms defined in the AI Act.
AILD and copyright infringements
I have previously written about copyright infringements caused by artificial intelligence in my article on artificial intelligence and copyright. In that article, the idea of some form of independent liability for artificial intelligence was rejected, and the conclusion was that responsibility should lie with the people behind the AI system. The AILD supports this conclusion, as it allocates liability for compensation to either the provider or the user of the AI system.
If the AILD comes into force as proposed, the chances of successfully claiming compensation would also improve for victims of copyright infringement. However, it should be noted that the new provision on the disclosure of evidence applies only to high-risk AI systems, so it may not be very significant in copyright infringement cases. Copyright infringements are not particularly typical in the fields where high-risk AI systems are used, even though they cannot be ruled out altogether.
What next?
The AILD has been the subject of considerable public debate, but it has mainly received positive reactions and broad political support within the EU. However, amendments and review by various EU institutions are still to be expected. Before taking effect, the AILD must be adopted by the European Parliament and the Council, which will be followed by a two-year transposition period for the Member States. Even though it is likely to take several years for the AILD to become applicable, companies utilizing AI should be aware that changes are on the horizon.
For further information, please contact:
Sofia Wang, Bird & Bird
sofia.wang@twobirds.com