The AI Liability Directive

Key points for businesses that use AI

17.11.2022

The European Commission has taken the latest step in its initiative to roll out artificial intelligence (AI) across Europe and promote the Digital Economy. On 28 September 2022, it published a proposal for the AI Liability Directive, which sets out two new rules for attributing liability in non-contractual, fault-based claims where an AI system is intrinsically involved.

  • Firstly, a right to evidence, covering the disclosure required of a defendant in a claim involving a “high-risk” AI system as defined in the AI Act – in summary, an AI system that may have safety implications or may affect fundamental rights (e.g. life, physical integrity and non-discrimination/equal treatment); and
  • Secondly, a (rebuttable) presumption of causality, to guide how and where fault should be attributed in claims for damages caused by an AI system, depending on its categorisation.

Until now, it has been unclear how claims for damages involving an AI system would be dealt with. The Directive is intended to bridge any potential compensation gap, so that claimants for damages caused by an AI system enjoy the same level of protection as those claiming damages where an AI system is not involved. It is hoped this will further encourage uptake of the technology, by giving businesses certainty as to how any claims would be dealt with and giving consumers comfort that they are protected if something goes wrong.

Article 3 – Right to evidence

For high-risk AI systems, the draft AI Act sets out certain documentation, information and record-keeping requirements for the operators involved in the design, development and deployment of the system. However, there is no right under the Act for a person injured by that system to access that information, which would be critical in substantiating a claim for compensation. Where the provider of the AI system has refused to disclose the relevant information, the proposed Directive would enable courts to order such disclosure (to the extent that it is necessary and proportionate), as well as the preservation of any evidence related to the claim.

To encourage disclosure, it is proposed that, where a defendant fails to comply with such an order, there will be a (rebuttable) presumption that it has not complied with a relevant duty of care, unless and until it submits evidence to the contrary.

Article 4 – Presumption of causation

The opaque “black box” manner in which AI systems produce outputs, together with their autonomous behaviour and complexity, creates challenges for applying existing fault-based liability rules where an AI system is interposed between a human act or omission and the resulting damage.

The Directive is intended to prevent such challenges from making it impossible to prove that a specific input, for which the potentially liable person is responsible, caused a specific AI system to produce an output that caused the damage – i.e. to prevent a potentially culpable defendant from shifting the blame to the AI system by hiding in the shadow of the black box. It is not, however, intended to apply where the AI system only provided information or advice which was taken into account by the relevant human actor, as in that case the damage can easily be traced back to the human’s act or omission.

The Directive creates a rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system (or its failure to produce an output) that caused the damage, where all of the following conditions are met:

  1. The claimant has demonstrated (or the court has presumed pursuant to Article 3) the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred;
  2. It can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
  3. The claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.

Where the claim relates to a high-risk AI system, there are further requirements (which relate back to the AI Act) to satisfy the first condition above: the claimant has to demonstrate that the defendant did not comply with certain requirements of the AI Act, including the use of high-quality training data sets, transparency and human oversight of the system, and appropriate levels of accuracy, robustness and cybersecurity. In addition, if the defendant can demonstrate that sufficient evidence and expertise is accessible to the claimant to prove the causal link, the presumption will not apply. Finally, for non-high-risk AI systems, the presumption will only apply where the court considers it excessively difficult for the claimant to prove the causal link.

Whilst the Directive goes some way to reassuring businesses by providing certainty as to how claims involving an AI system may be dealt with, the right to evidence is unlikely to be popular, as businesses may be required to disclose commercially sensitive information. The Directive does, however, propose measures to protect confidential information and trade secrets, and to limit disclosure to only what is necessary.

As a European Directive in a post-Brexit world, it will not apply directly in the UK. It will be interesting to see how the UK reacts – whether it borrows from this legislation, and how it updates and adapts its own product liability law, in step with the continent or otherwise, in the ever-evolving digital age.