Out-Law News

AI liability addressed in fresh EU proposals



The European Commission has set out plans to update EU law to recognise, by default, a causal link between the fault of artificial intelligence (AI) system providers and the output, or lack of output, produced by their AI systems.

The draft new ‘presumption of causality’ is contained in a proposed new AI Liability Directive (29-page / 479KB PDF), which is designed to help consumers raise damages claims when something goes wrong with the operation of an AI system.


“AI presents a challenge for existing liability frameworks,” said technology law expert Sarah Cameron of Pinsent Masons, whose analysis of the EU proposal was highlighted by the BBC. “A major barrier to businesses adopting AI has been the complexity, autonomy and opacity – the so-called ‘black box’ effect – of AI creating uncertainty around establishing liability and with whom it sits.”

“There has long been a product liability framework which effectively imposes strict liability on manufacturers for defective products that cause physical harm to consumers. Its scope where AI is involved has been unclear because of the blurring of lines between products and services and the fact that AI systems generally involve a complex ecosystem, with actors intervening at different stages of their lifecycle,” she said.

Under the proposed new AI Liability Directive, the presumption of causality will apply only if claimants can satisfy three core conditions. First, the fault of an AI system provider or user must have been demonstrated, or at least be presumed by a court. Second, it must be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system, or the failure of the AI system to produce an output. Third, the claimant must demonstrate that the output produced by the AI system, or the failure of the AI system to produce an output, gave rise to the damage.

More detailed requirements for satisfying the first condition regarding fault are outlined in the draft directive in respect of AI systems classed as ‘high-risk’. The concept of high-risk AI is drawn from the AI Act, a separate piece of EU legislation that the Commission proposed last year and that is still being scrutinised by EU lawmakers.

For example, claimants would need to show that providers or users of high-risk AI systems have failed to comply with obligations they would be subject to under the AI Act. For providers, these obligations include requirements in relation to the training and testing of data sets, system oversight and system accuracy and robustness. For users – which, as under the AI Act, might be an organisation or a consumer – the obligations include using or monitoring the AI system in accordance with the accompanying instructions.

To support claimants in demonstrating fault, the draft AI liability directive provides scope for courts to order providers or users of high-risk AI systems to preserve and disclose evidence about those systems. The proposed legislation further incentivises disclosure by providing for the presumption of causality to be rebutted in circumstances where a provider or user can demonstrate that “sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link”.

The plans for an AI Liability Directive were set out by the Commission alongside a separate, though related, proposal for a new Product Liability Directive.

“These two policy initiatives are closely linked and form a package, as claims falling within their scope deal with different types of liability,” Cameron said.

“The draft new Product Liability Directive covers producers’ no-fault liability for defective products, leading to compensation for certain types of damages, mainly suffered by individuals. This AI Liability Directive proposal covers national liability claims mainly based on the fault of any person with a view to compensating any type of damage and any type of victim. They complement one another to form an overall effective civil liability system. Together these rules will promote trust in AI and other digital technologies by ensuring that victims are effectively compensated if damage occurs despite the preventive requirements of the AI Act and other safety rules,” she said.

Under the Product Liability Directive proposal, AI systems and AI-enabled goods would be classed as “products” and therefore fall subject to the directive’s liability regime. This means that, as with any other product, compensation would be available when defective AI causes damage without the injured person having to prove the manufacturer’s fault.

The proposal also makes it clear that not only hardware manufacturers but also software providers and providers of digital services that affect how the product works, such as a navigation service in an autonomous vehicle, can be held liable.

Wouter Seinen, Amsterdam-based expert in technology law, said: “On this point, the EU is deviating from the traditional position, where product liability was predominantly a concern for manufacturers and importers of hardware. Broadening the scope reflects the fact that software itself has become more prominent and that similar functionality can often be provided by specific equipment as well as by apps that run on the user’s device.”

The proposal further aims to ensure that manufacturers can be held liable for changes they make to products they have already placed on the market, including when these changes are triggered by software updates or machine learning. It also seeks to alleviate the burden of proof in complex cases, which could include certain cases involving AI systems, and when products fail to comply with safety requirements.

Cameron said: “Businesses will be unhappy about the disclosure and burden of proof provisions. However, the EU argument is that these fit neatly with the forthcoming requirements for placing high-risk AI systems on the market under the EU AI Act.”

The Product Liability Directive proposal also builds on liability provisions specific to online platforms that are set to be written into EU law under the Digital Services Act (DSA) in the coming weeks.


Under the DSA, online platforms could be held liable for breaches of consumer protection law where they present products or otherwise enable a transaction “in a way that would lead an average consumer to believe that the product is provided either by the online platform itself or by a trader acting under its authority or control”.

The European Commission has now proposed that, under the revised Product Liability Directive, online platforms could face the same liability as distributors for defective products sold via their platforms where they present the product, or otherwise enable the specific transaction, in a way that confuses the average consumer. Platforms that “promptly identify a relevant economic operator based in the EU” that should be held liable instead will, however, be able to escape liability.

Seinen said: “Platforms are being nudged to be more transparent on who the actual seller of the product is. The proposed linkage between the Product Liability Directive and the DSA is a game changer for platforms as they will have to put in place bespoke controls to shield themselves from liability for products they did not themselves manufacture, import or even touch.”

New EU rules on AI liability were trailed alongside the EU’s digital future strategy and AI white paper, which were published in early 2020.

The position in the UK is different. No bespoke AI liability rules are envisaged in the UK, though the government did set out its plans for sectoral, risk-based regulation of AI earlier this year – a move that diverges from the approach proposed in the EU with the AI Act.

 
