Out-Law News

Strict liability examined for high-risk AI

EU policy makers have been urged to introduce a new strict liability regime to address increased risks of harm arising from the operation of artificial intelligence (AI) systems.

The recommendation is contained in a report prepared for the European Commission by an expert group it convened to explore liability and new technologies. The Commission has committed to "put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence" early this year.

The report concluded that there is no need to create a new legal personality to account for autonomous systems, and it made clear that existing rules on liability do not need to be rewritten wholesale to accommodate the use of AI. However, the expert group said "certain amendments" are necessary to address the "specific characteristics" of the technology and its applications, and the potential for the allocation of liability to be "unfair or inefficient".

Technology law expert Sarah Cameron of Pinsent Masons, the law firm behind Out-Law, said she agreed with the expert group's conclusion that, for the purposes of liability, it is not necessary to give autonomous systems a legal personality.

"Harm caused by autonomous systems can or should be attributable to existing legal personalities," Cameron said. "There needs to be human accountability for autonomous systems given that while we may not know how autonomous systems arrive at their findings, we teach them how to learn. Moreover, creating a new legal personality would be highly controversial and raises more issues than it solves. What obligations and rights would that legal personality have? How would the issue of assets or funding be addressed to deal with civil liability?"  

Under the proposals outlined, the liability regime applicable to each use of AI would depend on the level of risk posed. A mix of strict, fault-based and vicarious liability regimes was recommended. The expert group said "it is impossible to come up with a single solution suitable for the entire spectrum of risks".

One of the scenarios addressed in the report is how liability is established where there is collaboration on a contractual or similar basis "in the provision of different elements of a commercial and technological unit". The concept of such a 'unit' would depend on whether there is joint or coordinated marketing of the different elements, the degree of their technical interdependency and interoperation, and the degree of specificity or exclusivity of their combination.

Where those factors are present, all parties in the collaboration are to be considered jointly and severally liable to the victim if the victim can "demonstrate that at least one element has caused the damage in a way triggering liability but not which element".

Cameron welcomed the proposed solution, highlighting the challenge of identifying where fault lies in such a "complex ecosystem".

"The expert group's proposed solution in this context is constructive and inspired," Cameron said. "This will incentivise the contractual division of responsibility between the parties up front and reduce the need for litigation."

A framework of strict liability would apply to the operation of AI in "non-private environments" and where its use "may cause significant harm", the expert group said. Either the operator or producer of AI systems would be held strictly liable for harm caused in such cases.

"Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation," the report said. "If there are two or more operators, in particular the person primarily deciding on and benefitting from the use of the relevant technology (frontend operator) and the person continuously defining the features of the relevant technology and providing essential and ongoing backend support (backend operator), strict liability should lie with the one who has more control over the risks of the operation."

"The producer should be strictly liable for defects in emerging digital technologies even if said defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology," it said.

The expert group said that existing defences and statutory exceptions from strict liability "may have to be reconsidered" to account for the use of AI, but it said producers should not be able to benefit from "a development risk defence".

Under the proposed reforms, the current position where victims need to prove what caused them harm would be retained in the case of harm caused by AI systems, but the burden of proof would be reversed in certain circumstances. This includes where it has been "proven that an emerging digital technology has caused harm" and "there are disproportionate difficulties or costs pertaining to establishing the relevant level of safety or proving that this level of safety has not been met".

The burden of proof would also be reversed if producers fail to enable the logging of data capable of confirming faults in the operation of the technology.

"There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design), if such information is typically essential for establishing whether a risk of the technology materialised, and if logging is appropriate and proportionate, taking into account, in particular, the technical feasibility and the costs of logging, the availability of alternative means of gathering such information, the type and magnitude of the risks posed by the technology, and any adverse implications logging may have on the rights of others," the expert group said. "The absence of logged information or failure to give the victim reasonable access to the information should trigger a rebuttable presumption that the condition of liability to be proven by the missing information is fulfilled."

Among its other recommendations, the expert group said operators of AI should have to comply with "an adapted range of duties of care". Such duties should include choosing the right system for the right task and skills, monitoring the system, and maintaining the system. To underpin this, the group also advised that producers should be required to "design, describe and market products in a way effectively enabling operators to comply with the duties".

The expert group's report was discussed at a meeting of the European Parliament's legal affairs committee last week.
