AI raises ‘challenges’ with existing product liability law, study finds

Out-Law News | 08 Jun 2022 | 2:49 pm | 3 min. read

UK product liability laws need to be updated to address the use of artificial intelligence (AI), an expert has said.

Katie Hancock of Pinsent Masons said the UK risks being left behind unless reforms are implemented soon, with the European Commission expected to put forward proposals for new EU legislation on AI liability.

Hancock was commenting after a study commissioned by the Office for Product Safety and Standards (OPSS) found that the use of AI in consumer products can “challenge the regulatory framework for both product safety and liability”.

Hancock said: “The publication by the OPSS of this report serves to highlight the fact that legislation is struggling to keep pace with technological development. The General Product Safety Regulations are now 17 years old, and the Consumer Protection Act is 35 years old. Neither was developed with modern smart or digital products in mind.”

“The European Commission recognises the unsuitability of product liability and safety legislation for the digital age and is currently considering an overhaul. The UK has launched its own call for evidence seeking views on possible changes to its product safety regime, including in relation to artificial intelligence, but it risks being left behind if it does not move quickly,” she said.

“It is in the interests of businesses and consumers alike that product safety and product liability legislation is fit for purpose. Consumers require to be protected, and businesses require to understand the legislative and regulatory framework within which they must operate. If the UK falls behind its European counterparts, this will inevitably increase costs for businesses seeking to operate in both markets, and has potential to make the UK a less attractive place to do business,” Hancock said.

Researchers at the Centre for Strategy and Evaluation Services (CSES) who completed the study on the impact of AI on product safety for the OPSS, said that while existing legislation and supporting mechanisms for monitoring product safety “are applicable and sufficient” for “many existing AI consumer products”, challenges do arise in relation to “more complex AI systems”.

“The characteristics of more complex AI systems, in concert with general technological trends, pose challenges across all elements of the regulatory regime, including product safety and liability-related legislation, market surveillance regimes, standardisation, accreditation and conformity assessment,” according to the study.

“The key characteristics of AI systems … include mutability, opacity, data needs and autonomy. The general market trends of relevance include: the blurring of the lines between products and services; the increasing ability for consumer products to cause immaterial as well as material harm; the increasing complexity of supply chains for consumer products; and issues related to built-in obsolescence and maintenance throughout a product’s lifecycle,” it said.

“Considering the legislative framework for product safety and liability, more complex AI systems, as well as general technological and market changes, challenge many of the definitions detailed by these laws. More specifically, it is not clear to what extent these developments fall within the existing definitions of product, producer and placing on the market, as well as the related concepts of safety, harm, damages, and defects,” it said.

“Furthermore, the characteristics of AI systems, the general trends highlighted, and the lack of clarity around the applicability of existing legal definitions and concepts, bring additional impacts. These include a lack of legal certainty for economic operators involved in the manufacture of AI driven consumer products, as well as a need to improve the skills and knowledge of regulatory bodies, such as MSAs [market surveillance authorities] and conformity assessment bodies, on AI systems,” the study said.

EU legislators are currently scrutinising proposals for a risk-based approach to the regulation of AI systems on a cross-sector basis. The EU AI Act is separate from the potential reforms to EU product liability laws, also under consideration, which would account for the use of AI.

The European Parliament recently voted to endorse calls made by a special committee it established on AI in a digital age (the AIDA Committee) for reforms to EU laws on liability to account for the use of AI. MEPs do not believe “a complete revision” of the existing laws, such as those concerning product liability, is necessary, but they do think “specific and coordinated adjustments to European and national liability regimes are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation”.

“While high-risk AI systems should fall under strict liability laws, combined with mandatory insurance cover, any other activities, devices or processes driven by AI systems that cause harm or damage should remain subject to fault-based liability,” the AIDA Committee said. “The affected person should nevertheless benefit from a presumption of fault on the part of the operator, unless the latter is able to prove that it has abided by its duty of care.”

The European Commission, which is responsible for tabling formal proposals to change EU law, said in 2020 that it believes existing legislation in areas such as liability, product safety and cybersecurity “could be improved” to better address risks around the use of AI. A Commission consultation on adapting liability rules to the digital age and AI closed in January this year.