Out-Law News

Liabilities arising from use of AI explored by UK experts

The entrance to the Supreme Court in Westminster, London, UK. Ultimately it will fall to the courts to decide where liability for AI harms falls. Nigel Harris/Getty Images.

Legal experts have highlighted the challenges businesses face in allocating liability for harms that might arise from the use of AI, but their paper also gives useful examples that could guide businesses’ – and judges’ – understanding of how existing English law might apply, according to technology law specialists.

Meghan Higgins and David McIlwaine of Pinsent Masons were commenting after the UK Jurisdiction Taskforce (UKJT) published a draft legal statement on liability for AI harms.

The paper, which is open for consultation until 13 February 2026, is not binding on the courts: the UKJT aims to provide clarity on how emerging technologies interact with the law as it stands, rather than to give a view on areas for reform. In this case, its analysis is designed to help businesses understand the circumstances in which they might be held liable – or be able to raise claims against others – for AI-related harms under the private law of England and Wales.

The statement proceeds on the basis that AI does not have legal personality in English law and that an AI system therefore cannot itself be held legally responsible for physical or economic harm. According to the UKJT, however, people and businesses can, in a range of contexts, be held liable for harms linked to the use of AI.

Meghan Higgins of Pinsent Masons said: “One of the most interesting aspects of the paper is the discussion of how the use of AI might give rise to uncertainty in a litigation context.”

“The authors explain that our current liability systems typically assign legal responsibility to the person or company whose voluntary action or inaction caused the harm. Because AI systems are autonomous, meaning their outputs have not been determined or programmed in advance, use of these systems could give rise to outcomes that were not anticipated by the parties involved in developing, training, and deploying the system. This could make it harder to determine which party is responsible for any harm that arises, and to what extent,” she said.

“The opacity of AI systems, meaning the difficulty in understanding and evidencing how they have arrived at an output, could create further difficulties in assessing causation and foreseeability in a litigation context. This will be particularly acute in a complex supply chain where an AI system could have been developed and trained by multiple actors,” Higgins added.

The question of liability will, the UKJT said, often depend on what the contracts between parties provide for. It said: “The extent of liability and ability to pass losses up the chain are typically determined by warranties, indemnities, limitations and exclusions.”

The UKJT said contracts will be particularly relevant in determining liability for economic harm, as the law provides that no such liability for economic harm will arise “unless there is a ‘special relationship’ between the parties involving one having voluntarily assumed a responsibility to the other” – which it said “will generally (but not always) involve a contract”.

The potential for physical harms to arise from use of AI, the UKJT said, is “most obvious in the context of ‘embodied AI’ such as in autonomous vehicles, medical robots, or assembly line machines” – where the AI output has “direct control”. However, it said physical harms could also arise in other contexts, such as where the use of AI in cancer detection results in “a false positive or a false negative”.

Where contracts do not address liability, there may nevertheless be a non-contractual duty on businesses or people to protect against AI-related harms, the taskforce said.

In most cases, liability will only arise if there is negligence involved – the UKJT said those involved in the development, deployment and operation of AI systems will not be liable for harms caused by those systems where there is no negligence on their part. However, it cited an exception under the UK product liability regime in the Consumer Protection Act 1987, which imposes ‘no fault’ liability for physical harm that results from defective products. At present, it is not clear how those rules apply to IT systems, but the Law Commission is consulting on potential reforms to the regime.

An employee’s negligent use of AI can also render their employer liable for harms arising from that use – even where the employer itself is free from blame – under the law on vicarious liability, the UKJT said.

Liability can also attach to a false statement made by an AI chatbot, under theories of negligent misrepresentation and defamation, it said.

Whether claims relating to AI harms can be sustained will depend on demonstrating that those harms were caused by the use of AI. The UKJT said, though, that the autonomous nature of AI and its opacity can make it difficult to understand precisely why a particular outcome occurred – and that expert evidence may need to be adduced to help establish causation.

David McIlwaine of Pinsent Masons said: “The UKJT’s statement provides an extremely helpful summary of how the use of AI might give rise to liability in a number of contexts and how the courts might allocate responsibility.”

“The rapid uptake of AI across many sectors means that AI use is likely to be firmly embedded in business practices before the courts have had the opportunity to consider these questions. The UKJT remains optimistic that the principles of private law in the UK are sufficiently flexible that courts will be able to deal with uncertainties as they arise,” he said.

“To the extent that AI poses special problems around causation, we may see the development of evidentiary presumptions and an increased need for experts who can help judges understand the particular AI systems in question. While businesses should be aware that the use of AI may pose unique risks in a litigation context, the most straightforward way to address them is at the outset of a relationship between the parties in the contract, particularly in warranty, liability, and exclusion provisions,” McIlwaine added.
