Out-Law News

Use of AI in litigation could itself become contentious, says expert

Expectations around the use of AI to support litigation could evolve to the point where failing to deploy AI in document review and disclosure processes – or doing so in a sub-standard manner – spurs satellite claims against professionals, an expert has said.

Caroline Hearn of Pinsent Masons raised the prospect after a taskforce of legal experts set out their views on liability for AI harms.

The draft legal statement published by the UK Jurisdiction Taskforce (UKJT), which is open to consultation until 13 February, is non-binding on courts. The UKJT aims to provide some clarity on how emerging technologies interact with the law as it stands, rather than to recommend areas for reform. In this case, its statement is designed to help businesses understand the circumstances in which they might be held liable – or be able to raise claims against others – for AI-related harms under the private law of England and Wales.

The statement proceeds on the basis that AI does not have legal personality in English law and that an AI system therefore cannot itself be held legally responsible for physical or economic harm. According to the UKJT, however, people and businesses can, in a range of contexts, be held liable for harms linked to the use of AI.

One of the contexts in which the taskforce considers that liability for AI harms could arise is professional negligence. Professional negligence describes the situation where professionals such as lawyers, architects, accountants or doctors fail to perform the obligations they owe to others with reasonable skill and care. When that happens, liability may arise.

Hearn said: “As AI becomes more widely adopted in professional services, professionals are seeing engagement terms evolve to reflect how these tools may be used on client matters. Some clients are comfortable with controlled use of AI in secure environments, while others prefer to prohibit it. The UKJT’s draft legal statement highlights issues that align with our experience, particularly around confidentiality, privilege and the secure deployment of AI tools.”

“Effective use of AI in professional contexts depends on a number of factors, including robust due diligence, high‑quality prompts, appropriate supervision and clear internal processes. As firms develop standardised prompts and workflows, these can improve efficiency and consistency – but they also carry risk. As with any standard form, an error can scale quickly across matters if approaches are not properly designed and monitored,” she said.

Hearn said that expectations around the professional standards applicable to AI use in litigation processes could evolve over time – with cost implications for those who do not meet them.

“In high‑volume exercises such as disclosure, where speed and proportionality are central, we may see parties agreeing AI‑based review methodologies,” Hearn said. “Where both sides consent to a particular approach, this could influence how the professional standard of reasonable skill and care is applied in that context.”

“The UKJT also raises the possibility – still emerging – that in future it may be regarded as negligent not to recommend the use of generative AI for disclosure where it offers a more efficient process. In many cases, though, this may play out commercially rather than legally, through clients being reluctant to pay for slower, more expensive manual review. Beyond efficiency, there is a further question – still highly speculative – of whether, in some circumstances, lawyers could face claims on the basis that AI could have identified documents that might have influenced the litigation outcome or an earlier settlement. These will be challenging arguments to run in practice, but they illustrate how expectations around the use of AI may evolve,” she said.

“More broadly, the UKJT emphasises that the professional standard of reasonable skill and care will continue to shift as AI tools advance. What is reasonable now may change rapidly as new capabilities become mainstream. Professionals will need to stay abreast of relevant technological and market developments, and any applicable industry guidance. Importantly, though, even a widespread market practice will not necessarily insulate professionals if a court later considers the approach to fall below the required standard,” Hearn added.

“As AI becomes more embedded in professional workflows, courts may also increasingly look to expert evidence to assess whether AI was used appropriately – or whether it should have been used – in a particular matter. The nature of those experts, and how their evidence is to be evaluated, is likely to develop as both the technology and professional practices continue to evolve,” she said.
