Out-Law News
17 Jun 2025, 9:46 am
A recent decision highlights that artificial intelligence (AI) must not be used as a replacement for thorough analysis and verification of the case law relied upon to support legal arguments, experts have said.
Lucia Doran and Meghan Higgins, technology experts at Pinsent Masons, were commenting following an English High Court judgment. The decision addressed two cases concerning the duties that lawyers owe to the court and the “actual or suspected” use of AI by lawyers to generate legal documents or arguments without the outputs then being checked.
Doran said: “It is important to recognise that whilst AI may be a useful tool to speed up the initial stages of legal research and can help point practitioners in the right direction, it cannot be used as a replacement for substantive analysis of the authorities; outputs must be checked for accuracy. Those with conduct of the litigation must also ensure that their juniors understand the limitations of AI, and their duties to the court, and independently verify any legal research conducted with the assistance of AI using authoritative legal sources.”
In the first case, Ayinde, the statement of grounds, which had been settled by a junior barrister, contained a misstatement of section 188(3) of the Housing Act 1996 and a number of citations to cases which did not exist. Upon receipt of the grounds, the junior barrister was asked to provide copies of these “cases”, but instead drafted an email claiming the errors were “citation errors” which could be corrected on the record before the hearing. The barrister maintained that AI had not been used to assist with the legal research.
The second case, Al-Haroun, revolved around witness statements from both a client and his lawyer which relied on numerous authorities that either did not exist, did not contain the passages quoted, did not support the propositions for which they were cited, or had no relevance to the subject of the application. The citations had been generated through research using AI tools, the output of which the solicitor had not checked for accuracy.
In its decision, the court recognised that there needs to be “an appropriate degree of oversight” and that lawyers must be mindful that their use of AI does not infringe upon their “professional and ethical standards”. The court stated that freely available tools, such as ChatGPT, are “not capable of conducting reliable legal research”: whilst they may produce seemingly plausible responses to prompts, these may turn out to be incorrect, as the AI may hallucinate case law. The court warned that the misuse of AI has “serious implications for the administration of justice and public confidence in the justice system”, and that those who use AI for legal research therefore have a duty to verify its output before using it in legal documentation or for legal advice. The court highlighted that responsibility for ensuring the accuracy of material submitted in proceedings and maintaining public confidence rests not only with those tasked with carrying out the research but extends to supervising lawyers and up to heads of chambers and managing partners of law firms.
Doran said: “To mitigate the risk of similar situations arising in the future, those with managerial responsibilities within law firms, chambers and in-house legal teams should ensure that they publish policies and guidance on the responsible use of AI in providing legal advice and offer training for employees about how to conduct AI-assisted research and other AI-assisted tasks. This is particularly important given the court’s warning that in future cases the court may inquire as to whether those with leadership responsibilities have fulfilled their duties and have implemented appropriate measures.”
In both cases, the court held that contempt proceedings did not need to be initiated. However, the court stressed that the decision not to initiate contempt proceedings against the junior barrister in the Ayinde case should not be viewed as a precedent as it was fact-specific.
Higgins said: “Whilst in both these cases the court did not initiate contempt proceedings, this may not hold true in future cases if the court considers that awareness of the risks has been sufficiently communicated to the legal industry and the individual in question. It is apparent from the decision that the courts expect the lawyers appearing before them to understand that relying on AI may undermine their professional duties to the court. The decision highlights a range of sanctions that the courts may impose on solicitors, including admonishment, referral to the regulator, and contempt proceedings, and we would expect the courts to exercise some of those powers as similar cases arise in the future.”