Out-Law Analysis
20 Nov 2025, 3:29 pm
Those involved in courts and arbitration proceedings are increasingly concerned about the use and misuse of artificial intelligence (AI). Case law and judicial guidelines in places such as California could provide some food for thought for civil law jurisdictions grappling with these issues, including in the Middle East.
AI-generated outputs are becoming more common in legal proceedings around the world as lawyers and parties use AI to speed up legal research, support legal arguments and even draft witness evidence and opening and closing submissions.
Some courts and bodies have responded by issuing guidance to help practitioners and parties navigate the use of AI responsibly. Ultimately, this guidance seeks to answer the question of who bears liability when AI-generated evidence leads to erroneous outcomes.
This was the central question posed during a recent panel discussion at the inaugural Egypt Arbitration Days, held in Cairo in October of this year. The answer is complex and of course depends on the specifics of each case.
Responsibility may lie with lawyers, particularly where legal misinterpretations or unverified legal information is concerned. Errors in AI-generated summaries can lead to misstatements in witness evidence, and clients can also be held responsible for factual inaccuracies. In the US, legislation has even been proposed that, if passed, would hold AI developers accountable for AI-related harms.
To date, certain common law jurisdictions, California in particular, have been more active in developing specific guidelines and case law on AI use. As a Californian, I have found it gratifying to observe from afar how this jurisdiction, already such a leader in tech innovation, continues to be at the forefront of developing rules on AI use in legal proceedings.
In one notable recent employment case, Noland v. Land of the Free, L.P., California’s Court of Appeal published an opinion addressing AI “hallucinations” in court filings. The court imposed a $10,000 sanction on the lawyer who filed two appellate briefs containing fabricated case citations generated by ChatGPT. Intriguingly, the court also declined to award legal fees or costs to opposing counsel, because opposing counsel had failed to detect the fake citations or to report them to the court.
Across the US, at least 39 federal judges to date – including in California – have issued standing orders regulating AI use in their courtrooms, typically requiring disclosure of AI use in filings, requiring verification of citations and legal arguments, and imposing sanctions for misuse or failure to disclose AI involvement. A number of federal courts are also exploring ‘bench cards’ and revised evidence rules to help guide judges on AI-related matters. However, this latest opinion in California serves as a caution to legal counsel about the very real risks of misusing generative AI in legal filings, as well as the need to promptly report any suspected unverified AI use to the courts.
Global Arbitration Review (GAR) notes that institutions and practitioners are increasingly concerned about AI disclosure, but that approaches vary significantly across jurisdictions and institutions. While the courts in some jurisdictions, such as England & Wales and New Zealand, have adopted a more relaxed position, some individual judges in the US require lawyers to certify from the outset of a case whether AI was used in preparing submissions and to confirm human verification of any AI-generated content.
GAR also recognises that most major arbitration rules do not currently address AI use or its disclosure. One exception is the Silicon Valley Arbitration and Mediation Center (SVAMC) in California, which published its first set of ‘Guidelines on the use of AI in arbitration’ (PDF 22 pages / 704 KB) in April 2024, noting that decisions on the use of AI tools would be made on a “case-by-case basis”.
Although guidelines governing AI misuse and disclosure are still very much evolving, GAR says these issues are “likely to arise for agreement between parties, and failing that, before tribunals imminently.”
Unlike the US, civil law jurisdictions, including those in the Middle East, currently lack robust legal frameworks governing the use of AI evidence in legal proceedings.
However, as a recent case demonstrates, concerns over AI misuse are just as present in the Middle East. On 12 November 2025, the Qatar Financial Centre (QFC) Civil and Commercial Court delivered a landmark judgment in the case of Jonathan David Sheppard v Jillion LLC (PDF 10 pages / 594 KB) in which the judge criticised the citation of “fake cases” in submissions to the court.
The case centred on an employment claim brought by Jonathan David Sheppard against Jillion LLC. The defendant’s legal representative – an unnamed Dubai-based lawyer – submitted an application during the proceedings to extend the time to file a defence. To support this application, the lawyer cited several purported cases, but it soon became apparent that the cases did not exist.
In a witness statement, the legal representative clarified that the citations were provided “in error inadvertently…due to reliance on secondary sources / incomplete case law databases by mistake” and apologised.
The judge, noting that this was the first occasion upon which this specific issue of AI misuse had arisen before the court, held the lawyer in contempt of court and in breach of Article 35.2 of the Rules and Procedures of the Civil and Commercial Court of the Qatar Financial Centre for citing “fake cases”. The court decided against publishing the name of the lawyer, stating that it would “inflict on him a disproportionately harsh penalty”, particularly “given that this is the first case where this has happened in this Court”.
This judgment highlights the growing concern over the use of AI-generated legal research, including secondary sources, in litigation across the Middle East. While the QFC court said it was not opposed, in principle, to lawyers using AI in litigation to improve efficiency, it emphasised the need for guidance to be provided to, and followed by, lawyers to ensure the accuracy of submissions made to the court. The court also said it would not allow such errors to be concealed by anonymity in future.
This judgment may also provide a useful blueprint for how other courts and tribunals across the region could approach this complex issue.
Although this was the first time the QFC court had encountered this issue, interest in harnessing AI, particularly to boost efficiency in dispute resolution, is not new in the region, and Dubai is a case in point.
In June 2025, the Dubai International Arbitration Centre (DIAC) partnered with Jus Mundi and Jus AI to integrate AI into its case management and publication processes. This initiative includes using AI for legal research, decision publication, and launching a virtual academy to train practitioners in AI use.
Commenting on the launch of the initiative at the time, Jehad Kazim, Executive Director of DIAC, said the partnership would “bring significant value to the international arbitration community.”
It is clear that, when used judiciously, AI has considerable potential to deliver benefits and shape arbitration practice.
In the Kingdom of Saudi Arabia (KSA), there is no clear guidance on AI misuse, although Rule 25(2) of the Saudi Center for Commercial Arbitration Rules gives some sense of how tribunals might approach AI, as they have other technologies.
It states: “In establishing procedures for the arbitration, the Arbitral Tribunal and the parties are encouraged to consider how technology, including but not limited to electronic communications, e-filings, and the electronic presentation of evidence, could be used, including to reduce the environmental impact of the arbitration.”
It adds that in all cases “the Arbitral Tribunal shall determine the extent to which technology shall be used in view of all circumstances of the case, including any reasoned objection by any party that the use of such technology would impair its ability to present its case.”
However, Saudi Arabia currently lacks a formal legal framework for the evaluation and admissibility of AI-generated evidence in judicial and arbitration proceedings. This gap raises practical challenges as, undoubtedly, courts and tribunals will increasingly encounter cases involving AI-generated or AI-assisted data.
In the absence of such a framework, the Saudi Data and AI Authority’s (SDAIA) principles – namely fairness, transparency, accountability, human oversight, reliability and safety – provide a robust foundation that could eventually shape future guidelines on AI use in the courts.
The further application of the SDAIA principles to creating guidelines around the use of AI in arbitration would be a welcome development and would reinforce the Kingdom’s commitment to ethical and accountable AI governance.
The international nature of arbitration means that common law practices related to AI use will inevitably influence protocols developed in civil law jurisdictions too, including in the Middle East.
International guidelines may prove instrumental in bridging gaps between the two legal systems and, in turn, help them develop their own best practice. In September, the Chartered Institute of Arbitrators (CIArb) published a Guideline on the Use of AI in arbitration. The guidance outlines the benefits and risks of using AI in arbitration; sets out general recommendations on its use; addresses arbitrators’ powers to give directions and make rulings on the use of AI by parties; and addresses the use of AI by arbitrators themselves.
In November, the International Centre for Dispute Resolution (ICDR) went one step further by launching an AI-based arbitrator for documents-only construction cases. Developed in collaboration with QuantumBlack, AI by McKinsey, the AI arbitrator has reportedly been trained on more than 1,500 construction awards and will use a structured legal prompt library and conversational AI to deliver draft awards.
Whether other jurisdictions will follow suit is unclear, particularly if their domestic arbitration legislation stipulates that arbitrators must be human. In the Kingdom, for example, arbitrators are currently required to hold a university degree in Sharia or law. Draft amendments under consultation in 2025 propose removing this strict requirement, allowing parties to appoint arbitrators without a legal background, provided they meet other conditions.
When it comes to AI use in arbitration, what is clear is that more guidance and more training will be welcome. Jurisdictions across the Middle East will be watching closely as common law jurisdictions continue to respond and rise to the challenges posed by AI. These civil law systems, which often rely heavily on written evidence and expert reports, could adopt similar protocols to ensure AI-generated content is rigorously scrutinised, verified and held to the same standard as human expert opinions.