Out-Law News

Ireland’s Judicial Council publishes AI guidelines

The guidelines recommend Irish judges don't use genAI tools like ChatGPT for work purposes on personal devices. Cheng Xin/Getty

The Judicial Council of Ireland has published new guidelines to help its judiciary better understand the potential uses, benefits and limitations of generative AI (genAI).

Although the guidelines acknowledge that AI tools have been an increasingly integral part of daily life both inside and outside the courtroom for some time, they note that the advent of genAI tools in particular has raised new challenges for judges, legal parties and the courts as a whole.

The paper (11 pages / 296 KB) offers judges examples of best practice and guidance on navigating the use of genAI tools in court proceedings by others, including lawyers and unrepresented parties.

The guidelines come as other legal services bodies in Ireland also issue their own guidance on the use of AI tools. On 30 October, Ireland’s Workplace Relations Commission (WRC) published guidance on the use of AI tools following a recent discrimination claim in which a former Ryanair employee submitted AI-generated material that was “rife with citations that were not relevant, mis-quoted and in many instances, non-existent”. On 11 November, the Law Society of Ireland also published guidance to help solicitors navigate the use of AI ethically and effectively.

The Judicial Council’s new guidelines are both “timely and welcome”, said Lisa Carty, a litigation expert with Pinsent Masons in Dublin. “With the increasing use of AI-generated material in court submissions – as highlighted by recent cases – clear guidance is essential to help judges navigate both the opportunities and risks presented by genAI,” she said.

Carty added that the guidance would also help inform judges’ own use of genAI. “It is of note that the guidance recommends that generative AI tools should only be used for routine and administrative tasks, and that judges must ensure that any AI-generated output is checked and verified for accuracy,” she said. “The guidance is also clear that no confidential or privileged information should be entered into public AI platforms.”

The guidelines offer practical tips for judges to maintain security and confidentiality when using genAI, including using only work, not personal, devices when accessing AI chatbots for work purposes. In general, they recommend that judges switch off their chat history when using chatbots or regularly delete interactions to limit potential data breaches.

They also remind judicial office holders that using such chatbots may raise copyright issues and that all responsibility for compliance with copyright laws remains with the judge. AI-generated ‘deepfake’ content is another potential challenge: the Judicial Council says it is “appropriate” for judges to continue to question the use of genAI where they have concerns over the validity of evidence.

Overall, the guidelines state that judges must ensure that using genAI tools is “consistent with the judiciary’s overarching obligation to protect the independence and integrity of the administration of justice and the protection of fundamental rights.”

In the event of unintentional disclosure of private, confidential, suppressed or privileged information, the Judicial Council says judges should notify their superiors and report breaches both to the relevant supervisory authority for the courts and to the Courts Service’s data protection officer.

Ireland’s is not the first judiciary to publish specific AI guidelines for judicial office holders. In April, the UK judiciary published updated guidance on the use of AI, which advises judges and legal professionals on responsible use, risk awareness and the need to ensure accuracy and confidentiality when using AI tools.
