Out-Law News
30 Oct 2025, 4:50 pm
A recent High Court judgment in Ireland has clarified that, in certain circumstances, legal correspondence may legitimately be answered using AI.
The case concerns a cryptocurrency influencer, Eduardo Farina, who impersonated a “blue tick” verified account on the social media platform X. In July 2025, X suspended his account, saying he had engaged in “ban evasion” and breached the platform’s monetisation standards.
Farina initiated proceedings in Ireland’s High Court in September and initially secured an interim order against X, arguing that the impersonation was designed to test the robustness of X’s safeguards against impersonation of verified accounts. However, the High Court has now ruled that there is no reason the ban should not stand ahead of a full hearing on the underlying issues, given the influencer’s “self-acknowledged past history of impersonation” and the absence of a sufficiently strong financial case for a mandatory injunction in light of his “diversified revenue streams”.
In his judgment, Mr Justice Barrett also addressed a complaint raised by Farina’s legal team at the interim hearing, which alleged that correspondence from X’s legal team appeared to have been “generated by a computer system”, possibly using AI. He declined to “disapprove” of the practice, stating there was “no legal requirement” for a solicitor’s letter to be answered by a human and adding that it was reasonable for a company of X’s scale to use computer or AI technology for correspondence. The judge said he saw “no basis for judicial criticism” of the use of AI in this context and “every reason why” X “might wish legitimately to do so.”
The court’s approach contrasts with another recent Irish High Court decision, in which a judge expressed concerns about the use of AI in a harassment case after an ex-husband used an online AI tool to generate a document in support of his case. The judge said the use of the tool risked breaching an anonymity order, as the man had entered details into it that could potentially identify the alleged victims.
There are growing concerns over the use of AI in legal submissions in court proceedings worldwide. In a recent case in Quebec, an individual was fined for “inappropriate use of artificial intelligence” after submitting AI-generated hallucinations to the court. The judge described the conduct of the defendant, who had no legal representation, as “highly reprehensible” after eight instances of “non-existent citations, unissued decisions, irrelevant references, and inconsistent conclusions” were discovered in his submissions.
Earlier this year, the High Court in England and Wales ruled that legal practitioners must not use AI as a substitute for thorough analysis and verification of the case law relied on to support legal arguments, although it acknowledged that AI may be a useful tool for speeding up the initial stages of legal research.
Ireland’s judiciary has not issued specific guidelines on the use of AI. In April 2025, the UK judiciary published updated guidance on the use of AI, which advises judicial office holders and legal professionals on responsible use, risk awareness and the need to ensure accuracy and confidentiality when using AI tools. The guidance also recognises the growing relevance of AI in the legal system and sets out expectations for its use in court and tribunal proceedings.