OUT-LAW NEWS 2 min. read

Organisations at risk of ‘public controversies’ if they fail in AI governance, warns report


Companies are increasingly putting themselves in the harsh public glare by failing to put AI governance in place, a new white paper has warned.

Where automated decision-making is involved, reputational damage from decisions seen as unfair or discriminatory can be as harmful as regulatory action.

The warning comes in a new white paper, jointly authored by Pinsent Masons and operations consultancy Mozaic, which highlights the growing risks for companies that fail to put robust and timely AI governance models in place while integrating new systems.


Simon Colvin, a technology expert with Pinsent Masons and co-author of the white paper, said poor preparation by organisations was increasingly leaving them in the spotlight.

“In an environment where public trust in automated decision-making remains fragile, organisations must ensure that AI systems are deployed with appropriate oversight and transparency,” he warned.

The white paper highlights one recent instance in which a leading Australian firm came under public scrutiny after a partner used generative AI tools to produce material for case studies that incorrectly suggested the firm had been involved in previous corporate scandals.

The incident, despite relating to internal rather than client matters, highlighted the ease with which AI-generated or, worse, hallucinated material can circulate, causing reputational damage at a professional level.

“The episode reinforced a broader lesson for organisations adopting generative AI technologies: governance frameworks must extend beyond formal AI systems to include how employees use generative tools in everyday workflows,” explained Colvin.

“Without clear policies, oversight, and quality controls, AI-generated content can quickly introduce significant reputational, operational, or legal risk.”

Algorithmically generated controversies, particularly around discrimination, can prove especially damaging for organisations that have not put adequate safeguards in place.

In 2024, the UK’s Equality and Human Rights Commission warned employers over AI deployment after concerns arose that artificial intelligence facial recognition checks for drivers to access a delivery app were racially discriminatory – particularly if they were used to suspend drivers’ access.

Meanwhile, in the Netherlands, a court struck down the Dutch government’s welfare fraud risk-tracking system for breaching fair balance rules under the ECHR, citing concerns about a lack of transparency and the large-scale data linkage it employed.

European regulators look to be stepping up enforcement against AI-driven recruitment tools and processes, increasing the risk of reputational damage for organisations that fall within their scope.

Across EU jurisdictions, the trend points towards courts and regulators denying organisations an easy pass to limit their liability by characterising the algorithmic output as just ‘advisory’, or by placing the decisive scoring logic in the hands of a third-party vendor.

“When AI systems are used within professional services or advisory engagements, organisations remain responsible for the accuracy and integrity of the outputs delivered to clients, whether those outputs are badged as advisory or placed in the hands of third parties,” warned Colvin.

“In practice, this means that traditional professional standards, quality controls, and contractual obligations must evolve rapidly to account for the use of generative AI tools within advisory workflows.

“Incidents such as these demonstrate that AI governance is no longer limited to experimental technology deployments; it now affects the core delivery of professional services and advisory work.”
