Out-Law / Your Daily Need-To-Know

OUT-LAW NEWS 2 min. read

AI governance gap ‘widening’, warns new white paper

Getty Images


Organisations face a widening governance gap between the implementation of artificial intelligence systems in their workflows and oversight of those systems' impact, a new white paper has warned.

The rapid growth of AI adoption has been seen across a host of business sectors, but research indicates many companies are struggling to keep pace with integration of governance processes around its usage.

The white paper, jointly authored by Pinsent Masons and Mozaic, a specialist consultancy with expertise in operating models, highlights how many companies are struggling to keep pace with the legislative and operational challenges of AI oversight even as adoption rates continue to rise, putting further strain on their ability to manage the transition and ongoing operations effectively.

Incidents such as the 2024 Air Canada case – in which the airline was held liable for a misrepresentation made by one of its chatbot services about refunds – have put the spotlight on the legal risks of allowing an oversight gap to widen.

Simon Colvin, a technology sector expert with Pinsent Masons and one of the authors of the white paper, said that while some of the governance failure examples may be several years old, they remain highly relevant.

“AI governance failures may not necessarily manifest themselves as regulatory breaches,” he said.

“In many organisations, the structural issues they reveal are not addressed. Early incidents involving biased algorithms or opaque decision-making systems are often dismissed as isolated technology failures.

“But in reality, what they expose are much deeper organisational challenges around governance, accountability, and operating model design.”

The white paper notes that many organisations remain structurally unprepared for AI integration, even as the tools are rolled out. Combined with low levels of AI literacy among employees, this means structural and systemic biases in the tooling go undetected. The paper cited as an example an Amazon recruitment tool, discontinued in 2018 after it developed a bias against female candidates as a result of the data it was trained on.

Organisations face a dangerous illusion of control over their AI processes, the research found, with employees in some cases encouraged to experiment with AI systems, and AI being integrated into workflows far faster than management understands or appreciates.

Numerous industry studies in recent years have also highlighted that while most companies claim to have an AI strategy in place, only a small minority have embedded the governance structures needed to manage the surrounding processes safely – increasing their exposure to legal and regulatory liability.

“As AI adoption continues to accelerate, the question is not whether these issues will reappear but whether organisations have meaningfully adapted their governance and operating models to prevent likely future occurrences from impacting them,” added Colvin.
