
Out-Law News

Trust in AI: report sets out practical steps


A new report sets out real, practical steps that developers of artificial intelligence (AI) can take to build trust in the use of their technology, an expert has said.

Technology law expert Sarah Cameron of Pinsent Masons, the law firm behind Out-Law, said much of the focus to date has been on the development of high-level principles to govern how AI is used, including in relation to ethical use and data protection. Industry will welcome those principles being translated into “a robust ‘toolbox’ of mechanisms to support the verification of claims about AI systems and development processes”, she said. Cameron said the increasing reliance on AI for “high-stakes” tasks heightened the necessity for change.

Cameron was commenting after 58 experts from industry and academia, based across the UK, Europe and North America, published a new report on mechanisms for supporting verifiable claims in AI.

“It is an international effort with contributor experts from multiple countries, which is really encouraging for promoting a more united, less duplicative approach,” Cameron said.

The report recommended 10 interventions to move toward trustworthy AI development, with a focus on providing evidence on safety, security, fairness and privacy protection. 

The interventions operate at the institutional, software or hardware level, and each addresses a specific gap which the authors said was preventing effective assessment of developers’ claims.

Recommended interventions include the development of third-party auditing to create an alternative to developers’ self-assessment of the claims they make about their AI.

The report also recommended that organisations should run “red-teaming” exercises to explore risks associated with systems they develop, and share best practices and tools. Developers should also strengthen the incentives to discover and report flaws in AI systems, and share incident information to improve the understanding of how AI systems can behave in unexpected or undesired ways, it said.

For software mechanisms, the report said there should be audit trails and accountability for high-stakes AI systems, which are those involved in decision making where human welfare may be compromised. It suggested organisations developing AI, as well as funding bodies, should support research into the interpretability of AI systems, with a focus on supporting risk assessment and auditing.
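
For illustration only, the sketch below, which is not taken from the report, shows one way an audit trail for a high-stakes decision system might be kept: each record is timestamped and chained to the previous record by a cryptographic hash, so later tampering is detectable. The AuditTrail class and the credit-decision example are hypothetical.

```python
import hashlib
import json
import time


class AuditTrail:
    """Minimal tamper-evident audit log: each entry is chained to the
    previous one by a SHA-256 hash, so any later alteration is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # placeholder hash for the first entry

    def record(self, model_version: str, inputs: dict, decision: str) -> dict:
        # Build the entry, link it to the previous entry, then seal it with a hash.
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode("utf-8")
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the hash chain to confirm no entry has been modified."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


# Hypothetical example: log a single credit decision, then verify the chain.
trail = AuditTrail()
trail.record("credit-model-v2", {"income": 42000, "loan": 10000}, "declined")
assert trail.verify()
```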

The report said developers should develop, share, and use suites of tools for privacy-preserving machine learning that include measures of performance against agreed standards.
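
As a rough sketch of what such a tool might look like in practice, the example below uses Opacus, a PyTorch library for differentially private training; the choice of library, model and parameters is illustrative and is not drawn from the report. The privacy budget (epsilon) it reports is the kind of quantitative measure that could be checked against an agreed standard.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and synthetic data, purely for illustration.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

# Wrap model, optimiser and data loader so training is differentially private.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # noise added to clipped per-sample gradients
    max_grad_norm=1.0,      # per-sample gradient clipping threshold
)

criterion = nn.CrossEntropyLoss()
for features, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()

# Report the privacy guarantee achieved so far: the verifiable claim.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"Trained with (epsilon={epsilon:.2f}, delta=1e-5) differential privacy")
```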

When it comes to hardware, the report said industry and academia should work together to develop hardware security features for AI accelerators, or otherwise establish best practices for the use of secure hardware in machine learning contexts.

There should be better reporting of the computing power usage associated with AI, and labs should report on ways to standardise that reporting, the experts said. Government funding bodies should also substantially increase funding for computing power resources for researchers in academia and civil society, to improve those researchers’ ability to verify claims made by industry.
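
As an illustration of the sort of figure such reporting might contain, the sketch below applies a common rule of thumb from the machine learning literature, roughly six floating-point operations per model parameter per training token, to a hypothetical model; the heuristic and the numbers are assumptions and are not taken from the report.

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations,
    using the rough 6 * parameters * tokens heuristic for transformer training."""
    return 6.0 * parameters * tokens


def flops_to_petaflop_days(flops: float) -> float:
    """Convert FLOPs to petaFLOP/s-days, a unit often used when reporting compute."""
    return flops / (1e15 * 86400)


# Hypothetical 1-billion-parameter model trained on 20 billion tokens.
total = training_flops(1e9, 20e9)
print(f"~{total:.2e} FLOPs, ~{flops_to_petaflop_days(total):.1f} petaFLOP/s-days")
```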

Cameron said the mechanisms in the report would enable incremental improvements instead of providing a decisive solution to verifying AI claims, and that collaboration would be central to the success of the interventions.

Additional reporting by Beth Pendock of Pinsent Masons.
