Out-Law News

Australian AI assurance framework for public sector an important benchmark for private sector


Australia’s new national framework for the assurance of artificial intelligence (AI) systems in the public sector is a reminder to the private sector of the importance of clear foundations for the safe and responsible use of AI, and helps set the standard and policy expectations for private enterprise, technology experts have said.

Released by the Data and Digital Ministers Meeting – a cross-jurisdictional group of ministers from Australia’s federal, state and territory governments – the ‘National framework for the assurance of artificial intelligence in government’ aims to align how the public sector uses AI systems.

The framework sets out practices that expand on Australia’s AI Ethics Principles, covering human, societal and environmental wellbeing, human-centred values, fairness, privacy protection and security, reliability and safety, transparency and explainability, contestability, and accountability.

The practices are based on five ‘cornerstones of assurance’ – AI governance, data governance, alignment with AI standards, procurement, and a risk-based approach – which the framework identifies as ‘valuable enablers’ of the safe and responsible use of AI.

Australia’s state and territory governments are expected to develop their own frameworks that address jurisdiction-specific considerations while remaining consistent with the national framework.

Veronica Scott, a cyber and data law expert at Pinsent Masons, said: “The release of the national framework is another important step in developing the public sector’s approach to the responsible adoption of AI. Assurance is key to delivering trustworthy AI and the adoption by government of a risk-based assurance model should help set the standard for the private sector, particularly in the absence of any dedicated legislation and while mandatory guardrails are still being developed.”

James Arnott, a technology law expert at Pinsent Masons, said: “While the national framework will apply to the public sector, this development is an important reminder for all parties in the private sector to consider the development of similar frameworks to ensure organisational consistency in managing the risks associated with the adoption and use of AI and their outcomes.”

Scott recently highlighted the importance of considering the privacy compliance risks involved with the adoption of generative AI tools, alongside the opportunities these technologies offer.
