Out-Law News Reading time: 3 min.

Global powers sign Bletchley declaration on AI safety


Leading global powers, including the US and China, have achieved consensus on the need for artificial intelligence (AI) systems to be designed, developed, deployed, and used in a manner that is safe.

The two countries are among 29 signatories of the ‘Bletchley declaration’, an international accord that recognises the need for AI development and use to be “human-centric, trustworthy and responsible”. The UK, EU, Australia, France, Germany, India, Singapore, and the UAE are among the other signatories of the declaration, which was agreed at the UK’s AI safety summit on Wednesday.

A central theme of the declaration was the need to sustain international collaboration on addressing AI safety risks. To that end, follow-up meetings to the UK summit have been arranged: the Republic of Korea will co-host a mini virtual summit on AI within the next six months, and France will host the next in-person summit in 12 months' time.

Technology law expert Sarah Cameron of Pinsent Masons said: “AI is at an inflection point as the reality of the risks and opportunities posed by the technology becomes increasingly apparent. The Bletchley declaration on AI safety shows Rishi Sunak has made his mark in establishing a global consensus on the opportunities, risks and need for international action, including scientific research on ‘frontier AI’. Key to the significance of this is the forward process for international collaboration with the meetings to be hosted by the Republic of Korea and France.”

“This is an important opportunity for nations to come together and put aside the political scramble on who leads the AI regulatory space,” she said.

A focus of the UK’s AI safety summit is the risks posed by frontier AI, which the UK government has defined as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”.

Signatories of the Bletchley declaration reached a common understanding on some of the specific risks that frontier AI poses.

“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models,” according to the declaration. “Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict.”

“We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent,” the signatories agreed.

Future efforts to address frontier AI risk agreed by the signatories will focus on “identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase” and “building respective risk-based policies across our countries to ensure safety in light of such risks”.

To further that agenda, the signatories have agreed to “support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration … to facilitate the provision of the best science available for policy making and the public good”.

Ahead of the UK AI safety summit this week, experts at Pinsent Masons highlighted the need for governments and regulators not just to look to the future but to come together and clarify how existing legislation and regulation around the world applies to the use of AI systems today, given the technology’s increasing proliferation.

The signatories to the Bletchley declaration agreed that it is important to consider “a pro-innovation and proportionate governance and regulatory approach” that achieves an appropriate balance between AI benefits and risks. They said that approach could include “classifications and categorisations of risk based on national circumstances and applicable legal frameworks” as well as the development of “common principles and codes of conduct” at an international level.
