Out-Law News 6 min. read

AI laws inevitable but not right for today, says UK government


Every country in the world will eventually need to adopt new legislation to address “the challenges posed by AI technologies”, but it is not the right approach to implement new laws “today”, the UK government has said.

In a new AI policy paper, the government said legislating for AI would only make sense when understanding of the risk it poses “has matured”. It said that is not the case yet and has instead signalled that it will pursue a more flexible approach to AI regulation, in the short term at least, marking a point of major differentiation between the UK approach to AI regulation and that of EU legislators with the EU AI Act.

The government’s views were set out in its response to the AI white paper proposals it consulted on last year.

Public policy expert Mark Ferguson of Pinsent Masons said: “The ambition of the UK government is for its AI regulation to be agile and able to adapt quickly to emerging issues while avoiding placing undue burden on business innovation. The UK government’s response notes that the speed at which the technology is developing means that the risks and most appropriate mitigations are still not fully understood. Therefore, the government will not legislate or implement ‘quick fixes’ that could soon become outdated.”

“The response of the UK government to the white paper and its intended actions should be considered with the upcoming UK general election in mind. The approach of the Labour Party to AI regulation is something that businesses should also be tracking with a view to sharing their views with the party that, with recent polling in mind, looks most likely to form the next UK government,” he said.

Currently in the UK, a range of legislation and regulation applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use.

The government’s immediate approach to AI regulation involves retaining the sector-based approach to regulation, but it wants UK regulators to fulfil their regulatory functions as they relate to AI with due regard to five cross-sector principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. In its new response paper, the government confirmed that the principles will not be placed on a statutory footing at this time, and nor will regulators be placed under a statutory duty to have due regard to the principles yet, though it said it will keep those options under review.

The government welcomed the fact that some regulators have already “set out work in line with our principles-based approach”, citing guidance the Information Commissioner’s Office (ICO) has published on data protection and AI as one example. However, it said the public should have “full visibility of how regulators are incorporating the principles into their work” and, as a result, has asked UK regulators impacted by AI “to publish an update outlining their strategic approach to AI by 30 April 2024”. Those updates should include the regulators’ analysis of AI-related risks in the sectors and activities they regulate – and the actions they are taking to address these – and their plans and activities over the coming 12 months, it said.

To supplement the “context-based approach” to regulation it favours, the government is establishing a “central function” that will be tasked with monitoring and assessing AI risk across the whole UK economy and with supporting regulatory coordination and clarity.

According to its response paper, the government has already recruited a new “multidisciplinary team” to sit within the Department for Science, Innovation and Technology (DSIT) to monitor for cross-sectoral AI risk. The team includes people with expertise in risk, regulation, and AI, alongside others with backgrounds in data science, engineering, economics, and law.

Another ‘central function’ action planned is the development of “a cross-economy AI risk register”, with a view to establishing “a single source of truth on AI risks which regulators, government departments, and external groups can use”. It is further considering the development of a risk management framework for AI, similar to one developed in the US by the National Institute of Standards and Technology (NIST).

The government has also pledged £10 million of funding for regulators to help them “develop the capabilities and tools they need to adapt and respond to AI”. It said it will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits pertaining to AI.

More formal coordination of regulatory activities on AI is also planned. As well as publishing new guidance to help UK regulators interpret and apply the cross-sector principles on AI, a new steering committee – featuring government and regulator representatives – is to be established by the spring “to support knowledge exchange and coordination on AI governance”.

Further new initiatives to promote research and innovation and build public trust in AI were also announced, including plans for new introductory guidance on AI assurance to be issued this spring. A new pilot scheme, to be facilitated by the Digital Regulation Cooperation Forum, is also expected to become operational in the spring to offer AI innovators a chance to obtain advice from multiple UK agencies – including in respect of legal and regulatory compliance – before launching new products to market. The government added that it is also developing a plan for monitoring and evaluating the UK’s approach to AI regulation as AI technologies change.

The government’s response paper also provided an insight into how the UK’s approach to AI regulation may evolve over time.

The government said it will only legislate to address AI risks if it “determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way”; if it was “not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers”; and if it was “confident that we could mandate measures in a way that would significantly mitigate risk without unduly dampening innovation and competition”.

The government reiterated its intention to manage risk at the frontier of AI development and to continue to address this risk through international coordination, building on landmark agreements it forged last year – including one between leading AI developers and governments in 10 jurisdictions which provides for government testing of next-generation AI models before and after they are deployed. In this regard, it acknowledged that a context-based approach to AI regulation “may miss significant risks posed by highly capable general-purpose systems and leave the developers of those systems unaccountable” and said it expects “all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed”.

Among a suite of broader initiatives the government said it is pursuing to address AI-related risk currently, it said it is working closely with the Equality and Human Rights Commission (EHRC) and ICO to develop new solutions to address bias and discrimination in AI systems; that it is considering developing a new code of practice for cybersecurity for AI, based on National Cyber Security Centre (NCSC) guidelines; and that it would shortly open a call for evidence in relation to AI-related risks to trust in information, to address issues such as ‘deepfakes’.

The government added that it could also in future require suppliers of AI products and services to meet minimum good practice standards if they wish to win public contracts.

Technology law expert Sarah Cameron of Pinsent Masons said: “Finding the right balance between regulating for emerging risks and avoiding new rules that have a dampening effect on innovation is difficult. However, while government has signalled its intention to consult further in a number of areas, it is under pressure to act speedily – with members of a Lords committee just last week stressing that the importance of international collaboration on AI regulation must not hold up national policymaking.”

“The government’s non-statutory context-based approach to AI regulation stands in stark contrast to the broad risk-based approach to AI regulation being pursued under the EU AI Act, which businesses operating in the UK will need to familiarise themselves with too if operating on a cross-border basis. Given what the government has said in its response paper, the first form of legislative intervention specific to AI in the UK could come in respect of risks arising with highly capable generative AI systems and be targeted at a small number of providers of such systems,” Cameron said.

“Also unlike in the EU, where a bespoke new AI liability law is proposed as part of broader product liability reforms, the UK government is at this stage only exploring how existing liability frameworks and accountability through the value chain applies in the context of AI, with no immediate prospect of reform in this area,” she said.
