Out-Law News

Sectoral, risk-based regulation of AI proposed in the UK


UK regulators are to retain responsibility for setting specific requirements businesses must meet when using artificial intelligence (AI) systems, but their regulatory activities are to be shaped by the government’s definition of AI’s characteristics and overarching cross-sectoral principles.

In a new paper that sets out its policy on the regulation of AI, the UK government said it is opposed to implementing a single, fixed set of rules for AI. Instead, it favours a regulatory framework that is flexible, risk-based and focused on the use of AI rather than the technology itself.

Under its plans, the use of AI would be regulated by existing UK regulators – such as the Competition and Markets Authority, the Information Commissioner’s Office and the Financial Conduct Authority – under their own respective regulatory remits. However, the government is proposing to define core characteristics of AI to inform the scope of regulation, and to set overarching cross-sectoral principles to guide the requirements that regulators apply to the use of AI.

The six principles that the government has initially proposed are: ensure that AI is used safely; ensure that AI is technically secure and functions as designed; make sure that AI is appropriately transparent and explainable; embed considerations of fairness into AI; define legal persons’ responsibility for AI governance; and clarify routes to redress or contestability. The government does not, at this stage, intend to place the principles on a statutory footing.

“These principles provide clear steers for regulators, but will not necessarily translate into mandatory obligations,” the government said. “Indeed, we will encourage regulators to consider lighter-touch options in the first instance – for example, through a voluntary or guidance-based approach for uses of AI that fall within their remit. This approach will also complement and support regulators’ formal legal and enforcement obligations, using the powers available to them to enforce requirements set out in statute.”

The government said action is needed to address a lack of clarity, overlaps, inconsistencies and gaps in the way current regulatory regimes apply to AI. It said those issues “risk undermining consumer trust, harming business confidence and ultimately limiting growth and innovation across the AI ecosystem”.

“A context-based approach allows AI-related risk to be identified and assessed at the application level,” the government said. “This will enable a targeted and nuanced response to risk because an assessment can be made by the appropriate regulator of the actual impact on individuals and groups in a particular context.”

“We anticipate that regulators will establish risk-based criteria and thresholds at which additional requirements come into force. Through our engagement with regulators, we will seek to ensure that proportionality is at the heart of implementation and enforcement of our framework, eliminating burdensome or excessive administrative compliance obligations. We will also seek to ensure that regulators consider the need to support innovation and competition as part of their approach to implementation and enforcement of the framework,” it said.

The government’s proposed approach to the regulation of AI is open to consultation until 26 September, and it has promised to publish a white paper setting out more details of its proposals later this year. It has indicated that it could legislate to update the powers and remit of some UK regulators to support its plans.

Technology law expert Sarah Cameron of Pinsent Masons welcomed the government’s proposals.

“This is very much the light-touch approach we expected to see – with high-level horizontal principles and the detail to come vertically through relevant regulators,” Cameron said.

“The government has chosen not to provide a definition of AI but instead to seek to define its core characteristics and again leave detail to vertical regulators to determine what is in scope of regulation,” she said.

“The approach the UK government is proposing puts it at odds with the way the EU is planning to regulate AI. The proposed EU AI Act would ban some uses of AI altogether and place stringent obligations around the use of so-called ‘high-risk’ AI. However, the framework has yet to be finalised, with thousands of amendments to the original draft Act under consideration by EU lawmakers. The UK government views this single rulebook for AI as inflexible and not easily adapted to changes in technology,” she said.

The Canadian government has also proposed a suite of new legislation, under what it is calling the Digital Charter Implementation Act, to encourage the “responsible development and use” of AI. Cameron said the approach in Canada is similar to the approach being pursued in the EU, with a focus on addressing risk and biased output from ‘high impact’ AI systems.

“The UK approach is more aligned with the principles-based systems of regulation that other countries, such as Japan and Singapore, have been developing. The concern raised by the differing approaches in the EU and elsewhere is how businesses will navigate their way through them,” Cameron said.

Trade body techUK has called on the UK government to “prioritise the completion of the white paper setting out its approach in full”, citing the importance of “regulatory certainty” for businesses that it said are “already assessing and adapting to other international proposals and initiatives on AI governance”.

Luke Scanlon of Pinsent Masons, who specialises in the application of technology law in financial services, said: “The approach will be welcomed by the financial services sector as, in contrast to the EU’s position, it is not grounded in product safety legislation, much of which is largely irrelevant to many financial services use cases of AI.”

“In terms of undertaking AI risk assessments, regulated entities will therefore want to focus much more closely on the findings of the joint Bank of England and Financial Conduct Authority Public-Private Forum, which addresses model risk management and governance in detail and provides insight into the likely direction of travel for future rules or guidance to be issued to the sector,” he said.

The government’s policy statement on the regulation of AI follows on from recent confirmation of its plans to update intellectual property laws to support the use of AI, and reflects the commitments made by the government in its national AI strategy published last year.
