Out-Law News

AI projects need not be delayed for new UK rules, says expert

The prospect of future regulation of the use of artificial intelligence (AI) in the UK should prompt businesses to manage AI risk now rather than delay projects to embed AI in their own internal or customer-facing operations, a technology law expert has said.

Luke Scanlon of Pinsent Masons said businesses risk being left behind if they do not adapt to the current wave of AI innovation. He encouraged them to take steps now to manage the real and significant risks AI presents as they press ahead with AI projects. They can do this, he said, by implementing best practice standards that support compliance with existing legal requirements, while taking time to understand the proposals for future AI regulation that have been outlined.

Scanlon was commenting after the UK government published its AI white paper (91-page / 2MB PDF), a document which sets out its plans for the future regulation of the use of AI in the UK. The white paper follows an initial policy paper published last summer, which in turn followed the publication of the national AI strategy in 2021.

The government has said that while AI presents exciting opportunities, it also poses a threat to values such as safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity. If the government does not act, AI “could cause and amplify discrimination that results in, for example, unfairness in the justice system” and, if there is no regulatory oversight of its use, the technology “could pose risks to our privacy and human dignity, potentially harming our fundamental liberties”, it said.

Currently, a range of legislation and regulation applies to AI – such as data protection, consumer protection, product safety and equality law, and financial services and medical devices regulation – but there is no overarching framework that governs its use. The government said this is a problem because “some AI risks arise across, or in the gaps between, existing regulatory remits”, and it said some businesses had concerns over “conflicting or uncoordinated requirements from regulators [that] create unnecessary burdens” and “unmitigated” risks left by regulatory gaps that could harm public trust in AI and slow adoption of the technology as a result.

To address this, the government is proposing to retain the existing sector-by-sector approach to regulation but introduce a cross-sector framework of overarching principles that regulators will have to “interpret and apply to AI within their remits”. The five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

The five principles would be issued on a non-statutory basis. The government has, however, proposed to place regulators under a statutory duty to have due regard to the principles when exercising their functions.

The government intends to provide “central support” for the new system of regulation, including monitoring and evaluating the framework’s effectiveness and adapting it if necessary; assessing risks arising from AI across the economy; conducting horizon scanning and gap analysis to inform a coherent response to emerging AI technology trends; and establishing a regulatory sandbox for AI to help AI innovators get new technologies to market.

The government envisages that the regulatory approach will evolve over time under an iterative process. It has said that, once the regulatory approach is finalised, it will issue guidance to regulators to help them implement the new principles. It also intends to publish an AI regulation roadmap setting out plans for establishing the central support functions, and to pilot a new AI sandbox or testbed.

In the medium term, following publication of its finalised policy, the government expects regulators to publish their own guidance on how the cross-sectoral principles apply within their remit. The government also plans to outline how its central monitoring and evaluation framework will be designed during this period.

Longer-term initiatives will include publishing a draft central, cross-economy AI risk register for consultation, developing the regulatory sandbox or testbed based on insights drawn from the pilot, and publishing the first monitoring and evaluation report – which will consider the need for any iteration of the framework, including the need for statutory interventions.

The government has said its new “pro-innovation framework” will “bring clarity and coherence to the AI regulatory landscape”. It said it will “strengthen the UK’s position as a global leader in AI, harness AI’s ability to drive growth and prosperity, and increase public trust in its use and application”. Its consultation on the proposals is open until 21 June 2023.

Luke Scanlon said: “The paper recognises the harm that can be done from issuing new legislation which overlaps with and is inconsistent with existing requirements or is too rigid to be applied to specific use cases.”

“The focus on allowing regulators to take the lead in devising rules specific to the contexts in which AI is used is a positive one and the potential for a central function that identifies risks is also a welcome development, although its usefulness will depend on the extent to which it can act effectively in real-time to monitor risks,” he said.

“However, there is a lot of talk about what the government will do in the future – but the technology is here now, and the next 12 months will be critical for all businesses in developing and procuring AI tools. Businesses therefore need to pre-empt the regulatory requirements and focus on best practice, and not delay projects on the basis that regulation is coming and the current regulatory landscape is unclear. This can only be done through a detailed approach to understanding the current legal and regulatory framework, which the government in this paper has highlighted is difficult to navigate,” he said.

Public policy expert Mark Ferguson of Pinsent Masons said the government is “placing a lot of weight on the shoulders of existing regulators to manage developments in their sector” and said he expects there will be questions around whether the regulators can “handle the additional workload”.

Ferguson also highlighted separate plans by EU policymakers to introduce a new AI Act across EU member states. Those proposals focus on regulating the technology in accordance with the risk it is deemed to pose rather than how the technology is used.

“Though it didn’t rule out new legislation, or adapting existing legislation, the UK government said that any rush to legislate will place undue burdens on businesses, whereas the EU sees legislation as necessary to create trust and confidence among the public,” Ferguson said.

“The challenge of divergence was noted by stakeholders in the consultation that preceded the new UK AI white paper, with many suggesting that businesses would likely conform to the strictest regulation where they were operating across multiple jurisdictions. Businesses now have the opportunity to offer their thoughts to government on the white paper, which will continue to shape the government’s approach to this in the short and long term,” he said.

The plans to regulate AI come at a time when AI use is coming under increased scrutiny.

Last week, prominent technologists and entrepreneurs, including Elon Musk and Steve Wozniak, signed an open letter that warned that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”. They called on AI labs to pause work on the development of more powerful AI systems for at least six months, and urged governments to impose such a moratorium if one is not observed voluntarily, to enable safety protocols and governance frameworks to catch up with the rate of innovation.

In addition, Italy’s data protection authority, the Garante per la protezione dei dati personali (GPDP), has imposed a temporary ban on AI service ChatGPT amidst privacy concerns.

Frankfurt-based technology law expert Dr. Nils Rauer of Pinsent Masons said: “Like any other authority, the GPDP is bound by the principle of proportionality. In concrete terms, it means it can impose temporary suspension of a service if it suspects principles or other requirements set out in the General Data Protection Regulation have been infringed. However, at the same time the service provider must be given the opportunity to provide counter-evidence showing that either there has been no infringement at all, or that if there was an infringement it has been resolved to the point that the service no longer operates in breach of the applicable laws.”

OpenAI, the laboratory behind ChatGPT, has 20 days to respond to the GPDP’s action.

In a recent paper targeted at the education sector (8-page / 177KB PDF), the UK government said that, on privacy grounds, “personal and sensitive data should not be entered into generative AI tools”.
