Out-Law / Your Daily Need-To-Know

Local governments face legal risks over AI services

Out-Law News | 30 Nov 2021 | 11:50 am | 1 min. read

Local governments across the UK face “looming” reputational and legal risks over how to integrate the use of artificial intelligence (AI) into their services, according to an expert.

Dr Sue Chadwick, planning law expert at Pinsent Masons, said it was “clear” that existing regulations were “inadequate”, and left local authorities with “difficult choices”.

Writing in the Local Government Lawyer, Chadwick said: “It is clear that automation of routine local government functions, and the use of data analytics to process large amounts of information from a range of sources, offers benefits in terms of saving time and money and in freeing up officers to do more complex work.”

“It is equally clear that the regulatory framework for this transformation is inadequate,” she added.

A recent report by the All-Party Parliamentary Group (APPG) on the Future of Work noted “marked gaps in legal protection” while highlighting the increasing prevalence of AI in the workplace.

“For a local authority looking to adopt new digital tools there are some difficult choices to make,” Chadwick said.

“On the one hand there is a need for speedy progress so that the benefits of the technology can be realised. On the other hand there are at the very least reputational risks connected with the adoption of new technologies at a time when levels of public trust in both government and new technologies are low.”

“There may also be legal risks looming - algorithmic bias has already been used as the basis for a successful challenge to the use of facial recognition technology by the South Wales police,” Chadwick added.

A 2020 review by the Committee for Standards in Public Life (CSPL) concluded that use of the technology by public officials posed a risk to three Nolan Principles: openness, objectivity and accountability.

Meanwhile, the UK government’s AI strategy, published earlier this year, promised “to build the most pro-innovation regulatory environment in the world” and the development of an AI Governance Framework.

Last week, the Cabinet Office’s Central Digital and Data Office (CDDO) published the first version of a cross-government algorithmic transparency standard, while the Public Authority Algorithm Bill had its first reading in the House of Lords.

Chadwick said that these developments “do not specifically address key recommendations in the CSPL report on artificial intelligence and public standards”, including one which recommends that the government should set “procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.”

“Existing governance mechanisms can be adjusted to accommodate the challenges of new technologies,” she added.

“The government has proposed the use of regulatory sandboxes as a way of testing new technologies within the existing regulatory context while minimising risk. Given the increasing prevalence of the use of AI in local government functions, there is a golden opportunity for an enterprising local authority to set up a regulatory sandbox for regulation itself.”