
Out-Law News

AI concerns punctuate UK data bill debate


The UK Houses of Parliament. Photo by Carl Court/Getty Images.


New data laws under consideration in the UK should require greater transparency over the use of AI systems for processing personal data and make clear that scraping copyrighted content from the internet is prohibited without the rightsholders’ consent, lawmakers have said.

Concerns about AI use punctuated the debate accompanying the second reading of the Data (Use and Access) Bill in the House of Lords on Tuesday.

The Bill, introduced into the UK parliament last month, is multi-faceted. As well as providing for targeted reforms to UK data protection law and structural changes to the UK’s data protection authority, the Bill provides for the strengthening of enforcement powers under the Privacy and Electronic Communications Regulations (PECR), which sets out rules on direct e-marketing and on the use of cookies. It also establishes a new UK legal framework for initiatives on digital ID, smart data, and the digitising of key public registers and assets, and further seeks to facilitate real-time data sharing and use across currently siloed systems in healthcare.

Some of the Bill’s data protection provisions envisage a relaxation of some existing restrictions applicable to automated decision-making, which is particularly relevant to organisations using AI systems.

Under the Bill, automated decision-making in most circumstances would be permitted as long as the organisation using the relevant AI or other technology implements safeguards, allowing individuals affected by those decisions to make representations, obtain meaningful human intervention, and to challenge decisions made by solely automated means. More restrictive rules are provided for in the Bill around the use of personal data for making “significant” automated decisions where highly sensitive data is processed.

Labour Lord Davies of Brixton said the Bill needs to be amended to ensure individuals are made aware when they are the subject of automated decision-making by AI systems.

“The Bill removes the general prohibition on automated decision-making, placing responsibility on individuals to enforce their rights rather than on companies to demonstrate why automation is permissible,” he said. “Even with the new safeguards being introduced, people will struggle to get meaningful explanations about decisions that will deeply affect their lives and will have difficulty exercising their right to appeal against automated decisions when the basis on which the decisions have been made is kept from them.”

Conservative peer Lord Holmes of Richmond, calling for all goods and products in which AI is involved to be labelled, added: “How can somebody effectively assert their right [to make representations, obtain meaningful human intervention, and to challenge decisions made by solely automated means] if they do not even know that AI and automated decision-making were in the mix at the time?”

Other peers expressed similar concerns about the automated decision-making (ADM) provisions in the Bill, with Liberal Democrat peer Lord Clement-Jones adding that powers the government is to be given under the Bill to “redefine what ADM actually is” would increase “the risk of biased and discriminatory outcomes in ADM systems”. This, he said, is because of “the government’s digital transformation agenda in the public sector and the increasing use of AI in the private sector”.

A number of peers also called on the government to strengthen protections for the UK’s creative industries against AI “data scraping”.

Labour peer Lord Stevenson of Balmacara said: “UK copyright law is absolutely clear that AI developers must obtain a licence when they are text or data mining – the technique used to train AI models. The media companies have suggested that the UK government should introduce provisions to ensure that news publishers and others can retain control over their data; that there must be significant penalties for non-compliance; and that AI developers must be transparent about what data their crawlers have ‘scraped’ from websites… Why are the government not doing much more to stop what seems clearly to be theft of intellectual property on a mass scale, and if not in this Bill, what are their plans?”

The quality of output produced by gen-AI systems is largely dependent on the quality of the data used to train them. AI firms have long been seeking access to large quantities of high-quality datasets to use in the development process. However, creative industry representatives continue to raise concerns over the use of their content in AI training and output, arguing that use without their consent and without payment of a royalty is copyright infringement.

In a bid to reconcile the two positions, the previous government sought both sides’ agreement to a voluntary AI copyright code. However, earlier this year it abandoned those plans after admitting to challenges in obtaining consensus. That government moved to explore other options through separate discussions with industry representatives and that work has continued under the new Labour administration. In October, government minister Feryal Clark said the ongoing stand-off over the use of copyright protected materials in training AI systems would be resolved by the end of the year.

Speaking in the Lords’ debate on Tuesday, crossbench peer Viscount Colville of Culross said copyright concerns in respect of AI data scraping could be exacerbated by the government’s proposed new rules for processing personal data for scientific research.

Under the Bill, the government is proposing to expand the concept of ‘scientific research’ to include certain privately funded and commercial research activities, and not just non-commercial research as is the case currently.

Viscount Colville said: “My fear is that AI companies, in their urgent need to scrape datasets for training large language models, will go beyond the policy intention in this clause. They might posit that their endeavours are scientific and may even be supported by academic papers, but when this is combined with the inclusion of commercial activities in the Bill, it opens the way for data reuses in creating AI data-driven products which claim they are for scientific research.”

“The line between product development and scientific research is blurred because of how little is understood about these emerging technologies. Maybe it would help if the Bill set out what areas of commercial activity should not be considered scientific research,” he said.

During the debate, peers also called on the government to clarify its plans for the regulation of AI. Some lawmakers also encouraged the government to consider removing data protection compliance burdens for SMEs, while others stressed that data law reform in the UK should not jeopardise the renewal of the EU’s ‘adequacy decision’ in respect of UK data protection standards, which is important to cross-border trade and business operations because it facilitates the everyday transfer of personal data to the UK by organisations in the EU. The current EU-UK adequacy decision expires in June 2025.

Sarah Cameron, a specialist in technology law at Pinsent Masons, said some of the AI-related issues lawmakers raised in the debate are likely to be addressed by the government separately, rather than via amendments to the Bill.

“While there is a lot of pressure to address AI regulation generally and resolve the IP issues around web scraping, it is optimistic to have expected them to appear in this Bill,” Cameron said.

Cameron highlighted the fact that UK prime minister Keir Starmer has previously stated that regulation for the developers of the most powerful models will be brought forward in 2025. She further highlighted recent “to and fro” on the scraping issue: it has been reported that the government was considering an opt-out mechanism – under which AI developers would be able to train AI models using online content by default unless content creators specifically opt out – though Starmer has since said that the government “recognise the basic principle that publishers should have control over and seek payment for their work, including when thinking about the role of AI”.

Malcolm Dowden, an expert in data protection law and AI at Pinsent Masons, said the House of Lords committee stage for the Bill is due to begin on 3 December, with sittings listed for 3 and 10 December. He said the government is clearly intent on progressing the Bill as quickly as possible, which he added makes it unlikely that substantive amendments addressing some of the AI issues raised will be entertained.

Dowden said: “Overall, there were no surprises in the debate. The government's significant Commons majority suggests that amendment will be very limited, with the Bill going to the Commons in the new year.”
