Out-Law / Your Daily Need-To-Know

OUT-LAW NEWS

Drop plans for AI-related copyright exception, UK ministers urged


The government is due to update parliament on AI and copyright policy by 18 March. Peter Nicholls/Getty Images.


UK lawmakers have called on the government to rule out writing into UK law a new text and data mining (TDM) exception to copyright designed to support AI training and development.

According to the Communications and Digital Committee in the House of Lords, the government “should instead focus on strengthening licensing, transparency and enforcement within the existing framework”.

The call, made in a report published by the Committee on Friday, comes just days before the government is due to publish its own eagerly awaited report on AI copyright reform.




The government has a statutory duty, under the UK’s Data (Use and Access) Act, to prepare and publish two documents by 18 March 2026: an economic impact assessment of potential AI-related copyright reform, and a report setting out the options for reform and its proposals on various AI and copyright issues – including measures for controlling AI developers’ access to and use of copyrighted works, transparency over such use, licensing, and enforcement. However, according to a report by the Financial Times, also published on Friday, the government intends to push a decision on reforms into 2027.

In a statement issued in response to a request from Out-Law to confirm the government’s intentions, a government spokesperson did not directly address the Financial Times’ reporting.

“The government wants a copyright regime that values and protects human creativity, can be trusted, and unlocks innovation,” the spokesperson said. “We welcome the Committee’s contributions, and we will continue to engage closely with Parliament going forwards.”

Many content creators, such as publishers, authors and musicians, are concerned that AI developers are using their copyright works to train their AI models and inform the output those models produce. They want greater transparency over intended use of their content as well as more control over whether to enable it – and, if so, to be remunerated in return. They argue that existing copyright protections in UK law are being ignored by AI developers and want the government to intervene.

AI developers, however, reject assertions that their activities are copyright infringing. They want the government to loosen, not tighten, restrictions on access to data and point to the potential of AI to deliver improved economic, social, health and environmental outcomes as the prize on offer for supporting AI development.

The government opened a consultation on AI and copyright in December 2024 with a view to making a legislative intervention to balance the respective interests. However, its initial preference for reform, built around the idea of a rightsholder ‘opt out’, was favoured by just 3% of the more than 11,500 respondents to its consultation.

Earlier this year, senior ministers acknowledged that expressing an initial preference was a “mistake” and said the government was “having a genuine reset moment”.

Intellectual property law expert Gill Dennis of Pinsent Masons said: “The balance has now clearly shifted in favour of the creative industries. If the government accepts the Committee’s recommendation to issue a public statement that licensing is the default position, then both the tech and creative sectors will get the clarity they urgently need, perhaps sooner than anticipated.”

Under the ‘opt out’ model that was proposed, the existing TDM exception in UK copyright law would be extended to enable the mining of content for AI training purposes. This would be coupled with mechanisms to enable rightsholders to opt their content out from being used in that way. Underpinning it all would be measures requiring AI developers to be transparent about the works they train their models on, so rightsholders – either individually or collectively – could “easily reserve their rights”.

In its report, however, the Communications and Digital Committee said that support for a broad commercial TDM exception should be viewed as an attempt by AI developers to reduce the risks of being pursued for copyright infringement.

The Committee said: “The consistent call from technology sector stakeholders for a new, broad commercial text and data mining (TDM) exception … suggests that they do not regard large-scale commercial training on copyright-protected works as clearly covered by the existing exceptions. If they did, a commercial TDM exception would be unnecessary.”

“On this basis, the main uncertainty for large AI developers appears to lie in the question of whether their current and proposed training practices would withstand legal challenge if tested in court. Support for a broad commercial TDM exception should therefore be understood as an attempt to lower that litigation risk by weakening the current level of copyright protection, rather than as a neutral exercise in clarifying the law. We are also not persuaded that expanding the existing non-commercial research exception to cover all ‘pre-market’ research and development is either necessary or desirable,” it added.

Under UK copyright law, claims of primary copyright infringement can only succeed if rightsholders can evidence that infringing acts took place in the UK. In landmark litigation on AI and copyright, Getty Images last year dropped its claims of primary copyright infringement against Stability AI, citing evidential challenges, and instead pursued secondary infringement claims against the AI developer.

In November, the High Court in England and Wales ruled on those claims, holding that content creators and publishers can only succeed with claims of secondary copyright infringement against AI developers in the UK if AI systems trained using their content store or reproduce their works.

In its report, the Communications and Digital Committee gave its own interpretation of UK copyright law in the context of AI development.

The Committee said: “Under existing law, copyright is engaged whenever the whole or a substantial part of a protected work is copied, including by storing it in digital form, subject only to specific statutory exceptions. We believe that the large-scale making and processing of digital copies of protected works for model training may therefore be characterised as reproduction, regardless of whether trained models retain human-readable copies or are capable of generating novel outputs. In our view, the lawfulness of using copyrighted content for AI training must be assessed under ordinary copyright principles and clearly defined exceptions. We do not accept the view that the copying and processing of protected works during training should be characterised as ‘learning’.”

“We therefore consider that the government should rule out any reform of the Copyright, Designs and Patents Act that would remove the incentive to license copyrighted works for AI training, and should instead focus on strengthening licensing, transparency and enforcement within the existing framework,” it said.

The Committee added the government could consider giving individual rightsholders, like musicians and authors, “an unwaivable right to equitable remuneration for AI uses of their works and performances as training inputs and, where appropriate, as outputs”, as part of an approach focused on licensing.
