OUT-LAW ANALYSIS 7 min. read
The UK AI policy pivot signalled by new copyright reports
Sir Paul McCartney has been a vocal advocate for the rights of UK creatives. Raimonda Kulikauskiene/Getty Images.
19 Mar 2026, 10:25 am
A pivot in UK policy on AI can be inferred from the government’s new AI copyright reports when they are considered alongside comments made by chancellor Rachel Reeves earlier this week.
The government’s approach now appears to be less about enabling the training of AI models in the UK and more about encouraging the development and adoption in the UK of AI systems that use models trained elsewhere, on non-UK data.
Read more of our coverage on AI and copyright:
- Drop plans for AI-related copyright exception, UK ministers urged
- ‘Workable’ AI copyright solutions lacking amidst UK policy ‘reset’
- UK AI copyright report reveals little progress
- Getty Images v Stability AI: Getty’s copyright case against Stability AI fails
- AI copyright regime steers away from requiring licences in all cases
The government’s twin objectives
The government has said it simultaneously wants to protect the UK’s creative industries – which it says contribute £146 billion in gross value added to the UK economy every year – while harnessing the potential of AI to enable economic growth and better health, environmental and societal outcomes. Balancing the competing interests of both sides is difficult to achieve in practice.
AI developers need access to high-quality datasets to train their models. Without quality data, the AI systems that are developed are prone to delivering incomplete, inaccurate or biased information to users, with potential negative implications for organisations and people. However, quality datasets are often the product of human creativity and benefit from copyright protection. Rightsholders seek fair reward for the value their work creates and protection against unlawful and unfair use of their work.
The copyright reports
The government had a statutory duty, under the UK’s Data (Use and Access) Act, to prepare and publish two AI-related copyright reports by 18 March: one regarding options for reform and proposals on various AI and copyright issues; the other an economic impact assessment in respect of reform.
In publishing those reports on Wednesday, the government confirmed that it will not pursue any immediate changes to UK copyright law relating to AI training and development. The government acknowledged that there is no current consensus on how best to balance the respective interests of UK creatives and AI developers. Further, it said it is uncertain about the impact its potential interventions in the market might have. These reasons appear to be behind its decision to avoid taking any immediate action.
Instead, the government has pledged to mostly maintain a monitoring brief. Among other things, it plans to:
- engage in more evidence gathering to better understand how copyright law is affecting AI development and deployment, and the economic benefits of reform;
- consider the effects of transparency rules in other jurisdictions – like the EU – to determine whether changes to the UK’s approach are appropriate;
- support industry in developing tools and standards – such as around control of access to and use of works – and in encouraging their adoption;
- keep market-led licensing approaches under review and explore how more public datasets might be made available for licensing too – including through the Creative Content Exchange project.
The government has not committed to any specific timeline for follow-up action – despite recent calls by some UK lawmakers to deliver certainty to industry and to “focus on strengthening licensing, transparency and enforcement within the existing framework”.
Problems with the opt-out model
When it opened a consultation on possible AI-related copyright reform in late 2024, the government expressed a preference for reform. This option contained three core aspects: a widening of the existing text and data mining exception in UK copyright law to enable the mining of content for AI training purposes; the introduction of a mechanism to enable rightsholders to opt their content out from being used in that way; and the further introduction of measures requiring AI developers to be transparent about the works they train their models on, so rightsholders had sight of the activity and could decide whether to reserve their rights.
Problems with pursuing this opt-out approach were detailed in the government’s new reports – including its potential, on the one hand, to disincentivise the licensing of content and, on the other, to encourage rightsholders to take up the opt-out option and thereby limit the data developers would have access to. Further challenges were highlighted in relation to the costs involved in enabling transparency and implementing opt-out mechanisms, and in developing effective technical standards to facilitate such solutions in the first place, with UK ministers noting a lack of workable solutions as recently as January.
An imperfect status quo
The problem with maintaining the current position on AI and copyright in the UK is that it causes issues for both UK creatives and AI developers. This is acknowledged in the government’s own reports.
From the rightsholder perspective, while the UK’s Copyright, Designs and Patents Act (CDPA) 1988 provides protections against the unauthorised copying of their works, those protections are limited: they extend only to the extent that the infringing activity takes place in the UK and, even then, to the extent that the activity does not fall within a copyright exception.
How these provisions on primary copyright infringement apply in the context of AI training and development has not yet been fully tested before courts in the UK. Last year, in its landmark litigation against Stability AI before the High Court in England and Wales, Getty Images was forced to drop its claims of primary copyright infringement after it was unable to provide evidence that Stability AI had undertaken any acts of unauthorised copying in the UK.
Getty subsequently failed with its further claims of secondary copyright infringement against Stability AI, though the High Court’s decision is the subject of an appeal.
The result of the Getty litigation as it stands is that, if an AI system is trained on data outside of the UK and then imported into the UK without that data being stored on or reproduced by that system, rightsholders have no way to enforce any copyright they own in respect of that data under UK copyright law. In its new reports, the government said it has no plans to legislate to alter that position at this time.
From the AI developer perspective, the likelihood that primary copyright infringement claims would be sustained under the CDPA if AI training were undertaken in the UK using copyrighted data without the permission of rightsholders, coupled with the fact that the existing text and data mining exception can only be relied upon for non-commercial purposes, serves to disincentivise the training of models in the UK. Training in the UK would mean either using data not subject to copyright or obtaining a copyright licence to minimise the risk of an infringement claim.
The government noted that much AI training takes place in other jurisdictions, such as the US, where a broad ‘fair use’ exception to copyright applies. Exactly how broadly that concept applies in the AI context is being tested in ongoing US litigation.
How this fits with broader UK policy on AI
Since coming to power, the Labour government in the UK has made it clear that it sees supporting AI development as an enabler of economic growth.
Actions it has already taken include changing planning rules to facilitate data centre development, adopting a pro-innovation approach to regulation, and boosting investment in R&D and quantum computing. In its sector plan for digital and technologies developed under the umbrella of its modern industrial strategy, the UK government said it is aiming to be “one of the top three places in the world to create, invest in and scale-up a fast-growing technology business” by 2035.
In her Mais lecture on Tuesday, UK chancellor Rachel Reeves reiterated the government's view of the potential transformational benefits AI could bring to UK productivity. She said the government’s strategy for achieving this has four strands: building compute so as to reduce dependencies on accessing computing power in other countries; maximising strengths in areas such as AI applications, AI chip designs, and cybersecurity; accelerating adoption of AI; and ensuring people are equipped to use AI.
Notably, while she said the government’s focus has been on building data centre capacity, she said it matters more that the UK identify and pursue “those parts of the supply chain where we can win the global race”.
A similar tone was struck elsewhere in her speech, where Reeves encouraged “internationally competitive companies [to] start, scale and stay here in Britain” and promised new legislation to enable the testing of new technologies but acknowledged that in a competitive global marketplace, the UK “can’t own the whole [technology] stack”.
Those comments imply a more selective focus in the UK government’s approach to AI policy. In the context of copyright, its new reports suggest that this focus will be more about encouraging adoption of the technology than about the training of the underlying models here in the UK.
Its comments that “many of the materials that form part of datasets are protected by copyright, and many of the acts involved in their assembly and provision may infringe copyright if done without a licence in the UK”, and its subsequent acknowledgment that “it may be easier for large AI developers to take advantage of other countries’ laws”, are noteworthy in this regard. The government appears to consider that there is currently no clear, evidence-backed intervention it can pursue to change that position without risking disproportionately unpicking rights and protections for UK creatives in the process.
At the same time, it appears to consider that changing existing law in a way that favours rightsholders, such as by introducing mandatory transparency obligations, imposing explicit licensing requirements on AI developers, or improving protections for rightsholders around secondary copyright infringement, might reduce the attractiveness of the UK for deploying AI systems and undermine its growth agenda.
The government seems to consider, therefore, that maintaining UK copyright law as it stands – at least for now – is its best bet for delivering its objectives, despite the problems the current framework poses for creatives and AI developers alike.