
Out-Law Analysis

Getty Images v Stability AI trial punctuates AI copyright debate

Photographers at Cannes. Photo: Victor Boyko/Getty Images.


The long-awaited trial between Getty Images and Stability AI is due to begin later this month. It comes at a time when AI copyright policy in the UK, and elsewhere globally, is at a major crossroads.

The High Court, in considering the circumstances of this dispute, is expected to address important questions about how UK copyright law, as it currently stands, applies in the context of generative AI (gen-AI) training and outputs.

The court’s ruling – anticipated to follow within months – could also inform how AI-related copyright law in the UK might look in future at a time when reform is under government consideration. However, as we examine below, this case is just one of a variety of factors that will be relevant in that regard.

What the case is about

Stability AI operates an AI system, Stable Diffusion, which automatically generates images from text or image prompts input by users. Getty has sued Stability AI, alleging that the company has infringed its intellectual property rights – including, it claims, because copyright protected images to which Getty holds the rights were used to train Stable Diffusion and are reproduced in substantial part in the system’s outputs. Stability AI denies the claims.

As we have explored previously, the case has brought various issues relating to the interaction between copyright law and the operation of gen-AI models to the fore.

At a fundamental level, for example, there is the potential for the High Court to provide some clarity on the jurisdictional scope of the Copyright, Designs and Patents Act 1988 (CDPA) when it assesses whether the acts complained of in this dispute are subject to UK copyright law at all.

It is also anticipated that the court will provide guidance on whether and when data processing undertaken for gen-AI purposes constitutes copyright infringement, and on whether and when gen-AI outputs based on infringing material might fall within the copyright exception that permits copyright protected material to be used, without rights holder authorisation, in works of pastiche.

As a result, it seems likely that new case law will emerge that offers important guidance on how UK copyright law applies in the context of gen-AI today, at a time when there is significant debate – and lobbying – over what AI copyright policy should provide going forward.

UK copyright reform in the name of AI competitiveness

The UK is competing with other countries to position itself as an attractive jurisdiction for AI developers, with policymakers globally recognising the potential of the technology to boost productivity and economic growth. The UK government is exploring how it might pull different policy levers to achieve this.

Earlier this year, the government endorsed an AI opportunities action plan developed by tech entrepreneur Matt Clifford – which includes a recommendation that it “reform the UK text and data mining regime so that it is at least as competitive as the EU”. Clifford said uncertainty around intellectual property is hindering innovation and undermining the UK’s broader AI ambitions and the growth of the creative industries.

Currently, the CDPA allows text and data mining (TDM) of copyright works – for non-commercial purposes only – provided that the user has lawful access to the work via, for example, a licence, subscription or permission in terms and conditions.

Late last year, the government opened a consultation on AI and copyright in which it set out a range of options for reform. At that stage, it indicated that its preferred option would involve a recalibration of the existing TDM exception to facilitate the training of AI models using copyright protected material – i.e. to allow for TDM for commercial purposes – but only in cases where rights holders have not opted their content out from being used in that way. Those proposals also envisage greater transparency over the use of copyright protected material to train AI, to help underpin how this new regime would work.

The proposals were met with a substantial backlash from the UK’s creative industries. They believe the law as it stands, which gives them the right to control how their works are used, is being ignored by AI developers, and that those developers need to enter into copyright licences with them to use their content. They want the government to intervene because, in their view, current market practice amounts to large-scale copyright infringement that is damaging them: their content is being scraped without authorisation or payment. Their case has attracted publicity owing to campaigning by well-known musicians and artists, from Elton John to Dua Lipa (9-page / 123KB PDF), and has been supported by some parliamentarians – more on which below.

For their part, AI developers dispute that they are doing anything wrong and argue that the law enables them to use copyright protected material for training their AI models without the need for a licence. Like the creative industries, the tech lobby has been actively engaging with government on what they see as barriers to AI innovation and the issue of copyright reform specifically.

Earlier this year, industry body techUK said the idea that AI and the creative industries are pitted against one another is a “false narrative”. It said AI can be a catalyst for human creativity and cited historical examples of other technological developments as proof of the “partnership” that is possible.

TechUK criticised what it described as “a regrettable lack of appetite from rights holders to engage” in the discussion over how best to balance their interests with those of AI developers. It warned that the “failure to strike a workable compromise” would not constrain AI development but rather mean that the UK misses out on “the opportunity to shape AI development, as innovation may shift to countries with clearer regulatory frameworks”. It supports a broadening of the UK’s TDM exception but says it can get behind the opt out system the government leaned towards in its consultation paper – even though it said this would involve “a big compromise for the tech sector”.

The Labour government believes it can balance the competing interests, to both protect the significant economic benefits the creative industries bring to the UK economy and enable AI development.

A recent report by the Guardian suggests that the government has shifted position on its proposed reforms since first publishing its consultation paper and that it is considering other proposals beyond those based around the operation of an opt out. However, the government has yet to publish its formal response to its consultation, which closed in February, and it is unclear whether there is a solution that will please both sides, as recent history under the previous government indicates.

In 2022, the Conservative government floated plans to expand the scope of the TDM exception to enable mining of works protected by copyright and database rights for commercial purposes, but it dropped the idea after criticism from rights holder groups. Its subsequent attempt to facilitate agreement between the creative industries and tech lobby on a new voluntary AI copyright code of practice failed, following a breakdown in talks.

Growing pressure to act now

The new Labour government sees the route to a solution through its consultation exercise. However, such are the concerns over the way technology and commercial practices are developing, and the speed at which they are moving, that the government has come under intense pressure to act now.

Seeking immediate additional legislative protections for rights holders in the context of AI training and development, some lawmakers hijacked a bill the government introduced into the UK parliament for an entirely different purpose: the Data (Use and Access) Bill (DUAB), which is primarily designed to enable data-related innovation and deliver targeted data protection reforms.

In recent weeks, the Bill has ‘ping ponged’ between the Houses of Commons and Lords amidst a major disagreement over whether AI-related copyright provisions should be included within it: peers sought to ensure that added protections for the UK’s creative industries in the context of AI training and development were written onto the face of the legislation; the government, backed by a heavy majority of MPs, opposed the moves and insisted that the AI copyright debate be resolved holistically, through a distinct process that would naturally follow on from its AI copyright consultation.

Parliamentary – and constitutional – convention is for the unelected Lords to give way to the wishes expressed by the elected Commons after a couple of rounds of ‘ping pong’. However, following an impassioned debate on Monday afternoon, a sizeable majority of peers took the unusual step of approving fresh amendments prepared by lead rebel Baroness Kidron, pushing the revised DUAB back to the Commons – and the ball back into the government’s court – for a third time.

In doing so, the peers rejected fresh verbal assurances from the government around the form of and timeline for action on the AI copyright issue, instead imploring the government to agree to legislative accountability in that regard, whether in the form Baroness Kidron has suggested or via its own alternative compromise wording.

How the government will respond now is unclear. What is clear is that its attempt to finalise data protection reforms, amidst external pressures it is facing to do so, faces further delay, in what represents a collision of two major policy issues.

EU decisions that essentially support the free flow of personal data from the EU to the UK are due to expire this year, with everyday cross-border business operations in jeopardy unless those decisions are refreshed. Under a proposed extension, those decisions would cease to have effect on 27 December. The government was hoping for a smooth and speedy passage of the DUAB through parliament, so that EU officials would have time to assess and then endorse the revised UK data protection framework before the current decisions expire. It was not bargaining for AI copyright policy-related roadblocks being put in its way.

1 October 2025 is the next date on which business-impacting primary UK legislation can be commenced, without the need for exceptional law-making procedures to be triggered.

The international picture

The AI copyright debate in the UK is not happening in a vacuum. The discussion on what laws should govern AI, including how copyright law should interact with AI innovation, is ongoing globally. The UK government’s position is likely to be influenced by what happens elsewhere, as well as by the outcome in the Getty Images v Stability AI case.

Relevant in this regard are recent developments in the US, where the head of the US Copyright Office, Shira Perlmutter, was reportedly sacked after her office produced a report (113-page / 1.53MB PDF) that concluded that “several stages in the development of generative AI involve using copyrighted works in ways that implicate the owners’ exclusive rights” – and that developers would require licences to cover those uses where they are not covered by the ‘fair use’ exception to copyright provided for in US law. Perlmutter is suing the US government (14-page / 596KB PDF) over its attempts to remove her from office, describing the action as “blatantly unlawful”.

US courts, not the US Copyright Office, decide whether use of copyrighted works qualifies as ‘fair use’, but the report contained the Copyright Office’s views on where the line should be drawn.

“Various uses of copyrighted works in AI training are likely to be transformative,” it said. “The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs – all of which can affect the market. When a model is deployed for purposes such as analysis or research – the types of uses that are critical to international competitiveness – the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

The US Copyright Office said it would be premature for the US government to change US copyright law to account for gen-AI at the moment. Instead, it has been encouraged to let licensing markets continue to develop: “Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights,” the US Copyright Office said. “These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public.”

The US developments are significant given the recent trade deal struck between the UK and US. The deal, while non-binding and subject to further negotiation, envisages the UK and US entering into a “transformative technology partnership” and collaborating on intellectual property rights protection. In this context, the apparent rejection of the Copyright Office’s report by the Donald Trump-led US administration poses questions for the UK government over the direction it takes in relation to UK AI copyright reform.

EU policymakers have also grappled with how best to strike the balance between encouraging AI development whilst safeguarding the interests of content creators.

EU copyright law already provides for a general TDM exception, enabling reproductions and extractions of lawfully accessible works and other subject matter for the purposes of TDM. However, some limitations apply to the exception. First, such reproductions or extractions can only be retained for as long as is necessary for the purposes of TDM; and second, the exception is conditional on rights holders not expressly reserving that their works cannot be used for the purposes of TDM – those reservations must be made “in an appropriate manner, such as machine-readable means in the case of content made publicly available online”, the law provides.

On top of this, the EU AI Act specifically addresses copyright issues in the context of ‘general purpose AI’ models. Providers of those models must, among other things, put in place an EU law-compliant copyright policy, publish a sufficiently detailed summary about the content used for training of the model, and enable rights holders to reserve their rights not to have their works used for training.

Technological and market developments

In a recent report examining the development of gen-AI from a copyright perspective, the EU’s Intellectual Property Office (EUIPO) said (436-page / 6.2MB PDF) it is “essential to make copyright rules work in a way that keep human creators in control and ensure their proper remuneration, while allowing AI developers of all sizes to have competitive access to high-quality data”. It believes it is possible to balance both sides’ interests. It said it believes this is possible through “simple and effective mechanisms for copyright holders to reserve their rights and the use of their content, as well as licensing and mediation mechanisms to facilitate the conclusion of license agreements with AI developers”.

However, the EUIPO highlighted that there is currently “no single solution” that enables rights holders to opt out of having their content used for AI training under the EU TDM exception, nor one that enables the nature of AI-generated or manipulated content to be identified and disclosed.

These same technological limitations have been highlighted by opponents of the proposed new opt out system in the UK.

One of the most established standards that content creators can turn to is the robots.txt protocol – a file that online publishers can associate with their content to relay preferences to so-called web crawlers, telling them not to process their data.

However, robots.txt files have limitations. Compliance with the instructions in them is entirely voluntary; they only address automated web crawlers, not other ways of accessing content; and they are ineffective once content has already been obtained from other sources, as they cannot retrospectively constrain how that content is used.
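As a brief illustration of how the protocol works in practice – a sketch, not a definitive implementation – Python’s standard library includes a robots.txt parser that shows how a crawler that chooses to honour the file would read a publisher’s preferences. ‘GPTBot’ is OpenAI’s published crawler token; ‘SomeOtherBot’ and the URL are hypothetical examples:

```python
# Sketch of how a compliant crawler reads robots.txt preferences.
# "GPTBot" is OpenAI's published crawler token; "SomeOtherBot" and the
# URL are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler that honours the protocol would skip this page...
print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
# ...while any other crawler is permitted to fetch it.
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Nothing in the protocol enforces that check, however: a crawler that never consults the file, or ignores its answer, faces no technical barrier – which is the voluntariness limitation at issue.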

A working group established under the umbrella of the Internet Engineering Task Force is exploring new solutions, as reported by the Register in April.

The IETF produces technical documents that define how internet technology works in detail. Its AI preferences working group (AIPREF) is tasked with delivering: a new standard vocabulary for expressing AI-related preferences, independent of how those preferences are associated with content; new ways of attaching or associating those preferences with content via established protocols and formats such as robots.txt; and a standard method for reconciling multiple expressions of preferences.

The AIPREF group is working to an August 2025 deadline for achieving two milestones in relation to these tasks.
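By way of a purely hypothetical illustration – the fragment below is invented for this article and is not the actual AIPREF vocabulary, which is still being drafted – the idea is that a publisher could express usage preferences once, in a standard machine-readable form, regardless of whether the declaration is attached via robots.txt, an HTTP header or embedded metadata:

```
# Hypothetical only - not actual AIPREF syntax.
# One declaration, attachable to content through any supported mechanism.
User-Agent: *
Content-Usage: ai-train=n, search-index=y
```

A standard vocabulary of this kind would also need the reconciliation rules the working group is developing, to resolve cases where different mechanisms attach conflicting preferences to the same content.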

With the pace of change in the market, some content creators have decided not to wait for legislative reform or new technological solutions to emerge and have instead taken matters into their own hands.

In some cases, as in Getty’s case, they have initiated litigation against AI developers. In the US, for example, the New York Times is behind a lawsuit filed against OpenAI. It has taken a different approach with Amazon, however, agreeing a recent licensing deal that will allow Amazon to use NYT content for AI training purposes. Other publishers have agreed similar deals with AI developers – for example, Axel Springer in Germany was one of the first to agree an AI-related licensing deal, with OpenAI, in 2023. That agreement enables the AI developer to train its systems using “quality content from Axel Springer media brands”.

Many rights holders, however – particularly individual content creators, artists or authors – do not have the bandwidth or knowhow to identify use of their works online and recoup royalties from that activity. They are reliant on the support of collective licensing agencies. In this regard, an important development is in train.

In the UK, the Copyright Licensing Agency (CLA) is in the process of developing a new gen-AI training licence that it says will “provide a scalable collective licensing solution that ensures remuneration for publishers and authors – in particular those not in a position to negotiate direct licensing deals – and give AI developers of all sizes the legal certainty needed to use a broad range of content to innovate and train language models”.

The new licence is expected to be available for use in the third quarter of this year, according to the CLA. Whether the terms – and cost – of such a licence are considered acceptable to AI developers remains to be seen.

The importance of the Getty Images v Stability AI trial

The tendency to view the AI copyright debate through a lens that pits the interests of AI developers against those of the creative industries ignores the fact that an increasing number of businesses are both AI developers and content creators or publishers. A long-term AI copyright solution that works for both sides – whether through legislative reform, technological and commercial solutions, or both – is in everyone’s interest.

That solution should start from the basis that the operation of AI capable of facilitating business efficiencies, innovation and greater productivity is dependent on models being trained using high-quality datasets. For that to be the case, content creators need sufficient incentive not only to create high-quality content but to make their works accessible – i.e. not to reserve their rights in a way that cuts developers off from using that content for AI training purposes. The disagreement is over where the line should be drawn.

The trial between Getty Images and Stability AI has the potential to help settle these questions, in the short term at least. The precedent the High Court sets could have a major bearing on market practice and on the UK’s attractiveness as a jurisdiction for AI development. It could also inform the UK government’s next steps towards legislative reform.
