Out-Law Analysis

Getty Images v Stability AI: the implications for UK copyright law and licensing

Photo Illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images


The dispute between Getty Images and Stability AI has the potential to shape copyright licensing in the AI age and to precipitate reforms to UK copyright law that, if they materialised, could materially affect how attractive the UK is seen to be as a country for developing AI solutions.

The potential significance of the case, which is currently pending trial before the High Court in London, lies chiefly in the defence Stability AI intends to raise against Getty’s claims – but there are also other factors that could influence whether the UK’s copyright regime is updated for the AI age.


Stability AI and the claims raised by Getty

Stability AI is a London-based AI developer behind a range of generative AI systems, including its ‘Stable Diffusion’ system, which automatically generates images based on text or image prompts input by users.

In legal proceedings raised before the High Court, Getty Images has claimed that Stability AI is responsible for infringing its intellectual property rights, both in how Stability AI has allegedly used its images as data inputs for training and developing Stable Diffusion, and in respect of the outputs generated by Stable Diffusion, which Getty claims are synthetic images that reproduce its copyright works in substantial part and/or bear Getty brand markings.

Getty has further alleged that Stability AI is responsible for secondary infringement of copyright on the basis that Stable Diffusion constitutes an “article” that was imported into the UK without its authorisation when it was made available on the platforms GitHub, HuggingFace, and DreamStudio.

Further claims allege infringement of Getty’s database rights and trade marks, and passing off.

Late last year, Stability AI applied for the claims pertaining to the training and development of Stable Diffusion and in respect of secondary copyright infringement to be struck out pre-trial. However, the application was rejected by the High Court, meaning the claims will proceed to be heard at trial, which looks set to take place in summer 2025.


In response to the particulars of claim Getty has filed, Stability AI submitted its written defence to the High Court in February. Out-Law has obtained a copy of the document, which contains assertions as to how the Stable Diffusion system was trained and developed, as well as how it operates in response to user prompts.

While Stability AI has admitted some of the facts pleaded by Getty – including that at least some images from Getty Images websites were used during the training of Stable Diffusion – it disputes many others. In short, Stability AI denies it is liable in respect of any of the claims Getty has made.

Stability AI’s multi-faceted defence

Stability AI’s defence has several strands, reflecting the disparate claims raised against it; the different phases in the product lifecycle, or features of the Stable Diffusion system, those claims relate to; and the multiple bases it cites as to why those claims should fail.

For example, in relation to the claims of infringement pertaining to Stable Diffusion’s output, Stability AI has filed different defences to claims arising from text prompts entered by users of the system and to those arising from user image prompts.

In respect of the synthetic images generated from text prompts, Stability AI says these are “created without use of any part of any copyright works”: it has described how the images are formed by combining a random noise image with information from a database of “optimised parameter values”. It also asserts that no specific image from the training data set is memorised or otherwise reproduced in response to text prompts. In this regard, it has said Stable Diffusion “produces variable image outputs even for the same or similar text prompts” and so it is “not the case that any particular output can be generated from any particular prompt”.
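What the defence describes can be sketched in outline. The following toy Python example is illustrative only – a minimal stand-in written for this analysis, not Stability AI’s actual implementation – but it captures the general shape of the process: an output is built up from random noise using learned parameter values, which is why the same prompt can yield different images on different runs.

```python
# Illustrative toy of text-to-image diffusion sampling; NOT Stability AI's code.
# In a real system, a trained neural network supplies the "optimised parameter
# values"; here, a simple prompt-dependent pattern stands in for that network.
import numpy as np

def denoise_step(image, prompt_pattern, blend=0.1):
    # Stand-in for one learned denoising step: a real model predicts and
    # strips noise using trained weights; this simply nudges the noise
    # towards a prompt-dependent target to show the shape of the loop.
    return (1 - blend) * image + blend * prompt_pattern

def generate(rng, prompt_pattern, steps=50):
    # Generation starts from pure random noise, not from any stored image.
    image = rng.standard_normal(prompt_pattern.shape)
    for _ in range(steps):
        image = denoise_step(image, prompt_pattern)
    return image

# Hypothetical stand-in for an encoded text prompt.
prompt = np.linspace(0.0, 1.0, 8)
prompt_pattern = np.outer(prompt, prompt)

# The same prompt with different random seeds yields different outputs:
# the "variable image outputs" behaviour the defence describes.
out_a = generate(np.random.default_rng(1), prompt_pattern)
out_b = generate(np.random.default_rng(2), prompt_pattern)
print(np.abs(out_a - out_b).max())  # non-zero: same prompt, different images
```

The ‘prompt pattern’ and the fixed blending schedule are assumptions made purely for the sake of the sketch; whether the real system’s behaviour matches the defence’s characterisation is, of course, one of the matters in dispute.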

In respect of image prompts, users can apply settings on a sliding scale basis that effectively control the degree to which Stable Diffusion’s system will transform the input image when generating the output image. Stability AI argues that any copying of input images without a licence is “unequivocally an act of the user alone” and asserts that “the output image will not comprise a reproduction of a substantial part of the input image” where the user enables Stable Diffusion’s system to transform the image freely. Even where the system is “highly constrained by the user”, Stability AI says “output images are in substance and effect partial reproductions of the input image provided by the user, and any resulting act of copying is that of the user alone”.
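The effect of that sliding scale can be illustrated in a similarly simplified way. In the hedged sketch below – again a toy model, not Stability AI’s actual system – a user-chosen ‘strength’ value governs how much noise is blended into the input image before regeneration: at low strength the output remains substantially the user’s input image, while at high strength the input is largely replaced by noise and regenerated.

```python
# Illustrative toy of image-to-image generation with a user-controlled
# "strength" setting; NOT Stability AI's actual implementation.
import numpy as np

def img2img(rng, input_image, prompt_pattern, strength, steps=50):
    # strength near 0: the system is "highly constrained by the user" and the
    # output stays close to the input image; strength near 1: the input is
    # largely replaced by noise and regenerated from it.
    noise = rng.standard_normal(input_image.shape)
    image = (1 - strength) * input_image + strength * noise
    # Denoise only in proportion to how much noise was added.
    for _ in range(int(steps * strength)):
        image = 0.9 * image + 0.1 * prompt_pattern
    return image

rng = np.random.default_rng(0)
user_image = np.eye(8)                     # stand-in for the user's input image
prompt = np.linspace(0.0, 1.0, 8)
prompt_pattern = np.outer(prompt, prompt)  # stand-in for an encoded prompt

constrained = img2img(rng, user_image, prompt_pattern, strength=0.1)
free = img2img(rng, user_image, prompt_pattern, strength=0.9)

# A low-strength output stays measurably closer to the user's input image.
print(np.abs(constrained - user_image).mean() < np.abs(free - user_image).mean())  # True
```

On this toy logic, the lower the strength setting, the more of the output is attributable to the user’s input image rather than to the generative process – which is the intuition behind Stability AI’s argument that any copying in that scenario is the act of the user.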

The pastiche exception and its application to gen-AI output

If those aspects of Stability AI’s defence fail, the developer said it would alternatively seek to rely on the pastiche exception to copyright infringement, provided under the Copyright, Designs and Patents Act 1988, to defend against copyright claims pertaining to outputs prompted by either text or images. Section 30A of the Act provides that fair dealing with a work for the purposes of caricature, parody or pastiche does not infringe copyright in the work.


Stability AI argues that the output images are a pastiche because they are artistic works “in a style that may imitate that of another work, artist, or period, or consisting of a medley of material imitating a multitude of elements from a large number of varied sources of training material”.

It further argues that “the act of generating such an image is for purposes including pastiche” and that the generation of such images by users “is, or is at least highly likely to be, a fair dealing”.

It claims fair dealing because “the extent of any taking of a copyright work is no more than necessary to generate the pastiche in question, and likely to be significantly less than the whole of the work; the nature of the dealing is such that it is extremely improbable, rare and the result of stochastic processes; the pastiche is not a substitute for the original copyright work; and the pastiche does not interfere in the market for the original copyright work”.

There is little guidance on what falls within the pastiche exception, not only in the UK but also in the EU, where the exception was made mandatory for member states to apply to content generated by users on online content-sharing services as part of EU copyright reform in 2019. The High Court’s consideration of Stability AI’s pastiche arguments could therefore provide a useful steer as to the scope of the exception generally and its application, if any, to outputs from generative AI systems in particular.

There are analogies to be drawn between the arguments Stability AI raises regarding the pastiche exception to copyright in the UK and the extent to which the ‘fair use’ limitation on copyright infringement in the US can be engaged if the outputs of generative AI systems ‘mimic’ copyright content input to those systems.

That issue is one that could be explored by the US courts in the case brought by the New York Times (NYT) against OpenAI and Microsoft in respect of their generative AI systems. The NYT has accused OpenAI and Microsoft of seeking to “free-ride” on its “massive investment in its journalism” by using the content it publishes to “build substitutive products without permission or payment”. It has said the outputs of OpenAI and Microsoft’s AI systems “compete with and closely mimic the inputs used to train them”, which it alleges include copies of NYT works, and that this does not constitute ‘fair use’ of those works under US copyright law.

The significance of where training and development activities took place

Arguably, the broadest implications of the case attach to the part of Stability AI’s defence that concerns the claims pertaining to the training and development of Stable Diffusion.

Stability AI’s central defence is that it did not perform some of the acts complained of that relate to the early development of image generation models that were the precursor to Stable Diffusion, and that where it provided processing and hosting services to support research and development of such models, those services were provided using hardware and computing resources located outside the UK. It has further asserted that UK-based Stability AI employees were not involved in the development work.

The essence of Stability AI’s argument is that the activities complained of took place outside the scope of UK copyright law.

“None of the individuals who were involved in developing and training Stable Diffusion … resided or worked in the UK at any material time during its development and training,” Stability AI has pleaded. “The development work associated with designing and coding the software framework for Stable Diffusion and developing a code base for training it was carried out … outside the UK. The training of each iteration of Stable Diffusion was performed … outside the UK. No visual assets or associated captions were downloaded or stored (whether on servers or local devices) in the UK during this process.”

The “location issue” was touched on by High Court judge Mrs Justice Joanna Smith during her consideration of Stability AI’s failed strike-out application last year.

At the time, the judge said that, on the limited evidence she had assessed on the point, there is “support for a finding that, on the balance of probabilities, no development or training of Stable Diffusion has taken place in the United Kingdom”. However, she said she was not sufficiently convinced to determine the point without first giving Getty the opportunity to refute that evidence at trial. Among other things, she noted that there is some evidence that points away from what Stability AI has argued, and that there are reasonable grounds to believe disclosure in the case may add to or alter the evidence on where the training and development of Stable Diffusion took place.

“The location issue is certainly not an issue on which I can say at present that [Getty’s] claim is doomed to fail,” Mrs Justice Joanna Smith said.

If the High Court accepts Stability AI’s position, Getty’s claims pertaining to the training and development of Stable Diffusion will fail on the jurisdictional scope of the Copyright, Designs and Patents Act 1988 – even if Getty is otherwise able to convince the court that there was unauthorised copying or reproduction of its works by Stability AI in the training and development phase.

It is possible that the court could consider that an intolerable position.

There are examples of UK courts alluding to having to apply existing ‘bad’ law and putting the onus on parliament to implement reforms. The House of Lords – in its previous guise as the UK’s highest court before the Supreme Court was established – also famously intervened to limit the scope of long-standing copyright law as it applied to drawings of industrial designs, in a 1986 case involving automotive manufacturer British Leyland Motor Corporation.

In that case, the House of Lords overturned lower court rulings that had effectively given British Leyland scope to control the aftersales market for the repair of exhausts for its vehicles on the basis of copyright subsisting in drawings of that component. It determined that copyright law, as the lower courts had applied it, went too far. The House of Lords ruling ultimately shaped how the 1988 Act provides for copyright infringement in the context of artistic copyright in drawings of designs.

Potential exacerbation of tensions between AI developers and content creators

Notwithstanding what the court might say itself, content creators are likely to view a finding in support of Stability AI’s arguments on the location issue as a loophole in the law: they might see it as giving AI developers scope to train AI models using their copyright works without permission, yet escape liability under UK copyright law, simply by virtue of using technology infrastructure located in other jurisdictions to host or process – make copies of, reproduce, or make available – those works.

In that scenario, content creators are likely to lobby heavily for the law to be updated.

The UK government is already caught in the middle of a significant lobbying battle between content creators and AI developers. This reflects its broader goals of helping the creative industries grow – as it seeks to do with its creative industries vision for 2030 – while enabling AI innovation across the economy in tandem – including through the implementation of its national AI strategy. To the degree those initiatives engage questions of copyright, there is a natural tension to navigate. This has been evident in copyright-related initiatives the government has pursued in recent times.


In 2022, the government set out plans to extend the text and data mining (TDM) exception to copyright to support AI development. Under the proposals, AI developers would have been able to engage in TDM of copyright content for commercial purposes, provided they had lawful access to the works via, for example, a licence, subscription or permission in terms and conditions. However, the government abandoned the proposals in early 2023 following pushback from the creative industries, meaning the TDM exception remains limited to TDM undertaken for non-commercial purposes.

The government’s attention subsequently turned to encouraging representatives from the creative and technology industries to agree on a new AI copyright code of practice. It was envisaged that the code would serve as a voluntary framework that would strike a balance between AI developers’ desire to access quality data to train their AI models and content creators’ right to control – and commercialise – access to their copyright works. However, the government was forced to abandon its plans for the code after industry talks broke down. At the time, a spokesperson for the Intellectual Property Office (IPO) described the discussions as having “been challenging”.

The government is now pursuing “a workable alternative” to an industry-led code and is engaging with representatives from the AI and creative sectors to help it identify what that alternative might look like. However, there is growing pressure on it to realise an effective outcome to that process.

In a recent report, MPs on the Culture, Media and Sport Committee in the UK parliament called on the government to set “a definitive deadline” for concluding the industry talks. The committee wants the government to follow through on earlier threats to legislate on the question of balancing AI developer and copyright holder rights if it cannot achieve a voluntary solution with industry.

Government ministers have previously spoken about the potential for legislating in the event a voluntary framework cannot be agreed, but while not ruling such action out, they have emphasised the importance of achieving international consensus on the issue of AI and copyright. In this regard, some policy has begun to emerge in other jurisdictions.

In the EU, the forthcoming AI Act will require providers of so-called general purpose AI (GPAI) models to put in place a policy to respect EU copyright law and to draw up and make publicly available a sufficiently detailed summary about the content used for training their models.

In the US, one lawmaker recently introduced proposed legislation in the House of Representatives which, if enacted, would impose similar disclosure requirements on developers of generative AI systems in respect of the data used to train their systems.

Legislative change in other countries and parliamentary scrutiny at home is likely to add pressure on the government to act – pressure which is only likely to increase if Stability AI succeeds with its arguments, on the location issue in particular.

The wider implications for the UK AI industry

Currently, there is uncertainty over where the balance of power lies between AI developers and content creators on the question of copyright licensing.

The deal struck between German publisher Axel Springer and OpenAI late last year – which enables the AI developer to train its systems using “quality content from Axel Springer media brands for advancing the training of OpenAI’s sophisticated large language models” – highlights the potential for partnerships between content creators and AI developers to emerge. However, the failed AI copyright code project shows there is a wider stalemate on the issue, with some content creators, like Getty Images and the NYT, having turned to the courts in a bid to enforce their rights.

Success for Stability AI – whether on the location issue, the application of the pastiche exception, or the other grounds of its defence against Getty’s claims – would make it less likely that licensing negotiations between AI developers and content creators in the UK conclude in agreement; it might even be taken by AI developers as a green light to proceed without a licence in place at all. It could take legislative reform to move the dial.

Any move to extend UK copyright law to give it extraterritorial effect – as the creative industries could potentially push for if Stability AI succeeds on the location issue – would need to be given careful thought, however.

On the one hand, the government wants to be seen to be supportive of the right of copyright holders to protect and enforce their rights – and any gap found in the law that provides scope for infringement of those rights on the basis of a jurisdictional or technological technicality is likely to be viewed as unacceptable.

On the other hand, it would open the possibility that the copying or reproduction of copyright works on technology infrastructure in a foreign jurisdiction is deemed lawful under the local copyright regime in that country – because it constitutes ‘fair use’ in the US, for example – but unlawful under the updated UK regime.

That position, if it materialised, could negatively affect how the UK is perceived as a country for developing AI solutions in what is a global battle between countries to attract investment in such technology. The government could therefore favour a different intervention – if it decides to intervene at all. It is also likely to have one eye on relevant technological developments: a recent report by World Trademark Review, for example, flagged two pieces of software developed by computer science students at the University of Chicago that can confuse, disrupt, and add cost to the unlicensed scraping of data from the internet.

There is quite a lot for the High Court to consider in this case, and quite a lot at stake beyond how this individual dispute between Getty and Stability AI is resolved. The case is therefore likely to be followed closely by AI developers, the copyright lobby, and policymakers alike given its potential to precipitate the evolution of licensing and the law.
