Stability AI argues that the output images are a pastiche because they are artistic works “in a style that may imitate that of another work, artist, or period, or consisting of a medley of material imitating a multitude of elements from a large number of varied sources of training material”.
It further argues that “the act of generating such an image is for purposes including pastiche” and that the generation of such images by users “is, or is at least highly likely to be, a fair dealing”.
It claims fair dealing because “the extent of any taking of a copyright work is no more than necessary to generate the pastiche in question, and likely to be significantly less than the whole of the work; the nature of the dealing is such that it is extremely improbable, rare and the result of stochastic processes; the pastiche is not a substitute for the original copyright work; and the pastiche does not interfere in the market for the original copyright work”.
There is little guidance on what falls within the pastiche exception, not only in the UK but also in the EU, where the exception was made mandatory for member states to apply (in the context of content generated by users on online content-sharing services) as part of EU copyright reform in 2019. The High Court's consideration of Stability AI's pastiche exception arguments could therefore provide a useful steer as to the scope of the pastiche exception generally, and its application, if any, to outputs from generative AI systems in particular.
There are analogies to be drawn between the arguments Stability AI raises regarding the pastiche exception to copyright in the UK and the extent to which the 'fair use' limitation on copyright infringement in the US can be engaged where the outputs of generative AI systems 'mimic' copyright content input to those systems.
That issue is one that could be explored by the US courts in the case brought by the New York Times (NYT) against OpenAI and Microsoft in respect of their generative AI systems. The NYT has accused OpenAI and Microsoft of seeking to “free-ride” on its “massive investment in its journalism”, by using the content it publishes to “build substitutive products without permission or payment”. It has said the outputs of OpenAI and Microsoft’s AI systems “compete with and closely mimic the inputs used to train them”, which it alleges include copies of NYT works, and that this does not constitute ‘fair use’ of those works under US copyright law.
The significance of where training and development activities took place
Arguably, the potentially broader implications of the case attach to a part of Stability AI’s defence that concerns the claims pertaining to the training and development of Stable Diffusion.
Stability AI’s central defence is that it did not perform some of the acts complained of that relate to the early development of image generation models that were the precursor to Stable Diffusion, and that where it provided processing and hosting services to support research and development of such models, those services were provided using hardware and computing resources located outside the UK. It has further asserted that UK-based Stability AI employees were not involved in the development work.
The essence of Stability AI’s argument is that the activities complained of took place outside the scope of UK copyright law.
“None of the individuals who were involved in developing and training Stable Diffusion … resided or worked in the UK at any material time during its development and training,” Stability AI has pleaded. “The development work associated with designing and coding the software framework for Stable Diffusion and developing a code base for training it was carried out … outside the UK. The training of each iteration of Stable Diffusion was performed … outside the UK. No visual assets or associated captions were downloaded or stored (whether on servers or local devices) in the UK during this process.”
The “location issue” was touched on by High Court judge Mrs Justice Joanna Smith during her consideration of Stability AI’s failed strike-out application last year.
At the time, the judge said that on the limited evidence she had assessed on the point, there is “support for a finding that, on the balance of probabilities, no development or training of Stable Diffusion has taken place in the United Kingdom”. However, she said she was not sufficiently convinced to determine the point without first giving Getty the opportunity to refute that evidence at trial. Among other things, she noted that some evidence points away from what Stability AI has argued, and that there are reasonable grounds to believe disclosure in the case may add to or alter the evidence on where the training and development of Stable Diffusion took place.
“The location issue is certainly not an issue on which I can say at present that [Getty’s] claim is doomed to fail,” Mrs Justice Joanna Smith said.
If the High Court accepts Stability AI’s position, Getty’s claims pertaining to the training and development will fail because the activities fall outside the territorial scope of the Copyright, Designs and Patents Act 1988, even if Getty is otherwise able to convince the court that there was unauthorised copying or reproduction of its works by Stability AI in the training and development phase.
It is possible that the court could consider that an intolerable position.
There are examples of UK courts acknowledging that they must apply existing ‘bad’ law and placing the onus on parliament to implement reforms. However, the House of Lords, in its previous guise as the UK’s highest court before the Supreme Court was established, famously intervened to limit the scope of long-standing copyright law as it applied to drawings of industrial designs in a 1986 case involving automotive manufacturer British Leyland Motor Corporation.
In that case, the House of Lords overturned lower court rulings that had effectively given British Leyland scope to control the aftersales market for the repair of exhausts for its vehicles on the basis of copyright subsisting in drawings of that component. It determined that copyright law, as the lower courts had applied it, went too far. The House of Lords ruling ultimately shaped how the 1988 Act provides for copyright infringement in the context of artistic copyright in drawings of designs.
Potential exacerbation of tensions between AI developers and content creators
Notwithstanding what the court might say itself, content creators are likely to view a finding in support of Stability AI’s arguments on the location issue as a loophole in the law. They might see it as giving AI developers scope to train AI models on their copyright works without permission, while escaping liability under UK copyright law simply by using technology infrastructure located in other jurisdictions to host or process (that is, to copy, reproduce, or make available) those works.
In that scenario, content creators are likely to lobby heavily for the law to be updated.
The UK government is already caught in the middle of a significant lobbying battle between content creators and AI developers. This reflects its parallel goals of helping the creative industries grow, as it seeks to do with its creative industries vision for 2030, and enabling AI innovation across the economy, including through the implementation of its national AI strategy. To the degree those initiatives engage questions of copyright, there is a natural tension to navigate. This has been evident in copyright-related initiatives the government has pursued in recent times.