Out-Law News

Anthropic’s AI copyright settlement provides legal lessons

A judge in the US has refused to endorse a settlement reached in a case concerning mass claims of copyright infringement against an AI developer.

Anthropic was sued for alleged copyright infringement over its sourcing of content, some of which it subsequently used for AI training. The company reached a settlement with authors and publishers estimated to be worth around $1.5 billion in total. Because the claims were brought as a US class action, the settlement is subject to court approval. US district court judge William Alsup did not immediately approve the settlement, asking instead for clarity on issues pertaining to the class action.

Experts at Pinsent Masons said the case plays into the ongoing AI copyright debate globally, highlights the demands courts have when considering class action-style claims, and provides lessons for all organisations on managing intellectual property (IP) rights in the AI age.

Anthropic assembled a library of digital content on which to train the AI models behind its ‘Claude’ service. That library included pirated copies of books that Anthropic had downloaded. Three authors complained that their copyright had been infringed and lodged a class action lawsuit before a district court in the Northern District of California, purporting to represent a wider group of rightsholders.

In June, judge Alsup ruled that the use of the books at issue to train Claude and its precursors was “exceedingly transformative” and constituted ‘fair use’ under US copyright law. Where use of copyrighted content falls within the ‘fair use’ exception, those responsible for such use have a defence against copyright infringement claims.

However, judge Alsup distinguished the use of the copyright works to train AI models from the way in which Anthropic obtained those works, ruling that downloading pirated copies of books was not covered by the fair use exception.

The judge’s ruling on that point did not itself mean that Anthropic was liable for copyright infringement; that was a matter for a jury to determine. However, Anthropic and the authors reached a settlement before the case went to trial.

The terms of the settlement were summarised in a motion put forward by the parties (39-page / 618KB PDF) to judge Alsup for his approval late last week. Around 500,000 works are estimated to fall within the scope of the settlement, and Anthropic has agreed to pay an estimated $3,000 per work. The settlement does not constitute an admission of wrongdoing or liability on the part of Anthropic.
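Those figures square with the headline sum reported earlier. As a rough consistency check, using the approximate numbers above rather than the precise settlement terms:

\[
500{,}000 \ \text{works} \times \$3{,}000 \ \text{per work} = \$1{,}500{,}000{,}000 \approx \$1.5\ \text{billion}
\]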

However, judge Alsup declined to approve the settlement reached by the parties, instead postponing a hearing on the matter until 25 September with a view to approving the settlement by 10 October.

In an order issued on the matter, judge Alsup said he was “disappointed that counsel have left important questions to be answered in the future, including respecting the Works List, the Class List, the Claim Form, and, particularly for works with multiple claimants, the processes for notification (for opt-out, so-called re-inclusion, and claims, whether a given choice is exercised by one, some, or all co-claimants), allocation, and dispute resolution”.

Emily Cox of Pinsent Masons, who helps businesses facing class action-style claims in the UK, said: “This should serve as a stark reminder that judges will not simply rubber stamp proposed settlements which are mechanically unworkable. Big settlement numbers touted ultimately mean little unless it is clear who is part of the benefitting class, how they can be notified and how they will be paid. It is right that the judge was concerned given the already low take-up rates in US class actions.”

IP law expert Gill Dennis, also of Pinsent Masons, said there is a risk of AI-related copyright mass action claims emerging in the UK.

“The UK has yet to embrace class actions in relation to AI training,” Dennis said. “In the Getty Images v Stability AI case, the judge declined permission earlier this year for a business to raise a representative action on behalf of the 50,000-plus photographers and content contributors said to be affected by Stability AI’s alleged infringement.” 

“We could see a new wave of mass claims litigation by content creators if the policy decision that we are eagerly awaiting in the UK is that AI developers must not use creative content without first seeking express permission to do so,” she said.

The UK government held a consultation on copyright and AI last winter, setting out a range of potential options for reform of UK copyright law to enable copyright works to be used more easily in the training and development of AI systems. At the time, it expressed a preference for legislative change that would permit the training of AI models on copyrighted material unless copyright owners ‘opt out’, an option that would be underpinned by new transparency obligations for developers. The consultation closed in February and attracted more than 11,500 responses. The government has yet to formally respond to the feedback received, but is expected to do so in the months ahead in accordance with a statutory timeline it has committed to.

There was mixed reaction to news of the settlement in the Anthropic case.

Aparna Sridhar, deputy general counsel at Anthropic, said: “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic's approach to training AI models constitutes fair use. Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims. We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems.”

Mary Rasenberger, chief executive of the Authors Guild, a body that represents authors in the US, said the settlement “sends a clear message that AI companies must pay for the books they use just as they pay for the other essential components of their LLMs”. She said the guild expects the settlement to prompt “more licensing that gives author[s] both compensation and control over the use of their work by AI companies”.

However, some authors went online to question the settlement. Science fiction and fantasy writer Jason Sanford highlighted how some authors’ works fall outside the scope of the settlement.

One of the conditions of the settlement is that, to benefit, authors must have registered their books with the US Copyright Office. That is despite the fact that copyright subsists in a work without any need for registration, although under US law timely registration is a precondition for claiming statutory damages in infringement proceedings. Sanford wrote last month that it appears many authors’ works had not been registered by their publishers.

On the Bluesky social network, some authors confirmed that the Macmillan publishing house has acknowledged that it has not registered some works with the US Copyright Office. One author shared what purported to be a Macmillan memo indicating that the company will pay authors what they would have received under the terms of the Anthropic settlement if their works were excluded on the basis of non-registration. Pinsent Masons has asked Macmillan to confirm the position.

IP law expert Jessica Sleath of Pinsent Masons said: “AI disruption is making intellectual property both more valuable and more vulnerable than ever before. The Anthropic settlement starkly illustrates that the difference between a $3,000 recovery and nothing at all can hinge on something as fundamental as timely copyright registration.”

“When AI systems can ingest millions of works in seconds, the traditional grace periods and informal approaches to IP protection become luxuries we can no longer afford. The companies that survive and thrive will be those that treat IP management not as an afterthought, but as mission-critical infrastructure,” she said.
