Anthropic Settles the Authors' Class Action on Training Data: What It Means for Fair Use, Compensation, and Competition
Briefly

A class-wide settlement was reached in Bartz v. Anthropic PBC, with a binding term sheet filed August 26, 2025 and preliminary approval to be sought in early September. Judge Alsup previously held that training on lawfully obtained books constituted fair use as a matter of law. The judge also held that Anthropic's acquisition and storage of "pirated" works in a central library could constitute infringement and expose the company to statutory damages. The settlement avoids trial but leaves unresolved policy questions about AI training. The ruling emphasizes that fair use under 17 U.S.C. § 107 requires fact-specific, work-by-work analysis.
Anthropic and a certified class of book authors have reportedly reached a class-wide settlement in Bartz v. Anthropic PBC, the Northern District of California case challenging the company's ingestion of millions of books as training data to build Claude. The parties filed a notice on August 26, 2025, stating that they executed a binding term sheet and will seek preliminary approval in early September.
The settlement averts a trial (and an appeal of the class certification) but not the broader policy questions surrounding AI training and copyright. Judge Alsup's analysis demonstrates why fair use is not a blanket defense for LLM development. While he granted summary judgment for Anthropic on training with lawfully obtained copies, he treated "training" and the maintenance of a "shadow library" of pirated works as distinct uses under copyright law, each requiring its own analysis.
Read at Patently-O