
"Tech giants are racing to amass as much written material as possible to train their LLMs, which power groundbreaking AI chat products like ChatGPT and Claude - the same products that are endangering the creative industries, even if their outputs are milquetoast. These AIs can become more sophisticated when they ingest more data, but after scraping basically the entire internet, these companies are literally running out of new information."
"In June, federal judge William Alsup sided with Anthropic and ruled that it is, indeed, legal to train AI on copyrighted material. The judge argued that this use case is 'transformative' enough to be protected by the fair use doctrine, a carve-out of copyright law that hasn't been updated since 1976. 'Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them - but to turn a hard corner and create something different,' the judge said."
Approximately 500,000 writers will share a $1.5 billion settlement with Anthropic, with a guaranteed minimum payment of $3,000 per work. The settlement represents the largest payout in U.S. copyright law history. Anthropic reportedly pirated millions of books from "shadow libraries" and used them to train its Claude language model, prompting Bartz v. Anthropic, one of dozens of lawsuits targeting major tech companies. A federal judge ruled that training on copyrighted material can qualify as fair use when the use is transformative, but the illegal downloading of the books is what drove the case toward trial. The settlement compensates writers financially while largely preserving the legal advantages of large AI firms.
Read at TechCrunch