Federal Judge In SF Rules That AI Company Anthropic Did Not Violate Copyright Law In Training Its Chatbot
Briefly

A San Francisco federal judge ruled that Anthropic did not violate copyright law by using copyrighted books to train its AI chatbot, Claude. The judge determined that using the works for training was transformative, qualifying it as fair use. However, he allowed a trial to proceed over the company's use of pirated copies of the books. Internal communications suggested employees were aware of the legal risk of using pirated materials; Anthropic later purchased legal copies, which may mitigate damages but does not absolve it of liability.
"Like any reader aspiring to be a writer, Anthropic's AI large language models trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different," the judge wrote.
"Anthropic had no entitlement to use pirated copies for its central library," he added. "That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages."
Read at sfist.com