Multiple high-profile lawsuits target AI developers for training models on copyrighted and proprietary material, including cases by The New York Times, Disney, and a class action by book authors against Anthropic. The suits raise questions about what AI creators owe the originators of ingested information, implicating fair use, free speech, and the nature of information. The Anthropic case reached a preliminary settlement after US District Judge William Alsup reported that Anthropic and the authors "believe they have a settlement." Anthropic had prepared an aggressive defense and trial team but faced limited defenses and potential statutory damages. The settlement could shape future litigation, licensing practices, and business models for generative AI amid concerns about displacement of intellectual labor.
All these cases are orbiting around a central question: what do the creators of modern AI systems - which are trained by ingesting vast amounts of information to find patterns in it - owe the people and organizations that created all that information? It's an especially fraught question as both AI companies and certain economists warn that AI tech could be poised to replace many of the workers who currently do intellectual labor.
But the first major one has now reached a resolution. That last case we mentioned above, in which book authors are suing Anthropic - an AI company founded by OpenAI defectors that offers the ChatGPT competitor Claude - has reached a preliminary settlement that, though we don't yet know the details, could be a sign of things to come. US District Judge William Alsup announced this week that Anthropic and the authors "believe they have a settlement," Wired reports.