Anthropic Shredded Millions of Physical Books to Train Its AI

Anthropic, a Google-backed AI startup, has faced scrutiny for destructively scanning millions of legally purchased, copyrighted books and discarding them to train its Claude AI model. The practice, revealed in a recent court ruling, was upheld as legal under the first-sale doctrine, allowing Anthropic to avoid seeking permission from authors or copyright holders. The ruling underscores growing concern over how tech companies exploit legal loopholes, potentially harming the arts and creative industries by undermining authors' rights. Anthropic's approach, while not unique, raises alarms about the future of intellectual property in the age of AI.
Anthropic's destructive book-scanning method relies on a legal loophole that lets the AI industry use authors' and publishers' work without their consent.
The court ruling validated Anthropic's practice of training AI on legally purchased books, raising ethical concerns across the tech and creative industries.
Read at Futurism