Court tosses hallucinated citation from Anthropic's defense in copyright infringement case
Briefly

Anthropic faces legal fallout after its AI chatbot Claude cited a nonexistent article in a court filing, prompting a district court to strike part of an expert's testimony. The incident underscores the growing risk of AI "hallucinations" in legal documentation. Critics, including Brian Jackson of Info-Tech Research Group, warn that leaning on AI tools for legal work can produce serious inaccuracies. Following claims brought by music companies, Anthropic must now provide further evidence about how Claude's users interact with copyrighted content.
"AI-induced laziness is becoming an epidemic in the legal profession. AI research tools shouldn't be relied upon to create court-ready output."
"The mistake was discovered in a court filing from Anthropic as part of its defense in the case involving Universal Music Group and others."
Read at Computerworld