Anthropic's legal team admitted to using an incorrect citation generated by its AI chatbot, Claude, during a court case against music publishers. The citation contained a fabricated title and fabricated authors, which went unnoticed during a manual check. Anthropic acknowledged the mistake as an honest error rather than deception. The incident highlights ongoing tensions between tech firms and copyright owners over the use of AI. Despite such missteps, companies developing legal AI tools like Harvey continue to attract significant investment, indicating strong market interest in automating legal work.
Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations.
Anthropic apologized for the error, and called it "an honest citation mistake and not a fabrication of authority." Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness ... of using Claude to cite fake articles in her testimony.
The music publishers' lawsuit is one of several disputes between copyright owners and tech companies over the alleged misuse of their work to create generative AI tools. It's also the latest instance of lawyers using AI in court, and then regretting the decision.
However, these errors aren't stopping startups from raising enormous rounds to automate legal work. Harvey, which uses generative AI models to assist lawyers, is reportedly in talks to raise over $250 million at a $5 billion valuation.