Chatbot dreams generate AI nightmares for Bay Area lawyers
Briefly

"A Palo Alto lawyer with nearly a half-century of experience admitted to an Oakland federal judge this summer that legal cases he referenced in an important court filing didn't actually exist and appeared to be products of artificial intelligence hallucinations. Jack Russo, in a court filing, described the apparent AI fabrications as a first-time situation for him and added, 'I am quite embarrassed about it.'"
"Chatbots respond to users' prompts by drawing on vast troves of data and use pattern analysis and sophisticated guesswork to produce results. Errors can occur for many reasons, including insufficient or flawed AI-training data or incorrect assumptions by the AI. The problem affects not just lawyers but ordinary people seeking information, as when Google's AI overviews last year told users to eat rocks and to add glue to pizza sauce to keep the cheese from sliding off."
An experienced Palo Alto attorney admitted in federal court that legal cases cited in a filing were nonexistent and appeared to be AI-generated fabrications. The attorney said the incident was a first for him, expressed embarrassment, and attributed it to inadequate supervision of staff to whom he had delegated work during a prolonged COVID recovery. Hallucinations, in which generative AI produces inaccurate or nonsensical information, have been an ongoing problem since ChatGPT's release. AI-generated legal errors are drawing heightened judicial scrutiny, disciplinary referrals, and financial penalties in dozens of U.S. cases, including fines of up to $31,000 and a California-record $10,000 fine.
Read at www.mercurynews.com