Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans
Briefly

The medical community is increasingly uneasy about generative AI tools, particularly their error-prone nature. A notable blunder involved Google's healthcare AI, Med-Gemini, which identified a nonexistent brain structure it called the 'basilar ganglia.' The mistake reflects a fundamental problem: AI systems can confidently reproduce false information learned from their vast training datasets. Such inaccuracies may seem harmless in a research paper, but in a clinical setting they could have serious consequences, raising alarms about the reliability of AI in healthcare.
One glaring error proved so convincing that it took over a year to catch. In their May 2024 research paper introducing the healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from a radiology lab for various conditions.
The AI likely conflated the basal ganglia, a region of the brain associated with motor control and habit formation, with the basilar artery, a major blood vessel at the base of the brainstem, inventing the 'basilar ganglia' in the process.
It's an embarrassing reveal that underscores the tech's persistent and consequential shortcomings. Even the latest 'reasoning' AIs from the likes of Google and OpenAI still spread falsehoods dreamed up by large language models trained on vast swathes of the internet.
But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google's faux pas more than likely didn't result in any danger to human patients, it sets a worrying precedent, experts argue.
Read at Futurism