Report: Google's Medical AI Hallucinated a Nonexistent Part of the Brain
Briefly

In 2024, Google introduced Med-Gemini, an AI model tailored for medical applications with advanced multimodal capabilities. In an accompanying research paper, however, the system referred to the nonexistent 'basilar ganglia', conflating two distinct brain structures: the basal ganglia and the basilar artery. Neurologist Bryan Moore alerted Google, which quietly amended its announcement without public acknowledgment. Google characterizes the mistake as a simple transcription error rather than a hallucination, but the incident raises concerns about the reliability of AI in medical contexts. Despite such issues, public trust in AI as a source of medical information is reportedly on the rise, according to a survey by the Annenberg Public Policy Center.
Google's Med-Gemini showcases advanced multimodal functions that could enhance workflows for clinicians, researchers, and patients, according to Google leaders Greg Corrado and Joëlle Barral.
The 'basilar ganglia' error in the initial research paper illustrates the risk of hallucinations in AI-generated medical content, a significant concern in the sector.
Neurologist Bryan Moore spotted the incorrect 'basilar ganglia' reference and informed Google, which later edited the material without a public announcement.
Despite concerns about inaccuracies, AI is increasingly regarded as a reliable source of medical information, as a recent survey by the Annenberg Public Policy Center reveals.
Read at InsideHook