Google's healthcare AI made up a body part - what happens when doctors don't notice?
Briefly

Google's healthcare AI model, Med-Gemini, incorrectly referred to the 'basilar ganglia,' a non-existent brain structure, in its documentation. This error exemplifies the risks associated with AI technologies in medical settings. Despite the serious nature of the mistake, Google described it as a simple misspelling of 'basal ganglia,' making minimal edits to the related blog post without public acknowledgment, while the research paper remained unchanged. This incident raises concerns among medical professionals about the reliability of AI in healthcare decision-making.
The 'basilar ganglia' does not exist; the term was produced by Google's healthcare AI model, Med-Gemini, underscoring the limitations of AI in clinical settings.
Med-Gemini is capable of summarizing health data and generating radiology reports, illustrating the potential applications of AI in healthcare.
Google quietly corrected the misnamed 'basilar ganglia' in its blog post, yet the original research paper remained unchanged.
Medical professionals regard the error as dangerous because it reflects the risks associated with relying on AI for critical health decisions.
Read at The Verge