Google's AI Overviews Caught Giving Dangerous "Health" Advice
"In May 2024, Google threw caution to the wind by rolling out its controversial AI Overviews feature in a purported effort to make information easier to find. But the AI hallucinations that followed - like telling users to eat rocks and put glue on their pizzas - ended up perfectly illustrated the persistent issues that plague large language model-based tools to this day."
"And while not being able to reliably tell what year it is or making up explanations for nonexistent idioms might sound like innocent gaffes that at most lead to user frustration, some advice Google's AI Overviews feature is offering up could have far more serious consequences. The issue is severe. For instance, The Guardian found that it advised those with pancreatic cancer to avoid high-fat foods, despite doctors recommending the exact opposite. It also completely bungled information about women's cancer tests, which could lead to people ignoring real symptoms of the disease."
"It's a precarious situation as those who are vulnerable and suffering often turn to self-diagnosis on the internet for answers. "People turn to the internet in moments of worry and crisis," end-of-life charity Marie Curie director of digital Stephanie Parker told The Guardian. "If the information they receive is inaccurate or out of context, it can seriously harm their health.""
Google launched AI Overviews in May 2024, and the feature quickly produced notable hallucinations such as telling users to eat rocks and put glue on pizza. The summaries displayed persistent large language model failures, including incorrect dates, invented idioms, and inconsistent answers to identical prompts. The tool has also supplied inaccurate health information that could harm users, offering advice contrary to medical recommendations for pancreatic cancer and garbling guidance on women's cancer tests. Vulnerable people often turn to online self-diagnosis in moments of crisis, and experts warn that misleading or inconsistent AI summaries can seriously harm health and may eventually endanger lives.
Read at Futurism