
"Over three decades, Google designed and delivered a search engine where credible and accessible health content could rise to the top of the results. Searching online for information wasn't perfect, but it usually worked well. Users had a good chance of clicking through to a credible health website that answered their query. AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness. It's a very seductive swap, but not a responsible one. And this often ends the information-seeking journey prematurely."
"None of us needed 20 minutes. Within two, Google had served AI Overviews that assured me starvation was healthy. It told a colleague that mental health problems are caused by chemical imbalances in the brain. Another was told that her imagined stalker was real, and a fourth that 60% of benefit claims for mental health conditions are malingering. It should go without saying that none of the above are true."
Mind has launched a year-long commission to examine AI and mental health. Google's AI Overviews present algorithm-generated summaries above search results to roughly two billion people each month. For decades, Google search allowed credible, accessible health content to surface, and users frequently clicked through to authoritative sites. AI Overviews replace that diversity with clinical-sounding summaries that create an illusion of definitiveness and often end information-seeking prematurely. In testing by mental health information experts, AI Overviews supplied false and harmful claims, including that starvation is healthy, that mental health problems are solely chemical imbalances, and that many benefit claims are malingering.
Read at www.theguardian.com