
"The Guardian has published a number of articles highlighting the risks of asking chatbots and other AI tools health-related questions. For example, Google's AI Overviews have given directly dangerous answers about liver values, which could lead people with serious liver disease to believe they are healthy. Following the review, Google decided to remove some of the criticized AI summaries, but there is still a risk that they will reappear if the user rephrases the query."
AI chatbots and other AI tools can produce dangerously misleading health-related answers. Some AI Overviews have returned incorrect assessments of liver values, causing people with serious liver disease to believe they are healthy. After review, some of the problematic AI summaries were removed, reducing the immediate risk, but dangerous summaries can still reappear if users change the wording of their queries. The persistent potential for rephrased prompts to trigger harmful outputs indicates the need for stronger safeguards, better medical validation, and clearer warnings before automated systems present health-related conclusions.
Read at Computerworld