Using AI for medical advice 'dangerous', study finds
""despite all the hype, AI just isn't ready to take on the role of the physician". "Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognise when urgent help is needed," Dr Payne, who is also a GP, added. "These findings highlight the difficulty of building AI systems that can genuinely support people in sensitive, high-stakes areas like health," Dr Payne said."
"Andrew Bean - from the Oxford Internet Institute - said the study showed "interacting with humans poses a challenge" for even the top performing LLMs. "We hope this work will contribute to the development of safer and more useful AI systems," he added."
AI chatbots often provide a mix of accurate and inaccurate medical information that users struggle to tell apart. Nearly 1,300 participants evaluated medical scenarios, identifying possible conditions and recommended actions using either large language model software or traditional methods such as seeing a GP. AI systems excel at standardised tests of medical knowledge but can fail when interacting with humans and responding to real symptom presentations. AI-generated advice sometimes gives wrong diagnoses and can fail to recognise when urgent medical help is needed, posing risks for people seeking guidance on their symptoms. Building AI that reliably supports people in high-stakes health contexts remains difficult and will require further work toward safer, more useful systems.
Read at www.bbc.com