AI chatbots don't improve medical advice, study finds
Briefly

"The authors conducted a study with 1,298 UK participants who were asked to identify potential health conditions and to recommend a course of action in response to one of ten different expert-designed medical scenarios. The respondents were divided into a treatment group that was asked to make decisions with the help of an LLM (GPT-4o, Llama 3, Command R+) and a control group that was asked to make decisions based on whatever diagnostic method they would normally use, which was often internet search or their own knowledge."
"Pointing to prior work that has shown LLMs do not improve the clinical reasoning of physicians, the authors found that LLMs do not help the general public either. 'Despite LLMs alone having high proficiency in the task, the combination of LLMs and human users was no better than the control group in assessing clinical acuity and worse at identifying relevant conditions,' the report states."
A randomized study with 1,298 UK participants evaluated lay decision-making on ten expert-designed medical scenarios. Participants either used LLM assistance (GPT-4o, Llama 3, Command R+) or their usual diagnostic methods such as internet search or personal knowledge. LLMs alone demonstrated high task proficiency, but combining LLM output with human users produced no improvement in assessing clinical acuity and worsened identification of relevant conditions. Participants using LLMs performed no better at recommending courses of action than those consulting search engines or relying on personal knowledge, raising concerns about patient risk and commercial healthcare deployment.
Read at The Register