ChatGPT outperformed doctors in diagnostic accuracy, study reveals
Briefly

The study, published in JAMA Network Open, tested 50 doctors on six challenging medical cases. Doctors using ChatGPT assistance scored an average of 76%, slightly higher than the 74% scored by those without it. However, ChatGPT alone achieved a remarkable 90% accuracy in diagnosing the conditions. This gap highlights the potential of AI in medical diagnostics and suggests that human practitioners may struggle to use such tools effectively.
The study revealed two key issues. First, doctors typically adhered closely to their initial diagnosis, often dismissing ChatGPT's suggestions when they contradicted their own. This anchoring suggests that reliance on traditional diagnostic habits could inhibit the effective integration of AI tools in medical practice.
Second, physicians underutilized the AI. They tended to treat the chatbot like a search engine, asking it narrow, specific questions rather than leveraging its full capabilities, such as analyzing entire medical histories for more comprehensive insights.
"I was shocked," Dr. Adam Rodman, an internal medicine expert at Beth Israel Deaconess Medical Center and co-author of the study, remarked, indicating that the findings were unexpected, illustrating the disparity in performance between doctors and AI models like ChatGPT in diagnosing complex medical cases.