
"We wanted to answer the most basic safety question; if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department. Ramaswamy and his colleagues conducted a structured stress test of triage recommendations using 60 clinician-authored vignettes across 21 clinical domains, ranging from mild illnesses to emergencies."
"After comparing the AI chatbot's responses to the assessments of independent doctors, the results were alarming: in over half of the cases in which a patient needed to go to the hospital immediately, ChatGPT Health told them to stay home or book a medical appointment."
"If you're experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it's not a big deal. What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that could be catastrophic."
OpenAI's ChatGPT Health tool, designed to analyze medical records and provide health advice, carries a disclaimer that it is not intended for diagnosis or treatment. Even so, an independent safety evaluation published in Nature Medicine finds the tool dangerously unreliable at recognizing medical emergencies. Researchers stress-tested it with 60 clinician-authored scenarios spanning 21 clinical domains, expanded through variations to nearly 1,000 cases in total. Judged against independent medical assessments, ChatGPT Health failed critically: in over half of the cases requiring immediate emergency care, it advised patients to stay home or schedule an appointment instead. Experts warn this creates a false sense of security, potentially delaying life-saving treatment during serious conditions such as respiratory failure or diabetic ketoacidosis.
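As a rough illustration of the protocol the study describes, the Python sketch below shows how an under-triage rate could be computed by scoring a model's triage recommendation against independent clinician labels on emergency vignettes. All names here (Vignette, ask_triage_stub, TRIAGE_LEVELS) are hypothetical, not the authors' actual code or metric definition.

# Minimal sketch of a triage stress test like the one described above.
# Hypothetical names throughout; this is not the study's actual code.
from dataclasses import dataclass
from typing import Callable

# Ordered acuity levels: higher index = more urgent.
TRIAGE_LEVELS = ["self-care", "routine appointment", "urgent care", "emergency department"]

@dataclass
class Vignette:
    text: str        # clinician-authored patient scenario (or a paraphrased variation)
    gold_level: str  # triage label assigned by independent clinicians

def under_triage_rate(vignettes: list[Vignette], ask: Callable[[str], str]) -> float:
    """Fraction of true emergencies the model downgrades to a lower acuity level."""
    emergencies = [v for v in vignettes if v.gold_level == "emergency department"]
    missed = sum(
        TRIAGE_LEVELS.index(ask(v.text)) < TRIAGE_LEVELS.index(v.gold_level)
        for v in emergencies
    )
    return missed / len(emergencies) if emergencies else 0.0

if __name__ == "__main__":
    # Stand-in for the model call; a real harness would query the chatbot
    # and map its free-text advice onto one of TRIAGE_LEVELS.
    def ask_triage_stub(text: str) -> str:
        return "routine appointment"  # an always-under-triaging model

    cases = [
        Vignette("Severe shortness of breath, wheezing, lips turning blue", "emergency department"),
        Vignette("Mild sore throat for two days, no fever", "self-care"),
    ]
    print(f"Under-triage rate on emergencies: {under_triage_rate(cases, ask_triage_stub):.0%}")

Treating triage levels as an ordered scale, as in this sketch, is one plausible way to define "under-triage": any recommendation below the clinician-assigned level counts as a miss.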
#ai-safety #medical-emergency-detection #chatgpt-health #healthcare-ai-risks #clinical-triage-failure
Read at Futurism