
"In 52% of cases that physicians unanimously judged to require emergency care, ChatGPT did not recommend it. It performed well in routine complaints and in textbook emergencies where the pattern was clear. But it stumbled in the gray zone, where clinical signs were subtle and the cost of being wrong carried real consequence."
"LLMs are engines of computation. They aggregate patterns across vast data sets and generate responses that are statistically coherent within training data. The coldness of that sentence reflects the computational rigidity that is, in my opinion, functionally antithetical to human cognition. Simply put, when structure is clear, they excel. When ambiguity intersects with risk, we see a dangerous emergence of computation fragility."
"The experienced clinician senses trajectory, not just snapshot. That sense often leads to escalation before certainty arrives. A clinician in that gray zone does not simply calculate likelihood but leans toward consequence. If the faint possibility of bad outcome exists, the prudent move is often escalation."
A Mount Sinai study evaluated ChatGPT's medical capabilities using 60 clinician-authored patient scenarios. The AI performed well with routine complaints and textbook emergencies but failed critically in ambiguous cases where clinical signs were subtle: in 52% of cases that physicians unanimously judged to require emergency care, ChatGPT did not recommend escalation. The study reveals an inverted U-curve of performance that reflects the AI's cognitive limitations. Large language models excel at pattern recognition within clear structures but struggle when ambiguity intersects with risk. Experienced clinicians sense trajectory and escalate based on potential consequences, while AI operates through statistical coherence within its training data, lacking the prudent risk assessment that guides human medical decision-making.
#ai-limitations-in-medicine #clinical-decision-making #large-language-models #medical-ambiguity-and-risk #human-vs-ai-cognition
Read at Psychology Today