
"We need to acknowledge that most of the time, there is a kernel of truth to this. "Most of the time" is not nearly enough. Not even close. The expertise lies in being able to address the minority of presenting problems that require extraordinarily high levels of training. Even more important is the expert judgment required to assess and determine what is routine and simple, and what requires intensive training and proficiency."
"Nearly all professions with nascent human-replacing technologies are like this. Most of family medicine involves the advice to lose weight, get more sleep, watch symptoms carefully, cut alcohol use, and other chestnuts that any person or website can do. This does not mean that primary care physicians can be replaced with LLMs or other technology."
"The expertise lies in assessment, decision-making, and knowing when to act on the relatively few cases and situations that require immediate follow-up, prevention plans, or referral (Gomez-Cabello et al., 2024; Riedl et al., 2024). Engineers, airplane pilots, air traffic controllers, screenwriters, architects, archivists, and many others are in a similar scenario. We are even seeing self-driving cars being remarkably effective."
Large language models are being used as therapists and as tools for psychologists. LLMs can theoretically support the simplest mental health needs but currently fail to provide sufficient real-world accuracy for complex care. Expert clinicians supply the assessment, judgment, and decision-making needed to identify the minority of cases that require intensive training, urgent follow-up, prevention plans, or referral. Much routine work resembles simple advice that any person or website could offer, yet rare high-risk situations demand professional intervention. In concept, LLMs could serve as adjuncts to clinical practice, but practical deployment remains profoundly problematic for serious mental health care.
Read at Psychology Today