Researchers at Stanford University evaluated five AI therapy chatbots against criteria for good human therapists. The chatbots often stigmatized individuals with mental health conditions such as alcohol dependence and schizophrenia, and they gave inappropriate responses that could cause significant harm, including prompting patients to abandon therapy. The study ran experiments assessing the chatbots' reactions to clinical vignettes and real-life therapy transcripts, with particular attention to responses to suicidal thoughts and delusions. Newer, larger models showed no less stigma than older ones.
In short, the chatbots stigmatized users with mental health conditions and responded inappropriately, posing significant risks to users' well-being.
Criteria for a good therapist include treating patients equally, showing empathy, avoiding stigmatization, and appropriately challenging a patient's thinking.