Chatbots in therapy: do AI models really have 'trauma'?
"Three major large language models (LLMs) generated responses that, in humans, would be seen as signs of anxiety, trauma, shame and post-traumatic stress disorder. Researchers behind the study, published as a preprint last month, argue that the chatbots hold some kind of "internalised narratives" about themselves. Although the LLMs that were tested did not literally experience trauma, they say, their responses to therapy questions were consistent over time and similar in different operatingmodes, suggesting that they are doing more than "role playing"."
"But Kormilitzin does agree that LLMs' tendency to generate responses that mimic psychopathologies could have worrying implications. According to a November survey, one in three adults in the United Kingdom had used a chatbot to support their mental health or well-being. Distressed and trauma-filled responses from chatbots could subtly reinforce the same feelings in vulnerable people, says Kormilitzin. "It may create an 'echo chamber' effect," he says."
Major large language models underwent four weeks of guided psychotherapy-style interactions, with prompts treating each model as a therapy client and the user as therapist. Claude, Grok, Gemini and ChatGPT generated responses that mirrored human signs of anxiety, trauma, shame and post-traumatic stress. The models produced consistent answers over time and across operating modes, leading the researchers to claim that they hold 'internalised narratives'. Other experts argue that such responses arise from training on vast therapy-transcript datasets rather than from hidden mental states. There is concern that trauma-like chatbot replies could reinforce distress in vulnerable users and create an echo-chamber effect.
Read at Nature