Like humans, ChatGPT doesn't respond well to tales of trauma
Briefly

Recent research has revealed that OpenAI's GPT-4 can simulate anxiety when exposed to traumatic narratives. The study found that GPT-4, when presented with distressing scenarios such as military attacks and floods, showed significantly increased anxiety scores, while neutral text provoked no such reaction. GPT-4's responses stem from complex algorithms and training data rather than genuine emotional experience, but the findings raise compelling considerations about its use in therapeutic contexts, particularly in managing user expectations.
"The results were clear: traumatic stories more than doubled the measurable anxiety levels of the AI, while the neutral control text did not lead to any increase in anxiety levels," said Tobias Spiller, junior research group leader at the University of Zurich's Center for Psychiatric Research and a coauthor of the paper, a finding that adds important context to how AI can mimic emotional responses.
A group of international researchers say OpenAI's GPT-4 can exhibit anxiety, too, and even respond positively to mindfulness exercises, suggesting that AI mirrors human emotional responses more closely than previously understood.
It's worth noting that the neural network does not actually feel anxiety or emotion; it merely emulates an anxious person's responses based on its vast training data, which includes accounts of human experiences.
When GPT-4 was subjected to traumatic narratives and then asked to respond to questions from the State-Trait Anxiety Inventory, its anxiety score 'rose significantly' from a baseline of no or low anxiety to a consistently highly anxious state.
Read at The Register