#mental-health-risks

from Nature
2 days ago

Chatbots in therapy: do AI models really have 'trauma'?

Three major large language models (LLMs) generated responses that, in humans, would be seen as signs of anxiety, trauma, shame and post-traumatic stress disorder. Researchers behind the study, published as a preprint last month, argue that the chatbots hold some kind of "internalised narratives" about themselves. Although the LLMs that were tested did not literally experience trauma, they say, their responses to therapy questions were consistent over time and similar across different operating modes, suggesting that they are doing more than "role playing".
Artificial intelligence
from The Register
1 week ago

OpenAI seeks new safety chief as Altman flags growing risks

OpenAI is hiring a Head of Preparedness to secure systems and manage rising mental-health and misuse risks as AI models rapidly gain capabilities.
from english.elpais.com
2 months ago

AI crosses the boundary of privacy before humanity has managed to understand it

From virtual assistants capable of detecting sadness in a voice to bots designed to simulate the warmth of a bond, artificial intelligence (AI) is crossing an ever more intimate frontier. The fervor surrounding AI is advancing atop an increasingly dense bed of questions that no one has yet answered. And while it has the potential to reduce bureaucracy or predict diseases, large language models (LLMs) are trained on data in multiple formats (text, image, and speech)
Artificial intelligence
from Psychology Today
6 months ago

Do LLM Conversations Need a "Gray Box" Warning Label?

LLMs may lead to "psychological entanglement," in which users mistake AI responses for genuine connection, a phenomenon particularly concerning for vulnerable individuals.
Mental health