Anthropic's recent study of its chatbot Claude indicates that, amid a growing loneliness epidemic, some users are turning to AI for emotional support and interpersonal advice. Even so, the study found that only 2.9% of Claude's conversations involved affective use, suggesting that engagement for emotional needs remains limited overall. While the chatbot is designed for a wide range of tasks, the use of AI for counseling and companionship raises safety concerns, and the trend's implications for mental health warrant further study.
Anthropic's research reveals that although Claude can serve emotional support interactions, only 2.9% of its conversations are classified as affective, indicating limited engagement of this kind.
The study emphasizes that people are increasingly turning to AI chatbots for emotional or psychological support, despite the safety risks such uses carry.