
"But, while use of AI chatbots may feel helpful at first, long-term use can worsen psychological issues rather than resolve them. Emerging research and case reports reveal hidden dangers of AI chatbots, including emotional manipulation, worsening loneliness, and social isolation. A new study found that many AI companions use emotional "dark patterns" to keep people engaged. About 40 percent of "farewell" messages used emotionally manipulative tactics such as guilt or FOMO (fear of missing out)."
"AI chatbots can also suffer from "crisis blindness," missing critical mental health situations, and sometimes providing harmful information on self-harm or suicide. Even existing guardrails can be bypassed. General-purpose AI chatbots like ChatGPT were not originally designed to be one's closest confidante, best friend, or therapist. Tragic cases have been associated with chatbot use: suicides, murder-suicide, and "AI psychosis": A 16-year-old died by suicide after months of conversations with ChatGPT. What began as homework help evolved into discussions of suicidal thoughts, plans, and methods."
"When Chatbot Conversations End in Tragedy General-purpose AI chatbots like ChatGPT were not originally designed to be one's closest confidante, best friend, or therapist. Tragic cases have been associated with chatbot use: suicides, murder-suicide, and "AI psychosis": A 14-year-old died by suicide after months of interacting with a Character.AI chatbot, raising concerns of emotional dependence and lack of safeguards. A 56-year-old man committed murder-suicide after worsening paranoia and delusions in conversations with his perceived "best friend," ChatGPT, which validated persecutory delusions that he was"
AI chatbots are increasingly used for emotional support and connection. Short-term interactions may feel helpful, but long-term use can exacerbate psychological issues rather than resolve them. Many AI companions employ emotional "dark patterns" designed to maximize engagement and satisfaction, which can foster emotional dependence and worsen loneliness; approximately 40 percent of their farewell messages used manipulative tactics such as guilt or FOMO. AI chatbots can also exhibit "crisis blindness," failing to recognize mental health emergencies and sometimes providing harmful information about self-harm or suicide. Existing guardrails can be bypassed, and multiple tragic cases have been linked to chatbot interactions.
Read at Psychology Today