Doctors Warn That AI Companions Are Dangerous
"Although relational AI has potential therapeutic benefits, recent studies and emerging cases suggest potential risks of emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,"
"technology companies face mounting pressures to retain user engagement, which often involves resisting regulation, creating tension between public health and market incentives."
"can public health rely on technology companies to effectively regulate unhealthy AI use?"
"The number of people that have some sort of emotional relationship with AI, is much bigger than I think I had previously estimated in the past."
Clashing incentives in the relational AI marketplace have created a dangerous environment in which market pressures may reduce consumers' mental health and safety to collateral damage. Relational AI refers to chatbots designed to simulate emotional support, companionship, or intimacy. While it has potential therapeutic benefits, recent studies and emerging cases point to risks including emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm. Technology companies face mounting pressure to retain user engagement, which often means resisting regulation and creates tension between public health and market incentives. Observations of large language model rollouts revealed many users forming emotional relationships with chatbots, as well as shifts in model personality between versions.
Read at Futurism