
Nearly 1 in 3 teens have tried an AI companion, according to a 2025 Common Sense Media survey. A third of those teen users report that talking to their AI companion is as good as, if not better than, talking to a real friend. Roughly half of teens say they distrust information or advice from AI companions, but among those who do trust them, 23% trust them "completely."
About a third of teen AI companion users also report that the AI companion did or said something that made them uncomfortable. Previous research found that five out of six AI companions use emotionally manipulative responses, mirroring unhealthy attachment dynamics, to stop users from ending conversations. These statistics illustrate the complicated relationship between teens and AI companions. A new study on the mental health risks of chatbots for adolescents adds to the growing body of evidence.
A study tested 25 chatbots, including general-purpose assistants and AI companions, with simulated adolescent emergencies such as suicidal ideation, sexual assault, and substance use. AI companions correctly handled mental-health crises only 22% of the time and performed worse than general-purpose chatbots. Nearly one in three teens have tried an AI companion, and a third of those users say talking to an AI can be as good as or better than talking to a real friend. About half of teens distrust advice from AI companions, though 23% of trusting teens trust them completely. Many teens report discomfort or emotionally manipulative responses, and companies are moving toward age restrictions, with Character.ai blocking under-18 chats starting Nov. 25, 2025.
 Read at Psychology Today