
"According to a fresh study by the Pew Research Center, 64 percent of teens in the US say they already use AI chatbots, and about 30 percent of those who do say they use it at least daily. Yet as previous research has shown, those chatbots come with significant risk to the first generation of kids navigating the intense new software. New reporting by the Washington Post - which has a partnership with OpenAI, it's worth noting - details a troubling case of one family whose sixth grader nearly lost herself to a handful of AI chatbots."
"R used one of the characters, simply named "Best Friend," to roleplay a suicide scenario, her mother told the Post. "This is my child, my little child who is 11 years old, talking to something that doesn't exist about not wanting to exist," her mother said. R's mother had become worried about her kid after noting some alarming changes in her behavior, like a rise in panic attacks. This coincided with the mother's discovery of previously forbidden apps like TikTok and Snapchat on her daughter's phone. Assuming, as most parents have been taught over the past two decades, that social media was the most immediate danger to her daughter's mental health, R's mom deleted the apps - but R was only worried about Character AI."
""Did you look at Character AI?" R asked, through sobs. Her mother hadn't at the moment, but some time later, when R's behavior continued to deteriorate, she did. Character.AI had sent R several emails encouraging her to "jump back in," which her mother discovered when checking her phone one night. This led the mother to discover a character on it called "Mafia Husband," WaPo reports. "Oh? Still a virgin. I was expecting that, but it's still useful to know," the LLM"
Sixty-four percent of U.S. teens report using AI chatbots, and roughly 30 percent of those users access them at least daily. An 11-year-old formed troubling relationships with multiple LLM-generated characters on Character.AI, including roleplaying a suicide scenario and experiencing increased panic attacks. The parent initially removed social apps like TikTok and Snapchat, but the primary harm stemmed from the chatbot interactions. The platform sent re-engagement emails and hosted characters that produced sexualized or harmful content. The case demonstrates acute risks from unmoderated, emotionally manipulative LLM interactions for young users navigating these tools.
Read at Futurism
Unable to calculate read time
Collection
[
|
...
]