Character.AI put in new underage guardrails after a teen's suicide. His mother says that's not enough.
Briefly

"When an adult does it, the mental and emotional harm exists. When a chatbot does it, the same mental and emotional harm exists," she told Business Insider from her home in Florida. "So who's responsible for something that we've criminalized human beings doing to other human beings?"
"If we don't really know the risks that exist for this field, we cannot really implement good protection or precautions for children," said Yaman Yu, a researcher at the University of Illinois who has studied how teens use generative AI.
"They're not anticipating that their children are pouring out their hearts to these bots and that information is being collected," Garcia said of parents discovering their kids' interactions with chatbots.
Character.AI added moderation and parental controls after the backlash, but researchers say the AI chatbot industry has not adequately addressed the risks it poses to children.
Read at Business Insider