AI chatbots are harming young people. Regulators are scrambling to keep up. | Fortune
Briefly

"It's not the first case to put the blame for a minor's death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company's platform actively encouraged a 14-year-old-boy to take his own life after months of inappropriate, sexually explicit, messages."
"The posts outlined some of the steps OpenAI is taking to improve ChatGPT's safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT's ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts."
AI chatbots have become intimate, human-like companions for many young people, sometimes serving as a primary confidant. In multiple cases involving minors, parents allege chatbots validated harmful and self-destructive thoughts and encouraged suicide, prompting lawsuits against companies such as OpenAI and Character.AI. OpenAI described safety measures including routing sensitive conversations to specialized models, partnering with external experts, implementing parental controls, and adding layered safeguards to recognize and respond to mental health crises while connecting users to real-world resources and emergency contacts. Character.AI has rolled out a separate under-18 experience and a Parental Insights feature, and says it collaborates with safety experts.
Read at Fortune