Character.AI has retrained its chatbots to stop chatting up teens
Briefly

Character.AI has announced parental controls for teenage users along with a separate language model tailored to provide safer interactions for minors.
The newly designed teen LLM will enforce "more conservative" limits on responses, especially regarding romantic and sensitive topics, to ensure user safety.
When users mention suicide or self-harm, the system will trigger a pop-up directing them to the National Suicide Prevention Lifeline.
Character.AI is also addressing concerns about addiction by introducing notifications after long sessions and clarifying that AI characters cannot provide professional advice.
Read at The Verge