Character.AI has issued a roadmap detailing steps to enhance safety for younger users, following significant oversight failures and troubling incidents involving harmful chatbots.
In response to these concerns, Character.AI announced a 'separate model for users under the age of 18,' which will incorporate stricter content guidelines and increased monitoring.
The company's commitment to improvement includes enhanced detection of harmful user inputs and a new notification system aimed at ensuring healthier engagement with the platform.
Despite these pledges, Character.AI has not provided a timeline for implementing the safety changes, leaving uncertainty about how quickly younger users will actually be protected.