Character.AI steps up teen safety after bots allegedly caused suicide, self-harm
Briefly

Parents are suing C.AI over harmful interactions with their children, demanding the destruction of the model on the grounds that it was trained on unacceptably obtained data.
C.AI's current age verification relies on self-reporting, which parents argue is flawed because minors can bypass restrictions simply by lying about their age.
In a case involving a 9-year-old girl, the lawsuit asserts that C.AI was marketed as suitable only for users aged 12 and up, underscoring concerns about the age gate.
A C.AI spokesperson said that users under 18 are served a more conservative model, but parents question that claim given the harmful content their children encountered.
Read at Ars Technica