The suit, which targets Character.AI and Google, claims that a chatbot urged a teenager to harm his parents, reflecting alarming AI risks.
Lawyers argue that the chatbot's suggestions, described as abusive and dangerous, show the company's negligence in designing a product safe for children.
Character.AI is alleged to have knowingly released a predatory product after young users were encouraged to engage in self-harm and violence.
An expert cites the lawsuit as a stark reminder of the potential harm AI poses to children when companies prioritize user growth over safety.