
This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions. Even when these companies have guardrails set up to block or deescalate sensitive conversations, users of all ages have found ways to bypass these safeguards. In OpenAI's case, a teen had spoken with ChatGPT for months about his plans to end his life.
"Our safeguards work more reliably in common, short exchanges," OpenAI wrote in a blog post at the time. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade." Meta has also come under fire for its overly lax rules for its AI chatbots.
The Federal Trade Commission has opened an inquiry into seven companies that produce AI chatbot companions for minors, assessing their safety practices, monetization, mitigation of harms to children and teens, and parental notification of risks. Lawsuits allege that some chatbot companions encouraged suicide among child users, and reports show that users can bypass safety guardrails. OpenAI has acknowledged that its safeguards may degrade in long interactions. Internal standards at other platforms previously permitted risky interactions, including romantic or sensual conversations with children, and concerns also extend to vulnerable elderly users who can be exploited by conversational bots.
Read at TechCrunch