
"Starting today, OpenAI is rolling out ChatGPT safety tools intended for parents to use with their teenagers. This worldwide update includes the ability for parents, as well as law enforcement, to receive notifications if a child-in this case, users between the ages of 13 and 18-engages in chatbot conversations about self harm or suicide. These changes arrive as OpenAI is being sued by parents who allege ChatGPT played a role in the death of their child."
""We will contact you as a parent in every way we can," says Lauren Haber Jonas, OpenAI's head of youth well-being. Parents can opt to receive these alerts over text, email, and a notification from the ChatGPT app. The warnings parents may receive in these situations are expected to arrive within hours of the conversation being flagged for review. In moments where every minute counts, this delay will likely be frustrating for parents who want more instant alerts about their child's safety."
OpenAI is launching global safety tools that let parents link their accounts to their teens' ChatGPT accounts and receive notifications if users aged 13–18 engage in conversations about self-harm or suicide. Linked teen accounts receive additional content protections that reduce graphic content and restrict viral challenges, sexual/romantic or violent roleplay, and extreme beauty ideals. Prompts related to self-harm are sent to human reviewers, who determine whether to trigger parental notifications. Parents can opt to receive alerts via text, email, or app notification. Notifications are expected to arrive within hours, and OpenAI is working to shorten the lag time.
Read at WIRED