Meta is adding guardrails for teens using AI chatbots, but experts say they won't address mental health risks | CBC News
Briefly

"Meta's new Teen Accounts supervision feature allows parents to see topics their children have discussed with its AI chatbot for the previous seven days, including categories like health and well-being."
"Concerns are rising about the mental health risks posed by AI chatbots, especially for younger users, leading to increased pressure on tech companies to ensure safety."
"Families of victims from a shooting in Tumbler Ridge filed a lawsuit against OpenAI, claiming it failed to act on disturbing content shared by the shooter with ChatGPT."
Meta has introduced supervision tools that let parents see the topics their children have discussed with its AI chatbots over the past week. The move comes amid growing concern about the mental health risks these chatbots pose to young users; some provinces, including Manitoba, are considering banning youth access to the technology. Lawsuits have also been filed against AI companies, alleging negligence in failing to act on harmful content shared by users. Meta is additionally developing alerts to notify parents when a teen discusses suicide or self-harm.
Read at www.cbc.ca