
"Meta said it will introduce more guardrails to its artificial intelligence (AI) chatbots - including blocking them from talking to teens about suicide, self-harm and eating disorders. It comes two weeks after a US senator launched an investigation into the tech giant after notes in a leaked internal document suggested its AI products could have "sensual" chats with teenagers. The company described the notes in the document, obtained by Reuters, as erroneous and inconsistent with its policies which prohibit any content sexualising children."
""While further safety measures are welcome, robust safety testing should take place before products are put on the market - not retrospectively when harm has taken place," he said. "Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and Ofcom should stand ready to investigate if these updates fail to keep children safe." Meta said the updates to its AI systems are in progress. It already places users aged 13 to 18 into "teen accounts" on Facebook, Instagram and Messenger, with content and privacy settings which aim to give them a safer experience."
Meta will add guardrails to its AI chatbots to stop them discussing suicide, self-harm and eating disorders with teenagers, directing teens to expert resources instead. The move comes two weeks after a US senator launched an investigation into the company, prompted by notes in a leaked internal document suggesting its AI products could have "sensual" chats with teenagers. Meta described the notes as erroneous and inconsistent with its policies, which prohibit any content sexualising children. The company also plans to temporarily limit which chatbots teens can interact with while the safety updates are rolled out, and campaigners have urged robust pre-release safety testing and regulatory oversight if the protections fail.
Read at www.bbc.com