
Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it is facing a fresh set of issues. Earlier this year, internal documents obtained by Reuters revealed that Meta's AI chatbot could, under official company guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness.
For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as "the 26 words that made the internet." The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts, akin to telephone companies, rather than as publishers. Courts have long reinforced this protection.
Meta's internal guidelines previously allowed its AI chatbot to engage in "romantic or sensual" conversations with children and to comment on their attractiveness. Meta removed those examples, called them erroneous, and said it has added guardrails: training its AIs not to engage with teens on these topics, guiding teens to expert resources, and limiting teen access to a select group of AI characters. OpenAI and Character.AI face lawsuits alleging their chatbots encouraged minors to take their own lives; both companies deny the claims and have added parental controls. Section 230 has historically shielded platforms from liability for user content, but legal challenges are emerging as AI-generated content blurs the question of authorship.
Read at Fortune