Children's Toys Are Shipping With Adult AI Inside Them
Briefly

"When PIRG tested the sign-up process for OpenAI, Google, Meta, and xAI, the providers asked 'no substantive vetting questions,' requiring only basic information like an email address and a credit card number. Only Anthropic asked how the testers intended to use its models, or if the product they planned to build was intended for minors."
"An AI teddy bear from the company FoloToy ignited a storm of controversy last November after the group found that it would have wildly inappropriate conversations with kids, including detailed instructions on how to light a fire, advice on where to find pills, and in-depth discussions of sexual fetishes like teacher-student roleplay."
"'I was pretty surprised that they collected as little information as they did up front,' report coauthor RJ Cross, director of PIRG's Our Online Life Program, said. 'If I were an AI company, I would at least want to have in my fingers a list of everyone who's said that they want to make a product for kids.'"
A US PIRG Education Fund report reveals that major AI companies, including OpenAI, Google, Meta, and xAI, perform inadequate screening of the developers who gain access to their AI models. The investigation found that signup processes require only basic information like email addresses and credit card numbers, with minimal questions about intended use cases. Only Anthropic asked developers about their plans and whether their products targeted minors. PIRG researchers demonstrated the vulnerability by creating an AI-powered teddy bear chatbot on three platforms in under 15 minutes each. This lax oversight follows a previous incident in which an AI teddy bear from FoloToy served inappropriate content to children, including dangerous instructions and sexual discussions. The report highlights a critical gap in the safeguards protecting children from adult-oriented AI systems.
Read at Futurism