
"The consumers advocacy group Public Interest Research Group releases a report detailing how children are at risk around a few AI-powered toys. Based on the annual document Trouble in Toyland, the researchers tested AI toys currently on the market and their chatbots that use different language models and artificial intelligence systems, including OpenAI's. The four tested smart home devices are Kumma, an AI teddy bear made by the Singaporean startup FoloToy; Grok, a rocket-shaped toy made by Silicon Valley-based Curio;"
"During the tests, they discovered issues that put the children at risk, including their privacy. In the end, the team could only test three toys because Robot MINI did not work, as it could not keep a stable internet connection. This already showed an early issue: some of the AI toys may be faulty or not function as promised, which can lead to them being easily accessed by anonymous users."
Researchers tested AI-powered toys whose chatbots use large language models, including versions of OpenAI's systems. The devices included Kumma (FoloToy), Grok (Curio), Robot MINI (Little Learners), and Miko 3 (Miko). Robot MINI could not keep a stable internet connection and could not be fully tested, a reliability issue that can also permit unauthorized access. Miko 3 recorded voice inputs when conversation mode was activated; Grok used a wake word and recorded roughly ten seconds after speech; Kumma listened continuously and sometimes joined unrelated conversations. Persistent listening and stored voice recordings pose privacy and safety risks, because recordings can be used to create fake copies of a child's voice or otherwise be misused.
Read at designboom | architecture & design magazine