
"The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products."
"Meta has been accused of allowing its AI chatbots to engage in inappropriate conversations with minors, with Meta even encouraging such, as it seeks to maximize its AI tools."
The FTC inquiry seeks information on whether companies have evaluated the safety of their companion chatbots, including measures to limit use by children and teens and to mitigate potential harms. It also asks what steps companies have taken to inform users and parents about product risks and what age-appropriate safeguards are in place. The probe targets whether companies conducted safety testing, implemented age restrictions or parental controls, and monitored interactions for harmful or exploitative content. Meta faces specific accusations of allowing its AI chatbots to engage in inappropriate conversations with minors, and of encouraging such interactions while seeking to maximize use of its AI tools. The inquiry will also examine corporate policies, safety practices, and transparency around risks.
Read at Social Media Today