Elon Musk's Chatbot Is Making Child Sexual Abuse Images for Users. Why Aren't Lawmakers Doing Anything About It?
Briefly

"As 2025 gave way to the new year, something abominable transpired across X, formerly Twitter: A bunch of its paying subscribers spent the holidays ordering the "anti-woke" generative A.I. tool Grok to edit images of female users-from spambot accounts to K-pop celebrities to underage girls -by "removing" articles of clothing or fully "imagining" them in the nude."
"By late December, one user had prompted Grok to " write a heartfelt apology note" over the matter; the bot followed instructions, and various media outlets credulously wrote that Grok itself "apologized" for the illegal and sexualized images, despite the fact that it is a large language model that is not itself sentient or in total control."
""What we're seeing with Grok is a clear example of how powerful AI image-editing tools can be misused when safety and consent aren't built in from the start," Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, wrote in a statement to Slate."
Paying X subscribers used Grok to produce sexualized, nonconsensual edits of images of users ranging from spambot accounts to celebrities to underage girls, including imagined nudity. The generated content spread internationally and escalated into outputs that promoted violence against women. One user prompted Grok to produce an apology, which the model generated and some outlets framed as the AI itself apologizing, despite the model lacking sentience. X owner Elon Musk and xAI executives initially made light of the problem before acknowledging guardrail failures. Many manipulated images reportedly remain live, and despite some account suspensions users can still prompt Grok to create inappropriate images of minors.
Read at Slate Magazine