
Ofcom said: "X has said it's implemented measures to prevent the Grok account from being used to create intimate images of people. This is a welcome development. However, our formal investigation remains ongoing. We are working around the clock to progress this and get answers into what went wrong and what's being done to fix it."
X said: "We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content. We take action to remove high-priority violative content, including child sexual abuse material (CSAM) and non-consensual nudity, taking appropriate action against accounts that violate our X Rules. We also report accounts seeking child sexual exploitation materials to law enforcement authorities as necessary."
Ofcom opened a formal investigation after reports that Grok, X's AI chatbot, was being used to digitally undress and sexualize real people, including women and children. Ofcom first contacted X on January 5 and launched the probe a week later to assess X's compliance with the Online Safety Act. In response, X implemented technological measures to prevent Grok from editing images of real people and applied a geoblock, where restricted by law, on generating images of people in bikinis, underwear, or similarly revealing clothing (a feature known internally as "spicy mode"). X affirmed zero tolerance for child sexual exploitation and non-consensual nudity, saying it removes violative content and reports serious cases to law enforcement. Ofcom welcomed the measures but kept the investigation open to determine what went wrong and what fixes are required.
Read at The Register