
"The European Commission has condemned the reported spread of explicit, child-like content on social media platform X, calling the material appalling and disgusting. European Union digital affairs spokesman Thomas Regnier made the comments to reporters on Monday following weeks of complaints over a new feature on X's integrated AI chatbot Grok used to generate pornographic content, including depicting children."
"In late December, a novel edit image feature on Grok allowed users to modify any image on the platform. Some users decided to ask Grok to partially or completely remove clothing from women or children in pictures. Grok complied with numerous requests from users to alter photographs of women to depict them in revealing outfits, such as translucent bikinis. Grok on Friday admitted lapses in safeguards and said it's urgently fixing them."
"CSAM [Child Sexual Abuse Material] is illegal and prohibited, it said in a post. But AI safety experts said the platform ignored months of warnings that such abuse was imminent. In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponised, said Tyler Johnston, executive director of AI watchdog group The Midas Project. That's basically what's played out."
European officials said the content is illegal and unacceptable in Europe, as investigators in Paris expanded a probe to include allegations that Grok was used to create and distribute child pornography.
Read at www.aljazeera.com