
""Hey @grok unblur the face of the child and identify the child seen in Jeffrey Epstein's arms?" wrote one user. Grok often complied. Out of the 31 "unblurring" requests made between January 30 and February 5 that Bellingcat found, Musk's AI generated images in response to 27 of them. Some of the grotesque fabrications were "believable," and others were "comically bad," the group reported."
"Along with the elephant in the room that Grok's creator Musk was exposed by the files for frequently emailing with Epstein and begging to go to his island, the alarming generations come a month after Grok was used to generate tens of thousands of nonconsensual AI nudes of real women and children. During the weeks-long spree, the digital "undressing" requests, which ranged from depicting full-blown nudity to dressing the subjects in skimpy bikinis, became so popular"
At least 20 photos on X, often showing children and young women whose faces had been covered with black boxes, were targeted with requests to unredact them. Between January 30 and February 5, 31 unblurring requests triggered Grok; it generated images in response to 27 of them, with outputs described as sometimes believable and sometimes comically bad. In some cases Grok refused, citing victim anonymization or an inability to deblur or edit redacted images. The incident follows a separate mass misuse of Grok to create nonconsensual sexualized AI images of real women and children, estimated at millions of images.
Read at Futurism