X Safety says it is working to remove the inappropriate content and take action on violations across the platform. Even so, Grok can generate problematic outputs, such as images of celebrities appearing partially nude. While Grok's 'spicy' mode does not consistently produce offensive content, it has in some instances defaulted to inappropriate images, and as enforcement of the Take It Down Act begins, xAI could face legal consequences if these issues with Grok's outputs are not addressed promptly.
Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.
Weatherbed noted that asking Grok directly to generate non-consensual nude images of Swift did not produce offensive outputs, but instead blank boxes.
The 'spicy' mode didn't always generate Swift deepfakes, but in 'several' instances it 'defaulted' to 'ripping off' Swift's clothes.
With enforcement of the Take It Down Act starting next year, requiring platforms to promptly remove non-consensual sexual images, xAI could face legal consequences if Grok's outputs aren't corrected.