Consumer safety groups are demanding an FTC investigation into Grok's 'Spicy' mode
Briefly

A letter to the FTC and state attorneys general demands an urgent investigation into Elon Musk's Grok, particularly its 'Imagine' tool, which can generate pornographic content. The tool's 'Spicy' mode has produced deepfake videos of celebrities without being explicitly prompted to do so, posing risks to individual privacy and safety. Although the tool does not currently accept user-uploaded photos, the possibility of future changes raises serious concerns about nonconsensual deepfake creation. The organizations behind the letter urge scrutiny of Grok's compliance with Non-Consensual Intimate Imagery (NCII) laws, warning of harmful consequences for those depicted.
The letter highlights the risk of Grok's "Imagine" tool producing NSFW content, including deepfake videos of celebrities, and the resulting threats to user safety.
If Grok were to allow user-uploaded photos without adequate moderation, the tool's current capabilities suggest it could be used to create nonconsensual deepfakes of real people.
The call for an investigation stems from concerns about the tool's capacity to create nude videos from AI-generated images, which is especially alarming with respect to minors and public figures.
The demand points to the broader issue of AI-generated content violating Non-Consensual Intimate Imagery laws, emphasizing the need for regulatory oversight.
Read at The Verge