"Flux, accessible through Grok, is an excellent text-to-image generator, but it is also really good at creating fake photographs of real locations and people, and sending them right to Twitter," wrote frequent AI commentator Ethan Mollick on X. "Does anyone know if they are watermarking these in any way? It would be a good idea."
According to The Verge's testing, Grok produced images depicting political figures in compromising situations, copyrighted characters in inappropriate contexts, and scenes of violence when prompted. The outlet found that while Grok claims to have certain limitations, such as avoiding pornographic or excessively violent content, these rules seem inconsistent in practice.
Unlike other major AI image generators, Grok does not appear to refuse prompts involving real people or add identifying watermarks to its outputs. Given what people are generating so far—including images of Donald Trump and Kamala Harris kissing, or giving a thumbs-up while flying toward the Twin Towers in an apparent 9/11 attack—the unrestricted outputs may not last for long.
But then again, Elon Musk has made a big deal out of "freedom of speech" on his platform, so perhaps the capability will remain—at least until someone files a defamation or copyright suit. The use of Grok's image generator to shock raises a question that is by now an old one in AI: Should misuse of an AI image generator be the responsibility of the platform or the user?