
"the ban would not apply to AI systems with effective safety measures preventing users from creating such images. As Bloomberg noted, the ban would radically shift the EU's approach to regulating explicit deepfakes, moving beyond just prosecuting users to also punishing platforms."
"Recent shocking reports of AI-powered nudity applications, such as Grok on X, but also other tools that are freely available online, highlight an increase in AI-driven tools that allow users to generate manipulated intimate images of individuals without their consent, facilitating gender-based cyberviolence and the creation of child sexual abuse material."
"individual perpetrators, who can often be punished under national criminal law, are often hard to find. A more proactive plan, lawmakers suggested, would be to prevent widespread image-based sexual violence from the outset."
EU regulators are introducing a ban on AI systems that create or manipulate sexually explicit images resembling identifiable people without consent. The policy represents a significant regulatory shift, targeting platforms rather than individual users. Platforms with effective safety measures preventing such image creation would be exempt. The Grok scandal prompted this approach, as regulators recognized that prosecuting individual users proves ineffective when perpetrators are difficult to identify. Lawmakers emphasized that banning these systems from the EU market prevents widespread image-based sexual violence and child sexual abuse material creation more effectively than pursuing individual offenders through national criminal law.
#ai-regulation #non-consensual-deepfakes #platform-accountability #sexual-abuse-prevention #eu-policy
Read at Ars Technica