Can AI image generators be policed to prevent explicit deepfakes of children?
Briefly

Child abusers exploit AI-generated deepfakes to blackmail victims into producing real abuse imagery, perpetuating a vicious cycle of sextortion that global policing agreements have so far failed to check.
AI systems are trained on vast datasets such as Laion-5B, which may inadvertently contain illegal material including child sexual abuse imagery, highlighting how difficult it is to purge illicit content from training data sources.
Read at www.theguardian.com