The IWF's interim chief executive, Derek Ray-Hill, said that AI-generated abusive imagery points to a troubling trend: the sophistication of these images suggests the tools used to create them were trained on images of real victims.
An IWF analyst said the growing prevalence of AI-generated child sexual abuse material has put authorities in a difficult position, making it hard to determine whether an image involves a real child in distress.
Between April and September of this year, the IWF acted on 74 reports of AI-generated CSAM, exceeding the 70 reports it received across the whole of the previous year.
AI's capability to produce highly realistic images depicting children in abusive situations has alarmed safety watchdogs and underlines the urgent need for stronger online protections.