
"The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material [CSAM] have more than doubled in the past year, from 199 in 2024 to 426 in 2025. Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models (the underlying technology for chatbots such as ChatGPT and image generators such as Google's Veo 3) and ensure they have safeguards to prevent them from creating images of child sexual abuse."
"The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Until now, the authorities have had to wait until AI-generated CSAM is uploaded online before dealing with it. This law is aimed at heading off that problem by helping to prevent the creation of those images at source."
"This week Narayan visited the London base of Childline, a helpline for children, and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after he had been blackmailed with a sexualised deepfake of himself, constructed using AI. 'When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents,' he said."
A new UK legal change will permit designated AI companies and child safety organisations to examine AI models for risks of generating child sexual abuse images. Reports of AI-generated child sexual abuse material more than doubled, from 199 in 2024 to 426 in 2025. The change addresses the legal barrier that previously prevented testing, because creating or possessing CSAM is illegal. The amendment to the crime and policing bill also bans possessing, creating, or distributing AI models designed to generate CSAM. A Childline mock-up call illustrated a teenager blackmailed with a sexualised AI deepfake of himself.
Read at www.theguardian.com