UK seeks to curb AI child sex abuse imagery with tougher testing
Briefly

"ensure AI systems can be made safe at the source"
"AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,"
"Today's announcement could be a vital step to make sure AI products are safe before they are released."
"Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design."
The UK government will allow authorised testers from tech firms and child safety charities to proactively assess AI models for their ability to generate child sexual abuse material (CSAM) before release. An amendment to the Crime and Policing Bill creates a legal pathway for this pre-release testing by trusted organisations. The Internet Watch Foundation said instances of AI-generated CSAM have more than doubled, with 426 items removed between January and October 2025, up from 199 in the same period of 2024. Charity leaders welcomed the measure as a step toward safer AI, while the NSPCC urged that testing be made mandatory so that safeguarding against child sexual abuse is integral to product design.
Read at www.bbc.com