The biggest AI companies agree to crack down on child abuse images
Briefly

Tech giants commit to reviewing AI training data for child sexual abuse material (CSAM) and removing it. They pledge to avoid CSAM in datasets, stress-test AI models, and evaluate for child safety before release.
Anti-child abuse nonprofit Thorn warns that AI-generated CSAM can hinder victim identification, increase demand, and enable new methods of victimization. Google is increasing support for NCMEC initiatives to combat child abuse.
Read at The Verge