Child safety org launches AI model trained on real child sex abuse images
Briefly

The new AI model, 'Predict', developed by Thorn and Hive, aims to detect previously unknown CSAM by training on real CSAM data, flagging harmful content before it is uploaded.
Rebecca Portnoff emphasized that partnering with Hive was a logical choice, given the company's extensive experience and the content-moderation models it already runs across many platforms.
Kevin Guo noted that extensive testing has been conducted to minimize false positives and false negatives, reflecting platforms' strong need for reliable detection tools.
Read at Ars Technica