We're unprepared for the threat GenAI on Instagram, Facebook, and WhatsApp poses to kids
Briefly

Waves of Child Sexual Abuse Material (CSAM) are flooding social media, and a rise in AI-generated CSAM is straining the law enforcement agencies and institutions working to combat it.
NCMEC received 36 million reports of suspected CSAM, 85% of which originated from Meta platforms such as Facebook, Instagram, and WhatsApp, underscoring how heavily the problem is concentrated on social media.
AI-generated CSAM (AIG-CSAM) poses a new challenge: offenders use generative AI tools to create illicit content, complicating detection and accelerating the spread of harmful material online.
In response, NCMEC added a 'Generative AI' field to its reporting system, but the influx of AI-generated CSAM reports lacking metadata is hindering effective action.
Read at Fast Company