Telegram U-turns and joins child safety scheme
Telegram's recent partnership with the Internet Watch Foundation marks a significant pivot toward addressing child sexual abuse material.

Nonprofit scrubs illegal content from controversial AI training dataset
The LAION-5B dataset has been re-released as Re-LAION-5B, now cleaned of links to child sexual abuse material (CSAM).

Under new law, cops bust famous cartoonist for AI-generated child sex abuse images
California law bans possession or distribution of AI-generated child sex abuse material (CSAM), reflecting concerns over its inherent dangers to children.
Feds test whether existing laws can combat surge in fake AI child sex images
Federal efforts to prosecute AI-generated child exploitation cases face legal challenges that may slow progress against emerging risks.

The DOJ makes its first known arrest for AI-generated CSAM
CSAM generated by AI is illegal and punishable under the law.

AI Is Triggering a Child-Sex-Abuse Crisis
Generative AI is increasingly being used to create sexually explicit images of children, highlighting an emerging crisis in safety.

Texan man gets 30 years in prison for running CSAM exchange
Robert Shouse was sentenced to 30 years in prison for running a dark web forum for child sexual abuse material and personally abusing children.

US: Alaska man busted with 10,000+ child sex abuse images despite his many encrypted apps
Child sexual abuse material (CSAM) is increasingly being hidden using advanced tech by some offenders, raising concerns about evolving exploitation methods.
Apple sued over abandoning CSAM detection for iCloud | TechCrunch
Apple is being sued for not implementing a system to detect child sexual abuse material in iCloud, allegedly compounding victims' trauma.

Apple sued for failing to implement tools that would detect CSAM in iCloud
Apple is being sued for failing to implement iCloud scanning for child sexual abuse material, leading to harm for victims.

Apple hit with $1.2B lawsuit after killing controversial CSAM-detecting tool
Apple faces lawsuits from survivors of sexual abuse over its failure to effectively address child sexual abuse material on its platforms.
Checkmarked X Users Caught Promoting Sites That Sell Child Sex Abuse Videos
The verification system on X has proven ineffective in preventing the promotion of child sexual abuse material.

Bluesky ramps up content moderation as millions join the platform
Bluesky is increasing its content moderation team to address a rise in concerning user content amid rapid growth.
Child predators are using AI to create sexual images of their favorite stars: 'My body will never be mine again'
Predators on the dark web are increasingly using AI to create sexually explicit images of children, particularly fixating on 'star victims.'

Snap calls New Mexico's child safety complaint a 'sensationalist lawsuit'
Snap argues New Mexico's AG manipulated evidence to portray its app negatively. The company emphasizes its compliance with CSAM reporting laws. State officials highlight serious concerns about children's safety in their allegations.

The world's leading AI companies pledge to protect the safety of children online
Leading AI companies pledge to prevent the exploitation of children through generative AI, aiming to stop their tools from being used to create CSAM.
AI-generated child sexual abuse imagery reaching 'tipping point', says watchdog
AI-generated child sexual abuse imagery is increasingly prevalent online, with reports rising significantly in the past six months.

EU proposes criminalizing AI-generated child sexual abuse and deepfakes | TechCrunch
AI-generated child sexual abuse imagery could be criminalized in the EU as part of updated legislation. Livestreaming child sexual abuse and possession of pedophile manuals could also become criminal offenses.
X fails to avoid Australia child safety fine by arguing Twitter doesn't exist
X Corp's lack of transparency on CSAM led to civil penalties and possibly costly repercussions under Australian law.

Telegram's Durov must remain in France and post a €5M bail
Pavel Durov, founder of Telegram, faces serious criminal charges in France including money laundering and CSAM distribution.

Was an AI Image Generator Taken Down for Making Child Porn?
AI companies face scrutiny for enabling tools that facilitate the creation of child sexual abuse material, raising ethical and legal concerns.

We're unprepared for the threat GenAI on Instagram, Facebook, and WhatsApp poses to kids
Social media platforms are inundated with AI-generated child sexual abuse material (AIG-CSAM), creating challenges for law enforcement and anti-CSAM institutions.