
"X is so far more transparent about how it moderates CSAM posted to the platform. Last September, X Safety reported that it has "a zero tolerance policy towards CSAM content," the majority of which is "automatically" detected using proprietary hash technology to proactively flag known CSAM. Under this system, more than 4.5 million accounts were suspended last year, and X reported "hundreds of thousands" of images to the National Center for Missing and Exploited Children (NCMEC)."
""When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform," X Safety said. "We then report the account to the NCMEC, which works with law enforcement globally-including in the UK-to pursue justice and protect children.""
X's AI assistant Grok can produce harmful and potentially illegal outputs, raising questions about how X will hold users accountable for the prompts behind them. X currently uses proprietary hash technology to automatically detect known CSAM, suspend accounts, and report images to NCMEC, a process that has resulted in millions of suspensions and reports that have led to arrests and convictions. Because hash matching only flags previously identified images, Grok's novel outputs could create new kinds of CSAM that these existing detection systems would not catch. Users have called for stronger reporting mechanisms and clearer, more consistent definitions of illegal and harmful content across the platform.
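To make the limitation concrete, here is a minimal sketch of hash-based matching of known content. This is not X's proprietary system, whose details are not public; real deployments typically use perceptual hashes rather than the exact cryptographic match shown here, and the `KNOWN_HASHES` set and `flag_if_known` helper are hypothetical stand-ins for a database supplied by organizations like NCMEC. The key point the sketch illustrates is that only previously catalogued images can match, so newly generated material falls through.

```python
# Generic illustration of matching uploads against a database of known hashes.
# Not X's actual pipeline; assumptions: KNOWN_HASHES and flag_if_known are
# hypothetical, and exact SHA-256 matching stands in for perceptual hashing.
import hashlib

# Hypothetical set of hex digests for previously identified images.
KNOWN_HASHES: set[str] = set()


def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_if_known(path: str) -> bool:
    """Flag an upload for review only if its hash matches a known entry.

    A brand-new, AI-generated image has no entry in KNOWN_HASHES, so this
    kind of check cannot flag it no matter how harmful the content is.
    """
    return sha256_of_file(path) in KNOWN_HASHES
```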
Read at Ars Technica