
"eSafety has not identified CSEM to be as readily accessible on any other mainstream service. We are concerned that apparently innocuous hashtags appear to be coopted to advertise CSEM, particularly when used together. The fact that some of these terms have innocuous uses means users are likely to be inadvertently exposed to CSEM despite seeking to use the X service in a legitimate manner."
"The eSafety commissioner wrote to X in January after its chatbot, Grok, was used to generate sexualised images of women and children online, which the prime minister, Anthony Albanese, described as abhorrent. eSafety found that while action by X to tackle bot accounts in October 2025 had reduced use of some previously commonly used hashtags and terms to advertise CSEM, eSafety found hashtags to advertise the material still prevalent."
Australia's eSafety regulator issued a formal warning to X after its Grok chatbot was used to generate sexualised images of women and children. The regulator's correspondence said child sexual exploitation material (CSEM) is systemic on X and more readily accessible there than on any other mainstream platform. Although X acted against bot accounts in October 2025, reducing the use of some hashtags previously common in advertising CSEM, the regulator found that innocuous hashtags were still being coopted to advertise such material, particularly when used in combination. As a result, users searching X legitimately risk inadvertent exposure to CSEM. The regulator also noted that Grok had generated terrorist content posted on the platform, and indicated it would consider issuing removal notices to X over images depicting people being undressed.
#child-sexual-exploitation-material #x-platform-safety #grok-chatbot #online-regulation #content-moderation
Read at www.theguardian.com