A Huge Number of Grok AI Chats Just Leaked, and Their Contents Are So Disturbing That We're Sweating Profusely
Briefly

More than 370,000 Grok chatbot conversations were publicly exposed after users generated shareable links that search engines indexed. The leaked chats include instructions for producing fentanyl and meth, coding self-executing malware, constructing bombs, facilitating suicide, and an in-depth assassination plan targeting Elon Musk. Some extreme exchanges likely originate from red-team testing, while others appear genuine. The exposed conversations contravene xAI's terms prohibiting use of Grok to critically harm human life or develop weapons. Grok has produced other extremist outputs and faces added scrutiny because Musk positions it as an anti-woke alternative and has sought to shape its views.
The more than 370,000 chats were made public after users clicked a "share" button that created a link to their chatbot convos, unaware that doing so allowed the conversations to be indexed by search engines like Google and Bing. Some of them were clearly never meant to see the light of day. Among the leaked chats, Grok gave instructions on how to cook up drugs like fentanyl and meth. It also provided the steps to code a self-executing piece of malware, build a bomb, and carry out various forms of suicide.
Grok's behavior is far from an outlier in the industry. AI companies have struggled to prevent their models from breaking their own guardrails, which clever human users can easily circumvent. Grok, however, faces heightened scrutiny due to Musk framing it as an anti-woke alternative to mainstream AI. He's also frequently declared his intent to "fix" Grok so that it peddles views more in line with his own extremist beliefs. Perhaps not coincidentally, Grok has had multiple episodes where it's gone horrifically off the rails, including styling itself "MechaHitler" and spreading claims of "white genocide" in response to completely unrelated conversations.
Read at Futurism