xAI's Grok made shared conversations publicly accessible: the share function generated unique URLs that search engines could crawl. More than 370,000 chats have been indexed by Google after users clicked share links. The conversations ranged from benign requests to sensitive personal data, uploaded files, explicit instructions for producing fentanyl, methamphetamine, explosives, and malware, and methods of suicide. Some chats included passwords, names, medical and psychological questions, and a plot to attack Elon Musk. Users received no prior warning; affected users and researchers discovered the exposure on their own. The incident followed a similar exposure of discoverable ChatGPT conversations, which led to that feature being withdrawn immediately.
When users clicked the share button, Grok created a unique URL so the conversation could be passed on via email or messaging apps. What xAI did not mention is that these links were also automatically accessible to search engines. As a result, conversations that users intended to share with only a small circle of people were published on Grok's website and became publicly searchable.
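The exposure hinges on the fact that a publicly reachable URL is indexable by default unless the page tells crawlers otherwise. The following is a minimal sketch, not xAI's actual implementation, of how a share endpoint could mark generated pages as non-indexable; the Flask framework, the route name, and the in-memory token store are illustrative assumptions.

```python
# Hypothetical sketch: serving a shared conversation with a noindex directive
# so search engines skip the page even though its URL is publicly reachable.
from flask import Flask, abort, make_response

app = Flask(__name__)

# Illustrative in-memory store mapping share tokens to conversation text.
SHARED_CONVERSATIONS = {
    "abc123": "User: ...\nGrok: ...",
}

@app.route("/share/<token>")
def shared_conversation(token):
    conversation = SHARED_CONVERSATIONS.get(token)
    if conversation is None:
        abort(404)
    response = make_response(conversation)
    # Without a directive like this, any crawler that discovers the URL
    # (for example via a publicly posted link) may index the page.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response

if __name__ == "__main__":
    app.run()
```

The same effect can be achieved with a robots meta tag in the page's HTML or a robots.txt disallow rule; the point is that some explicit signal is needed, and according to the reporting, Grok's share pages carried none.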
Many of the exposed chats contained sensitive information such as medical and psychological questions, personal details, names, and even passwords. Files uploaded via Grok, such as spreadsheets and images, were also accessible. More seriously, in some cases Grok provided detailed instructions for producing fentanyl, methamphetamine, and explosives, as well as for creating malware, and described methods of suicide. In one conversation, a plan to attack Elon Musk was even drawn up.