Jesse Van Rootselaar's chats describing gun violence were flagged by tools that monitor the company's LLM for misuse, and the account was banned in June 2025. Staff at the company debated whether to alert Canadian law enforcement over the behavior but ultimately did not, according to the Wall Street Journal. An OpenAI spokesperson said Van Rootselaar's activity did not meet the criteria for reporting to law enforcement at the time; the company contacted Canadian authorities only after the incident.
The AI company said that while its large language models (LLMs) refused the threat actor's direct requests to produce malicious content, the attacker worked around the restriction by requesting building-block code, which was then assembled into working attack workflows. Some of the resulting output included code for obfuscation, clipboard monitoring, and basic utilities to exfiltrate data through a Telegram bot. It's worth pointing out that none of these outputs is inherently malicious in isolation.
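For context, the Telegram bot mechanism the report describes is an ordinary HTTP API. A minimal sketch below (BOT_TOKEN and CHAT_ID are hypothetical placeholders, not details from the report) shows the basic sendMessage call, the same building block that powers countless legitimate notification and monitoring scripts, which is why such a utility is benign on its own:

```python
import os
import requests

# Hypothetical placeholders -- a real token is issued by Telegram's @BotFather.
BOT_TOKEN = os.environ["BOT_TOKEN"]
CHAT_ID = os.environ["CHAT_ID"]

def send_message(text: str) -> None:
    """Post a text message to a chat via the documented Telegram Bot API."""
    resp = requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # A routine, legitimate use: alerting a chat when a long job finishes.
    send_message("Nightly build completed")
```

The dual-use problem the company describes lies not in any single component like this one, but in how innocuous pieces are chained together.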