OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks
Briefly

"The AI company said while its large language models (LLMs) refused the threat actor's direct requests to produce malicious content, they worked around the limitation by creating building-block code, which was then assembled to create the workflows. Some of the produced output involved code for obfuscation, clipboard monitoring, and basic utilities to exfiltrate data using a Telegram bot. It's worth pointing out that none of these outputs are inherently malicious on their own."
"The threat actor made a mix of high‑ and lower‑sophistication requests: many prompts required deep Windows-platform knowledge and iterative debugging, while others automated commodity tasks (such as mass password generation and scripted job applications)," OpenAI added. "The operator used a small number of ChatGPT accounts and iterated on the same code across conversations, a pattern consistent with ongoing development rather than occasional testing."
OpenAI disrupted three activity clusters that misused ChatGPT to facilitate malware development. One cluster involved a Russian‑language threat actor who used multiple ChatGPT accounts to prototype and refine a remote access trojan and a credential stealer, along with post‑exploitation components. The actor circumvented model refusals by requesting building‑block code that was then assembled into malicious workflows, yielding utilities for obfuscation, clipboard monitoring, and Telegram‑based data exfiltration. Requests ranged from deep Windows‑platform debugging to automated commodity tasks. A second cluster, traced to North Korea, overlapped with a campaign targeting South Korean diplomatic missions and used ChatGPT for malware and command‑and‑control (C2) development.
Read at The Hacker News