AI is making cybercriminal workflows more efficient too, OpenAI finds
Briefly

"OpenAI has published research revealing how state-sponsored and cybercriminal groups are abusing artificial intelligence (AI) to spread malware and perform widespread surveillance. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) AI has benefits in the cybersecurity space; it can automate tedious and time-consuming tasks, freeing up human specialists to focus on complex projects and research, for example."
"With this in mind, since February 2024, OpenAI has issued public threat reports and has closely monitored the use of AI tools by threat actors. Since last year, OpenAI has disrupted over 40 malicious networks that have violated its usage policies, and an analysis of these networks is now complete, giving us a glimpse into the current trends of AI-related cybercrime."
State-sponsored and cybercriminal groups are using artificial intelligence to spread malware and conduct widespread surveillance, including reported attempts to use ChatGPT for surveillance. AI can automate tedious cybersecurity tasks and free human specialists for complex projects, but it also enables new malicious capabilities and abuse patterns. OpenAI has monitored threat actors' use of its AI tools since February 2024 and has disrupted over 40 malicious networks that violated its usage policies. An analysis of these disruptions reveals four major trends in AI-related cybercrime, including deeper integration of AI into existing workflows and rapid shifts in threat actors' tactics, techniques, and procedures (TTPs).
Read at ZDNET