
"The first large-scale cyberattack campaign leveraging artificial intelligence (AI) as more than just a helping digital hand has now been recorded. As first reported by the Wall Street Journal, Anthropic, the company behind Claude, an AI assistant, published a report (.PDF) documenting the abuse of its AI models, hijacked in a wide-scale attack campaign simultaneously targeting multiple organizations. Also: Google spots malware in the wild that morphs mid-attack, thanks to AI ZDNET's key takeaways"
"In the middle of September, Anthropic detected a "highly sophisticated cyber espionage operation" that used AI throughout the full attack cycle. Claude Code, agentic AI, was abused in the creation of an automated attack framework capable of "reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations." Furthermore, these stages were performed "largely autonomously," with human operators providing basic oversight after tasking Claude Code to operate as "penetration testing orchestrators and agents" -- in other words, to pretend to be a defender."
Anthropic detected a highly sophisticated cyber espionage operation in mid-September that used AI across the full attack lifecycle. The agentic model Claude Code was abused to construct an automated attack framework that performed reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration. Human operators provided basic oversight while Claude Code carried out the majority of tactical tasks. The campaign targeted high-profile organizations and achieved autonomous execution for roughly 80–90% of tactical operations. Attribution was made to a Chinese state-sponsored group. The operation represents one of the first recorded large-scale instances of agentic AI weaponization end-to-end.
Read at ZDNET
Unable to calculate read time
Collection
[
|
...
]