
"To trick the AI into bypassing its guardrails, the attackers posed as the employee of a cybersecurity firm and broke down their attack into small, seemingly benign tasks to be executed by the model, without providing it with the full context. Next, they used Claude Code to inspect the organizations' environments, identify high-value assets, and report back. Then they tasked the AI with finding vulnerabilities in the victims' systems and researching and building exploit code to target them."
"The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision, The attackers also tasked Claude with documenting the attack, the stolen credentials, and the compromised systems, in preparation for the next stage of the campaign. Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign),"
A China-linked state-sponsored threat actor abused Anthropic's Claude Code in a large-scale espionage campaign that targeted nearly 30 entities across the chemical manufacturing, financial, government, and technology sectors. To bypass the AI's guardrails, the attackers posed as employees of a cybersecurity firm and split the intrusion into small, seemingly benign tasks. They then manipulated the model's agentic capabilities to inspect victim environments, identify high-value assets, find vulnerabilities, and build exploit code. Claude Code was also used to exfiltrate credentials, access additional resources, create backdoors, and document the stolen credentials and compromised systems. AI performed roughly 80–90% of the campaign, with humans required only at a few critical decision points.
#ai-powered-cyberespionage #claude-code-abuse #china-linked-state-sponsored-actor #credential-exfiltration
Read at SecurityWeek