
Anthropic announced on Thursday that Chinese state-backed hackers used the company's AI model Claude to automate roughly 30 attacks on corporations and governments during a September campaign, according to reporting from the Wall Street Journal. Anthropic said that 80% to 90% of the attack was automated with AI, a level higher than in previous hacks. It occurred "literally with the click of a button, and then with minimal human interaction," Anthropic's head of threat intelligence Jacob Klein told the Journal.
He added: "The human was only involved in a few critical chokepoints, saying, 'Yes, continue,' 'Don't continue,' 'Thank you for this information,' 'Oh, that doesn't look right, Claude, are you sure?'" AI-powered hacking is increasingly common, and so is the latest strategy of using AI to stitch together the various tasks necessary for a successful attack. Google spotted Russian hackers using large language models to generate commands for their malware, according to a company report released on November 5th.
For years, the US government has warned that China is using AI to steal data from American citizens and companies, which China has denied. Anthropic told the Journal that it is confident the hackers were sponsored by the Chinese government. In this campaign, the hackers stole sensitive data from four victims, but as with previous hacks, Anthropic did not disclose the names of the targets, successful or unsuccessful. The company did say that the US government was not a successful target.
Read at The Verge