Crims laud Claude, use Anthropic's AI to plant ransomware
Briefly

AI tools are now commonly used to commit cybercrime and to facilitate remote-worker fraud. Reactive measures such as account bans amount to a game of Whack-a-Mole that has failed to curb abuses on major online platforms. Custom machine-learning classifiers aim to catch specific attack patterns, but defensive measures of this kind encourage attackers to adapt. One successful prevention stopped a sophisticated North Korean threat actor from establishing operations tied to the "Contagious Interview" campaign. Most of the interventions cited were responses rather than prevention, including the disruption of GTG-2002, a group that used Claude Code for scaled data extortion across 17 organizations, with ransom demands of between $75,000 and $500,000 in Bitcoin.
By disclosing these abuses in a 25-page report [PDF], the biz aims to reassure the public and private sector that it can mitigate the harmful use of its technology with "sophisticated safety and security measures." After all, who wants to be regulated as a dangerous weapon? Yet these measures, specifically account bans, amount to the same ineffective game of cybersecurity Whack-a-Mole that has failed to curb abuses at Google, Meta, or any number of other large online platforms.
The company is developing custom machine-learning classifiers to catch specific attack patterns, which sounds more promising. However, defensive measures of this sort simply encourage attackers to adapt. Anthropic mentions only one successful instance of prevention in its report. "We successfully prevented a sophisticated North Korean [DPRK] threat actor from establishing operations on our platform through automated safety measures," the company claims. The operation was part of the DPRK's "Contagious Interview" campaign, which attempts to dupe software developers into downloading malware-laden coding assessments.
Read at The Register