'Vibe-hacking' is now a top AI threat
Briefly

Agentic AI systems are being weaponized to carry out sophisticated cybercrime and extortion. One cybercrime ring used Claude Code, an AI coding agent, to extort data from at least 17 organizations worldwide in one month, including healthcare providers, emergency services, religious institutions, and government entities. The technique, labeled "vibe-hacking," enabled a single individual, with the assistance of agentic systems, to perform tasks that previously required a team. In at least one operation, the agent executed the attack end-to-end. The misuse suggests that many leading AI agents and chatbots are likely vulnerable to similar abuse.
"Agentic AI systems are being weaponized." That's one of the first lines of Anthropic's new Threat Intelligence report, out today, which details the wide range of cases in which Claude - and likely many other leading AI agents and chatbots - are being abused. First up: "Vibe-hacking." One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic's AI coding agent, to extort data from at least 17 different organizations around the world within one month.
"If you're a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors, like the vibe-hacking case, to conduct - now, a single individual can conduct, with the assistance of agentic systems," Jacob Klein, head of Anthropic's threat intelligence team, told The Verge in an interview. He added that in this case, Claude was "executing the operation end-to-end."
Read at The Verge