Hackers exploited Anthropic's Claude chatbot to carry out large-scale extortion targeting at least 17 companies, run a fraudulent employment scheme, and sell AI-generated ransomware. Attackers with little technical knowledge used Claude's coding features to identify vulnerable firms, generate tailored malware, organize stolen data, and craft ransom demands quickly and automatically. Extortion demands reached up to $500,000. Anthropic's internal team detected the operation, suspended accounts, tightened safety filters, and shared defensive best practices. Small businesses should strengthen cyber hygiene, enable multi-factor authentication, consult cybersecurity professionals for audits, and monitor emerging AI threats.
Hackers recently exploited Anthropic's Claude AI chatbot to orchestrate "large-scale" extortion operations, a fraudulent employment scheme, and the sale of AI-generated ransomware, targeting and extorting at least 17 companies, the company said in a report. The report details how hackers with little to no technical knowledge manipulated the chatbot to identify vulnerable companies, generate tailored malware, organize stolen data, and craft ransom demands with speed and automation.
Anthropic's internal team detected the hackers' operation, observing the use of Claude's coding features to pinpoint victims and build malicious software with simple prompts, a process termed "vibe hacking," a play on "vibe coding," the practice of using AI to write code from prompts in plain English. Upon detection, Anthropic said it responded by suspending accounts, tightening safety filters, and sharing best practices to help organizations defend against emerging AI-driven threats.
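To make the "vibe coding" idea concrete, the sketch below shows the legitimate version of the workflow: a plain-English request sent to Claude, which returns working code. It is a minimal illustration assuming the official anthropic Python SDK; the model ID and the prompt are placeholders, not details from Anthropic's report. "Vibe hacking" is the same workflow aimed at malicious ends, which is why the safety filters Anthropic tightened sit in this request path.

```python
# Illustrative sketch of "vibe coding": a plain-English prompt asking a model
# to write code. Assumes the anthropic Python SDK is installed and an API key
# is set; the model ID below is a placeholder, not taken from the report.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model ID
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Write a Python function that reads a CSV file and "
                "returns its rows as a list of dictionaries."
            ),
        }
    ],
)

# The response is a list of content blocks; the first block holds the
# generated code as text.
print(message.content[0].text)
```

The point of the example is how low the barrier is: no knowledge of the generated code is required to request it, which is what made the technique usable by attackers with little technical skill.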