Anthropic's Mythos breach was humiliating
Briefly

"Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After weeks of insisting the AI model is too dangerous to release publicly, it appears the model fell into the wrong hands."
"A small group of unauthorized users has had access to Mythos since the day Anthropic announced plans to offer it to select companies for testing. Anthropic says it is investigating."
"The breach is embarrassingly unsophisticated. The group accessed Mythos by making an educated guess about the model's online location, using information from a previous breach."
"Security vulnerabilities are inevitable, and it was Mercor, not Anthropic, that revealed the information the hackers used to guess Mythos' location."
Anthropic's AI model, Claude Mythos, was reportedly accessed by unauthorized users despite the company's claims about its dangerous capabilities. The group accessed the model by making an educated guess about its online location, drawing on information leaked in a previous breach at Mercor, a company involved in AI training data. Anthropic is currently investigating. The incident undercuts the company's stated commitment to AI safety and cybersecurity.
Read at The Verge