AI can now hunt software bugs on its own. Anthropic is turning that into a security tool. | Fortune

"Now, instead of just scanning code for known problem patterns, Claude Code for Security can review entire codebases, more like a human expert would, looking at how different pieces of software interact and how data moves through a system. The AI double-checks its own findings, rates how severe each issue is, and suggests fixes. But while the system can investigate code on its own, it does not apply fixes automatically, which could be dangerous in its own right; developers must review and approve every change."
"Anthropic's new Opus 4.6 model has significantly improved at finding new, high-severity vulnerabilities (software flaws that allow attackers to break into systems without permission, steal sensitive data, or disrupt critical services) across vast amounts of code. In fact, in testing open source software that runs across enterprise systems and in critical infrastructure, Opus 4.6 found some of these vulnerabilities that had gone undetected for decades, and was able to do so without task-specific tooling, custom scaffolding, or specialized prompting."
"Anthropic has introduced Claude Code Security, the company's first product aimed at using AI models to help security teams keep up with the flood of software bugs they're responsible for fixing. For large companies, unpatched software bugs are a leading cause of data breaches, outages, and regulatory headaches, while security teams are often overwhelmed by how much code they have to protect."
Claude Code Security is Anthropic's first product designed to help security teams manage large volumes of software bugs. The system reviews entire codebases, analyzes interactions between components, and traces data flow through systems. The AI double-checks its findings, assigns severity ratings, and suggests fixes while requiring developer review before changes are applied. The tool builds on over a year of work by the Frontier Red Team, an internal group that stress-tests advanced AI and probes misuse in cybersecurity. Tests using the Opus 4.6 model uncovered previously undetected, high-severity vulnerabilities across open source and critical-infrastructure software without specialized tooling.