Using AI to code does not mean your code is more secure
Briefly

"The Georgia Tech researchers started their measurements on May 1, 2025, and as of March 20, 2026, the CVE scorecard reads: 49 for Claude Code (11 critical), 15 for GitHub Copilot (2 critical), 2 for Aether, 2 for Google Jules (1 critical), 2 for Devin, 2 for Cursor, 1 for Atlassian Rovo, and 1 for Roo Code."
"Those 74 cases are confirmed instances where we found clear evidence that AI-generated code contributed to the vulnerability. That does not mean the other ~50,000 cases were human-written. It means we could not detect AI involvement in those cases."
AI coding tools are increasingly linked to security vulnerabilities. Researchers at Georgia Tech's SSLab tracked CVEs attributable to AI-generated code and found 74 confirmed cases among 43,849 advisories. Claude Code accounted for 49 of those CVEs, 11 of them critical. The concentration tracks Claude Code's popularity: the tool has generated over 30.7 billion lines of code recently. The researchers caution that the 74 confirmed CVEs are a lower bound, since the remaining cases lacked clear evidence of AI involvement rather than being proven human-written.
Read at The Register