
AI-assisted developers produced three to four times more code than their unassisted peers, but also generated ten times more security issues. "Security issues" here doesn't mean exploitable vulnerabilities; rather, it covers a broad set of application risks, including added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations. As of June 2025, AI-generated code had introduced over 10,000 new "security findings" per month in Apiiro's repository data set, representing a 10x increase from December 2024, the biz said.
"AI is multiplying not one kind of vulnerability, but all of them at once," said Apiiro product manager Itay Nussbaum in a blog post. "The message for CEOs and boards is blunt: if you're mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you're scaling risk at the same pace you're scaling productivity."
Application security firm Apiiro analyzed code from tens of thousands of repositories and several thousand developers affiliated with Fortune 50 enterprises to assess the impact of AI code assistants, including Claude Code, GPT-5, and Gemini 2.5 Pro. AI-assisted developers produced three to four times more code but introduced roughly ten times more security issues, spanning added open source dependencies, insecure code patterns, exposed secrets, and cloud misconfigurations. By June 2025, AI-generated code was producing over 10,000 new security findings per month, a tenfold increase from December 2024. AI reduced syntax errors by 76% and logic bugs by 60%, yet increased privilege-related risks and generated larger, more disruptive pull requests that complicated reviews and caused silent failures.
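The report itself does not include code samples. As an illustration of one finding category it names, exposed secrets, here is a minimal sketch of the pattern: a credential hardcoded in source (the kind of thing a secrets scanner flags) versus reading it from the environment. All names here are hypothetical, not from Apiiro's data.

```python
import os

# Insecure pattern commonly flagged as an "exposed secret": a credential
# committed directly into source code (hypothetical key, for illustration).
API_KEY_INSECURE = "sk-live-0123456789abcdef"

def get_api_key() -> str:
    """Safer alternative: read the credential from the environment at runtime.

    SERVICE_API_KEY is a hypothetical variable name; the point is that the
    secret never appears in the repository.
    """
    key = os.environ.get("SERVICE_API_KEY")
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

Scanners catch the first pattern by matching key-like strings in committed files; the second keeps the secret out of version control entirely.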
Read at The Register