A recent Veracode report highlights critical security flaws in AI-generated code: while today's models reliably produce functional code, they introduced security vulnerabilities in 45% of the cases studied.
When offered both a secure and an insecure way to accomplish a coding task, GenAI models chose the insecure option 45% of the time, a troubling and consistent pattern.
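To make that secure-versus-insecure choice concrete, here is a minimal, hypothetical sketch (illustrative only, not drawn from the report) of one of the most common such decisions: building a SQL query by string interpolation, which is open to SQL injection, versus using a parameterized query.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure: user input is interpolated directly into the SQL string.
    # An input like "x' OR '1'='1" rewrites the query (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Secure: a parameterized query; the driver treats the input purely as data.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions behave identically on benign input, which is exactly why a model optimizing for functional correctness alone can land on either one.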
Despite steady advances in LLMs' ability to generate syntactically correct code, their security performance has not improved over time.
At the same time, AI is enabling attackers to identify and exploit security vulnerabilities more quickly and effectively, and AI-powered tools lower the barrier to entry for less-skilled attackers, putting growing pressure on traditional defenses.