Research from Apiiro highlights that generative AI coding tools have significantly increased software development speed while multiplying security vulnerabilities. Since the launch of ChatGPT in late 2022, usage of coding assistants like GitHub Copilot has surged, driving a notable rise in pull requests. This rapid acceleration, however, has raised serious security concerns, particularly around the exposure of sensitive data and APIs, making application security a priority as organizations adopt AI-driven workflows.
Generative AI tools have supercharged coding velocity while putting sensitive data like Personally Identifiable Information (PII) and payment details at significant risk.
Since OpenAI introduced ChatGPT in late 2022, generative AI tools have become mainstream in software engineering, leading to a 70% surge in pull requests.
Microsoft reports that 150 million developers now use its coding assistant, GitHub Copilot, marking a 50% increase over the past two years.
The impressive coding velocity enabled by AI tools comes at a serious cost: the growing volume of insecure AI-generated code is increasing organizational risk.