Researchers from the Netherlands and Iran developed an AI tool capable of scanning and patching vulnerabilities in large code repositories like GitHub. In tests focused on a longstanding Node.js vulnerability, the tool identified over 1,700 risky projects, patching 63 so far. While promising for enhancing open source security, there are concerns about introducing new bugs and the challenge of eradicating ingrained vulnerabilities from existing AI models, as they often replicate poor coding patterns present in their training data.
The researchers say that one lesson is that popular vulnerable code patterns need to be eradicated not only from open-source projects and developers' resources, but also from LLMs, which can be a very challenging task.
When tested by scanning GitHub for a particular path traversal vulnerability that has circulated in Node.js projects since 2010, the tool identified 1,756 vulnerable projects.
While automated patching by a large language model (LLM) dramatically improves scalability, the generated patches may themselves introduce new bugs.
The tool opens the possibility for genAI platforms like ChatGPT to automatically create and distribute patches in code repositories, dramatically increasing the security of open source applications.
#ai-security #vulnerability-patching #open-source-repositories #code-quality #automated-software-testing