
"Over the past two years, the open source curl project has been flooded with bogus bug reports generated by AI models. The deluge prompted project maintainer Daniel Stenberg to publish several blog posts about the issue in an effort to convince bug bounty hunters to show some restraint and not waste contributors' time with invalid issues. Shoddy AI-generated bug reports have been a problem not just for curl, but also for the Python community, Open Collective, and the Mesa Project."
""In my view, this list of issues achieved with the help of AI tooling shows that AI can be used for good," he said in an email. "Powerful tools in the hand of a clever human is certainly a good combination. It always was! "I don't think it has changed my views much on AI, other than perhaps proving that there are some really good AI powered code analyzer tools."
Over the past two years, curl was flooded with bogus AI-generated bug reports that consumed contributors' time, and similar low-quality reports have hit Python, Open Collective, and the Mesa Project. The underlying problem is usually human misuse of AI rather than the technology itself. By contrast, security researcher Joshua Rogers used AI-powered scanning tools to identify dozens of valid issues in curl, leading to roughly 50 merged bugfixes. Many of the findings were minor static-analysis nits, while some were more substantial, including an out-of-bounds read in curl's Kerberos5 FTP code. Human-guided AI tooling can yield valuable, actionable results when applied skillfully.
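The article does not show the Kerberos5 FTP flaw itself, but as a rough, hypothetical C sketch of the bug class such scanners flag, consider a length field taken from untrusted input that drives a read of a fixed-size stack buffer without being validated first. The function names and buffer sizes below are invented for illustration and are not from curl.

```c
/* Hypothetical sketch (not curl's actual code) of an out-of-bounds
 * read: a length supplied by the remote peer is trusted when reading
 * from a fixed-size local buffer. */
#include <stdio.h>
#include <string.h>

#define REPLY_BUF 16

/* Buggy variant: trusts 'len' supplied by the remote peer. */
static size_t parse_reply_unsafe(const char *wire, size_t len, char *out)
{
  char buf[REPLY_BUF];
  memcpy(buf, wire, len < REPLY_BUF ? len : REPLY_BUF);
  memcpy(out, buf, len);    /* BUG: reads past 'buf' when len > REPLY_BUF */
  return len;
}

/* Fixed variant: clamp 'len' to the buffer size before any read. */
static size_t parse_reply_safe(const char *wire, size_t len, char *out)
{
  char buf[REPLY_BUF];
  if(len > REPLY_BUF)
    len = REPLY_BUF;        /* never read more than the buffer holds */
  memcpy(buf, wire, len);
  memcpy(out, buf, len);
  return len;
}

int main(void)
{
  char wire[32] = "NETWORK-REPLY-DATA";
  char out[32];
  size_t n = parse_reply_safe(wire, sizeof(wire), out);
  printf("copied %zu bytes\n", n);
  (void)parse_reply_unsafe; /* unsafe variant shown for contrast only */
  return 0;
}
```

Clamping or rejecting untrusted lengths before any read, as in the safe variant, is a typical remediation for this class of issue.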
Read at The Register