AI 'slop security reports' are driving open source maintainers mad
Briefly

Seth Larson, the Python Software Foundation's Security Developer-in-Residence, writes: 'Recently I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open source projects.' The issue is that, in the age of LLMs, these reports appear at first glance to be potentially legitimate and thus require time to refute.
This issue is tough to tackle because it's distributed across thousands of open source projects, and, because of the security-sensitive nature of the reports, open source maintainers are discouraged from sharing their experiences or asking for help.
Larson wants to see platforms add systems to prevent the automated or abusive creation of security reports, and to allow reports to be made public without publishing a vulnerability record - essentially letting maintainers name and shame offenders.
If you receive a report that you suspect is AI- or LLM-generated, reply with a short response and close the report: 'I suspect this report is AI-generated/incorrect/spam. Please respond with more justification for this.'
Read at ITPro