How AI slop is causing a crisis in computer science
Briefly

"Fifty-four seconds. That's how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. "Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier," wrote Wimmer, a researcher in human-computer action at the University of Regensburg in Germany, on Bluesky. Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process."
"Computer science was a growing field before the advent of LLMs, but it now at breaking point. The 2026 International Conference on Machine Learning (ICML) ihas received more than 24,000 submissions - more than double that of the 2025 meeting. One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December."
Large language models and AI agents can suggest hypotheses, write code and draft papers, significantly boosting researcher productivity but also enabling fabricated or low-quality outputs known as AI slop. AI tools can produce complete experimental write-ups in seconds, and submission volumes to conferences and preprint servers have surged. The 2026 ICML received over 24,000 submissions, more than double the previous year, and arXiv submissions rose by over 50% while monthly rejections increased fivefold to more than 2,400. The surge overwhelms traditional peer-review systems, makes thorough evaluation increasingly infeasible, and allows papers containing AI fabrications and hallucinations to slip through. One response is to employ AI itself to assist with peer review and screening.
Read at Nature