
"Surveys and related studies have shown that researchers are increasingly using large language models (LLMs) to help to conduct literature searches, write manuscripts and format bibliographies. And sometimes, these models generate non-existent academic references."
"One analysis of nearly 18,000 papers accepted by three computer-science conferences found a sharp increase in references that cannot be traced to actual scholarly publications. The results, reported in January, indicated that 2.6% of papers in 2025 had at least one potentially hallucinated citation - up from about 0.3% in 2024."
"Another analysis, released in February, estimated that 2-6% of papers in four other 2025 computer-science conferences included references with rephrased titles or citations of publications that the authors couldn't verify by searching through databases and journal archives."
Researchers are increasingly using large language models for literature searches and manuscript writing, and these tools sometimes generate hallucinated citations. An analysis of nearly 18,000 papers accepted by three computer-science conferences found a sharp rise in untraceable references: 2.6% of papers in 2025 contained at least one potentially hallucinated citation, up from about 0.3% in 2024. A second analysis estimated that 2-6% of papers at four other 2025 computer-science conferences included references with rephrased titles or publications the authors could not verify. The full extent of the problem is still unclear, but it appears to affect multiple academic fields.
#artificial-intelligence #hallucinated-citations #academic-integrity #research-methodology #large-language-models
Read at Nature