Scholarly publishers are adopting AI tools to mitigate the peer review crisis caused by high submission rates and a lack of reviewers. While AI can improve the efficiency and quality of peer reviews, the prevalence of these tools is leading to concerns about new methods of cheating among researchers. A survey indicated that 19 percent of researchers utilized large language models for peer reviews. Research suggests that a notable percentage of review texts could have been significantly altered by LLMs, potentially leading to unethical practices through indirect prompt injection techniques.
If reviewers are merely skimming papers and relying on LLMs to generate substantive reviews rather than using it to clarify their original thoughts, it opens the door for new cheating methods.
Preliminary research has found that the strategy can be highly effective for inflating AI-generated review scores.
#ai-in-peer-review #research-integrity #peer-review-crisis #large-language-models #academic-publishing
Collection
[
|
...
]