Scientists hide messages in papers to game AI peer review
Briefly

Researchers are embedding secret messages in academic papers to deceive AI tools into giving positive peer-review reports. Hidden messages are included as white text or in very small fonts, making them invisible to human reviewers but detectable by AI. This practice, known as 'prompt injection,' is primarily found in computer science-related fields and represents a form of academic misconduct. Despite prohibitions from many publishers against AI use in peer review, some researchers exploit this loophole, raising concerns about the integrity of the review process.
Researchers have been sneaking secret messages into papers to manipulate AI tools into providing favorable peer-reviews, a practice uncovered by Nature and highlighted by Nikkei Asia.
Hidden messages are often inserted as white text or in tiny fonts, making them invisible to humans but detectable by AI reviewers, thus exploiting AI vulnerabilities.
While many publishers prohibit AI use in peer review, some researchers leverage large language models to draft review reports, creating a loophole that others exploit.
The act of inserting hidden prompts into papers has been termed 'prompt injection,' recognized as academic misconduct, and may rapidly escalate in prevalence.
Read at Nature
[
|
]