
""We hope that by taking strong action against violations of agreed-upon policy we will remind the community that as our field changes rapidly the thing we must protect most actively is our trust in each other.""
""What the ICML case shows is a research community in need of clear guidance on responsible AI use, including use in peer review," says Marie Soulière, head of editorial ethics and quality assurance at the publishing company Frontiers in Lausanne, Switzerland."
""It will only demotivate all the reviewers," he said, adding that they will avoid routes banning AI use and will use LLMs 'to generate meaningful reviews.'""
The International Conference on Machine Learning (ICML) rejected 497 papers for violations of its AI-use policy in peer review. The conference operates a reciprocal-review policy, under which authors must review other submissions. Illicit AI use was detected through hidden watermarks embedded in papers, which exposed AI-generated reviews. Organizers say the move is intended to reinforce trust within the community. Some researchers support ICML's action and suggest other conferences should follow suit, while others question the policy's effectiveness and warn it could undermine reviewer motivation.
Read at Nature