More than 1,000 suspicious journals flagged by AI screening tool
Briefly

An AI tool screened about 15,000 open-access journals and flagged more than 1,000 as potentially engaging in dubious publishing practices. Many of the flagged journals were not on existing watchlists, and some are titles from large, reputable publishers. Together, the flagged journals have published hundreds of thousands of papers that have attracted millions of citations. The screening algorithm looks for red flags such as rapid publication turnaround, high self-citation rates, weak transparency, and editorial-board members without clear affiliations to reputable institutions. The tool is available in a closed beta for indexers and publishers, but it can make mistakes, so human experts must vet its output before any journal is delisted or sanctioned.
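The study does not publish the exact features, thresholds or weights its classifier uses, so the following is only a rough illustration of how heuristic red flags of the kind listed above could be combined into a screening score. Every field name, cutoff and weight in this Python sketch is an assumption for demonstration, not the authors' method.

```python
# Illustrative sketch only: the study's actual classifier is not public, so the
# field names and thresholds below are invented for demonstration purposes.
from dataclasses import dataclass


@dataclass
class JournalStats:
    median_days_to_acceptance: float           # submission-to-acceptance time
    self_citation_rate: float                  # fraction of citations pointing back to the journal
    editors_with_verifiable_affiliation: float # fraction of board members with a clear institutional affiliation
    publishes_peer_review_policy: bool         # simple proxy for transparency


def red_flag_score(j: JournalStats) -> int:
    """Count simple heuristic red flags of the kind described in the article."""
    flags = 0
    if j.median_days_to_acceptance < 30:            # unusually rapid turnaround
        flags += 1
    if j.self_citation_rate > 0.30:                 # high self-citation
        flags += 1
    if j.editors_with_verifiable_affiliation < 0.5: # editors lack clear affiliations
        flags += 1
    if not j.publishes_peer_review_policy:          # weak transparency
        flags += 1
    return flags


# Example: a journal accruing two or more red flags would be queued for human
# review, mirroring the article's point that experts must vet any flagged title.
example = JournalStats(21.0, 0.42, 0.8, True)
print(red_flag_score(example))  # -> 2
```

In practice the published tool is described as a trained AI model rather than a fixed checklist like this, which is why its errors require human-expert vetting before any action is taken.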
Researchers have identified more than 1,000 potentially problematic open-access journals using an artificial intelligence (AI) tool that screened around 15,000 titles for signs of dubious publishing practices. The approach, described in Science Advances on 27 August, could help tackle the rise of what the study authors call "questionable open-access journals": those that charge fees to publish papers without doing rigorous peer review or quality checks.
The tool is available online in a closed beta version, and journal-indexing organizations and publishers can use it to review their portfolios, says study co-author Daniel Acuña, a computer scientist at the University of Colorado Boulder. But, he adds, the AI sometimes makes mistakes and is not designed to replace the detailed evaluations of journals and individual publications that might lead to a title being removed from an index. "A human expert should be part of the vetting process" before any action is taken, he says.
Read at Nature