AI Tool Flags Predatory Journals to Safeguard Scientific Research
Researchers developed an AI that screens scientific journals to spot predatory publishers exploiting authors. It flagged over 1,400 suspicious journals for further expert review.

AI System Screens for Predatory Journals
A team of computer scientists at the University of Colorado Boulder has created an artificial intelligence platform that automatically identifies potentially predatory scientific journals. Published on Aug. 27 in Science Advances, this study addresses a growing concern in research: questionable journals that compromise scientific integrity. The AI initially flagged over 1,400 journals as possibly problematic.
The Problem of Predatory Publishing
Daniel Acuña, associate professor in the Department of Computer Science and lead author of the study, frequently receives unsolicited emails from supposed journal editors offering to publish papers for a fee. These “predatory” journals lure researchers into paying hundreds or thousands of dollars to publish without proper peer review or editorial oversight.
“It’s like whack-a-mole,” Acuña explains. “You catch one predatory journal, and another pops up, often from the same company with a new name and website.”
How the AI Screening Works
The AI system screens scientific journals by analyzing their websites and online data for specific indicators. It checks whether the journal lists a credible editorial board, assesses the quality of the website’s language, and looks for other markers of legitimacy. While the tool is not flawless, it serves as a preliminary filter, leaving the final judgment to human experts.
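As a rough illustration of how such a first-pass filter might work, the Python sketch below scores two of the signals mentioned above: the credibility of the editorial board and the quality of the site’s language. All field names and thresholds are assumptions made for illustration; the study’s actual features and model are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class JournalSignals:
    """Signals scraped from a journal's website (all names are illustrative)."""
    board_size: int             # named editors found on the site
    affiliated_editors: int     # editors listing a verifiable institution
    language_error_rate: float  # fraction of sampled sentences with errors

def prescreen(j: JournalSignals) -> bool:
    """Route a journal to human review if its signals look weak.

    This is a first-pass filter only: True means "inspect further",
    never "confirmed predatory". Thresholds are invented for the sketch.
    """
    weak_board = j.board_size < 5 or (
        j.affiliated_editors / max(j.board_size, 1) < 0.5
    )
    sloppy_site = j.language_error_rate > 0.15
    return weak_board or sloppy_site

# Example: a journal with a five-person board, one verifiable editor,
# and error-ridden prose gets routed to expert review.
print(prescreen(JournalSignals(board_size=5, affiliated_editors=1,
                               language_error_rate=0.2)))  # True
```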
Stopping the spread of questionable publications is critical because science builds upon prior research. If the foundation is weak, subsequent work risks collapse.
The Rise of Predatory Journals
Legitimate journals use peer review, where external experts evaluate studies for quality. Predatory journals bypass this, focusing instead on profit. The term “predatory” was coined in 2009 by Jeffrey Beall, a librarian at CU Denver, to describe such exploitative publishers. Often, they target researchers in countries where scientific institutions are newer and the pressure to publish is intense, including China, India, and Iran.
These journals promise peer review for a fee but typically just post the submitted articles online without scrutiny.
Efforts to Combat Predatory Publishing
Organizations like the Directory of Open Access Journals (DOAJ) have worked since 2003 to flag suspicious journals using clear criteria, such as transparent peer review policies. However, manual vetting struggles to keep up with the rapid growth of predatory publications.
To assist, Acuña’s team trained their AI using DOAJ data and scanned about 15,200 open-access journals. The AI flagged over 1,400 as potentially problematic. Human experts reviewed a subset and found that roughly 350 were false positives, while more than 1,000 appeared genuinely questionable.
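Taken together, those figures give a rough sense of the filter’s precision, assuming the expert-reviewed subset is representative of the full flagged set (a back-of-the-envelope estimate, not a result reported in the paper):

```python
flagged = 1400         # journals the AI flagged (approximate)
false_positives = 350  # flags that experts judged legitimate (approximate)

likely_questionable = flagged - false_positives
precision = likely_questionable / flagged
print(f"~{likely_questionable} likely questionable; precision ≈ {precision:.0%}")
# ~1050 likely questionable; precision ≈ 75%
```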
“This AI should be a tool for prescreening, not the final authority,” Acuña notes. “Human professionals need to make the ultimate call.”
Creating a Firewall for Science
The team prioritized transparency in their AI, avoiding the “black box” approach common in many AI tools, so the signals behind a flag can be inspected. Among those signals: predatory journals often publish unusually high numbers of articles, list authors with many institutional affiliations, and show excessive self-citation.
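To illustrate what that transparency can look like in practice, here is a minimal sketch of an interpretable score built from the three signals just described. The weights and cutoffs are invented for illustration and are not the study’s; the point is that every contribution to the score is a named, human-readable signal a reviewer can audit.

```python
def suspicion_score(articles_per_year: float,
                    affiliations_per_author: float,
                    self_citation_rate: float) -> tuple[float, list[str]]:
    """Interpretable scoring: every term maps to a named signal.

    Weights and thresholds below are illustrative assumptions.
    Returns the score plus the human-readable reasons behind it.
    """
    score, reasons = 0.0, []
    if articles_per_year > 1000:       # unusually high publication volume
        score += 1.0
        reasons.append(f"high volume: {articles_per_year:.0f} articles/year")
    if affiliations_per_author > 2.0:  # authors list many affiliations
        score += 0.5
        reasons.append("authors hold unusually many affiliations")
    if self_citation_rate > 0.30:      # excessive self-citation
        score += 1.0
        reasons.append(f"self-citation rate of {self_citation_rate:.0%}")
    return score, reasons

score, why = suspicion_score(2400, 2.6, 0.45)
print(score, why)  # 2.5, with three reasons a human reviewer can check
```

Unlike a black-box probability, a score like this can be traced back to its inputs, which is what lets human experts make the ultimate call.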
The system is not yet publicly available but may soon be offered to universities and publishers. Its goal is to serve as a “firewall” protecting science from unreliable data.
“Just as new smartphone software has bugs that need fixing, science too requires constant vigilance to maintain its integrity,” Acuña says.
For those interested in AI tools that support research integrity, or in AI courses on data analysis and automation, visit Complete AI Training.