AI Flags Over 1,000 Questionable Scientific Journals, Raising Alarms About Research Integrity

A new AI tool from the University of Colorado Boulder identifies over 1,000 questionable scientific journals, helping to protect research integrity. It screens journals for legitimacy but relies on experts for final judgment.

Published on: Aug 29, 2025

AI Tool Identifies Over 1,000 Questionable Scientific Journals

A team of computer scientists at the University of Colorado Boulder has developed an artificial intelligence platform that automatically detects potentially predatory scientific journals. Their study, published in Science Advances, addresses a growing concern in research quality and integrity.

Addressing Predatory Publishing

Daniel Acuña, the study’s lead author and an associate professor of computer science, regularly receives unsolicited emails from unknown journals offering to publish his work for significant fees. These so-called "predatory" journals exploit researchers by charging publication fees without providing proper peer review, often targeting scientists in countries with emerging research systems, such as China, India, and Iran.

"Efforts to vet these journals have been ongoing," Acuña explains, "but it’s like playing whack-a-mole—once one journal is exposed, another quickly appears, often under the same company with a new name."

AI Screening for Journal Legitimacy

The AI tool screens journals by analyzing their websites and online data for indicators such as the presence of established researchers on editorial boards and the quality of website content, including grammatical accuracy. While the system isn’t flawless, Acuña stresses that human experts should make the final judgment on a journal’s credibility.
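The study does not publish the tool's code, but the signals described above can be sketched as a simple scoring function. Everything here is hypothetical for illustration: the function name, weights, and thresholds are invented, not taken from the CU Boulder system.

```python
# Hypothetical sketch of a journal prescreening score (not the actual tool).
# Signals mirror those the article describes: whether established researchers
# appear on the editorial board, and the quality of website text.

def prescreen_score(has_editorial_board: bool,
                    board_members_verified: int,
                    grammar_errors_per_1k_words: float) -> float:
    """Return a 0-1 risk score; higher means more likely predatory."""
    score = 0.0
    if not has_editorial_board:
        score += 0.5                 # no board listed: strong red flag
    elif board_members_verified == 0:
        score += 0.3                 # board listed, but no verifiable researchers
    # Sloppy website copy contributes up to 0.5, capped at 20 errors/1k words
    score += min(grammar_errors_per_1k_words / 20.0, 1.0) * 0.5
    return min(score, 1.0)

# A journal with no verifiable editors and error-ridden text scores high:
risky = prescreen_score(True, 0, 15.0)   # 0.675
# A journal with a verified board and clean copy scores low:
safe = prescreen_score(True, 12, 1.0)    # 0.025
```

As in the real system, a score like this would only rank journals for human review, not render a verdict on its own.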

Peer review is central to credible scientific publishing. Predatory journals bypass this process, simply posting submitted articles online after collecting fees. This undermines the foundation of scientific research, which depends on building upon validated work.

How the AI Tool Works

The team trained their AI using data from the Directory of Open Access Journals (DOAJ), a nonprofit organization that has flagged thousands of suspicious journals since 2003. The AI screened nearly 15,200 open-access journals, initially flagging over 1,400 as potentially problematic.

After expert review, approximately 350 of these flagged journals were deemed likely legitimate, leaving over 1,000 journals identified as questionable. The AI acts as a prescreening assistant to manage large volumes of journals efficiently, but human analysis remains essential.
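The screening funnel above reduces to simple arithmetic. Using the article's approximate figures:

```python
# The screening funnel, using the article's rounded figures.
journals_screened = 15_200   # open-access journals the AI examined
ai_flagged = 1_400           # initially flagged as potentially problematic
cleared_by_experts = 350     # later judged likely legitimate on review

still_questionable = ai_flagged - cleared_by_experts   # 1050: "over 1,000"
flag_rate = ai_flagged / journals_screened             # roughly 9% flagged
```

The roughly 9% flag rate shows why prescreening matters: experts only needed to review about 1,400 journals rather than all 15,200.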

Insights and Transparency

The researchers designed their system to be interpretable, avoiding the "black box" nature common to many AI models. They found that questionable journals tend to publish a high volume of articles, list authors with multiple affiliations, and show excessive self-citation rather than citing external research.
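One of those interpretable signals, excessive self-citation, is easy to make concrete. The sketch below is a hypothetical illustration, not the study's method; the function, ISSNs, and threshold are invented.

```python
# Hypothetical illustration of the self-citation signal the article mentions:
# the fraction of a journal's outgoing citations that point back to itself.

def self_citation_ratio(cited_issns: list[str], journal_issn: str) -> float:
    """Fraction of outgoing citations that cite the journal itself."""
    if not cited_issns:
        return 0.0
    self_cites = sum(1 for issn in cited_issns if issn == journal_issn)
    return self_cites / len(cited_issns)

# 8 of 10 references point back to the same (made-up) ISSN:
cites = ["1234-5678"] * 8 + ["9999-0000", "1111-2222"]
ratio = self_citation_ratio(cites, "1234-5678")   # 0.8
flagged = ratio > 0.5   # invented threshold for "excessive"
```

Because the signal is a plain ratio, a human reviewer can verify it directly, which is what distinguishes an interpretable model from a "black box."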

Currently, the AI tool is not publicly available but is expected to be offered soon to universities and publishing companies. Acuña envisions it as a "firewall for science," helping to protect research fields from unreliable data.

"Science builds on what others have done," Acuña says. "If the foundation falters, the entire structure collapses."
