AI flags 1,000+ questionable open-access journals and shows its work
AI scanned 15,200 open-access journals, flagged 1,400; auditors confirmed 1,000+ with questionable practices. Signals align with DOAJ, are explainable, and speed human triage.

Unsolicited journal emails, opaque fees and weak peer review are more than annoyances - they pollute literature streams that researchers and writers rely on. A new AI system takes aim at the problem by triaging journals that fail basic publishing standards.
In a Science Advances study, the tool scanned nearly 15,200 open-access journals online. It flagged about 1,400 as potentially problematic; human auditors confirmed that more than 1,000 showed questionable practices.
Why this matters
Indexing services don't consistently filter sources. Low-standard outlets can seed citations into otherwise solid work, complicating due diligence for labs, editors and science writers. Not every paper in a questionable journal is flawed, but the absence of transparent editorial process raises risk and review costs for everyone downstream.
What the new AI found
The system ties its judgments to concrete criteria aligned with the Directory of Open Access Journals (DOAJ) best practices. That makes its signals interpretable rather than a black box. When it assigns a high probability that a journal is questionable, it can show the underlying signals that drove the call.
The operating point is adjustable. At the threshold used in the study, the false positive rate was about 24%: some legitimate journals get flagged for review, but the trade-off lets auditors surface a large share of problematic titles efficiently.
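The headline counts above can be turned into rough triage metrics. A minimal sketch, using the approximate figures reported in the study (note that precision over the flagged set is a different quantity from the ~24% false positive rate, which is measured over legitimate journals):

```python
def triage_stats(scanned: int, flagged: int, confirmed: int) -> dict:
    """Rough screening metrics from aggregate counts.

    precision: share of flagged journals that auditors confirmed.
    flag_rate: share of all scanned journals that were flagged at all.
    """
    return {
        "precision": confirmed / flagged,
        "flag_rate": flagged / scanned,
    }

# Approximate counts reported in the study.
stats = triage_stats(scanned=15_200, flagged=1_400, confirmed=1_000)
print(f"precision = {stats['precision']:.0%}, flag rate = {stats['flag_rate']:.1%}")
# About 71% of flagged titles were confirmed, from flagging roughly 9% of journals.
```

In other words, auditors reviewing the flagged queue confirm roughly seven in ten titles, which is what makes the triage layer efficient despite the false positives.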
How the system evaluates journals
- Website signals: Presence and quality of core pages (e.g., editorial board), clarity of policies, and overall presentation.
- Code fingerprints: Patterns in site code that can reveal mass-produced journal templates reused across fly-by-night outlets.
- Bibliometrics: Self-citation loops, dense mutual citation among the same journals or institutions, and anomalies in author identities and citation frequencies.
The approach also cross-checks editorial board members against author databases to see if listed experts publish in the relevant field - a simple but telling proxy for legitimacy.
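The published system is a trained classifier, but a purely illustrative rule-based sketch shows how signals like these could combine into a triage score. All weights, thresholds, and field names below are invented for illustration, not taken from the study:

```python
from dataclasses import dataclass

@dataclass
class JournalSignals:
    has_editorial_board_page: bool     # website signal
    policy_pages_complete: bool        # website signal
    template_shared_with_others: bool  # code fingerprint
    self_citation_rate: float          # bibliometrics, 0..1
    board_members_publishing: float    # share of board found in author databases, 0..1

def triage_score(s: JournalSignals) -> float:
    """Higher score = more signals consistent with questionable practices.

    Weights and thresholds are invented for illustration; the real system
    learns its decision function from labeled data.
    """
    score = 0.0
    if not s.has_editorial_board_page:
        score += 0.25
    if not s.policy_pages_complete:
        score += 0.15
    if s.template_shared_with_others:
        score += 0.20
    if s.self_citation_rate > 0.30:
        score += 0.20
    if s.board_members_publishing < 0.50:
        score += 0.20
    return score

suspect = JournalSignals(False, False, True, 0.45, 0.20)
print(round(triage_score(suspect), 2))  # every red flag fires
```

A transparent scoring scheme like this is also what makes each call auditable: the flagged journal comes with the list of signals that drove its score.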
Human + AI, not human vs. AI
Questionable outlets adapt. AI helps track shifting tactics at scale; auditors provide context and final judgment. Community initiatives like the DOAJ do rigorous work, but systematic checks across thousands of titles take time. An AI triage layer speeds the queue without replacing expert review.
Access and what's next
The tool is currently available to professional users (e.g., research integrity officers) through ReviewerZero AI. The team is integrating additional signals and plans to explore broader access for authors once reliability for self-service use is proven. Extending access could help universities - especially in the Global South - vet outlets before submission.
Practical checklist: quick journal vetting for researchers and writers
- Check DOAJ status: Is the journal listed? Has it been removed, and why? Start here for a fast screen.
- Editorial board reality check: Names, affiliations and fields should be clear. Spot-check a few members to confirm active, relevant publications.
- Peer review and APC transparency: Policies and fees should be upfront, specific and consistent across pages.
- Indexing claims: Verify any claims (e.g., Scopus, PubMed) directly with the index - not just the journal's site.
- Citation patterns: Skim recent issues. Watch for excessive self-citation or cliques of mutual citation among the same set of journals.
- Publisher footprint: Do multiple "journals" share identical templates, wording and contact details? That's a warning sign.
- When using preprints: Do extra legwork on methods and stats. Treat citations conservatively until peer review is clear.
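The first checklist item can be partly automated. DOAJ publishes a public REST API; the endpoint path and response shape below are assumptions based on its documented search interface, so verify them against the current API docs before relying on this sketch:

```python
import json
from urllib.request import urlopen

# Assumption: DOAJ's public search API accepts queries of the form
# /api/search/journals/issn:<ISSN>. Check the current DOAJ API docs
# for the exact endpoint version and response schema.
DOAJ_SEARCH = "https://doaj.org/api/search/journals/issn:{issn}"

def doaj_query_url(issn: str) -> str:
    """Build the (assumed) DOAJ journal-search URL for an ISSN."""
    return DOAJ_SEARCH.format(issn=issn)

def is_listed(response: dict) -> bool:
    """True if the search response contains at least one journal record."""
    return bool(response.get("results"))

if __name__ == "__main__":
    issn = "2375-2548"  # Science Advances, as an example ISSN
    with urlopen(doaj_query_url(issn)) as resp:
        data = json.load(resp)
    print("listed in DOAJ" if is_listed(data) else "not found in DOAJ")
```

A DOAJ hit is a fast positive screen, not a guarantee; absence (or removal, which DOAJ records with reasons) is what warrants the closer manual checks above.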
Expert perspective
"Questionable journals are those that don't do what journals are supposed to do: transparently filter and evaluate knowledge free of conflicts of interest," says Daniel Acuña, associate professor of computer science at the University of Colorado Boulder and founder of ReviewerZero AI. "Linking AI decisions to concrete best practices makes the calls explainable and easier to audit."
Reference
Estimating the predictability of questionable open-access journals (Science Advances, 2025)
Upskill note: If you lead research integrity or editorial teams and want structured AI training for workflows like screening and triage, explore curated options by role at Complete AI Training.