More Than Half of Reviewers Now Use AI - Policies Lag Behind

More than half of reviewers now lean on AI for drafting, summaries, and basic integrity checks. Policies lag: disclose use, avoid public uploads, and keep judgment human.

Categorized in: AI News Science and Research
Published on: Dec 16, 2025

Peer Review Has Quietly Gone AI-Assist - Policies Need to Catch Up

More than half of researchers now lean on AI during peer review. A survey of 1,600 academics across 111 countries found that over 50% have used AI in the process, and nearly one-quarter increased usage in the past year.

The pattern is clear: tools that summarize manuscripts, check references, and draft responses are becoming part of everyday review work. The question isn't "if," but "how" to use them responsibly.

Policy vs. Practice

Many reviewers are using AI despite guidance that warns against uploading unpublished manuscripts to third-party chatbots. The concern is straightforward: confidentiality and authors' intellectual property.

Some publishers allow limited, disclosed AI use. Frontiers, for example, requires disclosure, prohibits sharing manuscripts with public models, and has launched an in-house AI platform for reviewers. Wiley says it sees relatively low interest and confidence in AI for peer review across its portfolio, but agrees that clear guidance and disclosure are essential.

How Reviewers Actually Use AI

  • Writing support: 59% use AI to help draft review reports.
  • Analysis support: 29% use it to summarize manuscripts, identify gaps, or check references.
  • Integrity checks: 28% use AI to flag potential misconduct (plagiarism, image duplication).

This reflects a practical split: language and structure help, light analysis, and early-warning signals for ethical issues.

Early Tests Show Clear Limits

Engineering scientist Mim Rahimi tested whether an LLM (GPT-5) could review a Nature Communications paper he co-authored. He tried multiple prompt setups, from basic instructions to feeding literature for novelty and rigor checks.

The model mimicked the structure and tone of a review but fell short on constructive critique and made factual errors. More complex prompting didn't help; the most elaborate setup produced the weakest critique. Another study found AI reviews often matched human tone but lacked depth. As Rahimi put it, these tools can surface something useful, but relying on them wholesale would be harmful.

What This Means for Your Review Workflow

  • Disclose AI use if your journal permits it. Keep a brief log of what you used and why.
  • Do not upload unpublished manuscripts to public chatbots. Use publisher-provided or enterprise tools with confidentiality controls.
  • Use AI for scaffolding, not judgment: outline key points, tighten language, and structure sections - you own the critique.
  • Manually verify claims, statistics, and references. Watch for hallucinated citations and misread methods.
  • Treat AI flags (plagiarism, image issues) as leads, not verdicts. Confirm with established tools and your own checks.
  • Keep human accountability. If you can't defend a point without the model, it shouldn't be in your report.

What Editors and Journals Can Do Now

  • Publish a clear, reviewer-facing AI policy: what's allowed, what isn't, and how to disclose.
  • Offer secure, in-house tools or approved environments - don't push reviewers to public chatbots.
  • Require disclosure in the review form and enable confidential comments to editors about AI use.
  • Provide brief training and examples of acceptable use cases (summaries, language edits) vs. unacceptable ones (delegating judgments).
  • Audit for misuse and communicate consequences to maintain trust.

Practical Setup for Reviewers

  • Use publisher platforms when available (e.g., in-house reviewer assistants) to protect confidentiality.
  • If your institution supports it, explore local or enterprise LLMs with document controls.
  • Pair AI drafting with specialized tools: citation validators, plagiarism scanners, and image-duplication detectors.
  • Build a personal SOP: what you'll use AI for (structure, clarity) and what remains strictly human (claims assessment, novelty, ethics).
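The "pair AI drafting with specialized tools" step can start small. Below is a minimal, hypothetical helper that pulls DOI-like strings out of a reference list or review draft so each one can be verified by hand (e.g. against Crossref or the publisher's site). The function name and regex pattern are illustrative assumptions, not part of any publisher's tooling, and the pattern is a common heuristic rather than an official DOI grammar.

```python
import re

# Heuristic DOI pattern (covers most Crossref-registered DOIs);
# not an exhaustive grammar, so treat hits as leads, not verdicts.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(text: str) -> list[str]:
    """Return unique DOI-like strings in order of first appearance."""
    seen, dois = set(), []
    for match in DOI_PATTERN.findall(text):
        doi = match.rstrip(".,;")  # strip punctuation picked up from prose
        if doi not in seen:
            seen.add(doi)
            dois.append(doi)
    return dois

if __name__ == "__main__":
    refs = """
    [1] Smith et al., 2023. doi:10.1038/s41467-023-01234-5.
    [2] Lee & Park, 2022. https://doi.org/10.1126/science.abc1234
    """
    for doi in extract_dois(refs):
        print(doi)
```

A list like this makes the manual verification step concrete: you check each DOI resolves and matches the cited work, rather than trusting an AI-drafted reference list wholesale.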

Quick Pre-Submission Checklist

  • Are all critiques evidence-based with manuscript citations or literature support?
  • Have you verified key references and statistics?
  • Did you avoid pasting confidential text into public tools?
  • Is AI use disclosed per journal policy, with a line in confidential comments if needed?
  • Does your report include specific, actionable feedback - not just polished generalities?

Further Reading and Training

For principles and expectations, see guidance from the Committee on Publication Ethics (COPE) and publisher policy pages.

Bottom line: AI can speed up the mechanics of reviewing, but scientific judgment, ethical handling of manuscripts, and final accountability are still squarely on the reviewer.
