Peer review quietly went AI, and most researchers can't see it
AI is now part of the quality gate for science. Over half of peer reviewers use AI tools, yet nearly three-quarters of researchers don't know if publishers used AI on their own manuscripts. Speed is up. Trust isn't.
In a nutshell
- 53% of peer reviewers use AI; 24% increased usage in the past year
- Most apply AI to writing tasks (59%) vs. assessing methods or stats (19%)
- 66% say publisher AI use speeds publication; only 21% say it improves trust
- 76% are unsure whether AI was used in their publication process
- Training is patchy: 35% self-taught, 18% do nothing, 16% get guidance from publishers
How AI slipped into peer review without transparency
Publishers added AI to editorial workflows for checks and efficiency. Reviewers started using it too, often without policies, training, or disclosure. The result: a system influenced by AI that most participants can't see.
The trust gap is clear. Two-thirds credit AI with faster turnaround. Only one in five say it improves trust. And 76% don't know whether AI touched their submission.
As one reviewer put it: "I would consider it unethical to use AI in peer reviewing manuscripts… the form told me not to use AI." Confusion is doing the heavy lifting.
What reviewers actually do with AI (and what they should)
Current use skews toward surface-level help. Among reviewers using AI, 59% draft or polish reports, 29% summarize, and 28% flag potential misconduct. Only 19% use it to evaluate methodology or statistical soundness.
The same pattern holds for authors. Roughly 70% use AI for language and clarity; fewer than 25% apply it to analysis, design, or methods. That's a missed opportunity. The real upside is in stress-testing claims, not just fixing prose.
- Methods checks: ask AI to identify missing controls, confounders, or protocol ambiguities; request alternative study designs that test the same hypothesis.
- Stat sanity: verify model assumptions, effect sizes, and power; simulate edge cases; ask for sources of common statistical misinterpretations (a worked power-check sketch follows this list).
- Reproducibility: prompt for a step-by-step replication plan; request a dependency map for code, data, and environment; suggest minimal metadata to enable reuse.
- Bias and validity: screen for sampling bias, leakage, p-hacking signals, or questionable researcher degrees of freedom.
- Ethics and compliance: query alignment with reporting standards and vulnerability points for misconduct.
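A concrete example of the "stat sanity" item above: instead of asking an AI only for an opinion, a reviewer can have it draft a quick power check and then run the numbers. The sketch below is a minimal illustration using statsmodels, assuming a two-group t-test design; the sample size and effect size are placeholders standing in for what a manuscript might report.

```python
# Minimal power-check sketch for a two-group comparison.
# Assumes a manuscript reports n per group and a standardized effect size (Cohen's d);
# the figures below are placeholders for illustration only.
from statsmodels.stats.power import TTestIndPower

reported_n_per_group = 18      # hypothetical sample size per arm
reported_effect_size = 0.40    # hypothetical Cohen's d claimed by the authors
alpha = 0.05

analysis = TTestIndPower()

# Achieved power given the reported design
power = analysis.solve_power(effect_size=reported_effect_size,
                             nobs1=reported_n_per_group,
                             alpha=alpha,
                             ratio=1.0)

# Sample size per group needed to reach 80% power for the claimed effect
n_needed = analysis.solve_power(effect_size=reported_effect_size,
                                power=0.80,
                                alpha=alpha,
                                ratio=1.0)

print(f"Achieved power at n={reported_n_per_group}/group: {power:.2f}")
print(f"n per group needed for 80% power: {n_needed:.0f}")
```

A review report can then cite the computed power and the sample size needed for 80% power, rather than a vague concern that the study seems underpowered.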
Training and policy are lagging behind adoption
How researchers learn AI: 35% teach themselves, 31% rely on institutions, 18% take no action, and only 16% get help from publishers. That's a shaky foundation for a process this central.
Adoption skews young. Early-career researchers: 87% use AI for authoring; senior researchers: 67%. For peer review, it's 61% (≤5 years) vs. 45% (≥15 years). A majority of senior reviewers (55%) haven't used AI in review at all. Closing these gaps starts with a few basics:
- Set clear institutional policies for acceptable AI use in authorship, review, and editorial decisions, and publish them publicly.
- Require disclosure statements for any AI-assisted content or decisions, including tools, tasks, and human verification.
- Offer baseline training and certify competencies for editors and reviewers; update annually.
- Log AI interventions in the editorial workflow for audit and appeals.
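On that last point, "log AI interventions" is easier to mandate than to picture. Below is a minimal sketch of what a per-manuscript audit record could look like, assuming a simple JSON-lines log kept alongside the editorial system; all field names, tool names, and IDs are hypothetical, not any publisher's actual schema.

```python
# Sketch of an AI-intervention audit record for an editorial workflow.
# All field names, IDs, and tool names are hypothetical placeholders.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    manuscript_id: str        # internal tracking ID
    stage: str                # e.g. "triage", "review", "decision"
    tool: str                 # tool name and version actually used
    task: str                 # what the tool did (summarize, language check, stats critique)
    content_exposed: str      # what left secure systems, if anything
    human_verifier: str       # who checked the output
    timestamp: str            # when the intervention happened (UTC, ISO 8601)

def log_record(record: AIAuditRecord, path: str = "ai_audit.jsonl") -> None:
    """Append one record as a JSON line so interventions stay auditable."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_record(AIAuditRecord(
    manuscript_id="MS-2025-0412",              # placeholder ID
    stage="review",
    tool="example-llm v1.2",                   # hypothetical tool
    task="summarized methods section for reviewer",
    content_exposed="none (local deployment)",
    human_verifier="handling editor",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

An append-only log like this gives appeals and ethics reviews something concrete to inspect: which tool touched which manuscript, for what task, and who verified the output.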
Regional divides signal different priorities
Usage is higher in China (77%) and Africa (66%), often to reduce language barriers and level the playing field. In North America (31%) and Europe (46%), concerns focus on bias, misuse, and policy readiness.
Both priorities are valid. The path forward combines language support and tooling with guardrails and full disclosure.
What this means for your role
- Editors: Disclose all AI touchpoints in decision letters. Keep an internal log of tools, versions, and checks performed. Offer a standard reviewer checklist for AI-assisted evaluation.
- Reviewers: If you use AI, say so. Specify tasks (e.g., summarization, stats critique). Never share confidential content with tools that store prompts.
- Authors: Provide an AI use statement covering drafting, analysis, figure creation, or language editing. Keep data and code secure; avoid tools that learn from your inputs.
- Publishers: Publicly document where AI is used, why, and how humans verify outcomes. Establish an ethics review that includes AI governance.
- Tool developers: Offer model cards, audit options, and clear data handling policies. Make it easy to run locally or with strict privacy controls.
- Funders/policymakers: Mandate AI disclosure and enable audits in grants and compliance frameworks.
Useful frameworks and references
- COPE (Committee on Publication Ethics) for guidance on integrity and policy.
- EQUATOR Network for transparent reporting standards.
A minimal AI disclosure checklist you can adopt now
- Tools and versions used (e.g., model name, plugin, audit script)
- Purpose and scope (drafting, summarization, stats review, image analysis)
- Content exposure (what data left secure systems, if any)
- Human verification steps and who performed them
- Limitations observed (errors, hallucinations, bias concerns)
- Confidentiality safeguards (local runs, enterprise accounts, redaction)
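To make the checklist easy to collect and audit, it can double as a machine-readable form submitted with the manuscript. The following is a minimal sketch, assuming a plain JSON structure whose keys mirror the checklist; the example values are placeholders, not a publisher standard.

```python
# Minimal AI disclosure statement, mirroring the checklist above.
# Keys and example values are illustrative placeholders, not a publisher standard.
import json

ai_disclosure = {
    "tools_and_versions": ["example-llm v1.2", "stats-audit-script 0.3"],  # hypothetical tools
    "purpose_and_scope": ["language editing", "statistics review"],
    "content_exposure": "abstract and methods only; no raw data left secure systems",
    "human_verification": "all AI output checked line by line by the corresponding author",
    "limitations_observed": "two hallucinated references caught and removed",
    "confidentiality_safeguards": "enterprise account with training on inputs disabled",
}

# Attach to the submission, or paste into the manuscript's disclosure section.
print(json.dumps(ai_disclosure, indent=2))
```

The same structure works as a reviewer or editor disclosure; only the values change.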
Misuse concerns are real, and visible
Researchers are split: 63% say AI improves manuscript quality, yet 52% say it can make them doubt a paper's integrity, and 48% say it can add errors. Seventy-one percent worry about misuse; 53% report seeing it among peers, and 45% worry about publisher misuse.
Clear norms and audit trails calm these fears far more than speed alone.
Study details and limits
Source: survey of 1,645 active researchers conducted May-June 2025 by Frontiers Media. Focus: AI use across authorship, peer review, and editorial roles, with adoption rates, behaviors, and attitudes.
Limits: self-reporting bias, uneven disciplinary coverage, snapshot timing during fast change, and varying interpretations of "AI use" or "misuse."
Where to build practical AI fluency
If your team lacks a baseline, fix that. Start with short, role-specific training and a shared policy, then iterate.
- Latest AI courses for quick upskilling and policy-ready practices.
Bottom line
AI is already part of peer review. Most use cases today polish text instead of testing claims. The fix isn't more hype; it's disclosure, targeted training, and using AI where it actually strengthens science: methods, statistics, and reproducibility.
Normalize AI statements. Audit how tools are used. Reward deeper checks, not just faster prose. That's how we speed publication without losing trust.