AI is boosting paper output - but acceptance rates tell a tougher story
Large language models are changing how research gets written and submitted. A new study from Cornell University, published in Science, shows a clear pattern: more papers are being posted, especially by non-native English speakers, yet many AI-assisted manuscripts struggle in peer review.
"There's a big shift in our current ecosystem that warrants a very serious look," said Yian Yin, the study's corresponding author. The takeaway is simple: AI helps you write more, but not necessarily get accepted more.
How the researchers measured AI's footprint
The team analyzed over two million papers posted from 2018 to 2024 across three major preprint servers. They trained a model to flag text likely produced with LLM assistance and compared pre-2023 submissions to later work, when tools like ChatGPT became common.
They tracked who appeared to adopt AI tools, how their output changed, and whether those papers later cleared peer review and made it into journals. As with any detection method, this is an estimate, not a perfect label for every paper.
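The study's actual detector isn't described here, but the general idea, scoring text for features that skew toward LLM output, can be sketched in a few lines. The marker-word list and threshold below are illustrative assumptions, not the Cornell team's trained model, which would learn its features from labeled corpora.

```python
import re
from collections import Counter

# Hypothetical marker words often cited as overrepresented in LLM prose.
# A real detector would learn thousands of features from labeled training data.
LLM_MARKERS = {"delve", "leverage", "furthermore", "moreover", "showcase", "pivotal"}

def llm_score(text: str) -> float:
    """Fraction of tokens that are marker words: a crude proxy, not a trained model."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in LLM_MARKERS)
    return hits / len(tokens)

def flag(text: str, threshold: float = 0.02) -> bool:
    # The threshold is arbitrary here; a real pipeline would calibrate it
    # against known human-written and known LLM-assisted samples.
    return llm_score(text) > threshold
```

Even this toy version shows why any such flag is probabilistic: it estimates a tendency in the text, which is why the study's labels should be read as likelihoods, not ground truth.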
What changed: productivity and sourcing
Researchers who used AI posted substantially more. On a large physics/computer science server, output rose by roughly one-third for users flagged as AI adopters. In biology and the social sciences, the increase passed 50%.
The biggest gains came from scientists whose first language isn't English. Some Asian institutions saw 40% to nearly 90% more papers after adopting AI writing tools, depending on the field.
AI also improved how some authors found literature. Compared to traditional search, AI-assisted discovery surfaced newer papers and more relevant books, broadening the knowledge base. "People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas," said first author Keigo Kusumegi.
Quality signal vs. surface polish
Here's the catch. Papers that scored high on writing complexity and appeared human-written were more likely to be accepted by journals. Equally high-scoring papers that looked LLM-written were less likely to pass peer review.
In short: clean prose isn't a proxy for contribution. Reviewers reward clear questions, defensible methods, solid evidence, and responsible claims, not just fluent language.
Practical steps for scientists and lab leads
- Decide how you use AI, and disclose it. Distinguish language editing, outlining, and literature triage from idea generation, analysis, and claims. Many journals now expect this level of clarity.
- Use AI for polish and search; keep novelty, methods, and interpretation human-led. If a claim depends on the model, you need a stronger justification or an alternative approach.
- Add an internal pre-review that ignores writing flair and focuses on contribution, rigor, and falsifiability. If the core looks thin, rewrite the question or tighten the design.
- Verify every citation. AI can suggest plausible but wrong references. Follow primary sources, not just secondary summaries.
- For non-native English speakers: AI can reduce language friction, but have a colleague sanity-check domain-specific phrasing, figures, and conclusions.
- For group leads: set a lab policy for AI use, note which sections involved AI, and double down on transparency: code, data, and clear methods.
Signals reviewers actually reward
- A precise research question and a brief contribution statement up front.
- Methods with enough detail to reproduce, plus links to code and data where possible.
- Claims that match the evidence. No hype, no overreach.
- Recent and relevant citations anchored in primary work, not just famous older papers.
- Writing that clarifies the science rather than dressing it up.
What journals and funders may do next
- Require explicit AI-use statements in submissions and grant reports.
- Increase emphasis on novelty, data/code availability, and preregistration where applicable.
- Train reviewers to spot polished but low-contribution work and to prioritize transparency.
- Adopt triage workflows that filter for substantive advances early in the process.
Further reading
- Science (journal) for policy updates and related editorial guidance.
- COPE: Position statement on authorship and AI tools for disclosure and authorship standards.
Build your AI writing workflow-without losing rigor
If you're formalizing how your team uses AI for literature triage, drafting, and editing, a structured learning path can help. See curated options by role at Complete AI Training - Courses by Job, or browse the Latest AI Courses.