AI-Assisted Writing Jumps 400% in Non-English Science, Cutting Drafting Costs and Helping Junior Researchers Close the Language Gap

AI writing tools are quietly changing who gets published in biomedicine. Post-ChatGPT, use jumped most in non-English-speaking countries and among junior authors, cutting time spent on language polishing.

Published on: Nov 23, 2025

AI-Assisted Writing Is Surging in Science - And It's Rebalancing Who Gets Published

Large language models are quietly changing how scientific papers get written. A new analysis of more than two million biomedical articles shows a sharp rise in AI-assisted writing after ChatGPT's release, with the fastest growth in non-English-speaking countries and among less-established scientists.

That uneven adoption matters. It suggests AI is helping reduce drafting and editing costs, especially for authors who used to spend disproportionate time and budget on English polishing.

What the data actually show

Researchers from the University of Wisconsin-Madison and Peking University analyzed publication text from 2021 to 2024 across major sources, including PubMed Central and OpenAlex. They estimated the fraction of sentences likely assisted by AI and compared pre- and post-ChatGPT patterns.

  • Post-ChatGPT, the estimated fraction of AI-assisted content rose from roughly 0.04 to 0.20 in non-English-speaking countries; in English-speaking countries it rose from about 0.06 to 0.17.
  • Growth in non-English-speaking countries was about four times faster than in English-speaking nations.
  • Country-level increases were negatively correlated with English Proficiency Index scores.
  • China showed ~250% growth; the Netherlands saw ~60% growth.
  • Corresponding authors used AI more often, and junior or less-established researchers adopted these tools at higher rates than senior or top-cited peers.

The team used a difference-in-differences approach to confirm that the post-ChatGPT increase was significantly larger in some country groups than others. They also caution that correlation doesn't prove causation and that AI detection is imperfect.
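To make the comparison concrete, here is a minimal difference-in-differences sketch in Python using pandas and statsmodels. The toy data, column names, and model specification are illustrative assumptions, not the study's actual dataset or code; the coefficient on the interaction term is the quantity of interest.

```python
# Minimal difference-in-differences sketch (illustrative only; not the study's code).
# Assumes one row per country-period: the estimated AI-assisted fraction ("ai_frac"),
# a non-English-speaking indicator ("non_english"), and a post-ChatGPT indicator
# ("post"). All values below are made up for this example.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "country":     ["UK", "UK", "US", "US", "CN", "CN", "NL", "NL"],
    "post":        [0, 1, 0, 1, 0, 1, 0, 1],          # before/after ChatGPT release
    "non_english": [0, 0, 0, 0, 1, 1, 1, 1],          # "treatment" group indicator
    "ai_frac":     [0.05, 0.16, 0.07, 0.18, 0.04, 0.22, 0.05, 0.18],  # toy outcomes
})

# The interaction coefficient is the difference-in-differences estimate: the extra
# post-ChatGPT increase in non-English-speaking countries relative to English-speaking ones.
model = smf.ols("ai_frac ~ non_english * post", data=df).fit()
print(model.params["non_english:post"])
```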

Why adoption concentrates where it does

English dominates scientific communication. For many researchers, that means extra rounds of editing, translation, and reviewer pushback on language. AI tools reduce those friction points and the cost of getting a readable draft out the door.

Less-established scientists also gain. They tend to have fewer support resources and smaller budgets for editing. AI fills those gaps, trims drafting cycles, and helps meet submission deadlines.

Quality and originality: real concerns, workable solutions

AI can standardize tone, fix grammar, and summarize, but it can also introduce errors, overgeneralize, or flatten novelty if overused. Detection tools are unreliable, so policy and process matter more than policing.

  • Keep ideas, design, and analysis human-led. Use AI for wording, structure, and clarity.
  • Track provenance: maintain a changelog or version history documenting prompts and edits.
  • Disclose AI assistance in acknowledgments or a methods note if your journal requests it.
  • Verify every claim, citation, and numerical result manually. No exceptions.

A practical workflow for researchers and science writers

  • Outline first: research question, contribution, methods, key results, and limits.
  • Draft in your native language if it's faster. Translate, then refine with AI.
  • Prompt for structure: "Rewrite this methods section for clarity and brevity, keep all numbers and units unchanged."
  • Prompt for style: "Improve readability at a graduate level, remove buzzwords, keep citations intact." (Reusable templates for these prompts are sketched after this list.)
  • Use AI to generate alternative titles, abstracts, and cover letters; choose and refine the best.
  • Run an adversarial pass: ask AI to list weaknesses, missing controls, or unclear logic. Address them.
  • Final pass is human-only: figures, stats, references, and ethical statements.
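To keep the structure and style prompts consistent across a lab or team, they can be stored as reusable templates. Below is a minimal, model-agnostic sketch in Python; the PROMPTS dictionary and build_prompt helper are hypothetical names for this example, and the output is simply pasted into whatever writing tool you use.

```python
# Reusable prompt templates for AI-assisted editing (illustrative sketch; the wording
# mirrors the suggestions in the list above and can be adapted to any tool).
PROMPTS = {
    "structure": (
        "Rewrite this methods section for clarity and brevity. "
        "Keep all numbers and units unchanged.\n\n{text}"
    ),
    "style": (
        "Improve readability at a graduate level. Remove buzzwords and "
        "keep citations intact.\n\n{text}"
    ),
    "adversarial": (
        "List weaknesses, missing controls, and unclear logic in the "
        "following draft. Do not rewrite it.\n\n{text}"
    ),
}

def build_prompt(task: str, text: str) -> str:
    """Return the full prompt for a given task: 'structure', 'style', or 'adversarial'."""
    return PROMPTS[task].format(text=text)

if __name__ == "__main__":
    draft = "We measured expression in 12 samples at 37 °C over 48 h."
    # Paste the resulting prompt into the AI writing tool of your choice.
    print(build_prompt("structure", draft))
```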

Guidelines for PIs, editors, and institutions

  • Set clear policies: where AI is acceptable (language/editing), where it is not (data fabrication, analysis without validation).
  • Require disclosure when AI is used for text generation or translation.
  • Prioritize verification over detection. Mandate human checks for data, citations, and conclusions.
  • Offer training so early-career researchers learn safe, efficient AI writing practices.

What this means for careers and equity

The gains cluster where language has been a barrier and resources are thin. Junior and less-cited authors, especially when they serve as corresponding authors, are seeing bigger jumps in output and polish. Senior researchers appear more conservative, often reserving AI for light editing.

That pattern doesn't fix structural disparities on its own, but it moves the needle on who gets heard and how quickly good work reaches reviewers.

Actionable next steps

  • Create a lab-wide AI writing SOP: approved tools, prompts, disclosure language, and a verification checklist.
  • Adopt a shared template with labeled sections for methods, results, and limitations that AI can refine without changing facts.
  • Schedule a pre-submission "fact lock" review to confirm numbers, references, and data availability statements (a simple automated check is sketched below).
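Part of the fact-lock step can be automated. As one possible approach (an illustrative sketch, not a standard tool), a short script can extract every number from the pre-edit and post-edit versions of a draft and flag any that appeared or disappeared during AI-assisted editing, so a human can review them:

```python
# Quick "fact lock" check (illustrative sketch): compare the numeric tokens in a draft
# before and after AI-assisted editing and flag any that changed, for human review.
import re
from collections import Counter

NUMBER = re.compile(r"-?\d+(?:\.\d+)?")

def extract_numbers(text: str) -> Counter:
    """Count every numeric token (integers and decimals) in the text."""
    return Counter(NUMBER.findall(text))

def fact_lock_diff(before: str, after: str) -> dict:
    """Return numbers whose counts differ between the two versions."""
    b, a = extract_numbers(before), extract_numbers(after)
    return {
        n: (b.get(n, 0), a.get(n, 0))
        for n in set(b) | set(a)
        if b.get(n, 0) != a.get(n, 0)
    }

if __name__ == "__main__":
    before = "We enrolled 124 patients; mean age was 61.4 years (SD 9.2)."
    after = "We enrolled 124 patients with a mean age of 61.4 years (SD 9.3)."
    # Prints {'9.2': (1, 0), '9.3': (0, 1)}: the SD silently changed and needs review.
    print(fact_lock_diff(before, after))
```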

Want structured training?

If you're formalizing AI workflows for research writing, you can browse skill-focused resources here: Complete AI Training - Courses by Skill. For team-wide upskilling, see Courses by Job.

