AI is fueling a scientific publishing surge - with trade-offs researchers can't ignore
New work published in the journal Science reports a clear pattern: scientists who use large language models (LLMs) like ChatGPT are publishing far more papers across disciplines. The effect is strongest for researchers whose first language isn't English, who have long faced a language tax in top journals. Output is up, access is broader, and the writing often reads more polished.
But there's a catch. As AI makes prose cleaner and more complex, the usual cues we use to judge quality get weaker. Shiny writing can hide soft ideas.
How the team measured AI's impact
Researchers analyzed nearly 2.1 million preprint abstracts posted from January 2018 through June 2024 across major servers. They used GPT-3.5 Turbo-0125 to create AI-written versions of pre-2023 abstracts, learned the patterns that separate machine from human text, and built a detector to flag AI-assisted writing in newer papers. They also tracked authors over time to see how output changed once AI showed up in their workflow.
Source context: preprints precede peer review, but their volume and timing make them a strong signal of writing and submission behavior. For background on the journal behind the study, see Science. For the preprint ecosystem itself, see arXiv.
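The detection recipe above can be reproduced in miniature. Below is a minimal sketch, assuming a TF-IDF plus logistic-regression stand-in for whatever features and model the study actually used; the abstracts are invented placeholders, not data from the paper.

```python
# Toy version of the study's detection idea, not the authors' pipeline:
# pair human abstracts with LLM rewrites, train a classifier, score new text.
# TF-IDF features and logistic regression are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples standing in for pre-2023 abstracts and their AI rewrites.
human_abstracts = [
    "We measure the decay rate of excited states in trapped ions.",
    "Field data from 120 plots show soil carbon rising after no-till adoption.",
]
llm_rewrites = [
    "This study quantifies the decay rates of excited states in trapped ions.",
    "Analysis of field data from 120 plots demonstrates increased soil carbon.",
]

X = human_abstracts + llm_rewrites
y = [0] * len(human_abstracts) + [1] * len(llm_rewrites)  # 1 = AI-styled

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(X, y)

# Probability that newer, unseen abstracts are AI-assisted.
new_abstracts = ["Here we present a comprehensive analysis of ion decay rates."]
print(detector.predict_proba(new_abstracts)[:, 1])
```

Even this toy makes the study's caveat concrete: the detector scores style, not substance, which is exactly why polished prose is a weak proxy for quality.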
The numbers: big gains, especially outside native-English contexts
- Social sciences and humanities: +59.8% output
- Biology and life sciences: +52.9%
- Physics and mathematics: +36.2%
- Non-English-speaking regions (notably parts of Asia): up to +89% in some cases
LLM use is associated with a steep productivity jump, and the language gap narrows when AI helps with phrasing, grammar, and structure. That's a real win for inclusion and speed.
The quality warning you shouldn't shrug off
The study found a counterintuitive pattern: the more complex the AI-shaped prose, the less likely the paper was to be high quality. Smooth writing can mask weak methods or thin results. If editors and reviewers stop trusting writing quality, they may lean harder on status cues like author pedigree or institutional brand, undercutting the very democratization AI is helping create.
Practical moves for researchers and writers
- Let AI help with clarity, pacing, and translation, but keep the ideas, methods, and claims strictly yours.
- Disclose AI assistance. Keep a simple log of what you used it for (editing, translation, figure captions), plus key prompts and revisions; a minimal log sketch follows this list.
- Tighten verification: check every citation, quoted result, and statistical claim. If you didn't read the source, don't cite it.
- Prefer plain language over ornate phrasing. Strong methods and transparent data beat fancy sentences.
- Pre-register when possible, share data and code, and document changes between preprint and final.
- For editors and PIs: use AI detectors as a nudge, not a verdict. Pair them with random audits of citations and methods reporting.
- Adopt lab or journal policies for AI use (scope, disclosure, verification, authorship limits). See COPE's guidance on AI in authorship.
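What the disclosure log suggested above might look like in practice: a minimal sketch, where the CSV format, field names, and the log_ai_use helper are illustrative choices, not a journal or COPE requirement.

```python
# Minimal AI-assistance log for a project. Fields and format are hypothetical.
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("ai_use_log.csv")  # hypothetical filename
FIELDS = ["date", "tool", "purpose", "prompt_summary", "human_revision"]

def log_ai_use(tool, purpose, prompt_summary, human_revision):
    """Append one AI-assistance entry, writing a header on first use."""
    write_header = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "purpose": purpose,
            "prompt_summary": prompt_summary,
            "human_revision": human_revision,
        })

log_ai_use(
    tool="ChatGPT",
    purpose="editing",
    prompt_summary="Tighten abstract to 150 words",
    human_revision="Restored hedging on effect size; re-checked one statistic",
)
```

A plain spreadsheet works just as well; the point is that disclosure is painless when the record already exists.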
A lean workflow for responsible AI-assisted writing
- Outline contributions and figures first. Draft methods and results in your own words before any AI pass.
- Use AI for language refinement only: clarity, grammar, consistency of terminology, and abstract condensing.
- Run a structured check: factual consistency, citation validity (a DOI-check sketch follows this list), numerical accuracy, and alignment between abstract and results.
- Final read in plain English. If a sentence sounds impressive but says little, rewrite or cut it.
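For the citation-validity step, one low-effort automation is resolving each cited DOI against the public Crossref REST API and comparing the returned title by eye. A minimal sketch; the DOI below is hypothetical, and resolving a DOI is no substitute for reading the source.

```python
# Check that cited DOIs resolve via Crossref (https://api.crossref.org).
# Resolution confirms the reference exists, not that it supports your claim.
import requests

cited_dois = ["10.1126/science.abc1234"]  # hypothetical DOI

for doi in cited_dois:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        print(f"UNRESOLVED: {doi} (HTTP {resp.status_code})")
        continue
    # Crossref returns metadata under "message"; "title" is a list.
    title = resp.json()["message"].get("title", ["<no title>"])[0]
    print(f"OK: {doi} -> {title}")
```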
Bottom line
AI is boosting output and widening participation. It's also making prose a weaker signal of scientific value. If you publish, review, or edit, double down on methods, data, and verification, and treat polished language as a style choice, not proof of substance.
Level up your workflow
If you're formalizing AI use across your role or team, explore practical training paths by role at Complete AI Training.