AI tools drive 42% surge in journal submissions but lower writing quality, study finds

Submissions to Organization Science jumped 42% after ChatGPT launched, but research quality fell sharply, according to the journal's editors. Over 30% of peer reviews now contain AI-generated writing, making feedback harder to act on.

Published on: May 13, 2026

AI Submissions Surge While Quality Drops, Study Shows Strain on Peer Review

Submissions to Organization Science have jumped 42% since ChatGPT's release in late 2022, but the quality of research has declined sharply, according to a new analysis by the journal's editors. The finding marks the first hard evidence of how generative AI tools are reshaping academic publishing.

The editors analyzed 6,957 submissions and the 10,389 reviews they received over five years. AI content detection software scored each manuscript, and because the window spans ChatGPT's launch, the pre-launch submissions provide a clear control period for comparison.
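The before-and-after design can be sketched in outline. The scores, dates, and detector here are purely illustrative stand-ins, not the study's actual data or tooling; the only assumption carried over from the article is the launch cutoff in late 2022:

```python
from datetime import date
from statistics import mean

# Hypothetical submission records: (submission date, AI-detection score 0-1).
# These values are made up for illustration; the study's data is not public.
submissions = [
    (date(2022, 3, 10), 0.05),
    (date(2022, 7, 22), 0.08),
    (date(2023, 2, 14), 0.41),
    (date(2023, 9, 3), 0.37),
    (date(2024, 1, 19), 0.52),
]

CHATGPT_LAUNCH = date(2022, 11, 30)  # ChatGPT's public release

# Split into a control period (pre-launch) and a treatment period (post-launch).
pre = [score for d, score in submissions if d < CHATGPT_LAUNCH]
post = [score for d, score in submissions if d >= CHATGPT_LAUNCH]

print(f"pre-launch mean AI score:  {mean(pre):.2f}")
print(f"post-launch mean AI score: {mean(post):.2f}")
```

The pre-launch period acts as the baseline: any shift in the post-launch score distribution can then be attributed to the arrival of generative AI tools rather than to detector noise alone.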

The results are stark. Most new submissions are rejected, many during initial screening. Manuscripts with low AI scores remain most likely to be published.

Peer Reviews Show AI Influence

More than 30% of reviews submitted to the journal contain some degree of AI-generated writing. These AI-assisted reviews are harder to parse and focus more on theory while neglecting data analysis.

"This makes it harder for both editors and authors to act on reviewer feedback and can potentially affect manuscript quality," the editors wrote in their analysis, published April 27.

Lamar Pierce, editor-in-chief and a strategy professor at Washington University in St. Louis, said editors at other journals suspected this was happening but lacked evidence. "No one had the hard evidence on the extent of it or the implications," Pierce said.

Publish-or-Perish Pressure Accelerates Adoption

The surge reflects a deeper problem in academia. Universities reward quantity of publications over quality, creating pressure that junior scholars feel acutely.

"It's hard for me to blame junior scholars for focusing on more instead of better if that's what they are rewarded for," Pierce said. "Junior scholars face tremendous pressure for promotion and funding."

Universities typically count publications to avoid subjective bias in evaluations. But this metric-driven approach has unintended consequences when combined with AI tools that make it cheap and fast to generate submissions.

Skill Development at Risk

Researchers who use AI without understanding what it produces may miss crucial learning opportunities. Early-career scholars typically build expertise through each project, becoming better writers, theorists, and empiricists over time.

"If AI is used in research without understanding how and what it's producing, scholars don't become experts," Pierce said. "Human expertise is still crucial for breakthrough scientific research."

AI tools like Claude Code can accelerate coding and method selection. But researchers must evaluate these contributions themselves and understand what the AI is actually doing.

System Redesign Needed

The peer review process itself requires rethinking, not just tweaking. Most journals are asking how to adjust existing systems to account for AI, but that's the wrong question, Pierce said.

"The right question is: What is the best process for evaluating and promoting great research?" he said. "We need to start from scratch in designing this system rather than tweaking the current one."

The analysis generated immediate attention. The article was downloaded 10,000 times in its first week, and major outlets including Nature, Forbes, and the Financial Times covered the findings.

Pierce emphasized that policies need to keep pace with rapid AI advancement. "The speed of advancement is exciting but also frightening," he said. "Those proposing policy and best practices need to be immersed in using the technology. Without doing so, they'll miss the dynamic nature of the technology and why static policies will be quickly outdated."

The study doesn't prescribe appropriate AI usage levels. Instead, the authors called for ongoing conversation among journal editors, universities, and researchers about how to move forward.


