Publish More, Discover Less: AI's Bandwagon Effect in Science

AI helps scientists move faster: more papers, more citations, earlier leadership. Yet it funnels research into popular, data-rich problems; speed rises as surprise fades.

Published on: Jan 20, 2026

AI is speeding up scientific careers while funneling research into the same crowded corners

AI is helping individual scientists move faster. Across 41.3 million papers (1980-2025) in biology, chemistry, physics, medicine, materials science, and geology, researchers who used AI tools published about three times as many papers, collected nearly five times as many citations, and became team leaders 1-2 years earlier than peers who didn't.

But there's a trade-off: the collective map of science shrinks. AI-heavy papers cluster around popular, data-rich problems, occupy a narrower region of the knowledge landscape, and generate weaker follow-on engagement between studies. Speed goes up; surprise goes down.

What the data says

A natural-language model flagged roughly 311,000 papers as AI-augmented (i.e., using methods such as neural networks or large language models) and compared them with the millions that didn't use AI. Computer science and math were excluded to avoid conflating method-building with application.

The narrowing wasn't a blip. It persisted across early machine learning, deep learning, and now generative AI, and it appears to be intensifying. As James Evans puts it, individual incentives and collective progress are pulling in different directions.

Why the narrowing happens

AI automates what's most tractable: protein structure prediction, image classification, and pattern extraction from abundant datasets. Tools trained on past data are great at optimizing well-defined problems. They're less helpful in messy domains with scarce data.

Under publish-or-perish incentives, people gravitate to problems that AI can process quickly into publishable units. Luís Nunes Amaral warns we're "digging the same hole deeper," and Catherine Shea notes how this becomes a self-reinforcing loop of conformity.

The second-order effects you're already seeing

  • Topic convergence: more papers chasing the same questions with similar methods
  • Weaker cross-pollination: fewer intellectual bridges between subfields
  • Quality pressure: easier manuscript generation fuels low-quality and fraudulent submissions

Use AI to explore, not just optimize

The risk isn't that science slows down; it's that it becomes homogeneous. If you lead a lab or direct a program, repurpose AI from pure throughput to discovery expansion.

  • Bias for novelty: use LLMs to surface outlier findings, conflicting results, and odd negative data you'd otherwise ignore
  • Cross-field mapping: query models for analogs in distant literatures to import mechanisms or measures
  • Low-data strategies: apply active learning, simulation, and targeted experiments to push into sparse regimes
  • Hypothesis generation with constraints: require diversity in proposed mechanisms and experimental designs, not just more of the same
  • Pre-mortems: have AI enumerate failure modes and hidden assumptions before you scale a line of work
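The "low-data strategies" bullet can be made concrete. Below is a minimal, hypothetical sketch of pool-based active learning with uncertainty sampling; the distance-weighted voting model, the oracle, and all names are illustrative stand-ins, not anything from the study:

```python
# Hypothetical sketch (not from the article): pool-based active
# learning with uncertainty sampling on a toy 1D labeling task.

def predict_proba(labeled, x):
    """Estimate P(label=1) at x by distance-weighted voting over
    labeled points; a stand-in for a real trained model."""
    weights = [(1.0 / (abs(x - xi) + 1e-6), yi) for xi, yi in labeled]
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total

def most_uncertain(labeled, pool):
    """Choose the unlabeled point whose prediction is closest to 0.5."""
    return min(pool, key=lambda x: abs(predict_proba(labeled, x) - 0.5))

def active_learn(oracle, labeled, pool, budget):
    """Spend a small labeling budget only on maximally uncertain points."""
    pool = list(pool)
    for _ in range(budget):
        x = most_uncertain(labeled, pool)
        pool.remove(x)
        labeled.append((x, oracle(x)))
    return labeled

# True concept: label is 1 when x >= 0.6; the oracle is the costly
# experiment we want to run as few times as possible.
oracle = lambda x: 1 if x >= 0.6 else 0
pool = [i / 100 for i in range(1, 99)]
seed = [(0.0, oracle(0.0)), (0.99, oracle(0.99))]
labeled = active_learn(oracle, seed, pool, budget=10)
# Queries concentrate near the decision boundary, so labels
# aren't wasted on easy, far-away points.
```

The design point is the loop, not the toy model: in a sparse-data regime, each label (experiment) goes where the current model is least sure, which is exactly where the surprising result is most likely to live.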

What lab leaders can do this quarter

  • Set a portfolio rule: at least 20-30% of projects must target low-data, high-uncertainty questions
  • Define "exploration KPIs": novel datasets created, methods transferred across domains, contradictions resolved
  • Reward replication-plus: replications that probe boundary conditions or introduce theory tension
  • Slow the manuscript mill: require a preprint readiness checklist for methodological clarity and contribution

What journals, funders, and departments can fix

  • Rebalance metrics: weigh distinct contribution and field-opening datasets over sheer count of outputs
  • Calls for uncertainty: earmark funding for questions with poor data coverage and unclear priors
  • Review design: include "topic crowding" and "method redundancy" checks in editorial workflows
  • Promotion criteria: credit negative results, dataset releases, and cross-field synthesis

Will this trend last?

Some argue better integration of data, compute, and hypothesis generation could broaden discovery. That may help, but the deeper lever is incentives. As Evans notes, the architecture is less the issue than what we reward.

If we prize speed, we'll get speed. If we prize exploration, AI can help us ask-and answer-questions we've been avoiding.

Further reading

  • Nature for studies on AI's role in scientific discovery
  • AlphaFold for an example of AI on tractable, data-rich problems

Skill up with intent

If you're adopting AI in your research, learn tools that expand search, not just accelerate output. Curate a stack for anomaly detection, cross-disciplinary retrieval, and active learning.
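As a toy illustration of the anomaly-detection end of that stack (a hypothetical sketch with made-up readings, not a specific tool recommendation), even a z-score filter over replicate measurements can surface the odd data point worth a second look rather than quiet exclusion:

```python
import statistics

def flag_outliers(values, z_thresh=2.0):
    """Return values more than z_thresh standard deviations from the
    mean. A deliberately low threshold suits small samples, where a
    single extreme point inflates the standard deviation itself."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / sd > z_thresh]

# Hypothetical replicate measurements with one surprising reading.
readings = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.1, 25.0]
print(flag_outliers(readings))  # the 25.0 reading stands out
```

Real pipelines would use robust statistics (median absolute deviation) or model-based detectors, but the habit is the same: route anomalies to attention instead of to the discard pile.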

Complete AI Training - courses by job can help you build a focused toolkit for scientific work.

