AI Surge Linked to Decline in Research Quality and Scientific Rigor, Study Warns

A new study suggests that AI use may be lowering scientific rigor in health research, with many papers relying on simplistic analyses or selectively chosen data subsets. Experts urge transparency and stronger peer review to maintain quality.

Published on: May 28, 2025

AI’s Impact on Scientific Research Quality: A Growing Concern

A recent study led by the University of Surrey, published in PLOS Biology, highlights a worrying trend: artificial intelligence (AI) may be contributing to a decline in the scientific rigor of published research.

The research focused on papers using the National Health and Nutrition Examination Survey (NHANES), a widely used American government dataset. NHANES helps researchers examine links between health conditions, lifestyle, and clinical outcomes. Between 2014 and 2021, only about four association-based NHANES studies were published annually. However, this number surged dramatically after 2021 — reaching 33 in 2022, 82 in 2023, and 190 in 2024.
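For a sense of scale, a short script using only the counts reported in the study makes the jump explicit (the pre-2022 figure is the approximate annual average):

```python
# Annual counts of association-based NHANES papers, as reported in the study.
baseline = 4            # approximate papers per year, 2014-2021
counts = {2022: 33, 2023: 82, 2024: 190}

for year, n in counts.items():
    print(f"{year}: {n} papers, roughly {n / baseline:.0f}x the pre-2022 baseline")
```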

What’s Driving the Surge?

Many recent papers have employed simplistic analytical methods, often examining a single variable at a time while ignoring the complex, multi-factor nature of health outcomes. Even more concerning, some studies appear to cherry-pick narrow data subsets without clear justification. This raises red flags about poor research practices such as data dredging, or shifting the research question after seeing the results (sometimes called HARKing).
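To make the single-variable concern concrete, here is a minimal sketch on synthetic data (all variable names are hypothetical) showing how a one-predictor model can report a spurious association that vanishes once a confounder is adjusted for:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000

# A confounder (here, age) drives both the exposure and the outcome;
# the exposure has no direct effect on the outcome at all.
age = rng.normal(50, 10, n)
exposure = 0.05 * age + rng.normal(0, 1, n)
outcome = 0.10 * age + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "exposure": exposure, "outcome": outcome})

# Single-variable model: reports a sizeable (spurious) exposure coefficient.
naive = smf.ols("outcome ~ exposure", data=df).fit()
# Multivariable model: adjusting for the confounder shrinks it toward zero.
adjusted = smf.ols("outcome ~ exposure + age", data=df).fit()

print(f"naive exposure coefficient:    {naive.params['exposure']:.3f}")
print(f"adjusted exposure coefficient: {adjusted.params['exposure']:.3f}")
```

Real health outcomes involve many such interlocking factors, which is why single-variable analyses of NHANES data are a warning sign.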

One co-author from the University of Surrey described this phenomenon as “science fiction” masquerading as science fact. The ease of access to datasets through application programming interfaces (APIs), coupled with AI tools like large language models, has led to an overwhelming volume of submissions. This overload is straining journals and peer reviewers, reducing their ability to critically assess the quality of research.
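To show how low the barrier to entry is, here is a brief sketch of pulling one NHANES file with pandas; the URL follows the public NHANES file layout at the time of writing and should be verified rather than taken as a documented API:

```python
import pandas as pd

# Demographics file from the 2017-2018 NHANES cycle (assumed URL; check before use).
URL = "https://wwwn.cdc.gov/Nchs/Nhanes/2017-2018/DEMO_J.XPT"

# NHANES distributes data as SAS transport (XPT) files, which pandas reads directly.
demo = pd.read_sas(URL, format="xport")

print(demo.shape)              # rows x columns in the downloaded table
print(list(demo.columns[:5]))  # first few variable names
```

A few lines like these, combined with an LLM drafting the surrounding text, are enough to produce a formulaic association paper, which is exactly the flood of submissions the authors describe.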

Practical Solutions to Protect Scientific Integrity

To address these challenges, the research team recommends concrete steps for journals, researchers, and data providers:

  • Full Dataset Use: Researchers should use the entire available dataset unless there is a clear, well-explained reason to limit their scope.
  • Transparency: Authors must clearly state which parts of the data were analyzed, the time periods covered, and the groups studied.
  • Enhanced Peer Review: Journals should include reviewers with statistical expertise and implement early desk rejection to filter out formulaic or low-value papers.
  • Tracking Data Usage: Data providers could assign unique application IDs to monitor how open datasets are employed, a practice already in place for some UK health data platforms (a rough sketch of this idea follows this list).
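As an illustration of the application-ID idea in the last bullet, a data provider's tracking layer might look something like the following sketch (class and field names are hypothetical):

```python
import datetime
import uuid

class DatasetAccessLog:
    """Minimal sketch of provider-side usage tracking via application IDs."""

    def __init__(self):
        self.registrations = {}  # application ID -> declared project metadata
        self.events = []         # one record per dataset access

    def register_project(self, applicant: str, stated_purpose: str) -> str:
        """Issue a unique application ID for a declared research project."""
        app_id = str(uuid.uuid4())
        self.registrations[app_id] = {
            "applicant": applicant,
            "stated_purpose": stated_purpose,
        }
        return app_id

    def record_access(self, app_id: str, dataset: str, subset: str) -> None:
        """Log which dataset, and which slice of it, a project touched."""
        if app_id not in self.registrations:
            raise KeyError(f"unknown application ID: {app_id}")
        self.events.append({
            "app_id": app_id,
            "dataset": dataset,
            "subset": subset,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

# A journal or auditor could later ask what data a paper's ID actually used.
log = DatasetAccessLog()
app_id = log.register_project("example_lab", "diet and sleep outcomes")
log.record_access(app_id, "NHANES", "2017-2018 demographics")
print(log.events)
```

The specific design matters less than the principle: every published analysis can be traced back to a declared project and the exact data slices it used.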

These measures aim to maintain open data access and allow AI use in research, while adding safeguards to ensure quality and transparency.

Balancing Innovation and Quality

The lead author emphasized that the goal is not to restrict AI or data access, but to encourage common-sense checks. Simple practices like transparency in data usage and involving reviewers with the right expertise can help journals detect low-quality work sooner and uphold scientific standards.

Another contributor noted that better guardrails in scientific publishing are essential in the AI era. Their suggestions aim to prevent weak or misleading studies from slipping through without hindering the benefits of AI and open data. With these tools becoming permanent fixtures in research workflows, timely action is crucial to protect trust in published science.
