How AI Is Reshaping Research - and Why Scientists Are Divided
Researchers are adopting AI tools faster than ever, but the technology's rapid integration into science raises questions about reliability, accountability and whether some applications should exist at all.
The shift accelerated after OpenAI released ChatGPT in late 2022. Within two months, the chatbot had 100 million users. Since then, AI has spread across genomics, drug discovery, climate modeling and astrophysics. In November 2025, the Trump administration launched the Genesis Mission, tasking the Department of Energy with creating a platform for tech companies to access federal scientific datasets and develop AI agents for research automation.
But enthusiasm masks deeper concerns. A study of over 1 million preprints and published papers from 2020 to 2024 found that signs of large language model use now appear in roughly 22% of sentences in computer science papers - the highest rate across disciplines. Math papers and Nature Portfolio journals show the lowest adoption, at 8% and 9%.
The iBorderCtrl Precedent
Keeley Crockett, a professor at Manchester Metropolitan University, has lived through what happens when AI deployment outpaces public understanding. She led a team developing automated deception detection for iBorderCtrl, a €5.2 million EU-funded border control project that ran from 2016 to 2019.
The system used neural networks to detect lies through facial analysis. For years, the project met its milestones. Then media coverage shifted. Articles about the project were "just not true," Crockett said, and the public backlash grew so severe that her presence had to be scrubbed from the web.
"It was a nightmare," she said. "I had to be removed from the web everywhere because people wanted to kill me."
The European Commission never adopted iBorderCtrl. Six years later, Crockett sees the same pattern repeating with generative AI - intense focus on productivity gains with little consideration of consequences.
"Everyone's so driven by the hype of using it for productivity and efficiency, they're not really thinking about the consequences," she said. "And because I've lived through the nightmare, that's what I want to try and raise awareness about."
Where AI Works in Research
AI has proven effective for literature screening, where the task is narrowly defined and outcomes are measurable. Andrea Wisenöcker, a research associate at Johannes Kepler University Linz, used the tool ASReview to screen roughly 30,000 studies for a meta-analysis on student learning loss during COVID-19.
After she trained the system by manually marking about 300 studies as relevant or irrelevant, ASReview ranked the remaining studies by predicted relevance. That ranking saved months of screening that would have been impractical to complete by hand.
But Wisenöcker remained skeptical. She implemented safeguards: conducting a small-scale traditional literature search first, manually screening the AI's ranked output, and stopping once she found 100 consecutive irrelevant studies. She had received no formal training in using AI.
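Her stopping rule - halt once a long run of consecutive studies turns out to be irrelevant - can be sketched in a few lines of Python. This is a generic illustration of the heuristic, not ASReview's actual interface; the function and variable names here are invented for the example.

```python
def screen_ranked(ranked_studies, is_relevant, stop_after=100):
    """Screen studies in relevance-ranked order, stopping once
    `stop_after` consecutive studies are judged irrelevant.

    Returns the included studies and how many were screened in total.
    """
    included = []
    consecutive_irrelevant = 0
    screened = 0
    for study in ranked_studies:
        screened += 1
        if is_relevant(study):  # human judgment call on each study
            included.append(study)
            consecutive_irrelevant = 0  # reset the run counter
        else:
            consecutive_irrelevant += 1
            if consecutive_irrelevant >= stop_after:
                break  # ranking has run dry; stop screening
    return included, screened

# Toy example: 500 ranked studies where only the first 40 are relevant.
included, screened = screen_ranked(
    list(range(500)), lambda s: s < 40, stop_after=100
)
```

In this toy run the reviewer screens 140 studies instead of all 500: the 40 relevant ones surface first, and screening stops after the next 100 come back irrelevant. The trade-off is that any relevant study ranked below the cutoff is missed, which is why Wisenöcker paired the rule with a traditional search as a cross-check.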
"I became a bit more critical, not in whether or not it should be used, but how it should be used," she said.
The Peer Review and Publishing Problem
Yaohui Zhang, who received his master's degree from Stanford in 2025, quantified how large language models are appearing in scientific writing. His team analyzed papers on arXiv, bioRxiv and Nature Portfolio journals from 2020 to 2024.
Computer science showed the fastest growth, with LLM-modified content reaching 22% of sentences by 2024, while math papers and Nature Portfolio journals lagged at 8% and 9%.
Journal editors are now defining policies. Springer Nature, which operates over 3,000 journals, allows AI to support human expertise but not replace it. Researchers must disclose AI use except in copy editing. Peer reviewers cannot use AI.
Yann Sweeney, a manuscript editor at Nature, said scientists often use LLMs without sufficient skepticism. "LLMs are not really being trained and fine-tuned to give accurate answers," he said. "It's kind of trained to give possible answers."
Hallucinations - when AI presents inaccurate information as fact - remain unsolved after three years of development. Sweeney opposes using AI for peer review because the technology is not trustworthy enough for a process that depends on expert judgment.
"Once you do that, you just open the floodgates to lots of low-quality reviews, low-quality submissions," he said.
The AI Literacy Gap
Crockett advocates for mandatory AI literacy training across research disciplines. Different roles - grant reviewers, researchers, peer reviewers - need different levels of technical understanding, but all should understand ethics and responsible research practices.
The U.K. is piloting this approach: most universities there now offer AI workshops open to all students, with the goal of equipping people with AI skills before they enter the workforce.
"There has to be different levels of AI literacy programs available to different types of members of the public," Crockett said. "I think there's a bottom-line duty of care that all citizens should have free access to AI literacy."
She also calls for guardrails specific to each role. The challenge is creating courses at the right level for the right context.
Crockett worries that researchers will become dependent on AI systems for synthesis and analysis - tasks that require human expertise. "The art of doing research is to read, learn and synthesize knowledge," she said. "We learn by doing it in our brains."
What Comes Next
AlphaFold 2 demonstrates what AI can accomplish. By predicting protein structures, it solved a 50-year challenge in biology. Its creators won the 2024 Nobel Prize in chemistry.
But Sweeney cautioned against assuming the breakthrough will immediately produce new drugs. Real-world applications remain unclear across most scientific fields. AI is still in early stages.
The fundamental question remains unanswered: How do you audit a technology that's already everywhere?
"The genie is out of the bottle," Crockett said. "Everyone's using it, but how do we put guardrails around it?"
For researchers, the answer likely involves formal training before adoption.