AI's Impact on Scientific Methods Under Scrutiny
A new project at the University of York will examine how artificial intelligence is extending and changing the abilities scientists use to investigate the world. The core question is blunt and overdue: is AI good for the way science is done?
What's changing in daily research
Science has always advanced with tools that help people see, calculate, and predict more effectively. AI now pushes that further: space agencies use it to compute more efficient flight paths, and some labs use robots driven by AI to design and run experiments with minimal human input. But there's a catch: studies in medicine show that practitioners can get worse at certain tasks when they rely on AI support.
The project and scope
Led by Dr Michael Stuart at the University of York, the team will study AI use across four scientific fields. They will combine interviews, on-site observations, and data analysis to build a clear picture of how researchers deploy AI in day-to-day work. The aim is practical: identify which abilities are strengthened, which are weakened, and which are being reshaped as AI becomes routine.
As Dr Stuart puts it: "When people talk about AI changing the world, they often point to curing disease or tackling climate change. If we want to know whether AI is actually good for us, we need to ask a simpler question: is it good for the way science is done?"
Project focus and key questions
- What problems are scientists using AI to tackle right now?
- How are core skills (problem framing, experimental design, interpretation, error checking) shifting with AI in the loop?
- Where does AI improve capability, and where does it erode it?
- How can a clearer view of scientific abilities inform more effective and ethical training?
The project, "Scientific Progress and Artificial Intelligence: a Capabilities-Based Ethnographic Epistemology", aims to provide the first comprehensive account of scientific ability alongside a detailed snapshot of how AI is actually used in research labs today. "The most useful AI systems aren't replacing scientists," Dr Stuart said. "They help search vast possibility spaces, design better experiments, and make sense of complex data."
Why this matters for working scientists
Many researchers feel pressure to adopt AI without clarity on long-term effects on skills. The upside is clear: faster search, broader design space, richer analysis. The risk is quieter: skill decay if teams outsource too much judgment to models.
Practical steps you can apply now
- Map task ownership: which steps are human-led, AI-assisted, or AI-led, and why.
- Keep human baselines for key tasks to detect drift in accuracy or reasoning quality.
- Run A/B protocols comparing AI-assisted vs. human-only workflows on the same problem (a minimal sketch follows this list).
- Build checkpoints: sanity checks, adversarial prompts, and counterfactuals before adoption.
- Invest in upskilling so teams know when to trust, verify, or override model outputs.
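To make the A/B protocol concrete, here is a minimal Python sketch, one option among many and not part of the York project's methodology: it compares per-task accuracy between a human-only and an AI-assisted run of the same problems using a simple permutation test. The function names and the scores are hypothetical placeholders.

```python
import random
import statistics

def mean_gap(human, assisted):
    """Difference in mean score: positive means the AI-assisted run did better."""
    return statistics.mean(assisted) - statistics.mean(human)

def permutation_test(human, assisted, n_iter=10_000, seed=0):
    """Two-sample permutation test on the mean gap.

    Repeatedly shuffles the pooled scores between the two labels and counts
    how often a random split produces a gap at least as large as observed.
    The returned fraction approximates a two-sided p-value.
    """
    rng = random.Random(seed)
    observed = abs(mean_gap(human, assisted))
    pooled = list(human) + list(assisted)
    n = len(human)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean_gap(pooled[:n], pooled[n:])) >= observed:
            hits += 1
    return hits / n_iter

if __name__ == "__main__":
    # Illustrative per-task accuracy scores on the same ten problems.
    human_only  = [0.78, 0.81, 0.74, 0.80, 0.77, 0.83, 0.76, 0.79, 0.75, 0.82]
    ai_assisted = [0.84, 0.80, 0.82, 0.88, 0.79, 0.85, 0.81, 0.86, 0.83, 0.87]

    gap = mean_gap(human_only, ai_assisted)
    p = permutation_test(human_only, ai_assisted)
    print(f"mean gap (assisted - human): {gap:+.3f}, p ~ {p:.3f}")
```

Rerunning a comparison like this at intervals also doubles as the human-baseline check from the list above: a gap that shrinks over time, or human-only scores that slide, is exactly the drift signal worth catching early.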
Methods and expected outputs
The team will report which abilities are being expanded, which require protection, and where new hybrids are emerging. Findings will inform a broader philosophical study of what counts as scientific ability and how improved abilities relate to standard measures of progress like increased knowledge or practical insight.
Funding
The project is supported by an ERC Consolidator Grant, part of a record €728 million funding round for 2025. Details on the scheme are available via the European Research Council.