How will AI advance science?
Manchester, UK, 11 December 2025 | 11:45 Europe/London
AI isn't just another instrument on the bench. It's changing how research gets planned, executed, and shared. A Manchester team is mapping that shift with two lenses: large-scale data on publications and fieldwork in active labs across the UK and beyond.
What the data says
The team is analysing millions of papers in resources such as OpenAlex to track where and how AI shows up in research. Generative tools (like conversational assistants and code copilots) are moving across fields, fast.
The US and China still lead in publication volume. But smaller research economies are adopting generative AI and delivering strong outputs. One pattern stands out: teams publishing on generative AI tend to be slightly smaller than those in other AI areas, hinting at a different collaboration model.
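If you want to probe the same source yourself, OpenAlex exposes a free public API. The sketch below is illustrative, not the team's pipeline: it assumes the public /works endpoint with its search, group_by, and mailto parameters, and counts matching works per publication year. The search phrase and contact email are placeholders.

```python
# Count OpenAlex works per year matching a search phrase.
# Sketch only: assumes OpenAlex's public /works endpoint and its
# `search`, `group_by`, and `mailto` parameters; the query phrase
# and email below are placeholders.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"

def works_per_year(query: str, mailto: str) -> dict[str, int]:
    """Return {publication_year: count} for works matching `query`."""
    params = {
        "search": query,
        "group_by": "publication_year",
        "mailto": mailto,  # identifies you for OpenAlex's polite pool
    }
    resp = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    resp.raise_for_status()
    return {g["key"]: g["count"] for g in resp.json()["group_by"]}

if __name__ == "__main__":
    counts = works_per_year("generative artificial intelligence", "you@example.org")
    for year, n in sorted(counts.items()):
        print(year, n)
```

Plotted by year, counts like these give a rough adoption curve; the Manchester team works at far larger scale, with field and team-size breakdowns on top.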
Opportunity, risk, and the line between human and machine
AI can read and summarise literature, and draft code, in minutes. That speed comes with questions: who is responsible for errors, how do we govern use, and where should human judgment sit in the loop?
The project focuses on practical answers: how AI influences discovery, and how researchers can use it responsibly, creatively, and fairly. The team also flags an open question: what does AI mean for novelty and creativity in science? The evidence is still incomplete, and they are testing it.
What this means for PIs, RDM leads, and research managers
- Set clear AI-use rules for your group: disclosure in methods, versioning of models, prompt provenance, and consent checks for training data.
- Treat AI output as a draft, not ground truth. Require human verification, reproducible prompts, and logs stored with code and data (see the logging sketch after this list).
- Audit AI-generated code. Add unit tests, security checks, and license reviews. Validate benchmarks and report failure cases.
- Update authorship and contribution statements. Be explicit about AI assistance and who validated what.
- Protect sensitive data. Define allowed tools, local vs. cloud compute, and review flows that align with ethics and data-sharing policies.
- Invest in skills: prompt practice, data wrangling, model evaluation, and error analysis. Curate internal exemplars and short playbooks.
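On the logging point above, here is a minimal sketch of an append-only prompt-provenance log kept next to code and data. The field names (model, prompt, output_sha256, verified_by) and the file name ai_prompt_log.jsonl are illustrative choices, not a standard; adapt them to your group's policy.

```python
# Append one AI interaction per line to a JSONL log stored with the project.
# Sketch only: the schema and file name are illustrative, not a standard.
import datetime
import hashlib
import json
import pathlib

LOG_PATH = pathlib.Path("ai_prompt_log.jsonl")

def log_ai_use(model: str, prompt: str, output: str, verified_by: str) -> None:
    """Record the model version, exact prompt, an output hash, and the human checker."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,              # exact model/version string, per the versioning bullet
        "prompt": prompt,            # reproducible prompt, verbatim
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "verified_by": verified_by,  # the person who checked the output
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use(
    model="example-model-2025-01",
    prompt="Summarise the cleaning steps applied to dataset v3.",
    output="...",  # the text the tool actually returned
    verified_by="j.smith",
)
```

Hashing the output keeps the log compact while still proving which text was verified; if your policy requires it, store the full outputs alongside the data.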
Methods: data at scale plus lab-level realities
The study blends bibliometrics with case studies from real labs. That pairing reveals both the broad patterns and how they play out on the ground: policies, workflows, and culture included.
Meet the researchers
Cornelia Lawson is Professor of Economics of Science and Innovation at the Manchester Institute of Innovation Research and Alliance Manchester Business School. She studies researcher careers, collaboration, knowledge transfer, and AI's effects on science.
Philip Shapira is Professor of Innovation Management and Policy at Alliance Manchester Business School and a Turing Fellow at The Alan Turing Institute. He examines emerging technologies, governance, and innovation policy, including AI in science, manufacturing, and public values.
Liangping Ding is a research associate and UKRI AI Metascience Fellow at the Manchester Institute of Innovation Research. She is analysing how scientists use AI tools and how this affects productivity, novelty, and careers.
Julie Jebsen is a research associate at the Manchester Institute of Innovation Research. She conducts field research on AI use inside scientific labs.
Read the papers
- Rise of Generative Artificial Intelligence in Science.
- Generative AI in Science: Applications, Challenges, and Emerging Questions.
- Tracking developments in artificial intelligence research: constructing and applying a new search strategy.