£4m Transatlantic Fellowships to Study How AI Transforms Science

A £4m transatlantic programme backs 29 fellows to study how AI changes research, from productivity and careers to evidence standards and accessibility. Each fellow receives up to £250k over two years.

Published on: Oct 10, 2025

International fellowships to explore AI's impact on science

A new £4 million programme is funding 29 early career researchers across the UK, US and Canada to study how artificial intelligence is changing scientific work. The UK Metascience Unit, jointly run by the Department for Science, Innovation and Technology and UK Research and Innovation, leads the initiative with partners in the US and Canada.

The cohort will spend up to two years examining how AI affects productivity, creativity, research topics, and scientific careers. Each fellow receives up to £250,000 to run focused, applied projects.

Why this matters for research teams

AI is moving from tool to co-worker across many fields. This programme looks directly at how that shift changes scientific methods, publication norms, evidence standards, and who gets to participate in research.

Expect practical outputs on responsible use, provenance and attribution, evaluation standards for AI-generated outputs, and strategies to reduce barriers for researchers with learning disabilities. Projects span lab workflows, grant review, software engineering, and sector-specific applications such as agriculture.

How the programme works

  • 29 fellows: 18 in the UK (funded by UKRI), 6 in the US (funded by the Alfred P. Sloan Foundation), 5 in Canada (funded by the Social Sciences and Humanities Research Council, SSHRC).
  • Focus: AI's impact on research practice, scientific progress, and the political economy of research.
  • Funding: Up to £250,000 per project, up to two years.
  • Review: A distributed peer review pilot in which applicants each assessed 8-10 peer proposals, broadening the expertise behind each decision and improving transparency.
  • Community: A fully funded summer school in 2026 to build a transatlantic network in AI metascience.

Research themes you can track in your own work

  • Productivity and creativity: How AI tools alter speed, idea generation, and scholarly diversity.
  • Topic selection: Whether AI shifts what scientists choose to study and how fields evolve.
  • Knowledge and validity: Standards for using, citing, and evaluating AI-generated outputs.
  • Careers and equity: Effects on training, promotion, and inclusion for researchers with disabilities.
  • Sector cases: AI for sustainable agri-food systems and evidence for policymaking.

Examples from the cohort

  • University of Manchester: How AI changes day-to-day research in biomedical science, including productivity, creativity, topic variety, and career paths.
  • University of Reading: AI in agriculture and food science to support more sustainable, productive British farming.
  • Additional projects: Topic choice in science, ethics of publishing with AI tools, and accessibility for researchers with learning disabilities.

What leaders said

The government is backing researchers to test where AI strengthens scientific work and where risks to validity, ethics, and reliability need guardrails. Funders noted that the programme develops early-stage talent and trials new peer review models. Partners stressed that large AI models are global, so the inquiry must be global too.

Full list of funded fellows

UK Metascience Unit-funded fellowships

  • Niall Curry, Manchester Metropolitan University - Developing disciplinarily situated recommendations for responsible generative AI use in the social sciences.
  • Aurelia Sauerbrei, University of Oxford - From human to machine: the ethics of how AI is reshaping data in scientific research.
  • Liangping Ding, The University of Manchester - AI and knowledge production.
  • SJ Bennett, Durham University - Synthetic metascience: tracing AI-generated epistemic shifts in scientific research practice and cultures.
  • Jorge Campos Gonzalez, University of Reading - sustAInable: AI-driven research for sustainable agri-food futures.
  • Batool Almarzouq, The University of Edinburgh - Rethinking how AI reshapes scientific norms, collaboration dynamics and disruptive science in wicked problem research.
  • Cen Cong, Newcastle University - Caught in the current: rethinking research anxiety and creativity in the age of AI.
  • Basil Mahfouz, University College London - Investigating AI's impact on evidence sources for policymaking.
  • Danny Maupin, University of Surrey - Developing an evidence-based framework for reducing epistemic trespassing when using generative AI: a mixed methods study.
  • Fanqi Zeng, University of Oxford - AI in criminology research: mapping methodological shifts and epistemic risks.
  • Chelsea Sawyer, The University of Manchester - Exploring AI's role in enhancing research accessibility and equity for researchers with specific learning disabilities.
  • Megan Crawford, Edinburgh Napier University - The impact of AI on scientific foresight.
  • Zihao Li, University of Glasgow - Removing legal hurdles in copyright and data privacy for AI-driven research: unleashing the potential of AI for science.
  • Youyou Wu, University College London - Is generative AI reinventing the language of science?
  • Emma Gordon, University of Glasgow - Understanding in the age of AI: preserving scientific achievement in AI-assisted research production.
  • Joseph Shingleton, University of Glasgow - Generative AI and the future of research software engineering.
  • Charlotte Collins, University of Cambridge - How humans shape AI for life sciences research.
  • Justyna Bandola-Gill, University of Birmingham - Transforming evidence synthesis: AI and the (r)evolution of the evidence ecosystem.

Alfred P. Sloan Foundation-funded fellowships

  • Mel Andrews, Princeton University - Evaluating the epistemic credentials of AI in science evaluation.
  • Kati Kish Bar-On, Boston University - The shape of intelligence: AI and the changing culture of mathematical knowledge.
  • Gabrielle Benabdallah, University of Washington - Technologies of reading: from print culture to AI-augmented science.
  • Benjamin Santos Genta, New York University - AI, similarity, and the future of systematic reviews.
  • Seyed Mohamad (Moh) Hosseinioun, Northwestern University - Funding the future: AI changes what is science, who does it, and how.
  • Siyu Yao, University of Cincinnati - Understanding the AI revolution in science: an integrated history, philosophy, and metascience approach.

SSHRC-funded fellowships

  • Anas Ramdani, Dalhousie University - Investigating AI's impact on scientific collaboration in environmental research: a metascience perspective.
  • Graham Macdonald, University of the Fraser Valley - Ask ChatPhD: exploring the uses of AI technologies by research trainees and their implications for the political economy of university-based research.
  • Antoine Boudreau LeBlanc, Université Laval - Governing the neural turn in AI: ethical frameworks for foundation models in cognitive science.
  • Maxime Harvey, Institut national de la recherche scientifique - (AI) research infrastructure: a comparative study of AI infrastructures in STEM and the humanities, arts and social sciences.
  • Emadeddin Naghipour, University of Victoria - Between judgment and automation: researchers, AI, and the future of peer review.

Practical steps for labs and research offices

  • Set clear AI use policies for literature review, coding, analysis, writing, and peer review. Define disclosure, verification, and authorship rules.
  • Run pilot audits: pick one workflow (e.g., systematic reviews or data cleaning), measure time saved and error profiles with and without AI tools.
  • Establish provenance checks: document prompts, model versions, parameters, and human verification for any AI-assisted output (see the sketch after this list).
  • Update training plans for PIs, postdocs, and students to cover model limits, bias, privacy, copyright, and data security.
  • Improve accessibility: test AI tools that support researchers with learning disabilities and record what works at each step of the research cycle.
  • Integrate RSE early: coordinate with research software engineers on model selection, reproducibility, and compute budgeting.
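One way to make the provenance step concrete is to keep a lightweight, append-only log of every AI-assisted output. The Python sketch below is a minimal example of that idea, assuming a simple JSONL log file; the field names, the example model identifier, and the file path are illustrative choices, not requirements set by the programme or its funders.

```python
# Minimal provenance record for AI-assisted outputs (illustrative sketch).
# All field names, the model name, and the log path are hypothetical choices,
# not a standard required by the programme or by any funder.
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_provenance_log.jsonl")  # hypothetical location

@dataclass
class ProvenanceRecord:
    workflow: str            # e.g. "literature_review" or "data_cleaning"
    model_name: str          # which model or tool was used
    model_version: str       # version or release date of the model
    parameters: dict         # temperature, max tokens, etc.
    prompt: str              # the prompt or instruction given
    output_sha256: str       # hash of the AI-assisted output text
    verified_by: str         # the human who checked the output
    verification_notes: str  # what was checked and any corrections made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def hash_text(text: str) -> str:
    """Hash the output so the record can be matched to it later."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def log_record(record: ProvenanceRecord, path: Path = LOG_PATH) -> None:
    """Append one record per line to a JSONL log file."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    draft = "AI-assisted summary of screening results..."
    record = ProvenanceRecord(
        workflow="systematic_review_screening",
        model_name="example-llm",  # hypothetical model name
        model_version="2025-01",
        parameters={"temperature": 0.2},
        prompt="Summarise inclusion/exclusion decisions for batch 3.",
        output_sha256=hash_text(draft),
        verified_by="J. Researcher",
        verification_notes="Checked against the screening spreadsheet.",
    )
    log_record(record)
```

A JSONL file keeps each record self-contained and easy to audit later, and the same entries can double as the baseline data for the pilot audits above; teams that already use an electronic lab notebook or data catalogue could store the same fields there instead.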

For teams building skills fast, explore role-focused AI courses and certifications via Complete AI Training - courses by job.