How AI Is Changing Science: New Tools, Leaner Teams, and Unanswered Questions

AI is now central to research, and a Manchester team is mapping its role across fields and labs. Early findings point to leaner teams, faster iteration cycles, and a growing need for disclosure rules and audit trails.

Published on: Dec 12, 2025

AI adoption is changing how scientists work, collaborate, and publish

Artificial intelligence has moved from a helpful add-on to a core part of scientific practice. A research effort at Manchester is mapping how this shift plays out across disciplines, labs, and publication pipelines.

The project blends large-scale bibliometric analysis with case studies in the U.K. and abroad. The aim is simple: show where AI accelerates science, where risks appear, and how teams can respond with clear norms and better tooling.

What the Manchester project is doing

Using databases such as OpenAlex, the team is tracking millions of publications to spot where and how AI is applied. They also examine how generative tools like ChatGPT are spreading across fields and research economies.

Alongside the data, they're running on-the-ground case studies to capture real workflows: coding, literature review, experiment planning, and analysis. This mixed approach links macro trends to what actually happens in labs.

Early signals from the data

The U.S. and China lead on publication volume in generative AI. But smaller research economies are adopting fast and delivering meaningful outputs, suggesting that opportunity doesn't strictly follow size.

Teams working on generative AI tend to be slightly smaller than those in other AI areas. That points to a different style of collaboration: leaner teams, tighter iteration cycles, and more tool-driven productivity.

Speed vs. accountability

AI can summarize literature, write starter code, and refactor analysis pipelines. That saves time, but it raises questions about responsibility, governance, and where to draw the line between model output and a researcher's judgment.

The Manchester team highlights the need for clear authorship policies, disclosure of AI assistance, and audit trails for data and code. These guardrails help reviewers, editors, and future readers trust the record.

What the researchers say

According to Professor Cornelia Lawson, the project examines how AI influences discovery and how to use it responsibly, creatively, and equitably for researchers and society. Professor Philip Shapira notes that AI is reframing science: changing skill demands, influencing collaboration, and transforming opportunities, while its effects on novelty and creativity remain uncertain.

Practical steps for research teams

  • Set lab-wide AI usage norms: allowed tools, disclosure rules, and red lines (e.g., no synthetic data in results without clear labeling).
  • Keep provenance: log prompts, tool versions, and outputs tied to code commits and datasets (see the sketch after this list).
  • Use model-assisted coding as scaffolding, not a substitute; require human review and tests for any generated code.
  • For literature work, combine AI summaries with manual checks of key sources; cite original papers, not summaries.
  • Pilot small, cross-skilled teams for generative AI projects; measure cycle time, defect rates, and novelty of outputs.
  • Engage editors early on disclosure expectations for AI-assisted writing and figures.
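
To illustrate the provenance point above, here is a minimal sketch of an AI-usage log, assuming a Git-based workflow. The file name, the log_ai_use helper, and the record fields are illustrative assumptions, not part of the Manchester project's tooling.

```python
# provenance_log.py - minimal sketch for logging AI-assisted steps.
# Assumes a Git repository; all names and fields here are illustrative.
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_provenance.jsonl")


def current_commit() -> str:
    """Return the current Git commit hash, or 'unknown' outside a repo."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"


def log_ai_use(tool: str, version: str, prompt: str, output_ref: str) -> None:
    """Append one AI-usage record (tool, prompt, output location) to the log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "commit": current_commit(),
        "tool": tool,
        "tool_version": version,
        "prompt": prompt,
        "output_ref": output_ref,  # e.g. path to generated code or summary
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_ai_use(
        tool="chat-assistant",
        version="2025-12",
        prompt="Refactor the plotting script to use a shared style sheet",
        output_ref="analysis/plotting.py",
    )
```

Appending one JSON line per AI-assisted step keeps the log easy to diff and to tie back to specific commits during review.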

Why this matters for labs and funders

AI changes skill profiles: prompting, data engineering, evaluation, and tool governance now sit alongside statistics and domain expertise. Hiring and training need to reflect that mix.

Evaluation criteria should evolve too. If smaller teams can deliver stronger outputs with AI, funding models and authorship norms may need updates to fairly recognize contributions.

Publications and data sources

The project reports include an arXiv preprint: Generative AI in Science: Applications, Challenges, and Emerging Questions (DOI: 10.48550/arxiv.2507.08310). A related paper appears in Scientometrics: Rise of Generative Artificial Intelligence in Science (DOI: 10.1007/s11192-025-05413-z).

Next steps for researchers

  • Run a 60-90 day lab pilot: define 2-3 AI use cases, set metrics (time saved, quality, rework; see the sketch after this list), and review outcomes in a colloquium.
  • Create an "AI methods" addendum in your lab's SOPs covering disclosure, reproducibility, and validation.
  • Share lessons learned with department or society working groups to align on field-level norms.
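
To make the pilot metrics concrete, here is a minimal sketch for recording and summarizing time saved and rework per use case. The PilotRecord fields and the summary statistics are illustrative assumptions, not a prescribed format.

```python
# pilot_metrics.py - minimal sketch for tracking AI pilot metrics.
# Field names (use_case, hours_baseline, hours_with_ai, rework) are illustrative.
from dataclasses import dataclass
from statistics import mean


@dataclass
class PilotRecord:
    use_case: str          # e.g. "literature summary", "starter analysis code"
    hours_baseline: float  # estimated time without AI assistance
    hours_with_ai: float   # actual time with AI assistance
    rework: bool           # did the output need substantial correction?


def summarize(records: list[PilotRecord]) -> dict:
    """Return average hours saved and rework rate across pilot records."""
    saved = [r.hours_baseline - r.hours_with_ai for r in records]
    return {
        "n": len(records),
        "mean_hours_saved": mean(saved),
        "rework_rate": sum(r.rework for r in records) / len(records),
    }


if __name__ == "__main__":
    demo = [
        PilotRecord("literature summary", 6.0, 2.5, rework=False),
        PilotRecord("starter analysis code", 8.0, 4.0, rework=True),
    ]
    print(summarize(demo))
```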

Skill-building resources

If you're formalizing training for your team, browse role-specific AI course paths here: AI courses by job. Curate a short list for your lab and tie completion to your pilot plan.

