How scientists' AI commentaries are resetting science education
A new study analyzed how experts talk about AI in leading journals and found a clear shift: scientific practice is still the anchor, but social values now sit right beside it. For anyone building courses, labs, or research training programs, this changes what gets taught and how work is evaluated.
The Project AI-Vision team, led by Professor Sibel Erduran with Ho-Yin Chan and Ivan Au, reviewed 151 expert commentaries published in Nature and Science between 2021 and 2024. Funded by the John Fell Fund, the analysis is published in Research in Science Education. The takeaway: AI isn't just a technical topic; it's entangled with ethics, governance, and the institutions that produce knowledge.
The turning point: late 2022
After the release of ChatGPT, attention moved from strictly epistemic concerns (data quality, methods, and reasoning) toward social and institutional questions. Ethics, governance, and the political structures that influence how scientific claims are produced gained prominence in the discourse. That shift matters for what ends up in our syllabi, seminars, and lab protocols.
Scientific practices remain central, but context now matters
Commentaries continue to emphasize scientific practices: modeling, measurement, interpretation, and peer review. Around those practices, though, social values increasingly shape how AI is used in research and what it contributes to society. In other words, the "how" of science cannot be separated from the "who," "why," and "under what rules."
As Professor Sibel Erduran notes, "Embracing a more socially embedded understanding of the nature of science could help future scientists and citizens critically engage with AI and other emerging technologies."
What this means for your lab, department, or course
- Integrate AI across methods training: data provenance checks, model selection rationales, uncertainty reporting, and failure analysis.
- Add explicit modules on ethics, governance, and institutional incentives. Treat them as core, not optional.
- Require documentation on dataset sourcing, licensing, consent, and bias audits for any AI-supported study or student project.
- Adopt clear attribution and disclosure: where AI systems are used, how results were validated, and who is accountable.
- Update peer review rubrics to include reproducibility with AI tools, computational traceability, and policy compliance.
- Create an AI risk register for labs and courses: privacy, security, misuse scenarios, and mitigation plans (a minimal sketch follows this list).
- Connect research outputs to social impact: equity implications, stakeholder effects, and public trust considerations.
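For the risk register item above, here is a minimal sketch in Python of what one entry and a simple pre-work review check could look like. The field names, categories, and example values are illustrative assumptions, not a schema from the study or from any journal.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row in a lab or course AI risk register (illustrative fields only)."""
    risk: str          # short description of the hazard
    category: str      # e.g. privacy, security, misuse, bias
    likelihood: str    # low | medium | high
    impact: str        # low | medium | high
    mitigation: str    # planned control or safeguard
    owner: str         # person accountable for the mitigation

@dataclass
class RiskRegister:
    project: str
    entries: list = field(default_factory=list)

    def high_impact(self):
        """Entries that should be reviewed before the work proceeds."""
        return [e for e in self.entries if e.impact == "high"]

# Example: one hypothetical entry for a course project
register = RiskRegister(project="AI-assisted literature screening")
register.entries.append(RiskEntry(
    risk="Prompts may leak unpublished data to a third-party API",
    category="privacy",
    likelihood="medium",
    impact="high",
    mitigation="Use a locally hosted model and strip identifiers before prompting",
    owner="lab data steward",
))
for entry in register.high_impact():
    print(f"[{entry.category}] {entry.risk} -> {entry.mitigation}")
```

A spreadsheet or shared document works just as well; the point is that every AI-supported project records its risks, mitigations, and an accountable owner in one place.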
Questions to build into seminars and peer review
- What assumptions are embedded in the model, dataset, and prompt? Who benefits and who is excluded?
- How would conclusions change with alternate datasets, priors, or model classes?
- Which ethical guidelines and governance policies apply here, and where are the gaps?
- Is the work reproducible without proprietary systems? If not, what's the plan for transparency?
- What institutional incentives (publishing, funding, policy) influence the claims made?
- How are uncertainties communicated to non-experts and decision-makers?
Where the conversation is unfolding
Keep an eye on expert commentaries in Nature and Science. They're a fast signal for what skills and safeguards researchers will need next.
The full analysis appears in Research in Science Education.
If you're updating curricula or lab training
- Map your current program against the study's two pillars: scientific practices and social/institutional context.
- Pilot small changes first, e.g., a reproducibility-with-AI assignment or an ethics-and-governance case review in journal club.
- Standardize disclosures for AI use across theses, manuscripts, and grant proposals (a template sketch follows this list).
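As a starting point for the disclosure item above, here is a small, hypothetical Python sketch of a structured AI-use disclosure that a thesis or manuscript checklist could generate. The fields and wording are illustrative assumptions, not a journal or funder requirement.

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    tool: str          # system used, e.g. a language model or ML pipeline
    purpose: str       # what the tool was used for
    validation: str    # how AI-assisted output was checked
    accountable: str   # who answers for the final content

    def statement(self) -> str:
        """Render a one-paragraph disclosure for inclusion in a document."""
        return (
            f"AI use: {self.tool} was used for {self.purpose}. "
            f"Output was validated by {self.validation}. "
            f"{self.accountable} is accountable for the final content."
        )

# Example usage with hypothetical values
print(AIUseDisclosure(
    tool="a general-purpose language model",
    purpose="first-pass copy-editing of the methods section",
    validation="line-by-line comparison with the original draft",
    accountable="The first author",
).statement())
```

Capturing the same four answers (tool, purpose, validation, accountability) in every disclosure makes them comparable across theses, manuscripts, and proposals.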
If your team needs structured upskilling on AI literacy and workflows, explore role-based options at Complete AI Training.