Protecting Teacher Agency as AI Transforms Learning: De Montfort University Professor Calls for Longitudinal Research and Safeguards
At UNESCO's Digital Learning Week, DMU's Dr Sarah Younie called for stronger, long-term evidence on AI's effects in classrooms. She urged safeguards to protect teacher agency.

DMU professor urges deeper research into AI's impact on learning
At UNESCO's Digital Learning Week in Paris, Dr Sarah Younie, Professor of Education Innovation at De Montfort University (UK), called for stronger evidence on how AI is affecting teaching and learning. Co-presenting a new position paper from the International Teacher Task Force (ITTF) - "Promoting and Protecting Teacher Agency in the Age of Artificial Intelligence" - she stressed the need to protect teacher agency while pursuing clear educational gains.
The message was direct: AI is changing classroom practice, and decision-makers need better data, especially from long-term studies, to guide policy, procurement, and professional development.
Why this matters for educators
AI can support planning, feedback, and differentiation, but it also raises questions about teacher roles, assessment integrity, and students' thinking. Younie argued that leaders should build protections for teachers and students while testing AI's practical benefits in real classrooms.
Key questions raised
- Could AI erode students' practice of independent thinking?
- Might it dehumanise education by weakening teacher-student relationships?
- Are we moving toward teacherless schools - and should we?
What the ITTF paper calls for
The paper states that AI brings both significant opportunities and serious responsibilities for education systems worldwide. It encourages leaders and policymakers to put safeguards in place so AI is used ethically and effectively.
- Protect teacher agency: involve educators in the design, deployment, and evaluation of AI tools.
- Set clear governance: define accountability, auditability, and acceptable use in policy.
- Prioritise data privacy and security: minimise student data collection and enforce strict access controls.
- Demand transparency: require explainability for AI recommendations that affect teaching or grading.
- Address bias and fairness: run routine bias checks and impact assessments for different student groups.
- Invest in professional learning: provide the time, training, and guidance teachers need to use AI responsibly.
- Pilot before scale: run controlled trials with clear success criteria and classroom evidence.
Research priorities highlighted
- Longitudinal effects on learning outcomes, critical thinking, and academic integrity.
- Impacts on teacher workload, wellbeing, and professional judgement.
- Equity: who benefits, who is left behind, and how to close gaps.
- Effectiveness of AI tutoring, feedback tools, and adaptive systems across subjects and phases.
- Use in early years and special education, where risks and needs can differ.
What you can do this term
- Form an AI policy working group with teachers, IT, safeguarding leads, and student voice.
- Audit current tools: purpose, data flows, costs, evidence, and alignment with curriculum goals.
- Create classroom guardrails: acceptable use, citation norms, and assessment protocols.
- Pilot a small set of AI tools with clear success measures; collect baseline and follow-up data.
- Run practical PD focused on lesson planning, feedback workflows, and assessment design.
- Engage parents and students with transparent guidance on benefits, limits, and privacy.