Government trials AI for SEND - earlier help, not a quick fix, warns BPS

The government has launched an AI trial to spot SEND earlier and speed up support. Experts welcome early flags but say this isn't a quick fix: humans must lead and funding must follow.

Published on: Nov 29, 2025

Government backs AI trial to speed up SEND identification - but experts warn against "quick fix" thinking

28 November 2025 - The government has launched the Special Educational Needs in the Accelerator Programme, a research initiative to test new AI tools that could flag children's special educational needs earlier. The aim: faster identification, earlier support, and better outcomes.

Dr Helena Bunn, chair of the British Psychological Society's Division of Education and Child Psychology, welcomed the focus on early intervention, noting that timely identification "has the potential to trigger early support," which prevents problems becoming entrenched. Her message is clear: AI can support the process, but it should assist professionals rather than replace them.

"Although AI can help to an extent," she said, "its capabilities should continue to be treated as assisting the identification of SEND needs and supporting decision making. AI cannot, and will not, solve all the issues which currently face the SEND system, most obviously in provision, where significant gaps are widely acknowledged."

She also stressed that the trial must probe both the capabilities and the limitations of AI and must not be sold as a "quick fix" for financial or staffing pressures. Crucially, development should involve schools, parents and parent groups, and "most notably Educational Psychologists."

Given that "the SEND system is unsustainable in its current form," Dr Bunn called for a long-term, costed plan grounded in human-led identification, with AI used as a tool by professionals. Investment in the educational psychology workforce is essential to meet rising demand.

What this means for education leaders and local authorities

  • Early identification potential: AI screening may help surface concerns sooner, especially where capacity is stretched, but it does not create provision where there is none.
  • Human-led decisions: Use AI outputs as decision support. Final judgments should remain with qualified professionals, particularly Educational Psychologists.
  • Evidence over hype: Demand transparent validation, accuracy across demographics, and clear performance baselines before adoption.
  • Stakeholder involvement: Co-design with schools, parent groups, and SEND professionals to maintain trust and practical relevance.
  • Capacity and funding: Tools won't reduce caseloads without parallel investment in assessment and provision pathways.

Guardrails for responsible AI in SEND

  • Data protection and safety: Ensure lawful basis, data minimisation, DPIAs, and safeguarding alignment. See the ICO's guidance on AI and data protection: Information Commissioner's Office.
  • Fairness and bias: Require audits for demographic performance, false positives/negatives, and error impacts across groups.
  • Transparency: Ask vendors for model documentation, training data sources, known risks, and update policies.
  • Human oversight: Define clear escalation routes, professional sign-off, and documented rationale for decisions.
  • Procurement discipline: Bake in clauses for data usage limits, vendor security, uptime, exit, and independent evaluation.
  • Policy alignment: Ensure consistency with statutory duties and the SEND Code of Practice: Department for Education.
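The fairness audit described above can be sketched in code. This is a minimal illustration, assuming a pilot has recorded, for each child, the AI flag and a later professional judgement; all field names here are hypothetical, and real audits would need larger samples and statistical care:

```python
from collections import defaultdict

def audit_by_group(records):
    """Compute false-positive and false-negative rates per demographic group.

    Each record is a dict with hypothetical keys:
      'group'   - demographic label used for the audit
      'flagged' - True if the AI tool raised a SEND flag
      'need'    - True if a professional later confirmed a need
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for r in records:
        c = counts[r["group"]]
        if r["need"]:
            c["pos"] += 1
            if not r["flagged"]:
                c["fn"] += 1  # missed a confirmed need
        else:
            c["neg"] += 1
            if r["flagged"]:
                c["fp"] += 1  # flagged where no need was confirmed
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }
```

A large gap in either rate between groups is exactly the kind of finding a vendor audit should surface before, not after, adoption.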

How to run a useful pilot this academic year

  • Define the problem: Be precise (e.g., earlier flagging of language needs in Reception).
  • Pick a small, representative cohort: Include diverse settings to test equity and practicality.
  • Set baselines and metrics: Time-to-identification, referral quality, false positives/negatives, staff time saved, and pupil outcomes.
  • Build a multidisciplinary team: SENCOs, Educational Psychologists, DSLs, data protection leads, and parent reps.
  • Secure consent and governance: Clear notices, opt-in/opt-out pathways, DPIA, and safeguarding checks.
  • Plan workflows: How AI flags are reviewed, who signs off, and how notes enter MIS/CPOMS.
  • Train staff: Short, role-specific training on interpreting outputs and limits of the tool.
  • Decide exit criteria: What results justify scaling, iterating, or stopping.
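The baseline metrics above can be summarised with a short script. This is a sketch under assumed data, not a reference implementation: the case fields (concern date, identification date, AI flag, professional confirmation) are hypothetical stand-ins for whatever a school's MIS actually records:

```python
from datetime import date
from statistics import median

def pilot_metrics(cases):
    """Summarise baseline metrics for a SEND identification pilot.

    Each case is a dict with hypothetical keys:
      'concern_date'    - date a concern was first raised
      'identified_date' - date a need was formally identified (or None)
      'ai_flagged'      - True if the AI tool flagged the child
      'confirmed'       - True if professionals confirmed a need
    """
    # Time-to-identification, for cases that reached identification
    days = [
        (c["identified_date"] - c["concern_date"]).days
        for c in cases
        if c["identified_date"] is not None
    ]
    # Confusion counts against professional judgement
    tp = sum(1 for c in cases if c["ai_flagged"] and c["confirmed"])
    fp = sum(1 for c in cases if c["ai_flagged"] and not c["confirmed"])
    fn = sum(1 for c in cases if not c["ai_flagged"] and c["confirmed"])
    return {
        "median_days_to_identification": median(days) if days else None,
        "true_positives": tp,
        "false_positives": fp,
        "false_negatives": fn,
    }
```

Running the same summary before and during the pilot gives the comparison the exit criteria depend on: did median time-to-identification actually fall, and at what false-positive cost?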

Funding and workforce

Without a costed plan and workforce expansion, particularly of Educational Psychologists, AI tools will shift pressures rather than solve them. Leaders should map how any efficiency gains translate into more time for assessments and targeted support, then track whether that actually happens.

What to watch next

  • Government guidance on evidence standards, procurement, and data governance for AI in SEND.
  • Independent evaluation from the Accelerator Programme: accuracy, equity, workload impact, and outcomes.
  • Local frameworks that formalise human oversight, escalation routes, and parent communication.

Upskilling your team

If you're planning structured AI training for education or public sector teams, you can explore role-based options here: Complete AI Training - Courses by Job.
