Head teacher pleads guilty to making AI child abuse images after undercover sting

Ex-head admits making and sharing AI-generated indecent images of children; sentencing on 22 December. Schools must tighten policies, train staff, and ensure fast reporting.

Published on: Nov 22, 2025

Head Teacher Admits Making Indecent AI Images of Children: What School Leaders Need to Do Now

A former head teacher, Dean Juric of St Robert of Newminster School in Washington, Sunderland, has pleaded guilty to making and distributing indecent AI-generated images of children. The court heard he created "pseudo" images and videos depicting girls as young as 12. He was arrested in January and found with 380 illegal images, including Category A material. He will be sentenced at Newcastle Crown Court on 22 December.

There is no suggestion any of the children in the images were pupils at his school or resembled any pupil. The Bishop Wilkinson Catholic Education Trust suspended him immediately and confirmed he no longer works in the trust. Prosecutors said his position as a head teacher was an aggravating factor. He admitted three counts of making pseudo-indecent photographs of children and one count of distributing indecent photographs of children.

Why this matters for educators

This case is a stark reminder: AI tools can be misused to generate illegal, harmful content that targets children. Titles and trust do not reduce risk; systems and culture do. Every school needs clear policies, trained staff, and fast reporting routes that reflect current technology risks.

Immediate actions for school and trust leaders

  • Update safeguarding and online safety policies: Explicitly cover AI-generated sexual content, deepfakes, and synthetic child abuse imagery. Make the legal position and disciplinary consequences unambiguous.
  • Strengthen staff code of conduct: Include off-duty behavior, use of AI tools, image creation/editing, and private messaging apps. Reiterate mandatory reporting duties and whistleblowing protections.
  • Tighten digital controls: Block high-risk apps on school networks and devices where lawful. Review BYOD rules, logging, and alerts for image creation/manipulation tools.
  • Incident response playbook: Define the steps for the designated safeguarding lead (DSL), when to contact the police or the local authority designated officer (LADO), how to preserve evidence, and how and when to communicate with families and staff.
  • Reporting channels: Provide confidential, well-publicized routes for staff and students. Make it simple to raise concerns early.
  • Safer recruitment and checks: Ensure enhanced checks are current, supervision is consistent, and leaders model zero tolerance.
  • Student education: Teach consent, image-based abuse, and AI fakery in age-appropriate terms. Emphasize how to seek help and what not to share.
  • Governance oversight: Boards and trusts should review safeguarding audits, AI risk registers, and training completion rates termly.

Build staff confidence with AI

Most staff have questions about where AI helps and where it crosses the line. Provide practical training that covers safe use, policy boundaries, and red flags. Make sure everyone knows who to contact the moment something feels off.

Talking to your community

Be clear, factual, and supportive. Share what the school is doing to protect children, how incidents are handled, and where families can seek help. Keep lines open for questions and report back on improvements.

The bottom line

This case isn't about new tech being "good" or "bad." It's about leadership, safeguards, and fast action. Review your systems now, train your people, and make reporting effortless, so protection doesn't depend on trust alone.

