Police investigate AI-generated explicit images at NI grammar school: what school leaders should do now
Police in Northern Ireland are investigating reports that AI-generated explicit images were shared among pupils at the Royal School Armagh, a mixed grammar school of around 800 students.
The school says it reported the matter to police as soon as it became aware. Principal Graham Montgomery said the school is following guidance from education and statutory authorities, adding that "the safety and well-being of all our pupils remains our highest priority."
The PSNI confirmed an investigation is under way. "Local officers are also engaging with the appropriate school authorities and the parents/guardians of the pupils affected," a spokesperson said.
Why this matters for educators
AI now makes it easy to fabricate lifelike images. When those images involve children, the risks move from reputational harm to potential criminal offences and serious safeguarding concerns.
This incident is a reminder: schools need clear processes for image-based abuse involving synthetic media, not just the traditional sharing of real images.
If a similar incident happens at your school
- Treat it as a child protection issue immediately. Inform the Designated Safeguarding Lead and contact police. Follow your local safeguarding procedures and seek advice from your authority.
- Stop the spread. Instruct students to delete and not forward content. Do not ask pupils to share or show you copies.
- Preserve evidence safely. Secure any reports, timestamps, and device details without duplicating the images.
- Support affected pupils and families. Offer pastoral care and counselling, and agree a communication plan with parents/guardians.
- Control communications. Keep a single point of contact for media and community queries. Document every decision and action.
- Avoid "DIY" investigations. Do not confront alleged creators or circulate names. Work with police and your safeguarding partners.
Longer-term safeguards to put in place
- Policy refresh. Update mobile phone, acceptable use, and image-based abuse policies to cover synthetic media and deepfakes.
- Curriculum and assemblies. Teach consent, bystander responsibilities, and the legal risks of creating or sharing explicit content, whether real or AI-generated.
- Staff training. Brief all staff on responding to image-based incidents, evidence handling, and how AI tools can be misused.
- Reporting routes. Make it easy for students to report concerns anonymously and safely. Ensure referrals reach the DSL quickly.
- Technology posture. Use device management and filtering to reduce circulation on school networks. Be cautious with "deepfake detectors"; treat them as signals, not proof.
- Data protection. Record incidents proportionately, restrict access on a need-to-know basis, and follow retention and deletion rules.
Useful resources
- UKCIS: Sharing nudes and semi-nudes (advice for education settings)
- Police Service of Northern Ireland (contact and guidance)
Building staff confidence with AI
Incidents like this highlight the need for practical AI literacy across the whole staffroom: what AI can do, where it goes wrong, and how to respond when it's misused. If you're planning CPD, consider structured options that connect AI basics to safeguarding and policy.
The investigation at Royal School Armagh will run its course. For school leaders, the next right move is clear: tighten process, support students, and make sure your team is ready for the next incident before it lands.