AI-generated explicit images at Royal School Armagh: what school leaders need to know and do
Royal School Armagh has identified pupils alleged to be behind the creation and sharing of AI-generated explicit images of classmates. The principal, Graham Montgomery, told parents the school believes it has identified all those whose images were manipulated and those allegedly responsible. He called the actions "shocking and without excuse," and confirmed the number of those targeted is in single figures.
The Police Service of Northern Ireland (PSNI) has opened an investigation and is engaging with the school and parents. Montgomery asked the wider community to avoid speculation, especially on social media, and emphasized the safety and well-being of pupils.
Why this matters for educators
AI tools make it easy to fabricate convincing explicit images, and those images can spread and cause serious harm within hours. The victims are dealing with a deeply violating experience, and the pupils who created and shared the images are also minors who need firm accountability and structured support.
This is a safeguarding, pastoral, legal, and culture issue all at once. Schools need a clear incident plan, staff training, and consistent messaging that reinforces consent, respect, and digital responsibility.
Immediate response checklist (for any similar incident)
- Stop the spread: instruct students and staff not to view, save, or share any images. Make this explicit in writing.
- Contact police promptly and record a clear chronology of events, actions, and decisions.
- Preserve evidence safely (timestamps, URLs, device details) without forwarding the material.
- Notify parents/guardians of affected pupils early with clear points of contact and support routes.
- Arrange safeguarding support for victims (counsellor, DSL check-ins, flexible adjustments to timetable).
- Begin an internal investigation in parallel with police guidance; apply behaviour policies consistently.
- Request platform takedowns where relevant and feasible; document outcomes.
- Brief staff with agreed talking points to avoid rumor and reduce harm.
Communication that reduces harm
- Keep it factual and measured: confirm that an incident occurred, that an investigation is under way, and that support is in place.
- Ask the community to refrain from sharing speculation or content online.
- Make reporting routes obvious: DSL, year heads, anonymous forms, and out-of-hours contacts.
- Remind students that sharing or possessing such images is illegal, even if AI-generated.
Support for victims and those responsible
Victims may experience shock, anxiety, and a sense of violation. Prioritize privacy, control over information, and access to trusted adults. Keep follow-up regular and discreet.
Students who created or shared the images need firm boundaries and education. As child protection expert Jim Gamble noted, they are still children; accountability and learning must sit together. Consider restorative processes alongside sanctions where appropriate and safe.
Prevention moves you can implement this term
- Update the Acceptable Use Policy to include synthetic media, consent, and image manipulation.
- Run age-appropriate sessions on consent, digital footprints, and how AI image tools work.
- Tighten device use during the school day and review any image-editing or file-sharing permissions.
- Introduce clear reporting channels for students and parents (including anonymous options).
- Train staff so they can spot concerns, respond appropriately, and signpost support confidently.
Legal context (UK/Northern Ireland)
Creating, possessing, or distributing indecent images of children is a serious offence, and the law covers "pseudo-photographs" and AI-generated material. Sharing intimate images without consent, including deepfakes, is also a criminal offence.
- CPS guidance on indecent images of children
- GOV.UK: New offences for sharing intimate images without consent
Practical steps you can take this week
- Hold short tutor-time briefings: consent, reporting routes, and consequences of sharing harmful content.
- Issue a parent update with clear FAQs and how to talk with their child at home.
- Audit your response plan with the DSL and SLT; run a tabletop exercise.
- Review filtering, monitoring, and photo/video settings on school-managed devices.
- Schedule staff CPD on AI literacy and safeguarding implications of synthetic media.
Staff development on AI and online safety
Most schools need a baseline of AI literacy so staff can teach, guide, and intervene with confidence. If you're building a CPD pathway, consider structured modules on AI image tools, consent education, and classroom practice.
For broader AI upskilling across roles, you can explore curated options here: AI courses by job.
What the school has said so far
The school first became aware of circulating images on Thursday 8 January and referred the matter to the appropriate authorities. Montgomery stressed procedures are in place for pupil safety and well-being.
He noted that reports of "potentially dozens" of victims were inaccurate, stating the affected number is in single figures, though that does not lessen the impact. He urged everyone to be mindful that they are dealing with teenagers and to avoid public commentary that could increase harm.
The takeaway for school leaders
Move quickly, communicate calmly, and keep students' dignity at the center. Treat this as both a safeguarding incident and a curriculum moment: clear rules, real support, and education that builds wiser choices next time.