Police investigate alleged AI-generated explicit images at Royal School Armagh
Police are investigating reports that AI-generated explicit images were created and shared among pupils at the Royal School Armagh. The co-educational school, which has around 800 pupils and includes boarders, referred the matter to the authorities as soon as it became aware of the allegations.
Principal Graham Montgomery said the school is working with educational and statutory authorities and that pupil safety remains the top priority. The Police Service of Northern Ireland (PSNI) confirmed an investigation is under way and said local officers are engaging with school leaders and parents or guardians.
Why this matters for educators
AI has lowered the barrier to fabricating convincing images in minutes, often on a smartphone. For schools, that means safeguarding, conduct, and digital citizenship policies need to account for synthetic media and image-based abuse, not just traditional cyberbullying.
Immediate actions for school leaders
- Treat it as a safeguarding and criminal concern: Notify police early, preserve evidence (do not forward images), and activate your safeguarding procedures. Make sure your designated safeguarding lead coordinates the response.
- Stabilize the situation fast: Assign a single incident lead, document timelines, and limit speculation. Keep the circle of access to evidence tight.
- Support affected pupils: Provide pastoral care, discreet supervision, and clear routes to report further harm. Prioritize dignity and minimize secondary exposure.
- Communicate with parents carefully: Share the facts, the steps taken, how to report concerns, and how to talk to children about consent and synthetic media, without amplifying details.
- Work with platforms: Use reporting channels to request takedown of content and escalate to trust-and-safety teams with police reference numbers if available.
- Guide staff: Issue a short briefing on what to do if material is seen, how to log incidents, and how to respond to student questions.
- Reinforce curriculum touchpoints: Address consent, image-based abuse, misinformation, and AI-generated content in PSHE/RSHE and assemblies.
- Tighten tech controls: Review filtering, device policies, screen capture settings, and any school-managed tools that could be misused.
- Update policies: Ensure acceptable use, mobile device, anti-bullying, and safeguarding policies explicitly cover synthetic imagery and image-based abuse.
- Train your team: Run short, scenario-based CPD on AI literacy, deepfake awareness, evidence handling, and reporting pathways.
Longer-term safeguards
- Student education: Teach critical viewing skills, consent, and the legal/disciplinary consequences of creating or sharing explicit or fabricated content.
- Parent partnership: Provide simple guidance on conversations at home, device oversight, and how to report concerns to school and police.
- Trusted channels: Offer anonymous reporting options for students and staff to flag issues early.
- Practice the plan: Run tabletop exercises with your safeguarding team to test communication, evidence handling, and external referrals.
Helpful resources
- UK Safer Internet Centre - guidance for schools on policies, reporting, and education around online harms.
- CEOP Safety Centre - police-led reporting and advice for child protection online.
What we know about the Armagh case
Reports indicate that some pupils used AI tools to generate fabricated explicit images, which were then shared among fellow pupils. The school contacted the authorities promptly, the principal has emphasized ongoing cooperation with educational and statutory bodies, and the PSNI confirmed an active investigation with engagement across the school community.
For education leaders, the message is clear: act fast, center pupil wellbeing, and make AI-specific safeguards part of everyday practice.