Police investigate explicit AI deepfakes of Sydney high school students as NSW tightens laws

NSW Police are investigating explicit deepfake images of female high school students in Sydney, with support in place at the affected school. New laws criminalise AI-generated intimate images; offenders face jail.

Categorized in: AI News, Education
Published on: Oct 17, 2025
Police investigate explicit deepfake images involving Sydney high school students

NSW Police are investigating reports that explicit deepfake images were created using the faces of female high school students in Sydney. Families alerted authorities after a male student received one of the images and informed the school.

"Officers attached to Ryde Police Area Command have commenced an investigation," a police spokesperson said. "Inquiries are ongoing and there is no further information available at this time."

The NSW Department of Education said it is working with police and will act if any student is found responsible. "Deepfakes present significant new risks to the wellbeing and privacy of students," a department spokesperson said. "If any student is found to have engaged in this behaviour, the school will be taking strong disciplinary action."

Acting Education Minister Courtney Houssos called the reports "deeply concerning." She said the school has support in place for affected students and the broader community and noted it would be inappropriate to comment further during an active police investigation.

What NSW law says

NSW recently criminalised using AI to create intimate images without consent. Creating or sharing such images can lead to up to three years in jail. The amendments also cover the creation, recording, and sharing of sexually explicit audio that is real or simulated.

Attorney-General Michael Daley said the laws better protect people, particularly young women, from image-based abuse. "This bill closes a gap in NSW legislation that leaves women vulnerable to AI-generated sexual exploitation… anyone who seeks to humiliate, intimidate or degrade someone using AI can be prosecuted."

Why this matters for school leaders

Deepfakes are manipulated images, video, or audio designed to look real. The eSafety Commissioner has called deepfakes a "current crisis affecting school communities," with students finding their images in fake explicit content and peers receiving AI-generated material.

Immediate actions for principals and wellbeing teams

  • Preserve evidence securely. Do not forward images. Capture URLs, usernames, timestamps, and device details for police and the eSafety Commissioner.
  • Report promptly to NSW Police and the eSafety Commissioner. Consider contacting your Department incident line and legal unit.
  • Activate a student safety plan: designated staff lead, daily check-ins, safe spaces, and trauma-informed communication.
  • Notify parents/carers with clear guidance: do not share the content, do not confront students online, channel all information through the school lead.
  • Offer confidential counselling and peer support. Prioritise privacy and dignity; avoid naming or hinting at identities.

System and policy measures to reduce risk

  • Update student behaviour and technology policies to explicitly cover deepfakes, synthetic media, and image-based abuse, with clear consequences.
  • Implement network and device controls: block known AI image tools where feasible, restrict AirDrop/quick share features, and tighten content filters.
  • Embed digital citizenship across years: consent, bystander action, reporting pathways, and critical media literacy about synthetic content.
  • Run targeted staff development on AI safety, escalation pathways, and evidence handling. Ensure new staff are briefed during induction.
  • Establish a communications protocol: single spokesperson, approved language, and a Q&A sheet for parents to reduce speculation and stigma.
  • Document everything: timeline, actions taken, who was notified, and outcomes. Review after action to improve response speed and clarity.

Support for affected students and families

Keep communication discreet and supportive. Avoid blame, limit retelling of events, and remove exposure to the content. Encourage students to step away from social media during the acute phase.

  • Lifeline: 13 11 14
  • Beyond Blue: 1300 22 4636
  • Kids Helpline: 1800 55 1800

Practical reporting and guidance

  • eSafety Commissioner: image-based abuse guidance and reporting portal

Professional learning for your staff

If your school is updating AI literacy and safety capabilities, consider focused training for leaders, teachers, and support staff. See relevant options here: AI courses by job role.

Bottom line: treat synthetic sexual imagery as a safeguarding and legal issue, not merely a technology problem. Move fast, centre student wellbeing, involve police, and keep your community informed without amplifying harm.
