AI is 'supercharging' bullying: What educators need to do now
Content note: References to suicide/self-harm.
Australia has launched a new national plan to combat school bullying, backed by $10 million and endorsed by ministers from every state and territory. The move follows warnings that AI chatbots are being used to humiliate students and, in some cases, push them toward self-harm.
What's changing under the national plan
Schools will be required to respond to bullying complaints within two school days. This two-day rule comes from the Anti-Bullying Rapid Review, which received 1,700 submissions, mostly from parents.
The funding will drive a national awareness campaign and provide new resources for teachers, parents, and students. The review supports consequences like suspensions or expulsions in some cases, but stresses that relationship repair and addressing root causes often lead to better outcomes.
Key statistic: One in four students in Years 4-9 reports being bullied every few weeks or more often.
Why AI changes the risk profile
Education Minister Jason Clare warned that bullying is no longer just student-to-student. AI chatbots can target kids directly: insulting, humiliating, and, in disturbing cases, encouraging self-harm.
International cases and local reports have raised the stakes, while new data suggests two in five Australian parents believe their kids are turning to AI for companionship. The takeaway for schools: online harm isn't confined to social platforms, and it doesn't always have a human instigator.
Social platforms, policy shifts, and what's next
Officials report that most bullying occurs on TikTok and Snapchat. Australia's new under-16 social media restrictions take effect on 10 December, which may reduce exposure but won't eliminate AI-driven risks.
Meta has flagged AI supervision tools for parents from early 2026. These will allow adults to disable one-on-one AI chats, set time limits, and review topics discussed with AI characters across its platforms. Rollout is planned for Australia, the United States, England, and Canada.
Image-based abuse and deepfakes
AI "nudify apps" and other deepfake tools are creating non-consensual sexual material, with a heavy impact on young women and female teachers. Reports indicate that digitally altered intimate images of people under 18 have more than doubled in the past 18 months, with 80% of targets being women.
The federal government has announced plans to restrict access to deepfake and nudify tools. Schools should treat image-based abuse as a high-risk issue requiring fast, trauma-informed responses and clear referral pathways.
Practical actions for school leaders and staff
- Implement the two-day response rule now: set clear intake, triage, and follow-up steps for any bullying report.
- Update policies to include AI-specific harms: chatbots, deepfakes, nudify apps, impersonation, and synthetic harassment.
- Train staff to recognize AI-enabled bullying: unusual "voice" in messages, sudden tone shifts, or content patterns that don't fit peer language.
- Equip students with digital self-defense: privacy settings, taking screenshots as evidence before reporting, refusing to engage with AI prompts, and seeking help early.
- Engage parents: briefings on AI risks, social media settings, device rules at home, and how to report incidents to the school and eSafety.
- Use restorative practices where safe and appropriate, coupled with clear consequences for repeat or severe harm.
- Create a rapid response for image-based abuse: preserve evidence, support the victim first, and notify relevant authorities quickly.
Protocol updates to put in place this term
- Bullying intake form that flags AI involvement (e.g., chatbot, deepfake, impersonation).
- Parent communication templates for AI-related incidents.
- Clear escalation map: classroom teacher → year coordinator → wellbeing lead → principal → eSafety referral if needed.
- Monitoring approach that respects privacy but allows quick identification of high-risk patterns.
- Staff wellbeing support, especially for educators targeted by deepfakes.
Teacher development and resources
Don't wait for policy to catch up. Build staff capability on AI risks, ethical use, and classroom application now. A practical starting point is role-based learning paths that address both safety and instruction.
Explore AI courses by job role to upskill teachers, wellbeing teams, and leaders with focused, classroom-ready content.
Where to report and get help
If there's immediate risk to a student's safety, call emergency services.
- Report online harm and image-based abuse: eSafety Commissioner
- Crisis support: Lifeline 13 11 14 (call) or 0477 13 11 14 (text)
- Suicide Call Back Service: 1300 659 467
- Kids Helpline (up to age 25): 1800 55 1800
- Beyond Blue: 1300 22 4636
- Embrace Multicultural Mental Health: support for culturally and linguistically diverse communities
The job isn't to fear AI. It's to be faster, clearer, and more coordinated than the harm it enables. Put the systems in place now, and keep them current.