Outcry grows for urgent ban on AI child abuse tools as advocates demand government action

Grace Tame and advocates demand a ban on AI tools creating child sexual abuse material. They call for urgent laws to criminalise possession and update child safety frameworks.

Categorized in: AI News, Government
Published on: Jul 17, 2025

Urgent Call to Ban AI Tools Used to Create Child Sexual Abuse Material

Former Australian of the Year Grace Tame and child safety advocates are demanding immediate government action to outlaw AI tools that generate child sexual abuse material (CSAM). They emphasise the urgent need to criminalise possession of AI-driven child exploitation applications, which are becoming increasingly accessible.

Coinciding with a meeting at Parliament House, the advocates are pushing for stronger laws that address both the risks AI poses to children and its potential to protect them. The rise of AI-facilitated exploitation has sparked concern, especially following recent alleged abuse cases in Melbourne child care centres.

Government Response and Calls for Faster Action

Grace Tame criticised the government's slow response to online child safety threats. "Previous governments and the current one have not acted swiftly enough," she said, highlighting the need for urgent reforms.

The International Centre for Missing and Exploited Children (ICMEC) is leading the push for Australia to criminalise the possession and distribution of AI tools designed to create CSAM. The UK has already introduced similar legislation, serving as a reference point for Australian policymakers.

AI-Generated Abuse Material: A Growing Problem

According to intelligence firm Graphika, non-consensual explicit AI tools have moved beyond niche forums, becoming a monetised online business with millions of visits. Platforms like Reddit, X, and Telegram have seen sharp rises in links offering access to these tools.

This surge is straining police resources, diverting attention from cases involving real victims. Advocates stress that possession of CSAM is already illegal, but the AI tools that create this material remain unregulated.

Outdated Child Safety Frameworks

The national child protection framework, drafted in 2021, fails to address AI-related harms, making it outdated in today's context. Colm Gannon, CEO of ICMEC Australia, stated that regulations must require platforms to prevent their services from becoming gateways to exploitation.

He added, "This software has no societal benefit. It should be regulated and made illegal to possess models generating child sexual abuse material."

Currently, offenders can download AI tools for offline use, evading detection. The environment is described as a "wild west" that requires minimal technical skill to exploit.

Legal Recommendations and Government Promises

An independent review of the Online Safety Act recommended banning 'nudify' AI apps that produce non-consensual explicit content. The government has pledged to enforce a "duty of care" on platforms to protect children but has yet to act on the review’s 66 recommendations.

Attorney-General Michelle Rowland condemned AI-facilitated child sexual abuse as "sickening" and promised cross-government collaboration to strengthen responses. She acknowledged existing laws regulate AI but noted the need for targeted measures in high-risk settings.

Using AI to Identify and Protect Victims

Advocates urge the government to remove barriers preventing law enforcement from using AI tools to detect child abuse and grooming. Since 2021, investigative use of facial recognition tools has been limited in Australia due to privacy concerns, following a ruling against Clearview AI.

However, tools compliant with Australian privacy laws exist and could help identify victims without compromising rights. Collaborating with international partners to harmonise AI safety standards is also critical.

Law Enforcement Faces an AI-Driven Arms Race

Unregulated AI tools are enabling offenders to increase the scale and sophistication of their crimes. Grace Tame noted offenders are adopting advanced methods, including using AI chatbots to automate grooming and seek advice on avoiding detection.

The government acknowledges that current regulations do not fully cover AI risks and is considering mandatory safeguards. The eSafety Commissioner recently emphasised that while perpetrators bear responsibility, technology platforms must actively prevent abuse by controlling how their products are weaponised.

Next Steps for Government and Policy Makers

  • Criminalise possession and distribution of AI tools designed to create CSAM.
  • Update child safety frameworks to explicitly address AI-related harms.
  • Implement the Online Safety Act recommendations, including banning 'nudify' AI apps.
  • Enable law enforcement to use privacy-compliant AI tools for victim identification.
  • Work with international partners to align AI safety regulations.
  • Hold technology platforms accountable for preventing abuse via their services.

For government professionals interested in AI policy and regulation, understanding these urgent challenges is essential. Tools and training on AI ethics and safety can be found at Complete AI Training.
