Bill to Criminalise AI Child Abuse Apps Introduced to Parliament
Independent MP Kate Chaney is set to introduce legislation to criminalise the possession and distribution of AI tools specifically designed to create child sexual abuse material. These tools, currently legal to possess and distribute, are becoming increasingly accessible online, raising urgent concerns.
Legislative Action Underway
Ms Chaney plans to introduce a bill to the House of Representatives that creates new offences targeting AI tools used to generate child abuse material. While possessing or sharing child sexual abuse material is already illegal, the law does not yet cover the AI generators themselves. These AI applications are popular, with some accessed millions of times, and their spread complicates police efforts by facilitating offline material creation, which is harder to detect and trace.
Ms Chaney highlights that existing legal loopholes put children at risk and must be addressed without delay. A recent roundtable recommended swift action to outlaw these tools, directly influencing the proposed legislation.
Key Provisions of the Bill
- Creation of a new offence for using carriage services to download, access, supply, or facilitate AI technologies designed to create child abuse material.
- Offence for scraping or distributing data with the intention of training or creating these tools.
- Maximum penalty of 15 years imprisonment for offenders.
- Defences for law enforcement and intelligence agencies acting with express authorisation.
Ms Chaney points out that these AI tools enable the on-demand, unlimited generation of abusive material. Perpetrators can train AI with images of a particular child, delete the original material to avoid detection, and continue to generate new content using word prompts. This not only makes police investigations more complex but also means that every AI-generated image originates from the exploitation of an actual child.
Child Safety Experts Support Urgent Reform
The federal government is still developing a broader response to AI, weighing the benefits of useful applications against the need to safeguard against harm. However, it has not yet formally responded to the major review of the Online Safety Act, which also recommended criminalising "nudify" apps—another form of AI misuse. Experts from last week's roundtable stressed there is no public benefit in allowing AI generators of child abuse material to remain legal, and urged immediate legislative action.
Former police detective inspector Jon Rouse emphasised that while current laws address child sexual abuse material, they do not yet cover AI-generated content. Similarly, Colm Gannon, Australian head of the International Centre for Missing and Exploited Children, described the bill as a focused step to close a critical gap.
Government Response and Future AI Regulation
Attorney-General Michelle Rowland affirmed the government's commitment to protecting vulnerable populations and noted that the existing legal framework supports this goal. She indicated that proposals aiming to strengthen responses to child sexual exploitation will be carefully considered.
Ms Chaney stressed that AI regulation must be a government priority this term. She acknowledged the challenge of regulating fast-evolving technology but highlighted the need to address immediate gaps to keep laws relevant and effective. A coordinated, holistic approach will be necessary to balance individual rights, productivity, and trust in institutions.
Last year, an inquiry led by former industry minister Ed Husic recommended the government enact standalone laws to regulate AI, capable of adapting to rapid technological changes.
For professionals in the government and legal sectors, this bill represents a critical move to update legislation in response to emerging AI threats. Keeping pace with technology through targeted, practical regulation is essential to protect children and support effective law enforcement.