AI Deepfakes and Scams Spark Urgent Calls for Stronger Regulation in New Zealand
A deepfake scam featuring Prime Minister Luxon has raised alarm over AI risks such as fraud and abuse. More than 30 experts are calling for urgent regulation to ensure AI is used safely.

Call for Stronger AI Regulation Gains Momentum
A recent Facebook deepfake scam that falsely depicted Prime Minister Christopher Luxon promising monthly earnings of $35,000 from a $450 investment has sparked concern among AI experts. The incident underscores urgent risks tied to artificial intelligence, including deepfake-enabled fraud and the generation of child sexual abuse imagery.
More than 30 AI specialists have sent an open letter to Prime Minister Luxon, Labour leader Chris Hipkins, and other party leaders, urging immediate regulation. The letter, authored by Dr. Andrew Lensen of Victoria University of Wellington, AI consultant Chris McGavin, and the University of Canterbury's Dr. Cassandra Mudgway, calls for clear rules to manage AI risks so the technology benefits everyone.
Why Regulation Matters
The letter stresses that regulation isn’t about creating red tape or halting innovation. Instead, it’s about setting clear boundaries that allow businesses, researchers, and the public to innovate safely and build trust in AI systems.
New Zealand currently ranks low in public trust in AI, which the letter attributes to hype from major tech firms that exaggerate AI's capabilities. For every positive AI story, a growing number of reports highlight potential harms.
Current Government Approach and Expert Concerns
The Government's AI strategy, titled Investing with Confidence, emphasizes reducing barriers to AI adoption in business. However, experts such as Dr. Mudgway point to the lack of clear laws or institutions for addressing AI risks, especially in the public sector.
High-risk AI uses, such as biometric facial recognition and the spread of deepfake content with criminal intent, demand urgent attention. Recent scams involving deepfakes of public figures highlight the absence of mechanisms to combat such abuses.
Proposed Solutions
- Establish an independent regulatory body or watchdog to review current laws and recommend new AI legislation.
- Develop bipartisan-supported regulations to ensure stable and effective governance.
- Consider international models such as the EU's risk-based AI Act, which prohibits applications posing an unacceptable risk and subjects high-risk uses to audits and oversight.
- Implement sector-specific rules for public services such as health and education.
International Developments
Australia recently announced steps to restrict "abusive technology," including apps that manipulate images into fake nudes ("nudify apps") and tools that enable undetected online stalking. Communications Minister Anika Wells emphasized the need for proactive harm prevention and collaboration with industry.
Government Response
Minister for Technology Shane Reti supports the current light-touch, principles-based approach, trusting businesses and organizations to adopt AI responsibly. He notes alignment with the OECD AI Principles, which focus on human rights, fairness, privacy, security, and safety.
The Government currently manages AI risks through existing laws related to privacy, consumer protection, and human rights, with the option to update them as necessary rather than introducing standalone AI legislation.
Dr. Mudgway confirmed Prime Minister Luxon referred the experts’ letter to Minister Reti, but no formal response has been received yet.
What This Means for Government Professionals
AI is increasingly influencing public services and citizen interactions. Understanding the need for clear regulations and oversight is critical to safeguarding rights and preventing misuse. Staying informed about AI governance developments will help government employees navigate their roles responsibly.
For those looking to deepen their AI knowledge in a practical context, resources like Complete AI Training’s latest courses offer valuable insights on AI tools and ethical considerations.