Choosing New Zealand's AI Path: Positivity, Negativity, or Neutrality
NZ must pick clear AI stances across sectors, balancing growth, safety, and protection of taonga. Set rules on training data, privacy, and liability, with quick steps now.

AI regulation in Aotearoa New Zealand: choose a stance and act
A proposed US$1.5 billion settlement over AI training on authors' books, now under judicial scrutiny, shows how hard it is to fit new technology into old rules. Courts are wrestling with whether training on copyrighted works is permitted, whether AI-generated content can be protected, and who is responsible when models imitate distinctive styles without consent, as in the "Studio Ghibli" controversy.
These questions are not abstract. They touch our laws on copyright, privacy, online harm, cultural integrity, and consumer protection. Government leaders need a clear approach.
Three policy stances to choose from
AI-positivity: Prioritise growth. Enable access to New Zealand data for training, attract AI firms, and keep regulation light. Think "open for business," with guardrails.
AI-negativity: Lead with caution. Focus on harm: privacy breaches, biased decision-making in public services, and misuse of taonga such as te reo Māori literature without acknowledgment or consent. Regulation takes precedence over speed.
AI-neutrality: Wait-and-see. Use current laws where possible, make targeted updates only when necessary, and avoid constant policy lurches that create uncertainty.
None is "the one right answer." Government can mix approaches across sectors: positive for research, stricter for public sector use, and protective over taonga and minors.
What this means for government now
- Pick a stance per domain: research, health, education, justice, procurement, and public-facing services.
- Protect taonga: set rules for te reo Māori, tikanga Māori, and mātauranga Māori in datasets and model training, including consent and benefit-sharing expectations.
- Tighten copyright practice: define acceptable training sources, require licenses where needed, and address style imitation and attribution expectations.
- Strengthen privacy and safety: restrict input of personal data into public chatbots, mandate data protection impact assessments (DPIAs), and require child-safety measures and age assurance where relevant.
- Set clear liability: clarify duties for deepfakes and harmful content, and use procurement to bind overseas providers to NZ standards.
- Build capability: fund training for policy, legal, Māori data governance, and technical teams; stand up a cross-agency AI review unit.
Ninety-day actions
- Issue interim public sector rules for AI use, including approved tools, data classification, retention, and audit trails.
- Adopt a procurement addendum for AI systems covering privacy-by-design, model documentation, incident response, and NZ dispute resolution.
- Publish guidance on Māori data and taonga use in AI training, co-designed with Māori and aligned with tikanga.
- Set reporting requirements for AI incidents, including deepfakes, data leaks, and harmful automated decisions.
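A structured reporting requirement only works if agencies report incidents in a consistent shape. The sketch below is one illustrative way such a record could be modelled; every field name and category here is a hypothetical assumption, not an existing government schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class IncidentType(Enum):
    # Categories drawn from the reporting bullet above; illustrative only.
    DEEPFAKE = "deepfake"
    DATA_LEAK = "data_leak"
    HARMFUL_DECISION = "harmful_automated_decision"

@dataclass
class AIIncidentReport:
    """One reportable AI incident. All field names are hypothetical."""
    agency: str
    incident_type: IncidentType
    system_name: str            # the AI tool or model involved
    description: str
    affected_people: int        # estimated count; 0 if unknown
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def summary(self) -> str:
        """One-line view for a cross-agency incident log."""
        return (f"[{self.incident_type.value}] {self.agency}: "
                f"{self.system_name}, {self.affected_people} affected")
```

A shared record like this would let a cross-agency review unit aggregate incidents by type and agency without per-agency translation work.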
Copyright, data, and cultural integrity
Decide where training on copyrighted works is acceptable, when licensing is mandatory, and how to manage style imitation that risks misappropriation. For te reo Māori and other taonga, require consent, governance by Māori, and fair value return. Publish a public registry of permitted datasets and restricted taonga.
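A public registry of permitted and restricted datasets implies a default-deny rule: training is allowed only on datasets explicitly registered as permitted. A minimal sketch of that check, with entirely made-up registry entries and field names:

```python
# Hypothetical registry entries; dataset names, statuses, and licence
# strings are illustrative, not a real government registry.
REGISTRY = {
    "nz-open-parliament-text": {
        "status": "permitted",
        "licence": "CC-BY-4.0",
    },
    "te-reo-maori-corpus": {
        "status": "restricted",
        "reason": "taonga; Māori governance and consent required",
    },
}

def training_allowed(dataset: str) -> bool:
    """Default-deny: only explicitly permitted datasets may be used."""
    entry = REGISTRY.get(dataset)
    return entry is not None and entry["status"] == "permitted"
```

The design choice worth noting is the default: an unregistered dataset is treated as restricted, which puts the burden of proof on whoever wants to train, not on whoever wants to object.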
Privacy and safety
Prohibit entry of identifiable personal information into public chatbots unless a lawful purpose, contract, and safeguards exist. Require DPIAs for new AI features, data minimisation, and red-teaming for safety. For services reachable by young people, mandate filters, default-on safety settings, and swift takedown of harmful content.
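One practical safeguard behind the prohibition above is screening prompts for likely personal identifiers before they leave an agency system. The sketch below is a deliberately minimal, assumption-laden example (two regex patterns only); any real deployment would need far broader coverage than this.

```python
import re

# Minimal illustrative patterns; real PII detection needs much more.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
    re.compile(r"(?<!\d)(?:\+64|0)[2-9]\d{7,9}\b"),   # NZ-style phone numbers
]

def screen_prompt(text: str) -> str:
    """Redact likely personal identifiers before text reaches a chatbot."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Redaction-before-send is a complement to, not a substitute for, the contractual and lawful-purpose safeguards the paragraph above requires.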
Enforcement and accountability
Use existing levers first: privacy enforcement, consumer law, harmful digital communications, and sector regulators. Close gaps with targeted amendments if courts signal limits. Make vendor accountability non-negotiable through contracts, logs, and NZ-based complaint handling.
Avoid policy whiplash
Flashy settlements make headlines; they don't set durable policy. A balanced, culturally grounded approach can enable innovation while protecting people, especially the most vulnerable.
Build capability
Upskill policy, procurement, and legal teams so decisions improve with each deployment. Curated, role-specific training can accelerate that.