Deepfakes could cost NZ brands more than a lawsuit: consent is your first defence

Deepfakes are tripping up NZ businesses, and the fallout is costly and brutal on trust. There's no broad image right, so get consent, be upfront about AI, and tighten contracts before you publish.

Published on: Jan 19, 2026

Deepfakes: Legal and reputational traps NZ businesses can't ignore

Deepfake scams aren't just embarrassing. They're expensive. Kiwis have already been misled by fake videos of public figures like Prime Minister Christopher Luxon and Gareth Morgan. Journalists have been targeted too. The risk now extends to any business that uses AI-generated likenesses without tight controls.

The legal exposure is real, and the brand damage can be worse. Even if a use seems harmless, audiences feel burned by undisclosed AI and fake endorsements. Consent isn't optional and transparency isn't a nice-to-have: together they're the difference between smart adoption and a headline you don't want.

Where New Zealand law stands today

New Zealand doesn't recognise a standalone "image right." Copyright protects the photo or recording, not a face or voice. Protection therefore has to be assembled from several areas of law, backed by clear contracts.

  • Privacy tort (Hosking v Runting): giving publicity to private facts where a person has a reasonable expectation of privacy can be actionable.
  • Fair Trading Act 1986: false or misleading representations (including implied endorsements) are risky.
  • Passing off and misleading conduct: using a likeness to suggest an association can trigger claims.
  • Defamation: synthetic content that harms reputation can be actionable.
  • Harmful Digital Communications Act 2015: harmful deepfakes (especially intimate or abusive) can attract liability.
  • Crimes Act and Films, Videos, and Publications Classification Act: expect heightened enforcement on intimate/objectionable deepfakes.
  • Privacy Act 2020: biometric identifiers (faces, voices) are personal information. Handle them with care.
  • Deceased persons: there's no broad statutory control after death, so rely on contracts, estates' IP, and careful risk assessment.

For practical guidance on biometric data, see the Office of the Privacy Commissioner's resources on biometrics and privacy: privacy.org.nz.

Why this matters even if you're "compliant"

Legal cover doesn't fix public trust. While 82-87% of Kiwi organisations are using AI in some way, only 34% of New Zealanders trust it. That gap turns small mistakes into PR crises. A single lawsuit can cost less than the lost pipeline and rebuild effort that follows.

Practical playbook for in-house legal and advisors

  • Consent first, consent often: Get clear, written permission before using a real person's image, voice, or persona, especially if generating synthetic content. Re-consent for new contexts, formats, or campaigns. Treat minors and sensitive contexts as high risk.
  • No implied endorsements: Don't suggest support for your product or service without explicit permission. Disclaimers won't save you if the creative implies endorsement.
  • Be transparent: If AI helps create a likeness, say so. Hidden AI erodes trust and invites complaints under advertising standards.
  • Tight contracts with creators and vendors: Lock down rights, territory, term, approvals, and moral rights. Ban vendors from training their models on your talent assets. Add warranties, indemnities, and security obligations.
  • Provenance and watermarking: Use content provenance tools (e.g., C2PA) and watermarks for internal traceability and external clarity.
  • Review and detection: Use multiple deepfake detectors and a human-in-the-loop review for anything involving real people; a minimal pre-publish gate along these lines is sketched after this list.
  • Takedown and incident response: Prepare notice templates, escalation paths, and platform contacts. Preserve evidence. Move fast for injunctive relief where needed.
  • Insurance check: Validate coverage for synthetic media risks under media liability and cyber policies.
  • Cross-border use: If you export campaigns, assess right-of-publicity laws (e.g., some US states) and EU rules on biometrics.
  • Training and governance: Roll out a practical AI policy, usage guidelines, and sign-off gates for marketing and product teams.
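
None of this needs exotic tooling. As a purely illustrative sketch (the log location, the consent_id field, and the JSON layout are assumptions, not a standard), a pre-publish gate can refuse to ship any asset that lacks a linked consent record and a named human reviewer, and write a SHA-256 fingerprint of each approved asset to an internal provenance log:

    import hashlib
    import json
    import time
    from pathlib import Path

    PROVENANCE_LOG = Path("provenance_log.jsonl")  # hypothetical internal log

    def asset_fingerprint(path: Path) -> str:
        """SHA-256 hash of the asset file, for internal traceability."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def prepublish_gate(asset: Path, consent_id: str | None, reviewer: str | None) -> bool:
        """Allow publication only with a consent record and a human reviewer on file."""
        if not consent_id:
            print(f"BLOCKED: {asset.name} has no linked consent record")
            return False
        if not reviewer:
            print(f"BLOCKED: {asset.name} has not passed human-in-the-loop review")
            return False
        entry = {
            "asset": asset.name,
            "sha256": asset_fingerprint(asset),
            "consent_id": consent_id,
            "reviewer": reviewer,
            "logged_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        with PROVENANCE_LOG.open("a") as log:
            log.write(json.dumps(entry) + "\n")
        return True

The gate is deliberately simple: it doesn't detect deepfakes, it just makes "no consent, no reviewer, no publish" the default. Standards like C2PA can then carry provenance signals externally.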

Consent done right: what your releases should cover

  • Scope and specificity: Exact uses (ads, social, PR), formats (photo, video, voice clone), and whether synthetic derivatives are allowed (see the record sketched after this list).
  • Term and territory: Avoid "in perpetuity" unless fairly negotiated; define renewal and takedown rights.
  • Approvals and safeguards: Final cut/approval where appropriate; restrictions on sensitive or political contexts.
  • Compensation and consideration: Clear fees, royalties, or value exchange that reflects AI-derived uses.
  • Model training restrictions: No use of the person's data to train third-party models without separate, informed consent.
  • Data handling: Storage limits, security, deletion timelines, and access controls.
  • Post-campaign obligations: Takedown, archiving rules, and rights on termination. Include death/estate language where relevant.
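
To keep this checklist auditable, some teams pair each signed release with a machine-readable summary that marketing systems can query before reuse. A minimal sketch, with field names that are illustrative assumptions rather than any legal standard:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ConsentRecord:
        """Illustrative machine-readable summary of a signed talent release."""
        person: str
        permitted_uses: list[str]         # e.g. ["ads", "social", "PR"]
        formats: list[str]                # e.g. ["photo", "video", "voice_clone"]
        synthetic_derivatives_ok: bool    # may AI-generated likenesses be made at all?
        territory: str                    # e.g. "NZ"; avoid "worldwide" by default
        expires: date                     # push back on open-ended "in perpetuity" terms
        model_training_ok: bool = False   # needs separate, informed consent
        deletion_deadline_days: int = 90  # post-campaign data-handling obligation

    def use_is_covered(rec: ConsentRecord, use: str, fmt: str, today: date) -> bool:
        """Any use, format, or date outside the record means re-consent, not reuse."""
        return use in rec.permitted_uses and fmt in rec.formats and today <= rec.expires

A query that returns False is a prompt to go back to the person, not a loophole to argue around.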

Special cases and red flags

  • Public figures: Commercial use without permission is high risk. Don't rely on "public interest" for ads.
  • Journalists and politicians: Heightened scrutiny. Expect faster and louder backlash.
  • Intimate or abusive content: Treat as zero-tolerance. Expect criminal exposure and urgent injunctions.
  • Children and vulnerable people: Apply strict consent standards and extra review.
  • Deceased individuals: Contract with estates where possible; watch trademarks and moral rights in other jurisdictions.

Tooling note

Even if tools like OpenAI's Sora let anyone generate convincing video, your legal standard doesn't change. Consent and clarity still lead. Detection helps, but policy, contracts, and good judgment are what keep you safe.

Watch for regulatory movement

Lawmakers are turning attention to non-consensual AI content, with recent prosecutions in harmful deepfake cases. Expect tighter rules and faster enforcement. Keep your horizon scanning active and update policies as standards evolve.

Stay current with official guidance: Office of the Privacy Commissioner - Biometrics & Privacy

Bottom line

Use AI with permission, precision, and transparency. The law may be fragmented, but courts and regulators have more than enough to act where likenesses are misused or endorsements are implied. Legal teams that set clear rules now will save the business from costly lessons later.

If you're building an AI upskilling plan for non-legal teams, here's a curated starting point by job role: Complete AI Training - Courses by Job

