AI-run copycat news pages mislead Kiwis and erode trust

AI 'news' pages are flooding NZ feeds with fabricated images, blurring facts and eroding trust. Officials urge: stick to verified sources and look for clear labels.

Published on: Feb 09, 2026

Thousands of New Zealanders are liking and sharing Facebook posts that look like local news but are actually AI-written stories paired with fabricated images. None of the visuals are labelled as synthetic. Many are wrong - and some are offensive.

Civil defence groups and councils have issued public warnings about pages mimicking real outlets. Experts say the spread of this content is blurring the line between reporting and fabrication, and it's dragging down already fragile trust in media.

How These Pages Operate

At least 10 Facebook pages are scraping legitimate releases and news reports, feeding them through AI for rewriting, and publishing the results with generated images. One page, "NZ News Hub," racked up thousands of interactions in January across more than 200 posts - all without clear disclosure or original reporting.

In multiple posts, raw prompts were accidentally left in the captions (e.g., "Here's a news-style rewrite…"). Some images carried Google's SynthID watermark, confirming AI generation. SynthID is an invisible watermark that Google DeepMind embeds in media produced by its generative tools.

What's Being Faked

Disasters are consistently dramatised. Authentic slips on East Coast highways are shown as far more destructive than they actually were. A grounded tourist boat in Akaroa is edited to look overloaded. Crushed homes and cars appear where none existed.

Real people are also misrepresented. A still of a minor who died in the Mount Maunganui landslide was manipulated to show her dancing. An image of grieving parents was edited to make them appear affectionate. Police officers are shown in foreign uniforms or carrying firearms where none were reported.

Why This Matters For PR, Comms, and Writers

People see these posts, assume they're from "the media," and blame legitimate outlets for fakery. That confusion sticks. Only 32% of New Zealanders say they trust news, and these pages chip away at what's left.

Mainstream organisations - TVNZ, RNZ, the New Zealand Herald, and Stuff - say they don't generate news visuals with AI, and if they ever do, they'll disclose it. The problem: imitator pages look and sound close enough that audiences don't spot the difference at a glance.

What Officials And Researchers Are Saying

Gisborne District Council and Tairāwhiti Civil Defence have warned the public about fake pages using NZ phone numbers, copied branding, and "breaking news" styling to look credible. The National Emergency Management Agency (NEMA) has also cautioned that AI-generated imagery is circulating during severe weather and urged the public to rely on verified channels such as its official website.

Researchers say these pages are nearly always automated: content is scraped, rewritten by large language models, paired with generated visuals, and posted - with inaccuracies often slipping in along the way. As image quality improves, checking the source beats hunting for visual tells.

What To Do Now: A Playbook For Teams

  • Establish a single source of truth for emergencies and fast-moving updates (owned channels, a pinned post, and a media room page). Make it obvious. Repeat it often.
  • Lock your brand identity: consistent page names, verified badges, and watermarked visuals. Don't leave room for lookalikes.
  • Adopt a zero-fabrication policy for visuals. No generated photos that depict real events or people. If you ever use AI for concepts or explainer graphics, label it clearly.
  • Get explicit consent for any image of victims, minors, or grieving families - and document it.
  • Build an image-verification workflow: reverse image search, check for SynthID/other watermarks, cross-reference uniforms, vehicles, signage, and timestamps against official releases (a minimal matching sketch follows this list).
  • Prepare an emergency comms protocol: who approves posts, how corrections are issued, and how to counter viral fakes fast (pre-approved copy, visuals, and escalation paths).
  • Monitor and report imposters: track pages mimicking your brand or community. Screenshot evidence. Report and publicly clarify when needed.
  • Be transparent about AI use: if AI helps with research, transcription, or summarising documents, say so. Spell out what you never use AI for.
  • Add content credentials (e.g., C2PA/CAI) where possible so audiences and partners can verify media provenance (see the second sketch after this list).
  • Educate your audience with simple posts on how to spot fakes and where to find verified updates.
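
For the verification bullet above, here is a minimal sketch of one automatable step: comparing a suspect post image against a folder of official release photos with a perceptual hash. It assumes the third-party Pillow and imagehash packages; the file paths and the distance threshold are illustrative assumptions, not calibrated values.

```python
# pip install Pillow imagehash
# Minimal sketch: flag a suspect post image that does NOT closely match any
# image from an official release. A large perceptual-hash distance suggests
# the visual was altered or generated rather than merely recompressed.
# Paths and the threshold of 10 are illustrative, not calibrated values.
from pathlib import Path

import imagehash
from PIL import Image

def nearest_official_match(suspect_path: str, official_dir: str) -> tuple[str, int]:
    """Return the closest official image and its Hamming distance."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    best_name, best_dist = "", 64  # 64 bits is the max distance for an 8x8 pHash
    for candidate in Path(official_dir).glob("*.jpg"):  # extend for other formats
        dist = suspect_hash - imagehash.phash(Image.open(candidate))
        if dist < best_dist:
            best_name, best_dist = candidate.name, dist
    return best_name, best_dist

name, dist = nearest_official_match("suspect_post.jpg", "official_release_images/")
if dist <= 10:   # near-duplicate: likely the real release photo
    print(f"Close match to {name} (distance {dist}) - probably authentic imagery")
else:            # no close match: escalate to manual verification
    print(f"No close official match (best: {name}, distance {dist}) - verify manually")
```

A hash-based pass like this only narrows the pile; anything flagged still goes through the human checks in the bullet (uniforms, signage, timestamps, official releases).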
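And for the content-credentials bullet, a second sketch: a naive check for whether a file appears to carry embedded C2PA data at all. This assumes C2PA manifests are stored in JUMBF boxes labelled with the ASCII string "c2pa"; it only detects presence and does not verify signatures or provenance, which requires proper C2PA tooling (e.g., c2patool or the c2pa SDK).

```python
# Minimal sketch: naive presence check for C2PA content credentials.
# C2PA manifests are carried in JUMBF boxes whose label contains the ASCII
# string "c2pa", so a raw byte scan can hint that credentials are embedded.
# Caveats: false positives are possible if the bytes occur by chance, and
# this does NOT validate signatures - use real C2PA tooling for that.

def has_c2pa_marker(path: str) -> bool:
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

# Illustrative file names, not real assets.
for name in ("signed_press_photo.jpg", "suspect_post.jpg"):
    found = has_c2pa_marker(name)
    print(f"{name}: {'credentials marker found' if found else 'no C2PA marker'}")
```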

How To Spot Synthetic "News" Posts Quickly

  • Visual inconsistencies: wrong police uniforms, non-NZ equipment, emergency gear that doesn't match official releases.
  • Over-dramatic disaster scenes: extra debris, impossible damage, crowded boats or streets out of proportion to reports.
  • Weird or garbled text embedded in images; distorted logos or signage.
  • Captions that read like prompts ("make this shorter, more dramatic") - see the screener sketch after this list.
  • Pages with slightly off names or branding; no bylines; limited "about" info; no traceable newsroom.
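
To make the prompt-residue check concrete, here is a toy caption screener that flags common leftover-prompt phrases, the kind accidentally pasted into posts along with the AI output. The phrase list is an assumption for illustration; tune it against examples your team actually encounters.

```python
import re

# Toy screener: flag captions containing tell-tale prompt residue left behind
# when an AI rewrite is pasted straight into a post. The phrase list below is
# illustrative only; real pages will leak different wording.
PROMPT_RESIDUE = re.compile(
    r"here'?s a (news-style )?rewrite"
    r"|make (this|it) (shorter|longer|more dramatic)"
    r"|as an ai language model"
    r"|rewrite the (above|following)",
    re.IGNORECASE,
)

def looks_like_prompt_residue(caption: str) -> bool:
    return bool(PROMPT_RESIDUE.search(caption))

captions = [  # hypothetical examples for demonstration
    "Here's a news-style rewrite: Flooding closes SH35 near Tokomaru Bay",
    "SH35 closed near Tokomaru Bay after overnight flooding, police advise",
]
for c in captions:
    flag = "SUSPECT" if looks_like_prompt_residue(c) else "ok"
    print(f"[{flag}] {c}")
```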

Platform Accountability

Meta did not provide a statement by deadline on whether these pages violate policy or what enforcement will follow. Until platforms consistently label or remove deceptive posts, comms teams need to assume the burden of proof - and make verification obvious to the public.

Rebuild Trust With Clarity And Repetition

Audiences don't need perfection. They need visible standards. Say what you verify, how you verify it, and what you will never publish. Then keep saying it.

The takeaway: control your sources, label your methods, verify your visuals, and correct fast. That's how you protect your brand - and your community.
