AI posts on Queensland Fisheries pages called lazy and ironic amid disclosure push

A Queensland agency posted AI-made images without saying so, even on a post warning about AI. Trust is on the line; label posts, log AI use, and skip AI visuals for sensitive updates.

Categorized in: AI News, Government
Published on: Feb 05, 2026

AI images on government social channels: the irony, the risk, and what to do next

A Queensland agency used AI-generated images on Fisheries Queensland's Facebook and Instagram - a reminder of the issues around AI on Social Media. None of the four posts disclosed that AI was used.

Two images carried an invisible watermark consistent with Google's SynthID. The others showed strong visual signs of AI generation. All were posted late last year and covered topics like infringement notices, patrols, and a court case.

Why this matters for government teams

Public trust is fragile. If a government account uses AI to illustrate an enforcement action or safety message, people want clarity on what's real and what's constructed. Government communicators should consult guidance on AI for Government when setting disclosure standards.

The stakes aren't abstract. Missteps here create avoidable backlash, FOI headaches, and a drag on credibility that bleeds into more important communications.

What experts are saying

Internet studies professor Tama Leaver called it "ironic" that a post warning people not to trust AI was itself AI-generated. His point: it's getting hard to tell what's AI. That's exactly why clear disclosure should become standard.

Marketing professor Paul Harrison added that the public expects agencies to be transparent. He said the posts looked obviously AI-made and should have been labelled. Disclosure may trigger some negativity, but silence creates a bigger trust gap: "Why didn't you tell me?"

What the department says

The Department of Primary Industries, which manages the accounts, confirmed it used AI images for illustrative purposes where real photos aren't suitable due to privacy, legal, or operational reasons. It said it had not received concerns that the images were mistaken for real photos.

Queensland's own guidance says AI-produced content should be clearly identified. This isn't a legal requirement yet, but it's fast becoming the baseline for public communication.

Context beyond one account

A viral Brisbane River bull shark clip drew quick speculation it was AI-generated. During the 2024 state election, the LNP circulated a clearly labelled AI video depicting the Labor leader dancing. People are primed to question authenticity - and they're watching how government handles disclosure.

Practical checklist: disclose and document AI use

  • Label clearly: State "AI-generated illustration" in the caption and include it in alt text for accessibility.
  • Use content credentials: Where possible, embed provenance (e.g., C2PA) and keep originals on record.
  • Watermark and verify: Prefer tools with invisible watermarking and run spot checks with detectors like SynthID.
  • Fit for purpose: Don't use AI art for evidence, safety-critical visuals, or anything that could be misread as a real event.
  • Approval path: Add an "AI used?" tick-box in content workflows. If yes, require disclosure and a quick risk note.
  • Recordkeeping: Log prompts, tools used, version, date, and reviewer. Make it FOI-ready (a minimal sketch of one such log follows this list).
  • Privacy and legal: Avoid likenesses of real people or brand marks that could imply endorsement.
  • Consistency: Keep a single disclosure standard across all channels. No exceptions for "just this time."
  • Crisis plan: Prepare a short statement template if a post draws authenticity questions.
  • Training: Give staff short refreshers on disclosure, risks, and acceptable use. Keep it practical, not theoretical.
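
To make the approval-path and recordkeeping items concrete, here is a minimal sketch of an FOI-ready AI-use log, assuming a simple JSONL file kept alongside the content calendar. Every field name, path, and value is illustrative rather than drawn from any departmental system; adapt it to your own workflow and records policy.

    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_PATH = Path("ai_use_log.jsonl")  # hypothetical location; point this at your records system

    @dataclass
    class AIUseRecord:
        post_id: str      # internal reference for the social post
        channel: str      # e.g. "Facebook" or "Instagram"
        tool: str         # image generator used (name and version)
        prompt: str       # full prompt text, kept for FOI requests
        disclosed: bool   # was "AI-generated illustration" stated in the caption?
        reviewer: str     # who approved the post
        risk_note: str    # short note on why AI art was acceptable here
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_ai_use(record: AIUseRecord) -> None:
        """Append one record per AI-assisted post; refuse undisclosed entries."""
        if not record.disclosed:
            raise ValueError("Disclosure is required before publishing AI visuals.")
        with LOG_PATH.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(asdict(record)) + "\n")

    if __name__ == "__main__":
        log_ai_use(AIUseRecord(
            post_id="EXAMPLE-001",
            channel="Facebook",
            tool="example-image-model v1",
            prompt="Stylised illustration of a patrol vessel at dusk",
            disclosed=True,
            reviewer="comms.officer@example.gov.au",
            risk_note="General awareness post; no real incident or person depicted.",
        ))

Refusing undisclosed entries at logging time is one way to make the "AI used?" tick-box enforceable rather than advisory.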

Tools that help

  • Google DeepMind's SynthID embeds invisible watermarks in AI-generated media, and its detector can identify them in later spot checks.
  • C2PA content credentials attach verifiable "how this was made" data to media; a minimal manifest sketch follows this list.
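
For teams experimenting with content credentials, here is a minimal sketch of a manifest that marks an image as AI-generated, assuming the open-source c2patool CLI. The field names follow c2patool's published examples and the IPTC "trainedAlgorithmicMedia" source type, but schema details and flags can change between versions, and the generator name, file names, and signing setup are placeholders.

    import json
    from pathlib import Path

    # Manifest declaring the asset as AI-generated via the IPTC
    # "trainedAlgorithmicMedia" digital source type.
    manifest = {
        "claim_generator": "example-comms-team/0.1",  # placeholder generator name
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "digitalSourceType": (
                                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                                "trainedAlgorithmicMedia"
                            ),
                        }
                    ]
                },
            }
        ],
    }

    Path("manifest.json").write_text(json.dumps(manifest, indent=2), encoding="utf-8")

    # Signing happens separately and needs c2patool plus signing credentials, e.g.:
    #   c2patool illustration.jpg -m manifest.json -o illustration_signed.jpg
    print("Wrote manifest.json; sign with c2patool to attach content credentials.")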

A note on tone and visuals

AI art can be quirky or cartoonish. That may be fine for general awareness posts, but not for compliance or enforcement updates. If it could confuse a resident about what actually happened, use a real photo or a neutral graphic - then label the source.

Bottom line

Disclosure is cheap. Rebuilding trust is not. If your team isn't comfortable labelling AI use, ask why - and fix that.

Want help upskilling your team?

For practical, job-focused AI training and policy-aware practices, see AI for PR & Communications.

