Label it or watermark it: Australia steps up on AI transparency

Government urges visible labels and watermarks for AI content under voluntary guidance to curb deepfakes and rebuild trust. A National AI Plan and tighter rules are on the way.

Categorized in: AI News, Government, Legal
Published on: Dec 01, 2025

Government urges labels and watermarks for AI content as National AI Plan nears

AI developers and content producers have been advised to make AI-generated material clearly identifiable, using on-screen labels and embedded watermarks. The guidance is voluntary, but the intent is plain: reduce deception, build trust, and head off harm from deepfakes.

There's currently no legal requirement to disclose AI-generated content. That gap has allowed synthetic media to be confused with authentic footage and recordings, with real consequences for reputations, safety, and public confidence.

What the guidance asks for

The federal guidance calls for two transparency layers: visible labels that state content is AI-generated, and watermarking that embeds provenance data and is harder to strip out or alter. The more an AI system shapes the final output, the stronger the transparency signal should be.

Industry Minister Tim Ayres put it bluntly: "AI is here to stay. By being transparent about when and how it is used, we can ensure the community benefits from innovation without sacrificing trust." Several companies, including Google, already watermark AI outputs.

Why this matters for government and legal teams

The eSafety Commissioner reports deepfake image-based abuse is now appearing weekly in Australian schools. Risks run wider: fraud, misinformation, blackmail, and reputational damage are all in scope when synthetic media is convincing and unlabeled.

For agencies and in-house counsel, provenance and disclosure aren't just technical niceties; they're essential controls for safety, integrity, and lawful processing.

Legislative signals

Independent Senator David Pocock has introduced a bill to prohibit digitally altered or AI-generated depictions of an individual's face or voice without consent. He argues the government's broader AI response has been too slow since consultations began more than two years ago.

Former industry minister Ed Husic has called for a dedicated AI Act to provide a flexible framework as the technology develops. Expect this debate to intensify as the policy package takes shape.

National AI Plan: what to expect

The government is preparing to release a National AI Plan, informed by years of consultation. Expect a risk-based approach: stricter rules for higher-risk systems, a lighter touch for low-risk tools, and "mandatory guardrails" targeting misuse and harm.

There's tension on the economic side. At a recent productivity roundtable, the Productivity Commission warned that overly prescriptive guardrails could choke an estimated $116 billion opportunity and urged lawmakers to map genuine legal gaps before legislating.

Alongside the plan, Senator Ayres announced an AI Safety Institute to monitor and respond to AI-related risks and help build trust through shared testing and assurance.

Immediate actions for agencies, regulators, and in-house counsel

  • Adopt provenance standards: Implement watermarking and content credentials (for example, the C2PA standard) across AI image, audio, and video workflows; a minimal sketch follows this list.
  • Label consistently: Require visible "AI-generated" labels for public-facing outputs, internal communications that may circulate, and any content with reputational or safety impact.
  • Update procurement and contracts: Require providers to preserve metadata, apply watermarks, and disclose model usage. Add audit rights and incident reporting for deepfake misuse.
  • Strengthen consent and likeness policies: Explicitly prohibit generating or distributing content depicting a person's face or voice without documented consent.
  • Stand up a response playbook: Define takedown, notification, and evidence-preservation steps for suspected deepfakes affecting officials, staff, or the public.
  • Risk assessments: Run data protection impact assessments (DPIAs) and safety reviews for AI deployments that touch identity, minors, elections, health, or law enforcement.
  • Training and comms: Train staff to spot synthetics, verify sources, and escalate. Coordinate with media, HR, and security on response protocols.
  • Recordkeeping: Log model versions, prompts, training data sources (where known), and provenance indicators to support audits and FOI obligations.
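
To make the provenance, labelling, and recordkeeping items above concrete, here is a minimal Python sketch (using the Pillow imaging library) that stamps a visible label on an AI-generated image and writes a sidecar provenance record. The file paths, field names, and label placement are assumptions for illustration only; a production pipeline would embed signed content credentials (for example, via a C2PA toolchain) rather than a loose JSON sidecar.

    # A minimal sketch only: the file paths, manifest fields, and label placement
    # below are illustrative assumptions, not requirements of the Australian
    # guidance or of the C2PA specification. Requires Pillow (pip install pillow).
    import hashlib
    import json
    from datetime import datetime, timezone

    from PIL import Image, ImageDraw, ImageFont

    def label_and_record(src_path: str, out_path: str, model_name: str, prompt: str) -> dict:
        """Apply a visible 'AI-generated' label and write a sidecar provenance record."""
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Visible disclosure layer: a plain-text label in the lower-left corner.
        draw.text((10, img.height - 30), "AI-generated", fill="white",
                  font=ImageFont.load_default())
        img.save(out_path)

        # Provenance layer: hash the labelled file and record how it was made.
        # A production workflow would embed a signed manifest in the asset itself;
        # this sidecar stand-in shows the fields worth capturing for audits.
        with open(out_path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).hexdigest()
        record = {
            "asset": out_path,
            "sha256": digest,
            "generator": model_name,   # model name/version (recordkeeping)
            "prompt": prompt,          # prompt used, where appropriate to retain
            "visible_label": True,     # disclosure applied to the output itself
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }
        with open(out_path + ".provenance.json", "w", encoding="utf-8") as fh:
            json.dump(record, fh, indent=2)
        return record

Audio and video outputs would need format-specific disclosures (spoken or on-screen), but the same provenance fields apply.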

Open questions to monitor

  • Where will the "mandatory guardrails" land? Which sectors, which use cases, and how will enforcement work across platforms?
  • Standards interoperability: Will labels and watermarks survive common edits and cross-posting? How will platforms verify provenance at scale?
  • Speech, satire, and journalism: How will consent rules and exceptions be balanced with public interest and free expression?
  • Alignment with global regimes: How closely will Australia track the risk-based model in the EU AI Act?

Bottom line

Voluntary guidance sets the tone; enforceable rules are coming. Government agencies, councils, and legal teams that operationalise labelling, watermarking, and consent now will be better positioned for compliance, and better protected against the next deepfake incident.

Need structured upskilling for policy and legal teams working on AI governance? Explore focused options here: AI courses by job.

