Australia's National AI Plan puts growth first, safety later
On 2 December 2025, the Federal Government released its long-awaited national plan for artificial intelligence. The document leans into economic opportunity and productivity, signalling that existing laws will do most of the work for now. Dedicated AI safety legislation has been deferred. Critics argue that leaves a gap in protections as adoption accelerates.
What's in focus
The plan prioritises industry uptake, job creation, and public sector efficiency. It suggests privacy, consumer, and safety laws already on the books are enough to manage AI risks in the near term. That means agencies will need to apply current frameworks with more consistency and better documentation. Expect pressure to deliver value quickly while proving risks are under control.
What this means for government teams
- AI projects will move forward, but scrutiny will increase. Be ready to show your business case, risk controls, and auditing approach in plain language.
- "Existing laws apply" puts the onus on you to map each use case to legal bases and policy settings. Treat this as part of your design process, not an afterthought.
- Without new safety laws yet, governance has to carry more weight: human-in-the-loop, logging, testing, and clear accountability.
Where the plan is being challenged
Stakeholders warn the plan underplays risks from misuse and system failure. The big worries include misinformation, opaque models, biased outcomes, and unclear liability in high-risk settings. There's also concern about deepfakes and the impact on critical services if AI goes wrong.
Current levers you already have
- Privacy and data protection obligations under the Privacy Act (e.g., collection limits, purpose, security, and transparency). See the Office of the Australian Information Commissioner (OAIC) for guidance.
- Consumer protection and unfair practices under existing law, including misleading claims about AI systems.
- Work health and safety duties where AI affects staff or the public.
- Anti-discrimination and human rights obligations when models influence decisions about people.
- Ethics guidance such as Australia's AI Ethics Principles.
A practical 90-day plan for agencies
- Stand up a cross-functional AI working group (policy, legal, procurement, ICT, security, records, comms).
- Inventory every AI pilot and tool in use. Classify by business value, data sensitivity, and risk level.
- Create a lightweight AI risk register: model source, data inputs, evaluation results, human oversight, and incident response contacts.
- Set baseline guardrails: human review for high-impact decisions, logging of prompts/outputs, red-teaming before go-live, and clear fallback procedures.
- Update procurement checklists: model transparency, data handling, content filters, security posture, IP terms, audit rights, and exit clauses.
- Tighten data governance: retention limits, synthetic data where possible, and access controls for sensitive material.
- Train your team on safe use and policy. If you need a structured path by role, see AI courses by job.
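The inventory and risk-register steps above can be sketched in code. This is a minimal illustration, not a mandated schema: every field name, value, and contact address below is an assumption chosen for the example.

```python
from dataclasses import dataclass, asdict
import csv
import io

@dataclass
class AIRiskEntry:
    # All field names are illustrative, not a prescribed government schema.
    system: str            # name of the AI pilot or tool
    model_source: str      # vendor, open-source, or in-house
    data_inputs: str       # what data the model receives
    data_sensitivity: str  # e.g. "public", "official", "sensitive"
    risk_level: str        # e.g. "low", "medium", "high"
    human_oversight: str   # where human review sits in the workflow
    incident_contact: str  # who to call when something goes wrong

def export_register(entries: list[AIRiskEntry]) -> str:
    """Serialise the register to CSV so it can be shared with auditors."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(entries[0])))
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
    return buf.getvalue()

# Hypothetical entry for demonstration only.
register = [
    AIRiskEntry(
        system="Correspondence triage",
        model_source="vendor LLM",
        data_inputs="incoming public emails",
        data_sensitivity="sensitive",
        risk_level="high",
        human_oversight="officer approves every routing decision",
        incident_contact="ai-incidents@agency.example",
    ),
]

# Classification by risk level, as suggested in the inventory step.
high_risk = [e.system for e in register if e.risk_level == "high"]
```

Even a flat CSV like this gives the working group something concrete to review each month, and it is trivial to migrate into a proper GRC tool later.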
Controls to build into every AI project
- Purpose fit: Clear use case, measurable outcomes, and kill criteria if performance slips.
- Legal map: Which laws apply, how you comply, and where human review sits.
- Testing: Bias checks, robustness tests, prompt injection tests, and stress tests with edge cases.
- Accountability: Named owner, decision thresholds, and an incident playbook.
- Transparency: User notices, public-facing FAQs for significant services, and contact points for complaints.
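The accountability and transparency controls above can be combined into a small gate: log every prompt/output pair and route high-impact decisions to human review. A minimal sketch follows; the threshold value, scoring input, and logger name are all assumptions for illustration.

```python
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

# Illustrative cut-off: anything scoring at or above this goes to a human.
IMPACT_THRESHOLD = 0.7

def decide(prompt: str, model_output: str, impact_score: float) -> dict:
    """Log the prompt/output pair and choose a route based on impact.

    impact_score is assumed to come from your own assessment step
    (e.g. decision type, affected cohort size); it is not a model API.
    """
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": model_output,
        "impact": impact_score,
        "route": "human_review" if impact_score >= IMPACT_THRESHOLD else "auto",
    }
    # Append to whatever audit log your agency already operates.
    log.info(json.dumps(record))
    return record

# Hypothetical high-impact decision: routed to a named human reviewer.
result = decide("Assess benefit eligibility", "Eligible", 0.9)
```

The point of the sketch is the shape, not the numbers: a named threshold, a structured log line per decision, and an explicit route field that an auditor can filter on later.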
What to watch next
- Whether government introduces targeted measures for deepfakes, high-risk use cases, or model transparency.
- Procurement guidance specific to AI, including dataset disclosures and evaluation standards.
- Consistency across jurisdictions on watermarking, content provenance, and incident reporting.
- Funding or incentives tied to safety practices, not just adoption metrics.
If you wait, risks grow
- Shadow AI spreads without controls, creating audit and security gaps.
- Procurement lock-in with vendors who cannot meet future standards.
- Weak documentation makes it hard to defend decisions to auditors, courts, or the public.
Bottom line
The plan sends a clear signal: push for economic wins while relying on existing rules. For agencies, that means move, but move with discipline. Build lightweight governance now, so you don't have to rebuild projects later under pressure.