Australia's National AI Plan: Existing Laws First, Stronger Rules Later if Needed
Australia has released its National AI Plan. Instead of a new AI Act with hard rules, the government will lean on existing, technology-neutral laws and current regulators to manage AI for now.
- Mandatory "guardrails" are on hold while gaps in the law are assessed.
- A $30 million AI Safety Institute will monitor risks and advise ministers and industry from next year.
- Priority areas: data centres, workforce skills, and targeted legal clarifications with states and territories.
What changed
The earlier push for 10 mandatory guardrails and a standalone, risk-based AI Act has been paused. Instead, the government will apply current legal frameworks and regulatory expertise, with ongoing refinement of the plan as the tech and risks mature.
This shift follows business concerns that new rules could choke investment. The government's position: regulate as much as necessary, but as little as possible, at least for now.
The legal footing (what applies today)
Expect enforcement to come through familiar regimes. For legal teams, the practical questions live inside existing obligations rather than a new statute.
- Consumer law: Misleading or deceptive conduct, unfair practices, disclosure of AI-generated content, and representations about accuracy or safety.
- Privacy and data: Privacy Act (and state equivalents), consent and purpose limits, employee records carve-outs, cross-border flows, and security of personal information.
- Copyright and IP: Use of training data, text and data mining, derivative works, and model outputs; the government has flagged a copyright review.
- Product liability and negligence: Defects, safety, duty of care in AI-enabled products and services.
- Anti-discrimination and human rights: Bias in hiring, credit, insurance, and access to services; explainability and contestability.
- Workplace relations and surveillance: Rostering, monitoring, automated decisions; state surveillance devices laws and consultation duties.
- Sector regulators: ASIC/APRA for financial services, TGA for health, ACCC for consumer protection, and security regimes for critical infrastructure.
AI Safety Institute: role and influence
The institute will scan for emerging risks, test assumptions, and recommend targeted fixes where existing laws fall short. It is advisory, not a new super-regulator, but its guidance will shape compliance expectations and may inform future enforcement priorities.
Business groups had argued for this approach. The Productivity Commission recommended pausing the guardrails pending an audit of legal gaps, and industry group DIGI urged building on current regulation rather than creating a new regime.
Data centres, energy, and procurement clauses
Australia is attracting heavy data-centre investment, with projected electricity demand rising sharply this decade. The plan links future builds to new renewable capacity, positioning centres as firm offtakers and potential demand balancers.
For counsel: focus on data residency, uptime and latency SLAs, incident reporting, security accreditation, sovereign capability, energy offtake and ESG disclosures, and change-control for AI infrastructure. These clauses will carry more weight as AI workloads scale.
Workplace: surveillance, bias, and rostering
The plan flags potential future action to protect employees from AI-driven surveillance and discriminatory rostering. The government will review workplace regulation to keep workplaces fair and safe.
Legal teams should get ahead of this: map automated decisions affecting workers, conduct bias testing, build contestability channels, and document human oversight. Expect scrutiny from unions, safety regulators, and the Fair Work system.
Immediate actions for in-house counsel
- Map AI use and risk: Inventory models, vendors, datasets, and use cases; classify by impact on consumers, employees, and regulatory exposure.
- Tighten contracts: Warranties on training data provenance, IP indemnities, security and privacy controls, audit rights, incident reporting, change-management, and model performance baselines.
- Consumer and disclosure: Prevent misleading claims about capability or safety; set policy on disclosing AI-generated content where it matters to users.
- Privacy and data governance: DPIAs, data minimisation, retention rules, cross-border restrictions, and vendor due diligence.
- Accountability and oversight: Assign owners for AI systems, log decisions, keep testing and monitoring evidence, and maintain a complaints channel even if not mandated.
- Bias and safety testing: Pre-deployment and ongoing tests; document fixes; require the same from vendors.
- Employee impacts: Consult where required, provide notice on monitoring, and ensure lawful, proportionate surveillance settings.
- Upskill your team: Give legal, risk, and product teams a shared baseline on AI risk and compliance, with training matched to each role.
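The first action, mapping AI use and classifying it by impact, can be sketched as a simple register. This is a minimal illustration only: the field names and risk tiers below are assumptions for demonstration, not categories defined in the plan or in any Australian law.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI-use register (illustrative fields)."""
    name: str
    vendor: str
    datasets: list = field(default_factory=list)
    affects_consumers: bool = False
    affects_employees: bool = False
    automated_decision: bool = False

def risk_tier(uc: AIUseCase) -> str:
    """Assumed classification rule: automated decisions that affect
    people rank highest; any consumer or employee impact ranks medium."""
    if uc.automated_decision and (uc.affects_consumers or uc.affects_employees):
        return "high"
    if uc.affects_consumers or uc.affects_employees:
        return "medium"
    return "low"

# Example register entries (hypothetical vendors and use cases).
register = [
    AIUseCase("resume screening", "VendorX", ["applicant CVs"],
              affects_employees=True, automated_decision=True),
    AIUseCase("internal code assistant", "VendorY"),
]

for uc in register:
    print(f"{uc.name}: {risk_tier(uc)}")
```

Even a lightweight register like this gives counsel a defensible starting point: the high-tier entries are the ones to prioritise for bias testing, contestability channels, and documented human oversight.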
What to watch next
- Copyright review outcomes on training data and outputs.
- Privacy reforms that could tighten consent, automated decision notices, and penalties.
- Clarifications with states and territories on consumer and surveillance rules.
- Guidance from the AI Safety Institute on high-risk deployments and testing expectations.
- Workplace reforms addressing AI surveillance, bias, and rostering fairness.
Bottom line
No standalone AI Act yet. Compliance pressure will still grow through existing laws, sector regulators, and institute guidance. Build controls now, document them well, and you'll be ready if the government decides stronger measures are needed later.