AI oversight moves in-house as Canberra backs safety institute, scraps independent advisory body

Canberra has scrapped plans for an independent AI watchdog, opting for a $29.9m AI Safety Institute inside the Department of Industry and guidance under existing laws. Agencies get faster direction and more to do.

Categorized in: AI News, Government
Published on: Dec 29, 2025

Government brings AI monitoring in-house: what it means for your agency

The federal government has scrapped plans for an independent AI advisory body and will instead stand up an AI Safety Institute inside the Department of Industry. The change, announced in December alongside the National AI Plan, replaces a proposed $21.6 million external body with a $29.9 million in-house institute.

Policy will be governed through existing laws and "targeted consultations" rather than a new AI Act. For public sector leaders, this means faster direction from the centre, less arm's-length oversight, and a bigger onus on agencies to interpret and implement guidance well.

What changed

The original plan: a permanent, external advisory body including community and business voices, plus a reshaped National AI Centre. That approach has been "superseded by a more dynamic and responsible approach," according to the Industry portfolio.

The new approach: an AI Safety Institute inside the Department of Industry, funded at $29.9 million, to track risks, analyse technical developments, and advise across government. The Institute is not independent and will operate from within the public service.

Law and oversight

Instead of drafting a new AI Act, the government will lean on existing laws, current regulators, targeted consultations, and the new institute. This keeps the legal framework familiar for agencies but raises questions about expertise and resourcing.

UNSW AI professor Toby Walsh has cautioned that current regulators and internal bodies may lack specialist capability, noting the EU has moved ahead with dedicated AI legislation. For context on that approach, see the EU AI Act.

Politics and stakeholder reaction

The opposition has criticised the scrapping of the external advisory body as a setback for business engagement. The concern: government and industry will have fewer formal channels to meet in the middle on real-world implementation issues.

Meanwhile, big spend on internal AI capability

Alongside the policy shift, roughly $225 million over four years has been allocated to the government's internal AI system, GovAI, via the December Mid-Year Economic and Fiscal Outlook (MYEFO). Most of this goes to the Department of Finance, following the public service AI adoption plan launched in November by Minister Katy Gallagher.

Translation for agencies: shared platforms and guidance are on the way. Expect central tools, policy templates, and clearer guardrails to roll out in stages.

What this means for your agency

  • Expect centralised guidance to firm up: risk thresholds, model testing expectations, incident reporting processes, and procurement signals.
  • Prepare for "existing law + guidance" compliance: map your AI use cases to privacy, discrimination, consumer, IP, security, data retention, and records obligations now (see the sketch after this list).
  • Anticipate targeted consultations: have positions ready on high-risk use, evidence standards, auditing access, and third-party model assurances.
  • Tighten model risk management: document data sources, training and fine-tuning methods, evaluation methods, and human oversight controls.
  • Procurement will need sharper clauses on safety benchmarks, incident reporting, red-teaming rights, content provenance, and decommissioning.
  • Coordinate with your regulator: confirm who will scrutinise your use cases and what evidence they will expect.
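
To make the inventory and mapping concrete, here is a minimal sketch of what a use-case record might look like, with risk rated by impact and autonomy as suggested above. The schema, field names, and scoring thresholds are illustrative assumptions, not a prescribed government format.

```python
from dataclasses import dataclass, field
from enum import Enum


class Impact(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


class Autonomy(Enum):
    ADVISORY = 1        # human makes the decision
    HUMAN_IN_LOOP = 2   # system acts, a human approves
    AUTOMATED = 3       # system acts without per-case review


@dataclass
class AIUseCase:
    """One row in an agency AI use-case inventory (illustrative schema)."""
    name: str
    owner: str                     # accountable business owner
    impact: Impact                 # effect on individuals if it goes wrong
    autonomy: Autonomy             # degree of automated decision-making
    obligations: list[str] = field(default_factory=list)  # mapped legal duties

    def risk_rating(self) -> str:
        """Rate risk by impact x autonomy, per the checklist above."""
        score = self.impact.value * self.autonomy.value
        if score >= 6:
            return "high"
        if score >= 3:
            return "medium"
        return "low"


# Example: an assistant that drafts responses to citizen enquiries.
triage_bot = AIUseCase(
    name="Enquiry triage assistant",
    owner="Service Delivery Branch",
    impact=Impact.MEDIUM,
    autonomy=Autonomy.HUMAN_IN_LOOP,
    obligations=["Privacy Act APPs", "records retention", "FOI discoverability"],
)
print(triage_bot.name, "->", triage_bot.risk_rating())  # -> medium
```

The exact thresholds matter less than having one versionable record per use case that auditors and regulators can read.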

Gaps to watch

  • Independence: with oversight pulled in-house, you may need more internal checks to maintain public trust.
  • Capacity: existing regulators may face skill and workload pressure without new legislative levers.
  • Interoperability with global rules: if you interact with the EU or similar jurisdictions, align your controls with their risk tiers and documentation norms (a simplified tier mapping is sketched below).
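
For teams aligning with the EU approach, a simplified encoding of the EU AI Act's four risk tiers might look like the following. The tier structure reflects the Act, but the example systems and control responses are illustrative, not legal advice.

```python
# Simplified view of the EU AI Act's risk tiers, for lining up internal
# controls with that framework. The mapped responses are examples only.
EU_AI_ACT_TIERS = {
    "prohibited": {
        "examples": ["social scoring by public authorities"],
        "response": "do not deploy",
    },
    "high_risk": {
        "examples": ["recruitment screening", "access to public services"],
        "response": "conformity assessment, risk management, logging, human oversight",
    },
    "limited_risk": {
        "examples": ["chatbots", "AI-generated content"],
        "response": "transparency: disclose AI involvement to users",
    },
    "minimal_risk": {
        "examples": ["spam filters"],
        "response": "voluntary codes of practice",
    },
}


def required_response(tier: str) -> str:
    """Look up the expected control posture for a given risk tier."""
    return EU_AI_ACT_TIERS[tier]["response"]


print(required_response("high_risk"))
```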

Budget snapshot

  • $29.9m: AI Safety Institute inside the Department of Industry.
  • $225m over four years: GovAI internal capability, largely within the Department of Finance.
  • $21.6m external advisory body: proposal dropped.

Action checklist for the next quarter

  • Inventory AI use across your programs and vendors; rate risk by impact and autonomy.
  • Stand up an AI risk register and change control for models, prompts, and data pipelines (see the example after this list).
  • Set baseline guardrails: human-in-the-loop, testing thresholds, incident playbooks, and content provenance.
  • Update procurement: add evaluation, red-team, and audit requirements for AI features in new and existing contracts.
  • Prepare evidence packs: model cards, data sheets, evaluation results, and privacy/security controls for scrutiny.
  • Upskill teams in policy, assurance, and practical AI use, with a focus on risk, records, and safe deployment.
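
As a starting point for the register and change-control items above, a minimal sketch of a register entry might look like this; the fields, storage locations, and version labels are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RegisterEntry:
    """One AI risk register entry with simple change control (illustrative)."""
    system: str
    risk: str
    mitigation: str
    evidence_pack: dict[str, str]        # artefact name -> location
    component_versions: dict[str, str]   # model / prompt / pipeline versions
    last_reviewed: date = field(default_factory=date.today)

    def record_change(self, component: str, new_version: str) -> None:
        """Log a version bump; in practice this would also trigger re-testing."""
        self.component_versions[component] = new_version
        self.last_reviewed = date.today()


# Hypothetical entry; the storage URIs are placeholders, not real locations.
entry = RegisterEntry(
    system="Enquiry triage assistant",
    risk="Incorrect routing delays urgent cases",
    mitigation="Human review of all 'urgent' classifications",
    evidence_pack={
        "model card": "sharepoint://ai-assurance/triage/model-card-v2",
        "evaluation results": "sharepoint://ai-assurance/triage/evals-2025Q4",
        "privacy assessment": "sharepoint://ai-assurance/triage/pia-final",
    },
    component_versions={"model": "v2.1", "prompt": "v7", "pipeline": "v3"},
)
entry.record_change("prompt", "v8")  # change control: new prompt version logged
```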

Bottom line: guidance will come from the centre, but delivery risk sits with you. Build capability, document evidence, and treat AI like any other critical system: clear owners, measurable controls, and fast feedback loops.

