Albanese Government Scraps AI Advisory Board, Backs New In-house AI Safety Institute

Australia scraps a permanent AI advisory board, relying on existing consultations and a new in-house AI Safety Institute. Agencies should follow current law and seek AISI testing.

Published on: Dec 12, 2025

Albanese government scraps permanent AI advisory board: what it means for public sector leaders

Australia will not proceed with a permanent AI advisory body. The Department of Industry, Science and Resources (DISR) confirmed that the previously budgeted panel has been dropped in favour of existing consultation channels and a new in-house institute.

For agencies, this signals a pivot from standing external oversight to internal technical capability inside government. Your AI risk posture will lean on current laws, sector regulators, and targeted consultations rather than a single, ongoing advisory forum.

What changed

In the 2024-25 budget, $21.6 million was set aside to reshape the National AI Centre and stand up an advisory body drawing on civil society, industry, and academia. DISR has now confirmed "appointments to the AI advisory body will not proceed".

Instead, the government will rely on "existing mechanisms and targeted consultations", alongside a new Australian AI Safety Institute (AISI) within DISR.

A new centre of gravity: AISI

The AISI is expected to receive $29.9 million through MYEFO and will sit inside DISR. Its role: provide in-house technical capability to test and evaluate emerging AI systems.

Unlike the scrapped advisory body, AISI will not embed external experts in a formal standing structure. Staffing and governance details are still being finalised, with more information due early next year.

Political reactions

Ministers say the approach is "more dynamic and responsible", with ongoing engagement across expert communities as needed. Critics argue the shift weakens independent oversight and shortchanges business on consistent guidance.

Shadow Minister Alex Hawke called the move a blow to business and warned against turning AI into an industrial relations fight. Independent Senator David Pocock cautioned against letting AI "rip" and urged stronger input from independent researchers and civil society. The government maintains existing laws already apply to AI harms, and regulators remain responsible within their domains.

Context: other moves paused

The temporary AI Expert Group has wrapped up its work, and the government has chosen not to adopt its recommended guardrails for high-risk uses. The National AI Plan confirmed Australia will lean on existing legal and regulatory frameworks rather than new AI-specific legislation at this stage.

In practice, direction will now come from sector regulators, targeted consultations, and AISI-led analysis rather than from a permanent advisory body with a diverse external base.

What this means for departments and agencies

Treat this as a compliance-first environment with practical testing support coming from inside government. Expect more technical evaluation and guidance from AISI, and fewer standing forums convening external voices by default.

The immediate task is to align AI initiatives to current law, your regulator's guidance, and documented risk controls. Don't wait for new legislation or a central advisory panel.

Immediate actions to stay on track

  • Map your AI use cases by risk. Flag high-risk contexts (safety, financial impact, essential services, rights and protections) and document intended controls.
  • Anchor to existing law. Consider privacy, consumer law, safety, anti-discrimination, cybersecurity, records, procurement, and sector-specific obligations.
  • Assign accountable owners. Name a senior responsible officer for AI risk and a technical lead for model evaluation and incident response.
  • Procure with testing conditions. Require pre-deployment testing, red-teaming, data provenance, model update logs, and kill-switches in contracts.
  • Adopt standards-based practices. Use risk and quality frameworks (e.g., the NIST AI Risk Management Framework) for repeatable controls and audits.
  • Run impact assessments for high-risk deployments. Cover safety, fairness, explainability, human oversight, and contingency plans.
  • Establish monitoring and reporting. Track model drift, error rates, and user complaints; define thresholds for rollback and escalation (see the sketch after this list).
  • Engage external voices proactively. Without a standing advisory board, convene your own civil society, academic, and domain expert panels for critical projects.
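
To make the monitoring point concrete, below is a minimal Python sketch of a threshold-based health check. The metric names, threshold values, and actions ("rollback", "escalate") are illustrative assumptions for a hypothetical agency runbook, not requirements set by DISR or the AISI.

# A minimal sketch of threshold-based monitoring for a deployed model.
# Metric names, thresholds, and actions are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class MonitoringThresholds:
    max_error_rate: float = 0.05        # proportion of incorrect outputs tolerated
    max_drift_score: float = 0.2        # e.g. a population stability index on key inputs
    max_complaints_per_week: int = 10   # user complaints attributed to the system


def evaluate_health(error_rate: float,
                    drift_score: float,
                    complaints_per_week: int,
                    thresholds: MonitoringThresholds) -> str:
    """Return an action based on agreed rollback and escalation thresholds."""
    breaches = []
    if error_rate > thresholds.max_error_rate:
        breaches.append("error_rate")
    if drift_score > thresholds.max_drift_score:
        breaches.append("drift")
    if complaints_per_week > thresholds.max_complaints_per_week:
        breaches.append("complaints")

    if not breaches:
        return "continue"
    # In this sketch, an accuracy breach triggers rollback; other breaches escalate for review.
    return "rollback" if "error_rate" in breaches else "escalate: " + ", ".join(breaches)


if __name__ == "__main__":
    thresholds = MonitoringThresholds()
    print(evaluate_health(error_rate=0.03, drift_score=0.35,
                          complaints_per_week=4, thresholds=thresholds))
    # prints "escalate: drift"

In practice, a check like this would be wired into an agency's existing monitoring pipeline, with each outcome logged so that rollback and escalation decisions leave an audit trail.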

What to watch next

Look for AISI's operating model, services, and interfaces with agencies. Clarify how testing requests will be prioritised, what evaluation artefacts will be provided, and how results will inform regulator expectations.

Expect more emphasis on demonstrable testing, documentation, and audit trails, especially in high-risk settings and wherever sensitive datasets are involved.

Resources

Upskilling your team

If your unit is standing up AI governance, testing, or procurement guardrails, structured upskilling helps. See curated learning paths by role here: AI courses by job.

