Agentic AI in Government: Balancing Opportunity and Oversight for Australia’s Public Sector

Agentic AI can boost government efficiency by automating tasks and improving decision-making. Without strong identity governance, however, the risk of data breaches and of losing public trust rises.

Published on: Sep 09, 2025

The Promise and Peril of Agentic AI in Government

Artificial intelligence is changing how governments serve their citizens. For Australia's public sector, this shift offers significant opportunities alongside serious responsibilities.

The Minister for Industry and Innovation, Senator Tim Ayres, recently highlighted AI and the digital economy as key to Australia's productivity goals. He emphasised that AI adoption is already underway across firms and government, and urged a proactive approach to improving productivity, infrastructure, and security.

Agentic AI systems operate autonomously to achieve specific goals. They sense, decide, and act without waiting for human commands. This capability can transform government operations by automating routine tasks like case management and permit processing. Virtual agents can provide 24/7 citizen support, and AI can respond quickly to sudden spikes in demand, such as during emergencies or policy shifts. Moreover, AI-driven insights can improve decision-making with timely, data-based analysis.

The Risks Are Real

But powerful technology also brings risks. History shows us what happens when systems operate without proper oversight. The UK Post Office’s Horizon scandal is a stark example, where flawed IT led to wrongful prosecutions because of blind faith in the system. Similarly, Australia’s government sector was the second most-breached in 2024, with 63 data breaches reported in just six months. Many incidents went undetected for over a month, exposing weaknesses in oversight.

As government agencies adopt agentic AI, lessons from past failures must guide governance. Existing frameworks like GovAI outline safe AI use but don’t fully address the distinct risks of autonomous agents making independent decisions across systems. Without clear governance, transparency, and control, the benefits of agentic AI risk being overshadowed by loss of public trust and operational failures.

Quantifying the Risks

Research reveals that 72% of security professionals view AI agents as riskier than traditional machine identities. Many organisations report AI agents taking unintended actions—39% say agents have gained unauthorised system access, and 33% report sharing sensitive data inappropriately. Nearly 25% have experienced AI agents being tricked into revealing access credentials. Despite these risks, only 44% of organisations have governance policies specifically for managing AI agents.

To manage these risks, agencies should start AI projects small and keep humans in the loop as a fallback. Strong guardrails, continuous monitoring, red teaming, and adversarial testing are essential. Above all, controlling what AI agents can access is critical: AI agents need identity governance just like human users.
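
As a rough illustration of that last point, the Python sketch below shows what a least-privilege guardrail could look like: an agent's identity carries a named owner and an explicit set of allowed scopes, and anything outside those scopes is blocked and escalated to a human. The AgentIdentity class, scope names, and execute_action helper are hypothetical, invented for this example rather than drawn from any particular product or framework.

```python
# Minimal sketch (illustrative only): a least-privilege guardrail that checks an
# agent's allowed scopes before an action runs, and escalates anything outside
# that scope to the agent's human owner instead of acting autonomously.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                    # human team accountable for this agent
    allowed_scopes: set[str] = field(default_factory=set)


def execute_action(agent: AgentIdentity, scope: str, action) -> str:
    """Run the action only if the agent's identity grants the required scope."""
    if scope not in agent.allowed_scopes:
        # Out-of-scope request: block it and hand off to a human reviewer.
        return f"BLOCKED: {agent.agent_id} lacks '{scope}'; escalated to {agent.owner}"
    return action()


if __name__ == "__main__":
    permits_bot = AgentIdentity(
        agent_id="permits-triage-bot",
        owner="permits-ops-team",
        allowed_scopes={"permits:read"},
    )
    print(execute_action(permits_bot, "permits:read", lambda: "read 12 pending applications"))
    print(execute_action(permits_bot, "permits:approve", lambda: "approved an application"))
```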

Lead with Identity Governance

The research points to a looming governance gap. While 92% agree that governing AI agents is vital for security, many fall short in practice. Most AI agents need multiple identities to access systems and data, yet only 62% of organisations use identity security solutions to manage this complexity.

Government agencies must take four steps (illustrated in the sketch after this list):

  • Identify all AI agents in their environment
  • Assign clear ownership for each agent
  • Enforce least privilege access principles
  • Conduct regular access reviews and audits
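
As a rough sketch of how these four steps could fit together, the Python example below models a simple agent register: every agent is inventoried with a named owner and a minimal set of entitlements, and a helper flags agents whose access review is overdue. The RegisteredAgent class, field names, and 90-day review window are assumptions for illustration only, not a prescribed standard.

```python
# Minimal sketch (illustrative only) of an AI agent register covering the four
# steps above: inventory, ownership, least privilege, and periodic access review.
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class RegisteredAgent:
    agent_id: str
    owner: str                                            # accountable owner (step 2)
    entitlements: set[str] = field(default_factory=set)   # least-privilege scopes (step 3)
    last_reviewed: date = field(default_factory=date.today)


def agents_due_for_review(register, max_age_days=90):
    """Return IDs of agents whose last access review is older than the window (step 4)."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [a.agent_id for a in register if a.last_reviewed < cutoff]


if __name__ == "__main__":
    # Step 1: the register itself is the inventory of known agents.
    register = [
        RegisteredAgent("citizen-support-bot", "service-delivery-team", {"kb:read"},
                        last_reviewed=date.today() - timedelta(days=120)),
        RegisteredAgent("case-triage-agent", "case-management-team", {"cases:read", "cases:tag"}),
    ]
    print("Overdue for access review:", agents_due_for_review(register))
```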

AI agents should be governed like any other identity—human, machine, or third party. Identity security now goes beyond simple provisioning; it requires real-time awareness of who (or what) has access and how that access changes over time. Without this, every AI deployment carries hidden risks.

With 98% of organisations planning to expand AI agent use in the next year, identity governance will be the critical control that scales with AI adoption. It also provides the visibility needed by legal, compliance, and executive teams.

AI agents are moving into daily government operations. It’s time for every agency to ask: who is governing my AI agents? Effective identity governance offers a path to safer, smarter government services and protects against the erosion of public trust and costly operational failures.