APS to embed AI across government: what it means for your agency
Artificial intelligence will be embedded across the Australian Public Service as part of a plan to lift productivity, improve service delivery, and standardise safe use. Public Service Minister Katy Gallagher announced the AI Plan for the Public Service on 12 November at the Government Innovation Showcase in Canberra.
The plan sets clear expectations: every agency will have an SES-level Chief AI Officer by July next year, a central capability will sit in the Department of Finance, and training will be mandatory for all staff. Generative AI will be made available to every public servant, with secure access through a government platform and a new Gov AI Chat tool.
Key commitments and timeline
- SES-level Chief AI Officer in every agency by July next year to guide adoption and enforce compliance.
- Finance to stand up the AI Delivery and Enablement team (AIDE) to help agencies overcome barriers, share lessons, and accelerate uptake.
- GovAI: secure, government-managed access to generative AI on every laptop, plus a whole-of-service Gov AI Chat.
- Mandatory training and clear guidance for safe, responsible use across roles and classifications.
- Vendors and consultants must disclose AI use, meet quality and transparency requirements, and accept accountability for outputs.
- Focus on avoiding vendor lock-in so the APS can adopt new models and tools as they mature.
How this affects your work
AI will support drafting, analysis, research, summarisation, and internal knowledge retrieval. That includes sensitive work such as business plans and drafts for Cabinet processes, subject to strict controls, audits, and human oversight.
Leaders are expected to be AI-literate and model good use. Teams should be ready to integrate AI into day-to-day workflows while maintaining records, privacy, and security standards.
Actions for agency leaders
- Nominate or recruit your Chief AI Officer and set up a cross-functional working group (policy, ICT, legal, security, HR, procurement).
- Map high-value use cases (writing briefs, data analysis, citizen comms, knowledge search) and set risk tiers for each.
- Update governance: model selection, data handling, red-teaming, human-in-the-loop checks, incident reporting, and audit trails.
- Stand up training pathways for all staff, plus deeper capability for power users and approvers.
- Revise procurement templates: require AI disclosure, quality controls, provenance, and accountability from contractors.
- Define metrics: quality, time saved, error rates, user satisfaction, and compliance outcomes.
Guardrails and known risks
Trials across the APS have shown variable quality and occasional inaccuracies. A recent high-profile case saw a consultancy deliver work with fabricated references: proof that provenance, verification, and accountability cannot be optional.
Automated decision-making in benefits and payments demands care. Keep a human in the loop for high-stakes decisions, log system reasoning where possible, and follow privacy and administrative law obligations. See guidance from the OAIC on automated decision-making and AI for baseline expectations.
What won't change
The plan is not a headcount reduction strategy. The Minister was clear: AI should free people to focus on work that relies on insight, empathy, and judgment. Agencies should design adoption to enhance quality and speed without losing accountability or fairness.
Practical next steps for teams
- Adopt a simple, clear AI use policy for your branch; include what's in-bounds and what's not.
- Start with small pilots in low-risk workflows; expand only after measurable gains and stable quality.
- Set "always verify" rules for facts, data, quotes, and references.
- Create standard prompts and style guides for common tasks to lift consistency.
- Document every deployment: purpose, data sources, controls, owners, and review cadence.
Training and capability
With training set to be mandatory, build a learning plan now. Prioritise foundational literacy for everyone, scenario-based practice for frontline teams, and advanced modules for system owners and approvers.
Looking ahead
This is the first version of the plan and will be updated frequently. Expect guidance to evolve as models improve, standards tighten, and case law and audits clarify acceptable use.
If you lead a team, move early: set governance, run a controlled pilot, measure outcomes, and share lessons. If you're a practitioner, learn the tools, apply them to routine tasks, and keep your quality bar high.
Bottom line: AI is being embedded across the APS. With the right guardrails, it can speed up the work that slows agencies down and keep people focused on the decisions that matter.