Home Affairs lays groundwork for secure generative AI on PROTECTED data
The Department of Home Affairs is setting the security foundations for federal use of generative AI with sensitive government records. Recent supplier briefings have focused on safe use with information classified as 'OFFICIAL', with the next phase moving into policies for 'PROTECTED' data.
These sessions have drawn strong industry attendance, with over 80 suppliers across government technology supply chains participating. The briefings are anchored to an AI policy advisory released in early October.
What's allowed today
Under current policy, agencies can use generative AI with 'OFFICIAL' information without extra vetting, provided they work with developers and suppliers on a prescribed approved list (around 18 in total). The list includes OpenAI (ChatGPT), Anthropic (Claude), and Google's Gemini via Google Australia, which is approved under the Hosting Certification Framework.
Agencies may use products from other providers for 'OFFICIAL' data, but only after completing additional checks to confirm they are free from Foreign Ownership, Control or Influence (FOCI) risks. Even with approved suppliers, agencies remain responsible for configuration, data handling, and recordkeeping.
What's next: preparing for PROTECTED data
Home Affairs has begun engaging suppliers on policies for the more sensitive 'PROTECTED' classification. Expect tighter controls, stricter assurance, and stronger contractual requirements before any production use.
The department developed its procurement policy in consultation with the Digital Transformation Agency and the Australian Signals Directorate, reinforcing a whole-of-government approach to risk, sovereignty, and assurance. Both agencies publish broader guidance worth consulting.
Whole-of-government direction
The government's policy direction is clear: give every public servant secure access to generative AI from their laptop. Each department and agency will also appoint a Chief AI Officer by July next year to drive governance, adoption, and guardrails.
This shift signals standardised tooling, baseline controls, and auditable workflows across the service. Agencies should align capability plans now to meet those timelines.
Practical actions for agencies now
- Map current AI use cases to data classifications; keep pilots limited to 'OFFICIAL' until new guidance lands for 'PROTECTED'.
- Use approved suppliers where possible; document controls and configuration choices for assurance reviews.
- If considering non-listed vendors for 'OFFICIAL', complete and record FOCI checks and data flow diagrams.
- Stand up an interim AI lead and working group to prepare for the incoming Chief AI Officer role.
- Run small, well-scoped pilots with clear success criteria, human oversight, and logging of prompts and outputs (see the sketch after this list).
- Update records management to capture AI-generated outputs and decision support notes.
- Revisit security architecture: network egress rules, data residency settings, PII handling, and redaction workflows.
- Brief procurement and legal on new terms: data use, model training restrictions, incident response, and audit rights.
- Deliver targeted training for staff on safe prompting, sensitive data handling, and acceptable use.
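To make the logging and redaction points concrete, here is a minimal, illustrative Python sketch, not a prescribed pattern. It assumes a hypothetical call_model function standing in for whichever approved product an agency uses, applies a simple regex-based redaction pass before the prompt leaves the agency boundary, and writes an auditable JSON record for each exchange. Real deployments would use a vetted redaction service and the agency's records management system rather than ad hoc regexes and a local file.

```python
import json
import re
import uuid
from datetime import datetime, timezone

# Hypothetical stand-in for an approved generative AI product's API.
# Replace with the vetted client library for your chosen supplier.
def call_model(prompt: str) -> str:
    return "model response placeholder"

# Illustrative redaction rules for common PII patterns. A production
# workflow needs an assured redaction service, not these examples.
REDACTION_RULES = [
    (re.compile(r"\b04\d{2} ?\d{3} ?\d{3}\b"), "[MOBILE-REDACTED]"),  # AU mobile-like
    (re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"), "[TFN-REDACTED]"),       # tax file number-like
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL-REDACTED]"),     # email address
]

def redact(text: str) -> str:
    for pattern, replacement in REDACTION_RULES:
        text = pattern.sub(replacement, text)
    return text

def logged_generate(prompt: str, user_id: str, classification: str = "OFFICIAL") -> str:
    """Redact the prompt, call the model, and write an auditable log record."""
    safe_prompt = redact(prompt)
    response = call_model(safe_prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "classification": classification,
        "prompt": safe_prompt,
        "response": response,
    }
    # Append-only JSON Lines file as a stand-in for the agency's records system.
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response
```

Captured this way, prompts and outputs can feed both assurance reviews and records management obligations, and the classification field makes it straightforward to confirm that pilots stayed within 'OFFICIAL' scope.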
Supplier landscape
Industry engagement has been broad, with attendance from Google Australia, Amazon Web Services, IBM Australia, Macquarie Telecom, Oracle Australia, Microsoft, Deloitte, and others. Approvals relate to specific products and configurations that meet hosting and FOCI requirements; compliance is not a blanket pass.
Agencies should verify scope, deployment model, and data pathways for each use case. Accountability remains with the agency, regardless of a vendor's approval status.
Upskill your teams
If your workforce needs structured, job-relevant training on safe and productive AI use, explore practical learning paths here: AI courses by job.
The policy path is set: start with secure, auditable use of 'OFFICIAL' data, strengthen governance, and prepare for tighter controls as 'PROTECTED' guidance is finalised. Move methodically, document decisions, and build capability that will stand up to scrutiny.