Council AI in local government: Cabonne sets benefits and boundaries in policy
Artificial intelligence is moving from curiosity to everyday tool. Cabonne Council has set out where it fits, what it can and can't do, and how staff should use it. Its new Generative AI strategic policy backs experimentation while keeping privacy, security, and human oversight at the centre.
Why this policy now
Council staff asked for clear guardrails and responsibilities as AI tools show up in daily work. Deputy Mayor Jamie Jones put it plainly: "AI has some really good productivity gains but we want to ensure that council as a whole is using it appropriately and ensuring that we have the proper protective privacies and securities in place and our data isn't breached."
In short: explore the gains, protect the data, keep people accountable.
Where AI can help today
- Drafting routine documents and emails
- Summarising lengthy reports and meeting notes
- Preparing communications material for the community
- Streamlining repetitive admin tasks and simple workflows
- Sorting and grouping information faster
These are assistive uses. They cut manual time without handing over judgement.
The boundaries that matter
- No autonomous decisions: AI tools cannot make decisions on behalf of council. Staff remain responsible.
- Human-in-the-loop: All AI outputs must be checked, verified, and edited by a person before use or release.
- Transparent use: Staff must appropriately disclose when generative AI contributed to the work.
- Data protection first: Any personally identifiable information must be de-identified before it goes into an AI tool.
- Standard risk checks apply: Any technical tools or automations still go through existing privacy, security, and procurement processes.
What this means for council teams
Start with low-risk, internal tasks. Build confidence, then expand. Keep AI as an assistant, not a decision-maker.
- Create a short "approved tools and use cases" list so staff know what's in-bounds.
- Set a simple disclosure line for documents (e.g., "This draft was assisted by generative AI and reviewed by [name/title].").
- Adopt a de-identification checklist and restrict uploads to non-sensitive content only; a minimal first-pass scrub is sketched after this list. See guidance from the Office of the Australian Information Commissioner.
- Run tools through existing ICT security, privacy impact, and records management checks. The NSW AI Assurance Framework is a useful reference.
- Keep a log of significant AI-assisted content and decisions for audit and continuity.
- Offer short, practical training for staff on prompts, verification, and data handling.
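To make the de-identification step concrete, here is one way a first-pass check could look: a minimal Python sketch that swaps obvious identifiers for placeholders before anything is pasted into an AI tool. The patterns and placeholder labels are illustrative assumptions, not the council's actual process, and a pattern scrub never replaces human review; names, addresses, and indirect identifiers still need manual attention.

```python
import re

# Minimal first-pass scrub before text goes anywhere near an AI tool.
# The patterns below are illustrative assumptions, not an exhaustive PII
# list; a person must still review against the de-identification checklist.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"(?:\+61[ -]?|0)\d(?:[ -]?\d){8}\b"),  # AU-style numbers
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

if __name__ == "__main__":
    sample = "Ring Jane on 0412 345 678 or email jane.citizen@example.com."
    print(scrub(sample))
    # Prints: Ring Jane on [PHONE REMOVED] or email [EMAIL REMOVED].
    # Note the name "Jane" survives; exactly why human review still matters.
```

The point of a sketch like this is to catch the easy leaks automatically so the human check can focus on the subtler ones.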
Early trials, measured approach
Cabonne is trialling tools now, with eyes wide open. "It'll help staff streamline processes and reports and be able to quickly communicate with the community but we need the appropriate safeguards in place which is what this policy sets out," Cr Jones said.
That's the balance: speed where it's safe, controls where it's needed, and people in charge throughout.
A quick checklist you can use
- Define allowed use cases and banned inputs (e.g., personal or confidential data).
- Require human review and disclosure for any AI-assisted outputs.
- Set approval for tools, plugins, and automations via existing risk pathways.
- Train teams on prompt basics, bias checks, and fact verification.
- Track usage and outcomes (a simple log format is sketched below); adjust policy with what you learn.
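For the tracking item, one minimal shape the usage log could take is an append-only CSV, as in the Python sketch below. The file name, column names, and example values are assumptions for illustration; a council would adapt these to its own records management system.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location; in practice this would live in the records system.
LOG_PATH = Path("ai_usage_log.csv")

def log_ai_use(tool: str, task: str, reviewer: str, outcome: str) -> None:
    """Append one AI-assisted work record for audit and continuity."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:  # write the header once, when the log is first created
            writer.writerow(["timestamp_utc", "tool", "task", "reviewer", "outcome"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(timespec="seconds"),
            tool, task, reviewer, outcome,
        ])

# Example entry; the tool and names are illustrative only.
log_ai_use("generic chat assistant", "Draft community newsletter item",
           "A. Officer, Communications", "Edited and approved for release")
```

Even a flat file like this gives an audit trail. The value is in logging consistently, not in the tooling.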
Helpful resources
- NSW Government AI Assurance Framework
- OAIC: De-identification and the Privacy Act