Victoria outlines controlled approach to AI adoption in government
Victorian government officials are moving toward managed experimentation with AI systems rather than blanket caution, balancing innovation against security risks in public sector operations.
Ian Pham from the Victorian Managed Insurance Authority presented the approach at the PSN Victorian Government Cyber Security Showcase on 3 May. The strategy shifts cybersecurity teams away from purely risk-averse practices that can block AI progress, instead aligning risk management with organisational goals.
How the model works
Agencies start AI projects in controlled environments with defined guardrails. These include using synthetic or non-sensitive data, setting usage conditions, and implementing identity and access controls. Exposure expands gradually as teams monitor performance and reassess risks.
This staged approach lets government departments test AI capabilities without exposing sensitive information or systems to unnecessary risk.
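The staged approach described above can be sketched in code, purely as an illustration. The stage names, guardrail flags, and advancement checks below are invented for this example and are not part of Victoria's actual framework:

```python
from dataclasses import dataclass

# Hypothetical sketch of a staged AI rollout with guardrails.
# All names and checks here are illustrative, not from the Victorian model.

@dataclass
class Stage:
    name: str
    allows_sensitive_data: bool   # sandbox stages use synthetic/non-sensitive data only
    requires_access_controls: bool

STAGES = [
    Stage("sandbox", allows_sensitive_data=False, requires_access_controls=True),
    Stage("limited-pilot", allows_sensitive_data=False, requires_access_controls=True),
    Stage("production", allows_sensitive_data=True, requires_access_controls=True),
]

def may_advance(stage_index: int, guardrails_met: bool, risks_reassessed: bool) -> bool:
    """A project advances to the next stage only when a next stage exists,
    current guardrails are satisfied, and risks have been reassessed."""
    return (
        stage_index + 1 < len(STAGES)
        and guardrails_met
        and risks_reassessed
    )
```

The point of the sketch is the gating logic: exposure expands one stage at a time, and only after monitoring confirms the current stage's conditions are met.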
The risks being managed
Data leakage, privacy breaches, unauthorised access, and poor data quality rank among the main concerns. Victoria's framework requires continuous visibility into these issues, supported by governance structures and staff training to build confidence in AI systems.
The shift reflects a broader challenge: cybersecurity teams have traditionally minimised threats through restriction, but AI adoption requires them to support decision-making rather than simply block it.
For managers overseeing AI implementation, the Victorian model offers a practical path: neither rushing adoption nor freezing it.