A/NZ CISOs on the hook: AI governance grows as personal liability fears mount

Across A/NZ, CISOs face bigger mandates and rising personal risk as AI governance moves to the front. They want clear ownership, human oversight, and metrics that show risk is actually falling.

Published on: Feb 26, 2026

A/NZ CISOs face bigger mandates and rising personal risk

Across Australia and New Zealand, 91.4 per cent of CISOs say the job has become more complex since they stepped into the role. The scope is wider, the stakes are higher, and AI has shifted the centre of gravity from tools to governance and risk.

Splunk's 2026 CISO report, based on responses from 650 CISOs globally, found that nearly all now own AI governance and risk management. In A/NZ, 82.9 per cent are concerned about personal liability for security incidents, enough to change how leaders make and document decisions.

AI is scaling operations, but human oversight still decides outcomes

Half of A/NZ CISOs flagged a lack of human oversight or critical decisions made by AI as a key concern. On agentic AI, 88.6 per cent said missed alerts or false positives from hallucinations are their top risk, and 84.3 per cent are strengthening AI governance and controls.

As Splunk's A/NZ regional vice president Marc Caltabiano put it: "As AI becomes woven into the fabric of business operations, the mandate is moving beyond technology investment to governance, regulatory readiness and broader executive risk ownership." He adds that leaders need "the right balance in using automation effectively while maintaining human intelligence in the approach."

If your organisation is scaling AI, anchor it to recognised frameworks like the NIST AI Risk Management Framework and baseline controls such as the Essential Eight. Both help translate ambition into measurable guardrails.

Accountability is getting personal

Beyond operational risk, 44.3 per cent of A/NZ CISOs would become whistleblowers if their organisation ignored best practices or compliance and put the business at risk. That's a signal: governance gaps are now career and board issues, not just IT risks.

Executives should expect clearer ownership maps for AI and cyber risk, more rigorous documentation, and sharper escalation paths when controls aren't met.

The bigger threat right now: technology outpacing programs

Over the next 12 months, 51.4 per cent of A/NZ CISOs say the pace of technological advancement, including AI and quantum computing, is a significant challenge, and 47.1 per cent cite the sophistication of threat actors as a major issue.

By contrast, 67.1 per cent rate geopolitical and macroeconomic uncertainty as a minimal challenge. The message: internal capability and control maturity matter more than external noise.

Skills gaps and burnout are dragging programs

Ninety per cent of respondents rate threat hunting and cyber threat intelligence as their most lacking skills. Nearly half expect some gaps to remain unfilled, and 31.4 per cent expect most gaps to persist.

Burnout is real: 50 per cent report moderate burnout and 21.4 per cent report significant burnout. That erodes incident readiness and decision quality, especially in AI-augmented environments where human-in-the-loop review is critical.

ROI is still hard-because alignment is hard

Demonstrating ROI remains difficult because security and the business are misaligned: 88.6 per cent cite conflicting priorities with the business, and 81.4 per cent point to a lack of clear KPIs.

Security leaders need outcome-based metrics that tie directly to risk reduction and resilience, not just activity and tooling.

What leaders should do now

  • Define ownership: map AI and cyber risk to specific executives, with decision rights, escalation paths, and documented accountability.
  • Set AI guardrails: require model risk assessments, human-in-the-loop for high-impact decisions, and production rollout checklists tied to control gates.
  • Upgrade KPIs: focus on loss exposure reduced, time-to-detect/contain, material incident frequency, and control effectiveness, not vanity metrics.
  • Stress-test AI in the SOC: run tabletop exercises on AI hallucinations, missed alerts, model drift, and prompt injection; track false positives/negatives as first-class metrics.
  • Close skills gaps with targeted upskilling and automation: pair threat hunting playbooks with AI-assisted detection and SOC triage. See the AI Learning Path for Cybersecurity Analysts.
  • Align the exec team: embed AI governance and security strategy in board rhythms and planning cycles. For frameworks and playbooks, explore AI for Executives & Strategy.
  • Protect the human core: implement burnout monitors (on-call load, alert volume per analyst), rotate high-intensity roles, and fund recovery windows.
  • Reinforce speak-up channels: clarify whistleblower protections and audit trails so issues surface early, before they become incidents.
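The outcome-based KPIs above (time-to-detect, time-to-contain, false-positive rate) reduce to simple arithmetic over incident records. A minimal sketch in Python follows; the record fields and sample data are illustrative assumptions, not a real SOC schema.

```python
from datetime import datetime

# Illustrative incident records; the field names here are assumptions for the sketch.
incidents = [
    {"occurred": datetime(2026, 2, 1, 8, 30), "detected": datetime(2026, 2, 1, 9, 0),
     "contained": datetime(2026, 2, 1, 11, 0), "false_positive": False},
    {"occurred": datetime(2026, 2, 3, 13, 50), "detected": datetime(2026, 2, 3, 14, 0),
     "contained": datetime(2026, 2, 3, 14, 40), "false_positive": False},
    {"occurred": datetime(2026, 2, 5, 10, 0), "detected": datetime(2026, 2, 5, 10, 0),
     "contained": datetime(2026, 2, 5, 10, 5), "false_positive": True},
]

def soc_kpis(records):
    """Mean time-to-detect (MTTD) and time-to-contain (MTTC) in minutes,
    plus the false-positive rate across all alerts investigated."""
    real = [r for r in records if not r["false_positive"]]
    mttd = sum((r["detected"] - r["occurred"]).total_seconds() for r in real) / len(real) / 60
    mttc = sum((r["contained"] - r["detected"]).total_seconds() for r in real) / len(real) / 60
    fp_rate = sum(r["false_positive"] for r in records) / len(records)
    return {"mttd_minutes": mttd, "mttc_minutes": mttc, "false_positive_rate": fp_rate}

print(soc_kpis(incidents))
```

Tracked over time, a falling MTTD/MTTC and a stable or falling false-positive rate give the board the risk-reduction evidence the report says most programs lack.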

The takeaway for management

AI is boosting capacity, but judgement still wins the game. Give your CISO clear ownership, measurable outcomes, and the air cover to enforce guardrails.

Do that, and you cut real risk while accelerating what matters: faster detection, cleaner decisions, and fewer material incidents.

