AI risk through an insurance lens: what's changing, what worries underwriters, and what to do next
AI has moved from side topic to centre stage in insurer conversations. The core shift: translating AI risk into real liabilities, real claim scenarios, and clear underwriting signals. That means fewer abstract debates and more scrutiny on controls, oversight, and loss history.
In a recent discussion, Technology Partner Tom Maasland and Litigation Partner Andrew Horne unpack how insurers are thinking about AI right now, where claims may arise, and what professional firms and businesses need to do to stay insurable as AI becomes mainstream.
From cyber to AI: the risk conversation has shifted
Insurers are still alert to classic cyber exposures, but AI is now a headline concern. The focus is practical: where can AI create or amplify negligence, breach of confidence, IP infringement, discrimination, defamation, safety harm, or regulatory breaches?
The underwriting question is simple: does the insured's AI use increase frequency or severity of claims, and are there controls strong enough to prevent, detect, and respond when AI goes wrong?
Professional reliance on AI without human oversight
Insurers are seeing a growing pattern: professionals leaning on AI outputs as if they were verified facts. That's where losses come from: hallucinated citations in court filings, incorrect advice, misstatements, and process failures that should have been caught with basic review.
Consequences have already included court sanctions, regulatory referrals, reputational harm, and financial loss. Expect exclusions to tighten where firms can't demonstrate credible human-in-the-loop checks for any AI that influences client deliverables.
It's not just law: consulting, health, and retail have felt it too
Poorly supervised AI tools have triggered incidents across sectors: botched client reports, misleading marketing copy, unsafe recommendations, flawed triage, and inappropriate customer interactions. The pattern is consistent: weak guardrails create unintended outcomes.
Insurers interpret these events as signals. If AI can produce an action that a reasonable professional should have prevented, expect closer scrutiny on training, thresholds, and escalation paths before any AI output reaches a client or consumer.
Confidentiality, privilege, and training data risk
Entering confidential or privileged information into generative tools can create exposure if data is retained or used to train models. That risk compounds where terms allow broad use, or where vendor security is unclear. It also raises IP questions if models reproduce third-party material.
What insurers want to hear: that you use enterprise-grade, closed-circuit tools where needed; you've negotiated contract terms on data use and retention; you segregate sensitive inputs; and you run supplier due diligence aligned to your data classification and risk appetite.
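To make "segregate sensitive inputs" concrete, here is a minimal sketch of a pre-submission screen that blocks text carrying obvious confidentiality or privilege markers before it reaches an external tool. The marker patterns and the send_to_external_tool callable are illustrative assumptions, not a real integration; a production control would rely on your own data classification scheme and tooling.

```python
import re

# Illustrative markers only; a real control would use the firm's own
# data classification labels and a proper classification/DLP service.
SENSITIVE_PATTERNS = [
    r"(?i)\bprivileged\b",
    r"(?i)\bconfidential\b",
    r"(?i)\bwithout prejudice\b",
]

def is_sensitive(text: str) -> bool:
    """Return True if the text matches any sensitive-content marker."""
    return any(re.search(pattern, text) for pattern in SENSITIVE_PATTERNS)

def submit_prompt(text: str, send_to_external_tool) -> str:
    """Forward the prompt to an external tool only if it passes the screen."""
    if is_sensitive(text):
        raise PermissionError("Blocked: use an approved enterprise tool for this content.")
    return send_to_external_tool(text)
```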
What insurers expect right now: minimum viable governance
- Clear AI policy and use cases: where AI is allowed, where it isn't, and why (a minimal policy sketch follows this list).
- Human oversight: defined review steps for high-impact outputs; accountability sits with a named person, not "the system."
- Data controls: no sensitive inputs into public tools; approved enterprise tools for anything confidential or privileged.
- Vendor management: documented due diligence, contract terms on data use, security, uptime, and incident response.
- Model provenance: clarity on sources, fine-tuning data, prompt/response logging, and update/change control.
- Bias and safety checks: testing for harmful outputs; escalation and kill-switch procedures.
- Training and awareness: staff know the policy, the red lines, and how to use AI safely in their role.
- Incident readiness: treat AI failures like cyber events. Triage, contain, notify, learn.
- Regulatory awareness: you track applicable rules and sector guidance; legal sign-off on sensitive use cases.
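As an illustration only, several of the items above can be captured in a simple, machine-readable policy that tooling can enforce. The tool names, use cases, and owner below are hypothetical placeholders, not recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsePolicy:
    # Tools approved for confidential or privileged material (hypothetical names)
    approved_enterprise_tools: set = field(default_factory=lambda: {"enterprise-llm-internal"})
    # Public tools permitted only for non-sensitive, low-impact drafting
    public_tools_low_risk_only: set = field(default_factory=lambda: {"public-chat-tool"})
    # Use cases that always require documented human review before release
    human_review_required: set = field(default_factory=lambda: {
        "client_advice", "court_filing", "marketing_copy", "safety_recommendation",
    })
    # Accountability sits with a named person, not "the system"
    risk_owner: str = "General Counsel"

POLICY = AIUsePolicy()

def requires_human_review(use_case: str) -> bool:
    """Check whether a given use case must pass human review under the policy."""
    return use_case in POLICY.human_review_required
```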
New Zealand context: adoption is up, frameworks lag
Despite fast AI uptake, many New Zealand businesses are behind on policy, controls, and compliance, as highlighted in Datacom's 2025 State of AI Index Research Report. Insurers will assume higher uncertainty, and price it in, where governance is light or undocumented.
Closing that gap is now a prerequisite for affordable cover, especially for professional indemnity, cyber, D&O, and tech E&O.
How AI underwriting may evolve
Expect a path similar to cyber insurance: more detailed questionnaires and evidence requests. Carriers will probe purpose, governance, security, provenance, and regulatory awareness, then calibrate premiums, retentions, exclusions, and coverage breadth accordingly. Typical questions may include:
- What AI systems are in use, and for what functions or decisions?
- Who owns AI risk at the executive and board level?
- What policy governs AI use, and how is compliance monitored?
- Which tools are enterprise-grade vs. public, and how is data protected?
- How are outputs validated before client or customer exposure?
- What testing covers bias, safety, and failure modes?
- What's your incident playbook for AI-related harm?
- How do you manage vendor contracts, model updates, and change control?
- What steps ensure legal and regulatory compliance across jurisdictions?
Practical steps for insurers
- Refine proposal forms to separate low-risk assistive use from high-impact decisioning.
- Align exclusions and endorsements to clear control failures (e.g., no human review where required, use of unapproved tools for sensitive data).
- Request evidence: AI policy, training logs, vendor terms, and sample validation workflows.
- Price to controls and culture: reward documented governance; surcharge weak oversight.
- Track incident trends (especially hallucinations, data leakage, unsafe recommendations, and IP claims) and refresh wording accordingly.
Practical steps for insureds (professional firms and businesses)
- Publish a plain-English AI policy and enforce it. Ban sensitive data in public tools.
- Implement human-in-the-loop checkpoints for anything client-facing or safety-critical (see the sketch after this list).
- Use enterprise AI with contractual protections on data use and retention.
- Log prompts/outputs for QA and audit; review edge cases and escalation outcomes.
- Train staff, then test for understanding. Treat AI like a junior analyst that needs supervision.
- Run a tabletop for an AI failure event. Tighten your response plan based on gaps.
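Below is a minimal sketch of what a human-in-the-loop checkpoint with prompt/output logging might look like in practice. The reviewer step and the JSONL log location are assumptions for illustration; a real workflow would plug into your own review and records systems.

```python
import json
import datetime
from pathlib import Path

LOG_PATH = Path("ai_output_audit.jsonl")  # illustrative location, not a standard

def log_interaction(prompt: str, output: str, reviewer: str, approved: bool) -> None:
    """Append one audit record per AI interaction for QA and later review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "reviewer": reviewer,
        "approved": approved,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def release_to_client(prompt: str, output: str, reviewer: str, reviewer_approves: bool) -> str:
    """Nothing client-facing is released without a named reviewer signing off."""
    log_interaction(prompt, output, reviewer, reviewer_approves)
    if not reviewer_approves:
        raise ValueError("Output rejected at human review; escalate per the AI policy.")
    return output
```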
Helpful frameworks and training
To formalise controls, many teams use the NIST AI Risk Management Framework as a baseline for governance and assurance. For staff enablement, targeted role-based training makes oversight real, not theoretical.
Episode details and how to get in touch
Information in this episode is accurate as at 30 January 2026.
If you need legal advice on any of these topics, please contact Andrew Horne, Tom Maasland or our Litigation team. You can also email us at techsuite@minterellison.co.nz. If you found this useful, rate, review, or follow MinterEllisonRuddWatts wherever you get your podcasts.
Additional resources mentioned
- Datacom's 2025 State of AI Index Research Report
- MinterEllisonRuddWatts publication: AI risks: What do insurers want to know about your use of AI?
- Far Out 2026
Bottom line: AI is now a core underwriting issue. Firms that show clear purpose, tight controls, and real human oversight will keep cover accessible and pricing sane. Those without it will feel the friction: first at renewal, then in claims.