Underwriting AI's messy learning curve: governance, liability, and cyber risk

AI lifts underwriting performance and premium growth, but it also reshapes the risk picture, exposing governance and data gaps. Treat it like a control system: test, monitor, and keep humans in the loop.

Categorized in: AI News, Insurance
Published on: Nov 20, 2025

The new frontier of underwriting AI risk

AI is now embedded across insurance operations, from underwriting to policy admin. The upside is real: firms report meaningful lifts in producer performance and premium growth. The downside is a changing risk picture that exposes gaps in governance, training, and data usage.

As one regional underwriting leader put it: "AI is like a child - constantly learning but not always processing that learning as expected." That simple idea sets the tone for how insurers should evaluate AI exposure: inspect how it's built, how it's trained, and how it's supervised.

Why this matters to insurance leaders

AI isn't theory anymore. McKinsey has reported measurable gains in core insurance metrics, including a 10-20% improvement in new-agent success rates and a 10-15% increase in premium growth. Those gains come with risk transfer questions your teams need to price and control.

Source: McKinsey & Company

The core underwriting problem

Technology shifts daily. Keeping pace with what companies build is hard; verifying how their models are trained is harder. Training data, methods, and guardrails directly influence legal, security, and reputational exposure.

Two friction points slow risk discovery: AI touches multiple systems (an expanding attack surface), and critical details are proprietary (the "secret sauce"). You still need enough detail to rate the risk.

Governance is your first control

AI governance should not be a policy on paper. Treat it like a living control system: defined roles, approval gates, model testing, security checks, and ongoing monitoring. Lawsuits tied to discrimination and bias, IP infringement, and privacy violations are already here.

Anchor programs to known guidance where possible. The NIST AI Risk Management Framework is a practical baseline for policy, testing, documentation, and assurance.
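For teams that want something concrete to start from, the sketch below shows one way to organize that mapping: the four NIST AI RMF core functions (Govern, Map, Measure, Manage) paired with example controls and the evidence an underwriter might ask to see. The control and evidence names are illustrative assumptions, not language from the framework itself.

```python
# Illustrative sketch only: pairs the four NIST AI RMF core functions with
# example controls and supporting evidence. Names are assumptions, not
# quotations from NIST.
AI_RMF_CONTROL_MAP = {
    "Govern": {
        "controls": ["Board-approved AI policy", "Defined ownership and approval gates"],
        "evidence": ["governance charter", "roles and responsibilities matrix"],
    },
    "Map": {
        "controls": ["AI system inventory, including shadow AI", "Documented intended use and context"],
        "evidence": ["inventory export", "use-case register"],
    },
    "Measure": {
        "controls": ["Bias, privacy, and hallucination testing pre-launch", "Ongoing drift monitoring"],
        "evidence": ["test reports", "monitoring dashboards"],
    },
    "Manage": {
        "controls": ["Incident playbooks and kill switch", "Vendor model-change notifications"],
        "evidence": ["tabletop exercise records", "contract clauses"],
    },
}

if __name__ == "__main__":
    for function, detail in AI_RMF_CONTROL_MAP.items():
        print(f"{function}: {', '.join(detail['controls'])}")
```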

What underwriters should ask on submissions

  • AI inventory: What systems, models, and third-party tools are in use (including "shadow AI")?
  • Use cases: Where is AI in production (claims, underwriting, chatbots, fraud, pricing, manufacturing workflows)?
  • Data provenance: Are training and fine-tuning datasets licensed, permissioned, and documented?
  • Model oversight: Who approves use, tests outputs, and signs off pre-deployment? Is there a "kill switch"?
  • Security: Access controls, secrets management, prompt injection defenses, red-teaming, and monitoring in place?
  • Third parties: Contracts, DPAs, and indemnities for model vendors and data providers?
  • Testing: Bias, toxicity, privacy, and hallucination testing before launch and on a schedule?
  • Records: Audit trails, model cards, and policy exceptions retained for legal defensibility?
  • Incident playbooks: AI-specific detection, escalation, takedown, and customer notification procedures?
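One way to make those questions auditable is to capture the answers in a structured intake record instead of free-form notes. The sketch below is a hypothetical schema; the field names and flag logic are assumptions, not an industry standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    """Hypothetical intake record for one AI system disclosed on a submission."""
    name: str                      # e.g., "claims triage model"
    vendor: str                    # third-party provider, or "in-house"
    use_cases: List[str]           # where it runs in production
    training_data_licensed: bool   # data provenance documented and permissioned?
    human_signoff_required: bool   # pre-deployment approval gate in place?
    kill_switch: bool              # can the insured disable it quickly?
    tests_performed: List[str] = field(default_factory=list)  # bias, privacy, etc.
    open_gaps: List[str] = field(default_factory=list)        # follow-ups for the broker

def flag_red_flags(record: AISystemRecord) -> List[str]:
    """Return simple underwriting flags derived from the intake answers."""
    flags = []
    if not record.training_data_licensed:
        flags.append("unlicensed or undocumented training data")
    if not record.human_signoff_required:
        flags.append("no pre-deployment human approval")
    if not record.kill_switch:
        flags.append("no kill switch")
    if not record.tests_performed:
        flags.append("no documented testing")
    return flags
```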

Red flags that increase loss potential

  • No AI governance board or unclear ownership across Legal, Security, and Compliance
  • Unlicensed data in training or fine-tuning (copyright, trade secrets, personal data)
  • High-impact use with low testing (hiring, lending, claims decisions, pricing)
  • Public AI tools used with sensitive data and no guardrails
  • No monitoring of model drift, prompt abuse, or adversarial inputs
  • Vendor dependence without visibility into their controls or model changes
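On the drift point above, monitoring does not have to be elaborate to be useful. A common starting point is the Population Stability Index (PSI), which compares how a model input or score is distributed in a baseline period versus today. The sketch below is a minimal illustration; the bin count and the rule-of-thumb thresholds are assumptions to tune per model.

```python
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    """PSI between a baseline and current sample of one model input or score.

    PSI = sum((p_cur - p_base) * ln(p_cur / p_base)) over shared bins.
    """
    baseline = np.asarray(baseline, dtype=float)
    current = np.asarray(current, dtype=float)
    # Bin both periods on the baseline's range so the distributions are comparable.
    lo, hi = baseline.min(), baseline.max()
    base_counts, edges = np.histogram(baseline, bins=bins, range=(lo, hi))
    cur_counts, _ = np.histogram(np.clip(current, lo, hi), bins=edges)
    # A small floor keeps empty bins from dividing by zero.
    p_base = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    p_cur = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)
    return float(np.sum((p_cur - p_base) * np.log(p_cur / p_base)))

# Common rule of thumb (an assumption, tune per model):
# < 0.1 stable, 0.1-0.2 watch, > 0.2 investigate.
print(population_stability_index(np.random.normal(0, 1, 5000),
                                 np.random.normal(0.5, 1, 5000)))
```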

Where AI already helps insurance

  • Underwriting: prefill, submission triage, risk signals, portfolio insights
  • Claims: document extraction, FNOL triage, fraud cues, adjuster assistance
  • Actuarial: pattern detection, scenario testing, faster analysis cycles
  • Policy admin and ops: workflow automation, customer service, routing

Human oversight is non-negotiable

"Train it, test it, then test it again." That's the mindset. Pre-deployment reviews and continuous monitoring keep small errors from becoming claim events. AI will learn; it won't always learn the right thing.

Legal risk spikes when training data lacks proper permissions or licensing. That's where IP and privacy claims start, followed closely by reputation damage. Keep a clean chain of custody for data and decisions.
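That chain of custody can be as simple as an append-only log that ties each training dataset to its content hash, source, license, and approver. The sketch below is illustrative only; the field names are assumptions rather than a prescribed format.

```python
import datetime
import hashlib

def provenance_entry(dataset_path: str, source: str, license_terms: str, approved_by: str) -> dict:
    """Build one provenance record for a training or fine-tuning dataset.

    The content hash lets you later prove which exact file backed a model version.
    """
    with open(dataset_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "dataset": dataset_path,
        "sha256": content_hash,
        "source": source,            # where the data came from
        "license": license_terms,    # the permission you are relying on
        "approved_by": approved_by,  # who signed off on its use
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Append each record to a write-once store (or at least a version-controlled
# JSON-lines file) so the log itself cannot be quietly rewritten later.
```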

Practical steps for insurers and brokers this quarter

  • Add AI sections to cyber and tech E&O applications; request model governance evidence
  • Map controls to NIST AI RMF; require third-party attestations where feasible
  • Stand up AI incident tabletop exercises (prompt injection, data leakage, faulty decisions)
  • Create underwriting playbooks for high-impact AI use cases (claims, pricing, employment)
  • Audit contract language: IP warranties, training-data rights, privacy, and model-change notices
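For the high-impact playbooks in particular (claims, pricing, employment), a useful minimum bias test is the disparate impact ratio: each group's favorable-outcome rate divided by the best-treated group's rate, with values below 0.8 (the "four-fifths" rule of thumb from US employment practice) flagged for review. The sketch below is illustrative; the group labels, counts, and threshold handling are hypothetical.

```python
def disparate_impact_ratio(outcomes: dict) -> dict:
    """Favorable-outcome rate per group, expressed relative to the best-treated group.

    `outcomes` maps group label -> (favorable_count, total_count).
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items() if total > 0}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Hypothetical counts: flag any group whose ratio falls below four-fifths.
ratios = disparate_impact_ratio({
    "group_a": (480, 1000),
    "group_b": (300, 1000),
})
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b ratio = 0.625 -> flagged for review
```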

Support from Munich Re Specialty

Munich Re Specialty's team is actively underwriting AI-related exposure and helping clients close gaps through the Reflex Cyber Risk Management™ program. Policyholders gain confidential access to cybersecurity training, legal and technology consulting, risk-surface monitoring, education, and tabletop exercises.

Explore Munich Re Specialty's cyber solutions

Upskill your teams

If your governance plan includes training for underwriting, claims, and security teams, consider structured AI learning paths by job function.

Browse AI courses by job role

The bottom line

AI isn't going away, and it will get more autonomous over time. Expect progress and a few growing pains. Treat governance like a control system, keep people in the loop, and price the risk with eyes wide open.

The views and opinions expressed here are those of the sources cited and do not necessarily reflect the views of Munich Re or its affiliates.

