AI liability is the next big underwriting problem - and buyers already want cover
At the PwC Insurance Summit, Freddie Scarratt of Gallagher Re put it plainly: AI liability is full of unknowns, and clients are already asking for coverage. The hard part is figuring out where losses sit - cyber or liability - and how to price a risk that changes with every model update.
Gallagher Re has stood up a task force to tackle AI liability and pricing. Step one: quantify the risk. Step two: decide if you're actually pricing for it.
The coverage question: cyber vs liability
If a chatbot gives harmful advice, does that trigger a cyber policy, professional liability, tech E&O, media, or something bespoke? The answer is often "it depends," which isn't helpful at claim time.
- Map coverage: identify overlaps and gaps across cyber, E&O/PI, media/copyright, and product liability. Make the position explicit.
- Clarify triggers: define what counts as an "AI event" (model output error, data poisoning, prompt injection, copyright claims, biased decisions, system unavailability).
- Use clear endorsements: either carve AI in or carve it out. Avoid grey zones that create disputes and LAE drift.
- Vendor liability: require insureds to carry through warranties/indemnities from AI vendors and data providers. Contract language matters as much as the limit.
Underwriting AI: lawsuits or live-system tests?
Scarratt raised the core question: do we underwrite AI like a traditional line with frequency/severity, or plug into systems and stress-test them? For many risks, the answer will be both.
- Ask for an AI inventory: use cases, model types (internal vs third-party), data lineage, and where outputs touch customers or regulated decisions.
- Governance evidence: policies, human-in-the-loop controls, model monitoring, bias testing, red-teaming, rollback plans, and audit trails.
- Operational resilience: fallback logic if the model fails, rate limits, content filters, and incident response specific to AI failures.
- Copyright/data posture: training data provenance, content filters, and indemnities from providers.
- Independent testing: scenario-based stress tests of prompts, jailbreaks, and adversarial inputs. Document results, not just intentions.
Capacity, capital, and aggregation
Moderator Lee Harris of the Financial Times asked the uncomfortable question: are balance sheets big enough for AI-scale claims? She cited the recent $1.5 billion award against a large language model provider for alleged copyright infringement - the biggest such recovery in US history.
Is there capacity for that kind of event? Scarratt's take: "Who knows?" Yet he noted strong interest from the Lloyd's market, with capacity raised for a new AI product launching in January.
- Set realistic aggregates: sublimits for AI events, clear waiting periods, and event definitions to manage clash risk.
- Use reinsurance smartly: quota share for growth, aggregate stop-loss for volatility, and event covers for copyright or systemic outages.
- Mind cyber tower friction: coordinate AI liability with cyber wording to avoid both double cover and orphaned loss.
For broader market context, see Lloyd's and the NAIC's AI principles.
Demand is here: chatbots, advice engines, and front-office exposure
Retail clients are rolling out chatbots and relying on AI outputs. They want cover that matches that dependence. That means new wording, tighter underwriting questions, and packaged risk engineering.
Christina Lucas of Google said the conversation is shifting from back-office efficiency to front-office value. Claims has already moved - faster fraud detection and a smoother customer experience - and the next push is complex risk.
Product direction that will actually sell
- Tiered offerings: basic AI endorsement (clarify intent), enhanced AI liability (advice/output errors), and premium form (adds copyright/media and data integrity options).
- Risk engineering bundle: pre-bind AI governance checklist, red-team testing, and vendor contract review. Price credits for strong controls.
- Clear exclusions where needed: training-data misuse, IP scraping without rights, and deliberate non-compliance. Offer buy-backs with evidence.
- Claims playbook: specialized adjusters, technical panel access, and fast forensics to preserve logs and prompts.
What carriers, brokers, and insureds should do this quarter
- Carriers: define AI event language, build an underwriting questionnaire, and pilot stress-testing for top segments.
- Carriers: set aggregates and attach reinsurance for AI-heavy portfolios to control tail risk.
- Brokers: run coverage mapping workshops for clients; document positions across cyber/E&O/media to prevent disputes.
- Brokers: push vendors for indemnities and evidence of data rights; align limits to contractual exposures.
- Corporate insureds: maintain an AI system register, log model changes, and keep auditable records of prompts/outputs tied to decisions.
- Corporate insureds: implement human checkpoints for high-impact outputs (financial, medical, legal, safety, or regulatory decisions).
- Everyone: test for jailbreaks and prompt injection; record test plans and outcomes. Underwriters will ask for it.
- Everyone: run a tabletop on an AI advice failure and a copyright claim. Time to notification and evidence preservation are critical.
Bottom line
AI liability isn't a future problem. Clients are buying, the market is responding, and wording will decide who pays. Start with clarity on coverage, evidence on controls, and a plan for capacity - then scale what works.
Upskill your team on AI risk
If your underwriting or broking team needs a fast track on AI use cases and controls, explore practical training options: AI courses by job function.
Your membership also unlocks: