Insurers Pull Back From AI Risk as Washington Pauses Preemption Push
Two things can be true at once: companies are racing to deploy AI, and insurers don't want the bill when it goes wrong. Several major carriers have sought approval to exclude AI-related liabilities from corporate policies, even as federal efforts to rein in state AI rules are paused.
If you run finance, IT, or development, this is a clear signal. Build for accountability now, because coverage may not be there later.
What insurers are doing-and why it matters
Recent filings show large insurers exploring exclusions for claims tied to the use of agents, chatbots, and other generative systems. One carrier reportedly aims to block claims tied to "any actual or alleged use" of AI embedded in products or services. Another filed generative AI exclusions but said it has no immediate plans to implement them, keeping the option open.
Executives say the problem is uncertainty. Outputs can be unpredictable, liability chains are murky, and model providers often disclaim responsibility. We've already seen public examples: a bank apologizing after its chatbot scolded a customer, and an airline facing a lawsuit after its bot fabricated a discount.
The policy front: federal pause, state heat
A proposed federal push to challenge state AI regulations has reportedly been put on hold. That leaves companies facing a growing patchwork of state rules, with some states advancing strict disclosure and safety requirements for AI systems.
Expect more compliance variation, not less. Industry groups will keep lobbying, but near-term certainty is unlikely.
Practical steps to reduce AI liability and keep insurance viable
- Map usage: Inventory every AI touchpoint. Flag customer-facing, automated decisioning, and safety-critical flows.
- Set guardrails: Use retrieval for grounded answers, restrict system prompts, enforce content policies, and rate-limit high-risk actions.
- Keep a human in the loop: Require review for high-impact outputs (financial advice, legal claims, medical guidance, travel changes, refunds/credits).
- Log everything: Store prompts, responses, model versions, and decisions. You'll need this for audits, claims, and RCA.
- Contracts and vendors: Add AI-specific reps and warranties, IP indemnities, uptime/SLA for model gateways, and data-use limits. Require model cards and safety documentation.
- Insurance fit-check: Review E&O, cyber, CGL, and product liability for AI exclusions. Confirm notice obligations, sub-limits, and panel-counsel rules. Involve your broker early if you scale new AI features.
- Security and abuse resistance: Defend against prompt injection, data exfiltration, and tool abuse. Sanitize inputs/outputs and isolate secrets and tools.
- Testing and red teaming: Continuously probe for hallucinations, bias, jailbreaks, and unsafe actions. Gate releases behind pass/fail thresholds.
- Data hygiene: Control PII, track provenance, and avoid training on sensitive or third-party content without clear rights.
- UX accountability: Add clear disclaimers where appropriate, confirmations for high-risk actions, easy escalation to a human, and feedback loops.
- Governance: Stand up an AI policy, risk owners, and an approval process. Align to an external standard to show reasonable care.
If you need a reference point, consider adopting the NIST AI Risk Management Framework for structure and documentation. It's free, practical, and recognizable by regulators and insurers.
NIST AI Risk Management Framework
Implications by function
- Finance: Quantify loss scenarios, reserves, and ROI with risk controls priced in. Validate coverage; track any AI exclusions and sub-limits.
- Legal/Compliance: Update ToS, disclosures, and consent flows. Tighten vendor clauses. Prepare a playbook for takedowns, IP claims, and data subject requests.
- IT/SecOps: Route all AI calls through a gateway with auth, logging, DLP, and policy enforcement. Separate environments for experimentation vs. production.
- Developers/ML: Use smaller or constrained models for deterministic tasks. Add rule-based fallbacks. Monitor drift and rollback quickly.
Insurance won't save bad design
Insurers pulling back is a warning, not a surprise. Treat LLMs as untrusted components, design for checks and traceability, and keep a human in the loop where the stakes are high.
Do that, and you'll ship faster with fewer incidents-and a better chance your insurer stays in your corner.
Level up your team
If your org needs practical training on safe deployment, governance, and real use cases, explore our curated programs and certifications.
Further reading for policy and claims risk:
Your membership also unlocks: