AI makes mistakes? There's insurance for that
Insurers and reinsurers are starting to write cover that pays when AI screening tools make specific types of errors. The early focus is US mortgage lending, where model risk and operational exposures are easier to isolate from rate, macro, and regulatory drivers.
Key points for carriers and brokers
- Cover triggers when borrower defaults exceed a model's predictions strictly due to "excess errors" by the AI.
- Munich Re and Greenlight Re are providing capacity; Armilla validates model performance; MKIII bundles the insurance with its screening platform.
- Premiums scale with expected error rates; scope is deliberately narrow to avoid rate, macro, and policy change effects.
- Some carriers are seeking regulatory permission to exclude AI-related losses from standard forms to curb correlated, ambiguous exposures.
What's actually insured
The product ring-fences model failure. If defaults run higher than the model predicted and the variance is attributable to the AI tool's own mistakes, the policy responds. Losses tied to interest-rate shifts, economic shocks, or new regulation fall outside the grant of cover.
This is a clean separation: technology error gets insured; conventional credit and market risk do not. It reduces basis risk in claims handling and limits disputes over causation.
Who's participating
MKIII, a start-up serving credit unions and community banks, has bundled this cover into its AI screening service. On its platform, nearly all credit decisions are machine-led, with one staff member spending about three hours a day on borderline reviews.
On the capacity side, Munich Re is directly covering AI misfire risk, while Greenlight Re is supporting through reinsurance capacity. Armilla validates model performance and helped secure backing for MKIII's software.
How the trigger works
The trigger is tied to measurable model performance. If actual defaults exceed the model's predicted rate beyond a defined threshold, and analysis attributes the gap to "excess errors" by the AI, the policy pays. Think of it as a narrow, performance-linked structure rather than a traditional indemnity for broad credit deterioration.
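As a minimal sketch, the payout mechanics might look like the following; the threshold, attribution share, exposure, and limit figures are illustrative assumptions, not terms from any actual wording.

```python
# Illustrative payout trigger for a performance-linked model-error cover.
# All parameter names and values are hypothetical, not from any real policy.

def trigger_payout(
    actual_default_rate: float,     # observed default rate on covered loans
    predicted_default_rate: float,  # model's stated prediction for the cohort
    attribution_share: float,       # share of the gap attributed to AI "excess errors"
    materiality_threshold: float = 0.005,   # gap must exceed 50 bps to respond
    limit: float = 10_000_000.0,            # assumed policy limit
    exposure: float = 250_000_000.0,        # assumed covered loan balance
) -> float:
    """Return the indemnity owed, zero if the trigger conditions fail."""
    gap = actual_default_rate - predicted_default_rate
    if gap <= materiality_threshold:
        return 0.0  # performance within tolerance: no claim
    # Only the portion of the gap attributed to model error is covered;
    # rate, macro, and regulatory effects are carved out upstream.
    covered_gap = gap * attribution_share
    loss = covered_gap * exposure
    return min(loss, limit)

# Example: defaults ran 1.2% against a predicted 0.4%, with 60% of the
# gap attributed to model error by the agreed causation analysis.
print(trigger_payout(0.012, 0.004, attribution_share=0.60))
```

The key design point: the policy never responds to credit deterioration as such, only to the attributed slice of under-prediction above an agreed tolerance.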
Pricing and model uncertainty
Underwriters accept that AI is probabilistic and will make mistakes. Pricing reflects the expected error rate and tolerance bands agreed in the wording. Better-calibrated models with lower historical error rates carry lower premiums; higher-variance systems pay more.
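A toy rating sketch makes that scaling concrete; the loss-cost-plus-volatility form and the loading factor are assumptions for illustration, not any carrier's actual rating formula.

```python
# Toy premium indication: scales with expected error rate and its variance.
# The linear form and loading factor are illustrative assumptions.

def indicative_premium(
    expected_error_rate: float,   # historical excess-error rate from validation
    error_rate_std: float,        # variability of that rate across cohorts
    exposure: float,              # covered loan balance
    expense_loading: float = 1.35,  # expenses, profit, uncertainty margin
) -> float:
    # Expected loss cost plus a volatility charge on the error distribution.
    expected_loss = expected_error_rate * exposure
    volatility_charge = 2.0 * error_rate_std * exposure  # ~2-sigma buffer
    return (expected_loss + volatility_charge) * expense_loading

# A well-calibrated model (low mean, low variance) prices cheaper
# than a higher-variance system on the same exposure.
print(indicative_premium(0.001, 0.0005, 250_000_000))  # calibrated model
print(indicative_premium(0.003, 0.0020, 250_000_000))  # high-variance model
```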
Scale and use case
According to MKIII, lenders on its platform paid millions of dollars in premiums to secure tens of millions in limit, supporting hundreds of millions in home loan origination. The core benefit for lenders: potential capital relief tied to recognized risk transfer of model error.
Why some carriers are cautious
AI-related losses can be correlated, hard to attribute, and easy to dispute under legacy policy language. That's why several insurers have sought permission to exclude AI losses in standard forms while they build purpose-built wordings and controls.
The current approach favors tight definitions, measurable triggers, and clear boundaries between tech error and macro factors. It's pragmatic and reduces silent exposure.
What to do next: a practical checklist
- Wording: Define "excess error," attribution standards, data rights, and dispute resolution. Exclude macro, rate, and regulatory impacts explicitly.
- Data and evidence: Require model documentation, version control, training data lineage, monitoring dashboards, and audit trails.
- Validation: Use independent testing/benchmarking to set baseline error rates and acceptable variance bands (see the sketch after this checklist).
- Trigger design: Tie payouts to out-of-sample performance versus stated predictions, with materiality thresholds.
- Aggregation: Map accumulation across clients using similar models, data vendors, or cloud dependencies.
- Capital: Align limits, attachment points, and reinsurance with quantified error distributions and tail scenarios.
- Governance: Mandate model risk controls consistent with recognized frameworks and ongoing performance reporting.
- Claims: Pre-agree causation methodology, time windows, and data access to shorten cycle time.
- Regulatory: Track filings for AI exclusions and any guidance on model risk transfer in credit portfolios.
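For the validation and trigger design items above, a simple bootstrap over out-of-sample results can set the baseline error rate and a tolerance band; the holdout data and band width below are hypothetical.

```python
# Sketch: estimate a baseline error rate and tolerance band from
# out-of-sample results via a bootstrap. Inputs are illustrative.

import random

def baseline_error_band(errors: list[int], n_boot: int = 2000,
                        percentile: float = 97.5) -> tuple[float, float]:
    """errors: 1 if the model mis-scored a loan, else 0 (out-of-sample).
    Returns (mean error rate, upper percentile of the bootstrap distribution).
    The upper bound can anchor the materiality threshold in the wording."""
    n = len(errors)
    boot_means = []
    for _ in range(n_boot):
        sample = [errors[random.randrange(n)] for _ in range(n)]
        boot_means.append(sum(sample) / n)
    boot_means.sort()
    upper = boot_means[int(len(boot_means) * percentile / 100)]
    return sum(errors) / n, upper

# Hypothetical holdout: 10,000 decisions, 120 mis-scores.
holdout = [1] * 120 + [0] * 9880
random.shuffle(holdout)
mean_rate, band_upper = baseline_error_band(holdout)
print(f"baseline {mean_rate:.4f}, trigger only above {band_upper:.4f}")
```

Setting the trigger above the upper band keeps ordinary sampling noise out of the claims process, which is what the materiality threshold is for.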
Why this matters for the sector
This is a shift toward granular, performance-linked cover that lives or dies by measurable model outcomes. It opens a path to underwrite AI use in financial services without absorbing broad credit or market risk.
The hard part is attribution and accumulation. Getting those two right will determine whether this niche scales or stalls.
Helpful resources
- NIST AI Risk Management Framework - common language for controls, testing, and oversight.
- SR 11-7: Model Risk Management Guidance - still the baseline for model governance in finance.
Upskill your team
If you're building internal literacy on AI risk, model governance, and controls, see resources curated for insurance and finance professionals here: Complete AI Training - Courses by Job.