Insurance Customers Are Open to AI, But They Don't Think They'll Benefit Most
Fresh data from J.D. Power shows something you've likely felt in your inbox: customers are willing to try AI, but they don't trust it to work in their favor. Sixty-eight percent believe insurers will capture most or all of the gains from AI. Only 26% think benefits will be shared equally.
They see convenience. They don't see clear value to them yet.
Source: J.D. Power insurance research
Where AI Fits Right Now
- Automated claim status updates (24%)
- Billing management (23%)
- Basic customer service answers (21%)
These are low-friction wins that reduce wait time and headaches. They improve the experience without touching core judgment.
Where Trust Breaks
- 47% are somewhat or very uncomfortable with AI processing claims
- 33% want limits on AI in pricing until bias and ethics concerns are addressed
- 30% are open to partial use with strong safeguards, explainability, and compliance
- Only 15% support fully using AI for pricing
The line is clear: customers accept AI for updates and self-service. They hesitate when it affects payouts or price.
Why Customers Feel This Way
Virtual assistants save time. That's an obvious benefit. AI in pricing or underwriting isn't as visible, and customers suspect it could work against them.
Without proof of fairness and a clear appeal path, trust stalls.
What Carriers Should Do Next
- Double down on service automation. Claims status, billing, IDV prompts, FNOL intake. Measure cycle time, CSAT, and deflection, then share the wins with customers.
- Keep humans in the chair for decisions. Use AI for triage, flagging, and recommendations. Require human review for liability, coverage, and price changes.
- Explain the role of AI in plain language. "What AI does. What a human decides. How you can opt out or request a review." Put this in-app, on quotes, and in EOBs.
- Prove fairness. Set bias tests, stability checks, and drift monitoring. Publish high-level results and commit to third-party audits aligned with the NIST AI Risk Management Framework.
- Offer control. Let customers opt in to AI-assisted pricing or request a human-only review. Make the appeal path obvious and fast.
- Stand up model governance. Cross-functional review (underwriting, actuarial, claims, legal, compliance, security, CX). Version control, documentation, incident playbooks.
- Shadow-test before you ship. Run AI as a shadow model on pricing or claim decisions. Compare outcomes, fairness metrics, and complaint rates before turning anything on.
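The shadow-test step can be made concrete with a small sketch. This is a hypothetical illustration, not any carrier's production tooling: the `CaseResult` record, the "approve"/"deny" labels, and the `shadow_report` helper are all assumptions introduced here. The idea is that the AI scores the same cases the current process handles, its output is logged but never acted on, and you compare agreement and per-group approval rates before turning anything on.

```python
from dataclasses import dataclass

@dataclass
class CaseResult:
    case_id: str
    production_decision: str   # decision actually issued ("approve"/"deny")
    shadow_decision: str       # what the AI would have decided (logged only)
    group: str                 # cohort used for the fairness comparison

def shadow_report(results: list[CaseResult]) -> dict:
    """Summarize model/process agreement and per-group approval-rate gaps."""
    agree = sum(r.production_decision == r.shadow_decision for r in results)
    by_group: dict[str, list[CaseResult]] = {}
    for r in results:
        by_group.setdefault(r.group, []).append(r)
    # Positive gap: the shadow model approves more often than production did.
    approval_gap = {
        g: (sum(r.shadow_decision == "approve" for r in rs) / len(rs))
           - (sum(r.production_decision == "approve" for r in rs) / len(rs))
        for g, rs in by_group.items()
    }
    return {"agreement_rate": agree / len(results),
            "approval_gap_by_group": approval_gap}

# Toy data: four shadowed cases across two cohorts.
results = [
    CaseResult("c1", "approve", "approve", "A"),
    CaseResult("c2", "deny",    "approve", "A"),
    CaseResult("c3", "approve", "approve", "B"),
    CaseResult("c4", "deny",    "deny",    "B"),
]
print(shadow_report(results))
# agreement_rate 0.75; group A gap +0.5 (shadow model is more lenient there)
```

A large per-group gap like group A's is exactly the kind of signal that should hold up a rollout, even when the headline agreement rate looks acceptable.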
Messaging That Builds Buy-In
- "AI speeds updates; licensed adjusters and underwriters make the call."
- "You can request a human review at any point."
- "We test for bias every release and share the results at a high level."
- "Here's what you get: faster cycle times, clearer bills, fewer repetitive questions."
Operational Checklist
- Pick 2-3 low-risk use cases and track: handle time, CSAT, deflection, complaint rate.
- For pricing, run A/B or champion-challenger with fairness thresholds and rollback triggers.
- Create an appeals SLA (e.g., 48-72 hours) and publish it.
- Log all AI-assisted decisions with reason codes customers can read.
- Review models quarterly for drift, bias, data leakage, and unapproved proxies.
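The "fairness thresholds and rollback triggers" item can be sketched as a simple gate. This is an illustrative assumption, not a compliance standard: the `fairness_gate` name and the 0.8 default (echoing the familiar four-fifths heuristic) are choices made here, and real thresholds should come from your legal and actuarial review.

```python
def fairness_gate(approvals: dict[str, tuple[int, int]],
                  threshold: float = 0.8) -> bool:
    """approvals maps group -> (approved, total). Returns True if the
    challenger passes; False means roll back to the champion model."""
    rates = {g: a / t for g, (a, t) in approvals.items() if t > 0}
    worst, best = min(rates.values()), max(rates.values())
    # If no group has any approvals, the ratio is undefined; don't block.
    return best == 0 or (worst / best) >= threshold

# Passes: approval rates 0.50 and 0.45, ratio 0.90 >= 0.8
print(fairness_gate({"group_a": (50, 100), "group_b": (45, 100)}))  # True
# Fails: rates 0.50 and 0.30, ratio 0.60 < 0.8 -> trigger rollback
print(fairness_gate({"group_a": (50, 100), "group_b": (30, 100)}))  # False
```

Wiring a check like this into the champion-challenger pipeline makes the rollback trigger automatic rather than a judgment call made after complaints arrive.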
What to Measure
- CSAT/NPS for AI-assisted vs human-only interactions
- Claim cycle time vs satisfaction and re-open rates
- Appeals and reversal rates
- Fairness metrics across protected classes and geographies
- Regulatory inquiries and DOI complaints
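Drift, one of the quarterly review items above, is commonly tracked with the Population Stability Index. A minimal sketch, assuming pre-binned score distributions; the bin proportions below are invented for illustration, and the rough rule of thumb that PSI above ~0.25 signals major drift is a convention, not a regulatory threshold.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned score distributions.
    expected/actual are per-bin proportions that each sum to 1."""
    eps = 1e-6  # guard against empty bins in the log ratio
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at model approval
current  = [0.10, 0.20, 0.30, 0.40]   # score bins this quarter
print(round(psi(baseline, current), 3))  # well above the ~0.25 warning line
```

Running this against each quarter's scoring data, per segment, turns "review models quarterly for drift" into a number you can alert on.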
Bottom Line
People like AI for convenience. They don't yet trust it with judgment calls that change their price or payout.
Earn trust by showing your math, keeping humans accountable, and sharing gains with the policyholder. Do that, and adoption stops being a fight.
If your team needs practical upskilling across AI, governance, and workflow automation, explore AI courses by job.