AI readiness in insurance: where carriers stand and what to do next
A new survey of roughly 100 insurance professionals (fielded July-August 2025) spotlights a clear gap between ambition and execution on AI. Only a small slice says their tech is truly ready, while security, skills, and policy guardrails are still maturing.
Here's the signal, minus the noise, and a practical plan you can use to move faster without increasing risk.
What the numbers say
- Tech stack: 7% totally agree their stack is fully modernized; 31% mostly agree.
- Data access: 9% totally agree unstructured data is easy to analyze; 11% say the same for database accessibility. Over half say enterprise content is stuck in silos.
- Policies and governance: About half say their company has firm AI use policies. Data management/governance is stronger: 28% in total agreement, 40% mostly agree. Only 14% fully agree they have an AI council with clear risk guidelines and bias monitoring in place.
- Barriers: Data security is the top blocker at 41%. Lack of in-house skills is a significant barrier for 32%.
- Steps being taken: 49% are upskilling current staff, 43% are partnering with tech vendors, and 41% are hiring talent with AI skills.
What this means for insurers
The industry isn't short on AI ideas. The constraint is readiness: fragmented data, security concerns, and inconsistent governance. The good news: many teams are already investing in skills and smart partnerships. The next step is a focused plan that fixes data and policy basics while delivering quick wins that pay for the journey.
A practical readiness plan
- Modernize the data foundation
* Run a 60-90 day data audit: where is PII, who touches it, how is it accessed?
* Prioritize de-siloing enterprise content (policies, claims notes, endorsements, emails) with an indexed repository and APIs.
* Standardize metadata and retention; implement row/field-level controls for PII and claims data.
- Set clear AI governance
* Define acceptable use, escalation paths, and human-in-the-loop checkpoints for underwriting, claims, and customer service.
* Stand up an AI council spanning risk, legal, compliance, security, actuarial, and product. Give it decision rights and SLAs.
* Adopt a known framework, like the NIST AI Risk Management Framework, and map controls to state regs and NAIC guidance.
- Tighten security before scale
* Classify data, restrict access by default, log everything, and monitor prompts/outputs for leakage and bias.
* Review vendors for PII handling, isolation, model retraining policies, and incident response. Bake requirements into contracts.
- Build the operating model
* Define roles: AI product owner, data steward, model risk reviewer, prompt curator.
* Create a skills matrix for adjusters, underwriters, and ops; upskill by role, not one-size-fits-all. If you need a fast start, see curated options by role at Complete AI Training.
* Pair internal SMEs with vendors for co-delivery and knowledge transfer.
- Choose use cases you can govern
* High-fit starters: claims note summarization, subrogation triage, document classification, producer email assistance, underwriting prefill and risk flags.
* Require auditable outputs, redaction for PII, and a clear ROI target (cycle time, loss adjustment expense, quote-to-bind).
- Measure outcomes and risk
* Track business KPIs (time saved, accuracy vs. baseline, leakage prevented) and risk metrics (bias checks, data exposure events, override rates).
* Review monthly with the AI council; iterate or sunset quickly.
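To make "monitor prompts/outputs for leakage" concrete, here is a minimal sketch of an output guardrail that flags likely PII in a model response before it reaches a user. The regex patterns, categories, and blocking behavior are illustrative assumptions, not a vetted production ruleset; a real deployment would cover names, addresses, and your own policy-number formats.

```python
import re

# Illustrative patterns only (assumed formats, not a complete PII taxonomy).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the PII categories detected in a model response."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def guard(text: str) -> str:
    """Block or pass a response; hits should be logged for risk review."""
    hits = scan_output(text)
    if hits:
        # In practice: write to your audit store and route to human review,
        # so the AI council can track exposure events and override rates.
        return f"[BLOCKED: possible PII leak: {', '.join(hits)}]"
    return text
```

The same scan applied to inbound prompts catches staff pasting sensitive data into tools that shouldn't hold it, which feeds the "data exposure events" metric directly.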
Quick wins you can deliver in 90 days
- Claims note summarization with human review to cut adjuster admin time.
- Document intake: classify, extract, and auto-route submissions with confidence scores.
- PII redaction service for all free-text fields before model usage.
- Publish a plain-English AI use policy and a one-page do/don't for staff.
- Stand up a pilot data catalog for your top three systems feeding AI.
- Launch a focused training cohort (underwriting or claims) with a clear playbook and metrics.
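For a sense of scale, the document-intake quick win can start life as something this small: a keyword-scored classifier that auto-routes only above a confidence threshold and sends everything else to a human queue. The labels, keyword sets, and 0.7 threshold are assumptions for illustration; a production pipeline would use a trained classifier and your own document taxonomy.

```python
# Toy confidence-scored router: keyword evidence per document type.
# Labels and keywords are assumed examples, not an insurer's real taxonomy.
KEYWORDS = {
    "claim": {"claimant", "loss", "adjuster", "damage"},
    "endorsement": {"endorsement", "amend", "rider"},
    "submission": {"quote", "application", "bind"},
}

def classify(text: str) -> tuple[str, float]:
    """Return (best label, fraction of that label's keywords present)."""
    tokens = set(text.lower().split())
    scores = {label: len(tokens & kws) / len(kws) for label, kws in KEYWORDS.items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

def route(text: str, threshold: float = 0.7) -> str:
    """Auto-route confident calls; queue the rest for human review."""
    label, conf = classify(text)
    return f"auto:{label}" if conf >= threshold else f"review:{label}"
```

Anything below the threshold lands in the review queue, which gives you the auditable human-in-the-loop checkpoint the plan calls for from day one.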
Common pitfalls to avoid
- Starting with a moonshot instead of a measurable, governed pilot.
- Ignoring data quality and PII handling until after the build; fix them first.
- Skipping human oversight for decisions that affect coverage, claim outcomes, or pricing.
- Letting shadow IT spin up unvetted tools and share sensitive data.
- Locking into a single vendor without exit rights or data portability.
Bottom line
The survey points to a simple truth: AI payoff follows readiness. Fix the data seams, lock down security, give people the skills, and start with governed, high-impact use cases. Do that, and you'll see real gains while staying inside your risk appetite.