Why insurers are still struggling to scale AI - and what has to change
Insurers have spent real money on AI and automation. Yet productivity gains are thin. The problem isn't the tech. It's how companies are structured, how decisions get made, and how work is designed.
Puneet Chattree, insurance industry lead at Accenture Canada, put it plainly: the barriers are structural, strategic, and cultural. If carriers look inward and ask, "What's actually holding us back?" the answers aren't in the models - they're in the operating model.
Stop confusing cost control with productivity
Many leaders still equate productivity with cutting expenses. As Chattree said, "Not every organization thinks about productivity; they think about cost." Trimming a few basis points won't change how your company performs.
Productivity is a value lever. It's about getting smarter at how outcomes are achieved - cycle times, quality, decisions, and customer experience. Start at the top: define productivity targets across the full business, not just the expense line.
- Claims: Reduce FNOL-to-payment days and loss adjustment expense (LAE), increase straight-through processing (STP).
- Underwriting: Lift quote-to-bind conversion, shrink turn times, improve hit ratios with risk precision.
- Distribution: Lower acquisition cost per policy while raising lifetime value.
Merge your AI plan with your business plan
Chattree's warning is blunt: you don't need two strategies. If your AI roadmap sits apart from underwriting, claims, and distribution goals, you'll optimize fragments and stall at pilot.
AI should be built into how the business moves: process redesign, data standards, decision rights, and team structures. You can have great models, but if a pilot needs six approvals, nothing ships at speed.
- Single strategy: Tie AI initiatives to explicit P&L and customer outcomes.
- Operating model: Fewer handoffs, fewer approvals, clearer product ownership.
- Funding: Budget by capability (e.g., "claims intake"), not one-off projects.
Escape the use case trap - think in value chains
Use cases are easy to pitch and hard to scale. Chattree calls this the "use case mentality." It optimizes a task and ignores the upstream and downstream friction that kills the benefit.
Shift to value chains. Ask, "With AI and automation, how should this entire flow work now?" That's where compounding gains live.
- Claims: Triage at FNOL, document extraction, fraud scoring, decision support, payments - one integrated pipeline, one set of metrics.
- Distribution: Instead of "personalized marketing" as a bolt-on, connect lead scoring, broker enablement, next-best action, and quoting into a single loop.
Thinking in flows forces better data design, cleaner handoffs, and measurable outcomes end to end.
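One way to picture "one integrated pipeline, one set of metrics" is as a sequence of stages that all read and write a single claim record. The sketch below is illustrative only: the stage names, extracted fields, and the fraud-score threshold are assumptions, not a real carrier system.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single claim record carried through every stage of the flow."""
    claim_id: str
    fnol_text: str
    extracted: dict = field(default_factory=dict)
    fraud_score: float = 0.0
    decision: str = "pending"

def extract_documents(claim: Claim) -> Claim:
    # Placeholder for document extraction; a real system would use
    # OCR or model-based extraction on submitted documents.
    claim.extracted = {"loss_type": "auto", "estimate": 2400.0}
    return claim

def score_fraud(claim: Claim) -> Claim:
    # Placeholder scoring rule; a real model would consume the
    # extracted features rather than a hard-coded cutoff.
    claim.fraud_score = 0.12 if claim.extracted.get("estimate", 0) < 5000 else 0.45
    return claim

def decide(claim: Claim, stp_threshold: float = 0.2) -> Claim:
    # Low fraud score -> straight-through processing; otherwise route
    # to an adjuster for review.
    claim.decision = "auto_pay" if claim.fraud_score < stp_threshold else "adjuster_review"
    return claim

PIPELINE = [extract_documents, score_fraud, decide]

def run_pipeline(claim: Claim) -> Claim:
    for stage in PIPELINE:
        claim = stage(claim)
    return claim
```

Because every stage operates on the same record, one metric (for example, the share of claims ending in straight-through payment) measures the whole flow rather than any single task.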
Rewire decision-making to move faster
Hierarchy slows deployment. If every model release requires layers of signoff, cycle time erodes your ROI before any value lands. Build guardrails, then grant authority.
- Decision rights: Name accountable owners for each value chain with clear approval thresholds.
- Two-track governance: Fast path for low-risk changes; deeper review for high-impact models.
- Release cadence: Standard 30/60/90-day pilot-to-scale playbook with pre-agreed metrics.
Reimagine the workforce - not just the work
AI at scale changes roles. Chattree's point: you can't postpone the skills plan. Decide what must be automated, augmented, or newly built - then hire and upskill against that map.
- Claims and underwriting: From manual review to exception handling and decision oversight.
- New capabilities: Model risk, prompt and retrieval design, data product management, automation ops.
- Frontline adoption: Training on new workflows and tools, not just the theory.
What to do next: a simple 90-day plan
- Weeks 1-2: Pick one value chain (e.g., auto claims). Set three metrics (STP rate, FNOL-to-payment days, LAE%). Map the flow and current blockers.
- Weeks 3-6: Redesign the flow. Stand up a data spine (intake schema, document extraction, triage scores). Define decision rights and risk thresholds.
- Weeks 7-10: Pilot with a small segment. Track lift, variance, and error types. Train frontline teams in the new workflow.
- Weeks 11-12: Scale to the next segment. Lock in the release cadence. Bake metrics into weekly ops reviews.
Metrics that prove real productivity
- Claims: STP %, cycle time, LAE%, re-open rate, leakage delta.
- Underwriting: Turnaround time, quote-to-bind %, loss ratio at bind, underwriter throughput.
- Distribution: Cost per bind, CAC payback, lifetime value, broker satisfaction.
- Tech/productivity: Time-to-deploy, approval count per release, percent of manual steps removed.
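Several of these metrics fall straight out of the claims data. As a minimal sketch (the record layout and field names are assumptions), STP rate and FNOL-to-payment cycle time can be computed like this:

```python
from datetime import date

# Hypothetical claim records: FNOL date, payment date, and whether the
# claim was processed straight through with no human touch.
claims = [
    {"fnol": date(2024, 3, 1), "paid": date(2024, 3, 4),  "stp": True},
    {"fnol": date(2024, 3, 2), "paid": date(2024, 3, 12), "stp": False},
    {"fnol": date(2024, 3, 3), "paid": date(2024, 3, 5),  "stp": True},
]

def stp_rate(claims: list[dict]) -> float:
    """Share of claims processed straight through."""
    return sum(c["stp"] for c in claims) / len(claims)

def avg_cycle_days(claims: list[dict]) -> float:
    """Mean FNOL-to-payment cycle time in days."""
    return sum((c["paid"] - c["fnol"]).days for c in claims) / len(claims)
```

Baking these two numbers into the weekly ops review makes the "productivity as value" framing concrete: the target is fewer days and a higher STP share, not just a lower expense line.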
Risk, controls, and trust
Speed requires guardrails. Establish model risk policies, human-in-the-loop thresholds, data lineage, and audit logs. Use a common framework so controls don't become blockers.
For reference, see the NIST AI Risk Management Framework. For the macro case on productivity potential, McKinsey's research is a useful primer: The economic potential of generative AI.
The bottom line
The capability exists. As Chattree put it, the gap is alignment - strategy tied to operations, AI embedded in value chains, and a workforce built for where you're going. If you keep treating productivity as cost-cutting, you'll automate the wrong things.
The winners will redesign around speed and clarity, measure what matters, and scale what works. Start small. Move fast. Prove lift. Then repeat.