Insurers Gain from AI but Lack Governance to Prove It
Insurance companies are seeing measurable benefits from artificial intelligence: 52% report revenue growth and 62% cite improved decision-making. Yet many lack the operational controls to demonstrate those gains safely to regulators and customers.
A Grant Thornton survey of 100 insurance executives found that 44% say governance or compliance challenges have caused AI projects to fail or underperform. Only 24% expressed confidence they could pass an independent AI governance review within 90 days.
The gap between policy and practice poses real risk. While 61% of insurance boards have established AI governance policies, evidence of those controls remains fragmented across teams and tools. Without centralized proof that AI systems work as intended, insurers expose themselves to regulatory pressure and customer liability.
Where the Breakdown Happens
The problem isn't a lack of policies. It's the absence of tested infrastructure to verify them. Poor guardrails and unclear decision rights mean controls exist on paper but lack operational teeth.
Grant Thornton said insurers that manage AI risk effectively define and classify use cases, then prioritize based on potential impact and complexity. This structured approach turns governance from a compliance checkbox into a foundation for scaling AI across higher-value workflows.
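The survey does not prescribe a scoring method, but the classify-then-prioritize approach could be sketched as a simple ranking: favor use cases with high potential impact and low complexity first. The use cases, field names, and scores below are illustrative assumptions, not data from the report.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int      # estimated business impact, 1 (low) to 5 (high)
    complexity: int  # implementation/governance complexity, 1 (low) to 5 (high)

def prioritize(use_cases):
    # Rank highest-impact cases first; break ties by lower complexity
    return sorted(use_cases, key=lambda u: (-u.impact, u.complexity))

cases = [
    UseCase("claims triage", impact=5, complexity=2),
    UseCase("chatbot FAQ", impact=2, complexity=1),
    UseCase("underwriting pricing", impact=5, complexity=5),
]
print([u.name for u in prioritize(cases)])
```

Even a rough ranking like this gives governance teams a shared, documented basis for deciding which AI workflows to scale first.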
Separate research from AM Best reinforces the pattern. Nearly 60% of insurance respondents expect AI to significantly transform their business within one to three years, yet 41% of actively deploying insurers cite data readiness, security, and legacy system integration as major obstacles.
When underlying data is poor quality, fragmented, or poorly governed, AI systems produce unreliable outputs. Insurers that have modernized legacy systems and built robust data governance find AI integration easier.
Building Defensible AI
Grant Thornton recommends insurers evaluate current governance structures, assess operating models for greater AI adoption, and build trust with customers and regulators, not just chase revenue growth.
"Governance is what makes AI revenue scalable, defensible and sustainable," the firm said. Measurable value matters, but only when backed by controls that can withstand external scrutiny.
For insurance professionals managing or overseeing AI initiatives, the message is direct: document your controls, test them regularly, and ensure decision rights are clear across teams. Revenue gains from AI are real, but they're only defensible when governance is provable.
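One way to make "provable" concrete is an automated audit that checks each AI system has documentation, a named decision owner, and a recent control test. This is a minimal sketch under assumed record fields (`owner`, `documented`, `last_tested`), not a description of any specific firm's tooling; the 90-day window mirrors the review horizon cited in the survey.

```python
from datetime import date

# Hypothetical control records for deployed AI systems; field names are illustrative
controls = {
    "claims-triage-model": {
        "owner": "underwriting-analytics",
        "documented": True,
        "last_tested": date(2024, 11, 1),
    },
}

def audit(controls, today, max_age_days=90):
    """Flag systems whose controls would not survive an independent review."""
    findings = []
    for name, c in controls.items():
        if not c["documented"]:
            findings.append(f"{name}: missing documentation")
        if not c["owner"]:
            findings.append(f"{name}: no clear decision owner")
        if (today - c["last_tested"]).days > max_age_days:
            findings.append(f"{name}: controls not tested within {max_age_days} days")
    return findings
```

Run on a schedule, a check like this turns fragmented evidence into a single report a reviewer can inspect.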