Most insurers can't prove their AI is safe or effective, Grant Thornton survey finds
Three in four insurance and financial services leaders lack confidence they could pass an independent AI governance audit within 90 days, according to a new Grant Thornton survey of nearly 1,000 senior US business leaders.
The gap exposes a widening disconnect: boards approve major AI investments, but few have the controls in place to measure whether those systems work or manage their risks.
Governance, not technology, is the problem
Nearly half of operations leaders (46%) said AI underperforms because controls and compliance aren't working. Yet only 11% believe organizations should prioritize risk and compliance to unlock AI's value.
The gap matters to insurers specifically: the National Association of Insurance Commissioners has adopted guidance spelling out expectations for AI systems programs, and regulators are increasing scrutiny of how insurers deploy and monitor AI.
Tom Puthiyamadam, managing partner of Advisory Services for Grant Thornton Advisors, said the pattern is familiar. "Guardrails come after an incident occurs, not before, and by then there may be significant organizational and operational consequences."
Boards approve spending but skip the strategy
Three in four boards have approved major AI investments. Only 52% have set clear AI governance expectations. Just 54% have integrated AI risk and opportunity into ongoing board or committee oversight.
Strategy is another fault line. More than half of executives said strategy drives AI return on investment, yet only 22% of operations leaders reported having a fully developed and implemented AI strategy.
Most governance models were built for traditional IT, not AI. Centralized review bodies become bottlenecks. Puthiyamadam said the fix is to set policy centrally, then delegate assessments to trained reviewers at the division or regional level.
Autonomous AI without tested safeguards
Nearly three in four organizations are piloting, scaling, or running autonomous AI. Only one in five has tested a response plan for AI failures.
Most insurers (95%) do not permit AI agents to make fully autonomous, high-stakes decisions without human review. But 43% list regulatory and compliance uncertainty as a top concern about deploying autonomous AI.
The real risk is not failure itself, but being unprepared when it happens. Many companies have incident playbooks for traditional systems, but have not adapted them for AI-specific issues such as model drift, hallucinations, or biased outputs.
Governance correlates with results
Companies with fully integrated AI governance are almost four times more likely to report AI-driven revenue growth than those still piloting (58% versus 15%).
Half of operations leaders said they need a formal AI strategy or governance plan in place within the next six months to improve performance.
For insurance professionals, the message is direct: AI requires more than pilot projects and board enthusiasm. It requires measurement, clear ownership, and tested safeguards before scaling. For executives, that means treating governance as a driver of value, not a constraint on it.