Risk management in an age of AI governance: Boards need new leadership models
AI is no longer a side project. It sits at the center of strategy, risk, and value. For insurers and management teams, the question isn't "Should we use AI?" It's "How do we control downside and create upside at the same time?"
Sumeet Gupta, senior managing director and leader of AI & digital transformation at FTI Consulting, put it plainly: "It is no longer adequate for an organisation's Board of Directors to focus on traditional risk vectors when it comes to AI governance. They must be equipped to understand, assess and advise on both defensive and offensive plays for creating and protecting shareholder value through AI adoption."
Why boards must play offense and defense
Most executives know the risks of AI and GenAI. What's missed is that those risks shift as the tech and use cases change. Static controls won't cut it. Boards need a live view of risk, and a clear appetite for calculated bets.
Gupta's point is direct: pair strong oversight with smart risk-taking. "If they don't, organizations will fall behind."
Tactical risks vs. opportunity risks
Think in two buckets. One protects the core business. The other determines whether you grow, stagnate, or lose ground.
- Tactical risks (day-to-day): data leakage, model drift, shadow AI, weak access controls, third-party model exposure, regulatory noncompliance. As Gupta said, these require "hand-to-hand combat."
- Opportunity risks (strategic): what if you don't act, don't invest enough, or don't attract the right talent? These are existential: miss them, and your competitors set the pace.
Beyond risk lists: leadership archetypes
Knowing the risks isn't enough. Context and behavior at the top decide outcomes. Gupta notes boards typically fall into one of four archetypes for AI governance, each with strengths and drawbacks. Two questions reset the agenda: Which archetype are you today? Which do you need to be next?
In practice, leadership postures often look like this:
- The Observer: waits for clarity; low downside, slow upside.
- The Controller: tight guardrails; safe but often slow to value.
- The Experimenter: fast pilots; uneven controls and scattered ROI.
- The Integrator: clear risk appetite, staged deployment, measurable outcomes; requires discipline and board fluency.
What insurance leaders should implement this quarter
- Define AI risk appetite: where AI is encouraged (e.g., underwriting assistants, claims triage) and where it's off-limits (e.g., unvetted customer-facing advice).
- Create an AI risk register: map risks to owners, controls, and metrics (e.g., data loss incidents, model drift rates, false-accept/false-reject, audit exceptions).
- Stand up model oversight: data lineage, evals tied to business outcomes, drift monitoring, rollback plans, and threshold-based alerts.
- Tighten data controls: classify sensitive data, enforce DLP, use redaction/RAG patterns to limit exposure, and manage secrets responsibly.
- Vet vendors hard: demand security attestations (e.g., ISO 27001), documented evals, red-team results, rate-limit strategies, and indemnity language.
- Keep humans in the loop: checkpoints for underwriting, pricing, and claims; sample decisions; record overrides for audit.
- Write incident playbooks: model hallucination, biased outputs, leakage, outage; include comms, rollback, and regulator notifications.
- Train the board and frontline: scenario-based training on both controls and use cases; publish a simple AI usage policy people actually read.
- Measure offense and defense: track loss ratio impact, quote-to-bind speed, FNOL cycle time, reserve adequacy, exception rates, and data incidents.
- Set tone from the top: name an accountable AI owner, align incentives, and require quarterly reviews on both risk posture and value creation.
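To make the risk-register and drift-monitoring items above concrete, here is a minimal sketch of a threshold-based register in Python. All names, owners, and the PSI metric are illustrative assumptions, not a prescribed implementation; thresholds should be tuned per model and validated with your model-risk team.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an AI risk register: risk, owner, control, metric, threshold."""
    risk: str
    owner: str
    control: str
    metric: str
    threshold: float
    readings: list = field(default_factory=list)

    def record(self, value: float) -> None:
        """Log the latest metric reading (e.g., from a weekly monitoring job)."""
        self.readings.append(value)

    def breached(self) -> bool:
        """True if the most recent reading crosses the alert threshold."""
        return bool(self.readings) and self.readings[-1] > self.threshold

# Illustrative register entry for a claims-triage model.
register = [
    RiskEntry(
        risk="Model drift in claims triage",
        owner="Head of Claims Analytics",
        control="Weekly drift monitoring with rollback plan",
        metric="population_stability_index",
        threshold=0.2,  # common PSI rule of thumb; tune per model
    ),
]

register[0].record(0.27)  # hypothetical reading above threshold
alerts = [entry for entry in register if entry.breached()]
for entry in alerts:
    print(f"ALERT: {entry.risk}: escalate to {entry.owner}")
```

The point of the sketch is governance, not tooling: every risk has a named owner, a control, a measurable metric, and a threshold that triggers escalation, which is what makes quarterly board review of risk posture auditable.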
Board questions that move the needle
- What AI use cases are in production, and which controls backstop each one?
- Where are we taking smart risks that competitors aren't, and how are we measuring ROI?
- What's our plan for model drift, bias, and data leakage events? Who owns the response?
- Do we have the talent and training to scale AI safely across underwriting, claims, and distribution?
- Which archetype describes our current stance, and what's the near-term target state?
Helpful frameworks and next steps
If you need a baseline, the NIST AI Risk Management Framework is a solid reference for principles and controls. Pair it with your enterprise risk management and model risk disciplines, then tailor to insurance workflows.
Upskilling the leadership team shortens the learning curve and builds board and manager fluency through role-based learning paths.
Bottom line
AI risk is fluid, and value creation won't wait. Set your archetype with intent, manage tactical threats with discipline, and place informed bets on opportunity. That balance is the new core skill of modern insurance leadership.