UK lawmakers push for AI stress tests in finance - here's what insurers should do now
UK lawmakers say regulators aren't moving fast enough to curb AI risks. The Treasury Committee is pressing the Financial Conduct Authority (FCA) and the Bank of England to start AI-specific stress testing and drop their wait-and-see approach.
Why it matters for insurance: automated systems already influence pricing, claims, fraud, credit, and distribution. Shock scenarios now look different - and a single failure can cascade across firms and customers.
What's changing for insurers
- About 75% of UK financial firms use AI across core functions, including claims processing and credit assessments. Efficiency is real, but so are the risks.
- Agentic AI (systems that take autonomous actions) raises the stakes. As the FCA noted to Reuters, racing to deploy these tools exposes retail customers to new failure modes.
- Opaque credit and pricing outcomes, algorithmic tailoring that excludes vulnerable customers, and unregulated advice via chatbots - these issues scale fast once embedded.
- Heavy dependence on a small group of US tech providers for AI and cloud services concentrates operational risk.
Regulatory signals to watch
- AI-specific stress testing: the Committee wants the FCA and Bank of England to run scenarios that reflect model drift, vendor outages, and agentic errors.
- Consumer protection guidance by end-2026: clarity on how rules apply to AI and the level of system knowledge expected from senior managers.
- Systemic risk: AI-driven trading could amplify herding. Insurers with investment arms need to factor this into risk models.
- The FCA will review the report; the Bank of England says it has taken steps to assess AI risks and will consider the recommendations.
What this means for claims, pricing, and distribution
- Claims: automated denial or routing errors can create large-scale complaints in hours. You need a kill switch and audited decision trails.
- Pricing/underwriting: fairness tests, explainability for key factors, and monitoring for drift are non-negotiable - especially under Consumer Duty. A minimal drift check is sketched after this list.
- Distribution: chatbots that "suggest" products can cross into advice. Guardrails, scripted boundaries, and escalation to humans need to be explicit.
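To make the drift point above concrete, here is a minimal sketch of a population stability index (PSI) check on a pricing model's score distribution. The baseline and current samples, bin count, and alert thresholds are illustrative assumptions, not a regulatory standard or any insurer's actual pipeline.

```python
import numpy as np

def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline and a current sample of model scores.
    Common rule of thumb: < 0.10 stable, 0.10-0.25 watch closely, > 0.25 investigate."""
    # Bin edges come from the baseline so both samples are compared on the same grid.
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep out-of-range scores in the outer bins
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    # Guard against empty bins before taking logs.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative run: last quarter's pricing scores vs. this week's.
baseline_scores = np.random.beta(2.0, 5.0, 10_000)  # stand-in for historical model output
current_scores = np.random.beta(2.4, 5.0, 2_000)    # stand-in for recent model output
score_psi = psi(baseline_scores, current_scores)

if score_psi > 0.25:
    print(f"PSI {score_psi:.3f}: material drift - trigger review and fallback workflow")
elif score_psi > 0.10:
    print(f"PSI {score_psi:.3f}: drift emerging - increase monitoring frequency")
else:
    print(f"PSI {score_psi:.3f}: stable")
```

A check like this only catches shifts in inputs or outputs; it doesn't prove customer outcomes are still fair, which is why the fairness actions below still matter.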
Practical actions for insurance leaders
- Map AI use: inventory models, where they run (cloud/on-prem), their business criticality, and who owns them.
- Run targeted stress tests: model drift, corrupted training data, prompt injection, third-party outage, and agentic loops that trigger unintended actions.
- Install controls: human-in-the-loop for high-impact decisions, kill switches, rollbacks, and fallback workflows.
- Prove fairness: run segment-level error and bias tests for pricing, claims, fraud, and anti-selection. Document thresholds and remediation triggers; a starter test is sketched after this list.
- Tighten chatbot governance: clear scope, disclaimers, advice boundaries, and handoff rules. Log interactions and monitor for harmful outputs.
- Clarify SMCR accountabilities: under the Senior Managers and Certification Regime, name the senior manager responsible for AI risks and define their required level of system oversight and reporting.
- Vendor risk: reduce single-provider exposure, set exit plans, secure access to logs and model metrics, and test failover.
- Evidence trails: end-to-end auditability for data, prompts, features, versions, and decisions. You'll need this when complaints spike.
- Tabletop exercises: simulate an AI failure that hits claims or pricing. Time how fast you detect, disable, communicate, and compensate.
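As a starting point for the "prove fairness" action, the sketch below compares automated claim-decision error rates across customer segments and flags a breach of a documented threshold. The segment labels, DataFrame columns, and 5-point gap threshold are assumptions for illustration; real tests would use reviewed outcomes from your own audit trail and thresholds agreed with your risk function.

```python
import pandas as pd

# Illustrative decision log: one row per automated claims decision.
# Column names are assumptions - adapt to your own audit-trail schema.
decisions = pd.DataFrame({
    "segment": ["standard", "standard", "vulnerable", "vulnerable", "vulnerable", "standard"],
    "model_denied": [True, False, True, True, False, False],
    "review_upheld_denial": [True, False, False, True, False, False],  # from human review / outcome data
})

MAX_ERROR_GAP = 0.05  # documented remediation trigger: 5 percentage points between segments

def segment_error_rates(df: pd.DataFrame) -> pd.Series:
    """Share of decisions per segment where the model disagreed with the reviewed outcome."""
    errors = df["model_denied"] != df["review_upheld_denial"]
    return errors.groupby(df["segment"]).mean()

rates = segment_error_rates(decisions)
gap = rates.max() - rates.min()
print(rates.round(3).to_dict())
if gap > MAX_ERROR_GAP:
    print(f"Error-rate gap of {gap:.1%} exceeds threshold - open a remediation case and notify the model owner")
```

Running the same comparison on wrongful denials alone is usually the more telling view under Consumer Duty, since that is where vulnerable customers are harmed most directly.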
Questions boards should ask
- Which AI decisions can harm vulnerable customers, and what protections are in place?
- What's our dependency on any single cloud or model provider, and how do we fail over?
- Who can switch off a faulty system, and under which conditions?
- How do we measure drift, bias, and customer outcomes - and how often do we act on the data?
- Could our investment strategies be exposed to AI-driven herding? What's the control plan?
People and governance moves
The finance ministry appointed Starling Bank CIO Harriet Rees and Lloyds Banking Group executive Rohit Dhawan to advise on steering AI adoption across financial services. Meg Hillier, who chairs the Treasury Committee, said she isn't convinced the system is ready for a serious AI failure - and the hit would land hardest on consumers.
Timeline and next steps
Expect pressure to build through 2026 as the FCA develops guidance and regulators explore stress testing. If you're in insurance, don't wait. Stand up an AI risk program now, run a live scenario test next quarter, and brief the board with clear remediation milestones.