FCA launches Mills Review on AI: what insurers need to do now
AI is already baked into pricing, underwriting, claims and customer interactions. The FCA has now launched the Mills Review to map how advanced AI will affect consumers, retail financial markets and regulators through 2030 and beyond.
This isn't abstract. It's about who sets the rules for automated decisions, how customers appeal them, and what's considered fair when models learn from behaviour in real time.
Why this matters for insurers
Traditional quote engines still lean on blunt proxies like postcode and job title. As models move to behavioural signals, expect harder questions on fairness, explainability and discrimination risk.
If your pricing or claims models adjust on the fly, you'll need proof they don't penalise protected groups, and a clear route for customers to challenge outcomes under Consumer Duty.
What the FCA is reviewing
- How AI could evolve, including more autonomous and agentic systems.
- How these shifts could change market structure, competition and UK competitiveness.
- Impacts on consumers, including how expectations and behaviour will influence retail financial services.
- How regulators may need to adapt so retail markets keep working well.
Wholesale markets and broader societal impacts are out of scope, but the FCA will consider knock-on effects where relevant.
A note from Sheldon Mills
'AI is already shaping financial services, but its longer-term effects may be more far-reaching. This review will consider how emerging uses of AI could influence consumers, markets and firms, looking towards 2030 and beyond. By taking a forward-looking view, the review will help the FCA continue to support innovation while promoting the safe and trusted adoption of AI in retail financial services.'
What insurers should do next
- Document decisions: For pricing, underwriting and claims, record model purpose, data sources, feature relevance and change logs. Keep an audit trail.
- Fairness by design: Set measurable fairness thresholds, test for bias pre- and post-deployment, and monitor drift. Remove or justify proxies that can infer protected characteristics.
- Explainability: Prepare clear customer-friendly explanations for quotes, declines, claim triage and sub-limits. Build tooling to generate reason codes automatically.
- Appeals and redress: Stand up fast human review for AI-led decisions. Track overturn rates and learn from them.
- Data governance: Lock down data lineage, consent, retention and vendor obligations. Vet third-party data and synthetic data for representativeness.
- Model risk management: Classify model criticality, run challenger models, perform scenario and stress testing, and set kill-switches for bad behaviour.
- Operational resilience: Map AI dependencies, run failure drills, and ensure continuity plans cover model outages and corrupted inputs.
- Board oversight: Assign accountabilities, define risk appetite for AI use, and report metrics aligned to Consumer Duty outcomes.
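To make the "fairness by design" point concrete, here is a minimal, illustrative sketch of a pre-deployment bias check. It assumes you can join model decisions to a protected-group label in a test dataset (the group labels, the function names and the 0.8 threshold, echoing the common "four-fifths" rule of thumb, are all assumptions for illustration, not an FCA-prescribed test):

```python
# Illustrative pre-deployment fairness check (not a prescribed methodology).
# Assumes test-set decisions can be joined to a protected-group label.

def approval_rates(decisions, groups):
    """Approval rate per group from parallel lists of bools and labels."""
    totals, approvals = {}, {}
    for approved, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        if approved:
            approvals[group] = approvals.get(group, 0) + 1
    return {g: approvals.get(g, 0) / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Worst-case ratio of group approval rates (min divided by max)."""
    rates = approval_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: group A approved 3 of 4, group B approved 2 of 4.
decisions = [True, True, False, True, True, False, True, False]
groups    = ["A",  "A",  "A",   "A",  "B",  "B",   "B",  "B"]

ratio = disparate_impact_ratio(decisions, groups)
if ratio < 0.8:  # 0.8 mirrors the "four-fifths" rule of thumb
    print(f"fairness alert: disparate impact ratio {ratio:.2f} below 0.8")
```

The same ratio can be recomputed post-deployment on live decisions to monitor drift against whatever threshold your risk appetite sets.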
Pricing and underwriting: prepare for behavioural models
If you're moving from demographics to behaviour, be explicit about what behaviours matter and why. Sense-check whether they unfairly disadvantage certain groups or create feedback loops.
Set guardrails for real-time learning. Cap rate changes, throttle updates and require sign-off for new features that affect protected groups.
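A rate-change cap of the kind described above can be a few lines of code sitting between the model and the quote engine. The sketch below is an assumed design, with an illustrative 5% per-update cap rather than any recommended figure:

```python
# Illustrative guardrail for real-time premium updates: clamp any
# model-proposed move to within a capped band around the current premium.
# The 5% cap is an assumed example value, not a recommendation.

MAX_STEP = 0.05  # largest fractional premium move allowed per update

def apply_capped_update(current_premium, model_premium, max_step=MAX_STEP):
    """Return the model's proposed premium, clamped to +/- max_step."""
    ceiling = current_premium * (1 + max_step)
    floor = current_premium * (1 - max_step)
    return min(max(model_premium, floor), ceiling)

# A 20% model-proposed jump is throttled to the 5% cap.
print(apply_capped_update(100.0, 120.0))  # capped at 105.0
```

The clamp also gives you a natural audit point: log every occasion the cap binds, since a model repeatedly hitting its guardrail is itself a drift signal worth sign-off.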
Claims and customer outcomes
For fraud and triage models, track false positives and their customer impact. Use sampling and human-in-the-loop review to keep error rates within tolerance.
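As a sketch of the tracking loop described above: take a sample of model-flagged claims, treat the human reviewer's verdict as ground truth, and alert when the false-positive rate breaches tolerance. The data shape and the 10% tolerance are assumptions for illustration:

```python
# Illustrative false-positive monitoring for a fraud-triage model.
# Assumes a sampled set of flagged claims receives human review and the
# reviewer's verdict is treated as ground truth. Tolerance is an example.

FP_TOLERANCE = 0.10  # assumed: at most 10% of flagged claims may be genuine

def false_positive_rate(reviewed_sample):
    """reviewed_sample: list of (flagged_by_model, confirmed_fraud) pairs."""
    flagged = [pair for pair in reviewed_sample if pair[0]]
    if not flagged:
        return 0.0
    false_positives = sum(1 for _, confirmed in flagged if not confirmed)
    return false_positives / len(flagged)

# Toy sample: 4 claims flagged, 1 of them turned out to be genuine.
sample = [(True, True), (True, False), (True, True),
          (False, False), (True, True)]

rate = false_positive_rate(sample)
if rate > FP_TOLERANCE:
    print(f"escalate: false-positive rate {rate:.0%} exceeds tolerance")
```

Tracking the rate by customer segment, not just in aggregate, helps surface the cases where fraud flags fall disproportionately on particular groups.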
Build playbooks for edge cases. The hard calls are where complaints, reputational damage and regulatory interest cluster.
How to contribute to the Review
The FCA is seeking views now. Deadline for comments: Tuesday 24 February 2026. You can send contributions to TheMillsReview@fca.org.uk.
Expect a series of recommendations to go to the FCA Board in summer 2026, followed by an external publication.
Context: FCA's existing AI work
The Review builds on the FCA's AI Discussion Paper, AI Sprint, AI Lab with live testing, and its Supercharged Sandbox supported by NVIDIA.
For background, see the FCA's AI Discussion Paper (DP5/22).
What good looks like by 2030
- Traceable, explainable models with real-time monitoring and clear business ownership.
- Fair pricing frameworks that balance risk-based accuracy with customer outcomes.
- Human oversight that is informed, timely and well-documented.
- Vendor contracts that enforce data quality, testing standards and audit access.
- Continuous training for teams on AI risk, ethics and regulation.
If you're building capability
Upskill underwriting, pricing, claims and compliance teams on AI fundamentals, model risk and practical tooling. A shared baseline cuts friction and speeds up safe adoption.
Bottom line
AI will reset how risk is priced, how claims are assessed and how customers are treated. The Mills Review is a chance to shape the guardrails.
Get your evidence in order, test your models like a regulator would, and submit your view before the deadline.