Finance AI on Trial: UK Regulator's Bold Review Could Rewire Retail Finance
February 16, 2026
In January 2026, the UK's Financial Conduct Authority (FCA) began a sector-wide review of artificial intelligence in retail finance. In the press, it's been tagged the "Mills Review," after the FCA's Sheldon Mills. This is a fact-finding exercise, not a lawsuit and not a rule change. The brief is clear: understand how AI is affecting consumers, markets, and firms, and whether current rules still fit.
Why this review matters
AI already sits in the critical path. By late 2024, the Bank of England indicated that more than half of AI use cases involved some automated decisioning. Credit scoring and fraud detection are the headline examples. When those models misfire, people can be denied access, accounts can be frozen, and costs can creep up. That's a consumer protection issue, and a supervisory one.
The competition angle: data and vendor concentration
There's a growing risk that access to top-tier AI tooling, and the data to fuel it, clusters with a few large providers. That creates pricing power upstream and cost pass-throughs downstream. Smaller firms get squeezed; customers pay more. Expect the review to probe how concentrated the AI stack has become and what safeguards might be needed.
Who's in scope
Think of the UK retail finance names you know: Lloyds Banking Group, Barclays UK, HSBC UK, Vanguard UK, Santander UK, and others. No firm is singled out. The lens is system-wide use of AI and its knock-on effects across the market.
What could come out of this
- Sharper expectations for model governance, documentation, and explainability.
- Boundaries on high-impact automated decisions in credit and fraud controls.
- Tighter oversight of third-party AI and data providers to reduce dependency risk.
- Competition remedies if access or pricing for AI capabilities distorts the market.
What finance leaders should do now
- Map your AI estate: use cases, data sources, automated vs human-in-the-loop, and customer touchpoints.
- Test outcomes by segment for accuracy, bias, and error rates in credit and fraud models.
- Document decision logic; build explanations a customer (and supervisor) can understand in plain language.
- Set thresholds and kill-switches for actions that lock accounts, deny credit, or change pricing.
- Audit data lineage and consent; verify provenance for third-party data and embeddings.
- Reassess vendor contracts: change management, model updates, audit rights, and SLAs tied to error rates.
- Quantify total cost of AI (including model risk). Avoid silent pass-throughs that raise fairness questions.
- Brief the board. Assign single-point accountability for AI risk with MI fit for FCA scrutiny.
- Prepare customer comms and complaints playbooks for AI-driven decisions.
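The segment-level outcome testing in the checklist above can be sketched in a few lines. This is a minimal, hypothetical example, assuming a simple decision log where each record carries `segment`, `approved`, and `correct` fields (all names illustrative, not tied to any real system or the FCA's methodology): it compares per-segment error rates to the overall rate and flags outliers for review.

```python
from collections import defaultdict

def segment_outcome_report(decisions, threshold=0.05):
    """Compare approval and error rates across customer segments.

    decisions: iterable of dicts with keys 'segment', 'approved' (bool),
    and 'correct' (bool: was the decision later validated as right?).
    Flags any segment whose error rate exceeds the overall error rate
    by more than `threshold`.
    """
    by_segment = defaultdict(lambda: {"n": 0, "approved": 0, "errors": 0})
    total_n = total_errors = 0
    for d in decisions:
        s = by_segment[d["segment"]]
        s["n"] += 1
        s["approved"] += d["approved"]
        s["errors"] += not d["correct"]
        total_n += 1
        total_errors += not d["correct"]

    overall_error = total_errors / total_n
    report = {}
    for seg, s in by_segment.items():
        err = s["errors"] / s["n"]
        report[seg] = {
            "approval_rate": s["approved"] / s["n"],
            "error_rate": err,
            # flagged = this segment's error rate is materially worse
            "flagged": err - overall_error > threshold,
        }
    return report
```

A real programme would go further (confidence intervals, protected-characteristic proxies, drift over time), but even a report this simple gives the board and a supervisor a per-segment view of model outcomes.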
What to watch next
- Evidence requests or calls for input tied to the review.
- Interim findings that flag priority risks (bias, explainability gaps, vendor concentration).
- Follow-on consultations if rule updates or guidance are on the table.
For primary context, see the FCA and the Bank of England.
Planning an AI stack review for your finance teams? You may find this helpful: AI tools for finance.
Bottom line: The Mills Review is the pause before the next move. Use it to clean up model risk, vendor dependencies, and customer outcomes before the rules catch up.