UK Finance Is Betting On AI. Regulators Are Still Watching.
"Fool around and find out" sums up the UK financial sector's posture on AI. Adoption is high, oversight is loose, and the risk to consumers is real.
AI already influences who gets credit, what premiums cost, and how fast claims get paid. MPs on the Treasury Select Committee say the Bank of England, the FCA, and HM Treasury are taking a hands-off stance that could cause serious harm if left unchanged.
AI is already making core decisions
Over three-quarters of UK financial firms use AI, with insurers and international banks leading. We're past pilots. Models are inside fraud systems, credit engines, pricing, and claims workflows.
Regulators point to flexible rules like Consumer Duty and SM&CR, but firms are left to interpret how those apply to model risk, explainability, and data drift. That creates uncertainty and uneven standards across the market.
Consumers are exposed
Opaque models make it hard for customers to understand why they were declined or offered worse terms. Training on historical data without guardrails can deepen exclusion for people with thin credit files or irregular incomes.
Unregulated AI "advice" from chatbots and search engines adds another risk. It looks authoritative, but it carries none of the protections of regulated advice and can push users into poor decisions. Expect more fraud, too, as criminals use AI to scale impersonation and scams.
Regulators are reacting, not directing
The FCA cites Consumer Duty, SM&CR, "AI Live Testing," and sandboxes. Helpful, but voluntary and limited. The prevailing approach relies on monitoring complaints and issues after they surface.
Accountability is still fuzzy. Senior managers are nominally responsible, but high-complexity models make true oversight hard without clearer standards for documentation, testing, and escalation.
System-wide risk is rising
AI can amplify cyberattacks, concentrate dependency on a few US cloud providers, and spur herd behavior in markets. Current resilience tests don't directly model AI-driven failures or correlated model errors at scale.
The Committee wants AI scenarios embedded into future system-wide stress tests. That's overdue.
Critical third parties: the weak link
The Critical Third Parties regime exists on paper but hasn't been activated. No providers have been designated, despite major outages that disrupted UK banks. Heavy reliance on a handful of tech firms remains a single point of failure.
What finance leaders should do now
- Map AI across the stack: Inventory every model, data feed, and decision it touches. Assign ownership and risk ratings.
- Tie to existing obligations: Document how each use case meets Consumer Duty outcomes and SM&CR accountability.
- Set model risk standards: Pre-deployment validation, bias and performance testing, challenger models, drift monitoring, and clear kill switches (a minimal drift-check sketch follows this list).
- Explain decisions: Provide specific reasons for adverse outcomes and a simple path to appeal with a human review.
- Strengthen data discipline: Source governance, lineage, consent, representativeness checks, and secure feature stores. Watch for leakage and contamination when synthetic data is used.
- Human-in-the-loop for high impact: For credit, insurance, fraud blocks, and collections, require human oversight on edge cases and vulnerable customers.
- Third-party risk controls: Contract for transparency, testing evidence, right to audit, incident reporting, exit plans, and concentration limits. Build resilience beyond a single cloud.
- AI incident response: Playbooks for model failures, prompt injection, data poisoning, and deepfake fraud. Run red-team exercises and capture post-mortems.
- Stress testing: Add AI failure scenarios into ICAAP, ORSA, and liquidity playbooks. Include simultaneous model errors and cloud outages.
- Guardrails on public LLMs: No unapproved "advice" bots for customers. For staff, restrict sensitive data, log prompts, and provide approved tools (see the guardrail sketch after this list).
- Upskill fast: Train product, risk, and compliance teams on model basics, fairness testing, and explainability. Standardize documentation templates.
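To make the drift-monitoring and kill-switch items concrete, here is a minimal sketch in Python. It compares a model's training-time score distribution against live scores using the Population Stability Index (PSI) and maps the result to an action. The 0.1 and 0.25 cut-offs are a common rule of thumb, not a regulatory standard, and all function names are illustrative assumptions rather than any vendor's API.

```python
# Minimal, illustrative drift check using the Population Stability Index (PSI).
# Thresholds and names are assumptions for illustration, not regulatory standards.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (training) score distribution and live scores."""
    # Bin edges come from the baseline so both distributions share the same grid.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live scores into the baseline range so out-of-range values land in the edge bins.
    actual = np.clip(actual, edges[0], edges[-1])

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid log(0) and division by zero.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

def drift_status(psi_value: float, alert: float = 0.1, kill: float = 0.25) -> str:
    """Map a PSI value to an action; calibrate the cut-offs per model."""
    if psi_value >= kill:
        return "KILL_SWITCH"  # pause automated decisions, escalate to the model owner
    if psi_value >= alert:
        return "ALERT"        # investigate, compare against the challenger model
    return "OK"

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    training_scores = rng.beta(2, 5, size=50_000)  # baseline score distribution
    live_scores = rng.beta(2.6, 4, size=10_000)    # drifted live population
    value = psi(training_scores, live_scores)
    print(f"PSI={value:.3f} -> {drift_status(value)}")
```

In practice the baseline, binning, and thresholds would be set per model during validation and reviewed whenever the model or its population changes.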
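For the staff-facing LLM guardrails item, a similarly minimal sketch: redact obvious identifiers from a prompt, log only the redacted text for audit, and block prompts that still look like customer data. The regex patterns, threshold, and log file are illustrative assumptions; a production control would sit behind the firm's own DLP and approved tooling.

```python
# Illustrative pre-send guardrail for staff use of a public LLM:
# redact obvious identifiers, log the redacted prompt, block anything still risky.
# Patterns and thresholds are assumptions for illustration, not a complete DLP control.
import logging
import re

logging.basicConfig(filename="llm_prompts.log", level=logging.INFO)
log = logging.getLogger("llm_guardrail")

REDACTIONS = {
    "CARD": re.compile(r"\b(?:\d[ -]?){13,19}\b"),            # card-like digit runs
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitise_prompt(prompt: str) -> str | None:
    """Return a redacted prompt safe to send, or None if it should be blocked."""
    redacted = prompt
    for label, pattern in REDACTIONS.items():
        redacted = pattern.sub(f"[REDACTED_{label}]", redacted)

    # Log the redacted text only, so the audit trail itself holds no raw PII.
    log.info("prompt=%r", redacted)

    # Block if redaction fired heavily: the request probably concerns customer data.
    if redacted.count("[REDACTED_") >= 3:
        return None
    return redacted

if __name__ == "__main__":
    ok = sanitise_prompt("Summarise our model risk policy for the board.")
    blocked = sanitise_prompt(
        "Why was jane@example.com declined? Card 4111 1111 1111 1111, IBAN GB29NWBK60161331926819."
    )
    print("sent:", ok)
    print("blocked:", blocked is None)
```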
What regulators should clarify
- Interpretation guidance: Concrete expectations for applying Consumer Duty and SM&CR to AI, with examples.
- Minimum standards: Baselines for testing, monitoring, explainability, documentation, and human oversight for high-stakes use cases.
- Incident reporting: A clear regime for AI-related failures and near-misses, plus data-sharing for sector-wide learning.
- Stress tests: Incorporate AI-specific scenarios into system-wide exercises.
- Activate CTP regime: Designate critical providers and supervise resilience across cloud and AI infrastructure. See HM Treasury's overview of Critical Third Parties.
The takeaway
Dame Meg Hillier put it bluntly: the system isn't ready for a major AI incident. Observing from the sidelines won't cut it while models are already making decisions that affect people's money and livelihoods.
If you run or oversee products in finance, move now. Tighten governance, demand proof from vendors, and build AI failure into your worst-case planning. The gains are real, but so are the downside risks if you wait for rules to catch up.
Practical resources
- AI tools for finance: vetted tools to improve risk, ops, and analysis with controls.