AI in finance: Regulators' wait-and-see stance could hurt consumers and the system
More than 75% of UK financial services firms now use AI, with the heaviest adoption in insurance and international banking. A new report from the Treasury Select Committee warns that the current approach by the Bank of England, the FCA, and the Treasury is too passive, and that gap increases the risk of consumer harm and systemic stress.
AI is already embedded in core processes like claims handling and credit assessment, not just back-office automation. The message is clear: growth is outpacing guardrails.
Why this matters for finance leaders
- Model risk: opaque logic, unstable outputs under stress, and data drift can trigger faulty pricing, claims decisions, or credit calls.
- Consumer harm: bias, unfair outcomes, and poor explainability raise conduct risk and complaints, testing Consumer Duty obligations.
- Operational resilience: dependency on a small set of AI and cloud providers concentrates risk.
- Market integrity: correlated model behavior can amplify shocks if many firms react to the same signals at once.
What the Committee wants from public bodies
- AI-specific stress tests run by the Bank of England and the FCA to gauge readiness for AI-driven shocks.
- Practical FCA guidance by year-end on how existing consumer protection rules apply to AI, plus clear accountability expectations for harm caused by AI inside firms.
- Designation by the Government of critical AI and cloud providers under the Critical Third Parties (CTP) Regime, enabling direct oversight and enforcement.
What you can do now (don't wait for the rulebook)
- Establish ownership: assign a senior manager with clear accountability for AI outcomes; align with existing SMCR responsibilities.
- Build an AI inventory: list models, use-cases, criticality, data sources, owners, and dependencies (including third parties); a minimal sketch follows this list.
- Evolve model risk management: independent validation, adversarial testing, scenario analysis, and challenger models for high-impact use-cases.
- Prove fairness and explainability: define measurable fairness thresholds (see the threshold check after this list); provide customer-ready explanations for adverse decisions.
- Tighten data controls: lineage, consent, retention, and access management for training and inference data.
- Strengthen third-party risk: contractual rights to audit, incident reporting SLAs, model change notifications, and exit plans for critical AI vendors.
- Operational playbooks: incident response for model failure or drift, human-in-the-loop escalation, and clear kill-switches (a kill-switch sketch follows this list).
- Documentation: decisions, assumptions, tests, and metrics, in enough detail to stand up to supervisory scrutiny and internal audit.
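As a starting point for the inventory item above, here is a minimal sketch of what one register entry might capture, written in Python for illustration. The fields, the criticality tiers, and the example record are assumptions, not a prescribed regulatory schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Criticality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g. credit decisions, claims handling


@dataclass
class ModelRecord:
    """One entry in the firm-wide AI inventory (illustrative fields)."""
    name: str
    use_case: str
    owner: str  # accountable senior manager
    criticality: Criticality
    data_sources: list[str] = field(default_factory=list)
    third_party_dependencies: list[str] = field(default_factory=list)


# Hypothetical entry: a claims triage model backed by an external provider.
inventory = [
    ModelRecord(
        name="claims-triage-v3",
        use_case="Prioritize incoming insurance claims",
        owner="Head of Claims (SMF holder)",
        criticality=Criticality.HIGH,
        data_sources=["claims_history", "policy_master"],
        third_party_dependencies=["cloud-ml-provider"],
    )
]

# High-criticality models are the first candidates for independent validation.
for record in inventory:
    if record.criticality is Criticality.HIGH:
        print(f"Review required: {record.name} (owner: {record.owner})")
```

Even a register this simple answers the first questions a supervisor will ask: what is running, who owns it, and what does it depend on.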
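One way to make "measurable fairness thresholds" concrete is a simple parity check on approval rates across customer segments. The sketch below is illustrative only: the 5% gap threshold, the segments, and the outcome data are placeholders that a real programme would set through governance, alongside the firm's chosen fairness metrics.

```python
# Approval outcomes per customer segment (placeholder data: 1 = approved).
outcomes = {
    "segment_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "segment_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

MAX_PARITY_GAP = 0.05  # illustrative threshold; set via governance, not in code


def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)


rates = {segment: approval_rate(d) for segment, d in outcomes.items()}
gap = max(rates.values()) - min(rates.values())

print(f"Approval rates: {rates}, gap: {gap:.2%}")
if gap > MAX_PARITY_GAP:
    # A breach is a conduct-risk signal, not just a model metric.
    print("Breach: escalate to the AI governance forum for review.")
```

The point is not this particular metric; it is that the threshold is written down, monitored, and breaches have a named escalation route.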
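A kill-switch only works if it is wired into the decision path before an incident. This sketch shows one possible pattern, assuming a flag checked before every automated decision, with cases routed to manual review when it is off; the flag store, case fields, and scoring logic are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decisioning")

# In production this flag would live in a feature-flag service or config
# store that on-call staff can flip without a deployment (assumption).
MODEL_ENABLED = True


def automated_decision(application: dict) -> str:
    # Placeholder for the real model call.
    return "approve" if application.get("score", 0) > 600 else "refer"


def decide(application: dict) -> str:
    """Route to the model, or to human review when the kill-switch is off."""
    if not MODEL_ENABLED:
        log.warning("Kill-switch active: routing case %s to manual review",
                    application["id"])
        return "manual_review"
    return automated_decision(application)


print(decide({"id": "A-1001", "score": 712}))  # model path
MODEL_ENABLED = False
print(decide({"id": "A-1002", "score": 712}))  # fallback path
```

Test the fallback path regularly: a kill-switch that routes cases into a manual queue nobody staffs is not resilience.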
Questions to put to your AI and cloud providers
- What are your model monitoring thresholds and who signs off on changes?
- How do you test for bias and stability across segments and time?
- What's your incident history and mean-time-to-recover for model outages?
- Can you support our explainability and Consumer Duty requirements?
- If you fail, how do we fail safe? Provide a tested downgrade or manual fallback plan.
Accountability and controls
AI governance must be more than a policy PDF. Create a cross-functional forum (risk, legal, compliance, data science, product) that approves high-risk use-cases and sets minimum control standards.
Tie AI risk metrics to enterprise risk appetite, with regular reporting to the board risk committee. Incentives should reward safe deployment, not just speed.
What the Committee signaled
"The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the Government to ensure the safety mechanisms within the system keeps pace."
"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."
Timeline and implications for firms
Expect AI-focused stress testing and FCA guidance to raise the bar on documentation, testing, and accountability. Firms that prepare now will avoid rushed remediation later and reduce the risk of consumer redress.
If the Government designates key AI and cloud providers under the CTP Regime, vendor obligations and supervisory visibility will tighten. Have your due diligence, audit rights, and exit plans ready.
Upskilling your team
If your risk and product teams need a faster path to practical AI controls in finance, explore focused training and toolkits. Start here: AI tools for finance.