Bank of England and PRA Chart Path for AI in Finance With Measured Regulatory Approach
The Bank of England and Prudential Regulation Authority have outlined a strategy for AI adoption in financial services that keeps existing rules in place while monitoring risks as the technology evolves. Deputy Governors Sarah Breeden and Sam Woods sent the roadmap to HM Treasury and two other government departments in early April, responding to a January request for a clear plan on AI innovation.
The regulators see AI as capable of driving competition and growth in banking and insurance without destabilizing the financial system. Their approach is to gather evidence, engage with firms, and adjust course only if new safeguards become necessary.
What the regulators are doing now
The PRA introduced Model Risk Management Principles in 2023 that already account for AI-specific concerns. These will be refined further in 2026. AI adoption is now a core supervisory priority, meaning banks and insurers will discuss their AI use regularly with regulators.
A biennial survey of AI usage across the financial sector is scheduled for this year. The Bank and PRA also launched the AI Consortium with the Financial Conduct Authority in May 2025, a public-private forum that will publish findings later in 2026 on key risks including concentration among a handful of model providers, explainability gaps in generative AI and LLM systems, and the emergence of agentic AI.
The Cross-Market Operational Resilience Group's AI Taskforce produced baseline guidance last year covering regulatory expectations, risk frameworks, technical implementation, third-party model sourcing, and staff awareness.
What firms told regulators
The PRA held roundtables last year with challenger banks, global systemically important banks, and insurers. Participants said the existing regulatory framework does not block safe AI adoption. They saw no need for AI-specific rules or a dedicated regulatory sandbox, preferring the FCA's testing programs instead.
Industry responses to a 2022 joint discussion paper likewise indicated that current rules do not prevent responsible AI use. Firms asked for practical guidance, stronger coordination between regulators at home and abroad, and closer oversight of third-party models and data quality.
International and domestic coordination
The Bank contributes to G20 Financial Stability Board work on AI practices and co-chairs insurance-sector AI initiatives through the International Association of Insurance Supervisors. It also collaborates with G7 experts on cybersecurity risks related to AI.
Domestically, the Bank works with the AI Security Institute and the Digital Regulation Cooperation Forum.
How the Bank itself uses AI
The Bank has deployed AI internally to improve predictive analytics, GDP forecasting, and distress forecasting. Large language model tools help with data extraction and querying to boost supervisory efficiency. Off-the-shelf AI assistants are already delivering productivity gains in summarization, note-taking, and code generation.
What comes next
The PRA will report annually on how regulation is enabling or hindering AI-driven innovation. These updates will appear in the PRA's Business Plan and Annual Report, with references to broader Bank initiatives where relevant.
The technology-agnostic regulatory framework will remain under review. Should new guardrails become necessary, regulators said they will introduce them in a measured way. The Bank and PRA committed to ongoing partnership with government departments on the issue.
For those working in AI in finance, this regulatory trajectory matters. Firms can expect continued dialogue with supervisors about their AI use, but not sudden rule changes. The focus remains on evidence and practical implementation rather than prescriptive restrictions.