UK appoints banking tech leaders as AI champions amid watchdog warnings
The UK government has appointed two senior banking technologists to guide the safe, scaled adoption of AI across financial services. Starling Bank CIO Harriet Rees and Lloyds Banking Group AI lead Rohit Dhawan will report to Lucy Rigby MP, Economic Secretary to the Treasury. The roles are voluntary and follow a recommendation in the AI Opportunities Action Plan led by Matt Clifford.
The timing matters. The Treasury Select Committee has warned that regulators' current approach to AI risks exposing the public and the financial system to serious harm. The government's move signals a push to turn fast adoption into controlled, accountable delivery.
What the champions will do
- Support firms to adopt AI safely at scale and act as catalysts for responsible implementation.
- Identify barriers and accelerators, with a focus on insurance and reinsurance, capital markets, retail investment, asset management and wholesale services.
- Engage industry, regulators and other stakeholders to advise HM Treasury ministers and officials.
- Operate on contracts running to September, with a possible extension.
Rigby said the pair "bring deep, real-world experience of deploying AI safely at scale, and they will help turn rapid adoption into practical delivery - unlocking growth while keeping our financial system secure and resilient."
Industry reaction: support, but the scope looks narrow
Practitioners have welcomed technologists taking the lead, expecting more direct, usable guidance. Still, the first question was a simple one: why only two people? Several argue the effort should include AI companies and broader representation from major banks.
Fintech commentator Chris Skinner called the move overdue given the rise of deepfake scams powered by AI. He also pressed for a wider brief: include innovators such as Revolut and bring in heavyweights from the AI industry such as Microsoft and Alphabet.
Why this matters for government
Public trust, market integrity and systemic stability are at stake. AI is already embedded in fraud controls, credit decisions, trading and customer interactions. Small flaws can scale quickly, and accountability gets murky without clear lines of responsibility. Priorities for government include:
- Define who is accountable for AI-driven decisions across senior management functions.
- Set sector-specific guardrails: model risk controls, data provenance, human oversight triggers and incident reporting.
- Coordinate FCA, PRA, ICO and NCSC through a joint tasking framework for AI risks and resilience.
- Stand up shared testing sandboxes for model safety and fraud (including deepfakes and synthetic identity).
- Mandate third-party risk standards and model inventories for systemically important firms.
- Track adoption metrics and near-miss incidents to inform policy and supervision.
Immediate steps departments can take
- Map high-impact use cases in supervised firms: fraud prevention, credit scoring, trading surveillance, and customer communications.
- Require explainability, audit trails and human-in-the-loop checkpoints for high-risk models.
- Promote secure data-sharing standards to reduce fragmented controls and blind spots.
- Invest in AI assurance skills for regulators and public sector teams working with financial services.
- Establish a standing channel with the AI champions for quick issue escalation and feedback.
Resources
HM Treasury's overview and updates are available here: HM Treasury. The Treasury Select Committee's work on financial services and technology is here: Treasury Select Committee.
Working with banks or insurers on AI implementation? This curated list of tools used in finance can help with due diligence and vendor assessments: AI tools for finance.