UAE Central Bank Sets Guardrails for AI Use in Financial Services

Categorized in: AI News, Finance
Published on: Feb 24, 2026
UAE central bank sets framework for responsible AI use in finance

The Central Bank of the UAE (CBUAE) has issued a guidance note to steer how licensed financial institutions deploy AI and machine learning. The focus is clear: protect consumers, strengthen governance and transparency, and keep humans in control of high-impact decisions.

"The guidance note aims to establish a clear framework for the responsible use of artificial intelligence and machine learning in the financial sector, in a way that enhances consumer protection, reinforces governance and transparency principles, and emphasizes the importance of human oversight and data protection requirements," said H.E. Khaled Mohamed Balama, Governor of the Central Bank of the UAE.

What the guidance covers

The note sets a practical framework for safe, fair, and explainable AI in financial services. It also encourages collaboration with peers, academia, the CBUAE, and other stakeholders to share best practices and help build industry standards for trustworthy AI.

  • Governance and accountability: Clear ownership, model risk controls, and board oversight.
  • Fairness and non-discrimination: Detect and mitigate bias across the model lifecycle.
  • Transparency and explainability: Make model intent, data use, and decisions understandable.
  • Effective human oversight: Human-in-the-loop, escalation paths, and override authority.
  • Data management and privacy: Consent, minimization, quality, lineage, and secure handling.
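As an illustration of the fairness and transparency pillars above, the bias checks the guidance calls for often start with a simple group-level comparison of outcomes. The sketch below computes a demographic parity gap in plain Python; the group names, outcome data, and 0.05 tolerance are illustrative assumptions, not values prescribed by the CBUAE.

```python
# Minimal fairness check: demographic parity gap across protected groups.
# Field names and the 0.05 threshold are illustrative, not CBUAE-mandated.

def approval_rate(decisions):
    """Share of positive (approved = 1) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical approval outcomes (1 = approved) for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:  # illustrative tolerance for escalation
    print("Flag for review and bias mitigation")
```

In practice, institutions would run such checks across the full model lifecycle (training data, pre-deployment validation, and live monitoring), and document the chosen metrics and thresholds as the guidance's transparency principle suggests.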

Who must comply

The framework applies to all licensed financial institutions under the CBUAE's supervision. It supports consumer protection and financial stability, and it is consistent with the UAE's national AI strategy.

For reference, see the Central Bank of the UAE and the UAE's Strategy for Artificial Intelligence.

Why this matters to finance leaders

For CFOs, CROs, COOs, and heads of data and compliance, this is a mandate to operationalize AI governance. Systems must be explainable, traceable, and controllable, without slowing delivery.

Institutions that can prove fairness, resilience, and consumer protection will reduce model risk, cut remediation costs, and earn supervisory confidence.

Practical next steps for implementation

  • Establish a board-approved AI policy, with roles and accountability across the three lines of defense.
  • Stand up model risk management for AI: inventory, validation, monitoring, versioning, and change control.
  • Run fairness testing (pre- and post-deployment). Document metrics, thresholds, and mitigations.
  • Define human-in-the-loop checkpoints and escalation/override procedures for high-impact use cases.
  • Tighten data governance: consent capture, minimization, quality SLAs, lineage, retention, and privacy controls.
  • Increase transparency: consumer disclosures, adverse-action reasons, and clear explanations for key decisions.
  • Manage third-party and model vendors: contractual controls for privacy, security, testing, and audit access.
  • Set incident and drift response playbooks, with reporting workflows to the CBUAE where applicable.
  • Train developers, product owners, risk, and auditors on the guidance and on model risk practices.
  • Pilot in controlled sandboxes, measure outcomes, and gate production releases on risk and fairness criteria.
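The drift-response step in the checklist above is commonly implemented with a population stability index (PSI) monitor on model inputs or scores. The sketch below is a minimal version; the bin distributions and the 0.1/0.25 thresholds are industry conventions used for illustration, not figures from the CBUAE guidance.

```python
import math

def psi(expected_pct, actual_pct, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of bin proportions summing to ~1). Common rule of thumb:
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    total = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e = max(e, eps)  # guard against empty bins
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical score distribution at deployment vs. this month.
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current = [0.05, 0.15, 0.35, 0.25, 0.20]

drift = psi(baseline, current)
print(f"PSI: {drift:.3f}")
if drift > 0.25:
    print("Trigger incident playbook and model review")
```

Wiring a monitor like this into versioned model inventories and escalation workflows gives the "incident and drift response playbooks" above a concrete, auditable trigger.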

Collaboration and industry standards

The CBUAE encourages institutions to share best practices and co-develop standards. Consider joint workstreams on testing methodologies, disclosure formats, and bias benchmarks to raise quality across the market.
