UAE Central Bank issues responsible AI guidance as generative AI use surges in finance

CBUAE issues AI guidance on governance, fairness, and accountability for UAE LFIs. Legal teams should audit gaps, tighten contracts, and publish clear bilingual disclosures.

Published on: Mar 04, 2026

UAE Central Bank issues responsible AI guidance for financial services: what legal teams need to do now

The Central Bank of the UAE (CBUAE) has published non-binding guidance that sets clear expectations for how licensed financial institutions (LFIs) deploy AI and machine learning. The message is simple: AI must sit inside strong governance, protect consumers, and keep decision-makers accountable.

This lands as generative AI use accelerates across the sector. A survey by the Dubai Financial Services Authority reported a 166% increase in generative AI usage among financial institutions between 2024 and 2025.

Status and scope

The guidance note, issued on 11 February, is not a rulebook, but it does set the bar for supervisory expectations. It applies to LFIs across the UAE and focuses on governance, transparency, accountability, and consumer protection.

For legal and compliance, treat it as the reference point regulators will use when asking how your firm governs AI.

Governance expectations

LFIs are expected to establish documented AI governance frameworks proportionate to the institution's size, nature, and complexity. AI risks should be built into enterprise risk management with clear roles for risk, compliance, internal audit, and IT.

Firms should maintain a complete inventory of AI models with lifecycle documentation. That includes purpose, data sources, testing regimes, approvals, monitoring, and decommissioning controls.
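The guidance does not prescribe a format for the inventory. As a minimal sketch, an internal registry entry might capture the lifecycle fields the note lists; all field names, stages, and the `AIModelRecord` class here are illustrative assumptions, not CBUAE requirements.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative lifecycle stages an inventory entry might track.
LIFECYCLE_STAGES = ("development", "testing", "approved", "production", "decommissioned")

@dataclass
class AIModelRecord:
    """One entry in a firm-wide AI model inventory (field names are illustrative)."""
    model_id: str
    purpose: str
    owner: str                                        # accountable business owner
    data_sources: list = field(default_factory=list)  # e.g. ["core banking", "bureau data"]
    lifecycle_stage: str = "development"
    approvals: list = field(default_factory=list)     # e.g. [("risk", date(2026, 2, 1))]
    last_tested: Optional[date] = None
    monitoring_plan: str = ""
    decommission_criteria: str = ""

    def advance(self, stage: str) -> None:
        """Move the model to a new lifecycle stage, rejecting unknown stages."""
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.lifecycle_stage = stage
```

A registry built from records like this gives audit and compliance a single place to answer the questions the guidance implies: what the model is for, who owns it, who approved it, and how it is retired.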

Fairness, testing, and consumer protection

AI and ML systems must not produce discriminatory, manipulative, or unfair outcomes. Decisions should align with existing ethical standards and the duty to act honestly, fairly, and in consumers' best interests.

The CBUAE recommends periodic "stress testing" of AI systems to surface bias and unintended consequences. Testing should be repeatable, documented, and tied to real customer impact.
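The guidance does not mandate a specific testing method. As one hedged illustration, a repeatable bias check could compare approval rates across customer groups (a demographic-parity style metric); the function, group labels, and tolerance threshold below are assumptions for the sketch, not part of the guidance.

```python
from collections import defaultdict

def approval_rate_gap(decisions):
    """Compute the largest difference in approval rate between groups.

    `decisions` is a list of (group_label, approved: bool) pairs.
    Returns (gap, rates_by_group). The acceptable gap is a policy choice.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

# Example: flag the model for review if the gap exceeds an agreed tolerance.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap, rates = approval_rate_gap(decisions)
needs_review = gap > 0.2   # tolerance is illustrative, not from the guidance
```

Running the same documented check on each release, and recording the result and any remediation, is what makes the testing "repeatable and tied to real customer impact" in the sense the guidance describes.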

Third-party AI and accountability

Accountability for AI outcomes stays with the LFI, even when AI is outsourced to vendors or cloud providers. Appropriate due diligence is expected before onboarding and throughout the relationship.

In practice, legal teams should ensure contracts support oversight, testing access, and clear responsibilities for performance, incident handling, and consumer redress.

Transparency, disclosures, and consumer choice

LFIs should be transparent about AI use, especially for high-impact decisions. Disclosures must be clear, understandable, and available in Arabic and English, with adequate customer support.

Firms should be able to explain how AI systems function and the logic behind AI-assisted decisions. Provide routes for clarification, challenge, and redress. Consider offering an opt-out from AI-based decision-making where appropriate, taking into account the potential risk or impact to the customer.

Human oversight: three models

  • Human in the loop: AI recommends; a human approves or rejects the decision.
  • Human on the loop: AI operates for routine tasks; a human monitors outcomes and can intervene.
  • Human out of the loop: AI operates without direct human involvement; reserved for low-risk, non-material processes with controls in place.
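The three models above could be operationalised as a simple routing rule keyed to a decision's impact tier. This sketch, including the tier names and the default-to-strictest fallback, is an illustrative assumption rather than anything the guidance specifies.

```python
def oversight_mode(impact: str) -> str:
    """Map a decision's impact tier to an oversight model (tiers are illustrative)."""
    modes = {
        "high": "human_in_the_loop",     # a human approves or rejects each AI recommendation
        "medium": "human_on_the_loop",   # a human monitors outcomes and can intervene
        "low": "human_out_of_the_loop",  # AI acts alone; low-risk, non-material only
    }
    # Unknown or unclassified tiers default to the strictest oversight.
    return modes.get(impact, "human_in_the_loop")
```

Defaulting unclassified decisions to human-in-the-loop keeps the failure mode conservative, which fits the accountability emphasis of the guidance.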

Consumer rights and redress

Consumers should be able to request human review, challenge AI-driven decisions, correct inaccurate data, and access clear complaints and redress mechanisms. Consider whether these rights should be documented in customer-facing materials, including terms and conditions and FAQs.

What legal and compliance should do now

  • Run a gap assessment against the CBUAE principles; prioritise high-impact use cases.
  • Formalise an AI governance framework proportionate to your business; assign clear roles across risk, compliance, internal audit, and IT.
  • Stand up a firm-wide AI model inventory with ownership, purpose, approvals, monitoring, and retirement criteria.
  • Define "high-impact decisions" and map them to required disclosures, human oversight levels, and escalation paths.
  • Implement periodic stress testing for bias and unintended outcomes; document results and remediation.
  • Review third-party AI contracts and due diligence: testing support, performance and incident duties, auditability, and consumer-facing obligations.
  • Publish clear, bilingual notices explaining AI use and decision logic; provide channels for clarification, challenge, and redress.
  • Update customer terms and internal policies to reflect transparency duties, consumer rights, and oversight models.
  • Align with prior CBUAE expectations, including the 2021 enabling technologies guidance, and ensure board-level reporting.

How this fits with existing frameworks

The guidance builds on the CBUAE's risk and consumer protection approach, including the 2021 Guidelines for Financial Institutions Adopting Enabling Technologies. It reinforces that AI must be managed within existing risk controls, not as an exception to them.

For primary materials and updates, see the Central Bank of the UAE.

Expert perspective

Dubai-based fintech law expert Marie Chowdhry observed that the guidance frames AI and ML as a consumer protection and conduct risk issue, with AI decisions expected to meet the same fairness standards as traditional processes.

Technology and data law expert Martin Hayward said the focus on transparency and human oversight reflects a proportionate, risk-based approach: by requiring proportionate governance and ongoing testing, LFIs remain fully accountable for AI outcomes, including when third parties are involved.

Bottom line

Non-binding does not mean optional. If your AI cannot be governed, explained, and defended, it will be hard to justify to customers, and harder still to justify to supervisors.

Legal teams should drive a practical plan: document governance, prove fairness, harden contracts, and make disclosures that customers can actually use.

For deeper how-to material and sector updates, see AI for Legal. For supervisory context in Dubai's financial centre, refer to the DFSA.
