
OUT-LAW NEWS

UAE Central Bank publishes responsible AI guidance for financial sector


The CBUAE guidance comes as the UAE financial services sector witnesses a surge in generative AI usage. Photo: Tom Dulat/Getty


Recent guidance published by the Central Bank of the United Arab Emirates (CBUAE) underscores that AI use in the financial services sector must uphold good governance and consumer protection.

Marie Chowdhry and Martin Hayward of Pinsent Masons were commenting following the publication of guidance by the CBUAE on consumer protection and the responsible adoption of AI in the financial services sector.

The guidance note, published on 11 February, sets out a series of principles for responsible use of AI and machine learning (ML) by licensed financial institutions (LFIs) across the UAE. Although the principles are non-binding, the guidance establishes clear regulatory expectations for the financial services sector around AI governance, consumer protection, transparency and accountability.

The guidance has been published at a time when the UAE financial services sector is seeing a surge in generative AI usage. The Dubai Financial Services Authority (DFSA) – which regulates firms in the Dubai International Financial Centre, a separate and distinct legal jurisdiction from mainland UAE – published a survey late last year which found that generative AI usage among financial institutions surged by 166% between 2024 and 2025.

In particular, the CBUAE AI guidance states that LFIs are expected to establish documented AI governance frameworks that are proportionate to their size, nature and complexity. AI-related risks should be fully integrated into risk management, with clear roles for risk, compliance, internal audit and IT functions. Institutions are also expected to maintain a comprehensive inventory of AI models and ensure appropriate documentation mechanisms are in place.

The paper also emphasises that AI and ML systems must not result in discriminatory, manipulative or unfair outcomes. AI-driven decisions must be consistent with LFIs’ ethical standards and their duty to act honestly, fairly and in the best interests of consumers.

The CBUAE recommends that LFIs ‘stress test’ AI systems periodically to identify and address potential biases or unintended consequences. The responsibility for AI outcomes remains with LFIs, even where functions are outsourced. Where AI systems are provided by third-party vendors or cloud service providers, LFIs are expected to conduct appropriate due diligence in all cases.

Importantly, the note stresses that LFIs have an obligation to be transparent with consumers about the use of AI, particularly in relation to high-impact decisions. Clear and understandable disclosures must be provided in both Arabic and English, with adequate customer support. LFIs should be able to explain how AI systems function and provide consumers with information about the logic behind AI-assisted decisions, as well as mechanisms for clarification, challenge and redress. Customer opt-out options for AI-based decision-making, similar to those found in data protection laws for decision-making resulting from automated processing or profiling, should also be considered where appropriate to take “into account the potential risks or impact to the customer”.

The guidance also underscores the need for human oversight, particularly for decisions that have “significant implications” for consumers. It recognises three different oversight models, with the level of human involvement calibrated to the risk posed to consumers.

The first, ‘human in the loop’, sees AI provide recommendations while a human decision-maker retains full authority to approve or reject the outcome. Under ‘human on the loop’, AI works autonomously for routine tasks while a human monitors outcomes and can intervene where necessary. Finally, under ‘human out of the loop’, AI operates without direct human involvement. However, the guidance cautions that this last approach should only be taken for “low-risk, non-material processes with appropriate controls in place”.

In light of the new guidance, LFIs in the UAE will be expected to assess their current approach to AI and ML, identify gaps and, where necessary, formalise AI governance frameworks and strengthen their transparency and disclosure in relation to AI usage.

The guidance also highlights that consumers are within their rights to request human review, challenge AI-driven decisions, correct inaccurate data and access clear complaints and redress mechanisms in line with consumer protection requirements. It suggests that LFIs consider whether such rights need to be formally documented in customer-facing materials, such as terms and conditions. AI policies should also complement existing CBUAE consumer protection and risk management obligations.

The guidance builds on, and sits firmly within, the CBUAE’s existing risk management and consumer protection frameworks, including the Guidelines for Financial Institutions Adopting Enabling Technologies, issued by the CBUAE in 2021, which looked in detail at governing the roll-out of AI and managing institutional risk.

Commenting on the new guidance, Dubai-based fintech law expert Marie Chowdhry of Pinsent Masons said: “The guidance note makes clear that the use of AI and ML in financial services is fundamentally a consumer protection and conduct risk issue, with AI‑driven decisions expected to meet the same standards of fairness and acting in consumers’ best interests as traditional processes.”

Martin Hayward, a technology and data law expert at Pinsent Masons, said the guidance note’s focus on transparency and human oversight reflects a proportionate, risk-based regulatory approach. “By requiring proportionate governance and ongoing testing, the CBUAE reinforces that LFIs remain fully accountable for AI outcomes, including where AI systems or services are provided by third parties,” he said.
