Executive summary
On 11 February 2026, the Central Bank of the UAE issued the Guidance Note on Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions. Insurance providers are explicitly in scope.
The Note is principles-based, written in the language of "should" rather than "shall." That choice does not soften its impact. The Note imports binding obligations from the Personal Data Protection Law, the Model Management Standards, the Consumer Protection Regulation, and the Outsourcing Regulation, and consolidates them into eight operational pillars covering ten substantive obligations.
Material non-compliance can be characterized as a governance shortfall under Article 14 (General Framework of Governance) of Federal Decree-Law No. 48 of 2023 Regulating Insurance Activities, with measures and sanctions available to the CBUAE under Article 33 of the same Decree-Law, including fines of up to AED100,000,000.
Three points define the Note's strategic significance:
- Insurance claims are explicitly named as a high-impact decision, placing every claims-AI deployment at the highest risk tier.
- Third-party administrators are not direct addressees, but inherit the obligations through Section 9 (outsourcing).
- The Note is operative from issuance, with no transitional period. Insurers that already deploy AI in claims, underwriting, fraud, or customer service carry a compliance gap that dates from the day of issuance.
The decision that follows for UAE insurer leadership is not whether to comply but how. Three choices need to be made in 2026:
- Build, buy, or partner the governance stack.
- Triage the existing AI estate against the Note's standards.
- Reset third-party contracts to import the outsourcing obligations.
1. What the Note is, and what it changes
The Guidance Note runs to ten sections. Sections 2 through 9 function as operational pillars covering governance, fairness, transparency, data quality, monitoring, human oversight, framework integration, and outsourcing. Section 10 is an encouragement to participate in the UAE's AI sandboxes and the Innovation Hub, and Section 1 sets out definitions.
The novelty is consolidation. Most of the substance flows from instruments already in force:
- Personal Data Protection Law (Federal Decree-Law No. 45 of 2021)
- CBUAE Model Management Standards and accompanying Model Management Guidance (2022)
- CBUAE Consumer Protection Regulation
- CBUAE Outsourcing Regulation for Banks (applied "to the extent applicable")
The AI Note collects these obligations in one place, routes them through a consumer-protection lens, and adds the explicit definitional move that brings every claims-AI system into the highest risk tier.
The legal status of the Note is principles-based supervisory expectation, not binding regulation. The only "shall" appears in the Definitions section. The remainder of the document uses "should." That is a deliberate framing choice, not a soft signal. CBUAE practice treats published guidance as the benchmark against which both routine and for-cause supervision assess conduct.
A failure to align with the Note can be characterized as a governance shortfall under Article 14 (General Framework of Governance) of Federal Decree-Law No. 48 of 2023 Regulating Insurance Activities. The CBUAE's response menu sits in Article 33 (Measures and Sanctions) and ranges from a notice of violation through deposing senior employees, suspending or revoking the license, restructuring, liquidation, and a fine of up to AED100,000,000. The list of specific violations and their fines is issued by the CBUAE Board under Article 34.
"This Guidance Note shall supplement and not replace any laws, regulations or directives issued by the CBUAE or other competent authorities, and LFIs remain responsible for complying with all applicable laws, regulations and requirements."CBUAEGuidance Note — closing paragraph
2. Who is captured
The opening line of the Note names the addressees: "licensed financial institutions, including insurance providers (LFIs), in the United Arab Emirates (the UAE)." Filed under "All Licensed Financial Institutions > Market Conduct & Consumer Protection," the Note applies across the CBUAE-regulated population:
- Banks
- Finance companies
- Exchange houses
- Insurance providers
- Payment service providers
The Note's subject-matter focus is consumer-facing AI: applications that bear on consumers, with the aim of promoting consumer protection and good market conduct in AI/ML use. Pure back-office AI without a consumer touchpoint sits outside the primary lens, although institutional governance obligations still apply.
Third-party administrators, claims platforms, and AI vendors are not named anywhere in the Note. They are not direct addressees. They are captured indirectly, and the mechanism is Section 9. The licensed insurer remains responsible for any AI delivered or operated through outsourcing arrangements. Section 9 imports four obligations into every TPA and AI vendor relationship:
- Due diligence on AI vendor reputation, governance, security, and data protection
- Contractual rights to audit and information
- Annual independent cybersecurity reviews of third-party AI providers
- Active management of vendor concentration risk
The practical consequence is that every existing TPA and AI vendor contract requires either an addendum or a new schedule that imports these Section 9 obligations. Insurers cannot rely on the third party's own goodwill. The obligation sits with the insurer, who must demonstrate oversight in any supervisory examination.
The high-impact decision anchor
The most consequential definitional choice in the Note is the inclusion of "insurance claim" as an example of a "high-impact decision."
"High-impact decision: any determination by an LFI using AI that materially affects a customer's access to financial products or services, for example in respect of a potential loan application or insurance claim."CBUAE Guidance Note — Section 1, Definitions
Several of the Note's specific requirements attach with greater force to high-impact decisions:
- Transparency disclosures
- Opt-out rights
- Periodic bias testing
- The consumer's right to request human review
Naming "insurance claim" places automated FNOL triage, claim acceptance, total loss declarations, fraud scoring that affects payout, and settlement amounts at the top of the risk tier. The Note's strongest controls apply to these systems by design.
3. The ten obligations
The substantive content of the Note reduces to ten operational obligations across the eight pillars (sections 2 through 9). Section 8 (Integration with Existing Frameworks) is cross-cutting and shapes how the obligations below sit alongside the rest of the institutional risk and conduct framework rather than functioning as a standalone obligation.
Each obligation is set out below with its practical content.
1. Documented governance framework
An AI/ML governance framework "commensurate with the size, nature and complexity" of operations. Senior management and the Board accountable for AI outcomes. AI risk integrated with the institutional risk framework. Regular reporting to senior management and the Board on performance and risk.
2. Maintained model inventory
An inventory of every AI system developed or deployed, with material metadata including model name, purpose, and risk rating. Aligned with the CBUAE Model Management Standards and Model Management Guidance (2022).
3. Periodic bias testing
Testing for discriminatory or manipulative outcomes "once a year or each time a model is upgraded, materially changed or a new one is introduced." No AI system should be deployed if discriminatory; no AI system should be retained if it becomes so post-deployment.
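The Note names the obligation but prescribes no metric. One common heuristic (an assumption here, not taken from the Note) is a disparate-impact ratio: compare the approval rate of each customer group to a reference group, and treat ratios well below 1.0 as a signal worth investigating before deployment or retention. A minimal sketch:

```python
from collections import Counter

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's.
    Ratios well below 1.0 flag a potentially discriminatory outcome
    for investigation; the threshold itself is a policy choice."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical claims-approval outcomes for two customer groups (illustrative only)
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_ratio(sample, reference_group="A"))
# group B's approval rate is 0.55 vs 0.80 for A, a ratio of about 0.69
```

In practice the same check would run per protected attribute, per model, on the cadence the Note sets: annually and at every material change.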
4. Transparency and explainability
Customers told when interacting with AI, and how AI decisions are made, with greater force for high-impact decisions. Disclosures in plain language, in both Arabic and English, with telephone support in all major UAE languages. Opt-out rights considered for high-impact decisions. The Note explicitly references SHAP (Shapley Additive Explanations) as a tool for algorithmic transparency.
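SHAP is the one tool the Note names. Underneath it is Shapley attribution: each feature's contribution to a prediction is its average marginal effect across all orderings of the features. The sketch below computes exact Shapley values for a toy claim-severity score; the score function, feature names, and values are invented for illustration, and real models would use the SHAP library's approximations rather than this brute-force enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley attribution over a small feature set.
    `value(frozenset)` returns the model output when only those
    features are active (the rest held at a baseline)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical severity score: base 1000, +400 if the airbag deployed,
# +250 for a rear impact, +100 extra when both apply (interaction term).
def score(active):
    s = 1000.0
    if "airbag" in active: s += 400
    if "rear_impact" in active: s += 250
    if {"airbag", "rear_impact"} <= active: s += 100
    return s

print(shapley_values(["airbag", "rear_impact"], score))
# attributions sum to the score minus the baseline, so each feature's
# share of the decision can be disclosed in plain language
```

The property that matters for transparency is the last comment: the attributions always sum to the gap between the prediction and the baseline, which is what makes them usable as a decision rationale.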
5. Data quality, privacy, and security
Accurate, relevant, up-to-date data with clear provenance and audit trails. Compliance with the Personal Data Protection Law and the UAE Information Assurance Regulation. Privacy-by-design and security-by-design. Stress testing and validation. Operational resilience including redundancy and incident response. Data localization where required by the Consumer Protection Standards.
6. Continuous monitoring/challenge
Continuous monitoring per the Model Management Standards. Pre-implementation testing of automatic vendor updates. Periodic engagement of independent third parties to challenge AI use.
7. Maintained kill-switch
A control, not a procedure. Verbatim text from Section 6 set out below.
8. Three-tier human oversight
Three named oversight models. Human-in-the-loop: AI recommends, human approves. Human-on-the-loop: AI runs autonomously, human monitors and can intervene. Human-out-of-the-loop: AI operates without direct human involvement, permitted only for low-risk, non-material processes with appropriate controls. The level of involvement should be commensurate with the risk to the consumer.
9. Consumer rights to challenge
Consumers should be able to request human review or explanation of AI-generated decisions. Alternative arrangements should exist where a consumer does not wish to be subject to an AI decision. Complaints handled per Article 8 of the Consumer Protection Regulation. AI must not be used for pressure-selling, unsuitable targeting, or misleading marketing.
10. Outsourcing and vendor concentration
Due diligence on AI vendor reputation, governance, security, and data protection. Contractual rights to audit and information. Annual independent cybersecurity reviews of third-party AI providers. Vendor concentration risk: a range of providers preferred where feasible. Third-party models held to the same standards as in-house models.
"LFIs should at all times retain the clear and immediate ability, with human intervention, to cease use of an AI model system, technology or application deployed or utilized."
CBUAE Guidance Note — Section 6, on the kill-switch
4. What compliance looks like
The Note's principles translate into technical and operational controls. Three of these are the hardest to retrofit onto an existing AI estate, and consequently the most likely to surface during supervisory examination.
The model inventory and audit trail. The Note requires every AI system to be inventoried with risk rating and material metadata. Combined with the long-cycle record retention obligations applicable under CBUAE rules, every high-impact AI decision must be logged with:
- Input features
- Model version
- Prediction or output
- Confidence score
- Timestamp
- Downstream action taken
Most insurers do not produce this artifact at the granularity required. Building it after the fact, against historical decisions, is materially harder than designing it in from the start.
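As a sketch of what designing it in can mean, the six fields above map naturally onto one append-only record per decision. Field names and the `fnol-triage` model are illustrative, not taken from the Note:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class HighImpactDecisionRecord:
    """One audit-trail entry per automated high-impact decision.
    The substance (inputs, model version, output, confidence,
    timestamp, downstream action) follows the list above; the
    field names are an assumption."""
    model_name: str
    model_version: str
    input_features: dict
    output: str
    confidence: float
    downstream_action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = HighImpactDecisionRecord(
    model_name="fnol-triage",          # hypothetical model name
    model_version="2.3.1",
    input_features={"impact_zone": "rear", "airbag_deployed": True},
    output="fast_track_settlement",
    confidence=0.91,
    downstream_action="settlement_offer_generated",
)
print(json.dumps(asdict(record)))  # append to a write-once decision log
```

Emitting one such line per decision, at decision time, is the artifact a supervisor would expect to sample; reconstructing it later from scattered system logs is the retrofit problem.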
Bilingual disclosure across every consumer touchpoint. AI-driven communications, chatbot interfaces, and claims decision rationales must be available in plain Arabic and plain English, with telephone support in all major UAE languages. English-only AI customer interfaces fail this standard. This is the most operationally specific requirement in the Note and the easiest for a supervisor to spot.
The data residency and AI provider tension. Most major LLM providers process data outside the UAE. CBUAE rules and the Note's own data quality, privacy, and security pillar require insurance-related data to be handled in line with the Personal Data Protection Law and applicable data localization standards. Insurers using commercial AI providers have three workable paths:
- UAE-localized inference through providers such as Microsoft Azure UAE region or AWS Bahrain
- Tokenization or anonymization of personal data before AI processing
- Cross-border transfer mechanisms under the Personal Data Protection Law, such as Standard Contractual Clauses or explicit consent
The Note does not resolve this tension; it surfaces it as a risk for the insurer to address.
The kill-switch obligation requires a technical control, not just a documented procedure. It needs a deployment toggle, a feature flag, or a load balancer rule that can disable the model in production immediately, with a documented fallback to human handling that does not interrupt service. Most insurers have neither the technical control nor the fallback path.
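A minimal sketch of such a control, assuming an in-process feature flag; a production deployment would back the flag with a shared store (a feature-flag service or config map) so every instance sees the flip at once, and the fallback route would land in a staffed human queue.

```python
import threading

class ModelKillSwitch:
    """A runtime flag checked on every scoring call. Flipping it routes
    traffic to human handling immediately, without a redeploy. All names
    and the stand-in score are illustrative."""
    def __init__(self):
        self._enabled = threading.Event()
        self._enabled.set()  # model live by default

    def disable(self, reason: str):
        print(f"kill-switch engaged: {reason}")
        self._enabled.clear()

    def score(self, claim: dict) -> dict:
        if not self._enabled.is_set():
            # Documented fallback: service continues via human review
            return {"route": "human_review", "reason": "model_disabled"}
        return {"route": "auto_triage", "score": 0.87}  # stand-in for real inference

switch = ModelKillSwitch()
switch.score({"claim_id": 1})                 # automated path while live
switch.disable("bias drift detected in weekly monitoring")
print(switch.score({"claim_id": 2}))          # falls back to human handling
```

The point of the sketch is the shape of the control: the check sits inside the scoring path itself, so ceasing use is immediate rather than dependent on a release cycle.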
5. Three decisions for the C-suite in 2026
The Note converts AI from a technology decision into a board-level governance decision. The work that follows is operational, not philosophical. Three choices need to be made in 2026.
Decision one: build, buy, or partner the governance stack
Most UAE insurers do not have an AI governance function today. The Note creates a workload that requires either a new internal capability (an AI governance officer plus the technical infrastructure for model inventory, bias testing, explainability, and audit trails), or a partnership with a third party that can deliver the governance overlay. Three paths:
- Build: takes time and headcount. Highest control, slowest to deliver.
- Buy: requires careful vendor diligence under the same Section 9 obligations being imported.
- Partner with industry experts that operate inside the same regulatory regime: shifts execution to a counterparty already governed by the same standards.
The decision needs to be made early in 2026 to allow execution within the calendar year.
Decision two: triage the existing AI estate
Every insurer has AI in production today: fraud scoring, automated underwriting decisions, claims triage, chatbots. None of it was deployed under the Note's standards. A focused 60-day stocktake answers four questions:
- What AI runs in the business?
- Which decisions does it support?
- Which qualify as high-impact?
- What is the gap to the Note's standards, with a prioritized remediation plan attached?
The output is a single document that the Board can sign off on and the supervisor can review.
Decision three: reset third-party contracts
Every TPA, technology vendor, and AI provider contract needs an addendum or amendment that imports the Section 9 outsourcing obligations:
- Audit rights
- Cybersecurity review provisions
- Bias testing reporting
- Kill-switch access
- Exit and data return terms
- Explainability documentation requirements
Contract work runs on a longer cycle than internal compliance changes. Insurers without contract resets in motion by mid-2026 will struggle to demonstrate supervisory readiness during routine examination.
6. The posture going forward
The Note is a calibration of supervisory expectation, not the start of a new regulatory regime. Insurers reading it as a fresh compliance burden will overreact. Insurers reading it as cosmetic will under-prepare. The asymmetry of risk argues against waiting. Enforcement depth in 2026 is unknown, but the trajectory of CBUAE practice has been toward principle-based supervision and active examination, not regulatory inertia.
The commercial opportunity for first-movers is real. Insurers that build an auditable AI governance posture in 2026 own a defensible market position. Customer trust in AI-driven insurance decisions is fragile, and visible governance is a market signal that policyholders, brokers, and insurer counterparties read clearly. The Note frames AI governance as a consumer-protection requirement. It also makes governance visible enough to be marketed.
The Innovation Hub and AI sandbox routes (Section 10) are explicit invitations to engage. Insurers that participate position themselves to influence any sectoral guidance that follows the Note. Those that do not will absorb whatever annex emerges, on whatever timeline it emerges.
Source and document context
Guidance Note on the Consumer Protection and Responsible Adoption and Use of Artificial Intelligence and Machine Learning by Licensed Financial Institutions in the U.A.E., issued 11 February 2026 by the Central Bank of the UAE
Cross-referenced instruments
UAE Personal Data Protection Law (Federal Decree-Law No. 45 of 2021); CBUAE Model Management Standards and Model Management Guidance (2022); CBUAE Consumer Protection Regulation (Article 8 cited); CBUAE Consumer Protection Standards; CBUAE Outsourcing Regulation for Banks; CBUAE Guidelines for Financial Institutions adopting Enabling Technologies; UAE Information Assurance Regulation; UAE Charter for the Development & Use of AI (July 2024); UAE National Strategy for AI
Federal legislation referenced
Federal Decree-Law No. 48 of 2023 Regulating Insurance Activities (Article 14 General Framework of Governance, Article 33 Measures and Sanctions, Article 34 List of Violations and Fines; verified against the gazetted text published by the CBUAE, First Edition September 2024); Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data; Decretal Federal Law No. 14 of 2018 on the Central Bank and Organization of Financial Institutions and Activities
About Axxion
Axxion Claims Settlement Services L.L.C. is a Dubai-based motor claims management company and the UAE's first dedicated motor third-party administrator (TPA). Axxion manages the full motor claims lifecycle on behalf of insurance partners, from first notification of loss through damage assessment, repair coordination, quality control, and settlement. The operation pairs more than four decades of hands-on repair and motor claims expertise with AI-enabled processes to deliver lower repair costs, shorter cycle times, and auditable compliance on every claim.
Axxion's claims platform generates a documented cost trail on each claim and produces burning cost analytics for insurer partners.
The company is led by Managing Director and Co-founder Frederik Bisbjerg, an internationally recognized insurance executive whose career includes C-level leadership at Qatar Insurance Group, AXA Global Healthcare, Al Wathba Insurance, and Daman National Health Insurance. Bisbjerg is a published author on insurance transformation and a founding faculty member of the world's first mini-MBA in Digital Insurance.
His work as Head of MENA at The Digital Insurer and his contributions to AI strategy across the GCC have made him one of the region's leading voices on the application of artificial intelligence in insurance operations.
Axxion operates within World Automotive Group, a MENA-based automotive and insurance services group. World Automotive Group is owned by Skelmore Holding, a global consortium founded in Toronto in 1994, with $650 million in revenue and 4,000 employees across the GCC and North America.
Contact
Frederik Bisbjerg, Managing Director, f@axxion.co
Stijn Venrooij, Executive Director, Operations and AI, s@axxion.co
Web: www.axxion.co · General contact: hi@axxion.co