1. Introduction
The growing use of machine learning (ML) in the banking sector has dramatically improved the effectiveness of fraud detection, especially in rapidly digitizing countries such as the UAE and Qatar. As a key instrument in financial operations, particularly in compliance-driven environments, artificial intelligence (AI) offers pronounced operational benefits but also raises important questions of transparency, fairness, and institutional trust in AI-driven decisions (
McNally & Bastos, 2025).
In the traditional banking sector, AI-driven fraud detection has transitioned from a supplementary tool to a key component of risk reduction. Nonetheless, in contrast to classic rule-based systems, ML models often function as "black boxes" that prevent auditors and financial experts from understanding and justifying AI-driven fraud detection alerts. This lack of transparency erodes accountability and auditability, necessitating the adoption of Explainable AI (XAI) frameworks to bridge the gap between automation and regulatory interpretability (
Salih et al., 2025).
Notwithstanding the increased interest in AI ethics and XAI, a central gap remains in the literature: empirical studies exploring how perceived transparency, fairness, and trust affect the use of AI-based fraud detection systems in compliance-focused environments, such as those in the UAE and Qatar, are scarce. Although prior work has acknowledged the effects of transparency and bias on AI acceptance, few studies have formulated combined models that measure these factors in connection with AI adoption within the financial industry. Furthermore, previous studies seldom consider how local regulatory pressures for compliance, as well as local culture, may affect trust in AI systems (
Abu Huson et al., 2025;
Hussin et al., 2025).
This paper expands the existing scholarly literature by putting forth a conceptual model that examines the interaction among AI transparency, fairness perception, trust, and the adoption of fraud detection. It addresses the following research question:
How do transparency, the perception of fairness, and trust affect AI adoption in fraud detection in traditional banks in the UAE and Qatar?
Applying Partial Least Squares Structural Equation Modeling (PLS-SEM) and Multi-Group Analysis (MGA) to the empirical data from auditors, compliance officers, and risk managers in UAE and Qatari banks, the research finds that transparency plays a strong role in fostering trust, which in turn has a strong impact on AI adoption. Fairness perception also moderates the trust–adoption relationship, stressing the need to circumvent algorithmic bias in order to increase AI credibility and acceptance.
This work contributes to the increasingly popular discussion of AI ethics and regulation by presenting a practical and policy-informed analysis of the conditions under which AI systems are most likely to be trusted and embraced in regulated financial sectors. It offers useful advice for banks, AI developers, and policymakers seeking to reconcile AI performance with responsibility.
The rest of the paper is organized as follows:
Section 2 provides a review of the existing literature on AI-based fraud detection and the drivers of ethical adoption.
Section 3 discusses the conceptual framework and methodology.
Section 4 discusses the empirical findings, followed by the discussion in
Section 5.
Section 6 concludes with implications for policy and directions for further research.
2. Literature Review
2.1. Introduction to AI in Fraud Detection
The use of artificial intelligence (AI) and machine learning (ML) has substantially changed fraud detection procedures in banking, enhancing both efficiency and accuracy in detecting suspicious transactions. In the financial sectors of the UAE and Qatar, fraud detection powered by AI has evolved from a complementary function to a primary fraud prevention tool, mainly through growing digitalization and regulatory requirements. The implementation of these technologies, however, raises vital interpretability, trust, and fairness issues—concerns that are especially significant in high-risk contexts such as fraud detection, where mistakes can have legal and reputational implications (
Morshed & Khrais, 2025).
Compared with traditional rules-based systems, AI models—particularly those reliant on deep learning—tend to be “black boxes”, concealing the reasoning logic behind their decisions. This makes it more challenging for auditors and compliance officers to substantiate AI-driven fraud alerts, thus undermining trust in automated decision-making (
Andrae, 2025). Where regulatory oversight is intense, as in the Gulf, the inability to clarify why a legitimate transaction has been identified as suspicious can give rise to doubts surrounding model dependability and institutional accountability. Such doubts are further compounded by the fact that there are no standardized procedures for assessing AI interpretability in diverse banking contexts (
Al-Abbadi et al., 2025;
Oguntibeju, 2024).
2.2. AI Techniques and Explainability in Fraud Detection
AI-based fraud detection systems typically combine supervised learning algorithms, such as decision trees and random forests, with unsupervised algorithms, such as isolation forests and autoencoders, to detect anomalous transaction activity (
Kim et al., 2025). These methods increase fraud detection accuracy by identifying sophisticated and intricate data patterns. They are, however, not transparent, which hinders interpretability in compliance contexts, where an explanation of each flagged transaction is necessary (
Hosain et al., 2024).
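To make the division of labour between these two families of methods concrete, the following minimal Python sketch pairs a supervised classifier with an unsupervised anomaly detector on synthetic data; the feature set, thresholds, and escalation rule are illustrative assumptions, not the configuration of any bank discussed in this paper.

```python
# Illustrative sketch only: a supervised model trained on labeled fraud cases is
# combined with an unsupervised anomaly detector, mirroring the pairing described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))        # hypothetical features: amount, hour, merchant risk, velocity
y_train = (X_train[:, 0] > 2).astype(int)   # toy fraud label for demonstration purposes

# Supervised component: learns known fraud patterns from labeled history.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Unsupervised component: flags transactions that look anomalous regardless of labels.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

X_new = rng.normal(size=(5, 4))
fraud_prob = clf.predict_proba(X_new)[:, 1]  # probability of the fraud class
is_anomaly = iso.predict(X_new) == -1        # scikit-learn marks anomalies with -1

# A transaction is escalated for review if either component raises a concern.
flagged = (fraud_prob > 0.5) | is_anomaly
print(flagged)
```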
In response, financial firms increasingly use Explainable AI (XAI) approaches, including Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), which convert AI model outputs into explanations that humans can understand (
Yeo et al., 2025). SHAP assigns an importance value to every input feature, illustrating its contribution to a given output, whereas LIME approximates a complex model with a simpler, interpretable model around a given prediction, assisting in explaining individual results (
Younisse et al., 2022).
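As a rough illustration of how these two techniques are typically invoked, the sketch below applies them to a tree-based fraud classifier such as the one above; it assumes the open-source shap and lime packages and hypothetical feature names, and does not describe any production system.

```python
# Illustrative sketch: SHAP and LIME explanations for a fitted classifier `clf`
# trained on `X_train` (see the previous sketch). Feature names are hypothetical.
import shap
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["amount", "hour", "merchant_risk", "velocity"]

# SHAP: per-feature contribution values for each prediction of a tree-based model.
tree_explainer = shap.TreeExplainer(clf)
shap_values = tree_explainer.shap_values(X_train[:10])   # contributions for ten transactions

# LIME: fits a simple local surrogate model around a single flagged transaction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["legitimate", "fraud"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(explanation.as_list())   # human-readable (feature, weight) pairs for this alert
```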
Although promising, these methods are not free of limitations. SHAP and LIME are computationally intensive, which limits their scalability for real-time fraud detection. Additionally, their outputs can vary with implementation choices, resulting in inconsistencies during regulatory audits (
Tahir et al., 2024). For these reasons, banks are adopting hybrid AI models that combine rule-based reasoning with machine learning to balance explainability with detection performance (
Hjelkrem & Lange, 2023).
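One minimal way to picture such a hybrid is a rule layer evaluated alongside the model score, so that every alert carries at least one auditable reason; the thresholds, rules, and field names below are illustrative assumptions rather than any bank's actual policy.

```python
# Illustrative hybrid decision step: explicit rules plus an ML risk score.
def hybrid_fraud_decision(txn: dict, ml_score: float) -> dict:
    reasons = []
    # Rule layer: transparent, auditable conditions.
    if txn["amount"] > 10_000:
        reasons.append("amount exceeds reporting threshold")
    if txn["country"] not in txn["customer_usual_countries"]:
        reasons.append("transaction originates from an unusual country")
    # ML layer: statistical risk score produced by the trained model.
    if ml_score > 0.8:
        reasons.append(f"model risk score {ml_score:.2f} above 0.80")
    return {"flag": bool(reasons), "reasons": reasons}

txn = {"amount": 12_500, "country": "XX", "customer_usual_countries": {"AE", "QA"}}
print(hybrid_fraud_decision(txn, ml_score=0.65))
# -> flagged, with two rule-based reasons a compliance officer can verify directly
```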
Real-world examples validate the practical utility of XAI. JPMorgan Chase, for example, applies SHAP-based models to ensure its transaction surveillance systems deliver traceable logic for compliance officers (
Dichev et al., 2025;
Villani et al., 2022). First Abu Dhabi Bank (FAB) in the Gulf has also introduced fraud detection technology in compliance with local regulatory requirements for interpretability, while maintaining detection performance (
Odeh et al., 2024). Such deployments confirm the growing role of XAI in upholding operational efficiency alongside regulatory compliance (
Marín Díaz et al., 2023).
2.3. Trust, Transparency, and Regional Compliance Views
Trust is a key pillar for the effective implementation of AI in fraud detection, particularly in compliance-driven banking sectors such as those in the UAE and Qatar. Accountants and compliance officers must be able to comprehend and substantiate AI-driven alerts if they are to have confidence in automated systems. Without transparency, trust erodes, curbing willingness to use AI tools even when they promise operational advantages (
Ashfaq & Ayub, 2021).
The opacity of AI fraud models, particularly deep learning systems, complicates auditors’ explanations for why some transactions are flagged. This lack of interpretability has a direct influence on regulatory acceptance. For instance,
Akhtar et al. (
2024) point out that algorithmic obscurity erodes auditor trust, particularly in jurisdictions where explanations of fraud alerts are required. Regulators in the UAE and Qatar increasingly demand that AI systems deliver audit-ready, interpretable outputs (
Price, 2025).
In addition, the selective or sporadic adoption of explanation techniques such as SHAP or LIME further complicates institutional trust. Whereas some institutions adopt XAI tools proactively, others use them only in high-risk cases, resulting in uneven oversight quality. Such inconsistency may undermine the perceived fairness and legitimacy of fraud detection results (
Bhardwaj & Parashar, 2025).
Regulatory philosophies in the regions also impact AI adoption in varying ways. Western banks, such as Citibank and Barclays, tend toward real-time fraud detection efficiency through their dependence on high-speed, black-box AI systems (
Kaluarachchi & Sedera, 2024). Banks in the Gulf, such as Qatar National Bank and Emirates NBD, focus more on explainability and regulatory compliance, even if this comes at the cost of minor compromises in terms of detection performance (
AL-Dosari et al., 2024). The differing strategies reflect the role of governance structures in determining the trade-off between model efficiency and transparency.
2.4. Fairness, Mitigation of Bias, and Human Judgment in Fraud Detection
Apart from transparency, fairness is also becoming a key factor in maintaining trust in AI-based fraud detection. If the decisions made by AI are biased or perceived as discriminatory by auditors and compliance officers, adoption is more likely to be met with resistance—even when the AI systems are accurate. In regulated ecosystems such as those in the UAE and Qatar, perceived algorithmic fairness plays a central role in the institutional adoption and acceptance of AI tools (
Barnes et al., 2024;
Majrashi, 2025).
Recent cases involving global financial services highlight the importance of fairness-aware AI. The prominent algorithmic bias scandal at Wells Fargo in credit decision-making was a cause for alarm regarding discriminatory outcomes in AI deployment (
Saxena et al., 2024). This was a credit-specific case, but it demonstrates how AI systems, if not regulated, can unintentionally perpetuate systemic biases. In response, banks such as Deutsche Bank and Standard Chartered have introduced fairness-aware AI frameworks with real-time bias detection as part of their fraud detection pipelines (
Deshpande et al., 2023;
Maghyereh & Ziadat, 2024). These frameworks require not only that AI decisions be explainable but also that they be fair across groups. Fairness is pursued through approaches like adversarial debiasing, where the model is trained to purge biased patterns, or fairness constraints that impose equitable outcomes in fraud detection. Increasing human oversight, such as auditors manually reviewing suspicious fraud cases, adds a crucial layer of accountability, most notably in regulated contexts in which this oversight is required for compliance (
Pulivarthy & Whig, 2025).
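A simple form of such fairness monitoring is to compare alert rates across customer segments before alerts reach investigators. The sketch below shows this demographic-parity style check on toy data; the segment column, values, and tolerance are hypothetical, and the check is far simpler than the adversarial debiasing or constrained training mentioned above.

```python
# Illustrative fairness check: compare fraud-flag rates across customer segments.
import pandas as pd

alerts = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "A", "B", "A"],   # hypothetical customer grouping
    "flagged": [1, 0, 1, 1, 0, 0, 1, 0],
})

flag_rates = alerts.groupby("segment")["flagged"].mean()
disparity = flag_rates.max() - flag_rates.min()            # demographic-parity style gap
print(flag_rates)
print(f"Flag-rate disparity across segments: {disparity:.2f}")
# If the disparity exceeded an agreed tolerance, cases would be routed to manual review
# and the model re-examined before further deployment.
```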
Research evidence also substantiates the role of perceived fairness in AI adoption. In high-risk settings such as fraud detection, fairness can strengthen trust, making specialists more likely to adopt AI systems they perceive as fair and transparent (
Elamin et al., 2025;
Fundira et al., 2024).
2.5. Global AI Governance and Regional Alignment
The ethical adoption of AI in fraud detection is further shaped by international regulatory guidelines that promote fairness, openness, and accountability. Major standards, such as the EU AI Act, the US NIST AI Risk Management Framework, and the OECD AI Principles, put forward explainability and bias reduction as prerequisites for ethical use. Many countries, including the UAE and Qatar, are aligning with these guidelines, particularly in their regulated financial sectors (
Schuett, 2024).
Banks in the Gulf are actively implementing Explainable AI (XAI) and fairness-based governance to detect fraud. Regulations such as the UAE Personal Data Protection Law and Qatar’s Data Privacy Protection Regulations require financial institutions to explain algorithmic decisions made concerning private data, such as fraud warnings. These regulations mandate transparent, traceable, and verifiable AI systems in line with domestic and global standards (
Ibrahim & Truby, 2022).
Moreover, banks are adopting hybrid approaches that integrate formal regulation with in-house AI ethics management. Models like the IEEE Ethically Aligned Design (EAD) are being used to support bias minimization, auditability, and human-centric risk assessment (
Pulivarthy & Whig, 2025). Hybrid approaches go beyond SHAP and LIME; they also encompass new interpretability methods like counterfactual explanations, which explain how small alterations in input data might affect fraud warning decisions.
Ultimately, institutional and regional factors affect the adoption of AI and its governance. Whereas operational efficiency tends to concern Western banks, compliance and ethical certainty are more significant for Gulf banks. This is not merely a reflection of technological preference but also of regulatory culture, tolerance of risk, and the maturity of oversight institutions. Global AI ethics convergence with local regulatory frameworks would be instrumental in scaling AI-based fraud detection in compliance-focused markets such as the UAE and Qatar.
Based on the literature reviewed and the gaps identified, the following hypotheses are put forward to address the impact of transparency, fairness perception, trust, regulatory compliance, and AI exposure on the adoption of AI-based fraud detection systems in financial institutions:
H1: Regulatory compliance positively influences the adoption of AI-based fraud detection systems.
H2: Transparency in AI-based fraud detection models significantly influences auditors’ trust and adoption intentions.
H3: Algorithmic bias negatively affects the perception of AI fairness in fraud detection.
H4: Fairness perception positively influences trust in AI systems.
H5: Trust in AI systems has a positive effect on AI-driven fraud detection tool adoption.
H6: Prior exposure to AI positively influences trust in, and adoption of, AI-based fraud detection tools.
H7: The perception of fairness moderates the relationship between trust and the adoption of AI.
3. Methodology
This research utilized Partial Least Squares Structural Equation Modeling (PLS-SEM) via SmartPLS 4 to examine the factors contributing to the adoption of AI-based fraud detection systems in UAE and Qatari banks. PLS-SEM was chosen over Covariance-Based SEM (CB-SEM) because it can handle complex prediction models involving latent variables, non-normally distributed data, and modest sample sizes. It is also useful in exploratory studies and theory building, especially in new research topics such as AI regulation in financial compliance (
Dash & Paul, 2021).
In a bid to enhance cross-comparative understanding, Multi-Group Analysis (MGA) was utilized to analyze structural dissimilarities in trust, fairness perception, and AI adoption across different contextual factors—namely, regulatory regimes (UAE vs. Qatar) and types of auditors (internal vs. external). MGA facilitates the analysis of differences in relationships among constructs for different subgroups, thus deepening the understanding of institutional and professional drivers of AI adoption (
Cheah et al., 2023).
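For readers unfamiliar with MGA, the comparison can be summarized as testing whether a structural path coefficient differs between two groups. One common parametric approximation (used alongside the nonparametric PLS-MGA and permutation procedures reported by SmartPLS) compares the bootstrapped path estimates as

\[
t = \frac{\beta^{(1)} - \beta^{(2)}}{\sqrt{SE_{(1)}^{2} + SE_{(2)}^{2}}},
\]

where \(\beta^{(g)}\) and \(SE_{(g)}\) denote the path coefficient and its bootstrap standard error in group \(g\) (e.g., UAE vs. Qatar).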
3.1. Conceptual Framework
This paper develops a conceptual model to examine the impact of organizational, technological, and ethical considerations on the adoption of AI-based fraud detection systems in the banking markets of Qatar and the UAE. The framework includes seven main constructs derived from a review of the literature: transparency, algorithmic bias, fairness perception, trust, regulatory compliance, AI exposure, and AI adoption. Based on trust theory and AI ethics studies, the model posits that transparency, fairness, and previous exposure to AI are key factors in determining professional trust in AI systems, which consequently impacts adoption. Regulatory compliance is also expected to play a direct role in adoption, especially in compliance-sensitive financial environments.
The conceptual model includes both direct and indirect (mediated and moderated) relationships among the variables, which were tested using PLS-SEM.
Figure 1 shows the relationship between the variables.
H1: Regulatory Compliance → AI Adoption.
H2: Transparency → Trust and Adoption.
H3: Algorithmic Bias → Fairness Perception.
H4: Fairness Perception → Trust.
H5: Trust → AI Adoption.
H6: AI Exposure → Trust and Adoption.
This framework allows for a multidimensional analysis of how institutional and perceptual factors interact to influence trust and the successful adoption of AI technologies in regulated financial environments.
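In standardized form, the hypothesized structural relationships in Figure 1 can be sketched as the following system (a stylized representation rather than the exact PLS estimation, with the ζ terms denoting structural disturbances and the interaction term capturing the hypothesized moderation):

\[
\begin{aligned}
\text{Fairness} &= \gamma_{1}\,\text{Bias} + \zeta_{1},\\
\text{Trust} &= \gamma_{2}\,\text{Transparency} + \gamma_{3}\,\text{Fairness} + \gamma_{4}\,\text{Exposure} + \zeta_{2},\\
\text{Adoption} &= \beta_{1}\,\text{Trust} + \beta_{2}\,\text{Transparency} + \beta_{3}\,\text{Compliance} + \beta_{4}\,\text{Exposure} + \beta_{5}\,(\text{Trust}\times\text{Fairness}) + \zeta_{3}.
\end{aligned}
\]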
3.2. Multi-Group Analysis (MGA) Comparisons
To explore contextual differences in how AI trust and adoption are influenced by regulatory and individual factors, a Multi-Group Analysis (MGA) was conducted using SmartPLS. MGA enables the statistical comparison of path relationships between predefined subgroups, providing insight into how structural relationships vary across different contexts.
Three sets of group comparisons were designed as follows:
1. Regulatory Environment Analysis (UAE vs. Qatar)
This comparison investigates the influence of differing regulatory environments in the UAE and Qatar on perceptions of AI transparency, fairness, and trust. It helps identify whether national governance frameworks shape how professionals evaluate and adopt AI fraud detection systems.
2. Auditor Type Analysis
This group comparison assesses how internal versus external auditors differ in their trust in AI, based on their roles in evaluating compliance, risk, and operational transparency. Given their distinct oversight responsibilities, variations in adoption behavior and perceptions of fairness are expected.
3. AI Exposure and Fairness Perception Groups
The participants were divided based on a mix of their previous exposure to AI, sense of fairness, and years of work experience:
Group 1 (High AI Exposure and Trust): Those with over 11 years of experience, previous AI use, and a positive perception of fairness.
Group 2 (Low AI Exposure and Trust): Participants with fewer than 10 years of experience, no AI exposure, and a perception of bias in AI systems.
This systematic classification facilitated the investigation of how experience and ethical perceptions shape the association between trust and adoption, allowing for a clearer understanding of how human factors drive AI implementation in different compliance environments.
3.3. Population and Sample
This survey targeted fraud detection, compliance, and risk management professionals directly employed by traditional banks in Qatar and the UAE. The targeted participants included auditors, compliance officers, risk managers, and AI specialists who are likely to be familiar with fraud detection systems and with interpreting AI outputs in a regulatory context (
Morshed et al., 2024b).
To ensure sample validity, participants were screened based on the following inclusion criteria:
A minimum of two years’ professional experience in fraud detection, compliance oversight, or AI risk evaluation.
Demonstrated exposure to AI-based fraud detection, either through system usage, audit/review processes, or AI implementation projects.
Active roles in interpreting or applying AI-generated insights related to fraud monitoring or compliance functions (
Marty & Ruel, 2024).
Individuals not involved in fraud-specific roles (e.g., general IT, credit risk AI developers, or unrelated technology functions) were excluded. This ensured that the sample reflected real-world users and evaluators of AI fraud detection tools.
The sample was compiled using a stratified purposive sampling approach to ensure balanced representation across countries and professional roles. A total of 560 professionals were invited, and after screening for completeness and eligibility, 409 valid responses were retained, resulting in a strong response rate of 73%, which is considered high for regulated industry surveys (
Yaw et al., 2022).
This targeted sampling approach supports this study’s aim of understanding AI trust and adoption among decision-makers who are directly affected by AI implementation in compliance-sensitive environments.
3.4. Data Collection, Bias Mitigation, and Power Analysis
The data were collected over a span of six months (July–December 2024) by combining online survey tools (Qualtrics and Google Forms), targeted invitations on LinkedIn professional networks, and personalized email invitations to UAE- and Qatar-based financial sector professionals. This multi-mode collection ensured coverage of different roles, including auditors, compliance teams, and AI governance teams.
To ensure a structured, comparable sample for Multi-Group Analysis (MGA), a stratified purposive sampling strategy was used. The sample was balanced across the key subgroups: internal and external auditors, compliance officers, risk managers, and AI specialists. Geographic coverage was roughly balanced between the UAE (51.3%) and Qatar (48.7%) to enable country-level MGA comparisons. Details of the participant breakdown by profession, country, education, experience, AI exposure, fairness perception, and gender are summarized in
Table 1.
We invited 560 professionals in total. Of those, 409 valid responses were kept after eligibility screening and data quality checks, resulting in a 73% usable response rate, which is strong for the banking and regulatory sector, where confidentiality restrictions may prevent participation (
Matthews, 2017).
To mitigate response bias, the following procedures were applied:
Respondents were assured anonymity and confidentiality to encourage truthful and unbiased answers.
Randomized question order and reverse-coded items were embedded to detect patterned or inattentive responses.
A pilot test with 12 participants helped refine question clarity and ensure contextual fit with fraud detection practices.
A post hoc statistical power analysis conducted using G*Power 3.1 confirmed that the sample size of 409 provided at least 80% statistical power to detect medium to large effect sizes (f² = 0.15) at a significance level of 0.05. This supports the statistical reliability of the structural relationships in the PLS-SEM model.
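As a rough check on this result, under G*Power's noncentrality formula for a multiple regression F-test, the achieved sample implies

\[
\lambda = f^{2} \times N = 0.15 \times 409 \approx 61.4,
\]

well above the noncentrality needed for 80% power at α = 0.05; for a model with roughly seven predictors, a sample on the order of 100 respondents would already suffice, which is consistent with the post hoc analysis reported here.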
3.5. Questionnaire Development
The questionnaire was developed to measure key constructs affecting the adoption of AI-driven fraud detection systems in conventional banks in the UAE and Qatar. The items were based on validated scales adapted from prior studies on AI transparency, trust, fairness perception, algorithmic bias, regulatory compliance, and technology adoption (
Morshed et al., 2024a).
Each construct was operationalized using multi-item reflective measures and rated on a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree), which is consistent with best practices in behavioral and information systems research.
To ensure contextual relevance, particularly for the compliance-heavy banking sector, the following actions were taken:
Item wordings were refined in consultation with academic experts and compliance officers.
A pilot test with 12 professionals was conducted to assess clarity, realism, and fraud-specific applicability.
Terms like “AI decisions”, “fraud alerts”, and “regulatory oversight” were calibrated to reflect real-world audit and risk processes in Gulf financial institutions.
The final questionnaire included seven constructs as follows:
Transparency (e.g., the interpretability and auditability of AI fraud decisions).
Algorithmic bias (e.g., bias detection and discriminatory risk classifications).
Fairness perception (e.g., equity and non-discrimination in AI outcomes).
Trust (e.g., confidence in AI fraud alerts and system reliability).
Regulatory compliance (e.g., adherence to explainability and governance standards).
AI exposure (e.g., familiarity and prior experience with AI systems).
AI adoption (e.g., organizational readiness and willingness to adopt AI tools).
Descriptive statistics and item-level diagnostics are presented in
Table 2. Skewness and kurtosis values fell within acceptable ranges, indicating no substantial deviations from normality in the Likert-based survey data.
3.6. Measurement Model Evaluation
The measurement model’s robustness and validity were assessed through reliability, validity, multicollinearity diagnosis, predictive power, and model fit analysis. Reliability and validity tests ensure internal consistency and construct distinctiveness, while multicollinearity diagnostics detect redundancy among predictors. Predictive power analysis evaluates real-world applicability, and model fit and bias assessments confirm structural alignment and rule out systematic bias. The results are presented in
Table 3,
Table 4 and
Table 5.
The measurement model was rigorously evaluated to ensure its statistical robustness and theoretical integrity. Internal consistency reliability was confirmed through Cronbach’s alpha and Composite Reliability (CR), with all constructs exceeding the recommended threshold of 0.70. Convergent validity was supported by Average Variance Extracted (AVE) values, all above the 0.50 benchmark (
Fornell & Larcker, 1981). Discriminant validity was verified using the Heterotrait-Monotrait (HTMT) ratio, with all inter-construct values falling below the 0.85 threshold. Additionally, multicollinearity diagnostics indicated that all Variance Inflation Factor (VIF) values ranged from 1.98 to 2.25, well within acceptable limits, suggesting that the predictor variables were not excessively correlated.
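For reference, the reliability and convergent validity statistics reported above are computed from the standardized indicator loadings \(\lambda_i\) of each construct with \(k\) indicators and error variances \(\operatorname{Var}(\varepsilon_i)\):

\[
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_{i}\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_{i}\right)^{2} + \sum_{i=1}^{k}\operatorname{Var}(\varepsilon_{i})},
\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_{i}^{2},
\]

with CR above 0.70 and AVE above 0.50 corresponding to the thresholds applied here; HTMT, by contrast, is computed from the ratio of between-construct to within-construct indicator correlations and is benchmarked against 0.85.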
The explained variances (R²) for the key dependent variables were as follows: trust = 0.40, AI adoption = 0.45, and fairness perception = 0.39. These values are modest but acceptable for behavioral models in the social sciences, and they indicate that a substantial share of the variance in the outcome variables is explained, given the complexity of adoption decisions in regulated financial settings.
The predictive relevance (Q²) values were positive for all the important constructs, validating the generalizability of the model to novel data. A Standardized Root Mean Square Residual (SRMR) value of 0.057 signifies a good model fit, and Harman’s Single-Factor Test confirmed that common method bias was not a concern (variance explained = 32.4%) (
Ali et al., 2024).
As shown in
Table 5, the measurement model has a good fit, signifying that the observed data support the hypothesized structure. Additionally, the common method bias analysis shows that no single factor accounts for an undue share of the variance, supporting the reliability and validity of the findings. These results confirm the measurement model’s strength, making it suitable for subsequent structural analysis and hypothesis testing (
Morshed, 2025).
4. Results
The principal findings regarding the factors that influence financial institutions’ adoption of AI for fraud detection are summarized below. The key drivers identified were trust, transparency, perceived fairness, algorithmic bias, regulatory compliance, and exposure to AI. Hypothesis testing, effect size estimation, and the analysis of explained variance substantiated these drivers. Their indirect and interaction effects were demonstrated through significant mediation and moderation results, which added further nuance to the understanding of adoption dynamics.
Subgroup analysis underscored differences by country (UAE versus Qatar), by professional role (internal versus external auditors), and by level of exposure to AI (high versus low), focusing on contextual and experiential differences. Causal validation also provided evidence supporting trust as a viable and potent predictor of AI adoption, emphasizing its utility in guiding implementation strategies.
4.1. Hypothesis Testing, Effect Size, and Explained Variance
Table 6 and
Figure 2 show that all six hypothesized direct associations were statistically significant and in line with theoretical expectations. Regulatory compliance had a strong impact on AI adoption (H1), confirming that transparent legal frameworks and auditing standards are key drivers of institutional acceptance of AI systems for fraud detection (
Ghasemaghaei & Kordzadeh, 2024). Transparency positively influenced both trust and adoption (H2a, H2b), supporting prior work indicating that transparent AI encourages user confidence and system integration, especially in regulated environments such as banking (
Abu Huson et al., 2025;
Tahir et al., 2024). The path from algorithmic bias to perceived fairness showed a strong negative association (H3), demonstrating that biases or inconsistencies in fraud alerts can undermine the ethical acceptance of AI outputs. Conversely, the perception of fairness greatly enhanced trust (H4), consistent with prior theory connecting ethical assurance to user confidence (
Kim et al., 2025). Trust was the most powerful predictor of AI adoption (H5), reaffirming its pivotal role in AI-based system adoption models (
McNally & Bastos, 2025;
Rahmani et al., 2024). Finally, prior exposure to AI greatly improved trust and adoption measures (H6a, H6b), attesting to the value of experientially acquired familiarity in building confidence in complex AI tools (
Mohsen et al., 2024).
Effect size analysis in
Table 7 shows that trust is the strongest predictor of AI adoption (f² = 0.31), underscoring its central role. Transparency (f² = 0.26) and fairness perception (f² = 0.29) moderately influence trust, while fairness perception also moderates the trust–adoption link (f² = 0.22). Regulatory compliance and AI exposure have smaller yet meaningful effects. Overall, the model explains a moderate share of the variance in trust, fairness, and adoption, supporting its theoretical and practical relevance (
wael Al-khatib et al., 2024).
As shown in
Table 8, the model exhibits high predictability, with AI adoption (R² = 0.48) explained primarily by trust, fairness perception, and regulation. Trust (R² = 0.42) is in turn heavily driven by transparency and prior experience with AI, implying that system transparency and user experience generate confidence. Fairness perception (R² = 0.41) is primarily explained by perceived minimization of algorithmic bias, pinpointing bias mitigation as key to maximizing perceptions of fairness. Combined, these findings provide theoretical support for the framework, validating the central roles of trust, fairness, and regulation in promoting AI adoption in financial risk management (
Rahmani et al., 2024).
4.2. Mediation and Moderation Analysis
The mediation model, reported in
Table 9, tested whether fairness perception mediates the effect of algorithmic bias on AI adoption. The initial direct effect of algorithmic bias on AI adoption was statistically significant (β = −0.21,
p = 0.0026), as higher levels of perceived bias directly decrease the intentions to use AI-based fraud detection systems. However, after adding fairness perception as a mediating variable, the direct effect became weaker and statistically non-significant (β = −0.08,
p = 0.1478). Instead, the indirect route—from algorithmic bias through fairness perception to AI adoption—was statistically significant (indirect β = −0.14,
p = 0.0013). These findings imply that fairness perception completely mediates the effect of algorithmic bias on adoption, with fairness concerns being a priority area for mitigating the negative influences of bias on implementation.
This pattern indicates full mediation: the negative impact of algorithmic bias on adoption operates through its erosion of fairness perceptions. In other words, professionals are less likely to adopt AI systems not simply because they perceive them as biased, but because such bias undermines their confidence in the system’s fairness. These findings highlight the pivotal role of ethical perception in shaping trust in and acceptance of AI within regulated financial environments (
Ghasemaghaei & Kordzadeh, 2024).
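The reported coefficients are internally consistent with this interpretation: the total effect of algorithmic bias on adoption decomposes into its direct and indirect components,

\[
\beta_{\text{total}} \approx \beta_{\text{direct}} + \beta_{\text{indirect}} = -0.08 + (-0.14) = -0.22 \approx -0.21,
\]

which matches the direct effect estimated before the mediator was introduced, while the non-significant residual direct path is what signals full mediation.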
In
Table 10, the moderation analysis examined whether fairness perception moderates the relationship between trust and AI adoption. The main effects of both trust (β = 0.42,
p < 0.001) and fairness perception (β = 0.31,
p < 0.001) on AI adoption were statistically significant, reaffirming their individual importance in influencing adoption decisions.
More importantly, the interaction term trust × fairness perception also showed a significant positive effect on AI adoption (β = 0.19, p = 0.0009), indicating that fairness perception acts as a moderator. This means that the strength of the relationship between trust and AI adoption is amplified when individuals perceive the AI system as fair. In practical terms, even when trust is high, AI adoption increases further if the user also believes the system is equitable and unbiased.
This finding underscores the synergistic effect between cognitive trust and ethical perception, suggesting that trust alone may not be sufficient without a perception of fairness in AI outcomes, especially in high-stakes financial environments (
Liefgreen et al., 2024).
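If the constructs are treated as standardized (an assumption made here only for illustration), the estimated moderation can be read as a conditional effect of trust on adoption:

\[
\frac{\partial\,\text{Adoption}}{\partial\,\text{Trust}} = 0.42 + 0.19 \times \text{Fairness},
\]

so the effect of trust rises to roughly 0.61 when fairness perception is one standard deviation above its mean and falls to roughly 0.23 when it is one standard deviation below, illustrating the amplification described above.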
4.3. Subgroup Comparison
PLS-MGA results show that AI adoption in the UAE and Qatar is driven by similar factors, with most relationships between constructs remaining stable. However, two key differences emerge: Fairness Perception has a stronger impact in the UAE, indicating greater sensitivity to fairness concerns, while trust plays a more critical role in the UAE, suggesting that respondents rely more on trust when adopting AI fraud detection tools (see
Figure 3).
A measurement invariance test, in
Table 11, confirmed construct comparability, ensuring that observed differences reflect real variations in perception. A permutation test validated the robustness of the results, confirming the statistical significance of differences in fairness perception and trust. These findings suggest that AI adoption strategies in the UAE should emphasize fairness and trust-building, while in Qatar, efforts may be needed to reinforce their importance. The overall consistency in other relationships indicates shared AI adoption trends, with differences mainly in the emphasis on fairness and trust.
The results, in
Table 12, show a significant difference in trust levels between internal and external auditors regarding AI fraud detection. Internal auditors reported higher trust (mean = 4.35) than external auditors (mean = 4.12), with a t-value of 2.89 and a
p-value of 0.0041, indicating that the difference is unlikely due to chance. The negative t-value (−2.89) for external auditors represents the reverse comparison, reinforcing their lower trust. Internal auditors’ greater trust likely stems from frequent interaction with AI tools, while external auditors, relying on external validation and regulations, remain more skeptical. Enhancing transparency and assurance mechanisms could strengthen external auditors’ trust (
Kahyaoglu & Aksoy, 2021).
PLS-MGA results, in
Table 13, confirm that trust affects AI adoption differently for internal and external auditors. Trust has a stronger impact on internal auditors (β = 0.52) than on external auditors (β = 0.38), with a significant difference (
p = 0.029). Internal auditors who work closely with AI are more likely to adopt it when they trust its decisions. External auditors, constrained by regulatory frameworks and professional skepticism, rely less on trust. Tailored AI adoption strategies, including verification processes, explainability measures, and compliance assurances, are crucial for increasing external auditors’ confidence in AI fraud detection (
Hassan et al., 2025).
The evidence, in
Table 14, shows that those with greater exposure to AI technologies embrace AI-based fraud detection solutions at much higher levels than those with low exposure. The high-exposure group reported a mean adoption score of 4.42, compared to 4.08 for their low-exposure counterparts, with a statistically significant t-value of 3.76 (
p = 0.0003). This difference shows that exposure to AI tools has a positive effect on adoption behavior. The negative t-value of the low-exposure group further confirms their relatively lower adoption levels. Overall, the findings confirm that exposure to AI diminishes skepticism, boosts users’ confidence, and facilitates widespread acceptance of AI-based solutions for fraud detection scenarios (
Chandratreya, 2024).
The Partial Least Squares Multi-Group Analysis (PLS-MGA) findings, in
Table 15, show that both familiarity with AI and trust play a significantly stronger role in AI adoption among high-exposure participants compared to low-exposure participants. In particular, the effect of familiarity on adoption is also greater in the high-exposure group (β = 0.48) than in the low-exposure group (β = 0.31), with the difference being statistically significant (
p = 0.018). These findings confirm that heavy users are more likely to embrace AI systems as long as they understand them better, reinforcing the significance of familiarity in shaping adoption behavior.
Similarly, trust is a stronger predictor in the high-exposure group (β = 0.52) than in the low-exposure group (β = 0.40, p = 0.041), implying that greater exposure increases confidence in AI’s accuracy and reliability. In contrast, fairness perception shows no significant difference (p = 0.422), indicating that both groups value fairness equally in their adoption decisions.
4.4. Causal Validation Techniques
2SLS regression results in
Table 16 confirm that trust drives AI adoption, not vice versa. In the first stage, prior AI exposure outside of banking predicts trust (β = 0.49,
p < 0.001), validating it as a strong instrumental variable (IV). In the second stage, instrumented trust predicts AI adoption (β = 0.51,
p < 0.001), showing that trust influences adoption independently of past behaviors. The absence of endogeneity concerns rules out reverse causality, confirming trust as a key driver of AI adoption (
Wang & Wang, 2024).
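In equation form, the two stages reported in Table 16 correspond to

\[
\begin{aligned}
\text{Stage 1:}\quad & \text{Trust}_{i} = \pi_{0} + \pi_{1}\,\text{PriorExposure}_{i} + v_{i}, \qquad \hat{\pi}_{1} = 0.49,\\
\text{Stage 2:}\quad & \text{Adoption}_{i} = \beta_{0} + \beta_{1}\,\widehat{\text{Trust}}_{i} + \varepsilon_{i}, \qquad \hat{\beta}_{1} = 0.51,
\end{aligned}
\]

where prior AI exposure outside banking serves as the instrument and the fitted values \(\widehat{\text{Trust}}_{i}\) from the first stage replace observed trust in the second.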
Lagged regression, in
Table 17, reinforces this causal link by introducing a six-month gap between trust (T1) and AI adoption (T2). Trust at T1 predicts AI adoption at T2 (β = 0.47,
p < 0.001), thereby eliminating simultaneity bias. These findings confirm trust’s lasting impact on AI adoption, strengthening this study’s causal model.
The logistic regression analysis results, in
Table 18, show that auditors’ trust in AI-based fraud detection tools is a significant factor in whether or not they adopt them. Trust is identified as a strong predictor of adoption (β = 0.85,
p < 0.001), with an odds ratio of 2.34, implying that a one-unit increase in trust more than doubles the odds of adopting AI. Also, perceived fairness has a significant impact on adoption behavior (β = 0.52,
p = 0.0025), with an odds ratio of 1.68. These results indicate that auditors who perceive AI systems as fair are significantly more likely to adopt them in their work. Overall, the results underscore the significant roles that both trust and perceived fairness play in the adoption of AI in auditing.
The negative intercept value (β₀ = −1.37,
p < 0.001) suggests that without trust and fairness perceptions, the likelihood of choosing AI adoption is low. These results validate that both fairness perception and trust are crucial determinants in the adoption of AI, and auditors who have faith in the transparency and fairness of the system are likely to adopt it for fraud detection (
Adelakun et al., 2024). These results are further illustrated in
Figure 4.
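The reported odds ratios follow directly from exponentiating the logit coefficients: \(e^{0.85} \approx 2.34\) for trust and \(e^{0.52} \approx 1.68\) for fairness perception. Likewise, at the zero point of the predictors (assuming they are scored so that zero represents a meaningful baseline), the intercept implies an adoption probability of only \(1/(1+e^{1.37}) \approx 0.20\), consistent with the interpretation that adoption is unlikely in the absence of trust and perceived fairness.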
5. Discussion
This section discusses the use of artificial intelligence (AI) and machine learning (ML) for fraud detection at banks in the UAE, Qatar, and the West. ML technologies improve detection accuracy by automating anomaly detection, reducing audit workloads, and enabling real-time blocking of fraud. Unlike rule-based systems, ML algorithms adapt to changes in fraud patterns, thereby improving detection and minimizing false positives. Adoption patterns differ by region: Western banks prioritize speed and operational efficiency, whereas UAE and Qatari banks prioritize reliability, fairness, and regulatory conformity, reflecting tighter oversight and differing risk tolerance (
Hussin et al., 2025).
The adoption statistics confirm regional distinctions; 75% of American and EU banks utilize AI-based fraud detection programs as opposed to 58% of Middle Eastern banks (
Saif-Ur-Rehman et al., 2024).
AI implementation has achieved fraud loss reductions of as much as 60% according to Western banks, whereas those in the UAE and Qatar, although slower to adopt, have seen a 45% improvement in fraud detection effectiveness, fueled by a greater emphasis on transparency and institutional confidence. These differences reflect how regulatory environments and organizational cultures determine the pace and urgency of AI integration in financial risk management. Transparency is critical in AI fraud detection. Explainability tools like SHAP and LIME boost auditor confidence (
Salih et al., 2025), but trust and fairness remain key (
Andrae, 2025). Western banks favor black-box AI for real-time detection in high-volume environments (e.g., Citibank, Bank of America), sacrificing explainability (
Kim et al., 2025). In contrast, UAE and Qatari regulators mandate Explainable AI (XAI), prioritizing interpretability over speed and underscoring the efficiency–trustworthiness trade-off.
Comparative studies show that black-box deep learning AI achieves precision rates of 85–99% and recall rates of 80–98%. A JPMorgan Chase case study highlights AI and big data integration in fraud prevention (
Ellahi, 2024). However, UAE financial institutions favor XAI, despite its 10–15% lower accuracy (
CBUAE Rulebook, n.d.), balancing fraud detection with regulatory trust and fairness.
Fairness-aware design mitigates AI bias, which otherwise undermines institutional trust. Cases like the Wells Fargo algorithmic bias controversy stress the need for safeguards (
Saxena et al., 2024). Western regulators, under the EU AI Act and the U.S. AI Risk Management Framework, focus on bias mitigation, while UAE and Qatari regulators emphasize compliance-driven AI that includes data protection, fairness, and human oversight.
Findings align with AI adoption theories. The Technology Acceptance Model (TAM) links adoption to perceived usefulness and ease of use, reflecting Western banks’ efficiency focus (
Murillo et al., 2021). The Unified Theory of Acceptance and Use of Technology (UTAUT) emphasizes regulatory-driven adoption in UAE and Qatari banks (
Aytekin et al., 2022). AI trust frameworks reinforce the role of transparency and fairness in institutional acceptance.
Global AI trends favor hybrid models that balance accuracy and interpretability by integrating rule-based and ML approaches (
Mohsen et al., 2024). Federated learning supports multi-bank fraud detection while ensuring compliance with the GDPR and the UAE’s Data Protection Law (
Rahmani et al., 2024). The key challenge remains balancing AI transparency and detection accuracy. XAI improves interpretability but is computationally intensive, whereas black-box models optimize real-time prevention. This contrast is evident in Citibank’s deep learning focus versus Deutsche Bank’s emphasis on fairness-aware AI.
Trust in AI fraud detection varies: internal auditors who are familiar with AI show higher confidence, while external auditors require more verification (
Habbal et al., 2024). AI literacy and region-specific adaptation enhance fraud detection effectiveness and compliance.
Ultimately, UAE and Qatari banks prioritize transparency and fairness, while Western institutions focus on speed and efficiency. As AI adoption evolves, financial institutions will likely adopt hybrid solutions that integrate real-time fraud detection with regulatory-compliant transparency.
Implications
Scale AI-based fraud detection through the integration of real-time anomaly detection within payments and banking to reduce fraud-related losses.
Employ hybrid AI models that blend machine learning with rule-based systems to ensure accuracy and adaptability to changing fraud strategies.
Arm fraud teams with AI-powered case management tools for fraud risk scoring, fraud pattern visualization, and investigations.
Fortify user trust through AI-powered fraud prevention that offers personalized security without interrupting the experience (for example, adaptive fraud scoring).
Sponsor AI education for fraud examiners, security personnel, and bank employees to enhance AI literacy and decision-making.
Employ AI-powered audit tools for flagging dubious transactions, creating fraud risk reports, and defining traceable fraud alert reasons.
Use AI-driven forensic analysis to examine fraud cases to identify new fraud typologies in regional financial systems.
Enhance fraud audit transparency with Explainable AI (XAI) to validate detection conclusions and ensure industry compliance.
Fortify fraud detection by combining AI with transaction monitoring, risk rating, and real-time alerts within international financial networks.
Refine AI-based fraud detection models by exposing them to varied financial data that reflects worldwide transaction patterns and global risks.
Craft AI-based fraud detection APIs and cloud-based solutions for seamless banking and financial integration.
Create localized AI models for various financial habits, ensuring that they are effective in high-cash, digital-first, and cross-border markets.
Automate fraud handling using smart workflows to enable quicker investigation and resolution of suspect transactions.
Scale adoption by demonstrating fraud reduction, cost savings, and operational efficiency.
Adopt adaptive, AI-based fraud prevention in mobile banking, online commerce, and cryptocurrency transactions.
Synchronize AI with business objectives by linking fraud detection to risk management and fraud loss recovery initiatives.
Use AI-based customer education to increase fraud vigilance and reduce misclassification.
6. Conclusions
The use of machine learning in banks across Qatar and the UAE has greatly improved fraud detection and risk management. The adoption of such technologies can be successful only if transparency, institutional trust, and fairness are ensured. Techniques like SHAP and LIME are important for ensuring the interpretability of fraud detection processes, thereby enhancing the confidence of auditors. All banks may be interested in enhancing efficiency, yet their strategies for adoption differ based on their priorities and integration plans. AI-driven fraud detection offers obvious benefits, yet long-term success is reliant on trust and the assurance of fairness.
This research nevertheless has some limitations. The focus on UAE and Qatari banks limits the generalizability of the findings, although it provides a deeper understanding of regional phenomena. The use of survey data carries a risk of response bias, which was mitigated through robust validation procedures. Furthermore, the trade-off between model accuracy and real-time interpretability remains a challenge for subsequent investigations.
This study contributes empirically and practically by providing deeper insights into trust, transparency, and fairness in AI-based fraud detection. It provides actionable recommendations for financial institutions and developers aiming to reconcile technological performance with operational and ethical standards, especially in Gulf-region environments. To strengthen the adoption and effectiveness of AI fraud detection, financial institutions should take the following actions:
Purchase XAI tools to enhance AI decision interpretability.
Use fairness-conscious AI systems to mitigate discrimination and advocate for responsible AI regulation.
Create hybrid fraud detection models that combine rule-based methods with ML methods.
Implement AI literacy programs for compliance officers and auditors.
Align fraud detection models with institutional requirements to ensure both effectiveness and efficiency.
Future studies should examine AI adoption in various financial markets to determine how different operational environments condition implementation. They should also develop real-time fraud detection models that balance explainability and accuracy, analyze the effect of AI literacy initiatives on adoption, and refine AI fraud detection without undermining trust, fairness, or operational effectiveness.
Author Contributions
Conceptualization, H.Y. and A.A.-A.; methodology, H.Y. and A.A.-A.; software, H.Y.; validation, H.Y. and A.A.-A.; formal analysis, H.Y. and A.A.-A.; investigation, H.Y. and A.A.-A.; resources, A.A.-A.; data curation, H.Y.; writing—original draft preparation, H.Y.; writing—review and editing, H.Y. and A.A.-A.; visualization, H.Y.; supervision, A.A.-A.; project administration, A.A.-A.; funding acquisition, A.A.-A. All authors have read and agreed to the published version of the manuscript.
Funding
The APC was funded by Middle East University, Jordan.
Institutional Review Board Statement
This study received official ethical approval from the Institutional Review Board (IRB) of Middle East University, Jordan, in May 2024 (Reference No. MEU/SD/2024/317).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The data presented in this study are available on request from the corresponding author due to ethical restrictions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Abu Huson, Y., Sierra García, L., García Benau, M. A., & Mohammad Aljawarneh, N. (2025). Cloud-based artificial intelligence and audit report: The mediating role of the auditor. VINE Journal of Information and Knowledge Management Systems. Available online: https://www.emerald.com/insight/content/doi/10.1108/vjikms-03-2024-0089/full/html (accessed on 8 February 2025).
- Adelakun, B. O., Onwubuariri, E. R., Adeniran, G. A., & Ntiakoh, A. (2024). Enhancing fraud detection in accounting through AI: Techniques and case studies. Finance & Accounting Research Journal, 6(6), 978–999. [Google Scholar]
- Akhtar, M. A. K., Kumar, M., & Nayyar, A. (2024). Transparency and accountability in explainable AI: Best practices. In M. A. K. Akhtar, M. Kumar, & A. Nayyar (Eds.), Towards ethical and socially responsible explainable AI (Vol. 551, pp. 127–164). Springer Nature Switzerland. [Google Scholar] [CrossRef]
- Al-Abbadi, L. H., Alshawabkeh, R., Alkhazali, Z., Al-Aqrabawi, R., & Rumman, A. A. (2025). Business intelligence and strategic entrepreneurship for sustainable development goals (SDGs) through: Green technology innovation and green knowledge management. Economics—Innovative and Economics Research Journal, 13(1), 45–68. [Google Scholar] [CrossRef]
- AL-Dosari, K., Fetais, N., & Kucukvar, M. (2024). Artificial intelligence and cyber defense system for banking industry: A qualitative study of AI applications and challenges. Cybernetics and Systems, 55(2), 302–330. [Google Scholar] [CrossRef]
- Ali, A. A. A., Sharabati, A.-A. A., Allahham, M., & Nasereddin, A. Y. (2024). The relationship between supply chain resilience and digital supply chain on sustainability, supply chain dynamism as a moderator. Available online: https://www.preprints.org/manuscript/202402.1600 (accessed on 6 January 2025).
- Andrae, S. (2025). Fairness and bias in machine learning models for credit decisions. In Machine learning and modeling techniques in financial data science (pp. 1–24). IGI Global Scientific Publishing. Available online: https://www.igi-global.com/chapter/fairness-and-bias-in-machine-learning-models-for-credit-decisions/368534 (accessed on 1 March 2025).
- Ashfaq, M., & Ayub, U. (2021). Knowledge, attitude, and perceptions of financial industry employees towards AI in the GCC region. In E. Azar, & A. N. Haddad (Eds.), Artificial intelligence in the gulf (pp. 95–115). Springer. [Google Scholar] [CrossRef]
- Aytekin, A., Özköse, H., & Ayaz, A. (2022). Unified theory of acceptance and use of technology (UTAUT) in mobile learning adoption: Systematic literature review and bibliometric analysis. COLLNET Journal of Scientometrics and Information Management, 16(1), 75–116. [Google Scholar] [CrossRef]
- Barnes, A. J., Zhang, Y., & Valenzuela, A. (2024). AI and culture: Culturally dependent responses to AI systems. Current Opinion in Psychology, 58, 101838. [Google Scholar] [CrossRef] [PubMed]
- Bhardwaj, N., & Parashar, G. (2025). The disagreement dilemma in explainable AI: Can bias reduction bridge the gap. International Journal of System Assurance Engineering and Management. Available online: https://link.springer.com/article/10.1007/s13198-025-02712-9 (accessed on 2 February 2025).
- CBUAE Rulebook. (n.d.). Big data analytics and artificial intelligence (AI). Available online: https://rulebook.centralbank.ae/en/rulebook/big-data-analytics-and-artificial-intelligence-ai (accessed on 3 March 2025).
- Chandratreya, A. (2024). Revolutionizing market segmentation in emerging economies: AI-driven innovations and strategies. In AI innovations in service and tourism marketing (pp. 129–161). IGI Global. Available online: https://www.igi-global.com/chapter/revolutionizing-market-segmentation-in-emerging-economies/352827 (accessed on 2 February 2025).
- Cheah, J.-H., Magno, F., & Cassia, F. (2023). Reviewing the SmartPLS 4 software: The latest features and enhancements. Journal of Marketing Analytics, 12, 97–107. [Google Scholar] [CrossRef]
- Dash, G., & Paul, J. (2021). CB-SEM vs PLS-SEM methods for research in social sciences and technology forecasting. Technological Forecasting and Social Change, 173, 121092. [Google Scholar] [CrossRef]
- Deshpande, A. S., Shinde, S., & Patil, Y. (2023, November 24–25). Relevance and applicability of cybersecurity frameworks in the context of BFSI vertical in India. 2023 International Conference on Integrated Intelligence and Communication Systems (ICIICS) (pp. 1–6), Kalaburagi, India. Available online: https://ieeexplore.ieee.org/abstract/document/10421516/ (accessed on 2 January 2025).
- Dichev, A., Zarkova, S., & Angelov, P. (2025). Machine learning as a tool for assessment and management of fraud risk in banking transactions. Journal of Risk and Financial Management, 18(3), 130. [Google Scholar] [CrossRef]
- Elamin, A. M., Ali, L., Ahmed, A. Z. E., & Aldabbas, H. (2025). Factors affecting attitudes toward e-shopping in the United Arab Emirates. Cogent Business & Management, 12(1), 2442542. [Google Scholar] [CrossRef]
- Ellahi, E. (2024). Fraud detection and prevention in finance: Leveraging artificial intelligence and big data. Dandao Xuebao/Journal of Ballistics, 36(1), 54–62. [Google Scholar] [CrossRef]
- Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
- Fundira, M., Edoun, E. I., & Pradhan, A. (2024). Evaluating end-users’ digital competencies and ethical perceptions of AI systems in the context of sustainable digital banking. Sustainable Development, 32(5), 4866–4878. [Google Scholar] [CrossRef]
- Ghasemaghaei, M., & Kordzadeh, N. (2024). Ethics in the age of algorithms: Unravelling the impact of algorithmic unfairness on data analytics recommendation acceptance. Information Systems Journal. Available online: https://onlinelibrary.wiley.com/doi/full/10.1111/isj.12572 (accessed on 2 January 2025). [CrossRef]
- Habbal, A., Ali, M. K., & Abuzaraida, M. A. (2024). Artificial intelligence trust, risk and security management (AI trism): Frameworks, applications, challenges and future research directions. Expert Systems with Applications, 240, 122442. [Google Scholar] [CrossRef]
- Hassan, S. W. U., Kiran, S., Gul, S., Khatatbeh, I. N., & Zainab, B. (2025). The perception of accountants/auditors on the role of corporate governance and information technology in fraud detection and prevention. Journal of Financial Reporting and Accounting, 23(1), 5–29. [Google Scholar] [CrossRef]
- Hjelkrem, L. O., & de Lange, P. E. (2023). Explaining deep learning models for credit scoring with SHAP: A case study using open banking data. Journal of Risk and Financial Management, 16(4), 221. [Google Scholar] [CrossRef]
- Hosain, M. T., Jim, J. R., Mridha, M. F., & Kabir, M. M. (2024). Explainable AI approaches in deep learning: Advancements, applications and challenges. Computers and Electrical Engineering, 117, 109246. [Google Scholar] [CrossRef]
- Hussin, H. A., Tayfor, A. E., & Mohmmed, K. A. (2025). Financial forecasting and risk analysis: Economic variables’ impact on banks performance using statistical and machine learning models. Available online: https://www.naturalspublishing.com/download.asp?ArtcID=30522 (accessed on 15 March 2025).
- Ibrahim, I. A., & Truby, J. (2022). Governance in the era of Blockchain technology in Qatar: A roadmap and a manual for trade finance. Journal of Banking Regulation, 23(4), 419–438. [Google Scholar] [CrossRef]
- Kahyaoglu, S. B., & Aksoy, T. (2021). Artificial intelligence in internal audit and risk assessment. In U. Hacioglu, & T. Aksoy (Eds.), Financial ecosystem and strategy in the digital era (pp. 179–192). Springer International Publishing. [Google Scholar] [CrossRef]
- Kaluarachchi, B. N., & Sedera, D. (2024). Improving efficiency through AI-powered customer engagement by providing personalized solutions in the banking industry. In Integrating AI-driven technologies into service marketing (pp. 299–342). IGI Global. Available online: https://www.igi-global.com/chapter/improving-efficiency-through-ai-powered-customer-engagement-by-providing-personalized-solutions-in-the-banking-industry/355999 (accessed on 2 January 2025).
- Kim, T. H., Ojo, S., Krichen, M., Alamro, M. A., Mihoub, A., & Sampedro, G. A. (2025). Automated explainable and interpretable framework for anomaly detection and human activity recognition in smart homes. Neural Computing and Applications. Available online: https://link.springer.com/article/10.1007/s00521-025-10991-3 (accessed on 27 February 2025).
- Liefgreen, A., Weinstein, N., Wachter, S., & Mittelstadt, B. (2024). Beyond ideals: Why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. AI & Society, 39(5), 2183–2199. [Google Scholar] [CrossRef]
- Maghyereh, A., & Ziadat, S. A. (2024). Pattern and determinants of tail-risk transmission between cryptocurrency markets: New evidence from recent crisis episodes. Financial Innovation, 10(1), 77. [Google Scholar] [CrossRef]
- Majrashi, K. (2025). Employees’ perceptions of the fairness of AI-based performance prediction features. Cogent Business & Management, 12(1), 2456111. [Google Scholar] [CrossRef]
- Marín Díaz, G., Galán Hernández, J. J., & Galdón Salvador, J. L. (2023). Analyzing employee attrition using explainable AI for strategic HR decision-making. Mathematics, 11(22), 4677. [Google Scholar] [CrossRef]
- Marty, J., & Ruel, S. (2024). Why is “supply chain collaboration” still a hot topic? A review of decades of research and a comprehensive framework proposal. International Journal of Production Economics, 273, 109259. [Google Scholar] [CrossRef]
- Matthews, L. (2017). Applying multigroup analysis in PLS-SEM: A step-by-step process. In H. Latan, & R. Noonan (Eds.), Partial least squares path modeling (pp. 219–243). Springer International Publishing. [Google Scholar] [CrossRef]
- McNally, N., & Bastos, M. (2025). The news feed is not a black box: A longitudinal study of Facebook’s algorithmic treatment of news. Digital Journalism, 1–20. Available online: https://www.tandfonline.com/doi/full/10.1080/21670811.2025.2450623 (accessed on 27 February 2025). [CrossRef]
- Mohsen, S. E., Hamdan, A., & Shoaib, H. M. (2024). Digital transformation and integration of artificial intelligence in financial institutions. Journal of Financial Reporting and Accounting, 23, 680–699. [Google Scholar] [CrossRef]
- Morshed, A. (2025). Ethical challenges in designing sustainable business models for responsible consumption and production: Case studies from Jordan. Management & Sustainability: An Arab Review. Available online: https://www.emerald.com/insight/content/doi/10.1108/msar-09-2024-0131/full/html (accessed on 20 December 2024).
- Morshed, A., & Khrais, L. T. (2025). Cybersecurity in digital accounting systems: Challenges and solutions in the Arab Gulf region. Journal of Risk and Financial Management, 18(1), 41. [Google Scholar] [CrossRef]
- Morshed, A., Maali, B., Ramadan, A., Ashal, N., Zoubi, M., & Allahham, M. (2024a). The impact of supply chain finance on financial sustainability in Jordanian SMEs. Uncertain Supply Chain Management, 12(4), 2767–2776. [Google Scholar] [CrossRef]
- Morshed, A., Ramadan, A., Maali, B., Khrais, L. T., & Baker, A. A. R. (2024b). Transforming accounting practices: The impact and challenges of business intelligence integration in invoice processing. Journal of Infrastructure, Policy and Development, 8(6), 4241. [Google Scholar] [CrossRef]
- Murillo, G. G., Novoa-Hernández, P., & Rodríguez, R. S. (2021). Technology acceptance model and moodle: A systematic mapping study. Information Development, 37(4), 617–632. [Google Scholar] [CrossRef]
- Odeh, A., Al-Haija, Q. S. A., Taleb, A. A., Salameh, W., & Alhajahjeh, T. (2024, December 1–3). Enhancing security and efficiency in mobile payment systems: An integrated approach utilizing advanced technologies. 8th IET Smart Cities Symposium (SCS 2024) (pp. 723–727), Manama, Bahrain. [Google Scholar]
- Oguntibeju, O. O. (2024). Mitigating artificial intelligence bias in financial systems: A comparative analysis of debiasing techniques. Asian Journal of Research in Computer Science, 17(12), 165–178. [Google Scholar] [CrossRef]
- Price, D. (2025). The gulf cooperation council, innovation frontiers, intellectual property and artificial intelligence: Technological, economic, and social revolutions. In Innovation and development of knowledge societies (pp. 196–220). Routledge. Available online: https://www.taylorfrancis.com/chapters/edit/10.4324/9781003528517-11/gulf-cooperation-council-innovation-frontiers-intellectual-property-artificial-intelligence-david-price (accessed on 15 March 2025).
- Pulivarthy, P., & Whig, P. (2025). Bias and fairness addressing discrimination in AI systems. In Ethical dimensions of AI development (pp. 103–126). IGI Global. Available online: https://www.igi-global.com/chapter/bias-and-fairness-addressing-discrimination-in-ai-systems/359640 (accessed on 27 February 2025).
- Rahmani, A., Aboojafari, R., Naeini, A. B., & Mashayekh, J. (2024). Adoption of digital innovation for resource efficiency and sustainability in the metal industry. Resources Policy, 90, 104719. [Google Scholar] [CrossRef]
- Saif-Ur-Rehman, M., Barson, N., & Hamdan, Y. H. (2024). Industry 4.0 technologies and firm performance with digital supply chain platforms and supply chain capabilities. Pakistan Journal of Commerce and Social Sciences, 18(4), 893–924. [Google Scholar]
- Salih, A. M., Raisi-Estabragh, Z., Galazzo, I. B., Radeva, P., Petersen, S. E., Lekadir, K., & Menegaz, G. (2025). A perspective on explainable artificial intelligence methods: SHAP and LIME. Advanced Intelligent Systems, 7(1), 2400304. [Google Scholar] [CrossRef]
- Saxena, A., Verma, S., & Mahajan, J. (2024). Transforming banking: The next frontier. In Generative AI in banking financial services and insurance: A guide to use cases, approaches, and insights (pp. 85–121). Apress. [Google Scholar]
- Schuett, J. (2024). Risk management in the artificial intelligence act. European Journal of Risk Regulation, 15(2), 367–385. [Google Scholar] [CrossRef]
- Tahir, H. A., Alayed, W., Hassan, W. U., & Haider, A. (2024). A novel hybrid XAI solution for autonomous vehicles: Real-time interpretability through LIME–SHAP integration. Sensors, 24(21), 6776. [Google Scholar] [CrossRef]
- Villani, M., Lockhart, J., & Magazzeni, D. (2022). Feature importance for time series data: Improving KernelSHAP. Available online: https://arxiv.org/abs/2210.02176 (accessed on 27 February 2025).
- wael Al-khatib, A., Moh’d Anwer, A.-S., & Khattab, M. (2024). How can generative artificial intelligence improve digital supply chain performance in manufacturing firms? Analyzing the mediating role of innovation ambidexterity using hybrid analysis through CB-SEM and PLS-SEM. Technology in Society, 78, 102676. [Google Scholar] [CrossRef]
- Wang, X., & Wang, Y. (2024). Analysis of trust factors for AI-assisted diagnosis in intelligent healthcare: Personalized management strategies in chronic disease management. Expert Systems with Applications, 255, 124499. [Google Scholar] [CrossRef]
- Yaw, S. P., Tan, G. W. H., Foo, P. Y., Leong, L. Y., & Ooi, K. B. (2022). The moderating role of gender on behavioural intention to adopt mobile banking: A Henseler’s PLS-MGA and permutation approach. International Journal of Mobile Communications, 20(6), 727. [Google Scholar] [CrossRef]
- Yeo, W. J., Van Der Heever, W., Mao, R., Cambria, E., Satapathy, R., & Mengaldo, G. (2025). A comprehensive review on financial explainable AI. Artificial Intelligence Review, 58(6), 189. [Google Scholar] [CrossRef]
- Younisse, R., Ahmad, A., & Abu Al-Haija, Q. (2022). Explaining intrusion detection-based convolutional neural networks using shapley additive explanations (shap). Big Data and Cognitive Computing, 6(4), 126. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework diagram.
Figure 2. Path model for PLS-SEM results.
Figure 3. Multi-group analysis (MGA) comparison chart.
Figure 4. AI adoption likelihood (logistic regression results).
Table 1. Sample distribution.

| Category | Subcategory | No. (%) |
|---|---|---|
| Profession | Internal Auditors | 145 (35.4%) |
| | External Auditors | 135 (33.0%) |
| | Compliance Officers | 75 (18.3%) |
| | Risk Managers | 54 (13.2%) |
| | AI Specialists | 45 (11.0%) |
| Country | UAE | 210 (51.3%) |
| | Qatar | 199 (48.7%) |
| Education Level | Bachelor’s Degree | 250 (61.1%) |
| | Master’s Degree | 135 (33.0%) |
| | Doctorate | 24 (5.9%) |
| Professional Experience | 0–5 years | 78 (19.1%) |
| | 6–10 years | 115 (28.1%) |
| | 11–15 years | 130 (31.8%) |
| | 16+ years | 86 (21.0%) |
| AI Exposure | Prior AI Experience | 195 (47.7%) |
| | No Prior AI Experience | 214 (52.3%) |
| Fairness Perception | Perceives AI as Fair | 220 (53.8%) |
| | Perceives AI as Biased | 189 (46.2%) |
| Gender | Male | 295 (72.1%) |
| | Female | 114 (27.9%) |
Table 2. Measurement model descriptive statistics.

| Variable | Measurement Dimensions | Mean | SD | Min | Max | Skewness | Kurtosis |
|---|---|---|---|---|---|---|---|
| Transparency | Overall Construct | 4.01 | 0.71 | 2 | 5 | −0.45 | 0.12 |
| | Interpretability of AI fraud detection decisions | 3.98 | 0.73 | 2 | 5 | −0.42 | 0.15 |
| | Clarity and accessibility of AI explanations | 4.05 | 0.69 | 2 | 5 | −0.47 | 0.10 |
| | Justifiability and auditability of fraud alerts | 4.00 | 0.72 | 2 | 5 | −0.44 | 0.13 |
| | Perceived openness of AI decision-making | 4.02 | 0.70 | 2 | 5 | −0.43 | 0.11 |
| Algorithmic Bias | Overall Construct | 3.55 | 0.81 | 2 | 5 | −0.30 | −0.05 |
| | Consistency and fairness in fraud classification | 3.52 | 0.82 | 2 | 5 | −0.28 | −0.07 |
| | Presence of biased or discriminatory outcomes | 3.60 | 0.80 | 2 | 5 | −0.32 | −0.03 |
| | Effectiveness of bias mitigation strategies | 3.55 | 0.79 | 2 | 5 | −0.29 | −0.06 |
| | Perceived impartiality of AI fraud assessments | 3.53 | 0.81 | 2 | 5 | −0.31 | −0.04 |
| Fairness Perception | Overall Construct | 4.12 | 0.68 | 2 | 5 | −0.50 | 0.20 |
| | Equity in AI fraud detection outcomes | 4.10 | 0.69 | 2 | 5 | −0.48 | 0.18 |
| | Perceived fairness in fraud risk assessments | 4.15 | 0.66 | 2 | 5 | −0.51 | 0.22 |
| | Trust in AI’s ability to make unbiased decisions | 4.13 | 0.67 | 2 | 5 | −0.49 | 0.19 |
| | Effectiveness of fairness-enhancing mechanisms | 4.09 | 0.70 | 2 | 5 | −0.46 | 0.16 |
| Trust | Overall Construct | 4.21 | 0.63 | 2 | 5 | −0.58 | 0.32 |
| | Confidence in AI-generated fraud alerts | 4.19 | 0.62 | 2 | 5 | −0.55 | 0.30 |
| | Reliability of AI fraud detection decisions | 4.23 | 0.61 | 2 | 5 | −0.60 | 0.34 |
| | Willingness to rely on AI models | 4.22 | 0.64 | 2 | 5 | −0.57 | 0.31 |
| | Alignment with professional judgment | 4.20 | 0.65 | 2 | 5 | −0.56 | 0.29 |
| Regulatory Compliance | Overall Construct | 4.08 | 0.75 | 2 | 5 | −0.52 | 0.25 |
| | Adherence to financial regulations | 4.07 | 0.76 | 2 | 5 | −0.50 | 0.23 |
| | Auditability and explainability of AI decisions | 4.10 | 0.74 | 2 | 5 | −0.54 | 0.27 |
| | Compliance with AI governance standards | 4.09 | 0.75 | 2 | 5 | −0.53 | 0.26 |
| | Regulatory oversight in AI fraud detection | 4.05 | 0.77 | 2 | 5 | −0.51 | 0.24 |
| AI Adoption | Overall Construct | 4.14 | 0.72 | 2 | 5 | −0.55 | 0.28 |
| | Readiness to integrate AI in fraud detection | 4.12 | 0.73 | 2 | 5 | −0.53 | 0.27 |
| | Perceived effectiveness of AI in fraud prevention | 4.15 | 0.70 | 2 | 5 | −0.56 | 0.29 |
| | Compatibility with risk management practices | 4.13 | 0.71 | 2 | 5 | −0.54 | 0.28 |
| | Organizational support for AI implementation | 4.16 | 0.72 | 2 | 5 | −0.57 | 0.30 |
| AI Exposure | Overall Construct | 3.90 | 0.75 | 2 | 5 | −0.42 | 0.18 |
| | Prior experience using AI fraud detection | 3.88 | 0.76 | 2 | 5 | −0.40 | 0.17 |
| | Familiarity with AI-driven fraud detection tools | 3.93 | 0.74 | 2 | 5 | −0.44 | 0.19 |
| Fairness Perception | Overall Construct | 4.08 | 0.70 | 2 | 5 | −0.50 | 0.22 |
| | Perceived fairness in AI fraud decisions | 4.10 | 0.71 | 2 | 5 | −0.49 | 0.21 |
| | Bias concerns in AI-driven fraud detection | 4.05 | 0.72 | 2 | 5 | −0.52 | 0.23 |
Table 3. Reliability, validity, and multicollinearity diagnosis.

| Construct | Cronbach’s Alpha (α) | Composite Reliability (CR) | Average Variance Extracted (AVE) | HTMT Ratio (Highest Value) | Variance Inflation Factor (VIF) |
|---|---|---|---|---|---|
| Transparency | 0.85 | 0.88 | 0.65 | 0.70 | 2.10 |
| Algorithmic Bias | 0.82 | 0.86 | 0.61 | 0.68 | 2.05 |
| Fairness Perception | 0.84 | 0.87 | 0.66 | 0.69 | 1.98 |
| Trust | 0.86 | 0.89 | 0.67 | 0.72 | 2.20 |
| Regulatory Compliance | 0.83 | 0.87 | 0.64 | 0.71 | 2.15 |
| AI Adoption | 0.85 | 0.88 | 0.65 | 0.73 | 2.25 |
| AI Exposure | 0.81 | 0.85 | 0.62 | 0.67 | 2.00 |
| Fairness Perception | 0.86 | 0.89 | 0.68 | 0.70 | 2.05 |
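For readers who wish to verify how the reliability figures in Table 3 are conventionally obtained, the brief Python sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized outer loadings. It is an illustrative sketch only; the loadings are hypothetical placeholders, not the study’s data, chosen so that the output lands near the values reported for the Trust construct.

```python
# Illustrative sketch (not the study's code): CR and AVE from standardized outer loadings.
# The loading values below are hypothetical placeholders, not the study's data.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    sum_l = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + error_var)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

trust_loadings = [0.81, 0.84, 0.79, 0.82]  # hypothetical indicator loadings for Trust
print(f"CR  = {composite_reliability(trust_loadings):.2f}")       # ~0.89, close to Table 3
print(f"AVE = {average_variance_extracted(trust_loadings):.2f}")  # ~0.66, close to Table 3
```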
Table 4. Model predictive power and predictive relevance.

| Construct | Explained Variance (R²) | Predictive Relevance (Q²) |
|---|---|---|
| Trust | 0.40 | 0.27 |
| AI Adoption | 0.45 | 0.30 |
| Fairness Perception | 0.39 | 0.26 |
Table 5. Model fit and common method bias assessment.

| Fit Index/Test | Value | Threshold |
|---|---|---|
| Standardized Root Mean Square Residual (SRMR) | 0.057 | ≤0.08 |
| Harman’s Single-Factor Test (Variance Explained by One Factor) | 32.4% | <50% |
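The common method bias check in Table 5 follows the logic of Harman’s single-factor test: if one factor absorbs most of the shared variance across all survey items, method bias is a concern. The sketch below is only a rough approximation of that logic, using the first principal component of a simulated item matrix rather than an unrotated factor solution on the real responses.

```python
# Illustrative sketch (not the study's code): a single-factor check in the spirit of Harman's test,
# approximated with the first principal component. `items` is a simulated placeholder matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.integers(2, 6, size=(409, 28)).astype(float)  # placeholder respondents-by-items matrix

share = PCA(n_components=1).fit(items).explained_variance_ratio_[0]
print(f"Variance captured by a single factor: {share:.1%}")  # bias is a concern only if this nears 50%
```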
Table 6. Hypothesis testing results.

| Hypothesis | Path | β | t-Value | p-Value | Result |
|---|---|---|---|---|---|
| H1 | Regulatory Compliance → AI Adoption | 0.311 | 5.12 | <0.001 | Supported |
| H2a | Transparency → Trust | 0.342 | 6.21 | <0.001 | Supported |
| H2b | Transparency → AI Adoption | 0.278 | 4.87 | <0.001 | Supported |
| H3 | Algorithmic Bias → Fairness Perception | −0.366 | 7.45 | <0.001 | Supported |
| H4 | Fairness Perception → Trust | 0.315 | 5.98 | <0.001 | Supported |
| H5 | Trust → AI Adoption | 0.392 | 6.33 | <0.001 | Supported |
| H6a | AI Exposure → Trust | 0.289 | 4.62 | <0.001 | Supported |
| H6b | AI Exposure → AI Adoption | 0.241 | 4.19 | <0.001 | Supported |
Table 7. Effect sizes (f²) and explained variance (R²).

| Construct Relationship | Effect Size (f²) | Effect Magnitude |
|---|---|---|
| Regulatory Compliance → AI Adoption | 0.12 | Small |
| Transparency → Trust | 0.26 | Medium |
| Transparency → AI Adoption | 0.10 | Small |
| Algorithmic Bias → Fairness Perception | 0.21 | Medium |
| Fairness Perception → Trust | 0.29 | Medium |
| Trust → AI Adoption | 0.31 | Medium |
| AI Exposure → Trust | 0.08 | Small |
| AI Exposure → AI Adoption | 0.07 | Small |
| Fairness Perception × Trust → AI Adoption | 0.22 | Medium |
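The effect sizes in Table 7 follow Cohen’s f² convention: the change in explained variance when a predictor is removed, scaled by the unexplained variance of the full model. The short sketch below illustrates the arithmetic; the R² of the full model matches the AI Adoption value in Table 8, while the R² of the reduced model is a hypothetical placeholder.

```python
# Illustrative sketch (not the study's code): Cohen's f-squared for a single predictor.
# The reduced-model R-squared below is a hypothetical placeholder.

def f_squared(r2_included, r2_excluded):
    """f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

r2_with_trust = 0.48     # AI Adoption R-squared reported in Table 8
r2_without_trust = 0.32  # hypothetical R-squared after dropping Trust
print(f"f2 = {f_squared(r2_with_trust, r2_without_trust):.2f}")
# 0.31 -> 'medium' by the usual 0.02 / 0.15 / 0.35 cut-offs, matching Trust -> AI Adoption in Table 7
```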
Table 8. Explained variance (R²) for endogenous constructs.

| Construct | R² Value | Variance Explained |
|---|---|---|
| Trust | 0.42 | 42% of the variance in trust is explained by transparency and AI exposure |
| AI Adoption | 0.48 | 48% of AI adoption variance is explained by trust, fairness perception, and regulatory compliance |
| Fairness Perception | 0.41 | 41% of variance is explained by algorithmic bias |
Table 9. Mediation analysis: fairness perception as a mediator.

| Path | Direct Effect (β) | Indirect Effect (β) | t-Value | p-Value | Mediation Type |
|---|---|---|---|---|---|
| Algorithmic Bias → AI Adoption (without mediator) | −0.21 | — | 3.02 | 0.0026 | N/A |
| Algorithmic Bias → Fairness Perception | −0.39 | — | 4.98 | 0.00007 | N/A |
| Fairness Perception → AI Adoption | 0.35 | — | 4.60 | 0.00003 | N/A |
| Algorithmic Bias → AI Adoption (with mediator) | −0.08 | — | 1.45 | 0.1478 | N/A |
| Algorithmic Bias → Fairness Perception → AI Adoption | — | −0.14 | 3.21 | 0.0013 | Partial Mediation |
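The indirect effect in Table 9 is simply the product of the two component paths (−0.39 × 0.35 ≈ −0.14). The sketch below reproduces that arithmetic and adds a Sobel-style z for orientation; the standard errors are back-of-the-envelope approximations (β/t), and PLS-SEM studies generally assess such indirect effects with bootstrapped confidence intervals rather than Sobel’s formula.

```python
# Illustrative arithmetic check (not the study's code): the indirect effect as a product of paths,
# with a rough Sobel-style z. Standard errors below are approximations derived as beta / t.
import math

a, b = -0.39, 0.35           # Bias -> Fairness Perception, Fairness Perception -> AI Adoption (Table 9)
se_a, se_b = 0.078, 0.076    # approximate standard errors (beta divided by t from Table 9)

indirect = a * b                                          # about -0.14, as reported
sobel_se = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)     # Sobel standard error of the product
print(f"indirect effect = {indirect:.2f}, Sobel z ~ {indirect / sobel_se:.2f}")
```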
Table 10. Moderation analysis: fairness perception as a moderator.

| Path | β (Coefficient) | t-Value | p-Value | Moderation Effect |
|---|---|---|---|---|
| Trust → AI Adoption | 0.42 | 5.48 | <0.001 | N/A |
| Fairness Perception → AI Adoption | 0.31 | 4.21 | <0.001 | N/A |
| Trust × Fairness Perception → AI Adoption (Interaction Term) | 0.19 | 3.35 | 0.0009 | Significant |
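The moderation test in Table 10 rests on a product term: trust and fairness perception scores are multiplied, and the slope of that interaction term is tested alongside the main effects. A minimal sketch on simulated, mean-centred data is given below; the coefficients used to generate the data are taken from Table 10, but the data themselves are placeholders.

```python
# Illustrative sketch (not the study's code): estimating a Trust x Fairness interaction with OLS
# on simulated, mean-centred placeholder data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 409
trust = rng.normal(size=n)
fairness = rng.normal(size=n)
adoption = 0.42 * trust + 0.31 * fairness + 0.19 * trust * fairness + rng.normal(scale=0.8, size=n)

X = sm.add_constant(np.column_stack([trust, fairness, trust * fairness]))
model = sm.OLS(adoption, X).fit()
print(model.params)  # last coefficient recovers the interaction slope reported in Table 10
```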
Table 11. PLS-MGA results: UAE vs. Qatar.

| Path | UAE (β) | Qatar (β) | Difference (Δβ) | p-Value (MGA Test) | Significant Difference? |
|---|---|---|---|---|---|
| Transparency → Trust | 0.44 | 0.38 | 0.06 | 0.217 | No |
| Transparency → AI Adoption | 0.31 | 0.26 | 0.05 | 0.261 | No |
| Algorithmic Bias → Fairness Perception | −0.41 | −0.36 | 0.05 | 0.191 | No |
| Fairness Perception → AI Adoption | 0.39 | 0.32 | 0.07 | 0.048 | Yes |
| Trust → AI Adoption | 0.46 | 0.35 | 0.11 | 0.022 | Yes |
| Regulatory Compliance → AI Adoption | 0.29 | 0.27 | 0.02 | 0.364 | No |
| AI Exposure → Trust | 0.25 | 0.24 | 0.01 | 0.411 | No |
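The MGA p-values in Table 11 come from comparing the same structural path across the UAE and Qatar subsamples. The sketch below illustrates the underlying idea with a simple bootstrap of the difference in a regression slope on simulated data; it is a conceptual illustration of the group-comparison logic, not the procedure applied to the study’s indicator data.

```python
# Illustrative sketch (not the study's code): bootstrapping a between-group difference in the
# Trust -> AI Adoption slope. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_uae, n_qatar = 210, 199
trust_uae = rng.normal(size=n_uae)
adopt_uae = 0.46 * trust_uae + rng.normal(scale=0.9, size=n_uae)
trust_qat = rng.normal(size=n_qatar)
adopt_qat = 0.35 * trust_qat + rng.normal(scale=0.9, size=n_qatar)

def slope(x, y):
    """Simple regression slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

diffs = []
for _ in range(2000):  # resample each group and record the slope difference
    i = rng.integers(0, n_uae, n_uae)
    j = rng.integers(0, n_qatar, n_qatar)
    diffs.append(slope(trust_uae[i], adopt_uae[i]) - slope(trust_qat[j], adopt_qat[j]))

observed = slope(trust_uae, adopt_uae) - slope(trust_qat, adopt_qat)
reversed_share = np.mean(np.array(diffs) <= 0)  # rough one-sided check of the group difference
print(f"observed difference = {observed:.2f}, bootstrap share with reversed sign = {reversed_share:.3f}")
```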
Table 12. Trust comparison between internal and external auditors.

| Auditor Type | Mean Trust Score | Standard Deviation | t-Value | p-Value | Significant Difference? |
|---|---|---|---|---|---|
| Internal Auditors | 4.35 | 0.58 | 2.89 | 0.0041 | Yes |
| External Auditors | 4.12 | 0.64 | −2.89 | 0.0041 | Yes |
Table 13. PLS-MGA results: trust and AI adoption by auditor type.

| Path | Internal Auditors (β) | External Auditors (β) | Difference (Δβ) | p-Value (MGA Test) | Significant Difference? |
|---|---|---|---|---|---|
| Trust → AI Adoption | 0.52 | 0.38 | 0.14 | 0.029 | Yes |
Table 14. AI adoption comparison: high vs. low AI exposure.

| AI Exposure Group | Mean AI Adoption Score | Standard Deviation | t-Value | p-Value | Significant Difference? |
|---|---|---|---|---|---|
| High AI Exposure | 4.42 | 0.55 | 3.76 | 0.0003 | Yes |
| Low AI Exposure | 4.08 | 0.67 | −3.76 | 0.0003 | Yes |
Table 15. PLS-MGA results: AI exposure and key adoption drivers.

| Path | High AI Exposure (β) | Low AI Exposure (β) | Difference (Δβ) | p-Value (MGA Test) | Significant Difference? |
|---|---|---|---|---|---|
| AI Familiarity → AI Adoption | 0.48 | 0.31 | 0.17 | 0.018 | Yes |
| Fairness Perception → AI Adoption | 0.39 | 0.36 | 0.03 | 0.422 | No |
| Trust → AI Adoption | 0.52 | 0.40 | 0.12 | 0.041 | Yes |
Table 16. 2SLS regression results.

| Path | Stage 1 (IV → Trust, β) | Stage 2 (Trust → AI Adoption, β) | t-Value | p-Value | Endogeneity Issue? |
|---|---|---|---|---|---|
| Prior AI Exposure → Trust (First Stage) | 0.49 | — | 4.91 | <0.001 | — |
| Trust (Predicted) → AI Adoption (Second Stage) | — | 0.51 | 5.23 | <0.001 | No |
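The 2SLS check in Table 16 instruments trust with prior AI exposure: the first stage regresses trust on the instrument, and the second stage regresses adoption on the fitted (exogenous) trust values. The sketch below walks through the two stages manually on simulated placeholder data; in practice a dedicated IV estimator with corrected second-stage standard errors would be used.

```python
# Illustrative sketch (not the study's code): manual two-stage least squares with prior AI exposure
# as the instrument for trust. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
n = 409
exposure = rng.binomial(1, 0.48, size=n).astype(float)   # instrument
u = rng.normal(size=n)                                    # unobserved confounder
trust = 0.49 * exposure + 0.5 * u + rng.normal(scale=0.6, size=n)
adoption = 0.51 * trust + 0.4 * u + rng.normal(scale=0.6, size=n)

def ols(y, x):
    """Return [intercept, slope] from a simple least-squares fit."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

stage1 = ols(trust, exposure)                  # instrument -> trust
trust_hat = stage1[0] + stage1[1] * exposure   # fitted (exogenous) component of trust
stage2 = ols(adoption, trust_hat)              # fitted trust -> adoption
print(f"stage 1 beta = {stage1[1]:.2f}, stage 2 beta = {stage2[1]:.2f}")
```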
Table 17. Lagged variable regression results.

| Path | β (Coefficient, T1 → T2) | t-Value | p-Value | Causal Relationship Confirmed? |
|---|---|---|---|---|
| Trust (T1) → AI Adoption (T2) | 0.47 | 4.78 | <0.001 | Yes |
Table 18. Logistic regression results: trust and fairness predicting AI adoption.

| Predictor | Coefficient (β) | Odds Ratio (Exp(β)) | z-Value | p-Value | Significant? |
|---|---|---|---|---|---|
| Trust | 0.85 | 2.34 | 4.21 | <0.001 | Yes |
| Fairness Perception | 0.52 | 1.68 | 3.02 | 0.0025 | Yes |
| Intercept (β₀) | −1.37 | — | −3.55 | <0.001 | Yes |
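The logistic coefficients in Table 18 can be read directly as odds ratios and converted into predicted adoption probabilities. The short sketch below does both using the reported coefficients; the plugged-in respondent scores, and the assumption that the model was fitted on raw 1–5 scale scores rather than standardized ones, are hypothetical.

```python
# Illustrative arithmetic (not the study's code): odds ratios and a predicted probability
# from the logistic coefficients reported in Table 18.
import math

b0, b_trust, b_fair = -1.37, 0.85, 0.52

print(f"odds ratio (trust)    = {math.exp(b_trust):.2f}")  # ~2.34, as reported
print(f"odds ratio (fairness) = {math.exp(b_fair):.2f}")   # ~1.68, as reported

trust_score, fairness_score = 4.0, 4.0  # hypothetical respondent profile on the 5-point scale
logit = b0 + b_trust * trust_score + b_fair * fairness_score
prob = 1 / (1 + math.exp(-logit))
print(f"predicted adoption probability = {prob:.2f}")
```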