Article

What Do the Regulators Mean? A Taxonomy of Regulatory Principles for the Use of AI in Financial Services

by Mustafa Pamuk 1,*, Matthias Schumann 1 and Robert C. Nickerson 2
1 Faculty of Business and Economics, University of Goettingen, 37073 Goettingen, Germany
2 Department of Information Systems, College of Business, San Francisco State University, San Francisco, CA 94132, USA
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(1), 143-155; https://doi.org/10.3390/make6010008
Submission received: 21 November 2023 / Revised: 28 December 2023 / Accepted: 4 January 2024 / Published: 11 January 2024
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

Abstract:
The push toward automation in the financial industry creates a natural field of application for artificial intelligence (AI). However, complex and demanding regulatory standards and rapid technological developments pose significant challenges to developing and deploying AI-based services in the finance industry. The regulatory principles defined by financial authorities in Europe need to be structured at a fine-granular level to promote understanding and to ensure customer safety and the quality of AI-based services in the financial industry. Such a structure leads to a better understanding of regulators’ priorities and guides how AI-based services are built. This paper provides a classification pattern in the form of a taxonomy that clarifies the existing European regulatory principles for researchers, regulatory authorities, and financial services companies. By bringing out the thematic focus of the regulatory principles, our study can pave the way for developing compliant AI-based services.

1. Introduction

The use of AI in financial services is becoming increasingly attractive for financial companies due to potential cost savings and/or quality improvements [1]. It introduces benefits and risks for all participants using AI-based services [2]. Furthermore, increasing data volumes and processing capacities provide the prerequisites necessary for more automation using AI. However, AI-enabled services and products may create financial and non-financial risks and raise consumer and investor protection considerations [3]. Concurrently, regulators are involved in the finance industry to ensure market safety, consumer protection, and market integrity against any inequitable discrimination by automated services [2,4]. To stay up to date and gain wider public acceptance, the regulatory principles are gradually renewed and adapted to changes in market structure and technology so that they provide safety while remaining highly adaptable. However, looking at the basic characteristics of these AI-based services, their development, deployment, and maintenance are challenging due to the increased complexity and the required coordination with regulators [1]. Moreover, any conceptual and structural changes in such services are mostly unique, complex, and costly [1,5,6]. As a result of this increased complexity and the rapid technological developments in AI, a deep understanding of the regulatory principles is required to consider and fulfill obligations, as well as to drive further measures [1].
Furthermore, AI-based services cannot ensure proper results forever; rather, multiple periodic tests and validations, continuous monitoring, and adjustments are necessary. For example, due to financial and cross-sectoral differences, it may not be suitable to keep working with datasets spanning the periods before and after the COVID-19 pandemic [7]: the financial situation of a company in the healthcare sector and of a company in the travel sector differs significantly from the situation before the pandemic. Regulatory and supervisory authorities also anticipate such situations and make it essential for financial companies to have emergency measures and monitoring structures for AI-based services in place [7,8,9]. Due to the still immature regulatory environment, it is necessary to identify, analyze, and structure the key regulatory aspects and priorities so that related concepts can be addressed in research and practice. Moreover, due to the complexity of the desired AI-based services, financial companies must deal with multiple regulatory expectations, including technological, organizational, and communicational cornerstones, to ensure the robustness, security, quality, compliance, and functionality of AI-based services.
The immature regulatory environment makes the development of compliant AI services more complex for practitioners. The development process of AI-based services, which consists of several phases as foreseen by the regulators, must therefore be addressed in a more organized manner to derive and define compliant and practical approaches [8,9]. Likewise, identifying key regulatory aspects and priorities in the regulatory principles is necessary for researchers and financial firms to derive appropriate measures. The lack of a clear definition of regulatory expectations makes it difficult to conduct research on and development of compliant AI-based services. Therefore, an overall regulatory picture of the finance industry can help determine and address research gaps, which can in turn contribute to the consolidation of regulations. Motivated by the need to structure regulatory principles and thereby promote close and successful interaction between practice and regulatory authorities, we define the following research question:
RQ: How can European regulatory principles be classified into useful dimensions and characteristics for the development of AI applications in financial services?
In the following, we first introduce the theoretical background of the European regulatory environment for AI-based services. Subsequently, we present and adapt the methodological approach for taxonomy development proposed by Nickerson et al. (2013) before we show the research process in detail. We then describe the development process that leads to the final taxonomy. Afterward, we discuss our results, mention limitations, and state a conclusion with an outlook on future research.

2. Theoretical Background of AI Regulations in the Finance Industry

Due to the global nature of the finance industry, it is necessary to consider the perspective of the Organization for Economic Co-operation and Development (OECD), which has also proposed AI principles to ensure an innovative and trustworthy use of AI. In this way, regulatory authorities collaborate to establish and monitor the cross-industrial use of AI at the global, European, and national levels. Moreover, regulators propose to create benchmarks for AI that are practical and flexible enough to prove themselves over time [5,10]. Given the rapid pace of technological change and the increasing need and motivation to benefit from intelligent, automated, and more efficient systems in the financial industry, it is a priority of European regulators to establish a fundamental and forward-looking legal basis for the use of AI [6]. As a result, the European Commission announced the establishment of a standard regulatory framework by 2024 as part of its Digital Finance Strategy [11].
The first-ever proposed legal framework on AI aims to provide developers, deployers, and users with clear requirements and obligations regarding specific uses of AI [11]. Due to the increasing complexity and possible lack of explainability of AI algorithms, the European Commission identified addressing the risks posed specifically by AI applications as an essential prerequisite and proposed a risk-based approach consisting of four levels of risk: unacceptable, high, limited, and minimal or no risk [11]. Following this approach, the European Commission published the proposal for the Artificial Intelligence Act in 2021 [12]. However, when the structure of the existing regulatory principles is considered in detail, it becomes clear that it still builds on the first drafts, the Ethics Guidelines for Trustworthy AI from 2019, proposed by the High-Level Expert Group on AI (AI HLEG) [13]. Thus, the European objective of establishing a forward-looking legal basis is still pending and does not yet provide developers and financial companies with a suitable structure and clear requirements.
One of the primary objectives of regulatory authorities is to keep the field secure, innovative, and forward-looking. For innovative AI-based services, the concerns and basic principles were addressed by the OECD in 2019 [14]. However, the problem of missing regulatory frameworks and incompatibilities with existing regulations has remained under discussion for years, creating uncertainties regarding the design and organization of AI-based services in the market. At the same time, driven by rapid technological developments, financial companies are increasingly interested in improving the efficiency and quality of financial services and products [1,7]. Furthermore, the increasing risks must be reconciled with legal requirements to promote responsible AI development and deployment and ensure the safety of customers in practice [15]. According to recent research, there is a consensus on the primary objectives of regulations for the use of AI across industries, which can be summarized in four points: fairness, sustainability, accuracy, and explainability [15,16,17,18,19,20].
Nevertheless, the financial and organizational costs of offering these services must be carefully evaluated by financial companies [21]. Due to the immature regulatory basis, the coordination and approval processes between financial companies and regulators can take a long time. In particular, a broad assessment of the costs of AI-based services and their maintenance in comparison to the improvement achieved is necessary to determine whether the return on investment is sufficient for financial companies [5,22].

3. Research Method

The development of a taxonomy for structuring the regulatory principles of financial authorities for AI-based services follows the taxonomy development methodology of Nickerson et al. [23]. This methodology helps us structure the taxonomy development process. A taxonomy (T) has a set of dimensions (D_i), each of which consists of a set of mutually exclusive and collectively exhaustive characteristics (C_ij), as defined in the following formula [23]:
T = {D_i, i = 1, …, n | D_i = {C_ij, j = 1, …, k_i}, k_i ≥ 2}
The purpose of this taxonomy is to structure European regulatory principles so as to highlight key regulatory aspects and priorities, including existing interests and concerns, for further research. Moreover, the countries in Europe also have their own specifications for the use of AI that must be considered to give an overall picture. The objects to be classified are therefore the European and country-specific regulatory principles for AI-based services in financial services. In the first step, it is necessary to define the meta-characteristic of the taxonomy as a basis for the selection of the characteristics: each identified characteristic should follow logically from the meta-characteristic and reflect the purpose of the taxonomy [23]. Against this background, we define the meta-characteristic of this taxonomy as the key regulatory aspects and priorities of financial regulatory principles for AI-based services. As defined in the taxonomy development process, either an empirical-to-conceptual (inductive) or a conceptual-to-empirical (deductive) approach can be chosen in each iteration to evolve the taxonomy. The conceptual-to-empirical approach (Step 4c) can be used to conceptualize (new) characteristics and dimensions of objects, whereas the empirical-to-conceptual approach (Step 4e) can be used to identify a (new) subset of objects for the taxonomy (Figure 1).
The design of this methodology can be considered as a search process for a useful taxonomy [24]. Therefore, Nickerson et al. [23] define objective and subjective ending conditions (Step 2) as a crucial point to evaluate the usefulness of the developed taxonomy (see Table 1). We use a subset of the objective ending conditions that are related to the correctness of the taxonomy and the process, whereas subjective conditions ensure that the taxonomy is meaningful and practical [23,25].
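To illustrate how the formal definition and the ending conditions interact, the following minimal Python sketch (our own illustration; the dimension, characteristic, and object names are hypothetical toy values, not the taxonomy itself) represents a taxonomy as a mapping of dimensions to characteristics and checks two of the objective ending conditions from Table 1.

```python
# Minimal sketch of the taxonomy definition T = {D_i | D_i = {C_ij}, k_i >= 2}
# and of two objective ending conditions from Table 1. All names below are
# illustrative placeholders, not the full taxonomy developed in this paper.

taxonomy = {
    # dimension -> set of characteristics (toy subset)
    "goals": {"accuracy", "explainability and accountability"},
    "risk management": {"human oversight", "control measures"},
}

# object (regulatory principle) -> dimension -> set of assigned characteristics
objects = {
    "principle A": {"goals": {"accuracy"}, "risk management": {"human oversight"}},
    "principle B": {"goals": {"explainability and accountability"},
                    "risk management": {"control measures"}},
}

def violated_objective_conditions(taxonomy, objects):
    """Return violations of a subset of the objective ending conditions (cf. Table 1)."""
    violations = []
    for dim, chars in taxonomy.items():
        if len(chars) < 2:  # k_i >= 2 from the formula above
            violations.append(f"dimension '{dim}' needs at least two characteristics")
        used = set().union(*(obj.get(dim, set()) for obj in objects.values()))
        for ch in chars - used:  # condition 1: each characteristic classifies >= 1 object
            violations.append(f"characteristic '{ch}' of '{dim}' classifies no object")
    for name, assignment in objects.items():
        for dim, assigned in assignment.items():
            if len(assigned) > 1:  # condition 5: mutually exclusive within a dimension
                violations.append(f"object '{name}' has two characteristics in '{dim}'")
    return violations

print(violated_objective_conditions(taxonomy, objects))  # -> [] for this toy example
```

Running such a check after each iteration mirrors Step 7 of the process, where unmet conditions trigger a further iteration.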

4. Research Process

We start by giving an overview of the identified European regulatory authorities and the conducted literature review. Afterward, we explain the choice of approach for each iteration (Step 3). As mentioned above, European regulators are leading the way in the development of regulatory principles. Therefore, we start with an empirical-to-conceptual approach (first iteration) to derive the fundamental structure of the taxonomy from the European regulatory principles. Subsequently, we extended the taxonomy with a second empirical-to-conceptual iteration based on country-specific regulatory principles, including identified use cases and reports. Lastly, to rethink and finalize the taxonomy, we conducted a conceptual-to-empirical iteration and finalized the taxonomy of regulatory principles (third iteration).

4.1. Review of European Regulatory Authorities and the Literature

We first identified the European supervisory authorities for the finance industry and analyzed their regulatory principles for AI usage (Table 2). Due to the iterative process of analyzing regulatory principles, we also included the identified publications of the authorities from different years, including regulatory background documents. For a representative and useful taxonomy, we consider the leading countries in the finance industry in Europe, ranked by their GDP, to confirm and, where necessary, restructure the taxonomy [26]. We found that the identified countries provide exemplary AI approaches for financial companies rather than defining a distinct country-specific regulatory perspective. Therefore, we decided not to extend the list of country-specific regulations, since these countries follow the same regulatory perspective with the overarching principles from the European Commission.
Additionally, we conducted a structured literature review based on the methodological guidelines of Cooper [48] and vom Brocke et al. [49] to consider the current scientific state. The search was conducted in established scientific databases, such as ACM Digital Library, AIS Electronic Library, EBSCOhost, EmeraldInsight, JSTOR, ScienceDirect, and SpringerLink. The following search terms were used to identify the existing regulatory requirements and essential components: “Regulatory principles”, “Financial Services”, “Artificial Intelligence”, and “Machine Learning”. The search returned 374 publications; after analyzing the titles and abstracts of the accessible publications, 38 appeared relevant to the regulatory principles for AI-based services in the finance industry. With backward and forward searches, we selected 16 additional relevant publications. Relevant articles are those that consider regulatory requirements for AI-based services.

4.2. First Iteration E2C

Due to their overarching role, we start with the European regulatory principles for the use of AI shown in Table 2 and follow an empirical-to-conceptual approach (E2C) in the first iteration [27]. We assume that examining these principles can help us understand the underlying objective of supervisory authorities to ensure market safety, consumer protection, and market integrity (Step 4e). First, we distinguish the regulatory principles by the underlying goals for the development of AI-based services, which can help summarize the perspective of European regulatory authorities. However, these goals differ depending on the use case, so companies must take an individual and case-based approach depending on the task. Against this background, different goals arise from the regulatory principles, and it must be checked whether they have been achieved. The underlying goals of the regulatory principles provide an overview of the existing priorities and concerns of regulators, so we added “goals” as our first dimension [12]. In line with the consensus on the main objectives outlined in Section 2, we observed the following characteristics in the regulatory principles that can be adopted in the taxonomy: “explainability and accountability”, “fairness, privacy, and human rights”, and “accuracy” [15,17,18]. Moreover, entire processes (front or back office) that have access to these AI-based services must be equipped with appropriate measures for their “sustainability and robustness” (Step 5e) [9,12,28].
Second, we assume that the regulatory principles consider the underlying “approach” at each stage of the value chain. The principles address complexity because it may hinder the adoption of innovative AI-based services given the need for effective human oversight and skilled management [27]. The principles can be characterized based on the product development stages “design”, “development”, “training”, “testing”, and “validation”, up until the “deployment” of the proposed AI-based services [12]. Moreover, “cooperation with authorities” is considered an important part of proposed services that must be well designed, so we added it as the seventh characteristic under “approach” to address the respective regulatory principles (Step 5e) [29].
Risk-free operation and thoughtful management are recognized as crucial points for AI-based services, so we added “risk management” as the third dimension [12,27]. According to the regulatory principles of the European Commission, financial companies must “estimate and evaluate the risks” that may arise when an AI-based service is operated as intended and under conditions of reasonably foreseeable misuse. Moreover, the regulatory principles emphasize “human oversight” to minimize risks that may arise from algorithmic decisions [29]; it is an important part of design and development that a natural person can oversee the functioning of these AI-based services. As another characteristic, we identified principles requiring “control measures” for the regulation and management of risks that cannot be automatically eliminated. These “control measures”, together with “human oversight”, are critical components for managing potential attacks or even system failures and for protecting ongoing operations with pre-defined procedures and capabilities. In addition, we identified regulatory principles that address “conformity assessment” for the prevention or minimization of risks to fundamental rights posed by such systems, as well as for ensuring the availability of adequate capacity and resources at the designated bodies. Moreover, “mitigation measures” [12] and “maintenance” [9] must be well defined and planned to reduce the risks before these systems are placed on the market (Step 5e).
As a fourth dimension, we identified the “monitoring” of these services, which includes both organizational and technical components in the regulatory principles. The “(technical) documentation” plays a crucial role in understanding the underlying regulatory and technical dimensions of AI-based services. This documentation is required so that regulatory authorities can assess the compliance of the system, and it enables the traceability of the underlying technical and organizational systems that operate with AI. Moreover, we identified “logging” and “post-market” monitoring as characteristics that are considered crucial parts of monitoring in the regulatory principles for ongoing validation, overall evaluation, and the implementation of necessary adjustments. Further, we identified the continuous monitoring of the “functionality (of the model)” as another characteristic necessary for managing and reducing risks, for example, via defined control measures. In the respective principles, the monitoring of functionality is also considered to determine the performance limits of AI-based services (Step 5e).
As the last dimension, we include the regulatory principles addressing the “data”, which set out characteristics regarding “governance”, “relevance and representativeness”, “collection”, and “preparation” [9,12,27]. The regulatory principles for data are even more detailed and define the criteria for preventing any systematic discrimination, so we decided to consider “data” as a separate dimension. The taxonomy after the first iteration can be seen in Figure 2 (Step 6e) and is summarized in compact form below.
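For a compact overview, the first-iteration result (mirroring Figure 2) can be written out as a simple mapping of dimensions to characteristics; the Python notation is our own illustration, while the content follows the text above.

```python
# Taxonomy of European regulatory principles after the first iteration (cf. Figure 2),
# expressed as a plain dimension -> characteristics mapping for readability.
taxonomy_after_first_iteration = {
    "goals": [
        "explainability and accountability",
        "fairness, privacy, and human rights",
        "accuracy",
        "sustainability and robustness",
    ],
    "approach": [
        "design", "development", "training", "testing",
        "validation", "deployment", "cooperation with authorities",
    ],
    "risk management": [
        "estimation and evaluation of risks", "human oversight",
        "control measures", "conformity assessment",
        "mitigation measures", "maintenance",
    ],
    "monitoring": [
        "(technical) documentation", "logging",
        "post-market", "functionality (of the model)",
    ],
    "data": [
        "governance", "relevance and representativeness",
        "collection", "preparation",
    ],
}
```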
The current taxonomy is concise and robust since the derived dimensions characterize the regulatory principles of the European supervisory authorities. However, it is necessary to consider country-specific regulatory principles in Europe to ensure the taxonomy’s usefulness and satisfy its purpose. A second iteration can help expand and confirm the identified characteristics. Moreover, we conclude that the objective ending conditions are not met after this iteration, and a second iteration must be conducted (Step 7).

4.3. Second Iteration E2C

In the second iteration, we follow an empirical-to-conceptual approach based on the review of country-specific regulatory principles in Table 2 and examine use cases and reports to find further dimensions and characteristics in the taxonomy. We assume that country-specific regulatory principles can help confirm or (re-)structure dimensions and characteristics by incorporating national guidelines and interpretations for a useful and robust taxonomy.
In the second iteration, we identified regulatory principles that consider the trade-off between explainability and accuracy an important issue for AI-based services [8,32,34,38,41], because an increased level of explanation limits the performance of the AI algorithm [50]. Therefore, the identified regulatory principles consider the “accuracy” of AI models a challenge compared to traditional financial models, which are rule-based with explicitly fixed parameterization [3,32,34,38]. Furthermore, we confirm that the characteristics “sustainability and robustness” [3,34,38], “explainability and accountability” [3,34,47], and “fairness, privacy, and human rights” [8,34,41] are considered goals in the regulatory principles of European countries. Securing data and IT infrastructure for better “sustainability and robustness” is seen in the regulatory principles as a continuous investment and is required to ensure the resilience of AI-based services [34,47,51].
The underlying “approach” is considered in more detail in the country-specific regulatory principles due to their use-case-based descriptions and priorities. Since (re-)training can change a model’s behavior overnight, the regulatory principles require financial companies to justify their “training” approach [32]. The same issue arises in the “testing” [32,47] and “validation” [8,32,34,38] processes carried out by financial companies. Due to the required impact assessment on customers and employees, issues with the “deployment” of AI-based services are addressed in the regulatory principles as an important step before the final placement on the market [34,38]. Since each change must be reviewed by the respective supervisory authorities, “cooperation with the authorities” requires financial companies to develop an appropriate review process [8,32,34].
Country-specific regulatory principles recognize the “estimation and evaluation” of risks as an important step in defining the limits and identifying possible risks of AI-based services [32,34,41]. Likewise, we found that “human oversight” is considered essential with respect to the level of automation chosen, in order to avoid algorithmic risks [3,8,34]. Both of these characteristics are, in turn, prerequisites for “control measures” that limit any damage arising in the case of failure [38,47]. Moreover, the “maintenance” of AI-based services is seen as part of the model’s change management to ensure the adaptability of such systems to updated or changed datasets [8,32]. Furthermore, we can confirm that country-specific regulations consider “mitigation measures” an appropriate characteristic for keeping the damage to a minimum in the case of failure [3,8,32].
We observed that all characteristics of “monitoring” were confirmed in the country-specific regulatory principles due to the required observation of functionality and long-term assessments [8,38]. The country-specific regulatory principles also consider “(technical) documentation” an important part of AI-based services to ensure clarity for both internal and external parties [3,8,38]. Moreover, “logging” is required as a part of monitoring and for understanding the operation of the system [38,47]. Likewise, country-specific regulations recognize “post-market” monitoring for ongoing validation, overall evaluation, and appropriate adjustments [32,38].
Furthermore, we confirm that the “governance” [34,41], “relevance and representativeness” [3,34,47], “collection”, and “preparation” [3,32] of data are considered in country-specific regulatory principles to ensure a high level of data quality. The analysis also showed that the identified country-specific regulatory principles address concerns about data quality and privacy in more detail [34,40,41]. The second iteration confirmed the dimensions and characteristics mentioned above, and no structural changes to the taxonomy (Figure 2) are therefore necessary (Step 6e).
As a result of the second iteration, we can confirm that the current state of the taxonomy is concise and robust enough since the derived dimensions characterize the European regulatory principles, including the regulatory principles of the top six countries ranked by GDP. Moreover, we can confirm that the identified characteristics reflect country-specific regulatory principles, as no additional dimensions and characteristics have been identified. We believe that the current taxonomy is comprehensive, extendible, and explanatory. The existing dimensions and characteristics are sufficient for the objects. However, we can conclude that not all objective ending conditions are met after this iteration, and a third iteration must be conducted (Step 7).

4.4. Third Iteration—C2E—Final Taxonomy

To rethink the current state of the taxonomy of regulatory principles, we decided to follow a conceptual-to-empirical approach (Step 4c). We found that the dimension “approach” can be restructured based on the country-specific regulatory principles in Table 2: the characteristics “development”, “training”, “testing”, “deployment”, and “validation” represent the fundamental steps for designing an AI-based service, so they can be grouped under “design”. Regarding the “relevance and representativeness” of data, we observe that this characteristic mainly describes the quality of the underlying dataset, so we decided to rename it “quality” (Step 5c).
We define the final taxonomy of the regulatory principles after the third iteration (Step 6c) in Figure 3. We conclude that the current taxonomy is comprehensive, extendible, and explanatory and that there is no need for further iterations. In summary, the subjective and the objective ending conditions are met after this iteration (Step 7).
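Under the assumption that the restructuring works exactly as described above, the final taxonomy (cf. Figure 3) can be sketched as a nested mapping in which the fundamental design steps are grouped under “design” and the data characteristic is renamed “quality”; the representation itself is our illustration, not a reproduction of Figure 3.

```python
# One possible reading of the final taxonomy after the third iteration (cf. Figure 3).
# The grouping under "design" and the renaming to "quality" follow the text above;
# the nested-dictionary representation is an assumption made for illustration.
final_taxonomy = {
    "goals": ["explainability and accountability",
              "fairness, privacy, and human rights",
              "accuracy",
              "sustainability and robustness"],
    "approach": {
        "design": ["development", "training", "testing", "deployment", "validation"],
        "cooperation with authorities": [],
    },
    "risk management": ["estimation and evaluation of risks", "human oversight",
                        "control measures", "conformity assessment",
                        "mitigation measures", "maintenance"],
    "monitoring": ["(technical) documentation", "logging",
                   "post-market", "functionality (of the model)"],
    "data": ["governance", "quality", "collection", "preparation"],
}
```

Classifying a new regulatory principle then amounts to assigning it one characteristic per dimension, as required by the formula in Section 3.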

5. Limitations and Discussion

Our limitations can guide future research on regulatory principles and the use of AI-based services in the finance industry. First, we identified the regulatory authorities in Europe and in the leading European countries ranked by their GDP. We examined their regulatory principles, including published articles concerning AI usage in financial services, to understand the background of the regulatory principles. The primary limitation of this study was the mixed and convoluted structure of the regulatory principles, which made understanding expectations and goals difficult. It is therefore a challenge to address regulatory concerns and (further) develop appropriate measures for innovative AI-based services. Moreover, we observed that the European Commission’s proposal is already treated by the country-specific regulatory principles as a fundamentally aligned strategy for the use of AI in the financial industry. However, these country-specific regulatory principles contain use-case-based and exemplary procedures rather than set rules and clear guidelines [52], which was a limitation when considering and restructuring our taxonomy. Nevertheless, we were able to confirm in the second iteration that the taxonomy already meets the countries’ expectations and represents country-specific regulations. Therefore, we argue that our taxonomy is useful and reflects the priorities of the regulators, providing a better understanding for practitioners and researchers. A limitation remains due to the subjective interpretation and exploratory approach, so the taxonomy must be tested over time.
The final taxonomy indicates the key regulatory aspects and priorities that regulators expect financial companies to consider within five major dimensions. The identified goals show that the regulatory authorities expect clear solutions for these areas from researchers and financial companies. The goals must also be pursued along the identified approaches to satisfy the regulators’ expectations. In addition, financial companies must consider, demonstrate, and ensure the robustness of AI-based services, which is another important requirement for a risk-free system that protects consumer safety and operational well-being [53]. With the proposed taxonomy of regulatory principles, we identified regulatory principles covering different approaches; for each of these approaches, further research is necessary to outline a clear framework with precise guidelines for the use of AI in financial services. Moreover, the final taxonomy indicates that the management of risks must be considered together with monitoring structures, which are essential for identifying, managing, and preventing potential risks. Regulators require continuous improvements and adjustments from financial companies to secure and increase the quality of AI-based services.
The taxonomy provides insight into the regulatory priorities and the focus of supervisory authorities. However, criteria for compliant services in the form of use cases are not yet available for the financial sector and research, and both can be investigated in more detail using this taxonomy. Due to the confusing, repetitive, and recurring structure of the regulatory principles, understanding the overlapping paragraphs and conducting and structuring further research are difficult. As the European Commission’s AI Act comes into force in 2024, the mechanisms for compliant AI-based services still need to be examined, and the taxonomy can accelerate and promote this process via a structured presentation of regulatory priorities. In addition, the taxonomy summarizes the objectives of the regulatory principles for further research, which is necessary to iteratively define the compliance criteria.
One of the main interests is the explainability of not only the AI models created but also the underlying process from creation to deployment. It is therefore necessary to examine specific measures and relevant components on a case-by-case basis, as defined by the dimensions in the taxonomy, e.g., for monitoring and risk management. Due to the uniqueness of each AI-powered service, the taxonomy can be used to determine the characteristics of compliant services for each corresponding set of use cases [15,20]. Against this background, it is necessary to structure and evaluate the thematic focus in further research, e.g., for credit scoring, transaction monitoring, and insolvency forecasting. For example, a compliant assessment and evaluation of risks for AI-based credit scoring could then be clearly defined. In this way, the taxonomy can provide a structured overview of compliant mechanisms assigned to the respective dimensions and characteristics based on various use cases in the financial industry. In addition, experience with AI-based systems from other sectors can be evaluated and structurally adapted to the financial sector. The taxonomy can encourage and support cooperation between the financial industry and supervisory authorities by clearly defining expectations and associated measures. The interaction, e.g., for the compliance approval process, can be structured using the dimensions of the taxonomy and extended with further characteristics depending on the use case.
Moreover, the goals are of particular interest to regulatory authorities. We identified a lack of satisfactory criteria and clear guidelines, such as for explainability, accountability, and accuracy; companies must take a case-by-case approach and consult with authorities [54]. The proposed taxonomy indicates that the underlying approach followed by financial services companies plays a crucial role for the regulators, including organizational and technical phases. As an approach becomes more complex, the process for coordinating with regulatory authorities must be better structured [21]. In this context, complexity is an obstacle to the use of AI-based financial services due to the required human control and management training [27].
AI can have a further profound impact on the financial industry, with AI-powered applications being used for a variety of tasks, including fraud detection, risk assessment, customer service, and algorithmic trading. The AI Act and the Network and Information Security Directive (NIS/NIS2), Digital Operational Resilience Act (DORA), General Data Protection Regulation (GDPR), and Cyber Resilience Act (CRA) are all pieces of legislation that aim to regulate the use of AI in the financial industry. These regulations are all designed to promote the responsible development and use of AI and protect consumers and financial stability. But they could also make it more difficult for financial institutions to use AI in financial services. Therefore, financial institutions will need to carefully assess the risks and opportunities associated with these regulations to develop a responsible and effective AI strategy.
Furthermore, for the continuous functionality of AI-based services, it is necessary to design and integrate mechanisms for monitoring the processes. These are not only meant to protect customers; they also improve the services offered over time by identifying existing deficits. Moreover, these kinds of monitoring mechanisms help in understanding and checking functionality. The regulatory principles related to monitoring also intend to increase the transparency of the service and thus facilitate any technical and operational review carried out by regulators. We also see that the regulatory principles are, comparatively, most closely related to data, as an appropriate dataset is a prerequisite and the fundamental regulatory groundwork is already in place with the GDPR. However, the existing risks and concerns of regulators go beyond the data and involve the entire process, including all components required for a secure service. We believe that our taxonomy can serve as a structured representation of regulatory principles so that further studies can be conducted to overcome the potential risks and harms associated with AI development, testing, and deployment in the finance industry [55].

6. Conclusions and Future Work

This paper develops a taxonomy of existing European regulatory principles that helps researchers and financial companies identify and address key regulatory aspects and priorities, promoting a better understanding of these regulations and guiding how compliant AI-based services are built. Against this background, we considered the regulatory principles of leading European countries to confirm and restructure the taxonomy. As a result, we created a hierarchical taxonomy consisting of six dimensions, following the iterative method for taxonomy development by Nickerson et al. [23] with two empirical-to-conceptual and one conceptual-to-empirical iteration. We contribute to the existing literature dealing with the use of AI in financial services from the perspective of regulatory authorities. We conducted a structured literature review and market survey to identify the European regulatory authorities and their regulatory principles for the use of AI in financial services. However, we observed unstructured and unsatisfactory measures for existing risks. In our discussion, we have shown that the taxonomy can provide researchers and practitioners with an overview of existing risks in order to identify and address them. We believe that highlighting regulatory priorities and characteristics with our taxonomy can help derive further measures that satisfy regulatory principles.
We also found that country-specific regulatory principles are not as sophisticated as the European ones, as they mostly contain use cases and exemplary approaches [10]. These use-case-based, explorative descriptions include possible exemplary measures that can help prevent severe sanctions, so companies in the finance industry should consider them when building similar AI-based services. For this purpose, our study sets out an overall view of European regulatory principles so that further studies can be conducted both to derive potential measures and to promote further innovative AI-based services in the finance industry. Since we have discussed the problem of missing clear legal requirements and related concerns, future research can focus not only on establishing a clear legal basis but also on providing compliant technical measures based on appropriate approaches. This can help shape the future of AI in the finance industry by providing technical and organizational guidelines [22]. Likewise, the identified objectives of the regulatory principles in our taxonomy indicate the main concerns, represented as goals, that need to be explored in further studies for compliant and innovative services. Moreover, due to the dynamic nature of AI-based services, the required coordination with regulators poses another important issue [56]. In this respect, it must be asked whether the competencies of regulators and responsible managers are sufficient for this and how they should be improved [13]. Likewise, it is necessary to examine the entire process, including deployment and post-market analysis, to provide an efficient control and approval process.

Author Contributions

Conceptualization, M.P. and M.S.; methodology, M.P. and R.C.N.; formal analysis, M.P.; investigation, M.P.; writing—original draft preparation, M.P.; writing—review and editing, M.P. and M.S.; visualization, M.P.; supervision, M.S.; project administration, M.S.; funding acquisition, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge support by the Open Access Publication Funds/transformative agreements of the Göttingen University.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

We would like to thank the reviewers for their thoughtful comments and efforts toward improving our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. OECD. Artificial Intelligence, Machine Learning and Big Data in Finance—OECD. Available online: https://www.oecd.org/finance/artificial-intelligence-machine-learning-big-data-in-finance.htm (accessed on 5 October 2022).
  2. Lee, J. Access to Finance for Artificial Intelligence Regulation in the Financial Services Industry. Eur. Bus. Org. Law Rev. 2020, 21, 731–757. [Google Scholar] [CrossRef]
  3. BaFin. Big Data and Artificial Intelligence: Principles for the Use of Algorithms in Decision-Making Processes. Available online: https://www.bafin.de/dok/16185950 (accessed on 31 January 2023).
  4. Cao, L. AI in Finance: Challenges, Techniques, and Opportunities. ACM Comput. Surv. 2022, 55, 1–38. [Google Scholar] [CrossRef]
  5. Huang, C.; Wang, X. Financial Innovation Based on Artificial Intelligence Technologies. In Proceedings of the 2019 International Conference on Artificial Intelligence and Computer Science. AICS 2019: 2019 International Conference on Artificial Intelligence and Computer Science, Wuhan, China, 12 July 2019; ACM: New York, NY, USA, 2019; pp. 750–754, ISBN 9781450371506. [Google Scholar]
  6. Daníelsson, J.; Macrae, R.; Uthemann, A. Artificial intelligence and systemic risk. J. Bank. Financ. 2021, 140, 106290. [Google Scholar] [CrossRef]
  7. World Bank. Harnessing Artificial Intelligence for Development on the Post-COVID-19 Era; World Bank: Washington, DC, USA, 2021. [Google Scholar]
  8. Deutsche Bundesbank; BaFin. Machine Learning in Risk Models—Characteristics and Supervisory Priorities: Consultation Paper. Available online: https://www.bundesbank.de/resource/blob/793670/61532e24c3298d8b24d4d15a34f503a8/mL/2021-07-15-ml-konsultationspapier-data.pdf (accessed on 10 October 2023).
  9. EBA. Discussion Paper on Machine Learning for IRB Models. Available online: https://www.eba.europa.eu/sites/default/files/document_library/Publications/Discussions/2022/Discussion%20on%20machine%20learning%20for%20IRB%20models/1023883/Discussion%20paper%20on%20machine%20learning%20for%20IRB%20models.pdf (accessed on 5 November 2023).
  10. Covington. Artificial Intelligence in Financial Services in Europe. Available online: https://www.knplaw.com/wp-content/uploads/2022/02/Artificial-Intelligence-in-Financial-Services-in-Europe-2022.pdf (accessed on 5 December 2023).
  11. European Commission. Shaping Europe’s Digital Future: Regulatory Framework Proposal on Artificial Intelligence. Available online: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed on 28 August 2023).
  12. European Commission. Regulation of the European Parliament and of the Council: Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206 (accessed on 28 December 2023).
  13. AI HLEG. Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self-Assessment. Available online: https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment (accessed on 11 May 2023).
  14. OECD. AI-Principles Overview. Available online: https://oecd.ai/en/ai-principles (accessed on 30 August 2023).
  15. Giudici, P.; Raffinetti, E. SAFE Artificial Intelligence in finance. Financ. Res. Lett. 2023, 56, 104088. [Google Scholar] [CrossRef]
  16. Caton, S.; Haas, C. Fairness in Machine Learning: A Survey. ACM Comput. Surv. 2023. [Google Scholar] [CrossRef]
  17. OECD. SAFE (Sustainable, Accurate, Fair and Explainable). Available online: https://oecd.ai/en/catalogue/metrics/safe-%28sustainable-accurate-fair-and-explainable%29 (accessed on 28 December 2023).
  18. Memarian, B.; Doleck, T. Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education: A systematic review. Comput. Educ. Artif. Intell. 2023, 5, 100152. [Google Scholar] [CrossRef]
  19. Sharma, S.; Rawal, Y.S.; Pal, S.; Dani, R. Fairness, Accountability, Sustainability, Transparency (FAST) of Artificial Intelligence in Terms of Hospitality Industry. In ICT Analysis and Applications; Fong, S., Dey, N., Joshi, A., Eds.; Springer Nature: Singapore, 2022; pp. 495–504. ISBN 978-981-16-5654-5. [Google Scholar]
  20. Buckley, R.P.; Zetzsche, D.A.; Arner, D.W.; Tang, B. Regulating Artificial Intelligence in Finance: Putting the Human in the Loop. Syd. Law Rev. 2021, 43, 43–81. [Google Scholar]
  21. Kruse, L.; Wunderlich, N.; Beck, R. Artificial Intelligence for the Financial Services Industry: What Challenges Organizations to Succeed. In Proceedings of the 52nd Annual Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; Bui, T.X., Ed.; University of Hawaii at Manoa Hamilton Library ScholarSpace: Honolulu, HI, USA, 2019. ISBN 978-0-9981331-2-6. [Google Scholar]
  22. Cao, L. AI in Finance: A Review. SSRN J. 2020. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3647625 (accessed on 16 May 2023).
  23. Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359. [Google Scholar] [CrossRef]
  24. Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design Science in Information Systems Research. MIS Q. 2004, 28, 75. [Google Scholar] [CrossRef]
  25. Eickhoff, M.; Muntermann, J.; Weinrich, T. What do FinTechs Actually do? A Taxonomy of FinTech Business Models. ICIS 2017 Proceedings, 22. Available online: https://aisel.aisnet.org/icis2017/EBusiness/Presentations/22 (accessed on 16 May 2023).
  26. Statista. GDP of European Countries 2022. Available online: https://www.statista.com/statistics/685925/gdp-of-european-countries/ (accessed on 25 July 2023).
  27. ESMA. Artificial Intelligence in EU Securities Markets. Available online: https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf (accessed on 5 November 2023).
  28. EBA. EBA Report on Big Data and Advanced Analytics. Available online: https://www.eba.europa.eu/eba-report-identifies-key-challenges-roll-out-big-data-and-advanced-analytics (accessed on 13 November 2022).
  29. ECB. Opinion of the European Central Bank of 29 December 2021 on a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52021AB0040 (accessed on 10 May 2023).
  30. European Commission. Artificial Intelligence for Europe. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0237&from=EN (accessed on 28 December 2023).
  31. Deutsche Bundesbank; BaFin. Machine Learning in Risk Models—Characteristics and Supervisory Priorities: Responses to the Consultation Paper. Available online: https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Fachartikel/2022/fa_bj_2202_Maschinelles_Lernen_en.html (accessed on 11 May 2023).
  32. Deutsche Bundesbank. Policy Discussion Paper: The Use of Artificial Intelligence and Machine Learning in the Financial Sector. Available online: https://www.bundesbank.de/resource/blob/598256/5e89d5d7b7cd236ad93ed7581800cea3/mL/2020-11-policy-dp-aiml-data.pdf (accessed on 11 September 2023).
  33. Bank of England; Prudential Regulation Authority; Financial Conduct Authority. Machine Learning in UK Financial Services: The Bank of England and Financial Conduct Authority Conducted a Second Survey into the State of Machine Learning in UK Financial Services. Available online: https://www.bankofengland.co.uk/Report/2022/machine-learning-in-uk-financial-services (accessed on 10 May 2023).
  34. Bank of England; Prudential Regulation Authority; Financial Conduct Authority. DP5/22—Artificial Intelligence and Machine Learning. Available online: https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence (accessed on 10 May 2023).
  35. Bank of England; Financial Conduct Authority. Research Note: Machine Learning in UK Financial Services. Available online: https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf (accessed on 5 November 2023).
  36. Autorité des Marchés Financiers. Artificial Intelligence in Finance: Recommendations for Its Responsible Use. Available online: https://lautorite.qc.ca/fileadmin/lautorite/grand_public/publications/professionnels/rapport-intelligence-artificielle-finance-an.pdf (accessed on 5 December 2023).
  37. Autorité des Marchés Financiers. The AMF Sets Out Its Guidelines for Digital Finance in Europe. Available online: https://www.amf-france.org/en/news-publications/news/amf-sets-out-its-guidelines-digital-finance-europe (accessed on 12 May 2023).
  38. Banque de France. Governance of Artificial Intelligence in Finance: Discussion Document. Available online: https://acpr.banque-france.fr/en/governance-artificial-intelligence-finance (accessed on 12 May 2023).
  39. Banque de France. Governance of Artificial Intelligence in Finance: Summary of Consultation Responses. Available online: https://acpr.banque-france.fr/sites/default/files/medias/documents/summary_-_ai_governance_in_finance_-_def.pdf (accessed on 5 December 2023).
  40. Banca D’Italia. Legal Framework. Available online: https://www.bancaditalia.it/compiti/vigilanza/normativa/index.html?com.dotmarketing.htmlpage.language=1 (accessed on 16 May 2023).
  41. Banca D’Italia. Artificial Intelligence in Credit Scoring.: An Analysis of Some Experiences in the Italian Financial System. Available online: https://www.bancaditalia.it/pubblicazioni/qef/2022-0721/QEF_721_EN.pdf?language_id=1 (accessed on 5 December 2023).
  42. Banca D’Italia. Questioni di Economia e Finanza: Financial Intermediation and New Technology: Theoretical and Regulatory Implications of Digital Financial Markets. Available online: https://www.bancaditalia.it/pubblicazioni/qef/2023-0758/QEF_758_23.pdf (accessed on 16 May 2023).
  43. MIMIT. Intelligenza Artificiale, Online la Strategia. Available online: https://www.mimit.gov.it/index.php/it/notizie-stampa/intelligenza-artificiale-online-la-strategia (accessed on 24 May 2023).
  44. ENIA. National Strategy for Artificial Intelligence. Available online: https://portal.mineco.gob.es/RecursosArticulo/mineco/ministerio/ficheros/National-Strategy-on-AI.pdf (accessed on 24 May 2023).
  45. ENIA. National Artificial Intelligence Strategy: Digital Spain 2025. Axis 09. Data Economy and Artificial Intelligence. Measure 41. Available online: https://espanadigital.gob.es/sites/agendadigital/files/2022-06/E09M41_National_Artificial_Intelligence_Strategy.pdf (accessed on 24 May 2023).
  46. Banco de España. Machine Learning in Credit Risk: Measuring the Dilemma between Prediction and Supervisory Cost. Documentos de Trabajo N.º 2032. Available online: https://www.bde.es/f/webbde/SES/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosTrabajo/20/Files/dt2032e.pdf (accessed on 24 May 2023).
  47. De Nederlandsche Bank. General Principles for the Use of Artificial Intelligence in the Financial Sector. Available online: https://www.dnb.nl/media/voffsric/general-principles-for-the-use-of-artificial-intelligence-in-the-financial-sector.pdf (accessed on 25 May 2023).
  48. Cooper, H.M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowl. Soc. 1988, 1, 104–126. [Google Scholar] [CrossRef]
  49. vom Brocke, J.; Simons, A.; Riemer, K.; Niehaves, B.; Plattfaut, R.; Cleven, A. Standing on the Shoulders of Giants: Challenges and Recommendations of Literature Search in Information Systems Research. CAIS 2015, 37, 9. [Google Scholar] [CrossRef]
  50. Wambsganss, T.; Engel, C.; Fromm, H. Improving Explainability and Accuracy through Feature Engineering: A Taxonomy of Features in NLP-based Machine Learning. ICIS 2021 Proceedings 2021, 1. Available online: https://aisel.aisnet.org/icis2021/data_analytics/data_analytics/1 (accessed on 16 May 2023).
  51. de Nikolai, K. DNB Publishes General Principles for the Use of AI in the Financial Sector. Available online: https://www.regulationtomorrow.com/the-netherlands/dnb-publishes-general-principles-for-the-use-of-ai-in-the-financial-sector/ (accessed on 25 May 2023).
  52. Friedrich, L.; Hiese, A.; Dreßler, R.; Wolfenstetter, F. Künstliche Intelligenz in Banken—Status quo, Herausforderungen und Anwendungspotenziale. In Künstliche Intelligenz; Springer Gabler: Berlin/Heidelberg, Germany, 2021; pp. 49–63. [Google Scholar]
  53. Mirestean, A.; Farias, A.; Deodoro, J.; Boukherouaa, E.B.; AlAjmi, K.; Iskender, E.; Ravikumar, R.; Shabsigh, G. Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance. Dep. Pap. 2021, 2021, 1. [Google Scholar] [CrossRef]
  54. Hertig, G. Use of AI by Financial Players: The Emerging Evidence. SSRN J. 2022. [Google Scholar] [CrossRef]
  55. IOSCO. The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers. Available online: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf (accessed on 28 September 2023).
  56. FSI. Humans Keeping AI in Check—Emerging Regulatory Expectations in the Financial Sector; Bank for International Settlements, Financial Stability Institute: Basel, Switzerland, 2021; ISBN 978-92-9259-497-8. [Google Scholar]
Figure 1. Taxonomy development process by Nickerson et al. [23].
Figure 2. The taxonomy of the European regulatory principles after the first iteration.
Figure 3. Final taxonomy of European regulatory principles.
Table 1. Ending conditions of the taxonomy.
#  | Objective Ending Conditions                                                                                 | Subjective Ending Conditions
1  | Each characteristic in each dimension has at least one object.                                              | Concise
2  | There is no duplication of dimensions and characteristics; each characteristic in each dimension is unique. | Robust
3  | No new dimensions or characteristics are provided in the last iteration.                                    | Comprehensive
4  | All cases of regulatory principles (objects) from the literature and practice review were checked.          | Extendible
5  | No object has two distinct characteristics in the same dimension.                                           | Explanatory
Table 2. List of financial regulatory principles identified in Europe.
Country/Region  | Authorities                                                                    | Document/Source
Europe *        | European Securities and Markets Authority                                      | [27]
                | European Banking Authority                                                     | [9,28]
                | European Central Bank                                                          | [29]
                | European Commission                                                            | [12,30]
Germany         | Federal Financial Supervisory Authority                                        | [3]
                | Deutsche Bundesbank; Federal Financial Supervisory Authority                   | [8,31]
                | Deutsche Bundesbank                                                            | [32]
United Kingdom  | Bank of England; Prudential Regulation Authority; Financial Conduct Authority  | [33,34]
                | Bank of England; Financial Conduct Authority                                   | [35]
France **       | Autorité des marchés financiers                                                | [36,37]
                | Banque de France                                                               | [38,39]
Italy **        | Banca D’Italia                                                                 | [40,41,42]
                | Ministero delle Imprese e del Made in Italy                                    | [43]
Spain **        | Estrategia Nacional de Inteligencia Artificial                                 | [44,45]
                | Banco de España                                                                | [46]
Netherlands     | De Nederlandsche Bank                                                          | [47]
* Contains the overarching principles and perspectives in Europe. ** No specific regulatory principles are solely dedicated to the usage of AI in financial services.