Systematic Review

Explainable Artificial Intelligence (XAI) in Insurance

1 Department of Accounting and Finance, University of Limerick, V94 PH93 Limerick, Ireland
2 Research Center for the Insurance Market, Institute for Insurance Studies, TH Köln, 50968 Cologne, Germany
3 Motion-S S.A., Avenue des Bains 4, Mondorf-les-Bains, L-5610 Luxembourg, Luxembourg
4 Faculty of Science, Technology and Medicine (FSTM), University of Luxembourg, Esch-sur-Alzette, L-4365 Luxembourg, Luxembourg
* Author to whom correspondence should be addressed.
Risks 2022, 10(12), 230; https://doi.org/10.3390/risks10120230
Submission received: 27 October 2022 / Revised: 23 November 2022 / Accepted: 24 November 2022 / Published: 1 December 2022
(This article belongs to the Special Issue Data Science in Insurance)

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, given the industry’s vast stores of sensitive data on policyholders and its centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete and EconLit. The resulting 103 articles (published between 2000 and 2021), representing the current state of the art of XAI in the insurance literature, are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, known as knowledge distillation and rule extraction, are identified as the primary XAI techniques used within the insurance value chain. This is important, as distilling large models into smaller, more manageable models with distinct association rules aids in building XAI models that are readily understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI.
This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.

1. Introduction

Artificial Intelligence (AI) revenues in insurance are expected to grow 23% to $3.4 billion between 2019 and 2024, yet the suitability of black-box AI models in insurance practices remains questionable (Bean 2021; Chen et al. 2019; GlobalData 2021). The growth of AI as an intelligent decision-making methodology that can perform complex computational tasks is revolutionising financial services, particularly within insurance practices. Data and its potential use are seen as a primary strategic asset and a source of competitive advantage in financial services firms, with AI models’ leverage of such data providing numerous advantages (Kim and Gardner 2015). Such advantages of AI use in the insurance industry include enhanced fraud detection in claims management, granularity and personalisation when pricing insurance premiums, the creation of smart contracts, analysis of legal documents, virtual assistants (chatbots) and office operations (EIOPA 2021; Eling et al. 2021; McFall et al. 2020; Ngai et al. 2011; OECD 2020; Riikkinen et al. 2018; Zarifis et al. 2019). AI encompasses the collation of multiple technologies in a single system which enables machines to interpret data and aid complex computational decision-making (Chi et al. 2020). Although AI models’ advantages abound, recent literature highlights these models’ opacity, commonly termed ‘black-box’ thinking (Adadi and Berrada 2018; Carabantes 2020). The Insurance Value Chain (IVC) makes extensive use of AI methods at every stage of the value creation process, with AI particularly impactful in claims management and underwriting and pricing departments (Eling et al. 2021). This research systematically reviews all peer-reviewed applications of (X)AI in insurance between 2000 and 2021 with a critical focus on the explainability of the models. This is the first study to investigate XAI in an applied, insurance industry context.
The rationale for Explainable Artificial Intelligence (XAI) development is primarily driven by three main reasons: (i) demand for the production of more transparent models, (ii) the necessity of techniques that allow humans to interact with them, and (iii) trustworthy inferences from such transparent models (Došilović et al. 2018; Fox et al. 2017; Mullins et al. 2021). Decision-makers require an explanation of the AI system to aid their understanding of its decision-making processes (Biran and Cotton 2017; Hoffman et al. 2018). Throughout this systematic review, AI is defined using recent recommendations by AI experts: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Such AI systems are designed to operate with varying levels of autonomy” (Krafft et al. 2020). As an extension of AI models, XAI involves enhancing current AI models by developing their transparency, interpretability and explicability, with such AI advancements ultimately aiming to make AI models more understandable to humans (Adadi and Berrada 2018; Floridi et al. 2018). By presenting an analysis of the degree of explainability of insurance’s AI applications, the reader gleans an insight into the progress made to date in insurance practice and research to satisfy the demand for transparency and explanations of AI-driven decisions. Practically, end-insurance consumers affected by AI-enhanced decisions will be less likely to trust the decisions made by machines when they do not trust and understand the AI processes involved (Burrell 2016; Ribeiro et al. 2016).
Insurance’s influence on socio-economic development cannot be overstated, with the sound development of national insurance markets allowing for the promotion of financial stability, improved welfare and business innovation (Ferguson 2008; Ungur 2017). Insurance affordability is a key determinant of societal progress, with the modelling of insurance pricing practices playing a key role in this affordability (Daniels 2011); actuarially fair pricing of insurance premiums allows a population to access insurance at rates which they can reasonably afford (Grant 2012). Transparency and explainability of AI models are core requirements to achieve impactful trustworthy AI in society (Felzmann et al. 2019; Maynard et al. 2022; Moradi and Samwald 2021). Trustworthiness is a core concept within the insurance industry, with enhanced XAI explanations directly affecting trust levels amongst insurance companies and their stakeholders.
This paper is structured as follows: Section 2 presents related work, analysing current research on XAI’s definition and related taxonomies, and outlining related work on (X)AI’s impact on the IVC. Section 3 presents the methodological system to collect and analyse relevant literature on (X)AI use along the IVC. The search technique to arrive at relevant articles is especially emphasised to ensure the validity of eventual research results and allow for future research reproducibility. Section 4 outlines the review’s findings on the systematically chosen sample of literature and their AI methods through the lens of defined XAI criteria. Section 5 presents a novel discussion of the review’s results on the prevalence of XAI along the IVC, focusing on the extent to which AI applications along the IVC are explainable. Section 6 concludes the systematic review, reiterating points of interest regarding the future of XAI applications in insurance practices.

XAI Terminology

Kelley et al. (2018) define AI as “a computer system that can sense its environment, comprehend, learn, and take action from what it’s learning”, with XAI intuitively expanding on this description by allowing humans to be present at every stage of this AI decision-making lifespan. A common misconception about AI models’ explainability is that it amounts simply to improving trust in AI systems and their decision processes through developing “causal structures in observational data” (Goodman and Flaxman 2017; Lipton 2018). A model’s explainability enhances its interpretability, i.e., understanding how the model came to a certain decision (Lou et al. 2013), while also positively impacting fair and ethical decision-making for high-computational tasks (Srihari 2020). Table 1 outlines the XAI variables and categories used within the systematic review to analyse the degree of explainability present in AI methods applied within the insurance industry. Additionally, the following categories of XAI methods are used to classify published applications of AI in insurance: (1) Feature Interaction and Importance, (2) Attention Mechanism, (3) Dimensionality Reduction, (4) Knowledge Distillation and Rule Extraction, and (5) Intrinsically Interpretable Models1. Additional categorisations and terminology determinations are summarised in Clinciu and Hastie (2019) and Arrieta et al. (2020).
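As an illustration of category (4), a black-box model can be distilled into a small surrogate decision tree whose if-then rules are directly readable. The sketch below uses scikit-learn on synthetic data; the feature names, data-generating process and model settings are illustrative assumptions, not drawn from any study reviewed here.

```python
# Minimal knowledge-distillation sketch (synthetic, illustrative):
# a Random Forest "black box" is approximated by a shallow decision
# tree trained on the black box's own predictions, yielding rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic risk label

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels,
# so the surrogate explains the black box rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
rules = export_text(surrogate, feature_names=["age", "claims", "premium"])
```

The fidelity score quantifies the trade-off the review discusses: a deeper surrogate reproduces the black box more faithfully but yields rules that are harder for a human to read.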

2. Fundamental Concepts & Background

2.1. Artificial Intelligence Applications in Insurance

AI use abounds across the entirety of the IVC, with Eling et al. (2021) and EIOPA (2021) providing a thorough examination of the six main stages of the IVC and their goals. Tekaya et al. (2020) preface AI research in financial services by offering an overview of current use-cases and advantages of implementing Big Data and AI models in banking, credit risk management, fraud detection and the insurance industry. Several other articles highlight the importance and advantages of AI applications in the insurance industry, predicting major shifts in operations in the coming years (Paruchuri 2020; Riikkinen et al. 2018; Umamaheswari and Janakiraman 2014). Popular areas within insurance research where AI has been applied include fraud detection (Sithic and Balasubramanian 2013; Verma et al. 2017) and claims reserving (Baudry and Robert 2019; Blier-Wong et al. 2021; Lopez and Milhaud 2021; Wüthrich 2018). Grize et al. (2020) focus on Machine Learning (ML) applications in non-life insurance, highlighting AI’s positive impact on risk assessment to improve insurance companies’ overall profitability in the long run.
Fang et al. (2016) used Big Data to develop a new profitability method for insurers using historical customer data, finding that the Random Forest (RF) model outperformed other methods of forecasting (linear regression and Support Vector Machines (SVM)). Shapiro (2007) documents the extent to which fuzzy logic (FL) has been applied to insurance practices, which prompted Baser and Apaydin (2010)’s later research on claims reserving using hybrid fuzzy least squares regression and Khuong and Tuan (2016)’s creation of a neuro-fuzzy inference system for insurance forecasting. NallamReddy et al. (2014) present a robust review of clustering techniques used in insurance. Quan and Valdez (2018) use another understandable and transparent AI method, Decision Trees (DT), to investigate their use in insurance claims prediction. Interestingly, later research acknowledges the low predictive power of DTs and boosts their intrinsic interpretability to provide a more robust insurance pricing model (Henckaerts et al. 2021).
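The kind of model comparison reported by Fang et al. (2016) can be sketched as follows. The data below is synthetic (not their dataset), so the sketch only illustrates why a Random Forest tends to outperform linear regression when the profitability target depends on nonlinearities and feature interactions.

```python
# Illustrative sketch (synthetic data): Random Forest vs. linear
# regression on a nonlinear target standing in for "profitability".
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 4))
# Quadratic and interaction terms: invisible to a plain linear model.
y = X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

r2_rf = r2_score(y_te, rf.predict(X_te))    # captures the nonlinearities
r2_lin = r2_score(y_te, lin.predict(X_te))  # near zero on this target
```

On this construction the linear model's out-of-sample R² collapses while the forest's remains substantial, mirroring the reported outperformance; with a genuinely linear target the gap would largely disappear.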
Sarkar (2020) argues that the insurance industry holds the potential for algorithmic capabilities to enhance each stage of the industry’s value chain. Through highlighting AI’s offerings at each stage of the IVC, the research prompted further studies from Walsh and Taylor (2020) and Eling et al. (2021) to determine precise AI opportunities available to the insurance industry. Walsh and Taylor (2020) highlight AI models’ ability to mimic, or augment, human capabilities with NLP, Internet of Things (IoT) and computer vision. Eling et al. (2021) analyse AI’s impact at each step of the IVC and specifically highlight the potential for AI to enhance revenue streams, loss prediction and loss prevention measures for insurance practitioners.
Bias inherent to black-box AI systems threatens trust within the insurance industry, with this bias primarily driven by either humans’ input or algorithmic bias (Koster 2020; Ntoutsi et al. 2020). There is potential for these models’ impediments to compound and exacerbate bias in their decision-making processes, with unfair outcomes possible within the insurance industry (Confalonieri et al. 2021; Koster et al. 2021). This issue of bias is further aggravated when systems’ lack of transparency makes it difficult to dispute or appeal a biased decision by AI algorithms (von Eschenbach 2021). Bias in AI models could potentially lead to discriminatory behaviour of the AI system, caused by the model’s tendency to use sensitive information, resulting in unfair decisions (Barocas and Selbst 2016). There is a strong body of research on the determination of responsible AI, with Koster et al. (2021) providing a framework to create a responsible AI system, and Arrieta et al. (2020) outlining degrees of fairness to be implemented in an AI system to reduce discriminatory issues. Although a thorough examination of trust as it pertains to social sciences, leading into its importance in human-AI relationships, is beyond the scope of the current review, trust in AI systems is considered critical for the sustained use of AI technologies in insurance (Mayer et al. 1995; Siau and Wang 2018). Toreini et al. (2020) propose a Chain of Trust framework to further enhance users’ trust in AI and ML technologies, while research on explanations in AI’s use in medical diagnostic settings proves advantageous for clinicians’ trust and understanding of these technologies (Diprose et al. 2020; Tonekaboni et al. 2019). Jacovi et al. (2021) outline that the agreement between a human and AI system is contractual; therefore, the interaction between a human and AI system must be explicit for trust to be present in the relationship between both parties (Hawley 2014; Tallant 2017).
Trust in AI systems is enhanced by the provision of explanations and understandability, supporting the growth of demand for XAI within the insurance industry.

2.2. Explainable Artificial Intelligence

XAI’s recent history is firmly rooted in the field of AI, with contributions of explainability and transparency paving the way for XAI’s growth. Lundberg and Lee (2017) described explainability as the “interpretable approximation of the original complex [AI] model”, while later Al-Shedivat et al. (2020) reference explainability as a “local approximation of a complex model (by another model)”. What is clear from the increased research focus on AI in the late 2010s is that the notion of explainability did not drastically mature—research continues to ask the same questions pertaining to AI. Such issues include the fairness of an AI system, the transparency of decision pathways, and the explanation to be provided to the end user. A further important consideration is that XAI is merely the process of making AI understandable to humans, including its actions, recommendations and underlying decisions (Anjomshoae et al. 2019). Neither AI nor XAI is on the cusp of machine-led moral decisions or understanding (Ford 2018). Humans are still at the core of (X)AI, with bias and fairness central issues to contend with. This section outlines current research in XAI and its impact on the research field of AI.
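The “local approximation of a complex model (by another model)” that Al-Shedivat et al. (2020) describe can be sketched in a few lines: fit a linear model to a black box’s outputs on perturbations around a single instance, in the spirit of LIME. The “black box” below is a toy nonlinear function and all names are illustrative assumptions.

```python
# Minimal local-surrogate sketch: a linear model approximates a
# nonlinear black box in the neighbourhood of one instance, so its
# coefficients act as local feature attributions (~ local gradients).
import numpy as np

def black_box(X):
    # Stand-in for an opaque model (e.g. a claim-risk score).
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(42)
x0 = np.array([0.5, 1.0])                      # instance to explain

# Sample small perturbations around x0 and query the black box.
Z = x0 + 0.05 * rng.normal(size=(500, 2))
f = black_box(Z)

# Least-squares linear fit in local coordinates (Z - x0).
A = np.column_stack([Z - x0, np.ones(len(Z))])
(w1, w2, intercept), *_ = np.linalg.lstsq(A, f, rcond=None)
# w1 ~ cos(0.5) and w2 ~ 2.0: the black box's local sensitivities.
```

The recovered weights approximate the partial derivatives of the black box at x0, which is exactly the sense in which a simple model “explains” a complex one locally without describing its global behaviour.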
The evaluation of the insurance industry’s (X)AI applications’ explainability contributes to the interdisciplinary literature on XAI. Through presenting the current discussion and taxonomies of XAI in the literature, the authors highlight the necessity of defined XAI criteria and categories in line with those used in this paper’s analysis. Gade et al. (2019) outline the main challenges for XAI researchers which include (1) ‘defining model explainability’, (2) ‘formulating explainability tasks for understanding model behaviour and developing solutions for these tasks’, and (3) ‘designing measures for evaluating the performance of models in explainability tasks’. Vilone and Longo (2020)’s later systematic study contributed a classification system for published XAI literature, aiming to establish boundaries in the field of XAI research. Four main clusters of research were found by Vilone and Longo (2020); (1) ‘reviews focused on specific aspects of XAI’, (2) ‘the theories and notions related to the concept of explainability’, (3) ‘the methods aimed at explaining the inferential process of data-driven and knowledge-based modelling approaches’, and 4) ‘the ways to evaluate the methods for explainability’.
Extending the above, the literature on XAI is attempting to determine a sound definition of XAI, which is commonly referred to as ‘explainability’ rather than ‘interpretability’. Islam et al. (2020) note that explainability is more than interpretability in terms of importance and trust in the prediction. Interpretability is often the end goal, with explanations acting as tools to reach interpretability (Honegger 2018). Additionally, the General Data Protection Regulation (GDPR) (EU 2016), which is discussed later in this paper, covers only explainability (Došilović et al. 2018). These considerations encourage the authors to focus on the need for a domain-specific definition of XAI relevant to insurance practices. Instead of offering actionable definitions of XAI, other works classify the requirements that an explainable system should meet (Lipton 2018; Xie et al. 2020) or the methods of evaluation under which an AI system can be deemed explainable (Doshi-Velez and Kim 2017; Hoffman et al. 2018; Lipton 2018; Rosenfeld 2021).
Reviews of XAI in medicine ignited the XAI research field, with many studies on the technology’s effects on disease diagnosis, classification and treatment published in recent years. Payrovnaziri et al. (2020) reviewed 49 articles published in the period 2009–2019 to group XAI methods used in the medical field. In this study, Payrovnaziri et al. (2020) grouped XAI methods into five groups: (1) ‘Knowledge Distillation and Rule Extraction’, (2) ‘Intrinsically Interpretable Models’, (3) ‘Data Dimensionality Reduction’, (4) ‘Attention Mechanism’ and (5) ‘Feature Interaction and Importance’. Antoniadi et al. (2021) outline challenges pertaining to AI’s use for clinical decision support systems, emphasising lacking transparency as a key issue. Notwithstanding the obvious advantages of XAI methods in enhancing understandability and aiding medical practitioners’ decisions, their research finds a distinct lack of XAI applications in medicine.
Finance-related studies on XAI include Demajo et al. (2020); Hadji Misheva et al. (2021) and Biecek et al. (2021)’s research on credit scoring and risk management. Similarly, Bussmann et al. (2020) explore XAI in fintech risk management and peer-to-peer lending platforms, while Kute et al. (2021) also focus on risk management in finance applications through their review of DL and XAI technologies in the identification of suspicious money laundering practices. Gramegna and Giudici (2020) analyse XAI’s potential to identify policyholders’ reasons for buying or abandoning non-life insurance coverage. The grouping and assessment of like-minded policyholders allows for additional high-quality information on policyholders to be obtained, with transparent and accessible AI models used. Adadi and Berrada (2018) provide a foundational background to the main concepts and implications of an XAI system, citing data security and fair lending as key issues surrounding XAI use in financial services. Concerning banking and accounting practices, Burgt (2020) states that trust in AI systems in the banking industry is paramount and provides a discussion on the trade-off between explainability and predictability of AI systems. Gramespacher and Posth (2021) then utilise XAI to optimise the return target function of a loan portfolio, while Mehdiyev et al. (2021) add to the conversation by analysing tax auditing practices and public administration’s appetite for XAI. Despite the obvious advantages of developing transparent decision-making systems in public administration, this research cites the requirements of safe, reliable and trustworthy AI systems as creating additional complexity in AI systems, which takes time to implement widely. The interest in human-centred decision-making machines reaches beyond the medical and finance domains.
Putnam and Conati (2019) provide a survey that finds students seek additional explanations from their Intelligent Tutoring System to aid their education prospects. Natural Language Processing (NLP) is another research area with significant interest in XAI methods, as revealed by Danilevsky et al. (2020), with sarcasm detection in dialogues later reviewed by Kumar et al. (2021). Anjomshoae et al. (2019) review inter-robot explainability and address the issue of explainability to non-users of ML robots through personalisation and context awareness.
The current systematic review builds upon previous research on XAI methods’ classification and analysis of XAI literature during the systematic selection of literature. Although the above literature does provide a brief overview of the current understanding of XAI and related key concerns highlighted in the literature, this is the first paper to review XAI applications in the insurance industry.

2.3. The Importance of Explainability in Insurance Analytics

The personal data of EU citizens is described as a fundamental right by the EU Charter of Fundamental Rights and has been addressed since 1995 by the Data Protection Directive (Taylor 2017; Yeung et al. 2019). Citizens’ rights to privacy are operationalised through a number of data governance mechanisms, ranging from consent platforms to data management systems, which produce compliance measures for the control, use and lifespan of personal data. Accordingly, the EU data regulation environment is one of the most robust and sophisticated, built on a strategy both to empower citizens to engage with the digital world and to inform and guide commercial use of personal data. Data is protected by several regulatory instruments that provide a specific response to data use, ranging from the Data Governance Act and the Digital Markets Act to the GDPR (Andrew and Baker 2021; Goddard 2017). The range of different instruments speaks to the complexity of data use and data commercialisation scenarios. Insurance analytics often concerns the use of citizen and customer data to provide value to both the insureds and the insurance business model. Insurance analytics already uses personal data to optimise front- and back-end operations, risk modelling and risk pricing (Hollis and Strauss 2007; Keller et al. 2018; Ma et al. 2018; Mizgier et al. 2018; Naylor 2017). Furthermore, insurance analytics can provide important value in fraud management, claims management and better managing risk pooling by creating more accurate behavioural profiles of insureds (Barry and Charpentier 2020; Cevolini and Esposito 2020; Tanninen 2020). The commercial promise of insurance analytics also raises questions and concerns regarding the potential harms of undermining the core social solidarity of insurance by changing the pricing structure and limiting access to insurance products and services to those that meet stricter parameters of risk pricing.
The importance of access to insurance is evident in compulsory products such as motor and, in some states, life insurance. Health insurance and insurance analytics are becoming a more controversial issue as increased reliance on private health care, in parallel with increased use of insurance analytics, highlights the tension between affordability and welfare. In short, insurance analytics offers scalable optimisation and high-value commercial solutions to IVCs and business models. Still, EU regulation seeks to govern such use by steering the industry toward more equitable, transparent and explainable (Kuo and Lupton 2020) uses of data analytics (EIOPA 2021; Mullins et al. 2021; van den Boom 2021).

3. Methodology

3.1. Literature Search Strategy

This literature search plan and related inclusion and exclusion criteria build upon the framework applied within Eling et al. (2021), with the aim of expanding upon their research to assess the prevalence of XAI methods in the IVC’s AI applications. Eling et al. (2021)’s research assessed AI’s impact on the IVC and the insurability of risks. The research presented in this paper expands on the abovementioned research to determine not only the impact on the IVC of AI systems being used, but also their degree of explainability. This framework is a suitable addition to the current study as a guide to literature inclusion criteria: inclusion of AI literature concerned with different stages along the IVC.
Analysis was conducted on a systematically selected body of literature from the following databases: EBSCOhost (Business Source Complete and EconLit), ACM Digital Library2, Scopus, Web of Science and IEEE Xplore. These databases were chosen due to their wide breadth of content spanning both insurance and finance-related research, while also accounting for computer science journals to access research on AI applications. The above databases were chosen to feasibly and approximately align the current review with Eling et al. (2021)’s research, while considering database accessibility limitations.
Table 2 outlines the key search terms used interchangeably with AI in the abovementioned databases, alongside ‘Insurance’ OR ‘Insurer’ using Boolean terminology. This broad set of search terms ensures an all-encompassing article-base of the IVC’s use of AI and are adapted from Eling et al. (2021)’s literature search method.
Figure 1 outlines the systematic literature search process where an initial 419 articles were scanned for relevancy to this paper. Key relevancy criteria included the assessment of articles’ contents concerning their place along the IVC. The IVC stages are extensively outlined in Table 3, as adapted from both Eling et al. (2021) and EIOPA (2021)’s research. The articles included in the systematic study of XAI in insurance are categorised according to the specific stage of the IVC which they refer to. This categorisation allows for further assessment of XAI use within the entire IVC process.
In addition to the above, the articles’ relevancy was filtered using the following criteria set:
  • Time Period: Articles3 published between 1 January 2000–31 December 2021 are included,
  • Relevancy: The presence of keywords (Table 2) in the abstract is necessary for the article’s inclusion. Additionally, the articles need to be relevant to the assessment of AI applications along the IVC directly (e.g., articles concerned with determining drivers’ behaviour using telematics information, which may later inform insurance companies’ pricing practices were excluded, as well as generalised surveys on AI uses in insurance4),
  • Singularity: Duplicate articles found across the various databases are excluded,
  • Accessibility: Only peer-reviewed articles that are accessible through the aforementioned databases and are accessible in full text are included (i.e., extended abstracts are not included),
  • Language: Only articles published in English are included.
Articles published before 2000 are not included in the current review due to the increased understanding of AI from 2000 onwards (Liao et al. 2012), and the creation of the European GDPR in 2016 (implemented in the European Union in 2018) which is especially applicable to conversations on future XAI regulation.
The initial screening process included the assessment of 419 articles (following duplicate removal) based on their title, source, and abstract for the presence of the key search terms. In all, 66 articles were included for final review at this stage of the literature search. A backward search of the relevant articles (n = 66) was then conducted, which identified a further 37 articles. The backward search is a popular method of rigorous literature searching within systematic reviews in a range of disciplines including medicine (Mohamadloo et al. 2017), law (Siegel et al. 2021) and finance (Eckert and Hüsig 2021). The backward search entailed the assessment of the 66 relevant articles’ bibliographies for additional articles of relevance to the current review. Based on this rigorous selection process, a total of 103 articles were identified as relevant for the current study (see Appendix A for the complete database of articles meeting the relevance threshold for inclusion in this systematic review). Figure 2 provides a breakdown of the publication year dispersion of each of these 103 articles. These results comprise ~75% journal articles (n = 77) and ~25% conference papers/proceedings (n = 26). The PRISMA flow diagram depicts the systematic review process (Figure 3). The PRISMA statement enhances the transparency of systematic reviews, ensuring the research conducted during the course of a systematic review is robust and reliable (Page and Moher 2017). Each stage of the literature search for the systematic review is highlighted within the PRISMA diagram (Figure 3).

3.2. Literature Extraction Process

The evaluation of the full-text articles is divided into two distinct phases, in line with the two core contributions of this review. First, each article’s applied AI method was identified, alongside the prediction task(s) of this AI method. Second, the degree of explainability of the AI method employed was analysed. Here, the degree of explainability is evident in the XAI criteria applicable to each AI method employed in each article.
The criteria used in evaluating the AI methods’ degree of explainability (Table 4) are adapted from Payrovnaziri et al. (2020)’s systematic review methodology and modified to suit this review on the insurance industry. The inclusion of the XAI variables and criteria is supported by previous research in XAI, with the criteria synthesised from Mueller et al. (2019); Du et al. (2019); Carvalho et al. (2019) and Payrovnaziri et al. (2020).

3.3. Limitations of the Research

Limitations of the current review are outlined to ensure the validity and reliable reproducibility of results. In particular, the authors are unable to access 18 references which Eling et al. (2021) presented following their literature search process, while the industry reports reviewed within the same article are not included in the current systematic review. The lack of industry reports’ analysis in this paper leads to an absence of articles concerning the Support Activities stage on the IVC. In Eling et al. (2021)’s research, all articles found pertaining to insurance companies’ Support Activities were industry reports.
Industry reports were not included in this paper as access to articles with complete methodological processes outlined is pertinent to the current systematic review, a section which industry reports regularly omit in their publications. The inclusion of academic articles and conference articles ensures the methods of AI integration in each of the reviewed articles is outlined, in particular a coherent methodology discussion which can be assessed using the XAI criteria outlined in this paper.
The authors note the limitations of Payrovnaziri et al. (2020)’s research framework pertaining to XAI literature. In particular, the XAI categorisations presented feature some overlap across various XAI categories. For example, the attention mechanism targets feature attribution, a category which is also covered under the feature interaction and importance categorisation. Nevertheless, this framework provides optimal categorisations for the scope of this work to assess the degree of explainability within AI applications in insurance, as defined boundaries of each XAI categorisation are provided.

4. Systematic Review Results

4.1. AI Methods and Prediction Tasks

The systematically chosen articles are first assessed based on the AI method employed and associated prediction task, with a focus on then distinguishing the degree of explainability evident in the literature. The stage of the IVC each article refers to is also clarified in the systematic research findings. Research on AI’s use along the IVC over the twenty-one-year period of this review revealed AI is popular at every stage of the IVC, except for insurance companies’ Support Activities. Such activities include general HR, IT and Public Relations departments in insurance companies. As mentioned above, a viable reason for the lack of articles concerned with this stage of the IVC is that Eling et al. (2021)’s study found articles on this subject through their review of industry reports, which the present systematic review did not include. The Underwriting and Pricing stage reveals significant research results (40%), with Claim Management (34%) also making extensive use of AI methods, for fraud management and identification in particular.
Table 5 lists all the articles alongside the AI method employed and prediction task. A range of AI methods are used in the articles, including: (1) Ensemble, (2) Neural Network (NN), (3) Clustering, (4) Regression (Linear and Logistic), (5) Fuzzy Logic, (6) Bayesian Network (BN), (7) Decision Tree, and (8) Support Vector Machine (SVM). Other methods used include Instance- and Rule-based, Regularisation and Reinforcement Learning. The most popular AI method used is Ensemble (23%), with both NNs (20%) and Clustering (14%) also proving popular.
The line of insurance business the research in each article refers to is also classified, with non-life insurance lines returning a high number of articles in the systematic review (55%). Motor insurance prediction problems are popular areas of research, including driving behaviour classification and automobile insurance fraud (44%). Articles concerning insurers’ life business show health(care) insurance as a popular area of research (13%), with health insurance fraud prevention and the classification of health insureds the most prominent research areas.

4.2. XAI Categories along the IVC

The following categories of XAI methods are highlighted within the article database: (1) Feature Interaction and Importance, (2) Attention Mechanism, (3) Dimensionality Reduction, (4) Knowledge Distillation and Rule Extraction, and (5) Intrinsically Interpretable Models. Figure 4 shows each stage on the IVC and the corresponding XAI method employed in the reviewed articles. The XAI methods’ interpretability techniques are then categorised as (1) intrinsic or post hoc, (2) local or global and (3) model-specific or model-agnostic (Table 6). According to the reviewed articles, most of the research on AI applications in insurance is concerned with Knowledge Distillation and Rule Extraction (35%) XAI methods, which are grouped together for the purpose of the current review.

4.3. Feature Interaction and Importance

Analysing (X)AI models’ input features’ importance and interaction is a popular XAI method, with ~27% of reviewed articles utilising this method. The determination of features’ importance contributed to the development of thorough XAI methods to complete many prediction tasks at each stage on the IVC. Smith et al. (2000) utilise Artificial Neural Networks (ANN) to gain an insight into customer policies which were likely to renew or terminate at the close of the policy period, by analysing the factors which contribute to policy termination. This assessment of optimal premium pricing through data mining and ML methods instructs research on insurance customer retention and profitability. Also addressing customer retention, Larivière and Van den Poel (2005) explore three predictor variables which encompass potential explanatory variables to inform insurance customer retention. Their RF model provides an importance measure between the explanatory and dependent variables for the prediction task.
Claim management and insurance fraud detection are areas which benefit from analysing the interaction and importance of feature inputs in AI applications through the isolation of important features which contribute to fraud (Belhadji et al. 2000). Similarly, Tao et al. (2012) avoid the curse of dimensionality through using the kernel function for SVMs in their XAI approach for insurance fraud identification, while Supraja and Saritha (2017) use this XAI method to ready their data for automobile fraud detection using fuzzy rule-based predictive techniques.
Feature interaction and importance is also useful in assessing risk across a wide range of insurance activities and in informing underwriting and the pricing of premiums. Biddle et al. (2018) add to the literature on automated underwriting in life insurance applications using the XAI method of Feature Interaction and Importance. Recursive Feature Elimination is used to reduce the feature space by iteratively wrapping and training a classifier on several feature subsets and then providing feature rankings for each subset. Premium pricing of automobile insurance is researched by Yeo et al. (2002), where cluster grouping of policyholders according to relative features aids in determining the price sensitivity of policyholder groups to premium prices.
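As an illustration of the elimination scheme described above (not the implementation of any cited study), the following minimal Python sketch drops the least important feature at each step using a toy covariance-based importance score. The feature names, data, and scoring function are all hypothetical; a real pipeline would wrap a trained classifier (e.g. an SVM) and rank features by its learned weights.

```python
def toy_importances(features, X, y):
    """Score each feature by the absolute covariance of its column with y.
    Stands in for a classifier's learned feature weights."""
    n = len(y)
    scores = {}
    for f in features:
        col = [row[f] for row in X]
        mc, my = sum(col) / n, sum(y) / n
        scores[f] = abs(sum((c - mc) * (t - my) for c, t in zip(col, y)) / n)
    return scores

def recursive_feature_elimination(X, y, n_keep):
    """Iteratively remove the weakest feature until n_keep remain,
    recording the elimination order (removed first = least important)."""
    features = list(X[0].keys())
    eliminated = []
    while len(features) > n_keep:
        scores = toy_importances(features, X, y)
        weakest = min(features, key=scores.get)
        features.remove(weakest)
        eliminated.append(weakest)
    return features, eliminated

# Hypothetical underwriting data: 'age' tracks the target, 'noise' does not.
X = [{"age": a, "noise": z} for a, z in [(20, 5), (40, 1), (60, 4), (80, 2)]]
y = [1, 2, 3, 4]
kept, dropped = recursive_feature_elimination(X, y, n_keep=1)
print(kept, dropped)   # ['age'] ['noise']
```

The elimination order itself doubles as a feature ranking, which is what makes the wrapper approach attractive from an explainability standpoint.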

4.4. Attention Mechanism

The Attention Mechanism within an AI model primarily attempts to find a set of positions in a sequence with the most relevant information on a prediction task (Payrovnaziri et al. 2020), which in turn enhances interpretability, according to Mascharka et al. (2018).
In line with the current review, Attention Mechanism is used to compute the weight of claim occurrences to inform fraud detection (Viaene et al. 2004) and inform insurer insolvency prediction (Ibiwoye et al. 2012). Lin and Chang (2009) apply Attention Mechanism in their determination of premium rates of ‘in-between’ risks through weight classification of different tariff classes. The method also aids in the determination of litigation risk of liability insurance within the accountancy profession, as Sevim et al. (2016) incorporate Attention Mechanism in their development of an ANN model, while Deprez et al. (2017) apply Attention Mechanism to mortality modelling through back-testing parametric mortality models. Samonte et al. (2018) use this XAI method for automatic document classification of medical record notes using NLP. The enhancement of the Hierarchical Attention Network model (EnHAN) assigns topics for each word in a given text and learns topical word embedding in a hierarchical manner. Topical word embedding models solve the multi-label, multi-class classification problem within medical records to inform cluster processes for billing and insurance claims.
Wei and Dan (2019) apply Attention Mechanism to parameter optimisation of SVM features, while Zhang and Kong (2020) optimise parameters for input in a NB model to inform insurance product recommendations. In terms of sequence generation, this XAI method was used by Matloob et al. (2020) to inform their predictive model for fraudulent behaviour in health insurance.
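The core computation behind attention weighting can be sketched as a softmax over per-position relevance scores, with the resulting weights used to form a weighted summary of the sequence. This is a generic illustration rather than the architecture of any cited model, and the scores and values below are hypothetical.

```python
import math

def attention(scores):
    """Softmax-normalise raw relevance scores into attention weights."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(values, scores):
    """Weighted sum of sequence values under the attention weights."""
    return sum(w * v for w, v in zip(attention(scores), values))

# Hypothetical per-position relevance scores: the third position dominates,
# so the model's output is 'explained' mostly by that position.
weights = attention([0.1, 0.2, 2.0])
context = attend([1.0, 2.0, 10.0], [0.1, 0.2, 2.0])
print([round(w, 3) for w in weights])
```

Because the weights sum to one, they can be read directly as the relative contribution of each input position, which is the interpretability claim attached to attention-based XAI methods.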

4.5. Dimensionality Reduction

Researchers typically use dimensionality reduction techniques to reduce the set of features inputted into a model, principally to improve the model’s efficiency (Motoda and Liu 2002). Kumar et al. (2010), for instance, use a frequency-based feature selection technique to reduce the dataset dimensions. This aided in developing a model for error prevention in health insurance claims processing by reducing data storage requirements and improving model execution time. They found that using a lower frequency threshold and limiting the input features improved the predictive accuracy. Finding similar results in terms of improved predictive accuracy, Li et al. (2018) use Principal Component Analysis (PCA) to increase the diversity of each of the 100 trees used in a RF model, improving the overall accuracy of the algorithm. In this instance, PCA transforms the data at each node to another space when computing the best split at that node, which contributed to satisfactory feature selection in the development of the RF algorithm for fraud detection. PCA is also used in Underwriting and Pricing of life insurance through model development for risk assessment of life insurance customers (Boodhun and Jayabalan 2018).
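In the spirit of the frequency-based selection described above (though not the cited implementation), a minimal sketch is to count how often each feature is populated across the records and drop features below a count threshold. The claims records and the threshold are hypothetical.

```python
from collections import Counter

def frequency_select(records, min_count):
    """Keep only features that are non-empty in at least min_count records;
    rare features are dropped to shrink the input space."""
    counts = Counter(f for rec in records for f, v in rec.items() if v)
    return {f for f, c in counts.items() if c >= min_count}

# Hypothetical claims records: 'rare_code' is populated once and is filtered out.
claims = [
    {"diagnosis": "A1", "provider": "P1", "rare_code": ""},
    {"diagnosis": "B2", "provider": "P1", "rare_code": "X"},
    {"diagnosis": "A1", "provider": "P2", "rare_code": ""},
]
kept = frequency_select(claims, min_count=2)
print(sorted(kept))   # ['diagnosis', 'provider']
```

Raising or lowering `min_count` trades input-space size against the risk of discarding informative but sparse features, which is the tuning decision such studies report.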
For the popular prediction tasks related to automobile insurance, the reduction in dataset dimensionality is also useful. Liu et al. (2014) reduce their large claim frequency prediction to a multi-class prediction problem to aid the eventual implementation of Adaptive Boosting (AdaBoost) to automobile insurance data. The act of reducing the number of frequency classes contributes to AdaBoost presenting as superior to SVM, NN, DTs and GLM in terms of prediction ability and interpretability. Huang and Meng (2019) bin variables to approximate continuous variables in the dataset and construct tariff classes with high-level predictive power which enhances the model’s accuracy and predictive power in the classification of usage-based insurance (UBI) products. An ANN model is optimised in Vassiljeva et al. (2017) to inform automobile contract development through assessing drivers’ risk, while Bian et al. (2018) reduced their data dimensions to include only the five most relevant factors in determining drivers’ behaviour.
Other stages on the IVC benefit from data dimensionality reduction, with Desik et al. (2016)’s identification of relevant data clusters to inform model development of marketing strategies within different insurance product groups proving successful. The Sales and Distribution stage of the IVC uses a similar reduction of dataset features which hold no bearing on insurance customers’ likelihood of renewal (Kwak et al. 2020).

4.6. Knowledge Distillation and Rule Extraction

Knowledge Distillation and Rule Extraction components of AI models refer to the combination of large models to create a smaller, more manageable model (Hinton et al. 2015). For instance, both Cheng et al. (2020) and Jin et al. (2021) investigate optimal insurance strategies (insurance, reinsurance and investment) using the MCAM to develop adequate NN models for their respective prediction tasks. In another work concerning NNs and Knowledge Distillation XAI methods, Kiermayer and Weiß (2021) approximate representative portfolios of both term life insurance plans and Defined Contribution pension plans to aid in determining the insurer’s solvency capital requirements. These representative portfolios are inputted in a NN model, which significantly outperforms k-means clustering for insurance portfolio grouping and the evaluation of insurers’ investment surplus. The combination of models was also utilised by Xu et al. (2011), where a random rough subspace method is incorporated into a NN to aid optimised insurance fraud detection.
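The distillation idea referenced above (Hinton et al. 2015) is commonly realised by softening a large teacher model's output distribution with a temperature, and training the smaller student to match it. The sketch below shows only that temperature-scaled softmax step; the logits and the temperature value are hypothetical, not taken from any reviewed study.

```python
import math

def soft_targets(logits, temperature):
    """Soften a teacher model's logits with temperature T, producing the
    smoother distribution a smaller student model is trained to match."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.0, 0.0]          # hypothetical teacher output
hard = soft_targets(teacher_logits, temperature=1.0)
soft = soft_targets(teacher_logits, temperature=4.0)
# A higher temperature spreads probability mass across classes, exposing
# the teacher's learned class similarities to the student.
print([round(p, 3) for p in hard], [round(p, 3) for p in soft])
```

The student trained on the softened targets inherits much of the teacher's behaviour while remaining small enough to inspect, which is the explainability motivation for distillation.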
In terms of extracting actionable knowledge from models, Lee et al. (2020) propose a methodology for extracting variables from textual data (word similarities) to use such variables in claims analyses, thus improving actuarial modelling. Similarly, Wang and Xu (2018) apply LDA-based deep learning for the extraction of text features in claims data to detect automobile insurance fraud.
The development of association rules aids in building XAI models which are readily understandable and useful for prediction tasks across the entirety of the IVC. Ravi et al. (2017) develop a model for analysing insurance customer complaints and categorising them for insurance customer service offices. Each customer grievance is assigned an association rule, with grievance variables treated as holding a certain degree of membership with the different rules. Association rule learning is also implemented in fraud detection through the identification of frequent fraud occurrence patterns (Verma et al. 2017) and the computation of relative weights of variables related to suspicious claim activity using AdaBoost AI methods (Viaene et al. 2004).
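A minimal sketch of how extracted association rules are scored, using the standard support and confidence measures. The claim attributes and the rule below are invented for illustration; they are not rules or data from the cited studies.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent,
    the two measures typically used to rank extracted association rules."""
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

# Hypothetical claim attribute sets, mimicking frequent-fraud-pattern mining.
claims = [
    {"late_report", "no_witness", "fraud"},
    {"late_report", "witness"},
    {"late_report", "no_witness", "fraud"},
    {"on_time", "witness"},
]
s, c = rule_metrics(claims, {"late_report", "no_witness"}, {"fraud"})
print(s, c)   # 0.5 1.0
```

A rule such as "late report and no witness implies fraud" is directly readable by a claims handler, which is why rule extraction scores well on explainability.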

4.7. Intrinsically Interpretable Models

Aside from the interpretability techniques outlined above, other researchers have relied on the intrinsic predictive capabilities of models in their research. Through preserving the predictive capabilities of less complex AI models using boosting and optimisation techniques, the predictive power of Intrinsically Interpretable Models proves useful along the IVC.
Researchers implemented Intrinsically Interpretable Models for a range of prediction tasks including: (1) double GLMs to model insurance costs’ dispersion and mean (Smyth and Jørgensen 2002), (2) prediction of insurance losses through boosting trees (Guelman 2012), (3) prediction of insurance customers’ profitability (Fang et al. 2016), and (4) cluster identification and classification (Karamizadeh and Zolfagharifar 2016; Lin et al. 2017).
Carfora et al. (2019) identified clusters of driver behaviour to inform UBI pricing through unsupervised ML classification techniques and cluster analysis. K-means clustering is used to classify driver aggressiveness to inform a risk index of driving behaviour on different road types (primarily urban vs. highway). Benedek and László (2019) compare several interpretable AI techniques in their identification of insurance fraud indicators, which each facilitate the segmentation of such fraud indicators. DTs are highlighted as suitable AI methods for such indicator identification and classification.
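A one-dimensional k-means sketch in the spirit of the driver clustering described above; the harsh-braking rates and initial centroids are hypothetical, and a real telematics pipeline would cluster on many features at once.

```python
def kmeans_1d(values, centroids, iterations=10):
    """Plain k-means on one feature (e.g. a harsh-braking rate): assign each
    driver to the nearest centroid, then move centroids to cluster means."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

# Hypothetical harsh-braking rates per 100 km for eight drivers.
rates = [0.5, 0.7, 0.6, 0.8, 3.9, 4.2, 4.0, 4.3]
calm, aggressive = kmeans_1d(rates, centroids=[0.0, 5.0])
print(round(calm, 2), round(aggressive, 2))   # 0.65 4.1
```

The two final centroids act as interpretable anchors for a "calm" and an "aggressive" driving profile, which is what makes cluster-based risk indices intrinsically explainable.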

5. Discussion

5.1. AI’s Application on the Insurance Value Chain

The use of AI applications at each stage on the IVC is promising, with a variety of prediction tasks fulfilled by AI applications. In line with Eling et al. (2021)’s findings, AI is disrupting the insurance industry in a number of ways. The automation of underwriting tasks and the identification and prevention of fraudulent behaviour are key areas where AI is impacting the IVC. This is in line with a survey by the Coalition Against Insurance Fraud (2020), in which 56% of the insurance companies surveyed reported AI as their primary mode of insurance fraud detection. An interesting note is the distinction between Eling et al. (2021)’s findings on AI’s use in Support Activities and the presence of XAI methods in such activities. The literature search process for this review did not result in any articles concerning XAI use in insurance Support Activities (including HR, IT, Legal and General Management). The authors accept that this finding is likely attributed to restricted keyword searches which do not consider Support Activities, opening the possibility of further research on XAI’s presence in insurance companies’ Support Activities.

5.2. XAI Definition, Evaluation and Regulatory Compliance

Research on XAI (Section 2.1) highlights the disjointed understanding of XAI both across and within industries, thus providing motivation for the current review. There appears to be no consistent definition of XAI in the reviewed insurance literature, a finding in line with Payrovnaziri et al. (2020)’s findings on XAI’s use and definition in medical research. The main issue posed by this finding is that the evaluation of XAI methods becomes increasingly difficult when there is no agreed definition and scope of XAI. This review develops XAI evaluation criteria, incorporating interpretability evaluation as either (i) intrinsic or post hoc, (ii) local or global and (iii) model-specific or model-agnostic. The results extend XAI survey research conducted by Adadi and Berrada (2018), Arrieta et al. (2020) and Das and Rad (2020), who each defined inter-related taxonomies of XAI. The development of an all-encompassing XAI definition for insurers and AI experts will allow for further adoption of XAI methods in the insurance industry.
Each definition of XAI discussed in Section 2.2 is derived from the early definition of explainability as the “assignment of causal responsibility” originally cited in Josephson and Josephson (1996). Although each paper providing additional insight into XAI definitions is useful, the lack of cohesion amongst these studies hampers the consolidation of each individual contribution into an interdisciplinarily accepted XAI definition. The authors acknowledge that an all-purpose XAI definition is difficult to determine, as both notions of explainability and interpretability (which are often used interchangeably and used in creating XAI definitions) are domain-specific notions (Freitas 2014; Rudin 2018). Lipton (2018) cites interpretability as an ill-defined concept, as interpretability is not a fixed notion in and of itself. In efforts to define XAI specifically within the insurance industry, the authors accept all referenced definitions of XAI and findings of XAI use on the IVC to-date and propose the following XAI definition specific to the insurance industry:
“XAI is the transfer of understanding to AI models’ end-users by highlighting key decision-pathways in the model and allowing for human interpretability at various stages of the model’s decision-process. XAI involves outlining the relationship between model inputs and prediction, meanwhile maintaining predictive accuracy of the model throughout”
In addition to benefitting XAI research, the authors note that a solid definition of XAI pertaining directly to the insurance industry (and financial services at large) will aid the development of adapted regulation, which is in line with recommendations from Palacio et al. (2021). The GDPR (EU 2016) established a regime of “algorithmic accountability” and (insureds’) “right to explanation” from decision-making algorithms (Bayamlıoğlu 2021; Wulf and Seizov 2022). XAI promotes such transparent and interpretable traits, yet a comprehensive implementation of these methods necessitates regulatory compliance (Henckaerts et al. 2020). In the current absence of specific regulation of XAI models, the authors highlight the potential for XAI methods to be paired with existing governance measures in the insurance industry to mitigate concerns surrounding the use of novel AI methods until satisfactory regulation is developed. This recommendation is in line with governance guidelines from EIOPA (2021), for example the maintenance of human oversight in decision-making processes.

5.3. The Relationship between Explanation and Trust

The recent proliferation of XAI literature is partly driven by the need to maintain users’ trust in AI to further develop AI adoption (Jacovi et al. 2021; Robinson 2020). Despite this rationale, prior XAI research has not considered the notion of trust in much detail. As a multidimensional and dynamic construct, trust has received considerable critical attention, yet a concise definition has remained elusive. The interplay between explainability and trust can be further substantiated by exploring what constitutes user trust in AI. So far, it has been established that explanations can positively affect users’ trustworthiness assessments in several use cases, such as recommendation agents (Xiao and Benbasat 2007) or information security (Pieters 2011). In particular, explanations can foster cognitive-based trust that prevails early in the human-AI relationship. This initial trust development phase is often referred to as swift trust (Meyerson et al. 1996). This notion of interpersonal trust, following the common act of anthropomorphising machines, affects how humans interact with such machines (Hoffman 2017). Users are affected by the reliability of their ‘partner’ in the interpersonal relationship (the machine); however, the lack of human empathy and the inability to apologise for mistakes during automated decision-making hinder a truly anthropomorphised machine from being involved in a real interpersonal relationship with a human (Beck et al. 2002). As interaction history is lacking, the extent to which a user can understand a given process or decision is paramount (Colaner 2022). However, the question remains whether there is a threshold after which this positive effect can be reversed. If users suffered from such explanation overload, more explanations would not be significantly associated with trust. This assessment is subjective and perceptual in nature and might well be influenced by a user’s general propensity to trust AI models.
This assumption accords with previous findings by McKnight et al. (2002) that the disposition to trust positively influences the trustworthiness assessment in e-commerce. Further work is thus required to examine how, precisely, the trust construct can be integrated into XAI research.

6. Conclusions

The primary contribution of this systematic review to widespread XAI understanding is an in-depth analysis of published literature on XAI in insurance practices. The growing commercialisation of AI applications creates the potential for insurers to build high-value solutions in response to the industry’s efficiency issues and to respond appropriately to changes in the business landscape (Balasubramanian et al. 2018). The necessity to highlight transparent and understandable AI processes applied within the insurance industry prompts this investigation of XAI applications and their current use cases. This review of key literature provides a comprehensive analysis of XAI applications in insurance for both key insurance regulators and insurance practitioners, which will allow for extensive application in future regulatory decision-making. Legally, the opacity of black-box AI systems hinders regulatory bodies from determining whether data is processed fairly (Carabantes 2020; Rieder and Simon 2017), with XAI enhancing the potential for AI systems’ regulation under the GDPR in Europe (EU 2016).
This review assesses 103 articles (comprised of journal articles and conference papers/proceedings) which outline XAI applications at each stage of the IVC. The lack of explainability evaluation and consensus on XAI definitions hinders the potential progress of the XAI research field in insurance practices, as there is no clear way to evaluate the degree of explainability in XAI. This review attempts to bridge this gap by defining XAI criteria and incorporating such criteria into a systematic review of XAI applications in insurance literature. Utilising these XAI criteria, the degree of explainability of each XAI application is provided, assigning each AI method to a grouped XAI approach and then evaluating the model’s interpretability as either (i) intrinsic or post hoc, (ii) local or global, and (iii) model-specific or model-agnostic. Findings reiterate the authors’ hypothesis that XAI methods are popular within insurance research, enabling the transparent use of AI methods in industry research. The transparency XAI methods afford insurance companies enhances the application of AI models in an industry striving for a basis of trust with multiple stakeholders.
Additionally, this paper analyses XAI definitions and proposes a revised definition of XAI. This proposed definition is informed by previous definitions in the XAI literature and by systematic reviews of literature on AI applications on the IVC. The authors acknowledge this definition will not be applicable across a wide range of industries; therefore, it is reiterated that the proposed XAI definition applies to financial services and the insurance industry. This definition will aid in adapting regulation in the insurance industry to suit an AI-rich insurance industry. Further clarification is necessary on the relationship between explanation and trust as both concepts pertain to XAI, with research recommendations centred on the extent to which explanations assist in the development of trust in AI models.
Achieving a substantial understanding of the full potential of XAI research requires an interdisciplinary effort. The systematic review of XAI methods in different research areas is a stepping-stone to a full understanding of the research field, with medical reviews providing the bulk of knowledge on the topic at the time of writing. Considering the research gap regarding XAI applications along the IVC, this paper is one of the first attempts to provide an overview of XAI’s current use within the insurance industry.

Author Contributions

Conceptualization, E.O., B.S., M.M. and M.C.; methodology, E.O. and B.S.; investigation, E.O. and B.S.; data curation, E.O.; writing—original draft preparation, E.O. and J.R.; writing—review and editing, B.S., M.M., M.C., J.R. and G.C.; visualization, E.O.; supervision, B.S., M.M. and M.C.; funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Government of the Grand Duchy of Luxembourg, project LIAISON CVN-20210910RDI170010283594 RDI-REDDEX RDI REDIND.

Data Availability Statement

The authors can confirm that all relevant data are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AdaBoost: Adaptive Boosting
AI: Artificial Intelligence
ANN: Artificial Neural Network
BN: Bayesian Network
BPNN: Back Propagation Neural Network
CHAID: Chi-Squared Automatic Interaction Detection
CNN: Convolutional Neural Networks
CPLF: Cost-Sensitive Parallel Learning Framework
CRM: Customer Relationship Management
DFSVM: Dual Membership Fuzzy Support Vector Machine
DL: Deep Learning
ESIM: Evolutionary Support Vector Machine Inference Model
EvoDM: Evolutionary Data Mining
FL: Fuzzy Logic
GAM: Generalised Additive Model
GLM: Generalised Linear Model
HVSVM: Hull Vector Support Vector Machine
IoT: Internet of Things
IVC: Insurance Value Chain
KDD: Knowledge Discovery in Databases
LASSO: Least Absolute Shrinkage and Selection Operator
MCAM: Markov Chain Approximation Method
ML: Machine Learning
NB: Naïve Bayes
NCA: Neighbourhood Component Analysis
NLP: Natural Language Processing
NN: Neural Network
PCA: Principal Component Analysis
RF: Random Forest
SBS: Sequential Backward Selection
SFS: Sequential Forward Selection
SHAP: Shapley Additive exPlanations
SOFM: Self-Organising Feature Map
SOM: Self-Organising Map
UBI: Usage-Based Insurance
WEKA: Waikato Environment for Knowledge Analysis
XAI: Explainable Artificial Intelligence
XGBoost: Extreme Gradient Boosting Algorithms

Appendix A. XAI Variables

Key XAI variables and criteria used both in the systematic review and throughout this paper are briefly outlined below as a foundation for the paper’s results and discussion. These XAI groupings are derived from Payrovnaziri et al. (2020), who synthesised the groupings from Du et al. (2019) and Carvalho et al. (2019)’s XAI reviews. This particular lens is suitable for the current study as it provides key criteria for determining XAI’s presence in AI method approaches.

Appendix A.1. Intrinsic vs. Post hoc Interpretability

The main differentiating aspect between an intrinsic and a post hoc interpretable explanation is whether interpretability is achieved through imposing constraints on the complexity of the model (intrinsic) or whether the model’s explainability is analysed after training (post hoc) (Molnar 2019). Intrinsic methods primarily describe how the model works, which denotes a high degree of transparency in a model which is interpretable by itself (Lipton 2018; Rudin 2018). Lipton (2018) contrastingly summarises post hoc explainability as what else the model can tell us. Carvalho et al. (2019) clarify that it is possible to apply post hoc methods to intrinsic models, as post hoc methods are usually derived from the main model. In summary, intrinsic models achieve their interpretability by incorporating it directly into their structures, while post hoc models require the creation of a second model to provide explanations for the existing model (Du et al. 2019).

Appendix A.2. Local vs. Global Interpretability

Local explanations primarily reveal the impact of input features on an individual prediction, while global explanations inspect model concepts to describe how the model works overall (Molnar 2019). Popular local explanation methods include: (1) the reporting of the decision path, (2) the assigning of credit to each input feature in the model and, (3) the application of several model-agnostic approaches which require the repeated execution of the model for each explanation (Baehrens et al. 2010; Lundberg et al. 2020; Štrumbelj and Kononenko 2014). A global explanation provides an overall view of the AI system, through listing the system’s rules or features that eventually determine their predictive outcome (Lundberg et al. 2020). In terms of trustworthiness, Adadi and Berrada (2018) cite local explanations as more trustworthy than global ones, as the latter connote a sense of understanding of the mechanism by which the model works.

Appendix A.3. Model-Specific vs. Model-Agnostic Interpretation

Both model-specific and model-agnostic interpretation methods are derived from the above intrinsic vs. post hoc explainability criteria. As the name suggests, model-specific interpretation methods are limited to specific model classes as each method is based on a specific model’s internals (Molnar 2019). Model-specific interpretability is by definition achieved from Intrinsically Interpretable Models (Adadi and Berrada 2018; Carvalho et al. 2019). Alternatively, model-agnostic methods can be applied to any model (black-box or otherwise) and are applied after the model has been trained (similar to post hoc interpretability). This method includes the analysis of relationships between the system’s feature inputs and outputs, without sacrificing the model’s predictive power (Carvalho et al. 2019; Lipton 2018). Table A1 below provides a summary of the above interpretability criteria and their generalised relationships.
Table A1. Association between XAI Interpretability Criteria where In-model and Post-model interpretability are defined using XAI variables.
In-Model | Intrinsic | Model-specific
Post-Model | Post hoc | Model-agnostic

Appendix B. Database of Reviewed Articles

Appendix B.1. Journal Articles Included in the Systematic Review

Reference | Title | Lead Author | Year | Source | Volume | Issue Number
Aggour et al. (2006) | Automating the underwriting of insurance applications | Aggour | 2006 | AI Magazine | 27 | 3
Baecke and Bocca (2017) | The value of vehicle telematics data in insurance risk selection processes | Baecke | 2017 | Decision Support Systems | 98
Baudry and Robert (2019) | A machine learning approach for individual claims reserving in insurance | Baudry | 2019 | Applied Stochastic Models in Business and Industry | 35 | 5
Belhadji et al. (2000) | A model for the detection of insurance fraud | Belhadji | 2000 | The Geneva Papers on Risk and Insurance-Issues and Practice | 25 | 4
Benedek and László (2019) | Identifying Key Fraud Indicators in the Automobile Insurance Industry Using SQL Server Analysis Services | Benedek | 2019 | Studia Universitatis Babes-Bolyai | 64 | 2
Bermúdez et al. (2008) | A Bayesian dichotomous model with asymmetric link for fraud in insurance | Bermúdez | 2008 | Insurance: Mathematics and Economics | 42 | 2
Boodhun and Jayabalan (2018) | Risk prediction in life insurance industry using supervised learning algorithms | Boodhun | 2018 | Complex & Intelligent Systems | 4 | 2
Carfora et al. (2019) | A “pay-how-you-drive” car insurance approach through cluster analysis | Carfora | 2019 | Soft Computing | 23 | 9
Chang and Lai (2021) | A Neural Network-Based Approach in Predicting Consumers’ Intentions of Purchasing Insurance Policies | Chang | 2021 | Acta Informatica Pragensia | 10 | 2
Cheng et al. (2011) | Decision making for contractor insurance deductible using the evolutionary support vector machines inference model | Cheng | 2011 | Expert Systems with Applications | 38 | 6
Cheng et al. (2020) | Optimal insurance strategies: A hybrid deep learning Markov chain approximation approach | Cheng | 2020 | ASTIN Bulletin: The Journal of the IAA | 50 | 2
Christmann (2004) | An approach to model complex high–dimensional insurance data | Christmann | 2004 | Allgemeines Statistisches Archiv | 88 | 4
David (2015) | Auto insurance premium calculation using generalized linear models | David | 2015 | Procedia Economics and Finance | 20
Delong and Wüthrich (2020) | Neural networks for the joint development of individual payments and claim incurred | Delong | 2020 | Risks | 8 | 2
Denuit and Lang (2004) | Non-life rate-making with Bayesian GAMs | Denuit | 2004 | Insurance: Mathematics and Economics | 35 | 3
Deprez et al. (2017) | Machine learning techniques for mortality modeling | Deprez | 2017 | European Actuarial Journal | 7 | 2
Desik and Behera (2012) | Acquiring Insurance Customer: The CHAID Way | Desik | 2012 | IUP Journal of Knowledge Management | 10 | 3
Desik et al. (2016) | Segmentation-Based Predictive Modeling Approach in Insurance Marketing Strategy | Desik | 2016 | IUP Journal of Business Strategy | 13 | 2
Devriendt et al. (2021) | Sparse regression with multi-type regularized feature modeling | Devriendt | 2021 | Insurance: Mathematics and Economics | 96
Duval and Pigeon (2019) | Individual loss reserving using a gradient boosting-based approach | Duval | 2019 | Risks | 7 | 3
Fang et al. (2016) | Customer profitability forecasting using Big Data analytics: A case study of the insurance industry | Fang | 2016 | Computers & Industrial Engineering | 101
Frees and Valdez (2008) | Hierarchical insurance claims modeling | Frees | 2008 | Journal of the American Statistical Association | 103 | 484
Gabrielli (2021) | An individual claims reserving model for reported claims | Gabrielli | 2021 | European Actuarial Journal | 11 | 2
Gan (2013) | Application of data clustering and machine learning in variable annuity valuation | Gan | 2013 | Journal of the American Statistical Association | 53 | 3
Gan and Valdez (2017) | Regression modeling for the valuation of large variable annuity portfolios | Gan | 2018 | North American Actuarial Journal | 22 | 1
Ghorbani and Farzai (2018) | Fraud detection in automobile insurance using a data mining based approach | Ghorbani | 2018 | International Journal of Mechatronics, Electrical and Computer Technology (IJMEC) | 8 | 27
Gramegna and Giudici (2020) | Why to buy insurance? An Explainable Artificial Intelligence Approach | Gramegna | 2020 | Risks | 8 | 4
Guelman (2012) | Gradient boosting trees for auto insurance loss cost modeling and prediction | Guelman | 2012 | Expert Systems with Applications | 39 | 3
Gweon et al. (2020) | An effective bias-corrected bagging method for the valuation of large variable annuity portfolios | Gweon | 2020 | ASTIN Bulletin: The Journal of the IAA | 50 | 3
Herland et al. (2018) | The detection of medicare fraud using machine learning methods with excluded provider labels | Herland | 2018 | Journal of Big Data | 5 | 1
Huang and Meng (2019) | Automobile insurance classification ratemaking based on telematics driving data | Huang | 2019 | Decision Support Systems | 127
Ibiwoye et al. (2012) | Artificial neural network model for predicting insurance insolvency | Ibiwoye | 2012 | International Journal of Management and Business Research | 2 | 1
Jain et al. (2019) | Assessing risk in life insurance using ensemble learning | Jain | 2019 | Journal of Intelligent & Fuzzy Systems | 37 | 2
Jeong et al. (2018) | Association rules for understanding policyholder lapses | Jeong | 2018 | Risks | 6 | 3
Jiang et al. (2018) | Cost-sensitive parallel learning framework for insurance intelligence operation | Jiang | 2018 | IEEE Transactions on Industrial Electronics | 66 | 12
Jin et al. (2021) | A hybrid deep learning method for optimal insurance strategies: Algorithms and convergence analysis | Jin | 2021 | Insurance: Mathematics and Economics | 96
Johnson and Khoshgoftaar (2019) | Medicare fraud detection using neural networks | Johnson | 2019 | Journal of Big Data | 6 | 1
Joram et al. (2017) | A knowledge-based system for life insurance underwriting | Joram | 2017 | International Journal of Information Technology and Computer Science | 3
Karamizadeh and Zolfagharifar (2016) | Using the clustering algorithms and rule-based of data mining to identify affecting factors in the profit and loss of third party insurance, insurance company auto | Karamizadeh | 2016 | Indian Journal of Science and Technology | 9 | 7
Kašćelan et al. (2016) | A nonparametric data mining approach for risk prediction in car insurance: a case study from the Montenegrin market | Kašćelan | 2016 | Economic Research-Ekonomska Istraživanja | 29 | 1
Khodairy and Abosamra (2021) | Driving Behavior Classification Based on Oversampled Signals of Smartphone Embedded Sensors Using an Optimized Stacked-LSTM Neural Networks | Khodairy | 2021 | IEEE Access | 9
Kiermayer and Weiß (2021) | Grouping of contracts in insurance using neural networks | Kiermayer | 2021 | Scandinavian Actuarial Journal | 2021 | 4
Kose et al. (2015) | An interactive machine-learning-based electronic fraud and abuse detection system in healthcare insurance | Kose | 2015 | Applied Soft Computing | 36
Kwak et al. (2020) | Driver Identification Based on Wavelet Transform Using Driving Patterns | Kwak | 2020 | IEEE Transactions on Industrial Informatics | 17 | 4
Larivière and Van den Poel (2005) | Predicting customer retention and profitability by using random forests and regression forests techniques | Larivière | 2005 | Expert Systems with Applications | 29 | 2
Lee et al. (2020) | Actuarial applications of word embedding models | Lee | 2020 | ASTIN Bulletin: The Journal of the IAA | 50 | 1
Li et al. (2018) | A principle component analysis-based random forest with the potential nearest neighbor method for automobile insurance fraud identification | Li | 2018 | Applied Soft Computing | 70
Lin (2009) | Using neural networks as a support tool in the decision making for insurance industry | Lin | 2009 | Expert Systems with Applications | 36 | 3
Lin et al. (2017) | An ensemble random forest algorithm for insurance big data analysis | Lin | 2017 | IEEE Access | 5
Liu et al. (2014) | Using multi-class AdaBoost tree for prediction frequency of auto insurance | Liu | 2014 | Journal of Applied Finance and Banking | 4 | 5
Matloob et al. (2020) | Sequence Mining and Prediction-Based Healthcare Fraud Detection Methodology | Matloob | 2020 | IEEE Access | 8
Neumann et al. (2019) | Machine Learning-Based Predictions of Customers’ Decisions in Car Insurance | Neumann | 2019 | Applied Artificial Intelligence | 33 | 9
Pathak et al. (2005) | A fuzzy-based algorithm for auditors to detect elements of fraud in settled insurance claims | Pathak | 2005 | Managerial Auditing Journal | 20 | 6
Ravi et al. (2017) | Fuzzy formal concept analysis based opinion mining for CRM in financial services | Ravi | 2017 | Applied Soft Computing | 60
Sadreddini et al. (2021) | Cancel-for-Any-Reason Insurance Recommendation Using Customer Transaction-Based Clustering | Sadreddini | 2021 | IEEE Access | 9
Sakthivel and Rajitha (2017) | Artificial intelligence for estimation of future claim frequency in non-life insurance | Sakthivel | 2017 | Global Journal of Pure and Applied Mathematics | 13 | 6
Sevim et al. (2016) | Risk Assessment for Accounting Professional Liability Insurance | Sevim | 2016 | Sosyoekonomi | 24 | 29
Shah and Guez (2009) | Mortality forecasting using neural networks and an application to cause-specific data for insurance purposes | Shah | 2009 | Journal of Forecasting | 28 | 6
Sheehan et al. (2017) | Semi-autonomous vehicle motor insurance: A Bayesian Network risk transfer approach | Sheehan | 2017 | Transportation Research Part C: Emerging Technologies | 82
Siami et al. (2020) | A mobile telematics pattern recognition framework for driving behavior extraction | Siami | 2020 | IEEE Transactions on Intelligent Transportation Systems | 22 | 3
Smith et al. (2000) | An analysis of customer retention and insurance claim patterns using data mining: A case study | Smith | 2000 | Journal of the Operational Research Society | 51 | 5
Smyth and Jørgensen (2002) | Fitting Tweedie’s compound Poisson model to insurance claims data: dispersion modelling | Smyth | 2002 | ASTIN Bulletin: The Journal of the IAA | 32 | 1
Sun et al. (2018) | Abnormal group-based joint medical fraud detection | Sun | 2018 | IEEE Access | 7
Tillmanns et al. (2017) | How to separate the wheat from the chaff: Improved variable selection for new customer acquisition | Tillmanns | 2017 | Journal of Marketing | 81 | 2
Vaziri and Beheshtinia (2016) | A holistic fuzzy approach to create competitive advantage via quality management in services industry (case study: life-insurance services) | Vaziri | 2016 | Management Decision | 54 | 8
Viaene et al. (2002) | Auto claim fraud detection using Bayesian learning neural networks | Viaene | 2002 | Expert Systems with Applications | 29 | 3
Viaene et al. (2004) | A case study of applying boosting Naive Bayes to claim fraud diagnosis | Viaene | 2004 | Journal of Risk and Insurance | 69 | 3
Viaene et al. (2005) | A case study of applying boosting Naive Bayes to claim fraud diagnosis | Viaene | 2005 | IEEE Transactions on Knowledge and Data Engineering | 16 | 5
Wang (2020) | Research on the Features of Car Insurance Data Based on Machine Learning | Wang | 2020 | Procedia Computer Science | 166
Wang and Xu (2018) | Leveraging deep learning with LDA-based text analytics to detect automobile insurance fraud | Wang | 2018 | Decision Support Systems | 105
Wei and Dan (2019) | Market fluctuation and agricultural insurance forecasting model based on machine learning algorithm of parameter optimization | Wei | 2019 | Journal of Intelligent & Fuzzy Systems | 37 | 5
Wüthrich (2020) | Bias regularization in neural network models for general insurance pricing | Wüthrich | 2020 | European Actuarial Journal | 10 | 1
Yan et al. (2020a) | Research on the UBI Car Insurance Rate Determination Model Based on the CNN-HVSVM Algorithm | Yan | 2020 | IEEE Access | 8
Yan et al. (2020b) | Improved adaptive genetic algorithm for the vehicle Insurance Fraud Identification Model based on a BP Neural Network | Yan | 2020 | Theoretical Computer Science | 817
Yang et al. (2006) | Extracting actionable knowledge from decision trees | Yang | 2006 | IEEE Transactions on Knowledge and Data Engineering | 19 | 1
Yang et al. (2018) | Insurance premium prediction via gradient tree-boosted Tweedie compound Poisson models | Yang | 2018 | Journal of Business & Economic Statistics | 36 | 3
Yeo et al. (2002) | A mathematical programming approach to optimise insurance premium pricing within a data mining framework | Yeo | 2002 | Journal of the Operational Research Society | 53 | 11

Appendix B.2. Conference Papers Included in the Systematic Review

Reference | Title | Lead Author | Year | Source
Alshamsi (2014) | Predicting car insurance policies using random forest | Alshamsi | 2014 | 2014 10th International Conference on Innovations in Information Technology (IIT)
Bian et al. (2018) | Good drivers pay less: A study of usage-based vehicle insurance models | Bian | 2018 | Transportation Research Part A: Policy and Practice
Biddle et al. (2018) | Automated Underwriting in Life Insurance: Predictions and Optimisation (Industry Track) | Biddle | 2018 | Australasian Database Conference
Bonissone et al. (2002) | Evolutionary optimization of fuzzy decision systems for automated insurance underwriting | Bonissone | 2002 | 2002 IEEE World Congress on Computational Intelligence, 2002 IEEE International Conference on Fuzzy Systems
Bove et al. (2021) | Contextualising local explanations for non-expert users: an XAI pricing interface for insurance | Bove | 2021 | IUI Workshops
Cao and Zhang (2019) | Using PCA to improve the detection of medical insurance fraud in SOFM neural networks | Cao | 2019 | Proceedings of the 2019 3rd International Conference on Management Engineering, Software Engineering and Service Sciences
Dhieb et al. (2019) | Extreme gradient boosting machine learning algorithm for safe auto insurance operations | Dhieb | 2019 | 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES)
Gan and Huang (2017) | A data mining framework for valuing large portfolios of variable annuities | Gan | 2017 | Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Ghani and Kumar (2011) | Interactive learning for efficiently detecting errors in insurance claims | Ghani | 2011 | Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Kieu et al. (2018) | Distinguishing trajectories from different drivers using incompletely labeled trajectories | Kieu | 2018 | Proceedings of the 27th ACM International Conference on Information and Knowledge Management
Kowshalya and Nandhini (2018) | Predicting fraudulent claims in automobile insurance | Kowshalya | 2018 | 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT)
Kumar et al. (2010) | Data mining to predict and prevent errors in health insurance claims processing | Kumar | 2010 | Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Kyu and Woraratpanya (2020) | Car Damage Detection and Classification | Kyu | 2020 | Proceedings of the 11th International Conference on Advances in Information Technology
Lau and Tripathi (2011) | Mine your business—A novel application of association rules for insurance claims analytics | Lau | 2011 | CAS E-Forum. Arlington: Casualty Actuarial Society
Liu and Chen (2012) | Application of evolutionary data mining algorithms to insurance fraud prediction | Liu | 2012 | Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT
Morik et al. (2002) | End-user access to multiple sources-Incorporating knowledge discovery into knowledge management | Morik | 2002 | International Conference on Practical Aspects of Knowledge Management
Samonte et al. (2018) | ICD-9 tagging of clinical notes using topical word embedding | Samonte | 2018 | Proceedings of the 2018 International Conference on Internet and e-Business
Sohail et al. (2021) | Feature importance analysis for customer management of insurance products | Sohail | 2021 | 2021 International Joint Conference on Neural Networks (IJCNN)
Supraja and Saritha (2017) | Robust fuzzy rule based technique to detect frauds in vehicle insurance | Supraja | 2017 | 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS)
Tao et al. (2012) | Insurance fraud identification research based on fuzzy support vector machine with dual membership | Tao | 2012 | 2012 International Conference on Information Management, Innovation Management and Industrial Engineering
Vassiljeva et al. (2017) | Computational intelligence approach for estimation of vehicle insurance risk level | Vassiljeva | 2017 | 2017 International Joint Conference on Neural Networks (IJCNN)
Verma et al. (2017) | Fraud detection and frequent pattern matching in insurance claims using data mining techniques | Verma | 2017 | 2017 Tenth International Conference on Contemporary Computing (IC3)
Xu et al. (2011) | Random rough subspace based neural network ensemble for insurance fraud detection | Xu | 2011 | 2011 Fourth International Joint Conference on Computational Sciences and Optimization
Yan and Bonissone (2006) | Designing a Neural Network Decision System for Automated Insurance Underwriting | Yan | 2006 | Insurance Studies
Zahi and Achchab (2019) | Clustering of the population benefiting from health insurance using k-means | Zahi | 2019 | Proceedings of the 4th International Conference on Smart City Applications
Zhang and Kong (2020) | Dynamic estimation model of insurance product recommendation based on Naive Bayesian model | Zhang | 2020 | Proceedings of the 2020 International Conference on Cyberspace Innovation of Advanced Technologies

Notes

1. The five XAI categories used were introduced to XAI literature by Payrovnaziri et al. (2020), adapted from research conducted by Du et al. (2019) and Carvalho et al. (2019).
2. Searched ‘The ACM Guide to Computing Literature’.
3. ‘Articles’ throughout this review refers to both academic articles and conference papers.
4. Several such surveys and reviews are discussed in Section 2.2.

References

  1. Adadi, Amina, and Mohammed Berrada. 2018. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 6: 52138–60. [Google Scholar] [CrossRef]
  2. Aggour, Kareem S., Piero P. Bonissone, William E. Cheetham, and Richard P. Messmer. 2006. Automating the underwriting of insurance applications. AI Magazine 27: 36–36. [Google Scholar]
  3. Alshamsi, Asma S. 2014. Predicting car insurance policies using random forest. Paper presented at the 2014 10th International Conference on Innovations in Information Technology (IIT), Al Ain, United Arab Emirates, November 9–11. [Google Scholar]
  4. Al-Shedivat, Maruan, Avinava Dubey, and Eric P. Xing. 2020. Contextual Explanation Networks. Journal of Machine Learning Research 21: 194:1–94:44. [Google Scholar]
  5. Andrew, Jane, and Max Baker. 2021. The general data protection regulation in the age of surveillance capitalism. Journal of Business Ethics 168: 565–78. [Google Scholar] [CrossRef]
  6. Anjomshoae, Sule, Amro Najjar, Davide Calvaresi, and Kary Främling. 2019. Explainable agents and robots: Results from a systematic literature review. Paper presented at the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13–17. [Google Scholar]
  7. Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, Lan Wei, Claudia Mazo, Brett A Becker, and Catherine Mooney. 2021. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: A systematic review. Applied Sciences 11: 5088. [Google Scholar] [CrossRef]
  8. Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, and Richard Benjamins. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58: 82–115. [Google Scholar] [CrossRef] [Green Version]
  9. Baecke, Philippe, and Lorenzo Bocca. 2017. The value of vehicle telematics data in insurance risk selection processes. Decision Support Systems 98: 69–79. [Google Scholar] [CrossRef]
  10. Baehrens, David, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. 2010. How to explain individual classification decisions. The Journal of Machine Learning Research 11: 1803–31. [Google Scholar]
  11. Balasubramanian, Ramnath, Ari Libarikian, and Doug McElhaney. 2018. Insurance 2030—The Impact of AI on the Future of Insurance. New York: McKinsey & Company. [Google Scholar]
  12. Barocas, Solon, and Andrew D Selbst. 2016. Big data’s disparate impact. California Law Review 104: 671. [Google Scholar] [CrossRef]
  13. Barry, Laurence, and Arthur Charpentier. 2020. Personalization as a promise: Can Big Data change the practice of insurance? Big Data & Society 7: 2053951720935143. [Google Scholar]
  14. Baser, Furkan, and Aysen Apaydin. 2010. Calculating insurance claim reserves with hybrid fuzzy least squares regression analysis. Gazi University Journal of Science 23: 163–70. [Google Scholar]
  15. Baudry, Maximilien, and Christian Y. Robert. 2019. A machine learning approach for individual claims reserving in insurance. Applied Stochastic Models in Business and Industry 35: 1127–55. [Google Scholar] [CrossRef]
  16. Bayamlıoğlu, Emre. 2021. The right to contest automated decisions under the General Data Protection Regulation: Beyond the so-called “right to explanation”. Regulation & Governance 16: 1058–78. [Google Scholar]
  17. Bean, Randy. 2021. Transforming the Insurance Industry with Big Data, Machine Learning and AI. Forbes. July 6. Available online: https://www.forbes.com/sites/randybean/2021/07/06/transforming-the-insurance-industry-with-big-data-machine-learning-and-ai/?sh=4004a662f8a6 (accessed on 11 August 2021).
  18. Beck, Hall P., Mary T. Dzindolet, and Linda G. Pierce. 2002. Operators’ automation usage decisions and the sources of misuse and disuse. In Advances in Human Performance and Cognitive Engineering Research. Bingley: Emerald Group Publishing Limited. [Google Scholar]
  19. Belhadji, El Bachir, George Dionne, and Faouzi Tarkhani. 2000. A model for the detection of insurance fraud. The Geneva Papers on Risk and Insurance-Issues and Practice 25: 517–38. [Google Scholar] [CrossRef]
  20. Benedek, Botond, and Ede László. 2019. Identifying Key Fraud Indicators in the Automobile Insurance Industry Using SQL Server Analysis Services. Studia Universitatis Babes-Bolyai 64: 53–71. [Google Scholar] [CrossRef] [Green Version]
  21. Bermúdez, Lluís, José María Pérez, Mercedes Ayuso, Esther Gómez, and Francisco. J. Vázquez. 2008. A Bayesian dichotomous model with asymmetric link for fraud in insurance. Insurance: Mathematics and Economics 42: 779–86. [Google Scholar] [CrossRef]
  22. Bian, Yiyang, Chen Yang, J. Leon Zhao, and Liang Liang. 2018. Good drivers pay less: A study of usage-based vehicle insurance models. Transportation Research Part A: Policy and Practice 107: 20–34. [Google Scholar] [CrossRef]
  23. Biddle, Rhys, Shaowu Liu, and Guandong Xu. 2018. Automated Underwriting in Life Insurance: Predictions and Optimisation (Industry Track). Paper presented at Australasian Database Conference, Gold Coast, QLD, Australia, May 24–27. [Google Scholar]
  24. Biecek, Przemysław, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, and Piotr Wojewnik. 2021. Enabling Machine Learning Algorithms for Credit Scoring—Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models. arXiv arXiv:2104.06735. [Google Scholar]
  25. Biran, Or, and Courtenay Cotton. 2017. Explanation and justification in machine learning: A survey. Paper presented at the IJCAI-17 Workshop on Explainable AI (XAI), Melbourne, VIC, Australia, August 19–21. [Google Scholar]
  26. Blier-Wong, Christopher, Hélène Cossette, Luc Lamontagne, and Etienne Marceau. 2021. Machine Learning in P&C Insurance: A Review for Pricing and Reserving. Risks 9: 4. [Google Scholar]
  27. Bonissone, Piero. P., Raj Subbu, and Kareem S. Aggour. 2002. Evolutionary optimization of fuzzy decision systems for automated insurance underwriting. Paper presented at the 2002 IEEE World Congress on Computational Intelligence, 2002 IEEE International Conference on Fuzzy Systems, FUZZ-IEEE’02. Proceedings (Cat. No. 02CH37291), Honolulu, HI, USA, May 12–17. [Google Scholar]
  28. Boodhun, Noorhannah, and Manoj Jayabalan. 2018. Risk prediction in life insurance industry using supervised learning algorithms. Complex & Intelligent Systems 4: 145–54. [Google Scholar]
  29. Bove, Clara, Jonathan Aigrain, Marie-Jeanne Lesot, Charles Tijus, and Marcin Detyniecki. 2021. Contextualising local explanations for non-expert users: An XAI pricing interface for insurance. Paper presented at the IUI Workshops, College Station, TX, USA, April 13–17. [Google Scholar]
  30. Burgt, Joost van der. 2020. Explainable AI in banking. Journal of Digital Banking 4: 344–50. [Google Scholar]
  31. Burrell, Jenna. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3: 2053951715622512. [Google Scholar]
  32. Bussmann, Niklas, Paolo Giudici, Dimitri Marinelli, and Jochen Papenbrock. 2020. Explainable ai in fintech risk management. Frontiers in Artificial Intelligence 3: 26. [Google Scholar] [CrossRef] [PubMed]
  33. Cao, Hongfei, and Runtong Zhang. 2019. Using PCA to improve the detection of medical insurance fraud in SOFM neural networks. Paper presented at the 2019 3rd International Conference on Management Engineering, Software Engineering and Service Sciences, Wuhan, China, January 12–14. [Google Scholar]
  34. Carabantes, Manuel. 2020. Black-box artificial intelligence: An epistemological and critical analysis. AI & Society 35: 309–17. [Google Scholar]
  35. Carfora, Maria Francesca, Fabio Martinelli, Francesco Mercaldo, Vittoria Nardone, Albina Orlando, Antonella Santone, and Gigliola Vaglini. 2019. A “pay-how-you-drive” car insurance approach through cluster analysis. Soft Computing 23: 2863–75. [Google Scholar] [CrossRef]
  36. Carvalho, Diogo V., Eduardo M. Pereira, and Jaime S. Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics 8: 832. [Google Scholar] [CrossRef] [Green Version]
  37. Cevolini, Alberto, and Elena Esposito. 2020. From pool to profile: Social consequences of algorithmic prediction in insurance. Big Data & Society 7: 2053951720939228. [Google Scholar]
  38. Chang, Wen Teng, and Kee Huong Lai. 2021. A Neural Network-Based Approach in Predicting Consumers’ Intentions of Purchasing Insurance Policies. Acta Informatica Pragensia 10: 138–54. [Google Scholar] [CrossRef]
  39. Chen, Irene Y., Peter Szolovits, and Marzyeh Ghassemi. 2019. Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics 21: 167–79. [Google Scholar]
  40. Cheng, Min-Yuan, Hsien-Sheng Peng, Yu-Wei Wu, and Yi-Hung Liao. 2011. Decision making for contractor insurance deductible using the evolutionary support vector machines inference model. Expert Systems with Applications 38: 6547–55. [Google Scholar] [CrossRef]
  41. Cheng, Xiang, Zhuo Jin, and Hailiang Yang. 2020. Optimal insurance strategies: A hybrid deep learning Markov chain approximation approach. ASTIN Bulletin: The Journal of the IAA 50: 449–77. [Google Scholar] [CrossRef]
  42. Chi, Oscar Hengxuan, Gregory Denton, and Dogan Gursoy. 2020. Artificially intelligent device use in service delivery: A systematic review, synthesis, and research agenda. Journal of Hospitality Marketing & Management 29: 757–86. [Google Scholar]
  43. Christmann, Andreas. 2004. An approach to model complex high–dimensional insurance data. Allgemeines Statistisches Archiv 88: 375–96. [Google Scholar] [CrossRef]
  44. Clinciu, Miruna-Adriana, and Helen Hastie. 2019. A survey of explainable AI terminology. Paper presented at the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019), Tokyo, Japan, October 29–November 1. [Google Scholar]
  45. Coalition Against Insurance Fraud. 2020. Artificial Intelligence & Insurance Fraud. Washington, DC: Coalition Against Insurance Fraud. Available online: https://insurancefraud.org/wp-content/uploads/Artificial-Intelligence-and-Insurance-Fraud-2020.pdf (accessed on 2 May 2021).
  46. Colaner, Nathan. 2022. Is explainable artificial intelligence intrinsically valuable? AI & Society 37: 231–38. [Google Scholar]
  47. Confalonieri, Roberto, Ludovik Coba, Benedikt Wagner, and Tarek R. Besold. 2021. A historical perspective of explainable Artificial Intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 11: e1391. [Google Scholar] [CrossRef]
  48. Daniels, Norman. 2011. The ethics of health reform: Why we should care about who is missing coverage. Connecticut Law Review 44: 1057. [Google Scholar]
  49. Danilevsky, Marina, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. arXiv arXiv:2010.00711. [Google Scholar]
  50. Das, Arun, and Paul Rad. 2020. Opportunities and challenges in explainable artificial intelligence (xai): A survey. arXiv arXiv:2006.11371. [Google Scholar]
  51. David, Mihaela. 2015. Auto insurance premium calculation using generalized linear models. Procedia Economics and Finance 20: 147–56. [Google Scholar] [CrossRef]
  52. Delong, Łukasz, and Mario V. Wüthrich. 2020. Neural networks for the joint development of individual payments and claim incurred. Risks 8: 33. [Google Scholar] [CrossRef] [Green Version]
  53. Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. 2020. Explainable AI for interpretable credit scoring. arXiv arXiv:2012.03749. [Google Scholar]
  54. Denuit, Michel, and Stefan Lang. 2004. Non-life rate-making with Bayesian GAMs. Insurance: Mathematics and Economics 35: 627–47. [Google Scholar] [CrossRef]
  55. Deprez, Philippe, Pavel V. Shevchenko, and Mario V. Wüthrich. 2017. Machine learning techniques for mortality modeling. European Actuarial Journal 7: 337–52. [Google Scholar] [CrossRef]
  56. Desik, P. H. Anantha, Samarendra Behera, Prashanth Soma, and Nirmala Sundari. 2016. Segmentation-Based Predictive Modeling Approach in Insurance Marketing Strategy. IUP Journal of Business Strategy 13: 35–45. [Google Scholar]
  57. Desik, P. H. Anantha, and Samarendra Behera. 2012. Acquiring Insurance Customer: The CHAID Way. IUP Journal of Knowledge Management 10: 7–13. [Google Scholar]
  58. Devriendt, Sander, Katrien Antonio, Tom Reynkens, and Roel Verbelen. 2021. Sparse regression with multi-type regularized feature modeling. Insurance: Mathematics and Economics 96: 248–61. [Google Scholar] [CrossRef]
  59. Dhieb, Najmeddine, Hakim Ghazzai, Hichem Besbes, and Yehia Massoud. 2019. Extreme gradient boosting machine learning algorithm for safe auto insurance operations. Paper presented at the 2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Cairo, Egypt, September 4–6. [Google Scholar]
  60. Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. 2020. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association 27: 592–600. [Google Scholar] [CrossRef]
  61. Doshi-Velez, Finale, and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv arXiv:1702.08608. [Google Scholar]
  62. Došilović, Filip Karlo, Mario Brčić, and Nikica Hlupić. 2018. Explainable artificial intelligence: A survey. Paper presented at the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, May 21–25. [Google Scholar]
  63. Du, Mengnan, Ninghao Liu, and Xia Hu. 2019. Techniques for interpretable machine learning. Communications of the ACM 63: 68–77. [Google Scholar] [CrossRef] [Green Version]
  64. Duval, Francis, and Mathieu Pigeon. 2019. Individual loss reserving using a gradient boosting-based approach. Risks 7: 79. [Google Scholar]
  65. Eckert, Theresa, and Stefan Hüsig. 2021. Innovation portfolio management: A systematic review and research agenda in regards to digital service innovations. Management Review Quarterly 72: 187–230. [Google Scholar] [CrossRef]
  66. EIOPA. 2021. Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector. Luxembourg: European Insurance and Occupational Pensions Authority (EIOPA). [Google Scholar]
  67. Eling, Martin, Davide Nuessle, and Julian Staubli. 2021. The impact of artificial intelligence along the insurance value chain and on the insurability of risks. The Geneva Papers on Risk and Insurance-Issues and Practice 47: 205–41. [Google Scholar] [CrossRef]
  68. EU. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 27 June 2020).
  69. Fang, Kuangnan, Yefei Jiang, and Malin Song. 2016. Customer profitability forecasting using Big Data analytics: A case study of the insurance industry. Computers & Industrial Engineering 101: 554–64. [Google Scholar]
  70. Felzmann, Heike, Eduard Fosch Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. 2019. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society 6: 2053951719860542. [Google Scholar]
  71. Ferguson, Niall. 2008. The Ascent of Money: A Financial History of the World. London: Penguin. [Google Scholar]
  72. Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, and Francesca Rossi. 2018. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28: 689–707. [Google Scholar] [CrossRef] [Green Version]
  73. Ford, Martin. 2018. Architects of Intelligence: The Truth about AI from the People Building it. Birmingham: Packt Publishing Ltd. [Google Scholar]
  74. Fox, Maria, Derek Long, and Daniele Magazzeni. 2017. Explainable planning. arXiv arXiv:1709.10256. [Google Scholar]
  75. Frees, Edward W., and Emiliano A. Valdez. 2008. Hierarchical insurance claims modeling. Journal of the American Statistical Association 103: 1457–69. [Google Scholar] [CrossRef]
  76. Freitas, Alex A. 2014. Comprehensible classification models: A position paper. ACM SIGKDD Explorations Newsletter 15: 1–10. [Google Scholar] [CrossRef]
  77. Gabrielli, Andrea. 2021. An individual claims reserving model for reported claims. European Actuarial Journal 11: 541–77. [Google Scholar] [CrossRef]
  78. Gade, Krishna, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, and Ankur Taly. 2019. Explainable AI in industry. Paper presented at the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, August 4–8. [Google Scholar]
  79. Gan, Guojun, and Emiliano A. Valdez. 2017. Valuation of large variable annuity portfolios: Monte Carlo simulation and synthetic datasets. Dependence Modeling 5: 354–74. [Google Scholar] [CrossRef]
  80. Gan, Guojun, and Jimmy Xiangji Huang. 2017. A data mining framework for valuing large portfolios of variable annuities. Paper presented at the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13–17. [Google Scholar]
  81. Gan, Guojun. 2013. Application of data clustering and machine learning in variable annuity valuation. Insurance: Mathematics and Economics 53: 795–801. [Google Scholar]
  82. Ghani, Rayid, and Mohit Kumar. 2011. Interactive learning for efficiently detecting errors in insurance claims. Paper presented at the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21–24. [Google Scholar]
  83. Ghorbani, Ali, and Sara Farzai. 2018. Fraud detection in automobile insurance using a data mining based approach. International Journal of Mechatronics, Electrical and Computer Technology (IJMEC) 8: 3764–71. [Google Scholar]
  84. GlobalData. 2021. Artificial Intelligence (AI) in Insurance—Thematic Research. London: GlobalData. [Google Scholar]
  85. Goddard, Michelle. 2017. The EU General Data Protection Regulation (GDPR): European regulation that has a global impact. International Journal of Market Research 59: 703–5. [Google Scholar] [CrossRef]
  86. Goodman, Bryce, and Seth Flaxman. 2017. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine 38: 50–57. [Google Scholar] [CrossRef] [Green Version]
  87. Gramegna, Alex, and Paolo Giudici. 2020. Why to Buy Insurance? An Explainable Artificial Intelligence Approach. Risks 8: 137. [Google Scholar] [CrossRef]
  88. Gramespacher, Thomas, and Jan-Alexander Posth. 2021. Employing explainable AI to optimize the return target function of a loan portfolio. Frontiers in Artificial Intelligence 4: 693022. [Google Scholar] [CrossRef]
  89. Grant, Eric. 2012. The Social and Economic Value of Insurance. Geneva: The Geneva Association (The International Association for the Study of Insurance Economics). Available online: https://www.genevaassociation.org/sites/default/files/research-topics-document-type/pdf_public/ga2012-the_social_and_economic_value_of_insurance.pdf (accessed on 3 July 2020).
  90. Grize, Yves-Laurent, Wolfram Fischer, and Christian Lützelschwab. 2020. Machine learning applications in nonlife insurance. Applied Stochastic Models in Business and Industry 36: 523–37. [Google Scholar] [CrossRef]
  91. Guelman, Leo. 2012. Gradient boosting trees for auto insurance loss cost modeling and prediction. Expert Systems with Applications 39: 3659–67. [Google Scholar] [CrossRef]
  92. Gweon, Hyukjun, Shu Li, and Rogemar Mamon. 2020. An effective bias-corrected bagging method for the valuation of large variable annuity portfolios. ASTIN Bulletin: The Journal of the IAA 50: 853–71. [Google Scholar] [CrossRef]
  93. Hadji Misheva, Branka, Ali Hirsa, Joerg Osterrieder, Onkar Kulkarni, and Stephen Fung Lin. 2021. Explainable AI in Credit Risk Management. Credit Risk Management, March 1. [Google Scholar]
  94. Hawley, Katherine. 2014. Trust, distrust and commitment. Noûs 48: 1–20. [Google Scholar] [CrossRef] [Green Version]
  95. Henckaerts, Roel, Katrien Antonio, and Marie-Pier Côté. 2020. Model-Agnostic Interpretable and Data-driven suRRogates suited for highly regulated industries. Stat 1050: 14. [Google Scholar]
  96. Henckaerts, Roel, Marie-Pier Côté, Katrien Antonio, and Roel Verbelen. 2021. Boosting insights in insurance tariff plans with tree-based machine learning methods. North American Actuarial Journal 25: 255–85. [Google Scholar] [CrossRef]
  97. Herland, Matthew, Taghi M. Khoshgoftaar, and Richard A. Bauder. 2018. Big data fraud detection using multiple medicare data sources. Journal of Big Data 5: 1–21. [Google Scholar] [CrossRef]
  98. Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv arXiv:1503.02531. [Google Scholar]
  99. Hoffman, Robert R. 2017. A taxonomy of emergent trusting in the human–machine relationship. In Cognitive Systems Engineering: The Future for a Changing World. Boca Raton: CRC Press, pp. 137–64. [Google Scholar]
  100. Hoffman, Robert R., Shane T. Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv arXiv:1812.04608. [Google Scholar]
  101. Hollis, Aidan, and Jason Strauss. 2007. Privacy, Driving Data and Automobile Insurance: An Economic Analysis. Munich: University Library of Munich. [Google Scholar]
  102. Honegger, Milo. 2018. Shedding light on black box machine learning algorithms: Development of an axiomatic framework to assess the quality of methods that explain individual predictions. arXiv arXiv:1808.05054. [Google Scholar]
  103. Huang, Yifan, and Shengwang Meng. 2019. Automobile insurance classification ratemaking based on telematics driving data. Decision Support Systems 127: 113156. [Google Scholar] [CrossRef]
  104. Ibiwoye, Ade, Olawale Olaniyi Ajibola, and Ashim Babatunde Sogunro. 2012. Artificial neural network model for predicting insurance insolvency. International Journal of Management and Business Research 2: 59–68. [Google Scholar]
  105. Islam, Sheikh Rabiul, William Eberle, and Sheikh K. Ghafoor. 2020. Towards quantification of explainability in explainable artificial intelligence methods. Paper presented at the Thirty-Third International Flairs Conference, North Miami Beach, FL, USA, May 17–20. [Google Scholar]
  106. Jacovi, Alon, Ana Marasović, Tim Miller, and Yoav Goldberg. 2021. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. Paper presented at the 2021 ACM Conference on Fairness, Accountability, and Transparency, Toronto, ON, Canada, March 3–10. [Google Scholar]
  107. Jain, Rachna, Jafar A. Alzubi, Nikita Jain, and Pawan Joshi. 2019. Assessing risk in life insurance using ensemble learning. Journal of Intelligent & Fuzzy Systems 37: 2969–80. [Google Scholar]
  108. Jeong, Himchan, Guojun Gan, and Emiliano A. Valdez. 2018. Association rules for understanding policyholder lapses. Risks 6: 69. [Google Scholar] [CrossRef] [Green Version]
  109. Jiang, Xinxin, Shirui Pan, Guodong Long, Fei Xiong, Jing Jiang, and Chengqi Zhang. 2018. Cost-sensitive parallel learning framework for insurance intelligence operation. IEEE Transactions on Industrial Electronics 66: 9713–23. [Google Scholar] [CrossRef]
  110. Jin, Zhuo, Hailiang Yang, and George Yin. 2021. A hybrid deep learning method for optimal insurance strategies: Algorithms and convergence analysis. Insurance: Mathematics and Economics 96: 262–75. [Google Scholar] [CrossRef]
  111. Johnson, Justin M., and Taghi M. Khoshgoftaar. 2019. Medicare fraud detection using neural networks. Journal of Big Data 6: 1–35. [Google Scholar] [CrossRef] [Green Version]
  112. Joram, Mutai K., Bii K. Harrison, and Kiplang’at N. Joseph. 2017. A knowledge-based system for life insurance underwriting. International Journal of Information Technology and Computer Science 3: 40–49. [Google Scholar] [CrossRef] [Green Version]
  113. Josephson, John R., and Susan G. Josephson. 1996. Abductive Inference: Computation, Philosophy, Technology. Cambridge: Cambridge University Press. [Google Scholar]
  114. Karamizadeh, Faramarz, and Seyed Ahad Zolfagharifar. 2016. Using the clustering algorithms and rule-based of data mining to identify affecting factors in the profit and loss of third party insurance, insurance company auto. Indian Journal of Science and Technology 9: 1–9. [Google Scholar] [CrossRef]
  115. Kašćelan, Vladimir, Ljiljana Kašćelan, and Milijana Novović Burić. 2016. A nonparametric data mining approach for risk prediction in car insurance: A case study from the Montenegrin market. Economic Research-Ekonomska Istraživanja 29: 545–58. [Google Scholar] [CrossRef] [Green Version]
  116. Keller, Benno, Martin Eling, Hato Schmeiser, Markus Christen, and Michele Loi. 2018. Big Data and Insurance: Implications for Innovation, Competition and Privacy. Geneva: Geneva Association-International Association for the Study of Insurance. [Google Scholar]
  117. Kelley, Kevin H., Lisa M. Fontanetta, Mark Heintzman, and Nikki Pereira. 2018. Artificial intelligence: Implications for social inflation and insurance. Risk Management and Insurance Review 21: 373–87. [Google Scholar] [CrossRef]
  118. Khodairy, Moayed A., and Gibrael Abosamra. 2021. Driving Behavior Classification Based on Oversampled Signals of Smartphone Embedded Sensors Using an Optimized Stacked-LSTM Neural Networks. IEEE Access 9: 4957–72. [Google Scholar] [CrossRef]
  119. Khuong, Mai Ngoc, and Tran Manh Tuan. 2016. A new neuro-fuzzy inference system for insurance forecasting. Paper presented at the International Conference on Advances in Information and Communication Technology, Bikaner, India, August 12–13. [Google Scholar]
  120. Kiermayer, Mark, and Christian Weiß. 2021. Grouping of contracts in insurance using neural networks. Scandinavian Actuarial Journal 2021: 295–322. [Google Scholar] [CrossRef]
  121. Kieu, Tung, Bin Yang, Chenjuan Guo, and Christian S. Jensen. 2018. Distinguishing trajectories from different drivers using incompletely labeled trajectories. Paper presented at the 27th ACM International Conference on Information and Knowledge Management, Torino, Italy, October 22–26. [Google Scholar]
  122. Kim, Hyong, and Errol Gardner. 2015. The Science of Winning in Financial Services-Competing on Analytics: Opportunities to Unlock the Power of Data. Journal of Financial Perspectives 3: 1–34. [Google Scholar]
  123. Kopitar, Leon, Leona Cilar, Primoz Kocbek, and Gregor Stiglic. 2019. Local vs. global interpretability of machine learning models in type 2 diabetes mellitus screening. In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems. Berlin: Springer, pp. 108–19. [Google Scholar]
  124. Kose, Ilker, Mehmet Gokturk, and Kemal Kilic. 2015. An interactive machine-learning-based electronic fraud and abuse detection system in healthcare insurance. Applied Soft Computing 36: 283–99. [Google Scholar] [CrossRef]
  125. Koster, Harold. 2020. Towards better implementation of the European Union’s anti-money laundering and countering the financing of terrorism framework. Journal of Money Laundering Control 23: 379–86. [Google Scholar] [CrossRef]
  126. Koster, Olivier, Ruud Kosman, and Joost Visser. 2021. A Checklist for Explainable AI in the Insurance Domain. Paper presented at the International Conference on the Quality of Information and Communications Technology, Algarve, Portugal, September 8–11. [Google Scholar]
  127. Kowshalya, G., and M. Nandhini. 2018. Predicting fraudulent claims in automobile insurance. Paper presented at the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, April 20–21. [Google Scholar]
  128. Krafft, Peaks, Meg Young, Michael Katell, Karen Huang, and Ghislain Bugingo. 2020. Defining AI in policy versus practice. Paper presented at the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, February 7–8. [Google Scholar]
  129. Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. 2021. Explainable Artificial Intelligence for Sarcasm Detection in Dialogues. Wireless Communications and Mobile Computing 2021: 2939334. [Google Scholar] [CrossRef]
  130. Kumar, Mohit, Rayid Ghani, and Zhu-Song Mei. 2010. Data mining to predict and prevent errors in health insurance claims processing. Paper presented at the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, July 24–28. [Google Scholar]
  131. Kuo, Kevin, and Daniel Lupton. 2020. Towards Explainability of Machine Learning Models in Insurance Pricing. arXiv arXiv:2003.10674. [Google Scholar]
  132. Kute, Dattatray V., Biswajeet Pradhan, Nagesh Shukla, and Abdullah Alamri. 2021. Deep learning and explainable artificial intelligence techniques applied for detecting money laundering—A critical review. IEEE Access 9: 82300–17. [Google Scholar] [CrossRef]
  133. Kwak, Byung Il, Mee Lan Han, and Huy Kang Kim. 2020. Driver Identification Based on Wavelet Transform Using Driving Patterns. IEEE Transactions on Industrial Informatics 17: 2400–10. [Google Scholar] [CrossRef]
  134. Kyu, Phyu Mar, and Kuntpong Woraratpanya. 2020. Car Damage Detection and Classification. Paper presented at the 11th International Conference on Advances in Information Technology, Bangkok, Thailand, July 1–3. [Google Scholar]
  135. Larivière, Bart, and Dirk Van den Poel. 2005. Predicting customer retention and profitability by using random forests and regression forests techniques. Expert Systems with Applications 29: 472–84. [Google Scholar] [CrossRef]
  136. Lau, Lucas, and Arun Tripathi. 2011. Mine your business—A novel application of association rules for insurance claims analytics. In CAS E-Forum. Arlington: Casualty Actuarial Society. [Google Scholar]
  137. Lee, Gee Y., Scott Manski, and Tapabrata Maiti. 2020. Actuarial applications of word embedding models. ASTIN Bulletin: The Journal of the IAA 50: 1–24. [Google Scholar] [CrossRef]
  138. Li, Yaqi, Chun Yan, Wei Liu, and Maozhen Li. 2018. A principle component analysis-based random forest with the potential nearest neighbor method for automobile insurance fraud identification. Applied Soft Computing 70: 1000–9. [Google Scholar] [CrossRef]
  139. Liao, Shu-Hsien, Pei-Hui Chu, and Pei-Yuan Hsiao. 2012. Data mining techniques and applications–A decade review from 2000 to 2011. Expert Systems with Applications 39: 11303–11. [Google Scholar] [CrossRef]
  140. Lin, Chaohsin. 2009. Using neural networks as a support tool in the decision making for insurance industry. Expert Systems with Applications 36: 6914–17. [Google Scholar] [CrossRef]
  141. Lin, Justin, and Ha-Joon Chang. 2009. Should Industrial Policy in developing countries conform to comparative advantage or defy it? A debate between Justin Lin and Ha-Joon Chang. Development Policy Review 27: 483–502. [Google Scholar] [CrossRef]
  142. Lin, Weiwei, Ziming Wu, Longxin Lin, Angzhan Wen, and Jin Li. 2017. An ensemble random forest algorithm for insurance big data analysis. IEEE Access 5: 16568–75. [Google Scholar] [CrossRef]
  143. Lipton, Zachary C. 2018. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16: 31–57. [Google Scholar] [CrossRef]
  144. Liu, Jenn-Long, and Chien-Liang Chen. 2012. Application of evolutionary data mining algorithms to insurance fraud prediction. Paper presented at the 4th International Conference on Machine Learning and Computing IPCSIT, Hong Kong, China, March 10–11. [Google Scholar]
  145. Liu, Qing, David Pitt, and Xueyuan Wu. 2014. On the prediction of claim duration for income protection insurance policyholders. Annals of Actuarial Science 8: 42–62. [Google Scholar] [CrossRef] [Green Version]
  146. Lopez, Olivier, and Xavier Milhaud. 2021. Individual reserving and nonparametric estimation of claim amounts subject to large reporting delays. Scandinavian Actuarial Journal 2021: 34–53. [Google Scholar] [CrossRef]
  147. Lou, Yin, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. Accurate intelligible models with pairwise interactions. Paper presented at the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, August 11–14. [Google Scholar]
  148. Lundberg, Scott M., and Su-In Lee. 2017. A unified approach to interpreting model predictions. Paper presented at the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, December 4–9. [Google Scholar]
  149. Lundberg, Scott M., Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence 2: 56–67. [Google Scholar] [CrossRef]
  150. Ma, Yu-Luen, Xiaoyu Zhu, Xianbiao Hu, and Yi-Chang Chiu. 2018. The use of context-sensitive insurance telematics data in auto insurance rate making. Transportation Research Part A: Policy and Practice 113: 243–58. [Google Scholar] [CrossRef]
  151. Mascharka, David, Philip Tran, Ryan Soklaski, and Arjun Majumdar. 2018. Transparency by design: Closing the gap between performance and interpretability in visual reasoning. Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 18–23. [Google Scholar]
  152. Matloob, Irum, Shoab Ahmed Khan, and Habib Ur Rahman. 2020. Sequence Mining and Prediction-Based Healthcare Fraud Detection Methodology. IEEE Access 8: 143256–73. [Google Scholar] [CrossRef]
  153. Mayer, Roger C., James H. Davis, and F. David Schoorman. 1995. An integrative model of organizational trust. Academy of Management Review 20: 709–34. [Google Scholar] [CrossRef]
  154. Maynard, Trevor, Luca Baldassarre, Yves-Alexandre de Montjoye, Liz McFall, and María Óskarsdóttir. 2022. AI: Coming of age? Annals of Actuarial Science 16: 1–5. [Google Scholar] [CrossRef]
  155. McFall, Liz, Gert Meyers, and Ine Van Hoyweghen. 2020. The personalisation of insurance: Data, behaviour and innovation. Big Data & Society 7: 2053951720973707. [Google Scholar]
  156. McKnight, D. Harrison, Vivek Choudhury, and Charles Kacmar. 2002. Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research 13: 334–59. [Google Scholar] [CrossRef] [Green Version]
  157. Mehdiyev, Nijat, Constantin Houy, Oliver Gutermuth, Lea Mayer, and Peter Fettke. 2021. Explainable Artificial Intelligence (XAI) Supporting Public Administration Processes–On the Potential of XAI in Tax Audit Processes. Cham: Springer. [Google Scholar]
  158. Meyerson, Debra, Karl E. Weick, and Roderick M. Kramer. 1996. Swift trust and temporary groups. Trust in Organizations: Frontiers of Theory and Research 166: 195. [Google Scholar]
  159. Mizgier, Kamil J., Otto Kocsis, and Stephan M. Wagner. 2018. Zurich Insurance uses data analytics to leverage the BI insurance proposition. Interfaces 48: 94–107. [Google Scholar] [CrossRef]
  160. Mohamadloo, Azam, Ali Ramezankhani, Saeed Zarein-Dolab, Jamshid Salamzadeh, and Fatemeh Mohamadloo. 2017. A systematic review of main factors leading to irrational prescription of medicine. Iranian Journal of Psychiatry and Behavioral Sciences 11: e10242. [Google Scholar] [CrossRef]
  161. Molnar, Christoph. 2019. Interpretable Machine Learning. Morrisville: Lulu Press. [Google Scholar]
  162. Moradi, Milad, and Matthias Samwald. 2021. Post-hoc explanation of black-box classifiers using confident itemsets. Expert Systems with Applications 165: 113941. [Google Scholar] [CrossRef]
  163. Morik, Katharina, Christian Hüppej, and Klaus Unterstein. 2002. End-user access to multiple sources-Incorporating knowledge discovery into knowledge management. Paper presented at the International Conference on Practical Aspects of Knowledge Management, Vienna, Austria, December 2–3. [Google Scholar]
  164. Motoda, Hiroshi, and Huan Liu. 2002. Feature selection, extraction and construction. Communication of IICM (Institute of Information and Computing Machinery, Taiwan) 5: 2. [Google Scholar]
  165. Mueller, Shane T., Robert R. Hoffman, William Clancey, Abigail Emrey, and Gary Klein. 2019. Explanation in Human-AI Systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for Explainable AI. arXiv arXiv:1902.01876. [Google Scholar]
  166. Mullins, Martin, Christopher P. Holland, and Martin Cunneen. 2021. Creating ethics guidelines for artificial intelligence and big data analytics customers: The case of the consumer European insurance market. Patterns 2: 100362. [Google Scholar] [CrossRef]
  167. NallamReddy, Sundari, Samarandra Behera, Sanjeev Karadagi, and A. Desik. 2014. Application of multiple random centroid (MRC) based k-means clustering algorithm in insurance—A review article. Operations Research and Applications: An International Journal 1: 15–21. [Google Scholar]
  168. Naylor, Michael. 2017. Insurance Transformed: Technological Disruption. Berlin: Springer. [Google Scholar]
  169. Neumann, Łukasz, Robert M. Nowak, Rafał Okuniewski, and Paweł Wawrzyński. 2019. Machine Learning-Based Predictions of Customers’ Decisions in Car Insurance. Applied Artificial Intelligence 33: 817–28. [Google Scholar] [CrossRef]
  170. Ngai, Eric W. T., Yong Hu, Yiu Hing Wong, Yijun Chen, and Xin Sun. 2011. The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature. Decision Support Systems 50: 559–69. [Google Scholar] [CrossRef]
  171. Ntoutsi, Eirini, Pavlos Fafalios, Ujwal Gadiraju, Vasileios Iosifidis, Wolfgang Nejdl, Maria-Esther Vidal, Salvatore Ruggieri, Franco Turini, Symeon Papadopoulos, and Emmanouil Krasanakis. 2020. Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10: e1356. [Google Scholar] [CrossRef] [Green Version]
  172. OECD, Organisation for Economic Co-operation and Development. 2020. The Impact of Big Data and Artificial Intelligence (AI) in the Insurance Sector. Paris: OECD. Available online: https://www.oecd.org/finance/Impact-Big-Data-AI-in-the-Insurance-Sector.pdf (accessed on 1 September 2021).
  173. Page, Matthew J., and David Moher. 2017. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: A scoping review. Systematic Reviews 6: 1–14. [Google Scholar] [CrossRef] [PubMed]
  174. Palacio, Sebastian, Adriano Lucieri, Mohsin Munir, Jörn Hees, Sheraz Ahmed, and Andreas Dengel. 2021. XAI Handbook: Towards a Unified Framework for Explainable AI. arXiv arXiv:2105.06677. [Google Scholar]
  175. Paruchuri, Harish. 2020. The Impact of Machine Learning on the Future of Insurance Industry. American Journal of Trade and Policy 7: 85–90. [Google Scholar] [CrossRef]
  176. Pathak, Jagdish, Navneet Vidyarthi, and Scott L. Summers. 2005. A fuzzy-based algorithm for auditors to detect elements of fraud in settled insurance claims. Managerial Auditing Journal 20: 632–44. [Google Scholar] [CrossRef]
  177. Payrovnaziri, Seyedeh Neelufar, Zhaoyi Chen, Pablo Rengifo-Moreno, Tim Miller, Jiang Bian, Jonathan H. Chen, Xiuwen Liu, and Zhe He. 2020. Explainable artificial intelligence models using real-world electronic health record data: A systematic scoping review. Journal of the American Medical Informatics Association 27: 1173–85. [Google Scholar] [CrossRef]
  178. Pieters, Wolter. 2011. Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology 13: 53–64. [Google Scholar] [CrossRef] [Green Version]
  179. Putnam, Vanessa, and Cristina Conati. 2019. Exploring the Need for Explainable Artificial Intelligence (XAI) in Intelligent Tutoring Systems (ITS). Paper presented at the IUI Workshops, Los Angeles, CA, USA, March 16–20. [Google Scholar]
  180. Quan, Zhiyu, and Emiliano A. Valdez. 2018. Predictive analytics of insurance claims using multivariate decision trees. Dependence Modeling 6: 377–407. [Google Scholar] [CrossRef]
  181. Ravi, Kumar, Vadlamani Ravi, and P. Sree Rama Krishna Prasad. 2017. Fuzzy formal concept analysis based opinion mining for CRM in financial services. Applied Soft Computing 60: 786–807. [Google Scholar] [CrossRef]
  182. Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?” Explaining the predictions of any classifier. Paper presented at the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17. [Google Scholar]
  183. Rieder, Gernot, and Judith Simon. 2017. Big data: A new empiricism and its epistemic and socio-political consequences. In Berechenbarkeit der Welt? Berlin: Springer, pp. 85–105. [Google Scholar]
  184. Riikkinen, Mikko, Hannu Saarijärvi, Peter Sarlin, and Ilkka Lähteenmäki. 2018. Using artificial intelligence to create value in insurance. International Journal of Bank Marketing 36: 1145–68. [Google Scholar] [CrossRef]
  185. Robinson, Stephen Cory. 2020. Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society 63: 101421. [Google Scholar] [CrossRef]
  186. Rosenfeld, Avi. 2021. Better metrics for evaluating explainable artificial intelligence. Paper presented at the 20th International Conference on Autonomous Agents and Multiagent Systems, London, UK, May 3–7. [Google Scholar]
  187. Rudin, Cynthia. 2018. Please stop explaining black box models for high stakes decisions. Stat 1050: 26. [Google Scholar]
  188. Sadreddini, Zhaleh, Ilknur Donmez, and Halim Yanikomeroglu. 2021. Cancel-for-Any-Reason Insurance Recommendation Using Customer Transaction-Based Clustering. IEEE Access 9: 39363–74. [Google Scholar] [CrossRef]
  189. Sakthivel, K. M., and C. S. Rajitha. 2017. Artificial intelligence for estimation of future claim frequency in non-life insurance. Global Journal of Pure and Applied Mathematics 13: 1701–10. [Google Scholar]
  190. Samonte, Mary Jane C., Bobby D. Gerardo, Arnel C. Fajardo, and Ruji P. Medina. 2018. ICD-9 tagging of clinical notes using topical word embedding. Paper presented at the 2018 International Conference on Internet and e-Business, Singapore, April 25–27. [Google Scholar]
  191. Sarkar, Abhineet. 2020. Disrupting the Insurance Value Chain. In The AI Book: The Artificial Intelligence Handbook for Investors, Entrepreneurs and FinTech Visionaries. New York: Wiley, pp. 89–91. [Google Scholar]
  192. Sevim, Şerafettin, Birol Yildiz, and Nilüfer Dalkiliç. 2016. Risk Assessment for Accounting Professional Liability Insurance. Sosyoekonomi 24: 93–112. [Google Scholar] [CrossRef] [Green Version]
  193. Shah, Paras, and Allon Guez. 2009. Mortality forecasting using neural networks and an application to cause-specific data for insurance purposes. Journal of Forecasting 28: 535–48. [Google Scholar] [CrossRef]
  194. Shapiro, Arnold F. 2007. An overview of insurance uses of fuzzy logic. In Computational Intelligence in Economics and Finance. Berlin: Springer, pp. 25–61. [Google Scholar]
  195. Sheehan, Barry, Finbarr Murphy, Cian Ryan, Martin Mullins, and Hai Yue Liu. 2017. Semi-autonomous vehicle motor insurance: A Bayesian Network risk transfer approach. Transportation Research Part C: Emerging Technologies 82: 124–37. [Google Scholar] [CrossRef]
  196. Siami, Mohammad, Mohsen Naderpour, and Jie Lu. 2020. A mobile telematics pattern recognition framework for driving behavior extraction. IEEE Transactions on Intelligent Transportation Systems 22: 1459–72. [Google Scholar] [CrossRef]
  197. Siau, Keng, and Weiyu Wang. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal 31: 47–53. [Google Scholar]
  198. Siegel, Magdalena, Constanze Assenmacher, Nathalie Meuwly, and Martina Zemp. 2021. The legal vulnerability model for same-sex parent families: A mixed methods systematic review and theoretical integration. Frontiers in Psychology 12: 683. [Google Scholar] [CrossRef] [PubMed]
  199. Sithic, H. Lookman, and T. Balasubramanian. 2013. Survey of insurance fraud detection using data mining techniques. arXiv arXiv:1309.0806. [Google Scholar]
  200. Smith, Kate A., Robert J. Willis, and Malcolm Brooks. 2000. An analysis of customer retention and insurance claim patterns using data mining: A case study. Journal of the Operational Research Society 51: 532–41. [Google Scholar] [CrossRef]
  201. Smyth, Gordon K., and Bent Jørgensen. 2002. Fitting Tweedie’s compound Poisson model to insurance claims data: Dispersion modelling. ASTIN Bulletin: The Journal of the IAA 32: 143–57. [Google Scholar] [CrossRef] [Green Version]
  202. Sohail, Misbah, Pedro Peres, and Yuhua Li. 2021. Feature importance analysis for customer management of insurance products. Paper presented at the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, July 18–22. [Google Scholar]
  203. Srihari, Sargur. 2020. Explainable Artificial Intelligence: An Overview. Journal of the Washington Academy of Sciences. [Google Scholar]
  204. Stovold, Elizabeth, Deirdre Beecher, Ruth Foxlee, and Anna Noel-Storr. 2014. Study flow diagrams in Cochrane systematic review updates: An adapted PRISMA flow diagram. Systematic Reviews 3: 1–5. [Google Scholar] [CrossRef]
  205. Štrumbelj, Erik, and Igor Kononenko. 2014. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems 41: 647–65. [Google Scholar] [CrossRef]
  206. Sun, Chenfei, Zhongmin Yan, Qingzhong Li, Yongqing Zheng, Xudong Lu, and Lizhen Cui. 2018. Abnormal group-based joint medical fraud detection. IEEE Access 7: 13589–96. [Google Scholar] [CrossRef]
  207. Supraja, K., and S. Jessica Saritha. 2017. Robust fuzzy rule based technique to detect frauds in vehicle insurance. Paper presented at the 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), Chennai, India, August 1–2. [Google Scholar]
  208. Tallant, Jonathan. 2017. Commitment in cases of trust and distrust. Thought 6: 261–67. [Google Scholar] [CrossRef]
  209. Tanninen, Maiju. 2020. Contested technology: Social scientific perspectives of behaviour-based insurance. Big Data & Society 7: 2053951720942536. [Google Scholar]
  210. Tao, Han, Liu Zhixin, and Song Xiaodong. 2012. Insurance fraud identification research based on fuzzy support vector machine with dual membership. Paper presented at the 2012 International Conference on Information Management, Innovation Management and Industrial Engineering, Sanya, China, October 20–21. [Google Scholar]
  211. Taylor, Linnet. 2017. What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society 4: 2053951717736335. [Google Scholar]
  212. Tekaya, Balkiss, Sirine El Feki, Tasnim Tekaya, and Hela Masri. 2020. Recent applications of big data in finance. Paper presented at the 2nd International Conference on Digital Tools & Uses Congress, Virtual Event, October 15–17. [Google Scholar]
  213. Tillmanns, Sebastian, Frenkel Ter Hofstede, Manfred Krafft, and Oliver Goetz. 2017. How to separate the wheat from the chaff: Improved variable selection for new customer acquisition. Journal of Marketing 81: 99–113. [Google Scholar] [CrossRef]
  214. Tonekaboni, Sana, Shalmali Joshi, Melissa D. McCradden, and Anna Goldenberg. 2019. What clinicians want: Contextualizing explainable machine learning for clinical end use. Paper presented at the Machine Learning for Healthcare Conference, Ann Arbor, MI, USA, August 9–10. [Google Scholar]
  215. Toreini, Ehsan, Mhairi Aitken, Kovila Coopamootoo, Karen Elliott, Carlos Gonzalez Zelaya, and Aad Van Moorsel. 2020. The relationship between trust in AI and trustworthy machine learning technologies. Paper presented at the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, January 27–30. [Google Scholar]
  216. Umamaheswari, K., and S. Janakiraman. 2014. Role of data mining in insurance industry. Int J Adv Comput Technol 3: 961–66. [Google Scholar]
  217. Ungur, Cristina. 2017. Socio-economic valences of insurance. Revista Economia Contemporană 2: 112–18. [Google Scholar]
  218. van den Boom, Freyja. 2021. Regulating Telematics Insurance. In Insurance Distribution Directive. Berlin: Springer, pp. 293–325. [Google Scholar]
  219. Vassiljeva, Kristina, Aleksei Tepljakov, Eduard Petlenkov, and Eduard Netšajev. 2017. Computational intelligence approach for estimation of vehicle insurance risk level. Paper presented at the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, May 14–19. [Google Scholar]
  220. Vaziri, Jalil, and Mohammad Ali Beheshtinia. 2016. A holistic fuzzy approach to create competitive advantage via quality management in services industry (case study: Life-insurance services). Management Decision 54: 2035–62. [Google Scholar] [CrossRef]
  221. Verma, Aayushi, Anu Taneja, and Anuja Arora. 2017. Fraud detection and frequent pattern matching in insurance claims using data mining techniques. Paper presented at the 2017 Tenth International Conference on Contemporary Computing (IC3), Noida, India, August 10–12. [Google Scholar]
  222. Viaene, Stijn, Guido Dedene, and Richard A. Derrig. 2005. Auto claim fraud detection using Bayesian learning neural networks. Expert Systems with Applications 29: 653–66. [Google Scholar] [CrossRef]
  223. Viaene, Stijn, Richard A. Derrig, and Guido Dedene. 2004. A case study of applying boosting Naive Bayes to claim fraud diagnosis. IEEE Transactions on Knowledge and Data Engineering 16: 612–20. [Google Scholar] [CrossRef]
  224. Viaene, Stijn, Richard A. Derrig, Bart Baesens, and Guido Dedene. 2002. A comparison of state-of-the-art classification techniques for expert automobile insurance claim fraud detection. Journal of Risk and Insurance 69: 373–421. [Google Scholar] [CrossRef]
  225. Vilone, Giulia, and Luca Longo. 2020. Explainable artificial intelligence: A systematic review. arXiv arXiv:2006.00093. [Google Scholar]
  226. von Eschenbach, Warren J. 2021. Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology 34: 1607–22. [Google Scholar]
  227. Walsh, Nigel, and Mike Taylor. 2020. Cutting to the Chase: Mapping AI to the Real-World Insurance Value Chain. In The AI Book: The Artificial Intelligence Handbook for Investors, Entrepreneurs and FinTech Visionaries. New York: Wiley, pp. 92–97. [Google Scholar]
  228. Wang, Hui Dong. 2020. Research on the Features of Car Insurance Data Based on Machine Learning. Procedia Computer Science 166: 582–87. [Google Scholar] [CrossRef]
  229. Wang, Yibo, and Wei Xu. 2018. Leveraging deep learning with LDA-based text analytics to detect automobile insurance fraud. Decision Support Systems 105: 87–95. [Google Scholar] [CrossRef]
  230. Wei, Cheng, and Li Dan. 2019. Market fluctuation and agricultural insurance forecasting model based on machine learning algorithm of parameter optimization. Journal of Intelligent & Fuzzy Systems 37: 6217–28. [Google Scholar]
  231. Wulf, Alexander J., and Ognyan Seizov. 2022. “Please understand we cannot provide further information”: Evaluating content and transparency of GDPR-mandated AI disclosures. AI & Society, 1–22. [Google Scholar] [CrossRef]
  232. Wüthrich, Mario V. 2018. Machine learning in individual claims reserving. Scandinavian Actuarial Journal 2018: 465–80. [Google Scholar] [CrossRef]
  233. Wüthrich, Mario V. 2020. Bias regularization in neural network models for general insurance pricing. European Actuarial Journal 10: 179–202. [Google Scholar] [CrossRef]
  234. Xiao, Bo, and Izak Benbasat. 2007. E-commerce product recommendation agents: Use, characteristics, and impact. MIS Quarterly 31: 137–209. [Google Scholar] [CrossRef] [Green Version]
  235. Xie, Ning, Gabrielle Ras, Marcel van Gerven, and Derek Doran. 2020. Explainable deep learning: A field guide for the uninitiated. arXiv arXiv:2004.14545. [Google Scholar]
  236. Xu, Wei, Shengnan Wang, Dailing Zhang, and Bo Yang. 2011. Random rough subspace based neural network ensemble for insurance fraud detection. Paper presented at the 2011 Fourth International Joint Conference on Computational Sciences and Optimization, Kunming, China, April 15–19. [Google Scholar]
  237. Yan, Chun, Meixuan Li, Wei Liu, and Man Qi. 2020a. Improved adaptive genetic algorithm for the vehicle Insurance Fraud Identification Model based on a BP Neural Network. Theoretical Computer Science 817: 12–23. [Google Scholar] [CrossRef]
  238. Yan, Chun, Xindong Wang, Xinhong Liu, Wei Liu, and Jiahui Liu. 2020b. Research on the UBI Car Insurance Rate Determination Model Based on the CNN-HVSVM Algorithm. IEEE Access 8: 160762–73. [Google Scholar] [CrossRef]
  239. Yan, Weizhong, and Piero P. Bonissone. 2006. Designing a Neural Network Decision System for Automated Insurance Underwriting. Paper presented at the 2006 IEEE International Joint Conference on Neural Network Proceedings, Vancouver, BC, Canada, July 16–21. [Google Scholar]
  240. Yang, Qiang, Jie Yin, Charles Ling, and Rong Pan. 2006. Extracting actionable knowledge from decision trees. IEEE Transactions on Knowledge and Data Engineering 19: 43–56. [Google Scholar] [CrossRef]
  241. Yang, Yi, Wei Qian, and Hui Zou. 2018. Insurance premium prediction via gradient tree-boosted Tweedie compound Poisson models. Journal of Business & Economic Statistics 36: 456–70. [Google Scholar]
  242. Yeo, Ai Cheo, Kate A. Smith, Robert J. Willis, and Malcolm Brooks. 2002. A mathematical programming approach to optimise insurance premium pricing within a data mining framework. Journal of the Operational Research Society 53: 1197–203. [Google Scholar] [CrossRef]
  243. Yeung, Karen, Andrew Howes, and Ganna Pogrebna. 2019. AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing. In The Oxford Handbook of AI Ethics. Oxford: Oxford University Press. [Google Scholar]
  244. Zahi, Sara, and Boujemâa Achchab. 2019. Clustering of the population benefiting from health insurance using K-means. Paper presented at the 4th International Conference on Smart City Applications, Casablanca, Morocco, October 2–4. [Google Scholar]
  245. Zarifis, Alex, Christopher P. Holland, and Alistair Milne. 2019. Evaluating the impact of AI on insurance: The four emerging AI-and data-driven business models. Emerald Open Research 1: 15. [Google Scholar] [CrossRef]
  246. Zhang, Bo, and Dehua Kong. 2020. Dynamic estimation model of insurance product recommendation based on Naive Bayesian model. Paper presented at the 2020 International Conference on Cyberspace Innovation of Advanced Technologies, Guangzhou, China, December 4–6. [Google Scholar]
Figure 1. Literature Search Process. Backward Searching includes the assessment of the references in each of the 103 relevant articles for additional articles of relevance to the current review. Note: Eling et al. (2021).
Figure 2. Insurance AI Articles Meeting Relevance Threshold (2000–2021) outlines the number of systematically reviewed articles by year according to the inclusion and exclusion criteria outlined in Section 3.1.
Figure 3. The PRISMA Flow Diagram is a recognised standard for systematic review literature search processes (Stovold et al. 2014). * ‘Source’ refers to the article inclusion criteria for this systematic review: journal articles and conference papers/proceedings are included.
Figure 4. IVC Stage and Corresponding XAI Method Employed presents the seven IVC stages assessed in the systematically chosen articles and the XAI method used in their methodology. Support Activities are not included in this paper, as no articles returned in the systematic literature search presented prediction tasks in line with insurance companies’ Support Activities.
Table 1. XAI Variables used during the literature analysis to assess the explainability of AI systems applied within insurance industry practices. See Appendix A for additional discussion of the XAI variables used during analysis.
Dimension | Interpretability Type | Description | References
Intrinsic vs. Post hoc | Intrinsic Interpretability | Describes how a model works; the model is interpretable by itself, with interpretability achieved through imposing constraints on the model. | Lipton (2018); Molnar (2019); Rudin (2018)
Intrinsic vs. Post hoc | Post hoc Interpretability | Analyses what else the (original) model can tell us, necessitating additional models to achieve explainability. The original model’s explainability is analysed after training. | Du et al. (2019); Lipton (2018); Molnar (2019)
Local vs. Global | Local Interpretability | Reveals the impact of the input features on an individual prediction of the model. | Baehrens et al. (2010); Lundberg et al. (2020)
Local vs. Global | Global Interpretability | Local explanations are combined to present the overall AI model’s rules or features which determine its predictive outcome. | Kopitar et al. (2019); Lundberg et al. (2020)
Model-Specific vs. Model-Agnostic | Model-specific Interpretation | Interpretation is limited to specific model classes, as each interpretation method is based on a specific model’s internals. | Molnar (2019)
Model-Specific vs. Model-Agnostic | Model-agnostic Interpretation | Applied to any AI model after the model’s training; analyses relationships between the AI model’s feature inputs and outputs. | Carvalho et al. (2019); Lipton (2018)
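The distinctions in Table 1 can be made concrete with a short sketch. The example below is not drawn from any of the reviewed articles; the synthetic data, feature semantics and use of scikit-learn’s permutation importance are illustrative assumptions. It shows a post hoc, global, model-agnostic method: importances are computed after training, summarise the model as a whole, and require only predictions rather than access to model internals.

```python
# Illustrative sketch of Table 1's taxonomy: permutation feature importance is
# post hoc (applied after training), global (summarises the whole model) and
# model-agnostic (needs only predictions, not model internals).
# The synthetic data below is an assumption for demonstration purposes only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))  # e.g., hypothetical age, vehicle power, region index
# Feature 0 carries most of the signal; feature 2 is pure noise.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in predictive score.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print(ranking)  # the dominant predictor (feature 0) should rank first
```

An intrinsic alternative with the same data would be to fit a constrained model (e.g., a linear regression) and read the explanation directly from its coefficients.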
Table 2. Key Search Terms Interchangeable with (Explainable) Artificial Intelligence in the Literature Search Process.
Artificial Intelligence (AI) | Smart Devices | Analytics | Support Vector Machine (SVM)
Genetic Algorithm | Neural Network (NN) | Computational Intelligence | Machine Learning (ML)
Convolutional Neural Network (CNN) | Artificial Neural Network (ANN) | Explainable Artificial Intelligence (XAI) | Deep Learning
Data Mining | Big Data | Fuzzy Systems | Fuzzy Logic
Swarm Intelligence | Natural Language Processing (NLP) | Image Analysis | Machine Vision
Table 3. The Insurance Value Chain. The stages of the insurance industry’s IVC are adapted from Grize et al. (2020), Eling et al. (2021) and EIOPA (2021).
Value Chain Stage | Main Tasks | Impact of Artificial Intelligence Applications

Marketing
Main tasks: Market and customer research; analysis of target groups; development of pricing strategies; design of advertisement and communication.
Impact of AI:
- Improved prediction of customer lifetime value
- Enhanced customer segmentation for personalised customer outreach and tailored communication strategies
- Advanced insight about preferences in consumer purchasing behaviour for the identification of target product propositions and the generation of new ideas for product innovation
- Churn models to enhance customer retention

Product Development
Main tasks: Configuration of products; verification of legal requirements.
Impact of AI:
- The establishment of add-on services such as early detection of new diseases and their prevention enables the development of new revenue streams in addition to risk coverage
- Entry into new markets and development of ecosystems with business partnerships in artificial intelligence-driven markets (e.g., autonomous driving, real-time health and elderly care with nanobots, natural catastrophe management, smart home ecosystems)
- Development of novel products utilising AI methods (e.g., usage-based, situational, and parametric insurance)

Sales and Distribution
Main tasks: Customer acquisition and consultation; sales conversations; product sale; after-sales services.
Impact of AI:
- Support of human sales agents by offering advanced sales insights (e.g., cross- and up-selling opportunities) through smart data-driven virtual sales assistants (chatbots) for improved customer consultation and tailored product recommendations
- Proactive customer relationship management and improved after-sales services through increased client transparency
- Chatbots for automated product consultation and sale of standardised insurance products
- Customer Relationship Management (CRM) analytics used to inform nudging and cross-selling of related services (“next-best-action”)

Underwriting and Pricing
Main tasks: Product pricing (actuarial methods); application handling; risk assessment; assessment of final contract details.
Impact of AI:
- Automated application handling, underwriting and risk assessment processes enable accurate insurance quotes within minutes
- New data and insights allow the formation of small and homogenous risk pools and a reduction in adverse selection and moral hazard in risk assessment
- Micro-segmentation of insurance customers based on behavioural traits to provide personalised insurance pricing (e.g., dynamic online pricing)

Contract Administration and Customer Services
Main tasks: Change of contract data; customer queries.
Impact of AI:
- Development of chatbots for the automated answering of written and verbal customer queries using Natural Language Processing (NLP)
- Offering advice about health and fitness goals or improved road safety to promote loss prevention
- Proactive customer outreach and regular customer engagement

Claim Management
Main tasks: Claim settlement; investigation of fraud.
Impact of AI:
- Automated claims management leads to decreasing claim settlement life cycles and increased payout accuracy
- Improved fraud detection reduces fraud-related loss positions: anomaly detection, social network analytics and behavioural modelling
- Loss reserving aided by AI estimating the value of losses

Asset and Risk Management
Main tasks: Asset allocation; asset liability management; risk control.
Impact of AI:
- Automated investment research with more accurate and detailed market data enables portfolio management to make better-informed decisions due to new insights and more sophisticated analysis of data
- Automated risk reporting
- Development of robo-advisors for automated asset management
- Automated trading systems improve asset allocation
Table 4. AI Methods and XAI Criteria used for the systematic analysis of the literature.
AI Methods: Bayesian Network; Clustering; Neural Network; Decision Tree; Ensemble; Fuzzy Logic; Instance-based; Regression; Reinforcement Learning; Regularisation; Rule-based; Support Vector Machine
XAI Criteria: Feature Interaction and Importance; Attention Mechanism; (Data) Dimensionality Reduction; Knowledge Distillation & Rule Extraction; Intrinsically Interpretable Models
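Because knowledge distillation and rule extraction is the XAI criterion the review identifies as most prevalent, a minimal sketch may help fix the idea: an opaque “teacher” model is distilled into a small surrogate decision tree, from which explicit if/then rules are read off. The synthetic dataset, model choices and feature names below are illustrative assumptions, not taken from any reviewed article.

```python
# Sketch of the "Knowledge Distillation & Rule Extraction" criterion in Table 4:
# a small decision tree (the "student") is trained to mimic an opaque ensemble
# (the "teacher"), and human-readable rules are extracted from the tree.
# Data and feature names are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1500, n_features=4, random_state=0)

teacher = GradientBoostingClassifier(random_state=0).fit(X, y)

# Distillation: fit the surrogate on the teacher's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, teacher.predict(X))

# Rule extraction: the distilled tree yields explicit if/then rules.
rules = export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)])
print(rules)

# Fidelity: how often the simple surrogate agrees with the teacher.
fidelity = (surrogate.predict(X) == teacher.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The depth limit on the surrogate is the key trade-off: a shallower tree gives shorter, more readable rules at the cost of fidelity to the teacher’s decisions.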
Table 5. AI Methods and Prediction Tasks. Abbreviations in Table 5 are outlined in the Abbreviations section of this paper.
No. | Author(s) | AI Method | Prediction Task(s) | Life/Non-Life | Line of Insurance Business
Marketing
1 | Chang and Lai (2021) | Neural Network | ANNs used to predict the propensity of consumers to purchase an insurance policy | - | -
2 | Desik et al. (2016) | Regression | Develop a predictive modelling solution to aid the identification of the best insurance product group for current insurance product group of customers | - | -
3 | Fang et al. (2016) | Ensemble | Prediction of insurance customer profitability | Life | Health
4 | Larivière and Van den Poel (2005) | Ensemble | Prediction of customer retention and profitability | - | -
5 | Lin et al. (2017) | Ensemble | Classification to enhance the marketing of insurance products | Life | -
6 | Morik et al. (2002) | Rule-based | Extraction of low-level knowledge data to answer high-level questions on customer acquisition, customer up- and cross-selling and customer retention within insurance companies | - | -
Product Development
7 | Alshamsi (2014) | Ensemble | Prediction of automobile insurance policies chosen by customers using Random Forest (RF) | Non-life | Motor
8 | Karamizadeh and Zolfagharifar (2016) | Clustering | K-means used to identify clusters which contribute to the profit and loss of auto insurance companies | Non-life | Motor
9 | Khodairy and Abosamra (2021) | Neural Network | Driving behaviour classification | Non-life | Motor
10 | Shah and Guez (2009) | Neural Network | Calculation of life expectancy (mortality forecasting) based on the individual’s health status | Life | Health
11 | Sheehan et al. (2017) | Bayesian Network | BN risk estimation approach for the emergence of new risk structures, including autonomous vehicles | Non-life | Motor and Product Liability
Sales and Distribution
12 | Desik and Behera (2012) | Decision Tree | Creation of business rules from customer-led data to improve insurer competitiveness | - | -
13 | Gramegna and Giudici (2020) | Ensemble | XGBoost predictive classification algorithm provides Shapley values | Non-life | -
14 | Jeong et al. (2018) | Rule-based | Association between policyholder switching after a claim and the associated change in premium | Non-life | Motor
15 | Tillmanns et al. (2017) | Bayesian Network | Selection of promising prospective insurance customers from a vendor’s address list | - | -
16 | Wang (2020) | Ensemble | Prediction of auto-renewal using RF | Non-life | Motor
17 | Yang et al. (2006) | Ensemble | Ensemble of DTs used to maximise the expected net profit of customers | - | -
18 | Zahi and Achchab (2019) | Clustering | Grouping of health insured population | Life | Health
19 | Zhang and Kong (2020) | Bayesian Network | Estimation of insurance product recommendation | - | -
Underwriting and Pricing
20 | Aggour et al. (2006) | Fuzzy Logic | Encoded the underwriting guidelines to automate the underwriting procedures of long-term care and life insurance policies | Life | Long Term Care
21 | Baecke and Bocca (2017) | Regression | Assess the enhanced accuracy of risk selection predictive models utilising driving behaviour variables in addition to traditional accident risk predictors | Non-life | Motor
22 | Bian et al. (2018) | Ensemble | Ensemble learning-based approach to obtain information on a user’s risk classification which informs the compensation payout | Non-life | Motor
23 | Biddle et al. (2018) | Instance-based | Prediction of the applications of exclusions in life insurance policies when automated underwriting methods are employed | Life | -
24 | Bonissone et al. (2002) | Fuzzy Logic | Automation of underwriting practices | - | -
25 | Boodhun and Jayabalan (2018) | Neural Network | Predict the risk level of life insurance applicants | Life | -
26 | Bove et al. (2021) | Rule-based | Predetermined feature values provided | Non-life | Motor
27 | Carfora et al. (2019) | Clustering | Evaluation of UBI automobile insurance policies | Non-life | Motor
28 | Cheng et al. (2011) | Support Vector Machine | Evaluation of loss risk and development of criteria for optimal insurance deductible decision making | Non-life | Construction
29 | Christmann (2004) | Ensemble | Indirect estimation of the pure premium in motor vehicle insurance | Non-life | Motor
30 | David (2015) | Regression | Use of the GLM to establish policyholders’ pure premium | Non-life | Motor
31 | Denuit and Lang (2004) | Regression | GAMs used for rate-making | Non-life | Motor
32 | Deprez et al. (2017) | Ensemble | Mortality modelling using boosting regression techniques | Life | -
33 | Devriendt et al. (2021) | Regularisation | LASSO penalty development to aid regularisation techniques in ML | - | -
34 | Gan (2013) | Clustering | Selection of representative policies for the assessment of variable annuity policy pricing | Life | -
35 | Gan and Huang (2017) | Clustering | Valuation of variable annuity policies | Life | -
36 | Gan and Valdez (2017) | Reinforcement Learning | Monte Carlo-based modelling for variable annuity portfolios | Life | -
37 | Guelman (2012) | Ensemble | Gradient Boosting Trees used to predict insurance losses | Non-life | Motor
38 | Gweon et al. (2020) | Ensemble | Bias-corrected bagging method used to improve predictive performance of regression trees | Non-life | -
39 | Huang and Meng (2019) | Regression | Risk probability prediction based on telematics driving data | Non-life | Motor
40 | Jain et al. (2019) | Ensemble | Risk assessment of potential policyholders using risk scores within numerous ensembles of AI methods | Life | -
41 | Jiang et al. (2018) | Instance-based | A novel model for analysis of imbalanced datasets in end-to-end insurance processes | Life | -
42 | Joram et al. (2017) | Rule-based | Knowledge-based system to enhance life underwriting processes | Life | -
43 | Kašćelan et al. (2016) | Clustering | Assessment and classification of premiums | Non-life | Motor
44 | Kieu et al. (2018) | Clustering | Deal with inadequately labelled data trajectories with drivers’ identifiers | Non-life | Motor
45 | Kumar et al. (2010) | Support Vector Machine | Prediction of claims which need reworking due to errors | Life | Health
46 | Kwak et al. (2020) | Ensemble | Driver identification using RF | Non-life | Motor
47 | Lin (2009) | Neural Network | Price the correct premium rate for ‘in-between’ risks between predefined tariff rates | Non-life | Property & Casualty
48 | Liu et al. (2014) | Ensemble | Adaboost to predict claim frequency of auto insurance | Non-life | Motor
49 | Neumann et al. (2019) | Decision Tree | Prediction of insurance customers’ decisions following an automobile accident | Non-life | Motor
50 | Sakthivel and Rajitha (2017) | Neural Network | Prediction of an insurance portfolio’s claim frequency for forthcoming years | Non-life | Motor
51 | Samonte et al. (2018) | Neural Network | Automatic multi-class labelling of ICD-9 codes of patient notes | Life | Health
52 | Sevim et al. (2016) | Neural Network | Determination of litigation risks for accounting professional liability insurance | Non-life | Professional Liability
53 | Siami et al. (2020) | Instance-based | Unsupervised pattern recognition framework for mobile telematics data to propose a solution to unlabelled telematics data | Non-life | Motor
54 | Smith et al. (2000) | Neural Network | NNs used to classify policyholders as likely to renew or terminate, to aid the achievement of maximum potential profitability for the insurance company | Non-life | Motor
55 | Wei and Dan (2019) | Support Vector Machine | Stock price prediction | Non-life | Agriculture
56 | Wüthrich (2020) | Neural Network | Optimisation of NN insurance pricing models | Non-life | Motor
57 | Yan and Bonissone (2006) | Neural Network | Classification to enhance NN functionality for automated insurance underwriting | - | -
58 | Yan et al. (2020b) | Rule-based | Rating model for UBI automobile insurance rates | - | -
59 | Yang et al. (2018) | Ensemble | Gradient Boosting Trees used to predict insurance premiums | Non-life | Motor
60 | Yeo et al. (2002) | Clustering | Optimisation of insurance premium pricing | Non-life | Motor
Contract Administration and Customer Services
61 | Ravi et al. (2017) | Fuzzy Logic | Creation of association rules which analyse customer grievances and summarise them | - | -
62 | Sadreddini et al. (2021) | Clustering | Prediction of airline customer clusters and appropriate Cancellation Protection Service insurance fee per customer group | Non-life | Airline
63 | Sohail et al. (2021) | Bayesian Network | The optimal set of hyperparameters for the later used ML model is found using Bayesian optimisation methods | - | -
64 | Vassiljeva et al. (2017) | Neural Network | Automobile insurance customers’ risk estimate using ANN to inform contract development | Non-life | Motor
65 | Vaziri and Beheshtinia (2016) | Fuzzy Logic | Value creation for insurance customers | Life | -
Claim Management
66 | Baudry and Robert (2019) | Ensemble | Estimation of outstanding liabilities on a given policy using an ensemble of regression trees | - | -
67 | Belhadji et al. (2000) | Regression | Calculate the probability of fraud in insurance files | Non-life | Motor
68 | Benedek and László (2019) | Rule-based | Identification of fraud indicators | Non-life | Motor
69 | Bermúdez et al. (2008) | Bayesian Network | Bayesian skewed logit model used to fit an insurance database (binary data) | Non-life | Motor
70 | Cao and Zhang (2019) | Instance-based | SOFM NN used to extract characteristics of medical insurance fraud behaviour | Life | Health
71 | Delong and Wüthrich (2020) | Neural Network | NNs testing of regression models | Non-life | Liability
72 | Duval and Pigeon (2019) | Regression | Assessment of claim frequency | - | -
73 | Dhieb et al. (2019) | Ensemble | XGBoost used to detect automobile insurance fraudulent claims | Non-life | Motor
74 | Frees and Valdez (2008) | Regression | Assessment of claim frequency | Non-life | Motor
75 | Gabrielli (2021) | Neural Network | Estimation of claims reserves for individual reported claims | Non-life | -
76 | Ghani and Kumar (2011) | Support Vector Machine | Error detection in insurance claims | Life | Health
77 | Ghorbani and Farzai (2018) | Clustering | Detection of fraud patterns | Non-life | Motor
78 | Herland et al. (2018) | Ensemble | Medicare provider claims fraud | Life | Health
79 | Johnson and Khoshgoftaar (2019) | Neural Network | Automation of fraud detection using ANN | Life | Health
80 | Kose et al. (2015) | Clustering | Detection of fraudulent claims | Life | Health
81 | Kowshalya and Nandhini (2018) | Rule-based | Fraudulent claim detection | Non-life | Motor
82 | Kyu and Woraratpanya (2020) | Neural Network | CNN used to prevent claims leakage | Non-life | Motor
83 | Lau and Tripathi (2011) | Rule-based | Association Rules’ provision of actionable business insights for insurance claims data | Non-life | Liability
84 | Lee et al. (2020) | Regression | GLM and GAM used in NLP to extract variables from text and use these variables in claims analysis | Non-life | Property & Casualty
85 | Li et al. (2018) | Ensemble | Random Forest for automobile insurance fraud detection | Non-life | Motor
86 | Liu and Chen (2012) | Clustering | Enhance the accuracy of claims fraud prediction | Non-life | Motor
87 | Matloob et al. (2020) | Rule-based | Fraud detection | Life | Health
88 | Pathak et al. (2005) | Fuzzy Logic | To distinguish whether fraudulent actions are involved in insurance claims settlement | - | -
89 | Smyth and Jørgensen (2002) | Regression | GLM to model insurance costs’ dispersion | Non-life | Motor
90 | Sun et al. (2018) | Instance-based | Determination of joint medical fraud through reducing the occurrence of false positives caused by non-fraudulent abnormal behaviour | Life | Health
91 | Supraja and Saritha (2017) | Fuzzy Logic | Utilising fuzzy rule-based techniques to improve fraud detection | Non-life | Motor
92 | Tao et al. (2012) | Fuzzy Logic | DFSVM used to solve the issue of misdiagnosed fraud detection due to the ‘overlap’ problem in insurance fraud samples | Non-life | Motor
93 | Verma et al. (2017) | Clustering | K-means used to increase performance and reduce the complexity of the model | Life | Health
94 | Viaene et al. (2002) | Regression | Fraud detection | Non-life | Motor
95 | Viaene et al. (2004) | Ensemble | Adaboost used in insurance claim fraud detection | Non-life | Motor
96 | Viaene et al. (2005) | Bayesian Network | NN for fraud detection | Non-life | Motor
97 | Wang and Xu (2018) | Neural Network | NN used to detect automobile insurance fraud | Non-life | Motor
98 | Xu et al. (2011) | Ensemble | Random rough subspace method | Non-life | Motor
99 | Yan et al. (2020a) | Ensemble | Optimisation of BP Neural Network by combining it with an improved genetic algorithm | Non-life | Motor
Asset and Risk Management
100 | Cheng et al. (2020) | Neural Network | Optimal reinsurance and dividend strategies for insurance companies | - | -
101 | Ibiwoye et al. (2012) | Neural Network | Insurer insolvency prediction | - | -
102 | Jin et al. (2021) | Neural Network | Determine the optimal insurance, reinsurance, and investment strategies of an insurance company | - | -
103 | Kiermayer and Weiß (2021) | Clustering | Grouping of insurance contracts | Life | Life
Table 6. The XAI methods and their approach in the articles are outlined, with the additional XAI assessment of (i) intrinsic or post hoc, (ii) local or global, and (iii) model-specific or model-agnostic interpretability methods. Abbreviations in Table 6 are outlined in the Abbreviations section of this paper.
No. | Author(s) | XAI Category | XAI Approach | Intrinsic/Post hoc | Local/Global | Model-Specific/Agnostic
Marketing
1 | Chang and Lai (2021) | Feature Interaction and Importance | Dataset is pre-processed with three feature selection methods: (1) Neighbourhood Component Analysis (NCA), (2) Sequential Forward Selection (SFS) and (3) Sequential Backward Selection (SBS) | Intrinsic | Global | Model-agnostic
2 | Desik et al. (2016) | Dimensionality Reduction | Identification of relevant data clusters to inform model development for differing product groups | Post hoc | Local | Model-agnostic
3 | Fang et al. (2016) | Intrinsically Interpretable Model | RF regression | Intrinsic | Global | Model-specific
4 | Larivière and Van den Poel (2005) | Feature Interaction and Importance | Exploration of three major predictor categories as explanatory variables | Intrinsic | Local | Model-specific
5 | Lin et al. (2017) | Intrinsically Interpretable Model | RF provides automatic feature selection which aids interpretability of the model | Intrinsic | Global | Model-specific
6 | Morik et al. (2002) | Knowledge Distillation and Rule Extraction | Bridge the gap between databases and their users by implementing KDD methods | Intrinsic | Local | Model-specific
Product Development
7 | Alshamsi (2014) | Feature Interaction and Importance | Classification of data into different sets according to different policy options available | Intrinsic | Local | Model-specific
8 | Karamizadeh and Zolfagharifar (2016) | Intrinsically Interpretable Model | Pattern recognition with clustering algorithms to find missing data to minimise insurance losses | Intrinsic | Global | Model-specific
9 | Khodairy and Abosamra (2021) | Feature Interaction and Importance | Extraction of relevant features | Post hoc | Local | Model-agnostic
10 | Shah and Guez (2009) | Feature Interaction and Importance | NN proposed as a better predictor of life expectancy than the Lee–Carter model due to the ability to adapt for each sex and each cause of life expectancy through a learning algorithm using historical data | Post hoc | Local | Model-agnostic
11 | Sheehan et al. (2017) | Knowledge Distillation and Rule Extraction | Determination of causal and probabilistic dependencies through subjective assumptions (of the data) | Intrinsic | Local | Model-specific
Sales and Distribution

| # | Authors | XAI Category | Description | Intrinsic/Post Hoc | Local/Global | Model-Specific/Agnostic |
|---|---------|--------------|-------------|--------------------|--------------|-------------------------|
| 12 | Desik and Behera (2012) | Feature Interaction and Importance | CHAID used to create groups and gain an understanding of their impact on the dependent variable | Intrinsic | Local | Model-specific |
| 13 | Gramegna and Giudici (2020) | Intrinsically Interpretable Model | Similarity clustering of the returned Shapley values to analyse customers’ insurance buying behaviour | Intrinsic | Global | Model-specific |
| 14 | Jeong et al. (2018) | Knowledge Distillation and Rule Extraction | Association rule learning to identify relationships among variables | Intrinsic | Global | Model-specific |
| 15 | Tillmanns et al. (2017) | Feature Interaction and Importance | PCA is used to reduce the dimensionality of the features and reduce the chance of overfitting | Post hoc | Local | Model-agnostic |
| 16 | Wang (2020) | Dimensionality Reduction | Removal of dataset features which have no bearing on the customers’ likelihood to renew | Intrinsic | Local | Model-specific |
| 17 | Yang et al. (2006) | Knowledge Distillation and Rule Extraction | Development of a postprocessing step to extract actionable knowledge from DTs, yielding actions associated with attribute-value changes | Intrinsic | Local | Model-specific |
| 18 | Zahi and Achchab (2019) | Intrinsically Interpretable Model | Clustering the insured population using k-means | Intrinsic | Global | Model-specific |
| 19 | Zhang and Kong (2020) | Attention Mechanism | Parameter optimisation for NB model | Post hoc | Local | Model-agnostic |
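Several of the sales-stage entries above (e.g., no. 18, Zahi and Achchab 2019) rely on k-means clustering of the insured population as an intrinsically interpretable segmentation. The sketch below illustrates the general technique on synthetic data; the feature set (age, annual premium, claim count) and all values are hypothetical and are not drawn from the reviewed study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical policyholder features: [age, annual premium, claim count],
# sampled from two synthetic customer profiles.
young = rng.normal([30.0, 400.0, 0.2], [5.0, 50.0, 0.1], size=(100, 3))
older = rng.normal([55.0, 900.0, 1.5], [6.0, 80.0, 0.5], size=(100, 3))
X = np.vstack([young, older])

# Standardise so each feature contributes comparably to the distance metric.
scaler = StandardScaler().fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(X))

# Centroids mapped back to original units describe each customer segment.
centroids = scaler.inverse_transform(km.cluster_centers_)
print(np.round(centroids, 1))
```

Because each segment is summarised by a centroid expressed in the original feature units, the grouping itself serves as the explanation, which is what places such clustering approaches in the intrinsically interpretable category.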
Underwriting and Pricing

| # | Authors | XAI Category | Description | Intrinsic/Post Hoc | Local/Global | Model-Specific/Agnostic |
|---|---------|--------------|-------------|--------------------|--------------|-------------------------|
| 20 | Aggour et al. (2006) | Feature Interaction and Importance | Use of NLP and explanation of the interaction of different model features which alters the model | Intrinsic | Global | Model-specific |
| 21 | Baecke and Bocca (2017) | Feature Interaction and Importance | Stepwise feature selection | Intrinsic | Global | Model-specific |
| 22 | Bian et al. (2018) | Dimensionality Reduction | Identified the five most relevant features to inform driving behaviour | Intrinsic | Local | Model-specific |
| 23 | Biddle et al. (2018) | Feature Interaction and Importance | Recursive Feature Elimination to provide feature rankings for feature subsets | Post hoc | Global | Model-agnostic |
| 24 | Bonissone et al. (2002) | Knowledge Distillation and Rule Extraction | Fuzzy rule-based decision systems used to encode risk classification of complex underwriting tasks | Intrinsic | Local | Model-specific |
| 25 | Boodhun and Jayabalan (2018) | Dimensionality Reduction | Correlation-Based Feature Selection and PCA | Intrinsic | Local | Model-specific |
| 26 | Bove et al. (2021) | Feature Interaction and Importance | SHAP is used to provide the contribution of each feature value to the prediction in comparison to the average prediction | Post hoc | Local | Model-agnostic |
| 27 | Carfora et al. (2019) | Intrinsically Interpretable Model | Identification of driver behaviour using ML algorithms | Intrinsic | Global | Model-specific |
| 28 | Cheng et al. (2011) | Knowledge Distillation and Rule Extraction | Development of loss prediction model using the ESIM | Intrinsic | Global | Model-specific |
| 29 | Christmann (2004) | Dimensionality Reduction | Exploitation of knowledge from certain characteristics of datasets to estimate conditional probabilities and conditional expectations given the knowledge of the variable representing the pure premium | Intrinsic | Local | Model-specific |
| 30 | David (2015) | Dimensionality Reduction | Use of policyholders’ relevant characteristics to determine the pure premium | Intrinsic | Local | Model-specific |
| 31 | Denuit and Lang (2004) | Knowledge Distillation and Rule Extraction | Bayesian GAMs developed using MCMC inference | Intrinsic | Local | Model-specific |
| 32 | Deprez et al. (2017) | Attention Mechanism | Back-testing parametric mortality models | Post hoc | Global | Model-agnostic |
| 33 | Devriendt et al. (2021) | Knowledge Distillation and Rule Extraction | Development of the SMuRF algorithm to allow for Sparse Multi-type Regularised Feature modelling | Intrinsic | Global | Model-specific |
| 34 | Gan (2013) | Knowledge Distillation and Rule Extraction | Gaussian Process Regression employed to value variable annuity policies | Intrinsic | Local | Model-specific |
| 35 | Gan and Huang (2017) | Knowledge Distillation and Rule Extraction | Kriging Regression method employed to value variable annuity policies | Intrinsic | Local | Model-specific |
| 36 | Gan and Valdez (2017) | Knowledge Distillation and Rule Extraction | Generalised Beta of the Second Kind (GB2) Regression method employed to value variable annuity policies | Intrinsic | Local | Model-specific |
| 37 | Guelman (2012) | Intrinsically Interpretable Model | Interpretable results given by the simple linear model through showcasing the relative influence of the input variables and their partial dependence plots | Intrinsic | Global | Model-specific |
| 38 | Gweon et al. (2020) | Knowledge Distillation and Rule Extraction | Bagging creates several regression trees, each fitted to a bootstrap sample of the training data, with predictions made by averaging the outcomes of the bootstrapped trees | Post hoc | Local | Model-agnostic |
| 39 | Huang and Meng (2019) | Dimensionality Reduction | Variables are binned to discretise continuous variables and construct tariff classes with significant predictive effects to improve interpretability of UBI predictive models | Post hoc | Intrinsic | Model-agnostic |
| 40 | Jain et al. (2019) | Feature Interaction and Importance | Using WEKA software, the dimensional feature set was reduced for use | Intrinsic | Global | Model-specific |
| 41 | Jiang et al. (2018) | Feature Interaction and Importance | Imbalanced data trend forecasting using learning descriptions and sequences and adjusting the CPLF | Post hoc | Local | Model-specific |
| 42 | Kašćelan et al. (2016) | Knowledge Distillation and Rule Extraction | Containment of the sets of rules with similar purpose and/or structure which defines the knowledge bases | Intrinsic | Global | Model-agnostic |
| 43 | Kieu et al. (2018) | Intrinsically Interpretable Model | Clustering provides homogeneity within classifications of risk and heterogeneity between risk classifications | Intrinsic | Global | Model-specific |
| 44 | Kumar et al. (2010) | Intrinsically Interpretable Model | Gradient Boosting DTs used to classify (unlabelled) trajectories | Post hoc | Local | Model-specific |
| 45 | Kwak et al. (2020) | Dimensionality Reduction | Frequency-based feature selection technique | Intrinsic | Global | Model-specific |
| 46 | Lin (2009) | Dimensionality Reduction | Reduction in feature values’ noise (normalisation of sensing data) | Intrinsic | Local | Model-specific |
| 47 | Liu et al. (2014) | Attention Mechanism | Use of premium rate determination rules as network inputs in the BPNN to create the ‘missing rates’ of in-between risks | Post hoc | Local | Model-specific |
| 48 | Neumann et al. (2019) | Dimensionality Reduction | Reduction of the claim frequency prediction problem to a multi-class problem | Post hoc | Global | Model-specific |
| 49 | Sakthivel and Rajitha (2017) | Knowledge Distillation and Rule Extraction | Combination of simple linear weights and residual components to replicate non-linear effects, resembling a fully parametrised PPCI-like (Payments per Claim Incurred) model | Intrinsic | Local | Model-specific |
| 50 | Samonte et al. (2018) | Knowledge Distillation and Rule Extraction | Built a predictive model using previous Bayesian credibility inputs to predict the value of another field | Post hoc | Local | Model-specific |
| 51 | Carfora et al. (2019) | Attention Mechanism | NLP used for document classification of medical record notes, with RNNs employed to encode vectors in a Bi-LSTM model | Intrinsic | Local | Model-specific |
| 52 | Sevim et al. (2016) | Attention Mechanism | Model is developed from the relationships between the variables gained from previous data and then tested | Post hoc | Local | Model-specific |
| 53 | Siami et al. (2020) | Feature Interaction and Importance | SOM to reduce data complexity | Intrinsic | Global | Model-specific |
| 54 | Smith et al. (2000) | Feature Interaction and Importance | Assessed the variables of relevance to the current task by rejecting variables with χ² < 3.92 | Post hoc | Local | Model-agnostic |
| 55 | Wei and Dan (2019) | Attention Mechanism | Parameter optimisation for SVM model | Intrinsic | Global | Model-specific |
| 56 | Wüthrich (2020) | Feature Interaction and Importance | Enhancement of neural network efficiency through feature selection | Intrinsic | Global | Model-specific |
| 57 | Yan and Bonissone (2006) | Knowledge Distillation and Rule Extraction | Comparison of four NN models for automated insurance underwriting | Post hoc | Local | Model-specific |
| 58 | Yan et al. (2020b) | Knowledge Distillation and Rule Extraction | Combination of the CNN and HVSVM models to create a model with higher discrimination accuracy than either model presents alone | Post hoc | Global | Model-specific |
| 59 | Yang et al. (2018) | Intrinsically Interpretable Model | TDBoost package provides interpretable results | Intrinsic | Local | Model-specific |
| 60 | Yeo et al. (2002) | Feature Interaction and Importance | Grouping of important clusters to input in NN model for insurance retention rates and price sensitivity prediction | Intrinsic | Local | Model-specific |
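Entries such as no. 26 (Bove et al. 2021) apply post hoc, model-agnostic attribution (SHAP) to pricing models. As a rough stand-in for that family of methods, the sketch below applies scikit-learn's permutation importance to a black-box premium model; the features, coefficients and data-generating process are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(18, 80, n)
mileage = rng.uniform(2_000, 40_000, n)
noise_feat = rng.normal(size=n)  # irrelevant by construction
# Synthetic premium: depends on age and mileage only.
premium = 200 + 3 * age + 0.01 * mileage + rng.normal(0, 10, n)

X = np.column_stack([age, mileage, noise_feat])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, premium)

# Post hoc, model-agnostic attribution: shuffle each feature and measure
# how much the model's score degrades.
result = permutation_importance(model, X, premium, n_repeats=5, random_state=0)
for name, imp in zip(["age", "mileage", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The genuinely predictive features (age, mileage) should receive materially larger importance scores than the noise feature, mirroring how SHAP-style attributions surface the drivers of a pricing model without inspecting its internals.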
Contract Administration and Customer Services

| # | Authors | XAI Category | Description | Intrinsic/Post Hoc | Local/Global | Model-Specific/Agnostic |
|---|---------|--------------|-------------|--------------------|--------------|-------------------------|
| 61 | Ravi et al. (2017) | Knowledge Distillation and Rule Extraction | Treatment of each variable as having a certain degree of membership with certain rules to categorise complaints | Intrinsic | Global | Model-specific |
| 62 | Sadreddini et al. (2021) | Feature Interaction and Importance | Cancellation Protection Service insurance fee is calculated based on the relevant weight of each cluster | Intrinsic | Global | Model-specific |
| 63 | Sohail et al. (2021) | Feature Interaction and Importance | SHAP is used in evaluating the feature importance in predicting the output level | Post hoc | Global | Model-agnostic |
| 64 | Vassiljeva et al. (2017) | Dimensionality Reduction | Only relevant parameters are considered in the ANN model | Intrinsic | Local | Model-specific |
| 65 | Vaziri and Beheshtinia (2016) | Knowledge Distillation and Rule Extraction | Development of an integrated ML model to carry out the prediction task | Intrinsic | Local | Model-specific |
Claim Management

| # | Authors | XAI Category | Description | Intrinsic/Post Hoc | Local/Global | Model-Specific/Agnostic |
|---|---------|--------------|-------------|--------------------|--------------|-------------------------|
| 66 | Baudry and Robert (2019) | Feature Interaction and Importance | Definition of policy subsets within the synthetic dataset | Post hoc | Local | Model-agnostic |
| 67 | Belhadji et al. (2000) | Feature Interaction and Importance | Regression used to isolate significant contributory variables in fraud | Intrinsic | Local | Model-specific |
| 68 | Benedek and László (2019) | Intrinsically Interpretable Model | Comparison of various intrinsic AI methods for fraud indicator identification | Intrinsic | Local | Model-specific |
| 69 | Bermúdez et al. (2008) | Knowledge Distillation and Rule Extraction | Use of a skewed logit model to more accurately classify fraudulent insurance claims | Post hoc | Global | Model-agnostic |
| 70 | Cao and Zhang (2019) | Dimensionality Reduction | PCA used to reduce the data’s dimensionality | Post hoc | Local | Model-agnostic |
| 71 | Dhieb et al. (2019) | Dimensionality Reduction | Extraction of relevant features | Post hoc | Global | Model-specific |
| 72 | Delong and Wüthrich (2020) | Attention Mechanism | Description of the joint development process of individual claim payments and claims incurred | Intrinsic | Global | Model-agnostic |
| 73 | Duval and Pigeon (2019) | Knowledge Distillation and Rule Extraction | Combination of many regression trees in order to optimise the objective function and then learn a prediction function | Intrinsic | Global | Model-agnostic |
| 74 | Frees and Valdez (2008) | Knowledge Distillation and Rule Extraction | Comparison of various fitted models which summarise all the covariates’ effects on claim frequency | Intrinsic | Global | Model-specific |
| 75 | Gabrielli (2021) | Knowledge Distillation and Rule Extraction | NN proposed which is modelled through learning from one probability/regression function to the other via parameter sharing | Post hoc | Local | Model-specific |
| 76 | Ghani and Kumar (2011) | Knowledge Distillation and Rule Extraction | Development of an interactive prioritisation component to aid auditors in their fraud detection | Post hoc | Local | Model-specific |
| 77 | Ghorbani and Farzai (2018) | Knowledge Distillation and Rule Extraction | Definition of rules based on each cluster to determine future fraud propensity (using WEKA) | Intrinsic | Global | Model-specific |
| 78 | Herland et al. (2018) | Feature Interaction and Importance | Removed unnecessary data features | Intrinsic | Local | Model-specific |
| 79 | Johnson and Khoshgoftaar (2019) | Feature Interaction and Importance | Class imbalance within the dataset is rectified using one-hot encoding | Post hoc | Local | Model-specific |
| 80 | Kose et al. (2015) | Knowledge Distillation and Rule Extraction | Development of an electronic fraud and abuse detection model | Post hoc | Global | Model-agnostic |
| 81 | Kowshalya and Nandhini (2018) | Feature Interaction and Importance | Classifier construction using NB | Intrinsic | Local | Model-specific |
| 82 | Kyu and Woraratpanya (2020) | Feature Interaction and Importance | Fine-tuning of the dataset | Post hoc | Local | Model-specific |
| 83 | Lau and Tripathi (2011) | Knowledge Distillation and Rule Extraction | Development of an Association Rules function for Workers’ Compensation claim data analysis | Intrinsic | Global | Model-specific |
| 84 | Lee et al. (2020) | Knowledge Distillation and Rule Extraction | Transformation of words to vectors, where each vector represents some feature of the word | Intrinsic | Local | Model-specific |
| 85 | Li et al. (2018) | Dimensionality Reduction | PCA used to transform data at each node to another space when computing the best split at that node | Intrinsic | Global | Model-specific |
| 86 | Matloob et al. (2020) | Knowledge Distillation and Rule Extraction | Sequence generation to inform a predictive model for fraudulent behaviour | Intrinsic | Local | Model-specific |
| 87 | Liu and Chen (2012) | Knowledge Distillation and Rule Extraction | Two evolutionary data mining (EvoDM) algorithms developed to improve insurance fraud prediction: (1) GAK-means (K-means combined with a genetic algorithm) and (2) MPSO-K-means (K-means combined with Momentum-type Particle Swarm Optimisation (MPSO)) | Post hoc | Local | Model-specific |
| 88 | Pathak et al. (2005) | Knowledge Distillation and Rule Extraction | Mimics the expertise of human insurance auditors in real-life insurance claim settlement scenarios | Post hoc | Local | Model-agnostic |
| 89 | Smyth and Jørgensen (2002) | Intrinsically Interpretable Model | Modelling of insurance costs’ dispersion and mean | Intrinsic | Local | Model-specific |
| 90 | Sun et al. (2018) | Feature Interaction and Importance | Formulation of compact clusters of individual behaviour in a large dataset | Intrinsic | Local | Model-specific |
| 91 | Supraja and Saritha (2017) | Feature Interaction and Importance | K-means clustering used to prepare the dataset prior to FL technique application | Intrinsic | Local | Model-specific |
| 92 | Tao et al. (2012) | Feature Interaction and Importance | Avoidance of the curse of dimensionality through kernel function use in the SVM’s calculation | Post hoc | Global | Model-agnostic |
| 93 | Verma et al. (2017) | Knowledge Distillation and Rule Extraction | Association rule learning to identify frequently occurring fraud patterns for varying groups | Intrinsic | Local | Model-specific |
| 94 | Viaene et al. (2002) | Dimensionality Reduction | Removal of fraud indicators with 10 or fewer instances to aid model convergence and stability during estimation | Intrinsic | Global | Model-specific |
| 95 | Viaene et al. (2004) | Attention Mechanism | Computation of the relative importance (weight) of individual components of suspicious claim occurrences | Intrinsic | Global | Model-specific |
| 96 | Viaene et al. (2005) | Feature Interaction and Importance | Determination of relevant inputs for the NN model | Post hoc | Local | Model-agnostic |
| 97 | Wang and Xu (2018) | Dimensionality Reduction | Extraction of text features hidden in the text descriptions of claims (Latent Dirichlet Allocation-based deep learning for text analytics) | Post hoc | Local | Model-agnostic |
| 98 | Xu et al. (2011) | Knowledge Distillation and Rule Extraction | Random rough subspace method incorporated into NN to detect insurance fraud | Intrinsic | Global | Model-specific |
| 99 | Yan et al. (2020a) | Dimensionality Reduction | PCA used to reduce the dimensions of the multi-dimensional feature matrix, where the reduced data retains the main information of the original data | Intrinsic | Global | Model-specific |
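PCA-based dimensionality reduction recurs throughout the claim-management entries (e.g., nos. 70, 85 and 99). The minimal sketch below applies it to a synthetic claims feature matrix with an invented latent-factor structure; none of the dimensions correspond to real claims data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Synthetic claims features: 10 observed columns driven by 3 latent factors.
latent = rng.normal(size=(300, 3))
mixing = rng.normal(size=(3, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(300, 10))

# Keep the smallest number of components explaining at least 95% of variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # far fewer than 10 columns
```

The retained components compress the feature matrix while preserving most of its variance, which is the sense in which the studies above use PCA to make downstream fraud models more tractable.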
Asset and Risk Management

| # | Authors | XAI Category | Description | Intrinsic/Post Hoc | Local/Global | Model-Specific/Agnostic |
|---|---------|--------------|-------------|--------------------|--------------|-------------------------|
| 100 | Cheng et al. (2020) | Knowledge Distillation and Rule Extraction | Development of a deep learning Markov chain approximation method (MCAM) | Intrinsic | Global | Model-specific |
| 101 | Ibiwoye et al. (2012) | Attention Mechanism | Tuning of the NN | Intrinsic | Local | Model-specific |
| 102 | Jin et al. (2021) | Knowledge Distillation and Rule Extraction | MCAM to estimate the initial guess of the NN | Intrinsic | Global | Model-specific |
| 103 | Kiermayer and Weiß (2021) | Knowledge Distillation and Rule Extraction | Approximation of representative portfolio groups to then nest in NN | Post hoc | Local | Model-specific |
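Across the full table, Knowledge Distillation and Rule Extraction is the most frequent XAI category. The following generic, hypothetical sketch (not the method of any single reviewed study) distils a black-box classifier into a shallow surrogate decision tree whose paths can be read as if-then rules, with fidelity measuring how closely the surrogate tracks the black box.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
# Hypothetical two-feature claims data, e.g. [claim size, reporting delay],
# with a synthetic "suspicious" flag as the target.
X = rng.uniform(0, 1, size=(1000, 2))
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.5)).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Distillation step: train a shallow tree on the black box's own predictions,
# not on the original labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["claim_size", "delay"]))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
```

The printed tree is the extracted rule set; reporting its fidelity alongside the rules makes explicit how much of the black box's behaviour the simplified model actually captures.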
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Owens, E.; Sheehan, B.; Mullins, M.; Cunneen, M.; Ressel, J.; Castignani, G. Explainable Artificial Intelligence (XAI) in Insurance. Risks 2022, 10, 230. https://doi.org/10.3390/risks10120230