**Figure 5.** Concordance analysis for the term 'Explainability'.

#### **5. Discussion**

The findings highlight critical tenets of the concept of explainability in AI. From the results, it is evident that specific keywords were featured more prominently than others. Notably, words such as 'use', 'explanation', and 'model', as highlighted in Figure 2, confirm their widespread usage. These findings are key in addressing the shortcomings of the concept of explainability reported by Soni et al. [5]. Similar results are observed in Figure 3, which identifies notable words such as 'use', 'model', 'learn', and 'data'. The analysis reveals that the AI communities approach the concept of explainability differently, but with some general resemblance. The term is sometimes used in evaluating the mechanism of machine learning systems, such as understanding how the system works, and at other times in relation to particular inputs, such as determining how a given input was mapped to an output. From the analysis, certain key observations were made.
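The kind of keyword-prominence analysis described above can be sketched in a few lines of Python. The tokenizer, stop-word list, and sample corpus below are illustrative assumptions only, not the Voyant pipeline used in this study:

```python
from collections import Counter
import re

# A deliberately tiny stop-word list for illustration.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "for", "from", "we"}

def keyword_frequencies(corpus, top_n=5):
    """Tokenize a corpus, drop stop words, and return the most frequent terms."""
    tokens = re.findall(r"[a-z]+", corpus.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return counts.most_common(top_n)

corpus = ("the model gives an explanation of the data; "
          "we use the model to learn from data and use the explanation")
print(keyword_frequencies(corpus, top_n=3))
```

Terms such as 'model', 'explanation', and 'data' surface here for the same reason they dominate the word clouds: frequency after stop-word removal.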

#### *5.1. Opaque Systems*

Mainly, this refers to a system where the mapping of inputs to outputs is not visible to the user. It can be perceived as a mechanism that only produces predictions from an input, without stating why or how the predictions are made. Opaque systems exist, for example, when closed-source AI systems are licensed by an organization and the licensor prefers to keep their workings private. Additionally, black-box methodologies are categorized as opaque because they use algorithms that do not give insight into the actual reasoning behind the system's mapping of inputs to outputs [21].

#### *5.2. Interpretable Systems*

This refers to AI systems where users can see, understand, and study how mathematical concepts are used to map inputs to outputs. Related issues thus include transparency and understanding. Regression models are considered interpretable: for instance, covariate weights can be compared to determine the significance of each feature to the mapping. However, the functioning of deep neural networks may not be interpretable, especially given that input features are mostly transformed by automatically learned non-linear models [21].
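The interpretability of a regression model can be illustrated with a minimal sketch: a linear model fitted by ordinary least squares exposes one weight per covariate that a user can inspect directly. The data and feature names below are hypothetical:

```python
import numpy as np

# Synthetic data generated from known weights, y = 2.0*x1 + 0.5*x2.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
y = X @ np.array([2.0, 0.5])

# Ordinary least squares recovers the weights, which are directly readable.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, w in zip(["x1", "x2"], weights):
    print(f"{name}: weight {w:+.2f}")
```

Each printed weight states how strongly that covariate contributes to the output, which is exactly the kind of insight a deep network's learned non-linear features do not expose.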

#### *5.3. Comprehensible Systems*

Comprehensible AI models emit symbols along with their outputs. These symbols are mostly words and allow the user to relate input properties to the outputs. The user can thus compile and understand the symbols by relying on personal reasoning and knowledge about them. Comprehensibility therefore becomes a graded notion, with the extent of clarity being the difficulty or ease of use. The form of knowledge required on the user's side relates to the cognitive intuition concerning how the outputs, inputs, and symbols relate to each other [21]. Interpretable and comprehensible systems are improvements over opaque systems. The concepts of interpretation and comprehension are separate: interpretation mostly relies on the transparency of system models, whereas comprehension requires emitting symbols that allow a user to reason and think [21]. Most present-day AI systems can produce accurate output but are also highly sophisticated, if not outright opaque, making their workings incomprehensible and difficult to interpret. As part of the efforts to renew user trust in AI systems, there are calls to increase system explainability.
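A comprehensible system in this sense can be sketched as a predictor that emits word symbols alongside its output, letting the user relate input properties to the decision. The loan-screening scenario, feature names, and thresholds below are purely illustrative assumptions:

```python
def comprehensible_predict(features):
    """Return a prediction together with word symbols tying inputs to the output.

    A toy loan-screening rule; the thresholds are illustrative only.
    """
    symbols = []
    if features["income"] >= 50_000:
        symbols.append("sufficient income")
    if features["debt_ratio"] <= 0.4:
        symbols.append("manageable debt")
    decision = "approve" if len(symbols) == 2 else "review"
    return decision, symbols

decision, why = comprehensible_predict({"income": 62_000, "debt_ratio": 0.3})
print(decision, why)  # the symbols let a user reason about the decision
```

The emitted words are not the mechanism itself; they are cues the user interprets with personal knowledge, which is what distinguishes comprehensibility from interpretability.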

There is an increasing need to enhance user confidence in AI systems [25]. User confidence and trust are cited as the primary reasons for pursuing explainable AI. System users seek explanations for various reasons: to manage social interactions, assign responsibility, and persuade others. A critical aspect of explainable AI is that it creates an opportunity for personalized human–computer interaction. Instead of opaque decision-making techniques that humans cannot understand or interpret, explainability ensures that the customer journey through machine learning and AI systems is modeled in a way that mimics human interactions. Machine learning systems therefore need to explain why and how individual recommendations or decisions are made [26]. The implication is that consumers' activity on AI systems will be supported so that better, more accurate decisions can be made faster. Thus, organizations will be able to leverage customer value. When giving advice, recommendations, or even descriptive aspects, the justifications and reasons behind the recommended action are important. It is not enough to predict or suggest an outcome as the best or preferred action without showing the connections between the data and the factors involved in arriving at the decision [26].

*Future Internet* **2020**, *12*, x FOR PEER REVIEW 13 of 15

McNaught and Lam [22] state that most people perceive a doctor as 'a kind of black box' who transforms symptoms of illnesses and related test outcomes into the best diagnosis for treatment. Doctors deliver diagnostic recommendations to patients without explaining why and how they arrived at such decisions. Mostly, doctors use high-level indicators or symptoms (which, in the context of AI systems, denote system symbols). However, when handling a patient, the doctor should be like a comprehensible model. When interacting with medical staff and other physicians, the doctor may be like an interpretable model. Other doctors will interpret the technical analysis as an analyst would an ML system, to ensure that the decisions arrived at are backed by a reasonable assessment of the evidence provided.

Thus, XAI ensures that the information provided is accurate and personalized, so as to engage targeted users in the most effective manner [26]. In this way, XAI will increase the offer's relevance, thus enhancing user engagement and interest. Additionally, XAI will surface the factors that drive predicted outcomes, allowing real-time adjustment of key business aspects to optimize gains and corporate outcomes. Transparency and reasoning will contribute to a decrease in abandoned products and an increase in order value, and thus to higher revenue and conversion rates. Therefore, this research proposes implementing XAI frameworks that possess two features: interpretability and comprehensibility. However, responsible XAI is more than just the ML features: user-related aspects that should also be considered are trust, fairness, confidence, ethics, and safety, as highlighted by So [27]. Moreover, the actual meaning of XAI depends on the perspective of the user, as highlighted by So [27] in Figure 6. As illustrated in Figure 6, XAI utilizes customer data to support decision-making, with the impact witnessed in improved business outcomes.

**Figure 6.** An analogy showing the differences between present-day AI and XAI models.

#### **6. Conclusions**

The study's main purpose was to lay the foundation for a universal definition of the term 'explainability'. The analyzed data from the word cloud plots revealed that the term 'explainability' is mainly associated with words such as model, explanation, and use. These were the most prominent words exhibited in the corpus generated from the word cloud. In the corpus from word cloud 1, they include emotion, design decision, data, control, image, variance, interpret, decision, show, result, estimate, predict, method, train, study, use, learn, model, explanation, and system. In the corpus from word cloud 2, they include model, use, learn, data, method, predict, behavior, task, perform, base, show, infer, image, algorithm, propose, optimal, object, general, explain, and network. After the Voyant analysis was conducted, the most prominent words that appeared in word cloud 1 included control, data, decision, design, and emotion, while in word cloud 2, they included algorithm, base, behavior, data, and explain. When the words obtained from the Voyant analysis were combined, they yielded specific meanings for the word 'explainability'. The main definitions obtained from the combination of the most frequent words include 'an algorithm that explains data behavior', 'an algorithm behavior that is based on data explanation', 'a design of data control that enhances decision making', and 'designing and controlling data in such a way that enhances emotion and decisions'.

The application of AI in e-commerce stands to expand in the future, as businesses increasingly appreciate its role in shaping consumer demand. The rapid development of technology and increased access to the internet present e-commerce businesses with an opportunity to expand their various platforms. Notably, the influence of AI in e-commerce spills over into customer retention and satisfaction. Customers are at the center of the changes brought by the adoption of AI in e-commerce. Hence, e-commerce can further develop contact with customers and establish more developed customer relationship management systems.

The researcher of this study has made an effort to provide a critical outline of the key tenets of AI and its role in e-commerce, as well as a comprehensive insight into the role of AI in addressing the needs of consumers in the e-commerce industry. Even though the study has attempted to provide a universal meaning of 'explainability', the actual impact of AI on consumers' decisions is not yet clear, considering the notion of the 'black box': if the decisions arrived at cannot be explained and the reasons behind such actions given, it will be difficult for people to trust AI systems. Therefore, there is a need for future studies to further examine the need for explainable AI systems in e-commerce and to find solutions to the 'black box' issue.

For future research, this study may serve as a 'template' for the definition of an explainable system characterized by three aspects: opaque systems, where users have no access to insights into the underlying algorithmic mechanism; interpretable systems, where users have access to the mathematical algorithmic mechanism; and comprehensible systems, where users have access to symbols that enhance their decision making.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
