Article
Peer-Review Record

Managing Uncertainty in AI-Enabled Decision Making and Achieving Sustainability

Sustainability 2020, 12(21), 8758; https://doi.org/10.3390/su12218758
by Junyi Wu * and Shari Shang
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 August 2020 / Revised: 14 October 2020 / Accepted: 18 October 2020 / Published: 22 October 2020

Round 1

Reviewer 1 Report

The resubmitted article, "Managing uncertainty in AI-enabled decision making and achieving sustainability," incorporates the previous comments and recommendations.
In Chapter 6, Conclusion, the last paragraph contains a misspelling of the word "deployed" ("delpoyed").

Author Response

Thank you for your suggestion. We have corrected this misspelled word.

Reviewer 2 Report

A very interesting paper on managing AI uncertainty. Figure 1 in particular explains how to avoid or manage uncertainties. The identification of three dimensions of uncertainty (informational, environmental, and intentional uncertainty) is a unique contribution of this study. However, I have some concerns about the methodology of the research and the proposed solutions to each type of uncertainty.

 

Comments

  1. Do you have any evidence that these solutions to the three types of uncertainties really help AI and organizations? I mean either experiments or data from the real world instead of just from the literature.
  2. What is the relation between AI uncertainties and decision-makers’ uncertainties?
  3. Another issue is AI biases. There are input biases (from the sources of information used to develop the AI), processing biases (human or system), and output biases. So, what do you think about the relationships between the uncertainties and the biases in AI systems?
  4. I have questions regarding the methodology of this paper, among other things. I assume that it is a review paper. Could you please explain the sources you used to collect those papers, for example, the Web of Science, Scopus, or others?
  5. Another important issue is that the authors used case-based approaches from previous research instead of scientometric techniques. I wonder whether the paper is a review paper or a case-based paper. I observed that the authors used quotes from different papers and provided something like co-occurrences of the three types of mechanisms. Could you please explain the reason for using the case-based method instead of any bibliometric or scientometric techniques?
  6. Another important question is about the generalizability of the mechanism.

 

 

Author Response

Reviewer Comments and Author Responses

  1. Do you have any evidence that these solutions to the three types of uncertainties really help AI and organizations? I mean either experiments or data from the real world instead of just from the literature.

Author response:
Thanks for your feedback. This research classifies uncertainties into three types: informational uncertainty, environmental uncertainty, and intentional uncertainty.

We suggest establishing norms, collecting available information, and extrapolating potential information as solutions for managing informational uncertainty. In decision making, the primary and essential task is to collect the available data effectively and make it as complete as possible. Humans are good at collecting data to judge and reduce informational uncertainty: the more information, the more helpful it is to decision making. The next question is how to process information effectively. A simple solution is to establish a data-collection norm; by following a norm established in advance, we do our best to obtain the available information. In addition, we can use extrapolation tools to infer and extract potential information. These principles also apply to organizational decisions.

In computer science, as the amount of data computers can process and the speed at which they process it gradually increase, business intelligence (BI) and big data have become useful applications that help an organization apply plentiful information to reduce uncertainty.
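As an illustrative sketch only (not part of the study), the two informational-uncertainty tactics above, a data-collection norm plus extrapolation of potential information, could be expressed as follows in Python; the field names and the linear extrapolation rule are assumptions for the example:

```python
# Sketch: manage informational uncertainty by (1) enforcing a data-collection
# norm and (2) extrapolating potential information from what was collected.
# The field names and the extrapolation rule are illustrative assumptions.

REQUIRED_FIELDS = {"timestamp", "sensor_id", "value"}  # the collection "norm"

def collect(record: dict) -> dict:
    """Accept a record only if it satisfies the collection norm."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record violates collection norm, missing: {missing}")
    return record

def extrapolate(values: list[float]) -> float:
    """Infer the next value from the trend of the last two observations."""
    if len(values) < 2:
        raise ValueError("need at least two observations to extrapolate")
    return values[-1] + (values[-1] - values[-2])

history = [10.0, 12.0, 14.0]
print(extrapolate(history))  # the linear trend suggests 16.0
```

In this sketch, the norm rejects incomplete records before they enter the decision process, and the extrapolation step fills in potential information that was never directly collected.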

We suggest exploring and updating, soliciting advice, and improving readiness as solutions for managing environmental uncertainty. Humans manage information based on accumulated knowledge. Knowledge comprises the interpretative frames used to construct an inferential understanding of the world. As the world continuously changes, the interpretative frames need to be adjusted accordingly.

To be clear, the environment (and hence the interpretative frames) changes across domain boundaries and even across time. A rule that applies to the transportation field may not apply to the health care field, and a rule from the last century may not apply to this century. Knowledge needs to be continuously revised and verified. No matter how much information a computer can process, it still requires considerable preparatory work (such as consulting expert opinions), continuous re-evaluation and adjustment, and even some buffer alternatives (Tarafdar et al., 2019).

We suggest establishing public criteria, allowing individual preferences, and allowing randomness as solutions for managing intentional uncertainty. Beyond accessing information and extending knowledge, we believe that one of the world's most precious treasures is diversity. It is a potential concern that most computer applications rely on a single standard and tend to generate a single solution. We understand that diversity is quite difficult to maintain, and some people may even regard diversity as a source of uncertainty. Here, we try to balance this dilemma in several ways.

On the one hand, we need to respond to the consensus of the public majority; on the other hand, we need to respect individual preferences within a reasonable range. We also keep some randomness to provide flexibility. All the weights need to be decided in advance; the AI can then make decisions according to the weights, thereby reducing intentional uncertainty.
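A minimal sketch of this weighting idea, assuming hypothetical weights, options, and scores (an illustration, not the mechanism proposed in the paper):

```python
import random

def decide(options, public_score, preference_score,
           w_public=0.6, w_pref=0.3, w_rand=0.1, seed=None):
    """Rank options by a weighted blend of public consensus, individual
    preference, and a small random term; the weights are fixed in advance."""
    rng = random.Random(seed)  # seeded so a run can be reproduced
    def score(opt):
        return (w_public * public_score[opt]
                + w_pref * preference_score[opt]
                + w_rand * rng.random())
    return max(options, key=score)

options = ["route_a", "route_b"]
public = {"route_a": 0.9, "route_b": 0.4}   # hypothetical majority consensus
prefer = {"route_a": 0.2, "route_b": 0.8}   # hypothetical individual preference
print(decide(options, public, prefer, seed=42))  # -> route_a
```

Here the public criterion dominates, the individual preference is respected within its weight, and the small random term preserves flexibility without overturning a clear consensus (0.60 vs. at most 0.58 in this toy data).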

 

  2. What is the relation between AI uncertainties and decision-makers’ uncertainties?

Author response:
Thank you for your comments. The decision-making process has three main stages, i.e., defining problems, collecting data, and deciding among alternatives (Lundberg, 1962; Mintzberg et al., 1976). Uncertainty can arise in each of the three stages: (a) inadequate understanding in the problem-definition stage, (b) incomplete information in the data-collection stage, and (c) undifferentiated criteria in the alternative-deciding stage (Lipshitz & Strauss, 1997).

In the field of organizational management, environmental scanning and information retrieval may help to grasp and respond to variation in the environment. In computer science, however, it is believed that information may lead to so-called GIGO ("garbage in, garbage out") if an appropriate framework for interpreting the received information is lacking. In other words, AI cannot make decisions on its own.

Although computers' strength lies in the rapid calculation of large amounts of data, they still need some support from humans, such as assistance in defining problems and preference criteria for selecting options. In other words, the uncertainties humans encounter will still appear in the AI-enabled decision-making process. Taking autonomous vehicles as an example, all three of the above uncertainties can occur in AI-enabled decision making. For instance, an autonomous vehicle may not be able to identify creatures on the road if no descriptive data about them are stored in its database.
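The autonomous-vehicle example can be sketched as a lookup with a cautious fallback; the object labels and actions below are hypothetical:

```python
# Sketch: an AI-enabled decision falls back to a safe action when its
# database holds no descriptive data for an observed object.
# The object labels and the actions are illustrative assumptions.

KNOWN_OBJECTS = {
    "pedestrian": "stop",
    "deer": "slow_down",
    "plastic_bag": "proceed",
}

def react(observed_label: str) -> str:
    """Return the stored action, or a cautious default under
    informational uncertainty (no descriptive data available)."""
    return KNOWN_OBJECTS.get(observed_label, "slow_down_and_alert_human")

print(react("pedestrian"))   # known object: stored action applies
print(react("armadillo"))    # unknown object: cautious fallback
```

The fallback branch is where the human support described above enters: the system defers rather than deciding on an object it has no descriptive data for.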

 

  3. Another issue is AI biases. There are input biases (from the sources of information used to develop the AI), processing biases (human or system), and output biases. So, what do you think about the relationships between the uncertainties and the biases in AI systems?

Author response:
Thank you for the reminder. There are many studies about uncertainty and many studies about bias; however, few studies discuss both. We need a clear definition before discussing the relationship between uncertainties and biases in AI applications. According to Tversky and Kahneman (1974), uncertainty is a characteristic of a phenomenon that is inaccessible, unpredictable, and unmeasurable. Bias, on the other hand, is more like the result of a phenomenon: a systematic misunderstanding, overestimate, or underestimate.

Whether results are certain and predictable will affect trust in emerging technological applications, and biased results can lead to wrong results, which cannot be trusted. Hence, the method we use to address environmental uncertainty, i.e., continuous exploring and updating, can also be used to address bias problems.
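As a hedged illustration of "continuous exploring and updating" (not the authors' implementation), a simple exponentially weighted update gradually corrects an initially biased estimate; the learning rate and values are assumptions:

```python
def update(estimate: float, observation: float, rate: float = 0.2) -> float:
    """Exponentially weighted update: pull the estimate toward each new
    observation, so an initial systematic bias decays over time."""
    return estimate + rate * (observation - estimate)

estimate = 0.0          # deliberately biased starting estimate
true_value = 10.0
for _ in range(50):     # continuous exploring and updating
    estimate = update(estimate, true_value)
print(round(estimate, 3))  # the bias has decayed toward the true value
```

The residual bias shrinks by a factor of (1 - rate) on each update, which is the sense in which continuous re-evaluation can reduce a systematic over- or underestimate.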

 

  4. I have questions regarding the methodology of this paper, among other things. I assume that it is a review paper. Could you please explain the sources you used to collect those papers, for example, the Web of Science, Scopus, or others?

Author response:
Thank you for your feedback. The research team continuously follows news related to AI development, especially AI applications. In addition, the literature cited and used in the research method comes mainly from the Web of Science, retrieved with the following keywords: “artificial intelligence,” “decision making,” and “uncertainty.”

 

  5. Another important issue is that the authors used case-based approaches from previous research instead of scientometric techniques. I wonder whether the paper is a review paper or a case-based paper. I observed that the authors used quotes from different papers and provided something like co-occurrences of the three types of mechanisms. Could you please explain the reason for using the case-based method instead of any bibliometric or scientometric techniques?

Author response:
Thank you for your comments. There are two main reasons why this research uses the case-based method instead of quantitative methods. First, we discuss AI-enabled decision-making applications in terms of AI projects such as driverless cars and unmanned stores. The development of these applications is still at a preliminary stage, so it is relatively hard to collect a large amount of data to support our arguments. Second, the research topic is the uncertainty of decision making; as an inherent characteristic of decision making, uncertainty is hard to measure and evaluate. With these considerations, we decided to use a case-based method to conduct this research.

 

  6. Another important question is about the generalizability of the mechanism.

Author response:
Thank you for your suggestion. This research derives the mechanism by collecting and synthesizing nine papers. Because the nine papers differ in decision task, case quantity, and main learning paradigm, they help us examine 14 real cases and generalize their practical approaches. Because the mechanism is generalized from various research backgrounds (different contexts), it helps to clarify the development dynamics in this novel field and yields some exploratory findings, which are among the most important contributions of this research. However, decision making varies across domains; it is worthwhile for future research to accumulate more facts and evidence to verify our findings.

 

Reference:

  • Lipshitz, R., & Strauss, O. (1997). Coping with uncertainty: A naturalistic decision-making analysis. Organizational Behavior and Human Decision Processes, 69(2), 149-163.
  • Lundberg, C. C. (1962). Administrative decisions: A scheme for analysis. Academy of Management Journal, 5(2), 165-178.
  • Mintzberg, H., Raisinghani, D., & Theoret, A. (1976). The structure of "unstructured" decision processes. Administrative Science Quarterly, 246-275.
  • Tarafdar, M., Beath, C. M., & Ross, J. W. (2019). Using AI to enhance business operations. MIT Sloan Management Review, 11.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The submitted manuscript conducted a content analysis to understand how uncertainty can be managed in AI-enabled decision making and designed a management mechanism for addressing uncertainty. It is well written from the perspective of English expression. However, the manuscript does not point out and address the research issues, and it does not seem to provide significant scientific contributions. My comments on the manuscript are summarized as follows:
1. “1. Introduction” is insufficient to establish a research question. At the end of this section, the authors pose one pending question, “How can uncertainty be managed in AI-enabled decision making?” I do not think this point can be taken as the research question for the manuscript. In addition, moral decisions are hard to make not only for AI applications but also for human beings. The example of the “trolley problem” is not very suitable.
2. At the beginning of “4. Managing uncertainty in AI-enabled decisions”, the authors conclude that “According to the results of the content analysis of eight cases, most AI applications focus on solving informational uncertainty, and environmental and intentional uncertainty remains unsolved.” How can the authors establish that the eight cases in this manuscript are enough to examine uncertainty management in AI-enabled decision making?
3. It would be better for the authors to clarify the contributions to the field first. In the manuscript, the authors do not state clearly what the advantages of the proposed management mechanism for AI-enabled decision making are, nor do they explain how to design a management mechanism that addresses different types of uncertainty based on methods that mainly focus on solving informational uncertainty. Comparisons to existing models and methods should also be carried out.
4. From “5. Conclusion”, I still do not know the scientific contributions of the manuscript. Please explain the real contributions of your manuscript and how they answer the pending question “How can uncertainty be managed in AI-enabled decision making?”.
5. Many references are somewhat out of date.

Reviewer 2 Report

Research methods should be provided in your manuscript.

Reviewer 3 Report

The article is very interesting, especially at the present time, when risk management is the basis of the integration of management systems (so-called HLS). Assessing the degree of uncertainty associated with the use of AI in the implementation of Industry 4.0 in industrial enterprises is an important factor supporting management decision making. The literature review is comprehensive.
However, the analysis performed in Table 1 (based on the Appendix) requires a description of the degree of uncertainty in the analysis of the 8 articles and the classification of their outputs into 3 groups (sources of uncertainty). A suitable way of evaluating the opinions of the individual assessors (the authors), e.g., the AHP method, is lacking.
It would be appropriate to supplement the article with the application of the authors' opinions to their own analysis and to describe possible future developments in the study of this area.
 