Article
Peer-Review Record

Fairness and Explanation in AI-Informed Decision Making

Mach. Learn. Knowl. Extr. 2022, 4(2), 556-579; https://doi.org/10.3390/make4020026
by Alessa Angerschmid 1, Jianlong Zhou 2,3,*, Kevin Theuermann 4, Fang Chen 3 and Andreas Holzinger 1,2,4,5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 14 May 2022 / Revised: 10 June 2022 / Accepted: 14 June 2022 / Published: 16 June 2022
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

Round 1

Reviewer 1 Report

Dear Authors

Overall, the article is about trust in AI and its transparency and fairness. This is a very important topic that requires in-depth research, and discussions on it have been going on for a long time. There is no doubt that the application of artificial intelligence requires regulation, such as that adopted by the EU. In 2020, the "White Paper on Artificial Intelligence" was published; it should be referred to in the introduction of this article. It is also important to consider how prepared people are to consciously use artificial intelligence algorithms dedicated to decision making. How do people perceive artificial intelligence? Do they think it can be helpful in everyday life? What functions could it fulfill? Here you can refer to the article https://doi.org/10.2478/mspe-2022-0014

The methods section should definitely be expanded, and the research sample on which the tests with the fictitious scenarios were carried out should be described. How was this research sample selected, and how was the research conducted?

It is also worth expanding on the topic of gender discrimination in decision making by artificial intelligence. Future research could explore how the problem of discrimination affects trust in AI.

The same content is repeated in several sections of the article. It is worth improving the article in this regard by removing the unnecessary repetitions.

Good luck

Reviewer

Author Response

Question: There is no doubt that the application of artificial intelligence requires regulation, such as that adopted by the EU. In 2020, the "White Paper on Artificial Intelligence" was published; it should be referred to in the introduction of this article.

Answer: Thanks for the reviewer's insightful comments. The EU white paper has been added to the introduction section.

Question: It is also important to consider how prepared people are to consciously use artificial intelligence algorithms dedicated to decision making. How do people perceive artificial intelligence? Do they think it can be helpful in everyday life? What functions could it fulfill? Here you can refer to the article https://doi.org/10.2478/mspe-2022-0014

Answer: The authors agree that these aspects should be considered. We cite the suggested reference in the introduction of the revised version to demonstrate the importance of considering human perception of AI technologies in decision making. This also aligns with the research on the perception of fairness investigated in this paper.

Question: The methods section should definitely be expanded, and the research sample on which the tests with the fictitious scenarios were carried out should be described. How was this research sample selected, and how was the research conducted?

Answer: Thanks for the insightful comments! We updated the methods section by merging the subsections "Task Design" and "Experiment Setup", and added more details on how the experiment was conducted; these changes are highlighted in the revised version.

Question: It is also worth expanding on the topic of gender discrimination in decision making by artificial intelligence. Future research could explore how the problem of discrimination affects trust in AI.

Answer: The authors agree that gender discrimination in decision making by AI is important. However, it is not the focus of this paper; we could investigate this interesting topic in future research work.

Question: The same content is repeated in several sections of the article. It is worth improving the article in this regard by removing the unnecessary repetitions.

Answer: Thanks for pointing this out! We proofread the manuscript and removed the repetitions.

Our major revisions are highlighted in red.

Reviewer 2 Report

This is a paper on a very interesting topic. Through a user study, it examines the effect of fairness and explanation on the perception of fairness and user trust in AI-informed decision making. In addition, the paper considers these two characteristics at the same time.
The paper addresses a very important topic and presents various results and outcomes collected from the two user studies.
However, the paper is not ready to be published yet. There are section headings without any text under them. The paper overall is difficult to follow because it is not well organized. Some paragraphs are very long and lack structure; even the first paragraph of the introduction is very long. The figures take more space than needed.
Also, the introduction could be clearer about the user studies and how they were conducted. The figures seem to be bigger than needed, or spread out.

A minor detail is that the alignment of the figure titles needs to be improved.
Overall, I would like to re-read this paper once it is in better shape.

Author Response

Question: There are section headings without any text under them. The paper overall is difficult to follow because it is not well organized. Some paragraphs are very long and lack structure; even the first paragraph of the introduction is very long. The figures take more space than needed. Also, the introduction could be clearer about the user studies and how they were conducted. The figures seem to be bigger than needed, or spread out. A minor detail is that the alignment of the figure titles needs to be improved.

Answer: Thanks for the kind comments! We restructured the introduction section into different subsections. We also proofread the manuscript and rephrased long paragraphs.

Figures have been rearranged with appropriate sizes and locations to make full use of the space. The corresponding captions were also updated to reflect their contents.

A short statement on the user study has been included in the later part of the introduction section, following the reviewer's comments.

Our major revisions are highlighted in red.

Reviewer 3 Report

In this paper, the authors investigated the effects of introduced fairness and explanation on the perception of fairness and user trust in AI-informed decision making. In addition, the authors' study found that AI explanations increased users' perception of fairness.

To improve this work, the authors need to address the following issues:

1. The writing in this paper is quite poor. The paper needs a thorough proofreading, and most of the sentences need to be rephrased.
2. The figures need to be improved. For instance, in Figures 3 to 18 the fonts are not clear or visible; they need to be enhanced.
3. The performance evaluation in this paper, in Figure 3 and the other figures, mainly focuses on the performance of the proposed scheme; it neither compares with nor discusses the performance of relevant work. Therefore, the evaluation part needs to be improved.

Author Response

Question: The writing in this paper is quite poor. The paper needs a thorough proofreading, and most of the sentences need to be rephrased.

Answer: Thanks for the kind comments! We have done a thorough proofreading to fix any potential writing issues.

Question: The figures need to be improved. For instance, in Figures 3 to 18 the fonts are not clear or visible; they need to be enhanced.

Answer: Thanks for pointing this out! We increased the font sizes in these figures.

Question: The performance evaluation in this paper, in Figure 3 and the other figures, mainly focuses on the performance of the proposed scheme; it neither compares with nor discusses the performance of relevant work. Therefore, the evaluation part needs to be improved.

Answer: Thanks for the insightful comment! We agree, and we have added a comparison of our work with previous studies in the discussion section.

Our major revisions are highlighted in red.

Reviewer 4 Report

"Fairness and Explanation in AI-Informed Decision Making" is an investigation of the efficacy of different types of explanation as means of in building human trust in the legitimacy of AI decision making tools. It is explicitly about the human perception of fairness, not an investigation of fairness itself in the output. The concept of fairness is essential to any model of good decision making that involves consequences for human beings. For AI-based systems to be accepted in playing a role in making such decisions, human trust that a system which most people will be able to neither understand the internal workings of nor being able to relate to directly must be built.

Fairness is a property of decisions that lack bias, that is, where irrelevant factors have no influence on the algorithm's arriving at the decision. Gender, for example, ought not play a role in the price of a good or service offered to a customer; to charge men more than women for the same sandwich would be unfair. Trust is the human belief in the likelihood of a system's accuracy in making decisions. The perception of fairness by a human evaluator of a system is an integral element in the development of trust in the system: the more the evaluator perceives the system's decisions to be free of bias, the more trust the evaluator will have.
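
To make this narrow notion of bias concrete, the sketch below checks a decision rule for demographic parity with respect to a protected attribute. It is a minimal illustration in Python under assumed toy data; the `demographic_parity_difference` helper is hypothetical and is not taken from the paper under review.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in favourable-decision rates between groups.

    In the narrow sense above, a decision rule is fairer when the rate
    of favourable outcomes does not depend on a protected attribute
    such as gender. A gap near 0 is the ideal.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Toy data: 1 = favourable decision (e.g., discount granted).
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
gender = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(demographic_parity_difference(decisions, gender))  # 0.5, a large gap
```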

One route to developing trust in any decision-making apparatus, whether AI- or human-based, is the production of explanations. If one knows how the decider decided, one will form a perception of the fairness of the decider, which will engender or undermine trust in it. But there are different types of explanation, and the authors consider two. One is a top-down approach wherein the human is allowed to peek inside the black box of the algorithm by being provided with an outline of the important elements used in making the decision; feature-importance-based explanations thereby provide one means of understanding how the decision was made. The other is a bottom-up, case-based approach wherein the explanation is based on prior decisions made by the algorithm; by being provided with examples of the decisions the algorithm has made, a perception of fairness in the specific will inform belief concerning the general ability of the system to avoid bias. The question to be investigated is which of these approaches is more effective in building human trust. It is an interesting question whether explanations based on general or on specific aspects are more effective in creating trust.
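
To illustrate the contrast between the two explanation styles, the following sketch pairs a feature-importance ("top-down") view with a nearest-prior-cases ("bottom-up") view using scikit-learn; the synthetic data and random-forest model are illustrative assumptions, not the experimental setup used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.neighbors import NearestNeighbors

# Stand-in decision model trained on synthetic "past decisions".
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Top-down: which inputs drove the model's decisions overall?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("feature importances:", imp.importances_mean.round(3))

# Bottom-up: which previously decided cases most resemble this one?
nn = NearestNeighbors(n_neighbors=3).fit(X)
_, idx = nn.kneighbors(X[:1])
print("similar past cases:", idx[0], "their outcomes:", y[idx[0]])
```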

The authors note that an additional independent variable is the decision context; that is, whether the decision concerns pricing in the marketplace, judicial rulings, or health-care diagnoses will play an operative role in the amount of trust a particular explanation creates. They therefore chose two different scenarios, one in health-care decision making and one in medical treatment.

In setting out the research question, the sources and other relevant studies one would expect are duly mentioned and discussed. The question itself is clear, and the methodology is standard and as well executed as possible given the constraints of research during the global pandemic. The data are well presented and well explained.

The results are interesting. In general, where explanations were effective in increasing trust, there was no difference between the top-down and bottom-up approaches, but their effectiveness did differ slightly across the categories of decision context. Employing a given explanation type will therefore require understanding the nature of the decision being made.

This is a strong paper that is both relevant to the special issue to which it was submitted and professional-quality research. Its results will be of interest to the discourse community, as it makes a contribution to an ongoing discussion within that community. I support its publication.

Author Response

The authors would like to thank the reviewer for the insightful comments and summary. These comments are very helpful for our future work on the investigation of fairness and explanation in AI-informed decision making.

Round 2

Reviewer 1 Report

Dear Authors

Thank you for introducing corrections and additions to the article. In my opinion, the article can be published.

I wish you scientific success in further research on artificial intelligence.

Reviewer
