Article
Peer-Review Record

Counterfactual Models for Fair and Adequate Explanations

Mach. Learn. Knowl. Extr. 2022, 4(2), 316-349; https://doi.org/10.3390/make4020014
by Nicholas Asher 1,*, Lucas De Lara 2, Soumya Paul 3 and Chris Russell 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 February 2022 / Revised: 13 March 2022 / Accepted: 15 March 2022 / Published: 31 March 2022
(This article belongs to the Special Issue Selected Papers from CD-MAKE 2021 and ARES 2021)

Round 1

Reviewer 1 Report

The submission introduces a novel theoretical foundation for characterizing sufficiently good, or fair and adequate, explanations in terms of counterfactuals, establishing a correspondence between logical and mathematical formulations. The work is very well presented, has outstanding scientific soundness, and fits well with the journal's topics. The theoretical basis has the potential to inspire multiple lines of future work, which leads this reviewer to suggest its acceptance after the following minor revisions are addressed.

  • The paper is an extended version of a manuscript submitted, revised, and accepted at the joint EU conference CD-MAKE 2021, held together with ARES 2021, which attests to the good quality of the work. Despite this, and in order to clearly separate the work already published in the conference proceedings (https://doi.org/10.1007/978-3-030-84060-0_6) from this submission, it is highly suggested to include a brief explanation of how this paper extends that previous work.
  • The Introduction should explicitly enumerate how the submission contributes to the state-of-the-art. Additionally, it is recommended to present the organization of the rest of the document.
  • The article presents theoretical work of great interest but provides little detail about its applicability to computational problems. A brief analysis of operational aspects, such as possible computational cost, dependence on data sets or cases, and resilience/vulnerability to adversarial attacks, would therefore be interesting.

Author Response

Many thanks to Reviewer 1 for their comments. The new version of the paper makes explicit the new research contributions that go beyond the CD-MAKE 2021 conference paper. The organization of the whole paper is now presented explicitly in the introduction, and the paper now points out the improvements over the state of the art both in the introduction and throughout the other sections.

Reviewer 2 Report

Summary: In this paper, the authors study how to specify good explanations for machine learning models by analyzing the epistemically accessible and pragmatic aspects of explanations. The authors characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and the agent that requested the explanation. They provide a correspondence between logical and mathematical formulations of counterfactuals, examine the partiality of counterfactual explanations that can hide biases, and then provide formal results about the algorithmic complexity of fair and adequate explanations.

 

Comments:

1. The authors list quite a lot of definitions and propositions in the paper to describe complete counterfactual explanations, and these do specify what good counterfactual explanations are. Yet from a technical perspective (I work on machine learning), I cannot get a sense of how these definitions are special (or superior to others) in the context of machine learning. It feels as though all these descriptions establish a logically sound metric for counterfactual explanations that may or may not have anything to do with machine learning. The authors might want to modify the paper to show more clearly the relationship between these newly proposed metrics and machine learning models.

2. Another big issue with this paper is that, although it is about explanations for machine learning models, there are no numerical experiments evaluating the actual effectiveness or usefulness of the proposed new standards of explanations. At the least, the authors could use real-world experiments to show readers what "good", "adequate", and "fair" explanations are and what bad ones look like.

 

Typo: sentence too long at Line 316

Author Response

Many thanks to Reviewer 2 for their comments. The first author has gone through the paper and corrected many typos and incomplete sentences. I cannot find a sentence at Line 316, but I hope the problem is fixed in any case. The paper now addresses how this approach improves on other frameworks for explaining ML algorithms. In particular, the new section on transport-based counterfactual models describes six important advantages for explaining ML. I have also made explicit that this is a theoretical paper. Though my colleagues in Serrurier et al., 2021, show some of the empirical consequences of transport-based views, a full set of experiments would extend the paper too far from its original purpose, which was to set out as precisely as possible a logical framework for counterfactual explanations that can link directly to different statistical methods for testing classifiers and other ML algorithms.

Round 2

Reviewer 2 Report

Summary: In this paper, the authors study how to specify good explanations for machine learning models by analyzing the epistemically accessible and pragmatic aspects of explanations. The authors characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and the agent that requested the explanation. They provide a correspondence between logical and mathematical formulations of counterfactuals, examine the partiality of counterfactual explanations that can hide biases, and then provide formal results about the algorithmic complexity of fair and adequate explanations.

The authors' response has addressed my previous concerns. Although I feel that more experiments would strengthen the work, it is, as the authors note, a theoretical paper. I would therefore like to see the paper accepted.
