Article
Peer-Review Record

Regularized Chained Deep Neural Network Classifier for Multiple Annotators

Appl. Sci. 2021, 11(12), 5409; https://doi.org/10.3390/app11125409
by Julián Gil-González 1,*,†, Andrés Valencia-Duque 1,*,†, Andrés Álvarez-Meza 2, Álvaro Orozco-Gutiérrez 1 and Andrea García-Moreno 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 19 April 2021 / Revised: 19 May 2021 / Accepted: 4 June 2021 / Published: 10 June 2021
(This article belongs to the Topic Applied Computer Vision and Pattern Recognition)

Round 1

Reviewer 1 Report

The paper proposes a deep network architecture for the task of classification with multiple annotators.

The paper is well written and organized; the experimentation is thorough and the results compelling.

The evaluation is not completely sound: results from multiple datasets are averaged, which is meaningless unless the datasets are related. This is not good practice for comparing methods.

I refer you to the classical paper by Demsar, "Statistical Comparisons of Classifiers over Multiple Data Sets", Journal of Machine Learning Research 7 (2006) 1–30, which recommends the Friedman test over the ranks of the algorithms as a sounder methodology.
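The comparison Demsar recommends can be sketched as follows: rank the classifiers on each dataset, then apply the Friedman test to check whether the rank differences are significant. The accuracy values below are hypothetical, for illustration only; this is a minimal sketch using `scipy`, not the paper's actual evaluation.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = datasets, columns = classifiers A, B, C (hypothetical accuracies)
scores = np.array([
    [0.81, 0.79, 0.84],
    [0.92, 0.90, 0.93],
    [0.67, 0.70, 0.72],
    [0.75, 0.74, 0.78],
    [0.88, 0.85, 0.90],
])

# Rank classifiers per dataset (1 = best); negate so higher accuracy ranks first
ranks = rankdata(-scores, axis=1)
avg_ranks = ranks.mean(axis=0)

# Friedman test over the per-dataset measurements: are the rank
# differences among the three classifiers statistically significant?
stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print("average ranks:", avg_ranks)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")
```

Reporting the average ranks alongside the test statistic, rather than averaging raw scores across unrelated datasets, makes the comparison meaningful.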

There are a couple of typos in the text:

Line 34: "schem" should be "schema"

Line 294: "probes" should be "proves"

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

The idea presented in the paper is quite interesting and useful.

However, there are some points that require some more work, in my opinion:

  1. The Introduction section is too long; it would be better to move part of it into an overview or state-of-the-art section
  2. The definition of the method is easy to follow, but it lacks some insights:
    1. Could other activation layers be used?
    2. Does the backpropagation remain the same?
    3. Could the method be adapted to other types of layers (convolutional, recurrent, etc.)?
  3. The result and discussion sections are well designed, but I would like to see other validation:
    1. The simulation of the multiple annotators on the synthetic datasets is quite simple
    2. How would the method behave with a malicious annotator? How many malicious annotators are required to decrease performance?
    3. Other configurations of annotators would help to establish the resilience of the method
  4. Finally, GPC-GOLD appears to outperform your proposed solution by a large margin; is there any improvement that could be made to overcome this?

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The authors have addressed my previous concerns.
