Article
Peer-Review Record

Reduced Dilation-Erosion Perceptron for Binary Classification

Mathematics 2020, 8(4), 512; https://doi.org/10.3390/math8040512
by Marcos Eduardo Valle
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 3 March 2020 / Revised: 27 March 2020 / Accepted: 30 March 2020 / Published: 2 April 2020

Round 1

Reviewer 1 Report

The paper studies an innovative idea that combines a) reduced orderings, b) an innovative morphological perceptron, and c) ensemble/bagging methods to produce a really interesting approach for binary classification. This reviewer thinks that the author’s proposition is original and that it has the merit to be considered for publication in this journal. The contribution is original and the experimental results show the effectiveness of the method. The author does not study the multiclass classification problem, but it seems to have a natural extension by means of the one-against-all approach from supervised classification.

Minor Comments:

Figure 5 requires more explanation (what is the meaning of the links? Is there any reference that explains their meaning?).

The caption of Figure 6 can be improved; it is not informative.

Additionally, I tested the code available on GitHub, but unfortunately I ran into some problems using it due to the version of the optimization package (it would be nice if the author indicated in his GitHub repository the versions of the different libraries used in the implementation of the method).

Author Response

Please find below the replies to the comments of the reviewer. The changes have been typed in red in the manuscript.

Comment: The paper studies an innovative idea that combines a) reduced orderings, b) an innovative morphological perceptron, and c) ensemble/bagging methods to produce a really interesting approach for binary classification. This reviewer thinks that the author’s proposition is original and that it has the merit to be considered for publication in this journal. The contribution is original and the experimental results show the effectiveness of the method. The author does not study the multiclass classification problem, but it seems to have a natural extension by means of the one-against-all approach from supervised classification.

Reply: I thank the reviewer for the comments. As remarked by the reviewer, I only addressed binary classification problems. Multiclass classification problems can be addressed by means of one-against-all or one-against-one approaches. I addressed this remark in Section 6, lines 428-433, of the revised version of the manuscript.
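
For readers interested in such an extension, the sketch below uses scikit-learn’s OneVsRestClassifier to build a one-against-all multiclass classifier; a linear SVC is used only as a stand-in for the binary base classifier, since the r-DEP implementation itself is not reproduced here.

```python
# Minimal sketch of a one-against-all (one-vs-rest) extension of a binary
# classifier to a multiclass problem. The base estimator is a stand-in
# (a linear SVC); any scikit-learn compatible binary classifier exposing
# fit/predict could be plugged in instead.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # three classes

# One binary classifier is trained per class; the class whose classifier
# returns the largest decision value wins.
ovr = OneVsRestClassifier(SVC(kernel="linear"))
ovr.fit(X, y)
print(ovr.predict(X[:5]))
```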

Minor Comments:

Comment: Figure 5 requires more explanation (what is the meaning of the links? Is there any reference that explains their meaning?).

Reply: In the revised version of the manuscript, I point out the following: “… an edge in this diagram means that the hypothesis test discarded the null hypothesis that the classifier on the top yielded a balanced accuracy score less than or equal to that of the classifier on the bottom. For example, Student’s t-test discarded the null hypothesis that the ensemble r-DEP classifier performs as well as or worse than the hard-voting ensemble of SVCs. In other words, the ensemble r-DEP statistically outperformed the ensemble of SVCs. In conclusion, in Figure 5, the method on the top of an edge statistically outperformed the method on the bottom.” In the revised version of the manuscript, we also included references 66 and 67, which address the graphical interpretation of hypothesis tests.
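
For concreteness, the snippet below sketches the kind of one-sided paired comparison described above, using SciPy’s Student’s t-test on the balanced accuracy scores of two classifiers evaluated on the same datasets; the score arrays are placeholders, not the values reported in Table 4.

```python
# Sketch of the pairwise comparison summarized by the diagram: a one-sided
# paired t-test on the balanced accuracy scores of two classifiers evaluated
# on the same datasets. The score arrays are placeholders, not values from
# the paper.
import numpy as np
from scipy import stats

scores_top = np.array([0.91, 0.88, 0.85, 0.93, 0.90])     # e.g., ensemble r-DEP
scores_bottom = np.array([0.89, 0.86, 0.84, 0.92, 0.88])  # e.g., ensemble of SVCs

# Null hypothesis: the classifier on the top performs as well as or worse
# than the classifier on the bottom; 'greater' tests the one-sided alternative.
t_stat, p_value = stats.ttest_rel(scores_top, scores_bottom, alternative="greater")
if p_value < 0.05:
    print("Null hypothesis discarded: the top classifier statistically outperformed the bottom one.")
else:
    print("No statistically significant difference detected.")
```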

Comment: The caption of Figure 6 can be improved; it is not informative.

Reply: In the revised version of the manuscript, I reformulated the caption of Figure 6 as follows: “Boxplot summarizing the average balanced accuracy scores reported in Table 4. In general, the ensemble and bagging r-DEP classifiers yielded the largest balanced accuracy scores.”

Comment: Additionally, I tested the code available on GitHub, but unfortunately I ran into some problems using it due to the version of the optimization package (it would be nice if the author indicated in his GitHub repository the versions of the different libraries used in the implementation of the method).

Reply: I thank the reviewer for the information. I updated the GitHub read-me file to list the versions of the packages used. In particular, I emphasized that I used MOSEK, which can be obtained at https://www.mosek.com/, as the default solver for the convex-concave optimization problem. That may be the source of the problem you encountered with the optimization package. Other solvers can be used instead of MOSEK. For example, the CVXOPT solver available through cvxpy can be used, but unfortunately it is very slow, making it inappropriate for medium- and large-scale problems.
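
As an illustration of the solver choice, the snippet below shows how a solver is selected in cvxpy and how to fall back to an open-source solver when MOSEK is unavailable; the toy problem is only a placeholder and is not the convex-concave program solved by the r-DEP code.

```python
# Illustration of solver selection in cvxpy. The toy problem is only a
# placeholder; it is not the convex-concave program solved by the r-DEP code.
import cvxpy as cp
from cvxpy.error import SolverError

x = cp.Variable(2)
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - 1)), [x >= 0])

try:
    # MOSEK must be installed and licensed separately (https://www.mosek.com/).
    problem.solve(solver=cp.MOSEK)
except SolverError:
    # Fall back to whichever open-source solver ships with the installation.
    problem.solve()

print(x.value)
```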

Finally, we would like to point out that we took the opportunity to correct further typos and reformulate some minor phrases in the presentation of the reduced dilation-erosion perceptron in Section 4.

 

Reviewer 2 Report


Very interesting article, although the author did not avoid several mistakes. The publication lacks some information, namely practical examples of where the proposed solutions can be used. These can be given in the introduction or at the end of the paper.

Some comments:

- Line 140. Increasing operator. "... if x<=y implies Fi(x) <= Fi(x) ...". In my opinion, it should be "Fi(x) <= Fi(y)", as in line 153. If not, please explain.

- Line 190. Figure 1. It should be clearly described what the line in the middle of the chart is.

- Line 202 (second line). What do you mean by "... the positive and the negative classes"?

- Line 229. "... In our computational implementation, we adopted C = 10^-2" - Why? Please explain.

- Line 312. Table 1. Accuracy score. What units are the results in? Percent? Absolute values between 0 and 1?

- Line 334. Table 2 - same remark as for Table 1.

- Line 380. Table 4 - same remark as for Table 1.

- Line 382. Should be "Rosemblatt's perceptron"; an "r" is missing.

- Line 409. Figure 6. The "Y" (vertical) axis is not described. What are the points above the DEP rectangle?

- Line 417. The position of the DEP rectangle (brown) on the chart is definitely different from the others. In my opinion, it should be explained why this is so. People who read the publication do not always read the entire text. This is just my suggestion.

Author Response

Please find below the replies to the comments of the reviewer. The changes have been typed in red in the manuscript.

Comment: Very interesting article, although the author did not avoid several mistakes. The publication lacks some information, namely practical examples of where the proposed solutions can be used. These can be given in the introduction or at the end of the paper.

Reply: I thank the reviewer for the valuable comments and apologize for the lack of information. I hope I have properly addressed them. In particular, some information on practical applications of the proposed method is given in Section 6, lines 428-433, of the revised version of the manuscript.

Some comments:

- Line 140. Increasing operator. "... if x<=y implies Fi(x) <= Fi(x) ...". In my opinion, it should be "Fi(x) <= Fi(y)", as in line 153. If not, please explain.

Reply: We thank the reviewer for the comment. We corrected the typo.

- Line 190. Figure 1. It should be clearly described what the line in the middle of the chart is.

Reply: In the revised version of the manuscript, we clarify that the piece-wise linear curve corresponds to the decision boundary of the dilation-erosion perceptron classifier.
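
To illustrate why such a boundary is piece-wise linear, the sketch below assumes a decision function of the common max-plus/min-plus form, namely the sum of a dilation and an erosion, with the class given by the sign of the result; the weights are arbitrary illustrative values and this is not the trained model from the paper.

```python
# Sketch of why a dilation-erosion decision boundary is piece-wise linear,
# assuming the form tau(x) = max_j(x_j + a_j) + min_j(x_j + b_j), with the
# predicted class given by sign(tau(x)). The weights a and b are arbitrary
# illustrative values, not parameters trained by the method in the paper.
import numpy as np

a = np.array([0.5, -1.0])   # "dilation" weights (illustrative)
b = np.array([-0.2, 0.8])   # "erosion" weights (illustrative)

def dep_decision(x):
    dilation = np.max(x + a)   # maximum of affine pieces
    erosion = np.min(x + b)    # minimum of affine pieces
    return dilation + erosion  # zero level set is a piece-wise linear boundary

x = np.array([0.3, 0.7])
print(np.sign(dep_decision(x)))  # predicted class: +1 or -1
```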

- Line 202 (second line). What do you mean by "... the positive and the negative classes"?

Reply: In the revised version of the manuscript (second paragraph of Section 3), we clarify that the elements of V associated with the class labels -1 and +1 belong, respectively, to the negative and positive classes.

- Line 229. "... In our computational implementation, we adopted C = 10^-2" - Why? Please explain.

Reply: In the revised version of the manuscript, we point out that “Although in our computational implementation we adopted $C = 10^{-2}$, we recommend fine-tuning this hyper-parameter using, for example, exhaustive search or a randomized parameter optimization strategy \cite{bergstra12}.”
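
As a concrete illustration of such a randomized search, the sketch below uses scikit-learn’s RandomizedSearchCV with a log-uniform distribution over C; an SVC stands in for the r-DEP classifier, whose exact interface is not reproduced here.

```python
# Sketch of the recommended hyper-parameter search. A scikit-learn SVC stands
# in for the r-DEP classifier; its regularization parameter C plays an
# analogous role, and the same machinery applies to any estimator that
# exposes fit/predict and get_params/set_params.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-4, 1e2)},
    n_iter=20,
    scoring="balanced_accuracy",
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```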

- Line 312. Table 1. Accuracy score. What units are the results in? Percent? Absolute values between 0 and 1?

Reply: In the revised version of the manuscript, I explain in Example 6 the following: “Table 1 contains the accuracy score (between 0 and 1) of each of the classifiers on both training and test sets.” I hope I have clarified the unit of the results without being wordy.

- Line 334. Table 2 - same remark as for Table 1.

Reply: Similarly, we included in Example 7 the following: “Table 2 lists the accuracy score (between 0 and 1) of all the classifiers...”

- Line 380. Table 4 - same remark as for Table 1.

Reply: We point out in the revised version of the manuscript the following: “Therefore, we used the balanced accuracy score, which ranges from 0 to 1, to measure the performance of a classifier [66]”.
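
For clarity on the metric itself, balanced accuracy is the mean of the per-class recalls and therefore always lies between 0 and 1; the toy check below uses made-up labels only.

```python
# Toy illustration of the balanced accuracy score: the mean of the per-class
# recalls, which always lies between 0 and 1. The labels below are made up
# for illustration only.
from sklearn.metrics import balanced_accuracy_score

y_true = [+1, +1, +1, +1, -1, -1]
y_pred = [+1, +1, +1, -1, -1, +1]

# Recall of class +1 is 3/4 and recall of class -1 is 1/2, so the balanced
# accuracy is their mean: (0.75 + 0.5) / 2 = 0.625.
print(balanced_accuracy_score(y_true, y_pred))  # 0.625
```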

- Line 382. Should be "Rosemblatt's perceptron"; an "r" is missing.

Reply: We corrected the typo.

- Line 409. Figure 6. The "Y" (vertical) axis is not described. What are the points above the DEP rectangle?

Reply: In the revised version of the manuscript, we included the y-axis label in Figure 6. The three points above the DEP box correspond to the average balanced accuracy scores of 0.90, 0.88, and 0.77 obtained on the Ionosphere, Breast Cancer Wisconsin, and Internet Advertisement datasets, respectively. We address this remark in the last paragraph of Section 5 of the revised version of the manuscript.

- Line 417. The position of the DEP rectangle (brown) on the chart is definitely different from the others. In my opinion, it should be explained why this is so. People who read the publication do not always read the entire text. This is just my suggestion.

Reply: In the revised version of the manuscript, I point out in the last paragraph of Section 5 that the poor performance of the DEP classifier follows from the fact that it presupposes a relationship between the partial orderings of the features and the classes. Precisely, the samples from the positive class must, in general, be greater than or equal to the samples from the negative class.

Finally, we would like to point out that we took the opportunity to correct further typos and reformulate some minor phrases in the presentation of the reduced dilation-erosion perceptron in Section 4.
