Article
Peer-Review Record

Kernel Reverse Neighborhood Discriminant Analysis

Electronics 2023, 12(6), 1322; https://doi.org/10.3390/electronics12061322
by Wangwang Li, Hengliang Tan *, Jianwei Feng, Ming Xie, Jiao Du, Shuo Yang and Guofeng Yan
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 31 December 2022 / Revised: 13 February 2023 / Accepted: 28 February 2023 / Published: 10 March 2023
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

The preprint is good, but a few issues must be addressed:
1) It is known that LDA has many problems when the data are non-linear. For example: a) the presence of outliers can bias the results of LDA; b) LDA is sensitive to the scale of the data, and the features should be standardized before applying LDA; c) LDA is sensitive to the sample size, and results may be poor for small sample sizes.
How does the proposed KRNDA method perform with respect to outliers, scaling, and sample size? Please add a comparison.
2) When applying a neighborhood algorithm, the selection of the neighborhood size is an issue. How is the problem of selecting an appropriate size addressed? (An illustrative sketch for points 1 and 2 follows this list.)
3) The authors must add the open-source code (MATLAB or other) for the experiments, so that they are reproducible.
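A minimal, hypothetical Python sketch of the kind of check points 1 and 2 ask for, using scikit-learn on a standard dataset purely for illustration (this is not the manuscript's code; LinearDiscriminantAnalysis and KNeighborsClassifier only stand in for the discussed methods):

# Hypothetical sketch: (a) LDA with and without feature standardization,
# (b) choosing a neighborhood size k by cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

# (a) compare LDA on raw and on standardized features
for name, model in [("LDA raw", LinearDiscriminantAnalysis()),
                    ("LDA standardized", make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()))]:
    print(name, cross_val_score(model, X, y, cv=10).mean())

# (b) pick the neighborhood size k that maximizes cross-validated accuracy
scores = {k: cross_val_score(make_pipeline(StandardScaler(),
                                           KNeighborsClassifier(n_neighbors=k)),
                             X, y, cv=10).mean()
          for k in range(1, 16)}
best_k = max(scores, key=scores.get)
print("selected k =", best_k, "accuracy =", round(scores[best_k], 3))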

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article deals with a proposal to extend the nLDA method towards kernel-based functions. The authors propose to use a non-linear transformation of the input data to map the features to a higher-dimensional space. In this way, multimodal data can potentially be classified more easily in the case of strong non-linear separability of the original classes. The proposed method is called Kernel Reverse Neighborhood Discriminant Analysis (KRNDA).
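For context, such a kernel extension relies on evaluating inner products in the mapped feature space through a kernel function rather than computing the mapping explicitly. A minimal, hypothetical Python sketch of a Gaussian (RBF) kernel matrix, not the authors' implementation:

# Hypothetical sketch: Gaussian kernel K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)).
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    # pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(5, 3))
K = gaussian_kernel_matrix(X, sigma=2.0)
print(K.shape, np.allclose(K, K.T))  # (5, 5) True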

Unfortunately, the paper in its current form contains a number of inaccuracies and deficiencies that must be corrected in order to be considered for publication.

The authors should address the following issues:

1. Abstract/Conclusions

The abstract should include representative quantitative data on the effectiveness of the method compared to baseline methods or an indication of potential areas of application where the method has a significant advantage. A more extensive discussion should be included in the Conclusions.

2. Sections 3 (3.2) and 4

In terms of mathematical notation, the authors use rather long and elaborate names. It would be good to try to shorten the notation a bit, e.g. replace some literals like NN, RNN with more compact symbols. As it stands, the mathematical notation is very difficult to read and even illegible in places, such as the subscripts for sums.

3. Sections 3 and 4

Long formulae should not be placed directly in the text, as this reduces readability or, in drastic cases, causes mathematical expressions to render incorrectly. It is recommended to place extensive mathematical formulae on separate lines and refer to them by number. Such corrections should be applied throughout the work. Examples of such problematic places are lines 131-136, 172, and others.

4. Pages 7-8, Eqs. 23-26

The authors leave out issues that are probably obvious to them, but not necessarily to the reader, especially with notation this complicated and in places illegible. Among other things, there is no clear explanation of how to read the symbol K_b (Equations 23 and 24) or K_r (Equations 25 and 26). Should they be understood analogously to K_m (Equations 21 and 22), which stands for the inner product between all training samples and the m-th training sample in the mapped space F? If so, such explanations of the symbols used should be included in the text below these expressions.
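If K_m is indeed read as described above (an assumption based only on this report, not on the manuscript's equations), a minimal hypothetical Python sketch of such a kernel vector, using a Gaussian kernel for concreteness, would be:

# Hypothetical sketch, assuming (K_m)_i = k(x_i, x_m) = <phi(x_i), phi(x_m)>,
# i.e. the kernel inner products between every training sample and the m-th one.
import numpy as np

def kernel_vector(X_train, m, sigma=1.0):
    diffs = X_train - X_train[m]                   # x_i - x_m for all i
    sq_dists = np.sum(diffs ** 2, axis=1)          # ||x_i - x_m||^2
    return np.exp(-sq_dists / (2.0 * sigma ** 2))  # Gaussian kernel values

X_train = np.random.default_rng(1).normal(size=(6, 4))
K_m = kernel_vector(X_train, m=2, sigma=1.5)
print(K_m.shape)  # (6,) -- one entry per training sample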

5. Section 5 (benchmarking)

The results of the experiments performed for different data sets are presented in Tables 3, 4 and 6. Unfortunately, the authors used an incomplete methodology. This issue absolutely requires completion and recalculation.

In order for the results obtained for the different models and the proposed KRNDA scheme to be statistically comparable at all, the experiments should be performed for many variants of random division of the data into training and test parts, and the mean accuracy (MEAN) and standard deviation (STD) should then be calculated. Only with such results at one's disposal can one conclude whether one method outperforms another. Various versions of the cross-validation technique or direct random data-splitting methods can be used, e.g., k-fold cross-validation with k = 10, 20, or more. Without such basic information as MEAN and STD, no reliable conclusions can be drawn, such as those given in the article. Without knowing the value of the standard deviations, it is impossible to say whether the proposed method is statistically better than other versions of LDA. Nothing can be said about robustness either.

The evaluation of the sigma parameter should be carried out in a similar way. A one-time experiment with a single random selection of training and test data is not methodologically reliable and is burdened with unacceptably high uncertainty.
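A minimal, hypothetical Python sketch of the kind of protocol suggested here, using repeated stratified k-fold cross-validation and an RBF SVM only as a stand-in for KRNDA (the model, dataset, and sigma grid are illustrative assumptions, not the paper's setup):

# Hypothetical sketch: repeated stratified k-fold evaluation reporting MEAN and STD,
# with a grid over the kernel width (gamma = 1 / (2 * sigma^2) for the stand-in RBF SVM).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_wine(return_X_y=True)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

for sigma in (0.5, 1.0, 2.0, 4.0):
    gamma = 1.0 / (2.0 * sigma ** 2)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma))
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"sigma={sigma}: MEAN={scores.mean():.3f}, STD={scores.std():.3f}")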

6. Other kernel functions (benchmarking)

A demonstration of the method's potential should also be shown for other commonly used kernel functions, such as the polynomial kernel.
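For reference, a minimal hypothetical sketch of a polynomial kernel matrix (degree and offset chosen arbitrarily for illustration; not a parameterization from the manuscript):

# Hypothetical sketch: polynomial kernel k(x, y) = (x.y + c)^d.
import numpy as np

def polynomial_kernel_matrix(X, degree=3, coef0=1.0):
    return (X @ X.T + coef0) ** degree

X = np.random.default_rng(2).normal(size=(5, 3))
K_poly = polynomial_kernel_matrix(X, degree=2, coef0=1.0)
print(K_poly.shape)  # (5, 5)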

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Manuscript Number: electronics-2166597

Title:  Kernel Reverse Neighborhood Discriminant Analysis

Article Type: Review Article

The subject of the research falls within the scope of this journal. The paper, titled "Kernel Reverse Neighborhood Discriminant Analysis", addresses an interesting issue: the authors employ the Gaussian kernel to map the original data to a high-dimensional feature space, where the non-linear multimodal classes can be better classified, and give the details of the derivation of the proposed Kernel Reverse Neighborhood Discriminant Analysis (KRNDA). This kernel approach is already known and widely used.

My recommendation is minor revision.

1. The quality of the language is insufficient. Have a native speaker or similar assist you.

2. Please explain why you chose these methods.

3. How was the division into the training set and the test set performed?

4. In my opinion, the final conclusions should also be elaborated, as they are not as specific as the scope of the work.

5. The authors should include more works in the literature review, because after analyzing the Scopus or WoS databases one can find many similar works.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
