Peer-Review Record

Identification Process of Selected Graphic Features Apple Tree Pests by Neural Models Type MLP, RBF and DNN

Agriculture 2020, 10(6), 218; https://doi.org/10.3390/agriculture10060218
by Piotr Boniecki 1, Maciej Zaborowicz 1,*, Agnieszka Pilarska 2 and Hanna Piekarska-Boniecka 3
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 28 May 2020 / Accepted: 9 June 2020 / Published: 10 June 2020
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)

Round 1

Reviewer 1 Report

The paper has been improved and revised accordingly.

Reviewer 2 Report

The authors have satisfactorily addressed my comments, concerns, and recommended improvements to the article. I therefore consider the article to be of a suitable level of quality and a sufficient standard to be accepted and published.

 

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.

 

Round 1

Reviewer 1 Report

The paper presents the merit of ANNs for the identification of five selected pests in apple tree orchards in Poland, based on visual features consisting mainly of selected geometrical features of the digital pest images (the objects). The ANN structures compared are MLP, RBF, and DNN. The results, obtained using 1000 pest images, show that the MLP has the best performance.

The paper is interesting and well written, and its presentation is also good. However, it needs some revisions, as follows:

  1. The paper needs to state clearly how the 1000 images (the data) were obtained or generated. What is the quality of the data used? Is any preprocessing needed to improve performance?
  2. It would be convincing to show the results with and without feature selection. In this way, it would also be possible to determine whether the seven features are equally prominent/significant. If they are not, what is their ranking from most prominent to least prominent? (One possible ranking method is sketched after this list.)
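A minimal sketch of one way such a feature ranking could be produced, using permutation importance from scikit-learn. The data, the 27-neuron MLP setting, and all numbers here are hypothetical stand-ins, not the authors' material:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for 1000 images x 7 extracted features and 5 pest classes
rng = np.random.default_rng(0)
X = rng.random((1000, 7))
y = rng.integers(0, 5, 1000)

model = MLPClassifier(hidden_layer_sizes=(27,), max_iter=2000, random_state=0)
model.fit(X, y)

# Accuracy drop when a feature is shuffled; a larger drop means a more important feature
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i + 1}: mean importance {result.importances_mean[i]:.4f}")
```

Run on the real feature matrix, this would yield the requested most-to-least-prominent ordering.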

Author Response

Dear Reviewers,

Thank you very much for your comments and all your guidance. We have improved our text. We hope that it is now much better and meets all scientific standards.

We have also had the text corrected by a native speaker.

Thank you for your time!

On behalf of all co-authors,

Maciej Zaborowicz

Author Response File: Author Response.docx

Reviewer 2 Report

This paper applies Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Deep Neural Network (DNN) classification models for the purpose of identifying digital camera images of five selected pests which may be present in apple tree orchards. Rather than using the raw image data, seven features were extracted from each image and used as input data to train, validate, and test the three neural network models. Five of these features were shape-based coefficients and the other two were derived from specific pixel information contained in the images. The authors claim that the performance of these three neural network models exhibited good classification accuracy and suggest that the MLP model was the best-performing classifier overall.

While this work is of interest, there are a number of clarifications required, and areas in the adopted methodology that need attention. If the authors were to address the concerns below, the paper would be in a much improved state.

  1. On Page 2, the claim that “An important property of deep networks is the ability to operate and process data in real time” needs to be clarified, as such architectures normally take a significant time to train, given their complexity and the number of data examples normally used to train these types of neural network architectures [1]. Or is this claim about the real-time performance of trained DNNs?

  2. Have the seven features been used in similar work in the past? Providing supporting literature would help justify the claims made for selecting those specific features.

  3. What was the class distribution in each of the three data files used for training, validating, and testing the neural network models? In other words, were there the same number of examples for each of the five pest classes in each of the three data files, and in what order were the data examples presented during training? Related to this question, the authors could also discuss what effect class imbalance in the data sets would have on the performance of the three neural networks. (A stratified split with a per-file class count is sketched after this list.)

  4. Were the three data sets normalised before being used for training, validation, and testing of the neural network models, and if so, what form of normalisation was used? (One common form is sketched after this list.)

  5. Although Table 1 presents the structure of the learning file, it would be better if this table included examples of, say, the non-normalised data contained within the data file itself.

  6. In Table 2, the RMS error alone would not necessarily demonstrate the efficacy of the generated neural network models. As this is a classification problem, confusion matrices should be documented as the result of testing the three neural networks. In addition, precision, recall, and F1 values should be part of this reporting. Including such performance metrics would provide insight into which of the five pest classes were classified more or less accurately than the other pest classes. (This reporting is sketched after this list.)

    These evaluation metrics are important because, although the DNN has the same RMS error value when trained, validated, and tested, the distribution of classifications and misclassifications of the five pest classes by the same neural network model may not be consistent across all three data sets.

  7. The results included in Table 2 also suggest that only one model of each neural network type was generated from the three data files. In order to have a more objective evaluation of the utility of the three neural network architectures, n-fold cross-validation should have been applied. In this way, all instances of the overall data set are used for both training and testing the models, thus overcoming any selection bias. The average RMS error, along with its variance, for each neural network architecture could then be reported. (Such a procedure is sketched after this list.)

  8. The optimal MLP architecture was reported as having a structure of 7:7-27-5:1, but was the number of hidden sigmoidal neurons incorrectly reported as 19?

  9. At the end of Section 3 it was stated that sensitivity analysis was performed with the result that all variables were important for the operation of the model. But were they equally important? Was there any specific feature or set of features that contributed more to a classification outcome?

  10. It was claimed in the paper that the DNN model was trained with network sampling of 3347 samples per second; this figure requires some context as to the computer hardware used to obtain it. Also, was this figure an average value?

  11. It would be better if the limitations/weaknesses of this work were discussed, which would inform what future work could be conducted in this area. For example, handcrafted features were extracted from the images, but a suitable Convolutional Neural Network (CNN) architecture could be used to automatically extract features from these images as part of its learning process. A comparison between the seven features used in this work and the features extracted by a CNN would be an interesting topic for future research. (CNN-based feature extraction is sketched after this list.)
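Regarding point 3, a minimal sketch of a stratified train/validation/test split with a per-file class count. The 70/15/15 ratio and all data here are hypothetical stand-ins, not taken from the paper:

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 7))      # stand-in for the seven image features
y = rng.integers(0, 5, 1000)   # stand-in for the five pest classes

# stratify=y preserves the class proportions in every file
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)
X_va, X_te, y_va, y_te = train_test_split(X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=0)

for name, labels in (("training", y_tr), ("validation", y_va), ("test", y_te)):
    print(name, sorted(Counter(labels).items()))
```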
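For point 4, min-max scaling is one common form of normalisation; a sketch fitted on the training file only, with hypothetical unnormalised stand-in features:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X_train = rng.random((700, 7)) * 100   # stand-in for unnormalised training features
X_test = rng.random((150, 7)) * 100    # stand-in for unnormalised test features

scaler = MinMaxScaler()                     # rescales each feature to [0, 1]
X_train_n = scaler.fit_transform(X_train)   # fit the min/max on training data only
X_test_n = scaler.transform(X_test)         # reuse the training min/max on held-out data
```

Fitting the scaler on the training file alone avoids leaking information from the validation and test files.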
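For point 6, a sketch of the requested confusion matrix and per-class precision/recall/F1 reporting with scikit-learn; the labels below are random stand-ins, not the authors' results:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, 300)   # stand-in for the true pest classes
y_pred = rng.integers(0, 5, 300)   # stand-in for a model's predictions

print(confusion_matrix(y_true, y_pred))                 # 5 x 5 matrix of (true, predicted) counts
print(classification_report(y_true, y_pred, digits=3))  # per-class precision, recall, and F1
```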
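For point 7, a sketch of stratified 10-fold cross-validation that reports the mean and variance of a per-fold RMS error, computed here between predicted class probabilities and one-hot targets (one possible reading of the paper's RMS metric; the model and data are hypothetical stand-ins):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
X = rng.random((1000, 7))      # stand-in features
y = rng.integers(0, 5, 1000)   # stand-in pest classes

errors = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    model = MLPClassifier(hidden_layer_sizes=(27,), max_iter=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[test_idx])                      # (n, 5) class probabilities
    onehot = label_binarize(y[test_idx], classes=model.classes_)  # matching one-hot targets
    errors.append(np.sqrt(np.mean((proba - onehot) ** 2)))        # per-fold RMS error

print(f"mean RMS = {np.mean(errors):.4f}, variance = {np.var(errors):.6f}")
```

Every sample is used for testing exactly once, which removes the selection bias of a single fixed split.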
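And for point 11, a sketch of how a pretrained CNN could act as an automatic feature extractor for the suggested comparison; ResNet-18 is an arbitrary illustrative choice, not an architecture used in the paper:

```python
import torch
from torchvision import models

# Pretrained ResNet-18 with its final classification layer removed,
# used as a fixed feature extractor for pest images
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

image = torch.rand(1, 3, 224, 224)   # stand-in for one preprocessed pest image
with torch.no_grad():
    features = extractor(image).flatten(1)
print(features.shape)   # torch.Size([1, 512]): 512 learned features vs. the 7 handcrafted ones
```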

Selected typos and suggested grammatical corrections:

Page 4: Section 2 – “...treaning file, containing...” should be “...learning file, containing...”.

Page 5: Section 3 – “It is calculated as per formula 17.” should be “It is calculated as per formula 8.”.

References

[1] H. Zhu et al., "Benchmarking and Analyzing Deep Neural Network Training," 2018 IEEE International Symposium on Workload Characterization (IISWC), Raleigh, NC, 2018, pp. 88-100, doi: 10.1109/IISWC.2018.8573476.

Author Response

Dear Reviewers,

Thank you very much for your comments and all your guidance. We have improved our text. We hope that it is now much better and meets all scientific standards.

We have also had the text corrected by a native speaker.

Thank you for your time!

On behalf of all co-authors,

Maciej Zaborowicz

Author Response File: Author Response.docx
