by Yang Li, Tianle Qiao, Wenbo Leng, et al.

Reviewer 1: Jorge Parraga-Alava
Reviewer 2: Siniša Randjić
Reviewer 3: Nen-Fu Huang

Round 1

Reviewer 1 Report

The work proposes an interesting approach to tackle the task of semantic segmentation of wheat stripe rust damage images using deep learning. Although the subject is not new, the authors carry out the study in an interesting way. At the introductory level, it is necessary that the authors include references to similar works carried out, for example, 1) Aphids Detection on Lemons Leaf Image Using Convolutional Neural Networks. As well as other crops/image datasets: 1) LeLePhid: an image dataset for aphid detection and infestation severity on lemon leaves and 2) RoCoLe: A robusta coffee leaf images dataset for evaluation of machine learning based methods in plant diseases recognition.

Author Response

Reviewer #1:

The work proposes an interesting approach to tackle the task of semantic segmentation of wheat stripe rust damage images using deep learning. Although the subject is not new, the authors carry out the study in an interesting way. At the introductory level, it is necessary that the authors include references to similar works carried out, for example, 1) Aphids Detection on Lemons Leaf Image Using Convolutional Neural Networks.

R1: Thank you for the suggestion; we have added a description of this work in the Introduction. Our polished manuscript has been revised as mentioned below: Page 2, Lines 116-120.

As well as other crops/image datasets: 1) LeLePhid: an image dataset for aphid detection and infestation severity on lemon leaves and 2) RoCoLe: A robusta coffee leaf images dataset for evaluation of machine learning based methods in plant diseases recognition.

R2: Many thanks for your suggestion; we have added a description of these works in the Introduction. We have also fully checked the manuscript for the correct use of scientific terms and gene descriptions. Our polished manuscript has been revised as mentioned below: Page 2, Lines 103-116.

Reviewer 2 Report

I have no special remarks or suggestions.

Author Response

I have no special remarks or suggestions.

R1: Thank you for your review.

Reviewer 3 Report

The Authors have explained the Results section properly and have mentioned that the experimental results did not overcome the benchmarking work (4. Discussion). However, the authors must provide a proper benchmarking section with the recent relevant work. And also the authors should explain why the work should be published if the work does not overcome the benchmarking work.

Author Response

Reviewer #3: The Authors have explained the Results section properly and have mentioned that the experimental results did not overcome the benchmarking work (4. Discussion).

R1: We are thankful for your valuable comments. We apologize for not making this clear in the paper: the Octave-UNet model already outperforms the other benchmark models. It is only that a few metrics of Octave-UNet do not exceed 90%; when a new dataset is first released, it is common that not all metrics exceed 90%. When famous datasets such as COCO and Pascal VOC were first proposed in the computer vision field, the benchmark results were also not high, and the community continued to optimize and design better models, which is the charm of scientific research. Your advice is very useful: what we want to emphasize is that the database we built is challenging and the segmentation task is novel. We have rewritten the beginning of the Discussion section and reformulated the importance and necessity of constructing the CDTS dataset. Our polished manuscript has been revised as mentioned below: Page 11, Lines 426-435.

However, the authors must provide a proper benchmarking section with the recent relevant work.

R2: Many thanks for your valuable comments and suggestions. There are two reasons for not using recent relevant work as benchmarks: 1. No deep learning model for wheat stripe rust segmentation has been proposed yet, and most papers on segmentation of other diseases do not provide open-source code. 2. In terms of models, we focus on exploring the performance of models with different architectures on the dataset. For each architecture we use the most popular models in the field of computer vision, so the selected models are representative. By comparing different architectures, we can clearly understand the idea behind each one and lay the foundation for subsequent model innovation based on the best-performing architecture.

And also the authors should explain why the work should be published if the work does not overcome the benchmarking work.

R3: We are thankful for your valuable comments. We are sorry for not expressing this clearly in our writing: the Octave-UNet model we used has already exceeded the benchmark models; it is only that its performance on the dataset still has room for improvement. This paper focuses on filling the gap left by the lack of a large-scale wheat stripe rust dataset, and a new segmentation paradigm is proposed according to actual needs. Based on this paradigm, the area ratio of spores to leaves can be obtained. This work advances research on the quantitative acquisition of spore data, which is very important for disease evaluation and the verification of genetic experiments. According to our literature research, there is no similar work. After constructing the dataset, we evaluated models built on different ideas, such as multi-scale feature extraction, dilated convolution, encoder-decoder structures, and semantic fusion of high-frequency and low-frequency information, to explore a model suited to the fine segmentation of small-area contours, which lays the foundation for subsequent model innovation research. We sincerely thank you again; your advice is very helpful to our writing. We have revised the abstract, discussion, and conclusion sections, and thank you for guiding our follow-up research. Our polished manuscript has been revised as mentioned below: Page 1, Lines 24-27; Page 1, Lines 29-30; Page 13, Lines 499-500; Page 13, Line 502.
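
As a minimal illustration of the area-ratio paradigm mentioned above (a sketch only, assuming hypothetical class indices 0 = background, 1 = leaf, 2 = spore; the actual label encoding in the paper may differ), the spore-to-leaf area ratio can be computed by counting class pixels in a predicted segmentation mask:

    import numpy as np

    # Hypothetical class indices; the paper's actual label encoding may differ.
    BACKGROUND, LEAF, SPORE = 0, 1, 2

    def spore_leaf_area_ratio(mask: np.ndarray) -> float:
        """Compute the spore-to-leaf area ratio from a predicted class mask.

        `mask` is an (H, W) integer array where each pixel holds a class
        index. The ratio is spore pixels divided by total leaf-surface
        pixels (leaf + spore), since spores occlude the leaf they grow on.
        """
        spore_px = np.count_nonzero(mask == SPORE)
        leaf_px = np.count_nonzero(mask == LEAF)
        total = spore_px + leaf_px
        if total == 0:
            return 0.0  # no leaf detected in the image
        return spore_px / total

    # Example: a toy 4x4 mask with 2 spore pixels and 6 leaf pixels -> 0.25
    toy_mask = np.array([
        [0, 0, 1, 1],
        [0, 1, 1, 2],
        [0, 1, 1, 2],
        [0, 0, 0, 0],
    ])
    print(f"spore/leaf area ratio: {spore_leaf_area_ratio(toy_mask):.2f}")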