Article
Peer-Review Record

Detection and Counting of Maize Leaves Based on Two-Stage Deep Learning with UAV-Based RGB Image

Remote Sens. 2022, 14(21), 5388; https://doi.org/10.3390/rs14215388
by Xingmei Xu 1, Lu Wang 1, Meiyan Shu 2, Xuewen Liang 1, Abu Zar Ghafoor 2, Yunling Liu 2, Yuntao Ma 1,2 and Jinyu Zhu 3,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 11 September 2022 / Revised: 20 October 2022 / Accepted: 24 October 2022 / Published: 27 October 2022

Round 1

Reviewer 1 Report

The authors present great work on the detection and counting of maize leaves.

Authors should improve the related work with more recent works.

Authors should highlight the contribution at the end of the introduction.

Authors should improve their English with proofreading.

The explanation of Figure 7 needs more details.

Author Response

First of all, we would like to thank you for your efforts in reviewing our manuscript. We have revised the manuscript according to your suggestions.

Point 1: The authors present great work on the detection and counting of maize leaves. Authors should improve the related work with more recent works.

Response 1: Thanks for your suggestion. Following your comment, we added an introduction to two-stage networks. In the fifth paragraph of the introduction we added, “Crop leaf counting is a further application after obtaining complete individual plants. It is difficult to obtain crop leaves directly by a single object detection or segmentation method due to the influence of soil and weeds in the field environment. To reduce the influence of complex background, researchers put forward the strategy of using the two-stage method to obtain the object [22, 40-42]. The superposition of two deep learning methods can reduce the amount of information processed, and more accurate and detailed object features can be obtained. Liu et al. [42] developed an integrated CNN-based method for estimating the severity of apple Alternaria leaf blotch in complex field conditions. The complex background was first removed by segmenting the leaves with a segmentation network. The disease was then identified on the segmented leaves with 96.41% final prediction accuracy.”
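For readers who want a concrete picture of the segment-then-detect strategy described above, the following is a minimal sketch in Python. The generic pretrained torchvision Mask R-CNN, the public ultralytics/yolov5 hub model, and the helper function names are stand-ins for illustration only; they are not the authors' fine-tuned networks or actual implementation.

```python
# Minimal sketch of a two-stage segment-then-detect pipeline (illustration only).
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn


def segment_plants(image: np.ndarray, score_thr: float = 0.5) -> np.ndarray:
    """Stage 1: suppress the complex background, keeping only plant pixels."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()  # generic stand-in model
    tensor = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    keep = pred["scores"] > score_thr
    mask = (pred["masks"][keep, 0] > 0.5).any(dim=0).numpy()
    return image * mask[..., None]  # background pixels set to zero


def count_leaves(masked_image: np.ndarray, conf_thr: float = 0.25) -> int:
    """Stage 2: detect individual leaves on the background-free image and count them."""
    detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # generic stand-in detector
    results = detector(masked_image)
    boxes = results.xyxy[0]  # columns: x1, y1, x2, y2, confidence, class
    return int((boxes[:, 4] > conf_thr).sum())
```

In this two-stage setting, the detector only sees pixels that survived the segmentation stage, which is what limits the influence of soil and weeds on the leaf count.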

Point 2: Authors should highlight the contribution at the end of the introduction.

Response 2: Thanks for your suggestion. Following your comment, we added the contributions in the last paragraph of the introduction: “The early growth stage of maize seedlings in the field was taken as the research object in this study, and a deep learning network was used to count the leaves of maize seedlings in a complex field environment from UAV digital images. The main contributions are as follows: (1) A two-stage deep learning strategy for rapid acquisition of maize leaves in a complex field environment was proposed. (2) A new loss function, SmoothLR, was proposed to improve the convergence speed and segmentation performance of Mask R-CNN. The improved Mask R-CNN was used to segment the complete foreground of maize seedlings from the complex field background. (3) The YOLOv5 model was used to detect and count the maize leaves and was compared with mainstream detection models. The counting of leaves based on UAV images guides high-throughput investigation of the crop growth period and final yield.”
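As background for contribution (2), the sketch below shows the standard Smooth L1 (Huber-style) regression loss conventionally used for bounding-box regression in Mask R-CNN. It is included only as the familiar baseline; the proposed SmoothLR loss is defined in the manuscript and is not reproduced here.

```python
# Standard Smooth L1 loss used for box regression in Mask R-CNN; shown as the
# conventional baseline only, not the SmoothLR loss proposed in the manuscript.
import torch


def smooth_l1(pred: torch.Tensor, target: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    diff = torch.abs(pred - target)
    loss = torch.where(diff < beta, 0.5 * diff.pow(2) / beta, diff - 0.5 * beta)
    return loss.mean()
```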

Point 3: Authors should improve their English with proofreading.

Response 3: We would like to thank you for your comments. The English has been revised and carefully corrected with the help of a native speaker.

Point 4: The explanation of Figure 7 needs more details.

Response 4: We found that the final conclusion of Figure 7 had not been stated. In Section 3.2 we added, “Overall, the performance of YOLOv5l, YOLOv5m, and YOLOv5x was better, and YOLOv5n was the worst.”

Reviewer 2 Report

This study proposes a method for detecting and counting maize leaves based on deep learning with RGB images collected by an unmanned aerial vehicle (UAV). The Mask R-CNN network was used to separate the complete maize seedlings from the complex background to reduce the impact of weeds on leaf counting. Then, YOLOv5 was used to detect and count the individual leaves of the segmented maize seedlings. In summary, the research is interesting and provides valuable results, but the current document has several weaknesses that must be addressed before it can do justice to the value of the work.

Chapter 1: Introduction

· The first paragraph introducing the research topic gives an overly simple, and even incomplete, view of the problems related to your topic and should be revised and completed with citations to authoritative references (Recognition and localization methods for vision-based fruit picking robots: a review).

· The novelty of the study is not apparent enough. In the introduction section, please highlight the contribution of your work by placing it in context with the work that has been done previously in the same domain.

· On a general level, the study of the proposed detection techniques is reasonable, and the explanation of the objectives of the work may be valid. However, the limitations of your work are not rigorously acknowledged and justified.

· Vision technology integrated with deep learning has been emerging in recent years in various engineering fields. The authors may add more state-of-the-art articles for the integrity of the introduction. For object detection, please refer to A Study on Long–Close Distance Coordination Control Strategy for Litchi Picking; Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision.

Chapter 2: Materials and Methods

· The number of cropped sample images (about 700) is too small for training, and it is recommended to increase the original sample size.

· How do you handle the situation in which some lower maize leaves are occluded by upper leaves when the drone shoots from top to bottom?

· This paper uses Mask R-CNN for instance segmentation of seedlings, mainly introduces the network structure of Mask R-CNN, and lacks innovation.

· Directly detecting, counting, and comparing maize leaves with different versions of the YOLOv5 model also lacks innovation and is only a simple application.

Chapter 5: Conclusions

· First, since Mask R-CNN has already been used to segment the target instance, the subsequent discussion of different soil backgrounds is of little significance.

· Without improving the model, how can its generalization ability be further improved?

Author Response

Please see the attachment. Because the equation cannot be copied into the text box, we uploaded the Word file.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper deserves publication after some improvements.

This paper has interesting methods and results.

The following points should be addressed before proceeding with publication:

· The structure of this research is quite complicated. Please describe the proposed method using a flowchart in which the steps are summarized. This can help users reproduce the method.

· Environmental illuminance has always been an important factor affecting machine vision. Please describe the experiment in detail in the text, for example how the environmental illumination was reduced.

· A more interesting question worth exploring is how to distinguish image degradation caused by UAV vibration from that caused by environmental interference.

Author Response

Please see the attachment. Because the figure cannot be copied into the text box, we uploaded the Word file.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Congrats! The authors have successfully addressed all my comments. Therefore, I recommend the publication of this manuscript.
