Article
Peer-Review Record

Light-YOLO: A Lightweight and Efficient YOLO-Based Deep Learning Model for Mango Detection

Agriculture 2024, 14(1), 140; https://doi.org/10.3390/agriculture14010140
by Zhengyang Zhong 1,2, Lijun Yun 1,2,*, Feiyan Cheng 1,2, Zaiqing Chen 1,2 and Chunjie Zhang 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 21 December 2023 / Revised: 13 January 2024 / Accepted: 16 January 2024 / Published: 18 January 2024
(This article belongs to the Section Digital Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This is an interesting, well-written article on the development of mango detection using advanced YOLO deep-learning technology. I have some comments:

1. Introduction. Please provide sufficient reasoning for selecting mangoes as your target. Some mango varieties have very high economic value and are appropriate for establishing this robotic harvesting technology.

2. Materials and Methods. Please provide the GPS location of the orchard used for mango sampling.

3. Materials and Methods. The size of the images is 500 × 500 pixels, which is lower than the sizes reported in Table 1 for mango images. Please explain.

4. Materials and Methods. How did the authors separate the images into training and validation sets? Was a specific algorithm used, or was the split random? (See the sketch after these comments.)

5. Results. Figures 2–8 (except Figure 3) could be enlarged to be more visible.

6. Results. There is no discussion of the figures of merit (accuracy, or P, and recall rate, or R).

7. The accuracy of detection depends on the level of significance. The authors are suggested to provide ROC results if possible; a brief sketch of both the data split and an ROC computation follows these comments.
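A minimal sketch of what comments 4 and 7 ask about, assuming scikit-learn is available; the file names, split ratio, labels, and confidence scores are hypothetical placeholders rather than values from the paper:

```python
# Minimal illustrative sketch (hypothetical data): a seeded random
# train/validation split and an ROC curve from per-detection confidences.
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

# Hypothetical list of annotated mango image files.
image_paths = [f"mango_{i:04d}.jpg" for i in range(1000)]

# Random 80/20 split, reproducible through a fixed seed.
train_imgs, val_imgs = train_test_split(image_paths, test_size=0.2, random_state=42)
print(len(train_imgs), len(val_imgs))  # 800 200

# For an ROC curve, each detection is labelled 1 if it matches a ground-truth
# mango (e.g. IoU >= 0.5) and 0 otherwise, paired with its confidence score.
y_true = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]                                 # hypothetical match labels
y_score = [0.95, 0.90, 0.85, 0.80, 0.70, 0.65, 0.40, 0.35, 0.30, 0.10]  # hypothetical confidences

fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"ROC AUC = {auc(fpr, tpr):.3f}")
```

Fixing the random seed makes a purely random split reproducible, which is one way the authors could address comment 4.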

Author Response

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The paper proposes a lightweight model named Light-YOLO, based on the Darknet53 structure, to detect mango fruits, which present a challenge because their varied hues are similar to those of the branches and leaves.

The introduction presents the domain and several state-of-the-art fruit detection AI models and their results, summarized in a table.

The various versions of the YOLO architecture and their development are analysed, with the purpose of presenting the paper's focus on improving efficiency through a redesign of the down-sampling block, the bottleneck structure, and other layers. The model is evaluated on accuracy, recall, mean average precision (mAP), Params, average time, and FLOPs, in comparison with 5 other YOLO models, showing small improvements in both time efficiency and accuracy. It would be useful to know why those specific models were selected for comparison and not others, and also whether a comparison with the other state-of-the-art models presented in the introduction should be included.

The impact of the results and the limitations of the study/experiments are then discussed in a separate section, followed by the conclusions. Please also present in the conclusions some steps for future work based on the limitations of this study. The references are well chosen and include sufficient recent works.

The paper is very well written overall, congratulations on the work!

Comments on the Quality of English Language

Small typos and formatting errors need checking:

- Reformulate abstract sentences such as:

“To swiftly and precisely detect mangoes in natural environment, mitigating instances of false or missed detection.” So maybe specify that the aim of this paper is to …

- Citations are not consistent (both author–year and numbers are used, without parentheses, etc.). Please check the required citation style and use it. For example, the following sentence is not acceptable with those numbers in the middle without brackets: “Widely recognized attention mechanisms encompass the spatial attention mechanism 32-33, channel attention mechanism 34-35, and the mixed attention mechanism 36-37.”

Author Response

Author Response File: Author Response.pdf
