Article
Peer-Review Record

RDE-YOLOv7: An Improved Model Based on YOLOv7 for Better Performance in Detecting Dragon Fruits

Agronomy 2023, 13(4), 1042; https://doi.org/10.3390/agronomy13041042
by Jialiang Zhou 1, Yueyue Zhang 1 and Jinpeng Wang 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 15 March 2023 / Revised: 28 March 2023 / Accepted: 30 March 2023 / Published: 31 March 2023
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)

Round 1

Reviewer 1 Report

 

RDE-YOLOv7: An Improved Model Based on YOLOv7 for Better Performance in Detecting Dragon Fruits

 

This paper proposes a dragon fruit detection method based on RDE-YOLOv7 to more accurately identify and locate dragon fruit. RepGhost modules and a decoupled head are introduced into YOLOv7 to better extract features and predict results. Overall, the experimental results show that the presented model can perform detection and localisation in real-world environments.
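For context, the decoupled head mentioned above predicts classification and box regression with separate branches instead of a single shared convolution. The following is a minimal, generic PyTorch-style sketch of that idea; the channel sizes, activations, and anchor count are assumptions, and this is not the authors' RDE-YOLOv7 implementation.

    # Generic sketch of a decoupled detection head (not the authors' code).
    import torch
    import torch.nn as nn

    class DecoupledHead(nn.Module):
        """Separate classification and regression branches for one feature level."""

        def __init__(self, in_channels: int, num_classes: int, num_anchors: int = 3):
            super().__init__()
            self.stem = nn.Conv2d(in_channels, in_channels, kernel_size=1)
            self.cls_branch = nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU())
            self.reg_branch = nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.SiLU())
            self.cls_pred = nn.Conv2d(in_channels, num_anchors * num_classes, 1)
            self.box_pred = nn.Conv2d(in_channels, num_anchors * 4, 1)
            self.obj_pred = nn.Conv2d(in_channels, num_anchors * 1, 1)

        def forward(self, x: torch.Tensor):
            x = self.stem(x)
            cls_out = self.cls_pred(self.cls_branch(x))   # class scores
            reg_feat = self.reg_branch(x)
            box_out = self.box_pred(reg_feat)             # box offsets
            obj_out = self.obj_pred(reg_feat)             # objectness
            return cls_out, box_out, obj_out

Separating the branches lets classification and localisation features specialise, which is the usual motivation for decoupled heads in YOLO-style detectors.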

 

Comments: The idea of this paper is quite straightforward. Although the research background is clearly stated, the limited literature covered in the related-work review restricts the comparison between the proposed work and other methods. In addition, the novelty of this paper is considered low; the authors should better clarify the novelty of their method. Moreover, the authors claim that their method can work under harvesting conditions, while only a very limited experiment was conducted in an extremely simplified environment. The authors should consider testing their method in a real environment to convince the reader. Finally, the English needs to be significantly improved.

 

The details of the comments are shown below:

 

1. “These methods for detecting dragon fruits cannot applying to picking robots, because they only detect dragon fruits. In real scenes, the diverse postures of dragon fruit make it difficult to pick.” What does this sentence mean? That the fruit can be detected, but the detection cannot be used for picking?

 

2. Can this model only be used to detect pitaya fruit? If a comparable improvement can also be achieved on other categories of detection targets, please show it.

 

3. The final robot-picking section should be supplemented with experiments using the original YOLOv7, such as a comparison of positioning accuracy.

 

4. In addition, this article improves the network for the dragon fruit picking robot, and the recognition results shown are for situations where the target is relatively obvious. For the recognition of blurry images, it is entirely possible to achieve good results simply by adding corresponding augmentations to the dataset (an illustrative sketch of such augmentations is given after this comment list). In general, this article offers no truly compelling innovation.

 

5. The authors should include more literature in the related-work comparison. Several related papers are given below:

1. “Geometry-aware fruit grasping estimation for robotic harvesting in apple orchards”

2. “Intelligent robots for fruit harvesting: Recent developments and future challenges”
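Regarding comment 4, the kind of dataset augmentation alluded to could look like the minimal sketch below: purely photometric transforms (blur, colour jitter) that degrade image quality without moving objects, so existing bounding-box labels remain valid. This is a hypothetical torchvision pipeline for illustration only, not the training code used in the paper.

    # Hypothetical photometric augmentation pipeline (illustration only, not the
    # authors' training code). Blur and colour jitter degrade image quality but
    # do not move objects, so bounding-box labels stay unchanged.
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomApply(
            [transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.5),
        transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
        transforms.ToTensor(),
    ])
    # tensor_image = augment(pil_image)  # applied per image during training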

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

This paper describes a YOLOv7-based object detection algorithm to detect dragon fruit, enabling automatic picking of such fruit by robots. The approach relies on a modified version of the original YOLOv7 architecture, achieves slight improvements in terms of mAP, and applies the methodology directly in the field.

1. Some forms, such as the possessive (e.g., row 29) or “we have tried some tricks” (row 234), are not used in scientific papers. Please revise the style.

2. The Introduction section contains an extremely concise background on the existing state of the art. This must definitely be improved by adding more context. Please include the following relevant citations:

a.       https://doi.org/10.1016/j.compag.2019.01.012

b.       https://doi.org/10.1016/j.compag.2023.107757

c.       https://doi.org/10.3390/agronomy12020319

3.       Section 2:

a. I cannot find a categorization of the different shooting distances used for data acquisition. Please describe them or, if such information is not available, explain why.

b. I don’t think that details such as “labels saved in .txt” and the naming convention are relevant in the context of the paper. Please remove non-relevant details and stress the relevant ones.

4. Rows 134–135: the authors state that YOLOv7 may have generic problems when detecting different objects and different scenes. Please add a reference supporting this assumption.

5. Please check Figures 2–5 for correctness. Furthermore, Section 2.3 is too long, and much of the narrative can be moved into the figures’ captions.

6.       Row 202: typo in “RDE-YOLOv4”.

7. The evaluation metrics considered for object detection are well known to any DL practitioner; please shorten Section 2.4 (the standard definitions are recalled after this comment list for reference).

8. Improve the sharpness of Figure 7.

9. Check the formatting of Table 4 and the subsequent paragraph.

10. Improvements achieved in terms of inference time should be clearly highlighted, as the authors propose this new version of YOLOv7 for use on agricultural robots, which may be used for high-throughput phenotyping and therefore impose constraints on data throughput and power efficiency (a generic way to measure inference time is sketched below).
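The following is a hypothetical latency-measurement helper, averaging the forward-pass time over repeated runs after a short warm-up; the model and image objects are placeholders, and this is not the benchmarking code used by the authors.

    # Hypothetical latency benchmark (illustration only): average forward-pass
    # time of a detector over repeated runs after a short warm-up.
    import time
    import torch

    @torch.no_grad()
    def mean_inference_ms(model, image, runs: int = 100) -> float:
        model.eval()
        for _ in range(10):              # warm-up iterations
            model(image)
        if torch.cuda.is_available():
            torch.cuda.synchronize()     # wait for queued GPU work to finish
        start = time.perf_counter()
        for _ in range(runs):
            model(image)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        return (time.perf_counter() - start) * 1000.0 / runs

For reference, the standard metrics mentioned in comment 7 follow the conventional definitions (these are textbook formulas, not values taken from the paper):

    \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
    \mathrm{Recall} = \frac{TP}{TP + FN}

    AP = \int_0^1 P(R)\,\mathrm{d}R, \qquad
    mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i

where TP, FP, and FN are counted at a fixed IoU threshold, P(R) is precision as a function of recall, and N is the number of classes.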

 

Overall, the paper can be considered for publication after a major revision. I strongly suggest performing a complete grammar and style check.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I thank the authors for their revisions to the manuscript; the paper is much clearer now, and I can see the value of this work for dragon fruit harvesting robots.

Reviewer 2 Report

The authors answered all the comments, improving the manuscript's quality.

I just recommend checking Figure 7, as it may not fit within the required borders.

However, the manuscript can be considered for publication in Agronomy.
