Article
Peer-Review Record

Lightweight Network for Corn Leaf Disease Identification Based on Improved YOLO v8s

Agriculture 2024, 14(2), 220; https://doi.org/10.3390/agriculture14020220
by Rujia Li 1, Yadong Li 1, Weibo Qin 2, Arzlan Abbas 2, Shuang Li 1, Rongbiao Ji 1, Yehui Wu 1, Yiting He 1 and Jianping Yang 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 27 December 2023 / Revised: 19 January 2024 / Accepted: 27 January 2024 / Published: 29 January 2024
(This article belongs to the Special Issue Machine Vision Solutions and AI-Driven Systems in Agriculture)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. The article uses the lightweight modules GhostNet, Triplet Attention, and the ECIoU_Loss function to improve the YOLOv8s network. These are all existing methods. Please describe the authors' own contributions and innovations.

2. Please describe how the added improved modules GhostNet, Triplet Attention and ECIoU_Loss function enhance the detection effect?

3. Considering the various factors such as light, angle, occlusion and other conditions, how can the deployment of the improved model on the WeChat developer applet solve the problem of monitoring leaf diseases in actual corn planting fields?

4. There are formatting problems in the article.

(1) Format issues on lines 156, 160, and 215

(2) Figure 4 and Figure 5 should use vector graphics with unified fonts

(3) Issues in Chapter 2.5

(4) Abbreviations appearing for the first time in the paper must be accompanied by the full name. Please refer to the bibliographic abbreviation format requirements.

Comments on the Quality of English Language

The translation of the article is not professional enough; terms such as "target detection" and "data enhancement" should be corrected.

Author Response

  1. The article uses the lightweight modules GhostNet, Triplet Attention, and the ECIoU_Loss function to improve the YOLOv8s network. These are all existing methods. Please describe the authors' own contributions and innovations.

Answer: Our main contribution is improving an existing model with a focus on lightweight design, recognition rate, and detection speed, making it better suited to actual production needs. This combination has not been attempted before, and ablation experiments were conducted for each improvement step, demonstrating the value and necessity of each change. Specifically, we replace the YOLOv8 backbone with the lightweight GhostNet structure; we exchange the head C2f and Conv modules with C3Ghost and GhostNet modules to further simplify the model architecture; we introduce a lightweight attention mechanism, Triplet Attention, to improve the recognition accuracy of the neck-layer outputs; and we replace the original CIoU_Loss with the ECIoU_Loss function, which effectively alleviates the problems related to the aspect-ratio penalty, thereby significantly improving the recognition rate and convergence speed.
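For readers unfamiliar with the backbone swap described above, here is a minimal PyTorch sketch of the Ghost-module idea underlying GhostNet: a small primary convolution produces "intrinsic" feature maps, and cheap depthwise operations generate the remaining "ghost" maps before concatenation. This is an illustrative sketch under assumed hyperparameters (the `ratio` and kernel sizes are hypothetical), not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: primary conv + cheap depthwise 'ghost' features."""
    def __init__(self, c_in, c_out, ratio=2, dw_kernel=3):
        super().__init__()
        primary = c_out // ratio        # intrinsic maps from an ordinary 1x1 conv
        cheap = c_out - primary         # ghost maps from a cheap depthwise conv
        self.primary_conv = nn.Sequential(
            nn.Conv2d(c_in, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap_op = nn.Sequential(
            nn.Conv2d(primary, cheap, dw_kernel, padding=dw_kernel // 2,
                      groups=primary, bias=False),   # depthwise: one filter group per input map
            nn.BatchNorm2d(cheap), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary_conv(x)
        # concatenate intrinsic and ghost feature maps along the channel axis
        return torch.cat([y, self.cheap_op(y)], dim=1)

m = GhostModule(16, 32)
out = m(torch.randn(1, 16, 8, 8))   # output keeps spatial size, doubles channels
```

The parameter saving comes from generating half the output channels with a depthwise convolution instead of a full dense one.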

  2. Please describe how the added improved modules GhostNet, Triplet Attention, and the ECIoU_Loss function enhance the detection effect.

Answer: How the added GhostNet, Triplet Attention, and ECIoU_Loss modules enhance the detection effect is described in detail in Section 2.6, "GhostNet_Triplet_YOLOv8s target detection algorithm". Please refer to the attachment.
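As background for readers without the attachment, the Triplet Attention mechanism referenced above is commonly sketched as three branches, each applying a "Z-pool" (channel-wise max and mean) and a 7×7 convolution along a different pair of tensor dimensions, with the branch outputs averaged. The PyTorch sketch below follows that published design in general, not the authors' code; names are illustrative.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    # (B, C, H, W) -> (B, 2, H, W): concat channel-wise max and mean
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    def __init__(self, kernel=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        # sigmoid attention map broadcast over channels
        return x * torch.sigmoid(self.bn(self.conv(self.pool(x))))

class TripletAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.ch = AttentionGate()   # channel-height interaction
        self.cw = AttentionGate()   # channel-width interaction
        self.hw = AttentionGate()   # plain spatial branch

    def forward(self, x):
        # branch 1: rotate so H occupies the channel axis, attend, rotate back
        x1 = self.ch(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # branch 2: rotate so W occupies the channel axis
        x2 = self.cw(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # branch 3: ordinary spatial attention
        x3 = self.hw(x)
        return (x1 + x2 + x3) / 3.0

att = TripletAttention()
y = att(torch.randn(2, 8, 16, 16))   # shape-preserving attention
```

Because each gate only adds one small 2→1 convolution, the mechanism stays lightweight, which is why it pairs well with a GhostNet backbone.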

  3. Considering the various factors such as light, angle, occlusion and other conditions, how can the deployment of the improved model on the WeChat developer applet solve the problem of monitoring leaf diseases in actual corn planting fields?

Answer: The model was trained on a large number of corn disease images obtained from Kaggle, covering various lighting conditions and backgrounds. We also applied data augmentation techniques such as random cropping, blurring, brightness adjustment, noise addition, and flipping to simulate different angles and occlusion conditions and to enhance the generalization ability of the model. The proposed model takes advantage of the lightweight GhostNet structure and the Triplet Attention mechanism, enabling it to quickly process images uploaded by users to the mini program and return results in real time. The mini-program interface is designed to be user-friendly, so that farmers in the field can conveniently upload images under various lighting and angle conditions; the mini program also provides guidance to help users take better-quality photos in different environments. After processing an uploaded picture, the mini program not only reports the type and occurrence probability of the disease but also provides prevention and treatment suggestions, which helps users make decisions and take timely measures. In these ways, we ensure that the model can accurately and effectively monitor leaf diseases in actual corn planting fields. Please refer to the attachment.
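The augmentation operations listed above (random cropping, brightness adjustment, noise addition, flipping) can be sketched with plain NumPy. This is an illustrative pipeline under assumed parameters (crop size, brightness range, noise sigma are hypothetical), not the authors' actual preprocessing code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # horizontal flip with probability 0.5
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_crop(img, size):
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def adjust_brightness(img, factor):
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def add_noise(img, sigma):
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def augment(img):
    img = random_flip(img)
    img = random_crop(img, 200)                       # assumed crop size
    img = adjust_brightness(img, rng.uniform(0.7, 1.3))
    img = add_noise(img, 5.0)                         # assumed noise level
    return img

sample = rng.integers(0, 256, (224, 224, 3), dtype=np.uint8)
aug = augment(sample)   # 200x200x3 uint8 augmented image
```

Each transform preserves the uint8 value range, so the augmented images can be fed to the same training pipeline as the originals.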

  4. There are formatting problems in the article.

(1) Format issues on lines 156, 160, and 215

(2) Figure 4 and Figure 5 should use vector graphics with unified fonts

(3) Issues in Chapter 2.5

(4) Abbreviations appearing for the first time in the paper must be accompanied by the full name. Please refer to the bibliographic abbreviation format requirements.

Answer: All have been corrected.

The English translation has been corrected, and technical terms and grammatical issues have been checked. Please refer to the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Title: Lightweight Network for Corn Leaf Disease Identification Based on Improved YOLOv8s
Authors: RuJia Li, YaDong Li, Weibo Qin, Arzlan Abbas, Shuang Li, RongBiao Ji, YeHui Wu, Yiting He, JianPing Yang

Summary: This investigation proposes to use YOLOv8s to classify several corn leaf diseases from images. The study is well motivated and well executed. I enjoyed reading this article. It presents a practical solution to an important problem. The study used publicly available Kaggle datasets that the researchers augmented with standard image enhancing techniques.

The main contribution of this investigation is the integration of GhostNet that replaces the backbone network of YOLOv8s. The approach uses the ECIoU_Loss instead of CIOU_Loss to enhance the network convergence rates.

The experimental results show that the approach achieves a precision rate of 87.50%, a recall rate of 87.70%, and an mAP of 91.40%.

What I really appreciate about this study is the compact model size of 11.20 MB. That makes this investigation very practical.

One minor suggestion: the authors should separate Section 3 Results and Discussion into Section 3 Results and Section 4 Discussion. In the results, the authors should simply state and report the results. In the discussion, the authors should state what these results indicate to the research community.

Another minor suggestion is that sections like 3.1, 3.2, and 3.3 should be combined into a single paragraph. It is not worthwhile to introduce a section for a single paragraph.

Author Response

Answer: I have divided Section 3 Results and Discussion into Section 3 Results and Section 4 Discussion.

I have combined subsections like 3.1, 3.2, 3.3 into a single paragraph in section 3.1. Please refer to the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1. It would be good to explain how much the performance improved by augmenting 1407 images to 5763.

   In addition, the proposed technique requires explanation as to how much image augmentation is helpful.

2. It would be good to present a sample of the augmented image above.

3. The authors used a dataset published on Kaggle, but an explanation is needed of how pictures should be taken in order to actually use the proposed technique in the field. That is, the authors should explain how factors such as distance to the target object, angle, background, and illuminance should be handled when photographing to achieve the best performance.

4. In Section 2.3, the authors said that they adopted GhostNet, which has better performance than MobileNetv3, as the backbone, but there is no comparison with MobileNet in Table 3. In other words, as mentioned in the previous section, it would be good to compare performance in actual experiments.

5. The sentence below is considered to be a typo.

  line 165, parameters. the amount --> the amount of parameter

Author Response

  1. It would be good to explain how much the performance improved by augmenting 1407 images to 5763.

   In addition, the proposed technique requires explanation as to how much image augmentation is helpful.

Answer: I have answered this question by comparing model performance before and after augmentation, where the number of images was increased from 1407 to 5763. The image augmentation techniques (such as rotation, scaling, and color adjustment) are now described in detail, along with an explanation of how these techniques increase data diversity, reduce the risk of overfitting, and improve the model's generalization ability. Section 2.1 has been modified accordingly.

Please refer to the attachment

  2. It would be good to present a sample of the augmented image above.

Answer: A sample of the augmented images has been added to Figure 2. Please refer to the attachment.

  3. The authors used a dataset published on Kaggle, but an explanation is needed of how pictures should be taken in order to actually use the proposed technique in the field. That is, the authors should explain how factors such as distance to the target object, angle, background, and illuminance should be handled when photographing to achieve the best performance.

Answer: This study selected four public corn disease datasets from Kaggle, with a total of 1921 images. These images cover sufficiently diverse conditions, such as different target distances, shooting angles, and backgrounds, which are consistent with conditions that may be encountered in an actual farmland environment. We also increase image diversity through data augmentation techniques such as random cropping, blurring, brightness adjustment, noise addition, and flipping, which help the model learn to identify diseases against complex backgrounds. In addition, this article uses the Triplet Attention mechanism to further improve the model's recognition accuracy for leaves photographed from different angles, ensuring that the model can accurately locate and identify diseases no matter how the user holds the mobile phone. For situations where the background is complex and leaves may be partially occluded, the introduction of the ECIoU_Loss function reduces the aspect-ratio penalty, allowing the algorithm to identify and locate disease areas more accurately. Through actual deployment and testing on the WeChat Developer Mini Program, the applicability of the model in actual corn fields has been demonstrated, and we continue to optimize the model through user feedback and field data collection. In short, our research integrates a lightweight neural network structure and an attention mechanism, optimizes the loss function, and deploys the result in a WeChat mini program. These steps not only improve the performance of the algorithm but also ensure its practicability and robustness under actual farmland conditions, thereby providing technological support for agricultural production.

  4. In Section 2.3, the authors said that they adopted GhostNet, which has better performance than MobileNetv3, as the backbone, but there is no comparison with MobileNet in Table 3. In other words, as mentioned in the previous section, it would be good to compare performance in actual experiments.

Answer: I have added MobileNetv3 as the backbone network and GhostNet as the backbone network in the original Table 3 (now Table 4) for comparison. Please refer to the attachment

  5. The sentence below is considered to be a typo.

line 165, parameters. the amount --> the amount of parameter

Answer: Already revised. Please refer to the attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The authors well improved the manuscript by reflecting reviewer's comments.
