Correction

Correction: Liao et al. A Lightweight Cotton Verticillium Wilt Hazard Level Real-Time Assessment System Based on an Improved YOLOv10n Model. Agriculture 2024, 14, 1617

1 College of Engineering, South China Agricultural University, Guangzhou 510642, China
2 Key Laboratory of Key Technology on Agricultural Machine and Equipment (South China Agricultural University), Ministry of Education, Guangzhou 510642, China
3 Guangdong Provincial Key Laboratory of Agricultural Artificial Intelligence (GDKL-AAI), Guangzhou 510642, China
4 College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
5 College of Agriculture, South China Agricultural University, Guangzhou 510642, China
6 School of Information Technology & Engineering, Guangzhou College of Commerce, Guangzhou 511363, China
* Authors to whom correspondence should be addressed.
Agriculture 2025, 15(9), 911; https://doi.org/10.3390/agriculture15090911
Submission received: 3 April 2025 / Accepted: 8 April 2025 / Published: 22 April 2025
(This article belongs to the Section Digital Agriculture)
In the original publication [1], there was a mistake in Figure 1 as published. The image counts in the offline processing section were mistakenly printed as 1119 and 1486 instead of 1210 and 1577, respectively. These errors were the result of a calculation mistake. The corrected Figure 1 appears below.
In the original publication, there was a mistake in Table 7 as published. The first, second, and last rows were incorrectly printed with the same data as in Table 6. This occurred due to a failure to update these specific rows when applying the template. The corrected Table 7 appears below.
In the original publication, there was a mistake in Figure 11 as published. In the original submission, the caption stated “The red boxes indicate areas where the leaf and lesion regions are over-segmented, while the yellow boxes highlight areas where the leaf or lesion regions are under-segmented”. However, the figure inadvertently reversed the colors. The corrected Figure 11 appears below.
There was an error in the original publication. For the YOLOv10n-STC-SE model, the parameter reduction of 1.24 M is derived by subtracting 1.60 M from 2.84 M; due to a calculation error, it was incorrectly printed as 1.26 M. The corrected value now reflects the accurate result of our analysis.
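For clarity, the corrected reduction follows directly from the parameter counts quoted above:

\[ 2.84\ \text{M} - 1.60\ \text{M} = 1.24\ \text{M} \]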
A correction has been made to Results and Discussion, Comparative Experiments on Attentional Mechanisms, Paragraph 2:
From the table, it can be observed that with the improvements brought about by the STC module, the F1 score of the baseline model improved by 1.12%. This performance enhancement was mainly attributed to the STC structure’s effective utilization of multi-scale feature information, which enhanced the model’s perceptual ability toward the input image. Additionally, the self-attention mechanism within the STC module strengthened the relationships between cotton leaves, lesions, and the background, achieving better global associations. The parameters and FLOPs were reduced by 1.24 M and 3.8 G, respectively. Building on the STC module, we further introduced the SE, CBAM, CoorAtt, and GAM attention mechanisms to study the impact of hybrid attention mechanisms on the model performance. By incorporating the SE attention mechanism, the F1 and mAP(M) scores increased by 0.85% and 0.8%, respectively, and the model size decreased by 0.44 MB. The CBAM and GAM attention mechanisms showed a decrease in the model segmentation ability, while the CoorAtt attention mechanism provided a slight improvement in the F1 score but was not as effective as the SE mechanism. The SE attention mechanism outperformed the other attention mechanisms. Combining the global self-attention information modeling method of the SW-MSA in the STC with the efficient reweighting transformation method of SENet retained the global feature information brought about by the self-attention mechanism and incorporated the low computational cost characteristics of the SE module’s global context module. This combination improved the model’s instance segmentation capability.
There was an error in the original publication. In the statement ‘Table 9 shows that YOLO-VW has notable benefits in the following areas’, the table reference was mistakenly printed as Table 8. Additionally, the value 48.8% is the result of dividing 1.59 by 3.26, but, due to a calculation error, it was incorrectly printed as 44.2%. The comparison with ‘51.8%, 46.6%, and 17.8% of YOLOv9t’ was added to remedy an earlier omission.
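The corrected percentage follows directly from the division described above:

\[ \frac{1.59}{3.26} \approx 0.4877 \approx 48.8\% \]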
A correction has been made to Results and Discussion, Comparative Experimental Analysis of Different Models, Paragraph 2:
Analysis of the data in the table reveals that YOLO-VW demonstrates significant advantages in terms of the accuracy, lightweight design, and speed. Table 9 shows that YOLO-VW has notable benefits in the following areas:
  • Accuracy: YOLO-VW achieved a segmentation accuracy mAP(M) of 89.2%, which represents improvements of 3.9%, 2.9%, 4.2%, 2.9%, 12.8%, 1.9%, and 2.4% compared to YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, SOLOv2, Mask R-CNN, and the baseline model YOLOv10n, respectively.
  • Lightweight Design: In terms of the weight, parameters, and FLOPs, YOLO-VW was compressed to 25.6%, 21.5%, and 30.4% of YOLOv5s; 29.5%, 24.8%, and 33.9% of YOLOv7-tiny; 56.6%, 48.8%, and 65% of YOLOv8n; 51.8%, 46.6%, and 17.8% of YOLOv9t; 2%, 3.4%, and 4% of SOLOv2; and 2.1%, 3.6%, and 3.8% of Mask R-CNN, respectively. Furthermore, compared to the YOLOv10n baseline model, YOLO-VW achieved compression ratios of 64.4%, 56%, and 66.1% of the original size across these three metrics.
  • Detection Speed: YOLO-VW exhibited excellent performance, achieving 157.98 FPS, an improvement of 21.37 FPS over the original model. This reduction in the computation time is attributable to the improvements in the lightweight modules.
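For reference, the 56% parameter compression ratio relative to the YOLOv10n baseline cited above is consistent with the parameter counts quoted earlier in this correction (1.59 M for YOLO-VW and 2.84 M for YOLOv10n):

\[ \frac{1.59}{2.84} \approx 0.560 = 56\% \]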
There was an error in the original publication. The statement “The results showed that the F1 and mAP(M) of the YOLO-VW model were 88.89% and 89.29%, which were increased by 3.91%, and 2.4%, respectively, compared with the YOLOv10n model. The numbers of parameters and FLOPs were also reduced by 1.59 M and 7.8G, respectively.” contains an error where “89.2%” was mistakenly printed as “89.29%”. This was caused by a printing oversight. Additionally, the preposition “by” should be changed to “to” due to an expression error in the English phrasing.
A correction has been made to Conclusions, Paragraph 1:
Due to the unclear boundary contours of CVW spots and the complex background of leaves, deep learning models are prone to problems such as mis-segmentation, over-segmentation, segmentation boundary errors, and excessive parameter counts, which makes it difficult to achieve a lightweight design and high accuracy simultaneously. With the aim of solving these problems, in this study, a CVW hazard level assessment system based on improved YOLOv10n was proposed:
  • An improved YOLO-VW model, incorporating improved modules such as STC, GhostConv, SE, and SGD, demonstrated improved detection accuracy while reducing the model parameters and computation.
  • The results showed that the F1 and mAP(M) of the YOLO-VW model were 88.89% and 89.2%, which were increased by 3.91% and 2.4%, respectively, compared with the YOLOv10n model. The numbers of parameters and FLOPs were also reduced to 1.59 M and 7.8 G, respectively. Compared with the YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, SOLOv2, and Mask R-CNN models, the YOLO-VW model obtained the greatest accuracy in CVW segmentation with the smallest model size and the fewest parameters.
  • The lightweight CVW hazard level assessment system was deployed in a client-server platform, with an Android smartphone app developed for testing the YOLO-VW and YOLOv10n models; the YOLO-VW model showed a processing time of 2.42 s per image and an accuracy of 85.5%, which was 15% higher than that of the YOLOv10n model.
The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.

Reference

  1. Liao, J.; He, X.; Liang, Y.; Wang, H.; Zeng, H.; Luo, X.; Li, X.; Zhang, L.; Xing, H.; Zang, Y. A Lightweight Cotton Verticillium Wilt Hazard Level Real-Time Assessment System Based on an Improved YOLOv10n Model. Agriculture 2024, 14, 1617.
Figure 1. Overall flow of the CVW hazard level assessment.
Figure 11. The examples of the CVW segmentation detection in different environments: (a) original images. From top to bottom: cloudy, sunny, rainy, dusk, nighttime images taken without flash, and nighttime images taken with flash; (b) segmentation results of YOLOv10n; and (c) segmentation results of YOLO-VW. Note: the black regions represent the background, the green regions represent healthy leaves, and the red regions represent lesions. The red boxes indicate areas where the leaf and lesion regions are over-segmented, while the yellow boxes highlight areas where the leaf or lesion regions are under-segmented.
Table 7. Comparison results of different optimizers.

Model    | Optimizer | P (%) | R (%) | F1 (%) | mAPM@0.5 (%) | Weight/MB | Parameters/M | FLOPs/G
YOLO-VW  | Adam      | 90.4  | 85.3  | 87.78  | 88.3         | 3.70      | 1.59         | 7.8
         | AdamW     | 89.0  | 85.0  | 86.95  | 88.3         | 3.70      | 1.59         | 7.8
         | SGD       | 92.1  | 85.9  | 88.89  | 89.2         | 3.69      | 1.59         | 7.8
