Fast Quality Detection of Astragalus Slices Using FA-SD-YOLO
Abstract
1. Introduction
- (1) The C2f-F module, which combines the FasterNet module with C2f, is integrated into the YOLOv8n model. This modification reduces FLOPs (floating-point operations) while maintaining the model's capability to accurately detect the quality of Astragalus slices.
- (2) The AIFI module replaces the SPPF module, allowing the YOLOv8n model to focus more effectively on the key detection features of the target objects (the Astragalus slices), thus improving detection accuracy.
- (3) To further reduce FLOPs, the YOLOv8n head network is replaced with the SD module.
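The FLOPs saving behind the C2f-F module comes from FasterNet's partial convolution (PConv), which applies a regular convolution to only 1/n_div of the channels and passes the rest through untouched. A back-of-the-envelope sketch of the cost ratio (the feature-map sizes below are hypothetical illustrations, not values from the paper):

```python
def conv_flops(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a dense k x k convolution on an h x w map."""
    return h * w * c_in * c_out * k * k

def pconv_flops(h, w, c, k, n_div=4):
    """FasterNet-style partial convolution: only c // n_div channels are convolved."""
    c_p = c // n_div
    return conv_flops(h, w, c_p, c_p, k)

# Hypothetical 80x80 feature map with 64 channels and a 3x3 kernel:
dense = conv_flops(80, 80, 64, 64, 3)
partial = pconv_flops(80, 80, 64, 3)
assert dense == partial * 16  # with n_div = 4, PConv costs 1/16 of a dense conv
```

The (c/n_div)^2 scaling of the convolved channel pair is what lets C2f-F cut computation without shrinking the feature map itself.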
2. Materials and Methods
2.1. Image Enhancement and Construction of Datasets
2.2. Construction of the Astragalus membranaceus Slice Detection Model
2.2.1. YOLOv8 Model
2.2.2. Structure of the C2f-F Module
2.2.3. Construction of AIFI and Focal Modulation Modules
2.2.4. Construction of SD Module
2.3. Experimental Platform
2.4. Evaluation Indicators
3. Results and Analysis
3.1. YOLOv8 Models and Performance Comparison
3.2. Effect of Different C2f-F Module Positions on Model Performance
3.3. Performance of Different Modules Replacing SPPF
3.4. Ablation Test
3.5. Quality Analysis Results of Astragalus membranaceus Slices by Different Detection Models
4. Discussion
5. Conclusions
- (1) Combining the FasterNet module with the C2f module yields the C2f-F module, a substitute for the original C2f module. Experiments on the placement of the C2f-F module (backbone, neck, or both) showed that incorporating it in both the backbone and neck networks increased mean average precision (mAP) by 1.5%. Although the F1 score dipped slightly by 0.3%, computational complexity (FLOPs) fell by 1.8 G, demonstrating that the FasterNet module effectively reduces computational cost while maintaining detection performance.
- (2) The study investigated the effect on detection efficiency of replacing the SPPF module with the AIFI module and with the Focal Modulation module. Substituting SPPF with AIFI improved model performance, raising mAP to 92.9% while reducing FLOPs by 0.1 G, which highlights the AIFI module's effectiveness in feature enhancement. With both the C2f-F and AIFI modules integrated, the model achieved an F1 score of 88.3% and an mAP of 93.1%, improving detection of the quality of Astragalus membranaceus slices.
- (3) Integrating the FasterNet, AIFI, and SD modules into the YOLOv8n model produced the FA-SD-YOLO model, which performed excellently in the quality detection of Astragalus slices. The precision and recall of the new model were 88.6% and 89.6%, respectively, while its F1 score and mAP were 89.1% and 93.2%. Compared with the original YOLOv8n model, precision increased by 1.8%, recall by 1.3%, the F1 score by 1.6%, and mAP by 2.4%, while FLOPs decreased by 43.2%, fully demonstrating the FA-SD-YOLO model's improved detection capability.
- (4) Comparative analysis with mainstream object detection models (YOLOv3-tiny, YOLOv3, YOLOv5s, YOLOv6s, YOLOv9t, YOLOv9s, YOLOv10n, and YOLOv11n) showed that the FA-SD-YOLO model required only 13.8 ms per image, with FLOPs of only 4.6 G, lower than all eight comparison models, and it also achieved the highest mAP.
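The headline figures in item (3) are internally consistent: F1 is the harmonic mean of precision and recall, and mAP is the mean of the per-class AP values reported in the results tables. A quick arithmetic check:

```python
def f1_score(p, r):
    """F1 is the harmonic mean of precision and recall (here in percent)."""
    return 2 * p * r / (p + r)

# FA-SD-YOLO: P = 88.6%, R = 89.6% -> F1 = 89.1%
assert round(f1_score(88.6, 89.6), 1) == 89.1

# mAP is the mean of the five per-class AP values (Normal, Rot, SS, SR, Stem)
per_class_ap = [97.3, 89.1, 93.0, 91.2, 95.4]
assert round(sum(per_class_ap) / len(per_class_ap), 1) == 93.2
```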
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
| Model | Width | mAP (%) | Model Size (MB) | Parameters | FLOPs (G) |
| --- | --- | --- | --- | --- | --- |
| YOLOv8n | 0.25 | 90.8 | 5.96 | 3,006,623 | 8.1 |
| YOLOv8s | 0.50 | 90.8 | 21.4 | 11,127,519 | 28.4 |
| YOLOv8m | 0.75 | 91.0 | 49.6 | 25,842,655 | 78.7 |
| YOLOv8l | 1.00 | 91.4 | 83.5 | 43,610,463 | 164.8 |
| YOLOv8x | 1.25 | 90.4 | 130 | 68,157,423 | 258.1 |
| Model | P (%) | R (%) | F1 (%) | mAP (%) | Model Size (MB) | FLOPs (G) | Layers |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n | 86.8 | 88.3 | 87.5 | 90.8 | 5.95 | 8.1 | 168 |
| Position 1 | 85.7 | 87.3 | 86.5 | 91.6 | 5.26 | 7.0 | 186 |
| Position 2 | 83.0 | 89.8 | 86.3 | 92.2 | 5.30 | 7.4 | 180 |
| Position 1 + Position 2 | 86.9 | 87.4 | 87.2 | 92.3 | 4.61 | 6.3 | 198 |
| Model | P (%) | R (%) | F1 (%) | mAP (%) | Model Size (MB) | FLOPs (G) | Layers |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n | 86.8 | 88.3 | 87.5 | 90.8 | 5.95 | 8.1 | 168 |
| AIFI | 85.4 | 89.1 | 87.2 | 92.9 | 5.81 | 8.0 | 175 |
| Focal Modulation | 87.2 | 86.6 | 86.9 | 92.6 | 6.16 | 8.2 | 175 |
| Model | P (%) | R (%) | F1 Normal (%) | F1 Rot (%) | F1 SS (%) | F1 SR (%) | F1 Stem (%) | AP Normal (%) | AP Rot (%) | AP SS (%) | AP SR (%) | AP Stem (%) | F1 (%) | mAP (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n | 86.8 | 88.3 | 93.8 | 83.4 | 87.0 | 79.0 | 94.4 | 96.8 | 87.1 | 91.2 | 83.9 | 95.1 | 87.5 | 90.8 |
| YOLOv8-F | 86.9 | 87.4 | 92.7 | 81.6 | 84.9 | 80.3 | 96.0 | 96.9 | 89.9 | 92.6 | 87.4 | 94.5 | 87.2 | 92.3 |
| YOLOv8-A | 85.4 | 89.1 | 92.5 | 81.9 | 85.9 | 80.2 | 95.4 | 96.8 | 90.4 | 92.9 | 88.8 | 95.4 | 87.2 | 92.9 |
| YOLOv8-SD | 84.5 | 88.9 | 94.0 | 81.0 | 84.8 | 78.3 | 95.3 | 96.9 | 84.7 | 93.5 | 87.9 | 95.7 | 86.6 | 91.7 |
| YOLOv8-FA | 90.1 | 86.6 | 93.2 | 80.3 | 87.0 | 84.9 | 95.9 | 97.2 | 87.7 | 94.0 | 90.5 | 96.0 | 88.3 | 93.1 |
| FA-SD-YOLO | 88.6 | 89.6 | 93.4 | 84.4 | 86.7 | 85.8 | 94.8 | 97.3 | 89.1 | 93.0 | 91.2 | 95.4 | 89.1 | 93.2 |
| Model | Metric | P | R | F1 | mAP |
| --- | --- | --- | --- | --- | --- |
| YOLOv8n | Mean (%) | 82.8 | 85.4 | 84.1 | 89.3 |
| YOLOv8n | Std | 0.026 | 0.011 | 0.013 | 0.015 |
| YOLOv8n | CV (%) | 3.163 | 1.338 | 1.503 | 1.658 |
| FA-SD-YOLO | Mean (%) | 86.4 | 85.5 | 85.9 | 91.2 |
| FA-SD-YOLO | Std | 0.016 | 0.017 | 0.007 | 0.002 |
| FA-SD-YOLO | CV (%) | 1.801 | 2.756 | 0.752 | 0.219 |
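The stability comparison above summarizes repeated runs with the coefficient of variation (CV = standard deviation / mean, as a percentage); a smaller CV means more consistent results across runs. A minimal helper over hypothetical per-run readings (note the table's Std values appear to be on a 0-1 scale while the means are percentages, so the printed CVs cannot be reproduced exactly from the rounded figures alone):

```python
from statistics import mean, stdev

def coef_of_variation(samples):
    """CV (%): sample standard deviation relative to the sample mean."""
    return stdev(samples) / mean(samples) * 100

# Hypothetical mAP readings (%) from three runs:
assert round(coef_of_variation([82.0, 84.0, 86.0]), 2) == 2.38
```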
| Model | P (%) | R (%) | F1 (%) | mAP (%) | Time (ms/img) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv3-tiny | 84.5 | 88.6 | 86.5 | 90.7 | 12.9 | 18.9 |
| YOLOv3 | 86.6 | 87.0 | 86.8 | 90.8 | 28.1 | 282.2 |
| YOLOv5s | 87.6 | 89.6 | 88.6 | 90.4 | 14.1 | 15.8 |
| YOLOv6s | 84.1 | 88.2 | 86.1 | 91.8 | 13.6 | 11.8 |
| YOLOv9t | 87.9 | 85.5 | 86.7 | 91.3 | 14.3 | 7.6 |
| YOLOv9s | 84.8 | 87.3 | 86.0 | 89.8 | 15.2 | 26.7 |
| YOLOv10n | 84.8 | 86.1 | 85.4 | 91.7 | 14.0 | 6.5 |
| YOLOv11n | 86.0 | 87.8 | 86.8 | 91.8 | 14.6 | 6.3 |
| FA-SD-YOLO | 88.6 | 89.6 | 89.1 | 93.2 | 13.8 | 4.6 |
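For context, the per-image latencies in the comparison table convert directly to single-stream throughput, assuming serial one-image-at-a-time inference (batching or pipelining would change the numbers):

```python
def throughput_fps(ms_per_image):
    """Frames per second implied by a per-image latency in milliseconds."""
    return 1000.0 / ms_per_image

# FA-SD-YOLO's 13.8 ms/img corresponds to roughly 72 images per second
assert round(throughput_fps(13.8)) == 72
```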
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhao, F.; Zhang, J.; Liu, Q.; Liang, C.; Zhang, S.; Li, M. Fast Quality Detection of Astragalus Slices Using FA-SD-YOLO. Agriculture 2024, 14, 2194. https://doi.org/10.3390/agriculture14122194