Study on the Detection Method for Daylily Based on YOLOv5 under Complex Field Environments
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Collection of Daylily
2.1.1. Collection Equipment and Methods
2.1.2. Data Preprocessing
2.1.3. Dataset Labeling
2.2. YOLOv5 Network Model and Evaluation Indicators
2.2.1. YOLOv5 Model Principle and Structure
2.2.2. Experimental Environment Settings
2.2.3. Model Parameter Settings and Evaluation Indicators
3. Experiment and Analysis
3.1. Basic Model Test of Daylily Detection
3.2. Parameter Optimization Experiment of Daylily Detection Model
3.3. Backbone Network Optimization Experiment of Daylily Detection Model
3.4. Visual Analysis of the Detection Results of the Final Optimization Model on the Test Set
4. Discussion
4.1. Analysis of the Difference in Detection Performance of the Final Model for Daylily in Different Growth Stages
4.2. Analysis of the Impact of Different Optimization Methods on Object Detection Performance
5. Conclusions
- (1) The mAP of YOLOv5s reaches 70.2% and its inference speed reaches 178 FPS; compared with the SSD, Faster R-CNN, and YOLOv4 models, it achieves both higher detection accuracy and faster inference;
- (2) Adjusting the depth and width multiples of the YOLOv5s network and optimizing the backbone network can further improve detection precision and inference speed. The best-performing variant is the YOLOv5s model with a Transformer backbone, a network depth multiple of 1.33, and a width multiple of 1.25: its mAP is 78.1% and its inference speed is 93 FPS. Compared with the original YOLOv5s model, its mAP is 7.9 percentage points higher; compared with the CSPDarknet-based YOLOv5s model with the same depth and width settings, its mAP is 0.2 percentage points higher and its inference speed is about 69% higher;
- (3) Under occlusion, overlap, visual blur, bright or dim natural light, changing weather, and other interference, the final optimized YOLOv5 model can still detect daylilies efficiently; the mAP values of the Immature, Pluckable, and Flowering classes reached 85.8%, 85.5%, and 84.8%, respectively. The method is stable and can meet the requirements of daylily picking operations. This study can provide a technical reference for crop detection in similar environments and for the development of intelligent daylily picking.
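As context for the mAP figures quoted above: per-class AP is the area under the precision–recall curve, and mAP is its unweighted mean over classes. A minimal NumPy sketch of all-point AP follows; the function name and the toy inputs are illustrative, not taken from the paper.

```python
import numpy as np

def average_precision(recalls, precisions):
    """All-point AP: area under the monotonic precision-recall envelope."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing from right to left
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Sum rectangle areas where recall actually increases
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Toy case: two ground-truth objects; recall 1.0 is only reached at precision 0.5
print(average_precision([0.5, 1.0], [1.0, 0.5]))  # 0.75
```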
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Classes | Training Set Labels | Validation Set Labels | Test Set Labels | Training Set Images | Validation Set Images | Test Set Images |
---|---|---|---|---|---|---|
Immature | 31,551 | 8048 | 4504 | 2134 | 610 | 304 |
Pluckable | 8901 | 2249 | 1270 | 1301 | 372 | 185 |
Flowering | 8245 | 2081 | 1176 | 1285 | 368 | 183 |
Other | 5401 | 1369 | 768 | 422 | 121 | 60 |
Total | 54,098 | 13,747 | 7718 | 2940 | 840 | 420 |
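The image counts in the Total row correspond to a 7:2:1 train/validation/test split, which can be checked with plain arithmetic (no assumptions about the paper's tooling):

```python
# Image counts from the Total row of the dataset table
splits = {"train": 2940, "val": 840, "test": 420}
total = sum(splits.values())
ratios = {name: count / total for name, count in splits.items()}
print(total, ratios["train"], ratios["val"], ratios["test"])  # 4200 0.7 0.2 0.1
```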
Parameters | Value |
---|---|
Input image resolution | 640 × 640 |
Iterations | 100 |
Batch size | 16 |
Initial learning rate | 0.01 |
Learning rate momentum | 0.937 |
Weight decay coefficient | 5 × 10⁻⁴ |
IoU threshold | 0.2 |
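For reproduction, these settings map onto Ultralytics YOLOv5's hyperparameter file and training flags roughly as follows. The key names below follow the YOLOv5 `hyp.scratch.yaml` convention; treat the exact mapping as an assumption, since the paper does not show its configuration files.

```python
# Sketch: the table's settings expressed as YOLOv5-style hyperparameters
# (key names per Ultralytics hyp.scratch.yaml; assumed, not from the paper).
hyp = {
    "lr0": 0.01,           # initial learning rate
    "momentum": 0.937,     # SGD learning-rate momentum
    "weight_decay": 5e-4,  # weight decay coefficient
    "iou_t": 0.2,          # IoU threshold for target matching
}
train_args = {"imgsz": 640, "epochs": 100, "batch_size": 16}
print(hyp["weight_decay"], train_args["epochs"])  # 0.0005 100
```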
Model | Classes | Precision (%) | Recall (%) | mAP (%) | FPS |
---|---|---|---|---|---|
YOLOv5s | Immature | 73.2 | 76.0 | 78.2 | 178 |
| | Pluckable | 78.5 | 76.6 | 81.7 | |
| | Flowering | 78.0 | 76.1 | 81.0 | |
| | Other | 58.3 | 39.5 | 40.0 | |
| | All | 72.0 | 67.0 | 70.2 | |
YOLOv4 | Immature | 68.0 | 4.6 | 29.0 | 48 |
| | Pluckable | 78.1 | 18.7 | 32.0 | |
| | Flowering | 90.7 | 6.8 | 22.0 | |
| | Other | 0 | 0 | 0 | |
| | All | 59.1 | 6.8 | 21.2 | |
SSD | Immature | 80.0 | 2.2 | 19.0 | 33 |
| | Pluckable | 85.1 | 29.0 | 46.0 | |
| | Flowering | 83.7 | 21.0 | 38.0 | |
| | Other | 0 | 0 | 3.0 | |
| | All | 62.1 | 13.2 | 26.2 | |
Faster R-CNN | Immature | 33.9 | 65.8 | 47.0 | 14 |
| | Pluckable | 49.3 | 64.2 | 59.0 | |
| | Flowering | 36.9 | 71.5 | 61.0 | |
| | Other | 20.6 | 28.0 | 14.0 | |
| | All | 35.2 | 57.1 | 45.3 | |
Model | Depth_Multiple | Width_Multiple | Params (M) |
---|---|---|---|
YOLOv5s | 0.33 | 0.5 | 7.2 |
YOLOv5s1 | 0.33 | 0.25 | 1.9 |
YOLOv5s2 | 0.67 | 0.75 | 21.2 |
YOLOv5s3 | 1.0 | 1.0 | 46.5 |
YOLOv5s4 | 1.33 | 1.25 | 86.7 |
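The parameter counts above follow from how YOLOv5 applies the two multiples: depth_multiple scales the repeat count of each CSP block, and width_multiple scales channel counts, rounded up to a multiple of 8. A sketch of the scaling rule, mirroring the logic of Ultralytics' `parse_model`/`make_divisible` (treat the exact rounding as an assumption):

```python
import math

def scale_depth(n, depth_multiple):
    # Repeat count of a block after depth scaling; at least one repeat survives.
    return max(round(n * depth_multiple), 1) if n > 1 else n

def scale_width(channels, width_multiple, divisor=8):
    # Channel count after width scaling, rounded up to a multiple of `divisor`.
    return math.ceil(channels * width_multiple / divisor) * divisor

# e.g. a 9-repeat C3 block and 1024 output channels under the YOLOv5s4 setting
print(scale_depth(9, 1.33), scale_width(1024, 1.25))  # 12 1280
```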
Model | Classes | Precision (%) | Recall (%) | mAP (%) | FPS |
---|---|---|---|---|---|
YOLOv5s | Immature | 73.2 | 76.0 | 78.2 | 178 |
| | Pluckable | 78.5 | 76.6 | 81.7 | |
| | Flowering | 78.0 | 76.1 | 81.0 | |
| | Other | 58.3 | 39.5 | 40.0 | |
| | All | 72.0 | 67.0 | 70.2 | |
YOLOv5s1 | Immature | 73.4 | 73.2 | 77.0 | 213 |
| | Pluckable | 76.2 | 73.7 | 80.1 | |
| | Flowering | 76.7 | 72.3 | 78.5 | |
| | Other | 54.9 | 32.2 | 34.4 | |
| | All | 70.3 | 62.9 | 67.5 | |
YOLOv5s2 | Immature | 81.7 | 79.4 | 83.5 | 169 |
| | Pluckable | 83.3 | 78.5 | 84.5 | |
| | Flowering | 85.3 | 78.9 | 83.5 | |
| | Other | 72.9 | 47.9 | 51.9 | |
| | All | 80.8 | 71.2 | 75.8 | |
YOLOv5s3 | Immature | 84.7 | 80.1 | 84.3 | 167 |
| | Pluckable | 84.0 | 79.7 | 85.6 | |
| | Flowering | 87.4 | 79.0 | 85.1 | |
| | Other | 75.5 | 49.8 | 55.7 | |
| | All | 82.9 | 72.1 | 77.7 | |
YOLOv5s4 | Immature | 84.4 | 80.8 | 85.4 | 55 |
| | Pluckable | 82.7 | 82.0 | 85.4 | |
| | Flowering | 85.8 | 78.6 | 84.1 | |
| | Other | 78.6 | 49.7 | 56.7 | |
| | All | 83.0 | 72.8 | 77.9 | |
Model | YOLOv4 | SSD | Faster R-CNN | YOLOv5s | YOLOv5s1 | YOLOv5s2 | YOLOv5s3 | YOLOv5s4 |
---|---|---|---|---|---|---|---|---|
Precision (%) | 59.1 | 62.1 | 35.2 | 72.0 | 70.3 | 80.8 | 82.9 | 83.0 |
Precision Change | −12.9 | −9.9 | −36.8 | 0 | −1.7 | +8.8 | +10.9 | +11.0 |
Recall (%) | 6.8 | 13.2 | 57.1 | 67.0 | 62.9 | 71.2 | 72.1 | 72.8 |
Recall Change | −60.2 | −53.8 | −9.9 | 0 | −4.1 | +4.2 | +5.1 | +5.8 |
mAP (%) | 21.2 | 26.2 | 45.3 | 70.2 | 67.5 | 75.8 | 77.7 | 77.9 |
mAP Change | −49.0 | −44.0 | −24.9 | 0 | −2.7 | +5.6 | +7.5 | +7.7 |
FPS | 48 | 33 | 14 | 178 | 213 | 169 | 167 | 55 |
FPS Change | −130 | −145 | −164 | 0 | +35 | −9 | −11 | −123 |
Model | Backbone | Classes | Precision (%) | Recall (%) | mAP (%) | FPS |
---|---|---|---|---|---|---|
YOLOv5s4 | CSPDarknet | Immature | 84.4 | 80.8 | 85.4 | 55 |
| | | Pluckable | 82.7 | 82.0 | 85.4 | |
| | | Flowering | 85.8 | 78.6 | 84.1 | |
| | | Other | 78.6 | 49.7 | 56.7 | |
| | | All | 82.9 | 72.8 | 77.9 | |
YOLOv5s4 | Ghost | Immature | 76.6 | 73.1 | 78.2 | 86 |
| | | Pluckable | 78.6 | 74.3 | 81.1 | |
| | | Flowering | 80.7 | 74.5 | 81.6 | |
| | | Other | 59.7 | 35.4 | 37.4 | |
| | | All | 73.9 | 64.3 | 69.6 | |
YOLOv5s4 | Transformer | Immature | 82.8 | 82.9 | 85.8 | 93 |
| | | Pluckable | 80.9 | 82.3 | 85.5 | |
| | | Flowering | 85.0 | 81.3 | 84.8 | |
| | | Other | 76.9 | 51.2 | 56.2 | |
| | | All | 81.4 | 74.4 | 78.1 | |
YOLOv5s4 | MobileNetv3 | Immature | 64.8 | 67.7 | 68.8 | 161 |
| | | Pluckable | 71.1 | 68.6 | 74.1 | |
| | | Flowering | 68.0 | 71.0 | 74.0 | |
| | | Other | 46.4 | 15.5 | 17.5 | |
| | | All | 62.6 | 57.7 | 58.6 | |
Model | Precision (%) | Precision Change | Recall (%) | Recall Change | mAP (%) | mAP Change | FPS | FPS Change |
---|---|---|---|---|---|---|---|---|
YOLOv5s4 | 82.9 | 0 | 72.8 | 0 | 77.9 | 0 | 55 | 0 |
Transformer + YOLOv5s4 | 81.4 | −1.5 | 74.4 | +1.6 | 78.1 | +0.2 | 93 | +38 |
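The FPS gain of the Transformer backbone can also be expressed in relative terms (illustrative arithmetic only):

```python
# FPS values from the table above
fps_csp, fps_transformer = 55, 93
abs_gain = fps_transformer - fps_csp
rel_gain = 100 * abs_gain / fps_csp
print(abs_gain, round(rel_gain, 1))  # 38 69.1 -> a ~69% relative speedup
```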
Yan, H.; Cai, S.; Li, Q.; Tian, F.; Kan, S.; Wang, M. Study on the Detection Method for Daylily Based on YOLOv5 under Complex Field Environments. Plants 2023, 12, 1769. https://doi.org/10.3390/plants12091769