Research on the Behavior Recognition of Beef Cattle Based on the Improved Lightweight CBR-YOLO Model Based on YOLOv8 in Multi-Scene Weather
Simple Summary
Abstract
1. Introduction
- The Inner-MPDIoU loss is proposed, which captures the fine details of cattle, alleviates the problems of bounding box regression and dataset imbalance, improves computational efficiency, and enhances model interpretability through the chain rule (see the loss sketch after this list).
- A novel Multi-Convolutional Focused Pyramid (MCFP) module is proposed. Through a pyramid-style diffusion mechanism, the module fuses features at multiple scales into rich contextual information, enabling the network to learn in depth the detailed features of cattle in different states.
- A new detection head, the Lightweight Multi-Scale Feature Fusion Detection Head (LMFD), is designed to take full advantage of depthwise separable convolution without increasing computational complexity, so that the model achieves richer expressiveness and stronger feature representation while remaining computationally efficient (see the convolution sketch after this list).
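The following is a minimal PyTorch sketch of how an Inner-MPDIoU loss can be assembled from the cited MPDIoU and Inner-IoU definitions [16,17]: the IoU term is computed on ratio-scaled auxiliary ("inner") boxes, and the MPDIoU corner-distance penalty is normalized by the image diagonal. The function name, the default `ratio`, and the exact way the two terms are combined are illustrative assumptions; the authors' implementation may differ in detail.

```python
import torch

def inner_mpdiou_loss(pred, target, img_w, img_h, ratio=0.7, eps=1e-7):
    """Sketch of an Inner-MPDIoU box loss (illustrative, not the authors' exact code).

    pred, target: (N, 4) boxes in (x1, y1, x2, y2) corner format.
    ratio: Inner-IoU scale factor for the auxiliary boxes.
    """
    # Centres and half-sizes of predicted and ground-truth boxes.
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_g, cy_g = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    hw_p, hh_p = (pred[:, 2] - pred[:, 0]) / 2, (pred[:, 3] - pred[:, 1]) / 2
    hw_g, hh_g = (target[:, 2] - target[:, 0]) / 2, (target[:, 3] - target[:, 1]) / 2

    # Inner (auxiliary) boxes: same centres, side lengths scaled by `ratio`.
    in_p = torch.stack([cx_p - hw_p * ratio, cy_p - hh_p * ratio,
                        cx_p + hw_p * ratio, cy_p + hh_p * ratio], dim=1)
    in_g = torch.stack([cx_g - hw_g * ratio, cy_g - hh_g * ratio,
                        cx_g + hw_g * ratio, cy_g + hh_g * ratio], dim=1)

    # IoU of the auxiliary boxes (Inner-IoU term).
    lt = torch.max(in_p[:, :2], in_g[:, :2])
    rb = torch.min(in_p[:, 2:], in_g[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (in_p[:, 2] - in_p[:, 0]) * (in_p[:, 3] - in_p[:, 1])
    area_g = (in_g[:, 2] - in_g[:, 0]) * (in_g[:, 3] - in_g[:, 1])
    inner_iou = inter / (area_p + area_g - inter + eps)

    # MPDIoU penalty: squared distances between matching corners,
    # normalised by the squared image diagonal.
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    mpdiou = inner_iou - d1 / norm - d2 / norm

    return 1.0 - mpdiou  # per-box loss
```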
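Similarly, the depthwise separable convolution that the LMFD head builds on can be sketched as below. This is a generic illustration of the building block rather than the authors' exact LMFD structure: a per-channel k × k depthwise convolution followed by a 1 × 1 pointwise convolution, which costs roughly 1/k² + 1/C_out of a standard convolution of the same shape.

```python
import torch.nn as nn

class DWSeparableConv(nn.Module):
    """Depthwise separable convolution block of the kind a lightweight
    detection head can be assembled from (illustrative sketch)."""

    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        # Per-channel spatial convolution (groups == in_channels).
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        # 1x1 pointwise convolution for channel mixing.
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))
```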
2. Materials and Methods
2.1. Materials
2.1.1. Data Source
2.1.2. Data Set Construction
2.2. Method
2.2.1. Cattle Behavior Recognition-YOLO (CBR-YOLO)
2.2.2. StarNet
2.2.3. SPPF-LSKA
2.2.4. Multi-Convolutional Focused Pyramid Module
2.2.5. Lightweight Multi-Scale Feature Fusion Detection Head
2.2.6. Inner-MPDIoU Loss
3. Results and Analysis
3.1. Experimental Platform and Parameter Setting
3.2. Analysis and Accuracy Evaluation of Cattle Identification Results
3.2.1. Evaluation Indicators
3.2.2. Comparative Experiments of Different Models
3.3. Ablation Experiment
3.3.1. The Influence of the Improved Module on the Algorithm
3.3.2. Heat Map Visualization Analysis
3.3.3. Visualization of Feature Map
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Boopathi Rani, R.; Wahab, D.; Dung, G.B.D.; Seshadri, M.R.S. Cattle Health Monitoring and Tracking System. In International Conference on VLSI, Communication and Signal Processing; Springer Nature: Singapore, 2022; pp. 789–795. [Google Scholar]
- Noe, S.M.; Zin, T.T.; Tin, P.; Kobayashi, I. Automatic detection and tracking of mounting behavior in cattle using a deep learning-based instance segmentation model. Int. J. Innov. Comput. Inf. Control. 2022, 18, 211–220. [Google Scholar] [CrossRef]
- Noinan, K.; Wicha, S.; Chaisricharoen, R. The IoT-Based Weighing System for Growth Monitoring and Evaluation of Fattening Process in Beef Cattle Farm. In Proceedings of the 2022 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON), Chiang Rai, Thailand, 26–28 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 384–388. [Google Scholar]
- Kim, J.; Moon, N. Dog Behavior Recognition Based on Multimodal Data from a Camera and Wearable Device. Appl. Sci. 2022, 12, 3199. [Google Scholar] [CrossRef]
- Wu, Y.; Liu, M.; Peng, Z.; Liu, M.; Wang, M.; Peng, Y. Recognising Cattle Behaviour with Deep Residual Bidirectional LSTM Model Using a Wearable Movement Monitoring Collar. Agriculture 2022, 12, 1237. [Google Scholar] [CrossRef]
- Sun, G.; Shi, C.; Liu, J.; Ma, P.; Ma, J. Behavior Recognition and Maternal Ability Evaluation for Sows Based on Triaxial Acceleration and Video Sensors. IEEE Access 2021, 9, 65346–65360. [Google Scholar] [CrossRef]
- Zhu, L.; Geng, X.; Li, Z.; Liu, C. Improving YOLOv5 with Attention Mechanism for Detecting Boulders from Planetary Images. Remote Sens. 2021, 13, 3776. [Google Scholar] [CrossRef]
- Qiao, Y.; Guo, Y.; He, D. Cattle Body Detection Based on YOLOv5-ASFF for Precision Livestock Farming. Comput. Electron. Agric. 2023, 204, 107579. [Google Scholar] [CrossRef]
- Jocher, G. Ultralytics YOLOv8. Available online: https://github.com/ultralytics/ultralytics (accessed on 30 January 2024).
- Varghese, R.M.S. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, Tamil Nadu, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
- Li, X.; Cai, C.; Zhang, R.; Ju, L.; He, J. Deep Cascaded Convolutional Models for Cattle Pose Estimation. Comput. Electron. Agric. 2019, 164, 104885. [Google Scholar] [CrossRef]
- Ma, X.; Dai, X.; Bai, Y.; Wang, Y.; Fu, Y. Rewrite the Stars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024. [Google Scholar]
- Lau, K.W.; Po, L.-M.; Rehman, Y.A.U. Large Separable Kernel Attention: Rethinking the Large Kernel Attention Design in CNN. Expert Syst. Appl. 2024, 236, 121352. [Google Scholar] [CrossRef]
- Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; Hu, S.-M. Visual Attention Network. Comput. Vis. Media 2023, 9, 733–752. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
- Ma, S.; Xu, Y. MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv 2023, arXiv:2307.07662. [Google Scholar]
- Zhang, H.; Xu, C.; Zhang, S. Inner-IoU: More Effective Intersection over Union Loss with Auxiliary Bounding Box. arXiv 2023, arXiv:2311.02877. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Jocher, G.R.; Stoken, A.; Chaurasia, A.; Borovec, J.; NanoCode; TaoXie; Kwon, Y.; Michael, K.; Liu, C.; Fang, J.; et al. Ultralytics/Yolov5: V6.0—YOLOv5n “Nano” Models, Roboflow Integration, TensorFlow Export, OpenCV DNN Support. 2021. Available online: https://ui.adsabs.harvard.edu/abs/2021zndo...5563715J/abstract (accessed on 15 March 2024).
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 7464–7475. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 618–626. [Google Scholar]
| Data Set | Image Quantity | Standing | Walking | Eating | Lying |
|---|---|---|---|---|---|
| Training set | 3805 | 3159 | 3025 | 2561 | 2943 |
| Validation set | 621 | 554 | 315 | 279 | 312 |
| Test set | 625 | 571 | 343 | 230 | 337 |
| All | 5051 | 4284 | 3683 | 3070 | 3592 |
| Environment Configuration | Parameters |
|---|---|
| GPU | 2 × A100 (80 GB) |
| CPU | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40 GHz |
| Development environment | PyCharm 2023.2.5 |
| Language | Python 3.8.10 |
| Framework | PyTorch 2.0.1 |
| Operating platform | CUDA 11.8 |
| Operating system | Linux |
| Models | P (%) | mAP (%) | Recall (%) | FLOPs (G) | Parameters |
|---|---|---|---|---|---|
| SSD [18] | 79.9 | 80.1 | 76.2 | 206.6 | 4.48 × 10⁷ |
| Faster R-CNN [19] | 82.8 | 82.0 | 82.7 | 310.7 | 2.47 × 10⁷ |
| YOLOv3 [20] | 77.0 | 78.5 | 76.1 | 15.6 | 8.67 × 10⁶ |
| YOLOv3-tiny | 76.9 | 65.1 | 68.8 | 12.9 | 6.93 × 10⁶ |
| YOLOv5n [21] | 82.2 | 81.6 | 75.7 | 4.2 | 1.76 × 10⁶ |
| YOLOv6 [22] | 78.7 | 78.6 | 69.0 | 11.1 | 4.23 × 10⁶ |
| YOLOv7-tiny [23] | 77.3 | 77.6 | 78.0 | 13.2 | 6.0 × 10⁶ |
| YOLOv8n | 83.5 | 82.8 | 75.9 | 8.7 | 3.00 × 10⁶ |
| YOLOv8s | 84.1 | 84.6 | 82.9 | 28.6 | 1.12 × 10⁷ |
| YOLOv8m | 85.7 | 85.1 | 83.9 | 78.9 | 2.59 × 10⁷ |
| YOLOv9 | 81.4 | 80.9 | 76.8 | 26.7 | 6.0 × 10⁷ |
| YOLOv10 [24] | 81.1 | 81.8 | 76.2 | 8.2 | 2.69 × 10⁶ |
| CBR-YOLO | 90.7 | 90.2 | 84.3 | 4.8 | 1.40 × 10⁶ |
| Models | Lying Precision (%) | Standing Precision (%) | Eating Precision (%) | Walking Precision (%) |
|---|---|---|---|---|
| Faster R-CNN | 84.2 | 77.9 | 87.4 | 81.7 |
| YOLOv8n | 84.2 | 79.6 | 88.8 | 81.3 |
| YOLOv8s | 84.6 | 80.7 | 87.5 | 83.4 |
| CBR-YOLO | 91.2 | 86.5 | 95.5 | 89.5 |
| Models | Lying mAP (%) | Standing mAP (%) | Eating mAP (%) | Walking mAP (%) |
|---|---|---|---|---|
| Faster R-CNN | 77.8 | 81.2 | 86.7 | 82.5 |
| YOLOv8n | 78.9 | 81.6 | 86.5 | 83.6 |
| YOLOv8s | 80.6 | 83.2 | 87.8 | 86.5 |
| CBR-YOLO | 86.9 | 88.9 | 93.4 | 91.5 |
Model | Inner-MPD IoU | StarNet | LSKA | MCFP | LMFD | mAP@0.5/% | Precision/% | Parameters/M | FLOPs/G |
---|---|---|---|---|---|---|---|---|---|
1 | 82.8 | 83.5 | 3.01 | 8.7 | |||||
2 | √ | 84.1 | 84.9 | 3.01 | 8.7 | ||||
3 | √ | √ | 84.3 | 86.8 | 2.36 | 6.5 | |||
4 | √ | √ | √ | 87.3 | 87.7 | 2.43 | 7.2 | ||
5 | √ | √ | 87.5 | 87.8 | 3.20 | 10.1 | |||
6 | √ | √ | √ | 88.0 | 88.1 | 3.47 | 10.4 | ||
7 | √ | √ | √ | 89.6 | 90.1 | 1.66 | 6.0 | ||
8 | √ | √ | √ | √ | 90.1 | 90.9 | 1.73 | 6.1 | |
Ours | √ | √ | √ | √ | √ | 90.2 | 90.7 | 1.4 | 5.2 |
| StarNet | C2f | MCFP | SPPF | mAP@0.5 (%) | Precision (%) |
|---|---|---|---|---|---|
| √ |  |  |  | 83.1 | 84.8 |
|  | √ |  |  | 82.9 | 84.1 |
|  |  | √ |  | 81.4 | 84.6 |
|  |  |  | √ | 84.7 | 85.9 |
| Detection Head | Parameters | FLOPs (G) | Precision (%) |
|---|---|---|---|
| YOLOv8n Detect | 7.52 × 10⁵ | 8.9 | 83.5 |
| LMFD | 1.12 × 10⁵ | 6.7 | 84.9 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).