An Improved Pig Counting Algorithm Based on YOLOv5 and DeepSORT Model
Abstract
1. Introduction
2. Data Collection and Dataset Production
- Effective data screening: Because pigs did not pass through the channel continuously while the videos were being recorded, we used the VSDC Free Video Editor (x32) to clip the periods in which pigs passed through, and we manually counted the pigs in these clips, obtaining a total of 130 video segments.
- Pig detection dataset creation: We first used a Python 3.8 script to randomly extract 1500 .jpg pig images from the 130 video segments, and then manually labeled the pigs in the images with the open-source tool labelImg. Finally, the dataset was divided into training, testing, and validation sets at an 8:1:1 ratio to train the YOLOv5xpig pig detection algorithm.
- Pig tracking dataset creation: First, we randomly selected 20 of the 130 video segments as the pig tracking dataset. Then, to reduce the manual annotation effort, we used the YOLOv5xpig pig detection algorithm to crop every pig in each frame of each segment into a .jpg image. After manual screening and calibration, all images of the same pig were placed in the same folder, yielding a pig tracking dataset of 197 pigs, as shown in Figure 2.
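The 8:1:1 split described above can be sketched in a few lines of Python; the file names and fixed seed here are illustrative, not from the paper.

```python
import random

def split_dataset(image_paths, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle the labeled images and split them into training,
    testing, and validation sets (8:1:1 by default)."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    train = paths[:n_train]
    test = paths[n_train:n_train + n_test]
    val = paths[n_train + n_test:]
    return train, test, val

# 1500 labeled pig images, as in the paper (hypothetical file names)
images = [f"pig_{i:04d}.jpg" for i in range(1500)]
train, test, val = split_dataset(images)
```

With 1500 images this yields 1200 training, 150 testing, and 150 validation images.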
3. Design of MPC-YD Algorithm
3.1. MPC-YD Algorithm Architecture
3.2. YOLOv5xpig Pig Counting Object Detection Algorithm
3.3. Pig Counting and Reidentification Network
3.4. Pig Tracking Method Based on Spatial State Correction
- (1) Read all pig IDs from the current and previous frames, and use the positions and IDs of the pigs in both frames to identify new and lost pigs.
- (2) Use the eight-dimensional spatial state to determine whether a new pig ID needs to be corrected: calculate R using Formula (4); if R > 0, there is a complete pig that is not close to the edge of the image, and the pig's ID needs to be corrected.
- (3) Read the information of all lost pigs, use the Euclidean distance to find the lost pig closest to the new pig, and replace the new pig's ID with that old pig's ID.
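The three correction steps above can be sketched as follows. The `is_complete` predicate stands in for the paper's R > 0 test (Formula (4) is not reproduced here), and the center-point representation of each pig is an assumption for illustration.

```python
import math

def correct_ids(prev, curr, is_complete):
    """Spatial-state ID correction (sketch).

    prev, curr: dicts mapping pig ID -> (cx, cy) bounding-box center
    in the previous and current frames.
    is_complete: placeholder for the paper's R > 0 criterion
    (a complete pig not close to the image edge).
    Returns curr with new IDs replaced by matching lost IDs.
    """
    new_ids = set(curr) - set(prev)    # step (1): pigs appearing this frame
    lost_ids = set(prev) - set(curr)   # step (1): pigs that disappeared
    corrected = dict(curr)
    for nid in new_ids:
        # step (2): only correct IDs of complete, non-edge pigs
        if not (lost_ids and is_complete(curr[nid])):
            continue
        # step (3): nearest lost pig by Euclidean distance of centers
        old = min(lost_ids, key=lambda lid: math.dist(prev[lid], curr[nid]))
        corrected[old] = corrected.pop(nid)  # replace new ID with old ID
        lost_ids.discard(old)
    return corrected
```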
3.5. Pig Counting Method Based on Frame Number and Detection Area Judgment
- (1) To record all pig information for counting calibration and judgment, a two-dimensional list (List) stores, in chronological order, the information of every pig after spatial-state correction: the pig number, the total number of frames in which the pig has appeared in the current video, the aspect ratio and detection height of the pig's bounding box, and the coordinate of the detection center.
- (2) The average frame number and the maximum detection area of the pigs in the target tracking dataset are calculated from List.
- (3) Formula (5) is used to determine whether the current pig should be counted. If N > 0, the count is considered valid (the same pig has appeared with a complete body in multiple frames) and the pig is counted; the counted pig's information is then recorded in List1.
- (4) The change in the detection-center coordinate recorded in List and List1 is used to determine whether a counted pig has turned back. If a pig disappears from view and then turns back, the counting result is adjusted according to the number of times that pig has turned back, ensuring counting accuracy.
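The counting judgment and turn-back correction above can be sketched as follows. The two threshold parameters stand in for the paper's N > 0 criterion (Formula (5) is not reproduced), and the values used are illustrative, not the paper's calibrated ones.

```python
def should_count(appearances, max_area, min_frames, min_area):
    """Placeholder for the paper's N > 0 test (Formula (5)): a pig is
    counted only if it appeared in enough frames (average-frame-number
    threshold) with a sufficiently large, i.e. complete, detection area."""
    return appearances >= min_frames and max_area >= min_area

def count_turn_backs(center_coords):
    """Count direction reversals of the detection-center coordinate along
    the walking direction; used in step (4) to subtract turn-backs from
    the total so a pig re-entering the view is not counted twice."""
    diffs = [b - a for a, b in zip(center_coords, center_coords[1:]) if b != a]
    return sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
```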
4. Evaluation Metrics and Experimental Process
4.1. Experimental Environment and Training Parameters
4.2. Evaluation Metrics for the MPC-YD Pig Counting Algorithm
4.3. Training of the MPC-YD Pig Counting Algorithm
5. Analysis of Results
5.1. Analysis of Pig Object Detection Accuracy
5.2. Analysis of Pig Object Tracking Accuracy
5.3. Analysis of Pig Counting Results
5.4. Counting Test in Different Breeding Environments
6. Discussion
7. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Name | Patch Size/Stride | Output Size |
---|---|---|
Conv 1 | 3 × 3/1 | 32 × 128 × 64 |
Max pool 2 | 3 × 3/2 | 32 × 64 × 32 |
Residual 3 | 3 × 3/1 | 32 × 64 × 32 |
Residual 4 | 3 × 3/2 | 64 × 32 × 16 |
Residual 5 | 3 × 3/1 | 64 × 32 × 16 |
Residual 6 | 3 × 3/1 | 64 × 32 × 16 |
Residual 7 | 3 × 3/2 | 128 × 16 × 8 |
Residual 8 | 3 × 3/1 | 128 × 16 × 8 |
Residual 9 | 3 × 3/2 | 256 × 8 × 4 |
Residual 10 | 3 × 3/1 | 256 × 8 × 4 |
Residual 11 | 3 × 3/2 | 512 × 4 × 1 |
Avgpool 12 | 4 × 1/1 | 512 × 1 × 1 |
YOLOv5xpig Parameter | Value | DeepSORTpig Parameter | Value |
---|---|---|---|
Epochs | 450 | Epochs | 80 |
Img-size | 640 | Max_DIST | 0.1 |
Max-det | 20 | MIN_CONFIDENCE | 0.5 |
Iou-thres | 0.45 | NMS_MAX_OVERLAP | 0.6 |
Batchsize | 2 | MAX_IOU_DISTANCE | 0.5 |
Model | MS (MB) | mAP@0.5 (%) | mAP@0.95 (%) | Recall (%) | FPS (f/s) |
---|---|---|---|---|---|
YOLOv5s | 14.4 | 94.25 | 65.83 | 93.71 | 87 |
YOLOv5m | 41.9 | 95.05 | 69.13 | 95.25 | 69 |
YOLOv5l | 91.6 | 94.74 | 71.06 | 94.76 | 43 |
YOLOv5x | 170.1 | 96.12 | 71.22 | 94.88 | 35 |
YOLOv5xpig | 170.2 | 99.24 | 78.29 | 97.37 | 33 |
Faster R-CNN | 360.1 | 94.64 | 68.25 | 88.94 | 19 |
Model | Detector | MOTA | MOTP |
---|---|---|---|
DeepSORT | YOLOv5xpig | 84.12 | 83.79 |
DeepSORTpig | YOLOv5xpig | 85.32 | 84.13 |
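MOTA and MOTP in the table above are the standard CLEAR MOT metrics: MOTA penalizes misses, false positives, and identity switches relative to the number of ground-truth objects, while MOTP is the mean overlap of correctly matched detections. A minimal sketch:

```python
def mota(misses, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy (%):
    1 - (FN + FP + ID switches) / ground-truth objects."""
    return 100.0 * (1.0 - (misses + false_positives + id_switches)
                    / num_ground_truth)

def motp(total_overlap, num_matches):
    """Multiple Object Tracking Precision (%): mean bounding-box
    overlap (e.g., IoU) over all correctly matched detections."""
    return 100.0 * total_overlap / num_matches
```

For example, 10 misses, 5 false positives, and 2 identity switches over 100 ground-truth objects give a MOTA of 83%.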
Share and Cite
Huang, Y.; Xiao, D.; Liu, J.; Tan, Z.; Liu, K.; Chen, M. An Improved Pig Counting Algorithm Based on YOLOv5 and DeepSORT Model. Sensors 2023, 23, 6309. https://doi.org/10.3390/s23146309