Transforming Poultry Farming: A Pyramid Vision Transformer Approach for Accurate Chicken Counting in Smart Farm Environments
Abstract
1. Introduction
- It proposes a novel and effective method for counting chickens in smart farm environments, using a deep learning approach based on a transformer architecture and a customized loss function incorporating curriculum loss.
- It addresses the challenges associated with chicken counting in smart farms, such as illumination changes, occlusion, cluttered backgrounds, continuous growth, and camera distortion.
- It evaluates the proposed method on a newly created dataset and shows that the method achieves high performance, efficiency, and robustness for chicken counting in smart farm environments.
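The customized loss mentioned above is not specified in this outline. As a rough illustration only, curriculum loss in the sense of Lyu and Tsang keeps the easier (lower-loss) samples early in training and gradually admits harder ones. The sketch below is a hypothetical NumPy rendering of that idea, not the authors' implementation; the linear keep-fraction schedule and the `min_keep` parameter are assumptions.

```python
import numpy as np

def curriculum_l2_loss(pred, target, epoch, total_epochs, min_keep=0.5):
    """Hypothetical curriculum-style L2 loss over a batch of density maps.

    Early in training only the easiest (lowest-loss) fraction of samples
    contributes; the kept fraction grows linearly to 1.0 by the last epoch.
    This is an illustrative sketch, not the paper's actual loss.
    """
    # Per-sample squared error, averaged over the spatial dimensions.
    per_sample = np.mean((pred - target) ** 2, axis=(1, 2))
    # Fraction of samples to keep grows from min_keep to 1.0 over training.
    keep_frac = min_keep + (1.0 - min_keep) * (epoch / max(1, total_epochs - 1))
    k = max(1, int(round(keep_frac * per_sample.size)))
    # Keep only the k lowest-loss (easiest) samples.
    easiest = np.sort(per_sample)[:k]
    return float(np.mean(easiest))
```

At epoch 0 only the easier half of the batch contributes; by the final epoch the loss reduces to a plain mean squared error over all samples.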
2. Related Works
3. Method
3.1. Chicken Counting Dataset
3.2. Transformer-Based Chicken Counting Architecture
3.3. Evaluation Metrics
4. Experimental Results
4.1. Results
4.2. SAFECount Comparison
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Rajak, P.; Ganguly, A.; Adhikary, S.; Bhattacharya, S. Internet of Things and smart sensors in agriculture: Scopes and challenges. J. Agric. Food Res. 2023, 14, 100776.
- Revanth, M.; Kumar, K.S.; Srinivasan, M.; Stonier, A.A.; Vanaja, D.S. Design and Development of an IoT Based Smart Poultry Farm. In Proceedings of the 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), Coimbatore, India, 8–9 October 2021.
- Shaikh, T.A.; Mir, W.A.; Rasool, T.; Sofi, S. Machine Learning for Smart Agriculture and Precision Farming: Towards Making the Fields Talk. Arch. Comput. Methods Eng. 2022, 29, 4557–4597.
- Neethirajan, S. Automated Tracking Systems for the Assessment of Farmed Poultry. Animals 2022, 12, 232.
- Vaarst, M.; Steenfeldt, S.; Horsted, K. Sustainable development perspectives of poultry production. World’s Poult. Sci. J. 2015, 71, 609–620.
- Manser, C.E. Effects of Lighting on the Welfare of Domestic Poultry: A Review. Anim. Welf. 1996, 5, 341–360.
- Cao, L.; Xiao, Z.; Liao, X.; Yao, Y.; Wu, K.; Mu, J.; Li, J.; Pu, H. Automated chicken counting in surveillance camera environments based on the point supervision algorithm: LC-DenseFCN. Agriculture 2021, 11, 493.
- Tang, Z.; von Gioi, R.G.; Monasse, P.; Morel, J.-M. A Precision Analysis of Camera Distortion Models. IEEE Trans. Image Process. 2017, 26, 2694–2704.
- Maheswari, P.; Raja, P.; Apolo-Apolo, O.E.; Pérez-Ruiz, M. Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques—A Review. Front. Plant Sci. 2021, 12, 684328.
- Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160.
- Du, X.; Cai, Y.; Wang, S.; Zhang, L. Overview of Deep Learning. In Proceedings of the 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC), Wuhan, China, 11–13 November 2016.
- Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295.
- Louppe, G. Understanding Random Forests: From Theory to Practice. arXiv 2015, arXiv:1407.7502v3.
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
- Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Appl. 1998, 13, 18–28.
- Zhang, Y. Support Vector Machine Classification Algorithm and Its Application. In Information Computing and Applications, Proceedings of the Third International Conference, ICICA 2012, Chengde, China, 14–16 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 179–186.
- Conte, D.; Foggia, P.; Percannella, G.; Tufano, F.; Vento, M. A method for counting people in crowded scenes. In Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, Boston, MA, USA, 29 August–1 September 2010; pp. 225–232.
- Hani, N.; Roy, P.; Isler, V. Apple Counting using Convolutional Neural Networks. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2559–2565.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Dobrescu, A.; Giuffrida, M.V.; Tsaftaris, S.A. Leveraging Multiple Datasets for Deep Leaf Counting. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), Venice, Italy, 22–29 October 2017; pp. 2072–2079.
- Bhattarai, U.; Karkee, M. A weakly-supervised approach for flower/fruit counting in apple orchards. Comput. Ind. 2022, 138, 103635.
- Moon, J.; Lim, S.; Lee, H.; Yu, S.; Lee, K.-B. Smart Count System Based on Object Detection Using Deep Learning. Remote Sens. 2022, 14, 3761.
- Fan, X.; Zhou, R.; Tjahjadi, T.; Das Choudhury, S.; Ye, Q. A Segmentation-Guided Deep Learning Framework for Leaf Counting. Front. Plant Sci. 2022, 13, 844522.
- Hong, S.-J.; Nam, I.; Kim, S.-Y.; Kim, E.; Lee, C.-H.; Ahn, S.; Park, I.-K.; Kim, G. Automatic pest counting from pheromone trap images using deep learning object detectors for Matsucoccus thunbergianae monitoring. Insects 2021, 12, 342.
- Ni, X.; Li, C.; Jiang, H.; Takeda, F. Deep learning image segmentation and extraction of blueberry fruit traits associated with harvestability and yield. Hortic. Res. 2020, 7, 110.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Li, B.; Huang, H.; Zhang, A.; Liu, P.; Liu, C. Approaches on crowd counting and density estimation: A review. Pattern Anal. Appl. 2021, 24, 853–874.
- Tian, M.; Guo, H.; Chen, H.; Wang, Q.; Long, C.; Ma, Y. Automated pig counting using deep learning. Comput. Electron. Agric. 2019, 163, 104840.
- Gomez, A.S.; Aptoula, E.; Parsons, S.; Bosilj, P. Deep Regression Versus Detection for Counting in Robotic Phenotyping. IEEE Robot. Autom. Lett. 2021, 6, 2902–2907.
- Hobbs, J.; Paull, R.; Markowicz, B.; Rose, G. Flowering density estimation from aerial imagery for automated pineapple flower counting. In AI for Social Good Workshop; Harvard University: Cambridge, MA, USA, 2020.
- Rahnemoonfar, M.; Dobbs, D.; Yari, M.; Starek, M.J. DisCountNet: Discriminating and counting network for real-time counting and localization of sparse objects in high-resolution UAV imagery. Remote Sens. 2019, 11, 1128.
- Xiong, H.; Cao, Z.; Lu, H.; Madec, S.; Liu, L.; Shen, C. TasselNetv2: In-field counting of wheat spikes with context-augmented local regression networks. Plant Methods 2019, 15, 150.
- Sun, G.; Liu, Y.; Probst, T.; Paudel, D.P.; Popovic, N.; Van Gool, L. Rethinking Global Context in Crowd Counting. arXiv 2023, arXiv:2105.10926v2.
- Wang, Q.; Gao, J.; Lin, W.; Li, X. NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 2141–2149.
- Yu, Y.; Cai, Z.; Miao, D.; Qian, J.; Tang, H. An interactive network based on transformer for multimodal crowd counting. Appl. Intell. 2023, 53, 22602–22614.
- Ranjan, V.; Sharma, U.; Nguyen, T.; Hoai, M. Learning to Count Everything. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 3393–3402.
- Bengio, Y.; Louradour, J.; Collobert, R.; Weston, J. Curriculum Learning. In Proceedings of the 26th Annual International Conference on Machine Learning, New York, NY, USA, 14–18 June 2009; pp. 41–48.
- Liu, Y.; Shi, M.; Zhao, Q.; Wang, X. Point in, box out: Beyond counting persons in crowds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6462–6471.
- Lyu, Y.; Tsang, I.W. Curriculum Loss: Robust Learning and Generalization Against Label Corruption. arXiv 2019, arXiv:1905.10045.
- Wang, Q.; Breckon, T.P. Crowd Counting via Segmentation Guided Attention Networks and Curriculum Loss. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15233–15243.
- Laradji, I.H.; Rostamzadeh, N.; Pinheiro, P.O.; Vazquez, D.; Schmidt, M. Where Are the Blobs: Counting by Localization with Point Supervision. In Computer Vision-ECCV 2018, Proceedings of the 15th European Conference, Munich, Germany, 8–14 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 560–576.
- Abuaiadah, D.; Switzer, A.; Bosu, M.; Liu, Y. Automatic counting of chickens in confined area using the LCFCN algorithm. In Proceedings of the 2022 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 18–20 May 2022; pp. 1–7.
- Zhu, X.; Wu, C.; Yang, Y.; Yao, Y.; Wu, Y. Automated Chicken Counting Using YOLO-v5x Algorithm. In Proceedings of the 2022 8th International Conference on Systems and Informatics (ICSAI), Kunming, China, 10–12 December 2022.
- Horvat, M.; Gledec, G. A comparative study of YOLOv5 models performance for image localization and classification. In Proceedings of the 33rd Central European Conference on Information and Intelligent Systems (CECIIS 2022), Dubrovnik, Croatia, 21–23 September 2022.
- Sun, E.; Xiao, Z.; Yuan, F.; Wang, Z.; Ma, G.; Liu, J. Method of Classified Counting of Mixed Breeding Chickens Based on YOLOV5. In Proceedings of the 2023 42nd Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023.
- You, Z.; Yang, K.; Luo, W.; Lu, X.; Cui, L.; Le, X. Few-shot Object Counting with Similarity-Aware Feature Enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 6304–6313.
- Tian, Y.; Chu, X.; Wang, H. CCTrans: Simplifying and Improving Crowd Counting with Transformer. arXiv 2021, arXiv:2109.14483v1.
- O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, arXiv:1511.08458v2.
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
Dataset | Number of Training Images | Augmentation Strategy Used |
---|---|---|
Original | 36 | None |
Strategy A | 2290 | Rotation around feeding plates and corners |
Strategy B | 6059 | Rotations, variations in brightness, contrast, shadow, blur, partial occlusion, random cropping, and scaling |
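Strategy B's augmentations (rotations, brightness/contrast variation, blur, partial occlusion, random cropping, and scaling) can be sketched roughly as below with NumPy. The table does not state which library or parameter ranges the authors used, so every range, patch size, and function name here is illustrative; blur, shadow, cropping, and scaling are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility of the sketch

def adjust_brightness_contrast(img, brightness=0.2, contrast=0.2):
    """Randomly scale contrast about the mean and shift brightness."""
    c = 1.0 + rng.uniform(-contrast, contrast)
    b = rng.uniform(-brightness, brightness)
    return np.clip((img - img.mean()) * c + img.mean() + b, 0.0, 1.0)

def random_occlusion(img, max_frac=0.2):
    """Zero out a random rectangular patch to mimic partial occlusion."""
    h, w = img.shape[:2]
    ph = int(rng.integers(1, max(2, int(h * max_frac))))
    pw = int(rng.integers(1, max(2, int(w * max_frac))))
    y = int(rng.integers(0, h - ph))
    x = int(rng.integers(0, w - pw))
    out = img.copy()
    out[y:y + ph, x:x + pw] = 0.0
    return out

def random_rotation90(img):
    """Rotate by a random multiple of 90 degrees (axis-aligned sketch)."""
    return np.rot90(img, k=int(rng.integers(0, 4))).copy()

def augment(img):
    """Compose the sketched transforms into one augmentation pass."""
    return random_occlusion(adjust_brightness_contrast(random_rotation90(img)))
```

A real pipeline would also need to transform the point annotations consistently with each geometric transform, which this sketch leaves out.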
Dataset | Val MAE | Val RMSE | Test MAE | Test RMSE | AA
---|---|---|---|---|---
Original (non-augmented) | 47.4 | 75.8 | 40.7 | 63.2 | 0.9429
Augmented using Strategy A | 34.1 | 49.2 | 32.6 | 54.4 | 0.9546
Augmented using Strategy B | 27.8 | 40.9 | 22.0 | 32.3 | 0.9696
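The tables report MAE, RMSE, and AA over image-level counts. The outline does not define AA; the sketch below assumes it is average accuracy in the common relative-error sense, 1 minus the mean of |pred - gt| / gt, which is an assumption of this illustration rather than the paper's stated formula.

```python
import math

def count_metrics(preds, gts):
    """Compute MAE, RMSE, and an assumed average accuracy (AA) for counts.

    preds, gts: sequences of predicted and ground-truth counts per image.
    AA is taken here as 1 - mean relative error, an assumed definition.
    """
    n = len(preds)
    errs = [p - g for p, g in zip(preds, gts)]
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    aa = 1.0 - sum(abs(e) / g for e, g in zip(errs, gts)) / n
    return mae, rmse, aa
```

For example, predictions of 95 and 110 against ground truths of 100 and 100 give an MAE of 7.5 and an AA of 0.925 under this definition.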
Customized Loss Function | Val MAE | Val RMSE | Test MAE | Test RMSE | AA
---|---|---|---|---|---
Without using Curriculum Loss | 40.3 | 58.9 | 31.9 | 47.8 | 0.9555
Using Curriculum Loss (ours) | 27.8 | 40.9 | 22.0 | 32.3 | 0.9696
Method | Val MAE | Val RMSE | Test MAE | Test RMSE | AA
---|---|---|---|---|---
SAFECount | 64.5 | 112.5 | 67.9 | 104.2 | 0.8922
Chicken-Counting Model (ours) | 27.8 | 40.9 | 22.0 | 32.3 | 0.9696
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Khanal, R.; Choi, Y.; Lee, J. Transforming Poultry Farming: A Pyramid Vision Transformer Approach for Accurate Chicken Counting in Smart Farm Environments. Sensors 2024, 24, 2977. https://doi.org/10.3390/s24102977