Mathematical Modeling and Computer Vision in Animal Activity or Behavior

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Animal System and Management".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 6657

Special Issue Editors


Guest Editor
College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: animal activity or behavior; computer vision; deep learning; animal voice recognition; machine learning; precision livestock farming

Co-Guest Editor
College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: precision livestock farming; computer vision; behavior detection and analysis; animal tracking; animal welfare

Co-Guest Editor
College of Computer Sciences, Guangdong Polytechnic Normal University, Guangzhou 510665, China
Interests: intelligent livestock and poultry; animal behavior recognition; video image processing; image segmentation; feature extraction

Special Issue Information

Dear Colleagues, 

Good care is key to good productivity, health, and welfare, and thus to an economically viable business. Economies of scale in production companies are inevitable, but they must be pursued in an ecologically sound and sustainable manner. Precision livestock and poultry farming offers farmers tools and procedures that enable more efficient production while simultaneously safeguarding animal health and welfare. Recent years have seen substantial technical progress in animal monitoring and in health and welfare assessment. Unfortunately, the tools available to farmers remain limited, and many issues and challenges are unsolved or only partly addressed because of the complexity of applying these technologies in livestock and poultry farming.

We are pleased to invite original research papers or reviews that explore or discuss animal monitoring using computer vision or voice recognition, animal health and welfare assessment, and the modeling and prediction of anomalous behaviors, among other topics.

Prof. Dr. Yueju Xue
Dr. Haiming Gan
Dr. Aqing Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

23 pages, 5940 KiB  
Article
Deconvolution Enhancement Keypoint Network for Efficient Fish Fry Counting
by Ximing Li, Zhicai Liang, Yitao Zhuang, Zhe Wang, Huan Zhang, Yuefang Gao and Yubin Guo
Animals 2024, 14(10), 1490; https://doi.org/10.3390/ani14101490 - 17 May 2024
Viewed by 185
Abstract
Fish fry counting is vital in fish farming, but current computer-based methods cannot accurately and efficiently count a large number of fry in a single pass because of severe occlusion, dense distribution, and the small size of fish fry. To address this problem, we propose the deconvolution enhancement keypoint network (DEKNet), a fish fry counting method built on a single-keypoint approach. This approach models each fish fry as a point located at the central part of the fish head, laying the foundation for our counting strategy. Specifically, a fish fry feature extractor (FFE) with parallel dual branches is first designed for high-resolution representation. Next, two identical deconvolution modules (TDMs) are added to the generation head to produce a high-quality keypoint heatmap with the same resolution as the input image, facilitating precise counting. The local peaks of the heatmap are then taken as the fish fry keypoints, so the number of keypoints equals the number of fry, and the keypoint coordinates can be used to locate each fry. Finally, FishFry-2023, a large-scale fish fry dataset, is constructed to evaluate the effectiveness of the proposed method. Experimental results show that DEKNet achieved a counting accuracy of 98.59% on fish fry, a high accuracy on the Penaeus dataset (98.51%), and an MAE of 13.32 on the public Adipocyte Cells dataset. These outcomes indicate that DEKNet offers superior overall performance in counting accuracy, parameter count, and computational cost.
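
As a rough illustration of the single-keypoint idea, the sketch below counts objects as local peaks of a keypoint heatmap. This is a minimal sketch, not the authors' DEKNet code; the window size and confidence threshold are illustrative assumptions.

```python
# Minimal sketch: counting objects as local peaks of a keypoint heatmap.
import numpy as np
from scipy.ndimage import maximum_filter

def count_heatmap_peaks(heatmap, conf_thresh=0.3, window=5):
    """Return the peak count and the (row, col) coordinates of each peak."""
    # A pixel is a peak if it equals the maximum of its local neighborhood
    # and its confidence exceeds the threshold (both values are assumptions).
    local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = np.argwhere(local_max & (heatmap > conf_thresh))
    return len(peaks), peaks

# The count equals the number of fry; the coordinates localize each one.
heatmap = np.random.rand(480, 640).astype(np.float32)  # stand-in for a network output
n_fry, coords = count_heatmap_peaks(heatmap)
```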

16 pages, 24320 KiB  
Article
Learning Rich Feature Representation and State Change Monitoring for Accurate Animal Target Tracking
by Kuan Yin, Jiangfan Feng and Shaokang Dong
Animals 2024, 14(6), 902; https://doi.org/10.3390/ani14060902 - 14 Mar 2024
Viewed by 613
Abstract
Animal tracking is crucial for understanding migration, habitat selection, and behavior patterns. However, challenges in video data acquisition and the unpredictability of animal movements have hindered progress in this field. To address these challenges, we present a novel animal tracking method based on correlation filters. Our approach integrates hand-crafted features, deep features, and temporal context information to learn a rich feature representation of the target animal, enabling effective monitoring and updating of its state. Specifically, we extract hand-crafted histogram of oriented gradients (HOG) features and deep features from different network layers, creating tailored fusion features that encapsulate both the appearance and motion characteristics of the animal. By analyzing the response map, we select the optimal fusion features based on their oscillation degree. When the target animal's state changes significantly, we adaptively update the target model using temporal context information and robust feature data from the current frame. This updated model is then used for re-tracking, leading to improved results compared with recent mainstream algorithms, as demonstrated in extensive experiments on our self-constructed animal datasets. By addressing specific challenges in animal tracking, our method offers a promising approach for more effective and accurate animal behavior research.
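
The oscillation degree of a response map can be scored in several ways; the sketch below uses the average peak-to-correlation energy (APCE), a standard measure in correlation-filter tracking, as a stand-in. Whether the authors use APCE or another criterion, and the dict-based interface, are assumptions.

```python
# Sketch: scoring correlation-filter response maps by oscillation degree via APCE.
import numpy as np

def apce(response):
    """Average peak-to-correlation energy: higher means a sharper,
    more reliable single peak (i.e., less oscillation)."""
    f_max, f_min = response.max(), response.min()
    energy = np.mean((response - f_min) ** 2)
    return float((f_max - f_min) ** 2 / (energy + 1e-12))

def select_fusion_feature(responses):
    """Pick the fusion feature (e.g., 'hog', 'conv3', 'conv5') whose
    response map oscillates the least; the dict format is an assumption."""
    return max(responses, key=lambda name: apce(responses[name]))
```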

17 pages, 48830 KiB  
Article
Automatic Recognition and Quantification Feeding Behaviors of Nursery Pigs Using Improved YOLOV5 and Feeding Functional Area Proposals
by Yizhi Luo, Jinjin Xia, Huazhong Lu, Haowen Luo, Enli Lv, Zhixiong Zeng, Bin Li, Fanming Meng and Aqing Yang
Animals 2024, 14(4), 569; https://doi.org/10.3390/ani14040569 - 8 Feb 2024
Viewed by 817
Abstract
A novel method based on an improved YOLOV5 and feeding functional area proposals is proposed to identify the feeding behaviors of nursery piglets under complex lighting and varied postures. The method consists of three steps: first, the corner coordinates of the feeding functional area were set using the shape characteristics of the trough proposals and the ratios of the corner points to the image width and height, separating out the irregular feeding area; second, a transformer module was introduced into YOLOV5 for highly accurate head detection; and third, feeding behavior was recognized and counted by calculating the proportion of the head inside the located feeding area. A pig head dataset was constructed, comprising 5040 training images with 54,670 piglet head boxes and 1200 test images with 25,330 piglet head boxes. The improved model achieves a 5.8% increase in mAP and a 4.7% increase in F1 score compared with the YOLOV5s model. The model was also applied to analyze the feeding pattern of group-housed nursery pigs over 24 h of continuous monitoring, finding that nursery pigs have different day and night feeding rhythms, with peak feeding periods at 7:00–9:00 and 15:00–17:00 and reduced feeding at 12:00–14:00 and 0:00–6:00. The model provides a solution for identifying and quantifying pig feeding behaviors and offers a data basis for adjusting farm feeding schemes.
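
To make the third step concrete, a minimal sketch of the feeding test follows, assuming the Shapely library, illustrative trough corner points, and a hypothetical 0.5 overlap threshold; the paper's actual coordinates and threshold are not reproduced here.

```python
# Sketch: label a piglet as feeding when most of its detected head box
# lies inside the irregular feeding-area polygon.
from shapely.geometry import Polygon, box

# Illustrative corner coordinates of the feeding functional area (pixels).
feeding_area = Polygon([(120, 40), (560, 40), (600, 210), (90, 230)])

def is_feeding(head_xyxy, thresh=0.5):
    """head_xyxy: (x1, y1, x2, y2) head box from the detector."""
    head = box(*head_xyxy)
    inside_ratio = head.intersection(feeding_area).area / head.area
    return inside_ratio >= thresh

print(is_feeding((130, 60, 180, 110)))  # True: head lies inside the trough area
```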

17 pages, 5047 KiB  
Article
Regulation of Meat Duck Activeness through Photoperiod Based on Deep Learning
by Enze Duan, Guofeng Han, Shida Zhao, Yiheng Ma, Yingchun Lv and Zongchun Bai
Animals 2023, 13(22), 3520; https://doi.org/10.3390/ani13223520 - 14 Nov 2023
Viewed by 1031
Abstract
Regulating duck physiology and behavior through the photoperiod holds significant importance for enhancing poultry farming efficiency. To clarify the impact of the photoperiod on the activeness of group-raised ducks and to quantify that activeness, this study proposes a method that employs a multi-object tracking model to calculate group-raised duck activeness; duck farming experiments with varying photoperiods were then designed to assess this impact. The multi-object tracking model for group-raised ducks was based on YOLOv8. The C2f-Faster-EMA module, which combines C2f-Faster with the EMA attention mechanism, was used to improve the object recognition performance of YOLOv8, and the tracking performance of the Bot-SORT, ByteTrack, and DeepSORT algorithms on small duck targets was analyzed. On this basis, the duck instances in the images were segmented to calculate the distance traveled by individual ducks, with the centroid of each duck's mask used in place of the bounding box's center point. The single-frame average displacement of the group-raised ducks was used as an intuitive indicator of their activeness. Farming experiments were conducted with photoperiods of 24L:0D, 16L:8D, and 12L:12D, and the constructed model was used to calculate activeness. The results show that the YOLOv8x-C2f-Faster-EMA model achieved an object recognition accuracy (mAP@50-95) of 97.9%, and the improved YOLOv8 + Bot-SORT model achieved a multi-object tracking accuracy of 85.1%. Under the 12L:12D photoperiod, duck activeness was slightly lower than under the commercial 24L:0D lighting scheme, but duck performance was better. The methods and conclusions presented here can provide theoretical support for welfare assessment and photoperiod regulation strategies in meat duck farming.
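
The activeness indicator itself reduces to simple geometry; the sketch below computes the single-frame average displacement from tracked mask centroids, with the track data format being an assumption rather than the paper's interface.

```python
# Sketch: flock activeness as the mean per-frame displacement of mask centroids.
import numpy as np

def activeness(tracks):
    """tracks: dict mapping a duck's track ID to an (n_frames, 2) array of
    mask centroids; returns the average single-frame displacement in pixels."""
    steps = [np.linalg.norm(np.diff(c, axis=0), axis=1)
             for c in tracks.values() if len(c) > 1]
    return float(np.mean(np.concatenate(steps))) if steps else 0.0

tracks = {0: np.array([[10.0, 12.0], [12.0, 15.0]]),
          1: np.array([[40.0, 40.0], [40.0, 44.0]])}
print(activeness(tracks))  # mean of the ~3.61 and 4.0 pixel steps
```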

22 pages, 6217 KiB  
Article
Pose Estimation and Behavior Classification of Jinling White Duck Based on Improved HRNet
by Shida Zhao, Zongchun Bai, Lili Meng, Guofeng Han and Enze Duan
Animals 2023, 13(18), 2878; https://doi.org/10.3390/ani13182878 - 10 Sep 2023
Viewed by 1505
Abstract
In duck breeding, obtaining pose information is vital for perceiving physiological health, ensuring breeding welfare, and monitoring environmental comfort. This paper proposes a pose estimation method that combines HRNet and CBAM to automatically and accurately detect the multiple poses of ducks. Through comparison, HRNet-32 is identified as the optimal backbone for duck pose estimation. On this basis, multiple CBAM modules are densely embedded into the HRNet-32 network to obtain an HRNet-32-CBAM pose estimation model, achieving accurate detection and association of eight keypoints across six different behaviors. Furthermore, the model's generalization ability is tested under different illumination conditions, and its overall detection ability is evaluated on Cherry Valley ducklings of 12 and 24 days of age. The model is also compared with mainstream pose estimation methods to reveal its advantages and disadvantages, and its real-time performance is tested on images of 256 × 256, 512 × 512, and 728 × 728 pixels. The experimental results indicate that the proposed method achieves an average precision (AP) of 0.943 on the duck pose estimation dataset, generalizes well, and can estimate ducks' multiple poses in real time across different ages, breeds, and farming modes. This study provides a technical reference and basis for the intelligent farming of poultry.
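
HRNet-style models regress one heatmap per keypoint; a common way to decode them (not necessarily this paper's exact procedure) is an argmax per channel followed by a quarter-pixel shift toward the higher neighbor, as sketched below.

```python
# Sketch: decoding (x, y, confidence) keypoints from per-joint heatmaps.
import numpy as np

def decode_keypoints(heatmaps):
    """heatmaps: (K, H, W) array, one channel per keypoint -> (K, 3) output."""
    k, h, w = heatmaps.shape
    out = np.zeros((k, 3), dtype=np.float32)
    for i, hm in enumerate(heatmaps):
        y, x = np.unravel_index(hm.argmax(), hm.shape)
        # Quarter-pixel refinement toward the larger neighbor (common heuristic).
        dx = 0.25 * np.sign(hm[y, min(x + 1, w - 1)] - hm[y, max(x - 1, 0)])
        dy = 0.25 * np.sign(hm[min(y + 1, h - 1), x] - hm[max(y - 1, 0), x])
        out[i] = (x + dx, y + dy, hm[y, x])
    return out

keypoints = decode_keypoints(np.random.rand(8, 64, 64))  # eight duck keypoints
```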

20 pages, 9036 KiB  
Article
Automatic Penaeus Monodon Larvae Counting via Equal Keypoint Regression with Smartphones
by Ximing Li, Ruixiang Liu, Zhe Wang, Guotai Zheng, Junlin Lv, Lanfen Fan, Yubin Guo and Yuefang Gao
Animals 2023, 13(12), 2036; https://doi.org/10.3390/ani13122036 - 20 Jun 2023
Viewed by 1587
Abstract
Today, large-scale Penaeus monodon farms no longer incubate eggs but instead purchase larvae from large-scale hatcheries for rearing. Accurately counting the tens of thousands of larvae involved in these transactions is challenging because of the small size of the larvae and the highly congested scenes. To address this issue, we present the Penaeus Larvae Counting Strategy (PLCS), a simple and efficient method for counting Penaeus monodon larvae that requires only a smartphone to capture images, without any additional equipment. Our approach treats two different types of keypoints as equal keypoints in a keypoint regression framework to determine the number of shrimp larvae in an image. We constructed a high-resolution image dataset named Penaeus_1k from images captured with five smartphones. The dataset contains 1420 images of Penaeus monodon larvae with general annotations for three keypoints, making it suitable for density map counting, keypoint regression, and other methods. The effectiveness of the proposed method was evaluated on a real Penaeus monodon larvae dataset: the average accuracy over 720 test images spanning seven density groups was 93.79%, outperforming the classical density map algorithm and demonstrating the efficacy of the PLCS.
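
For reference, one common definition of average counting accuracy is sketched below; whether PLCS reports exactly this formula is an assumption, and the example counts are hypothetical.

```python
# Sketch: mean counting accuracy, 1 - |prediction - ground truth| / ground truth.
import numpy as np

def counting_accuracy(pred, gt):
    return float(np.mean(1.0 - np.abs(pred - gt) / gt))

pred = np.array([980.0, 2010.0, 4930.0])  # hypothetical predicted counts
gt = np.array([1000.0, 2000.0, 5000.0])   # hypothetical ground-truth counts
print(f"{counting_accuracy(pred, gt):.2%}")  # ~98.70%
```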
