Article

Resting Posture Recognition Method for Suckling Piglets Based on Piglet Posture Recognition (PPR)–You Only Look Once

1 College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
2 College of Veterinary Medicine, Nanjing Agricultural University, Nanjing 210095, China
3 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
4 College of Animal Science & Technology, Nanjing Agricultural University, Nanjing 210095, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(3), 230; https://doi.org/10.3390/agriculture15030230
Submission received: 30 December 2024 / Revised: 17 January 2025 / Accepted: 19 January 2025 / Published: 21 January 2025
(This article belongs to the Section Digital Agriculture)

Abstract

The resting postures of piglets are crucial indicators for assessing their health status and environmental comfort. This study proposes a resting posture recognition method for piglets during lactation based on the PPR-YOLO model, aiming to enhance the detection accuracy and classification capability for different piglet resting postures. Firstly, to address the issue of numerous sows and piglets in the farrowing house that easily occlude each other, an image edge detection algorithm is employed to precisely locate the sow’s farrowing bed area. By cropping the images, irrelevant background interference is reduced, thereby enhancing the model’s recognition accuracy. Secondly, to overcome the limitations of the YOLOv11 model in fine feature extraction and small object detection, improvements are made, resulting in the proposed PPR-YOLO model. Specific enhancements include the introduction of a multi-branch Conv2 module to enrich feature extraction capabilities and the adoption of an inverted bottleneck IBCNeck module, which expands the number of channels and incorporates a channel attention mechanism. This strengthens the model’s ability to capture and differentiate subtle posture features. Additionally, in the post-processing stage, the relative positions between sows and piglets are utilized to exclude piglets located within the sow region, eliminating interference from sow nursing behaviors in resting posture recognition, thereby ensuring the accuracy of posture classification. The experimental results show that the proposed method achieves accurate piglet posture recognition, outperforming mainstream object detection algorithms. Ablation experiments validate the effectiveness of image cropping and model enhancements in improving performance. This method provides effective technical support for the automated monitoring of piglet welfare in commercial farms and holds promising application prospects.

1. Introduction

The resting postures of piglets are indicative of their health status and current environmental comfort [1]. Moreover, different piglet postures are typically the result of various external factors, which can generally be controlled by caretakers [2]. For example, when the environmental temperature is low, piglets tend to adopt a ventral lying posture and huddle together [3]. Therefore, accurately detecting and analyzing the resting posture patterns of piglets can help producers adjust and optimize feeding strategies, thereby improving both the welfare and economic value of piglets [4].
Currently, monitoring pig behavior postures in commercial farms is primarily achieved through wearable sensors and computer vision technologies. Wearable sensors classify behaviors such as activity levels, feeding, lateral lying, and ventral lying by measuring acceleration in three-dimensional axes [5,6]. In contrast to neck collars worn on pigs, computer vision technology offers the advantages of being non-invasive and stress-free for the animals. Initially, Nasirahmadi et al. [7] utilized image processing and pig ellipse fitting to obtain pig position information and constructed Delaunay triangles to detect changes in pig group resting postures under commercial farm conditions. Subsequent work [8] employed image binarization to obtain pig boundaries and convex hulls, extracting corresponding area and perimeter measurements, and used Support Vector Machines (SVMs) to classify pig lateral and ventral lying resting postures.
Deep learning-based computer vision technologies currently possess efficient feature extraction and nonlinear relationship modeling capabilities, making pig posture recognition using deep learning a research hotspot. Riekert et al. [9] integrated Neural Architecture Search (NAS) into the Faster R-CNN framework for pig localization and posture classification, achieving an average precision (mAP) of 80.2%. Ji et al. [10] incorporated effective techniques from YOLOv5 into YOLOX, achieving improved detection results for standing, sitting, and lying postures, with an average detection precision of 91%. Wang et al. [11] enhanced the Cascade Mask R–CNN image segmentation algorithm to first extract individual pig masks and then identify pig postures via a classification network, achieving an average recognition accuracy of 98.2%. Li et al. [12], based on the OpenPose model, located key points across different pig body parts and subsequently classified pig postures using the KNN algorithm, obtaining a recognition accuracy of 93%. Unlike the approach of adopting KNN for pig posture classification, Dong et al. [13] employed the OpenPose model to acquire pig keypoint information and then combined the SORT tracking algorithm with a spatiotemporal graph convolutional network (ST-GCN) to classify three behaviors—standing, walking, and lying—ultimately achieving an accuracy of 86.67%. Additionally, related studies have utilized RGB-D images combined with object detection algorithms to recognize the postures of lactating sows [14,15].
Most of the aforementioned methods focus on the posture recognition of fattening pigs or sows, with relatively fewer studies concentrating on piglet posture recognition. During lactation, piglets are small in size and easily occluded, and they adopt a ventral lying posture when nursing, which interferes with the detection of resting postures. To address these issues, this paper proposes a resting posture recognition method for piglets during lactation based on the PPR-YOLO model, which also excludes piglet postures within the sow region from the resting posture statistics. The contributions of this study include (1) improving the algorithm’s recognition accuracy by cropping the images to the fenced farrowing bed area using an image edge detection algorithm; (2) enhancing the model’s piglet posture recognition performance by modifying the YOLOv11 object detection algorithm with multi-branch and inverted bottleneck structures; and (3) utilizing the relative positional relationships between sows and piglets to exclude interference from piglets within the sow region during resting posture recognition analysis.
The remainder of this paper is organized as follows: Section 2 introduces the materials and methods used in this study, including data collection, data processing, and the specific improvements to the PPR-YOLO model. Section 3 presents the experimental results, analyzing the model’s performance and the outcomes of the ablation experiments. Section 4 discusses the main features of the proposed method as well as future research directions. Finally, Section 5 concludes the research presented in this paper.

2. Materials and Methods

This section provides a detailed overview of the data collection methods, data processing workflow, and the piglet resting posture recognition approach based on the PPR-YOLO model. First, it introduces the experimental data acquisition environment and equipment configuration. Next, it outlines the specific steps for data preprocessing and annotation. Finally, the structural improvements to the PPR-YOLO model and their implementation details are explained.

2.1. Data Collection and Processing

2.1.1. Experimental Data Collection

The experimental data for this study were collected in August 2022 at the Yingguang Pig Farm’s sow farrowing house in Zhenjiang City, Jiangsu Province. The sow breed used was Large White. A total of 31 sow farrowing pens were randomly selected as experimental scenes, each measuring 1.9 m × 2.55 m and housing one sow along with 5 to 12 piglets. A hemispherical camera (DS-2CD3325D-I, Hikvision, Hangzhou, China) was installed 2.2 m directly above the center of each farrowing pen to record the daily activities of the piglets. The camera has a horizontal field of view of 87.3°, allowing full coverage of the farrowing pen area. It was connected to a switch (DS-3E0518P-E, Hikvision, China) via an Ethernet cable and powered using Power over Ethernet (PoE) technology, enabling simultaneous power supply and video data transmission. The video data were recorded at a frame rate of 25 frames per second (fps) with a resolution of 1920 × 1080 pixels and stored on a network video recorder (DS-8832N-R8, Hikvision, China) for subsequent data processing and analysis. A schematic diagram of the data collection scene is shown in Figure 1.

2.1.2. Definition of Piglet Postures

Based on the research by Hay et al. [16], piglet postures are categorized into five types: lateral lying, ventral lying, standing, sitting, and kneeling. Since this study primarily focuses on the resting postures of piglets during lactation, only three categories are defined: lateral lying, ventral lying, and other postures. The standing, sitting, and kneeling postures are combined into the “Other” category. Examples of the three defined piglet postures are illustrated in Figure 2, and detailed definitions are provided in Table 1.

2.1.3. Dataset Construction

In constructing the sow–piglet target and piglet posture datasets, the collected surveillance videos were first sampled at 2 s intervals to extract images. Subsequently, the frame difference method was employed to exclude images with high similarity, resulting in a final selection of 1640 images.
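For concreteness, the following is a minimal sketch of the sampling and frame-difference filtering step described above, assuming OpenCV. The 2 s sampling interval comes from the text, while the grey-level difference threshold is an illustrative assumption; the paper does not report its exact similarity criterion.

```python
import cv2
import numpy as np

def select_distinct_frames(video_path, sample_interval_s=2, diff_threshold=8.0):
    """Sample one frame every `sample_interval_s` seconds and keep it only if it
    differs sufficiently from the last kept frame (frame-difference method).
    `diff_threshold` is the mean absolute grey-level difference (assumed value)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(int(fps * sample_interval_s), 1)
    kept, last_gray, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if last_gray is None or np.mean(cv2.absdiff(gray, last_gray)) > diff_threshold:
                kept.append(frame)
                last_gray = gray
        idx += 1
    cap.release()
    return kept
```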
Since the acquired images have a 16:9 aspect ratio, whereas the actual sow farrowing bed area has an approximate 4:3 aspect ratio, the captured images include irrelevant scenes outside the sow farrowing bed. Due to the significant differences between the edges of the sow farrowing bed and the ground, horizontal edge detection was performed on the images. The cumulative maximum values along the horizontal positions were calculated to determine the fencing position of the sow farrowing bed. The specific image cropping process is illustrated in Figure 3, and the exact coordinate positions are defined by Equation (1).
$(x_r, y_b) = \left(\arg\max_x \left(\sum_{y=0}^{H_{\text{img}}} B(x, y)\right),\ H_{\text{img}}\right), \qquad (x_l, y_t) = \left(x_r - (1+\gamma)\, H_{\text{img}} \frac{L_c}{W_c},\ 0\right)$
where $(x_r, y_b)$ and $(x_l, y_t)$ represent the coordinates of the bottom-right and top-left corners of the cropping area in the original image, respectively. $B(x, y)$ denotes the pixel value at coordinate $(x, y)$ in the binarized image after edge extraction. $H_{\text{img}}$ is the height of the original image, while $L_c$ and $W_c$ represent the length and width of the sow farrowing bed, respectively. $\gamma$ is the compensation coefficient for the camera’s top-down viewing angle, set to 0.075 in this study. After cropping, the image resolution was adjusted to 1558 × 1080 pixels.
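A minimal sketch of the cropping procedure in Equation (1) is given below, assuming OpenCV and NumPy. The Sobel-based edge extraction, the binarization threshold, and the assumption that the fencing boundary lies on the right-hand side of the image are illustrative choices; only the column-wise accumulation, the arg max, and the crop-width computation follow directly from Equation (1).

```python
import cv2
import numpy as np

def crop_farrowing_bed(img, bed_length_m=2.55, bed_width_m=1.9, gamma=0.075):
    """Crop the farrowing bed region per Equation (1).
    Assumed: the fencing boundary is the strongest vertical edge and lies on the
    right-hand side of the bed; Sobel filtering and the binarization threshold
    are illustrative choices."""
    h_img = img.shape[0]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Horizontal gradient highlights the (vertical) fencing edge between bed and aisle.
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    binary = (np.abs(grad_x) > 50).astype(np.uint8)   # B(x, y)
    col_sum = binary.sum(axis=0)                      # accumulate B(x, y) over y
    x_r = int(np.argmax(col_sum))                     # right boundary of the crop
    crop_w = int((1 + gamma) * h_img * bed_length_m / bed_width_m)
    x_l = max(x_r - crop_w, 0)                        # left boundary; top is y_t = 0
    return img[0:h_img, x_l:x_r], (x_l, 0)            # cropped image and crop origin
```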
The open-source image annotation tool LabelMe was used to annotate the original images, including bounding boxes for sows and piglets as well as piglet posture category information. The annotation information for the cropped images was directly generated by recording the cropping position coordinates. Subsequently, the annotated images were randomly split into training and testing sets in a 0.85:0.15 ratio.
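The annotation transfer and dataset split described above can be sketched as follows; the field names follow the standard LabelMe JSON layout, and the random seed is an arbitrary choice.

```python
import json
import random

def shift_labelme_boxes(labelme_json_path, x_l, y_t):
    """Translate LabelMe rectangle annotations from original-image coordinates
    into cropped-image coordinates by subtracting the crop origin (x_l, y_t)."""
    with open(labelme_json_path) as f:
        ann = json.load(f)
    for shape in ann["shapes"]:
        shape["points"] = [[x - x_l, y - y_t] for x, y in shape["points"]]
    return ann

def split_dataset(items, train_ratio=0.85, seed=0):
    """Randomly split annotated images into training and testing sets (0.85:0.15)."""
    random.seed(seed)
    items = list(items)
    random.shuffle(items)
    n_train = int(len(items) * train_ratio)
    return items[:n_train], items[n_train:]
```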

2.2. Piglet Resting Posture Recognition Method Overview

To address the challenge of distinguishing the similar features of different piglet postures and eliminate interference from behaviors related to suckling, this study proposes the following three main improvements in the piglet resting posture recognition algorithm:
  • Based on the YOLOv11 model, we incorporate multi-branch and inverted bottleneck structures to develop the PPR-YOLO model, thereby enhancing its ability to extract and learn features corresponding to different piglet postures;
  • In the model’s post-processing stage, we employ a class-agnostic (category-independent) Non-Maximum Suppression (NMS) algorithm to resolve the uncertainty of multiple posture categories being assigned to a single piglet target (a minimal sketch is given after this list);
  • During the lactation period, piglets often exhibit ventral lying positions while suckling, which need to be distinguished from resting postures. This study determines the relative positional relationship between sows and piglets to filter out piglets located within the sow’s area, thereby eliminating the interference of suckling behaviors in resting posture recognition.
The overall process of the piglet resting posture recognition method is illustrated in Figure 4.
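As a concrete illustration of the class-agnostic NMS step in the second item above, a minimal sketch using torchvision is given below; the IoU threshold is an assumed value, as the paper does not report the exact NMS settings.

```python
from torchvision.ops import nms

def class_agnostic_nms(boxes, scores, classes, iou_threshold=0.6):
    """Class-agnostic NMS: overlapping boxes are suppressed regardless of their
    predicted posture class, so each piglet keeps a single, highest-scoring
    posture prediction.
    boxes: (N, 4) tensor in xyxy format; scores, classes: (N,) tensors."""
    keep = nms(boxes, scores, iou_threshold)
    return boxes[keep], scores[keep], classes[keep]
```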

2.3. PPR-YOLO Model

YOLO (You Only Look Once) [17] is a classic single-stage object detection algorithm known for its fast detection speed and high real-time performance, as it can perform object detection tasks with a single model inference. Its core architecture consists of three main components: a backbone feature extraction network (backbone), a feature fusion layer (neck), and detection heads (heads). The input image is first passed through the backbone network to extract multi-scale features, constructing a Feature Pyramid Network (FPN) [18]. Then, a Path Aggregation Network (PAN) [19] is employed to merge features of different scales, enhancing the model’s ability to detect objects of various sizes. Finally, the detection heads perform feature decoding to output the location and class information of the objects in the image. In this study, the YOLOv11 model was improved (as shown in Figure 5) to propose the PPR-YOLO model. The original Conv and BottleNeck modules [20] were replaced with Conv2 and IBCNeck structures, thereby enhancing the model’s ability to recognize the postures of smaller piglet targets.

2.3.1. Conv2 Module

A multi-branch structure provides richer feature learning capabilities, facilitating the extraction of features corresponding to different piglet postures. This study replaces the original convolution module with the Conv2 module, increasing the model’s feature extraction capacity. Conv2 is a simplified version based on RepConv (Re-parameterizable Convolution) [21]. RepConv is a convolution module that leverages the concept of structural re-parameterization, aiming to enhance feature representation and model performance during training using a multi-branch structure while simplifying to a single-branch standard convolution layer during inference. This approach ensures that the model’s accuracy is maintained without incurring additional computational overhead during inference. The structural changes in the Conv2 module are illustrated in Figure 6.
Specifically, during training, the Conv2 module comprises two convolution branches: one with a standard 3 × 3 convolution and another with a 1 × 1 convolution. Let the input feature map be $x \in \mathbb{R}^{C_{\text{in}} \times H \times W}$; the output of this module during training can then be expressed as:
$y_{\text{train}} = \sigma\left(\text{BN}\left((W_{3 \times 3} * x) + (W_{1 \times 1} * x)\right)\right)$
where $\sigma$ is the SiLU activation function, $\text{BN}(\cdot)$ represents batch normalization, and $*$ denotes the convolution operation. $W_{3 \times 3} \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times 3 \times 3}$ and $W_{1 \times 1} \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times 1 \times 1}$ are the parameters of the 3 × 3 and 1 × 1 convolution kernels, respectively.
During inference, the module merges the two parallel convolution branches by reparameterizing the weights of the 1 × 1 convolution and embedding them into the center of the 3 × 3 convolution kernel. Specifically, a zero tensor $W_0 \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times 3 \times 3}$ with the same dimensions as $W_{3 \times 3}$ is defined, and then $W_{1 \times 1}$ is embedded into the central position of $W_0$:
$W_0[:, :, i_0, j_0] = W_{1 \times 1}$
where $i_0 = 1$, $j_0 = 1$ are the coordinates of the convolution kernel’s center (indexed from zero). The fused convolution kernel is then obtained by adding $W_{3 \times 3}$ and $W_0$:
$W_{\text{fused}} = W_{3 \times 3} + W_0$
Finally, the output during inference can be simplified as a single-branch structure:
$y_{\text{infer}} = \sigma\left(\text{BN}\left(W_{\text{fused}} * x\right)\right)$
Through this re-parameterization process, inference no longer requires explicit computation of the two parallel convolution branches. Instead, the fused weight $W_{\text{fused}}$ is used directly, which is numerically equivalent to the two-branch output during training but does not increase the computational cost during inference, maintaining the model’s efficiency.
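The following is a minimal PyTorch sketch of the Conv2 idea in Equations (2)–(5): a two-branch module during training that is fused into a single 3 × 3 convolution for inference. Details such as bias handling and batch-normalization placement are simplified assumptions and may differ from the authors’ implementation.

```python
import torch.nn as nn

class Conv2(nn.Module):
    """Two-branch convolution (3x3 + 1x1) for training that can be fused into a
    single 3x3 convolution for inference, following Equations (2)-(5)."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv3 = nn.Conv2d(c_in, c_out, 3, stride, padding=1, bias=False)
        self.conv1 = nn.Conv2d(c_in, c_out, 1, stride, padding=0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
        self.fused = None

    def forward(self, x):
        if self.fused is not None:                                # inference path, Equation (5)
            return self.act(self.bn(self.fused(x)))
        return self.act(self.bn(self.conv3(x) + self.conv1(x)))   # training path, Equation (2)

    def fuse(self):
        """Embed the 1x1 kernel at the centre of the 3x3 kernel, Equations (3) and (4)."""
        w = self.conv3.weight.data.clone()
        w[:, :, 1, 1] += self.conv1.weight.data[:, :, 0, 0]
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3,
                          stride=self.conv3.stride, padding=1, bias=False)
        fused.weight.data = w
        self.fused = fused
```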

2.3.2. Inverted BottleNeck Structure

To improve the model’s ability to capture subtle body feature changes in piglets and thus enhance the distinction between different postures, this paper proposes the IBCNeck structure based on the BottleNeck structure in MobileNetV3 [22]. Unlike the original BottleNeck structure in YOLO, the inverted bottleneck structure was first introduced by MobileNetV2 [23], with the core idea being to expand the input channels to improve feature representation. The structure of the IBCNeck module is shown in Figure 7.
Specifically, in the forward pass, the IBCNeck module first expands the input feature map $x \in \mathbb{R}^{C_{\text{in}} \times H \times W}$ in the channel dimension and extracts features. The input feature $x$ undergoes a convolution operation to obtain an intermediate feature $x_1$. Then, a depthwise separable convolution is applied to $x_1$ to extract spatial features, producing the feature map $x_2$. Subsequently, the feature map $x_2$ is fed into the Channel Attention (CA) module, which adaptively recalibrates the feature channels.
The channel attention module first applies global average pooling (GAP) to the feature $x_2$ to obtain the channel descriptor (Equation (6)). It then performs a linear transformation on $x_{\text{pool}}$ using a 1 × 1 convolution and maps the result to the (0, 1) interval via the sigmoid function $\sigma$, thereby obtaining the channel weights. These weights are used to perform element-wise weighting of the feature channels in $x_2$, resulting in the feature $x_3$. In this manner, the channel attention mechanism reinforces important feature channels, suppresses redundant information, and achieves fine-grained recalibration of the feature maps along the channel dimension:
$x_{\text{pool}}(c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_2(c, i, j)$
After recalibration, the feature map $x_3$ undergoes convolution to produce a feature map $u$ with the same dimensionality as the input. A residual connection is then applied to obtain the final output feature $y$. Through these steps, feature expansion and compression, adaptive channel attention weighting, and residual connections are achieved with low computational overhead. This approach balances efficiency and expressiveness, thereby enhancing the overall model’s feature representation capability and improving piglet posture classification performance.
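A minimal PyTorch sketch of the IBCNeck forward pass described above is given below; the channel expansion ratio, kernel size, and activation choices are assumptions, since the paper does not list these hyperparameters.

```python
import torch.nn as nn

class IBCNeck(nn.Module):
    """Inverted bottleneck with channel attention: expand channels, depthwise
    convolution, channel-attention recalibration (Equation (6)), projection back
    to the input width, and a residual connection."""
    def __init__(self, c, expand=2):
        super().__init__()
        c_mid = c * expand
        self.expand = nn.Sequential(nn.Conv2d(c, c_mid, 1, bias=False),
                                    nn.BatchNorm2d(c_mid), nn.SiLU())
        self.dwconv = nn.Sequential(nn.Conv2d(c_mid, c_mid, 3, padding=1,
                                              groups=c_mid, bias=False),
                                    nn.BatchNorm2d(c_mid), nn.SiLU())
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1),      # global average pooling
                                nn.Conv2d(c_mid, c_mid, 1),   # 1x1 linear transform
                                nn.Sigmoid())                 # channel weights in (0, 1)
        self.project = nn.Sequential(nn.Conv2d(c_mid, c, 1, bias=False),
                                     nn.BatchNorm2d(c))

    def forward(self, x):
        x1 = self.expand(x)      # channel expansion
        x2 = self.dwconv(x1)     # spatial features via depthwise convolution
        x3 = x2 * self.ca(x2)    # channel-wise recalibration
        u = self.project(x3)     # compress back to the input dimensionality
        return x + u             # residual connection
```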

2.4. Piglet Selection Outside the Sow’s Region

This study focuses on the postures of piglets outside the sow area. Within the sow area, piglets are primarily engaged in nursing-related behaviors; therefore, piglets located within the sow area must be excluded so that only piglets outside it are analyzed. Using the bounding box information of sows and piglets obtained by the object detection algorithm, the overlap ratio $S_{\text{in}}$ between the piglet and sow bounding boxes is used for discrimination, as defined in Equation (7). Piglet targets whose $S_{\text{in}}$ does not exceed the threshold are retained. As illustrated in Figure 8, if a piglet’s overlap ratio exceeds the specified threshold, i.e., the piglet lies largely within the sow’s bounding box (highlighted in orange), that piglet (marked by a blue bounding box) is filtered out. Otherwise, it is retained (marked by a green bounding box) for further analysis:
$S_{\text{in}} = \frac{\left| b_{\text{piglet}} \cap b_{\text{sow}} \right|}{\left| b_{\text{piglet}} \right|}$
where $S_{\text{in}}$ represents the proportion of the piglet’s bounding box that lies within the sow’s bounding box, and $b_{\text{sow}}$ and $b_{\text{piglet}}$ denote the bounding boxes of the sow and piglet, respectively.
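The filtering rule in Equation (7) can be computed directly from the bounding boxes, as sketched below; boxes are assumed to be in (x1, y1, x2, y2) format, and the threshold of 0.3 is the value selected later in Section 3.2.

```python
def intersection_ratio(piglet_box, sow_box):
    """S_in = area(piglet ∩ sow) / area(piglet); boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(piglet_box[0], sow_box[0]), max(piglet_box[1], sow_box[1])
    ix2, iy2 = min(piglet_box[2], sow_box[2]), min(piglet_box[3], sow_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = (piglet_box[2] - piglet_box[0]) * (piglet_box[3] - piglet_box[1])
    return inter / area if area > 0 else 0.0

def select_resting_piglets(piglet_boxes, sow_box, s_in_threshold=0.3):
    """Retain piglets whose overlap with the sow's bounding box does not exceed
    the threshold; piglets largely inside the sow region are treated as suckling."""
    return [box for box in piglet_boxes
            if intersection_ratio(box, sow_box) <= s_in_threshold]
```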

3. Results

3.1. Training Settings and Evaluation Methods

3.1.1. Training Settings

The experiments in this study were conducted using the PyTorch deep learning framework on a computer equipped with an Intel Core i7-11700 processor, 32 GB of RAM, and a single NVIDIA GeForce RTX 3090 graphics processing unit (GPU). For the sow and piglet posture recognition model PPR-YOLO, an initial learning rate of 0.01, a weight decay coefficient of 0.0005, and a batch size of 16 were set. The AdamW optimizer [24] was selected, and the model was trained for a total of 200 epochs. During the first 190 epochs, data augmentation methods such as image rotation, scaling, and Mosaic were employed to enhance the model’s generalization capability. In the final 10 epochs, data augmentation was disabled to improve the stability of model convergence.
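For illustration, a hypothetical training call mirroring these settings is sketched below, assuming an Ultralytics-style interface; the model and dataset configuration file names are placeholders, and the authors’ actual training script is not specified in the paper.

```python
from ultralytics import YOLO

# Hypothetical training call mirroring the reported settings (assumed interface
# and placeholder file names; the authors' actual training script is not given).
model = YOLO("ppr-yolo.yaml")              # custom PPR-YOLO model definition (placeholder)
model.train(
    data="piglet_posture.yaml",            # dataset configuration (placeholder)
    epochs=200,
    batch=16,
    lr0=0.01,                              # initial learning rate
    weight_decay=0.0005,
    optimizer="AdamW",
    close_mosaic=10,                       # disable Mosaic augmentation for the last 10 epochs
)
```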

3.1.2. Evaluation Methods

The evaluation of sow and piglet target detection and piglet posture recognition in this paper primarily employed precision (P), recall (R), and mean average precision (mAP) at IoU thresholds of 0.5 and from 0.5 to 0.95 (mAP@0.5 and mAP@0.5:0.95). The specific computation of the evaluation metrics is given in Equations (8)–(11):
$P = \frac{TP}{TP + FP}$
$R = \frac{TP}{TP + FN}$
$AP_c = \int_{0}^{1} P(R)\, dR$
$\text{mAP} = \frac{1}{N} \sum_{c=1}^{N} AP_c$
where TP (True Positive) indicates predictions that match the ground truth; a prediction is counted as a TP if its Intersection over Union (IoU) with the ground-truth bounding box exceeds 0.6. FP (False Positive) denotes instances where a negative sample is incorrectly predicted as positive, and FN (False Negative) denotes instances where a positive sample is incorrectly predicted as negative. N represents the total number of classes, and c denotes the c-th detection category. In the calculation of $AP_c$, a fixed IoU threshold was used, while for the subsequent mAP computation, ten equally spaced IoU thresholds from 0.5 to 0.95 were used.
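A minimal sketch of Equations (8)–(10) is given below; the all-point interpolation used for the AP integral is a common implementation choice and is an assumption here, since the paper does not specify its interpolation scheme.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Precision and recall per Equations (8) and (9)."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    return p, r

def average_precision(recalls, precisions):
    """AP as the area under the precision-recall curve (Equation (10)).
    `recalls`/`precisions` are arrays sorted by increasing recall."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]        # monotone precision envelope
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))  # integrate P(R) over [0, 1]
```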
Additionally, the inference efficiency of the model was evaluated using the number of parameters (Params), floating-point operations (FLOPs), and detection time (ms/frame). The number of parameters (Params) reflects the overall size of the model. FLOPs were used to measure the computational cost of the model. The higher the FLOPs, the more floating-point operations are required during inference, indicating greater computational resource usage.

3.2. Detection Results

To verify the performance of the proposed PPR-YOLO model in sow and piglet target detection, as well as piglet posture recognition tasks, we first conducted model validation on the overall images in the test set. In a total of 246 images, there were 2369 piglet targets and 246 sow targets. The results of sow and piglet posture recognition are shown in Table 2.
To determine the specific $S_{\text{in}}$ threshold setting in this study, a step size of 0.1 was used on the dataset to track changes in the proportions of piglets’ resting and suckling behaviors under different $S_{\text{in}}$ thresholds, as illustrated in Figure 9. The blue curve represents the recall rate of piglet suckling behavior above the $S_{\text{in}}$ threshold, while the orange curve indicates the recall rate of piglet resting postures below the $S_{\text{in}}$ threshold. A larger $S_{\text{in}}$ threshold causes missed detections of suckling postures, whereas a smaller one leads to resting postures being misclassified as suckling postures. Therefore, to balance both, this paper sets $S_{\text{in}}$ to 0.3, which excludes 96.2% of piglet suckling behaviors while retaining 91.7% of resting postures.
By setting the $S_{\text{in}}$ threshold to 0.3, the piglet targets detected by the PPR-YOLO model were filtered, resulting in a total of 1659 piglet targets. The specific posture recognition results are presented in Table 3. Compared to the overall piglet posture recognition results shown in Table 2, the posture recognition performance for piglets outside the sow area did not exhibit significant changes. This is because piglets within the sow area are predominantly in a ventral lying and nursing posture, whereas piglets outside the sow area are more likely to be occluded by barriers and other factors. Consequently, in Table 3, the recognition precision (P) for piglets outside the sow area decreased by 1.9% compared to the overall piglet posture recognition precision in Table 2. Conversely, due to the tendency of piglets within the sow area to occlude each other during nursing behaviors, excluding the detection of piglets within the sow area in Table 3 resulted in an increase in the recognition recall rate (R) by 3.3% compared to the overall piglet posture recognition recall rate in Table 2.
To further evaluate the performance of the PPR-YOLO model, Figure 10 presents the curves of precision, recall, and F1 scores as functions of the confidence threshold, as well as the precision–recall (P-R) curve. The results show that precision gradually increases with the confidence threshold, reaching 100% at a confidence value of 0.995 (Figure 10a). Meanwhile, recall exhibits a downward trend; once the piglet posture confidence rises to 0.94, recall drops to 0 (Figure 10b). The F1 score peaks at a confidence value of 0.412, indicating the optimal balance between precision and recall (Figure 10c). The P-R curve further confirms the model’s overall performance under different confidence thresholds. At an IoU of 0.5, the average accuracy for all piglet posture categories is 88.2% (Figure 10d). These findings demonstrate that PPR-YOLO maintains high detection accuracy and recall capability across various threshold settings.
Figure 11 illustrates the detection results of sow targets and piglet postures. Specifically, Figure 11a,b show the detection results of overall and outside sow area piglet postures when the sow is not in the lactation state. Figure 11c and 11d depict the detection results during the sow’s lactation state, where overall and filtered outside sow area piglet postures are shown, respectively. When the sow is not in the lactation state, piglets generally move outside the sow’s bounding box, as indicated by the red rectangular markers in Figure 11b. However, due to piglets being at the edge of the enclosure and occluding each other, the model sometimes misclassifies ventral lying piglets as other postures. During the sow’s lactation state, as shown in Figure 11c, piglets exhibit severe stacking and suckling behaviors, leading to occlusion and potential missed detections. In contrast, Figure 11d demonstrates that piglets outside the sow area were successfully detected without misses.
Figure 12 presents the confusion matrices for the detection results of sow targets and piglet postures in the test set. Specifically, Figure 12a,b correspond to the overall and outside sow area image detection results, respectively. The confusion matrices (a) and (b) do not show significant differences. From confusion matrix (a), it can be observed that the body size difference between sows and piglets is substantial, resulting in no recognition confusion. In terms of specific piglet posture recognition, certain postures are prone to confusion with adjacent postures. For example, piglets in a lateral lying posture are more likely to be recognized as ventral lying, whereas the other postures (sitting, kneeling, and standing) can only be reached by transitioning through ventral lying; their feature differences from lateral lying are therefore considerable, and the probability of confusion is low.

3.3. Ablation Experiments

To evaluate the impact of image cropping of the sow’s farrowing bed area during the dataset creation process on the detection results of the PPR-YOLO algorithm, we compared the training performance of the sow target and piglet posture detection models using both the original uncropped data and the cropped dataset. The training and testing sets were divided identically, and the specific test results are presented in Table 4 below. After cropping, the images input into the model undergo less compression when resized to the network input size. Consequently, compared to the uncropped data, the test recall rate (R), mAP@0.5, and mAP@0.5:0.95 increased by 2.2%, 1.8%, and 2.3%, respectively. However, the increase in recall introduced more challenging samples, resulting in a slight decrease in precision of 1.3%. Overall, the cropped data effectively improved the model’s performance.
The proposed PPR-YOLO model is based on YOLOv11s. By replacing standard convolutions with the Conv2 module and substituting the original BottleNeck structure with the inverted bottleneck convolution module IBCNeck, the model achieves the sow target and piglet posture detection tasks described in this study. To assess the impact of different improvement strategies on the original model, we conducted ablation experiments using the control variable method. Training and testing were performed on the same dataset, and the specific results are shown in Table 5 below.
From Table 5, it can be observed that the improved Conv2 module enhances the original model’s mAP@0.5 and mAP@0.5:0.95 metrics by 1.2% and 1.6%, respectively, without increasing the computational load during inference. This is because although the Conv2 module employs a multi-branch structure in the training phase to enhance feature extraction, it merges the multiple branches into a single branch at inference time via structural re-parameterization, thereby maintaining computational efficiency and memory usage. Consequently, introducing the Conv2 module does not increase time or memory complexity during inference.
Using only the Conv2 module yields results comparable to using only the IBCNeck module. However, the IBCNeck module introduces an increase in the number of parameters and floating-point operations, with a 14.9% and 34.7% increase compared to the original YOLOv11s model, respectively. Nevertheless, the PPR-YOLO model, which combines both Conv2 and IBCNeck, improves mAP@0.5 and mAP@0.5:0.95 by 1.4% and 2.3% over YOLOv11s in sow and piglet posture detection tasks while retaining efficient inference through the Conv2 module’s structural re-parameterization. These results show that PPR-YOLO achieves noticeable performance gains without significantly increasing inference complexity, making it well-suited for commercial farm scenarios that demand high real-time performance and accuracy.
In this study, a class-agnostic NMS algorithm was employed during the model’s post-processing stage to address the uncertainty that a single piglet target might be assigned multiple posture classes. To validate the impact of this post-processing strategy on the PPR-YOLO model, a performance evaluation was conducted on the same test set, and the specific results are presented in Table 6. It can be observed that applying the class-agnostic NMS strategy sacrifices some piglet detection performance on the test set (with a 0.7% decrease in recall), yet it significantly increases the detection precision (P) for piglet postures by 1.4%.
To enhance the interpretability of the improved PPR-YOLO model, Eigen-CAM [25] was utilized to visualize the features of the last convolutional layer before the small target detection head in the detection model. As shown in Figure 13, Figure 13a is the original input image, Figure 13b shows the detection results of the proposed algorithm, and Figure 13c–f display the heatmaps of regions of interest for features under different improvements. The improvements in different modules (Figure 13d,e) show certain attention enhancements compared to the baseline YOLOv11s model (Figure 13f). However, when using only the IBCNeck module, some irrelevant regions are also focused on. The visualization results of the proposed algorithm (Figure 13c) compared to the original YOLOv11s model (Figure 13f) demonstrate that the improved model pays more attention to the piglet regions, suppresses interference from irrelevant areas, and focuses more on the detailed regions of individual piglets, facilitating the differentiation of different piglet postures.

3.4. Comparison of Different Models

To further evaluate the performance of the proposed PPR-YOLO model in sow target and piglet posture detection tasks, mainstream single-stage object detection algorithms were compared, including RetinaNet, SSD, YOLOv3, YOLOv5s, and YOLOv8s [26,27,28]. These models were trained and tested on the same sow target and piglet posture detection dataset, and the specific results are presented in Table 7 below. The results indicate that the proposed PPR-YOLO model outperforms current mainstream algorithms across multiple performance metrics evaluated on the test set. The model achieved a detection precision of 87.7% and a recall rate of 88.2%, demonstrating the best performance among all compared algorithms and fully validating its accuracy in the piglet posture detection task.
Figure 14 presents a comprehensive performance comparison of the different detection algorithms. In the figure, models positioned closer to the outer edge exhibit better performance. As seen in Table 7, the proposed PPR-YOLO model achieves mAP@0.5 and mAP@0.5:0.95 scores of 92.0% and 74.8%, respectively, significantly outperforming the other models, especially in the high IoU threshold metric mAP@0.5:0.95. This indicates its superior detection capability in complex scenarios. The model has a parameter count of 10.8M, significantly lower than YOLOv3’s 61.5M, and is comparable to YOLOv8s’s 11.1M, highlighting its lightweight design, which reduces storage and computational resource requirements. In terms of computational complexity, although the PPR-YOLO model has 28.7 GFLOPs, which is slightly higher than YOLOv5s’s 17.5 GFLOPs, it achieves over 5% higher detection accuracy and nearly a 3% improvement in recall rate compared to YOLOv5s. This demonstrates a balanced advantage between performance and efficiency.

3.5. Analysis of Piglet Resting Postures

The proposed piglet posture detection algorithm, PPR-YOLO, was applied to continuous video detection. Six pens were selected, each housing between 6 and 11 piglets. Videos captured between 12:30 p.m. and 1:30 p.m. were selected for piglet posture detection, performing detection every 10 s. The distribution of piglet postures across different pens is shown in Figure 15, ordered from the smallest to the largest number of piglets. It was observed that in Pen C2, a higher proportion of piglets were located within the sow’s area, and the proportion of comfortable lateral lying postures was the lowest, indicating that piglets in this pen might have been experiencing hunger. In Pen C4, the highest proportion of lateral lying resting postures was observed, and the proportion of piglets within the sow’s area was only slightly lower than in Pen C1, suggesting that piglets in Pen C4 were in a more comfortable state. In Pen C1, the combined proportion of lateral and ventral lying resting postures was the highest, but the proportion of ventral lying was excessively high, indicating that piglets in this pen might have been experiencing stress or discomfort.
The posture distribution of piglets in Pens C2 and C4 over time is illustrated in Figure 16. In the figure, the horizontal axis represents the video frame index (every 10 s for a total of 1 h), while the vertical axis shows the distribution of piglet postures detected by the PPR-YOLO model at each moment in time. From bottom to top, the stacked legend denotes piglets in lateral lying, ventral lying, other postures, and piglets within the sow’s area, respectively. Due to factors such as pen layout and occlusion between piglets, the number of detected piglets fluctuated. In Pen C2, which housed seven piglets, two instances of sow lactation were observed in Figure 16a. During these times, 100% of piglets were within the sow’s area. The original video shows that during the second lactation, the sow refused to nurse, causing piglets to increase their movement within the pen (indicated by the green area), which also explains the reduced proportion of comfortable lateral lying postures. Over the entire hour, the percentages of piglets in lateral lying, ventral lying, other postures, and within the sow’s area were 10.54%, 12.23%, 4.08%, and 73.15%, respectively. In Pen C4, which housed nine piglets, two instances of sow lactation were also observed, but the piglets quickly transitioned to lateral lying postures with minimal transitional ventral lying postures, as shown in Figure 16b. This indicates that piglets in Pen C4 received sufficient milk intake and were in a satisfied state. Overall, the percentages of piglets in lateral lying, ventral lying, other postures, and within the sow’s area were 49.69%, 14.06%, 2.39%, and 33.87%, respectively.
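The pen-level statistics above amount to aggregating the per-frame detections into posture proportions; a minimal sketch of this aggregation is given below, with illustrative label names mirroring the categories in Figure 16.

```python
from collections import Counter

POSTURES = ["lateral_lying", "ventral_lying", "other", "in_sow_area"]

def posture_proportions(per_frame_labels):
    """Aggregate per-frame posture labels (one list per 10 s detection step)
    into overall proportions, as used for the pen-level statistics."""
    counts = Counter()
    for labels in per_frame_labels:
        counts.update(labels)
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in POSTURES} if total else {}
```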

3.6. Model Stability Analysis

3.6.1. Hyperparameter Stability Analysis

To evaluate the stability and performance of the PPR-YOLO model under different hyperparameter settings, this study conducted a sensitivity analysis on key hyperparameters. Specifically, a larger initial learning rate and a smaller batch size were tested, with the initial learning rate and batch size set to 0.1 and 8, respectively. Each set of hyperparameters was trained under the same dataset and training conditions. The detailed experimental results are listed in Table 8.
From the table, when the learning rate is 0.1 and the batch size is 16, the model achieves the best performance: precision of 87.8%, recall of 88.2%, mAP@0.5 of 92.1%, and mAP@0.5:0.95 of 74.8%. With a smaller batch size (8), performance is slightly lower under the same learning rate. When the learning rate is reduced to 0.01, a batch size of 16 still maintains a high level of performance (precision at 87.7%, recall at 88.2%, mAP@0.5 at 92.0%, and mAP@0.5:0.95 at 74.8%), whereas the performance declines for a batch size of 8.
As shown in Figure 17, the loss curves for the PPR-YOLO model under different hyperparameter configurations reveal varying convergence characteristics. A high learning rate (0.1) combined with a small batch size (8) results in larger oscillations at the initial training stage, indicating that the model quickly adapts to gradient changes. However, it stabilizes after data augmentation is turned off in the final 10 epochs. By contrast, with a lower learning rate (0.01) and a larger batch size (16), the loss curve is smoother with smaller oscillations, indicating that the model converges more steadily. Overall, the PPR-YOLO model maintains high detection precision and recall under various learning rate and batch size combinations. Its performance is especially stable at larger batch sizes, demonstrating robust tolerance to hyperparameter variations for diverse practical applications.

3.6.2. Model Stability Analysis in Different Scenarios

To assess the performance of the proposed PPR-YOLO model in new scenarios, this paper selected 500 monitoring images from a sow farrowing house at an experimental pig farm in Hainan for evaluation. Since the PPR-YOLO model in this study was not trained on data from this new setting, only the confidence of piglet resting posture predictions was analyzed. With the model’s confidence threshold set to 0.5, the percentage distribution for different confidence intervals was obtained and compared against the distribution in the test set. The results are shown in Figure 18, where the new scenario confidence distribution is represented with colored stripes. Compared with the confidence distribution of this study’s test set, piglet postures in the new farm scenario most frequently appear in the 0.8 confidence range, with an average proportion of 34.4%. By contrast, in this paper’s test set, most confidence values are above 0.9, with an average proportion of 70.4%.
Some of the piglet resting posture recognition results of the PPR-YOLO model in the new scenario are illustrated in Figure 19. Figure 19a,b, respectively, display piglet posture detection during daytime and nighttime. In Figure 19a, piglets resting along the pen’s edge are heavily occluded, causing the model to misclassify some ventral lying postures as other postures. Figure 19b shows good recognition of piglet resting postures under nighttime conditions. However, the posture recognition confidence levels in this unfamiliar scenario are generally lower, suggesting increased uncertainty. Consequently, future efforts will involve applying transfer learning to further improve piglet posture recognition under unfamiliar conditions.

4. Discussion

4.1. Main Features of the Proposed Method

This study primarily focuses on piglet resting posture recognition by introducing the PPR-YOLO model tailored for sow targets and piglet posture detection tasks, thereby enhancing the ability to distinguish between different piglet postures. Since the postures of piglets within the sow’s area are related to suckling behaviors, this study emphasizes the distribution of piglet resting postures outside the sow’s area by analyzing the relative positional relationships between sows and piglets.
The main limitation in piglet posture recognition tasks lies in the challenge of distinguishing different postures from a top–down view. This study improves the model’s feature extraction capabilities for different piglet postures by enhancing the multi-branch and inverted bottleneck structures, similar to how Luo et al. [29] and Huang et al. [30] utilized attention mechanisms to improve YOLO-based object detection models. The IBCNeck inverted bottleneck module introduced a channel attention mechanism, enhancing the model’s ability to differentiate despite occlusions and similar postures.
Additionally, piglet posture recognition faces the limitation of detecting small targets. Therefore, extracting pigs from the image for further analysis is a feasible approach. Nasirahmadi et al. [8] achieved a classification accuracy of up to 94.4% by removing the background of fattening pigs, extracting the boundary area and convex hull perimeter, and using an SVM algorithm to distinguish between lateral and ventral lying postures. Witte and Marx Gómez [31] employed YOLOv5 to detect pig locations and then used the EfficientNet classification algorithm to identify lying and non-lying pigs, achieving 93% accuracy. However, these scenarios were applied to pigs in the fattening stage, which differs from the sow farrowing house environment where occlusions are more severe. Moreover, performing posture recognition in two separate stages somewhat reduces the algorithm’s real-time performance. This study improved the model’s input image resolution through image cropping, resulting in a 2.3% increase in the average precision metric of the PPR-YOLO model.
This study focuses on the resting posture recognition of piglets during the lactation period. However, during sow lactation, piglets gather around the sow’s udder area, predominantly exhibiting ventral lying postures unrelated to regular resting states. Therefore, this study filters out piglets within the sow’s area based on their relative positional relationships, facilitating the analysis of posture changes among piglets in the pens and understanding the current health and environmental comfort of the piglet population.

4.2. Future Work

Although the proposed PPR-YOLO-based method for recognizing the resting postures of piglets during the lactation period has achieved satisfactory results in experiments, there are still some shortcomings. Firstly, the current study’s dataset originates from a single farm, limiting the sample size and diversity. Future work will involve expanding the dataset to include sow farrowing houses from different regions and breeds to enhance the model’s generalization capabilities. Secondly, although YOLOv11 has been improved with Conv2 and IBCNeck modules, there is still room for improvement in terms of real-time performance and computational efficiency. Future research may explore more lightweight network architectures or employ model compression techniques to adapt to resource-constrained real-world applications. Additionally, the current method relies primarily on visual data; future work could integrate multi-modal sensor data (such as temperature and humidity) to achieve more comprehensive environmental and health monitoring, further improving posture recognition accuracy. Lastly, addressing the complexity of piglet behaviors within the sow’s area, future work will involve developing more intelligent post-processing algorithms to distinguish whether those piglets are engaged in suckling behavior or simply resting. Through these improvements, future research will further enhance the reliability and application value of piglet resting posture recognition technology, providing robust technical support for intelligent management and animal welfare monitoring in commercial farms.

5. Conclusions

This study addresses the problem of recognizing the resting postures of piglets during the lactation period within sow farrowing houses by proposing an innovative solution based on the PPR-YOLO model. By comprehensively applying image processing techniques and deep learning model enhancements, the detection precision and classification capability of piglet resting postures were significantly improved. The specific research content and results are as follows:
  • To tackle the issue of numerous sows and piglets in the farrowing house that are prone to occlusions, this study employed an image edge detection algorithm to accurately locate the sow farrowing bed area and reduced irrelevant background interference through cropping. The experimental results demonstrate that after image cropping, the model’s mAP@0.5 and mAP@0.5:0.95 increased by 1.8% and 2.3%, respectively, effectively enhancing the model’s recognition accuracy;
  • Building upon the YOLOv11 model, this study introduced multi-branch Conv2 modules and inverted bottleneck structures (IBCNeck modules) to enhance the model’s ability to extract subtle posture features and detect small targets. The optimized PPR-YOLO model achieved mAP@0.5 and mAP@0.5:0.95 scores of 92.0% and 74.8%, respectively, significantly outperforming mainstream object detection algorithms;
  • By analyzing the relative positional relationships between sows and piglets, piglets located within the sow’s area were filtered out, eliminating the interference of suckling behaviors in resting posture recognition. The results indicate that the posture recognition performance for piglets outside the sow’s area remained stable, ensuring the accuracy of posture classification.
In summary, the proposed PPR-YOLO-based method for recognizing the resting postures of piglets during the lactation period effectively enhanced the precision and reliability of piglet posture recognition through image preprocessing, model structure optimization, and post-processing filtering. The experimental results validate the superior performance of this method in complex scenarios, providing strong technical support for automated monitoring and piglet welfare management in commercial farms.

Author Contributions

Conceptualization, J.C., L.L. (Longshen Liu) and W.Y.; methodology, J.C., L.L. (Longshen Liu) and W.Y.; software, J.C.; validation, L.L. (Luo Liu) and P.L.; formal analysis, J.C.; investigation, J.C., L.L. (Luo Liu) and P.L.; resources, M.S., L.L. (Longshen Liu) and W.Y.; data curation, J.C. and L.L. (Luo Liu).; writing—original draft preparation, J.C.; writing—review and editing, J.C., L.L. (Longshen Liu) and M.S.; visualization, J.C.; supervision, M.S. and W.Y.; project administration, L.L. (Longshen Liu) and W.Y.; funding acquisition, L.L. (Longshen Liu) and W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (2021ZD0113803) and the National Natural Science Foundation of China (Grant No. 32272929).

Institutional Review Board Statement

This study involved only observational data and did not involve any handling of animals; therefore, ethical approval was not required.

Data Availability Statement

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgments

We acknowledge Yingguang Pig Farm for the use of its animals and facilities.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Benjamin, M.; Yik, S. Precision Livestock Farming in Swine Welfare: A Review for Swine Practitioners. Animals 2019, 9, 133. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, Z.; Lu, J.; Wang, H. A Review of Posture Detection Methods for Pigs Using Deep Learning. Appl. Sci. 2023, 13, 6997. [Google Scholar] [CrossRef]
  3. Matthews, S.G.; Miller, A.L.; Clapp, J.; Plötz, T.; Kyriazakis, I. Early Detection of Health and Welfare Compromises through Automated Detection of Behavioural Changes in Pigs. Vet. J. 2016, 217, 43–51. [Google Scholar] [CrossRef] [PubMed]
  4. Sadeghi, E.; Kappers, C.; Chiumento, A.; Derks, M.; Havinga, P. Improving Piglets Health and Well-Being: A Review of Piglets Health Indicators and Related Sensing Technologies. Smart Agric. Technol. 2023, 5, 100246. [Google Scholar] [CrossRef]
  5. Cornou, C.; Lundbye-Christensen, S.; Kristensen, A.R. Modelling and Monitoring Sows’ Activity Types in Farrowing House Using Acceleration Data. Comput. Electron. Agric. 2011, 76, 316–324. [Google Scholar] [CrossRef]
  6. Liu, L.-S.; Ni, J.-Q.; Zhao, R.-Q.; Shen, M.-X.; He, C.-L.; Lu, M.-Z. Design and Test of a Low-Power Acceleration Sensor with Bluetooth Low Energy on Ear Tags for Sow Behaviour Monitoring. Biosyst. Eng. 2018, 176, 162–171. [Google Scholar] [CrossRef]
  7. Nasirahmadi, A.; Richter, U.; Hensel, O.; Edwards, S.; Sturm, B. Using Machine Vision for Investigation of Changes in Pig Group Lying Patterns. Comput. Electron. Agric. 2015, 119, 184–190. [Google Scholar] [CrossRef]
  8. Nasirahmadi, A.; Sturm, B.; Olsson, A.-C.; Jeppsson, K.-H.; Müller, S.; Edwards, S.; Hensel, O. Automatic Scoring of Lateral and Sternal Lying Posture in Grouped Pigs Using Image Processing and Support Vector Machine. Comput. Electron. Agric. 2019, 156, 475–481. [Google Scholar] [CrossRef]
  9. Riekert, M.; Klein, A.; Adrion, F.; Hoffmann, C.; Gallmann, E. Automatically Detecting Pig Position and Posture by 2D Camera Imaging and Deep Learning. Comput. Electron. Agric. 2020, 174, 105391. [Google Scholar] [CrossRef]
  10. Ji, H.; Yu, J.; Lao, F.; Zhuang, Y.; Wen, Y.; Teng, G. Automatic Position Detection and Posture Recognition of Grouped Pigs Based on Deep Learning. Agriculture 2022, 12, 1314. [Google Scholar] [CrossRef]
  11. Wang, L.; Liu, Q.; Cao, Y.; Hao, X. Posture Recognition of Group-Housed Pigs Using Improved Cascade Mask R-CNN and Cooperative Attention Mechanism. Trans. Chin. Soc. Agric. Eng. 2023, 39, 144–153. [Google Scholar]
  12. Li, G.; Jv, Q.; Liu, F.; Yao, Z. Pig Pose Recognition Method Based on Openpose. In Proceedings of the Advances in Precision Instruments and Optical Engineering; Liu, G., Cen, F., Eds.; Springer Nature: Singapore, 2022; pp. 533–545. [Google Scholar]
  13. Dong, L.; Meng, X.; Pan, M.; Zhu, Y.; Liang, Y.; Gao, X.; Liu, H. Recognizing Pig Behavior on Posture and Temporal Features Using Computer Vision. Trans. Chin. Soc. Agric. Eng. 2022, 38, 148–157. [Google Scholar]
  14. Zheng, C.; Zhu, X.; Yang, X.; Wang, L.; Tu, S.; Xue, Y. Automatic Recognition of Lactating Sow Postures from Depth Images by Deep Learning Detector. Comput. Electron. Agric. 2018, 147, 51–63. [Google Scholar] [CrossRef]
  15. Zhu, X.; Chen, C.; Zheng, B.; Yang, X.; Gan, H.; Zheng, C.; Yang, A.; Mao, L.; Xue, Y. Automatic Recognition of Lactating Sow Postures by Refined Two-Stream RGB-D Faster R-CNN. Biosyst. Eng. 2020, 189, 116–132. [Google Scholar] [CrossRef]
  16. Hay, M.; Vulin, A.; Génin, S.; Sales, P.; Prunier, A. Assessment of Pain Induced by Castration in Piglets: Behavioral and Physiological Responses over the Subsequent 5 Days. Appl. Anim. Behav. Sci. 2003, 82, 201–218. [Google Scholar] [CrossRef]
  17. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  18. Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  19. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  21. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. Repvgg: Making Vgg-Style Convnets Great Again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13733–13742. [Google Scholar]
  22. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V. Searching for Mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 1314–1324. [Google Scholar]
  23. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  24. Loshchilov, I.; Hutter, F. Fixing Weight Decay Regularization in Adam. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  25. Muhammad, M.B.; Yeasin, M. Eigen-CAM: Class Activation Map Using Principal Components. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
  26. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. Ssd: Single Shot Multibox Detector; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  27. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  28. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  29. Luo, Y.; Zeng, Z.; Lu, H.; Lv, E. Posture Detection of Individual Pigs Based on Lightweight Convolution Neural Networks and Efficient Channel-Wise Attention. Sensors 2021, 21, 8369. [Google Scholar] [CrossRef] [PubMed]
  30. Huang, L.; Xu, L.; Wang, Y.; Peng, Y.; Zou, Z.; Huang, P. Efficient Detection Method of Pig-Posture Behavior Based on Multiple Attention Mechanism. Comput. Intell. Neurosci. 2022, 2022, 1759542. [PubMed]
  31. Witte, J.-H.; Marx Gómez, J. Introducing a new Workflow for Pig Posture Classification based on a combination of YOLO and EfficientNet. In Proceedings of the 55th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022. [Google Scholar]
Figure 1. Schematic diagram of piglet daily activity data collection.
Figure 2. Examples of different piglet postures. (a) Lateral lying; (b) ventral lying; (c) other postures.
Figure 3. Image-cropping process flowchart.
Figure 4. Piglet resting posture recognition process flow.
Figure 5. PPR-YOLO model structure diagram. © indicates Concat operation, and ⊕ indicates addition operation.
Figure 6. RepConv and improved Conv2 module.
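Figure 6 relates the Conv2 module to RepConv. The PyTorch sketch below shows one plausible reading of such a multi-branch convolution: parallel 3×3 and 1×1 branches whose outputs are summed before the activation; the branch layout and activation are assumptions rather than the paper's exact design. Because branches of this kind can be re-parameterized into a single convolution at inference (as in RepVGG [21]), this reading is also consistent with the unchanged parameter count in Table 5.

```python
# Hedged sketch of a RepConv-style multi-branch convolution (not the paper's exact Conv2 module).
import torch
import torch.nn as nn

class MultiBranchConv(nn.Module):
    def __init__(self, c_in: int, c_out: int, stride: int = 1):
        super().__init__()
        self.branch3x3 = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, padding=1, bias=False), nn.BatchNorm2d(c_out))
        self.branch1x1 = nn.Sequential(
            nn.Conv2d(c_in, c_out, 1, stride, bias=False), nn.BatchNorm2d(c_out))
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the parallel branches, then apply the activation.
        return self.act(self.branch3x3(x) + self.branch1x1(x))
```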
Figure 7. IBCNeck module structure diagram. ⊙ indicates multiplication, ⊕ indicates addition, Conv denotes standard convolution, DWConv denotes depthwise separable convolution, Pool represents global average pooling, FC denotes fully connected layer, and σ represents the Sigmoid activation function.
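Following only the components named in the Figure 7 caption (expansion convolution, depthwise convolution, global average pooling, fully connected layers with a Sigmoid gate, channel-wise multiplication, and residual addition), the sketch below outlines one way such an inverted-bottleneck block with channel attention could be written in PyTorch. The expansion ratio and reduction factor are illustrative assumptions, not the paper's settings.

```python
# Hedged PyTorch sketch of an inverted-bottleneck block with SE-style channel attention.
import torch
import torch.nn as nn

class IBCNeckSketch(nn.Module):
    def __init__(self, channels: int, expansion: int = 4, reduction: int = 16):
        super().__init__()
        hidden = channels * expansion
        squeeze = max(1, hidden // reduction)
        self.expand = nn.Sequential(                      # 1x1 Conv expands the channel count
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU())
        self.dwconv = nn.Sequential(                      # DWConv: depthwise 3x3
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.SiLU())
        self.se = nn.Sequential(                          # Pool -> FC -> FC -> Sigmoid gate
            nn.AdaptiveAvgPool2d(1),                      # (1x1 convs stand in for the FC layers)
            nn.Conv2d(hidden, squeeze, 1), nn.SiLU(),
            nn.Conv2d(squeeze, hidden, 1), nn.Sigmoid())
        self.project = nn.Sequential(                     # 1x1 Conv back to the input width
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dwconv(self.expand(x))
        y = y * self.se(y)                                # channel-wise multiplication
        return x + self.project(y)                        # residual addition
```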
Figure 8. Diagram of piglets outside the sow’s region.
Figure 8. Piglets outside sow’s region diagram.
Agriculture 15 00230 g008
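Figure 8 relates to the post-processing step that uses the relative positions of sow and piglet detections to separate piglets overlapping the sow region (likely nursing) from those resting elsewhere in the pen, the subset summarized in Table 3. A minimal sketch, assuming a simple centre-in-box test rather than the paper's exact overlap rule:

```python
# Split piglet detections by whether their box centre falls inside the sow box.
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates; the centre test is an assumption.
def split_by_sow_region(piglet_boxes, sow_box):
    sx1, sy1, sx2, sy2 = sow_box
    inside, outside = [], []
    for box in piglet_boxes:
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        (inside if sx1 <= cx <= sx2 and sy1 <= cy <= sy2 else outside).append(box)
    return inside, outside  # inside: candidate nursing piglets; outside: resting-posture candidates
```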
Figure 9. Changes in the proportions of piglet resting and suckling postures.
Figure 10. Performance evaluation of the PPR-YOLO model in target detection.
Figure 11. Sow target and piglet posture detection results.
Figure 12. Confusion matrix of sow target and piglet posture detection results.
Figure 13. Model ablation experiment heatmap comparison.
Figure 14. Comprehensive performance comparison of different detection algorithms.
Figure 15. Piglet posture distribution across different pens.
Figure 16. Piglet posture distribution over time in Pens C2 and C4.
Figure 17. Model performance trends under different hyperparameter settings.
Figure 18. Comparison of detection confidence distributions between the new scenario and the test set.
Figure 19. PPR-YOLO model’s piglet posture detection in a new scenario.
Table 1. Definition and description of piglet postures.
Posture Category | Posture Description
Lateral Lying | The body weight is supported by the side, with the shoulder and floor in contact.
Ventral Lying | The body weight is supported by the abdomen, with the sternum and floor in contact.
Other | The body weight is supported by the legs (standing, kneeling, and sitting).
Table 2. Overall area recognition results.
Category | Instances | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%)
Lateral Lying | 228 | 80.7 | 83.3 | 87.7 | 71.1
Ventral Lying | 748 | 79.5 | 80.9 | 86.7 | 66.0
Other | 1393 | 90.8 | 88.7 | 94.1 | 70.8
Sow | 246 | 99.7 | 100 | 99.5 | 91.1
All (Piglet) | 2369 | 83.7 | 84.3 | 89.5 | 69.3
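For reference, the P, R, and mAP columns in Tables 2–8 follow the standard object-detection definitions; a brief restatement, assuming the usual convention that a detection counts as a true positive when its IoU with a ground-truth box exceeds the given threshold:

$$
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
AP = \int_{0}^{1} P(R)\,\mathrm{d}R
$$

mAP@0.5 averages the per-class AP at an IoU threshold of 0.5, while mAP@0.5:0.95 averages it over IoU thresholds from 0.50 to 0.95 in steps of 0.05.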
Table 3. Piglet posture results outside the sow area.
Category | Instances | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%)
Lateral Lying | 195 | 80.0 | 84.9 | 86.6 | 71.4
Ventral Lying | 522 | 78.0 | 87.5 | 86.3 | 68.2
Other | 942 | 87.5 | 90.2 | 91.6 | 72.3
All (Piglet) | 1659 | 81.8 | 87.6 | 88.2 | 70.7
Table 4. Impact of data cropping on the PPR-YOLO model.
Dataset | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Input Image Size (Width × Height)
Cropped | 87.7 | 88.2 | 92.0 | 74.8 | 640 × 448
Uncropped | 89.0 | 86.0 | 90.2 | 72.5 | 640 × 384
Table 5. Ablation experiment results of the improved YOLOv11s model.
Conv2 | IBCNeck | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Params/M | FLOPs/G
× | × | 86.1 | 86.3 | 90.6 | 72.5 | 9.4 | 21.3
√ | × | 86.5 | 86.8 | 91.8 | 74.1 | 9.4 | 21.3
× | √ | 87.1 | 87.3 | 91.9 | 74.5 | 10.8 | 28.7
√ | √ | 87.7 | 88.2 | 92.0 | 74.8 | 10.8 | 28.7
Note: In the table, a “√” denotes the inclusion of the module, while a “×” indicates its exclusion.
Table 6. Effects of different NMS strategies on the PPR-YOLO model.
NMS | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%)
Multiclass | 86.3 | 88.9 | 92.2 | 74.9
Single class | 87.7 | 88.2 | 92.0 | 74.8
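Table 6 compares two suppression strategies, presumably per-class (multiclass) NMS, in which boxes of different posture classes never suppress each other, versus class-agnostic (single-class) NMS, in which all posture classes compete in one pass so a single piglet cannot carry two posture labels. A hedged sketch of the two variants using torchvision, not the paper's exact post-processing code:

```python
# Per-class vs class-agnostic NMS on (x1, y1, x2, y2) boxes; both return indices of kept boxes.
import torch
from torchvision.ops import nms, batched_nms

def multiclass_nms(boxes: torch.Tensor, scores: torch.Tensor,
                   labels: torch.Tensor, iou_thr: float = 0.5) -> torch.Tensor:
    # Suppression happens only among boxes that share the same posture label.
    return batched_nms(boxes, scores, labels, iou_thr)

def single_class_nms(boxes: torch.Tensor, scores: torch.Tensor,
                     iou_thr: float = 0.5) -> torch.Tensor:
    # All posture classes are pooled, so overlapping boxes with different labels collapse to one.
    return nms(boxes, scores, iou_thr)
```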
Table 7. Evaluation results of models of different sizes for pig target detection.
Model | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Params/M | FLOPs/G
RetinaNet | 61.0 | 64.8 | 84.6 | 58.1 | 19.8 | 143.0
SSD | 81.6 | 69.9 | 85.5 | 61.2 | 24.8 | 88.0
YOLOv3 | 84.9 | 79.6 | 90.1 | 62.2 | 61.5 | 51.5
YOLOv5s | 86.7 | 85.4 | 91.1 | 68.7 | 7.5 | 17.5
YOLOv8s | 87.3 | 86.5 | 91.7 | 74.0 | 11.1 | 28.4
PPR-YOLO | 87.7 | 88.2 | 92.0 | 74.8 | 10.8 | 28.7
Table 8. Model performance under different hyperparameter settings.
Initial Learning Rate | Batch Size | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%)
0.1 | 8 | 87.4 | 86.6 | 91.9 | 74.5
0.1 | 16 | 87.8 | 88.2 | 92.1 | 74.8
0.01 | 8 | 87.3 | 86.5 | 91.8 | 74.5
0.01 | 16 | 87.7 | 88.2 | 92.0 | 74.8
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
