Article

Individual Pig Identification Using Back Surface Point Clouds in 3D Vision

1 College of Engineering, Heilongjiang Bayi Agricultural University, Daqing 163319, China
2 College of Electrical and Information, Northeast Agricultural University, Harbin 150030, China
3 Key Laboratory of Swine Facilities Engineering, Ministry of Agriculture, Harbin 150030, China
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(11), 5156; https://doi.org/10.3390/s23115156
Submission received: 24 April 2023 / Revised: 16 May 2023 / Accepted: 21 May 2023 / Published: 28 May 2023
(This article belongs to the Section Smart Agriculture)

Abstract

The individual identification of pigs is the basis for precision livestock farming (PLF), providing the prerequisites for personalized feeding, disease monitoring, growth monitoring and behavior identification. Pig face recognition suffers from the difficulty of collecting pig face samples and from images that are easily affected by the environment and by dirt on the body. To address these problems, we propose a method for individual pig identification using three-dimensional (3D) point clouds of the pig's back surface. First, a point cloud segmentation model based on the PointNet++ algorithm is established to segment the pig's back point clouds from the complex background, and its output is used as the input for individual recognition. Then, an individual pig recognition model based on the improved PointNet++LGG algorithm is constructed, which adds an adaptive global sampling radius, deepens the network structure and increases the number of features so as to extract the higher-dimensional features needed to accurately recognize different individuals with similar body sizes. In total, 10,574 3D point cloud images of ten pigs were collected to construct the dataset. The experimental results showed that the accuracy of the individual pig identification model based on the PointNet++LGG algorithm reached 95.26%, which was 2.18%, 16.76% and 17.19% higher than that of the PointNet++MSG, PointNet++SSG and PointNet models, respectively. Individual pig identification based on 3D point clouds of the back surface is effective. This approach is easy to integrate with functions such as body condition assessment and behavior recognition, and is conducive to the development of precision livestock farming.

1. Introduction

With the development of farm intensification, precision livestock farming (PLF) holds an increasingly important position in farm management [1]. The individual identification of different animals is a prerequisite for obtaining accurate body condition data of different individuals, which can enable precise control such as disease monitoring, precise feeding, and personalized management [2], and also help to improve animal welfare [3]. Individual identification is also the first step to achieve traceability technology in the animal product supply chain [4], and can provide a more accurate reference for animal insurance [5].
Traditionally, individual identification of pigs has relied mainly on radio frequency identification (RFID) [6], in which chips are implanted in ear tags or collars. Such contact-based identification causes pain and tends to induce stress in animals. In addition, tag installation and removal require extra labor, and tags can be lost or worn out with long-term use [7]. With the development of sensors [8,9] and machine vision technology [10,11], non-contact machine vision methods of individual animal identification have become possible [12,13].
Individual pig identification based on machine vision includes manual marker recognition, facial recognition and body shape recognition. The early approach was to mark features such as numbers, colors and patterns on the back of pigs and then use pattern recognition techniques to distinguish different individuals [14]. However, this approach suffers from the problems of easy erasure of marker symbols and mutual occlusion between individuals [15].
In recent years, some studies have applied deep learning methods to pig face recognition, using the skin and texture of pig faces as features for individual recognition. Frontal pig face images were collected, and a convolutional neural network (CNN) pig face recognition model was established that achieved higher accuracy than the Fisherfaces and VGG-Face algorithms [16]. A method for automatic screening of pig face images was explored [17], and a pig face recognition model for different growth stages was established [18]. To further improve the performance of deep learning-based pig face recognition, most current research has focused on optimizing the algorithms to improve accuracy and reduce the number of model parameters [19,20]. Facial recognition requires the cooperation of the subject to be recognized; for example, human facial recognition usually requires a complete face image to achieve high recognition accuracy [21]. However, pigs naturally tend to move and rarely look directly at the camera, making it difficult to collect a standard frontal image of the pig face. At the same time, pig faces are often obscured by dirt, which poses a further challenge for pig face recognition [17].
Some other studies have used the texture and color of the body skin and body size as the main features for individual pig identification [22,23]. An individual recognition model featuring the pig body was developed based on the YOLOv4 target detection model by collecting pictures of the pig body from multiple angles against a complex background [24]. A model for pig body segmentation and individual identification in different stacking states was also developed [25]. Most of these studies used two-dimensional (2D) RGB images as model inputs to extract body color, hair, texture and shape features for individual recognition. However, unlike cows with their rich markings [26], pigs have a relatively uniform body color and no obvious skin features [27]. The complexity of the background, the intensity of the light, the camera angle and alignment, and soiling of the pig's face and body surface can all affect the accuracy of individual recognition during feature collection, posing greater challenges for non-contact identification techniques [28].
In recent years, with the development of three-dimensional (3D) vision technology, image processing algorithms and computing hardware, the use of 3D images in PLF has been increasing [29,30]. Three-dimensional images are generally acquired by depth sensors or light detection and ranging (LiDAR) sensors, which image using binocular parallax, time-of-flight (TOF) or structured light [31]. Because they include height information, 3D images can express more dimensional features than 2D images and are less affected by the environment and lighting [32]. Currently, 3D vision technology has a wide range of applications and good prospects in industry [33] and agriculture [34], and in fields such as face recognition [35], robotics [36] and automatic driving [37]. In PLF, 3D images are commonly used for 3D body surface reconstruction [38,39] and for monitoring body conditions such as body size and weight [40]; they are also gradually being applied to posture recognition [41], but few studies have applied them to individual identification.
Compared with pig face images, pig back surface images are more stable and easier to acquire. The point cloud of the pig's back surface contains information about the width, length and height of different parts of the pig, which represents the body size of the pig more comprehensively and is less affected by the environment, light and dirt on the body surface than 2D images. Therefore, we propose to establish an individual identification model based on point clouds of the pig's back surface that automatically extracts 3D body shape features from the point cloud data, providing a new approach to non-contact individual pig identification.

2. Materials and Methods

2.1. Pigs and Housing

The 3D point cloud images of pigs used in this paper were collected from a free-range fattening barn on a pig farm in Wangkui County, Heilongjiang Province, China. The pig house is a semi-enclosed building with windows. Under natural ventilation and natural light conditions, 10 pigs were randomly selected for image and data acquisition, with different individuals distinguished by marked symbols. Data were collected twice, on 1 July and 10 July 2022; the data collected on 1 July were used for training and validation, and the data collected on 10 July were used for testing. The pigs were crossbreeds (Landrace × Large White), 110 d to 150 d old, weighing 60 kg to 90 kg. The pigs were white with dirt on the body surface, and the floor and walls of the pig house were light gray.

2.2. Data Collection

A depth camera (model ORBBEC Astra Pro) was used to acquire color and depth images. The camera combines an infrared laser emitter with an infrared-sensitive camera and images using a binocular setup and the TOF principle. Both RGB and depth images were acquired at a resolution of 640 × 480 pixels at a frame rate of 30 fps. The horizontal and vertical fields of view are 66.1° and 40.2° for color images and 58.4° and 45.5° for depth images, respectively.
For image acquisition, the depth camera was fixed on a retractable stand directly above the pig's back to capture a top-down depth image of the pig standing relatively still with its back straight. The location of the image acquisition equipment is shown in Figure 1. The camera was connected to the control computer via USB, and video capture was controlled manually from the computer. RGB and depth images were stored frame by frame. Weight was measured using a scale with a range of 0 to 800 kg and an accuracy of 0.5 kg; body size was measured using a tape measure and a measuring stick.

2.3. Dataset Description and Processing

The body weight (BW), chest width (CW), hip width (HW), chest height (CH), hip height (HH) and body length (BL) data of the 10 pigs are shown in Table 1. Among them, pig1, pig2, pig6 and pig7 are more similar in body size, pig3, pig4 and pig5 are more similar in body size, and pig8, pig9 and pig10 are more similar in body size.
Images were screened manually based on the color images, and the corresponding depth images were then retained. To avoid interference from color information such as lighting and surface dirt, only the depth images were kept after screening; the color images were not used as data in this study. Individual identification therefore relied only on body shape features, not on color features. Images were selected according to the following criteria:
  • The image contained the complete pig body;
  • The posture of the pig was standing and the body was straight.
A point cloud represents the shape of an object as a set of points, each containing 3D position coordinates (x, y, z). Because a point cloud is three-dimensional in appearance while a depth image is two-dimensional, the point cloud is the more intuitive representation, and standard depth images and point clouds can be converted into each other. In Visual Studio (Version 2019; Microsoft Inc., Redmond, WA, USA), the OpenCV and PCL libraries were used to convert depth images to point clouds based on the intrinsic parameters of the camera. We calibrated the camera with a checkerboard to obtain its focal lengths, f_x = 601.9267 and f_y = 603.3360. Converting a depth image to a point cloud transforms the data from the image coordinate system to the world coordinate system, using the following formula:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = D \begin{bmatrix} \frac{1}{f_x} & 0 & 0 \\ 0 & \frac{1}{f_y} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \tag{1}$$

where (x, y, z) are the point cloud coordinates, D is the depth value, f_x and f_y are the camera focal lengths, and x' and y' are the depth image coordinates.
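To make the conversion concrete, the following is a minimal Python/NumPy sketch of Equation (1). The focal lengths are the calibrated values reported above; the zero-depth filtering and the assumption that the formula is applied literally (it contains no principal point offset) are illustrative choices, not details from the paper.

```python
import numpy as np

# Calibrated focal lengths reported in the text.
FX, FY = 601.9267, 603.3360

def depth_to_point_cloud(depth: np.ndarray) -> np.ndarray:
    """Convert a 640 x 480 depth image (one depth value D per pixel) to an
    N x 3 point cloud via x = D*x'/f_x, y = D*y'/f_y, z = D (Equation (1))."""
    h, w = depth.shape
    # Image coordinates x', y' for every pixel.
    xp, yp = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = z * xp / FX
    y = z * yp / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading (assumption)
```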
After converting the depth images into point clouds, duplicate frames were removed automatically: when adjacent frames contained the same number of points, they were identified as duplicates and deleted, as sketched below. The converted point cloud contains both the background and the pig body. A top view of part of the point clouds of the 10 pigs, including the background, is shown in Figure 2.
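A minimal sketch of this duplicate-frame rule, assuming the frames arrive as an ordered list of point arrays:

```python
def drop_duplicate_frames(clouds):
    """Keep only the first of any run of consecutive frames whose point
    counts are identical, per the duplicate rule described above."""
    kept, prev_n = [], None
    for pc in clouds:
        if len(pc) != prev_n:
            kept.append(pc)
        prev_n = len(pc)
    return kept
```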
In total, 10,574 images were collected and screened. The 8462 images collected on 1 July were divided into a training set and a validation set at a ratio of 3:1, and the 2112 images collected on 10 July were used as the test set. The division of the datasets and the number of point cloud images per pig are shown in Table 2. The samples were distributed fairly evenly among the pigs except for pig3, for which fewer images were captured during acquisition, resulting in a smaller number of final samples. The segmentation and individual identification results of pig3 are discussed in the analysis of the experimental results.

2.4. Segmentation and Identification Methods

2.4.1. Overall Flow

Individual pig identification consists of two main parts: construction of the pig body segmentation model and construction of the individual identification model. First, the acquired depth images are pre-processed and converted into point clouds. Then, to reduce background interference and the amount of input data for the recognition model, a PointNet++ point cloud segmentation model is built to segment the point cloud of the pig's back. Finally, the back point cloud is fed into the PointNet++LGG individual classification model, which outputs the specific identity of the pig. The process of individual pig identification based on the 3D point cloud of the pig's back surface is shown in Figure 3.

2.4.2. Pig Body Segmentation Methods

The goal of pig body segmentation is to separate the pig back point cloud from the background point cloud. First, the back and background point clouds were labeled with software to obtain the true class of each point. Then, a pig body segmentation model based on the PointNet++ algorithm was established to segment the pig back point clouds.
  1. Point cloud labeling
The point clouds were labeled as pig back point clouds or background point clouds using CloudCompare software. The head shapes of different individuals differed, and the head point clouds of some individuals were incomplete. To avoid interference from head features, the back of the pig with the head removed was used as the segmentation target. The head was delimited by two characteristic points: the points with the greatest change in curvature at the junction of the neck and shoulder were used as segmentation points. As shown in Figure 4, points a and b are the head-neck division points; connecting them separates the head, and the head was given the same label as the background. The infrared light from the camera could not penetrate the pig's body, so there were no point cloud data on the ground at the edges of the body, and the blank area without points in the top view was used as the segmentation edge. The point clouds within the vertical coverage of this boundary were marked as target point clouds; because the pig's body occludes the vertical part, it contains almost no points. As a result, the point clouds of the pig's back were marked as a continuous whole.
  2. Building a PointNet++ segmentation model
The PointNet [42] and PointNet++ [43] algorithms take the points of a point cloud as input and extract features directly from the unstructured data. The segmentation problem in this study is a part segmentation whose target is the back of the pig. The point cloud locations are relatively concentrated, and apart from the detailed separation of the head and neck there are no complex local features, so segmentation algorithms that pay excessive attention to local detail are not needed. PointNet and PointNet++ are classical point cloud algorithms that meet the requirements of this segmentation problem and are fast and concise. Compared with PointNet, PointNet++ takes local features into account and has better feature extraction capability; it was therefore chosen to build the pig body segmentation model in this study.
Since the number of points in this study is very large, the original point clouds are randomly sampled, with n sampling points. The randomly sampled points are used as model input for feature extraction and segmentation. A set of 3D points {P_i | i = 1, 2, …, n} is used as input; each point P_i is a vector of (x, y, z) coordinates without RGB or normal vector information, so the input dimension is n × 3. The output is a classification label for each point, determining whether it belongs to the pig body or the background, so the output dimension is n × 2. The segmentation model established in this study comprises two processes: downsampling for feature extraction and upsampling for feature propagation. Figure 5 shows the architecture of the segmentation model, with feature extraction on the left and feature propagation on the right.
In the feature extraction process, n points are first randomly sampled from the input point cloud, and then two layers of sampling, grouping and feature extraction are applied. Farthest point sampling is used for resampling, single-scale radius grouping is used for grouping, and a multilayer perceptron (MLP) extracts features from all points within each group. For a set of input points P_1, P_2, …, P_n, an ensemble function f is defined to represent the feature vector of the point set, computed as follows:
$$f(P_1, P_2, \ldots, P_n) = \gamma \left( \underset{i=1,2,\ldots,n}{\mathrm{MAX}} \left\{ h(P_i) \right\} \right) \tag{2}$$
where P_i is an input point, γ and h form the MLP network, and the max pooling function MAX is used as the aggregation function.
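As an illustration, the following is a minimal PyTorch sketch of the set function in Equation (2): a shared per-point MLP h, a symmetric max pooling aggregation, and a second MLP γ. The layer widths are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SetFeature(nn.Module):
    """Permutation-invariant set function f = gamma(MAX{h(P_i)}):
    a shared per-point MLP, max pooling over points, then a second MLP."""
    def __init__(self, in_dim=3, feat_dim=64, out_dim=128):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                               nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.gamma = nn.Sequential(nn.Linear(feat_dim, out_dim), nn.ReLU())

    def forward(self, pts):               # pts: (B, n, 3)
        per_point = self.h(pts)           # (B, n, feat_dim), shared weights
        pooled, _ = per_point.max(dim=1)  # symmetric max pooling over points
        return self.gamma(pooled)         # (B, out_dim)
```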
In the feature propagation process, a hierarchical propagation strategy with distance-based interpolation (IP) and cross-layer skip links is used. After two IP and upsampling (UP) layers, the features are propagated back to the original point set. The interpolated feature of each point is computed as the inverse-distance-weighted average of its k nearest neighbors. The feature of an interpolation point x is denoted f and calculated as follows:
$$f(x) = \frac{\sum_{i=1}^{k} \omega_i(x) f_i}{\sum_{i=1}^{k} \omega_i(x)}, \quad \text{where } \omega_i(x) = \frac{1}{d(x, x_i)^2} \tag{3}$$
where d(x, x_i) is the distance between the interpolation point x and its k nearest neighbors x_i, and ω_i(x) is the inverse-square-distance weight.
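A minimal PyTorch sketch of this interpolation, propagating features from a set of known points to query points; k and the small epsilon guarding against division by zero are illustrative assumptions:

```python
import torch

def interpolate_features(query_xyz, known_xyz, known_feat, k=3):
    """Equation (3): each query point takes the inverse-square-distance
    weighted average of the features of its k nearest known points.
    Shapes: query (M, 3), known (N, 3), known_feat (N, C)."""
    d = torch.cdist(query_xyz, known_xyz)        # (M, N) pairwise distances
    dist, idx = d.topk(k, dim=1, largest=False)  # k nearest neighbours
    w = 1.0 / (dist ** 2 + 1e-8)                 # omega_i(x) = 1 / d^2
    w = w / w.sum(dim=1, keepdim=True)           # normalise the weights
    return (known_feat[idx] * w.unsqueeze(-1)).sum(dim=1)  # (M, C)
```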

2.4.3. Individual Pig Recognition Methods

After the pig body is segmented, the segmented back point clouds are used as input to identify individual pigs with a classification model. Using the basic single-scale grouping (SSG) and multi-scale grouping (MSG) strategies, baseline individual recognition models based on the PointNet++ classification algorithm were established. Since the problem in this study is classification among similar individuals of the same breed, an improved local-global grouping (LGG) strategy with a larger feature range and higher dimensionality is proposed, and a PointNet++LGG individual recognition model is established. From the segmented back point clouds, n points are randomly sampled and input to the model; the input is represented as a set of 3D points {P_i | i = 1, 2, …, n}, where each point P_i is a vector of (x, y, z) coordinates, so the input dimension is n × 3. The output is the classification probability over the 10 pigs, so the output dimension is 1 × 10. Compared with the segmentation model, the individual recognition model has only the downsampling and feature extraction process, without upsampling and feature propagation.
  3. Individual pig identification model based on the PointNet++ classification algorithm
The structure of the PointNet++ individual recognition model is shown in Figure 6. After two layers of sampling, grouping and feature extraction, the number of feature points is compressed from 2500 to 128, and the feature dimension is raised from 3 to 643. The features are then aggregated into a 1 × 1024 feature vector by the MLP layer. A fully connected layer followed by the softmax function completes the 10-class classification.
The SSG method uses a ball query to gather all points within a single sampling radius, and an MLP then extracts features from these points. However, because the density of randomly sampled points is non-uniform, feature extraction over multiple radii plays an important role. MSG sets different sampling radii r_1, r_2, r_3 and forms multi-scale features by concatenating the features of different scales, which helps to reduce the effect of sampling density. The principles of SSG and MSG are shown in Figure 7.
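The following sketch illustrates ball-query grouping and the MSG-style concatenation across radii. It is a simplified stand-in (a max over relative coordinates replaces the per-scale MLP) intended only to show the grouping logic; the radii and sample counts mirror the defaults listed in Section 2.5.

```python
import torch

def ball_group(xyz, centers, radius, nsample):
    """For each center, gather up to nsample points within radius, padding
    with the first hit. Assumes each center is itself one of the points
    (as with FPS centers), so at least one point is always in the ball."""
    d = torch.cdist(centers, xyz)                       # (m, n) distances
    in_ball = d < radius
    idx = in_ball.float().topk(nsample, dim=1).indices  # in-ball hits first
    first = idx[:, :1].expand(-1, nsample)
    mask = torch.gather(in_ball, 1, idx)
    idx = torch.where(mask, idx, first)                 # pad misses
    return xyz[idx]                                     # (m, nsample, 3)

def msg_features(xyz, centers, radii=(0.1, 0.2, 0.4), nsamples=(16, 32, 128)):
    """MSG: extract one feature per center at each radius and concatenate.
    A max over relative coordinates stands in for the per-scale MLP."""
    feats = []
    for r, ns in zip(radii, nsamples):
        grouped = ball_group(xyz, centers, r, ns)   # (m, ns, 3)
        local = grouped - centers.unsqueeze(1)      # relative coordinates
        feats.append(local.max(dim=1).values)       # MLP + pooling stand-in
    return torch.cat(feats, dim=1)                  # multi-scale concat
```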
  4. Improved pig individual recognition model based on the PointNet++LGG classification algorithm
Since the pigs in this study were relatively similar in size, the classification model needed to learn higher-dimensional features to better distinguish between similar individuals. By using an improved local-global grouping (LGG) strategy, we increase the range of feature extraction in PointNet++ grouping, raise the dimensionality of the extracted features, and capture richer features of similar individuals.
The principle of the improved LGG grouping strategy proposed in this paper is shown in Figure 8. Given input points x_1, x_2, …, x_n, iterative farthest point sampling (FPS) selects points x_{i1}, x_{i2}, …, x_{im} such that each x_{ij} is the farthest point from the set {x_{i1}, x_{i2}, …, x_{i(j-1)}}. The farthest distance between the sampled points is denoted R_fst and is taken as the maximum sampling radius. During feature extraction, the global sampling radius R_fst is added to the local sampling radii (r_1, r_2, r_3). Since R_fst varies from sample to sample, the feature extraction process adapts itself to different samples.
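A minimal sketch of iterative FPS, under one plausible reading of R_fst as the largest selection distance encountered during sampling (the paper does not spell out the computation); seeding FPS with the first point is also an assumption:

```python
import torch

def farthest_point_sampling(xyz, m):
    """Iteratively pick the point farthest from the already-selected set.
    Returns the m sampled points and R_fst, here taken as the largest
    selection distance, used as the adaptive global radius."""
    n = xyz.shape[0]
    sel = torch.zeros(m, dtype=torch.long)      # seed with point 0
    dist = torch.full((n,), float("inf"))
    r_fst = 0.0
    for j in range(1, m):
        # Distance of every point to the nearest already-selected point.
        dist = torch.minimum(dist, (xyz - xyz[sel[j - 1]]).norm(dim=1))
        sel[j] = dist.argmax()                  # farthest remaining point
        r_fst = max(r_fst, dist[sel[j]].item())
    return xyz[sel], r_fst
```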
In the improved LGG grouping strategy, the radii r_1, r_2, r_3, R_fst are of different sizes and contain different numbers of sampling points, thus carrying different amounts of information and yielding features of different dimensions. f_1, f_2, f_3, f_global are the features at the different sampling radii, with dimensions ordered from smallest to largest. An MLP extracts the features at each radius, and the local features {f_1, f_2, f_3} are concatenated with the global feature f_global. The feature of the point set consisting of m sampled points centered at point i is denoted f(x_{i1}, x_{i2}, …, x_{im}) and is calculated as follows:
$$f(x_{i1}, x_{i2}, \ldots, x_{im}) = f_{j1} + f_{j2} + f_{j3} + f_{j\mathrm{global}}, \quad j = 1, 2, \ldots, m \tag{4}$$

where f_{j1}, f_{j2}, f_{j3} and f_{jglobal} are the features of point j at radii r_1, r_2, r_3 and R_fst, respectively.
The features of each sampled point x_{ij} are extracted from its neighborhoods at the different radii and then concatenated; the same method is used at every radius. f_{jr} denotes the feature of point x_{ij} at radius r (r ∈ {r_1, r_2, r_3, R_fst}) and is calculated as follows:

$$f_{jr} = \gamma \left( h(p_{nj}) \right), \quad n \in N_j, \; r \in \{ r_1, r_2, r_3, R_{\mathrm{fst}} \} \tag{5}$$

where N_j is the set of neighborhood points of x_{ij} at radius r, p_{nj} is the relative coordinate of a neighborhood point, h is the MLP, and γ is the max pooling function.
The structure of the improved PointNet++LGG individual recognition model is shown in Figure 9. The model uses the LGG grouping strategy with a total of 9 MLPs. After two sampling-grouping (SGM) and MLP feature extraction modules, the final feature dimension is 1 × 2048. Increasing the number of layers allows higher-level, deeper classification features to be extracted, and increasing the feature dimension better accommodates these features. Finally, the features are compressed through the fully connected layer and the softmax classifier completes the 10-class classification of similar individuals.
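Building on the FPS and ball-grouping sketches above, the following hypothetical sketch shows the core LGG idea: per-center features extracted at the three local radii plus the adaptive global radius R_fst are concatenated, with a max over relative coordinates again standing in for the per-scale MLPs. It reuses the ball_group helper from the earlier sketch and is not the paper's implementation.

```python
import torch

def lgg_features(xyz, centers, r_fst, radii=(0.1, 0.2, 0.4),
                 nsamples=(16, 32, 128, 256)):
    """LGG grouping sketch: local radii plus the per-sample global radius
    R_fst (from FPS), features concatenated across all four scales."""
    feats = []
    for r, ns in zip(list(radii) + [r_fst], nsamples):
        grouped = ball_group(xyz, centers, r, ns)   # (m, ns, 3)
        local = grouped - centers.unsqueeze(1)      # relative coordinates
        feats.append(local.max(dim=1).values)       # per-scale MLP stand-in
    return torch.cat(feats, dim=1)                  # local + global concat
```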

2.5. Experiment and Parameter Setting

The models were developed in Python 3.7.0 with PyTorch 1.0.0. The computer was configured with 32 GB RAM, Windows 10 (64-bit), an Intel i7-9700 3.0 GHz CPU, and an NVIDIA Tesla T4 GPU with 16 GB of graphics memory. All models randomly sampled 2500 points from the original point cloud and normalized them to a unit ball. Random rotation was used during training to dynamically augment the point clouds, and the position of each point was jittered with Gaussian noise with a mean of zero and a standard deviation of 0.02.
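A minimal sketch of this input pipeline; the rotation axis (vertical) and the centering step are assumptions, as the paper does not specify them:

```python
import numpy as np

def preprocess(pc, n=2500, sigma=0.02):
    """Randomly sample n points, normalize into a unit ball, then augment
    with a random rotation and Gaussian jitter (sigma = 0.02)."""
    pc = pc[np.random.choice(len(pc), n, replace=len(pc) < n)]
    pc = pc - pc.mean(axis=0)                      # center (assumption)
    pc = pc / np.linalg.norm(pc, axis=1).max()     # scale into unit ball
    a = np.random.uniform(0, 2 * np.pi)            # random rotation angle
    rot = np.array([[np.cos(a), -np.sin(a), 0],    # rotation about z axis
                    [np.sin(a),  np.cos(a), 0],
                    [0, 0, 1]])
    pc = pc @ rot.T
    pc += np.random.normal(0.0, sigma, pc.shape)   # Gaussian jitter
    return pc
```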
In both the segmentation model and the individual recognition model, the negative log-likelihood loss (NLLLoss) was used, the learning rate was set to 0.001, the batch size was 8, the number of training iterations was 50 and 150, respectively, and the models were optimized with Adam. In the PointNet++ individual recognition models, both the SSG and MSG models adopted the default hyperparameters of the algorithm. The SSG model used two sampling radii of 0.2 and 0.4, with 32 and 64 sampling points, respectively. The MSG model used sampling radii of [0.1, 0.2, 0.4] and [0.2, 0.4, 0.8], with [16, 32, 128] and [32, 64, 128] sampling points, respectively. The LGG sampling radii were [0.1, 0.2, 0.4, R_fst] and [0.2, 0.4, 0.8, R_fst], with [16, 32, 128, 256] and [32, 64, 128, 128] sampling points, respectively, where R_fst is a variable calculated from the maximum distance of the sampling points for each sample.

2.6. Evaluation Metrics

In the pig body segmentation model, overall segmentation accuracy (OA), mean intersection over union (mIoU), Precision, Recall and F1 score were used to evaluate performance. OA is the ratio of correctly predicted points to the total number of points. Precision is the accuracy of pig body predictions, and Recall is the proportion of pig body points correctly detected. F1 score is the harmonic mean of Precision and Recall. IoU is the ratio of the intersection to the union of the predicted and true regions; the IoU of the pig body is denoted IoU1, the IoU of the background IoU2, and mIoU is their average.
A false positive (FP) is a point predicted as pig body that actually belongs to the background. A true positive (TP) is a point correctly predicted as pig body. A false negative (FN) is a point predicted as background that actually belongs to the pig body. A true negative (TN) is a point correctly predicted as background. Accuracy, mIoU, Precision, Recall and F1 score are defined as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{6}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{7}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{8}$$

$$\mathrm{F1\ score} = \frac{2 \cdot \mathrm{Recall} \cdot \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} \tag{9}$$

$$\mathrm{IoU}_i = \frac{TP_i}{TP_i + FP_i + FN_i} \tag{10}$$

$$\mathrm{mIoU} = \frac{1}{2} \left( \mathrm{IoU}_1 + \mathrm{IoU}_2 \right) \tag{11}$$
In the individual recognition model, Accuracy, Precision, Recall and F1 score were used to evaluate performance, as in Equations (6) to (9). Here, TP is the number of pigs of a given category correctly classified into that category, FP is the number of pigs incorrectly classified into that category, TN is the number of pigs of other categories correctly classified as not belonging to that category, and FN is the number of pigs of that category incorrectly classified into other categories.
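For reference, the following is one plausible sketch of computing these metrics from a multi-class confusion matrix (rows as true labels, columns as predictions, matching the convention used in Section 3.2.2), macro-averaging the per-class values; it is illustrative, not the paper's code:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class metrics per Equations (6)-(11) from a K x K confusion
    matrix cm, then macro-averaged over classes."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp              # column sum minus diagonal
    fn = cm.sum(axis=1) - tp              # row sum minus diagonal
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    iou = tp / (tp + fp + fn)
    return accuracy, precision.mean(), recall.mean(), f1.mean(), iou.mean()
```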

3. Results and Discussion

3.1. Pig Body Segmentation

3.1.1. Model Training Results

To verify the performance of the PointNet++ pig body segmentation model established in this paper, it was compared with the PointNet segmentation algorithm on the same dataset and in the same test environment. Figure 10 shows the accuracy and loss curves on the validation set for the two models during training. The PointNet++ segmentation model converged to an accuracy of around 99.81% with a loss value close to 0, and the PointNet segmentation model to around 99.56% with a loss value close to 0. Both models achieved high accuracy and could segment the pig body from the background very well. The PointNet++ model was 0.25% more accurate than the PointNet model, and its loss value decreased more quickly.

3.1.2. Model Test Results

The performance of the segmentation models on the test set is shown in Table 3. The OA of PointNet was 99.53% and that of PointNet++ was 99.80%; the overall segmentation accuracy of both models was high, with the PointNet++ model 0.27% higher. The mIoU, Precision, Recall and F1 score of the PointNet++ model were 0.81%, 1.25%, 0.25% and 0.67% higher than those of the PointNet model, respectively. This did not mean that PointNet++ outperformed PointNet on every pig: there was some randomness in the segmentation results for different pigs, which generally originated from random errors in manual labeling and had a small effect on the final segmentation.
The segmentation results of the PointNet and PointNet++ models are shown in Figure 11. Although their segmentation of the neck differed slightly, this did not affect the overall profile and had a negligible effect on individual recognition. Both models successfully separated the pig body from the background.
In general, the PointNet++ model performed well in pig body point cloud segmentation and, because the algorithm attends to local features, showed an advantage in the head and neck region. When the model was tested on separate datasets for each pig, there was no significant difference in segmentation between individuals, demonstrating that the PointNet++ model is robust for segmenting the point clouds of pigs' backs.

3.2. Individual Pig Identification

3.2.1. Model Training Results

As shown in Figure 12, the validation accuracy of each model grew during training, and the loss values all decreased and finally converged. The PointNet model reached an accuracy of 80% with a final loss value of 0.56, the PointNet++SSG model 82% with a final loss of 0.51, the PointNet++MSG model 96% with a final loss of 0.12, and the PointNet++LGG model 97% with a final loss of 0.11.
The PointNet++LGG model proposed in this paper converged fastest and had the highest accuracy, followed by the PointNet++MSG model, while the PointNet++SSG and PointNet models had relatively low accuracy. Because of the large amount of point cloud data, each model used sampling to reduce the data volume; random sampling leads to an uneven point cloud distribution, so feature extraction over multiple radii is compatible with point cloud features of different densities and performed better in the experiments. The LGG model combines small-scale radii with a global radius and sets the feature dimension according to the radius size, balancing the feature extraction range and the feature volume; this allows the model to capture the features of different individuals more quickly, which is why the PointNet++LGG model converged fastest.
The final recognition accuracy was affected by many factors; for example, sampling 2500 points from 50,000 to 60,000 points at the initial sampling stage inevitably loses some features. Against this background, the PointNet++LGG model's recognition accuracy of up to 97% represents high recognition performance.

3.2.2. Model Test Results

The classification performance on the test set is shown in Table 4. The Accuracy, Precision, Recall and F1 score of the PointNet++LGG model were 95.26%, 95.51%, 95.53% and 95.52%, respectively, which were higher than those of the PointNet++MSG model by 2.18%, 1.74%, 2.19% and 2.03%, the PointNet++SSG model by 16.76%, 10.20%, 16.92% and 13.7%, and the PointNet model by 17.19%, 12.29%, 19.37% and 15.99%, respectively. The PointNet++LGG model proposed in this paper performed best overall in classifying individual pigs.
To evaluate model performance at more scales and to understand each model's performance per category, the samples of each category were tested separately; the results are shown in Table 5. The per-pig classification accuracies of the PointNet++LGG model, from the highest for pig2 (100%) to the lowest for pig9 (85.64%), were close to the overall accuracy (95.26%), and the model worked well on every per-class dataset. The PointNet++MSG model had low classification accuracy for pig10 (78.33%), as did the PointNet++SSG model (52.5%). The PointNet model had very low classification accuracy for pig1 (44.71%), which dragged down its overall performance. The low recognition rates of different models in specific categories may be caused by similarity between samples of different categories. The uniform per-category performance of the PointNet++LGG model indicates that it is compatible with different samples and has good stability.
To show the details of each model's classification results for each pig more clearly, classification confusion matrices were created, as shown in Figure 13. Predicted labels are on the horizontal axis and true labels on the vertical axis, and the values on the diagonal are the TP counts, with darker colors indicating higher values. For a given class, FN is the sum of its row excluding the diagonal, FP is the sum of its column excluding the diagonal, and TN is the sum of the values outside that row and column.
As shown in Figure 13a, the TP values in the confusion matrix of the PointNet++LGG model were high, the recognition results were essentially distributed on the diagonal, and the classification performance for each pig was relatively uniform, with a low probability of recognition error. As shown in Figure 13b, the values of the PointNet++MSG confusion matrix were also mainly on the diagonal; however, the FP value of pig5 (44) was relatively large, meaning other pigs were often identified as pig5, and the FN value of pig10 (56) was relatively large, meaning pig10 was often misidentified as other pigs. As shown in Figure 13c, most values of the PointNet++SSG confusion matrix were on the diagonal; pig1, pig8 and pig10 had large FN values of 60, 96 and 117, respectively, so they were more likely to be misidentified as other pigs, while pig7 had an FP value of 240, so other pigs were easily misidentified as pig7. As shown in Figure 13d, in the PointNet confusion matrix the FN values of pig1 and pig10 were large (123 and 62, respectively), and the FP values of pig2, pig5 and pig9 were large (115, 111 and 125, respectively).
To summarize the classification of different individuals: pig1 was most often misidentified as pig2, pig10 was most often misidentified as pig5 and pig7, and the misidentifications of other pigs as pig5 were more evenly distributed.

3.2.3. Visual Analysis of Samples for Classification

The similarity problems exhibited by the same individuals across different models may be related to the characteristics of the samples themselves. To explore the sources of variability in the recognition performance of different models across categories, some samples used in individual recognition were visualized. Top views of segmented back point cloud images of pig1, pig2, pig5, pig7 and pig10 are shown in Figure 14. Combined with the body size and weight data in Table 1, pig1 and pig2 were similar in body size and weight and belong to highly similar samples. The PointNet model often identified pig1 as pig2, indicating that it has weak feature extraction ability for similar samples.
Pig10 differed significantly from pig5 and pig7 in body size and weight, and was probably misidentified because the models captured interfering features other than body size. For example, the images of pig7 and pig10 were similar in the shape of the neck, and some images of pig5 and pig10 had cracks on the body surface. These cracks were produced by the infrared rays emitted by the depth camera at areas of high curvature on the pig's back, and can generally be filled by pre-processing such as point cloud completion. In this paper, a deep learning algorithm extracted features automatically, and the segmented original point clouds were input directly into the model without any noise reduction, so as to achieve a fully automatic recognition process. The extraction of effective features and the filtering of invalid features is therefore an important capability of individual recognition algorithms. From the experimental results, the PointNet and PointNet++SSG models easily identified pig10 as pig5 or pig7. The PointNet model attends only to global information, and the PointNet++SSG model, which uses a single feature extraction radius, is less able to learn unevenly sampled point cloud features; both models therefore learned large weights on invalid features. The PointNet++MSG model, which misidentified a small number of pig10 test samples as pig5, may also have focused too much on local information and assigned some weight to invalid features.
The PointNet++LGG model proposed in this paper showed the highest classification accuracy of the four models, classified each pig uniformly, and learned similar samples better, indicating better stability.

3.3. Discussion

Unlike the point cloud segmentation of everyday scenes [44] and industrial scenes [33], the segmentation problem in this study was relatively simple. The experiments confirmed that the PointNet++ point cloud segmentation algorithm successfully extracted the point cloud of the pig's back with 99.8% accuracy, achieving a good segmentation result. Most previous studies segmented animal bodies by conventional point cloud computation. Wang et al. [45] collected point clouds of pig bodies for body size measurement and used a random sample consensus algorithm to remove ground points. Shi et al. [38] collected 3D point clouds of pig bodies for reconstruction; the original clouds consisted of target pigs, pens, ground and noise points, so the railings and noise were removed by point cloud filtering and the ground by the random sample consensus algorithm. Such point cloud computation is relatively complicated and requires different methods to remove the ground, walls and fences in the background. In contrast, the PointNet++ pig body segmentation model constructed in this paper takes the raw collected point cloud as input and completes the segmentation of the pig body automatically, which is a very convenient process.
The dataset commonly used with classical point cloud classification algorithms such as PointNet and PointNet++ is ModelNet40 [46]. The dataset used for the individual identification problem in this paper differs from it in three ways. First, ModelNet40 distinguishes between different kinds of objects, while the objects in this study were different individuals of the same breed of pigs; the great similarity between samples increases the difficulty of classification. Second, each object in ModelNet40 has 10,000 points, while a segmented point cloud image of a pig's back has 50,000 to 60,000 points, which adds to the sampling challenge. Third, all samples in ModelNet40 are standard samples with uniform distribution, whereas the samples in this study contain noise and cracks and were used directly as model input without pre-processing, to build a fully automated individual recognition system; this imposes higher requirements on the feature extraction and generalization ability of the model. The effects of different sampling grouping strategies on classification were compared experimentally. The results showed that the PointNet++LGG model constructed in this paper, which considers global features, achieved a recognition accuracy of 95.26% and classified all individuals uniformly, demonstrating stronger feature extraction and better generalization. The LGG strategy increased the feature extraction range of the model, deepened the network structure, increased the number of feature points, filtered out interfering features, and extracted the higher-dimensional features used to distinguish similar individuals.
Individual identification plays an important role in PLF and is the basis for precision feeding and precision management. Coupling machine vision-based individual recognition with weight estimation, body size estimation and behavior estimation in the same model, integrated into edge devices, would greatly improve the efficiency of farm management. Distinguishing individuals by pig face recognition suffers, on the one hand, from the difficulty of acquiring images; on the other hand, the pig face cannot support the other analyses required in precision farming. At the same time, 3D data can provide richer features than 2D data and already have many applications in precision farming. Pezzuolo et al. [47] used a Kinect camera to measure body size based on pig back point clouds. Li et al. [48] calculated body size from pig back point cloud data to build a regression model between body weight and body size. He et al. [49] built a body weight estimation model from pig back point clouds with a deep learning algorithm. Song et al. [31] calculated body size from cow back point clouds. The method of individual identification based on back 3D point cloud data proposed in this study is a new exploration in the field of animal individual identification and offers a possible route for integrating several PLF functions.
This study is a preliminary exploration of individual recognition using 3D body shape features and has some limitations. The experiments covered individual recognition of ten pigs; larger herds will bring new challenges to the algorithm, so the number of pigs should be increased in subsequent studies. In addition, how much changes in the pigs' appearance over time affect the classification results is another factor to be considered in subsequent experiments.

4. Conclusions

The following conclusions were drawn in this study:
  • A fully automated, non-contact, two-stage method for identifying individual pigs based on 3D point clouds of the pig's back was proposed. A PointNet++ pig body segmentation model was established to segment the pig back point cloud from the background, with a segmentation accuracy of 99.80%.
  • The PointNet++LGG algorithm with an improved grouping strategy was proposed. Its Accuracy, Precision, Recall and F1 score in individual recognition reached 95.26%, 95.51%, 95.53% and 95.52%, respectively. Compared with the PointNet, PointNet++SSG and PointNet++MSG algorithms, its recognition accuracy was higher, its recognition of similar individuals was better and more uniform across individuals, and its generalization ability was stronger.
  • The proposed method of individual pig recognition based on 3D point cloud images of the back is a new exploration in the field of animal individual recognition. It avoids the stress caused to animals by radio frequency identification (RFID), avoids the difficulty of obtaining pig face samples in pig face recognition, and provides a new method and idea for the individual recognition of other animals.

Author Contributions

Writing—original draft preparation, H.Z.; writing—review and editing, Q.L. and Q.X.; formal analysis, Q.X.; software, H.Z.; data curation, H.Z.; supervision, Q.X.; investigation, Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project of the National Natural Science Foundation of China (NSFC) (32072787); the project of the Postdoctoral Science Foundation of Heilongjiang Province (LBH-Q21070), China; the project of the Heilongjiang Bayi Agricultural University Support Program for San Zong (ZDZX202102); the project of Scholar Plan at Northeast Agriculture University (19YJXG02), China.

Institutional Review Board Statement

The Animal Ethics Committee in Northeast Agricultural University approved the experimental protocol, with the project number 32072787. The sampling procedures complied with the “Guidelines on Ethical Treatment of Experimental Animals” (2006) No. 398 set by the Ministry of Science and Technology, China.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aquilani, C.; Confessore, A.; Bozzi, R.; Sirtori, F.; Pugliese, C. Review: Precision livestock farming technologies in pasture-based livestock systems. Animal 2022, 16, 100429. [Google Scholar] [CrossRef] [PubMed]
  2. García, R.; Aguilar, J.; Toro, M.; Pinto, A.; Rodríguez, P. A systematic literature review on the use of machine learning in precision livestock farming. Comput. Electron. Agric. 2020, 179, 105826–205837. [Google Scholar] [CrossRef]
  3. Tzanidakis, C.; Simitzis, P.; Arvanitis, K.; Panagakis, P. An overview of the current trends in precision pig farming technologies. Livest. Sci. 2021, 249, 104530. [Google Scholar] [CrossRef]
  4. Bao, J.; Xie, Q.J. Artificial intelligence in animal farming: A systematic literature review. J. Clean. Prod. 2022, 331, 129956–129968. [Google Scholar] [CrossRef]
  5. Thölke, H.; Wolf, P. Economic advantages of individual animal identification in fattening pigs. Agriculture 2022, 12, 126. [Google Scholar] [CrossRef]
  6. Collins, L.M.; Smith, L.M. Review: Smart agri-systems for the pig industry. Animal 2022, 16, 100518. [Google Scholar] [CrossRef]
  7. Wang, M.; Larsen, M.L.V.; Liu, D.; Winters, J.F.M.; Rault, J.; Norton, T. Towards re-identification for long-term tracking of group housed pigs. Biosyst. Eng. 2022, 222, 71–81. [Google Scholar] [CrossRef]
  8. Tzanidakis, C.; Tzamaloukas, O.; Simitzis, P.; Panagakis, P. Precision livestock farming applications (PLF) for grazing animals. Agriculture 2023, 13, 288. [Google Scholar] [CrossRef]
  9. Jin, H.; Meng, G.; Pan, Y.; Zhang, X.; Wang, C. An improved intelligent control system for temperature and humidity in a pig house. Agriculture 2022, 12, 1987. [Google Scholar] [CrossRef]
  10. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef]
  11. Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine learning in agriculture: A comprehensive updated review. Sensors 2021, 21, 3758. [Google Scholar] [CrossRef] [PubMed]
  12. Fang, C.; Zheng, H.; Yang, J.; Deng, H.; Zhang, T. Study on poultry pose estimation based on multi-parts detection. Animals 2022, 12, 1322. [Google Scholar] [CrossRef] [PubMed]
  13. Akçay, H.G.; Kabasakal, B.; Aksu, B.; Demir, N.; Öz, M.; Erdogan, A. Automated bird counting with deep learning for regional bird distribution mapping. Animals 2020, 10, 1207. [Google Scholar] [CrossRef] [PubMed]
  14. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.H.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic identification of marked pigs in a pen using image pattern recognition. Comput. Electron. Agric. 2013, 93, 111–120. [Google Scholar] [CrossRef]
  15. Li, J.; Green-Miller, A.R.; Hu, X.; Lucic, A.; Mahesh, M.M.R.; Dilger, R.N.; Condotta, I.C.F.S.; Aldridge, B.; Hart, J.M.; Ahuja, N. Barriers to computer vision applications in pig production facilities. Comput. Electron. Agric. 2022, 200, 107227. [Google Scholar] [CrossRef]
  16. Hansen, M.F.; Smith, M.L.; Smith, L.N.; Salter, M.G.; Baxter, E.M.; Farish, M.; Grieve, B. Towards on-farm pig face recognition using convolutional neural networks. Comput. Ind. 2018, 98, 145–152. [Google Scholar] [CrossRef]
  17. Marsot, M.; Mei, J.; Shan, X.; Ye, L.; Feng, P.; Feng, X.; Li, C.; Zhao, Y. An adaptive pig face recognition approach using convolutional neural networks. Comput. Electron. Agric. 2020, 173, 105386–105395. [Google Scholar] [CrossRef]
  18. Sihalath, T.; Basak, J.K.; Bhujel, A.; Arulmozhi, E.; Moon, B.E.; Kim, H.T. Pig identification using deep convolutional neural network based on different age ranges. J. Biosyst. Eng. 2021, 46, 182–195. [Google Scholar] [CrossRef]
  19. Yan, H.; Cui, Q.; Liu, Z. Pig face identification based on improved alexnet model. INMATEH Agric. Eng. 2020, 61, 97–104. [Google Scholar] [CrossRef]
  20. Wang, Z.; Liu, T. Two-stage method based on triplet margin loss for pig face recognition. Comput. Electron. Agric. 2022, 194, 106737. [Google Scholar] [CrossRef]
  21. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, present, and future of face recognition: A Review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
  22. Zhu, W.X.; Guo, Y.Z.; Jiao, P.P.; Ma, C.H.; Chen, C. Recognition and drinking behaviour analysis of individual pigs based on machine vision. Livest. Sci. 2017, 205, 129–136. [Google Scholar] [CrossRef]
  23. Huang, W.; Zhu, W.; Ma, C.; Guo, Y.; Chen, C. Identification of group-housed pigs based on gabor and local binary pattern features. Biosyst. Eng. 2018, 166, 90–100. [Google Scholar] [CrossRef]
  24. Li, S.; Kang, X.; Feng, Y.; Liu, G. Detection method for individual pig based on improved YOLOv4 Convolutional Neural Network. In Proceedings of the 2021 4th International Conference on Data Science and Information Technology, Shanghai, China, 23–25 July 2021; pp. 231–235. [Google Scholar]
  25. Lu, J.S.; Wang, W.; Zhao, K.; Wang, H.Y. Recognition and segmentation of individual pigs based on Swin. Anim. Genet. 2022, 53, 794–802. [Google Scholar] [CrossRef] [PubMed]
  26. Li, W.Y.; Ji, Z.T.; Wang, L.; Sun, C.H.; Yang, X.T. Automatic individual identification of Holstein dairy cows using tailhead. Comput. Electron. Agric. 2017, 142, 622–631. [Google Scholar] [CrossRef]
  27. Hu, H.Q.; Dai, B.S.; Shen, W.; Wei, X.L.; Sun, J.; Li, R.; Zhang, Y.G. Cow identification based on fusion of deep parts features. Biosyst. Eng. 2020, 192, 245–256. [Google Scholar] [CrossRef]
  28. Chen, C.; Zhu, W.; Norton, T. Behaviour recognition of pigs and cattle: Journey from computer vision to deep learning. Comput. Electron. Agric. 2021, 187, 106255. [Google Scholar] [CrossRef]
  29. Zang, J.; Zhuang, Y.; Ji, H.; Teng, G. Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method. Sensors 2021, 21, 3218. [Google Scholar] [CrossRef]
  30. Du, A.; Guo, H.; Lu, J.; Su, Y.; Ma, Q.; Ruchay, A.; Marinello, F.; Pezzuolo, A. Automatic livestock body measurement based on keypoint detection with multiple depth cameras. Comput. Electron. Agric. 2022, 198, 107059. [Google Scholar] [CrossRef]
  31. Song, X.; Bokkers, E.A.M.; Van der Tol, P.P.J.; Koerkamp, P.W.G.G.; van Mourik, S. Automated body weight prediction of dairy cows using 3-dimensional vision. J. Dairy Sci. 2018, 101, 4448–4459. [Google Scholar] [CrossRef]
  32. Yin, L.; Zhu, J.; Liu, C.; Tian, X.; Zhang, S. Point cloud-based pig body size measurement featured by standard and non-standard postures. Comput. Electron. Agric. 2022, 199, 107135. [Google Scholar]
  33. Yin, C.; Wang, B.; Gan, V.J.L.; Wang, M.Z.; Cheng, J.C.P. Automated semantic segmentation of industrial point clouds using ResPointNet++. Autom. Constr. 2021, 130, 103874. [Google Scholar] [CrossRef]
  34. Yu, T.; Hu, C.; Xie, Y.; Liu, J.Z.; Li, P.P. Mature pomegranate fruit detection and location combining improved F-PointNet with 3D point cloud clustering in orchard. Comput. Electron. Agric. 2022, 200, 107233. [Google Scholar] [CrossRef]
  35. Li, M.; Huang, B.; Tian, G. A comprehensive survey on 3D face recognition methods. Eng. Appl. Artif. Intell. 2022, 110, 104669. [Google Scholar] [CrossRef]
  36. Kim, P.; Chen, J.; Cho, Y.K. SLAM-driven robotic mapping and registration of 3D point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
  37. Alaba, S.Y.; Ball, J.E. A survey on deep-learning-based lidar 3d object detection for autonomous driving. Sensors 2022, 22, 9577. [Google Scholar] [CrossRef]
  38. Shi, S.; Yin, L.; Liang, S.; Zhong, H.J.; Tian, X.H.; Liu, C.X.; Sun, A.D.; Liu, H.X. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view RGB-D cameras. Comput. Electron. Agric. 2020, 175, 105543–105552. [Google Scholar] [CrossRef]
  39. Samperio, E.; Lidon, I.; Rebollar, R.; Castejón-Limas, M.; Álvarez-Aparicio, C. Lambs’ live weight estimation using 3D images. Animal 2021, 15, 100212–100219. [Google Scholar] [CrossRef]
  40. Wang, K.; Zhu, D.; Guo, H.; Ma, Q.; Su, W.; Su, Y. Automated calculation of heart girth measurement in pigs using body surface point clouds. Comput. Electron. Agric. 2019, 156, 565–573. [Google Scholar] [CrossRef]
  41. Riekert, M.; Klein, A.; Adrion, F.; Hoffmann, C.; Gallmann, E. Automatically detecting pig position and posture by 2D camera imaging and deep learning. Comput. Electron. Agric. 2020, 174, 105391. [Google Scholar] [CrossRef]
  42. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 1 July 2017. [Google Scholar]
  43. Qi, C.R.; Li, Y.; Hao, S.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. arXiv 2017, arXiv:1706.02413. [Google Scholar]
  44. Bello, S.A.; Wang, C.; Wambugu, N.M.; Adam, J.M. FFPointNet: Local and global fused feature for 3D point clouds analysis. Neurocomputing 2021, 461, 55–62. [Google Scholar] [CrossRef]
  45. Wang, K.; Guo, H.; Ma, Q.; Su, W.; Chen, L.C.; Zhu, D.H. A portable and automatic Xtion-based measurement system for pig body size. Comput. Electron. Agric. 2018, 148, 291–298. [Google Scholar] [CrossRef]
  46. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
  47. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-barn pig weight estimation based on body measurements by a Kinect v1 depth camera. Comput. Electron. Agric. 2018, 148, 29–36. [Google Scholar] [CrossRef]
  48. Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [Google Scholar] [CrossRef]
  49. He, H.; Qiao, Y.; Li, X.; Chen, C.; Zhang, X. Automatic weight measurement of pigs based on 3D images and regression network. Comput. Electron. Agric. 2021, 187, 106299–106304. [Google Scholar] [CrossRef]
Figure 1. Location of image acquisition equipment.
Figure 2. Top view of the point cloud of the pig’s back.
Figure 3. The process of individual pig identification based on the 3D point cloud of the pig’s back surface.
Figure 4. Pig point cloud segmentation labels. Points a and b are the head and neck segmentation points; points labeled 1 (pig back) are shown in white and points labeled 0 (background) in black. (a) Top view of the labeled point cloud; (b) side view of the labeled point cloud.
Figure 5. PointNet++ segmentation model. MLP: multilayer perceptron; IP: interpolation; UP: upsampling.
Figure 6. PointNet++ individual identification model. MLP: multilayer perceptron.
Figure 7. Principles of SSG and MSG: (a) SSG; (b) MSG. SSG: single-scale grouping; MSG: multi-scale grouping.
Figure 8. Improved LGG grouping strategy. LGG: local-global grouping strategy.
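Figures 7 and 8 differ only in how neighborhoods are gathered around each sampled centroid: SSG queries a single radius, MSG queries several radii and concatenates their features, and LGG adds a global-scale radius to the local one. The NumPy sketch below is a minimal illustration under our own assumptions (unit-normalized coordinates, a simple ball query padded to a fixed group size k, illustrative radii, and the reading of LGG as "local ball plus a cloud-spanning ball"); it is not the authors' implementation.

```python
# Minimal sketch of ball-query grouping (illustrative, not the authors' code).
# Coordinates are assumed normalized to the unit cube.
import numpy as np

def ball_query(points, centroids, radius, k):
    """For each centroid, gather k points within `radius`; if fewer than k
    fall inside the ball, indices are repeated to keep groups a fixed size."""
    groups = []
    for c in centroids:
        dist = np.linalg.norm(points - c, axis=1)
        idx = np.where(dist <= radius)[0]
        if idx.size == 0:
            idx = np.array([np.argmin(dist)])     # fall back to nearest point
        groups.append(points[np.resize(idx, k)])  # pad/truncate to k indices
    return np.stack(groups)                       # (n_centroids, k, 3)

rng = np.random.default_rng(0)
points = rng.random((1024, 3))                             # toy point cloud
centroids = points[rng.choice(1024, 128, replace=False)]   # sampled centers

ssg = ball_query(points, centroids, radius=0.2, k=32)      # single scale
msg = [ball_query(points, centroids, r, k=32) for r in (0.1, 0.2, 0.4)]
# One reading of "local-global": a tight local ball plus a radius spanning
# the whole normalized cloud (the unit-cube diagonal), so every group also
# carries global shape context.
lgg = [ball_query(points, centroids, r, k=32) for r in (0.2, np.sqrt(3.0))]
```

The fixed group size k is what lets all groups be processed as one batched tensor by the shared MLPs in Figures 5, 6 and 9.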
Figure 9. Improved pig individual recognition model based on PointNet++LGG. MLP: multilayer perceptron; SGM: sampling grouping module.
Figure 10. Accuracy and loss on the validation set during training of the PointNet and PointNet++ segmentation models: (a) accuracy; (b) loss.
Figure 11. Segmentation results of the PointNet and PointNet++ models: (a) ground truth; (b) segmentation result of the PointNet model; (c) segmentation result of the PointNet++ model.
Figure 12. Accuracy and loss on the validation set during training of the PointNet, PointNet++SSG, PointNet++MSG and PointNet++LGG identification models: (a) accuracy; (b) loss.
Figure 13. Confusion matrices of the individual recognition models: (a) PointNet++LGG; (b) PointNet++MSG; (c) PointNet++SSG; (d) PointNet.
Figure 14. Visual analysis of samples for classification: (a) pig1; (b) pig2; (c) pig5; (d) pig7; (e) pig10.
Table 1. Weight and body size of 10 pigs. BW: body weight; CW: chest width; HW: hip width; CH: chest height; HH: hip height; BL: body length.

| Body Size | Pig1 | Pig2 | Pig3 | Pig4 | Pig5 | Pig6 | Pig7 | Pig8 | Pig9 | Pig10 |
|---|---|---|---|---|---|---|---|---|---|---|
| BW (kg) | 89.0 | 87.5 | 75.0 | 76.5 | 77.5 | 88.0 | 89.5 | 63.5 | 69.0 | 60.5 |
| CW (cm) | 30.6 | 32.1 | 30.3 | 27.7 | 29.6 | 31.5 | 30.56 | 25.9 | 27.6 | 25.9 |
| HW (cm) | 29.0 | 30.1 | 25.5 | 28.1 | 26.0 | 28.5 | 31.28 | 23.8 | 23.6 | 28.0 |
| CH (cm) | 67.5 | 66.4 | 56.6 | 57.5 | 57.8 | 63.6 | 60.9 | 53.2 | 54.4 | 52.1 |
| HH (cm) | 68.1 | 68.3 | 56.9 | 57.9 | 58.9 | 65.6 | 62.7 | 55.3 | 55.7 | 56.2 |
| BL (cm) | 97.7 | 95.4 | 82.8 | 86.8 | 86.6 | 96.7 | 90.6 | 85.9 | 89.2 | 77.4 |
Table 2. Dataset division.

| Dataset | Pig1 | Pig2 | Pig3 | Pig4 | Pig5 | Pig6 | Pig7 | Pig8 | Pig9 | Pig10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Training set | 631 | 779 | 384 | 735 | 727 | 621 | 544 | 520 | 668 | 741 | 6350 |
| Validation set | 209 | 256 | 128 | 245 | 242 | 207 | 182 | 173 | 223 | 247 | 2112 |
| Test set | 209 | 256 | 128 | 245 | 242 | 207 | 182 | 173 | 223 | 247 | 2112 |
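Table 2 divides each pig's clouds in roughly a 6:2:2 ratio (6350/2112/2112 of the 10,574 clouds in total), so every individual appears in all three sets. A minimal sketch of such a per-pig split follows; the function name, file naming, and exact ratios are illustrative assumptions, not the authors' code.

```python
# Sketch of a per-pig ~6:2:2 train/validation/test split (illustrative).
import random

def split_per_pig(clouds_by_pig, train=0.6, val=0.2, seed=0):
    """Shuffle each pig's clouds independently and split them so every
    individual is represented in train, validation, and test."""
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for pig_id, clouds in clouds_by_pig.items():
        shuffled = clouds[:]                      # copy before shuffling
        rng.shuffle(shuffled)
        n_train = int(len(shuffled) * train)
        n_val = int(len(shuffled) * val)
        splits["train"] += shuffled[:n_train]
        splits["val"] += shuffled[n_train:n_train + n_val]
        splits["test"] += shuffled[n_train + n_val:]
    return splits

# e.g. 1049 clouds for one pig -> 629 / 209 / 211 with these ratios
example = {"pig1": [f"pig1_{i:04d}.pcd" for i in range(1049)]}
print({k: len(v) for k, v in split_per_pig(example).items()})
```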
Table 3. Test results of the PointNet and PointNet++ segmentation models. OA: overall segmentation accuracy; mIoU: mean intersection over union.

| Models | Evaluation Metrics | Total | Pig1 | Pig2 | Pig3 | Pig4 | Pig5 | Pig6 | Pig7 | Pig8 | Pig9 | Pig10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PointNet | OA (%) | 99.53 | | | | | | | | | | |
| | Precision (%) | 97.92 | 98.56 | 99.59 | 94.78 | 96.97 | 99.04 | 98.70 | 99.73 | 92.47 | 99.59 | 96.73 |
| | Recall (%) | 99.73 | 99.93 | 99.95 | 99.93 | 99.97 | 99.99 | 99.94 | 99.09 | 99.48 | 99.79 | 99.73 |
| | F1 score (%) | 98.82 | 99.24 | 99.77 | 97.29 | 98.45 | 99.51 | 99.31 | 99.41 | 95.85 | 99.69 | 98.21 |
| | mIoU (%) | 98.55 | 99.03 | 99.70 | 96.88 | 98.11 | 99.41 | 99.17 | 99.12 | 95.25 | 99.64 | 97.89 |
| PointNet++ | OA (%) | 99.80 | | | | | | | | | | |
| | Precision (%) | 99.17 | 99.12 | 99.68 | 98.00 | 98.03 | 99.49 | 99.26 | 99.58 | 99.20 | 99.67 | 98.98 |
| | Recall (%) | 99.98 | 99.90 | 99.96 | 99.76 | 99.94 | 99.97 | 99.95 | 99.72 | 99.28 | 99.72 | 99.71 |
| | F1 score (%) | 99.49 | 99.51 | 99.82 | 98.87 | 98.97 | 99.73 | 99.60 | 99.65 | 99.24 | 99.70 | 99.34 |
| | mIoU (%) | 99.36 | 99.37 | 99.76 | 98.68 | 98.75 | 99.67 | 99.52 | 99.48 | 99.11 | 99.65 | 99.22 |
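The metrics in Tables 3–5 follow their standard definitions. As a reference, the sketch below computes them for the binary back/background segmentation case, using the label convention of Figure 4 (1 = pig back, 0 = background); treating mIoU as the mean over the two class IoUs is our assumption of the usual convention.

```python
# Per-point metrics for binary back/background segmentation (illustrative).
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute OA, precision, recall, F1 and mIoU from per-point labels.
    Assumes both classes occur, so no denominator is zero."""
    tp = np.sum((pred == 1) & (truth == 1))   # back points found
    fp = np.sum((pred == 1) & (truth == 0))   # background mislabelled as back
    fn = np.sum((pred == 0) & (truth == 1))   # back points missed
    tn = np.sum((pred == 0) & (truth == 0))   # background correctly kept
    oa = (tp + tn) / pred.size                # overall accuracy
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # mean IoU over the back class and the background class
    miou = (tp / (tp + fp + fn) + tn / (tn + fp + fn)) / 2
    return oa, precision, recall, f1, miou

# toy check: six points, one false positive and one false negative
pred = np.array([1, 1, 1, 0, 0, 1])
truth = np.array([1, 1, 1, 0, 1, 0])
print(segmentation_metrics(pred, truth))
```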
Table 4. Comparison of the overall classification accuracy of different models.

| Models | Accuracy (%) | Precision (%) | Recall (%) | F1 Score (%) |
|---|---|---|---|---|
| PointNet | 78.07 | 83.22 | 76.16 | 79.53 |
| PointNet++SSG | 78.50 | 85.31 | 78.61 | 81.82 |
| PointNet++MSG | 93.08 | 93.77 | 93.34 | 93.55 |
| PointNet++LGG (improved model) | 95.26 | 95.51 | 95.53 | 95.52 |
Table 5. Test results (accuracy, %) of the PointNet and PointNet++ classification models.

| Models | Pig1 | Pig2 | Pig3 | Pig4 | Pig5 | Pig6 | Pig7 | Pig8 | Pig9 | Pig10 |
|---|---|---|---|---|---|---|---|---|---|---|
| PointNet | 44.71 | 93.75 | 96.09 | 90.41 | 97.08 | 87.50 | 83.52 | 83.33 | 88.42 | 85.00 |
| PointNet++(SSG) | 68.26 | 95.31 | 88.28 | 77.50 | 87.50 | 93.00 | 100.00 | 50.00 | 71.75 | 52.50 |
| PointNet++(MSG) | 96.63 | 100.00 | 98.33 | 97.00 | 92.12 | 97.50 | 99.43 | 92.85 | 92.59 | 78.33 |
| PointNet++(LGG) (improved model) | 90.86 | 100.00 | 94.53 | 89.58 | 99.16 | 99.00 | 99.43 | 97.61 | 85.64 | 90.00 |