Article

Calculating Volume of Pig Point Cloud Based on Improved Poisson Reconstruction

1 College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China
2 National Engineering Research Center for Swine Breeding Industry, Guangzhou 510642, China
3 College of Animal Science, South China Agricultural University, Guangzhou 510642, China
4 State Key Laboratory of Swine and Poultry Breeding Industry, Guangzhou 510640, China
5 College of Foreign Studies, South China Agricultural University, Guangzhou 510642, China
* Authors to whom correspondence should be addressed.
Animals 2024, 14(8), 1210; https://doi.org/10.3390/ani14081210
Submission received: 13 March 2024 / Revised: 14 April 2024 / Accepted: 16 April 2024 / Published: 17 April 2024
(This article belongs to the Section Pigs)


Simple Summary

The volume of a pig, a new phenotype feature, can be used to estimate its weight due to its high correlation with body weight. The proportions of different body parts, such as the head and legs, can be determined through point cloud segmentation, providing new phenotype information for breeding pigs with smaller heads and stouter legs. However, due to the irregular shape and potential missing parts of the pig point cloud, it is challenging to form a closed surface for volume calculation. This study addresses the challenge with an improved Poisson reconstruction algorithm that produces smoother, more continuous, and more complete reconstructions, and its accuracy and reliability were confirmed experimentally. The study also found a correlation coefficient of 0.95 between pig body volume and body weight, indicating a strong relationship. This research could be valuable for improving the efficiency and accuracy of livestock weight estimation and breeding.

Abstract

Pig point cloud data can be used to digitally reconstruct surface features, calculate pig body volume, and estimate pig body weight. Volume, as a novel pig phenotype feature, has the following functions: (a) It can be used to estimate livestock weight based on its high correlation with body weight. (b) The volume proportion of various body parts (such as the head and legs) can be obtained through point cloud segmentation, and this new phenotype information can be utilized for breeding pigs with smaller head volumes and stouter legs. However, as the pig point cloud has an irregular shape and may be partially missing, it is difficult to form a closed surface for volume calculation. Considering the good water tightness of Poisson reconstruction, this article adopts an improved Poisson reconstruction algorithm to reconstruct pig body point clouds, making the reconstruction results smoother, more continuous, and more complete. In the present study, standard shape point clouds, the known-volume Stanford rabbit standard model, a piglet model with measured volume, and 479 sets of pig point cloud data with known body weight were adopted to confirm the accuracy and reliability of the improved Poisson reconstruction and volume calculation algorithm. The relative error of the piglet model volume result was 4%. For weight estimated from the volume of collected pig point clouds, the average absolute error was 2.664 kg and the average relative error was 2.478%. Concurrently, the correlation coefficient between pig body volume and pig body weight was determined to be 0.95.

1. Introduction

During the growth stage of pigs, body size measurements, weight estimates, and other information can be harnessed to monitor pig growth, daily weight gain, and feed-to-gain ratio, and to support science-based pig breeding [1]. Pig body weight and body size are the two most important evaluation indicators for phenotypic group selection and lineage performance. In particular, pig body weight is an important factor in managing the supply chain at various stages [2], and body weight information can be used to estimate pig growth, feed conversion efficiency, and disease occurrence rate [3].
Currently, weight measurement mainly relies on electronic scales, which still requires strenuous manual labor, and the measurement accuracy of electronic scales is easily affected by factors such as the movement, urination, and defecation of pigs. Moreover, electronic scales are prone to corrosion when placed in a pig farm, and their maintenance cost is relatively high. To solve these problems, numerous researchers have studied contactless body weight estimation of pigs. The common approach is to obtain two-dimensional images of pigs and collect body size data or the projected contour area; by establishing a prediction model relating body size, projected area, and body weight, pig weight estimation can be achieved [4,5,6,7]. Chen et al. [8] accurately predicted pig weight by constructing a deep neural network to automatically measure multiple body parameters. Nguyen et al. [9] collected RGB-D data of fattening pigs using a handheld camera; by computing the 3D point cloud of each target pig, latent features from a 3D generative model were employed to predict pig weight using three regression models (SVR, MLP, and AdaBoost). Thapar et al. [10] developed a novel, portable, and user-friendly method for hands-off body dimension measurement and body weight prediction of the Ghoongroo pig by using a smartphone with an object measurement app, On 3D CameraMeasure. Cang et al. [11] obtained depth images of the pig's back from a top-view camera and input them into a deep neural network to estimate body weight. Li et al. [5] extracted key body size information from point clouds of a pig's back and used it as the independent variables of regression models to estimate body weight. However, collecting relevant parameters on the curved surface of a pig body with 2D cameras or single-view 3D cameras is a tough task, making it difficult to further improve the accuracy of pig body weight estimation in multi-scenario applications. Further research has shown that three-dimensional parameters such as volume can present animal phenotypes more intuitively and are highly correlated with body weight [12].
Unlike two-dimensional images, volume information captures the three-dimensional geometry of the curved pig body surface, which compensates for the insufficient dimensionality of 2D image estimation methods and enhances the accuracy of body weight estimation. Researchers in China and abroad have used volume as a major parameter for estimating livestock weight. The main approach consists of three steps: first, obtain depth images of the pig's back; second, calculate the pig's volume parameters by combining the back area; finally, establish a model using volume, body weight, and other parameters for weight estimation [13,14,15,16]. Fu et al. [17] approximated the head and torso of a pig as a cone and a cylinder, respectively, thus obtaining a simplified three-dimensional model of a breeding pig; they then established a weight estimation model based on the volume of the three-dimensional model to estimate the body weight of the breeding pig. Chu used the surface area obtained from point cloud reconstruction and volume parameters excluding the limbs and head to construct a model for estimating the body weight of dairy cows, achieving high accuracy [18]. Calculating volume from a 3D point cloud is thus a non-contact way to estimate the body weight of a pig. Additionally, the volume of each pig body part can be obtained via point cloud segmentation [19]; therefore, more phenotypic features of pigs that cannot be acquired by manual measurement become available.
The traditional method for calculating the volume of complex objects is the drainage method: by completely immersing the object in water, its volume can be calculated from the difference between the water surface heights before and after immersion multiplied by the cross-sectional area of the container. As pigs are living animals, applying this method would require customized containers, which is difficult to implement in actual scenarios.
Thanks to the rapid development of consumer-grade 3D acquisition devices, low-cost and lightweight devices such as Morpho3D, Kinect, and ASUS Xtion Pro have entered the research field of intelligent agriculture, particularly animal husbandry. Research on non-contact automatic body size measurement systems for animals such as pigs, cows, and sheep has therefore been widely carried out [5,14,20,21,22]. Currently, researchers use two or more depth cameras at different angles to collect surface point clouds of large quadrupeds such as live pigs and cows and automatically measure body dimensions such as body length, body height, thoracic circumference, abdominal circumference, and chest depth by fusing the complete point clouds [19,21,23,24,25,26,27,28,29]. Other studies use deep learning models to locate key body points on RGB images and use intrinsic camera parameters to project the detected key points onto the surface of livestock point clouds, thus achieving body size measurement of pigs and cows [30,31]. Up-to-date research has focused on automatic and accurate animal body measurement using three-dimensional surface point clouds [32]. Meanwhile, three-dimensional phenotype reconstruction models have also offered new ideas for calculating the volume and surface area of animals, as well as the volume of their various body parts, with the aid of point clouds.
At present, research on methods for calculating the point cloud volume of irregular objects is still in its early stage, and algorithms for computing the volume of complex object point clouds leave much room for improvement. The main approaches to computing the volume of irregular point cloud models are the slice method [33], the Monte Carlo method, and the model reconstruction method [34,35].
In 2016, Zhi et al. [36] proposed a 3D point cloud volume calculation method based on the slicing method. As shown in Figure 1, this method uses the idea of slice integration: partition the point cloud data, determine each slice contour, calculate the area of each part, and multiply the area by the slice interval to obtain the approximate volume of each slice; the individual slice volumes then add up to the overall point cloud volume.
Subsequently, researchers improved the slicing method from diverse aspects, such as slice spacing, the selection of inner and outer contour projections for slice projection, and the cutting direction [37,38,39]. As the most classic three-dimensional volume calculation method, the slicing method offers good accuracy and efficiency for simple objects such as tree canopies, coal stockpiles, and grain piles. For complex objects, however, it is necessary to consider the errors caused by point cloud contour fitting (inner and outer), multi-contour fitting, and the cutting direction and interval. Missing point clouds also affect the water tightness of the 3D reconstruction model, so obtaining high-quality preprocessed point clouds is essential for volume calculation.
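To make the slice-integration idea concrete, the following minimal sketch (ours, not from [36]) bins a point cloud into slabs along one axis and approximates each slab's cross-sectional area with a 2D convex hull; the function and its parameters are illustrative, and convex hulls overestimate concave slice contours, which is exactly the kind of contour-fitting error discussed above.

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def slice_volume(points: np.ndarray, interval: float, axis: int = 2) -> float:
    """Approximate the volume of an (N, 3) point cloud by slice integration."""
    coords = points[:, axis]
    edges = np.arange(coords.min(), coords.max() + interval, interval)
    planar = [i for i in range(3) if i != axis]   # the two in-plane axes
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        slab = points[(coords >= lo) & (coords < hi)][:, planar]
        if len(slab) < 3:
            continue                              # too few points for a contour
        try:
            area = ConvexHull(slab).volume        # 2D hull "volume" is its area
        except QhullError:
            continue                              # degenerate (e.g., collinear) slab
        volume += area * interval                 # prism: contour area x slice height
    return volume
```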
Using the Monte Carlo method to calculate volume is another representative approach [40,41,42]: the probability that randomly sampled points fall inside the 3D model serves as the solution to the problem. The method is characterized by random sampling that yields approximate results; as the number of samples increases, the probability of obtaining a correct result gradually increases, whereas insufficient sampling causes obvious errors [43]. The Monte Carlo method requires a large number of random points to achieve good accuracy, and judging whether each random point lies inside the model is time-consuming [44]; therefore, it is more frequently used to calculate the volume of smaller objects, and it requires a closed model.
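A minimal sketch of the Monte Carlo approach, assuming a watertight mesh and using the trimesh library for the inside/outside test (our illustration, not code from the cited works); as noted above, the per-point containment test dominates the running time.

```python
import numpy as np
import trimesh

def monte_carlo_volume(mesh: trimesh.Trimesh, n_samples: int = 200_000) -> float:
    """Estimate the volume of a closed mesh by rejection sampling in its bounding box."""
    lo, hi = mesh.bounds                            # axis-aligned bounding box corners
    samples = np.random.uniform(lo, hi, size=(n_samples, 3))
    inside = mesh.contains(samples)                 # the expensive inside/outside test
    return np.prod(hi - lo) * inside.mean()         # box volume x P(point inside)
```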
The model reconstruction method first reconstructs the point cloud into a mesh model structure, then decomposes the mesh model into multiple polyhedra, and finally sums the volumes of all polyhedra to obtain the total volume. The advantage of this method is that it converts discrete point cloud data into a surface model with continuity, creating a high-precision digital model that better preserves the geometric shape and surface features of the point cloud data. At the same time, it makes the calculated volume closer to the original volume.
In summary, for complex pig point clouds, solving the point cloud volume based on the model reconstruction method is an ideal option. However, preprocessing of point cloud data is required prior to model reconstruction to lessen the impacts of point cloud data quality, noise, and voids on the reconstruction results. Meanwhile, selecting appropriate model reconstruction algorithms and parameters to adapt to the characteristics of pig point clouds is also a crucial step.
Mesh model reconstruction algorithms primarily include the Delaunay triangulation method [45,46] and Poisson’s reconstruction method [47,48]. Delaunay performs non-overlapping triangulation on point clouds to generate a mesh model composed of triangles, while Poisson reconstruction interpolates and fits point cloud data using Poisson’s equations to generate a smooth curved surface model. For comparison, Delaunay is sensitive to the noise and voids present in point clouds, which may lead to incomplete and inaccurate reconstruction results. Poisson reconstruction has a certain degree of robustness and can fill the voids and smooth the surface to a certain extent.
The difference between the Delaunay triangulation algorithm and Poisson reconstruction algorithm is shown in Figure 2. Delaunay triangulation cannot guarantee the complete closure of the constructed mesh model, and the voids in the mesh model can cause volume calculation errors. Compared with the former, the latter is more complex, but the mesh model constructed by the Poisson reconstruction algorithm has water tightness and is more applicable to volume calculation.
The research on contactless calculation of livestock volume is in the exploratory stage. As the surface point cloud of livestock is complex and massive, the existing research mainly uses model reconstruction methods to calculate livestock volume. In 2016, Cheng proposed an improved Delaunay triangulation algorithm to construct a cattle point cloud mesh model and combined it with the tetrahedral summation method to calculate cattle volume [50], but this algorithm was not applied to real cattle point clouds and its accuracy was not verified. In 2022, Guo et al. [51] constructed a pig body mesh model through Delaunay triangulation and incorporated it with the triangular prism summation method to calculate the pig body volume. The body weight was estimated based on the volume and achieved good results in 20 test samples, with an estimated relative error within 8%.
At present, there are many methods for calculating the volume of irregular object point cloud models, but research on the volume computation of pig body point clouds is very limited. Moreover, volume calculation for livestock point clouds places high demands on the quality of the collected point clouds, and the accuracy and robustness of the volume calculation need to be further improved.
The present study intends to register and fuse pig body point clouds collected by three depth cameras that capture the data from different angles simultaneously. After preprocessing such as target point cloud extraction and denoising, an improved Poisson reconstruction algorithm is employed to construct a meshed 3D model of the pig body for volume calculation. Subsequently, a regression model is established based on the volume calculation results of the pig body point clouds to estimate the body weight.

2. Materials and Methods

2.1. Experimental Data Acquisition and Algorithmic Process

The pig point cloud data in this study were acquired from Jingbao Food Company, affiliated with Wens Foodstuff Group Co., Ltd., Heyuan City, China from July to August in 2022. A total of 479 sets of point cloud data were collected experimentally from 58 Landrace pigs. The target pigs were on an empty stomach for 2 h prior to the point cloud and weight data collection, and the pig weight ranged from 83 kg to 132 kg.
Three depth cameras collect local point clouds of the pig body from different angles (shown in Figure 3), with each local point cloud located in a different coordinate system. Therefore, 3D point cloud registration is needed to unify the three local point clouds into the same world coordinate system and fuse them into one complete pig point cloud [27]. After registration and fusion, the point cloud still includes the railings, the ground of the data acquisition area, and various noise points and outliers. Consequently, the target pig body point cloud must first be extracted and then denoised; for the specific operations, refer to previous research [52]. After obtaining high-quality point cloud data of the pig body, an improved Poisson reconstruction algorithm is employed to construct a mesh-based three-dimensional model of the pig body. This model is then decomposed into multiple tetrahedra to calculate the pig body volume. The algorithmic process of this study is illustrated in Figure 4.
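For orientation, the sketch below strings the pipeline steps together with the Open3D library. The file names and rigid transforms are placeholders, the filtering parameters are illustrative, and Open3D ships only standard Poisson reconstruction, not the improved variant developed in Sections 2.4, 2.5 and 2.6.4.

```python
import numpy as np
import open3d as o3d

# Placeholder rigid transforms from a prior extrinsic calibration (assumed solved
# offline, as in [27]); identity matrices stand in for the real calibration result.
T_left, T_right = np.eye(4), np.eye(4)

views = [o3d.io.read_point_cloud(f) for f in ("top.ply", "left.ply", "right.ply")]
views[1].transform(T_left)
views[2].transform(T_right)

fused = views[0] + views[1] + views[2]                        # fuse into one cloud
# Target-pig extraction and railing/ground removal (see [52]) omitted for brevity.
fused, _ = fused.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
fused.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
fused.orient_normals_consistent_tangent_plane(30)             # MST-based redirection

# Standard Poisson reconstruction; the paper's position-constrained variant
# (Section 2.6.4) is not available in off-the-shelf libraries.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(fused, depth=8)
```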

2.2. Validation of Pig Body Volume Calculation Method

All experiments were conducted in conformity with the guidelines on animal research established by the Laboratory Animal Ethics Committee of South China Agricultural University (reference number: 2020G008). Since it is difficult to obtain ground-truth volumes for pig body point clouds collected in real time, we verified the feasibility and accuracy of the proposed pig body volume calculation method from three aspects:
(1) The cube, the cylinder, the Stanford rabbit standard model [49], and the piglet model, all with accurately known volumes, were used as test samples for evaluating the accuracy of our volume calculation method. The application of the standard point cloud models and the piglet model in our experiment is shown in Figure 5.
(2) The correlation between the volume of real pig point cloud and body weight was calculated. By randomly selecting 300 test samples of pig body point clouds collected in the experiment, the volume calculation method in this article was utilized to acquire the volume of the target pig body point cloud, and the Pearson correlation coefficient between pig body volume and pig body weight was analyzed to confirm the strong correlation between body weight and the volume of the pig body point cloud, as shown in Figure 6.
(3) A linear regression weight estimation model was established based on the relationship between the volume calculation results and body weight in the sample set with the point cloud of 300 pigs, as shown in Figure 6. Subsequently, 179 sets of pig body point clouds were used as test samples to verify the accuracy of the volume calculation results using the relative error of body weight estimation.

2.3. Pig Point Cloud Smoothing

After pre-treatment, there is still a small portion of burr noise on the surface of the pig body point cloud, as shown in Figure 7. These burrs and noise points can cause sharp areas on the surface of the pig point cloud, resulting in deviations in subsequent point cloud normal vector estimation and distortion of the mesh model reconstructed by the Poisson algorithm.
To solve the above problems, we utilize the moving least squares method within a local range to fit the point cloud on the pig body surface into a smooth curved surface. The fitting function $u_0(x)$ for a point $x_0$ in the point cloud can be defined as [53]

$$u_0(x) = \sum_{j=1}^{m} a_j(x_0)\, p_j(x) \qquad (1)$$

where $a_j$ is a set of coefficients for the fitting function and $p_j(x)$ is a set of basis functions. Generally, the quadratic basis in three-dimensional space can be written as

$$p^T = \left(1,\ x,\ y,\ z,\ xy,\ zx,\ yz,\ x^2,\ y^2,\ z^2\right) \qquad (2)$$

To minimize the weighted sum of squared differences between the fitting function and the sampled values in the neighborhood of $x_0$, an optimization model $J$ is established:

$$J = \sum_{b=1}^{k} w\!\left(\lVert x_0 - x_b \rVert\right) \left[u_0(x_b) - u_b\right]^2 \qquad (3)$$

where $k$ is the number of neighboring sampling points of $x_0$, $x_b$ is a neighboring sampling point, $u_b$ is the sampled value at $x_b$, and $w(\lVert x_0 - x_b \rVert)$ is a distance weight function: the closer $x_b$ is to $x_0$, the larger the weight $w$, and vice versa.

Setting the derivative of $J$ with respect to the coefficients $a(x)$ to zero yields the fitting function $u_0(x)$, which can then be evaluated at any point to obtain a smoothed point cloud of the pig body surface, as shown in Figure 8.
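A simplified sketch of this smoothing step on an (N, 3) array of unorganized points: each neighborhood is expressed in a local PCA frame, a distance-weighted quadratic height field (a reduced form of the basis in Equation (2)) is fitted, and the point is projected onto it. The neighborhood size k and bandwidth h are illustrative and must be matched to the point cloud scale.

```python
import numpy as np
from scipy.spatial import cKDTree

def mls_smooth(points: np.ndarray, k: int = 30, h: float = 0.02) -> np.ndarray:
    """Moving-least-squares smoothing: project each point onto a local quadric."""
    tree = cKDTree(points)
    smoothed = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[tree.query(p, k=k)[1]]
        center = nbrs.mean(axis=0)
        local = nbrs - center
        # Local frame from PCA: two tangent directions plus the normal direction.
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        tangent, normal = vt[:2], vt[2]
        uv = local @ tangent.T                       # in-plane coordinates (k, 2)
        g = local @ normal                           # heights above the plane (k,)
        # Gaussian distance weights, applied via sqrt for weighted least squares.
        sw = np.sqrt(np.exp(-np.sum(local**2, axis=1) / h**2))
        A = np.column_stack([np.ones(len(uv)), uv[:, 0], uv[:, 1],
                             uv[:, 0]**2, uv[:, 0] * uv[:, 1], uv[:, 1]**2])
        coef, *_ = np.linalg.lstsq(A * sw[:, None], g * sw, rcond=None)
        # Evaluate the fitted quadric at the query point's (u, v) and project.
        d = p - center
        u, v = d @ tangent[0], d @ tangent[1]
        height = coef @ np.array([1.0, u, v, u * u, u * v, v * v])
        smoothed[i] = center + u * tangent[0] + v * tangent[1] + height * normal
    return smoothed
```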

2.4. Estimating Normal Vector via Principal Component Analysis

Before reconstructing the three-dimensional model of the pig body, the normal vectors of the pig body point cloud are first estimated using principal component analysis. To estimate the normal vector at a point $x_0$, a local region formed by the $k$ nearest points of $x_0$ is selected. When the region is small and smooth enough, it can be regarded as a plane, and the normal vector of $x_0$ is the normal vector of that plane. Finding the plane normal vector can be transformed into finding the direction onto which the projections of the neighborhood points are most concentrated, that is, the direction in which the projection variance of the points is smallest. Constructing the minimization objective with principal component analysis gives Equation (4):

$$\min_{\lVert n \rVert = 1} \sum_{i=1}^{k} \left( (x_i - x_0) \cdot n \right)^2 \qquad (4)$$

Solving Equation (4) can be transformed into the eigenvalue decomposition of the covariance matrix $C$ of the local point set:

$$C = \frac{1}{k} X^T X \qquad (5)$$

where

$$X = \begin{pmatrix} x_1.x - x_0.x & x_1.y - x_0.y & x_1.z - x_0.z \\ \vdots & \vdots & \vdots \\ x_k.x - x_0.x & x_k.y - x_0.y & x_k.z - x_0.z \end{pmatrix} \qquad (6)$$

The minimization objective (4) can then be rephrased as [54]

$$\min_{\lVert n \rVert = 1} n^T C n \qquad (7)$$

The eigenvector corresponding to the minimum eigenvalue of the covariance matrix $C$ is the direction of smallest projection variance, along which the projections of the regional points are most concentrated; it is therefore the normal vector estimate at the sampling point $x_0$.
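Equations (4)–(7) translate almost directly into code; the sketch below assumes an (N, 3) numpy array and an illustrative neighborhood size k.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 30) -> np.ndarray:
    """Per-point normals as the smallest-eigenvalue eigenvector of C (Eqs. (4)-(7))."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[tree.query(p, k=k)[1]]
        X = nbrs - p                            # rows are x_i - x_0, as in Eq. (6)
        C = X.T @ X / k                         # covariance matrix, Eq. (5)
        _, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]              # direction of least projection variance
    return normals
```

Note that each normal obtained this way is only determined up to sign, which is exactly the orientation ambiguity addressed in the next subsection.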

2.5. Redirected Point Cloud Normal Vector

Principal component analysis is a local fitting method: it can only estimate the line along which the normal vector of a point lies, but it cannot distinguish whether the normal vector points toward the inside or the outside of the point cloud, so it cannot determine the correct orientation. Consequently, normal vectors estimated by principal component analysis may have opposite orientations at adjacent points. From a global perspective, if the normal vectors face partly inward and partly outward, errors are likely to occur in subsequent point cloud reconstruction.
First, this study stipulates that a normal vector pointing toward the interior of the point cloud has the correct orientation, while a normal vector pointing toward the exterior has the wrong orientation. The classic approach to normal vector redirection constructs an undirected graph based on neighborhood distance and propagates the correct orientation through it, following this principle: if two points are close enough, they can be considered to lie on the same plane, and their normal vectors should have approximately identical orientations. We construct an undirected graph over the point cloud, with each edge assigned a weight $E_{i,j}$:

$$E_{i,j} = \psi(i,j)\, \omega\!\left(\lVert x_i - x_j \rVert\right) \qquad (8)$$

where $\omega(\lVert x_i - x_j \rVert)$ is the distance weight function and $\psi(i,j) = \langle n_i, n_j \rangle$ is the inner product of the normal vectors $n_i$ and $n_j$. The more consistent the orientations of the normal vectors at points $i$ and $j$, the larger $\psi(i,j)$. We select an extreme vertex along one dimension of the point cloud as the starting point, specifying that its normal vector points in the negative direction of that dimension, and then propagate the normal vector orientation along the maximum spanning tree. The problem of redirecting the point cloud normal vectors is thus transformed into finding the maximum spanning tree of a weighted undirected graph.
However, the method of constructing an undirected graph based on neighborhood distance is only applicable to ideal smooth point clouds. As shown in Figure 9, incorrect normal vector orientations may occur at sharp points and on thin planes. Due to the constraints of the distance weight function, the incorrect normal vector orientations are prone to propagate to neighboring points. To address the above issues, Jakob et al. [55] proposed constructing a KNN graph formed by connecting K nearest neighbor points of all points to redirect the point cloud normal vector.
If two adjacent points are connected as an edge, this edge should act approximately as a tangent to the plane on which the points lie, perpendicular to the normal vector. Therefore, a new constraint is added to the edge weight $E_{i,j}$ by modifying $\psi(i,j)$ as follows:

$$\psi(i,j) = \left\langle n_i',\ n_j' \right\rangle \qquad (9)$$

$$n_i' = n_i - \frac{p_i - p_j}{\lVert p_i - p_j \rVert} \left\langle \frac{p_i - p_j}{\lVert p_i - p_j \rVert},\ n_i \right\rangle \qquad (10)$$

When the vector formed by points $i$ and $j$ is closer to perpendicular to the normal vector $n_i$ of point $i$, the value of $\psi(i,j)$ becomes larger. As shown in Figure 10, a KNN graph is constructed to redirect the point cloud normal vectors at the sharp points and on the thin planes of the point cloud. When the angle between the line (formed by a neighboring point and the point being redirected) and the normal vector of the neighboring point is smaller, the normal vector orientation of the neighboring point is less trustworthy, and the resulting weight is smaller. Conversely, when the line is close to perpendicular to the normal vector, the orientation of the neighboring point is more trustworthy, and the resulting weight is larger. Compared with traditional methods, the new constraint is more robust at sharp points and on thin planes of point clouds.
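A sketch of spanning-tree orientation propagation follows. For brevity it uses the plain agreement weight |⟨n_i, n_j⟩| on a KNN graph rather than the full edge-tangency constraint of Equations (9) and (10), and it roots the tree at vertex 0 instead of an extreme vertex; the maximum spanning tree is obtained as the minimum spanning tree of the negated weights.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def orient_normals(points: np.ndarray, normals: np.ndarray, k: int = 10) -> np.ndarray:
    """Flip normals along a maximum spanning tree so orientations agree."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()                         # drop each point's self-match
    agree = np.abs(np.einsum("ij,ij->i", normals[rows], normals[cols]))
    # Negate weights (+ epsilon so no edge weight is exactly zero): the minimum
    # spanning tree of the negated graph is the maximum spanning tree.
    graph = coo_matrix((-(agree + 1e-8), (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    order, parents = breadth_first_order(mst, i_start=0, directed=False)
    oriented = normals.copy()
    for node in order[1:]:                            # root keeps its orientation
        if np.dot(oriented[node], oriented[parents[node]]) < 0:
            oriented[node] = -oriented[node]          # flip to agree with parent
    return oriented
```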

2.6. Reconstructing Mesh Model

2.6.1. Surface Reconstruction Based on Poisson Equation

Poisson reconstruction recovers the curved surface of a point cloud by constructing an indicator function; the core idea is to seek the best approximation between the gradient field of the indicator function and the vector field of the point cloud. The indicator function $\chi$ takes the value 1 inside the surface of the point cloud and 0 outside, as shown in Figure 11. Since the gradient of $\chi$ is non-zero only on the surface itself, solving for the point cloud surface can be converted into solving for the indicator function $\chi$.

We specify a normal vector field $\vec{V}$ to solve for the indicator function $\chi$, so that the gradient field $\nabla\chi$ is as close as possible to $\vec{V}$, that is, $\nabla\chi - \vec{V}$ approaches 0. This gives the minimization function $E(\chi)$:

$$E(\chi) = \min \left\lVert \nabla \chi - \vec{V} \right\rVert \qquad (11)$$

As noted above, the indicator function $\chi$ jumps from 0 to 1 in the direction orthogonal to the surface, so its gradient is unbounded there. To overcome this issue, a smoothing filter is applied; according to the divergence theorem, the gradient field of the smoothed indicator function equals the smoothed normal vector field of the surface.

During the solution process, as point cloud data are discrete in three-dimensional space, the normal vector field $\vec{V}$ is obtained through discrete approximation. Having obtained $\vec{V}$, the equation $\nabla\chi = \vec{V}$ cannot be solved directly, because $\vec{V}$ is not necessarily a curl-free field and therefore may not be integrable. The problem is instead reformulated as a least-squares minimization:

$$E(\chi) = \int \left\lVert \nabla \chi(p) - \vec{V}(p) \right\rVert^2 \, dp \qquad (12)$$

When $E(\chi)$ attains its minimum value, the Poisson equation is satisfied:

$$\Delta \chi = \nabla \cdot \vec{V} \qquad (13)$$

where $\Delta$ denotes the Laplace operator and $\nabla\cdot$ denotes the divergence operator.

2.6.2. Octree Space Discretization

Solving Poisson's equation requires discretizing the Poisson problem. This paper uses an octree to partition the function space. Let the octree be $O$; for every node $o \in O$ there is a node function $F_o$ built from a base function $F$. It is required that the normal field $\vec{V}$ and the indicator function $\chi$ can be expressed as linear combinations of the node functions $F_o$; solving for $\chi$ then reduces to solving for the coefficients of this linear combination. As shown in Figure 12, assuming the depth of the octree is $D$, the center of node $o$ is $o.c$, and the node width is $o.w$, the node function $F_o$ is defined as

$$F_o(q) = F\!\left( \frac{q - o.c}{o.w} \right) \frac{1}{o.w^3} \qquad (14)$$

The base function $F$ is a smoothing function centered at the origin, scaled to the resolution of the finest octree level:

$$\tilde{F}(q) = F\!\left( 2^D q \right) \qquad (15)$$

To further improve the efficiency of the algorithm, $F$ is defined as an $n$-fold convolution of box filters:

$$F(x, y, z) = \left( B(x)\, B(y)\, B(z) \right)^{*n} \qquad (16)$$

where the B-spline basis function $B(\omega)$ takes the constant value 1 only when $\omega$ is in the interval $(-0.5, 0.5)$ and is 0 elsewhere, and $*n$ denotes $n$-fold convolution; as $n$ approaches infinity, $F(q)$ approaches a Gaussian.
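The claim that repeated box-filter convolution approaches a Gaussian can be checked numerically in a few lines (a one-dimensional illustration of Equation (16), not part of the reconstruction pipeline):

```python
import numpy as np

dx = 0.01
x = np.arange(-3, 3, dx)
B = ((x > -0.5) & (x < 0.5)).astype(float)   # box filter: 1 on (-0.5, 0.5), else 0

F = B.copy()
for _ in range(2):                           # B * B * B, i.e., n = 3
    F = np.convolve(F, B, mode="same") * dx  # dx preserves the continuous normalization
# Already at n = 3, F is a quadratic B-spline whose profile closely matches a
# Gaussian, as predicted by the central limit theorem.
```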
The normal field $\vec{V}$ can be computed from the normal vectors of the neighboring points, and the indicator function $\chi$ can then be obtained by solving the Poisson equation. However, although both $\vec{V}$ and $\chi$ can be expressed with basis functions in the octree function space, $\Delta\chi$ and $\nabla\cdot\vec{V}$ do not necessarily lie in the same function space. It is therefore necessary to minimize the distance between the projections of $\Delta\chi$ and $\nabla\cdot\vec{V}$ onto the octree function space:

$$\sum_{o \in O} \left\lVert \left\langle \Delta\chi - \nabla\cdot\vec{V},\ F_o \right\rangle \right\rVert^2 = \sum_{o \in O} \left\lVert \left\langle \Delta\chi, F_o \right\rangle - \left\langle \nabla\cdot\vec{V}, F_o \right\rangle \right\rVert^2 \qquad (17)$$

Let $v$ be an $|O|$-dimensional vector whose $o$-th coordinate is $v_o = \langle \nabla\cdot\vec{V}, F_o \rangle$. The indicator function $\chi$ is then solved for by making the projection of $\Delta\chi$ onto the octree function space approximate $v$ [56].

2.6.3. Isosurface Extraction

In Poisson reconstruction, to make the isosurface better approximate the point cloud, the average value of the indicator function at the sample points is used as the isosurface threshold $\mu_{iso}$:

$$\mu_{iso} = \frac{1}{|S|} \sum_{s \in S} \chi(s.p) \qquad (18)$$

2.6.4. Improved Poisson Equation Surface Reconstruction

The Poisson reconstruction algorithm directly seeks the best approximation between the gradient field of the indicator function and the normal field, and it uses a global offset to correct the error, which leads to discrepancies between the reconstructed surface and the real object surface. The accuracy of normal vector estimation depends on the quality of the point cloud, so the normal field is not consistently reliable. Therefore, this article adds position constraints to correct the position of the reconstructed surface. The improved Poisson reconstruction algorithm defines a new minimization function $E(\chi)$ consisting of a gradient constraint and a position constraint:

$$E(\chi) = \int \left\lVert \nabla\chi(p) - \vec{V}(p) \right\rVert^2 dp + \lambda\, \frac{\mathrm{Area}(S)}{\sum_{p \in S} \tau(p)} \sum_{p \in S} \tau(p)\, \chi^2(p) \qquad (19)$$

where $\lambda$ is the weight factor balancing the gradient constraint and the position constraint, $S$ is the point cloud sample set, $\vec{V}$ is the point cloud vector field, $\mathrm{Area}(S)$ is the reconstruction area near the input sample points, and $\tau(p)$ is the sample point weight. This article adopts dynamic sample point weights, adjusted according to the credibility of the normal vector. Assuming there are $n$ points in the neighborhood of point $p$, the sample point weight $\tau(p)$ is defined as

$$\tau(p) = \frac{1}{n} \sum_{i=1}^{n} g(\cos\theta_i) \qquad (20)$$

where $g(x) = x$ for $x \in [0, 1]$ and $g(x) = 0$ for $x \in [-1, 0)$, with $\cos\theta_i = \frac{N_p \cdot N_i}{\lVert N_p \rVert\, \lVert N_i \rVert}$ and $\theta_i$ the angle between the normal vector $N_p$ of point $p$ and the normal vector $N_i$ of its $i$-th neighbor. When the angles between the normal vector of point $p$ and those of its neighbors are small, the normal vector of point $p$ can be considered accurate, and $\tau(p)$ is close to 1. Conversely, when these angles approach a right angle, the normal vector of point $p$ is considered untrustworthy, and $\tau(p)$ is close to 0.
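A sketch of the dynamic sample weight of Equation (20), assuming unit-length normals and an illustrative neighborhood size:

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_weights(points: np.ndarray, normals: np.ndarray, k: int = 20) -> np.ndarray:
    """tau(p): mean clamped cosine between each normal and its neighbors' normals."""
    _, idx = cKDTree(points).query(points, k=k + 1)
    nbr_normals = normals[idx[:, 1:]]                     # (N, k, 3), self excluded
    cos = np.einsum("ij,ikj->ik", normals, nbr_normals)   # cos(theta_i) per neighbor
    return np.clip(cos, 0.0, 1.0).mean(axis=1)            # g(x) = x on [0, 1], else 0
```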
Let $\langle \cdot, \cdot \rangle_{[0,1]^3}$ denote the standard inner product on the octree function space:

$$\langle F, G \rangle_{[0,1]^3} = \int F(p)\, G(p)\, dp \qquad (21)$$

$$\langle \vec{U}, \vec{V} \rangle_{[0,1]^3} = \int \vec{U}(p) \cdot \vec{V}(p)\, dp \qquad (22)$$

and let $\langle \cdot, \cdot \rangle_{(\tau, S)}$ denote the weighted sum of function values over the samples, which is a bilinear, symmetric, positive semi-definite form:

$$\langle F, G \rangle_{(\tau, S)} = \frac{\mathrm{Area}(S)}{\sum_{p \in S} \tau(p)} \sum_{p \in S} \tau(p)\, F(p)\, G(p) \qquad (23)$$

Thus, Equation (19) can be simplified into the following expression:

$$E(\chi) = \left\langle \nabla\chi - \vec{V},\ \nabla\chi - \vec{V} \right\rangle_{[0,1]^3} + \lambda \left\langle \chi, \chi \right\rangle_{(\tau, S)} \qquad (24)$$

Furthermore, Equation (24) can also be expressed in the form of a Poisson equation:

$$(\Delta - \lambda I)\, \chi = \nabla \cdot \vec{V} \qquad (25)$$

where $I$ is a positive definite operator that maps between the different spaces in which the gradient constraint and the position constraint are defined.

2.7. Volume Calculation of Three-Dimensional Mesh Models

The tetrahedral summation method is a three-dimensional volume calculation method marked by wide applicability and high accuracy. A three-dimensional mesh model is a model composed of vertices and edges, with vertices connected to each other to form triangular facets; hence, it is also known as a triangular facet model. For each triangular facet in the network model, connecting three vertices and the origin forms a tetrahedron, which is the basic unit of the tetrahedron summation method.
To calculate the volume of a tetrahedron, the normal vector of the triangular facet is first calculated as the cross product of two edge vectors, as shown in Figure 13. Let the coordinates of vertices $A$, $B$, and $C$ be $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, and $(x_3, y_3, z_3)$, respectively. Then $\vec{AB} = (x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1)$, $\vec{BC} = (x_3 - x_2,\ y_3 - y_2,\ z_3 - z_2)$, and the normal vector $\vec{N}_{ABC} = \vec{AB} \times \vec{BC}$. In this article, the normal vector of the triangular facet is specified to face the interior of the tetrahedron. The volume of each tetrahedron is assigned a positive or negative sign by the following rule: when the normal vector of the triangular facet and the origin are on the same side, a positive sign is assigned; otherwise, a negative sign is assigned. In practice, the sign can be taken from the inner product of the vector pointing from a vertex to the origin and the normal vector of the triangular facet. As shown in Figure 13, the volume of the tetrahedron formed by the triangular facet $ABC$ and the origin $O$ can be expressed as

$$V_{OABC} = \frac{\vec{AO} \cdot \vec{N}_{ABC}}{\left\lvert \vec{AO} \cdot \vec{N}_{ABC} \right\rvert} \cdot \frac{1}{6} \left\lvert -x_3 y_2 z_1 + x_2 y_3 z_1 + x_3 y_1 z_2 - x_1 y_3 z_2 - x_2 y_1 z_3 + x_1 y_2 z_3 \right\rvert \qquad (26)$$

Furthermore, letting $i$ denote the $i$-th triangular facet and $I$ its first vertex, the volume of the $i$-th tetrahedron can be expressed as

$$V_i = \frac{\vec{IO} \cdot \vec{N}_i}{\left\lvert \vec{IO} \cdot \vec{N}_i \right\rvert} \cdot \frac{1}{6} \left\lvert -x_{i3} y_{i2} z_{i1} + x_{i2} y_{i3} z_{i1} + x_{i3} y_{i1} z_{i2} - x_{i1} y_{i3} z_{i2} - x_{i2} y_{i1} z_{i3} + x_{i1} y_{i2} z_{i3} \right\rvert \qquad (27)$$

As shown in Figure 14, in the tetrahedral summation method the volume of an irregular mesh model decomposes into positive and negative volume regions. By summing the signed volumes of all tetrahedra, the volume of the mesh model is obtained, and the result is independent of whether the origin lies inside the model:

$$V_{total} = \sum_i V_i \qquad (28)$$
It is worth noting that calculating the three-dimensional volume involves computing the normal vector of each facet, and because of the ambiguity of the normal vector, its direction must be determined. If the vertex indices in the mesh data are ordered consistently, the orientation of the facet normal can be judged by the right-hand rule, as shown in Figure 15. However, not all mesh data have consistently ordered vertex indices. When the right-hand rule cannot be applied, the facet normal orientation is determined from the point cloud whose normal vectors have already been redirected: by searching for the nearest points in the cloud to the three vertices of a triangular facet, the orientation of the facet normal can be identified from the orientations of the corresponding point cloud normals.
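Because the signed tetrahedron volume in Equations (26)–(28) equals det[A, B, C]/6, the whole summation collapses to a few vectorized lines once the facets are consistently oriented (which the normal redirection above is meant to guarantee). The sketch below assumes outward-wound faces:

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Tetrahedral summation over an (F, 3) integer face array into (V, 3) vertices."""
    A = vertices[faces[:, 0]]
    B = vertices[faces[:, 1]]
    C = vertices[faces[:, 2]]
    # Signed volume of each origin-facet tetrahedron: dot(A, cross(B, C)) / 6.
    signed = np.einsum("ij,ij->i", A, np.cross(B, C)) / 6.0
    # With consistent winding, interior contributions cancel regardless of
    # whether the origin lies inside the mesh.
    return abs(signed.sum())
```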

3. Experimental Results and Analysis

3.1. Analysis of Point Cloud Reconstruction Results

This section compares the results of the Poisson and improved Poisson reconstruction algorithms, each with and without the smoothing step.
From the visualization results of point cloud reconstruction in Figure 16, it can be observed that there was a significant difference in the reconstruction effect between the smoothed and non-smoothed point clouds. As shown in Figure 16a,c,e, reconstruction errors may occur in the uneven surface area of the point cloud that had not been smoothed. For example, in the bottom area and the pig leg area of the pig body, the corresponding parts appear swollen or broken, which are erroneous phenomena during reconstruction. By contrast, the point cloud that had been smoothed, as shown in Figure 16b,d,f, had fewer errors in reconstruction. Our experimental results demonstrate that point cloud smoothing can indeed improve the accuracy of point cloud normal vector estimation, making point cloud reconstruction more effective.
Compared with Poisson reconstruction, the reconstruction effect of improved Poisson reconstruction is similar in the trunk part of the pig body, while in more complex areas on the point cloud surface, such as the pig leg area, the reconstruction effect differs significantly. The surface processed by Poisson reconstruction tends to contract inward on complex point cloud surfaces, as shown in Figure 16c,d, while improved Poisson can reconstruct surfaces with the outermost point cloud defined as the standard, as shown in Figure 16e,f.
The number of point clouds in the pig leg area is small; moreover, the pig leg point cloud may be partially missing due to being blocked by the railing in the data acquisition area. When the target pigs pass through the collection channel, the dust on the ground kicked up by their hooves causes many noise point clouds around the pig legs, constituting the most complex area in point cloud reconstruction and the area with the largest number of errors in volume calculation. The images of pig leg point clouds reconstructed by using different methods are shown in Figure 17. In Figure 17a,b, the pig legs reconstructed by Poisson contract inward, making the volume of the pig legs smaller compared to their actual volume. By comparison, the pig legs reconstructed by improved Poisson are fuller than those reconstructed by traditional Poisson and more in line with the actual pig legs, as shown in Figure 17c,d.
Prior to Poisson reconstruction, an octree is used to discretize the space. The octree depth is an important parameter: the larger the depth, the higher the resolution and the richer the detail. Figure 18 visualizes the reconstruction results under different octree depths.
In improved Poisson reconstruction, when the depth of the octree is less than six, the reconstruction quality drops significantly and the reconstructed pig body mesh is deformed; that is, at depths below six the resolution is insufficient to reconstruct a well-formed mesh model of the pig body, as shown in Figure 18a. When the depth of the octree is greater than or equal to eight, the reconstructed pig model shows no obvious deformation as the depth increases. Locally, increasing the octree depth makes the reconstructed mesh model more refined and detailed, with a smoother surface.
As shown in Table 1, when the depth of the octree increased, the number of triangular facets of the mesh model increased significantly; so did the running time. At the same time, under the same octree depth, the number of triangular facets produced by the improved Poisson reconstruction was larger than that of the traditional Poisson reconstruction, which indicates that the surface details of the improved Poisson reconstruction are richer and the degree of reduction is higher than that of the traditional Poisson reconstruction. Based on the above experimental findings, the improved Poisson reconstruction algorithm with the octree depth set as eight is most suitable for reconstructing the pig mesh model.

3.2. Analysis of Volume Calculation Results

In this article, the standard cube point cloud model, the standard cylinder point cloud model generated by the CloudCompare software (www.cloudcompare.org), and the classic Stanford rabbit model [49] (shown in Figure 19), whose true volumes can be obtained reliably, are used as references to test the accuracy of the volume calculation method proposed in this paper. A comparison is made between the slicing method of calculating three-dimensional volume and our method, with the slicing interval set to the minimum spacing of the point clouds.
Table 2 and Table 3 show that the slicing method had high computational accuracy when applied to regular point cloud models but a large error when calculating the volume of irregular and complex models. This can be attributed to the fact that slice integration relies on the calculation of the slice contour; for complex models, a large error occurs in the calculation of the slice contour area. Additionally, the slice contour has inner and outer contour problems as well as multiple contour problems [57], further reducing calculation accuracy. By contrast, constructing a mesh model to calculate volume had good computational accuracy on both regular and irregular models, confirming the feasibility and accuracy of using the tetrahedral summation method to calculate the volume of the mesh model. Further experiments showed that increasing the octree depth did not significantly improve the accuracy of the volume calculation, while it did increase the number of triangular facets and the possibility of overfitting.
Furthermore, to test the accuracy of the volume calculation method proposed in this article for irregular object volume calculation, a small pig body model with similar shape was used for testing experiments, as shown in Figure 20. First, the drainage method was adopted to calculate the true volume of the piglet model. Specifically, a transparent water bucket and a measuring tape were used. The first step is to immerse the piglet model completely in water; the second step is to calculate the volume of the discharged water to obtain its true volume. The average of ten measurement results is taken as the final result, and the true volume of the piglet model is 4537.3 cm3. In the third step, the point cloud reconstruction and volume calculation method proposed in this study is employed to calculate the volume of the piglet model.
According to Table 4, the volumes were obtained first by reconstructing mesh models via Poisson and improved Poisson, and then by using the tetrahedral summation method. The relative errors of the volume results were within 6%. The volume relative error obtained by improved Poisson reconstruction was 1% lower than that obtained by traditional Poisson reconstruction. The piglet model proves that the volume calculation method proposed in this paper has satisfactory accuracy, and the volume accuracy of the mesh model obtained by improved Poisson is higher than that of the mesh model obtained by traditional Poisson. Taking the improved Poisson reconstruction with an octree depth of eight as an example, under the condition of similar computation time (shown in Table 1), although its enhancement on the piglet model is only 1.5%, with increasing volume, a 1.5% error can lead to an increased variation in estimated weight. According to Equation (29), a 1.5% volume change may result in a weight variation of around 2 kg. Meanwhile, it can be found that when the octree depth is greater than six, the increase in octree depth has little impact on the final result of the volume calculation.
In order to verify the accuracy of the algorithm applied in real pig bodies, the strong correlation between animal real volume and real body weight data was utilized in this study [58], and the real body weight of pigs was accurately obtained by the weighbridge scale. This article establishes an estimation model based on the calculation results of pig body point cloud volume to estimate pig body weight, indirectly evaluating the error of pig body point cloud volume calculation by means of the deviation between the estimated weight value and the actual value.
In the experiment, 479 pig point cloud sample data were divided into 300 training sample sets and 179 test sample sets. The training sample set was used to build the body weight prediction model, and the test sample was used to estimate the body weight. The improved Poisson reconstruction algorithm with an octree depth of eight was combined with the tetrahedron summation method to obtain the volume calculation results of 300 pigs, so as to draw the scatter plot of their volume and weight, as shown in Figure 21.
Figure 21 shows a linear relationship between the calculated volume of the pig point cloud and the actual body weight of the pig, with a Pearson correlation coefficient of r = 0.95, indicating a strong positive correlation between the calculated pig point cloud volume and the actual pig body weight. From these 300 training samples, a linear regression prediction model is established. The linear regression equation can be expressed as $f(x) = ax + b$, where

$$a = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad b = \bar{y} - a\bar{x}$$

Therefore, the mathematical relationship of the body weight estimation model is

$$y = 1070.4x - 1.6524 \qquad (29)$$

The goodness of fit $R^2$ measures the quality of the linear fit: the closer $R^2$ is to 1, the better $x$ explains $y$ and the better the fitting effect. $R^2$ is defined as

$$R^2 = 1 - \frac{\sum_i \left( y_i - f(x_i) \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2} \qquad (30)$$
According to Figure 21 and the goodness of fit $R^2 = 0.921$ of the linear regression equation, the body weight estimation model fits the 300 training samples well. Using the same method, the volumes of the 179 test samples were calculated and substituted into the body weight estimation model; the estimated body weights were then compared with the corresponding weighbridge measurements.
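For reference, the fitting and evaluation procedure amounts to the following sketch (array names are illustrative; the error statistics depend entirely on the input data):

```python
import numpy as np

def fit_and_evaluate(vol_train, wt_train, vol_test, wt_test):
    """Least-squares line f(x) = a*x + b, its R^2, and the test-set errors."""
    dx = vol_train - vol_train.mean()
    a = np.sum(dx * (wt_train - wt_train.mean())) / np.sum(dx ** 2)
    b = wt_train.mean() - a * vol_train.mean()
    fitted = a * vol_train + b
    ss_res = np.sum((wt_train - fitted) ** 2)
    ss_tot = np.sum((wt_train - wt_train.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot                    # goodness of fit, Eq. (30)

    pred = a * vol_test + b                     # apply the model to unseen samples
    abs_err = np.abs(pred - wt_test)            # -> mean absolute error, kg
    rel_err = abs_err / wt_test * 100.0         # -> mean relative error, %
    return a, b, r2, abs_err.mean(), rel_err.mean()
```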
The scatter plot of the estimated body weight and actual body weight of pigs is shown in Figure 22. The blue dotted line represents the linear fitting straight line between the estimated body weight and the actual body weight of 179 pigs, while the red straight line represents the y = x function. The fitting straight line is very close to the y = x function, indicating that the estimation effect of the body weight estimation model is highly desirable.
Box plots of the absolute and relative errors of the body weight estimation model are shown in Figure 23. The maximum absolute error of body weight estimation was 6.801 kg; the minimum absolute error was 0.036 kg; the average absolute error was 2.664 kg. The upper quartile of the absolute error was 4.028 kg, and the lower quartile was 1.148 kg. The maximum relative error was 6.462% and the minimum 0.035%; the average relative error was 2.478%, with upper and lower quartiles of 3.706% and 1.100%, respectively. From this error analysis, it can be concluded that the body weight estimation model established from the volume calculation results has strong estimation performance. Compared with methods in the existing literature: Guo et al. [51] used Delaunay triangulation to build a mesh model and obtained the volume through triangular prism summation, and the average relative error of body weight estimated from the volumes of 20 sample point clouds was 5.162%; Kongsro estimated the body weight of 71 pigs from the projected area and body height of the pig body surface, with a root-mean-square deviation of 4.8% [15]. Compared with these previous results, the pig body volume calculation method proposed in this research achieves higher accuracy and greater feasibility in predicting body weight.

4. Discussion

For large quantities of pig body point cloud data with complex surface contours, previous studies have typically used 2D image-based or simple 3D reconstruction technology to obtain livestock volume data. Limited by data dimensionality and complexity, however, these methods often fail to accurately capture the intricate features of the pig body surface. The slicing method, one of the commonly used algorithms for point cloud volume calculation, is prone to errors from the many factors it must account for, such as inner and outer contour fitting, multi-contour fitting, slicing direction, and slicing interval, and because of missing parts in the pig body point cloud it places high demands on data acquisition. The Monte Carlo method, for its part, is not suitable for volume calculation of large animals like pigs.
In contrast, the method proposed in this paper calculates pig body volume by using an octree-enhanced Poisson model reconstruction, demonstrating the feasibility and accuracy of this approach through multiple sets of experiments. The specific work and main innovations of this paper are as follows:
(1)
High-quality target pig point clouds are obtained by using the moving least squares method to further smooth pig body point cloud data for subsequent point cloud reconstruction and volume calculation.
(2)
The maximum spanning tree is constructed based on the undirected graph to redirect pig body point cloud normal vector, thus improving the accuracy of the point cloud normal vector, which in turn makes our method conducive to reducing the erroneous reconstruction caused by the Poisson reconstruction algorithm in the complex area of the pig body point cloud.
(3)
The improved Poisson reconstruction algorithm, with the addition of position constraints and dynamic sample point weights, can more accurately represent complex surfaces and more advantageously repair small-scale voids in the pig point cloud in actual scenes, and it can reduce the errors generated during reconstruction. Meanwhile, experimental outcomes show that superior reconstruction effects can be achieved in voids and the pig leg point cloud.
(4)
By decomposing the reconstructed pig mesh model into multiple tetrahedrons, the signed volumes of all tetrahedrons are summed to obtain the mesh model volume. Through the experiments on the standard model, the relative errors of the volume calculation results were all less than 1%, which proves that the tetrahedron summation method has a high accuracy in calculating the volume of the standard model. According to the volume calculation experiment conducted on the piglet model after collecting its point cloud, the relative error of the volume result obtained from Poisson reconstruction was about 5.6%, while the relative error of the volume result obtained from improved Poisson reconstruction was about 4%. The 479 pig point cloud data sets collected from the pig farm were divided into 300 training samples and 179 test samples. An estimation model for body weight was first established by using the volume calculation results of the training sample point cloud. The pig body weight was subsequently estimated using the volume calculation results of 179 test samples. Finally, comparisons were made between the estimated pig body weight and the actual pig body weight for further analyses. The experimental results reveal that the maximum absolute error of body weight estimation was 6.801 kg; the average absolute error was 2.664 kg; the maximum relative error and the average value were 6.462% and 2.478%, respectively. Given the difficulty in obtaining live pig body volume, we put forward in this paper a method to indirectly verify the accuracy of pig body point cloud volume calculation, confirming that the correlation coefficient between pig body volume and pig body weight was 0.95.
In the past, obtaining livestock body volume data either manually or automatically has been a challenging research topic. As a new phenotype, the volume data can be used not only to estimate body weight more accurately, but also to evaluate meat density efficiently. Through point cloud segmentation, it can also be applied to the volume size prediction and proportion estimation of various body parts, which has new reference value in breeding management and selective breeding.

5. Conclusions

The present study presents an innovative approach to accurately reconstructing and calculating the volume of pig body point clouds, leveraging an improved Poisson reconstruction algorithm integrated with octree. Through a series of experiments and validations, we have demonstrated the feasibility and accuracy of our method, achieving significant advancements in livestock body volume estimation. By incorporating techniques such as moving least squares smoothing, normal vector adjustment, and tetrahedron summation for volume calculation, we have addressed challenges associated with irregular point cloud shapes and missing data, achieving impressive results with low relative errors.
Furthermore, the strong correlation between pig body volume and weight, as evidenced by a correlation coefficient of 0.95, underscores the utility of volume data in accurate weight estimation and efficient breeding management. This novel phenotype not only enhances weight estimation accuracy but also offers potential applications in meat density evaluation and body part proportion estimation through point cloud segmentation. Our research opens new avenues for precise livestock management and selective breeding strategies. In future studies, we aim to explore further advancements, particularly in segmenting various parts of the pig body point cloud for more comprehensive volume analysis.

Author Contributions

Conceptualization, Z.W. and L.Y.; methodology, J.L., H.C., R.W. and R.L.; software, J.L., R.W. and R.L.; validation, J.L., H.C. and R.L.; formal analysis, J.L., H.C., R.W. and R.L.; investigation, G.C., H.Z. and S.Z.; resources, G.C., H.Z. and S.Z.; data curation, X.W., X.L. and H.W.; writing—original draft preparation, J.L., H.C. and R.L.; writing—review and editing, X.W., X.L. and H.W.; visualization, X.W., X.L. and H.W.; supervision, Z.W. and L.Y.; project administration, J.L. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by the National Natural Science Foundation of China [32172780], the Key Area R&D Plan of Guangdong Province [2019B020219001], the National Engineering Research Center for Breeding Swine Industry, and the Key Laboratory of Guangzhou for Intelligent Agriculture [201902010081].

Institutional Review Board Statement

The animal study protocol was approved by the Experimental Animal Ethics Committee of South China Agricultural University under protocol 2020G008.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.

Acknowledgments

The authors express their gratitude to Guangdong Wens Foodstuff Group Co., Ltd. for generously supplying the materials used in the experiments. Additionally, we sincerely thank all individuals who assisted us in data acquisition and the manual measurement of target pigs.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

Cutting: The process of dividing a large point cloud dataset into multiple subsets, which typically represent different objects or different parts of objects within the point cloud.
Cutting direction: When segmenting three-dimensional point cloud data, a directional strategy can be adopted in which one or more predefined directions are selected for segmentation. These directions can align with the standard coordinate axes (X, Y, Z) of the point cloud data, or they can be non-axial directions chosen for specific scene requirements.
Cutting interval: The distance selected for segmentation along a specific direction in three-dimensional point cloud data.
Point cloud contour fitting: Fitting a set of scattered three-dimensional points into contours with specific geometric shapes using mathematical models.
Point cloud multi-contour: Contours of multiple independent objects appearing simultaneously in a single point cloud dataset. In the point cloud of a complex scene, there may be multiple different objects, each with its own contours and boundaries.
Point cloud multi-contour fitting: The process of analyzing and processing complex point cloud data to identify and fit the multiple contours of these objects.
Slice: A subset obtained by cutting three-dimensional point cloud data along a specific direction; a slice can be viewed as a polygonal prism.
Slice contour: The bottom contour of the prism.
Slice integration: A technique for calculating the volume of three-dimensional point clouds; for details, refer to Zhi et al. [36]. A minimal sketch is given after this list.
Slice interval, Slice spacing: The slice interval defines the height of the prism.
Slice projection: The process of projecting a prism onto a two-dimensional plane.
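The sketch below illustrates the slice integration baseline defined above, under our own simplifying assumptions: slicing along the x-axis and approximating each slice contour with a 2-D convex hull, which overestimates concave cross-sections and may help explain the large slicing-method error on the rabbit model in Table 3.

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def slice_volume(points: np.ndarray, interval: float) -> float:
    """Slice-integration volume estimate of a point cloud.

    Cuts the cloud into slabs of thickness `interval` along x,
    projects each slab onto the y-z plane, takes the area of its
    convex hull as the slice contour area, and sums area * interval,
    i.e., treats each slice as a prism of height `interval`.
    """
    x = points[:, 0]
    total = 0.0
    for lo in np.arange(x.min(), x.max(), interval):
        slab = points[(x >= lo) & (x < lo + interval)][:, 1:]
        if len(slab) < 3:
            continue  # not enough points to form a contour
        try:
            # For 2-D input, ConvexHull.volume is the enclosed area.
            total += ConvexHull(slab).volume * interval
        except QhullError:
            continue  # degenerate (e.g., collinear) slice
    return total
```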

References

  1. He, D.; Liu, D.; Zhao, K. Review of perceiving animal information and behavior in precision livestock farming. Trans. Chin. Soc. Agric. Mach. 2016, 47, 231–244. [Google Scholar]
  2. Wongsriworaphon, A.; Arnonkijpanich, B.; Pathumnakul, S. An approach based on digital image analysis to estimate the live weights of pigs in farm environments. Comput. Electron. Agric. 2015, 115, 26–33. [Google Scholar] [CrossRef]
  3. Menesatti, P.; Costa, C.; Antonucci, F.; Steri, R.; Pallottino, F.; Catillo, G. A low-cost stereovision system to estimate size and weight of live sheep. Comput. Electron. Agric. 2014, 103, 33–38. [Google Scholar] [CrossRef]
  4. Banhazi, T.; Tscharke, M.; Ferdous, W.; Saunders, C.; Lee, S. Improved image analysis based system to reliably predict the live weight of pigs on farm: Preliminary results. Aust. J. Multi-Discip. Eng. 2011, 8, 107–119. [Google Scholar] [CrossRef]
  5. Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [Google Scholar] [CrossRef]
  6. Liu, T.H.; Li, Z.; Teng, G.H.; Luo, C. Prediction of pig weight based on radical basis function neural network. Trans. Chin. Soc. Agric. Mach. 2013, 44, 245–249. [Google Scholar]
  7. Wu, Y.; Liu, Z.Y.; Gu, Y.N.; Zhao, H.J. Sow weight estimation based on machine vision. Electron. Technol. Softw. Eng. 2020, 01, 100–101. [Google Scholar]
  8. Chen, H.; Liang, Y.; Huang, H.; Huang, Q.; Gu, W.; Liang, H. Live pig-weight learning and prediction method based on a multilayer RBF network. Agriculture 2023, 13, 253. [Google Scholar] [CrossRef]
  9. Nguyen, A.H.; Holt, J.P.; Knauer, M.T.; Abner, V.A.; Lobaton, E.J.; Young, S.N. Towards rapid weight assessment of finishing pigs using a handheld, mobile RGB-D camera. Biosyst. Eng. 2023, 226, 155–168. [Google Scholar] [CrossRef]
  10. Thapar, G.; Biswas, T.K.; Bhushan, B.; Naskar, S.; Kumar, A.; Dandapat, P.; Rokhade, J. Accurate estimation of body weight of pigs through smartphone image measurement app. Smart Agric. Technol. 2023, 4, 100194. [Google Scholar] [CrossRef]
  11. Cang, Y.; He, H.; Qiao, Y. An intelligent pig weights estimate method based on deep learning in sow stall environments. IEEE Access 2019, 7, 164867–164875. [Google Scholar] [CrossRef]
  12. Du, X.D.; Li, X.X.; Fan, S.R.; Yan, Z.C.; Ding, X.D.; Yang, J.F.; Zhang, L.P. A review of the methods of pig body size measurement and body weight estimation. Chin. J. Anim. Sci. 2023, 59, 41–46+56. [Google Scholar] [CrossRef]
  13. Cominotte, A.; Fernandes, A.; Dorea, J.; Rosa, G.; Ladeira, M.; Van Cleef, E.; Pereira, G.; Baldassini, W.; Neto, O.M. Automated computer vision system to predict body weight and average daily gain in beef cattle during growing and finishing phases. Livest. Sci. 2020, 232, 103904. [Google Scholar] [CrossRef]
  14. Condotta, I.C.; Brown-Brandl, T.M.; Silva-Miranda, K.O.; Stinn, J.P. Evaluation of a depth sensor for mass estimation of growing and finishing pigs. Biosyst. Eng. 2018, 173, 11–18. [Google Scholar] [CrossRef]
  15. Kongsro, J. Estimation of pig weight using a Microsoft Kinect prototype imaging system. Comput. Electron. Agric. 2014, 109, 32–35. [Google Scholar] [CrossRef]
  16. Paudel, S.; de Sousa, R.V.; Sharma, S.R.; Brown-Brandl, T. Deep Learning Models to Predict Finishing Pig Weight Using Point Clouds. Animals 2023, 14, 31. [Google Scholar] [CrossRef]
  17. Fu, W.; Teng, G.; Yang, Y. Research on three-dimensional model of pig’s weight estimating. Trans. CSAE 2006, 22, 84–87. [Google Scholar]
  18. Chu, M.Y. Research on Body Size Measurement and Weight Estimation of Cow Based on Three-Dimensional Reconstruction. Master’s Thesis, Hebei Agricultural University, Baoding, China, 2021. [Google Scholar]
  19. Hao, H.; Jincheng, Y.; Ling, Y.; Gengyuan, C.; Sumin, Z.; Huan, Z. An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size. Comput. Electron. Agric. 2023, 205, 107560. [Google Scholar] [CrossRef]
  20. Okayama, T.; Kubota, Y.; Toyoda, A.; Kohari, D.; Noguchi, G. Estimating body weight of pigs from posture analysis using a depth camera. Anim. Sci. J. 2021, 92, e13626. [Google Scholar] [CrossRef]
  21. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-barn pig weight estimation based on body measurements by a Kinect v1 depth camera. Comput. Electron. Agric. 2018, 148, 29–36. [Google Scholar] [CrossRef]
  22. Wang, K.; Guo, H.; Ma, Q.; Su, W.; Chen, L.; Zhu, D. A portable and automatic Xtion-based measurement system for pig body size. Comput. Electron. Agric. 2018, 148, 291–298. [Google Scholar] [CrossRef]
  23. Le Cozler, Y.; Allain, C.; Caillot, A.; Delouard, J.; Delattre, L.; Luginbuhl, T.; Faverdin, P. High-precision scanning system for complete 3D cow body shape imaging and analysis of morphological traits. Comput. Electron. Agric. 2019, 157, 447–453. [Google Scholar] [CrossRef]
  24. Ozkaya, S.; Neja, W.; Krezel-Czopek, S.; Oler, A. Estimation of bodyweight from body measurements and determination of body measurements on Limousin cattle using digital image analysis. Anim. Prod. Sci. 2015, 56, 2060–2063. [Google Scholar] [CrossRef]
  25. Pezzuolo, A.; Guarino, M.; Sartori, L.; Marinello, F. A feasibility study on the use of a structured light depth-camera for three-dimensional body measurements of dairy cows in free-stall barns. Sensors 2018, 18, 673. [Google Scholar] [CrossRef]
  26. Ruchay, A.; Kober, V.; Dorofeev, K.; Kolpakov, V.; Miroshnikov, S. Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery. Comput. Electron. Agric. 2020, 179, 105821. [Google Scholar] [CrossRef]
  27. Shuai, S.; Ling, Y.; Shihao, L.; Haojie, Z.; Xuhong, T.; Caixing, L.; Aidong, S.; Hanxing, L. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view RGB-D cameras. Comput. Electron. Agric. 2020, 175, 105543. [Google Scholar] [CrossRef]
  28. Weales, D.; Moussa, M.; Tarry, C. A robust machine vision system for body measurements of beef calves. Smart Agric. Technol. 2021, 1, 100024. [Google Scholar] [CrossRef]
  29. Yin, L.; Cai, G.; Tian, X.; Sun, A.; Shi, S.; Zhong, H.; Liang, S. Three dimensional point cloud reconstruction and body size measurement of pigs based on multi-view depth camera. Trans. Chin. Soc. Agric. Eng. 2019, 35, 201–208. [Google Scholar]
  30. Du, A.; Guo, H.; Lu, J.; Su, Y.; Ma, Q.; Ruchay, A.; Marinello, F.; Pezzuolo, A. Automatic livestock body measurement based on keypoint detection with multiple depth cameras. Comput. Electron. Agric. 2022, 198, 107059. [Google Scholar] [CrossRef]
  31. Luo, X.; Hu, Y.; Gao, Z.; Guo, H.; Su, Y. Automated measurement of livestock body based on pose normalisation using statistical shape model. Biosyst. Eng. 2023, 227, 36–51. [Google Scholar] [CrossRef]
  32. Liu, Y.; Zhou, J.; Bian, Y.; Wang, T.; Xue, H.; Liu, L. Estimation of Weight and Body Measurement Model for Pigs Based on Back Point Cloud Data. Animals 2024, 14, 1046. [Google Scholar] [CrossRef] [PubMed]
  33. Xu, J.; Moyer, D.; Grant, P.E.; Golland, P.; Iglesias, J.E.; Adalsteinsson, E. SVoRT: Iterative transformer for slice-to-volume registration in fetal brain MRI. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 3–13. [Google Scholar]
  34. Kwon, K.; Mun, D. Iterative offset-based method for reconstructing a mesh model from the point cloud of a pig. Comput. Electron. Agric. 2022, 198, 106996. [Google Scholar] [CrossRef]
  35. Qiao, J.F.; Zhou, Y.Z.; Wang, Y.; Fan, X.; Nong, Y.H.; Li, B.Y.; Zi, W.H. Advances in 3D laser scanning volume measurement technology and application. Laser Infrared 2021, 51, 1115–1122. [Google Scholar]
  36. Zhi, Y.; Zhang, Y.; Chen, H.; Yang, K.; Xia, H. A method of 3d point cloud volume calculation based on slice method. In Proceedings of the 2016 International Conference on Intelligent Control and Computer Application (ICCA 2016), Zhengzhou, China, 16–17 January 2016; Atlantis Press: Amsterdam, The Netherlands, 2016; pp. 155–158. [Google Scholar]
  37. Bi, X.W.; Zhang, J.C.; Chen, Y.; Yan, X.G.; Li, B. A method for calculating point cloud object volume based on opposite-direction slicing method. J. Gansu Sci. 2021, 33, 18–26. [Google Scholar]
  38. Chang, W.C.; Wu, C.H.; Tsai, Y.H.; Chiu, W.Y. Object volume estimation based on 3D point cloud. In Proceedings of the 2017 International Automatic Control Conference (CACS), Pingtung, Taiwan, 12–15 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar]
  39. Liu, D.L.; Wang, J.X. Calculation of mass center and volume of irregular objects. Geotech. Investig. Surv. 2021, 49, 46–50. [Google Scholar]
  40. Biljecki, F.; Ledoux, H.; Stoter, J. Error propagation in the computation of volumes in 3D city models with the Monte Carlo method. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 31–39. [Google Scholar] [CrossRef]
  41. Covre, N.; Luchetti, A.; Lancini, M.; Pasinetti, S.; Bertolazzi, E.; De Cecco, M. Monte Carlo-based 3D surface point cloud volume estimation by exploding local cubes faces. Acta IMEKO 2022, 2022, 1–9. [Google Scholar] [CrossRef]
  42. Jaekel, U. A Monte Carlo method for high-dimensional volume estimation and application to polytopes. Procedia Comput. Sci. 2011, 4, 1403–1411. [Google Scholar] [CrossRef]
  43. Raychaudhuri, S. Introduction to Monte Carlo simulation. In Proceedings of the 2008 Winter Simulation Conference, Miami, FL, USA, 7–10 December 2008; pp. 91–100. [Google Scholar]
  44. Lv, X.; Wu, J.; Zhang, M.; Zhang, B.; Li, N. Volume measurement study of 3D medical reconstruction model based on Quasi-Monte Carlo method. Appl. Res. Comput. 2014, 31, 612–614+618. [Google Scholar]
  45. Chen, S.; Zhang, S.; Liu, M.; Zheng, R. Underwater terrain three-dimensional reconstruction algorithm based on improved Delaunay triangulation. Comput. Sci. 2020, 47, 137–141. [Google Scholar]
  46. Jie, Y.; Pin, L.; Changwen, Z. A comparative research on methods of delaunay triangulation. Image Graph 2010, 15, 1158–1167. [Google Scholar]
  47. Kazhdan, M.; Hoppe, H. Screened poisson surface reconstruction. ACM Trans. Graph. (ToG) 2013, 32, 1–13. [Google Scholar] [CrossRef]
  48. Ouyang, N.; Yang, B.W. Three-dimensional point cloud reconstruction of normal estimated screened Poisson algorithm. TV Technol. 2017, 41, 237–242. [Google Scholar] [CrossRef]
  49. Stanford University. The Stanford 3D Scanning Repository. 2014. Available online: https://graphics.stanford.edu/data/3Dscanrep/ (accessed on 1 April 2023).
  50. Cheng, L.F. Research and Implementation of the Volume Calculation of Cattle Based on Point Cloud. Master’s Thesis, Northwest A&F University, Xianyang, China, 2016. [Google Scholar]
  51. Guo, X.F.; Gu, Y.N.; Xie, R.J. Development and research of pig weight measurement system based on laser radar. Comput. Meas. Control 2022, 30, 88–94. [Google Scholar] [CrossRef]
  52. Lin, R.; Hu, H.; Wen, Z.; Yin, L. Research on denoising and segmentation algorithm application of pigs’ point cloud based on DBSCAN and PointNet. In Proceedings of the 2021 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Trento-Bolzano, Italy, 3–5 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 42–47. [Google Scholar]
  53. Ren, H.; Cheng, J.; Huang, A. The complex variable interpolating moving least-squares method. Appl. Math. Comput. 2012, 219, 1724–1736. [Google Scholar] [CrossRef]
  54. Sanchez, J.; Denis, F.; Coeurjolly, D.; Dupont, F.; Trassoudaine, L.; Checchin, P. Robust normal vector estimation in 3D point clouds through iterative principal component analysis. ISPRS J. Photogramm. Remote Sens. 2020, 163, 18–35. [Google Scholar] [CrossRef]
  55. Jakob, J.; Buchenau, C.; Guthe, M. Parallel globally consistent normal orientation of raw unorganized point clouds. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2019; Volume 38, pp. 163–173. [Google Scholar]
  56. Haoran, L.; Tiancan, M.; Zhi, G. 3D face reconstruction based on global ICP and improved Poisson. Acta Geod. Cartogr. Sin. 2023, 52, 454. [Google Scholar]
  57. Liu, J.; Li, H. Volume measurement of irregular objects based on improved point cloud slicing method. Acta Opt. Sin. 2021, 41, 133–145. [Google Scholar]
  58. Salazar-Cuytun, R.; Garcia-Herrera, R.A.; Munoz-Benitez, A.L.; Munoz-Osorio, G.; Camacho-Perez, E.; Ptacek, M.; Portillo-Salgado, R.; Vargas-Bello-Perez, E.; Canul, A.J.C. Relationship between body volume and body weight in Pelibuey ewes. Trop. Subtrop. Agroecosystems 2021, 24, 1–7. [Google Scholar] [CrossRef]
Figure 1. Slicing method for slice contour extraction.
Figure 2. Comparison of Stanford rabbit [49] point cloud reconstruction. (a) Stanford rabbit point cloud. (b) Delaunay triangulation algorithm. (c) Poisson reconstruction algorithm.
Figure 3. Schematic diagram of camera placement. (a) Depth cameras fixed on both sides. (b) Depth camera fixed above.
Figure 4. Flowchart of the improved Poisson reconstruction algorithm.
Figure 5. Testing the accuracy of the volume calculation method on the standard point cloud models and the piglet model.
Figure 6. Volume calculation and body weight estimation process for the experimental pig point cloud dataset (the blue line depicts the process of obtaining the Pearson correlation coefficient, whereas the green line illustrates the process of calculating the relative error of the estimated body weight).
Figure 7. Burr noise points on the pig body surface point cloud without smoothing treatment.
Figure 8. Smoothed pig body surface point cloud.
Figure 9. Construction of an undirected graph based on neighborhood distance (specific example: erroneous propagation of normal vectors, with green arrows indicating correct orientation and red arrows indicating incorrect orientation). (a) Sharp points in point clouds. (b) Thin planes in point clouds.
Figure 10. Constructing a KNN graph to reorient point cloud normal vectors (blue arrows represent known normals, green arrows the normals of points still to be solved; the longer the normal vector, the greater the weight). (a) Sharp points in point clouds. (b) Thin planes in point clouds.
Figure 11. Relation between the normal vector field of the pig point cloud and the gradient of the indicator function. (a) Gradient field of the indicator function χ. (b) Normal vector field of the pig point cloud.
Figure 12. Octree space division of the pig point cloud.
Figure 13. Calculation of the volume of tetrahedron OABC.
Figure 14. Tetrahedron summation method (the pentahedron ABCED is decomposed into positive and negative volume regions, $V_{total} = V_1 + V_2 - V_3 - V_4 - V_5$).
Figure 15. Judging the normal vector orientation according to the right-hand rule.
Figure 16. Comparison of different point cloud reconstruction methods. (a) Preprocessed pig body point cloud. (b) Smoothed pig body point cloud. (c) Traditional Poisson reconstruction. (d) Smoothing plus traditional Poisson reconstruction. (e) Improved Poisson reconstruction. (f) Smoothing plus improved Poisson reconstruction.
Figure 17. Comparison of the pig leg area under different point cloud reconstruction methods. (a) Traditional Poisson reconstruction. (b) Smoothing plus traditional Poisson reconstruction. (c) Improved Poisson reconstruction. (d) Smoothing plus improved Poisson reconstruction.
Figure 18. Comparison of improved Poisson reconstruction results under different octree depths. (a) Octree depth d = 6. (b) Octree depth d = 8. (c) Octree depth d = 10. (d) Octree depth d = 12.
Figure 19. Standard models. (a) Cube. (b) Cylinder. (c) Rabbit.
Figure 20. Piglet model.
Figure 21. Scatter plot of point cloud volume calculation results against actual body weight for 300 pigs.
Figure 22. Scatter plot of estimated body weight against actual body weight for 179 pigs.
Figure 23. Box plots of the absolute and relative errors of the body weight estimation model. (a) Absolute error of estimated body weight. (b) Relative error of estimated body weight.
Table 1. Influence of octree depth on reconstruction time and number of facets.

Depth | Time (s), Poisson | Triangular Facets, Poisson | Time (s), Improved Poisson | Triangular Facets, Improved Poisson
6 | 0.328 | 8802 | 0.554 | 9322
8 | 2.251 | 109,350 | 2.343 | 122,734
10 | 7.854 | 451,246 | 8.192 | 458,122
12 | 12.891 | 493,294 | 16.408 | 546,662
Table 2. Volume calculation results of standard models.

Model | True Volume Value | Slicing Method | Poisson, d = 8 | Poisson, d = 12 | Improved Poisson, d = 8 | Improved Poisson, d = 12
Cube | 1 | 0.9999 | 1.0009 | 1.0009 | 0.9992 | 0.9992
Cylinder | 3.1415 | 3.1483 | 3.126 | 3.126 | 3.1245 | 3.1244
Rabbit | 753,955 | 936,250 | 756,517 | 756,501 | 755,526 | 755,515
Table 3. Relative error of volume calculation results of standard models.

Model | Slicing Method | Poisson, d = 8 | Poisson, d = 12 | Improved Poisson, d = 8 | Improved Poisson, d = 12
Cube | 0.01% | 0.09% | 0.09% | 0.08% | 0.08%
Cylinder | 0.2164% | 0.4933% | 0.4933% | 0.5411% | 0.5443%
Rabbit | 24.1784% | 0.3398% | 0.3376% | 0.2083% | 0.2069%
Table 4. Volume calculation results and errors of the piglet model.

Reconstruction Method | Volume Result | Absolute Error | Relative Error
Poisson reconstruction, d = 6 | 4788.3 | 251 | 5.532%
Poisson reconstruction, d = 8 | 4283.5 | 253.8 | 5.5936%
Poisson reconstruction, d = 12 | 4283.6 | 253.7 | 5.591%
Improved Poisson reconstruction, d = 6 | 4803.1 | 265.8 | 5.858%
Improved Poisson reconstruction, d = 8 | 4350.9 | 186.4 | 4.1081%
Improved Poisson reconstruction, d = 12 | 4352.1 | 185.2 | 4.0817%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
