Article

A Non-Contact Measurement of Animal Body Size Based on Structured Light

College of Information Engineering, Northwest A&F University, Xianyang 712100, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(2), 903; https://doi.org/10.3390/app14020903
Submission received: 7 December 2023 / Revised: 17 January 2024 / Accepted: 19 January 2024 / Published: 20 January 2024

Abstract

To improve the accuracy of non-contact measurements of animal body size and reduce costs, new monocular scanning equipment based on structured light was built together with a matched point cloud generation algorithm. First, using the structured-light 3D measurement model, the camera intrinsic and extrinsic matrices were calculated. Second, the least squares method and the improved segment–facet intersection method were used to implement and optimize the calibration of the light plane. A new algorithm was then proposed to extract gray centers, along with a denoising and matching algorithm; together they alleviate the astigmatism of light on animal fur and the distortion or fracture of light stripes caused by the irregular shape of an animal's body. Third, the point cloud was generated via the line–plane intersection method, from which animal body sizes could be measured. Finally, an experiment on live animals such as rabbits and on animal specimens such as a fox and a goat was conducted to compare our equipment with a depth camera and a 3D scanner. The results show that the error of our equipment is approximately 5%, which is much smaller than that of the other two devices. This equipment provides a practicable option for measuring animal body size.

1. Introduction

The body size parameters of an animal reflect its growth and health condition and are essential indicators for selection and breeding, for the protection of rare animals, and for assessing animal health. The traditional way to obtain these parameters is to use tools such as measuring sticks and tapes in direct contact with the animal (so-called manual measurement). This method is inefficient and labor-intensive: the results are easily influenced by human subjective factors, and the measurement may provoke stress reactions in the animal, which are dangerous to the operators.
With the development of computer vision, the use of computers and cameras to obtain three-dimensional point clouds for animal body measurement has aroused researchers' interest. Focusing on a single animal, Marinello et al. [1] used four Kinect sensors to obtain the point clouds of a cow's body from four directions. Ruchay et al. [2] built a depth camera-based system to automatically measure cattle body parameters, with an average difference of 14.6%; they also used depth cameras and a 3D shape recovery method to obtain 103 point clouds of a cattle body [3] with a 90% confidence level. Shi et al. [4] placed a pig in a cage and used multi-view RGB-D cameras to obtain the point cloud, with an average relative error of 4.67%. Focusing on multiple animals, Du et al. [5] used depth cameras and key-point detection methods in experiments on cattle and pigs. They retrieved data on 103 cattle and nine pigs and compared them with the manual results; the margin of error on cattle was above 20%, and the margin on pigs was above 30%. Other than using a depth camera, Salau et al. [6] used an SR4K time-of-flight camera to determine the body traits of cows and obtained very precise results, but this method mostly requires the animal to stay still, so it has limited usage scenarios. Silvia Zuffi et al. [7] proposed a new method to capture the detailed 3D shape of lions, tigers, and bears from images alone; they used a robust prior model of articulated animal shapes and deformed the animal shapes into a canonical reference pose. Yufu Wang et al. [8] presented a new method to capture new bird species using an articulated template and images; they captured 17 bird species, and the mean PCK (percentage of correct key points) of these species was 0.963. Nadine Rueegg et al. [9] proposed breed-augmented regression using classification to recover dogs' 3D shape and pose from a single image with a PCK of 0.913. Shangzhe Wu et al. [10] presented a method dubbed MagicPony to predict an articulated animal's 3D shape, articulation, viewpoint, texture, and lighting, with a PCK of 0.635.
The method of using structured light to obtain the surface point cloud of an object has been used mainly in 3D scanning. Rocchini et al. [11] designed a low-cost 3D scanner based on structured light and applied it to an archeological statue. Georgopoulos et al. [12] used structured light to measure objects of various materials such as marble, wood, and stone. Xiao et al. [13] combined structured light with close-range photogrammetry to measure large-scale 3D shapes. Guo et al. [14] used a high-precision air-floating rotary table and a structured light sensor to obtain the point clouds of gears. Niven et al. [15] used a structured light scanner to scan modern skeletal material that could not be moved from a museum. Le Cozler et al. [16] used a Morpho3D camera based on structured light to retrieve cow body point clouds; they mounted the equipment on a sliding door, guided the cow to walk through the door slowly, and achieved an accuracy of nearly 80%. However, this method requires training and guiding the animal and can only collect data over a period of time, which limits its usage scenarios.
The key step in using structured light to acquire the point cloud of an object is the calibration of the light plane. Qiao et al. [17] used a wrist-mounted camera and a robotic manipulator to calibrate the light plane. Liu et al. [18] made a single ball target and used nonlinear optimization under a maximum likelihood criterion to calculate the light plane equation. Shao et al. [19] drew concentric circles on the calibration plate and input parameters such as the concentric circle radius to obtain the light plane equation. Sun et al. [20] made a cylindrical target and reconstructed the cylindrical quadric surface to calibrate the light plane. However, these methods either need a calibration target with a complex shape or delicate pattern, or rely on complicated computation, even 3D modeling, with many parameters that must be input before calibration.
In this paper, we present a new light plane calibration algorithm that only needs a single checkerboard calibration board without additional parameters other than those required for camera calibration. We also propose a new non-contact measurement method for animal body parameters, an instantaneous, accurate method suitable for multiple animals. Regarding error evaluation, we use different numbers of calibration images when we calibrate the equipment to explore how the camera and projector accuracy affect the measuring precision.

2. Materials and Methods

To measure the animal body parameters, this study uses structured light to obtain the body point cloud and calculate the animal body parameters. Firstly, we use the Zhang Zhengyou calibration method to obtain the camera's intrinsic and extrinsic matrices. We then use these two matrices to calculate the chessboard corner point coordinates in the world coordinate system and use the least squares method to fit the chessboard calibration pattern plane. Secondly, we propose a new light plane calibration method that automatically detects the gray centers of light stripes without additional parameters. We then use the improved segment–facet intersection method to calculate the world coordinates of these gray centers and use the least squares method again to calibrate the light plane. Thirdly, we propose a new gray-center detection, denoising, and matching algorithm to accurately obtain the gray centers on the animal surface and use the improved segment–facet intersection method to obtain the point cloud. Finally, we utilize the characteristics of structured light to calculate the animal body height and body length automatically and design an interactive interface for selecting feature points on the point cloud to calculate other body measurements. The complete process of our method is shown in Figure 1, and the parameters used below are described in Table 1.

2.1. Camera Calibration

To achieve the instantaneity, accuracy, and robustness required in obtaining animal body point clouds, this research uses a line-structured light 3D measurement perspective projection model [21], as shown in Figure 2. The projector projects a light plane that intersects the chessboard calibration plate plane (XwOwYw) at line L. Point P (Xw, Yw, Zw) is a point on line L in the world coordinate system, (u, v) are the pixel coordinates of P in the image pixel coordinate system, plane OXY is the camera imaging plane, and point Oc is the camera optical center. The primary function of this model is to reconstruct pixel points in the image pixel coordinate system as points in the world coordinate system, and the reconstruction satisfies the constraint of Equation (1), where matrix I is the camera intrinsic matrix, [R, t] is the extrinsic matrix of a single image, (Xc, Yc, Zc) are the point coordinates in the camera coordinate system, R is the rotation matrix, t is the translation vector, and d is the distortion coefficient.
$$
d \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= I \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= I \left[ R, t \right] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{1}
$$
To obtain the intrinsic matrix I and the extrinsic matrix [R, t], we use the Zhang Zhengyou calibration method [22], a camera calibration method proposed by Professor Zhang Zhengyou in 2000. This method first uses four pairs of points to solve the homography between world coordinates and pixel coordinates, which is the product of the intrinsic and extrinsic matrices (i.e., I[R, t]). Then, based on the constraint that the column vectors of R are orthogonal and have unit norm, the intrinsic and extrinsic matrices are solved via the least squares method. Finally, maximum likelihood estimation is used to estimate the distortion coefficients and optimize the calibration results. Zhang Zhengyou's calibration method lies between traditional calibration and self-calibration: it overcomes the disadvantage of traditional methods, which require high-precision calibration objects, by only requiring a printed checkerboard, and compared to self-calibration it improves accuracy and is easier to operate, which is why it is widely used in computer vision.
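As a brief illustration, the following sketch shows how this calibration step could be carried out with OpenCV's standard chessboard routines; the pattern size, square size, and image file names are hypothetical placeholders, and this is not the authors' exact implementation.

import cv2
import numpy as np

# Hypothetical settings: a 9 x 6 inner-corner chessboard with 25 mm squares.
PATTERN = (9, 6)
SQUARE_MM = 25.0

# World coordinates of the chessboard corners (Zw = 0 on the board plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in ["calib_01.png", "calib_02.png"]:  # placeholder image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Intrinsic matrix I, distortion coefficients, and one rotation/translation per view.
rms, I, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
R, _ = cv2.Rodrigues(rvecs[0])   # rotation matrix of the first view
t = tvecs[0]                     # translation vector of the first view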

2.2. Light Plane Calibration

2.2.1. Calculate Chessboard Calibration Plate Plane

With the line-structured light 3D measurement perspective projection model, we can also calibrate the chessboard calibration plate plane. We detect the chessboard corner points using the Zhang Zhengyou calibration method and calculate the coordinates of the chessboard corner points in the world coordinate system using Equation (1). Then, we use the least square method [23] to fit the chessboard calibration pattern plane.
The least squares method is a standard regression analysis technique for approximating overdetermined systems. We set the equation of the spatial plane to Ax + By + C = z and denote the world coordinates of the chessboard corner points as (xi, yi, zi). S is the sum of squared deviations of the corner points from the chessboard calibration pattern plane, which can be calculated using Equation (2). Setting the partial derivatives of S with respect to A, B, and C to zero minimizes S; simplifying the resulting formulas gives Equation (3), where n is the number of points, and converting it to matrix form gives Equation (4), whose solution yields A, B, and C.
$$
S = \sum_{i=1}^{n} \left( A x_i + B y_i + C - z_i \right)^2
\tag{2}
$$

$$
\begin{cases}
A \sum x_i^2 + B \sum x_i y_i + C \sum x_i = \sum x_i z_i \\
A \sum x_i y_i + B \sum y_i^2 + C \sum y_i = \sum y_i z_i \\
A \sum x_i + B \sum y_i + nC = \sum z_i
\end{cases}
\tag{3}
$$

$$
\begin{bmatrix}
\sum x_i^2 & \sum x_i y_i & \sum x_i \\
\sum x_i y_i & \sum y_i^2 & \sum y_i \\
\sum x_i & \sum y_i & n
\end{bmatrix}
\cdot
\begin{bmatrix} A \\ B \\ C \end{bmatrix}
=
\begin{bmatrix} \sum x_i z_i \\ \sum y_i z_i \\ \sum z_i \end{bmatrix}
\tag{4}
$$
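To make the normal equations in Equation (4) concrete, the following NumPy sketch fits A, B, and C to a set of corner points already expressed in world coordinates; the function name and array layout are illustrative assumptions.

import numpy as np

def fit_plane(points):
    """Fit A*x + B*y + C = z to an N x 3 point array via the normal equations (Eq. 4)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.array([[np.sum(x * x), np.sum(x * y), np.sum(x)],
                  [np.sum(x * y), np.sum(y * y), np.sum(y)],
                  [np.sum(x),     np.sum(y),     len(x)  ]])
    b = np.array([np.sum(x * z), np.sum(y * z), np.sum(z)])
    A, B, C = np.linalg.solve(M, b)
    return A, B, C

# Usage: A, B, C = fit_plane(corner_points_world), where corner_points_world
# holds the chessboard corners reconstructed with Equation (1).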

2.2.2. Calculate Light Plane

After obtaining the chessboard calibration plate plane, we project the light plane onto the chessboard area of the plate, as shown in Figure 3. The number of stripes in the structured light image should be chosen according to the object to be projected onto, and the distance between the camera and the projector according to the distance between the object and the device. We also set an angle between the axes of the camera and the projector so that the light stripes projected by the projector are centered in the camera frame; this angle is not a fixed value and depends on the specific circumstances of the experiment, as long as the stripes remain centered in the frame. The stripes appearing on the plate are the intersection lines of the chessboard calibration plate plane and the light plane. We use an improved Otsu method to separate the image background from these lines. The Otsu method [24] is an efficient algorithm for image binarization proposed by the Japanese scholar Otsu in 1979; it uses the maximum between-class variance to divide the original image into foreground and background. However, due to factors such as the material, color, and surface smoothness of the calibration plate, the reflectivity of the plate varies, which can easily lead to uneven brightness of the stripes on the plate, as shown in Figure 4a. Using the Otsu method directly may assign the astigmatism region to the foreground, as shown in Figure 4b. Therefore, we apply histogram equalization and the Otsu method to the image and detect the gray center of each pixel row. We then delete the rows in which the number of gray centers does not match the number of stripes. Because we are using structured light, we can estimate the pixel distances between the stripes, divide the original image into several windows, and apply the Otsu method in each window to solve the light scattering problem. The resulting binary image has only two grayscale values, 0 and 255, which we use to find the boundaries of the light stripes and obtain the gray centers of the light plane.
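The per-window binarization described above can be sketched as follows with OpenCV; the window width, taken from the estimated stripe spacing, is an assumed parameter, and this is only one plausible realization of the step.

import cv2
import numpy as np

def windowed_otsu(gray, window_width):
    """Binarize a stripe image by applying Otsu's threshold in each vertical window.

    Thresholding each window separately limits the effect of uneven reflectivity
    across the calibration plate, as described in Section 2.2.2.
    """
    gray = cv2.equalizeHist(gray)                 # histogram equalization first
    binary = np.zeros_like(gray)
    for col in range(0, gray.shape[1], window_width):
        window = gray[:, col:col + window_width]
        _, bw = cv2.threshold(window, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binary[:, col:col + window_width] = bw
    return binary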
After obtaining the gray centers of the light plane, we use the improved segment–facet intersection method to calculate the world coordinates of these gray centers. The improved segment–facet intersection method [25] was proposed by Huijun Wang et al., who showed that the point coordinates in world coordinates can be calculated using Equations (5) and (6), where (Xw, Yw, Zw) are the point coordinates in world coordinates, A, B, and C are the coefficients of the plane equation Ax + By + C = z, matrix I is the camera intrinsic matrix, and (u, v) are the point coordinates in pixel coordinates. Then, we rotate and translate the calibration plate to obtain several groups of gray centers, assign those points to sets that belong to the same light plane, and calibrate each light plane with the least squares method. The pseudo-code of the light plane calibration is shown in Algorithm 1.
$$
\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix}
= I^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\tag{5}
$$

$$
X_w = t \cdot X_n, \quad
Y_w = t \cdot Y_n, \quad
Z_w = t, \quad
t = \frac{C}{A \cdot X_n + B \cdot Y_n - 1}
\tag{6}
$$
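For reference, a minimal sketch of this back-projection step, written directly from Equations (5) and (6) under the assumption that the plane coefficients and intrinsic matrix are already available, could look like this (function name is illustrative):

import numpy as np

def pixel_to_world(u, v, I, plane):
    """Back-project pixel (u, v) onto the plane A*x + B*y + C = z (Eqs. 5-6)."""
    A, B, C = plane
    # Eq. (5): normalized camera ray; Zn is 1 when I is a standard intrinsic matrix.
    Xn, Yn, Zn = np.linalg.inv(I) @ np.array([u, v, 1.0])
    # Eq. (6): scale along the ray, signs follow the reconstructed formula above.
    t = C / (A * Xn + B * Yn - 1.0)
    return np.array([t * Xn, t * Yn, t])   # (Xw, Yw, Zw)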
Algorithm 1: Light Plane Calibration Algorithm.
Input: camera intrinsic matrix, camera extrinsic matrix, images, chessboard corner points
Initialize: light plane, gray center
  for all images do
    Calculate the corner point coordinates in the world coordinate system using Equation (1);
    Plane equation ← corner points and least squares method;
    Pixel distances between stripes ← Otsu method;
    Divide the image into different windows;
    for all windows do
      Otsu method;
    end for
    Boundaries of light stripes ← grayscale values;
    gray_center = (left_boundary + right_boundary) / 2;
    Gray-center coordinates ← the improved segment–facet intersection method;
  end for
  Light plane ← gray centers and least squares method;
Output: light plane.

2.3. Measurement of Animal Body Parameters

2.3.1. Gray-Center Detection

As shown in Figure 5, after the calibration of the light plane, the structured light is projected onto the surface of the animal to obtain the distortion or fracture information of the light stripes. When finding the gray centers, the RGB images are converted to grayscale images. Figure 6a is the original image in which we projected the structured light onto a fox specimen, and the grayscale values of the 900th pixel row of this image are shown in Figure 6b. The grayscale distribution of a light stripe should ideally follow Figure 6c, and the task is to find the position of the peak value (red point). However, as shown in Figure 6d, scattered points appear in the grayscale distribution. The reason is that light stripes may be lost or fractured because of the irregular shape of the animal body, while light scattering may also occur on animal fur. In addition, the reflectivity is not identical in different areas due to the complex colors of animal fur, which leads to an abnormal distribution such as that seen in Figure 6e.
To avoid the impact of light scattering while retrieving gray centers, this study proposes a method based on the average grayscale in sliding windows and the between-group variance. First, to remove the interference of environmental light, the image is binarized; we then initialize a left window and a right window of pixels and calculate the mean value of each window as well as the between-group variance between the two windows. A flag is set to record whether the scan is currently inside a stripe. Second, we traverse each row of the image, sliding the two windows along the row. If the mean value of the left window is smaller than that of the right window and the between-group variance is greater than a certain value, the current pixel is the starting point of a light stripe; it is recorded as the left boundary, and the flag is switched. We continue traversing until we find a pixel at which the mean of the left window is larger than that of the right window and the between-group variance is again greater than the threshold; this pixel is recorded as the right boundary of the light stripe. However, because binarization is used to remove the influence of environmental light, the left boundary found in this pass deviates significantly, so we mirror the image and re-traverse it; the right boundaries found on the mirrored image are mapped back to the coordinates of the original image to obtain the left boundaries. Eventually, the gray center of each light stripe is obtained by averaging its left and right boundaries. For the pseudo-code, see Algorithm 2; a brief implementation sketch of the forward pass follows it.
Algorithm 2: Animal Surface Gray-Center Detection Algorithm.
Input: image, sliding window size, binarization threshold, inter-group variance ratio threshold
Initialize: left window, right window, flag, left boundary, right boundary, temp boundary, gray center
  Convert the image to grayscale;
  Binarize the image;
  for i = 1 : image.rows do
    for j = 1 : image.cols do
      Calculate the average gray value of the left window and the right window;
      Calculate the inter-group variance ratio between the two windows;
      if flag == false and left average < right average and inter-group variance ratio > threshold then
        flag = true;
      end if
      if flag == true and left average > right average and inter-group variance ratio > threshold then
        Right boundary.push_back(pixel_ij);
        flag = false;
      end if
      Update left window, right window;
      Update the average gray values of the left window and the right window;
      Update the inter-group variance ratio between the two windows;
    end for
  end for
  Mirror the image and repeat the scan above to obtain temp boundary;
  Left boundary = image.cols − temp boundary;
  Gray center point = (left boundary + right boundary) / 2;
Output: gray center point matrix.
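Under the assumption that the between-group variance is normalized as a ratio in [0, 1], a minimal Python sketch of the forward (right-boundary) pass of Algorithm 2 might look as follows; the window size and threshold are illustrative defaults rather than the authors' values.

import numpy as np

def right_boundaries(binary, win=5, var_ratio_thr=0.5):
    """Scan each row with a left/right sliding window and return, per row,
    the columns where a stripe ends (forward pass of Algorithm 2)."""
    bounds = []
    for row in binary:
        cols, inside = [], False
        for j in range(win, len(row) - win):
            left = row[j - win:j].astype(float)
            right = row[j:j + win].astype(float)
            mu_l, mu_r = left.mean(), right.mean()
            # Assumed normalization: squared mean difference scaled to [0, 1].
            var_between = (mu_l - mu_r) ** 2 / (255.0 ** 2)
            if not inside and mu_l < mu_r and var_between > var_ratio_thr:
                inside = True                  # entering a stripe
            elif inside and mu_l > mu_r and var_between > var_ratio_thr:
                cols.append(j)                 # leaving a stripe: right boundary
                inside = False
        bounds.append(cols)
    return bounds

# The left boundaries come from the same scan on the mirrored image, mapped back
# with left = width - 1 - mirrored_right; gray centers are then the averages of
# the matched left/right boundaries, as in the last steps of Algorithm 2.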

2.3.2. Denoising and Matching of Gray Center Point

Algorithm 2 can address the problem shown in Figure 6d, but it cannot resolve the light astigmatism caused by animal fur shown in Figure 6e, which requires a denoising algorithm. The astigmatic points caused by fur have an uncertain distribution and, unlike the gray centers that cluster densely within light stripes, they appear in small numbers. We therefore propose a denoising algorithm based on the local density of gray centers in the neighborhood of each point. The main idea is to traverse all gray centers, calculate the concentration of gray centers within a certain range, and remove any point whose concentration is below a certain value as noise. We calculate the average stripe width from the left and right boundaries in Algorithm 2 and, for each point, create a window whose numbers of columns and rows equal the average stripe width. We then count the gray-center points in this window; if the count is less than the stripe width, the point is treated as an outlier and discarded.
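A minimal NumPy sketch of this density check, assuming the centers are given as (row, column) pairs and the average stripe width is measured in pixels, is shown below.

import numpy as np

def denoise_centers(centers, stripe_width):
    """Drop gray-center points whose square neighborhood (side = average
    stripe width) contains fewer centers than the stripe width itself."""
    pts = np.asarray(centers, dtype=float)      # N x 2 array of (row, col)
    kept = []
    half = stripe_width / 2.0
    for p in pts:
        # Count centers (including the point itself) inside the window around p.
        in_window = np.sum(np.all(np.abs(pts - p) <= half, axis=1))
        if in_window >= stripe_width:
            kept.append(p)
    return np.array(kept)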
After denoising, the gray centers must be matched to the sequence numbers of their corresponding light planes. Therefore, this study proposes a new matching algorithm that considers the orientation trend of the light stripes. First, we traverse the rows of the gray-center matrix and take the first row whose number of centers equals the number of light planes as the main row, using the abscissa of each of its points as the initial reference. Then, we traverse upward and downward, respectively, and in each row match the point with the smallest difference in abscissa from the reference point to the same light stripe. While traversing up and down, we record the offset between each row and the main row and update the coordinates of the reference points used for matching. This is necessary because the light stripes are close to one another and their shapes are complex; if the reference points were not updated, adjacent light stripes would be matched across each other. Using the average offset recorded earlier as the direction of the light stripes to update the reference points effectively avoids this problem of matching intersecting light stripes. The pseudo-code for denoising and matching the gray centers is shown in Algorithm 3.
Algorithm 3: Animal Surface Gray-Center Denoising and Matching Algorithm.
Input: gray center point matrix, average stripe width, the number of light planes.
Initialize: main center line.
  for each point ∈ gray center point matrix do
    Create the window whose cols and rows = average stripe width;
    Count the gray center points in the window;
    if count < average stripe width then
      Delete the point from the gray center points;
    end if
  end for
  for col : gray center point matrix.cols do
    if col.size() == number of light planes then
      main center line = col;
    end if
  end for
  for i = position of main center line − 1; i > 0; i−− do
    for j = points : cols[i] do
      Find the point nearest to j in the main center line;
      Match the points as the same light stripe;
      Update the main center line;
    end for
  end for
  for i = position of main center line + 1; i < gray center point matrix.size(); i++ do
    for j = points : cols[i] do
      Find the point nearest to j in the main center line;
      Match the points as the same light stripe;
      Update the main center line;
    end for
  end for
Output: gray center point.
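As a simplified sketch of the matching stage (it updates each reference column with the last matched center rather than an averaged offset, and assumes at least one row contains exactly one center per light plane), the following Python function assigns a light-plane index to every center.

import numpy as np

def match_stripes(rows, n_planes):
    """Assign each gray center to a light-plane index, loosely following Algorithm 3.

    `rows` is a list (one entry per image row) of sorted center columns.
    The first row containing exactly n_planes centers serves as the reference;
    rows above and below are matched to the nearest, continually updated,
    reference columns.
    """
    main = next(i for i, r in enumerate(rows) if len(r) == n_planes)
    labels = {main: list(range(n_planes))}
    for step in (-1, 1):                      # traverse upward, then downward
        ref = np.array(rows[main], dtype=float)
        i = main + step
        while 0 <= i < len(rows):
            lab = [int(np.argmin(np.abs(ref - c))) for c in rows[i]]
            labels[i] = lab
            for c, k in zip(rows[i], lab):    # update the reference columns
                ref[k] = c
            i += step
    return labels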

2.3.3. Calculating the Body Size Parameters

Once the gray centers have been matched to the light planes, the improved segment–facet intersection method is used to obtain the 3D point cloud of the animal. Because the point cloud is grouped according to light stripes, the animal body height can be calculated automatically from the highest and lowest points of each group. We also filter out the overlapping portions of the first and last groups of points in the height direction and calculate their average distance as the animal's body length. In addition, we built an interactive interface for selecting feature points on the point cloud to calculate other body size parameters.
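The following sketch illustrates one way these two automatic measurements could be computed from the grouped point cloud, assuming the world Z axis points along the animal's height; averaging the per-stripe spans and using the centroids of the overlapping portions are illustrative choices, not necessarily the authors' exact procedure.

import numpy as np

def body_height_and_length(stripe_clouds):
    """stripe_clouds: list of N x 3 arrays, one per light stripe, in world units."""
    # Height: vertical span of each stripe group, averaged over groups.
    heights = [c[:, 2].max() - c[:, 2].min() for c in stripe_clouds]
    height = float(np.mean(heights))

    # Length: distance between the first and last stripes, restricted to their
    # overlapping height range (assumed non-empty here).
    first, last = stripe_clouds[0], stripe_clouds[-1]
    lo = max(first[:, 2].min(), last[:, 2].min())
    hi = min(first[:, 2].max(), last[:, 2].max())
    f = first[(first[:, 2] >= lo) & (first[:, 2] <= hi)]
    l = last[(last[:, 2] >= lo) & (last[:, 2] <= hi)]
    length = float(np.linalg.norm(f.mean(axis=0) - l.mean(axis=0)))
    return height, length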

3. Results

In this section, we introduce and discuss the non-contact monocular scanning equipment we developed and the results of the light plane calibration. To verify the measurement accuracy of the equipment, we also conducted error evaluations on the camera, the light plane, and the body measurement results. The experiments were conducted at the laboratory of the College of Information Engineering, Northwest A&F University, and at the Xi'an Yangling Agriculture Expo Park in China. We performed experiments on live animals, such as rabbits, and on animal specimens, such as foxes and goats. We used a depth camera and a 3D scanner based on binocular vision to collect point clouds for comparison, and measuring tapes to measure the animal body parameters manually. This study used two acrylic boards, a Hikvision MV-CA060-11GM industrial camera (Hangzhou, China), a HOTACK D013D projector (Guangdong, China), and a tripod to mount the equipment, as shown in Figure 7. The depth camera was a Kinect for Windows V2 (Redmond, WA, USA), and the 3D scanner was an All-Weather Broadleaf Crop 3D Scanner (Xianyang, China). The device parameters are shown in Table 2, Table 3, Table 4 and Table 5.
The experiment results for the light plane calibration are shown in Figure 8. In this experiment, we used a structured light image with forty stripes. We projected the light plane onto the chessboard area in Figure 8a, and Figure 8b shows the gray-center detection result; the white lines in the image are the detected gray centers. The chessboard corner points calculated in the world coordinate system are shown in Figure 8c, and the calibrated light plane is shown in Figure 8d.
The experiment on live rabbits was conducted at the laboratory of the College of Information Engineering, Northwest A&F University, on 25 June 2023. We chose rabbits because they are docile and are not startled when exposed to light, and because their short limbs and small bodies make them representative of small-sized animals; their white fur also has good reflectivity and little scattering. The experiment on the animal specimens was conducted at the Xi'an Yangling Agriculture Expo Park in China on 13 July 2023. We chose the fox specimen because its long hair tests the device's accuracy under severe scattering conditions, its curled-up body tests the accuracy under large body deformation, and the strong color variation of its fur tests the accuracy on complex animal surfaces. We also chose the goat specimen because its large body size and irregular shape lead to more stripe fractures, testing the robustness of the device.
The experiment results for the live rabbit are shown in Figure 9, where a structured light image with 50 stripes was used. Figure 9a shows the live rabbit, and Figure 9b is the image of the rabbit onto which we projected the structured light. Figure 9c shows the gray-center detection result, in which the stripe boundaries appear as white lines and the gray centers as black lines. Figure 9d shows the denoising result, with the gray centers displayed as black dots. Figure 9e is the rabbit body point cloud obtained using our equipment, and Figure 9f is the rabbit body point cloud obtained via the depth camera.
The experiment results for the fox and goat specimens are shown in Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14. A structured light image with 40 stripes was used in these two experiments. We automatically calculated the body length and width from the point clouds of the three animals and used the interactive interface to measure the fox tail length and the goat neck width. Figure 10a,c show the fronts of the animal specimens, and Figure 10b,d show their backs. The images illuminated by structured light from the front are shown in Figure 11a and Figure 12a, and from the back in Figure 11b and Figure 12b. The gray-center detection results from the front are shown in Figure 11c and Figure 12c, and from the back in Figure 11d and Figure 12d; the stripe boundaries appear as white lines and the gray centers as black lines. The denoising results from the front are shown in Figure 11e and Figure 12e, and from the back in Figure 11f and Figure 12f, with the gray centers displayed as black dots. Figure 13a and Figure 14a are the front body point clouds of the specimens obtained via our equipment, and Figure 13b and Figure 14b are the back body point clouds obtained via our equipment. Figure 13c and Figure 14c are the front body point clouds obtained via the depth camera, and Figure 13d and Figure 14d are the back body point clouds obtained via the depth camera. Figure 13e and Figure 14e are the front body point clouds obtained via the 3D scanner, and Figure 13f is the back body point cloud of the fox specimen obtained via the 3D scanner.
The experiment results for the error evaluations are shown in Figure 15. Different numbers of images were used in the calibration stage to show how the camera and light plane accuracy affect the measuring precision. For the camera, the parameter used for error evaluation is the reprojection error, which indicates the difference between the pixels captured directly and the pixels calculated using Equation (1). The reprojection error can be calculated using Equation (7), where (ui, vi)T is the pixel captured directly, (u'i, v'i)T is the pixel calculated using Equation (1), and nc is the number of calibration points. For the light plane, all light plane equations were first transformed into the form Ax + By − z = −C and combined into a linear system Ax = b. We then solved this system using the least squares method and substituted the least squares solution back into the system to calculate a new non-homogeneous term. Using the original non-homogeneous term as a benchmark, we calculated the standard deviation, which is regarded as the error evaluation parameter for the light plane. This standard deviation represents the fluctuation of the intersection points of all light planes: if it is smaller than the size of the DMD, all light planes can intersect on the projector's DMD, indicating that the calibration of the light planes is relatively accurate. The algorithm is shown in Algorithm 4. For the measuring precision, the difference between the manual measurement result and our equipment's result was used to represent the measuring accuracy, including absolute error and relative error. The camera calibration error and light plane calibration error are shown in Figure 15a,b, from which we can conclude that the calibration accuracy of both the camera and the light plane improves as the number of calibration images increases. Figure 15c,d show the variation of the measuring error of the goat specimen's body length with an increasing number of calibration images. The error evaluation results show that as the calibration accuracy improves, the measurement error gradually decreases and converges.
$$
E_1 = \frac{1}{n_c} \sum_{i=1}^{n_c} \sqrt{ (u_i - u_i')^2 + (v_i - v_i')^2 }
\tag{7}
$$
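Equation (7) amounts to the mean Euclidean distance between detected and reprojected pixels; a small NumPy sketch (function name and array layout assumed) is given below.

import numpy as np

def reprojection_error(observed, reprojected):
    """Mean Euclidean distance between detected and reprojected pixels (Eq. 7)."""
    observed = np.asarray(observed, dtype=float)       # n_c x 2, captured (u, v)
    reprojected = np.asarray(reprojected, dtype=float) # n_c x 2, from Equation (1)
    return float(np.mean(np.linalg.norm(observed - reprojected, axis=1)))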
Algorithm 4: Light Plane Error Evaluation Algorithm.
Input: light plane calibration results
Initialize: light plane equations
  for each light plane equation do
    Transform it into the form Ax + By − z = −C;
  end for
  Combine all equations as Ax = b;
  Calculate the least squares solution x̂;
  b̂ ← A x̂;
  n ← number of light plane equations;
  E2 ← (1 / (n + 1)) · ‖b̂ − b‖₂;
Output: light plane error E2.
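A compact sketch of Algorithm 4 in NumPy follows; the normalization by (n + 1) mirrors the reconstructed formula above, and the plane parameterization (A, B, C) with Ax + By + C = z is the one used throughout this section.

import numpy as np

def light_plane_error(planes):
    """Evaluate the light plane calibration, loosely following Algorithm 4.

    `planes` is an M x 3 array of (A, B, C) coefficients with A*x + B*y + C = z.
    Each plane is rewritten as A*x + B*y - z = -C, stacked as A_mat @ x = b,
    solved in the least squares sense, and the deviation of the refitted
    right-hand side from the original one is reported as E2.
    """
    planes = np.asarray(planes, dtype=float)
    A_mat = np.column_stack([planes[:, 0], planes[:, 1], -np.ones(len(planes))])
    b = -planes[:, 2]
    x_hat, *_ = np.linalg.lstsq(A_mat, b, rcond=None)  # common intersection point
    b_hat = A_mat @ x_hat
    n = len(planes)
    return float(np.linalg.norm(b_hat - b) / (n + 1))  # E2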

4. Discussion

The parameters calculated from the live rabbit body point cloud are shown in Table 6. Compared with the manual measurement, the average margin of error of our equipment is 4.9%, with a maximum of 5.5% and a minimum of 4.3%. The average margin of error of the depth camera is 13.8%, with a maximum of 18.7% and a minimum of 8.9%. Because the 3D scanner is not instantaneous, it cannot obtain the body point cloud of a live rabbit.
The parameters calculated from the fox specimen body point cloud are shown in Table 6. The average margin of error of our equipment is 2.7%, with a maximum of 4.0% and a minimum of 1.1%. The average margin of error of the depth camera is 9.3%, with a maximum of 12.1% and a minimum of 7.6%. The average margin of error of the 3D scanner is 4.4%, with a maximum of 9.4% and a minimum of 1.0%. The parameters calculated from the goat specimen body point cloud are also shown in Table 6. The average margin of error of our equipment is 1.3%, with a maximum of 2.0% and a minimum of 0.8%. The average margin of error of the depth camera is 14.1%, with a maximum of 26.3% and a minimum of 1.6%. The average margin of error of the 3D scanner is 8.8%, with a maximum of 16.1% and a minimum of 1.2%.
The results of the animal body parameter measurements show that our algorithm successfully alleviates the problem of light scattering on animal fur and is both instantaneous and robust, and that our equipment achieves better accuracy than the depth camera and the 3D scanner on these three animals. The parameter results show that all three types of equipment are highly accurate in determining body width and in measuring the middle-sized fox specimen. Nevertheless, for a large animal such as the goat, the drawbacks of the depth camera and the 3D scanner become obvious, while our equipment maintains nearly the same accuracy. Our equipment is also smaller and more accessible than the depth camera and the 3D scanner.
In addition, we compared our method with several methods proposed in the literature. To better control variables, we compared against methods applied to animals with similar body sizes and measurement objectives and therefore chose the body size data of the goat specimen for comparison. The results are shown in Table 7. The data show that our method achieves a relative measurement error that is lower than or comparable to those of the methods in the literature.

5. Conclusions

This study proposes a new, instantaneous, accurate method suitable for multiple animals to obtain the animal body point cloud and measure body parameters. We propose a new algorithm to combine camera calibration with light plane calibration and use multi-striped structured light images projected on animals. Moreover, we propose a new algorithm to detect, denoise, and match the gray center to obtain the 3D point cloud with high accuracy, from which we can automatically calculate the body length and width and can design an interactive interface for other parameters. We also conducted experiments on multiple animals to validate the robustness of our equipment. However, our equipment still has some limitations, such as weak resistance to ambient light interference and errors in body size measurement, which need to be addressed in further work.

Author Contributions

Conceptualization, N.G.; methodology, F.X.; software, F.X.; validation, F.X., Z.Z. and Y.Z.; investigation, F.X. and Z.Z.; data curation, F.X.; writing—original draft preparation, F.X.; writing—review and editing, F.X. and Y.Z.; supervision, N.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The animal study protocol was approved by the Ethics Committee of Northwest A&F University Institutional Animal Care and Use Committee (protocol code: XN2024-0103; date of approval: 28 June 2022).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author (Nan Geng: [email protected], College of Information Engineering, Northwest A&F University, Yangling 712100, Shaanxi, China).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Marinello, F.; Pezzuolo, A.; Cillis, D.; Gasparini, F.; Sartori, L. Application of Kinect-Sensor for three-dimensional body measurements of cows. In Proceedings of the 7th European Precision Livestock Farming, ECPLF, Milan, Italy, 15–18 September 2015. [Google Scholar]
  2. Ruchay, A.; Dorofeev, K.; Kalschikov, V.; Kolpakov, V.I.; Dzhulamanov, K.M. A depth camera-based system for automatic measurement of live cattle body parameters. In Earth and Environmental Science; IOP Publishing: Bristol, UK, 2019; Volume 341. [Google Scholar]
  3. Ruchay, A.; Kober, V.; Dorofeev, K.; Kolpakov, V.; Miroshnikov, S. Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery. Comput. Electron. Agric. 2020, 179, 105821. [Google Scholar] [CrossRef]
  4. Shi, S.; Yin, L.; Liang, S.; Zhong, H.; Tian, X.; Liu, C.; Sun, A.; Liu, H. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view RGB-D cameras. Comput. Electron. Agric. 2020, 175, 105543. [Google Scholar] [CrossRef]
  5. Du, A.; Guo, H.; Lu, J.; Su, Y.; Ma, Q.; Ruchay, A.; Marinello, F.; Pezzuolo, A. Automatic livestock body measurement based on keypoint detection with multiple depth cameras. Comput. Electron. Agric. 2022, 198, 107059. [Google Scholar] [CrossRef]
  6. Salau, J.; Haas, J.H.; Junge, W.; Bauer, U.; Harms, J.; Bieletzki, S. Feasibility of automated body trait determination using the SR4K time-of-flight camera in cow barns. SpringerPlus 2014, 3, 225. [Google Scholar] [CrossRef] [PubMed]
  7. Zuffi, S.; Kanazawa, A.; Black, M.J. Lions and tigers and bears: Capturing non-rigid, 3d, articulated shape from images. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3955–3963. [Google Scholar]
  8. Wang, Y.; Kolotouros, N.; Daniilidis, K.; Badger, M. Birds of a Feather: Capturing Avian Shape Models from Images. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14739–14749. [Google Scholar]
  9. Rueegg, N.; Zuffi, S.; Schindler, K.; Black, M.J. BARC: Learning to regress 3d dog shape from images by exploiting breed information. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 3876–3884. [Google Scholar]
  10. Wu, S.; Li, R.; Jakab, T.; Rupprecht, C.; Vedaldi, A. MagicPony: Learning Articulated 3D Animals in the Wild. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 8792–8802. [Google Scholar]
  11. Rocchini, C.; Cignoni, P.; Montani, C.; Pingi, P.; Scopigno, R. A low cost 3D scanner based on structured light. In Computer Graphics Forum; Blackwell Publishers Ltd.: Oxford, UK; Boston, MA, USA, 2001; Volume 20, pp. 299–308. [Google Scholar]
  12. Georgopoulos, A.; Ioannidis, C.; Valanis, A. Assessing the performance of a structured light scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38 Pt 5, 251–255. [Google Scholar]
  13. Xiao, Y.L.; Wen, Y.; Li, S.; Zhang, Q.; Zhong, J. Large-scale structured light 3D shape measurement with reverse photography. Opt. Lasers Eng. 2020, 130, 106086. [Google Scholar] [CrossRef]
  14. Guo, X.; Shi, Z.; Yu, B.; Zhao, B.; Li, K.; Sun, Y. 3D measurement of gears based on a line structured light sensor. Precis. Eng. 2020, 61, 160–169. [Google Scholar] [CrossRef]
  15. Niven, L.; Steele, T.E.; Finke, H.; Gernat, T.; Hublin, J.-J. Virtual skeletons: Using a structured light scanner to create a 3D faunal comparative collection. J. Archaeol. Sci. 2009, 36, 2018–2023. [Google Scholar] [CrossRef]
  16. Le Cozler, Y.; Allain, C.; Caillot, A.; Delouard, J.; Delattre, L.; Luginbuhl, T.; Faverdin, P. High-precision scanning system for complete 3D cow body shape imaging and analysis of morphological traits. Comput. Electron. Agric. 2019, 157, 447–453. [Google Scholar] [CrossRef]
  17. Qiao, Y.; Wang, Y.; Ning, J.; Peng, L. Calibration of line structured light senor based on active vision systems. In Proceedings of the 2013 International Conference on Computational Problem-Solving (ICCP), Jiuzhai, China, 26–28 October 2013; pp. 249–252. [Google Scholar]
  18. Liu, Z.; Li, X.; Li, F.; Zhang, G. Calibration method for line-structured light vision sensor based on a single ball target. Opt. Lasers Eng. 2015, 69, 20–28. [Google Scholar] [CrossRef]
  19. Shao, M.; Dong, J.; Madessa, A.H. A new calibration method for line-structured light vision sensors based on concentric circle feature. J. Eur. Opt. Soc.-Rapid Publ. 2019, 15, 1–11. [Google Scholar] [CrossRef]
  20. Sun, J.; Ding, D.; Cheng, X.; Zhou, F.; Zhang, J. Calibration of line-structured light vision sensor based on freeplaced single cylindrical target. Opt. Lasers Eng. 2022, 152, 106951. [Google Scholar] [CrossRef]
  21. Wei, Z.; Li, C.; Ding, B. Line structured light vision sensor calibration using parallel straight lines features. Optik 2014, 125, 4990–4997. [Google Scholar] [CrossRef]
  22. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  23. Abdi, H. The method of least squares. Encycl. Meas. Stat. 2007, 1, 530–532. [Google Scholar]
  24. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  25. Hui, J.W. Development of Three-Dimensional Shape Information Acquisition System for Plant Leaves; Northwest A&F University: Yangling, China, 2019; pp. 29–30. [Google Scholar]
  26. Guo, H.; Ma, Q.; Zhang, S.; Su, W.; Zhu, D.; Gao, Y. Prototype System of Shape Measurements of Animal Based on 3D Reconstruction. Trans. Chin. Soc. Agric. Mach. 2014, 45, 227–232. [Google Scholar]
Figure 1. Method Procedure.
Figure 2. The line-structured light 3D Measurement perspective projection model.
Figure 3. Projecting the light plane onto the chessboard area of the calibration plate.
Figure 4. (a) Original light stripe; (b) result of directly applying the Otsu method for image binarization.
Figure 5. Projecting the structured light onto the surface of the animal.
Figure 6. (a) Original image; (b) the gray scale distribution chart; (c) the gray scale distribution under ideal condition; (d) the gray scale distribution influenced by light scattering; (e) the gray scale distribution influenced by animal fur.
Figure 7. (a) Acrylic board A; (b) our equipment; (c) projector and camera; (d) Acrylic board B.
Figure 8. (a) Original light stripes; (b) gray center detection result; (c) chessboard corner points under world coordinate; (d) light plane.
Figure 9. (a) Live rabbit; (b) structured light projecting result; (c) gray-center detection result; (d) gray-center denoise result; (e) rabbit body point cloud obtained using our equipment; (f) rabbit body point cloud obtained via depth camera.
Figure 10. (a) Goat specimen front; (b) goat specimen back; (c) fox specimen front; (d) fox specimen back.
Figure 11. (a) Structured light project result (front); (b) structured light project result (back); (c) gray-center detection result (front); (d) gray-center detection result (back); (e) gray-center denoise result (front); (f) gray-center denoise result (back).
Figure 12. (a) Structured light project result (front); (b) structured light project result (back); (c) gray center detection result (front); (d) gray center detection result (back); (e) gray center denoise result (front); (f) gray center denoise result (back).
Figure 13. (a) Fox specimen front body point cloud obtained via our equipment; (b) fox specimen back body point cloud obtained via our equipment; (c) fox specimen front body point cloud obtained via depth camera; (d) fox specimen back body point cloud obtained via depth camera; (e) fox specimen front body point cloud obtained via 3D scanner; (f) fox specimen back body point cloud obtained via 3D scanner.
Figure 14. (a) Goat specimen front body point cloud obtained via our equipment; (b) goat specimen back body point cloud obtained via our equipment; (c) goat specimen front body point cloud obtained via depth camera; (d) goat specimen back body point cloud obtained via depth camera; (e) goat specimen front body point cloud obtained via 3D Scanner.
Figure 15. (a) Camera calibration error; (b) light plane error; (c) absolute measuring error; (d) relative measuring error.
Table 1. Parameters Introduction.

Parameter | Introduction
(u, v) | pixel point coordinates in the image pixel coordinate system
(Xw, Yw, Zw) | point coordinates in the world coordinate system
(Xc, Yc, Zc) | point coordinates in the camera coordinate system
I | camera intrinsic matrix
[R, t] | camera extrinsic matrix
R | rotation matrix
t | translation vector
d | distortion coefficient
Ax + By + C = z | plane equation
(xi, yi, zi) | chessboard corner point coordinates in the world coordinate system
S | sum of squared deviations of the corner points from the chessboard calibration pattern plane
n | the number of points
Oc | camera optical center
Table 2. Parameters of the Hikvision MV-CA060-11GM industrial camera.

Parameter | Value
type | MV-CA060-11GM
sensor model | CMOS, rolling shutter
sensor type | IMX178
pixel size | 2.4 μm × 2.4 μm
target size | 1/1.8″
resolution | 3072 × 2048
maximum frame rate | 17 fps
dynamic range | 71.3 dB
signal-to-noise ratio | 41.3 dB
exposure time | 27 μs~2.5 s
pixel format | Mono 8/10/10p/12/12p
cache capacity | 128 MB
size | 29 mm × 29 mm × 42 mm
weight | 68 g
operating system | Windows XP/7/10 32/64 bits, Linux 32/64 bits, MacOS 64 bits
protocol standards | GigE Vision V2.0, GenICam
authentication | CE, FCC, RoHS, KC
Table 3. Parameters of the HOTACK D013D projector.

Parameter | Value
type | D013D
Android version | 7.1.2
core version | 3.10.104, Tue Aug 31 14:02:06 CST 2021
version number | UIV 202111111.144855
serial number | WW1615202109300045
total memory | 1.9 GB
DMD size | 0.2″ (5.08 mm)
Table 4. Parameters of Kinect for Windows V2.

Parameter | Value
type | Kinect for Windows V2
color resolution | 1920 × 1080
color frame rate | 30 fps
depth resolution | 512 × 424
depth frame rate | 30 fps
player | 6
skeleton | 6
joint | 25
range of detection | 0.5~4.5 m
FOV | 70° × 60°
Table 5. Parameters of the All-Weather Broadleaf Crop 3D Scanner.

Parameter | Value
type | All-Weather Broadleaf Crop 3D Scanner
supply voltage | 220 V
size | 500 × 330 × 142 mm
scan accuracy | <0.5 mm
scan speed | ≥10,000 p/s
maximum scanning time | ≤400 s
Table 6. Animal body parameter measurement results.

Animal | Parameter | Body Length | Body Width | Tail Length | Neck Width
Live Rabbit | manual measurement (cm) | 29.1 | 13.9 | - | -
 | our equipment (cm) | 27.5 | 13.3 | - | -
 | error (cm) | 1.6 | 0.6 | - | -
 | error (%) | 5.5 | 4.3 | - | -
 | depth camera (cm) | 31.7 | 11.3 | - | -
 | error (cm) | 2.6 | 2.6 | - | -
 | error (%) | 8.9 | 18.7 | - | -
Goat Specimen | manual measurement (cm) | 54.2 | 24.9 | - | 9.9
 | our equipment (cm) | 54.8 | 24.7 | - | 10.1
 | error (cm) | 0.6 | 0.2 | - | 0.2
 | error (%) | 1.1 | 0.8 | - | 2.0
 | depth camera (cm) | 62.0 | 24.5 | - | 7.3
 | error (cm) | 7.8 | 0.4 | - | 2.6
 | error (%) | 14.4 | 1.6 | - | 26.3
 | 3D scanner (cm) | 62.9 | 25.2 | - | 10.8
 | error (cm) | 8.7 | 0.3 | - | 0.9
 | error (%) | 16.1 | 1.2 | - | 9.1
Fox Specimen | manual measurement (cm) | 40.1 | 18.1 | 25.0 | -
 | our equipment (cm) | 38.9 | 17.9 | 24.0 | -
 | error (cm) | 1.2 | 0.2 | 1.0 | -
 | error (%) | 3.0 | 1.1 | 4.0 | -
 | depth camera (cm) | 43.4 | 15.9 | 26.9 | -
 | error (cm) | 3.3 | 2.2 | 1.9 | -
 | error (%) | 8.2 | 12.1 | 7.6 | -
 | 3D scanner (cm) | 39.7 | 16.4 | 25.7 | -
 | error (cm) | 0.4 | 1.7 | 0.7 | -
 | error (%) | 1.0 | 9.4 | 2.8 | -
Table 7. Method comparison results.

Method | Body Length Error (%) | Body Width Error (%)
Key-point detection [5] | 11.5 | 12.4
Depth camera-based system [2] | 2.0 | 5.6
Multi-view RGB-D camera [4] | 3.0 | 4.1
3D shape recovery [3] | 2.4 | 1.0
3D reconstruction [26] | 1.92 | 0.37
Our method | 1.1 | 0.8