Article

Utilizing 3D Point Cloud Technology with Deep Learning for Automated Measurement and Analysis of Dairy Cows

1 National Institute of Animal Science, Rural Development Administration, Cheonan 31000, Chungcheongnam-do, Republic of Korea
2 ZOOTOS Co., Ltd., R&D Center, Anyang 14118, Gyeonggi-do, Republic of Korea
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(3), 987; https://doi.org/10.3390/s24030987
Submission received: 22 November 2023 / Revised: 16 January 2024 / Accepted: 30 January 2024 / Published: 2 February 2024
(This article belongs to the Section Smart Agriculture)

Abstract

This paper introduces an approach to the automated measurement and analysis of dairy cows using 3D point cloud technology. The integration of advanced sensing techniques enables the collection of non-intrusive, precise data, facilitating comprehensive monitoring of key parameters related to the health, well-being, and productivity of dairy cows. The proposed system employs 3D imaging sensors to capture detailed information about various parts of dairy cows, generating accurate, high-resolution point clouds. A robust automated algorithm has been developed to process these point clouds and extract relevant metrics such as dairy cow stature height, rump width, rump angle, and front teat length. Based on the measured data combined with expert assessments of dairy cows, the quality indices of dairy cows are automatically evaluated and extracted. By leveraging this technology, dairy farmers can gain real-time insights into the health status of individual cows and the overall herd. Additionally, the automated analysis facilitates efficient management practices and optimizes feeding strategies and resource allocation. The results of field trials and validation studies demonstrate the effectiveness and reliability of the automated 3D point cloud approach in dairy farm environments. The errors between the manually measured values of dairy cow height, rump angle, and front teat length and those calculated by the auto-measurement algorithm were within 0.7 cm, and the automated errors did not exceed those of manual measurements. This research contributes to the burgeoning field of precision livestock farming, offering a technological solution that not only enhances productivity but also aligns with contemporary standards for sustainable and ethical animal husbandry practices.

1. Introduction

Modern agriculture is experiencing a transformative evolution, with advancements in sensor technologies playing a pivotal role in reshaping traditional farming practices. In the domain of livestock management, particularly in dairy farming, the integration of cutting-edge technologies offers unprecedented opportunities for optimizing productivity, ensuring animal welfare, and promoting sustainable practices. This paper introduces a pioneering approach to the automated measurement and analysis of dairy cows, leveraging the capabilities of 3D point cloud technology. The convergence of precision livestock farming and sensor-based methodologies promises a paradigm shift in how farmers monitor and manage their dairy herds.
Dairy farming, a cornerstone of the global agricultural landscape, confronts multifaceted challenges spanning productivity and resource optimization to animal health and welfare. Traditional methods of monitoring cows often rely on manual measurements and subjective observations, which can be time-consuming, labor-intensive, and prone to human error. In response to these challenges, the proposed system harnesses the power of 3D point cloud technology, offering a non-intrusive, accurate, and automated means of capturing detailed anatomical information.
The utilization of 3D imaging sensors enables the generation of high-resolution point clouds that provide a comprehensive representation of the physical characteristics of dairy cows. These point clouds serve as a rich source of data for the development of sophisticated algorithms aimed at extracting valuable metrics related to body dimensions, posture, and health indicators. By automating the measurement and analysis processes, the proposed system not only alleviates the burden on farmers but also opens new avenues for in-depth and continuous monitoring, enabling timely interventions and preventive measures.
The significance of this research extends beyond the realm of traditional livestock management. As the agricultural landscape embraces smart farming practices, the integration of cloud-based platforms for data storage and analysis becomes imperative. The scalability and accessibility afforded by cloud technologies empower farmers to remotely access real-time insights into the health and well-being of their dairy herds. This paper unfolds the conceptual framework, methodology, and potential implications of the automated 3D point cloud approach, representing a promising stride toward a technologically enriched and ethically sound future for dairy farming, detailed as shown in Figure 1.
Particularly, the contribution of this paper can be summarized as follows:
  • The multi-camera synchronization system helps minimize outliers and increase the accuracy of dairy cow 3D reconstruction, thereby avoiding factors that seriously affect the accuracy of dairy cow body size measurement.
  • Enhancements to the previous 3D reconstruction system result in more precise stitching between the bottom and top cameras based on the camera system’s initialization matrix.
  • Automatic measurement and analysis of various parts of dairy cows are conducted with high accuracy.
The remainder of this paper is organized as follows: Section 2 reviews automated measurement and analysis algorithms and their applications in dairy farming tasks. In Section 3, a reconstruction framework for improving dairy cow 3D point cloud quality is presented. In Section 4, the automated measurement and analysis of dairy cows via a 3D point cloud approach is proposed. Section 5 presents the experimental results and evaluations of the proposed approach on multiple datasets. Section 6 concludes this paper.

2. Related Works

The use of 3D imaging technology has found applications in diverse fields, including agriculture. Researchers have investigated its potential in crop monitoring, yield prediction, and now in livestock management. The ability of 3D imaging to provide detailed spatial information makes it a promising tool for capturing and analyzing the three-dimensional structure of dairy cows, presenting opportunities for accurate measurement and assessment.
Automated systems for monitoring livestock have evolved significantly, moving beyond traditional manual methods. Computer vision and machine learning techniques have been employed to automate the recognition of animal behaviors and health conditions. These studies often utilize 2D imaging systems, and the incorporation of 3D point cloud technology represents a novel extension to enhance the granularity and accuracy of data collection.
In [1], the authors implemented a method for automatically extracting measurements to estimate the weight of Nellore cattle based on regression algorithms using 2D images of the dorsal area. Additionally, the use of depth images with an algorithm for automatically estimating heifer height and body mass, as presented in [2], has demonstrated that single-view measurement methods relying on a single RGB or depth camera to evaluate body condition and body size traits still struggle to obtain multi-scale information such as chest girth, abdominal circumference, and rump angle.
Similar approaches for single-view based measurement problems have also been discussed in [3,4,5,6,7]. For the task of dairy cow 3D reconstruction, the farm environment and the movement of dairy cows significantly impact the resulting point cloud generated using multi-view methods, leading to the appearance of outliers and distortion in the dairy cow 3D point cloud. Consequently, the evaluation of body size introduces considerable errors [8]. By constructing a synchronized multi-camera system [9] during the point cloud generation process, our current approach has successfully minimized the occurrence of outliers, and the visualization of the dairy cow point cloud has been significantly improved.
On the other hand, the measurement of dairy cows is divided into two levels: manual and automatic measurement. The manual measurement method requires identifying measurement points on images and point clouds; the body size parameters are then calculated from the distances between the marked points. Automated measurement is achieved through manual or automatic filtering of input images or point clouds [10,11], followed by the automated measurement of animal body size.

3. The Dairy Cow 3D Reconstruction

Currently, to successfully reconstruct a dairy cow, it is crucial to ensure proper conditions on the farm, an effective data collection system, and a suitable state of the dairy cow at the time of data collection. The excessive movement of dairy cows during data collection for 3D reconstruction is a significant concern due to the adverse effects it introduces. Specifically, the RGBD-SLAM algorithm for dairy cow 3D reconstruction may encounter issues such as lost tracking during point cloud registration between fragments, leading to distorted 3D results and the emergence of numerous outliers.
To ensure the accuracy of the dairy cow 3D reconstruction system, an evaluation was conducted on a non-moving dairy cow model, as shown in Figure 2.
The error between the results measured manually on a dummy dairy cow and those measured on the 3D reconstruction is E = 2.1 cm. According to the depth quality specification in the Intel RealSense datasheet, the RMS error is at most 4 cm for objects within 2 m at 80% ROI. For the body length and other dimensions of the cow that lie within approximately 1 m of the camera, the error is only around 1 cm. Therefore, our 3D reconstruction algorithm can create highly accurate 3D dairy cow objects for the auto-measurement problem.

3.1. The Camera Synchronization System

In the context of reconstructing a 3D object, achieving precise 3D geometric information poses a challenge when relying on a solitary camera. To capture more detailed geometric features of a 3D object, it becomes necessary to increase the number of cameras involved in the process. Nevertheless, the synchronization of these cameras is crucial for the simultaneous capture of frames. Incorrect synchronization among cameras can lead to the generation of numerous artifacts in the reconstructed 3D object. As presented in [9], the authors provide a comparison of the quality of 3D reconstructions created with two different approaches to show the importance of camera synchronization. Our multi-camera synchronization system can synchronize 10 cameras through one host (Jetson Orin).
In the current work, only 2 of the 10 pairs of Jetson Nano and Intel RealSense cameras are used to collect RGB-D data for the dairy cow 3D reconstruction problem. This system has been developed to align the frames acquired by two cameras placed in the top and bottom positions. All frames were captured with corresponding timestamps, as shown in Figure 3. However, capturing frames synchronously via an external trigger signal does not by itself guarantee synchronization when frames are stored from the RealSense cameras to the hosts. Therefore, the global timestamps of the host computers are synchronized with each other via the Network Time Protocol (NTP) [12,13] to achieve the simultaneous computation of frames. Consequently, we can retrieve simultaneously captured frames by gathering, from all cameras, the frames that share the same global timestamps.
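To make the matching step concrete, the following is a minimal sketch of timestamp-based frame pairing across NTP-synchronized hosts; the frame-record layout and the 5 ms tolerance are illustrative assumptions rather than the exact implementation used in this work.

```python
from bisect import bisect_left

def match_frames(top_frames, bottom_frames, tol_s=0.005):
    """Pair frames from two cameras whose global timestamps differ by < tol_s.

    Each input is a list of (timestamp_s, frame_path) tuples sorted by time;
    returns a list of (top_frame_path, bottom_frame_path) pairs.
    """
    bottom_ts = [t for t, _ in bottom_frames]
    pairs = []
    for t_top, top_path in top_frames:
        i = bisect_left(bottom_ts, t_top)
        # Inspect the nearest neighbors on both sides of the insertion point.
        for j in (i - 1, i):
            if 0 <= j < len(bottom_ts) and abs(bottom_ts[j] - t_top) < tol_s:
                pairs.append((top_path, bottom_frames[j][1]))
                break
    return pairs
```

At 30 fps the inter-frame spacing is about 33 ms, so any tolerance well below half that spacing yields unambiguous pairs.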

3.2. Dairy Cow 3D Reconstruction Improvement

As presented in [14], the dairy cow 3D reconstruction is generated from the RGB-D dataset, applying an AI algorithm for creating depth images. Initially, k-frame segments were derived from pre-existing short RGB-D sequences. Within each subsequence, the RGB-D odometry algorithm [15] was applied to ascertain the camera trajectory and merge image ranges. Specifically, the identity matrix served as the initialization for adjacent RGB-D frames. In contrast, for non-adjacent RGB-D frames, ORB feature computation facilitated sparse feature matching across wide-baseline images [16], followed by a 5-point RANSAC [17] process for a preliminary alignment estimation, which was utilized as the initialization for the RGB-D odometry computation. Considering the number of input frames, k = 100 was consistently set for all experiments with the current dataset. Utilizing the initial 100 frames, fragments were generated, each describing a segment of the dairy cow surface mesh.
Once the fragments of the scene are generated, the next step involves aligning them in a global space. Global registration refers to an algorithm that operates without an initial alignment; typically, it computes a looser alignment that serves as the initialization for local methods such as ICP (Iterative Closest Point). In the current work, we used FGR [18] to initialize the alignment. However, in our experiments the success rate of producing a complete 3D dairy cow reconstruction in this way was quite low. As shown in Figure 4, it is not possible to stitch the point cloud created from the top camera (containing information mainly about the back of the dairy cow) with the point cloud created from the bottom camera (containing information mainly about the lower body of the dairy cow). Two parameters, Fitness and Inlier RMSE, are calculated to evaluate our algorithm, as shown in Table 1. On a dataset of several dairy cows, we applied the 3D point cloud registration algorithm and evaluated it: after updating the algorithm, the Fitness value increased from roughly 0.1 to 0.5, and the Inlier RMSE decreased from roughly 0.02 to 0.01.
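For reference, the Fitness and Inlier RMSE metrics in Table 1 correspond to the standard registration-evaluation quantities available in Open3D; a minimal sketch, with illustrative file names and a 2 cm correspondence threshold, is shown below.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("fragment_top.ply")     # hypothetical paths
target = o3d.io.read_point_cloud("fragment_bottom.ply")

trans = np.identity(4)  # candidate alignment (identity as a placeholder)
metrics = o3d.pipelines.registration.evaluate_registration(
    source, target, 0.02, trans)  # 0.02 m max correspondence distance
print(f"fitness={metrics.fitness:.3f}, inlier_rmse={metrics.inlier_rmse:.4f}")
```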
Therefore, an initialization matrix T has been created based on the physical parameters of the camera’s position in the dairy cow data collection system, as shown in Figure 5. Equation (1) presents the specific values of the T matrix. By combining the FGR algorithm with these values, the point clouds generated from the top camera are rotated and translated closer to the nearest point cloud generated from the bottom camera.
Specifically,
$$T = \begin{bmatrix} 0.642788 & 0 & 0.766044 & 1.39093 \\ 0 & 1 & 0 & 0 \\ -0.766044 & 0 & 0.642788 & 0.733545 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$
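The following sketch shows how such a fixed initialization can be combined with FGR and an ICP refinement in an Open3D-style pipeline. The voxel size, feature radii, thresholds, and file names are illustrative assumptions, and the sign of the lower-left rotation entry follows the matrix as reconstructed in Equation (1).

```python
import numpy as np
import open3d as o3d

# Initialization matrix T from Equation (1): a rotation about the y-axis
# plus a translation derived from the physical camera placement.
T = np.array([[ 0.642788, 0.0, 0.766044, 1.39093 ],
              [ 0.0,      1.0, 0.0,      0.0     ],
              [-0.766044, 0.0, 0.642788, 0.733545],
              [ 0.0,      0.0, 0.0,      1.0     ]])

def preprocess(pcd, voxel=0.01):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    feat = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, feat

top = o3d.io.read_point_cloud("fragment_top.ply")        # hypothetical paths
bottom = o3d.io.read_point_cloud("fragment_bottom.ply")
top.transform(T)  # pre-align the top-camera fragment toward the bottom one

src, src_feat = preprocess(top)
dst, dst_feat = preprocess(bottom)
fgr = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
    src, dst, src_feat, dst_feat,
    o3d.pipelines.registration.FastGlobalRegistrationOption(
        maximum_correspondence_distance=0.02))
icp = o3d.pipelines.registration.registration_icp(
    src, dst, 0.02, fgr.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(icp.fitness, icp.inlier_rmse)
```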

3.3. Dairy Cow 3D Point Cloud Extraction and Normalization

The 3D point clouds of dairy cows generated by the 3D reconstruction algorithms in [14] contain information from the entire input image scene. To expedite point cloud processing and mitigate the impact of the substantial volume of point cloud data on computer resources, the Voxel Grid approach was employed to downsample the point clouds [19]. The original point cloud data include points from the target dairy cow, the fence, the ground, and outliers. To minimize the influence of unnecessary points, the dairy cow clouds were trimmed at the planes of the fence and the ground. As depicted in Figure 6, the raw 3D reconstruction data encompass two main planes: the ground and the fence. The RANSAC algorithm was utilized to detect these two planes. This algorithm randomly selects three points within the point cloud, estimates the corresponding plane equation (Equation (2)), and utilizes the distance d to identify points belonging to the plane. Through multiple iterations, the plane with the greatest number of points was extracted.
$$a x + b y + c z + d = 0, \qquad \mathbf{n}^{T}\mathbf{x} = d \tag{2}$$
where $\mathbf{n} = [\,n_1 \;\; n_2 \;\; n_3\,]^{T}$ is the normal vector of the plane and $\mathbf{x} = [\,x_1 \;\; y_1 \;\; z_1\,]^{T}$.
Figure 6. Dairy cow 3D point cloud extraction and normalization.
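A minimal sketch of this extraction step in Open3D, combining Voxel Grid downsampling with iterative RANSAC plane removal; the voxel size, distance threshold, iteration count, and file name are illustrative assumptions.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("cow_reconstruction.ply")  # hypothetical path
pcd = pcd.voxel_down_sample(voxel_size=0.005)            # Voxel Grid downsampling

# Remove the two dominant planes (ground and fence) one after the other.
for _ in range(2):
    plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                       ransac_n=3, num_iterations=1000)
    a, b, c, d = plane  # coefficients of ax + by + cz + d = 0, Equation (2)
    pcd = pcd.select_by_index(inliers, invert=True)      # keep off-plane points
```

A statistical outlier removal pass (e.g., Open3D's remove_statistical_outlier) can then discard the remaining stray points before measurement.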

4. Automated Measurement and Analysis of Dairy Cows via 3D Point Cloud

4.1. Dairy Cow Body Automated Measurement

Considering the fragmented 3D shape of an animal, depicted as an amalgamation of point clouds acquired from a set of two depth cameras $\{P_1, P_2\}$, the point clouds are concatenated into a unified single point cloud $P = \bigcup_{c=1}^{2} P_c$, where $P = \{p_1, \ldots, p_n\}$ and $p_i \in \mathbb{R}^3$. Our objective is to determine the $m$ ordered key points $N = \{n_1, \ldots, n_m\}$, where $n_j \in P$. The key points are systematically annotated in a consistent order, such as right rear leg, right front leg, hip, …, giving them a well-defined semantic significance. The coordinate system is defined by the x, y, and z axes, displayed as red, green, and blue arrows, respectively, as shown in Figure 6.
As described in [20], the key points extraction was proposed as a regression problem. Specifically, the distance between each point in the point cloud and each annotated point was computed, yielding m distance vectors, each with a size of n. Utilizing these distances, the key points can be determined by identifying the point with the minimum value in each distance vector. From a machine learning perspective, the problem is formulated as a mapping between the n × 3 input matrix P and the n × m output matrix D ^ . This naturally lends itself to an encoder–decoder architecture, where features for each point are aimed to be predicted from the input. Point cloud encoder–decoder architectures are typically designed to address semantic segmentation problems, where the probability of each point belonging to a specific class is predicted by the neural network. The transformation of an encoder–decoder into a segmentation problem is achieved by converting the network’s prediction into a probability, typically through the use of a sigmoid function, and by having the expected class probability backpropagated through the loss function. In this research, we contend that an encoder–decoder architecture can be employed interchangeably provided that it possesses the capacity to learn from point clouds [21].
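To make the formulation concrete, the sketch below builds the ground-truth $n \times m$ distance matrix from annotated key points and decodes key points from a predicted matrix $\hat{D}$ by a per-column argmin; the array and function names are illustrative.

```python
import numpy as np

def distance_targets(points: np.ndarray, annotated: np.ndarray) -> np.ndarray:
    """points: (n, 3) cloud; annotated: (m, 3) key points -> (n, m) distances."""
    return np.linalg.norm(points[:, None, :] - annotated[None, :, :], axis=-1)

def decode_keypoints(points: np.ndarray, d_hat: np.ndarray) -> np.ndarray:
    """d_hat: (n, m) predicted distances -> (m, 3) key point coordinates."""
    idx = np.argmin(d_hat, axis=0)  # closest cloud point per key point
    return points[idx]
```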
The task of detecting specific points on a cow’s body is formulated as a 3D key point detection problem using point cloud data. To tackle this challenge, we leverage PointNet [22], a specialized 3D Deep Neural Network (DNN) designed for comprehensive 3D data analysis, offering the unique capability to learn both global and local features.
In Figure 7, the architecture of PointNet is structured as follows: it incorporates an Input Transform Network (T-Net) succeeded by a sequence of Multi-Layer Perceptrons (MLPs) dedicated to local feature extraction. The Input Transform Network adeptly captures transformations, ensuring the network’s resilience to variations in input point permutations, rotations, and translations. Following this, a Feature Transform Network (T-Net) is employed to augment the network’s ability to handle diverse point orderings. Upon local feature extraction, a global feature vector is derived through max pooling, facilitating the aggregation of information from the entire point cloud. This global feature vector undergoes further processing by a set of MLPs, culminating in the production of the final segmentation mask. This mask assigns class labels to each individual point, effectively completing the task. The synergistic interplay between the Input and Feature Transform Networks empowers PointNet to robustly extract features from point cloud data, making it a potent solution for the nuanced task of detecting specific points on a cow’s body.
In this work, the PointNet model is implemented in PyTorch, a popular deep learning framework. Details of the training environment are given in Table 2.
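The sketch below illustrates one way such a distance-regression network can be set up with the configuration of Table 2 (PyTorch, MSE loss, Adam with learning rate 0.001, 100 epochs). It is a simplified stand-in that omits the Input/Feature T-Nets of the full PointNet; the number of key points and the data loader are assumptions.

```python
import torch
import torch.nn as nn

class PointNetDistanceRegressor(nn.Module):
    """Simplified PointNet-style per-point distance regressor (no T-Nets)."""
    def __init__(self, num_keypoints: int):
        super().__init__()
        self.local = nn.Sequential(              # shared per-point MLPs
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU())
        self.head = nn.Sequential(               # per-point head on local +
            nn.Conv1d(1024 + 64, 256, 1),        # global features
            nn.ReLU(),
            nn.Conv1d(256, num_keypoints, 1))

    def forward(self, x):                        # x: (B, 3, N) point clouds
        f64 = self.local[:2](x)                  # low-level features (B, 64, N)
        f = self.local[2:](f64)                  # deep features (B, 1024, N)
        g = torch.max(f, dim=2, keepdim=True).values   # global max pooling
        g = g.expand(-1, -1, x.shape[2])         # broadcast back to each point
        return self.head(torch.cat([f64, g], dim=1))   # (B, m, N) distances

model = PointNetDistanceRegressor(num_keypoints=8)  # m = 8 is illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
# Training loop matching Table 2 (100 epochs, no early stopping); 'loader' is
# an assumed DataLoader yielding (points (B, 3, N), distances (B, m, N)):
# for epoch in range(100):
#     for pts, dist in loader:
#         optimizer.zero_grad()
#         loss = loss_fn(model(pts), dist)
#         loss.backward()
#         optimizer.step()
```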
As presented in Table 3, dairy cow body part measurement values refer to quantified data associated with various physical characteristics and dimensions of dairy cows. These measurements play a crucial role in assessing the health, well-being, and productivity of the animals in a dairy farming context.
The following are explanations for some of these key measurement values:
Stature Value: Stature value refers to the measurement of a dairy cow’s height, which is usually from the ground to a specific point on its body. This measurement is vital in determining the cow’s overall size and can be used to monitor growth, nutritional status, and assess the animal’s ability to access feed and water resources.
Rump Angle Value: Rump angle value measures the angle of a cow’s rump, specifically the slope from its lower back to the tailhead. The rump angle can provide insights into the cow’s body condition and reproductive health. Changes in rump angle can indicate shifts in body fat and muscle distribution.
Rump Width Value: Rump width value quantifies the width of the cow’s rump, which is the area just before the tailhead. This measurement can help assess the cow’s body condition, particularly the development of the pelvic area, which is essential for calving.
Front Teat Length: The front teat length is the measurement of the length of the teats on the front udder of a dairy cow. This measurement is essential in assessing udder health and milkability. It can also be an indicator of the cow’s ability to nurse its calf or be milked efficiently.
These measurement values are collected through various methods, including manual measurements and automated systems that employ advanced technologies such as 3D point cloud imaging. Accurate and consistent measurements are essential for monitoring the health and performance of dairy cows, enabling farmers to make informed decisions about their care, nutrition, and overall management.

4.2. Stature Height

The stature height of a dairy cow is the vertical distance from the highest point on the back of a dairy cow to the ground. As illustrated in Figure 8, the 3D point cloud of the dairy cow, after being separated from the ground plane, is utilized for the computation of stature height. By applying deep learning techniques to identify the key points, the stature height can be calculated through a point-to-plane distance measurement, $M_{\mathrm{StatureHeight}}$:
$$M_{\mathrm{StatureHeight}} = d(P, \mathrm{GroundPlane}) \tag{3}$$
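As a small sketch (not the authors' exact code), the point-to-plane distance in Equation (3) can be computed directly from the RANSAC plane coefficients of Section 3.3; the variable names are illustrative.

```python
import numpy as np

def point_to_plane_distance(p, plane):
    """Distance from point p = (x, y, z) to plane (a, b, c, d): ax+by+cz+d=0."""
    a, b, c, d = plane
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / np.linalg.norm(plane[:3])

# stature_height = point_to_plane_distance(top_of_back_point, ground_plane)
```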

4.3. Rump Angle

The rump angle of the dairy cow is the inclination of the rump from the hip bone to the sit bone. In Figure 9, it is shown that, starting from the original dairy cow 3D point cloud, the outliers are removed to reduce computational cost and enhance the accuracy of AI-based key point detection.
Once the two key points ($A_1$ and $A_2$) used to compute the rump angle are identified, the measurement is evaluated by comparing the distances from the two points to the ground plane ($h_1$ and $h_2$) and assigning a score. From the detected points $A_1$ and $A_2$, we compute the distances to the ground plane and obtain $M_{\mathrm{RumpAngle}}$:
$$M_{\mathrm{RumpAngle}} = h_1 - h_2 \tag{4}$$

4.4. Rump Width

The rump width of a dairy cow is the inner width of the ischium and the width of the cone, $M_{\mathrm{RumpWidth}}$. Figure 10 shows how the two points $C_1$ and $C_2$ are determined by the AI algorithm for calculating the rump width:
$$M_{\mathrm{RumpWidth}} = d(C_1, C_2) \tag{5}$$

4.5. Front Teat Length

After creating the 3D teat point cloud data (Figure 11b), this dataset is labeled for training and testing; specifically, two points are labeled at the top and the bottom of each teat (Figure 11c). With the automatic teat detection results after the training process (Figure 11d), the front teat length $M_{\mathrm{FrontTeatLength}}$ is determined according to Equation (6):
$$M_{\mathrm{FrontTeatLength}} = \frac{M_{\mathrm{left}} + M_{\mathrm{right}}}{2} \tag{6}$$
where $M_{\mathrm{left}}$ and $M_{\mathrm{right}}$ are the lengths of the left and right front teats, respectively.
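A minimal sketch pulling Equations (4)-(6) together, assuming the AI-detected key points and the RANSAC ground plane are available; all names are illustrative.

```python
import numpy as np

def height_above_plane(p, plane):
    a, b, c, d = plane  # ground plane: ax + by + cz + d = 0
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / np.linalg.norm(plane[:3])

def rump_angle(a1, a2, ground_plane):
    """Equation (4): difference of the two point-to-plane heights, h1 - h2."""
    return height_above_plane(a1, ground_plane) - height_above_plane(a2, ground_plane)

def rump_width(c1, c2):
    """Equation (5): Euclidean distance between detected points C1 and C2."""
    return float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))

def front_teat_length(l_top, l_bot, r_top, r_bot):
    """Equation (6): mean of left and right teat lengths from endpoint pairs."""
    m_left = np.linalg.norm(np.asarray(l_top) - np.asarray(l_bot))
    m_right = np.linalg.norm(np.asarray(r_top) - np.asarray(r_bot))
    return float((m_left + m_right) / 2)
```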
Figure 11. Measurement of front teat length: (a) Input. (b) Generated teat point cloud. (c) Teat labeling. (d) AI-based teat auto-detection.

5. Experimental Results

5.1. The Camera Synchronization System

To assess the synchronization between the two cameras in the dairy cow 3D reconstruction system, we constructed a system comprising ten cameras connected to ten Jetson Nano single-board computers, with all collected data sent to a host computer (Jetson Orin). The device specifications are detailed in Table 4. The synchronization was demonstrated through the frames obtained from the 10 cameras, which displayed the same timestamp on the screen when they had the same index. The synchronization results are shown in Figure 12.

5.2. Dairy Cow Body Automated Measurement

The key to calculating the body size data for cows, including withers height, body length, chest width, and chest girth, lies in the measurement points on the cow’s body. The automated cow body measurement algorithm was applied to the cow’s point clouds after coordinate normalization and refinement. The definitions for manual and automatic measurement values for each body size were as follows.

5.2.1. Stature Height

As shown in Figure 13, by adding the calculated value and measurement score directly to the input 3D point cloud dataset (stature height = 142.0 cm and measurement score = 6), evaluating and monitoring the condition of dairy cows becomes simpler and more intuitive for evaluators via any point cloud (.ply) display application. Table 5 displays the outcomes of the non-contact measurement system’s repeatability.
The mean absolute error of 0.7 cm in height measurement indicates a relatively high level of precision in capturing the vertical dimension of dairy cows. This accuracy is crucial in assessing the growth, health, and overall stature of the animals. It can aid in determining the appropriate feeding and care for each cow within the herd.

5.2.2. Rump Angle

Figure 14a shows that four points $A_1$, $A_2$, $B_1$, and $B_2$ are automatically determined by the AI algorithm for the rump-angle measurement problem. The rump angle is then calculated and given a measurement score displayed on the rump point cloud (the green point cloud area): rump angle = 6.06 and measurement score = 6.
The measurement of rump-angle auto-detection error on 101 samples is computed in Table 6. The 0.61 cm mean absolute error in rump angle measurement reflects the depth cameras’ competence in quantifying the inclination or tilt of the cow’s rump. This metric is valuable in assessing the cow’s comfort and posture, which are particularly relevant for dairy cattle’s well-being and milking efficiency.

5.2.3. Rump Width

Similar to the rump angle, the rump width is calculated through two automatically determined points $C_1$ and $C_2$, as shown in Figure 14b. The calculated rump width value and measurement score are added directly to the 3D point cloud input; specifically, rump width = 11.7 cm corresponds to measurement score = 5. The auto-detection error of rump-width measurement on 101 samples is computed and shown in Table 7. With a mean absolute error of 2.5 cm, the depth cameras demonstrate their capability to accurately capture the width of the cow’s rump. This measurement is significant in evaluating the body condition and reproductive health of the animals. The precision achieved here contributes to the effective management of dairy herds.

5.2.4. Front Teat Length

After the dairy cow teat is automatically detected, the teat length value is calculated and given a measurement score, as shown in Figure 15: teat left length = 4.06, teat right length = 4.26, average teat length = 4.16, and measurement score = 3. In addition, the results computed on 71 samples are shown in Table 8.
The front teat length measurement, with a mean absolute error of 0.79 cm, is important for the assessment of milking efficiency and udder health. This level of accuracy enables dairy farmers to make informed decisions about milking routines and cow comfort.
The errors between the manually measured values of dairy cow height, rump angle, and front teat length and the values calculated by the auto-measurement algorithm were within 0.7 cm. The errors observed did not surpass those generated by manual measurements.
In the verification, the rump width showed a larger error than the other body parameters. This is possibly because the structure of the rump-width calculation area is quite complicated and the two points to be detected lie very close together, leading to confusion in the AI-based determination of the measurement points.
Table 9 presents an analysis of research conducted on non-contact body measurement applications. Specifically, the measurement method, the type of device used for data collection, and the type of data processed (2D images, depth images, or point clouds) are important criteria that affect the quality of the resulting 3D point cloud as well as the accuracy of automated measurement and analysis of animals. The table also lists measured traits for animals such as horses, cows, pigs, and dairy cattle, along with the performance of the different measurement methods.
Precision requirements in dairy farming statistics can vary based on the specific applications and objectives. Precision is crucial in ensuring the accuracy and reliability of data collected for various aspects of dairy farming. In our work, accurate measurements of cow height (stature) are crucial for monitoring health and productivity. The mean absolute error (MAE) of 0.7 cm is acceptable in many scenarios, especially if the primary goal is to identify significant changes rather than precise measurements. Precision in rump-angle measurement is critical for assessing cow body condition and comfort. Based on our current approach, the MAE of 0.61 cm is relatively low and should be considered satisfactory for most practical purposes in dairy farming. The rump width is a significant factor in determining the optimal space required for a cow to move comfortably. The MAE of 2.5 cm may be acceptable in certain applications, depending on the specific needs of the dairy farm. However, if precision is crucial for a particular task, further improvements may be considered. Finally, front teat length is an important parameter for milking efficiency and udder health. The MAE of 0.79 cm is generally acceptable for routine monitoring in dairy farming.
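For completeness, the mean absolute errors reported in Tables 5-8 follow the standard definition; a minimal sketch with illustrative inputs is shown below.

```python
import numpy as np

def mean_absolute_error(manual, auto):
    """MAE between paired manual and automatic measurements (same units)."""
    manual, auto = np.asarray(manual, float), np.asarray(auto, float)
    return float(np.mean(np.abs(manual - auto)))

# e.g., two stature rows from Table 5:
# mean_absolute_error([145.28, 149.09], [144.87, 147.79])  # -> 0.855
```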
Finally, this research has contributed to practical significance in the field of automated measurement and analysis of animals via 3D point clouds. The low mean absolute errors in these body measurements highlight the feasibility of using depth cameras for accurate and non-invasive data collection in the dairy industry. This technology can significantly improve the management and well-being of dairy cows, leading to increased milk production, health, and overall farm efficiency. Additionally, the results from this study could contribute to precision livestock farming practices, enabling farmers to make data-driven decisions for their dairy herds.

6. Conclusions

In conclusion, this paper has presented a pioneering approach to revolutionize the measurement and analysis of dairy cows through the innovative use of 3D point cloud technology. The intersection of precision livestock farming and advanced sensing techniques has yielded a system that promises to redefine how dairy farmers monitor and manage their herds. Through a comprehensive review of related work, we have contextualized our contributions within the broader landscape of automated livestock monitoring, highlighting the unique advantages offered by 3D imaging and point cloud data. Our proposed system leverages 3D imaging sensors to generate high-resolution point clouds, capturing intricate details of dairy cow anatomical structures. The developed automated algorithms extract crucial metrics related to body dimensions, posture, and health indicators, providing farmers with a nuanced understanding of individual and herd-wide conditions. By automating these processes, our system not only alleviates the burden on farmers but also facilitates continuous, real-time monitoring, enabling early detection of health issues and timely interventions. Furthermore, the integration of machine learning algorithms enhances the system’s capability to identify and classify various behaviors and conditions, contributing to a more comprehensive assessment of dairy cows’ well-being. While the presented work marks a significant stride forward, we acknowledge that further research is needed to refine and expand the capabilities of our system. Additionally, collaboration with stakeholders, including farmers, veterinarians, and agricultural technology developers, is essential to ensure the practicality and adoption of our proposed approach in real-world dairy farming scenarios. In summary, our work contributes to the growing body of knowledge in precision livestock farming by introducing a methodology for the automated measurement and analysis of dairy cows. The fusion of 3D point cloud technology and machine learning algorithms represents a powerful synergy that holds promise for the future of dairy farming, where technology aligns with ethics and sustainability, fostering a new era of intelligent and humane livestock management.

Author Contributions

Conceptualization, J.G.L. and D.T.N.; methodology, D.T.N.; software, D.T.N., A.T.P. and S.H.; validation, S.S.L., S.H., H.-P.N., H.-S.S., M.A. and S.M.L.; formal analysis, M.A. and C.G.D.; investigation, J.G.L.; resources, H.-S.S., M.A. and M.N.P.; data curation, D.T.N., S.S.L., M.A., M.N.P., M.K.B., A.T.P. and H.-P.N.; writing—original draft preparation, D.T.N. and H.-P.N.; writing—review and editing, D.T.N., S.H.; visualization, M.K.B., H.-P.N. and A.T.P.; supervision, J.G.L. and S.S.L.; project administration, J.G.L., M.A. and S.M.L.; funding acquisition, J.G.L., M.N.P., S.S.L., H.-S.S., M.A., S.M.L. and C.G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) and Korea Smart Farm R&D Foundation (KosFarm) through Smart Farm Innovation Technology Development Program, funded by Ministry of Agriculture, Food and Rural Affairs (MAFRA) and Ministry of Science and ICT (MSIT), Rural Development Administration (421011-03).

Institutional Review Board Statement

This animal care and use protocol was reviewed and approved by the IACUC at the National Institute of Animal Science (approval number: NIAS 2022-0545).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors Seungkyu Han, Hoang-Phong Nguyen, Min Ki Baek, Anh Tuan Phan, and Duc Toan Nguyen were employed by the company ZOOTOS Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
NTP: Network Time Protocol
RANSAC: Random Sample Consensus
RMSE: Root Mean Square Error
ROI: Region of Interest
SLAM: Simultaneous Localization and Mapping

References

  1. Weber, V.A.M.; de Lima Weber, F.; da Silva Oliveira, A.; Astolfi, G.; Menezes, G.V.; de Andrade Porto, J.V.; Pistori, H. Cattle weight estimation using active contour models and regression trees Bagging. Comput. Electron. Agric. 2020, 179, 105804. [Google Scholar] [CrossRef]
  2. Nir, O.; Parmet, Y.; Werner, D.; Adin, G.; Halachmi, I. 3D Computer-vision system for automatically estimating heifer height and body mass. Biosyst. Eng. 2018, 173, 4–10. [Google Scholar] [CrossRef]
  3. Pallottino, F.; Steri, R.; Menesatti, P.; Antonucci, F.; Costa, C.; Figorilli, S.; Catillo, G. Comparison between manual and stereovision body traits measurements of Lipizzan horses. Comput. Electron. Agric. 2015, 118, 408–413. [Google Scholar] [CrossRef]
  4. Alvarez, J.R.; Arroqui, M.; Mangudo, P.; Toloza, J.; Jatip, D.; Rodríguez, J.M.; Teyseyre, A.; Sanz, C.; Zunino, A.; Machado, C.; et al. Body condition estimation on cows from depth images using Convolutional Neural Networks. Comput. Electron. Agric. 2018, 155, 12–22. [Google Scholar] [CrossRef]
  5. Zhang, X.Y.; Liu, G.; Jing, L.; Si, Y.S.; Ren, X.H.; Ma, L. Automatic extraction method of cow’s back body measuring point based on simplification point cloud. Trans. Chin. Soc. Agric. Mach. 2019, 50, 267–275. [Google Scholar] [CrossRef]
  6. Shi, C.; Zhang, J.; Teng, G. Mobile measuring system based on LabVIEW for pig body components estimation in a large-scale farm. Comput. Electron. Agric. 2019, 156, 399–405. [Google Scholar] [CrossRef]
  7. Rodríguez Alvarez, J.; Arroqui, M.; Mangudo, P.; Toloza, J.; Jatip, D.; Rodriguez, J.; Teyseyre, A.; Sanz, C.; Zunino, A.; Machado, C.; et al. Estimating body condition score in dairy cows from depth images using convolutional neural networks, transfer learning and model ensembling techniques. Agronomy 2019, 9, 90. [Google Scholar] [CrossRef]
  8. He, D.J.; Niu, J.Y.; Zhang, Z.R.; Guo, Y.Y.; Tan, Y. Repairing method of missing area of dairy cows’ point cloud based on improved cubic B-spline curve. Trans. Chin. Soc. Agric. Mach. 2018, 49, 225–231. [Google Scholar] [CrossRef]
  9. Yoon, H.; Jang, M.; Huh, J.; Kang, J.; Lee, S. Multiple Sensor Synchronization with the RealSense RGB-D Camera. Sensors 2021, 21, 6276. [Google Scholar] [CrossRef] [PubMed]
  10. Lu, J.; Guo, H.; Du, A.; Su, Y.; Ruchay, A.; Marinello, F.; Pezzuolo, A. 2-D/3-D fusion-based robust pose normalisation of 3-D livestock from multiple RGB-D cameras. Biosyst. Eng. 2021, 223, 129–141. [Google Scholar] [CrossRef]
  11. Dang, C.; Choi, T.; Lee, S.; Lee, S.; Alam, M.; Park, M.; Han, S.; Lee, J.; Hoang, D. Machine Learning-Based Live Weight Estimation for Hanwoo Cow. Sustainability 2022, 14, 12661. [Google Scholar] [CrossRef]
  12. Mills, D.; Martin, J.; Burbank, J.; Kasch, W. Network Time Protocol Version 4: Protocol and Algorithms Specification. no. 5905, RFC Editor, June 2010. Available online: https://www.rfc-editor.org/rfc/rfc5905.html (accessed on 21 November 2023).
  13. Johannessen, S. Time synchronization in a local area network. IEEE Control. Syst. Mag. 2004, 24, 61–69. [Google Scholar]
  14. Dang, C.; Choi, T.; Lee, S.; Lee, S.; Alam, M.; Lee, S.; Han, S.; Hoang, D.T.; Lee, J.; Nguyen, D.T. Case Study: Improving the Quality of Dairy Cow Reconstruction with a Deep Learning-Based Framework. Sensors 2022, 22, 9325. [Google Scholar] [CrossRef] [PubMed]
  15. Steinbrucker, F.; Sturm, J.; Cremers, D. Real-time visual odometry from dense RGB-D images. In Proceedings of the ICCV Workshops, Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  16. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G.R. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the ICCV, Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  17. Stewenius, H.; Engels, C.; Nistér, D. Recent developments on direct relative orientation. Isprs J. Photogramm. Remote Sens. 2006, 60, 284–294. [Google Scholar] [CrossRef]
  18. Zhou, Q.-Y.; Park, J.; Koltun, V. Fast global registration. In Proceedings of the ECCV, Amsterdam, The Netherlands, 8–16 October 2016. [Google Scholar]
  19. Rusu, R.B.; Cousins, S. 3d is here: Point cloud library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011. [Google Scholar] [CrossRef]
  20. Falque, R.; Vidal-Calleja, T.; Alempijevic, A. Semantic Keypoint Extraction for Scanned Animals using Multi-Depth-Camera Systems. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 11794–11801. [Google Scholar]
  21. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114. [Google Scholar]
  22. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar] [CrossRef]
  23. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  24. Kuzuhara, Y.; Kawamura, K.; Yoshitoshi, R.; Tamaki, T.; Sugai, S.; Ikegami, M.; Kurokawa, Y.; Obitsu, T.; Okita, M.; Sugino, T.; et al. A preliminarily study for predicting body weight and milk properties in lactating Holstein cows using a three-dimensional camera system. Comput. Electron. Agric. 2015, 111, 186–193. [Google Scholar] [CrossRef]
  25. Salau, J.; Haas, J.H.; Junge, W.; Thaller, G. A multi-Kinect cow scanning system: Calculating linear traits from manually marked recordings of Holstein-Friesian dairy cows. Biosyst. Eng. 2017, 157, 92–98. [Google Scholar] [CrossRef]
  26. Le Cozler, Y.; Allain, C.; Xavier, C.; Depuille, L.; Caillot, A.; Delouard, J.M.; Delattre, L.; Luginbuhl, T.; Faverdin, P. Volume and surface area of Holstein dairy cows calculated from complete 3D shapes acquired using a high-precision scanning system: Interest for body weight estimation. Comput. Electron. Agric. 2019, 165, 104977. [Google Scholar] [CrossRef]
  27. Song, X.; Bokkers, E.A.M.; Van Mourik, S.; Koerkamp, P.G.; Van Der Tol, P.P.J. Automated body condition scoring of dairy cows using 3-dimensional feature extraction from multiple body regions. J. Dairy Sci. 2019, 102, 4294–4308. [Google Scholar] [CrossRef]
  28. Ruchay, A.; Kober, V.; Dorofeev, K.; Kolpakov, V.; Miroshnikov, S. Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery. Comput. Electron. Agric. 2020, 179, 105821. [Google Scholar] [CrossRef]
Figure 1. The automated measurement and analysis of dairy cows framework.
Figure 2. Dairy cow 3D reconstruction evaluation: (a) 3D dairy cow point cloud. (b) Dummy dairy cow.
Figure 3. Genlock synchronization system.
Figure 4. Dairy cow 3D reconstruction improvement: (a) Merged 3D dairy cow point cloud based on the old algorithm. (b) Merged 3D dairy cow point cloud based on the new algorithm.
Figure 5. Dairy cow stereo dataset recording system.
Figure 7. PointNet architecture for 3D dairy cow segmentation.
Figure 8. Measurement of stature height.
Figure 9. Measurement of rump angle.
Figure 10. Measurement of rump width: (a) Input. (b) Detect points $A_1$ and $B_1$. (c) Cut the body along points $A_1$, $B_1$. (d) Determine points $C_1$, $C_2$.
Figure 12. The synchronization results.
Figure 13. Auto-detection and measurement of stature height.
Figure 14. Auto-detection and measurement of rump part: (a) Rump angle. (b) Rump width.
Figure 15. Auto-detection and measurement of front teat length.
Table 1. Dairy cow 3D reconstruction evaluations.

Cow ID    | Old Algorithm (Fitness / Inlier RMSE) | New Algorithm (Fitness / Inlier RMSE)
501363094 | 0.157 / 0.023                         | 0.525 / 0.018
501363095 | 0.135 / 0.024                         | 0.474 / 0.017
501208698 | 0.163 / 0.026                         | 0.442 / 0.017
501349142 | 0.153 / 0.024                         | 0.434 / 0.018

Fitness: the overlapping area of the inlier correspondence set between the source and target point clouds; higher values are better. Inlier RMSE: the RMSE of all inlier correspondences; lower values are better.
Table 2. Details of the training environment.

Operating System: Windows 10
Python Version: 3.8.17
Deep Learning Framework: PyTorch 1.13.1
Loss Function: Mean Squared Error
Optimization Algorithm: Adam [23]
Learning Rate: 0.001
Number of Training Epochs: 100 (without early stopping)
Table 3. Dairy cow body part measurement values.

Stature (cm)       | Rump Angle                                  | Rump Width (cm) | Front Teat Length (cm) | Measurement Score
128 (very small)   | The left hip is 4 cm above the iliac crest  | 5               | 2                      | 1
131                | The left hip is 2 cm above the iliac crest  | 6.5             | 3                      | 2
134 (small)        | The hip and iliac crest are level           | 8               | 4                      | 3
137                | The left hip is 2 cm below the iliac crest  | 9.5             | 5                      | 4
140 (medium level) | The left hip is 4 cm below the iliac crest  | 11              | 6                      | 5
143                | The left hip is 6 cm below the iliac crest  | 12.5            | 7                      | 6
146 (large)        | The left hip is 8 cm below the iliac crest  | 14              | 8                      | 7
149                | The left hip is 10 cm below the iliac crest | 15.5            | 9                      | 8
152 (very large)   | The left hip is 12 cm below the iliac crest | 17              | 10                     | 9

Rump Angle: position of the left hip relative to the iliac crest (above, level, below).
Table 4. The specifications of the hardware devices.

Depth Camera (Intel RealSense D435i)
- Use environment: Indoor/Outdoor
- Baseline: 50 mm
- Resolution: 1920 × 1080 px
- Frame rate: 30 fps
- Sensor FOV (H × V × D): 69.4° × 42.5° × 77° (±3°)
- Dimensions: 90 × 25 × 25 mm
- Connection: USB-C 3.1 Gen 1

Single-Board Computer (Jetson Nano)
- GPU: 128-core Maxwell
- CPU: Quad-core ARM Cortex-A57
- RAM: 4 GB 64-bit LPDDR4, 25.6 GB/s
- Storage: microSD card slot (256 GB)
- USB: 4 × USB 3.0 ports, USB 2.0 Micro-B
- Networking: Gigabit Ethernet
- Wireless: Optional Wi-Fi/Bluetooth module
- Operating System: Supports NVIDIA’s Linux-based operating system
- Power: 5 V/4 A power supply

Host Computer (Jetson Orin)
- GPU: NVIDIA Ampere architecture with 2048 NVIDIA CUDA cores and 64 Tensor Cores
- CPU: 12-core Arm Cortex-A78AE v8.2, 64-bit
- RAM: 64 GB 256-bit LPDDR5, 204.8 GB/s
- Storage: 64 GB eMMC 5.1
- I/O: Up to 2 × 8, 1 × 4, 2 × 1 (PCIe Gen4, Root Port and Endpoint), 3 × USB 3.2
- Networking: 1 × GbE, 1 × 10 GbE
- Operating System: Supports various Linux distributions
- Power: 15 W–60 W
Table 5. Stature measurement auto-detection error. The result was computed on 347 samples. Unit: cm.

Cow ID    | Manual Height Measurement | Auto Height Measurement | Detection Error
500991129 | 145.28                    | 144.87                  | 0.41
501049585 | 149.09                    | 147.79                  | 1.31
501049591 | 147.21                    | 146.78                  | 0.43
501063723 | 142.87                    | 142.27                  | 0.60
501051848 | 151.02                    | 150.54                  | 0.48
501177073 | 148.97                    | 148.29                  | 0.69
501177573 | 148.34                    | 147.70                  | 0.64
501181588 | 147.17                    | 147.32                  | 0.15
501189051 | 146.92                    | 147.02                  | 0.09
501196133 | 148.03                    | 147.90                  | 0.13
Average detection error: 0.7
Table 6. Rump-angle measurement auto-detection error. The result was computed on 101 samples. Unit: cm.

Cow ID    | Manual Rump-Angle Measurement | Auto Rump-Angle Measurement | Detection Error
500991129 | 5.35                          | 5.73                        | 0.37
501034812 | 5.01                          | 3.74                        | 1.26
501049585 | 5.23                          | 5.55                        | 0.33
501049591 | 5.10                          | 4.96                        | 0.13
501051848 | 3.13                          | 3.02                        | 0.09
501324695 | 1.25                          | 1.65                        | 0.41
501324698 | 7.25                          | 6.32                        | 0.94
501324869 | 6.56                          | 6.42                        | 0.14
501326761 | 10.86                         | 10.61                       | 0.25
501326788 | 6.55                          | 5.61                        | 0.94
Average detection error: 0.61
Table 7. Rump-width measurement auto-detection error. The result was computed on 101 samples. Unit: cm.

Cow ID    | Manual Rump-Width Measurement | Auto Rump-Width Measurement | Detection Error
500991129 | 8.68                          | 12.73                       | 4.05
501034812 | 10.67                         | 10.40                       | 0.28
501049585 | 11.33                         | 13.05                       | 1.72
501049591 | 11.56                         | 12.65                       | 1.09
501051848 | 9.89                          | 11.67                       | 1.78
501324695 | 14.06                         | 11.87                       | 2.19
501324698 | 15.4                          | 14.22                       | 1.18
501324869 | 13.9                          | 12.23                       | 1.67
501326761 | 12.92                         | 27.13                       | 14.21
501326788 | 12.91                         | 12.94                       | 0.03
Average detection error: 2.5
Table 8. Front teat length measurement auto-detection error. The result was computed on 71 samples. Unit: cm.

Cow ID    | Manual Front Teat Length Measurement | Auto Front Teat Length Measurement | Detection Error
501031885 | 4.43                                 | 4.34                               | 0.10
501049585 | 4.33                                 | 4.14                               | 0.19
501093021 | 3.71                                 | 3.95                               | 0.24
501105712 | 4.71                                 | 4.82                               | 0.10
501118379 | 3.89                                 | 4.34                               | 0.45
501381814 | 3.20                                 | 4.07                               | 0.86
501382062 | 3.77                                 | 4.29                               | 0.52
501382955 | 4.25                                 | 4.42                               | 0.17
501383287 | 4.28                                 | 4.11                               | 0.17
501386628 | 4.74                                 | 4.39                               | 0.35
Average detection error: 0.79
Table 9. An analysis of research conducted for non-contact body measurement applications.

Research | Measurement Method | Device | Object Processing | Animal | Object Measurement | Performance
Rodriguez Alvarez (2018) [7] | Automatic | Kinect | Depth image | Cow | Body condition score | Accuracy: 78% within 0.25; 94% within 0.5
Nir et al. (2018) [2] | Automatic | Kinect | Depth image | Cow | Hip height, withers height | Mean relative absolute error less than 1.17%
Zhang et al. (2019) [5] | Automatic | Kinect | Depth image | Cow | Measurement points on the backside | Mean absolute error less than 1.17 cm
Weber et al. (2020) [1] | Automatic | RGB camera | 2D image | Cow | Feature points on the backside | N/A
Kuzuhara et al. (2015) [24] | Manual | Xtion Pro | Point cloud | Cow | Backside | N/A
Salau et al. (2017) [25] | Manual | Six Kinects | Point cloud | Cow | Teat length, heights of the ischial tuberosities | Standard error ranges: 0.7–1.5 mm and 14.0–22.5 mm
Le Cozler et al. (2019) [26] | Manual | Five LiDAR sensors | Point cloud | Cow | Volume and surface area | Coefficients of variation: 0.17% and 3.12%
Song et al. (2019) [27] | Automatic | Three Kinects | Depth image | Cow | Vertebral column, centerline of the sacral ligament, hook bone center | N/A
Ruchay et al. (2020) [28] | Manual | Three Kinects | Point cloud | Cattle | Withers height, hip height, chest depth, heart girth, ilium width, hip joint width, oblique body length, hip length, chest width | Measurement errors less than 3% at a 90% confidence level
Ours | Automatic | RGB-D camera | Point cloud | Dairy cow | Height (stature), rump angle, rump width, front teat length | MAE: height (stature) 0.7 cm; rump angle 0.61 cm; rump width 2.5 cm; front teat length 0.79 cm
