Review

A Review of Visual Estimation Research on Live Pig Weight

1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2 Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
3 College of Informatics, Huazhong Agricultural University, Wuhan 430070, China
4 College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
5 College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin 300384, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(21), 7093; https://doi.org/10.3390/s24217093
Submission received: 12 September 2024 / Revised: 30 October 2024 / Accepted: 31 October 2024 / Published: 4 November 2024
(This article belongs to the Section Smart Agriculture)

Abstract

The weight of live pigs is directly related to their health, nutrition management, disease prevention and control, and the overall economic benefits to livestock enterprises. Direct weighing can induce stress responses in pigs, leading to decreased productivity. Therefore, modern livestock industries are increasingly turning to non-contact techniques for estimating pig weight, such as automated monitoring systems based on computer vision. These technologies provide continuous, real-time weight-monitoring data without disrupting the pigs’ normal activities or causing stress, thereby enhancing breeding efficiency and management levels. This paper comprehensively analyzes two approaches to pig weight estimation, based on image data and on point cloud data. We first analyze the advantages and disadvantages of the two approaches and then discuss the main problems and challenges in the field of pig weight estimation technology. Finally, we outline key research areas and future development directions.

1. Introduction

The informatization and precision management of animal husbandry have become a new trend in the development of the breeding industry [1,2,3]. This trend is significant because it conserves resources and protects the environment while also improving breeding efficiency and product quality. In particular, the precision feeding of pigs is crucial for the development of animal husbandry and covers several key aspects, such as precise feed formulation, intelligent control of the feeding environment, and real-time monitoring of animal health. These measures improve the weight gain rate and feed utilization efficiency of live pigs and, at the same time, enhance carcass quality and production efficiency, strongly promoting the development of animal husbandry toward intensification, scale, and modernization, in line with the trend of informatization and precision [4,5].
Pig weight monitoring occupies a crucial position in animal management. By analyzing the changes in pig weights, farmers can evaluate the reproductive potential, feed consumption, social behavior, and energy balance of pigs. The abnormal fluctuations in pig weights are important indicators for judging their health status, which is conducive to the early detection of diseases and can provide an accurate basis for predicting the optimal feeding date. In addition, pig weight is also a key parameter for assessing whether it meets the slaughter standards [6]. Real-time monitoring of animal weights has many significant advantages, such as being able to reduce losses and mortality and also playing a key role in reducing the overall breeding costs. Therefore, the weight parameter has already become a key economic factor in the growth process of pigs.
Most existing weight estimation is carried out manually. It typically takes two farm workers 3–5 min to weigh each pig on a scale [7]. Alternatively, weight can be estimated by directly measuring the pig’s body dimensions: the sizes of specific body parts are measured, and the weight is then calculated with a pre-established mathematical model or empirical formula. Although these methods can measure pig weight, direct contact with the animals brings a series of problems. First, it requires a significant amount of labor, increasing costs. Second, it causes great stress to the pigs and may even lead to injuries, resulting in a decline in productivity. At the same time, workers face risks during the operation, so the costs often outweigh the benefits.
With the development of modern information technology, technologies such as computer vision and artificial intelligence have made it possible to estimate pig weight indirectly [8]. Computer vision can collect pig data without stress or contact, allowing researchers to observe and analyze the animals carefully. By extracting and classifying body size characteristics, pig weight can be estimated automatically, and this technology has become an important means of weight estimation [9]. Extracting body characteristics is the core step of computer vision-based pig weight estimation. Currently, the visual estimation of live pig weight mainly follows two approaches: image-based and three-dimensional point cloud-based. Most domestic and foreign researchers construct pig weight estimation models from the body shape and body size characteristics of pigs. In recent years, with the rapid development of deep learning, algorithms such as image classification, object detection, and image segmentation have provided reliable technical support for data processing and analysis, bringing new development opportunities to computer vision-based pig weight estimation. Compared with other livestock species, pig farming requires a higher level of biosecurity, and during the 6–8 months from birth to slaughter, a pig’s weight must be measured multiple times. Moreover, unlike cattle and sheep, pigs cannot be routed through a fixed race to obtain weight estimation parameters. Weight estimation methods designed for other animals are therefore difficult to apply directly to pigs.
At present, computer vision-based pig weight estimation has achieved good research results, but some problems remain. For image-based estimation, accuracy can be affected by the image acquisition angle and the pig’s posture, and illumination changes can degrade image quality and thus estimation precision. Although depth cameras alleviate these problems and allow richer body size parameters to be obtained, they bring drawbacks such as large data volumes, high demands on algorithm accuracy, long response times, and high algorithmic complexity, which hinder real-time acquisition and processing.
This review focuses on computer vision-based estimation of pig weight, summarizing its working principles and key technologies. From the aspects of data acquisition, technology research and development, and experimental verification, it analyzes the main technical problems and challenges in depth. It also explores future research priorities and development directions, aiming to provide a theoretical basis and technical reference for further research on accurate and efficient pig weight estimation.

2. Estimating Pig Weight Using Computer Vision

As smart farming technologies continue to advance, the estimation of pig weight through computer vision has become a prominent focus in livestock research [10,11]. This technology involves capturing images or video data of pigs through a computer vision system and implementing non-contact, automated weight estimation through image processing and pattern recognition algorithms. By utilizing this method, the complexity and errors associated with manual operations are significantly reduced, while also mitigating any unnecessary stress to the pigs [12,13,14].
Computer vision-based pig weight estimation primarily falls into two categories: image-based visual analysis and three-dimensional point cloud-based weight estimation [15,16,17]. Traditional image processing algorithms or deep learning algorithms are used to detect key points related to pig body dimensions from captured pig images. Subsequently, relevant body dimension information for weight estimation is extracted and categorized. Commonly used body dimension information for estimating pig weight, as shown in [18,19], includes body length, chest girth, abdominal circumference, and shoulder height, among others. Figure 1 shows the main research elements of computer vision-based pig weight estimation.
According to Panda et al. [20], body length, heart girth, waist circumference, back height, hip width, thigh circumference, neck circumference, and body height exhibit a high correlation coefficient (0.8–0.97) with weight estimation. Machebe and Ezekwe [21] conducted a study utilizing bivariate Pearson correlation analysis and other statistical methods to assess the relationship between different measurement indicators and their accuracy and reliability in predicting weight. Their findings indicated a high correlation between pig weight, body length, and heart girth. Similarly, Banik et al. [22] identified that changes in pre-weaning weight were primarily associated with front leg height (20.98%), body length (19.50%), body height (6.60%), and abdominal circumference (5.80%). Oluwole et al. [23] considered height, heart girth, body length, snout length, and hip width as influential factors in predicting pig weight. Additionally, Al Ard Khanji et al. [24] observed that body length, heart girth, and lateral abdominal distance contribute significantly to estimating pig weight.
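As a minimal illustration of this kind of correlation screening, the Python sketch below computes Pearson correlation coefficients between candidate body measurements and weight. The measurement values and variable names are hypothetical placeholders, not data from the cited studies.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical measurements (cm) and weights (kg) for a small group of pigs;
# real studies use far larger samples measured on farm.
body_length = np.array([98.0, 104.5, 110.2, 121.7, 130.3])
heart_girth = np.array([95.1, 101.3, 108.9, 118.4, 127.6])
weight      = np.array([58.2, 67.9, 79.4, 96.1, 112.8])

# Screen each candidate predictor by its correlation with weight, mirroring the
# bivariate Pearson analysis used in the studies discussed above.
for name, values in [("body length", body_length), ("heart girth", heart_girth)]:
    r, p = pearsonr(values, weight)
    print(f"{name}: r = {r:.3f}, p = {p:.4f}")
```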
Although the aforementioned methods have achieved good progress in weight estimation, there are also certain issues, such as measurement errors and low efficiency. Table 1 presents the body size parameters related to pig weight, weight estimation methods, and the magnitude of errors based on computer vision measurement. As can be seen from the table, although different combinations of body size parameters are used to estimate the weight of pigs, the body length of pigs, as a parameter associated with weight, is widely recognized.
In recent years, domestic and international scholars have conducted numerous studies on pig weight estimation using computer vision, producing a wealth of findings. Table 2 compares related review articles on precision livestock farming in terms of research field, research progress, and technical methods. Benjamin et al. [4] mainly introduced the application of precision livestock farming technology in monitoring and enhancing the welfare of pigs. They discussed how technological advancements, including sensors, algorithms, and machine learning, enable farmers to monitor and improve the health and behavior of pigs in real time. Additionally, they explored the challenges and future opportunities these technologies face, as well as the crucial roles that veterinarians and practitioners play in promoting the application of technology and improving animal welfare. Rohan et al. [14] focused on the application of deep learning in livestock behavior recognition. Through extensive literature research and analysis, their study evaluated the effectiveness of different deep learning models and network architectures in identifying livestock behavior and discussed the challenges encountered in practical applications. Ma et al. [18,19] concentrated on three-dimensional reconstruction of livestock and methods for measuring body size parameters, pointed out the challenges faced by current technologies, and predicted future trends in automated livestock body size measurement. Building on this work, the present paper elaborates on the development of pig weight estimation.
Bhoj et al. [25] and Zhao et al. [26] reviewed the development of pig weight estimation and animal weight estimation, respectively. Bhoj et al. [25] emphasized introducing and comparing methods for estimating pig weight with different image processing strategies and machine learning models, presenting technical details such as image processing algorithms, model training procedures, and database construction. In contrast, this paper provides a more comprehensive overview of the main research methods, key technologies, and future development directions of the visual estimation of pig body weight, and discusses the overall framework and research progress of weight estimation technology, including a feasibility analysis of single-image three-dimensional reconstruction and possible future trends. Zhao et al. [26] offered a comprehensive literature review of weight estimation methods for various animals and an in-depth analysis of the application and challenges of deep learning in this field, whereas this paper focuses specifically on the visual estimation of live pig weight, analyzing image-based and three-dimensional point cloud-based methods in detail and exploring the main challenges and future development directions in this area. In terms of technical detail, Zhao et al. [26] emphasized the evolution of weight estimation methods for different animals, including the shift from traditional methods to deep learning, while this paper delves deeper into the specific technical implementation of live pig weight estimation, covering feature extraction methods and model construction, and provides more practical technical processes and application examples.
Current research in this field often focuses on stereoscopic vision, image processing, and analysis to extract pig characteristic information, and the subsequent estimation of live pig weight. The main steps involved in estimating the weight of live pigs are illustrated in Figure 2.

3. Live Pig Weight Estimation Based on Image

As early as 1990, Schofield et al. [27] proposed the use of digital image analysis for measuring specific areas and dimensions of live pigs in order to calculate their weight. In the early stages of research on pig weight estimation technology, the majority of studies were based on images, with researchers using visible light cameras to capture two-dimensional data of pigs for analysis. For example, relationships were established between two-dimensional parameters such as pig body area, body length, body width, and pig weight. Du et al. [28] verified the correlation between body size parameters and body weight. It can be seen from Table 3 that the parameters of body length, body width, body height, and chest depth have a relatively strong correlation, and the correlation coefficients are all greater than 0.6. The correlation between the chest circumference parameter and body weight is the largest, and the correlation coefficients are all greater than 0.9. The correlation coefficient between abdominal length and body weight is the smallest, which also confirms the rationality of using body length as a body size estimation parameter in Table 1.
A key step in estimating live pig weight using image-based techniques is the acquisition and preprocessing of images. During the estimation process, visible light cameras are typically used to collect image data of livestock in standing or walking positions. The preprocessing of the data usually involves tasks such as image segmentation, contour extraction, and image labeling, which transform the data into a suitable format for detecting and measuring body dimensions.
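As a rough sketch of this acquisition-and-preprocessing chain, the following OpenCV snippet segments a hypothetical top-view image, extracts the pig’s outer contour, and converts pixel measurements to physical units. The file name, threshold choice, and pixel-to-centimetre scale are illustrative assumptions only.

```python
import cv2
import numpy as np

# Load a hypothetical top-view image; "pig_topview.png" is a placeholder path.
img = cv2.imread("pig_topview.png", cv2.IMREAD_GRAYSCALE)

# Segment the pig from the background (Otsu threshold) and clean the mask.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Extract the largest external contour, assumed here to be the pig body.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
pig = max(contours, key=cv2.contourArea)

# Projected area plus a rough body length/width from the minimum-area rectangle.
area_px = cv2.contourArea(pig)
(_, _), (w_px, h_px), _ = cv2.minAreaRect(pig)
length_px, width_px = max(w_px, h_px), min(w_px, h_px)

# Convert to physical units with a camera-specific scale (placeholder value).
CM_PER_PIXEL = 0.25
print(f"area: {area_px * CM_PER_PIXEL**2:.1f} cm^2, "
      f"length: {length_px * CM_PER_PIXEL:.1f} cm, width: {width_px * CM_PER_PIXEL:.1f} cm")
```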
The collection of livestock image data and the effectiveness of preprocessing significantly impact the accuracy of subsequent body dimension measurements. A weight prediction model is established based on the collected body dimension parameters, and the correlation between body dimensions and weight is analyzed to predict the weight of live pigs. Accurate measurement of pig body dimensions plays a crucial role in estimating live pig weight. Figure 3 illustrates the main body size parameters for pig body weight estimation.

3.1. Image-Based Body Dimension Measurement

Body dimension information is essential for the breeding and management of live pigs, as it not only reflects their physical size and body structure but also has significant implications for reproductive management, health monitoring, feeding optimization, and accurate weight estimation. In the early stages of machine vision-based livestock body measurement, there was a focus on exploring image information.
Whittemore et al. [29] and White et al. [30] were among the pioneering researchers who utilized machine vision technology to measure pig body dimensions. They employed visual image analysis methods to detect pigs and gather information on their back dimensions and shapes. Liu et al. [31] employed automatic threshold segmentation and morphological processing to segment the contours of live pigs, extracting external contour information from pig images. Banhazi et al. [32] developed a system using a single camera to measure pig body width, body length, and area.
Building on previous research methods, Zhang et al. [33] have developed a system that integrates livestock data collection, image data processing, and livestock body dimension measurement. This system has achieved real-time collection and processing of livestock body dimension measurements, with over 90% accuracy in detecting sheep body dimensions within a 3% error margin. Numerous studies have demonstrated the significance of precise measurement point extraction in accurately determining animal body dimensions.
Teng et al. [34] conducted research using computer vision technology to estimate live pig body dimensions and weight. They used an inflection point algorithm based on concave structures to extract key points for body length, body height, width, and hip width and calculate body dimension parameters. However, the accuracy of their method was low, and the system was not sufficiently intelligent.
In order to measure pig body dimensions more precisely, Liu et al. [35] used envelope analysis to remove the head and tail of the pig body. They calculated key point coordinates such as body length, hip width, abdominal width, and shoulder width and used coordinate data to calculate body dimensions. The algorithm showed a smaller average relative error compared to actual measurements.

3.2. Weight Estimating Based on Image Body Dimension Parameters

Advancements in technology for measuring livestock body dimensions have driven the progress in estimating livestock weight. Currently, monocular cameras are predominantly utilized to capture images for detecting and analyzing relevant parameters of live pigs. Through this process, models are established to correlate the two-dimensional parameters with weight so as to estimate the weight of pigs accurately.
In the 1990s, Schofield et al. [36] carried out an analysis of pigs’ back images and found a significant correlation between weight and the projected area, as viewed from above. They developed a weight estimation model that maintained an average error controlled within 5%. In 2001, Minagawa [37] utilized stereoscopic projection technology to measure the overhead projected area and employed geometric optics principles to calculate pig body height. Subsequently, they utilized a multivariate regression equation of area and body height to estimate pig weight, achieving an average error of only 0.3 kg and a relative error of 0.8%. Doeschl et al. [38] conducted a study to investigate the changing patterns of body size and weight during the growth process of pigs. They utilized a system to continuously monitor parameters such as body length, body width, and area on the back of pigs and established a relationship model between these body size parameters, weight, and time to describe the process of pig growth transformation.
With the advancement of technology, Wang et al. [39] optimized image contour extraction methods and established a model that correlates volume parameters with weight to estimate weight in a system based on a monocular camera. This model maintains an error of 4.1%. Yang et al. [40] utilized image processing technology to calculate the actual projected area of breeding pigs and found a high correlation with weight up to 0.94. Based on this, they established a weight estimation model with a relative error of 2.8%.
Wang et al. [41] established a pig weight detection system based on a monocular camera. They selected the projected area of the pig’s back for estimating the pig’s weight. Meanwhile, to reduce the influence of light, an improved wide-range boundary detection algorithm was adopted, ensuring that the segmentation effect was not affected by light. Kaewtapee et al. [42] used a camera to obtain the back image of a pig. Through morphological operations, they calculated the proportion of live pig pixels to the total area and used the methods of regression analysis and artificial neural network (ANN) to establish a live pig weight estimation model.
Wu et al. [43] collected data using a monocular camera and employed the improved PBAS foreground detection algorithm and Canny edge detection algorithm to extract pig contour features. They also used rectangular red labels placed on the pig’s back and side to calculate body dimensions, establishing an evaluation model that predicts the weight of sows with an accuracy of over 95.5% in complex pig house environments.
As livestock body dimension measurement technology continues to advance, the measurement accuracy of numerous studies has reached levels suitable for commercial application. Banhazi [44] developed a system based on a single camera to detect pig body width, length, and area. These parameters were combined to estimate weight with an average error of 1.18 kg. In order to further improve the portability and simplicity of live pig weight estimation equipment, Gaganpreet et al. [45] devised a novel, portable, and user-friendly method using the smartphone image measurement application “On 3D Camera Measure.” This method involves taking overhead and side views of pigs in order to measure their body dimensions. The researchers applied this technique specifically to Ghoongroo pigs (a breed from India) and successfully predicted their weight. The goal of this method is to achieve non-contact measurement of pigs.
Cunha et al. [46] introduced a method for predicting the weight of pigs by using computer vision and machine learning algorithms with the aid of two-dimensional images. The researchers developed a multiple linear regression-based mathematical model that can automatically extract morphometric data from images, such as dorsal length, thoracic width, abdominal width, dorsal area, and dorsal circumference, for predicting the weight of pigs. The experimental results show that the average error of weight estimation is 2.39 kg, and the R² value is 0.88. This research provides a low-cost and efficient weight-monitoring tool for pig production, especially suitable for small farms with a relatively low technical level. Wan et al. [47] used a monocular vision method based on the improved EfficientVit-C model for rapidly, accurately, and non-invasively estimating the weight of pigs. This method, through image segmentation technology combined with depth estimation and an adaptive regression network, can effectively estimate the weight of pigs using only a single camera under different lighting conditions. Table 4 shows the estimation of pig weight based on image parameters.
As can be seen from Table 4, whether it is a single-camera shot or a multi-camera shot, a top view is indispensable. The top view contains parameters that are highly relevant to weight, such as body length and body width. Although there are other features used for weight estimation in Table 4, these features also have a certain connection with parameters like body length and body width.
The methods discussed above estimate pig weight from two-dimensional images, so the angle and posture of the pigs in the images significantly affect the accuracy of weight estimation. If a pig’s posture is not ideal or the shooting angle is improper, body features may be misinterpreted, affecting the results of body dimension and weight estimation [48]. Table 4 also shows that illumination changes influence the estimation of live pig weight: uneven or inappropriate illumination can produce shadows or overexposure, blurring the boundary of the pig’s body shape and thus reducing the accuracy of body size and weight estimation.

3.3. Estimating Body Dimensions and Weight Using Binocular Stereoscopic Vision

Although using a single camera can effectively estimate the body dimensions and weight of pigs, the parameters for measured body dimensions are somewhat limited. In order to increase the range of measurement items for livestock body dimensions, some studies have captured multi-angle views of livestock and simultaneously measured multiple body dimension parameters.
With the advent of new concepts and technologies, such as 3D modeling and 3D printing, there has been a gradual shift in focus from two-dimensional planes to three-dimensional spaces. Research has demonstrated that three-dimensional parameters, including volume, cross-sectional area, and three-dimensional measurement points, more accurately represent animal phenotypes and exhibit a high correlation with their weight. In contrast to two-dimensional images, three-dimensional measurements can directly capture live pigs’ height information, therefore overcoming the limitations of two-dimensional estimation methods and improving the precision of prediction models.
The current method is based on binocular stereoscopic vision technology, which employs two image sensors to capture RGB images of the pig’s body. This technique utilizes vision technology to map image points onto a three-dimensional coordinate system, extracting three-dimensional features of the pig’s contour and back area. It also measures pig parameters such as body length, body width, and rump height. Furthermore, it establishes related weight prediction models in order to estimate the weight of pigs.
Binocular stereoscopic vision technology offers rapid and accurate advantages in three-dimensional measurement applications, enabling the authentic reproduction of the three-dimensional structure of objects.
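A simplified sketch of this stereo pipeline is shown below: a disparity map is computed from rectified left and right views and reprojected to 3D points with OpenCV. The image paths, matcher settings, and the reprojection matrix Q (normally obtained from stereo calibration) are placeholder assumptions rather than parameters from the cited systems.

```python
import cv2
import numpy as np

# Rectified left/right views of the pig (placeholder file names).
left  = cv2.imread("left_rectified.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities and blockSize are tuning choices.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

# Q normally comes from cv2.stereoRectify during calibration; the values below
# are placeholders so the example runs end to end.
Q = np.float32([[1, 0, 0, -320],
                [0, 1, 0, -240],
                [0, 0, 0,  700],
                [0, 0, 1 / 0.12, 0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# Keep only pixels with a valid disparity; these form the pig's surface points,
# from which back area, body length, or rump height could then be measured.
valid = disparity > 0
print("reconstructed 3D points:", points_3d[valid].shape)
```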
Teng et al. [49] have made significant progress in the study of pig body dimension detection and weight estimation through the use of binocular vision technology. Si et al. [50] positioned cameras at a height of approximately 170 cm above the ground and 110 cm to the side of the corridor, capturing both overhead and side views of 103 pigs. They proposed an algorithm for detecting the ideal posture of pigs, which adjusts the pig’s body to a horizontal orientation, can identify the position of the head and tail, and assesses whether the head position is skewed. The results showed that the average accuracy of body width measurement was 95.5%, body height measurement was 96.3%, and body length measurement was 97.3%. Shi et al. [51] developed a mobile measurement system based on the LabVIEW platform for automatically measuring the weight composition parameters of pigs, such as body length, body width, body height, hip width, and hip height, in large-scale farms. The system uses a binocular camera to capture the back image of the pig and estimates the weight composition of the pig through image processing technology. The experimental results show that the system has high accuracy and reliability compared with the manual measurement method, providing a non-contact and efficient measurement method, which is beneficial for improving the production quality and management efficiency of the farm. Xue et al. [52] proposed employing a passive vision-based binocular stereoscopic vision method that automatically acquires data while the animal remains stationary.
During the process of camera calibration, a methodology integrating Bayesian principles, SIFT, and epipolar geometry was employed to tackle the two primary and challenging issues of camera calibration and stereo matching in binocular stereoscopic vision. This approach effectively mitigates the adverse effects of environmental factors on weight estimation. However, it is worth noting that this method imposes stringent requirements on the animal’s activity state and posture.
Shi et al. [53] employed a binocular stereoscopic vision system for capturing pig images and calculated body length and body height with average errors of 1.88 cm and 0.81 cm, respectively. They also utilized image processing to estimate the weight of the pigs, achieving an average error of 1.759 kg. These findings validate the potential of binocular stereoscopic vision systems in computing three-dimensional data. The correlation between dorsal area and weight in pigs not only serves as a method for weight estimation but also allows for post-hoc evaluation of body shape.
Although binocular stereoscopic vision enables rapid and accurate three-dimensional measurement and can authentically reproduce the three-dimensional structure of objects, its configuration and calibration are relatively complex, disparity calculation is resource-intensive, and image quality is significantly affected by lighting, which creates bottlenecks in weight estimation accuracy.

4. Estimating Live Pig Weight Based on Point Cloud Data

With the widespread adoption of depth cameras and advancements in related scientific technologies, point cloud technology has been increasingly utilized. This technology relies on three-dimensional coordinate point sets captured by 3D scanning devices, which record detailed physical characteristics of an object’s surface, including, but not limited to, three-dimensional positions (X, Y, Z coordinates), color, classification, intensity values, and temporal information [54]. The popularity of consumer-grade 3D cameras such as the Microsoft Kinect, Asus Xtion sensor, and Intel RealSense has led researchers to explore the use of 3D point cloud data for measuring animal body dimensions and estimating weight [55]. As a result, point cloud technology has become a key research direction in the fields of computer graphics and computer vision. Considerable progress has been made in using 3D point clouds to estimate the weight of live pigs.
Unlike image-based pig weight estimation, point cloud-based pig weight estimation involves the collection of livestock image data using depth cameras or 3D scanners. Compared to traditional color images, depth cameras can measure the distance from the camera to the object, allowing for the creation of a three-dimensional model of the livestock based on depth information. This enhances the precision of livestock body dimension measurements. Point cloud data is voluminous and complex in data type; therefore, preprocessing of livestock point cloud data is currently a focus in research on point cloud-based livestock body dimension measurements. Through preprocessing operations such as object detection and segmentation, point cloud simplification and completion, multi-view point cloud registration, and normalization of livestock posture, a complete, edge-smoothed, uniformly oriented livestock target image can be obtained. This effectively improves the accuracy of livestock body dimension measurements.
Detection and segmentation of livestock targets are the first steps in preprocessing image data. The quality of object detection and segmentation significantly impacts subsequent data processing [56]. Once the livestock target point cloud has been detected and segmented, simplifying the livestock point cloud can effectively eliminate redundant data and retain only the key feature-containing point clouds, thus enhancing data processing efficiency. When capturing side-view image data of livestock, partial loss of data often occurs due to obstructions from pen railings, which affects the accuracy of key point localization and measurement in livestock body dimension measurements.
The general process of multi-view point cloud registration consists in using depth sensors to collect the surface depth information of livestock. After data preprocessing and livestock target extraction, methods such as point cloud registration algorithms and calibration object registration are employed to conduct stereoscopic matching of the data obtained from multiple sensors. These methods work by aligning and matching the point clouds from different sensors to ensure accurate integration of the data. Ultimately, a three-dimensional reconstruction model of the livestock’s body is obtained. After obtaining the accurate three-dimensional reconstruction model, the next crucial step is to perform posture normalization, which entails rotating and translating the segmented livestock model into a predetermined global coordinate system. This ensures a consistent orientation for the livestock model while simplifying the complexity of the algorithm for measuring the dimensions of the livestock’s body.
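The sketch below outlines this preprocessing chain with Open3D: voxel downsampling to simplify the clouds, ICP registration to merge two partial views, and a simple PCA-based posture normalization. The file names, voxel size, and ICP threshold are placeholders, and PCA alignment is only one plausible way to normalize posture, not the specific method of any cited study.

```python
import numpy as np
import open3d as o3d

# Two partial scans of the same pig from different viewpoints (placeholder file names).
source = o3d.io.read_point_cloud("pig_view_side.ply")
target = o3d.io.read_point_cloud("pig_view_top.ply")

# Point cloud simplification: voxel downsampling removes redundant points.
source = source.voxel_down_sample(voxel_size=0.01)
target = target.voxel_down_sample(voxel_size=0.01)

# Multi-view registration: point-to-point ICP aligns the side view to the top view.
icp = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(icp.transformation)
merged = source + target

# Posture normalization: PCA rotates the merged cloud so the main body axis lies
# along X, body width along Y, and body height along Z, giving a consistent orientation.
pts = np.asarray(merged.points) - merged.get_center()
_, _, vt = np.linalg.svd(pts, full_matrices=False)
aligned = pts @ vt.T  # project onto the principal axes
extent = aligned.max(axis=0) - aligned.min(axis=0)
print("normalized extent (length, width, height):", extent)
```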
Figure 4 illustrates the process of normalizing livestock posture. The body dimension parameters of the collected livestock are calculated from the point cloud data, a weight prediction model is established, and the correlation between body dimensions and weight is analyzed to predict the weight of live pigs.

4.1. Measuring Live Pig Dimensions Using Point Clouds

Point cloud information enables the reconstruction of livestock’s actual dimensions, such as back area, body length, and body width, thereby enhancing the accuracy of weight estimation. The three-dimensional structure provides a more intuitive and clear presentation of the animal’s physical state. In recent years, 3D reconstruction has emerged as a prominent research focus in computer vision. The reconstruction of animal body type models and calculation of their body dimensions are widely applied in non-contact weight estimation methods.
Wang et al. [57] proposed a rotation normalization technique for extracting measurement points on a pig’s body. This method effectively controls measurement errors within 16 mm and significantly simplifies the data collection process. Moreover, Wang et al. [58] developed a single-viewpoint point cloud mirroring measurement technique. By using a single depth camera, they can reconstruct a complete pig point cloud even under partially occluded conditions. This approach enables the extraction of parameters such as body length, rump width, and rump height with detection errors of 5.0%, 7.4%, and 5.7%, respectively.
In order to obtain more comprehensive point cloud information, Yin et al. [59] used a Kinect v2 camera to gather partial point cloud data of pigs from three different perspectives. Through the application of a contour continuity-based point cloud registration fusion technique, they were able to reconstruct the three-dimensional shape of the pig and estimate key body dimensions such as body height, length, width, and girth. This approach facilitated a thorough analysis of the pig’s physical characteristics.
Guo et al. [60] utilized an Xtion Pro camera to gather point cloud data and devised a sphere calibration algorithm based on Random Sample Consensus in order to automatically extract key points and body dimensions. This approach enabled the automatic and swift registration of point cloud data, with overall errors controlled within 4%, validating the efficacy of the Xtion Pro camera for measuring body dimensions. The optimization of point cloud data and the improvement of computational efficiency are crucial issues in point cloud body measurement. Qin et al. [61] have developed a 3D measurement system based on dual Kinect cameras, utilizing an octree-based K-means clustering algorithm to streamline point cloud data and enhance computational efficiency. This approach is combined with pig body features for measuring body dimensions, resulting in a relative error of 3.1% for five main body dimensions.
Guo et al. [62] have developed an interactive livestock body measurement software based on the feasibility of using depth cameras to measure livestock body dimensions. This software offers a semi-automated suite of tools for loading, rendering, segmenting, normalizing posture, and measuring livestock body dimensions based on body point clouds. It provides notable data processing precision and potential for commercial application. However, this interactive body measurement software still relies on the manual selection of measurement points on the livestock’s body despite achieving some level of automated measurement of livestock body dimensions. Hu et al. [63] proposed an automated method for pig body dimension measurement using an improved PointNet++ point cloud segmentation model to rapidly locate measurement points and accurately obtain body dimensions. This method involves segmenting the overall pig point cloud into different parts, such as the head, ears, trunk, limbs, and tail, in order to locate key measurement points within the segmented parts. Wang et al. [64] developed a portable automatic measurement system based on two depth cameras for measuring the body size parameters of pigs, such as body width, hip width, and body height. The system realizes precise measurement by means of capturing point clouds, registering point clouds, removing background point clouds, extracting pig point clouds, normalizing postures, and applying morphological constraints.
The accuracy and stability of this method were validated by comparing it with manual measurement results. Menesatti et al. [65] utilized single-viewpoint depth images to capture livestock backs and extracted three-dimensional point cloud data from them. Salau et al. [66,67] employed a multi-view approach to capture depth images of different parts of livestock and obtained precise body dimension information through registration and fusion of the point cloud data. Du et al. [68] presented an automated method for measuring livestock body dimensions using multiple-depth cameras and key point detection. This approach involves merging 2D images with 3D point cloud data to identify key points on high-resolution RGB images, which are then projected onto the livestock’s 3D point cloud surface. The method provides accurate measurements of body dimensions for livestock such as cattle and pigs, while also improving measurement efficiency and reducing stress on the animals. Experimental results have confirmed its accuracy and robustness when compared to traditional manual measurements.
Luo et al. [69] proposed a method for animal body measurement based on statistical shape models. This approach involves fitting the statistical shape model of an animal to point cloud data from livestock and then extracting body measurements from the reconstructed mesh. The method is designed to tackle the challenges posed by point cloud data loss and inconsistency resulting from livestock movement in precision livestock farming. Validation of the proposed method was conducted on two common types of livestock, namely cattle and pigs, yielding an overall estimation accuracy of 91.95% for cattle and 87.63% for pigs. Lei et al. [70] developed a non-contact sensing system. This system uses a depth camera to obtain the body data of pigs and enhances the quality of 3D reconstruction by means of data preprocessing and normal estimation methods. The experimental results indicate that this technology can rapidly and accurately reconstruct the 3D model of pigs, with the measurement errors of chest circumference and hip circumference being only 3.55% and 2.83%, respectively. This research offers a new monitoring tool for the pig breeding industry and is conducive to improving the efficiency of breeding management and the health level of pigs. Table 5 presents the measurement of pig body size parameters based on three-dimensional point clouds, summarizing the number of cameras, shooting angles, the age of pigs, body size parameters, etc. Comparison with the weight estimation parameters in Table 4 shows that body height and chest circumference are measured accurately in multiple studies, and both parameters are highly correlated with weight.

4.2. Estimating Live Pig Weight Based on Point Clouds

Compared to traditional image-based methods, point cloud-based techniques enable the measurement of livestock features such as waist and hip circumferences. This provides richer body dimension parameters and enhances detection accuracy, ultimately improving the accuracy of livestock weight estimation results.
Condotta et al. [71] employed depth images to predict the weight of live animals, specifically focusing on pigs at growth and fattening stages. They developed an algorithm to process these images in MATLAB 7.6 software by using depth images obtained from Kinect sensors to capture the volume of pigs. The research also investigated the impact of different genders and breeds on prediction accuracy, revealing that the overall model demonstrated commendable predictive capabilities. Cang et al. [72] proposed an intelligent method for estimating pig weight using deep learning. They designed a deep neural network that takes top-view depth images of a pig’s back as input and outputs weight estimates. This network, based on the Faster-RCNN architecture, incorporates a regression branch to integrate pig identification, localization, and weight estimation. He et al. [73] used 3D imaging and a regression network for non-contact weight measurement of pigs. The study introduced an image enhancement preprocessing algorithm and a BotNet-based regression network to predict pig weight accurately. The network, enhanced with convolution and multi-head self-attention (MHSA) branches in parallel fully connected layers, achieved a mean absolute error (MAE) of 6.366 kg on the test image set. Arthur et al. [74] proposed a system that uses a 3D camera to acquire pig body measurement data and predict their weight automatically. The system is capable of predicting pig weight online without the need for manual intervention, achieving this by extracting image features and measuring shape descriptors. The study conducted comparisons between datasets containing animals from different age categories versus those consisting solely of mature pigs in order to investigate the impact of dataset selection on the accuracy of weight prediction. Li et al. [75] conducted experiments on 50 pigs, using body parameters as variables in regression analysis models. Their study demonstrated the accuracy and reliability of the Kinect v2 camera in measuring body shape and estimating live weight. The final ridge regression model achieved high accuracy, with an R² value of 0.958 and a mean absolute error (MAE) of 2.961 kg, showcasing its effectiveness in this application.
Na et al. [76] developed a pig weight prediction system based on Raspberry Pi, which consists of three components: collecting pig image data, predicting pig weight, and visualizing prediction results. Initially, 3D depth camera images of pigs are captured. Then, using a Raspberry Pi module, pigs are segmented from the input images, and features are extracted from the segmented images to predict their weight. The system’s performance is trained using a 3D sensor dataset collected from a specific farm and validated using independent data, demonstrating relatively good predictive capability. Nguyen et al. [77] conducted a study in which they utilized a handheld, portable RGB-D imaging system to estimate the weight of pigs. The researchers collected RGB-D data from fattening pigs of various weights and generated a point cloud for each pig. They then employed latent features from a 3D generative model to predict the weight of the pigs. Kwon et al. [78] proposed a deep learning-based method for rapidly reconstructing mesh models from pig point cloud data and extracting various measurements to estimate pig weight in real-time. The data was collected using a point cloud acquisition system, and a deep neural network (DNN) model was developed to predict the weight. Experimental results showed high accuracy, with prediction errors on the test dataset of only 4.89 kg, accounting for 2.11% of pig weight. This research provides a new technological approach for rapidly and accurately estimating livestock weight in the livestock industry.
Selle et al. [79] introduced a three-dimensional data modeling method for pig breeding. By constructing a statistical shape model, the size, morphology, and posture changes of pigs can be analyzed quantitatively and intuitively. Through linear regression analysis with body volume as the only predictor variable, accurate prediction of pig weight was achieved. Table 6 summarizes work on estimating live pig weight from three-dimensional point clouds, including the number of cameras, shooting angles, weight estimation parameters, and weight estimation methods. During data collection, most setups use a top-down camera, either alone or in combination with other views. Compared with the weight estimation parameters in Table 4, many additional weight-related parameters become available, which compensates for the deficiencies of two-dimensional images.
While the use of 3D point clouds for measuring and estimating the weight of livestock has improved detection accuracy compared to traditional two-dimensional images, it is important to note that the large volume of data generated by 3D point clouds requires high algorithmic precision. As a result, the computation process leads to longer response times and higher algorithmic complexity, which is not conducive to real-time data collection and processing.

5. Methods of Estimating Live Pig Weight

To estimate the weight of live pigs, body feature parameters are extracted from images, and mathematical regression methods are used to predict weight. This process has developed through two stages: one based on traditional machine learning and one based on deep learning.
In the traditional machine learning stage, body feature parameters are either measured manually or extracted from collected animal images using algorithms and software. Parameters that show a high correlation with animal weight are selected for use in estimating the animal’s weight through mathematical regression methods such as linear regression, nonlinear regression, or machine learning. In the deep learning stage, specific animal images or video segments serve as inputs for the deep learning model. Through training computations performed by deep neural networks, the model outputs the animal’s weight or other target parameters.
Once the model for estimating live pig weight is determined, it is crucial to analyze its accuracy and relevance. The evaluation aims to compare differences between different prediction models for the same species or the applicability and performance differences of the same model on different species. Commonly used evaluation metrics in animal weight estimation tasks include mean squared error (MSE), mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), and the coefficient of determination R². MSE represents the effectiveness of the model fit; the smaller the MSE, the better the model fit and the more accurate the model’s predictions. The formula for MSE is as follows:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \tilde{Y}_i\right)^2$$
Mean absolute error (MAE) is the average of absolute errors used to assess the model’s fit on given data. Its magnitude also reflects the performance of the live pig weight model. The formula for MAE is as follows:
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|Y_i - \tilde{Y}_i\right|$$
Root mean squared error (RMSE) is a commonly used metric to measure the differences between the predicted values of a model and the actual observed values. It is used to evaluate how well a model fits the given data. The formula for RMSE is as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \tilde{Y}_i\right)^2}$$
Mean absolute percentage error (MAPE) is commonly used to evaluate the discrepancy between predicted values and actual values. The formula for MAPE is as follows:
$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{Y_i - \tilde{Y}_i}{Y_i}\right| \times 100\%$$
R², also known as the coefficient of determination, is a statistical measure used to evaluate the goodness of fit of a regression model. The formula for R² is as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(Y_i - \tilde{Y}_i\right)^2}{\sum_{i=1}^{n}\left(Y_i - \bar{Y}\right)^2}$$
In the formulas above, n represents the number of samples, $\tilde{Y}_i$ is the predicted value for the i-th sample, $Y_i$ is the actual value of the i-th sample, and $\bar{Y}$ denotes the mean of the actual values.
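For reference, all five metrics can be computed directly from paired actual and predicted weights, as in the NumPy sketch below; the example weights are illustrative, not results from any cited study.

```python
import numpy as np

def weight_metrics(y_true, y_pred):
    """Return the evaluation metrics defined above for a weight estimation model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse  = np.mean(err ** 2)
    mae  = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y_true)) * 100.0
    r2   = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

# Illustrative actual vs. predicted pig weights in kg (placeholder values).
print(weight_metrics([85.0, 92.5, 101.0, 110.3], [83.6, 94.1, 99.2, 112.0]))
```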

5.1. Estimating Live Pig Weight Using Traditional Methods

Traditional machine learning methods for estimating pig weight are known for their simplicity, wide applicability, and ease of use. With advancements in mathematical regression techniques, feature parameters have progressed from one-dimensional to multidimensional representations. The integration of machine learning methods into the regression prediction process can significantly enhance prediction accuracy. As shown in Figure 5, the process of estimating live pig weight using traditional methods begins with image collection and preprocessing, followed by the extraction of weight-related features. Subsequently, the optimal prediction model is utilized to predict the weight of live pigs, leading to obtaining the estimated weight of the pigs.
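A minimal example of this traditional pipeline, assuming body length and heart girth have already been measured, is sketched below using multivariate linear regression in scikit-learn; the measurements and weights are hypothetical placeholders rather than data from the studies cited in this section.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical training data: [body length, heart girth] in cm, weight in kg.
X = np.array([[ 98.0,  95.1], [104.5, 101.3], [110.2, 108.9],
              [121.7, 118.4], [130.3, 127.6], [136.0, 133.2]])
y = np.array([58.2, 67.9, 79.4, 96.1, 112.8, 124.5])

# Fit a multivariate linear model, as in the early regression-based studies.
model = LinearRegression().fit(X, y)

# Predict the weight of a new pig from its measured body dimensions.
new_pig = np.array([[115.0, 112.0]])
print("predicted weight (kg):", model.predict(new_pig)[0])
print("training MAE (kg):", mean_absolute_error(y, model.predict(X)))
```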
In the early stages of weight estimation, scholars often used regression techniques to estimate the weight of live pigs. Schofield [36], Kashiha [80], and Shi et al. [81] employed linear regression to estimate pig weight. Multivariate linear regression has been utilized for animal weight estimation to improve the accuracy of weight estimation further. Sungirai et al. [82] used a stepwise multivariate linear regression method to build a weight estimation model, employing linear body measurements such as body length and chest girth to estimate live pig weight. Alenyoregue [83] analyzed the relationship between these linear body measurements and actual weight using simple linear regression and multivariate linear regression methods. To further enhance the predictive accuracy of multivariate regression models for live pig weight, Al Ard Khanji et al. [24] developed new multivariate regression models and found that the new models were more accurate than previous ones. The use of nonlinear models has further improved the precision of livestock weight estimation; Wang et al. [12] found that nonlinear models were more effective than linear models for estimating pig weight. Szyndler-Nedza et al. [84] used both linear and nonlinear models to estimate pig weight, finding that the nonlinear estimates were more accurate. Wongsriworaphon et al. [85] developed a nonlinear weight estimation method based on digital image analysis, solving the inconveniences of traditional weighing methods in farm environments and providing farms with a non-invasive, practical method for measuring weight. Jun et al. [86] fully utilized the latest advancements in machine learning technology when building their estimation models, developing new nonlinear weight estimation models. In addition to the traditional pig body area as a primary feature parameter, they introduced two new features, curvature and deviation, which are related to the pig’s posture and help more accurately adjust the weight estimation.
Pezzuolo et al. [87] utilized a depth camera for non-contact measurement of pig body sizes and developed linear and nonlinear models to estimate the weight of pigs. The research indicates that the estimation results of the nonlinear model are highly correlated with the directly measured weight, and the average absolute error is reduced by over 40% compared to manual measurement, demonstrating the potential and accuracy of this method in monitoring pig growth and health. Ruchay et al. [88] predicted the live weight of Duroc, Landrace, and Yorkshire pigs by comparing multiple machine learning methods. The research revealed that the Stacking Regressor model performed best on the test dataset, with an average absolute error (MAE) of 4.331 kg and an average absolute percentage error (MAPE) of 4.296%, suggesting that machine learning technology has high accuracy and application potential in predicting the live weight of pigs. Preethi et al. [89] employed artificial neural networks and nonlinear regression models to predict the weight of piglets at different growth stages based on 24 linear body size parameters. The research discovered that specific combinations of body size parameters, such as chest circumference, body length, and abdominal circumference, exhibited a high correlation when predicting the weight of piglets. The research findings are beneficial for estimating the weight of pigs through simple body size measurements in a farm environment lacking direct weighing conditions.
Tu et al. [90] developed a computer vision-based system, vision analysis and prediction (VAP), for estimating the weight of pigs in a slaughterhouse pen. The system uses a visual algorithm to segment the image and identify the pig region and then applies statistical analysis and linear regression to predict weight. Jiang et al. [91] adopted an integrated regression model to estimate pig weight: an Azure Kinect DK camera collected back images of pigs, and deep learning and machine learning techniques were then used to extract and analyze image features, ultimately achieving high-precision weight prediction. Lin et al. [92] used an improved Poisson reconstruction algorithm to estimate weight from three-dimensional point cloud data. Point clouds of 479 pigs were collected from multiple angles, and steps including point cloud preprocessing, normal vector estimation, and mesh model reconstruction were combined to obtain pig volume, from which weight was finally estimated with a linear regression model.
Table 7 presents the estimation of pig weight using traditional methods, summarizing image acquisition devices, shooting angles, weight estimation parameters, weight estimation methods, etc. It can be observed that with the passage of time, depth cameras have gradually replaced the use of visible light cameras, and the quantity of weight estimation parameters has also increased.
While machine learning methods have straightforward logical principles, facilitating easy decision-making and high efficiency, traditional machine learning algorithms are generally less accurate in comparison to deep learning models. Furthermore, deep learning methods typically demonstrate strong generalization across different species and domestic animals in various environments.

5.2. Deep Learning-Based Weight Estimation

Through continuous exploration and experimentation by experts and scholars, end-to-end deep learning methods have been developed for estimating animal weight. Deep learning algorithms can accurately and efficiently extract high-value features from high-dimensional complex data, yielding results that are more accurate and reliable than those obtained with traditional machine learning methods. Figure 6 shows the process of estimating body weight based on deep learning.
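To make the end-to-end idea concrete, the PyTorch sketch below defines a small convolutional network that maps a single-channel top-view depth image directly to a scalar weight and runs one training step on dummy data. The architecture and values are illustrative only and do not reproduce any of the networks cited in this section.

```python
import torch
import torch.nn as nn

class DepthWeightRegressor(nn.Module):
    """Toy CNN: one-channel depth image in, scalar weight estimate out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, depth_image):
        x = self.features(depth_image)
        return self.head(x.flatten(1)).squeeze(1)  # predicted weight in kg

# One training step on a dummy batch of 4 depth images (128x128) with known weights.
model = DepthWeightRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(4, 1, 128, 128)                # placeholder depth images
weights = torch.tensor([74.0, 88.5, 95.2, 103.7])  # placeholder ground-truth weights (kg)

loss = nn.functional.mse_loss(model(images), weights)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("MSE loss after one step:", loss.item())
```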
Meckbach et al. [93] collected depth images of over 400 pigs at different growth stages and, combined with the pigs' actual weights, trained a deep learning model. The model learns directly from the images and extracts key features to predict pig weight, without any additional feature engineering or preprocessing. Zhang et al. [94] proposed a method for rapidly and automatically estimating pig weight and body shape using a deep learning model; comparing different convolutional neural network models, they found that a modified Xception model performed best on the test data and completed the estimation task with high accuracy. He et al. [95] used a two-stream cross-attention vision Transformer to predict pig weight from RGB and depth images. The combination of the two modalities provides a complementary representation of the pig's spatial body information, resulting in a mean absolute error of 3.237 kg. Chen et al. [96] proposed a live pig weight prediction method based on a multilayer radial basis function (RBF) network. The network learns autonomously and predicts live weight from body parameters such as body length, height, and width. The results show that this network structure achieves high accuracy and stability in predicting live pig weight. The method reduces reliance on two-dimensional area parameters, minimizes the impact of environmental conditions, lighting, and changes in pig posture on the stability of the prediction system, and offers a practical technological approach for smart agriculture and pig health management.
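To make the end-to-end idea concrete, the following minimal PyTorch sketch maps a single-channel depth image of a pig's back directly to a weight value; it is an illustrative toy network, not the Xception, Transformer, or RBF architectures used in the cited studies, and the tensors stand in for real labelled data.

```python
# Minimal sketch (illustrative only): an end-to-end CNN that regresses body weight
# from a single-channel depth image of the pig's back.
import torch
import torch.nn as nn

class DepthWeightRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, depth):          # depth: (B, 1, H, W), metres or normalised
        return self.head(self.features(depth)).squeeze(1)  # (B,) predicted kg

model = DepthWeightRegressor()
criterion = nn.L1Loss()                # optimise MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random tensors standing in for a labelled batch
images = torch.rand(8, 1, 224, 224)
weights = torch.rand(8) * 80 + 40      # hypothetical 40-120 kg labels
loss = criterion(model(images), weights)
loss.backward()
optimizer.step()
```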
Liu et al. [97] proposed a method for estimating pig weight based on 3D hybrid filtering and convolutional neural networks. The approach combines statistical filtering and DBSCAN clustering to accurately segment pig back images, while voxel subsampling improves real-time efficiency. Using parameters extracted from the pig's back together with convolutional neural networks, the method achieves precise weight estimation, with a mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) of 12.45 kg, 5.36%, and 12.91 kg, respectively. Compared with existing 2D and 3D weight estimation methods, this approach simplifies equipment configuration, reduces data processing complexity, and maintains estimation accuracy, providing an effective monitoring solution for precision pig farming management.
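The preprocessing chain described above can be approximated with standard point cloud tooling. The Open3D sketch below applies voxel subsampling, statistical outlier filtering, and DBSCAN clustering; the file name and all thresholds are illustrative assumptions, not values from Liu et al. [97].

```python
# Minimal sketch with Open3D: voxel subsampling, statistical outlier filtering,
# and DBSCAN clustering, keeping the largest cluster (assumed to be the pig's back).
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("pig_back.ply")             # hypothetical input scan

# 1) Voxel subsampling to cut the point count and speed up later steps
pcd = pcd.voxel_down_sample(voxel_size=0.01)               # 1 cm voxels (assumed)

# 2) Statistical filtering: drop points far from their neighbourhood mean distance
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 3) DBSCAN clustering; keep the largest cluster as the pig body segment
labels = np.asarray(pcd.cluster_dbscan(eps=0.03, min_points=30))
largest = np.bincount(labels[labels >= 0]).argmax()
pig = pcd.select_by_index(np.where(labels == largest)[0])
print(f"kept {len(pig.points)} of {len(pcd.points)} points")
```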
Tan et al. [98] proposed a pig weight estimation method based on a dual-stream fusion network and ConvNeXtV2, using RGB-D data for precise measurement. By integrating RGB and depth images, the algorithm accurately estimated the weight of moving pigs in dynamic farm environments, and its feature extraction and fusion techniques achieved a lower mean absolute error and mean absolute percentage error. Liu et al. [99] introduced an unconstrained deep learning-based pig body mass estimation model (PMEM-DLWC), which uses Mask R-CNN for pig instance segmentation, Keypoint R-CNN for keypoint detection, and an enhanced ResNet for estimating body mass. The model applies image processing and deep learning technologies to estimate pig body mass accurately and rapidly in open environments without restricting the pigs' movement.
Experimental results showed that the model achieved a root mean squared error (RMSE) of 3.52 kg on the test set, demonstrating high estimation accuracy and real-time performance and suggesting significant potential for adjusting breeding plans and enhancing production efficiency. Liu et al. [100] introduced a non-contact weight and body measurement model based on dorsal point cloud data of pigs. The model captures three-dimensional point clouds with a depth camera and uses a convolutional neural network combined with a multi-head attention mechanism to predict pig weight; incorporating RGB information as supplementary features was shown to further improve prediction accuracy. Compared with traditional manual measurement, this model offers potential advantages in reducing errors, increasing efficiency, and promoting animal welfare. Xie et al. [101] predicted live pig weight with a new method based on an improved Mask R-CNN, using ResNeSt as the backbone network combined with an FPN and an attention mechanism to improve image segmentation accuracy. They collected depth images of the backs of 132 pigs with a depth camera, extracted five key body size parameters (back area, length, width, average depth, and eccentricity), and trained a weight regression model on these parameters.
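A hedged sketch of the hand-crafted descriptor step is given below: it derives back area, length, width, mean depth, and an eccentricity value from a binary back mask and an aligned depth map, assuming a known metres-per-pixel scale and a top-down view with the body axis along the image rows. The helper function and its values are illustrative, not the implementation of Xie et al. [101], whose masks come from an instance-segmentation network.

```python
# Minimal sketch (assumptions throughout): turning a binary back mask and an aligned
# depth map into descriptors of the kind fed to a weight-regression model.
import numpy as np

def back_descriptors(mask: np.ndarray, depth_m: np.ndarray, px_size_m: float):
    """mask: HxW bool; depth_m: HxW depth in metres; px_size_m: metres per pixel (assumed known)."""
    ys, xs = np.nonzero(mask)
    area = mask.sum() * px_size_m ** 2                       # back area in m^2
    length = (ys.max() - ys.min() + 1) * px_size_m           # extent along the body axis (rows)
    width = (xs.max() - xs.min() + 1) * px_size_m            # transverse extent (columns)
    mean_depth = float(depth_m[mask].mean())                 # average camera-to-back distance
    # Eccentricity of the mask from the covariance of its pixel coordinates
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.sort(np.linalg.eigvalsh(cov))
    ecc = float(np.sqrt(1.0 - evals[0] / evals[1]))
    return np.array([area, length, width, mean_depth, ecc])

# Toy example: an elliptical mask over a flat depth plane
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
mask = ((xx - 160) / 60) ** 2 + ((yy - 120) / 100) ** 2 <= 1.0
depth = np.full((h, w), 1.8)                                  # camera 1.8 m above the back
print(back_descriptors(mask, depth, px_size_m=0.004))
```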
Paudel et al. [102] proposed a method based on 3D convolutional neural networks to predict pig weight from point cloud data. An Intel RealSense D435 stereo depth camera was used to capture 3D videos of freely walking fattening pigs, and the PointNet framework was employed to train and validate models on 1186 point clouds. The PointNet regression model achieved a coefficient of determination of 0.94 on the test point clouds, with a validation root mean squared error (RMSE) of 6.79 kg and a test RMSE of 6.88 kg. These findings demonstrate the strong potential of deep learning for accurately predicting pig weight from point sets, even with a limited training dataset, and this study validates the feasibility of using deep learning to predict the weight of farm animals from point sets, while also highlighting the need for larger datasets to obtain the most accurate predictions.
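The following minimal PyTorch sketch shows the core idea of PointNet-style regression on unordered point sets (shared per-point MLPs followed by symmetric max pooling). It omits the input and feature transform networks of the full PointNet framework used by Paudel et al. [102] and is only an illustration of the structure.

```python
# Minimal sketch of a PointNet-style regressor: shared per-point MLPs, an
# order-invariant max pooling over points, then a small regression head.
import torch
import torch.nn as nn

class PointNetWeightRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(         # weights shared across points
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, xyz):                     # xyz: (B, N, 3) point coordinates
        feats = self.point_mlp(xyz.transpose(1, 2))   # (B, 1024, N)
        global_feat = feats.max(dim=2).values         # symmetric pooling over points
        return self.head(global_feat).squeeze(1)      # (B,) predicted weight in kg

model = PointNetWeightRegressor()
cloud = torch.rand(4, 2048, 3)                  # four clouds of 2048 points each
print(model(cloud).shape)                       # torch.Size([4])
```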
Table 8 shows studies estimating pig weight with deep learning methods, summarizing the image acquisition device, weight estimation method, number of pigs, and reported errors. As the technology has matured, deep learning methods are used increasingly often for pig weight estimation, and compared with Table 7, the number of pigs involved in each experiment has also grown.
When utilizing machine learning and deep learning algorithms to predict pig weight, it has been observed that deep learning algorithms outperform other methods in terms of prediction accuracy. This is evidenced by the higher correlation coefficients and lower root mean squared errors exhibited by the deep learning algorithms. These findings suggest that, with adequate data support, deep learning algorithms may offer superior performance for pig weight prediction tasks and demonstrate strong generalization capabilities, rendering them suitable for implementation across various pig farms. However, it should be noted that employing deep learning methods for weight estimation necessitates a substantial amount of labeled data for training. This often entails significant human and material resources in the data collection and labeling process.
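For reference, the accuracy metrics quoted throughout this section and in Tables 7 and 8 can be computed as in the short NumPy sketch below; the numbers shown are toy values only.

```python
# Minimal sketch of the evaluation metrics used to compare methods: MAE, RMSE,
# MAPE, and the coefficient of determination R^2.
import numpy as np

def weight_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mape = 100.0 * np.abs(err / y_true).mean()
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    return {"MAE/kg": mae, "RMSE/kg": rmse, "MAPE/%": mape, "R2": r2}

print(weight_metrics([92.0, 105.5, 78.2, 110.0], [90.1, 108.0, 80.0, 107.5]))
```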

6. Discussion

6.1. Current Main Challenges

In recent years, domestic and foreign researchers have made numerous improvements and innovations in the field of automatic livestock body size measurement and weight estimation, achieving fruitful results. However, this technology still requires enhancement in aspects such as real-time performance, accuracy, universality, and automation level [99,100]. When researching livestock body size measurement and weight estimation technology, the following key issues need attention:
Regarding data acquisition: When collecting video data of pigs on a farm, complex background environment changes like lighting and shadows, as well as obstructions from railings and buildings, can affect data quality and impede the accurate detection of key body parts of pigs [103]. Moreover, using multiple sensors for data collection increases costs and restricts the commercial promotion and application of automatic livestock weight estimation technology [45].
In terms of three-dimensional point cloud data: Pig body size measurement and weight estimation are mostly based on three-dimensional point clouds. Although point clouds offer rich positional and detail information with high precision, their large data volume often leads to feature redundancy, long computation times, and high computing power demands, which reduces the efficiency of information transmission and data processing. Additionally, real-time acquisition of livestock three-dimensional point cloud information cannot be guaranteed [104].
Concerning point cloud registration: The coincidence rate of multi-view pig point cloud data is low, and existing point cloud registration algorithms tend to lead the registration into a locally optimal solution. Since livestock are non-rigid objects, their body movements can cause asynchrony in multi-viewpoint cloud data, further increasing the difficulty of point cloud registration.
In the aspect of pig weight estimation: Many studies on pig weight estimation lack robustness and universality, with algorithms relying on data from specific scenarios.
Regarding data acquisition efficiency: When collecting large-scale livestock data, acquisition works well while the animals are stationary, but restraining or controlling livestock behavior can easily trigger stress responses and lowers acquisition efficiency, which hinders the automation and large-scale application of body size measurement technology.
In terms of model generalization ability: Although models often generalize well in experimental environments, in practical application scenarios their generalization ability still needs to be verified on pigs of different breeds and growth stages [53].

6.2. Future Development Trends

With the development of three-dimensional image reconstruction technology, the existing problems in livestock body size measurement and weight estimation are expected to be effectively solved. The future development directions in this field are as follows:
In order to solve the problems of illumination, shadow, and occlusion that farms encounter when collecting video data of pigs, multi-sensor fusion technology can be employed. This technology can be combined with image enhancement and illumination normalization algorithms to enhance data quality. Specifically, multiple cameras can be arranged to capture pig videos from diverse angles. Meanwhile, high dynamic range imaging (HDR) technology can be utilized to mitigate the impact of illumination variations. Additionally, deep learning models like convolutional neural networks (CNNs) can be applied to perform illumination normalization and shadow suppression on images, thereby improving the detection accuracy of the key body parts of pigs.
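One possible, lightweight realization of the illumination normalization idea, assuming OpenCV is available, is contrast-limited adaptive histogram equalization (CLAHE) applied to the lightness channel; the file name below is hypothetical and the parameters are illustrative, not a prescribed configuration.

```python
# Minimal sketch: normalise illumination in barn images with CLAHE on the
# lightness channel, reducing the influence of uneven lighting and soft shadows.
import cv2

def normalize_illumination(bgr, clip_limit=2.0, tile=(8, 8)):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)            # work on lightness only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

frame = cv2.imread("pen_frame.jpg")                        # hypothetical barn image
if frame is not None:
    cv2.imwrite("pen_frame_normalized.jpg", normalize_illumination(frame))
```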
With regard to sensor cost, costs can be reduced by optimizing algorithms and selecting sensors with a high cost-performance ratio. Meanwhile, an automated data analysis platform can be developed to lessen the dependence on manual operations, so that a balance between cost-effectiveness and technology promotion is achieved without sacrificing data quality. As three-dimensional reconstruction technology advances, single-image three-dimensional reconstruction can be employed: collecting a single image is more convenient because only one camera is required, its cost is lower than acquiring three-dimensional point clouds with multiple depth cameras, and the reconstruction procedure is simpler and faster, reducing the preprocessing steps and accumulated errors of three-dimensional point cloud pipelines.
In response to the low coincidence rate of multi-view pig point cloud data, the tendency of registration to fall into local optima, and the data asynchrony caused by the movement of livestock as non-rigid objects, the following solutions can be implemented. Firstly, scans can be aligned to a standard coordinate system through an incremental process; inspired by image-based three-dimensional reconstruction, a sparse scan graph is established. Secondly, a second-order spatial compatibility metric can be introduced, and model generation and selection in the classic RANSAC method re-examined, to achieve rapid and robust point cloud registration. Additionally, for the registration of non-rigid objects, deep learning can be used to extract point cloud features and combined with a consistency decision algorithm to obtain initial correspondences, thereby improving the accuracy and robustness of registration. Finally, global optimization strategies such as simulated annealing or genetic algorithms can prevent the algorithm from terminating at a local optimum, allowing it to escape local optima and approach the global optimum. Integrating these methods can effectively improve the registration quality of multi-view pig point cloud data, reduce the likelihood of falling into local optima, and mitigate the data asynchrony caused by livestock movement.
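As an illustration of the coarse-to-fine registration idea, the sketch below uses standard Open3D components (FPFH features with RANSAC, refined by point-to-plane ICP). It is not the second-order spatial compatibility or deep feature method discussed above, and the file names, voxel size, and thresholds are assumptions; the Open3D 0.13+ API is assumed.

```python
# Minimal sketch: coarse multi-view alignment with FPFH + RANSAC, refined by
# point-to-plane ICP (standard Open3D tools, illustrative parameters).
import open3d as o3d

def preprocess(pcd, voxel=0.01):
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

src = o3d.io.read_point_cloud("pig_view1.ply")    # hypothetical multi-view scans
dst = o3d.io.read_point_cloud("pig_view2.ply")
src_d, src_f = preprocess(src)
dst_d, dst_f = preprocess(dst)

# Coarse alignment: feature matching + RANSAC (robust to a low overlap ratio)
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_d, dst_d, src_f, dst_f, True, 0.03,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment: point-to-plane ICP started from the RANSAC estimate
fine = o3d.pipelines.registration.registration_icp(
    src_d, dst_d, 0.01, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", fine.fitness, "inlier RMSE:", fine.inlier_rmse)
```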
To enhance the robustness and universality of pig weight estimation, multi-source data fusion technology can be employed. By integrating computer vision and deep learning models, multiple body size parameters, including body length and chest circumference, can be comprehensively utilized for predicting weight. At the same time, through data augmentation, transfer learning, and cross-scene verification, the generalization ability of the model can be improved. Moreover, with the support of a real-time monitoring and management system, data-driven accurate weight estimation can be realized, reducing the dependence on data in specific scenarios and ensuring accurate weight estimation results in diverse breeding environments.
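The transfer learning step can be sketched as follows: an ImageNet-pretrained ResNet-18 from torchvision (version 0.13 or later assumed) has its classifier replaced by a one-output regression head and is fine-tuned on a hypothetical cross-farm dataset; the tensors below merely stand in for real labelled batches.

```python
# Minimal sketch of transfer learning for weight regression: freeze a pretrained
# backbone, train a new regression head, later unfreeze selected layers.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT")
for p in model.parameters():           # freeze pretrained features at first
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 1)   # new regression head (trainable)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.SmoothL1Loss()

# Dummy batch standing in for top-view RGB images and their scale weights (kg)
images, weights = torch.rand(8, 3, 224, 224), torch.rand(8, 1) * 80 + 40
loss = criterion(model(images), weights)
loss.backward()
optimizer.step()
# After the head converges, selected backbone layers can be unfrozen for fine-tuning.
```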
In the aspect of optimizing the collection environment and process, the layout design of the farm can be remodeled to construct a naturally guided collection environment. Sound and light can be utilized to guide livestock to be distributed within the collection area. Meanwhile, an intelligent collection process can be devised, involving batch collection and the implementation of a dynamic collection strategy. Regarding technological innovation and application, non-contact collection technologies such as laser scanning and three-dimensional modeling, machine vision and deep learning, as well as sensor fusion and intelligent analysis, can be adopted. Multiple types of sensors can be integrated, and big data analysis and artificial intelligence technologies can be employed to handle data. In terms of personnel training and management, collectors can receive professional training to enhance their skills. Team collaboration and communication can be reinforced. Additionally, awareness of animal welfare can be fostered. Publicity and educational activities can be launched, and an incentive mechanism can be established. Through these measures, the efficiency of large-scale livestock data collection can be enhanced, and the automated and large-scale development of body size measurement technology can be propelled.
To address the need to verify model generalization in practical scenarios for pigs of different breeds and growth stages, measures can be taken in three areas: data augmentation and diverse collection, model optimization and improvement, and real-time monitoring with feedback adjustment. For data augmentation and diverse collection, the dataset should be expanded by gathering data on pigs of different breeds and covering the entire growth cycle, and techniques such as image augmentation and data synthesis can further enhance data diversity. For model optimization and improvement, multi-model fusion strategies such as ensemble learning can be employed, and pre-trained models can be fine-tuned via transfer learning for specific breeds and growth stages. For real-time monitoring and feedback adjustment, an online learning mechanism can monitor the model's predictions in real time and collect feedback, while multiple evaluation metrics and validation in actual scenarios track performance. Through these steps, the generalization ability of the model across breeds and growth stages can be effectively enhanced.

6.3. Feasibility Analysis of Single-Image 3D Reconstruction

Current research predominantly uses point cloud generation methods from depth cameras to estimate the body dimensions and weight of livestock. However, this approach introduces several issues, including the relatively high cost of depth cameras, significant influence from the data collection environment, and limited shooting angles. With the emergence of technologies such as neural radiance fields and 3D Gaussian splatting, 3D reconstruction methods based on multi-view or single-view images are becoming increasingly sophisticated.
Hong et al. [105] proposed the first Large Reconstruction Model (LRM), capable of generating a 3D model of an object from a single input image in just 5 s. This model performs 3D reconstruction of an object based on a single image input, utilizing a high-capacity model and extensive training data to achieve versatility and produce high-quality 3D reconstructions from various test inputs, including real-world field captures and images created by generative models.
Because LRM does not build geometric priors of the tri-plane representation into its architecture, and because 3D training data are scarce and training is slow, its reconstructions can be of limited quality. Wang et al. [106] therefore introduced the convolutional reconstruction model (CRM), a high-fidelity feed-forward single-image-to-3D generative model. Recognizing the constraints imposed by sparse 3D data, CRM integrates geometric priors into the network design, based on the key observation that six orthogonal view images correspond spatially to the tri-plane representation. It first generates six orthogonal view images from a single input image; these are then fed into a convolutional U-Net, whose strong pixel-level alignment capability and ample bandwidth produce high-resolution tri-planes. In addition, CRM uses FlexiCubes as its geometric representation, enabling direct end-to-end optimization of textured meshes.
With the advancement of technology, some scholars have applied single-view or multi-view reconstruction to agriculture. Hu et al. [107] utilized neural radiance fields, focusing on two fundamental tasks in plant phenotypic analysis: the synthesis of new viewpoint images from two-dimensional images and 3D reconstruction of plant models. They introduced a novel plant phenotype dataset containing real plant images from production environments aimed at comprehensively exploring the advantages and limitations of NeRF in the agricultural sector.
The experimental results have demonstrated the strong performance of NeRF in synthesizing new viewpoint images. Moreover, Zhu et al. [108] extended this research by measuring reconstructed plant phenotypic parameters and proposing a system for three-dimensional modeling and phenotypic parameter acquisition of seedling crops based on neural radiance fields. This demonstrates the potential of NeRF in enhancing the capabilities of three-dimensional modeling and phenotypic parameter acquisition in agricultural applications.
Using a smartphone, they captured RGB images from different perspectives, and the NeRF algorithm was employed to construct 3D models. Based on this, algorithms like circle fitting and triangulation were used to accurately measure plant phenotypic parameters such as plant height, stem thickness, and leaf area. The method extracted stem height and stem thickness of pepper seedlings with determination coefficients (R2) of 0.971 and 0.907, respectively, and root mean square errors (RMSE) of 0.86 cm and 0.017 cm, respectively. For leaf area extraction in different seedling stages, the R2 ranged from 0.909 to 0.935, and RMSE ranged from 0.75 to 3.22 cm2, indicating high measurement accuracy.
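As a small illustration of the circle-fitting step used for stem thickness, the sketch below performs an algebraic least-squares (Kasa) circle fit on a toy cross-section of points; it is not the cited system's code, and the data are synthetic.

```python
# Minimal sketch: algebraic least-squares circle fit on a planar cross-section;
# the fitted radius directly gives the stem diameter.
import numpy as np

def fit_circle(xy):
    """xy is an (N, 2) array of cross-section points; returns centre, radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), float(np.sqrt(c + cx ** 2 + cy ** 2))

# Toy cross-section: noisy points on a circle of radius 0.85 cm
theta = np.linspace(0, 2 * np.pi, 80)
pts = np.column_stack([1.0 + 0.85 * np.cos(theta), 2.0 + 0.85 * np.sin(theta)])
pts += np.random.default_rng(1).normal(0, 0.01, pts.shape)
centre, radius = fit_circle(pts)
print(f"estimated stem diameter: {2 * radius:.2f} cm")
```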
The previously mentioned methods illustrate the progress and viability of 3D reconstruction techniques for plant reconstruction and phenotypic estimation. It is therefore feasible to pursue three-dimensional reconstruction of livestock with neural radiance fields and Gaussian splatting, by capturing multi-view or single-view images of the animals, and then to use the reconstructions to estimate body dimensions and weight. Compared with beef cattle and sheep, pig farms have stricter epidemic prevention requirements; moreover, unlike cattle or sheep, pigs cannot be routed through a fixed collection channel to obtain the parameters needed for weight estimation, so image collection for pigs is comparatively difficult. Therefore, a single-image 3D reconstruction method developed for pigs would be applicable to almost all livestock.

7. Conclusions

The estimation of pig weight based on computer vision collects pig data in a non-contact and non-stress manner and utilizes image processing and three-dimensional point cloud technology to estimate the weight of live pigs, achieving remarkable research progress. This research reviews the weight estimation methods based on images and three-dimensional point clouds, analyzes the advantages and disadvantages of various research methods, and explores the key aspects and development directions of future research.
In terms of technical means, the field evolved from visible light cameras in the early stage to binocular cameras, combined with algorithms that detect body size parameters for weight estimation. Subsequently, depth cameras or three-dimensional scanners have been used to construct three-dimensional models; after preprocessing, body size parameters are calculated and a weight prediction model is established.
Regarding the effect of technical improvement, compared with traditional machine learning algorithms, deep learning algorithms exhibit better performance in prediction accuracy, having a higher correlation coefficient and a lower root mean square error. They can extract key information from high-dimensional complex features, thus obtaining more accurate and reliable results.
Although some progress has been made, numerous challenges remain. This review compares and analyzes research results on livestock body size detection and weight estimation, presents the challenges facing current body size measurement research, such as high cost, low automation level, and poor universality, and outlines the future development trends in this field and their feasibility.
In conclusion, the estimation of pig weight based on machine vision is still in the research phase, and many problems remain unresolved, including differences in body shape, pig breed, and breeding methods among farms. Conducting in-depth research on these technical problems, further improving and optimizing data acquisition methods, point cloud registration algorithms, and single-image three-dimensional reconstruction technology, and accelerating the informatization of the livestock industry are the future research directions.

Author Contributions

Conceptualization, Z.W.; methodology, Q.L. and Q.Y.; software, Z.W.; validation, Z.W.; formal analysis, R.G.; investigation, Z.W.; resources, Q.L.; data curation, W.Q. and T.W.; writing—original draft preparation, Z.W.; writing—review and editing, Z.W., R.W. and R.G.; visualization, R.W.; supervision, Q.L. and Q.Y.; project administration, W.Q. and X.L.; funding acquisition, R.G. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Special Program for Cultivating Outstanding Scientists of the Beijing Academy of Agricultural and Forestry Sciences (No. JKZX202214), the Sichuan Science and Technology Program (No. 2021ZDZX0011) and Innovation Capacity Building Project of Beijing Academy of Agriculture and Forestry Sciences (No. KJCX20230204).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ghotbaldini, H.; Mohammadabadi, M.; Nezamabadi-pour, H.; Babenko, O.I.; Bushtruk, M.V.; Tkachenko, S.V. Predicting breeding value of body weight at 6-month age using Artificial Neural Networks in Kermani sheep breed. Acta Scientiarum. Anim. Sci. 2019, 41, e45282. [Google Scholar]
  2. Hong, J.-K.; Kim, K.-H.; Hwang, H.-S.; Lee, J.-K.; Eom, T.-K.; Rhim, S.-J. Behaviors and body weight of suckling piglets in different social environments. Asian-Australas. J. Anim. Sci. 2017, 30, 902. [Google Scholar] [CrossRef] [PubMed]
  3. Yin, T.; König, S. Genetic parameters for body weight from birth to calving and associations between weights with test-day, health, and female fertility traits. J. Dairy Sci. 2018, 101, 2158–2170. [Google Scholar] [CrossRef] [PubMed]
  4. Benjamin, M.; Yik, S. Precision livestock farming in swine welfare: A review for swine practitioners. Animals 2019, 9, 133. [Google Scholar] [CrossRef] [PubMed]
  5. Stygar, A.H.; Kristensen, A.R. Monitoring growth in finishers by weighing selected groups of pigs–a dynamic approach. J. Anim. Sci. 2016, 94, 1255–1266. [Google Scholar] [CrossRef]
  6. Dohmen, R.; Catal, C.; Liu, Q. Computer vision-based weight estimation of livestock: A systematic literature review. N. Z. J. Agric. Res. 2022, 65, 227–247. [Google Scholar] [CrossRef]
  7. Lee, J.; Jin, L.; Park, D.; Chung, Y. Automatic recognition of aggressive behavior in pigs using a kinect depth sensor. Sensors 2016, 16, 631. [Google Scholar] [CrossRef]
  8. Franchi, G.A.; Bus, J.D.; Boumans, I.J.; Bokkers, E.A.; Jensen, M.B.; Pedersen, L.J. Estimating body weight in conventional growing pigs using a depth camera. Smart Agric. Technol. 2023, 3, 100117. [Google Scholar] [CrossRef]
  9. Kwon, K.; Mun, D. Iterative offset-based method for reconstructing a mesh model from the point cloud of a pig. Comput. Electron. Agric. 2022, 198, 106996. [Google Scholar] [CrossRef]
  10. Hu, Z.; Yang, H.; Lou, T. Dual attention-guided feature pyramid network for instance segmentation of group pigs. Comput. Electron. Agric. 2021, 186, 106140. [Google Scholar] [CrossRef]
  11. Ochoa Zaragoza, L.E. Evaluation of the Accuracy of Simple Body Measurements for Live Weight Prediction in Growing-Finishing Pigs. Ph.D. Dissertation, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 2010. [Google Scholar]
  12. Wang, Y.; Yang, W.; Winter, P.; Walker, L. Walk-through weighing of pigs using machine vision and an artificial neural network. Biosyst. Eng. 2008, 100, 117–125. [Google Scholar] [CrossRef]
  13. Yin, M.; Ma, R.; Luo, H.; Li, J.; Zhao, Q.; Zhang, M. Non-contact sensing technology enables precision livestock farming in smart farms. Comput. Electron. Agric. 2023, 212, 108171. [Google Scholar] [CrossRef]
  14. Rohan, A.; Rafaq, M.S.; Hasan, M.J.; Asghar, F.; Bashir, A.K.; Dottorini, T. Application of deep learning for livestock behaviour recognition: A systematic literature review. Comput. Electron. Agric. 2024, 224, 109115. [Google Scholar] [CrossRef]
  15. Li, J.; Green-Miller, A.R.; Hu, X.; Lucic, A.; Mohan, M.M.; Dilger, R.N.; Condotta, I.C.; Aldridge, B.; Hart, J.M.; Ahuja, N. Barriers to computer vision applications in pig production facilities. Comput. Electron. Agric. 2022, 200, 107227. [Google Scholar] [CrossRef]
  16. Guo, Q.; Sun, Y.; Orsini, C.; Bolhuis, J.E.; de Vlieg, J.; Bijma, P.; de With, P.H. Enhanced camera-based individual pig detection and tracking for smart pig farms. Comput. Electron. Agric. 2023, 211, 108009. [Google Scholar] [CrossRef]
  17. Taylor, C.; Guy, J.; Bacardit, J. Prediction of growth in grower-finisher pigs using recurrent neural networks. Biosyst. Eng. 2022, 220, 114–134. [Google Scholar] [CrossRef]
  18. Ma, W.; Qi, X.; Sun, Y.; Gao, R.; Ding, L.; Wang, R.; Peng, C.; Zhang, J.; Wu, J.; Xu, Z. Computer Vision-Based Measurement Techniques for Livestock Body Dimension and Weight: A Review. Agriculture 2024, 14, 306. [Google Scholar] [CrossRef]
  19. Ma, W.; Sun, Y.; Qi, X.; Xue, X.; Chang, K.; Xu, Z.; Li, M.; Wang, R.; Meng, R.; Li, Q. Computer-Vision-Based Sensing Technologies for Livestock Body Dimension Measurement: A Survey. Sensors 2024, 24, 1504. [Google Scholar] [CrossRef]
  20. Panda, S.; Gaur, G.K.; Chauhan, A.; Kar, J.; Mehrotra, A. Accurate assessment of body weights using morphometric measurements in Landlly pigs. Trop. Anim. Health Prod. 2021, 53, 362. [Google Scholar] [CrossRef]
  21. Machebe, N.; Ezekwe, A. Predicting body weight of growing-finishing gilts raised in the tropics using linear body measurements. Asian J. Exp. Biol. Sci. 2010, 1, 162–165. [Google Scholar]
  22. Banik, S.; Naskar, S.; Pankaj, P.; Sahoo, N.; Tamuli, M.; Das, A. Effect of different body measurements on body weight in Ghungroo pigs. Indian J. Anim. Sci. 2012, 82, 1094. [Google Scholar] [CrossRef]
  23. Oluwole, O.; Tiamiyu, A.; Olorungbounmi, T.; Oladele-Bukola, M.; Akintoye, N. Pre-weaning growth traits in Nigerian indigenous pig crossbreds. Agric. Sci. 2014, 5, 49301. [Google Scholar] [CrossRef]
  24. Al Ard Khanji, M.S.; Llorente, C.; Falceto, M.V.; Bonastre, C.; Mitjana, O.; Tejedor, M.T. Using body measurements to estimate body weight in gilts. Can. J. Anim. Sci. 2018, 98, 362–367. [Google Scholar] [CrossRef]
  25. Bhoj, S.; Tarafdar, A.; Chauhan, A.; Singh, M.; Gaur, G.K. Image processing strategies for pig liveweight measurement: Updates and challenges. Comput. Electron. Agric. 2022, 193, 106693. [Google Scholar] [CrossRef]
  26. Zhao, Y.; Xiao, Q.; Li, J.; Tian, K.; Yang, L.; Shan, P.; Lv, X.; Li, L.; Zhan, Z. Review on image-based animals weight weighing. Comput. Electron. Agric. 2023, 215, 108456. [Google Scholar] [CrossRef]
  27. Schofield, C. Evaluation of image analysis as a means of estimating the weight of pigs. J. Agric. Eng. Res. 1990, 47, 287–296. [Google Scholar] [CrossRef]
  28. Du, X.; Li, X.; Fan, S. A review of the methods of pig body size measurement and body weight estimation. Chin. J. Anim. Sci. 2023, 59, 41–46. [Google Scholar]
  29. Whittemore, C.; Schofield, C. A case for size and shape scaling for understanding nutrient use in breeding sows and growing pigs. Livest. Prod. Sci. 2000, 65, 203–208. [Google Scholar] [CrossRef]
  30. White, R.; Schofield, C.; Green, D.; Parsons, D.; Whittemore, C. The effectiveness of a visual image analysis (VIA) system for monitoring the performance of growing/finishing pigs. Anim. Sci. 2004, 78, 409–418. [Google Scholar] [CrossRef]
  31. Liu, B.; Zhu, W.; Huo, G. An image fusion algorithm of infrared thermal and optical images for pig contour. Trans. Chin. Soc. Agric. Eng. 2013, 29, 113–120. [Google Scholar]
  32. Banhazi, T.; Tscharke, M.; Ferdous, W.; Saunders, C.; Lee, S. Using image analysis and statistical modelling to achieve improved pig weight predictions. In Society for Engineering in Agriculture (Australia); Engineers Australia Brisbane: Brisbane City, QLD, Australia, 2009; pp. 69–79. [Google Scholar]
  33. Zhang, A.L.N.; Wu, B.P.; Jiang, C.X.H.; Xuan, D.C.Z.; Ma, E.Y.H.; Zhang, F.Y.A. Development and validation of a visual image analysis for monitoring the body size of sheep. J. Appl. Anim. Res. 2018, 46, 1004–1015. [Google Scholar] [CrossRef]
  34. Li, Z.; Mao, T.; Liu, T.; Teng, G. Comparison and optimization of pig mass estimation models based on machine vision. Trans. Chin. Soc. Agric. Eng. 2015, 31, 155–161. [Google Scholar]
  35. Liu, T. Study of Pig’s Body Size Parameter Extraction Algorithm Optimization and Three-Dimensional Reconstruction Based-on Binocular Stereo Vision. Master’s Thesis, China Agricultural University, Beijing, China, 2014. [Google Scholar]
  36. Schofield, C.; Marchant, J.; White, R.; Brandl, N.; Wilson, M. Monitoring pig growth using a prototype imaging system. J. Agric. Eng. Res. 1999, 72, 205–210. [Google Scholar] [CrossRef]
  37. Minagawa, H.; Murakami, T. A hands-off method to estimate pig weight by light projection and image analysis. In The Livestock Environment VI, Proceedings of the 6th International Symposium, Louisville, KY, USA 21–23 May 2001; American Society of Agricultural Engineers: Louisville, KY, USA, 2001. [Google Scholar]
  38. Doeschl-Wilson, A.; Whittemore, C.; Knap, P.; Schofield, C. Using visual image analysis to describe pig growth in terms of size and shape. Anim. Sci. 2004, 79, 415–427. [Google Scholar] [CrossRef]
  39. Wang, Y.; Yang, W.; Winter, P.; Walker, L. Non-contact sensing of hog weights by machine vision. Appl. Eng. Agric. 2006, 22, 577–582. [Google Scholar] [CrossRef]
  40. Yang, Y.; Teng, G.; Li, B.; Shi, Z. Measurement of pig weight based on computer vision. Trans. Chin. Soc. Agric. Eng. 2006, 22, 127–131. [Google Scholar]
  41. Wang, Y.; Yang, W.; Walker, L.T.; Rababah, T.M. Enhancing the accuracy of area extraction in machine vision-based pig weighing through edge detection. Int. J. Agric. Biol. Eng. 2008, 1, 37–42. [Google Scholar]
  42. Kaewtapee, C.; Rakangtong, C.; Bunchasak, C. Pig weight estimation using image processing and artificial neural networks. J. Adv. Agric. Technol. 2019, 6, 253–256. [Google Scholar] [CrossRef]
  43. Wu, Y.; Liu, Z.; Gu, Y. Weight Estimation of Sows Based on Machine Vision. Electron. Technol. Softw. Eng. 2020, 10, 100–101. [Google Scholar]
  44. Banhazi, T.; Tscharke, M.; Ferdous, W.; Saunders, C.; Lee, S. Improved image analysis based system to reliably predict the live weight of pigs on farm: Preliminary results. Aust. J. Multi-Discip. Eng. 2011, 8, 107–119. [Google Scholar] [CrossRef]
  45. Thapar, G.; Biswas, T.K.; Bhushan, B.; Naskar, S.; Kumar, A.; Dandapat, P.; Rokhade, J. Accurate estimation of body weight of pigs through smartphone image measurement app. Smart Agric. Technol. 2023, 4, 100194. [Google Scholar] [CrossRef]
  46. Cunha, A.C.R.; Antunes, R.C.; da Costa, W.G.; Rebouças, G.F.; Leite, C.D.S.; do Carmo, A.S. Body weight prediction in crossbred pigs from digital images using computer vision. Livest. Sci. 2024, 282, 105433. [Google Scholar] [CrossRef]
  47. Wan, S.; Fang, H.; Wang, X. Monocular Visual Pig Weight Estimation Method Based on the EfficientVit-C Model. Agriculture 2024, 14, 1571. [Google Scholar] [CrossRef]
  48. Yin, L.; Zhu, J.; Liu, C.; Tian, X.; Zhang, S. Point cloud-based pig body size measurement featured by standard and non-standard postures. Comput. Electron. Agric. 2022, 199, 107135. [Google Scholar] [CrossRef]
  49. Teng, G.; Shen, Z.; Zhang, J.; Shi, C.; Yu, J. Non-contact sow body condition scoring method based on Kinect sensor. Trans. Chin. Soc. Agric. Mach. 2018, 34, 211–217. [Google Scholar]
  50. Si, Y.; Lulu, A.; Gang, L.; Baocheng, L. Ideal Posture Detection and Body Size Measurement of Pig Based on Kinect. Trans. Chin. Soc. Agric. Mach. 2019, 50, 58–65. [Google Scholar]
  51. Shi, C.; Zhang, J.; Teng, G. Mobile measuring system based on LabVIEW for pig body components estimation in a large-scale farm. Comput. Electron. Agric. 2019, 156, 399–405. [Google Scholar] [CrossRef]
  52. Guangshun, X.; Zhiyong, L.; Zhiyi, Z. Acquisition of point clouds of cattle in complex situation based on binocular stereo vision. Comput. Eng. Des. 2015, 36, 1390–1395. [Google Scholar]
  53. Shuai, S.; Ling, Y.; Shihao, L.; Haojie, Z.; Xuhong, T.; Caixing, L.; Aidong, S.; Hanxing, L. Research on 3D surface reconstruction and body size measurement of pigs based on multi-view RGB-D cameras. Comput. Electron. Agric. 2020, 175, 105543. [Google Scholar] [CrossRef]
  54. Martins, B.; Mendes, A.; Silva, L.; Moreira, T.; Costa, J.; Rotta, P.; Chizzotti, M.; Marcondes, M. Estimating body weight, body condition score, and type traits in dairy cows using three dimensional cameras and manual body measurements. Livest. Sci. 2020, 236, 104054. [Google Scholar] [CrossRef]
  55. Wang, Y.; Mücher, S.; Wang, W.; Guo, L.; Kooistra, L. A review of three-dimensional computer vision used in precision livestock farming for cattle growth management. Comput. Electron. Agric. 2023, 206, 107687. [Google Scholar] [CrossRef]
  56. Chang, K.; Ma, W.; Xu, X.; Qi, X.; Xue, X.; Xu, Z.; Li, M.; Guo, Y.; Meng, R.; Li, Q. A Point Cloud Segmentation Method for Pigs from Complex Point Cloud Environments Based on the Improved PointNet++. Agriculture 2024, 14, 720. [Google Scholar] [CrossRef]
  57. Wang, K.; Guo, H.; Liu, W.; Ma, Q.; Su, W.; Zhu, D. Extraction method of pig body size measurement points based on rotation normalization of point cloud. Trans. Chin. Soc. Agric. Eng. 2017, 33, 253–259. [Google Scholar]
  58. Ke, W.; Hao, G.; Qin, M. Pig body size measurement method based on mirror reflection of single angle point cloud. Trans. Chin. Soc. Agric. Mach. 2018, 49, 187–195. [Google Scholar]
  59. Yin, L.; Cai, G.; Tian, X.; Sun, A.; Shi, S.; Zhong, H.; Liang, S. Three dimensional point cloud reconstruction and body size measurement of pigs based on multi-view depth camera. Trans. Chin. Soc. Agric. Eng. 2019, 35, 201–208. [Google Scholar]
  60. Guo, H.; Ma, Q.; Zhang, S. Prototype System for Animal Body Dimension Acquisition Based on 3D Reconstruction. Trans. Chin. Soc. Agric. Mach. 2014, 45, 227–232. [Google Scholar]
  61. Qin, H. Research of Three Dimensional Measuring Device Based on Dual Kinect Camera. Master’s Thesis, Changchun University of Technology, Changchun, China, 2020. [Google Scholar]
  62. Guo, H.; Ma, X.; Ma, Q.; Wang, K.; Su, W.; Zhu, D. LSSA_CAU: An interactive 3d point clouds analysis software for body measurement of livestock with similar forms of cows or pigs. Comput. Electron. Agric. 2017, 138, 60–68. [Google Scholar] [CrossRef]
  63. Hao, H.; Jincheng, Y.; Ling, Y.; Gengyuan, C.; Sumin, Z.; Huan, Z. An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size. Comput. Electron. Agric. 2023, 205, 107560. [Google Scholar] [CrossRef]
  64. Wang, K.; Guo, H.; Ma, Q.; Su, W.; Chen, L.; Zhu, D.; Chen, L. A portable and automatic Xtion-based measurement system for pig body size. Comput. Electron. Agric. 2018, 148, 291–298. [Google Scholar] [CrossRef]
  65. Menesatti, P.; Costa, C.; Antonucci, F.; Steri, R.; Pallottino, F.; Catillo, G. A low-cost stereovision system to estimate size and weight of live sheep. Comput. Electron. Agric. 2014, 103, 33–38. [Google Scholar] [CrossRef]
  66. Salau, J.; Haas, J.H.; Junge, W.; Bauer, U.; Harms, J.; Bieletzki, S. Feasibility of automated body trait determination using the SR4K time-of-flight camera in cow barns. SpringerPlus 2014, 3, 225. [Google Scholar] [CrossRef] [PubMed]
  67. Salau, J.; Haas, J.H.; Junge, W.; Thaller, G. Extrinsic calibration of a multi-Kinect camera scanning passage for measuring functional traits in dairy cows. Biosyst. Eng. 2016, 151, 409–424. [Google Scholar] [CrossRef]
  68. Du, A.; Guo, H.; Lu, J.; Su, Y.; Ma, Q.; Ruchay, A.; Marinello, F.; Pezzuolo, A. Automatic livestock body measurement based on keypoint detection with multiple depth cameras. Comput. Electron. Agric. 2022, 198, 107059. [Google Scholar] [CrossRef]
  69. Luo, X.; Hu, Y.; Gao, Z.; Guo, H.; Su, Y. Automated measurement of livestock body based on pose normalisation using statistical shape model. Biosyst. Eng. 2023, 227, 36–51. [Google Scholar] [CrossRef]
  70. Lei, K.; Tang, X.; Li, X.; Lu, Q.; Long, T.; Zhang, X.; Xiong, B. Research and Preliminary Evaluation of Key Technologies for 3D Reconstruction of Pig Bodies Based on 3D Point Clouds. Agriculture 2024, 14, 793. [Google Scholar] [CrossRef]
  71. Condotta, I.C.; Brown-Brandl, T.M.; Silva-Miranda, K.O.; Stinn, J.P. Evaluation of a depth sensor for mass estimation of growing and finishing pigs. Biosyst. Eng. 2018, 173, 11–18. [Google Scholar] [CrossRef]
  72. Cang, Y.; He, H.; Qiao, Y. An intelligent pig weights estimate method based on deep learning in sow stall environments. IEEE Access 2019, 7, 164867–164875. [Google Scholar] [CrossRef]
  73. He, H.; Qiao, Y.; Li, X.; Chen, C.; Zhang, X. Automatic weight measurement of pigs based on 3D images and regression network. Comput. Electron. Agric. 2021, 187, 106299. [Google Scholar] [CrossRef]
  74. Fernandes, A.F.A.; Dórea, J.R.; Fitzgerald, R.; Herring, W.; Rosa, G.J. A novel automated system to acquire biometric and morphological measurements and predict body weight of pigs via 3D computer vision. J. Anim. Sci. 2019, 97, 496. [Google Scholar] [CrossRef]
  75. Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [Google Scholar] [CrossRef]
  76. Na, M.H.; Cho, W.H.; Kim, S.K.; Na, I.S. The Development of a Weight Prediction System for Pigs Using Raspberry Pi. Agriculture 2023, 13, 2027. [Google Scholar] [CrossRef]
  77. Nguyen, A.H.; Holt, J.P.; Knauer, M.T.; Abner, V.A.; Lobaton, E.J.; Young, S.N. Towards rapid weight assessment of finishing pigs using a handheld, mobile RGB-D camera. Biosyst. Eng. 2023, 226, 155–168. [Google Scholar] [CrossRef]
  78. Kwon, K.; Park, A.; Lee, H.; Mun, D. Deep learning-based weight estimation using a fast-reconstructed mesh model from the point cloud of a pig. Comput. Electron. Agric. 2023, 210, 107903. [Google Scholar] [CrossRef]
  79. Selle, M.; Kircher, M.; Dorsch, U.; Berres, M.; Schwennen, C.; Hölscher, R.; Visscher, C.; Jung, K. An accessible and intuitive approach to 3D data modeling for morphometric surface evaluation in pig farming. Comput. Electron. Agric. 2024, 225, 109327. [Google Scholar] [CrossRef]
  80. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic weight estimation of individual pigs using image analysis. Comput. Electron. Agric. 2014, 107, 38–44. [Google Scholar] [CrossRef]
  81. Shi, C.; Teng, G.; Li, Z. An approach of pig weight estimation using binocular stereo system based on LabVIEW. Comput. Electron. Agric. 2016, 129, 37–43. [Google Scholar] [CrossRef]
  82. Sungirai, M.; Masaka, L.; Benhura, T.M. Validity of weight estimation models in pigs reared under different management conditions. Vet. Med. Int. 2014, 2014, 530469. [Google Scholar] [CrossRef]
  83. Alenyorege, B.; Addy, F.; Abgolosu, A. Linear body measurements as predictors of live weight of the large white pig in Northern Ghana. Ghanian J. Anim. Sci. 2013, 7, 99–105. [Google Scholar]
  84. Szyndler-Nędza, M.; Eckert, R.; Blicharski, T.; Tyra, M.; Prokowski, A. Prediction of carcass meat percentage in young pigs using linear regression models and artificial neural networks. Ann. Anim. Sci. 2016, 16, 275–286. [Google Scholar] [CrossRef]
  85. Wongsriworaphon, A.; Arnonkijpanich, B.; Pathumnakul, S. An approach based on digital image analysis to estimate the live weights of pigs in farm environments. Comput. Electron. Agric. 2015, 115, 26–33. [Google Scholar] [CrossRef]
  86. Jun, K.; Kim, S.J.; Ji, H.W. Estimating pig weights from images without constraint on posture and illumination. Comput. Electron. Agric. 2018, 153, 169–176. [Google Scholar] [CrossRef]
  87. Pezzuolo, A.; Guarino, M.; Sartori, L.; González, L.A.; Marinello, F. On-barn Pig Weight Estimation Based on Body Measurements by a Kinect v1 Depth Camera. Comput. Electron. Agric. 2018, 148, 29–36. [Google Scholar] [CrossRef]
  88. Ruchay, A.; Gritsenko, S.; Ermolova, E.; Bochkarev, A.; Ermolov, S.; Guo, H.; Pezzuolo, A. A Comparative Study of Machine Learning Methods for Predicting Live Weight of Duroc, Landrace, and Yorkshire Pigs. Animals 2022, 12, 1152. [Google Scholar] [CrossRef] [PubMed]
  89. Preethi, A.L.; Tarafdar, A.; Ahmad, S.F.; Panda, S.; Tamilarasan, K.; Ruchay, A.; Gaur, G.K. Weight Prediction of Landlly Pigs from Morphometric Traits in Different Age Classes Using ANN and Non-Linear Regression Models. Agriculture 2023, 13, 362. [Google Scholar] [CrossRef]
  90. Tu, G.J.; Jørgensen, E. Vision analysis and prediction for estimation of pig weight in slaughter pens. Expert Syst. Appl. 2023, 220, 119684. [Google Scholar] [CrossRef]
  91. Jiang, S.; Zhang, G.; Shen, Z.; Zhong, P.; Tan, J.; Liu, J. Pig Weight Estimation Method Based on a Framework Combining Mask R-CNN and Ensemble Regression Model. Animals 2024, 14, 2122. [Google Scholar] [CrossRef]
  92. Lin, J.; Chen, H.; Wu, R.; Wang, X.; Liu, X.; Wang, H.; Wu, Z.; Cai, G.; Yin, L.; Lin, R.; et al. Calculating Volume of Pig Point Cloud Based on Improved Poisson Reconstruction. Animals 2024, 14, 1210. [Google Scholar] [CrossRef]
  93. Meckbach, C.; Tiesmeyer, V.; Traulsen, I. A Promising Approach towards Precise Animal Weight Monitoring Using Convolutional Neural Networks. Comput. Electron. Agric. 2021, 183, 106056. [Google Scholar] [CrossRef]
  94. Zhang, J.; Zhuang, Y.; Ji, H.; Teng, G. Pig Weight and Body Size Estimation Using a Multiple Output Regression Convolutional Neural Network: A Fast and Fully Automatic Method. Sensors 2021, 21, 3218. [Google Scholar] [CrossRef]
  95. He, W.; Mi, Y.; Ding, X.; Liu, G.; Li, T. Two-stream cross-attention vision Transformer based on RGB-D images for pig weight estimation. Comput. Electron. Agric. 2023, 212, 107986. [Google Scholar] [CrossRef]
  96. Chen, H.; Liang, Y.; Huang, H.; Huang, Q.; Gu, W.; Liang, H. Live pig-weight learning and prediction method based on a multilayer RBF network. Agriculture 2023, 13, 253. [Google Scholar] [CrossRef]
  97. Liu, Z.; Hua, J.; Xue, H.; Tian, H.; Chen, Y.; Liu, H. Body weight estimation for pigs based on 3d hybrid filter and convolutional neural network. Sensors 2023, 23, 7730. [Google Scholar] [CrossRef] [PubMed]
  98. Tan, Z.; Liu, J.; Xiao, D.; Liu, Y.; Huang, Y. Dual-stream fusion network with ConvNeXtV2 for pig weight estimation using RGB-D data in aisles. Animals 2023, 13, 3755. [Google Scholar] [CrossRef] [PubMed]
  99. Liu, J.; Xiao, D.; Liu, Y.; Huang, Y. A pig mass estimation model based on deep learning without constraint. Animals 2023, 13, 1376. [Google Scholar] [CrossRef] [PubMed]
  100. Liu, Y.; Zhou, J.; Bian, Y.; Wang, T.; Xue, H.; Liu, L. Estimation of weight and body measurement model for pigs based on back point cloud data. Animals 2024, 14, 1046. [Google Scholar] [CrossRef] [PubMed]
  101. Xie, C.; Cang, Y.; Lou, X.; Xiao, H.; Xu, X.; Li, X.; Zhou, W. A novel approach based on a modified mask R-CNN for the weight prediction of live pigs. Artif. Intell. Agric. 2024, 12, 19–28. [Google Scholar] [CrossRef]
  102. Paudel, S.; de Sousa, R.V.; Sharma, S.R.; Brown-Brandl, T. Deep learning models to predict finishing pig weight using point clouds. Animals 2024, 14, 31. [Google Scholar] [CrossRef]
  103. Mengyuan, C.; Gang, L.; Yongsheng, S.; Fan, F. Predicting Method of Dairy Cow Weight Based on Three-dimensional Re-construction. Trans. Chin. Soc. Agric. Mach. 2020, 51, 378–384. [Google Scholar]
  104. Mengyuan, C.; Yongsheng, S.; Qian, L.; Gang, L. Research advances in the automatic measurement technology for livestock body size. Trans. Chin. Soc. Agric. Mach. 2022, 38, 228–240. [Google Scholar]
  105. Hong, Y.; Zhang, K.; Gu, J.; Bi, S.; Zhou, Y.; Liu, D.; Liu, F.; Sunkavalli, K.; Bui, T.; Tan, H. Lrm: Large reconstruction model for single image to 3d. arXiv 2023, arXiv:2311.04400. [Google Scholar]
  106. Wang, Z.; Wang, Y.; Chen, Y.; Xiang, C.; Chen, S.; Yu, D.; Li, C.; Su, H.; Zhu, J. Crm: Single image to 3d textured mesh with convolutional reconstruction model. arXiv 2024, arXiv:2403.05034. [Google Scholar]
  107. Hu, K.; Ying, W.; Pan, Y.; Kang, H.; Chen, C. High-fidelity 3D reconstruction of plants using Neural Radiance Fields. Comput. Electron. Agric. 2024, 220, 108848. [Google Scholar] [CrossRef]
  108. Lei, Z.; Wei, J.; Boyan, S.; Mingtang, C.; Saiju, L.; Yimin, D. Three-dimensional Reconstruction and Phenotype Parameters Acquisition of Seeding Vegetables Based on Neural Radiance Fields. Trans. Chin. Soc. Agric. Mach. 2024, 55, 4. [Google Scholar]
Figure 1. Main research elements of computer vision-based pig weight estimation.
Figure 2. Main steps in the estimation of live pig population.
Figure 3. Body measurements terminologies of a pig.
Figure 4. Schematic of point cloud-based normalization of livestock posture.
Figure 5. Workflow for estimating live pig weight using traditional methods.
Figure 6. The process of weight estimation based on deep learning.
Table 1. Estimation of body weight based on body size parameters.
Number of Pigs | Days | Weight Estimation Methods | Weight Estimation Parameters | R2 | References | Year
279 | 42–48 | Multiple linear regression | Body length, chest girth, etc. | 0.93–0.96 | Panda [20] | 2021
56 |  | Multiple linear regression | Chest girth, body length, etc. | 0.955 | Machebe and Ezekwe [21] | 2010
264 | 15–56 | Path analysis | Front leg height, body length, etc. | 0.7359 | Banik [22] | 2012
47 | 1–49 | Multiple linear regression | Body height, heart girth, etc. | 0.86 | Oluwole [23] | 2014
193 | 110–230 | Multiple linear regression | Body length, heart girth, etc. | 0.903 | Al Ard Khanji [24] | 2018
Table 2. Comparison of the content of related studies.
Research | Progress | Technical Approach | Challenge | References (Year)
Pig welfare in precision animal husbandry | Precision animal husbandry techniques in pig welfare monitoring and enhancement | Machine learning, sensors | Cost of technology, data privacy, accuracy of remote monitoring, etc. | Benjamin [4] (2019)
Deep learning applications for livestock behavior recognition | Deep learning in recognizing livestock behavior | Deep learning, behavioral recognition | Data imbalance, complex farming environments, etc. | Rohan [14] (2024)
Precision management of livestock | Techniques for non-contact acquisition of livestock phenotypic data | 3D reconstruction techniques, body size acquisition techniques | Lack of accurate 3D reconstruction models, inefficient point cloud acquisition methods, etc. | Ma [18] (2024)
Livestock body measurements | Advances in body measurements of domestic animals | Depth cameras, 2D cameras, deep learning | High cost of equipment, large volume of data, etc. | Ma [19] (2024)
Pig weight measurement | Developments and challenges in non-contact pig weight estimation techniques | Image processing, machine learning | Pig movement, ceiling height, low lighting intensity, etc. | Bhoj [25] (2022)
Animal weight measurement | Comparing the development of different animal weighing techniques | Traditional image weighing, deep learning weighing, feature parameter extraction methods | Data quality, model generalization capabilities, real-time performance, etc. | Zhao [26] (2023)
Table 3. Statistics of Pearson’s correlation coefficient of weight estimation parameters.
 | Weight | Body Length | Body Width | Body Height | Chest Girth | Chest Depth | Abdominal Length
Weight | 1 | 0.750 | 0.683 | 0.788 | 0.926 | 0.728 | 0.328
Body length |  | 1 | 0.398 | 0.632 | 0.715 | 0.653 | 0.442
Body width |  |  | 1 | 0.486 | 0.683 | 0.423 | 0.131
Body height |  |  |  | 1 | 0.773 | 0.733 | 0.176
Chest girth |  |  |  |  | 1 | 0.699 | 0.225
Chest depth |  |  |  |  |  | 1 | 0.222
Abdominal length |  |  |  |  |  |  | 1
Table 4. Estimation of body weight based on image parameters.
Number of Pigs | Number of Cameras | Camera Angle | Effect of Light | Weight Estimation Parameters | Weight Estimation Methods | Error | References (Year)
15 | 1 | Aerial view | Manual screening | Projection area | Linear regression | 5% | Schofield (1999) [36]
12 | 1 | Aerial view | Adding an external light source | Projection area, height | Multiple linear regression | 0.8% | Minagawa (2001) [37]
25 | 1 | Aerial view |  | Body length, body width, body area | Multiple linear regression | 1.34 kg | Doeschl (2004) [38]
24 | 1 | Aerial view |  | Projection area, body length | Multiple linear regression | 4.1% | Wang (2006) [39]
50 | 2 | Side view and aerial view | Histogram equalization | Projection area | Linear regression | 2.8% | Yang (2005) [40]
24 | 1 | Aerial view | Edge detection | Projection area | Nonlinear regression | 4.1% | Wang (2008) [41]
88 | 1 | Aerial view |  | Chest girth, body length | ANN | 6.243% | Kaewtapee (2019) [42]
150 | 2 | Side view and aerial view |  | Body length, height, etc. | Multiple linear regression | 3.4% | Wu (2020) [43]
35 | 1 | Aerial view | Image enhancement | Body length, body width, etc. | Multiple linear regression | 1.18 kg | Banhazi (2011) [44]
265 | 3 | Aerial view and side view |  | Body length, projection area, etc. | Nonlinear regression |  | Thapar (2023) [45]
52 | 1 | Aerial view |  | Body length, chest girth, back area | Multiple linear regression | 2.39 kg | Cunha (2024) [46]
800 | 1 | Aerial view |  | Back area | Deep learning | 3.11 kg | Wan (2024) [47]
Table 5. Measurement of pig body size parameters based on 3D point cloud.
Number of Pigs | Days | Number of Cameras | Camera Angle | Parameters | Error | References (Year)
25 |  | 3 | Side view and aerial view | Body length, body width, etc. | <4% | Yin (2022) [48]
4 | 147–154 | 1 |  | Body length, body width, hip width, etc. | <16 mm | Wang (2017) [57]
10 | 141–149 | 1 | Askew view | Height, chest girth, abdominal circumference, etc. | <8% | Wang (2018) [58]
25 |  | 3 | Side view and aerial view | Body length, chest width, etc. | <5% | Yin (2019) [59]
 |  | 2 | Side view | Body length, shoulder width, hip width, etc. | <4% | Guo (2014) [60]
 |  | 2 | Side view | Body length, body width, hip height, etc. | <4% | Qin (2020) [61]
10 | 130–220 | 1 |  | Body length, chest girth, shoulder height, etc. | <8% | Guo (2017) [62]
25 |  | 3 | Side view and aerial view | Body length, body width, etc. | <6% | Hu (2023) [63]
20 | 175–224 | 2 | Adjustable | Body width, hip width height, etc. | <11% | Wang (2018) [64]
13 |  | 3 | Side view and aerial view | Chest girth, body length, etc. | <21 cm | Du (2022) [68]
13 |  | 3 | Side view and aerial view | Body length, chest girth, chest depth, etc. | <11 cm | Luo (2023) [69]
10 |  | 5 | Side view and aerial view | Body length, chest girth, etc. | <5% | Lei (2024) [70]
Table 6. Estimating live pig weight based on point clouds.
Number of Pigs | Number of Cameras | Camera Angle | Weight Estimation Parameters | Weight Estimation Methods | MAE/kg | RMSE/kg | R2 | References (Year)
251 | 1 | Aerial view | Body length, rib width, etc. | Nonlinear regression | 1.8 |  | 0.98 | Franchi (2023) [8]
234 | 1 | Aerial view | Volume projection | Linear regression |  |  | 0.9907 | Condotta (2018) [71]
20 | 1 | Aerial view |  | Deep learning | 0.644 |  |  | Cang (2019) [72]
29 | 1 | Aerial view |  | Deep learning | 6.366 |  |  | He (2021) [73]
655 | 1 | Aerial view | Projection area, volume projection, etc. | Multiple linear regression |  |  | 0.86 | Fernandes (2019) [74]
50 | 1 | Aerial view | Body length, height, shoulder width, etc. | Multiple linear regression | 2.9617 | 2.616 | 0.958 | Li (2022) [75]
15 | 1 |  | Body length, chest girth, HOG feature | Multiple linear regression | 10.702 |  |  | Na (2023) [76]
733 | 1 | Handheld | Body length, body width, chest girth | Nonlinear regression | 9.25 | 12.3 |  | Nguyen (2023) [77]
70 | 4 | Side view and aerial view | Body width, height, round, etc. | Deep learning | 4.89 | 8.6899 | 0.9532 | Kwon (2023) [78]
582 | 1 | Aerial view | Volume projection | Linear regression | 2.84 |  |  | Selle (2024) [79]
Table 7. Weight estimation based on traditional methods.
Number of Pigs | Image Acquisition Equipment | Camera Angle | Weight Estimation Parameters | Weight Estimation Methods | R2 | References (Year)
40 | Visible light camera | Aerial view | Projection area | Linear regression | 0.9663 | Kashiha (2014) [80]
10 | Binocular camera | Aerial view | Body length, shoulder height, etc. | Linear regression | 0.99 | Shi (2016) [81]
358 |  |  | Body length, chest girth, etc. | Linear regression | 0.89 | Sungirai (2014) [82]
 |  |  | Body length, chest girth | Linear regression | 0.93 | Alenyorege (2013) [83]
183 |  |  | Body length, chest girth, etc. | Linear regression | 0.90 | Al Ard Khanji (2016) [24]
61 | Visible light camera | Aerial view | Projection area, round, etc. | Linear regression | 0.9925 | Wang (2008) [12]
73 | Visible light camera | Aerial view | Mean distance between centers of mass, round, etc. | Nonlinear regression |  | Wongsriworaphon (2015) [85]
513 | Depth camera | Aerial view | Curvature, misalignment, etc. | Nonlinear regression | 0.790 | Jun (2018) [86]
78 | Depth camera | Side view and aerial view | Body length, chest girth, etc. | Multiple linear regression | 0.9942 | Pezzuolo (2018) [87]
23 | Depth camera | Side view and aerial view | Body length, body width, etc. | Multiple linear regression |  | Kwon (2022) [9]
340 |  |  | Body length, chest girth, shoulder height, etc. | Multiple linear regression |  | Ruchay (2022) [88]
279 |  |  | Body length, chest girth, etc. | Multiple linear regression | 0.9131 | Preethi (2023) [89]
18 | Visible light camera | Aerial view | Back area | Linear regression | 0.99 | Tu (2023) [90]
39 | Depth camera | Aerial view | Back area, body length, etc. | Multiple linear regression | 0.995 | Jiang (2024) [91]
479 | Depth camera | Side view and aerial view | Point cloud volume | Linear regression | 0.921 | Lin (2024) [92]
Table 8. Weight estimation based on deep learning.
Image Acquisition Equipment | Weight Estimation Methods | Number of Pigs | MAE/kg | RMSE/kg | MAPE/% | R2 | References (Year)
Depth camera | Deep learning | 400 | 3.8 | 3.9 |  | 0.397 | Meckbach (2021) [93]
Depth camera | Deep learning | 239 | 1.16 | 1.53 | 1.99 | 0.9973 | Zhang (2021) [94]
Depth camera | Deep learning |  | 3.237 | 5.993 | 4.082 | 0.742 | He (2023) [95]
Depth camera | Deep learning | 4721 | 1.85 | 5.74 | 1.68 | 0.63 | Chen (2023) [96]
Depth camera | Deep learning |  | 12.45 | 12.91 | 5.36 |  | Liu (2023) [97]
Depth camera | Deep learning | 198 | 2.856 | 4.082 | 2.383 | 0.901 | Tan (2023) [98]
Binocular camera | Deep learning | 117 |  | 3.52 | 2.82 |  | Liu (2023) [99]
Depth camera | Deep learning | 258 | 11.81 | 11.55 | 24.81 |  | Liu (2024) [100]
Depth camera | Deep learning | 132 | 2.96 | 3.95 | 8.45 | 0.987 | Xie (2024) [101]
Depth camera | Deep learning | 249 |  | 6.88 |  | 0.94 | Paudel (2024) [102]