Review

Livestock Biometrics Identification Using Computer Vision Approaches: A Review

College of Physics and Electronic Information, Inner Mongolia Normal University, Hohhot 010022, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(1), 102; https://doi.org/10.3390/agriculture15010102
Submission received: 25 November 2024 / Revised: 16 December 2024 / Accepted: 2 January 2025 / Published: 4 January 2025
(This article belongs to the Section Digital Agriculture)

Abstract
In the domain of animal management, the technology for individual livestock identification is in a state of continuous evolution, encompassing objectives such as precise tracking of animal activities, optimization of vaccination procedures, effective disease control, accurate recording of individual growth, and prevention of theft and fraud. These advancements are pivotal to the efficient and sustainable development of the livestock industry. Recently, visual livestock biometrics have emerged as a highly promising research focus due to their non-invasive nature. This paper aims to comprehensively survey the techniques for individual livestock identification based on computer vision methods. It begins by elucidating the uniqueness of the primary biometric features of livestock, such as facial features, and their critical role in the recognition process. This review systematically overviews the data collection environments and devices used in related research, providing an analysis of the impact of different scenarios on recognition accuracy. Then, the review delves into the analysis and explication of livestock identification methods, based on extant research outcomes, with a focus on the application and trends of advanced technologies such as deep learning. We also highlight the challenges faced in this field, such as data quality and algorithmic efficiency, and introduce the baseline models and innovative solutions developed to address these issues. Finally, potential future research directions are explored, including the investigation of multimodal data fusion techniques, the construction and evaluation of large-scale benchmark datasets, and the application of multi-target tracking and identification technologies in livestock scenarios.

1. Introduction

The process of individual identification of livestock involves assigning a unique label to each animal, along with a verification of its identity. In the realm of modern animal husbandry, this practice is crucial for several reasons. It enhances disease prevention and control, ensures product traceability and food safety, improves breeding efficiency and economic outcomes, and supports the promotion of sustainable practices in animal husbandry [1,2].
Conventional identification techniques, such as ear notching, hot-iron branding, and tattooing, are relatively easy to implement [3]. However, these methods have inherent limitations: they compromise animal welfare, offer low accuracy, and lack durability. Electronic identification methods, such as electronic ear tags, collars, and leg rings, mark identity through wearable devices, subcutaneous implants, or rumen-resident units [4,5,6], and achieve high accuracy. Nonetheless, they are prone to damage and loss and can induce stress reactions in the livestock. In contrast, livestock individual identification based on biometric features has the advantages of relative stability and uniqueness. For example, identification based on facial features requires no marking of the animal or attachment of marking devices; it is a convenient, fast, and livestock-friendly method. Therefore, livestock identification based on visual biometric features has attracted much attention in recent years [7,8].
The field of livestock individual identification using computer vision approaches encompasses both traditional machine learning methods and deep learning approaches. Traditional machine learning techniques rely on manually designed feature extractors to identify features that distinguish individual livestock from pre-processed images. These extracted features are then compared with a known library of livestock identity features, enabling the determination of the identity of the current livestock through the application of a classification algorithm [9]. The rapid development of deep learning technology has facilitated significant advancements in the field of livestock individual identification. Deep learning methods enable the automatic extraction of features from large datasets by constructing deep neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). This approach eliminates the need for meticulously designed feature extractors and facilitates an end-to-end process that goes from image input to identity recognition output [10]. Figure 1 illustrates the fundamental process of computer vision-based livestock individual identification.

2. Livestock Biometric Features

The utilization of biometric features for identification must meet the requisite standards of uniqueness, stability, and harmlessness to be suitable for practical applications. Currently, the biometric features commonly used to distinguish between livestock individuals include retinal vascular patterns, iris patterns, muzzle patterns, facial features, and body patterns. As researchers explore the field in greater depth, emerging methods are also beginning to be applied to livestock individual identification.

2.1. Retinal Vascular Patterns

The distribution of blood vessels in the retina is a highly distinctive feature that remains constant throughout an animal’s life and can even be used to distinguish between identical twins [12]. Retinal vascular patterns can therefore serve as unique visual features for livestock identification. Because of the retina’s location, most early studies relied on devices with embedded software. For example, Optibrand (USA) designed OptiReader, a handheld retinal image capture and recognition device comprising a controlling computer with data-logger, a digital video camera, and a GPS receiver. Allen et al. [13] applied the device to retinal image capture and recognition in cattle, achieving a recognition success rate of 98.3%. Barron et al. [14] applied the system to sheep recognition; Figure 2 illustrates the retinal image acquisition procedure using the OptiReader device, the state of the sheep pupil in dim and bright light, and a sheep retinal vascular image. The device does not support further processing of the images, so its accuracy is limited by lighting conditions [15]. In 2021, Mustafi et al. [16] developed RetIS, a biometric system based on goat retinal images that creates templates through image segmentation, normalization, and coding, and matches templates using the Hamming distance; tested on more than 200 images of 12 goats, it achieved an accuracy of 99%. In 2024, Saygılı et al. [15] developed CattNIs, a bovine retinal image recognition system that preprocesses images with techniques such as scaling and color conversion and then extracts features using speeded-up robust features (SURF) and features from accelerated segment test (FAST); SURF delivered the best performance, with a final recognition accuracy of 92.25%.
Earlier implementations of livestock identification based on retinal vascular patterns were founded on the principle of image matching. However, this approach was constrained by the processing power of the equipment, which could not cope with changing imaging conditions; furthermore, updating or customizing such embedded software was frequently both challenging and costly [17]. The subsequent application of image processing techniques has, to a certain extent, addressed the impact of environmental changes on the matching process. Identification based on retinal patterns remains a viable option, although it is not without challenges, including the difficulty of acquiring retinal images and the influence of external conditions, such as lighting and flash, on image quality. In addition, retinal image acquisition requires close proximity to the animal, which may cause stress reactions, and the system may fail if the cornea of the animal’s eye is injured [18].
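The template-matching stage used by systems such as RetIS can be illustrated with a minimal sketch: binary codes derived from segmented and normalized retinal images are compared by Hamming distance, and an identity is accepted when the distance falls below a threshold. This is a simplified illustration, not the RetIS implementation; the 256-bit code length, the 0.3 threshold, and the enrolled IDs are arbitrary assumptions.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary templates."""
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.3):
    """Return the enrolled ID with the smallest Hamming distance,
    or None if no template is close enough (open-set rejection)."""
    best_id, best_dist = None, 1.0
    for animal_id, template in gallery.items():
        d = hamming_distance(probe, template)
        if d < best_dist:
            best_id, best_dist = animal_id, d
    return best_id if best_dist <= threshold else None

rng = np.random.default_rng(0)
gallery = {f"cow_{i}": rng.integers(0, 2, 256) for i in range(5)}
# A probe is a noisy re-acquisition of cow_2's template (~10% bit flips).
probe = gallery["cow_2"].copy()
flip = rng.choice(256, 25, replace=False)
probe[flip] ^= 1
print(identify(probe, gallery))  # cow_2
```

Unrelated codes disagree on roughly half their bits, while re-acquisitions of the same eye disagree on far fewer, which is why a fixed distance threshold separates genuine matches from impostors.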

2.2. Iris Patterns

In 1987, eye scientists Flom and Safir proposed the method of utilizing the iris of the human eye for identification purposes [19]. The iris of livestock displays a comparable structure to that of the human iris, consisting of a rich texture that remains unaltered throughout an animal’s lifetime once formed [20]. For example, iris recognition technology was introduced into the individual identification of racehorses [21]. Iris image acquisition is a crucial step in iris-based identity recognition. Contact-based image collection methods are not livestock-friendly; therefore, He et al. [22] invented a contactless autofeedback iris image capture device that includes four subcomponents: automatic capture, illumination, feedback, and pitching outfit (Figure 3b). Lu et al. [23] employed this device to obtain a series of iris images from cows, and proposed a cow identification system based on iris analysis. The inner and outer boundaries of the cow’s iris are fitted as two ellipses based on the edge images. A set of high frequency 2D-CWT coefficients is selected as features for recognition. The phase information of the coefficients is used for feature encoding and Hamming distance is adopted for classification. As technology progresses, the exploration of iris recognition technology has become more comprehensive and varied. For instance, in 2017, Trokielewicz et al. [24] investigated the employment of deep convolutional neural networks (DCNNs) to recognize horses by their iris and periocular features. In 2019, Larregui et al. [25] proposed a non-invasive method for iris segmentation of bovine eyes, which is capable of processing images taken with a normal visible light camera under field conditions. Figure 3a depicts the iris image of a bovine eye. In 2021, Roy et al. [26] collected iris images of Black Bengal goats and determined the matching threshold for Black Bengal goats. Figure 3c shows the collection of goat iris images using an iris camera.
Issues such as iris region localization and separation of the sclera and iris are frequently associated with iris recognition due to the particular location of the iris. In an attempt to address these issues, several research efforts have emerged. For example, Li et al. [27] proposed a point-by-point scanning hierarchical circle algorithm for fast localization of bovine iris. Sun et al. [28] proposed a method for segmentation of bovine iris based on the region active contour model. Laishram et al. [29] used a software called iGoat to segment goat iris and used a deep learning model for iris pattern matching. Yoon et al. [30] introduced a deep learning framework-based bovine iris segmentation method. These methods pave the way for more efficient iris recognition in livestock.
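The boundary-fitting idea described above (e.g., fitting the inner and outer iris boundaries as ellipses from edge images) can be sketched with a direct least-squares conic fit: edge points are fitted to a general conic, from which the ellipse center follows. This is a generic illustration of the technique, not the exact algorithm of any cited paper; the synthetic boundary points stand in for real edge detections.

```python
import numpy as np

def fit_conic(points: np.ndarray) -> np.ndarray:
    """Fit A x^2 + B xy + C y^2 + D x + E y = 1 by least squares.
    Returns the coefficient vector (A, B, C, D, E)."""
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(design, np.ones(len(points)), rcond=None)
    return coeffs

def conic_center(coeffs: np.ndarray) -> np.ndarray:
    """Center of the fitted ellipse (where the conic gradient vanishes)."""
    a, b, c, d, e = coeffs
    return np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                           np.array([-d, -e]))

# Synthetic "iris boundary": points on a circle of radius 3 centered at (1, 2).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.column_stack([1 + 3 * np.cos(t), 2 + 3 * np.sin(t)])
print(conic_center(fit_conic(pts)))  # ≈ [1. 2.]
```

In practice, the recovered inner and outer boundaries are then used to unwrap the annular iris region into a normalized rectangle before feature encoding.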

2.3. Muzzle Pattern

The muzzle of livestock such as cattle and horses carries a dense textural pattern of grooves and bumps, known as beads, interspersed with river-like structures called ridges. Figure 4a depicts an image of a cow’s muzzle and Figure 4b shows the unique pattern on the cattle muzzle. Muzzle dermatoglyphics are analogous to human fingerprints and are unique to each animal, providing a foundation for identification [31,32]. The earliest research on identification based on muzzle pattern dates back to 1921 [33], when muzzle images were acquired by applying ink to the animal’s muzzle and pressing the print onto paper [34]. As image acquisition technology progressed, cameras were subsequently applied to image capture. Identification based on muzzle pattern primarily concerns cattle and horses. Tharwat et al. [35] identified cattle by extracting Gabor features from muzzle print images and using a support vector machine (SVM) classifier with different kernels, achieving an identification accuracy of 99.5%. Taha et al. [36] proposed an Arabian horse recognition system based on the fusion of local binary pattern (LBP) and SURF, achieving an accuracy of 99.6% for 50 Arabian horses. Li et al. [37] employed a deep learning method to identify 268 cows from muzzle prints with an accuracy of 98.7%.
The collection of muzzle images is a relatively straightforward process compared to that of retina or iris images. However, during the implementation phase, external factors such as dirt, sweat in the muzzle region, and lighting conditions may impact the quality of the images. Additionally, livestock body movement can also influence the accuracy of recognition.
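The Gabor feature extraction step used in work such as Tharwat et al.’s can be sketched as a small filter bank: each oriented kernel responds strongly to muzzle ridges aligned with it, and simple response statistics form a feature vector for a downstream classifier. The parameter values (wavelength, sigma, number of orientations) below are arbitrary assumptions for illustration, not the paper’s settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=21, theta=0.0, lam=8.0, sigma=3.0, gamma=0.5):
    """Real (cosine) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, kernel):
    """'Valid' 2D correlation via sliding windows."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

def gabor_features(img, orientations=4):
    """Mean absolute response and response std per orientation."""
    feats = []
    for k in range(orientations):
        resp = convolve2d(img, gabor_kernel(theta=k * np.pi / orientations))
        feats += [np.abs(resp).mean(), resp.std()]
    return np.array(feats)

# Synthetic muzzle patch: vertical ridges with an 8-pixel period.
xx = np.arange(64)
patch = np.cos(2 * np.pi * xx / 8)[None, :].repeat(64, axis=0)
print(gabor_features(patch).shape)  # (8,)
```

On this patch, the orientation-0 filter (aligned with the ridges and matching their period) yields a much larger response energy than the orthogonal one, which is the property an SVM exploits to separate individuals.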

2.4. Body Pattern

Body pattern refers to the regular distribution of hairs of different colors on the trunk of cattle. It should be noted, however, that not all livestock have a distinctive body pattern. Some specific breeds of cattle have unique skin colors or spots that can be used for identification. For example, Zhao et al. [38] proposed a convolutional neural network-based approach that achieved a recognition accuracy of over 90% for individual cows using body images. Zhao et al. [39] extracted body images by detecting side-view images of walking cows, achieving a recognition accuracy of 98.36% for 66 Holstein cows. Zhang et al. [40] proposed a cascade recognition method based on DeepOtsu (a model originally developed for document enhancement and binarization) [41] and the deep learning model EfficientNet [42] to binarize and then cascade-classify dairy cow body pattern images (see Figure 5), achieving a recognition accuracy of 98% for 118 individual Holstein cows.
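The binarization step in such pipelines can be approximated with the classical Otsu method (DeepOtsu learns an enhanced version of this mapping); separating dark hide from light patches turns the coat pattern into a clean binary mask for classification. The sketch below is the textbook algorithm with a synthetic coat image, not the cited paper’s model.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Threshold maximizing between-class variance for an 8-bit image
    (classical Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 (background) mass
    mu = np.cumsum(prob * np.arange(256))    # cumulative intensity mean
    mu_total = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # undefined at empty classes
    return int(np.argmax(sigma_b))

# Synthetic coat image: dark hide (~40) with a light patch (~210).
rng = np.random.default_rng(1)
img = rng.normal(40, 10, (64, 64))
img[16:48, 16:48] = rng.normal(210, 10, (32, 32))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t
print(t)
```

The resulting threshold falls between the two intensity modes, so the binary mask isolates the light patches regardless of moderate illumination shifts.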

2.5. Facial Features

The facial features of livestock include the eyes, ears, nose, mouth, facial contour, skin, and hair; they constitute the most direct external visual information about an individual. Research on facial recognition first emerged in the 1960s, and prominent research institutions, including Oxford University, subsequently proposed a number of well-known facial recognition algorithms [43], such as the geometric feature method [44] and FaceNet [45]. Scholars have since extended facial recognition to animals. For example, Sihalath et al. [46] used four datasets of pig faces at different growth stages and employed DCNNs to classify the images, achieving an accuracy of over 97%. Liu et al. [47] proposed an individual identification method based on the fusion of Red-Green-Blue-Depth (RGB-D) information of cows’ faces, with a recognition accuracy of 98%. Xuan et al. [48] proposed an enhanced bilinear CNN (B-CNN) model based on an asymmetric VGG19-ResNet50 architecture for fine-grained sheep face identification, with an accuracy of 99.69%. Ahmad et al. [49] also proposed a deep learning-based facial recognition model for horses.
While the utilization of facial features for livestock identification offers several advantages, it also has drawbacks. Environmental factors, including changes in lighting conditions, imaging viewpoints, and distances, can substantially alter the appearance of livestock faces, leading to inaccuracies in identification. Furthermore, the differences in facial features between individuals of the same breed and similar body size are relatively small, which makes discrimination more difficult.

2.6. Emerging Features

Through the examination of the various biometric identification methods above, the strengths and weaknesses of each approach become distinctly apparent. Notably, these methods predominantly converge on the processing and classification of two-dimensional imagery, exhibiting relative homogeneity in approach. As research in the field deepens, a suite of emerging feature types has appeared. These features transcend the limitations of two-dimensional information, relying instead on three-dimensional, video, or other forms of data for identification. They have demonstrated considerable potential in addressing the challenges faced by traditional features, charting a course toward more accurate and reliable livestock identification.

2.6.1. Three-Dimensional Visual Appearances and Skeleton Pose Features

As research into livestock identification has progressed, scholars have also applied the 3D visual appearances and skeleton pose features of livestock to identification. For example, Arslan et al. [50] proposed a Kinect (a device based on 3D vision technology)-based system that is capable of identifying individual animals from their 3D visual appearances, and the proposed solution depends only on the shape information. Ferreira et al. [51] captured top-view 3D images of Holstein calves to identify individuals by the dorsal surface (Figure 6a). Zhang [52] proposed a sheep identification method based on deep metric learning fused skeleton attention guidance (Figure 6b). Recognition methods based on 3D visual appearances and skeleton pose features avoid the difficulties associated with the problem of size variation due to changes in the animal’s posture and distance, while not relying on color distribution and being able to effectively differentiate between similar animals (such as black cows), overcoming the limitations of traditional methods in this regard [50].

2.6.2. Time-Series Features

The exploration of video features, specifically extracting temporal information from videos, enables the integration of features from multiple image frames to comprehensively construct specific attributes of livestock, surpassing the analysis of single images alone. This process captures not only the variations in livestock posture at different time points but also encompasses the consistency and dynamic characteristics of their movement, leading to a more nuanced and contextually relevant information set. In essence, video analytics overcomes the limitations of static images and facilitates a comprehensive examination of the temporal and dimensional aspects of livestock. For instance, Su [53] proposed an improved dynamic temporal regularization algorithm to segment cow gaits, aiming to establish a lameness recognition model; this approach achieved a recognition accuracy of 90.57%. Qian [54] developed a gait recognition method for pigs based on skeleton analysis and gait energy maps, achieving a recognition rate of 93.25%. Zhang et al. [55] accomplished gait recognition in dairy cows through skeletal energy maps, correctly identifying 87.6% of cows in the test set. Andrew et al. [56] presented a video processing pipeline for cattle identification, utilizing a long-term recurrent convolutional network (LRCN) to classify cattle videos captured by unmanned aerial vehicles (UAVs). Qiao et al. [57] proposed a deep learning-based framework for recognizing beef cows using image sequences obtained from 50 cows (Figure 7). This framework achieved a recognition rate of 93.3% on 30-frame video lengths.
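A simple way to exploit this temporal redundancy, short of a full recurrent model such as the LRCN used by Andrew et al., is late fusion: run a per-frame identifier and aggregate its predictions over the clip. The sketch below shows a confidence-weighted vote; the per-frame IDs and scores are invented for illustration.

```python
from collections import defaultdict

def fuse_clip(frame_predictions):
    """Aggregate (animal_id, confidence) predictions from each frame
    of a clip into a single identity via a confidence-weighted vote."""
    scores = defaultdict(float)
    for animal_id, confidence in frame_predictions:
        scores[animal_id] += confidence
    return max(scores, key=scores.get)

# Hypothetical per-frame outputs over a 6-frame clip: frames 3-4 are
# misclassified (e.g., motion blur), but fusion recovers the identity.
clip = [("cow_07", 0.9), ("cow_07", 0.8), ("cow_12", 0.4),
        ("cow_12", 0.5), ("cow_07", 0.7), ("cow_07", 0.9)]
print(fuse_clip(clip))  # cow_07
```

This illustrates why sequence-based recognizers tolerate transient failures on individual frames: occasional low-confidence misclassifications are outvoted by the rest of the clip.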
In a complex and dynamic breeding environment, the characteristics of livestock, such as body shape, color, and texture, can be influenced by various external factors like lighting conditions, angles, and shading. However, by carefully selecting the most representative and stable features, the influence of these external factors can be significantly reduced, thereby improving the accuracy of identification.

3. Acquisition of Biometric Features of Livestock

The initial step in the process of identifying livestock is to acquire the visual biometric features of the animals in question. Two aspects must be considered in order to establish an appropriate image acquisition scenario: the setup of the environment and the selection of the appropriate device.

3.1. Environment of Image Acquisition

The biometric feature acquisition environment can be categorized into two main types: locative scenarios and open scenarios. In locative scenarios, animals are confined to a specific area and data collection is conducted either artificially or through automated systems. On the other hand, open scenarios involve animals being photographed in a relatively unrestricted environment. The data acquisition methods employed in different scenarios are presented in Table 1.
Data acquisition can also be categorized into two methods, manual and automatic, based on the mode of acquisition. In automated data collection, locative methods typically place the device at a railing or walkway position, enabling relatively accurate positioning of the collected content. Open collection methods, on the other hand, often position the device at a higher vantage point (such as an overhead position), so the collected data may contain noise and must be localized later using additional tools (such as tracking algorithms) to identify an individual.

3.2. Devices of Image Acquisition

In the process of livestock identification based on computer vision, common image data acquisition devices can be divided into three categories according to the different types of acquired image information: 2D vision information acquisition device, 3D vision information acquisition device, and infrared thermal imaging information acquisition device.
Devices for acquiring two-dimensional visual information are often chosen for applications in agricultural automation and robotics. For instance, Tassinari et al. [65] used a Sony HDR-CX115E HD camera to capture video footage of cattle, and Yao et al. [66] recorded cows with a 4K HD Mokers camera. However, these 2D images only offered a flat projection of the animals, and the absence of a third dimension restricted the use of depth information [67]. The development of 3D imaging technology has introduced affordable 3D cameras for livestock identification. Moreover, researchers have also adopted infrared thermal imaging for livestock detection. Jaddoa et al. [68], for example, introduced a multi-view facial detection method for cattle using infrared thermography. They employed cameras like the AGEMA 590 PAL, Therma Cam S65, A310, and T335 to record thermal infrared videos of cattle. Table 2 enumerates several types of devices employed in livestock identification tasks.
In their research on the automated measurement of cows’ back posture, Viazzi et al. [70] found that 2D cameras struggle with challenges like varying lighting and shadow disturbances, making them less effective than 3D cameras. They highlighted that 3D cameras can directly capture three-dimensional depth information, which simplifies the image segmentation process and significantly improves its accuracy and efficiency.
The choice of the appropriate scene and device directly affects image quality and background complexity. Additionally, the cost–benefit ratio is a critical factor in the decision-making process. Equally important is the selection of methodologies that minimize disruption to the livestock, thereby adhering to the standards of animal welfare.

4. Visual Biometric Identification for Livestock

Visual biometric identification stands as a cornerstone in a multitude of animal monitoring applications. The pinpointing of individual livestock is accomplishable through a spectrum of methodologies, ranging from conventional machine learning to the cutting-edge of deep learning techniques. Traditional machine learning paradigms concentrate on harnessing critical insights from the biometric characteristics of livestock, deploying feature extraction algorithms in tandem with classification models to ascertain identity. Conversely, deep neural networks obviate the necessity for manual feature extraction, seamlessly extracting intricate and abstract features from data to facilitate identification. Each approach boasts distinct advantages, leading some scholars to merge these methodologies to amplify the precision of livestock identity recognition. Figure 8 provides a synopsis of the methodologies employed for the identification of livestock.

4.1. Traditional Machine Learning Methods

Traditional machine learning methods for livestock identification encompass two main phases: a feature extraction phase and a feature classification or matching phase.
The primary goal of feature extraction is to precisely capture and distill distinctive or readily identifiable feature data from each animal. The most commonly employed methods include LBP [32], the scale-invariant feature transform (SIFT) [71], SURF [72], and similar techniques. The subsequent feature classification or matching stage compares the extracted feature information against a preexisting livestock identity database; classification algorithms or matching strategies are employed to determine the identity that best corresponds to the animal in question. Such methods include linear discriminant analysis (LDA) [73], SVM [74], the fast library for approximate nearest neighbors (FLANN) [49], and so forth.
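The matching stage these pipelines rely on can be sketched as brute-force nearest-neighbor search over local descriptors with Lowe’s ratio test; FLANN approximates the same search efficiently at scale. The descriptors below are random stand-ins for real SIFT/SURF vectors, and the 0.75 ratio is a conventional but assumed value.

```python
import numpy as np

def match_descriptors(query, gallery, ratio=0.75):
    """Match each query descriptor to its nearest gallery descriptor,
    keeping only matches that pass Lowe's ratio test (nearest distance
    must be clearly smaller than the second-nearest)."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(gallery - q, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((qi, int(nearest)))
    return matches

rng = np.random.default_rng(2)
gallery = rng.normal(size=(50, 64))          # enrolled descriptors
# Query descriptors: noisy re-observations of gallery entries 3, 17, 42.
query = gallery[[3, 17, 42]] + rng.normal(scale=0.05, size=(3, 64))
print(match_descriptors(query, gallery))  # [(0, 3), (1, 17), (2, 42)]
```

The ratio test discards ambiguous matches rather than forcing a decision, which is what keeps feature-point identification accurate even when many enrolled animals look alike.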
For example, through the evaluation and testing of various algorithms for feature point detection and matching, Zhao et al. [39] found that the detection and matching rate of feature points had already reached a very high level, with the highest recognition accuracy reaching 96.72%. However, it was difficult to further improve the accuracy of individual identification of dairy cows through point matching alone. They therefore concluded that future work should focus on improving image quality and developing speckle features, such as highlight removal, binarization, and contour matching, to further improve identification accuracy. Kumar et al. [73] experimentally evaluated several feature extraction and classification algorithms and demonstrated that high recognition accuracy can be achieved with traditional machine learning methods. They recommended that future research expand the cattle face database, collect images under more diverse conditions, and develop multi-model fusion techniques that comprehensively consider the influence of multiple factors, further enhancing the accuracy and robustness of cattle face recognition. Andrew et al. [75] utilized traditional machine learning methods to perform operations such as depth segmentation and local feature matching, demonstrating that the dorsal coat patterns of dairy cows can be used for individual identification in small herds; they suggested applying this technique to unmanned aerial vehicle systems in the future to monitor outdoor herds. Huang et al. [76] achieved identification by extracting local features such as the hair, skin texture, and spots on the body surface of pigs and combining them with a classifier.
They established a model based on the pigsty scene, which can automatically identify different pig individuals in the pigsty without requiring them to be in a specific position or to maintain a specific posture, offering greater convenience and generality. Zhang et al. [77] showed that characteristics of the muzzle and forehead regions of a dairy cow’s head differ markedly between individuals; using a feature detection algorithm to extract edge features, they obtained the contour features of the cow’s head and then fused them with texture features. The results showed a recognition accuracy exceeding 99%, demonstrating successful applicability to dairy cow identification.

4.2. Deep Learning Methods

In the realm of deep learning-based recognition methodologies, a dichotomy emerges between two-stage and one-stage models. The two-stage approach requires ancillary operations prior to the identification step, tailored to enhance performance according to the demands of the task at hand; such operations include, for example, target detection and tracking. In contrast, one-stage models forgo these preliminary steps, instead directly harnessing deep neural networks to extract features from livestock imagery. Each paradigm has its own merits and limitations: whereas two-stage models offer heightened precision in complex scenarios, one-stage models are favored for their efficiency in certain contexts.

4.2.1. One-Stage Model

CNNs have become the dominant approach in computer vision-based livestock identification tasks due to their superior image processing capabilities. The CNNs most commonly utilized for livestock identification include the visual geometry group network (VGG) [37], mask region-based convolutional neural network (Mask R-CNN) [78], GoogLeNet [79], MobileNet [80], and others. For example, Li et al. [37] evaluated 59 deep learning models with different parameters and sizes in the same operating environment; the best identification accuracy was 98.7%. Their correlation analysis indicated that accuracy had a low positive correlation with model size (total parameters and file size), while processing speed was moderately and positively correlated with model size. Pang et al. [80] conducted sheep face classification experiments on four network models and evaluated training accuracy and loss; the network proposed in that paper (Order-MobilenetV2) achieved the best performance. However, the study had certain limitations: the data were collected in a single environment, and issues such as external environmental changes and occlusion were not fully considered.
Furthermore, RNNs are increasingly regarded as a highly effective technique in the field, due to their notable advantages in processing data with temporal continuity [81]. The most commonly employed RNN variants in the context of livestock identification tasks include long short-term memory networks (LSTM), bidirectional long short-term memory networks (BiLSTM), and LRCN.

4.2.2. Two-Stage Model

The two-stage model approach for livestock identification initially employs detection or segmentation models, such as you only look once (YOLO) [82], faster region-based convolutional network (Faster R-CNN) [83], and the single shot multibox detector (SSD) [84], to detect or segment the target animal within the image. Subsequently, neural network models, such as VGG and residual networks (ResNet), are employed to extract features and identify individuals. For example, Hitelman et al. [58] employed an object detection algorithm to localize the sheep’s face in an image, and the detected face was provided as input to seven different classification models. Although relatively high accuracy was achieved, they noted that the influence of sheep maturation and aging on recognition performance should also be investigated, and that future work should study automatic identification under uncontrolled conditions, for example by using automatic self-supervised methods to select key frames and by adopting lightweight CNN architectures and faster object detection algorithms. Hou et al. [85] first employed an object detection model to detect the cow rump and then used a lightweight convolutional neural network for identification, reaching an accuracy of 99.76%. Shojaeipour et al. [86] proposed a two-stage algorithm that first detects and extracts the muzzle region of cows in the image and then applies deep transfer learning for biometric recognition, resulting in an accuracy of 99.11%. Zhang et al. [87] first conducted sheep face detection and then compared the recognition results of three deep learning models on a sheep face image dataset; the improved AlexNet model in their paper reached an accuracy of 98.37%.
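The control flow shared by these two-stage systems can be summarized as detect, crop, classify. The sketch below uses stub components (a fixed bounding box and a nearest-mean classifier) purely to show that structure; real systems plug a YOLO or Faster R-CNN detector and a CNN classifier into the marked stages, and all names and values here are illustrative assumptions.

```python
import numpy as np

def detect_region(image):
    """Stub detector returning one bounding box (x, y, w, h).
    In a real pipeline this would be a YOLO / Faster R-CNN model."""
    return (8, 8, 16, 16)

def classify_crop(crop, prototypes):
    """Stub classifier: nearest mean-intensity prototype.
    In a real pipeline this would be a CNN such as ResNet."""
    mean = crop.mean()
    return min(prototypes, key=lambda k: abs(prototypes[k] - mean))

def identify(image, prototypes):
    x, y, w, h = detect_region(image)       # stage 1: localization
    crop = image[y:y + h, x:x + w]          # crop the detected region
    return classify_crop(crop, prototypes)  # stage 2: identification

# Synthetic image whose central region matches the "sheep_B" prototype.
image = np.zeros((32, 32))
image[8:24, 8:24] = 0.8
prototypes = {"sheep_A": 0.2, "sheep_B": 0.8, "sheep_C": 0.5}
print(identify(image, prototypes))  # sheep_B
```

Separating the stages is what lets two-stage systems discard background clutter before identification, at the cost of running two models per image.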

4.3. Hybrid Methods

The formidable capacity of deep learning to autonomously extract features from data empowers the dissection of nuanced intricacies within extensive datasets. This capability is exemplified by the discernment of minute variations in livestock fur pigmentation and morphological contours, which is critical for precise individual recognition. The synergy with traditional machine learning, which hinges on manually crafted features, imparts an intelligible layer to the data analysis, enhancing the interpretability and understanding of the derived outcomes. Although conventional techniques may exhibit diminished efficacy in the face of complex, high-volume data, their unique utility in certain contexts remains unassailable. Leveraging the strengths of both methodologies, researchers are forging novel hybrid approaches. These may entail the deployment of deep learning architectures for detection or feature extraction, followed by the application of traditional machine learning algorithms for classification or matching; alternatively, traditional methods may be employed for detection or preliminary data processing, with deep learning subsequently utilized for feature distillation and recognition. Table 3 catalogs a selection of studies that explore the convergence of deep learning with traditional machine learning strategies.

5. Challenges and Future Directions in Visual Livestock Biometrics

5.1. Challenges in Livestock Biometrics Using Computer Vision

The precipitous advancement of science and technology, in tandem with the burgeoning demand for modernization within the livestock sector, presents a significant opportunity for the accurate identification of livestock with the help of computer vision technology. Nonetheless, this domain is not without its array of challenges, encompassing the complexities of data collection, the nuances of data similarity and variability, and the quest for model precision and real-time performance. In addressing these impediments, researchers have diligently investigated and proffered solutions that are not only practicable but also efficaciously tailored to the demands of the industry.

5.1.1. Challenges in Data Collection

The procurement of a comprehensive and varied dataset of high-caliber imagery presents a formidable challenge, particularly in the context of uncooperative animals or when operating within the confines of remote agricultural settings.
  • Demand for large-scale data
The necessity for extensive data represents a significant challenge in the process of data collection, particularly in light of the inherent complexities of the tasks at hand and the imperative for precision. Deep learning methodologies frequently necessitate the availability of voluminous data for training and learning purposes. In certain instances, the requisite data volume may reach hundreds or even thousands of images, prompting researchers to propose solutions at various levels [94]. For instance, Zhang et al. [95] addressed the protracted and laborious issue of acquiring sheep facial imagery without eliciting distress among these inherently skittish creatures. This was accomplished through the innovation of a multi-view sheep face image capture apparatus (Figure 9A). Specifically, sheep are ushered onto a conveyor belt system via the portals of a moving walkway, whereupon they are gently propelled forward. The imaging system, in turn, captures facial images of the sheep from five distinct vantage points, relayed in real time to an overarching control system.
Data augmentation techniques are used by many researchers to increase the amount of data; these include rotation, scaling, and cropping, among others. For instance, Salama et al. [96] employed a range of data augmentation techniques to increase the number of training images, thereby enhancing the generalization capacity and resilience of the CNN.
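As an illustration of the kind of augmentation pipeline these studies describe, the following sketch applies a random flip, crop, and brightness jitter with NumPy; the crop ratio and jitter range are arbitrary choices, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return a randomly perturbed copy of the image (flip, crop, brightness)."""
    out = image.copy()
    if rng.random() < 0.5:                        # random horizontal flip
        out = out[:, ::-1]
    h, w = out.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)           # random 90% crop
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = out[top:top + ch, left:left + cw]
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
    return out

img = rng.random((64, 64, 3))
augmented = [augment(img) for _ in range(5)]      # 5 extra training samples
print(augmented[0].shape)  # (57, 57, 3)
```

Each call yields a differently perturbed view of the same individual, which is how a small set of originals is stretched into a larger training set.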
Furthermore, computer vision-based livestock individual recognition requires the annotation of a substantial number of images to achieve satisfactory performance. However, image annotation is a time-consuming and laborious process. Some researchers posit that reducing the reliance on large amounts of annotated data by improving the efficiency and scalability of the data learning process may offer a viable solution. Several techniques have been proposed that attempt to enable deep neural networks to learn from small datasets, reducing annotation costs while maintaining good predictive performance. Among these techniques, there have been great advances in the field of few-shot learning and, more notably, semi-supervised learning (SSL). However, there are fewer studies applying SSL to the identification of individual animals. Ferreira et al. [97] evaluated the potential of a semi-supervised learning technique called pseudo-labeling to improve the predictive performance of deep convolutional neural networks trained to identify individual Holstein cows using labeled training sets of varied sizes and a larger unlabeled dataset (Figure 9B). The results show that by using this technique to automatically label previously unlabeled images, accuracy is improved by up to 20.4 percentage points compared with training on manually labeled images alone. Additionally, the method evaluated in their study is complementary to current animal identification research, as it can be seamlessly applied to previously trained models without requiring any modifications to the model architecture or optimization procedure.
Figure 9. Multi-view sheep face image acquisition device and the steps that compose one round of pseudo-labeling. (A) The overall structure of the multi-view sheep face image acquisition device: (a) electric control box, (b) moving channel, (c) gate, (d) conveyor belt system, (e) upper computer control system, and (f) acquisition camera system [95]; (B) Using pseudo-labeling to improve performance of deep neural networks for animal identification: blue points correspond to labeled data, gray points correspond to unlabeled data, and orange points correspond to originally unlabeled data whose prediction confidence is greater than a given threshold [97].
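One round of pseudo-labeling of the kind evaluated by Ferreira et al. [97] can be illustrated with a toy nearest-centroid classifier on synthetic features; the confidence score, threshold, and data here are illustrative only, not details of the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Fit one centroid per class; a toy stand-in for a trained CNN classifier."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_proba(centroids, X):
    """Softmax over negative distances, used as a toy confidence score."""
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-in: two "individuals" as Gaussian blobs in feature space.
X_lab = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(4, 0.5, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])

centroids = fit_centroids(X_lab, y_lab)
proba = predict_proba(centroids, X_unlab)
confident = proba.max(axis=1) > 0.9        # keep only confident predictions

# Enlarge the training set with confidently pseudo-labeled samples and refit.
X_new = np.vstack([X_lab, X_unlab[confident]])
y_new = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
centroids = fit_centroids(X_new, y_new)
print(int(confident.sum()), "pseudo-labeled samples added")
```

The loop in Figure 9B simply repeats this round: predict on the unlabeled pool, absorb the confident predictions, and retrain.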
  • Dataset imbalance
Among the challenges inherent to the data collection process, data imbalance also represents a salient issue. The primary contributing factors include differences in herd structure, biases introduced by the data collection process itself, and discrepancies in image screening criteria. Data augmentation remains a viable solution for addressing imbalance. For example, Shang [98] proposed a nonlinear image data augmentation method based on an improved Cycle-GAN network model, in which a third virtual image of the same sheep is generated from two different images of that sheep. This approach enables the stable and effective fusion of sheep image features and yields a balanced dataset. In addressing an imbalanced cow body dataset, Tassinari et al. [65] augmented the least common classes to achieve a more balanced distribution. This was done by randomly selecting a number of frames using XnConvert v.1.74 software and creating modified copies with altered luminance levels to simulate different lighting conditions. Figure 10 shows the comparison between an original frame and a modified frame created by this procedure. A comparison of the augmented and original datasets revealed that the former was more effective in improving the quality of the network’s detection, particularly in terms of precision. Li et al. [37] employed two strategies, a weighted cross-entropy (WCE) loss function and data augmentation, which improved the maximum accuracy of the 20 selected models by 0.1% and 0.3%, respectively. Liu [99] proposed a bilateral recognition algorithm based on a mix balance network transformer (MBN-Transformer) to deal with an unbalanced cow body dataset. The algorithm first reduces overfitting on cow body images via an image-mixing augmentation module, and then uses a Transformer encoder to design a conventional branch and a balanced branch that process the mixed data with different samplers and merge the output features.
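The weighted cross-entropy idea applied by Li et al. [37] can be sketched as follows; the inverse-frequency weighting scheme and the example class counts are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_counts):
    """Mean cross-entropy where rare classes receive proportionally larger weights."""
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
    per_sample = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights[labels] * per_sample))

# Class 1 is rare (10 images vs. 90), so its mistakes are penalized more heavily.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])     # predicted class probabilities per image
labels = np.array([0, 1, 1])      # true identities
print(round(weighted_cross_entropy(probs, labels, [90, 10]), 3))
```

With uniform weights, the confident errors on the rare class would barely move the loss; the weighting forces the optimizer to attend to the minority identities.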
  • Environmental interference
In the realm of agricultural data acquisition, the ideal conditions that facilitate pristine data collection are seldom encountered in practical farming environments. As a corollary, the data procured are inherently subject to perturbations introduced by a multitude of environmental variables. Notably, livestock may be occluded by fellow animals or equipment, or their visibility may be compromised by variable lighting conditions. To mitigate these challenges, researchers have proffered a suite of innovative strategies. For instance, Wang et al. [100] introduced random cropping and random obscuring strategies in the data loading stage (Figure 11), which enhanced the model’s ability to identify partially visible individuals and improved the recognition rate by 2.61% and 1.91%, respectively. Li et al. [101] addressed the problem of narrow breeding environments in pig farms, where pig faces are easily obscured by dirt or other pigs, with an improved YOLOv3 model: first, the model introduced the densely connected convolutional network (DenseNet) into the basic feature extractor; second, to integrate multi-scale information without introducing too many parameters, an improved spatial pyramid pooling (SPP) unit was added after the backbone network. To address complex environments in which pigs sticking together, pig fence occlusion, and other factors complicate multi-target instance detection of individual pigs, Hu et al. [102] introduced a dual attention unit (DAU) into the feature pyramid network structure and connected it in series with a position attention unit (PAU) to build different spatial attention modules, achieving an accuracy of 92.8%. Li et al. [103] employed the cutout algorithm to simulate pig face occlusion scenarios for training. Concurrently, they utilized CSPDarknet53 for efficient feature extraction, assisted by SPP for multi-scale feature extraction and training, and introduced the spatial attention module (SAM) along with the ReLU and Mish activation functions, with the objective of enhancing the model’s capacity to recognize situations where the pig’s face is occluded. Yang et al. [104] introduced coordinate attention mechanisms and coordinate convolution modules with coordinate channels into the feature extraction layer and the detection head of the YOLOv4 network, respectively, to enhance the model’s sensitivity to target locations. In response to the difficulty of effectively identifying cows at night, Xu et al. [105] proposed a nighttime cow face recognition method based on cross-modal shared feature learning: first, the model framework adopts a shallow two-stream structure to effectively extract the feature information shared across cow face images of different modalities; second, a Triplet attention mechanism is introduced to capture cross-dimensional interaction information; and lastly, the representation of cross-modal identity information is further mined by an embedded extension module. Compared with a non-cross-modally trained model, the highest improvement is 19.67 percentage points.
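The cutout-style occlusion simulation mentioned above can be sketched in a few lines; the patch size and image dimensions are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def cutout(image, size=16):
    """Zero out a random square patch to simulate partial occlusion in training."""
    out = image.copy()
    h, w = out.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    out[top:top + size, left:left + size] = 0.0
    return out

img = np.ones((64, 64, 3), dtype=np.float32)
occluded = cutout(img)
print(img.sum() - occluded.sum())   # 16 * 16 * 3 = 768 zeroed values
```

Training on such artificially occluded copies pushes the network to rely on the remaining visible features, which is the same effect the cited studies seek when faces are blocked by dirt, fences, or other animals.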
  • Posture changes
The maintenance of a consistent posture by livestock during data acquisition is not always practical. This inherent variability in posture can lead to blurring, distortion, or the outright absence of critical features within the captured images. In response, researchers have developed targeted solutions to the challenges arising from posture fluctuations in real-world settings. For example, to address the problem that sheep may not directly face the camera during recognition, Xue et al. [106] collected multi-angle sheep face images to train a facial orientation recognition algorithm, combined the advantages of convolutional neural networks and transformers to construct a MobileViT model, and introduced the ECA module (a commonly used channel attention mechanism designed to improve the efficiency with which a model exploits channel information while reducing computational complexity), which improved the accuracy and robustness of sheep facial orientation recognition. Xue et al. [83] proposed the sheep face detection and correction (SheepFaceRepair) method, which aims to detect the sheep face area in the image to be recognized and align it (Figure 12a). Weng et al. [107] proposed a cow face recognition method based on a two-branch convolutional neural network (TB-CNN), which uses two feature extraction networks combined with the channel attention mechanism of the SE block to extract features from cow facial images, replaces the fully connected layer with a global average pooling layer to strengthen the association between network elements and classes, and reduces the influence of pose changes on recognition results by comprehensively recognizing facial images of cows from different angles (Figure 12b). Xiao et al. [90] proposed a method for the cow pattern deformation problem. Firstly, a top-view image of a cow is obtained and segmented using an improved Mask R-CNN to extract the shape features of the cow’s back. Then, the best feature subset is selected using the Fisher approach and an SVM classifier is applied to identify individual cows, finally achieving 98.67% accuracy. Zhang et al. [108] took into account the unstable recognition caused by training on sheep facial images from a single angle, adopted a multi-pose training strategy, and embedded a convolutional block attention module (CBAM) in the neck of the YOLOv4 model.
Pioneering investigations have yielded novel strategies aimed at mitigating the complexity engendered by the dynamic diversification of individual orientation distributions, a phenomenon precipitated by alterations in posture. For instance, Guo et al. [64] addressed the problems of multi-angle views, random distribution, flexibility, and difficulty in sheep face detection in actual rearing environments by adding a coordinate attention mechanism to the backbone network of YOLOv5s, improving detection accuracy on occluded-region, small-target, and multi-view samples. Wang et al. [100] designed a spatial transformation deep feature extraction module named ResSTN, which integrates residual networks (ResNet), spatial transformer networks (STN), and attention mechanisms and incorporates preprocessing techniques. This module was developed to effectively tackle the low recognition rate resulting from the diverse orientation distribution of individual cows and has been shown to improve average accuracy by 2.98% for such cows. Wan et al. [109] designed a feature extraction channel with an attention mechanism and RepVGG (an improved backbone network based on the VGG network) blocks. Two channels form a bilinear feature extraction network that extracts important features of different postures and angles, and features of the same scale from different images are then fused to enrich the information. At the same time, multi-pose and multi-angle data are used for training and testing, making full use of these data to reduce the impact of posture and angle on recognition.

5.1.2. Challenges Posed by Livestock Characteristics

The indelible traits of livestock populations engender a litany of formidable challenges. Central among these is genetic homogeneity, which directly impacts the domain of livestock identification, rendering individual characteristics noticeably similar. An additional challenge arises from the morphological attributes of growth, wherein the appearance and dimensions of livestock are subject to evolution throughout their maturation trajectory.
  • Data similarities
The visual features of livestock, notably their facial features, exhibit a pronounced degree of congruence within the same breed and among individuals of comparable stature. This homogeneity is manifest in the orbital contours, the alignment of orifices such as the mouth and nose, and the composite facial silhouette. Such similarity has precipitated a decrement in recognition precision, spurring scholars to devise an array of strategies aimed at surmounting this impediment. For instance, Lv [110] proposed adding an attention network module and depthwise separable convolutions to a traditional convolutional neural network, simultaneously improving network training speed and the extraction accuracy of sheep facial features. Zhou [11] constructed the efficient channel and spatial attention (ECAS) module, which integrates spatial information according to the sheep face contour and facial features; by introducing the ECAS module into the deep feature extraction layer of the MobileFaceNet network, a lightweight sheep face recognition model, ECAS-MFC, was constructed. Compared with the MobileFaceNet model, the ECAS-MFC model improves the recognition rate by 7.21% in open-set verification and by 2.61% in closed-set verification. Wang et al. [111] employed the ShuffleNetv2 model in conjunction with triplet loss and cross-entropy loss, thereby enhancing the network’s capacity to discern similar individuals; additionally, they utilized a batch normalization neck (BNNeck) to mitigate the discord between the two loss functions, and the model achieved an accuracy of 82.93% on a dataset comprising 87 cows. Zhang et al. [112] proposed a high-similarity sheep face recognition model based on a Siamese network, named Siamese–high-similarity sheep face recognition (Siamese–HSFR), which uses contrastive learning to assess the probability that two images belong to the same sheep. In the feature extraction network of Siamese–HSFR, two extraction modules are introduced, namely the residual fusion block (RF_Block) and the enhanced identity block (EI_Block), aiming to extract more detailed and robust sheep face features. Furthermore, by introducing a three-dimensional attention mechanism in the EI_Block, the SAM enhancement block (SAM_Block) is constructed to enhance the discriminative capability for high-similarity face features. Chen et al. [113] proposed a deep learning re-identification network model, the global and part network (GPN), which incorporates an attention mechanism in the Part branch and replaces the local region extraction strategy with a spatial transform (ST) module. This enables the model to capture both global features and local details of the cow’s face, thereby facilitating the learning of subtle differences between faces and enhancing the accuracy and efficiency of cow face re-identification, with improvements of 9.1% and 8.0% on Rank-1 and mAP, respectively, compared with the unimproved model (Figure 13).
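The triplet loss used in several of these studies to separate highly similar individuals can be illustrated on toy embeddings; the margin and the vectors below are illustrative values, not from the cited papers:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-individual embeddings together, push different ones apart."""
    d_pos = np.linalg.norm(anchor - positive)   # distance to same individual
    d_neg = np.linalg.norm(anchor - negative)   # distance to other individual
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])          # embedding of one face image of sheep A
p = np.array([0.9, 0.1])          # another image of sheep A
n = np.array([0.0, 1.0])          # a distant, easily separated sheep B
hard_n = np.array([0.8, 0.2])     # a look-alike sheep whose embedding sits close

print(round(triplet_loss(a, p, n), 3))        # 0.0, already well separated
print(round(triplet_loss(a, p, hard_n), 3))   # positive loss for the look-alike
```

Only the "hard" look-alike triplet produces a nonzero loss, which is exactly why triplet training concentrates gradient signal on visually similar individuals.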
  • Data dynamic changes
The morphology and appearance of livestock are subject to transformation throughout their growth trajectory, rendering the initial features employed for identification potentially inaccurate or unreliable. Zhang et al. [114] used a deep convolutional network to study the relationship between facial changes and the accuracy of the recognition model in the growth of large white fattening pigs. It is suggested that in the pig face recognition system for fattening pigs, the pig face recognition model should be re-updated every day using at least the first 4 days and ≥10,800 sets of image data. Liu [99] devised a template updating protocol that begins by assessing whether the extracted features align with those of the cattle in the existing template database. If the features are not found in the library, the cattle depicted in the image will be incorporated as a new entry. Should the cattle’s body image already reside within the library, a decision must be made regarding whether an update is warranted. Outdated templates are removed, and the new cattle body image is subsequently refreshed within the template library. Ferreira et al. [51] experimentally verified that the application of 3D deep learning algorithms is able to recognize individual animals, and that these algorithms are sufficiently robust to take into account the changes in body size and shape during the growth period of the animal. Sihalath et al. [46] collected facial image data of pigs at different growth stages to study the recognition performance of deep convolutional neural networks on different age datasets, demonstrating that the models trained on the combined dataset (the dataset includes images of a pig’s face at three stages of body weight: 25–50 kg, 60–85 kg, and 90–120 kg) can achieve relatively good results when dealing with data of different age ranges.

5.1.3. Challenges in Model Accuracy and Generalization

  • Balance between model accuracy and complexity
To address the challenges inherent to livestock identification, the utilization of intricate models is often a necessity. These models are capable of more robust feature extraction and learning, yet they demand a considerable investment in computational resources for training and deployment. Consequently, striking a balance between identification accuracy and model complexity becomes a pivotal objective. For example, Fu et al. [115] proposed a lightweight multi-light-based convolutional neural network model that improves recognition accuracy and global information extraction by introducing dilated convolution, a multi-scale convolutional module, and channel attention while reducing the number of parameters; the experimental results show a model size of only 5.86 MB, providing a lightweight solution for individual cow recognition. Li et al. [116] put forth MobileViTFace, a novel lightweight sheep face recognition model that integrates convolutional and transformer structures (as shown in Figure 14). In comparison with the standard vision transformer (ViT) model, MobileViTFace does not require an extensive training dataset or high computational complexity, and is more straightforward to deploy on edge devices. Wang et al. [117] designed a lightweight pig face recognition model by replacing the parameter-heavy fully connected layer with the k-nearest-neighbor algorithm; the parameters of these improved models are reduced to as little as 4.32% of the original model. Li et al. [118] reduced the computational complexity of the multi-head attention layer and replaced the positional encoding with depthwise separable convolution, which improved the efficiency and reusability of the model; in addition, the Transformer structure was placed at a later stage in the network design to balance performance and efficiency, so that the model reduces parameters and floating-point operations while maintaining high recognition accuracy. Li et al. [119] proposed a lightweight sheep face recognition model, SheepFaceNet. They began by creating an efficient and fast base module, Eblock, and then used it to build two different models: SheepFaceNetDet, a sheep face detection model that employs Eblock to construct a backbone network and incorporates a bi-directional feature pyramid network (FPN) layer to enhance geometric localization and optimize the network structure; and SheepFaceNetRec, used for sheep face recognition, whose feature extraction network is built from Eblock, incorporates the ECA channel attention mechanism to enhance feature extraction, and adopts multi-scale feature fusion to enable rapid and precise recognition. Ma et al. [120] introduced the soft non-maximum suppression (Soft-NMS) algorithm into the Faster R-CNN model; Soft-NMS is an alternative to the traditional NMS algorithm that removes redundant boxes in a softer way, attenuating scores to reduce the influence of overlapping neighboring detection boxes on the detection results and easing the pressure of model training. Zhang et al. [121] proposed a lightweight sheep face recognition model, LSR-YOLO. Specifically, the feature extraction modules of the YOLOv5s backbone and neck are replaced with the ShuffleNetv2 module and the Ghost module to reduce floating-point operations (FLOPs) and parameters, and a coordinate attention (CA) module is introduced in the backbone network to suppress non-critical information and improve the feature extraction capability of the recognition model; the FLOPs and parameters of LSR-YOLO are reduced by 25.5% and 33.4%, respectively, compared with YOLOv5s.
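The score-attenuation idea behind Soft-NMS can be illustrated with a minimal Gaussian-decay variant; the boxes, scores, and sigma below are illustrative, and real implementations also apply a final score threshold:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def soft_nms(boxes, scores, sigma=0.5):
    """Gaussian Soft-NMS: decay overlapping scores instead of discarding boxes."""
    boxes, scores = list(boxes), list(scores)
    kept = []
    while boxes:
        i = int(np.argmax(scores))
        best_box, best_score = boxes.pop(i), scores.pop(i)
        kept.append((best_box, best_score))
        # Attenuate remaining scores by their overlap with the kept box.
        scores = [s * np.exp(-iou(best_box, b) ** 2 / sigma)
                  for b, s in zip(boxes, scores)]
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
for box, score in soft_nms(boxes, scores):
    print(box, round(score, 3))
```

Unlike hard NMS, the heavily overlapping second box survives with a reduced score rather than being deleted outright, which helps when two animals genuinely overlap in the frame.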
  • Improvement of model generalization
Generalization ability refers to a model’s performance on new, unseen data. Typically, models are tested on images similar to the dataset on which they were trained; although such models score highly in tests, they may not perform well on images with different characteristics. Andrew et al. [81] presented a complete procedure for recognizing known and unknown Holstein Friesian cattle. By building a robust embedding space from several examples, they achieved effective recognition of unknown cattle with an average accuracy of 93.75%; however, the training sample size was small, and there is still room for improvement in accuracy. Bati et al. [122] proposed an improved sheep recognition and tracking algorithm based on YOLOv5 and the SORT method, making adaptive adjustments to the YOLOv5 model, such as adjusting the size of the input data and changing the filter sizes in the model structure, to improve recognition and tracking on images with different characteristics. Wang et al. [123] introduced a ResNAM network that integrates the normalized attention module (NAM) with the ResNet model and, by combining multiple loss functions and metrics, constructed an open-set facial recognition framework, achieving a high accuracy of 95.28%. Wang et al. [100] introduced an innovative open-set metric learning-based cow back pattern recognition framework that combines a variety of loss functions, metrics, and backbone networks; this integration enables open-set recognition of cow back pattern images, helping to identify cows that previous models could not recognize. Bakhshayeshi et al. [124] integrated the YOLOv5 algorithm with Siamese neural networks (SNN) for cattle re-identification. The SNN learns a similarity function within the recognition module, extracting effective feature representations from a limited amount of data by learning the input space. This enables the model to ascertain, given cow face images from different environments, whether the cow in question is the same individual based on the learned similarity metric, obviating the need to retrain for each environment.
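The open-set strategy shared by several of these works, matching a query embedding against a gallery of known individuals and rejecting low-similarity queries, can be sketched as follows. The gallery, threshold, and identity names are hypothetical:

```python
import numpy as np

def identify_open_set(query, gallery, threshold=0.8):
    """Match a query embedding against known individuals; reject if no match.

    gallery: dict mapping identity -> reference embedding.
    Returns the best-matching identity, or "unknown" below the threshold.
    """
    q = query / np.linalg.norm(query)
    best_id, best_sim = "unknown", threshold
    for identity, ref in gallery.items():
        sim = float(q @ (ref / np.linalg.norm(ref)))   # cosine similarity
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

gallery = {"cow_01": np.array([1.0, 0.0, 0.0]),
           "cow_02": np.array([0.0, 1.0, 0.0])}
print(identify_open_set(np.array([0.95, 0.05, 0.0]), gallery))   # cow_01
print(identify_open_set(np.array([0.0, 0.0, 1.0]), gallery))     # unknown
```

Because unseen animals simply fall below the similarity threshold, new individuals can be enrolled by adding one reference embedding to the gallery, with no retraining.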

5.2. Research Hotspots and Trends in Visual Livestock Biometrics

In the process of addressing the challenges confronting computer vision-based livestock biometric technologies, a series of research foci have emerged, charting the direction for future research in this field.

5.2.1. Application of Feature Fusion and Multimodal Fusion Technology

Feature fusion seeks to amalgamate information from disparate sources or levels, thereby creating a more distinctive and representative feature vector to augment the model’s recognition capabilities. Given the varying sensitivity of different features to environmental perturbations such as lighting, occlusion, and angle variation, the amalgamation of multiple features has emerged as a strategy to bolster system robustness in complex settings, becoming a prevalent research focus in recent times. For instance, Okura et al. [125] proposed a cow identification method based on RGB-D video analysis, which uses gait and texture features for individual cow identification and, at the final stage, applies a simple score-level fusion approach to linearly combine the gait- and texture-based dissimilarities. Li et al. [126] proposed a decision-layer fusion cattle identification method combining multiple features of the cattle face, muzzle pattern, and ear tag. Liu et al. [127] used ResNet50 to extract the pattern features of dairy cows and fused the fourth- and fifth-scale features with semantic information, effectively improving recognition accuracy.
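A minimal sketch of the score-level fusion idea described by Okura et al. [125], assuming simple per-identity match scores and arbitrary cue weights (the scores and weights below are hypothetical):

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Weighted linear fusion of per-identity scores from several feature cues."""
    score_lists = [np.asarray(s, dtype=float) for s in score_lists]
    fused = sum(w * s for w, s in zip(weights, score_lists))
    return fused / sum(weights)

# Hypothetical match scores for three candidate cows from two cues.
gait_scores = [0.6, 0.3, 0.1]       # gait-based matcher
texture_scores = [0.2, 0.7, 0.1]    # coat-texture-based matcher
fused = fuse_scores([gait_scores, texture_scores], weights=[0.4, 0.6])
print(int(np.argmax(fused)))        # identity index chosen after fusion
```

Here the gait cue alone would pick identity 0, but the fused evidence favors identity 1; combining cues in this way is what makes the system robust when any single cue is unreliable.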
Concurrently, multimodal fusion endows systems with greater adaptability to the dynamic and intricate livestock rearing environment and the behavioral nuances of animals. Data from different modalities are mutually complementary, ensuring that the system can still operate effectively when one modality’s data are compromised. For example, studies have probed the vocalizations of animals, with Monica et al. [128] describing that the bellows of cows contain individual information compared to other species, as the vocal characteristics of different cows vary greatly. Briefer et al. [129] used grunting to determine the emotional state of pigs. When using a multimodal data fusion system, if the visual sensors are contaminated with dust and the quality of the visible image is degraded, the system can still use data from other modalities (such as grunt characteristics obtained from the acoustic sensors) to perform identification and ensure that the system is working correctly. The deployment of such fusion technologies promises a holistic approach to refining the precision and dependability of livestock identification, potentially catalyzing the advancement of intelligent management systems within the livestock sector.

5.2.2. Production of Large-Scale Benchmark Dataset

The diversity of livestock species has led to fragmented datasets and a lack of harmonization, which has hindered comparisons between models and progress in the field. Therefore, a comprehensive benchmark designed exclusively for the evaluation of individual livestock recognition algorithms is essential. Pang et al. [130] developed a large-scale benchmark dataset, Sheepface-107, consisting of 5350 images acquired from 107 different subjects. The images of each sheep were captured from multiple angles, including front and back views, and the variety of images captured provides a more comprehensive representation of facial features. In addition to the dataset, an evaluation protocol was developed that applied multiple evaluation metrics to the results produced by three different deep learning models (VGG16, GoogLeNet, and ResNet50). Statistical analysis of each algorithm showed that accuracy and number of parameters were the most useful metrics for evaluating recognition performance.

5.2.3. Multi-Object Tracking and Identification

Considering the practical application and promotion of the technology, as well as the need to enhance identification efficiency, Multi-Object Tracking (MOT) aligns with the future trajectory of modern livestock farming. On large-scale farms, where livestock populations are sizeable and activity is frequent, MOT enables the precise localization and continuous tracking of multiple targets simultaneously, providing real-time data on their positions and movement trajectories. Several scholars have already engaged in research in this domain. Guo et al. [131] applied three deep learning-based multi-target tracking methods to pigs and, by extending a weighted association strategy, optimized multi-object re-identification to improve the accuracy of individual pig tracking. Guan et al. [132] combined a CNN with particle filter tracking to simultaneously recognize a cow's face, body region, and sitting or standing state, achieving comprehensive recognition and tracking; by training on image samples collected during both daytime and nighttime, their model can recognize cows around the clock, which improves the applicability of the system.
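At the core of such detection-based MOT pipelines is an association step that matches current-frame detections to existing tracks. The following is a minimal greedy IoU-based sketch, a simplification of the weighted-association strategies cited above; boxes are (x1, y1, x2, y2) tuples and the 0.3 threshold is an illustrative choice.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy one-to-one matching of tracks to detections by descending IoU.
    Returns {track_id: detection_index} for pairs above the threshold."""
    pairs = sorted(((iou(box, det), tid, di)
                    for tid, box in tracks.items()
                    for di, det in enumerate(detections)), reverse=True)
    matches, used_t, used_d = {}, set(), set()
    for score, tid, di in pairs:
        if score < thresh or tid in used_t or di in used_d:
            continue
        matches[tid] = di
        used_t.add(tid)
        used_d.add(di)
    return matches

# Hypothetical frame: two tracked pigs, two detections (order shuffled).
tracks = {"pig_1": (0, 0, 10, 10), "pig_2": (20, 20, 30, 30)}
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(associate(tracks, dets))  # pig_1 -> detection 1, pig_2 -> detection 0
```

Production trackers replace the greedy loop with optimal assignment (e.g., the Hungarian algorithm) and augment the IoU cost with appearance or re-identification features, which is precisely where individual identification and MOT intersect.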

6. Conclusions

The techniques used for identifying livestock have evolved considerably over time, progressing from primitive methods such as branding and ear notching to the sophisticated computer vision-based methods used today. The selection of visual features is a crucial aspect of livestock identification. Facial features have become a common choice owing to their distinctiveness and relative stability, while some emerging features have demonstrated remarkable potential in addressing the limitations of traditional characteristics. Selected visual features should support a high degree of individual discrimination while remaining relatively straightforward to collect and process with computer vision technology. Consequently, future research should focus on further developing and optimizing the extraction methods for these features, and on identifying promising new visual features, in order to enhance the performance of livestock identification technology.
Secondly, the acquisition of visual features is of paramount importance for the precision and efficacy of livestock identification. During acquisition, devices appropriate to the given scenario must be selected to ensure that the data obtained offer optimal clarity, completeness, and signal-to-noise ratio. Future research should further strengthen visual information acquisition technology, with the objective of enhancing image quality and stability, thereby providing more reliable input data for subsequent visual information processing.
It is becoming increasingly evident that deep learning methods have the potential to revolutionize the field of livestock identification, so it is unsurprising that they are emerging as a prominent research trend. Deep learning models can automatically learn hierarchical feature representations from images and optimize model parameters using substantial amounts of training data, thereby achieving accurate identification of individual livestock. However, the potential of traditional machine learning algorithms should not be overlooked: they may possess distinctive advantages in certain specific scenarios, and past research has amassed a wealth of experience and knowledge with them. Future research should therefore fully recognize the respective values of traditional machine learning and deep learning methods and choose between them flexibly to achieve more efficient and accurate identification. Further research into deep learning technology is also essential in order to develop superior model structures and algorithms, thereby enhancing the efficacy of livestock identification technology.
The field of livestock identification based on computer vision has witnessed significant advancements, a testament to the relentless efforts and innovative explorations of numerous researchers. Nevertheless, this technology still confronts an array of challenges on its path forward. For instance, widely accessible, large-scale benchmark datasets are lacking, and existing studies often focus on specific scenarios under relatively idealized conditions. Looking ahead, multiple critical dimensions must be explored to propel the continued development of livestock identification technology. Firstly, enhancing the accuracy and robustness of identification remains a top priority, which necessitates the integrated application of advanced image processing techniques, optimized feature extraction algorithms, and more potent classification models. Secondly, reducing the computational complexity and time costs of algorithms is an urgent issue. Furthermore, attention to the practical application and dissemination of the technology is of great significance for advancing the development of the livestock industry. Additionally, the evolution of general artificial intelligence may endow identification systems with greater adaptive capabilities and intelligent decision-making, enabling them to automatically adjust recognition strategies based on diverse farming scenarios and livestock behaviors, thus achieving more intelligent identification and management. Simultaneously, throughout this technological development, utmost importance must be placed on ethical considerations, ensuring that the collection, use, and sharing of livestock data adhere to stringent moral and legal standards, thereby protecting the privacy of livestock data and the data security of farms.

Author Contributions

Conceptualization, H.M. and L.Z. (Lina Zhang); methodology, H.M. and F.Y.; formal analysis, L.H.; investigation, Y.W. and L.Z. (Lin Zhu); resources, J.Z.; data curation, H.M. and L.Z. (Lina Zhang); writing—original draft preparation, H.M.; writing—review and editing, H.M. and L.Z. (Lina Zhang); project administration, F.Y.; funding acquisition, L.Z. (Lina Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China through the project (Grant No. 62061037), the Natural Science Foundation of Inner Mongolia Autonomous Region of China through the projects (Grant No. 2023LHMS06017, 2023LHMS03066), the Inner Mongolia Normal University through the projects (Grant No. 2022JBZD012, 2022JBYJ026) supported by the Fundamental Research Funds, and the funds for Reform and Development of Local Universities Supported by The Central Government (Cultivation of First-Class Disciplines in Physics).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNNs: Convolutional Neural Networks
RNNs: Recurrent Neural Networks
SURF: Speeded-Up Robust Features
FAST: Features from Accelerated Segment Test
DCNNs: Deep Convolutional Neural Networks
SVM: Support Vector Machine
LBP: Local Binary Pattern
RGB-D: Red-Green-Blue–Depth
LRCN: Long-term Recurrent Convolutional Network
SIFT: Scale-Invariant Feature Transform
LDA: Linear Discriminant Analysis
FLANN: Fast Library for Approximate Nearest Neighbors
VGG: Visual Geometry Group
Mask R-CNN: Mask Region-based Convolutional Neural Network
GoogLeNet: Google Network
MobileNet: Mobile Network
LSTM: Long Short-Term Memory
BiLSTM: Bidirectional Long Short-Term Memory
YOLO: You Only Look Once
Faster R-CNN: Faster Region-based Convolutional Neural Network
SSD: Single Shot MultiBox Detector
ResNet: Residual Network
AlexNet: Alex Network
SSL: Semi-Supervised Learning

References

  1. Tonsor, G.T.; Schroeder, T.C. Livestock identification: Lessons for the US beef industry from the Australian system. J. Int. Food Agribus. Mark. 2006, 18, 103–118. [Google Scholar] [CrossRef]
  2. Ahmad, M.; Abbas, S.; Fatima, A.; Ghazal, T.M.; Alharbi, M.; Khan, M.A.; Elmitwally, N.S. AI-Driven livestock identification and insurance management system. Egypt. Inform. J. 2023, 24, 100390. [Google Scholar] [CrossRef]
  3. Bodkhe, J.; Dighe, H.; Gupta, A.; Bopche, L. Animal Identification. In Proceedings of the 2018 International Conference on Advanced Computation and Telecommunication (ICACAT), Bhopal, India, 28–29 December 2018; pp. 1–4. [Google Scholar]
  4. Zhao, J.; Li, A.; Jin, X.; Pan, L. Technologies in individual animal identification and meat products traceability. Biotechnol. Biotechnol. Equip. 2020, 34, 48–57. [Google Scholar] [CrossRef]
  5. Silveira, M. A Review of the History and Motivations of Animal Identification and the Different Methods of Animal Identification Focusing on Radiofrequency Identification and How It Works for the Development of a Radiofrequency Identification Based Herd Management System on the Cal Poly Dairy. 2013. Available online: https://www.researchgate.net/publication/303946440_A_Review_of_the_History_and_Motivations_of_Animal_Identification_and_the_Different_Methods_of_Animal_Identification_Focusing_on_Radiofrequency_Identification_and_How_It_Works_for_the_Development_of_a_ (accessed on 27 September 2023).
  6. Roberts, C.M. Radio frequency identification (RFID). Comput. Secur. 2006, 25, 18–26. [Google Scholar] [CrossRef]
  7. Duyck, J.; Finn, C.; Hutcheon, A.; Vera, P.; Salas, J.; Ravela, S. Sloop: A pattern retrieval engine for individual animal identification. Pattern Recognit. 2015, 48, 1059–1073. [Google Scholar] [CrossRef]
  8. Bugge, C.E.; Burkhardt, J.; Dugstad, K.S.; Enger, T.B.; Kasprzycka, M.; Kleinauskas, A.; Myhre, M.; Scheffler, K.; Ström, S.; Vetlesen, S. Biometric Methods of Animal Identification; Course notes; Laboratory Animal Science at the Norwegian School of Veterinary Science: Oslo, Norway, 2011; pp. 1–6. [Google Scholar]
  9. Chen, C.; Zhu, W.; Norton, T. Behaviour recognition of pigs and cattle: Journey from computer vision to deep learning. Comput. Electron. Agric. 2021, 187, 106255. [Google Scholar] [CrossRef]
  10. Xu, P.; Zhang, Y.; Ji, M.; Guo, S.; Tang, Z.; Wang, X.; Guo, J.; Zhang, J.; Guan, Z. Advanced intelligent monitoring technologies for animals: A survey. Neurocomputing 2024, 585, 127640. [Google Scholar] [CrossRef]
  11. Zhou, L.X. Research on Sheep Face Recognition Method Based on Lightweight Neural Network. Master’s Thesis, Northwest A&F University, Yangling, China, 2022. [Google Scholar]
  12. Alsaadi, I.M. Physiological biometric authentication systems, advantages, disadvantages and future development: A review. Int. J. Sci. Technol. Res. 2015, 4, 285–289. [Google Scholar]
  13. Allen, A.; Golden, B.; Taylor, M.; Patterson, D.; Henriksen, D.; Skuce, R. Evaluation of retinal imaging technology for the biometric identification of bovine animals in Northern Ireland. Livest. Sci. 2008, 116, 42–52. [Google Scholar] [CrossRef]
  14. Barron, U.G.; Corkery, G.; Barry, B.; Butler, F.; McDonnell, K.; Ward, S. Assessment of retinal recognition technology as a biometric method for sheep identification. Comput. Electron. Agric. 2008, 60, 156–166. [Google Scholar] [CrossRef]
  15. Saygılı, A.; Cihan, P.; Ermutlu, C.Ş.; Aydın, U.; Aksoy, Ö. CattNIS: Novel identification system of cattle with retinal images based on feature matching method. Comput. Electron. Agric. 2024, 221, 108963. [Google Scholar] [CrossRef]
  16. Mustafi, S.; Ghosh, P.; Mandal, S.N. RetIS: Unique identification system of goats through retinal analysis. Comput. Electron. Agric. 2021, 185, 106127. [Google Scholar] [CrossRef]
  17. Cihan, P.; Saygili, A.; Ozmen, N.E.; Akyuzlu, M. Identification and Recognition of Animals from Biometric Markers Using Computer Vision Approaches: A Review. Kafkas Univ. Veter-Fak. Derg. 2023, 29, 581. [Google Scholar] [CrossRef]
  18. Alturk, G.; Karakus, F. Assessment of Retinal Recognition Technology as a Biometric Identification Method in Norduz Sheep. In Proceedings of the 11th International Animal Science Conference, Cappadocia, Turkey, 20–22 October 2019; pp. 20–22. [Google Scholar]
  19. Jain, A.K.; Nandakumar, K.; Ross, A. 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognit. Lett. 2016, 79, 80–105. [Google Scholar] [CrossRef]
  20. Sheng, D.W. Research on Technology of Cattle’s Iris Recognition. Master’s Thesis, East China Normal University, Shanghai, China, 2010. [Google Scholar]
  21. Suzaki, M.; Yamakita, O.; Horikawa, S.i.; Kuno, Y.; Aida, H.; Sasaki, N.; Kusunose, R. A horse identification system using biometrics. Syst. Comput. Jpn. 2001, 32, 12–23. [Google Scholar] [CrossRef]
  22. He, X.; Yan, J.; Chen, G.; Shi, P. Contactless autofeedback iris capture design. IEEE Trans. Instrum. Meas. 2008, 57, 1369–1375. [Google Scholar]
  23. Lu, Y.; He, X.; Wen, Y.; Wang, P.S. A new cow identification system based on iris analysis and recognition. Int. J. Biom. 2014, 6, 18–32. [Google Scholar] [CrossRef]
  24. Trokielewicz, M.; Szadkowski, M. Iris and Periocular Recognition in Arabian Race Horses Using Deep Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Joint Conference on Biometrics (IJCB), Denver, CO, USA, 1–4 October 2017; pp. 510–516. [Google Scholar]
  25. Larregui, J.I.; Cazzato, D.; Castro, S.M. An image processing pipeline to segment iris for unconstrained cow identification system. Open Comput. Sci. 2019, 9, 145–159. [Google Scholar] [CrossRef]
  26. Roy, S.; Dan, S.; Mukherjee, K.; Nath Mandal, S.; Hajra, D.K.; Banik, S.; Naskar, S. Black Bengal Goat Identification Using Iris Images. In Proceedings of the International Conference on Frontiers in Computing and Systems: COMSYS 2020, Jalpaiguri Government Engineering College (JGEC), West Bengal, India, 13–15 January 2020; pp. 213–224. [Google Scholar]
  27. Li, C.; Zhao, L.D. Research on Cattle Iris Localization Algorithm and Its Application in Meat Food Tracking and Traceability System. China Saf. Sci. J. 2011, 21, 124–130. [Google Scholar] [CrossRef]
  28. Sun, S.; Zhao, L. Bovine iris segmentation using region-based active contour model. Int. J. Innov. Comput. Inf. Control 2012, 8, 6461–6471. [Google Scholar]
  29. Laishram, M.; Mandal, S.N.; Haldar, A.; Das, S.; Bera, S.; Samanta, R. Biometric identification of Black Bengal goat: Unique iris pattern matching system vs deep learning approach. Anim. Biosci. 2023, 36, 980. [Google Scholar] [CrossRef] [PubMed]
  30. Yoon, H.; Park, M.; Lee, H.; An, J.; Lee, T.; Lee, S.-H. Deep learning framework for bovine iris segmentation. J. Anim. Sci. Technol. 2024, 66, 167. [Google Scholar] [CrossRef] [PubMed]
  31. Mishra, S.; Tomer, O.; Kalm, E. Muzzle dermatoglypics: A new method to identify bovines. Asian Livest. (FAO) 1995, 20, 91–96. [Google Scholar]
  32. Kumar, S.; Singh, S.K.; Singh, A.K. Muzzle point pattern based techniques for individual cattle identification. IET Image Process. 2017, 11, 805–814. [Google Scholar] [CrossRef]
  33. Noviyanto, A.; Arymurthy, A.M. Beef cattle identification based on muzzle pattern using a matching refinement technique in the SIFT method. Comput. Electron. Agric. 2013, 99, 77–84. [Google Scholar] [CrossRef]
  34. Barry, B.; Gonzales-Barron, U.; McDonnell, K.; Butler, F.; Ward, S. Using muzzle pattern recognition as a biometric approach for cattle identification. Trans. ASABE 2007, 50, 1073–1080. [Google Scholar] [CrossRef]
  35. Tharwat, A.; Gaber, T.; Hassanien, A.E. Cattle Identification Based on Muzzle Images Using Gabor Features and SVM Classifier. In Proceedings of the International Conference on Advanced Machine Learning Technologies and Applications, Cairo, Egypt, 28–30 November 2014; pp. 236–247. [Google Scholar]
  36. Taha, A.; Darwish, A.; Hassanien, A.E.; ElKholy, A. Arabian Horse Identification and Gender Determination System based on Feature Fusion and Gray Wolf Optimization. Int. J. Intell. Eng. Syst. 2020, 13, 145–155. [Google Scholar] [CrossRef]
  37. Li, G.; Erickson, G.E.; Xiong, Y. Individual beef cattle identification using muzzle images and deep learning techniques. Animals 2022, 12, 1453. [Google Scholar] [CrossRef]
  38. Zhao, K.X.; He, D.J. Recognition of individual dairy cattle based on convolutional neural networks. Trans. Chin. Soc. Agric. Eng. 2015, 31, 181–187. [Google Scholar]
  39. Zhao, K.; Jin, X.; Ji, J.; Wang, J.; Ma, H.; Zhu, X. Individual identification of Holstein dairy cows based on detecting and matching feature points in body images. Biosyst. Eng. 2019, 181, 128–139. [Google Scholar] [CrossRef]
  40. Zhang, R.; Ji, J.; Zhao, K.; Wang, J.; Zhang, M.; Wang, M. A cascaded individual cow identification method based on DeepOtsu and EfficientNet. Agriculture 2023, 13, 279. [Google Scholar] [CrossRef]
  41. He, S.; Schomaker, L. DeepOtsu: Document enhancement and binarization using iterative deep learning. Pattern Recognit. 2019, 91, 379–390. [Google Scholar] [CrossRef]
  42. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International conference on machine learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  43. Song, G.F. Research on Animal Facial Recognition Algorithm Based on Deep Learning. Master’s Thesis, Hangzhou Dianzi University, Hangzhou, China, 2019. [Google Scholar]
  44. Wiskott, L.; Fellous, J.-M.; Krüger, N.; Von Der Malsburg, C. Face recognition by elastic bunch graph matching. In Intelligent Biometric Techniques in Fingerprint and Face Recognition; Routledge: London, UK, 2022; pp. 355–396. [Google Scholar]
  45. Perronnin, F.; Sánchez, J.; Mensink, T. Improving the fisher kernel for large-scale image classification. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; Part IV 11. pp. 143–156. [Google Scholar]
  46. Sihalath, T.; Basak, J.K.; Bhujel, A.; Arulmozhi, E.; Moon, B.E.; Kim, H.T. Pig identification using deep convolutional neural network based on different age range. J. Biosyst. Eng. 2021, 46, 182–195. [Google Scholar] [CrossRef]
  47. Liu, S.F.; Chang, R.; Li, B.; Wei, Y.; Wang, H.F.; Jia, N. Individual Identification of Cattle Based on RGB-D Images. Trans. Chin. Soc. Agric. Mach. 2023, 54, 260–266. [Google Scholar]
  48. Xuan, C.Z.; Lv, Y.; Liu, S.H.; Cui, J.H.; Zhang, X.W. Deep learning based identification of sheep face with fine-grained features. Digit. Agric. Intell. Agric. Mach. 2023, 26–30, 58. [Google Scholar] [CrossRef]
  49. Ahmad, M.; Abbas, S.; Fatima, A.; Issa, G.F.; Ghazal, T.M.; Khan, M.A. Deep transfer learning-based animal face identification model empowered with vision-based hybrid approach. Appl. Sci. 2023, 13, 1178. [Google Scholar] [CrossRef]
  50. Arslan, A.C.; Akar, M.; Alagöz, F. 3D cow identification in cattle farms. In Proceedings of the 2014 22nd Signal Processing and Communications Applications Conference (SIU), Trabzon, Turkey, 23–25 April 2014; pp. 1347–1350. [Google Scholar]
  51. Ferreira, R.E.; Bresolin, T.; Rosa, G.J.; Dórea, J.R. Using dorsal surface for individual identification of dairy calves through 3D deep learning algorithms. Comput. Electron. Agric. 2022, 201, 107272. [Google Scholar] [CrossRef]
  52. Zhang, F.Y. Individual Identity Recognition of Sheep Based on Deep Metric Learning. Master’s Thesis, Northwest A&F University, Yangling, China, 2023. [Google Scholar]
  53. SU, L.D. Study on Dairy Cows Gait Feature Extraction and Early Lameness Prediction. Ph.D. Thesis, Inner Mongolia Agricultural University, Hohhot, China, 2020. [Google Scholar]
  54. Qian, J.X. Gait Recognition of Pigs Based on Skeleton Analysis and Gait Energy Image. Master’s Thesis, Jiangsu University, Zhenjiang, China, 2018. [Google Scholar]
  55. Zhang, M.T.; Wang, M.M.; Liu, T.H.; Wen, S.D.; Yu, Y. Gait recognition in dairy cows based on skeleton energy maps. Jiangsu Agric. Sci. 2020, 48, 257–262. [Google Scholar] [CrossRef]
  56. Andrew, W.; Greatwood, C.; Burghardt, T. Fusing animal biometrics with autonomous robotics: Drone-based search and individual id of friesian cattle. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, Snowmass Village, CO, USA, 1–5 March 2020; pp. 38–43. [Google Scholar]
  57. Qiao, Y.; Clark, C.; Lomax, S.; Kong, H.; Su, D.; Sukkarieh, S. Automated individual cattle identification using video data: A unified deep learning architecture approach. Front. Anim. Sci. 2021, 2, 759147. [Google Scholar] [CrossRef]
  58. Hitelman, A.; Edan, Y.; Godo, A.; Berenstein, R.; Lepar, J.; Halachmi, I. Biometric identification of sheep via a machine-vision system. Comput. Electron. Agric. 2022, 194, 106713. [Google Scholar] [CrossRef]
  59. Mon, S.L.; Onizuka, T.; Tin, P.; Aikawa, M.; Kobayashi, I.; Zin, T.T. AI-enhanced real-time cattle identification system through tracking across various environments. Sci. Rep. 2024, 14, 17779. [Google Scholar] [CrossRef]
  60. Huang, Z.J.; Xu, A.J.; Zhou, S.Y.; Ye, J.H.; Weng, X.X.; Xiang, Y. Key point detection method for pig face fusing reparameterization and attention mechanisms. Trans. Chin. Soc. Agric. Eng. 2023, 39, 141–149. [Google Scholar]
  61. Song, S.; Liu, T.; Wang, H.; Hasi, B.; Yuan, C.; Gao, F.; Shi, H. Using pruning-based YOLOv3 deep learning algorithm for accurate detection of sheep face. Animals 2022, 12, 1465. [Google Scholar] [CrossRef] [PubMed]
  62. Andrew, W.; Greatwood, C.; Burghardt, T. Visual Localisation and Individual Identification of Holstein Friesian Cattle via Deep Learning. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2850–2859. [Google Scholar]
  63. Parmiggiani, A.; Liu, D.; Psota, E.; Fitzgerald, R.; Norton, T. Don’t get lost in the crowd: Graph convolutional network for online animal tracking in dense groups. Comput. Electron. Agric. 2023, 212, 108038. [Google Scholar] [CrossRef]
  64. Guo, Y.Y.; Hong, W.H.; Ding, Y.; Huang, X.P. Goat face detection method by combining coordinate attention mechanism and YOLO v5s model. Trans. Chin. Soc. Agric. Mach. 2023, 54, 313–321. [Google Scholar]
  65. Tassinari, P.; Bovo, M.; Benni, S.; Franzoni, S.; Poggi, M.; Mammi, L.M.E.; Mattoccia, S.; Di Stefano, L.; Bonora, F.; Barbaresi, A. A computer vision approach based on deep learning for the detection of dairy cows in free stall barn. Comput. Electron. Agric. 2021, 182, 106030. [Google Scholar] [CrossRef]
  66. Yao, C.; Li, Q.; Liu, G.; Lv, S.S.; Hou, C.; Zhang, M. Individual Identification of Partially Occluded Holstein Cows Based on NAS-Res. Trans. Chin. Soc. Agric. Mach. 2023, 54, 252–259. [Google Scholar]
  67. Pezzuolo, A.; Guarino, M.; Sartori, L.; Marinello, F. A feasibility study on the use of a structured light depth-camera for three-dimensional body measurements of dairy cows in free-stall barns. Sensors 2018, 18, 673. [Google Scholar] [CrossRef] [PubMed]
  68. Jaddoa, M.; Gonzalez, L.; Cuthbertson, H.; Al-Jumaily, A. Multi view face detection in cattle using infrared thermography. In Proceedings of the International Conference on Applied Computing to Support Industry: Innovation and Technology, Ramadi, Iraq, 15–16 September 2019; pp. 223–236. [Google Scholar]
  69. Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.P.; Niewold, T.A.; Ödberg, F.O.; Berckmans, D. Automatic identification of marked pigs in a pen using image pattern recognition. Comput. Electron. Agric. 2013, 93, 111–120. [Google Scholar] [CrossRef]
  70. Viazzi, S.; Bahr, C.; Van Hertem, T.; Schlageter-Tello, A.; Romanini, C.; Halachmi, I.; Lokhorst, C.; Berckmans, D. Comparison of a three-dimensional and two-dimensional camera system for automated measurement of back posture in dairy cows. Comput. Electron. Agric. 2014, 100, 139–147. [Google Scholar] [CrossRef]
  71. Wang, F.; Li, Q. Research on recognition method of individual cattle muzzle based on local invariant features. Heilongjiang Anim. Sci. Vet. Med. 2022, 2, 48–52+136–137. [Google Scholar] [CrossRef]
  72. Awad, A.I.; Hassaballah, M. Bag-of-visual-words for cattle identification from muzzle print images. Appl. Sci. 2019, 9, 4914. [Google Scholar] [CrossRef]
  73. Kumar, S.; Tiwari, S.; Singh, S.K. Face recognition of cattle: Can it be done? Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2016, 86, 137–148. [Google Scholar] [CrossRef]
  74. Wang, M.M. Study on Individual Recognition of Cow Based on Gait Feature and Texture. Master’s Thesis, Hebei University of Technology, Tianjin, China, 2020. [Google Scholar]
  75. Andrew, W.; Hannuna, S.; Campbell, N.; Burghardt, T. Automatic individual holstein friesian cattle identification via selective local coat pattern matching in RGB-D imagery. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 484–488. [Google Scholar]
  76. Huang, W.; Zhu, W.; Ma, C.; Guo, Y. Weber texture local descriptor for identification of group-housed pigs. Sensors 2020, 20, 4649. [Google Scholar] [CrossRef]
  77. Zhang, M.T.; Mi, N.; Yu, Y.; Shan, X.Y.; Yan, G.; Guo, Y.C. Individual identification of dairy cows based on feature fusion. Jiangsu Agric. Sci. 2018, 46, 278–281. [Google Scholar] [CrossRef]
  78. Zhao, L.; Zhou, G.H.; Ren, L.S. Individual identification of dairy cows based on comprehensive face and trunk information. J. Hebei Agric. Univ. 2024, 47, 112–118. [Google Scholar] [CrossRef]
  79. Li, Z.H.; Wang, T.Y.; Li, Y.Z. Recognition of sheep individual based on GoogLeNet combined with attention mechanism. Intell. Comput. Appl. 2023, 13, 148–153. [Google Scholar]
  80. Pang, Y.; Yu, W.; Zhang, Y.; Xuan, C.; Wu, P. Sheep face recognition and classification based on an improved MobilenetV2 neural network. Int. J. Adv. Robot. Syst. 2023, 20, 17298806231152969. [Google Scholar] [CrossRef]
  81. Andrew, W.; Gao, J.; Mullan, S.; Campbell, N.; Dowsey, A.W.; Burghardt, T. Visual identification of individual Holstein-Friesian cattle via deep metric learning. Comput. Electron. Agric. 2021, 185, 106133. [Google Scholar] [CrossRef]
  82. Shen, W.; Hu, H.; Dai, B.; Wei, X.; Sun, J.; Jiang, L.; Sun, Y. Individual identification of dairy cows based on convolutional neural networks. Multimed. Tools Appl. 2020, 79, 14711–14724. [Google Scholar] [CrossRef]
  83. Xue, H.; Qin, J.; Quan, C.; Ren, W.; Gao, T.; Zhao, J. Open set sheep face recognition based on Euclidean space metric. Math. Probl. Eng. 2021, 2021, 3375394. [Google Scholar] [CrossRef]
  84. Xing, Y.; Wu, B.; Wu, S. Individual cow recognition based on convolution neural network and transfer learning. Laser Optoelectron. Prog. 2021, 58, 1628002. [Google Scholar]
  85. Hou, H.; Shi, W.; Guo, J.; Zhang, Z.; Shen, W.; Kou, S. Cow rump identification based on lightweight convolutional neural networks. Information 2021, 12, 361. [Google Scholar] [CrossRef]
  86. Shojaeipour, A.; Falzon, G.; Kwan, P.; Hadavi, N.; Cowley, F.C.; Paul, D. Automated muzzle detection and biometric identification via few-shot deep transfer learning of mixed breed cattle. Agronomy 2021, 11, 2365. [Google Scholar] [CrossRef]
  87. Zhang, C.; Zhang, H.; Tian, F.; Zhou, Y.; Zhao, S.; Du, X. Research on sheep face recognition algorithm based on improved AlexNet model. Neural Comput. Appl. 2023, 35, 24971–24979. [Google Scholar] [CrossRef]
  88. Hu, H.; Dai, B.; Shen, W.; Wei, X.; Sun, J.; Li, R.; Zhang, Y. Cow identification based on fusion of deep parts features. Biosyst. Eng. 2020, 192, 245–256. [Google Scholar] [CrossRef]
  89. Marsot, M.; Mei, J.; Shan, X.; Ye, L.; Feng, P.; Yan, X.; Li, C.; Zhao, Y. An adaptive pig face recognition approach using Convolutional Neural Networks. Comput. Electron. Agric. 2020, 173, 105386. [Google Scholar] [CrossRef]
  90. Xiao, J.; Liu, G.; Wang, K.; Si, Y. Cow identification in free-stall barns based on an improved Mask R-CNN and an SVM. Comput. Electron. Agric. 2022, 194, 106738. [Google Scholar] [CrossRef]
  91. Du, Y.; Kou, Y.; Li, B.; Qin, L.; Gao, D. Individual identification of dairy cows based on deep learning and feature fusion. Anim. Sci. J. 2022, 93, e13789. [Google Scholar] [CrossRef] [PubMed]
  92. Tang, Z.Y. Feeding Behavior and Identification of Pigs Based on Improved Optical Flow Method and Deep Learning. Master’s Thesis, JiangSu University, Zhenjiang, China, 2022. [Google Scholar]
  93. Achour, B.; Belkadi, M.; Filali, I.; Laghrouche, M.; Lahdir, M. Image analysis for individual identification and feeding behaviour monitoring of dairy cows based on Convolutional Neural Networks (CNN). Biosyst. Eng. 2020, 198, 31–49. [Google Scholar] [CrossRef]
  94. Qiao, Y.; Kong, H.; Clark, C.; Lomax, S.; Su, D.; Eiffert, S.; Sukkarieh, S. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation. Comput. Electron. Agric. 2021, 185, 106143. [Google Scholar] [CrossRef]
  95. Zhang, X.; Xuan, C.; Ma, Y.; Tang, Z.; Gao, X. An efficient method for multi-view sheep face recognition. Eng. Appl. Artif. Intell. 2024, 134, 108697. [Google Scholar] [CrossRef]
  96. Salama, A.; Hassanien, A.E.; Fahmy, A. Sheep identification using a hybrid deep learning and bayesian optimization approach. IEEE Access 2019, 7, 31681–31687. [Google Scholar] [CrossRef]
  97. Ferreira, R.E.; Lee, Y.J.; Dórea, J.R. Using pseudo-labeling to improve performance of deep neural networks for animal identification. Sci. Rep. 2023, 13, 13875. [Google Scholar] [CrossRef] [PubMed]
  98. Shang, C. Research on Individual Identification of Goat Based on Deep Learning. Master’s Thesis, NorthWest A&F University, Yangling, China, 2022. [Google Scholar]
  99. Liu, H. Cattle Identification in Complex Scenes. Master’s Thesis, HangZhou Dianzi University, Hangzhou, China, 2023. [Google Scholar]
  100. Wang, B.; Li, X.; An, X.; Duan, W.; Wang, Y.; Wang, D.; Qi, J. Open-Set Recognition of Individual Cows Based on Spatial Feature Transformation and Metric Learning. Animals 2024, 14, 1175. [Google Scholar] [CrossRef] [PubMed]
  101. Li, G.; Jiao, J.; Shi, G.; Ma, H.; Gu, L.; Tao, L. Fast Recognition of Pig Faces Based on Improved Yolov3. In Proceedings of the International Conference on Computer, Big Data and Artificial Intelligence (ICCBDAI 2021); Journal of Physics: Conference Series, Beihai, China, 12–14 December 2021; IOP Publishing Ltd.: Bristol, UK, 2022; p. 12005. [Google Scholar]
  102. Hu, Z.; Yang, H.; Lou, T.T. Instance detection of group breeding pigs using a pyramid network with dual attention feature. Trans. Chin. Soc. Agric. Eng. 2021, 37, 166–174. [Google Scholar]
  103. Li, S.; Kang, X.; Feng, Y.; Liu, G. Detection Method for Individual Pig Based on Improved YOLOv4 Convolutional Neural Network. In Proceedings of the 2021 4th International Conference on Data Science and Information Technology, Shanghai, China, 23–25 July 2021; pp. 231–235. [Google Scholar]
  104. Yang, S.Q.; Liu, Y.Q.H.; Wang, Z.; Han, Y.Y.; Wang, Y.S.; Lan, X.Y. Improved YOLO V4 model for face recognition of diary cow by fusing coordinate information. Trans. Chin. Soc. Agric. Eng. 2021, 37, 129–135. [Google Scholar]
  105. Xu, X.S.; Wang, Y.F.; Deng, H.X.; Song, H.B. Nighttime cattle face recognition based on cross-modal shared feature learning. J. South China Agric. Univ. 2024, 45, 793–801. [Google Scholar]
  106. Xue, J.; Hou, Z.; Xuan, C.; Ma, Y.; Sun, Q.; Zhang, X.; Zhong, L. A Sheep Identification Method Based on Three-Dimensional Sheep Face Reconstruction and Feature Point Matching. Animals 2024, 14, 1923. [Google Scholar] [CrossRef]
  107. Weng, Z.; Meng, F.; Liu, S.; Zhang, Y.; Zheng, Z.; Gong, C. Cattle face recognition based on a Two-Branch convolutional neural network. Comput. Electron. Agric. 2022, 196, 106871. [Google Scholar] [CrossRef]
  108. Zhang, X.; Xuan, C.; Ma, Y.; Su, H.; Zhang, M. Biometric facial identification using attention module optimized YOLOv4 for sheep. Comput. Electron. Agric. 2022, 203, 107452. [Google Scholar] [CrossRef]
  109. Wan, Z.; Tian, F.; Zhang, C. Sheep face recognition model based on deep learning and bilinear feature fusion. Animals 2023, 13, 1957. [Google Scholar] [CrossRef] [PubMed]
  110. Lv, Y. Research on Sheep Face Identity Recognition Based on Improved Deep Convolutional Neural Network. Master’s Thesis, Inner Mongolia Agricultural University, Hohhot, China, 2023. [Google Scholar]
  111. Wang, Y.; Xu, X.; Wang, Z.; Li, R.; Hua, Z.; Song, H. ShuffleNet-Triplet: A lightweight RE-identification network for dairy cows in natural scenes. Comput. Electron. Agric. 2023, 205, 107632. [Google Scholar] [CrossRef]
  112. Zhang, X.; Xuan, C.; Ma, Y.; Tang, Z.; Cui, J.; Zhang, H. High-similarity sheep face recognition method based on a Siamese network with fewer training samples. Comput. Electron. Agric. 2024, 225, 109295. [Google Scholar] [CrossRef]
  113. Chen, X.; Yang, T.; Mai, K.; Liu, C.; Xiong, J.; Kuang, Y.; Gao, Y. Holstein cattle face re-identification unifying global and part feature deep network with attention mechanism. Animals 2022, 12, 1047. [Google Scholar] [CrossRef]
  114. Zhang, J.L.; Zhou, K.; Zhuang, Y.R.; Teng, G.H. Effect of facial changes on the accuracy of the recognition model during the growth of finishing pigs. J. China Agric. Univ. 2021, 26, 180–186. [Google Scholar]
  115. Fu, L.L.; Li, S.J.; Kong, S.L.; Gong, H.; Li, S.H. Research on individual identification of cows based on Multi-Light model. Heilongjiang Anim. Sci. Vet. Med. 2023, 41–45+51+132–133. [Google Scholar] [CrossRef]
  116. Li, X.; Xiang, Y.; Li, S. Combining convolutional and vision transformer structures for sheep face recognition. Comput. Electron. Agric. 2023, 205, 107651. [Google Scholar] [CrossRef]
  117. Wang, Z.; Liu, T. Two-stage method based on triplet margin loss for pig face recognition. Comput. Electron. Agric. 2022, 194, 106737. [Google Scholar] [CrossRef]
  118. Li, X.; Du, J.; Yang, J.; Li, S. When mobilenetv2 meets transformer: A balanced sheep face recognition model. Agriculture 2022, 12, 1126. [Google Scholar] [CrossRef]
  119. Li, X.; Zhang, Y.; Li, S. SheepFaceNet: A Speed–Accuracy Balanced Model for Sheep Face Recognition. Animals 2023, 13, 1930. [Google Scholar] [CrossRef] [PubMed]
  120. Ma, C.; Sun, X.; Yao, C.; Tian, M.; Li, L. Research on sheep recognition algorithm based on deep learning in animal husbandry. J. Phys. Conf. Ser. 2020, 1651, 12129. [Google Scholar] [CrossRef]
  121. Zhang, X.; Xuan, C.; Xue, J.; Chen, B.; Ma, Y. LSR-YOLO: A high-precision, lightweight model for sheep face recognition on the mobile end. Animals 2023, 13, 1824. [Google Scholar] [CrossRef] [PubMed]
  122. Bati, C.T.; Ser, G. Improved sheep identification and tracking algorithm based on YOLOv5 + SORT methods. Signal Image Video Process. 2024, 18, 1–12. [Google Scholar] [CrossRef]
  123. Wang, R.; Gao, R.; Li, Q.; Dong, J. Pig face recognition based on metric learning by combining a residual network and attention mechanism. Agriculture 2023, 13, 144. [Google Scholar] [CrossRef]
  124. Bakhshayeshi, I.; Erfani, E.; Taghikhah, F.R.; Elbourn, S.; Beheshti, A.; Asadnia, M. An Intelligence Cattle Reidentification System Over Transport by Siamese Neural Networks and YOLO. IEEE Internet Things J. 2023, 11, 2351–2363. [Google Scholar] [CrossRef]
  125. Okura, F.; Ikuma, S.; Makihara, Y.; Muramatsu, D.; Nakada, K.; Yagi, Y. RGB-D video-based individual identification of dairy cows using gait and texture analyses. Comput. Electron. Agric. 2019, 165, 104944. [Google Scholar] [CrossRef]
  126. Li, D.; Li, B.; Li, Q.; Wang, Y.; Yang, M.; Han, M. Cattle identification based on multiple feature decision layer fusion. Sci. Rep. 2024, 14, 26631. [Google Scholar] [CrossRef]
  127. Bo, L.; Yuefeng, L.; Xiang, B.; Yue, W.; Haofeng, L.; Xuan, L. Research on dairy cow identification methods in dairy farm. Indian J. Anim. Res. 2023, 57, 1733–1739. [Google Scholar] [CrossRef]
  128. De La Torre, M.P.; Briefer, E.F.; Ochocki, B.M.; McElligott, A.G.; Reader, T. Mother–offspring recognition via contact calls in cattle, Bos taurus. Anim. Behav. 2016, 114, 147–154. [Google Scholar] [CrossRef]
  129. Briefer, E.F.; Sypherd, C.C.-R.; Linhart, P.; Leliveld, L.M.; Padilla de La Torre, M.; Read, E.R.; Guérin, C.; Deiss, V.; Monestier, C.; Rasmussen, J.H. Classification of pig calls produced from birth to slaughter according to their emotional valence and context of production. Sci. Rep. 2022, 12, 3409. [Google Scholar] [CrossRef] [PubMed]
  130. Pang, Y.; Yu, W.; Xuan, C.; Zhang, Y.; Wu, P. A Large Benchmark Dataset for Individual Sheep Face Recognition. Agriculture 2023, 13, 1718. [Google Scholar] [CrossRef]
  131. Guo, Q.; Sun, Y.; Orsini, C.; Bolhuis, J.E.; de Vlieg, J.; Bijma, P.; de With, P.H. Enhanced camera-based individual pig detection and tracking for smart pig farms. Comput. Electron. Agric. 2023, 211, 108009. [Google Scholar] [CrossRef]
  132. Guan, H.; Motohashi, N.; Maki, T.; Yamaai, T. Cattle identification and activity recognition by surveillance camera. Electron. Imaging 2020, 32, 1–6. [Google Scholar] [CrossRef]
Figure 1. The major component of computer vision-based livestock identification. * Source: [11].
Figure 2. Sheep retinal image acquisition using the OptiReader device [14]. (a) Image acquisition; (b) pupil status of sheep in dim and bright light; (c) pairing and twisting of large vessels, a phenomenon typically seen in sheep retinal vessels.
Figure 3. Examples of iris images and two iris image acquisition devices [22,25,26]. (a) Original, outer border, and inner border images of the iris of a cow’s eye. (b) Non-contact self-feedback iris image acquisition device. (c) Iris camera acquiring an iris image of a goat.
Figure 4. Muzzle image and the unique pattern on the cattle muzzle [32,35]. (a) Muzzle image of a cow. (b) Beads and ridges features of the muzzle point image pattern of cattle.
Figure 5. The body pattern image of a dairy cow [40].
Figure 6. An example of the application of 3D visual appearances and skeleton pose features to livestock identification [51,52]. (a) Original depth frame and generated occupancy grid during the recognition of individual cows based on dorsal surface features. (b) Schematic of the key point prediction of the sheep skeleton.
Figure 7. Image sequence; only the first two and the last two frames are displayed [57].
Figure 8. Overview of livestock individual identification methods.
Figure 10. Comparison between (a) the original frame as recorded and (b) the modified frame created by adjusting the brightness of the image [65].
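Brightness modification of recorded frames, as in Figure 10, is a common augmentation for making recognition models robust to the varying lighting of barns and open scenarios. A minimal sketch using NumPy — `adjust_brightness` is a hypothetical helper, not a function from any of the cited studies:

```python
import numpy as np

def adjust_brightness(frame: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel intensities by `factor`, clipping to the valid 8-bit range."""
    scaled = frame.astype(np.float32) * factor
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A dummy 4x4 grayscale "frame" standing in for a recorded video frame.
frame = np.full((4, 4), 200, dtype=np.uint8)
darker = adjust_brightness(frame, 0.5)    # simulate dim lighting -> 100
brighter = adjust_brightness(frame, 1.5)  # overexposure saturates at 255
print(darker[0, 0], brighter[0, 0])  # 100 255
```

Randomly sampling `factor` per training image (as in the random enhancement of Figure 11) exposes the model to lighting conditions absent from the original acquisition environment.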
Figure 11. Random enhancement [100].
Figure 12. Sheep face alignment and improvements to the algorithm structure. (a) Sheep face alignment [83]. (b) Structure diagram of TB-CNN [107].
Figure 13. Architecture of the proposed GPN-ST model. The GPN-ST model improves the Part branch, which extracts local regions via four STN modules [113].
Figure 14. Reducing computational complexity through structural substitution. (a) The original Transformer. (b) The Linear Transformer [116].
Table 1. Data acquisition in different scenarios.
Locative scenarios:
- Assaf breed sheep; acquisition of facial images. Setup: (1) NVIDIA Jetson Nano embedded system-on-module (SoM); (2) front camera for recording facial videos; (3) side camera for recording ear tags; (4) infrared (IR) sensor [58]. The imaging device is affixed to a regulated water trough, and video data are acquired contactlessly while the sheep drink independently; the two cameras were positioned 80 cm from the reference point.
- Dairy cows; acquisition of back images. Setup: the cattle are recorded walking through the exit lane of the milking parlor, with the camera mounted 4 m above the ground [59].
- Dan line sows; acquisition of face images. Setup: data were collected in the mating barn, and the images captured in the sow restriction pen featured only a single pig [60].
- Sunit sheep; acquisition of facial images. Setup: the sheep were taken in turn to an enclosed environment and photographed individually by an experimenter holding a camera while they were calm [61].
Features: (1) the image data are less susceptible to external factors (such as animal movement or obstructions) and exhibit high clarity; (2) a stable acquisition environment reduces errors and facilitates accurate data analysis; (3) costs are higher, whether the shooting is performed manually (which is time-consuming) or the livestock are restrained (which requires more sophisticated equipment).

Open scenarios:
- Holstein cows; acquisition of back images. Setup: outdoor farmland; photographed by a drone (DJI Inspire MkI) at 5 m above the ground [62].
- Pigs; acquisition of dorsal images. Setup: the camera is set orthogonal to the plane of the pen and mounted on the ceiling [63].
- Huanghuai goats; acquisition of facial images. Setup: images of goats in their natural state were captured by mobile phone in an actual rearing environment [64].
Features: (1) livestock are relatively free to move, and data can be obtained while they are in a natural state; (2) relatively low cost; (3) the environment is complex and variable, so the acquired images may be blurred or noisy; (4) high requirements for image processing techniques and methods.
Table 2. Data acquisition using different devices.
2D:
- Panasonic WV-BP330 (video format: MPEG-1; frame rate: 25 fps; resolution: 576 × 720; data rate: 64 kbps). Setup: cameras were installed in the rafters to capture top-view images; to provide light in the barn, six 58 W, 120 cm Gamma white fluorescent tube lamps were installed at a height of 200 cm. Acquisition image: a small group of pigs in a barn. Kashiha et al. (2013) [69].
- Nikon D5200 (resolution: 720 × 1280). Setup: the camera was installed on a tripod 3.5 m from the path and 1.5 m above the ground. Acquisition image: individual image of cattle. Zhao et al. (2019) [39].
- IriShield-USB MK2120U (resolution: 640 × 480). Setup: the camera was connected to a lightweight mobile device through a cable to capture iris images within 5 cm of the sensor; eyelids and eyelashes were avoided as much as possible so that the whole iris was visible during capture. Acquisition image: localized image (iris of a goat). Laishram et al. (2023) [29].

3D:
- Kinect XBOX 360 (resolution: RGB 640 × 480; depth 320 × 240). Setup: the cameras were placed at a 45-degree angle, 3.28 m apart from each other and 2.7 m above the ground. Acquisition image: individual image of cattle. Arslan et al. (2014) [50].
- Kinect V2 (resolution: RGB 1920 × 1080; depth 512 × 424). Setup: all videos were recorded using Kinect for Windows SDK 2.0 installed on a locally operated laptop; recording was started manually as soon as the calf was positioned on the scale and stopped when the weighing process concluded for that calf. Acquisition image: individual image of cattle. Ferreira et al. (2022) [51].
- Intel RealSense D455 (depth resolution: 640 × 480). Setup: the camera was installed on a tripod 1.5 m from the barn fence to capture and store RGB-D images of the cow’s face using the camera’s own software (Intel RealSense Viewer). Acquisition image: localized image (face of cattle). Liu et al. (2024) [47].

Thermal infrared imaging:
- AGEMA 590 PAL, ThermaCam S65, A310, T335 (resolution: 320 × 240). Setup: the camera was set up approximately 2 m from the target cattle as they moved through the race towards the knocking box, at an angle of approximately 45 degrees from the head of the animal; all cattle were in the shade under the roof during the recordings. Acquisition image: individual image of cattle. Jaddoa et al. (2019) [68].
Table 3. Selected research work on fusion methods of deep learning and traditional machine learning.
Features | Livestock/Images (Train, Test) | Recognition Methods | Accuracy | Reference
Overall cow object | 93/(593, 365) | YOLO + three independent CNNs + SVM | 98.36% | Hu et al. (2020) [88]
Pig face | 10/(2044, 320) | Haar cascade classifiers + CNN model | 83% | Marsot et al. (2020) [89]
Cattle body | 147 (farm A), 13 (farm B), 1103 (farm C)/- | YOLOv8 + VGG16 + SVM | 96.34% (three-farm average) | Mon et al. (2024) [59]
Cow’s back | 48/- | Mask R-CNN + SVM | 98.67% | Xiao et al. (2022) [90]
Horse face | -/(1000, 103) | YOLOv7 + SIFT + FLANN | 99.5% | Ahmad et al. (2023) [49]
Dairy cow’s trunk | 34/(480, 206) | VGG16 + SVM | 99.48% | Du et al. (2022) [91]
Pig’s back | - | MB-ACDLDP + PIG-VGG16 | 94.52% | Tang (2022) [92]
Dairy cow’s head | 17/- | CNN + SVM | 96.72% | Achour et al. (2020) [93]
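Several of the entries above share one pattern: a deep network supplies the features and a classical classifier makes the identity decision (e.g., VGG16 + SVM [91]). A minimal sketch of that fusion, assuming scikit-learn is available; a fixed random projection stands in for the frozen CNN backbone so the example is self-contained, and `extract_features` is a hypothetical placeholder name:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
W = rng.normal(size=(64 * 64, 128))  # stand-in for a frozen CNN backbone

def extract_features(images: np.ndarray) -> np.ndarray:
    """Map flattened images to 128-D embeddings (placeholder for CNN features)."""
    return images.reshape(len(images), -1) @ W

# Toy "individuals": two synthetic identities with distinct mean intensity.
imgs_a = rng.normal(0.2, 0.05, size=(20, 64, 64))
imgs_b = rng.normal(0.8, 0.05, size=(20, 64, 64))
X = extract_features(np.concatenate([imgs_a, imgs_b]))
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)  # the classical classifier stage
query = extract_features(rng.normal(0.8, 0.05, size=(1, 64, 64)))
print(clf.predict(query))  # predicts identity 1
```

The appeal of this split, reflected in the accuracies reported above, is that the SVM stage can be retrained cheaply when new individuals are enrolled, without touching the feature extractor.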
Share and Cite

MDPI and ACS Style

Meng, H.; Zhang, L.; Yang, F.; Hai, L.; Wei, Y.; Zhu, L.; Zhang, J. Livestock Biometrics Identification Using Computer Vision Approaches: A Review. Agriculture 2025, 15, 102. https://doi.org/10.3390/agriculture15010102
