Article

Open-Set Sheep Face Recognition in Multi-View Based on Li-SheepFaceNet

1 College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
2 Key Laboratory of Smart Agriculture System Integration, Ministry of Education, China Agricultural University, Beijing 100083, China
3 Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(7), 1112; https://doi.org/10.3390/agriculture14071112
Submission received: 16 June 2024 / Revised: 6 July 2024 / Accepted: 9 July 2024 / Published: 10 July 2024
(This article belongs to the Special Issue Computer Vision and Artificial Intelligence in Agriculture)

Abstract

Deep learning-based sheep face recognition improves the efficiency and effectiveness of individual sheep recognition and provides technical support for the development of intelligent livestock farming. However, frequent changes within the flock and variations in facial features in different views significantly affect the practical application of sheep face recognition. In this study, we proposed the Li-SheepFaceNet, a method for open-set sheep face recognition in multi-view. Specifically, we employed the Seesaw block to construct a lightweight model called SheepFaceNet, which significantly improves both performance and efficiency. To enhance the convergence and performance of low-dimensional embedded feature learning, we used Li-ArcFace as the loss function. The Li-SheepFaceNet achieves an open-set recognition accuracy of 96.13% on a self-built dataset containing 3801 multi-view face images of 212 Ujumqin sheep, which surpasses other open-set sheep face recognition methods. To evaluate the robustness and generalization of our approach, we conducted performance testing on a publicly available dataset, achieving a recognition accuracy of 93.33%. Deploying Li-SheepFaceNet on an open-set sheep face recognition system enables the rapid and accurate identification of individual sheep, thereby accelerating the development of intelligent sheep farming.

1. Introduction

Identity recognition plays an important role in the daily management of sheep farming, enabling farms to optimize their farming strategies and achieve precision farming. Additionally, it provides proof and a basis for bank loans and insurance claims. Traditional methods of identifying sheep, such as branding, painting, and ear tagging [1], can cause stress and harm. Furthermore, these methods require frequent maintenance and cleaning, and are time-consuming and prone to errors [2]. While radio frequency identification (RFID) technology is available [3], it is expensive and susceptible to tag loss and interference. Therefore, these traditional methods are not suitable for large-scale sheep farming and do not allow individuals to be identified quickly and accurately.
With the advancement of convolutional neural networks (CNNs), sheep face recognition has become an active area of research due to its advantages of few constraints on working distance, simplicity of operation, minimal human involvement, and non-contact operation, which minimizes stress and injury to the sheep. In recent years, some studies have focused on training classifiers based on CNNs for closed-set sheep face recognition [4,5,6,7]. To enhance the capacity of the model to extract features, some relevant studies [8,9,10] have demonstrated the applicability of the Vision Transformer (ViT) [11] in sheep face recognition. However, the introduction of the Transformer structure brings two issues: (1) it requires larger-scale training datasets to improve performance; and (2) it demands greater computational resources. In practical farming scenarios, the breeding cycles of sheep are relatively brief, and flock populations fluctuate due to factors such as population growth, purchases, and sales. When the recognition group changes, the neural network must be retrained. Therefore, closed-set sheep face recognition has proven less effective in practical applications.
Open-set recognition requires the model to possess generalization abilities to recognize unseen sheep face samples and perform accurate recognition when encountering new individuals. The loss function, which is the primary focus of improving face recognition algorithms, offers novel insights for enhancing open-set sheep face recognition. Refs. [12,13] attempted to differentiate sheep face categories in Euclidean and angular spaces using CenterLoss [14] and ArcFace loss [15], respectively. However, their recognition accuracies were low and could not satisfy the practical needs.
Furthermore, it is important to consider that alterations in facial features may potentially lead to suboptimal results when employing the recognition model in practice. This is due to the inability of livestock to maintain facial stability in front of a camera for extended periods of time. To address this issue, some studies [5,16] have designed devices or implemented measures to keep sheep stable during data collection. However, these methods may intensify the disparity between experimental and practical application environments. Face recognition is a passive biometric technique used to identify uncooperative subjects [17]. In the open-set recognition context, the model must be capable of identifying unknown classes of faces that may appear in varying orientations. This requires the model to be robust and capable of generalization across different perspectives. Therefore, it is important to collect sheep face images in multi-view by simulating the environmental factors in a realistic farming context to enhance the practical applicability of the method.
To improve the open-set recognition accuracy of sheep faces, in this paper, we proposed a high-performance and lightweight model, SheepFaceNet, which uses a lighter-weight and more efficient Seesaw block [18] as the bottleneck. The Seesaw block uses uneven group convolutions with a channel shuffle operation and an SE layer (squeeze-and-excitation layer). It uses fewer computational resources while improving accuracy. Subsequently, the disparity in recognition accuracy and model size between SheepFaceNet and mainstream networks was evaluated. To enhance both inter-class discrepancy and intra-class compactness, we used Li-ArcFace as the loss function to further improve the open-set recognition performance. We also introduced Labeled Sheep Faces in the Wild (LSFW), a self-constructed dataset comprising multi-view sheep face images, to enhance the model’s robustness to face feature changes. The proposed method is specifically designed for efficient deployment on mobile and edge devices. It enhances open-set recognition accuracy while using fewer computational resources.
Section 2 introduces the LSFW, a multi-view sheep face dataset; presents the specific structure of the Seesaw block and SheepFaceNet; explains the definition and role of Li-ArcFace loss; and provides an overview of the experimental evaluation metrics and detailed experimental parameters. Section 3 examines the impact of SheepFaceNet and Li-ArcFace loss on open-set sheep face recognition performance; assesses the generalization effect of the Li-SheepFaceNet method on the publicly available sheep face dataset; visualizes and analyzes the recognition results; and demonstrates the sheep face recognition system. Section 4 analyzes and discusses the differences from existing studies. Section 5 provides a summary of the paper.
The primary contributions of the paper can be summarized as follows:
(1)
We improved the baseline model MobileFaceNet by incorporating the Seesaw block, which utilizes uneven group convolutions with a channel shuffle operation and an SE block to achieve a lighter model with higher-accuracy performance.
(2)
We utilized the Li-ArcFace loss, which employs a linear function instead of the cosine function, to treat angle values as target logits during low-dimensional embedding feature learning in sheep face recognition. This approach enhances both inter-class discrepancy and intra-class compactness, leading to better convergence and performance.
(3)
To enhance the robustness of the model to the variations in facial features and lighting, we contactlessly collected multi-view facial images of 212 Ujumqin sheep in the wild, simulating the real-world application environment.
(4)
To promote the practical implementation of the proposed method, a rapid and precise open-set sheep face recognition system was developed. It is capable of identifying new individuals following registration without the necessity of retraining the model.

2. Materials and Methods

2.1. Labeled Sheep Faces in the Wild

Sheep face images were collected in April 2023 at the ZhiMu Sheep Cooperative in Xilingol League, Inner Mongolia Autonomous Region, China. Three MOKESE C100 cameras were installed on the passageway to capture the sheep face images of 212 Inner Mongolia Ujumqin sheep contactlessly at 60 frames per second (FPS) and a resolution of 1920 × 1080. By extracting frames and removing duplicates from the recorded videos, images containing the target sheep were obtained. A sheep face detector was trained using YOLOv5s [19], which detected and cropped the sheep faces, resizing them to 224 × 224. In total, we obtained 3801 sheep face images in multi-view, belonging to 212 classes with different skin colors, which formed the LSFW dataset. Figure 1 depicts examples of the sheep face images in LSFW. The face images of each sheep, captured from different views, were stored in subfolders named after their respective numbers.
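To make the detect-and-crop step concrete, the following is a minimal sketch of such a YOLOv5-based cropping pass using the publicly documented torch.hub interface; the weights path 'sheepface_yolov5s.pt' and the frame filename are hypothetical placeholders, not artifacts released with this study.

```python
import cv2
import torch

# Hypothetical detector weights and input frame; the actual pipeline may differ.
detector = torch.hub.load("ultralytics/yolov5", "custom", path="sheepface_yolov5s.pt")
frame = cv2.imread("frame_0001.jpg")
results = detector(frame)
# results.xyxy[0] holds one row per detection: x1, y1, x2, y2, confidence, class
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = map(int, (x1, y1, x2, y2))
    face = cv2.resize(frame[y1:y2, x1:x2], (224, 224))  # crop and resize to 224 x 224
    cv2.imwrite(f"face_{x1}_{y1}.jpg", face)
```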
For the open-set recognition experiments, we randomly selected 30 sheep from the LSFW as the test cluster, while the remaining 182 sheep were used as the training set–open set. Labeled Faces in the Wild (LFW) [20] is a publicly available dataset of human faces collected in real-world, uncontrolled environments and is widely used in the field of human face recognition. In the LFW testing protocol, a pair of face images is given and the recognition model determines whether they belong to the same individual. The face recognition accuracy is then calculated as the ratio of correctly predicted results to the total number of image pairs. In accordance with this established LFW format, we constructed our sheep face recognition test set. The testing set–open set served as the test set for the open-set experiments. It consisted of 30 sheep that were independent of the training set–open set, with a total of 254 images. A total of 600 pairs of sheep face images were randomly generated, with 300 pairs showing the same sheep and 300 pairs showing different sheep.
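As a sketch of this LFW-style protocol, verification reduces to thresholding the similarity of the two embeddings of a pair; the cosine-similarity measure and the fixed threshold below are illustrative assumptions, since the matching function is not specified here.

```python
import torch
import torch.nn.functional as F

def pair_verification_accuracy(emb_a: torch.Tensor, emb_b: torch.Tensor,
                               same: torch.Tensor, threshold: float = 0.5) -> float:
    """emb_a, emb_b: (P, d) embeddings of the two images in each of P pairs;
    same: (P,) boolean labels, True when both images show the same sheep."""
    sim = F.cosine_similarity(emb_a, emb_b, dim=1)   # similarity per pair
    pred = sim > threshold                           # predicted 'same' decisions
    return (pred == same).float().mean().item()      # correct pairs / total pairs

# e.g., for the 600 test pairs with 512-d embeddings:
# acc = pair_verification_accuracy(emb_a, emb_b, same)
```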
Table 1 presents a statistical summary of the dataset utilized in this study. During model training, we utilized NVIDIA’s DALI library to perform random online augmentation on the training dataset, including resizing, flipping, HSV transformations, and Gaussian blurring.
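A minimal sketch of such an online-augmentation pipeline with DALI is shown below; the parameter ranges, the 112 × 112 output size, and the dataset path are illustrative assumptions rather than the exact values used in this study.

```python
from nvidia.dali import pipeline_def, fn

@pipeline_def
def augment_pipeline(data_dir: str):
    # Read labeled images from per-class subfolders and decode on the GPU.
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")
    images = fn.resize(images, resize_x=112, resize_y=112)        # resizing
    images = fn.flip(images, horizontal=fn.random.coin_flip())    # random flipping
    images = fn.hsv(images,                                       # HSV transformation
                    hue=fn.random.uniform(range=[-20.0, 20.0]),
                    saturation=fn.random.uniform(range=[0.8, 1.2]))
    images = fn.gaussian_blur(images, sigma=fn.random.uniform(range=[0.1, 1.0]))
    return images, labels

pipe = augment_pipeline(batch_size=16, num_threads=4, device_id=0,
                        data_dir="LSFW/train")  # hypothetical path
pipe.build()
```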

2.2. SheepFaceNet

Sheep face recognition models are increasingly deployed and applied on edge devices and mobile devices. Therefore, achieving a balance between optimizing accuracy and computational cost is crucial when designing deep neural network architectures. In recent years, several lightweight and efficient neural network architectures [21,22] have been proposed for common computer vision tasks. However, when applied to face recognition, the accuracy of these lightweight network architectures is not satisfactory. To improve the performance of face recognition, MobileFaceNet [23] uses the residual bottlenecks presented in MobileNetV2 [22] as its main building blocks and introduces global depthwise convolutions (GDConv) to replace global average pooling layers, extracting discriminative facial features and enhancing feature representation and discriminability. It is extremely efficient in terms of performing real-time face recognition on mobile and embedded devices while maintaining high facial recognition accuracy.
There are differences in the features of sheep and human faces. Therefore, in order to improve the robustness and generalization ability in sheep face recognition, SheepFaceNet, a higher-performance network architecture based on MobileFaceNet, was designed under limited computational resources. In SheepFaceNet, a lighter-weight and more efficient Seesaw block is used as the bottleneck, and the specific network structure is illustrated in Table 2. The Seesaw block utilizes uneven group convolutions and applies channel shuffle operation to facilitate the flow of information between uneven convolution groups. In comparison to MobileFaceNet, our model is both more lightweight and more accurate in sheep face recognition.

2.2.1. Channel Shuffle for Uneven Group Convolution

Group convolution was first introduced in AlexNet [24]. Recently, ShuffleNet-V1/V2 [25,26] achieved state-of-the-art results among lightweight models using group convolution. Despite its somewhat higher computational demands, uneven group convolution has been demonstrated to enhance representation capability in comparison with classical even group convolution [27]. The architecture of the uneven group convolution is illustrated in Figure 2. This operation captures information at different scales and levels of the input features, thereby improving the representation of the model. In our work, the input feature channels are divided into one-fourth and three-fourths, which are separately passed through a small and a big convolutional layer. Finally, the two convolutional outputs are concatenated along the channel dimension to obtain the final output.
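As an illustration, a minimal PyTorch sketch of this one-fourth/three-fourths split is given below; channel counts divisible by four are assumed, and this is a simplified reading of the operation, not the authors' released code.

```python
import torch
import torch.nn as nn

class UnevenGroupConv(nn.Module):
    """1 x 1 uneven group convolution: the input channels are split into
    one-fourth and three-fourths, convolved separately, then concatenated."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.in_small = in_ch // 4                       # "small" branch channels
        self.out_small = out_ch // 4
        self.conv_small = nn.Conv2d(self.in_small, self.out_small, 1, bias=False)
        self.conv_big = nn.Conv2d(in_ch - self.in_small,
                                  out_ch - self.out_small, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xs, xb = torch.split(x, [self.in_small, x.size(1) - self.in_small], dim=1)
        return torch.cat([self.conv_small(xs), self.conv_big(xb)], dim=1)
```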
Nevertheless, when multiple uneven group convolutions are stacked together, a side effect emerges: the outputs of a given channel group are derived from only a limited subset of the input channels. Figure 3a depicts a scenario involving two stacked uneven-group convolution layers. It is evident that the outputs of a specific group are exclusively related to the inputs within that group. This property impedes the flow of information between channel groups and weakens the representation. To capture interactions and correlations between different features more effectively, we added a channel shuffle operation after the first uneven-group convolution (as shown in Figure 3b): the channels are divided equally into two halves, whose positions are then swapped. The channel shuffle operation enables the construction of more powerful structures with multiple uneven-group convolutional layers, as sketched below.
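The shuffle itself is a cheap tensor rearrangement; a sketch of the half-swap described above:

```python
import torch

def channel_shuffle_half(x: torch.Tensor) -> torch.Tensor:
    """Split the channels into two equal halves and swap their positions,
    so that features can flow between the uneven convolution groups."""
    half = x.size(1) // 2
    a, b = torch.split(x, [half, x.size(1) - half], dim=1)
    return torch.cat([b, a], dim=1)
```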

2.2.2. Seesaw Block

Taking advantage of the channel shuffle operation, a novel Seesaw block is proposed that is specially designed for lightweight networks. A computationally economical 3 × 3 depthwise convolution [21] is added between the two uneven-group convolutions on the bottleneck feature map. The Swish activation function is used, as it is more appropriate here than PReLU [28,29]. The usage of batch normalization (BN) [30] and non-linearity is similar to that of ResNets [31]. In addition, an SE layer (squeeze-and-excitation layer) is introduced to dynamically adjust the importance of channels and enhance the representation power of important channels. Figure 4 depicts the specific structure of the Seesaw block. In the Seesaw block, a residual connection is employed when the input and output channel numbers are equal, in order to prevent manifold collapse during transformations and to allow the network to represent more complex functions [22].
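Combining the two sketches above, a hedged sketch of a Seesaw-style bottleneck follows; the exact ordering of batch normalization, activation, and the SE layer is an assumption based on the description here and in [18], not a verbatim reproduction of the original block.

```python
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """SE layer: reweight channels using globally pooled statistics."""
    def __init__(self, ch: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(self.pool(x))

class SeesawBlock(nn.Module):
    """Uneven gconv (expand) -> shuffle -> 3x3 DWConv -> SE -> uneven gconv (project),
    with a residual connection when the input and output shapes match.
    Uses UnevenGroupConv and channel_shuffle_half from the sketches above."""
    def __init__(self, in_ch: int, out_ch: int, exp_ch: int, stride: int = 1):
        super().__init__()
        self.use_residual = (stride == 1 and in_ch == out_ch)
        self.expand = UnevenGroupConv(in_ch, exp_ch)
        self.bn1 = nn.BatchNorm2d(exp_ch)
        self.dw = nn.Conv2d(exp_ch, exp_ch, 3, stride, 1, groups=exp_ch, bias=False)
        self.bn2 = nn.BatchNorm2d(exp_ch)
        self.se = SqueezeExcite(exp_ch)
        self.project = UnevenGroupConv(exp_ch, out_ch)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # Swish activation

    def forward(self, x):
        y = self.act(self.bn1(self.expand(x)))
        y = channel_shuffle_half(y)            # shuffle after the first uneven gconv
        y = self.act(self.bn2(self.dw(y)))
        y = self.se(y)
        y = self.bn3(self.project(y))          # linear projection (no activation)
        return x + y if self.use_residual else y
```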

2.3. Li-ArcFace Loss

Softmax loss, given in Equation (1), is the most prevalent loss function in classification, where $x_i$ denotes the embedding feature of the $i$-th sample, which belongs to the $y_i$-th class, and the dimension of the embedding feature is set as $d$. $W_j \in \mathbb{R}^d$ denotes the $j$-th column of the weight matrix $W \in \mathbb{R}^{d \times n}$, and $b_j \in \mathbb{R}$ is the bias term. The batch size is $N$ and the number of classes in the training data is $n$. However, softmax loss has limitations in effectively guiding the embedding feature to minimize inter-class similarity and maximize intra-class similarity. As a result, margin-based losses have been proposed to address the challenges of open-set face recognition.
$$L_1 = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{W_{y_i}^{T} x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_j^{T} x_i + b_j}} \quad (1)$$

2.3.1. Margin-Based Softmax Loss

To optimize the feature space metric, the angular cosine distance is introduced and applied to the softmax loss. The bias term $b_j$ is removed, and $\|W_j\| = 1$ and $\|x_i\| = s$ are fixed, such that the logit becomes Equation (2), where $s$ denotes a learnable parameter that controls the distance between categories and $\theta_j$ denotes the angle between $x_i$ and $W_j$. In this way, the logit depends only on the cosine of the angle. The modified loss function can be formulated as in Equation (3).
$$W_j^{T} x_i = \|W_j\| \cdot \|x_i\| \cdot \cos\theta_j = s \cdot \cos\theta_j \quad (2)$$
$$L_2 = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\cos\theta_{y_i}}}{\sum_{j=1}^{n} e^{s\cos\theta_j}} \quad (3)$$
In SphereFace [32], CosFace [33], and ArcFace, an additional angular margin $m$ is added at different positions within $\cos\theta_{y_i}$, which improves the intra-class compactness and inter-class discrepancy. We combined the formulas of the three losses above into a unified formula, as shown in Equation (4). In the experimental section, we choose which loss to use by adjusting the values of $m_1$, $m_2$, and $m_3$. Table 3 shows the differences in the target logit and the settings of $m_1$, $m_2$, $m_3$.
$$L_3 = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\left(\cos(m_1\theta_{y_i} + m_2) - m_3\right)}}{e^{s\left(\cos(m_1\theta_{y_i} + m_2) - m_3\right)} + \sum_{j=1, j \neq y_i}^{n} e^{s\cos\theta_j}} \quad (4)$$
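As an illustration, the unified target logit of Equation (4) translates into PyTorch as follows; this is a simplified sketch of the formula, not the authors' released training code.

```python
import torch
import torch.nn.functional as F

def unified_margin_logits(emb, weight, labels, s=64.0, m1=1.0, m2=0.0, m3=0.0):
    """Equation (4): (m1, 0, 0) gives SphereFace, (0, m2, 0) ArcFace, (0, 0, m3) CosFace.
    emb: (N, d) embeddings; weight: (n, d) class centers; labels: (N,)."""
    cos = F.linear(F.normalize(emb), F.normalize(weight)).clamp(-1.0, 1.0)
    theta = torch.acos(cos)                                   # angles in [0, pi]
    target = torch.cos(m1 * theta.gather(1, labels.unsqueeze(1)) + m2) - m3
    logits = cos.scatter(1, labels.unsqueeze(1), target)      # swap in target-class logit
    return s * logits

# e.g. ArcFace (s = 64.0, m = 0.5):
# loss = F.cross_entropy(unified_margin_logits(emb, W, y, m2=0.5), y)
```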

2.3.2. Li-ArcFace

Li-ArcFace [34] takes the angle, after a linear function, as the logit rather than a cosine function. The transformation in Equation (2) is replicated, giving $\theta_j = \arccos\left(W_j^{T} x_i / s\right) \in [0, \pi]$. We construct the linear function $f(x) = (\pi - 2x)/\pi$, so that $f(\theta_j) \in [-1, 1]$. An additive angular margin $m$ is also applied to the target logit, as in ArcFace. In the end, the target logit is $s(\pi - 2(\theta_{y_i} + m))/\pi$. The whole Li-ArcFace loss, where the prefix "Li" refers to the linear function, can be formulated as in Equation (5).
$$L_4 = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s(\pi - 2(\theta_{y_i} + m))/\pi}}{e^{s(\pi - 2(\theta_{y_i} + m))/\pi} + \sum_{j=1, j \neq y_i}^{n} e^{s(\pi - 2\theta_j)/\pi}} \quad (5)$$
There are two main advantages of using this linear function in place of the cosine. Firstly, the linear function is monotonically decreasing over the whole angular range from $0$ to $\pi + m$, leading to better convergence. Secondly, the loss penalizes the angle between the embedding feature $x_i$ and the center $W_{y_i}$ linearly, so the target logit decreases linearly as the angle increases, as shown in Figure 5. This linear penalty contributes to the effective optimization and regularization of the model.
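Under the same conventions as the sketch for Equation (4), the linear logit of Li-ArcFace might be written as:

```python
import math
import torch
import torch.nn.functional as F

def li_arcface_logits(emb, weight, labels, s=64.0, m=0.45):
    """Equation (5): the logit is the linear function f(x) = (pi - 2x) / pi of the
    angle, with the additive angular margin m applied to the target class only."""
    cos = F.linear(F.normalize(emb), F.normalize(weight)).clamp(-1.0, 1.0)
    theta = torch.acos(cos)                                    # angles in [0, pi]
    onehot = F.one_hot(labels, num_classes=weight.size(0)).to(theta.dtype)
    theta = theta + onehot * m                                 # margin on target angle
    return s * (math.pi - 2.0 * theta) / math.pi               # linear target logit

# loss = F.cross_entropy(li_arcface_logits(emb, W, y), y)
```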

2.4. Evaluation Indicators and Experimental Environment

2.4.1. Evaluation Indicators

The evaluation metrics of the sheep face recognition model include accuracy, precision, recall, F1-score, Matthews correlation coefficient (MCC), parameters, and floating-point operations (FLOPs). These are calculated in Equations (6)–(10), where $TP$, $FP$, $TN$, and $FN$ denote the counts of True Positives, False Positives, True Negatives, and False Negatives, respectively. Accuracy is defined as the percentage of correctly classified samples out of the total number of samples. Precision represents the proportion of samples predicted by the model to be positive that are true positive cases. Recall measures the proportion of true positive cases that the model correctly identifies. The F1-score is the harmonic mean of precision and recall. MCC is a comprehensive metric for assessing the performance of a model that takes into account the correlation between $TP$, $FP$, $TN$, and $FN$; a higher MCC value indicates that the model's predictions correlate strongly with the true results. FLOPs counts the number of floating-point operations performed during the forward pass and serves as a proxy for execution time; a reduction in FLOPs indicates a decrease in the computation and execution time required by the model. The number of parameters determines the size of the model; smaller models have lower hardware requirements and broader applicability.
$$accuracy = \frac{TP + TN}{TP + TN + FP + FN} \quad (6)$$
$$precision = \frac{TP}{TP + FP} \quad (7)$$
$$recall = \frac{TP}{TP + FN} \quad (8)$$
$$F1\text{-}score = \frac{2 \times precision \times recall}{precision + recall} \quad (9)$$
$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (10)$$
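For reference, Equations (6)–(10) translate directly into code; a small sketch from the raw pair-verification counts:

```python
import math

def verification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Equations (6)-(10), computed from the confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, precision, recall, f1, mcc
```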

2.4.2. Experimental Environment

The experiments in this paper were conducted in a Linux Ubuntu 18.04 environment (CPU: Intel Core i9 10900K; GPU: 2 × NVIDIA GeForce RTX 3060; RAM: 64 GB), and the deep learning framework was PyTorch 2.0.1. See Table 4 for the other parameter settings.
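The optimizer settings in Table 4 map directly onto PyTorch; in the sketch below, the linear layer is a stand-in placeholder, not SheepFaceNet itself.

```python
import torch

model = torch.nn.Linear(512, 182)  # placeholder; substitute the recognition backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)  # Table 4 settings
# Training then runs for 40 epochs with a batch size of 16 (Table 4).
```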

3. Results

3.1. Comparison of Open-Set Recognition Results Using SheepFaceNet and Other Models

To validate the effectiveness of the proposed SheepFaceNet, we conducted experiments on the LSFW dataset. The LSFW dataset, comprising 212 sheep, was randomly divided into the training set–open set (182 sheep) and the testing set–open set (30 sheep). A comprehensive analysis was conducted by comparing the proposed model with current state-of-the-art models [11,23,35] in terms of open-set accuracy, model computational complexity, and parameter count. In this experiment, we utilized the ArcFace loss ($s = 64.0$, $m = 0.5$), which is the optimal hyperparameter setting reported for ArcFace [15]. The experimental results are presented in Table 5.
iResNet is an improved version of residual networks (ResNets) [31] that enhances accuracy and learning convergence. Three depths of iResNet models were tested for open-set sheep face recognition: iResNet-18, iResNet-34, and iResNet-50. As the model depth increased, the open-set recognition accuracy improved to 89.66%; however, this also resulted in a higher demand for computational resources. The performance of the ViT model on the LSFW dataset was also assessed, using the smaller version, ViT-s. The model demonstrated suboptimal performance in open-set recognition, with an accuracy of only 90.5%. Moreover, the Transformer architecture introduces higher computational complexity than a CNN, making it less suitable for deployment on resource-constrained edge and mobile devices. The lightweight model MobileFaceNet achieved an open-set recognition accuracy of 94.33% with a low FLOPs value of only 0.45 G and a parameter count of 2.06 M. Additionally, we tested the larger version, MobileFaceNet-Large, which further improved the open-set recognition accuracy to 94.5%.
The results of the experiment demonstrate that SheepFaceNet exhibited a significant advantage in open-set recognition, achieving a recognition accuracy of 95%, which is higher than that of MobileFaceNet-Large. The SheepFaceNet model exhibited the highest F1-score and MCC and demonstrated the most robust and balanced performance overall. Moreover, the model exhibited a notable reduction in parameters and FLOPs in comparison to MobileFaceNet, thereby substantiating the efficacy of the Seesaw block. Figure 6 depicts the recognition accuracy and loss curves of the SheepFaceNet and other models. SheepFaceNet demonstrated superior open-set accuracy while exhibiting a comparable convergence rate to MobileFaceNet.

3.2. Open-Set Recognition Results with Different Margin-Based Losses

In this section, we compare the performance of different margin-based softmax loss functions in open-set sheep face recognition. Experiments were conducted using SheepFaceNet with four loss functions: SphereFace, CosFace, ArcFace, and Li-ArcFace. For each loss function, we set $s = 64.0$ and searched for the optimal value around the recommended margin $m$. Figure 7 illustrates the recognition accuracy of SheepFaceNet with the different losses as $m$ varied. The optimal recognition accuracy for each function was achieved at a different value of $m$. In general, the Li-ArcFace loss function performed better than the other loss functions on the sheep face recognition task. The experimental results with the best $m$ value are presented in Table 6, which demonstrates the significant impact of loss function selection on the accuracy of sheep face recognition. Among the evaluated loss functions, Li-ArcFace ($m = 0.45$) performed best, achieving the highest accuracy in open-set recognition. Although the recall of Li-ArcFace was slightly lower than that of ArcFace, it obtained a higher F1-score due to its higher precision and the smaller gap between precision and recall, indicating a more balanced performance. Therefore, the sheep face recognition method employing Li-ArcFace as the loss function and SheepFaceNet as the network model was designated Li-SheepFaceNet. The Li-SheepFaceNet method exhibited outstanding discriminative power in distinguishing facial features, thereby improving accuracy.

3.3. Evaluation Results of Li-SheepFaceNet on Public Sheep Face Dataset

In this section, we assess the generalization of Li-SheepFaceNet on a publicly available goat face dataset [6], which comprises 10 goats. Figure 8 provides a schematic representation of this GoatFace dataset, which contains 20 face images for each goat. In comparison to the LSFW dataset, the inter-individual disparity in the GoatFace dataset is smaller, and the facial features differ more from those of Ujumqin sheep. In our experiments, we randomly created a total of 600 validation pairs of goat face images, with 300 pairs showing the same goat and 300 pairs showing different goats.
We used the training set–open set as the training set and evaluated the generalization of the proposed method, Li-SheepFaceNet, on the LSFW (testing set–open set) and the GoatFace dataset. Table 7 presents the results of the evaluation conducted on the two datasets. The experimental results demonstrate that SheepFaceNet and the Li-ArcFace loss exhibited notable advantages over MobileFaceNet and the ArcFace loss in terms of recognition performance and generalization. The Li-SheepFaceNet method achieved the highest recognition accuracy on both the LSFW and the GoatFace dataset. Moreover, it exhibited the smallest difference in recognition accuracy between the two datasets, which provides further evidence of the efficacy of our method and its superior capacity for generalization across different sheep face datasets. This enables the transfer of the learned features from the training data to new data.

3.4. Influence of Facial Feature Changes for Open-Set Recognition

Open-set recognition is more challenging compared to closed-set recognition. This is due to the fact that in open-set recognition, the model must be capable of handling situations with unknown classes. Figure 9 illustrates the distinction between closed-set and open-set recognition. Closed-set face recognition can be regarded as a classification problem (Figure 9a), while open-set face recognition is essentially a metric learning problem (Figure 9b). The key objective is to identify distinctive features that satisfy the criterion that the maximum intra-class distance is less than the minimum inter-class distance under a specific metric space. This also requires the model to exhibit strong generalization abilities to adapt to the facial feature changes in different views. Therefore, we incorporate multi-view sheep face information during the model training and recognition process, simulating the real-world application environment, to enhance the robustness of the network to the variations in facial features and lighting. However, the introduction of sheep face images in different views increases the similarity between classes and amplifies the differences within classes, significantly increasing the difficulty of recognition.
Table 8 presents the number of misidentified positive and negative pairs from LSFW. The misidentification of positive pairs occurs when the same individual is erroneously classified as a different individual, whereas the misidentification of negative pairs occurs when different individuals are incorrectly identified as the same individual. In comparison to MobileFaceNet and ArcFace loss, the Li-SheepFaceNet demonstrated a reduction in the number of misidentifications for positive pairs from 15 to 8, and for negative pairs from 19 to 15. Figure 10 illustrates examples of face pairs that were incorrectly recognized using MobileFaceNet with ArcFace loss, but correctly recognized by Li-SheepFaceNet. Figure 10a illustrates the correctly identified positive pair samples of Li-SheepFaceNet, in which the sheep faces exhibit discernible variations in facial features under varying face angles and lighting conditions. The Li-SheepFaceNet model demonstrated enhanced generalization capabilities with respect to these variations. Figure 10b depicts the improved negative pair samples of Li-SheepFaceNet, which demonstrates that our method is capable of more effectively distinguishing between two highly similar individuals.

3.5. Visualization of Open-Set Sheep Face Recognition Results

A total of 30 sheep from the LSFW testing set–open set were registered in the sheep face recognition system. The visualization results of the sheep face recognition system are presented in Figure 11. From Figure 11a,b, it can be observed that the system was capable of effectively recognizing sheep faces with different skin colors and providing the corresponding identity information and the maximum matching similarity within the system. Figure 11c illustrates that the system was capable of accurately recognizing sheep faces in multi-view. For sheep with unknown identities, the system developed in this paper also responded accurately, as shown in Figure 11d, which depicts an unknown identity. In the actual test, the average time from sending a request to displaying the recognition result was 0.91 s, and, on average, the feature matching of 214 image pairs could be completed per second. Furthermore, the number of parameters and the required computation of our method are relatively modest, which allows for more efficient deployment on mobile and edge devices to meet the demands of real-time recognition.

4. Discussion

This study proposed the Li-SheepFaceNet for open-set sheep face recognition and collected sheep face images in multi-view with the objective of enhancing the model’s robustness to variations in facial features and lighting. In comparison to the base method, Li-SheepFaceNet demonstrated an improvement in open-set recognition accuracy of 1.8% on the self-built dataset. Furthermore, it achieved an open-set recognition accuracy of 93.33% on the public dataset, which indicates that it has stronger generalization capabilities. In addition, the model size and computational resources have been further reduced, which makes it more favorable for deployment on mobile and edge devices.
We used a two-stage recognition method involving sheep face detection followed by sheep face recognition. Ref. [36] employed an improved YOLOv4 model to directly detect and classify sheep faces within the original image. However, the presence of a multitude of background information and interfering factors led to a reduction in recognition accuracy and high computational demands. The two-stage sheep face recognition method, as shown in Figure 12, serves to reduce errors and noise and to minimize the computational burden of processing the entire image.
Sheep face recognition is a non-invasive and animal-friendly method of identification. However, it is possible to induce stress or discomfort in sheep when capturing images of their faces. Ref. [5] proposed a two-stage sheep face recognition system that employs a device with a water-trough design to collect sheep face images under controlled conditions. During image collection, isolating individuals or restraining sheep can cause harm and potentially induce stress-related variations in facial expression. Therefore, we endeavored to refrain from such practices during the data collection for this paper. We installed cameras in the passageway of the pasture to capture images of the sheep as they passed freely, as shown in Figure 13. Despite the increased complexity of the acquisition process, the collected data were more closely aligned with the actual application environment.
Research on open-set sheep face recognition, which is better suited to practical application environments, remains relatively scarce. Ref. [12] designed an open-set sheep face recognition network based on a Euclidean space metric, achieving an open-set recognition accuracy of 89.12%. Ref. [13] improved MobileFaceNet by incorporating Efficient Channel Attention with Spatial Information (ECCSA), resulting in an open-set recognition accuracy of 88.06%. Despite the notable progress made in open-set recognition research, the recognition accuracy remains relatively low and fails to meet the requirements of practical applications.
The objective of this research is to enhance the open-set accuracy of sheep face recognition and to reduce the number of parameters and FLOPs of the model, thereby rendering it more suitable for wide deployment and practical applications. Although the Li-SheepFaceNet achieved a satisfactory level of open-set recognition accuracy on the publicly available dataset, which contained 10 goats, we intend to validate our approach on datasets that consist of different scenarios and contain more individuals. This represents a field of future research for us.

5. Conclusions

To address the practical challenges of sheep face recognition posed by flock dynamics and variations in face features in different views, in this paper, we proposed the Li-SheepFaceNet to enhance open-set sheep face recognition performance. We non-invasively collected face images of the Ujumqin sheep in multi-view to enhance the model’s robustness to facial variations. We proposed the SheepFaceNet network, which employs the Seesaw block to enhance feature extraction capabilities and reduce computational resources, thereby facilitating more efficient deployment on mobile and edge devices. The Li-ArcFace loss, utilizing a linear function to incorporate angle values as target logits, exhibits better convergence and performance in low-dimensional embedding feature learning for sheep face recognition. Li-SheepFaceNet presents a novel solution for open-set recognition of sheep faces, accelerating the application of deep learning in intelligent farming practices. Significant facial variations exist among different breeds and in different environmental contexts. In the future, we will enrich the dataset composition to support research on cross-breed and cross-domain recognition, which will further promote the practical application of sheep face recognition.

Author Contributions

Conceptualization, J.L.; methodology, J.L.; software, J.L.; validation, J.L.; formal analysis, Y.Y.; investigation, J.L.; resources, G.L.; data curation, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L., Y.Y. and Y.N.; visualization, J.L. and P.S.; supervision, Y.Y.; project administration, G.L.; funding acquisition, G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (Grant No. 2021YFD1300502).

Institutional Review Board Statement

All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. We declare that we comply with the ethical standards of the relevant national and institutional animal experiment committees. All animal handling procedures were approved by the Laboratory Animal Welfare and Ethics Review Committee of China Agricultural University (Aw60604202-5-1).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to the privacy policy of the author’s institution.

Acknowledgments

We thank all of the funders.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ait-Saidi, A.; Caja, G.; Salama, A.A.K.; Carné, S. Implementing Electronic Identification for Performance Recording in Sheep: I. Manual versus Semiautomatic and Automatic Recording Systems in Dairy and Meat Farms. J. Dairy Sci. 2014, 97, 7505–7514.
2. Kumar, S.; Tiwari, S.; Singh, S.K. Face Recognition for Cattle. In Proceedings of the 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 21–24 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 65–72.
3. Yan, H.; Cui, Q.; Liu, Z. Pig Face Identification Based on Improved AlexNet Model. INMATEH Agric. Eng. 2020, 61, 97–104.
4. Huang, L.; Qian, B.; Guan, F.; Hou, Z.; Zhang, Q. Goat Face Recognition Model Based on Wavelet Transform and Convolutional Neural Networks. Trans. Chin. Soc. Agric. Mach. 2023, 54, 278–287.
5. Hitelman, A.; Edan, Y.; Godo, A.; Berenstein, R.; Lepar, J.; Halachmi, I. Biometric Identification of Sheep via a Machine-Vision System. Comput. Electron. Agric. 2022, 194, 106713.
6. Billah, M.; Wang, X.; Yu, J.; Jiang, Y. Real-Time Goat Face Recognition Using Convolutional Neural Network. Comput. Electron. Agric. 2022, 194, 106730.
7. Meng, X.; Tao, P.; Han, L.; CaiRang, D. Sheep Identification with Distance Balance in Two Stages Deep Learning. In Proceedings of the 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 4–6 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1308–1313.
8. Li, X.; Du, J.; Yang, J.; Li, S. When Mobilenetv2 Meets Transformer: A Balanced Sheep Face Recognition Model. Agriculture 2022, 12, 1126.
9. Li, X.; Xiang, Y.; Li, S. Combining Convolutional and Vision Transformer Structures for Sheep Face Recognition. Comput. Electron. Agric. 2023, 205, 107651.
10. Zhang, X.; Xuan, C.; Ma, Y.; Su, H. A High-Precision Facial Recognition Method for Small-Tailed Han Sheep Based on an Optimised Vision Transformer. Animal 2023, 17, 100886.
11. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
12. Xue, H.; Qin, J.; Quan, C.; Ren, W.; Gao, T.; Zhao, J. Open Set Sheep Face Recognition Based on Euclidean Space Metric. Math. Probl. Eng. 2021, 2021, 1–15.
13. Zhang, H.; Zhou, L.; Li, Y.; Hao, J.; Sun, Y.; Li, S. Sheep Face Recognition Method Based on Improved MobileFaceNet. Trans. Chin. Soc. Agric. Mach. 2022, 53, 267–274.
14. Wen, Y.; Zhang, K.; Li, Z.; Qiao, Y. A Discriminative Feature Learning Approach for Deep Face Recognition. In Proceedings of Computer Vision–ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; pp. 499–515.
15. Deng, J.; Guo, J.; Yang, J.; Xue, N.; Cotsia, I.; Zafeiriou, S.P. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5962–5979.
16. Salama, A.; Hassanien, A.E.; Fahmy, A. Sheep Identification Using a Hybrid Deep Learning and Bayesian Optimization Approach. IEEE Access 2019, 7, 31681–31687.
17. Ding, C.; Tao, D. A Comprehensive Survey on Pose-Invariant Face Recognition. ACM Trans. Intell. Syst. Technol. 2016, 7, 1–42.
18. Zhang, J. SeesawFaceNets: Sparse and Robust Face Verification Model for Mobile Platform. arXiv 2019, arXiv:1908.09124.
19. Jocher, G. Ultralytics YOLOv5, Version 7.0; Zenodo, 2020.
20. Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. In Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition, 2008. Available online: https://inria.hal.science/inria-00321923/ (accessed on 15 June 2024).
21. Howard, A.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4510–4520.
23. Chen, S.; Liu, Y.; Gao, X.; Han, Z. MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices. In Proceedings of Biometric Recognition, Urumqi, China, 11–12 August 2018; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; pp. 428–438.
24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
25. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 6848–6856.
26. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of Computer Vision–ECCV 2018, Munich, Germany, 8–14 September 2018; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; pp. 122–138.
27. Zhang, J. Seesaw-Net: Convolution Neural Network with Uneven Group Convolution. arXiv 2019, arXiv:1905.03672.
28. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for Activation Functions. arXiv 2017, arXiv:1710.05941.
29. Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.-C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324.
30. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
32. Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. SphereFace: Deep Hypersphere Embedding for Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6738–6746.
33. Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; Liu, W. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5265–5274.
34. Li, X.; Wang, F.; Hu, Q.; Leng, C. AirFace: Lightweight and Efficient Model for Face Recognition. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 2678–2682.
35. Duta, I.C.; Liu, L.; Zhu, F.; Shao, L. Improved Residual Networks for Image and Video Recognition. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: New York, NY, USA, 2021; pp. 9415–9422.
36. Zhang, X.; Xuan, C.; Ma, Y.; Su, H.; Zhang, M. Biometric Facial Identification Using Attention Module Optimized YOLOv4 for Sheep. Comput. Electron. Agric. 2022, 203, 107452.
Figure 1. Examples of sheep face images in LSFW.
Figure 2. The structure of uneven group convolution. C1 and C2 indicate the number of input channels and output channels, respectively.
Figure 3. Channel shuffle with two stacked uneven-group convolutions. UnevenGConv means uneven-group convolution. (a) Two stacked uneven-group convolution layers; (b) two stacked uneven-group convolution layers using channel shuffle.
Figure 4. The structure of the Seesaw block. (a) Seesaw block with residual learning; (b) Seesaw block without residual learning, when the stride of the 3 × 3 depthwise convolution (DWConv) is 2.
Figure 5. Target logit curves.
Figure 6. The recognition accuracy and loss curves of SheepFaceNet and other models. (a) Comparison of the open-set recognition accuracy of the sheep face recognition models; (b) comparison of the loss of the sheep face recognition models.
Figure 7. The recognition performance of SheepFaceNet with different losses as the margin value is varied. “★” marks the optimal value of m, at which the recognition accuracy is best.
Figure 8. Examples of facial images of goats in the GoatFace dataset.
Figure 9. Differences in task types between closed-set and open-set recognition. (a) Classification problem; (b) metric learning problem.
Figure 10. Examples of face pairs in LSFW with differing recognition results by MobileFaceNet with ArcFace loss and Li-SheepFaceNet. (a) Positive pairs; (b) negative pairs.
Figure 11. Sheep face recognition system. (a–c) Correct identification results; (d) the identification result of an unregistered sheep.
Figure 12. Flowchart of the two-stage sheep face recognition method.
Figure 13. Data collection cameras placed in the passageway.
Table 1. A statistical summary of the dataset utilized in this study.

| Dataset | Sheep Number | Image Number | Pairs |
|---|---|---|---|
| Training Set–Open Set | 182 | 3547 | - |
| Testing Set–Open Set | 30 | 254 | 600 |
| Total | 212 | 3801 | - |
Table 2. The proposed network architecture. “/2” denotes a convolution stride of 2. “RSeesaw Block” and “Seesaw Block” denote the Seesaw block with and without residual learning, respectively.

| Input | Operator |
|---|---|
| 112 × 112 × 3 | Conv 3 × 3, /2, 64 |
| 56 × 56 × 64 | DWConv 3 × 3, 64 |
| 56 × 56 × 64 | 1 × Seesaw block: 1 × 1 UnevenGConv, 128; 3 × 3 DWConv, /2, 128; 1 × 1 UnevenGConv, 64 |
| 28 × 28 × 64 | 4 × RSeesaw block: 1 × 1 UnevenGConv, 128; 3 × 3 DWConv, 128; 1 × 1 UnevenGConv, 64 |
| 28 × 28 × 64 | 1 × Seesaw block: 1 × 1 UnevenGConv, 256; 3 × 3 DWConv, /2, 256; 1 × 1 UnevenGConv, 128 |
| 14 × 14 × 128 | 6 × RSeesaw block: 1 × 1 UnevenGConv, 256; 3 × 3 DWConv, 256; 1 × 1 UnevenGConv, 128 |
| 14 × 14 × 128 | 1 × Seesaw block: 1 × 1 UnevenGConv, 512; 3 × 3 DWConv, /2, 512; 1 × 1 UnevenGConv, 128 |
| 7 × 7 × 128 | 2 × Seesaw block: 1 × 1 UnevenGConv, 256; 3 × 3 DWConv, 256; 1 × 1 UnevenGConv, 128 |
| 7 × 7 × 128 | Conv 1 × 1, 512 |
| 7 × 7 × 512 | LinearGDConv 7 × 7, 512 |
| 1 × 1 × 512 | LinearConv 1 × 1 |

Conv = convolution; UnevenGConv = uneven group convolution; DWConv = depthwise convolution; LinearGDConv = linear global depthwise convolution; LinearConv = linear convolution.
Table 3. The difference in the target logit of margin-based softmax losses.

| Loss | Target Logit | Value of $(m_1, m_2, m_3)$ |
|---|---|---|
| SphereFace | $s \cos(m\theta_{y_i})$ | $(m_1, 0, 0)$ |
| CosFace | $s(\cos\theta_{y_i} - m)$ | $(0, 0, m_3)$ |
| ArcFace | $s\cos(\theta_{y_i} + m)$ | $(0, m_2, 0)$ |
Table 4. Parameter settings in this paper.

| Configuration | Value |
|---|---|
| Optimizer | SGD |
| Learning Rate | 0.01 |
| Momentum | 0.9 |
| Weight Decay | 1 × 10⁻⁴ |
| Batch Size | 16 |
| Training Epochs | 40 |
Table 5. Comparison of open-set recognition results between SheepFaceNet and other models.

| Model | LSFW (Testing Set–Open Set) (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) | FLOPs/G | Params/MB |
|---|---|---|---|---|---|---|---|
| iResNet-18 | 87.83 | 89.66 | 86.49 | 88.05 | 75.71 | 2.62 | 24.02 |
| iResNet-34 | 88.33 | 90.33 | 86.85 | 88.56 | 76.72 | 4.48 | 34.13 |
| iResNet-50 | 89.66 | 91.33 | 88.38 | 89.83 | 79.37 | 6.33 | 43.59 |
| ViT-s | 90.50 | 92.33 | 89.06 | 90.67 | 81.05 | 5.75 | 76.02 |
| MobileFaceNet | 94.33 | 95.00 | 93.75 | 94.37 | 88.67 | 0.45 | 2.06 |
| MobileFaceNet-Large | 94.50 | 95.00 | 94.05 | 94.52 | 89.00 | 1.85 | 6.31 |
| SheepFaceNet | 95.00 | 95.66 | 94.40 | 95.03 | 90.00 | 0.15 | 1.36 |

MCC = Matthews correlation coefficient; FLOPs = floating-point operations; Params = parameters; ViT-s = Vision Transformer-small.
Table 6. Recognition accuracy of different loss functions with the best margin value.

| Loss | Best Value of m | LSFW (Testing Set–Open Set) (%) | Precision (%) | Recall (%) | F1-Score (%) | MCC (%) |
|---|---|---|---|---|---|---|
| SphereFace | 1.35 | 92.16 | 92.66 | 91.74 | 92.20 | 84.33 |
| CosFace | 0.40 | 93.16 | 94.00 | 92.45 | 93.22 | 86.34 |
| ArcFace | 0.50 | 95.00 | 95.00 | 97.35 | 94.37 | 88.67 |
| Li-ArcFace | 0.45 | 96.13 | 97.33 | 95.11 | 96.21 | 92.35 |
Table 7. Accuracy of open-set recognition results on LSFW and GoatFace.

| Methods | LSFW (Testing Set–Open Set) (%) | GoatFace (%) |
|---|---|---|
| MobileFaceNet + ArcFace | 94.33 | 86.66 |
| MobileFaceNet + Li-ArcFace | 96.00 | 90.00 |
| SheepFaceNet + ArcFace | 95.00 | 90.00 |
| SheepFaceNet + Li-ArcFace (Li-SheepFaceNet) | 96.13 | 93.33 |
Table 8. The number of misidentifications of positive pairs and negative pairs from LSFW.

| Methods | Positive Pairs | Negative Pairs |
|---|---|---|
| MobileFaceNet + ArcFace | 15 | 19 |
| SheepFaceNet + Li-ArcFace (Li-SheepFaceNet) | 8 | 15 |
