Article

Optimizing Deep Learning Algorithms for Effective Chicken Tracking through Image Processing

by Saman Abdanan Mehdizadeh 1, Allan Lincoln Rodrigues Siriani 2 and Danilo Florentino Pereira 3,*
1 Department of Mechanics of Biosystems Engineering, Faculty of Agricultural Engineering and Rural Development, Agricultural Sciences and Natural Resources University of Khuzestan, Ahvaz 63417-73637, Iran
2 Graduate Program in Agribusiness and Development, School of Sciences and Engineering, São Paulo State University, Tupã 17602-496, SP, Brazil
3 Department of Management, Development, and Technology, School of Sciences and Engineering, São Paulo State University, Tupã 17602-496, SP, Brazil
* Author to whom correspondence should be addressed.
AgriEngineering 2024, 6(3), 2749-2767; https://doi.org/10.3390/agriengineering6030160
Submission received: 2 July 2024 / Revised: 20 July 2024 / Accepted: 1 August 2024 / Published: 8 August 2024
(This article belongs to the Section Livestock Farming Technology)

Abstract

Identifying bird numbers in hostile environments, such as poultry facilities, presents significant challenges. The complexity of these environments demands robust and adaptive algorithmic approaches for the accurate detection and tracking of birds over time, ensuring reliable data analysis. This study aims to enhance methodologies for automated chicken identification in videos, addressing the dynamic and non-standardized nature of poultry farming environments. The YOLOv8n model was chosen for chicken detection due to its high portability, achieving an accuracy above 0.98 and a loss of less than 0.1. The developed algorithm promptly identifies and labels chickens as they appear in the image. The process is illustrated in two parallel flowcharts, emphasizing different aspects of image processing and behavioral analysis. False regions such as the chickens’ heads and tails are excluded to calculate the body area more accurately, yielding a precise measure of each chicken’s size and shape. The following three scenarios were tested with the newly modified deep-learning algorithm: (1) a reappearing chicken after temporary invisibility; (2) multiple missing chickens with object occlusion; and (3) multiple missing chickens with coalescing chickens. In all scenarios, the modified algorithm improved accuracy in maintaining chicken identification, enabling the simultaneous tracking of several chickens with respective error rates of 0, 0.007, and 0.017. Morphological identification, based on features extracted from each chicken, proved to be an effective strategy for enhancing tracking accuracy.

1. Introduction

Ensuring efficient chicken farming, including prioritizing bird welfare and health in agricultural production, is crucial [1]. Individual variability in behavior enhances our understanding of how housing can be used differently by chickens; this may be due to individual preferences for certain types of equipment or to impediments to access caused by dominant individuals [2]. This variability underscores the importance of improving accurate identification systems for camera-based tracking.
Precision livestock farming (PLF) aims to automatically and continuously monitor animal health and welfare in real time. Its success relies on the ability to provide early warnings [3]. Excessively high or low activity levels can increase the risk of metabolic and locomotion problems, negatively affecting chicken welfare [4]. Therefore, it is crucial to regularly monitor bird activity and keep it within appropriate limits for optimal poultry production.
Li et al. [3] listed several image analysis techniques used to analyze flock activity. Monitoring changes in collective movement patterns is a strategy that has significantly contributed to the early detection of unwanted situations. Bloemen et al. [5] used threshold segmentation to convert images into a binary format, counting pixels corresponding to moving animals and calculating the activity index. However, traditional threshold segmentation methods have high requirements for lighting and background conditions, leading to high error rates in chicken segmentation. Kristensen et al. [6] combined gradual changes in light intensity with dynamic data modeling to calculate chicken activity levels, but this study was highly sensitive to lighting conditions. Dawkins et al. [7] assessed chicken behavior, gait, and welfare using optical flow. These visual records provide valuable insights into movement patterns, social interactions, environmental preferences, and flock health, among other aspects fundamental to animal management and welfare, as well as to production optimization. However, tracking individual chickens remains a challenge to overcome.
Object-tracking techniques can be broadly categorized into separate trackers and joint trackers. Joint trackers integrate appearance and motion tracking into a single framework [8]. This integration allows for a more holistic and potentially more robust tracking process. Joint trackers can more effectively handle situations where objects are partially occluded or momentarily out of view [9], as the combined information helps maintain continuity. However, the complexity of combining multiple tracking aspects can lead to increased computational demands [10] and may require more sophisticated models to achieve the same level of performance as separate trackers [11].
Separate trackers, such as StrongSort, are known for their efficiency and enhanced performance [13]. Utilizing advanced techniques such as BoT and ResNeSt50 for appearance [14] and the ECC module for motion [15], StrongSort offers superior accuracy in real-time object tracking. Techniques such as the EMA method for updating appearance states [16] and the use of vanilla matching instead of cascade matching [17] further contribute to its effectiveness. However, these methods require the chicken to remain visible for identification; if the bird passes through blind spots or behind objects that obstruct the camera’s view, the model fails.
In summary, separate trackers like StrongSort offer high efficiency and accuracy by focusing on specific tracking components but may struggle with occlusions [12]. Joint trackers provide a more integrated approach, potentially improving robustness in challenging scenarios, but often at the cost of increased computational complexity.
This study presents an innovative approach to identifying birds in hostile environments, such as poultry facilities, using robust and adaptive algorithms to ensure the precise detection and tracking of birds over time.
In this context, this study aims to enhance the methodologies for the automated identification of chickens in videos, addressing the dynamic and non-standardized nature of poultry farming environments [18,19]. The YOLOv8n model was chosen for chicken detection due to its high portability [20]. The developed algorithm promptly identifies and labels the chickens as they appear in the image, ensuring efficient detection even under adverse conditions.
The novelty of this study lies in the creation of an algorithm capable of maintaining bird identification even in complex scenarios. The identification process is illustrated in two parallel flowcharts, each highlighting different aspects of image processing and behavioral analysis. To calculate the chickens’ body area more accurately, false regions such as heads and tails are excluded.
The following three specific scenarios were tested with the newly modified deep-learning algorithm: (1) chickens reappearing after temporary invisibility; (2) multiple chickens disappearing due to object occlusion; and (3) multiple chickens disappearing with the coalescence of chickens. These tests demonstrated the algorithm’s ability to track and individually identify chickens even in highly challenging situations, significantly contributing to the advancement of ethology research and the improvement in animal monitoring and management systems in agricultural environments.
The goal of this study is to improve methods for automated chicken identification in videos, addressing the challenges of dynamic and non-standardized poultry farming environments. By developing and evaluating algorithms capable of recognizing and individually tracking chickens, even in adverse conditions, we intend to make a significant contribution to the advancement of ethology research, as well as the improvement of animal monitoring and management systems in agricultural environments.

2. Materials and Methods

2.1. Video Collection Setup

This study was conducted at the Bioterium of the School of Sciences and Engineering, São Paulo State University (UNESP) in Tupã, Brazil, over 90 days from 10 June to 8 September 2020. The experiment included 80 Lohmann lineage laying hens, which were 29 weeks old at the start of the experiment, sourced from a commercial farm, and randomly divided into four groups of twenty. The experiment used four small-scale shed models, each measuring 140 cm by 300 cm and 150 cm in height, oriented east–west. The hens were kept on 15 cm high wood-chip bedding, and each shed included two nest boxes (40 cm × 40 cm × 40 cm), a 30 cm diameter pendulum feeder, and four nipple drinkers. The present analysis observed one group of 20 chickens in a reduced-scale shed over a 15 min video. Monitoring was performed using a POWER® camera, model AP2688W, with an analog charge-coupled device (CCD) image sensor placed above the aviary to capture images from an overhead view. Ref. [21] provides a complete description of the experiment.

2.2. Deep Learning Algorithm

2.2.1. Detection Model—YOLOv8n

For this study, the YOLOv8n model was selected to detect chickens due to its high portability. As the latest version of the YOLO model [22], YOLOv8 is a single-stage model that directly extracts features from input images and simultaneously predicts target categories and positions. This approach reduces the training time and model complexity, resulting in faster processing.
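For illustration, a minimal sketch (not the authors’ released code) of how a YOLOv8n model can be loaded with the ultralytics package and applied to a single video frame is shown below; the weight file, video path, and confidence threshold are placeholders.

```python
# Minimal sketch: load YOLOv8n and detect chickens in one video frame.
# "yolov8n.pt", "pen_video.mp4", and conf=0.5 are illustrative placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # nano variant, chosen for portability

cap = cv2.VideoCapture("pen_video.mp4")     # hypothetical overhead video of the pen
ok, frame = cap.read()
if ok:
    results = model.predict(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = (int(v) for v in box.xyxy[0].tolist())   # bounding-box corners
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw detection
cap.release()
```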
Compared to two-stage models like R-CNN [23] and Fast-RCNN [24], YOLOv8 offers high detection accuracy and quick detection rates. This makes it ideal for use in resource-limited environments, such as mobile devices, where efficiency and speed are essential. The study by [25] provided the foundational data for this research. We evaluated the YOLOv8 model using the following performance metrics.
Training Losses:
  • Bounding Box Loss (train/box_loss): this metric measures the error in predicting the coordinates of the bounding boxes.
  • Classification Loss (train/cls_loss): this metric measures the error in classifying the objects.
  • Objectness Loss (train/obj_loss): this metric evaluates the confidence score for the presence of an object in a proposed region.
Validation Losses:
  • Bounding Box Loss (val/box_loss): similar to the training bounding box loss, this validation metric tends to decrease, reflecting the improved generalization of the model for object localization on unseen data.
  • Classification Loss (val/cls_loss): the reduction in validation classification loss indicates the enhanced generalization for object classification.
  • Objectness Loss (val/obj_loss): this metric, which also decreases over the epochs, shows that the model’s confidence in detecting objects improves on the validation set.
Precision Metrics:
  • Precision for Class B (metrics/precision(B)): Precision measures the accuracy of positive predictions. A rising precision curve indicates that the model reduces false positives for class B.
  • Recall for Class B (metrics/recall(B)): Recall measures the model’s ability to detect all relevant instances of class B. The upward trend in recall shows that the model becomes more effective in finding all true positives over time.
  • Mean Average Precision at IoU = 0.50 for Class B (metrics/mAP50(B)): This metric evaluates the precision of the model at an intersection over union (IoU) threshold of 0.50.
  • Mean Average Precision at IoU = 0.50:0.95 for Class B (metrics/mAP50-95(B)): This metric averages the precision over multiple IoU thresholds from 0.50 to 0.95.
Overall, these metrics collectively illustrate the performance of the object detection model over the training epochs. The declining loss values and rising precision and recall curves confirm that the model becomes more accurate and confident in detecting and classifying objects.
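Assuming the standard ultralytics training interface was used, the 50-epoch run behind these metric curves could be reproduced roughly as sketched below; the dataset configuration file name is a placeholder, not a file provided by the authors.

```python
# Hedged sketch of the training/validation run; "chickens.yaml" is a placeholder
# dataset configuration describing the annotated chicken images.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                  # start from pretrained nano weights
model.train(data="chickens.yaml", epochs=50, imgsz=640)     # logs box/cls losses, precision, recall, mAP per epoch
metrics = model.val()                                       # evaluate on the validation split
print(metrics.box.map50, metrics.box.map)                   # mAP@0.50 and mAP@0.50:0.95
```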

2.2.2. The New Modified Deep Learning Algorithm

The YOLO model described previously performed the initial identification. Due to blind spots in the camera’s field of view, a chicken that temporarily disappeared was treated as a new individual upon reappearance and received a new label, which artificially increased the number of chickens. To prevent this issue, the number of chickens in each pen was kept constant for this study, and the modified algorithm reassigned the lost ID to the chicken when it reappeared in the image. Nevertheless, if several chickens entered the camera’s blind spot simultaneously, the likelihood of misassigning labels upon their reappearance increased. To improve the accuracy of the center coordinates and the proper labeling of each chicken, the area feature was used. Figure 1 illustrates the detailed workflow of the modified deep-learning algorithm used for tracking and analyzing chickens in a confined environment. The process is depicted in two parallel flowcharts, each highlighting different aspects of image processing and behavioral analysis of the chickens. The algorithm was developed in Python.
The flowchart on the left starts with data input and image capture, where the current number of chickens in the environment is recorded, and real-time images are captured. This information is essential for accurate indexing and tracking. The captured images are then processed to calculate the centroid, contours, and area of each chicken, which are fundamental parameters for identifying and tracking individuals over time. Based on these calculations, the chickens are indexed over ten consecutive frames, allowing for continuous and accurate movement analysis.
The algorithm checks for the availability of a chicken’s index. If an index is missing but the chicken is still present in the image, the center and area of the lost index are compared with the data from the last ten frames to regenerate the correct index, ensuring continuity in tracking. The area and centroid data of each chicken are saved for the last ten frames, serving as temporary storage for subsequent analysis. These collected data are then used for behavior-based classification, which helps in understanding movement patterns and social interactions. Finally, the processing results are printed directly on the image frames, and the video is saved based on the generated frames, facilitating visualization and analysis.
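The index-regeneration step described above can be sketched as follows. This is a simplified illustration (not the authors’ implementation) of keeping each ID’s centroid and area for the last ten frames and assigning a missing ID to a reappearing detection by comparing these features; the cost weighting is an assumption.

```python
# Simplified sketch of the ID bookkeeping: each chicken ID keeps its centroid and
# body area for the last ten frames; a reappearing detection inherits the missing
# ID whose stored features match best. The cost weighting (100.0) is illustrative.
from collections import deque

HISTORY = 10
history = {}                          # chicken_id -> deque of ((x, y), area), most recent last
all_ids = set(range(1, 21))           # 20 chickens per pen; the count is assumed constant

def update_history(chicken_id, centroid, area):
    history.setdefault(chicken_id, deque(maxlen=HISTORY)).append((centroid, area))

def reassign_missing_id(centroid, area, visible_ids):
    """Pick the missing ID whose recent centroid and area best match a new detection."""
    best_id, best_cost = None, float("inf")
    for cid in all_ids - set(visible_ids):
        if cid not in history:
            continue
        (cx, cy), past_area = history[cid][-1]                     # most recent record
        dist = ((centroid[0] - cx) ** 2 + (centroid[1] - cy) ** 2) ** 0.5
        area_diff = abs(area - past_area) / max(past_area, 1)
        cost = dist + 100.0 * area_diff                            # distance plus scaled area mismatch
        if cost < best_cost:
            best_id, best_cost = cid, cost
    return best_id
```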
The flowchart on the right begins with real-time image capture, followed by converting the captured RGB images to grayscale to reduce complexity and facilitate subsequent processing. The grayscale images are then converted to binary images to increase computational speed. The watershed method is employed to segment edge regions, clearly separating different areas of interest. Morphological opening is applied to refine segmentation, removing noise and small imperfections. The processed image is then returned, ready for subsequent analysis.
False regions, such as the chickens’ heads and tails, are removed to calculate the body area more accurately. The final step involves calculating the body area, excluding these false regions, to provide a precise measure of the chickens’ size and shape.
The combination of these two flowcharts allows for the robust and accurate tracking of chickens, as well as a detailed analysis of their behaviors within the confined environment. This advanced methodology is crucial for improving the monitoring and management of chickens in agricultural settings, significantly contributing to ethological research and the optimization of production. Appendix A presents the pseudocode of the optimized algorithm.
Different head positions (up or down) and tail variations (longer or shorter tail feathers) cause a chicken to present different areas in different states, which not only fails to improve the tracking algorithm but can actually mislead it [26]. To address this issue, a head and tail removal algorithm based on the watershed algorithm proposed in [27] was employed. Figure 2 shows the processing steps used to remove the head and tail of the chickens.
In this method, after the chickens are identified by the deep learning algorithm, each chicken is examined individually (Figure 2a). To speed up image processing, the RGB images are converted to grayscale to reduce the computational load (Figure 2b). Background removal is straightforward in this mode because of the difference in pixel intensity between the chicken and the background: thresholding at 210 keeps the chicken pixels in the image while removing the background. After background removal, background areas with pixel values similar to the chickens remain in the image, creating white noise regions (Figure 2c).
To eliminate these unwanted areas, filtering based on region size was applied: regions smaller than 60 square pixels were considered noise and removed from the image (Figure 2d). The threshold of 60 square pixels was established through a methodical process of trial and error. This value, albeit small, was found to be optimal for our specific context. It is important to note that the chickens, which are the primary subject of our study, were significantly larger than this threshold, with an average area of 19,573 pixels.
Given this substantial difference in scale, the threshold of 60 square pixels is unlikely to introduce any significant distortions or inaccuracies in our analysis. Furthermore, this threshold does not necessitate any adaptations when applied to different samples, as its value is relatively insignificant compared to the size of the chickens.
In essence, the choice of this threshold is a balance between eliminating noise and preserving the integrity of the sample’s physical properties. It is a testament to the robustness of our algorithm that it can accommodate such variations without the need for individual adjustments. We believe this approach enhances the generalizability and applicability of our method across the different samples.
The resulting binary image retains only the chicken, with the background completely removed. The heads and tails of the chickens were then removed using the watershed algorithm, which treats an image as a three-dimensional surface in which pixel intensity determines height. On this topographic landscape, three types of points can be distinguished: local minima, where a drop of water would remain; points from which a drop moving downhill reaches a single local minimum (often called basins or catchment basins); and points from which a drop moving downhill could reach more than one local minimum (called watershed lines or divides). Segmentation algorithms based on these concepts aim to find the watershed lines (Figure 2e). After applying watershed thresholding, the image is opened using an opening operator, and small regions (related to the head and tail) are filtered out. The image is then reconstructed and closed (Figure 2f), and the body area of the chickens is calculated without the head and tail (Figure 2g).
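A minimal OpenCV sketch of these steps (not the authors’ released code) is shown below; only the grayscale threshold of 210 and the 60-pixel area filter come from the text, while the distance-transform marker fraction and the opening kernel size are illustrative assumptions.

```python
# Hedged sketch of the segmentation steps in Figure 2; kernel size and the
# distance-transform fraction are illustrative assumptions, not published parameters.
import cv2
import numpy as np

def body_area_without_head_tail(roi_bgr):
    """roi_bgr: BGR crop of one detected chicken. Returns (body_mask, body_area_px)."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)                 # Figure 2b
    _, binary = cv2.threshold(gray, 210, 255, cv2.THRESH_BINARY)     # Figure 2c (threshold from the text)

    # Area filter: drop noise blobs smaller than 60 px (Figure 2d)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= 60:
            mask[labels == i] = 255

    # Watershed markers from the distance transform (Figure 2e)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(mask, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers += 1                       # background becomes 1, regions start at 2
    markers[unknown == 255] = 0        # unknown pixels are resolved by the watershed
    markers = cv2.watershed(roi_bgr, markers)

    # Keep the largest region (the body) and open it to drop thin head/tail remnants (Figure 2f,g)
    region_labels = [l for l in np.unique(markers) if l > 1]
    body_label = max(region_labels, key=lambda l: int(np.sum(markers == l)))
    body = np.uint8(markers == body_label) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    body = cv2.morphologyEx(body, cv2.MORPH_OPEN, kernel)
    return body, int(cv2.countNonZero(body))
```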
To evaluate the modified YOLOv8, accuracy, precision, specificity, sensitivity, error rate, Kappa coefficient, and F1-score were calculated using Equations (1)–(7) over 900 frames [26].
Accuracy = (TP + TN)/(TP + TN + FP + FN) × 100   (1)
Precision = TP/(TP + FP) × 100   (2)
Specificity = TN/(TN + FP) × 100   (3)
Sensitivity = TP/(TP + FN) × 100   (4)
Error rate = (FP + FN)/(TP + TN + FP + FN) × 100   (5)
Kappa coefficient = (observed accuracy − expected accuracy)/(1 − expected accuracy) × 100   (6)
F1-Score = 2 × (Precision × Sensitivity)/(Precision + Sensitivity)   (7)
where TP: true positives; TN: true negatives; FP: false positives; and FN: false negatives.
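For reference, the sketch below transcribes these definitions into a small Python helper. It makes two assumptions: values are returned on a 0–1 scale (Table 1 reports fractions, whereas the equations include a ×100 percentage factor), and the expected accuracy for the Kappa coefficient is computed from the confusion-matrix marginals as in the standard Cohen’s kappa, which the text does not spell out.

```python
# Equations (1)-(7) as a helper; values are returned on a 0-1 scale
# (multiply by 100 for percentages, as in the equations above).
def evaluation_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")   # N/A when TN = FP = 0
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    error_rate = (fp + fn) / total
    # Cohen's kappa: agreement beyond chance, from the confusion-matrix marginals
    observed = accuracy
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (observed - expected) / (1 - expected) if expected != 1 else float("nan")
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else float("nan"))
    return {"accuracy": accuracy, "precision": precision, "specificity": specificity,
            "sensitivity": sensitivity, "error_rate": error_rate,
            "kappa": kappa, "f1": f1}
```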

3. Results

3.1. YOLO Detection Model

Figure 3 presents the evolution of various performance metrics over 50 epochs of training and validation of an object detection model. The metrics are divided into two main categories: losses and precision metrics. The detailed analysis of these results reveals significant improvements in the model’s performance, achieving an accuracy above 0.98 and a loss of less than 0.1.
Observing the graphs in Figure 3, we see that the performance metrics bounding box loss (train/box_loss), classification loss (train/cls_loss), and objectness loss (train/obj_loss) consistently decline over the epochs. This indicates that the model becomes increasingly accurate in localizing objects and correctly identifying object classes, reflecting the improved generalization for object localization on unseen data.
The precision for Class B (metrics/precision(B)), recall for Class B (metrics/recall(B)), and mean average precision at IoU = 0.50 for Class B (metrics/mAP50(B)) exhibit rising precision curves, suggesting that the model reduces false positives for Class B, becomes more effective in finding all true positives over time, and demonstrates good precision and recall for Class B at this IoU threshold.
Furthermore, the mean average precision at IoU = 0.50:0.95 for Class B (metrics/mAP50-95(B)) shows a consistent increase, with high final values, demonstrating that the model performs well across different levels of localization strictness.

3.2. Evaluation of the New Modified Deep Learning Algorithm

Once trained, the classifier assigns a unique label to each chicken across different frames, ensuring distinct identification. It is crucial to acknowledge that this process operates optimally when all chickens remain within the image boundaries. However, challenges arise when other objects obstruct the view of the chickens or when a chicken exits the frame. In such scenarios, upon the chickens’ reappearance, the existing algorithm identifies them as new chickens entering the frame, resulting in the assignment of new labels and an increase in the count of the present chickens. Additionally, the creation of separate files for each chicken and the storage of relevant information in those files may not be ideal.
The algorithm could face different scenarios, as illustrated in Figure 4, where chicken number 4 exhibits movement toward the feeder, positioning itself behind it to consume grain. However, due to the feeder’s presence obstructing the view, chicken number 4 becomes temporarily invisible within the frame. Subsequently, when the chicken moves away from the feeder and exits the feeder’s coverage area, it reappears in the image. The algorithm then identifies this reappeared chicken as a distinct entity and assigns it a new label. Consequently, with each iteration of this process, the label numbers increment, while the actual count of chickens in the image remains constant.
To address this issue, it is assumed that the number of chickens in the image remains constant, with no additions or removals. Consequently, when a chicken that was not present in previous frames reappears, the algorithm assigns the missing label to it and abstains from generating new labels. However, even in this scenario, various situations arise, which will be examined as follows.

3.2.1. Scenario One: Reappearing Chicken with Temporary Invisibility

Consider a scenario where there is a fixed set of chickens in an image and one of them is temporarily lost from view, only to reappear later. Under the premise that the total number of chickens in the image remains constant, a tracking algorithm is implemented to assign the correct identity to each chicken as it becomes visible again.
In Figure 5, the movement of chicken number 4 is observed, which initially moves diagonally from location A to location B. It then undergoes a horizontal transition from location B to location C. At this point, chicken number 4 chooses to consume grains and advances towards location D. Upon reaching the feeder, it assumes the position at the rear of the feeder (designated as location E) and begins feeding. During this period, the presence of the chicken is temporarily obstructed by the feeder, resulting in its invisibility in the image. After completing the feeding, the chicken moves away from the feeder and proceeds to location F. The developed algorithm promptly reassigns the label to the chicken as soon as it reappears in the view, under the assumption that the only missing label corresponds to the chicken previously removed from the image. From this point onwards, the algorithm continues to track the chicken at subsequent locations, such as F and G.
The main challenge arises when more than one chicken is lost from the image. In such a case, upon their reappearance, the algorithm becomes confused, and the likelihood of correctly identifying each chicken’s label decreases.

3.2.2. Scenario Two: Multiple Missing Chickens, Occlusion with Objects

In this scenario, more than one chicken can be lost from the image. In such cases, the morphological features extracted from the chickens serve as a basis for identifying each chicken according to its label. The data file associated with each chicken is examined based on the labels previously assigned to them. Specifically, only the data of chickens whose labels are not currently present in the image are reviewed. When a reappearing chicken is detected, its morphological features are extracted and compared with the existing data file. As a result, each chicken’s label is correctly reassigned upon reappearing in the image. Figure 6 shows an example of this scenario. Consider the image where the chickens numbered 4 and 17 have disappeared (Figure 6a). When these chickens reappear, the algorithm analyzes the historical paths taken by the missing chickens. Specifically, it focuses on the path of the chicken that disappeared closest to the reappearance location (Figure 6b). If the movement direction of the disappeared chicken aligns with that of the reappeared chicken (Figure 6c), additional morphological features are checked. Upon confirmation, the correct label is assigned to the reappeared chicken (Figure 6d).
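A hedged sketch of this disambiguation step is given below; its structure follows the description above (nearest disappearance point, movement-direction agreement, then an area check), but the distance, alignment, and area-ratio thresholds are illustrative assumptions, not values reported by the authors.

```python
# Hedged sketch of matching a reappearing chicken to one of several missing IDs,
# combining last known position, movement direction, and body area (head/tail removed).
import numpy as np

def direction(history_points):
    """Unit vector of recent motion from a list of (x, y) centroids."""
    p0, p1 = np.array(history_points[0], float), np.array(history_points[-1], float)
    v = p1 - p0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def match_reappearance(new_centroid, new_direction, new_area, missing_tracks,
                       max_dist=150.0, min_alignment=0.5, max_area_ratio=0.2):
    """missing_tracks: id -> {'centroids': [(x, y), ...], 'area': last body area}."""
    best_id, best_score = None, -np.inf
    for cid, track in missing_tracks.items():
        last_pos = np.array(track["centroids"][-1], float)
        dist = np.linalg.norm(np.array(new_centroid, float) - last_pos)
        if dist > max_dist:
            continue                      # disappeared too far from the reappearance point
        alignment = float(np.dot(direction(track["centroids"]), new_direction))
        if alignment < min_alignment:
            continue                      # movement directions do not agree
        area_diff = abs(new_area - track["area"]) / max(track["area"], 1)
        if area_diff > max_area_ratio:
            continue                      # body area does not match
        score = alignment - dist / max_dist - area_diff
        if score > best_score:
            best_id, best_score = cid, score
    return best_id
```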

3.2.3. Scenario Three: Multiple Missing Chickens, Coalescing Chickens

In the observed scenario, two or more chickens come into proximity, visually coalescing into a single entity due to their similar appearance. Upon subsequent separation, assigning accurate labels to each individual becomes challenging because multiple labels have been lost. To address this issue, the identification considers not only the morphological characteristics of each chicken but also its movement direction; neglecting the direction of movement could lead to label misallocation, which makes it a key component in this context. In the accompanying image (Figure 7), the chickens numbered 5 and 17 converge at position A. Chicken number 17 moves underneath chicken number 5, causing it to disappear from the frame (as observed in Figure 7b).
After a brief interval, chicken number 17 re-emerges from beneath chicken number 5, becoming visible again. The two chickens are re-labeled and differentiated using the developed algorithm, which leverages morphological features and movement directions, and unique labels are assigned to each chicken (Figure 7c). Following separation and re-identification, both chickens resume their paths, and the algorithm continues to track their movements (as demonstrated in Figure 7d).
In the context of object detection, several critical metrics are employed to assess the performance of this model. Precision quantifies the proportion of true positive predictions (correct detections) among all positive predictions (both true positives and false positives). High precision indicates the effective avoidance of false positives by the model. Sensitivity calculates the proportion of true positive predictions among all actual positive instances (true positives and false negatives). Table 1 shows the performance evaluation metrics for each of the evaluated scenarios.
High recall signifies the successful detection of most positive instances by the model. The F1-Score combines precision and recall into a single metric, balancing the trade-off between false positives and false negatives. The error rate complements accuracy (1 minus accuracy) and quantifies misclassifications; the low values obtained (0, 0.007, and 0.017 for scenarios I, II, and III, respectively) correspond to good model performance. Specificity quantifies the proportion of true negative predictions (correctly rejected non-positive instances) among all actual negative instances; note that specificity is not applicable when true negatives (TNs) are zero. The accuracy measures the overall correctness of predictions (both true positives and true negatives) relative to the total number of instances, and the high values obtained (at least 98.3%) indicate good overall performance. Finally, the Kappa coefficient assesses the agreement between the observed and expected classifications, considering both accuracy and chance agreement; the values of 0.756 and 0.719 obtained for scenarios II and III indicate agreement beyond chance. Additionally, in scenarios where the algorithm initially produces incorrect detections, it subsequently corrects itself within the first few frames.
The initial state of the four chickens was examined, as depicted in Figure 8a, and this image was designated as the starting point for tracking the birds. These four hens were tracked across 18,000 frames by the developed algorithm, and the coordinates of each hen were continuously stored (Figure 8b–e). The movement of each hen, recorded over 10 min, is visually displayed at the end (Figure 8f). This capability can serve as an indicator of the health of the chickens, their preference for different locations, their inclination towards feed, and other aspects. For example, careful observation of the path taken by chicken number 4 (Figure 8e) suggests a greater inclination towards the feed than the other chickens examined, whereas the path taken by chicken number 3 (Figure 8d) suggests a lesser inclination to move or to use the feed in the feeder.

3.3. Tracking Four Chickens Using a Modified Deep Learning Algorithm

The developed algorithm is capable of tracking all the chickens present in the image. For instance, four chickens were automatically tracked for a duration of 10 min. Figure 8 illustrates the tracking of four chickens in a confined environment, captured by a camera (CAM-1), using the modified deep-learning algorithm. The image is divided into six panels, (a) to (f), detailing the different aspects and results of the tracking.
The results presented in Figure 8 underscore the efficacy of the modified deep-learning algorithm in accurately tracking the movements of individual chickens in a confined environment. The detailed tracking trajectories, illustrated in panels (b) through (e), demonstrate the algorithm’s capability to distinguish and monitor each bird’s movement despite the inherent complexity and interactions within the enclosure. The superimposed trajectories in panel (f) provide a comprehensive visualization of the movement patterns, facilitating a comparative analysis of the chickens’ spatial behavior. These findings highlight the potential of advanced tracking technologies in enhancing the monitoring and management of poultry, offering valuable insights into animal behavior that are crucial for optimizing welfare and production outcomes in agricultural settings. The Supplementary Materials (Video S1) presents an example of applying the optimized algorithm to a section of footage. Future work will focus on refining these methodologies to further improve tracking accuracy in varied and more challenging farming environments.

4. Discussion

Observing behavior is crucial for assessing the needs of animals and estimating their health by evaluating pain, injury, and illness [28]. Tracking chickens in breeding houses involves several challenges, such as varying light conditions, occlusions by other chickens or objects, and the need to distinguish between individual chickens [29]. These challenges necessitate sophisticated algorithms capable of maintaining high accuracy and robustness.
The development of algorithms for chicken identification and tracking in videos is essential for significant advancements in automated monitoring in poultry environments. Recent studies have explored the use of You Only Look Once (YOLO) models, particularly their more recent versions, which have shown promising results in the real-time detection and labeling of chickens. Tan et al. [30] used YOLOv7-tiny with an improved StrongSort and reported a mean AP50 of 0.97. Neethirajan [31] used YOLOv5, improved it with a Kalman filter, and reported a mean AP50 of over 0.85. Siriani et al. [25] used YOLOv8n, improved it with the BoT-Sort algorithm, and reported a mean AP50 of 0.98. This model’s high portability and accuracy make it a suitable choice for deployment in dynamic poultry farming environments.
Despite the significant advances, there is still room for improvement in the field of poultry tracking. The ongoing research focuses on enhancing the robustness of these models against more complex scenarios and improving the integration of multi-sensor data for a comprehensive monitoring solution. Additionally, the development of more sophisticated algorithms capable of real-time adaptation to environmental changes remains a critical area of exploration [30].
This study addressed this need by implementing an adaptable and robust algorithm capable of dealing with challenges presented by the environment and bird behavior, using morphological identification. The results showed that, although the algorithm is effective in many situations, challenges persist, such as bird overlap and temporary occlusion by objects, which can lead to errors in identification and tracking. The algorithm’s ability to correctly reassociate labels when birds reappear is a significant advance, providing a solid foundation for future developments. Morphological identification, based on features extracted from each bird, proved to be an effective strategy for improving tracking accuracy.
However, generalizing the algorithm to different poultry contexts is an evident necessity. The combination of image processing techniques, video analysis, and tracking algorithms focused on morphological identification represents a promising approach to understanding bird behavior and enhancing poultry management and animal welfare practices.

5. Practical Applications and Perspectives

The implementation of the algorithm can lead to significant improvements in poultry management practices. By accurately tracking and identifying individual birds, farm managers can monitor health, behavior, and welfare more effectively, allowing for timely interventions and better overall flock management.
Furthermore, the ability to monitor individual birds and identify issues such as overcrowding, stress, and aggression can lead to improved animal welfare. Early detection of health problems and behavioral issues ensures that birds receive the necessary care promptly, enhancing their quality of life.
The algorithm can also generate extensive data on bird behavior and interactions, providing valuable insights for researchers and farmers. These data can be used to optimize feeding strategies, housing conditions, and overall management practices, leading to more sustainable and productive poultry farming.
Integrating the algorithm into automated monitoring systems can reduce the need for manual observation, saving time and labor costs. Automated systems can provide continuous, real-time monitoring, ensuring that any issues are quickly detected and addressed.
While the current study focuses on specific poultry environments, the algorithm’s adaptability allows for application across different poultry contexts. Future developments can tailor the algorithm to various types of poultry farming, enhancing its utility and effectiveness in diverse settings.
Looking forward, several promising directions for further research and development emerge, as follows:
  • Validation in Diverse Environments: Validating the algorithm in different poultry environments, including various housing systems and flock sizes, will ensure its robustness and generalizability.
  • Artificial Intelligence Integration: Integrating artificial intelligence and machine learning technologies with the algorithm can enhance its predictive capabilities and adaptability, enabling more sophisticated analyses of bird behavior and health.
  • Collaborative Research: Interdisciplinary collaboration among researchers, veterinarians, and industry professionals is essential to advance the field. Combining expertise from various domains can lead to innovative solutions and broader applications.
  • Technological Innovations: Continuous advances in video sensors and image processing techniques will further refine the algorithm’s performance. Incorporating these innovations can drive significant progress in automated poultry monitoring.
Continuous research efforts and interdisciplinary collaboration are required to advance in this area. Additional strategies, such as implementing collision algorithms and validating the algorithm in different poultry environments, can significantly contribute to these techniques’ effectiveness and practical applicability. Integrating artificial intelligence and machine learning technologies with video sensors and behavioral data can open new pathways for innovation in poultry monitoring.
This study represents an important initial step in the development of accurate and efficient methods for automated chicken monitoring. With ongoing dedication and cooperation among researchers, it is hoped that these efforts will contribute to significant advancements in sustainable poultry production and animal welfare. The evolution of these technologies promises to transform poultry management, providing a more efficient and humane future for the industry.

6. Conclusions

The YOLO model, combined with advanced image processing techniques, has proven effective in maintaining the identification of chickens that are partially occluded in footage. Morphological identification, based on features extracted from each chicken, has demonstrated its efficacy in improving tracking accuracy.
This study represents a crucial initial step in developing accurate and efficient methods for automated chicken monitoring. The results show that the algorithm can handle various challenges presented by the environment and bird behavior, utilizing morphological identification to enhance tracking precision. However, several limitations need to be addressed for further improvement, as follows:
  • Bird Overlap and Occlusion: Despite advances, challenges remain, such as bird overlap and temporary occlusion by objects. These situations can lead to misidentification and tracking errors, particularly in environments with high bird density or complex structures.
  • Environmental Variability: The algorithm’s performance can be affected by varying lighting conditions, camera angles, and backgrounds typical in different poultry farming setups. These factors can introduce noise and reduce the accuracy of detection and tracking.
  • Generalization Across Different Contexts: While the algorithm shows promise in specific environments, its generalizability to different poultry contexts and species requires further validation. Adapting the algorithm to diverse farming practices and housing conditions is essential for broader applicability.
  • Real-Time Processing: Although the YOLOv8n model is portable, the computational requirements for real-time processing in large-scale poultry facilities may pose challenges. Ensuring the algorithm can operate efficiently without significant delays is crucial for practical implementation.
  • Behavioral Complexity: The dynamic and often unpredictable nature of bird behavior adds another layer of complexity. The algorithm needs continuous refinement to accurately interpret and respond to various behavioral patterns and interactions among birds.
Despite these limitations, the algorithm’s ability to correctly reassociate labels when birds reappear is a significant advancement, providing a solid foundation for future developments.

Supplementary Materials

The video processed with the proposed algorithm can be downloaded at: https://www.mdpi.com/article/10.3390/agriengineering6030160/s1. Video S1: Example of applying the optimized algorithm to a section of footage.

Author Contributions

Conceptualization, formal and statistical analysis, software development, methodology, validation, original draft, and proofreading and editing were conducted by S.A.M., A.L.R.S. and D.F.P.; visualization and supervision were provided by S.A.M. and D.F.P.; investigation, data curation, and funding acquisition were managed by D.F.P. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by the National Council for Scientific and Technological Development—CNPq (Grant # 304085/2021-9).

Institutional Review Board Statement

This study was conducted according to the guidelines of the Brazilian National Council for the Control of Animal Experimentation (CONCEA) and approved by the São Paulo State University Animal Ethics Committee (protocol number 02/2020, CEUA-UNESP) of the Comfort Environmental Laboratory at the School of Sciences and Engineering, São Paulo State University (UNESP, Brazil).

Data Availability Statement

Data will be available upon request to the corresponding author.

Acknowledgments

We thank the technicians of the Precision Livestock Laboratory of the School of Sciences and Engineering of UNESP, Campus at Tupã for their support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Algorithm A1: Pseudo-code for chicken tracking algorithm (with head and tail removal).

References

  1. Neethirajan, S. Automated Tracking Systems for the Assessment of Farmed Poultry. Animals 2022, 12, 232. [Google Scholar] [CrossRef] [PubMed]
  2. Campbell, D.L.M.; Karcher, D.M.; Siegford, J.M. Location tracking of individual laying hens housed in aviaries with different litter substrates. Appl. Anim. Behav. Sci. 2016, 184, 74–79. [Google Scholar] [CrossRef]
  3. Li, N.; Ren, Z.; Li, D.; Zeng, L. Review: Automated techniques for monitoring the behaviour and welfare of broilers and laying hens: Towards the goal of precision livestock farming. Animal 2020, 14, 617–625. [Google Scholar] [CrossRef] [PubMed]
  4. Aydin, A.; Cangar, O.; Ozcan, S.E.; Bahr, C.; Berckmans, D. Application of a fully automatic analysis tool to assess the activity of broiler chickens with different gait scores. Comput. Electron. Agric. 2010, 73, 194–199. [Google Scholar] [CrossRef]
  5. Bloemen, H.; Aerts, J.; Berckmans, D.; Goedseels, V. Image analysis to measure activity index of animals. Equine Vet. J. 1997, 23 (Suppl. S23), 16–19. [Google Scholar] [CrossRef] [PubMed]
  6. Kristensen, H.H.; Aerts, J.M.; Leroy, T.; Wathes, C.M.; Berckmans, D. Modelling the dynamic activity of broiler chickens in response to step-wise changes in light intensity. Appl. Anim. Behav. Sci. 2006, 101, 125–143. [Google Scholar] [CrossRef]
  7. Dawkins, M.S.; Cain, R.; Roberts, S.J. Optical flow, flock behaviour and chicken welfare. Anim. Behav. 2012, 84, 219–223. [Google Scholar] [CrossRef]
  8. Wu, S.; Hadachi, A.; Lu, C.; Vivet, D. Transformer for multiple object tracking: Exploring locality to vision. Pattern Recognit. Lett. 2023, 170, 70–76. [Google Scholar] [CrossRef]
  9. Sun, L.; Zhang, J.; Gao, D.; Fan, B.; Fu, Z. Occlusion-aware visual object tracking based on multi-template updating Siamese network. Digit. Signal Process. 2024, 148, 104440. [Google Scholar] [CrossRef]
  10. Liu, Y.; Li, W.; Liu, X.; Li, Z.; Yue, J. Deep learning in multiple animal tracking: A survey. Comput. Electron. Agric. 2024, 224, 109161. [Google Scholar] [CrossRef]
  11. Zhu, Z.; Li, X.; Zhai, J.; Hu, H. PODB: A learning-based polarimetric object detection benchmark for road scenes in adverse weather conditions. Inf. Fusion 2024, 108, 102385. [Google Scholar] [CrossRef]
  12. Zhai, X.; Wei, H.; Wu, H.; Zhao, Q.; Huang, M. Multi-target tracking algorithm in aquaculture monitoring based on deep learning. Ocean Eng. 2023, 286, 116005. [Google Scholar] [CrossRef]
  13. Lu, X.; Ma, C.; Shen, J.; Yang, X.; Reid, I.; Yang, M.-H. Deep Object Tracking with Shrinkage Loss. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2386–2401. [Google Scholar] [CrossRef] [PubMed]
  14. Luo, H.; Jiang, W.; Gu, Y.; Liu, F.; Liao, X.; Lai, S.; Gu, J. A Strong Baseline and Batch Normalization Neck for Deep Person Re-Identification. IEEE Trans. Multimed. 2020, 22, 2597–2609. [Google Scholar] [CrossRef]
  15. Evangelidis, G.D.; Psarakis, E.Z. Parametric Image Alignment Using Enhanced Correlation Coefficient Maximization. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1858–1865. [Google Scholar] [CrossRef] [PubMed]
  16. Du, Y.; Zhao, Z.; Song, Y.; Zhao, Y.; Su, F.; Gong, T.; Meng, H. StrongSORT: Make DeepSORT Great Again. IEEE Trans. Multimed. 2023, 25, 8725–8737. [Google Scholar] [CrossRef]
  17. Wang, Z.; Zheng, L.; Liu, Y.; Li, Y.; Wang, S. Towards Real-Time Multi-Object Tracking. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; Volume 12356. [Google Scholar] [CrossRef]
  18. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111. [Google Scholar] [CrossRef]
  19. Badgujar, C.M.; Poulose, A.; Gan, H. Agricultural object detection with You Only Look Once (YOLO) Algorithm: A bibliometric and systematic literature review. Comput. Electron. Agric. 2024, 223, 109090. [Google Scholar] [CrossRef]
  20. Yan, P.; Wang, W.; Li, G.; Zhao, Y.; Wang, J.; Wen, Z. A lightweight coal gangue detection method based on multispectral imaging and enhanced YOLOv8n. Microchem. J. 2024, 199, 110142. [Google Scholar] [CrossRef]
  21. Fernandes, A.M.; De Sartori, D.D.; De Morais, F.J.D.; Salgado, D.D.; Pereira, D.F. Analysis of Cluster and Unrest Behaviors of Laying Hens Housed under Different Thermal Conditions and Light Wave Length. Animals 2021, 11, 2017. [Google Scholar] [CrossRef] [PubMed]
  22. Redmon, J.; Divvala, S.; Girshick, R. You only look once: Unified, real-time object detection computer vision & pattern recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  23. Agrawal, P.; Girshick, R.; Malik, J. Analyzing the Performance of Multilayer Neural Networks for Object Recognition. In Proceedings of the Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8695. [Google Scholar] [CrossRef]
  24. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  25. Siriani, A.L.R.; De Miranda, I.B.D.; Mehdizadeh, S.A.; Pereira, D.F. Chicken Tracking and Individual Bird Activity Monitoring Using the BoT-SORT Algorithm. AgriEngineering 2023, 5, 1677–1693. [Google Scholar] [CrossRef]
  26. Amraei, S.; Mehdizadeh, S.A.; Salari, S. Broiler weight estimation based on machine vision and artificial neural network. Br. Poult. Sci. 2017, 58, 200–205. [Google Scholar] [CrossRef] [PubMed]
  27. Liu, H.; Zhang, W.; Wang, F.; Sun, X.; Wang, J.; Wang, C.; Wang, X. Application of an improved watershed algorithm based on distance map reconstruction in bean image segmentation. Heliyon 2023, 9, e15097. [Google Scholar] [CrossRef] [PubMed]
  28. Kashiha, M.A.; Green, A.R.; Sales, T.G.; Bahr, C.; Berckmans, D.; Gates, R.S. Performance of an image analysis processing system for hen tracking in an environmental preference chamber. Poult. Sci. 2014, 93, 2439–2448. [Google Scholar] [CrossRef] [PubMed]
  29. Doornweerd, J.E.; Veerkamp, R.F.; De Klerk, B.; Van der Sluis, M.; Bouwman, A.C.; Ellen, E.D.; Kootstra, G. Tracking individual broilers on video in terms of time and distance. Poult. Sci. 2024, 103, 103185. [Google Scholar] [CrossRef] [PubMed]
  30. Tan, X.; Yin, C.; Li, X.; Cai, M.; Chen, W.; Liu, Z.; Wang, J.; Han, Y. SY-Track: A tracking tool for measuring chicken flock activity level. Comput. Electron. Agric. 2024, 217, 108603. [Google Scholar] [CrossRef]
  31. Neethirajan, S. ChickTrack—A quantitative tracking tool for measuring chicken activity. Measurement 2022, 191, 110819. [Google Scholar] [CrossRef]
Figure 1. Flowchart of processing steps for the modified deep-learning algorithm.
Figure 2. Steps of the body feature detection algorithm and head and tail removal of chickens, where (a) is the original image, (b) is the same image in grayscale, (c) is the binarized image after background removal, (d) is the binarized image after applying an area filter, (e) is the image processed by the watershed algorithm, (f) is the reconstructed image with the head and tail areas identified, and (g) is the resulting image without the head and tail.
Figure 3. Graph of the evolution of performance metrics during the training and validation of the object detection model over 50 epochs. The metrics include training bounding box loss (train/box_loss), training classification loss (train/cls_loss), training objectness loss (train/obj_loss), precision for Class B (metrics/precision(B)), recall for Class B (metrics/recall(B)), validation bounding box loss (val/box_loss), validation classification loss (val/cls_loss), validation objectness loss (val/obj_loss), mean average precision at IoU = 0.50 for Class B (metrics/mAP50(B)), and mean average precision at IoU = 0.50:0.95 for Class B (metrics/mAP50-95(B)).
Figure 4. An example of a missing chicken due to obstruction by an object, where in (a) we observe the chicken with ID 4, in (b) it is hidden behind the feeder, and in (c) it reappears with a new ID (18).
Figure 5. Detection of a missing chicken and re-tracking it after reappearing. The frames are sequential cuts at varying intervals: frame (a) shows the initial position of chicken ID 4, frame (b) depicts its diagonal movement, frame (c) illustrates its horizontal displacement toward a position hidden by the feeder in frame (d), and frame (e) shows the complete disappearance of chicken ID 4. Frame (f) shows the reappearance of chicken ID 4, and frame (g) captures its diagonal movement.
Figure 6. Checking the historical paths of the missing chicken. Frame (a) shows a chicken emerging from behind the feeder, which could be either chicken ID 4 or chicken ID 17, as both were missing in the previous frame. Frame (b) depicts the trajectory of chicken ID 4 from earlier frames. Frame (c) is the same as frame (a) but includes the vector analysis of the possible trajectory of the chicken based on the trajectory of ID 4. Frame (d) confirms the identity of chicken ID 4 using the newly modified deep-learning algorithm.
Figure 7. Coalescing chicken detection. The frames are sequential cuts at varying intervals. In frame (a), there are two chickens (ID 5 and ID 17) close to each other. Frames (b,c) are subsequent frames showing one chicken overlapping the other, making one of the chickens completely hidden or unidentifiable by the YOLO detection model. In frame (d), the chickens are separated, and the newly modified deep-learning algorithm correctly identifies each one.
Figure 8. Tracking four chickens (red numbers) using a modified deep-learning algorithm. Downward-pointing triangle: start; circle: end. (a) Initial configuration of the chickens, with numerical identification (2 to 4) for reference. (b) Tracking of chicken 1, represented by a red trajectory. (c) Tracking of chicken 2, represented by a green trajectory. (d) Tracking of chicken 3, represented by a blue trajectory. (e) Tracking of chicken 4, represented by a black trajectory. (f) Superimposed trajectories of the four chickens in an (X, Y) coordinate graph, with the start and end positions marked. The colors of the trajectories correspond to those used in the individual panels: red for chicken 1, green for chicken 2, blue for chicken 3, and black for chicken 4.
Table 1. Performance evaluation metrics of the optimized model for object detection scenarios.
Scenario | Precision | Accuracy | F1-Score | Specificity | Sensitivity | Error Rate | Kappa Coefficient
I        | 1         | 1        | 1        | N/A         | 1           | 0          | 1
II       | 0.993     | 0.993    | 0.996    | 0           | 0.996       | 0.007      | 0.756
III      | 0.983     | 0.983    | 0.992    | 0           | 0.992       | 0.017      | 0.719
