Article

A Machine Vision System for Monitoring Wild Birds on Poultry Farms to Prevent Avian Influenza

1 Department of Poultry Science, College of Agricultural & Environmental Sciences, University of Georgia, Athens, GA 30602, USA
2 Department of Computer Science, Franklin College of Arts and Sciences, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
AgriEngineering 2024, 6(4), 3704-3718; https://doi.org/10.3390/agriengineering6040211
Submission received: 5 September 2024 / Revised: 29 September 2024 / Accepted: 7 October 2024 / Published: 9 October 2024
(This article belongs to the Special Issue Precision Farming Technologies for Monitoring Livestock and Poultry)

Abstract

Outbreaks of avian influenza, especially high-pathogenicity avian influenza (HPAI), which causes respiratory disease and death, are disastrous for the poultry industry. The outbreak of HPAI in 2014–2015 caused the loss of 60 million chickens and turkeys. The most recent HPAI outbreak, ongoing since 2021, has led to the loss of over 50 million chickens so far in the US and Canada. Farm biosecurity management practices have been used to prevent the spread of the virus. However, existing practices for controlling the transmission of the virus through wild birds, especially waterfowl, are limited; ducks, for instance, were hosts of avian influenza viruses in many past outbreaks. The objectives of this study were to develop a machine vision framework for tracking wild birds and to test the performance of deep learning models in detecting wild birds on poultry farms. A deep learning framework based on computer vision was designed and applied to the monitoring of wild birds. A night vision camera was used to collect data on wild birds near poultry farms. Two wild bird species dominated the data: the gadwall and the brown thrasher. More than 6000 pictures were extracted through random video selection and used in the training and testing processes. The model reached an overall mean average precision of 0.95 (mAP@0.5) and is capable of automatic, real-time detection of wild birds. Missed detections mainly came from occlusion, because the wild birds tended to hide in grass. Future research could focus on applying the model to generate alerts about wild bird risks and on combining it with unmanned aerial vehicles to drive away detected wild birds.

1. Introduction

High-pathogenicity avian influenza (HPAI) is disastrous for poultry industries worldwide. The outbreak of HPAI in 2014–2015 caused the loss of 60 million chickens and turkeys [1]. The most recent HPAI outbreak, ongoing since 2021, has led to the loss of over 50 million chickens so far in the US and Canada [2]. Farm biosecurity management practices have been applied to prevent the spread of the virus [3]. Wild birds, especially some waterfowl and shorebirds, are the natural hosts of the avian influenza (AI) virus. They can be infected with HPAI and show no symptoms, yet their droppings spread the AI virus to domestic flocks during migration [4,5]. Avian influenza, also known as bird flu, is a viral infection that affects bird species, particularly wild aquatic birds and domestic poultry such as chickens. It is categorized as low-pathogenicity avian influenza (LPAI) or HPAI based on the severity of the disease caused [6]. LPAI generally results in no noticeable symptoms or only mild ones in chickens, whereas HPAI leads to severe disease with high mortality rates. Of significant concern are the H5 and H7 strains of LPAI, which have the potential to mutate into deadly HPAI. This mutation can occur when the virus circulates within a poultry population and undergoes changes to its hemagglutinin protein that increase its virulence [7,8]. HPAI has a higher mortality rate in chickens than LPAI, spreads rapidly, and can even be passed to humans [9].
Wild birds are attracted by feed, shelter, and water sources (dams) on poultry farms. However, monitoring and preventing the AI risk carried by wild birds is currently challenging because of inadequate AI outbreak data and the uncertain movements of nomadic and local wild birds, which are sensitive to weather changes [10]. Poultry farms can inadvertently become hotspots for the spread of AI as they attract wild birds seeking food. These wild birds, natural reservoirs for AI viruses, can carry and transmit the virus to domesticated birds through direct or indirect contact. For example, during the 2022–2023 outbreak in the United States, more than 8500 detections of HPAI were recorded in wild birds, contributing to over 1000 confirmed outbreaks in commercial and backyard flocks. These outbreaks resulted in the loss of approximately 77.9 million birds across 47 states [11]. The primary concern is the contamination of chicken feed and water by the feces of infected wild birds. Such contamination can lead to rapid transmission of the virus among the farm’s poultry population. Due to the close quarters and high density of commercial chicken farms, the disease can spread swiftly, with the potential for significant economic losses as well as public health concerns. Therefore, strict biosecurity measures, including the control of wild birds’ access to poultry feeding and watering areas, are crucial to prevent the introduction and spread of AI [12,13]. Cross-species transmission of AI viruses among humans, wild birds, and poultry has been observed in poultry houses and nearby areas (Figure 1).
The automated detection of wild birds provides an effective way to understand AI transmission pathways [14,15]. A detection system includes sensors (cameras, robots, or radar) to collect target data, training (feature extraction and selection), and classification. Radar-based detection systems have been used to detect birds for many years. However, they have two main limitations: (a) they cannot identify bird species; (b) some radiofrequencies and microwaves can cause electromagnetic interference with other electronic equipment in poultry houses [16,17]. Therefore, in this study, we investigated an emerging computer vision system to monitor and classify wild birds. Compared with traditional detection systems, computer vision improves overall system performance by introducing deep learning methods. You only look once (YOLO) is one of the most popular deep learning methods, known for its high precision and real-time detection [18]. Datar et al. [19] developed a YOLOv3 model to detect birds in the wild, which reached an average accuracy of 0.87. Human observation costs are generally limited to salaries, typically around USD 15–25 per hour, while telescopes can range from USD 100 to 1000 depending on quality. In contrast, setting up a computer vision system involves higher upfront costs: cameras and graphics processing units (GPUs) can cost between USD 2000 and 15,000, and additional software or model development may add further expenses. However, automatic systems save on labor costs and make automatic deterrents possible [20].
However, most researchers have applied older YOLO versions to monitor wild birds, and YOLO has since been updated. On poultry farms, moreover, the situation differs markedly from open habitats. Currently, the egg industry is shifting to cage-free houses to improve chicken welfare [3]. In cage-free systems, hens have more opportunities to behave naturally and come into contact with wild birds, which can enable AI transmission [21,22,23,24,25]. Therefore, in our study, we used YOLOv5 and YOLOX, the newest models in the YOLO series at the time of this study, and built a computer vision-based method to track wild birds near cage-free houses to prevent HPAI transmission.
The objectives of this study were as follows: (1) to develop detectors for tracking wild birds near cage-free houses using the updated YOLO methods; (2) to compare the model performance between YOLOX and YOLOv5; (3) to predict and manage the presence of wild birds to improve flock health based on the method developed in this study.

2. Materials and Methods

2.1. Experimental Design and Data Collection

A machine vision system using night vision cameras (Swann Communications USA Inc., Santa Fe Springs, CA, USA) and deep learning models was developed in this study. The system includes a 1080p full high-definition camera (PRO-1080MSFB) with a focal length of 2.8–3.6 mm, capturing 18 frames per second (FPS) at a resolution of 1440 × 1080 pixels (height × width). The camera was installed at a height of 1.5 m, connected to a digital video recorder (DVR-5580) and a computer (Dell-P2419H), and located at house M of the poultry research center at the University of Georgia (UGA), Athens, GA, USA. The system monitored wild birds automatically 24 h a day. Night vision allowed us to record wild bird activity in the evening and early morning, when low light would prevent traditional cameras from providing clear footage. The videos were saved to the recorder (Figure 2). Following the Department of Poultry Science protocol at UGA, video data were systematically transferred from the recording devices to hard storage, centralizing them for analysis. Team members inspected the camera’s position and lens condition daily to ensure consistent quality throughout the experiment. Data were collected over one month, from 14 August to 10 September 2022. Afterward, a random selection of the video data was converted into static images using the Free Video Converter software (1.6.1), producing the image set used for the analyses in the remainder of this study.
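For reproducibility, the frame-extraction step can be sketched in a few lines of Python using OpenCV. This is a minimal illustration of the video-to-image conversion described above, not the Free Video Converter workflow itself; the file name, output directory, and sampling interval are assumptions.

```python
import os

import cv2  # OpenCV, used here for video decoding


def extract_frames(video_path: str, out_dir: str, every_n: int = 18) -> int:
    """Save every n-th frame of a surveillance video as a JPEG.

    With the camera recording at 18 FPS, every_n=18 keeps roughly one
    frame per second of footage. Returns the number of frames saved.
    """
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the video file
            break
        if index % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved


# Hypothetical usage: extract_frames("swann_2022-09-06.mp4", "frames/")
```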

2.2. Dataset Augmentation Strategy

To bolster the efficacy and training performance of the machine learning models, four fundamental augmentation techniques were used. First, scaling adjusts the size of images to a specified dimension, ensuring consistency across the dataset. Cropping extracts and focuses on specific parts of an image, which can aid the model’s ability to recognize features from various angles and sections. Flipping mirrors images horizontally, providing the model with a wider variety of orientations to learn from. Lastly, rotation introduces a range of angular perspectives by turning the images by random degrees. Each method contributes to a more robust training set, simulating the diverse scenarios and appearances that the model may encounter in real-world applications. This diversity is crucial for developing a system capable of accurate and reliable detection of wild birds [26,27]. After image augmentation, a total of 6612 images was obtained, of which 5290 were used for training and 1322 for validation [28,29]. The details of dataset augmentation are shown in Figure 3. The augmented images were labeled with Labeling Window v2.1.1. The wild birds were labeled with a bounding box in each picture, with each box covering at least one-third of the bird’s body, and the wild bird class name was assigned after labeling.
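The four augmentations can be sketched as follows, assuming the Pillow imaging library; the crop offsets, rotation range, and target size are illustrative, since the exact parameter values used in this study are not reported. In practice, the bounding-box labels must be transformed together with the images for the geometric operations.

```python
import random

from PIL import Image


def augment(image: Image.Image) -> Image.Image:
    """Apply the four augmentations described above to a single image."""
    # Scaling: resize to a fixed training dimension for consistency.
    out = image.resize((640, 640))
    # Cropping: extract a random sub-region, then restore the size.
    left, top = random.randint(0, 64), random.randint(0, 64)
    out = out.crop((left, top, left + 576, top + 576)).resize((640, 640))
    # Flipping: mirror horizontally with 50% probability.
    if random.random() < 0.5:
        out = out.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    # Rotation: turn the image by a random angle.
    return out.rotate(random.uniform(-15.0, 15.0))
```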

2.3. Detection of Wild Birds

This study uses two advanced YOLO algorithms, YOLOv5 and YOLOX, to detect wild birds with enhanced precision. YOLOv5 employs an anchor-based bounding box methodology to pinpoint the location of objects within an image, leveraging predefined box dimensions as references to aid detection. This effective approach contrasts with the strategy adopted by YOLOX, which eschews anchor boxes entirely. YOLOX’s anchor-free framework achieves end-to-end detection with a key point head that identifies central points of objects for localization. This key point-based strategy mirrors human visual perception, where the eye naturally seeks out notable points to discern objects in the visual field. The anchor-free detector first establishes key points that act as focal indicators outlining the object’s area, then applies attention mechanisms to refine the detection, concentrating on the central region to optimize accuracy. This center-based approach enables YOLOX to match and even improve upon the accuracy of traditional YOLO series feature pyramids, offering a refined tool for the precise identification and tracking of avian species in their natural habitats [30,31,32,33].
The YOLOX architecture offers a suite of seven models designed to accommodate various computational and resource requirements. The five standard structures—YOLOX-s, YOLOX-m, YOLOX-l, YOLOX-x, and YOLOX-Darknet53—provide a range of options from small to extra-large. Each variant is distinguished by its input image size, the number of parameters defining the model’s complexity, and the number of floating-point operations (FLOPs) it requires (Table 1). Larger models, like YOLOX-l and YOLOX-x, offer higher accuracy due to more parameters and increased FLOPs, making them suitable for systems with ample computational power, while YOLOX-s and YOLOX-m are designed for environments where efficiency is prioritized over the highest precision. Complementing these are two light models, YOLOX-Nano and YOLOX-Tiny, engineered for scenarios demanding ultra-fast inference and minimal resource usage, such as mobile or embedded devices. These models reduce the computational load by decreasing the input image size and parameter count, resulting in fewer FLOPs; the trade-off is a reduction in detection accuracy compared to their standard counterparts, but they are particularly advantageous in real-time applications where speed is critical and computational resources are limited. Collectively, the YOLOX suite ranges from high-accuracy, resource-intensive models to lightweight, speed-optimized versions, addressing the varied demands of modern object detection tasks [34,35]. The YOLOX network structure includes three main parts: the backbone, neck, and head (Figure 4). The backbone, which uses the same Darknet53 as the YOLOv3 algorithm together with a feature pyramid network (FPN), extracts information from an image at different scales; it outputs three feature layers (256, 512, and 1024 channels) into the neck [36,37]. YOLOX replaces the coupled head of the original YOLO with a decoupled head and outputs three tensors for better average precision. The three tensor outputs carry the class of the bounding box (Cls), the dimensions of the bounding box (X: x coordinate of the box center, Y: y coordinate of the box center, W: box width, H: box height), and the intersection over union (IoU) for prediction [38]. In the prediction stage, the H × W predictions in each layer indicate whether an object is present at each center point, the class of the object, and its position.
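To make the head outputs concrete, the following simplified sketch decodes one FPN level of anchor-free predictions into boxes. The grid-plus-offset center computation and the exponential size transform follow the YOLOX design in spirit, but the array shapes and names here are illustrative rather than the library’s actual API.

```python
import numpy as np


def decode_level(cls_map, reg_map, obj_map, stride, conf_thresh=0.5):
    """Decode one feature level of anchor-free, center-based predictions.

    cls_map: (H, W, num_classes) class scores
    reg_map: (H, W, 4) center offsets (x, y) and log-sizes (w, h)
    obj_map: (H, W) objectness/IoU score; each grid cell predicts one box.
    """
    boxes = []
    H, W = obj_map.shape
    for i in range(H):
        for j in range(W):
            score = obj_map[i, j] * cls_map[i, j].max()
            if score < conf_thresh:
                continue
            dx, dy, w, h = reg_map[i, j]
            # Box center = (grid cell + predicted offset) scaled by stride.
            cx, cy = (j + dx) * stride, (i + dy) * stride
            # Box size = exponentiated prediction scaled by stride.
            bw, bh = np.exp(w) * stride, np.exp(h) * stride
            boxes.append((cx, cy, bw, bh, float(score), int(cls_map[i, j].argmax())))
    return boxes
```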
The YOLOX model was trained in Google Colab, a cloud-based environment built on the Jupyter notebook interface that allows complex computational tasks to run without a resource-intensive local setup. Training used NVIDIA’s CUDA toolkit, version 11.2, a parallel computing platform and programming model that significantly enhances processing efficiency, on an NVIDIA GPU with 16 GB of memory, which is well suited to the parallel processing demands of machine learning algorithms and allows large volumes of data to be handled quickly. To shorten training, YOLOX was pretrained on the COCO dataset, a large-scale object detection, segmentation, and captioning dataset; this pretraining gave the model a foundational understanding of various object types and features, reducing the time required to train it to detect specific objects such as wild birds. Training was configured with a batch size of 8, which dictates the number of samples processed before the model’s internal parameters are updated and strikes a balance between memory usage and the granularity of updates. With 300 epochs, the model passed over the dataset many times, iteratively learning and refining its detection capabilities until it could accurately identify and classify wild birds in diverse and challenging environments.
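In the public Megvii YOLOX repository, training is configured through a Python experiment (“Exp”) class and launched from the command line. The sketch below shows what such a configuration might look like for this study: the epoch count, batch size, single class, and COCO-pretrained initialization come from the text above, while the dataset path, file names, and model size are assumptions.

```python
# A hedged sketch of a YOLOX experiment file; attribute names follow the
# public Megvii YOLOX repository, but the dataset path is hypothetical.
from yolox.exp import Exp as BaseExp


class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 1                    # single "wild bird" class
        self.max_epoch = 300                    # epochs reported above
        self.input_size = (640, 640)            # standard YOLOX input size
        self.data_dir = "datasets/wild_birds"   # illustrative dataset path


# Launched from the repository root with COCO-pretrained weights as
# initialization and batch size 8 on a single 16 GB GPU, e.g.:
#   python tools/train.py -f exps/wild_birds_exp.py -d 1 -b 8 --fp16 -c yolox_s.pth
```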

2.4. Evaluation Standard for Monitoring Wild Birds

To evaluate the performance of the YOLOX and YOLOv5 wild bird detectors, several standard indicators were used, including true positives (TPs), false negatives (FNs), false positives (FPs), and true negatives (TNs) [39]. Precision, recall, and F1 score are critical indicators of a model’s performance. Precision represents the accuracy of the model in classifying an image as positive, i.e., the fraction of true positive identifications out of all positive identifications made. Recall measures the model’s ability to correctly identify all actual positives, quantifying the model’s sensitivity in detecting positives. The F1 score balances precision and recall in a single score through their harmonic mean, which is particularly useful when seeking a balance between the model’s precision and sensitivity. Beyond these, average precision (AP) and mean average precision (mAP) extend the evaluation to a broader scope: AP measures the accuracy of the detector for a single class across all recall levels by integrating the precision–recall curve, and mAP averages the AP across all classes, offering an overall measure of the detector’s accuracy across different bird species [25]. By comparing these metrics, we can determine which bird detector performs better.
$$\mathrm{Precision} = \frac{\mathrm{TPs}}{\mathrm{TPs} + \mathrm{FPs}}$$

$$\mathrm{Recall} = \frac{\mathrm{TPs}}{\mathrm{TPs} + \mathrm{FNs}}$$

$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

$$AP = \int_{0}^{1} P(r)\, dr$$

where AP (average precision) is the area under the precision–recall curve.

$$mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i$$

where N is the number of classes in the dataset.
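These indicators can be computed directly from raw detection counts. The short sketch below mirrors the equations above; approximating AP by trapezoidal integration of the precision–recall curve is an implementation choice, as the paper does not specify one.

```python
def detection_metrics(tps: int, fps: int, fns: int):
    """Precision, recall, and F1 score from raw counts, as defined above."""
    precision = tps / (tps + fps)
    recall = tps / (tps + fns)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


def average_precision(recalls, precisions):
    """Approximate AP as the area under the precision-recall curve.

    recalls must be sorted in ascending order; trapezoidal integration
    approximates the integral in the AP equation above.
    """
    ap = 0.0
    for k in range(1, len(recalls)):
        ap += (recalls[k] - recalls[k - 1]) * (precisions[k] + precisions[k - 1]) / 2
    return ap


# mAP is then the mean of AP over all classes:
#   map_value = sum(ap_per_class) / len(ap_per_class)
```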
To quantify activity patterns in our study, we used the YOLO model to detect species in the video footage. The detection results were exported as a CSV file and imported into Excel, where we calculated the average detections per hour for each species and created a line chart to visualize the activity patterns. This chart illustrated the movement trends of the species throughout the day, allowing us to quantify and compare their activity levels over time.
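The same aggregation can also be reproduced programmatically. The sketch below uses pandas in place of Excel; the CSV column names (“timestamp”, “species”) are assumptions about the export format rather than the actual schema.

```python
import pandas as pd

# One row per detection; column names are assumed for illustration.
df = pd.read_csv("detections.csv", parse_dates=["timestamp"])
df["hour"] = df["timestamp"].dt.hour
df["date"] = df["timestamp"].dt.date

# Count detections per species, day, and hour, then average across days.
per_day = df.groupby(["species", "date", "hour"]).size()
hourly_mean = per_day.groupby(["species", "hour"]).mean().unstack("species")

# Line chart of activity patterns, analogous to the Excel chart described above.
hourly_mean.plot(kind="line")
```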

3. Results and Discussion

3.1. The Results and Comparisons of Different Algorithms

To assess the advantage of YOLOX, YOLOv5 was used as a representative of the previous YOLO series for a comparative study. All models were trained and tested with the same dataset for consistency. The results of the comparison experiments are shown in Figure 5. The mAP@0.5 values (mAP at an IoU threshold of 0.5) of the two models were close before 250 epochs. However, YOLOX markedly exceeded YOLOv5 after 250 epochs, its highest mAP (95.4%) improving on the original YOLOv5 (86.4%) by 9 percentage points. To better interpret the comparative results, three losses were summarized to evaluate model performance: box_loss (bounding box regression loss), obj_loss (the confidence of object presence), and cls_loss (classification loss). The box_loss of YOLOX was consistently lower than that of YOLOv5, indicating that YOLOX localizes boxes more accurately, as box_loss measures the error in the predicted bounding box coordinates; YOLOX applies the decoupled head to capture more channel features [40]. In terms of obj_loss, the two algorithms shared the same trend and very close loss values before 250 epochs, but YOLOX improved slightly between 250 and 300 epochs, meaning its confidence that an object exists in a proposed region of interest was higher. For cls_loss, the loss value of both methods was constant; the dataset has only one class, so there are no classes to confuse. Overall, YOLOX improves both accuracy and detection confidence.
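For reference, mAP@0.5 counts a detection as a true positive when its intersection over union (IoU) with a ground-truth box reaches at least 0.5. A minimal IoU implementation for corner-format boxes is sketched below.

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```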
Two detection examples based on the two models are shown in Figure 6 and Figure 7. Both models can detect and locate gadwalls (a common reservoir of the H5N1 virus) and brown thrashers (a potential host of the HPAI virus) correctly [41,42]. However, the detection confidence of YOLOX is on average 10% higher than that of YOLOv5, indirectly reflecting YOLOX’s better accuracy. The main advantage of YOLOX over YOLOv5 lies in its architectural improvements: its anchor-free algorithm simplifies detection by eliminating predefined anchor boxes, leading to more efficient detection of objects of varying sizes and shapes, and it leverages advanced techniques such as a decoupled head and simple optimal transport assignment (SimOTA), which further enhance performance. Nevertheless, both models produced errors when the bodies of wild birds were hidden in the grass or behind other birds. This is a common error in object detection, especially in open-field areas, where many shelters and natural objects overlap with the target objects [43,44]. Object size also affects detection accuracy; wild birds, especially the brown thrasher, are small. To improve model performance on small birds, high-resolution images could be applied in the prediction process [45]; alternatively, additional camera angles or unmanned aerial vehicles (UAVs) could be deployed to address this challenge. Other errors in this study involved humans interfering with wild birds and driving them away from the observation area; this could not be prevented, because farm workers and researchers conducting experiments have free access to their experimental houses. Model performance is also affected by weather conditions: rainfall makes observing wild birds difficult because they return to their shelters, and even in sunny weather, shadows cast by sunlight can blur detected targets when they cover the birds [46]. To address shadows, a dual-cue network has been proposed to recover shadowed regions in local pictures [47,48], and region-enhanced networks with normalized attention are an alternative [49]. In addition, deep metric learning combined with knowledge graphs can provide an efficient way to detect similar objects [50,51,52]. Although the automated detection system still has some flaws, its cornerstone, model accuracy, achieved acceptable results.

3.2. Identification and Classification of Wild Birds

In the study of wildlife and agricultural coexistence, the behavior of wild birds is a critical factor, particularly in relation to the transmission of diseases such as highly pathogenic avian influenza [53]. This study focuses on two avian species, the gadwall and the brown thrasher, which were tracked and classified by the detection model (Figure 8).
The gadwall is a modest duck, often overshadowed by more colorful waterfowl, yet it plays a significant role in disease ecology. Gadwalls frequent wetlands, environments that are prime for the spread of HPAI, mingling in shared waters that can serve as conduits for the virus. The gadwall’s migration patterns and feeding habits—dabbling in surface waters where the virus may be present—position it as a potential AI vector bridging wild ecosystems and domestic settings [54].
The brown thrasher, by contrast, distinguished by its streaked feathers and melodious calls, operates closer to the ground. Thriving in dense brushwood, the thrasher’s foraging can disturb the soil and unearth AI virus potentially held within organic debris. Its terrestrial habits bring it into closer contact with poultry farms, implicating it in the vector chain [55].
From 6 to 8 September, snapshots of these birds’ activities were captured every hour; they highlight behavioral patterns typical of the entire data collection period. The gadwall appeared in greater numbers at most times of day, except for a noticeable dip in the early afternoon, perhaps due to resting or a shift to a feeding ground less visible to the cameras. The brown thrasher, by contrast, peaked in the early afternoon, suggesting a period of heightened activity that could correlate with increased AI transmission risk (Figure 9). In this study, weather conditions did not significantly impact the observation of wild birds; during periods of heavy rain or fog, the birds tended to remain in their habitats and were less likely to emerge.
The findings extend beyond the academic sphere, carrying weighty implications for farm management and disease prevention. Recognizing when wild birds are most active can inform farming practices to mitigate HPAI risks, like adjusting poultry’s outdoor access or reinforcing biosecurity measures during peak bird activity times. This data-driven approach underscores the importance of predictive behavioral modeling in managing disease vectors. By leveraging technology, we can obtain a deeper understanding of wild bird behavior, enhancing our ability to coexist sustainably with wildlife while protecting domestic bird populations from diseases like HPAI.

3.3. Strategies to Prevent Avian Influenza

In the bid to mitigate the risk of avian influenza transmission from wild birds to poultry, integrating deterrent strategies with advanced detection models like YOLOX provides a sophisticated approach to wildlife management on farms. The proposed deterrence cycle commences with the target phase, where the YOLOX model efficiently detects wild birds, pinpointing species known as AI hosts, such as the gadwall and brown thrasher. The model’s precision in identifying these specific species allows for a focused and informed response. Following detection, the track phase involves the model summarizing data to locate areas of high bird activity. By mapping these frequent visitation spots, the model provides valuable insights, allowing farmers to understand the movement patterns and habits of potential AI vectors. The final phase, prevent, involves the activation of deterrents. This study proposes the use of pyrotechnic sounds, human effigies, and unmanned aerial vehicles (UAVs) as potential deterrents. Pyrotechnic sounds create a sudden, loud noise that startles and scatters birds. Effigies serve as visual deterrents, exploiting birds’ innate caution around perceived human presence. UAVs add a dynamic and mobile element, capable of moving and responding to bird activity in real time [56,57,58]. These deterrents, especially when automated, offer several advantages over traditional human-operated methods. They are not constrained by adverse weather conditions and can function in a range of temperatures, ensuring continuous protection of the farm. Moreover, they can react instantly to the presence of unwanted wild birds, providing an immediate response to potential threats.
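The target–track–prevent cycle can be expressed as a simple control loop. The sketch below is schematic: the detector, camera, and deterrent interfaces and the activity threshold are all hypothetical, since this study proposes the cycle conceptually rather than as published code.

```python
import time

# Species this study identifies as AI hosts of concern.
AI_HOSTS = {"gadwall", "brown thrasher"}


def deterrence_cycle(detector, camera, deterrents, threshold=3):
    """Run the proposed target-track-prevent loop indefinitely."""
    window = []  # (timestamp, detection count) pairs from the last 60 s
    while True:
        frame = camera.read()
        # Target: detect wild bird species known to host AI viruses.
        hits = [d for d in detector(frame) if d.species in AI_HOSTS]
        now = time.time()
        window = [(t, n) for t, n in window if now - t < 60] + [(now, len(hits))]
        # Track: summarize recent detections to find sustained visitation.
        activity = sum(n for _, n in window)
        # Prevent: trigger deterrents once activity exceeds the threshold.
        if activity >= threshold:
            for deterrent in deterrents:  # e.g., sound, effigy actuator, UAV
                deterrent.activate()
```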
An automated detection and deterrence system serves as a non-lethal, persistent, and adaptable solution to the problem of keeping wild birds at bay. It recognizes the impracticality of completely sealing off farms from the natural world and instead focuses on creating an environment that is unwelcoming to species that pose a risk of disease transmission. The aim is to establish a perimeter of defense that discourages wild birds from entering and lingering in areas where they could meet domestic poultry. By doing so, the system not only protects the health of the farm’s birds but also preserves the ecological integrity of the local wildlife. Moreover, the continuous operation of such a system provides an ongoing dataset of bird activity, further refining the model’s accuracy and the effectiveness of deterrent strategies. This allows for a proactive, rather than reactive, approach to farm management and disease prevention. Implementing an automated system also frees up resources, allowing farm personnel to focus on other critical tasks, secure in the knowledge that the AI risk is being managed efficiently. This technology-driven approach exemplifies how innovation in agricultural practices can lead to more effective, humane, and environmentally sensitive farming operations. In conclusion, combining YOLOX’s detection capabilities with automated deterrents represents a modern, intelligent approach to wildlife management on farms. It is a testament to the potential of technology to enhance traditional agricultural practices, ensuring the safety and productivity of the industry while maintaining harmony with the surrounding ecosystem.

4. Conclusions

This study compared two deep learning algorithms, YOLOX and YOLOv5, for developing wild bird detection models. The overall mAP of the models ranges from 86.4% to 95.4% (at an IoU threshold of 0.5), indicating that deep learning is a useful approach for monitoring wild birds. YOLOX is superior to YOLOv5, improving the mAP by 9 percentage points on the same training dataset, and could be employed as the final model for wild bird detection. Common wild birds (i.e., the gadwall and brown thrasher) can be recognized by our model, and their behavior patterns were analyzed for application in a deterrent system to prevent AI transmission. This method plays a crucial role in developing a computer vision-based system that accurately locates wild birds that are potential reservoirs of AI viruses, which can reduce the risk of AI transmission from wild birds to poultry houses. However, this study has some limitations: detection accuracy may be affected by weather conditions and occlusion, and the model relies on high-quality annotated data, so larger and more diverse datasets are needed for better generalization. Further developments could integrate this method with unmanned aerial vehicles to automatically improve biosecurity on poultry farms, especially in open-field areas.

Author Contributions

Conceptualization, L.C.; Methodology, X.Y. and L.C.; Validation, X.Y.; Formal analysis, X.Y. and Z.W.; Investigation, X.Y., R.B.B., S.S., B.P. and L.C.; Resources, L.C.; Writing—original draft, X.Y., R.B.B., S.S., Z.W., T.L., B.P. and L.C.; Supervision, T.L.; Project administration, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the University of Georgia (UGA) Provost Office Rural Engagement Grant; USDA-NIFA AFRI (2023-68008-39853); Georgia Research Alliance (Venture Fund); Oracle America (Oracle for Research Grant, CPQ-2060433); and UGA IIPA Equipment grant.

Data Availability Statement

The datasets generated, used, and/or analyzed during the current study will be available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, Y.; Richardson, B.; Takle, E.; Chai, L.; Schmitt, D.; Xin, H. Airborne Transmission May Have Played a Role in the Spread of 2015 Highly Pathogenic Avian Influenza Outbreaks in the United States. Sci. Rep. 2019, 9, 11755. [Google Scholar] [CrossRef] [PubMed]
  2. CDC. Avian Influenza Current Situation Summary. Available online: https://www.cdc.gov/bird-flu/situation-summary/index.html (accessed on 11 March 2023).
  3. Chai, L.; Zhao, Y.; Xin, H.; Richardson, B. Heat Treatment for Disinfecting Egg Transport Tools. Appl. Eng. Agric. 2022, 38, 343–350. [Google Scholar] [CrossRef]
  4. Krauss, S.; Webster, R.G. Avian Influenza Virus Surveillance and Wild Birds: Past and Present. Avian Dis. 2010, 54, 394–398. [Google Scholar] [CrossRef] [PubMed]
  5. Vaidya, N.K.; Wang, F.-B.; Zou, X. Avian Influenza Dynamics in Wild Birds with Bird Mobility and Spatial Heterogeneous Environment. DCDS-B 2012, 17, 2829–2848. [Google Scholar] [CrossRef]
  6. Comin, A.; Klinkenberg, D.; Marangon, S.; Toffan, A.; Stegeman, A. Transmission Dynamics of Low Pathogenicity Avian Influenza Infections in Turkey Flocks. PLoS ONE 2011, 6, e26935. [Google Scholar] [CrossRef]
  7. Lupiani, B.; Reddy, S.M. The History of Avian Influenza. Comp. Immunol. Microbiol. Infect. Dis. 2009, 32, 311–323. [Google Scholar] [CrossRef]
  8. Martinez, L.; Cheng, W.; Wang, X.; Ling, F.; Mu, L.; Li, C.; Huo, X.; Ebell, M.H.; Huang, H.; Zhu, L.; et al. A Risk Classification Model to Predict Mortality Among Laboratory-Confirmed Avian Influenza A H7N9 Patients: A Population-Based Observational Cohort Study. J. Infect. Dis. 2019, 220, 1780–1789. [Google Scholar] [CrossRef]
  9. Bouma, A.; Claassen, I.; Natih, K.; Klinkenberg, D.; Donnelly, C.A.; Koch, G.; van Boven, M. Estimation of Transmission Parameters of H5N1 Avian Influenza Virus in Chickens. PLoS Pathog. 2009, 5, e1000281. [Google Scholar] [CrossRef]
  10. Poulson, R.L.; Brown, J.D. Wild Bird Surveillance for Avian Influenza Virus. In Animal Influenza Virus: Methods and Protocols; Methods in Molecular Biology; Spackman, E., Ed.; Springer: New York, NY, USA, 2020; pp. 93–112. ISBN 978-1-07-160346-8. [Google Scholar]
  11. Kandeil, A.; Patton, C.; Jones, J.C.; Jeevan, T.; Harrington, W.N.; Trifkovic, S.; Seiler, J.P.; Fabrizio, T.; Woodard, K.; Turner, J.C.; et al. Rapid Evolution of A(H5N1) Influenza Viruses after Intercontinental Spread to North America. Nat. Commun. 2023, 14, 3082. [Google Scholar] [CrossRef]
  12. Li, C.; Peng, Q.; Wan, X.; Sun, H.; Tang, J. C-Terminal Motifs in Promyelocytic Leukemia Protein Isoforms Critically Regulate PML Nuclear Body Formation. J. Cell Sci. 2017, 130, 3496–3506. [Google Scholar] [CrossRef]
  13. Li, C.; Fu, J.; Shao, S.; Luo, Z.-Q. Legionella Pneumophila Exploits the Endo-Lysosomal Network for Phagosome Biogenesis by Co-Opting SUMOylated Rab7. PLoS Pathog. 2023, 20, e1011783. [Google Scholar] [CrossRef] [PubMed]
  14. Balaji, V.S.; Mahi, A.R.; Anirudh Ganapathy, P.S.; Manju, M. Scarecrow Monitoring System: Employing Mobilenet Ssd for Enhanced Animal Supervision. arXiv 2024, arXiv:2407.01435. [Google Scholar]
  15. Maheswaran, S.; Ramya, M.; Priyadharshini, P.; Sivaranjani, P. A Real Time Image Processing Based System to Scaring the Birds from the Agricultural Field. Indian J. Sci. Technol. 2016, 9, 98999. [Google Scholar] [CrossRef]
  16. Ge, Y.; Yao, Q.; Wang, X.; Chai, H.; Deng, G.; Chen, H.; Hua, Y. Detection of Reassortant Avian Influenza A (H11N9) Virus in Wild Birds in China. Transbound. Emerg. Dis. 2019, 66, 1142–1157. [Google Scholar] [CrossRef] [PubMed]
  17. Styś-Fijoł, N.; Kozdruń, W.; Czekaj, H. Detection of Avian Reoviruses in Wild Birds in Poland. J. Vet. Res. 2017, 61, 239–245. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, Y.; Li, M.; Ma, X.; Wu, X.; Wang, Y. High-Precision Wheat Head Detection Model Based on One-Stage Network and GAN Model. Front. Plant Sci. 2022, 13, 787852. [Google Scholar] [CrossRef]
  19. Datar, P.; Jain, K.; Dhedhi, B. Detection of Birds in the Wild Using Deep Learning Methods. In Proceedings of the 2018 4th International Conference for Convergence in Technology (I2CT), Mangalore, India, 27–28 October 2018; pp. 1–4. [Google Scholar]
  20. Yang, X.; Bahadur Bist, R.; Paneru, B.; Liu, T.; Applegate, T.; Ritz, C.; Kim, W.; Regmi, P.; Chai, L. Computer Vision-Based Cybernetics Systems for Promoting Modern Poultry Farming: A Critical Review. Comput. Electron. Agric. 2024, 225, 109339. [Google Scholar] [CrossRef]
  21. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Pecking Behaviors and Damages of Cage-Free Laying Hens with Machine Vision Technologies. Comput. Electron. Agric. 2023, 204, 107545. [Google Scholar] [CrossRef]
  22. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Guo, Y. Variation of Litter Quality in Cage-Free Houses during Pullet Production. In Proceedings of the 2022 ASABE Annual International Meeting, Houston, TX, USA, 17–20 July 2022; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2022. [Google Scholar]
  23. Bist, R.B.; Subedi, S.; Chai, L.; Yang, X. Ammonia Emissions, Impacts, and Mitigation Strategies for Poultry Production: A Critical Review. J. Environ. Manag. 2023, 328, 116919. [Google Scholar] [CrossRef]
  24. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Floor Eggs with Machine Vision in Cage-Free Hen Houses. Poult. Sci. 2023, 102, 102637. [Google Scholar] [CrossRef]
  25. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals 2022, 12, 1983. [Google Scholar] [CrossRef] [PubMed]
  26. Hammami, M.; Friboulet, D.; Kechichian, R. Data Augmentation for Multi-Organ Detection in Medical Images. In Proceedings of the 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 9–12 November 2020; pp. 1–6. [Google Scholar]
  27. Lin, S.-Y.; Li, H.-Y. Integrated Circuit Board Object Detection and Image Augmentation Fusion Model Based on YOLO. Front. Neurorobot. 2021, 15, 762702. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, D.; Zhou, F. Self-Supervised Image Denoising for Real-World Images with Context-Aware Transformer. IEEE Access 2023, 11, 14340–14349. [Google Scholar] [CrossRef]
  29. Zhang, D.; Zhou, F.; Jiang, Y.; Fu, Z. MM-BSN: Self-Supervised Image Denoising for Real-World with Multi-Mask Based on Blind-Spot Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  30. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  31. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  32. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  33. Kim, K.; Lee, H.S. Probabilistic Anchor Assignment with IoU Prediction for Object Detection. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020, Proceedings, Part XXV 16; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
  34. Ren, J.; Wang, Z.; Zhang, Y.; Liao, L. YOLOv5-R: Lightweight Real-Time Detection Based on Improved YOLOv5. J. Electron. Imaging 2022, 31, 033033. [Google Scholar] [CrossRef]
  35. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021. [Google Scholar]
  36. Panigrahi, S.; Raju, U.S.N. InceptionDepth-wiseYOLOv2: Improved Implementation of YOLO Framework for Pedestrian Detection. Int. J. Multimed. Inf. Retr. 2022, 11, 409–430. [Google Scholar] [CrossRef]
  37. Xue, Y.; Ju, Z.; Li, Y.; Zhang, W. MAF-YOLO: Multi-Modal Attention Fusion Based YOLO for Pedestrian Detection. Infrared Phys. Technol. 2021, 118, 103906. [Google Scholar] [CrossRef]
  38. Li, Q.; Zhang, C. Continual Learning on Deployment Pipelines for Machine Learning Systems. arXiv 2022, arXiv:2212.02659. [Google Scholar]
  39. Huang, Y.; Yang, X.; Guo, J.; Cheng, J.; Qu, H.; Ma, J.; Li, L. A High-Precision Method for 100-Day-Old Classification of Chickens in Edge Computing Scenarios Based on Federated Computing. Animals 2022, 12, 3450. [Google Scholar] [CrossRef]
  40. Ren, X.; Zhang, W.; Wu, M.; Li, C.; Wang, X. Meta-YOLO: Meta-Learning for Few-Shot Traffic Sign Detection via Decoupling Dependencies. Appl. Sci. 2022, 12, 5543. [Google Scholar] [CrossRef]
  41. Gauthier-Clerc, M.; Tamisier, A.; Cezilly, F. Sleep-Vigilance Trade-off in Gadwall during the Winter Period. Condor 2000, 102, 307–313. [Google Scholar] [CrossRef]
  42. Rivers, J.W.; Sandercock, B.K. Predation by gray catbird on brown thrasher eggs. SWNA 2004, 49, 101–103. [Google Scholar] [CrossRef]
  43. Li, G.; Hui, X.; Chen, Z.; Chesser, G.D.; Zhao, Y. Development and Evaluation of a Method to Detect Broilers Continuously Walking around Feeder as an Indication of Restricted Feeding Behaviors. Comput. Electron. Agric. 2021, 181, 105982. [Google Scholar] [CrossRef]
  44. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef]
  45. Kim, M.; Jeong, J.; Kim, S. ECAP-YOLO: Efficient Channel Attention Pyramid YOLO for Small Object Detection in Aerial Image. Remote Sens. 2021, 13, 4851. [Google Scholar] [CrossRef]
  46. Li, B.; Zhang, J.; Zhang, C.; Wang, L.; Xu, J.; Liu, L. Rare Bird Recognition Method in Beijing Based on TC-YOLO Model. Biodivers. Sci. 2024, 32, 24056. [Google Scholar] [CrossRef]
  47. Huang, X.; Huang, Q.; Zhang, N. Dual Fusion Paired Environmental Background and Face Region for Face Anti-Spoofing. In Proceedings of the 2021 5th Asian Conference on Artificial Intelligence Technology (ACAIT), Haikou, China, 29–31 October 2021; pp. 142–149. [Google Scholar]
  48. Zhou, F.; Fu, Z.; Zhang, D. High Dynamic Range Imaging with Context-Aware Transformer. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–8. [Google Scholar]
  49. Ju, Y.; Shi, B.; Jian, M.; Qi, L.; Dong, J.; Lam, K.-M. NormAttention-PSN: A High-Frequency Region Enhanced Photometric Stereo Network with Normalized Attention. Int. J. Comput. Vis. 2022, 130, 3014–3034. [Google Scholar] [CrossRef]
  50. Wang, H.; Zhang, F.; Zhao, M.; Li, W.; Xie, X.; Guo, M. Multi-Task Feature Learning for Knowledge Graph Enhanced Recommendation. In Proceedings of the The World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 2000–2010. [Google Scholar]
  51. Dou, J.X.; Jia, M.; Zaslavsky, N.; Ebeid, M.; Bao, R.; Zhang, S.; Ni, K.; Liang, P.P.; Mao, H.; Mao, Z.H. Learning more effective cell representations efficiently. In Proceedings of the NeurIPS 2022 Workshop on Learning Meaningful Representations of Life, Virtual, 9 December 2022. [Google Scholar]
  52. Dou, J.X.; Mao, H.; Bao, R.; Liang, P.P.; Tan, X.; Zhang, S.; Jia, M.; Zhou, P.; Mao, Z.H. The Measurement of Knowledge in Knowledge Graphs. In Proceedings of the AAAI 2023 Workshop on Representation Learning for Responsible Human-Centric AI (R2HCAI); Association for the Advancement of Artificial Intelligence (AAAI): Washington, DC, USA, 2023. [Google Scholar]
  53. Munster, V.J.; Baas, C.; Lexmond, P.; Waldenström, J.; Wallensten, A.; Fransson, T.; Rimmelzwaan, G.F.; Beyer, W.E.P.; Schutten, M.; Olsen, B.; et al. Spatial, Temporal, and Species Variation in Prevalence of Influenza A Viruses in Wild Migratory Birds. PLoS Pathog. 2007, 3, e61. [Google Scholar] [CrossRef]
  54. Filaire, F.; Bertran, K.; Gaide, N.; Valle, R.; Secula, A.; Perlas, A.; Foret-Lucas, C.; Nofrarías, M.; Cantero, G.; Croville, G.; et al. Viral Shedding and Environmental Dispersion of Two Clade 2.3.4.4b H5 High Pathogenicity Avian Influenza Viruses in Experimentally Infected Mule Ducks: Implications for Environmental Sampling. Vet. Res. 2024, 55, 100. [Google Scholar] [CrossRef]
  55. Bahl, J.; Pham, T.T.; Hill, N.J.; Hussein, I.T.M.; Ma, E.J.; Easterday, B.C.; Halpin, R.A.; Stockwell, T.B.; Wentworth, D.E.; Kayali, G.; et al. Ecosystem Interactions Underlie the Spread of Avian Influenza A Viruses with Pandemic Potential. PLoS Pathog. 2016, 12, e1005620. [Google Scholar] [CrossRef]
  56. Levey, D.J.; Tewksbury, J.J.; Cipollini, M.L.; Carlo, T.A. A Field Test of the Directed Deterrence Hypothesis in Two Species of Wild Chili. Oecologia 2006, 150, 61–68. [Google Scholar] [CrossRef]
  57. Cook, A.; Rushton, S.; Allan, J.; Baxter, A. An Evaluation of Techniques to Control Problem Bird Species on Landfill Sites. Environ. Manag. 2008, 41, 834–843. [Google Scholar] [CrossRef]
  58. Wen, F.; Qin, M.; Gratz, P.; Reddy, N. OpenMem: Hardware/Software Cooperative Management for Mobile Memory System. In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021; pp. 109–114. [Google Scholar]
Figure 1. Generalized transmission of avian influenza viruses.
Figure 2. Swann monitor system for collecting wild birds’ images/videos. Camera setup outside a poultry house for monitoring wild birds (A). Sample footage captured from the Swann camera showing the monitoring area (B). The Swann camera system used for the experiment, including multiple cameras and the recording unit (C).
Figure 3. The process of dataset augmentation, showing various transformations applied to the original image to enhance the training dataset. The augmentations include scaling, cropping, flipping, and rotation.
Figure 4. The structure of YOLOX.
Figure 5. The results of comparative experiments. ((a): mAP@0.5 (mean average precision) vs. epoch, (b): box loss vs. epoch, (c): objectness loss vs. epoch, (d): classification loss vs. epoch).
Figure 6. Automatic detection of the gadwall by our machine vision model. The left-hand image shows a gadwall. The top-right image displays YOLOX’s detection with bounding boxes and confidence scores. The bottom-right image shows YOLOv5’s detection for comparison.
Figure 7. Automatic detection of the brown thrasher by our machine vision model. The left-hand image shows a reference brown thrasher. The top-right image displays YOLOX’s detection, with bounding boxes and confidence scores. The bottom-right image shows YOLOv5’s detection for comparison.
Figure 8. Detection of brown thrashers and gadwalls in evening conditions using the YOLO model.
Figure 9. Activity of wild birds per hour detected by the machine vision model.
Table 1. The differences between YOLOX models.

Model            Size   Parameters (M)   FLOPs (G)
YOLOX-s          640    9.0              26.8
YOLOX-m          640    25.3             73.8
YOLOX-l          640    54.2             155.6
YOLOX-x          640    99.1             281.9
YOLOX-Darknet53  640    63.7             185.3
YOLOX-Nano       416    0.91             1.08
YOLOX-Tiny       416    5.06             6.45
