Article

Precision Monitoring of Dead Chickens and Floor Eggs with a Robotic Machine Vision Method

1 Department of Poultry Science, University of Georgia, Athens, GA 30602, USA
2 College of Engineering, University of Georgia, Athens, GA 30602, USA
* Authors to whom correspondence should be addressed.
AgriEngineering 2025, 7(2), 35; https://doi.org/10.3390/agriengineering7020035
Submission received: 16 December 2024 / Revised: 19 January 2025 / Accepted: 30 January 2025 / Published: 3 February 2025
(This article belongs to the Section Livestock Farming Technology)

Abstract

Modern poultry and egg production faces challenges such as dead chickens and floor eggs in cage-free housing, and precision poultry management strategies are needed to address them. In this study, convolutional neural network (CNN) models and an intelligent bionic quadruped robot were used to detect floor eggs and dead chickens in cage-free housing environments. A dataset of 1200 images, split into training, testing, and validation sets in a 3:1:1 ratio, was used to develop the detection models. Five different CNN models were developed based on YOLOv8 and the robot’s 360° panoramic depth perception camera. The final results indicated that YOLOv8m exhibited the highest performance, achieving a precision of 90.59%. The application of the optimal model facilitated the detection of floor eggs in dimly lit areas, such as below the feeders and in corner spaces, as well as the detection of dead chickens within the flock. This research underscores the utility of bionic robotics and convolutional neural networks for poultry management and precision livestock farming.

1. Introduction

Animal welfare policies are receiving increased global attention. In poultry farming, traditional battery cages severely restrict hens’ natural behaviors, leading to disuse osteoporosis [1]. Even with improved environments and managed activities, hens in cages are deprived of expressing most of their natural behaviors [2]. Consequently, many countries are actively formulating policies and trade measures to protect animal welfare in poultry production systems [3]. The egg industry is shifting to cage-free houses to improve bird welfare, providing sufficient space for hens to engage in natural behaviors. For example, legislation in California mandates that all eggs sold must come from hens in cage-free houses [4]. However, the transformation to cage-free systems presents new challenges, including managing floor eggs and the increased time required to inspect the entire house for deceased chickens [5,6]. Previous research has worked on improving floor egg management. One study proposed a cost-effective method for collecting eggs from laying trays and arranging them in a distribution tray, which helps reduce labor and cost in egg collection and packaging [7]. Another study developed a novel device using a helical spring that opens when it comes into contact with an egg, allowing it to collect eggs with a success rate of 96.8% [8]. Despite these advancements, efficient management of floor eggs remains a challenge in cage-free systems. Automatic floor egg collection and the removal of deceased chickens are primary concerns for egg producers who use cage-free housing. One potential solution is to utilize robots for these tasks [9].
Mobile robot technology has been extensively developed and applied in the agricultural industry [10]. Most robots utilize a two-wheeled differential drive method for directional control. They collect environmental information via multiple sensors, enabling target tracking and obstacle avoidance [11]. In the poultry sector, Vroegindeweij et al. (2014) proposed a path-planning method using the PoultryBot to collect floor eggs, reducing the need for manual egg picking [12]. Bao et al. (2021) introduced an AI-based sensor method for monitoring dead and sick chickens using foot rings and a ZigBee network, achieving 95.6% accuracy and reducing costs by 25% over four years compared to manual inspection [13]. In the ever-evolving landscape of mobile robotics, the incorporation of advanced object recognition technologies is pivotal in enhancing robotic capabilities and operational efficiency, particularly in intelligent bionic quadruped robots [14]. Reese et al. (2024) investigated the integration of object recognition in autonomous quadruped robotics using Red–Green–Blue (RGB) cameras and You Only Look Once version 8 (YOLOv8) on “Unitree Go 1” robots, optimizing sensor use for defense, surveillance, and industrial monitoring applications [15]. Angulo et al. (2024) explored the implementation of Chat Generative Pre-trained Transformer (ChatGPT) with the “Unitree Go 1” robot dog using voice prompts. They developed an interface that connects the ChatGPT Application Programming Interface (API) with the Unitree Go 1 Software Development Kit (SDK), facilitating user-friendly control and software development [16]. Our research focuses on the integration of advanced object recognition technologies within the “Unitree Go 1”, a quadrupedal robotic dog. This platform hosts a network of interconnected sensors and cameras, including Forward-Looking Infrared (FLIR), Light Detection and Ranging (LiDAR), and a depth camera, for both autonomous and manually controlled applications. This study explores the synergistic effects of combining these technologies to enhance the capabilities and operational efficiency of the “Unitree Go 1” [17].
At the heart of our proposed system for object detection with the “Unitree Go 1” are convolutional neural networks (CNNs) [18]. Besides the “Unitree Go 1”, some custom-designed robots utilize robotic arms, a conveyor belt, and a storage cache to remove deceased chickens. Additionally, a robotic bin-picking pipeline for chicken fillets employs 3D reconstruction of the environment using depth data from an RGB-D camera. Both systems are based on advanced computer vision techniques and CNNs [19,20]. CNNs utilize patterns in images to recognize objects, classes, and categories, making them suitable for various applications [21,22]. Among these, the YOLO series stands out for its effectiveness in precision livestock farming. These algorithms can automatically extract target features from images, eliminating the need for manual observation and enhancing the model’s generalizability [23]. Seo et al. (2019) [24] demonstrated improved accuracy and processing time for real-time pig surveillance by combining YOLO object detection with image processing techniques, utilizing infrared and depth information to effectively separate touching pigs. Similarly, Tong et al. (2023) introduced a real-time poultry disease detector by integrating scale-aware modules and slide weighting loss into YOLOv5; this enhancement significantly improved detection accuracy (85.0%) and health status recognition in chickens, facilitating automated monitoring [25]. Given the high performance of YOLO in object detection for precision livestock farming, it has the potential to detect floor eggs and dead chickens in various cage-free housing environments. For example, one study used an enhanced YOLOv8 algorithm, incorporating techniques such as partial convolution and channel prior convolutional attention, to detect leg diseases in broiler chickens’ X-ray images more accurately and efficiently, achieving a precision of 90.7% and ultimately improving poultry health and productivity [26]. By combining the YOLO detection model with the “robot dog”, the system could efficiently identify and collect floor eggs, as well as remove dead chickens [27]. This integration enhances the functionality and applicability of automated monitoring and management in livestock farming [28].
The objectives of this study were to (1) develop a detector based on YOLOv8 and the robotic method (i.e., the Unitree Go 1 robot) for monitoring floor eggs and deceased chickens in cage-free houses; (2) train the YOLOv8 model using images and videos of dead hens and floor eggs collected by the robot wide-angle and RGB cameras; and (3) test the performance of the newly developed models under various production conditions.

2. Materials and Methods

2.1. Bird Management

The robotic monitoring system was tested at the University of Georgia (UGA)’s Poultry Research Center. Each house, measuring 7.3 m in length, 6.1 m in width, and 3 m in height, housed 200 Lohmann White Leghorn Chickens. The houses were equipped with lights, perches, nest boxes, feeders, and drinkers, with floors covered in pine shavings. Indoor conditions, including light intensity and duration, ventilation rates, temperature, and relative humidity, were managed using a Chore-Tronics Model 8 controller (CHORE-Time Controller, Milford, IN, USA). The feed, a soy–corn mixture, was manufactured at the UGA feed mill every two months to ensure freshness and prevent mildew. Team members monitored the hens’ growth and environmental conditions daily, following the UGA Poultry Research Center Standard Operating Procedure. This experiment adhered to the animal care and use guidelines established by UGA’s Institutional Animal Care and Use Committee (IACUC).

2.2. Robotic System for Collecting Dead Chickens and Egg Samples

In this study, we utilized the “Unitree Go 1” (Unitree, Binjiang District, Hangzhou, China), an intelligent bionic quadruped robot positioned as a consumer-level robot companion. It is capable of running and features 360° panoramic depth perception. The platform offers a wide joint movement range and incorporates force-position hybrid control technology for precise manipulation tasks [29], a capability that could enable it to remove dead chickens and pick up eggs in the future. Figure 1 presents its three-dimensional view. The Go 1 is equipped with a built-in advanced AI processing unit, comprising a 16-core CPU and a GPU (384 cores, 1.5 TFLOPS), for deploying AI models such as chicken detection and chicken body weight prediction [30]. The YOLOv8 model was integrated into the robot’s AI unit, allowing it to analyze live video feeds captured by the robot’s cameras (RGB cameras for visual perception and depth cameras for 3D mapping and obstacle detection) to identify dead chickens and eggs in real time (Figure 2). We controlled the robot using its dedicated controller and the official application (https://www.unitree.com/app/go1/ (accessed on 12 October 2024)). The robotic dog was deployed twice daily, in the morning and evening, to inspect the entire poultry farm. The inspection route was pre-set by human operators using the robot’s controller, guiding the robot from the entrance door around the perimeter of the farm and across all sections, ensuring comprehensive coverage of the entire facility. During data collection, the light intensity was around 10–30 lux, and the chicken density was around 5–9 birds/m2. Figure 3 illustrates the experimental setup. The robotic dog supports several movement patterns, including turning, jumping, and side-stepping [31]. During our sample collection process, we primarily used climbing when encountering steps, as well as turning and walking, to search for dead chickens and eggs and to capture sample images.
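To make the detection loop concrete, the following is a minimal sketch, not the authors’ deployment code, of how a trained YOLOv8 model could be run against a live camera stream using the Ultralytics Python package and OpenCV. The weight file name, the stream address, and the class mapping are illustrative assumptions rather than details taken from the study.

```python
# Minimal sketch of a real-time detection loop. It assumes a trained YOLOv8 model
# saved as "best.pt" and a camera stream reachable through OpenCV (e.g., the robot's
# RGB camera exposed at a placeholder RTSP address). Unitree SDK calls for motion
# control are omitted entirely.
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")                                    # hypothetical trained detector
stream = cv2.VideoCapture("rtsp://192.168.12.1/camera")    # placeholder camera address

while True:
    ok, frame = stream.read()
    if not ok:
        break
    # Run detection on the current frame (two classes assumed: dead chicken, egg)
    results = model.predict(frame, conf=0.5, verbose=False)
    annotated = results[0].plot()                          # draw bounding boxes for review
    cv2.imshow("Go 1 inspection", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stream.release()
cv2.destroyAllWindows()
```

In a real deployment, the annotated frames or detection coordinates would be logged or forwarded to the farm manager rather than displayed in a window.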

2.3. Data Processing and Analysis

Bird and egg images were extracted from the robot dog and annotated using V7 Darwin, an online annotation tool provided by V7labs (V7, 8 Meard St, London, United Kingdom). This tool supports various formats, including JPG, PNG, TIF, MP4, MOV, SVS, DICOM, and NIfTI, enabling the consolidation of training data in one place [32]. In this study, we created two classes: dead chickens and good eggs. The dead chicken was a frozen bird that had died from pecking damage by other chickens; the team stored it in a freezer for this study. For each image, we first checked the quality to ensure that it captured our target objects. Using the bounding box tool, we then created boxes around the target objects. After a final review, we marked the images as completed (Figure 4).
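As an illustration of the downstream processing, the sketch below converts a pixel-space bounding-box annotation into the normalized text-label format expected by YOLO-style trainers. The JSON keys ("annotations", "bounding_box", "name") and the class names are assumptions about the export schema, not the exact structure produced by V7 Darwin, and would need adjusting to the actual export file.

```python
# Illustrative converter from pixel-space bounding boxes (as exported by annotation
# tools such as V7 Darwin) to normalized YOLO labels. The JSON structure assumed here
# is a simplification and may differ from the real export format.
import json
from pathlib import Path

CLASS_IDS = {"dead_chicken": 0, "egg": 1}      # the two classes annotated in this study

def to_yolo_label(annotation_file: str, img_w: int, img_h: int, out_dir: str) -> None:
    data = json.loads(Path(annotation_file).read_text())
    lines = []
    for ann in data.get("annotations", []):
        box = ann.get("bounding_box")
        if box is None or ann.get("name") not in CLASS_IDS:
            continue
        # YOLO expects: class id, box centre x/y, box width/height, all normalized to [0, 1]
        xc = (box["x"] + box["w"] / 2) / img_w
        yc = (box["y"] + box["h"] / 2) / img_h
        lines.append(f'{CLASS_IDS[ann["name"]]} {xc:.6f} {yc:.6f} '
                     f'{box["w"] / img_w:.6f} {box["h"] / img_h:.6f}')
    out_path = Path(out_dir) / (Path(annotation_file).stem + ".txt")
    out_path.write_text("\n".join(lines))
```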

2.4. Detection Methods

In our detection tasks, identifying small targets such as chickens and eggs presents challenges, including limited feature availability and a low proportion of annotated area for small targets. The challenge is exacerbated by the limited dataset of 300 original images. To address these challenges, we first employed data augmentation methods such as copy–paste enhancement, which involves randomly duplicating small targets multiple times within the image (pure cropping) or copying a region containing multiple small targets (cropping with background context), applying various transformations (scaling, flipping, rotating, etc.) during pasting; a simplified sketch of this procedure is given below. Additionally, we used over-sampling by duplicating the same image file multiple times and applying scaling and stitching techniques to combine multiple image files into one [33]. These data augmentation methods expanded the dataset size and increased its diversity, artificially boosting the proportion of small targets in the dataset so that the network can effectively learn their features. After data augmentation, we obtained 1200 images, which we split into training, testing, and validation sets in a 3:1:1 ratio to ensure sufficient data for model training while maintaining a balanced representation for testing and validating the model’s performance. In this study, we adopted You Only Look Once version 8 (YOLOv8) to detect dead chickens and eggs, evaluating the five most widely used object detection models in the YOLOv8 family (i.e., YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x) [34]. The backbone network, the foundation of the model, is responsible for extracting features from the input image; these features form the basis for subsequent network layers to perform object detection. In YOLOv8, the backbone network uses a structure similar to Cross Stage Partial Darknet (CSPDarknet) [35]. The head network is the decision-making part of the object detection model, responsible for producing the final detection results, while the neck network lies between the backbone and head networks, performing feature fusion and enhancement. Other modules include the ConvModule, which contains convolutional layers, batch normalization (BN), and activation functions (e.g., the Sigmoid Linear Unit (SiLU)) for feature extraction; the DarknetBottleneck, which increases network depth through residual connections while maintaining efficiency; and the CSP layer, a variant of the Cross Stage Partial structure that improves model training efficiency through partial connections [36]. The design of the YOLOv8 network is shown in Figure 5.
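The following is a minimal, hedged sketch of the copy–paste augmentation described above, assuming images stored as NumPy arrays and YOLO-style normalized labels. Overlap checks and flip/rotate transformations are omitted for brevity, random rescaling is kept as one example transformation, and the function name and parameters are illustrative rather than taken from the study’s code.

```python
# Sketch of copy-paste augmentation for small targets: duplicate annotated objects at
# random locations within the same image and append matching labels. Labels follow the
# YOLO convention (class, x-centre, y-centre, width, height, all normalized to [0, 1]).
import random
import numpy as np
import cv2

def copy_paste_small_targets(img: np.ndarray, labels: list[list[float]],
                             n_copies: int = 3) -> tuple[np.ndarray, list[list[float]]]:
    if not labels:
        return img, labels
    h, w = img.shape[:2]
    out_img = img.copy()
    out_labels = [list(lb) for lb in labels]
    for _ in range(n_copies):
        cls, xc, yc, bw, bh = random.choice(labels)          # pick one annotated target
        # Crop the target region in pixel coordinates
        x1, y1 = int((xc - bw / 2) * w), int((yc - bh / 2) * h)
        x2, y2 = int((xc + bw / 2) * w), int((yc + bh / 2) * h)
        crop = img[max(y1, 0):y2, max(x1, 0):x2]
        if crop.size == 0:
            continue
        # Randomly rescale the crop, then paste it at a random location
        scale = random.uniform(0.8, 1.5)
        crop = cv2.resize(crop, None, fx=scale, fy=scale)
        ch, cw = crop.shape[:2]
        if cw >= w or ch >= h:
            continue
        px, py = random.randint(0, w - cw), random.randint(0, h - ch)
        out_img[py:py + ch, px:px + cw] = crop
        # Add a matching normalized label for the pasted copy
        out_labels.append([cls, (px + cw / 2) / w, (py + ch / 2) / h, cw / w, ch / h])
    return out_img, out_labels
```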

2.5. Model Evaluation

To benchmark the performance of the detection models, we focused on precision, recall, mean average precision (mAP), frames per second (FPS), and loss function values (Equations (1)–(3)). Precision measures the accuracy of detected objects, indicating the proportion of detections that are correct, while recall assesses the model’s ability to identify all instances of objects in the images. The mAP, which evaluates the model’s bounding box predictions on the validation dataset, is determined by plotting precision and recall values at different confidence thresholds [37]. Additionally, FPS is used to evaluate the speed of the methods, providing a measure of their efficiency. Finally, the loss function indicates how well the algorithms train the neural network model on the dataset and achieve optimal results, tying together the overall performance evaluation.
$\mathrm{Precision} = \dfrac{TP}{TP + FP}$  (1)

$\mathrm{Recall} = \dfrac{TP}{TP + FN}$  (2)

$\mathrm{mAP} = \dfrac{1}{n}\sum_{k=1}^{n} AP_k$  (3)

$AP_k$ denotes the average precision for class $k$, where $n$ is the number of classes. In chicken detection, True Positive (TP) correctly identifies a chicken, False Positive (FP) incorrectly identifies a non-chicken as a chicken, and False Negative (FN) fails to identify a chicken. mAP@0.5 refers to the mean average precision calculated at an Intersection over Union (IoU) threshold of 0.5. A loss function measures how well a model accomplishes its task by comparing its predicted dead chickens and eggs to the actual output. $l_{cls}$ measures the discrepancy between the predicted class probabilities and our labels, while $l_{obj}$ measures the confidence score assigned to each predicted bounding box, indicating whether it contains an object or not, as determined using Equations (4)–(6).

$\mathrm{loss\ function} = l_{cls} + l_{obj}$  (4)

$l_{cls} = \lambda_{class}\displaystyle\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{i,j}^{obj}\sum_{c \in classes} P_i(c)\,\log\big(\hat{p}_i(c)\big)$  (5)

$l_{obj} = \lambda_{noobj}\displaystyle\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{i,j}^{noobj}\,\big(c_i - \hat{c}_i\big)^2 + \lambda_{obj}\displaystyle\sum_{i=0}^{S^2}\sum_{j=0}^{B} I_{i,j}^{obj}\,\big(c_i - \hat{c}_i\big)^2$  (6)

In the equations, $I_{i,j}^{obj}$ indicates whether a target is located at anchor box $(i, j)$, $P_i(c)$ represents the probability of target class $c$, and $\hat{p}_i(c)$ denotes the actual value of the class. The summation across these terms runs over the total number of classes $C$.
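As a brief numeric illustration of Equations (1)–(3), the snippet below computes precision, recall, and mAP from hypothetical counts; the numbers are made up purely for illustration and are not results from this study.

```python
# Worked example of the evaluation metrics in Equations (1)-(3), using invented counts.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def mean_average_precision(ap_per_class: list[float]) -> float:
    # mAP is the mean of the per-class average precisions
    return sum(ap_per_class) / len(ap_per_class)

# Example: 90 correct detections, 10 false alarms, 20 missed objects
print(precision(90, 10))                      # 0.90
print(recall(90, 20))                         # ~0.818
print(mean_average_precision([0.88, 0.83]))   # 0.855 for two hypothetical classes
```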

3. Results and Discussion

3.1. The Influence of Robotics on Chicken Activity

In this study, we recorded the entire process of chickens’ interactions with a robotic entity over the course of one hour. The observation focused on the chickens’ initial reactions and subsequent behavior changes, documenting phases of fear, curiosity, intimacy, and normalization [38]. The robot was positioned in our observation area, which included half a drinking line, two feeders, and one nesting box, representing a typical section of a cage-free house. We recorded the number of chickens around the robot in this area. Upon first encountering the robot, the chickens exhibited immediate flight responses, resulting in widespread panic within the flock, accompanied by dust and feathers flying. Initially, only two chickens remained at the edge of the observation area. Within 20 min, the chickens’ panic subsided, and curiosity began to dominate. Consequently, the number of chickens in the observation area rapidly increased from 2 to 37. This number continued to rise steadily, reaching 51 chickens by the 40 min mark. After 40 min, more interactive behaviors, such as jumping and pecking at the robot, were observed [39]. Gradually, the chickens began to treat the robot as a normal object in their environment, with approximately 57 chickens present in the observation area by the end of the hour. This observation illustrates the process by which chickens overcome their initial fear and the time required for a flock to acclimate to the presence of a robotic entity. These findings can inform researchers aiming to integrate robotics into poultry environments, highlighting the optimal time frame for chickens to become comfortable interacting with robots while maintaining their welfare (Figure 6).

3.2. Model Comparison

Five individual experiments (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x) were conducted to identify the optimal detector for floor egg and dead chicken detection. The selection of these YOLOv8 variants was based on their varying trade-offs between model size, speed, and accuracy. The suffixes “n”, “s”, “m”, “l”, and “x” denote different versions of the model with varying numbers of layers and computational requirements: “n” (nano) is the smallest model with the fewest layers and parameters, “x” is the largest model with the most layers and parameters, and the “s” (small), “m” (medium), and “l” (large) versions fall between these two extremes [40]. All experiments were trained for 100 epochs using Python 3.7 and the PyTorch deep learning library, on hardware equipped with an NVIDIA graphics card with 16 GB of memory. A summary of the model comparison is presented in Table 1.
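As a concrete illustration (not the authors’ training scripts), the sketch below shows how the five variants could be trained and compared with the Ultralytics Python package; the dataset configuration file "data.yaml" and the image size are assumptions introduced for the example.

```python
# Hedged sketch of training and validating the five YOLOv8 variants with Ultralytics.
# "data.yaml" is a placeholder dataset configuration; imgsz=640 is an assumed default.
from ultralytics import YOLO

variants = ["yolov8n.pt", "yolov8s.pt", "yolov8m.pt", "yolov8l.pt", "yolov8x.pt"]
results = {}

for weights in variants:
    model = YOLO(weights)                                   # load pretrained weights
    model.train(data="data.yaml", epochs=100, imgsz=640)    # train on the egg/dead-chicken dataset
    metrics = model.val()                                   # evaluate on the validation split
    results[weights] = {
        "precision": metrics.box.mp,                        # mean precision over classes
        "recall": metrics.box.mr,                           # mean recall over classes
        "mAP50": metrics.box.map50,                         # mAP at IoU threshold 0.5
    }

for name, m in results.items():
    print(name, m)
```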
In terms of precision, YOLOv8m achieved 90.59%, outperforming all other models. This superior performance can be attributed to the small size of floor eggs and chickens in our images, which cover less than 10% of the image area [41]. Models that retain finer spatial detail from the input image generally perform better with small objects, which is crucial for accurate detection and classification; consequently, YOLOv8m was more effective for detecting floor eggs and deceased chickens [42]. In terms of recall, the values ranged from 78.52% to 80.72%, a spread of roughly 2% among the five models. This minimal variation in recall indicates that all models are similarly effective at identifying the presence of floor eggs and deceased chickens [43]. For FPS, YOLOv8s demonstrated the highest speed (74 FPS), reflecting its relatively small number of layers and parameters [44]. In addition, mAP@0.5 shows the precision–recall trade-off at an IoU threshold of 0.5. YOLOv8l, YOLOv8m, and YOLOv8x had the top three mAP@0.5 scores, at 86.29%, 85.40%, and 85.31%, respectively. Although YOLOv8n and YOLOv8s had slightly lower mAP@0.5 values, all models performed reasonably well in detecting the bounding boxes for floor eggs and dead chickens. As for Class_loss and Box_loss, all models had Class_loss values close to 0.90 and Box_loss values close to 2.00, indicating that their ability to classify floor eggs and deceased chickens, as well as the errors in the predicted bounding box locations, were similar [45]. These findings demonstrate that YOLOv8m achieved the best overall performance in detecting floor eggs and deceased chickens (see Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11).

3.3. YOLOv8m Algorithm Detection

In this study, the YOLOv8m detector demonstrated the best performance in detecting floor eggs and dead chickens compared to other models. Consequently, a further investigation was conducted using the YOLOv8m model in conjunction with the robot in cage-free houses. In cage-free houses, floor eggs are primarily found in dark areas such as below the feeder area and in corner spaces. YOLOv8m, when paired with the robot, performs well in detecting floor eggs in these dimly lit areas. This is because these areas are not completely dark but have lower-than-normal light intensity, which does not hinder the model’s detection capability [6]. Additionally, the robustness of the YOLOv8m model allows it to detect floor eggs effectively as long as the images capture the eggs. However, there are some misdetections when the robot is too far from the eggs, such as eggs located under perches where the robot cannot reach. To mitigate this, it is recommended to either remove the perches or design higher perches on the first layer to accommodate the robot’s detection capabilities [46]. Regarding the detection of dead chickens, there is no particular location where dead birds are commonly found. However, dead chickens often become mixed with the floor litter after death, which does not affect the model’s detection performance [5]. When chickens die, their bodies and feet become stiff, and their heads often lie in the litter, creating unique features that make it easier to detect dead chickens. Consequently, the model can detect dead chickens using the robot. Nonetheless, occlusions can sometimes occur, such as when a dead chicken carcass is covered by other chickens [47,48]. Therefore, the robot should inspect the entire house at least once daily to prevent carcass decomposition and address animal biosecurity concerns. Figure 12 and Figure 13 illustrate the detection results using the robot in cage-free houses.

3.4. Comparison with Related Research

To compare our research with previous work, we selected recent studies on using robotics to detect eggs and deceased chickens. For egg collection, there are two primary methods: CNN-based detection and traditional sensing systems. One study featured a robot equipped with a YOLOv3-based deep learning egg detector, a robotic arm, a two-finger gripper, and a hand-mounted camera; the YOLOv3 model detected eggs on a simulated litter floor in real time, providing coordinates and dimensions for accurate gripper manipulation [49]. Another study introduced the PoultryBot, which utilizes various sensing systems, including a laser scanner with a 20 m range and 270-degree field of view, a digital camera for area visualization, and wheel encoders to measure rotation and movement. The localization technique involves a particle filter that estimates the robot’s pose using prediction, update, and resampling phases [50]. Both systems demonstrated more than 90% accuracy in floor egg detection. However, when it came to picking up eggs, the sensing-based system achieved only 43% success, while the CNN-based system maintained a high accuracy of 93% in egg collection [51]. In our study, we also employed a CNN to detect eggs, aiming to advance egg collection development using the robot. Compared to previous studies in broiler facilities [6,21], this study was conducted in a cage-free layer facility, an emerging egg production system in the USA and EU countries [2]. Regarding the detection of deceased chickens, CNNs remain the mainstream method. Studies utilizing YOLOv3 and YOLOv4 for detecting dead broilers have achieved detection accuracy as high as 99%, though these studies were conducted in stacked-cage broiler houses and sometimes required combinations of multiple robotic systems for high precision [52,53]. In cage-free houses, our study can detect both deceased chickens and floor eggs simultaneously using an intelligent bionic quadruped robot. This comprehensive solution addresses the challenges of cage-free environments, such as floor eggs and the increased time required to inspect the entire house for deceased chickens. Therefore, employing a CNN with a quadruped robot like the “Unitree Go 1” has the potential to efficiently collect floor eggs and remove deceased chickens in a single system.

3.5. Future Studies

Despite the significant advancements in detecting floor eggs and deceased chickens, robots’ ability to pick up eggs and remove dead chickens remains an ongoing area of research. For egg collection, most studies integrate computer vision with mechanical arms. One common approach involves using a soft suction mechanism to pick up eggs, ensuring that the eggs are handled delicately to prevent breakage during collection. Soft rubber grippers are also employed to gently grasp and lift the eggs without causing damage. Once the eggs are picked up, they need to be stored in a tank within the robot for later retrieval [1,51]. Therefore, an additional mechanical arm and a storage tank could be incorporated into the robot dog system to facilitate the collection and storage of floor eggs. Removing dead chickens, on the other hand, presents a more complex challenge due to their greater weight and size compared to floor eggs [51,52]. A stronger mechanical arm or, alternatively, a separate robot equipped with a multi-target path routing scheme could be utilized [53]. This secondary robot would collect the dead chickens using location data provided by the robotic system. Future improvements could focus on enhancing the mobility and strength of robotic arms to handle larger or heavier chickens, as well as incorporating AI to better distinguish dead chickens from other objects in the environment [54]. Additionally, integrating more advanced sensors could allow the robot to avoid obstacles and navigate tighter spaces within the poultry house more efficiently.

4. Conclusions

In this study, a multi-class detector for floor eggs and dead chickens was developed based on YOLOv8 networks embedded in a robotic system for detecting eggs and identifying dead chickens in cage-free facilities. The results show that the precision of the individual detectors ranged from 85.39% to 90.59%, with the best model, YOLOv8m, achieving a precision of 90.59%. The detector can effectively recognize floor eggs on the litter or under feeders and detect dead chickens in corners or among healthy chickens. The detector can be further combined with mechanical arms, such as soft suction mechanisms or soft rubber grippers, to pick up floor eggs, and it can be paired with a secondary robot that removes dead chickens using location information provided by the robotic system. While the YOLOv8 model performed well in detecting floor eggs and dead chickens, there were some limitations: the robot struggled to detect eggs in low light or when they were blocked by obstacles, and factors such as flock density and occlusions could affect detection accuracy. The results provide an actionable approach to detecting floor eggs and dead chickens in cage-free houses using a single, non-intrusive system. This study demonstrates the potential of using intelligent bionic quadruped robots to address the issues of floor eggs and dead chickens in cage-free houses, and these advancements provide valuable information for using robotics to improve the management of cage-free flocks.

Author Contributions

Conceptualization, L.C. and G.L.; Methodology, X.Y., G.L. and L.C.; Software, X.Y.; Formal analysis, X.Y., J.Z. and J.L.; Investigation, X.Y., J.Z., B.P., J.L., R.B.B., G.L. and L.C.; Resources, G.L. and L.C.; Data curation, J.Z.; Writing–original draft, X.Y., J.Z., B.P., J.L., R.B.B., G.L. and L.C.; Project administration, L.C. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was sponsored by the USDA-NIFA AFRI (2023-68008-39853), Georgia Research Alliance (Venture Fund), Oracle America (Oracle for Research Grant, CPQ-2060433), and Seed Grant from UGA Institute for Integrative Precision Agriculture.

Data Availability Statement

The datasets generated, used and/or analyzed during the current study will be available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Correction Statement

This article has been republished with a minor correction to the readability of Figures 2 and 13. This change does not affect the scientific content of the article.

References

  1. Chang, C.-L.; Xie, B.-X.; Wang, C.-H. Visual Guidance and Egg Collection Scheme for a Smart Poultry Robot for Free-Range Farms. Sensors 2020, 20, 6624. [Google Scholar] [CrossRef] [PubMed]
  2. Hewson, C.J. What is animal welfare? Common definitions and their practical consequences. Can. Vet. J. 2003, 44, 496–499. [Google Scholar]
  3. Shields, S.; Duncan, I. A Comparison of the Welfare of Hens in Battery Cages and Alternative Systems; Impacts on Farm Animals Report; The Humane Society Institute for Science and Policy: Potomac, MD, USA, 2009. [Google Scholar]
  4. Mullally, C.; Lusk, J.L. The Impact of Farm Animal Housing Restrictions on Egg Prices, Consumer Welfare, and Production in California. Am. J. Agric. Econ. 2018, 100, 649–669. [Google Scholar] [CrossRef]
  5. Bist, R.B.; Subedi, S.; Yang, X.; Chai, L. Automatic Detection of Cage-Free Dead Hens with Deep Learning Methods. AgriEngineering 2023, 5, 1020–1038. [Google Scholar] [CrossRef]
  6. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals 2022, 12, 1983. [Google Scholar] [CrossRef]
  7. Sheela, A.; Prithivi, K.; Nivesh, N.S.; Pavithran, A.; Pradeep, C.; Babu, K.P. Automation in egg collecting system in poultry farms. In Proceedings of the 4th National Conference on Current and Emerging Process Technologies e-CONCEPT-2021, Erode, India, 20 February 2021; AIP Publishing: Melville, NY, USA, 2021; Volume 2387. [Google Scholar]
  8. Vroegindeweij, B.A.; Kortlever, J.W.; Wais, E.; Henten, E. Development and test of an egg collecting device for floor eggs in loose housing systems for laying hens. In Proceedings of the International Conference of Agricultural Engineering, Zurich, Switzerland, 6–10 July 2014. [Google Scholar]
  9. Ren, G.; Lin, T.; Ying, Y.; Chowdhary, G.; Ting, K.C. Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric. 2020, 169, 105216. [Google Scholar] [CrossRef]
  10. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419839596. [Google Scholar] [CrossRef]
  11. Gopalakrishnan, B.; Tirunellayi, S.; Todkar, R. Design and development of an autonomous mobile smart vehicle: A mechatronics application. Mechatronics 2004, 14, 491–514. [Google Scholar] [CrossRef]
  12. Vroegindeweij, B.A.; van Willigenburg, G.L.; Koerkamp, P.W.G.; van Henten, E.J. Path planning for the autonomous collection of eggs on floors. Biosyst. Eng. 2014, 121, 186–199. [Google Scholar] [CrossRef]
  13. Bao, Y.; Lu, H.; Zhao, Q.; Yang, Z.; Xu, W. Detection system of dead and sick chickens in large scale farms based on artificial intelligence. Math. Biosci. Eng. 2021, 18, 6117–6135. [Google Scholar] [CrossRef]
  14. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 2019, 33, 764–799. [Google Scholar] [CrossRef]
  15. Reese, R.; Kovarovics, A.; Charles, A.; Koduru, C.; Tanveer, M.H.; Voicu, R.C. Optimizing Data Capture Through Object Recognition for Efficient Sensor and Camera Management with a Quadruped Robot. In Proceedings of the Southeast Con 2024, Atlanta, GA, USA, 15–24 March 2024; IEEE: New York, NY, USA, 2024; pp. 1125–1130. [Google Scholar]
  16. Martinez Angulo, A.; Henry, S.; Tanveer, M.H.; Voicu, R.; Koduru, C. The Voice-To-Text Implementation with ChatGPT in Unitree Go1 Programming. In Proceedings of the 28th Symposium of Student Scholars, Kennesaw, GA, USA, 17–19 April 2024. [Google Scholar]
  17. Sharma, A.; Singh, P.K.; Khurana, P. Analytical review on object segmentation and recognition. In Proceedings of the 2016 6th International Conference—Cloud System and Big Data Engineering (Confluence), Noida, India, 14–15 January 2016; IEEE: New York, NY, USA, 2016; pp. 524–530. [Google Scholar]
  18. Jiang, H.; Zhang, C.; Qiao, Y.; Zhang, Z.; Zhang, W.; Song, C. CNN feature based graph convolutional network for weed and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450. [Google Scholar] [CrossRef]
  19. Liu, H.-W.; Chen, C.-H.; Tsai, Y.-C.; Hsieh, K.-W.; Lin, H.-T. Identifying Images of Dead Chickens with a Chicken Removal System Integrated with a Deep Learning Algorithm. Sensors 2021, 21, 3579. [Google Scholar] [CrossRef]
  20. Jonker, L.M. Robotic Bin-Picking Pipeline for Chicken Fillets with Deep Learning-Based Instance Segmentation Using Synthetic Data. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2023. Available online: https://essay.utwente.nl/94881/ (accessed on 6 July 2024).
  21. Yang, D.; Cui, D.; Ying, Y. Development and trends of chicken farming robots in chicken farming tasks: A review. Comput. Electron. Agric. 2024, 221, 108916. [Google Scholar] [CrossRef]
  22. Yang, X.; Bist, R.B.; Paneru, B.; Chai, L. Deep Learning Methods for Tracking the Locomotion of Individual Chickens. Animals 2024, 14, 911. [Google Scholar] [CrossRef]
  23. Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors 2021, 21, 1492. [Google Scholar] [CrossRef]
  24. Seo, J.; Sa, J.; Choi, Y.; Chung, Y.; Park, D.; Kim, H. A YOLO-based Separation of Touching-Pigs for Smart Pig Farm Applications. In Proceedings of the 2019 21st International Conference on Advanced Communication Technology (ICACT), PyeongChang, Republic of Korea, 17–20 February 2019; IEEE: New York, NY, USA, 2019; pp. 395–401. [Google Scholar]
  25. Tong, Q.; Zhang, E.; Wu, S.; Xu, K.; Sun, C. A real-time detector of chicken healthy status based on modified YOLO. Signal Image Video Process. 2023, 17, 4199–4207. [Google Scholar] [CrossRef]
  26. Zhang, X.; Zhu, R.; Zheng, W.; Chen, C. Detection of Leg Diseases in Broiler Chickens Based on Improved YOLOv8 X-Ray Images. IEEE Access 2024, 12, 47385–47401. [Google Scholar] [CrossRef]
  27. Yang, X.; Dai, H.; Wu, Z.; Bist, R.B.; Subedi, S.; Sun, J.; Lu, G.; Li, C.; Liu, T.; Chai, L. An innovative segment anything model for precision poultry monitoring. Comput. Electron. Agric. 2024, 222, 109045. [Google Scholar] [CrossRef]
  28. Gunaratnam, A.; Thayananthan, T.; Thangathurai, K.; Abhiram, B. Computer vision in livestock management and production. In Engineering Applications in Livestock Production; Tarafdar, A., Pandey, A., Gaur, G.K., Singh, M., Pandey, H.O., Eds.; Academic Press: Cambridge, MA, USA, 2024; pp. 93–128. ISBN 978-0-323-98385-3. [Google Scholar]
  29. Kim, M.; Shin, U.; Kim, J.-Y. Learning Quadrupedal Locomotion with Impaired Joints Using Random Joint Masking. arXiv 2024, arXiv:2403.00398. [Google Scholar] [CrossRef]
  30. Roh, S.G. Rapid Speed Change for Quadruped Robots via Deep Reinforcement Learning. In Proceedings of the 2023 IEEE International Conference on Development and Learning (ICDL), Macau, China, 9–11 November 2023; IEEE: New York, NY, USA, 2023; pp. 473–478. [Google Scholar]
  31. Long, J.; Wang, Z.; Li, Q.; Cao, L.; Gao, J.; Pang, J. Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response. In Proceedings of the Twelfth International Conference on Learning Representations, Vienna, Austria, 7–11 May 2024. [Google Scholar]
  32. Vidal, P.L.; de Moura, J.; Novo, J.; Ortega, M. Multi-stage transfer learning for lung segmentation using portable X-ray devices for patients with COVID-19. Expert Syst. Appl. 2021, 173, 114677. [Google Scholar] [CrossRef]
  33. Zou, K.; Chen, X.; Wang, Y.; Zhang, C.; Zhang, F. A modified U-Net with a specific data argumentation method for semantic segmentation of weed images in the field. Comput. Electron. Agric. 2021, 187, 106242. [Google Scholar] [CrossRef]
  34. Safaldin, M.; Zaghden, N.; Mejdoub, M. An Improved YOLOv8 to Detect Moving Objects. IEEE Access 2024, 12, 59782–59806. [Google Scholar] [CrossRef]
  35. Sohan, M.; Sai Ram, T.; Rami Reddy, C.V. A Review on YOLOv8 and Its Advancements. In Data Intelligence and Cognitive Informatics; Jacob, I.J., Piramuthu, S., Falkowski-Gilski, P., Eds.; Springer Nature: Singapore, 2024; pp. 529–545. [Google Scholar]
  36. Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
  37. Vroegindeweij, B.A.; Blaauw, S.K.; IJsselmuiden, J.M.M.; van Henten, E.J. Evaluation of the performance of PoultryBot, an autonomous mobile robotic platform for poultry houses. Biosyst. Eng. 2018, 174, 295–315. [Google Scholar] [CrossRef]
  38. Wu, T.; Dong, Y. YOLO-SE: Improved YOLOv8 for Remote Sensing Object Detection and Recognition. Appl. Sci. 2023, 13, 12977. [Google Scholar] [CrossRef]
  39. Rekavandi, A.M.; Xu, L.; Boussaid, F.; Seghouane, A.-K.; Hoefs, S.; Bennamoun, M. A Guide to Image and Video based Small Object Detection using Deep Learning: Case Study of Maritime Surveillance. arXiv 2022, arXiv:2207.12926. [Google Scholar] [CrossRef]
  40. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  41. Juba, B.; Le, H.S. Precision-Recall versus Accuracy and the Role of Large Data Sets. Proc. AAAI Conf. Artif. Intell. 2019, 33, 4039–4048. [Google Scholar] [CrossRef]
  42. Held, D.; Thrun, S.; Savarese, S. Learning to Track at 100 FPS with Deep Regression Networks. In Computer Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 749–765. [Google Scholar]
  43. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  44. Bist, R.B.; Subedi, S.; Chai, L.; Regmi, P.; Ritz, C.W.; Kim, W.K.; Yang, X. Effects of Perching on Poultry Welfare and Production: A Review. Poultry 2023, 2, 134–157. [Google Scholar] [CrossRef]
  45. Bist, R.B.; Yang, X.; Subedi, S.; Chai, L. Mislaying behavior detection in cage-free hens with deep learning technologies. Poult. Sci. 2023, 102, 102729. [Google Scholar] [CrossRef]
  46. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking pecking behaviors and damages of cage-free laying hens with machine vision technologies. Comput. Electron. Agric. 2023, 204, 107545. [Google Scholar] [CrossRef]
  47. Li, G.; Chesser, G.D., Jr.; Huang, Y.; Zhao, Y.; Purswell, J.L. Development and Optimization of a Deep-Learning-Based Egg-Collecting Robot. Trans. ASABE 2021, 64, 1659–1669. [Google Scholar] [CrossRef]
  48. Vroegindeweij, B.A.; IJsselmuiden, J.; van Henten, E.J. Probabilistic localisation in repetitive environments: Estimating a robot’s position in an aviary poultry house. Comput. Electron. Agric. 2016, 124, 303–317. [Google Scholar] [CrossRef]
  49. Hao, H.; Fang, P.; Duan, E.; Yang, Z.; Wang, L.; Wang, H. A Dead Broiler Inspection System for Large-Scale Breeding Farms Based on Deep Learning. Agriculture 2022, 12, 1176. [Google Scholar] [CrossRef]
  50. Lei, T.; Li, G.; Luo, C.; Zhang, L.; Liu, L.; Gates, R.S. An informative planning-based multi-layer robot navigation system as applied in a poultry barn. Intell. Robot. 2022, 2, 313–332. [Google Scholar] [CrossRef]
  51. Wang, C.-H.; Xie, B.-X.; Chang, C.-L. Design and Implementation of Livestock Robot for Egg Picking and Classification in the Farm. In Proceedings of the 2019 International Symposium on Electrical and Electronics Engineering (ISEE), Ho Chi Minh City, Vietnam, 10–12 October 2019; IEEE: New York, NY, USA, 2019; pp. 161–165. [Google Scholar]
  52. Zhang, D.; Zhou, F.; Yang, X.; Gu, Y. Unleashing the Power of Self-Supervised Image Denoising: A Comprehensive Review. arXiv 2023, arXiv:2308.00247. [Google Scholar] [CrossRef]
  53. Zhou, F.; Fu, Z.; Zhang, D. High Dynamic Range Imaging with Context-aware Transformer. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–8. [Google Scholar]
  54. He, P.; Chen, Z.; Yu, H.; Hayat, K.; He, Y.; Pan, J.; Lin, H. Research Progress in the Early Warning of Chicken Diseases by Monitoring Clinical Symptoms. Appl. Sci. 2022, 12, 5601. [Google Scholar] [CrossRef]
Figure 1. Three-dimensional views of the “Unitree Go 1” robotic dog (front, side, and top).
Figure 2. Overview of the automated detection system for poultry management using YOLOv8 and Unitree Go 1 Robotic.
Figure 3. The robot dog in walking mode collecting chicken and egg images in the research poultry house.
Figure 4. Examples of image labeling by V7 Darwin.
Figure 5. YOLOv8 network structure diagram.
Figure 6. Time series of chickens’ interactions with the robot. The x-axis represents time (in minutes), and the y-axis represents the observed chicken count.
Figure 7. Precision comparison results of different detectors for dead chickens and floor eggs.
Figure 8. Recall comparison results of different detectors for dead chickens and floor eggs.
Figure 9. mAP@0.5 comparison results of different detectors for dead chickens and floor eggs.
Figure 10. Class_loss comparison results of different detectors for dead chickens and floor eggs.
Figure 11. Box_loss comparison results of different detectors for dead chickens and floor eggs.
Figure 12. Floor eggs identified by the model: original image (a) vs. identified floor eggs (b).
Figure 13. Floor eggs and dead chickens identified by the model: original image (a) vs. identified floor eggs and dead chickens (b).
Table 1. The summary of model validation comparison for dead chicken and egg detection.
Model    | Precision (%) | Recall (%) | FPS | mAP@0.5 (%) | Class_loss | Box_loss
YOLOv8s  | 85.39         | 79.32      | 74  | 85.08       | 0.94       | 2.01
YOLOv8n  | 85.49         | 79.89      | 69  | 85.17       | 0.90       | 1.98
YOLOv8m  | 90.59         | 79.34      | 63  | 85.40       | 0.92       | 2.02
YOLOv8l  | 88.10         | 80.72      | 48  | 86.29       | 0.88       | 2.05
YOLOv8x  | 87.97         | 78.52      | 41  | 85.31       | 0.89       | 2.01
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
