Animal Production in the Artificial Intelligence Era: Advances and Applications

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Animal System and Management".

Deadline for manuscript submissions: 31 March 2025 | Viewed by 11126

Special Issue Editors


Dr. Saleh Shahinfar
Guest Editor
Precision Farming Consultant, Melbourne, VIC, Australia
Interests: artificial intelligence; machine learning; fuzzy logic; precision farming; decision support systems; phenomics

Dr. Sajjad Toghiani
Guest Editor
Agricultural Research Service, Animal Genomics and Improvement Laboratory, USDA, Beltsville, MD, USA
Interests: quantitative genetics; genomic selection; statistical genetics; population genomics; genomic inbreeding; genetic adaptation; genotype imputation

Prof. Dr. Cedric Gondro
Guest Editor
Department of Animal Science, College of Agriculture and Natural Resources, Michigan State University, East Lansing, MI, USA
Interests: quantitative genetics; animal breeding; computational genomics; evolutionary computation; artificial intelligence; machine learning

Prof. Dr. Romdhane Rekaya
Guest Editor
1. Department of Animal and Dairy Science, University of Georgia, Athens, GA, USA
2. Department of Statistics, University of Georgia, Athens, GA, USA
3. Institute of Bioinformatics, University of Georgia, Athens, GA, USA
4. Institute of Integrative and Precision Agriculture, University of Georgia, Athens, GA, USA
Interests: quantitative genetics; genomics; animal breeding; Bayesian inference; large scale data analysis; precision agriculture

Special Issue Information

Dear Colleagues,

The third decade of the 21st century has ushered in an unprecedented AI revolution, and we are witnessing the transformation of industries and societies through groundbreaking advances in artificial intelligence. Animal production is well placed to capitalize on this trend: the field has a long history of developing and adopting advanced data analytics techniques, and it benefits from a talented scientific community and forward-thinking practitioners. Applications of AI in animal production include, but are not limited to, measuring, monitoring, and managing production systems to enhance profitability and productivity; forecasting health events; real-time prediction; and decision support tools. Achieving this will require integrating sensor, hyperspectral, MIR/NMR, imaging, and omics data with phenomics data to predict performance, health, welfare, behavior, reproduction, and even breeding values and selection decisions; this integration is inevitable and brings new challenges and opportunities to the field.

Our aim with this Special Issue is to invite the precision animal farming community to present their latest cutting-edge research on artificial intelligence applied to animal production. We especially encourage the submission of original research, review, or commentary papers focusing on advances in AI for real-world animal production applications, health and welfare, reproduction, genetics, and breeding programs.

Dr. Saleh Shahinfar
Dr. Sajjad Toghiani
Prof. Dr. Cedric Gondro
Prof. Dr. Romdhane Rekaya
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • forecasting (prediction)
  • animal production
  • animal health and welfare
  • breeding and genetics
  • reproduction
  • high-throughput phenotyping and big data
  • sensors
  • computer vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

17 pages, 10392 KiB  
Article
Monitoring Multiple Behaviors in Beef Calves Raised in Cow–Calf Contact Systems Using a Machine Learning Approach
by Seong-Jin Kim, Xue-Cheng Jin, Rajaraman Bharanidharan and Na-Yeon Kim
Animals 2024, 14(22), 3278; https://doi.org/10.3390/ani14223278 - 14 Nov 2024
Viewed by 790
Abstract
The monitoring of pre-weaned calf behavior is crucial for ensuring health, welfare, and optimal growth. This study aimed to develop and validate a machine learning-based technique for the simultaneous monitoring of multiple behaviors in pre-weaned beef calves within a cow–calf contact (CCC) system using collar-mounted sensors integrating accelerometers and gyroscopes. Three complementary models were developed to classify feeding-related behaviors (natural suckling, feeding, rumination, and others), postural states (lying and standing), and coughing events. Sensor data, including tri-axial acceleration and tri-axial angular velocity, along with video recordings, were collected from 78 beef calves across two farms. The LightGBM algorithm was employed for behavior classification, and model performance was evaluated using a confusion matrix, the area under the receiver operating characteristic curve (AUC-ROC), and Pearson’s correlation coefficient (r). Model 1 achieved a high performance in recognizing natural suckling (accuracy: 99.10%; F1 score: 96.88%; AUC-ROC: 0.999; r: 0.997), rumination (accuracy: 97.36%; F1 score: 95.07%; AUC-ROC: 0.995; r: 0.990), and feeding (accuracy: 95.76%; F1 score: 91.89%; AUC-ROC: 0.990; r: 0.987). Model 2 exhibited an excellent classification of lying (accuracy: 97.98%; F1 score: 98.45%; AUC-ROC: 0.989; r: 0.982) and standing (accuracy: 97.98%; F1 score: 97.11%; AUC-ROC: 0.989; r: 0.983). Model 3 achieved a reasonable performance in recognizing coughing events (accuracy: 88.88%; F1 score: 78.61%; AUC-ROC: 0.942; r: 0.969). This study demonstrates the potential of machine learning and collar-mounted sensors for monitoring multiple behaviors in calves, providing a valuable tool for optimizing production management and early disease detection in the CCC system.
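For readers who want to prototype a similar workflow, the sketch below (synthetic data and hypothetical per-window features, not the authors' code) trains a LightGBM classifier on windowed accelerometer/gyroscope statistics and reports macro F1 and one-vs-rest AUC-ROC, mirroring the evaluation metrics reported in the paper.

```python
# Minimal sketch: multiclass behavior classification from collar-sensor windows.
# Data are synthetic; the 12 features (per-window mean and standard deviation of
# 3-axis acceleration and 3-axis angular velocity) are an assumption for illustration.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
n_windows = 2000
X = rng.normal(size=(n_windows, 12))                  # hypothetical windowed features
y = rng.integers(0, 4, size=n_windows)                # 0=suckling, 1=feeding, 2=rumination, 3=other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
pred = proba.argmax(axis=1)
print("macro F1:", f1_score(y_te, pred, average="macro"))
print("one-vs-rest AUC-ROC:", roc_auc_score(y_te, proba, multi_class="ovr"))
```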

15 pages, 2392 KiB  
Article
Estimating Ross 308 Broiler Chicken Weight Through Integration of Random Forest Model and Metaheuristic Algorithms
by Erdem Küçüktopçu, Bilal Cemek and Didem Yıldırım
Animals 2024, 14(21), 3082; https://doi.org/10.3390/ani14213082 - 25 Oct 2024
Viewed by 804
Abstract
For accurate estimation of broiler chicken weight (CW), a novel hybrid method was developed in this study where several metaheuristic algorithms, including Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Differential Evolution (DE), and Gravity Search Algorithm (GSA), were employed to adjust the Random Forest (RF) hyperparameters. The performance of the RF models was compared with that of classic linear regression (LR). With this aim, data (temperature, relative humidity, feed consumption, and CW) were collected from six poultry farms in Samsun, Türkiye, covering both the summer and winter seasons between 2014 and 2021. The results demonstrated that PSO and ACO significantly enhanced the performance of the standard RF model in all periods. Specifically, the RF-PSO model achieved a significant improvement by reducing the Mean Absolute Error (MAE) by 5.081% to 60.707%, highlighting its superior prediction accuracy and efficiency. The RF-ACO model also showed remarkable MAE reductions, ranging from 3.066% to 43.399%, depending on the input combinations used. In addition, the computational time required to train the RF models with PSO and ACO was considerably low, indicating their computational efficiency. These improvements emphasize the effectiveness of the PSO and ACO algorithms in achieving more accurate predictions of CW.
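The core idea of the hybrid approach, using a metaheuristic to search the Random Forest hyperparameter space against a cross-validated MAE objective, can be sketched as follows. This is a simplified illustration with synthetic data and a toy particle swarm, not the paper's implementation.

```python
# Minimal sketch: tune two RF hyperparameters with a tiny particle swarm,
# scoring each particle by cross-validated mean absolute error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                         # e.g., temperature, humidity, feed intake, age
y = X @ np.array([0.5, -0.3, 1.2, 2.0]) + rng.normal(scale=0.5, size=300)

bounds = np.array([[50, 500],                         # n_estimators
                   [2, 20]])                          # max_depth

def mae(params):
    model = RandomForestRegressor(n_estimators=int(params[0]), max_depth=int(params[1]), random_state=0)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error").mean()

n_particles, n_iter = 8, 10
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mae(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([mae(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("best (n_estimators, max_depth):", gbest.astype(int), "CV MAE:", pbest_val.min())
```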

12 pages, 871 KiB  
Article
Machine Learning and Wavelet Transform: A Hybrid Approach to Predicting Ammonia Levels in Poultry Farms
by Erdem Küçüktopçu, Bilal Cemek and Halis Simsek
Animals 2024, 14(20), 2951; https://doi.org/10.3390/ani14202951 - 14 Oct 2024
Viewed by 761
Abstract
Ammonia (NH3) is a major pollutant in poultry farms, negatively impacting bird health and welfare. High NH3 levels can cause poor weight gain, inefficient feed conversion, reduced viability, and financial losses in the poultry industry. Therefore, accurate estimation of NH3 concentration is crucial for environmental protection and human and animal health. Three widely used machine learning (ML) algorithms—extreme learning machine (ELM), k-nearest neighbor (KNN), and random forest (RF)—were initially used as base algorithms. The wavelet transform (WT) with ten levels of decomposition was then applied as a preprocessing method. Three statistical metrics, including the mean absolute error (MAE) and the correlation coefficient (R), were used to evaluate the predictive accuracies of algorithms. The results indicate that the RF algorithms perform robustly individually and in combination with the WT. The RF-WT algorithm performed best using the air temperature, relative humidity, and air velocity inputs with a MAE of 0.548 ppm and an R of 0.976 for the testing dataset. In summary, applying WT to the inputs significantly improved the predictive power of the ML algorithms, especially for inputs that initially had a low correlation with the NH3 values.
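A rough illustration of the WT-plus-ML hybrid is given below: each predictor series is decomposed with PyWavelets and the reconstructed sub-band signals are appended as extra Random Forest features. The wavelet, decomposition level, and data are assumptions for illustration only, not the authors' configuration.

```python
# Minimal sketch: wavelet sub-band features as additional Random Forest inputs.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n = 512
temp = 20 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(scale=0.5, size=n)
rh = 60 + 10 * np.cos(np.linspace(0, 15, n)) + rng.normal(scale=1.0, size=n)
nh3 = 0.3 * temp + 0.1 * rh + rng.normal(scale=0.4, size=n)      # synthetic NH3 target

def wavelet_bands(signal, wavelet="db4", level=3):
    """Reconstruct one sub-band signal per decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(kept, wavelet)[: len(signal)])
    return np.column_stack(bands)

X = np.column_stack([temp, rh, wavelet_bands(temp), wavelet_bands(rh)])
split = int(0.8 * n)                                             # simple chronological split
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:split], nh3[:split])
print("test MAE:", mean_absolute_error(nh3[split:], model.predict(X[split:])))
```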

24 pages, 12953 KiB  
Article
Visual Navigation of Caged Chicken Coop Inspection Robot Based on Road Features
by Hongfeng Deng, Tiemin Zhang, Kan Li and Jikang Yang
Animals 2024, 14(17), 2515; https://doi.org/10.3390/ani14172515 - 29 Aug 2024
Viewed by 718
Abstract
The speed and accuracy of navigation road extraction and driving stability affect the inspection accuracy of cage chicken coop inspection robots. In this paper, a new grayscale factor (4B-3R-2G) was proposed to achieve fast and accurate road extraction, and a navigation line fitting algorithm based on the road boundary features was proposed to improve the stability of the algorithm. The proposed grayscale factor achieved 92.918% segmentation accuracy, and the speed was six times faster than the deep learning model. The experimental results showed that at the speed of 0.348 m/s, the maximum deviation of the visual navigation was 4 cm, the average deviation was 1.561 cm, the maximum acceleration was 1.122 m/s², and the average acceleration was 0.292 m/s², with the detection number and accuracy increased by 21.125% and 1.228%, respectively. Compared with inertial navigation, visual navigation can significantly improve the navigation accuracy and stability of the inspection robot and lead to better inspection effects. The visual navigation system proposed in this paper has better driving stability, higher inspection efficiency, better inspection effect, and lower operating costs, which is of great significance to promote the automation process of large-scale cage chicken breeding and realize rapid and accurate monitoring.
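The grayscale-factor segmentation step can be approximated in a few lines of OpenCV, as sketched below. The scaling, Otsu thresholding, and row-wise midpoint line fit are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch: 4B-3R-2G grayscale factor, binarization, and a centerline fit.
import cv2
import numpy as np

img = cv2.imread("coop_aisle.jpg")                      # hypothetical input frame (BGR)
b, g, r = cv2.split(img.astype(np.float32))

gray = 4 * b - 3 * r - 2 * g                            # custom grayscale factor
gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

_, road = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Fit a navigation line through the row-wise midpoints of the segmented road region.
ys, mids = [], []
for y in range(road.shape[0]):
    xs = np.flatnonzero(road[y])
    if xs.size:
        ys.append(y)
        mids.append(0.5 * (xs[0] + xs[-1]))
if len(ys) > 1:
    slope, intercept = np.polyfit(ys, mids, 1)          # x = slope * y + intercept
    print("navigation line:", slope, intercept)
```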

16 pages, 6518 KiB  
Article
DCNN for Pig Vocalization and Non-Vocalization Classification: Evaluate Model Robustness with New Data
by Vandet Pann, Kyeong-seok Kwon, Byeonghyeon Kim, Dong-Hwa Jang and Jong-Bok Kim
Animals 2024, 14(14), 2029; https://doi.org/10.3390/ani14142029 - 9 Jul 2024
Viewed by 897
Abstract
Since pig vocalization is an important indicator for monitoring pig conditions, pig vocalization detection and recognition using deep learning play a crucial role in the management and welfare of modern pig livestock farming. However, collecting pig sound data for deep learning model training takes time and effort. Acknowledging the challenges of collecting pig sound data for model training, this study introduces a deep convolutional neural network (DCNN) architecture for pig vocalization and non-vocalization classification with a real pig farm dataset. Various audio feature extraction methods were evaluated individually to compare the performance differences, including Mel-frequency cepstral coefficients (MFCC), Mel-spectrogram, Chroma, and Tonnetz. This study proposes a novel feature extraction method called Mixed-MMCT to improve the classification accuracy by integrating MFCC, Mel-spectrogram, Chroma, and Tonnetz features. These feature extraction methods were applied to extract relevant features from the pig sound dataset for input into a deep learning network. For the experiment, three datasets were collected from three actual pig farms: Nias, Gimje, and Jeongeup. Each dataset consists of 4000 WAV files (2000 pig vocalization and 2000 pig non-vocalization) with a duration of three seconds. Various audio data augmentation techniques are utilized in the training set to improve the model performance and generalization, including pitch-shifting, time-shifting, time-stretching, and background-noising. In this study, the performance of the predictive deep learning model was assessed using the k-fold cross-validation (k = 5) technique on each dataset. By conducting rigorous experiments, Mixed-MMCT showed superior accuracy on Nias, Gimje, and Jeongeup, with rates of 99.50%, 99.56%, and 99.67%, respectively. Robustness experiments were performed to prove the effectiveness of the model by using two farm datasets as a training set and a farm as a testing set. The average performance of the Mixed-MMCT in terms of accuracy, precision, recall, and F1-score reached rates of 95.67%, 96.25%, 95.68%, and 95.96%, respectively. All results demonstrate that the proposed Mixed-MMCT feature extraction method outperforms other methods regarding pig vocalization and non-vocalization classification in real pig livestock farming.
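A minimal sketch of extracting and stacking the four feature types with librosa is shown below; the frame alignment and parameter choices are assumptions, and this is not the authors' Mixed-MMCT implementation.

```python
# Minimal sketch: extract MFCC, Mel-spectrogram, Chroma, and Tonnetz from a 3 s
# clip and stack them into one matrix suitable as CNN input.
import numpy as np
import librosa

def mixed_features(path, sr=22050, duration=3.0):
    y, sr = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
    # The feature matrices may have slightly different frame counts, so trim
    # everything to a common number of frames before stacking.
    n_frames = min(f.shape[1] for f in (mfcc, mel, chroma, tonnetz))
    return np.vstack([f[:, :n_frames] for f in (mfcc, mel, chroma, tonnetz)])
    # resulting shape: (13 + 64 + 12 + 6, n_frames)

# features = mixed_features("pig_clip.wav")   # hypothetical file path
```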

20 pages, 4709 KiB  
Article
Current Trends in Artificial Intelligence and Bovine Mastitis Research: A Bibliometric Review Approach
by Thatiane Mendes Mitsunaga, Breno Luis Nery Garcia, Ligia Beatriz Rizzanti Pereira, Yuri Campos Braga Costa, Roberto Fray da Silva, Alexandre Cláudio Botazzo Delbem and Marcos Veiga dos Santos
Animals 2024, 14(14), 2023; https://doi.org/10.3390/ani14142023 - 9 Jul 2024
Viewed by 1171
Abstract
Mastitis, an important disease in dairy cows, causes significant losses in herd profitability. Accurate diagnosis is crucial for adequate control. Studies using artificial intelligence (AI) models to classify, identify, predict, and diagnose mastitis show promise in improving mastitis control. This bibliometric review aimed to evaluate AI and bovine mastitis terms in the most relevant Scopus-indexed papers from 2011 to 2021. Sixty-two documents were analyzed, revealing key terms, prominent researchers, relevant publications, main themes, and keyword clusters. “Mastitis” and “machine learning” were the most cited terms, with an increasing trend from 2018 to 2021. Other terms, such as “sensors” and “mastitis detection”, also emerged. The United States was the most cited country and presented the largest collaboration network. Publications on mastitis and AI models notably increased from 2016 to 2021, indicating growing interest. However, few studies utilized AI for bovine mastitis detection, primarily employing artificial neural network models. This suggests a clear potential for further research in this area.

27 pages, 1570 KiB  
Article
Federated Multi-Label Learning (FMLL): Innovative Method for Classification Tasks in Animal Science
by Bita Ghasemkhani, Ozlem Varliklar, Yunus Dogan, Semih Utku, Kokten Ulas Birant and Derya Birant
Animals 2024, 14(14), 2021; https://doi.org/10.3390/ani14142021 - 9 Jul 2024
Cited by 2 | Viewed by 1245
Abstract
Federated learning is a collaborative machine learning paradigm where multiple parties jointly train a predictive model while keeping their data local. On the other hand, multi-label learning deals with classification tasks where instances may simultaneously belong to multiple classes. This study introduces the concept of Federated Multi-Label Learning (FMLL), combining these two important approaches. The proposed approach leverages federated learning principles to address multi-label classification tasks. Specifically, it adopts the Binary Relevance (BR) strategy to handle the multi-label nature of the data and employs the Reduced-Error Pruning Tree (REPTree) as the base classifier. The effectiveness of the FMLL method was demonstrated by experiments carried out on three diverse datasets within the context of animal science: Amphibians, Anuran-Calls-(MFCCs), and HackerEarth-Adopt-A-Buddy. The accuracy rates achieved across these animal datasets were 73.24%, 94.50%, and 86.12%, respectively. Compared to state-of-the-art methods, FMLL exhibited remarkable improvements (above 10%) in average accuracy, precision, recall, and F-score metrics.
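A heavily simplified sketch of the Binary Relevance component is shown below, using scikit-learn decision trees as a stand-in for REPTree (which scikit-learn does not provide) and per-client training with majority voting as only a rough analogue of federated aggregation; it is not the FMLL algorithm itself.

```python
# Minimal sketch: Binary Relevance multi-label classification with one tree per
# label, trained on per-client data shards and combined by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 10))
Y = (rng.random((600, 3)) < 0.3).astype(int)            # 3 binary labels per instance

shards = np.array_split(np.arange(500), 5)               # 5 clients; last 100 rows held out
clients = []
for idx in shards:
    clf = MultiOutputClassifier(DecisionTreeClassifier(max_depth=6, random_state=0))
    clients.append(clf.fit(X[idx], Y[idx]))

votes = np.stack([c.predict(X[500:]) for c in clients])   # (clients, samples, labels)
Y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("label-wise accuracy:", (Y_pred == Y[500:]).mean())
```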

23 pages, 19155 KiB  
Article
Open-Set Recognition of Individual Cows Based on Spatial Feature Transformation and Metric Learning
by Buyu Wang, Xia Li, Xiaoping An, Weijun Duan, Yuan Wang, Dian Wang and Jingwei Qi
Animals 2024, 14(8), 1175; https://doi.org/10.3390/ani14081175 - 14 Apr 2024
Cited by 1 | Viewed by 1210
Abstract
The automated recognition of individual cows is foundational for implementing intelligent farming. Traditional methods of individual cow recognition from an overhead perspective primarily rely on singular back features and perform poorly for cows with diverse orientation distributions and partial body visibility in the frame. This study proposes an open-set method for individual cow recognition based on spatial feature transformation and metric learning to address these issues. Initially, a spatial transformation deep feature extraction module, ResSTN, which incorporates preprocessing techniques, was designed to effectively address the low recognition rate caused by the diverse orientation distribution of individual cows. Subsequently, by constructing an open-set recognition framework that integrates three attention mechanisms, four loss functions, and four distance metric methods and exploring the impact of each component on recognition performance, this study achieves refined and optimized model configurations. Lastly, introducing moderate cropping and random occlusion strategies during the data-loading phase enhances the model’s ability to recognize partially visible individuals. The method proposed in this study achieves a recognition accuracy of 94.58% in open-set scenarios for individual cows in overhead images, with an average accuracy improvement of 2.98 percentage points for cows with diverse orientation distributions, and also demonstrates an improved recognition performance for partially visible and randomly occluded individual cows. This validates the effectiveness of the proposed method in open-set recognition, showing significant potential for application in precision cattle farming management.
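The open-set decision rule itself can be illustrated independently of the network: embed a query image, compare it to known-cow centroids under a distance metric, and reject it as unknown beyond a threshold. The sketch below uses random vectors in place of ResSTN embeddings and an assumed cosine-distance threshold.

```python
# Minimal sketch: nearest-centroid open-set recognition with a distance threshold.
import numpy as np

def open_set_predict(embedding, centroids, threshold=0.35):
    """centroids: dict mapping cow_id -> embedding vector for an enrolled cow."""
    e = embedding / np.linalg.norm(embedding)
    best_id, best_dist = None, np.inf
    for cow_id, c in centroids.items():
        dist = 1.0 - float(e @ (c / np.linalg.norm(c)))   # cosine distance
        if dist < best_dist:
            best_id, best_dist = cow_id, dist
    return (best_id if best_dist <= threshold else "unknown"), best_dist

# Toy usage with random 128-d vectors standing in for network embeddings.
rng = np.random.default_rng(4)
centroids = {f"cow_{i}": rng.normal(size=128) for i in range(5)}
query = centroids["cow_2"] + rng.normal(scale=0.1, size=128)
print(open_set_predict(query, centroids))
```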

21 pages, 6076 KiB  
Article
In Vivo Prediction of Breast Muscle Weight in Broiler Chickens Using X-ray Images Based on Deep Learning and Machine Learning
by Rui Zhu, Jiayao Li, Junyan Yang, Ruizhi Sun and Kun Yu
Animals 2024, 14(4), 628; https://doi.org/10.3390/ani14040628 - 16 Feb 2024
Viewed by 1445
Abstract
Accurately estimating the breast muscle weight of broilers is important for poultry production. However, existing related methods are plagued by cumbersome processes and limited automation. To address these issues, this study proposed an efficient method for predicting the breast muscle weight of broilers. First, because existing deep learning models struggle to strike a balance between accuracy and memory consumption, this study designed a multistage attention enhancement fusion segmentation network (MAEFNet) to automatically acquire pectoral muscle mask images from X-ray images. MAEFNet employs the pruned MobileNetV3 as the encoder to efficiently capture features and adopts a novel decoder to enhance and fuse the effective features at various stages. Next, the selected shape features were automatically extracted from the mask images. Finally, these features, including live weight, were input to the SVR (Support Vector Regression) model to predict breast muscle weight. MAEFNet achieved the highest intersection over union (96.35%) with the lowest parameter count (1.51 M) compared to the other segmentation models. The SVR model performed best (R² = 0.8810) compared to the other prediction models in the five-fold cross-validation. The research findings can be applied to broiler production and breeding, reducing measurement costs and enhancing breeding efficiency.
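The final regression stage can be sketched as follows, with synthetic data and hypothetical shape features standing in for the mask-derived measurements; it is not the study's pipeline.

```python
# Minimal sketch: SVR on mask-derived shape features plus live weight,
# evaluated with five-fold cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 200
area = rng.normal(120, 15, n)            # mask area, hypothetical units
perimeter = rng.normal(45, 5, n)         # mask perimeter, hypothetical units
live_weight = rng.normal(2.8, 0.3, n)    # kg
X = np.column_stack([area, perimeter, live_weight])
y = 0.08 * area + 0.02 * perimeter + 0.15 * live_weight + rng.normal(0, 0.05, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
print("5-fold R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```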
