

Search Results (324)

Search Parameters:
Keywords = crop disease dataset

26 pages, 21987 KiB  
Article
AHN-YOLO: A Lightweight Tomato Detection Method for Dense Small-Sized Features Based on YOLO Architecture
by Wenhui Zhang and Feng Jiang
Horticulturae 2025, 11(6), 639; https://doi.org/10.3390/horticulturae11060639 - 6 Jun 2025
Abstract
Convolutional neural networks (CNNs) are increasingly applied in crop disease identification, yet most existing techniques are optimized solely for laboratory environments. When confronted with real-world challenges such as diverse disease morphologies, complex backgrounds, and subtle feature variations, these models often exhibit insufficient robustness. To effectively identify fine-grained disease features in complex scenarios while reducing deployment and training costs, this paper proposes a novel network architecture named AHN-YOLO, based on an improved YOLOv11-n framework that demonstrates balanced performance in multi-scale feature processing. The key innovations of AHN-YOLO include (1) the introduction of an ADown module to reduce model parameters; (2) the adoption of a Normalized Wasserstein Distance (NWD) loss function to stabilize small-feature detection; and (3) the proposal of a lightweight hybrid attention mechanism, Light-ES, to enhance focus on disease regions. Compared to the original architecture, AHN-YOLO achieves a 17.1% reduction in model size. Comparative experiments on a tomato disease detection dataset under real-world complex conditions demonstrate that AHN-YOLO improves accuracy, recall, and mAP-50 by 9.5%, 7.5%, and 9.2%, respectively, indicating a significant enhancement in detection precision. When benchmarked against other lightweight models in the field, AHN-YOLO exhibits superior training efficiency and detection accuracy in complex, dense scenarios, demonstrating clear advantages.
(This article belongs to the Section Vegetable Production Systems)
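The NWD loss mentioned in innovation (2) has a simple closed form when each axis-aligned box is modeled as a 2-D Gaussian. A minimal sketch of the similarity measure, following the tiny-object-detection literature rather than this paper's code (the `(cx, cy, w, h)` box format and the normalizing constant `c` are assumptions):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between two boxes (cx, cy, w, h).

    Each box is modeled as a 2-D Gaussian; the squared 2-Wasserstein
    distance between the Gaussians has a closed form, which is mapped
    to (0, 1] with an exponential. `c` is a dataset-dependent constant,
    often set near the average absolute object size.
    """
    cx_a, cy_a, w_a, h_a = box_a
    cx_b, cy_b, w_b, h_b = box_b
    w2_sq = ((cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
             + ((w_a - w_b) / 2) ** 2 + ((h_a - h_b) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

# Identical boxes give similarity 1.0; a small shift decays smoothly
# instead of collapsing the way IoU does for tiny boxes.
print(nwd((10, 10, 4, 4), (10, 10, 4, 4)))  # 1.0
print(nwd((10, 10, 4, 4), (12, 10, 4, 4)))
```

The training loss is then typically `1 - nwd(pred, target)`, which stays informative even when small predicted and ground-truth boxes do not overlap at all.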

23 pages, 13758 KiB  
Article
Edge–Region Collaborative Segmentation of Potato Leaf Disease Images Using Beluga Whale Optimization Algorithm with Danger Sensing Mechanism
by Jin-Ling Bei and Ji-Quan Wang
Agriculture 2025, 15(11), 1123; https://doi.org/10.3390/agriculture15111123 - 23 May 2025
Abstract
Precise detection of potato diseases is critical for food security, yet traditional image segmentation methods struggle with challenges including uneven illumination, background noise, and the gradual color transitions of lesions under complex field conditions. Therefore, a collaborative segmentation framework combining Otsu thresholding and Sobel edge detection, driven by a beluga whale optimization algorithm with a danger sensing mechanism (DSBWO), is proposed. The method introduces an S-shaped control parameter, a danger sensing mechanism, a dynamic foraging strategy, and an improved whale fall model to enhance global search ability, prevent premature convergence, and improve solution quality. DSBWO demonstrates superior optimization performance on the CEC2017 benchmark, with faster convergence and higher accuracy than other algorithms. Experiments on the Berkeley Segmentation Dataset and potato early/late blight images show that DSBWO achieves excellent segmentation performance across multiple evaluation metrics. Specifically, it reaches a maximum IoU of 0.8797, outperforming JSBWO (0.8482) and PSOSHO (0.8503), while maintaining competitive PSNR and SSIM values. Even under different Gaussian noise levels, DSBWO maintains stable segmentation accuracy and low CPU time, confirming its robustness. These findings suggest that DSBWO provides a reliable and efficient solution for automatic crop disease monitoring and can be extended to other smart agriculture applications.
(This article belongs to the Section Digital Agriculture)
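The Otsu side of such a framework gives the metaheuristic its fitness function: the between-class variance of the histogram split at a candidate threshold. A minimal single-threshold sketch (exhaustive search stands in for DSBWO here; optimizers like it pay off when several thresholds must be found jointly, and the synthetic bimodal image is purely illustrative):

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu's criterion for a single threshold t on a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    bins = np.arange(256)
    mu0 = (bins[:t] * p[:t]).sum() / w0   # mean of the background class
    mu1 = (bins[t:] * p[t:]).sum() / w1   # mean of the foreground class
    return w0 * w1 * (mu0 - mu1) ** 2

# Synthetic bimodal "image": two intensity modes at 60 and 180.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)])
hist, _ = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 255))

# Maximize the criterion; an optimizer would search this same landscape.
best_t = max(range(1, 256), key=lambda t: between_class_variance(hist, t))
print(best_t)  # lands between the two intensity modes
```

Multilevel Otsu generalizes the criterion to k thresholds, where the search space grows combinatorially and swarm-based optimizers become attractive.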

18 pages, 2484 KiB  
Article
Empowering Smallholder Farmers with UAV-Based Early Cotton Disease Detection Using AI
by Halimjon Khujamatov, Shakhnoza Muksimova, Mirjamol Abdullaev, Jinsoo Cho, Cheolwon Lee and Heung-Seok Jeon
Drones 2025, 9(5), 385; https://doi.org/10.3390/drones9050385 - 21 May 2025
Abstract
Early detection of cotton diseases is critical for safeguarding crop yield and minimizing agrochemical usage. However, most state-of-the-art systems rely on multispectral or hyperspectral sensors, which are costly and inaccessible to smallholder farmers. This paper introduces CottoNet, a lightweight and efficient deep learning framework for detecting early-stage cotton diseases using only RGB images captured by unmanned aerial vehicles (UAVs). The proposed model integrates an EfficientNetV2-S backbone with a Dual-Attention Feature Pyramid Network (DA-FPN) and a novel Early Symptom Emphasis Module (ESEM) to enhance sensitivity to subtle visual cues such as chlorosis, minor lesions, and texture irregularities. A custom-labeled dataset was collected from cotton fields in Uzbekistan to evaluate the model under realistic agricultural conditions. CottoNet achieved a mean average precision (mAP@50) of 89.7%, an F1 score of 88.2%, and an early detection accuracy (EDA) of 91.5%, outperforming existing lightweight models while maintaining real-time inference speed on embedded devices. The results demonstrate that CottoNet offers a scalable, accurate, and field-ready solution for precision agriculture in resource-limited settings.
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture—2nd Edition)

35 pages, 13580 KiB  
Article
A Novel MaxViT Model for Accelerated and Precise Soybean Leaf and Seed Disease Identification
by Al Shahriar Uddin Khondakar Pranta, Hasib Fardin, Jesika Debnath, Amira Hossain, Anamul Haque Sakib, Md. Redwan Ahmed, Rezaul Haque, Ahmed Wasif Reza and M. Ali Akber Dewan
Computers 2025, 14(5), 197; https://doi.org/10.3390/computers14050197 - 18 May 2025
Abstract
Timely diagnosis of soybean diseases is essential to protect yields and limit global economic loss, yet current deep learning approaches suffer from small, imbalanced datasets, single-organ focus, and limited interpretability. We propose MaxViT-XSLD (MaxViT XAI-Seed–Leaf-Diagnostic), a Vision Transformer that integrates multiaxis attention with MBConv layers to jointly classify soybean leaf and seed diseases while remaining lightweight and explainable. Two benchmark datasets were upscaled through elastic deformation, Gaussian noise, brightness shifts, rotation, and flipping, enlarging ASDID from 10,722 to 16,000 images (eight classes) and the SD set from 5513 to 10,000 images (five classes). Under identical augmentation and hyperparameters, MaxViT-XSLD delivered 99.82% accuracy on ASDID and 99.46% on SD, surpassing competitive ViT, CNN, and lightweight SOTA variants. High PR-AUC and MCC values, confirmed via 10-fold stratified cross-validation and Wilcoxon tests, demonstrate robust generalization across data splits. Explainable AI (XAI) techniques further enhanced interpretability by highlighting biologically relevant features influencing predictions. Its modular design also enables future model compression for edge deployment in resource-constrained settings. Finally, we deploy the model in SoyScan, a real-time web tool that streams predictions and visual explanations to growers and agronomists. These findings establish a scalable, interpretable system for precision crop health monitoring and lay the groundwork for edge-oriented, multimodal agricultural diagnostics.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
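The augmentation recipe listed above (noise, brightness, rotation, flipping) is straightforward to reproduce with array operations. A simplified stand-in, not the authors' pipeline (elastic deformation is omitted, and all magnitudes are illustrative assumptions):

```python
import numpy as np

def augment(img, rng):
    """Sample one augmented view: horizontal flip, 90-degree rotation,
    brightness shift, and additive Gaussian noise."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # 0/90/180/270 degrees
    out = out + rng.uniform(-25, 25)                # brightness shift
    out = out + rng.normal(0, 8, size=out.shape)    # Gaussian noise
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
views = [augment(img, rng) for _ in range(4)]
print(len(views), views[0].shape)
```

Applying several such samplers per source image is what inflates a 10,722-image set toward 16,000 while keeping label semantics intact.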

37 pages, 4964 KiB  
Review
A Comprehensive Review of Deep Learning Applications in Cotton Industry: From Field Monitoring to Smart Processing
by Zhi-Yu Yang, Wan-Ke Xia, Hao-Qi Chu, Wen-Hao Su, Rui-Feng Wang and Haihua Wang
Plants 2025, 14(10), 1481; https://doi.org/10.3390/plants14101481 - 15 May 2025
Abstract
Cotton is a vital economic crop in global agriculture and the textile industry, contributing significantly to food security, industrial competitiveness, and sustainable development. Traditional technologies such as spectral imaging and machine learning have improved cotton cultivation and processing, yet their performance often falls short in complex agricultural environments. Deep learning (DL), with its superior capabilities in data analysis, pattern recognition, and autonomous decision-making, offers transformative potential across the cotton value chain. This review highlights DL applications in seed quality assessment, pest and disease detection, intelligent irrigation, autonomous harvesting, and fiber classification, among others. DL enhances accuracy, efficiency, and adaptability, promoting the modernization of cotton production and precision agriculture. However, challenges remain, including limited model generalization, high computational demands, environmental adaptability issues, and costly data annotation. Future research should prioritize lightweight, robust models, standardized multi-source datasets, and real-time performance optimization. Integrating multi-modal data—such as remote sensing, weather, and soil information—can further boost decision-making. Addressing these challenges will enable DL to play a central role in driving intelligent, automated, and sustainable transformation in the cotton industry.
(This article belongs to the Section Plant Modeling)

43 pages, 7456 KiB  
Article
Estimation of Fractal Dimensions and Classification of Plant Disease with Complex Backgrounds
by Muhammad Hamza Tariq, Haseeb Sultan, Rehan Akram, Seung Gu Kim, Jung Soo Kim, Muhammad Usman, Hafiz Ali Hamza Gondal, Juwon Seo, Yong Ho Lee and Kang Ryoung Park
Fractal Fract. 2025, 9(5), 315; https://doi.org/10.3390/fractalfract9050315 - 14 May 2025
Abstract
Accurate classification of plant disease by farming robot cameras can increase crop yield and reduce unnecessary agricultural chemicals, which is a fundamental task in the field of sustainable and precision agriculture. However, until now, disease classification has mostly been performed by manual methods, such as visual inspection, which are labor-intensive and often lead to misclassification of disease types. Therefore, previous studies have proposed disease classification methods based on machine learning or deep learning techniques; however, most did not consider real-world plant images with complex backgrounds and incurred high computational costs. To address these issues, this study proposes a computationally effective residual convolutional attention network (RCA-Net) for the disease classification of plants in field images with complex backgrounds. RCA-Net leverages attention mechanisms and multiscale feature extraction strategies to enhance salient features while reducing background noise. In addition, we introduce fractal dimension estimation to analyze the complexity and irregularity of class activation maps for both healthy plants and their diseases, confirming that our model can extract important features for the correct classification of plant disease. The experiments utilized two publicly available datasets: the sugarcane leaf disease and potato leaf disease datasets. Furthermore, to improve the capability of our proposed system, we performed fractal dimension estimation to evaluate the structural complexity of healthy and diseased leaf patterns. The experimental results show that RCA-Net outperforms state-of-the-art methods with an accuracy of 93.81% on the first dataset and 78.14% on the second dataset. Furthermore, we confirm that our method can be operated on an embedded system for farming robots or mobile devices at a fast processing speed (78.7 frames per second).
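Fractal dimension of a binary pattern (a thresholded activation map or lesion mask) is commonly estimated by box counting: count the occupied boxes at several grid scales and fit the log–log slope. A minimal sketch of that estimator, under the assumption that the paper uses something of this form (its exact procedure is not specified here):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate fractal dimension of a binary mask via box counting:
    the slope of log N(s) against log(1/s), where N(s) is the number
    of s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]   # crop to a multiple of s
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square is area-like, so its estimate should land near 2;
# ragged lesion boundaries give intermediate, non-integer values.
mask = np.zeros((64, 64), dtype=bool)
mask[8:56, 8:56] = True
print(round(box_counting_dimension(mask), 2))
```

Higher estimated dimension for diseased-leaf activation maps than for healthy ones would indicate the more irregular structure the abstract describes.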

27 pages, 14459 KiB  
Review
Disease Detection on Cocoa Crops Based on Computer-Vision Techniques: A Systematic Literature Review
by Joan Alvarado, Juan Felipe Restrepo-Arias, David Velásquez and Mikel Maiza
Agriculture 2025, 15(10), 1032; https://doi.org/10.3390/agriculture15101032 - 10 May 2025
Abstract
Computer vision in the agriculture field aims to find solutions to guarantee and assure farmers the quality of their products. Therefore, studies to diagnose diseases and detect anomalies in crops through computer vision have been growing in recent years. However, crops such as cocoa require further attention to drive advances in computer-vision-based disease detection. As a result, this paper aims to explore the computer vision methods used to diagnose diseases in crops, especially in cocoa. The purpose of this paper is to provide answers to the following research questions: (Q1) What are the diseases affecting cocoa crop production? (Q2) What are the main Machine Learning algorithms and techniques used to detect and classify diseases in cocoa? (Q3) What are the types of imaging technologies (e.g., RGB, hyperspectral, or multispectral cameras) commonly used in these applications? (Q4) What are the main Machine Learning algorithms used in mobile applications and other platforms for cocoa disease detection? This paper follows a Systematic Literature Review approach. The Scopus, ScienceDirect, Springer Link, and IEEE Xplore databases were explored from January 2019 to August 2024. These questions identified the main diseases that affect cocoa crops and their production. From this, it was identified that mostly Machine Learning algorithms based on computer vision are employed to detect anomalies in cocoa. In addition, the main sensors, such as RGB and hyperspectral cameras, used for the creation of datasets and as tools to diagnose or detect diseases were explored. Finally, this review examines Machine Learning algorithms deployed in mobile and Internet of Things applications for detecting diseases in cocoa crops.
(This article belongs to the Section Digital Agriculture)

31 pages, 3016 KiB  
Review
Image Recognition Technology in Smart Agriculture: A Review of Current Applications, Challenges, and Future Prospects
by Chunxia Jiang, Kangshu Miao, Zhichao Hu, Fengwei Gu and Kechuan Yi
Processes 2025, 13(5), 1402; https://doi.org/10.3390/pr13051402 - 4 May 2025
Abstract
The implementation of image recognition technology can significantly enhance the levels of automation and intelligence in smart agriculture. However, most researchers have focused on its applications in medical imaging, industry, and transportation, while fewer have focused on smart agriculture. This study therefore aims to contribute to a comprehensive understanding of the application of image recognition technology in smart agriculture by investigating the scientific literature related to this technology from the last few years. We discussed and analyzed the applications of plant disease and pest detection, crop species identification, crop yield prediction, and quality assessment. Then, we made a brief introduction to its applications in soil testing and nutrient management, as well as in agricultural machinery operation quality assessment and agricultural product grading. Finally, the challenges and emerging trends of image recognition technology were summarized. The results indicated that the models used in image recognition technology face challenges such as limited generalization, real-time processing, and insufficient dataset diversity. Transfer learning and green Artificial Intelligence (AI) offer promising solutions to these issues by reducing reliance on large datasets and minimizing computational resource consumption. Advanced technologies like transformers further enhance the adaptability and accuracy of image recognition in smart agriculture. This comprehensive review provides valuable information on the current state of image recognition technology in smart agriculture and prospective future opportunities.

23 pages, 8988 KiB  
Article
BED-YOLO: An Enhanced YOLOv10n-Based Tomato Leaf Disease Detection Algorithm
by Qing Wang, Ning Yan, Yasen Qin, Xuedong Zhang and Xu Li
Sensors 2025, 25(9), 2882; https://doi.org/10.3390/s25092882 - 2 May 2025
Abstract
As an important economic crop, tomato is highly susceptible to diseases that, if not promptly managed, can severely impact yield and quality, leading to significant economic losses. Traditional diagnostic methods rely on expert visual inspection, which is not only laborious but also prone to subjective bias. In recent years, object detection algorithms have gained widespread application in tomato disease detection due to their efficiency and accuracy, providing reliable technical support for crop disease identification. In this paper, we propose an improved tomato leaf disease detection method based on the YOLOv10n algorithm, named BED-YOLO. We constructed an image dataset containing four common tomato diseases (early blight, late blight, leaf mold, and septoria leaf spot), with 65% of the images sourced from field collections in natural environments and the remainder obtained from the publicly available PlantVillage dataset. All images were annotated with bounding boxes, and the class distribution was relatively balanced to ensure the stability of training and the fairness of evaluation. First, we introduced a Deformable Convolutional Network (DCN) to replace the conventional convolution in the YOLOv10n backbone network, enhancing the model's adaptability to overlapping leaves, occlusions, and blurred lesion edges. Second, we incorporated a Bidirectional Feature Pyramid Network (BiFPN) on top of the FPN + PAN structure to optimize feature fusion and improve the extraction of small disease regions, thereby enhancing the detection accuracy for small lesion targets. Lastly, the Efficient Multi-Scale Attention (EMA) mechanism was integrated into the C2f module to enhance feature fusion, effectively focusing on disease regions while reducing background noise and ensuring the integrity of disease features in multi-scale fusion. The experimental results demonstrated that the improved BED-YOLO model achieved significant performance improvements compared to the original model. Precision increased from 85.1% to 87.2%, recall from 86.3% to 89.1%, and mean average precision (mAP) from 87.4% to 91.3%. Therefore, the improved BED-YOLO model demonstrated significant enhancements in detection accuracy, recall ability, and overall robustness. Notably, it exhibited stronger practical applicability, particularly in image testing under natural field conditions, making it highly suitable for intelligent disease monitoring tasks in large-scale agricultural scenarios.
(This article belongs to the Section Smart Agriculture)
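The BiFPN layers mentioned above combine feature maps with learned, normalized non-negative weights ("fast normalized fusion" in the BiFPN literature) rather than plain addition. A minimal sketch of that fusion rule, with toy constant feature maps standing in for real backbone outputs:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: per-input learnable weights are clamped
    non-negative (ReLU) and normalized so the fused map is a convex
    combination of the inputs -- cheaper than a softmax over weights."""
    w = np.maximum(np.asarray(weights, dtype=np.float32), 0.0)
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

p_coarse = np.full((8, 8), 2.0)   # e.g. an upsampled deeper-level map
p_lateral = np.full((8, 8), 4.0)  # same-resolution lateral map
fused = fast_normalized_fusion([p_coarse, p_lateral], weights=[1.0, 3.0])
print(fused[0, 0])  # close to 3.5, pulled toward the higher-weight input
```

During training, the scalar weights are parameters learned per fusion node, letting the network decide how much each resolution contributes to small-lesion features.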

37 pages, 46669 KiB  
Article
ViX-MangoEFormer: An Enhanced Vision Transformer–EfficientFormer and Stacking Ensemble Approach for Mango Leaf Disease Recognition with Explainable Artificial Intelligence
by Abdullah Al Noman, Amira Hossain, Anamul Sakib, Jesika Debnath, Hasib Fardin, Abdullah Al Sakib, Rezaul Haque, Md. Redwan Ahmed, Ahmed Wasif Reza and M. Ali Akber Dewan
Computers 2025, 14(5), 171; https://doi.org/10.3390/computers14050171 - 2 May 2025
Abstract
Mango productivity suffers greatly from leaf diseases, leading to economic and food security issues. Current visual inspection methods are slow and subjective. Previous Deep-Learning (DL) solutions have shown promise but suffer from imbalanced datasets, modest generalization, and limited interpretability. To address these challenges, this study introduces ViX-MangoEFormer, which combines convolutional kernels and self-attention to effectively diagnose multiple mango leaf conditions in both balanced and imbalanced image sets. To benchmark against ViX-MangoEFormer, we developed a stacking ensemble model (MangoNet-Stack) that utilizes five transfer learning networks as base learners. All models were trained with Grad-CAM-produced pixel-level explanations. On a combined dataset of 25,530 images, ViX-MangoEFormer achieved an F1 score of 99.78% and a Matthews Correlation Coefficient (MCC) of 99.34%, consistently outperforming the individual pre-trained models and MangoNet-Stack. Additionally, data augmentation improved the performance of every architecture compared to its non-augmented version. Cross-domain tests on morphologically similar crop leaves confirmed strong generalization. Our findings validate the effectiveness of transformer attention and XAI in mango leaf disease detection. ViX-MangoEFormer is deployed as a web application that delivers real-time predictions, probability scores, and visual rationales. The system enables growers to respond quickly and enhances large-scale smart crop health monitoring.
(This article belongs to the Special Issue Deep Learning and Explainable Artificial Intelligence)

25 pages, 13043 KiB  
Article
Coffee-Leaf Diseases and Pests Detection Based on YOLO Models
by Jonatan Fragoso, Clécio Silva, Thuanne Paixão, Ana Beatriz Alvarez, Olacir Castro Júnior, Ruben Florez, Facundo Palomino-Quispe, Lucas Graciolli Savian and Paulo André Trazzi
Appl. Sci. 2025, 15(9), 5040; https://doi.org/10.3390/app15095040 - 1 May 2025
Abstract
Coffee cultivation is vital to the global economy, but faces significant challenges with diseases such as rust, miner, phoma, and cercospora, which impact production and sustainable crop management. In this scenario, deep learning techniques have shown promise for the early identification of these diseases, enabling more efficient monitoring. This paper proposes an approach for detecting diseases and pests on coffee leaves using an efficient single-shot object-detection algorithm. The experiments were conducted using the YOLOv8, YOLOv9, YOLOv10, and YOLOv11 versions, including their variations. The BRACOL dataset, annotated by an expert, was used in the experiments to guarantee the quality of the annotations and the reliability of the trained models. The evaluation of the models included quantitative and qualitative analyses, considering the mAP, F1-Score, and recall metrics. In the analyses, YOLOv8s stands out as the most effective, with a mAP of 54.5%, an inference time of 11.4 ms, and the best qualitative predictions, making it ideal for real-time applications.
(This article belongs to the Special Issue Applied Computer Vision in Industry and Agriculture)

23 pages, 8570 KiB  
Article
Apple Pest and Disease Detection Network with Partial Multi-Scale Feature Extraction and Efficient Hierarchical Feature Fusion
by Weihao Bao and Fuquan Zhang
Agronomy 2025, 15(5), 1043; https://doi.org/10.3390/agronomy15051043 - 26 Apr 2025
Abstract
Apples are a highly valuable economic crop worldwide, but their cultivation often faces challenges from pests and diseases that severely affect yield and quality. To address this issue, this study proposes an improved pest and disease detection algorithm, YOLO-PEL, based on YOLOv11, which integrates multiple innovative modules, including PMFEM, EHFPN, and LKAP, combined with data augmentation strategies, significantly improving detection accuracy and efficiency in complex environments. PMFEM leverages partial multi-scale feature extraction to effectively enhance feature representation, particularly improving the ability to capture pest and disease targets in complex backgrounds. EHFPN employs hierarchical feature fusion and an efficient local attention mechanism to markedly improve the detection accuracy of small targets. LKAP introduces a large kernel attention mechanism, expanding the receptive field and enhancing the localization precision of diseased regions. Experimental results demonstrate that YOLO-PEL achieves an mAP@50 of 72.9% on the apple subset of the Turkey_Plant dataset, an improvement of approximately 4.3% over the baseline YOLOv11. Furthermore, the model exhibits favorable lightweight characteristics in terms of computational complexity and parameter count, underscoring its effectiveness and robustness in practical applications. YOLO-PEL not only provides an efficient solution for agricultural pest and disease detection, but also offers technological support for the advancement of smart agriculture. Future research will focus on optimizing the model's speed and lightweight design to adapt to broader agricultural application scenarios, driving further development in agricultural intelligence technologies.
(This article belongs to the Section Pest and Disease Management)

19 pages, 3076 KiB  
Article
Federated Learning for Heterogeneous Multi-Site Crop Disease Diagnosis
by Wesley Chorney, Abdur Rahman, Yibin Wang, Haifeng Wang and Zhaohua Peng
Mathematics 2025, 13(9), 1401; https://doi.org/10.3390/math13091401 - 25 Apr 2025
Abstract
Crop diseases can significantly impact crop growth and production, often leading to a severe economic burden for rice farmers. These diseases can spread rapidly over large areas, making it challenging for farmers to detect and manage them effectively and promptly. Automated methods for disease classification emerge as promising approaches for detecting and managing these diseases, provided there are sufficient data. Sharing data among farms could facilitate the development of a strong classifier, but it must be executed properly to prevent leaking sensitive information. In this study, we demonstrate how farms with vastly different datasets can collaborate through a federated learning model. The objective of this collaboration is to create a classifier that every farm can use to detect and manage rice crop diseases by leveraging data sharing while safeguarding data privacy. We underscore the significance of data sharing and model architecture in developing a robust centralized classifier, which can effectively classify multiple diseases (and a healthy state) with 83.24% accuracy, 84.24% precision, 83.24% recall, and an 82.28% F1 score. In addition, we demonstrate the importance of model design on classification outcomes. The proposed collaborative learning method not only preserves data privacy but also offers a cost-effective and communication-efficient lightweight solution for rice crop disease detection. Furthermore, this collaborative strategy can be extended to other crop disease classification tasks.
(This article belongs to the Special Issue Computational Intelligence in Addressing Data Heterogeneity)
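The privacy-preserving collaboration described above hinges on an aggregation step: clients send model parameters, not images, and a server averages them. A minimal sketch of the common FedAvg rule (the paper's exact aggregation scheme may differ; the one-layer "farm models" and dataset sizes are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg-style round: average each parameter tensor across
    clients, weighted by local dataset size, so farms share model
    updates rather than raw field images."""
    total = sum(client_sizes)
    agg = []
    for layer in zip(*client_weights):  # iterate layer-by-layer
        weighted = [n / total * w for n, w in zip(client_sizes, layer)]
        agg.append(np.stack(weighted).sum(axis=0))
    return agg

# Two hypothetical farms with unequal amounts of local data.
farm_a = [np.array([[1.0, 1.0]])]
farm_b = [np.array([[3.0, 5.0]])]
global_model = fed_avg([farm_a, farm_b], client_sizes=[100, 300])
print(global_model[0])  # elementwise weighted averages: 2.5 and 4.0
```

The heterogeneity studied in the paper arises exactly here: when client data distributions differ sharply, naive weighted averaging can degrade, which motivates the paper's attention to model architecture and data sharing.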

24 pages, 37584 KiB  
Article
Interpretable and Robust Ensemble Deep Learning Framework for Tea Leaf Disease Classification
by Ozan Ozturk, Beytullah Sarica and Dursun Zafer Seker
Horticulturae 2025, 11(4), 437; https://doi.org/10.3390/horticulturae11040437 - 19 Apr 2025
Abstract
Tea leaf diseases are among the most critical factors affecting the yield and quality of tea harvests. Due to climate change and widespread pesticide use in tea cultivation, these diseases have become more prevalent. As the demand for high-quality tea continues to rise, tea has assumed an increasingly prominent role in the global economy, thereby rendering the continuous monitoring of leaf diseases essential for maintaining crop quality and ensuring sustainable production. In this context, developing innovative and sustainable agricultural policies is vital. Integrating artificial intelligence (AI)-based techniques with sustainable agricultural practices presents promising solutions. Ensuring that the outputs of these techniques are interpretable would also provide significant value for decision-makers, enhancing their applicability in sustainable agricultural practices. In this study, advanced deep learning architectures such as ResNet50, MobileNet, EfficientNetB0, and DenseNet121 were utilized to classify tea leaf diseases. Since low-resolution images and complex backgrounds caused significant challenges, an ensemble learning approach was proposed to combine the strengths of these models. The generalization performance of the ensemble model was comprehensively evaluated through statistical cross-validation. Additionally, Grad-CAM visualizations demonstrated a clear correspondence between diseased regions and disease types on the tea leaves. Thus, the models could detect diseases under varying conditions, highlighting their robustness. The ensemble model achieved high predictive performance, with precision, recall, and F1-score values of 95%, 94%, and 94% across folds. The overall classification accuracy reached 96%, with a maximum standard deviation of 2% across all dataset folds. 
Full article
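The ensemble approach described above can be illustrated with a soft-voting sketch: each backbone (ResNet50, MobileNet, EfficientNetB0, DenseNet121) emits per-class probabilities, and the ensemble averages them before taking the argmax. The arrays and optional model weights below are toy values for illustration, not the paper's trained outputs.

```python
import numpy as np

def soft_vote(prob_stacks, weights=None):
    """Soft-voting ensemble: average the per-class probability
    outputs of several backbones, then take the argmax per sample.
    `weights` can favor stronger models; default is uniform."""
    probs = np.stack(prob_stacks)   # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_stacks))
    weights = np.asarray(weights, dtype=float)
    avg = np.tensordot(weights / weights.sum(), probs, axes=1)
    return avg.argmax(axis=1)

# toy softmax outputs from three hypothetical backbones
# (2 leaf images, 3 disease classes)
resnet = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
mobile = np.array([[0.5, 0.4, 0.1], [0.1, 0.7, 0.2]])
dense  = np.array([[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]])
preds = soft_vote([resnet, mobile, dense])
```

Averaging probabilities rather than hard labels lets a confident minority model outvote two uncertain ones, which is one reason ensembles cope better with the low-resolution, complex-background images the study mentions.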
(This article belongs to the Section Plant Pathology and Disease Management (PPDM))

30 pages, 12978 KiB  
Article
A Framework for Breast Cancer Classification with Deep Features and Modified Grey Wolf Optimization
by Fathimathul Rajeena P.P and Sara Tehsin
Mathematics 2025, 13(8), 1236; https://doi.org/10.3390/math13081236 - 9 Apr 2025
Abstract
Breast cancer is the most common disease in women, with 287,800 new cases and 43,200 deaths in 2022 across the United States. Early mammographic image analysis and processing reduce mortality and enable efficient treatment. Several deep-learning-based mammography classification methods have been developed. Due to low-contrast images and irrelevant information in publicly available breast cancer datasets, existing models generally perform poorly. Pre-trained convolutional neural network models trained on generic datasets tend to extract irrelevant features when applied to domain-specific classification tasks, highlighting the need for a feature selection mechanism to transform high-dimensional data into a more discriminative feature space. This work introduces an innovative and effective multi-step pathway to overcome these restrictions. In preprocessing, mammographic images are haze-reduced using adaptive transformation, normalized using a cropping algorithm, and balanced using rotation, flipping, and noise addition. A 32-layer convolutional neural network inspired by YOLO, U-Net, and ResNet is designed to extract highly discriminative features for breast cancer classification. A modified Grey Wolf Optimization algorithm with three significant adjustments improves feature selection and redundancy removal over the standard algorithm. The robustness and efficacy of the proposed model in the classification of breast cancer were validated by its consistently high performance across multiple benchmark mammogram datasets. This consistently strong performance demonstrates robust generalization, making the model a powerful solution for binary and multiclass breast cancer classification. Full article
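Grey Wolf Optimization for feature selection, as used above, can be sketched in binary form: each "wolf" is a 0/1 mask over features, and the pack moves toward the three best wolves (alpha, beta, delta) each iteration. This is the plain binary GWO baseline under a toy fitness function; the paper's three modifications are not reproduced here.

```python
import numpy as np

def binary_gwo(fitness, dim, n_wolves=8, iters=30, seed=0):
    """Plain binary Grey Wolf Optimizer for feature selection.
    Each wolf is a boolean mask over `dim` features; positions
    are updated toward the three fittest wolves and re-binarized
    through a sigmoid transfer function."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_wolves, dim)) > 0.5          # random 0/1 masks
    for t in range(iters):
        scores = np.array([fitness(w) for w in pos])
        order = np.argsort(scores)[::-1]             # maximize fitness
        alpha, beta, delta = pos[order[:3]].astype(float)
        a = 2 - 2 * t / iters                        # decreases 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - pos[i])      # distance to leader
                new += leader - A * D
            new /= 3
            # sigmoid transfer keeps positions binary
            pos[i] = rng.random(dim) < 1 / (1 + np.exp(-new))
    scores = np.array([fitness(w) for w in pos])
    return pos[scores.argmax()]

# toy fitness: reward keeping the first five features, lightly
# penalize the rest (stands in for cross-validated accuracy)
best = binary_gwo(lambda m: m[:5].sum() - 0.1 * m[5:].sum(), dim=10)
```

In the actual pipeline the fitness function would wrap a classifier's validation accuracy on the masked deep features, so the optimizer trades discriminative power against feature-count redundancy.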
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning)
