Search Results (1,504)

Search Parameters:
Keywords = deep learning in agriculture

24 pages, 73507 KB  
Article
2C-Net: A Novel Spatiotemporal Dual-Channel Network for Soil Organic Matter Prediction Using Multi-Temporal Remote Sensing and Environmental Covariates
by Jiale Geng, Chong Luo, Jun Lu, Depiao Kong, Xue Li and Huanjun Liu
Remote Sens. 2025, 17(19), 3358; https://doi.org/10.3390/rs17193358 - 3 Oct 2025
Abstract
Soil organic matter (SOM) is essential for ecosystem health and agricultural productivity. Accurate prediction of SOM content is critical for modern agricultural management and sustainable soil use. Existing digital soil mapping (DSM) models, when processing temporal data, primarily focus on modeling the changes in input data across successive time steps. However, they do not adequately model the relationships among different input variables, which hinders the capture of complex data patterns and limits prediction accuracy. To address this problem, this paper proposes a novel deep learning model, the 2-Channel Network (2C-Net), which leverages sequential multi-temporal remote sensing images to improve SOM prediction. The network separates input data into temporal and spatial data, processing them through independent temporal and spatial channels. Temporal data include multi-temporal Sentinel-2 spectral reflectance, while spatial data consist of environmental covariates, including climate and topography. The Multi-sequence Feature Fusion Module (MFFM) is proposed to globally model spectral data across multiple bands and time steps, and the Diverse Convolutional Architecture (DCA) extracts spatial features from environmental data. Experimental results show that 2C-Net outperforms the baseline model (CNN-LSTM) and mainstream machine learning models for DSM, with R² = 0.524, RMSE = 0.884 (%), MAE = 0.581 (%), and MSE = 0.781 (%)². Furthermore, this study demonstrates the significant importance of sequential spectral data for the inversion of SOM content and concludes that, for the SOM inversion task, the bare soil period after tilling is a more important time window than other bare soil periods. The 2C-Net model effectively captures spatiotemporal features, offering high-accuracy SOM predictions and supporting future DSM and soil management. Full article
(This article belongs to the Special Issue Remote Sensing in Soil Organic Carbon Dynamics)
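The dual-channel idea above — summarizing the temporal spectral sequence and the static covariates in separate branches before fusing them — can be sketched minimally. This is a toy stand-in, not the paper's architecture: simple per-band statistics replace the MFFM, and the raw covariate vector replaces the DCA's convolutional features; all values are illustrative.

```python
import numpy as np

def temporal_branch(spectra):
    # spectra: (T, B) array of band reflectances over T acquisition dates.
    # Toy stand-in for the MFFM: per-band mean and range across time.
    return np.concatenate([spectra.mean(axis=0), np.ptp(spectra, axis=0)])

def spatial_branch(covariates):
    # Static environmental covariates (climate, topography);
    # stand-in for the DCA's convolutional features.
    return np.asarray(covariates, dtype=float)

def two_channel_features(spectra, covariates):
    # Fuse the two channels by concatenation ahead of a regression head.
    return np.concatenate([temporal_branch(spectra), spatial_branch(covariates)])

spectra = np.array([[0.10, 0.20], [0.12, 0.18], [0.11, 0.22]])  # 3 dates, 2 bands
covs = [15.2, 480.0]  # hypothetical mean temperature (°C) and elevation (m)
feat = two_channel_features(spectra, covs)  # fused 6-dimensional feature vector
```
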
21 pages, 4053 KB  
Article
Self-Attention-Enhanced Deep Learning Framework with Multi-Scale Feature Fusion for Potato Disease Detection in Complex Multi-Leaf Field Conditions
by Ke Xie, Decheng Xu and Sheng Chang
Appl. Sci. 2025, 15(19), 10697; https://doi.org/10.3390/app151910697 - 3 Oct 2025
Abstract
Potato leaf diseases are recognized as a major threat to agricultural productivity and global food security, emphasizing the need for rapid and accurate detection methods. Conventional manual diagnosis is limited by inefficiency and susceptibility to bias, whereas existing automated approaches are often constrained by insufficient feature extraction, inadequate integration of multiple leaves, and poor generalization under complex field conditions. To overcome these challenges, a ResNet18-SAWF model was developed, integrating a self-attention mechanism with a multi-scale feature-fusion strategy within the ResNet18 framework. The self-attention module was designed to enhance the extraction of key features, including leaf color, texture, and disease spots, while the feature-fusion module was implemented to improve the holistic representation of multi-leaf structures under complex backgrounds. Experimental evaluation was conducted using a comprehensive dataset comprising both simple and complex background conditions. The proposed model was demonstrated to achieve an accuracy of 98.36% on multi-leaf images with complex backgrounds, outperforming baseline ResNet18 (91.80%), EfficientNet-B0 (86.89%), and MobileNet_V2 (88.53%) by 6.56, 11.47, and 9.83 percentage points, respectively. Compared with existing methods, superior performance was observed, with an 11.55 percentage point improvement over the average accuracy of complex background studies (86.81%) and a 0.7 percentage point increase relative to simple background studies (97.66%). These results indicate that the proposed approach provides a robust, accurate, and practical solution for potato leaf disease detection in real field environments, thereby advancing precision agriculture technologies. Full article
(This article belongs to the Section Agricultural Science and Technology)
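The self-attention module described here is, at its core, the standard scaled dot-product mechanism. A minimal NumPy sketch (not the paper's exact module — the projection sizes and random weights are illustrative) shows how flattened feature-map positions are re-weighted against one another:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) — n flattened spatial positions of a feature map, d channels.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot-product similarities
    return softmax(scores, axis=-1) @ V      # positions re-weighted by attention

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 positions, 8 channels
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```
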
38 pages, 2485 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article
22 pages, 2526 KB  
Article
An Explainable Deep Learning Framework with Adaptive Feature Selection for Smart Lemon Disease Classification in Agriculture
by Naeem Ullah, Michelina Ruocco, Antonio Della Cioppa, Ivanoe De Falco and Giovanna Sannino
Electronics 2025, 14(19), 3928; https://doi.org/10.3390/electronics14193928 - 2 Oct 2025
Abstract
Early and accurate detection of lemon disease is necessary for effective citrus crop management. Traditional approaches often lack diagnostic precision, necessitating more powerful solutions. This article introduces adaptive PSO-LemonNetX, a framework integrating a novel deep learning model, adaptive Particle Swarm Optimization (PSO)-based feature selection, and explainable AI (XAI) using LIME. The approach improves classification accuracy while also enhancing the explainability of the model. Our end-to-end model obtained 97.01% testing and 98.55% validation accuracy. Performance was enhanced further with adaptive PSO and conventional classifiers: 100% validation accuracy using Naive Bayes and 98.8% testing accuracy using Naive Bayes and an SVM. The suggested PSO-based feature selection performed better than ReliefF, Kruskal–Wallis, and Chi-squared approaches. Due to its lightweight design and good performance, this approach can be adapted for edge devices in IoT-enabled smart farms, contributing to sustainable and automated disease detection systems. These results show the potential of integrating deep learning, PSO, grid search, and XAI into smart agriculture workflows for enhancing agricultural disease detection and decision-making. Full article
(This article belongs to the Special Issue Image Processing and Pattern Recognition)
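PSO-based feature selection of the kind described can be illustrated with a small sketch: particles hold per-feature selection probabilities, and fitness rewards classification accuracy while penalizing subset size. The fitness function below (a nearest-centroid accuracy on toy data) and all constants are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: features 0 and 1 are informative, the rest are noise.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y

def fitness(mask):
    # Nearest-centroid training accuracy on the selected features,
    # minus a small penalty per selected feature.
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * mask.sum()

def pso_select(n_feat=6, n_particles=12, iters=30, w=0.7, c_p=1.5, c_g=1.5):
    pos = rng.random((n_particles, n_feat))   # selection probabilities
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p > 0.5) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c_p * r1 * (pbest - pos) + c_g * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        f = np.array([fitness(p > 0.5) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest > 0.5                        # final binary feature mask

selected = pso_select()
```
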
22 pages, 12774 KB  
Article
Multi-Agent Coverage Path Planning Using Graph-Adapted K-Means in Road Network Digital Twin
by Haeseong Lee and Myungho Lee
Electronics 2025, 14(19), 3921; https://doi.org/10.3390/electronics14193921 - 1 Oct 2025
Abstract
In this paper, we study multi-robot coverage path planning (MCPP), which generates paths for agents to visit all target areas or points. This problem arises in various fields, such as agriculture, rescue, 3D scanning, and data collection. Algorithms for MCPP are generally categorized into online and offline methods: online methods work in an unknown area, while offline methods generate paths for a known one. Recently, offline MCPP has been studied through various approaches, such as graph clustering, DARP, genetic algorithms, and deep learning models. However, many previous algorithms can be applied only in grid-like environments. Therefore, this study introduces an offline MCPP algorithm that applies graph-adapted K-means and spanning tree coverage for robust operation on non-grid maps such as road networks. To achieve this, we modify a cost function based on travel distance by adjusting the referenced clustering algorithm. Moreover, we apply bipartite graph matching to reflect the initial positions of agents. We also introduce a cluster-level graph to alleviate local minima during clustering updates. We compare the proposed algorithm with existing methods in a grid environment to validate its stability, and evaluation on a road network digital twin validates its robustness across most environments. Full article
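A minimal sketch of the graph-adapted K-means idea, under the assumption that the adaptation replaces Euclidean distance with travel distance for assignment and uses a medoid for the center update (the paper's actual cost function and update rule may differ):

```python
from collections import deque

def bfs_dist(adj, src):
    # Hop-count travel distance from src to every node (unweighted graph).
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_kmeans(adj, seeds, iters=10):
    centers = list(seeds)
    for _ in range(iters):
        dists = [bfs_dist(adj, c) for c in centers]
        # Assignment: each node joins the center with the smallest travel distance.
        clusters = {i: [] for i in range(len(centers))}
        for node in adj:
            i = min(range(len(centers)), key=lambda k: dists[k].get(node, len(adj)))
            clusters[i].append(node)
        # Update: medoid (node minimizing total travel distance to its cluster),
        # since a coordinate mean is undefined on a graph. Empty clusters are
        # not handled in this sketch.
        new_centers = [
            min(members, key=lambda m: sum(bfs_dist(adj, m)[x] for x in members))
            for members in clusters.values()
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# Two 'blobs' of road junctions joined by a single bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
centers, clusters = graph_kmeans(adj, seeds=[0, 4])
```

The clustering splits the network at the bridge, which is the behavior a travel-distance cost is meant to produce on road networks.
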
26 pages, 3841 KB  
Article
Comparison of Regression, Classification, Percentile Method and Dual-Range Averaging Method for Crop Canopy Height Estimation from UAV-Based LiDAR Point Cloud Data
by Pai Du, Jinfei Wang and Bo Shan
Drones 2025, 9(10), 683; https://doi.org/10.3390/drones9100683 - 1 Oct 2025
Abstract
Crop canopy height is a key structural indicator that is strongly associated with crop development, biomass accumulation, and crop health. To overcome the limitations of time-consuming and labor-intensive traditional field measurements, Unmanned Aerial Vehicle (UAV)-based Light Detection and Ranging (LiDAR) offers an efficient alternative by capturing three-dimensional point cloud data (PCD). In this study, UAV-LiDAR data were acquired using a DJI Matrice 600 Pro equipped with a 16-channel LiDAR system. Four methodological approaches to canopy height estimation were evaluated across three crop types: corn, soybean, and winter wheat. Specifically, this study assessed machine learning regression modeling, ground point classification techniques, a percentile-based method, and a newly proposed Dual-Range Averaging (DRA) method to identify the most effective method while ensuring practicality and reproducibility. The best-performing method for corn was Support Vector Regression (SVR) with a linear kernel (R² = 0.95, RMSE = 0.137 m). For soybean, the DRA method yielded the highest accuracy (R² = 0.93, RMSE = 0.032 m). For winter wheat, the PointCNN deep learning model demonstrated the best performance (R² = 0.93, RMSE = 0.046 m). These results highlight the effectiveness of integrating UAV-LiDAR data with optimized processing methods for accurate and widely applicable crop height estimation in support of precision agriculture practices. Full article
(This article belongs to the Special Issue UAV Agricultural Management: Recent Advances and Future Prospects)
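The percentile idea, plus one hedged reading of Dual-Range Averaging (the DRA definition is not given in this listing, so the percentile ranges below are assumptions), can be sketched from a column of LiDAR return heights:

```python
import numpy as np

def canopy_height_percentile(z, ground_pct=5, canopy_pct=95):
    # Percentile method: ground elevation from a low percentile of z,
    # canopy top from a high percentile; height is the difference.
    z = np.asarray(z, dtype=float)
    return np.percentile(z, canopy_pct) - np.percentile(z, ground_pct)

def canopy_height_dra(z, low=(0, 10), high=(90, 100)):
    # Assumed reading of Dual-Range Averaging: average all returns inside
    # a low percentile range (ground) and a high range (canopy top),
    # then difference the two means.
    z = np.asarray(z, dtype=float)
    lo_lo, lo_hi = np.percentile(z, low)
    hi_lo, hi_hi = np.percentile(z, high)
    ground = z[(z >= lo_lo) & (z <= lo_hi)].mean()
    canopy = z[(z >= hi_lo) & (z <= hi_hi)].mean()
    return canopy - ground

# Synthetic plot: ground returns near 100.0 m, canopy returns near 102.5 m.
z = np.r_[np.full(50, 100.0), np.full(50, 102.5)]
```
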
24 pages, 108802 KB  
Article
Enhanced Garlic Crop Identification Using Deep Learning Edge Detection and Multi-Source Feature Optimization with Random Forest
by Junli Zhou, Quan Diao, Xue Liu, Hang Su, Zhen Yang and Zhanlin Ma
Sensors 2025, 25(19), 6014; https://doi.org/10.3390/s25196014 - 30 Sep 2025
Abstract
Garlic, as an important economic crop, plays a crucial role in the global agricultural production system. Accurate identification of garlic cultivation areas is of great significance for agricultural resource allocation and industrial development. Traditional crop identification methods face challenges of insufficient accuracy and spatial fragmentation in complex agricultural landscapes, limiting their effectiveness in precision agriculture applications. Focusing on Kaifeng City, Henan Province, this study developed an integrated technical framework for garlic identification that combines deep learning edge detection, multi-source feature optimization, and spatial constraint optimization. First, edge detection training samples were constructed using high-resolution Jilin-1 satellite data, and the DexiNed deep learning network was employed to achieve precise extraction of agricultural field boundaries. Second, Sentinel-1 SAR backscatter features, Sentinel-2 multispectral bands, and vegetation indices were integrated to construct a multi-dimensional feature space containing 28 candidate variables, with optimal feature subsets selected through random forest importance analysis combined with recursive feature elimination techniques. Finally, field boundaries were introduced as spatial constraints to optimize pixel-level classification results through majority voting, generating field-scale crop identification products. The results demonstrate that feature optimization improved overall accuracy from 0.91 to 0.93 and the Kappa coefficient from 0.8654 to 0.8857 by selecting 13 optimal features from 28 candidates. The DexiNed network achieved an F1-score of 94.16% for field boundary extraction. Spatial optimization using field constraints effectively eliminated salt-and-pepper noise, with successful validation in Kaifeng's garlic-growing areas. Full article
(This article belongs to the Section Smart Agriculture)
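The field-constrained majority voting step can be sketched directly: every pixel inside a field boundary takes the field's majority class, which is what removes salt-and-pepper noise (field IDs and class labels below are illustrative):

```python
from collections import Counter

def majority_vote_by_field(pixel_labels, field_ids):
    # Spatial constraint: tally per-field class votes, then relabel every
    # pixel in a field with that field's majority class.
    votes = {}
    for lab, fid in zip(pixel_labels, field_ids):
        votes.setdefault(fid, Counter())[lab] += 1
    winner = {fid: c.most_common(1)[0][0] for fid, c in votes.items()}
    return [winner[fid] for fid in field_ids]

# Field A is mostly 'garlic' with one stray 'wheat' pixel; field B is 'wheat'.
labels = ['garlic', 'garlic', 'wheat', 'garlic', 'wheat', 'wheat']
fields = ['A', 'A', 'A', 'A', 'B', 'B']
smoothed = majority_vote_by_field(labels, fields)
```
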
21 pages, 5230 KB  
Article
Attention-Guided Differentiable Channel Pruning for Efficient Deep Networks
by Anouar Chahbouni, Khaoula El Manaa, Yassine Abouch, Imane El Manaa, Badre Bossoufi, Mohammed El Ghzaoui and Rachid El Alami
Mach. Learn. Knowl. Extr. 2025, 7(4), 110; https://doi.org/10.3390/make7040110 - 29 Sep 2025
Abstract
Deploying deep learning (DL) models in real-world environments remains a major challenge, particularly under resource-constrained conditions where achieving both high accuracy and compact architectures is essential. Conventional pruning methods, while effective, often suffer from high computational overhead, accuracy degradation, or disruption of the end-to-end training process, limiting their practicality for embedded and real-time applications. We present Dynamic Attention-Guided Pruning (DAGP), a dynamic attention-guided soft channel pruning framework that overcomes these limitations by embedding learnable, differentiable pruning masks directly within convolutional neural networks (CNNs). These masks act as implicit attention mechanisms, adaptively suppressing non-informative channels during training. A progressively scheduled L1 regularization, activated after a warm-up phase, enables gradual sparsity while preserving early learning capacity. Unlike prior methods, DAGP is retraining-free, introduces minimal architectural overhead, and supports optional hard pruning for deployment efficiency. Joint optimization of classification and sparsity objectives ensures stable convergence and task-adaptive channel selection. Experiments on CIFAR-10 (VGG16, ResNet56) and PlantVillage (custom CNN) achieve up to 98.82% FLOPs reduction with accuracy gains over baselines. Real-world validation on an enhanced PlantDoc dataset for agricultural monitoring achieves 60 ms inference with only 2.00 MB RAM on a Raspberry Pi 4, confirming efficiency under field conditions. These results illustrate DAGP's potential to scale beyond agriculture to diverse edge-intelligent systems requiring lightweight, accurate, and deployable models. Full article
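The core mechanics described — sigmoid channel gates acting as soft attention, with an L1 penalty activated after a warm-up and then ramped — can be sketched in NumPy. The schedule constants and gate values are illustrative, not the paper's:

```python
import numpy as np

def soft_mask_forward(features, mask_logits):
    # features: (C, H, W). Sigmoid gates act as learnable per-channel
    # attention; near-zero gates softly suppress a channel.
    gates = 1.0 / (1.0 + np.exp(-mask_logits))
    return features * gates[:, None, None], gates

def sparsity_penalty(gates, step, warmup=100, lam=1e-3):
    # L1 on the gates, switched on only after a warm-up phase and then
    # ramped linearly, so early learning capacity is preserved.
    if step < warmup:
        return 0.0
    ramp = min(1.0, (step - warmup) / warmup)
    return lam * ramp * np.abs(gates).sum()

feats = np.ones((4, 2, 2))                  # toy feature maps: 4 channels
logits = np.array([4.0, -4.0, 4.0, -4.0])   # two channels nearly closed
out, gates = soft_mask_forward(feats, logits)
```
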
30 pages, 14129 KB  
Article
Evaluating Two Approaches for Mapping Solar Installations to Support Sustainable Land Monitoring: Semantic Segmentation on Orthophotos vs. Multitemporal Sentinel-2 Classification
by Adolfo Lozano-Tello, Andrés Caballero-Mancera, Jorge Luceño and Pedro J. Clemente
Sustainability 2025, 17(19), 8628; https://doi.org/10.3390/su17198628 - 25 Sep 2025
Abstract
This study evaluates two approaches for detecting solar photovoltaic (PV) installations across agricultural areas, emphasizing their role in supporting sustainable energy monitoring, land management, and planning. Accurate PV mapping is essential for tracking renewable energy deployment, guiding infrastructure development, assessing land-use impacts, and informing policy decisions aimed at reducing carbon emissions and fostering climate resilience. The first approach applies deep learning-based semantic segmentation to high-resolution RGB orthophotos, using the pretrained “Solar PV Segmentation” model, which achieves an F1-score of 95.27% and an IoU of 91.04%, providing highly reliable PV identification. The second approach employs multitemporal pixel-wise spectral classification using Sentinel-2 imagery, where the best-performing neural network achieved a precision of 99.22%, a recall of 96.69%, and an overall accuracy of 98.22%. Both approaches coincided in detecting 86.67% of the identified parcels, with an average surface difference of less than 6.5 hectares per parcel. The Sentinel-2 method leverages its multispectral bands and frequent revisit rate, enabling timely detection of new or evolving installations. The proposed methodology supports the sustainable management of land resources by enabling automated, scalable, and cost-effective monitoring of solar infrastructures using open-access satellite data. This contributes directly to the goals of climate action and sustainable land-use planning and provides a replicable framework for assessing human-induced changes in land cover at regional and national scales. Full article
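The two segmentation scores quoted for the orthophoto approach are linked: for pixel counts, F1 (Dice) and IoU satisfy F1 = 2·IoU / (1 + IoU). A short sketch with hypothetical pixel counts:

```python
def segmentation_scores(tp, fp, fn):
    # Pixel-level scores: IoU = TP / (TP + FP + FN),
    # F1 (Dice) = 2·TP / (2·TP + FP + FN).
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

# Hypothetical counts, not the paper's confusion matrix.
iou, f1 = segmentation_scores(tp=910, fp=40, fn=50)
```
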
42 pages, 5042 KB  
Review
A Comprehensive Review of Remote Sensing and Artificial Intelligence Integration: Advances, Applications, and Challenges
by Nikolay Kazanskiy, Roman Khabibullin, Artem Nikonorov and Svetlana Khonina
Sensors 2025, 25(19), 5965; https://doi.org/10.3390/s25195965 - 25 Sep 2025
Abstract
The integration of remote sensing (RS) and artificial intelligence (AI) has revolutionized Earth observation, enabling automated, efficient, and precise analysis of vast and complex datasets. RS techniques, leveraging satellite imagery, aerial photography, and ground-based sensors, provide critical insights into environmental monitoring, disaster response, agriculture, and urban planning. The rapid developments in AI, specifically machine learning (ML) and deep learning (DL), have significantly enhanced the processing and interpretation of RS data. AI-powered models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and reinforcement learning (RL) algorithms, have demonstrated remarkable capabilities in feature extraction, classification, anomaly detection, and predictive modeling. This paper provides a comprehensive survey of the latest developments at the intersection of RS and AI, highlighting key methodologies, applications, and emerging challenges. While AI-driven RS offers unprecedented opportunities for automation and decision-making, issues related to model generalization, explainability, data heterogeneity, and ethical considerations remain significant hurdles. The review concludes by discussing future research directions, emphasizing the need for improved model interpretability, multimodal learning, and real-time AI deployment for global-scale applications. Full article
(This article belongs to the Section Remote Sensors)
16 pages, 2888 KB  
Article
A Novel Application of Deep Learning–Based Estimation of Fish Abundance and Temporal Patterns in Agricultural Drainage Canals for Sustainable Ecosystem Monitoring
by Shigeya Maeda and Tatsuru Akiba
Sustainability 2025, 17(19), 8578; https://doi.org/10.3390/su17198578 - 24 Sep 2025
Abstract
Agricultural drainage canals provide critical habitats for fish species that are highly sensitive to agricultural practices. However, conventional monitoring methods such as capture surveys are invasive and labor-intensive, which means they can disturb fish populations and hinder long-term ecological assessment. Therefore, there is a strong need for effective and non-invasive monitoring techniques. In this study, we developed a practical method using the YOLOv8n deep learning model to automatically detect and quantify fish occurrence in underwater images from a canal in Ibaraki Prefecture, Japan. The model showed high performance in validation (F1-score = 91.6%, Precision = 95.1%, Recall = 88.4%) but exhibited reduced performance under real field conditions (F1-score = 61.6%) due to turbidity, variable lighting, and sediment resuspension. By correcting for detection errors, we estimated that approximately 7300 individuals of Pseudorasbora parva and 80 individuals of Cyprinus carpio passed through the observation site during a seven-hour monitoring period. These findings demonstrate the feasibility of deep learning-based monitoring to capture temporal patterns of fish occurrence in agricultural drainage canals. This approach provides a promising tool for sustainable aquatic ecosystem management in agricultural landscapes and emphasizes the need for further improvements in recall under turbid and low-visibility conditions. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
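One simple way to "correct for detection errors" as mentioned above is to rescale the raw detection count by the model's validation precision and recall (the paper's actual correction may differ; the raw count below is hypothetical, while precision and recall are the validation figures from the abstract):

```python
def corrected_count(detections, precision, recall):
    # detections * precision -> expected true positives among detections;
    # dividing by recall inflates for the fraction of fish the model missed.
    return detections * precision / recall

# Hypothetical raw count of 6800 detections over the monitoring period,
# with validation precision 95.1% and recall 88.4% from the abstract.
est = corrected_count(detections=6800, precision=0.951, recall=0.884)
```
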
21 pages, 7458 KB  
Article
Dynamic and Lightweight Detection of Strawberry Diseases Using Enhanced YOLOv10
by Huilong Jin, Xiangrong Ji and Wanming Liu
Electronics 2025, 14(19), 3768; https://doi.org/10.3390/electronics14193768 - 24 Sep 2025
Abstract
Strawberry cultivation faces significant challenges from pests and diseases, which are difficult to detect due to complex natural backgrounds and the high visual similarity between targets and their surroundings. This study proposes an advanced and lightweight detection algorithm, YOLO10-SC, based on the YOLOv10 model, to address these challenges. The algorithm integrates the convolutional block attention module (CBAM) to enhance feature representation by focusing on critical disease-related information while suppressing irrelevant data. Additionally, the Spatial and Channel Reconstruction Convolution (SCConv) module is incorporated into the C2f module to improve the model’s ability to distinguish subtle differences among various pest and disease types. The introduction of DySample, an ultra-lightweight dynamic upsampler, further enhances feature boundary smoothness and detail preservation, ensuring efficient upsampling with minimal computational resources. Experimental results demonstrate that YOLO10-SC outperforms the original YOLOv10 and other mainstream algorithms in precision, recall, mAP50, F1 score, and FPS while reducing model parameters, GFLOPs, and size. These improvements significantly enhance detection accuracy and efficiency, making the model well-suited for real-time applications in natural agricultural environments. The proposed algorithm offers a robust solution for strawberry pest and disease detection, contributing to the advancement of smart agriculture. Full article
37 pages, 3784 KB  
Review
A Review on the Detection of Plant Disease Using Machine Learning and Deep Learning Approaches
by Thandiwe Nyawose, Rito Clifford Maswanganyi and Philani Khumalo
J. Imaging 2025, 11(10), 326; https://doi.org/10.3390/jimaging11100326 - 23 Sep 2025
Abstract
The early and accurate detection of plant diseases is essential for ensuring food security, enhancing crop yields, and facilitating precision agriculture. Manual methods are labour-intensive and prone to error, especially under varying environmental conditions. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has advanced automated disease identification through image classification. However, challenges persist, including limited generalisability, small and imbalanced datasets, and poor real-world performance. Unlike previous reviews, this paper critically evaluates model performance in both lab and real-time field conditions, emphasising robustness, generalisation, and suitability for edge deployment. It introduces recent architectures such as GreenViT, hybrid ViT–CNN models, and YOLO-based single- and two-stage detectors, comparing their accuracy, inference speed, and hardware efficiency. The review discusses multimodal and self-supervised learning techniques to enhance detection in complex environments, highlighting key limitations, including reliance on handcrafted features, overfitting, and sensitivity to environmental noise. Strengths and weaknesses of models across diverse datasets are analysed with a focus on real-time agricultural applicability. The paper concludes by identifying research gaps and outlining future directions, including the development of lightweight architectures, integration with Deep Convolutional Generative Adversarial Networks (DCGANs), and improved dataset diversity for real-world deployment in precision agriculture. Full article
(This article belongs to the Section Image and Video Processing)
29 pages, 9358 KB  
Article
Deep Ensemble Learning and Explainable AI for Multi-Class Classification of Earthstar Fungal Species
by Eda Kumru, Aras Fahrettin Korkmaz, Fatih Ekinci, Abdullah Aydoğan, Mehmet Serdar Güzel and Ilgaz Akata
Biology 2025, 14(10), 1313; https://doi.org/10.3390/biology14101313 - 23 Sep 2025
Abstract
The current study presents a multi-class, image-based classification of eight morphologically similar macroscopic Earthstar fungal species (Astraeus hygrometricus, Geastrum coronatum, G. elegans, G. fimbriatum, G. quadrifidum, G. rufescens, G. triplex, and Myriostoma coliforme) using deep learning and explainable artificial intelligence (XAI) techniques. For the first time in the literature, these species are evaluated together, providing a highly challenging dataset due to significant visual overlap. Eight different convolutional neural network (CNN) and transformer-based architectures were employed, including EfficientNetV2-M, DenseNet121, MaxViT-S, DeiT, RegNetY-8GF, MobileNetV3, EfficientNet-B3, and MnasNet. The accuracy scores of these models ranged from 86.16% to 96.23%, with EfficientNet-B3 achieving the best individual performance. To enhance interpretability, Grad-CAM and Score-CAM methods were utilised to visualise the rationale behind each classification decision. A key novelty of this study is the design of two hybrid ensemble models: EfficientNet-B3 + DeiT and DenseNet121 + MaxViT-S. These ensembles further improved classification stability, reaching 93.71% and 93.08% accuracy, respectively. Based on metric-based evaluation, the EfficientNet-B3 + DeiT model delivered the most balanced performance, with 93.83% precision, 93.72% recall, 93.73% F1-score, 99.10% specificity, a log loss of 0.2292, and an MCC of 0.9282. Moreover, this modeling approach holds potential for monitoring symbiotic fungal species in agricultural ecosystems and supporting sustainable production strategies. This research contributes to the literature by introducing a novel framework that simultaneously emphasises classification accuracy and model interpretability in fungal taxonomy. The proposed method successfully classified morphologically similar puffball species with high accuracy, while explainable AI techniques revealed biologically meaningful insights. All evaluation metrics were computed exclusively on a 10% independent test set that was entirely separate from the training and validation phases. Future work will focus on expanding the dataset with samples from diverse ecological regions and testing the method under field conditions. Full article
(This article belongs to the Section Bioinformatics)
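Hybrid ensembles of the kind described are commonly realized as soft voting — averaging the member models' class probabilities and taking the argmax. Whether these ensembles use equal weights is an assumption; the probabilities below are illustrative:

```python
import numpy as np

def ensemble_predict(probs_a, probs_b, w=0.5):
    # Soft-voting ensemble: weighted average of two models' class probabilities.
    return w * probs_a + (1 - w) * probs_b

# Model A is confident on class 0; model B mildly prefers class 1.
pa = np.array([0.70, 0.20, 0.10])
pb = np.array([0.40, 0.45, 0.15])
avg = ensemble_predict(pa, pb)
pred = int(avg.argmax())  # ensemble decision
```
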
16 pages, 2669 KB  
Article
YOLOv7 for Weed Detection in Cotton Fields Using UAV Imagery
by Anindita Das, Yong Yang and Vinitha Hannah Subburaj
AgriEngineering 2025, 7(10), 313; https://doi.org/10.3390/agriengineering7100313 - 23 Sep 2025
Abstract
Weed detection is critical for precision agriculture, enabling targeted herbicide application to reduce costs and enhance crop health. This study utilized UAV-acquired RGB imagery from cotton fields to develop and evaluate deep learning models for weed detection. As sustainable resource management gains importance in rainfed agricultural systems, precise weed identification is essential to optimize yields and minimize herbicide use. However, distinguishing weeds from crops in complex field environments remains challenging due to their visual similarity. This research employed YOLOv7, YOLOv7-w6, and YOLOv7-x models to detect and classify weeds in cotton fields, using a dataset of 9249 images collected under real field conditions. To improve model performance, we enhanced the annotation process using LabelImg and Roboflow, ensuring accurate separation of weeds and cotton plants. Additionally, we fine-tuned key hyperparameters, including batch size, epochs, and input resolution, to optimize detection performance. YOLOv7, achieving the highest estimated accuracy at 83%, demonstrated superior weed detection sensitivity, particularly in cluttered field conditions, while YOLOv7-x, with an accuracy of 77%, offered balanced performance across both cotton and weed classes. YOLOv7-w6, with an accuracy of 63%, faced difficulties distinguishing features in shaded or cluttered soil regions. These findings highlight the potential of UAV-based deep learning approaches to support site-specific weed management in cotton fields, providing an efficient, environmentally friendly approach to weed management. Full article