Search Results (1,795)

Search Parameters:
Keywords = YOLO model

24 pages, 3706 KB  
Article
Ginseng-YOLO: Integrating Local Attention, Efficient Downsampling, and Slide Loss for Robust Ginseng Grading
by Yue Yu, Dongming Li, Shaozhong Song, Haohai You, Lijuan Zhang and Jian Li
Horticulturae 2025, 11(9), 1010; https://doi.org/10.3390/horticulturae11091010 - 25 Aug 2025
Abstract
Understory-cultivated Panax ginseng possesses high pharmacological and economic value; however, its visual quality grading predominantly relies on subjective manual assessment, constraining industrial scalability. To address challenges including fine-grained morphological variations, boundary ambiguity, and complex natural backgrounds, this study proposes Ginseng-YOLO, a lightweight and deployment-friendly object detection model for automated ginseng grade classification. The model is built on the YOLOv11n (You Only Look Once v11 nano) framework and integrates three complementary components: (1) C2-LWA, a cross-stage local window attention module that enhances discrimination of key visual features, such as primary root contours and fibrous textures; (2) ADown, a non-parametric downsampling mechanism that substitutes convolution operations with parallel pooling, markedly reducing computational complexity; and (3) Slide Loss, a piecewise IoU-weighted loss function designed to emphasize learning from samples with ambiguous or irregular boundaries. Experimental results on a curated multi-grade ginseng dataset indicate that Ginseng-YOLO achieves a Precision of 84.9%, a Recall of 83.9%, and an mAP@50 of 88.7%, outperforming YOLOv11n and other state-of-the-art variants. The model maintains a compact footprint, with 2.0 M parameters, 5.3 GFLOPs, and a 4.6 MB model size, supporting real-time deployment on edge devices. Ablation studies further confirm the synergistic contributions of the proposed modules to feature representation, architectural efficiency, and training robustness. Successful deployment on the NVIDIA Jetson Nano demonstrates practical real-time inference capability under limited computational resources. This work provides a scalable approach for intelligent grading of forest-grown ginseng and offers methodological insights for the design of lightweight models in medicinal plant and agricultural applications.
(This article belongs to the Section Medicinals, Herbs, and Specialty Crops)
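The Slide Loss described above reweights box samples by their IoU with the ground truth. As a minimal sketch, the snippet below shows the underlying IoU computation plus an illustrative piecewise weighting in the spirit of Slide Loss; the breakpoints and constants here are assumptions for illustration, not the paper's published form:

```python
import math

def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def slide_weight(iou_val, mu=0.5):
    # Illustrative piecewise weight: easy negatives keep weight 1,
    # samples just below the threshold mu get a boosted constant,
    # and easy positives decay back toward 1.
    if iou_val < mu - 0.1:
        return 1.0
    if iou_val < mu:
        return math.exp(1.0 - mu)
    return math.exp(1.0 - iou_val)
```

The weight is then multiplied into the per-sample loss so that boundary-ambiguous samples near the threshold contribute more to training.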
28 pages, 67780 KB  
Article
YOLO-GRBI: An Enhanced Lightweight Detector for Non-Cooperative Spatial Target in Complex Orbital Environments
by Zimo Zhou, Shuaiqun Wang, Xinyao Wang, Wen Zheng and Yanli Xu
Entropy 2025, 27(9), 902; https://doi.org/10.3390/e27090902 - 25 Aug 2025
Abstract
Non-cooperative spatial target detection plays a vital role in enabling autonomous on-orbit servicing and maintaining space situational awareness (SSA). However, due to the limited computational resources of onboard embedded systems and the complexity of spaceborne imaging environments, where spacecraft images often contain small targets that are easily obscured by background noise and characterized by low local information entropy, many existing object detection frameworks struggle to achieve high accuracy with low computational cost. To address this challenge, we propose YOLO-GRBI, an enhanced detection network designed to balance accuracy and efficiency. A reparameterized ELAN backbone is adopted to improve feature reuse and facilitate gradient propagation. The BiFormer and C2f-iAFF modules are introduced to enhance attention to salient targets, reducing false positives and false negatives. GSConv and VoV-GSCSP modules are integrated into the neck to reduce convolution operations and computational redundancy while preserving information entropy. YOLO-GRBI employs the focal loss for classification and confidence prediction to address class imbalance. Experiments on a self-constructed spacecraft dataset show that YOLO-GRBI outperforms the baseline YOLOv8n, achieving a 4.9% increase in mAP@0.5 and a 6.0% boost in mAP@0.5:0.95, while further reducing model complexity and inference latency.
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
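The focal loss used above for classification and confidence prediction down-weights well-classified examples so training concentrates on hard ones. A minimal binary form, following Lin et al.'s formulation (the α and γ defaults below are the commonly used values, an assumption rather than this paper's settings):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    # p: predicted probability of the positive class; y: label in {0, 1}.
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss of easy, confident predictions.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

Setting gamma = 0 recovers α-weighted cross-entropy; larger gamma suppresses easy examples more aggressively, which is what mitigates class imbalance.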
23 pages, 16577 KB  
Article
SLD-YOLO: A Lightweight Satellite Component Detection Algorithm Based on Multi-Scale Feature Fusion and Attention Mechanism
by Yonghao Li, Hang Yang, Bo Lü and Xiaotian Wu
Remote Sens. 2025, 17(17), 2950; https://doi.org/10.3390/rs17172950 - 25 Aug 2025
Abstract
Space-based on-orbit servicing missions impose stringent requirements for precise identification and localization of satellite components, while existing detection algorithms face the dual challenges of insufficient accuracy and excessive computational resource consumption. This paper proposes SLD-YOLO, a lightweight satellite component detection model based on an improved YOLO11, balancing accuracy and efficiency through structural optimization and lightweight design. First, we design RLNet, a lightweight backbone network that employs reparameterization mechanisms and hierarchical feature fusion strategies to reduce model complexity by 19.72% while maintaining detection accuracy. Second, we propose the CSP-HSF multi-scale feature fusion module, used in conjunction with PSConv downsampling, to effectively improve the model’s perception of multi-scale objects. Finally, we introduce SimAM, a parameter-free attention mechanism, into the detection head to further improve feature representation capability. Experiments on the UESD dataset demonstrate that SLD-YOLO achieves measurable improvements over the baseline YOLO11s model across five satellite component detection categories: mAP50 increases by 2.22% to 87.44% and mAP50:95 improves by 1.72% to 63.25%, while computational complexity decreases by 19.72%, the parameter count is reduced by 25.93%, the model file size is compressed by 24.59%, and inference speed reaches 90.4 FPS. Validation experiments on the UESD_edition2 dataset further confirm the model’s robustness. This research provides an effective solution for target detection tasks in resource-constrained space environments, demonstrating practical engineering application value.
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
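SimAM, the parameter-free attention used in the detection head above, derives a per-position weight from a closed-form energy function instead of learned parameters. A NumPy sketch of the commonly published formulation (the λ default is an assumption; real implementations operate on batched 4-D tensors):

```python
import numpy as np

def simam(x, lam=1e-4):
    # x: feature map of shape (C, H, W); attention is computed per channel.
    _, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                        # squared deviation per position
    v = d.sum(axis=(1, 2), keepdims=True) / n
    e_inv = d / (4.0 * (v + lam)) + 0.5      # inverse energy: distinctive neurons score higher
    return x * (1.0 / (1.0 + np.exp(-e_inv)))  # sigmoid gate, no learned weights
```

Because e_inv is always at least 0.5, every gate lies above sigmoid(0.5) ≈ 0.62; positions far from the channel mean are amplified most.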
19 pages, 2069 KB  
Article
Learning Guided Binary PSO Algorithm for Feature Selection and Reconstruction of Ultrasound Contrast Images in Endometrial Region Detection
by Zihao Zhang, Yongjun Liu, Haitong Zhao, Yu Zhou, Yifei Xu and Zhengyu Li
Biomimetics 2025, 10(9), 567; https://doi.org/10.3390/biomimetics10090567 - 25 Aug 2025
Abstract
Accurate identification of the endometrial region is critical for the early detection of endometrial lesions. However, current detection models still face two major challenges when processing endometrial imaging data: (1) in complex and noisy environments, recognition accuracy remains limited, partly due to insufficient exploitation of the color information within the images; (2) traditional two-dimensional PCA-based (2DPCA-based) feature selection methods have limited capacity to capture and represent key characteristics of the endometrial region. To address these challenges, this paper proposes a novel algorithm named Feature-Level Image Fusion and Improved Swarm Intelligence Optimization Algorithm (FLFSI), which integrates a learning-guided binary particle swarm optimization (BPSO) strategy with an image feature selection and reconstruction framework to enhance the detection of endometrial regions in clinical ultrasound images. Specifically, FLFSI improves feature selection accuracy and image reconstruction quality, thereby enhancing the overall performance of region recognition tasks. First, we enhance endometrial image representation by incorporating feature engineering techniques that combine structural and color information, improving reconstruction quality and emphasizing critical regional features. Second, the BPSO algorithm is introduced into the feature selection stage, improving feature selection accuracy and global search ability while effectively reducing the impact of redundant features. Furthermore, we refined the BPSO design to accelerate convergence and enhance optimization efficiency during the selection process. The proposed FLFSI algorithm can be integrated into mainstream detection models such as YOLO11 and YOLOv12. When applied to YOLO11, FLFSI achieves 96.6% Box mAP and 87.8% Mask mAP. With YOLOv12, it further improves the Mask mAP to 88.8%, demonstrating excellent cross-model adaptability and robust detection performance. Extensive experimental results validate the effectiveness and broad applicability of FLFSI in enhancing endometrial region detection for clinical ultrasound image analysis.
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)
16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving an mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems.
(This article belongs to the Section Computer Vision and Pattern Recognition)
17 pages, 18344 KB  
Article
A Checkerboard Corner Detection Method for Infrared Thermal Camera Calibration Based on Physics-Informed Neural Network
by Zhen Zuo, Zhuoyuan Wu, Junyu Wei, Peng Wu, Siyang Huang and Zhangjunjie Cheng
Photonics 2025, 12(9), 847; https://doi.org/10.3390/photonics12090847 - 25 Aug 2025
Abstract
Control point detection is a critical initial step in camera calibration. For checkerboard corner points, detection is based on inferences about local gradients in the image. Infrared (IR) imaging, however, poses challenges due to its low resolution and low signal-to-noise ratio, hindering the identification of clear local features. This study proposes a physics-informed neural network (PINN) based on the YOLO target detection model to detect checkerboard corner points in infrared images, aiming to enhance the calibration accuracy of infrared thermal cameras. This method first optimizes the YOLO model used for corner detection based on the idea of enhancing image gradient information extraction and then incorporates camera physical information into the training process so that the model can learn the intrinsic constraints between corner coordinates. Camera physical information is applied to the loss calculation process during training, avoiding the impact of label errors on the model and further improving detection accuracy. Compared with the baselines, the proposed method reduces the root mean square error (RMSE) by at least 30% on average across five test sets, indicating that the PINN-based corner detection method can effectively handle low-quality infrared images and achieve more accurate camera calibration.
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)
20 pages, 6887 KB  
Article
EMR-YOLO: A Multi-Scale Benthic Organism Detection Algorithm for Degraded Underwater Visual Features and Computationally Constrained Environments
by Dehua Zou, Songhao Zhao, Jingchun Zhou, Guangqiang Liu, Zhiying Jiang, Minyi Xu, Xianping Fu and Siyuan Liu
J. Mar. Sci. Eng. 2025, 13(9), 1617; https://doi.org/10.3390/jmse13091617 - 24 Aug 2025
Abstract
Marine benthic organism detection (BOD) is essential for underwater robotics and seabed resource management but suffers from motion blur, perspective distortion, and background clutter in dynamic underwater environments. To address visual feature degradation and computational constraints, this paper introduces EMR-YOLO, a deep-learning-based multi-scale BOD method. To handle the diverse sizes and morphologies of benthic organisms, we propose an Efficient Detection Sparse Head (EDSHead), which combines a unified attention mechanism and dynamic sparse operators to enhance spatial modeling. For robust feature extraction under resource limitations, we design a lightweight Multi-Branch Fusion Downsampling (MBFDown) module that utilizes cross-stage feature fusion and a multi-branch architecture to capture rich gradient information. Additionally, a Regional Two-Level Routing Attention (RTRA) mechanism is developed to mitigate background noise and sharpen focus on target regions. The experimental results demonstrate that EMR-YOLO achieves improvements of 2.33%, 1.50%, and 4.12% in AP, AP50, and AP75, respectively, outperforming state-of-the-art methods while maintaining efficiency.
18 pages, 4684 KB  
Article
F3-YOLO: A Robust and Fast Forest Fire Detection Model
by Pengyuan Zhang, Xionghan Zhao, Xubing Yang, Ziqian Zhang, Changwei Bi and Li Zhang
Forests 2025, 16(9), 1368; https://doi.org/10.3390/f16091368 - 23 Aug 2025
Abstract
Forest fires not only destroy vegetation and directly decrease forested areas, but also significantly impair forest stand structures and habitat conditions, ultimately leading to imbalances within the entire forest ecosystem. Accurate forest fire detection is therefore critical for ecological safety and for protecting lives and property. However, existing algorithms often struggle to detect flames and smoke in complex scenarios such as sparse smoke, weak flames, or vegetation occlusion, and their high computational costs hinder practical deployment. To address these issues, this paper introduces F3-YOLO, a robust and fast forest fire detection model based on YOLOv12. F3-YOLO introduces conditionally parameterized convolution (CondConv) to enhance representational capacity without a substantial increase in computational cost, improving fire detection against complex backgrounds. Additionally, a frequency-domain-based self-attention solver (FSAS) is integrated to combine high-frequency and high-contrast information, better handling real-world detection scenarios involving both small distant targets in aerial imagery and large nearby targets on the ground. To provide more stable structural cues, we propose the Focaler Minimum Point Distance Intersection over Union loss (FMPDIoU), which helps the model capture irregular and blurred boundaries caused by vegetation occlusion, flame jitter, and smoke dispersion. To enable efficient deployment on edge devices, we also apply structured pruning to reduce computational overhead. Compared to YOLOv12 and other mainstream methods, F3-YOLO achieves superior accuracy and robustness, attaining the highest mAP@50 (68.5%) among all compared methods on the dataset while requiring only 5.4 GFLOPs and a compact 2.6 M parameters. These attributes make it a reliable, low-latency solution well suited for real-time forest fire early warning systems.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
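CondConv, cited above, makes the convolution kernel input-dependent: a routing function computes per-example mixing weights over a small bank of expert kernels, and the effective kernel is their weighted sum. A toy NumPy sketch of just the mixing step (the shapes, the routing matrix, and the function name are all illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def condconv_kernel(x, experts, routing):
    # x: input feature map (C, H, W); experts: (E, *kernel_shape) stack of
    # expert kernels; routing: (C, E) learned routing matrix (illustrative).
    pooled = x.mean(axis=(1, 2))                   # global average pool -> (C,)
    r = 1.0 / (1.0 + np.exp(-(pooled @ routing)))  # per-expert sigmoid weights
    # Example-dependent kernel: weighted sum of the expert kernels.
    return np.tensordot(r, experts, axes=(0, 0))
```

The resulting kernel is then used in an ordinary convolution, so capacity grows with the number of experts while the per-example compute stays close to a single convolution.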
20 pages, 4409 KB  
Article
Optimization of Object Detection Network Architecture for High-Resolution Remote Sensing
by Hongyan Shi, Xiaofeng Bai and Chenshuai Bai
Algorithms 2025, 18(9), 537; https://doi.org/10.3390/a18090537 - 23 Aug 2025
Abstract
(1) Objective: this study addresses key problems in remote-sensing image target detection, such as insufficient detection accuracy for small targets and interference from complex backgrounds. (2) Methods: the YOLO-KRM model is proposed by optimizing the YOLOv10x architecture. First, a new backbone network structure is constructed: the C2f in the third layer of the backbone is replaced with a Kolmogorov–Arnold network, improving the model's ability to approximate complex nonlinear functions in high-dimensional space. Then, the C2f in the fifth layer of the backbone is replaced with receptive-field attention convolution, enhancing the model's ability to capture global context information. In addition, the C2f and C2fCIB structures in the upsampling path of the neck network are replaced with a hybrid local-channel attention module, significantly improving the model's feature representation. (3) Results: to validate the effectiveness of YOLO-KRM, detailed experiments were conducted on two remote-sensing datasets, RSOD and NWPU VHR-10. Compared with the original YOLOv10x, the mAP@50 of YOLO-KRM on the two datasets increases by 1.77% and 2.75%, respectively, and mAP@50:95 increases by 3.82% and 5.23%, respectively. (4) Conclusions: the improvements successfully enhance target-detection accuracy in remote-sensing images, confirming the model's effectiveness against complex backgrounds and small targets, especially in high-resolution remote-sensing imagery.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
27 pages, 5369 KB  
Article
High-Performance Automated Detection of Sheep Binocular Eye Temperatures and Their Correlation with Rectal Temperature
by Yadan Zhang, Ying Han, Xiaocong Li, Xueting Zeng, Waleid Mohamed EL-Sayed Shakweer, Gang Liu and Jun Wang
Animals 2025, 15(17), 2475; https://doi.org/10.3390/ani15172475 - 22 Aug 2025
Abstract
Although rectal temperature is reliable, its measurement requires manual handling and causes stress to animals. Infrared thermography (IRT) provides a non-contact alternative but often ignores bilateral eye temperature differences. This study presents an E-S-YOLO11n model for the automated detection of the binocular regions of sheep, which achieves remarkable performance: a precision of 98.2%, recall of 98.5%, mAP@0.5 of 99.40%, F1 score of 98.35%, 322.58 FPS, 7.27 M parameters, a 3.97 MB model size, and 1.38 GFLOPs. Right and left eye temperatures exhibit a strong correlation (r = 0.8076, p < 0.0001). However, the eye temperatures show only a very weak correlation with rectal temperature (right eye: r = 0.0852; left eye: r = −0.0359), and neither reaches statistical significance. Rectal temperature is 7.37% and 7.69% higher than the right and left eye temperatures, respectively. Additionally, the right eye temperature is slightly higher than the left (p < 0.01). The study demonstrates the feasibility of combining IRT and deep learning for non-invasive eye temperature monitoring, although environmental factors may limit its use as a proxy for rectal temperature. These results support the development of efficient thermal monitoring tools for precision animal husbandry.
24 pages, 17793 KB  
Article
Small Object Detection in Agriculture: A Case Study on Durian Orchards Using EN-YOLO and Thermal Fusion
by Ruipeng Tang, Tan Jun, Qiushi Chu, Wei Sun and Yili Sun
Plants 2025, 14(17), 2619; https://doi.org/10.3390/plants14172619 - 22 Aug 2025
Abstract
Durian is a major tropical crop in Southeast Asia, but its yield and quality are severely impacted by a range of pests and diseases. Manual inspection remains the dominant detection method but suffers from high labor intensity, low accuracy, and difficulty in scaling. To address these challenges, this paper proposes EN-YOLO, a novel enhanced YOLO-based deep learning model that integrates an EfficientNet backbone and multimodal attention mechanisms for precise detection of durian pests and diseases. The model removes redundant feature layers and introduces a large-span residual edge to preserve key spatial information. Furthermore, a multimodal input strategy incorporating RGB, near-infrared, and thermal imaging is used to enhance robustness under variable lighting and occlusion. Experimental results on real orchard datasets demonstrate that EN-YOLO outperforms YOLOv8 (You Only Look Once version 8), YOLOv5-EB (You Only Look Once version 5 with Efficient Backbone), and Fieldsentinel-YOLO in detection accuracy, generalization, and small-object recognition. It achieves a 95.3% counting accuracy and shows superior performance in ablation and cross-scene tests. The proposed system also supports real-time drone deployment and integrates an expert knowledge base for intelligent decision support. This work provides an efficient, interpretable, and scalable solution for automated pest and disease management in smart agriculture.
(This article belongs to the Special Issue Plant Protection and Integrated Pest Management)
25 pages, 5271 KB  
Article
Improving YOLO-Based Plant Disease Detection Using αSILU: A Novel Activation Function for Smart Agriculture
by Duyen Thi Nguyen, Thanh Dang Bui, Tien Manh Ngo and Uoc Quang Ngo
AgriEngineering 2025, 7(9), 271; https://doi.org/10.3390/agriengineering7090271 - 22 Aug 2025
Abstract
The precise identification of plant diseases is essential for improving agricultural productivity and reducing reliance on human expertise. Deep learning frameworks in the YOLO series have demonstrated significant potential for real-time detection of plant diseases. Among the various factors influencing model performance, activation functions play an important role in improving both accuracy and efficiency. This study proposes αSiLU, a modified activation function developed to optimize the performance of YOLOv11n for plant disease-detection tasks. By integrating a scaling factor α into the standard SiLU function, αSiLU improves the effectiveness of feature extraction. Experiments conducted on two plant disease datasets, tomato and cucumber, demonstrate that YOLOv11n models equipped with αSiLU outperform their counterparts using the conventional SiLU function. Specifically, with α = 1.05, mAP@50 increased by 1.1% for tomato and 0.2% for cucumber, while mAP@50–95 improved by 0.7% and 0.2%, respectively. Additional evaluations across various YOLO versions confirmed consistently superior performance, and notable enhancements in precision, recall, and F1-score were observed across multiple configurations. Crucially, αSiLU achieves these gains with minimal effect on inference speed, making it well suited to practical agricultural deployments, particularly as hardware continues to advance. This study highlights the efficiency of αSiLU for plant disease detection, showing the potential of deep learning models in intelligent agriculture.
26 pages, 15990 KB  
Article
YOLO-LCE: A Lightweight YOLOv8 Model for Agricultural Pest Detection
by Xinyu Cen, Shenglian Lu and Tingting Qian
Agronomy 2025, 15(9), 2022; https://doi.org/10.3390/agronomy15092022 - 22 Aug 2025
Abstract
Agricultural pest detection through image analysis is a key technology in automated pest-monitoring systems. However, some existing pest detection models suffer from excessive model complexity. This study proposes YOLO-LCE, a lightweight model based on the YOLOv8 architecture for agricultural pest detection. First, a Lightweight Complementary Residual (LCR) module is proposed to extract complementary features through a dual-branch structure, enhancing detection performance while reducing model complexity. Additionally, Efficient Partial Convolution (EPConv) is proposed as a downsampling operator; it adopts an asymmetric channel-splitting strategy to utilize features efficiently. Furthermore, the Ghost module is introduced into the detection head to reduce computational overhead. Finally, WIoUv3 is used to further improve detection performance. YOLO-LCE is evaluated on the Pest24 dataset. Compared to the baseline model, YOLO-LCE improves mAP50 by 1.7 percentage points, mAP50-95 by 0.4 percentage points, and precision by 0.5 percentage points, while reducing parameters by 43.9% and GFLOPs by 33.3%. These metrics demonstrate that YOLO-LCE improves detection accuracy while reducing computational complexity, providing an effective solution for lightweight pest detection.
(This article belongs to the Section Pest and Disease Management)
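The mAP50 and mAP50-95 figures quoted throughout these results average, over classes, the area under each class's precision-recall curve at the given IoU threshold(s). A minimal, non-interpolated average-precision sketch (real evaluators such as COCO's add interpolation and per-image matching rules; this is a simplified illustration):

```python
def average_precision(scores, matched, num_gt):
    # scores: confidence per detection; matched[i]: True if detection i
    # matched an unclaimed ground-truth box at the IoU threshold.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = prev_recall = 0.0
    for i in order:                      # sweep detections by confidence
        if matched[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # rectangle rule
        prev_recall = recall
    return ap
```

mAP50 is the mean of this quantity over classes at IoU 0.5; mAP50-95 additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.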
24 pages, 26968 KB  
Article
Using a High-Precision YOLO Surveillance System for Gun Detection to Prevent Mass Shootings
by Jonathan Hsueh and Chao-Tung Yang
AI 2025, 6(9), 198; https://doi.org/10.3390/ai6090198 - 22 Aug 2025
Abstract
Mass shootings, loosely defined as violent crimes typically involving four or more casualties by firearm, have become increasingly frequent; organized and speedy responses from police are necessary to mitigate harm and neutralize the perpetrator. Recent, widely publicized police responses to mass shooting events have been criticized by the media, government, and public. With advancements in artificial intelligence, specifically single-shot detection (SSD) models, computer programs can detect harmful weapons within efficient time frames. We utilized YOLO (You Only Look Once), a convolutional-neural-network-based SSD, and developed our detection system with versions 5, 7, 8, 9, 10, and 11. For our data, we used a Roboflow dataset containing almost 17,000 images of real-life handgun scenarios, designed to skew towards positive instances. We trained each model on our dataset with varying hyperparameters, conducting a randomized trial, and evaluated performance based on precision metrics. Using a Python-based design, we tested our model’s capabilities for surveillance functions. Our experimental results showed that the best-performing model was YOLOv10s, with an mAP@50 (mean average precision at an IoU threshold of 0.5) of 98.2% on our dataset. Our model showed potential in edge computing settings.
29 pages, 23079 KB  
Article
An Aircraft Skin Defect Detection Method with UAV Based on GB-CPP and INN-YOLO
by Jinhong Xiong, Peigen Li, Yi Sun, Jinwu Xiang and Haiting Xia
Drones 2025, 9(9), 594; https://doi.org/10.3390/drones9090594 - 22 Aug 2025
Abstract
To address the problems of low coverage rate and low detection accuracy in UAV-based aircraft skin defect detection under complex real-world conditions, this paper proposes a method combining a Greedy-based Breadth-First Search Coverage Path Planning (GB-CPP) approach with an improved YOLOv11 architecture (INN-YOLO). GB-CPP generates collision-free, near-optimal flight paths over the 3D aircraft surface using a discrete grid map. INN-YOLO enhances detection capability by reconstructing the neck with a BiFPN (Bidirectional Feature Pyramid Network) for better feature fusion, integrating the SimAM (Simple Attention Mechanism) with convolution for efficient small-target extraction, and employing RepVGG within the C3k2 layer to improve feature learning and speed. The model is deployed on a Jetson Nano for real-time edge inference. Results show that GB-CPP achieves 100% surface coverage with a redundancy rate not exceeding 6.74%. INN-YOLO was experimentally validated on three public datasets (10,937 images) and a self-collected dataset (1559 images), achieving mAP@0.5 scores of 42.30%, 84.10%, 56.40%, and 80.30%, improvements of 10.70%, 2.50%, 3.20%, and 6.70% over the baseline models, respectively. The proposed GB-CPP and INN-YOLO framework enables efficient, high-precision, real-time UAV-based aircraft skin defect detection.
(This article belongs to the Section Artificial Intelligence in Drones (AID))