Search Results (625)

Search Parameters:
Keywords = YOLOv8n

25 pages, 73928 KB  
Article
Attention-Guided Edge-Optimized Network for Real-Time Detection and Counting of Pre-Weaning Piglets in Farrowing Crates
by Ning Kong, Tongshuai Liu, Guoming Li, Lei Xi, Shuo Wang and Yuepeng Shi
Animals 2025, 15(17), 2553; https://doi.org/10.3390/ani15172553 - 30 Aug 2025
Abstract
Accurate, real-time, and cost-effective detection and counting of pre-weaning piglets are critical for improving piglet survival rates. However, achieving this remains technically challenging due to high computational demands, frequent occlusion, social behaviors, and cluttered backgrounds in commercial farming environments. To address these challenges, this study proposes a lightweight and attention-enhanced piglet detection and counting network based on an improved YOLOv8n architecture. The design includes three key innovations: (i) the standard C2f modules in the backbone were replaced with a novel, efficient Multi-Scale Spatial Pyramid Attention (MSPA) module to enhance the multi-scale feature representation while maintaining a low computational cost; (ii) an improved Gather-and-Distribute (GD) mechanism was incorporated into the neck to facilitate feature fusion and accelerate inference; and (iii) the detection head and the sample assignment strategy were optimized to better align the classification and localization tasks, thereby improving the overall performance. Experiments on the custom dataset demonstrated the model’s superiority over state-of-the-art counterparts, achieving 88.5% precision and a 93.8% mAP@0.5. Furthermore, ablation studies showed that the model reduced the parameters, floating point operations (FLOPs), and model size by 58.45%, 46.91%, and 56.45% compared to those of the baseline YOLOv8n, respectively, while achieving a 2.6% improvement in the detection precision and a 4.41% reduction in the counting MAE. The trained model was deployed on a Raspberry Pi 4B with ncnn to verify the effectiveness of the lightweight design, reaching an average inference speed of under 87 ms per image. These findings confirm that the proposed method offers a practical, scalable solution for intelligent pig farming, combining high accuracy, efficiency, and real-time performance in resource-limited environments. Full article
(This article belongs to the Section Pigs)
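The counting MAE cited above is, under the usual convention (an assumption here, since the abstract does not define it), the mean absolute difference between predicted and ground-truth per-image counts. A minimal sketch:

```python
def counting_mae(pred_counts, true_counts):
    """Mean absolute error between predicted and ground-truth per-image counts."""
    assert len(pred_counts) == len(true_counts)
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(pred_counts)

# Example: predicted vs. ground-truth piglet counts over four frames
print(counting_mae([9, 11, 10, 8], [10, 11, 9, 8]))  # → 0.5
```

A 4.41% reduction in this metric therefore means the average per-image miscount shrank by that fraction relative to the baseline.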
24 pages, 21436 KB  
Article
ESG-YOLO: An Efficient Object Detection Algorithm for Transplant Quality Assessment of Field-Grown Tomato Seedlings Based on YOLOv8n
by Xinhui Wu, Zhenfa Dong, Can Wang, Ziyang Zhu, Yanxi Guo and Shuhe Zheng
Agronomy 2025, 15(9), 2088; https://doi.org/10.3390/agronomy15092088 - 29 Aug 2025
Abstract
Intelligent detection of tomato seedling transplant quality represents a core technology for advancing agricultural automation. However, in practical applications, existing algorithms still face numerous technical challenges, particularly with prominent issues of false detections and missed detections during recognition. To address these challenges, we developed the ESG-YOLO object detection model and successfully deployed it on edge devices, enabling real-time assessment of tomato seedling transplanting quality. Our methodology integrates three key innovations: First, an EMA (Efficient Multi-scale Attention) module is embedded within the YOLOv8 neck network to suppress interference from redundant information and enhance morphological focus on seedlings. Second, the feature fusion network is reconstructed using a GSConv-based Slim-neck architecture, achieving a lightweight neck structure compatible with edge deployment. Finally, optimization employs the GIoU (Generalized Intersection over Union) loss function to precisely localize seedling position and morphology, thereby reducing false detections and missed detections. The experimental results demonstrate that our ESG-YOLO model achieves a mean average precision (mAP) of 97.4%, surpassing lightweight models including YOLOv3-tiny, YOLOv5n, YOLOv7-tiny, and YOLOv8n in precision, with improvements of 9.3, 7.2, 5.7, and 2.2%, respectively. Notably, for detecting key yield-impacting categories such as “exposed seedlings” and “missed hills”, the average precision (AP) values reach 98.8 and 94.0%, respectively. To validate the model’s effectiveness on edge devices, the ESG-YOLO model was deployed on an NVIDIA Jetson TX2 NX platform, achieving a frame rate of 18.0 FPS for efficient detection of tomato seedling transplanting quality. This model provides technical support for transplanting performance assessment, enabling quality control and enhanced vegetable yield, thus actively contributing to smart agriculture initiatives.
Full article
(This article belongs to the Section Precision and Digital Agriculture)
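The GIoU loss mentioned in this abstract extends plain IoU with a penalty based on the smallest box enclosing both boxes, so even non-overlapping predictions receive a useful gradient. A minimal sketch of the standard GIoU formulation (the training loss is then 1 − GIoU):

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C; the penalty grows with the empty area inside C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # → 1.0 (identical boxes)
print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap: inter=1, union=7, |C|=9
```

Unlike IoU, this score goes negative for disjoint boxes, which is what makes the loss informative before the prediction first overlaps the target.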
14 pages, 3021 KB  
Article
An Integrated Deep Learning Approach for Poultry Disease Detection and Classification Based on Analysis of Chicken Manure Images
by Anjan Dhungana, Xiao Yang, Bidur Paneru, Samin Dahal, Guoyu Lu and Lilong Chai
AgriEngineering 2025, 7(9), 278; https://doi.org/10.3390/agriengineering7090278 - 29 Aug 2025
Abstract
Poultry diseases threaten animal welfare and productivity, especially in cage-free systems where communal environments increase disease transmission risks. Traditional diagnostic methods, though accurate, are often labor-intensive, time-consuming, and not suitable for continuous monitoring. This study aimed to develop a web-based disease screening tool to make this process faster and more accurate using fecal images. A publicly available dataset consisting of 6812 PCR-verified images categorized into Coccidiosis, Newcastle Disease (NCD), Salmonella, and Healthy from commercial farms in Tanzania was used in this study. Augmentation was used to address the imbalance present in the dataset, with NCD underrepresented (376 images) compared to other classes (>2000 images). Five YOLOv11 detection models were trained, with YOLO11n selected due to its high mean average precision (mAP@0.5 = 0.881). For classification, EfficientNet-B0 was chosen over the EfficientNet-B1 variant because of its higher accuracy (99.12% vs. 98.54% for B1). Despite high class imbalance, B0 had higher precision than B1 for the underrepresented NCD class (1.00 for B0 vs. 0.88 for B1). The system achieved an average total inference time of 25.8 milliseconds, demonstrating real-time capabilities. Field testing, expanding datasets across different regions, and incorporating additional diseases are required to further validate and enhance the robustness of the system. Full article
(This article belongs to the Section Livestock Farming Technology)
23 pages, 5394 KB  
Article
Spatially Adaptive and Distillation-Enhanced Mini-Patch Attacks for Remote Sensing Image Object Detection
by Zhihan Yang, Xiaohui Li, Linchao Zhang and Yingjie Xu
Electronics 2025, 14(17), 3433; https://doi.org/10.3390/electronics14173433 - 28 Aug 2025
Abstract
Despite the remarkable success of Deep Neural Networks (DNNs) in Remote Sensing Image (RSI) object detection, they remain vulnerable to adversarial attacks. Numerous adversarial attack methods have been proposed for RSI; however, adding a single large-scale adversarial patch to certain high-value targets, which are typically large in physical scale and irregular in shape, is both costly and inflexible. To address this issue, we propose a strategy of using multiple compact patches. This approach introduces two fundamental challenges: (1) how to optimize patch placement for a synergistic attack effect, and (2) how to retain strong adversarial potency within size-constrained mini-patches. To overcome these challenges, we introduce the Spatially Adaptive and Distillation-Enhanced Mini-Patch Attack (SDMPA) framework, which consists of two key modules: (1) an Adaptive Sensitivity-Aware Positioning (ASAP) module, which resolves the placement challenge by fusing the model’s attention maps from both an explainable and an adversarial perspective to identify optimal patch locations, and (2) a Distillation-based Mini-Patch Generation (DMPG) module, which tackles the potency challenge by leveraging knowledge distillation to transfer adversarial information from large teacher patches to small student patches. Extensive experiments on the RSOD and MAR20 datasets demonstrate that SDMPA significantly outperforms existing patch-based attack methods. For example, against YOLOv5n on the RSOD dataset, SDMPA achieves an Attack Success Rate (ASR) of 88.3% using only three small patches, surpassing other patch attack methods. Full article
33 pages, 8300 KB  
Article
Farmland Navigation Line Extraction Method Based on RS-LineNet Network and Root Subordination Relationship Optimization
by Yanlei Xu, Zhen Lu, Jian Li, Yuting Zhai, Chao Liu, Xinyu Zhang and Yang Zhou
Agronomy 2025, 15(9), 2069; https://doi.org/10.3390/agronomy15092069 - 28 Aug 2025
Abstract
Navigation line extraction is vital for visual navigation with agricultural machinery. The current methods primarily utilize plant canopy detection frames to extract feature points for navigation line fitting. However, this approach is highly susceptible to environmental changes, causing position instability and reduced extraction accuracy. To address this problem, this study aims to develop a robust navigation line extraction method that overcomes canopy-based feature instability. We propose extracting feature points from root detection frames for navigation line fitting. Compared to canopy points, root feature point positions remain more stable under natural interference and are less prone to fluctuations. A dataset of corn crop row images under multiple growth environments was collected. Based on YOLOv8n (You Only Look Once version 8, nano model), we proposed the RS-LineNet lightweight model and introduced a root subordination relationship filtering algorithm to further improve detection precision. Compared with the YOLOv8n model, RS-LineNet achieves 4.2% higher precision, 16.2% improved recall, and an 11.8% increase in mean average precision (mAP50), while reducing the model weight and parameters to 32% and 23% of the original. Navigation lines extracted under different environments exhibit a 0.8° average angular error, which is 3.1° lower than that of canopy-based methods. On Jetson TX2, the frame rate exceeds 12 FPS, meeting practical application requirements. Full article
(This article belongs to the Section Precision and Digital Agriculture)
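The line-fitting step described above — regression over root detection-frame feature points — can be sketched generically as follows (an illustration only; the paper's exact fitting procedure and coordinate conventions are not given in the abstract). Since crop rows run roughly vertically in the image, x is regressed on y to avoid near-infinite slopes:

```python
def fit_navigation_line(centers):
    """Least-squares line x = a*y + b through detection-box center points.
    Regressing x on y suits near-vertical crop rows in image coordinates."""
    n = len(centers)
    mean_x = sum(x for x, _ in centers) / n
    mean_y = sum(y for _, y in centers) / n
    sxy = sum((y - mean_y) * (x - mean_x) for x, y in centers)
    syy = sum((y - mean_y) ** 2 for _, y in centers)
    a = sxy / syy
    b = mean_x - a * mean_y
    return a, b

# Root centers (x, y) lying exactly on x = 0.5*y + 100
a, b = fit_navigation_line([(100, 0), (150, 100), (200, 200), (250, 300)])
print(a, b)  # → 0.5 100.0
```

The reported angular error then compares the fitted slope against a ground-truth line annotated in the same coordinates.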
24 pages, 5170 KB  
Article
EIM-YOLO: A Defect Detection Method for Metal-Painted Surfaces on Electrical Sealing Covers
by Zhanjun Wu and Likang Yang
Appl. Sci. 2025, 15(17), 9380; https://doi.org/10.3390/app15179380 - 26 Aug 2025
Abstract
Electrical sealing covers are widely used in various industrial equipment, where the quality of their metal-painted surfaces directly affects product appearance and long-term reliability. Micro-defects such as pores, particles, scratches, and uneven paint coatings can compromise protective performance during manufacturing. In the rapidly growing new energy vehicle (NEV) industry, battery charging-port sealing covers are critical components, requiring precise defect detection due to exposure to harsh environments, like extreme weather and dust-laden conditions. Even minor defects can lead to water ingress or foreign matter accumulation, affecting vehicle performance and user safety. Conventional manual or rule-based inspection methods are inefficient, and the existing deep learning models struggle with detecting minor and subtle defects. To address these challenges, this study proposes EIM-YOLO, an improved object detection framework for the automated detection of metal-painted surface defects on electrical sealing covers. We propose a novel lightweight convolutional module named C3PUltraConv, which reduces model parameters by 3.1% while improving mAP50 by 1% and recall by 3.2%. The backbone integrates RFAConv for enhanced feature perception, and the neck architecture uses an optimized BiFPN-concat structure with adaptive weight learning for better multi-scale feature fusion. Experimental validation on a real-world industrial dataset collected using industrial cameras shows that EIM-YOLO achieves a precision of 71% (an improvement of 3.4%), with mAP50 reaching 64.8% (a growth of 2.6%), and mAP50–95 improving by 1.2%. Maintaining real-time detection capability, EIM-YOLO significantly outperforms the existing baseline models, offering a more accurate solution for automated quality control in NEV manufacturing. Full article
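The BiFPN-concat structure with adaptive weight learning mentioned above typically follows EfficientDet's "fast normalized fusion," in which each input feature map is scaled by a learned non-negative weight normalized over all inputs. A toy 1-D sketch of that mechanism (the paper's exact variant may differ):

```python
def weighted_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion over same-shaped feature vectors:
    out = sum_i (relu(w_i) / (eps + sum_j relu(w_j))) * feat_i."""
    w = [max(0.0, wi) for wi in weights]      # ReLU keeps learned weights non-negative
    total = sum(w) + eps                       # eps avoids division by zero
    fused = [0.0] * len(features[0])
    for feat, wi in zip(features, w):
        for i, v in enumerate(feat):
            fused[i] += (wi / total) * v
    return fused

# Two 1-D "feature maps" fused with weights 3 and 1 → roughly a 0.75/0.25 mix
print(weighted_fusion([[4.0, 8.0], [0.0, 0.0]], [3.0, 1.0]))
```

The normalized weights act like a cheap, learnable attention over input scales, which is why the fusion adapts per feature level rather than averaging uniformly.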
19 pages, 29645 KB  
Article
Defect Detection in GIS X-Ray Images Based on Improved YOLOv10
by Guoliang Xu, Xiaolong Bai and Menghao Huang
Sensors 2025, 25(17), 5310; https://doi.org/10.3390/s25175310 - 26 Aug 2025
Abstract
Timely and accurate detection of internal defects in Gas-Insulated Switchgear (GIS) with X-ray imaging is critical for power system reliability. However, automated detection faces significant challenges from small, low-contrast defects and complex background structures. This paper proposes an enhanced object-detection model based on the lightweight YOLOv10n framework, specifically optimized for this task. Key improvements include adopting the Normalized Wasserstein Distance (NWD) loss function for small object localization, integrating Monte Carlo attention (MCAttn) and Parallelized Patch-Aware (PPA) attention to enhance feature extraction, and designing a GFPN-inspired neck for improved multi-scale feature fusion. The model was rigorously evaluated on a custom GIS X-ray dataset. The final model achieved a mean Average Precision (mAP) of 0.674 (IoU 0.5:0.95), representing a 5.0 percentage point improvement over the YOLOv10n baseline and surpassing other comparative models. Qualitative results also confirmed the model’s enhanced capability in detecting challenging small and low-contrast defects. This study presents an effective approach for automated GIS defect detection, with significant potential to enhance power grid maintenance efficiency and safety. Full article
(This article belongs to the Section Sensing and Imaging)
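The NWD loss named above measures box similarity as a Wasserstein distance between Gaussians fitted to the boxes, which stays informative for tiny objects where IoU collapses to zero. A sketch following the common NWD formulation (the normalizing constant below is an assumed placeholder, not the paper's value):

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance between boxes (cx, cy, w, h), each
    modeled as a 2-D Gaussian N([cx, cy], diag(w/2, h/2)). The constant c
    is dataset-dependent; 12.8 here is an illustrative assumption."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Squared 2nd-order Wasserstein distance between the two Gaussians
    w2_sq = (ax - bx) ** 2 + (ay - by) ** 2 \
          + (aw / 2 - bw / 2) ** 2 + (ah / 2 - bh / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

# Identical boxes → similarity 1; the score decays smoothly with distance,
# unlike IoU, which is exactly 0 as soon as small boxes stop overlapping.
print(nwd((10, 10, 4, 4), (10, 10, 4, 4)))  # → 1.0
print(nwd((10, 10, 4, 4), (14, 10, 4, 4)))
```

As a loss one typically minimizes 1 − NWD, giving gradients even for completely disjoint small boxes.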
27 pages, 913 KB  
Article
Criticality Assessment of Wind Turbine Defects via Multispectral UAV Fusion and Fuzzy Logic
by Pavlo Radiuk, Bohdan Rusyn, Oleksandr Melnychenko, Tomasz Perzynski, Anatoliy Sachenko, Serhii Svystun and Oleg Savenko
Energies 2025, 18(17), 4523; https://doi.org/10.3390/en18174523 - 26 Aug 2025
Abstract
Ensuring the structural integrity of wind turbines is crucial for the sustainability of wind energy. A significant challenge remains in transitioning from mere defect detection to objective, scalable criticality assessment for prioritizing maintenance. In this work, we propose a novel comprehensive framework that leverages multispectral unmanned aerial vehicle (UAV) imagery and a novel standards-aligned Fuzzy Inference System to automate this task. Our contribution is validated on two open research-oriented datasets representing small on- and offshore machines: the public AQUADA-GO and Thermal WTB Inspection datasets. An ensemble of YOLOv8n models trained on fused RGB-thermal data achieves a mean Average Precision (mAP@.5) of 92.8% for detecting cracks, erosion, and thermal anomalies. The core novelty, a 27-rule Fuzzy Inference System derived from the IEC 61400-5 standard, translates quantitative defect parameters into a five-level criticality score. The system’s output demonstrates exceptional fidelity to expert assessments, achieving a mean absolute error of 0.14 and a Pearson correlation of 0.97. This work provides a transparent, repeatable, and engineering-grounded proof of concept, demonstrating a promising pathway toward predictive, condition-based maintenance strategies and supporting the economic viability of wind energy. Full article
(This article belongs to the Special Issue Optimal Control of Wind and Wave Energy Converters)
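The paper's 27-rule Fuzzy Inference System is not reproduced here, but the general mechanism — fuzzify inputs with membership functions, fire rules, defuzzify to a criticality level — can be illustrated with a deliberately tiny one-input toy. All membership breakpoints and rule mappings below are invented for illustration and are not taken from IEC 61400-5 or the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def criticality(defect_size_mm):
    """Toy one-input Mamdani-style inference: fuzzify defect size into
    small/medium/large, map them to criticality levels 1/3/5, and defuzzify
    by weighted average (hypothetical breakpoints, for illustration only)."""
    memberships = {
        1: tri(defect_size_mm, -10, 0, 10),   # small  → level 1
        3: tri(defect_size_mm, 5, 15, 25),    # medium → level 3
        5: tri(defect_size_mm, 20, 40, 200),  # large  → level 5
    }
    num = sum(level * mu for level, mu in memberships.items())
    den = sum(memberships.values())
    return num / den if den else 1.0

print(criticality(0))  # → 1.0 (clearly "small")
print(criticality(8))  # → 2.2 (partly small, partly medium)
```

The real system fuzzifies several detector-derived defect parameters and aggregates 27 such rules, but the defuzzified output is the same kind of continuous score snapped to a five-level scale.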
24 pages, 103094 KB  
Article
A Method for Automated Detection of Chicken Coccidia in Vaccine Environments
by Ximing Li, Qianchao Wang, Lanqi Chen, Xinqiu Wang, Mengting Zhou, Ruiqing Lin and Yubin Guo
Vet. Sci. 2025, 12(9), 812; https://doi.org/10.3390/vetsci12090812 - 26 Aug 2025
Abstract
Vaccines play a crucial role in the prevention and control of chicken coccidiosis, effectively reducing economic losses in the poultry industry and significantly improving animal welfare. To ensure the production quality and immune effect of vaccines, accurate detection of chicken Coccidia oocysts in vaccines is essential. However, this task remains challenging due to the minute size of oocysts, variable spatial orientation, and morphological similarity among species. Therefore, we propose YOLO-Cocci, a chicken coccidia detection model based on YOLOv8n, designed to improve the detection accuracy of chicken coccidia oocysts in vaccine environments. Firstly, an efficient multi-scale attention (EMA) module was added to the backbone to enhance feature extraction and enable more precise focus on oocyst regions. Secondly, we developed the inception-style multi-scale fusion pyramid network (IMFPN) as an efficient neck. By integrating richer low-level features and applying convolutional kernels of varying sizes, IMFPN effectively preserves the features of small objects and enhances feature representation, thereby improving detection accuracy. Finally, we designed a lightweight feature-reconstructed and partially decoupled detection head (LFPD-Head), which enhances detection accuracy while reducing both model parameters and computational cost. The experimental results show that YOLO-Cocci achieves an mAP@0.5 of 89.6%, an increase of 6.5% over the baseline model, while reducing the number of parameters and computation by 14% and 12%, respectively. Notably, in the detection of Eimeria necatrix, mAP@0.5 increased by 14%. To verify the practical effect of the improved detection algorithm, we developed client software that performs automatic detection and visualizes the detection results. This study will help improve the level of automated assessment of vaccine quality and thus promote the improvement of animal welfare. Full article
17 pages, 2498 KB  
Article
FPH-DEIM: A Lightweight Underwater Biological Object Detection Algorithm Based on Improved DEIM
by Qiang Li and Wenguang Song
Appl. Syst. Innov. 2025, 8(5), 123; https://doi.org/10.3390/asi8050123 - 26 Aug 2025
Abstract
Underwater biological object detection plays a critical role in intelligent ocean monitoring and underwater robotic perception systems. However, challenges such as image blurring, complex lighting conditions, and significant variations in object scale severely limit the performance of mainstream detection algorithms like the YOLO series and Transformer-based models. Although these methods offer real-time inference, they often suffer from unstable accuracy, slow convergence, and insufficient small object detection in underwater environments. To address these challenges, we propose FPH-DEIM, a lightweight underwater object detection algorithm based on an improved DEIM framework. It integrates three tailored modules for perception enhancement and efficiency optimization: a Fine-grained Channel Attention (FCA) mechanism that dynamically balances global and local channel responses to suppress background noise and enhance target features; a Partial Convolution (PConv) operator that reduces redundant computation while maintaining semantic fidelity; and a Haar Wavelet Downsampling (HWDown) module that preserves high-frequency spatial information critical for detecting small underwater organisms. Extensive experiments on the URPC 2021 dataset show that FPH-DEIM achieves a mAP@0.5 of 89.4%, outperforming DEIM (86.2%), YOLOv5-n (86.1%), YOLOv8-n (86.2%), and YOLOv10-n (84.6%) by 3.2–4.8 percentage points. Furthermore, FPH-DEIM significantly reduces the number of model parameters to 7.2 M and the computational complexity to 7.1 GFLOPs, offering reductions of over 13% in parameters and 5% in FLOPs compared to DEIM, and outperforming YOLO models by margins exceeding 2 M parameters and 14.5 GFLOPs in some cases. These results demonstrate that FPH-DEIM achieves an excellent balance between detection accuracy and lightweight deployment, making it well-suited for practical use in real-world underwater environments. Full article
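The Haar Wavelet Downsampling (HWDown) module described above relies on the one-level 2×2 Haar transform: it splits a feature map into a low-pass subband plus three detail subbands, all at half resolution, so downsampling can keep the high-frequency cues that small underwater targets depend on. A single-channel sketch (the module's exact channel handling is an assumption):

```python
def haar_downsample(img):
    """One-level 2x2 Haar transform of a single-channel map (even H and W).
    Returns (LL, LH, HL, HH) subbands at half resolution; stacking all four
    as channels halves spatial size without discarding high-frequency detail."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2   # local average (low-pass)
            LH[i // 2][j // 2] = (a - b + c - d) / 2   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 2   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2   # diagonal detail
    return LL, LH, HL, HH

# A flat 2x2 patch has only a low-pass response; all detail subbands are zero
LL, LH, HL, HH = haar_downsample([[1.0, 1.0], [1.0, 1.0]])
print(LL, LH, HL, HH)  # → [[2.0]] [[0.0]] [[0.0]] [[0.0]]
```

Because the transform is invertible, no spatial information is lost at the downsampling step, unlike strided convolution or pooling.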
24 pages, 15799 KB  
Article
Performance Comparison of Embedded AI Solutions for Classification and Detection in Lung Disease Diagnosis
by Md Sabbir Ahmed, Stefano Giordano and Davide Adami
Appl. Sci. 2025, 15(17), 9345; https://doi.org/10.3390/app15179345 - 26 Aug 2025
Abstract
Lung disease diagnosis from chest X-ray images is a critical task in clinical care, especially in resource-constrained settings where access to radiology expertise and computational infrastructure is limited. Recent advances in deep learning have shown promise, yet most studies focus solely on either classification or detection in isolation, rarely exploring their combined potential in an embedded, real-world setting. To address this, we present a dual deep learning approach that combines five-class disease classification and multi-label thoracic abnormality detection, optimized for embedded edge deployment. Specifically, we evaluate six state-of-the-art CNN architectures—ResNet101, DenseNet201, MobileNetV3-Large, EfficientNetV2-B0, InceptionResNetV2, and Xception—on both base (2020 images) and augmented (9875 images) datasets. Validation accuracies ranged from 55.3% to 70.7% on the base dataset and improved to 58.4% to 72.0% with augmentation, with MobileNetV3-Large achieving the highest accuracy on both. In parallel, we trained a YOLOv8n model for multi-label detection of 14 thoracic diseases. While not deployed in this work, its lightweight architecture makes it suitable for future use on embedded platforms. All classification models were evaluated for end-to-end inference on a Raspberry Pi 4 using a high-resolution chest X-ray image (2566 × 2566, PNG). MobileNetV3-Large demonstrated the lowest latency at 429.6 ms, and all models completed inference in under 2.4 s. These results demonstrate the feasibility of combining classification for rapid triage and detection for spatial interpretability in real-time, embedded clinical environments—paving the way for practical, low-cost AI-based decision support systems for surgery rooms and mobile clinical environments. Full article
10 pages, 4186 KB  
Proceeding Paper
Indirect Crop Line Detection in Precision Mechanical Weeding Using AI: A Comparative Analysis of Different Approaches
by Ioannis Glykos, Gerassimos G. Peteinatos and Konstantinos G. Arvanitis
Eng. Proc. 2025, 104(1), 32; https://doi.org/10.3390/engproc2025104032 - 25 Aug 2025
Abstract
Growing interest in organic food, along with European regulations limiting chemical usage, and the declining effectiveness of herbicides due to weed resistance, are all contributing to the growing trend towards mechanical weeding. For mechanical weeding to be effective, tools must pass near the crops in both the inter- and intra-row areas. The use of AI-based computer vision can assist in detecting crop lines and accurately guiding weeding tools. Additionally, AI-driven image analysis can be used for selective intra-row weeding with mechanized blades, distinguishing crops from weeds. However, until now, there have been two separate systems for these tasks. To enable simultaneous in-row weeding and row alignment, YOLOv8n and YOLO11n were trained and compared in a lettuce field (Lactuca sativa L.). The models were evaluated based on different metrics and inference time for three different image sizes. Crop lines were generated through linear regression on the bounding box centers of detected plants and compared against manually drawn ground truth lines, generated during the annotation process, using different deviation metrics. As more than one line appeared per image, the proposed methodology for assigning points to their corresponding crop line was tested with three different approaches using different empirical factor values. The best-performing approach achieved a mean horizontal error of 45 pixels, demonstrating the feasibility of a dual-functioning system using a single vision model. Full article
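The "mean horizontal error" used above can be computed by sampling both the fitted and ground-truth lines at a set of image rows and averaging the absolute x-axis gap. A minimal sketch (the paper's exact sampling scheme is an assumption; lines are taken as x = a·y + b):

```python
def mean_horizontal_error(pred_line, gt_line, y_samples):
    """Mean horizontal (x-axis) deviation in pixels between a fitted crop line
    and its ground-truth line, each given as (a, b) in x = a*y + b, sampled
    at the given image rows."""
    pa, pb = pred_line
    ga, gb = gt_line
    errs = [abs((pa * y + pb) - (ga * y + gb)) for y in y_samples]
    return sum(errs) / len(errs)

# Fitted line x = 0.5y + 102 vs. ground truth x = 0.5y + 100, sampled over rows
print(mean_horizontal_error((0.5, 102), (0.5, 100), range(0, 400, 100)))  # → 2.0
```

Horizontal rather than perpendicular distance is the natural metric here, since steering corrections for a weeding tool act along the image's x axis.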
24 pages, 7604 KB  
Article
Ginseng-YOLO: Integrating Local Attention, Efficient Downsampling, and Slide Loss for Robust Ginseng Grading
by Yue Yu, Dongming Li, Shaozhong Song, Haohai You, Lijuan Zhang and Jian Li
Horticulturae 2025, 11(9), 1010; https://doi.org/10.3390/horticulturae11091010 - 25 Aug 2025
Abstract
Understory-cultivated Panax ginseng possesses high pharmacological and economic value; however, its visual quality grading predominantly relies on subjective manual assessment, constraining industrial scalability. To address challenges including fine-grained morphological variations, boundary ambiguity, and complex natural backgrounds, this study proposes Ginseng-YOLO, a lightweight and deployment-friendly object detection model for automated ginseng grade classification. The model is built on the YOLOv11n (You Only Look Once, v11 nano) framework and integrates three complementary components: (1) C2-LWA, a cross-stage local window attention module that enhances discrimination of key visual features, such as primary root contours and fibrous textures; (2) ADown, a non-parametric downsampling mechanism that substitutes convolution operations with parallel pooling, markedly reducing computational complexity; and (3) Slide Loss, a piecewise IoU-weighted loss function designed to emphasize learning from samples with ambiguous or irregular boundaries. Experimental results on a curated multi-grade ginseng dataset indicate that Ginseng-YOLO achieves a Precision of 84.9%, a Recall of 83.9%, and an mAP@50 of 88.7%, outperforming YOLOv11n and other state-of-the-art variants. The model maintains a compact footprint, with 2.0 M parameters, 5.3 GFLOPs, and a 4.6 MB model size, supporting real-time deployment on edge devices. Ablation studies further confirm the synergistic contributions of the proposed modules in enhancing feature representation, architectural efficiency, and training robustness. Successful deployment on the NVIDIA Jetson Nano demonstrates practical real-time inference capability under limited computational resources. This work provides a scalable approach for intelligent grading of forest-grown ginseng and offers methodological insights for the design of lightweight models in medicinal plants and agricultural applications. Full article
(This article belongs to the Section Medicinals, Herbs, and Specialty Crops)

28 pages, 67788 KB  
Article
YOLO-GRBI: An Enhanced Lightweight Detector for Non-Cooperative Spatial Target in Complex Orbital Environments
by Zimo Zhou, Shuaiqun Wang, Xinyao Wang, Wen Zheng and Yanli Xu
Entropy 2025, 27(9), 902; https://doi.org/10.3390/e27090902 - 25 Aug 2025
Abstract
Non-cooperative spatial target detection plays a vital role in enabling autonomous on-orbit servicing and maintaining space situational awareness (SSA). However, due to the limited computational resources of onboard embedded systems and the complexity of spaceborne imaging environments, where spacecraft images often contain small targets that are easily obscured by background noise and characterized by low local information entropy, many existing object detection frameworks struggle to achieve high accuracy with low computational cost. To address this challenge, we propose YOLO-GRBI, an enhanced detection network designed to balance accuracy and efficiency. A reparameterized ELAN backbone is adopted to improve feature reuse and facilitate gradient propagation. The BiFormer and C2f-iAFF modules are introduced to enhance attention to salient targets, reducing false positives and false negatives. GSConv and VoV-GSCSP modules are integrated into the neck to reduce convolution operations and computational redundancy while preserving information entropy. YOLO-GRBI employs the focal loss for classification and confidence prediction to address class imbalance. Experiments on a self-constructed spacecraft dataset show that YOLO-GRBI outperforms the baseline YOLOv8n, achieving a 4.9% increase in mAP@0.5 and a 6.0% boost in mAP@0.5:0.95, while further reducing model complexity and inference latency. Full article
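The abstract states that YOLO-GRBI uses focal loss for classification and confidence prediction to handle class imbalance. A minimal sketch of the standard binary focal loss (Lin et al.); the alpha and gamma values below are the common defaults, not values reported in the paper:

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss for one prediction.

    p is the predicted foreground probability, y the 0/1 label. The
    (1 - p_t)^gamma factor down-weights easy, well-classified samples
    so that rare small targets dominate the gradient.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified background pixel (p = 0.1, y = 0) contributes far
# less loss than a missed small target (p = 0.1, y = 1).
print(focal_loss(0.1, 0) < focal_loss(0.1, 1))  # True
```

In a detector this would be summed over all anchors or grid cells, which is what keeps the abundant empty-space background from swamping the few spacecraft pixels.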
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)

25 pages, 3472 KB  
Article
YOLOv10n-CF-Lite: A Method for Individual Face Recognition of Hu Sheep Based on Automated Annotation and Transfer Learning
by Yameng Qiao, Wenzheng Liu, Fanzhen Wang, Hang Zhang, Jinghan Cai, Huaigang He, Tonghai Liu and Xue Yang
Animals 2025, 15(17), 2499; https://doi.org/10.3390/ani15172499 - 25 Aug 2025
Abstract
Individual recognition of Hu sheep is a core requirement for precision livestock management, significantly improving breeding efficiency and fine management. However, traditional machine vision methods face challenges such as high annotation time costs, the inability to quickly annotate new sheep, and the need for manual intervention and retraining. To address these issues, this study proposes a solution that integrates automatic annotation and transfer learning, developing a sheep face recognition algorithm that adapts to complex farming environments and can quickly learn the characteristics of new Hu sheep individuals. First, through multi-view video collection and data augmentation, a dataset of 82 Hu sheep comprising 6055 images was created. Additionally, a sheep face detection and automatic annotation algorithm was designed, reducing the annotation time per image to 0.014 min compared with traditional manual annotation. Next, the YOLOv10n-CF-Lite model is proposed, improving the recognition precision of Hu sheep faces to 92.3% and the mAP@0.5 to 96.2%. To enhance the model’s adaptability and generalization ability for new sheep, transfer learning was applied to transfer the YOLOv10n-CF-Lite model trained on the source domain (82 Hu sheep) to the target domain (10 new Hu sheep). The recognition precision in the target domain increased from 91.2% to 94.9%, and the mAP@0.5 improved from 96.3% to 97.0%. Additionally, the model’s convergence speed was improved, reducing the number of training epochs required for fitting from 43 to 14. In summary, the Hu sheep face recognition algorithm proposed in this study improves annotation efficiency, recognition precision, and convergence speed through automatic annotation and transfer learning. It can quickly adapt to the characteristics of new sheep individuals, providing an efficient and reliable technical solution for the intelligent management of livestock. Full article
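The reported 0.014 min/image automatic-annotation time translates into a concrete labeling budget for the 6055-image dataset. A quick back-of-envelope calculation; the manual baseline of 1.5 min/image below is purely an illustrative assumption, since the abstract does not state a manual per-image time:

```python
# Annotation-budget arithmetic for the Hu sheep dataset described above.
images = 6055
auto_min_per_image = 0.014   # reported automatic-annotation time (min/image)
manual_min_per_image = 1.5   # hypothetical manual baseline (assumption)

auto_total_h = images * auto_min_per_image / 60
manual_total_h = images * manual_min_per_image / 60

# Total hours for each approach; the automatic pipeline finishes the
# whole dataset in under two hours at the reported rate.
print(f"auto: {auto_total_h:.1f} h vs manual: {manual_total_h:.1f} h")
```

At the reported rate the full dataset takes roughly 1.4 hours to annotate automatically, which is why per-image annotation cost, not model training, stops being the bottleneck when new individuals are added.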
(This article belongs to the Section Small Ruminants)
