Search Results (5,060)

Search Parameters:
Keywords = YOLOV5

21 pages, 2524 KB  
Article
YOLO-PFA: Advanced Multi-Scale Feature Fusion and Dynamic Alignment for SAR Ship Detection
by Shu Liu, Peixue Liu, Zhongxun Wang, Mingze Sun and Pengfei He
J. Mar. Sci. Eng. 2025, 13(10), 1936; https://doi.org/10.3390/jmse13101936 - 9 Oct 2025
Abstract
Maritime ship detection faces challenges due to complex object poses, variable target scales, and background interference. This paper introduces YOLO-PFA, a novel SAR ship detection model that integrates multi-scale feature fusion and dynamic alignment. By leveraging the Bidirectional Feature Pyramid Network (BiFPN), YOLO-PFA enhances cross-scale weighted feature fusion, improving detection of objects of varying sizes. The C2f-Partial Feature Aggregation (C2f-PFA) module aggregates raw and processed features, enhancing feature extraction efficiency. Furthermore, the Dynamic Alignment Detection Head (DADH) optimizes classification and regression feature interaction, enabling dynamic collaboration. Experimental results on the iVision-MRSSD dataset demonstrate YOLO-PFA’s superiority, achieving an mAP@0.5 of 95%, outperforming YOLOv11 by 1.2% and YOLOv12 by 2.8%. This paper contributes significantly to automated maritime target detection.
(This article belongs to the Section Ocean Engineering)
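The cross-scale weighted fusion that BiFPN contributes here can be captured in a few lines. The following is a minimal PyTorch sketch of a BiFPN-style fast normalized fusion node, illustrative of the general technique rather than the authors' YOLO-PFA implementation; shapes and channel widths are assumptions.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of N same-shape feature maps."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        # One learnable, non-negative weight per input branch.
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        # feats: list of tensors with identical shape (B, C, H, W).
        w = torch.relu(self.weights)
        w = w / (w.sum() + self.eps)          # fast normalized fusion
        return sum(wi * f for wi, f in zip(w, feats))

# Usage: fuse a top-down and a lateral feature map of the same shape.
fuse = WeightedFusion(num_inputs=2)
p4_td = torch.randn(1, 128, 40, 40)
p4_in = torch.randn(1, 128, 40, 40)
out = fuse([p4_td, p4_in])                    # (1, 128, 40, 40)
```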
18 pages, 2141 KB  
Article
YOLO-Based Object and Keypoint Detection for Autonomous Traffic Cone Placement and Retrieval for Industrial Robots
by János Hollósi
Appl. Sci. 2025, 15(19), 10845; https://doi.org/10.3390/app151910845 - 9 Oct 2025
Abstract
The accurate and efficient placement of traffic cones is a critical safety and logistical requirement in diverse industrial environments. This study introduces a novel dataset specifically designed for the near-overhead detection of traffic cones, containing both bounding box annotations and apex keypoints. Leveraging this dataset, we systematically evaluated whether classical object detection methods or keypoint-based detection methods are more effective for the task of cone apex localization. Several state-of-the-art YOLO-based architectures (YOLOv8, YOLOv11, YOLOv12) were trained and tested under identical conditions. The comparative experiments showed that both approaches can achieve high accuracy, but they differ in their trade-offs between robustness, computational cost, and suitability for real-time embedded deployment. These findings highlight the importance of dataset design for specialized viewpoints and confirm that lightweight YOLO models are particularly well-suited for resource-constrained robotic platforms. The key contributions of this work are the introduction of a new annotated dataset for overhead cone detection and a systematic comparison of object detection and keypoint detection paradigms for apex localization in real-world robotic applications.
(This article belongs to the Special Issue Sustainable Mobility and Transportation (SMTS 2025))
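The paper's core comparison, box detection versus keypoint regression for apex localization, maps naturally onto the Ultralytics training API. Below is a minimal sketch of such a protocol under identical training settings; the dataset YAML names ("cones.yaml", "cones-pose.yaml") are hypothetical stand-ins, since the abstract does not name the dataset files.

```python
from ultralytics import YOLO

common = dict(epochs=100, imgsz=640, batch=16, seed=0)  # identical settings

# Bounding-box route: detect the cone, derive the apex from the box.
det = YOLO("yolov8n.pt")
det.train(data="cones.yaml", **common)        # hypothetical dataset YAML

# Keypoint route: regress the apex keypoint directly.
pose = YOLO("yolov8n-pose.pt")
pose.train(data="cones-pose.yaml", **common)  # hypothetical dataset YAML
```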
17 pages, 3374 KB  
Article
An Enhanced SAR-Based ISW Detection Method Using YOLOv8 with an Anti-Interference Strategy and Repair Module and Its Applications
by Zheyu Lu, Hui Du, Shaodong Wang, Jianping Wu and Pai Peng
Remote Sens. 2025, 17(19), 3390; https://doi.org/10.3390/rs17193390 - 9 Oct 2025
Abstract
The detection of internal solitary waves (ISWs) in the ocean using Synthetic Aperture Radar (SAR) images is important for the safety of marine engineering structures. Based on 4120 Sentinel SAR images obtained from 2014 to 2024, an ISW dataset covering the Andaman Sea (AS), the South China Sea (SCS), the Sulu Sea (SS), and the Celebes Sea (CS) is constructed, and a deep learning dataset containing 3495 detection samples and 2476 segmentation samples is also established. Based on the lightweight YOLOv8 model, combined with an anti-interference strategy, a multi-size block detection strategy, and a post-processing repair module, an ISW detection method is proposed. In anti-interference tests, the method reduces the false detection rate by 44.20 percentage points; in repair performance, the repair rate reaches 85.2% and the erroneous-connection rate is below 3.1%. Applying the method to Sentinel images across multiple sea areas reveals significant regional differences in ISW activity: in the AS, ISW activity peaks in the dry season in March and is mainly concentrated in the eastern and southern regions, while the western SS and the southern CS are also core areas of ISW activity. In terms of temporal characteristics, the SS maintains a relatively high level of ISW activity throughout the dry season, whereas the CS exhibits more complex seasonal dynamics. The lightweight detection method proposed in this study has good applicability and can support marine disaster prevention work.
(This article belongs to the Section Ocean Remote Sensing)
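A multi-size block detection strategy of the kind described, running the detector over overlapping tiles of a large SAR scene at several tile sizes and mapping boxes back to scene coordinates, can be sketched as follows. The `detect` callable stands in for the trained YOLOv8 model, and the tile sizes and overlap are illustrative assumptions.

```python
import numpy as np

def tiled_detect(scene: np.ndarray, detect, tile_sizes=(640, 1024), overlap=0.2):
    """Run `detect` on overlapping tiles of `scene` at several tile sizes."""
    h, w = scene.shape[:2]
    all_boxes = []
    for ts in tile_sizes:
        step = int(ts * (1 - overlap))
        for y0 in range(0, max(h - ts, 0) + 1, step):
            for x0 in range(0, max(w - ts, 0) + 1, step):
                tile = scene[y0:y0 + ts, x0:x0 + ts]
                for (x1, y1, x2, y2, score) in detect(tile):
                    # Shift tile-local boxes into scene coordinates.
                    all_boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score))
    return all_boxes  # feed into NMS / the paper's repair module afterwards
```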
19 pages, 24139 KB  
Article
Enhanced Multi-Scenario Pig Behavior Recognition Based on YOLOv8n
by Panqi Pu, Junge Wang, Geqi Yan, Hongchao Jiao, Hao Li and Hai Lin
Animals 2025, 15(19), 2927; https://doi.org/10.3390/ani15192927 - 9 Oct 2025
Abstract
Advances in smart animal husbandry necessitate efficient pig behavior monitoring, yet traditional approaches suffer from operational inefficiency and animal stress. We address these limitations through a lightweight YOLOv8n architecture enhanced with SPD-Conv for feature preservation during downsampling, LSKBlock attention for contextual feature fusion, and a dedicated small-target detection head. Experimental validation demonstrates superior performance: the optimized model achieves a 92.4% mean average precision (mAP@0.5) and 87.4% recall, significantly outperforming baseline YOLOv8n by 3.7% in AP while maintaining minimal parameter growth (3.34M). Controlled illumination tests confirm enhanced robustness under strong and warm lighting conditions, with performance gains of 1.5% and 0.7% in AP, respectively. This high-precision framework enables real-time recognition of standing, prone lying, lateral lying, and feeding behaviors in commercial piggeries, supporting early health anomaly detection through non-invasive monitoring.
(This article belongs to the Section Pigs)
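SPD-Conv, the substitution used here for information-preserving downsampling, is a published module whose core is a space-to-depth rearrangement followed by a non-strided convolution: instead of a strided conv or pooling that discards pixels, every 2x2 spatial block is moved into the channel dimension. A minimal PyTorch sketch with illustrative channel widths:

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth downsampling followed by a non-strided convolution."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.spd = nn.PixelUnshuffle(2)       # (B,C,H,W) -> (B,4C,H/2,W/2)
        self.conv = nn.Sequential(
            nn.Conv2d(4 * c_in, c_out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.conv(self.spd(x))

x = torch.randn(1, 64, 80, 80)
print(SPDConv(64, 128)(x).shape)              # torch.Size([1, 128, 40, 40])
```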
24 pages, 18260 KB  
Article
DWG-YOLOv8: A Lightweight Recognition Method for Broccoli in Multi-Scene Field Environments Based on Improved YOLOv8s
by Haoran Liu, Yu Wang, Changyuan Zhai, Huarui Wu, Hao Fu, Haiping Feng and Xueguan Zhao
Agronomy 2025, 15(10), 2361; https://doi.org/10.3390/agronomy15102361 - 9 Oct 2025
Abstract
Addressing the challenges of multi-scene precision pesticide application for field broccoli crops and the computational limitations of edge devices, this study proposes a lightweight broccoli detection method named DWG-YOLOv8, based on an improved YOLOv8s architecture. Firstly, Ghost Convolution is introduced into the C2f module, and the standard CBS module is replaced with Depthwise Separable Convolution (DWConv) to reduce model parameters and computational load during feature extraction. Secondly, a CDSL module is designed to enhance the model's feature extraction capability, and the CBAM attention mechanism is incorporated into the neck network to strengthen the extraction of channel and spatial features, sharpening the model's focus on the target. Experimental results indicate that, compared to the original YOLOv8s, DWG-YOLOv8 is 35.6% smaller and 1.9 ms faster per inference, while its precision, recall, and mean Average Precision (mAP) increased by 1.9%, 0.9%, and 3.4%, respectively. In comparative tests on complex-background images, DWG-YOLOv8 reduced the miss rate and false positive rate by 1.4% and 16.6% relative to YOLOv8s. Deployed on edge devices with field-collected data, the DWG-YOLOv8 model achieved a comprehensive recognition accuracy of 96.53%, a 5.6% improvement over YOLOv8s. DWG-YOLOv8 meets the lightweight requirements for accurate broccoli recognition against complex field backgrounds, providing technical support for object detection in intelligent precision pesticide application for broccoli.
(This article belongs to the Section Precision and Digital Agriculture)
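The depthwise separable convolution (DWConv) that replaces the standard CBS block factorizes a 3x3 convolution into a per-channel depthwise step and a 1x1 pointwise step, cutting parameters roughly ninefold at these widths. A minimal sketch; the activation and normalization choices are assumptions:

```python
import torch
import torch.nn as nn

class DWConv(nn.Module):
    """Depthwise separable conv: per-channel 3x3, then 1x1 pointwise."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))

# Parameter comparison against a standard 3x3 convolution at width 128.
std = nn.Conv2d(128, 128, 3, padding=1, bias=False)
sep = DWConv(128, 128)
print(sum(p.numel() for p in std.parameters()))  # 147456
print(sum(p.numel() for p in sep.parameters()))  # 17792 (incl. BN)
```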
29 pages, 7823 KB  
Article
Real-Time Detection Sensor for Unmanned Aerial Vehicle Using an Improved YOLOv8s Algorithm
by Fuhao Lu, Chao Zeng, Hangkun Shi, Yanghui Xu and Song Fu
Sensors 2025, 25(19), 6246; https://doi.org/10.3390/s25196246 - 9 Oct 2025
Abstract
This study advances unmanned aerial vehicle (UAV) localization technology within the framework of the low-altitude economy, with particular emphasis on accurate, real-time identification and tracking of unauthorized ("black-flying") drones. Conventional YOLOv8s-based target detection algorithms often suffer from missed detections due to their reliance on single-frame features. To address this limitation, this paper proposes an improved detection algorithm that integrates a long short-term memory (LSTM) network into the YOLOv8s framework. By incorporating time-series modeling, the LSTM module enables the retention of historical features and dynamic prediction of UAV trajectories. The loss function combines bounding box regression loss with binary cross-entropy and is optimized with the Adam algorithm to improve training convergence. The training data distribution is validated through Monte Carlo random sampling, which improves the model's generalization to complex scenes. Simulation results demonstrate that the proposed method significantly enhances UAV detection performance. In addition, when deployed on an RK3588-based embedded system, the method achieves a low false negative rate and exhibits robust detection capabilities, indicating strong potential for practical applications in airspace management and counter-UAV operations.
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
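The abstract's key idea, carrying detector features across frames through an LSTM so the model retains history and anticipates trajectories, can be sketched as a small temporal head. The feature dimension, hidden size, and box/objectness output layout below are illustrative assumptions, not the authors' exact design:

```python
import torch
import torch.nn as nn

class TemporalHead(nn.Module):
    """LSTM over per-frame detector features; predicts box and objectness."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.box = nn.Linear(hidden, 4)       # predicted (cx, cy, w, h)
        self.obj = nn.Linear(hidden, 1)       # objectness logit (for BCE)

    def forward(self, seq):                   # seq: (B, T, feat_dim)
        out, _ = self.lstm(seq)
        last = out[:, -1]                     # state after the latest frame
        return self.box(last), self.obj(last)

head = TemporalHead()
seq = torch.randn(2, 8, 256)                  # 8 frames of pooled features
boxes, logits = head(seq)
# Training would combine a box-regression loss with BCE on the logits and
# optimize with Adam, as the abstract describes.
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones(2, 1))
```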
21 pages, 3712 KB  
Article
CISC-YOLO: A Lightweight Network for Micron-Level Defect Detection on Wafers via Efficient Cross-Scale Feature Fusion
by Yulun Chi, Xingyu Gong, Bing Zhao and Lei Yao
Electronics 2025, 14(19), 3960; https://doi.org/10.3390/electronics14193960 - 9 Oct 2025
Abstract
With the development of semiconductor manufacturing towards miniaturization and high integration, the detection of microscopic defects on wafer surfaces faces the challenge of balancing precision and efficiency. This study therefore proposes a lightweight inspection model based on the YOLOv8 framework, aiming to achieve an optimal balance between inspection accuracy, model complexity, and inference speed. First, we design a novel lightweight module called IRB-GhostConv-C2f (IGC) to replace the C2f module in the backbone, significantly reducing redundant feature computations. Second, a CNN-based cross-scale feature fusion neck network, the CCFF-ISC neck, is proposed to reduce the redundant computation of low-level features and enhance the expression of multi-scale semantic information; within it, the novel IRB-SCSA-C2f (ISC) module replaces the C2f to further improve feature fusion efficiency. In addition, a novel dynamic head network, DyHeadv3, is integrated into the head structure to improve small-scale target detection by dynamically adjusting the feature interaction mechanism. Finally, to comprehensively assess the proposed algorithm's performance, an industrial wafer defect dataset, WSDD, is constructed, covering “broken edges”, “scratches”, “oil pollution”, and “minor defects”. The experimental results demonstrate that the CISC-YOLO model attains an mAP50 of 93.7% with the parameter count reduced to 1.92 M, outperforming other mainstream algorithms in the field. The proposed approach provides a high-precision, low-latency real-time defect detection solution for semiconductor industry scenarios.
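The IGC module's name points to GhostConv-style cheap feature generation. As a reference for that underlying idea (not the authors' exact module), a standard Ghost convolution produces half its outputs with a normal convolution and synthesizes the rest from them with an inexpensive depthwise convolution:

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: half the channels from a primary 1x1 conv, the
    other half generated cheaply from them with a depthwise 3x3 conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Conv2d(c_in, c_half, 1, bias=False)
        self.cheap = nn.Conv2d(c_half, c_half, 3, padding=1,
                               groups=c_half, bias=False)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 40, 40)
print(GhostConv(64, 128)(x).shape)            # torch.Size([1, 128, 40, 40])
```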
24 pages, 2777 KB  
Article
LightSeek-YOLO: A Lightweight Architecture for Real-Time Trapped Victim Detection in Disaster Scenarios
by Xiaowen Tian, Yubi Zheng, Liangqing Huang, Rengui Bi, Yu Chen, Shiqi Wang and Wenkang Su
Mathematics 2025, 13(19), 3231; https://doi.org/10.3390/math13193231 - 9 Oct 2025
Abstract
Rapid and accurate detection of trapped victims is vital in disaster rescue operations, yet most existing object detection methods cannot simultaneously deliver high accuracy and fast inference under resource-constrained conditions. To address this limitation, we propose LightSeek-YOLO, a lightweight, real-time victim detection framework for disaster scenarios built upon YOLOv11. LightSeek-YOLO integrates three core innovations. First, it employs HGNetV2 as the backbone, whose HGStem and HGBlock modules leverage depthwise separable convolutions to markedly reduce computational cost while preserving feature extraction capacity. Second, it introduces Seek-DS (Seek-DownSampling), a dual-branch downsampling module that preserves key feature extrema through a MaxPool branch while capturing spatial patterns via a progressive convolution branch, effectively mitigating background interference. Third, it incorporates Seek-DH (Seek Detection Head), a lightweight detection head that processes features through a unified pipeline, enhancing scale adaptability while reducing parameter redundancy. Evaluated on the C2A disaster dataset, LightSeek-YOLO achieves 0.478 AP@small for small-object detection, demonstrating strong robustness in challenging conditions such as rubble and smoke. On COCO, it reaches 0.473 mAP@[0.5:0.95], matching YOLOv8n while achieving superior computational efficiency through a 38.2% parameter reduction and a 39.5% FLOP reduction, and it runs at 571.72 FPS on desktop hardware; these efficiency gains suggest potential for edge deployment, pending validation.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
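The Seek-DS idea, pairing a MaxPool branch that keeps feature extrema with a convolution branch that learns spatial patterns, can be sketched as a dual-branch downsampling block. The channel split and layer choices below are assumptions; the published module may differ:

```python
import torch
import torch.nn as nn

class DualBranchDown(nn.Module):
    """Downsample by 2x via a MaxPool branch plus a strided-conv branch."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_half = c_out // 2
        self.pool = nn.Sequential(
            nn.MaxPool2d(2),                  # keep local feature extrema
            nn.Conv2d(c_in, c_half, 1, bias=False),
        )
        self.conv = nn.Sequential(            # learned spatial-pattern branch
            nn.Conv2d(c_in, c_half, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(),
        )

    def forward(self, x):
        return torch.cat([self.pool(x), self.conv(x)], dim=1)

x = torch.randn(1, 64, 80, 80)
print(DualBranchDown(64, 128)(x).shape)       # torch.Size([1, 128, 40, 40])
```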
31 pages, 6076 KB  
Article
MSWindD-YOLO: A Lightweight Edge-Deployable Network for Real-Time Wind Turbine Blade Damage Detection in Sustainable Energy Operations
by Pan Li, Jitao Zhou, Jian Zeng, Qian Zhao and Qiqi Yang
Sustainability 2025, 17(19), 8925; https://doi.org/10.3390/su17198925 - 8 Oct 2025
Abstract
Wind turbine blade damage detection is crucial for advancing wind energy as a sustainable alternative to fossil fuels. Existing methods based on image processing technologies face challenges such as limited adaptability to complex environments, trade-offs between model accuracy and computational efficiency, and inadequate real-time inference capabilities. In response to these limitations, we put forward MSWindD-YOLO, a lightweight real-time detection model for wind turbine blade damage. Building upon YOLOv5s, our work introduces three key improvements: (1) the replacement of the Focus module with the Stem module to enhance computational efficiency and multi-scale feature fusion, integrating EfficientNetV2 structures for improved feature extraction and lightweight design, while retaining the SPPF module for multi-scale context awareness; (2) the substitution of the C3 module with the GBC3-FEA module to reduce computational redundancy, coupled with the incorporation of the CBAM attention mechanism at the neck network's terminus to amplify critical features; and (3) the adoption of the Shape-IoU loss function instead of the CIoU loss function to facilitate faster model convergence and enhance localization accuracy. Evaluated on the Wind Turbine Blade Damage Visual Analysis Dataset (WTBDVA), MSWindD-YOLO achieves a precision of 95.9%, a recall of 96.3%, an mAP@0.5 of 93.7%, and an mAP@0.5:0.95 of 87.5%. With a compact size of 3.12 MB and a 22.4 GFLOPs inference cost, it maintains high efficiency. After TensorRT acceleration on Jetson Orin NX, the model attains 43 FPS under FP16 quantization for real-time damage detection. Consequently, the proposed MSWindD-YOLO model not only elevates detection accuracy and inference efficiency but also achieves significant model compression. Its deployment-compatible performance in edge environments fulfills stringent industrial demands, ultimately advancing sustainable wind energy operations through lightweight lifecycle maintenance solutions for wind farms.
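The deployment step, TensorRT acceleration with FP16 quantization on a Jetson Orin NX, is sketched below using the Ultralytics export API as one plausible toolchain; the weight filename is hypothetical and the authors' exact export pipeline is not specified in the abstract.

```python
# Requires a TensorRT installation on the target Jetson device.
from ultralytics import YOLO

model = YOLO("mswindd.pt")                            # hypothetical weights
model.export(format="engine", half=True, imgsz=640)   # TensorRT, FP16

# Inference then loads the generated .engine file directly:
trt_model = YOLO("mswindd.engine")
results = trt_model("blade_image.jpg")
```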
22 pages, 4661 KB  
Article
Research on Eye-Tracking Control Methods Based on an Improved YOLOv11 Model
by Xiangyang Sun, Jiahua Wu, Wenjun Zhang, Xianwei Chen and Haixia Mei
Sensors 2025, 25(19), 6236; https://doi.org/10.3390/s25196236 - 8 Oct 2025
Abstract
Eye-tracking technology has gained traction in the field of medical rehabilitation due to its non-invasive and intuitive nature. However, current eye-tracking methods based on object detection suffer from insufficient accuracy in detecting the eye socket and iris, as well as inaccuracies in determining eye movement direction. To address this, this study improved the YOLOv11 model with the EFFM and ORC modules, increasing recognition accuracy for the eye socket and iris by 1.7% and 9.9%, respectively, and recall by 5.5% and 44%, respectively. For eye movement direction discrimination, a method combining a frame-voting mechanism with eye-movement-area discrimination was proposed, achieving average accuracy rates of 95.3%, 92.8%, and 94.8% for iris fixation, left, and right directions, respectively. The discrimination results of multiple eye movement images were mapped to binary values, and eye movement encoding was used to derive control commands for the robotic arm; the average matching degree of the eye movement encoding ranged from 93.4% to 96.8%. On an experimental platform, the average completion rates for three eye-movement-controlled object-grabbing tasks were 98%, 78%, and 96%, respectively.
(This article belongs to the Section Sensing and Imaging)
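The frame-voting mechanism can be sketched as a majority vote over a sliding window of per-frame direction predictions, which suppresses single-frame misclassifications. The window length and tie-handling rule below are assumptions:

```python
from collections import Counter, deque

WINDOW = 7                                    # vote over the last 7 frames
votes = deque(maxlen=WINDOW)

def update(direction: str) -> str | None:
    """Feed one per-frame prediction ('left' / 'right' / 'fixation');
    returns the majority direction once the window is full."""
    votes.append(direction)
    if len(votes) < WINDOW:
        return None
    label, count = Counter(votes).most_common(1)[0]
    # Fall back to 'fixation' when no strict majority emerges.
    return label if count > WINDOW // 2 else "fixation"

for d in ["left", "left", "fixation", "left", "left", "right", "left"]:
    decision = update(d)
print(decision)                               # left
```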
25 pages, 6407 KB  
Article
Lightweight SCC-YOLO for Winter Jujube Detection and 3D Localization with Cross-Platform Deployment Evaluation
by Meng Zhou, Yaohua Hu, Anxiang Huang, Yiwen Chen, Xing Tong, Mengfei Liu and Yunxiao Pan
Agriculture 2025, 15(19), 2092; https://doi.org/10.3390/agriculture15192092 - 8 Oct 2025
Abstract
Harvesting winter jujubes is a key step in production, yet traditional manual approaches are labor-intensive and inefficient. To overcome these challenges, we propose SCC-YOLO, a lightweight method for winter jujube detection, 3D localization, and cross-platform deployment, aiming to support intelligent harvesting. In this study, RGB-D cameras were integrated with an improved YOLOv11 network optimized by ShuffleNetV2, CBAM, and a redesigned C2f_WTConv module, which enables joint spatial-frequency feature modeling and enhances small-object detection in complex orchard conditions. The model was trained on a diversified dataset with extensive augmentation to ensure robustness. In addition, the original localization loss was replaced with DIoU to improve bounding box regression accuracy. A robotic harvesting system was developed, and an Eye-to-Hand calibration-based 3D localization pipeline was implemented to map fruit coordinates to the robot workspace for accurate picking. To validate engineering applicability, the SCC-YOLO model was deployed on both desktop (PyTorch and ONNX Runtime) and mobile (NCNN with Vulkan+FP16) platforms, and FPS, latency, and stability were comparatively analyzed. Experimental results showed that SCC-YOLO improved mAP by 5.6% over YOLOv11, significantly enhanced detection precision and robustness, and achieved real-time performance on mobile devices while maintaining peak throughput on high-performance desktops. Field and laboratory tests confirmed the system's effectiveness for detection, localization, and harvesting efficiency, demonstrating its adaptability to diverse deployment environments and its potential for broader agricultural applications.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
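The Eye-to-Hand localization pipeline reduces to back-projecting a detected fruit's pixel and depth into camera coordinates and applying the calibrated camera-to-robot transform. A minimal numpy sketch; the intrinsic matrix and transform are placeholder values that calibration would supply:

```python
import numpy as np

K = np.array([[615.0, 0.0, 320.0],            # fx, 0, cx  (placeholders)
              [0.0, 615.0, 240.0],            # 0, fy, cy
              [0.0,   0.0,   1.0]])
T_base_cam = np.eye(4)                        # from Eye-to-Hand calibration

def pixel_to_robot(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project a pixel + depth to a 3D point in the robot base frame."""
    xyz_cam = depth_m * np.linalg.inv(K) @ np.array([u, v, 1.0])
    xyz_h = np.append(xyz_cam, 1.0)           # homogeneous coordinates
    return (T_base_cam @ xyz_h)[:3]

print(pixel_to_robot(350.0, 260.0, 0.82))     # target for the picking arm
```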
20 pages, 7048 KB  
Article
Enhanced Lightweight Object Detection Model in Complex Scenes: An Improved YOLOv8n Approach
by Sohaya El Hamdouni, Boutaina Hdioud and Sanaa El Fkihi
Information 2025, 16(10), 871; https://doi.org/10.3390/info16100871 - 8 Oct 2025
Abstract
Object detection has a vital impact on the analysis and interpretation of visual scenes. It is widely utilized in various fields, including healthcare, autonomous driving, and vehicle surveillance. However, complex scenes containing small, occluded, and multiscale objects present significant difficulties for object detection. This paper introduces a lightweight object detection algorithm, using YOLOv8n as the baseline model, to address these problems. Our method comprises four steps. Firstly, we add a dedicated small-object detection layer to enhance the feature expression of small objects. Secondly, to handle complex shapes and appearances, we employ the C2f-DCNv2 module, which integrates DCNv2 (Deformable Convolutional Networks v2) by substituting the final C2f module in the backbone. Thirdly, we integrate CBAM, a lightweight attention module, into the neck to address missed detections. Finally, we use Ghost Convolution (GhostConv) as a light convolutional layer, alternating it with ordinary convolution in the neck; this preserves detection performance while decreasing the number of parameters. Experiments on the PASCAL VOC dataset demonstrate that our approach lowers the number of model parameters by approximately 9.37%. Compared to the baseline model, mAP@0.5:0.95 increased by 0.9%, recall (R) by 0.8%, mAP@0.5 by 0.3%, and precision (P) by 0.1%. To better evaluate generalization in real-world driving scenarios, we conducted additional experiments on the KITTI dataset, where our approach yielded improvements of 0.8% in mAP@0.5 and 1.3% in mAP@0.5:0.95, indicating strong performance in more dynamic and challenging conditions.
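For reference, the CBAM module integrated into the neck is the standard convolutional block attention of Woo et al. (2018): channel attention followed by spatial attention, each rescaling the feature map. A compact PyTorch sketch:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, c, r=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(),
                                 nn.Linear(c // r, c))

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))    # global average pooling path
        mx = self.mlp(x.amax(dim=(2, 3)))     # global max pooling path
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return torch.sigmoid(self.conv(s))

class CBAM(nn.Module):
    """Channel attention, then spatial attention, each rescaling x."""
    def __init__(self, c):
        super().__init__()
        self.ca, self.sa = ChannelAttention(c), SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

print(CBAM(64)(torch.randn(1, 64, 40, 40)).shape)  # (1, 64, 40, 40)
```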
17 pages, 6432 KB  
Article
An AI-Enabled System for Automated Plant Detection and Site-Specific Fertilizer Application for Cotton Crops
by Arjun Chouriya, Peeyush Soni, Abhilash K. Chandel and Ajay Kumar Patel
Automation 2025, 6(4), 53; https://doi.org/10.3390/automation6040053 - 8 Oct 2025
Abstract
Typical fertilizer applicators are often limited by non-uniform distribution, labor and time intensiveness, high discharge rates, chemical input wastage, and the fostering of weed proliferation. To address this gap in production agriculture, an automated variable-rate fertilizer applicator was developed for cotton crops, built around a deep-learning-driven electronic control unit (ECU). The applicator comprises (a) a plant recognition unit (PRU) that captures images and predicts the presence (or absence) of cotton plants using a YOLOv7 recognition model deployed on a Raspberry Pi (Wales, UK), relaying the decision to a microcontroller; (b) an ECU that controls the stepper motor of the fertilizer metering unit according to the cotton-detection signal received from the PRU; and (c) a fertilizer metering unit that delivers precisely metered granular fertilizer to the targeted cotton plant when its stepper motor is triggered by the microcontroller. Trials were conducted in the laboratory on a custom testbed using artificial cotton plants, with the camera positioned 0.21 m ahead of the discharge tube and 16 cm above the plants. The system was evaluated at forward speeds of 0.2 to 1.0 km/h under lighting levels of 3000, 5000, and 7000 lux to simulate varying field illumination. Precision, recall, F1-score, and mAP of the plant recognition model were 1.00 at 0.669 confidence, 0.97 at 0.000 confidence, 0.87 at 0.151 confidence, and 0.906 at 0.5 confidence, respectively. Mean absolute percent errors (MAPE) of 6.15% and 9.1% and mean absolute deviations (MAD) of 0.81 g/plant and 1.20 g/plant were observed for urea and Diammonium Phosphate (DAP), respectively. Statistical analysis showed no significant effect of the conveying system's forward speed on the fertilizer application rate (p > 0.05), indicating uniform application independent of forward speed. The developed applicator enhances precision in site-specific application, minimizes fertilizer wastage, and reduces labor requirements, placing fertilizer near targeted plants at the recommended dosage.
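The detect-then-dispense control flow can be sketched as follows: on a positive detection, the system waits for the plant to travel from the camera's view to the discharge tube (the 0.21 m lead) before pulsing the metering stepper. The `detect` and `send_pulse` callables and the default speed are illustrative stand-ins for the YOLOv7 model and the microcontroller link:

```python
import time

CAMERA_LEAD_M = 0.21                          # camera mounted ahead of tube

def dispense_loop(frames, detect, send_pulse, speed_kmh=0.6):
    speed_ms = speed_kmh / 3.6                # forward speed in m/s
    travel_delay = CAMERA_LEAD_M / speed_ms   # time for plant to reach tube
    for frame in frames:
        if detect(frame):                     # True if a cotton plant is seen
            time.sleep(travel_delay)
            send_pulse()                      # ECU steps the metering motor
```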
21 pages, 6386 KB  
Article
SPMF-YOLO-Tracker: A Method for Quantifying Individual Activity Levels and Assessing Health in Newborn Piglets
by Jingge Wei, Yurong Tang, Jinxin Chen, Kelin Wang, Peng Li, Mingxia Shen and Longshen Liu
Agriculture 2025, 15(19), 2087; https://doi.org/10.3390/agriculture15192087 - 7 Oct 2025
Abstract
This study proposes a behavioral monitoring framework for newborn piglets based on SPMF-YOLO object detection and ByteTrack multi-object tracking, which enables precise quantification of early postnatal activity levels and health assessment. The method enhances small-object detection performance by incorporating the SPDConv module, the MFM module, and the NWD loss function into YOLOv11. When combined with the ByteTrack algorithm, it achieves stable tracking and maintains trajectory continuity for multiple targets. An annotated dataset containing both detection and tracking labels was constructed using video data from 10 piglet pens for evaluation. Experimental results indicate that SPMF-YOLO achieved a recognition accuracy rate of 95.3% for newborn piglets. When integrated with ByteTrack, it achieves 79.1% HOTA, 92.2% MOTA, and 84.7% IDF1 in multi-object tracking tasks, thereby outperforming existing methods. Building upon this foundation, this study further quantified the cumulative movement distance of each newborn piglet within 30 min after birth and proposed a health-assessment method based on statistical thresholds. The results demonstrated an overall consistency rate of 98.2% across pens and an accuracy rate of 92.9% for identifying abnormal individuals. The results validated the effectiveness of this method for quantifying individual behavior and assessing health status in newborn piglets within complex farming environments, providing a feasible technical pathway and scientific basis for health management and early intervention in precision animal husbandry.
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)
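The activity-quantification step can be sketched directly: sum each piglet's frame-to-frame centroid displacements into a cumulative path length, then flag individuals far below the pen's typical activity. The z-score threshold here is an assumption, not the paper's statistical rule:

```python
import numpy as np

def cumulative_distance(track: np.ndarray) -> float:
    """track: (T, 2) array of centroid positions over time."""
    return float(np.linalg.norm(np.diff(track, axis=0), axis=1).sum())

def flag_abnormal(tracks: dict[int, np.ndarray], z_thresh=-1.5) -> list[int]:
    dist = {pid: cumulative_distance(t) for pid, t in tracks.items()}
    vals = np.array(list(dist.values()))
    mu, sigma = vals.mean(), vals.std() + 1e-9
    return [pid for pid, d in dist.items() if (d - mu) / sigma < z_thresh]

np.random.seed(0)
tracks = {i: np.cumsum(np.random.randn(1800, 2), axis=0) for i in range(8)}
tracks[7] = np.zeros((1800, 2))               # a piglet that barely moves
print(flag_abnormal(tracks))                  # [7]
```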
24 pages, 16680 KB  
Article
Research on Axle Type Recognition Technology for Under-Vehicle Panorama Images Based on Enhanced ORB and YOLOv11
by Xiaofan Feng, Lu Peng, Yu Tang, Chang Liu and Huazhen An
Sensors 2025, 25(19), 6211; https://doi.org/10.3390/s25196211 - 7 Oct 2025
Abstract
With national policies imposing strict requirements on truck dimensions, axle loads, and weight limits, along with the implementation of type-based tolling, rapid and accurate identification of vehicle axle types has become essential for toll station management. To address the limitations of existing methods (difficulty distinguishing drive from driven axles, complex equipment setup, and poor image-evidence retention), this article proposes a panoramic image detection technology for vehicle chassis based on an enhanced ORB and YOLOv11. A portable vehicle chassis image acquisition system based on area array cameras was developed for rapid on-site deployment within 20 min, eliminating the need for embedded installation. The FeatureBooster (FB) module was employed to optimize the ORB algorithm's feature matching and, combined with keyframe techniques, achieves high-quality panoramic image stitching. After fine-tuning the FB model on a domain-specific area-scan dataset, the number of feature matches increased to 151 ± 18, substantially outperforming both the pre-trained FB model and the baseline ORB. Experiments on axle type recognition using the YOLOv11 algorithm combined with ORB and FB features demonstrated that the integrated approach achieved superior performance. On the overall test set, the model attained an mAP@50 of 0.989 and an mAP@50:95 of 0.780, with a precision (P) of 0.98 and a recall (R) of 0.99. In nighttime scenarios, it maintained an mAP@50 of 0.977 and an mAP@50:95 of 0.743, with precision and recall consistently at 0.98 and 0.99, respectively. Field verification shows that the system's real-time performance and accuracy can support axle type recognition at toll stations.
(This article belongs to the Section Sensing and Imaging)
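The ORB matching at the heart of the stitching pipeline (shown here without the FeatureBooster re-ranking, which the paper adds on top) detects ORB features in adjacent chassis frames, matches them with a Hamming-distance brute-force matcher, and estimates a homography for stitching. A minimal OpenCV sketch:

```python
import cv2
import numpy as np

def match_pair(img1, img2, n_features=2000):
    """Estimate the homography warping img1 into img2's frame."""
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```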