Search Results (152)

Search Parameters:
Keywords = LiDAR-IMU

38 pages, 3221 KB  
Article
Simulating the Effects of Sensor Failures on Autonomous Vehicles for Safety Evaluation
by Francisco Matos, João Durães and João Cunha
Informatics 2025, 12(3), 94; https://doi.org/10.3390/informatics12030094 - 15 Sep 2025
Viewed by 1003
Abstract
Autonomous vehicles (AVs) are increasingly becoming a reality, enabled by advances in sensing technologies, intelligent control systems, and real-time data processing. For AVs to operate safely and effectively, they must maintain a reliable perception of their surroundings and internal state. However, sensor failures, whether due to noise, malfunction, or degradation, can compromise this perception and lead to incorrect localization or unsafe decisions by the autonomous control system. While modern AV systems often combine data from multiple sensors to mitigate such risks through sensor fusion techniques (e.g., Kalman filtering), how resilient these systems remain under faulty conditions is an open question. This work presents a simulation-based fault injection framework to assess the impact of sensor failures on AV behavior. The framework enables structured testing of autonomous driving software under controlled fault conditions, allowing researchers to observe how specific sensor failures affect system performance. To demonstrate its applicability, an experimental campaign was conducted using the CARLA simulator integrated with the Autoware autonomous driving stack. A multi-segment urban driving scenario was executed using a modified version of CARLA’s Scenario Runner to support Autoware-based evaluations. Faults were injected to simulate LiDAR, GNSS, and IMU sensor failures in different route scenarios. The fault types considered in this study include silent sensor failures and severe noise. The results obtained by emulating sensor failures in our chosen system under test, Autoware, show that faults in the LiDAR and the IMU gyroscope have the most critical impact, often leading to erratic motion and collisions. In contrast, faults in the GNSS and IMU accelerometers were well tolerated. This demonstrates the framework’s ability to investigate the fault tolerance of AVs in the presence of critical sensor failures. Full article
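
The fault model described above (silent failures and severe noise injected into LiDAR, GNSS, and IMU streams) can be sketched in a few lines of Python. The wrapper below is a hypothetical illustration, not the authors' implementation; the mode names and noise magnitude are invented for the example.

```python
import numpy as np

class SensorFaultInjector:
    """Wraps a sensor stream and injects faults (illustrative sketch).

    Modes (assumed, not from the paper):
      "none"   - pass measurements through unchanged
      "silent" - emulate a silent failure by freezing the last reading
      "noise"  - add severe zero-mean Gaussian noise
    """

    def __init__(self, mode="none", noise_std=1.0, rng=None):
        self.mode = mode
        self.noise_std = noise_std
        self.rng = rng or np.random.default_rng(0)
        self._last = None

    def inject(self, measurement):
        m = np.asarray(measurement, dtype=float)
        if self.mode == "silent" and self._last is not None:
            return self._last  # stale data: the sensor stops updating
        if self.mode == "noise":
            m = m + self.rng.normal(0.0, self.noise_std, size=m.shape)
        self._last = m
        return m

# Example: severe noise on a 3-axis gyroscope reading (rad/s)
gyro = SensorFaultInjector(mode="noise", noise_std=0.5)
print(gyro.inject([0.01, -0.02, 0.00]))
```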

25 pages, 7018 KB  
Article
LiDAR-IMU Sensor Fusion-Based SLAM for Enhanced Autonomous Navigation in Orchards
by Seulgi Choi, Xiongzhe Han, Eunha Chang and Haetnim Jeong
Agriculture 2025, 15(17), 1899; https://doi.org/10.3390/agriculture15171899 - 7 Sep 2025
Viewed by 1498
Abstract
Labor shortages and uneven terrain in orchards present significant challenges to autonomous navigation. This study proposes a navigation system that integrates Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) data to enhance localization accuracy and map stability through Simultaneous Localization and Mapping (SLAM). To minimize distortions in LiDAR scans caused by ground irregularities, real-time tilt correction was implemented based on IMU feedback. Furthermore, the path planning module was improved by modifying the Rapidly-Exploring Random Tree (RRT) algorithm. The enhanced RRT generated smoother and more efficient trajectories with quantifiable improvements: the average shortest path length was 2.26 m, compared to 2.59 m with the conventional RRT and 2.71 m with the A* algorithm. Tracking performance also improved, achieving a root mean square error of 0.890 m and a maximum lateral deviation of 0.423 m. In addition, yaw stability was strengthened, as heading fluctuations decreased by approximately 7% relative to the standard RRT. Field results validated the robustness and adaptability of the proposed system under real-world agricultural conditions. These findings highlight the potential of LiDAR–IMU sensor fusion and optimized path planning to enable scalable and reliable autonomous navigation for precision agriculture. Full article
(This article belongs to the Special Issue Advances in Precision Agriculture in Orchard)
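
The IMU-based tilt correction at the heart of this pipeline amounts to rotating each LiDAR scan into a gravity-aligned frame using the measured roll and pitch. A minimal sketch, assuming a simple roll-then-pitch rotation; the paper's exact correction may differ.

```python
import numpy as np

def tilt_correct(points, roll, pitch):
    """Rotate a LiDAR scan into a gravity-aligned frame using IMU roll/pitch.

    points: (N, 3) array in the sensor frame; roll/pitch in radians.
    A minimal sketch of the idea, not the paper's exact correction.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    R = Ry @ Rx
    return points @ R.T

# Example: a scan taken with 5 degrees of pitch on uneven ground
scan = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 0.5]])
print(tilt_correct(scan, roll=0.0, pitch=np.deg2rad(5.0)))
```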

37 pages, 1666 KB  
Review
Camera, LiDAR, and IMU Spatiotemporal Calibration: Methodological Review and Research Perspectives
by Xinyu Lyu, Songlin Liu, Rongcan Qiao, Songyang Jiang and Yuanshi Wang
Sensors 2025, 25(17), 5409; https://doi.org/10.3390/s25175409 - 2 Sep 2025
Cited by 1 | Viewed by 1241
Abstract
Multi-sensor fusion systems involving Light Detection and Ranging (LiDAR), cameras, and inertial measurement units (IMUs) have been widely adopted in fields such as autonomous driving and robotics due to their complementary perception capabilities. This widespread application has led to a growing demand for accurate sensor calibration. Although numerous calibration methods have been proposed in recent years for various sensor combinations, such as camera–IMU, LiDAR–IMU, camera–LiDAR, and camera–LiDAR–IMU, there remains a lack of systematic reviews and comparative analyses of these approaches. This paper focuses on extrinsic calibration techniques for LiDAR, cameras, and IMU, providing a comprehensive review of the latest developments across the four types of sensor combinations. We further analyze the strengths and limitations of existing methods, summarize the evaluation criteria for calibration, and outline potential future research directions for the benefit of the academic community. Full article
(This article belongs to the Section Physical Sensors)
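
A recurring building block across the camera–IMU and LiDAR–IMU combinations this review covers is recovering the extrinsic rotation by aligning corresponding direction vectors (for example, angular velocities observed by both sensors), which has a closed-form SVD (Kabsch) solution. The sketch below illustrates that generic step; it is not taken from any single surveyed method.

```python
import numpy as np

def extrinsic_rotation(v_a, v_b):
    """Closed-form (Kabsch/SVD) rotation R such that v_a[i] ~ R @ v_b[i].

    v_a, v_b: (N, 3) corresponding vectors from two sensors, e.g.,
    angular velocities seen by an IMU and rotational rates recovered
    from camera or LiDAR ego-motion.
    """
    H = v_b.T @ v_a                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T                # proper rotation, det = +1

# Example: recover a known 10-degree yaw offset from noisy vector pairs
rng = np.random.default_rng(1)
th = np.deg2rad(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
w_b = rng.normal(size=(100, 3))
w_a = w_b @ R_true.T + 0.01 * rng.normal(size=(100, 3))
print(np.allclose(extrinsic_rotation(w_a, w_b), R_true, atol=0.05))
```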

24 pages, 10964 KB  
Article
Enhancing LiDAR–IMU SLAM for Infrastructure Monitoring via Dynamic Coplanarity Constraints and Joint Observation
by Zhaosheng Feng, Jun Chen, Yaofeng Liang, Wenli Liu and Yongfeng Peng
Sensors 2025, 25(17), 5330; https://doi.org/10.3390/s25175330 - 27 Aug 2025
Viewed by 716
Abstract
Real-time acquisition of high-precision 3D spatial information is critical for intelligent maintenance of urban infrastructure. While SLAM technology based on LiDAR–IMU sensor fusion has become a core approach for infrastructure monitoring, its accuracy remains limited by vertical pose estimation drift. To address this challenge, this paper proposes a LiDAR–IMU fusion SLAM algorithm incorporating a dynamic coplanarity constraint and a joint observation model within an improved error-state Kalman filter framework. A threshold-driven ground segmentation method is developed to robustly extract planar features in structured environments, enabling dynamic activation of ground constraints to suppress vertical drift. Extensive experiments on a self-collected long-corridor dataset and the public M2DGR dataset demonstrate that the proposed method significantly improves pose estimation accuracy. In structured environments, the method reduces z-axis endpoint errors by 85.8% compared with Fast-LIO2, achieving an average z-axis RMSE of 0.0104 m. On the M2DGR Hall04 sequence, the algorithm attains a z-axis RMSE of 0.007 m, outperforming four mainstream LiDAR-based SLAM methods. These results validate the proposed approach as an effective solution for high-precision 3D mapping in infrastructure monitoring applications. Full article
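
The key idea, dynamically activating a ground-plane pseudo-measurement to suppress vertical drift, reduces to a scalar Kalman update on the height error. The sketch below assumes locally flat ground at z = 0 and invented thresholds; it is a heavy simplification of the paper's error-state formulation.

```python
import numpy as np

def ground_drift_update(d_est, P, ground_z_world, min_pts=50, meas_var=0.01):
    """One scalar Kalman update for vertical drift, using segmented
    ground points as a pseudo-measurement (illustrative sketch).

    d_est, P       : current vertical-drift estimate and its variance
    ground_z_world : heights of segmented ground points in the world
                     frame; if the pose has drifted by d, locally flat
                     ground at z = 0 appears at z ~ d.
    The constraint is "dynamically activated": it only fires when
    enough ground points are seen (the threshold is an assumed value).
    """
    ground_z_world = np.asarray(ground_z_world, dtype=float)
    if ground_z_world.size < min_pts:
        return d_est, P                      # constraint inactive
    m = ground_z_world.mean()                # pseudo-measurement of drift
    K = P / (P + meas_var)                   # Kalman gain (H = 1)
    return d_est + K * (m - d_est), (1.0 - K) * P

# Example: 0.3 m of accumulated z-drift observed through noisy ground points
rng = np.random.default_rng(2)
pts = 0.3 + 0.02 * rng.normal(size=400)
print(ground_drift_update(0.0, 1.0, pts))    # estimate pulled toward ~0.3
```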

19 pages, 7742 KB  
Article
Three-Dimensional Point Cloud Reconstruction of Unstructured Terrain for Autonomous Robots
by Wei Chen, Xiufang Lin and Xiangpan Zheng
Sensors 2025, 25(16), 4890; https://doi.org/10.3390/s25164890 - 8 Aug 2025
Viewed by 413
Abstract
In scenarios such as field exploration, disaster relief, and agricultural automation, LiDAR-based reconstructed terrain models can contribute substantially to robot activities such as passable-area identification and path planning optimization. However, unstructured terrain environments typically lack artificially labeled features and are poorly characterized, which makes it difficult to find reliable feature correspondences between the point clouds of two consecutive LiDAR scans. The persistent noise that accompanies such environments further complicates the correspondence search, which in turn lowers matching accuracy and increases the offset between neighboring frames. Therefore, this paper proposes an unstructured terrain reconstruction algorithm that builds on the LOAM algorithm, combines it with graph optimization theory, and further introduces the robot’s motion information provided by an IMU. Experimental results show that the proposed method achieves accurate and effective reconstruction in unstructured terrain environments. Full article
(This article belongs to the Section Sensors and Robotics)
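
LOAM-style pipelines such as the one extended here align scans with a point-to-plane residual. One Gauss-Newton step under a small-angle approximation looks roughly as follows; this is a generic illustration, not the paper's exact matching stage.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One Gauss-Newton step of point-to-plane alignment (small-angle
    approximation), the residual used by LOAM-style scan matching.

    src/dst: (N, 3) corresponding points; normals: (N, 3) unit normals
    at dst. Returns (rotation vector, translation).
    """
    r = np.einsum("ij,ij->i", dst - src, normals)      # signed distances
    J = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian
    x = np.linalg.lstsq(J, r, rcond=None)[0]
    return x[:3], x[3:]

# Example: recover a pure 0.1 m x-translation from varied plane normals
rng = np.random.default_rng(3)
src = rng.normal(size=(200, 3))
n = rng.normal(size=(200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
dst = src + np.array([0.1, 0.0, 0.0])
print(point_to_plane_step(src, dst, n))  # rotvec ~ 0, translation ~ (0.1, 0, 0)
```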

20 pages, 5843 KB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Viewed by 777
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. Firstly, an IMU preintegration factor considering the Earth’s rotation and a LiDAR-inertial odometry factor accounting for degenerate states are constructed to adapt to railway train operating environments. Subsequently, a lightweight network based on an improved YOLO is used to recognize reflective kilometer posts, while PaddleOCR extracts their numerical codes. High-precision vertex coordinates of the kilometer posts are obtained by jointly using the LiDAR point cloud and the image detection box. Next, a kilometer post factor is constructed, and multi-source information is optimized within a factor graph framework. Finally, onboard experiments conducted on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE accuracy for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development. Full article
(This article belongs to the Section Navigation and Positioning)
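
IMU preintegration, accumulating gyro and accelerometer samples into a relative rotation, velocity, and position between keyframes, is the factor this paper augments with an Earth-rotation correction. A naive sketch, assuming an ENU navigation frame and omitting biases and covariance propagation:

```python
import numpy as np

OMEGA_E = 7.2921150e-5  # Earth rotation rate, rad/s

def so3_exp(w):
    """Rotation matrix for a rotation vector (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def preintegrate(gyro, accel, dt, R_wb, lat_rad):
    """Accumulate relative rotation/velocity/position between keyframes,
    subtracting the Earth's rotation from the gyro. Minimal sketch:
    biases and covariance propagation are deliberately omitted, and the
    Earth rate (ENU frame: OMEGA_E * [0, cos(lat), sin(lat)]) is
    projected using the start-of-window attitude R_wb only.
    """
    w_ie_w = OMEGA_E * np.array([0.0, np.cos(lat_rad), np.sin(lat_rad)])
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        w_corr = w - (R_wb @ dR).T @ w_ie_w  # Earth rate in body frame
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w_corr * dt)
    return dR, dv, dp

# Example: 1 s of a 0.1 rad/s yaw rate at 45 degrees latitude
gyro = np.tile([0.0, 0.0, 0.1], (100, 1))
accel = np.tile([0.0, 0.0, 9.81], (100, 1))
print(preintegrate(gyro, accel, 0.01, np.eye(3), np.deg2rad(45.0))[0])
```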

18 pages, 12540 KB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Viewed by 714
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
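
A tightly coupled iterated extended Kalman filter re-linearizes the measurement model at each updated estimate rather than only at the prior. A generic sketch of that update loop, with a toy range measurement standing in for the LiDAR point-to-plane model:

```python
import numpy as np

def iekf_update(x0, P, z, h, H_fn, R, iters=5, tol=1e-6):
    """Iterated EKF measurement update (generic sketch): re-linearize
    the measurement model h(.) at the current iterate instead of only
    at the prior x0, as tightly coupled LIO filters do with their
    point-to-plane residuals.
    """
    x = x0.copy()
    for _ in range(iters):
        H = H_fn(x)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_new = x0 + K @ (z - h(x) - H @ (x0 - x))
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    H = H_fn(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x, (np.eye(len(x)) - K @ H) @ P

# Example: fuse a noisy range-only measurement into a 2D position prior
h = lambda x: np.array([np.linalg.norm(x)])
H_fn = lambda x: (x / np.linalg.norm(x)).reshape(1, -1)
x, P = iekf_update(np.array([1.0, 1.0]), np.eye(2),
                   np.array([2.0]), h, H_fn, np.array([[0.01]]))
print(x, np.linalg.norm(x))  # pulled toward the measured range of 2.0
```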

20 pages, 3710 KB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Viewed by 901
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
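
PCA-based feature extraction typically classifies each point's neighborhood from the eigenvalues of its scatter matrix: one dominant eigenvalue indicates an edge-like structure, two comparable ones a plane, and three a scattered (and discardable) region. A sketch with assumed thresholds:

```python
import numpy as np

def classify_neighborhood(pts, lin_th=0.6, pla_th=0.6):
    """Classify a local point neighborhood as 'linear', 'planar', or
    'scattered' from PCA eigenvalue ratios, the standard saliency
    measures behind multi-category feature extraction (the thresholds
    here are assumed, not the paper's).

    pts: (N, 3) neighbors of a query point.
    """
    c = pts - pts.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(c.T @ c / len(pts)))[::-1]
    l1, l2, l3 = np.sqrt(np.maximum(evals, 0.0))   # l1 >= l2 >= l3
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    if linearity > lin_th:
        return "linear"      # edge-like structure
    if planarity > pla_th:
        return "planar"      # surface patch
    return "scattered"       # weakly structured / noisy, filtered out

# Example: points sampled from a thin plane are classified as planar
rng = np.random.default_rng(4)
plane = np.c_[rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
              0.01 * rng.normal(size=200)]
print(classify_neighborhood(plane))  # -> 'planar'
```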

18 pages, 12097 KB  
Article
Adaptive Outdoor Cleaning Robot with Real-Time Terrain Perception and Fuzzy Control
by Raul Fernando Garcia Azcarate, Akhil Jayadeep, Aung Kyaw Zin, James Wei Shung Lee, M. A. Viraj J. Muthugala and Mohan Rajesh Elara
Mathematics 2025, 13(14), 2245; https://doi.org/10.3390/math13142245 - 10 Jul 2025
Viewed by 700
Abstract
Outdoor cleaning robots must operate reliably across diverse and unstructured surfaces, yet many existing systems lack the adaptability to handle terrain variability. This paper proposes a terrain-aware cleaning framework that dynamically adjusts robot behavior based on real-time surface classification and slope estimation. A 128-channel LiDAR sensor captures signal intensity images, which are processed by a ResNet-18 convolutional neural network to classify floor types as wood, smooth, or rough. Simultaneously, pitch angles from an onboard IMU detect terrain inclination. These inputs are transformed into fuzzy sets and evaluated using a Mamdani-type fuzzy inference system. The controller adjusts brush height, brush speed, and robot velocity through 81 rules derived from 48 structured cleaning experiments across varying terrain and slopes. Validation was conducted in low-light (night-time) conditions, leveraging LiDAR’s lighting-invariant capabilities. Field trials confirm that the robot responds effectively to environmental conditions, such as reducing speed on slopes or increasing brush pressure on rough surfaces. The integration of deep learning and fuzzy control enables safe, energy-efficient, and adaptive cleaning in complex outdoor environments. This work demonstrates the feasibility and real-world applicability of combining perception and inference-based control in terrain-adaptive robotic systems. Full article
(This article belongs to the Special Issue Research and Applications of Neural Networks and Fuzzy Logic)
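
The Mamdani pipeline behind such a controller (fuzzify inputs, take the min over each rule's antecedents, clip the consequent set, aggregate with max, defuzzify by centroid) is shown below with an assumed four-rule toy base and invented membership functions in place of the paper's 81 rules.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_brush_speed(roughness, slope_deg):
    """Toy Mamdani controller mapping (roughness in 0..1, slope in deg)
    to a normalized brush speed. Same min/clip/max/centroid pipeline as
    a full Mamdani system, but every parameter here is an assumption.
    """
    y = np.linspace(0.0, 1.0, 201)            # output universe
    low = tri(y, -0.4, 0.0, 0.5)              # "low speed" consequent
    high = tri(y, 0.5, 1.0, 1.4)              # "high speed" consequent

    smooth = tri(roughness, -0.4, 0.0, 0.5)   # input memberships (assumed)
    rough = tri(roughness, 0.5, 1.0, 1.4)
    flat = tri(slope_deg, -10.0, 0.0, 10.0)
    steep = tri(slope_deg, 5.0, 15.0, 25.0)

    agg = np.zeros_like(y)
    for strength, consequent in (
        (min(rough, flat), high),    # IF rough AND flat THEN high speed
        (min(smooth, flat), low),    # IF smooth AND flat THEN low speed
        (min(rough, steep), low),    # IF rough AND steep THEN low speed
        (min(smooth, steep), low),   # IF smooth AND steep THEN low speed
    ):
        agg = np.maximum(agg, np.minimum(strength, consequent))
    return float((y * agg).sum() / agg.sum()) if agg.any() else 0.0

print(mamdani_brush_speed(roughness=0.8, slope_deg=2.0))  # leans fast
```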

31 pages, 28041 KB  
Article
Cyberattack Resilience of Autonomous Vehicle Sensor Systems: Evaluating RGB vs. Dynamic Vision Sensors in CARLA
by Mustafa Sakhai, Kaung Sithu, Min Khant Soe Oke and Maciej Wielgosz
Appl. Sci. 2025, 15(13), 7493; https://doi.org/10.3390/app15137493 - 3 Jul 2025
Cited by 1 | Viewed by 1234
Abstract
Autonomous vehicles (AVs) rely on a heterogeneous sensor suite of RGB cameras, LiDAR, GPS/IMU, and emerging event-based dynamic vision sensors (DVS) to perceive and navigate complex environments. However, these sensors can be deceived by realistic cyberattacks, undermining safety. In this work, we systematically implement seven attack vectors in the CARLA simulator—salt and pepper noise, event flooding, depth map tampering, LiDAR phantom injection, GPS spoofing, denial of service, and steering bias control—and measure their impact on a state-of-the-art end-to-end driving agent. We then equip each sensor with tailored defenses (e.g., adaptive median filtering for RGB and spatial clustering for DVS) and integrate an unsupervised anomaly detector (EfficientAD from anomalib) trained exclusively on benign data. Our detector achieves clear separation between normal and attacked conditions (mean RGB anomaly scores of 0.00 vs. 0.38; DVS: 0.61 vs. 0.76), yielding over 95% detection accuracy with fewer than 5% false positives. Defense evaluations reveal that GPS spoofing is fully mitigated, whereas RGB- and depth-based attacks still induce 30–45% trajectory drift despite filtering. Notably, our evaluation suggests that DVS sensors offer intrinsic resilience advantages in high-dynamic-range scenarios, though their asynchronous output necessitates carefully tuned thresholds. These findings underscore the critical role of multi-modal anomaly detection and suggest that DVS sensors, integrated alongside conventional sensors, can strengthen AV cybersecurity. Full article
(This article belongs to the Special Issue Intelligent Autonomous Vehicles: Development and Challenges)
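
Two of the ingredients, the salt-and-pepper attack and a median-filter defense, are compact enough to sketch. The code below uses a plain (non-adaptive) 3x3 median filter and a crude impulse-fraction score; all parameters are illustrative assumptions rather than the paper's tuned pipeline.

```python
import numpy as np

def salt_and_pepper(img, p=0.05, rng=None):
    """Corrupt a grayscale image in [0, 1] with impulse noise (attack sketch)."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0.0          # pepper
    out[mask > 1 - p / 2] = 1.0      # salt
    return out

def median_filter3(img):
    """3x3 median filter, the classic defense against impulse noise."""
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    stack = [pad[i:i + H, j:j + W] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

# Example: attack, defend, and score the fraction of extreme pixels
rng = np.random.default_rng(5)
clean = rng.uniform(0.2, 0.8, size=(64, 64))
attacked = salt_and_pepper(clean, p=0.1, rng=rng)
defended = median_filter3(attacked)
score = np.mean((attacked == 0.0) | (attacked == 1.0))  # crude anomaly score
print(f"impulse fraction: {score:.3f}, residual error: "
      f"{np.abs(defended - clean).mean():.4f}")
```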

22 pages, 6123 KB  
Article
Real-Time Proprioceptive Sensing Enhanced Switching Model Predictive Control for Quadruped Robot Under Uncertain Environment
by Sanket Lokhande, Yajie Bao, Peng Cheng, Dan Shen, Genshe Chen and Hao Xu
Electronics 2025, 14(13), 2681; https://doi.org/10.3390/electronics14132681 - 2 Jul 2025
Viewed by 1009
Abstract
Quadruped robots have shown significant potential in disaster relief applications, where they have to navigate complex terrains for search and rescue or reconnaissance operations. However, their deployment is hindered by limited adaptability in highly uncertain environments, especially when relying solely on vision-based sensors like cameras or LiDAR, which are susceptible to occlusions, poor lighting, and environmental interference. To address these limitations, this paper proposes a novel sensor-enhanced hierarchical switching model predictive control (MPC) framework that integrates proprioceptive sensing with a bi-level hybrid dynamic model. Unlike existing methods that either rely on handcrafted controllers or deep learning-based control pipelines, our approach introduces three core innovations: (1) a situation-aware, bi-level hybrid dynamic modeling strategy that hierarchically combines single-body rigid dynamics with distributed multi-body dynamics for modeling agility and scalability; (2) a three-layer hybrid control framework, including a terrain-aware switching MPC layer, a distributed torque controller, and a fast PD control loop for enhanced robustness during contact transitions; and (3) a multi-IMU-based proprioceptive feedback mechanism for terrain classification and adaptive gait control under sensor-occluded or GPS-denied environments. Together, these components form a unified and computationally efficient control scheme that addresses practical challenges such as limited onboard processing, unstructured terrain, and environmental uncertainty. A series of experimental results demonstrate that the proposed method outperforms existing vision- and learning-based controllers in terms of stability, adaptability, and control efficiency during high-speed locomotion over irregular terrain. Full article
(This article belongs to the Special Issue Smart Robotics and Autonomous Systems)
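
The proprioceptive terrain classification that drives the controller switch can be caricatured as thresholding vibration statistics from the IMUs and selecting a control mode. The thresholds and mode names below are invented for illustration.

```python
import numpy as np

def terrain_from_imu(accel_windows, rough_std=1.5, soft_mean=8.5):
    """Classify terrain from a window of multi-IMU vertical accelerations.
    Toy proprioceptive features; the thresholds are assumptions, not
    the paper's learned classifier.

    accel_windows: (num_imus, T) vertical acceleration samples in m/s^2.
    """
    z = np.asarray(accel_windows, dtype=float)
    vibration = z.std(axis=1).mean()   # vibration level averaged over IMUs
    load = z.mean()                    # mean vertical specific force
    if vibration > rough_std:
        return "rough"                 # e.g., rubble: favor stability
    if load < soft_mean:
        return "soft"                  # compliant ground absorbs impacts
    return "flat"

# Hypothetical mode table for the switching controller
CONTROLLER_FOR = {"flat": "speed_mpc", "rough": "stability_mpc",
                  "soft": "high_clearance_mpc"}

# Example: a high-vibration window selects the stability-oriented mode
rng = np.random.default_rng(6)
window = 9.81 + 2.0 * rng.normal(size=(4, 200))
print(CONTROLLER_FOR[terrain_from_imu(window)])
```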

21 pages, 15478 KB  
Review
Small Object Detection in Traffic Scenes for Mobile Robots: Challenges, Strategies, and Future Directions
by Zhe Wei, Yurong Zou, Haibo Xu and Sen Wang
Electronics 2025, 14(13), 2614; https://doi.org/10.3390/electronics14132614 - 28 Jun 2025
Viewed by 1428
Abstract
Small object detection in traffic scenes presents unique challenges for mobile robots operating under constrained computational resources and highly dynamic environments. Unlike general object detection, small targets often suffer from low resolution, weak semantic cues, and frequent occlusion, especially in complex outdoor scenarios. This study systematically analyses the challenges, technical advances, and deployment strategies for small object detection tailored to mobile robotic platforms. We categorise existing approaches into three main strategies: feature enhancement (e.g., multi-scale fusion, attention mechanisms), network architecture optimisation (e.g., lightweight backbones, anchor-free heads), and data-driven techniques (e.g., augmentation, simulation, transfer learning). Furthermore, we examine deployment techniques on embedded devices such as Jetson Nano and Raspberry Pi, and we highlight multi-modal sensor fusion using Light Detection and Ranging (LiDAR), cameras, and Inertial Measurement Units (IMUs) for enhanced environmental perception. A comparative study of public datasets and evaluation metrics is provided to identify current limitations in real-world benchmarking. Finally, we discuss future directions, including robust detection under extreme conditions and human-in-the-loop incremental learning frameworks. This research aims to offer a comprehensive technical reference for researchers and practitioners developing small object detection systems for real-world robotic applications. Full article
(This article belongs to the Special Issue New Trends in Computer Vision and Image Processing)
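
Among the feature-enhancement strategies the review categorizes, FPN-style top-down multi-scale fusion is the most mechanical: upsample the coarser feature map and add it to the finer one so small-object features inherit semantic context. A numpy caricature (real detectors insert 1x1 convolutions at each step):

```python
import numpy as np

def fuse_top_down(features):
    """FPN-style top-down fusion (sketch): upsample the coarser map and
    add it to the finer one, so small-object cues gain semantic context.

    features: list of (C, H_i, W_i) maps ordered fine -> coarse, each
    level half the resolution of the previous one.
    """
    fused = [features[-1]]
    for finer in reversed(features[:-1]):
        coarser = fused[0]
        up = coarser.repeat(2, axis=1).repeat(2, axis=2)  # nearest-neighbor 2x
        fused.insert(0, finer + up[:, :finer.shape[1], :finer.shape[2]])
    return fused

# Example: three pyramid levels of a 16-channel feature map
rng = np.random.default_rng(7)
pyr = [rng.normal(size=(16, 64, 64)), rng.normal(size=(16, 32, 32)),
       rng.normal(size=(16, 16, 16))]
print([f.shape for f in fuse_top_down(pyr)])
```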

30 pages, 14473 KB  
Article
VOX-LIO: An Effective and Robust LiDAR-Inertial Odometry System Based on Surfel Voxels
by Meijun Guo, Yonghui Liu, Yuhang Yang, Xiaohai He and Weimin Zhang
Remote Sens. 2025, 17(13), 2214; https://doi.org/10.3390/rs17132214 - 27 Jun 2025
Viewed by 1231
Abstract
Accurate and robust pose estimation is critical for simultaneous localization and mapping (SLAM), and multi-sensor fusion has demonstrated efficacy with significant potential for robotic applications. This study presents VOX-LIO, an effective LiDAR-inertial odometry system. To improve both robustness and accuracy, we propose an adaptive hash voxel-based point cloud map management method that incorporates surfel features and planarity. This method enhances the efficiency of point-to-surfel association by leveraging long-term observed surfels. It facilitates the incremental refinement of surfel features within classified surfel voxels, thereby enabling precise and efficient map updates. Furthermore, we develop a weighted fusion approach that integrates LiDAR and IMU measurements on the manifold, effectively compensating for motion distortion, particularly under high-speed LiDAR motion. We validate our system through experiments conducted on both public datasets and our mobile robot platforms. The results demonstrate that VOX-LIO outperforms the existing methods, effectively handling challenging environments while minimizing computational cost. Full article
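
An adaptive hash-voxel map with incrementally refined surfels can be sketched as a dictionary keyed by quantized coordinates, where each voxel keeps running first- and second-order sums so its centroid, normal, and planarity are cheap to recompute. The sketch below shows the data-structure idea only, not VOX-LIO's implementation.

```python
import numpy as np
from collections import defaultdict

class SurfelVoxelMap:
    """Spatial-hash voxel map with incremental per-voxel statistics.
    Each voxel keeps running sums so its surfel (centroid + normal +
    planarity) can be refined cheaply as points stream in.
    """

    def __init__(self, voxel_size=0.5):
        self.s = voxel_size
        self.vox = defaultdict(lambda: [0, np.zeros(3), np.zeros((3, 3))])

    def insert(self, points):
        for p in np.asarray(points, dtype=float):
            key = tuple(np.floor(p / self.s).astype(int))
            n, s1, s2 = self.vox[key]
            self.vox[key] = [n + 1, s1 + p, s2 + np.outer(p, p)]

    def surfel(self, key):
        """Centroid, normal, planarity of one voxel (needs >= 5 points)."""
        n, s1, s2 = self.vox[key]
        if n < 5:
            return None
        mu = s1 / n
        cov = s2 / n - np.outer(mu, mu)
        evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalues
        planarity = (evals[1] - evals[0]) / max(evals[2], 1e-12)
        return mu, evecs[:, 0], planarity          # normal = smallest axis

# Example: a flat patch yields a voxel surfel with normal ~ +z
rng = np.random.default_rng(8)
pts = np.c_[rng.uniform(0, 0.4, 300), rng.uniform(0, 0.4, 300),
            0.001 * rng.normal(size=300)]
m = SurfelVoxelMap(voxel_size=0.5)
m.insert(pts)
print(m.surfel((0, 0, 0)))
```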

30 pages, 16390 KB  
Article
Model-Based RL Decision-Making for UAVs Operating in GNSS-Denied, Degraded Visibility Conditions with Limited Sensor Capabilities
by Sebastien Boiteau, Fernando Vanegas, Julian Galvez-Serna and Felipe Gonzalez
Drones 2025, 9(6), 410; https://doi.org/10.3390/drones9060410 - 4 Jun 2025
Cited by 1 | Viewed by 2066
Abstract
Autonomy in Unmanned Aerial Vehicle (UAV) navigation has enabled applications in diverse fields such as mining, precision agriculture, and planetary exploration. However, challenging applications in complex environments complicate the interaction between the agent and its surroundings. Conditions such as the absence of a Global Navigation Satellite System (GNSS), low visibility, and cluttered environments significantly increase uncertainty levels and cause partial observability. These challenges grow when compact, low-cost, entry-level sensors are employed. This study proposes a model-based reinforcement learning (RL) approach to enable UAVs to navigate and make decisions autonomously in environments where the GNSS is unavailable and visibility is limited. Designed for search and rescue operations, the system enables UAVs to navigate cluttered indoor environments, detect targets, and avoid obstacles under low-visibility conditions. The architecture integrates onboard sensors, including a thermal camera to detect a collapsed person (target), a 2D LiDAR and an IMU for localization. The decision-making module employs the ABT solver for real-time policy computation. The framework presented in this work relies on low-cost, entry-level sensors, making it suitable for lightweight UAV platforms. Experimental results demonstrate high success rates in target detection and robust performance in obstacle avoidance and navigation despite uncertainties in pose estimation and detection. The framework was first assessed in simulation, compared with a baseline algorithm, and then through real-life testing across several scenarios. The proposed system represents a step forward in UAV autonomy for critical applications, with potential extensions to unknown and fully stochastic environments. Full article
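
Under partial observability the decision layer plans over a belief rather than a state, and the belief is maintained with Bayes' rule from noisy detections. A toy update for P(target present) with assumed detection and false-alarm rates; an online POMDP solver such as ABT then optimizes actions over beliefs of this kind.

```python
def belief_update(b, detected, p_d=0.85, p_fa=0.10):
    """Bayes update of P(target present) from one noisy thermal detection.

    p_d  : P(detection | target present)
    p_fa : P(detection | target absent)
    Both rates are assumed values for illustration.
    """
    if detected:
        num = p_d * b
        den = p_d * b + p_fa * (1 - b)
    else:
        num = (1 - p_d) * b
        den = (1 - p_d) * b + (1 - p_fa) * (1 - b)
    return num / den

# Example: three consecutive detections drive the belief toward certainty
b = 0.2
for obs in (True, True, True):
    b = belief_update(b, obs)
print(f"P(target) = {b:.3f}")   # ~0.99 after three hits
```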

21 pages, 6509 KB  
Article
Design of a Chili Pepper Harvesting Device for Hilly Chili Fields
by Weikang Han, Jialong Luo, Jiatao Wang, Qihang Gu, Liujun Lin, Yuan Gao, Hongru Chen, Kangya Luo, Zhixiong Zeng and Jie He
Agronomy 2025, 15(5), 1118; https://doi.org/10.3390/agronomy15051118 - 30 Apr 2025
Cited by 1 | Viewed by 975
Abstract
To address issues such as leaf occlusion, misalignment of the harvesting robotic arm, and limited harvesting range in hillside chili fields, this paper designs an intelligent harvesting system based on 3D point cloud reconstruction and multi-mechanism collaborative leveling. The system integrates real-time data from a LiDAR and an IMU-based inertial navigation system to reconstruct, from multiple perspectives, the chili point cloud occluded by leaves. To counter robotic-arm misalignment caused by terrain undulations, the system integrates an adaptive leveling platform and an H-shaped planar slide, combined with a gyroscope, to dynamically adjust the arm’s posture in real time, ensuring arm stability while expanding its workspace. In addition, to ensure harvesting efficiency and pepper integrity, an integrated cutting–gripping flexible end effector is designed to achieve synchronized cutting and collection operations. Experiments show that the system achieves recognition accuracy of 81.95% for occluded chili peppers and 89.04% for non-occluded chili peppers. The harvesting success rate is 86.33%, with a single harvesting operation taking 13.17 s. During prolonged operation, the harvesting success rate can be maintained at approximately 85.1%. In summary, the intelligent harvesting system based on 3D point cloud reconstruction and multi-mechanism collaborative leveling provides a feasible solution for automated pepper harvesting. Full article
(This article belongs to the Section Precision and Digital Agriculture)
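
The adaptive leveling can be pictured as a simple proportional loop commanding actuator rates that drive the gyro-measured roll and pitch toward zero. The gains, limits, and loop rate below are illustrative assumptions.

```python
import numpy as np

def leveling_step(roll, pitch, dt=0.02, kp=2.0, max_rate=0.3):
    """One proportional-control step for an adaptive leveling platform:
    command actuator rates that drive measured roll/pitch toward zero.
    Gains, limits, and loop rate are illustrative assumptions.
    """
    cmd = -kp * np.array([roll, pitch])             # rad/s rate commands
    cmd = np.clip(cmd, -max_rate, max_rate)         # actuator limits
    return cmd, np.array([roll, pitch]) + cmd * dt  # next attitude

# Example: a platform tilted 8 degrees settles toward level
att = np.deg2rad([8.0, -3.0])
for _ in range(100):
    _, att = leveling_step(*att)
print(np.rad2deg(att))   # close to (0, 0)
```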
