Article

Improving SLAM Techniques with Integrated Multi-Sensor Fusion for 3D Reconstruction

1 School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China
2 The Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
3 School of Computer and Electronic Information, Guangxi University, Nanning 530000, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(7), 2033; https://doi.org/10.3390/s24072033
Submission received: 3 February 2024 / Revised: 9 March 2024 / Accepted: 18 March 2024 / Published: 22 March 2024
(This article belongs to the Section Sensors and Robotics)

Abstract

Simultaneous Localization and Mapping (SLAM) poses distinct challenges, especially in settings with variable elements, which demand the integration of multiple sensors to ensure robustness. This study addresses these issues by integrating advanced technologies like LiDAR-inertial odometry (LIO), visual-inertial odometry (VIO), and sophisticated Inertial Measurement Unit (IMU) preintegration methods. These integrations enhance the robustness and reliability of the SLAM process for precise mapping of complex environments. Additionally, incorporating an object-detection network aids in identifying and excluding transient objects such as pedestrians and vehicles, essential for maintaining the integrity and accuracy of environmental mapping. The object-detection network features a lightweight design and swift performance, enabling real-time analysis without significant resource utilization. Our approach focuses on harmoniously blending these techniques to yield superior mapping outcomes in complex scenarios. The effectiveness of our proposed methods is substantiated through experimental evaluation, demonstrating their capability to produce more reliable and precise maps in environments with variable elements. The results indicate improvements in autonomous navigation and mapping, providing a practical solution for SLAM in challenging and dynamic settings.

1. Introduction

Simultaneous Localization and Mapping (SLAM), a cornerstone technology in the realms of contemporary robotics and spatial computing, has significantly advanced the capability to interpret and interact with the physical world [1]. Utilizing an array of sensor data from cameras, LiDAR, and Inertial Measurement Units (IMUs), SLAM concurrently estimates sensor poses and fabricates a comprehensive three-dimensional representation of the surrounding milieu. This technology’s prowess in real-time pose estimation has catalyzed its widespread adoption across various sectors of autonomous robotics, encompassing unmanned aerial vehicles [2], automated ground vehicles [3,4,5], and the burgeoning field of self-driving automobiles [6,7]. Moreover, SLAM’s adeptness in real-time mapping plays a crucial role in robot navigation [8], enriching the experiences in virtual and augmented reality (VR/AR) [9] and bolstering the precision in surveying and mapping endeavors [10]. SLAM is a fundamental technology in modern robotics, enabling machines to map their environment while tracking their own location in real time. This innovation is crucial across various applications, from autonomous vehicles to virtual reality, enhancing navigation and spatial awareness.
In the landscape of current SLAM methodologies, systems predominantly bifurcate into two sensor-based categories: visual SLAM [11], utilizing camera sensors, and LiDAR SLAM [12], which uses LiDAR sensors. Each approach has distinct advantages and limitations in terms of accuracy, resolution, and environmental suitability, with visual SLAM being more cost-effective, but less precise at longer distances and in poor conditions, while LiDAR SLAM offers higher accuracy and better environmental mapping, but lacks color information. Integrating both types within a SLAM system can overcome their respective weaknesses, resulting in more detailed and accurate 3D mapping.
Visual SLAM, leveraging the cost effectiveness and efficiency in size, weight, and power consumption of camera sensors, has attained significant accuracy in localization [13]. The vivid color data procured from cameras enhances the human interpretability of the resultant maps [14]. However, visual SLAM’s accuracy and resolution in mapping often lag behind those of LiDAR SLAM. The principal shortcoming of this method arises from its dependency on triangulating disparities from multi-view imagery, a computationally demanding task that especially demands substantial computing resources when dealing with high-resolution images and full traversal scenarios. As a result, hardware acceleration or the support of server clusters is frequently required [15]. Among the numerous challenges encountered, a particularly pernicious issue is the erroneous correspondence of feature points, which can significantly undermine the precision of trajectory calculations [16]. Moreover, the specific error factors vary significantly depending on whether a monocular or stereo camera is employed, further exacerbating the complexities associated with this approach [17]. Therefore, a comprehensive understanding and optimization of this method must take into account these additional factors and challenges. Additionally, the depth accuracy in visual SLAM degrades proportionally with increasing measurement distances, posing challenges in reconstructing expansive outdoor scenes and underperforming in environments with poor lighting or limited texture [18].
Conversely, LiDAR SLAM, leveraging the precise and extensive measurement capabilities of LiDAR sensors, excels in both localization accuracy and environmental map reconstruction [19,20]. Despite its strengths, LiDAR SLAM may struggle in scenarios with limited geometric features, such as extended corridors or large, featureless walls. While it effectively reconstructs the environmental geometry, LiDAR SLAM does not capture the color information that visual SLAM systems provide, a factor that can be crucial in certain application contexts.
Integrating LiDAR and camera measurements within a SLAM framework effectively addresses the inherent limitations of each sensor type in localization tasks, leading to an enriched output [21,22]. This approach yields a precise, high-resolution 3D map endowed with detailed textural information, meeting the diverse requirements of a wide array of mapping applications and providing a robust solution to the challenges faced in complex mapping scenarios [23].
In this paper, we investigate the integration of various sensor modalities, including LiDAR, vision, and inertial sensors, within the domain of Simultaneous Localization and Mapping (SLAM), particularly focusing on dynamic environments. In response to the challenges of robust and accurate mapping in such settings, we propose an innovative LiDAR–inertial–visual fusion framework. This framework is notable for its seamless amalgamation of two critical sub-systems: the LiDAR–inertial odometry (LIO) and the visual–inertial odometry (VIO).
Collaborating effectively, these sub-systems facilitate the incremental assembly of a comprehensive 3D radiance map, adeptly capturing various environmental nuances. Our approach harnesses the distinct advantages of both the LIO and VIO systems, and by integrating their capabilities, we significantly enhance the overall accuracy and effectiveness of the mapping process. This synergy allows for a more detailed and precise representation of the mapped environments. Moreover, we have incorporated specific methodologies within this unified framework, including IMU preintegration techniques specially designed for LIO.
To effectively remove dynamic obstacles such as pedestrians and vehicles from the mapping process, we have incorporated an object-detection network into our system. This network matches static obstacles and eliminates dynamic ones, and is crucial for accurately identifying and excluding moving entities, thereby enhancing the precision and reliability of the mapping results. This addition further improves our system’s ability to navigate and map complex dynamic environments, demonstrating the robustness and effectiveness of our approach in various challenging scenarios.

2. Related Work

2.1. Sensor Fusion

Multi-sensor fusion, with data from a variety of different types of sensors, brings together the benefits of each sensor while mitigating their limitations to create a more robust, comprehensive, and accurate representation of the environment. Zhang et al. in [24] proposed a multi-sensor information fusion algorithm based on increased trust, which fuses line segments extracted from sonar and laser rangefinders into SLAM to improve the robot’s attitude and map accuracy. In [25], Laurent et al. proposed a multi-sensor self-localization method. The mobile robot is equipped with LiDAR, GPS, an IMU, and other sensors and uses segmentation covariance cross-filtering to improve the accuracy of existing maps. Ref. [26] proposed a tightly coupled multi-sensor fusion framework, Lvio-Fusion, based on graph optimization, which integrates stereo cameras, LiDAR, an IMU, and GPS. It introduced a piecewise global pose graph optimization based on GPS and a closed loop. This method can eliminate accumulated drift and adopt an actor–critic method in reinforcement learning to adaptively adjust the weight of the sensor, so that the system has higher estimation accuracy and robustness to various environments. In [27], Zhang et al. combined an IMU, camera, and GNSS data and used image sequences and wheel inertial self-motion results to build semantic local maps describing local environments and, then, used supervised neural networks to simplify the matching of local semantic maps with online map databases, achieving higher accuracy.

2.2. Dynamic Target Detection

Many current SLAM methods assume static scenes, but real environments typically contain dynamic objects. In order to improve the accuracy of the SLAM system, Fu et al. [28] proposed integrating the Convolutional Block Attention Module (CBAM) into the Mask R-CNN network to extract dynamic feature points, thereby reducing the error between the actual trajectory and the estimated trajectory and improving accuracy. Jaafar and Andrey [29] used KMeans clustering with extreme constraints and SegNet for semantic segmentation to filter out features detected on moving objects, improving the real-time performance and accuracy of the SLAM system. Since the estimated trajectory of static landmarks is greatly different from that of all dynamic landmarks, Yin et al. [30] proposed a method of loosely coupling the three-dimensional scene flow with the Inertial Measurement Unit (IMU) for dynamic feature detection and estimated the camera state by integrating IMU measurement and feature observation. Li et al. [31] proposed object detection and scene flow feature-point-tracking technologies based on deep learning to separate and jointly optimize dynamic and static objects. Both of their methods improved the accuracy and robustness of SLAM systems in dynamic scenarios.

2.3. Semantic SLAM

Semantic SLAM enhances the traditional SLAM framework by introducing semantic understanding into the map-building and localization processes. Instead of solely constructing a geometric map of the environment, Semantic SLAM aims to generate a map that also identifies and labels objects and structures based on their meaning and function. This enriched mapping provides a deeper understanding of the environment, facilitating improved decision making, interaction, and navigation for autonomous systems. Ran et al. [32] proposed a method that combines the target recovery method of the DBSCAN algorithm based on geometric features with the adaptive sampling strategy based on line features with variable step intervals and achieved better target reconstruction accuracy in complex environmental backgrounds. Lee et al. [33] proposed a semantic segmentation technology based on deep learning, which significantly improved the trajectory tracking accuracy of monocular SLAM. Ma et al. [34] used the deep neural network YOLOv5 to add weight to the features of objects matching the same semantic category and incorporated semantic information into loop closure detection, thereby improving the accuracy of loop closure detection and reducing the absolute trajectory error. Simultaneously, Cheng et al. [35] proposed a real-time RGB-D semantic visualization SLAM system based on the ORB-SLAM2 framework, which added semantic information to the metric map constructed by the system, ensuring that the system is real-time, accurate, and robust in dynamic scenes.

2.4. Preintegration

Because the IMU is prone to nontrivial noise and drift during use, this can lead to large errors in attitude estimation. Yuan et al. [36] and Chang et al. [37] used preintegration to correct the IMU and odometer, which effectively reduces the forward positioning drift and alleviates the problem of a high IMU drift rate. Li et al. [38] used ego-velocity preintegration factors to optimize the attitude map to achieve more accurate and robust attitude estimation.
Our present IMU preintegration approach yields numerical outcomes that are largely consistent with those reported in References [39,40], albeit through distinct mathematical derivations. To systematically articulate the methodology of this paper, a clear exposition is provided herein. The core objective of IMU preintegration is to aggregate inertial measurements over a fixed time window, generally denoted as $[t_0, t_1]$, to yield an estimate of relative motion. IMU preintegration relies on a set of equations that govern the update of the position, velocity, and rotation state variables.
Rotation update: Consider an IMU that outputs angular velocity measurements $\omega(t)$. The rotation matrix $R_t$ is updated using:
$\Delta R = \exp(\Omega \, \Delta t)$
where $\Omega$ is the skew-symmetric matrix form of $\omega(t)$ and $\Delta t$ is the time interval between measurements.
Velocity update: Given linear acceleration measurements $a(t)$, the velocity $v(t)$ is updated as:
$\Delta v = R_{t_0} \cdot a(t) \, \Delta t$
where $R_{t_0}$ is the rotation matrix at the start time $t_0$ and $\Delta t$ is the time interval.
Position update: The position $p(t)$ is updated as:
$\Delta p = \Delta v \, \Delta t + \tfrac{1}{2} a(t) \, \Delta t^{2}$
Composite measurement: The composite measurement generated through preintegration over the interval $[t_0, t_1]$ can be expressed as the tuple $(\Delta R, \Delta v, \Delta p)$.
The estimated motion obtained by IMU preintegration can remove the deflecting point cloud and provide the initial guess for LiDAR range optimization. A tightly coupled LiDAR inertial odometer framework [41] is proposed, and the resulting laser ranging solution is used to estimate IMU bias. Wang et al. [42] adopted a direct point cloud registration method without extracting features, ensuring accuracy through the preintegration of the IMU, direct scan matching at local scales, an effective fusion loop closure-detection method, and condition detection.
In LiDAR–inertial odometry (LIO), Inertial Measurement Unit (IMU) preintegration involves combining accelerometer and gyroscope readings over a time interval to generate a compound measurement, which captures the overall change in pose (position, velocity, and orientation) during that time. Here is how it generally works, in terms of formulas.

2.4.1. Notation

$\Delta t$: time interval between two IMU measurements.
$a_k$: acceleration measurement at time $k$.
$\omega_k$: angular velocity measurement at time $k$.
$g$: gravitational acceleration (known and constant).
$R_k$: rotation matrix at time $k$, representing the orientation.

2.4.2. Preintegration Steps

(1) Rotation update:
Integrate gyroscope measurements to obtain the change in orientation over the interval. This is typically performed using quaternion or rotation matrix formulations. If we denote $\Delta\theta$ as the integral of angular velocity over $\Delta t$, then
$\Delta\theta = \int_{t}^{t + \Delta t} \omega \, dt$
We can then update the rotation matrix R as follows:
$R_{k+1} = R_k \exp(\Delta\theta)$
Here, $\exp$ represents the matrix exponential, which maps the integrated angular velocity (a rotation vector) to a rotation matrix.
(2) Velocity update:
Integrate accelerometer measurements to obtain the change in velocity $\Delta v$ over the interval $\Delta t$.
$\Delta v = \int_{t}^{t + \Delta t} \left( R a - g \right) dt$
This can be approximated discretely as:
$\Delta v \approx \left( R_k a_k - g \right) \Delta t$
The velocity at $t + \Delta t$ is then:
$v_{k+1} = v_k + \Delta v$
(3) Position update:
Integrate the change in velocity to obtain the change in position $\Delta p$ over $\Delta t$.
$\Delta p = \int_{t}^{t + \Delta t} v \, dt$
This can be approximated discretely as:
$\Delta p \approx v_k \, \Delta t + \tfrac{1}{2} \left( R_k a_k - g \right) \Delta t^{2}$
The position at $t + \Delta t$ is then:
$p_{k+1} = p_k + \Delta p$
These preintegrated measurements $(\Delta\theta, \Delta v, \Delta p)$ are then used in conjunction with LiDAR measurements to estimate the full system state in the LIO framework. This is a simplified overview and assumes constant acceleration and angular velocity over $\Delta t$. In practice, more sophisticated numerical integration methods may be used, and additional terms may be included to account for biases and noise in the IMU measurements.
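As a concrete illustration of these discrete update steps, the following Python sketch (our own, using NumPy and SciPy; it ignores biases and noise, assumes piecewise-constant signals, and adopts the gravity sign convention of the formulas above) accumulates the preintegrated quantities over a window of IMU samples:

```python
import numpy as np
from scipy.spatial.transform import Rotation

G = np.array([0.0, 0.0, 9.81])  # gravity magnitude along +z; sign convention assumed to match the text

def preintegrate(gyro, accel, dt):
    """Accumulate (delta_R, delta_v, delta_p) over a window of IMU samples.

    gyro  : (N, 3) angular velocities [rad/s]
    accel : (N, 3) accelerometer readings [m/s^2]
    dt    : sampling interval [s]
    Biases and measurement noise are ignored in this sketch.
    """
    delta_R = np.eye(3)      # accumulated rotation relative to the window start
    delta_v = np.zeros(3)    # accumulated velocity change; stands in for v_k with v_0 = 0
    delta_p = np.zeros(3)    # accumulated position change
    for w, a in zip(gyro, accel):
        acc = delta_R @ a - G                                 # (R_k a_k - g)
        delta_p += delta_v * dt + 0.5 * acc * dt ** 2         # position update from the text
        delta_v += acc * dt                                   # velocity update from the text
        delta_R = delta_R @ Rotation.from_rotvec(w * dt).as_matrix()  # R_{k+1} = R_k exp(omega dt)
    return delta_R, delta_v, delta_p
```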

2.5. Three-Dimensional Reconstruction

Within the SLAM framework, 3D reconstruction serves as the linchpin for generating comprehensive spatial representations of environments. It translates raw sensor observations into structured three-dimensional models, ensuring accurate geometric and topological fidelity. This process not only underpins the localization component by offering reference landmarks and structures, but also facilitates enhanced environmental comprehension. The resultant detailed 3D maps become the foundation upon which autonomous systems make informed navigation decisions, interact with their surroundings, and execute advanced tasks. Three-dimensional reconstruction translates the ephemeral sensory data into persistent and interpretable spatial constructs within SLAM.
In the domain of SLAM with multi-sensor fusion, the significance of 3D reconstruction is paramount, serving as a crucial component in environmental perception and comprehension for autonomous systems. By meticulously crafting precise three-dimensional models of the surrounding environment, 3D reconstruction significantly augments the capabilities of robots and autonomous vehicles in executing tasks such as path planning, navigation, and obstacle avoidance [43]. Moreover, the seamless integration of multi-sensor fusion techniques with SLAM not only bolsters the accuracy and robustness of 3D reconstruction, but also elevates the overall performance of SLAM systems by imparting richer and more intricate environmental information [44]. This groundbreaking advancement holds tremendous potential for a diverse array of practical applications, encompassing areas such as augmented and virtual reality, urban planning, and heritage preservation [45].
As SLAM has matured, many methods for 3D reconstruction have also been developed. Zhang et al. [46] proposed a multi-plane image 3D-reconstruction model based on stereo vision. By collecting multi-plane image features and using a 3D coordinate-conversion algorithm to run under stereo vision, a model with better application performance was obtained. Song et al. [47] proposed a system based on the traditional Structure from Motion (SfM) pipeline to preprocess and modify equirectangular images generated by omnidirectional cameras, which can estimate the accurate self-motion and sparse 3D structure of synthetic and real-world scenes well. Aiming at the problems of large memory consumption and the low efficiency and high hardware requirements of previous 3D reconstruction schemes based on deep learning, Zeng et al. [48] proposed a multi-view geometric 3D-reconstruction network framework based on the improved PatchMatch algorithm, which iteratively optimized the PatchMatch of the feature maps of each scale to improve the reconstruction efficiency and reduce the running memory. Qin et al. [49] proposed a method combining calibration and ICP registration to complete the reconstruction of weak texture surfaces, using the calibration results as a better initial position for ICP registration, reducing the iteration time of the ICP algorithm, so as to obtain accurate reconstruction of weak texture objects.

3. IMU Preintegration

3.1. Introduction to IMU Preintegration

Inertial Measurement Units (IMUs) have become an indispensable component in the field of robotics and autonomous systems, especially in scenarios requiring precise localization and mapping. IMUs are capable of providing high-frequency data related to acceleration and angular velocity, thereby offering a rich source of information for tracking motion dynamics. However, the utility of raw IMU data is often compromised due to noise, drift, and other forms of inaccuracies, warranting the need for sophisticated data-processing techniques.
One such technique pivotal for enhancing the utility of IMU data is IMU preintegration [50]. It involves the accumulation of inertial readings over a finite-time interval, thereby creating a composite measurement that approximates the relative motion between two instances in time. Through preintegration, the inertial data are transformed into a more manageable form, offering several advantages like reduced computational complexity and improved resilience against noise. Importantly, the preintegrated IMU measurements serve as a valuable input for state estimation algorithms, ensuring that the system remains responsive and accurate, even when faced with latency in other sensor modalities like LiDAR or cameras.
The significance of IMU preintegration becomes exceedingly evident in dynamic environments. When mapping scenarios involve transient or unpredictable elements—such as moving cars or pedestrians—the system’s ability to quickly and accurately adapt becomes crucial. Here, IMU preintegration offers an expedient method to temporally align disparate sensor data and improve state estimation in real-time, thereby enabling the system to react more intelligently and safely to dynamic changes in the environment.

3.2. Comparative Analysis of LIO and VIO Preintegration

IMU preintegration serves as the common underpinning in both LIO and VIO for effectively dealing with high-frequency inertial measurements. However, the utilization and impact of preintegrated IMU data differ in these two paradigms, which rely on disparate sensor modalities for additional measurements—LiDAR for LIO and cameras for VIO.

3.2.1. Mathematical Formulations

(1) Similarities: Both LIO and VIO utilize the same fundamental equations for IMU preintegration concerning rotation ( Δ R ), velocity ( Δ V ), and position ( Δ P ) updates. These equations serve to convert high-frequency IMU data into a lower-dimensional, composite form.
(2) Differences: In LIO, the preintegrated IMU data are often directly fused with LiDAR point clouds through optimization algorithms. VIO, on the other hand, involves additional mathematical layers, as it requires feature extraction and tracking from image frames to correlate with the preintegrated IMU data. The complexity of the mathematical model is generally higher in VIO due to the introduction of photometric error terms or additional constraints that are absent in LIO.

3.2.2. Practical Applications

(1) Similarities: Both LIO and VIO offer the advantage of providing robust state estimation in dynamic environments, and their preintegrated IMU data can be used to compensate for latency in acquiring LiDAR or visual data.
(2) Differences: LIO is often favored in outdoor, large-scale environments due to the long-range capabilities of LiDAR sensors. VIO is generally more compact and cost-effective, but may be sensitive to lighting conditions and feature-poor scenarios. The choice between LIO and VIO often depends on the specific requirements of the environment and the application.

3.2.3. Suitability for Dynamic Environments

Both LIO and VIO can benefit from IMU preintegration in dynamic scenarios. The preintegrated IMU measurements can help in reducing the computational burden and in improving real-time responsiveness. However, the choice between LIO and VIO may hinge on various external factors such as lighting conditions, the required sensing range, and the availability of distinct visual features in the environment.

3.3. Mathematical Formulations of Preintegration in LIO

In the present work, our emphasis is on leveraging preintegration methodologies within the context of LiDAR–inertial odometry (LIO).
In the realm of robotic navigation, Inertial Measurement Units (IMUs) play a pivotal role. These IMUs capture measurements in the body frame. Specifically, the IMU measurements encapsulate both the force counteracting gravitational pull and the intricate dynamics of the platform on which it is mounted. However, it is crucial to recognize that these measurements are not devoid of potential interferences. They are invariably influenced by various biases and disturbances. Among the most prominent biases are the acceleration bias, represented as b a , and the gyroscope bias, denoted as b ω . These biases are intrinsic to the IMU and can skew the readings, making them deviate from the true values.
Furthermore, additive noise, an external interference, can further compound these biases, leading to even more complex deviations. The raw measurements provided by the IMU, particularly the gyroscope and accelerometer readings, symbolized as ω ^ and a ^ , respectively, are the direct values fetched from the sensors. These measurements, before any form of correction or filtering, are the foundational data upon which subsequent processing and fusion algorithms act. Therefore, ensuring their accuracy and understanding their inherent biases are paramount for any sophisticated navigation system.
$\hat{a}_t = a_t + b_{a_t} + R_w^t g^w + n_a$
$\hat{\omega}_t = \omega_t + b_{\omega_t} + n_\omega$
Consistent with our preintegration derivation, which closely aligns with the findings presented in References [39,40], we adhered to the identical assumption that the inherent additive noise observed in both acceleration and gyroscope measurements conforms to a Gaussian white noise distribution. This assumption is fundamental in ensuring the accuracy and reliability of our measurements, which are crucial for precise 3D reconstruction and environmental perception in SLAM systems. Specifically, the noise in acceleration, denoted as $n_a$, is modeled as $\mathcal{N}(0, \sigma_a^2)$, while the noise in the gyroscope, represented as $n_\omega$, follows the distribution $\mathcal{N}(0, \sigma_\omega^2)$. Furthermore, it is essential to emphasize that both the acceleration bias and the gyroscope bias are conceptualized as undergoing a random walk process. In this context, the derivatives of these biases are also influenced by Gaussian white noise: $n_{b_a}$ conforms to $\mathcal{N}(0, \sigma_{b_a}^2)$, and $n_{b_\omega}$ is governed by $\mathcal{N}(0, \sigma_{b_\omega}^2)$. Mathematically, this can be succinctly represented as $\dot{b}_{a_t} = n_{b_a}$ and $\dot{b}_{\omega_t} = n_{b_\omega}$, providing a robust framework to interpret sensor deviations and perturbations.
Such a modeling choice is instrumental in understanding the inherent variations and uncertainty of the sensor measurements, thereby aiding in the development of more robust and resilient algorithms for navigation and sensor fusion.
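To make this measurement and bias model concrete, the short sketch below (purely illustrative; the sampling rate and noise magnitudes are placeholder values, not parameters from this work) shows how synthetic gyroscope and accelerometer readings would be corrupted by additive white noise and random-walk biases:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.005                                   # assumed 200 Hz IMU
sigma_a, sigma_w = 0.02, 0.002               # accel/gyro white-noise std devs (placeholders)
sigma_ba, sigma_bw = 1e-3, 1e-4              # bias random-walk std devs (placeholders)

b_a, b_w = np.zeros(3), np.zeros(3)
for _ in range(2000):
    a_true = np.zeros(3)                     # true specific force (includes the gravity term R_w^t g^w)
    w_true = np.zeros(3)                     # true angular velocity
    # measurement model: hat_a = a + b_a + n_a,  hat_omega = omega + b_omega + n_omega
    a_meas = a_true + b_a + rng.normal(0.0, sigma_a, 3)
    w_meas = w_true + b_w + rng.normal(0.0, sigma_w, 3)
    # random-walk biases: their derivatives are white noise, integrated over dt
    b_a = b_a + rng.normal(0.0, sigma_ba, 3) * np.sqrt(dt)
    b_w = b_w + rng.normal(0.0, sigma_bw, 3) * np.sqrt(dt)
```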
In the context of our study, consider two temporally successive frames, denoted as $b_k$ and $b_{k+1}$. During the time interval $[t_k, t_{k+1}]$, a multitude of inertial measurements can be observed. Leveraging our bias estimation approach, we proceed to integrate these measurements within the localized frame $b_k$ as follows:
$\alpha_{b_{k+1}}^{b_k} = \iint_{t \in [t_k, t_{k+1}]} R_t^{b_k} \left( \hat{a}_t - b_{a_t} \right) dt^{2}$
$\beta_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} R_t^{b_k} \left( \hat{a}_t - b_{a_t} \right) dt$
$\gamma_{b_{k+1}}^{b_k} = \int_{t \in [t_k, t_{k+1}]} \tfrac{1}{2} \Omega\!\left( \hat{\omega}_t - b_{\omega_t} \right) \gamma_t^{b_k} \, dt$
where
$\Omega(\omega) = \begin{bmatrix} -\lfloor \omega \rfloor_{\times} & \omega \\ -\omega^{T} & 0 \end{bmatrix}, \qquad \lfloor \omega \rfloor_{\times} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$
Considering the covariance matrix $P_{b_{k+1}}^{b_k}$ corresponding to the variables $\alpha$, $\beta$, and $\gamma$, it is evident that its propagation adheres to a predetermined pattern. Notably, it can be deduced from the expressions above that the preintegration terms can be exclusively derived from the IMU measurements, provided that $b_k$ serves as the reference frame and the biases are appropriately accounted for.
In scenarios where the bias estimation undergoes only minor fluctuations, we fine-tune the terms $\alpha_{b_{k+1}}^{b_k}$, $\beta_{b_{k+1}}^{b_k}$, and $\gamma_{b_{k+1}}^{b_k}$ using their respective first-order approximations in relation to the bias. This adjustment can be mathematically represented as follows:
$\alpha_{b_{k+1}}^{b_k} \approx \hat{\alpha}_{b_{k+1}}^{b_k} + J_{b_a}^{\alpha} \delta b_{a_k} + J_{b_\omega}^{\alpha} \delta b_{\omega_k}$
$\beta_{b_{k+1}}^{b_k} \approx \hat{\beta}_{b_{k+1}}^{b_k} + J_{b_a}^{\beta} \delta b_{a_k} + J_{b_\omega}^{\beta} \delta b_{\omega_k}$
$\gamma_{b_{k+1}}^{b_k} \approx \hat{\gamma}_{b_{k+1}}^{b_k} \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2} J_{b_\omega}^{\gamma} \delta b_{\omega_k} \end{bmatrix}$
In the event that the bias estimation undergoes a substantial alteration, a repropagation is executed based on the updated bias estimation. Adopting such a methodology substantially conserves computational resources, especially pertinent for optimization-centric algorithms, by negating the necessity to repetitively propagate IMU measurements.
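When the bias update is small, this first-order correction amounts to a few matrix–vector products, as in the minimal sketch below (assuming the Jacobians with respect to the biases have been propagated alongside the preintegration terms; all names are illustrative, and the quaternion-style correction of the rotation term is omitted):

```python
import numpy as np

def correct_preintegration(alpha, beta, J_alpha_ba, J_alpha_bw, J_beta_ba, J_beta_bw,
                           delta_ba, delta_bw):
    """First-order correction of the position (alpha) and velocity (beta)
    preintegration terms for a small change (delta_ba, delta_bw) in the bias estimate."""
    alpha_new = alpha + J_alpha_ba @ delta_ba + J_alpha_bw @ delta_bw
    beta_new = beta + J_beta_ba @ delta_ba + J_beta_bw @ delta_bw
    return alpha_new, beta_new

# If the bias change exceeds a chosen threshold, re-propagation from raw IMU data
# is performed instead of applying this linearized correction.
```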

4. Dynamic Obstacle Removal Using YOLOv5

4.1. Object Detection Using YOLOv5 Architecture

You Only Look Once version 5 (YOLOv5) is a state-of-the-art object-detection model known for its efficiency and high performance [51,52,53]. At its core, YOLOv5 is designed to identify and localize multiple objects in images or video feeds, executing these tasks with high accuracy.
Compared to previous YOLO versions, YOLOv5 exhibits numerous notable advantages. Its lightweight design not only enhances inference speed, but also significantly reduces memory consumption, making it effective even in resource-constrained environments. Additionally, YOLOv5 demonstrates remarkable stability, thanks to its sophisticated training strategies and regularization techniques, which effectively prevent overfitting and ensure consistent performance. Furthermore, its excellent compatibility allows seamless integration with various deep learning frameworks, providing users with the flexibility to choose the platform best suited to their needs. More importantly, YOLOv5’s modular code architecture and comprehensive documentation facilitate easy modifications and porting to diverse applications. Therefore, YOLOv5 undoubtedly stands out as a highly flexible and appealing choice for target-detection tasks.
The architecture employs a deep Convolutional Neural Network (CNN) for feature extraction, followed by specialized layers for object detection. The network’s backbone is optimized for rapid computations, enabling its deployment in time-sensitive applications such as autonomous driving and real-time surveillance. YOLOv5 offers a series of model sizes (small, medium, large, and x-large), allowing users to choose a variant that best suits their balance of computational efficiency and detection accuracy. The YOLOv5 network structure is shown in Figure 1.
One of the distinguishing features of YOLOv5 is its utilization of anchor boxes to improve the detection of objects with varying sizes and orientations. These anchor boxes are dynamically scaled and adjusted during training to better match the ground truth boxes. Additionally, the architecture incorporates advanced data-augmentation techniques such as mosaic augmentation and CutMix, improving the model’s ability to generalize across different conditions.
YOLOv5 employs Convolutional Neural Networks (CNNs) as part of its architecture, like many other object-detection models. The fundamental operation in a CNN is convolution, which is applied to the input image or feature maps from previous layers. The convolution operation can be represented mathematically as follows:
$O_{ij} = \sum_{m} \sum_{n} I_{i-m,\, j-n} \cdot K_{mn}$
where
$O_{ij}$ is the value of the output feature map at the $(i, j)$-th position.
$I$ is the input feature map.
$K$ is the kernel or filter.
$m, n$ range over the dimensions of the kernel.
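As a plain illustration of this operation (not YOLOv5's actual implementation, which relies on batched, GPU-optimized layers), a direct NumPy version of the single-channel 2D convolution above might look like this:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution of a single-channel image with a small kernel.
    Matches O_ij = sum_{m,n} I_{i-m, j-n} K_{mn} up to boundary handling."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]                    # true convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out
```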
Furthermore, it should be noted that YOLOv5 may incorporate post-convolutional techniques such as Batch Normalization and activation functions including, but not limited to, Leaky Rectified Linear Unit (Leaky ReLU) and Mish. While these operations do not constitute components of the convolutional operation per se, they are integral to the overall architecture and contribute significantly to the model’s performance.
YOLOv5 serves as a highly adaptable and efficient object-detection model, capable of operating under the constraints of computational resources without substantially compromising performance. Its architecture is strategically designed to optimize both speed and accuracy, making it well suited for a variety of real-world applications requiring instant object detection and localization.
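For reference, a pretrained YOLOv5 model can be loaded through PyTorch Hub and run on a single frame with only a few lines; the sketch below assumes the ultralytics/yolov5 repository is reachable and uses a placeholder image path:

```python
import torch

# Load the small YOLOv5 variant with pretrained COCO weights
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run detection on one frame (placeholder path)
results = model('frame_000123.jpg')

# Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index
print(results.xyxy[0])
```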

4.2. Algorithmic Framework for Dynamic Obstacle Identification and Removal

The process of dynamic object removal in the context of a video stream involves both object-detection and -tracking mechanisms. Specifically, the You Only Look Once version 5 (YOLOv5) architecture can be utilized to perform real-time object detection, while subsequent tracking algorithms, such as Simple Online and Real-time Tracking (SORT), can be employed to keep track of object identities over time. The following sections outline the theoretical framework for this approach:
  • Object detection using YOLOv5:
    The first step involves the detection of objects in individual frames $f_t$ at time $t$. For a given frame $f_t$, the set of detected objects $D_t$ can be represented as:
    $D_t = \{d_1, d_2, \ldots, d_n\}$
    where each detection $d_i$ contains information about the object class, bounding box coordinates, and confidence score.
  • Object tracking using SORT:
    For tracking objects across multiple frames, the SORT algorithm assigns a unique identifier $ID_i$ to each detected object $d_i$. The updated state of all tracked objects $T_t$ at time $t$ can be denoted as:
    $T_t = \{t_1, t_2, \ldots, t_m\}$
    where $t_i = (ID_i, \mathrm{position}_i)$.
  • Identifying dynamic objects:
    The dynamism of an object $i$ is determined based on the change in its position over a set number of frames $\Delta t$. Mathematically, the dynamism $\delta_i$ for object $i$ is:
    $\delta_i = \sqrt{(x_t - x_{t - \Delta t})^2 + (y_t - y_{t - \Delta t})^2}$
    where $(x_t, y_t)$ and $(x_{t - \Delta t}, y_{t - \Delta t})$ are the coordinates of the object at times $t$ and $t - \Delta t$, respectively. If $\delta_i$ exceeds a predefined threshold, the object is considered dynamic (a minimal sketch of this check follows below).
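A compact sketch of this displacement test, given track positions keyed by SORT identifiers (the data structures and the threshold value are illustrative, not taken from the system itself):

```python
import numpy as np

DYNAMISM_THRESHOLD = 15.0   # pixels over the comparison window (placeholder value)

def find_dynamic_ids(tracks_now, tracks_past, threshold=DYNAMISM_THRESHOLD):
    """tracks_* map SORT track IDs to (x, y) bounding-box centers at times t and t - dt."""
    dynamic_ids = set()
    for track_id, (x_t, y_t) in tracks_now.items():
        if track_id not in tracks_past:
            continue                                     # no history yet for this track
        x_p, y_p = tracks_past[track_id]
        displacement = np.hypot(x_t - x_p, y_t - y_p)    # delta_i from the formula above
        if displacement > threshold:
            dynamic_ids.add(track_id)
    return dynamic_ids
```

Detections whose identifiers land in this set can then be masked out before their associated features or LiDAR returns enter the mapping pipeline.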

5. Synergistic Integration of LIO and VIO in SLAM Frameworks

5.1. LIO and VIO: A Comparative Overview

LiDAR–inertial odometry (LIO) is a sensor-fusion-based navigation approach that integrates Light Detection and Ranging (LiDAR) data with Inertial Measurement Unit (IMU) information to compute the position and orientation of a moving platform. In the LIO framework, the LiDAR provides high-fidelity spatial point clouds, which are utilized to detect features in the environment. Simultaneously, the IMU provides temporal motion data, including acceleration and angular velocity. By fusing these data streams, LIO is capable of yielding accurate state estimation. The method is particularly advantageous in scenarios where visual data are unreliable or unavailable, such as in adverse lighting conditions or unstructured environments.
Visual–inertial odometry (VIO) is another sensor-fusion methodology that combines visual data from cameras with IMU data for real-time state estimation. Unlike LIO, VIO leverages visual features extracted from a sequence of images to build a representation of the environment. The IMU data, providing acceleration and angular velocity measurements, complements the visual data by offering high-frequency temporal information, which mitigates the shortcomings of visual sampling rates. VIO is often employed in applications requiring lightweight and low-cost sensors, such as mobile robotics and augmented reality, and provides reliable performance under a broad range of lighting and texture conditions.

5.2. The System Overview

The advanced, tightly coupled system presented in this study seamlessly integrates data from a LiDAR, a camera, and an Inertial Measurement Unit (IMU), as shown in Figure 2. This fusion is executed through a tripartite architectural design comprising the following sub-systems:
  • Measurement preprocessing sub-system: This preliminary module is crucial for the effective handling and conditioning of raw sensor data. Here, raw images and point clouds undergo a series of preprocessing steps. The IMU provides linear acceleration and angular acceleration signals, which are respectively integrated with the camera’s image data and the LiDAR’s point data. For the camera data, the IMU signals offer a temporal correction, ensuring that any delay or temporal misalignment between the camera frames is minimized. This synchronization helps in maintaining the temporal coherence of the visual data, which is pivotal for accurate feature extraction.
  • LiDAR–inertial integration sub-system: Akin to the camera data, the raw LiDAR point clouds are corrected using the IMU signals. These signals rectify any misalignments due to rapid motions or vibrations, which can distort the spatial arrangement of the point clouds. After this correction, a rigorous outlier-rejection protocol is employed to filter out anomalous points, ensuring that only consistent and reliable point data are retained. These processed point clouds then undergo advanced edge and planar matching techniques. By doing so, the sub-system can extract critical geometric information, such as object boundaries and surface orientations, which aids in a richer scene representation. The resulting data, complemented by odometry information derived from the LiDAR, forms a robust set ready for map optimization.
  • Visual–inertial integration sub-system: Post IMU-based correction, the images are processed to extract salient features. These visual landmarks, when paired with depth cues from the LiDAR, offer a more holistic and three-dimensional representation of the environment. Furthermore, to bolster the robustness of visual features, advanced algorithms may employ techniques such as scale-invariant feature transformation or adaptive thresholding, ensuring that features are invariant to scale changes and illumination conditions.
Figure 2. Multi-sensor tightly coupled system architecture.
Finally, the processed data from both the LiDAR and visual channels converge in the graph optimization phase. Within this phase, a sophisticated optimization algorithm, potentially leveraging state-of-the-art solvers, refines the pose graph, minimizing errors and inconsistencies. This integrated approach, amalgamating inputs from multiple sensors and harnessing their respective strengths, ensures a map reconstruction that is not only precise, but also resilient to typical environmental challenges. The outcome is a state-of-the-art SLAM system that stands out in terms of its accuracy, robustness, and efficiency.

5.3. Data Alignment and Synchronization

5.3.1. Visual–Inertial Integration

In our proposed architecture shown in Figure 3, the system undergoes a comprehensive optimization process that considers a variety of inputs to achieve enhanced performance. Primarily, it integrates residuals from the Inertial Measurement Unit (IMU) preintegration, which offers accurate and rapid calculations pertaining to the motion dynamics. This integration ensures that the minor deviations or errors that might arise due to the inherent limitations of the IMU can be effectively minimized.
Furthermore, our system incorporates visual measurements; however, uniquely, it evaluates them both with and without depth. Employing visual measurements devoid of depth information, akin to those obtained through a monocular camera, offers a broadened, yet less detailed perspective of the environment. Despite the absence of depth perception, these measurements present a comprehensive overview of the surroundings, thereby facilitating a macroscopic comprehension of the scene in question. This general view is essential for preliminary analysis and for rapidly assessing the spatial arrangement of observable landmarks.
On the other hand, the integration of visual measurements with depth furnishes the system with intricate details about the environment, allowing for a more granular and precise understanding of the surroundings. Depth-inclusive visual measurements facilitate the creation of a dense and detailed 3D map, thereby bolstering the system’s ability to recognize and react to minute changes or obstacles within its operational environment.
By synthesizing the advantages of both IMU preintegration and these dual-mode visual measurements, our system ensures a robust, efficient, and highly accurate output, vital for real-time operations and complex environmental navigation.
In the context of visual–inertial SLAM, we employed a preintegration technique for IMU readings captured between successive visual frames, namely $b_k$ and $b_{k+1}$. Through this preintegration, we derived measurements related to rotation ($\Delta Q$), velocity ($\Delta V$), and position ($\Delta P$). Accompanying these measurements is a covariance matrix, denoted as $\Sigma_{\mathcal{I}_{b_k, b_{k+1}}}$, which encompasses the entire measurement vector. With these preintegrated parameters and the states $s_{b_k}$ and $s_{b_{k+1}}$ at our disposal, we turn to the inertial residual, $r_{\mathcal{I}_{b_k, b_{k+1}}}$, as defined in subsequent discussions.
$r_{\mathcal{I}_{b_k, b_{k+1}}} = \left[ r_{\Delta Q},\ r_{\Delta V},\ r_{\Delta P} \right]$
$r_{\Delta Q} = \mathrm{Log}\!\left( \Delta Q^{T} R_{b_k}^{T} R_{b_{k+1}} \right)$
$r_{\Delta V} = R_{b_k}^{T} \left( v_{b_{k+1}} - v_{b_k} - g \, \Delta t \right) - \Delta V$
$r_{\Delta P} = R_{b_k}^{T} \left( p_{b_{k+1}} - p_{b_k} - v_{b_k} \Delta t - \tfrac{1}{2} g \, \Delta t^{2} \right) - \Delta P$
In this study, we made use of the logarithmic mapping function, $\mathrm{Log}: SO(3) \to \mathbb{R}^3$, which facilitates a transformation from the Lie group to its associated vector space. Concurrently, along with the inertial residuals, we incorporated reprojection errors, denoted as $r_{b_k j}$, which represent discrepancies between frame $b_k$ and a three-dimensional point $j$ located at position $x_j$.
$r_{b_k j} = u_{b_k j} - R_{CB} \, R_{b_k}^{-1} x_j$
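A minimal NumPy evaluation of the inertial residual terms defined above might look as follows (an illustrative reimplementation, not the system's optimizer code; the SO(3) logarithm is taken via SciPy, and the gravity vector convention is assumed):

```python
import numpy as np
from scipy.spatial.transform import Rotation

G = np.array([0.0, 0.0, -9.81])   # assumed gravity vector in the world frame

def inertial_residual(R_i, p_i, v_i, R_j, p_j, v_j, dQ, dV, dP, dt):
    """Residual between the preintegrated IMU measurements (dQ, dV, dP) and the
    states of two consecutive frames b_k (index i) and b_{k+1} (index j)."""
    r_dQ = Rotation.from_matrix(dQ.T @ R_i.T @ R_j).as_rotvec()        # Log(dQ^T R_i^T R_j)
    r_dV = R_i.T @ (v_j - v_i - G * dt) - dV
    r_dP = R_i.T @ (p_j - p_i - v_i * dt - 0.5 * G * dt ** 2) - dP
    return np.concatenate([r_dQ, r_dV, r_dP])
```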

5.3.2. LiDAR–Inertial Integration

In the brief representation illustrated in Figure 4, one can observe the intricacies of the LiDAR–inertial system, one of the foundational pillars of our research methodology. This system is designed to maintain a factor graph, which is imperative for the optimization of global poses.
Central to this optimization strategy are the IMU preintegration constraints. The IMU, as an essential sensor in determining motion-related metrics, offers continuous data, which necessitates periodic integration. Through this preintegration, the constraints are established, aiding the system in correcting minor discrepancies or drifts that may inadvertently seep into the estimations.
Simultaneously, the system assimilates constraints derived from LiDAR odometry. These constraints emerge from a sophisticated feature-matching process, wherein the current LiDAR keyframe is juxtaposed against a global feature map. This map itself is dynamic, with a sliding window mechanism in place of the LiDAR keyframes. Such an approach ensures the computational complexity remains within bounds, facilitating real-time operations without compromising accuracy or response time.
The selection criterion for a new LiDAR keyframe is based on the change in the robot’s pose. When this change surpasses a predefined threshold, the system nominates the current frame as a keyframe. Notably, LiDAR frames that exist intermittently between two keyframes are not retained, ensuring an efficient memory management protocol. As a new LiDAR keyframe is incorporated, the system simultaneously introduces a new robot state, represented as $R_i$, into the factor graph, effectively positioning it as a node.
One of the most significant advantages of this keyframe addition methodology is its dual-faceted benefit. First, it strikes an optimal balance between memory usage and map density, ensuring that the map remains sufficiently detailed without overburdening the storage resources. Second, by maintaining a sparse factor graph, the system remains agile, capable of real-time optimization even in dynamic environments.
In parallel with the LiDAR–inertial system, the visual–inertial system complements the aforementioned mechanism, working in tandem to offer a navigation and mapping solution. This symbiotic relationship between the sub-systems underscores the holistic approach we have adopted in this study, ensuring that each component reinforces the other, leading to a robust, efficient, and reliable SLAM solution.
In our LIO sub-system, for every frame $b_k$ input from the LiDAR scan, the in-frame motion is first compensated using an IMU backward-propagation technique. With a three-dimensional point $j$ located at position $x_j$ as mentioned before, let $L_m = \{p_1^L, \ldots, p_m^L\}$ denote the set of $m$ LiDAR points after motion compensation; we determined the residual for each original point (or a selected downsampled subset) $p_j^L$, where $j$ denotes the point index and the superscript $L$ indicates that it is represented in the LiDAR reference frame. For the current iteration state variable $X = (R_I^G, p_I^G, v^G, b_\omega, b_a, g^G, R_C^I, P_C^I)$, the following equation converts the point $p_j^L$ from the LiDAR frame to the global frame.
$p_j^G = R_L^G R_B^L p_j^L + t_B^L + p_L^G$
In the formula, $b_\omega$ and $b_a$ represent the biases of the gyroscope and accelerometer in the Inertial Measurement Unit (IMU), respectively. $R_I^G$ and $p_I^G$ denote the attitude and position of the IMU relative to the global coordinate system. $v^G$ is the linear velocity in the global coordinate system. $g^G$ indicates the gravitational acceleration in the global coordinate system. $R_C^I$ and $P_C^I$ are the extrinsic parameters between the camera and the IMU. $t_B^L$ is the translation from the body coordinate system to the LiDAR coordinate system.
To transform point $j$ on the current frame $b_k$ to the global map, our algorithm searches for the five nearest points in the map to fit a plane. This plane is characterized by a normal $u_j$ and a centroid $q_j$, thus yielding the LiDAR measurement residual $r_l$. The calculation of this residual is given in the equation below.
$r_l = u_j^{T} \left( p_j^G - q_j \right)$
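The point-to-plane residual can be prototyped as below (an illustrative version using a KD-tree nearest-neighbor query and an eigen-decomposition plane fit; the function names and the choice of SciPy are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_residual(p_world, map_points, map_tree, k=5):
    """Fit a plane to the k nearest map points of a transformed LiDAR point p_world
    and return the signed point-to-plane distance r_l = u_j^T (p_j^G - q_j)."""
    _, idx = map_tree.query(p_world, k=k)
    neighbors = map_points[idx]                       # (k, 3) nearest map points
    centroid = neighbors.mean(axis=0)                 # q_j
    cov = np.cov((neighbors - centroid).T)            # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                            # u_j: eigenvector of the smallest eigenvalue
    return float(normal @ (p_world - centroid))

# Typical usage: map_tree = cKDTree(map_points) is built once per sliding-window map,
# then the residual is evaluated for every (downsampled) scan point.
```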

5.4. Integration Methodology

In the initiation phase of the visual–inertial system (VIS), a pivotal task involves aligning the LiDAR frames in congruence with the camera frame, leveraging the derived visual odometry as a reference. Modern 3D LiDAR systems, despite their advancements, tend to produce scans that might be relatively sparse in nature. To counteract this sparsity and harness a richer depth representation, we have adopted a strategy of stacking multiple LiDAR frames. This cumulation aids in synthesizing a comprehensive and dense depth map.
To intricately link a visual feature with a corresponding depth value, a systematic approach has been devised. We commenced by projecting both the identified visual features and LiDAR-derived depth points onto a defined unit sphere, the origin of which is anchored at the camera’s focal point. In order to maintain consistent density over the sphere and to manage the data volume, the depth points underwent a downsampling process. The resultant depth points were then cataloged based on their polar coordinates.
The subsequent challenge lies in associating the depth with a visual feature. To address this, we employed a two-dimensional K-D tree search mechanism, using the visual feature’s polar coordinates. This allowed us to identify the three closest depth points on the sphere for the given visual feature. The culmination of this process enabled us to deduce the feature depth, defined as the length of the vector connecting the visual feature to the camera’s center Oc, as shown in Figure 5. The vector’s length is determined at its intersection with the plane established by the triad of the aforementioned depth points in the Cartesian framework.
A graphical illustration elucidating this intricate mechanism is provided in Figure 5, wherein the visual representation emphasizes the computed feature depth as depicted by the interrupted linear path.
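The association step can be prototyped along the following lines (a simplified sketch: LiDAR points and the unit feature bearing are expressed in the camera frame, the three angularly nearest depth points are found with a KD-tree over polar coordinates, and the camera ray is intersected with the plane they span; names and tolerances are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def polar(points):
    """(azimuth, elevation) of 3D points after projection onto the unit sphere."""
    d = points / np.linalg.norm(points, axis=1, keepdims=True)
    return np.stack([np.arctan2(d[:, 1], d[:, 0]), np.arcsin(d[:, 2])], axis=1)

def feature_depth(feature_dir, lidar_points):
    """Depth along a unit feature bearing (camera frame), from the plane through
    the three angularly closest LiDAR depth points."""
    tree = cKDTree(polar(lidar_points))
    feat_polar = np.array([np.arctan2(feature_dir[1], feature_dir[0]),
                           np.arcsin(feature_dir[2])])
    _, idx = tree.query(feat_polar, k=3)
    p0, p1, p2 = lidar_points[idx]
    normal = np.cross(p1 - p0, p2 - p0)        # plane spanned by the three depth points
    denom = normal @ feature_dir
    if abs(denom) < 1e-6:
        return None                            # ray nearly parallel to the plane
    return float((normal @ p0) / denom)        # distance from the camera center O_c
```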
In the vision–LiDAR fusion architecture proposed in this study, a visual–inertial bundle adjustment optimization strategy, namely BA optimization, is employed with the aim of determining a maximum a posteriori estimate. Integrating the aforementioned analysis of residuals in each sub-system, the system’s minimal residual representation is obtained by minimizing the residuals of the prior and all measurements. The calculation method can be succinctly expressed as the following formulation and is solved using the Ceres Solver.
$\min \, \rho\!\left( r_I + r_j + r_l \right)$
In this context, $r_I$, $r_j$, and $r_l$, respectively, represent the residuals of the IMU, camera, and LiDAR. The symbol $\rho$ denotes the Huber norm.
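While the system itself solves this problem with the Ceres Solver in C++, the same robustified least-squares structure can be prototyped in Python. The toy sketch below (the residual callables and state dimensions are placeholders) stacks IMU, visual, and LiDAR residual blocks and minimizes them under SciPy's Huber loss, which is applied per residual component rather than per block:

```python
import numpy as np
from scipy.optimize import least_squares

def stacked_residuals(x, imu_f, vis_f, lid_f):
    """Concatenate all residual blocks for the current state vector x."""
    return np.concatenate([imu_f(x), vis_f(x), lid_f(x)])

# Toy residual blocks: each pulls part of a 9-dimensional state toward a 'measurement'.
imu_f = lambda x: x[0:3] - np.array([0.0, 0.0, 1.0])
vis_f = lambda x: x[3:6] - np.array([1.0, 0.0, 0.0])
lid_f = lambda x: x[6:9] - np.array([0.0, 1.0, 0.0])

x0 = np.zeros(9)
result = least_squares(stacked_residuals, x0, args=(imu_f, vis_f, lid_f),
                       loss='huber', f_scale=1.0)    # Huber-robustified least squares
print(result.x)
```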

6. Experiments and Results

In this section, we present a comprehensive set of experiments designed to demonstrate the superiority of our proposed system in comparison to other leading systems. Our evaluation is three-pronged: First, we assessed the localization accuracy of our system by conducting quantitative comparisons with current top-performing SLAM systems using the publicly available NCLT dataset. Second, we examined the robustness of our framework by testing its performance in various demanding conditions that involved the degradation of camera and LiDAR sensor data. Third, we measured the precision of our system in radiance map reconstruction by benchmarking it against established standards for determining camera exposure time and computing the mean photometric error relative to each image.
We ran the datasets on a PC equipped with an Intel i5-12490F CPU (Santa Clara, CA, USA) running at 3.60 GHz and a single NVIDIA GeForce RTX 3060 GPU (Santa Clara, CA, USA).

6.1. Dataset

To benchmark the precision of our proposed approach, we conducted quantitative assessments using the NCLT dataset [54]. The NCLT dataset is a comprehensive resource for robotics research, encompassing a wide range of conditions for long-term autonomy. It was compiled through extensive data collection across the University of Michigan’s North Campus. This dataset contains 27 sequences obtained by navigating both indoor and outdoor environments of the campus via various routes, times of day, and seasons. Each sequence encapsulates a rich set of data captured from an omnidirectional camera, 3D LiDAR, planar LiDAR, GPS, and wheel encoders mounted on a Segway robot. We selected the NCLT dataset for evaluation due to three key factors: (1) The NCLT dataset stands as the most extensive publicly available collection featuring high-quality ground truth trajectories. (2) It offers comprehensive raw data captured by sensors, which aligns perfectly with our criteria for the input data. (3) The dataset encompasses a variety of demanding conditions, including dynamic obstacles (like pedestrians, bicyclists, and vehicles), shifts in illumination, changes in viewpoints, and fluctuations in seasons and weather conditions (such as leaves falling and snow), as well as substantial long-term alterations to the environment stemming from construction activities.
In this study, the dataset employed for 3D reconstruction was extensively collected from the University of Hong Kong (HKU) and the Hong Kong University of Science and Technology (HKUST). This comprehensive dataset includes a wide range of environments, covering both indoor and outdoor areas such as walkways, parks, and forests, with data collection conducted at various times of the day to encapsulate different lighting conditions, including morning, noon, and evening. This rich and diverse range of settings provides a robust basis for nuanced 3D reconstruction experiments, capturing both the structured urban architecture and the complex natural terrains. The dataset’s inclusion of various architectural and environmental conditions under different lighting makes it a valuable resource for testing and refining 3D reconstruction techniques. Additionally, it features sequences that demonstrate the performance of LiDAR and camera systems when faced with challenges such as being directed towards texture-lacking surfaces like walls or the ground and in visually obstructed scenarios. This aspect of the dataset is essential for assessing and improving the adaptability of 3D reconstruction methods in less ideal conditions.

6.2. Experiment 1: Assessment of Localization Accuracy Using APE

Table 1 presents a comparative analysis of the absolute position error (APE) [55] across various methods. The data delineated in the table clearly illustrate that our system, with an average APE of just 8.02 m, outperformed the feature-based LiDAR–inertial–visual system LVI-SAM in terms of overall efficacy. The enhancement in performance is principally attributed to the tight integration of the LiDAR–inertial odometry (LIO) and visual–inertial odometry (VIO) sub-systems. This integrated approach augments the precision of the VIO sub-system—and, consequently, the entire system—by capitalizing on the high-accuracy geometric structures reconstructed from the LiDAR data. Moreover, our system’s overall APE was superior to that of FAST-LIO2 and LIO-SAM, substantiating the benefits of incorporating camera data into the fusion process.
In our system, we have incorporated YOLOv5 to mitigate the impact of moving objects, such as pedestrians, bicyclists, and cars, on the visual–inertial odometry (VIO). The integration of YOLOv5 aims to enhance the system’s ability to accurately recognize and exclude these moving objects, thereby reducing their interference with the VIO’s input data. This approach contributes to improving the accuracy of the system’s environmental reconstruction, especially in scenarios where dynamic objects are prevalent.
To justify our choice of YOLOv5 over other typical networks, it is essential to consider the demands for a lightweight architecture, reliability, and speed. YOLOv5 stands out for its quick and accurate real-time object-detection capabilities, making it exceptionally suited for applications that require rapid and dependable detection, such as visual–inertial odometry (VIO). In comparison, while YOLOv4 also offers commendable accuracy and speed, YOLOv5 has been optimized for faster inference times, which is crucial for reducing latency in real-time systems. Furthermore, compared to the Single-Shot MultiBox Detector (SSD) and Faster R-CNN, which are accurate, but generally slower, YOLOv5 is more appropriate for scenarios demanding immediate processing. Thus, by integrating YOLOv5, our system not only more effectively reduces the impact of moving objects, but also maintains an optimal balance between speed and accuracy, ensuring minimal interference with the VIO’s input data and improving environmental reconstruction in dynamic settings.

6.3. Experiment 2: Reconstructed Radiance Map

In Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, we display the reconstructed radiance maps from seven sequences, encompassing a variety of scenes such as plazas, parks, academic buildings, roads, and more. Each map highlights the algorithm’s proficiency in accurately capturing the unique features and intricacies of different scenes.
The radiance maps reveal that our algorithm is highly versatile in handling natural elements such as leaves and other organic structures, as well as man-made features. Moreover, academic buildings, with their complex geometrical shapes and diverse facade materials, provide a rigorous test of the algorithm’s ability to handle intricate architectural details. The results demonstrate precise reconstruction, capturing subtle architectural nuances and maintaining the structural integrity.

6.4. Experiment 3: Trajectory Comparison for Localization Accuracy

In Figure 11a–f, utilizing data from the NCLT dataset, we present a comparison between the trajectories from our system and the ground truth, each with a duration of approximately 1.5 h. As depicted in the figure, the gray dashed reference lines represent the ground truth trajectory. Most of the trajectories in each set of figures are colored blue, indicating a high degree of agreement between the experimental trajectories and the ground truth. The rare occurrence of red trajectories suggests that only a minor portion of the trajectories deviated slightly from the actual values. The results demonstrate a strong alignment of our estimated trajectories with the ground truth, ensuring clarity and minimal error. It is important to note that these six trajectories, derived from the NCLT dataset, were collected in the same campus area, but at various times throughout the day and across different seasons. Despite the variations in illumination and scene changes, our system consistently yielded accurate and reliable trajectory estimations. This adaptability and consistency in diverse environmental conditions highlight the robustness of our system.

6.5. Experiment 4: Performance Comparison of Two Algorithms

Figure 12a–d presents a comparative analysis of the trajectories produced by our algorithm and by LIO-SAM, again using data from the NCLT dataset. These four panels show representative cases in which the baseline deviates from the ground truth, while also highlighting the precision of our algorithm. A close comparison with the ground-truth trajectories shows that the blue curves, representing our method, approximate the true path more closely, whereas the green curves, obtained with LIO-SAM, deviate significantly from the ground truth in certain segments.
In Figure 12a, the green LIO-SAM curve contains an additional straight segment not present in the ground truth. In Figure 12b,c, the green curve misses a short segment at the end of the trajectory, and in Figure 12d it not only omits roughly half of the route but also deviates significantly from the intended path. In all four comparisons, the blue curve representing our algorithm fits both the overall path and local details more faithfully, clearly outperforming LIO-SAM. This advantage stems from the optimizations and innovations incorporated into our algorithm in data processing, feature extraction, and trajectory optimization. Compared with LIO-SAM, our algorithm appears more robust to data noise and interference in complex environments, yielding more accurate trajectories; it may also offer better real-time performance, allowing it to adjust the trajectory more quickly in response to environmental changes.

7. Conclusions and Future Work

In conclusion, our research has made significant strides in addressing the complexities of Simultaneous Localization and Mapping (SLAM) in dynamic environments. By integrating LiDAR–inertial odometry (LIO) with visual–inertial odometry (VIO), along with the application of specialized IMU preintegration methods, we have enhanced the accuracy and efficiency of SLAM under dynamic conditions. A key innovation in our approach is the incorporation of the YOLOv5 algorithm for the exclusion of dynamic objects such as pedestrians and vehicles, which has notably improved the mapping quality.
The integration of these advanced techniques has led to significant enhancements in mapping results across a variety of environments, including both indoor and outdoor settings, as well as diverse areas like buildings and parks. Our experimental evaluation strongly supports the effectiveness of these methods, demonstrating their capability to produce highly reliable and accurate maps in a range of complex environments. This includes detailed mapping of intricate architectural structures and natural landscapes, showcasing the versatility and robustness of our approach in addressing the challenges of spatial mapping in diverse settings.
In our future work, we will aim to enhance and refine our SLAM approach for more dynamic and unpredictable environments. This will involve improving the adaptability of our system to handle a broader spectrum of dynamic scenarios, including the refinement of the object-detection and removal processes and the optimization of sensory input integration for more efficient operation. Additionally, we plan to delve into the potential of machine learning algorithms for real-time prediction and adaptation to environmental changes, thereby enhancing the robustness of our system. We also intend to explore the applicability of our methods across various domains, such as autonomous driving and urban robotic navigation, to both validate and broaden the scope of our research. Our ongoing innovation and exploration are geared towards making significant contributions to the field of SLAM, with the goal of developing sophisticated and reliable navigation systems that can effectively operate in environments with dynamic changes.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C.; software, Y.C. and Y.O.; analysis, Y.C.; writing—original draft preparation, Y.C. and Y.O.; writing—review and editing, Y.C. and T.Q.; supervision, T.Q.; funding acquisition, T.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 62361003.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Publicly available datasets were analyzed in this study.

Acknowledgments

The authors would like to thank all the subjects and staff who participated in this experiment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cadena, C. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
  2. Kwon, W.; Park, J.H.; Lee, M.; Her, J. Seo, Robust Autonomous Navigation of Unmanned Aerial Vehicles (UAVs) for Warehouses’ Inventory Application. IEEE Robot. Autom. Lett. 2020, 5, 243–249. [Google Scholar] [CrossRef]
  3. Sankalprajan, P.; Sharma, T.; Perur, H.D. Comparative analysis of ROS based 2D and 3D SLAM algorithms for Autonomous Ground Vehicles. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–6. [Google Scholar]
  4. Yilmaz, K.; Suslu, B.; Roychowdhury, S. AV-SLAM: Autonomous Vehicle SLAM with Gravity Direction Initialization. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 8093–8100. [Google Scholar]
  5. Wang, T.; Su, Y.; Shao, S.; Yao, C. GR-Fusion: Multi-sensor Fusion SLAM for Ground Robots with High Robustness and Low Drift. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 5440–5447. [Google Scholar]
  6. Taranco, R.; Arnau, J.-M.; González, A. A Low-Power Hardware Accelerator for ORB Feature Extraction in Self-Driving Cars. In Proceedings of the 2021 IEEE 33rd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), Belo Horizonte, Brazil, 26–29 October 2021; pp. 11–21. [Google Scholar]
  7. Lu, G.; Yang, H.; Li, J. A Lightweight Real-Time 3D LiDAR SLAM for Autonomous Vehicles in Large-Scale Urban Environment. IEEE Access 2023, 11, 12594–12606. [Google Scholar] [CrossRef]
  8. Alliez, P. Real-Time Multi-SLAM System for Agent Localization and 3D Mapping in Dynamic Scenarios. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 4894–4900. [Google Scholar]
  9. Son, Y.; Choi, K.-S.; Kim, D. Design of the Bundle Adjustment FPGA-SoC Architecture for Real Time Vision Based SLAM in AR Glasses. In Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 12–15 October 2021; pp. 2152–2155. [Google Scholar]
  10. Wang, H.; Wang, C.; Xie, L. Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment. IEEE Robot. Autom. Lett. 2021, 6, 1715–1721. [Google Scholar] [CrossRef]
  11. Merzlyakov, A.; Macenski, S. A Comparison of Modern General-Purpose Visual SLAM Approaches. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 9190–9197. [Google Scholar]
  12. Zhou, L.; Koppel, D.; Kaess, M. LiDAR SLAM with Plane Adjustment for Indoor Environment. IEEE Robot. Autom. Lett. 2021, 6, 7073–7080. [Google Scholar] [CrossRef]
  13. Mo, J.; Islam, M.J.; Sattar, J. Fast Direct Stereo Visual SLAM. IEEE Robot. Autom. Lett. 2022, 7, 778–785. [Google Scholar] [CrossRef]
  14. Wang, W. TartanAir: A Dataset to Push the Limits of Visual SLAM. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 4909–4916. [Google Scholar]
  15. Li, J. A Hardware Architecture of Feature Extraction for Real-Time Visual SLAM. In Proceedings of the IECON 2022—48th Annual Conference of the IEEE Industrial Electronics Society, Brussels, Belgium, 17–20 October 2022; pp. 1–6. [Google Scholar]
  16. Han, S.; Xi, Z. Dynamic Scene Semantics SLAM Based on Semantic Segmentation. IEEE Access 2020, 8, 43563–43570. [Google Scholar] [CrossRef]
  17. Liu, Y.; Ge, Z.; Yuan, Y.; Su, X.; Guo, X.; Suo, T.; Yu, Q. Study of the Error Caused by Camera Movement for the Stereo-Vision System. Appl. Sci. 2021, 11, 9384. [Google Scholar] [CrossRef]
  18. Jia, G.; Li, X.; Zhang, D.; Xu, W.; Lv, H.; Shi, Y.; Cai, M. Visual-SLAM Classical Framework and Key Techniques: A Review. Sensors 2022, 22, 4582. [Google Scholar] [CrossRef] [PubMed]
  19. Ruan, J.; Li, B.; Wang, Y.; Fang, Z. GP-SLAM+: Real-time 3D lidar SLAM based on improved regionalized Gaussian process map reconstruction. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5171–5178. [Google Scholar]
  20. Zhao, J.; Liu, S.; Li, J. Research and Implementation of Autonomous Navigation for Mobile Robots Based on SLAM Algorithm under ROS. Sensors 2022, 11, 4172. [Google Scholar] [CrossRef]
  21. Chou, C.-C.; Chou, C.-F. Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14509–14523. [Google Scholar] [CrossRef]
  22. Liu, G.; Zeng, W.; Feng, B.; Xu, F. DMS-SLAM: A General Visual SLAM System for Dynamic Scenes with Multiple Sensors. Sensors 2019, 19, 3714. [Google Scholar] [CrossRef] [PubMed]
  23. Zhao, P.; Li, R.; Shi, Y.; He, L. Design of 3D reconstruction system on quadrotor Fusing LiDAR and camera. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 3984–3989. [Google Scholar]
  24. Zhang, F.; Shen, C.; Ren, X. The Application of Multi-sensor Information Fusion by Improved Trust Degree on SLAM. In Proceedings of the 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2013; pp. 360–364. [Google Scholar]
  25. Delobel, L.; Aufrère, R.; Debain, C.; Chapuis, R.; Chateau, T. A Real-Time Map Refinement Method Using a Multi-Sensor Localization Framework. IEEE Trans. Intell. Transp. Syst. 2019, 20, 1644–1658. [Google Scholar] [CrossRef]
  26. Jia, Y. Lvio-Fusion: A Self-adaptive Multi-sensor Fusion SLAM Framework Using Actor-critic Method. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 286–293. [Google Scholar]
  27. Zhang, Z.; Zhao, J.; Huang, C.; Li, L. Learning Visual Semantic Map-Matching for Loosely Multi-Sensor Fusion Localization of Autonomous Vehicles. IEEE Trans. Intell. Veh. 2023, 8, 358–367. [Google Scholar] [CrossRef]
  28. Fu, Y.; Han, B.; Hu, Z.; Shen, X.; Zhao, Y. CBAM-SLAM: A Semantic SLAM Based on Attention Module in Dynamic Environment. In Proceedings of the 2022 6th Asian Conference on Artificial Intelligence Technology (ACAIT), Changzhou, China, 9–11 December 2022; pp. 1–6. [Google Scholar]
  29. Mahmoud, J.; Penkovskiy, A. Dynamic Environments and Robust SLAM: Optimizing Sensor Fusion and Semantics for Wheeled Robots. In Proceedings of the 2023 33rd Conference of Open Innovations Association (FRUCT), Zilina, Slovakia, 24–26 May 2023; pp. 185–191. [Google Scholar]
  30. Yin, H.; Li, S.; Tao, Y.; Guo, J.; Huang, B. Dynam-SLAM: An Accurate, Robust Stereo Visual-Inertial SLAM Method in Dynamic Environments. IEEE Trans. Robot. 2023, 39, 289–308. [Google Scholar] [CrossRef]
  31. Li, X.; Jiao, Z.; Zhang, X.; Zhang, L. Visual SLAM in Dynamic Environments Based on Object Detection and Scene Flow. In Proceedings of the 2023 IEEE International Conference on Mechatronics and Automation (ICMA), Harbin, China, 6–9 August 2023; pp. 2157–2162. [Google Scholar]
  32. Ran, T.; Yuan, L.; Zhang, J.; Wu, Z.; He, L. Object-Oriented Semantic SLAM Based on Geometric Constraints of Points and Lines. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 751–760. [Google Scholar] [CrossRef]
  33. Lee, J.; Back, M.; Hwang, S.S.; Chun, I.Y. Improved Real-Time Monocular SLAM Using Semantic Segmentation on Selective Frames. IEEE Trans. Intell. Transp. Syst. 2023, 24, 2800–2813. [Google Scholar] [CrossRef]
  34. Ma, S.; Liang, H.; Wang, H.; Xu, T. An Improved Feature-Based Visual Slam Using Semantic Information. In Proceedings of the 2023 IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 24–26 February 2023; pp. 559–564. [Google Scholar]
  35. Cheng, S.; Sun, C.; Zhang, S.; Zhang, D. SG-SLAM: A Real-Time RGB-D Visual SLAM Toward Dynamic Scenes with Semantic and Geometric Information. IEEE Trans. Instrum. Meas. 2023, 72, 7501012. [Google Scholar] [CrossRef]
  36. Yuan, Z.; Zhu, D.; Chi, C. Visual-inertial state estimation with pre-integration correction for robust mobile augmented reality. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1410–1418. [Google Scholar]
  37. Chang, L.; Niu, X.; Liu, T. GNSS/IMU/ODO/LiDAR-SLAM integrated navigation system using IMU/ODO pre-integration. Sensors 2020, 9, 4702. [Google Scholar] [CrossRef]
  38. Li, X.; Zhang, H.; Chen, W. 4D Radar-Based Pose Graph SLAM With Ego-Velocity Pre-Integration Factor. IEEE Robot. Autom. Lett. 2023, 8, 5124–5131. [Google Scholar] [CrossRef]
  39. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-manifold preintegration for real-time visual–inertial odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef]
  40. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. In Proceedings of the Robotics: Science and Systems XI, Rome, Italy, 13–17 July 2015. [Google Scholar]
  41. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
  42. Wang, S.; Zhang, J.; Tan, X. PDLC-LIO: A Precise and Direct SLAM System Toward Large-Scale Environments With Loop Closures. IEEE Trans. Intell. Transp. Syst. 2023, 25, 626–637. [Google Scholar] [CrossRef]
  43. Almadhoun, R.; Taha, T.; Seneviratne, L.; Zweiri, Y. Multi-Robot Hybrid Coverage Path Planning for 3D Reconstruction of Large Structures. IEEE Access 2022, 10, 2037–2050. [Google Scholar] [CrossRef]
  44. Yu, F.; Fan, X. 3D Reconstruction System Based on Multi Sensor. Int. J. Adv. Netw. Monit. Control 2022, 7, 58–66. [Google Scholar] [CrossRef]
  45. Yeh, C.-H.; Lin, M.-H. Robust 3D Reconstruction Using HDR-Based SLAM. IEEE Access 2021, 9, 16568–16581. [Google Scholar] [CrossRef]
  46. Zhang, K.; Zhang, M. Design of a 3D reconstruction model of multiplane images based on stereo vision. In Proceedings of the 2021 IEEE International Conference on Industrial Application of Artificial Intelligence (IAAI), Harbin, China, 24–26 December 2021; pp. 385–391. [Google Scholar]
  47. Song, M.; Watanabe, H.; Hara, J. Robust 3D reconstruction with omni-directional camera based on structure from motion. In Proceedings of the 2018 International Workshop on Advanced Image Technology (IWAIT), Chiang Mai, Thailand, 7–9 January 2018; pp. 1–4. [Google Scholar]
  48. Zeng, L.; Wu, J.-Q.; Huang, W.-D.; Liu, X.-L. Multi-view Stereo 3D Reconstruction Algorithm Based on Improved PatchMatch Algorithm. In Proceedings of the 2023 3rd International Conference on Neural Networks, Information and Communication Engineering (NNICE), Guangzhou, China, 24–26 February 2023; pp. 738–744. [Google Scholar]
  49. Qin, L.; Chen, X.; Gong, X. An improved 3D reconstruction method for weak texture objects combined with calibration and ICP registration. In Proceedings of the 2023 IEEE 6th International Conference on Industrial Cyber-Physical Systems (ICPS), Wuhan, China, 8–11 May 2023; pp. 1–5. [Google Scholar]
  50. Barrau, A.; Bonnabel, S. A Mathematical Framework for IMU Error Propagation with Applications to Preintegration. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 5732–5738. [Google Scholar]
  51. Li, S.; Li, Y.; Li, Y.; Li, M.; Xu, X. YOLO-FIRI: Improved YOLOv5 for Infrared Image Object Detection. IEEE Access 2021, 9, 141861–141875. [Google Scholar] [CrossRef]
  52. Song, Y.; Xie, Z.; Wang, X.; Zou, Y. MS-YOLO: Object Detection Based on YOLOv5 Optimized Fusion Millimeter-Wave Radar and Machine Vision. IEEE Sens. J. 2022, 22, 15435–15447. [Google Scholar] [CrossRef]
  53. Wang, H. A YOLOv5 Baseline for Underwater Object Detection. In Proceedings of the OCEANS 2021: San Diego–Porto, San Diego, CA, USA, 20–23 September 2021; pp. 1–4. [Google Scholar]
  54. Carlevaris-Bianco, N.; Ushani, A.K.; Eustice, R.M. University of michigan north campus long-term vision and lidar dataset. Int. J. Robot. Res. 2016, 35, 1023–1035. [Google Scholar] [CrossRef]
  55. Zhang, Z.; Scaramuzza, D. A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251. [Google Scholar]
Figure 1. YOLOv5 algorithm network structure.
Figure 3. Schematic representation of the sub-system: visual–inertial integration.
Figure 4. Schematic representation of the sub-system: LiDAR–inertial integration.
Figure 5. Depth correlation.
Figure 6. The reconstructed map of the “hku_main_building” sequence: (a) is a bird’s-eye view, capturing the overall structure; (b,c) are closeups, revealing intricate details.
Figure 7. The reconstructed map of the “hku_campus_seq_03” sequence: (a) offers a bird’s-eye view of the entire radiance map, providing a comprehensive overview of the campus; (b,c) zoom in to showcase the intricate details of specific areas.
Figure 8. The reconstructed map of the “hku_park_00” sequence: (a) presents a bird’s-eye view, capturing the park’s expanse; (b,c) show detailed closeups of its features.
Figure 9. The reconstructed map of the “hkust_campus_02” sequence: (a) presents a bird’s-eye view of a specific area of campus, providing an overview of its layout; (b,c) show closeup views of its intricate details.
Figure 10. The reconstructed map of the “degenerate_seq_02” sequence: (a) offers a bird’s-eye view of the corridor, outlining its structure; (b,c) show closeup views of the details within.
Figure 11. (a–f) Trajectory comparison: our system vs. ground truth.
Figure 12. (a–d) Trajectory comparison: our system vs. LIO-SAM.
Table 1. Absolute position error (APE, in meters) comparison on the NCLT dataset.

| Sequence (Date) | Length (m) | Duration (h:min:s) | Ours  | LVI-SAM | FAST-LIO2 | LIO-SAM |
|-----------------|------------|--------------------|-------|---------|-----------|---------|
| 2012-01-22      | 6183.07    | 1:27:22            | 7.62  | 8.21    | 7.52      | 8.72    |
| 2012-02-02      | 6315.78    | 1:38:36            | 8.74  | 17.95   | 9.21      | 15.81   |
| 2012-02-19      | 6232.68    | 1:29:11            | 6.15  | 8.78    | 6.32      | 9.63    |
| 2012-11-17      | 5751.89    | 1:29:44            | 6.76  | 22.31   | 6.05      | 24.52   |
| 2012-12-01      | 4991.93    | 1:16:48            | 7.85  | 7.15    | 7.62      | 7.03    |
| 2013-02-23      | 5235.27    | 1:20:08            | 10.98 | 12.74   | 12.22     | 12.41   |
| Total           | 34,710.62  | 9:01:57            |       |         |           |         |
| Average         |            |                    | 8.02  | 12.86   | 8.16      | 13.02   |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
