Article

A Fusion Positioning System Based on Camera and LiDAR for Unmanned Rollers in Tunnel Construction

by Hao Huang, Yongbiao Hu and Xuebin Wang *
National Engineering Laboratory for Highway Maintenance Equipment, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4408; https://doi.org/10.3390/s24134408
Submission received: 27 May 2024 / Revised: 3 July 2024 / Accepted: 5 July 2024 / Published: 7 July 2024

Abstract

As an important vehicle in road construction, the unmanned roller is rapidly advancing in its autonomous compaction capabilities. To overcome the challenges of GNSS positioning failure during tunnel construction and diminished visual positioning accuracy under different illumination levels, we propose a feature-layer fusion positioning system based on a camera and LiDAR. This system integrates loop closure detection and LiDAR odometry into the visual odometry framework. Furthermore, recognizing the prevalence of similar scenes in tunnels, we innovatively combine loop closure detection with the compaction process of rollers in fixed areas, proposing a selection method for loop closure candidate frames based on the compaction process. Through on-site experiments, it is shown that this method not only enhances the accuracy of loop closure detection in similar environments but also reduces the runtime. Compared with visual systems, in static positioning tests, the longitudinal and lateral accuracy of the fusion system are improved by 12 mm and 11 mm, respectively. In straight-line compaction tests under dim illumination, the average lateral error is reduced by 34.1% and 32.8%, respectively. In lane-changing compaction tests, this system enhances the positioning accuracy by 33% in dim environments, demonstrating the superior positioning accuracy of the fusion positioning system amid illumination changes in tunnels.

1. Introduction

The level of investment in large-scale projects, such as highways and airports, has been increasing in recent years. This trend has led to higher demands regarding the quality and efficiency of subgrade and pavement construction [1,2]. However, traditional manually driven rollers [3,4] have several shortcomings. Differences in driver skill can lead to inconsistent compaction, causing problems such as under-compaction or over-compaction [5,6] that affect construction quality. Additionally, accidents such as injuries to site personnel can occur during the compaction process. Therefore, traditional rollers cannot meet the demands of high quality and efficiency, nor can they guarantee the safety of personnel at the construction site.
With the rapid development of computer science, artificial intelligence, automation control and other technologies, unmanned rollers provide the possibility to solve the above problems [7,8]. To ensure higher accuracy than manual driving, unmanned rollers are designed to compact on the planned path through high-precision positioning. The compaction accuracy of unmanned rollers is directly affected by the accuracy of the positioning system.
Currently, unmanned rollers rely on a global navigation satellite system (GNSS) for positioning [5,6,7,9], offering high precision, all-weather capability and simple operation. However, a GNSS is only applicable in open areas. Owing to the complexity and variability of compaction scenes, GNSS signals cannot be obtained in enclosed sites such as tunnels, as shown in Figure 1. Consequently, the roller cannot perform autonomous compaction in these environments.
In tunnel construction, researchers have proposed Simultaneous Localization and Mapping (SLAM) [10,11,12,13,14] as a replacement in order to achieve positioning and navigation in the absence of GNSS signals. By employing sensors such as cameras or LiDAR, the motion state of the roller is estimated and an environmental map is constructed in the absence of a priori environmental information. In our previous study [15], we employed camera-based visual odometry as a cost-effective alternative to LiDAR for the positioning of unmanned rollers in tunnels.
Unmanned rollers compact within a fixed area for long periods, and the illumination inside tunnels is often insufficient. Under these conditions, visual odometry accumulates errors that cannot be corrected, resulting in poor positioning accuracy. Therefore, a feature-layer fusion positioning system based on a camera and LiDAR is proposed for unmanned rollers. This system incorporates loop closure detection and LiDAR odometry in addition to visual odometry. Furthermore, in view of the prevalence of similar scenes in tunnels, this paper presents the innovative integration of loop closure detection with the compaction process of the roller in a fixed area and proposes a new method of selecting loop closure candidate frames.
This paper contributes in the following aspects. Firstly, the authors propose a GNSS-denied fusion positioning system based on the camera and LiDAR for unmanned rollers in tunnel construction. This system iteratively optimizes the residual error functions by integrating relative pose constraints between adjacent frames, loop closure constraints with historical frames and LiDAR prior pose constraints. Additionally, a new loop closure detection method based on the compaction process is introduced to address mismatches in similar scenes. Keyframes are intelligently grouped, and candidate loop closure frames are selected, demonstrating superior precision and real-time performance. Finally, the proposed fusion positioning system is compared with a traditional visual positioning system, evaluating the static positioning error, straight line positioning error and lane-changing error according to an unmanned roller test platform. The result indicates the effectiveness of the proposed method in reducing the cumulative error and offering good robustness under low-illumination conditions.
The structure of the paper is as follows. Section 2 reviews the related research on positioning methods for unmanned rollers. Section 3 proposes the principles of the feature layer fusion positioning system based on the camera and LiDAR and proposes an improved loop closure detection method. Section 4 analyzes the improvement in the precision and real-time performance of the improved loop closure detection method. The error of the fusion positioning system is compared with that of conventional visual positioning. Finally, Section 5 summarizes the conclusions and future work.

2. Related Work

Currently, most unmanned rollers use GNSS for high-precision positioning. Fang [16] relied on RTK positioning signals to achieve the automatic compaction of the roller. A path-following control model for a vibratory roller was established, which allowed the lateral error to reach 30 cm. Zhang [9] designed an unmanned roller system that relied entirely on the positioning accuracy of the GNSS. After analyzing the field test data, they found that the system improved the compaction quality and efficiency for earth and rock dams. Zhan [17] utilized the Attitude Heading Reference System (AHRS) to measure attitude information and correct the position and heading of the roller, which was previously measured by GPS only. This method reduced the positioning error by 0.197 m and the heading error by 1.6°. The aforementioned research was conducted in outdoor environments with good GNSS signals. For scenarios without GNSS, researchers have also proposed various positioning methods.
Song [18] introduced a hybrid positioning strategy that combines RFID technology and in-vehicle sensors. This strategy uses the received signal strength (RSS) from RFID tags and readers for preliminary positioning. However, it relies on extensive RFID infrastructure, which can be costly and complex to implement and maintain. Jiang [19] developed a tunnel vehicle localization method using virtual stations and reflections. The localization algorithms using filters improve the localization accuracy compared with a positioning algorithm without using filters.
Ultra-wide band (UWB) is one of the methods used to overcome the partial denial problem of the GNSS. Wang [20] combined UWB technology with an inertial navigation system (INS) to create a commonly used indoor positioning method. Gao [21] proposed an innovation gain-adaptive Kalman filter (IG-AKF) algorithm for rollers, which significantly improved the positioning accuracy compared to UWB and the standard KF.
Visual positioning is another method used to overcome the problem of partial denial of the GNSS. Sun [22] utilized image processing techniques to detect ground markings, resulting in the precise and real-time lateral positioning of the roller in the working area. However, this method only provides lateral positioning and does not achieve two-dimensional plane positioning. Mur-Artal et al. [23] proposed a camera-based ORB-SLAM2 system that uses the Oriented FAST and Rotated BRIEF (ORB) features and descriptors to accurately track the target’s positional coordinates. The system also includes a loop closure detection step to reduce errors and drift accumulation. Engel et al. [24] proposed direct sparse odometry (DSO), a visual odometry method based on a sparse, direct structure and motion formulation. DSO employs photometric calibration [25] to improve the robustness of monocular direct visual odometry. In our previous work [15], a vision-based odometry system for unmanned rollers was proposed, which used SURF and PnP to achieve two-dimensional positioning. However, this method is not suitable for environments with significant changes in illumination intensity.
In addition, a large number of researchers have begun to explore multi-sensor fusion positioning systems. Song [26] introduced PSMD-SLAM using panoptic segmentation and multi-sensor fusion to enhance the accuracy and robustness in dynamic environments. The primary drawback of PSMD-SLAM is its computational complexity, which may limit its real-time application in resource-constrained environments. Liu [27] discussed a multi-sensor fusion approach integrating the camera, LiDAR and IMU in outdoor scenes, highlighting time alignment modules and feature point depth recovery models for enhanced accuracy and robustness. Kumar [28] proposed a method for the estimation of the distances between a self-driving vehicle and various objects using the fusion of LiDAR and camera data, emphasizing low-level sensor fusion and geometrical transformations for accurate depth sensing.
The above-mentioned research covers different positioning methods in different scenarios. Among them, GNSS-based positioning methods are completely ineffective in tunnels. RFID- and UWB-based positioning methods require the pre-deployment of equipment and precise measurement on construction sites, making their application inconvenient. Vision-based positioning methods are susceptible to changes in illumination intensity, leading to decreased positioning accuracy. Therefore, this paper proposes a tunnel fusion positioning system based on the camera and LiDAR for unmanned rollers. It incorporates loop closure detection and LiDAR odometry as supplements to visual odometry, reducing the impact of illumination variations. Additionally, a loop closure detection method based on the compaction process is proposed, enabling the high-precision identification of loop closure frames. Ultimately, this achieves high-precision positioning and navigation for unmanned rollers in tunnels.

3. Methods

3.1. Composition of Fusion Positioning System

The system uses a camera and a LiDAR as inputs, and the framework is shown in Figure 2. It consists of five modules, and the principles and roles of each module are described in detail in the following.
  • External parameter calibration. This paper integrates two sensors, a camera and a LiDAR, which are not initially aligned in a unified coordinate system. Therefore, external parameter calibration is essential to unify them under the coordinate system of the roller.
  • Visual odometry. It processes raw and depth images from the camera and conducts feature extraction and matching to derive matching point pairs between adjacent frames [29,30,31,32]. Subsequently, pose constraints are solved based on the matching point pairs. Visual odometry has been extensively discussed in our previous research, as shown in Figure 3.
  • Loop closure detection. The aim is to determine whether the current position forms a loop closure frame with any historical positions. When a loop closure is detected, pose constraints are computed to rectify the accumulated errors.
  • LiDAR odometry. LiDAR odometry is achieved through LiDAR point cloud matching. The resulting output serves as prior pose constraints inputted into the fusion positioning system.
  • Feature layer fusion module. This module merges the pose constraints from the visual odometry, loop closure detection and LiDAR odometry modules into a unified framework based on graph optimization. By minimizing the reprojection errors, the fusion module enables the high-precision positioning of the unmanned roller.
Together, these sub-modules construct a fusion positioning model encompassing the LiDAR point clouds and the texture/color data from the camera; a schematic sketch of this data flow is given below. This integration is critical for unmanned rollers in addressing the decreased positioning precision resulting from inadequate illumination in tunnel construction.
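To make the interplay of these modules concrete, the following minimal Python sketch shows how constraints from the three front ends could feed a single graph optimizer. All class, method and variable names are illustrative placeholders rather than the authors' implementation, and the module outputs are stubbed.

```python
import numpy as np

# Schematic sketch of the module data flow; names and signatures are assumptions.
class FusionLocalizer:
    def __init__(self, T_cam_to_roller, T_lidar_to_roller):
        # External parameter calibration: both sensors expressed in the roller frame.
        self.T_cam = T_cam_to_roller
        self.T_lidar = T_lidar_to_roller
        self.keyframe_poses = [np.zeros(3)]      # (x, y, yaw) nodes of the graph
        self.edges = []                          # (kind, i, j, measurement, information)

    def step(self, rgb, depth, scan, moving_forward):
        k = len(self.keyframe_poses)
        # Visual odometry: relative pose constraint between adjacent keyframes.
        rel_pose = np.zeros(3)                   # placeholder for the VO estimate
        self.edges.append(("adjacent", k - 1, k, rel_pose, np.eye(3)))
        # Loop closure detection: constraint against a historical keyframe, if found.
        loop = None                              # placeholder; uses moving_forward (Section 3.2.1)
        if loop is not None:
            hist_idx, loop_pose = loop
            self.edges.append(("loop", hist_idx, k, loop_pose, np.eye(3)))
        # LiDAR odometry: prior pose constraint on the current keyframe.
        prior_pose = self.keyframe_poses[-1]     # placeholder for the LiDAR prior
        self.edges.append(("prior", k, None, prior_pose, np.eye(3)))
        # Feature-layer fusion: jointly optimize all poses over all constraints.
        self.keyframe_poses.append(self.keyframe_poses[-1] + rel_pose)
        return self.keyframe_poses[-1]           # current pose estimate of the roller
```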

3.2. Loop Closure Detection

Due to the abundance of visually similar scenes encountered during tunnel construction, as shown in Figure 4, the issue of false positives in loop closure detection arises. Despite their visual resemblance, they do not constitute loop closure frames. Therefore, this paper introduces a new method that integrates the compaction process of rollers to refine the selection of loop closure candidate frames. Moreover, employing the visual bag-of-words (BoW) model [33], similarity detection is implemented to identify potential loop closure frames among the candidates. Ultimately, the validity of the loop closure frames is confirmed through a reprojection error analysis, ensuring accurate identification.

3.2.1. Candidate Loop Closure Frames Based on Compaction Process

As shown in Figure 5a, the compaction process of the roller within the tunnel unfolds through several sequential stages. Initially, the roller awaits deployment at the starting point within the compaction zone, during which the pertinent compaction parameters are defined: the lateral width w, forward distance d, rolling speed v and number of compaction passes p. Subsequently, the roller commences the compaction of the first lane from the starting position, advancing forward and then reversing the compaction direction upon reaching the predetermined forward distance. With the completion of the first lane, the roller executes a lane change to the right and begins compacting the second lane. It systematically shifts lanes in this manner from the starting point to the endpoint, culminating in the comprehensive compaction of the entire tunnel area. Therefore, the roller follows a fixed path within each lane, and loop closure occurs only when there is a change in the direction of motion. The proposed loop closure detection process is depicted in Figure 5b.
Due to the substantial overlap of information between adjacent image frames, including all frames in the loop closure detection process would not only increase the likelihood of false positives but also waste computational resources. Therefore, the first step is to extract keyframes. A new keyframe must be a certain number of frames ($n_{min}$) away from the previous keyframe to ensure less overlap in the field of view. Additionally, the selected frame must have a sufficient number of extracted feature points. Furthermore, a new keyframe must be inserted whenever there is a significant pose change, to prevent abrupt trajectory shifts and ensure stable tracking by the positioning system.
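As a concrete illustration of these three criteria, the following is a minimal sketch of a keyframe-selection predicate. The threshold values are assumed for illustration; the paper does not report the actual thresholds used.

```python
import math

# Assumed thresholds for illustration only; not the values used in the paper.
MIN_GAP = 10                 # minimum frame gap since the last keyframe (n_min)
MIN_FEATURES = 150           # minimum number of extracted feature points
MAX_TRANS = 0.5              # translation (m) that forces a new keyframe
MAX_ROT = math.radians(10)   # rotation (rad) that forces a new keyframe

def is_new_keyframe(frame_idx, last_kf_idx, n_features, d_trans, d_rot):
    """Return True if the current frame should be inserted as a keyframe."""
    if n_features < MIN_FEATURES:           # too few features: unreliable frame
        return False
    if frame_idx - last_kf_idx >= MIN_GAP:  # enough separation, low view overlap
        return True
    # A significant pose change also triggers insertion to keep tracking stable.
    return d_trans > MAX_TRANS or d_rot > MAX_ROT
```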
After extracting the keyframes, the compaction direction must first be determined, i.e., whether $v$ is greater than 0. When $v > 0$, the keyframes extracted during forward compaction are saved in the set $KF_f$. When $v < 0$, the keyframes from backward compaction are saved in the set $KF_b$. Keyframes within the same set are not subjected to loop closure detection against each other. Loop closure detection between keyframes is performed only when there is a change in the direction of movement.
When the roller changes from forward to backward, the keyframes in the set $KF_f$ are temporarily fixed (denoted as $KF_f = \{KF_{f1}, KF_{f2}, \dots, KF_{f(n-1)}\}$), and the current keyframe $KF_{bx}$ is searched for loop closure candidates within the search area in reverse order, while simultaneously being added to the set $KF_b$. Conversely, when the roller changes from backward to forward, the keyframes in the set $KF_b$ are temporarily fixed (denoted as $KF_b = \{KF_{b1}, KF_{b2}, \dots, KF_{bn}\}$), and the current keyframe $KF_{f(n+x)}$ is searched for loop closure within the search area of the set $KF_b$ in reverse order and then added to the set $KF_f$.
The length threshold L for the search area is adjusted according to Equation (1):
$$N_{KF_x} - \operatorname{int}\!\left(\frac{T_{KF_n} - T_{KF_1}}{T_{KF_x}}\right) \le L \le N_{KF_x} + \operatorname{int}\!\left(\frac{T_{KF_n} - T_{KF_1}}{T_{KF_x}}\right) \tag{1}$$
where $N_{KF_x}$ represents the position of the current keyframe in the keyframe set, $T_{KF_n} - T_{KF_1}$ denotes the total time taken to extract the keyframe set when the direction of speed changes and $T_{KF_x}$ indicates the time elapsed from the change in speed direction to the current keyframe. Finally, a set of candidate loop closure frames $KF_{CL}$ is obtained.
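The sketch below shows one way this selection could be coded. The window computation follows my reading of Equation (1), and the names (the keyframe sets and timestamps) are assumptions rather than the authors' implementation.

```python
def candidate_loop_frames(opposite_set, total_time, t_elapsed, x_idx):
    """
    opposite_set : keyframes recorded in the previous direction (temporarily fixed).
    total_time   : T_KFn - T_KF1, time span of that fixed keyframe set.
    t_elapsed    : T_KFx, time from the change of direction to the current keyframe.
    x_idx        : N_KFx, index of the current keyframe counted since the change.
    """
    half_width = int(total_time / max(t_elapsed, 1e-6))   # int((T_KFn - T_KF1) / T_KFx)
    lo = max(0, x_idx - half_width)
    hi = min(len(opposite_set), x_idx + half_width + 1)
    # The fixed set is searched in reverse order inside the window (Equation (1)).
    return list(reversed(opposite_set[lo:hi]))
```

Only the frames returned here are passed to the similarity check of Section 3.2.2, which is what removes the false candidates produced by visually similar tunnel sections.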

3.2.2. Similarity Detection

The similarity between the current keyframe and the candidate loop closure frames in the set $KF_{CL}$ is assessed, and the frame with the highest similarity is selected as the loop closure frame. In this study, the SURF features from visual odometry are clustered using the K-means method to generate word vectors composed of ID numbers and weights. These vectors are organized into a dictionary using a k-d tree structure. Given the image features, the corresponding words can be retrieved from the dictionary.
After obtaining N features and their corresponding words for a frame, a distribution histogram representing the image in the dictionary is constructed. The Term Frequency–Inverse Document Frequency (TF–IDF) method is employed to assign weight coefficients, reflecting the importance of different words in distinguishing features. TF represents the frequency of a word’s occurrence in a single image; the higher the frequency, the greater the discriminative power. IDF indicates the frequency of a word’s occurrence in the dictionary; the lower the frequency, the greater the discriminative power.
For two keyframes, $KF_n$ and $KF_m$, their corresponding bag-of-words vectors are denoted as $v_{KF_n}$ and $v_{KF_m}$, respectively. The similarity between these vectors is measured according to Equation (2):
$$s(v_{KF_n}, v_{KF_m}) = 1 - \frac{1}{2}\left|\frac{v_{KF_n}}{|v_{KF_n}|} - \frac{v_{KF_m}}{|v_{KF_m}|}\right| = \frac{1}{2}\sum_{i=1}^{N}\left(\left|v_{KF_n}^{i}\right| + \left|v_{KF_m}^{i}\right| - \left|v_{KF_n}^{i} - v_{KF_m}^{i}\right|\right) \tag{2}$$
The resulting similarity scores fall within the range [0, 1] and are sorted in descending order. The candidate frame with the highest similarity is selected as the final loop closure frame. Subsequently, the relative pose between the two frames is computed and employed as the loop closure constraint in the fusion positioning system for unmanned rollers.
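A minimal sketch of this scoring step is given below, assuming L1-normalized TF-IDF word vectors; the function names are illustrative.

```python
import numpy as np

def bow_similarity(v_n, v_m):
    """Equation (2): score in [0, 1]; 1 means identical word distributions."""
    v_n = v_n / np.abs(v_n).sum()          # L1-normalize both bag-of-words vectors
    v_m = v_m / np.abs(v_m).sum()
    return 1.0 - 0.5 * np.abs(v_n - v_m).sum()

def select_loop_frame(current_vec, candidate_vecs):
    """Return (index, score) of the candidate with the highest similarity."""
    scores = [bow_similarity(current_vec, c) for c in candidate_vecs]
    best = int(np.argmax(scores))
    return best, scores[best]
```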

3.3. LiDAR Odometry

To mitigate the impact of different illumination levels on the positioning accuracy in tunnels, the fusion positioning system for unmanned rollers integrates LiDAR odometry in addition to visual odometry and loop closure detection, thereby establishing the corresponding prior pose constraints.
Let $R(t, \alpha)$ represent the LiDAR scan function, where $t$ denotes the time and $\alpha$ denotes the coordinate (index) of a scan point. The polar coordinates of a point $P$ in the LiDAR's coordinate system are denoted as $P(r, \theta)$, with the coordinate $\alpha$ expressed as in Equation (4), where $FOV$ represents the scanning angle of the LiDAR and $N$ represents the number of LiDAR scan points:
$$\alpha = \frac{N-1}{FOV}\,\theta = k_{\alpha}\,\theta \tag{4}$$
The same point $P$ is observed again after a time interval $\Delta t$ between consecutive scans. The scan function at any point in the second scan can then be approximated using a Taylor expansion as follows [34]:
$$R(t+\Delta t,\ \alpha+\Delta\alpha) = R(t,\alpha) + R_t(t,\alpha)\,\Delta t + R_{\alpha}(t,\alpha)\,\Delta\alpha + O(\Delta t^{2}, \Delta\alpha^{2}) \tag{5}$$
By neglecting the higher-order terms, when the scan range and the coordinates of the points change within $[t, t+\Delta t]$, the gradient of the scan function is approximately
$$\frac{\Delta R}{\Delta t} \approx R_t + R_{\alpha}\,\frac{\Delta\alpha}{\Delta t} = R_t + R_{\alpha}\,k_{\alpha}\,\dot{\theta} \tag{6}$$
Equation (6) represents the range flow constraint equation, with $R_t = \frac{\partial R}{\partial t}(t,\alpha)$, $R_{\alpha} = \frac{\partial R}{\partial \alpha}(t,\alpha)$ and $\Delta R = R(t+\Delta t, \alpha+\Delta\alpha) - R(t,\alpha)$.
To express the velocity of all points within the scan range, the velocity $(\dot{r}, \dot{\theta})$ is rewritten in the Cartesian coordinate system of the LiDAR:
$$\begin{cases} \dot{r} = \dot{x}\cos\theta + \dot{y}\sin\theta \\ r\,\dot{\theta} = \dot{y}\cos\theta - \dot{x}\sin\theta \end{cases} \tag{7}$$
Assuming that the environment consists of static rigid bodies, the motion of all scan points is attributed to the intrinsic motion of the LiDAR. Hence, the velocity of the LiDAR and the velocity of the scan points possess the same value but the opposite direction.
$$\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix} = \begin{pmatrix} -v_{x,s} + y\,\omega_s \\ -v_{y,s} - x\,\omega_s \end{pmatrix} \tag{8}$$
Let $\xi_s = (v_{x,s}, v_{y,s}, \omega_s)$ denote the sensor velocity and $(x, y)$ denote the Cartesian coordinates of point $P$. By substituting Equation (7) into Equation (6) and applying the rigid-body assumption given in Equation (8), the range flow constraint equation can be transformed into a LiDAR velocity constraint:
$$\left(\cos\theta + \frac{R_{\alpha} k_{\alpha}\sin\theta}{r}\right)v_{x,s} + \left(\sin\theta - \frac{R_{\alpha} k_{\alpha}\cos\theta}{r}\right)v_{y,s} + \left(x\sin\theta - y\cos\theta - R_{\alpha} k_{\alpha}\right)\omega_s + R_t = 0 \tag{9}$$
Each scan point imposes restrictions on sensor motion. By substituting the angle and coordinates of each scan point into Equation (9), the velocity and pose constraints of the LiDAR can be determined.
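As an illustration, the sketch below stacks the per-point constraint of Equation (9) into a linear least-squares problem for the sensor velocity. It assumes that the temporal and angular derivatives $R_t$ and $R_\alpha$ have already been computed from consecutive scans; any robust weighting or coarse-to-fine refinement is omitted, so this is a sketch rather than the authors' implementation.

```python
import numpy as np

def lidar_velocity(r, theta, R_t, R_alpha, k_alpha):
    """r, theta: range/bearing of the scan points; returns (v_x_s, v_y_s, omega_s)."""
    x, y = r * np.cos(theta), r * np.sin(theta)
    A = np.column_stack([
        np.cos(theta) + R_alpha * k_alpha * np.sin(theta) / r,    # coefficient of v_x,s
        np.sin(theta) - R_alpha * k_alpha * np.cos(theta) / r,    # coefficient of v_y,s
        x * np.sin(theta) - y * np.cos(theta) - R_alpha * k_alpha # coefficient of omega_s
    ])
    b = -R_t                                     # move R_t to the right-hand side
    xi, *_ = np.linalg.lstsq(A, b, rcond=None)   # one Equation (9) row per scan point
    return xi
```

Integrating the estimated velocity over the scan period yields the prior pose that is fed into the fusion module as a constraint.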

3.4. Feature Layer Fusion Model Based on Graph Optimization

Figure 6 illustrates the feature layer fusion model based on graph optimization. Triangles denote the keyframe nodes of the visual odometry, while squares represent the nodes of the LiDAR odometry. Blue lines indicate the standard pose constraints between keyframes, red lines signify the loop closure pose constraints formed when a loop closure keyframe is identified and black lines depict the prior pose constraints from the LiDAR odometry.
In the fusion positioning system for unmanned rollers, all observations and states are jointly optimized, with the residual errors of each constraint assigned weights, referred to as the information matrix. Considering the information matrix, the total residual error can be expressed as
$$F(x) = \sum_{\langle i,j\rangle \in C} e_{ij}^{T}\,\Omega_{ij}\,e_{ij} \tag{10}$$
The optimization problem can then be formulated as
$$x^{*} = \arg\min_{x} F(x) \tag{11}$$
There are three types of residual error: the pose residual error between adjacent frames, the relative pose residual error of the loop closure frames and the prior pose residual error from the LiDAR odometry. Since the first two constraints are based on relative poses, the forms and derivations are identical. However, the prior pose constraint of LiDAR odometry pertains to single-frame observations. Consequently, the residual error model is categorized into two types.

3.4.1. Residual Error Model of Relative Pose

In graph optimization, each node represents a pose of the sensor, denoted by $\xi_1, \xi_2, \dots, \xi_m$. The relative motion between nodes $\xi_i$ and $\xi_j$, denoted by $\Delta\xi_{ij}$, can be expressed as in Equation (12):
$$\Delta\xi_{ij} = \xi_i^{-1}\circ\xi_j = \ln\!\left(\exp\!\left((-\xi_i)^{\wedge}\right)\exp\!\left(\xi_j^{\wedge}\right)\right)^{\vee} \tag{12}$$
When there is a pose error, the residual error is calculated using Equation (13):
$$e_{ij} = \ln\!\left(T_{ij}^{-1}\,T_i^{-1}\,T_j\right)^{\vee} = \ln\!\left(\exp\!\left((-\xi_{ij})^{\wedge}\right)\exp\!\left((-\xi_i)^{\wedge}\right)\exp\!\left(\xi_j^{\wedge}\right)\right)^{\vee} \tag{13}$$
Two variables, $\xi_i$ and $\xi_j$, need to be optimized. A left perturbation is added to each of them, giving $\delta\xi_i$ and $\delta\xi_j$, so that the perturbed residual error can be expressed as
$$\hat{e}_{ij} = \ln\!\left(T_{ij}^{-1}\,T_i^{-1}\exp\!\left((-\delta\xi_i)^{\wedge}\right)\exp\!\left(\delta\xi_j^{\wedge}\right)T_j\right)^{\vee} \tag{14}$$
Equation (14) is simplified:
$$\hat{e}_{ij} = \ln\!\left(T_{ij}^{-1}\,T_i^{-1}\exp\!\left((-\delta\xi_i)^{\wedge}\right)T_j\exp\!\left(\left(\mathrm{Ad}(T_j^{-1})\,\delta\xi_j\right)^{\wedge}\right)\right)^{\vee} \approx e_{ij} - \mathcal{J}_r^{-1}(e_{ij})\,\mathrm{Ad}(T_j^{-1})\,\delta\xi_i + \mathcal{J}_r^{-1}(e_{ij})\,\mathrm{Ad}(T_j^{-1})\,\delta\xi_j \tag{15}$$
The Jacobian matrices of the residual error in Equation (15) with respect to $T_i$ and $T_j$ are, respectively,
$$A_{ij} = \frac{\partial e_{ij}}{\partial\delta\xi_i} = -\mathcal{J}_r^{-1}(e_{ij})\,\mathrm{Ad}(T_j^{-1}), \qquad B_{ij} = \frac{\partial e_{ij}}{\partial\delta\xi_j} = \mathcal{J}_r^{-1}(e_{ij})\,\mathrm{Ad}(T_j^{-1}) \tag{16}$$
where the inverse of the right Jacobian can be approximated as
$$\mathcal{J}_r^{-1}(e_{ij}) \approx I + \frac{1}{2}\begin{bmatrix} \phi_e^{\wedge} & \rho_e^{\wedge} \\ 0 & \phi_e^{\wedge} \end{bmatrix} \tag{17}$$
A first-order Taylor expansion of the residual error yields its Jacobian matrix $J_{ij}$ with respect to the pose:
$$e_{ij}(x_i + \Delta x_i,\ x_j + \Delta x_j) = e_{ij}(x + \Delta x) \approx e_{ij} + J_{ij}\,\Delta x \tag{18}$$
For each residual error block, there is
$$\begin{aligned} F_{ij}(x+\Delta x) &= e_{ij}(x+\Delta x)^{T}\,\Omega_{ij}\,e_{ij}(x+\Delta x) \approx (e_{ij} + J_{ij}\Delta x)^{T}\,\Omega_{ij}\,(e_{ij} + J_{ij}\Delta x) \\ &= e_{ij}^{T}\Omega_{ij}e_{ij} + 2\,e_{ij}^{T}\Omega_{ij}J_{ij}\Delta x + \Delta x^{T}J_{ij}^{T}\Omega_{ij}J_{ij}\Delta x = c_{ij} + 2\,b_{ij}^{T}\Delta x + \Delta x^{T}H_{ij}\Delta x \end{aligned} \tag{19}$$
At this point, the Gauss–Newton method is used to solve the optimization.
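To make Equations (10)–(19) concrete, the following is a compact Gauss–Newton iteration for a planar (SE(2)) pose graph, which matches the roller's essentially two-dimensional motion. The Jacobians are computed numerically here for brevity instead of with the analytic forms of Equation (16); all function names are illustrative, not the authors' code.

```python
import numpy as np

def t2v(T):                      # SE(2) matrix -> (x, y, yaw)
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def v2t(p):                      # (x, y, yaw) -> SE(2) matrix
    c, s = np.cos(p[2]), np.sin(p[2])
    return np.array([[c, -s, p[0]], [s, c, p[1]], [0, 0, 1]])

def rel_residual(xi, xj, zij):   # planar analogue of e_ij = ln(T_ij^-1 T_i^-1 T_j)
    e = t2v(np.linalg.inv(v2t(zij)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))
    e[2] = np.arctan2(np.sin(e[2]), np.cos(e[2]))   # wrap the angle residual
    return e

def gauss_newton_step(poses, edges, eps=1e-6):
    """poses: (N, 3) array; edges: list of (i, j, z_ij, Omega)."""
    N = len(poses)
    H = np.zeros((3 * N, 3 * N)); b = np.zeros(3 * N)
    for i, j, z, Omega in edges:
        e = rel_residual(poses[i], poses[j], z)
        # Numerical Jacobians of e w.r.t. the two connected poses (cf. Eq. (16)).
        A = np.zeros((3, 3)); B = np.zeros((3, 3))
        for k in range(3):
            d = np.zeros(3); d[k] = eps
            A[:, k] = (rel_residual(poses[i] + d, poses[j], z) - e) / eps
            B[:, k] = (rel_residual(poses[i], poses[j] + d, z) - e) / eps
        si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
        H[si, si] += A.T @ Omega @ A; H[si, sj] += A.T @ Omega @ B
        H[sj, si] += B.T @ Omega @ A; H[sj, sj] += B.T @ Omega @ B
        b[si] += A.T @ Omega @ e;     b[sj] += B.T @ Omega @ e
    H[:3, :3] += np.eye(3) * 1e6     # fix the first pose to remove gauge freedom
    dx = np.linalg.solve(H, -b)      # normal equations of Equation (19)
    return poses + dx.reshape(N, 3)
```

Each adjacent-frame edge and each loop closure edge enters this loop in exactly the same way; only the measurement $z_{ij}$ and the information matrix $\Omega_{ij}$ differ.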

3.4.2. Residual Error Model of LiDAR Prior Pose

The LiDAR observation is a unary edge. Unlike the visual odometry and loop closure constraints, which connect two pose states, this observation connects only one pose state. It directly provides the observed value of the state quantity, and its corresponding residual error is the difference between the observed value and the state quantity:
$$e_i = \ln\!\left(Z_i^{-1}\,T_i\right)^{\vee} = \ln\!\left(\exp\!\left((-\xi_{z_i})^{\wedge}\right)\exp\!\left(\xi_i^{\wedge}\right)\right)^{\vee} \tag{20}$$
By adding a left perturbation to the pose, the residual error becomes
$$\begin{aligned} \hat{e}_i &= \ln\!\left(Z_i^{-1}\exp\!\left(\delta\xi_i^{\wedge}\right)T_i\right)^{\vee} = \ln\!\left(Z_i^{-1}\,T_i\exp\!\left(\left(\mathrm{Ad}(T_i^{-1})\,\delta\xi_i\right)^{\wedge}\right)\right)^{\vee} \\ &= \ln\!\left(\exp\!\left(e_i^{\wedge}\right)\exp\!\left(\left(\mathrm{Ad}(T_i^{-1})\,\delta\xi_i\right)^{\wedge}\right)\right)^{\vee} \approx e_i + \mathcal{J}_r^{-1}(e_i)\,\mathrm{Ad}(T_i^{-1})\,\delta\xi_i \end{aligned} \tag{21}$$
Therefore, the Jacobian of the residual error with respect to $T_i$ is
$$\frac{\partial e_i}{\partial\delta\xi_i} = \mathcal{J}_r^{-1}(e_i)\,\mathrm{Ad}(T_i^{-1}) \tag{22}$$
The subsequent process is consistent with that in Section 3.4.1. At this point, the Gauss–Newton method is likewise used to solve the optimization.
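Continuing the planar sketch above (and reusing its numpy import and the t2v/v2t helpers), a LiDAR prior edge would contribute a single block to H and b as follows. This is a hedged illustration of Equations (20)–(22) with a numerical Jacobian, not the authors' code.

```python
def prior_residual(xi, zi):
    """Planar analogue of e_i = ln(Z_i^-1 T_i) for a unary LiDAR prior edge."""
    e = t2v(np.linalg.inv(v2t(zi)) @ v2t(xi))
    e[2] = np.arctan2(np.sin(e[2]), np.cos(e[2]))    # wrap the angle residual
    return e

def add_prior_edge(H, b, poses, i, zi, Omega, eps=1e-6):
    """Accumulate the unary edge into the normal equations H dx = -b."""
    e = prior_residual(poses[i], zi)
    J = np.zeros((3, 3))
    for k in range(3):                               # numerical Jacobian (cf. Eq. (22))
        d = np.zeros(3); d[k] = eps
        J[:, k] = (prior_residual(poses[i] + d, zi) - e) / eps
    si = slice(3 * i, 3 * i + 3)
    H[si, si] += J.T @ Omega @ J
    b[si] += J.T @ Omega @ e
```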

4. Results and Discussion

Figure 7 shows the test platform, which consists of a double-drum roller, a stereo camera, a LiDAR, a controller and an embedded computer. The double-drum roller’s forward and backward movement is controlled by adjusting the inlet and outlet oil volumes of the hydraulic pump through the proportional solenoid valve. The steering angle is controlled by adjusting the oil volume of the steering cylinder. Table 1 shows the parameters of the double-drum roller.

4.1. Results for Loop Closure Detection

Traditional loop closure detection methods are based on the DBoW3 library, which involves extracting keypoints and descriptors. In this experiment, to eliminate the influence of extraneous factors, SURF operators are similarly employed. Subsequently, a visual vocabulary is created, and the features of each image are transformed into a BoW histogram. The BoW histogram of the current image is compared with those stored in the database, and potential loop closure frames are identified through a comparison scoring function. The new loop closure detection method proposed for the fusion positioning system of an unmanned roller improves upon this process by incorporating a selection and classification process for keyframes, achieving more efficient detection within a specified range. The main evaluation metrics are the precision under different lighting conditions and the real-time performance.

4.1.1. Precision

The precision of loop closure detection refers to the probability of correctly identifying a loop closure when the roller passes through similar scenes [35], typically described using Precision and Recall. It is defined by four statistical measures: true positive (TP) represents the number of correctly detected loop closures; true negative (TN) represents the number of correctly rejected loop closures; false positive (FP) represents the number of incorrectly identified loop closures; and false negative (FN) represents the number of loop closures incorrectly rejected. The formulas for Precision and Recall are as follows:
$$Precision = \frac{TP}{TP+FP}, \qquad Recall = \frac{TP}{TP+FN}$$
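These definitions translate directly into code; the counts in the usage comment below are illustrative and not taken from the experiments.

```python
def precision_recall(tp, fp, fn):
    """Compute Precision and Recall from loop closure detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example (illustrative counts): 99 true positives, 1 false positive and
# 66 false negatives give precision = 0.99 and recall = 0.60.
```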
To validate the recognition accuracy and robustness to illumination changes of the improved loop closure detection method based on the compaction process, precision–recall experiments were conducted during the first compaction under illumination intensities of 100 lux and 22 lux, respectively. The results are presented in Figure 8.
At an illumination intensity of 100 lux and a recall rate of 60%, the precision rates of the improved and traditional methods are 99.3% and 97.1%, respectively; at a recall rate of 80%, they are 61.4% and 56.3%. Compared to the traditional loop closure detection method based on DBoW3, the improved method therefore increases the precision by 5.1%, enhancing the recognition accuracy of loop closure frames and effectively filtering out a significant number of false positives in repetitive tunnel scenes.
Additionally, by comparing the left and right panels of Figure 8, it can be seen that, under different illumination levels, the precision at an 80% recall rate is 61.4% and 55.3%, respectively. This slight decrease in precision under lower illumination is due to the loop closure detection being based on visual features, which are more difficult to capture clearly in low-light conditions. However, the overall analysis indicates that this method still exhibits good robustness to changes in illumination.

4.1.2. Real-Time Performance

Real-time performance is another crucial metric [36], where the time taken for loop closure detection must be significantly shorter than the data update cycle to meet the real-time requirements of the positioning system. The factors influencing the real-time performance include not only the complexity of the algorithm but also the computational capabilities of the hardware platform on which the algorithm operates, as well as software-level data scheduling and management.
The roller performs compaction in both the forward and reverse directions. The computation times for all keyframes using different loop closure detection methods are compared in Figure 9. When the direction of the roller remains unchanged, the proposed method eliminates all false candidate frames, thereby avoiding subsequent similarity checks. The average keyframe processing time for the traditional method is 7.89 ms, whereas it is 4.93 ms for the proposed method, thereby reducing the computational time for loop closure detection within the fusion positioning system.

4.1.3. Positioning Experiment for Improved Loop Closure Detection Method

This section validates the improvement in positioning accuracy achieved by the improved loop closure detection method. Figure 10a,b display the trajectories and errors of the roller using different loop closure detection methods. The improved method reduces the median lateral error from 8.5 cm to 7.3 cm. Additionally, compared to the ORB-based system with the traditional loop closure detection method, the median lateral error decreases by 2.2 cm. This demonstrates that correctly eliminating false loop closure frames in similar scenes allows for accurate pose association between the current frame and historical frames, thereby minimizing the impact of accumulated errors on the positioning accuracy.
In summary, the improved loop closure detection method significantly enhances both the precision and real-time performance compared to traditional methods. Furthermore, it exhibits robust performance under different illumination levels, ultimately reducing the positioning errors compared to systems using traditional loop detection methods. This effectively mitigates the cumulative errors encountered by the roller during forward and backward compaction.

4.2. Static Positioning Test

We chose ORB-SLAM2 as the comparative method in the static and dynamic positioning experiments for the following reasons. (1) Performance Benchmarking: ORB-SLAM2 is widely recognized as a benchmark in the field of SLAM due to its robustness and efficiency. Its widespread use in various research studies allows for a meaningful comparative analysis. (2) Feature Efficiency: One of the key strengths of ORB-SLAM2 is its efficient use of ORB features, which are highly effective in environments lacking GNSS signals, as is the focus of our study. (3) Availability and Accessibility: ORB-SLAM2 is open-source and has well-documented implementations, making it accessible for comparative evaluations.
Since the unmanned rollers need to remain stationary before starting the compaction process to receive the compaction parameters and task assignments, it is crucial that the positioning system does not drift during this period. Therefore, a static positioning test was conducted. The roller was kept stationary, and the relevant methods were executed to obtain the static positioning error, as shown in Figure 11.
After removing outliers, the proposed fusion system achieved an average and maximum static longitudinal error of 3.7 cm and 5.2 cm, respectively, compared to ORB-SLAM2’s errors of 4.9 cm and 6.4 cm. The average and maximum static lateral errors were 3.6 cm and 6.1 cm, respectively, compared to ORB-SLAM2’s errors of 4.7 cm and 7.7 cm. Thus, the proposed fusion system improves the static positioning accuracy by 12 mm in the longitudinal direction and 11 mm in the lateral direction, effectively preventing positioning drift during the prolonged stationary periods of the rollers.

4.3. Straight-Line Compaction Test

4.3.1. Short Straight-Line Compaction Positioning Test

To validate the positioning accuracy of the fusion positioning system for unmanned rollers under varying illumination intensities, experiments were conducted in both bright and dim environments. The roller was driven forward and backward over a total distance of 20 m, yielding positioning trajectories for both the fusion system and the traditional visual system, as shown in Figure 12 and Figure 13. In these figures, the red curve represents the actual compaction trajectory of the roller’s steel wheels. By comparing the positioning results of the fusion system and the traditional visual system with the actual compaction path, the real-time lateral errors were obtained, as shown in Figure 14a. The analysis of the real-time lateral errors during forward and backward compaction yielded the average and maximum lateral errors, as depicted in Figure 14b.
At an illumination intensity of 96 lux, the proposed fusion positioning system demonstrated slight improvements in both the average and maximum lateral errors, reducing them from 6.9 cm to 5.8 cm. At an illumination intensity of 18 lux, the visual positioning system based on ORB-SLAM2 exhibited multiple instances of abrupt positioning changes. This degradation was due to the poor illumination conditions, which hindered the clear extraction and tracking of keypoints in the images, leading to decreased pose estimation accuracy and an irreversible loss of positioning accuracy.
By incorporating LiDAR odometry pose constraints, the fusion positioning system effectively mitigates the impact of poor illumination on the positioning accuracy, preventing sudden increases in errors and ensuring accurate short-distance linear compaction in tunnels. Specifically, under an illumination intensity of 18 lux, the average lateral error during forward compaction was reduced from 11.6 cm to 7.7 cm, and, during backward compaction, it was reduced from 11.3 cm to 7.4 cm. This demonstrates a significant enhancement in lateral positioning accuracy during short straight-line compaction.

4.3.2. Long Straight-Line Compaction Positioning Test

Under different illuminations, a 40-m-long straight-line compaction positioning experiment was conducted with the roller moving forward and backward. The real-time positioning trajectories for different systems are illustrated in Figure 15 and Figure 16. The experiment sought to confirm whether long straight-line compaction impacted the positioning error of the fusion system. The real-time positioning errors under different illumination levels were derived from the actual roller trajectory, as presented in Figure 17a. Additionally, the average and maximum lateral errors are displayed in Figure 17b.
Under a dim environment of 18 lux, the ORB-SLAM2 system exhibited sudden error spikes in both forward and backward compaction, occurring six times. In contrast, the fusion positioning system displayed no such behavior. Furthermore, it reduced the average lateral error during forward compaction from 11.0 cm to 7.4 cm and during backward compaction from 10.5 cm to 7.0 cm. Thus, in dim environments, the proposed system enhances the lateral positioning accuracy in long straight-line compaction, unaffected by the increase in the compaction length.
As the linear positioning experiment involved both forward and backward compaction, the camera captured the ground image behind the roller. When the roller compacted forward, there were no compaction marks on the ground behind the roller. During backward compaction, the marks from the forward movement could be captured. This significantly increased the number of keypoints in the images, enhancing the accuracy of both matching and pose estimation. Consequently, in both Figure 14b and Figure 17b, the average error in the backward direction is observed to be smaller than that in the forward direction.

4.3.3. Real-Time Performance of the Positioning System

During the short straight-line positioning experiment, the runtimes of the various components within the fusion positioning system were recorded, including feature extraction and matching based on SURF operators, the improved loop closure detection method and the range-based 2D LiDAR odometry. As shown in Figure 18, the entire process involved forward and reverse compaction, and the maximum, minimum and average runtimes for each step were recorded separately, as listed in Table 2. Visual feature extraction and matching based on SURF operators had an average runtime of 31.35 ms, making it the most time-consuming step due to the processing of a large number of feature points. Regarding loop closure detection, since the keyframes are grouped, detection is temporarily suspended while the roller moves forward and begins only once the roller starts to move backward and the similarity check of the keyframes is performed; the average runtime for the entire process is 4.72 ms. In this experiment, a 2D LiDAR with a smaller data volume was used, which collects data at a frequency of 15 Hz, the same as the camera, and the resulting average runtime was 7.92 ms. The total computational time of these steps (approximately 44 ms) therefore remains below the 1/15 s ≈ 66.7 ms data update period of the camera and LiDAR, so the system can provide real-time, high-precision positioning.

4.4. Lane-Changing Positioning Test

Another crucial metric for the evaluation of fusion systems is the lane-changing positioning accuracy, directly impacting the compaction overlap width and preventing under-compaction or over-compaction. Thus, apart from linear positioning experiments, lane-changing compaction positioning experiments were conducted under different illumination levels for both the fusion system and the visual system. The positioning trajectories are depicted in Figure 19. The real-time lateral errors under different illumination levels are illustrated in Figure 20, with the average and maximum lateral errors tabulated in Table 3.
Under 101 lux illumination, the fusion positioning system exhibits a slight improvement in lateral error compared to the visual system. Meanwhile, at 17 lux illumination, the maximum lateral error decreases from 21.5 cm to 13.5 cm, and the average value decreases from 11.1 cm to 7.4 cm. Thus, during the lane-changing compaction of the unmanned roller, the fusion positioning system effectively mitigates the decrease in positioning accuracy attributed to poor illumination.
By integrating an enhanced loop closure detection method and LiDAR odometry with visual odometry, the fusion system achieves smaller positioning errors and better robustness. It addresses the challenge of decreased visual positioning accuracy resulting from changes in the illumination intensity within tunnels, as evidenced by the combined outcomes of the static, straight-line and lane-changing experiments.

5. Conclusions

In tunnel construction, fluctuations in illumination intensity can compromise the accuracy of visual positioning for unmanned rollers. To mitigate this challenge, this paper proposes an indoor fusion positioning system for unmanned rollers based on a camera and LiDAR. This system integrates loop closure detection and LiDAR odometry on the foundation of visual odometry. Keyframe poses serve as nodes, and constraints including the relative pose constraints between adjacent frames, loop closure constraints with historical frames and LiDAR odometry constraints are amalgamated through graph optimization. Given the prevalence of similar scenes in tunnels, a candidate frame selection method based on the compaction process is proposed to obtain precise loop closure constraints.
Through on-site experiments, it is found that, compared with traditional loop closure detection methods, the improved method significantly enhances the precision while reducing the runtime, meeting the real-time requirements of the system. In the static positioning tests, the longitudinal and lateral accuracy of the roller's indoor fusion positioning system improves by 12 mm and 11 mm, respectively. In the straight-line positioning tests under different illumination levels, the system's positioning error is notably diminished. In two straight-line compaction tests with an illumination intensity below 20 lux, the average lateral error is reduced by 34.1% and 32.8%, respectively. Furthermore, in the lane-changing positioning tests, this system boosts the positioning accuracy by 33% in dim environments. It enables substantial enhancements in positioning accuracy during straight-line and lane-changing compaction compared to visual positioning, effectively circumventing the declines in positioning accuracy caused by illumination changes in tunnels.
Due to the constraints of laboratory environments, it is only feasible to simulate a limited range of illumination variations, such as 20 lux or 100 lux; rapid changes in illumination cannot be promptly mimicked. Additionally, this research builds on an existing visual positioning system by integrating 2D LiDAR. Future studies will need to incorporate more sensors, such as millimeter-wave radar and IMUs, to determine whether integrating additional sensors enhances the accuracy and robustness of the positioning system over the current configuration. Lastly, while the method proposed in this paper focuses on the impact of lighting variations in tunnel environments on the positioning accuracy, it does not address the performance of road rollers in outdoor settings. Future work will therefore extend the analysis and testing to outdoor scenarios, such as highways, to improve the versatility and environmental adaptability of the positioning system.

Author Contributions

Conceptualization, H.H., X.W. and Y.H.; methodology, H.H. and Y.H.; software, H.H.; validation, H.H. and X.W.; formal analysis, H.H.; writing—original draft preparation, H.H.; writing—review and editing, H.H. and X.W.; supervision, X.W. and Y.H.; project administration, X.W. and Y.H.; funding acquisition, X.W. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61901056, and the Youth Science Foundation of the National Natural Science Foundation of China, grant number 52005048.

Data Availability Statement

Data sharing is not applicable.

Acknowledgments

The authors acknowledge the support of the National Natural Science Foundation of China (61901056) and the Youth Science Foundation of the National Natural Science Foundation of China (52005048). The authors would also like to thank the reviewers and editors for their insightful comments, which helped to improve the quality of this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yao, D.; Xie, H.; Qiang, W.; Liu, Y.; Xiong, S. Accurate trajectory tracking with disturbance-resistant and heading estimation method for self-driving vibratory roller. IFAC-PapersOnLine 2018, 51, 754–758. [Google Scholar] [CrossRef]
  2. Chen, B.; Yu, X.; Dong, F.; Zheng, C.; Ding, G.; Wu, W. Compaction Quality Evaluation of Asphalt Pavement Based on Intelligent Compaction Technology. J. Constr. Eng. Manag. 2021, 147, 04021099. [Google Scholar] [CrossRef]
  3. Polaczyk, P.; Hu, W.; Gong, H.; Jia, X.; Huang, B. Improving asphalt pavement intelligent compaction based on differentiated compaction curves. Constr. Build. Mater. 2021, 301, 124125. [Google Scholar] [CrossRef]
  4. Wang, J.; Wang, T.; Pan, F. Development of unmanned roller and its application in highway engineering. In Proceedings of the 20th COTA International Conference of Transportation Professionals, Xi’an, China, 14–16 August 2020; pp. 1583–1590. [Google Scholar]
  5. Zhang, Q.; An, Z.; Liu, T.; Zhang, Z.; Huangfu, Z.; Li, Q.; Yang, Q.; Liu, J. Intelligent rolling compaction system for earth-rock dams. Automat. Constr. 2020, 116, 103246. [Google Scholar] [CrossRef]
  6. Shi, M.; Wang, J.; Li, Q.; Cui, B.; Guan, S.; Zeng, T. Accelerated earth-rockfill dam compaction by collaborative operation of unmanned roller fleet. J. Constr. Eng. Manag. 2022, 148, 04022046. [Google Scholar] [CrossRef]
  7. Bian, Y.; Fang, X.; Yang, M. Automatic rolling control for unmanned vibratory roller based on fuzzy algorithm. J. Tongji Univ. Nat. Sci. 2017, 45, 1830–1838. [Google Scholar]
  8. Wang, X.; Shen, S.; Huang, H.; Zhang, Z. Towards smart compaction: Particle movement characteristics from laboratory to the field. Constr. Build. Mater. 2019, 218, 323–332. [Google Scholar] [CrossRef]
  9. Zhang, Q.; Liu, T.; Zhang, Z. Unmanned rolling compaction system for rockfill materials. Automat. Constr. 2019, 100, 103–117. [Google Scholar] [CrossRef]
  10. Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 15–22. [Google Scholar]
  11. Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Trans. Robot. 2016, 33, 249–265. [Google Scholar] [CrossRef]
  12. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  13. Kazerouni, I.A.; Fitzgerald, L.; Dooly, G.; Toal, D. A survey of state-of-the-art on visual SLAM. Expert Syst. Appl. 2022, 205, 117734. [Google Scholar] [CrossRef]
  14. Zhang, X.; Lu, G.; Fu, G.; Xu, D.; Lin, S. SLAM Algorithm Analysis of Mobile Robot Based on Lidar. In Proceedings of the 2019 Chinese Control Conference, Guangzhou, China, 27–30 July 2019; pp. 4739–4745. [Google Scholar]
  15. Fang, X.; Bian, Y.; Yang, M. Development of a path following control model for an unmanned vibratory roller in vibration compaction. Adv. Mech. Eng. 2018, 10, 1687814018773660. [Google Scholar] [CrossRef]
  16. Wei, Z.; Hui, X.; Quanzhi, X.; Kang, S.; Wei, Q. The impact of attitude feedback on the control performance and energy consumption in the path-following of unmanned rollers. SAE. Tech. Paper. 2020, 1, 5029. [Google Scholar]
  17. Wang, C.; Xu, A.; Kuang, J.; Sui, X.; Hao, Y.; Niu, X. A High-Accuracy Indoor Localization System and Applications Based on Tightly Coupled UWB/INS/Floor Map Integration. IEEE. Sens. J. 2021, 21, 18166–18177. [Google Scholar] [CrossRef]
  18. Song, X.; Li, X.; Tang, W.; Zhang, W.; Li, B. A hybrid positioning strategy for vehicles in a tunnel based on RFID and in-vehicle sensors. Sensors 2014, 14, 23095–23118. [Google Scholar] [CrossRef]
  19. Jiang, S.; Wang, W.; Peng, P. A Single-Site Vehicle Positioning Method in the Rectangular Tunnel Environment. Remote Sens. 2023, 15, 527. [Google Scholar] [CrossRef]
  20. Gao, H.; Wang, J.; Cui, B.; Wang, X.; Lin, W. An innovation gain-adaptive Kalman filter for unmanned vibratory roller positioning. Measurement 2022, 203, 111900. [Google Scholar] [CrossRef]
  21. Sun, Y.; Xie, H. Lateral Positioning Method for Unmanned Roller Compactor Based on Visual Feature Extraction. In Proceedings of the 2019 3rd Conference on Vehicle Control and Intelligence, Hefei, China, 21–22 September 2019; pp. 1–6. [Google Scholar]
  22. Huang, H.; Wang, X.; Hu, Y.; Tan, P. Accuracy Analysis of Visual Odometer for Unmanned Rollers in Tunnels. Electronics 2023, 12, 4202. [Google Scholar] [CrossRef]
  23. Mur-Artal, R.; Tardós, J.D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar]
  24. Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625. [Google Scholar]
  25. Engel, J.; Usenko, V.; Cremers, D. A photometrically calibrated benchmark for monocular visual odometry. arXiv 2016, arXiv:1607.02555. [Google Scholar]
  26. Song, C.; Zeng, B.; Cheng, J.; Wu, F.; Hao, F. PSMD-SLAM: Panoptic Segmentation-Aided Multi-Sensor Fusion Simultaneous Localization and Mapping in Dynamic Scenes. Appl. Sci. 2024, 14, 3843. [Google Scholar] [CrossRef]
  27. Liu, Z.; Li, Z.; Liu, A.; Shao, K.; Guo, Q.; Wang, C. LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme. Remote Sens. 2024, 16, 1524. [Google Scholar] [CrossRef]
  28. Kumar, G.A.; Lee, J.H.; Hwang, J.; Park, J.; Youn, S.H.; Kwon, S. LiDAR and camera fusion approach for object distance estimation in self-driving vehicles. Symmetry 2020, 12, 324. [Google Scholar] [CrossRef]
  29. Gupta, S.; Kumar, M.; Garg, A. Improved object recognition results using SIFT and ORB feature detector. Multimed. Tools Appl. 2019, 78, 34157–34171. [Google Scholar] [CrossRef]
  30. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 404–417. [Google Scholar]
  31. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
  32. Jakubović, A.; Velagić, J. Image feature matching and object detection using brute-force matchers. In Proceedings of the 2018 International Symposium ELMAR, Zadar, Croatia, 16–19 September 2018; pp. 83–86. [Google Scholar]
  33. Qader, W.A.; Ameen, M.M.; Ahmed, B.I. An overview of bag of words; importance, implementation, applications, and challenges. In Proceedings of the 2019 International Engineering Conference, Erbil, Iraq, 23–25 June 2019; pp. 200–204. [Google Scholar]
  34. Jaimez, M.; Monroy, J.G.; Gonzalez-Jimenez, J. Planar Odometry from a Radial Laser Scanner. A Range Flow-based Approach. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; pp. 4479–4485. [Google Scholar]
  35. Guclu, O.; Can, A.B. Fast and effective loop closure detection to improve SLAM performance. J. Intell. Robot. Syst. 2019, 93, 495–517. [Google Scholar] [CrossRef]
  36. Ma, J.; Ye, X.; Zhou, H.; Mei, X.; Fan, F. Loop-closure detection using local relative orientation matching. IEEE Trans. Intell. Transp. Syst. 2021, 23, 7896–7909. [Google Scholar] [CrossRef]
Figure 1. Tunnel compaction scenes.
Figure 2. System framework diagram.
Figure 3. An example of visual odometry.
Figure 4. Error loop closure frames in similar scenes.
Figure 5. Schematic diagram of loop detection based on compaction process: (a) roller construction process; (b) loop closure detection process.
Figure 6. Fusion model based on graph optimization.
Figure 7. Distribution of sensors.
Figure 8. Precision–recall curve under different illumination levels.
Figure 9. Running time for loop closure detection.
Figure 10. Positioning experiment with different loop closure detection methods: (a) positioning trajectory; (b) positioning error.
Figure 11. Static positioning error diagram.
Figure 12. Bright environment, illumination intensity = 96 lux: (a) test site; (b) positioning data.
Figure 13. Dim environment, illumination intensity = 18 lux: (a) test site; (b) positioning data.
Figure 14. Lateral error under different illumination levels: (a) real-time lateral error; (b) average and maximum lateral error.
Figure 15. Forward and backward trajectory at illumination intensity of 95 lux.
Figure 16. Forward and backward trajectory at illumination intensity of 20 lux.
Figure 17. Lateral error under different illumination levels: (a) real-time lateral error; (b) average and maximum lateral error.
Figure 18. Graph of computational time variations for short straight-line positioning experiment.
Figure 19. Lane-changing positioning data under different illumination.
Figure 20. Lane-changing positioning error under different illumination.
Table 1. Parameters of double-drum roller.

Parameter | Value
Compaction width | 1200 mm
Maximum steering angle | ±20°
Rolling speed | 2–5 km/h
Table 2. Calculation schedule for each step.

Step | Minimum (ms) | Maximum (ms) | Mean (ms)
Feature extraction and matching | 26.77 | 39.12 | 31.35
Loop closure detection | 0.14 | 6.73 | 4.72
2D LiDAR odometry | 6.14 | 14.36 | 7.92
Table 3. Lane-changing positioning errors.

System | Average Lateral Error (cm), 101 lux | Maximum Lateral Error (cm), 101 lux | Average Lateral Error (cm), 17 lux | Maximum Lateral Error (cm), 17 lux
ORB-SLAM2 | 7.2 | 16.4 | 11.1 | 21.5
Fusion System | 6.6 | 16.8 | 7.4 | 13.5