Article

Map Construction and Path Planning Method for a Mobile Robot Based on Multi-Sensor Information Fusion

1 School of Automotive Engineering, Shandong Jiaotong University, Jinan 250357, China
2 College of Energy and Power Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing 210016, China
3 SWAT Corps, Shandong Provincial Public Security Department, Jinan 250115, China
4 Office of Academic Affairs, Shandong Jiaotong University, Jinan 250357, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(6), 2913; https://doi.org/10.3390/app12062913
Submission received: 25 January 2022 / Revised: 28 February 2022 / Accepted: 9 March 2022 / Published: 12 March 2022
(This article belongs to the Special Issue Advancing Reliability & Prognostics and Health Management)

Abstract: In order to solve the path planning problem of an intelligent vehicle in an unknown environment, this paper proposes a map construction and path planning method for mobile robots based on multi-sensor information fusion. Firstly, the extended Kalman filter (EKF) is used to fuse the environmental information from LiDAR and a depth camera, and the pose and acceleration information of the robot is obtained through a pose sensor; a SLAM algorithm based on the fusion of LiDAR, the depth camera, and the inertial measurement unit (IMU) is then built. Secondly, an improved ant colony algorithm is used for global path planning, while the dynamic window method handles local planning and local obstacle avoidance. Finally, experiments were carried out on a robot platform to verify the reliability of the proposed method. The results showed that the map constructed by multi-sensor information fusion was closer to the real environment and that the accuracy and robustness of SLAM were effectively improved. The turning angle of the path was smoothed by the improved ant colony algorithm, real-time obstacle avoidance was realized by the dynamic window method, the efficiency of path planning was improved, and automatic feedback control of the intelligent vehicle was realized.

1. Introduction

With the rapid development of computer science, sensor technology, and other high-tech industries, mobile robot technology is advancing rapidly and has been applied to fields such as logistics, agriculture, and services [1]. As the key technology in mobile robot research and development, navigation mainly comprises simultaneous localization and mapping (SLAM) and path planning. SLAM must accomplish robot positioning and mapping in an unknown environment, while path planning aims to generate collision-free paths in a complex environment so that the mobile robot can move [2,3].
The key task of SLAM is to estimate the robot's pose and construct an environment map in an unknown environment. According to the pose and map information, the unknown motion of the vehicle is estimated, and the map is built incrementally as the vehicle moves, realizing its autonomous navigation and obstacle avoidance functions. When real-time positioning and mapping rely on a single sensor, the robustness of the system is poor; multi-sensor fusion can improve the accuracy and robustness of the system. Yang et al. proposed a dimension-reduction approach that constructs a two-dimensional grid map from the camera point cloud by relating the camera coordinates to the real ground plane [4]. Chong et al. proposed a binocular visual-inertial odometry method based on a semi-direct method; it combines inertial navigation data with the visual information of the binocular camera to construct a high-precision map, which reduces the computation of measurement values and improves the work efficiency of the system [5]. Ye et al. used a graph-based optimization method to fuse LiDAR and IMU data, improving the accuracy of map construction and localization [6]. Huang et al. adopted camera information, used inter-frame matching to constrain the data association between point clouds, and improved motion trajectory estimation by iterating the closest points; however, the accuracy of the constructed maps was poor [7]. Chen et al. installed a 2D LiDAR vertically on the robot and matched it with a depth camera for data collection to construct a high-precision map, but the positioning accuracy of this method is poor [8]. Multi-sensor fusion can provide better data support for mobile robot positioning and mapping and make up for the defects of single-sensor detection.
Path planning builds on multi-source perception information: feasible paths to the destination are searched at a certain update frequency, and an optimal target path from the initial point to the destination is iterated on the premise of ensuring safety. Among path planning methods, two-dimensional geometric search is the most mature; other widely used algorithms include Dijkstra [9], A* [10], and Basic Theta* [11]. Xiong et al. proposed a large-scale adaptive ocean V-ACO path planner that improves the heuristic function of the ant colony algorithm to find a collision-free optimal trajectory; it can find the optimal path with more sampling values [12]. Eduardo et al. proposed the dynamic obstacle dynamic window algorithm and the dynamic obstacle tree dynamic window algorithm, which improve the safety of avoidance maneuvers using two restricted occupancy grids [13]. Wang et al. proposed a hybrid path planning algorithm based on the A* algorithm and the dynamic window method, which solves the path planning problem in a complex multi-objective environment and improves the efficiency of autonomous navigation [14]. Cheng et al. proposed a hybrid path planning algorithm combining the A* algorithm and the dynamic window method; the resulting path is smoother, retains global optimality, and improves dynamic obstacle avoidance and autonomous navigation capabilities [15]. Indoor environments are complex, and a fusion algorithm therefore needs good robustness and real-time performance in solving the path planning problem, which is beneficial for improving the autonomous navigation ability of the vehicle.
For sensor data fusion, filtering algorithms are usually used [16,17,18,19]. On the basis of the Sage-Husa adaptive filtering algorithm, Cheng and Huang applied Cholesky decomposition to the posterior covariance estimation matrix of the extended Kalman filter SLAM algorithm, greatly improving the operating efficiency and estimation accuracy of the system [20]. Li et al. proposed a multi-sensor fusion mapping method in which the EKF fuses LiDAR and depth camera data and the generated map is fused a second time through Bayesian estimation; this method makes full use of the redundant sensor information about the target and improves the robustness of SLAM [21].
In order to solve the path planning problem of intelligent vehicles in unknown environments, we designed a method of map construction and path planning for mobile robots based on multi-sensor information fusion. The main innovations are as follows: (1) In the face of an unknown environment, a real-time positioning and mapping method based on the fusion of LiDAR, depth camera, and inertial measurement unit is proposed. (2) A hybrid path planning method is proposed on the basis of the combination of the improved ant colony algorithm and the dynamic window method. Using this method, the path turning angle can be optimized, the collision with obstacles can be avoided, the efficiency of path planning is improved, and the automatic feedback mechanism of a mobile robot is realized.

2. SLAM Based on Multi-Sensor Fusion

In the field of robot navigation, the problem of SLAM can be simply described as follows: A robot moving in an unknown environment can estimate its own pose according to the pose and map information and can build an incremental map in the process of moving to achieve autonomous obstacle avoidance and navigation [22]. In order to describe the SLAM problem, the motion of the vehicle in a continuous period of time is decomposed into discrete moments t = 1, 2, ∙∙∙, k, and the corresponding moment positions x1, x2, ∙∙∙, xk constitute the motion trajectory of the vehicle. For the nonlinear system in the actual motion process, the motion model and observation model of the system can be expressed as follows:
$$\begin{cases} x_k = f(x_{k-1},\, u_k,\, w_k) \\ z_k = h(x_k,\, v_k) \end{cases} \tag{1}$$
where $x_k$ represents the robot pose, $z_k$ the observed value of the system, $f(\cdot)$ the system motion equation, $h(\cdot)$ the system observation equation, $u_k$ the control input of the system, $w_k$ the system process noise, and $v_k$ the system observation noise.
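To make the notation concrete, the sketch below instantiates f(∙) and h(∙) for a planar differential-drive vehicle; the unicycle discretization, the time step, and the direct pose observation are illustrative assumptions rather than the paper's exact models.

```python
import numpy as np

def f(x, u, w, dt=0.1):
    """Motion model x_k = f(x_{k-1}, u_k, w_k): a hypothetical unicycle
    discretization with state x = [px, py, theta], control u = [v, omega],
    and additive process noise w."""
    px, py, theta = x
    v, omega = u
    return np.array([px + v * np.cos(theta) * dt,
                     py + v * np.sin(theta) * dt,
                     theta + omega * dt]) + w

def h(x, v_noise):
    """Observation model z_k = h(x_k, v_k): the sensor is assumed to
    observe the planar pose directly, corrupted by noise v_k."""
    return x + v_noise
```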

2.1. Description of Algorithm

On the basis of 2D LiDAR and a depth vision sensor, this paper proposes a SLAM framework for autonomous vehicle localization and mapping based on multi-sensor information fusion, compensating for the fact that low obstacles in the environment cannot be perceived by the LiDAR alone. The algorithm fuses the information of the 2D LiDAR and the depth vision sensor, improves the vehicle's environmental perception and its ability to detect obstacles in a complex environment, avoids the low detection accuracy inherent in any single sensor, and improves the SLAM accuracy and stability of the mobile robot. The overall framework of the algorithm is shown in Figure 1.
The overall process of the algorithm is mainly divided into the following four parts:
  • Sensor calibration: The camera, LiDAR, and IMU are calibrated in time and space so that the data output by the sensors share a common spatial and temporal reference.
  • Data preprocessing: Data processing of laser point cloud and camera information. The IMU is pre-integrated to obtain the relative pose transformation matrix of the vehicle, which provides the transformation basis for the LiDAR point cloud frame matching.
  • EKF data fusion: The extended Kalman filter is used to fuse the laser detection data and the visual detection data to generate a local map.
  • Map construction and loop closure detection: A detection circle of radius R is delineated around the real-time pose of the robot, and the laser point cloud ICP algorithm is used to perform loop closure detection on the robot trajectory.

2.2. Data Preprocessing

The data processing of the depth camera converts the image data obtained by the depth camera into laser data in order to complete the data fusion between sensors. The principle of converting the depth image into laser data is shown in Figure 2. The conversion steps are as follows, and a code sketch follows Equation (4):
  • The obtained depth image is screened for the effective area to obtain the depth image (u, v) to be processed.
  • Through the depth image (u, v) and the camera parameter model, the position coordinates M (x, y, z) of each pixel of the depth camera in the world coordinate system are obtained.
  • Project the spatial point cloud (x, y, z) into the corresponding laser scanning range, and calculate the angle AOD between the straight lines AO and DO; the calculation formula is shown in (2):
    $$\theta = \arctan\left(\frac{x}{z}\right) \tag{2}$$
The scanning range of the laser is [ε, ζ], the laser beam is divided into N parts, and the laser data are represented by laser[N]. The index value n of the point M projected into the array is
$$n = \frac{\theta - \varepsilon}{(\zeta - \varepsilon)/N} = \frac{N(\theta - \varepsilon)}{\zeta - \varepsilon} \tag{3}$$
The value stored at laser[n] is the distance r from the projected point D to the optical center O of the camera:
$$\mathrm{laser}[n] = r = |OD| = \sqrt{x^2 + z^2} \tag{4}$$
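The conversion of Equations (2)–(4) can be sketched as follows; the pinhole-projection step and the variable names (fx, cx, etc.) are illustrative assumptions, since the paper does not give an implementation.

```python
import numpy as np

def depth_to_laser(depth, fx, cx, eps, zeta, n_beams):
    """Project one depth-image row into a planar laser scan.

    depth: (W,) depths z along the optical axis (m); fx, cx: assumed
    pinhole intrinsics; [eps, zeta]: laser angular range (rad);
    n_beams: number of beams N. Follows Equations (2)-(4)."""
    laser = np.full(n_beams, np.inf)
    for u, z in enumerate(depth):
        if not np.isfinite(z) or z <= 0:
            continue                      # skip invalid pixels
        x = (u - cx) * z / fx             # lateral offset in camera frame
        theta = np.arctan2(x, z)          # Eq. (2): theta = arctan(x / z)
        if not (eps <= theta <= zeta):
            continue                      # outside the scan sector
        n = int(n_beams * (theta - eps) / (zeta - eps))  # Eq. (3)
        n = min(n, n_beams - 1)
        r = np.hypot(x, z)                # Eq. (4): range to optical center
        laser[n] = min(laser[n], r)       # keep nearest return per beam
    return laser
```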

2.3. Data Fusion Based on EKF

When a depth camera or LiDAR is used alone for map construction, single-sensor defects introduce a certain error. Therefore, this paper uses the EKF to fuse the depth camera and LiDAR data for mapping, improving mapping accuracy. Because the measurement accuracy of the LiDAR is higher than that of the depth camera, the depth camera measurement serves as the system prediction at the current moment and the LiDAR measurement serves as the system measurement, which ensures the accuracy of the data fusion. The process of EKF data fusion is as follows (a code sketch follows the list):
  • Prediction step. First, initialize the current pose and covariance with the initial position of the robot. The depth camera on the vehicle collects the vehicle motion data; this value serves as the current system input $u_k$ to predict the pose $x_{k|k-1}$ at the next moment and the associated error covariance:
    $$x_{k|k-1} = f(x_{k-1|k-1},\, u_k) \tag{5}$$
    $$P_{k|k-1} = A_k P_{k-1|k-1} A_k^T + Q_k \tag{6}$$
    where $A_k = \left.\frac{\partial f}{\partial x}\right|_{x_{k|k-1},\, u_k}$ is the Jacobian matrix of the state equation, and $Q_k$ represents the covariance matrix of the Gaussian process noise.
  • Data association. Before fusing the depth camera and LiDAR data, the data of the two sensors must be correlated to prevent mis-matches that would make it impossible to build a complete map. The correlation between the two sensors' observations is judged by the Mahalanobis distance between them; the correlation is considered high when the distance is below the chi-squared threshold $\chi^2_{d,\alpha}$:
    $$D^2 = \tau_t^T P_{k|k-1}^{-1} \tau_t < \chi^2_{d,\alpha}, \qquad \tau_t = Z_l - Z_c \tag{7}$$
    where $d = \operatorname{rank}(\tau_t)$, $\alpha$ is the confidence level, $\tau_t$ is the measured position error, $Z_l$ is the measurement of the LiDAR, and $Z_c$ is the measurement of the depth camera.
  • Update step. First, calculate the Kalman gain $K_k$, and use the multi-sensor measurements to correct the previous system prediction; that is, the position information obtained by the LiDAR corrects the positioning data obtained by the camera. The optimal estimate of the fused sensor data is then computed via the Kalman gain, and the covariance matrix is updated:
    $$K_k = P_{k|k-1} H_k^T \left[ H_k P_{k|k-1} H_k^T + R_k \right]^{-1} \tag{8}$$
    $$x_{k|k} = x_{k|k-1} + K_k \left( z_k - h(x_{k|k-1}) \right) \tag{9}$$
    $$P_{k|k} = (I - K_k H_k) P_{k|k-1} \tag{10}$$
    where $H_k$ is the observation (transformation) matrix, and $R_k$ represents the covariance matrix of the Gaussian measurement noise.
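A minimal sketch of one predict–associate–update cycle is given below, assuming the camera-derived motion enters as the control input and the LiDAR measurement corrects it, per Equations (5)–(10); the interfaces and the chi-squared gate value are illustrative, and the gate is applied to the innovation covariance rather than to $P_{k|k-1}$ alone.

```python
import numpy as np

def ekf_fuse(x, P, u, z_lidar, z_cam, f, F_jac, h, H, Q, R, chi2_gate=5.99):
    """One EKF predict/associate/update cycle per Equations (5)-(10).
    x, P: previous state and covariance; u: camera-derived motion input;
    z_lidar, z_cam: the two sensors' measurements; F_jac returns the
    Jacobian A_k. A schematic sketch, not the authors' exact code."""
    # Predict (Eqs. 5-6)
    x_pred = f(x, u)
    A = F_jac(x, u)
    P_pred = A @ P @ A.T + Q

    # Data association (Eq. 7): Mahalanobis gate on the sensor residual
    # (chi2_gate = 5.99 is the 95% chi-squared value for 2 dof)
    tau = z_lidar - z_cam
    S = H @ P_pred @ H.T + R
    if tau.T @ np.linalg.inv(S) @ tau >= chi2_gate:
        return x_pred, P_pred             # reject mis-matched data

    # Update (Eqs. 8-10): LiDAR corrects the camera-based prediction
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z_lidar - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```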

2.4. Map Construction and Loop Closure Detection

Using the fused sensor data, the algorithm based on graph optimization is used to construct the map after the sensor data processing is completed. The SLAM algorithm framework based on graph optimization is shown in Figure 3.
Loop closure detection. The ICP algorithm matches the current laser point cloud information with the sub-map, obtains the pose transformation relationship, and forms a closed-loop constraint that is inserted into the back-end optimization. The loop closure detection process judges, through the detection mechanism, the similarity between the relative position of the mobile robot in the laser point cloud and the relative position of the carrier in the sub-map. The optimal candidate pose is calculated by optimization, and the accumulated bias is eliminated to complete the loop closure detection.
Assume that the LiDAR feature point cloud of the current frame is $P_i$. With the carrier position obtained by EKF fusion as the center and R as the radius, pose-distance detection is performed: the point cloud of any target frame whose distance from the current pose is less than R is taken as the candidate point cloud $P_j$. The ICP algorithm is then run on $P_i$ and $P_j$ to output the pose vector f, and a loop closure is considered detected. After obtaining f, the position $k_t$ of the detected pose in the historical trajectory can be calculated. Finally, the vehicle poses between frames $k_{t-1}$ and $k_t$ are optimized and corrected, and the corrected pose trajectory and map are output.
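A sketch of the radius-R candidate selection that precedes the ICP matching might look as follows; the min_gap guard is a hypothetical addition that keeps the robot from matching against its immediate past.

```python
import numpy as np

def loop_closure_candidates(pose_history, current_pose, R, min_gap=50):
    """Select historical poses within radius R of the current pose as
    loop-closure candidates (the detection circle of Section 2.1); ICP
    between the current scan and each candidate's point cloud then
    yields the pose vector f and the closing constraint."""
    candidates = []
    for k, pose in enumerate(pose_history[:-min_gap]):
        if np.linalg.norm(pose[:2] - current_pose[:2]) < R:
            candidates.append(k)          # index k_t into the trajectory
    return candidates
```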

3. Path Planning for Mobile Robots

3.1. Global Path Planning Based on Improved Ant Colony Algorithm

This paper proposes two improvements to the traditional algorithm, targeting its slow convergence and its tendency to fall into local optima: (1) an improved heuristic function gives the ant colony an initial selection to guide its direction, and (2) a hybrid strategy updates the pheromone.
  • Initial state: In the initial state, the probability of the ants choosing any feasible path is equal, and each ant determines the direction of the next node according to the pheromone content of the state transition points on the selectable paths. Let $p_{ig}^{(b)}(t)$ denote the state transition probability of ant b from point i to point g on the grid graph, with pheromone content $T_{ig}(t)$ from point i to point g, heuristic function $N_{ig}(t)$, pheromone heuristic factor a, and expectation heuristic factor w. The state transition probability formula is then as follows:
    $$p_{ig}^{(b)}(t) = \begin{cases} \dfrac{T_{ig}^{a}(t) \cdot N_{ig}^{w}(t)}{\sum_{g \in \mathrm{all}(i)} T_{ig}^{a}(t) \cdot N_{ig}^{w}(t)}, & g \in \mathrm{all}(i) \\ 0, & \text{otherwise} \end{cases} \tag{11}$$
    where all(i) represents the set of transfer nodes that ant b can select at node i, and $N_{ig}(t)$ is inversely proportional to the distance $l_{ig}$ between nodes i and g.
  • Improved heuristic function: Following [23], the heuristic function is built on the square of the weighted sum of the distance from the current node to the next node and the distance from the next node to the target, to speed up the search of the ant colony:
    $$N_{ig}(t) = \frac{1}{\left( \sigma \cdot l_{ig} + (1-\sigma) \cdot l_{iE} \right)^2} \tag{12}$$
    where σ ∈ [0, 1] is determined by the real-time environment, $l_{ig}$ represents the distance between the current node and the next node, and $l_{iE}$ represents the distance from the next node to the target point E. This strengthens the ant colony's selection of, and guidance toward, the next target point, which helps to reduce the search time of the ant colony.
  • Pheromone update: In order to speed up the ants' search, reduce the probability of re-selecting routes already taken, and prevent the pheromone concentration from becoming too low after volatilization or from accumulating excessively, the maximum value $T_{max}$ and minimum value $T_{min}$ of the pheromone concentration between two points must be determined. In this paper, they were determined according to the volatilization degree of the pheromone and the optimal path:
    $$T_{max} = \frac{1}{2(1-\lambda)} \cdot \frac{1}{D_s} + \frac{1}{D_s}, \qquad T_{min} = \frac{T_{max}}{c} \tag{13}$$
    where Ds is the optimal path length after a certain iteration, and c is the number of cycles each time.
After the ants finish each cycle, the pheromone on the paths changes. In order to obtain the optimal path, the pheromone on the entire path must be adjusted; the update rule is shown in Equation (14). At the same time, to prevent the ant colony from exploiting the pheromone on a locally optimal path, the pheromone increment of the b-th ant between two points adopts the clamping strategy shown in Equation (16), ensuring that each ant can find a feasible path (see the code sketch after Equation (16)):
$$T_{ig}(t+1) = (1-\lambda) \cdot T_{ig}(t) + \Delta T_{ig}(t) \tag{14}$$
$$\Delta T_{ig}(t) = \sum_{b=1}^{m} \Delta T_{ig}^{b}(t) \tag{15}$$
$$\Delta T_{ig}^{b} = \begin{cases} T_{min}, & \Delta T_{ig}^{b} + (1-\lambda) T_{ig}^{b-1} < T_{min} \\ 1/r_s, & T_{min} \le \Delta T_{ig}^{b} + (1-\lambda) T_{ig}^{b-1} \le T_{max} \\ T_{max}, & \Delta T_{ig}^{b} + (1-\lambda) T_{ig}^{b-1} > T_{max} \end{cases} \tag{16}$$
where λ represents the pheromone volatilization coefficient, λ ∈ [0, 1]; $\Delta T_{ig}(t)$ represents the pheromone increment between nodes i and g; $\Delta T_{ig}^{b}(t)$ represents the pheromone increment contributed by the b-th ant between nodes i and g; m represents the number of ants, m ∈ [1, 50]; and $r_s$ is the length of the optimal solution found thus far.
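The improved transition rule and pheromone clamping can be sketched as follows, assuming precomputed distance lookups and reading $l_{iE}$ in Equation (12) as the candidate node's distance to the target, per the description above; all lookup names are illustrative.

```python
import numpy as np

def transition_probs(i, feasible, T, dist, l_to_goal, a, w, sigma):
    """State-transition probabilities of Eq. (11) using the improved
    heuristic of Eq. (12). T: pheromone matrix; dist[i, g]: l_ig;
    l_to_goal[g]: distance from candidate node g to the target."""
    scores = np.array([
        T[i, g] ** a
        * (1.0 / (sigma * dist[i, g] + (1 - sigma) * l_to_goal[g]) ** 2) ** w
        for g in feasible])
    return scores / scores.sum()           # normalized over all(i)

def update_pheromone(T, delta, lam, D_s, c):
    """Evaporate and deposit per Eq. (14), then clamp each value into
    [T_min, T_max] computed from Eq. (13), mirroring Eq. (16)."""
    T_max = 1.0 / (2 * (1 - lam)) / D_s + 1.0 / D_s
    T_min = T_max / c
    return np.clip((1 - lam) * T + delta, T_min, T_max)
```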

3.2. Path Smoothing Based on Cubic B-Spline Curve

The path planned by the traditional ant colony algorithm forms peaks at the turning points, which increases the turning angle of the path to a certain extent. Therefore, the global path is smoothed by a cubic B-spline curve to ensure that the smart car passes turning points smoothly and safely and to satisfy its geometric constraints. The smoothing process of the third-order B-spline curve is shown in Figure 4.
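A cubic B-spline smoothing pass of this kind can be sketched with SciPy as follows; the smoothing factor s and the sample count are tunable assumptions, not the paper's settings.

```python
import numpy as np
from scipy.interpolate import splev, splprep

def smooth_path(waypoints, n_samples=200):
    """Smooth a planned path with a cubic B-spline (k=3), as in
    Section 3.2. waypoints: (N, 2) array of grid-path vertices.
    A standard SciPy-based sketch, not the authors' implementation."""
    x, y = waypoints[:, 0], waypoints[:, 1]
    tck, _ = splprep([x, y], k=3, s=0.5)   # s: smoothing factor (tunable)
    u = np.linspace(0.0, 1.0, n_samples)
    sx, sy = splev(u, tck)                 # sample the smooth curve
    return np.column_stack([sx, sy])
```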

3.3. Local Path Planning

Global path planning can compute the globally optimal route, but it requires accurate environmental information in advance. When the driving environment of the intelligent robot changes, such as when an unknown obstacle appears, the intelligent robot platform integrates the obstacle information into the environment map in real time through its onboard environmental sensors. The driving path of the platform is then replanned by the local path planning method to avoid the obstacle and complete local obstacle avoidance, after which the platform continues to drive according to the global optimal solution.
The dynamic window algorithm realizes local path planning and real-time obstacle avoidance. While the smart car is running, numerous velocity pairs $(v_t, \omega_t)$ exist in the velocity space, and the feasible trajectories of the robot can be simulated within the window over the time horizon t. Each feasible trajectory is evaluated according to several metrics so that the speed corresponding to the optimal trajectory can be sent to the mobile platform. Given the many feasible simulated trajectories in the velocity space, this paper computes an evaluation function that incorporates the global path planning information to ensure that the final local plan follows the globally optimal path. The designed evaluation function is as follows:
$$G(v, \omega) = \mu \cdot \mathrm{goal}(v, \omega) + \beta \cdot d(v, \omega) + \phi \cdot d(v_m, \omega_m) + \lambda \cdot ve(v, \omega) \tag{17}$$
where goal(v, ω) represents the angle between the vector pointing to the goal and the vector connecting the starting point and the current position; d(v, ω) represents the vertical distance between a static obstacle and the current trajectory pointing to the target; $d(v_m, \omega_m)$ represents the vertical distance between an unknown obstacle and the current trajectory pointing to the target; ve(v, ω) represents the evaluation function of the current speed along the target trajectory; and μ, β, ϕ, and λ are the weighting coefficients.
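Trajectory selection over the sampled window can be sketched as follows; score_terms stands in for the per-trajectory computation of the four terms, and the default weights are hypothetical, not the paper's tuned values.

```python
import numpy as np

def dwa_select(v_range, w_range, score_terms, mu=0.8, beta=0.2,
               phi=0.2, lam=0.1, n=11):
    """Sample the dynamic window and pick the (v, w) maximizing the
    evaluation function G(v, w) of Eq. (17). score_terms(v, w) must
    return the tuple (goal, d, d_m, ve) for one simulated trajectory."""
    best, best_g = None, -np.inf
    for v in np.linspace(*v_range, n):
        for w in np.linspace(*w_range, n):
            goal, d, d_m, ve = score_terms(v, w)
            g = mu * goal + beta * d + phi * d_m + lam * ve  # Eq. (17)
            if g > best_g:
                best, best_g = (v, w), g
    return best
```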

3.4. Path Planning Combined with Improved Ant Colony Algorithm and Dynamic Window Algorithm

The flow chart of the fusion algorithm in this paper is shown in Figure 5, and a schematic loop is sketched after the list:
  • Use on-board sensors to obtain environmental information to realize the vehicle’s own positioning and build a raster map.
  • Use the improved ant colony algorithm to complete the global path planning of the intelligent vehicle and obtain the optimal path.
  • According to the new obstacle information obtained by the vehicle sensor in real time, integrate the information into the grid map, and use the dynamic window method to complete the local obstacle avoidance.
  • Combine the global planning path, plan the local moving target points in real time, and use the preview tracking method to track the target points according to the control parameters in the path planning process to achieve real-time obstacle avoidance and obtain the local optimal path.
  • Determine whether the local moving target point is the final target point; if not, jump to Step 1. If yes, the intelligent vehicle reaches the target point, and the algorithm ends.
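The overall loop can be sketched as follows; robot, global_planner, and dwa are hypothetical interfaces standing in for the ROS nodes of Section 4, not the authors' API.

```python
def navigate(robot, global_planner, dwa, goal, tol=0.1):
    """Top-level loop of the fusion algorithm (Figure 5), under the
    assumed interfaces named in the lead-in."""
    grid = robot.build_map()                              # Step 1: SLAM grid map
    path = global_planner.plan(grid, robot.pose(), goal)  # Step 2: improved ACO
    while robot.distance_to(goal) > tol:
        grid.insert(robot.sense_obstacles())              # Step 3: fuse new obstacles
        local_goal = path.next_waypoint(robot.pose())     # Step 4: local target point
        v, w = dwa.step(robot.pose(), local_goal, grid)   #   DWA tracks it
        robot.command(v, w)                               #   real-time feedback control
    robot.stop()                                          # Step 5: goal reached
```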

4. Analysis and Verification

4.1. ROS-Based Experimental Platform

In this paper, the algorithm was verified and analyzed on a ROS-based intelligent real-vehicle platform (DrivingPo). Four in-wheel motors serve as the drive system of the platform, which is equipped with a 2D LiDAR (FS-D10) installed horizontally at its front and rear ends, a depth camera (Kinect V2), inertial navigation, a microcomputer, and other equipment. The algorithms ran under Ubuntu 18.04 on an AMD Ryzen 3 2200G processor (Zhongheng Dayou Automobile Technology Co., Ltd., Beijing, China) with 4 GB of memory. The ROS intelligent vehicle platform and the system framework diagram are shown in Figure 6.

4.2. Map Building Experiment

The building diagram is shown in Figure 7. The basic experimental environment was built in the laboratory by adding cartons as obstacles, as shown in Figure 7a. The height of obstacle A was lower than the installation height of the vehicle-mounted LiDAR. In practical applications, the information obtained by a two-dimensional LiDAR is limited by its installation position, and a depth camera is affected by ambient light; obstacles may therefore be missed and environmental information lost, resulting in a low-accuracy environment map. A map construction experiment was therefore carried out in a simulated environment to verify the reliability of the method proposed in this paper. Figure 7b shows the map built using only LiDAR: because of the installation position of the LiDAR, obstacle A could not be detected, the scanned ground information was incomplete, and the mapping accuracy was low. Figure 7c shows the map built using only the depth camera. A depth camera can obtain more environmental information than the LiDAR, but its detection accuracy is easily affected by ambient light, resulting in poor target detection accuracy and a blurred map.
Using the method in this paper, the detection data of the depth camera and the LiDAR were fused to construct a map, shown in Figure 7d. The comparison shows that after fusing the data of the two sensors, the limitations of single-sensor environment detection were effectively compensated for. The interference of the complex environment is overcome through the complementarity between the sensors, so the constructed map information is more complete and accurate, and the environment map is closer to the real environment.

4.3. Path Planning Experiment

4.3.1. Simulation Experiment Verification and Analysis

In order to verify the effectiveness and robustness of the fusion algorithm in this paper, the proposed algorithm was simulated and verified in the MATLAB environment. The improved ant colony algorithm in this paper was compared with the basic ant colony algorithm, RRT algorithm, and the algorithm in reference [24]. The simulation results are shown in Figure 8.
As can be seen from Figure 8, all four algorithms were able to find drivable paths during the search. The path planned by the RRT algorithm is shown in Figure 8c, with a path length of 34.9690 m. Compared with the traditional RRT algorithm, the ant colony algorithm explored shorter and straighter paths more efficiently, as shown in Figure 8a,c. However, the path planned by the ant colony algorithm forms peaks at the turning points, which increases the turning angle of the intelligent robot to a certain extent and affects its passing efficiency. By comparing a polynomial fitting curve with the cubic B-spline curve for path optimization, we found that the polynomial method only deals with the peaks of the path, whereas the cubic B-spline curve flattened the overall path, smoothed the turning angles, and accelerated the convergence iteration, as shown in Figure 9. Figure 8d shows the iterative convergence trend of the three ant colony algorithms. The ant colony algorithm of [24] converged quickly, obtaining an optimal path length of 29.1848 m at the 13th iteration; the improved ant colony algorithm proposed in this paper obtained a shortest path of 28.0416 m at the 11th iteration, with less fluctuation in the initial stage and a higher convergence speed. Therefore, the improved ant colony algorithm in this paper has better iterative convergence and allows smoother, safer passage of the intelligent robot.
The starting point S, the target point T, and unknown obstacles were set in the raster map, and the fusion algorithm in this paper was used to complete simulation experiments with the two unknown obstacles in different positions. The experimental results are shown in Figure 10.
As can be seen from Figure 10, the improved ant colony algorithm first plans the global optimal path, while the dynamic window algorithm detects obstacle positions with the on-board sensors and sets local target points on the global path. If an unknown obstacle is detected, the dynamic window method sets local target points using the evaluation function that incorporates the global path information and tracks them until the final target point is reached. When avoiding local obstacles, the dynamic window method ensures that the robot does not collide with them and that the real-time tracking direction of the local target points stays consistent with the direction of the target. During operation, the control parameters are output in real time according to the number of control nodes, which facilitates automatic feedback control of the intelligent robot. On encountering an unknown obstacle, the robot slows down for local obstacle avoidance while outputting control parameters in real time to change its pose angle; once local obstacle avoidance is complete, the robot adjusts its running speed according to the real-time output control parameters and gradually returns to a stable speed. The control-parameter feedback of the intelligent robot is shown in Figure 11. Therefore, the local path algorithm that integrates global path planning can set local target points according to the globally optimal path and track them using the control parameters fed back in real time during planning. This shortens the planning time and improves the efficiency of path planning while always maintaining global optimality, achieving the purpose of the fusion algorithm designed in this paper.

4.3.2. Real Vehicle Test Verification and Analysis

In the real vehicle test, the basic indoor environment was simulated by adding a cardboard box as an obstacle. The ROS smart car with sensors was used to build an environment map, and the real-time operation status of the intelligent robot was observed in the Rviz visualization window. According to the constructed real-time environment information, the path planning algorithm proposed in this paper was verified and analyzed, as shown in Figure 12.
As shown in Figure 13a, both the traditional ant colony algorithm and the improved ant colony algorithm were able to avoid obstacles and reach the target position safely and stably in the grid map built from the real closed environment. However, with the improved heuristic function and the hybrid pheromone strategy, the improved algorithm produced a shorter global path and allowed the intelligent robot to pass more smoothly and safely. The trajectory and error parameters of the intelligent robot are shown in Figure 14. When the robot travelled in the range of 1.6 to 2 m, the error between the actual path and the planned path was at most about 4 cm due to the influence of low obstacles, indicating that the improved ant colony algorithm is highly accurate. To verify the robustness and accuracy of the proposed algorithm, the global path planning of the intelligent robot was first completed from the driving environment detected by its onboard sensors, as shown in Figure 13a. Then, positional obstacles were placed on the global path, as shown in Figure 12b. In their presence, the robot adopted the dynamic window method to complete local obstacle avoidance while driving and always travelled along the target direction of the global optimal path, which improved the efficiency of path planning and the driving stability of the robot, as shown in Figure 13b.

5. Conclusions

  • When a single sensor is used for map construction in an unknown environment, problems such as large mapping errors and poor robustness arise. The multi-sensor information fusion of LiDAR, depth camera, and IMU obtains more comprehensive environmental information, thereby improving the reliability and accuracy of map construction and effectively improving the robustness of the mobile robot's SLAM mapping.
  • To address the long search time of the traditional ant colony algorithm and its tendency to stagnate in local optima, the heuristic function and pheromone updating strategy were improved, and the turning angle of the path was smoothed by a cubic B-spline curve, which gives good stability and improves the efficiency of path planning.
  • The map construction and path planning method based on multi-sensor fusion was verified and analyzed. The accuracy and robustness of map construction were improved by fusing multi-sensor information. The improved path planning algorithm realizes local real-time obstacle avoidance while maintaining global optimality; the efficiency of path planning is improved, and automatic feedback control of the intelligent vehicle is realized with good robustness and accuracy.

Author Contributions

Methodology and writing—original draft preparation, A.L.; software, J.C.; formal analysis and investigation, Z.H.; data curation, G.L.; resources, J.W.; funding acquisition, A.L., S.L. and Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This project is supported by the National Natural Science Foundation of China (grant no. 51505258, 51975276, and 61601265); the Shandong Provincial Natural Science Foundation, China (grant no. ZR2015EL019, ZR2020ME126, and ZR2021MF131); the Shandong Province Higher Educational Youth Innovation Science and Technology Program (grant no. 2019KJB019 and 2020KJN002); the Open project of State Key Laboratory of Mechanical Behavior and System Safety of Traffic Engineering Structures, China (grant no. 1903); the Open project of Hebei Traffic Safety and Control Key Laboratory, China (grant no. JTKY2019002), and the China Postdoctoral Science Foundation (grant no. 2021M701405).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank all the authors for their contributions to the writing of this article. Special thanks go to Zhenghong Chen for her efforts.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wen, S.; Wen, Z.; Liu, X. Autonomous mapping and path planning of mobile robot based on ROS. J. Shenyang Univ. Technol. 2022, 46, 90–94.
  2. Zheng, G.; Zhang, L.; Zhuang, H. Multi-sensors Fusion Based Autonomous Exploration and Mapping of a Mobile Robot in Unknown Indoor Environment. Control Eng. China 2020, 27, 1743–1750.
  3. Cao, J.P.; Li, A.J.; Huang, Z.; Wang, J.B. Path Planning for Intelligent Vehicle Based on Improved Ant Colony Algorithm. In Proceedings of the 2021 Global Reliability and Prognostics and Health Management (PHM-Nanjing), Nanjing, China, 15–17 October 2021.
  4. Yang, Y.; Han, B.; Luo, X. Research on Simultaneous Localization and Mapping of Indoor Navigation Map for Mobile Robot. Sci. Technol. Eng. 2021, 21, 7597–7603.
  5. Chong, Y.; Ji, J.; Gong, M.; Chen, Q. A Stereo Visual Odometry Aided by IMU Based on Semi-direct Method. J. Southwest China Norm. Univ. (Nat. Sci. Ed.) 2021, 46, 112–120.
  6. Ye, H.Y.; Chen, Y.Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150.
  7. Huang, K.H.; Stachniss, C. Joint Ego-motion Estimation Using a Laser Scanner and a Monocular Camera Through Relative Orientation Estimation and 1-DoF ICP. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 671–677.
  8. Chen, M.X.; Yang, S.W.; Yi, X.D.; Wu, D. Real-time 3D mapping using a 2D laser scanner and IMU-aided visual SLAM. In Proceedings of the 2017 IEEE International Conference on Real-time Computing and Robotics (RCAR), Okinawa, Japan, 14–18 July 2017; pp. 297–302.
  9. Ahmed, A.; Ahmed, S.; Ahmed, O. Dijkstra algorithm applied: Design and implementation of a framework to find nearest hotels and booking systems in Iraqi. In Proceedings of the 2017 International Conference on Current Research in Computer Science and Information Technology (ICCIT), Sulaymaniyah, Iraq, 26–27 April 2017; pp. 126–132.
  10. Niu, H.; Li, A.; Huang, X.; Li, W.; Xu, C. Research on Global Dynamic Path Planning Method Based on Improved A* Algorithm. Math. Probl. Eng. 2021, 2021, 4977041.
  11. Firmansyah, E.; Masruroh, S.; Fahrianto, F. Comparative Analysis of A* and Basic Theta* Algorithm in Android-Based Pathfinding Games. In Proceedings of the 2016 6th International Conference on Information and Communication Technology for The Muslim World (ICT4M), Jakarta, Indonesia, 22–24 November 2016; pp. 275–280.
  12. Xiong, C.; Chen, D.; Lu, D. Path planning of multiple autonomous marine vehicles for adaptive sampling using Voronoi-based ant colony optimization. Robot. Auton. Syst. 2019, 115, 90–103.
  13. Eduardo, J.; Ángel, L.; Manuel, O. Dynamic window based approaches for avoiding obstacles in moving. Robot. Auton. Syst. 2019, 118, 112–130.
  14. Wang, H.; Yin, P.; Zheng, W. Mobile robot path planning based on improved A* algorithm and dynamic window method. Robot 2020, 42, 346–353.
  15. Cheng, C.; Hao, X.; Li, J.; Zhang, Z.J.; Sun, G.P. Global dynamic path planning based on fusion of improved A* algorithm and dynamic window approach. J. Xi'an Jiaotong Univ. 2017, 51, 137–143.
  16. Patrick, G.; James, M.; Huang, G.Q. An efficient Schmidt-EKF for 3D visual-inertial SLAM. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12105–12115.
  17. Zhang, M.; Xu, X.Y.; Chen, Y.M.; Li, M.Y. A Lightweight and Accurate Localization Algorithm Using Multiple Inertial Measurement Units. IEEE Robot. Autom. Lett. 2020, 5, 1508–1515.
  18. Hartley, R.; Ghaffari, M.; Eustice, R.; Grizzle, J. Contact-aided invariant extended Kalman filtering for robot state estimation. Int. J. Robot. Res. 2020, 39, 402–430.
  19. Liu, W.; Wu, S.; Wen, Y.; Wu, X. Integrated Autonomous Relative Navigation Method Based on Vision and IMU Data Fusion. IEEE Access 2020, 8, 51114–51128.
  20. Cheng, L.; Huang, Y. A Novel Adaptive EKF-SLAM Algorithm Via Cholesky Decomposition. J. Anhui Polytech. Univ. 2020, 35, 59–64.
  21. Li, Y.; Tang, D.; Dai, Q. Map-building of mobile robot in unknown environment based on multi sensor information fusion. J. Shaanxi Univ. Sci. Technol. 2021, 39, 151–159.
  22. Cao, F.; Zhuang, Y.; Yan, F.; Yang, Q.F.; Wang, W. Long-term Autonomous Environment Adaptation of Mobile Robots: State-of-the-art Methods and Prospects. Acta Autom. Sin. 2020, 46, 205–221.
  23. Liu, Y.; Zeng, G.; Huang, B. Research on robot path planning based on improved ant colony algorithm. Electron. Sci. Technol. 2020, 33, 13–18.
  24. You, H.; Lu, Z. A fast ant colony algorithm for α, β and ρ adaptive adjustment. Manuf. Autom. 2018, 40, 99–102, 112.
Figure 1. Overall framework of the SLAM algorithm.
Figure 2. Schematic diagram of depth-to-laser conversion.
Figure 3. SLAM algorithm framework based on graph optimization.
Figure 4. The smoothing process of the third-order B-spline curve.
Figure 5. Flow chart of the fusion algorithm.
Figure 6. (a) ROS intelligent vehicle platform; (b) system framework diagram.
Figure 7. Build diagram: (a) experimental environment; (b) LiDAR builds maps; (c) depth camera builds maps; (d) map after data fusion.
Figure 8. Comparison of path algorithms: (a) the optimal path of the basic ant colony algorithm; (b) the optimal path of the ant colony algorithm in the literature [24]; (c) the optimal path of the RRT algorithm; (d) convergence trend of three ant colony algorithms.
Figure 9. Optimal path optimization comparison of the improved ant colony algorithm.
Figure 10. Obstacle avoidance experiment: (a) path planning results of unknown obstacle environment I; (b) path planning results of unknown obstacle environment II.
Figure 11. Intelligent robot control parameters: (a) linear speed parameters of the intelligent robot based on path planning feedback; (b) angular velocity parameters of the intelligent vehicle fed back by path planning.
Figure 12. Labyrinth-type environment real scene: (a) simulation of the basic indoor environment; (b) adding the location obstacle environment map.
Figure 13. Real-time environment drawing experiment: (a) comparison between the basic ant colony algorithm and the ant colony algorithm in this paper; (b) path planning result based on the fusion algorithm in this paper.
Figure 14. Improved ant colony algorithm path analysis: (a) path analysis; (b) path trajectory error of the improved ant colony algorithm.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
