Article

Enhanced Path Planning and Obstacle Avoidance Based on High-Precision Mapping and Positioning

1 School of Materials Science and Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
2 Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
3 Shanghai Collaborative Innovation Center of Laser Advanced Manufacturing Technology, Shanghai University of Engineering Science, Shanghai 201620, China
4 School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 201900, China
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(10), 3100; https://doi.org/10.3390/s24103100
Submission received: 26 March 2024 / Revised: 12 May 2024 / Accepted: 12 May 2024 / Published: 13 May 2024
(This article belongs to the Section Intelligent Sensors)

Abstract
High-precision positioning and multi-target detection are key technologies for robotic path planning and obstacle avoidance. First, the Cartographer algorithm was used to generate high-quality maps. Then, the Iterative Closest Point (ICP) and occupancy probability algorithms were combined to scan and match the local point cloud and obtain the position and attitude of the robot. Sparse-matrix pose optimization was then carried out to improve the positioning accuracy. The positioning accuracy of the robot in the x and y directions was kept within 5 cm, the angle error was controlled within 2°, and the positioning time was reduced by 40%. An improved timed elastic band (TEB) algorithm was proposed to guide the robot to move safely and smoothly. A critical coefficient was introduced to adjust the distance between the waypoints and the obstacles, generating a safer trajectory, and constraints on acceleration and terminal velocity were added so that the robot navigates smoothly to the target point. The experimental results showed that, when multiple obstacles were present, the robot chose the path with fewer obstacles, and it moved smoothly through turns and when approaching the target point by reducing overshoot. The proposed mapping, positioning, and improved TEB algorithms were effective for high-precision positioning and efficient multi-target detection.

1. Introduction

With the development of artificial intelligence technology, autonomous inspection robots have become an integral part of various industries, including manufacturing, consumer services, and healthcare [1]. These robots perform tasks in complex, hazardous, or hard-to-reach environments, reducing labor costs, improving efficiency, and mitigating risks associated with the work. Autonomous inspection robots are significant in modern industry across various sectors. To achieve customized inspection functionalities, they should possess the basic capability to identify and navigate around obstacles [2]. Additionally, high-precision localization and path planning abilities are necessary for accurate task completion and efficient operations.
Researchers have been working to improve mapping and localization accuracy and to optimize path planning [3]. Simultaneous Localization and Mapping (SLAM) provides a solution for robot localization by estimating the robot’s position in unknown environments while mapping the surroundings. Visual SLAM (vSLAM) methods, such as MonoSLAM [4], PTAM [5], ORB-SLAM [6], ORB-SLAM2 [7], and OpenVSLAM [8], have been extensively studied for cost-effective indoor localization. In reference [9], VGG16-based image descriptors are used for robust image retrieval: the Top-N most similar images are extracted, and candidate images are reordered using region descriptors based on Faster R-CNN. Laser-based SLAM algorithms, such as Gmapping [10] and Hector SLAM [11], offer higher accuracy and faster computation speeds.
However, both of these Bayesian-based methods lack loop-closure detection and optimization, and they require time-consuming pre-mapping of the entire environment to construct a high-resolution geographic feature database for precise localization. Achieving continuous tracking of a robot in dynamic indoor and outdoor environments by detecting key frames is challenging, and frequently rebuilding visual feature databases to address this problem is highly inefficient. Instead, graph-based mapping software [12] can create maps accurately and robustly. Laser-based SLAM methods can also be used for global localization of robots. Matching algorithms [13,14] match query scans against a key point database generated from grid-based maps, retrieving the relevant position information to achieve high-precision robot localization. Other probabilistic localization algorithms [15,16,17,18] update the probability of the robot pose through motion and observation models, but discrepancies between the constructed map and the physical environment may result in inaccuracies.
The planning of inspection robot paths involves two stages: global path planning and local path planning. This paper focuses on local path planning [19]. In real-world scenarios where humans and robots coexist, the robot’s perception of dynamic obstacles in its surroundings is weak, even when it has basic global map information. Therefore, selecting and optimizing the local path planner is crucial for achieving robot motion in complex and dynamic inspection spaces. Common local path planners include artificial potential fields [20], reactive replanning methods [21,22], and fuzzy algorithm-based methods [23]. Artificial potential fields are relatively simple and exhibit good real-time planning performance; however, traditional artificial potential field methods tend to fall into local optima and encounter reachability issues when obstacles surround the target point. This paper adopts an improved TEB algorithm for local path planning. Roesmann et al. proposed the TEB algorithm [24] as an extension of the classical elastic band (EB) algorithm. This algorithm optimizes multiple objectives to provide obstacle avoidance through trajectory optimization. Unlike other local path planning methods, the TEB algorithm can impose multiple constraint conditions as needed, which broadens its applicability. However, mobile robots running the TEB algorithm may get stuck in local minima while navigating complex environments, making it difficult to traverse obstacles. To address this issue, Rösmann et al. extended the TEB technique by planning parallel trajectories within spatially distinctive topologies [25,26]. However, these methods only consider the positions of obstacles and do not account for potential collisions between the robot and surrounding obstacles. Nguyen et al. proposed a Proactive Timed Elastic Band (PTEB) technique for autonomous mobile robot navigation in dynamic environments [27]. Previous efforts to improve the performance of the TEB algorithm in complex environments have mainly focused on obstacle avoidance. Additionally, most related research aims to avoid local minima and achieve smooth planned paths for AGVs in complex environments, lacking a constraint on the shortest local path and resulting in non-optimal local path planning [28].
To achieve high-precision positioning and efficient multi-target detection for inspection robots, we employed the Cartographer algorithm to construct high-quality maps and enhanced the scan matching module for precise localization. Leveraging these maps and localization modules, we refined the Timed Elastic Band (TEB) algorithm by incorporating a critical coefficient to adjust the distance between the robot and obstacles, enhancing the robot’s operational safety. Additionally, constraints on acceleration and terminal velocity were introduced to ensure smooth velocity transitions during motion and as the robot approaches targets.
The primary contributions of this study are summarized as follows:
(1)
We improved the scan matching module by integrating the Iterative Closest Point (ICP) algorithm with an occupancy probability method and constructing Euclidean distance subgraphs to determine the robot’s pose. The optimal pose was then refined using sparse-matrix pose optimization.
(2)
Building on precise localization, we enhanced the traditional TEB algorithm by introducing a critical coefficient and adding constraints on acceleration and terminal velocity to improve the safety of the robot and the smoothness of its movements.

2. Materials and Methods

2.1. Cartographer High-Precision Mapping Process

The Cartographer algorithm improves on particle-filter-based mapping algorithms by reducing memory consumption in large-scale environments. It adopts a graph-optimization-based backend data processing method to construct two-dimensional or three-dimensional environment maps. Instead of constructing the global map directly from sensor data, it first generates submaps and then assembles them indirectly into the global map.
After generating all the submaps, the Cartographer algorithm applies backend pose graph optimization to correct errors. Therefore, the environment maps established using the Cartographer algorithm have a lower level of error. The mapping algorithm framework can be divided into three parts: receiving sensor data, local SLAM, and global SLAM. The algorithm’s mapping framework is illustrated in Figure 1.
During the mapping process, laser scan observations, odometer data, and IMU data are fused. In the local SLAM process, newly acquired laser scan data are matched with nearby submaps to find the optimal insertion pose, and the scan data are inserted into the submap. However, errors introduced during scan matching accumulate over time and as the map grows. In the global SLAM process, pose graph optimization adjusts the robot’s poses to eliminate these errors, thus constructing a globally consistent map.
Submaps are built during the local SLAM process. To enhance map accuracy, the Cartographer algorithm performs pose optimization on the data before inserting laser scan point cloud data into the submaps. This is achieved through Ceres nonlinear optimization, refining the robot’s pose through scan matching to increase the likelihood of point cloud matching within the submaps. Once the submaps are constructed, a global map can be derived from them.
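To make the submap-building step more concrete, the sketch below shows a log-odds-style occupancy update in Python: each laser endpoint raises the occupancy probability of its cell, while the cells along the beam are lowered. The hit/miss probabilities, the grid resolution, and the coarse straight-line ray sampling are illustrative assumptions only; Cartographer's actual implementation (in C++, with Ceres-based scan matching) differs in detail.

```python
import numpy as np

# Illustrative hit/miss update probabilities (assumed values, not Cartographer's defaults).
P_HIT, P_MISS = 0.55, 0.49

def odds(p):
    return p / (1.0 - p)

def prob_from_odds(o):
    return o / (1.0 + o)

def update_cell(p_old, p_observation):
    """Multiply the cell's odds by the observation odds and convert back to a probability."""
    return prob_from_odds(odds(p_old) * odds(p_observation))

def insert_scan(grid, robot_xy, hit_points, resolution=0.05):
    """Insert one scan into an occupancy-probability submap.

    grid:       2D array of occupancy probabilities (0.5 = unknown), origin assumed at (0, 0)
    robot_xy:   robot position in metres
    hit_points: (N, 2) array of laser endpoints in metres
    """
    for hx, hy in hit_points:
        # Mark the endpoint cell as a hit.
        i, j = int(hx / resolution), int(hy / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = update_cell(grid[i, j], P_HIT)
        # Mark cells along the beam as misses (coarse straight-line sampling).
        n_steps = int(np.hypot(hx - robot_xy[0], hy - robot_xy[1]) / resolution)
        for s in range(n_steps):
            t = s / max(n_steps, 1)
            fi = int((robot_xy[0] + t * (hx - robot_xy[0])) / resolution)
            fj = int((robot_xy[1] + t * (hy - robot_xy[1])) / resolution)
            if 0 <= fi < grid.shape[0] and 0 <= fj < grid.shape[1] and (fi, fj) != (i, j):
                grid[fi, fj] = update_cell(grid[fi, fj], P_MISS)
    return grid
```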

2.2. Robot Localization Based on Scan Matching

2.2.1. ICP Algorithm

Scan matching is employed to align the current laser scan data with previous map data to estimate the robot’s pose. The algorithm iteratively reduces the error between scan data and map data. The robot’s pose is updated based on the optimized results until the optimal robot pose is obtained.
In the ICP-based optimization pipeline, a voxel filter from the Point Cloud Library (PCL) is first applied to reduce noise in the point cloud [29]. The ICP algorithm then matches the scan to the submap for point cloud registration, and a convergence check determines whether the best matching pose has been reached. If convergence is achieved, the robot’s position is updated and the process is repeated; otherwise, further iteration or data optimization may be necessary before repeating the process. This iterative process determines the optimal pose, which then undergoes sparse-matrix pose optimization to finalize the robot’s position information. The robot’s localization framework is illustrated in Figure 2.
In laser SLAM for mobile robot localization, it is often unreliable to depend on wheel encoders for odometry or on inertial navigation systems for position data. An ICP-based approach is an effective alternative. This method uses laser ranging to reconstruct the robot’s motion from pairs of adjacent laser point clouds. ICP employs a point cloud matching model for inter-frame alignment, uses the nearest-neighbor principle to locate candidate matching points, and constructs a cost function from these matches, as shown in Equation (1). By searching for the best matching transformation δ, the position information of the robot can be obtained.
e = \arg\min_{\delta} \sum_{i=1}^{N} \left\| \delta p_i - q_j \right\|^{2} \qquad (1)
where N represents the number of points in the current scan, δ is the rigid transformation being sought, p_i denotes a point of the current frame, and q_j denotes its corresponding point in the reference frame point cloud. The undulating lines depicted in Figure 3 serve as visualizations of obstacle surfaces. Within this context, P_i signifies a point awaiting matching, while x_i denotes a point extracted from the point cloud of the preceding frame. The objective is to match each point x_i with its closest neighbor P_i.
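The following minimal Python sketch illustrates the ICP loop described above: nearest-neighbor correspondences are found with a k-d tree, a closed-form rigid transform is estimated, and the process repeats until the mean squared error stops improving. The 2D point clouds, tolerances, and iteration limits are assumptions for illustration, not the parameters used in this work.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form 2D rigid transform (R, t) minimising sum ||R*src_i + t - dst_i||^2 (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(scan, ref, max_iter=30, tol=1e-6):
    """Align `scan` (N, 2) to the reference point cloud `ref` (M, 2) by iterating
    nearest-neighbour matching and closed-form transform estimation (cf. Equation (1))."""
    tree = cKDTree(ref)
    src = scan.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)             # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, ref[idx])
        src = src @ R.T + t                     # apply the incremental transform
        err = float(np.mean(dist ** 2))
        if abs(prev_err - err) < tol:           # convergence check
            break
        prev_err = err
    return src, err
```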

2.2.2. Improved ICP Matching Model

However, this approach is time-consuming, and the robot’s actual position may have changed by the time the iteration completes. Moreover, the substantial computational overhead leads to significant cumulative matching errors, which reduce localization precision and make real-time localization challenging.
To solve the above problems, this paper combines the ICP algorithm with the occupancy probability method and constructs Euclidean distance subgraphs to improve the robot’s localization accuracy. The ICP algorithm iteratively aligns the current point cloud with the known map information to continuously reduce the matching error. The optimization model is shown in Equation (2):
\min_{R,t} \sum_{i=1}^{n} \left\| p_i - (R q_i + t) \right\|^{2} \qquad (2)
where p_i denotes the points of the reference point cloud (from the known map), q_i denotes the points acquired by the current sensor scan, R is the rotation matrix, and t is the translation vector.
The occupancy probability approach is modeled as in Equation (3), where o denotes the occupancy state (occupied/unoccupied), z denotes the sensor measurements, and x denotes the robot’s pose. The occupancy probability P_0, conditioned on the sensor measurements and the robot’s pose, adjusts the point cloud according to the possible obstacles.
P_0 = P(o \mid z, x) \qquad (3)
The Euclidean distance subgraph is created based on the above principle as in Equation (4), where p_i is a point of the acquired point cloud, e_ij represents the distance between p_i and its nearest obstacle, and o_j denotes the position of the obstacle nearest to p_i.
e_{ij} = \left\| p_i - o_j \right\| \qquad (4)
To optimize the position of the point cloud with respect to obstacles, the distance from the point cloud to the nearest obstacle is minimized, effectively updating the robot’s understanding of its surroundings. Combining Equations (2) and (4) yields the optimization model shown in Equation (5):
J = \min_{R,t} \sum_{i} \min_{j} \left\| p_i - (R o_j + t) \right\|^{2} \qquad (5)
where J is the objective function. The updated point cloud is used to refine the estimate of the robot’s position in its environment, improving the localization accuracy. The new pose is obtained by minimizing the difference between the predicted measurements at the estimated pose and the actual sensor measurements, as shown in Equation (6):
x^{*} = \arg\min_{x} \left\| F(x) - z \right\|^{2} \qquad (6)
where F(x) denotes the theoretical measurement values at pose x and z denotes the actual measurement values.
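As a rough illustration of Equations (3)–(5), the sketch below precomputes a Euclidean distance field over a binary occupancy grid, so that the nearest obstacle of every cell and its distance are available by lookup, and then evaluates the objective J for a candidate transform (R, t). The grid resolution, the toy wall, and the clipping of out-of-map points are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

RES = 0.05  # map resolution in metres (assumed)

def euclidean_subgraph(occ_grid):
    """Distance field of a binary map (True = occupied): for every free cell, the distance to
    its nearest obstacle (Equation (4)) and the indices of that obstacle (o_j)."""
    dist, nearest = distance_transform_edt(~occ_grid, return_indices=True)
    return dist * RES, nearest

def objective(points, R, t, dist_field, grid_shape):
    """Evaluate J (Equation (5)): sum of squared distances from transformed scan points
    to their nearest obstacles, looked up in the precomputed field."""
    q = points @ R.T + t                                   # transform scan into the map frame
    i = np.clip((q[:, 0] / RES).astype(int), 0, grid_shape[0] - 1)
    j = np.clip((q[:, 1] / RES).astype(int), 0, grid_shape[1] - 1)
    return float(np.sum(dist_field[i, j] ** 2))

# Toy map: a straight wall at x = 1.5 m; scan points that should lie close to it.
occ = np.zeros((60, 80), dtype=bool)
occ[30, :] = True
dist_field, _ = euclidean_subgraph(occ)
scan = np.array([[1.45, 1.00], [1.48, 1.20]])
R0 = np.eye(2)                                             # identity candidate transform
print(objective(scan, R0, np.zeros(2), dist_field, occ.shape))
```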
To obtain a high-precision globally optimal solution, the collected pose information is optimized nonlinearly. The pose graph consists of the pose nodes generated during the robot’s movement and the nonlinear constraints between them, which arise from common observations around the pose nodes. To achieve high-precision positioning of the mobile robot in practice, the Levenberg-Marquardt (LM) method is used as the framework for handling the constraints between different poses, and the robot poses are processed with sparse matrices. This is similar to the sparse bundle-adjustment problem between the camera and the external environment that is handled with LM in visual SLAM. Finally, the linear system is solved by direct sparse Cholesky decomposition to handle the 2D pose-graph optimization problem, completing the pose optimization and determining the location information.
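The toy example below sketches the idea of sparse 2D pose-graph optimization: each edge residual compares a predicted relative pose with a measured one, and the Jacobian's sparsity pattern (each residual touches only its two poses) is passed to the solver. It uses SciPy's least_squares as a stand-in for the LM plus sparse Cholesky pipeline described above; the graph, measurements, and anchoring of the first pose are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

def relative_pose(xa, xb):
    """Relative pose (x, y, theta) of b expressed in the frame of a."""
    dx, dy, dth = xb - xa
    c, s = np.cos(xa[2]), np.sin(xa[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy,
                     np.arctan2(np.sin(dth), np.cos(dth))])

def residuals(flat_poses, edges, measurements):
    poses = flat_poses.reshape(-1, 3)
    res = [relative_pose(poses[a], poses[b]) - z for (a, b), z in zip(edges, measurements)]
    res.append(poses[0])                 # anchor the first pose at the origin (fixes gauge freedom)
    return np.concatenate(res)

# Toy graph: three poses in a line plus a loop-closure edge, with noisy initial guesses.
edges = [(0, 1), (1, 2), (0, 2)]
measurements = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
x0 = np.array([[0, 0, 0], [1.1, 0.05, 0.02], [2.2, -0.1, -0.03]], dtype=float).ravel()

# Sparsity pattern of the Jacobian: each edge residual depends only on its two poses.
n_res, n_var = 3 * (len(edges) + 1), x0.size
pattern = lil_matrix((n_res, n_var), dtype=int)
for k, (a, b) in enumerate(edges):
    pattern[3 * k:3 * k + 3, 3 * a:3 * a + 3] = 1
    pattern[3 * k:3 * k + 3, 3 * b:3 * b + 3] = 1
pattern[3 * len(edges):, 0:3] = 1        # anchor residual touches only the first pose

sol = least_squares(residuals, x0, args=(edges, measurements), jac_sparsity=pattern)
print(sol.x.reshape(-1, 3))              # optimized poses
```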

2.3. Localized Path Planning

2.3.1. TEB Algorithm

The basic principle of the TEB algorithm is to define the path as an elastic band that connects the starting point and the target detection point, and constraints such as obstacles on the path will deform this elastic band. The expression of the TEB algorithm is shown in Equation (7):
B = (Q, \tau) = \left\{ s_1, \Delta T_1, s_2, \Delta T_2, \ldots, s_{n-1}, \Delta T_{n-1}, s_n \right\} \qquad (7)
where s_i denotes the i-th robot pose on the trajectory and ΔT_i denotes the time interval between two consecutive poses.
The TEB algorithm refines the initial trajectory generated by the global path planner. During this correction process, the algorithm formulates trajectory optimization as a multi-objective optimization problem, and these objectives are considered jointly to find the optimal trajectory. Based on the different optimization objectives, m optimization functions f_k(B) are established, and the weight of each optimization function is denoted η_k, expressed as follows:
f(B) = \sum_{k=1}^{m} \eta_k f_k(B) \qquad (8)
With fixed initial and target poses, the optimal trajectory set B * is obtained by solving the aforementioned optimization functions.
B^{*} = \arg\min_{B} f(B) \qquad (9)
In the TEB algorithm, path-following constraints fuse the locally optimized TEB trajectory with the global path. The closest distance between a pose point and the global path points is defined as d_min,j, and the maximum allowable distance is denoted r_Pmax. The penalty function is formulated as follows:
f_{path} = e_{\tau}\left(d_{\min,j},\, r_{P\max},\, \varepsilon,\, S,\, n\right) \qquad (10)
The obstacle avoidance constraint is applied to ensure that the optimized trajectory avoids obstacles. The minimum allowable distance between a pose point and an obstacle is set to r_Omin. The penalty function is formulated as follows:
f_{ob} = e_{\tau}\left(d_{\min,j},\, r_{O\min},\, \varepsilon,\, S,\, n\right) \qquad (11)
In the above two equations, S represents the deformation factor, where the polynomial coefficient n is typically set to 2. The offset factor ε represents a small displacement near the boundary, providing a margin for inequality constraints. The gradient of the penalty function can be interpreted as an external force acting on the elastic band.
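A minimal sketch of the one-sided polynomial penalty e_τ described above is given below, assuming the common TEB-style form: the penalty is zero while the constraint is satisfied (with margin ε) and grows as (violation/S)^n once it is violated. The function names, parameter defaults, and numerical values are illustrative, and the exact expression in the TEB implementation may differ.

```python
def penalty_below(x, bound, epsilon=0.05, scale=1.0, n=2):
    """One-sided polynomial penalty: zero while x stays above (bound + epsilon),
    growing polynomially once x drops below it (used for minimum-distance constraints)."""
    violation = (bound + epsilon) - x
    return (violation / scale) ** n if violation > 0 else 0.0

def penalty_above(x, bound, epsilon=0.05, scale=1.0, n=2):
    """Mirror version for upper bounds such as maximum velocity or acceleration."""
    violation = x - (bound - epsilon)
    return (violation / scale) ** n if violation > 0 else 0.0

# Example: obstacle-distance term f_ob for one pose point (cf. Equation (11)).
f_ob = penalty_below(x=0.32, bound=0.35)   # distance 0.32 m, minimum allowed 0.35 m -> positive penalty
print(f_ob)
```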
The dynamics constraints of a mobile robot involve its average linear and angular velocities, and penalty functions can be used to represent the constraints on both. In the penalty function, velocities exceeding the limits are penalized, as shown in Equation (12).
f_{v_i} = e_{\tau}\left(v_i, v_{\max}, \varepsilon, S, n\right), \qquad f_{w_i} = e_{\tau}\left(w_i, w_{\max}, \varepsilon, S, n\right) \qquad (12)
Similarly, the penalty functions for linear acceleration and angular acceleration are as follows:
f_{a_i} = f_{a_i}\left(S_i, S_{i+1}, S_{i+2}, \Delta T_i, \Delta T_{i+1}\right), \qquad f_{\alpha_i} = f_{\alpha_i}\left(S_i, S_{i+1}, S_{i+2}, \Delta T_i, \Delta T_{i+1}\right) \qquad (13)
Since the focus of this paper is on improving the constraints of the TEB algorithm, the remaining details of the standard formulation are not described further.

2.3.2. Improved TEB Algorithm

In this improvement, a critical coefficient is introduced to adjust the distance between path points and obstacles during path planning, as shown in Figure 4, enhancing the safety and efficiency of path planning. By controlling the minimum safe distance between path points and obstacles, the algorithm can be tuned to specific applications and requirements so that the generated path approaches the target point as closely as possible without colliding with obstacles. This can be represented mathematically as:
S = \left\| P_{path} - O_{ob} \right\| - C_{ob} \cdot K \qquad (14)
where P_path represents the position of the path point, O_ob represents the position of the geometric center of the obstacle, ‖P_path − O_ob‖ denotes the Euclidean distance between the path point and the geometric center of the obstacle, C_ob represents the radius of the obstacle, and K is the critical coefficient. S represents the distance between the path and the obstacle; when S is greater than half of the robot width W, the robot can pass through the obstacle region, from which a reasonable range of K can be determined.
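The check below sketches how the critical coefficient might be applied in practice, following the reconstruction of Equation (14) used here: the clearance S between a path point and the obstacle surface (with the radius scaled by K) is compared against half the robot width W. The function names and all numerical values are illustrative assumptions.

```python
import numpy as np

def clearance(p_path, o_ob, c_ob, k):
    """S = ||P_path - O_ob|| - C_ob * K (Equation (14), as reconstructed here):
    distance from a path point to the obstacle surface, with the radius scaled by K."""
    return np.linalg.norm(np.asarray(p_path) - np.asarray(o_ob)) - c_ob * k

def can_pass(p_path, o_ob, c_ob, k, robot_width):
    """The waypoint is considered safe when the clearance exceeds half the robot width W."""
    return clearance(p_path, o_ob, c_ob, k) > robot_width / 2.0

# Example with illustrative numbers: 0.5 m-radius obstacle, K = 1.2, robot width 0.6 m.
print(can_pass([1.0, 2.0], [1.0, 3.0], c_ob=0.5, k=1.2, robot_width=0.6))  # clearance 0.4 m > 0.3 m
```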
To improve the motion accuracy of the robot, acceleration and deceleration control is applied in the start-stop and cornering segments of the path. Such control is important for path planning: without it, sudden changes in acceleration or deceleration produce impacts, and the robot may be unable to stop in time when it reaches the target point, overshooting it instead.
The acceleration is constrained by assuming that X_t, X_{t+1}, X_{t+2}, X_{t+3} are four consecutive pose points in the local path, with time intervals T_t, T_{t+1}, T_{t+2} between them. The linear acceleration a_{v,t} and angular acceleration a_{w,t} are obtained from Equations (15) and (16), respectively:
a_{v,t} = \frac{2\left(v_{t+1} - v_{t}\right)}{T_{t} + T_{t+1}} \qquad (15)

a_{w,t} = \frac{2\left(w_{t+1} - w_{t}\right)}{T_{t} + T_{t+1}} \qquad (16)
The constraints on the rates of change of the linear and angular accelerations are derived from Equations (17) and (18):
j_{lim,t} = \frac{a_{v,t+1} - a_{v,t}}{0.25\,T_{t} + 0.5\,T_{t+1} + 0.25\,T_{t+2}} \qquad (17)

j_{rot,t} = \frac{a_{w,t+1} - a_{w,t}}{0.25\,T_{t} + 0.5\,T_{t+1} + 0.25\,T_{t+2}} \qquad (18)
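The helper functions below evaluate Equations (15)–(18) directly for four consecutive pose points; the time intervals and velocities in the example are illustrative values, not measurements from the experiments.

```python
def linear_acceleration(v_t, v_t1, T_t, T_t1):
    """a_{v,t} = 2 (v_{t+1} - v_t) / (T_t + T_{t+1})  (Equation (15))."""
    return 2.0 * (v_t1 - v_t) / (T_t + T_t1)

def angular_acceleration(w_t, w_t1, T_t, T_t1):
    """a_{w,t} = 2 (w_{t+1} - w_t) / (T_t + T_{t+1})  (Equation (16))."""
    return 2.0 * (w_t1 - w_t) / (T_t + T_t1)

def jerk_limit(a_t, a_t1, T_t, T_t1, T_t2):
    """Rate of change of acceleration over the three intervals (Equations (17) and (18))."""
    return (a_t1 - a_t) / (0.25 * T_t + 0.5 * T_t1 + 0.25 * T_t2)

# Illustrative numbers: three 0.2 s intervals between four consecutive poses.
a0 = linear_acceleration(0.20, 0.30, 0.2, 0.2)   # 0.5 m/s^2
a1 = linear_acceleration(0.30, 0.35, 0.2, 0.2)   # 0.25 m/s^2
print(jerk_limit(a0, a1, 0.2, 0.2, 0.2))          # change of acceleration per unit time
```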
Furthermore, the end velocity of the robot is adjusted intelligently to achieve smooth deceleration and precise arrival at the target point, reducing the impact on the robot and improving the accuracy of velocity planning. This improvement enhances the robot’s control performance and safety. Mathematically, it can be represented as:
v_{d\max} =
\begin{cases}
v_{\max}, & \left\| Pose_{cur} - T \right\| > d_{threshold} \\
\dfrac{d}{d_{threshold}} \cdot v_{\max}, & \left\| Pose_{cur} - T \right\| \le d_{threshold}
\end{cases} \qquad (19)
where v_max is the pre-set maximum linear velocity of the robot, d_threshold is the pre-set threshold on the Euclidean distance between the robot and the target point, ‖Pose_cur − T‖ is the Euclidean distance d between the current position of the robot and the target point, and v_dmax is the resulting maximum linear velocity allowed during motion.
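Equation (19) amounts to a linear ramp-down of the allowed speed inside a distance threshold of the goal. A minimal sketch is shown below, using the platform's maximum linear velocity from Table 1 (0.4 m/s) and an assumed 1 m threshold; both the function name and the example poses are illustrative.

```python
import numpy as np

def end_velocity_limit(pose_cur, target, v_max, d_threshold):
    """Scale the allowed maximum linear velocity near the goal (Equation (19)):
    full v_max far from the target, then a linear ramp-down inside d_threshold."""
    d = np.linalg.norm(np.asarray(pose_cur) - np.asarray(target))
    if d > d_threshold:
        return v_max
    return (d / d_threshold) * v_max

# Example: 0.3 m from the goal with v_max = 0.4 m/s and an assumed 1 m threshold.
print(end_velocity_limit([4.2, 1.0], [4.5, 1.0], v_max=0.4, d_threshold=1.0))  # 0.12 m/s
```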

3. Experiment Process and Result Analysis

3.1. Experimental Platform and Environment

The simulated inspection environment is depicted in Figure 5. It measured 7 m in length and 3.5 m in width. The environment primarily consisted of narrow spaces in which the robot was expected to navigate around and into confined areas, as well as perform obstacle avoidance tasks. This environment could be used to conduct a diverse array of tests, such as evaluating emergency obstacle avoidance capabilities and assessing the smoothness of the robot’s velocity. The obstacles in the environment included 0.5 m × 0.8 m frames and cardboard boxes.
The experimental platform was a tracked differential-drive composite robot, shown in Figure 6. The main controller is an industrial computer (Intel i7, 16 GB RAM, 256 GB SSD), the LiDAR is a Velodyne VLP-16, and the system runs Ubuntu 18.04 with the ROS framework integrated. In the actual experiments, remote control was realized through the ROS distributed framework on the same LAN, which provided a stable operating environment and software support for the robot application. The main parameters of robot motion are shown in Table 1.

3.2. Experimental Results

3.2.1. Robot Localization Accuracy Test

To fully demonstrate the accuracy of the mapping and localization methods, we conducted experiments using a real-world workflow that required a high level of precision. Firstly, we utilized the Cartographer algorithm to map the simulated environment, allowing the robot to perceive the surrounding obstacles. The mapping result is shown in Figure 7.
Based on the constructed map, the three inspection points A, B, and C required by the robot are marked, as shown in Figure 8.
In this experiment, the robot was set to move autonomously from the starting point to the target point, passing through three waypoints, A, B, and C, during the movement. The robot stopped for five seconds at each waypoint, and the x, y, and angle values at the starting point were measured manually; the same measurements were taken at points A, B, and C. The experiment was repeated 16 times, the measurements recorded at points A, B, and C in each run were compared with those of the first run, and the positioning error of the robot in the complex environment was evaluated from these comparisons. The robot motion process is shown in Figure 9a–c.
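For clarity, the snippet below shows how the per-run errors relative to the first measurement might be computed for one inspection point; the pose values are hypothetical placeholders, not the recorded experimental data (the real experiment used 16 runs and three points).

```python
import numpy as np

# Hypothetical recorded poses (x [m], y [m], angle [rad]) at inspection point A over repeated runs.
runs_A = np.array([
    [1.502, 0.998,  0.010],
    [1.478, 1.013, -0.021],
    [1.519, 0.985,  0.034],
])

reference = runs_A[0]                 # errors are measured against the first run, as described above
errors = runs_A[1:] - reference
print("max |x| error (m):", np.max(np.abs(errors[:, 0])))
print("max |y| error (m):", np.max(np.abs(errors[:, 1])))
print("max |angle| error (rad):", np.max(np.abs(errors[:, 2])))
```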
Finally, an error analysis of the real-time positions against the actual positions of the three inspection points was conducted to determine the robot’s localization precision. The experimental results are depicted in Figure 10. From the graphs, it is evident that the errors in the robot’s movement from the starting point to inspection points A (Figure 10a), B (Figure 10b), and C (Figure 10c) are uniformly distributed. The errors in the X and Y directions remain stable within 5 cm, while the robot’s angular error remains stable within 0.04 rad, equivalent to about 2 degrees. This level of precision ensures that the robot can complete high-precision inspections without colliding with obstacles.
After mapping was completed, the positioning time before and after the improvement of the positioning algorithm was compared. The test procedure was to turn on the LiDAR, load the map and start the positioning algorithm, operate the robot to carry out the positioning process, and record the time required to match the robot to the map. Ten experiments were repeated and the results averaged. The experimental results are shown in Table 2. The improved positioning algorithm reduced the positioning time by about 40%.

3.2.2. Local Path Improvement Test

To verify the effectiveness of the improved TEB planning algorithm, this paper conducted practical experimental tests. We verified the performance of the improved TEB algorithm in two aspects: firstly, the effectiveness of planning a safe trajectory amid dense obstacles; secondly, the effectiveness of keeping the speed of the robot smooth during the movement, and arriving at the target point smoothly and accurately.
In the experimental environment shown in Figure 11, we set up a wider channel A and a narrower obstacle channel B. The traditional TEB algorithm would choose channel B in Figure 11, but this brings the robot too close to the obstacles while passing through them, resulting in a higher risk of scraping against them. The improved TEB algorithm, which introduces the critical coefficient to adjust the distance between path points and obstacles, chose the wider channel A in Figure 11, which is more favorable for the safe passage of the robot. The experimental results verify the effectiveness of the improved TEB algorithm in selecting safe trajectories.
By smoothing the speed of the robot, we can perform the inspection task more safely and efficiently. To validate the effectiveness of our improvements to the robot’s acceleration and end-velocity constraints, we designed an experimental environment, as shown in Figure 12.
The robot started from the starting point, bypassed obstacles, and reached the final goal point. As shown in Figure 13, before the algorithmic improvement the robot’s velocity oscillated significantly when going around the obstacle, whereas after the improvement the sudden velocity changes were effectively reduced, which in turn reduced the energy consumption during task execution. This extends battery life, reduces wear on robot components, and extends the overall life of the robot. In addition, a smooth velocity profile stabilizes the robot’s motion and reduces collisions with obstacles caused by velocity changes, which can improve the quality of sensor data acquisition and the positioning accuracy. In practical applications, a smooth velocity profile also shortens the robot’s emergency response time in unforeseen situations, improving the safety of the inspection task.
When approaching the target point, as shown in Figure 13a, the slope of the velocity curve of the traditional algorithm changes abruptly, indicating a sudden increase in the absolute value of the acceleration. The robot therefore experienced a velocity jump and stopped abruptly on arriving at the target point, producing a large impact on the equipment it carried. This impact may prevent the robot from stopping accurately at the target point, forcing it to perform a reverse adjustment. In contrast, as shown in Figure 13b, the improved algorithm decelerated more smoothly: the absolute value of the acceleration gradually decreased to zero, and the robot finally stopped smoothly at the target point. Comparing these velocity curves clearly shows the effect of the optimization in smoothing the robot’s velocity and verifies the algorithm improvement.

4. Conclusions

The proposed algorithm exhibited robust adaptability to complex environments; it combined the ICP and occupancy probability algorithms to enhance the localization accuracy of the robot. On complex indoor terrains, the algorithm could efficiently construct maps, plan paths, and provide reliable support for autonomous robot navigation. The Cartographer algorithm was adopted to create maps by combining laser sensor data with wheel odometer and IMU data. To obtain the robot position, LiDAR was used to collect environmental point cloud data, and the voxel filter in PCL was used to process the point cloud in order to reduce the impact of scattered points. Subsequently, the ICP algorithm and the occupancy probability algorithm were employed for matching to determine the position and orientation of the robot. Finally, a sparse-matrix pose optimization method was used to refine the obtained pose. A critical coefficient was introduced into the TEB algorithm to adjust the distance between the robot and obstacles, enhancing the safety of the robot’s motion trajectory, while constraints on robot acceleration and terminal velocity improved the smoothness of the robot’s motion. The experimental results showed that the positioning error of the robot was less than 5 cm and the angular error was stable at about 2°. In terms of positioning speed, the improved positioning algorithm was 40% faster than before, effectively reducing the positioning time. Under the guidance of the improved TEB algorithm, the robot was capable of selecting a safer path and moving smoothly. It should be noted that other studies only measure the error after the robot has moved a fixed distance and ignore the cumulative error in multi-target detection; such a method cannot guarantee that the robot maintains good positioning accuracy in a complex environment.
The developed inspection robot in this study is capable of intelligently and efficiently performing tasks in complex environments, providing application scenarios for intelligent inspection. However, this research still faces many challenges, such as localization accuracy and speed estimation. Future research will focus on improving the accuracy of robot localization.

Author Contributions

F.Z.: Conceptualization, Investigation, Methodology, Software, Formal analysis, Writing—original draft preparation, and Visualization; P.X. and L.L.: Conceptualization, Formal analysis, Supervision, Writing—review and editing, and Project administration; P.Z.: Conceptualization, Software, Formal analysis, and Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shanghai (grant number: 20ZR1422700) and the Class III Peak Discipline of Shanghai—Materials Science and Engineering (High–Energy Beam Intelligent Processing and Green Manufacturing).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy reasons.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, Y.; Li, Q.; Xu, X.; Yang, J.; Chen, Y. Research Progress of Nature-Inspired Metaheuristic Algorithms in Mobile Robot Path Planning. Electronics 2023, 12, 3263. [Google Scholar] [CrossRef]
  2. Rafai, A.N.A.; Adzhar, N.; Jaini, N.I. A Review on Path Planning and Obstacle Avoidance Algorithms for Autonomous Mobile Robots. J. Robot. 2022, 2022, e2538220. [Google Scholar] [CrossRef]
  3. Galceran, E.; Carreras, M. A Survey on Coverage Path Planning for Robotics. Robot. Auton. Syst. 2013, 61, 1258–1276. [Google Scholar] [CrossRef]
  4. Lv, J.; Qu, C.; Du, S.; Zhao, X.; Yin, P.; Zhao, N.; Qu, S. Research on Obstacle Avoidance Algorithm for Unmanned Ground Vehicle Based on Multi-Sensor Information Fusion. Math. Biosci. Eng. 2021, 18, 1022–1039. [Google Scholar] [CrossRef] [PubMed]
  5. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
  6. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  7. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
  8. Sumikura, S.; Shibuya, M.; Sakurada, K. OpenVSLAM: A Versatile Visual SLAM Framework. In MM19: Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 2292–2295. [Google Scholar]
  9. Xu, S.; Chou, W.; Dong, H. A Robust Indoor Localization System Integrating Visual Localization Aided by CNN-Based Image Retrieval with Monte Carlo Localization. Sensors 2019, 19, 249. [Google Scholar] [CrossRef]
  10. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46. [Google Scholar] [CrossRef]
  11. Kohlbrecher, S.; von Stryk, O.; Meyer, J.; Klingauf, U. A Flexible and Scalable SLAM System with Full 3D Motion Estimation. In Proceedings of the 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, Kyoto, Japan, 1–5 November 2011; pp. 155–160. [Google Scholar]
  12. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-Time Loop Closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
  13. Bosse, M.; Zlot, R. Keypoint Design and Evaluation for Place Recognition in 2D Lidar Maps. Robot. Auton. Syst. 2009, 57, 1211–1224. [Google Scholar] [CrossRef]
  14. Olson, E. M3RSM: Many-to-Many Multi-Resolution Scan Matching. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 5815–5821. [Google Scholar]
  15. Fox, D.; Burgard, W.; Thrun, S. Markov Localization for Mobile Robots in Dynamic Environments. J. Artif. Intell. Res. 1999, 11, 391–427. [Google Scholar] [CrossRef]
  16. Thrun, S.; Fox, D.; Burgard, W.; Dellaert, F. Robust Monte Carlo Localization for Mobile Robots. Artif. Intell. 2001, 128, 99–141. [Google Scholar] [CrossRef]
  17. Fox, D. Adapting the Sample Size in Particle Filters Through KLD-Sampling. Int. J. Robot. Res. 2003, 22, 985–1003. [Google Scholar] [CrossRef]
  18. Liu, Z.; Shi, Z.; Zhao, M.; Xu, W. Mobile Robots Global Localization Using Adaptive Dynamic Clustered Particle Filters. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1059–1064. [Google Scholar]
  19. Jian, Z.; Zhang, S.; Chen, S.; Nan, Z.; Zheng, N. A Global-Local Coupling Two-Stage Path Planning Method for Mobile Robots. IEEE Robot. Autom. Lett. 2021, 6, 5349–5356. [Google Scholar] [CrossRef]
  20. Wang, D.; Li, C.; Guo, N.; Song, Y.; Gao, T.; Liu, G. Local Path Planning of Mobile Robot Based on Artificial Potential Field. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 3677–3682. [Google Scholar]
  21. Minguez, J.; Montano, L. Nearness Diagram Navigation (ND): A New Real Time Collision Avoidance Approach. In Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000) (Cat. No.00CH37113), Takamatsu, Japan, 31 October–5 November 2000; pp. 2094–2100. [Google Scholar]
  22. Seder, M.; Petrovic, I. Dynamic Window Based Approach to Mobile Robot Motion Control in the Presence of Moving Obstacles. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 1986–1991. [Google Scholar]
  23. Wang, S. Mobile Robot Path Planning Based on Fuzzy Logic Algorithm in Dynamic Environment. In Proceedings of the 2022 International Conference on Artificial Intelligence in Everything (AIE), Nicosia, Cyprus, 2–4 August 2022; pp. 106–110. [Google Scholar]
  24. Roesmann, C.; Feiten, W.; Woesch, T.; Hoffmann, F.; Bertram, T. Trajectory Modification Considering Dynamic Constraints of Autonomous Robots. In Proceedings of the ROBOTIK 2012, 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 1–6. [Google Scholar]
  25. Rösmann, C.; Hoffmann, F.; Bertram, T. Integrated Online Trajectory Planning and Optimization in Distinctive Topologies. Robot. Auton. Syst. 2017, 88, 142–153. [Google Scholar] [CrossRef]
  26. Rösmann, C.; Oeljeklaus, M.; Hoffmann, F.; Bertram, T. Online Trajectory Prediction and Planning for Social Robot Navigation. In Proceedings of the 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Munich, Germany, 3–7 July 2017; pp. 1255–1260. [Google Scholar]
  27. Nguyen, L.A.; Pham, T.D.; Ngo, T.D.; Truong, X.T. A Proactive Trajectory Planning Algorithm for Autonomous Mobile Robots in Dynamic Social Environments. In Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR), Kyoto, Japan, 22–26 June 2020; pp. 309–314. [Google Scholar]
  28. Wu, J.; Ma, X.; Peng, T.; Wang, H. An Improved Timed Elastic Band (TEB) Algorithm of Autonomous Ground Vehicle (AGV) in Complex Environment. Sensors 2021, 21, 8312. [Google Scholar] [CrossRef] [PubMed]
  29. Li, Q.; Huai, J.; Chen, D.; Zhuang, Y. Real-Time Robot Localization Based on 2D Lidar Scan-to-Submap Matching. In China Satellite Navigation Conference (CSNC 2021) Proceedings; Yang, C., Xie, J., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2021; Volume 773, pp. 141–423. [Google Scholar]
Figure 1. The flowchart of the Cartographer algorithm.
Figure 2. Schematic diagram of the localization method.
Figure 3. Principle of the ICP algorithm.
Figure 4. Comparison of paths under different critical coefficients.
Figure 5. Experimental environment.
Figure 6. Experimental platform.
Figure 7. The mapping effect created from the Cartographer algorithm.
Figure 8. A schematic diagram of robot inspection positions.
Figure 9. The movement process of the robot. (a–c) The position of the robot as it passes through points A, B, and C, respectively.
Figure 10. Analysis of position recording for points (a) A, (b) B, and (c) C.
Figure 11. Trajectory safety analysis. (a) The robot passing through safe zone A; (b) the robot passing through dangerous zone B.
Figure 12. Speed-smoothing experimental environment.
Figure 13. The velocity curve of the robot in the X direction (a) before and (b) after the improvement of the TEB algorithm.
Table 1. The main parameters of robot motion.

Constraint Parameters | Values
Maximum X linear velocity (m/s) | 0.4
Maximum backward linear velocity (m/s) | 0.2
Maximum angular velocity (rad/s) | 0.4
Maximum X linear acceleration (m/s²) | 0.3
Maximum angular acceleration (rad/s²) | 0.3
Obstruction expansion radius (m) | 0.5
Table 2. Comparison of positioning speed under different maps.

Method | Location Time under 7 m × 3.5 m Map (s) | Location Time under 15 m × 7 m Map (s)
Traditional Method | 3.0 | 4.3
Proposed Method | 1.8 | 2.3