Article

LiDAR-Based Local Path Planning Method for Reactive Navigation in Underground Mines

School of Resources and Safety Engineering, Central South University, Changsha 410083, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(2), 309; https://doi.org/10.3390/rs15020309
Submission received: 24 November 2022 / Revised: 28 December 2022 / Accepted: 1 January 2023 / Published: 4 January 2023

Abstract
Reactive navigation is the most widely researched navigation technique for underground vehicles, and local path planning remains one of its main difficulties. At present, no technique fully solves the local path planning problem for the reactive navigation of underground vehicles. To address this problem, this paper proposes a new local path planning method based on 2D LiDAR. First, we convert the LiDAR data into a binary image, and we then extract the skeleton of the binary image with a thinning algorithm. Finally, we extract the centerline of the current laneway from this skeleton and smooth it to obtain the currently planned local path. Experiments show that the proposed method is highly robust and performs well. The method can also be used for global path planning on underground maps.

Graphical Abstract

1. Introduction

The first generation of autonomous guided vehicles for underground mines was developed between the 1960s and the 1980s [1,2]. These vehicles were primarily guided by lines drawn on the floor or lights mounted on the roof: a CCD camera detected the relative position of the line directly above or ahead of the vehicle. As technology has developed, vehicle navigation techniques have split into two main categories. The first is absolute navigation, in which the vehicle knows its real-time position in a fixed coordinate system; a path is planned in this coordinate system, and the vehicle follows it. Absolute navigation is widely used in indoor and outdoor mobile robots, and simultaneous localization and mapping (SLAM) is its most popular technique [3,4]. The second category is reactive navigation, in which the vehicle does not need to know its position on the entire map; it only needs its relative position within the currently visible environment, and navigation is based on this relative position. Over the past few decades, a large amount of software for autonomous driving in mines has been developed, such as AutoMine, MINEGEM, and the Scooptram automation system, and numerous experiments and applications have been carried out in many mines [5]. To date, reactive navigation is the most widely used technique in mines [1,6,7,8], and LiDAR has become the most widely used sensor in the underground environment [9].
However, some key issues remain to be overcome in applying reactive navigation in mines. Path planning is a critical technique in reactive navigation research. Thus far, several mature types of path planning algorithms exist [10,11,12], for example, Dijkstra's algorithm [13], the A* algorithm [14], the potential field method (PFM) [15], the probabilistic roadmap method (PRM) [16], and Rapidly-exploring Random Trees (RRT) [17]. These algorithms are widely used in different scenarios. Generally speaking, a path planning algorithm needs a starting point and a target point, but in reactive navigation in a mine it is difficult to determine exact starting and ending points. Therefore, these algorithms are difficult to apply directly to path planning for reactive navigation.
Fortunately, the underground laneway environment has a unique geometry. In the field of mobile robots, working environments are divided into indoor and outdoor environments. An indoor environment is one where the ground is flat and the space is enclosed, such as offices and production rooms. An outdoor environment is one where the ground is uneven and the space is open, such as urban roads and open terrain. The underground laneway matches neither standard: it has the enclosed structure of an indoor environment and the uneven ground of an outdoor environment. Since the underground laneway has floors, ceilings, and walls, it is closer to the indoor environment [1]. Compared with an environment such as an office, however, the underground environment is simpler; this structure is called a corridor-like structure.
Exploiting the corridor-like structure of an underground laneway, researchers in the mining field [18] proposed the classic follow-the-wall technique for reactive navigation. Neural networks are an effective means of realizing wall following: Pomerleau [19] describes how a neural network can enable a vehicle to learn to drive, and Dubrawski [20] uses a neural network to let a vehicle learn online by trial and error. However, the neural network approach requires further research and has not been popularized in mining applications. Owing to the unique structure of the underground laneway, some authors take the centerline of the laneway as the planned path. For example, Larsson [21,22] proposed a Hough transform-based method that detects two parallel lines in the LiDAR data and takes the centerline between them as the local planned path for reactive navigation. However, this method only applies to local path planning while the vehicle moves straight, and it fails when the vehicle turns.
To solve the local path planning problem for the reactive navigation of underground vehicles, we propose a new local path planning method based on 2D LiDAR. The proposed method can be applied to any environment with a corridor-like structure, not only underground environments. The main innovations of this research are as follows. (1) A new concept of path planning based on 2D LiDAR is proposed: by converting the 2D LiDAR data into an image, the centerline of the laneway is extracted using a thinning algorithm. (2) A search tree building algorithm for extracting useful information from images is proposed. (3) The proposed method is efficient and robust, and it can be used not only for the local path planning of the reactive navigation of underground vehicles but also for path planning on a global underground map. The structure of the remaining sections is as follows. Section 2 presents the environment for our experiments and the data used for algorithm validation. Section 3 describes our proposed method and the method used to verify it in detail. Section 4 verifies the performance of the proposed method on two datasets. Section 5 discusses the influence of key parameters on the proposed method and future work. Section 6 provides the conclusion.

2. Study Materials

In this study, the load haul dump (LHD) is used as the research object of reactive navigation. As shown in Figure 1, a 2D LiDAR is installed on the front body and the rear body of the LHD. The scanning frequency of the LiDAR is 50 Hz, the angular resolution is 0.5°, and the scanning angle is 270°. The length of the LHD is approximately 10.7 m, the width is approximately 2.7 m, and the height is approximately 2.5 m. The LiDAR data are transmitted to the intelligent controller through the vehicle network, and then the data collected by the intelligent controller are transmitted to the terminal computer through Wi-Fi. When conducting experiments, we record and save LiDAR data through a rosbag file at the remote terminal.
Experiments were carried out at the following two mines, where experimental data were recorded in a rosbag file.
A map of the experiment in Mine A is shown in Figure 2. Mine A mainly adopts the filling mining method. The width of the laneway is around 4.5 m, A-B-D-E-F-G is the main transportation laneway, and the length of the laneway is around 130 m. Point C is the orepass, and the length of the laneway in section BC is approximately 30 m. There are two chambers of the same size near point A, among which the fire chamber is shown in Figure 2c. During the experiment, the driving path of the LHD is A-B-C-B-A-B-D-E-F-G. During this process, the driving mode of the LHD includes straight movement, left turns, and right turns. The travel time is around 290 s. A total of 14,454 frames of LiDAR data are recorded, which we define as dataset 1. In dataset 1, there are 10,847 frames with the straight driving mode, 1109 frames with left turns, and 2498 frames with right turns.
A map of the experiment in Mine B is shown in Figure 3. Mine B mainly adopts the natural caving mining method. The width of the laneway is approximately 4.1 m, A-B is the main transportation laneway, the total length of the laneway is approximately 200 m, and there are 27 mining entrances. During the experiment, the LHD drove straight from point A to point B, and the travel time was around 235 s. During this process, the LHD only traveled straight. A total of 11,705 frames of LiDAR data were recorded, which we define as dataset 2.
The maps of the above two mines were obtained using the corresponding datasets through Cartographer [23]. This paper evaluates the effectiveness and robustness of our proposed method on these two datasets.

3. Methodology

The main idea of our algorithm is to convert each frame of LiDAR data into a binary map, use a thinning algorithm to extract the skeleton of the map, and select a suitable polyline in the skeleton as the local planning path of the current frame. The flowchart of the proposed algorithm is shown in Figure 4.

3.1. Binary Map

In this section, we convert each frame of data collected by the LiDAR into a binary map. The original range data obtained by the LiDAR contain invalid points and noise, and we need to extract the laser points that fall on the tunnel wall. We therefore set a valid distance range according to the LiDAR's attributes and installation position. In this work, the SICK LMS151 LiDAR is used; its scanning range is 0.5 m to 50 m (18 m at 10% reflectivity). As shown in Figure 1, the LiDAR is installed on the axle of the LHD, which is 2.7 m wide. Therefore, the valid distance range set in this work is [1.35 m, 20 m]. Setting a valid range removes interference points that would otherwise affect the subsequent algorithm.
The range data of one frame received by the LiDAR are $R = \{r_1, r_2, \ldots, r_i, \ldots, r_n\}$, where $n$ is the number of laser points. According to the scanning range of the LiDAR, the angle $A = \{\theta_1, \theta_2, \ldots, \theta_i, \ldots, \theta_n\}$ corresponding to each laser point can be obtained by linear interpolation. We select valid laser points according to Equation (1):

$$R' = \{\, r_i \mid minLim < r_i < maxLim,\ r_i \in R \,\}_{i=1}^{n} \tag{1}$$
where $minLim$ and $maxLim$ are the minimum and maximum values of the valid range, respectively. Then, we convert the valid laser points $R'$ from polar coordinates to Cartesian coordinates according to Equation (2):
$$x_i = r_i \cos(\theta_i), \quad y_i = r_i \sin(\theta_i), \quad r_i \in R',\ \theta_i \in A, \qquad P_c = \{(x_i, y_i)\}_{i=1}^{n'} \tag{2}$$
where $P_c$ is a two-dimensional point cloud in the Cartesian coordinate system (Figure 4b) and $n'$ is the number of laser points in $R'$. $(x_i, y_i)$ are the Cartesian coordinates of the $i$th laser point in $R'$, and $\theta_i$ is the angle corresponding to the $i$th laser point in $R'$.
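As a concrete illustration of Equations (1) and (2), the following Python sketch converts one frame of range data into a filtered Cartesian point cloud. The 270° scan arc and the [1.35 m, 20 m] window follow the setup described above; the function name and NumPy usage are our own, not the authors' code.

```python
import numpy as np

def scan_to_points(ranges, angle_min=-3 * np.pi / 4, angle_max=3 * np.pi / 4,
                   min_lim=1.35, max_lim=20.0):
    """Convert one frame of 2D LiDAR ranges to Cartesian points.

    Beam angles are linearly interpolated over the scan arc (270 degrees
    assumed here, as for the SICK LMS151); only points whose range lies
    inside the valid window (min_lim, max_lim) are kept, per
    Equations (1)-(2).
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.linspace(angle_min, angle_max, len(ranges))
    valid = (ranges > min_lim) & (ranges < max_lim)
    r, theta = ranges[valid], angles[valid]
    # Polar -> Cartesian, one (x, y) row per valid beam
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))
```

A frame of four beams with ranges [0.5, 5.0, 30.0, 10.0] m would, for instance, keep only the two ranges inside the valid window.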
According to the principle of LiDAR scanning, $P_c$ is an ordered set of points. In the Cartesian coordinate system, the points in $P_c$ are connected sequentially into a closed envelope (Figure 4c). To convert this into a binary map, gridding is required: a region containing the envelope is divided into a grid, and the grid cell size is defined as the resolution of the map. We set the cells inside the envelope to black and the cells outside the envelope to white. We calculate the Cartesian coordinates of each grid cell's center point according to Equation (3):
$$h_{ij} = x_{min} + (j - 0.5)\,resolution, \quad l_{ij} = y_{min} + (rn - i + 0.5)\,resolution, \qquad P_o = \{(h_{ij}, l_{ij})\}_{i=1,\,j=1}^{rn,\,cn} \tag{3}$$
where $(h_{ij}, l_{ij})$ are the coordinates of the center of the grid cell in row $i$ and column $j$; $x_{min}, y_{min}$ are the minimum $x$ and $y$ coordinates of the grid area; $rn$ is the number of rows; $cn$ is the number of columns; and $resolution$ is the map resolution. The binary map (Figure 4d) can then be obtained according to Equation (4):
$$BM(i, j) = \begin{cases} 0, & (h_{ij}, l_{ij}) \in I \\ 1, & (h_{ij}, l_{ij}) \notin I \end{cases} \tag{4}$$
where $BM$ represents the binary map and $I$ represents the set of points interior to the envelope.
There are many methods for determining whether a point lies inside a polygon [24], for example, area summation, angle summation, and ray casting. This study adopts the ray casting approach, implemented as follows [25,26]: we shoot a ray from the center of a grid cell and count its intersections with the envelope; if the count is odd, the point is inside the envelope; if it is even, the point is outside.
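The ray-casting test described above can be sketched as follows. This is a generic implementation of the even-odd rule using a horizontal ray, not the authors' exact code.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting (even-odd) test: cast a horizontal ray to the right of
    (x, y) and count crossings with the polygon's edges; an odd count
    means the point is inside. `polygon` is a list of (x, y) vertices of
    the closed envelope (the last vertex connects back to the first)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's horizontal line
            # x coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```

Applying this test to every grid cell center $(h_{ij}, l_{ij})$ yields the binary map of Equation (4).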

3.2. Thinning Algorithm

There are many methods for road centerline extraction [27,28,29]. Considering the efficiency of the algorithm, we adopt a skeletonization algorithm. Skeletonization is widely used in the image field; Cornea [30] surveys skeletonization algorithms in detail, dividing them into the following categories: (1) thinning and boundary propagation; (2) distance field-based; (3) geometric; and (4) general-field functions.
The above algorithms operate either on discrete voxelized datasets or on continuous polygonal representations of 3D objects. In this work, only the skeleton of a 2D binary map needs to be extracted. For the skeletonization of two-dimensional binary images, Zhang and Suen [31] proposed a fast parallel thinning algorithm, known as the Zhang–Suen thinning algorithm. This algorithm is very effective for processing binary images, so we adopt it. Algorithm 1 describes the Zhang–Suen thinning procedure.
Algorithm 1 Thinning algorithm
Input: Binary map BM(rn × cn), where 1 is background and 0 is foreground
Output: Refined map RM(rn × cn)
1: while True do
2:    Define empty two-column matrices S1 and S2
3:    for i = 1 to rn do
4:       for j = 1 to cn do
5:          if BM(i, j) > 0 then Continue end if
6:          Count the number n of 0-to-1 transitions between consecutive pixels among the eight neighbors of BM(i, j), traversed clockwise starting from BM(i − 1, j)
7:          if n ≠ 1 then Continue end if
8:          Count the number n of foreground pixels among the eight neighbors of BM(i, j)
9:          if n < 2 or n > 6 then Continue end if
10:         if BM(i − 1, j) · BM(i, j + 1) · BM(i + 1, j) ≠ 0 then Continue end if
11:         if BM(i, j + 1) · BM(i + 1, j) · BM(i, j − 1) ≠ 0 then Continue end if
12:         Append [i, j] after the last line of S1
13:      end for
14:   end for
15:   if S1 is not empty then
16:      Assign the background value 1 to the pixels BM(S1)
17:   else
18:      Break
19:   end if
20:   Repeat lines 3–19, changing S1 to S2 and lines 10–11 to lines 21–22
21:   if BM(i − 1, j) · BM(i, j + 1) · BM(i, j − 1) ≠ 0 then Continue end if
22:   if BM(i − 1, j) · BM(i + 1, j) · BM(i, j − 1) ≠ 0 then Continue end if
23: end while
24: Output the refined map RM = BM
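For reference, a compact Python version of Zhang–Suen thinning is sketched below. It follows the common convention of foreground = 1 (the paper's binary map uses foreground = 0, so that map would be inverted first); it is a direct, unoptimized translation of the two sub-iterations, not the authors' implementation.

```python
import numpy as np

def zhang_suen_thinning(img):
    """Zhang-Suen thinning on a binary image (foreground = 1).

    Repeats the two sub-iterations, marking deletable pixels in a full
    pass and then clearing them, until no pixel changes.
    """
    img = np.asarray(img, dtype=np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] != 1:
                        continue
                    # Neighbours P2..P9, clockwise from the pixel above
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    b = sum(p)  # number of foreground neighbours
                    a = sum((p[k] == 0) and (p[(k + 1) % 8] == 1)
                            for k in range(8))  # 0 -> 1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        cond = (p[0] * p[2] * p[4] == 0
                                and p[2] * p[4] * p[6] == 0)
                    else:
                        cond = (p[0] * p[2] * p[6] == 0
                                and p[0] * p[4] * p[6] == 0)
                    if cond:
                        to_clear.append((i, j))
            for i, j in to_clear:
                img[i, j] = 0
                changed = True
    return img
```

Applied to a filled rectangle, the result is a one-pixel-wide skeleton contained in the original foreground.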

3.3. Centerline Extraction

After obtaining the skeleton map (Figure 4e) of the binary map, it is necessary to select a laneway centerline in the skeleton map. First, we select the pixel closest to the coordinate origin as the search start point s . From the starting point, we search for unsearched pixels along the right, upper right, lower right, upper, lower, upper left, lower left, and left directions, and we establish a search tree to store pixels. Algorithm 2 describes the building process of the search tree in detail.
Algorithm 2 Search tree algorithm
Input: Refined map RM(rn × cn); start point s
Output: Search tree Tree(p × 11)
1: Initialize a 2D search tree array Tree(1 × 11) with Tree(1, 1:2) = s, Tree(1, 3) = NAN, Tree(1, 4:11) = 0, and use Flag(rn × cn) to indicate whether each pixel has been searched, Flag = zeros(rn, cn)
2: Use current to denote the current node and number to denote the size of Tree
3: while current ≤ number do
4:    a = Tree(current, 1) and b = Tree(current, 2)
5:    if b < cn then
6:       if RM(a, b + 1) = 1 and Flag(a, b + 1) = 0 then
7:          number = number + 1, Tree(current, 4) = number, Tree = [Tree; zeros(1, 11)], Tree(number, 1:2) = [a, b + 1], Tree(number, 3) = current, Flag(a, b + 1) = 1
8:       end if
9:    end if
10:   As in lines 5–9, search the pixels in the upper right, lower right, upper, lower, upper left, lower left, and left directions of (a, b) in turn
11:   current = current + 1
12: end while
After generating the search tree, we search from the leaf node of the search tree to the parent node layer by layer until the root node of the search tree is found; see Algorithm 3. We can find all paths in this skeleton map.
Algorithm 3 Get all paths from the search tree
Input: Search tree Tree(p × 11)
Output: All paths paths(k × 1)
1: for i = 1 to p do
2:    if Tree(i, 4:11) = 0 then
3:       ki = i, paths = [paths; Tree(ki, 1:2)]
4:       while Tree(ki, 3) is not NAN do
5:          ki = Tree(ki, 3)
6:          paths = [paths; Tree(ki, 1:2)]
7:       end while
8:    end if
9: end for
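Algorithms 2 and 3 can be sketched in Python as a breadth-first search that records parent links, followed by a walk from each leaf back to the root. The dictionary-based tree below replaces the paper's fixed-width array representation; all names are ours.

```python
from collections import deque

# 8-connected neighbour offsets, in the search order used by Algorithm 2:
# right, upper right, lower right, upper, lower, upper left, lower left, left
NEIGHBOURS = [(0, 1), (-1, 1), (1, 1), (-1, 0),
              (1, 0), (-1, -1), (1, -1), (0, -1)]

def build_search_tree(skeleton, start):
    """BFS over skeleton pixels (value 1), recording each pixel's parent;
    a pixel whose children list stays empty is a leaf (Algorithm 2)."""
    parent = {start: None}
    children = {start: []}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        for da, db in NEIGHBOURS:
            nb = (a + da, b + db)
            if (0 <= nb[0] < len(skeleton) and 0 <= nb[1] < len(skeleton[0])
                    and skeleton[nb[0]][nb[1]] == 1 and nb not in parent):
                parent[nb] = (a, b)
                children[(a, b)].append(nb)
                children[nb] = []
                queue.append(nb)
    return parent, children

def all_paths(parent, children):
    """Walk from every leaf back to the root (Algorithm 3)."""
    paths = []
    for node, kids in children.items():
        if not kids:                 # leaf node
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            paths.append(path[::-1])  # return in root -> leaf order
    return paths
```

On a T-shaped skeleton, for example, two root-to-leaf paths are recovered, one per branch.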
Next, we select the most appropriate path among all paths according to the current vehicle driving mode. According to Equation (5), the driving mode of the vehicle is divided into three modes: forward, left, and right.
$$run\_mode = \begin{cases} -1, & \text{turn left} \\ 0, & \text{straight} \\ 1, & \text{turn right} \end{cases} \tag{5}$$
In the formula, r u n _ m o d e represents the current operation mode of the LHD. We calculate the direction of each path according to Equation (6).
$$\overrightarrow{paths(k)} = paths(k).end - s \tag{6}$$

where $\overrightarrow{paths(k)}$ is the direction of the $k$th path, treated as a three-dimensional vector (with zero $z$ component), and $paths(k).end$ is the last point of the $k$th path. $\vec{X} = (1, 0, 0)$ denotes the direction of the positive X axis. According to Equations (7)–(9), we calculate the angle of each path to the positive X axis.
$$C = \overrightarrow{paths(k)} \times \vec{X} = \left|\overrightarrow{paths(k)}\right| \left|\vec{X}\right| \sin\langle \overrightarrow{paths(k)}, \vec{X} \rangle \tag{7}$$

where $|\cdot|$ denotes the modulus of a vector and $\langle \cdot \rangle$ denotes the included angle between vectors. The vector $C$ is the cross product of $\overrightarrow{paths(k)}$ and $\vec{X}$.
$$\alpha = \arccos\left( \frac{\overrightarrow{paths(k)} \cdot \vec{X}}{\left|\overrightarrow{paths(k)}\right| \left|\vec{X}\right|} \right) \tag{8}$$

where $\alpha$ is the included angle between $\overrightarrow{paths(k)}$ and $\vec{X}$. We calculate the signed angle $\varphi_k$ between the $k$th path and the positive direction of the X axis using Equation (9):
$$\varphi_k = \begin{cases} \alpha, & C(3) \geq 0 \\ -\alpha, & C(3) < 0 \end{cases} \tag{9}$$
where $C(3)$ is the component of vector $C$ in the Z-axis direction. We calculate the included angle between each path and the positive X axis to form a set $\Phi = \{\varphi_1, \varphi_2, \ldots, \varphi_n\}$, where $n$ is the number of paths. The angle $\varphi_k$ is defined as follows: an angle measured counterclockwise from the positive X axis to the path direction is positive, an angle measured clockwise is negative, and the value range is $(-\pi, \pi]$. The path is selected according to Equation (10):
$$select\_path = \begin{cases} paths[\operatorname{argmin}_k |\varphi_k|], & run\_mode = 0 \\ paths[\operatorname{argmax}_k \varphi_k], & run\_mode = -1 \\ paths[\operatorname{argmin}_k \varphi_k], & run\_mode = 1 \end{cases} \tag{10}$$

that is, when driving straight, the path whose angle has the smallest absolute value is selected; for a left turn, the path with the largest (most counterclockwise) angle; and for a right turn, the path with the smallest (most clockwise) angle. $select\_path$ is the selected optimal path. To generate the local planned path, the current LiDAR origin is added to the beginning of the selected path to form an initial local planned path.
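Equations (6)–(10) reduce to computing a signed angle per path and picking an extremum. The sketch below uses atan2, which is equivalent to the cross-product/arccos construction of Equations (7)–(9); it assumes the search start point sits at the origin, and the −1/0/1 mode encoding is our reading of Equation (5).

```python
import math

def path_angle(end_point):
    """Signed angle between a path's direction vector and the positive
    X axis, in (-pi, pi]; counterclockwise is positive. The direction
    vector runs from the search start (assumed at the origin here) to
    the path's last point."""
    return math.atan2(end_point[1], end_point[0])

def select_path(paths, run_mode):
    """Pick a path in the spirit of Equation (10): smallest |angle| when
    driving straight, largest angle for a left turn, smallest for a
    right turn."""
    angles = [path_angle(p[-1]) for p in paths]
    if run_mode == 0:        # straight
        k = min(range(len(angles)), key=lambda i: abs(angles[i]))
    elif run_mode == -1:     # turn left
        k = max(range(len(angles)), key=lambda i: angles[i])
    else:                    # turn right
        k = min(range(len(angles)), key=lambda i: angles[i])
    return paths[k]
```

For three candidate paths ending roughly ahead, up-left, and down-right, the straight, left, and right modes select them respectively.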

3.4. Smoothing Method

To enable the vehicle to run more smoothly, the raw local planned path needs to be smoothed [32]. In particular, if the current driving mode is straight, we wish to avoid the influence of intersections in the laneway. We first filter the path as follows. The start and end points of the raw local planned path define a straight line, and we calculate the distance from each of the other points to this line. If a point's distance is greater than $d_{lim}$, we delete the point; otherwise, we keep it. In Figure 5, A and B are the endpoints of the initial local planned path; the gray point, whose distance to the line AB is greater than $d_{lim}$, is deleted, and the other points are retained. In this paper, $d_{lim}$ is taken as 0.3 m.
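The outlier filter of Figure 5 can be sketched as follows: interior points farther than d_lim from the chord joining the path's endpoints are dropped. Function and variable names are ours.

```python
import math

def filter_path(points, d_lim=0.3):
    """Drop interior points farther than d_lim from the straight line
    through the path's endpoints A and B; the endpoints themselves are
    always kept."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    length = math.hypot(x2 - x1, y2 - y1)
    kept = [points[0]]
    for x, y in points[1:-1]:
        # Perpendicular distance from (x, y) to the line A-B
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / length
        if d <= d_lim:
            kept.append((x, y))
    kept.append(points[-1])
    return kept
```

A point bulging 1 m off an otherwise straight path is removed, while points within 0.3 m of the chord survive.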
After filtering out the outliers, we fit the remaining path points with a Bezier curve. The general expression of the Bezier curve is shown in Equation (11):
$$S(t) = \sum_{i=0}^{n} B_i^n(t) P_i, \quad t \in [0, 1], \qquad B_i^n(t) = \binom{n}{i} t^i (1 - t)^{n - i} \tag{11}$$
where $P_i$ is the $i$th control point and $n$ is the degree of the Bezier curve; in this paper, $n$ is taken as 3. $S$ is the Bezier curve. We find the optimal control points of the Bezier curve using the least squares method, Equation (12) [33]:
$$\operatorname*{argmin}_{P_i} \sum_{j=1}^{m} \left\| path_j - \sum_{i=0}^{n} B_i^n(t_j) P_i \right\|^2 \tag{12}$$
where $path_j$ is the $j$th point in the path and $m$ is the number of points in the path. We substitute the optimal control points $P_i$ into Equation (11) to obtain the locally planned path (red line in Figure 4f).
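A least-squares cubic Bezier fit in the spirit of Equations (11) and (12) can be sketched with NumPy as follows. The chord-length parameterisation of $t_j$ is a common choice; the paper does not specify how $t_j$ is assigned, so that part is an assumption.

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis B_i^3(t), one row per parameter value."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                            3 * t ** 2 * (1 - t), t ** 3])

def fit_cubic_bezier(path):
    """Solve Equation (12) for the four control points of a cubic
    Bezier curve, with samples parameterised by normalised chord
    length t_j in [0, 1]."""
    path = np.asarray(path, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
    t = d / d[-1]
    # Linear least squares: B @ ctrl ~= path
    ctrl, *_ = np.linalg.lstsq(bernstein_matrix(t), path, rcond=None)
    return ctrl  # 4 x 2 array of control points
```

Evaluating `bernstein_matrix(t) @ ctrl` at any parameter values then yields points on the smoothed path; the fitted curve passes through the first and last path points when the system is exactly determined.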

3.5. Method Comparison and Robustness Evaluation

There are few studies on local path planning methods for the reactive navigation of underground vehicles. At present, the method based on the Hough transform [34,35], proposed by Larsson [21] in 2008, is the state of the art. Its main idea is to apply the Hough transform to the LiDAR data and detect whether two parallel lines satisfying certain constraints exist. If they exist, the centerline of the two lines is taken as the local planned path; if not, no path can be planned. The method is robust when the vehicle is not turning, but it cannot be used while turning. Therefore, we compare it with the method proposed in this paper only in the straight-line mode.
In this paper, the offset error and direction error are used to quantitatively describe the accuracy of the algorithm. As shown in Figure 6, to calculate the error, we choose a point on the axle as a reference point; here, we choose the point at a distance of 0.5 m along the LiDAR scan direction. When the vehicle moves forward, we use the front LiDAR, and when it moves backward, we use the rear LiDAR. The distance from the reference point to the local planned path is the offset error: positive when the reference point is on the left side of the local planned path and negative when it is on the right side. The positive direction of the current X axis is used as the reference direction, and the difference between the direction of the local planned path and the reference direction is the direction error; its sign follows the angle definition given in Section 3.3.
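The signed offset convention (positive to the left of the path, negative to the right) can be computed from the z component of a cross product. The sketch below assumes the nearest path point and the unit tangent there are already known; those inputs and the function name are ours, not the paper's.

```python
import math

def signed_offset(ref_point, path_point, path_dir):
    """Signed distance from the reference point to the local path:
    positive when the reference point lies to the left of the path
    direction, negative to the right.

    path_point is taken to be the nearest point on the path and
    path_dir the unit tangent there, so the offset vector is
    perpendicular to the path."""
    dx = ref_point[0] - path_point[0]
    dy = ref_point[1] - path_point[1]
    # z component of path_dir x (ref - path_point): > 0 means left side
    cross = path_dir[0] * dy - path_dir[1] * dx
    return math.copysign(math.hypot(dx, dy), cross) if cross != 0 else 0.0
```

For a path heading along +X, a reference point 0.5 m above the path gives +0.5 m and one 0.5 m below gives −0.5 m, matching the convention above.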
To systematically evaluate the robustness of the proposed algorithm, we follow the evaluation method proposed by Larsson [21], corrupting the data in two different ways: adding noise to the scans and using a lower data rate. First, 10%, 20%, 33%, and 50% noise is added to the original data. Second, the amount of data is reduced by 1/2, 3/4, 5/6, and 8/9, respectively.
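The two corruption schemes can be sketched as follows. The paper does not state the exact noise model, so replacing a random fraction of beams with uniformly distributed ranges is an assumption; subsampling every k-th beam emulates the lower data rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_scan(ranges, noise_fraction=0.1, keep_every=1, max_range=50.0):
    """Corrupt one LiDAR frame for robustness testing: replace a random
    fraction of beams with uniform noise over the sensor's range, then
    keep only every k-th beam to emulate a lower data rate."""
    ranges = np.asarray(ranges, dtype=float).copy()
    n = len(ranges)
    idx = rng.choice(n, size=int(noise_fraction * n), replace=False)
    ranges[idx] = rng.uniform(0.0, max_range, size=len(idx))
    return ranges[::keep_every]
```

Removing 3/4 of the data, for instance, corresponds to `keep_every=4` on a 540-beam frame, leaving 135 points.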

4. Results

To verify the validity of the proposed method, two datasets are tested in this section. The two datasets come from the two different mines described in Section 2.

4.1. Dataset 1

Dataset 1 contains three different driving modes of the LHD: straight movement, left turns, and right turns. First, the accuracy of the proposed method is analyzed for each of the three driving modes; the experimental results for the whole of dataset 1 are then summarized.

4.1.1. In the Case of Moving Straight

Figure 7 shows the local path planning of the proposed method and the method of the Hough transform in the straight-moving case in dataset 1. The LiDAR data collection positions of (a–f) correspond to the vicinity of points A, C, B, E, F, and G in Figure 2d, respectively. It can be seen that (1) in different situations, whether in the normal situation (Figure 7b), under the interference of the chamber (Figure 7a), or under the interference of the intersection (Figure 7c–f), the proposed method can accurately find the centerline of the current laneway. (2) The proposed method is consistent with the local path planning of the Hough transform. (3) The local path of the proposed method is a curve, which is consistent with the changing trend of the laneway, while the local path of the Hough transform method is a straight line.
Figure 8 shows a quantitative evaluation of the results in Figure 7, that is, the offset and direction errors of the two methods. The mean absolute offset error of the proposed method is 0.1542 m, and its mean absolute direction error is 0.0775 rad. The mean absolute offset error of the Hough transform method is 0.2040 m, and its mean absolute direction error is 0.0509 rad. In summary, in the straight-travel case, the accuracy of the proposed method is comparable to the local path planning accuracy of the Hough transform method, and the errors are all within the allowable range.

4.1.2. In the Case of Turning Left

Figure 9 shows the local path planning of the proposed method in the case of a left turn in dataset 1. The LiDAR data collection locations in Figure 9a–c correspond to the turning paths of C-B-A in Figure 2. It can be seen that as the visible curve area of the LiDAR increases, the local path planning becomes closer to the road centerline. The local path planning errors in Figure 9a–c are shown in Table 1. It can be seen that the offset and direction error in Figure 9c are large, but the local path is on the road centerline. This shows that during the experiment, the vehicle did not drive along the optimal laneway centerline at the position shown in Figure 9c. With the exception of Figure 9c, the absolute mean offset error of the local path planning in Figure 9a,b is 0.1227 m and the absolute mean direction error is 0.0437 rad.

4.1.3. In the Case of Turning Right

Figure 10 shows the local path planning of the proposed method in the right-turn case in dataset 1. The LiDAR data collection positions of Figure 10a–c correspond to the turning path A-B-C in Figure 2, and those of Figure 10d,e correspond to the vicinity of point D in Figure 2. As in the left-turn case, Figure 10a–e shows that as the LiDAR's visible curve area increases, the local path planning moves closer to the road centerline. The local path planning errors in Figure 10a–e are shown in Table 2. The mean absolute offset error of the local path planning for the proposed method is 0.1892 m, and the mean absolute direction error is 0.1395 rad.

4.1.4. Summary

Dataset 1 has 14,454 frames of LiDAR data. The previous section evaluated the effectiveness and accuracy of the proposed method for some feature frames in different situations. The following is a statistical analysis of the accuracy and efficiency of the proposed method after dataset 1 is processed.
To better illustrate the superiority of the proposed method, we use the Hough transform method for comparison. Figure 11 shows the statistical results of the proposed method and the Hough transform method after processing dataset 1. It can be seen that (1) the mean absolute offset error of the proposed method is smaller than that of the Hough transform method, and the offset error of the proposed method fluctuates less, while the offset error of the Hough transform method has more outliers (Figure 11a). (2) The mean absolute direction error of the proposed method is slightly larger than that of the Hough transform method, but the direction error of the Hough transform method has more outliers (Figure 11b). (3) The Hough transform method rejects 938 frames, while the proposed method rejects none (Figure 11c); this is because the Hough transform method cannot be applied at turning intersections. In other words, the proposed method can be applied in both straight and turning situations.
In addition, we compare the processing times of the proposed method and the Hough transform method on dataset 1, as shown in Table 3. All code runs in MATLAB on an Intel (R) Core (TM) i9-9900K CPU with a base frequency of 3.60 GHz. The proposed method processes dataset 1 around 13 times faster than the Hough transform method.
Then, we add different levels of noise to each frame of LiDAR data in dataset 1; the error statistics are shown in Figure 12. It can be seen that (1) adding noise does not affect the mean absolute offset error, but noise levels of 33% and 50% have a greater impact on the offset error outliers, and as the noise level increases, the outliers become increasingly scattered (Figure 12a). (2) Adding different levels of noise has a greater impact on the mean absolute direction error (Figure 12b). In addition, the number of rejected frames increases to 117 when 50% noise is added (Figure 12c). (3) Adding different levels of noise has little effect on the number of outliers of the offset and direction errors (Figure 12a,b). Overall, under different levels of added noise, the proposed method shows good robustness.
Next, we randomly remove different fractions of the data in each frame of LiDAR data in dataset 1. Figure 13 shows the error statistics. It can be seen that (1) the behavior under data reduction is essentially the same as under added noise, with the direction error more sensitive than the offset error. (2) When 3/4 of the original LiDAR data are removed, that is, when 135 data points remain per frame, the number of rejected frames gradually increases (Figure 13c).
Overall, these experiments, adding different levels of noise to dataset 1 and removing different fractions of its data, show that the proposed method is robust and can perform local path planning for the reactive navigation of underground vehicles.

4.2. Dataset 2

Dataset 2 contains only the straight driving mode, but it has more intersections, and more complex ones, than dataset 1. We adopt a map resolution of 0.5 m. Below, we extract nine feature frames from dataset 2 (Figure 14). The LiDAR data collection positions in Figure 14a–i correspond to points a–i in Figure 3c. It can be seen that the local paths planned by the proposed method are essentially the same as those planned by the Hough transform method: both find the centerline of the current driving laneway.
Figure 15 quantitatively evaluates the results in Figure 14, i.e., the offset and direction errors of the two methods. The mean absolute offset error of the proposed method is 0.2093 m and its mean absolute direction error is 0.0466 rad; for the Hough transform method, the corresponding values are 0.1271 m and 0.0543 rad. In summary, the accuracy of the proposed method is comparable to that of the Hough transform method, and all errors are within the allowable range.
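The offset and direction errors are defined via Figure 6, which gives a geometric construction rather than formulas. The sketch below is one plausible reading, taking the offset as the signed lateral distance of the planned path's start from the reference centerline and the direction error as the heading difference of the first segments; treat it as illustrative, not as the paper's exact definition:

```python
import math

def path_errors(planned, reference):
    """Offset error (m) and direction error (rad) of a planned local
    path against a reference centerline, both given as nearest-first
    point lists. An assumed reading of the paper's Figure 6."""
    (px, py), (qx, qy) = planned[0], planned[1]
    (rx, ry), (sx, sy) = reference[0], reference[1]
    heading_p = math.atan2(qy - py, qx - px)
    heading_r = math.atan2(sy - ry, sx - rx)
    # signed lateral offset of the planned start from the reference line
    offset = (-(px - rx) * math.sin(heading_r)
              + (py - ry) * math.cos(heading_r))
    # heading difference wrapped to (-pi, pi]
    direction = (heading_p - heading_r + math.pi) % (2 * math.pi) - math.pi
    return offset, direction
```

Averaging the absolute values of these two quantities over all frames yields mean absolute offset and direction errors of the kind reported above.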
Dataset 2 has 11,705 frames of LiDAR data. The following is a statistical analysis of the accuracy and efficiency of both methods on dataset 2. Figure 16 shows the statistical results of the proposed method and the Hough transform method; neither method rejects any frames. It can be seen that (1) the offset error of the proposed method fluctuates less than that of the Hough transform method, and the difference in absolute average offset error is small (Figure 16a). (2) The direction error of the proposed method also fluctuates less, although its absolute average direction error is larger (Figure 16b).
Table 4 shows the processing times of the proposed method and the Hough transform method on dataset 2. The proposed method processes dataset 2 approximately 6 times faster than the Hough transform method.
As with dataset 1, the robustness of the proposed method is evaluated by adding different levels of noise to dataset 2 and removing different fractions of its data. Figure 17 shows the error statistics for the noise experiments. Consistent with the results on dataset 1, noise affects the direction error more than the offset error, but its influence remains within the allowable range (Figure 17a,b). Since dataset 2 contains no turns, only three frames are rejected even at the 50% noise level (Figure 17c). Figure 18 shows the results of removing different fractions of the data from dataset 2; the error statistics are consistent with those obtained when adding noise.
Overall, the tests on datasets 1 and 2 show that the proposed method matches the accuracy of the Hough transform method, has good robustness, runs faster, and, unlike the Hough transform method, can also plan local paths at turning intersections.

5. Discussion

First, the influence of the map resolution on the proposed method is analyzed by testing its error under different map resolutions. Then, the application scope of the proposed method is extended.

5.1. Sensitivity Analysis of Map Resolution

In this section, we examine the performance of the proposed method under different map resolutions using datasets 1 and 2. Figure 19 shows the results on dataset 1. It can be seen that (1) as the map resolution (cell size) increases, the absolute average offset error and direction error both decrease and the errors fluctuate slightly less, but a few outlier errors grow (Figure 19a,b). (2) Rejected frames first appear at a map resolution of 0.5 m, and at a resolution of 1 m the number of rejected frames reaches 3590, approximately 25% of the total frames in dataset 1 (Figure 19c).
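The map resolution parameter can be made concrete with a sketch of the rasterization step: one frame of 2D LiDAR points becomes a binary occupancy grid whose cell size is the resolution. The grid conventions (origin placement, rounding) are assumptions:

```python
def to_binary_map(points, resolution):
    """Rasterize 2D LiDAR points (metres) into a binary occupancy grid
    with square cells of side `resolution` metres. A coarser (larger)
    resolution yields a smaller grid and, downstream, a simpler skeleton."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    cols = int((max(xs) - min_x) / resolution) + 1
    rows = int((max(ys) - min_y) / resolution) + 1
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        grid[int((y - min_y) / resolution)][int((x - min_x) / resolution)] = 1
    return grid
```

Halving the cell size roughly quadruples the cell count, which is consistent with the sharp growth in processing time at fine resolutions seen in Tables 5 and 6.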
Table 5 shows the processing time of the proposed method on dataset 1 under different map resolutions. As the map resolution increases, the time consumed drops sharply. Under the premise that no frames are rejected, and weighing error against efficiency, the optimal map resolution for dataset 1 is 0.3 m.
We analyze dataset 2 in the same way. Figure 20 shows the results of the proposed method on dataset 2 under different map resolutions, which are essentially consistent with those for dataset 1. The difference is that, as the map resolution increases, the few outlier errors do not grow. We refer to this as phenomenon 1.
Table 6 shows the processing time of the proposed method on dataset 2 under different map resolutions; the same trend as in Table 5 appears. However, comparing Tables 5 and 6 reveals another phenomenon: at the same map resolution, processing dataset 2 takes longer than processing dataset 1. We refer to this as phenomenon 2. Under the premise that no frames are rejected, and weighing error against efficiency, the optimal map resolution for dataset 2 is 0.5 m.
We first analyze and explain phenomenon 1. Why does the error of the proposed method shrink as the map resolution increases? Figure 21 shows the binary map under different map resolutions. As the map resolution increases, the skeleton of the binary map becomes simpler; that is, the extracted laneway centerline becomes coarser. The smaller the map resolution, the finer the generated centerline and the more sensitive it is to the shape of the laneway walls (Figure 21a); conversely, the larger the map resolution, the coarser the centerline and the less sensitive it is to the wall shape (Figure 21d). For the vehicle to run smoothly in the laneway, the driving path should be as smooth as possible. In general, the two walls of a laneway are uneven (Figure 3b), or there is chamber interference (Figure 2c). To reduce the influence of wall shape on the centerline, the map resolution should therefore be larger: the weaker this influence, the smaller the error. Since 75% of the LiDAR data in dataset 1 and all of the data in dataset 2 correspond to straight driving, matching the situation in Figure 21, the larger the map resolution, the smaller the mean error for both datasets.
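The skeletons discussed here are produced by a thinning step; the paper cites the Zhang–Suen family of thinning algorithms [31]. A compact pure-Python version of the classic two-subiteration Zhang–Suen scheme (a sketch, not the authors' MATLAB implementation) is:

```python
def zhang_suen_thin(grid):
    """One-pixel-wide skeleton of a binary grid (list of 0/1 rows)
    via the two-subiteration Zhang-Suen thinning algorithm."""
    img = [row[:] for row in grid]
    rows, cols = len(img), len(img[0])

    def neighbours(r, c):
        # clockwise from the pixel above: P2..P9
        return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r][c] != 1:
                        continue
                    n = neighbours(r, c)
                    b = sum(n)                     # nonzero neighbours
                    # number of 0 -> 1 transitions around the pixel
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                            for i in range(8))
                    if 2 <= b <= 6 and a == 1:
                        if step == 0:
                            ok = (n[0] * n[2] * n[4] == 0
                                  and n[2] * n[4] * n[6] == 0)
                        else:
                            ok = (n[0] * n[2] * n[6] == 0
                                  and n[0] * n[4] * n[6] == 0)
                        if ok:
                            to_clear.append((r, c))
            for r, c in to_clear:
                img[r][c] = 0
                changed = True
    return img
```

Applied to the binary laneway map, the surviving pixels form the one-pixel-wide skeleton from which the centerline is extracted; the coarser the grid, the fewer pixels this loop has to visit.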
Second, why does dataset 1 show a few outlier errors as the map resolution increases while dataset 2 does not? Figure 22 shows the binary map and skeleton of the LiDAR data from Figure 10b under different map resolutions; the corresponding driving mode is a right turn. As the map resolution increases, the path branching to the right becomes shorter and shorter until it disappears entirely, and the error corresponding to Figure 22d is the largest. Therefore, in turning mode, the larger the map resolution, the larger the error. Dataset 1 contains 3607 frames of LiDAR data in turning mode, so as the map resolution increases their errors grow, producing a few outlier errors; dataset 2 contains no turning frames, so this does not occur.
Phenomenon 1 therefore suggests choosing a larger map resolution in straight mode and a smaller map resolution in turning mode.
We next analyze and explain phenomenon 2. Why does the proposed method take different amounts of time on dataset 1 and dataset 2 at the same map resolution? Comparing the LiDAR data in Figure 7 (dataset 1) and Figure 14 (dataset 2) shows that each frame in dataset 2 covers a larger area, so the binary map generated at the same resolution is larger and contains more grid cells. This indicates that the running time of the proposed method grows with the number of grid cells in the binary map.

5.2. Prospect

The underground laneway is a corridor-type structure, and its centerline can serve as the vehicle's driving path. Since the proposed method is an image-based path planning method, it can also be used for global path planning in underground mines. Figure 23a,b show the global paths extracted by the proposed method from the maps corresponding to dataset 1 (Figure 2d) and dataset 2 (Figure 3a), respectively, at a map resolution of 0.2 m. The red line in each figure is the centerline of the main laneway extracted by the proposed method.
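The extracted centerline is smoothed before use; the paper points to Bézier and B-spline fitting [32,33]. As a lighter illustration of the same idea, a Chaikin-style corner-cutting pass (a stand-in, not the paper's method) could be:

```python
def smooth_path(path, iterations=3):
    """Smooth a polyline by Chaikin-style corner cutting: each pass
    inserts points at the 1/4 and 3/4 marks of every segment while
    keeping the endpoints fixed."""
    pts = list(path)
    for _ in range(iterations):
        out = [pts[0]]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(pts[-1])
        pts = out
    return pts
```

Each pass rounds off the sharp corners that pixel-grid skeletons inevitably contain, which serves the same purpose as the spline fitting the paper cites.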
The proposed method is well suited to corridor-like structured environments, especially underground mines, and can plan both local and global paths. As mining advances, rock mass stability deteriorates [36,37], and unmanned driving technology for mining vehicles will gradually be adopted.

6. Conclusions

The method proposed in this paper solves the problem of local path planning for the reactive navigation of underground vehicles. It outperforms the Hough transform method and can plan local paths not only when driving straight but also when turning. Beyond the underground mine environment, it can be used in any environment with corridor-like structures. The effectiveness and robustness of the proposed method are verified on two real datasets. The experimental results show that (1) in straight driving mode, the proposed method matches the accuracy of the Hough transform method, and both accurately recover the centerline of the current laneway. (2) In turning mode, the proposed method overcomes the Hough transform method's inability to plan a local path and still provides an accurate driving direction for the vehicle. (3) Adding different levels of noise to the original data and removing different fractions of the data show that the proposed method is robust and stable. (4) The proposed method is far more efficient than the Hough transform method, greatly reducing the time required by the vehicle control program. Additionally, the proposed method can be used for the global path planning of underground maps and can thus serve as a path planning method for the absolute navigation of underground vehicles.
In future work, we intend to integrate the proposed method with a control method to realize the reactive navigation of underground vehicles. The first step is to simulate the underground environment on a simulation platform [38] and realize the reactive navigation of underground vehicles there; the second step is to apply the integrated system to real underground vehicles.

Author Contributions

Y.J. participated in data analysis, participated in the design of the study and drafted the manuscript; P.P. helped draft the manuscript; L.W. conceived of the study, designed the study, coordinated the study, and helped draft the manuscript; J.W. (Jiaheng Wang) and J.W. (Jiaxi Wu) collected field data; Y.L. carried out the statistical analyses. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 52104170), the National Key Research and Development Program of China (Grant No. 2022YFC2904105), the Postgraduate Scientific Research Innovation Project of Hunan Province (Grant No. CX20200243), and the Fundamental Research Funds for the Central Universities of Central South University (Grant No. 2020zzts194).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roberts, J.M.; Duff, E.S.; Corke, P.I. Reactive navigation and opportunistic localization for autonomous underground mining vehicles. Inf. Sci. 2002, 145, 127–146. [Google Scholar] [CrossRef]
  2. Dragt, B.J.; Camisani-Calzolari, F.R.; Craig, I.K. An Overview of the Automation of Load-Haul-Dump Vehicles in an Underground Mining Environment; International Federation of Automatic Control (IFAC): New York, NY, USA, 2005; Volume 38, ISBN 008045108X. [Google Scholar]
  3. Grisetti, G.; Stachniss, C.; Burgard, W. Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling. In Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; Volume 2005, pp. 2432–2437. [Google Scholar]
  4. Roh, H.C.; Sung, C.H.; Kang, M.T.; Chung, M.J. Fast SLAM using polar scan matching and particle weight based occupancy grid map for mobile robot. In Proceedings of the URAI 2011—2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence, Incheon, Republic of Korea, 23–26 November 2011; pp. 756–757. [Google Scholar]
  5. Gustafson, A. Automation of Load Haul Dump Machines; Luleå University of Technology: Luleå, Sweden, 2011. [Google Scholar]
  6. Roberts, J.M.; Duff, E.S.; Corke, P.I.; Sikka, P.; Winstanley, G.J.; Cunningham, J. Autonomous control of underground mining vehicles using reactive navigation. Proc.—IEEE Int. Conf. Robot. Autom. 2000, 4, 3790–3795. [Google Scholar]
  7. Duff, E.S.; Roberts, J.M.; Corke, P.I. Automation of an Underground Mining Vehicle using Reactive Navigation and Opportunistic Localization. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; Volume 4, pp. 3775–3780. [Google Scholar]
  8. Vasilopoulos, V.; Pavlakos, G.; Schmeckpeper, K.; Daniilidis, K.; Koditschek, D.E. Reactive navigation in partially familiar planar environments using semantic perceptual feedback. Int. J. Robot. Res. 2022, 41, 85–126. [Google Scholar] [CrossRef]
  9. Ohradzansky, M.T.; Humbert, J.S. Lidar-Based Navigation of Subterranean Environments Using Bio-Inspired Wide-Field Integration of Nearness. Sensors 2022, 22, 849. [Google Scholar] [CrossRef] [PubMed]
  10. Cheng, C.; Sha, Q.; He, B.; Li, G. Path planning and obstacle avoidance for AUV: A review. Ocean Eng. 2021, 235, 109355. [Google Scholar] [CrossRef]
  11. Tan, C.S.; Mohd-Mokhtar, R.; Arshad, M.R. A Comprehensive Review of Coverage Path Planning in Robotics Using Classical and Heuristic Algorithms. IEEE Access 2021, 9, 119310–119342. [Google Scholar] [CrossRef]
  12. Ayawli, B.B.K.; Chellali, R.; Appiah, A.Y.; Kyeremeh, F. An Overview of Nature-Inspired, Conventional, and Hybrid Methods of Autonomous Vehicle Path Planning. J. Adv. Transp. 2018, 2018, 8269698. [Google Scholar] [CrossRef]
  13. Liu, L.S.; Lin, J.F.; Yao, J.X.; He, D.W.; Zheng, J.S.; Huang, J.; Shi, P. Path Planning for Smart Car Based on Dijkstra Algorithm and Dynamic Window Approach. Wirel. Commun. Mob. Comput. 2021, 2021, 8881684. [Google Scholar] [CrossRef]
  14. Li, C.; Huang, X.; Ding, J.; Song, K.; Lu, S. Global path planning based on a bidirectional alternating search A* algorithm for mobile robots. Comput. Ind. Eng. 2022, 168, 108123. [Google Scholar] [CrossRef]
  15. Park, J.W.; Kwak, H.J.; Kang, Y.C.; Kim, D.W. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance. Comput. Intell. Neurosci. 2016, 2016, 6047906. [Google Scholar] [CrossRef] [Green Version]
  16. Park, B.; Choi, J.; Chung, W.K. Roadmap coverage improvement using a node rearrangement method for mobile robot path planning. Adv. Robot. 2012, 26, 989–1012. [Google Scholar] [CrossRef]
  17. Wang, H.; Li, G.; Hou, J.; Chen, L.; Hu, N. A Path Planning Method for Underground Intelligent Vehicles Based on an Improved RRT* Algorithm. Electronics 2022, 11, 294. [Google Scholar] [CrossRef]
  18. Asensio, J.R.; Montiel, J.M.M.; Montano, L. Goal directed reactive robot navigation with relocation using laser and vision. In Proceedings of the IEEE International Conference on Robotics and Automation, Detroit, MI, USA, 10–15 May 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 4, pp. 2905–2910. [Google Scholar]
  19. Pomerleau, D.A. Neural Network Perception for Mobile Robot Guidance; Springer: New York, NY, USA, 1993. [Google Scholar]
  20. Dubrawski, A.; Crowley, J.L. Self-supervised neural system for reactive navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; IEEE: Piscataway, NJ, USA, 1994; pp. 2076–2081. [Google Scholar]
  21. Larsson, J.; Broxvall, M.; Saffiotti, A. Laser-based corridor detection for reactive navigation. Ind. Robot 2008, 35, 69–79. [Google Scholar] [CrossRef] [Green Version]
  22. Larsson, J.; Broxvall, M.; Saffiotti, A. Laser based intersection detection for reactive navigation in an underground mine. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Nice, France, 22–26 September 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 2222–2227. [Google Scholar]
  23. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; Volume 2016, pp. 1271–1278. [Google Scholar]
  24. Feito, F.; Torres, J.C.; Ureña, A. Orientation, simplicity, and inclusion test for planar polygons. Comput. Graph. 1995, 19, 595–600. [Google Scholar] [CrossRef]
  25. Peng, P.; Wang, L. Targeted location of microseismic events based on a 3D heterogeneous velocity model in underground mining. PLoS ONE 2019, 14, e0212881. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Peng, P.; Jiang, Y.; Wang, L.; He, Z. Microseismic event location by considering the influence of the empty area in an excavated tunnel. Sensors 2020, 20, 574. [Google Scholar] [CrossRef] [Green Version]
  27. Zhang, Z.; Zhang, X.; Sun, Y.; Zhang, P. Road centerline extraction from very-high-resolution aerial image and LiDAR data based on road connectivity. Remote Sens. 2018, 10, 1284. [Google Scholar] [CrossRef] [Green Version]
  28. Cao, C.; Sun, Y. Automatic road centerline extraction from imagery using road GPS data. Remote Sens. 2014, 6, 9014–9033. [Google Scholar] [CrossRef] [Green Version]
  29. Zhou, T.; Sun, C.; Fu, H. Road information extraction from high-resolution remote sensing images based on road reconstruction. Remote Sens. 2019, 11, 79. [Google Scholar] [CrossRef] [Green Version]
  30. Cornea, N.D.; Silver, D.; Min, P. Curve-Skeleton Properties, Applications, and Algorithms. IEEE Trans. Vis. Comput. Graph. 2007, 13, 530–548. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, T.Y.; Suen, C.Y. A modified fast parallel algorithm for thinning digital patterns. Pattern Recognit. Lett. 1988, 7, 99–106. [Google Scholar]
  32. Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Hoshino, Y.; Peng, C.C. Path smoothing techniques in robot navigation: State-of-the-art, current and future challenges. Sensors 2018, 18, 3170. [Google Scholar] [CrossRef] [PubMed]
  33. Borges, C.F.; Pastva, T. Total least squares fitting of Bézier and B-spline curves to ordered data. Comput. Aided Geom. Des. 2002, 19, 275–289. [Google Scholar] [CrossRef]
  34. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-Transform and Extended Ransac Algorithms for Automatic Detection of 3D Building Roof Planes From Lidar Data. In Proceedings of the ISPRS Workshop on Laser Scanning 2007 and SilviLaser 2007, Espoo, Finland, 12–14 September 2007; Volume XXXVI, pp. 407–412. [Google Scholar]
  35. Borrmann, D.; Elseberg, J.; Lingemann, K.; Nüchter, A. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 02003. [Google Scholar] [CrossRef]
  36. Zhao, Y.; Zhang, C.; Wang, Y.; Lin, H. Shear-related roughness classification and strength model of natural rock joint based on fuzzy comprehensive evaluation. Int. J. Rock Mech. Min. Sci. 2021, 137, 104550. [Google Scholar] [CrossRef]
  37. Zhao, Y.; Liu, Q.; Zhang, C.; Liao, J.; Lin, H.; Wang, Y. Coupled seepage-damage effect in fractured rock masses: Model development and a case study. Int. J. Rock Mech. Min. Sci. 2021, 144, 104822. [Google Scholar] [CrossRef]
  38. Jiang, Y.; Peng, P.; Wang, L.; Wang, J.; Liu, Y.; Wu, J. Modeling and Simulation of Unmanned Driving System for Load Haul Dump Vehicles in Underground Mines. Sustainability 2022, 14, 15186. [Google Scholar] [CrossRef]
Figure 1. The LHD and sensor layout.
Figure 2. The laneway of Mine A, where dataset 1 was collected. (a) The view of the intersection; (b) the view of the orepass; (c) the view of the firefighting chamber; (d) the map. Note: (a–c) correspond to the views at the three red arrows in (d), respectively. A–G in (d) are the sampling points of the map.
Figure 3. The laneway of Mine B, where dataset 2 was collected. (a) The map; (b) the view of the laneway; (c) a zoomed-in view of the map. Note: a–i in (c) are the sampling points of the map. A and B in (a) are the starting and ending points of the laneway.
Figure 4. Flowchart of the proposed algorithm. (a–f) are the schematic diagrams corresponding to each step of the proposed method.
Figure 5. Schematic diagram of processing method for abnormal points at intersections.
Figure 6. Schematic diagram of error calculation.
Figure 7. The local path planning of different methods for dataset 1 when traveling straight; the red and green lines in (a–f) show the local paths planned by the proposed method and the Hough transform method, respectively.
Figure 8. The offset error and direction error of the different methods corresponding to (a–f) in Figure 7.
Figure 9. The local path planning of the proposed method for dataset 1 when turning left; the red lines in (a–c) show the local paths planned by the proposed method.
Figure 10. The local path planning of the proposed method for dataset 1 when turning right; the red lines in (a–e) show the local paths planned by the proposed method.
Figure 11. The error of the proposed method and the Hough transform method for dataset 1: (a) offset error; (b) direction error; (c) number of rejected frames. Note: The red circles in (a,b) represent the corresponding absolute average error, respectively.
Figure 12. Result from dataset 1 with different levels of added noise: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 13. Result from dataset 1 with different data rates: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 14. The local path planning of different methods for dataset 2; the red and green lines in (a–i) show the local paths planned by the proposed method and the Hough transform method, respectively.
Figure 15. The offset error and direction error of the different methods corresponding to (a–i) in Figure 14.
Figure 16. The error of the proposed method and the Hough transform method for dataset 2: (a) offset error; (b) direction error.
Figure 17. Result from dataset 2 with different levels of added noise: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 18. Result from dataset 2 with different data rates: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 19. Error of proposed method for dataset 1 under different map resolutions: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 20. Error of proposed method for dataset 2 under different map resolutions: (a) offset error; (b) direction error; (c) number of rejected frames.
Figure 21. The binary map and its skeleton for the LiDAR data corresponding to Figure 4 under different map resolutions. The map resolutions corresponding to (a–d) are 0.1 m, 0.3 m, 0.5 m, and 1 m, respectively.
Figure 22. The binary map and its skeleton for the LiDAR data corresponding to Figure 10b under different map resolutions. The map resolutions corresponding to (a–d) are 0.1 m, 0.3 m, 0.5 m, and 1 m, respectively.
Figure 23. Global path planning for the proposed method; (a,b) denote the extraction of the laneway centerline of the maps corresponding to dataset 1 and dataset 2, respectively. Note: The black line in the map is the extracted map skeleton of the proposed method, and the red line in the maps is the smoothed centerline of the main laneway.
Table 1. The corresponding errors in Figure 9.

Figure 9    Offset Error (m)    Direction Error (rad)
(a)         −0.1154             −0.0102
(b)          0.1300              0.0771
(c)         −0.2812             −0.6213
Table 2. The corresponding errors in Figure 10.

Figure 10   Offset Error (m)    Direction Error (rad)
(a)          0.0143             −0.1461
(b)         −0.4274             −0.3398
(c)         −0.1696              0.0069
(d)          0.1871              0.1028
(e)          0.1477             −0.0920
Table 3. The time consumption of the proposed method and the Hough transform method for dataset 1.

Method             Total Time (s)    Average Time (s)
Hough transform    5295.17           0.366
Proposed           409.83            0.028
Table 4. The time consumption of the proposed method and the Hough transform method for dataset 2.

Method             Total Time (s)    Average Time (s)
Hough transform    4251.26           0.363
Proposed           730.43            0.062
Table 5. The time consumption of the proposed method for dataset 1 under different map resolutions.

Resolution (m)    Total Time (s)    Average Time (s)
0.1               3533.86           0.244
0.3               409.83            0.028
0.5               164.25            0.011
1                 53.1              0.004
Table 6. The time consumption of the proposed method for dataset 2 under different map resolutions.

Resolution (m)    Total Time (s)    Average Time (s)
0.1               6548.62           0.559
0.3               730.43            0.062
0.5               284               0.024
1                 84.18             0.007

Share and Cite

MDPI and ACS Style

Jiang, Y.; Peng, P.; Wang, L.; Wang, J.; Wu, J.; Liu, Y. LiDAR-Based Local Path Planning Method for Reactive Navigation in Underground Mines. Remote Sens. 2023, 15, 309. https://doi.org/10.3390/rs15020309
