#### *3.1. Pointcloud Extraction Structure*

We denote the pointclouds extracted and mapped to geometric contours as $\{P^k_{cloud} \subset Q \mid k \in \mathbb{Z}^+\}$, and the grid as *Grid*. In most laser-based SLAM systems, grids are used for accessibility analysis, which means that pointcloud information needs to be recorded in both the global layer L*global* and the local layer L*local*. Although incremental methods for updating the pointcloud have been proposed, the computational cost of these updates remains high for complex terrain and high-resolution grids. Therefore, a general sparsification module, denoted F, is used to build the holed-structure local grid and to update the pointcloud incrementally.

The dilated convolution module [49] is commonly used in neural networks to enlarge the receptive field of an image, and its structure suggests another way to handle the pointcloud in the local grid: as shown in Figure 3, the pointcloud is stored in complementary holed grids.

**Figure 3.** A schematic diagram of the holed grid structure used for the local update. The pointcloud is updated through the complementary holed grids between every two data frames and merged into the global layer.

The holed-structure local grid is defined as *Gridd*, and the module F maintains cells *Sub* ⊂ *Gridd*. When obstacles are detected by the LiDAR, the sensor data S is transferred to the *Sub* cells, and all *Sub* cells together form *Gridd*. We denote the voxel size as *VS*; this value affects the density of the pointcloud and is set to 0.15 m in this paper. Once *Gridd* is formed, a PCL filter with a kernel size of (*VS*, *VS*, *VS*) is applied to reduce the size of the pointcloud, and $\overline{S}$ denotes the remaining pointcloud in *Gridd*. After the local pointcloud is formed, the points in $\overline{S}$ are classified as obstacle or free, and the classified $\overline{S}$ is denoted as $\{P^k_{cloud}\}$. At this step, the fully classified *Gridd* is denoted as L*local*, which is then integrated into L*global*. The complementary holed grids are generated in alternating data frames, so the final L*global* still contains all of the pointcloud information. The module F is shown in Algorithm 1.
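The voxel filtering step can be illustrated with a short sketch. The snippet below (our own illustration, not the authors' code) downsamples the pointcloud held by a *Sub* cell with the PCL `VoxelGrid` filter, using a leaf size of *VS* = 0.15 m; the function and variable names are hypothetical.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Downsample the pointcloud stored in a Sub cell with a voxel size of VS = 0.15 m.
pcl::PointCloud<pcl::PointXYZ>::Ptr downsampleCell(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw_cloud, float voxel_size = 0.15f) {
  pcl::VoxelGrid<pcl::PointXYZ> voxel_filter;
  voxel_filter.setInputCloud(raw_cloud);
  // Kernel size (VS, VS, VS): one representative point is kept per voxel.
  voxel_filter.setLeafSize(voxel_size, voxel_size, voxel_size);
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered_cloud(new pcl::PointCloud<pcl::PointXYZ>);
  voxel_filter.filter(*filtered_cloud);
  return filtered_cloud;
}
```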


As shown in Figure 4, a *Sub* is a cell in the grid that stores part of the pointcloud information of the 3D space. The standard practice is to form the grid from all cells, but in this paper a grid with a holed structure, shown in Figure 5a,b, is used so that only part of the cells participate in the calculation. For a grid of a given size, the number of cells depends on its resolution. As shown in Figure 6, the higher the resolution, the more cells there are, and the denser the grid, the better its mapping effect; however, the amount of computation increases dramatically. The holed-structure grid therefore saves computation, because only about half of the cells participate in the calculation in each data frame.
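One way to realize the complementary holed structure is a checkerboard selection of cells that alternates between data frames, so that two consecutive frames together cover the whole grid. The sketch below is our own reading of Figures 3 and 5, not the authors' implementation; the function name is hypothetical.

```cpp
#include <cstdint>

// Decide whether cell (row, col) of the local grid participates in the update
// for the given data frame. Odd and even frames use complementary cell sets,
// so the union of two consecutive frames covers every cell of the grid.
bool cellActiveInFrame(int row, int col, std::uint64_t frame_index) {
  const bool even_cell = ((row + col) % 2 == 0);   // checkerboard parity of the cell
  const bool even_frame = (frame_index % 2 == 0);  // parity of the data frame
  return even_cell == even_frame;
}
```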

**Figure 4.** (**a**) is the spatial 3D pointcloud; (**b**) is the mapping of the pointcloud information in the local grid.

**Figure 5.** The complementary grids with holed structure.

**Figure 6.** Grid with (**a**) higher resolution, and (**b**) lower resolution.

For the obstacle extraction and obstacle vertex reconstruction processes in the v-graph, the sensor information S is gridded and stored by the module F to obtain $\overline{S}$. After that, $\overline{S}$ is converted into a binary image I. To enhance robustness, the image I is blurred, and obstacle vertices are then extracted through image processing [50] to generate polygons $\{P^k_{contour} \subset Q \mid k \in \mathbb{Z}^+\}$. The polygon extraction algorithm is shown in Algorithm 2.

The kernel size of the box filter in Algorithm 2 is defined by the following equation, where *RW* and *RL* are the width and length of the robot, respectively, and *VS* is the voxel size (0.15 m in this paper):

$$(kernel\ width,\ kernel\ height) = \left\lceil \frac{\max(R_W, R_L)/2}{V_S} \right\rceil + V_S \tag{1}$$

Figure 7 shows the blurred image of the LiDAR-mapped obstacle geometry and the time consumption of the laser process. As can be seen from Figure 7a, the holed structure (using module F) does not affect the image after blurring: compared with the original (without the holed structure), the generated contour approximates the original image. The projected outline of the obstacle is thicker because of the blurring, and the details inside the outline are lost.

According to Figure 7b, the time required for the laser process includes gathering the raw pointcloud and downsampling it. For the same area, the time consumed when using the holed structure changes more gently, while the curve of the process without the holed structure is steeper and fluctuates strongly. The time taken to process an image is depicted in Figure 7c. The total image-processing time includes mapping the pointcloud in L*local* into the image I, blurring the image, and initially extracting the obstacle contour points. About 20% of the total image-processing time is spent on blurring the outlines of obstacles.

**Figure 7.** (**a**) shows the blurred images from which the obstacle vertices are extracted; FAR Planner generates the images without the holed structure, while ours generates them through the holed structure. (**b**) shows the time consumption of the laser process and (**c**) the time consumption of the image process.

**Algorithm 2** Polygon Extraction.

**Input:** $\overline{S}$ ∈ L*local*
**Output:** *Polygons*: $\{P^k_{contour}\}$
1: *input* $\overline{S}$
2: *Create binary image* I *from points in* $\overline{S}$
3: *Apply box filter with kernel size of* (*kernel width*, *kernel height*) *to blur image* I
4: *Extract polygons* $\{P^k_{contour}\}$ *based on* [50]
5: **for** *each* $P^k_{contour}$ **do**
6:   *Downsample vertices in* $P^k_{contour}$ *based on* [51]
7: **end for**
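As a concrete (hedged) illustration of Algorithm 2, the sketch below uses OpenCV, which the paper does not name explicitly; it only cites generic image processing [50] and vertex downsampling [51]. The binary image is blurred with a box filter, outer contours are extracted, and each contour is downsampled with a Douglas-Peucker approximation. The function name and the `epsilon` parameter are illustrative; the kernel size passed in would follow Equation (1).

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Extract obstacle polygons from a binary occupancy image.
// kernel_w / kernel_h follow Equation (1); epsilon controls vertex downsampling.
std::vector<std::vector<cv::Point>> extractPolygons(const cv::Mat& binary_image,
                                                    int kernel_w, int kernel_h,
                                                    double epsilon = 2.0) {
  // Blur the image with a box filter to merge nearby occupied pixels.
  cv::Mat blurred;
  cv::boxFilter(binary_image, blurred, -1, cv::Size(kernel_w, kernel_h));
  cv::threshold(blurred, blurred, 0, 255, cv::THRESH_BINARY);

  // Extract the outer contours of the blurred obstacles.
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(blurred, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

  // Downsample the vertices of each contour (Douglas-Peucker approximation).
  std::vector<std::vector<cv::Point>> polygons;
  for (const auto& contour : contours) {
    std::vector<cv::Point> poly;
    cv::approxPolyDP(contour, poly, epsilon, /*closed=*/true);
    polygons.push_back(poly);
  }
  return polygons;
}
```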

#### *3.2. Simplified Complex Contours Algorithm*

In Section 3.1, the layer L was constructed to store the pointcloud obtained from the LiDAR. In this section, polygons are extracted and simplified based on L.

For the graph update method of the two-layer architecture shown in Figure 8, we define G*local* and G*global*: G*local* is the local layer around the robot, and G*global* is the layer covering the entire observed environment. For each data frame, the sensor information S generates G*local*, which is then merged into G*global*. Note that since the module F is used to store the sensor information S, $\overline{S}$ ∈ L*local* can now be used to merge into G*global* at a lower cost.

It is known that the computational complexity of constructing a v-graph is O(*n*<sup>2</sup> log *n*) [52], where *n* is the number of vertices in the graph. In the normal case, the cost of building a local graph is small enough that computational resources can be allocated to each data frame in an incremental update manner. However, if the environment contains numerous complex obstacles, redundant nodes are generated during each v-graph update, leading to a significant increase in the number of edges connecting the nodes. To keep the v-graph update effective, a further sparsification of complex contours is required.

Constructing local layers: The $\overline{S}$ ∈ L*local* is converted into local polygons $\{P^k_{contour}\}$, which are used to construct a local visibility graph G*local*. Note that complex polygons, as shown in Figure 9, contain many vertices connected by short edges. These redundant vertices create many useless edges, so a lot of computing resources are wasted on unnecessary vertices and edges during path search in complex terrain.

**Figure 9.** The red solid lines are redundant edges which connect with the robot, and the purple lines are the redundant edges from obstacles themselves.

A threshold *η* is set to control the number of vertices of complex, large local polygons. When the number of vertices of a polygon $P^k_{contour}$ is greater than *η*, the vertices are reduced, which optimizes the geometric outline of large, complex obstacles while retaining the geometric characteristics of small obstacles. As shown in Figure 10, the consecutive vertices inside the red circles in Figure 10a should be eliminated, but the current method does not eliminate them well, resulting in more vertices and edges in the v-graph. Compared to Figure 10a, the optimized version in Figure 10b has fewer vertices.


**Figure 10.** (**a**) shows the obstacle vertices extracted after pointcloud mapping. (**b**) shows the remaining obstacle vertices after optimizing (**a**).

When the number of vertices of an obstacle in the local layer is greater than *η*, the algorithm first records the longest distance between two consecutive vertices of the obstacle, denoted *distmax*. It then traverses consecutive vertex triples *vertexi*−1, *vertexi*, *vertexi*+1 of the obstacle and computes the lengths of the two adjacent edges, denoted *dist*(*i*,*i*−1) and *dist*(*i*,*i*+1). If both *dist*(*i*,*i*−1) and *dist*(*i*,*i*+1) are less than 0.1 × *distmax*, then *vertexi* is an invalid (excess) vertex; in this case, *vertexi* is deleted and its connecting edges *edge*(*i*−1,*i*) and *edge*(*i*,*i*+1) are destroyed.
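A minimal sketch of this elimination rule is given below (our own illustration following the description above and Algorithm 3, not the authors' code): vertices whose two adjacent edges are both shorter than 0.1 × *distmax* are removed, and the pass is repeated until no vertex qualifies. The `eta` and `ratio` parameters correspond to *η* and the 0.1 factor.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vertex2D { double x, y; };

static double edgeLength(const Vertex2D& a, const Vertex2D& b) {
  return std::hypot(a.x - b.x, a.y - b.y);
}

// Remove vertices whose two adjacent edges are both shorter than ratio * dist_max.
// Applied only when the contour has more than eta vertices (threshold from the paper).
void simplifyContour(std::vector<Vertex2D>& contour, std::size_t eta, double ratio = 0.1) {
  if (contour.size() <= eta) return;

  // dist_max: longest edge between two consecutive vertices of the closed contour.
  double dist_max = 0.0;
  for (std::size_t i = 0; i < contour.size(); ++i)
    dist_max = std::max(dist_max, edgeLength(contour[i], contour[(i + 1) % contour.size()]));

  bool removed = true;
  while (removed) {
    removed = false;
    std::size_t i = 0;
    while (i < contour.size() && contour.size() > 3) {   // keep at least a triangle
      const Vertex2D& prev = contour[(i + contour.size() - 1) % contour.size()];
      const Vertex2D& next = contour[(i + 1) % contour.size()];
      if (edgeLength(prev, contour[i]) < ratio * dist_max &&
          edgeLength(contour[i], next) < ratio * dist_max) {
        contour.erase(contour.begin() + i);              // drop the redundant vertex
        removed = true;
      } else {
        ++i;
      }
    }
  }
}
```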

Since the simplified complex contours algorithm only works on the G*local*, the v-graph update process will not be slowed down by the accumulation of the number of nodes in the G*global*.

Update the global layer: After G*local* is constructed, G*local* and G*global* are fused. The strategy is to take the parts of G*global* that overlap G*local* and associate the vertex positions in G*local* with those in G*global*. The Euclidean distance is used to associate vertices across the two layers, and the associated vertices are recorded. The entire graph-updating procedure is given in Algorithm 3, and the final obstacle contours and edges are shown in Figure 11.

For two given points a(*ax*, *ay*, *az*) and b(*bx*, *by*, *bz*), the *distance*(*a*, *b*) in Algorithm 3 is defined as follows:

$$distance(a, b) = \sqrt{(a\_x - b\_x)^2 + (a\_y - b\_y)^2 + (a\_z - b\_z)^2} \tag{2}$$
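A simple sketch of this association step is shown below (our own illustration; the association radius `assoc_radius` is a hypothetical parameter, as the paper does not state its value): each local vertex is matched to its nearest global vertex by the Euclidean distance of Equation (2), reused if close enough, and otherwise inserted as a new global vertex.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vertex3D { double x, y, z; };

static double distance3d(const Vertex3D& a, const Vertex3D& b) {
  return std::sqrt((a.x - b.x) * (a.x - b.x) +
                   (a.y - b.y) * (a.y - b.y) +
                   (a.z - b.z) * (a.z - b.z));   // Equation (2)
}

// Associate each local-layer vertex with its nearest global-layer vertex.
// If the nearest global vertex is within assoc_radius, reuse it (update its
// position); otherwise insert the local vertex as a new global vertex.
void mergeLocalIntoGlobal(const std::vector<Vertex3D>& local_vertices,
                          std::vector<Vertex3D>& global_vertices,
                          double assoc_radius) {
  for (const auto& lv : local_vertices) {
    double best = std::numeric_limits<double>::max();
    std::size_t best_idx = 0;
    for (std::size_t j = 0; j < global_vertices.size(); ++j) {
      const double d = distance3d(lv, global_vertices[j]);
      if (d < best) { best = d; best_idx = j; }
    }
    if (best < assoc_radius)
      global_vertices[best_idx] = lv;   // associate: keep the newer observation
    else
      global_vertices.push_back(lv);    // unseen vertex: extend the global layer
  }
}
```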

**Algorithm 3** Visibility Graph Update.

**Input:** $\overline{S}$ ∈ L*local*, *graph* G
**Output:** *Updated graph* G
1: $\{P^k_{contour}\}$ ← *Polygon Extraction*($\overline{S}$); // *from Algorithm 2*
2: **for** *each* $P^k_{contour}$ **do**
3:   **if** *the number of vertices* > *η* **then**
4:     **for** *each vertex in contour* **do**
5:       *distmax* = *max*(*distance*(*vertexi*, *vertexi*+1))
6:     **end for**
7:     **while** *true* **do**
8:       **for** *each vertex in vertices* **do**
9:         *dist*(*i*,*i*−1) = *distance*(*vertexi*−1, *vertexi*)
10:        *dist*(*i*,*i*+1) = *distance*(*vertexi*, *vertexi*+1)
11:        **if** *dist*(*i*,*i*±1) < 0.1 × *distmax* **then**
12:          *Eliminate vertexi*
13:          *Eliminate unnecessary edge*(*i*−1,*i*) *and edge*(*i*,*i*+1)
14:        **end if**
15:      **end for**
16:      **if** *all dist*(*i*,*i*±1) ≥ 0.1 × *distmax* **then**
17:        *break*
18:      **end if**
19:    **end while**
20:  **else**
21:    *continue*
22:  **end if**
23: **end for**
24: *Associate vertices* $P^k_{contour}$ *in the* G*global*
25: *Update the visibility graph* G

**Figure 11.** An illustration of the sparse v-graph. The edges (orange) that head into at least one polygon from the shaded angle are eliminated, while the blue ones are kept. After eliminating the green vertices, the dotted blue edges are removed from G*local*.

#### *3.3. Path-Planning Based on Bidirectional Breadth-First Search*

In FAR Planner, the goal point *Pgoal* is used as a vertex, and the Euclidean distance is used as the score to update the parent node of *Pgoal*. Although a path can be found in an unknown environment, much of the spatial search range is wasted: in many cases, the robot explores a large amount of unnecessary space until it finally reaches the goal. As shown in Figure 12, the robot travels from position 0 (start) to position 1, resulting in unnecessary exploration space.

A bidirectional breadth-first search (bidirectional BFS) structure is combined with the v-graph to search for a path: a forward search advances from a vertex on an edge connected to the robot, while a backward search simultaneously starts from *Pgoal* towards the robot's current position. This minimizes the amount of unnecessary exploration space.

During planning, it is assumed that there are no obstacles in the unknown area where *Pgoal* is located. *Pgoal* uses geometric collision checking to establish edges with the existing vertices $\{P_{node} \mid P_{node} \subset G_{global}\}$ in the v-graph, so that *Pgoal* is connected to the vertices of the discovered obstacles, as shown in Figure 13a. One-way BFS usually wastes search space in unknown or partially known environments, because it starts from the nodes connected to the robot, reaches the target point according to the cost, and then iterates back to the robot position through the parent nodes of the target point.

As shown in Figure 13b,c, the one-way BFS enters a fork while planning the global path from the start point to the end point, which increases the search space. The result is shown in Figure 13d: from the red point to the green point, the one-way search wastes a huge amount of space. Therefore, this paper embeds the goal in the v-graph, associates it with the existing vertices in the graph, and adopts a bidirectional breadth-first algorithm for the path search.

**Figure 12.** The robot travels from position 0 to position 1, and the red dotted box represents the wasted exploration space during navigation.

The bidirectional BFS structure is shown in Algorithm 4, in which the two searches are divided into forward and backward according to the search direction (the forward search goes from the robot position to *Pgoal*, and the backward search from *Pgoal* to the robot position).

In Algorithm 4, *parentF*(·) and *parentB*(·) are the functions returning the forward and backward parent of a node. *QF* and *QB* are the min-priority queues of the forward and backward searches, ordered by *gF* and *gB*, respectively. *μ* is the cost of the best path found so far (initially set to ∞): whenever a node reached in one search has already been expanded in the other search, *μ* is updated if a better path passes through that node. *gF* is the current distance from the start and *gB* the current distance from *Pgoal*. *topF* and *topB* are the distances of the top nodes in the forward and backward queues, respectively. The STEP function in Algorithm 5 is responsible for advancing the search through the v-graph and updating *μ*.

One of the benefits of bidirectional search is that it can, to some extent, avoid the wasted search space caused by entering invalid forks. As shown in Figure 13e, the globally planned path no longer passes through the fork, thus avoiding excessive searching. As a result, as shown in Figure 13f, compared to Figure 13d, the distance traveled by the robot is greatly reduced.

A vertex just extracted from the v-graph G cannot be used directly as a navigation point; instead, a transform turns the vertex into a way-point. As Figure 14 shows, for the obstacle on which the point lies, the polygon vertices connected to the point are extracted to calculate the direction vector of the point (in Algorithm 6 these are $\overrightarrow{dir_{front}}$ and $\overrightarrow{dir_{back}}$, respectively, and the direction vector of the point is $\overrightarrow{surfdir}$).

A detailed description is given in Algorithm 6. The parameter *searchdist* constrains the search area, *neardist* is the step length by which the way-point is extended along the $\overrightarrow{surfdir}$ direction, and *RW* and *RL* are the width and length of the robot, respectively. When the way-point is extended, *maxextend* constrains the length of the extension. The *NearOBS*(·) function obtains a range of *obscloud* from L*global*, with *Pway*−*point* as the center and *searchdist* as the radius. *Check*\_*Collision* (Algorithm 7) is used to check whether the extended way-point collides with *obscloud*.

**Figure 13.** The blue area of the map represents the part that has been explored by LiDAR and is considered known. (**a**) shows the connection between the endpoint and the existing node, represented by a solid yellow line. (**b**,**c**) show the path planning using one-way BFS. (**d**) is the result of one-way BFS. The red dot is the starting point, and the green dot is the endpoint. (**e**,**f**), respectively, show the path planning and the final result using bidirectional BFS. The globally planned path appears on the map as a thick blue line.

**Algorithm 4** Bidirectional BFS.

**Input:** *Pstart*, *Pgoal*, *Visibility Graph*: G
**Output:** *path*: $\{P_{path}\}$
1: *QF*, *QB* ← *make min-priority queues for the nodes, initially containing only Pstart and Pgoal*
2: *expandedF*, *expandedB* ← *make forward and backward lookup tables*
3: *searchF* ← (*QF*, *expandedF*, *parentF*)
4: *searchB* ← (*QB*, *expandedB*, *parentB*)
5: *μ* ← ∞
6: *initialize* G, *associate Pgoal in* G
7: *path* ← *none*
8: **while** *topF* + *topB* < *μ* **do**
9:   **if** *at least one queue is non-empty* **then**
10:    *choose the search to advance*
11:  **else**
12:    **return** *path*
13:  **end if**
14:  **if** *forward search was chosen* **then**
15:    (*path*, *μ*) ← **STEP**(*searchF*, *searchB*, *path*, *μ*)
16:  **else**
17:    (*path*, *μ*) ← **STEP**(*searchB*, *searchF*, *path*, *μ*)
18:  **end if**
19: **end while**
20: **return** *path*

**Algorithm 5** STEP (*search*1,*search*2,*solution*, *μ*).

1: // 1 *denotes the chosen direction and* 2 *the other direction*
2: // *c*(·) *is the cost function; in this paper, the Manhattan distance is used*
3: // *c*(*u*, *v*) = |*ux* − *vx*| + |*uy* − *vy*| + |*uz* − *vz*|
4: *u* ← *pop the min g*1 *node from Q*1
5: **for** *v* ∈ *parent*1(*u*) **do**
6:   **if** *v* ∉ *expanded*1 ∪ *Q*1 *or g*1(*u*) + *c*(*u*, *v*) < *g*1(*v*) **then**
7:     *g*1(*v*) ← *g*1(*u*) + *c*(*u*, *v*)
8:     *Add v to Q*1
9:     **if** *v* ∈ *expanded*2 *and g*1(*v*) + *g*2(*v*) < *μ* **then**
10:      *path* ← *reconstruct the path through u and v*
11:      *μ* ← *g*1(*v*) + *g*2(*v*)
12:    **end if**
13:  **end if**
14: **end for**
15: **return** (*path*, *μ*)
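The following compact C++ sketch shows one possible reading of Algorithms 4 and 5 as a bidirectional Dijkstra-style search over an adjacency-list v-graph; it is our own illustration, not the authors' implementation. It expands the graph neighbours of the popped node, uses the Manhattan cost from Algorithm 5, and stops once *topF* + *topB* ≥ *μ*. The path itself can be reconstructed by walking the forward parents from the meeting node back to the start and the backward parents forward to the goal.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

using NodeId = std::size_t;
using Point = std::array<double, 3>;

// Manhattan cost c(u, v) = |ux - vx| + |uy - vy| + |uz - vz| (comment in Algorithm 5).
static double cost(const Point& u, const Point& v) {
  return std::fabs(u[0] - v[0]) + std::fabs(u[1] - v[1]) + std::fabs(u[2] - v[2]);
}

struct Graph {
  std::vector<Point> pos;                // vertex positions
  std::vector<std::vector<NodeId>> adj;  // visibility edges
};

// One search direction: min-priority queue ordered by g, tentative costs, parents.
struct Search {
  using Item = std::pair<double, NodeId>;  // (g, node)
  std::priority_queue<Item, std::vector<Item>, std::greater<Item>> queue;
  std::vector<double> g;
  std::vector<NodeId> parent;
  std::vector<bool> expanded;
};

// Returns the cost of the best start-goal path found (mu), or infinity if none.
double bidirectionalSearch(const Graph& graph, NodeId start, NodeId goal) {
  const double inf = std::numeric_limits<double>::infinity();
  const std::size_t n = graph.pos.size();
  Search fwd, bwd;
  fwd.g.assign(n, inf);          bwd.g.assign(n, inf);
  fwd.parent.assign(n, start);   bwd.parent.assign(n, goal);
  fwd.expanded.assign(n, false); bwd.expanded.assign(n, false);
  fwd.g[start] = 0.0; fwd.queue.push({0.0, start});
  bwd.g[goal] = 0.0;  bwd.queue.push({0.0, goal});

  double mu = inf;  // cost of the best path found so far
  while (!fwd.queue.empty() && !bwd.queue.empty() &&
         fwd.queue.top().first + bwd.queue.top().first < mu) {  // topF + topB < mu
    // Choose the search to advance: here, the one with the smaller top key.
    Search& s1 = (fwd.queue.top().first <= bwd.queue.top().first) ? fwd : bwd;
    Search& s2 = (&s1 == &fwd) ? bwd : fwd;

    // STEP: pop the node with minimal g in the chosen direction and relax its edges.
    const NodeId u = s1.queue.top().second;
    s1.queue.pop();
    if (s1.expanded[u]) continue;  // stale queue entry
    s1.expanded[u] = true;

    for (NodeId v : graph.adj[u]) {
      const double g_new = s1.g[u] + cost(graph.pos[u], graph.pos[v]);
      if (g_new < s1.g[v]) {
        s1.g[v] = g_new;
        s1.parent[v] = u;
        s1.queue.push({g_new, v});
      }
      // If v has also been reached from the other side, try to improve mu.
      if (s2.g[v] < inf && s1.g[v] + s2.g[v] < mu) mu = s1.g[v] + s2.g[v];
    }
  }
  return mu;  // the path is rebuilt from fwd.parent and bwd.parent at the meeting node
}
```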

In Algorithm 6, *normalize*(*Pa*, *Pb*) and *normalize*(·) are defined as follows:

$$normalize(P_a, P_b) = \frac{P_b - P_a}{||P_b - P_a||} \tag{3}$$

$$normalize(\overrightarrow{V}) = \frac{\overrightarrow{V}}{||\overrightarrow{V}||} \tag{4}$$

**Figure 14.** The schematic diagram of generating a way-point.

#### **Algorithm 6** Way-point generation.

**Input:** *Pstart*, *Path*, L*global*
**Output:** *Pway*−*point*
1: *vertex* = *Path*[0]
2: *Path* = *Path*[1 :]
3: *searchdist* = *max*(2.5 × *max*(*RL*, *RW*), 5)
4: *neardist* = *min*(*min*(*RL*, *RW*), 0.5)
5: *Pway*−*point* = *vertex*
6: *maxextend* = *min*(*searchdist*, *distance*(*Pstart*, *Pway*−*point*))
7: $\overrightarrow{dir_{front}}$ = *normalize*(*polygonk*(*i* − 1), *polygonk*(*i*))
8: $\overrightarrow{dir_{back}}$ = *normalize*(*polygonk*(*i* + 1), *polygonk*(*i*))
9: **if** *Pway*−*point is a convex point* **then**
10:   $\overrightarrow{surfdir}$ = −*normalize*($\overrightarrow{dir_{front}}$ + $\overrightarrow{dir_{back}}$)
11: **else**
12:   $\overrightarrow{surfdir}$ = *normalize*($\overrightarrow{dir_{front}}$ + $\overrightarrow{dir_{back}}$)
13: **end if**
14: *obscloud* = *NearOBS*(L*global*, *searchdist*, *Pway*−*point*)
15: *obscloud* = *setInputCloud*(*obscloud*) // setInputCloud is a PCL library function that builds the KdTree of a pointcloud
16: *temp* = *Pway*−*point* + $\overrightarrow{surfdir}$ × *neardist*
17: *is*\_*collide* = *Check*\_*Collision*(*temp*, *obscloud*) // from Algorithm 7
18: *extenddist* = *neardist*
19: **while** *is*\_*collide is false and extenddist* < *searchdist* **do**
20:   *temp* += $\overrightarrow{surfdir}$ × *neardist*
21:   *extenddist* += *neardist*
22:   *is*\_*collide* = *Check*\_*Collision*(*temp*, *obscloud*)
23:   **if** *extenddist* < *maxextend* **then**
24:     *Pway*−*point* = *temp*
25:   **end if**
26: **end while**
27: **if** *is*\_*collide is true and extenddist* > *max*(*RW*, *RL*) **then**
28:   *Pway*−*point* = (*Pway*−*point* + *vertex* − $\overrightarrow{surfdir}$ × *neardist*)/2
29:   **return** *Pway*−*point*
30: **else**
31:   **return** *drop this vertex and re-search path*
32: **end if**
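The core of Algorithm 6, computing $\overrightarrow{surfdir}$ and extending the way-point step by step until a collision or the search limit is reached, can be sketched as follows (our own simplified illustration; the collision check is passed in as a callable standing in for Algorithm 7, and the loop ordering is slightly simplified with respect to the pseudocode).

```cpp
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(const Vec3& a, double s) { return {a.x * s, a.y * s, a.z * s}; }

// normalize(Pa, Pb) = (Pb - Pa) / ||Pb - Pa||, as in Equation (3).
static Vec3 normalize(const Vec3& from, const Vec3& to) {
  const Vec3 d = to - from;
  const double n = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
  return {d.x / n, d.y / n, d.z / n};
}

// Push a polygon vertex away from the obstacle surface to obtain a reachable
// way-point. check_collision stands in for Check_Collision (Algorithm 7).
Vec3 generateWaypoint(const Vec3& vertex, const Vec3& prev_vertex, const Vec3& next_vertex,
                      bool is_convex, double near_dist, double search_dist, double max_extend,
                      const std::function<bool(const Vec3&)>& check_collision) {
  const Vec3 dir_front = normalize(prev_vertex, vertex);
  const Vec3 dir_back = normalize(next_vertex, vertex);
  Vec3 surf_dir = normalize({0.0, 0.0, 0.0}, dir_front + dir_back);  // Equation (4)
  if (is_convex) surf_dir = surf_dir * -1.0;  // sign flips at convex points (Algorithm 6)

  Vec3 waypoint = vertex;
  Vec3 probe = vertex + surf_dir * near_dist;
  double extended = near_dist;
  while (!check_collision(probe) && extended < search_dist) {
    if (extended < max_extend) waypoint = probe;  // keep the last collision-free probe
    probe = probe + surf_dir * near_dist;
    extended += near_dist;
  }
  return waypoint;
}
```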

#### **Algorithm 7** Check\_Collision.

**Input:** *point P*, *obscloud*

**Output:** *false or true*
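The body of Algorithm 7 is not reproduced in this excerpt. A minimal sketch of one plausible implementation is given below (our assumption, not the paper's code): the KdTree built from *obscloud* at line 15 of Algorithm 6 is queried with a radius search, and the point is considered in collision if any obstacle point lies within the collision radius.

```cpp
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <vector>

// Returns true if any obstacle point lies within collision_radius of point p.
// collision_radius is illustrative, e.g. on the order of max(RW, RL) / 2.
bool checkCollision(const pcl::PointXYZ& p,
                    const pcl::KdTreeFLANN<pcl::PointXYZ>& obstacle_kdtree,
                    float collision_radius) {
  std::vector<int> indices;
  std::vector<float> sq_distances;
  // radiusSearch returns the number of neighbours found within the radius.
  return obstacle_kdtree.radiusSearch(p, collision_radius, indices, sq_distances) > 0;
}
```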


#### **4. Experiments and Results**

This paper uses the same experimental parameters as FAR Planner [19] (uniform sensor parameters; the robot speed is set to 2 m/s). The simulated experimental environments include a highly complex tunnel with a network of channels, a parking garage with multiple floors, and a forest with many irregularly shaped trees. The indoor environment is moderately complex but easy to detour through. Additionally, Matterport3D [48] provides a simple environment 17DRP5sb8fy (denoted 17DR), a slightly more complex environment PX4nDJXEHrG (denoted PX4n), and a large complex environment 2azQ1b91cZZ (denoted 2azQ).

In the simulation environment, all methods run on a 2.6 GHz i7 laptop, and the v-graph-based methods use images at 0.2 m/pixel resolution to extract points and form polygons. The local layer of the v-graph covers a 40 m × 40 m area with the robot at its center. The threshold on the length of each visibility edge is set to 5 m. Finally, the simulated mobile robot is 0.6 m long, 0.5 m wide, and 0.6 m high.

In the physical environment, our method runs on an embedded device, and the robot speed is set to 0.7 m/s. To adapt to the real environment, the local layer of the v-graph is set to 20 m × 20 m. The length, width, and height of the mobile robot are 0.32 m, 0.25 m, and 0.31 m, respectively.

#### *4.1. Simulation Experiment*

#### 4.1.1. Laser Process Simulation Experiment

In the laser process simulation experiment, seven different types of environments are used to compare the holed structure with the original one; the experimental results are shown in Figure 15. The robot moves along a fixed route, and the experiment analyzes the time of the laser process, i.e., the whole procedure of extracting the pointcloud information of the local layer, storing it in the grid, and classifying it (obstacle pointcloud or free pointcloud).

As shown in Figure 15, for example, in the indoor environment, the robot will start from 0 (start) and pass through the target points 1, 2, 3, 4, 5, and 6 in sequence. The initial state of the robot is set to be in an unknown environment, and the known information in the environment is continuously accumulated through exploration. The program records the time of receiving and processing the pointcloud data from LiDAR during the movement of the robot.

Figure 16a summarizes the average laser-processing time in the different environments. It can be clearly seen from Figure 16a that our method uses less time to process the pointcloud information of each data frame. Figure 16b–h show that using the grid with the holed structure leads to smoother processing of the laser pointcloud. Compared with the original grid, the holed structure improves the processing speed of the pointcloud by 30.5∼44.5%.

**Figure 15.** The overall view of seven different environments.

**Figure 16.** (**a**) is the summary of the average time of laser processing in different environments. (**b**–**h**) show the process time of pointcloud in each data frame.
