*5.2. Application Tests on Real-Time Pathfinding*

This work is motivated by the need to perform real-time, collision-free pathfinding for agents with a given safety radius. Such agents often maneuver in dynamic, crowded environments and must decide their next move within a limited amount of time. To demonstrate the usefulness of our algorithm in such scenarios, we simulate two agents (denoted as agent *X* and agent *Y*) operating on a 200 × 200 grid map (as shown in Figure 12). In each round of testing, both agents are assigned the same start and goal cells, located at the top right and bottom left of the grid map, respectively. The resulting paths of agents *X* and *Y* are shown in Figure 12a,b, respectively, as light green trajectories; for comparison, the optimal path found by the classical path planning algorithm A\* is marked as a red trajectory. Agent *X* simply adopts LRTA\* to search and move directly in the underlying DM, whereas agent *Y* (shown in Figure 12b) first searches a high-level path from the DM-based subgoal graph and then uses LRTA\* to search and move between the segments of that path. To obtain average performance, we run each of the ten pairs of start and goal cells 100 times.

For agent *X*, applying LRTA\* to perform a real-time search helps the agent successfully avoid collisions with obstacles, but the underlying DM very easily produces local minima around typical features such as concave regions, narrow channels, long barriers, and so on. For cells within these regions, the local minima inflate the error between the default heuristic values and the actual cost-to-goal values. LRTA\* therefore has to revisit these cells many times to incrementally correct their heuristic values (as shown in line 90, Table 6). Only in this way can the heuristic depressions be gradually filled up, finally driving the agent out of the local minima. Unfortunately, such runtime correction incurs meaningless movements (the regions labeled A, B, and C in Figure 12a), which is unacceptable when reasonable behavior is required.
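To make the heuristic-correction behavior concrete, the following minimal sketch (our own illustration, not the paper's implementation; the toy grid, unit cost function, and the name `lrta_star_step` are assumptions) shows the standard LRTA\* update rule: the agent raises h of its current cell to the cheapest neighbor estimate, which is exactly the repeated revisiting that gradually fills a heuristic depression around an obstacle.

```python
# Minimal LRTA* sketch (illustrative; not the paper's code).
def lrta_star_step(pos, goal, h, h0, neighbors, cost):
    """One LRTA* move: raise h(pos) to the cheapest neighbor
    estimate (filling the heuristic depression), then move there."""
    hv = lambda p: h.get(p, h0(p))          # learned value, else default
    best = min(neighbors(pos), key=lambda n: cost(pos, n) + hv(n))
    h[pos] = max(hv(pos), cost(pos, best) + hv(best))
    return best

# Toy 5x5 grid whose wall forms a small concave trap.
blocked = {(2, 1), (2, 2), (2, 3)}
def neighbors(p):
    x, y = p
    cells = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cells
            if 0 <= c[0] < 5 and 0 <= c[1] < 5 and c not in blocked]
cost = lambda p, q: 1
goal = (4, 2)
h0 = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan default

pos, h, steps = (0, 2), {}, 0
while pos != goal and steps < 200:   # revisits keep correcting h until escape
    pos = lrta_star_step(pos, goal, h, h0, neighbors, cost)
    steps += 1
```

Running this, the agent oscillates in front of the wall while the learned h values of the trapped cells rise above their Manhattan defaults; only then does it commit to the detour, which mirrors the meaningless movements of agent *X* in regions A, B, and C.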

**Figure 12.** Performance comparison between agents *X* and *Y*. (**a**) The resulting path of agent *X*, who simply uses LRTA\* to perform a global search in a DM. As the light green trajectory shows, repeated heuristic corrections and meaningless movements caused by local minima (such as the regions denoted by identifiers A, B, and C) seriously reduce the rationality of the resulting trajectory when it is compared with the high-level path searched by A\* from the DM-based subgoal graph (denoted as the red trajectory); (**b**) The resulting path of agent *Y*, who adopts LRTA\* to refine the high-level path searched from the DM-based subgoal graph. Owing to the sparse, direct-h-reachable structure of the graph, agent *Y* only expands cells between consecutive subgoals and can thus efficiently avoid local minima. Therefore, the resulting path (shown as the light green trajectory in (**b**)) is, to a large extent, close to the optimal path denoted as the red trajectory.

As for agent *Y*, a high-level path (the red path shown in both Figure 12a,b) is first searched from the DM-based subgoal graph by executing the algorithms proposed in Table 5 (to keep the figures clear, we only retain the subgoals and do not draw the direct-h-reachable edges). The resulting path provides a list of sparse, direct-h-reachable waypoints that circumnavigate the collision regions. Guided by this path, agent *Y* uses LRTA\* at runtime to search and move along it; it can therefore efficiently avoid local minima and eliminate meaningless movements.
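The two-level scheme agent *Y* follows can be sketched as below (again our own illustration under stated assumptions: the hard-coded subgoal list stands in for the output of the Table 5 search, and the grid, names, and Manhattan heuristic are invented for the example). Each consecutive subgoal is treated as a local LRTA\* goal, so every search only spans the short, direct-h-reachable gap between two waypoints and never sinks into a deep depression:

```python
# Sketch of agent Y's refinement loop (illustrative, not the paper's code):
# LRTA* is run segment by segment between consecutive subgoals.
def lrta_star_step(pos, goal, h, h0, neighbors, cost):
    """One LRTA* move toward `goal` (same update rule as agent X)."""
    hv = lambda p: h.get(p, h0(p))
    best = min(neighbors(pos), key=lambda n: cost(pos, n) + hv(n))
    h[pos] = max(hv(pos), cost(pos, best) + hv(best))
    return best

def refine_path(start, subgoals, neighbors, cost, budget=100):
    """Treat each consecutive subgoal as a local LRTA* goal; the
    learned table is reset per segment because h depends on the goal."""
    pos, trajectory = start, [start]
    for sg in subgoals:
        h0 = lambda p, g=sg: abs(p[0] - g[0]) + abs(p[1] - g[1])
        h = {}                                # fresh table per segment
        for _ in range(budget):
            if pos == sg:
                break
            pos = lrta_star_step(pos, sg, h, h0, neighbors, cost)
            trajectory.append(pos)
    return trajectory

# Same toy grid as before; the subgoals route the agent around the wall.
blocked = {(2, 1), (2, 2), (2, 3)}
def neighbors(p):
    x, y = p
    cells = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cells
            if 0 <= c[0] < 5 and 0 <= c[1] < 5 and c not in blocked]
cost = lambda p, q: 1

traj = refine_path((0, 2), [(1, 0), (3, 0), (4, 2)], neighbors, cost)
```

Because the waypoints already circumnavigate the wall, each segment is solved almost greedily with no heuristic corrections, which is why agent *Y*'s trajectory stays close to the optimal red path.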
