*4.4. Experimental Results in a Dynamic Environment*

Experiments in a dynamic environment were performed to compare the learning results of the proposed algorithm with those of the D\* algorithm. D\* [43] and its variants have been widely used for mobile robot navigation because of their adaptability to dynamic environments. During navigation, the robot repeatedly uses newly observed information to replan the shortest path from its current coordinates. The simulation was performed about 100 times in each of 5000 different dynamic environments, each consisting of the small-sized static environment used in Section 4.3 together with dynamic obstacles placed at random initial positions.
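The replanning behavior described above can be illustrated with a simplified sketch (not the full D\* algorithm): the robot follows its current plan and, whenever a newly sensed obstacle blocks the next cell, replans from its current coordinates. The grid layout, the obstacle schedule, and the use of plain A\* for each replan are illustrative assumptions, not the paper's implementation.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """4-connected A* with a Manhattan heuristic on a 0/1 occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                      # heap tie-breaker
    frontier = [(h(start), next(tie), 0, start, None)]
    parent, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue                             # already expanded
        parent[cur] = prev
        if cur == goal:                          # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

def navigate(grid, start, goal, obstacle_updates):
    """Follow the plan; replan from the current cell when the next cell becomes blocked.

    obstacle_updates maps a time step to cells that become occupied at that step,
    a stand-in for obstacles sensed during navigation."""
    pos, trace = start, [start]
    path = astar(grid, pos, goal)
    step = 0
    while path and pos != goal:
        for r, c in obstacle_updates.get(step, []):
            grid[r][c] = 1                       # newly sensed dynamic obstacle
        nxt = path[path.index(pos) + 1]
        if grid[nxt[0]][nxt[1]]:                 # plan invalidated -> replan
            path = astar(grid, pos, goal)
            if path is None:
                break
            nxt = path[1]
        pos = nxt
        trace.append(pos)
        step += 1
    return trace
```

For example, on an empty 5x5 grid with start (0, 0) and goal (0, 4), blocking cell (0, 2) at step 1 forces the robot off the straight row-0 route and onto a detour, yet it still reaches the goal.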

For the D\* algorithm, if an obstacle is located on the robot's path, the path is rechecked and replanned from the robot's current position. In the same situation, different paths may therefore result, depending on how the obstacle moves. In Figures 16 and 17, the yellow star denotes a dynamic obstacle: the robot may still reach the target along the shortest path, as shown in Figure 16, or along a longer detour, as shown in Figure 17. This shows that the behavior of the moving obstacle affects the robot's path.

**Figure 16.** The shortest path generated without collision with dynamic obstacles using D\*.

**Figure 17.** Detour path generated by D\* when a moving obstacle forces the robot off the shortest path.

With the proposed algorithm, moving obstacles located on the robot's traveling path do not block the robot in a dynamic environment, as shown in Figure 18. This keeps the traveling distance of each robot short.

**Figure 18.** Path generated by the proposed algorithm in a dynamic environment.

As shown in Figure 19, the robot moves downward to avoid a collision with an obstacle on its path. In this case, the paths created by both D\* and the proposed algorithm could not follow the original route because of the moving yellow star, and a search was performed to generate a bypass path.


**Figure 19.** Bypass path generated by the proposed algorithm in a dynamic environment.

Since the robot actively moves to its next position according to the results learned step by step, it can generate a path to the target point even in a dynamic environment.

Table 4 shows that, under the dynamic environment, the first bypass to the target point requires a search range of 84 for D\* and 65 for the proposed algorithm. For the second bypass, the search ranges are 126 and 83, respectively.


**Table 4.** Total search range under the dynamic environment for D\* and the proposed algorithm.

Table 5 reports the minimum, maximum, and average search ranges of the D\* algorithm and the proposed algorithm over the full set of simulations. All three indicators show that the search range of the proposed algorithm is no larger than that of D\*. Because the proposed method searches a smaller range when looking for a bypass path, it enables efficient path planning to the target point. The proposed algorithm was trained using the A\* algorithm as a comparison, even though A\* itself cannot be applied in a dynamic environment; unlike A\*, the proposed algorithm can still generate paths in dynamic environments.

**Table 5.** Minimum, maximum, and average search ranges of D\* and the proposed algorithm over the full simulation.

