5.4.4. Initial Conditions

As mentioned above, the initial pose is assumed to be (0, 0, 0), and the local goal **g** is given in the robot frame {*B*} together with the obstacles **o**1...*<sup>O</sup>* extracted from LiDAR data. What has not yet been addressed is how the remaining state variables, which from a control perspective are the ones actually used to command the platform, are determined. Looking at the state Equation (14), we also have the linear velocity *v*, the linear velocity direction *ϕ*, and the angular velocity *ω*. One key question is how to obtain and provide these values.

The platform is non-holonomic by nature, since it takes some time to turn the steering wheels; we therefore cannot directly set an arbitrary *ϕ* value, nor can we set an arbitrary rotational velocity *ω* and expect an instantaneous response. One approach could be to define that the robot always starts moving in one fixed direction and use this as an initial condition, but this would sacrifice a key advantage of the platform: the ability to start moving in an arbitrary direction or to start with a pure rotation. Instead of modeling the time it takes for the robot to rotate its wheels to match the commanded linear and angular velocities, at startup we read these velocities directly from the feedback provided by the system, see Figure 7. Without doing so, our internal velocity prediction step would assume velocities greater than the platform can deliver. Once the vehicle has started to drive, we can use the acceleration models within the optimization to predict the velocities.
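The initialization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and field names are assumptions, and the velocity-related states are taken from measured platform feedback rather than from the last commanded values.

```python
def initial_state_from_feedback(goal_b, v_fb, phi_fb, omega_fb):
    """Build the initial MPC state in the robot frame {B} (illustrative).

    The pose is fixed at (0, 0, 0) by definition of the robot frame;
    the velocity states (v, phi, omega) are read from the platform
    feedback, so the prediction does not assume velocities the
    platform cannot yet deliver at startup.
    """
    return {
        "pose": (0.0, 0.0, 0.0),   # pose in {B} by definition
        "v": v_fb,                 # measured linear speed
        "phi": phi_fb,             # measured drive direction
        "omega": omega_fb,         # measured angular velocity
        "goal": goal_b,            # local goal, given in {B}
    }
```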

As the MPC scheme to a great extent solves very similar problems from one iteration to the next, we also utilize a "warm start", i.e., we provide the previous solution to the optimization problem as the initial configuration. Performing the navigation task through the doorway depicted in Figure 16 with warm start gave an average optimization time of 29 ± 3 ms, compared to 34 ± 5 ms without warm start, a reduction of approximately 15%.
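A common way to implement such a warm start is to shift the previous input sequence by one step, since the first input has already been applied. The sketch below illustrates this; it is a generic pattern under assumed data shapes, not the paper's solver interface.

```python
def warm_start(prev_solution):
    """Shift the previous MPC input sequence one step to seed the next solve.

    prev_solution is the input sequence [u_0, u_1, ..., u_{N-1}] returned
    by the previous iteration. The already-applied first input is dropped
    and the final input is duplicated, giving a good initial guess because
    consecutive MPC problems are very similar.
    """
    shifted = list(prev_solution[1:])   # drop the applied input u_0
    shifted.append(prev_solution[-1])   # repeat the final input u_{N-1}
    return shifted
```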

**Figure 7.** An example output when driving from a start pose to a goal pose, showing the heading and velocity of all four steer-and-drive units. In the beginning, the steering wheels all have an angular heading of zero. To allow the vehicle to start driving in an arbitrary direction, we utilize the feedback given by the platform to ensure that all wheels are aligned before the velocity is applied. When the vehicle has reached its goal, the heading angles are set back to zero by the internal controller.

#### *5.5. Perception*

As mentioned in previous sections, the number of constraints per added obstacle can be rather high and depends on the footprint representation and the horizon length *N*. To keep the number of obstacle points **o** low, we have to perform some "clever" filtering of the range data. An example of filtered data is depicted in Figure 15. At a higher level, we have a perception system based on a previously built occupancy map that is overlaid with LiDAR data. This occupancy map is used to generate the global path, and since we use the global path to extract the local path, some obstacle avoidance is already in place. The global path planner has two main parameters of importance. First, to simplify the global motion planning, the platform's footprint is approximated with a circle so that the robot can traverse narrow areas. The circle does not enclose the full actual footprint, but it ensures that the vehicle can pass given a suitable orientation. The second parameter is the distance from any object or occupied cell at which the planned motion incurs additional cost. Given an empty environment and a wall, this distance ensures that the generated plan always keeps this distance from the wall, unless the goal is given closer to it. Together, these two parameters ensure that the global plan finds traversable routes in constrained areas while keeping the robot at some distance from obstacles where space allows.
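The effect of the two global-planner parameters can be illustrated with a simple per-cell cost function. This is a sketch under assumed semantics (a linear penalty band is one common choice; the actual planner's cost profile is not specified in the text), and all names are illustrative.

```python
def cell_cost(dist_to_obstacle, robot_radius, cost_distance):
    """Traversal cost for an occupancy-grid cell (illustrative).

    Cells closer to an obstacle than the approximating circle's
    radius are blocked; cells within cost_distance incur a penalty
    that decays to zero, so the planner keeps clearance where space
    allows but can still pass through narrow gaps.
    """
    if dist_to_obstacle <= robot_radius:
        return float("inf")   # collision under the circle approximation
    if dist_to_obstacle >= cost_distance:
        return 0.0            # far enough away: no extra cost
    # linear penalty inside the cost-distance band
    return (cost_distance - dist_to_obstacle) / (cost_distance - robot_radius)
```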

From the LiDAR we get a set of range readings that can be transformed into corresponding 3D coordinates depending on the sensor placement and internal parameters. Around the yaw rotation in the robot frame, sectors are defined, and in each sector the closest reading that is considered an obstacle is stored (readings sampled from the floor, or from objects higher than the robot, are ignored). To simplify processing, the sectors are sorted by their corresponding yaw angle. From this set of sectors we would like to extract the *O* closest obstacles. We choose the closest reading among all sectors as the first obstacle. The next closest obstacle is likely to come from a nearby sector, so instead of simply taking the second closest reading, we require that the distance between selected readings is at least a threshold distance. By doing so we can drastically reduce the number of obstacles while still providing a set that reasonably well represents the spatial layout of the environment. These obstacle readings are continuously updated at the next iteration, so obstacles that were filtered away might then become visible. Another aspect is that the global plan will keep the path at a distance from obstacles. The obstacles here are used as constraints in the optimization framework, meaning that they only impact the control actions if they are within the specified collision bounds, which requires that an obstacle is sufficiently close to the robot.
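The greedy selection described above can be sketched as follows, assuming the per-sector closest readings are available as 2D points in the robot frame. Function and parameter names are illustrative, not from the paper's code.

```python
import math

def select_obstacles(readings, max_obstacles, min_separation):
    """Greedily pick up to max_obstacles well-separated obstacle points.

    readings: (x, y) obstacle points in the robot frame, e.g. the
    closest valid return per yaw sector. The overall closest point
    is always selected first; each further candidate (in order of
    increasing range) is accepted only if it lies at least
    min_separation away from every point already chosen.
    """
    ordered = sorted(readings, key=lambda p: math.hypot(p[0], p[1]))
    selected = []
    for p in ordered:
        if len(selected) >= max_obstacles:
            break
        if all(math.hypot(p[0] - q[0], p[1] - q[1]) >= min_separation
               for q in selected):
            selected.append(p)
    return selected
```

In this way nearby duplicate returns (e.g. many hits on the same wall segment) collapse into a single representative point, keeping the number of optimization constraints low.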

#### **6. Experiments and Analysis**

The MSDU Local Planner was evaluated in a number of real-world experiments on the robotic platform. The evaluation was performed in a research lab, and initially the robot was manually driven through the environment to build a global map using the LiDAR sensors (see Figure 8 for the created map). This map was used for planning and localization. The map was not updated during the experiments, which took place over several weeks, even though minor modifications occurred within the environment, with obstacles appearing and disappearing. Nonetheless, the system was able to successfully localize and plan.

**Figure 8.** The map generated by the robot and used for localization, navigation, and planning. The robot was manually driven around the environment to create the map before the experiments took place.

The experiments performed evaluated the efficiency and accuracy of the paths planned by the MSDU Local Planner, and compared its performance to the TEB Local Planner [9], a widely used local planner. Further experiments evaluated the performance of MSDU in specific scenarios relating to particularly challenging environment aspects, such as narrow gaps and doorways.

The first evaluation compared the paths generated by MSDU against TEB. The goal positions were randomly generated in the environment, and the robot drove between the goal positions using MSDU. The experiment was then repeated using TEB.

The termination criteria used are the distance between the current pose and the goal as well as the maximum allowed velocity; if both criteria are met, the robot is considered to have reached the goal. To give as fair a comparison as possible, these parameters were tuned so that, for both planners, all goals were reachable within a reasonable time and without excessive turning at the end. It should also be noted that the TEB Local Planner does not specifically target pseudo-omnidirectional platforms, and the parameters used were empirically found to produce outputs that worked well with the platform at hand. This comparison should therefore primarily be seen as a baseline of how an off-the-shelf planner can perform.
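The goal-reached check combining both criteria can be sketched as below. The tolerance values and names are assumptions for illustration; in the experiments they were tuned per planner as described above.

```python
import math

def goal_reached(pose_xy, goal_xy, speed, dist_tol, vel_tol):
    """Termination check for a navigation goal (illustrative).

    Both criteria from the text must hold simultaneously: the
    translational distance to the goal is below dist_tol, and the
    current speed is below vel_tol (i.e., the robot has come to
    rest near the goal rather than merely passing through it).
    """
    dist = math.hypot(goal_xy[0] - pose_xy[0], goal_xy[1] - pose_xy[1])
    return dist <= dist_tol and abs(speed) <= vel_tol
```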

The generated paths were evaluated according to the following criteria:


The robot's pose, as estimated by localization against the pre-generated map, was used for the evaluation.

The experiment was performed across both "short" and "long" paths. Short paths are particularly challenging for the local motion planners, as they have to take tighter, more confined routes to the goal position. Here, "short" paths had a direct distance between starting and ending positions of less than 1 m. A total of 69 short paths were tested, with a distance normally distributed around a mean of 0.55 m, with a maximum distance of 0.98 m and a minimum distance of 0.12 m. The longer, less challenging paths had a mean distance between starting and ending positions of 2.9 m, with a maximum distance of 3.8 m and a minimum distance of 2.3 m. The experiments are summarized in Table 1.

**Table 1.** Summary of accuracy and precision experiments.


Repeatability experiments were also performed, where the robot traveled repeatedly between two goals. The first repeatability experiment was over a long path between two goals 2.4 m apart, while the second repeatability experiment was over a short path between two goals 0.42 m apart. The robot traveled to each goal position 20 times. The repeatability experiments are summarized in Table 2.

**Table 2.** Summary of repeatability experiments.


#### **7. Results**

This section presents the experimental evaluation of the MSDU Local Planner. Section 7.1 presents the quantitative experiments evaluating the paths generated by MSDU and TEB. Sections 7.2 and 7.3 present qualitative descriptions of the performance of MSDU in narrow and confined spaces. Finally, Section 7.4 provides a brief summary of the main conclusions from the path evaluation experiments.

### *7.1. Accuracy and Efficiency of Planned Paths*

Figure 9 presents the error (both translational and angular) between the final robot pose and the desired goal pose for each goal, for both MSDU and TEB on the "long" (2.3–3.8 m) paths. Some key metrics are also summarized in Table 3. It can be seen (Figure 9a) that both planners achieved high accuracy in terms of linear position. However, MSDU typically has a smaller translational error than TEB: MSDU has a mean error of 0.0019 m from the goal position, while TEB has a mean error of 0.0031 m.

**Figure 9.** Distance between final robot pose and desired goal position for the MSDU Local Planner (black) and the TEB Local Planner (white) over long paths. (**a**) Translational error between the final robot pose and the desired goal position; (**b**) angular error between the final robot pose and the desired goal position.

Similar results can be seen when it comes to angular error (see Figure 9b and Table 3). MSDU has a mean angular error of 0.0007 rad while TEB has a mean angular error of 0.0025 rad.


**Table 3.** Summary of performance of MSDU and TEB for long paths.

Figure 10 and Table 4 present the distance traveled by the robot to the final goal position. It can be seen that TEB consistently takes longer paths to the goal than MSDU. This is particularly the case for angular distance, where in the worst case TEB travels 20 times the angular distance of MSDU (Goal 18).



The results for the short paths can be seen in Figure 11 and Table 5. The first thing to note is that the short paths are more challenging for both planners than the long paths: the mean (translational, angular) error increases from (0.0019 m, 0.0007 rad) on the long paths to (0.0028 m, 0.0010 rad) on the short paths for MSDU, and from (0.0031 m, 0.0025 rad) on the long paths to (0.0033 m, 0.0038 rad) on the short paths for TEB. However, MSDU continues to outperform TEB in terms of accuracy.

For the short paths, the distance traveled to each goal pose is displayed in Figure 12 and Table 6. The translational distance traveled is longer for TEB in 97% of the experiments (67 out of 69). In the cases where the translational distance traveled by TEB is shorter, the MSDU path is no more than 1.07 times as long as the TEB path (0.51 m vs. 0.48 m), while when the TEB path is longer, in the worst case it is 1.78 times as long as the MSDU path (0.98 m vs. 0.55 m).

As with the long paths, the angular distance traveled by TEB is consistently longer than that traveled by MSDU: in the worst case TEB takes an angular distance 51 times as long as that of MSDU (Goal 47).

These results show that the MSDU Local Planner can achieve final robot poses that are closer to the desired goal pose while traveling shorter distances, in terms of both translational and angular displacement.

**Figure 11.** Distance between final robot pose and desired goal position for the MSDU Local Planner (black) and the TEB Local Planner (white) over short (<1 m) paths. (**a**) Translational error between the final robot pose and the desired goal position; (**b**) angular error between the final robot pose and the desired goal position.


**Table 5.** Summary of performance of MSDU and TEB for short paths (<1 m).

To understand why the distances traveled differ so much between the planners, we display a number of the trajectories taken by the robot; the trajectories for the first four short goals are shown in Figure 13. In each plot, the goal is marked by a black circle, the trajectory planned by MSDU is displayed in black, and the trajectory planned by TEB is displayed in red. It is important to note that the trajectories, even when generally smooth, have a slightly "jagged" appearance. This does not represent the actual path of the robot but is due to the localization process, where the dead-reckoning estimate based on odometry is periodically updated by the AMCL localization process, creating small apparent "jumps" in the path.

**Figure 12.** Distance traveled between the starting and final robot pose for the MSDU Local Planner (black) and the TEB Local Planner (white) over short (<1 m) paths. (**a**) Translational distance traveled between the starting and final robot pose; (**b**) angular distance traveled between the starting and final robot pose.



However, it is still possible to see the deviation between the paths planned by MSDU and those planned by TEB. TEB often takes a wider, more sweeping path—as one might see from a more car-like vehicle—and sometimes (as in the case of Goal 2) must do some complicated corrections close to the goal. In comparison, MSDU takes a more direct path, and very little correction occurs close to the goal.

**Figure 13.** Trajectories taken by the robot for each goal. The goal is denoted by the black circle. The position of the robot using the MSDU Local Planner is displayed in black, and the robot position using the TEB Local Planner is displayed in red. Note that the discrepancies in the curves are due to the localization update.

The repeatability of the planners is displayed in Figure 14 below, for both the long trajectory (Figure 14a) and the short trajectory (Figure 14b). The trajectory of the robot using MSDU is displayed in black, and the trajectory of the robot using TEB is displayed in red. The qualitative difference between the two planners' behavior can be clearly seen. The quantitative comparison is presented in Tables 7 and 8. A key distinction between the two planners is that in each case the distance MSDU travels to Goal 1 is very similar to the distance it travels to Goal 2, while TEB takes quite different-length paths between the two goals for the short paths. This can also be observed in Figure 14b.

**Figure 14.** Trajectories taken by the robot during the repeatability experiment. The trajectory of the robot using MSDU is displayed in black, and the robot trajectory using TEB is displayed in red. (**a**) Long path (2.4 m between the two goals); (**b**) short path (0.42 m between the two goals).


**Table 7.** Repeatability of performance of MSDU and TEB on long paths.

**Table 8.** Repeatability of performance of MSDU and TEB on short paths.

