#### *6.1. Experiments with Heterogeneous Maps*

In this section, we describe the results of the proposed method with heterogeneous maps. The experimental environment is shown in Figure 9b,c, and its dimensions in Figure 9d. The environment contained two static obstacles, 'Obstacle1' and 'Obstacle2', and the start and goal positions are marked 'S' and 'G', respectively, in Figure 9d. The T-node map is shown in Figure 9e.

The two robots used in the experiment were both equipped with 2D Lidar and RGBD sensors. As shown in Figure 9a, the Pioneer P3DX robot was equipped with a Sick Lidar with a 10 m range and an ASUS Xtion-Pro RGBD camera. The Turtlebot was equipped with a Hokuyo Lidar with a 20 m range and a Kinect RGBD camera.

To test the proposed method with heterogeneous maps, the Pioneer P3DX robot was programmed to use only the RGBD sensor to build a 3D map and navigate the environment. P3DX first navigated from the start location 'S' to the goal location 'G', as shown in Figure 10a. The A\* algorithm [47] and the SHP algorithm [48,49] were used for path planning and path smoothing, respectively. As soon as the P3DX robot started moving, a long new obstacle was placed in the environment, well outside the range of its sensors, as shown in Figure 10b. In addition, a person purposefully moved in front of the robot and blocked its way, as shown in Figure 10c; dynamic obstacles are discussed in detail in Section 6.2. P3DX perceived the new obstacle and altered its path towards the goal, as shown in Figure 10d–f, while also updating its map with the newly added obstacle. P3DX was then programmed to return to its initial position 'S', and this navigation is shown in Figure 10g,h.
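The A\* planner itself is standard; as a minimal illustration (not the authors' implementation, and with a hypothetical grid and unit step costs), a grid-based A\* search could be sketched as follows:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# A newly placed obstacle (row of 1s) forces the path around its free end,
# mirroring how P3DX re-routes around the long new obstacle.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Once the new obstacle is written into the map, the same search automatically produces the detour, which is the behaviour observed in Figure 10d–f.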

**Figure 10.** Timely snapshots of the experiment. (**a**) P3DX starts moving with the old map. (**b**) A person adds a new obstacle. (**c**) The person moves in front of P3DX. (**d**,**e**) P3DX observes the new obstacle, changes its trajectory and updates the map. (**f**–**h**) P3DX returns to the starting position. (*Supplementary Materials*)

The updated 3D map built by the P3DX robot is shown in Figure 11a. In this experiment, the Turtlebot used only the 2D Lidar sensor with a 2D grid map. Hence, P3DX could not directly share the 3D point-cloud information, owing to the heterogeneous maps used by the two robots. Using the T-node map, P3DX blocked the path between nodes n6 and n7 and shared this information with the Turtlebot so it could plan an appropriate path. Additional information regarding the dimensions of the new obstacle could also be shared for better path planning. To this end, the 3D information was converted into 2D information before being shared with the Turtlebot. Grid maps are the most commonly used 2D maps, in which each cell value represents whether the cell is occupied, free or unknown. The 3D point cloud was projected onto the ground plane, which was detected using RANSAC-based plane detection [50]. This 2D information was shared by the P3DX robot with the Turtlebot, and the updated T-node map is shown in Figure 11b, in which the obstacle blocks the edge between nodes n6 and n7.
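As a rough sketch of this projection step, assuming the ground plane has already been estimated (e.g. via RANSAC, as in the paper) and expressed as a ground height, obstacle points can simply be dropped onto the x–y plane and binned into grid cells. The function name, resolution and height band below are illustrative assumptions, not the authors' code:

```python
import math

def project_to_grid(points, resolution=0.05, ground_z=0.0, z_min=0.05, z_max=2.0):
    """Project 3D obstacle points onto a 2D occupancy grid.

    points: iterable of (x, y, z) in metres. Points within [z_min, z_max]
    above the detected ground plane are treated as obstacles; points on the
    ground or above the robot's height are ignored. Returns the set of
    occupied (row, col) cells at the given resolution.
    """
    occupied = set()
    for x, y, z in points:
        height = z - ground_z
        if z_min <= height <= z_max:  # keep only obstacle-height points
            cell = (int(math.floor(y / resolution)), int(math.floor(x / resolution)))
            occupied.add(cell)
    return occupied
```

The resulting occupied cells are exactly the kind of 2D information a grid-map-based robot such as the Turtlebot can consume directly.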

The Turtlebot was programmed to navigate from the same start location 'S' to the goal location 'G'. Without the proposed information sharing mechanism, the path planned by the Turtlebot would have been (Figure 11b):

$$\mathrm{S} \to \mathrm{n}_2 \to \mathrm{n}_5 \to \mathrm{n}_6 \to \mathrm{n}_7$$

The Turtlebot would then have encountered the new obstacle between nodes n6 and n7 and would have had to re-plan a new path towards the goal. With the proposed information sharing mechanism, however, the Turtlebot could directly plan a path that accounted for the newly added obstacle (Figure 11b):

$$\mathrm{S} \to \mathrm{n}_2 \to \mathrm{n}_5 \to \mathrm{n}_6 \to \mathrm{n}_9 \to \mathrm{n}_8 \to \mathrm{n}_7.$$
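The effect of sharing the blocked edge can be illustrated with a shortest-path search over the node graph. The adjacency and unit edge weights below are hypothetical, chosen only to be consistent with the two paths above (the actual topology and costs are those of the T-node map in Figure 11b):

```python
import heapq

def shortest_path(edges, start, goal):
    """Dijkstra's algorithm over an undirected weighted graph {(u, v): w}."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    frontier, visited = [(0, start, [start])], set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (dist + w, nxt, path + [nxt]))
    return None

# Hypothetical node graph with unit edge weights.
edges = {("S", "n2"): 1, ("n2", "n5"): 1, ("n5", "n6"): 1,
         ("n6", "n7"): 1, ("n6", "n9"): 1, ("n9", "n8"): 1, ("n8", "n7"): 1}

# Once P3DX marks the edge n6-n7 as blocked, the planner simply removes it
# and routes around via n9 and n8 before the obstacle is ever encountered.
blocked = {e: w for e, w in edges.items() if e != ("n6", "n7")}
```

Planning over the handful of T-node edges rather than the full metric map is what makes this re-planning cheap enough to run before the robot moves.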

**Figure 11.** (**a**) RGBD map updated by P3DX. (**b**) Node-map. S and G are the start and goal points. The new obstacle is shown blocking the edge between nodes n6 and n7. The ellipse represents positional uncertainty.

Notice that this appropriate path was generated by the Turtlebot 'remotely', before actually encountering the new obstacle. Figure 12 shows the navigation of the Turtlebot after considering the new obstacle; the entire navigation is illustrated in Figure 12a–h. In particular, it can be seen from Figure 12e–g that the Turtlebot maintained a safe clearance from the obstacle from the start. The Turtlebot also updated its own map using the attached Lidar; the updated grid map is shown in Figure 13.

**Figure 12.** Timely snapshots of the experiment. (**a**–**h**) Turtlebot starts navigation with the updated information and plans a trajectory considering the new obstacle. (*Supplementary Materials*)

The dimensions of the obstacles in the experiment are given in Table 3.

The decay curve is shown in Figure 14. In the experiment, *Cth* was set to 0.45 and *tz* to 20 min (1200 s). Based on the uncertainty of the obstacle, the Ufactor was calculated as approximately 15% of *tz*. Figure 14 shows the decay of confidence considering the uncertainty of the obstacles.

Figure 15 shows the decay curves for different amounts of estimated positional uncertainty of the new obstacle. Whereas Figure 14 shows the actual decay curve of the experiment, Figure 15 shows theoretical curves for different uncertainty values. Figure 15a–d shows the confidence decay for increasing uncertainties of 35%, 45%, 55% and 65%, respectively. It can be seen that, for increasing uncertainty, the curve decays much faster, as desired.
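Equations (6), (7) and (9), which govern the decay, are not reproduced in this section. Purely as an illustration of the qualitative behaviour in Figures 14 and 15, one could model the confidence as an exponential whose effective decay time shrinks with the uncertainty factor; the functional form below is an assumption, not the paper's equations:

```python
import math

def confidence(t, c_th=0.45, t_z=1200.0, u_factor=0.15):
    """Illustrative confidence decay (assumed form, not Equations (6)-(9)).

    Confidence starts at 1.0 and crosses the threshold c_th at an effective
    time t_z * (1 - u_factor), so a larger positional uncertainty factor
    u_factor yields a faster decay, as in Figure 15.
    """
    t_eff = t_z * (1.0 - u_factor)   # higher uncertainty -> shorter decay time
    rate = -math.log(c_th) / t_eff   # calibrated so confidence(t_eff) == c_th
    return math.exp(-rate * t)
```

With the experimental settings (*Cth* = 0.45, *tz* = 1200 s, Ufactor = 15%), this sketch reproduces the qualitative trend: for a fixed elapsed time, curves with larger Ufactor sit strictly below those with smaller Ufactor.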

Thus, the T-node map enables robots to share information across heterogeneous maps. The newly added obstacle's information does need to be transformed into spatial coordinates, but this can easily be achieved in real-time. Moreover, to avoid this computational overhead in time-critical applications, only the blocked/un-blocked status of an edge could be shared. Using the same approach, information could be shared effectively among other types of maps.

**Figure 13.** 2D gridmap updated by Turtlebot during navigation.

**Table 3.** Obstacle Dimensions in the Experiment.



**Figure 14.** Confidence decay curve in the new experiment. *Cth* = 0.45, *tz* = 1200 s, Ufactor = 15.

**Figure 15.** Estimation of confidence decay curve for different values of positional uncertainty with *Cth* = 0.45 and *tz* = 1200 s. (**a**) Ufactor = 35%. (**b**) Ufactor = 45%. (**c**) Ufactor = 55%. (**d**) Ufactor = 65%. It can be seen that for higher uncertainties, the curve decays faster as desired.

#### *6.2. Results with Dynamic Entities (Moving Obstacle)*

The proposed method was tested under complex scenarios by having a person purposefully move in front of the robot and block its way, as shown in Figure 16. As the P3DX robot navigated from the start location 'S' to the goal location 'G', the person blocked its way by moving randomly in front of it, as shown in Figure 16a–j.

Similarly, the path of the P3DX robot was blocked again while it was navigating back from the goal location 'G' to its start location 'S', as shown in Figure 17. The person randomly moved in front of the robot, blocking its path, as shown in Figure 17a–j.

In both cases (Figures 16 and 17), the robot attempted to avoid collision and planned alternate trajectories, or stopped if the person stood dangerously close. Moreover, in both cases, the robot did not update the map with the person as a new obstacle, because the positional uncertainty of the moving obstacle, as calculated by Algorithm 2, was large. Even if the person were falsely identified as a new obstacle and the map updated, this would have no adverse effect in the proposed method, as uncertainty is integrated into the confidence decay mechanism. Any wrong map update corresponding to a dynamic obstacle is very likely to carry a large positional uncertainty and therefore decays quickly, as given by Equations (6), (7) and (9). For new static obstacles, on the other hand, the underlying SLAM algorithm (Algorithm 2) estimates a smaller positional uncertainty and therefore a longer decay time, ensuring their permanence in the map.
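The gating behaviour described above can be summarised in a small sketch; the rejection threshold and the linear mapping from uncertainty to decay time are illustrative assumptions, not the paper's Equations (6)–(9):

```python
def process_detection(uncertainty, u_max=0.8, t_z=1200.0):
    """Gate a new-obstacle detection on its positional uncertainty.

    Detections with very large uncertainty (typically moving people) are
    rejected and never entered into the map; accepted obstacles receive a
    decay time that shrinks with uncertainty, so even a mistaken update
    fades quickly while confident static obstacles persist.
    """
    u = min(max(uncertainty, 0.0), 1.0)  # clamp to [0, 1]
    if u > u_max:
        return None                      # likely a dynamic entity: do not map
    return t_z * (1.0 - u)               # shorter persistence when uncertain
```

Either branch is safe: a rejected person never enters the map, and a person wrongly accepted is assigned such a short persistence that the map self-corrects.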

Thus, uncertainty integration has two merits in the information sharing scheme. First, it acts as a filter against wrong map updates caused by dynamic obstacles in the environment, through a quick confidence decay. Second, it ensures that only correct information about new static obstacles is shared with other robots. It should be noted that moving people can be detected using image processing for camera-based sensors [51], RGBD sensors [52], or a leg detector for Lidar-based sensors [53]; integrating such approaches [54] would further increase the robustness of the system.

**Figure 16.** Dynamic obstacle experiment with P3DX navigation from position S to G in Figure 9. (**a**–**j**) Person moved randomly in front of P3DX for a long time moving in and out of the range of sensors. P3DX changed trajectories or stopped if the obstacle was dangerously close. (*Supplementary Materials*)

**Figure 17.** Dynamic obstacle experiment with P3DX navigation from position G to S in Figure 9. (**a**–**j**) To further test the method under pressure, a person moved randomly near P3DX. (*Supplementary Materials*)

### **7. Conclusions**

Information sharing is a powerful technique with many potential benefits for path planning in multi-robot systems. A node-map was proposed in our previous work to solve the problems of information sharing between different robots. We extended our previous work by integrating the positional uncertainty of new obstacles into the confidence decay mechanism, which models the transience of obstacles. This minimizes false map updates and notifications in the system. New experiments were performed to share information about new obstacles across heterogeneous maps. The results show that the node-map allows robots to share information smoothly. Moreover, since path planning is also done using the node-map, efficient trajectories considering the positions of new obstacles can be planned in real-time. The information sharing mechanism allows robots to obtain timely information about remote obstacles in the map without having to explicitly visit those areas. In addition, new experiments were performed to test the proposed mechanism in complex environments with people moving in the vicinity of the robots. Owing to the tight coupling of uncertainty and the decay mechanism, dynamic obstacles could be filtered out, avoiding false updates of the map. Even if a false update does occur, the corresponding confidence decays quickly due to the larger uncertainty. The experimental results confirm that, in the long run, in large environments employing multiple robots, the proposed method can improve the efficiency of the system in terms of shorter distances traveled by the robots and shorter planning times by eliminating path re-planning. In the future, we will continue to test the robustness of the proposed method in more complex and realistic environments such as cafeterias and offices.

**Supplementary Materials:** The supplementary materials are available online at http://www.mdpi.com/2076-3417/9/13/2753/s1.

**Author Contributions:** A.R. and A.A.R. conceived the idea, designed, performed experiments, and summarized the research; Y.K. made valuable suggestions to analyze the data and improve the manuscript. Y.H. provided important feedback to improve the manuscript. The manuscript was written by A.R.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.
