#### *6.1. Experimental Platform*

To carry out the experimental part of this work, we implemented the solution described in Section 4 on the robotic research platform BLUE. This robot is derived from a conventional electric cart and was developed entirely by our research group, AUROVA. It is designed to ease experimental processes in mobile robotics research [42] and incorporates a powerful low-level module, also developed by our group, called control logic for easy Ackermann robotization (CLEAR). This module makes it possible to turn conventional Ackermann vehicles into research robots integrated into ROS [42] and provides, among other interesting features, a very accurate estimation of the steering angle and the platform speed even when using extremely low-resolution sensors [43]. The robot's main hardware components are the following: a Velodyne VLP-16 3D LiDAR sensor, a Hokuyo UBG-04LX-F01 2D LiDAR sensor, an Intel RealSense D435 RGBD camera, a real-time kinematic (RTK) capable Ublox M8P GNSS receiver, and a CHR-UM7 IMU sensor, among others. It also has an onboard computer in which sensors and actuators are integrated via ROS. This computer also runs the system implemented following the architecture shown in Figures 2 and 3.

#### *6.2. Settings*

In Sections 4 and 5, we described several configurable parameters, some of which depend on the robot used to implement the application, while others depend on the selected environment. Table 1 specifies the values used in the experimental sessions described in this section. GMapping has a large number of default parameters [44]; we chose to limit the experimental tuning process to just the three parameters shown in Table 1. However, our tuning process ended up replicating the default values, so our GMapping configuration matches the standard one. In the case of AMCL, we obtained the *α* parameters of the motion model as described in [40], leaving the rest of the default AMCL parameters [45] unchanged. The planning module has only one adjustable parameter, the distance between local goals (Section 4.2.2). For the control subsystem, we show the values of the control adjustment constants described in (1), which were obtained empirically, as well as the maximum and minimum speeds and the maximum steering angle, which were selected by design taking into account the physical limitations of the robot. For the Kalman-filter-based fusion system, we adjusted the model noise variances empirically and used the variances provided by the AMCL and GNSS modules as observation noises. In Section 5.5, we discussed the outlier rejection process, which depends on a Mahalanobis distance threshold, and the maximum AMCL variance threshold that activates the recovery process of the Monte Carlo estimator. These thresholds were adjusted experimentally, aiming for the best qualitative results.
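As an illustration, the outlier rejection mentioned above can be sketched as a Mahalanobis-distance gate on the Kalman filter innovation. This is a minimal sketch; the function name, example values, and threshold below are hypothetical and do not correspond to the actual values used in our experiments:

```python
import numpy as np

def mahalanobis_gate(innovation, S, threshold):
    """Accept an observation only if the Mahalanobis distance of its
    innovation, under the innovation covariance S, is within threshold.
    Illustrative sketch; names and values are hypothetical."""
    d2 = innovation.T @ np.linalg.inv(S) @ innovation
    return np.sqrt(d2) <= threshold

# Example: 2D position innovation gated against a diagonal covariance
# built from observation noise (e.g., AMCL-reported variances).
innovation = np.array([0.3, -0.2])
S = np.diag([0.25, 0.25])
print(mahalanobis_gate(innovation, S, threshold=3.0))  # → True (accepted)
```

Observations failing the gate would simply be skipped in the fusion update, which is what protects the filter from degraded GNSS or AMCL estimates.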

In the rest of this section, we describe and discuss the experiments performed using the configuration described in Table 1.


**Table 1.** Adjustable parameters grouped by subsystem. The table shows the values used in the experimental sessions. Some parameters depend on the robot, as is the case with the *α* parameters of AMCL, while others depend on the environment, such as the map resolution in GMapping.

#### *6.3. Initial Framework Experiments*

To test the initial framework implementation described in Section 4, we carried out an experimental session in the surroundings of the University Institute for Computing Research building, where the secondary laboratory of our research group is located (Figure 10, right). For this experiment, we used our framework to create a basic but complete autonomous navigation application capable of indefinitely repeating the same trajectory in the described environment. Following the procedure detailed in Section 4, the experimental session was divided into two phases: pre-execution and execution.

In the pre-execution phase, we recorded a dataset while driving manually in a closed loop around the building shown in Figure 10, right. Then, offline, we replayed this dataset to generate the inputs for the architecture shown in Figure 2. As a result of this process, we obtained a grid map (shown in Figure 10, left) and a subsampled trajectory as described in Section 4.2.2. Thanks to the use of standard ROS launch files, we can run the whole pre-execution system with just one click. In the online execution phase, we used the architecture shown in Figure 3 to autonomously replicate the trajectory obtained in the previous phase, cyclically and indefinitely.
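The trajectory subsampling step can be sketched as keeping only the recorded waypoints separated by at least the configured distance between local goals (Section 4.2.2). This is a minimal illustration under that assumption; the function name and example distances are hypothetical, not the actual implementation:

```python
import numpy as np

def subsample_trajectory(points, goal_distance):
    """Keep only waypoints at least `goal_distance` apart, producing
    the local goals handed to the planner. Illustrative sketch."""
    goals = [points[0]]
    for p in points[1:]:
        if np.linalg.norm(np.asarray(p) - np.asarray(goals[-1])) >= goal_distance:
            goals.append(p)
    return goals

# Example: a straight 10 m recorded path sampled every 0.5 m,
# subsampled with a 1.0 m distance between local goals.
path = [(i * 0.5, 0.0) for i in range(21)]
print(len(subsample_trajectory(path, goal_distance=1.0)))  # → 11
```

A larger goal distance yields a sparser trajectory and smoother control at the cost of looser path tracking.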

The use of a roslaunch file made the starting process fast and easy. During the experiment, the system behaved reliably. The localization was very stable, allowing the robot to complete the experiment, consisting of five consecutive laps around the building, without interruption. The system demonstrated good repeatability, with very small variations in the trajectories from lap to lap. The control was smooth and stable along the entire path, following the given trajectory without oscillations. This first experiment proved that our framework is able to produce a real autonomous navigation application quickly and easily, requiring just the execution of two launch files. As shown in [24], it would also be possible to perform an equivalent experiment using the tools provided by the navigation stack and some plug-ins. However, in the following section, we demonstrate the versatility of our framework by changing the localization system to navigate on and off the map, which would not have been possible using the navigation stack.

**Figure 10.** (**left**) Grid map of the secondary laboratory building, obtained using the architecture shown in Figure 2. (**right**) The actual building and the environment where the initial framework experiments have been carried out.

#### *6.4. GNSS/SLAM Fusion Framework Experiments*

To demonstrate our framework's versatility, we generated a second application in a different environment. In this case, we replaced the original localization module with the fusion module described in Section 5. Our framework allowed us to make this replacement by simply changing the corresponding extensible markup language (XML) tag in the launch file of the execution phase, without worrying about other nodes and component interactions. The rest of the process to create the application was exactly the same as the one described in the previous section. In this way, we verified how easily our framework allows testing new algorithms within a fully operative autonomous navigation application. Specifically, in this case, it would have been very complicated to generate an equivalent application using the navigation stack, since it is not prepared for off-map navigation.
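As a minimal sketch of this kind of replacement, a single `<node>` tag in a ROS launch file can be swapped while the rest of the system stays untouched. The package and node names below are hypothetical placeholders, not the actual names used in our system:

```xml
<launch>
  <!-- Original localization module (map-based Monte Carlo only): -->
  <!-- <node pkg="amcl" type="amcl" name="localization"/> -->

  <!-- Replaced by the GNSS/SLAM fusion module of Section 5
       (hypothetical package and node names): -->
  <node pkg="fusion_localization" type="fusion_node" name="localization"/>
</launch>
```

Because both nodes publish the same pose interface, no other node in the execution-phase launch file needs to change.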

In the rest of this section, we describe and comment on the experiments carried out to test this second application. We tested the new localization module, observing how it affected the whole autonomous navigation system. We performed these experiments on the University of Alicante campus (in the area whose grid map is shown in Figure 11) because this environment offers different scenarios that may affect the localization of our vehicle. For example, there are areas with tall trees and buildings, which can cause satellite occlusions affecting the GNSS, and many dynamic obstacles, such as pedestrians, electric maintenance cars, or even road vehicles, which can affect the Monte Carlo localization.

**Figure 11.** Grid map obtained in the pre-execution phase (Figure 2) and later used in the execution phase (Figure 3). We generated this map using datasets previously recorded during the experimental sessions described in [42].
