*4.2. Execution Phase*

Figure 3 shows the proposed architecture as a closed-loop application for testing software modules in autonomous navigation. The left part of the diagram shows the different abstraction levels in perception, from the sensor layer to the localization module. This localization module could be replaced by any other, either SLAM-based or GNSS-based. This feature is useful for comparing, in the same environment, localization algorithms under development against well-known state-of-the-art algorithms. In the basic configuration, our localization module consists of an adaptive Monte Carlo localization (AMCL) algorithm. The right side of the diagram shows the different abstraction levels in the action domain, from high-level planning to the actuator layer. This is the closed-loop part, where we can observe how localization affects planning and control. The upper part of the scheme shows a node that serves as a high-level interface with the external agent. Using this interface, we can visualize the position of the robot on the map and act on the system by sending global goals.
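The closed-loop interaction between localization, planning, and control can be illustrated with a minimal sketch. The code below is not part of the framework: the unicycle kinematics, the proportional gains, and the function names are hypothetical simplifications, chosen only to show how a pose estimate feeds the control commands that, in turn, act back on the environment.

```python
import math

# Illustrative sketch (not the framework's code): a pose estimate from the
# localization module drives a simple proportional controller toward a global
# goal, mimicking the localization -> planning -> control chain of Figure 3.

def control_step(pose, goal, k_lin=0.5, k_ang=1.0):
    """Compute (linear, angular) velocity commands from the estimated pose.

    pose: (x, y, theta) estimate produced by the localization module
    goal: (x, y) global goal sent through the high-level interface
    """
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - pose[2]
    # Wrap the heading error to [-pi, pi] so the robot turns the short way
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return k_lin * distance, k_ang * heading_error

def simulate(goal, steps=200, dt=0.1):
    """Integrate the closed loop forward; return final distance to the goal."""
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(steps):
        v, w = control_step((x, y, theta), goal)
        # Unicycle model: the control action changes the environment state,
        # which the next localization estimate (here, ground truth) reflects.
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return math.hypot(goal[0] - x, goal[1] - y)
```

In this toy loop the estimate is perfect; in the real system any localization error enters `control_step` directly, which is exactly the effect the closed-loop architecture lets us observe.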

Below we describe the most important modules of our application in perception and motion. We dedicate the last subsection to describing the reactive safety system implemented to avoid collisions.

**Figure 3.** ROS node scheme showing the system architecture in the execution phase for our first framework implementation. Notice that each node is placed at its corresponding abstraction level. The execution is now online because this is a closed-loop application that acts on the environment through the control node. For both localization and planning, we use the files generated in the previous phase, shown in Figure 2.

4.2.1. Localization Module: AMCL

In the basic configuration of our system, once we have a grid map obtained using GMapping (Figure 2), we need an algorithm to localize the robot within it. We rely on the well-known AMCL. This algorithm is based on the method described in [40] and is part of the ROS navigation stack [24]. AMCL applies the Monte Carlo approximation sequentially, a method known in the literature as a particle filter. It uses the sensor model described in the beam models of range finders section of [40], and the action model described in the odometry motion model section of [40]. To adjust the AMCL parameters, we follow the processes described in these references; in Section 6.2 we show the parameters obtained.

This module accepts as inputs the odometry of the robot and laser scans from a light detection and ranging (LiDAR) sensor, in our case a Velodyne VLP-16. These inputs are standard ROS messages, which eases the process of using different LiDAR sensors or even robots with different odometers. The output of this module is also a standard ROS message containing the estimated pose and its covariance. These covariances will play an important role in the new localization module described in Section 5, where the AMCL estimate will be fused with GNSS information using a Kalman filter. Thanks to this structure of inputs and outputs, we can replace this localization node without affecting the rest of the architecture.
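The particle-filter cycle at the core of AMCL can be sketched in a simplified one-dimensional form. The following is an illustrative sketch, not the ROS `amcl` source: the corridor scenario, function names, and noise parameters are our own simplifications of the beam sensor model and odometry motion model referenced above.

```python
import math
import random

# Illustrative 1-D particle-filter step: an odometry-style motion update,
# a beam-style range-sensor weighting against a wall, and systematic
# resampling. Real AMCL does the same cycle in SE(2) against a grid map.

def predict(particles, odom_delta, motion_noise=0.05):
    """Odometry motion model: shift each particle, adding Gaussian noise."""
    return [x + odom_delta + random.gauss(0.0, motion_noise) for x in particles]

def weight(particles, measured_range, wall_position, sensor_noise=0.2):
    """Beam-style sensor model: Gaussian likelihood of the measured range."""
    weights = []
    for x in particles:
        expected = wall_position - x          # range a particle at x predicts
        err = measured_range - expected
        weights.append(math.exp(-0.5 * (err / sensor_noise) ** 2))
    total = sum(weights) or 1.0               # guard against total underflow
    return [w / total for w in weights]

def resample(particles, weights):
    """Systematic resampling: draw particles proportionally to their weights."""
    n = len(particles)
    u0 = random.random() / n                  # single random offset
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc)
    out, j = [], 0
    for i in range(n):
        u = u0 + i / n
        while j < n - 1 and cum[j] < u:
            j += 1
        out.append(particles[j])
    return out
```

Each cycle, the particle cloud contracts around poses consistent with both the commanded motion and the range measurement; the spread of the surviving particles is what the covariance in the output pose message summarizes.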
