## 4.2.2. Planning

At the top level of the motion part of the system, we defined the planning module. We first assessed the possibility of using the planner available in the ROS navigation stack, but we discarded this option due to its lack of versatility in terms of the localization algorithms that can be used, since the planners of this stack rely solely on grid maps to obtain the trajectories. The interest of our research group is focused on robot localization, and we plan to use other ways to represent the environment apart from grid maps, or even to transit through unmapped areas (Section 5). For this reason, we decided to develop our own system: a simple and functional planner that is more easily adaptable to any localization algorithm.

For the developed planning module, we use a topological-metric map that contains all the possible trajectories required by the target application that we want to generate using our framework. This trajectory map is generated in the pre-execution phase, as shown in Figure 2. We represent these trajectories by means of a graph in which each node represents an intersection and each link represents the path between intersections. Each node contains information about its location in map coordinates, and each link contains equidistant points also expressed in map coordinates. This metric information is obtained in pre-execution by recording the trajectories during manual driving, then sub-sampling them, and finally assigning each point to the node or link to which it belongs. Figure 4 shows an example of a graph that expresses, in a topological way, all the possible trajectories for a certain environment and application. The inputs to this module are the graph described above, the position obtained through AMCL (or an alternative localization module), and the global goal received from the high-level interface. With these data, the module generates as output the combination of nodes and links that describes the shortest path to the global goal within the graph. This path is composed of the points expressed in map coordinates, which are sent sequentially to the low-level control as local goals. This planner ensures that the robot always circulates through traversable areas.

**Figure 4.** Example of a trajectories graph expressed abstractly as a topological map. This map also includes the metric information. Each node represents the location of a road intersection, and each link contains a series of points in 2D Cartesian coordinates that will serve as local goals for the controller. This graph is obtained as shown in Figure 2, and used in execution as shown in Figure 3.
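As an illustrative sketch of how the planner can obtain this shortest combination of nodes and links, the fragment below runs Dijkstra's algorithm over a simple graph representation. The `nodes`/`links` dictionaries, the use of the number of equidistant points as link cost, and the assumption of bidirectional links are ours, not the paper's actual data structures:

```python
import heapq

def shortest_path(nodes, links, start, goal):
    """Dijkstra over a topological-metric graph (illustrative sketch).

    nodes: {node_id: (x, y)} intersection locations in map coordinates.
    links: {(a, b): [(x, y), ...]} equidistant points along each path;
           links are treated as bidirectional in this sketch.
    Returns the full list of map-coordinate points to send as local goals.
    """
    # Build an adjacency list; since the link points are equidistant,
    # their count is proportional to the link length and serves as cost.
    adj = {}
    for (a, b), pts in links.items():
        adj.setdefault(a, []).append((b, len(pts), pts))
        adj.setdefault(b, []).append((a, len(pts), list(reversed(pts))))

    # Frontier entries: (accumulated cost, node, local goals so far).
    frontier = [(0, start, [nodes[start]])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, c, pts in adj.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + c, nxt, path + pts + [nodes[nxt]]))
    return None  # goal unreachable in the graph
```

The returned point sequence corresponds to the path that the planner would feed, one local goal at a time, to the controller.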

## 4.2.3. Control

At the level below planning, we find the local control level of the vehicle. In this case, we also developed a basic and functional controller designed for low-speed vehicles (in the case of our robot, a maximum speed of roughly 1.3 m/s). The developed controller is designed for Ackermann vehicles, but given the versatility of the framework, it would be easy to integrate a new controller for vehicles with different kinematic constraints.

The aim of local control is to reach a local goal. Once it is reached, this module notifies the upper level (planning), requesting a new local goal. The controller tracks the straight line that joins two consecutive goals (Figure 5, left). These lines can be generated from a single point since the orientation is specified in the ROS pose message used to communicate the goals. The controller is proportional to the error signals in distance and angle with respect to the line. The error signal in distance *e<sub>d</sub>* is obtained as the distance between the vehicle base link and the nearest point on the goal straight line (green in Figure 5, right). The error signal in angle *e<sub>α</sub>* is obtained as the angle between the goal line (green in Figure 5, right) and the vehicle orientation line (red in Figure 5, right). The control action consists of two variables: steering angle (*u*) and speed (*v*). On the one hand, we compute the desired steering angle *u* as (Figure 6):

$$
u = e_d k_d + e_\alpha k_\alpha \tag{1}
$$

where *e<sub>d</sub>* and *e<sub>α</sub>* are the errors in distance and orientation respectively, and where *k<sub>d</sub>* and *k<sub>α</sub>* are constants adjusted experimentally for each vehicle in which the framework is implemented. On the other hand, the *e<sub>d</sub>* error is saturated so that |*u*| does not exceed the maximum possible value *u<sub>max</sub>*. This saturation is required because the linear speed applied as part of the control action is calculated as a function of the steering as:

$$
v = v_{\max} - |u| k_v \tag{2}
$$

where *k<sub>v</sub>* = (*v<sub>max</sub>* − *v<sub>min</sub>*)/*u<sub>max</sub>*, and *v<sub>max</sub>* and *v<sub>min</sub>* are the maximum and minimum velocities, configurable as parameters. Note that if |*u*| > *u<sub>max</sub>*, the resulting speed would be less than *v<sub>min</sub>*. For this reason, we saturate the *e<sub>d</sub>* error signal in (1). Figure 6 shows the block diagram of the described controller for the steering variable.
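A minimal Python sketch of this controller, combining (1) and (2), could look as follows. The exact form of the saturation is our assumption (we shrink the distance-error term so that |*u*| cannot exceed *u<sub>max</sub>* even after adding the angular term); the text only states that *e<sub>d</sub>* is saturated:

```python
def control_action(e_d, e_alpha, k_d, k_alpha, u_max, v_max, v_min):
    """Proportional controller of Eqs. (1)-(2). Parameter names follow
    the text; the saturation strategy is an assumed interpretation."""
    # Saturate e_d so that |u| cannot exceed u_max once the angular
    # term is added (one possible reading of the saturation step).
    e_d_lim = max(0.0, (u_max - abs(e_alpha) * k_alpha) / k_d)
    e_d = max(-e_d_lim, min(e_d_lim, e_d))

    u = e_d * k_d + e_alpha * k_alpha      # Eq. (1): steering angle
    k_v = (v_max - v_min) / u_max          # speed/steering trade-off gain
    v = v_max - abs(u) * k_v               # Eq. (2): linear speed
    return u, v
```

With zero errors the vehicle drives straight at *v<sub>max</sub>*; with a saturated distance error it steers at *u<sub>max</sub>* and slows to *v<sub>min</sub>*.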

**Figure 5.** (**left**) Representation of local goals on the road. Each local goal is expressed as a point (*x*, *y*, *θ*). With this information, we can generate a line (in green) with an orientation *θ* that crosses the point (*x*, *y*) (in red). (**right**) The error signal in distance *e<sub>d</sub>* is obtained as the distance between the vehicle base link and the nearest point on the goal straight line (green). The error signal in angle *e<sub>α</sub>* is obtained as the angle between the goal line (green) and the vehicle orientation line (red).

**Figure 6.** Block diagram of the proportional controller described in (1). The set-point is the position and orientation of the closest point on the line shown in Figure 5, left. The error signals are obtained from the position and orientation of the vehicle (Figure 5, right). Both the set-point and the vehicle pose are expressed in map coordinates. The control signal *u* is the steering value, expressed in degrees, that we send to the low-level control module of our vehicle.

## 4.2.4. Safety

Between the sensor and preprocessing levels in perception, and between the control and actuator levels in motion, we implement a hidden reactive layer for safety reasons. In the perception part, we designed a module that, given the LiDAR point clouds and the steering angle, returns the distance to the closest point of the closest reachable obstacle. Based on this distance and a safety margin, we calculate the maximum safe speed, defined as the speed required to reach the minimum allowed distance to the obstacle within a safety period of seconds (user configurable). As the vehicle gets progressively closer to the obstacle, the speed is reduced because the safety period is constant. Finally, when the minimum distance is reached, the safe speed is exactly zero. In the motion part, we implement a speed recommender that reads the speed contained in the control action and the maximum safe speed calculated by the obstacle detector, and sends the lower of the two to the actuators. Thus, if the vehicle finds an obstacle along its trajectory, it will stop at the defined safety distance and wait until the obstacle moves away from the robot. If there are no close obstacles, the vehicle will recover its original speed.
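The speed recommendation described above can be sketched as follows; the clearance-over-period formula matches the description, while the function names and default parameter values are illustrative:

```python
def safe_speed(obstacle_dist, min_dist, safety_period):
    """Maximum safe speed: the speed at which the remaining clearance
    (distance beyond the minimum allowed) is covered in safety_period
    seconds. Zero once the minimum distance is reached."""
    return max(0.0, (obstacle_dist - min_dist) / safety_period)

def recommend_speed(control_speed, obstacle_dist, min_dist=1.0, safety_period=2.0):
    """Send the lower of the controller's speed and the safe speed."""
    return min(control_speed, safe_speed(obstacle_dist, min_dist, safety_period))
```

With a distant obstacle the controller's speed passes through unchanged; as the obstacle approaches, the recommended speed decreases linearly until it reaches zero at the minimum distance.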
