*2.6. Obstacle Detection Methodology*

Obstacle detection refers to the task of searching for dangerous obstacles in specific regions of the maps introduced in Section 2.5 in order to guarantee safe UAS autonomous navigation. As already mentioned, the vision system guarantees high resolution for distances within 40 m, while the radar system provides a lower resolution but an operating range of up to 120 m [12]. Therefore, the visual detection algorithm is typically more effective in urban/indoor environments or at lower UAS speeds, whereas in larger open environments with few obstacles, the radar detection system guarantees safer operations, allowing for faster flight. In addition, it must be considered that urban or indoor environments may have poor or incomplete GNSS coverage. In these conditions, the visual detection system is more reliable, not only in terms of resolution, but also because the vision maps are computed using the position provided by the SLAM process, which does not require geolocation data.

In order to detect dangerous obstacles in the two maps used (vision and radar), it is necessary to define a region of interest (ROI) [25] as the subset of the maps where obstacles are searched for. The approach followed in this article considers any object (static or dynamic) found inside the ROI to be potentially dangerous. The shape of the ROI is defined to ensure that the global trajectory produced a priori by the planner can be considered safe as long as no obstacles lie inside the ROI. To reduce the computational cost of the obstacle search, we make some reasonable assumptions about UAS navigation:


Under these operating conditions, we define the ROI as a cylindrical sector with radius *dm* (the maximum search depth along the flight direction), FOV angle *ψR*, and vertical height *hm*. This region is first computed with respect to a body reference frame *B* centered at the UAS position *P* and with Euler angles (0, 0, *ψ*), i.e., with the *y<sup>b</sup>* axis aligned with the drone heading angle *ψ*, as shown in Figure 7. The figure shows a schematic of the detection process applied to the local map: the sensor FOV where obstacles are detected is shown in gray, while the yellow area defines the ROI, i.e., the sub-region where the presence of obstacles is considered dangerous.

**Figure 7.** Mapping region of the local map (**gray**); search region where the presence of an obstacle is considered dangerous (**yellow**); voxels occupied by obstacles (**blue**).
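As a concrete illustration, the cylindrical-sector ROI can be sampled in the body frame *B* with a naive grid at the map resolution. The sketch below is purely illustrative (all names are our own) and does not reproduce the more efficient point generator of Appendix A:

```python
import numpy as np

def roi_points_body(d_m: float, psi_r: float, h_m: float, m_r: float) -> np.ndarray:
    """Sample the cylindrical-sector ROI in the body frame B.

    The sector opens along the y^b axis (the flight direction), with
    radius d_m, FOV angle psi_r, and vertical height h_m, sampled on a
    grid at the map resolution m_r.
    """
    ticks = np.arange(-d_m, d_m + m_r, m_r)
    z_ticks = np.arange(0.0, h_m + m_r, m_r)
    pts = []
    for x in ticks:
        for y in ticks:
            if y <= 0.0 or np.hypot(x, y) > d_m:
                continue  # outside the half-plane or beyond the search depth
            # angle measured from the y^b axis (flight direction)
            if abs(np.arctan2(x, y)) > psi_r / 2.0:
                continue  # outside the FOV sector
            for z in z_ticks:
                pts.append((x, y, z))
    return np.asarray(pts)
```

This brute-force sampling already shows why the number of points *N* grows with both the ROI volume and the inverse of the map resolution.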

As the drone moves along the planned trajectory, the coordinates of the ROI in the fixed reference frame must be updated at each iteration of the algorithm, taking into account the UAS current position and yaw angle through the relation

$$
\vec{F}\_i = R\_z(\psi)^{-1} \vec{F}\_i^{\,b} + \vec{P} \quad ; \quad i = 1, \cdots, N \tag{1}
$$

where $\vec{F}\_i$ and $\vec{F}\_i^{\,b}$ are the coordinates of the *i*-th point of the ROI in the ENU and *B* frames, respectively, and

$$\left(R\_z(\psi)\right)^{-1} = \begin{bmatrix} \cos(\psi) & \sin(\psi) & 0\\ -\sin(\psi) & \cos(\psi) & 0\\ 0 & 0 & 1 \end{bmatrix} \in \mathbb{R}^{3 \times 3} \tag{2}$$

is the corresponding rotation matrix.
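Equations (1) and (2) amount to a single rotation plus translation per ROI point, which can be vectorized over all *N* points at once. A minimal sketch (function and variable names are our own):

```python
import numpy as np

def roi_to_enu(points_b: np.ndarray, psi: float, p: np.ndarray) -> np.ndarray:
    """Apply Eq. (1): rotate body-frame ROI points by R_z(psi)^{-1}
    and translate by the UAS position P.

    points_b : (N, 3) array of ROI points in the body frame B
    psi      : current yaw angle [rad]
    p        : (3,) UAS position in the ENU frame
    """
    c, s = np.cos(psi), np.sin(psi)
    # R_z(psi)^{-1} as in Eq. (2)
    rz_inv = np.array([[ c,  s, 0.0],
                       [-s,  c, 0.0],
                       [0.0, 0.0, 1.0]])
    # row-stacked points: (rz_inv @ v) for each row v is points_b @ rz_inv.T
    return points_b @ rz_inv.T + p
```

Vectorizing the product keeps this update cheap even for large *N*, since the rotation matrix is built once per iteration rather than once per point.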

This operation can be expensive, and its cost depends on the number *N* of points representing the ROI. For this reason, an efficient algorithm that generates the minimum number of points belonging to the ROI volume, given the map resolution *mr*, is reported in Appendix A.

Once all the iterations needed to update the entire ROI are performed, the result shown in Figure 8 is obtained, where the voxel corresponding to each calculated point can be seen. Finally, the last step of the obstacle detection task is to query the radar and vision maps (via the OctoMap libraries) using the coordinates of the ROI vectors $\vec{F}\_i$ in Equation (1): for each point of the ROI, an occupancy probability is returned, and if this probability exceeds a given threshold (0.5 in our experiments), the corresponding voxel is considered occupied and an obstacle-avoidance strategy must be activated.

**Figure 8.** Voxels calculated by the detection algorithm within the search region. The number of points computed by the algorithm depends on both the map resolution and the ROI volume.
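The final query-and-threshold step can be summarized as follows. Since the exact OctoMap binding depends on the language and version used, the map lookup is abstracted here as a `query_occupancy` callable (a hypothetical stand-in for, e.g., searching the octree node at a coordinate and reading its occupancy probability):

```python
OCC_THRESHOLD = 0.5  # occupancy probability above which a voxel counts as occupied

def detect_obstacles(roi_points_enu, query_occupancy):
    """Return the ROI points whose voxel is occupied in the queried map.

    roi_points_enu  : iterable of (x, y, z) ROI coordinates in the ENU frame
    query_occupancy : callable mapping a point to its occupancy probability
                      (a placeholder for the actual OctoMap lookup)
    """
    return [p for p in roi_points_enu
            if query_occupancy(p) > OCC_THRESHOLD]
```

If the returned list is non-empty, at least one voxel inside the ROI is occupied and the avoidance strategy is triggered.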

It is important to underline that the ROI update algorithm should compute the coordinates of the points starting from those closest to the drone, so that the search for dangerous obstacles begins at the most critical points, i.e., those nearest the UAS.
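This nearest-first ordering can be obtained, for instance, by sorting the body-frame points by their distance from the origin of *B* (an illustrative sketch, not the paper's algorithm):

```python
import numpy as np

def order_nearest_first(points_b: np.ndarray) -> np.ndarray:
    """Sort body-frame ROI points by distance from the drone (origin of B),
    so that occupancy queries test the most critical voxels first."""
    d = np.linalg.norm(points_b, axis=1)
    return points_b[np.argsort(d)]
```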
