*2.7. Simplified Avoidance Strategy*

To carry out the experiments required by the AURORA project, it was useful to design and implement an obstacle avoidance strategy on board the drone. Although the literature offers many refined techniques [26–29] that could have been used, the main objective of the work package assigned to the University of Florence unit was the development and testing of a fast complementary obstacle detection system exploiting radar and optical sensors. In this context, a very simple avoidance strategy, able to operate correctly in an open environment, was implemented.

In Figure 9, a flowchart shows the workflow of the algorithm: during the autonomous mission, the planner and the obstacle detection (OD) blocks are enabled. As long as no obstacles are detected inside the ROI, the OD block lets the planner generate the trajectory through a velocity reference *V*<sub>1</sub>. When the OD task detects the presence of one or more obstacles, it disables the planner by interrupting the loop connection and activates the obstacle avoidance (OA) block, which remains active as long as there are obstacles inside the ROI. As already mentioned, a simplified avoidance strategy was implemented by switching to a velocity reference vector *V*<sub>2</sub>, which corresponds to an increase in altitude. This state persists until no more obstacles are found in the ROI, at which point the standard mission planner is re-enabled.
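A minimal Python sketch of this switching logic may help fix ideas; all names here (the velocity values, `detect_obstacles_in_roi`) are hypothetical placeholders and do not correspond to the actual on-board software:

```python
import numpy as np

# Hypothetical velocity references (m/s, body frame): V1 tracks the planned
# trajectory, V2 commands a pure climb to clear the obstacle.
V1 = np.array([2.0, 0.0, 0.0])   # forward along the global path
V2 = np.array([0.0, 0.0, 1.0])   # vertical climb (avoidance)

def detect_obstacles_in_roi(local_map, roi_voxels):
    """Placeholder OD task: return the occupied map voxels inside the ROI."""
    return [v for v in roi_voxels if v in local_map]

def velocity_reference(local_map, roi_voxels):
    """One iteration of the OD/OA loop of Figure 9.

    While the ROI is free, the planner stays enabled and the drone tracks V1;
    as soon as obstacles appear, the planner is bypassed and V2 (a climb) is
    output until the ROI is clear again.
    """
    obstacles = detect_obstacles_in_roi(local_map, roi_voxels)
    if obstacles:
        return V2, obstacles   # OA active: climb, planner disabled
    return V1, []              # OD only: planner keeps generating V1

# Example: a single occupied voxel inside the ROI triggers the climb.
local_map = {(4, 0, 1)}                  # occupied voxels (grid indices)
roi = [(3, 0, 0), (4, 0, 1), (5, 0, 0)]  # ROI voxels (grid indices)
v_ref, found = velocity_reference(local_map, roi)  # returns V2, [(4, 0, 1)]
```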

**Figure 9.** Flowchart of the obstacle detection (OD) and avoidance algorithm (OA).

Figure 10 illustrates the different steps of the simplified avoidance strategy.

**Figure 10.** Avoidance strategy of dangerous obstacles. The voxels in front of the drone define the ROI where the algorithm searches for dangerous obstacles (**a**). When an object is detected inside it (**b**), the avoidance strategy is activated (**c**).


Notice that this strategy, once triggered, does not allow a subsequent decrease in altitude to realign with the global path. This means that the mission will be completed at a higher altitude than that of the path defined a priori. However, this is not a problem since, as already mentioned, the avoidance strategy was implemented only to test the detection system.

#### **3. Results**

The tests were carried out at the Celle castle in Incisa Valdarno (Italy), visible in Figure 11. To test the correct functioning of the obstacle detection and avoidance system within the planner, an autonomous mission was preloaded into the control node, whose global trajectory consists of a straight path that collides with one of the buildings on the site. Considering the type of environment in which the tests were carried out, and to prevent the avoidance strategy from triggering immediately, the desired depth of the ROI was set to 7 m for the vision system and 10 m for the radar.

The trajectory that the drone needs to follow is shown in Figure 12. This mission foresees an automatic take-off from point 1, arrival at point 2, and a return to the take-off point with a final landing. All phases of the mission were managed on board the Jetson, which triggered the avoidance strategy described in the previous section as soon as the ODS node detected the presence of a building.

**Figure 11.** Celle castle site used to carry out the tests on the obstacle detection and avoidance system.

**Figure 12.** Example of mission carried out to test the correct functioning of the obstacle detection and avoidance system. This mission involves a trajectory that takes the drone on a collision course with a building on the site.

Since the two detection systems (radar and optical) are able to work independently, several missions were carried out to test each sensor separately. Figure 13 presents the results obtained by enabling first the radar detection system and then the optical one. The top row of Figure 13a shows three snapshots of a mission with the radar detection system, together with the voxels of the global map and the voxels that make up the ROI. In the central image, the ROI turns red because the algorithm detects an obstacle (the edge of a building). The bottom row instead shows the text area, where the ODS node starts publishing the warning messages with the coordinates of the detected obstacles (central image) and the function that implements the avoidance strategy (right image). This leads the drone to climb in altitude and subsequently pass over the building, avoiding the collision. A similar behavior can also be observed for the optical detection system, shown in Figure 13b.

As already explained, the resolutions that the two sensors provide are very different: in the global radar map, only the voxels corresponding to the edges of the building, which are more reflective, are displayed, while in the optical local map, the entire visible wall of the building is filled with voxels. Accordingly, the resolution of the radar maps is set to 1 m, while for the vision maps, it is equal to 0.5 m. Since the radar has a coarser resolution, it is safer to inflate the volume associated with each detected obstacle.
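As an illustration of this safety margin, one possible inflation scheme scales an axis-aligned bounding box around each detected voxel with the map resolution. This is a hypothetical sketch, not the implemented code, and `safety_factor` is an assumed tuning parameter:

```python
def inflate_obstacle(center, map_resolution, safety_factor=1.5):
    """Return an axis-aligned bounding box around a detected voxel.

    Coarser maps (e.g., the 1.0 m radar grid vs. the 0.5 m vision grid)
    get a proportionally larger box, so radar detections are treated
    more conservatively.
    """
    half = 0.5 * map_resolution * safety_factor
    cx, cy, cz = center
    return ((cx - half, cy - half, cz - half),
            (cx + half, cy + half, cz + half))

# Example: the same obstacle center yields a 1.5 m wide box on the radar
# map and a 0.75 m wide box on the vision map.
radar_box  = inflate_obstacle((10.0, 0.0, 2.0), map_resolution=1.0)
vision_box = inflate_obstacle((10.0, 0.0, 2.0), map_resolution=0.5)
```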


**Figure 13.** Screenshots of the building detection and avoidance phases through the separate use of the two sensors: radar detection system (**a**), and optical detection system (**b**).

Table 1 shows the average update times for the coordinates of the ROI as a function of the desired depth. Obviously, at greater depths, the algorithm must compute a larger number of voxels, and the complexity increases accordingly. These timings were obtained by setting a map resolution of 0.5 m and a FOV of 24 degrees.


**Table 1.** Number of voxels computed by the ROI update algorithm, map resolution, and average computation time as a function of the desired depth.

As can be seen, even at the maximum search depth (120 m), the average update time is only 46.2 ms. This allows for the effective real-time detection of dangerous obstacles, thus maximizing the time available for the implementation of the avoidance strategy.
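The growth of the workload with depth can be sanity checked against the voxel-count estimate of Appendix A, $N \approx \psi_R d_m^2 h_m / (2 m_r^3)$. The short script below evaluates it for the settings used here (0.5 m resolution, 24 degree FOV); the ROI height `height_m` is an assumed value, as it is not reported in the table:

```python
import math

def estimated_voxels(depth_m, fov_deg=24.0, height_m=4.0, res_m=0.5):
    """Estimate of the ROI voxel count from Appendix A:
    N ~ psi_R * d_m^2 * h_m / (2 * m_r^3).
    height_m = 4.0 is an assumed value for illustration only.
    """
    psi = math.radians(fov_deg)
    return psi * depth_m**2 * height_m / (2 * res_m**3)

for depth in (7, 10, 60, 120):
    print(f"d_m = {depth:3d} m -> N ~ {estimated_voxels(depth):,.0f} voxels")
```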

#### **4. Discussion**

The experimental tests presented in this paper have shown that, in an urban scenario, the complementary obstacle detection and avoidance system is always able to detect and avoid dangerous static obstacles along the UAS path. The adopted redundant architecture, consisting of a radar sensor and an optical sensor with complementary operational features, guarantees the detection of dangerous obstacles under different operating conditions, significantly increasing the safety of the entire system. In fact, as already discussed in the previous sections, besides the different operating ranges and resolutions of the radar and the optical sensor, in low-light conditions, the radar can effectively replace vision, even at short range. In conditions of acceptable brightness, vision can guarantee higher resolutions and therefore greater safety in detecting short-range obstacles. Moreover, redundancy can also be found in the different map reference systems employed: radar maps are based on GNSS geolocation data, while the vision maps use an independent position reference obtained from the optical SLAM process. In this way, if satellite coverage fails, the vision maps continue to function correctly, and vice versa.

It is important to note that the search associated with radar maps can involve larger regions of interest than the optical ones, as the radar sensor can reach longer ranges. This implies that, by appropriately scaling the resolutions of the vision and radar maps as a function of the specific sensor range, it is possible to obtain equivalent computation times for the related regions of interest. This approach makes the two detection systems even more complementary.
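For example, using the voxel-count estimate derived in Appendix A and imposing the same number of points *N* on two ROIs of depths *d*<sub>1</sub> and *d*<sub>2</sub> (with a common FOV angle and height) gives the required resolution ratio

$$\frac{\psi_R d_1^2 h_m}{2 m_{r,1}^3} = \frac{\psi_R d_2^2 h_m}{2 m_{r,2}^3} \quad\Longrightarrow\quad \frac{m_{r,1}}{m_{r,2}} = \left(\frac{d_1}{d_2}\right)^{2/3},$$

so a ROI four times deeper only requires a voxel edge about $4^{2/3} \approx 2.5$ times larger to keep the same computational load (a back-of-the-envelope relation, not a result reported in the paper).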

Future research work will focus on the implementation of more complex avoidance strategies which, possibly with the help of GPUs, can simultaneously compute multiple alternative trajectories in order to find the optimal one, also avoiding moving obstacles. Another interesting aspect that deserves further work concerns the study of different strategies for the creation and management of radar and vision maps, where, for example, the RTK and SLAM references could be merged or swapped when necessary.

**Author Contributions:** Conceptualization, M.P. and M.B.; methodology, M.B. and E.B.; software, L.B.; validation, L.B., L.M. and T.C.; investigation, L.B., L.M. and T.C.; formal analysis, L.B.; data curation, L.B.; writing–original draft preparation, L.B.; writing–review and editing, M.P.; funding acquisition, M.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was co-funded by Horizon 2020, European Community (EU), AURORA (Safe Urban Air Mobility for European Citizens) project, ID number 101007134.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:

FOV	Field of View
GNSS	Global Navigation Satellite System
GPU	Graphics Processing Unit
OA	Obstacle Avoidance
OD	Obstacle Detection
ODS	Obstacle Detection System
ROI	Region of Interest
RTK	Real-Time Kinematic
SLAM	Simultaneous Localization and Mapping
UAS	Unmanned Aircraft System
#### **Appendix A. ROI Generation Algorithm**

This appendix presents an algorithm to define a ROI geometry with the shape of a cylindrical sector, depending on the following parameters: the desired depth *d<sub>m</sub>*, the FOV angle *ψ<sub>R</sub>*, the ROI height *h<sub>m</sub>*, and the map resolution *m<sub>r</sub>*.
The ROI is computed as the following set of *N* coordinates $\vec{F}^b$ in the reference frame *B*:

$$ROI = \left\{ \vec{F}^b \in \mathbb{R}^3 \text{ s.t. } \vec{F}^b = \left[ \begin{array}{c} n_i m_r \sin\left(n_j/n_i\right) \\ n_i m_r \cos\left(n_j/n_i\right) \\ n_k m_r \end{array} \right] \right\} \tag{A1}$$

where the indexes *n<sub>i</sub>*, *n<sub>j</sub>*, *n<sub>k</sub>* are integers depending both on the ROI geometry and the map resolution:

$$n_i = 1, \dots, \left\lceil \frac{d_m}{m_r} \right\rceil, \quad -\left\lceil n_i \frac{\psi_R}{2} \right\rceil \le n_j \le \left\lceil n_i \frac{\psi_R}{2} \right\rceil, \quad -\left\lfloor \frac{h_m}{2m_r} \right\rfloor \le n_k \le \left\lfloor \frac{h_m}{2m_r} \right\rfloor.$$

In brief, the computation of the *N* points of the ROI proceeds along the radius (through the *n<sub>i</sub>* index) from the origin up to the maximum depth, with a step equal to the resolution *m<sub>r</sub>*. At each depth, the *n<sub>j</sub>* index spans the FOV angle *ψ<sub>R</sub>* to compute all the points along the corresponding arc of circle. Finally, the third index *n<sub>k</sub>* replicates the above planar geometry along the vertical dimension. The number *N* of points computed by the algorithm can be approximately estimated as the ratio between the geometrical volume of the ROI and the voxel volume, such that

$$N \approx \frac{\psi_R \, d_m^2 \, h_m}{2 m_r^3}.$$
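A direct transcription of Equation (A1) and the above index bounds into Python might look as follows (a minimal sketch: the variable names mirror the symbols above, and the height value in the usage example is assumed for illustration):

```python
import math

def generate_roi(d_m, psi_R, h_m, m_r):
    """ROI point set of Equation (A1), expressed in the body frame B.

    d_m  : desired depth [m]
    psi_R: FOV angle [rad]
    h_m  : ROI height [m]
    m_r  : map resolution [m]
    """
    k_max = math.floor(h_m / (2 * m_r))
    roi = []
    for n_i in range(1, math.ceil(d_m / m_r) + 1):    # radial steps of size m_r
        j_max = math.ceil(n_i * psi_R / 2)
        for n_j in range(-j_max, j_max + 1):          # points along the arc
            theta = n_j / n_i                         # angle from the body y-axis
            x = n_i * m_r * math.sin(theta)
            y = n_i * m_r * math.cos(theta)
            for n_k in range(-k_max, k_max + 1):      # vertical replicas
                roi.append((x, y, n_k * m_r))
    return roi

# Example: the 7 m deep, 24 degree ROI of Figure A1a at 0.5 m resolution
# (the 4 m height is an assumed value, not taken from the paper).
points = generate_roi(d_m=7.0, psi_R=math.radians(24), h_m=4.0, m_r=0.5)
```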

Figure A1 shows two examples of ROI with a common FOV angle of 24 degrees but different depths.

**Figure A1.** On the left (**a**) is the ROI relative to a desired depth of 7 m; on the right (**b**) is the ROI relative to a desired depth of 120 m. FOV angle is equal to 24 degrees.

#### **References**

