#### *2.1. UAS Architecture*

The platform used for the development of the obstacle detection and avoidance system consists of a DJI Matrice 300 RTK drone carrying, as payload, a ZED 2 stereoscopic camera, an AWR1843BOOST radar, and an Nvidia Jetson Xavier AGX board. The obstacle detection algorithm runs on the Jetson board, which is directly connected to the camera and the radar. Moreover, through a UART connection, the board is also able to pilot the UAS autonomously.
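The payload data flow just described (two sensor streams fused on the Jetson, with a resulting command sent to the flight controller over UART) can be sketched as follows. This is a purely illustrative sketch: the frame types, the fusion rule, and the command names are hypothetical stand-ins, not the actual DJI, Stereolabs, or TI APIs.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the real sensor driver outputs.
@dataclass
class DepthFrame:
    min_distance_m: float  # closest point reported by the stereo camera

@dataclass
class RadarFrame:
    min_range_m: float     # closest target reported by the radar

def fuse_min_distance(depth: DepthFrame, radar: RadarFrame) -> float:
    """Conservative fusion: trust whichever sensor reports the nearer obstacle."""
    return min(depth.min_distance_m, radar.min_range_m)

def avoidance_command(distance_m: float, safety_m: float = 5.0) -> str:
    """Map the fused distance to a simplified command sent over the UART link."""
    return "BRAKE" if distance_m < safety_m else "CONTINUE"

# Example: the camera sees something at 4.2 m, the radar at 7.5 m.
command = avoidance_command(fuse_min_distance(DepthFrame(4.2), RadarFrame(7.5)))
```

Taking the minimum of the two sensors is a deliberately conservative choice: either sensor alone can trigger avoidance, which matters outdoors where each modality has failure modes the other covers.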

Figure 1 shows the DJI Matrice 300 RTK equipped with the additional components. In particular, the Nvidia Jetson Xavier AGX board is visible at the top, while the radar and the optical sensors used for the implementation of the obstacle detection algorithm are mounted at the bottom.

#### *2.2. Stereoscopic Vision*

To obtain three-dimensional information about the surrounding environment through computer vision, essentially two technologies are available. The first is based on RGB-D technology, in which an RGB optical sensor is paired with a *TOF* (time of flight) depth sensor. The second processes the optical streams coming from two RGB cameras on board the companion computer to produce 3D data. The TOF sensor mounted on RGB-D cameras uses a laser beam matrix, which is very sensitive to ambient light conditions and does not guarantee long operating ranges.
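The second approach recovers depth by triangulation: for a calibrated stereo pair, the depth of a point is Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of the point between the two images. A minimal sketch (the numeric values below are illustrative, not calibration data from the actual camera):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classical pinhole stereo triangulation: Z = f * B / d.

    disparity_px: horizontal pixel offset of the same point between the two views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two optical centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 12 cm baseline, 21 px disparity.
z = depth_from_disparity(disparity_px=21.0, focal_px=700.0, baseline_m=0.12)
# z = 700 * 0.12 / 21 = 4.0 m
```

The formula also explains why range is limited: depth error grows quadratically with distance, since far points produce disparities of only a fraction of a pixel.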

**Figure 1.** DJI Matrice 300 RTK drone equipped with the additional hardware components.

To comply with the specifications imposed by the AURORA project, classical stereoscopy based on two RGB optical streams was chosen, as it provides more stable performance, especially in outdoor environments. In particular, the ZED 2 stereoscopic camera by Stereolabs was used, which, in addition to providing the point cloud of the surrounding environment defined in the fixed frame, is able to generate an estimate of the trajectory traveled by the camera in 3D space. This estimate is obtained from the SLAM algorithm implemented in the SDK (Software Development Kit) supplied with the camera. Unlike other optical systems (e.g., the Intel T265), the ZED 2 camera does not perform computations on board, leaving the entire computational load to the Nvidia Jetson module. This is a significant limitation, since a considerable share of the hardware resources is already reserved for the camera SDK, leaving less room for user applications. Because of these limitations, particular attention must be paid to the optimization of the detection algorithm, in order to minimize the computational cost of detecting dangerous obstacles. Despite these problems, the ZED 2 camera currently represents the state of the art for stereoscopic vision systems: it guarantees an operating range of up to 40 m and performance superior to that offered by other products. Figure 2 shows a functional example of this camera, where the RGB optical flow (a) and the depth flow, processed on board the Jetson module (b), are visible.

**Figure 2.** Functional example of a ZED 2 stereoscopic camera. Image (**a**) shows an RGB frame coming from the camera, while image (**b**) shows a depth frame processed by the ZED SDK on board the Jetson module.
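Since the Jetson must share its resources with the camera SDK, keeping the per-frame cost of obstacle detection low matters. One common optimization along these lines is to filter the point cloud with vectorized array operations rather than per-point Python loops; the sketch below illustrates the idea on a NumPy array of XYZ points (the function name and safety radius are illustrative, not part of the actual detection algorithm).

```python
import numpy as np

def nearby_obstacle_points(points: np.ndarray, max_range_m: float) -> np.ndarray:
    """Keep only points closer than max_range_m to the camera origin.

    points: (N, 3) array of XYZ coordinates in meters, camera frame.
    The whole filter is a single vectorized pass, with no Python-level loop.
    """
    distances = np.linalg.norm(points, axis=1)
    return points[distances < max_range_m]

# Example: one point at 1 m straight ahead, one point 50 m away.
cloud = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 50.0]])
close = nearby_obstacle_points(cloud, max_range_m=5.0)  # keeps only the first point
```

Discarding far points early shrinks every downstream stage (clustering, tracking, avoidance) and is a cheap way to reclaim headroom on a board already loaded by the SDK.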
