#### **2. Related Work**

The development of drone-related applications has exploded in recent years. In particular, there is enormous interest in using these robots to detect and monitor terrestrial mobile objects with their on-board cameras. However, as previously mentioned, their limited flight autonomy prevents them from performing long-duration tasks. For this reason, much of the research so far has focused on landing UAVs on mobile platforms, which gives them greater versatility, since in many scenarios a stationary landing area cannot be guaranteed. A good overview of the research on vision-based autonomous landing systems, as well as the challenges in this field, can be found in [10].

Ling et al. [11] addressed the problem of taking pictures of icebergs with drones launched from a ship. Traditionally, the aerial vehicle had to be recovered semi-manually by two operators: one piloted the drone until it was close enough to the boat for a second operator to retrieve it by hand, with the danger that this maneuver entailed. Ling et al. proposed a precision landing algorithm that completely eliminates human participation in this type of situation: it uses a downward-facing camera to track a target on the landing platform and generates high-quality relative pose estimates.

Lee et al. [12] focused on the use of downward-facing cameras and Image-Based Visual Servoing (IBVS) algorithms to track a platform in a two-dimensional space and perform a Vertical Take-Off and Landing (VTOL). They estimated the speed at which the platform moved and then used this information as a reference for an adaptive sliding-mode controller. Compared to other vision-based control algorithms that reconstruct a complete 3D representation of the target (which requires accurate depth estimates), IBVS algorithms are computationally less expensive.
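For context, the classical IBVS control law from the visual-servoing literature (a general formulation, not necessarily the exact controller of [12]) commands camera velocities directly from image-feature errors:

$$\mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}}^{+}_{\mathbf{e}}\,\mathbf{e}, \qquad \mathbf{e} = \mathbf{s}(t) - \mathbf{s}^{*},$$

where $\mathbf{s}$ are the measured image features, $\mathbf{s}^{*}$ their desired values, $\widehat{\mathbf{L}}^{+}_{\mathbf{e}}$ the pseudo-inverse of an estimate of the interaction matrix, and $\lambda > 0$ a gain. Depth enters only through the interaction-matrix estimate, which is why no full 3D reconstruction of the target is needed.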

Prior to these two works, Saripalli and Sukhatme [13] worked on vision algorithms for the autonomous landing of a helicopter on a mobile platform. They used Hu's invariant moments [14] for accurate detection of the target and a Kalman filter for tracking. Based on the output of the tracker, they implemented a trajectory controller that ensured landing on the mobile target.
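As an illustrative sketch only (assuming OpenCV and a binarized image in which the landing marker appears as a blob; it does not reproduce the implementation of [13]), marker detection via Hu-moment shape matching can be combined with a constant-velocity Kalman filter as follows:

```python
# Illustrative sketch, not the implementation of [13]: detect the landing
# marker by Hu-moment shape matching and smooth its pixel position with a
# constant-velocity Kalman filter (OpenCV).
import cv2
import numpy as np

def detect_marker(binary_img, reference_contour):
    """Return the centroid of the contour whose Hu moments best match the reference."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, float("inf")
    for c in contours:
        # matchShapes compares two contours through their Hu moment invariants.
        score = cv2.matchShapes(c, reference_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = c, score
    if best is None:
        return None
    m = cv2.moments(best)
    return np.array([[m["m10"] / m["m00"]], [m["m01"] / m["m00"]]], np.float32)

# Constant-velocity Kalman filter over the marker's pixel state (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track(binary_img, reference_contour):
    predicted = kf.predict()                    # predicted (x, y, vx, vy)
    z = detect_marker(binary_img, reference_contour)
    if z is not None:
        kf.correct(z)                           # fuse the new measurement
    return predicted[:2].ravel()                # smoothed pixel position
```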

The literature also contains some proposals based on Model Predictive Control (MPC). Maces et al. [15] considered a mission with three phases (target detection, target tracking, and autonomous landing) modeled as a state machine. During the last two phases, an MPC is used for position control, whereas a PID controller handles altitude control. The system we present extends the state machine proposed by Maces et al. with a key additional phase, namely a *recovery mode*. This new state increases the system's robustness by allowing the UAV to re-locate the landing platform autonomously in case the latter accidentally leaves the field of view of the drone's camera. Feng et al. [16] combined a vision-based target position measurement, a Kalman filter for target localization, an MPC for the guidance of the UAV, and an integral controller for robustness. They tested their algorithms on a DJI M100 quadcopter and reached a maximum error of 37 cm with a platform moving at up to 12 m s<sup>−1</sup>.
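A minimal sketch of such a mission state machine, extended with the recovery mode described above, is shown below; the state names, thresholds, and transition conditions are illustrative placeholders rather than the exact ones used in Section 3.1.

```python
# Minimal sketch of a mission state machine with a recovery mode; the state
# names, thresholds, and transition conditions are illustrative only.
from enum import Enum, auto

class State(Enum):
    TAKEOFF = auto()
    TARGET_DETECTION = auto()
    TARGET_TRACKING = auto()
    LANDING = auto()
    RECOVERY = auto()   # re-locate the platform if it leaves the camera's field of view

def next_state(state, platform_visible, aligned_over_platform, altitude, landed):
    if state is State.TAKEOFF:
        return State.TARGET_DETECTION if altitude > 2.0 else state
    if state is State.TARGET_DETECTION:
        return State.TARGET_TRACKING if platform_visible else state
    if state is State.TARGET_TRACKING:
        if not platform_visible:
            return State.RECOVERY
        return State.LANDING if aligned_over_platform else state
    if state is State.LANDING:
        if landed:
            return state
        return State.RECOVERY if not platform_visible else state
    if state is State.RECOVERY:
        # e.g., climb and sweep a search pattern until the platform is seen again
        return State.TARGET_TRACKING if platform_visible else state
    return state
```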

Other works have considered different techniques. Almeshal et al. [17] proposed a neural network to estimate the target position, together with a PID controller to track it and perform the landing, and validated the approach on a Parrot AR.Drone quadcopter. Finally, Yang et al. [18] developed a complete UAV autonomous landing system using a hybrid camera array (fish-eye and stereo cameras) and a state estimation algorithm based on motion compensation, and tested it on multiple platforms (Parrot Bebop and DJI M100).

A common assumption in many of these systems is that the speed of the mobile target is low enough for the UAV to land on it without compromising the integrity of either robotic platform. However, experiments carried out by the German Aerospace Center (DLR) have demonstrated that it is possible to land a fixed-wing drone on a net attached to the top of a car moving at 70 km h<sup>−1</sup> [19]. Note, however, that in this experiment the ground vehicle followed a linear trajectory, which is not always possible in SaR missions, where the debris forces the UGV to make turns almost continuously.

Indeed, there are substantial differences between landing a UAV on a terrestrial moving platform that describes a linear trajectory and one that describes a circular trajectory. Most of the research so far has focused solely on the former, without thoroughly considering that the movement of the target can also be circular, or even a mixture of both that produces arbitrary trajectories. This is, therefore, an interesting line of work, since in SaR tasks we want to provide the terrestrial robot with complete freedom of movement. In such a scenario, the UAV has to adapt to the trajectory described by the ground robot for a successful landing. The work presented in this paper takes a step forward in this direction by demonstrating a system capable of autonomously landing a UAV on a moving platform following either a linear or a circular trajectory.

Finally, all of the works described above propose strategies for precise landing on moving platforms, but none of them presents a full system capable of operating for extended periods of time. In our work, a robust state machine together with a recovery and re-localization module enables long-duration operation, as we show in Section 4.

#### **3. Proposed Approach**

This section describes the proposed approach: (1) a state machine to robustly execute the complete autonomous takeoff, tracking, and landing of a UAV on a moving landing platform (Section 3.1); (2) the detection and localization of the mobile target using a downward-looking camera (Section 3.2); and (3) the vision-based tracking of the mobile platform while in flight (Section 3.3).
