*4.3. Experiments in the Real Environment*

On the real robotic platforms we tested only the height-adaptive PID without predictive action, since the predictive system would have required an additional means of localizing the landing platform in global coordinates, as described in Section 3.3.2. In the simulated environment, transforming positions to a fixed global frame was straightforward. In the real world, however, this is more complex: a Visual Inertial Odometry (VIO) or even a full visual Simultaneous Localization and Mapping (vSLAM) system would have been required to localize the drone in the scene with respect to a fixed frame.
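The global-frame transformation that is straightforward in simulation can be sketched as follows, assuming the UAV's global pose is known (on a real platform, this is precisely what VIO or vSLAM would provide). The function name and the planar-yaw simplification are illustrative, not taken from our implementation:

```python
import numpy as np

def to_global(p_uav, uav_position, uav_yaw):
    """Transform a platform detection from the UAV body frame to a fixed
    global frame, given the UAV's global position and yaw.

    Simplification (assumed for illustration): roll and pitch are neglected,
    so the rotation reduces to a yaw rotation about the z axis."""
    c, s = np.cos(uav_yaw), np.sin(uav_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ np.asarray(p_uav) + np.asarray(uav_position)
```

In simulation this pose is read directly from the simulator's ground truth, which is why the predictive variant could be evaluated there without any additional localization module.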

We could, however, localize the landing platform relative to the UAV's coordinate frame, which is the only input required by the non-predictive approach. Therefore, on the real robotic platforms we qualitatively tested the system that employs our novel height-adaptive PID without prediction.

In particular, we performed five takeoff-tracking-landing sequences for both the linear and the circular trajectories, following the same strategy as that described in Section 4.2.1. The UAV landed successfully in all five experiments for the linear trajectory and failed only once for the circular trajectory. The landing quality is visualized in Figure 14. We thereby demonstrate that the system presented in this work can be deployed on real robotic platforms. Figure 15 visualizes one of the linear trajectory experiments, and Figure 16 shows a sequence of a recovery maneuver. The complete sequences can be found in the video provided as Supplementary Material, or at https://youtu.be/CCrPBw_we2E.

**Figure 14.** Percentage of successful landings in the real environment when using the height-adaptive, non-predictive PID controller for a linear and a circular trajectory of the UGV. Note that each experiment was run by re-launching the system from scratch, as described in Section 4.2.1.

(**a**) Real robots (**b**) Landing sequence of the real UAV

**Figure 15.** Real robotic platforms (**a**) and landing sequence (**b**).

**Figure 16.** Re-localization maneuver in the real environment.

Note that the real experiments were intended as a qualitative demonstration of how our system can be integrated into real robotic platforms. We believe that the numerous quantitative experiments presented for the simulated environment (where we used the same UAV model as in the real tests, as well as the same UGV) demonstrate the robustness and accuracy of the system, while the qualitative tests performed on the real robots demonstrate that our system can be deployed in the real world.

#### **5. Conclusions**

In this work, we proposed a ROS-based system that enables a UAV to take off, track, and land autonomously on a moving landing platform. A novel height-adaptive PID controller suffices to operate the UAV satisfactorily when the landing platform describes either a linear or a circular trajectory at a speed of 0.5 m s<sup>−1</sup> along its x axis. Introducing a Kalman filter to predict the future position of the landing platform further improves the overall performance of the system, reducing the position error in comparison to the non-predictive approach.
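As an illustration of the two ideas summarized above, the sketch below combines a PID controller whose gains are attenuated with altitude and a constant-velocity Kalman filter used to extrapolate the platform's position. The linear gain-scaling law, the state-space matrices, and all class and parameter names are illustrative assumptions, not our exact implementation:

```python
import numpy as np

class HeightAdaptivePID:
    """Sketch of a PID controller whose gains are scaled with altitude.
    Assumption: at higher altitude the same metric error subtends a smaller
    visual error, so gains are attenuated inversely with height."""

    def __init__(self, kp, ki, kd, h_ref=1.0, dt=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.h_ref = h_ref            # height at which the nominal gains apply
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, height):
        scale = self.h_ref / max(height, 1e-3)   # assumed adaptation law
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return scale * (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

class ConstantVelocityKF:
    """Sketch of a constant-velocity Kalman filter over the platform's
    planar position; state x = [px, py, vx, vy]."""

    def __init__(self, dt=0.05, q=0.01, r=0.05):
        self.dt = dt
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def step(self, z):
        # Predict, then correct with the measured platform position z.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_ahead(self, horizon):
        """Extrapolate the position `horizon` seconds into the future."""
        x = self.x.copy()
        for _ in range(int(horizon / self.dt)):
            x = self.F @ x
        return x[:2]
```

With a platform moving at 0.5 m s<sup>−1</sup>, the filter converges to that velocity estimate after a few seconds of measurements, and `predict_ahead` yields the look-ahead setpoint that the predictive variant feeds to the controller.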

Furthermore, we proposed a finite state machine architecture to robustly keep track of the different stages of the mission. Together with a novel recovery module, it enables our system to operate continuously, providing life-long operation capability.
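A minimal sketch of such a state machine, with a recovery (re-localization) state, might look as follows; the state names and transition conditions are hypothetical simplifications of the architecture described in this work:

```python
from enum import Enum, auto

class Stage(Enum):
    # Hypothetical state names; the actual labels in our system may differ.
    TAKEOFF = auto()
    TRACK = auto()
    LAND = auto()
    RELOCALIZE = auto()   # recovery: climb/search until the platform is seen
    LANDED = auto()

class MissionFSM:
    """Sketch of a takeoff-tracking-landing state machine with recovery."""

    def __init__(self):
        self.stage = Stage.TAKEOFF

    def step(self, platform_visible, altitude_reached,
             close_enough, touched_down):
        if self.stage is Stage.TAKEOFF and altitude_reached:
            self.stage = Stage.TRACK
        elif self.stage is Stage.TRACK:
            if not platform_visible:
                self.stage = Stage.RELOCALIZE   # recovery module takes over
            elif close_enough:
                self.stage = Stage.LAND
        elif self.stage is Stage.RELOCALIZE and platform_visible:
            self.stage = Stage.TRACK
        elif self.stage is Stage.LAND:
            if not platform_visible:
                self.stage = Stage.RELOCALIZE
            elif touched_down:
                self.stage = Stage.LANDED
        elif self.stage is Stage.LANDED:
            # Life-long operation: begin the next sequence automatically.
            self.stage = Stage.TAKEOFF
        return self.stage
```

The loop from `LANDED` back to `TAKEOFF` is what gives the system its continuous, life-long operation, while the transitions into `RELOCALIZE` model the recovery maneuver triggered whenever the platform leaves the camera's field of view.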

We extensively tested the system in the simulated environment (Gazebo), executing a total of 120 takeoff-tracking-landing sequences and reporting detailed results that validate the system's performance. We also implemented our algorithms on real robotic platforms and carried out qualitative evaluations, thus demonstrating that our system can be deployed in the real world.

Regarding future work, a UAV with a better downward-looking camera would allow the use of a marker detection system instead of the current color- and shape-based detection algorithm, so that the whole system could be deployed in any kind of environment, regardless of the terrain's texture. Furthermore, a module could be added to localize the UAV in global coordinates, e.g., VIO or visual SLAM. This would make it possible to implement on real platforms the predictive variant of our system, which has been shown to outperform its non-predictive counterpart in the simulated environment.

**Supplementary Materials:** The software presented in this work is publicly available at https://github.com/pablorpalafox/uav-autonomous-landing. A video demonstrating the system can be viewed at https://youtu.be/CCrPBw_we2E. Furthermore, we also provide as Supplementary Material all our log files as raw CSV files (plus several Python scripts) so that the results presented in this work can be reproduced.

**Author Contributions:** Conceptualization, P.R.P. and M.G.; methodology, P.R.P., M.G., and J.V.; software, P.R.P. and M.G.; validation, P.R.P. and M.G.; formal analysis, P.R.P., J.V., and J.J.R.; investigation, P.R.P.; resources, P.R.P. and A.B.; data curation, P.R.P.; writing, original draft preparation, P.R.P.; writing, review and editing, M.G., J.V., and J.J.R.; visualization, P.R.P. and J.V.; supervision, M.G.; project administration, A.B.; funding acquisition, J.J.R. and A.B.

**Funding:** The research leading to these results received funding from RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU, and from the project DPI2014-56985-R (Robotic protection of critical infrastructures), financed by the Ministry of Economy and Competitiveness of the Government of Spain.

**Conflicts of Interest:** The authors declare no conflict of interest.
