**1. Introduction**

Autonomous navigation is currently one of the most important topics in robotics, since robots capable of moving freely in their environments enable a large number of new applications in many fields, such as logistics [1], agriculture [2], or passenger transport [3]. Research on this topic dates back to the 1970s [4], so it is a mature field with many published algorithms and available tools. In addition, recent advances such as the emergence of deep learning are providing researchers with very capable scene understanding algorithms [5,6]. However, real applications in real conditions, especially in outdoor environments, remain a challenge [7].

The usual architecture of an autonomous navigation system, whether terrestrial, marine, or aerial, relies on several dedicated subsystems that solve the different required tasks, such as localization [8,9], mapping, path planning, and control, among others [10–12]. Each of these tasks belongs to a different research topic, making mobile robotics an extremely interdisciplinary field [13]. One approach to coping with this complexity is to study the different subsystems separately, which simplifies the experimental process and reduces its cost. For instance, localization algorithms can be developed using public datasets [14], and researchers in control can take advantage of robotic simulators to develop their algorithms [15]. However, these approaches are not sufficient for a comprehensive evaluation of the developed algorithms, because real systems present interactions, and even feedback loops, between their different modules, whose effects can only be observed when the whole system is implemented and tested in real-world conditions [16]. Only extensive experimental tests spanning different conditions and environments can provide enough information for a comprehensive system evaluation. Moreover, these kinds of tests are extremely useful for guiding the development process in a robust and productive way.

In the present paper, we introduce a Robot Operating System (ROS)-based navigation framework that provides a powerful basic structure based on abstraction levels. This framework is designed to generate minimal but complete autonomous navigation solutions, thus speeding up the implementation process required to obtain a full system suitable for experimental testing. This approach saves effort and keeps the focus on concrete research problems, without the drawbacks of simulators and datasets. To show that our framework can be an excellent tool for testing and comparing different algorithms, we first implement a fully operative autonomous navigation system and then show how easily it can be modified. Concretely, we replace its initial 2D simultaneous localization and mapping (SLAM) localization module with a new one that implements a loosely coupled architecture integrating global navigation satellite system (GNSS) information and 2D SLAM by means of a Kalman filter. Thanks to this GNSS fusion approach, our ground vehicle is able to navigate autonomously on the University of Alicante campus, moving in and out of the mapped area. This mixed on-map/off-map navigation is straightforward using our framework, but it would have been very hard to implement using the existing alternatives, as explained later.
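To make the loosely coupled idea concrete, the sketch below shows the basic structure of such a fusion: the SLAM pose stream drives the prediction step, and GNSS fixes (projected into the local navigation frame) drive the correction step. This is a minimal illustrative sketch, not the paper's implementation; the class name, state choice (2D position only), and noise values are all assumptions made for the example.

```python
import numpy as np

class LooselyCoupledKF:
    """Minimal 2D position filter: SLAM increments predict, GNSS fixes correct.

    Illustrative only; a real system would also track heading and handle
    time alignment between the SLAM and GNSS streams.
    """

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # state: [east, north] in metres
        self.P = np.asarray(P0, dtype=float)  # state covariance
        self.Q = np.asarray(Q, dtype=float)   # process noise (SLAM drift per step)
        self.R = np.asarray(R, dtype=float)   # GNSS measurement noise

    def predict(self, delta_slam):
        # SLAM provides relative motion between consecutive poses.
        self.x = self.x + delta_slam
        self.P = self.P + self.Q

    def update(self, z_gnss):
        # GNSS fix observes the position directly, so H = I.
        S = self.P + self.R                    # innovation covariance
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (z_gnss - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P

# Example: one SLAM step followed by one GNSS correction.
kf = LooselyCoupledKF([0.0, 0.0], np.eye(2), 0.1 * np.eye(2), 2.0 * np.eye(2))
kf.predict(np.array([1.0, 0.0]))   # vehicle moved ~1 m east per SLAM
kf.update(np.array([1.2, 0.1]))    # GNSS fix pulls the estimate slightly
```

Because the two sources are fused only at the pose level, either stream can drop out gracefully: off the map, the filter keeps running on GNSS alone, which is what enables the mixed on-map/off-map navigation described above.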
The system has been evaluated in three experimental sessions, in two different environments, with two different localization algorithms, accumulating more than twenty kilometers of navigation in real-world conditions. In summary, our contributions are the following (all the software described in this paper is publicly available in a GitHub repository: https://github.com/AUROVA-LAB/aurova_framework):


The rest of this paper is organized as follows: in Section 2, we describe related work, focusing on general frameworks for developing autonomous navigation systems. In Section 3, we explain the requirements and the design decisions made to fulfill them, giving a conceptual architecture for our framework. Then, in Section 4, we explain the basic implementation and its features, from perception to control. Section 5 is devoted to the second particularization of our framework, which implements a fusion of GNSS and Monte Carlo localization by means of a Kalman filter. In Section 6, the different experimental sessions are described and discussed, including real autonomous navigation in two challenging outdoor environments. Finally, in Section 7, we present conclusions and future work.
