Article

Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments †

1 Department of Computer Science, CUCEI, University of Guadalajara, Guadalajara 44430, Mexico
2 Department of Automatic Control, Technical University of Catalonia UPC, 08034 Barcelona, Spain
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 5th International Symposium on Sensor Science (I3S 2017), Barcelona, Spain, 27–29 September 2017.
Sensors 2018, 18(5), 1351; https://doi.org/10.3390/s18051351
Submission received: 21 January 2018 / Revised: 3 April 2018 / Accepted: 19 April 2018 / Published: 26 April 2018
(This article belongs to the Special Issue I3S 2017 Selected Papers)

Abstract

This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimate for the aerial vehicles flying in formation.

1. Introduction

Nowadays, unmanned aerial vehicles (UAVs) have received great attention from the robotics research community, where one of the main objectives has been improving the autonomy of these systems. In particular, multi-rotor aerial systems allow great versatility of movement, making this kind of aerial platform useful for a wide variety of applications. Together with recent advances in computational processing, computer vision has become an important tool for improving the autonomy of robotic systems. Cameras are well suited for embedded systems because they are inexpensive, lightweight and power-saving. For instance, applications of surveillance [1], tracking and rescue [2], among others, seem to be feasible for aerial robots equipped with onboard cameras.
A fundamental requirement for improving the autonomy of an aerial robot is the capacity for self-localization and perception of the operational environment. For most applications, GPS (Global Positioning System) still represents the main alternative for addressing the localization problem. Nevertheless, the use of GPS presents some drawbacks: for instance, the precision error can be substantial, and it provides poor operability due to multipath propagation. Moreover, several mission profiles require the UAVs to fly in GPS-challenging or GPS-denied environments, such as natural and urban canyons [3]. The use of range sensors like laser, sonar or radar (see [4,5,6]) allows obtaining knowledge about the environment of the robot. However, this kind of sensor can be expensive and sometimes heavy, and its use in outdoor environments can be somewhat limited. For instance, sonar systems have a limited range of operation. Active laser systems (e.g., LiDAR) represent a very interesting sensing technology; they can operate under any visibility condition (i.e., both day and night, unlike cameras) and can directly provide 3D measurements of the surrounding environment. On the other hand, LiDAR is generally expensive, it can overload the system for certain applications, and it has moving parts, which can introduce errors.

1.1. Related Work

Visual SLAM is a technique that uses visual features as landmarks. It is intended to address the navigation problem of a robot moving in a previously unknown environment while providing information about that environment, using mainly angular measurements obtained from cameras. Currently, there are two main approaches for implementing vision-based SLAM systems: (i) filtering-based methods [7,8,9] and (ii) optimization-based methods [10,11]. While the latter approach has been shown to give accurate results when sufficient computational power is available, filtering-based SLAM methods might still be beneficial when processing power is limited [12].
Some examples of visual SLAM approaches applied to unmanned aerial vehicles are [13,14]. In [15], a visual SLAM method that incorporates inertial measurements given by an Inertial Measurement Unit (IMU) is presented. The potential problem with this kind of approach is that the acceleration obtained from the IMU has a dynamic bias, which is difficult to estimate. In [16], an EKF-based (Extended Kalman Filter) method is proposed in order to perform visual odometry with an unmanned aircraft. This method uses inertial sensors, a monocular downward-facing camera and a range sensor (sonar altimeter). Unlike vision-based SLAM, visual odometry approaches do not include a mapping process. Furthermore, in those approaches, the operating altitude of the UAV is limited by the operating range of the sonar. More recently, new approaches have appeared addressing the problem of visual-based navigation in GPS-denied environments, such as [17,18,19].
Multi-robot systems have also received great attention from the robotics research community. This attention is motivated by the inherent versatility that this kind of system has for performing tasks that could be difficult for a single robot. The use of several robots offers advantages such as cost reduction, increased robustness, and better performance and efficiency [20,21]. Regarding the SLAM problem, in [22,23], a centralized architecture is used where all vehicles send their sensor data to a single Kalman filter. In [16,24,25], the idea of combining monocular SLAM with cooperative, multi-UAV information to improve navigation capabilities in GPS-challenging environments is presented.
In works like [26,27,28,29], it has been shown that 6DOF-SLAM (six degrees of freedom), based only on angular measurements (i.e., monocular SLAM), is a partially observable system that can be applied to both the single-robot case and the multi-robot case. In [30], cooperative localization with visual information is addressed. According to the analysis presented in that work, the proposed system is completely observable. However, in this case, only distances and the relative orientations between robots are estimated. This fact can represent a clear drawback for applications where global measurements of the system are required (e.g., absolute position).

1.2. Objectives and Contributions

In this work, the nonlinear observability properties of an aerial multi-robot system are analyzed. Based on this analysis, it is shown that the observability properties of this kind of system are improved by the inclusion of measurements of the relative distance between the aerial robots. Furthermore, based on the observability analysis, it is shown that the cooperative approach has theoretical advantages with respect to other configurations, such as the single-robot monocular SLAM approach. In addition, it is demonstrated that, in a system composed of several UAVs, the observation of common landmarks is a sufficient condition for propagating through the whole system the information provided by the measurement of the relative distance between two robots. This property gives the system flexibility, since multiple pairwise distance measurements between robots are not strictly required.
In order to take advantage of all the above theoretical results, in this work, a novel cooperative monocular-based SLAM approach for multi-UAV systems is proposed. The system model is composed of the dynamics of each aerial robot and the Euclidean position of each landmark. The measurements of the system are the projections of the landmarks in the images, provided by the monocular cameras carried individually in every aerial robot. Additionally, as was mentioned before, the availability of some measurements about the relative distance between two robots is assumed.
In order to accomplish the requirement of having measurements of the relative distance between two robots, a technique based on a homography is also presented in this research. The main idea is to exploit the physical structure of the aerial robots in order to obtain measurements of relative distances by means of visual information. In this case, the method is developed assuming a team of quadrotors. It is important to remark that this proposed approach could be also applied to many other aerial platforms. The only requirement for the presented approach is that at least one robot has to be maintained inside the field of view of another aerial robot, while sharing the observation of one common visual landmark (see Figure 1).
The geometric structure of a typical quadrotor is cross-shaped, with each rotor mounted at one of the four ends of the cross. This physical geometry allows a standard computer vision algorithm to extract and track the centroids of the rotors, which can be assumed to be coplanar. In order to compute the relative distance to one quadcopter lying in the field of view of another one, a homography is applied from the camera coordinate reference system of the observing robot to the plane formed by the four rotors of the robot being observed. The information obtained from the homography is fused with the orientation of the observing robot, provided by an IMU, which finally allows obtaining measurements of relative distance. It is important to note that, based on the theoretical results presented in this work, it should be straightforward to replace the homographic technique used for estimating the relative distance between UAVs by another technique providing a similar measurement.
In addition to the benefit of improving the observability of the system, the relative distance obtained between any pair of robots provides metric information of the system, which is an important issue to be addressed in monocular-based systems. For example, in other configurations, the metric information is obtained purely from inertial systems (i.e., monocular/Inertial Navigation Systems (INS) solutions), but inertial sensors present some drawbacks due to the large drift bias, which is inherent to this kind of sensor [31,32].
In the proposed system, in order to take advantage of the multi-UAV cooperative configuration, the initialization process of new map features is carried out through a pseudo-stereo system composed of two monocular cameras mounted on two UAVs respectively (one camera per UAV) that observe common landmarks. This approach allows initializing landmarks with less uncertainty than a pure monocular system since 3D information of the position of landmarks is gathered from the beginning of the observation. It is well known that, in visual SLAM, the initialization process can play an important role in the convergence of the filter. Having a flexible baseline in the pseudo-stereo system allows one to initialize landmarks at distances far away with less uncertainty, unlike stereo systems with a rigid baseline [32] or delayed monocular initialization methods. The above fact allows the proposed cooperative system to have better performance in environments where landmarks are far from the measurement system, contrary to SLAM approaches based on depth cameras, stereo systems, monocular cameras or sonars.

1.3. Paper Outline

The document is organized in the following manner: Section 2 presents the specifications of the system; Section 3 presents the nonlinear observability analysis that represents the theoretical basis of the proposed method; Section 4 presents the proposed cooperative approach for monocular-based SLAM; in Section 5, the results obtained from numerical simulations are presented in order to validate the proposal, and finally, in Section 6, some final remarks are given.

2. System Specification

In this section, the models used in this work are introduced. The model used for representing the dynamics of a camera carried by a quadcopter is presented. The representation of the landmarks as map features is also defined. The camera projection model used in this work is described. The technique based on homographies that is used for estimating the relative distance between two quadcopters is introduced, as well.

2.1. Dynamics of the System

Let us consider the following continuous-time model describing the dynamics of the j-th UAV-camera system (see Figure 2):
$$\dot{\mathbf{x}} = \begin{bmatrix} \dot{\mathbf{x}}_{c_j} \\ \dot{\mathbf{q}}_{c_j} \\ \dot{\mathbf{v}}_{c_j} \\ \dot{\boldsymbol{\omega}}_{c_j} \\ \dot{\mathbf{x}}_{a_i} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_{c_j} \\ \tfrac{1}{2}\boldsymbol{\Omega}_{c_j}\mathbf{q}_{c_j} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{3\times 1} \end{bmatrix} \tag{1}$$
where the state vector x is defined by:
$$\mathbf{x} = \begin{bmatrix} \mathbf{x}_{c_j} & \mathbf{q}_{c_j} & \mathbf{v}_{c_j} & \boldsymbol{\omega}_{c_j} & \mathbf{x}_{a_i} \end{bmatrix}^T \tag{2}$$
With $i = 1, \ldots, n_1$, let $n_1$ be the number of landmarks, and with $j = 1, \ldots, n_2$, let $n_2$ be the number of UAV-camera systems.
Additionally, let $\mathbf{x}_{c_j} = \begin{bmatrix} x_{c_j} & y_{c_j} & z_{c_j} \end{bmatrix}^T$ represent the position of the reference system $C$ of the j-th camera, with respect to the reference system $W$. Let $\mathbf{q}_{c_j} = \begin{bmatrix} q_{0_j} & q_{x_j} & q_{y_j} & q_{z_j} \end{bmatrix}^T$ be a unit quaternion representing the orientation of the reference system $C$ of the j-th camera, with respect to the reference system $W$. Let $\mathbf{v}_{c_j} = \begin{bmatrix} \dot{x}_{c_j} & \dot{y}_{c_j} & \dot{z}_{c_j} \end{bmatrix}^T$ represent the linear velocity of the j-th camera. Let $\boldsymbol{\omega}_{c_j} = \begin{bmatrix} \omega_{x_j} & \omega_{y_j} & \omega_{z_j} \end{bmatrix}^T$ represent the angular velocity of the j-th camera. Finally, let $\mathbf{x}_{a_i} = \begin{bmatrix} x_{a_i} & y_{a_i} & z_{a_i} \end{bmatrix}^T$ be the position of the i-th landmark with respect to the reference system $W$, defined by its Euclidean parameterization. Furthermore, the next definitions should be considered:
$$\boldsymbol{\Omega}_{c_j} = \begin{bmatrix} 0 & -\boldsymbol{\omega}_{c_j}^T \\ \boldsymbol{\omega}_{c_j} & -[\boldsymbol{\omega}_{c_j}]_\times \end{bmatrix}, \qquad [\boldsymbol{\omega}_{c_j}]_\times = \begin{bmatrix} 0 & -\omega_{z_j} & \omega_{y_j} \\ \omega_{z_j} & 0 & -\omega_{x_j} \\ -\omega_{y_j} & \omega_{x_j} & 0 \end{bmatrix} \tag{3}$$
In (1), it is assumed that every UAV-camera is moving freely in the space with six degrees of freedom: three for translation and three for rotation. Furthermore, note that a non-acceleration model is assumed for UAV-camera systems, and the landmarks are assumed to remain static.
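As an illustration of how the model (1) can be propagated in discrete time, the following Python sketch integrates one UAV-camera state over a step $\Delta t$. It is only a minimal example under the stated constant-velocity assumption and a scalar-first unit quaternion; the function names (e.g., `propagate_state`) are ours and are not part of the original implementation.

```python
import numpy as np

def omega_matrix(w):
    """4x4 quaternion-kinematics matrix Omega(w) of Equation (3), w = [wx, wy, wz]."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate_state(x_c, q_c, v_c, w_c, dt):
    """Discrete propagation of one UAV-camera state following Equation (1).
    Velocities are kept constant (zero-mean acceleration noise is handled by the EKF),
    and landmarks are static, so they are not modified here."""
    x_c = x_c + v_c * dt                               # position integrates the linear velocity
    q_c = q_c + 0.5 * (omega_matrix(w_c) @ q_c) * dt   # quaternion kinematics q_dot = (1/2) Omega q
    q_c = q_c / np.linalg.norm(q_c)                    # re-normalize to keep a unit quaternion
    return x_c, q_c, v_c, w_c
```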

2.2. Camera Measurement Model

Consider the projection of a specific landmark over the image plane of a camera. Using the pinhole model [33] (see Figure 3), the following expression can be defined:
$${}^i\mathbf{z}_{c_j} = {}^i\mathbf{h}_{c_j} = \begin{bmatrix} {}^iu_{c_j} \\ {}^iv_{c_j} \end{bmatrix} = \frac{1}{{}^iz_{d_j}} \begin{bmatrix} \dfrac{f_{c_j}}{d_{u_j}} & 0 \\ 0 & \dfrac{f_{c_j}}{d_{v_j}} \end{bmatrix} \begin{bmatrix} {}^ix_{d_j} \\ {}^iy_{d_j} \end{bmatrix} + \begin{bmatrix} c_{u_j} + d_{ur_j} + d_{ut_j} \\ c_{v_j} + d_{vr_j} + d_{vt_j} \end{bmatrix} \tag{4}$$
Let $[{}^iu_{c_j}, {}^iv_{c_j}]$ define the coordinates (in pixels) of the projection of the i-th landmark over the image of the j-th camera. Let $f_{c_j}$ be the focal length (in meters) of the j-th camera. Let $[d_{u_j}, d_{v_j}]$ be the conversion parameters (in m/pixel) for the j-th camera. Let $[c_{u_j}, c_{v_j}]$ be the coordinates (in pixels) of the image central point of the j-th camera. Let $[d_{ur_j}, d_{vr_j}]$ be components (in pixels) accounting for the radial distortion of the j-th camera. Let $[d_{ut_j}, d_{vt_j}]$ be components (in pixels) accounting for the tangential distortion of the j-th camera. All the intrinsic parameters of the j-th camera are assumed to be known by means of some calibration method. Let ${}^i\mathbf{p}_{d_j} = \begin{bmatrix} {}^ix_{d_j} & {}^iy_{d_j} & {}^iz_{d_j} \end{bmatrix}^T$ represent the position (in meters) of the i-th landmark with respect to the coordinate reference system $C$ of the j-th camera. Additionally,
$${}^i\mathbf{p}_{d_j} = {}^W\mathbf{R}_{c_j}\,(\mathbf{x}_{a_i} - \mathbf{x}_{c_j}) \tag{5}$$
Let ${}^W\mathbf{R}_{c_j}(\mathbf{q}_{c_j}) \in SO(3)$ be the rotation matrix, obtained from the quaternion $\mathbf{q}_{c_j}$, that transforms the world coordinate reference system $W$ to the coordinate reference system $C$ of the j-th camera.
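For reference, a minimal Python sketch of the projection model (4)-(5) is given below; distortion terms are omitted and the function name `project_landmark` is illustrative only.

```python
import numpy as np

def project_landmark(x_a, x_c, R_wc, f, du, dv, cu, cv):
    """Project the i-th landmark x_a (world frame) into the j-th camera, Eqs. (4)-(5).
    R_wc is the rotation from the world frame W to the camera frame C; radial and
    tangential distortion components are neglected in this sketch."""
    x_d, y_d, z_d = R_wc @ (x_a - x_c)      # landmark in camera coordinates, Eq. (5)
    if z_d <= 0.0:
        return None                         # the landmark lies behind the camera
    u = (f / du) * (x_d / z_d) + cu         # pixel coordinates, Eq. (4)
    v = (f / dv) * (y_d / z_d) + cv
    return np.array([u, v])
```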

2.3. Relative Distance Measurement Model

To estimate the relative distance between UAV-camera systems, the physical structure of the aerial robots (quadcopters in this research) is exploited. In this case, the rotors of a quadcopter are considered as highlighted points in the images captured by another quadcopter (see Figure 1 and Figure 3). These points can be considered coplanar in the reference system $Q$ of the n-th quadcopter. Therefore, knowing their geometry, it is possible to formulate a homography [33]. In order to determine the exact correspondences between the rotors' positions in the image plane and their real positions in the reference system $Q$, it is assumed that one rotor has a color or geometry distinguishable from the other ones. The other three correspondences can be determined given only the first one, because it is also assumed that the quadrotor will not fly upside down. The homographic process allows estimating the distance from the reference system of the camera to the plane to which the four points belong.
According to Equation (4), the following expression is obtained:
$${}^j\gamma_{m_n} \begin{bmatrix} {}^ju_{m_n} \\ {}^jv_{m_n} \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{T}_{c_j} & \mathbf{0}_{3\times 1} \end{bmatrix}\, {}^j\mathbf{E}_{c_n} \begin{bmatrix} x_{m_n} \\ y_{m_n} \\ z_{m_n} \\ 1 \end{bmatrix} \tag{6}$$
With $m = 1, \ldots, 4$, let $[{}^ju_{m_n}, {}^jv_{m_n}]$ define the coordinates (in pixels) of the projection of the m-th point of the n-th quadcopter over the image of the j-th camera. Let $[x_{m_n}, y_{m_n}, z_{m_n}]$ represent the position of the m-th point with respect to the reference system $Q$ of the n-th quadcopter, and let ${}^j\gamma_{m_n}$ be a scale factor. Additionally, it is defined:
$$\mathbf{T}_{c_j} = \begin{bmatrix} \dfrac{f_{c_j}}{d_{u_j}} & 0 & c_{u_j} + d_{ur_j} + d_{ut_j} \\ 0 & \dfrac{f_{c_j}}{d_{v_j}} & c_{v_j} + d_{vr_j} + d_{vt_j} \\ 0 & 0 & 1 \end{bmatrix} \tag{7}$$
$${}^j\mathbf{E}_{c_n} = \begin{bmatrix} {}^j\mathbf{R}_{q_n} & {}^j\mathbf{d}_{q_n} \\ \mathbf{0}_{1\times 3} & 1 \end{bmatrix} \tag{8}$$
Let ${}^j\mathbf{d}_{q_n}$ be the translation vector from the reference system $Q$ of the n-th quadcopter to the reference system $C$ of the j-th camera. Let ${}^j\mathbf{R}_{q_n} \in SO(3)$ be the rotation matrix that transforms the coordinate reference system $Q$ of the n-th quadcopter to the coordinate reference system $C$ of the j-th camera. The assumption that the four $m$ points are coplanar implies that $z_{m_n} = 0$; therefore, Equation (6) can take the following form:
$$\begin{bmatrix} {}^ju_{m_n} \\ {}^jv_{m_n} \\ 1 \end{bmatrix} = \frac{1}{{}^j\gamma_{m_n}} \begin{bmatrix} \mathbf{T}_{c_j} & \mathbf{0}_{3\times 1} \end{bmatrix}\, {}^j\mathbf{E}_{c_n} \begin{bmatrix} x_{m_n} \\ y_{m_n} \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} {}^js_{11_n} & {}^js_{12_n} & {}^js_{13_n} \\ {}^js_{21_n} & {}^js_{22_n} & {}^js_{23_n} \\ {}^js_{31_n} & {}^js_{32_n} & {}^js_{33_n} \end{bmatrix} \begin{bmatrix} x_{m_n} \\ y_{m_n} \\ 1 \end{bmatrix} = {}^j\mathbf{S}_{c_n} \begin{bmatrix} x_{m_n} \\ y_{m_n} \\ 1 \end{bmatrix} \tag{9}$$
where ${}^j\mathbf{S}_{c_n}$ is a non-singular homogeneous matrix. In this case, it is allowed to scale the matrix in such a way that ${}^js_{33_n} = 1$. This fact does not affect the projective transformation [33]. Therefore, the matrix ${}^j\mathbf{S}_{c_n}$ can be redefined as:
$${}^j\mathbf{S}_{c_n} = \begin{bmatrix} {}^js_{11_n} & {}^js_{12_n} & {}^js_{13_n} \\ {}^js_{21_n} & {}^js_{22_n} & {}^js_{23_n} \\ {}^js_{31_n} & {}^js_{32_n} & 1 \end{bmatrix} = \begin{bmatrix} {}^j\mathbf{s}_{1_n} & {}^j\mathbf{s}_{2_n} & {}^j\mathbf{s}_{3_n} \end{bmatrix} \tag{10}$$
In (10), the values of ${}^j\mathbf{S}_{c_n}$ are unknown; therefore, the following equation system can be formed from (9):
$${}^j\mathbf{G}_{m_n}\, {}^j\mathbf{t}_{c_n} = {}^j\mathbf{g}_{m_n} \tag{11}$$
where:
$${}^j\mathbf{G}_{m_n} = \begin{bmatrix} x_{m_n} & y_{m_n} & 1 & 0 & 0 & 0 & -x_{m_n}\,{}^ju_{m_n} & -y_{m_n}\,{}^ju_{m_n} \\ 0 & 0 & 0 & x_{m_n} & y_{m_n} & 1 & -x_{m_n}\,{}^jv_{m_n} & -y_{m_n}\,{}^jv_{m_n} \end{bmatrix} \tag{12}$$
$${}^j\mathbf{t}_{c_n} = \begin{bmatrix} {}^js_{11_n} & {}^js_{12_n} & {}^js_{13_n} & {}^js_{21_n} & {}^js_{22_n} & {}^js_{23_n} & {}^js_{31_n} & {}^js_{32_n} \end{bmatrix}^T \tag{13}$$
$${}^j\mathbf{g}_{m_n} = \begin{bmatrix} {}^ju_{m_n} & {}^jv_{m_n} \end{bmatrix}^T \tag{14}$$
Considering the projection of the four points, the solution to the system can be given as follows:
$${}^j\mathbf{t}_{c_n} = ({}^j\mathbf{G}_{c_n})^{-1}\, {}^j\mathbf{g}_{c_n} \tag{15}$$
with:
$${}^j\mathbf{G}_{c_n} = \begin{bmatrix} {}^j\mathbf{G}_{1_n} & {}^j\mathbf{G}_{2_n} & {}^j\mathbf{G}_{3_n} & {}^j\mathbf{G}_{4_n} \end{bmatrix}^T, \qquad {}^j\mathbf{g}_{c_n} = \begin{bmatrix} {}^j\mathbf{g}_{1_n} & {}^j\mathbf{g}_{2_n} & {}^j\mathbf{g}_{3_n} & {}^j\mathbf{g}_{4_n} \end{bmatrix}^T \tag{16}$$
From the method proposed in [34], where the orthonormality property of a rotation matrix is exploited, and knowing the intrinsic parameters of the camera, ${}^j\mathbf{R}_{q_n}$ and ${}^j\mathbf{d}_{q_n}$ can be computed from (15) as follows:
$${}^j\mathbf{R}_{q_n} = \begin{bmatrix} {}^j\mathbf{r}_{1_n} & {}^j\mathbf{r}_{2_n} & {}^j\mathbf{r}_{1_n} \times {}^j\mathbf{r}_{2_n} \end{bmatrix} \tag{17}$$
$${}^j\mathbf{d}_{q_n} = {}^j\mathbf{r}_{3_n} \tag{18}$$
with:
$$\begin{bmatrix} {}^j\mathbf{r}_{1_n} & {}^j\mathbf{r}_{2_n} & {}^j\mathbf{r}_{3_n} \end{bmatrix} = \frac{1}{{}^j\delta_{c_n}} (\mathbf{T}_{c_j})^{-1}\, {}^j\mathbf{S}_{c_n} \tag{19}$$
and:
$${}^j\delta_{c_n} = \left\| (\mathbf{T}_{c_j})^{-1}\, {}^j\mathbf{s}_{1_n} \right\| = \left\| (\mathbf{T}_{c_j})^{-1}\, {}^j\mathbf{s}_{2_n} \right\| \tag{20}$$
Finally, the distance between the j-th camera and the n-th camera can be computed from:
$${}^j\mathbf{z}_{r_n} = {}^j\mathbf{h}_{r_n} = \mathbf{x}_{c_n} - \mathbf{x}_{c_j} = ({}^W\mathbf{R}_{c_j})^T\big({}^j\mathbf{R}_{q_n}\,\mathbf{d}_{c_n} + {}^j\mathbf{d}_{q_n}\big) \tag{21}$$
where $\mathbf{d}_{c_n}$ is the translation vector from the reference system $Q$ to the reference system $C$ of the n-th UAV-camera system. This parameter is obtained by a camera-robot calibration process. The rotation matrix ${}^W\mathbf{R}_{c_j}$ can be obtained from an Attitude and Heading Reference System (AHRS) or an inertial measurement unit [35,36] mounted on board the j-th UAV-camera system.
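A compact sketch of this homography-based procedure is given below (Python, for illustration only; the function `relative_position_of_uav` and its arguments are our naming, and in a real system the rotor centroids would come from the image-processing front-end). It assembles the linear system (11)-(16), recovers the rotation and translation as in (17)-(20), and evaluates (21).

```python
import numpy as np

def relative_position_of_uav(uv, xy, T_c, R_wc_j, d_cn):
    """Relative position of UAV n with respect to UAV j, Eq. (21), from the image
    projections of the four rotors of UAV n seen by the camera of UAV j.
    uv:     4x2 pixel coordinates of the rotor centroids (image of camera j)
    xy:     4x2 rotor coordinates in the plane of quadcopter n (z = 0, known geometry)
    T_c:    3x3 intrinsic matrix of camera j, Eq. (7)
    R_wc_j: rotation from world frame W to camera frame C of UAV j (e.g., from the AHRS)
    d_cn:   translation from frame Q to frame C of UAV n (camera-robot calibration)."""
    # Linear system (11)-(16) for the eight unknown entries of the homography (10).
    G, g = [], []
    for (u, v), (x, y) in zip(uv, xy):
        G.append([x, y, 1, 0, 0, 0, -x * u, -y * u])
        G.append([0, 0, 0, x, y, 1, -x * v, -y * v])
        g.extend([u, v])
    t = np.linalg.solve(np.array(G, dtype=float), np.array(g, dtype=float))   # Eq. (15)
    S = np.append(t, 1.0).reshape(3, 3)                                       # s33 = 1, Eq. (10)

    # Decomposition exploiting the orthonormality of the rotation, as in [34], Eqs. (17)-(20).
    A = np.linalg.inv(T_c) @ S
    delta = 0.5 * (np.linalg.norm(A[:, 0]) + np.linalg.norm(A[:, 1]))         # scale, Eq. (20)
    r1, r2, d_qn = A[:, 0] / delta, A[:, 1] / delta, A[:, 2] / delta
    if d_qn[2] < 0:              # resolve the sign ambiguity: UAV n lies in front of camera j
        r1, r2, d_qn = -r1, -r2, -d_qn
    R_qn = np.column_stack((r1, r2, np.cross(r1, r2)))                        # Eqs. (17)-(18)

    # Relative position expressed in the world frame, Eq. (21); its norm is the distance.
    return R_wc_j.T @ (R_qn @ d_cn + d_qn)
```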

3. Observability Analysis

In this section, the nonlinear observability properties of an aerial multi-robot system are studied. Observability is an inherent property of a dynamic system and has an important role in the accuracy of its estimation process; moreover, this fact has important consequences in the context of SLAM.
A system is defined as observable if the initial state $\mathbf{x}_0$, at any initial time $t_0$, can be determined given the state transition and observation models of the system and the observations $\mathbf{z}_{[t_0,t]}$, from time $t_0$ to a finite time $t$. In [37], it is demonstrated that a non-linear system is locally weakly observable if the observability rank condition $\mathrm{rank}(\mathcal{O}) = \mathrm{dim}(\mathbf{x})$ is verified, where $\mathcal{O}$ is the observability matrix.
As previously mentioned, 6DOF-monocular SLAM represents a kind of partially-observable system with a high number of unobservable modes and states, which applies to both the single-robot case and the multi-robot case. The observability of 6DOF-monocular SLAM systems has also been studied in works such as [26,27,28,29].
For the analysis developed in this work, and for the sake of simplicity, the system (1) is redefined as:
$$\dot{\mathbf{x}} = f(\mathbf{x}, t) = \begin{bmatrix} \dot{\mathbf{x}}_{c_j} \\ \dot{\boldsymbol{\lambda}}_{c_j} \\ \dot{\mathbf{v}}_{c_j} \\ \dot{\boldsymbol{\omega}}_{c_j} \\ \dot{\mathbf{x}}_{a_i} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_{c_j} \\ \boldsymbol{\omega}_{c_j} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{3\times 1} \\ \mathbf{0}_{3\times 1} \end{bmatrix} \tag{22}$$
Let $\boldsymbol{\lambda}_{c_j} = \begin{bmatrix} \phi_{c_j} & \theta_{c_j} & \psi_{c_j} \end{bmatrix}^T$ be the Euler angles of the j-th camera with respect to the coordinate system $W$.
The observability matrix $\mathcal{O}$ can be computed as:
$$\mathcal{O} = \begin{bmatrix} \dfrac{\partial\big(L_f^0({}^i\mathbf{h}_{c_j})\big)}{\partial\mathbf{x}} & \dfrac{\partial\big(L_f^1({}^i\mathbf{h}_{c_j})\big)}{\partial\mathbf{x}} & \dfrac{\partial\big(L_f^0({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} & \dfrac{\partial\big(L_f^1({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} \end{bmatrix}^T \tag{23}$$
Let $L_f^s\mathbf{h}$ be the $s$-th-order Lie derivative [38] of the scalar field $\mathbf{h}$ with respect to the vector field $f$. For example, in (23), the zero-order and first-order Lie derivatives are used for each measurement.
For the measurement given by a monocular camera, according to (4) and (22), the following zero-order Lie derivative can be defined:
$$\frac{\partial\big(L_f^0({}^i\mathbf{h}_{c_j})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{2\times 12(j-1)} & \mathbf{H}_{x_j} & \mathbf{0}_{2\times 12(n_2-j)} & \mathbf{0}_{2\times 3(i-1)} & \mathbf{H}_{a_i} & \mathbf{0}_{2\times 3(n_1-i)} \end{bmatrix} \tag{24}$$
where:
$$\mathbf{H}_{x_j} = \begin{bmatrix} -\mathbf{H}_c\,{}^W\mathbf{R}_{c_j} & \mathbf{H}_c\,[{}^i\mathbf{p}_{d_j}]_\times & \mathbf{0}_{2\times 6} \end{bmatrix} \tag{25}$$
$$\mathbf{H}_{a_i} = \mathbf{H}_c\,{}^W\mathbf{R}_{c_j} \tag{26}$$
and:
$$\mathbf{H}_c = \frac{f_{c_j}}{{}^iz_{d_j}^2} \begin{bmatrix} {}^iz_{d_j} & 0 & -{}^ix_{d_j} \\ 0 & {}^iz_{d_j} & -{}^iy_{d_j} \end{bmatrix} \tag{27}$$
Note that $[\,\cdot\,]_\times$ denotes the antisymmetric (skew-symmetric) matrix of the vector $(\cdot)$. The first-order Lie derivative can also be defined as follows:
$$\frac{\partial\big(L_f^1({}^i\mathbf{h}_{c_j})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{2\times 12(j-1)} & \mathbf{H}_{dx_j} & \mathbf{0}_{2\times 12(n_2-j)} & \mathbf{0}_{2\times 3(i-1)} & \mathbf{H}_{da_i} & \mathbf{0}_{2\times 3(n_1-i)} \end{bmatrix} \tag{28}$$
where:
$$\mathbf{H}_{dx_j} = \begin{bmatrix} \mathbf{X}_j & \boldsymbol{\Psi}_j & -\mathbf{H}_c\,{}^W\mathbf{R}_{c_j} & \mathbf{H}_c\,[{}^i\mathbf{p}_{d_j}]_\times \end{bmatrix} \tag{29}$$
$$\mathbf{H}_{da_i} = \begin{bmatrix} \mathbf{H}_{c_1} & \mathbf{H}_{c_2} & \mathbf{H}_{c_3} \end{bmatrix}\,{}^W\mathbf{R}_{c_j}\big({}^W\mathbf{R}_{c_j}\mathbf{v}_{c_j} + [{}^i\mathbf{p}_{d_j}]_\times\,\boldsymbol{\omega}_{c_j}\big) - \mathbf{H}_c\,[\boldsymbol{\omega}_{c_j}]_\times\,{}^W\mathbf{R}_{c_j} \tag{30}$$
with:
$$\mathbf{X}_j = -\mathbf{H}_{da_i} \tag{31}$$
$$\boldsymbol{\Psi}_j = \begin{bmatrix} \mathbf{H}_{c_1} & \mathbf{H}_{c_2} & \mathbf{H}_{c_3} \end{bmatrix}\,[{}^i\mathbf{p}_{d_j}]_\times\big({}^W\mathbf{R}_{c_j}\mathbf{v}_{c_j} + [{}^i\mathbf{p}_{d_j}]_\times\,\boldsymbol{\omega}_{c_j}\big) - \mathbf{H}_c\,\big[{}^W\mathbf{R}_{c_j}\mathbf{v}_{c_j}\big]_\times - \mathbf{H}_c\,[\boldsymbol{\omega}_{c_j}]_\times\,[{}^i\mathbf{p}_{d_j}]_\times \tag{32}$$
and:
$$\mathbf{H}_{c_1} = \frac{f_{c_j}}{{}^iz_{d_j}^2} \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix} \tag{33}$$
$$\mathbf{H}_{c_2} = \frac{f_{c_j}}{{}^iz_{d_j}^2} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix} \tag{34}$$
$$\mathbf{H}_{c_3} = \frac{f_{c_j}}{{}^iz_{d_j}^3} \begin{bmatrix} -{}^iz_{d_j} & 0 & 2\,{}^ix_{d_j} \\ 0 & -{}^iz_{d_j} & 2\,{}^iy_{d_j} \end{bmatrix} \tag{35}$$
Considering the case where relative measurements of the distance between robots are available, the following statement can be defined from (21) and (22):
For the zero-order Lie derivative, if $j < n$ (the index of the observing robot is lower than the index of the observed robot):
$$\frac{\partial\big(L_f^0({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{3\times 12(j-1)} & \mathbf{M}_{x_j} & \mathbf{0}_{3\times 12(n-j-1)} & \mathbf{M}_{x_n} & \mathbf{0}_{3\times 12(n_2-n)} & \mathbf{0}_{3\times 3n_1} \end{bmatrix} \tag{36}$$
On the other hand, if $j > n$, then:
$$\frac{\partial\big(L_f^0({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{3\times 12(n-1)} & \mathbf{M}_{x_n} & \mathbf{0}_{3\times 12(j-n-1)} & \mathbf{M}_{x_j} & \mathbf{0}_{3\times 12(n_2-j)} & \mathbf{0}_{3\times 3n_1} \end{bmatrix} \tag{37}$$
and:
$$\mathbf{M}_{x_{j,n}} = \begin{bmatrix} \mathbf{I}_3 & \mathbf{0}_{3\times 9} \end{bmatrix} \tag{38}$$
where $\mathbf{I}$ is the identity matrix.
For the first-order Lie derivative, if $j < n$:
$$\frac{\partial\big(L_f^1({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{3\times 12(j-1)} & \mathbf{M}_{dx_j} & \mathbf{0}_{3\times 12(n-j-1)} & \mathbf{M}_{dx_n} & \mathbf{0}_{3\times 12(n_2-n)} & \mathbf{0}_{3\times 3n_1} \end{bmatrix} \tag{39}$$
On the other hand, if $j > n$ (the index of the observing robot is higher than the index of the observed robot), then:
$$\frac{\partial\big(L_f^1({}^j\mathbf{h}_{r_n})\big)}{\partial\mathbf{x}} = \begin{bmatrix} \mathbf{0}_{3\times 12(n-1)} & \mathbf{M}_{dx_n} & \mathbf{0}_{3\times 12(j-n-1)} & \mathbf{M}_{dx_j} & \mathbf{0}_{3\times 12(n_2-j)} & \mathbf{0}_{3\times 3n_1} \end{bmatrix} \tag{40}$$
with:
$$\mathbf{M}_{dx_{j,n}} = \begin{bmatrix} \mathbf{0}_{3\times 6} & \mathbf{I}_3 & \mathbf{0}_3 \end{bmatrix} \tag{41}$$
With the above considerations, the observability matrix for the proposed system (22) can be defined as follows:
$$\mathcal{O} = \begin{bmatrix}
\mathbf{0}_{2\times 12(j-1)} & \mathbf{H}_{x_j} & \mathbf{0}_{2\times 12(n_2-j)} & \mathbf{0}_{2\times 3(i-1)} & \mathbf{H}_{a_i} & \mathbf{0}_{2\times 3(n_1-i)} \\
\mathbf{0}_{2\times 12(j-1)} & \mathbf{H}_{dx_j} & \mathbf{0}_{2\times 12(n_2-j)} & \mathbf{0}_{2\times 3(i-1)} & \mathbf{H}_{da_i} & \mathbf{0}_{2\times 3(n_1-i)} \\
\mathbf{0}_{3\times 12(j-1)} & \mathbf{M}_{x_j} & \mathbf{0}_{3\times 12(n-j-1)} & \mathbf{M}_{x_n} & \mathbf{0}_{3\times 12(n_2-n)} & \mathbf{0}_{3\times 3n_1} \\
\mathbf{0}_{3\times 12(j-1)} & \mathbf{M}_{dx_j} & \mathbf{0}_{3\times 12(n-j-1)} & \mathbf{M}_{dx_n} & \mathbf{0}_{3\times 12(n_2-n)} & \mathbf{0}_{3\times 3n_1} \\
\mathbf{0}_{3\times 12(n-1)} & \mathbf{M}_{x_n} & \mathbf{0}_{3\times 12(j-n-1)} & \mathbf{M}_{x_j} & \mathbf{0}_{3\times 12(n_2-j)} & \mathbf{0}_{3\times 3n_1} \\
\mathbf{0}_{3\times 12(n-1)} & \mathbf{M}_{dx_n} & \mathbf{0}_{3\times 12(j-n-1)} & \mathbf{M}_{dx_j} & \mathbf{0}_{3\times 12(n_2-j)} & \mathbf{0}_{3\times 3n_1}
\end{bmatrix} \tag{42}$$
The maximum rank of the observability matrix (42) is $\mathrm{rank}(\mathcal{O}) = 3n_1 + 12n_2 - 3$, where $n_1$ is the number of landmarks being measured and $n_2$ is the number of robots. $n_1$ is multiplied by three, since this is the number of states per landmark given by the Euclidean parametrization. $n_2$ is multiplied by 12, since this is the number of states per robot given by its global position, orientation (Euler angles) and their derivatives. Therefore, $\mathcal{O}$ will be rank deficient ($\mathrm{rank}(\mathcal{O}) < \mathrm{dim}(\mathbf{x})$).
The unobservable modes are spanned by the right nullspace basis of the observability matrix O ; therefore:
$$\mathbf{N} = \mathrm{null}(\mathcal{O}) = \begin{bmatrix} \mathbf{I}_3^{(1)} \\ \mathbf{0}_{9\times 3}^{(1)} \\ \vdots \\ \mathbf{I}_3^{(j)} \\ \mathbf{0}_{9\times 3}^{(j)} \\ \vdots \\ \mathbf{I}_3^{(1)} \\ \vdots \\ \mathbf{I}_3^{(i)} \end{bmatrix} \tag{43}$$
It is straightforward to verify that $\mathbf{N}$ spans the right nullspace of $\mathcal{O}$ (i.e., $\mathcal{O}\mathbf{N} = \mathbf{0}$).
From (43), it can be seen that the system is partially observable and that the unobservable modes involve the states corresponding to the global position of the robots and of the landmarks; these states are unobservable. An important conclusion is that all the vectors of the right nullspace basis are orthogonal to the remaining states, and therefore, those states are completely observable.
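The meaning of the nullspace (43) can be illustrated numerically: translating all the robots and all the landmarks by the same vector leaves every camera projection and every relative-distance measurement unchanged, so the global position cannot be recovered from the measurements. The following Python sketch checks this invariance on a toy configuration of our own choosing (idealized, distortion-free projections).

```python
import numpy as np

def project(landmark, cam_pos, R_wc):
    """Normalized (f = 1) pinhole projection of a world-frame landmark, Eq. (4) without distortion."""
    p = R_wc @ (landmark - cam_pos)
    return np.array([p[0] / p[2], p[1] / p[2]])

def measurements(cam_pos, R_wc, landmarks):
    """Stack all camera projections plus the relative distance between UAV 1 and UAV 2, Eq. (21)."""
    z = [project(lm, p, R) for p, R in zip(cam_pos, R_wc) for lm in landmarks]
    z.append(cam_pos[1] - cam_pos[0])
    return np.concatenate(z)

# Toy configuration: two downward-looking UAV cameras sharing two landmarks on the ground.
cams = [np.array([0.0, 0.0, 10.0]), np.array([2.0, 1.0, 12.0])]
R_wc = [np.diag([1.0, -1.0, -1.0])] * 2
lms  = [np.array([1.0, 2.0, 0.0]), np.array([-2.0, 1.0, 0.0])]

# Shift the WHOLE system (robots and landmarks) by the same translation t: this is exactly
# the direction spanned by the columns of the nullspace basis N in (43).
t = np.array([0.7, -1.3, 0.4])
z0 = measurements(cams, R_wc, lms)
z1 = measurements([c + t for c in cams], R_wc, [l + t for l in lms])
print(np.allclose(z0, z1))   # True: the global position is an unobservable mode
```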
The results of the observability analysis are summarized in Table 1.
Some important remarks on the analysis can be extracted:
  • In order to obtain the previous results, it is necessary to link the members of the multi-UAV system through the measurements (see Figure 4). In other words, (i) a robot needs to share the observation of at least two landmarks with another robot or (ii) a robot needs to measure its relative distance with respect to another robot in addition to both observing one landmark in common.
  • A single measurement of the relative distance between two robots represents a sufficient condition to obtain the previous results (see Figure 4).
  • Adding Lie derivatives of higher order to the observability matrix does not improve the results.
From the above results, it can be concluded that the proposed cooperative system, although still partially observable, considerably reduces the unobservable modes and states with respect to the 6DOF-monocular SLAM system. This represents an advantage for improving the accuracy and consistency of the estimation process.

4. EKF-Cooperative Monocular SLAM

In this section, the proposed monocular cooperative SLAM algorithm, based on an Extended Kalman Filter (EKF), is presented. Figure 5 shows the architecture of the proposed system.

4.1. EKF-SLAM

According to (1), the discrete system state to be estimated is defined by:
$$\mathbf{x}_k = f(\mathbf{x}_{k-1}, \mathbf{n}_{k-1}) = \begin{bmatrix} \mathbf{x}_{c_{k_j}} \\ \mathbf{q}_{c_{k_j}} \\ \mathbf{v}_{c_{k_j}} \\ \boldsymbol{\omega}_{c_{k_j}} \\ \mathbf{x}_{a_{k_i}} \end{bmatrix} = \begin{bmatrix} \mathbf{x}_{c_{{k-1}_j}} + \mathbf{v}_{c_{{k-1}_j}}\,\Delta t \\ \mathbf{q}_{c_{{k-1}_j}} \times \mathbf{q}\big(\boldsymbol{\omega}_{c_{{k-1}_j}}\,\Delta t\big) \\ \mathbf{v}_{c_{{k-1}_j}} + \boldsymbol{\zeta}_{c_{{k-1}_j}} \\ \boldsymbol{\omega}_{c_{{k-1}_j}} + \boldsymbol{\eta}_{c_{{k-1}_j}} \\ \mathbf{x}_{a_{{k-1}_i}} \end{bmatrix} \tag{44}$$
$$\mathbf{n}_k = \begin{bmatrix} \boldsymbol{\zeta}_{c_{k_j}} \\ \boldsymbol{\eta}_{c_{k_j}} \end{bmatrix} = \begin{bmatrix} \mathbf{a}_{c_j}\,\Delta t \\ \boldsymbol{\alpha}_{c_j}\,\Delta t \end{bmatrix} \tag{45}$$
with system measurements defined according to (4) and (21), as:
$$\mathbf{z}_k = h(\mathbf{x}_k, \mathbf{r}_k) = \begin{bmatrix} {}^i\mathbf{h}_{c_{k_j}} + {}^i\mathbf{r}_{c_{k_j}} \\ {}^j\mathbf{h}_{r_{k_n}} + {}^j\mathbf{r}_{e_{k_n}} \end{bmatrix} \tag{46}$$
$$\mathbf{r}_k = \begin{bmatrix} {}^i\mathbf{r}_{c_{k_j}} \\ {}^j\mathbf{r}_{e_{k_n}} \end{bmatrix} \tag{47}$$
Let $\mathbf{a}_{c_j}$ and $\boldsymbol{\alpha}_{c_j}$ represent unknown linear and angular accelerations, which are assumed to have a Gaussian distribution with zero mean. Let $\mathbf{n}_k \sim N(\mathbf{0}, \mathbf{Q}_k)$ and $\mathbf{r}_k \sim N(\mathbf{0}, \mathbf{R}_k)$ be the noise vectors affecting the state and the measurements, which are assumed to be mutually uncorrelated. Let $\Delta t$ be the time step and $k$ the sample index. Note that in this work, for simplicity, a Gaussian random process is used for propagating the velocity of the vehicle. A feasible alternative could be to use the dynamical model of the aircraft instead; however, this approach commonly requires considerable knowledge of the specific physics of each aerial vehicle to which the proposed method would be applied.
The prediction stage of the EKF is defined by:
$$\hat{\mathbf{x}}_k^- = f(\hat{\mathbf{x}}_{k-1}, \mathbf{0}) \tag{48}$$
$$\mathbf{P}_k^- = \mathbf{A}_k \mathbf{P}_{k-1} \mathbf{A}_k^T + \mathbf{W}_k \mathbf{Q}_{k-1} \mathbf{W}_k^T \tag{49}$$
The correction stage of the EKF is defined by:
$$\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + \mathbf{K}_k\big(\mathbf{z}_k - h(\hat{\mathbf{x}}_k^-, \mathbf{0})\big) \tag{50}$$
$$\mathbf{P}_k = (\mathbf{I} - \mathbf{K}_k \mathbf{C}_k)\,\mathbf{P}_k^- \tag{51}$$
with:
$$\mathbf{K}_k = \mathbf{P}_k^- \mathbf{C}_k^T \big(\mathbf{C}_k \mathbf{P}_k^- \mathbf{C}_k^T + \mathbf{V}_k \mathbf{R}_k \mathbf{V}_k^T\big)^{-1} \tag{52}$$
and:
$$\mathbf{A}_k = \frac{\partial f}{\partial \mathbf{x}}(\hat{\mathbf{x}}_{k-1}, \mathbf{0}), \quad \mathbf{C}_k = \frac{\partial h}{\partial \mathbf{x}}(\hat{\mathbf{x}}_k^-, \mathbf{0}), \quad \mathbf{W}_k = \frac{\partial f}{\partial \mathbf{n}}(\hat{\mathbf{x}}_{k-1}, \mathbf{0}), \quad \mathbf{V}_k = \frac{\partial h}{\partial \mathbf{r}}(\hat{\mathbf{x}}_k^-, \mathbf{0}) \tag{53}$$
$\mathbf{P}$ is the covariance matrix of the system state, and $\mathbf{K}$ is the Kalman gain.
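The prediction and correction stages (48)-(53) follow the standard EKF structure. The following Python sketch is a generic skeleton of these two stages (our own minimal illustration, not the authors' MATLAB code); the Jacobians A, W, C and V are assumed to be evaluated by the caller at the current estimate, as indicated in (53).

```python
import numpy as np

def ekf_predict(x, P, f, A, W, Q):
    """EKF prediction stage, Eqs. (48)-(49): propagate the state with zero noise
    and propagate the covariance with the Jacobians A and W."""
    x_pred = f(x)
    P_pred = A @ P @ A.T + W @ Q @ W.T
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, C, V, R):
    """EKF correction stage, Eqs. (50)-(52), using the stacked visual and
    relative-distance measurements z defined in (46)."""
    S = C @ P_pred @ C.T + V @ R @ V.T              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)             # Kalman gain, Eq. (52)
    x = x_pred + K @ (z - h(x_pred))                # corrected state, Eq. (50)
    P = (np.eye(len(x_pred)) - K @ C) @ P_pred      # corrected covariance, Eq. (51)
    return x, P
```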

4.2. Initialization of Map Features

Taking advantage of the multi-UAV cooperative system, the initialization process of new map features is carried out through a pseudo-stereo system composed of two different UAV cameras that observe common landmarks. This fact allows initializing the landmarks with less uncertainty since 3D information of the position of the landmarks is gathered from the beginning. The three-dimensional data obtained by the pseudo-stereo system can improve the information obtained by other sensors. For example, the traditional fixed stereo system has a limited operating range due to the fixed baseline between the cameras.
The initialization process is carried out when a new landmark is observed by two cameras; when this condition is fulfilled, the landmark is initialized by means of a linear triangulation. In this case, the measurement is computed using the a posteriori values obtained in the correction stage of the EKF.
According to (4) and (6), the following expression can be defined in homogeneous coordinates:
$${}^i\gamma_{c_j} \begin{bmatrix} {}^iu_{c_j} \\ {}^iv_{c_j} \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{T}_{c_j} & \mathbf{0}_{3\times 1} \end{bmatrix}\, \hat{\mathbf{E}}_{c_j} \begin{bmatrix} \mathbf{x}_{a_i} \\ 1 \end{bmatrix} \tag{54}$$
where:
$$\hat{\mathbf{E}}_{c_j} = \begin{bmatrix} {}^W\hat{\mathbf{R}}_{c_j} & -{}^W\hat{\mathbf{R}}_{c_j}\,\hat{\mathbf{x}}_{c_j} \\ \mathbf{0}_{1\times 3} & 1 \end{bmatrix} \tag{55}$$
Using (54) and considering the projection onto any two UAV cameras, a linear system can be formed in order to estimate $\mathbf{x}_{a_i}$:
$$\mathbf{D}_i\,\mathbf{x}_{a_i} = \mathbf{b}_i \;\;\Rightarrow\;\; \mathbf{x}_{a_i} = \mathbf{D}_i^{\dagger}\,\mathbf{b}_i \tag{56}$$
where $\mathbf{D}_i^{\dagger}$ is the Moore–Penrose pseudo-inverse of $\mathbf{D}_i$, and, for each observing camera $j$:
$$\mathbf{D}_i = \begin{bmatrix} k_{31_j}\,{}^iu_{c_j} - k_{11_j} & k_{32_j}\,{}^iu_{c_j} - k_{12_j} & k_{33_j}\,{}^iu_{c_j} - k_{13_j} \\ k_{31_j}\,{}^iv_{c_j} - k_{21_j} & k_{32_j}\,{}^iv_{c_j} - k_{22_j} & k_{33_j}\,{}^iv_{c_j} - k_{23_j} \\ \vdots & \vdots & \vdots \end{bmatrix}, \qquad \mathbf{b}_i = \begin{bmatrix} k_{14_j} - k_{34_j}\,{}^iu_{c_j} \\ k_{24_j} - k_{34_j}\,{}^iv_{c_j} \\ \vdots \end{bmatrix} \tag{57}$$
with:
$$\begin{bmatrix} \mathbf{T}_{c_j} & \mathbf{0}_{3\times 1} \end{bmatrix}\, \hat{\mathbf{E}}_{c_j} = \begin{bmatrix} k_{11_j} & k_{12_j} & k_{13_j} & k_{14_j} \\ k_{21_j} & k_{22_j} & k_{23_j} & k_{24_j} \\ k_{31_j} & k_{32_j} & k_{33_j} & k_{34_j} \end{bmatrix} \tag{58}$$
When a new landmark is initialized, the system state $\mathbf{x}$ is augmented as: $\mathbf{x} = \begin{bmatrix} \mathbf{x}_{c_j} & \mathbf{q}_{c_j} & \mathbf{v}_{c_j} & \boldsymbol{\omega}_{c_j} & \mathbf{x}_{a_i} & \mathbf{x}_{a_{new}} \end{bmatrix}^T$.
The new covariance matrix P n e w is computed by:
$$\mathbf{P}_{new} = \Delta_J \begin{bmatrix} \mathbf{P} & \mathbf{0} \\ \mathbf{0} & {}^i\mathbf{R}_j \end{bmatrix} \Delta_J^T \tag{59}$$
where $\Delta_J$ is the Jacobian of the initialization function and ${}^i\mathbf{R}_j$ is the measurement noise covariance matrix for $({}^iu_{c_j}, {}^iv_{c_j})$.
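A minimal Python sketch of this initialization by linear triangulation, Eqs. (56)-(58), is shown below (illustrative naming; in the proposed system the projection matrices would come from the a posteriori estimates of the two UAV cameras forming the pseudo-stereo pair).

```python
import numpy as np

def triangulate_landmark(uv_list, K_list):
    """Linear triangulation of a new landmark from its projections in two (or more)
    UAV cameras, Eqs. (56)-(58).
    uv_list: per-camera pixel measurements (iu, iv)
    K_list:  per-camera 3x4 matrices [T_c 0] * E_hat of Eq. (58)."""
    D, b = [], []
    for (u, v), k in zip(uv_list, K_list):
        D.append([k[2, 0] * u - k[0, 0], k[2, 1] * u - k[0, 1], k[2, 2] * u - k[0, 2]])
        D.append([k[2, 0] * v - k[1, 0], k[2, 1] * v - k[1, 1], k[2, 2] * v - k[1, 2]])
        b.extend([k[0, 3] - k[2, 3] * u, k[1, 3] - k[2, 3] * v])
    # Moore-Penrose pseudo-inverse solution of D x = b, Eq. (56).
    return np.linalg.pinv(np.array(D)) @ np.array(b)
```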

Map Management

The real-time feasibility of EKF-based visual SLAM systems has been proven since early works like [39]. Nevertheless, it is well known that, due to the nature of the Kalman filter, in SLAM the system state can eventually grow to a size that makes it impossible to maintain real-time performance on a given hardware. In this sense, this work is mainly intended to address the local navigation problem; that is, the proposed system is intended to be applied in scenarios involving flight trajectories relatively near the origin of the navigation frame. Therefore, old features can be removed from the system state and covariance matrix to prevent the system state from reaching a size that affects the computational performance.
On the other hand, although large-scale SLAM and loop-closing are not considered in this work, it is important to note that a SLAM framework that works reliably in a local way can be applied to large-scale problems using different methods, such as sub-mapping or graph-based global optimization [12].

5. Computer Simulations Results

In this section, computer simulation results are presented. The computer simulations were performed in order to validate the performance of the proposed method. A MATLAB ® implementation was used for this purpose.
With the intention of making an exhaustive analysis of the performance of the proposed system, a comparison is carried out with respect to three other typical single-robot SLAM configurations. The comparison highlights the advantages and drawbacks of multi-UAV systems compared with single-robot systems.
For the computer simulations setup, two quadcopters equipped with an onboard monocular camera are simulated, while moving maintaining a stable flight formation. In this case, a Quadcopter (Quad 2) navigates over the other (Quad 1) at an arbitrary relative distance. In the computer simulations, it is considered that Quad 1 remains all the time inside the visual field of Quad 2. It is also assumed that there exist some landmarks observed in common by the cameras of both quadcopters.
The characteristics of the three SLAM configurations used for the comparison are described below:
  • The first configuration to be compared is monocular SLAM. In this case, the estimates are obtained from the monocular camera carried by Quad 1. The Monocular SLAM approach used to implement this configuration is based on the method proposed in [40]. In this method, the map features are parametrized with the inverse depth parametrization. Both the initialization and update process are performed by means of the monocular measurements. The metric scale of the estimates cannot be retrieved when only monocular vision is used. For this reason, for this configuration, it is assumed that the position of the landmarks seen in the first frame (at the beginning of the flight trajectory) is perfectly known.
  • The second configuration to be compared is stereo SLAM. In this case, the estimates are obtained from a stereo system, with a baseline of 15 cm, carried by Quad 1. In this method, the map features are parametrized with the Euclidean parametrization. The feature initialization process is carried out directly by means of the 3D information provided by the stereo system. The state update is also performed using the stereo measurements.
  • The third configuration to be compared is a hybrid system stereo-monocular SLAM. In this case, the estimates are obtained from a stereo system, with a baseline of 15 cm, carried by Quad 1. In this method, the map features are parametrized with the Euclidean parametrization. The features initialization process is carried out directly by means of the 3D information provided by the stereo system. Unlike the second configuration, in this case, the state update is performed through monocular measurements obtained from one of the cameras of the stereo system.
In the computer simulations, it is assumed that the initial condition of the quadcopter states is known with certainty. In order to emulate uncertainty, Gaussian noise with $\sigma_c = 3$ pixels is added to the measurements given by the cameras. The measurements from the cameras are taken at a frequency of 10 Hz. The intrinsic parameters used for the cameras are $f_{c_j}/d_{u_j} = f_{c_j}/d_{v_j} = 200.1$ and $c_{u_j} = c_{v_j} = 500$. The environment is composed of 3D points randomly distributed over the ground. Furthermore, it is assumed that the cameras can detect and track visual features without error, avoiding the data association problem. In addition, the influence of the estimates on the control system was not considered; in other words, an almost perfect control over the vehicle is assumed. The trajectory followed by the vehicles begins near the ground and then moves away from the initial position, reaching a higher altitude as the trajectory progresses.
The average NEES (Normalized Estimation Error Squared [41]) over $n_3$ Monte Carlo runs was used in order to evaluate the consistency of each method, as proposed in [42]. The NEES is estimated as follows:
$$\epsilon_k = \big(\mathbf{x}_k - \hat{\mathbf{x}}_k\big)^T \mathbf{P}_k^{-1} \big(\mathbf{x}_k - \hat{\mathbf{x}}_k\big) \tag{60}$$
The average NEES is computed from:
$$\bar{\epsilon}_k = \frac{1}{n_3}\sum_{r=1}^{n_3} \epsilon_{k_r} \tag{61}$$
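For reference, the NEES computations (60)-(61) amount to the following few lines of Python (our own illustrative helper functions):

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized Estimation Error Squared for one run at one time step, Eq. (60)."""
    e = x_true - x_est
    return float(e @ np.linalg.solve(P, e))

def average_nees(runs):
    """Average NEES over the n3 Monte Carlo runs at a given time step, Eq. (61).
    runs: iterable of (x_true, x_est, P) tuples, one per run."""
    return np.mean([nees(xt, xe, P) for xt, xe, P in runs])
```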
Figure 6 shows the real and estimated trajectory obtained from the cooperative system. Figure 7 shows the real and estimated trajectory obtained with all the configurations. Note that in this case, only the trajectory of Quad 1 is presented. In this simulation, it can be seen that as the trajectory evolves, the error considerably increases for the single-robot configurations. On the other hand, for the proposed (cooperative) method, the error is better bounded.
Figure 8 shows the evolution over time of the real and estimated states (position and orientation) of Quad 1. In this case, the previous results are confirmed. The results of the estimated state of Quad 2 are not shown, but they are very similar to those presented for Quad 1. Table 2 summarizes the Mean Squared Error (MSE) for the position of Quad 1 along the three axes.
Figure 9 shows the average NEES over 50 Monte Carlo runs obtained for each method. The average NEES is calculated taking into account the twelve variables that define the complete state of the vehicle (position, orientation, linear velocity and angular velocity). It is very interesting to note how the consistency of the filter considerably degenerates in the three cases of the single-robot configurations. On the other hand, for the cooperative case, the consistency of the filter remains practically stable.
Figure 10 shows the relative distances (from Quad 1 to Quad 2) estimated with the method proposed in Section 2. It can be seen that these measurements are good enough to be used to update the filter (see Section 4). It is important to remark that the observability results presented in Section 3 depend on these measurements. The lower-right plot of Figure 10 shows an image frame captured from the monocular camera carried by Quad 2. In this case, the projection of the landmarks can be appreciated, as well as the projections of the four rotors of Quad 1 needed to compute the homography.
In order to compare the quality of the measurements obtained with the fixed stereo system and those obtained with the cooperative pseudo-stereo system, some computer simulations were performed. In this case, the error was computed for the estimated landmarks’ positions, assuming that the position of Quad 1 was perfectly known along the flight trajectory. For the fixed stereo system, the camera-camera calibration is perfectly known. For the cooperative pseudo-stereo system, the camera-camera calibration is obtained from the homography, and therefore, it presents a certain level of error.
Figure 11 shows the absolute value of the mean error obtained with both methods. In this experiment, the same measurements were performed for both systems. In the lower-right plot, the number of measurements per frame is shown. In the case of the fixed stereo system, the accuracy of its measurements is affected by the small baseline between cameras. This is especially noticeable when the vehicle moves far away from the landmarks (as the altitude increases). In the case of the cooperative pseudo-stereo system, the estimation error is much better bounded, although the calibration of the system is not perfectly known. A likely explanation is the possibility of having an arbitrarily larger baseline between the cameras.
Figure 12 illustrates the above fact. In this case, the statistical results obtained from simulating the measurement of a single landmark with (i) the cooperative pseudo-stereo system and (ii) a monocular method are presented. In the simulation, the UAV-Camera 1 system is located at [ x , y , z ] = [ 3 , 3 , 25 ] at instant k. The UAV-Camera 2 system is located at [ x , y , z ] = [ 4 , 3 , 30 ] at instant k. Thus, the baseline in the cooperative system is equal to 5.09 meters. A landmark is located at [ x , y , z ] = [ 3.5 , 3 , 15 ] . In order to model the inaccuracies associated with the cooperative pseudo-stereo approach, the estimated location of the UAV-Camera 2 system was modeled by adding a Gaussian noise with σ = 50 cm to its actual location. In order to emulate the monocular measurements, it is assumed that the UAV-Camera 1 system was moved (at some instant k + t ) to [ x , y , z ] = [ 3.3 , 3 , 25.1 ] to generate a parallax with respect to the landmark. Thus, the baseline in the monocular system is equal to 0.31 meters. The drift associated with the estimated displacement of the UAV-Camera 1 system is modeled by adding Gaussian noise with standard deviation σ = 5 cm to the actual location at instant k + t . In all cases, the angular measurements provided by the cameras are corrupted by Gaussian noise with σ = 3 degrees. Using the above conditions, a Monte Carlo simulation with 1000 executions has been used to estimate the landmark position with linear triangulation. In Figure 12, ellipsoids are used to illustrate the uncertainties in the estimated positions. According to the simulation results, it is better to have a larger baseline between two cameras with greater position uncertainty (like the cooperative system) than a small baseline with small uncertainty (like monocular measurements with low parallax).
In practical applications, there are several related factors that can also severely affect the performance of a system. For instance, in visual SLAM, the data association problem is critical for these approaches to be reliable. Although there are currently several methods available for rejecting outliers, it is difficult to completely eliminate this problem. In addition, in cooperative visual systems, the data association problem is extended from the single-image case to the multiple-image case. Furthermore, a problem that can arise in multi-robot systems, contrary to single-robot systems, is related to communication issues between robots. This problem can cause loss of information or even make the exchange of information impossible during certain periods.
In order to take into account the above practical considerations, a set of additional computer simulations is presented. In this case, based on the same simulation setup used previously, the following aspects are now added: (i) outliers for the visual data association in each camera; (ii) outliers for the cooperative visual data association; (iii) outages of communication between robots; (iv) failures in the homography-based technique used for estimating the relative distance between robots.
In order to emulate failures of the visual data association process, 5% of the total number of visual correspondences are forced to be outliers in a random manner. In this case, each outlier is modeled by means of a large measurement error of $\sqrt{e_u^2 + e_v^2} = 56 \pm 14\sigma$ pixels. With the objective of gaining a good insight into the performance of the proposed method under the above conditions, a comparison with a reliable general method is carried out. In this case, the method chosen is a monocular SLAM system aided by position measurements given by a GPS and attitude measurements obtained from an IMU (monocular SLAM + GPS + IMU).
Table 3 shows the number of failures introduced into the simulation: (i) the number of outliers introduced in the visual tracking process of Quad 1; (ii) the number of outliers introduced in the visual tracking process of Quad 2; (iii) the number of outliers introduced in the visual data association process used for cooperatively measuring the landmarks by means of Quad 1 and Quad 2; (iv) the number of communication outages between robots, which result in the filter update not being carried out with the information given by Quad 2; and (v) the number of failures in the homography-based technique, which result in the filter update not being carried out with the information given by the relative distance between the Quads.
Figure 13 shows the real and estimated trajectories obtained with the two configurations: (i) cooperative SLAM; and (ii) monocular SLAM + GPS + IMU. Figure 14 shows the evolution over time of the real and estimated states (position and orientation) of Quad 1 obtained with both configurations. Note that, in this case, only the trajectory of Quad 1 is presented for illustration purposes, but the estimates for Quad 2 are very similar to those presented for Quad 1. Table 4 summarizes the mean squared error for the position of Quad 1 along the three axes obtained with both configurations. In this simulation, both configurations show a good performance. In the case of monocular SLAM + GPS + IMU, this result was expected, since this system has enough sources of information to determine all the states. The cooperative system shows a good performance despite all the failures introduced into the system. The above study provides a good insight into the robustness of the proposed (cooperative) system.
Table 5 provides an insight into the performance of the proposed method for estimating the map of features. In this case, the total (summed over all landmarks) mean squared error for the estimated position of the landmarks is presented for both configurations. Furthermore, the total mean squared error for the initial estimated position of the landmarks is presented. Note that the results are given for each coordinate of the reference frame $W$. The results show that the proposed cooperative system performs better than the monocular SLAM + GPS + IMU system regarding the error obtained in the estimation of the position of the landmarks, although the latter has more sources of information provided by its sensors.

6. Conclusions

In this work, a vision-based cooperative SLAM system with application to unmanned aerial vehicles has been presented. The general idea is to take advantage of a cooperative UAV scheme in order to improve the accuracy and consistency of the state estimation process of the whole system. To achieve this purpose, it was proposed to add some relative distances between the robots as system measurements for updating the EKF. These measurements provide metric information to the system, unlike other configurations where the scale of the system is a problem. Through a nonlinear observability analysis, it was verified that the observability of the cooperative system, with the relative distance measurements added, improves on the observability obtained for a single-robot configuration. The sufficient conditions required for obtaining the observability results were established. In order to infer 3D knowledge of the position of the landmarks and thus initialize the map features with less uncertainty, in the proposed method, pseudo-stereo systems are formed from pairs of aerial robots.
An extensive set of computer simulations was performed in order to validate the proposed method. In the computer simulations, the proposed system was compared against four single-robot configurations of visual SLAM. Based on the results of the simulations, it can be observed that the proposed (cooperative) method improves the estimation of the state with respect to the other configurations. The difference in the performance of the systems is especially noticeable when the distance from the cameras to the landmarks increases. Furthermore, it was shown that the consistency of the filter is improved with the proposed method. The computer simulations also show that the accuracy of the measurements obtained from the pseudo-stereo system is better than that of the measurements obtained from a stereo system with a fixed small baseline.
In the computer simulations, an effort was made to emulate several aspects regarding the applicability of the proposed approach in real scenarios. For instance, the data association problem was considered by emulating outliers (mismatches) during the tracking of visual features on each monocular camera, as well as in the pseudo-stereo matching. Furthermore, multi-robot communication issues were considered, as well as failures of the homography technique used to provide measurements of the relative distance between robots. However, although computer simulations are useful for evaluating the full statistical consistency of the methods, they can still neglect important practical issues that appear when the methods are used in real scenarios. In this sense, it is important to note that future work should focus on experiments with real data in order to fully validate the applicability of the proposed approach. In particular, it would be interesting to investigate the more practical aspects, like the homography-based technique or the pseudo-stereo matching process.

Author Contributions

R.M. and A.G. developed the theoretical framework and algorithms presented in this research; R.M. and J.-C.T. conceived and designed the experiments; E.G. and J.-C.T. performed the experiments and supervised data management; R.M. and A.G. wrote the paper.

Acknowledgments

This research has been funded by AEROARMS EU Project H2020-ICT-2014-1-644271.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Xu, Z.; Douillard, B.; Morton, P.; Vlaskine, V. Towards Collaborative Multi-MAV-UGV Teams for Target Tracking. In Proceedings of the 2012 Robotics: Science and Systems Workshop on Integration of Perception with Control and Navigation for Resource-Limited, Highly Dynamic, Autonomous Systems, Sydney, Australia, 9–10 July 2012.
2. Michael, N.; Shen, S.; Mohta, K.; Mulgaonkar, Y.; Kumar, V.; Nagatani, K.; Okada, Y.; Kiribayashi, S.; Otake, K.; Yoshida, K.; et al. Collaborative mapping of an earthquake-damaged building via ground and aerial robots. J. Field Robot. 2012, 29, 832–841.
3. Bachrach, S.P.; He, R.; Roy, N. RANGE-Robust autonomous navigation in GPS-denied environments. J. Field Robot. 2011, 28, 644–666.
4. Aut, F.; De la Cruz, C.; Carelli, R.; Bastos, T. Navegación Autónoma asistida basada en SLAM para una silla de ruedas robotizada en entornos restringidos. Rev. Iberoam. Autom. Inform. Ind. 2011, 8, 81–92.
5. Andert, F.; Lorenz, S.; Mejias, L.; Bratanov, D. Radar-Aided Optical Navigation for Long and Large-Scale Flights over Unknown and Non-Flat Terrain. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016.
6. Vázquez-Martín, R.; Núñez, P.; Bandera, A.; Sandoval, F. Curvature Based Environment Description for Robot Navigation using Laser Range Sensors. Sensors 2009, 9, 5894–5918.
7. Zhou, H.; Zou, D.; Pei, L.; Ying, R.; Liu, P.; Yu, W. Structslam: Visual slam with building structure lines. IEEE Trans. Veh. Technol. 2015, 64, 1364–1375.
8. Pi, S.; He, B.; Zhang, S.; Nian, R.; Shen, Y.; Yan, T. Stereo visual slam system in underwater environment. In Proceedings of the OCEANS 2014, Taipei, Taiwan, 7–10 April 2014; pp. 1–5.
9. Schmidt, A. The EKF-Based Visual SLAM System with Relative Map Orientation Measurements. In Proceedings of the Computer Vision and Graphics: International Conference, ICCVG 2014, Warsaw, Poland, 15–17 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 570–577.
10. Meilland, M.; Comport, A.I. On unifying key-frame and voxel-based dense visual slam at large scales. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 3677–3683.
11. Lu, Y.; Song, D. Visual navigation using heterogeneous landmarks and unsupervised geometric constraints. IEEE Trans. Robot. 2015, 31, 736–749.
12. Strasdat, H.; Montiel, J.; Davison, A. Realtime monocular SLAM: Why filter? In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010; pp. 2657–2664.
13. Artieda, J.; Sebastian, J.; Campoy, P.; Correa, J.; Mondragon, I.; Martinez, C.; Olivares, M. Visual 3-D SLAM from UAVs. J. Intell. Robot. Syst. 2009, 55, 299–321.
14. Weiss, S.; Scaramuzza, D.; Siegwart, R. Monocular SLAM based navigation for autonomous micro helicopters GPS-denied environments. J. Field Robot. 2011, 28, 854–874.
15. Wang, C.L.; Wang, T.M.; Liang, J.H.; Zhang, Y.C.; Zhou, Y. Bearing-only Visual SLAM for Small Unmanned Aerial Vehicles in GPS-denied Environments. Int. J. Autom. Comput. 2014, 10, 387–396.
16. Chowdhary, G.; Johnson, E.N.; Magree, D.; Wu, A.; Shein, A. GPS-denied Indoor and Outdoor Monocular Vision Aided Navigation and Control of Unmanned Aircraft. J. Field Robot. 2013, 30, 415–438.
17. Andert, F.; Mejias, L. Improving monocular SLAM with altimeter hints for fixed-wing aircraft navigation and emergency landing. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015.
18. Urzua, S.; Munguía, R.; Grau, A. Vision-based SLAM system for MAVs in GPS-denied environments. Int. J. Micro Air Veh. 2017, 9, 283–296.
19. Perez-Grau, F.J.; Ragel, R.; Caballero, F.; Viguria, A.; Ollero, A. An architecture for robust UAV navigation in GPS-denied areas. J. Field Robot. 2018, 35, 121–145.
20. Pappas, H.; Tanner, G.; Kumar, V. Leader-to-formation stability. IEEE Trans. Robot. Autom. 2004, 20, 443–455.
21. Zhu, Z.; Roumeliotis, S.; Hesch, J.; Park, H.; Venable, D. Architecture for Asymmetric Collaborative Navigation. In Proceedings of the IEEE Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, USA, 23–26 April 2012.
22. Mourikis, A.I.; Roumeliotis, S.I. Performance Bounds for Cooperative Simultaneous Localisation and Mapping (C-SLAM). In Proceedings of the Robotics: Science and Systems Conference, Cambridge, MA, USA, 8–10 June 2005.
23. Fenwick, J.W.; Newman, P.M.; Leonard, J.J. Cooperative Concurrent Mapping and Localisation. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC, USA, 11–15 May 2002.
24. Vetrella, A.R.; Opromolla, R.; Fasano, G.; Accardo, D.; Grassi, M. Autonomous Flight in GPS-Challenging Environments Exploiting Multi-UAV Cooperation and Vision-aided Navigation. In Proceedings of the AIAA Information Systems-AIAA Infotech @ Aerospace, AIAA SciTech Forum, (AIAA 2017-0879), Grapevine, TX, USA, 9–13 January 2017.
25. Vetrella, A.R.; Fasano, G.; Accardo, D. Cooperative Navigation in GPS-Challenging Environments Exploiting Position Broadcast and Vision-based Tracking. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems—ICUAS’16, Arlington, VA, USA, 7–10 June 2016.
26. Huang, G.P.; Mourikis, A.I.; Roumeliotis, S.I. Generalized Analysis and Improvement of the Consistency of EKF-based SLAM; Multiple Autonomous Robotic Systems; Technical Report Number 2008-0001; University of Minnesota: Minneapolis, MN, USA, 2008.
27. Huang, G.P.; Trawny, N.; Mourikisy, A.I.; Roumeliotis, S.I. On the Consistency of Multi-robot Cooperative Localization. In Proceedings of the International Conference on Robotics Science and Systems, RSS 2009, Seattle, WA, USA, 28 June–1 July 2009.
28. Kim, J.; Sukkarieh, S. Improving the Real-Time Efficiency of Inertial SLAM and Understanding its Observability. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004.
29. Bryson, M.; Sukkarieh, S. Active Airborne Localisation and Exploration in Unknown Environments using Inertial SLAM. In Proceedings of the 2006 IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006.
30. Melnyk, I.V.; Hesch, J.A.; Roumeliotis, S.I. Cooperative Vision-aided Inertial Navigation Using Overlapping Views. In Proceedings of the International Conference on Robotics and Automation, RiverCentre, Saint Paul, MN, USA, 14–18 May 2012.
31. Barbour, N.M. Inertial Navigation Sensors. In NATO RTO Lecture Series, RTO-EN-SET-116, Low-Cost Navigation Sensors and Integration Technology; NATO: Washington, DC, USA, 2011.
32. Batzdorfer, S.; Bestmann, U.; Becker, M.; Schwithal, A.; Schattenbergm, J.; Lang, T.; Andert, F.; Dittrich, J. Using combined IMU/Stereo Vision/cooperative GNSS System for Positioning of UxV Swarms within Catastrophic Urban Scenarios. In Proceedings of the Institute of Navigation Pacific PNT 2013, Honolulu, HI, USA, 23–25 April 2013.
33. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, MA, USA, 2003.
34. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
35. Euston, M.; Coote, P.; Mahony, R.; Kim, J.; Hamel, T. A complementary filter for attitude estimation of a fixed-wing UAV. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008, Nice, France, 22–26 September 2008; pp. 340–345.
36. Munguia, R.; Grau, A. A Practical Method for Implementing an Attitude and Heading Reference System. Int. J. Adv. Robot. Syst. 2014, 11, 62.
37. Hermann, R.; Krener, A. Nonlinear controllability and observability. IEEE Trans. Autom. Control 1977, 22, 728–740.
38. Slotine, J.E.; Li, W. Applied Nonlinear Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991.
39. Davison, A. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; Volume 2, pp. 1403–1410.
40. Montiel, J.M.M.; Civera, J.; Davison, A. Unified inverse depth parametrization for monocular SLAM. In Proceedings of the Robotics: Science and Systems Conference, Philadelphia, PA, USA, 19 August 2006.
41. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley and Sons: Hoboken, NJ, USA, 2001.
42. Bailey, T.; Nieto, J.; Guivant, J.; Stevens, M.; Nebot, E. Consistency of the ekf-slam algorithm. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Beijing, China, 4–6 December 2006.
Figure 1. Cooperative monocular-based SLAM.
Figure 2. Coordinate reference systems.
Figure 3. Pinhole camera projection model.
Figure 4. Requirements to obtain the results of the observability analysis for the proposed system.
Figure 5. Block diagram showing the architecture of the system: EKF-cooperative monocular SLAM.
Figure 6. Estimated trajectories of the Quadcopters (Quad) obtained with the cooperative method.
Figure 7. Estimated trajectory of Quad 1 obtained with all the configurations.
Figure 8. Estimated state of Quad 1.
Figure 9. Average Normalized Estimation Error Squared (NEES) obtained with the four configurations.
Figure 10. Estimation of the relative distances between the flying vehicles by means of homographies.
Figure 11. Comparison of the quality of the measurements obtained from a fixed stereo system and those obtained with the cooperative pseudo-stereo system.
Figure 12. Measurement of a single landmark using: (i) cooperative pseudo-stereo system; and (ii) the delayed monocular initialization method.
Figure 13. Estimated trajectory of Quad 1 obtained with the two configurations.
Figure 14. Estimated state of Quad 1.
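Figure 9 evaluates filter consistency through the average Normalized Estimation Error Squared (NEES). As background only, a minimal sketch of the standard definition used in [41,42] is given below, assuming $N$ independent Monte Carlo runs with estimation error $\tilde{\mathbf{x}}_k^{(i)} = \mathbf{x}_k^{(i)} - \hat{\mathbf{x}}_{k|k}^{(i)}$ and filter covariance $\mathbf{P}_{k|k}^{(i)}$ at time step $k$; the exact averaging convention used by the authors is not restated here:

$$\epsilon_k^{(i)} = \left(\tilde{\mathbf{x}}_k^{(i)}\right)^{\top} \left(\mathbf{P}_{k|k}^{(i)}\right)^{-1} \tilde{\mathbf{x}}_k^{(i)}, \qquad \bar{\epsilon}_k = \frac{1}{N} \sum_{i=1}^{N} \epsilon_k^{(i)}.$$

For a consistent estimator, $N\,\bar{\epsilon}_k$ is chi-square distributed with $N \cdot \dim(\mathbf{x})$ degrees of freedom, so average NEES values that stay persistently above the corresponding confidence bound indicate an optimistic (inconsistent) covariance estimate.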
Table 1. Results of the non-linear observability analysis.

| Configuration | Unobservable Modes | Unobservable States | Observable States |
|---|---|---|---|
| Monocular | 5 | $x_{c_j}$, $v_{c_j}$, $x_{a_i}$, $\psi_{c_j}$, $\dot{\psi}_{c_j}$ | $\phi_{c_j}$, $\dot{\phi}_{c_j}$, $\theta_{c_j}$, $\dot{\theta}_{c_j}$ |
| Cooperative | 3 | $x_{c_j}$, $x_{a_i}$ | $v_{c_j}$, $\lambda_{c_j}$, $\omega_{c_j}$ |
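As a reading aid for Table 1, the mode counts follow from the standard rank condition of Hermann and Krener [37]: if $\mathcal{O}$ denotes the matrix obtained by stacking the gradients of the successive Lie derivatives of the measurement functions along the system dynamics, then, locally,

$$\text{(number of unobservable modes)} = \dim(\mathbf{x}) - \operatorname{rank}(\mathcal{O}).$$

Adding the camera-derived relative-distance measurements raises the rank of $\mathcal{O}$, which is why the cooperative configuration leaves only three unobservable directions of the state space instead of the five of the purely monocular configuration.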
Table 2. Mean squared error in the position estimation.

| Configuration | MSE$_x$ (m) | MSE$_y$ (m) | MSE$_z$ (m) |
|---|---|---|---|
| Cooperative | 0.36 | 0.05 | 0.008 |
| Monocular | 30.31 | 6.28 | 2.36 |
| Stereo | 28.42 | 5.33 | 3.82 |
| Monocular + Stereo | 12.64 | 1.51 | 2.85 |
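Tables 2, 4 and 5 report per-axis mean squared errors of the position estimates. As an illustration only, the sketch below shows one common way to compute such per-axis values from an estimated trajectory and its ground truth; the helper `per_axis_mse` is hypothetical, and the sampling and averaging conventions used by the authors are not restated here.

```python
import numpy as np

def per_axis_mse(estimated, ground_truth):
    """Per-axis mean squared error between an estimated trajectory and its ground truth.

    Both inputs are (T, 3) arrays of x, y, z positions sampled at the same instants.
    Returns a length-3 array (MSE_x, MSE_y, MSE_z) in squared units of the inputs.
    """
    err = np.asarray(estimated) - np.asarray(ground_truth)  # per-sample position error
    return np.mean(err ** 2, axis=0)                        # average the squared error per axis

# Usage example on a synthetic path with artificial estimation noise
t = np.linspace(0.0, 60.0, 600)
truth = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
rng = np.random.default_rng(0)
estimate = truth + rng.normal(scale=0.2, size=truth.shape)
print("MSE per axis:", per_axis_mse(estimate, truth))
```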
Table 3. Number of failures introduced into the simulation.

| Configuration | No. of Quad 1 Visual Outliers | No. of Quad 2 Visual Outliers | No. of Cooperative Visual Outliers | No. of Communication Outages | No. of Homography Failures |
|---|---|---|---|---|---|
| Cooperative | 9002 | 8400 | 1706 | 2104 | 20 |
| Monocular + GPS + IMU | 9535 | - | - | - | - |
Table 4. Mean squared error in the position estimation.

| Configuration | MSE$_x$ (m) | MSE$_y$ (m) | MSE$_z$ (m) |
|---|---|---|---|
| Cooperative | 0.85 | 0.23 | 0.07 |
| Monocular + GPS + IMU | 0.01 | 0.04 | 0.05 |
Table 5. Total mean squared error in: (i) the position estimation of the landmarks ($MSE_{x_m}$, $MSE_{y_m}$, $MSE_{z_m}$); and (ii) the initial position estimation of the landmarks ($MSE_{x_{mi}}$, $MSE_{y_{mi}}$, $MSE_{z_{mi}}$).

| Configuration | MSE$_{x_m}$ (m) | MSE$_{y_m}$ (m) | MSE$_{z_m}$ (m) | MSE$_{x_{mi}}$ (m) | MSE$_{y_{mi}}$ (m) | MSE$_{z_{mi}}$ (m) |
|---|---|---|---|---|---|---|
| Cooperative | 7.84 | 14.87 | 6.29 | 74.21 | 83.72 | 45.50 |
| Monocular + GPS + IMU | 21.80 | 30.31 | 13.27 | 394.05 | 427.91 | 193.99 |
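Table 5 shows that landmark positions obtained from cooperative pseudo-stereo measurements are considerably more accurate, especially at initialization, than those obtained with delayed monocular initialization. As background, the sketch below illustrates linear two-view triangulation (the standard DLT construction described in [33]) under the assumption that the 3x4 projection matrices of the two onboard cameras are known; it is an illustration of the general technique, not a restatement of the authors' exact formulation.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one landmark observed by two calibrated cameras.

    P1, P2 : (3, 4) camera projection matrices (intrinsics times [R | t]).
    uv1, uv2 : (2,) pixel coordinates of the same landmark in each image.
    Returns the landmark position as a 3-vector in the common world frame.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Usage example: two hypothetical cameras separated by 1 m along x observe a point at (1, 2, 10).
K = np.diag([500.0, 500.0, 1.0])                                       # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                      # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])      # camera 2 offset by the baseline
X_true = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, uv1, uv2))                               # approximately [1. 2. 10.]
```

The same construction explains why a longer, well-estimated baseline between the two vehicles (the "pseudo-stereo" case) conditions the linear system better than the short fixed baseline of an onboard stereo rig.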
