Article

Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

Department of Electronics Engineering, Federico Santa Maria Technical University, Av. España 1680, Valparaíso, 110-V, Chile
Entropy 2013, 15(1), 287-310; https://doi.org/10.3390/e15010287
Submission received: 15 October 2012 / Revised: 16 December 2012 / Accepted: 8 January 2013 / Published: 17 January 2013

Abstract:
Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement, from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

1. Introduction

This work addresses the problem of determining a non-arbitrary measurement selection criterion for Gaussian-based algorithms. In particular, it presents the problem of selecting the most significant features of an environment, from both an estimation convergence perspective and the point of view of the covariance matrices associated with the extracted features.
The way a sensor extracts features from the environment is known as the sensing methodology, and the set of features extracted from the environment can be used to create a map of the environment [1,2,3]. The nature of the features is related to the capabilities of the sensor used, although this relation is not strictly direct. For example, a laser range sensor is able to acquire range information regarding the surrounding environment [4,5]. With the set of laser measurements it is possible to extract geometric primitives such as lines or corners—associated with points—to infer moving objects or to recognize previously defined patterns [6]. Similar procedures can be implemented with sonar range sensors, where the extracted features will have a larger associated covariance matrix due to the high uncertainty associated with the sonar measurements [1,7,8]. Also, the features extracted from a range sensor are not necessarily related to features extracted by, e.g., a monocular vision system [9,10]. Therefore, the features extracted by sensors are dependent on the nature of such sensors. Nevertheless, further processing allows the extraction of valuable information from the raw sensor data [11].
Features extracted from the environment can also be used for localization purposes. Localization can be performed by both direct and indirect methods. A direct method uses sensors specially dedicated to estimating a certain position. For example, a GPS (global positioning system) receiver can be considered a direct positioning sensor [2], as can an IMU (inertial measurement unit) [12], a dead-reckoning system [2], a UWB (ultra-wide band) localization system [13], etc. These methods allow the estimation of a given position within the environment regardless of the elements of the environment. For example, [12] uses an inertial/GPS fusion technique to estimate the position of a UAV (unmanned aerial vehicle); in [14], the authors use a 3-axis magnetic sensor to estimate position and orientation. In addition, in [15], the author uses a distributed sensor network to localize nodes based on acoustic signals. Also, in [16], the authors use dead-reckoning and sensor data fusion integration to estimate the location of a vehicle as it drives through an environment.
On the other hand, an indirect positioning system uses the information provided by the environment to localize a given object (or a set of objects) within such an environment. Such is the case shown in [17], which extracts patterns from consecutive frames of 2-D images to localize the vision system within the environment; with the same insight, [18] uses infrared images to recognize patterns associated with a given target and to localize such a target within the environment; in [19], the authors use edge detection from ultrasonic data for object localization within unknown environments. In [20], the authors use beacons—strategically placed over a given environment—to localize a mobile anchor.
A SLAM (simultaneous localization and mapping) algorithm uses both the direct and the indirect localization methods for concurrently estimating the pose (position and orientation) of a given sensor (or a vehicle equipped with such a sensor) while mapping the surrounding environment [1,7,8,21]. Thus, for example, in a feature-based SLAM, the features extracted from the environment are used to estimate the pose of a mobile and to build a feature-based model of the environment (see [21,22]). The nature of the features used is strictly dependent on the sensor capabilities. Thus, in [23] the authors use a line-based SLAM, where lines are extracted from laser range data by recursive algorithms that estimate lines within the Cartesian space; in [3], the author uses point-based features (like corners) to perform a SLAM algorithm. On the other hand, [24] uses a stereo-vision system to estimate the orientation of the camera while mapping the surrounding environment. In [25], the authors use a SLAM algorithm—based on a monocular vision system—to compensate for the accumulative errors in an inertial-based navigation system.
Several filters have been proposed in the literature to enhance the measurement process, minimize its errors, and improve the efficiency of the estimation process. Such is the case of the Gaussian-based filters [26]. Gaussian-based filters have been used to improve localization systems, environment mapping, and SLAM algorithms. The EKF (extended Kalman filter) is one of the first and most commonly used Gaussian-based filters. It can be used equally well for localization, mapping, or as a SLAM solution [1,7,8,21,23,25,27]. It uses linearized models for both the sensor's motion model and the measurements (i.e., extracted features). Another solution is the EIF (extended information filter) [1]. The EIF reduces the computational cost associated with the correction stage of the EKF. An alternative strategy, the UKF (unscented Kalman filter), has a better performance when dealing with non-linear models for the measurements and the environment [1,28]. Meanwhile, the sequential EKF [1,29] considerably reduces the computational costs of the EKF by using sparse matrices in its formulation. The EKF, EIF and UKF are considered to be Gaussian filters [6]. Other non-Gaussian solutions, such as the particle filter (PF) and Markov processes, have also been shown to be effective in solving the SLAM problem [30,31,32]. In this work, we focus on Gaussian-based estimation algorithms.
If at time instant k there are, say, five measurements (features) with correct data association [7,8], then the EKF, EIF and UKF would use the five features at the same time to correct the system state and its covariance matrix [1,7,25]. Therefore, as the number of features with correct association increases, the computational cost associated with the correction stage of the SLAM algorithm also increases. Moreover, not all five features available at time k will contribute in the same manner to the estimation process: it may be the case that a single feature contributes most of the improvement, while the contribution of the remaining four is negligible and can be discarded.
Several approaches have been proposed in the literature to restrict the number of features to be used within the correction stage of a Gaussian-based estimator. Thus, [24] uses only a fixed number of features at every SLAM execution cycle, whereas [23] uses only the most robust features extracted by the feature extraction procedure. On the other hand, [33] uses an entropy [34] gate based on the SLAM covariance matrices to restrict the features to be used by such a SLAM algorithm; thus, only the features whose associated entropy is below a certain threshold are used. With the same insight, approaches such as the ones shown in [35,36] offer optimal solutions to the restriction of the features to be used by the estimation algorithm: the use of sub-maps reduces the computational cost without affecting the optimality of the estimation. On the other hand, the UKF proposed by [37] offers a slightly sub-optimal solution for the SLAM problem, showing improvements in the computational cost and in the handling of the non-linearity of both the process and the measurement models. However, no considerations are made regarding how the set of features with correct data association is managed.
Thus, the main contribution of this work lies in the proposal of a new measurement (feature) selection criterion based on the covariance matrices associated with the measurements, which are in turn associated with the feature extraction procedure and the sensor used. In [29,33,38], the feature selection criteria are based on the manipulation of the covariance matrices—predicted and corrected ones—associated with the estimated system state. In this article, the proposed feature selection criterion does not process the estimation information. Instead, it uses the covariance matrix associated with a feature to decide whether or not that feature will have implications on the convergence of the estimation algorithm. The latter also implies that the feature selection criterion proposed herein decreases the computational cost associated with the estimation process, because no further processing is needed in order to select the most significant features. Thus, not all features detected from the environment are necessary for efficient estimation purposes (e.g., for localization, mapping, or SLAM).
In this work, a feature selection criterion for EKF, EIF and UKF-based algorithms is shown. The three Gaussian algorithms are used for environment modeling, although the results presented herein are not restricted to the application in which the Gaussian algorithms are immersed. The selection criterion can be summarized as follows: let M be a set of, e.g., five features with correct data association at time k. The selection criterion presented herein chooses the most significant features according to their associated covariance matrices and their filter convergence implications. In addition, the same feature selection criterion is then applied to select the most meaningful feature in the presence of redundant information. Therefore, if the same feature is acquired by two or more different sensors—or by two or more feature extraction procedures—then the most meaningful feature is determined based on the selection criterion presented herein. Furthermore, we show that the feature whose covariance matrix is the closest to a null matrix yields a global minimum in the correction stage of the Gaussian-based algorithms.
This work is organized as follows. Section 2 shows the Gaussian convergence theorem and selection criterion proposal for the EKF, the EIF and the UKF algorithms. Section 3 shows several implementation results of the selection criterion proposed in this work. Conclusions are drawn in Section 4.

2. General Proposal

The general idea of the feature selection criterion presented in this work is illustrated in Figure 1, where the solid black circles represent general features of the environment and the grey zone limited by a dashed red line represents the area of the environment within which the sensors are able to detect and to extract features. As can be seen in Figure 1, five features lie within the sensor field. The feature selection criterion introduced in this work allows the selection of the most significant features and thus restricts the number of features used by the correction stage of the Gaussian-based estimation algorithms (two features are selected in Figure 1).
Figure 1. Graphical representation of features selection procedure.
The EKF convergence theorem presented in [22] establishes that, in the limit,

$$\lim_{k \to \infty} |P_k| = 0 \tag{1}$$
where $P_k$ is the corrected covariance matrix associated with the EKF process at time k. Also, according to [38],
$$0 \leq \frac{|P_k|}{|P_k^-|} \leq 1, \quad \text{with } |P_k^-| \neq 0 \tag{2}$$
where $P_k^-$ is the predicted covariance matrix. Therefore, in a consistent estimation process, the volume of uncertainty—expressed by the determinant of the covariance matrices in Equation (2)—of the EKF correction stage is always smaller than, or at least equal to, that of the EKF prediction stage. Thus, the feature selection criterion used in this work is based on choosing those features with correct association that cause the largest decrement of the covariance ratio shown in Equation (2) in order to execute the correction stage of the EKF estimation algorithm. The selection procedure is based only on the covariance matrix associated with the feature extraction procedure, and it is shown in Equation (3).
$$(z,R)_{opt}:\ \arg\min_{z,R}\left(\frac{|P_k|}{|P_k^-|}\right) \equiv \arg\min_{z,R}\left(|P_k|\right), \quad |P_k^-| = \text{constant for all } (z,R) \in N \tag{3}$$
Thus, let M be the set of features detected at time k and N the set of features with correct association. Let $z \in N$ be a feature—observation—from the environment and R the covariance matrix of the corresponding observation noise, obtained from the feature extraction procedure. Equation (3) implies that the most significant feature in N at time k is the one that causes the largest decrement of the uncertainty volume of the corrected covariance matrix of the Gaussian estimation algorithm (see [29]). Therefore, the selection criterion implemented in this work consists of using only the optimal features found in Equation (3) to correct the estimation algorithm. The feature selection criterion shown in this work applies only to the correction stage of the Gaussian-based algorithms.
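As a rough illustration of Equation (3), the following Python sketch evaluates the corrected uncertainty volume $|P_k|$ for every correctly associated pair $(z, R)$ and keeps the minimizer. A standard Gaussian (Kalman) correction is assumed for the covariance update, and all numeric values are hypothetical:

```python
import numpy as np

def most_significant(P_pred, H, candidates):
    """Sketch of Equation (3): evaluate the corrected volume |P_k| for each
    correctly associated (z, R) pair and keep the pair that minimizes it.
    P_pred and H are the predicted covariance and the observation Jacobian,
    assumed shared by all candidates in this sketch."""
    def det_after(R):
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        P = (np.eye(len(P_pred)) - K @ H) @ P_pred
        return np.linalg.det(P)
    return min(candidates, key=lambda zR: det_after(zR[1]))

# Hypothetical candidates: the feature with the tighter R wins
P_pred, H = np.eye(2), np.eye(2)
candidates = [(np.array([1.0, 0.0]), 0.50 * np.eye(2)),
              (np.array([1.1, 0.1]), 0.02 * np.eye(2))]
z_opt, R_opt = most_significant(P_pred, H, candidates)
```

Since $|P_k^-|$ is the same for every candidate, minimizing $|P_k|$ is equivalent to minimizing the ratio of Equation (3).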

2.1. The Extended Kalman Filter

The general formulation of the EKF algorithm is shown in Equation (2.1). In Equation (2.1), $\hat{\xi}_k$ is the system state vector (where $\hat{\xi}_k^-$ is the predicted EKF system state vector and $\hat{\xi}_k$ its correction); f is the model of the process (it describes its motion); $u_k$ is the control commands vector; $P_k$ is the covariance matrix associated with $\hat{\xi}_k$; $A_k$ is the Jacobian matrix of f with respect to $\hat{\xi}_k$; $Q_k$ represents the Gaussian noise of the process and $W_k$ is its Jacobian matrix; $K_k$ is the Kalman gain; $H_k$ is the Jacobian matrix associated with the observation model and $R_k$ is the covariance matrix associated with such an observation (feature); $z_k$ is the current feature of the environment extracted by the system, whereas h is its mathematical model. Further information on the EKF and its implementation issues can be found in [1,7,38].
$$\begin{aligned}
\hat{\xi}_k^- &= f(\hat{\xi}_{k-1}, u_k) \\
P_k^- &= A_k P_{k-1} A_k^T + W_k Q_k W_k^T \\
K_k &= P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
\hat{\xi}_k &= \hat{\xi}_k^- + K_k \left(z_k - h(\hat{\xi}_k^-)\right) \\
P_k &= \left(I - K_k H_k\right) P_k^-
\end{aligned} \tag{2.1}$$
For simplicity, the subindex k associated with time in Equation (2.1) will be omitted in the remainder of this section. The objective is to determine which feature is the most important from the convergence perspective of the EKF algorithm, by using the covariance matrix associated with an observed feature.
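As a reference implementation sketch, Equation (2.1) can be written in Python/NumPy as follows; the models f and h and their Jacobians are passed in as plain functions and matrices, and the toy values used below are hypothetical:

```python
import numpy as np

def ekf_step(xi, P, u, z, f, h, A, W, Q, H, R):
    """One cycle of Equation (2.1): prediction followed by correction.
    A, W and H are the Jacobians of f and h, treated as constants here."""
    # Prediction stage
    xi_pred = f(xi, u)
    P_pred = A @ P @ A.T + W @ Q @ W.T
    # Correction stage
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    xi_corr = xi_pred + K @ (z - h(xi_pred))
    P_corr = (np.eye(len(xi)) - K @ H) @ P_pred
    return xi_corr, P_corr

# Toy example: a static 2-D state observed directly (hypothetical values)
I2 = np.eye(2)
xi_c, P_c = ekf_step(np.zeros(2), I2, None, np.array([0.1, 0.0]),
                     lambda x, u: x, lambda x: x,
                     I2, I2, 0.01 * I2, I2, 0.1 * I2)
```

In this toy run the corrected determinant $|P|$ is smaller than the predicted one, in agreement with Equation (2).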
The following lemmas show the behavior of the uncertainty volume of the EKF correction stage at the limits of the covariance matrix associated with an observation.
Lemma I–
Let $R \succ 0$—where $\succ 0$ stands for positive definiteness—be the covariance matrix associated with a given observation z. Also, let $R \to \Theta_R$ imply that all entries of R, and its corresponding eigenvalues, tend to zero without compromising the positive definiteness of R; then

$$\lim_{R \to \Theta_R} |P| = 0 \tag{5}$$
Proof
According to Equation (2.1) we have that,
$$P = P^- - P^- H^T \left(H P^- H^T + R\right)^{-1} H P^- = \left(I_P - P^- H^T \left(H P^- H^T + R\right)^{-1} H\right) P^-$$
where I P is the identity matrix with the dimension of P. By applying the determinant to the above expression we have the following result [39],
$$|P| = \left|I_P - P^- H^T \left(H P^- H^T + R\right)^{-1} H\right| \, |P^-| = \left|I_P - \left(H P^- H^T + R\right)^{-1} H P^- H^T\right| \, |P^-| \tag{6}$$
In Equation (6), $|P^-| > 0$ because $P^- \succ 0$. Then,
$$\lim_{R \to \Theta_R} |P| = \lim_{R \to \Theta_R} \left|I_P - \left(H P^- H^T + R\right)^{-1} H P^- H^T\right| |P^-| = \left|I_P - \left(H P^- H^T\right)^{-1} H P^- H^T\right| |P^-| = |I_P - I_P| \, |P^-| = 0 \tag{7}$$
Therefore, Equation (7) shows that when $R \to \Theta_R$, $|P| \to 0$, which proves Lemma I. ∎
Lemma I shows that when $R \to \Theta_R$, the volume of uncertainty associated with the correction stage of the EKF—as defined in Equation (1)—tends to its minimum. Also, as $R \to \Theta_R$, the correction stage of the EKF becomes independent of the Jacobian matrix associated with the observation. The following lemma applies to the case when $R \to \infty_R$.
Lemma II–
Let R be the covariance matrix associated with an observation, as in Lemma I. Also, let $R \to \infty_R$ imply that all entries of R, and its corresponding eigenvalues, tend to infinity without compromising the positive definiteness of R. Then,
$$\lim_{R \to \infty_R} |P| = |P^-| \tag{8}$$
Proof
According to Equation (2.1), $K = P^- H^T \left(H P^- H^T + R\right)^{-1}$; therefore,
$$\lim_{R \to \infty_R} K = \lim_{R \to \infty_R} P^- H^T \left(H P^- H^T + R\right)^{-1} \approx \lim_{R \to \infty_R} P^- H^T R^{-1} = \Theta_K \tag{9}$$
In Equation (9), when $R \to \infty_R$, R becomes much greater than $H P^- H^T$—under a consistent estimation assumption [7,8]—and the Kalman gain K tends to a null matrix. Thus, applying Equation (9) to the covariance matrix of the EKF system state,
$$\lim_{R \to \infty_R} P = \lim_{R \to \infty_R} \left(P^- - K H P^-\right) = P^- - \Theta_K H P^- = P^- - \Theta_P = P^- \tag{10}$$
Thus Equation (8) follows. ∎
Therefore, when the observation z has an associated covariance matrix R with high uncertainty, the uncertainty volume associated with the covariance matrix of the EKF estimation process is not decreased.
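Both limits can be checked numerically. The sketch below (with a hypothetical $P^-$ and H) evaluates the corrected determinant of Equation (6) for a nearly null and for a very large R:

```python
import numpy as np

P_pred = np.array([[2.0, 0.3],
                   [0.3, 1.5]])   # hypothetical predicted covariance
H = np.eye(2)                     # hypothetical observation Jacobian

def corrected_det(R):
    """|P| of the EKF correction stage, as in Equation (6)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    P = (np.eye(2) - K @ H) @ P_pred
    return np.linalg.det(P)

# Lemma I: R -> Theta_R drives |P| toward zero
near_zero = corrected_det(1e-9 * np.eye(2))
# Lemma II: R -> infinity leaves |P| at approximately |P_pred|
near_pred = corrected_det(1e9 * np.eye(2))
```

The first value is numerically indistinguishable from zero, while the second matches $|P^-|$, as Lemmas I and II predict.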
Using the results shown above, it is possible to see that $R \to \Theta_R$ minimizes the corrected covariance of Equation (2.1).
Thus, the feature selection approach shown in Equation (3) is applied to the EKF algorithm using the previous lemmas, as shown in Equation (11).
$$(z,R)_{opt}:\ \arg\min_{z,R}\left(R_i\right); \quad \text{for } i = 1 \ldots \#N \tag{11}$$
In Equation (11), N is the set of features with correct association at time instant k and #N is the number of such features. Therefore, the feature with the smallest covariance matrix is considered the optimal feature to be used within the correction stage of the EKF.
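Equation (11) can be realized by ordering the candidate covariance matrices by their determinants—the scalar uncertainty volume used throughout this article. Using the determinant as the ordering is an assumption of this sketch, and the feature values are hypothetical:

```python
import numpy as np

def select_feature(candidates):
    """Equation (11): among the correctly associated features, pick the
    (z, R) pair whose covariance R is closest to the null matrix, using
    the determinant |R| as the scalar ordering (an assumption)."""
    return min(candidates, key=lambda zR: np.linalg.det(zR[1]))

# Hypothetical features with correct association at time k
features = [(np.array([1.0, 0.2]), 0.50 * np.eye(2)),   # e.g., noisy sonar
            (np.array([1.1, 0.1]), 0.02 * np.eye(2)),   # e.g., sharp laser
            (np.array([0.9, 0.3]), 0.10 * np.eye(2))]
z_opt, R_opt = select_feature(features)
```

Unlike a criterion that re-runs the correction stage for every candidate, this shortcut inspects only the observation covariances, which is what keeps the selection cost low.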

2.2. The Extended Information Filter

The general formulation of the EIF algorithm is shown in Equation (12). In Equation (12), $\mu_k$ is the system state vector; $\xi_k$ and $\Omega_k$—the information matrix—are the parameters of the EIF, where $\xi_k$ has the dimension of $\mu_k$ and $\Omega_k$ has the dimension of the covariance matrix associated with the system state vector; $G_k$ is the Jacobian matrix associated with the model of the system process, g, whereas $Q_k$ is the covariance matrix associated with the process noise; $u_k$ represents the command control vector; $H_k$ is the Jacobian matrix associated with the model of the observation and $R_k$ is the covariance matrix of the observation noise; the current feature processed by the EIF is denoted by $z_k$. Finally, $\bar{\xi}_k$ and $\bar{\Omega}_k$ are the predicted parameters of the EIF (further information regarding the EIF can be found in [1,40]).
$$\begin{aligned}
\mu_{k-1} &= \Omega_{k-1}^{-1} \xi_{k-1} \\
\bar{\Omega}_k &= \left(G_k \Omega_{k-1}^{-1} G_k^T + Q_k\right)^{-1} \\
\bar{\xi}_k &= \bar{\Omega}_k \, g(u_k, \mu_{k-1}) \\
\bar{\mu}_k &= g(u_k, \mu_{k-1}) \\
\Omega_k &= \bar{\Omega}_k + H_k^T R_k^{-1} H_k \\
\xi_k &= \bar{\xi}_k + H_k^T R_k^{-1} \left[z_k - h(\bar{\mu}_k) + H_k \bar{\mu}_k\right]
\end{aligned} \tag{12}$$
The parametrization of the EIF can be derived directly from the EKF formulation [1], as shown in Equation (13), where $P_k$ is the EKF covariance matrix and $\mu_k$ the corresponding system state.
$$P_k = \Omega_k^{-1}, \qquad \xi_k = \Omega_k \mu_k \tag{13}$$
Thus, according to Equation (1) and Equation (13) it is possible to see that,
$$\lim_{k \to \infty} |P_k| = \lim_{k \to \infty} \frac{1}{|\Omega_k|} = 0 \tag{14}$$
Therefore, $\lim_{k \to \infty} |\Omega_k| = \infty$. Thus, the feature that contributes the most to the EIF process is the one that most increases the volume of information of the EIF information matrix. For simplicity, the subindex k associated with time in Equation (12)–Equation (14) will be omitted in the remainder of this section.
The following lemmas show the behavior of the volume of information associated with the EIF at the limits of the covariance matrix associated with a given observation.
Lemma III–
Let $R \succ 0$ be a positive definite covariance matrix associated with an observation z, as previously stated in Lemma I. Then,
$$\lim_{R \to \Theta_R} |\Omega_R| = \infty \tag{15}$$
Proof
Regarding Equation (13) and Lemma I, the proof of Lemma III is straightforward. Thus, according to Equation (5) and Equation (13),
$$\lim_{R \to \Theta_R} |P| = \lim_{R \to \Theta_R} \frac{1}{|\Omega|} = 0$$
therefore,
$$\lim_{R \to \Theta_R} |\Omega| = \infty$$
which proves Lemma III. ∎
Lemma III establishes that when $R \to \Theta_R$, the volume of information of the correction stage of the EIF tends to infinity. Thus, $R \to \Theta_R$ is a maximum of the correction stage of the EIF. Also, as $R \to \Theta_R$, the correction stage of the EIF becomes independent of the Jacobian matrix associated with the observation. The following lemma deals with the case when $R \to \infty_R$.
Lemma IV–
Let R be the covariance matrix associated with an observation, as stated in Lemma I. Also, let $R \to \infty_R$ imply that all entries of R, and its corresponding eigenvalues, tend to infinity without compromising the positive definiteness of R. Then,
$$\lim_{R \to \infty_R} |\Omega_R| = |\bar{\Omega}| \tag{16}$$
Proof
As in Lemma III, the proof is straightforward, using the results shown in Equation (13) and Equation (5) and in Lemma II.
Lemma IV shows that when the observation is too noisy—i.e., the entries of its associated covariance matrix tend to infinity—then the information matrix of the EIF algorithm is not improved by such an observation. This is so because as $R \to \infty_R$, the term $H^T R^{-1} H$ tends to the null matrix. Therefore, the volume of information is not increased.
Using the results shown in Lemmas III and IV, it is possible to see that the volume of information reaches its maximum when $R \to \Theta_R$.
The feature selection approach shown in Equation (3) is applied to the EIF algorithm using Lemmas III and IV, as shown in Equation (17)—which is equivalent to the one shown in Equation (3).
$$(z,R)_{opt}:\ \arg\min_{z,R}\left(R_i\right); \quad \text{for } i = 1 \ldots \#N \tag{17}$$
In Equation (17), N is the set of features with correct association at time instant k and #N is the number of such features. Thus, the feature whose associated covariance matrix is the closest to the null matrix $\Theta_R$ is chosen as the optimal feature for the EIF correction stage.
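The behavior stated in Lemmas III and IV can be checked numerically through the duality of Equation (13). The sketch below uses a hypothetical predicted covariance and observation Jacobian; a tighter R enlarges the volume of information $|\Omega|$, while a very large R leaves it essentially at its predicted value:

```python
import numpy as np

P_pred = np.array([[2.0, 0.3],
                   [0.3, 1.5]])   # hypothetical predicted covariance
H = np.eye(2)                     # hypothetical observation Jacobian

def information_det(R):
    """|Omega| after the EIF correction of Equation (12):
    Omega = Omega_bar + H^T R^-1 H, with Omega_bar = inv(P_pred)."""
    Omega_bar = np.linalg.inv(P_pred)
    Omega = Omega_bar + H.T @ np.linalg.inv(R) @ H
    return np.linalg.det(Omega)

# Lemma III: a smaller R yields a larger volume of information
small_R = information_det(0.01 * np.eye(2))
large_R = information_det(1.0 * np.eye(2))
# Lemma IV: a huge R leaves |Omega| at roughly |Omega_bar|
huge_R = information_det(1e12 * np.eye(2))
```

Here `small_R` is several orders of magnitude larger than `large_R`, and `huge_R` coincides with $|\bar{\Omega}|$ up to numerical precision.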

2.3. The Unscented Kalman Filter

The formulation of the UKF is shown in Equation (18). In Equation (18), $\bar{\mu}$ and $\mu$ are the predicted and corrected system state, respectively; $\bar{\Sigma}$ and $\Sigma$ are the predicted and corrected covariance matrices associated with the UKF system state; u is the control vector input and g the function that describes the mobile robot motion; χ are the sigma points of the transformation associated with the estimation system state, whereas the $\bar{Z}$ sigma points are associated with the observation model; $\hat{z}$ is the feature prediction and $\bar{\Sigma}^{x,z}$ is a cross-covariance term; K is the Kalman gain, R is the covariance matrix associated with the extracted feature and Q is the covariance matrix of the process noise. Additional information regarding the UKF can be found in [1,41].
$$\begin{aligned}
\chi_{k-1} &= \left(\mu_{k-1} \quad \mu_{k-1} + \gamma\sqrt{\Sigma_{k-1}} \quad \mu_{k-1} - \gamma\sqrt{\Sigma_{k-1}}\right) \\
\bar{\chi}_k^* &= g(u_k, \chi_{k-1}) \\
\bar{\mu}_k &= \sum_{i=0}^{2n} \sigma_m^{[i]} \, \bar{\chi}_k^{*[i]} \\
\bar{\Sigma}_k &= \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{\chi}_k^{*[i]} - \bar{\mu}_k\right)\left(\bar{\chi}_k^{*[i]} - \bar{\mu}_k\right)^T + Q_k \\
\bar{\chi}_k &= \left(\bar{\mu}_k \quad \bar{\mu}_k + \gamma\sqrt{\bar{\Sigma}_k} \quad \bar{\mu}_k - \gamma\sqrt{\bar{\Sigma}_k}\right) \\
\bar{Z}_k &= h(\bar{\chi}_k) \\
\hat{z}_k &= \sum_{i=0}^{2n} \sigma_m^{[i]} \, \bar{Z}_k^{[i]} \\
S_k &= \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}_k^{[i]} - \hat{z}_k\right)\left(\bar{Z}_k^{[i]} - \hat{z}_k\right)^T + R_k \\
\bar{\Sigma}_k^{x,z} &= \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{\chi}_k^{[i]} - \bar{\mu}_k\right)\left(\bar{Z}_k^{[i]} - \hat{z}_k\right)^T \\
K_k &= \bar{\Sigma}_k^{x,z} S_k^{-1} \\
\mu_k &= \bar{\mu}_k + K_k \left(z_k - \hat{z}_k\right) \\
\Sigma_k &= \bar{\Sigma}_k - K_k S_k K_k^T
\end{aligned} \tag{18}$$
Applying the same procedure as for the EKF—see Lemmas I and II—we have that,
$$\lim_{R \to \Theta_R} \left(\bar{\Sigma} - K S_R K^T\right) = \lim_{R \to \Theta_R} \left(\bar{\Sigma} - \bar{\Sigma}^{x,z} S_R^{-T} \left[\bar{\Sigma}^{x,z}\right]^T\right) \tag{19}$$
In Equation (18), let $S = \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}^{[i]} - \hat{z}\right)\left(\bar{Z}^{[i]} - \hat{z}\right)^T + R$ be written as $S_R = M + R$. Then Equation (19) turns into:
$$\lim_{R \to \Theta_R} \left(\bar{\Sigma} - K S_R K^T\right) = \bar{\Sigma} - \bar{\Sigma}^{x,z} M^{-T} \left[\bar{\Sigma}^{x,z}\right]^T \tag{20}$$
Considering that, in Equation (20), $\bar{\Sigma}$ and $\bar{\Sigma}^{x,z} M^{-T} [\bar{\Sigma}^{x,z}]^T$ are positive semi-definite, Equation (19) might reach a global minimum as $R \to \Theta_R$. Nevertheless, as shown in Equation (20), the limit of Σ—the corrected covariance matrix—when R tends to the null matrix depends on M, where $M = \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}_k^{[i]} - \hat{z}_k\right)\left(\bar{Z}_k^{[i]} - \hat{z}_k\right)^T$. Each feature has its own sigma points; hence, each feature has its own associated M-matrix. On the other hand, for the EKF and the EIF, in the limit the correction stage becomes independent of the Jacobian matrix associated with the observation model, as shown in Equation (7) and Equation (15). For simplicity, the subindex k associated with time in Equation (18) will be omitted in the remainder of this section.
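The correction stage of Equation (18) can be sketched as follows in Python/NumPy. The article does not fix the sigma-point weights, so the common choice $\sigma^{[0]} = 1 - n/\gamma^2$ and $\sigma^{[i]} = 1/(2\gamma^2)$, with equal mean and covariance weights, is assumed here:

```python
import numpy as np

def sigma_points(mu, Sigma, gamma=np.sqrt(3.0)):
    """Sigma points of Equation (18): mu and mu +/- gamma * sqrt(Sigma)."""
    n = len(mu)
    L = np.linalg.cholesky(Sigma)          # one matrix square root of Sigma
    pts = [mu]
    for i in range(n):
        pts.append(mu + gamma * L[:, i])
        pts.append(mu - gamma * L[:, i])
    return np.array(pts)                   # shape (2n + 1, n)

def ukf_correct(mu_bar, Sigma_bar, z, R, h, gamma=np.sqrt(3.0)):
    """UKF correction stage of Equation (18) for one observation (z, R)."""
    n = len(mu_bar)
    w = np.full(2 * n + 1, 1.0 / (2.0 * gamma**2))  # assumed weights
    w[0] = 1.0 - n / gamma**2
    X = sigma_points(mu_bar, Sigma_bar, gamma)
    Z = np.array([h(x) for x in X])                  # propagated sigma points
    z_hat = w @ Z                                    # predicted observation
    S = sum(wi * np.outer(zi - z_hat, zi - z_hat)
            for wi, zi in zip(w, Z)) + R             # S_k of Equation (18)
    Sxz = sum(wi * np.outer(xi - mu_bar, zi - z_hat)
              for wi, xi, zi in zip(w, X, Z))        # cross-covariance
    K = Sxz @ np.linalg.inv(S)                       # Kalman gain
    mu = mu_bar + K @ (z - z_hat)
    Sigma = Sigma_bar - K @ S @ K.T
    return mu, Sigma
```

For a linear observation model the unscented transform is exact, so this sketch reproduces the usual Kalman correction; for non-linear h, the M-term of $S_R = M + R$ varies with each feature's sigma points, as discussed above.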
In order to use the feature selection criterion proposed in this work, the following modification is introduced. Instead of comparing two different features, the same feature extracted by—at least—two different methods or sensors will be used (see Figure 2). Examples of this situation are a point-based feature extracted by a laser range sensor and by a sonar (see [4,42]), or a position estimate given by the mobile robot's encoder-based odometry and the position estimate given by the integration of internal accelerometer measurements [2].
Figure 2. Graphical representation of the feature selection for the UKF. A same feature is extracted from the environment—z—by two different methods—or sensors—with their respective covariance matrices ( R a and R b ).
Lemma V–
Let z be the same feature extracted by two different methods—or sensors—as shown in Figure 2, and let $R_a$ and $R_b$ be their associated positive definite covariance matrices. If $R_a \succeq R_b$ (where $\succeq$ stands for the positive semi-definite ordering), then

$$|\Sigma|_{R_a} \geq |\Sigma|_{R_b} \tag{21}$$
In Equation (21), $|\Sigma|_{R_j}$, $j = a, b$, is the volume of uncertainty of the UKF algorithm when observation z and its corresponding covariance matrix $R_j$ are used in the correction stage of the filter.
Proof
Let $R_a \succeq R_b$ as established in the hypothesis. According to Equation (18), $S = \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}^{[i]} - \hat{z}\right)\left(\bar{Z}^{[i]} - \hat{z}\right)^T + R$, where the summation term is the same for both feature detection processes; therefore,

$$S_a = \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}^{[i]} - \hat{z}\right)\left(\bar{Z}^{[i]} - \hat{z}\right)^T + R_a \succeq \sum_{i=0}^{2n} \sigma_c^{*[i]} \left(\bar{Z}^{[i]} - \hat{z}\right)\left(\bar{Z}^{[i]} - \hat{z}\right)^T + R_b = S_b \tag{22}$$
Also, considering that S is positive definite, then [39],
$$S_a^{-1} \preceq S_b^{-1}$$
Since S - 1 is also positive definite, the following result is valid [39]:
$$\left(S_a^{-1}\right)^T \preceq \left(S_b^{-1}\right)^T$$
Also, $\bar{\Sigma}^{x,z}$ in Equation (18) is positive definite; then:
$$\bar{\Sigma}^{x,z} \left(S_a^{-1}\right)^T \left(\bar{\Sigma}^{x,z}\right)^T \preceq \bar{\Sigma}^{x,z} \left(S_b^{-1}\right)^T \left(\bar{\Sigma}^{x,z}\right)^T$$
Considering that $K = \bar{\Sigma}^{x,z} S^{-1}$, the expression above turns into the following:
$$K_a S_a K_a^T \preceq K_b S_b K_b^T$$
Adding Σ ¯ —the predicted UKF system state covariance—to both sides of the expression above and re-ordering, we have that,
$$\left(\bar{\Sigma} - K_a S_a K_a^T\right) - \left(\bar{\Sigma} - K_b S_b K_b^T\right) \succeq 0$$
Therefore, according to Equation (18), it is possible to see that $\Sigma_{R_a} \succeq \Sigma_{R_b}$. Considering also that Σ is positive semi-definite, then $|\Sigma|_{R_a} \geq |\Sigma|_{R_b}$—see [39]—which proves Lemma V. ∎
Thus, Lemma V establishes that the measurement with the lowest covariance will cause the largest decrement of the uncertainty associated with the estimation process. Also, Lemma V can be directly applied to the EKF and EIF-based algorithms when the feature selection criterion is restricted only to those features that can be extracted—acquired—by two or more methods or sensors. Thus, under the same hypothesis of Lemma V, for the EKF case: $|P|_{R_a} \geq |P|_{R_b}$; and, for the EIF case: $|\Omega|_{R_a} \leq |\Omega|_{R_b}$. Both expressions can be directly obtained from Equation (2.1) and Equation (12), considering that the same feature has the same Jacobian matrix associated with it [29].
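Lemma V can be checked numerically for the UKF correction stage. The sketch below models the same feature z observed by two sensors with $R_a \succeq R_b$ (a noisier and a sharper sensor; all matrices are hypothetical, and an identity observation model is assumed so that the M-term and cross-covariance coincide with the predicted covariance):

```python
import numpy as np

# Hypothetical quantities shared by both detections of the same feature z:
Sigma_bar = np.array([[1.0, 0.2],
                      [0.2, 0.8]])   # predicted UKF covariance (assumed)
Sxz = Sigma_bar.copy()               # cross-covariance, identity observation
M = Sigma_bar.copy()                 # M-term of S_R = M + R (same for both)

def corrected_volume(R):
    """|Sigma| after the UKF correction of Equation (18), with S = M + R."""
    S = M + R
    K = Sxz @ np.linalg.inv(S)
    Sigma = Sigma_bar - K @ S @ K.T
    return np.linalg.det(Sigma)

R_a = 0.50 * np.eye(2)   # noisier sensor (e.g., a sonar) - assumed value
R_b = 0.05 * np.eye(2)   # sharper sensor (e.g., a laser) - assumed value
```

Evaluating both volumes confirms Equation (21): the sharper sensor leaves a strictly smaller uncertainty volume, so it is the one the selection criterion keeps.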

3. Implementation Results

In this section, two case studies are shown. The first case study is a positioning system based on 3-D information acquired by a TOF (time of flight) camera; it uses an EKF to position the camera based on a previous 3-D image reference. The second case study is an EIF-SLAM algorithm, widely used in robotics for mobile robot localization and mapping of unknown environments. Both approaches can also be implemented by using the UKF optimization presented in this work.

3.1. First Approach: EKF Positioning System for a 3-D TOF Camera

In this first approach, a TOF SR4000 camera—which has a range of up to 10 meters—was used. The EKF localization procedure was implemented based on [43]. In [43], the authors use features extracted from the environment to locate the sensor within such an environment. The experiment consisted of the following.
(1)
The experiment was carried out within an office-like environment.
(2)
An IMU—an inertial measurement unit consisting of a 3-axis accelerometer, a 3-axis gyroscope and a pressure sensor acting as an altitude sensor—was located at the top of the camera. Both the camera and the IMU were connected via USB to a personal computer. The data acquisition was synchronized between the IMU and the TOF camera. The acquisition software was developed using Visual C++ and the OpenCV library.
(3)
The camera movements were on a plane—over a desk. Therefore, only two axes of the accelerometer were used. Several way-points were drawn over the desk. The camera movements were performed by hand.
(4)
The features extracted from the acquired images correspond to corners. The EKF uses this information to localize the camera as it moves over the plane. The EKF initialization procedures can be found in [7,43]. The corner detection procedure was based on [44], which uses raw 3-D data to determine corners in the Cartesian space.
(5)
The EKF was performed by using the feature selection criterion shown in Equation (11).
Figure 3(a) shows a schematic of the sensors used: the IMU and the TOF camera connected to the personal computer. Figure 3(b) shows an image acquired by the camera; the image is represented from the camera plane point of view. On the other hand, Figure 3(c) and Figure 3(d) show two images of the same environment acquired from different positions and referenced to the focus of the camera, i.e., the coordinates reference system is attached to the camera focus [43]. In addition, the solid black arrows in Figure 3(b) show the corners determined in the 3-D image (up to 10 corners were found).
Figure 4 shows the experiment carried out in the office-like environment. As stated before, the camera was moving on a plane; therefore, no elevation was registered and Figure 4 shows only a top view of the experiments. The blue circles correspond to the corners detected in the first acquired image (see Figure 3(b)); the localization is then performed according to these corners. Within the first image, 10 corners were detected. The red diamonds correspond to the way-points of the camera movement, which are used as ground truth in the experiments. The way-points were previously drawn over the desk used for the camera's displacement. During the image acquisition process, the camera was always pointing outward with the same orientation. The solid black line that joins the red diamonds is the actual path followed by the camera. In Figure 4(a), the solid magenta line is the path estimated by the IMU mounted on the TOF camera (see Figure 3(a)). In addition, the ellipses represent the covariance associated with each way-point, according to the IMU measurements. Figure 4(b) and Figure 4(c) show the path estimated by the EKF algorithm as a solid blue line; in particular, Figure 4(b) shows the path estimated by the EKF using all the detected features in each execution of the algorithm, whereas Figure 4(c) shows the path estimated by using the optimization criterion presented in Equation (11). For the optimization case experiment, the maximum number of features used was two; that is, only the two features with the lowest covariance matrix associated with their detection procedure were used in the EKF algorithm. Figure 4(d) shows the error evolution associated with each way-point.
The dotted green line is the error associated with the IMU measurements; the dotted blue line represents the error associated with the EKF estimation using all the detected features, whereas the dotted magenta line shows the error associated with the EKF estimation using only the two most significant features according to Equation (11). As can be seen, the optimization method allows a consistent estimation of the location of the TOF camera with respect to the corners extracted from the 3-D image.
Figure 3. 3-D images acquired by the TOF camera. Figure 3(a) shows a representation of the TOF camera with the IMU used to estimate the camera's movements; Figure 3(b) shows a picture of the office-like environment from the camera plane point of view (the solid black arrows represent the corners detected in that image); Figure 3(c) and Figure 3(d) show two snapshots of the TOF camera from the camera's focus reference frame.
In addition, Figure 5(a) shows the path estimated by the EKF when the two worst features are used instead of the two most significant features, according to the criterion shown in Equation (11). As can be seen, the path is not accurate with respect to the way-points. Also, Figure 5(b) shows the error between the estimated path and the ground truth. The dotted magenta line corresponds to the error of the path found by using the feature selection criterion proposed herein (it is also shown in Figure 4(d)); the dotted blue line is the estimation error obtained by using all detected features, whereas the dotted red line is the estimation error obtained by using the two worst features (the features with the highest covariance matrices among all possible features). As can be seen, the feature selection criterion shows a better performance when compared with the worst case.
Figure 4. EKF positioning system. Figure 4(a) shows a top view of the detected corners from the first acquired image, the way-points (used as ground truth), the traveled path of the camera (solid black line) and the path estimated from the IMU measurements (solid magenta line). The green ellipses represent the covariance of the IMU measurements at each way-point of the path. Figure 4(b) and Figure 4(c) show the EKF estimation (solid blue line). In Figure 4(b), all the detected features are used within the correction stage of the EKF algorithm, whereas in Figure 4(c) only the two most significant features, according to the optimization criterion presented in Equation (11), were used. In addition, Figure 4(d) shows the error evolution of the three cases: the IMU estimation (dotted green line) and the two EKF estimations (the dotted blue line corresponds to the full-features case and the dotted magenta line to the feature selection case).
Figure 5. EKF positioning system using the worst two features. Figure 5(a) shows the path estimated by the EKF using only the worst two features (i.e., the features with the highest covariance matrices associated with them); Figure 5(b) shows the error in the estimation process: the dotted blue line corresponds to the error of the EKF estimation when using the full set of features; the dotted magenta line corresponds to the EKF localization estimation when using the best two features according to the selection criterion shown in Equation (11); the dotted red line is the case when the localization process is performed by using the worst two features.

3.2. Second Approach: An EIF–SLAM Case Study

A feature-based SLAM (simultaneous localization and mapping) algorithm [1,7,8,45,46] applied to a mobile robot concurrently estimates the mobile robot's pose and the locations of features in the environment, minimizing the estimation errors. As the robot navigates through the environment and new features are detected by the mobile robot's sensors, they are added to the SLAM system state and to its covariance matrix [1,4,7,42]. Thus, the dimension of both the SLAM system state and its covariance matrix is dynamic [2,23,47] and, if no feature is deleted from the SLAM algorithm, it is also incremental. The computational cost of the SLAM algorithm therefore increases as the robot acquires more information from the environment [41,48]. This increases the processing time of the SLAM algorithm, compromising the mobile robot navigation strategy due to a non-constant sampling time [38,40]. In this approach, an EIF is used to implement a SLAM algorithm for agricultural environments.
The mobile robot used in this work is a non-holonomic unicycle-type Pioneer 3AT built by ActivMedia, with a SICK laser range sensor mounted on it. The laser acquires 181 measurements over a range of up to 30 meters, from 0 to 180 degrees. Figure 6 shows the mobile robot used, as well as the SICK laser mounted on it.
The experimental scenario can be summarized as follows.
  • The experiments were carried out at an experimental fruit plantation.
  • The features extracted from the environment correspond to tree-stems which are modeled as point-based features [5]. The tree-stems were extracted using the range laser sensor shown in Figure 6. The feature extraction process was based on a clustering algorithm previously published in [38].
  • The positions of the trees are known, since they were previously obtained by means of a differential GPS (global positioning system) built by Novatel, with an absolute error of 20 cm.
  • The robot is controlled by a hand-joystick. The robot’s movements were recorded using the differential GPS. During its motion, the robot acquired environment information by means of the range laser sensor.
  • Once the information from the environment (and from the differential GPS) was available, the EIF–SLAM algorithm was executed. The off-line execution of the SLAM algorithm allows the study of the SLAM behavior without considering the motion implications of the mobile robot controller or its inherent delays ([29,38] use real-time SLAM executions with non-reactive mobile robot motion controllers).
  • Only the two features with the lowest associated covariance matrices were used in the correction stage of the EIF–SLAM algorithm (see Equation (17)).
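A minimal sketch of the correction stage that the two selected features feed into can be written in information form. The variable names are illustrative and the update shown is the standard EIF correction, not necessarily the exact form of Equation (17).

```python
import numpy as np

def eif_correction(Omega, xi, H, R, innovation):
    """Information-form (EIF) correction for one selected feature.
    Adding H^T R^{-1} H to the information matrix shows why features
    with a small covariance R are preferred: the smaller R, the
    larger the information gained in the correction stage."""
    R_inv = np.linalg.inv(R)
    Omega_new = Omega + H.T @ R_inv @ H
    xi_new = xi + H.T @ R_inv @ innovation
    return Omega_new, xi_new

# Toy 2-D example: same measurement geometry, two noise levels.
Omega0, xi0 = np.eye(2), np.zeros(2)
H = np.eye(2)
z = np.array([1.0, 1.0])
good, _ = eif_correction(Omega0, xi0, H, 0.1 * np.eye(2), z)
bad, _ = eif_correction(Omega0, xi0, H, 1.0 * np.eye(2), z)
print(np.linalg.det(good) > np.linalg.det(bad))  # → True
```

The low-noise feature contributes a larger information increment, which is the behavior exploited by the selection criterion.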
Figure 6. Picture of the mobile robot Pioneer 3AT used in this work. The range sensor laser is mounted on the vehicle.
The mobile robot kinematic model is presented in Equation (22), where $u_V$ and $u_W$ are the linear and angular velocities imparted to the mobile robot by the hand-joystick; $\xi_k$ represents the position of the robot ($x_k$ and $y_k$) and $\theta_k$ its orientation within the environment at time instant $k$; $\Delta_k$ is the sampling time of the system. Further information can be found in [2,38].
$$
\xi_k = \begin{bmatrix} x_k \\ y_k \\ \theta_k \end{bmatrix}
= \begin{bmatrix} x_{k-1} \\ y_{k-1} \\ \theta_{k-1} \end{bmatrix}
+ \Delta_k \begin{bmatrix} u_{V,k}\cos(\theta_k) \\ u_{V,k}\sin(\theta_k) \\ u_{W,k} \end{bmatrix}
\tag{22}
$$
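Equation (22) can be implemented directly. The sketch below evaluates the trigonometric terms at the previous orientation, a common explicit discretization when the sampling time is small; Equation (22) itself writes them at $\theta_k$.

```python
import numpy as np

def unicycle_step(xi, u_v, u_w, dt):
    """Propagate the pose xi = [x, y, theta] over one sampling
    period dt, under linear velocity u_v and angular velocity u_w,
    following the discrete unicycle model of Equation (22)."""
    x, y, theta = xi
    return np.array([x + dt * u_v * np.cos(theta),
                     y + dt * u_v * np.sin(theta),
                     theta + dt * u_w])

# 1 m/s straight ahead for 0.1 s from the origin, heading theta = 0.
pose = unicycle_step(np.array([0.0, 0.0, 0.0]), 1.0, 0.0, 0.1)
```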
Figure 7 shows four snapshots of the experiment with the EIF–SLAM with feature selection criterion incorporated into it. The robot's initial position was $\xi_0 = [0\ 0\ 0]^T$. In Figure 7, the blue triangles represent the differential GPS position of each tree from the environment; the red crosses are the trees detected and used by the SLAM algorithm; the solid red triangle represents the mobile robot pose within the environment estimated by the SLAM algorithm. The solid magenta line is the path traveled by the mobile robot and estimated by the differential GPS measurements, whereas the solid black line represents the path estimated by the EIF–SLAM algorithm.
Figure 7. Several snapshots of the EIF–SLAM algorithm with implemented feature selection criterion.
Figure 8 shows a comparison between the EIF–SLAM shown in Figure 7 and the same EIF–SLAM without restricting the features acquired from the environment. Figure 8(a) shows the EIF–SLAM result when all detected features were used within the estimation process; on the other hand, Figure 8(b) shows the EIF–SLAM with the feature selection criterion presented herein for the EIF case. The blue triangles correspond to the true localization of the trees within the environment (obtained by means of the differential GPS device); the red crosses correspond to the tree localizations estimated by the SLAM algorithm; the solid magenta line is the path traveled by the mobile robot and recorded by the differential GPS, whereas the solid black line is the path estimated by the SLAM algorithm. Figure 8(c) and Figure 8(d) show zoomed-in views of Figure 8(a) and Figure 8(b), respectively. As illustrated, the path estimated by the full-featured EIF–SLAM is closer to the differential GPS measurements than the path estimated by the EIF–SLAM with the feature selection criterion, for the region of the figure shown. Nevertheless, in Figure 9, Figure 9(a) and Figure 9(c) show the robot's position consistency test for the full-featured EIF–SLAM algorithm, whereas Figure 9(b) and Figure 9(d) show the robot's position consistency test for the EIF–SLAM algorithm with the feature selection approach. As can be seen, both cases show a consistent SLAM: the error of the estimated mobile robot's position ($\hat{\xi}_x$ and $\hat{\xi}_y$) is bounded by twice its standard deviation ($\sigma_x$ and $\sigma_y$, respectively). For both cases, the consistency tests were carried out using the differential GPS measurements as ground truth. In addition, Figure 9(a) and Figure 9(c) show that the error between the robot's position (in x or y coordinates) estimated by the SLAM algorithm and the differential GPS measurements does not exceed 0.3 meters.
On the other hand, Figure 9(b) and Figure 9(d) show that the error between the mobile robot's position (in x or y coordinates) estimated by the EIF–SLAM with the feature selection approach and the differential GPS measurements does not exceed 0.6 meters. The total distance traveled by the mobile robot was approximately 420 meters. Figure 9(e) shows the evolution of the covariance associated with several features from the environment, whereas Figure 9(f) shows the evolution of the same features when using the EIF–SLAM algorithm with the feature selection criterion. As illustrated, the covariance associated with the features tends to decrease in both cases as the robot navigates through the agricultural environment. Also, Figure 9(e) and Figure 9(f) show that the uncertainty associated with the selected features remains in the same order of magnitude. Thus, the feature selection criterion yields map estimation results similar to those of the full-featured EIF–SLAM.
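The 2σ consistency test applied above can be expressed compactly. This is a per-axis sketch with illustrative variable names, applied separately to the x and y coordinates in the experiments.

```python
import numpy as np

def is_consistent(estimated, ground_truth, std_dev):
    """Return True if, at every step, the absolute position error is
    bounded by twice the filter's reported standard deviation, i.e.,
    the 2-sigma consistency test used for the EIF-SLAM runs."""
    err = np.abs(np.asarray(estimated) - np.asarray(ground_truth))
    return bool(np.all(err <= 2.0 * np.asarray(std_dev)))

# Errors of 0.25 m with sigma = 0.2 m pass the 2-sigma (0.4 m) bound.
print(is_consistent([1.25, 2.25], [1.0, 2.0], [0.2, 0.2]))  # → True
```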
Figure 8. Experimental comparison of the full-featured EIF–SLAM and the EIF–SLAM with feature selection criterion. Figure 8(a) shows the map constructed using all the information available, whereas Figure 8(b) shows the map built using the information acquired with the feature selection criterion. In order to see the differences between both SLAM algorithms, Figure 8(c) shows a zoom in of Figure 8(a) and Figure 8(d) shows a zoom in of Figure 8(b).
Figure 9. Consistency tests of both the full-featured EIF–SLAM and the EIF–SLAM with feature selection criterion shown in Figure 8. Figure 9(a) and Figure 9(c) show the mobile robot localization consistency test of the full-featured EIF–SLAM; Figure 9(b) and Figure 9(d) show the localization consistency test for the EIF–SLAM with feature selection criterion. Figure 9(e) shows the evolution of the variance associated with several trees from the environment for the full-featured SLAM, whereas Figure 9(f) shows the variance evolution of the same features for the EIF–SLAM with feature selection restriction. The number of iterations corresponds to the instant in which the SLAM algorithm was performed.
Although the full-featured EIF–SLAM has shown a better performance in the estimation process, a similar performance was achieved by using the feature selection criterion, as can be seen in Figure 9. Nevertheless, the computational cost of the full-featured EIF–SLAM is not bounded: it depends on the number of features used by the correction stage of the EIF algorithm. However, when restricting the number of features to be used (according to the feature selection criterion shown in Equation (17)), the computational cost remains bounded by the number of features used. Figure 10 shows the computational cost associated with the experiment shown in Figure 8(d). As can be seen, the computational cost of the EIF–SLAM with feature selection criterion remains lower and bounded with respect to the instantaneous computational cost of the full-featured EIF–SLAM. Although the computational cost behavior is the same regardless of the selected features (i.e., any two features selected during the EIF estimation process would produce the same saving in the computational cost of the algorithm), choosing the most significant features according to Equation (17) ensures the consistency and convergence of the estimation process. In Figure 10, the computational cost reaches zero when no feature has been used in the correction stage of the EIF–SLAM. The computational cost analysis shown in Figure 10 was obtained by implementing the EIF–SLAM shown herein in MATLAB R2008a on a computer with an Intel(R) Core(TM) i5-2500 3.30 GHz CPU.
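The bounded-cost argument can be checked with a rough timing sketch (illustrative only, not the MATLAB implementation used in the paper). The dominant cost of a Kalman-style correction grows with the number of features processed, so capping that number bounds the per-iteration cost.

```python
import time
import numpy as np

def correction_time(n_features, state_dim=3):
    """Time one EKF-style covariance correction that processes one
    2-D measurement per feature; the state vector grows with the
    number of mapped features, as in a feature-based SLAM."""
    n = state_dim + 2 * n_features
    P = np.eye(n)
    H = np.ones((2 * n_features, n))
    t0 = time.perf_counter()
    S = H @ P @ H.T + np.eye(2 * n_features)  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    _ = (np.eye(n) - K @ H) @ P               # corrected covariance
    return time.perf_counter() - t0

# Capping the features used in the correction (e.g., at two) keeps
# this cost bounded regardless of how many features the map holds.
```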
Figure 10. Computational cost analysis. The solid red line shows the computational cost (in seconds) of implementing the EIF–SLAM using all the features detected at each correction stage. On the other hand, the solid green line shows the computational cost of implementing the EIF–SLAM for the same experiment, shown in Figure 9, when using the feature selection criterion shown in Equation (17). As is evident, the selection criterion reduces the computational cost of the correction stage of the EIF algorithm.

4. Conclusions

This paper has presented a non-arbitrary feature selection criterion for Gaussian-based SLAM algorithms. In particular, the feature selection criterion was applied to EKF-, EIF- and UKF-based SLAM algorithms. The selection criterion presented herein was based on a convergence perspective of the SLAM algorithm implemented on each filter. It consists of choosing only the features that cause the largest decrement of the uncertainty associated with the correction stage covariance matrix of each filter (EKF, EIF and UKF). This was accomplished by using the covariance matrix associated with the feature extraction procedure.
It was shown that, for the three filters studied, the feature that causes the largest decrement of the uncertainty associated with the correction stage of the SLAM is the feature with the smallest associated covariance matrix. Also, when the covariance matrix R associated with a feature tends to the null matrix of the dimension of R, the corrected covariance matrix of the EKF and UKF SLAM algorithms presents a minimum, i.e., the volume of uncertainty reaches a minimum. On the other hand, for the EIF–SLAM case, when R tends to the null matrix, the volume of information reaches a maximum.
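This behavior of R can be checked numerically with the standard Kalman covariance correction; the matrices P, H and the scalar noise levels below are illustrative, not taken from the paper.

```python
import numpy as np

def corrected_cov(P, H, R):
    """Standard Kalman/EKF covariance correction step."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P   # corrected covariance

P, H = np.eye(2), np.eye(2)
# As R shrinks toward the null matrix, the corrected uncertainty
# volume (the determinant) decreases toward its minimum.
vols = [np.linalg.det(corrected_cov(P, H, r * np.eye(2)))
        for r in (1.0, 0.1, 0.01)]
print(vols[0] > vols[1] > vols[2])  # → True
```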
The results included in this work were obtained through a long-term experiment: the robot navigated approximately 420 meters within a fruit plantation while executing an EIF–SLAM algorithm with the feature selection approach shown in Section 2.2. The features extracted from the environment consisted of tree-stems. The map obtained by the EIF–SLAM with the feature selection criterion proposed herein has shown to be coherent with a map previously obtained by means of differential GPS measurements. Also, the consistency test associated with the position of the mobile robot has shown that the position estimation error remains bounded by twice its standard deviation. In addition, the map obtained by the EIF–SLAM algorithm without feature selection was also presented. Although the SLAM algorithm without the feature selection criterion showed the smallest positioning and mapping errors in an off-line execution of the SLAM algorithm, it used all the information available, whereas the EIF–SLAM with the feature selection approach used only the two most significant features according to Section 2.2, without compromising the consistency of the estimation process or the coherency of the map reconstruction.
Despite the fact that in this work the SLAM algorithm was based on point-based features, the selection criterion proposed herein is independent from the nature of the features used. In addition, the number of the most significant features to be used in the correction stage of the SLAM algorithm depends on the design criterion.

Acknowledgements

The author would like to thank the Instituto de Automatica of the National University of San Juan, Argentina, the Instituto Nacional de Tecnología Agropecuaria (INTA, National Institute of Agricultural Technology), Argentina, and the Department of Electronics Engineering, Universidad Técnica Federico Santa María, DGIP project 23.12.09, Valparaíso, Chile, for their support.

References

  1. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  2. Siegwart, R.; Nourbahsh, I. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  3. Katz, J.; Nebot, E. Probabilistic Scheme for Laser Based Motion Detection. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 161–166.
  4. Howarth, B.; Katupitiya, J.; Guivant, J.; Szwec, A. Novel Robotic 3D Surface Mapping Using Range and Vision Fusion. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1539–1544.
  5. Guivant, J.; Nebot, M. Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Trans. Robot. Autom. 2001, 17, 242–257. [Google Scholar] [CrossRef]
  6. Bar-Shalom, X.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley & Sons: New York, NY, USA, 2001. [Google Scholar]
  7. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping (SLAM): Part I essential algorithms. IEEE Robot. Autom. Mag. 2006, 13, 99–108. [Google Scholar] [CrossRef]
  8. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping (SLAM): Part II state of the art. IEEE Robot. Autom. Mag. 2006, 13, 108–117. [Google Scholar] [CrossRef]
  9. Zhow, W.; Valls Miro, J.; Dissanayake, G. Information-efficient 3D visual SLAM for unstructured domains. IEEE Trans. Robot. 2008, 24, 1078–1087. [Google Scholar]
  10. Munguia, R.; Grau, A. Closing loops with a virtual sensor based on monocular SLAM. IEEE Trans. Instrum. Meas. 2009, 58, 2377–2384. [Google Scholar] [CrossRef]
  11. Theodoridis, S.; Koutroumbas, K. Pattern Recognition; Elsevier Academic Press: London, UK, 2003. [Google Scholar]
  12. Nemra, A.; Aouf, N. Robust ins/gps sensor fusion for uav localization using SDRE nonlinear filtering. IEEE Sens. J. 2010, 10, 789–798. [Google Scholar] [CrossRef]
  13. Segura, M.; Auat Cheein, F.; Toibero, M.; Carelli, R. Ultra wide-band localization and SLAM: A comparative study for mobile navigation. Sensors 2011, 11, 2035–2055. [Google Scholar] [CrossRef] [PubMed]
  14. Hu, C.; Li, M.; Song, S.; Yang, W.; Zhang, R.; Meng, H. A cubic 3-axis magnetic sensor array for wireless tracking magnetic position and orientation. IEEE Sens. J. 2010, 10, 903–913. [Google Scholar] [CrossRef]
  15. Frampton, K. Acoustic self-localization in a distributed sensor network. IEEE Sens. J. 2006, 6, 166–172. [Google Scholar] [CrossRef]
  16. Lazarus, S.; Ashokaraj, I.; Tsourdos, A.; Zbikowski, R.; Silson, P.; Aouf, N.; White, B. Vehicle localization using sensors data fusion via integration of covariance intersection and interval analysis. IEEE Sens. J. 2010, 7, 1302–1314. [Google Scholar] [CrossRef]
  17. Brown, S.; Miller, D. Feature-aided Multiple Target Tracking in the Image Plane. In Proceedings of the SPIE Intelligent Computing: Theory and Applications, Orlando, USA, April 2006.
  18. Petrellis, N.; Konofaos, N.; Alexiou, G. Target Localization utilizing the success rate in infrared pattern recognition. IEEE Sens. J. 2006, 6, 1355–1364. [Google Scholar] [CrossRef]
  19. Krammer, P.; Schweinzer, H. Localization of object edges in arbitrary spatial dispositions based on ultrasonic data. IEEE Sens. J. 2006, 6, 203–210. [Google Scholar] [CrossRef]
  20. Chang, C.; Lin, C. Anchor-guiding mechanism for beacon-assisted localization in wireless sensor networks. IEEE Sens. J. 2012, 12, 1098–1111. [Google Scholar] [CrossRef]
  21. Bryson, M.; Sukkarieh, S. Observability analysis and active control for airborne SLAM. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 261–280. [Google Scholar] [CrossRef]
  22. Castellanos, J.; Neira, J.; Tardos, J. Limits to the Consistency of EKF-based SLAM. In Proceedings of the IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Lisbon, Portugal, 5–7 July 2004.
  23. Garulli, A.; Giannitrapani, A.; Rosi, A.; Vicino, A. Mobile Robot SLAM for Line-based Environment Representation. In IEEE Conference on Decision and Control, Seville, Spain, 12–15 December 2005.
  24. Paz, L.; Pinies, P.; Tardos, J.; Neira, J. Large-scale 6-DOF SLAM with stereo-in-hand. IEEE Trans. Robot. Autom. 2008, 24, 946–957. [Google Scholar] [CrossRef]
  25. Sazdovski, V.; Silson, P. Inertial navigation aided by vision-based simultaneous localization and mapping. IEEE Sens. J. 2011, 11, 1646–1656. [Google Scholar] [CrossRef]
  26. Okane, T.; Frederiksen, J. Comparison of Statistical Dynamical, Square Root and Ensemble Kalman Filters. Entropy 2008, 10, 684–721. [Google Scholar] [CrossRef]
  27. Kang, J.; Choi, W.; An, S.; Oh, S. Augmented EKF-based SLAM Method for Improving the Accuracy of the Feature Map. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3725–3731.
  28. Suderhauf, N.; Lange, S.; Protzel, P. Using the Unscented Kalman Filter in Mono-Slam with Inverse Depth Parametrization for Autonomous Airship Control. In Proceedings of the IEEE International Workshop on Safety Security and Rescue Robotics, Rome, Italy, 27–29 September 2007; pp. 1–6.
  29. Auat Cheein, F.; Carelli, R. Analysis of different feature selection criteria based on the covariance convergence perspective for a SLAM algorithm. Sensors 2011, 11, 62–89. [Google Scholar] [CrossRef] [PubMed]
  30. Grisetti, G.; Stachniss, C.; Burgard, W. Improving Grid-based SLAM with Rao-blackwellized Particle Filters by Adaptive Proposals and Selective Resampling. In Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 2432–2437.
  31. Welle, J.; Schulz, D.; Bachram, T.; Cremers, A. Optimization Techniques for Laser-based 3D Particle Filter SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, USA, 3–7 May 2010; pp. 3525–3530.
  32. Eade, E.; Fong, P.; Munich, M. Monocular graph SLAM with Complexity Reduction. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Edmonton, Canada, 18–22 October 2005; pp. 1175–1180.
  33. Zhang, D.; Xie, L.; Adams, M. Entropy Based Feature Selection Scheme for Real Time Simultaneous Localization and Map Building. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Edmonton, Canada, 2–6 August 2005; pp. 1175–1180.
  34. Michalowicz, J.; Nichols, J.; Bucholtz, F. Calculation of differential entropy for a mixed gaussian distribution. Entropy 2008, 10, 200–206. [Google Scholar] [CrossRef]
  35. Huang, S.; Wang, Z.; Dissanayake, G. sparse local submap joining filter for Building large-scale maps. IEEE Trans. Robot. 2008, 24, 1121–1130. [Google Scholar] [CrossRef]
  36. Williams, S.; Dissanayake, G.; Durrant-Whyte, H. Efficient Simultaneous Localisation and Mapping Using Local Submaps. In Proceedings of the Australian Conference on Robotics and Automation, Seoul, Korea, May 2001; pp. 128–134.
  37. Julier, S.; Uhlman, J. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422. [Google Scholar] [CrossRef]
  38. Auat Cheein, F.; Scaglia, G.; di Sciascio, F.; Carelli, R. Towards features updating selection based on the covariance matrix of the SLAM system state. Robotica 2011, 29, 271–282. [Google Scholar] [CrossRef]
  39. Harville, D. Matrix Algebra from a Statisticians Perspective; Springer-Verlag: New York, NY, USA, 1997. [Google Scholar]
  40. Auat Cheein, F.; Steiner, G.; Perez, G.; Carelli, R. Optimized EIF-SLAM algorithm for precision agriculture mapping based on stems detection. Comput. Electron. Agric. 2011, 78, 195–207. [Google Scholar] [CrossRef]
  41. Paz, L.; Jensfelt, P.; Tardos, J.; Neira, J. EKF SLAM Updates in o(n) with Divide and Conquer SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 1657–1663.
  42. Masson, F.; Guivant, J.; Nebot, E. Hybrid Architecture for Simultaneous Localization and Map Building in Large Outdoors Areas. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; pp. 570–571.
  43. Hochdorfer, S.; Schlegel, C. 6-DOF SLAM using a TOF camera: The Challenge of a Continuously Growing Number of Landmarks. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3981–3986.
  44. Yokihama, H.; Chikatsu, H. Efficient Corner Detector for 3D Point Cloud Data and Application to 3D Modeling of Structures. In Proceedings of the SPIE Videometrics VIII, San Jose, USA, January 2005; pp. 208–215.
  45. Lee, Y.; Song, J. Autonomous Selection, Registration, and Recognition of Objects for Visual SLAM in Indoor Environments. In Proceedings of the International Conference on Control, Automation and Systems, Seoul, Korea, 17– 20 October 2007; pp. 668–673.
  46. Suhyeoun, G.; Sangwoo, L.; Taekyu, Y. Autonomous SLAM Technique with Stereo Vision. In Proceedings of the Chinese Control Conference, Kunming, China, 16–18 July 2008; pp. 330–335.
  47. Briechle, K.; Hanebeck, U. Localization of mobile robot using relative bearing measurements. IEEE Trans. Robot. Autom. 2004, 20, 36–44. [Google Scholar] [CrossRef]
  48. Silveira, G.; Malis, E.; Rives, P. An Efficient Direct Method for Improving Visual SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, April 2007; pp. 4090–4095.

Cheein, F.A.A. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms. Entropy 2013, 15, 287-310. https://doi.org/10.3390/e15010287