Article

Does RAIM with Correct Exclusion Produce Unbiased Positions?

by Peter J. G. Teunissen 1,2,*, Davide Imparato 1 and Christian C. J. M. Tiberius 2
1 GNSS Research Centre, Curtin University of Technology, 6845 Perth, Australia
2 Department of Geoscience and Remote Sensing, Delft University of Technology, 2628 CN Delft, The Netherlands
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1508; https://doi.org/10.3390/s17071508
Submission received: 18 May 2017 / Revised: 16 June 2017 / Accepted: 22 June 2017 / Published: 26 June 2017
(This article belongs to the Section Remote Sensors)

Abstract

As the navigation solution of exclusion-based RAIM follows from a combination of least-squares estimation and a statistically based exclusion process, the computation of the integrity of the navigation solution has to take the propagated uncertainty of the combined estimation-testing procedure into account. In this contribution, we analyse, theoretically as well as empirically, the effect that this combination has on the first statistical moment, i.e., the mean, of the computed navigation solution. It will be shown that, although statistical testing is intended to remove biases from the data, biases will always remain under the alternative hypothesis, even when the correct alternative hypothesis is properly identified. The a posteriori exclusion of a biased satellite range from the position solution will therefore never remove the bias in the position solution completely.

1. Introduction

Statistical inference principles such as estimation and testing play a fundamental role in the broad spectrum of navigation applications. Estimation is usually aimed at finding unbiased estimators having the best possible precision, while testing is used to safeguard against incorrect modeling and its consequences. In the safety-critical navigation application of aviation, the concept of Receiver Autonomous Integrity Monitoring (RAIM) was specifically developed to safeguard navigation integrity by means of self-contained fault detection at the GNSS navigation receiver [1,2]. With RAIM, the use of statistical hypothesis testing is commonplace. One may test the validity of the assumed working hypothesis by means of a chi-square distributed sum-of-squared-residuals statistic [3,4], or, more specifically, test the working hypothesis against specified alternatives (e.g., single satellite faults) through Gaussian distributed test statistics [5,6]. Depending on the parametrization of the underlying model, many different implementations of these test statistics exist [7,8,9,10,11]. Next to aviation, RAIM-testing is also used in a broad range of other applications (see, e.g., [12,13,14,15,16,17,18,19]).
In RAIM, estimation and testing are combined, which results in an overall estimator that is more complex than when one would treat estimation and testing separately. That is, the navigation solution of exclusion-based RAIM is the result of a combination of least-squares estimation and a statistical pseudorange ’inclusion–exclusion’ process. This combination of estimation and testing implies that one cannot assign the properties of the estimators under the different hypotheses to the actual estimator computed. In RAIM without exclusion, this is not an issue as one is then only working with one single estimator, namely with the one computed under the null hypothesis in case of acceptance, while in case of rejection, no estimator is computed as the solution is then said to be unavailable. Hence, in RAIM without exclusion, the distributional properties of the estimator are known, which can then directly be used to compute and evaluate the probability of hazardous misleading information. This situation changes however when exclusion is included in the process. In that case, one is not dealing with one single estimator, but actually with a combination of multiple estimators, one for each hypothesis specified. Hence, to obtain a probabilistic description of this combination, one will have to take the uncertainty of the combined estimation-testing procedure into account and perform the propagation of uncertainty accordingly. In this contribution, we explain the mechanism of this interplay and study its effect on the first moment of the distribution. We analyse, theoretically as well as empirically, how the mean of the computed estimator is affected by this interplay.
This contribution is organized as follows. We start in Section 2 with a brief review of the necessary estimation and testing results of linear model theory. We then make the case in Section 3 that, although estimation and testing are often treated separately and independently, in actual practice they are usually combined: the navigation solution of exclusion-based RAIM is the product of a least-squares estimation that is applied to pseudoranges that have already undergone a statistical 'inclusion-exclusion' process. In Section 3, we also identify the consequences that the combined estimation-testing procedure has for the distribution of the estimators involved, and in particular we show that the estimators remain biased even if the correct alternative hypothesis has been identified. These results are generalized in Section 4 to the case of multiple alternative hypotheses, for which the events of correct detection and correct identification then need to be separated. In Section 5, we apply the theoretical results and demonstrate the presence of said biases in the single-receiver pseudorange GNSS positioning results of exclusion-based RAIM. This shows that the a posteriori exclusion of a biased satellite range will never remove the bias in the position solution completely. The size of such remaining biases depends on the strength of the underlying model, the chosen false alarm rate, and the size and type of the actual input bias. We show how these biases can be computed and that an increased model strength and larger levels of significance allow for a reduction of the bias. The contribution is concluded with a summary in Section 6.

2. Estimation and Testing

In this section, we briefly review some necessary estimation and testing results of linear model theory.

2.1. Estimation

Consider the linear observation model
$H_0: \; \mathrm{E}(y) = A x, \quad \mathrm{D}(y) = Q_{yy},$  (1)
with $\mathrm{E}(\cdot)$ the expectation operator, $y \in \mathbb{R}^{m}$ the normally distributed random vector of observables, $A \in \mathbb{R}^{m \times n}$ the given design matrix of rank n, $x \in \mathbb{R}^{n}$ the to-be-estimated unknown parameter vector, $\mathrm{D}(\cdot)$ the dispersion operator and $Q_{yy} \in \mathbb{R}^{m \times m}$ the given positive-definite variance matrix of y. The linear model (1) will be referred to as our null hypothesis $H_0$.
Under H 0 , the best linear unbiased estimator (BLUE) of x is given as
$\hat{x}_0 = A^{+} y,$  (2)
with least-squares (LS) inverse $A^{+} = Q_{\hat{x}_0 \hat{x}_0} A^{T} Q_{yy}^{-1}$, in which $Q_{\hat{x}_0 \hat{x}_0} = \mathrm{D}(\hat{x}_0) = (A^{T} Q_{yy}^{-1} A)^{-1}$ is the dispersion or variance matrix of $\hat{x}_0$.
As the BLUE property of $\hat{x}_0$ depends on the validity of $H_0$, it is important that one has sufficient confidence in the assumptions underlying the null hypothesis. Although every part of the null hypothesis can of course be wrong, we assume here that, if a misspecification occurred, it is confined to an under-parametrization of the mean of y, in which case
$H_a: \; \mathrm{E}(y) = A x + C b, \quad \mathrm{D}(y) = Q_{yy},$  (3)
for some vector $b_y = C b$. Experience has shown that these types of misspecifications are by and large the most common errors that occur when formulating the model. Note that formulation (3) is general in the sense that, through the choice of matrix C, it allows one to capture any type of additive mismodelling in the observation equations. Through $b_y = C b$, one may model, for instance, the presence of one or more blunders (outliers) in the data, cycle slips in phase data, satellite failures, antenna-height errors, the erroneous neglect of atmospheric delays, or any other systematic effect that one failed to take into account under $H_0$.
In the following, we assume matrix $[A, C] \in \mathbb{R}^{m \times (n+q)}$ to be known and of rank $n + q$, and the parameter vector $b \in \mathbb{R}^{q}$ to be unknown. The linear model (3) will be referred to as the alternative hypothesis $H_a$.
Under H a , the BLUE of x is given as
$\hat{x}_a = \bar{A}^{+} y,$  (4)
with LS-inverse $\bar{A}^{+} = Q_{\hat{x}_a \hat{x}_a} \bar{A}^{T} Q_{yy}^{-1}$, $\bar{A} = P_{C}^{\perp} A$, $P_{C}^{\perp} = I_m - C (C^{T} Q_{yy}^{-1} C)^{-1} C^{T} Q_{yy}^{-1}$, and $Q_{\hat{x}_a \hat{x}_a} = \mathrm{D}(\hat{x}_a) = (\bar{A}^{T} Q_{yy}^{-1} \bar{A})^{-1}$. As this BLUE is based on a model with more parameters, its precision will never be better than that of $\hat{x}_0$, i.e., $\mathrm{D}(\hat{x}_0) \leq \mathrm{D}(\hat{x}_a)$.
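As an illustration of (2) and (4), the following sketch computes the two BLUEs with generic dense linear algebra. It is not taken from the paper; the function names, the use of numpy, and the explicit matrix inversions are illustrative choices only.
```python
# Illustrative sketch (not from the paper): the BLUEs x_hat_0 of (2) and x_hat_a of (4).
import numpy as np

def blue_H0(A, Qyy, y):
    """x_hat_0 = (A^T Qyy^-1 A)^-1 A^T Qyy^-1 y and its variance matrix Q_x0x0."""
    W = np.linalg.inv(Qyy)                      # weight matrix Qyy^-1
    Qx0 = np.linalg.inv(A.T @ W @ A)            # Q_x0x0 = (A^T Qyy^-1 A)^-1
    return Qx0 @ A.T @ W @ y, Qx0               # x_hat_0 = A^+ y

def blue_Ha(A, C, Qyy, y):
    """BLUE of x under Ha: first project out range(C), i.e. work with A_bar = P_C^perp A."""
    W = np.linalg.inv(Qyy)
    PCperp = np.eye(len(y)) - C @ np.linalg.inv(C.T @ W @ C) @ C.T @ W
    Abar = PCperp @ A                           # A_bar = P_C^perp A
    Qxa = np.linalg.inv(Abar.T @ W @ Abar)      # Q_xaxa
    return Qxa @ Abar.T @ W @ y, Qxa            # x_hat_a = A_bar^+ y
```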

2.2. Testing

The estimation of x would not pose a problem if we knew which of the two models is true. In the case of $H_0$, we would use $\hat{x}_0$ to estimate x, but if we knew that $H_a$ is true, we would use $\hat{x}_a$ instead. Using the estimator $\hat{x}_0$ when knowing that $H_a$ is true should be avoided, as this would result in a biased solution, since
$\mathrm{E}(\hat{x}_0 \mid H_a) = x + A^{+} C b.$  (5)
The problem in practice, of course, is that we do not know which of the models is true. Even if we have taken the utmost care in formulating a model that we believe to be true, misspecifications could still be present, thus invalidating the model. Methods of statistical testing have therefore been developed that allow us to decide with some confidence which of the models to work with. In the case of the above $H_0$ and $H_a$, it seems reasonable to decide in favour of $H_0$ if the BLUE of b can be considered 'insignificant'. With the BLUE of b under $H_a$ given as
$\hat{b} = \bar{C}^{+} y,$  (6)
with LS-inverse $\bar{C}^{+} = (\bar{C}^{T} Q_{yy}^{-1} \bar{C})^{-1} \bar{C}^{T} Q_{yy}^{-1}$, $\bar{C} = P_{A}^{\perp} C$ and variance matrix $Q_{\hat{b}\hat{b}} = (\bar{C}^{T} Q_{yy}^{-1} \bar{C})^{-1}$, the decision in favour of $H_0$ is therefore taken when $\hat{b}$ lies in the acceptance region $\mathcal{A}$,
$\hat{b} \in \mathcal{A} = \{ b \in \mathbb{R}^{q} \;\mid\; \|b\|^{2}_{Q_{\hat{b}\hat{b}}} \leq \chi^{2}_{\alpha}(q, 0) \},$  (7)
with $\|\cdot\|^{2}_{Q_{\hat{b}\hat{b}}} = (\cdot)^{T} Q_{\hat{b}\hat{b}}^{-1} (\cdot)$ and $\chi^{2}_{\alpha}(q, 0)$ the critical value computed from the central chi-square distribution with q degrees of freedom and chosen level of significance $\alpha$. This test is known to be a uniformly most powerful invariant (UMPI) test for testing $H_0$ against $H_a$ [7,20]. Note that test (7) should not be confused with the sum-of-squared-residuals test; the two are only the same in the special case that the least-squares residual under the alternative becomes identically zero (cf. (8)). To give further meaning to the statistic $\|\hat{b}\|^{2}_{Q_{\hat{b}\hat{b}}}$, we note that it can be written in different ways ([7], p. 79), two of which are
$\|\hat{b}\|^{2}_{Q_{\hat{b}\hat{b}}} = \|\hat{y}_0 - \hat{y}_a\|^{2}_{Q_{yy}} = \|\hat{e}_0\|^{2}_{Q_{yy}} - \|\hat{e}_a\|^{2}_{Q_{yy}},$  (8)
with $\hat{y}_0 = A \hat{x}_0$, $\hat{y}_a = A \hat{x}_a + C \hat{b}$, $\hat{e}_0 = y - \hat{y}_0$, and $\hat{e}_a = y - \hat{y}_a$. The first equality of (8) states that the statistic is equal to the squared norm of the solution separation in the observation domain. Thus, if this solution separation is small enough, one has no reason to reject $H_0$. The second equality of (8) states that the statistic is also equal to the difference of the squared residual norms under $H_0$ and $H_a$. Thus, again, if this difference is small enough, the decision is that there is no reason to mistrust $H_0$. Although we will be using the first expression of (8) for the test statistic, linkage to the other two will be made when we discuss multiple hypothesis testing in Section 4.
If the outcome of testing is to reject $H_0$ in favour of $H_a$, then not $\hat{x}_0$ but $\hat{x}_a$ is provided as the estimator of x. The three estimators $\hat{x}_0$ (cf. (2)), $\hat{x}_a$ (cf. (4)) and $\hat{b}$ (cf. (6)) are related as
$\hat{x}_a = \hat{x}_0 - A^{+} C \hat{b}.$  (9)
Thus, if $H_0$ is rejected in favour of $H_a$, then $A^{+} C \hat{b}$ is the correction, which is aimed at removing the bias $A^{+} C b$ (cf. (5)) from $\hat{x}_0$. Whether or not this is actually achieved is discussed in the following sections.
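The decision rule (7) together with the adaptation step (9) can be summarized in a few lines of code. The sketch below is a minimal, generic implementation; the function name, the use of numpy/scipy and the default value of alpha are illustrative assumptions, not prescriptions from the paper.
```python
# Minimal sketch of the test (7) and the adaptation (9): accept H0 unless the
# quadratic form in b_hat exceeds the chi-square critical value.
import numpy as np
from scipy.stats import chi2

def test_and_estimate(A, C, Qyy, y, alpha=0.01):
    W = np.linalg.inv(Qyy)
    Qx0 = np.linalg.inv(A.T @ W @ A)
    Aplus = Qx0 @ A.T @ W                      # LS-inverse A^+
    x0 = Aplus @ y                             # x_hat_0, eq. (2)
    PAperp = np.eye(len(y)) - A @ Aplus
    Cbar = PAperp @ C                          # C_bar = P_A^perp C
    Qbb = np.linalg.inv(Cbar.T @ W @ Cbar)     # variance matrix of b_hat
    b_hat = Qbb @ Cbar.T @ W @ y               # BLUE of b under Ha, eq. (6)
    T = float(b_hat.T @ np.linalg.inv(Qbb) @ b_hat)
    if T <= chi2.ppf(1 - alpha, C.shape[1]):   # b_hat inside the acceptance region, eq. (7)
        return x0                              # keep the H0 solution
    return x0 - Aplus @ C @ b_hat              # adapt: x_hat_a of eq. (9)
```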

3. Estimation Bias Due to Testing

3.1. The Estimator Revisited

As mentioned above, estimation and testing are combined, so one cannot assign the properties of $\hat{x}_0$ or $\hat{x}_a$ to the actual estimator that is computed. That is, the actual estimator that is produced is neither $\hat{x}_0$ nor $\hat{x}_a$, but is in fact (see Figure 1)
$\bar{x} = \begin{cases} \hat{x}_0, & \text{if } \hat{b} \in \mathcal{A}, \\ \hat{x}_a, & \text{if } \hat{b} \notin \mathcal{A}. \end{cases}$  (10)
Hence, it is the quality of $\bar{x}$, rather than that of $\hat{x}_0$ or $\hat{x}_a$, that determines the quality of the produced results. Since the goal of testing is ideally to have the bias $A^{+} C b$ removed from $\hat{x}_0$ when $H_a$ is true (cf. (5)), it is relevant to know what the mean of the actual estimator $\bar{x}$ is. By making use of relation (9), the expectation of $\bar{x}$ can be determined as
$\mathrm{E}(\bar{x} \mid H_0) = x - A^{+} C \int_{\mathbb{R}^{q} \setminus \mathcal{A}} \beta\, p_{\hat{b}}(\beta \mid H_0)\, d\beta, \qquad \mathrm{E}(\bar{x} \mid H_a) = x + A^{+} C \int_{\mathcal{A}} \beta\, p_{\hat{b}}(\beta \mid H_a)\, d\beta,$  (11)
with $p_{\hat{b}}(\beta \mid H_0)$ and $p_{\hat{b}}(\beta \mid H_a)$ the probability density function (PDF) of $\hat{b}$ under $H_0$ and $H_a$, respectively.
The result (11) shows that the estimator $\bar{x}$ is, in general, biased. This is in contrast to $\hat{x}_0$ under $H_0$ and $\hat{x}_a$ under $H_a$. The cause of these biases is the nonlinearity involved in the mapping (10). Thus, although $\hat{x}_0$ and $\hat{x}_a$ are both individually linear functions of y, the actually produced estimator $\bar{x}$ is not. It is this nonlinearity that prevents the unbiasedness of $\hat{x}_0$ and $\hat{x}_a$, under $H_0$ and $H_a$ respectively, from being passed on to $\bar{x}$.
Although (11) indicates that $\bar{x}$ is generally biased under both $H_0$ and $H_a$, we have in our case $\int_{\mathbb{R}^{q} \setminus \mathcal{A}} \beta\, p_{\hat{b}}(\beta \mid H_0)\, d\beta = 0$, due to the symmetry with respect to the origin of both the acceptance region $\mathcal{A}$ (and hence of its complement) and the PDF $p_{\hat{b}}(\beta \mid H_0)$. Hence, in our case, the estimator $\bar{x}$ is fortunately always unbiased under $H_0$:
$\mathrm{E}(\bar{x} \mid H_0) = x.$  (12)
This is not true, however, for $\bar{x}$ under $H_a$. We have
$\mathrm{E}(\bar{x} \mid H_a) = x + b_{\bar{x}},$  (13)
with the bias given as
$b_{\bar{x}} = A^{+} C\, b_{\mathcal{A}}, \quad \text{with} \quad b_{\mathcal{A}} = \int_{\mathcal{A}} \beta\, p_{\hat{b}}(\beta \mid H_a)\, d\beta.$  (14)
This shows that the bias in $\bar{x}$ is driven by the vector $b_{\mathcal{A}}$ and its propagation into the parameter space. The vector $b_{\mathcal{A}}$ itself is governed by the acceptance region $\mathcal{A}$ and, through the PDF $p_{\hat{b}}(\beta \mid H_a)$, by the actual bias b and the precision with which it can be estimated, $Q_{\hat{b}\hat{b}}$. Generally, $p_{\hat{b}}(\beta \mid H_a)$ is not symmetric over the region $\mathcal{A}$.
To see the effect that testing has, one can compare the testing-induced bias (14) with the bias one would otherwise have when using $\hat{x}_0$ under $H_a$ (cf. (5)),
$b_{\hat{x}_0} = \mathrm{E}(\hat{x}_0 - x \mid H_a) = A^{+} C b.$  (15)
It follows from comparing (14) with (15), since $b = \mathrm{E}(\hat{b} \mid H_a) = \int_{\mathbb{R}^{q}} \beta\, p_{\hat{b}}(\beta \mid H_a)\, d\beta$, that through testing it is the component of this integral over the acceptance region $\mathcal{A}$ that is retained. We thus have $b_{\bar{x}} = b_{\hat{x}_0}$ if $\mathcal{A} = \mathbb{R}^{q}$, which corresponds to the case of always accepting $H_0$.
A summary overview of the means of the random vectors $\hat{x}_0$, $\hat{x}_a$ and $\bar{x}$ is given in Table 1. Note that over-parametrization, i.e., using model (3) rather than (1), delivers unbiased results: $\mathrm{E}(\hat{x}_a \mid H_0) = x$.
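The unconditional bias (14) can also be verified empirically. The following Monte-Carlo sketch simulates a small, made-up model under $H_a$ and compares the sample mean of $\bar{x}$ with x; all dimensions, the bias size and alpha are illustrative assumptions, not values from the paper.
```python
# Monte-Carlo sketch of the bias b_xbar of (14) for an assumed toy model (q = 1).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
m, n, alpha, N = 6, 2, 0.01, 50000
A = rng.standard_normal((m, n)); W = np.eye(m)           # Qyy = I_m
C = np.zeros((m, 1)); C[0, 0] = 1.0                      # bias in observation 1
b_true, x_true = np.array([3.0]), np.zeros(n)

Qx0 = np.linalg.inv(A.T @ W @ A); Aplus = Qx0 @ A.T @ W
Cbar = (np.eye(m) - A @ Aplus) @ C
Qbb = np.linalg.inv(Cbar.T @ W @ Cbar)
crit = chi2.ppf(1 - alpha, df=1)

xbar = np.zeros((N, n))
for r in range(N):
    y = A @ x_true + C @ b_true + rng.standard_normal(m)
    x0 = Aplus @ y
    b_hat = Qbb @ Cbar.T @ W @ y
    T = float(b_hat.T @ np.linalg.inv(Qbb) @ b_hat)
    xbar[r] = x0 if T <= crit else x0 - Aplus @ C @ b_hat   # eq. (10) combined with (9)
print("empirical bias:", xbar.mean(axis=0) - x_true, " no-testing bias:", Aplus @ C @ b_true)
```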

3.2. The One-Dimensional Case

As mentioned above, the testing-induced bias $b_{\bar{x}}$ is driven by $b_{\mathcal{A}}$ and its propagation into the parameter space. To describe its significance, we work with the dimensionless bias-to-noise ratio (BNR) $\|b_{\bar{x}}\|_{Q_{\hat{x}_0 \hat{x}_0}}$ and study its behaviour for the one-dimensional case. If $q = 1$, then matrix C becomes a vector, $C = c$, and b becomes a scalar. For this case, the BNR works out as
$\|b_{\bar{x}}\|_{Q_{\hat{x}_0 \hat{x}_0}} = \frac{|b_{\mathcal{A}}|}{\sigma_{\hat{b}}} \tan\theta,$  (16)
with $\theta$ the angle that the vector c makes with the range space of the orthogonal complement of A, i.e., $\tan\theta = \|P_A c\|_{Q_{yy}} / \|P_A^{\perp} c\|_{Q_{yy}}$ ([7], p. 111). Here, $P_A = A A^{+}$ and $P_A^{\perp} = I_m - A A^{+}$. In the decomposition (16), $|b_{\mathcal{A}}| / \sigma_{\hat{b}}$ describes the significance of $b_{\mathcal{A}}$, while $\tan\theta$ shows how it gets propagated into the parameter space (Figure 2).
There are two cases in which $b_{\mathcal{A}}$ will be small: when the PDF $p_{\hat{b}}(\beta \mid H_a)$ has only a small portion of its probability mass over $\mathcal{A}$, and when it differs only little from the PDF under $H_0$. To quantify this behaviour, we make use of the one-dimensional integral (cf. (14))
$b_{\mathcal{A}} = \frac{1}{\sqrt{2\pi}\,\sigma_{\hat{b}}} \int_{-\chi_{\alpha}(1,0)\,\sigma_{\hat{b}}}^{\chi_{\alpha}(1,0)\,\sigma_{\hat{b}}} \beta \exp\left\{ -\tfrac{1}{2} \left( \tfrac{\beta - b}{\sigma_{\hat{b}}} \right)^{2} \right\} d\beta,$  (17)
from which it can be worked out that, with $\chi_{\alpha}(1,0) = \sqrt{\chi^{2}_{\alpha}(1,0)}$,
$\frac{b_{\mathcal{A}}}{\sigma_{\hat{b}}} = F(\chi_{\alpha}(1,0)) - F(-\chi_{\alpha}(1,0)),$  (18)
in which $F(x) = \phi\left( \tfrac{b}{\sigma_{\hat{b}}} + x \right) + \tfrac{b}{\sigma_{\hat{b}}}\, \Phi\left( \tfrac{b}{\sigma_{\hat{b}}} + x \right)$, with $\phi(x) = \tfrac{1}{\sqrt{2\pi}} \exp\{-\tfrac{1}{2} x^{2}\}$ and $\Phi(x) = \int_{-\infty}^{x} \phi(v)\, dv$.
Figure 3 shows $b_{\mathcal{A}} / \sigma_{\hat{b}}$ as a function of $b / \sigma_{\hat{b}}$ for different values of $\alpha$ (here, and in the following, we consider $b \geq 0$). The straight line in the figure describes the bias one would have if no testing were performed (i.e., $\mathcal{A} = \mathbb{R}$). As $b_{\mathcal{A}} \leq b$ for every value of b, the figure clearly shows the benefit of testing: the bias that remains after testing is always smaller than the original bias. Note that this benefit, i.e., the difference between b and $b_{\mathcal{A}}$, only kicks in once the bias b has become large enough. The difference is small when b is small, and it gets larger for larger b, with $b_{\mathcal{A}}$ approaching zero in the limit. Also note that for smaller levels of significance $\alpha$, the difference between b and $b_{\mathcal{A}}$ stays small for a larger range of b-values. This is understandable, as a smaller $\alpha$ corresponds to a larger acceptance interval $\mathcal{A}$, as a consequence of which the outcome of testing would not differ from the no-testing scenario for a larger range of b-values.
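Expression (18) is straightforward to evaluate numerically. The short sketch below computes $b_{\mathcal{A}}/\sigma_{\hat{b}}$ for a few illustrative values of $b/\sigma_{\hat{b}}$ and alpha (the values themselves are arbitrary), which is essentially what is plotted in Figure 3.
```python
# Sketch: evaluate the remaining bias b_A / sigma of (18) as a function of b / sigma.
import numpy as np
from scipy.stats import norm, chi2

def bA_over_sigma(b_over_sigma, alpha):
    k = np.sqrt(chi2.ppf(1 - alpha, df=1))            # chi_alpha(1,0)
    F = lambda x: norm.pdf(b_over_sigma + x) + b_over_sigma * norm.cdf(b_over_sigma + x)
    return F(k) - F(-k)

for alpha in (0.1, 0.01, 0.001):                      # illustrative significance levels
    print(alpha, [round(bA_over_sigma(r, alpha), 3) for r in (0.0, 2.0, 4.0, 8.0)])
```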

3.3. The Conditional Mean

We have shown that the combination of estimation and testing always produces a biased estimator under $H_a$; that is, a fraction of the original bias b will always be passed on to the estimator $\bar{x}$ when $H_a$ is true. However, the mean considered so far is an unconditional mean, i.e., one that does not take the outcome of testing into account. We therefore now consider the mean of $\bar{x}$ under the condition of either incorrect acceptance or correct rejection of $H_0$, i.e., under the condition that $\hat{b} \in \mathcal{A}$ or $\hat{b} \notin \mathcal{A}$ while $H_a$ is true.
It follows from (10) and (15) and the independence of $\hat{b}$ and $\hat{x}_0$ that
$\mathrm{E}(\bar{x} \mid \hat{b} \in \mathcal{A}, H_a) = \mathrm{E}(\hat{x}_0 \mid H_a) = x + b_{\hat{x}_0}.$  (19)
Thus, in the case of wrongful acceptance, the bias of $\bar{x}$ is that of $\hat{x}_0$.
We now consider the case of correct rejection, i.e., $\hat{b} \notin \mathcal{A}$ while $H_a$ is true. In that case, we have for the mean
$\mathrm{E}(\bar{x} \mid \hat{b} \notin \mathcal{A}, H_a) = \mathrm{E}(\hat{x}_a \mid \hat{b} \notin \mathcal{A}, H_a) = x + b_{\bar{x} \mid \hat{b} \notin \mathcal{A}},$  (20)
with
$b_{\bar{x} \mid \hat{b} \notin \mathcal{A}} = A^{+} C \left[ b_{\mathcal{A}} - \frac{P(\hat{b} \in \mathcal{A} \mid H_a)}{P(\hat{b} \notin \mathcal{A} \mid H_a)} (b - b_{\mathcal{A}}) \right] = A^{+} C \left[ b - \frac{b - b_{\mathcal{A}}}{1 - P(\hat{b} \in \mathcal{A} \mid H_a)} \right] = A^{+} C\, b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}}.$  (21)
This result is obtained as follows. Taking the conditional expectation of (9) and noting that $\hat{x}_0$ and $\hat{b}$ are independent, we obtain $\mathrm{E}(\hat{x}_a \mid \hat{b} \notin \mathcal{A}, H_a) = \mathrm{E}(\hat{x}_0 \mid H_a) - A^{+} C\, \mathrm{E}(\hat{b} \mid \hat{b} \notin \mathcal{A}, H_a)$, from which the result follows by using (15). Note how the two expressions of (21) show how the bias compares to that of $\bar{x}$ (cf. (14)) and to that of $\hat{x}_0$ (cf. (15)). Also note that the general formula (21) allows one to evaluate the parameter effect $b_{\bar{x} \mid \hat{b} \notin \mathcal{A}}$ for any size and any type of bias. A summary overview of the conditional means of $\bar{x}$ is given in Table 2.
Figure 4 shows $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}} / \sigma_{\hat{b}}$ as a function of $b / \sigma_{\hat{b}}$ for different values of $\alpha$. Note, since $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}} = b - \mathrm{E}(\hat{b} \mid \hat{b} \notin \mathcal{A}, H_a) \leq 0$, that the testing-induced bias correction overcompensates for the bias b. This can be explained by the shape of the conditional PDF $p_{\hat{b} \mid \hat{b} \notin \mathcal{A}}(\beta \mid H_a)$. As the probability mass of the unconditional PDF over the origin-centred acceptance interval $\mathcal{A}$ is redistributed over its complement, the conditional PDF has more probability mass on the right side of b than on the left side of b, as a consequence of which its mean value will be larger than b. Furthermore, since the probability mass that gets redistributed over the complement of $\mathcal{A}$ increases for smaller $\alpha$, the overcompensation also gets larger for smaller $\alpha$.
The results shown in Figures 3 and 4 are linked by the probability of correct detection, i.e., the power of the test. It follows from (21) that
$b - b_{\mathcal{A}} = P(\hat{b} \notin \mathcal{A} \mid H_a)\, \left[ b - b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}} \right].$  (22)
Hence, particularly when the probability of correct detection is small, there will be large differences between $b_{\mathcal{A}}$ and $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}}$. Furthermore, since the probability of detection gets smaller for smaller $\alpha$, in particular for not too large b, the differences between $b_{\mathcal{A}}$ and $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}}$ also get larger for smaller $\alpha$.
Note that, whereas in the unconditional case the remaining bias $b_{\mathcal{A}}$ is smaller than b in absolute value (see Figure 3), this is not the case for the conditional bias $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}}$ (see Figure 4). In the conditional case of correct detection, the size of the bias remaining after testing can even be larger than the original bias. This may come as a surprise, but it is a direct consequence of the fact that, in this conditional case, the samples of $\hat{b}$ that come from the distribution under $H_a$ but lie in the acceptance region $\mathcal{A}$ are not considered. If, for one's application, the size of the bias $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}}$ is considered too large, the following remedy is available: one can either design a stronger model, thus giving a smaller $\sigma_{\hat{b}}$, or one can decide to work with a larger level of significance $\alpha$, thereby accepting more false rejections of $H_0$.
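For the one-dimensional case, the conditional bias of (21) can likewise be evaluated in closed form by combining (18) with the detection probability. The sketch below restates (18) inline so that it is self-contained; the chosen values of $b/\sigma_{\hat{b}}$ and alpha are illustrative only.
```python
# Sketch: the conditionally remaining bias of (21)/(22) in the one-dimensional case,
# i.e. b_{A | b_hat outside A} = b - (b - b_A) / (1 - P(b_hat in A | Ha)), cf. Figure 4.
import numpy as np
from scipy.stats import norm, chi2

def conditional_bias_over_sigma(r, alpha):
    """r = b / sigma_bhat; returns b_{A | reject} / sigma_bhat."""
    k = np.sqrt(chi2.ppf(1 - alpha, df=1))                       # chi_alpha(1,0)
    F = lambda x: norm.pdf(r + x) + r * norm.cdf(r + x)          # eq. (18)
    bA = F(k) - F(-k)                                            # b_A / sigma
    p_accept = norm.cdf(k - r) - norm.cdf(-k - r)                # P(b_hat in A | Ha)
    return r - (r - bA) / (1.0 - p_accept)

for r in (0.5, 2.0, 4.0):
    print(r, round(conditional_bias_over_sigma(r, alpha=0.01), 3))   # typically <= 0
```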

4. Multiple Alternative Hypotheses

4.1. Test Procedure

Up to now, we have been working with only one single alternative hypothesis $H_a$. In actual practice, however, one typically works with many more such hypotheses, namely with one for every misspecification that is believed to have sufficient potential of occurrence. Hence, one will then have a set of, say, m alternative hypotheses,
$H_{a_i}: \; \mathrm{E}(y) = A x + C_i b_i, \quad \mathrm{D}(y) = Q_{yy},$  (23)
in which each $C_i b_i$, $i = 1, \ldots, m$, is assumed to take care of one of the potential misspecifications. Also in this case, biases will remain in the computed results of the combined estimation and testing process [21]. We will show this by means of one of the most commonly used multiple hypothesis testing problems, namely the screening of observations for possible outliers. As we consider one outlier at a time, matrix $C_i = c_i$ becomes the canonical unit vector having 1 as its ith entry, with $b_i$ the scalar outlier and i indicating the potentially outlier-affected observation. In total, m tests are carried out. The applied decision rule is then to accept the null hypothesis, unless
$\max_{i \in \{1, \ldots, m\}} \frac{|\hat{b}_i|}{k_i\, \sigma_{\hat{b}_i}} > 1, \quad \text{with } k_i = \chi_{\alpha}(1, 0),$  (24)
in which case the corresponding hypothesis is selected. Once a hypothesis, say $H_{a_j}$, is selected, the parameter vector is estimated as
$\hat{x}_{a_j} = \bar{A}_{(j)}^{+} y = \hat{x}_0 - A^{+} c_j \hat{b}_j,$  (25)
in which $\bar{A}_{(j)} = P_{c_j}^{\perp} A$. Note that in the case of uncorrelated observations, i.e., $Q_{yy} = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_m^2)$, the adapted design matrix $\bar{A}_{(j)} = P_{c_j}^{\perp} A$ is the original design matrix with its jth row replaced by zeros. Hence, $\hat{x}_{a_j}$ is then the estimator with the jth observable excluded.
Note that in (24), no preference is given to the contributions of any of the m individual statistics $|\hat{b}_i| / \sigma_{\hat{b}_i}$, i.e., $k_i$ is the same for all i. Although this is common usage, which we will follow in this contribution, it is not necessary, as one could also weigh the contributions of the individual statistics according to their importance for the computed results, i.e., by testing more stringently for biases $c_i b_i$ that have a larger impact on the parameters of interest.
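A minimal sketch of the screening rule (24) and the adaptation (25) is given below; the loop computes each $\hat{b}_i$ by taking $C = c_i$ in (6). Function and variable names, as well as the default alpha, are illustrative choices, not part of the paper.
```python
# Sketch: outlier screening (24) with exclusion/adaptation (25).
import numpy as np
from scipy.stats import chi2

def raim_exclusion(A, Qyy, y, alpha=0.01):
    m, n = A.shape
    W = np.linalg.inv(Qyy)
    Qx0 = np.linalg.inv(A.T @ W @ A)
    Aplus = Qx0 @ A.T @ W                        # A^+
    x0 = Aplus @ y                               # x_hat_0
    PAperp = np.eye(m) - A @ Aplus
    k = np.sqrt(chi2.ppf(1 - alpha, df=1))       # k_i = chi_alpha(1,0) of (24)
    ratios, b_hats = np.empty(m), np.empty(m)
    for i in range(m):
        ci = np.zeros(m); ci[i] = 1.0            # canonical unit vector c_i
        cbar = PAperp @ ci                       # c_bar_i = P_A^perp c_i
        sigma2_bi = 1.0 / float(cbar @ W @ cbar) # variance of b_hat_i
        b_hats[i] = sigma2_bi * float(cbar @ W @ y)
        ratios[i] = abs(b_hats[i]) / np.sqrt(sigma2_bi)
    j = int(np.argmax(ratios))
    if ratios[j] <= k:                           # accept H0 (cf. (24))
        return x0
    return x0 - Aplus[:, j] * b_hats[j]          # adapt via (25): exclude observation j
```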

4.2. Alternative Statistics

With reference to our earlier alternative expressions of the test statistic (cf. (8)), we note that instead of maximizing $|\hat{b}_i| / \sigma_{\hat{b}_i}$ to identify the alternative hypothesis, one may also do so by finding the minimizing $\|\hat{e}_i\|^{2}_{Q_{yy}}$ [2] and then use the difference $\|\hat{e}_0\|^{2}_{Q_{yy}} - \|\hat{e}_i\|^{2}_{Q_{yy}}$ as a test statistic.
Similarly, we noted (cf. (8)) that one may use the solution separation statistic in the observation domain, $\|\hat{y}_0 - \hat{y}_i\|^{2}_{Q_{yy}} = \|P_A^{\perp} C_i \hat{b}_i\|^{2}_{Q_{yy}}$, instead of $\|\hat{b}\|^{2}_{Q_{\hat{b}\hat{b}}}$. By using the Pythagorean rule (see Figure 5), we therefore have, in the scalar case when $C_i = c_i$, for the corresponding solution separation statistic in the parameter domain the relation
$\|\hat{x}_0 - \hat{x}_i\|^{2}_{Q_{\hat{x}_0 \hat{x}_0}} = \|\hat{y}_0 - \hat{y}_i\|^{2}_{Q_{yy}}\, \tan^{2}\theta_i,$  (26)
with $\tan^{2}\theta_i = \|P_A c_i\|^{2}_{Q_{yy}} / \|P_A^{\perp} c_i\|^{2}_{Q_{yy}}$. Thus, if one would like to perform the test procedure (24) using (26), then the maximum of $\|\hat{x}_0 - \hat{x}_i\|_{Q_{\hat{x}_0 \hat{x}_0}} / (k_i |\tan\theta_i|)$ is sought. Likewise, if $k_i$ in (24) were set as $k_i = k / |\tan\theta_i|$, with k chosen such that the overall false-alarm rate remains the same, one would be using $\max_{i \in \{1, \ldots, m\}} \|\hat{x}_0 - \hat{x}_i\|_{Q_{\hat{x}_0 \hat{x}_0}} > k$ as the decision rule. In this case, one would thus be testing less stringently for model biases $c_i b_i$ that have a small influence on the parameter solution, i.e., for which the $\theta_i$ are small.
Finally, note that in the scalar case, when $C_i = c_i$, one may also use any linear function of the parameter solution separation as a test statistic. As it follows from (25) that $f^{T}(\hat{x}_0 - \hat{x}_i) = (f^{T} A^{+} c_i)\, \hat{b}_i$, we have
$\frac{|\hat{b}_i|}{\sigma_{\hat{b}_i}} = \frac{|f^{T}(\hat{x}_0 - \hat{x}_i)|}{\sigma_{f^{T}(\hat{x}_0 - \hat{x}_i)}},$  (27)
for every nonzero $f \in \mathbb{R}^{n}$. Hence, the choice of f is in this case of no consequence for the outcome of testing.
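The invariance property (27) can be checked numerically: for a randomly chosen model and a random f, the standardized statistic of $f^{T}(\hat{x}_0 - \hat{x}_i)$ coincides with $|\hat{b}_i| / \sigma_{\hat{b}_i}$. The sketch below is only such a numerical check, with arbitrary dimensions and data.
```python
# Numerical check (illustrative) of (27).
import numpy as np

rng = np.random.default_rng(2)
m, n, i = 7, 3, 4
A = rng.standard_normal((m, n))
W = np.eye(m)                                    # Qyy = I_m
y = rng.standard_normal(m)
Qx0 = np.linalg.inv(A.T @ W @ A)
Aplus = Qx0 @ A.T @ W                            # A^+
ci = np.zeros(m); ci[i] = 1.0
cbar = (np.eye(m) - A @ Aplus) @ ci              # P_A^perp c_i
sig2_b = 1.0 / (cbar @ W @ cbar)                 # variance of b_hat_i
b_hat = sig2_b * (cbar @ W @ y)                  # b_hat_i
f = rng.standard_normal(n)
sep = (Aplus @ ci) * b_hat                       # x_hat_0 - x_hat_i = A^+ c_i b_hat_i, cf. (25)
sigma_fsep = np.sqrt((f @ (Aplus @ ci)) ** 2 * sig2_b)          # std of f^T(x_hat_0 - x_hat_i)
print(abs(b_hat) / np.sqrt(sig2_b), abs(f @ sep) / sigma_fsep)  # the two ratios coincide, cf. (27)
```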

4.3. Biases

We will now examine how the combined estimation and multiple testing affects the mean of the computed parameters. As we now have with (23) more than one alternative hypothesis, the correct detection of a mismodeled $H_0$ is no longer the same as the correct identification of an alternative hypothesis. For correct detection (CD), assuming $H_{a_1}$ to be true, the occurrence $|\hat{b}_j| / \sigma_{\hat{b}_j} > \chi_{\alpha}(1, 0)$, with $j = \arg\max_i (|\hat{b}_i| / \sigma_{\hat{b}_i})$, needs to happen for some $j \in \{1, \ldots, m\}$, while for correct identification (CI), the occurrence needs to happen for $j = 1$ only. We therefore now consider the following three biases under $H_{a_1}$: the unconditional bias $b_{\bar{x}} = \mathrm{E}(\bar{x} - x \mid H_{a_1})$, the bias conditioned on correct detection, $b_{\bar{x} \mid \mathrm{CD}} = \mathrm{E}(\bar{x} - x \mid \mathrm{CD}, H_{a_1})$, and the bias conditioned on correct identification, $b_{\bar{x} \mid \mathrm{CI}} = \mathrm{E}(\bar{x} - x \mid \mathrm{CI}, H_{a_1})$. These three biases will first be illustrated by means of a simple example.

Example: Averaging

In this example, the data are generated from a model of the form
$H_{a_1}: \; \mathrm{E}(y) = A x + c_1 b, \quad \mathrm{D}(y) = I_m,$  (28)
in which
$A = [1, \ldots, 1]^{T} \quad \text{and} \quad c_1 = [1, 0, \ldots, 0]^{T}.$  (29)
Thus, the data are generated such that the first observation is corrupted with the only outlier b. Since $A^{+} = \frac{1}{m} [1, \ldots, 1]$, the LS-estimators of x under $H_0$ and $H_{a_i}$ are then
$\hat{x}_0 = \frac{1}{m} \sum_{j=1}^{m} y_j \quad \text{and} \quad \hat{x}_{a_i} = \hat{x}_0 - \frac{1}{m} \hat{b}_i,$  (30)
with $\hat{b}_i = \frac{m}{m-1} (y_i - \hat{x}_0)$. The data are generated for different values of b, and to each such generated data set the above-described testing procedure (cf. (24)) is applied. Figure 6 shows the three types of biases, $b_{\bar{x}}$, $b_{\bar{x} \mid \mathrm{CD}}$ and $b_{\bar{x} \mid \mathrm{CI}}$, that remain after testing. Note that $b_{\bar{x}}$ and $b_{\bar{x} \mid \mathrm{CI}}$ behave similarly to their one-dimensional counterparts (see Figure 3 and Figure 4). The behaviour of $b_{\bar{x} \mid \mathrm{CD}}$, however, is different: in contrast to $b_{\bar{x} \mid \mathrm{CI}}$, it still follows the 'no-testing' bias $b_{\hat{x}_0} = \frac{1}{m} b$ for small b. This difference in behaviour between $b_{\bar{x} \mid \mathrm{CD}}$ and $b_{\bar{x} \mid \mathrm{CI}}$ is due to the multivariate nature of the testing, and it is driven by two multi-dimensional effects. First, since the multiple testing of (24) uses the same critical value as the single test of Section 3, the false alarm probability of the current multiple testing problem is, due to the increase in dimensions, in fact larger than $\alpha$. The probability mass over the acceptance region that needs to be redistributed to account for the correct-detection conditioning is therefore smaller than $1 - \alpha$. Secondly, since in the multivariate case correct detection admits incorrect identifications, such outcomes keep the conditional mean from becoming larger than b, in particular when b is still small. This is why the steep decrease that is present in $b_{\bar{x} \mid \mathrm{CI}}$ is absent in $b_{\bar{x} \mid \mathrm{CD}}$.
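The averaging example can be reproduced with a short simulation. The sketch below generates data under (28)-(29), applies rule (24), and estimates the three biases of Figure 6 empirically; the number of samples and the alpha and b values are illustrative assumptions (with x = 0, the sample means of $\bar{x}$ are the biases directly).
```python
# Sketch: simulate model (28)-(29), apply the screening of (24), and estimate
# the unconditional, CD- and CI-conditioned biases of Figure 6.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
m, alpha, N = 10, 0.01, 50000
k = np.sqrt(chi2.ppf(1 - alpha, df=1))
sigma_b = np.sqrt(m / (m - 1.0))                   # std of b_hat_i for Qyy = I_m

for b in (1.0, 3.0, 6.0):
    xbar = np.zeros(N); cd = np.zeros(N, bool); ci = np.zeros(N, bool)
    for r in range(N):
        y = rng.standard_normal(m); y[0] += b      # outlier in the first observation
        x0 = y.mean()                              # x_hat_0 of (30)
        b_hat = m / (m - 1.0) * (y - x0)           # all m candidate outlier estimates
        j = int(np.argmax(np.abs(b_hat)))
        if abs(b_hat[j]) / sigma_b > k:            # detection, cf. (24)
            cd[r] = True; ci[r] = (j == 0)         # identification correct iff j == 0
            xbar[r] = x0 - b_hat[j] / m            # adapted estimator of (30)
        else:
            xbar[r] = x0
    # x = 0, so these sample means are b_xbar, b_xbar|CD and b_xbar|CI
    print(b, xbar.mean(), xbar[cd].mean(), xbar[ci].mean())
```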

5. Testing Bias in RAIM

As the biases revealed and discussed above are present in any inference procedure that combines estimation with testing, they are also present in, for instance, the navigation solution of exclusion-based RAIM. This will be demonstrated by applying the theory of the previous sections to the case of single-receiver pseudorange GNSS positioning. We consider GPS+Galileo with a receiver-satellite geometry as depicted in Figure 7. For a single system, GPS or Galileo, the $m \times 4$ design matrix A and the pseudorange variance matrix $Q_{yy}$ are structured as
$A = \begin{bmatrix} 1 & u_1^{T} \\ \vdots & \vdots \\ 1 & u_m^{T} \end{bmatrix}, \quad Q_{yy} = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_m^2),$  (31)
with $u_i$ the ith receiver-satellite unit direction vector. The unknown parameter vector $x = (dt, dE, dN, dU)^{T}$ consists of the receiver clock offset and the increments to the three position coordinates. Thus, here the ECEF (Earth-Centred, Earth-Fixed) coordinates have already been transformed into local ENU (East-North-Up) coordinates, as these are the coordinates that a practical user in his/her local/national datum will use. Note that if the position coordinates were known, the GNSS design matrix (31) would reduce to that of the example in the previous section. The stochastic model is based on ionosphere-free observations (from dual-frequency L1 and L5), with the entries of the diagonal variance matrix in (31) constructed according to Chapter 7 of [21]. For the design matrix of the dual-system GPS+Galileo, an additional column is added to the design matrix above to take care of the inter-system bias, or system-specific receiver clock offset.
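For concreteness, a sketch of how the dual-system design matrix and variance matrix of (31) can be assembled is given below. The unit vectors, standard deviations and the placement of the inter-system-bias column are placeholders, not the actual values used for Figures 8 and 9.
```python
# Sketch: assemble the single-epoch pseudorange design matrix of (31) for GPS+Galileo,
# with an extra inter-system-bias column appended.
import numpy as np

def design_matrix(unit_vectors_enu, system_ids, sigmas):
    """unit_vectors_enu: (m, 3) receiver-satellite unit vectors (ENU components);
    system_ids: 0 for GPS, 1 for Galileo; sigmas: ionosphere-free standard deviations."""
    U = np.asarray(unit_vectors_enu, dtype=float)
    m = U.shape[0]
    clock = np.ones((m, 1))                                            # common receiver clock offset dt
    isb = (np.asarray(system_ids) == 1).astype(float).reshape(m, 1)    # inter-system bias column
    A = np.hstack([clock, U, isb])                                     # x = (dt, dE, dN, dU, ISB)
    Qyy = np.diag(np.asarray(sigmas, dtype=float) ** 2)                # diagonal variance matrix of (31)
    return A, Qyy
```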
First, we consider the estimation-testing bias in the up-component of the GPS+Galileo model. Figure 8 shows this effect as the result of a pseudorange bias in either PRN 1 (top row) or PRN 64 (bottom row). Note that the behaviour of the three types of biases, $b_{\bar{x}}$, $b_{\bar{x} \mid \mathrm{CD}}$ and $b_{\bar{x} \mid \mathrm{CI}}$, is similar to that shown for the previous models. Also note that the impact of the pseudorange outlier in PRN 64 is much smaller than that of the more poorly controlled pseudorange outlier in PRN 1 (see Figure 7). To further illustrate the importance of redundancy, Figure 9 shows the horizontal position scatter plots for the GPS-only (top) and GPS+Galileo (bottom) case. The GNSS data were simulated according to (31), but with a single pseudorange outlier included, in PRN 5 for the GPS-only case and in PRN 64 for the GPS+Galileo case (see Figure 7). The same testing procedure as before was applied. The four panels show, for each case, the scatter plots for missed detection (MD), correct detection (CD), correct identification (CI) and the unconditional (UN) case. The scatters of MD and CD together form that of UN, and the scatter of CI is a subset of that of CD. The effect of the increased strength of the GNSS model when using GPS+Galileo instead of GPS-only is clearly visible. In the GPS-only case, as opposed to the GPS+Galileo case, the CD-scatter still contains some quite incorrect identifications, due to significant correlations between some of the test statistics. Thus, although the null hypothesis is correctly rejected, these correlations can still result in incorrect satellite exclusion. Furthermore, even with correct identification (CI), the GPS-only position is in this case off, on average, by more than half a meter. This example thus illustrates that the a posteriori exclusion of a biased satellite range from the position solution will not remove the bias in the position solution completely, and it therefore needs to be accounted for in the calculation of the probability of hazardous misleading information.

6. Conclusions

In this contribution, we have studied the bias propagation in exclusion-based RAIM. Although statistical testing is intended to remove biases from the data through exclusion, we have shown that biases will always remain under the alternative hypothesis, even when that hypothesis is correctly identified. The use of estimators that are unbiased under their respective hypotheses is therefore no guarantee that the finally computed solution is unbiased as well. We have shown that the presence of such biases in the final solution can be explained by the nonlinearity created by the combination of estimation and testing. The size of these remaining biases depends on the strength of the underlying model, the chosen false alarm rate, and the size and type of the actual input bias. The remaining bias becomes smaller with increasing model strength and larger levels of significance. Despite the presence of these biases, the benefit of testing was demonstrated by showing that the remaining bias is always smaller than the bias one would otherwise have in the absence of testing. However, it was also shown that this is not the case when conditioned solutions are considered. The remaining biases in correctly identified solutions, for instance, can be larger than the original input bias. They are small, however, when the input biases are either small or sufficiently large. As the unbiasedness property of the applied estimators is undone when they are used in combination with testing, one may question whether other estimators exist, or can be constructed, that do a better job in dealing with the discussed bias propagation. In this vein, one may think of relaxing the constraint of using unbiased estimators, so that a larger class of estimators becomes available to increase the flexibility for further improving the probability of hazardous misleading information. This is an open question for future research, and one that is somewhat similar in spirit to the search for 'integrity optimized estimators'.

Acknowledgments

The first author is recipient of an Australian Research Council (ARC) Federation Fellowship (project number FF0883188).

Author Contributions

P.T., D.I. and C.T. conceived, developed and designed the material of the paper. P.T. wrote the initial version which was further completed by C.T. and D.I. P.T. derived the formulas and D.I. computed and analysed the simulations.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lee, Y.C. Analysis of range and position comparison methods as a means to provide GPS integrity in the user receiver. In Proceedings of the Annual Meeting of the Institute of Navigation, Seattle, WA, USA, 24–26 June 1986; pp. 1–4.
2. Parkinson, B.W.; Axelrad, P. Autonomous GPS integrity monitoring using the pseudorange residual. NAVIGATION 1988, 35, 255–274.
3. Sturza, M. Navigation system integrity monitoring using redundant measurements. NAVIGATION 1988, 35, 483–501.
4. Walter, T.; Enge, P. Weighted RAIM for precision approach. In Proceedings of the ION GPS-95, Palm Springs, CA, USA, 12–15 September 1995; pp. 1995–2004.
5. Teunissen, P.J.G. An integrity and quality control procedure for use in multi sensor integration. In Proceedings of the ION GPS-1990, Colorado Springs, CO, USA, 19–21 September 1990; Republished in ION Red Book Series, Volume 7, pp. 513–522.
6. Powe, M.; Owen, J. A flexible RAIM algorithm. In Proceedings of the ION GPS 1997, Kansas City, MO, USA, 16–19 September 1997; pp. 439–449.
7. Teunissen, P.J.G. Testing Theory: An Introduction; Series on Mathematical Geodesy and Positioning; Delft University Press: Delft, The Netherlands, 2000.
8. Joerger, M.; Chan, F.C.; Pervan, B. Solution Separation Versus Residual-Based RAIM. NAVIGATION 2014, 61, 273–291.
9. Joerger, M.; Pervan, B. Fault detection and exclusion using solution separation and chi-squared ARAIM. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 726–742.
10. Young, R.S.Y.; McGraw, G.A. Fault Detection and Exclusion Using Normalized Solution Separation and Residual Monitoring Methods. NAVIGATION 2003, 50, 151–169.
11. Pervan, B.; Pullen, S.; Christie, J. A Multiple Hypothesis Approach to Satellite Navigation Integrity. NAVIGATION 1998, 45, 61–71.
12. Gokalp, E.; Gungor, O.; Boz, Y. Evaluation of Different Outlier Detection Methods for GPS Networks. Sensors 2008, 8, 7344–7358.
13. Li, F.; Zhang, J.; Li, R.; Cao, X.; Wang, J. Vision-Aided RAIM: A New Method for GPS Integrity Monitoring in Approach and Landing Phase. Sensors 2015, 15, 22854–22873.
14. Qian, C.; Liu, H.; Zhang, M.; Shu, B.; Xu, L.; Zhang, R. A Geometry-Based Cycle Slip Detection and Repair Method with Time-Differenced Carrier Phase (TDCP) for a Single Frequency Global Position System (GPS) + BeiDou Navigation Satellite System (BDS) Receiver. Sensors 2016, 16, 2064.
15. Borio, D.; Gioia, C. Galileo: The Added Value for Integrity in Harsh Environments. Sensors 2016, 16, 111.
16. Zair, S.; Hégarat-Mascle, S.; Seignez, E. Outlier Detection in GNSS Pseudo-Range/Doppler Measurements for Robust Localization. Sensors 2016, 16, 580.
17. Yang, L.; Li, B.; Shen, Y.; Rizos, C. Extension of Internal Reliability Analysis Regarding Separability Analysis. J. Surv. Eng. 2017, 143.
18. Zangeneh-Nejad, F.; Amiri-Simkooei, A.; Sharifi, M.; Asgari, J. Cycle slip detection and repair of undifferenced single-frequency GPS carrier phase observations. GPS Solut. 2017, 21.
19. Hsu, L.; Gu, Y.; Kamijo, S. NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models. Sensors 2015, 15, 17329–17349.
20. Arnold, S. The Theory of Linear Models and Multivariate Analysis; Wiley: New York, NY, USA, 1981; Volume 2.
21. Imparato, D. GNSS Based Receiver Autonomous Integrity Monitoring for Aircraft Navigation; TU Delft: Delft, The Netherlands, 2016.
Figure 1. (left) $H_0$ known: $\hat{x}_0$ is BLUE of x; (middle) $H_a$ known: $\hat{x}_a$ is BLUE of x; (right) $H_0$ and $H_a$ selected under chance: $\bar{x}$ is not a BLUE of x (cf. (10), (12) and (13)).
Figure 2. Orthogonal decomposition of $c\, b_{\mathcal{A}}$ into the range space of A and its orthogonal complement: $c\, b_{\mathcal{A}} = (P_A c + P_A^{\perp} c)\, b_{\mathcal{A}}$ [7].
Figure 3. Bias $b_{\mathcal{A}} / \sigma_{\hat{b}}$ as a function of $b / \sigma_{\hat{b}}$ for different values of $\alpha$.
Figure 4. Bias, for correct rejection of $H_0$, $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}} / \sigma_{\hat{b}}$ as a function of $b / \sigma_{\hat{b}}$ for different values of $\alpha$, where $b_{\mathcal{A} \mid \hat{b} \notin \mathcal{A}} = b - (b - b_{\mathcal{A}}) / (1 - P(\hat{b} \in \mathcal{A} \mid H_a))$.
Figure 5. Orthogonal decomposition of the solution separation statistics: $\|c_i \hat{b}_i\|^{2}_{Q_{yy}} = \|\hat{y}_0 - \hat{y}_i\|^{2}_{Q_{yy}} + \|\hat{x}_0 - \hat{x}_i\|^{2}_{Q_{\hat{x}_0 \hat{x}_0}}$.
Figure 6. Three types of biases with m = 10 for model (29). (top left): unconditional bias $b_{\bar{x}} = \mathrm{E}(\bar{x} - x \mid H_{a_1})$; (top right): correct detection bias $b_{\bar{x} \mid \mathrm{CD}} = \mathrm{E}(\bar{x} - x \mid \mathrm{CD}, H_{a_1})$; (bottom): correct identification bias $b_{\bar{x} \mid \mathrm{CI}} = \mathrm{E}(\bar{x} - x \mid \mathrm{CI}, H_{a_1})$ ($\mathcal{C}$ is the correct identification area for $\hat{b}$). The black straight line corresponds with the 'no-testing' bias $b_{\hat{x}_0} = \frac{1}{m} b$.
Figure 7. Skyplot for Delft, The Netherlands, showing the satellite IDs of nominal GPS (blue) and planned Galileo (red) for 3 January 2001 at 2:00 a.m.
Figure 8. Estimation-testing bias in the up-component of the GPS+Galileo model (31) due to a pseudorange bias in GPS PRN 1 (case (a), $\sigma_{\hat{b}} = 1.36$ m) or in Galileo PRN 64 (case (b), $\sigma_{\hat{b}} = 1.45$ m). Clockwise: unconditional bias $b_{\bar{x}}$, correct detection bias $b_{\bar{x} \mid \mathrm{CD}}$, and correct identification bias $b_{\bar{x} \mid \mathrm{CI}}$, with $\mathcal{C}$ the correct identification area for $\hat{b}$. The black straight lines correspond with the 'no-testing' bias $b_{\hat{x}_0} = A^{+} c\, b$.
Figure 9. Horizontal position biases, $b_{\bar{x} \mid \mathrm{MD}}$, $b_{\bar{x} \mid \mathrm{CD}}$, $b_{\bar{x} \mid \mathrm{CI}}$ and $b_{\bar{x}}$, for a GPS-only pseudorange bias of size $b_5 = 2.5\,\sigma_{\hat{b}_5}$ in GPS PRN 5 (a) and a GPS+Galileo pseudorange bias of size $b_{64} = 2.5\,\sigma_{\hat{b}_{64}}$ in Galileo PRN 64 (b).
Table 1. The means of the random parameter vectors $\hat{x}_0$, $\hat{x}_a$ and $\bar{x}$ under $H_0$ and $H_a$, respectively.
                 under $H_0$                                   under $H_a$
$\hat{x}_0$:     $\mathrm{E}(\hat{x}_0 \mid H_0) = x$          $\mathrm{E}(\hat{x}_0 \mid H_a) = x + b_{\hat{x}_0}$
$\hat{x}_a$:     $\mathrm{E}(\hat{x}_a \mid H_0) = x$          $\mathrm{E}(\hat{x}_a \mid H_a) = x$
$\bar{x}$:       $\mathrm{E}(\bar{x} \mid H_0) = x$            $\mathrm{E}(\bar{x} \mid H_a) = x + b_{\bar{x}}$
Table 2. The conditional means of $\bar{x}$ under $H_0$ and $H_a$.
                                            under $H_0$                                                      under $H_a$
$\bar{x} \mid \hat{b} \in \mathcal{A}$:     $\mathrm{E}(\bar{x} \mid \hat{b} \in \mathcal{A}, H_0) = x$      $\mathrm{E}(\bar{x} \mid \hat{b} \in \mathcal{A}, H_a) = x + b_{\hat{x}_0}$
$\bar{x} \mid \hat{b} \notin \mathcal{A}$:  $\mathrm{E}(\bar{x} \mid \hat{b} \notin \mathcal{A}, H_0) = x$   $\mathrm{E}(\bar{x} \mid \hat{b} \notin \mathcal{A}, H_a) = x + b_{\bar{x} \mid \hat{b} \notin \mathcal{A}}$
