Stochastic Models for Geodesy and Geoinformation Science

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Engineering Mathematics".

Deadline for manuscript submissions: closed (31 May 2020) | Viewed by 28139

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Prof. Dr. Frank Neitzel
Guest Editor
Institute of Geodesy and Geoinformation Science, Technische Universität Berlin, 10623 Berlin, Germany
Interests: adjustment theory; terrestrial laser scanning; surface approximation; measurement- and model-based structural analysis

Special Issue Information

Dear Colleagues,

In geodesy and geoinformation science, as well as in many other technical disciplines, it is often not possible to directly determine the desired target quantities, for example, the 3D coordinates of an object. Therefore, the unknown parameters must be linked with measured values, for example, directions, angles, and distances, by a mathematical model. This model consists of two fundamental components: the functional model and the stochastic model. The functional model describes the geometrical–physical relationship between the measured values and the unknown parameters. This relationship is sufficiently well known for most applications.
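As a brief illustration of these two components (a generic textbook sketch, not part of the original call), the linearized Gauss–Markov model can be written as:

```latex
% Functional model: observations l, residuals v, design matrix A, estimated parameters \hat{x}
l + v = A\hat{x}
% Stochastic model: variance-covariance matrix of the observations
% (cofactor matrix Q_{ll}, variance factor \sigma_0^2)
\Sigma_{ll} = \sigma_0^2\, Q_{ll}
```

Here the functional model encodes the geometrical–physical relationship, while the stochastic model carries the measurement uncertainty that the contributions in this issue aim to describe more realistically.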

The definition of a stochastic model in the form of a variance–covariance matrix for the measurements that best corresponds to the real conditions is, however, a big challenge, as influences from the (multi-)sensor system, the signal path, and the object properties have to be considered. With the help of adjustment calculations and variance–covariance propagation, the unknown parameters and the cofactor matrices required for assessing the quality of the results in terms of precision and reliability can be determined. It should be noted critically that these calculations are almost always performed in linear or linearized models that are used instead of the original nonlinear problem, which can lead to bias effects.
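A minimal numerical sketch of this workflow, assuming a linearized model and a given variance–covariance matrix (not taken from any of the papers in this issue), could look as follows:

```python
# Minimal sketch: weighted least-squares adjustment in a linearized Gauss-Markov model
# with variance-covariance propagation to the parameters.
import numpy as np

def adjust(A, l, Sigma_ll):
    """A: (n, u) design matrix, l: (n,) reduced observations,
    Sigma_ll: (n, n) stochastic model (VCM of the observations)."""
    P = np.linalg.inv(Sigma_ll)            # weight matrix
    Q_xx = np.linalg.inv(A.T @ P @ A)      # cofactor matrix of the parameters
    x_hat = Q_xx @ A.T @ P @ l             # estimated parameters
    v = A @ x_hat - l                      # residuals
    n, u = A.shape
    s0_sq = (v.T @ P @ v) / (n - u)        # a-posteriori variance factor
    Sigma_xx = s0_sq * Q_xx                # VCM of the parameters (variance propagation)
    return x_hat, Sigma_xx
```

The returned parameter VCM is the basis for precision measures such as standard deviations and confidence ellipsoids.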

In this Special Issue, current developments in the field of stochastic modelling will be presented and illustrated by means of practical examples.

Prof. Dr. Frank Neitzel
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Measurement uncertainty
  • Variance–covariance matrix
  • Covariance functions
  • Variance–covariance propagation
  • Adjustment computations
  • Bias effects
  • Statistical tests
  • Model testing and optimization
  • Multi-sensor systems
  • Surface approximation

Published Papers (10 papers)


Research

19 pages, 302 KiB  
Article
Weighted Total Least Squares (WTLS) Solutions for Straight Line Fitting to 3D Point Data
by Georgios Malissiovas, Frank Neitzel, Sven Weisbrich and Svetozar Petrovic
Mathematics 2020, 8(9), 1450; https://doi.org/10.3390/math8091450 - 29 Aug 2020
Cited by 1 | Viewed by 2668
Abstract
In this contribution the fitting of a straight line to 3D point data is considered, with Cartesian coordinates x_i, y_i, z_i as observations subject to random errors. A direct solution for the case of equally weighted and uncorrelated coordinate components was already presented almost forty years ago. For more general weighting cases, iterative algorithms, e.g., by means of an iteratively linearized Gauss–Helmert (GH) model, have been proposed in the literature. In this investigation, a new direct solution for the case of pointwise weights is derived. In the terminology of total least squares (TLS), this solution is a direct weighted total least squares (WTLS) approach. For the most general weighting case, considering a full dispersion matrix of the observations that can even be singular to some extent, a new iterative solution based on the ordinary iteration method is developed. The latter is a new iterative WTLS algorithm, since no linearization of the problem by Taylor series is performed at any step. Using a numerical example it is demonstrated how the newly developed WTLS approaches can be applied for 3D straight line fitting considering different weighting cases. The solutions are compared with results from the literature and with those obtained from an iteratively linearized GH model.
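For orientation only: in the equally weighted, uncorrelated case mentioned above, the direct solution reduces to the principal axis through the centroid of the points. A minimal numpy sketch of that classical special case (the WTLS solutions of the paper go far beyond it) could look like this:

```python
# Minimal sketch: direct 3D straight-line fit for equally weighted, uncorrelated
# coordinates, via the principal axis of the centred point cloud (SVD).
# The paper's WTLS solutions additionally handle pointwise weights and full dispersion matrices.
import numpy as np

def fit_line_3d(points):
    """points: (n, 3) array of x, y, z coordinates; returns centroid and unit direction."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)   # right singular vectors
    return centroid, Vt[0]                        # axis of largest variance

# Example with noisy points along a known line
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)[:, None]
pts = np.array([1.0, 2.0, 3.0]) + t * np.array([0.5, 0.5, 0.7]) + 0.01 * rng.standard_normal((50, 3))
x0, d = fit_line_3d(pts)
```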
18 pages, 3716 KiB  
Article
Stochastic Properties of Confidence Ellipsoids after Least Squares Adjustment, Derived from GUM Analysis and Monte Carlo Simulations
by Wolfgang Niemeier and Dieter Tengen
Mathematics 2020, 8(8), 1318; https://doi.org/10.3390/math8081318 - 8 Aug 2020
Cited by 5 | Viewed by 1994
Abstract
In this paper stochastic properties are discussed for the final results of the application of an innovative approach for uncertainty assessment for network computations, which can be characterized as a two-step approach: As the first step, raw measuring data and all possible influencing factors were analyzed, applying uncertainty modeling in accordance with GUM (Guide to the Expression of Uncertainty in Measurement). As the second step, Monte Carlo (MC) simulations were set up for the complete processing chain, i.e., for simulating all input data and performing adjustment computations. The input datasets were generated by pseudo random numbers, and pre-set probability distribution functions were considered for all these variables. The main extensions here are related to an analysis of the stochastic properties of the final results, which are point clouds for station coordinates. According to Cramér's central limit theorem and Hagen's elementary error theory, there are some justifications for why these coordinate variations follow a normal distribution. The applied statistical tests on the normal distribution confirmed this assumption. This result allows us to derive confidence ellipsoids out of these point clouds and to continue with our quality assessment and a more detailed analysis of the results, similar to the procedures well known in classical network theory. This approach and the check on normal distribution are applied to the local tie network of Metsähovi, Finland, where terrestrial geodetic observations are combined with Global Navigation Satellite System (GNSS) data.
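A schematic sketch of the two-step idea (GUM-type uncertainty modelling of the inputs, then Monte Carlo propagation through the adjustment) is given below. The `adjust` callable and the normal input distribution are placeholders; the paper allows arbitrary pre-set distributions for the influence factors.

```python
# Schematic sketch of the two-step approach described in the abstract (not the authors' code):
# draw simulated input variations, rerun the adjustment per sample, and analyse the
# resulting point cloud of coordinate solutions.
import numpy as np

def monte_carlo_adjustment(adjust, l0, Sigma_ll, n_samples=10_000, rng=None):
    """adjust: callable mapping an observation vector to estimated coordinates,
    l0: nominal observations, Sigma_ll: assumed VCM from GUM-type uncertainty modelling.
    Returns the sample of solutions and their empirical covariance."""
    rng = rng or np.random.default_rng()
    # normal inputs for brevity; arbitrary pre-set distributions could be drawn instead
    L = rng.multivariate_normal(l0, Sigma_ll, size=n_samples)
    X = np.array([adjust(l) for l in L])          # point cloud of coordinate solutions
    return X, np.cov(X, rowvar=False)             # empirical VCM -> confidence ellipsoids
```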

21 pages, 535 KiB  
Article
Mean Shift versus Variance Inflation Approach for Outlier Detection—A Comparative Study
by Rüdiger Lehmann, Michael Lösler and Frank Neitzel
Mathematics 2020, 8(6), 991; https://doi.org/10.3390/math8060991 - 17 Jun 2020
Cited by 9 | Viewed by 2670
Abstract
Outlier detection is one of the most important tasks in the analysis of measured quantities to ensure reliable results. In recent years, a variety of multi-sensor platforms has become available, which allow autonomous and continuous acquisition of large quantities of heterogeneous observations. Because the probability that such data sets contain outliers increases with the quantity of measured values, powerful methods are required to identify contaminated observations. In geodesy, the mean shift model (MS) is one of the most commonly used approaches for outlier detection. In addition to the MS model, there is an alternative approach with the model of variance inflation (VI). In this investigation, the VI approach is derived in detail, truly maximizing the likelihood functions, and is examined for the detection of one or multiple outliers. In general, the variance inflation approach is non-linear, even if the null model is linear. Thus, an analytical solution usually does not exist, except in the case of repeated measurements. The test statistic is derived from the likelihood ratio (LR) of the models. The VI approach is compared with the MS model in terms of statistical power, identifiability of actual outliers, and numerical effort. The main purpose of this paper is to examine the performance of both approaches in order to derive recommendations for the practical application of outlier detection.
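For context, the mean shift model in its simplest form leads to the familiar normalized-residual (w-)test. A minimal sketch assuming a linear model and a known observation VCM is shown below; the paper's likelihood-ratio comparison of MS and VI is considerably more general.

```python
# Minimal sketch of a mean-shift-type outlier screen via normalized residuals
# (Baarda-style w-test); assumes a linear model and a known observation VCM.
import numpy as np
from scipy import stats

def w_test(A, l, Sigma_ll, alpha=0.001):
    P = np.linalg.inv(Sigma_ll)
    Q_xx = np.linalg.inv(A.T @ P @ A)
    v = A @ (Q_xx @ A.T @ P @ l) - l                 # residuals
    Q_vv = Sigma_ll - A @ Q_xx @ A.T                 # cofactor matrix of the residuals
    w = np.abs(v) / np.sqrt(np.diag(Q_vv))           # normalized residuals
    crit = stats.norm.ppf(1 - alpha / 2)             # two-sided critical value
    return w, w > crit                               # test statistics and flags
```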

9 pages, 639 KiB  
Article
Total Least-Squares Collocation: An Optimal Estimation Technique for the EIV-Model with Prior Information
by Burkhard Schaffrin
Mathematics 2020, 8(6), 971; https://doi.org/10.3390/math8060971 - 13 Jun 2020
Cited by 2 | Viewed by 1801
Abstract
In regression analysis, oftentimes a linear (or linearized) Gauss-Markov Model (GMM) is used to describe the relationship between certain unknown parameters and measurements taken to learn about them. As soon as there are more than enough data collected to determine a unique solution for the parameters, an estimation technique needs to be applied such as ‘Least-Squares adjustment’, for instance, which turns out to be optimal under a wide range of criteria. In this context, the matrix connecting the parameters with the observations is considered fully known, and the parameter vector is considered fully unknown. This, however, is not always the reality. Therefore, two modifications of the GMM have been considered, in particular. First, ‘stochastic prior information’ (p. i.) was added on the parameters, thereby creating the – still linear – Random Effects Model (REM) where the optimal determination of the parameters (random effects) is based on ‘Least Squares collocation’, showing higher precision as long as the p. i. was adequate (Wallace test). Secondly, the coefficient matrix was allowed to contain observed elements, thus leading to the – now nonlinear – Errors-In-Variables (EIV) Model. If not using iterative linearization, the optimal estimates for the parameters would be obtained by ‘Total Least Squares adjustment’ and with generally lower, but perhaps more realistic precision. Here the two concepts are combined, thus leading to the (nonlinear) ‘EIV-Model with p. i.’, where an optimal estimation (resp. prediction) technique is developed under the name of ‘Total Least-Squares collocation’. At this stage, however, the covariance matrix of the data matrix – in vector form – is still being assumed to show a Kronecker product structure.
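As a point of reference (a generic textbook form, not the paper's new result), the linear Random Effects Model with prior information and its collocation-type estimate can be sketched as:

```latex
% Linear Random Effects Model with prior information (p. i.) on the parameter vector \xi
% and its collocation-type estimate (generic textbook form, not the EIV case of the paper).
y = A\xi + e, \qquad e \sim (0,\,\Sigma_{ee}), \qquad \xi \sim (\mu_{\xi},\,\Sigma_{\xi\xi})
\hat{\xi} = \mu_{\xi}
  + \bigl(\Sigma_{\xi\xi}^{-1} + A^{\mathsf T}\Sigma_{ee}^{-1}A\bigr)^{-1}
    A^{\mathsf T}\Sigma_{ee}^{-1}\bigl(y - A\mu_{\xi}\bigr)
```

The paper's contribution, Total Least-Squares collocation, generalizes this setting to a coefficient matrix that itself contains observed (error-affected) elements.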

15 pages, 5213 KiB  
Article
Evaluation of VLBI Observations with Sensitivity and Robustness Analyses
by Pakize Küreç Nehbit, Robert Heinkelmann, Harald Schuh, Susanne Glaser, Susanne Lunz, Nicat Mammadaliyev, Kyriakos Balidakis, Haluk Konak and Emine Tanır Kayıkçı
Mathematics 2020, 8(6), 939; https://doi.org/10.3390/math8060939 - 8 Jun 2020
Cited by 1 | Viewed by 2247
Abstract
Very Long Baseline Interferometry (VLBI) plays an indispensable role in the realization of global terrestrial and celestial reference frames and in the determination of the full set of the Earth Orientation Parameters (EOP). The main goal of this research is to assess the quality of the VLBI observations based on the sensitivity and robustness criteria. Sensitivity is defined as the minimum displacement value that can be detected in the coordinate unknowns. Robustness describes the deformation strength induced by the maximum undetectable errors determined with the internal reliability analysis. The location of a VLBI station and the total weights of the observations at the station are most important for the sensitivity analysis. Furthermore, the total observation number of a radio source and the quality of the observations are important for the sensitivity levels of the radio sources. According to the robustness analysis of station coordinates, the worst robustness values are caused by atmospheric delay effects with high temporal and spatial variability. During CONT14, it is determined that FORTLEZA, WESTFORD, and TSUKUB32 have robustness values ranging between 0.8 and 1.3 mm, which are significantly worse in comparison to the other stations. The radio sources 0506-612, NRAO150, and 3C345 have worse sensitivity levels compared to other radio sources. It can be concluded that the sensitivity and robustness analyses are reliable measures to obtain high-accuracy VLBI solutions.
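For readers less familiar with internal reliability, the "maximum undetectable error" of an observation is classically quantified by the Baarda-type minimal detectable bias; a generic sketch (notation not taken from the paper):

```latex
% Minimal detectable bias (maximum undetectable error) of observation l_i:
% \delta_0 is the non-centrality parameter for the chosen significance level and test power,
% \sigma_{l_i} its standard deviation, r_i its redundancy number.
\nabla_{0}l_{i} = \delta_{0}\,\frac{\sigma_{l_{i}}}{\sqrt{r_{i}}},
\qquad r_{i} = \bigl(Q_{vv}P\bigr)_{ii}
```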

23 pages, 3056 KiB  
Article
On Estimating the Hurst Parameter from Least-Squares Residuals. Case Study: Correlated Terrestrial Laser Scanner Range Noise
by Gaël Kermarrec
Mathematics 2020, 8(5), 674; https://doi.org/10.3390/math8050674 - 29 Apr 2020
Cited by 8 | Viewed by 3710
Abstract
Many signals appear fractal and have self-similarity over a large range of their power spectral densities. They can be described by so-called Hermite processes, among which the first-order one is called fractional Brownian motion (fBm) and has a wide range of applications. The fractional Gaussian noise (fGn) series consists of the successive differences between elements of an fBm series; they are stationary and completely characterized by two parameters: the variance and the Hurst coefficient (H). From physical considerations, the fGn could be used to model the noise of observations coming from sensors working with, e.g., phase differences: due to the high recording rate, temporal correlations are expected to have long-range dependency (LRD), decaying hyperbolically rather than exponentially. For the rigorous testing of deformations detected with terrestrial laser scanners (TLS), the correct determination of the correlation structure of the observations is mandatory. In this study, we show that the residuals from surface approximations with regression B-splines from simulated TLS data allow the estimation of the Hurst parameter of a known correlated input noise. We derive a simple procedure to filter the residuals in the presence of additional white noise or low frequencies. Our methodology can be applied to any kind of residuals, where the presence of additional noise and/or biases due to short samples or inaccurate functional modeling makes the estimation of the Hurst coefficient with usual methods, such as maximum likelihood estimators, imprecise. We demonstrate the feasibility of our proposal with real observations from a white plate scanned by a TLS.
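To illustrate the kind of estimator involved, a rough log-periodogram (GPH-type) estimate of H from a residual series can be sketched as follows; this toy version ignores the filtering of additional white noise and low frequencies that the paper addresses.

```python
# Rough sketch of a log-periodogram (GPH-type) estimate of the Hurst coefficient from
# a residual series; assumes fGn-like long-range dependence, i.e. a spectrum
# behaving like |f|^(1 - 2H) near zero. Illustrative only.
import numpy as np

def hurst_gph(x, frac=0.1):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    I = np.abs(np.fft.rfft(x))**2 / n                # periodogram
    f = np.fft.rfftfreq(n)                           # frequencies in cycles per sample
    m = min(max(int(frac * n), 10), f.size - 1)      # lowest non-zero frequencies used
    slope = np.polyfit(np.log(f[1:m]), np.log(I[1:m]), 1)[0]   # log I ~ c + (1 - 2H) log f
    return 0.5 * (1.0 - slope)

# For white-noise residuals the estimate should scatter around H = 0.5
print(hurst_gph(np.random.default_rng(1).standard_normal(4096)))
```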

21 pages, 3380 KiB  
Article
Elementary Error Model Applied to Terrestrial Laser Scanning Measurements: Study Case Arch Dam Kops
by Gabriel Kerekes and Volker Schwieger
Mathematics 2020, 8(4), 593; https://doi.org/10.3390/math8040593 - 15 Apr 2020
Cited by 10 | Viewed by 2798
Abstract
All measurements are affected by systematic and random deviations. A huge challenge is to correctly consider these effects on the results. Terrestrial laser scanners deliver point clouds that usually precede surface modeling. Therefore, the stochastic information of the measured points directly influences the quality of the modeled surface. The elementary error model (EEM) is one method used to determine the impact of error sources on variance-covariance matrices (VCMs). This approach assumes linear models and normally distributed deviations, despite the non-linear nature of the observations. It has been proven that in 90% of the cases, linearity can be assumed. In previous publications on the topic, EEM results were shown on simulated data sets while focusing on panorama laser scanners. Within this paper, an application of the EEM to a real object is presented and a functional model is introduced for hybrid laser scanners. The focus is set on instrumental and atmospheric error sources. A different approach is used to classify the atmospheric parameters as stochastically correlating elementary errors, thus expanding the currently available EEM. Former approaches considered atmospheric parameters as functionally correlating elementary errors. Results highlight existing spatial correlations for varying scanner positions and different atmospheric conditions at the arch dam Kops in Austria.
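Conceptually, the synthetic VCM of the elementary error model sums the contributions of independent groups of elementary errors, each propagated through its influence matrix; a generic, simplified sketch of this structure (notation not taken from the paper):

```latex
% Synthetic VCM as a sum over elementary-error groups g with influence matrices F_g;
% \Sigma_{\delta\delta,g} is diagonal for non-correlating errors and built from
% correlation functions for stochastically correlating ones.
\Sigma_{ll} = \sum_{g} F_{g}\, \Sigma_{\delta\delta,g}\, F_{g}^{\mathsf T}
```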

19 pages, 629 KiB  
Article
A Generic Approach to Covariance Function Estimation Using ARMA-Models
by Till Schubert, Johannes Korte, Jan Martin Brockmann and Wolf-Dieter Schuh
Mathematics 2020, 8(4), 591; https://doi.org/10.3390/math8040591 - 15 Apr 2020
Cited by 5 | Viewed by 3094
Abstract
Covariance function modeling is an essential part of stochastic methodology. Many processes in geodetic applications have rather complex, often oscillating covariance functions, where it is difficult to find corresponding analytical functions for modeling. This paper aims to give the methodological foundations for an advanced covariance modeling and elaborates a set of generic base functions which can be used for flexible covariance modeling. In particular, we provide a straightforward procedure and guidelines for a generic approach to the fitting of oscillating covariance functions to an empirical sequence of covariances. The underlying methodology is developed based on the well-known properties of autoregressive processes in time series. The surprising simplicity of the proposed covariance model is that it corresponds to a finite sum of covariance functions of second-order Gauss–Markov (SOGM) processes. Furthermore, a major benefit is that the method is automated to a great extent and directly results in the appropriate model. A manual decision for a set of components is not required. Notably, the numerical method can be easily extended to ARMA processes, which results in the same linear system of equations. Although the underlying mathematical methodology is quite complex, the results can be obtained from a simple and straightforward numerical method.
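For reference, the oscillating covariance function of a single second-order Gauss–Markov process is commonly written as below (standard textbook form; the paper represents empirical covariances as a finite sum of such components obtained from an AR/ARMA fit):

```latex
% Covariance function of an underdamped (oscillating) second-order Gauss-Markov process;
% \sigma^2 is the variance, \beta the damping, \omega the oscillation frequency.
C(\tau) = \sigma^{2}\, e^{-\beta\lvert\tau\rvert}
          \Bigl(\cos(\omega\tau) + \tfrac{\beta}{\omega}\,\sin(\omega\lvert\tau\rvert)\Bigr),
\qquad \beta > 0,\ \omega > 0
```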

15 pages, 3718 KiB  
Article
Variance Reduction of Sequential Monte Carlo Approach for GNSS Phase Bias Estimation
by Yumiao Tian, Maorong Ge and Frank Neitzel
Mathematics 2020, 8(4), 522; https://doi.org/10.3390/math8040522 - 3 Apr 2020
Cited by 3 | Viewed by 2875
Abstract
Global navigation satellite systems (GNSS) are an important tool for positioning, navigation, and timing (PNT) services. Fast and high-precision GNSS data processing relies on reliable integer ambiguity fixing, whose performance depends on phase bias estimation. However, the mathematical model of GNSS phase bias estimation encounters a rank-deficiency problem, making bias estimation a difficult task. Combining Monte-Carlo-based methods with the GNSS data processing procedure can overcome the problem and provide fast-converging bias estimates. The variance reduction of the estimation algorithm has the potential to improve the accuracy of the estimates and is meaningful for precise and efficient PNT services. In this paper, we first present the difficulty in phase bias estimation and introduce the sequential quasi-Monte Carlo (SQMC) method, then develop the SQMC-based GNSS phase bias estimation algorithm, and investigate the effects of the low-discrepancy sequence on variance reduction. Experiments with practical data show that the low-discrepancy sequence in the algorithm can significantly reduce the standard deviation of the estimates and shorten the convergence time of the filtering.
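The variance-reduction idea can be illustrated with a toy one-dimensional example: low-discrepancy (Halton) samples cover the unit interval more evenly than pseudo-random draws, so expectations estimated from the mapped particles fluctuate less. This is only an illustration, not the SQMC phase-bias filter of the paper.

```python
# Toy illustration of low-discrepancy sampling versus pseudo-random sampling.
import numpy as np
from scipy.stats import norm

def halton(n, base=2):
    """First n points of the van der Corput / Halton sequence in the given base."""
    seq = np.zeros(n)
    for i in range(1, n + 1):
        f, k, x = 1.0, i, 0.0
        while k > 0:
            f /= base
            k, r = divmod(k, base)
            x += f * r
        seq[i - 1] = x
    return seq

n = 512
u_qmc = halton(n)                                  # low-discrepancy uniforms
u_mc = np.random.default_rng(0).random(n)          # pseudo-random uniforms
g = lambda z: z**2                                 # E[g(Z)] = 1 for Z ~ N(0, 1)
# the quasi-Monte Carlo estimate is typically much closer to 1
print(np.mean(g(norm.ppf(u_qmc))), np.mean(g(norm.ppf(u_mc))))
```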

20 pages, 4057 KiB  
Article
Automatic Calibration of Process Noise Matrix and Measurement Noise Covariance for Multi-GNSS Precise Point Positioning
by Xinggang Zhang, Pan Li, Rui Tu, Xiaochun Lu, Maorong Ge and Harald Schuh
Mathematics 2020, 8(4), 502; https://doi.org/10.3390/math8040502 - 2 Apr 2020
Cited by 8 | Viewed by 3108
Abstract
The Expectation-Maximization (EM) algorithm is adapted to the extended Kalman filter for multi-GNSS Precise Point Positioning (PPP), named EM-PPP. EM-PPP better accounts for the compatibility of multi-GNSS data processing and the characteristics of receiver motion by calibrating, together with the other parameters, the process noise matrix Qt and the observation noise matrix Rt, both of which influence PPP convergence time and precision. Owing to its simplicity and easy implementation, it is a feasible way to estimate a large number of parameters. We also compare the EM algorithm with other methods such as least-squares (co)variance component estimation (LS-VCE) and maximum likelihood estimation (MLE), showing that the EM algorithm based on restricted maximum likelihood (REML) is identical to LS-VCE if a certain weight matrix is chosen for LS-VCE. To assess the performance of the approach, daily observations from a network of 14 globally distributed International GNSS Service (IGS) multi-GNSS stations were processed using ionosphere-free combinations. The stations were assumed to be in kinematic motion with an initial random walk noise of 1 mm every 30 s. The initial standard deviations for ionosphere-free code and carrier phase measurements were set to 3 m and 0.03 m, respectively, independent of the satellite elevation angle. It is shown that the calibrated Rt agrees well with the observation residuals, reflects the accuracy of the different satellite precise products and the receiver-satellite geometry variations, and effectively resists outliers. The calibrated Qt converges to its true value after about 50 iterations in our case. A kinematic test was also performed to derive 1 Hz GPS displacements, showing that the RMSs and STDs with respect to real-time kinematic (RTK) solutions are improved while the proper Qt is found at the same time. Despite the criticism that EM-PPP is time-consuming, because a large number of parameters are calculated and the EM algorithm has only first-order convergence, our analysis shows that it is a numerically stable and simple approach that accounts for the temporal nature of the PPP state-space model, in particular when Qt and Rt are not well known; without fixing ambiguities, its performance can even parallel that of traditional PPP-RTK.
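To convey the principle only, an EM calibration of process and measurement noise can be sketched for a scalar random-walk Kalman filter (a Shumway–Stoffer-style E-step with an RTS smoother); the actual EM-PPP works on the full multi-GNSS state-space model with matrices Qt and Rt.

```python
# Toy sketch of EM calibration of process noise q and measurement noise r in a scalar
# random-walk Kalman filter. Illustrates the principle behind EM-based noise calibration only.
import numpy as np

def em_kf(y, q=1.0, r=1.0, n_iter=50):
    T = len(y)
    for _ in range(n_iter):
        # E-step, forward pass: Kalman filter
        xf, Pf = np.zeros(T), np.zeros(T)            # filtered mean / variance
        xp, Pp = np.zeros(T), np.zeros(T)            # predicted mean / variance
        x, P = y[0], r
        for t in range(T):
            xp[t], Pp[t] = x, P + q                  # prediction (random walk)
            K = Pp[t] / (Pp[t] + r)                  # Kalman gain
            x = xp[t] + K * (y[t] - xp[t])
            P = (1 - K) * Pp[t]
            xf[t], Pf[t] = x, P
        # E-step, backward pass: RTS smoother
        xs, Ps = xf.copy(), Pf.copy()
        C = np.zeros(T)                              # smoother gains
        for t in range(T - 2, -1, -1):
            C[t] = Pf[t] / Pp[t + 1]
            xs[t] = xf[t] + C[t] * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + C[t]**2 * (Ps[t + 1] - Pp[t + 1])
        Pcross = C[:-1] * Ps[1:]                     # lag-one smoothed covariances
        # M-step: update r and q from the smoothed moments
        r = np.mean((y - xs)**2 + Ps)
        q = np.mean((xs[1:] - xs[:-1])**2 + Ps[1:] + Ps[:-1] - 2 * Pcross)
    return q, r

# Simulated check with true q = 0.04, r = 1.0
rng = np.random.default_rng(2)
x_true = np.cumsum(rng.normal(0.0, 0.2, 2000))
q_hat, r_hat = em_kf(x_true + rng.normal(0.0, 1.0, 2000))
```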
