Article

Customizable Stochastic High-Fidelity Model of the Sensors and Camera Onboard a Fixed Wing Autonomous Aircraft

Centro de Automática y Robótica, Universidad Politécnica de Madrid-Consejo Superior de Investigaciones Científicas, c/José Gutiérrez Abascal 2, 28006 Madrid, Spain
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5518; https://doi.org/10.3390/s22155518
Submission received: 8 June 2022 / Revised: 13 July 2022 / Accepted: 18 July 2022 / Published: 24 July 2022
(This article belongs to the Special Issue Sensors in Aircraft)

Abstract:
The navigation systems of autonomous aircraft rely on the readings provided by a suite of onboard sensors to estimate the aircraft state. In the case of fixed wing vehicles, the sensor suite is usually composed of triads of accelerometers, gyroscopes, and magnetometers, a Global Navigation Satellite System (GNSS) receiver, and an air data system (Pitot tube, air vanes, thermometer, and barometer), and it is often complemented by one or more digital cameras. An accurate representation of the behavior and error sources of each of these sensors, together with the images generated by the cameras, is indispensable for the design, development, and testing of inertial, visual, or visual–inertial navigation algorithms. This article presents realistic and customizable models for each of these sensors; a ready-to-use C++ implementation is released as open-source code so non-experts in the field can easily generate realistic results. The pseudo-random models provide a time-stamped series of the errors generated by each sensor based on performance values and operating frequencies obtainable from the sensors' data sheets. If, in addition, the simulated true pose (position plus attitude) of the aircraft is provided, the camera model generates realistic images of the Earth's surface that resemble those taken with a real camera from the same pose.

1. Introduction

The sensors onboard an autonomous aircraft measure various aspects of the aircraft's real or actual state $x = x_{TRUTH}$ and provide these measurements to the aircraft guidance, navigation, and control (GNC) system. The outputs of these sensors, collectively known as the sensed state $\tilde{x} = x_{SENSED}$, represent the only link between the real but unknown actual states and the GNC system in charge of achieving an actual trajectory that deviates as little as possible from the guidance targets (Figure 1).
Researchers and engineers designing, developing, or testing aircraft navigation systems require realistic renditions of the time variations of both the actual and sensed states to analyze the behavior of their algorithms in simulation. It is only after validating the algorithms under a wide range of conditions that these can be installed onboard the aircraft and field tested. Obtaining realistic $x = x_{TRUTH}$ states covering the different maneuvers to be analyzed is generally achieved by discrete integration of the aircraft equations of motion coupled with the applicable guidance targets. Realistic $\tilde{x} = x_{SENSED}$ sensor outputs are, however, quite difficult to generate given their stochastic nature, the various underlying technologies, and the lack of detailed error models from the sensor manufacturers. In addition, realistic-looking images of the Earth's surface that resemble those taken from a real aircraft are also necessary to test visual navigation algorithms.
Note that although the actual aircraft state varies continuously in the real world, in simulation, it is usually the outcome of a high-frequency discrete integration process [1] that results in $x(t_t) = x_{TRUTH}(t_t)$, where $t_t = t \cdot \Delta t_{TRUTH}$. The sensed trajectory $\tilde{x}(t_s) = x_{SENSED}(t_s)$, where $t_s = s \cdot \Delta t_{SENSED}$, is, however, intrinsically discrete, although the working frequencies of the different sensors may vary. This article considers that all sensors operate with the same period $\Delta t_{SENSED}$, with the exception of the GNSS receiver and the onboard camera, which work at $\Delta t_{GNSS}$ and $\Delta t_{IMG}$, respectively. It is also assumed that all sensors are fixed to the aircraft structure in a strapdown configuration and that their measurement processes are instantaneous and time synchronized with each other at their respective frequencies.
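The multi-rate, time-synchronized sampling assumption described above can be sketched as follows; the specific periods used here (100 Hz generic sensors, 1 Hz GNSS, 10 Hz camera) are illustrative assumptions for the sketch, not values prescribed by the article:

```python
# Illustrative multi-rate sampling: GNSS and camera epochs are integer
# multiples of the generic sensor period, so all sensors stay synchronized.
dt_sensed = 0.01   # generic sensor period (assumed 100 Hz)
dt_gnss = 1.0      # GNSS receiver period (assumed 1 Hz)
dt_img = 0.1       # camera period (assumed 10 Hz)

def sample_times(dt, t_end):
    """Synchronized discrete time stamps t_s = s * dt up to t_end."""
    n = int(round(t_end / dt))
    return [round(s * dt, 9) for s in range(n + 1)]

t_sensed = sample_times(dt_sensed, 1.0)
t_gnss = sample_times(dt_gnss, 1.0)
t_img = sample_times(dt_img, 1.0)

# Time-synchronization assumption: every GNSS and camera epoch
# coincides with a generic sensor epoch.
assert set(t_gnss) <= set(t_sensed)
assert set(t_img) <= set(t_sensed)
```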
The sensed states or sensed trajectory can be defined as a time-stamped series of state vectors $\tilde{x} = x_{SENSED}$ that groups the measurements provided by the different onboard sensors (1) (note that when present, the super-index represents the frame or reference system in which a certain variable is viewed; if two sub-indexes are present, the vector goes from the first frame to the second. For example, $\omega_{IB}^{B}$ represents the angular velocity from the inertial frame $F_I$ to the body frame $F_B$ viewed in the body frame. This article makes use of the body frame $F_B$, which is rigidly attached to the aircraft structure with origin at its center of mass [2], the NED frame $F_N$, also centered at the aircraft center of mass, with axes in the North–East–Down directions [2], and the inertial frame $F_I$, which is usually considered as centered at the Sun with axes fixed with respect to other stars [3]), comprising the only view of the actual states at the disposal of the navigation system. Its components are listed in Table 1. Note that the specific force $f_{IB}$ is defined as the non-gravitational acceleration experienced by the aircraft body with respect to an inertial frame [4].
$$\tilde{\mathbf{x}} = \mathbf{x}_{SENSED} = \left[\,\tilde{\mathbf{f}}_{IB}^{B},\ \tilde{\boldsymbol{\omega}}_{IB}^{B},\ \tilde{\mathbf{B}}^{B},\ \tilde{\mathbf{x}}_{GDT},\ \tilde{\mathbf{v}}^{N},\ \tilde{p},\ \tilde{T},\ \tilde{v}_{TAS},\ \tilde{\alpha},\ \tilde{\beta},\ \mathbf{I}\,\right]^{T} \qquad (1)$$
Following a review of the objectives, state of the art, and novelty in Section 1.1, Section 1.2, and Section 1.3, the following sections provide detailed descriptions of the stochastic models representing the errors present in the measurements of the different sensors: Section 2 describes the inertial sensors (accelerometers and gyroscopes), Section 3 focuses on the magnetometers, GNSS receiver, and air data system, and Section 4 presents the tool employed to generate realistic images that resemble what a real camera would view if located at the same position and attitude. Although the camera differs from all other sensors in that it does not provide a measurement or reading but a digital image, in this article, it is indeed considered a sensor, as it provides the navigation system with information about its surroundings that can be employed for navigation. Section 5 describes sensor calibration activities that are indispensable for the determination of various parameters present in the sensor models. Section 6 discusses the main characteristics of the models, with special emphasis on the input seeds that control their stochastic properties; it also includes an example of how to customize the models for the case of a low Size, Weight, and Power (SWaP) aircraft. The conclusions are presented in Section 7.

1.1. Objectives

Sensor manufacturers do not provide models with which to estimate the errors introduced by their products, that is, the differences between the actual and sensed states. Instead, they usually publish data sheets that contain selected performance parameters in different formats and units, with few or no instructions for their interpretation. In the case of high-grade inertial sensors, an Allan variance curve is sometimes provided.
Faced with this situation, researchers willing to understand how their GNC systems will perform when the aircraft is equipped with given sensors face a difficult choice, especially in the case of low-cost sensors. One possibility is to adopt a sensor model from the literature and rely on the Allan variance curve (if available) to identify the required parameters, although it is not always clear how to do so [5]. If the Allan curve is not available, a second possibility is to obtain the curve by analyzing the sensor outputs when placed on a test bench, but the process is time consuming, requires specialized equipment and know-how, and the results are only valid for the specific tested hardware [5]. The end result is that researchers often rely on simple sensor models, which, although easy to implement, fail to provide realistic outputs of the errors introduced by each sensor type. This has negative consequences for the performance of their GNC algorithms, which may not work as desired when faced with the real sensor outputs instead of the simulated ones employed for their development.
The first objective of this article is to address the need for realistic error models that can be quickly customized by the user based exclusively on the performance parameters contained in the data sheets provided by the sensor manufacturers, without the need for specific test equipment or expert knowledge in the behavior of the different sensors. By implementing the described models or employing the provided open-source C++ code [6], the end user can quickly obtain realistic pseudo-random results without any expertise in the behavior of the different onboard sensors. Once the user introduces the desired performance parameters and operating frequencies, the simulated outputs of all onboard sensors rely exclusively on two input seeds (one identifying the airframe and the other identifying the specific flight). Different pairs of seeds can be employed as part of a Monte Carlo simulation, or the same pair can be used repeatedly in case the same outputs are required for further analysis. The use of the proposed models hence enables researchers to quickly obtain realistic time-stamped series of the values of $\tilde{x} = x_{SENSED}$ with which to feed their simulations.
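A minimal sketch of such a two-seed scheme is shown below (names and parameter values are hypothetical, not the API of the released C++ code [6]): the airframe seed fixes run-to-run quantities such as the bias offset, while the flight seed drives the in-run noise, so a seed pair fully determines the simulated outputs.

```python
import numpy as np

def simulate_sensor_errors(seed_airframe, seed_flight, n=5):
    """Two-seed sketch: the airframe seed fixes run-to-run quantities
    (e.g., bias offset); the flight seed drives in-run noise."""
    rng_air = np.random.default_rng(seed_airframe)   # airframe-specific
    rng_flt = np.random.default_rng(seed_flight)     # flight-specific
    B0, sigma_v, dt = 1.6e-2, 1e-3, 0.01             # illustrative values
    bias_offset = B0 * rng_air.standard_normal()     # constant per airframe
    noise = sigma_v * dt ** -0.5 * rng_flt.standard_normal(n)
    return bias_offset + noise

# Same seed pair -> identical outputs (repeatable runs);
# different flight seed -> same airframe, new pseudo-random flight.
a = simulate_sensor_errors(1, 42)
b = simulate_sensor_errors(1, 42)
c = simulate_sensor_errors(1, 43)
assert np.allclose(a, b)
assert not np.allclose(a, c)
```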
The second objective of this article is to develop a camera model capable of generating realistic images, resembling what a real camera would record if mounted on the aircraft, so the resulting images of the Earth’s surface can be employed for the development and testing of visual and visual–inertial navigation systems. The resulting Earth Viewer application, described in Section 4.2, is also released as open-source code within [6].

1.2. State of the Art

The standard way of conveying the performance of a given single-axis inertial sensor is its Allan variance curve [5], even though manufacturers do not always make it available, in particular for low-cost sensors. Although standards exist for how to generate the curve [7,8,9], it is not always clear how to convert the Allan variance information into a suitable sensor model [5]. Various Allan curve translation methods [10,11,12,13,14,15,16,17,18,19] have been available for a long time, but [5] provides the first clear exposition of the underlying ideas, issues, and trade-offs between the different methods. More recent attempts to identify the required parameters involve the use of maximum likelihood estimators [20] as well as machine learning [21].
As described in [5], the Allan variance is a well-known time domain analysis technique originally developed to analyze the frequency stability of oscillators [22,23,24], which has been successfully adopted to communicate the performances of inertial sensors and to characterize their stochastic errors [11,13,15,18,19,25,26,27,28,29].
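As a brief illustration of the technique, the non-overlapping Allan variance of a sampled signal can be computed as follows (a standard textbook definition, not code from the article); for a pure white noise input of root PSD $\sigma_v$, the resulting Allan deviation falls off as $\sigma_v/\sqrt{\tau}$ with averaging time $\tau$:

```python
import numpy as np

def allan_variance(x, dt, m):
    """Non-overlapping Allan variance of x at cluster size m
    (averaging time tau = m * dt); standard definition."""
    n_clusters = len(x) // m
    means = x[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# White noise sampled at dt with root PSD sigma_v has per-sample
# standard deviation sigma_v / sqrt(dt).
rng = np.random.default_rng(0)
dt, sigma_v = 0.01, 1e-3
x = sigma_v * dt ** -0.5 * rng.standard_normal(200_000)
for m in (10, 100):
    tau = m * dt
    adev = np.sqrt(allan_variance(x, dt, m))
    ref = sigma_v / np.sqrt(tau)
    assert abs(adev - ref) / ref < 0.15   # -1/2 slope on a log-log plot
```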
The various error sources that influence the output of an inertial sensor are described in multiple navigation textbooks, such as [2,4,30,31,32,33]. In addition to the single-axis sensor model described in [5], the system noise and random walk contributions of the final error of a single-axis sensor are discussed in detail in [34,35], which constitute the basis for the model presented in Section 2.2. The added difficulty of combining three inertial sensors into a triad, treated in Section 2.5, is discussed in [2,31,32]. Basic magnetometer and GNSS receiver models can also be found in these navigation textbooks.

1.3. Novelty

The main contribution of this article is that it provides customizable, stochastic, and realistic models for the errors introduced by the various sensors onboard a fixed wing aircraft, with special emphasis on the inertial ones (accelerometers and gyroscopes), without relying on the Allan variance curves, as is the case in the rest of the literature reviewed in Section 1.2. The required characteristics of the models are the following:
  • Customizable so the user can employ the values that better resemble the performances of the specific equipment being modeled.
  • Stochastic to properly represent the nature of the different random processes involved, while ensuring that the time variation of the errors generated by each sensor can be repeated if so desired.
  • Realistic to provide a faithful description of the variation with time of the measurement errors, including as few simplifications as possible.
This enables researchers to quickly generate pseudo-random time-stamped series of the errors introduced by each sensor without the expert know-how in the behavior of each sensor required to process the Allan variance curve, and without the expensive and time-consuming process required to generate the curve independently. The results can be employed to feed Monte Carlo simulations that require the sensor readings as inputs, such as those required to analyze the behavior of GNC algorithms.
To develop comprehensive models whose parameters can be obtained exclusively from the data sheets published by the manufacturers, the authors have built on established models for the system noise and random walk contributions to single-axis sensors as well as the scale factor and cross-coupling contributions that appear when using sensor triads. The comprehensive models take into consideration the influence of the true relative pose (position plus attitude) of the sensor triad with respect to the platform as well as the uncertainty in the processor’s knowledge about such poses. The contribution of the various calibration procedures on the required parameters is also discussed.
The second contribution of this article is the release of the Earth Viewer application, which is capable of providing realistic and distortion-free images of the Earth's surface that resemble what a real camera would record when mounted on the aircraft. To the knowledge of the authors, this is the first time that a tool capable of considering the six degrees of freedom of the camera pose has been published. These images can be employed to test the behavior of visual and visual–inertial navigation systems.

2. Inertial Sensors

Inertial sensors comprise accelerometers and gyroscopes, which measure the specific force and inertial angular velocity about a single axis, respectively [36]. An inertial measurement unit (IMU) encompasses multiple accelerometers and gyroscopes, usually three of each, obtaining three-dimensional measurements of the specific force and angular rate [2] viewed in the platform frame $F_P$ (Section 2.5). However, the individual accelerometers and gyroscopes are not aligned with the $F_P$ axes but with those of the non-orthogonal accelerometer $F_A$ and gyroscope $F_Y$ frames, which are also defined in Section 2.5. The outputs of the inertial sensors must hence first be transformed from the $F_A$ and $F_Y$ frames to $F_P$, as described in Section 2.6 and Section 2.7, and then from the $F_P$ frame to the body frame $F_B$, as explained in Section 2.8, where they can be employed by the navigation system. The accelerometers and gyroscopes are assumed to be infinitesimally small and located at the IMU reference point (Section 2.8), which coincides with the origin of these three frames, $O_P = O_A = O_Y$.
The IMU is physically attached to the aircraft structure in a strapdown configuration, so both the displacement $T_{BP}^{B}$ and the Euler angles $\phi_{BP} = [\psi_P,\ \theta_P,\ \xi_P]^T$ that describe the relative position and rotation between the body $F_B$ and platform $F_P$ frames are constant. Accelerometers can be divided by their underlying technology into pendulous and vibrating beam, while gyroscopes are classified into spinning mass, optical (ring laser or fiber optic), and vibratory [4]. Current inertial sensor development is mostly focused on micro-machined electromechanical system (MEMS) sensors (there exist both pendulous and vibrating beam MEMS accelerometers, but all MEMS gyroscopes are vibratory), which make direct use of the chemical etching and batch processing techniques used by the electronics integrated circuit industry to obtain sensors with small size, low weight, rugged construction, low power consumption, low price, high reliability, and low maintenance [30]. On the negative side, the accuracy of MEMS sensors is still low, although tremendous progress has been achieved in the last two decades, and more is expected in the future.
There is no universal classification of inertial sensors according to their performance, although they can be broadly assigned into five different categories or grades: marine (submarines and spacecraft), aviation (commercial and military), intermediate (small aircraft and helicopters), tactical (unmanned air vehicles and guided weapons), and automotive (consumer) [4]. The full range of grades covers approximately six orders of magnitude of gyroscope performance and only three for the accelerometers, but higher performance is always associated with bigger size, weight, and cost. Tactical grade IMUs cover a wide range of performance values but can only provide a stand-alone navigation solution for a few minutes, while automotive grade IMUs are unsuitable for navigation.
The different errors that appear in the measurements provided by accelerometers and gyroscopes are described in Section 2.1. Section 2.2 presents a model for the measurements of a single inertial sensor, while Section 2.3 and Section 2.4 focus on how to obtain from the documentation the specific values for white noise and bias on which the model relies. Section 2.5 describes the reference systems required to represent the IMU measurements. Additional errors appear when three accelerometers or gyroscopes are employed together, and these are modeled in Section 2.6 for accelerometers and Section 2.7 for gyroscopes. The analysis of the inertial sensors concludes with Section 2.9, which provides a comprehensive error model for the IMU measurements. The final model also depends on the relative position of the IMU with respect to the body frame, which is described in Section 2.8.

2.1. Inertial Sensor Error Sources

In addition to the accelerometers and gyroscopes, an IMU also contains a processor, storage for the calibration parameters, one or more temperature sensors, and a power supply. As described below, each sensor has several error sources, but each of them has four components: fixed contribution, temperature-dependent variation, run-to-run variation, and in-run variation [4,37]. The first two can be measured at the laboratory (at different temperatures) and the calibration results can be stored in the IMU so the processor can later compensate the sensor outputs based on the reading provided by the temperature sensor. Calibration, however, increases manufacturing costs, so it may be absent in the case of inexpensive sensors. The run-to-run variation results in a contribution to a given error source that varies every time the sensor is employed but remains constant within a given run. It cannot be compensated by the IMU processor but can be calibrated by the navigation system every time it is turned on with a process known as fine alignment [2,4,31]. The in-run contribution to the error sources slowly varies during execution and cannot be calibrated in the laboratory nor by the navigation system.
Let us now discuss the different sources of error that influence an inertial sensor [4,38,39,40]:
  • The bias is an error exhibited by all accelerometers and gyroscopes that is independent of the underlying specific force or angular rate being measured, and it comprises the dominant contribution to the overall sensor error. It can be defined as any nonzero output when the sensor input is zero [37], and it can be divided into its static and dynamic components. The static part, also known as fixed bias, bias offset, turn-on bias, or bias repeatability, comprises the run-to-run variation, while the dynamic component, known as in-run bias variation, bias drift, or bias instability (or stability), is typically about 10% of the static part and slowly varies over periods of the order of one minute. As the bias is the main contributor to the overall sensor error, its value can be understood as a sensor quality measure. Table 2 provides approximate values for the inertial sensor biases according to the IMU grade [4].
    While the bias offset can be greatly reduced through fine alignment [2,4,31], the bias drift cannot be determined and needs to be modeled as a stochastic process. It is mostly a warm-up effect that should be almost non-existent after a few minutes of operation, and it corresponds to the minimum point of the sensor's Allan curve [7,8,9,39]. It is generally modeled as a random walk process obtained by the integration of a white noise signal coupled with limits that represent the conclusion of the warm-up process.
  • The scale factor error is the departure of the input–output gradient of the instrument from unity following unit conversion at the IMU processor. It represents a varying relationship between sensor input and output caused by aging and manufacturing tolerances. As it is a combination of a fixed contribution plus a temperature-dependent variation, most of it can be eliminated through calibration (Section 5.1).
  • The cross-coupling error or non-orthogonality error is a fixed contribution that arises from the misalignment of the sensitive axes of the inertial sensors with respect to the orthogonal axes of the platform frame due to manufacturing limitations, and it can also be highly reduced through calibration. The scale factor and cross-coupling errors are on the order of $10^{-4}$ and $10^{-3}$ for most inertial sensors, although they can be higher for some low-grade gyroscopes. The cross-coupling error is equal to the sine of the misalignment, which is listed by some manufacturers.
  • System noise or random noise is inherent to all inertial sensors and can combine electrical, mechanical, resonance, and quantization sources. It can originate at the sensor itself or at any other electronic equipment that interferes with it. System noise is a stochastic process usually modeled as white noise because its noise spectrum is approximately white, and it cannot be calibrated as there is no correlation between past and future values. A white noise process is characterized by its power spectral density (PSD), which is constant as it does not depend on the signal frequency. It corresponds to the 1 s crossing of the sensor’s Allan curve [7,8,9,39].
    System noise is sometimes referred to as random walk, which can generate confusion with the bias. The reason is that the inertial sensor outputs are always integrated to obtain ground velocity in the case of accelerometers and aircraft attitude in the case of gyroscopes. As the integration of a white noise process is indeed a random walk, the latter term is commonly employed to refer to system noise. Table 3 contains typical values for accelerometer and gyroscope root PSD according to sensor grade [4].
  • Other minor error sources not considered in this article are the g-dependent bias (sensitivity of spinning mass and vibratory gyroscopes to specific force), scale factor nonlinearity, and higher-order errors (spinning mass gyroscopes and pendulous accelerometers).
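Since data sheets usually quote these error sources in mixed units (e.g., gyroscope bias in deg/h and angular random walk in deg/√h), a small conversion helper into the SI units used by the models below may be useful; the sample figures are illustrative, not taken from any specific device:

```python
import math

# Convert typical gyroscope data sheet figures into SI units:
# bias in rad/s, angular random walk (root PSD) in rad/s/sqrt(Hz).
DEG = math.pi / 180.0
HOUR = 3600.0

def gyro_bias_si(bias_deg_per_h):
    """deg/h -> rad/s."""
    return bias_deg_per_h * DEG / HOUR

def gyro_arw_si(arw_deg_per_sqrt_h):
    """deg/sqrt(h) -> rad/s/sqrt(Hz); note sqrt(3600 s) = 60 sqrt(s)."""
    return arw_deg_per_sqrt_h * DEG / math.sqrt(HOUR)

# Illustrative tactical-grade-like figures: 1 deg/h, 0.1 deg/sqrt(h).
b = gyro_bias_si(1.0)   # ~4.848e-6 rad/s
n = gyro_arw_si(0.1)    # ~2.909e-5 rad/s/sqrt(Hz)
assert abs(b - 4.848e-6) < 1e-8
assert abs(n - 2.909e-5) < 1e-7
```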

2.2. Single-Axis Inertial Sensor Error Model

As the inertial sensors provide measurements at equispaced discrete times $t_s = s \cdot \Delta t_{SENSED} = s \cdot \Delta t$, this section focuses on obtaining a discrete model for the bias and white noise errors of a single-axis inertial sensor. The results obtained here will be employed in the following sections to generate a comprehensive IMU model.
Let us consider a sensor in which the difference between its measurement at any given time $\tilde{x}(t)$ and the real value of the physical magnitude being measured at that same time $x(t)$ can be represented by a zero-mean white noise Gaussian process $\eta_v(t)$ with power spectral density $\sigma_v^2$:

$$\tilde{x}(t) = x(t) + \eta_v(t) \qquad (2)$$
Dividing (2) by $\Delta t_{SENSED} = \Delta t$ and integrating over the sampling interval results in:

$$\frac{1}{\Delta t}\int_{t_0}^{t_0+\Delta t}\tilde{x}(t)\,dt = \frac{1}{\Delta t}\int_{t_0}^{t_0+\Delta t}\left[x(t) + \eta_v(t)\right]dt \qquad (3)$$
Assuming that the measurement and real value are both constant over the integration interval (note that the stochastic process $\eta_v$ cannot be considered constant over any interval) [34] yields

$$\tilde{x}(t_0+\Delta t) = x(t_0+\Delta t) + \frac{1}{\Delta t}\int_{t_0}^{t_0+\Delta t}\eta_v(t)\,dt \qquad (4)$$
This expression results in the white noise sensor error $w(t)$, which is the difference between the sensor measurement $\tilde{x}(t)$ and the true value $x(t)$. Its mean and variance can be readily computed:

$$w(t_0+\Delta t) = \frac{1}{\Delta t}\int_{t_0}^{t_0+\Delta t}\eta_v(t)\,dt \qquad (5)$$

$$\mathrm{E}\left[w(t_0+\Delta t)\right] = 0 \qquad (6)$$

$$\mathrm{Var}\left[w(t_0+\Delta t)\right] = \frac{\sigma_v^2}{\Delta t} \qquad (7)$$
Based on these results, the white noise error can be modeled by a discrete random variable identically distributed to the above continuous white noise error, that is, one that results in the same mean and variance, where $N_{vs} \sim N(0,1)$ is a standard normal random variable:

$$w(s\,\Delta t_{SENSED}) = w(s\,\Delta t) = \sigma_v\,\Delta t^{-1/2}\,N_{vs} \qquad (8)$$
Let us now consider a second model in which the measurement error or bias is given by a first-order random walk process, that is, the integration of a zero-mean white noise Gaussian process $\eta_u(t)$ with power spectral density $\sigma_u^2$:

$$\dot{b}(t) = \eta_u(t) \;\Longrightarrow\; b(t_0+\Delta t) = b(t_0) + \int_{t_0}^{t_0+\Delta t}\eta_u(t)\,dt \qquad (9)$$
Its mean and variance can be quickly computed:
E b t 0 + Δ t = E b t 0
Var b t 0 + Δ t = σ u 2 Δ t
These results indicate that the bias can be modeled by a discrete random variable identically distributed to the continuous random walk above:
b t 0 + Δ t = b t 0 + σ u Δ t 1 / 2 N u
where $N_u \sim N(0,1)$ is a standard normal random variable. Operating with the above expression results in the final expression for the discrete bias as well as its mean and variance:

$$b(s\,\Delta t) = B_0\,N_{u0} + \sigma_u\,\Delta t^{1/2}\sum_{i=1}^{s} N_{ui} \qquad (13)$$

$$\mathrm{E}\left[b(s\,\Delta t)\right] = 0 \qquad (14)$$

$$\mathrm{Var}\left[b(s\,\Delta t)\right] = B_0^2 + \sigma_u^2\,s\,\Delta t \qquad (15)$$
A comprehensive single-axis sensor error model without a scale factor can hence be constructed by adding together the influence of the system noise provided by (8) and the bias given by (13) [35], while assuming that the standard normal random variables $N_u$ and $N_v$ are uncorrelated (note that the expected value and variance of each of the two discrete components of this sensor model coincide with those of their continuous counterparts, but their combined mean and variance provided by expressions (17) and (18) differ from those of the combination of the two continuous error models given by (5) and (9). This is the case even if considering that the two zero-mean white noise Gaussian processes $\eta_u$ and $\eta_v$ are independent and hence uncorrelated. It is, however, possible to obtain a discrete model whose bias and white noise components are not only identically distributed to those of their continuous counterparts [34] but also match the covariance between the bias and the sensor error; this results in a significantly more complex model that behaves similarly to the one above at all but the shortest time samples after sensor initialization. The authors have decided not to do so in the model described in this article, reducing complexity with little or no loss of realism):

$$e_{BW}(s\,\Delta t) = \tilde{x}(s\,\Delta t) - x(s\,\Delta t) = B_0\,N_{u0} + \sigma_u\,\Delta t^{1/2}\sum_{i=1}^{s} N_{ui} + \sigma_v\,\Delta t^{-1/2}\,N_{vs} \qquad (16)$$

$$\mathrm{E}\left[e_{BW}(s\,\Delta t)\right] = 0 \qquad (17)$$

$$\mathrm{Var}\left[e_{BW}(s\,\Delta t)\right] = B_0^2 + \sigma_u^2\,s\,\Delta t + \frac{\sigma_v^2}{\Delta t} \qquad (18)$$
The discrete sensor error, or difference between the measurement provided by the sensor $\tilde{x}(s\,\Delta t_{SENSED}) = \tilde{x}(s\,\Delta t)$ at any given discrete time $s\,\Delta t$ and the real value of the physical variable being measured at that same discrete time $x(s\,\Delta t)$, is the combination of a bias or first-order random walk and a white noise process, and it depends on three parameters: the bias offset $B_0$, the bias instability $\sigma_u$, and the white noise $\sigma_v$. The contributions of these three sources to the sensor error, as well as to its first and second integrals (gyroscopes measure angular velocity, and their output needs to be integrated once to obtain attitude, while accelerometers measure specific force and are integrated once to obtain velocity and twice to obtain position), are very different and lie at the root of many of the challenges encountered when employing accelerometers and gyroscopes for inertial navigation, as explained below.
Figure 2 and Figure 3 represent the performance of a fictitious sensor of $B_0 = 1.6 \times 10^{-2}$, $\sigma_u = 4 \times 10^{-3}$, and $\sigma_v = 1 \times 10^{-3}$ working at a frequency of $100\ \mathrm{Hz}$ ($\Delta t = 0.01\ \mathrm{s}$), and they are intended to showcase the different behavior and relative influence on the total error of each of its three components. The figures show the theoretical variation with time of the sensor error mean (Figure 2) and standard deviation (Figure 3) given by (17) and (18) together with the average of fifty different runs. In addition, Figure 2 also includes ten of those runs to showcase the variability in results implicit to the random variables (although the data are generated at $100\ \mathrm{Hz}$, for visibility purposes, the figure only employs 1 out of every 1000 points, so it appears far less noisy than the real data), while Figure 3 shows the theoretical contribution to the standard deviation of each of the three components. In addition to the near equivalence between the theory and the average of several runs, the figures show that the bias instability is the commanding long-term factor in the deviation between the sensor measurement and its zero mean (the standard deviation of the bias instability grows with the square root of time, while the other two components are constant). As discussed in Section 2.1, the bias drift or bias instability is indeed the most important quality parameter of an inertial sensor. This is also the case when the sensor output is integrated, as discussed below.
Let us integrate the sensor error over a timespan $s\,\Delta t$ to evaluate the growth with time of both its expected value and its variance (as the interest lies primarily in $s \gg 1$, a simple integration method such as the rectangular rule is employed):

$$f_{BW}(s\,\Delta t) = f_{BW}(0) + \int_0^{s\Delta t} e_{BW}(\tau)\,d\tau = f_{BW}(0) + \Delta t\sum_{i=1}^{s} e_{BW}(i\,\Delta t) = f_{BW}(0) + B_0\,N_{u0}\,s\,\Delta t + \sigma_u\,\Delta t^{3/2}\sum_{i=1}^{s}(s-i+1)\,N_{ui} + \sigma_v\,\Delta t^{1/2}\sum_{i=1}^{s} N_{vi} \qquad (19)$$
E f BW s Δ t = f BW 0 Var f BW s Δ t = B 0 2 s Δ t 2 + σ u 2 6 Δ t 3 s s + 1 2 s + 1 + σ v 2 s Δ t
B 0 2 s Δ t 2 + σ u 2 3 s Δ t 3 + σ v 2 s Δ t
Figure 4 and Figure 5 follow the same pattern as Figure 2 and Figure 3 but applied to the error integral instead of to the error itself. They would represent the attitude error resulting from integrating the gyroscope output or the velocity error expected when integrating the specific force measured by an accelerometer. The conclusions are the same as before but significantly more accentuated. Not only is the expected value of the error constant instead of zero ( f BW 0 = 3 has been employed in the experiment), but the growth in the standard deviation (over a nonzero mean) is much quicker than before. The bias instability continues to be the dominating factor but now increases with a power of t 3 / 2 , while the bias offset and white noise contributions also increase with time, although with powers of t and t 1 / 2 , respectively. Let us continue the process and integrate the error a second time:
$$ g_{BW}(s\,\Delta t) = g_{BW}(0) + \int_{0}^{s\Delta t} f_{BW}(\tau)\, d\tau = g_{BW}(0) + \Delta t \sum_{i=1}^{s} f_{BW}(i\,\Delta t) = g_{BW}(0) + f_{BW}(0)\, s\,\Delta t + \frac{B_{0}}{2}\, N_{u0}\, s^{2} \Delta t^{2} + \sigma_{u}\, \Delta t^{5/2} \sum_{i=1}^{s} \sum_{j=1}^{s-i+1} j\, N_{ui} + \sigma_{v}\, \Delta t^{3/2} \sum_{i=1}^{s} (s - i + 1)\, N_{vi} \qquad (22) $$
$$ \mathrm{E}\big[g_{BW}(s\,\Delta t)\big] = g_{BW}(0) + f_{BW}(0)\, s\,\Delta t \qquad (23) $$
$$ \mathrm{Var}\big[g_{BW}(s\,\Delta t)\big] = \frac{B_{0}^{2}}{4}\, s^{4} \Delta t^{4} + \sigma_{u}^{2}\, \Delta t^{5} \sum_{i=1}^{s} \Big( \sum_{j=1}^{s-i+1} j \Big)^{2} + \frac{\sigma_{v}^{2}}{6}\, \Delta t^{3}\, s (s+1) (2s+1) \approx \frac{B_{0}^{2}}{4}\, s^{4} \Delta t^{4} + \frac{\sigma_{u}^{2}}{20}\, s^{5} \Delta t^{5} + \frac{\sigma_{v}^{2}}{3}\, s^{3} \Delta t^{3} \qquad (24) $$
Figure 6 and Figure 7 show the same type of figures but applied to the second integral of the error ( g BW 0 = 1.5 has been employed in the experiment). In this case, the degradation of the results with time is even more intense to the point where the measurements are useless after a very short period of time. Unless corrected by the navigation system, this is equivalent to the error in position obtained by double integrating the output of the accelerometers.
Let us summarize the main points of the single-axis inertial sensor discrete error model developed in this section, which includes the influence of the bias and the system error but not that of the scale factor and cross-coupling errors included in the three-dimensional error model of Section 2.9. The error $e_{BW}(s\,\Delta t)$, which applies to the specific force for accelerometers and to the inertial angular velocity for gyroscopes, depends on three factors: the bias offset $B_0$, the bias drift $\sigma_u$, and the white noise $\sigma_v$. Its mean is always zero, but its standard deviation grows with time ($\propto t^{1/2}$) due to the bias drift, with constant contributions from the bias offset and the white noise. When integrating the error to obtain $f_{BW}(s\,\Delta t)$, which is equivalent to the ground velocity error for accelerometers and to the attitude error for gyroscopes, the initial velocity or attitude error $f_{BW}(0)$ becomes the fourth contributor, and an important one indeed, as it is the mean of the first integral error at any time. The standard deviation, which measures the spread about the nonzero mean, increases very quickly with time because of the bias instability ($\propto t^{3/2}$), with contributions from the offset ($\propto t$) and the white noise ($\propto t^{1/2}$). When integrating a second time to obtain $g_{BW}(s\,\Delta t)$, equivalent to the position error in the case of the accelerometers, the initial position error $g_{BW}(0)$ turns into the fifth contributor. The expected value of the position error grows linearly with time due to the initial velocity error, with an additional constant contribution from the initial position error, while the position standard deviation (measuring the spread about a growing mean) grows extremely quickly, mostly due to the bias instability ($\propto t^{5/2}$) but also because of the bias offset ($\propto t^{2}$) and the white noise ($\propto t^{3/2}$). Table 4 shows the standard units of the different sources of error for both accelerometers and gyroscopes.
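The discrete model summarized above is straightforward to simulate. The following sketch (Python for brevity; the implementation released with this article is C++) draws Monte Carlo runs of the single-axis error model and compares the sample variance of its first integral against the approximation in (21); the parameter values are those of the fictitious sensor of Figures 2 and 3, and the band limiting of the bias drift described in Section 2.6 is omitted:

```python
import math
import random

def simulate_error(steps, dt, b0, sigma_u, sigma_v, rng):
    """One run of the single-axis error model: run-to-run bias offset plus
    bias drift (random walk) plus white noise, sampled at 1/dt Hz."""
    nu0 = rng.gauss(0.0, 1.0)            # run-to-run offset realization
    drift = 0.0
    errors = []
    for _ in range(steps):
        drift += sigma_u * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        errors.append(b0 * nu0 + drift + sigma_v / math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return errors

rng = random.Random(1)
dt, steps = 0.01, 1000                   # 100 Hz for 10 s
b0, su, sv = 1.6e-2, 4e-3, 1e-3          # fictitious sensor of Figures 2 and 3
runs = 400
finals = []
for _ in range(runs):
    e = simulate_error(steps, dt, b0, su, sv, rng)
    finals.append(dt * sum(e))           # rectangular-rule integral, eq. (19)
mean = sum(finals) / runs
var = sum((f - mean) ** 2 for f in finals) / runs
t = steps * dt
var_theory = b0**2 * t**2 + su**2 / 3 * t**3 + sv**2 * t   # approximation (21)
print(var, var_theory)
```

With a few hundred runs, the sample variance agrees with the theoretical value to within the expected Monte Carlo scatter.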

2.3. Obtainment of System Noise Values

This section focuses on the significance of system or white noise error σ v and how to obtain it from sensor specifications, which often refer to the integral of the output instead of the output itself. As the integral of white noise is a random walk process, the angle random walk of a gyroscope is equivalent to white noise in the angular rate output, while velocity random walk refers to the specific force white noise in accelerometers [37]. The discussion that follows applies to gyroscopes but is fully applicable to accelerometers if replacing the angular rate by specific force and attitude or angle by ground velocity.
Angle random walk, measured in (rad/s^{1/2}), (°/h^{1/2}), or equivalent, describes the average deviation or error that occurs when the sensor output signal is integrated due to system noise exclusively, without considering other error sources such as bias or scale factor [41]. If integrating multiple times and obtaining a distribution of end points at a given final time $s\,\Delta t$, the standard deviation of this distribution, containing the final angles at the final time, scales linearly with the white noise level $\sigma_v$, the square root of the integration step size $\Delta t$, and the square root of the number of steps $s$, as noted by the last term of (21). This means that an angle random walk of 1 °/s^{1/2} translates into a standard deviation for the error of 1° after 1 s, 10° after 100 s, and $1000^{1/2} \approx 31.6$° after 1000 s.
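The square-root scaling can be captured in a one-line helper (a hypothetical function name; degrees and seconds assumed):

```python
import math

def angle_sigma_after(arw, t_seconds):
    """Standard deviation of the integrated angle error caused by white noise
    alone: sigma(t) = ARW * sqrt(t), per the last term of (21).
    arw is the angle random walk in deg/sqrt(s)."""
    return arw * math.sqrt(t_seconds)

for t in (1.0, 100.0, 1000.0):
    print(t, angle_sigma_after(1.0, t))   # 1.0, 10.0, ~31.62 degrees
```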
Manufacturers often provide this information as the power spectral density (PSD) of the white noise process in ((°/h)²/Hz) or equivalent, in which case it is necessary to take its square root to obtain $\sigma_v$, or as the root PSD in (°/h/Hz^{1/2}), which is equivalent to $\sigma_v$. Sometimes, it is even provided as the PSD of the random walk process, not the white noise, in units of (°/h) or equivalent. It is then necessary to multiply this number by the square root of the sampling interval $\Delta t$, or divide it by the square root of the sampling frequency, to obtain the desired $\sigma_v$ value.
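As a minimal sketch, the three unit conversions just described can be written as follows (function names are illustrative, not part of the released library):

```python
import math

def sigma_v_from_psd(psd):
    """White-noise PSD, e.g. in (deg/h)^2/Hz: sigma_v is its square root."""
    return math.sqrt(psd)

def sigma_v_from_root_psd(root_psd):
    """Root PSD, e.g. in (deg/h)/sqrt(Hz): already equal to sigma_v."""
    return root_psd

def sigma_v_from_random_walk_psd(rw_psd, dt):
    """PSD of the random walk process, e.g. in deg/h: multiply by sqrt(dt),
    i.e., divide by the square root of the sampling frequency."""
    return rw_psd * math.sqrt(dt)

print(sigma_v_from_psd(0.25), sigma_v_from_random_walk_psd(0.5, 0.01))  # 0.5 0.05
```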

2.4. Obtainment of Bias Drift Values

This section describes the meaning of the bias instability $\sigma_u$ (also known as bias stability or bias drift) and how to obtain it from sensor specifications. As in the previous section, the discussion is centered on gyroscopes, but it is fully applicable to accelerometers as well. Bias instability can be defined as the potential of the sensor error to stay within a certain range for a certain time [42]. A small number of manufacturers directly provide sensor output changes over time, which directly relate to the bias instability (also known as in-run bias variation, bias drift, or rate random walk) per the second term of (18). If provided with an angular rate change of x (°/s) (1σ) in t (s), then $\sigma_u$ can be obtained as follows [34,43]:
$$ \sigma_{u} = \frac{x}{t^{1/2}} \qquad (25) $$
As the bias drift is responsible for the growth of sensor error with time (Figure 2 and Figure 3), manufacturers more commonly provide bias stability measurements that describe how the bias of a device may change over a specified period of time [35], typically around 100 s. Bias stability is usually specified as a 1σ value with units (°/h) or (°/s), which can be interpreted as follows according to (16)–(18). If the sensor error (or bias) is known at a given time t, then a 1σ bias stability of 0.01 °/h over 100 s means that the bias at time t + 100 s is a random variable with the mean error at time t and standard deviation 0.01 °/h, and expression (25) can be used to obtain $\sigma_u$. As the bias behaves as a random walk over time whose standard deviation grows proportionally to the square root of time, the bias stability is sometimes referred to as a bias random walk.
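Expression (25) is a direct quotient; for example, a bias change specified as 0.01 °/h (1σ) over 100 s yields (hypothetical helper name):

```python
import math

def sigma_u_from_bias_stability(x, t):
    """Bias drift sigma_u from a 1-sigma bias change of x over t seconds,
    per eq. (25): sigma_u = x / sqrt(t)."""
    return x / math.sqrt(t)

print(sigma_u_from_bias_stability(0.01, 100.0))  # 0.001
```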
In reality, bias fluctuations do not really behave as a random walk. If they did, the uncertainty in the bias of a device would grow without bound as the timespan increased, which is not the case. In practice, the bias is constrained to be within some range, and therefore, the random walk model is only a good approximation to the true process for short periods of time [35].

2.5. Platform, Accelerometers, and Gyroscopes Frames

The following sections make use of three different reference frames to describe the readings of accelerometers and gyroscopes:
  • The platform frame  F P is a Cartesian reference system with its origin located at the IMU reference point (Section 2.8) and its three axes i 1 P , i 2 P , i 3 P forming a right-hand system that is loosely aligned with the aircraft body axes, so they point in the general directions of the aircraft fuselage (forward), aircraft wings (rightwards), and downward, respectively [2,31,32].
    A proper definition of the platform frame is indispensable for navigation, as the calibrated outputs of the accelerometers and gyroscopes are based on it (Section 2.6 and Section 2.7). The F P frame can be obtained from the body frame F B by a rotation best described by the Euler angles ϕ BP = ψ P , θ P , ξ P T (these Euler angles correspond to the 3–2–1 (yaw, pitch, roll) convention employed in aeronautics) followed by a translation T BP B (Section 2.8) from the aircraft center of mass to the IMU reference point.
  • The accelerometers frame  F A is a non-orthogonal reference system also centered at the IMU reference point [2,31,32]. The basis vectors i 1 A , i 2 A , i 3 A are aligned with each of the three accelerometer’s sensing axes (each accelerometer hence only senses the specific force component parallel to its sensing axis) (Section 2.6), but they are not orthogonal among them due to manufacturing inaccuracies. This implies that the angles between the F A and F P axes are very small.
    It is always possible, with no loss of generality, to impose that $i_1^P$ coincides with $i_1^A$ and that $i_2^P$ is located in the plane defined by $i_1^A$ and $i_2^A$. If this is the case, $i_1^A \perp i_2^P$, $i_1^A \perp i_3^P$, and $i_2^A \perp i_3^P$, and the relative attitude between the F P and F A frames can be defined by three independent small rotations.
    The i 2 A axis can be obtained from i 2 P by means of a small rotation α ACC , 3 about i 3 P .
    The i 3 A axis can be obtained from i 3 P by two small rotations: α ACC , 1 about i 1 P and α ACC , 2 about i 2 P .
    Although the exact relationships can be obtained [31], and given that the angles are very small, it is possible to consider $\cos \alpha_{ACC,i} \approx 1$, $\sin \alpha_{ACC,i} \approx \alpha_{ACC,i}$, and $\alpha_{ACC,i} \cdot \alpha_{ACC,j} \approx 0 \;\forall\, i, j \in \{1, 2, 3\},\, i \neq j$, resulting in the following transformations between free vectors viewed in the platform ($v^P$) and accelerometer ($v^A$) frames, respectively (as F A is not orthogonal, the transformation matrices are denoted with ⋆ to indicate that they are not proper rotation matrices):
    $$ v^{P} = R_{PA}^{\star}\, v^{A} = \begin{bmatrix} 1 & 0 & 0 \\ \alpha_{ACC,3} & 1 & 0 \\ -\alpha_{ACC,2} & \alpha_{ACC,1} & 1 \end{bmatrix} v^{A} \qquad (26) $$
    $$ v^{A} = R_{AP}^{\star}\, v^{P} = \begin{bmatrix} 1 & 0 & 0 \\ -\alpha_{ACC,3} & 1 & 0 \\ \alpha_{ACC,2} & -\alpha_{ACC,1} & 1 \end{bmatrix} v^{P} \qquad (27) $$
  • The gyroscopes frame  F Y is similar to the accelerometers frame F A defined above, but it is aligned with the gyroscopes’ sensing axes instead of those of the accelerometers [2,31,32]. It is also a non-orthogonal reference system centered at the IMU reference point, but no simplifications can be made about the relative orientation of its axes i 1 Y , i 2 Y , i 3 Y with respect to those of F P , so their relative attitude is defined by six small rotations α GYR , ij i , j 1 , 2 , 3 , i j , where α GYR , ij is the rotation of i i Y about i j P .
    An approach similar to that employed with accelerometers leads to the following transformations between free vectors viewed in the platform ( v P ) and gyroscope ( v Y ) frames:
    $$ v^{P} = R_{PY}^{\star}\, v^{Y} = \begin{bmatrix} 1 & -\alpha_{GYR,23} & \alpha_{GYR,32} \\ \alpha_{GYR,13} & 1 & -\alpha_{GYR,31} \\ -\alpha_{GYR,12} & \alpha_{GYR,21} & 1 \end{bmatrix} v^{Y} \qquad (28) $$
    $$ v^{Y} = R_{YP}^{\star}\, v^{P} = \begin{bmatrix} 1 & \alpha_{GYR,23} & -\alpha_{GYR,32} \\ -\alpha_{GYR,13} & 1 & \alpha_{GYR,31} \\ \alpha_{GYR,12} & -\alpha_{GYR,21} & 1 \end{bmatrix} v^{P} \qquad (29) $$
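To first order in the misalignment angles, each pair of small-angle matrices above is mutually inverse, which can be verified numerically. The sketch below (Python; sign placement follows this section's small-angle construction and should be checked against [31]) multiplies each pair and confirms that the residual with respect to the identity is second order in the (sub-milliradian) angles:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def max_identity_residual(m):
    return max(abs(m[i][j] - (1.0 if i == j else 0.0))
               for i in range(3) for j in range(3))

# accelerometer pair, eqs. (26)-(27): a1, a2, a3 stand for alpha_ACC,1..3
a1, a2, a3 = 1e-4, -2e-4, 3e-4
r_pa = [[1.0, 0.0, 0.0], [a3, 1.0, 0.0], [-a2, a1, 1.0]]
r_ap = [[1.0, 0.0, 0.0], [-a3, 1.0, 0.0], [a2, -a1, 1.0]]

# gyroscope pair, eqs. (28)-(29): g12..g32 stand for alpha_GYR,ij
g12, g13, g21, g23, g31, g32 = 1e-4, 2e-4, -1e-4, 3e-4, -2e-4, 1.5e-4
r_py = [[1.0, -g23, g32], [g13, 1.0, -g31], [-g12, g21, 1.0]]
r_yp = [[1.0, g23, -g32], [-g13, 1.0, g31], [g12, -g21, 1.0]]

# residuals are second order in the misalignment angles (~1e-8 here)
print(max_identity_residual(matmul(r_pa, r_ap)),
      max_identity_residual(matmul(r_py, r_yp)))
```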

2.6. Accelerometer Triad Sensor Error Model

An IMU is equipped with an accelerometer triad composed of three individual accelerometers, each of which measures the projection of the specific force over its sensing axis as described in Section 2.2, while incurring an error e BW,ACC that can be modeled as a combination of bias offset, bias drift, and white noise (16). The three accelerometers can be considered infinitesimally small and located at the IMU reference point, which is defined as the intersection between the sensing axes of the three accelerometers. As the accelerometer frame F A is centered at the IMU reference point and its three non-orthogonal axes coincide with the accelerometers' sensing axes, (30) joins together the measurements of the three individual accelerometers:
$$ \tilde{f}_{IA}^{A} = S_{ACC}\, f_{IA}^{A} + e_{BW,ACC}^{A} \qquad (30) $$
where f IA A is the specific force viewed in the accelerometer frame F A , f ˜ IA A represents its measurement also viewed in F A , e BW , ACC A is the error introduced by each accelerometer (16), and S ACC is a square diagonal matrix containing the scale factor errors s ACC , 1 , s ACC , 2 , s ACC , 3 for each accelerometer (Section 2.1). It is however preferred to obtain an expression in which the specific forces are viewed in the orthogonal platform frame F P instead of the accelerometer frame F A . As both share the same origin,
$$ \tilde{f}_{IP}^{P} = R_{PA}^{\star} \big( S_{ACC}\, R_{AP}^{\star}\, f_{IP}^{P} + e_{BW,ACC}^{A} \big) \qquad (31) $$
where R PA and R AP , defined by (26) and (27), contain the cross-coupling errors α ACC , 1 , α ACC , 2 , α ACC , 3 generated by the misalignment of the accelerometer sensing axes. The scale factor and cross-coupling errors contain fixed and temperature-dependent error contributions (refer to Section 2.1) that can be modeled as normal random variables:
$$ s_{ACC,i} = N\big(1,\, s_{ACC}^{2}\big) \quad \forall\, i \in \{1, 2, 3\} \qquad (32) $$
$$ \alpha_{ACC,i} = N\big(0,\, \alpha_{ACC}^{2}\big) \quad \forall\, i \in \{1, 2, 3\} \qquad (33) $$
where s ACC and α ACC can be obtained from the sensor specifications. Equation (31) can be transformed to make it more useful by defining the accelerometer scale and cross-coupling error matrix M ACC :
$$ M_{ACC} = R_{PA}^{\star}\, S_{ACC}\, R_{AP}^{\star} = \begin{bmatrix} m_{ACC,11} & 0 & 0 \\ m_{ACC,21} & m_{ACC,22} & 0 \\ m_{ACC,31} & m_{ACC,32} & m_{ACC,33} \end{bmatrix} \approx \begin{bmatrix} s_{ACC,1} & 0 & 0 \\ \alpha_{ACC,3}\,(s_{ACC,1} - s_{ACC,2}) & s_{ACC,2} & 0 \\ \alpha_{ACC,2}\,(s_{ACC,3} - s_{ACC,1}) & \alpha_{ACC,1}\,(s_{ACC,2} - s_{ACC,3}) & s_{ACC,3} \end{bmatrix} \qquad (34) $$
Considering that the scale and cross-coupling errors are uncorrelated and very small, and taking into account the expressions for the mean and variance of the sum and product of two random variables [44], the different components m ACC , ij of M ACC can be obtained as follows i , j 1 , 2 , 3 :
$$ m_{ACC,ij} = N\big(1,\, s_{ACC}^{2}\big) \quad i = j \qquad (35) $$
$$ m_{ACC,ij} = N\big(0,\, 2\,\alpha_{ACC}^{2}\, s_{ACC}^{2}\big) = N\big(0,\, m_{ACC}^{2}\big) \quad i > j \qquad (36) $$
$$ m_{ACC,ij} = 0 \quad i < j \qquad (37) $$
Let us also define the accelerometer error transformation matrix N ACC as
$$ N_{ACC} = R_{PA}^{\star}\, S_{ACC} = \begin{bmatrix} n_{ACC,11} & 0 & 0 \\ n_{ACC,21} & n_{ACC,22} & 0 \\ n_{ACC,31} & n_{ACC,32} & n_{ACC,33} \end{bmatrix} = \begin{bmatrix} s_{ACC,1} & 0 & 0 \\ \alpha_{ACC,3}\, s_{ACC,1} & s_{ACC,2} & 0 \\ -\alpha_{ACC,2}\, s_{ACC,1} & \alpha_{ACC,1}\, s_{ACC,2} & s_{ACC,3} \end{bmatrix} \qquad (38) $$
A process similar to that employed above leads to:
$$ n_{ACC,ij} = N\big(1,\, s_{ACC}^{2}\big) \quad i = j \qquad (39) $$
$$ n_{ACC,ij} = N\big(0,\, \alpha_{ACC}^{2}\,(1 + s_{ACC}^{2})\big) \approx N\big(0,\, \alpha_{ACC}^{2}\big) \quad i > j \qquad (40) $$
$$ n_{ACC,ij} = 0 \quad i < j \qquad (41) $$
Taking into account the expressions for the mean and variance of the sum and product of two random variables [44], and knowing that the cross-coupling errors are very small ($1 + \alpha_{ACC}^{2} \approx 1$), it can be proven that the bias and white noise errors viewed in the platform frame F P respond to an expression similar to (16):
$$ e_{BW,ACC}^{P} = e_{BW,ACC}^{P}(s\,\Delta t_{SENSED}) = e_{BW,ACC}^{P}(s\,\Delta t) = N_{ACC}\, e_{BW,ACC}^{A} = B_{0,ACC}\, N_{u0,ACC} + \sigma_{u,ACC}\, \Delta t^{1/2} \sum_{i=1}^{s} N_{ui,ACC} + \sigma_{v,ACC}\, \Delta t^{-1/2}\, N_{vs,ACC} \qquad (42) $$
where each $N_{u,ACC}$ and $N_{v,ACC}$ is a random vector composed of three independent standard normal random variables N(0, 1). Note that as the bias drift is mostly a warm-up process that stabilizes itself after a few minutes of operation, the random walk within (42) is not allowed to vary freely but is restricted to a band of width $\pm 100\, \sigma_{u,ACC}\, \Delta t^{1/2}$. The final model for the accelerometer measurements viewed in F P results in
$$ \tilde{f}_{IP}^{P} = M_{ACC}\, f_{IP}^{P} + e_{BW,ACC}^{P} \qquad (43) $$
where M ACC is described in (34) through (37) and e BW , ACC P is provided by (42). This model relies on inputs for the bias offset B 0 ACC , bias drift σ u ACC , white noise σ v ACC , scale factor error s ACC , and cross-coupling error m ACC . Section 6.2 provides an example on how to obtain these values from the data sheet provided by the accelerometer manufacturer.
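A compact sketch of the resulting triad model (43) is shown below (Python for illustration; the released implementation is C++). It draws M ACC per (35)–(37) and accumulates the bias drift random walk at every measurement; the band limiting of the drift and the seeding scheme of Section 6 are omitted, and the numeric inputs are placeholders rather than values taken from any data sheet:

```python
import math
import random

def make_m_acc(s_acc, m_acc, rng):
    """Draw a scale-factor / cross-coupling matrix M_ACC per eqs. (35)-(37):
    diagonal N(1, s_acc^2), lower triangle N(0, m_acc^2), upper triangle 0."""
    m = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        m[i][i] = rng.gauss(1.0, s_acc)
        for j in range(i):
            m[i][j] = rng.gauss(0.0, m_acc)
    return m

class AccelerometerTriad:
    """Discrete measurement model of eq. (43): f~ = M_ACC f + e_BW."""
    def __init__(self, dt, b0, sigma_u, sigma_v, s_acc, m_acc, seed=0):
        rng = random.Random(seed)
        self.rng, self.dt = rng, dt
        self.m = make_m_acc(s_acc, m_acc, rng)
        self.bias = [b0 * rng.gauss(0.0, 1.0) for _ in range(3)]  # run-to-run
        self.drift = [0.0, 0.0, 0.0]
        self.su, self.sv = sigma_u, sigma_v

    def measure(self, f):
        out = []
        for i in range(3):
            self.drift[i] += self.su * math.sqrt(self.dt) * self.rng.gauss(0.0, 1.0)
            noise = self.sv / math.sqrt(self.dt) * self.rng.gauss(0.0, 1.0)
            out.append(sum(self.m[i][j] * f[j] for j in range(3))
                       + self.bias[i] + self.drift[i] + noise)
        return out

acc = AccelerometerTriad(dt=0.01, b0=1.6e-2, sigma_u=4e-3, sigma_v=1e-3,
                         s_acc=1e-3, m_acc=5e-4, seed=7)
print(acc.measure([0.0, 0.0, -9.8]))
```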

2.7. Gyroscopes Triad Sensor Error Model

The IMU is also equipped with a triad of gyroscopes, each of which measures the projection of the inertial angular velocity over its sensing axis as described in Section 2.2. The obtainment of the gyroscope triad model is fully analogous to that of the accelerometers in the previous section, with the added difficulty that the transformation between the gyroscope frame F Y and platform frame F P relies on six small angles instead of three. The starting point hence is:
$$ \tilde{\omega}_{IP}^{P} = R_{PY}^{\star} \big( S_{GYR}\, R_{YP}^{\star}\, \omega_{IP}^{P} + e_{BW,GYR}^{Y} \big) \qquad (44) $$
where $\omega_{IP}^{P}$ is the inertial angular velocity viewed in the platform frame F P , $\tilde{\omega}_{IP}^{P}$ represents its measurement also viewed in F P , $e_{BW,GYR}^{Y}$ is the error introduced by each gyroscope (16), $S_{GYR}$ is a square diagonal matrix containing the scale factor errors $s_{GYR,1}$, $s_{GYR,2}$, $s_{GYR,3}$, and $R_{PY}^{\star}$ and $R_{YP}^{\star}$, defined by (28) and (29), contain the cross-coupling errors $\alpha_{GYR,12}$, $\alpha_{GYR,21}$, $\alpha_{GYR,13}$, $\alpha_{GYR,31}$, $\alpha_{GYR,23}$, $\alpha_{GYR,32}$ generated by the misalignment of the gyroscope sensing axes.
Operating in the same way as in Section 2.6 leads to:
$$ e_{BW,GYR}^{P} = e_{BW,GYR}^{P}(s\,\Delta t_{SENSED}) = e_{BW,GYR}^{P}(s\,\Delta t) = B_{0,GYR}\, N_{u0,GYR} + \sigma_{u,GYR}\, \Delta t^{1/2} \sum_{i=1}^{s} N_{ui,GYR} + \sigma_{v,GYR}\, \Delta t^{-1/2}\, N_{vs,GYR} \qquad (45) $$
$$ \tilde{\omega}_{IP}^{P} = M_{GYR}\, \omega_{IP}^{P} + e_{BW,GYR}^{P} \qquad (46) $$
where each $N_{ui,GYR}$ and $N_{vs,GYR}$ is a random vector composed of three independent standard normal random variables N(0, 1). As in the case of the accelerometers, the random walk within (45) representing the bias drift is not allowed to vary freely but is restricted to a band of width $\pm 100\, \sigma_{u,GYR}\, \Delta t^{1/2}$. This model relies on inputs for the bias offset $B_{0,GYR}$, bias drift $\sigma_{u,GYR}$, white noise $\sigma_{v,GYR}$, scale factor error $s_{GYR}$, and cross-coupling error $m_{GYR}$, which can be obtained from the gyroscope specifications. An example of this process is included in Section 6.2. The gyroscope scale and cross-coupling error matrix $M_{GYR}$ responds to:
$$ M_{GYR} = R_{PY}^{\star}\, S_{GYR}\, R_{YP}^{\star} = \begin{bmatrix} m_{GYR,11} & m_{GYR,12} & m_{GYR,13} \\ m_{GYR,21} & m_{GYR,22} & m_{GYR,23} \\ m_{GYR,31} & m_{GYR,32} & m_{GYR,33} \end{bmatrix} \approx \begin{bmatrix} s_{GYR,1} & \alpha_{GYR,23}\,(s_{GYR,1} - s_{GYR,2}) & \alpha_{GYR,32}\,(s_{GYR,3} - s_{GYR,1}) \\ \alpha_{GYR,13}\,(s_{GYR,1} - s_{GYR,2}) & s_{GYR,2} & \alpha_{GYR,31}\,(s_{GYR,2} - s_{GYR,3}) \\ \alpha_{GYR,12}\,(s_{GYR,3} - s_{GYR,1}) & \alpha_{GYR,21}\,(s_{GYR,2} - s_{GYR,3}) & s_{GYR,3} \end{bmatrix} \qquad (47) $$
$$ m_{GYR,ij} = N\big(1,\, s_{GYR}^{2}\big) \quad i = j \qquad (48) $$
$$ m_{GYR,ij} = N\big(0,\, 2\,\alpha_{GYR}^{2}\, s_{GYR}^{2}\big) = N\big(0,\, m_{GYR}^{2}\big) \quad i \neq j \qquad (49) $$

2.8. Mounting of Inertial Sensors

Equations (43) and (46) contain the relationships between the specific force f IP P and inertial angular velocity ω IP P and their measurements f ˜ IP P , ω ˜ IP P when evaluated and viewed in the platform frame F P . However, from the point of view of the navigation system, both magnitudes need to be evaluated and viewed in the body frame F B instead of F P . These equations thus need to be modified so they relate f IB B with f ˜ IB B as well as ω IB B with ω ˜ IB B , respectively, as described in Section 2.9 below. To do that, it is necessary to define the relative pose (position plus attitude) between the F P and F B frames, and to distinguish between the true position T BP B and attitude ϕ BP , and their estimations by the IMU processor ( T ^ BP B and ϕ ^ BP ). Note that the IMU, represented by the platform frame F P , should be mounted in the aircraft as close as possible to the center of gravity (this reduces errors, as described in Section 2.9), and it is loosely aligned with the aircraft body axes.
To increase the realism, this article assumes that the real displacement T BP B between the two frames is deterministic, while the relative rotation ϕ BP = ψ P , θ P , ξ P T is stochastic. In this way, each simulation run exhibits a slightly different IMU platform attitude with respect to the aircraft body:
  • As the IMU reference point is fixed with respect to the structure but the aircraft center of mass slowly moves as the fuel load diminishes, it is possible to establish a linear model that provides the displacement between the origins of both frames as a function of the aircraft mass. Its inputs are the aircraft masses with the fuel tank fully loaded and empty ($m_{full}$ and $m_{empty}$), together with the corresponding displacements between the IMU reference point and the aircraft center of mass ($T_{BP,full}^{B}$ and $T_{BP,empty}^{B}$):
    $$ T_{BP}^{B} = f(m) = T_{BP,full}^{B} + \frac{m_{full} - m}{m_{full} - m_{empty}} \big( T_{BP,empty}^{B} - T_{BP,full}^{B} \big) \qquad (50) $$
  • The platform Euler angles respond to the stochastic model provided by (51), in which each specific Euler angle is obtained as the product of the user-selected standard deviations ( σ ψ P , σ θ P , σ ξ P ) by a single realization of a standard normal random variable N 0 , 1 ( N ψ P , N θ P , and N ξ P ).
    $$ \phi_{BP} = \big[\, \sigma_{\psi P}\, N_{\psi P},\;\; \sigma_{\theta P}\, N_{\theta P},\;\; \sigma_{\xi P}\, N_{\xi P} \,\big]^{T} \qquad (51) $$
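The two models above can be transcribed directly (Python sketch; the masses, lever arms, and standard deviations are hypothetical placeholders):

```python
import random

def t_bp(m, m_full, m_empty, t_full, t_empty):
    """Lever arm as a function of aircraft mass, eq. (50): linear
    interpolation between the full-tank and empty-tank displacements."""
    k = (m_full - m) / (m_full - m_empty)
    return [tf + k * (te - tf) for tf, te in zip(t_full, t_empty)]

def phi_bp(sigma_psi, sigma_theta, sigma_xi, rng):
    """Stochastic platform Euler angles, eq. (51): one draw per execution."""
    return [sigma_psi * rng.gauss(0.0, 1.0),
            sigma_theta * rng.gauss(0.0, 1.0),
            sigma_xi * rng.gauss(0.0, 1.0)]

# hypothetical values: 600/500 kg full/empty, lever arm moving 4 cm aft
print(t_bp(550.0, 600.0, 500.0, [1.20, 0.0, 0.05], [1.24, 0.0, 0.05]))
print(phi_bp(0.002, 0.002, 0.002, random.Random(11)))  # radians
```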
Once the real pose between the F P and F B frames is established, it is necessary to specify its estimation employed by the IMU processor in the comprehensive model introduced in Section 2.9, which is discussed in Section 5.2. Stochastic models are employed for both the translation T ^ BP B and rotation ϕ ^ BP , changing their values from one execution to the next:
T ^ BP B = T BP B + σ T ^ BP B N T ^ BP , 1 P , σ T ^ BP B N T ^ BP , 2 P , σ T ^ BP B N T ^ BP , 3 P T
ϕ ^ BP = ϕ BP + σ ϕ ^ BP N ψ ^ P , σ ϕ ^ BP N θ ^ P , σ ϕ ^ BP N ξ ^ P T
As in the previous case, the model relies on two user-selected standard deviations ( σ T ^ BP B and σ ϕ ^ BP ), as well as six realizations of a standard normal random variable N 0 , 1 , which are denoted as N ψ ^ P , N θ ^ P , N ξ ^ P , N T ^ BP , 1 P , N T ^ BP , 2 P , and N T ^ BP , 3 P . Section 6.2 suggests values for the five required settings ( σ ψ P , σ θ P , σ ξ P , σ T ^ BP B , σ ϕ ^ BP ), although they can be adjusted by the user.
T BP can be considered quasi-stationary, as it varies slowly with the aircraft mass, while the relative orientation of the axes ϕ BP remains constant because the IMU is rigidly attached to the aircraft structure. Although Euler angles have been employed in this section, from this point on, it is more practical to employ the rotation matrix R BP to represent the rotation between two different frames [45]. The time derivatives of T BP and R BP are hence zero:
$$ \dot{T}_{BP} = \dot{R}_{BP} = 0 \;\;\Rightarrow\;\; v_{BP} = a_{BP} = \omega_{BP} = \alpha_{BP} = 0 \qquad (54) $$

2.9. Comprehensive Inertial Sensor Error Model

Two considerations are required to establish the measurement equations for the inertial sensors viewed in the body frame F B . First, let us apply the composition rules of Appendix A.6 considering F I as F 0 , F B as F 1 , and F P as F 2 , which results in:
$$ \omega_{IP} = \omega_{IB} \qquad (55) $$
$$ \alpha_{IP} = \alpha_{IB} \qquad (56) $$
$$ v_{IP} = v_{IB} + \widehat{\omega}_{IB}\, T_{BP} \qquad (57) $$
$$ a_{IP} = a_{IB} + \widehat{\alpha}_{IB}\, T_{BP} + \widehat{\omega}_{IB}\, \widehat{\omega}_{IB}\, T_{BP} \qquad (58) $$
where $\widehat{v}$ denotes the skew-symmetric cross-product matrix associated with a vector $v$.
Second, it is also necessary to consider that as R ^ BP is a rotation matrix in which all rows and columns are unitary vectors, the projection of the F P frame bias and white noise errors e BW , ACC P and e BW , GYR P onto the F B frame does not change their stochastic properties:
$$ e_{BW,ACC}^{B}(s\,\Delta t) = \hat{R}_{BP}\, e_{BW,ACC}^{P} = B_{0,ACC}\, N_{u0,ACC} + \sigma_{u,ACC}\, \Delta t^{1/2} \sum_{i=1}^{s} N_{ui,ACC} + \sigma_{v,ACC}\, \Delta t^{-1/2}\, N_{vs,ACC} \qquad (59) $$
$$ e_{BW,GYR}^{B}(s\,\Delta t) = \hat{R}_{BP}\, e_{BW,GYR}^{P} = B_{0,GYR}\, N_{u0,GYR} + \sigma_{u,GYR}\, \Delta t^{1/2} \sum_{i=1}^{s} N_{ui,GYR} + \sigma_{v,GYR}\, \Delta t^{-1/2}\, N_{vs,GYR} \qquad (60) $$
As the inertial angular velocity does not change when evaluated in the F B and F P frames (55), its measurement in the body frame can be derived from (46) by first projecting it from F B to F P based on the real rotation matrix R BP and then projecting back the measurement into F B based on the estimated rotation matrix R ^ BP . The bias and white noise error is also projected according to (60):
$$ \tilde{\omega}_{IB}^{B} = \hat{R}_{BP}\, M_{GYR}\, R_{BP}^{T}\, \omega_{IB}^{B} + e_{BW,GYR}^{B} \qquad (61) $$
The expression for the specific force measurement is significantly more complex because the back and forth transformations of the specific force between the F B and F P frames need to consider the influence of the lever arm T BP , as indicated in (58). The additional terms introduce errors in the measurements, so as indicated in Section 2.8, it is desirable to locate the IMU as close as possible to the aircraft center of mass.
$$ \tilde{f}_{IB}^{B} = \hat{R}_{BP}\, M_{ACC}\, R_{BP}^{T} \big( f_{IB}^{B} + \widehat{\alpha}_{IB}^{B}\, T_{BP}^{B} + \widehat{\omega}_{IB}^{B}\, \widehat{\omega}_{IB}^{B}\, T_{BP}^{B} \big) - \widehat{\hat{\alpha}}_{IB}^{B}\, \hat{T}_{BP}^{B} - \widehat{\hat{\omega}}_{IB}^{B}\, \widehat{\hat{\omega}}_{IB}^{B}\, \hat{T}_{BP}^{B} + e_{BW,ACC}^{B} \qquad (62) $$
Note that this expression cannot be directly evaluated as the estimated values for the inertial angular velocity and acceleration ( ω ^ IB B , α ^ IB B ) are unknown by the IMU until obtained by the navigation filter. The IMU can however rely on the gyroscope readings, directly replacing ω ^ IB B with ω ˜ IB B and computing α ˜ IB B based on the difference between the present and previous ω ˜ IB B readings, resulting in:
$$ \tilde{f}_{IB}^{B} = \hat{R}_{BP}\, M_{ACC}\, R_{BP}^{T} \big( f_{IB}^{B} + \widehat{\alpha}_{IB}^{B}\, T_{BP}^{B} + \widehat{\omega}_{IB}^{B}\, \widehat{\omega}_{IB}^{B}\, T_{BP}^{B} \big) - \widehat{\tilde{\alpha}}_{IB}^{B}\, \hat{T}_{BP}^{B} - \widehat{\tilde{\omega}}_{IB}^{B}\, \widehat{\tilde{\omega}}_{IB}^{B}\, \hat{T}_{BP}^{B} + e_{BW,ACC}^{B} \qquad (63) $$
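The lever-arm terms appearing in the specific force measurement reduce to products of skew-symmetric matrices. A minimal sketch (Python; the hat operator implemented as the cross-product matrix) computes the specific-force increment sensed at an IMU displaced from the center of mass:

```python
def skew(v):
    """Skew-symmetric (cross-product) matrix: the hat operator of (57)-(58)."""
    return [[0.0, -v[2], v[1]],
            [v[2], 0.0, -v[0]],
            [-v[1], v[0], 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def lever_arm_terms(omega, alpha, t_bp):
    """alpha^ T_BP + omega^ omega^ T_BP: the specific-force increment sensed
    when the IMU reference point is displaced T_BP from the center of mass."""
    w = skew(omega)
    return [a + b for a, b in zip(matvec(skew(alpha), t_bp),
                                  matvec(w, matvec(w, t_bp)))]

# a steady 0.5 rad/s yaw rate with a 1.2 m forward lever arm produces a
# centripetal component of magnitude omega^2 * r = 0.3 m/s^2
print(lever_arm_terms([0.0, 0.0, 0.5], [0.0, 0.0, 0.0], [1.2, 0.0, 0.0]))
```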
Table 5 lists the error sources contained in the comprehensive inertial sensor error model represented by (61) and (63). The first two columns list the different error sources, while the third column specifies their origin according to the criterion established in the first paragraph of Section 2.1. The section where each error is described appears in the fourth column, which is followed by the seeds (refer to Section 6 for the meaning of the terms Υ i,A and Υ j,F) employed to ensure the variability of results for different aircraft (Υ i,A) as well as different flights (Υ j,F).
Note that all the required error sources (second column) need to be specified by the user. As an example, Section 6.2 suggests values appropriate for a low SWaP aircraft. It is worth pointing out that all errors are modeled as stochastic variables or processes (with the exception of the T BP displacement between the body and platform frames, which is deterministic), as expressions (61) and (63) rely on the errors provided by (59) and (60), the scale and cross-coupling matrices given by (34) and (47), and the transformations given by (50) through (53).
In the case of the accelerometer triad, the stochastic nature of the fixed and run-to-run error contributions to the model relies on three realizations of normal distributions for the bias offset, three for the scale factor errors, three for the cross-coupling errors, and nine for the mounting errors, while the in-run error contributions require three realizations each for the bias drift and system noise at every discrete sensor measurement. The gyroscope triad is similar but requires six realizations to model the cross-coupling errors instead of three while using the same six realizations as the accelerometer triad to model the true and estimated rotation between the F B and F P frames.
Expressions (61) and (63) can be rewritten to show the measurements as functions of the full errors ( E ACC , E GYR ), which represent all the errors introduced by the inertial sensors with the exception of white noise.
$$ \tilde{f}_{IB}^{B}(s\,\Delta t) = f_{IB}^{B}(s\,\Delta t) + E_{ACC}(s\,\Delta t) + \sigma_{v,ACC}\, \Delta t^{-1/2}\, N_{vs,ACC} \qquad (64) $$
$$ \tilde{\omega}_{IB}^{B}(s\,\Delta t) = \omega_{IB}^{B}(s\,\Delta t) + E_{GYR}(s\,\Delta t) + \sigma_{v,GYR}\, \Delta t^{-1/2}\, N_{vs,GYR} \qquad (65) $$

3. Non-Inertial Sensors

This section describes the different non-inertial sensors usually installed onboard a fixed wing autonomous aircraft: a triad of magnetometers to measure the Earth's magnetic field, a GNSS receiver that provides absolute position and velocity measurements, and the air data system, which in addition to the pressure altitude and temperature also provides a measurement of the airspeed and the airflow angles.

3.1. Magnetometers

Magnetometers measure magnetic field intensity along a given direction and are very useful for estimating the aircraft heading. Although other types exist, magnetoinductive and magnetoresistive sensors are generally employed for navigation due to their accuracy and small size [4,30]. As with the inertial sensors, three orthogonal magnetometers are usually employed in a strapdown configuration to measure the magnetic field with respect to the body frame F B .
Unfortunately, magnetometers measure not only the Earth's magnetic field B but also the field generated by the aircraft's permanent magnets and electrical equipment (known as hard iron magnetism), as well as the magnetic field disturbances generated by the aircraft's ferrous materials (soft iron magnetism). For that reason, the magnetometers should be placed in a location inside the aircraft that minimizes these errors. On the positive side, magnetometers do not exhibit the bias instability present in inertial sensors, and the error of an individual sensor can be properly modeled by the combination of bias offset and white noise. A triad of magnetometers capable of measuring the magnetic field in three directions adds the same scale factor (nonlinearity) and cross-coupling (misalignment) errors as those present in the inertial sensors, together with the transformation between the magnetic axes and the body ones.
Modeling the behavior of a triad of magnetometers is simpler but less precise than that of inertial sensors, as the effect of the fixed hard iron magnetism is indistinguishable from that of the run-to-run bias offset, while the fixed effect of soft iron magnetism is indistinguishable from that of the scale factor and cross-coupling error matrix. This has several consequences. First, magnetometers cannot be calibrated in the laboratory before being mounted in the aircraft, as inertial sensors are (Section 5.1); they are instead calibrated once attached to the aircraft by a process known as swinging (Section 5.3), which is less precise, as the aircraft attitude during swinging cannot be determined with as much accuracy as in a laboratory setting. Second, defining a magnetic platform frame and then transforming the results into body axes serves no purpose: the magnetometer readings are only valid (that is, they contain the effects of hard and soft iron magnetism) once the sensors are attached to the aircraft, and then they can be directly measured in body axes. Third, percentage-wise, the errors induced by the magnetometers are bigger than those of the inertial sensors. The implemented model is the following:
$$ \tilde{B}^{B} = B_{HI,MAG} + B_{0,MAG} + M_{MAG}\, R_{BN}\, B^{N} + e_{W,MAG}^{B} \qquad (66) $$
$$ \tilde{B}^{B}(s\,\Delta t) = B_{HI,MAG}\, N_{HI,MAG} + B_{0,MAG}\, N_{u0,MAG} + M_{MAG}\, R_{BN}\, B^{N} + \sigma_{v,MAG}\, \Delta t^{-1/2}\, N_{vs,MAG} \qquad (67) $$
where $\tilde{B}^{B}$ is the measurement viewed in F B , $B_{HI,MAG}$ is the fixed hard iron magnetism, $B_{0,MAG}$ is the run-to-run bias offset, $M_{MAG}$ is a fixed matrix combining the effects of soft iron magnetism with the scale factor and cross-coupling errors, and $B^{N}$ is the real magnetic field including local anomalies. $N_{HI,MAG}$, $N_{u0,MAG}$, and $N_{vs,MAG}$ are uncorrelated random vectors of size three, each composed of three uncorrelated standard normal random variables N(0, 1). The soft iron, scale factor, and cross-coupling matrix $M_{MAG}$ does not vary with time and is computed as follows:
$$ M_{MAG} = \begin{bmatrix} s_{MAG} & m_{MAG} & m_{MAG} \\ m_{MAG} & s_{MAG} & m_{MAG} \\ m_{MAG} & m_{MAG} & s_{MAG} \end{bmatrix} \circ N_{m,MAG} \qquad (68) $$
In this expression, $N_{m,MAG}$ contains nine outputs of a standard normal random variable N(0, 1), and the symbol ∘ represents the Hadamard or element-wise matrix product.
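The construction of M MAG in (68) is a single Hadamard product; a sketch follows (Python; the s MAG and m MAG magnitudes are placeholders, not suggested values):

```python
import random

def make_m_mag(s_mag, m_mag, rng):
    """Soft-iron / scale-factor / cross-coupling matrix per eq. (68):
    element-wise (Hadamard) product of the [s_mag, m_mag] template with a
    3x3 matrix of standard normal draws (N_m_MAG)."""
    template = [[s_mag, m_mag, m_mag],
                [m_mag, s_mag, m_mag],
                [m_mag, m_mag, s_mag]]
    return [[template[i][j] * rng.gauss(0.0, 1.0) for j in range(3)]
            for i in range(3)]

m_mag_matrix = make_m_mag(0.02, 0.005, random.Random(3))
print(m_mag_matrix)
```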
Table 6 lists the error sources contained in the magnetometer model in the same format as the previous section, noting that soft iron magnetism is included in both the scale factor and cross-coupling errors. Note that all the required error sources ( B HI , MAG , B 0 , MAG , σ v , MAG , s MAG , m MAG ) need to be specified by the user. As an example, Section 6.2 suggests values appropriate for a low SWaP aircraft. The stochastic nature of the fixed and run-to-run error contributions to the magnetometer model relies on three realizations of normal distributions for the hard iron magnetism, three for the bias offset, three for the scale factor errors, and six for the cross-coupling errors, while the in-run error contributions require three realizations for system noise at every discrete sensor measurement.
Expression (67) can be rewritten to show the measurements as functions of the magnetometer full error E MAG , which represents all the errors introduced by the magnetometers with the exception of white noise:
$$ \tilde{B}^{B}(s\,\Delta t) = B^{B}(s\,\Delta t) + E_{MAG}(s\,\Delta t) + \sigma_{v,MAG}\, \Delta t^{-1/2}\, N_{vs,MAG} \qquad (69) $$

3.2. Global Navigation Satellite System Receiver

A GNSS receiver enables the determination of the aircraft position and absolute velocity based on signals obtained from various constellations of satellites, such as GPS, GLONASS, and Galileo. The position is obtained by triangulation based on the accurate satellite position and time contained within each signal. Instead of differentiating the position with respect to time, which amplifies noise, GNSS receivers obtain the vehicle absolute velocity by measuring the Doppler shift between the constant satellite frequencies and those measured by the receiver.
It is important to note that because of the heavy processing required to fix a position based on the satellite signals, GNSS receivers are not capable of working at the high frequencies characteristic of inertial and air data sensors, so Δ t GNSS is usually a multiple of Δ t SENSED . The position error of a GNSS receiver can be modeled as the sum of a zero mean white noise process plus slow varying ionospheric effects [33] modeled as the sum of the bias offset plus a random walk. This random walk is modeled with a frequency of 1 / 60 Hz ( Δ t ION = 60 s ) and linearly interpolated in between. The ground velocity error is modeled exclusively with a white noise process.
$$\mathbf{e}_{\mathrm{GNSS,POS}}\left(g\,\Delta t_{\mathrm{GNSS}}\right) = \tilde{\mathbf{x}}_{\mathrm{GDT}} - \mathbf{x}_{\mathrm{GDT}} = \boldsymbol{\sigma}_{\mathrm{GNSS,POS}}\,\mathbf{N}_{g,\mathrm{GNSS,POS}} + \mathbf{e}_{\mathrm{GNSS,ION}}\left(g\,\Delta t_{\mathrm{GNSS}}\right) \tag{70}$$
$$\mathbf{e}_{\mathrm{GNSS,VEL}}\left(g\,\Delta t_{\mathrm{GNSS}}\right) = \tilde{\mathbf{v}}^{N} - \mathbf{v}^{N} = \boldsymbol{\sigma}_{\mathrm{GNSS,VEL}}\,\mathbf{N}_{g,\mathrm{GNSS,VEL}} \tag{71}$$
$$\mathbf{e}_{\mathrm{GNSS,ION}}\left(g\,\Delta t_{\mathrm{GNSS}}\right) = \mathbf{e}_{\mathrm{GNSS,ION}}\left(i\,\Delta t_{\mathrm{ION}}\right) + \frac{r}{f_{\mathrm{ION}}}\left[\mathbf{e}_{\mathrm{GNSS,ION}}\left(\left(i+1\right)\Delta t_{\mathrm{ION}}\right) - \mathbf{e}_{\mathrm{GNSS,ION}}\left(i\,\Delta t_{\mathrm{ION}}\right)\right] \tag{72}$$
$$g = f_{\mathrm{ION}}\,i + r \qquad 0 \le r < f_{\mathrm{ION}} \tag{73}$$
$$\mathbf{e}_{\mathrm{GNSS,ION}}\left(i\,\Delta t_{\mathrm{ION}}\right) = B_{0,\mathrm{GNSS,ION}}\,\mathbf{N}_{u0,\mathrm{GNSS,ION}} + \sigma_{\mathrm{GNSS,ION}}\sum_{j=1}^{i}\mathbf{N}_{j,\mathrm{GNSS,ION}} \tag{74}$$
$$f_{\mathrm{ION}} = \Delta t_{\mathrm{ION}} / \Delta t_{\mathrm{GNSS}} = 60 \tag{75}$$
where σ GNSS , POS , σ GNSS , ION , B 0 , GNSS , ION , and σ GNSS , VEL are user-supplied inputs (Section 6.2 contains an example of how to select these values), and N g , GNSS , POS , N g , GNSS , VEL , N u 0 , GNSS , ION , and N j , GNSS , ION are uncorrelated normal vectors of size three, each composed of three uncorrelated standard normal random variables N 0 , 1 . In addition, note that as both g and f ION are integers, the quotient remainder theorem guarantees that there exist unique integers i and r that comply with (73) [46].
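A per-axis sketch of the position error model above, white noise plus an interpolated ionospheric random walk, could look as follows. The class and parameter names are illustrative and do not belong to the reference implementation [6]; the index arithmetic follows the quotient remainder relation just discussed.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Sketch of the scalar (per-axis) GNSS position error: white noise plus a
// slowly varying ionospheric term, modeled as a bias offset plus a random
// walk updated every Dt_ION seconds and linearly interpolated at the
// Dt_GNSS epochs in between. All numeric inputs are placeholders.
class GnssPosError {
public:
    GnssPosError(double sigma_pos, double b0_ion, double sigma_ion,
                 int f_ion, unsigned seed)
        : sigma_pos_(sigma_pos), sigma_ion_(sigma_ion), f_ion_(f_ion),
          gen_(seed) {
        ion_nodes_.push_back(b0_ion * N_(gen_));   // run-to-run offset
    }

    // Error at GNSS epoch g, i.e., at time t = g * Dt_GNSS.
    double at(int g) {
        int i = g / f_ion_;          // quotient remainder theorem:
        int r = g % f_ion_;          // g = f_ion * i + r, 0 <= r < f_ion
        while (static_cast<int>(ion_nodes_.size()) <= i + 1)  // extend walk
            ion_nodes_.push_back(ion_nodes_.back() + sigma_ion_ * N_(gen_));
        double ion = ion_nodes_[i] +
            double(r) / f_ion_ * (ion_nodes_[i + 1] - ion_nodes_[i]);
        return sigma_pos_ * N_(gen_) + ion;        // white noise + ionosphere
    }

private:
    double sigma_pos_, sigma_ion_;
    int f_ion_;
    std::mt19937 gen_;
    std::normal_distribution<double> N_{0.0, 1.0};
    std::vector<double> ion_nodes_;   // random walk values at i * Dt_ION
};
```

Setting both noise densities to zero leaves only the run-to-run ionospheric offset, so the error becomes constant over all epochs and reproducible from the seed, which is a convenient sanity check.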
Table 7 lists the error sources contained in the GNSS receiver model in the same format as previous sections. Note that all errors are modeled as stochastic variables or processes. Three realizations of a normal distribution are required for the run-to-run error contributions, while the in-run error contributions require three realizations each for position and velocity at every discrete sensor measurement, plus an extra three when corresponding for the ionospheric error.

3.3. Air Data System

The mission of the air data system is to measure the aircraft pressure altitude H P [47,48] by means of the atmospheric pressure p , as well as the outside air temperature T, the airspeed v TAS , and the angles of attack α and sideslip β , which provide the orientation of the aircraft structure with respect to the airflow.
A barometer or static pressure sensor, generally part of the Pitot tube as explained below [30], measures atmospheric pressure, which can be directly translated into pressure altitude [47,48]. Air data systems are also equipped with a thermometer to measure the external air temperature T. The implemented models, where OSP stands for outside static pressure and OAT means outside air temperature, include contributions from both bias offsets ( B 0 OSP , B 0 OAT ) and random noises ( σ OSP , σ OAT ):
$$e_{\mathrm{OSP}}\left(s\,\Delta t_{\mathrm{SENSED}}\right) = e_{\mathrm{OSP}}\left(s\,\Delta t\right) = \tilde{p}\left(s\,\Delta t\right) - p\left(s\,\Delta t\right) = B_{0,\mathrm{OSP}}\,N_{0,\mathrm{OSP}} + \sigma_{\mathrm{OSP}}\,N_{s,\mathrm{OSP}} \tag{76}$$
$$e_{\mathrm{OAT}}\left(s\,\Delta t_{\mathrm{SENSED}}\right) = e_{\mathrm{OAT}}\left(s\,\Delta t\right) = \tilde{T}\left(s\,\Delta t\right) - T\left(s\,\Delta t\right) = B_{0,\mathrm{OAT}}\,N_{0,\mathrm{OAT}} + \sigma_{\mathrm{OAT}}\,N_{s,\mathrm{OAT}} \tag{77}$$
where N 0 , OSP , N s , OSP , N 0 , OAT , and N s , OAT are uncorrelated standard normal random variables N 0 , 1 .
A Pitot probe is a tube with no outlet pointing directly into the undisturbed air stream, where the values of the air variables (temperature, pressure, and density) at its dead end resemble the total or stagnation variables of the atmosphere prior to its deceleration inside the Pitot [49]. Measuring the air flow total pressure p t at the tube dead end enables the estimation of the aircraft airspeed v TAS .
The air data system is also capable of measuring the direction of the air stream with respect to the aircraft, which is represented by the angles of attack α and sideslip β . To do so, it can be equipped with two air vanes that align themselves with the unperturbed air stream or with a more complex multi-hole Pitot probe. In all three cases, the errors can also be modeled by a combination of bias offsets ( B 0 TAS , B 0 AOA , B 0 AOS ) and random noises ( σ TAS , σ AOA , σ AOS ):
$$e_{\mathrm{TAS}}\left(s\,\Delta t_{\mathrm{SENSED}}\right) = e_{\mathrm{TAS}}\left(s\,\Delta t\right) = \tilde{v}_{\mathrm{TAS}}\left(s\,\Delta t\right) - v_{\mathrm{TAS}}\left(s\,\Delta t\right) = B_{0,\mathrm{TAS}}\,N_{0,\mathrm{TAS}} + \sigma_{\mathrm{TAS}}\,N_{s,\mathrm{TAS}} \tag{78}$$
$$e_{\mathrm{AOA}}\left(s\,\Delta t_{\mathrm{SENSED}}\right) = e_{\mathrm{AOA}}\left(s\,\Delta t\right) = \tilde{\alpha}\left(s\,\Delta t\right) - \alpha\left(s\,\Delta t\right) = B_{0,\mathrm{AOA}}\,N_{0,\mathrm{AOA}} + \sigma_{\mathrm{AOA}}\,N_{s,\mathrm{AOA}} \tag{79}$$
$$e_{\mathrm{AOS}}\left(s\,\Delta t_{\mathrm{SENSED}}\right) = e_{\mathrm{AOS}}\left(s\,\Delta t\right) = \tilde{\beta}\left(s\,\Delta t\right) - \beta\left(s\,\Delta t\right) = B_{0,\mathrm{AOS}}\,N_{0,\mathrm{AOS}} + \sigma_{\mathrm{AOS}}\,N_{s,\mathrm{AOS}} \tag{80}$$
where N 0 , TAS , N s , TAS , N 0 , AOA , N s , AOA , N 0 , AOS , and N s , AOS are uncorrelated standard normal random variables N 0 , 1 . Table 8 lists the error sources contained in the air data sensor model represented by (76), (77), (78), (79), and (80) in the same format as previous tables. Note that all errors are modeled as stochastic variables or processes. The stochastic nature of the run-to-run error contributions to the models relies on five realizations of normal distributions for the bias offsets, while the in-run error contributions require five realizations for the system noises at every discrete sensor measurement.
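Since the five air data channels share the same two-term structure, a bias offset drawn once per run plus white noise drawn at every measurement, a single class can serve all of them. The sketch below is illustrative; class and parameter names are not those of the reference implementation [6].

```cpp
#include <cmath>
#include <random>

// Generic air data channel (static pressure, temperature, airspeed, angle of
// attack, or angle of sideslip): bias offset B0 realized once per run plus
// white noise realized at every measurement. Sketch only.
class AirDataChannel {
public:
    AirDataChannel(double b0_sd, double noise_sd, unsigned seed)
        : noise_sd_(noise_sd), gen_(seed) {
        b0_ = b0_sd * N_(gen_);   // run-to-run bias offset realization
    }
    // Measured value = true value + B0 + sigma * N(0,1)
    double measure(double truth) {
        return truth + b0_ + noise_sd_ * N_(gen_);
    }
private:
    double b0_, noise_sd_;
    std::mt19937 gen_;
    std::normal_distribution<double> N_{0.0, 1.0};
};
```

One instance per channel, each initialized with its own seed and the standard deviations of Table 8, reproduces the five-offset, five-noise structure described above.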

4. Camera

Image generation is a power- and data-intensive process that cannot work at the high frequencies characteristic of inertial and air data sensors, so Δ t IMG is usually significantly higher than Δ t SENSED , although not as high as Δ t GNSS . The camera is considered rigidly attached to the aircraft structure; it is assumed that the shutter speed is high enough for all images to be equally sharp and that the image generation process is instantaneous. In addition, the camera ISO setting remains constant during the flight, and all generated images are noise free. The model also assumes that the visible spectrum radiation reaching all patches of the Earth's surface remains constant, and the terrain is considered Lambertian [50], so its appearance at any given time does not vary with the viewing direction. Together, these assumptions imply that a given terrain object is represented with the same luminosity in all images, even as its relative pose (position and attitude) with respect to the camera varies. Geometrically, a perspective projection or pinhole camera model [50] is employed, which in addition is perfectly calibrated and hence shows no distortion. Table 9 lists the required configuration parameters, which shall be provided by the user.
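Because the camera is a calibrated, distortion-free pinhole, projecting a terrain point expressed in the camera frame reduces to two scalings and a division by depth. The sketch below uses hypothetical intrinsic values, not those of Table 9.

```cpp
#include <array>
#include <cmath>

// Distortion-free pinhole projection: a 3D point in the camera frame
// (z forward along the optical axis) maps to pixel coordinates through the
// focal lengths (fx, fy) and the principal point (cx, cy).
struct Pinhole {
    double fx, fy, cx, cy;   // intrinsics; values below are hypothetical
    std::array<double,2> project(double x, double y, double z) const {
        return { fx * x / z + cx, fy * y / z + cy };
    }
};
```

A point on the optical axis lands exactly on the principal point regardless of depth, which is the standard consistency check for such a model.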

4.1. Mounting of Camera

The digital camera can be located anywhere on the aircraft structure as long as its view of the terrain is unobstructed by other platform elements. It is desirable that the lever arm or distance between the camera optical center and the aircraft center of mass is as small as possible to reduce the negative effects of any camera alignment error. With respect to its orientation, the camera should be facing down to show a balanced view of the ground during level flight, but minor deviations are not problematic.
As in the case of the IMU platform, the model considers that the camera location is deterministic but its orientation is stochastic. The expressions below are hence analogous to those employed in Section 2.8, where each specific camera Euler angle is obtained as the product of the standard deviations ( σ ψ C , σ θ C , σ ξ C ) by a single realization of a standard normal random variable N 0 , 1 ( N ψ C , N θ C , and N ξ C ):
$$\mathbf{T}_{BC}^{B} = f\left(m\right) = \mathbf{T}_{BC,\mathrm{full}}^{B} + \frac{m_{\mathrm{full}} - m}{m_{\mathrm{full}} - m_{\mathrm{empty}}}\left(\mathbf{T}_{BC,\mathrm{empty}}^{B} - \mathbf{T}_{BC,\mathrm{full}}^{B}\right) \tag{81}$$
$$\boldsymbol{\phi}_{BC} = \left[\,90 + \sigma_{\psi C}\,N_{\psi C},\;\; \sigma_{\theta C}\,N_{\theta C},\;\; \sigma_{\xi C}\,N_{\xi C}\,\right]^{T} \tag{82}$$
In addition to the true translation and rotation between the F B and F C frames given by the previous equations, the model also requires the accuracy with which they are known to the navigation system. The determination of the camera position T ^ BC B and rotation ϕ ^ BC = ψ ^ C , θ ^ C , ξ ^ C T is discussed in Section 5.4. As in previous cases, stochastic models are considered for both the translation T ^ BC B and rotation ϕ ^ BC , changing their values from one run to another:
$$\hat{\mathbf{T}}_{BC}^{B} = \mathbf{T}_{BC}^{B} + \left[\,\sigma_{\hat{T}_{BC}^{B}}\,N_{\hat{T}_{BC},1}^{B},\;\; \sigma_{\hat{T}_{BC}^{B}}\,N_{\hat{T}_{BC},2}^{B},\;\; \sigma_{\hat{T}_{BC}^{B}}\,N_{\hat{T}_{BC},3}^{B}\,\right]^{T} \tag{83}$$
$$\hat{\boldsymbol{\phi}}_{BC} = \boldsymbol{\phi}_{BC} + \left[\,\sigma_{\hat{\phi}_{BC}}\,N_{\hat{\psi}_{C}},\;\; \sigma_{\hat{\phi}_{BC}}\,N_{\hat{\theta}_{C}},\;\; \sigma_{\hat{\phi}_{BC}}\,N_{\hat{\xi}_{C}}\,\right]^{T} \tag{84}$$
where N ψ ^ C , N θ ^ C , N ξ ^ C , N T ^ BC , 1 B , N T ^ BC , 2 B , and N T ^ BC , 3 B are six realizations of a standard normal random variable N 0 , 1 . Section 6.2 provides an example of the standard deviations required by the model, although they can be adjusted by the user.
The translation T BC B between the origins of the F B and F C frames can be considered quasi-stationary, as it slowly varies based on the aircraft mass (81), and the relative position of their axes ϕ BC remains constant because the camera is rigidly attached to the aircraft structure (82).
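The mass-dependent lever arm of (81) is a plain linear interpolation between the full-tank and empty-tank positions, which the following sketch makes explicit; all names and values are illustrative.

```cpp
#include <array>
#include <cmath>

// Lever arm T_BC^B as a function of the instantaneous mass m, interpolated
// linearly between its full-tank and empty-tank values, as in (81).
std::array<double,3> lever_arm(double m, double m_full, double m_empty,
                               const std::array<double,3>& t_full,
                               const std::array<double,3>& t_empty) {
    double k = (m_full - m) / (m_full - m_empty);   // 0 at full, 1 at empty
    std::array<double,3> t{};
    for (int i = 0; i < 3; ++i)
        t[i] = t_full[i] + k * (t_empty[i] - t_full[i]);
    return t;
}
```

Evaluating at the full and empty masses recovers the two endpoint positions, and any intermediate mass yields the proportional blend.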

4.2. Earth Viewer

The camera model differs from all other sensor models described in this article in that it does not return a sensed variable x ˜ consisting of its real value x plus a sensor error E but instead generates a digital image simulating what a real camera would record based on the aircraft position and attitude as given by the actual or real state x = x TRUTH . When provided with the camera pose with respect to the Earth at equally time-spaced intervals, the available model implementation [6] is capable of generating images that resemble the view of the Earth’s surface that the camera would record if located at that particular pose. To do so, it relies on three software libraries:
  • OpenSceneGraph [51] is an open-source high-performance 3D graphics toolkit written in C++ and OpenGL, used by application developers in fields such as visual simulation, games, virtual reality, scientific visualization, and modeling. The library represents the objects in a scene by means of a graph data structure, which allows objects that share properties to be grouped together and automatically manages rendering aspects such as the level of detail, drawing the scene faithfully while omitting the unnecessary detail that would slow down the graphics hardware.
  • osgEarth [52] is a dynamic and scalable 3D Earth surface rendering toolkit that relies on OpenSceneGraph, and it is based on publicly available orthoimages of the area flown by the aircraft. Orthoimages consist of aerial or satellite imagery geometrically corrected such that the scale is uniform; they can be used to measure true distances as they are accurate representations of the Earth's surface, having been adjusted for topographic relief, lens distortion, and camera tilt. When coupled with a terrain elevation model, osgEarth is capable of generating realistic images based on the camera position as well as its yaw and pitch, but it does not accept the camera roll (in other words, the osgEarth images are always aligned with the horizon).
  • Earth Viewer is a modification to osgEarth implemented by the authors, so it is also capable of accepting the bank angle of the camera with respect to the NED axes. Earth Viewer is capable of generating realistic Earth images as long as the camera height over the terrain is significantly higher than the vertical relief present in the image. As an example, Figure 8 shows two different views of a volcano in which the dome of the mountain, having very steep slopes, is properly rendered.

5. Calibration Procedures

This section describes various calibration processes required for the determination of the fixed and run-to-run error contributions to the accelerometers, gyroscopes, magnetometers, and onboard camera. These procedures only need to be executed once and do not need to be repeated unless the sensors are replaced or their position inside the aircraft is modified (in addition, the swinging process of Section 5.3 needs to be performed every time new equipment is installed inside the aircraft, as this may modify the hard and soft iron magnetism and hence the magnetometer readings).
The calibration procedures include the laboratory calibration of the accelerometers and gyroscopes described in Section 5.1, the determination of the pose between the platform and body frames explained in Section 5.2, the magnetometer calibration or swinging described in Section 5.3, and the determination of the pose between the camera and body frames explained in Section 5.4. Their main objective is the determination of the fixed contributions to the sensor error models (refer to Section 2.1 for the different types of sensor error contributions, including fixed, run-to-run, and in-run), that is, the scale factor and cross-coupling errors of both inertial sensors and magnetometers ( M ^ ACC , M ^ GYR , M ^ MAG ) (note that M ^ MAG also includes the soft iron magnetism), the magnetometers hard iron magnetism B ^ HI , MAG , the body to platform transformation ( T ^ BP B , ϕ ^ BP ), and the body-to-camera transformation ( T ^ BC B , ϕ ^ BC ). These procedures also provide estimations for the run-to-run error contributions ( B ^ 0 ACC , B ^ 0 GYR , B ^ 0 , MAG ), but these need to be discarded, as they change every time the aircraft systems are switched on.

5.1. Inertial Sensors Calibration

Calibration is the process of comparing instrument outputs with known references to determine coefficients that force the outputs to agree with the references over a range of output values [31]. The IMU inertial sensors need to be calibrated to eliminate the fixed errors originating from manufacturing and also to determine their temperature sensitivity [32]. The calibration process requires significant material and time resources, but it greatly reduces the measurement errors. While high-grade IMUs are always factory calibrated, low-cost ones generally are not, so it is necessary to calibrate the IMU at the laboratory before mounting it on the aircraft [4].
The calibration process is executed at a location where the position x GDT and the gravity (gravity includes both gravitation and centrifugal accelerations) vector g c have been previously determined with great precision [31]. It relies on a three-axis table, which enables rotating the IMU with known angular velocities into a set of predetermined precisely controlled orientations [32,53]. Accelerometer and gyroscope measurements are then compared to reference values (gravity for the accelerometers, torquing rate plus Earth angular velocity for the gyroscopes) and the differences employed to generate corrections [31].
During calibration, the amount of time that the IMU is maintained stationary at each attitude, as well as the time required to rotate it between two positions, are trade-offs based on two opposing influences. On one side, longer periods of time are preferred as the negative influence of the system noise in the measurements tends to even out over time, while on the other, shorter times imply smaller variations of the bias drift over the measurement interval.
It is worth noting that as the calibration is performed before the IMU is installed on the aircraft, it relies on the platform frame F P and the models contained in Section 2.6 and Section 2.7. Although it is possible to use a calibration strategy based on selecting platform orientations that isolate sensor input onto a single axis (for example, gravity will only be sensed by the accelerometer that is placed vertically with respect to the Earth’s surface) to then apply least squares techniques, in real life, it is better to employ state estimation techniques (the estimation filter not only relies on known gravity and angular velocity but also the fact that the IMU is stationary and hence its velocity is zero) to obtain estimates of the inertial sensor’s scale factors, cross-coupling errors, and bias offsets [4,32]. The process is repeated at different temperatures so the IMU processor can later apply the correction based on the IMU sensor temperature [4].
The twenty-one coefficients estimated in the calibration process are listed in Table 10. Once the coefficients have been estimated, they can be introduced into the IMU processor so it automatically performs the corrections contained in (85) and (86):
$$\tilde{\tilde{\mathbf{f}}}_{IP}^{P} = \hat{\mathbf{M}}_{\mathrm{ACC}}^{-1}\left(\tilde{\mathbf{f}}_{IP}^{P} - \hat{\mathbf{B}}_{0,\mathrm{ACC}}\right) \tag{85}$$
$$\tilde{\tilde{\boldsymbol{\omega}}}_{IP}^{P} = \hat{\mathbf{M}}_{\mathrm{GYR}}^{-1}\left(\tilde{\boldsymbol{\omega}}_{IP}^{P} - \hat{\mathbf{B}}_{0,\mathrm{GYR}}\right) \tag{86}$$
This article assumes that the bias offset is exclusively a run-to-run source of error that varies every time the IMU is switched on, so the B ^ 0 ACC and B ^ 0 GYR coefficients obtained by calibration are discarded, as they bear no relation to the offsets that occur during flight. Modeling the results obtained by the calibration process implies reducing the scale factor and cross-coupling errors found in the inertial sensors' specifications by an arbitrary amount that can be specified by the user. To summarize, instead of applying (85) and (86) to the measurements obtained by (61) and (63), the model directly employs (61) and (63) with reduced M GYR and M ACC values.
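For reference, the corrections (85) and (86) amount to subtracting the estimated bias offset and inverting the estimated 3x3 scale factor/cross-coupling matrix. The sketch below performs that operation with a cofactor inverse, which is safe here because the matrix is always close to identity; function and type names are the authors' of this sketch, not those of [6].

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double,3>;
using Mat3 = std::array<Vec3,3>;

// Apply a correction of the form (85)/(86): corrected = M_hat^{-1} * (meas - b0).
// A 3x3 cofactor inverse suffices since M_hat is close to identity and hence
// well conditioned. Sketch only.
Vec3 apply_calibration(const Mat3& M, const Vec3& b0, const Vec3& meas) {
    auto det2 = [&](int r0, int r1, int c0, int c1) {   // 2x2 minor
        return M[r0][c0] * M[r1][c1] - M[r0][c1] * M[r1][c0];
    };
    double det = M[0][0] * det2(1,2,1,2) - M[0][1] * det2(1,2,0,2)
               + M[0][2] * det2(1,2,0,1);
    Mat3 inv{};   // inverse = adjugate / determinant
    inv[0] = {  det2(1,2,1,2) / det, -det2(0,2,1,2) / det,  det2(0,1,1,2) / det };
    inv[1] = { -det2(1,2,0,2) / det,  det2(0,2,0,2) / det, -det2(0,1,0,2) / det };
    inv[2] = {  det2(1,2,0,1) / det, -det2(0,2,0,1) / det,  det2(0,1,0,1) / det };
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i] += inv[i][j] * (meas[j] - b0[j]);
    return out;
}
```

The magnetometer correction (87) of Section 5.3 has the same algebraic shape, with an additional hard iron term subtracted alongside the bias offset.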

5.2. Determination of the Platform Frame Pose

The true relative pose between the body and platform frames ( F B , F P ), given by T BP B and ϕ BP , as well as their estimated values T ^ BP B and ϕ ^ BP , play a key role in the readings generated by the inertial sensors, as explained in Section 2.9.
Considering that the position of the aircraft center of mass is known (in both full and empty tank configurations), the true displacement T BP B can be determined with near exactitude based on the IMU attachment point to the aircraft, resulting in very low σ T ^ BP B values to be employed for the estimation of T ^ BP B in (52).
With regard to the attitude ϕ BP , after mounting the IMU platform so two of its axes are approximately aligned with the forward and down directions of an approximate aircraft plane of symmetry (with no particular need for accuracy), it is possible to estimate the angular deviation ϕ BP by means of self-alignment [4], resulting in small σ ϕ ^ BP values when estimating ϕ ^ BP in (53).

5.3. Swinging or Magnetometer Calibration

Magnetometer calibration is inherently more complex than that of the inertial sensors, as it must be performed with the sensors already mounted on the aircraft, as otherwise, it would not capture the fixed contributions of the hard iron and soft iron magnetisms (Section 3.1). The calibration process, known as swinging, relies on obtaining magnetometer readings while the aircraft is positioned at different attitudes that encompass a wide array of heading, pitch, and roll values [4], and it is executed at a location where the magnetic field is precisely known.
The accuracy of the results is highly dependent on the precision with which the different aircraft attitudes can be determined during swinging. This can be achieved with self-alignment procedures [4] or with the use of expensive static instruments; in any case, attitude accuracy is always going to be inferior to that obtained with a three-axis table during inertial sensor calibration. Once the magnetic field readings are obtained, they are compared to the real magnetic field values, and expression (67) is employed with least squares techniques to obtain estimations of the bias (the sum of the hard iron magnetism B HI , MAG and the offset B 0 , MAG ) and of the scale factor and cross-coupling matrix M MAG , which also includes the soft iron magnetism. The process can be repeated several times to isolate the influence of the hard iron magnetism (a fixed effect that does not change) from that of the offset, which is a run-to-run error source that changes every time the magnetometers are turned on.
The fifteen coefficients estimated in the swinging process are listed in Table 11. Once the coefficients have been estimated, they can be introduced into the processor so it automatically performs the corrections shown in (87):
$$\tilde{\tilde{\mathbf{B}}}^{B} = \hat{\mathbf{M}}_{\mathrm{MAG}}^{-1}\left(\tilde{\mathbf{B}}^{B} - \hat{\mathbf{B}}_{\mathrm{HI,MAG}} - \hat{\mathbf{B}}_{0,\mathrm{MAG}}\right) \tag{87}$$
This article assumes that the bias offset B 0 , MAG is exclusively a run-to-run source of error that varies every time the magnetometer is switched on, so the bias offset coefficients obtained by swinging are discarded, as they bear no relation to the offsets that occur during flight. Modeling the results obtained by swinging implies reducing the hard iron bias B HI , MAG and the scale factor and cross-coupling errors M MAG found in the sensor's specifications by an arbitrary amount that can be specified by the user. To summarize, instead of applying (87) to the measurements obtained by (67), the model directly employs (67) with reduced B HI , MAG and M MAG values.

5.4. Determination of the Camera Frame Pose

The images generated by the onboard camera, and simulated by means of the Earth Viewer application introduced in Section 4.2, depend not only on the relative pose between the body and the Earth but also on that of the camera with respect to the aircraft structure, which is represented by the rotation ϕ BC and displacement T BC B generated when mounting the camera, as described in Section 4.1. Visual navigation algorithms, however, rely on the navigation system's best estimate of this pose, that is, ϕ ^ BC and T ^ BC B , which need to be estimated once the already calibrated camera has been mounted on the aircraft. The two-phase process requires a chessboard such as that employed for camera calibration [50,54].
The first phase uses an optimization procedure quite similar to that used in calibration to determine the relative pose between the camera frame F C and a frame rigidly attached to the chessboard. Instead of using the location of each chessboard corner in different images, this process relies on a single photo and imposes the constraints that all chessboard squares have equal sides and the same size, which is enough to obtain a solution up to an unknown scale. The known size of the squares then provides the scale required to unambiguously solve the identification problem with high precision.
The second step is to obtain the pose between the chessboard and body frames. This is a straightforward geometric optimization problem that relies on distance measurements between chessboard points and aircraft structure points whose coordinates in the F B frame are known. The resulting accuracy depends on the accuracy with which these distances can be measured, so special equipment may be required given the importance of the final estimations for the success of the visual navigation algorithms.
Overall, this is a robust and accurate process if properly executed, which results in the user-selectable σ T ^ BC B and σ ϕ ^ BC values employed for the stochastic estimation of T ^ BC B and ϕ ^ BC in each run by means of (83) and (84).

6. Discussion: Realism, Stochastic Properties and Customizable Inputs

This article provides models of the various sensors usually installed onboard a fixed wing autonomous aircraft, and it includes a ready-to-use open source C++ implementation [6] so researchers can quickly generate realistic, stochastic (pseudo-random), and customizable time-stamped series of the outputs of the different sensors, including images of the Earth’s surface that closely resemble what a real camera would record from the same positions and attitudes.
  • In terms of realism, the detailed descriptions of Section 2, Section 3 and Section 4 show that in addition to the usually present white noise and random walk contributions, the models also include key error sources such as scale factors and cross-coupling effects. These are challenging for navigation systems, which often include some type of linearization, because inputs that should theoretically be restricted to a single axis (caused by maneuvering or turbulence) in fact produce outputs on all three sensor axes. Supplied with a series of time-varying aircraft positions and attitudes, the Earth Viewer application generates detailed distortion-free images, resembling what a real onboard camera would record if flying the same trajectory.
    In addition, and only for the case of the camera and inertial sensors, the models consider not only the relative poses (position and attitude) of the platform (IMU) and camera frames (in which the information is recorded) with respect to the body frame (into which it is converted for output), together with their slow variations over time as a consequence of fuel load changes, but also the inaccuracies in the aircraft processors' knowledge of these relative poses.
  • With respect to randomness, the random nature of the outputs generated by the various sensors is reflected in the extensive use of stochastic processes and distributions within the models. Section 6.1 below explains how all sensor errors are derived from two input seeds (one identifying the aircraft or fixed errors, and another specifying the flight, or run-to-run and in-run error contributions). This facilitates the use of the models within Monte Carlo simulations while also enabling the same set of outputs to be reproduced if so desired.
  • Regarding the customization, the realism in the models relies on multiple input parameters that need to be provided by the end user. Most of them can be found in the data sheets provided by the manufacturers (usually with different names and conventions, which are explained in previous sections), others depend on the methods employed for the mounting of the sensors onboard the aircraft, and some of them can be improved by means of the calibration procedures described in Section 5. Section 6.2 describes the process followed to obtain the parameters for the case of a small low SWaP aircraft, which constitutes the default configuration for the model’s C++ implementation [6].

6.1. Stochastic Models and the Use of Input Seeds

As explained in the corresponding sections, the outputs of the different sensor models depend on the input seeds listed in Table 5, Table 6, Table 7 and Table 8 and grouped together in Table 12 for convenience.
The following steps describe how these seeds are obtained in the available C++ implementation of the models [6]:
  • Initialize a discrete uniform distribution with any seed (any value is valid; 1 was employed by the authors), which produces random integers where each possible value has an equal likelihood of being produced. Call this distribution a number of times equal to or greater than twice the maximum number of runs to be executed (each run provides the time variation of all sensors for an unlimited amount of time and corresponds to a single aircraft flight), divide the results into two groups of the same size, and store them for later use. These values, called Υ i , A and Υ j , F , are, respectively, the aircraft seeds and the flight seeds, where i is the aircraft number representing given fixed error realizations (fixed error contributions vary from aircraft to aircraft but are constant for all flights of that aircraft), and j is the trajectory number representing run-to-run and in-run error realizations (run-to-run and in-run error contributions vary from one flight of a given aircraft to the next). The stored aircraft and flight seeds become the initialization seeds for each of the executions or runs, so this step does not need to be repeated.
  • Every time the simulator needs to obtain the errors generated by the different sensors (which usually correspond to a given flight), it is initialized with a given aircraft seed Υ i , A together with a flight seed Υ j , F . As these seeds are the only inputs required for all the stochastic processes within the sensors, the results of a given run can always be repeated by employing the same two seeds.
    The selected aircraft and flight seeds are then employed to initialize two different discrete uniform distributions. One is executed five times to provide the fixed sensor seeds ( υ i , A , ACC , υ i , A , GYR , υ i , A , MAG , υ i , A , PLAT , and υ i , A , CAM ), while the other is realized nine times to obtain the run sensor seeds ( υ j , F , ACC , υ j , F , GYR , υ j , F , MAG , υ j , F , OSP , υ j , F , OAT , υ j , F , TAS , υ j , F , AOA , υ j , F , AOS , and υ j , F , GNSS ). These seeds hence become the initialization seeds for each of the different sensors described throughout this article.
  • Each sensor relies on either one or two standard normal distributions N 0 , 1 , depending on whether its error model is based exclusively on run-to-run and in-run error contributions or it also contains fixed error sources. The normal distributions of every sensor are initialized with the corresponding seeds ( υ i , A , XXX and υ j , F , XXX ) for that sensor.
  • Upon initialization, the fixed normal distribution of every sensor is employed to generate all the values corresponding to scale factors, cross couplings, hard iron magnetism, and mounting errors. The run normal distribution in turn is employed to generate the required bias offsets.
  • Once the model has been initialized, it is able to estimate the errors generated by each sensor working at the required sensor rate. As time advances, every time a sensor is called to provide a measurement, its already initialized and used run normal distribution is called to generate the corresponding random walk increments and white noises.
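The per-run seed derivation in the steps above can be sketched as follows; the function reproduces the idea of two independent per-run uniform generators, one per input seed, with the sensor counts passed in rather than hard-coded to the five fixed and nine run sensor seeds. Names are illustrative, not those of [6].

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Two-level seeding: the aircraft seed drives the fixed error realizations
// and the flight seed drives the run-to-run and in-run realizations, so any
// run can be reproduced exactly from that pair of integers.
struct RunSeeds {
    std::vector<std::uint32_t> fixed_sensor;  // e.g., ACC, GYR, MAG, PLAT, CAM
    std::vector<std::uint32_t> run_sensor;    // e.g., ACC, GYR, ..., GNSS
};

RunSeeds derive_run_seeds(std::uint32_t aircraft_seed,
                          std::uint32_t flight_seed,
                          std::size_t n_fixed, std::size_t n_run) {
    std::uniform_int_distribution<std::uint32_t> U;   // full uint32 range
    std::mt19937 gen_a(aircraft_seed), gen_f(flight_seed);
    RunSeeds s;
    for (std::size_t i = 0; i < n_fixed; ++i) s.fixed_sensor.push_back(U(gen_a));
    for (std::size_t j = 0; j < n_run; ++j)   s.run_sensor.push_back(U(gen_f));
    return s;
}
```

Reusing the same aircraft seed with a different flight seed leaves the fixed sensor seeds unchanged while producing new run sensor seeds, which is exactly the behavior the two-seed scheme is designed to provide.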

6.2. Example: Input Parameters for a Low SWaP Fixed Wing Aircraft

This section describes the process followed by the authors to obtain the input parameters required by the various models (listed in Table 5, Table 6, Table 7 and Table 8) for the specific case of a small low SWaP fixed wing autonomous aircraft. These values constitute the default configuration of the supplied C++ implementation [6], but they can be modified by the user to reflect different hardware, mounting, or calibration procedures.
With respect to the operating frequencies, Table 13 reflects the considered values, which are all within the working range of the specific sensors described in the following paragraphs.
The gyroscope values correspond to the MEMS gyroscopes present inside the Analog Devices ADIS16488A IMU [55]. Table 14 shows its performances, which have been taken from the data sheet when possible and corrected when suspicious. A calibration process such as that described in Section 5.1 is assumed to eliminate 95 % of the scale factor and cross-coupling errors.
Table 14 contains three columns of data. The leftmost column (“Spec”) corresponds to data taken directly from the sensor's specifications, which are converted in the middle column (“Value”) into the parameters shown in Section 2.2, Section 2.3, Section 2.4, Section 2.5, Section 2.6, Section 2.7, Section 2.8, and Section 2.9 (the conversion between bias instability and σ u uses a period of 100 s , as noted in Section 2.4). The right column (“Calibration”) contains the values after the calibration process, which reduces the scale factor and cross-coupling errors by 95 % . A similar process based on the explanations of Section 2.3 and Section 2.4 should be followed by the user to obtain the parameters applicable to different gyroscopes, noting that a certain level of arbitrariness is required to account for the effects of calibration.
The accelerometer values also correspond to the MEMS accelerometers present inside the Analog Devices ADIS16488A IMU [55]. All values shown in Table 15 have been taken from the data sheet. As in the case of the gyroscopes, a calibration process as that described in Section 5.1 is assumed to eliminate 95 % of the scale factor and cross-coupling errors (the conversion between bias instability and σ u uses a period of 100 s , as noted in Section 2.4). As in the previous case, a similar process should be followed by the user to obtain the parameters applicable to different accelerometers and calibration procedures.
The magnetometer values are shown in Table 16, where the white noise has been taken from [4] and the rest of the parameters correspond to the magnetometers present in the Analog Devices ADIS16488A IMU [55]. Although the value of hard and soft iron magnetism in aircraft is rather small, the authors have not been able to obtain trusted values for them. To avoid eliminating sources of error, the authors have decided to increase the bias offset, scale factor, and cross-coupling errors found in the literature by 50%, as shown in the compensation column (“Comp.”). As both produce similar effects, the authors expect that the realism of the results will not be adversely affected. In the case of the bias, the authors have assigned most of the error to the fixed hard iron error B HI , MAG and the remainder to the run-to-run bias offset B 0 , MAG .
A swinging process such as that described in Section 5.3 is assumed to eliminate 90% of the fixed error contributions, that is, the hard iron magnetism, the scale factor, and the cross-coupling errors (the soft iron effect is combined with the scale factor and cross-coupling errors). The final results can be found in the rightmost column of the table. The user should follow a similar process to obtain the parameters applicable to each specific case. Note, however, that because of the influence of the hard and soft iron effects, and the intrinsic difficulty of swinging, the determination of the parameters is more arbitrary than in the case of the inertial sensors.
With respect to the GNSS receiver, the horizontal position accuracy shown in Table 17 corresponds to the U-blox NEO-M8 receiver data sheet [56], where CEP stands for circular error probability. As the CEP is equivalent to 1.18 standard deviations [57], $\sigma_{GNSS,POS,HOR}$ follows directly. As no vertical position accuracy is provided in [56], a conservative value for $\sigma_{GNSS,POS,VER}$ of twice $\sigma_{GNSS,POS,HOR}$ has been selected. The velocity accuracy also originates from the U-blox NEO-M8 receiver data sheet [56]. Assuming that it corresponds to a per-axis error of ±0.05 m/s instead of a CEP, and knowing that the 50% mark of a normal distribution lies at 0.67448 standard deviations, $\sigma_{GNSS,VEL}$ can be obtained.
In regards to the air data system, the $\sigma_{OSP}$ value shown in Table 18 is derived from the ±10 m altitude error listed in the specifications of the Aeroprobe air data system [58], which translates into ±100 Pa at a pressure altitude of 1500 m. The $\sigma_{OAT}$ value is taken from the Analog Devices ADT7420 temperature sensor [59]. With respect to $\sigma_{TAS}$, the Aeroprobe specifications [58] list a maximum airspeed error of 1 m/s, which can be interpreted as $3\sigma$, and hence results in the $\sigma_{TAS}$ value included in the table. The multi-hole Pitot tube contained in the Aeroprobe air data system [58] measures both flow angles (attack and sideslip) with a maximum error of ±1.0°. If interpreted as $3\sigma$, this results in standard deviations $\sigma_{AOA}$ and $\sigma_{AOS}$ of 0.33°. Although never present in the data sheets, the bias offsets for all variables of the Section 3.3 air data system model have been set equal to the system noises to add further realism to the results.
Table 19 suggests appropriate values for the IMU platform and camera pose estimation errors described in Section 5.2 and Section 5.4, together with purely subjective values for their true attitude with respect to the body frame.

7. Conclusions

This article presents realistic, stochastic, and customizable models for the errors generated by the sensors typically installed onboard a fixed wing aircraft. These can be used to generate pseudo-random time-stamped series of values that simulate the sensor outputs during flight as well as a series of images of the Earth’s surface that resemble what would be recorded by a camera mounted on the aircraft.
The article provides instructions and an example of how to obtain the parameters on which the models rely based exclusively on the information usually displayed in the data sheets provided by the sensors’ manufacturers, so the user can employ the values that better resemble the performances of the specific equipment being modeled. The models properly represent the stochastic nature of the different random processes involved, while ensuring that the time-stamped series of outputs generated by each sensor can be repeated if so desired. The various sensor models include the contributions of the most important sources of error, and they are intended to be used as inputs to Monte Carlo simulations that rely on the sensor outputs, such as those required to evaluate inertial, visual, and visual–inertial navigation systems.
The authors release an open-source C++ implementation of the described models.

Author Contributions

Investigation, E.G.; Methodology, E.G.; Software, E.G.; Supervision, A.B.; Writing—original draft, E.G.; Writing—review and editing, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been possible thanks to the financing of RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

An open source C++ implementation of the described models can be found at [6]. Its execution generates pseudo-random time-stamped series of the errors introduced by the different sensors.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACC	ACCelerometer
AOA	Angle of Attack
AOS	Angle of Sideslip
CAM	CAMera
CEP	Circular Error Probability
GLONASS	GLObal NAvigation Satellite System
GNC	Guidance, Navigation, and Control
GNSS	Global Navigation Satellite System
GPS	Global Positioning System
GYR	GYRoscope
IMU	Inertial Measurement Unit
ISO	International Organization for Standardization
MAG	MAGnetometer
MEMS	Micromachined ElectroMechanical System
NED	North–East–Down
OAT	Outside Air Temperature
OSP	Outside Static Pressure
PSD	Power Spectral Density
SWaP	Size, Weight, and Power
TAS	True Air Speed

Appendix A. Motion of Multiple Rigid Bodies

The equations employed in this article make use of positions, velocities, and accelerations (both linear and angular) that refer to different reference systems or rigid bodies, which are in continuous motion (translation and rotation) among themselves. Their relationships are obtained in this appendix based on the three reference systems shown in Figure A1: an inertial reference system $F_0 \equiv \{O_0, \mathbf{i}_1^0, \mathbf{i}_2^0, \mathbf{i}_3^0\}$ and two non-inertial reference systems $F_1 \equiv \{O_1, \mathbf{i}_1^1, \mathbf{i}_2^1, \mathbf{i}_3^1\}$ and $F_2 \equiv \{O_2, \mathbf{i}_1^2, \mathbf{i}_2^2, \mathbf{i}_3^2\}$, where $\mathbf{T}_{01}, \mathbf{v}_{01}, \mathbf{a}_{01}$ are the position, linear velocity, and linear acceleration of the origin of $F_1$ with respect to $F_0$; $\mathbf{T}_{02}, \mathbf{v}_{02}, \mathbf{a}_{02}$ are those of the origin of $F_2$ with respect to $F_0$; and $\mathbf{T}_{12}, \mathbf{v}_{12}, \mathbf{a}_{12}$ are those of the origin of $F_2$ with respect to $F_1$. Similarly, $\boldsymbol{\omega}_{01}, \boldsymbol{\alpha}_{01}$ are the angular velocity and angular acceleration of $F_1$ with respect to $F_0$; $\boldsymbol{\omega}_{02}, \boldsymbol{\alpha}_{02}$ are those of $F_2$ with respect to $F_0$; and $\boldsymbol{\omega}_{12}, \boldsymbol{\alpha}_{12}$ are those of $F_2$ with respect to $F_1$. Finally, $\mathbf{R}_{01}$, $\mathbf{R}_{02}$, and $\mathbf{R}_{12}$ are the rotation matrices among the three different rigid bodies.
Figure A1. Reference system for combination of movements.

Appendix A.1. Composition of Position

The relationship between the linear position vectors $\mathbf{T}_{02}$, $\mathbf{T}_{12}$, and $\mathbf{T}_{01}$ can be established by vector arithmetic when they are expressed in the same reference frame, or by coordinate transformation [45] when they are not:
$$\mathbf{T}_{02}^0 = \mathbf{T}_{12}^0 + \mathbf{T}_{01}^0 = \mathbf{R}_{01}\, \mathbf{T}_{12}^1 + \mathbf{T}_{01}^0 \tag{A1}$$

Appendix A.2. Composition of Linear Velocity

Differentiating (A1) with respect to time results in:
$$\dot{\mathbf{T}}_{02}^0 = \dot{\mathbf{R}}_{01}\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \dot{\mathbf{T}}_{12}^1 + \dot{\mathbf{T}}_{01}^0 \tag{A2}$$
The use of the relationship between the rotation matrix and its time derivative [45] results in (note that the wide hat $\widehat{\,\cdot\,}$ refers to the skew-symmetric form of a vector):
$$\dot{\mathbf{T}}_{02}^0 = \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \dot{\mathbf{T}}_{12}^1 + \dot{\mathbf{T}}_{01}^0 \tag{A3}$$
Reordering and replacing the position time derivatives with their respective velocities results in the relationship between the linear velocity vectors $\mathbf{v}_{02}$, $\mathbf{v}_{12}$, and $\mathbf{v}_{01}$ expressed in the inertial frame $F_0$:
$$\mathbf{v}_{02}^0 = \mathbf{R}_{01}\, \mathbf{v}_{12}^1 + \mathbf{v}_{01}^0 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{T}_{12}^1 \tag{A4}$$
$$\mathbf{v}_{02}^0 = \mathbf{v}_{12}^0 + \mathbf{v}_{01}^0 + \widehat{\boldsymbol{\omega}}_{01}^0\, \mathbf{T}_{12}^0 \tag{A5}$$

Appendix A.3. Composition of Linear Acceleration

Differentiating (A4) with respect to time results in:
$$\dot{\mathbf{v}}_{02}^0 = \dot{\mathbf{R}}_{01}\, \mathbf{v}_{12}^1 + \mathbf{R}_{01}\, \dot{\mathbf{v}}_{12}^1 + \dot{\mathbf{v}}_{01}^0 + \dot{\mathbf{R}}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \dot{\widehat{\boldsymbol{\omega}}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \dot{\mathbf{T}}_{12}^1 \tag{A6}$$
Replacing the rotation matrix time derivative results in:
$$\dot{\mathbf{v}}_{02}^0 = \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{v}_{12}^1 + \mathbf{R}_{01}\, \dot{\mathbf{v}}_{12}^1 + \dot{\mathbf{v}}_{01}^0 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \dot{\widehat{\boldsymbol{\omega}}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \dot{\mathbf{T}}_{12}^1 \tag{A7}$$
Reordering, and replacing the position, linear velocity, and angular velocity time derivatives with their respective linear velocities, linear accelerations, and angular accelerations, results in the relationship between the linear acceleration vectors $\mathbf{a}_{02}$, $\mathbf{a}_{12}$, and $\mathbf{a}_{01}$ expressed in the inertial frame $F_0$:
$$\mathbf{a}_{02}^0 = \mathbf{R}_{01}\, \mathbf{a}_{12}^1 + \mathbf{a}_{01}^0 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\alpha}}_{01}^1\, \mathbf{T}_{12}^1 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{T}_{12}^1 + 2\, \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \mathbf{v}_{12}^1 \tag{A8}$$
$$\mathbf{a}_{02}^0 = \mathbf{a}_{12}^0 + \mathbf{a}_{01}^0 + \widehat{\boldsymbol{\alpha}}_{01}^0\, \mathbf{T}_{12}^0 + \widehat{\boldsymbol{\omega}}_{01}^0\, \widehat{\boldsymbol{\omega}}_{01}^0\, \mathbf{T}_{12}^0 + 2\, \widehat{\boldsymbol{\omega}}_{01}^0\, \mathbf{v}_{12}^0 \tag{A9}$$
The term on the left-hand side is called the absolute acceleration, while the right-hand side groups the relative acceleration ($\mathbf{a}_{12}$), the transport acceleration (the $\mathbf{a}_{01}$, angular acceleration, and centripetal terms), and the Coriolis acceleration ($2\, \widehat{\boldsymbol{\omega}}_{01}\, \mathbf{v}_{12}$).

Appendix A.4. Composition of Angular Velocity

The relationship among the angular velocities of the different frames is given by the rotation matrix composition rule [45], which can be differentiated with respect to time:
$$\mathbf{R}_{02} = \mathbf{R}_{01}\, \mathbf{R}_{12} \quad\Rightarrow\quad \dot{\mathbf{R}}_{02} = \dot{\mathbf{R}}_{01}\, \mathbf{R}_{12} + \mathbf{R}_{01}\, \dot{\mathbf{R}}_{12} \tag{A10}$$
Replacing the rotation matrix time derivatives results in:
$$\mathbf{R}_{02}\, \widehat{\boldsymbol{\omega}}_{02}^2 = \widehat{\boldsymbol{\omega}}_{01}^0\, \mathbf{R}_{01}\, \mathbf{R}_{12} + \mathbf{R}_{01}\, \mathbf{R}_{12}\, \widehat{\boldsymbol{\omega}}_{12}^2 = \widehat{\boldsymbol{\omega}}_{01}^0\, \mathbf{R}_{02} + \mathbf{R}_{02}\, \widehat{\boldsymbol{\omega}}_{12}^2 = \mathbf{R}_{02}\, \widehat{\boldsymbol{\omega}}_{01}^2 + \mathbf{R}_{02}\, \widehat{\boldsymbol{\omega}}_{12}^2 \tag{A11}$$
The relationship among the angular velocity vectors $\boldsymbol{\omega}_{02}$, $\boldsymbol{\omega}_{12}$, and $\boldsymbol{\omega}_{01}$ is hence the following:
$$\boldsymbol{\omega}_{02}^0 = \boldsymbol{\omega}_{12}^0 + \boldsymbol{\omega}_{01}^0 = \mathbf{R}_{01}\, \boldsymbol{\omega}_{12}^1 + \boldsymbol{\omega}_{01}^0 \tag{A12}$$

Appendix A.5. Composition of Angular Acceleration

Differentiating (A12) with respect to time results in:
$$\dot{\boldsymbol{\omega}}_{02}^0 = \dot{\mathbf{R}}_{01}\, \boldsymbol{\omega}_{12}^1 + \mathbf{R}_{01}\, \dot{\boldsymbol{\omega}}_{12}^1 + \dot{\boldsymbol{\omega}}_{01}^0 \tag{A13}$$
Replacing the rotation matrix time derivative results in:
$$\dot{\boldsymbol{\omega}}_{02}^0 = \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \boldsymbol{\omega}_{12}^1 + \mathbf{R}_{01}\, \dot{\boldsymbol{\omega}}_{12}^1 + \dot{\boldsymbol{\omega}}_{01}^0 \tag{A14}$$
Reordering, and replacing the angular velocity time derivatives with their respective angular accelerations, results in the relationship between the angular acceleration vectors $\boldsymbol{\alpha}_{02}$, $\boldsymbol{\alpha}_{12}$, and $\boldsymbol{\alpha}_{01}$ expressed in the inertial frame $F_0$:
$$\boldsymbol{\alpha}_{02}^0 = \mathbf{R}_{01}\, \boldsymbol{\alpha}_{12}^1 + \boldsymbol{\alpha}_{01}^0 + \mathbf{R}_{01}\, \widehat{\boldsymbol{\omega}}_{01}^1\, \boldsymbol{\omega}_{12}^1 \tag{A15}$$
$$\boldsymbol{\alpha}_{02}^0 = \boldsymbol{\alpha}_{12}^0 + \boldsymbol{\alpha}_{01}^0 + \widehat{\boldsymbol{\omega}}_{01}^0\, \boldsymbol{\omega}_{12}^0 \tag{A16}$$

Appendix A.6. Summary of Compositions

The final expressions of the compositions above, (A1), (A5), (A9), (A12), and (A16), are all expressed in the inertial frame $F_0$, but they are also valid in any other frame as long as all their components are converted into that frame. Note that computing a time derivative (velocity, acceleration, or angular acceleration) in the inertial frame and then converting it into a different frame is not the same as directly computing the derivative in a non-inertial frame:
$$\mathbf{T}_{02} = \mathbf{T}_{12} + \mathbf{T}_{01} \tag{A17}$$
$$\mathbf{v}_{02} = \mathbf{v}_{12} + \mathbf{v}_{01} + \widehat{\boldsymbol{\omega}}_{01}\, \mathbf{T}_{12} \tag{A18}$$
$$\mathbf{a}_{02} = \mathbf{a}_{12} + \mathbf{a}_{01} + \widehat{\boldsymbol{\alpha}}_{01}\, \mathbf{T}_{12} + \widehat{\boldsymbol{\omega}}_{01}\, \widehat{\boldsymbol{\omega}}_{01}\, \mathbf{T}_{12} + 2\, \widehat{\boldsymbol{\omega}}_{01}\, \mathbf{v}_{12} \tag{A19}$$
$$\boldsymbol{\omega}_{02} = \boldsymbol{\omega}_{12} + \boldsymbol{\omega}_{01} \tag{A20}$$
$$\boldsymbol{\alpha}_{02} = \boldsymbol{\alpha}_{12} + \boldsymbol{\alpha}_{01} + \widehat{\boldsymbol{\omega}}_{01}\, \boldsymbol{\omega}_{12} \tag{A21}$$

References

  1. Gallo, E. Stochastic High Fidelity Simulation and Scenarios for Testing of Fixed Wing Autonomous GNSS-Denied Navigation Algorithms. arXiv 2021, arXiv:2102.00883v3. [Google Scholar] [CrossRef]
  2. Farrell, J.A. Aided Navigation, GPS with High Rate Sensors; McGraw-Hill: New York, NY, USA, 2008; ISBN 0-071-49329-8. [Google Scholar]
  3. Etkin, B. Dynamics of Atmospheric Flight; John Wiley & Sons: Hoboken, NJ, USA, 1972; ISBN 0-486-44522-4. [Google Scholar]
  4. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Norwood, MA, USA, 2008; ISBN 978-1-58053-255-6. [Google Scholar]
  5. Farrell, J.A.; Oliveira e Silva, F.; Rahman, F.; Wendel, J. IMU Error Modeling for State Estimation and Sensor Calibration: A Tutorial. IEEE Control. Syst. Mag. 2021; accepted. [Google Scholar]
  6. Gallo, E. High Fidelity Model of the Sensors and Camera onboard a Low SWaP Fixed Wing UAV. C++ Open Source Code. 2021. Available online: https://github.com/edugallogithub/sensor_camera_model (accessed on 17 July 2022).
  7. Standard 952; IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros. IEEE: New York, NY, USA, 2003.
  8. Standard 647; IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Laser Gyros. IEEE: New York, NY, USA, 2006.
  9. Standard 1293; IEEE Standard Specification Format Guide and Test Procedure for Linear, Single-Axis, Non-Gyroscopic Accelerometers. IEEE: New York, NY, USA, 1999.
  10. Baziw, J.; Leondes, C.T. In Flight Alignment and Calibration of Inertial Measurement Units—Part I: General formulation. IEEE Trans. Aerosp. Electron. Syst. 1972, 8, 439–449. [Google Scholar] [CrossRef]
  11. Dierendonck, A.J.; McGraw, J.B.; Brown, R.G. Relationship between Allan Variances and Kalman Filter Parameters. In Proceedings of the 16th Annual Precise Time and Time Interval Systems and Applications Meeting, Greenbelt, MD, USA, 27–29 November 1984. [Google Scholar]
  12. Ford, J.J.; Evans, M.E. Online Estimation of Allan Variance Parameters. J. Guid. Control. Dyn. 2000, 23, 980–987. [Google Scholar] [CrossRef]
  13. El-Sheimy, N.; Hou, H.; Niu, X. Analysis and Modeling of Inertial Sensors using Allan Variance. IEEE Trans. Instrum. Meas. 2008, 57, 140–149. [Google Scholar] [CrossRef]
  14. Zhang, X.; Li, Y.; Mumford, P.; Rizos, C. Allan Variance Analysis on Error Characters of MEMS Inertial Sensors for an FPGA-Based GPS/INS System. In Proceedings of the International Symposium on GPS/GNNS; University of New South Wales: Sydney, NSW, Australia, 2008; pp. 127–133. [Google Scholar]
  15. Xing, Z.; Gebre-Egziabher, D. Modeling and Bounding Low Cost Inertial Sensor Errors. In Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 1122–1132. [Google Scholar] [CrossRef]
  16. Saini, V.; Rana, S.C.; Kube, M.M. Online Estimation of State Space Error Model for MEMS IMU. J. Model. Simul. Syst. 2010, 1, 219–225. [Google Scholar]
  17. Vaccaro, V.; Zaki, A. Statistical Modeling of Rate Gyros and Accelerometers. IEEE Trans. Instrum. Meas. 2012, 61, 673–684. [Google Scholar] [CrossRef]
  18. Hidalgo-Carrio, J.; Arnold, S.; Poulakis, P. On the Design of Attitude-Heading Reference Systems using the Allan Variance. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2016, 63, 656–665. [Google Scholar] [CrossRef]
  19. Silva, F.O.; Hemerly, E.M.; Leite Filho, W.C. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters—Part II: Observability/Estimability Analysis. Sensors 2017, 17, 439. [Google Scholar] [CrossRef] [Green Version]
  20. Nikolic, J.; Furgale, P.; Melzer, A.; Siegwart, R. Maximum Likelihood Identification of Inertial Sensor Noise Model Parameters. IEEE Sens. J. 2016, 16, 163–176. [Google Scholar] [CrossRef]
  21. Li, Y.; Ruizhi, C.; Niu, X.; Zhuang, Y.; Gao, Z.; Hu, X.; El-Sheimy, N. Inertial Sensing Meets Machine Learning: Opportunity or Challenge? IEEE Trans. Intell. Transp. Syst. 2021, 4712–4719. [Google Scholar] [CrossRef]
  22. Allan, D.W. Statistics of Atomic Frequency Standards. Proc. IEEE 1966, 54, 221–230. [Google Scholar] [CrossRef] [Green Version]
  23. Riley, W.J. Handbook of Frequency Stability Analysis; National Institute of Standards and Technology, U.S. Department of Commerce: Gaithersburg, MD, USA, 2008. [Google Scholar]
  24. Barnes, J.A.; Chi, A.R.; Cutler, L.S.; Healey, D.J.; Leeson, D.B.; McGunigal, T.E.; Mullen, J.A., Jr.; Smith, W.L.; Sydnor, R.L.; Vessot, R.F.C.; et al. Characterization of Frequency Stability. IEEE Trans. Instrum. Meas. 1971, IM-20, 105–120. [Google Scholar] [CrossRef] [Green Version]
  25. Stebler, Y.; Guerrier, S.; Skaloud, J.; Victoria-Feser, M.P. A Framework for Inertial Sensor Calibration using Complex Stochastic Error Models. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012. [Google Scholar] [CrossRef]
  26. Quinchia, A.G.; Falco, G.; Falletti, E.; Dovis, F.; Ferrer, C. A Comparison between Different Error Modeling of MEMS applied to GPS/INS Integrated Systems. Sensors 2013, 13, 9549–9588. [Google Scholar] [CrossRef] [Green Version]
  27. Nassar, S.; Schwarz, K.P.; El-Sheimy, N.; Noureldin, A. Modeling Inertial Sensor Errors Using Autoregressive Models. Navig. J. Inst. Navig. 2014, 51, 259–268. [Google Scholar] [CrossRef]
  28. Miao, Z.; Shen, F.; Xu, D.; He, K.; Tian, C. Online Estimation of Allan Variance Coefficients based on a Neural Extended Kalman Filter. Sensors 2015, 15, 2496–2524. [Google Scholar] [CrossRef] [Green Version]
  29. Rudyk, A.V.; Semenov, A.O.; Kryvinska, N.; Semenova, O.O.; Kvasnikov, V.P.; Safonyk, A.P. Strapdown Inertial Navigation Systems for Positioning Mobile Robots—MEMS Gyroscopes Random Errors Analysis Using Allan Variance Method. Sensors 2020, 20, 4841. [Google Scholar] [CrossRef]
  30. Titterton, D.; Weston, J. Strapdown Inertial Navigation Technology, 2nd ed.; The Institution of Engineering and Technology: Stevenage, UK, 2004; ISBN 0-863-41358-7. [Google Scholar]
  31. Chatfield, A.B. Fundamentals of High Accuracy Inertial Navigation; Progress in Astronautics and Aeronautics; AIAA: Reston, VA, USA, 1997; ISBN 1-56347-243-0. [Google Scholar]
  32. Rogers, R.M. Applied Mathematics in Integrated Navigation Systems, 3rd ed.; AIAA: Reston, VA, USA, 2007; ISBN 1-563-47927-3. [Google Scholar]
  33. Kayton, M.; Fried, W.R. Avionics Navigation Systems, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1997; ISBN 0-471-54795-6. [Google Scholar]
  34. Crassidis, J.L. Sigma-Point Kalman Filtering for Integrated GPS and Inertial Navigation. IEEE Trans. Aerosp. Electron. Syst. 2006, 42, 750–756. [Google Scholar] [CrossRef]
  35. Woodman, O.J. An Introduction to Inertial Navigation; Technical Report; University of Cambridge Computer Laboratory: Cambridge, UK, 2007. [Google Scholar]
  36. Hibbeler, R.C. Engineering Mechanics: Statics and Dynamics, 4th ed.; Pearson: Upper Saddle River, NJ, USA, 2015; ISBN 978-0-13391-542-6. [Google Scholar]
  37. Grewal, M.; Andrews, A. How Good Is Your Gyro? IEEE Control. Syst. Mag. 2010, 30, 12–86. [Google Scholar] [CrossRef]
  38. Trusov, A.A. Allan Variance Analysis of Random Noise Modes in Gyroscopes; Technical Report; University of California: Berkeley, CA, USA, 2011. [Google Scholar]
  39. Guide to Comparing Gyro and IMU Technologies: Micro-Electro-Mechanical Systems and Fiber Optic Gyros. KVH Fiber Optic Gyro. 2014. Available online: https://caclase.co.uk/wp-content/uploads/2016/11/Guide-to-Comparing-Gyros-0914.pdf (accessed on 17 July 2022).
  40. Chow, R. Evaluating Inertial Measurement Units; Technical Report; Epson Electronics America: Los Alamitos, CA, USA, 2011. [Google Scholar]
  41. Stockwell, W. Angle Random Walk; Technical Report; Crossbow Technology Inc.: San Jose, CA, USA, 2003. [Google Scholar]
  42. Renaut, F. MEMS Inertial Sensors Technology; Technical Report; Swiss Federal Institute of Technology: Zurich, Switzerland, 2013. [Google Scholar]
  43. Farrenkopf, R.L. Analytic Steady-State Accuracy Solutions for Two Common Spacecraft Attitude Estimators. J. Guid. Control. 1974, 1, 282–284. [Google Scholar] [CrossRef]
  44. Frishman, F. On the Arithmetic Means and Variances of Products and Ratios of Random Variables; Technical Report; Army Research Office: Adelphi, MD, USA, 1971. [Google Scholar]
  45. Shuster, M.D. A Survey of Attitude Representations. J. Astronaut. Sci. 1993, 41, 439–517. [Google Scholar]
  46. Pinter, C.C. A Book of Abstract Algebra, 2nd ed.; Dover Publications: New York, NY, USA, 1990; ISBN 0-486-47417-8. [Google Scholar]
  47. Manual of the ICAO International Standard Atmosphere, 3rd ed.; Technical Report; ICAO DOC-7488/3; International Civil Aviation Organization: Montreal, QC, Canada, 2000.
  48. Gallo, E. Quasi Static Atmospheric Model for Aircraft Trajectory Prediction and Flight Simulation. arXiv 2021, arXiv:2101.10744v1. [Google Scholar] [CrossRef]
  49. Eshelby, M.E. Aircraft Performance, Theory and Practice; Arnold: London, UK, 2000; ISBN 0-340-75897-X. [Google Scholar]
  50. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S.S. An Invitation to 3-D Vision, From Images to Geometric Models; Springer: Berlin/Heidelberg, Germany, 2001; ISBN 978-1-4419-1846-8. [Google Scholar]
  51. Open Scene Graph. Available online: http://openscenegraph.org (accessed on 17 July 2022).
  52. osgEarth. Available online: http://osgearth.org (accessed on 17 July 2022).
  53. Tedaldi, D.; Pretto, A.; Menegatti, E. A Robust and Easy to Implement Method for IMU Calibration without External Equipments Required. IEEE Int. Conf. Robot. Autom. 2014, 3042–3049. [Google Scholar] [CrossRef]
  54. Kaehler, A.; Bradski, G. Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2016; ISBN 978-1-491-93799-0. [Google Scholar]
  55. Analog Devices ADIS16488A Ten Degrees of Freedom Inertial Sensor Data Sheet. 2015. Available online: http://www.analog.com/media/en/technical-documentation/data-sheets/ADIS16488A.pdf (accessed on 17 July 2022).
  56. U-Blox NEO-M8 Data Sheet. Available online: https://www.u-blox.com/en/product/neo-m8-series (accessed on 17 July 2022).
  57. GPS Position Accuracy Measures. 2003. Available online: http://www.gisresources.com/wp-content/uploads/2014/03/gps_book.pdf (accessed on 17 July 2022).
  58. Aeroprobe Corporation Air Data Systems Data Sheet. Available online: www.aeroprobe.com (accessed on 17 July 2022).
  59. Analog Devices ADT7420 Temperature Sensor. 2012. Available online: http://www.analog.com/media/en/technical-documentation/data-sheets/ADT7420.pdf (accessed on 17 July 2022).
Figure 1. Sensors flow diagram.
Figure 2. Propagation with time of sensor error mean.
Figure 3. Propagation with time of sensor error standard deviation.
Figure 4. Propagation with time of first integral of sensor error mean.
Figure 5. Propagation with time of first integral of sensor error standard deviation.
Figure 6. Propagation with time of second integral of sensor error mean.
Figure 7. Propagation with time of second integral of sensor error standard deviation.
Figure 8. Example of Earth Viewer images.
Table 1. Components of sensed trajectory.

Component | Variable | Measured by | Acronym | Rate
Specific force | $\tilde{\mathbf{f}}_{IB}^B$ | Accelerometers | ACC | $\Delta t_{SENSED}$
Inertial angular velocity | $\tilde{\boldsymbol{\omega}}_{IB}^B$ | Gyroscopes | GYR | $\Delta t_{SENSED}$
Magnetic field | $\tilde{\mathbf{B}}^B$ | Magnetometers | MAG | $\Delta t_{SENSED}$
Geodetic coordinates | $\tilde{\mathbf{x}}_{GDT}$ | GNSS receiver | GNSS | $\Delta t_{GNSS}$
Ground velocity | $\tilde{\mathbf{v}}^N$ | GNSS receiver | GNSS | $\Delta t_{GNSS}$
Air pressure | $\tilde{p}$ | Barometer | OSP | $\Delta t_{SENSED}$
Air temperature | $\tilde{T}$ | Thermometer | OAT | $\Delta t_{SENSED}$
Airspeed | $\tilde{v}_{TAS}$ | Pitot tube | TAS | $\Delta t_{SENSED}$
Angle of attack | $\tilde{\alpha}$ | Air vanes | AOA | $\Delta t_{SENSED}$
Angle of sideslip | $\tilde{\beta}$ | Air vanes | AOS | $\Delta t_{SENSED}$
Image | $\mathbf{I}$ | Digital camera | CAM | $\Delta t_{IMG}$
Table 2. Typical inertial sensor biases according to IMU grade.

IMU Grade | Accelerometer Bias [mg] | Gyroscope Bias [°/h]
Marine | 0.01 | 0.001
Aviation | 0.03–0.1 | 0.01
Intermediate | 0.1–1 | 0.1
Tactical | 1–10 | 1–100
Automotive | >10 | >100
Table 3. Typical inertial sensor system noise according to IMU grade.

IMU Grade | Accelerometer Root PSD [m/s/h^0.5] | Gyroscope Root PSD [°/h^0.5]
Aviation | 0.012 | 0.002
Tactical | 0.06 | 0.03–0.1
Automotive | 0.6 | 1
Table 4. Units for single-axis inertial sensor error sources.

Sensor | $B_0$ | $\sigma_u$ | $\sigma_v$ | $f_{BW0}$ | $g_{BW0}$
Accelerometer | m/s² | m/s^2.5 | m/s^1.5 | m/s | m
Gyroscope | °/s | °/s^1.5 | °/s^0.5 | ° | N/A
Table 5. Inertial sensor error sources.

Error | Source | Description | Seeds
Bias Offset | $B_{0}^{ACC}$, $B_{0}^{GYR}$ (run-to-run) | Section 2.2 | $\upsilon_{j,F,ACC}, \upsilon_{j,F,GYR} \in \Upsilon_{j,F}$
Bias Drift | $\sigma_u^{ACC}$, $\sigma_u^{GYR}$ (in-run) | Section 2.2 | $\upsilon_{j,F,ACC}, \upsilon_{j,F,GYR} \in \Upsilon_{j,F}$
System Noise | $\sigma_v^{ACC}$, $\sigma_v^{GYR}$ (in-run) | Section 2.2 | $\upsilon_{j,F,ACC}, \upsilon_{j,F,GYR} \in \Upsilon_{j,F}$
Scale Factor | $s_{ACC}$, $s_{GYR}$ (fixed and T) | Section 2.6, Section 2.7 | $\upsilon_{i,A,ACC}, \upsilon_{i,A,GYR} \in \Upsilon_{i,A}$
Cross-Coupling | $m_{ACC}$, $m_{GYR}$ (fixed) | Section 2.6, Section 2.7 | $\upsilon_{i,A,ACC}, \upsilon_{i,A,GYR} \in \Upsilon_{i,A}$
Lever Arm | $\mathbf{T}_{BP}$, $\sigma_{\hat{T}_{BP}^B}$ (fixed) | Section 2.8 | $\upsilon_{i,A,PLAT} \in \Upsilon_{i,A}$
IMU Attitude | $\sigma_{\psi P}$, $\sigma_{\theta P}$, $\sigma_{\xi P}$, $\sigma_{\hat{\phi}_{BP}}$ (fixed) | Section 2.8 | $\upsilon_{i,A,PLAT} \in \Upsilon_{i,A}$
Table 6. Magnetometer error sources.

Error | Source | Seeds
Hard Iron | $B_{HI,MAG}$ (fixed) | $\upsilon_{i,A,MAG} \in \Upsilon_{i,A}$
Bias Offset | $B_{0,MAG}$ (run-to-run) | $\upsilon_{j,F,MAG} \in \Upsilon_{j,F}$
System Noise | $\sigma_{v,MAG}$ (in-run) | $\upsilon_{j,F,MAG} \in \Upsilon_{j,F}$
Scale Factor | $s_{MAG}$ (fixed) | $\upsilon_{i,A,MAG} \in \Upsilon_{i,A}$
Cross-Coupling | $m_{MAG}$ (fixed) | $\upsilon_{i,A,MAG} \in \Upsilon_{i,A}$
Table 7. GNSS receiver error sources.

Error | Source | Seeds
Bias Offset | $B_{0,GNSS,ION}$ (run-to-run) | $\upsilon_{j,F,GNSS} \in \Upsilon_{j,F}$
System Noise | $\sigma_{GNSS,POS}$, $\sigma_{GNSS,VEL}$, $\sigma_{GNSS,ION}$ (in-run) | $\upsilon_{j,F,GNSS} \in \Upsilon_{j,F}$
Table 8. Air data sensor error sources.

Error | Source | Seeds
Bias Offset | $B_{0}^{OSP}$, $B_{0}^{OAT}$, $B_{0}^{TAS}$, $B_{0}^{AOA}$, $B_{0}^{AOS}$ (run-to-run) | $\upsilon_{j,F,OSP}, \upsilon_{j,F,OAT}, \upsilon_{j,F,TAS}, \upsilon_{j,F,AOA}, \upsilon_{j,F,AOS} \in \Upsilon_{j,F}$
System Noise | $\sigma_{OSP}$, $\sigma_{OAT}$, $\sigma_{TAS}$, $\sigma_{AOA}$, $\sigma_{AOS}$ (in-run) | $\upsilon_{j,F,OSP}, \upsilon_{j,F,OAT}, \upsilon_{j,F,TAS}, \upsilon_{j,F,AOA}, \upsilon_{j,F,AOS} \in \Upsilon_{j,F}$
Table 9. Camera parameters.

Parameter | Symbol | Unit
Focal length | $f$ | mm
Image width | $S_H$ | px
Image height | $S_V$ | px
Pixel size | $s_{PX}$ | mm/px
Principal point horizontal location | $c_1^{IMG}$ | px
Principal point vertical location | $c_2^{IMG}$ | px
Horizontal field of view | $\Theta_H$ | °
Vertical field of view | $\Theta_V$ | °
Table 10. Results of calibration process.

Estimation | # | Coefficients
$\hat{\mathbf{M}}_{ACC}$ | 6 | $\hat{s}_{ACC,i}$, $\hat{m}_{ACC,ij}$
$\hat{\mathbf{M}}_{GYR}$ | 9 | $\hat{s}_{GYR,i}$, $\hat{m}_{GYR,ij}$
$\hat{B}_{0}^{ACC}$ | 3 | $B_{0}^{ACC}\, \hat{N}_{u0,ACC,i}$
$\hat{B}_{0}^{GYR}$ | 3 | $B_{0}^{GYR}\, \hat{N}_{u0,GYR,i}$
Table 11. Results of swinging process.

Estimation | # | Coefficients
$\hat{\mathbf{M}}_{MAG}$ | 9 | $\hat{s}_{MAG,i}$, $\hat{m}_{MAG,ij}$
$\hat{B}_{HI,MAG}$ | 3 | $\hat{B}_{HI,MAG,i}$
$\hat{B}_{0,MAG}$ | 3 | $B_{0,MAG}\, \hat{N}_{u0,MAG,i}$
Table 12. Sensor seeds.

Type | Error Sources | Seeds
Aircraft $i$ | fixed | $\upsilon_{i,A,ACC}, \upsilon_{i,A,GYR}, \upsilon_{i,A,MAG}, \upsilon_{i,A,PLAT}, \upsilon_{i,A,CAM}$
Flight $j$ | run-to-run and in-run | $\upsilon_{j,F,ACC}, \upsilon_{j,F,GYR}, \upsilon_{j,F,MAG}, \upsilon_{j,F,OSP}, \upsilon_{j,F,OAT}, \upsilon_{j,F,TAS}, \upsilon_{j,F,AOA}, \upsilon_{j,F,AOS}, \upsilon_{j,F,GNSS}$
Table 13. Example of frequencies of the different sensors.

Discrete Time | Frequency | Rate
$t_t = t \cdot \Delta t_{TRUTH}$ | 500 Hz | 0.002 s
$t_s = s \cdot \Delta t_{SENSED}$ | 100 Hz | 0.01 s
$t_i = i \cdot \Delta t_{IMG}$ | 10 Hz | 0.1 s
$t_g = g \cdot \Delta t_{GNSS}$ | 1 Hz | 1 s
Table 14. Example of gyroscope performance values.

GYR | Spec | Unit | Variable | Value | Calibration | Unit
In-Run Bias Stability (1σ) | 5.10 | °/h | $\sigma_u^{GYR}$ | 1.42 × 10⁻⁴ | 1.42 × 10⁻⁴ | °/s^1.5
Angle Random Walk (1σ) | 0.26 | °/h^0.5 | $\sigma_v^{GYR}$ | 4.30 × 10⁻³ | 4.30 × 10⁻³ | °/s^0.5
Nonlinearity ¹ | 0.01% | - | $s_{GYR}$ | 3.00 × 10⁻⁴ | 1.50 × 10⁻⁵ | -
Misalignment | ±0.05 | ° | $m_{GYR}$ | 8.70 × 10⁻⁴ | 4.35 × 10⁻⁵ | -
Bias Repeatability (1σ) | ±0.2 | °/s | $B_0^{GYR}$ | 2.00 × 10⁻¹ | 2.00 × 10⁻¹ | °/s
¹ The 0.01% scale factor error obtained in [55] is considered too optimistic and hence modified to 0.03% = 3.00 × 10⁻⁴.
Table 15. Example of accelerometer performance values.

ACC | Spec | Unit | Variable | Value | Calibration | Unit
In-Run Bias Stability (1σ) | 0.07 | mg | $\sigma_u^{ACC}$ | 6.86 × 10⁻⁵ | 6.86 × 10⁻⁵ | m/s^2.5
Velocity Random Walk (1σ) | 0.029 | m/s/h^0.5 | $\sigma_v^{ACC}$ | 4.83 × 10⁻⁴ | 4.83 × 10⁻⁴ | m/s^1.5
Nonlinearity | 0.1% | - | $s_{ACC}$ | 1.00 × 10⁻³ | 5.00 × 10⁻⁵ | -
Misalignment | ±0.035 | ° | $m_{ACC}$ | 6.11 × 10⁻⁴ | 3.05 × 10⁻⁵ | -
Bias Repeatability (1σ) | ±16 | mg | $B_0^{ACC}$ | 1.57 × 10⁻¹ | 1.57 × 10⁻¹ | m/s²
Table 16. Example of magnetometer performance values.

MAG | Spec | Unit | Variable | Value | Comp. | Swinging | Unit
Output Noise | 5 | nT·s^0.5 | $\sigma_{v,MAG}$ | 5.00 × 10⁰ | 5.00 × 10⁰ | 5.00 × 10⁰ | nT·s^0.5
Nonlinearity | 0.5% | - | $s_{MAG}$ | 5.00 × 10⁻³ | 7.50 × 10⁻³ | 7.50 × 10⁻⁴ | -
Misalignment | ±0.35 | ° | $m_{MAG}$ | 6.11 × 10⁻³ | 9.16 × 10⁻³ | 9.16 × 10⁻⁴ | -
Bias (1σ) | ±1500 | nT | $B_{HI,MAG}$ | 1.50 × 10³ | 1.75 × 10³ | 1.75 × 10² | nT
Repeatability | - | - | $B_{0,MAG}$ | - | 5.00 × 10² | 5.00 × 10² | nT
Table 17. Example of GNSS receiver performance values.

GNSS | Spec | Unit | Variable | Value | Unit
Horizontal position accuracy (CEP 50%) | 2.50 | m | $\sigma_{GNSS,POS,HOR}$ | 2.12 × 10⁰ | m
Vertical position accuracy (CEP 50%) | N/A | - | $\sigma_{GNSS,POS,VER}$ | 4.25 × 10⁰ | m
Ionospheric random walk (1/60 Hz) | N/A | - | $\sigma_{GNSS,ION}$ | 1.60 × 10⁻¹ | m
Ionospheric bias offset | N/A | - | $B_{0,GNSS,ION}$ | 8.00 × 10⁰ | m
Velocity accuracy (50%) | 0.05 | m/s | $\sigma_{GNSS,VEL}$ | 7.41 × 10⁻² | m/s
Table 18. Example of air data system performance values.

Air Data System | Spec | Unit | Variable | Value | Unit
Altitude Error | ±10 | m | $\sigma_{OSP}$ | 1.00 × 10² | Pa
 | | | $B_0^{OSP}$ | 1.00 × 10² | Pa
Temperature Error (3σ) | ±0.15 | K | $\sigma_{OAT}$ | 5.00 × 10⁻² | K
 | | | $B_0^{OAT}$ | 5.00 × 10⁻² | K
Airspeed Error (max) | 1 | m/s | $\sigma_{TAS}$ | 3.33 × 10⁻¹ | m/s
 | | | $B_0^{TAS}$ | 3.33 × 10⁻¹ | m/s
Flow Angle Error (max) | ±1.0 | ° | $\sigma_{AOA}$ | 3.33 × 10⁻¹ | °
 | | | $B_0^{AOA}$ | 3.33 × 10⁻¹ | °
Flow Angle Error (max) | ±1.0 | ° | $\sigma_{AOS}$ | 3.33 × 10⁻¹ | °
 | | | $B_0^{AOS}$ | 3.33 × 10⁻¹ | °
Table 19. Example of IMU and camera mounting accuracy values.

Concept | Variable | Value | Unit | Variable | Value | Unit
True Yaw Error | $\sigma_{\psi P}$ | 0.5 | ° | $\sigma_{\psi C}$ | 0.1 | °
True Pitch Error | $\sigma_{\theta P}$ | 2.0 | ° | $\sigma_{\theta C}$ | 0.1 | °
True Bank Error | $\sigma_{\xi P}$ | 0.1 | ° | $\sigma_{\xi C}$ | 0.1 | °
Position Estimation Error | $\sigma_{\hat{T}_{BP}^B}$ | 0.01 | m | $\sigma_{\hat{T}_{BC}^B}$ | 0.002 | m
Attitude Estimation Error | $\sigma_{\hat{\phi}_{BP}}$ | 0.03 | ° | $\sigma_{\hat{\phi}_{BC}}$ | 0.01 | °
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Gallo, E.; Barrientos, A. Customizable Stochastic High-Fidelity Model of the Sensors and Camera Onboard a Fixed Wing Autonomous Aircraft. Sensors 2022, 22, 5518. https://doi.org/10.3390/s22155518
