Article

High Precision Outdoor and Indoor Reference State Estimation for Testing Autonomous Vehicles †

by Eduardo Sánchez Morales 1,*, Julian Dauth 1, Bertold Huber 2, Andrés García Higuera 3 and Michael Botsch 1

1 Technische Hochschule Ingolstadt, Esplanade 10, 85049 Ingolstadt, Germany
2 GeneSys Elektronik GmbH, In der Spöck 10, 77656 Offenburg, Germany
3 European Parliamentary Research Service, Rue Wiertz 60, B-1047 Brussels, Belgium
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in: Sánchez Morales, E.; Botsch, M.; Huber, B.; García Higuera, A. High precision indoor positioning by means of LiDAR. In Proceedings of the 2019 DGON Inertial Sensors and Systems (ISS), Braunschweig, Germany, 10–11 September 2019.
Sensors 2021, 21(4), 1131; https://doi.org/10.3390/s21041131
Submission received: 5 January 2021 / Revised: 1 February 2021 / Accepted: 3 February 2021 / Published: 6 February 2021
(This article belongs to the Section Remote Sensors)

Abstract: A current trend in automotive research is autonomous driving. For the proper testing and validation of automated driving functions, a reference vehicle state is required. Global Navigation Satellite Systems (GNSS) are useful for vehicle automation because of their practicality and accuracy. However, there are situations where the satellite signal is absent or unusable. This research work presents a methodology that addresses those situations, thus largely reducing the dependency of Inertial Navigation Systems (INSs) on Satellite Navigation (SatNav). The proposed methodology includes (1) a standstill recognition based on machine learning, (2) a detailed mathematical description of the horizontation of inertial measurements, (3) sensor fusion by means of statistical filtering, (4) an outlier detection for correction data, (5) a drift detector, and (6) a novel LiDAR-based Positioning Method (LbPM) for indoor navigation. The robustness and accuracy of the methodology are validated with a state-of-the-art INS with Real-Time Kinematic (RTK) correction data. The results obtained show a great improvement in the accuracy of vehicle state estimation under adverse driving conditions, such as when the correction data is corrupted, when there are extended periods with no correction data, and when the vehicle drifts. The proposed LbPM method achieves an accuracy closely resembling that of a system with RTK correction data.

Graphical Abstract

1. Introduction

Autonomous driving has become a popular trend in automotive research. The motivation for autonomous driving ranges from comfort or practicality functions to safety critical applications. Independently of the final use, the fact that the vehicles are machines that can cause serious harm to humans in the event of a malfunction has to be taken into consideration. Therefore, autonomous driving functions need to be subjected to an extensive process of testing and validation which requires a highly accurate reference vehicle state.
A common practice to generate such a highly accurate reference vehicle state is the fusion of data from Inertial Measurement Units (IMU) with measurements from external sensors. Given the extensive scientific community working on these systems, their practicality, and especially their high accuracy, Satellite Navigation (SatNav) receivers have become the most common choice of sensor to fuse with IMUs. Even consumer-grade receivers are capable of acquiring information from various GNSS (such as GPS, GLONASS, Galileo, Beidou, etc.), which enables the receivers to accurately estimate several state variables. Unfortunately, there are common driving situations where satellite signals are either absent or corrupted to an extent that renders them unusable. Some examples of these driving situations are tunnels, bridge underpasses, parking structures or testing halls.
The objective of the present work is to reduce the dependency of INSs on external sensors, while still being able to generate a reference vehicle state. Six key aspects of the estimation of the vehicle state are addressed: (i) the standstill recognition (Section 2.1), (ii) the horizontation of IMU measurements (Section 2.2), (iii) sensor fusion (Section 2.3), (iv) drift detection (Section 2.4), (v) outlier detection for correction data (Section 2.5), and (vi) highly accurate indoor navigation (Section 2.6). It will be shown that the proposed methods are able to provide important improvements in the estimation of the vehicle state even when used individually. Furthermore, if used in combination, the proposed methods allow for an accurate enough estimation of the vehicle state to test and validate autonomous driving functions even in indoor environments.
For an adequate estimation of the vehicle state by means of IMUs, it is essential to ascertain when the vehicle is not moving, because it is only then that both the velocity over ground and the proper acceleration are equal to zero [1]. Even though, historically, vehicle speed sensors were designed to measure velocity rather than to detect a standstill [2], they are now able to accurately measure a standstill. Nonetheless, there could be disadvantages if external devices continuously read the vehicle sensor data in real time (for example, through the OBDII interface). Even though various methods [3,4,5] and tools used by manufacturers [6] theoretically allow a CAN-bus load of ≈80% [7], recent studies suggest that CAN-bus messages can still miss their deadlines [8,9,10]. The traffic within a CAN-bus would increase if an external device continuously accessed the vehicle sensor data in real time, which could further increase the risk that CAN-bus messages miss their deadlines. Consequently, more research is required before the vehicle on-board sensors can be fused with external IMUs. Alternative sensors for measuring the velocity over ground, such as the Wheel Pulse Transducer (WPT) [11] or the Correvit [12,13], increase testing costs and complexity.
To the best of the authors’ knowledge, the only method specifically designed for standstill detection is patented by Robert Bosch GmbH [14]. This method compares the position of an object in consecutive frames of a video provided by a camera in the vehicle, and an algorithm estimates the velocity of the vehicle relative to the object. As with all camera-based methods, there is a trade-off between the spatial resolution and the distance to the recorded objects, which negatively affects the standstill recognition.
The method proposed in this work for standstill recognition is based on a machine learning classifier, and has the key advantage that all required inputs are generated solely with IMU measurements. No additional sensors or infrastructural elements are required, which reduces testing complexity, improves the robustness of the method and makes it highly adequate for indoor navigation.
The state of the vehicle can be estimated once the algorithm has reliably established that the vehicle is moving. If Gauss-Markov stochastic processes are assumed, the most computationally efficient method to estimate the optimum state of moving objects is the Kalman Filter (KF) [15]. The KF algorithm can fuse sensor measurements with the vehicle state predicted by means of a motion model. Since the aim of this work is to generate a reference vehicle state for testing and validation of autonomous driving functions, the motion model used is especially designed for accurate performance under tractive driving.
Two critical parameters for the KF are the measurement noise and the system noise, because they are required to calculate the Kalman gain. Therefore, two methods are proposed to monitor both values, and thus to improve the robustness of the KF: a drift recognition and an outlier detector.
The drift recognition tackles the task of monitoring the vehicle state to detect when the vehicle is drifting. This information is extremely valuable because the motion model used in this research work is optimized for tractive-driving situations. When the vehicle starts to drift, the previously estimated system noise is no longer valid. Knowing when the vehicle is drifting enables the possibility of adjusting the system noise or switching to a better suited motion model.
Other works in the literature also address the issue of drifting; however, they tend to limit the scope of their research to simulation environments [16,17] or require additional parameters that cannot easily be estimated with an IMU [18]. Conversely, the drift recognition in this work is designed to operate by performing consistency checks between two estimated state variables: the lateral acceleration and the sideslip angle. As demonstrated later, these state variables provide enough information to perform a robust drift detection.
The outlier detector serves as a complement to the drift detector. Whereas the drift detector monitors the validity of the motion model, the outlier detector does the same for the measurements. This is especially beneficial for SatNav measurements due to the multi-path effect. This effect appears when the signal received from the satellites does not follow a straight path, but bounces off other objects before reaching the SatNav receiver. The multi-path effect can occur as the vehicle drives near objects that are able to reflect the SatNav signal, such as trailers, tall buildings or metallic structures, and causes problems for the methods that measure the vehicle state (trilateration, carrier-phase measurements, etc.). A relevant consequence of the multi-path effect is that it reduces the accuracy of the measured vehicle state. It is quite common to see significant jumps in the measured state variables while the multi-path effect is present. There are several approaches in the literature to deal with the multi-path problem, such as [19,20,21,22,23]. However, implementing these approaches would require access to the pseudo-ranges of the SatNav receivers, which is not possible for most consumer-grade devices. Therefore, the problem is addressed by filtering outliers. This filtering is achieved by a consistency check of the measurement vector before it is used as correction data.
In this research work, a dynamic low-pass filter is designed to refine the outlier detection method by taking the measurement noise into account. This dynamic low-pass filter is highly beneficial for all sensors that estimate a quality indicator for their own measurements, as is the case with the “dilution of precision” or “diminution of precision” [24,25], or the standard deviation of the SatNav receivers. Many sensors can track the accuracy associated with the measurements that they perform, but are not always capable of detecting estimation errors in this self-reported accuracy.
The investments made in GNSS [26,27] have provided various advantages, such as (i) high accuracy [28,29], (ii) a low cost for end users [30], and (iii) an extensive scientific community that works on refining its measurements and overcoming its limitations [31]. Arguably, these three advantages make the GNSS the best choice for external sensors in open-sky use-cases. However, there are still situations where the GNSS are not available or usable, as is the case in tunnels, parking structures or urban canyons. Moreover, there is an interest in testing in indoor halls because this allows engineers to test sensors, algorithms and systems under simulated and reproducible environmental conditions such as day, night, rain, fog or even SatNav outages.
Due to the aforementioned reasons, it is necessary to investigate approaches for acquiring feedback in closed-sky situations. Comprehensive surveys on indoor positioning systems can be found in [32,33]. These approaches are based on the combination of diverse methods and sensors, such as WiFi, sonar, radar, or proprietary technologies such as the iBeacon [34].
The LiDAR-based approach with the best performance is presented in [35], with a mean localization error of 11.5 cm and a standard deviation of 5.4 cm. The accuracy is evaluated by comparing the outputs of the algorithm to “human-labeled” ground-truth data. The indoor positioning system with the best performance that was found is the “Active Bat” [36,37], with a mean localization error of 3 cm, but its evaluation methodology is not clearly specified [38]. The biggest disadvantage of the Active Bat is the numerous receivers it requires, which must be placed with separations of 1.2 m and must cover the complete measurement area. Therefore, the installation of such a system might not be practical. Other indoor positioning systems already on the market [39,40,41] are designed for applications that do not require centimeter-precise localization. Alternative navigation methods that use LiDARs are presented in [42,43,44]. However, the existing methods still face a series of challenges, such as the positioning accuracy, the number of state variables they can measure or the algorithm runtime.

2. Materials and Methods

The present work deals with the measurements provided by different sensors, such as IMUs, LiDARs, SatNav receivers, etc. For simplification purposes, it is assumed that all sensors output their measurements in the Local Car Plane (LCP). The LCP is the vehicle reference frame and is defined analogously to the ISO 8855:2011 norm [45]. The LCP is composed of the mutually perpendicular $x_{\mathrm{LCP}}$, $y_{\mathrm{LCP}}$, and $z_{\mathrm{LCP}}$ axes, with $\mathbf{z}_{\mathrm{LCP}} = \mathbf{x}_{\mathrm{LCP}} \times \mathbf{y}_{\mathrm{LCP}}$ and origin $o_{\mathrm{LCP}}$ at the Center of sprung Mass (CoM) of the vehicle. The $x_{\mathrm{LCP}}$ axis is parallel to the longitudinal axis of the vehicle and points towards the hood, the $z_{\mathrm{LCP}}$ axis points upwards, and the $y_{\mathrm{LCP}}$ axis is given according to the right-hand rule. The LCP is illustrated in Figure 1.
The vehicles move on the Local Tangent Plane (LTP). The LTP is a Cartesian reference frame composed of the mutually perpendicular $x_{\mathrm{LTP}}$, $y_{\mathrm{LTP}}$ and $z_{\mathrm{LTP}}$ axes, with $\mathbf{z}_{\mathrm{LTP}} = \mathbf{x}_{\mathrm{LTP}} \times \mathbf{y}_{\mathrm{LTP}}$, and with origin $o_{\mathrm{LTP}}$ at an arbitrary location on the surface of the Earth. The axis orientation of the LTP is similar to the East-North-Up (ENU) reference frame: the $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$-plane is perpendicular to the gravitational pull of the Earth, the $y_{\mathrm{LTP}}$ axis points to the true north of the Earth, and the $z_{\mathrm{LTP}}$ axis is parallel to the gravitational pull of the Earth and points upwards.

2.1. Standstill Recognition

The first step for estimating the vehicle state is the standstill detection. The strategy to achieve this is to generate features from the IMU signals that can describe whether the vehicle is standing still or not. The generated features are then classified by means of machine learning. The proposed method uses the Random Forest (RF) [46] to classify standstill or motion, with the features generated from the IMU signals as inputs. The classes used are shown in Table 1. False positives are unacceptable because they can greatly affect the estimation of the vehicle state, especially if they occur at highway velocities. False negatives can be tolerated because, as shown later, their effect is negligible for the estimation of the vehicle state. If this holds, the standstill classification is considered robust.
The use of the RF is justified mainly because it is not possible to manually determine thresholds for the IMU signals that allow a robust standstill classification. This happens because some driving conditions are almost equal to a standstill, such as cruising on a highway or driving at walking velocity. This fact is further strengthened when considering the numerous configurations possible for the same vehicle chassis (engine, suspension, tires, etc.), and that even small components, such as the engine mounts, can have a significant impact on the vibrations and rotations sensed by the IMU.
The RF not only enables the implementation of a robust standstill recognition: a 10 min dataset is enough for a robust classification, and the training process is done within seconds. Also, given that the RF is real-time capable, it allows the online use of the proposed method. Further details about the RF can be found in [46]. The procedure to recognize a standstill consists of three steps: (i) generation of the training dataset, (ii) training of the RF, and (iii) classification of the vehicle state. The specifics applicable to the present research work are detailed below.
The first step to recognize a standstill is to generate a training dataset for the RF. The 10 min-long dataset that is required to generate the features for the RF is composed of the proper acceleration vector $\mathbf{a}_{\mathrm{LCP}}$ and the composition of rotations $\dot{\boldsymbol{\theta}}_{\mathrm{LCP}}$ that are measured by the IMU, such that
$$\mathbf{a}_{\mathrm{LCP}} = \begin{bmatrix} a_{x,\mathrm{LCP}} & a_{y,\mathrm{LCP}} & a_{z,\mathrm{LCP}} \end{bmatrix}^T, \tag{1}$$
$$\dot{\boldsymbol{\theta}}_{\mathrm{LCP}} = \begin{bmatrix} \dot{\theta}_{\mathrm{roll}} & \dot{\theta}_{\mathrm{pitch}} & \dot{\theta}_{\mathrm{yaw}} \end{bmatrix}^T. \tag{2}$$
The training dataset for the RF is equally distributed between three conditions: standstill, moving at walking pace and random driving.
For the standstill condition, the vehicle should be as motionless as possible, but ready to drive off. In the case of vehicles with automatic transmission, “drive” should be selected; vehicles with manual transmission should be either in neutral or in gear with the clutch pedal pressed. The reason for this is to let the IMU sense engine and drivetrain vibrations as well. During the standstill, all dead loads are negligible, and all live loads are undesirable. Except for the engine and drivetrain, all actions that provoke chassis vibration or motion should be avoided: passenger movement, vehicle loading, opening or closing doors, trunk or hood, etc.
For the condition of moving at walking pace, the vehicle is driven as slow as possible and on a straight line. Contact with pedals or steering wheel should be avoided. This allows one to generate samples with the “motion” label but that are very similar to a “standstill”, which is highly relevant for differentiating between both.
For the condition of random driving, the vehicle is driven with random accelerations, velocities and directions. The more diverse, the better for the dataset.
Next, the features for the RF are generated in the time and frequency domain. The features in the time domain are: (i) $a_{x,\mathrm{LCP}}^2 + a_{y,\mathrm{LCP}}^2$, (ii) $a_{z,\mathrm{LCP}}^2$, (iii) $\dot{\theta}_{\mathrm{roll}}^2 + \dot{\theta}_{\mathrm{pitch}}^2$, and (iv) $\dot{\theta}_{\mathrm{yaw}}^2$.
The features in the frequency domain are created from the features in the time domain by computing the Discrete Fourier Transform (DFT) of each feature in the time domain. In the present work, the DFT is generated with a MATLAB Toolbox that is based on [47,48,49]. The frequency range that the DFT uses is given by Equation (3):
$$\left[\, 0 \;:\; \frac{F_s}{n_{\mathrm{DFT}}} \;:\; F_s \,\right], \tag{3}$$
where $F_s$ is the sampling frequency and $n_{\mathrm{DFT}}$ is the number of samples used for the DFT. The highest meaningful frequency from the DFT is $\frac{F_s}{2}$ [50,51], and the 0 Hz component (direct current) is not relevant in this work. Therefore, the used frequency range is given by Equation (4):
$$\left[\, \frac{F_s}{n_{\mathrm{DFT}}} \;:\; \frac{F_s}{n_{\mathrm{DFT}}} \;:\; \frac{n_{\mathrm{DFT}}}{2} \cdot \frac{F_s}{n_{\mathrm{DFT}}} \,\right]. \tag{4}$$
The IMU sampling rate is set to $F_s = 100\ \mathrm{Hz}$, and a time window of $\tau_{\mathrm{DFT}} = 170\ \mathrm{ms}$ is used. Therefore, $n_{\mathrm{DFT}} = 17$ samples are used each time for the DFT generation: the latest values of the features in the time domain, and their values from the previous $n_{\mathrm{DFT}} - 1$ samples. The used DFT frequencies are therefore (i) 5.8824 Hz, (ii) 11.7647 Hz, (iii) 17.6471 Hz, (iv) 23.5294 Hz, (v) 29.4118 Hz, (vi) 35.2941 Hz, (vii) 41.1765 Hz, and (viii) 47.0588 Hz. The single-sided amplitudes of these 8 frequencies are the features in the frequency domain, for a total of 36 features: 4 in the time domain and 32 in the frequency domain. It should be noted that, due to the sliding window, it is not possible to generate a DFT for the first $n_{\mathrm{DFT}} - 1$ samples. In this case, the amplitudes of their frequencies are set to zero.
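To make the feature generation concrete, the following is a minimal Python sketch, assuming the buffered IMU samples are available as NumPy arrays (the implementation in this work uses a MATLAB toolbox; all function and variable names below are illustrative):

```python
import numpy as np

FS = 100.0   # IMU sampling frequency F_s in Hz
N_DFT = 17   # samples per 170 ms sliding window

def standstill_features(acc_buf, gyro_buf):
    """Generate the 36 classification features from N_DFT buffered IMU
    samples. acc_buf, gyro_buf: arrays of shape (N_DFT, 3)."""
    # Time-domain features from the latest sample.
    ax, ay, az = acc_buf[-1]
    wr, wp, wy = gyro_buf[-1]
    time_feats = np.array([ax**2 + ay**2, az**2, wr**2 + wp**2, wy**2])
    # The same four signals over the whole window, for the DFT.
    signals = np.column_stack([
        acc_buf[:, 0]**2 + acc_buf[:, 1]**2,
        acc_buf[:, 2]**2,
        gyro_buf[:, 0]**2 + gyro_buf[:, 1]**2,
        gyro_buf[:, 2]**2,
    ])
    # Single-sided amplitude spectrum; drop the 0 Hz bin and keep the
    # 8 bins up to F_s/2 (5.8824 ... 47.0588 Hz for F_s = 100, N_DFT = 17).
    spec = np.abs(np.fft.rfft(signals, axis=0)) / N_DFT
    spec[1:] *= 2.0
    freq_feats = spec[1:N_DFT // 2 + 1].ravel(order="F")  # 8 bins x 4 signals
    return np.concatenate([time_feats, freq_feats])       # 36 features in total
```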
By analyzing the features of the training datasets of different vehicles, it can be noticed that the dominant frequencies at standstill can be associated with the vehicle motorization: for vehicles with internal combustion engines, with the idling Revolutions Per Minute (RPM) and the number of cylinders. When the vehicles move at walking velocity, the dominant frequency can be associated mainly with the vehicle differential. Finally, when the vehicle moves faster than walking velocity, the dominant frequencies come from the live loads of the vehicle, such as road imperfections and suspension travel, among others.
As shown in Table 1, “standstill” and “motion” are the two labels used for classification. The labeling of the training dataset can be done by using other sensors as reference, by manually analyzing the IMU signals, or a combination of both. Whichever way, it is important to notice that the labeling directly affects the classification performance of the RF.
A best practice for labeling the training datasets is to include the jerks that appear at drive-off into the “motion” class. When the vehicle comes to a stop, the “standstill” class should start only once the chassis has stopped moving and there are no more jerks or rotations present in the IMU signals. In case of uncertainty, it is preferable to label a sample as “motion” rather than “standstill” because, as is shown later, false negatives can be tolerated, but false positives are unacceptable.
The algorithm initialization and the transition between states are two effects of the proposed method to keep in mind. As for the initialization, $n_{\mathrm{DFT}} - 1$ samples are needed for algorithm initialization. This is because, as explained above, the DFT can be generated only once $n_{\mathrm{DFT}}$ samples from the IMU are obtained. The single-sided amplitudes for the previous samples are set to zero, which might not be representative of the vehicle state.
Regarding the transition between states, as a vehicle goes from one state to the other (motion to standstill or vice versa), it might happen that the samples used for the DFT belong to different classes. The smaller the sliding window $\tau_{\mathrm{DFT}}$, the faster all samples for the DFT fall within a single class. Conversely, as the sliding window $\tau_{\mathrm{DFT}}$ gets bigger, more features can be used for the classification, but this also increases the lapse where samples from both classes are fed to the RF. The current size of $\tau_{\mathrm{DFT}}$ is found to be adequate for a robust standstill recognition, and is determined after testing the RF with several hours of test datasets and by comparing the classification performance of the RF with different values of $\tau_{\mathrm{DFT}}$.
Once the features in the time and frequency domain are generated and labeled, the second step to recognize a standstill is to train the RF. The training parameters used in this work are (i) number of trees: 12, (ii) stopping criterion: minimum leaf size, and (iii) minimum leaf size: 5. These parameters provide a robust classification and can be used for a variety of land vehicles.
It should be noted that the trained RF is not specific to the vehicle used to generate the training dataset, but to all vehicles with the same configuration (chassis, engine, transmission, suspension, etc.). This means that the training occurs only once, and the trained RF can be used for as long as the vehicle configuration remains the same.
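A minimal training sketch follows, using scikit-learn’s RandomForestClassifier as a stand-in for the RF implementation used in this work; the file names are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_train: one 36-feature row per IMU sample (see the sketch in Section 2.1);
# y_train: 0 = "standstill", 1 = "motion". File names are placeholders.
X_train = np.load("features.npy")
y_train = np.load("labels.npy")

# Training parameters from the text: 12 trees, minimum leaf size 5.
rf = RandomForestClassifier(n_estimators=12, min_samples_leaf=5)
rf.fit(X_train, y_train)

# Online use: classify the newest feature row.
is_motion = rf.predict(X_train[-1:])
```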
Once the RF is trained, the third step to recognize a standstill is to use the RF for the standstill recognition on test data. For this, the 36 features required as inputs for the RF are generated as the vehicle drives. The first step to do so is to buffer $n_{\mathrm{DFT}}$ IMU samples, i.e., the most recent IMU measurement and the previous $n_{\mathrm{DFT}} - 1$. The latest IMU measurement yields the features in the time domain ($a_{x,\mathrm{LCP}}^2 + a_{y,\mathrm{LCP}}^2$, $a_{z,\mathrm{LCP}}^2$, $\dot{\theta}_{\mathrm{roll}}^2 + \dot{\theta}_{\mathrm{pitch}}^2$, and $\dot{\theta}_{\mathrm{yaw}}^2$), and the features in the frequency domain are generated with all $n_{\mathrm{DFT}}$ buffered samples. When all 36 features (4 in the time domain and 32 in the frequency domain) are generated, they are used as input for the RF.
If a standstill is recognized, all change rates (velocities, accelerations and rotation rates) are set to zero. Only the position and orientation are carried forward from the previous time step. Otherwise, if the vehicle moves, its state is estimated as is detailed later.

2.2. Horizontation of IMU Measurements

The horizontation is understood as the transformation of the IMU measurements as if the $x_{\mathrm{LCP}} \times y_{\mathrm{LCP}}$ and $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$ planes were parallel, and the $z_{\mathrm{LCP}}$ axis had the same orientation and direction as the $z_{\mathrm{LTP}}$ axis. The horizontation is required mainly because most vehicle motion models found in the literature depict the movement of cars from a bird’s-eye perspective. Also, it is not practical to physically keep the $x_{\mathrm{LCP}} \times y_{\mathrm{LCP}}$ and $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$ planes parallel when the vehicles are in motion. The horizontation requires tracking the LTP axes relative to the LCP, so that the IMU measurements can be projected on the tracked LTP axes. This process varies depending on whether the vehicle is standing still or not. Both cases are detailed in what follows.
With the exception of engine vibrations, there are no live loads present in a vehicle during a standstill. Therefore, the gravitational pull $G$ is the only force sensed by the IMU. At standstill, it is given by
$$G = \lVert \mathbf{a}_{\mathrm{LCP}} \rVert = \sqrt{a_{x,\mathrm{LCP}}^2 + a_{y,\mathrm{LCP}}^2 + a_{z,\mathrm{LCP}}^2} \approx 9.81\ \frac{\mathrm{m}}{\mathrm{s}^2}. \tag{5}$$
By using the gravitational pull as reference, the roll and pitch angles are calculated as follows:
$$\theta_{\mathrm{roll}} = \arccos\!\left(\frac{a_{y,\mathrm{LCP}}}{G}\right), \tag{6}$$
$$\theta_{\mathrm{pitch}} = \arccos\!\left(\frac{a_{x,\mathrm{LCP}}}{G}\right). \tag{7}$$
Then, the easiest manner to obtain an initial value of the yaw angle $\theta_{\mathrm{yaw}}$ is to equip the vehicle with a sensor that measures the Course Over Ground (COG), such as a SatNav receiver, and to drive the vehicle on a straight line, because driving on a straight line results in $\theta_{\mathrm{yaw}} = \mathrm{COG}$.
Two rotations are necessary to transform the LTP to the LCP: one rotation around an arbitrary axis and one rotation around the $z_{\mathrm{LTP}}$ axis. The first rotation aligns the $z_{\mathrm{LCP}}$ axis with the $z_{\mathrm{LTP}}$ axis. The second rotation aligns the $x_{\mathrm{LCP}}$ axis with the $x_{\mathrm{LTP}}$ axis. The two parameters of the first rotation are the arbitrary axis $\mathbf{r}_{h0}$ and the rotation angle $\theta_h$. They are expressed as follows:
$$\theta_h = \arccos\!\left(\frac{a_{z,\mathrm{LCP}}}{G}\right), \tag{8}$$
$$\mathbf{r}_{h0} = \begin{bmatrix} a_{x,\mathrm{LCP}} \\ a_{y,\mathrm{LCP}} \\ a_{z,\mathrm{LCP}} \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} r_{x,h} \\ r_{y,h} \\ r_{z,h} \end{bmatrix}. \tag{9}$$
The rotation matrix $\mathbf{R}_{\mathrm{LCP}}^{\mathrm{LCP}'}$ that aligns the $z_{\mathrm{LCP}}$ axis with the $z_{\mathrm{LTP}}$ axis is then given by:
$$\mathbf{R}_{\mathrm{LCP}}^{\mathrm{LCP}'} = \frac{1}{r_m}\begin{bmatrix}
r_{x,h}^2 + \left(r_{y,h}^2 + r_{z,h}^2\right)\cos\theta_h & r_{x,h} r_{y,h} r_c - r_{z,h}\sqrt{r_m}\sin\theta_h & r_{x,h} r_{z,h} r_c + r_{y,h}\sqrt{r_m}\sin\theta_h \\
r_{x,h} r_{y,h} r_c + r_{z,h}\sqrt{r_m}\sin\theta_h & r_{y,h}^2 + \left(r_{x,h}^2 + r_{z,h}^2\right)\cos\theta_h & r_{y,h} r_{z,h} r_c - r_{x,h}\sqrt{r_m}\sin\theta_h \\
r_{x,h} r_{z,h} r_c - r_{y,h}\sqrt{r_m}\sin\theta_h & r_{y,h} r_{z,h} r_c + r_{x,h}\sqrt{r_m}\sin\theta_h & r_{z,h}^2 + \left(r_{x,h}^2 + r_{y,h}^2\right)\cos\theta_h
\end{bmatrix}, \tag{10}$$
$$r_c = 1 - \cos\theta_h, \tag{11}$$
$$r_m = r_{x,h}^2 + r_{y,h}^2 + r_{z,h}^2. \tag{12}$$
Equation (10) is known as “Rodrigues’ formula”, and its step-by-step derivation can be found in [52]. A graphical representation of a rotation around an arbitrary axis is shown in Figure 2. It should be noted that Equation (10) presents a singularity if $\theta_{\mathrm{roll}} = 0$ and $\theta_{\mathrm{pitch}} = 0$, because in that case $r_m = 0$ as well. However, this would mean that the $z_{\mathrm{LCP}}$ axis and the $z_{\mathrm{LTP}}$ axis are already aligned. Hence, $\mathbf{R}_{\mathrm{LCP}}^{\mathrm{LCP}'} = \mathbf{I}_{3\times3}$, where $\mathbf{I}$ is the identity matrix.
The matrix that tracks the LTP with respect to the LCP is then given by
$$\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP}} = \begin{bmatrix} \cos\theta_{\mathrm{yaw}} & -\sin\theta_{\mathrm{yaw}} & 0 \\ \sin\theta_{\mathrm{yaw}} & \cos\theta_{\mathrm{yaw}} & 0 \\ 0 & 0 & 1 \end{bmatrix} \left(\mathbf{R}_{\mathrm{LCP}}^{\mathrm{LCP}'}\right)^T. \tag{13}$$
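As an illustration, the following Python sketch computes the standstill alignment of Equations (5)–(13). The Rodrigues matrix is built here in its axis-angle operator form, which is algebraically equivalent to the component-wise Equation (10); the function and variable names are illustrative:

```python
import numpy as np

def standstill_alignment(a_lcp, theta_yaw):
    """Track the LTP axes in the LCP from a standstill accelerometer reading
    a_lcp = [a_x, a_y, a_z] and an initial yaw angle, Eqs. (5)-(13)."""
    G = np.linalg.norm(a_lcp)                    # gravitational pull, Eq. (5)
    theta_h = np.arccos(a_lcp[2] / G)            # rotation angle, Eq. (8)
    r = np.cross(a_lcp, [0.0, 0.0, 1.0])         # arbitrary rotation axis, Eq. (9)
    r_m = np.dot(r, r)
    if np.isclose(r_m, 0.0):                     # z axes already aligned
        R_align = np.eye(3)
    else:
        k = r / np.sqrt(r_m)                     # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' formula in operator form, equivalent to Eq. (10).
        R_align = np.eye(3) + np.sin(theta_h) * K + (1.0 - np.cos(theta_h)) * (K @ K)
    cy, sy = np.cos(theta_yaw), np.sin(theta_yaw)
    R_z = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    return R_z @ R_align.T                       # tracker matrix, Eq. (13)
```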
Once the vehicle pose at standstill is determined, it must be updated as the vehicle moves. The update is done primarily with the gyroscopes because even when cruising in a straight line, random forces act on road vehicles. Some sources of these forces could be road irregularities, bumps, road joints or the wind. These forces are sensed by the IMU, which complicates the update of R LTP LCP with the accelerometers. Therefore, the composition of rotations is used instead. This is detailed in what follows.
With the instantaneous rotation rates $\dot{\theta}_{\mathrm{roll}}$, $\dot{\theta}_{\mathrm{pitch}}$ and $\dot{\theta}_{\mathrm{yaw}}$ around the $x_{\mathrm{LCP}}$, $y_{\mathrm{LCP}}$, and $z_{\mathrm{LCP}}$ axes respectively, the instantaneous composition of rotations is expressed as follows:
$$\mathbf{S}\!\left(\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_2}^{\mathrm{LCP},\tau_1}\right) = \begin{bmatrix} 0 & -\dot{\theta}_{\mathrm{yaw}} & \dot{\theta}_{\mathrm{pitch}} \\ \dot{\theta}_{\mathrm{yaw}} & 0 & -\dot{\theta}_{\mathrm{roll}} \\ -\dot{\theta}_{\mathrm{pitch}} & \dot{\theta}_{\mathrm{roll}} & 0 \end{bmatrix}, \tag{14}$$
$$\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_2}^{\mathrm{LCP},\tau_1} = \begin{bmatrix} \dot{\theta}_{\mathrm{roll}} \\ \dot{\theta}_{\mathrm{pitch}} \\ \dot{\theta}_{\mathrm{yaw}} \end{bmatrix}. \tag{15}$$
Next, the instantaneous composition of rotations is integrated as if it were a vector and added to $\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP}}$. The step-by-step derivation of this procedure is shown below.
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_2}\, \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{16}$$
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \left(\mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_1} + \Delta\tau\, \dot{\mathbf{R}}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_2}\right) \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{17}$$
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \left(\mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_1} + \Delta\tau\, \mathbf{S}\!\left(\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_2}\right) \mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_2}\right) \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{18}$$
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \left(\mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_1} + \Delta\tau\, \mathbf{S}\!\left(\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_2}^{\mathrm{LCP},\tau_1}\right)^T \mathbf{R}_{\mathrm{LCP},\tau_1}^{\mathrm{LCP},\tau_2}\right) \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{19}$$
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \left(\mathbf{I}_{3\times3} + \Delta\tau\, \mathbf{S}\!\left(\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_2}^{\mathrm{LCP},\tau_1}\right)^T \mathbf{I}_{3\times3}\right) \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{20}$$
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \left(\Delta\tau\, \mathbf{S}\!\left(\dot{\boldsymbol{\theta}}_{\mathrm{LCP},\tau_2}^{\mathrm{LCP},\tau_1}\right)^T + \mathbf{I}_{3\times3}\right) \mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_1}, \tag{21}$$
where $\Delta\tau = \tau_2 - \tau_1$. Equation (21) is used each time a new IMU measurement is available to keep updating $\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP}}$.
Adding rotations as vectors is an approximation that is valid only for rotations that tend to zero [53]. This is because $\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2}$ is not an orthonormal matrix. Therefore, $\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2}$ must be adjusted to make its axes perpendicular to each other. The procedure is shown in the following.
$$\hat{\mathbf{R}}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \begin{bmatrix} \mathbf{r}_{h1} & \mathbf{r}_{h2} & \mathbf{r}_{h3} \end{bmatrix}, \tag{22}$$
$$\mathbf{r}_{h3}' = \mathbf{r}_{h3}, \tag{23}$$
$$\mathbf{r}_{h1}' = \mathbf{r}_{h2} \times \mathbf{r}_{h3}', \tag{24}$$
$$\mathbf{r}_{h2}' = \mathbf{r}_{h3}' \times \mathbf{r}_{h1}', \tag{25}$$
$$\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2} = \begin{bmatrix} \dfrac{\mathbf{r}_{h1}'}{\lVert \mathbf{r}_{h1}' \rVert} & \dfrac{\mathbf{r}_{h2}'}{\lVert \mathbf{r}_{h2}' \rVert} & \dfrac{\mathbf{r}_{h3}'}{\lVert \mathbf{r}_{h3}' \rVert} \end{bmatrix}. \tag{26}$$
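The gyroscope-based update of Equation (21) and the re-orthonormalization of Equations (22)–(26) can be sketched as follows (illustrative Python, not the implementation used in this work):

```python
import numpy as np

def update_tracker(R_prev, omega_lcp, dt):
    """Propagate the LTP tracker with the gyro rates (Eq. (21)) and
    re-orthonormalize it (Eqs. (22)-(26)).
    omega_lcp = [roll, pitch, yaw] rates in rad/s."""
    wx, wy, wz = omega_lcp
    S = np.array([[0.0, -wz, wy],
                  [wz, 0.0, -wx],
                  [-wy, wx, 0.0]])               # Eq. (14)
    R_hat = (dt * S.T + np.eye(3)) @ R_prev      # Eq. (21), not orthonormal
    # Keep the third column, rebuild the other two, then normalize.
    r3 = R_hat[:, 2]
    r1 = np.cross(R_hat[:, 1], r3)               # Eq. (24)
    r2 = np.cross(r3, r1)                        # Eq. (25)
    return np.column_stack([r1 / np.linalg.norm(r1),
                            r2 / np.linalg.norm(r2),
                            r3 / np.linalg.norm(r3)])   # Eq. (26)
```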
The next step is to project the acceleration vector $\mathbf{a}_{\mathrm{LCP}}$ and the composition of rotations $\dot{\boldsymbol{\theta}}_{\mathrm{LCP}}$ measured by the IMU onto the LTP. This is done by using the tracker matrix $\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2}$ as follows:
$$\mathbf{a}_{\mathrm{LTP},\tau_2} = \left(\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2}\right)^T \mathbf{a}_{\mathrm{LCP}} = \begin{bmatrix} a_{x,\mathrm{LTP},\tau_2} \\ a_{y,\mathrm{LTP},\tau_2} \\ a_{z,\mathrm{LTP},\tau_2} \end{bmatrix}, \tag{27}$$
$$\dot{\boldsymbol{\theta}}_{\mathrm{LTP},\tau_2} = \left(\mathbf{R}_{\mathrm{LTP}}^{\mathrm{LCP},\tau_2}\right)^T \dot{\boldsymbol{\theta}}_{\mathrm{LCP}} = \begin{bmatrix} \dot{\theta}_{x,\mathrm{LTP},\tau_2} \\ \dot{\theta}_{y,\mathrm{LTP},\tau_2} \\ \dot{\theta}_{z,\mathrm{LTP},\tau_2} \end{bmatrix}, \tag{28}$$
where $\mathbf{a}_{\mathrm{LTP},\tau_2}$ is the acceleration vector in the LTP at time instance $\tau_2$, $\dot{\boldsymbol{\theta}}_{\mathrm{LTP},\tau_2}$ is the composition of rotations in the LTP at time instance $\tau_2$, $a_{x,\mathrm{LTP},\tau_2}$, $a_{y,\mathrm{LTP},\tau_2}$ and $a_{z,\mathrm{LTP},\tau_2}$ are the accelerations at time instance $\tau_2$ along the $x_{\mathrm{LTP}}$, $y_{\mathrm{LTP}}$ and $z_{\mathrm{LTP}}$ axes respectively, and $\dot{\theta}_{x,\mathrm{LTP},\tau_2}$, $\dot{\theta}_{y,\mathrm{LTP},\tau_2}$ and $\dot{\theta}_{z,\mathrm{LTP},\tau_2}$ are the rotations at time instance $\tau_2$ around the $x_{\mathrm{LTP}}$, $y_{\mathrm{LTP}}$ and $z_{\mathrm{LTP}}$ axes respectively.
The acceleration vector and composition of rotations are then projected on the $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$-plane. The magnitude $a_{\mathrm{OG}}$ and direction $\theta_{\mathrm{aOG}}$ of the acceleration over ground are calculated as follows:
$$a_{\mathrm{OG}} = \sqrt{a_{x,\mathrm{LTP},\tau_2}^2 + a_{y,\mathrm{LTP},\tau_2}^2}, \tag{29}$$
$$\theta_{\mathrm{aOG}} = \operatorname{arctan2}\!\left(a_{y,\mathrm{LTP},\tau_2},\, a_{x,\mathrm{LTP},\tau_2}\right). \tag{30}$$
The magnitude $\dot{\theta}_{\mathrm{OG}}$ and direction $\theta_{\mathrm{rOG}}$ of the composition of rotations over ground are given by:
$$\dot{\theta}_{\mathrm{OG}} = \sqrt{\dot{\theta}_{x,\mathrm{LTP},\tau_2}^2 + \dot{\theta}_{y,\mathrm{LTP},\tau_2}^2}, \tag{31}$$
$$\theta_{\mathrm{rOG}} = \operatorname{arctan2}\!\left(\dot{\theta}_{y,\mathrm{LTP},\tau_2},\, \dot{\theta}_{x,\mathrm{LTP},\tau_2}\right). \tag{32}$$
Once the accelerations along and rotations around the LTP axes are known, they are projected onto the projection of the longitudinal axis of the vehicle on the $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$-plane, i.e., the acceleration vector $\mathbf{a}_{\mathrm{LTP},\tau_2}$ and composition of rotations $\dot{\boldsymbol{\theta}}_{\mathrm{LTP},\tau_2}$ are projected onto a vector located on the $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$-plane with orientation $\theta_{\mathrm{yaw}}$. Therefore,
$$\begin{bmatrix} a_{x,h} \\ a_{y,h} \end{bmatrix} = \begin{bmatrix} a_{\mathrm{OG}} \cos\left(\theta_{\mathrm{aOG}} - \theta_{\mathrm{yaw}}\right) \\ a_{\mathrm{OG}} \sin\left(\theta_{\mathrm{aOG}} - \theta_{\mathrm{yaw}}\right) \end{bmatrix}, \tag{33}$$
$$\begin{bmatrix} \dot{\theta}_{\mathrm{roll}} \\ \dot{\theta}_{\mathrm{pitch}} \end{bmatrix} = \begin{bmatrix} \dot{\theta}_{\mathrm{OG}} \cos\left(\theta_{\mathrm{rOG}} - \theta_{\mathrm{yaw}}\right) \\ \dot{\theta}_{\mathrm{OG}} \sin\left(\theta_{\mathrm{rOG}} - \theta_{\mathrm{yaw}}\right) \end{bmatrix}, \tag{34}$$
where $a_{x,h}$ and $a_{y,h}$ are called “horizontal accelerations”. The measurement vector of the Extended Kalman Filter (EKF) designed in this research work (Section 2.3) includes $\dot{\theta}_{z,\mathrm{LTP}}$, $a_{x,h}$ and $a_{y,h}$ as measurement variables. The process from Equation (14) to Equation (34) is repeated with each new inertial measurement while the vehicle is in motion to keep the measurement vector of the EKF updated.
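A compact sketch of one horizontation step, assuming the tracker matrix from Equation (26) is available, could look as follows (illustrative names):

```python
import numpy as np

def horizontal_measurements(R_tracker, a_lcp, omega_lcp, theta_yaw):
    """Project the IMU readings on the LTP and extract the horizontal
    accelerations of Eq. (33) plus the yaw rate about z_LTP."""
    a_ltp = R_tracker.T @ a_lcp                  # Eq. (27)
    w_ltp = R_tracker.T @ omega_lcp              # Eq. (28)
    a_og = np.hypot(a_ltp[0], a_ltp[1])          # Eq. (29)
    th_aog = np.arctan2(a_ltp[1], a_ltp[0])      # Eq. (30)
    ax_h = a_og * np.cos(th_aog - theta_yaw)     # Eq. (33)
    ay_h = a_og * np.sin(th_aog - theta_yaw)
    return ax_h, ay_h, w_ltp[2]                  # inputs to the EKF measurement vector
```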

2.3. Statistical Filtering

The next step is to estimate the vehicle state. This is done by using a motion model to predict the vehicle motion and by using exteroceptive sensors to get feedback.
If one assumes Gauss-Markov stochastic processes, the most computationally efficient method to estimate the optimum state of moving objects is the Kalman Filter. The Kalman Filter fuses the predicted vehicle motion with observed measurements. Its detailed derivation can be found in [15], and the specifics for the present research work are given in what follows.
The measurement vector is defined in the present work as
$$\mathbf{z}_h = \begin{bmatrix} x_h & y_h & \theta_{z,\mathrm{LTP}} & \dot{\theta}_{z,\mathrm{LTP}} & v_h & \beta_h & a_{x,h} & a_{y,h} \end{bmatrix}^T, \tag{35}$$
where $x_h$, $y_h$ are the $x, y$ coordinates of $o_{\mathrm{LCP}}$ in the LTP, $\theta_{z,\mathrm{LTP}}$ is the yaw angle of the vehicle, $v_h$ is the velocity over ground of the vehicle, and $\beta_h$ is the sideslip angle of the vehicle. The measurement noise covariance matrix $\boldsymbol{\sigma}_z$ and state vector $\mathbf{x}_s$ used are defined as follows:
$$\boldsymbol{\sigma}_z = \operatorname{diag}\!\left( \sigma_{x,h}^2 \;\; \sigma_{y,h}^2 \;\; \sigma_{\theta_z,\mathrm{LTP}}^2 \;\; \sigma_{\dot{\theta}_z,\mathrm{LTP}}^2 \;\; \sigma_{v,h}^2 \;\; \sigma_{\beta,h}^2 \;\; \sigma_{a_x,h}^2 \;\; \sigma_{a_y,h}^2 \right), \tag{36}$$
$$\mathbf{x}_s = \begin{bmatrix} x_s & y_s & \theta_{\mathrm{yaw},s} & \dot{\theta}_{\mathrm{yaw},s} & v_s & \beta_{\mathrm{sv}} & \dot{\beta}_{\mathrm{sv}} & \beta_{\mathrm{sp}} & a_{x,\mathrm{sh}} & a_{y,\mathrm{sh}} \end{bmatrix}^T, \tag{37}$$
where $x_s$, $y_s$ are the $x, y$ coordinates of $o_{\mathrm{LCP}}$ in the LTP, $\theta_{\mathrm{yaw},s}$ is the yaw angle, $\dot{\theta}_{\mathrm{yaw},s}$ is the yaw rate, $v_s$ is the velocity over ground, $\beta_{\mathrm{sv}}$ is the first estimation of the vehicle sideslip angle, $\dot{\beta}_{\mathrm{sv}}$ is the rate of change of the vehicle sideslip angle, $\beta_{\mathrm{sp}}$ is the second estimation of the vehicle sideslip angle, and $a_{x,\mathrm{sh}}$ and $a_{y,\mathrm{sh}}$ are the vehicle horizontal accelerations. Two sideslip estimators are used to improve the prediction of the vehicle motion. The first sideslip estimator ($\beta_{\mathrm{sv}}$ and $\dot{\beta}_{\mathrm{sv}}$) is based on the balance of moments of inertia and provides better performance for the estimation of the vehicle velocity. The second sideslip estimator ($\beta_{\mathrm{sp}}$) is based on a geometrical model under the assumption of tractive driving, and provides better performance for the position estimation.
With a distance $l_r$ along the longitudinal axis of the vehicle from $o_{\mathrm{LCP}}$ to the rear transaxle, and additive system noise $\boldsymbol{\eta}_s$, the state equations are given by:
$$\mathbf{x}_{\tau_{i+1},s} = f\!\left(\mathbf{x}_{\tau_i,s}\right) + \boldsymbol{\eta}_s, \tag{38}$$
$$f_{1\ldots10}\!\left(\mathbf{x}_{\tau_i,s}\right) = \begin{bmatrix}
x_s + v_s \cos\left(\theta_{\mathrm{yaw},s} + \beta_{\mathrm{sp}}\right) \Delta\tau + \left(a_{x,\mathrm{sh}} \cos\theta_{\mathrm{yaw},s} - a_{y,\mathrm{sh}} \sin\theta_{\mathrm{yaw},s}\right) \frac{\Delta\tau^2}{2} \\
y_s + v_s \sin\left(\theta_{\mathrm{yaw},s} + \beta_{\mathrm{sp}}\right) \Delta\tau + \left(a_{x,\mathrm{sh}} \sin\theta_{\mathrm{yaw},s} + a_{y,\mathrm{sh}} \cos\theta_{\mathrm{yaw},s}\right) \frac{\Delta\tau^2}{2} \\
\theta_{\mathrm{yaw},s} + \dot{\theta}_{\mathrm{yaw},s} \Delta\tau \\
\dot{\theta}_{\mathrm{yaw},s} \\
v_s + \left(a_{x,\mathrm{sh}} \cos\beta_{\mathrm{sv}} + a_{y,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) \Delta\tau \\
\beta_{\mathrm{sv}} + \dot{\beta}_{\mathrm{sv}} \Delta\tau \\
\frac{1}{v_s}\left(a_{y,\mathrm{sh}} \cos\beta_{\mathrm{sv}} - a_{x,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) - \dot{\theta}_{\mathrm{yaw},s} \\
\tan^{-1}\!\left(\frac{l_r\, \dot{\theta}_{\mathrm{yaw},s}}{v_s}\right) \\
a_{x,\mathrm{sh}} \\
a_{y,\mathrm{sh}}
\end{bmatrix}. \tag{39}$$
From $f_7(\mathbf{x}_s)$ and $f_8(\mathbf{x}_s)$, one deduces that when the vehicle drives slower than $1.5\ \frac{\mathrm{m}}{\mathrm{s}}$, the sideslip estimators lead to a mathematical indetermination. To solve this, when $v_s < 1.5\ \frac{\mathrm{m}}{\mathrm{s}}$, the state elements $x_{s6}$, $x_{s7}$ and $x_{s8}$ are set to zero. Therefore, no sideslip is considered for the velocity or position estimation. However, given the low velocities at which the indetermination happens, the integration error is negligible.
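A minimal sketch of the state transition of Equation (39), including the low-velocity guard described above, is given below (illustrative Python; the EKF machinery around it is omitted):

```python
import numpy as np

def motion_model(x, dt, l_r):
    """State transition f(x) of Eq. (39); x is ordered as in Eq. (37)."""
    xs, ys, yaw, yaw_rate, v, b_sv, b_sv_dot, b_sp, ax, ay = x
    if v < 1.5:
        # Low-velocity guard: no sideslip is considered below 1.5 m/s.
        b_sv = b_sv_dot = b_sp = 0.0
        f6 = f7 = f8 = 0.0
    else:
        f6 = b_sv + b_sv_dot * dt
        f7 = (ay * np.cos(b_sv) - ax * np.sin(b_sv)) / v - yaw_rate
        f8 = np.arctan(l_r * yaw_rate / v)
    return np.array([
        xs + v * np.cos(yaw + b_sp) * dt
           + (ax * np.cos(yaw) - ay * np.sin(yaw)) * dt**2 / 2.0,
        ys + v * np.sin(yaw + b_sp) * dt
           + (ax * np.sin(yaw) + ay * np.cos(yaw)) * dt**2 / 2.0,
        yaw + yaw_rate * dt,
        yaw_rate,
        v + (ax * np.cos(b_sv) + ay * np.sin(b_sv)) * dt,
        f6,
        f7,
        f8,
        ax,
        ay,
    ])
```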

2.4. Drift Recognition

The geometrical model from which $f_8(\mathbf{x}_s)$ is derived assumes tractive driving. Hence, the performance of the motion model in predicting the vehicle motion decays during a drift. By recognizing a drift, it is possible to (i) identify when the performance of the motion model decays, and (ii) take measures such as adjusting the system noise or switching to other motion models. Therefore, an objective of this research work is to create a drift detector. Other approaches achieve this with multiple SatNav receivers or with specific sensors, such as the Correvit. In this work, a drift is recognized by checking the consistency between state variables, in order to reduce the dependence of INSs on external sensors.
The tire longitudinal and lateral slips are a consequence of the forces that act on the tires as well as their mechanical properties. As such, both slips indicate how much force the tires can transmit to the ground. When classified by the tire slips, the driving state of a vehicle can be either “tractive” or “non-tractive” [54]. Tractive driving is characterized by high lateral and longitudinal tire tractive forces and by tire slips of usually less than 10° sideslip and less than 10% longitudinal slip. Non-tractive driving is characterized by low tire tractive forces and by high slips, usually greater than 10° sideslip and greater than 10% longitudinal slip.
Hence, the first pair of state variables compared to detect a drift are the two sideslip estimators. During tractive driving, $\beta_{\mathrm{sv}} \approx \beta_{\mathrm{sp}}$. However, as the vehicle starts drifting, the difference between both grows, because $\beta_{\mathrm{sp}}$ is derived under the assumption of tractive driving. Therefore, the difference between $\beta_{\mathrm{sv}}$ and $\beta_{\mathrm{sp}}$ is a strong indicator of whether the vehicle is drifting or not.
The second pair of variables used for the drift recognition are the estimated lateral acceleration $a_{y,\mathrm{sh}}$ and the expected lateral acceleration $a_{y,\mathrm{sc}}$. The latter results from assuming a tire sideslip of $\beta_t \approx 0$. This would mean that the path of a turning vehicle describes an arc of constant radius, so that $a_{y,\mathrm{sc}} = \dot{\theta}_{\mathrm{yaw},s}\, v_s$. Therefore, $a_{y,\mathrm{sh}}$ estimates the lateral acceleration in reality, and $a_{y,\mathrm{sc}}$ describes how the lateral acceleration should ideally be. During tractive driving, $a_{y,\mathrm{sh}} \approx a_{y,\mathrm{sc}}$. As the vehicle starts drifting, the difference between $a_{y,\mathrm{sh}}$ and $a_{y,\mathrm{sc}}$ becomes statistically significant.
In the present work, the vehicle is said to be drifting if $\left| a_{y,\mathrm{sh}} - a_{y,\mathrm{sc}} \right| \geq 2.1\ \frac{\mathrm{m}}{\mathrm{s}^2}$ and $\left| \beta_{\mathrm{sv}} - \beta_{\mathrm{sp}} \right| \geq 0.2\ \mathrm{rad}$. These thresholds are determined by comparing the corresponding state variables as measured during drifting tests and as estimated during tractive driving on open roads.
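A direct transcription of these two consistency checks into code is straightforward (illustrative sketch):

```python
def is_drifting(ay_sh, yaw_rate, v, b_sv, b_sp):
    """Drift flag from the two consistency checks of Section 2.4,
    with the thresholds stated above (2.1 m/s^2 and 0.2 rad)."""
    ay_sc = yaw_rate * v   # expected lateral acceleration, constant-radius assumption
    return abs(ay_sh - ay_sc) >= 2.1 and abs(b_sv - b_sp) >= 0.2
```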

2.5. Outlier Detection

The next step is to fuse the predicted vehicle motion with measurements from external sensors (correction data). However, the sensor information could be corrupted, which would negatively affect the estimation of the vehicle state. To avoid this, the state vector is used as a reference to filter out flawed sensor measurements. This is because, by definition, the state vector represents the last known optimal state of the vehicle, and because, assuming that both the motion model and the sensors depict reality with 100% fidelity, the measurement vector is equal to the predicted vehicle state.
The consistency check is done with two consecutive sensor measurements made at time instances $\tau_1$ and $\tau_2$, which are compared to the corresponding predicted state variable. The difference between two consecutive velocity measurements can be expressed as follows:
$$v_{\tau_2,h} - v_{\tau_1,h} = \left(a_{x,\mathrm{sh}} \cos\beta_{\mathrm{sv}} + a_{y,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) \Delta\tau, \tag{40}$$
and, to account for the inaccuracies of the sensor and the motion model, this can be rewritten as:
$$v_{\tau_2,h} - v_{\tau_1,h} \approx \left(a_{x,\mathrm{sh}} \cos\beta_{\mathrm{sv}} + a_{y,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) \Delta\tau. \tag{41}$$
Since the purpose is to filter outliers, an additional margin is given by means of a factor $\kappa = 3$. Also, only the absolute difference is taken, to account for both positive and negative accelerations of the vehicle. Therefore, the measured velocity $v_{\tau_2,h}$ at time instance $\tau_2$ is used as correction data if
$$\left| v_{\tau_2,h} - v_{\tau_1,h} \right| \leq \left| \left(a_{x,\mathrm{sh}} \cos\beta_{\mathrm{sv}} + a_{y,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) \Delta\tau \right| \kappa. \tag{42}$$
The value of κ is determined after analyzing hours of real-world data, where the behavior of the SatNav measurements during regular operation is compared to the behavior of the measurements during periods of time where the multi-path effect is present.
Since $\theta_{\mathrm{yaw}} \approx \mathrm{COG}$ during tractive driving, the COG measured by the SatNav receivers can be used as correction data for $\theta_{\mathrm{yaw},s}$ when $\dot{\theta}_{\mathrm{yaw},s} \approx 0$. Analogously to the velocity measurement, the measured COG $\theta_{\tau_2,\mathrm{yaw},h}$ at time instance $\tau_2$ is used as correction data if
$$\left| \theta_{\tau_2,\mathrm{yaw},h} - \theta_{\tau_1,\mathrm{yaw},h} \right| \leq \left| \dot{\theta}_{\mathrm{yaw},s}\, \Delta\tau \right| \kappa. \tag{43}$$
Analogously to the velocity and the COG, the SatNav location measurement $\left(x_{\tau_2,h},\, y_{\tau_2,h}\right)$ at time instance $\tau_2$ is used as correction data if
$$\sqrt{\left(x_{\tau_2,h} - x_{\tau_1,h}\right)^2 + \left(y_{\tau_2,h} - y_{\tau_1,h}\right)^2} \leq \left| \left(v_s + \left(a_{x,\mathrm{sh}} \cos\beta_{\mathrm{sv}} + a_{y,\mathrm{sh}} \sin\beta_{\mathrm{sv}}\right) \Delta\tau\right) \Delta\tau \right| \kappa. \tag{44}$$
The same method to detect outliers can be applied to $\beta_{\mathrm{sv}}$ and $\beta_{\mathrm{sp}}$ if necessary.
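The three gates of Equations (42)–(44) can be sketched as follows (illustrative Python, with $\kappa$ as stated above):

```python
import numpy as np

KAPPA = 3.0   # outlier margin factor kappa

def velocity_ok(v2, v1, ax_sh, ay_sh, b_sv, dt):
    """Velocity consistency gate of Eq. (42)."""
    pred = (ax_sh * np.cos(b_sv) + ay_sh * np.sin(b_sv)) * dt
    return abs(v2 - v1) <= abs(pred) * KAPPA

def cog_ok(yaw2, yaw1, yaw_rate_s, dt):
    """COG consistency gate of Eq. (43)."""
    return abs(yaw2 - yaw1) <= abs(yaw_rate_s * dt) * KAPPA

def position_ok(p2, p1, v_s, ax_sh, ay_sh, b_sv, dt):
    """Position consistency gate of Eq. (44); p1, p2 are (x, y) pairs."""
    step = np.hypot(p2[0] - p1[0], p2[1] - p1[1])
    pred = (v_s + (ax_sh * np.cos(b_sv) + ay_sh * np.sin(b_sv)) * dt) * dt
    return step <= abs(pred) * KAPPA
```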
Some sensors can compute quality indicators for their own measurements, such as a dilution of precision or a standard deviation. The problem is that the sensors sometimes do not recognize when they miscalculate these quality measures, as is the case with the standard deviation of SatNav receivers.
To address this, the quality measure is modeled as a PT1 element to dampen it over time. Let $\sigma_{\tau_1,h}$ and $\sigma_{\tau_2,h}$ be an element of $\boldsymbol{\sigma}_z$ at time instances $\tau_1$ and $\tau_2$, $\sigma'_{\tau_1,h}$ and $\sigma'_{\tau_2,h}$ the corresponding values that the sensor itself calculates, $t_{\mathrm{sat}} = 1.5\ \mathrm{s}$ a saturation parameter, and $t_{\tau_1}$ and $t_{\tau_2}$ a timer at time instances $\tau_1$ and $\tau_2$. The update of the standard deviation is then modeled as follows:
$$t_{\tau_2} = \begin{cases} 0, & \sigma'_{\tau_1,h} < \sigma'_{\tau_2,h} \\ t_{\tau_1} + \Delta\tau, & \text{otherwise,} \end{cases} \tag{45}$$
$$\sigma_{\tau_2,h} = \begin{cases} \sigma'_{\tau_2,h}, & \sigma'_{\tau_1,h} < \sigma'_{\tau_2,h} \\ \sigma_{\tau_1,h}\, e^{-\frac{t_{\tau_2}}{t_{\mathrm{sat}}}} + \sigma'_{\tau_2,h} \left(1 - e^{-\frac{t_{\tau_2}}{t_{\mathrm{sat}}}}\right), & \text{otherwise.} \end{cases} \tag{46}$$
The saturation parameter t sat is optimized for a SatNav receiver by analyzing its measurements with and without the multi-path effect, and can be optimized for other sensors as required.
From Equations (45) and (46), it can be deduced that the strategy is to quickly shift the Kalman gain towards the motion model if the quality of the sensor measurements decays, and to gradually shift the gain towards the sensor if the quality of its measurements continuously improves.
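A minimal sketch of the update of Equations (45) and (46) follows (illustrative names; the damped value and the timer are carried between iterations):

```python
import math

T_SAT = 1.5   # saturation parameter t_sat in seconds

def damped_sigma(sigma_prev, sigma_sensor_prev, sigma_sensor, t_prev, dt):
    """PT1-style damping of a self-reported quality measure, Eqs. (45)-(46).
    Returns the damped standard deviation and the updated timer."""
    if sigma_sensor_prev < sigma_sensor:
        # Quality worsened: adopt the larger sensor value immediately.
        return sigma_sensor, 0.0
    # Quality improving: blend gradually towards the sensor value.
    t = t_prev + dt
    w = math.exp(-t / T_SAT)
    return sigma_prev * w + sigma_sensor * (1.0 - w), t
```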

2.6. LiDAR-Based Positioning Method

As stated above, there are various situations where no SatNav is available, but where highly precise correction data is required. A highly precise source of correction data is generated in this work by means of a Velodyne LiDAR HDL-32E [55], and the method is detailed in what follows.
The first step is to identify references or “markers” with the LiDAR. Since recognizing markers by their shape is computationally intensive, they are identified by their reflectivity instead. It should be noted that the point cloud resolution decreases (i) as the distance $d_{\mathrm{velo}}$ between the LiDAR and the measured object increases, and (ii) as the rotational velocity $\omega_{\mathrm{velo}}$ of the LiDAR head increases. The Euclidean distance $d_{\mathrm{hor,velo}}$ between two horizontally adjacent points as a function of $d_{\mathrm{velo}}$ and $\omega_{\mathrm{velo}}$ is given by
$$d_{\mathrm{hor,velo}} = d_{\mathrm{velo}} \cdot \sin\!\left(\omega_{\mathrm{velo}} \cdot \frac{360^\circ}{60} \cdot 0.00004608\ \mathrm{s}\right), \tag{47}$$
where $0.00004608\ \mathrm{s}$ is the time required for one firing cycle (a single shot of all lasers for a single LiDAR azimuth). From Equation (47), the relation between $d_{\mathrm{velo}}$, $\omega_{\mathrm{velo}}$ and the point cloud resolution can be deduced. This relation becomes very relevant as the vehicle velocity increases.
The chosen LiDAR can measure the NIST calibrated reflectivity [56], which ranges from 0 for a lost reflection to 255 for a reflection with no losses. This allows differentiating highly reflective objects from the rest of the world, but not identifying the highly reflective objects individually.
Once only the points with high reflectivity remain, they are clustered using a time threshold of $\Gamma_{\mathrm{cluster}} = 1\ \mathrm{ms}$: the points measured within $\Gamma_{\mathrm{cluster}}$ of one another are clustered together. Hence, all points that are within a clustering distance $d_{\mathrm{cluster}}$ from each other are clustered together, with
$$d_{\mathrm{cluster}} = d_{\mathrm{velo}} \cdot \sin\!\left(\omega_{\mathrm{velo}} \cdot \frac{360^\circ}{60} \cdot \Gamma_{\mathrm{cluster}}\right). \tag{48}$$
The value of $d_{\mathrm{cluster}}$ is important when defining $\omega_{\mathrm{velo}}$, $\Gamma_{\mathrm{cluster}}$ and the spacing between markers.
The next step is to determine a single set of $(x, y)$ coordinates for each cluster. In this work, the motion of the vehicle is limited to the $x_{\mathrm{LTP}} \times y_{\mathrm{LTP}}$-plane. Therefore, the $i$-th cluster point $\mathbf{p}_{i,\mathrm{velo}}$ is defined as
$$\mathbf{p}_{i,\mathrm{velo}} = \begin{bmatrix} x_{i,\mathrm{velo}} \\ y_{i,\mathrm{velo}} \end{bmatrix} = \begin{bmatrix} d_{i,\mathrm{velo}} \cdot \cos\theta_{i,\mathrm{velo}} \\ d_{i,\mathrm{velo}} \cdot \sin\theta_{i,\mathrm{velo}} \end{bmatrix}, \tag{49}$$
and the $i$-th cluster of LiDAR measurements is then defined as
$$\mathbf{P}_{i,\mathrm{velo}} \in \mathbb{R}^{2 \times n_{\mathrm{pts}}} = \begin{bmatrix} x_{1,\mathrm{velo}} & \ldots & x_{i,\mathrm{velo}} & \ldots & x_{n_{\mathrm{pts}},\mathrm{velo}} \\ y_{1,\mathrm{velo}} & \ldots & y_{i,\mathrm{velo}} & \ldots & y_{n_{\mathrm{pts}},\mathrm{velo}} \end{bmatrix}, \tag{50}$$
where $d_{i,\mathrm{velo}}$ and $\theta_{i,\mathrm{velo}}$ are respectively the range and azimuth angle of the $i$-th measured point, and $n_{\mathrm{pts}}$ is the number of points in the cluster.
The coordinates $\mathbf{p}_{i,m} = [x_{i,m},\, y_{i,m}]^T$ that express the measured position in the LCP of the $i$-th cluster (marker) are then calculated by means of the mid-range mean, so that
$$\mathbf{p}_{i,m} = \begin{bmatrix} x_{i,m} \\ y_{i,m} \end{bmatrix} = \begin{bmatrix} \dfrac{\max\left(\mathbf{P}_{i,\mathrm{velo}}(1,:)\right) + \min\left(\mathbf{P}_{i,\mathrm{velo}}(1,:)\right)}{2} \\ \dfrac{\max\left(\mathbf{P}_{i,\mathrm{velo}}(2,:)\right) + \min\left(\mathbf{P}_{i,\mathrm{velo}}(2,:)\right)}{2} \end{bmatrix}. \tag{51}$$
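For illustration, the mid-range mean of Equation (51) in Python (names are illustrative):

```python
import numpy as np

def cluster_midrange(P):
    """Mid-range coordinates of one cluster, Eq. (51).
    P: array of shape (2, n_pts) with the clustered (x, y) points."""
    return (P.max(axis=1) + P.min(axis=1)) / 2.0

# Example: a small cluster of three points.
P = np.array([[4.98, 5.02, 5.00],
              [1.10, 1.14, 1.12]])
x_m, y_m = cluster_midrange(P)   # -> (5.00, 1.12)
```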
The mid-range mean is applied as well to $d_{\mathrm{velo}}$, $\theta_{\mathrm{velo}}$ and the timestamp of the clustered measurements. This yields the distance $d_{i,m}$, the azimuth $\theta_{i,m}$ and the time instance at which the $i$-th marker is seen. Next, the true location in the LTP of the markers must be known. This is defined for the $i$-th marker as follows:
$$\mathbf{p}_{i,\mathrm{map}} = [x_{i,\mathrm{map}},\, y_{i,\mathrm{map}}]^T. \tag{52}$$
The true location in the LTP of all markers is stored in the marker map, which is assumed to be available beforehand. This map can be generated, for example, by means of a tachymeter or by stitching together LiDAR measurements of the test area. The marker map is then defined as follows:
$$\mathbf{P}_{\mathrm{map}} \in \mathbb{R}^{3 \times n_{\mathrm{pts}}} = \begin{bmatrix} 1 & \ldots & i & \ldots & n_{\mathrm{pts}} \\ x_{1,\mathrm{map}} & \ldots & x_{i,\mathrm{map}} & \ldots & x_{n_{\mathrm{pts}},\mathrm{map}} \\ y_{1,\mathrm{map}} & \ldots & y_{i,\mathrm{map}} & \ldots & y_{n_{\mathrm{pts}},\mathrm{map}} \end{bmatrix}, \tag{53}$$
where $n_{\mathrm{pts}}$ is here the total number of markers, and the first row is a unique identifier for each marker.
In order to identify which marker is being measured, an approximate position $\mathbf{p}_{\mathrm{app}}$ and an approximate orientation $\theta_{\mathrm{app}}$ of the vehicle in the LTP are required. This approximate position can be, for example, $(x_s, y_s)$, $(x_h, y_h)$ or initialization values, and is defined as follows:
$$\mathbf{p}_{\mathrm{app}} = [x_{\mathrm{app}},\, y_{\mathrm{app}}]^T. \tag{54}$$
The positioning error $\eta_{d,\mathrm{app}}$ is then defined as follows:
$$\eta_{d,\mathrm{app}} = \left\lVert \mathbf{p}_{\mathrm{app}} - \mathbf{p}_{\mathrm{true}} \right\rVert, \tag{55}$$
where $\mathbf{p}_{\mathrm{true}} = [x_{\mathrm{true}},\, y_{\mathrm{true}}]^T$ is the true position of the vehicle in the LTP. Since the location of the vehicle is tracked by means of an EKF, during tractive driving it holds that
$$\mathbf{p}_{\mathrm{true}} \approx \mathbf{p}_{\mathrm{app}} \approx \begin{bmatrix} x_s \\ y_s \end{bmatrix}. \tag{56}$$
The approximate orientation can be $\theta_{\mathrm{yaw},s}$ or an initialization value, and during tractive driving it holds that
$$\theta_{\mathrm{true}} \approx \theta_{\mathrm{app}} \approx \theta_{\mathrm{yaw},s}, \tag{57}$$
where $\theta_{\mathrm{true}}$ is the true orientation of the vehicle in the LTP. The orientation error $\eta_\theta$ is defined as follows:
$$\eta_\theta = \theta_{\mathrm{app}} - \theta_{\mathrm{true}}. \tag{58}$$
To prevent ambiguities in the marker identification, $\eta_{d,\mathrm{app}}$ and $\eta_\theta$ are constrained as follows:
$$\left\lVert \mathbf{p}_{i,m} \right\rVert \sin\eta_\theta + \eta_{d,\mathrm{app}} < \frac{\min\left(\mathbf{d}_{\mathrm{map}}\right)}{2}, \tag{59}$$
where $\mathbf{d}_{\mathrm{map}}$ is a vector with the Euclidean distances between all possible marker combinations. The markers do not have to be placed in specific patterns, as long as the constraint of Equation (59) holds. The measured position $\mathbf{p}'_{i,m} = [x'_{i,m},\, y'_{i,m}]^T$ in the LTP of the $i$-th marker is then given by
$$\mathbf{p}'_{i,m} = \mathbf{p}_{\mathrm{app}} + \begin{bmatrix} \cos\theta_{\mathrm{app}} & -\sin\theta_{\mathrm{app}} \\ \sin\theta_{\mathrm{app}} & \cos\theta_{\mathrm{app}} \end{bmatrix} \mathbf{p}_{i,m}. \tag{60}$$
The element of $\mathbf{P}_{\mathrm{map}}$ closest to $\mathbf{p}'_{i,m}$ is then the measured marker.
Once the markers can be individually identified, the measurement variables are calculated. The first one is the velocity over ground $v_h$. To calculate it, two LiDAR measurements (consecutive or not) must point to the same marker. So, let $\mathbf{p}_{\tau_1,m}$ and $\mathbf{p}_{\tau_2,m}$ be the measurements in the LCP of one marker made at time instances $\tau_1$ and $\tau_2$ respectively, so that
$$\mathbf{p}_{\tau_1,m} = \begin{bmatrix} x_{\tau_1,m} \\ y_{\tau_1,m} \end{bmatrix} = \begin{bmatrix} d_{\tau_1,m} \cdot \cos\theta_{\tau_1,m} \\ d_{\tau_1,m} \cdot \sin\theta_{\tau_1,m} \end{bmatrix}, \tag{61}$$
$$\mathbf{p}_{\tau_2,m} = \begin{bmatrix} x_{\tau_2,m} \\ y_{\tau_2,m} \end{bmatrix} = \begin{bmatrix} d_{\tau_2,m} \cdot \cos\theta_{\tau_2,m} \\ d_{\tau_2,m} \cdot \sin\theta_{\tau_2,m} \end{bmatrix}. \tag{62}$$
From $\mathbf{p}_{\tau_1,m}$ and $\mathbf{p}_{\tau_2,m}$, the geometry of a cone is constructed, as shown in Figure 3.
Since the sum of the internal angles of a triangle is $\pi$ rad, the internal angles of the constructed geometry are calculated as follows:
$$\theta_{1,v} = \begin{cases} \theta_{\tau_1,m}, & \theta_{\tau_1,m} \leq \pi \\ 2\pi - \theta_{\tau_1,m}, & \theta_{\tau_1,m} > \pi, \end{cases} \tag{63}$$
$$\theta_{2,v} = \begin{cases} \pi - \left(\theta_{\tau_2,m} + \dot{\theta}_{\mathrm{yaw},s} \cdot \Delta\tau\right), & \theta_{\tau_2,m} \leq \pi \\ \left(\theta_{\tau_2,m} + \dot{\theta}_{\mathrm{yaw},s} \cdot \Delta\tau\right) - \pi, & \theta_{\tau_2,m} > \pi, \end{cases} \tag{64}$$
$$\theta_{3,v} = \pi - \theta_{2,v} - \theta_{1,v}. \tag{65}$$
The vehicle displacement $d_{\mathrm{car}}$ between the time instances $\tau_1$ and $\tau_2$ is calculated by means of the cosine law. In addition, since the LiDAR measurements have a timestamp, the measured velocity over ground of the vehicle $v_h$ between $\tau_1$ and $\tau_2$ is calculated as follows:
$$v_h = \frac{d_{\mathrm{car}}}{\Delta\tau} = \frac{\sqrt{d_{\tau_1,m}^2 + d_{\tau_2,m}^2 - 2 \cdot d_{\tau_1,m} \cdot d_{\tau_2,m} \cdot \cos\theta_{3,v}}}{\Delta\tau}. \tag{66}$$
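A sketch of this velocity computation from Equations (63)–(66) could look as follows (illustrative Python):

```python
import numpy as np

def velocity_from_marker(d1, th1, d2, th2, yaw_rate, dt):
    """Velocity over ground from two measurements of the same marker,
    Eqs. (63)-(66). d, th: range and azimuth of the marker in the LCP."""
    th1_v = th1 if th1 <= np.pi else 2.0 * np.pi - th1                # Eq. (63)
    th2_obs = th2 + yaw_rate * dt
    th2_v = (np.pi - th2_obs) if th2 <= np.pi else (th2_obs - np.pi)  # Eq. (64)
    th3_v = np.pi - th2_v - th1_v                                     # Eq. (65)
    d_car = np.sqrt(d1**2 + d2**2 - 2.0 * d1 * d2 * np.cos(th3_v))    # cosine law
    return d_car / dt                                                 # Eq. (66)
```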
Nevertheless, the cone geometry is not completely defined, as it can rotate around the marker. Therefore, it cannot be used to calculate the vehicle location. Instead, both LiDAR measurements must point to different markers. Also, to completely define the geometry shown in Figure 4, $\mathbf{p}_{\tau_2,m}$ is expressed in the LCP at time instance $\tau_1$ as follows:
$$\tilde{\mathbf{p}}_{\tau_2,m} = \begin{bmatrix} \cos\left(\theta_{\tau_2,\mathrm{yaw},s} - \theta_{\tau_1,\mathrm{yaw},s}\right) & -\sin\left(\theta_{\tau_2,\mathrm{yaw},s} - \theta_{\tau_1,\mathrm{yaw},s}\right) \\ \sin\left(\theta_{\tau_2,\mathrm{yaw},s} - \theta_{\tau_1,\mathrm{yaw},s}\right) & \cos\left(\theta_{\tau_2,\mathrm{yaw},s} - \theta_{\tau_1,\mathrm{yaw},s}\right) \end{bmatrix}^T \mathbf{p}_{\tau_2,m} + \begin{bmatrix} x_{\tau_2,s} - x_{\tau_1,s} \\ y_{\tau_2,s} - y_{\tau_1,s} \end{bmatrix} = \begin{bmatrix} \tilde{x}_{\tau_2,m} \\ \tilde{y}_{\tau_2,m} \end{bmatrix}. \tag{67}$$
Next, assuming that $\mathbf{p}_{\tau_1,m}$ and $\mathbf{p}_{\tau_2,m}$ point to $\mathbf{p}_{1,\mathrm{map}}$ and $\mathbf{p}_{2,\mathrm{map}}$ respectively, the angular offset $\Xi_{\theta,\mathrm{LCP}}$ from the LTP to the LCP at time instance $\tau_1$ is given by
$$\Xi_{\theta,\mathrm{LCP}} = \operatorname{arctan2}\!\left(\tilde{y}_{\tau_2,m} - y_{\tau_1,m},\, \tilde{x}_{\tau_2,m} - x_{\tau_1,m}\right) - \operatorname{arctan2}\!\left(y_{2,\mathrm{map}} - y_{1,\mathrm{map}},\, x_{2,\mathrm{map}} - x_{1,\mathrm{map}}\right). \tag{68}$$
Afterwards, the marker measurements in the LCP at time instance $\tau_1$ are rotated as follows:
$$\tilde{\mathbf{p}}_{\tau_1,m}^{*} = \begin{bmatrix} \cos\Xi_{\theta,\mathrm{LCP}} & -\sin\Xi_{\theta,\mathrm{LCP}} \\ \sin\Xi_{\theta,\mathrm{LCP}} & \cos\Xi_{\theta,\mathrm{LCP}} \end{bmatrix}^T \mathbf{p}_{\tau_1,m} = \begin{bmatrix} \tilde{x}_{\tau_1,m}^{*} \\ \tilde{y}_{\tau_1,m}^{*} \end{bmatrix}, \tag{69}$$
$$\tilde{\mathbf{p}}_{\tau_2,m}^{*} = \begin{bmatrix} \cos\Xi_{\theta,\mathrm{LCP}} & -\sin\Xi_{\theta,\mathrm{LCP}} \\ \sin\Xi_{\theta,\mathrm{LCP}} & \cos\Xi_{\theta,\mathrm{LCP}} \end{bmatrix}^T \tilde{\mathbf{p}}_{\tau_2,m} = \begin{bmatrix} \tilde{x}_{\tau_2,m}^{*} \\ \tilde{y}_{\tau_2,m}^{*} \end{bmatrix}. \tag{70}$$
Finally, the linear offsets $\boldsymbol{\Xi}_{\mathrm{LCP}}$ from the LTP to the LCP at time instance $\tau_1$ can be estimated by comparing $\tilde{\mathbf{p}}_{\tau_1,m}^{*}$ and $\tilde{\mathbf{p}}_{\tau_2,m}^{*}$ with the true marker locations $\mathbf{p}_{1,\mathrm{map}}$ and $\mathbf{p}_{2,\mathrm{map}}$ from the marker library. Since both measurements have the same accuracy, the average of both offsets is calculated as follows:
$$\boldsymbol{\Xi}_{\mathrm{LCP}} = \frac{\left(\tilde{\mathbf{p}}_{\tau_1,m}^{*} - \mathbf{p}_{1,\mathrm{map}}\right) + \left(\tilde{\mathbf{p}}_{\tau_2,m}^{*} - \mathbf{p}_{2,\mathrm{map}}\right)}{2} = \begin{bmatrix} \Xi_{x,\mathrm{LCP}} \\ \Xi_{y,\mathrm{LCP}} \end{bmatrix}. \tag{71}$$
The location and orientation measurement variables at time instances $\tau_1$ and $\tau_2$ are then given by
$$\begin{bmatrix} x_{\tau_1,h} \\ y_{\tau_1,h} \\ \theta_{\tau_1,\mathrm{yaw},h} \end{bmatrix} = \begin{bmatrix} \Xi_{x,\mathrm{LCP}} \\ \Xi_{y,\mathrm{LCP}} \\ \Xi_{\theta,\mathrm{LCP}} \end{bmatrix}, \tag{72}$$
$$\begin{bmatrix} x_{\tau_2,h} \\ y_{\tau_2,h} \\ \theta_{\tau_2,\mathrm{yaw},h} \end{bmatrix} = \begin{bmatrix} \Xi_{x,\mathrm{LCP}} \\ \Xi_{y,\mathrm{LCP}} \\ \Xi_{\theta,\mathrm{LCP}} \end{bmatrix} + \begin{bmatrix} x_{\tau_2,s} - x_{\tau_1,s} \\ y_{\tau_2,s} - y_{\tau_1,s} \\ \theta_{\tau_2,\mathrm{yaw},s} - \theta_{\tau_1,\mathrm{yaw},s} \end{bmatrix}. \tag{73}$$
As can be deduced from the previous equations, the measurement variables for two consecutive time instances are calculated at each iteration, i.e., once the LiDAR measurement from the time instance $\tau_2$ is acquired, the vehicle pose for the time instances $\tau_1$ and $\tau_2$ is calculated. When the LiDAR measurement from the time instance $\tau_3$ is acquired, the vehicle pose for the time instances $\tau_2$ and $\tau_3$ is calculated, and so on. Since this creates an overlap of two pose measurements per time instance, both are averaged. Therefore, let $(x_{h,a}, y_{h,a})$ and $(x_{h,b}, y_{h,b})$ be the measured locations of the vehicle at the $i$-th time instance that are calculated with the LiDAR measurements from the time instances $i-1$ to $i+1$. The measurement variables for the vehicle location at the $i$-th time instance are then given by
$$\begin{bmatrix} x_{\tau_i,h} \\ y_{\tau_i,h} \end{bmatrix} = \frac{1}{2}\left(\begin{bmatrix} x_{h,a} \\ y_{h,a} \end{bmatrix} + \begin{bmatrix} x_{h,b} \\ y_{h,b} \end{bmatrix}\right). \tag{74}$$
Analogously, let $\theta_{\mathrm{yaw},h,a}$ and $\theta_{\mathrm{yaw},h,b}$ be the measured orientations of the vehicle at the $i$-th time instance that are calculated with the LiDAR measurements from the time instances $i-1$ to $i+1$. The measurement variable for the vehicle orientation at the $i$-th time instance is then given by
$$\theta_{\tau_i,\mathrm{yaw},h} = \frac{\theta_{\mathrm{yaw},h,a} + \theta_{\mathrm{yaw},h,b}}{2}. \tag{75}$$
As can be deduced from Equations (74) and (75), averaging the measurement variables obtained from different pairs of LiDAR measurements helps compensate LiDAR measurement errors and creates a time-smoothing effect for the measurement vector. A graphical depiction of this process is shown in Figure 5.
To ensure that the measured object is a marker and not some other reflective object, outliers are filtered out. For this, the LiDAR measurements are translated to the LTP to compare them with the true marker positions from the marker library. Therefore, the locations $\hat{\mathbf{p}}_{\tau_i,m}$ and $\hat{\mathbf{p}}_{\tau_{i+1},m}$ in the LTP of the LiDAR measurements $\mathbf{p}_{\tau_i,m}$ and $\mathbf{p}_{\tau_{i+1},m}$ made at the $i$-th and $(i+1)$-th time instances are expressed as follows:
$$\hat{\mathbf{p}}_{\tau_i,m} = \begin{bmatrix} \cos\Xi_{\theta,\mathrm{LTP}} & -\sin\Xi_{\theta,\mathrm{LTP}} \\ \sin\Xi_{\theta,\mathrm{LTP}} & \cos\Xi_{\theta,\mathrm{LTP}} \end{bmatrix}^T \mathbf{p}_{\tau_i,m} - \boldsymbol{\Xi}_{\mathrm{LTP}}, \tag{76}$$
$$\hat{\mathbf{p}}_{\tau_{i+1},m} = \begin{bmatrix} \cos\Xi_{\theta,\mathrm{LTP}} & -\sin\Xi_{\theta,\mathrm{LTP}} \\ \sin\Xi_{\theta,\mathrm{LTP}} & \cos\Xi_{\theta,\mathrm{LTP}} \end{bmatrix}^T \mathbf{p}_{\tau_{i+1},m} - \boldsymbol{\Xi}_{\mathrm{LTP}}. \tag{77}$$
Assuming that $\hat{\mathbf{p}}_{\tau_i,m}$ and $\hat{\mathbf{p}}_{\tau_{i+1},m}$ are closest to the $i$-th and $(i+1)$-th markers respectively, their measurement errors $\eta_{i,m}$ and $\eta_{i+1,m}$ are given by
$$\eta_{i,m} = \left\lVert \hat{\mathbf{p}}_{\tau_i,m} - \mathbf{p}_{i,\mathrm{map}} \right\rVert = \eta_{i+1,m} = \left\lVert \hat{\mathbf{p}}_{\tau_{i+1},m} - \mathbf{p}_{i+1,\mathrm{map}} \right\rVert. \tag{78}$$
This equality holds because, as shown in Equations (71) and (75), the angular and linear offsets are estimated with both LiDAR measurements.
A distance threshold is then used to separate outliers according to the following criteria:
$$\begin{cases} \text{inlier}, & \eta_{i,m} \leq \dfrac{\min\left(\mathbf{d}_{\mathrm{map}}\right)}{2} \\ \text{outlier}, & \eta_{i,m} > \dfrac{\min\left(\mathbf{d}_{\mathrm{map}}\right)}{2}. \end{cases} \tag{79}$$
If an outlier is detected, both measurements are discarded.
From what is detailed above, the LbPM can be used only in places with known markers, or extra effort is needed to place and measure markers in new areas. However, traffic signs have a fixed position and are highly reflective as well. Therefore, they too can be used as markers for Simultaneous Localization and Mapping (SLAM), instead of requiring a marker library a priori.
Ongoing work of the present research focuses on evaluating the adequacy of the LbPM for SLAM purposes. A critical aspect of evaluating an LbPM-based SLAM is to identify the sources of error individually. Since the vehicle motion prediction between LiDAR measurements can be approximated as circular movement, the LbPM does not depend on the motion model of the EKF. Hence, it is possible to analyze the motion model with no correction data and the LbPM separately. However, the recursiveness of the prediction-correction process of the EKF and the localization-mapping process of the SLAM complicate the isolation of the sources of error when the EKF and the LbPM are combined.
To study separately the sources of error, the mapping performance of the LbPM-based SLAM is inspected under two circumstances: (i) with position, velocity and COG correction data; and (ii) with only velocity correction data. The first variant excludes all possible LbPM feedback errors. The second variant excludes only the velocity feedback from the LbPM as a source of error. The presented method is based on [57,58,59,60], and the specifics applicable to this work are detailed in what follows.
First, some measurement variables are instead treated as inputs to perform the prediction of the vehicle state. An input vector $\mathbf{u}_h$ and the control covariance matrix $\mathbf{Q}_k$ are then defined as follows:
$$\mathbf{u}_h = \begin{bmatrix} a_{x,h} & a_{y,h} & \dot{\theta}_{\mathrm{yaw},h} & v_h \end{bmatrix}^T, \tag{80}$$
$$\mathbf{Q}_k = \operatorname{diag}\!\left( \sigma_{a_x,h}^2 \;\; \sigma_{a_y,h}^2 \;\; \sigma_{\dot{\theta}_{\mathrm{yaw}}}^2 \;\; \sigma_{v,h}^2 \right). \tag{81}$$
With this, the vehicle state process covariance matrix can be updated as follows:
$$\mathbf{P}_{xx,k} = \mathbf{F}_x \mathbf{P}_{xx,k-1} \mathbf{F}_x^T + \mathbf{F}_u \mathbf{Q}_k \mathbf{F}_u^T, \tag{82}$$
where $\mathbf{F}_u$ is the Jacobian matrix of $f(\mathbf{x}_s)$ with respect to the control vector $\mathbf{u}_h$.
Next, a measurement vector $\mathbf{z}_M$ is defined as
$$\mathbf{z}_M = \begin{bmatrix} d_{i,m} & \theta_{i,m} \end{bmatrix}^T, \tag{83}$$
where $d_{i,m}$ and $\theta_{i,m}$ are respectively the range and azimuth angle of the $i$-th cluster.
The task of the observation model is to calculate the estimated measurement $\tilde{\mathbf{z}}_M$ for a given vehicle state $\mathbf{x}_s$ and one specific marker $\mathbf{p}_{i,m}$. For this, two cases can be distinguished. If the current marker library is empty, the marker is added to it with the location derived from $\mathbf{x}_s$ and $\mathbf{z}_M$. If the marker library is not empty, an association check is required to compute the probability that the current observation corresponds to an existing marker. For this, the Mahalanobis distance [61] Individual Compatibility (IC) check is performed for each marker in the library. The one yielding the minimum Mahalanobis distance is the marker from the library with the highest probability of being the observed one. For this, the innovation term $\mathbf{y}_M$ is calculated as follows:
$$\mathbf{y}_M = \mathbf{z}_M - \tilde{\mathbf{z}}_M. \tag{84}$$
The Mahalanobis distance $d_M$ is then given by
$$d_M = \sqrt{\mathbf{y}_M^T \mathbf{S}_M^{-1} \mathbf{y}_M}, \tag{85}$$
with $\mathbf{S}_M$ being the corresponding covariance matrix of the innovation term $\mathbf{y}_M$. Finally, the marker from the library is associated with the observation according to the following criteria:
$$\begin{cases} \text{associated}, & d_M \leq \Gamma_M \\ \text{not associated}, & d_M > \Gamma_M, \end{cases} \tag{86}$$
$$\Gamma_M = 3. \tag{87}$$
This is because a threshold $\Gamma_M = 3$ corresponds to a probability of 98.9% that the sensor measurement and the estimated measurement refer to the same marker [62].
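The association check of Equations (84)–(87) reduces to a few lines (illustrative sketch):

```python
import numpy as np

GAMMA_M = 3.0   # association threshold Gamma_M

def associate(z, z_est, S):
    """Individual Compatibility check via the Mahalanobis distance,
    Eqs. (84)-(87). z, z_est: [range, azimuth]; S: 2x2 innovation covariance."""
    y = z - z_est                             # innovation, Eq. (84)
    d_M = np.sqrt(y @ np.linalg.solve(S, y))  # Mahalanobis distance, Eq. (85)
    return d_M <= GAMMA_M
```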

3. Results

To validate the performance of the methods detailed above, a series of tests are designed and performed. The methods are tested with several hours of real-world data. In what follows, the testing details, the evaluation metrics and the most relevant results are presented.

3.1. Standstill Recognition

The standstill recognition is evaluated in terms of its classification performance according to Table 1, as well as its robustness against false positives. The four most relevant tests and their results are detailed in what follows.
The results of the 1st test are shown in Figure 6. Here, a 3rd generation Smart ForTwo Electric Drive is placed on a test track. The INS is mounted in the trunk of the vehicle. The vehicle is turned on, the front tires are set to point straight ahead, and the gear selector is placed in "D". Some seconds after the data recording starts, the brake pedal is released. The car is driven on the test track for approximately 530 s. The accelerator pedal is not touched at any point, and the brake pedal is used only at the beginning to let the car roll and at the end to stop it. The steering wheel is used only to realign the vehicle towards paths that allow prolonged straight-line driving.
What makes this test special is the combination of (i) the low driving velocity, (ii) the drivetrain of the vehicle (electric motor and single-speed reduction transmission), and (iii) the prolonged driving periods without rotations.
As can be seen, the RF is able to recognize that the vehicle is in motion, even when driving in a straight line at walking speed for several seconds (17.87–92.53 s and 434.80–522.00 s). It is also seen that the RF produces no false positives.
The results of the 2nd test are shown in Figure 7. Here, a 5th generation Audi A4 3.0 L TDI with a 7-speed S tronic transmission is driven randomly in a city. The INS is mounted in the trunk of the vehicle. On three occasions, the gear selector is placed in "N", and the car is allowed to roll. On one of those occasions (22.26 s), the vehicle does come to a brief stop. On the other two occasions (138.10 s and 682.20 s), the car reaches very low velocities (0.18 m/s and 0.12 m/s, respectively), but it does not come to a full stop. A prolonged standstill (302.10–652.00 s) is also included. During this period of time, the gear selector is kept in "S".
This test shows (i) three rolling instances, and (ii) a prolonged standstill. Diesel engines are known to produce more vibrations than their gasoline counterparts. Also, when the gear selector is placed in "N", far fewer vibrations are transmitted to the vehicle chassis because the transmission is not engaged and the engine idles. Therefore, the IMU signals while the vehicle rolls closely resemble those of a standing vehicle, especially at low velocities. Conversely, when the gear selector is in "S" instead of "D", the vibrations transmitted to the chassis increase notably.
As can be seen, in the first rolling instance (22.26 s) the RF recognizes the standstill only once the vehicle actually comes to a stop, and not before. Also, the RF recognizes that the vehicle is still in motion in the other two rolling instances (138.10 s and 682.20 s), despite the velocity over ground almost reaching zero. The RF is also mostly able to recognize that the vehicle is standing still (302.10–652.00 s), despite the increased vibrations provoked by selecting "S". After 720.20 s of true standstill, the estimated total displacement of the vehicle is 0.008 m (8 mm) and the estimated total rotation is 0.011 rad (0.6282 deg). It is also seen that the RF produces no false positives.
The results of the 3rd test are shown in Figure 8. Here, a Suzuki GSX-R750 K2 is driven on a test track. The INS is mounted by means of a metal plate directly on the chassis of the vehicle. On four occasions (251.80 s, 327.60 s, 391.20 s and 457.10 s), the "N" gear is selected, and the vehicle is allowed to roll to a full stop. Various instances of several seconds of standstill are included as well. In two of those instances (43.88–53.75 s and 94.08–108.20 s), the vehicle rested on its side stand, with the motor idling and the "N" gear selected.
What makes this test special are (i) the chassis-drivetrain configuration, and (ii) the rolling instances. The engine mounts of motorcycles do not dampen the engine vibrations as effectively as the engine mounts of cars, so these vibrations are transmitted much more directly to the motorcycle chassis. Also, given that the INS is mounted on the chassis with no dampening, it senses the engine vibrations in a much more direct manner. Thus, the INS signals under these conditions, in combination with the rolling instances, closely resemble the signals present at standstill.
As can be seen, false positives appear on two of the occasions where the motorcycle is allowed to roll (251.80 s and 327.60 s). On the other two occasions (391.20 s and 457.10 s), the RF detects a standstill only once the vehicle comes to a stop. It is also seen that the RF produces no false positives under regular driving conditions (always driving in gear).
The results of the 4th test are shown in Figure 9. Here, a 2nd generation Audi Q7 is parked on a test track with the engine idling and the "P" gear selected. The INS is mounted in the trunk of the vehicle. The volume of the sound system is set to the maximum, and music with a high bass level is played for 254 s to induce very strong vibrations over a prolonged period of time. The objective is to test whether the RF can recognize the standstill despite the induced vibrations.
What makes this test special is the constant induction of vibrations whose magnitude is many times greater than that of a regular standstill.
As can be seen, the output of the RF toggles much more than in the other tests. However, it is still able to correctly detect a standstill most of the time. The longest period in which the vehicle state is wrongly classified as "motion" is 4.20 s, between 190.30 s and 194.50 s. In this test, after 254.01 s of true standstill, the estimated total displacement of the vehicle is 0.12 m (12.12 cm) and the estimated total rotation is 0.018 rad (1.08 deg).

3.2. Horizontation of IMU Measurements

The performance of this module is measured in terms of its capability to describe the pose (roll, pitch and yaw angles) of a vehicle. The motorcycle from the 3rd test of Section 3.1 is used because larger roll angles can be achieved with it than with a car. The motorcycle is driven randomly on a test track, including a U-turn (772.70–780.00 s), a slalom (783.30–791.00 s), and figure-eights (791.00–843.70 s).
What makes this test special is that it serves as a practical demonstration of the mathematical methods detailed in Section 2.2. The results of the horizontation of the IMU measurements are shown in Figure 10. As can be seen, the practical implementation of the mathematical methods is able to describe the vehicle pose.

3.3. Statistical Filtering

The results of the statistical filtering are shown in Figure 11. The performance of this module is measured in terms of its ability to (i) fuse the vehicle path (as estimated with the motion model) with correction data, (ii) estimate the vehicle path despite SatNav outages or corrupted SatNav measurements, and (iii) estimate the vehicle path solely by means of the motion model (with no correction data). For this, a vehicle is driven on various types of roads for 1904.15 s. The test starts on a country road (blue), followed by an expressway (magenta, thick), then city driving (magenta, thin) and a parking structure (green). A standstill of 40.00 s inside the parking structure is included as well. The average velocity of the vehicle during the test, including the 40.00 s standstill inside the parking structure, is 8.19 m/s.
What makes this test special is (i) the inclusion of various driving conditions, (ii) the inclusion of extended periods of time without or with corrupted SatNav correction data (parking structure), and (iii) the dead-reckoning navigation.
As can be seen, the statistical filtering is able to fuse the estimation of the motion model with the SatNav measurements. It can also be seen that, even in total absence of SatNav correction data, the motion model is able to accurately estimate the vehicle path. The test shown in Figure 11 is a representative case of the average accuracy of the motion model because it includes various driving situations. At the end of the test, the cumulative deviation of the position estimated by the motion model from the position measured by the SatNav receiver is 130.00 m, which accounts for a deviation of 245.79 m per driven hour at an average velocity of 8.19 m/s.
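For clarity, the per-hour figure follows directly from the cumulative deviation and the test duration:

$$\frac{130.00\,\mathrm{m}}{1904.15\,\mathrm{s}} \cdot 3600\,\frac{\mathrm{s}}{\mathrm{h}} \approx 245.79\,\frac{\mathrm{m}}{\mathrm{h}}.$$

The same conversion yields the per-hour deviation figure reported for the drift tests in Section 3.5.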

3.4. Outlier Detection

The performance of this module is measured in terms of its ability to recognize corrupted correction data, even when the sensors suggest otherwise. For this, two segments of the results of Section 3.3 are highlighted.
What makes this test special are the extended periods of time without correction data or with corrupted correction data. As can be seen, the outlier detection recognizes the faulty correction data and dynamically adapts the Kalman gains accordingly. This greatly improves the estimation of the vehicle state. Some relevant results of the outlier detection are shown in Figure 12.

3.5. Drift Recognition

A representative result of the drift recognition is shown in Figure 13. The performance of this module is measured in terms of its ability to recognize that the vehicle is drifting. For this, a 3rd generation BMW M5 is driven 51 times on a test track, alternating between tractive and non-tractive driving. The test vehicle is equipped with an INS and a Correvit S-Motion. No correction data is used in these runs, in order to assess the accuracy of the motion model under these conditions.
The relevance of this test lies in (i) the extended periods of non-tractive driving, and (ii) the exclusion of correction data. As can be seen, when the vehicle enters the first curve, the module recognizes that the vehicle is not drifting. It is only when the sideslip starts to change abruptly (22.98 s) that the module starts to detect a drift. As the vehicle remains in a steady-state drift (23.54–26.05 s), the drift bit stabilizes at "drifting". During the drift transition (26.05–26.83 s), when there are moments of tractive driving, the drift bit toggles between tractive and non-tractive driving. Once the transition is finished and the vehicle enters a steady-state drift again, the drift bit does not toggle any more. As the vehicle ends the drift (34.31 s), the drift bit toggles for a few milliseconds before stabilizing at tractive driving.
The average deviation of the final position as estimated by the motion model, compared to the position measured by the SatNav receiver, is ≈24.47 m over all 51 tests, which accounts for a deviation of 2465.30 m per driven hour at 6.35 m/s. However, it should be noted that (i) the average duration of these tests is ≈35 s, (ii) there is tractive and non-tractive driving in each test, (iii) roughly the first 20 s of each test are solely straight-line driving, and (iv) there is a full reset at the beginning of every test.

3.6. LiDAR-Based Positioning Method

The accuracy results of the LbPM are shown in Table 2. The performance of this module is measured in terms of its accuracy in measuring the vehicle state. For this, a vehicle is equipped with an INS with RTK and a LiDAR. An array of reflective markers is placed on a test track, and the marker library is generated by measuring the markers with a SatNav receiver with RTK. Two maneuvers (a slalom and a drive-by) are driven at various velocities ranging from 5 km/h up to 40 km/h. The outputs of the LbPM are then compared to those of the INS with RTK. It should be noted that the INS serves as an accurate and objective reference, but no correction data is used for the LbPM.
What makes this test special are (i) the exclusion of correction data, and (ii) the validation with both maneuvers and various velocities. As can be seen, the mean accuracy of the LbPM is almost the same as that of the INS with RTK for all the state variables that the LbPM can measure.

3.7. LbPM Adequacy for SLAM

The adequacy of the LbPM for SLAM is measured in terms of its accuracy in generating the marker library. For this, the same test setup from Section 3.6 is used. Two tests are performed. In the first test, the position, velocity and orientation of an INS with RTK are used as correction data. In the second test, only the velocity from the INS with RTK is used as correction data. In both tests, the location of the markers is computed with the estimated vehicle state and the LiDAR measurements. The observed location of the markers is then compared to their true position from the library.
The relevance of these tests lies in that (i) they allow identifying some sources of error, and (ii) they allow evaluating the identified sources of error individually. The results are presented in Table 3. As can be seen, the marker library generated when INS-RTK correction data is present closely resembles the marker library generated with the RTK SatNav receiver. As expected, the marker library generated with only velocity correction data is less accurate. However, this inaccuracy is still smaller than that of a SatNav receiver with no correction data.

3.8. Runtime

The runtime results of the presented methodology are shown in Table 4. These values are obtained by executing the Matlab code of each module for more than 1.1 million cycles on an Intel i7-6820HQ CPU, and by using the Matlab Profiler to measure the runtime of each module. Given that the IMU horizontation, the outlier detector and the drift detector are embedded in the statistical filtering module, the runtime of these four modules is reported as a single value.
The relevance of this test lies in that it gives a baseline for estimating the feasibility of using the proposed methodology in real-time applications. As can be seen, the median runtime of all modules is always in the microsecond range. Even in a worst-case scenario (median plus 3σ), the runtime of the complete methodology is ≈1.33 ms, which remains well under a typical IMU sampling time of 10 ms.
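This worst-case figure can be verified by summing, per module, the median plus three standard deviations from Table 4:

$$(123 + 3 \cdot 50) + (403 + 3 \cdot 98) + (43 + 3 \cdot 20) + (11 + 3 \cdot 14) + (42 + 3 \cdot 54) = 1330\,\mu\mathrm{s} \approx 1.33\,\mathrm{ms}.$$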

4. Discussion

As stated in Section 1, the focus of this research is to generate a reference state for ground vehicles, while reducing the dependency of the INSs on the SatNav. This allows the INSs to bridge SatNav outages for much longer periods of time, or even to function with no SatNav at all. Specifically, for the automotive industry, there are various use-cases that justify the use of SatNav-deprived INSs, such as the navigation in tunnels, underpasses or parking structures. A highly precise and prolonged navigation in places with no SatNav, such as testing halls, is also very relevant for the automotive research. As shown in Section 3, the methods developed in this research work address the aspects where the INSs profit the most.
Starting with the standstill recognition, it is shown that it is possible to classify whether a vehicle is moving or standing still by using only machine learning techniques and IMU measurements. The fact that no additional sensor is required brings important advantages, such as (i) reduced costs, (ii) less testing complexity, (iii) simplified information processing, and (iv) stand-alone functioning.
The proposed method uses the same features in the frequency domain, regardless of the vehicle. However, a first glance at the Laplace transformation of the features in the time domain suggests that the choice of frequencies can be further refined if they are considered on a vehicle-specific basis. This could eventually improve the standstill recognition. Therefore, future work could include a vehicle-specific analysis of the features in the frequency domain. The use of the on-board vehicle sensors could also help refine and automate the data labeling.
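For illustration, a minimal sketch of this feature pipeline is given below, assuming windowed IMU signals and a scikit-learn Random Forest; the bin selection, window length and variable names are placeholders, not the values used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Frequency bins used as features (placeholder choice; the method keeps
# the same frequency-domain features for all vehicles).
FEATURE_BINS = [1, 3, 5, 8, 13]

def dft_features(imu_window):
    """Feature vector: magnitudes of selected DFT bins of one window
    of an IMU signal (e.g., vertical acceleration)."""
    return np.abs(np.fft.rfft(imu_window))[FEATURE_BINS]

# X_train: stacked feature vectors from labeled signal windows;
# y_train: 1 = standstill, 0 = motion.
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
```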
As for the horizontation of INS measurements, modern INSs do use correction data (typically from the SatNav) to refine the IMU measurements. However, it is shown that the pose of the vehicle can be accurately described with the raw IMU measurements alone, which implies independence from the SatNav. The inclusion of the detailed mathematics for horizontating IMU measurements is intended as an aid to scientists working on similar topics or trying to reproduce the results presented here.
One challenging aspect of the horizontation of the IMU measurements is detecting the IMU mounting pose in the vehicle. It is possible to estimate the pose of the IMU with respect to the gravity vector and the true north of the Earth. However, it is not an easy task to relate the pitch and roll of the IMU to those of the vehicle, because the IMU could be mounted in such a manner that its x-y plane is not parallel to the x-y plane of the vehicle. Knowing the offsets between both planes could be highly valuable for recognizing certain driving situations, such as driving inside parking structures, wheelies or stoppies. Since the acquisition of these offsets on a vehicle-to-vehicle basis is a complicated and time-consuming task, future work could include the investigation of automated methods for estimating these offsets.
The KF is a computationally efficient algorithm for fusing data. However, the accuracy of its output is limited by the accuracy of the information to be fused. Therefore, it is important to refine the sources of information as much as possible before using them as inputs to the KF. In this work, the EKF fuses the state vector estimated by the motion model with the measurements of the vehicle state made by external sensors. Therefore, considerable effort is put into refining the motion model for tractive-driving applications. As shown above, the proposed motion model can accurately estimate the vehicle state for extended periods of time with no correction data, which implies a high confidence in the vehicle state, even when navigating by dead reckoning.
Given that the motion model is designed to perform best for tractive driving, its performance is reduced during non-tractive driving. Also, given that the drifting tests are designed to test the drift recognition, they are not long enough to objectively determine the accuracy of the motion model during a drift. Therefore, future work could include a deeper investigation of the accuracy of the motion model during non-tractive driving. This could help to estimate adequate values for the system noise, and so improve the performance of the KF during drifting.
As for the outlier detector, it further helps in refining the sources of information before they are fed to the KF. Its effect is especially noticeable when the sensors deliver quality metrics for their own measurements, as is the case with the standard deviations reported by the SatNav receiver. As shown above, including a smart outlier detector provides a very useful consistency check of the sensor measurements, thus avoiding the fusion of information that would negatively affect the estimation of the vehicle state.
This detector uses the latest state vector because it is the last known vehicle state. However, as time passes, the predicted state without correction data might differ so much from the true state that the constraints shown in Section 2.5 filter out meaningful sensor measurements. Future work could include the combination of the system noise with the state vector to better adapt the outlier detector.
Yet another step taken to improve the KF is the drift detector. Given that the motion model is designed to function best during tractive driving, it is very useful to distinguish whether the vehicle is drifting or not. As shown above, the drift recognition can make this distinction, which means that the time instant at which to make changes in the KF is known. Such changes can be, for example, adjusting the system noise accordingly or switching to a more adequate motion model.
The drift recognition presented here is based on thresholding two features: the magnitude of the sideslip and the difference between the estimated and expected lateral acceleration. Future work on this method could include the analysis of a larger dataset that includes vehicles with different drivetrains (front-, rear- and all-wheel drive). This could help to gain a better understanding of the ideal thresholds for vehicles according to their drivetrain.
One crucial module to reduce the dependency of INSs on the SatNav is the LbPM. The results clearly show that the accuracy of the vehicle state as measured by this method closely resembles that of an INS with RTK correction data. Considering the refresh rate of the LiDAR and the state variables that can be measured by means of the LbPM, this could already be enough for an accurate vehicle indoor navigation. Future work on this module could include the implementation of the method on dedicated hardware to analyze the computing requirements for a real-time implementation.
The results in Table 3 suggest that the LbPM is adequate for SLAM. Given that the LbPM is capable of delivering RTK-like correction data, it could enable the generation of an accurate marker library in unknown environments. It is not clear whether a lane-accurate indoor navigation would be possible in places with a reduced marker density. Future work on this module could include adapting the LbPM algorithm for SLAM and implementing it on dedicated hardware for real-world testing. For this purpose, tailor-made hardware solutions, such as [63,64], could aid in the navigation of complex environments, such as parking structures. The use of lightweight artificial intelligence techniques, as shown in [65,66], could aid in the implementation of the standstill recognition on hardware with limited computational resources, such as embedded systems. Finally, the fusion with on-board vehicle sensors could improve the dead-reckoning navigation in indoor environments.
Regarding the methodology runtime, it should be noted that even with high-level programming code, such as Matlab, the runtime never exceeded 2 ms. This implies that it is possible to implement the methodology in real-time applications. Future work could include an implementation in a more efficient programming language, such as C++, and the use of dedicated hardware.

5. Conclusions

The objective of generating a reference vehicle state for the proper testing and validation of automated driving functions is achieved in a prototypical manner by developing novel methods and by adapting existing ones to the problems at hand. This is achieved while reducing the dependency of INSs on external sensors, such as the SatNav. The testing results demonstrate that the proposed methods greatly improve the accuracy of the estimated vehicle state. The measured runtime suggests the possibility of a real-time implementation of the proposed methodology.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/21/4/1131/s1.

Author Contributions

E.S.M. contributed most of the presented methodology, its implementation and its validation. J.D. contributed the methodology of the LiDAR-based SLAM. B.H. contributed to the conceptualization of the methodology. A.G.H. contributed the supervision of the methodology. M.B. contributed the conceptualization and supervision of the methodology, the funding acquisition and the project administration. All authors contributed to the writing, review and editing. Andrés García Higuera is a policy analyst in the European Parliamentary Research Service (EPRS), the internal research service and think-tank of the European Parliament. He is writing in a personal capacity, and any views expressed do not represent an official position of the Parliament. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by the Federal Ministry of Education and Research of Germany (Bundesministerium für Bildung und Forschung) in the framework of FH-Impuls, project number 03FH7I02IA.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study, including datasets of the IMU signals and SatNav measurements for the results shown, are available within the article as Supplementary Materials.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SatNav: Satellite Navigation
IMU(s): Inertial Measurement Unit(s)
INS(s): Inertial Navigation System(s)
RTK: Real-Time Kinematic
LbPM: LiDAR-based Positioning Method
RF: Random Forest
DFT: Discrete Fourier Transform
RPM: Revolutions Per Minute
COG: Course Over Ground
EKF: Extended Kalman Filter
GPS: Global Positioning System

References

1. Einstein, A. Relativity; The University of Sheffield: Sheffield, UK, 1920.
2. Schulze, O. Wirbelstrom-Tachometer. German Patent DRP146134, 7 October 1902.
3. Tindell, K.; Burns, A. Guaranteeing message latencies on Controller Area Network (CAN). In Proceedings of the 1st International CAN Conference, Mainz, Germany, 13–14 September 1994; pp. 1–11.
4. Tindell, K.; Hanssmon, H.; Wellings, A.J. Analysing Real-Time Communications: Controller Area Network (CAN). In Proceedings of the Real-Time Systems Symposium (RTSS), San Juan, Puerto Rico, 7–9 December 1994; pp. 259–263.
5. Tindell, K.; Burns, A.; Wellings, A.J. Calculating controller area network (CAN) message response times. Control Eng. Pract. 1995, 3, 1163–1169.
6. Casparsson, L.; Rajnak, A.; Tindell, K.; Malmberg, P. Volcano—A Revolution in On-Board Communications; Technical Report; Volvo: Gothenburg, Sweden, 1998.
7. DeMeis, R. Cars sag under weighty wiring. EE Times 2005, 10, 24.
8. Davis, R.I.; Burns, A.; Bril, R.J.; Lukkien, J.J. Controller Area Network (CAN) schedulability analysis: Refuted, revisited and revised. Real Time Syst. 2007, 35, 239–272.
9. Davis, R.I.; Navet, N. Controller area network (CAN) schedulability analysis for messages with arbitrary deadlines in FIFO and work-conserving queues. In Proceedings of the 2012 9th IEEE International Workshop on Factory Communication Systems, Lemgo, Germany, 21–24 May 2012; pp. 33–42.
10. Dziurzanski, P.; Davis, R.I.; Indrusiak, L.S. Synthesizing Real-Time Schedulability Tests using Evolutionary Algorithms: A Proof of Concept. In Proceedings of the 2019 IEEE Real-Time Systems Symposium (RTSS), Hong Kong, China, 3–6 December 2019; pp. 43–55.
11. WPT Sensors: Wheel Pulse Transducers. Available online: https://www.kistler.com/files/document/000-811e.pdf (accessed on 11 November 2019).
12. Correvit S-Motion DTI: 2-Axis Optical Sensors. Available online: https://www.kistler.com/files/document/003-395e.pdf (accessed on 1 January 2020).
13. Haus, J.; Lauinger, N. Optische Gitter: Die Abbildung der Realität—75 Jahre berührungslose dynamische Meßtechnik auf der Basis optischer Gitter. Laser Tech. J. 2007, 4, 43–47.
14. Robert Bosch GmbH. Vehicle Standstill Recognition. Patent Application PCT/US2013/074650, 12 December 2013.
15. Kalman, R. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45.
16. Zubov, I.; Afanasyev, I.; Gabdullin, A.; Mustafin, R.; Shimchik, I. Autonomous Drifting Control in 3D Car Racing Simulator. In Proceedings of the 2018 International Conference on Intelligent Systems (IS), Madeira, Portugal, 25–27 September 2018; pp. 235–241.
17. Wang, H.; Zhao, L.; Yang, Y.; Yan, Y.; Liu, J. Modeling and analysis of wheel mobile robot for high-speed precise drift. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 5064–5069.
18. Bárdos, A.; Domina, A.; Szalay, Z.; Tihanyi, V.; Palkovics, L. MIMO Controller Design for Stabilizing Vehicle Drifting. In Proceedings of the 2019 IEEE 19th International Symposium on Computational Intelligence and Informatics and 7th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Sciences and Robotics (CINTI-MACRo), Szeged, Hungary, 14–16 November 2019; pp. 187–192.
19. El-Saigh, A.I.; Macario, R.C.V. A review of anti-multipath techniques, past and present. In Proceedings of the IEE Colloquium on Multipath Countermeasures, London, UK, 23 May 1996; pp. 2/1–2/6.
20. Burr, A.G. The multipath problem: An overview. In Proceedings of the IEE Colloquium on Multipath Countermeasures, London, UK, 23 May 1996; pp. 1/1–1/7.
21. Cheng, L.; Chen, J.; Gan, M. Multipath error analysis of carrier Tracking Loop in GPS receiver. In Proceedings of the 29th Chinese Control Conference, Beijing, China, 28–31 July 2010; pp. 4137–4141.
22. Liu, L.; Amin, M.G. Comparison of Average Performance of GPS Discriminators in Multipath. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), Honolulu, HI, USA, 15–20 April 2007; Volume 3, pp. III-1285–III-1288.
23. Yedukondalu, K.; Sarma, A.D.; Kumar, A. Mitigation of GPS multipath error using recursive least squares adaptive filtering. In Proceedings of the 2010 IEEE Asia Pacific Conference on Circuits and Systems, Kuala Lumpur, Malaysia, 6–9 December 2010; pp. 104–107.
24. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: London, UK, 2013.
25. Farrell, J. Aided Navigation: GPS with High Rate Sensors; McGraw-Hill Inc.: London, UK, 2008.
26. Program Funding. Available online: https://www.gps.gov/policy/funding/ (accessed on 13 January 2021).
27. Annual Activity Report of the European GNSS Agency 2019. Available online: https://www.gsa.europa.eu/sites/default/files/annual_report_gsa_2019.pdf (accessed on 12 January 2021).
28. New Civil Signals. Available online: https://www.gps.gov/systems/gps/modernization/civilsignals/ (accessed on 13 January 2021).
29. Release No: CR-048-17. Available online: https://www.defense.gov/News/Contracts/Contract-View/Article/1112618/ (accessed on 16 April 2018).
30. Lam, K. Broadcom Introduces World's First Dual Frequency GNSS Receiver with Centimeter Accuracy for Consumer LBS Applications. Available online: https://www.broadcom.com/company/news/product-releases/2302120 (accessed on 16 April 2018).
31. Beidou Upgrades for Global Reach. Available online: https://en.unicorecomm.com/news/detail/18 (accessed on 16 April 2018).
32. Batistić, L.; Tomic, M. Overview of indoor positioning system technologies. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 473–478.
33. Einsiedler, J.; Radusch, I.; Wolter, K. Vehicle indoor positioning: A survey. In Proceedings of the 2017 14th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 25–26 October 2017; pp. 1–6.
34. iBeacon-Apple Developer. Available online: https://developer.apple.com/ibeacon/ (accessed on 20 April 2020).
35. Ibisch, A.; Stümper, S.; Altinger, H.; Neuhausen, M.; Tschentscher, M.; Schlipsing, M.; Salinen, J.; Knoll, A. Towards autonomous driving in a parking garage: Vehicle localization and tracking using environment-embedded LIDAR sensors. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, Australia, 23–26 June 2013; pp. 829–834.
36. Harter, A.; Hopper, A.; Steggles, P.; Ward, A.; Webster, P. The Anatomy of A Context-Aware Application. Wirel. Netw. 2002, 8, 187–197.
37. Ward, A.; Jones, A.; Hopper, A. A New Location Technique for the Active Office. IEEE Pers. Commun. 1997, 4, 42–47.
38. Addlesee, M.; Curwen, R.; Hodges, S.; Newman, J.; Steggles, P.; Ward, A.; Hopper, A. Implementing a sentient computing system. Computer 2001, 34, 50–56.
39. Indoor Positioning System—Data Drives Innovation. Available online: https://indoo.rs/solution/indoor-positioning-system/ (accessed on 12 May 2020).
40. Quick Start: Indoor Positioning Systems. Available online: https://www.infsoft.com/indoor-positioning (accessed on 1 May 2020).
41. What Is Indoor Positioning Systems? Available online: https://senion.com/indoor-positioning-system/#difference (accessed on 20 May 2020).
42. Gatesichapakorn, S.; Takamatsu, J.; Ruchanurucks, M. ROS based Autonomous Mobile Robot Navigation using 2D LiDAR and RGB-D Camera. In Proceedings of the 2019 First International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics (ICA-SYMP), Bangkok, Thailand, 16–18 January 2019; pp. 151–154.
43. Toth, C.; Grejner-Brzezinska, D.A.; Lee, Y. Terrain-based navigation: Trajectory recovery from LiDAR data. In Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 760–765.
44. Cong, D.; Zhang, L.; Su, P.; Tang, Z.; Meng, Y.; Wang, Y. Design and implementation of LiDAR navigation system based on triangulation measurement. In Proceedings of the 2017 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 6060–6063.
45. ISO 8855:2011. ISO/TC 22/SC 33 Vehicle Dynamics and Chassis Components; Technical Report; International Organization for Standardization: Geneva, Switzerland, 2011.
46. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
47. Frigo, M.; Johnson, S.G. The Design and Implementation of FFTW3. Proc. IEEE 2005, 93, 216–231.
48. Johnson, S.G.; Frigo, M. A modified split-radix FFT with fewer arithmetic operations. IEEE Trans. Signal Process. 2007, 55, 111–119.
49. Frigo, M.; Leiserson, C.E.; Prokop, H.; Ramachandran, S. Cache-oblivious algorithms. In Proceedings of the 40th Annual Symposium on Foundations of Computer Science, New York, NY, USA, 17–19 October 1999; pp. 285–297.
50. Nyquist, H. Certain Topics in Telegraph Transmission Theory. Trans. Am. Inst. Electr. Eng. 1928, 47, 617–644.
51. Shannon, C.E. Communication in the Presence of Noise. Proc. IRE 1949, 37, 10–21.
52. Murray, G. Rotation about an Arbitrary Axis in 3 Dimensions. Available online: https://tinyurl.com/y6rvtbtx (accessed on 17 November 2020).
53. Caicedo, M. Cinemática de las Rotaciones. Available online: http://hc09paa1.pbworks.com/f/FisII_rotation.pdf (accessed on 15 November 2020).
54. Abdulrahim, M. On the Dynamics of Automobile Drifting. SAE Mobilus 2006.
55. Velodyne LiDAR, Inc. HDL-32E User Manual; Velodyne LiDAR, Inc.: San Jose, CA, USA, 2017.
56. Laser Measurements Calibrations. Available online: https://www.nist.gov/calibrations/laser-measurements-calibrations (accessed on 6 August 2018).
57. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
58. Mochnac, J.; Marchevsky, S.; Kocan, P. Bayesian filtering techniques: Kalman and extended Kalman filter basics. In Proceedings of the 2009 19th International Conference Radioelektronika, Bratislava, Slovakia, 7 July 2009; pp. 119–122.
59. Matsebe, O.; Namoshe, M.; Tlale, N. Basic Extended Kalman Filter-Simultaneous Localisation and Mapping; INTECH Open Access Publisher: London, UK, 2010.
60. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241.
61. Mahalanobis, P.C. On the Generalized Distance in Statistics. Proc. Natl. Inst. Sci. India 1936, 2, 49–55.
62. Wang, B.; Shi, W.; Miao, Z. Confidence Analysis of Standard Deviational Ellipse and Its Extension into Higher Dimensional Euclidean Space. PLoS ONE 2015, 10, e0118537.
63. Lindner, L.; Sergiyenko, O.; Rivas-López, M.; Ivanov, M.; Rodríguez-Quiñonez, J.C.; Hernández-Balbuena, D.; Flores-Fuentes, W.; Tyrsa, V.; Muerrieta-Rico, F.N.; Mercorelli, P. Machine vision system errors for unmanned aerial vehicle navigation. In Proceedings of the 2017 IEEE 26th International Symposium on Industrial Electronics (ISIE), Edinburgh, UK, 19–21 June 2017; pp. 1615–1620.
64. Garcia-Cruz, X.; Sergiyenko, O.; Tyrsa, V.; Rivas-Lopez, M.; Hernandez-Balbuena, D.; Rodriguez-Quiñonez, J.; Basaca-Preciado, L.; Mercorelli, P. Optimization of 3D laser scanning speed by use of combined variable step. Opt. Lasers Eng. 2014, 54, 141–151.
65. Gupta, N.; Khosravy, M.; Gupta, S.; Dey, N.; González Crespo, R. Lightweight Artificial Intelligence Technology for Health Diagnosis of Agriculture Vehicles: Parallel Evolving Artificial Neural Networks by Genetic Algorithm. Int. J. Parallel Program. 2020.
66. Gupta, N.; Khosravy, M.; Patel, N.; Dey, N.; Gupta, S.; Darbari, H.; González Crespo, R. Economic data analytic AI technique on IoT edge devices for health monitoring of agriculture machines. Appl. Intell. 2020.
Figure 1. Graphical depiction of the Local Car Plane.
Figure 2. Shown is a rotation around an arbitrary axis to align the $z_{\mathrm{LCP}}$ axis with the $z_{\mathrm{LTP}}$ axis. The axes of the LTP are shown in green, the axes of the LCP in magenta, and the arbitrary axis $\mathbf{r}_{h0}$ and the rotation angle $\theta_h$ in grey.
Figure 3. Shown is a graphical depiction of the cone geometry used for the velocity calculation from two LiDAR measurements that point to the same marker.
Figure 4. Shown is a graphical depiction of the geometry used for calculating the vehicle location.
Figure 5. Shown is a graphical depiction of the generation of the measurement variables for the vehicle pose, as computed from LiDAR measurements.
Figure 6. Shown is the RF output (red, S = standstill, M = motion) and the velocity over ground of the vehicle (blue). Vehicle: 3rd Gen. Smart ForTwo. Motorization: Electric Drive. Power source: Electricity.
Figure 7. Shown is the RF output (red, S = standstill, M = motion) and the velocity over ground of the vehicle (blue). Vehicle: 5th Gen. Audi A4. Motorization: 6 cylinder, 3.0 L TDI. Power source: Diesel.
Figure 8. Shown is the RF output (red, S = standstill, M = motion) and the velocity over ground of the vehicle (blue). Vehicle: Suzuki GSX-R750 K2. Motorization: 4 cylinder, 749 cc. Power source: Gasoline.
Figure 9. Shown are the vehicle accelerations along the $z_{\mathrm{LCP}}$ (blue), $y_{\mathrm{LCP}}$ (green) and $x_{\mathrm{LCP}}$ (black) axes. (a) Shows the vehicle accelerations of the RF training dataset with the label "Standstill". (b) Shows the vehicle accelerations while the music is playing and the RF output (red, S = standstill, M = motion). Vehicle: 2nd Gen. Audi Q7. Motorization: 6 cylinder, 3.0 L TFSI. Power source: Gasoline.
Figure 10. Shown are the roll (blue), pitch (green) and yaw (magenta) angles. Vehicle: Suzuki GSX-R750 K2. Motorization: 4 cylinder, 749 cc. Power source: Gasoline.
Figure 11. Shown is the vehicle position as measured by a SatNav receiver (black) and as estimated by means of statistical filtering (blue, magenta and green). (a) Shows the statistical filtering when SatNav correction data is used. (b) Shows the statistical filtering with no correction data.
Figure 12. Shown is the vehicle position as measured by a SatNav receiver (black) and as estimated by means of statistical filtering (blue, magenta and green). (a) Shows the dataset section of a bridge underpass. (b) Shows the dataset section of a parking structure.
Figure 13. Shown are (a) the drift bit (green), the estimated (black) and expected (magenta) lateral acceleration, and (b) the drift bit (green), the measured (black) and estimated (magenta solid, magenta dashed) sideslips. Drift bit = 1 means drifting.
Table 1. Standstill classification.

True State        Classified: Standstill    Classified: Motion
Standstill        True positive             False negative
Motion            False positive            True negative
Table 2. Accuracy results for the proposed LbPM. Shown are the maneuvers, maneuver velocity, mean deviation from the reference and std. dev. of the corresponding errors.

Drive-by    Position mean (m)  Std. dev. (m)  Orientation mean (deg)  Std. dev. (deg)  Velocity mean (m/s)  Std. dev. (m/s)
5 km/h      0.04               0.02           0.73                    0.25             0.06                 0.08
10 km/h     0.03               0.02           0.19                    0.20             0.08                 0.10
15 km/h     0.03               0.02           0.26                    0.19             0.07                 0.09
20 km/h     0.03               0.02           0.37                    0.23             0.08                 0.09
25 km/h     0.04               0.02           0.58                    0.23             0.08                 0.10
30 km/h     0.06               0.02           0.51                    0.22             0.08                 0.10
35 km/h     0.07               0.03           0.44                    0.25             0.08                 0.11
40 km/h     0.08               0.03           0.41                    0.26             0.11                 0.13

Slalom      Position mean (m)  Std. dev. (m)  Orientation mean (deg)  Std. dev. (deg)  Velocity mean (m/s)  Std. dev. (m/s)
5 km/h      0.04               0.02           0.24                    0.29             0.08                 0.11
10 km/h     0.04               0.02           0.40                    0.29             0.09                 0.12
20 km/h     0.04               0.02           0.32                    0.36             0.14                 0.17
30 km/h     0.05               0.02           0.36                    0.40             0.18                 0.24
40 km/h     0.10               0.02           0.53                    0.43             0.18                 0.22
Table 3. Results of the mapping accuracy. Shown are the maneuvers, driving velocity, mean deviation from the observed to the true marker location, std. dev. of the errors and the maximum deviation. The values (a) show the results when using position, velocity and orientation correction data; the values (b) show the results when using only velocity correction data.

Drive-by    Mean (m) (a)/(b)   Std. dev. (m) (a)/(b)   Max (m) (a)/(b)
5 km/h      0.11/0.90          0.04/0.51               0.19/1.56
10 km/h     0.11/0.13          0.04/0.06               0.19/0.22
15 km/h     0.12/0.18          0.05/0.07               0.22/0.34
20 km/h     0.14/0.29          0.06/0.12               0.28/0.48
25 km/h     0.17/0.11          0.08/0.06               0.32/0.39
30 km/h     0.14/0.16          0.06/0.09               0.28/0.44
35 km/h     0.10/0.20          0.05/0.15               0.20/0.69
40 km/h     0.10/0.38          0.05/0.19               0.23/0.77

Slalom      Mean (m) (a)/(b)   Std. dev. (m) (a)/(b)   Max (m) (a)/(b)
5 km/h      0.33/0.93          0.24/0.51               0.81/1.62
10 km/h     0.08/0.27          0.03/0.11               0.14/0.42
15 km/h     0.08/0.53          0.03/0.25               0.16/1.04
20 km/h     0.08/0.78          0.03/0.56               0.14/1.70
25 km/h     0.10/0.30          0.05/0.14               0.23/0.64
30 km/h     0.08/0.67          0.04/0.36               0.21/1.53
35 km/h     0.11/0.96          0.06/0.52               0.25/1.96
40 km/h     0.11/0.26          0.06/0.15               0.25/0.64
Table 4. Shown are the median and the std. dev. of the runtime of the modules presented above.

Module                       Median (μs)   Std. dev. (μs)
Standstill classifier        123           50
Statistical filtering        403           98
LbPM-point clustering        43            20
LbPM-velocity estimation     11            14
LbPM-position estimation     42            54
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
