Aerospace

29 October 2023

Integrating GRU with a Kalman Filter to Enhance Visual Inertial Odometry Performance in Complex Environments

School of Aerospace, Transport, and Manufacturing, Cranfield University, Cranfield MK43 0AL, UK
This article belongs to the Special Issue UAV Path Planning and Navigation

Abstract

To enhance system reliability and mitigate the vulnerabilities of Global Navigation Satellite Systems (GNSS), it is common to fuse an Inertial Measurement Unit (IMU) and visual sensors with the GNSS receiver in the navigation system design, effectively enabling compensation with absolute positions and reducing data gaps. To address the shortcomings of a traditional Kalman Filter (KF), such as sensor errors, an imperfect non-linear system model, and KF estimation errors, a GRU-aided ESKF architecture is proposed to enhance the positioning performance. This study conducts a Failure Mode and Effect Analysis (FMEA) to identify and prioritize the potential faults in the urban environment, facilitating the design of an improved fault-tolerant system architecture. The identified primary fault events are data association errors and navigation environment errors under feature-mismatch fault conditions, especially in the presence of multiple failure modes. A hybrid federated navigation system architecture is employed, using a Gated Recurrent Unit (GRU) to predict state increments for updating the state vector in the measurement step of the Error State Kalman Filter (ESKF). The proposed algorithm's performance is evaluated in a MATLAB simulation environment under multiple visually degraded conditions. Comparative results provide evidence that the GRU-aided ESKF outperforms the standard ESKF and state-of-the-art solutions such as VINS-Mono, End-to-End VIO, and Self-Supervised VIO, exhibiting accuracy improvements in complex environments in terms of root mean square errors (RMSEs) and maximum errors.

1. Introduction

In recent years, unmanned aerial vehicles (UAVs) have gained traction with the evolution of technologies such as artificial intelligence and computer vision, which have effectively broadened pathways for diverse applications and services. UAVs have been utilized in many civil applications, such as aerial surveillance, package delivery, precision agriculture, search and rescue operations, traffic monitoring, remote sensing, and post-disaster operations [1]. The increasing demand for commercial UAVs for such applications has highlighted the need for robust, secure, and accurate navigation solutions. However, achieving accurate and reliable UAV positioning in complex environments, including overpasses, urban canyons, illumination variability, etc., has become more challenging.
Although Global Navigation Satellite Systems (GNSS) have become one of the most popular navigation systems in recent decades, their utilization remains questionable due to vulnerability to satellite visibility, jamming and spoofing interference, and environmental effects such as multipath, building masking, and ionospheric and tropospheric delays. These effects lead to sharp deterioration in positioning precision and GNSS availability [2]. The inertial navigation system (INS) provides high-frequency and continuous position, velocity, and attitude information, which makes the integration of INS with GNSS prevalent in most navigation architecture designs. However, the drift error generated by the INS accumulates over time, resulting in divergent positioning output. The impact of INS drift on GNSS/INS fusion performance during long-term GNSS outages has been explored widely [3,4,5]. Nevertheless, more sensor types are still in demand to provide more resilient and accurate positioning solutions under complex operation scenarios.
The vision-based navigation system is a promising alternative for providing reliable positioning information, free of radio frequency interference effects, during GNSS outages. Visual odometry (VO) is frequently employed as a critical component in vision-based navigation due to its efficient deployment and low computational complexity in contrast to Visual Simultaneous Localization and Mapping (VSLAM). Visual Inertial Navigation Systems (VINS) have been extensively explored by researchers, with notable examples including VINS-Mono [6], MSCKF [7], ORB-SLAM3 [8], and OpenVINS [9]. While adding sensor types alongside GNSS is a common way to improve navigation accuracy, integrity, update rate, and robustness, a VINS navigation system with multiple integrated sensors presents a higher likelihood of multiple faults, noise, and sensor failures existing within the system. It has been shown that purely VO-based navigation suffers performance degradation caused by factors such as illumination, motion blur, field of view, moving objects, and texture of the environment [10]. As a result, fault-tolerant designs need to be explored in VINS navigation systems to mitigate the impact of faults on the visual subsystems.
To achieve fault tolerance in integrated multi-sensor systems, decentralized filtering designs, especially the federated architecture, have become popular in recent years. Dan et al. [11] proposed an adaptive positioning algorithm based on a federated Kalman filter combined with a robust Error State Kalman Filter (ESKF) with adaptive filtering for a UAV-based GNSS/IMU/VO navigation system, eliminating issues of GNSS signal interruption and insufficient feature points while navigating in indoor and outdoor environments. However, most papers using the ESKF only model VO faults by adding position errors, whereas faults originating from visual cues, such as feature scarcity caused by environmental complexity and motion dynamics, or highly non-linear characteristics, have not been fully taken into account. Therefore, there is a gap in detecting and identifying faults and threats with consideration of visual cues in the GNSS/IMU/VO navigation system.
Current state-of-the-art fault-tolerant GNSS/IMU/VO navigation systems encounter more difficulties when operating in complex scenarios due to the challenges of identifying visual failure modes and correcting VO errors. As a structured and systematic fault identification method, Failure Mode and Effect Analysis (FMEA) is capable of identifying various fault types, defects, and sensor failures based on predicted and measured values, either as they occur or shortly afterwards. FMEA is commonly used to assess risks and improve the reliability of complex systems by identifying and evaluating potential failures in terms of occurrence likelihood, severity of impact, and detectability, as well as by prioritizing high-risk failure modes. However, researchers working on VIO have discussed individual faults caused by the navigation environment or sensor errors [10,12,13], leaving the systematic identification of failure modes as a gap. Moreover, systematic faults have not been uncovered when only single or specific combined faults, such as motion blur, rapid motion, and illumination variation, are considered.
To address the inherent non-linearity in visual navigation systems, AI has been employed alongside the Kalman filter to enhance the ability to model temporal changes in sensor data. Nevertheless, AI has the disadvantages of training time and predicted values inevitably containing errors, which can be partially resolved by simplifying the neural network, for example, by combining a Gated Recurrent Unit (GRU) with ESKF fusion, thus enabling the ESKF to better handle scenarios with varying levels of uncertainty and dynamic sensor conditions.
To implement a fault-tolerant navigation system for visually degraded environments, this paper proposes a GRU-aided ESKF VIO algorithm that conducts FMEA on the VINS system to identify failure modes and then informs the architecture of a fault-tolerant multi-sensor system in which the AI-aided ESKF VIO integration is used as one of the sub-filters to correct the identified visual failure modes. The major contributions of this paper are highlighted as follows:
  • The proposition of an FMEA-supported fault-tolerant federated GNSS/IMU/VO integrated navigation system. The FMEA execution on an integrated VINS system contributes to enhancing the system’s design, with a focus on accurate navigation during GNSS outages.
  • The proposition of a GRU-based enhancement of ESKF for predicting increments of positions to update measurements of ESKF, aiming to correct visual positioning errors, leading to more accurate and robust navigation in challenging conditions.
  • Performance evaluation of GRU-aided ESKF-based VIO within the fault-tolerant GNSS/IMU/VO multi-sensor navigation system. Training datasets for the GRU model are selected to replicate the failure modes extracted with fault conditions from FMEA. The verification is simulated and benchmarked on the Unreal engine, where the environment includes complex scenes of sunlight, shadow, motion blur, lens blur, no-texture, light variation, and motion variation. The validation dataset is grouped into multiple zone categories in accordance with single or multiple fault types due to environmental sensitivity and dynamic motion transitions.
  • The performance of the proposed algorithm is compared with the state-of-the-art End-to-End VIO and Self-supervised VIO by testing similar datasets on the proposed algorithm.
The remainder of the paper is organized as follows. Section 2 discusses existing systems designed based on a hybrid approach; Section 3 introduces the proposed GRU-aided KF-based federated GNSS/INS/VO navigation system; Section 4 describes the experimental setup; Section 5 presents the tests and result analysis in comparison with state-of-the-art systems; and the conclusion is presented in Section 6.

3. Proposed Fault Tolerant Navigation System

To correct visual positioning errors that arise from multiple systematic faults when navigating in urban areas, FMEA is executed as the first step to identify and analyse systematic failure modes according to the extracted fault tree model. The failure modes are prioritized based on potential impact and likelihood of occurrence, enabling the anticipation and mitigation of visual positioning errors. Based on the FMEA outcome, the hybrid GRU-aided ESKF VIO algorithm is discussed, as well as the algorithm implementation following the federated multi-sensor framework. The overall fault-tolerant multi-sensor system aided by FMEA is shown in Figure 2.
Figure 2. Fault-tolerant multi-sensor aided with FMEA framework.

3.1. Failure Mode and Effect Analysis (FMEA)

The implementation of FMEA on vision-based navigation systems enables the breaking down of high-level failure events into lower-level events along with allocating risks. Referring to error sources in every domain from the literature review [10], the fault tree model shown in Figure 3 is extracted.
Figure 3. Fault tree analysis for feature-based VIO.
The preliminary conclusion of the FMEA is that camera errors are most likely to occur during feature extraction, where the presence of multiple faults results in position errors across the whole system. Specifically, two major fault events, i.e., navigation environment errors and data association errors [10], show higher fault likelihoods in visual systems. Table 1 reviews common error sources for the navigation environment and data association error events, along with the visual faults targeted for mitigation in the context of VIO.
Table 1. Common faults in the visual positioning based on a state-of-the-art review.
  • One common fault in the navigation environment fault event is the feature extraction error that contains deterministic biases that frequently lead to position errors.
  • Another common fault in the data association failure event is the feature association error that occurs during matching 2D feature locations with 3D landmarks.
  • The sensor model error/long drift failure events represent errors generated by sensor dynamics, including VO error and IMU error types.
  • User failure events stand for the errors created during user operations that are normally relevant to the user calibration mistakes.
Building on the fault events extracted from state-of-the-art reviews and the thorough error analysis in Figure 3, this study aims to mitigate feature extraction errors occurring in failure modes linked to the navigation environment and data association events through a fault-tolerant GNSS/IMU/VO navigation system. The hybrid integration of VIO holds great potential for achieving precise and reliable navigation performance in complex conditions.

3.2. Fault-Tolerant Federated Navigation System Architecture

The utilization of a federated architecture proves advantageous in implementing fault-tolerant multi-sensor systems, as it is known for its robustness in handling faults. This paper proposes a federated architecture-based integrated multi-sensor fusion scheme based on the IMU, VO, and GNSS combination. The overall architecture of our method is shown in Figure 4.
Figure 4. The architecture of the proposed federated fault-tolerant multi-sensor navigation system.
Two sub-filters exist in the proposed architecture: a hybrid GRU-aided ESKF IMU/VO sub-filter and an EKF-based traditional GNSS/IMU integration sub-filter. The output of the two sub-filters is merged together with a global EKF to generate the ultimate position estimations. The former sub-filter of the hybrid GRU-aided ESKF IMU/VO attempts to compensate for VO errors, while the latter sub-filter of EKF-based GNSS/IMU integration aims to correct errors from GNSS and IMU.

3.2.1. Proposed GRU-aided ESKF VIO Integration (Sub-Filter 1)

ESKF VIO Fusion

In the tightly coupled GRU-aided ESKF VIO integration, pose measurements generated by VO are fused with the linear acceleration from the accelerometer and the angular velocity from the gyroscope. The visual features extracted by VO are first used to produce relative pose measurements for the ESKF updates [14]. The system filter uses GRU-predicted VO increments to correct the corresponding VIO states and obtain the corrected position.
The state vector x of the proposed GRU-aided ESKF VIO selects the following states:
x = [P_x, P_y, P_z, V_x, V_y, V_z, A_x, A_y, A_z, Δα_bx, Δα_by, Δα_bz, Δω_bx, Δω_by, Δω_bz]^T
where P_x, P_y and P_z denote the position; V_x, V_y and V_z the velocity; A_x, A_y and A_z the attitude; Δα_bx, Δα_by and Δα_bz the accelerometer bias; and Δω_bx, Δω_by and Δω_bz the gyroscope bias along the x-, y- and z-axes, respectively.
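For concreteness, the 15-element state vector can be laid out as a flat array. The index layout below is a hypothetical convention for illustration only; the paper specifies the ordering of the states but not a storage scheme.

```python
import numpy as np

# Hypothetical index layout: position (0-2), velocity (3-5), attitude (6-8),
# accelerometer bias (9-11), gyroscope bias (12-14).
POS, VEL, ATT = slice(0, 3), slice(3, 6), slice(6, 9)
ACC_BIAS, GYRO_BIAS = slice(9, 12), slice(12, 15)

def make_state(p, v, a, d_alpha_b, d_omega_b):
    """Stack the five 3-vectors into the 15-dimensional state vector x."""
    x = np.concatenate([p, v, a, d_alpha_b, d_omega_b]).astype(float)
    assert x.shape == (15,)
    return x

x = make_state([0, 0, -10], [5, 0, 0], [0, 0, 0], [0.001] * 3, [1e-4] * 3)
```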
The widely used tightly coupled VIO is based on a Kalman filter [14–25]. The system dynamic model and measurement model are:
x_k = Σ_k x_{k−1} + Q_k
z_k = H_k x_k + Π_k
where x_k and x_{k−1} represent the system state vector at epochs k and k−1; Σ_k and Q_k represent the state transition matrix and the system process noise; z_k and H_k represent the measurement vector and the measurement matrix; and Π_k represents the measurement noise.
Differing from conventional KF, ESKF uses an error-state representation, which has several benefits in terms of computational efficiency, numerical stability and prevention of singularities or gimbal lock issues. The details of the nominal and error states are included in the state-of-the-art research [14].
In order to improve positioning in diverse visual conditions, the predicted error measurements P V O / G R U are added to the error-state in a measurement update of ESKF. After the error-state update, the nominal state is updated with corrected error-states using the appropriate compositions:
x_k = x_{k−1} ⊕ δx̂_k
The updated state estimates, obtained by correcting the error states, are:
P̂_k = P̂_{k|k−1} + δP̂_k + ΔP_{VO|k−1}
V̂_k = V̂_{k|k−1} + δV̂_k
Â_k = Â_{k|k−1} + δÂ_k
α̂_k = α̂_{k|k−1} + δα̂_k
ω̂_k = ω̂_{k|k−1} + δω̂_k
where P̂_k and P̂_{k|k−1} denote the nominal position vector at epochs k and k−1; δP̂_k denotes the measured error-state position; ΔP_{VO|k−1} denotes the VO predicted increment vector at epoch k−1; V̂_k and V̂_{k|k−1} denote the nominal velocity at k and k−1; δV̂_k the measured error-state velocity; Â_k and Â_{k|k−1} the nominal attitude at k and k−1; δÂ_k the measured error-state attitude; α̂_k and α̂_{k|k−1} the nominal accelerometer bias at k and k−1; δα̂_k the measured error-state accelerometer bias; ω̂_k and ω̂_{k|k−1} the nominal gyroscope bias at k and k−1; and δω̂_k the measured error-state gyroscope bias. After the errors are integrated into the updated nominal state, the error-state variables are reset, following Zhonghan et al. [67].
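The injection of the error states into the nominal state, including the VO increment added to the position, can be sketched as follows. This is a simplified additive composition with illustrative variable names; a full ESKF would compose the attitude multiplicatively (e.g. via quaternions).

```python
import numpy as np

def inject_error_state(nominal, delta, dP_vo):
    """Inject the estimated error-state into the nominal state.

    `nominal` and `delta` are dicts of 3-vectors keyed 'P', 'V', 'A',
    'alpha_b', 'omega_b'; `dP_vo` is the GRU-predicted VO position increment.
    Additive composition is used throughout for simplicity.
    """
    updated = {k: np.asarray(nominal[k], float) + np.asarray(delta[k], float)
               for k in nominal}
    updated['P'] = updated['P'] + np.asarray(dP_vo, float)  # add the VO increment
    # After injection, the error-state is reset to zero for the next cycle.
    reset = {k: np.zeros(3) for k in delta}
    return updated, reset
```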
The proposed VIO measurement update process is given by the following equations:
K_k = Σ̂_{k|k−1} H_k^T (H_k Σ̂_{k|k−1} H_k^T + Π_k)^{−1}
x_k = x_{k|k−1} + K_k (z_k − H_k x_{k|k−1})
Σ_k = (I − K_k H_k) Σ_{k|k−1}
where K_k represents the Kalman gain, and Σ_k and Σ̂_{k|k−1} represent the state estimation error covariance matrices at epochs k and k−1.
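The measurement update above (Eqs. (10)–(12)) is the standard Kalman update. A minimal NumPy sketch, writing Σ as `P` and Π as `R`, is:

```python
import numpy as np

def eskf_update(x_pred, P_pred, z, H, R):
    """One Kalman measurement update, Eqs. (10)-(12).

    x_pred : predicted (a priori) state, shape (n,)
    P_pred : a priori covariance, shape (n, n)   (Sigma_{k|k-1} in the text)
    z, H, R: measurement vector, matrix, and noise covariance (Pi in the text)
    """
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Eq. (10): Kalman gain
    x_upd = x_pred + K @ (z - H @ x_pred)           # Eq. (11): state update
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred  # Eq. (12): covariance update
    return x_upd, P_upd, K
```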

GRU-Aided VIO

The VIO sub-filter uses the proposed ESKF-based tightly coupled integration strategy with a GRU model, which operates during GNSS outages. The GRU consists of an update gate z_t, which controls the extent to which the previous state influences the current state, and a reset gate r_t, which determines the degree to which the hidden state information h_t is forgotten. The details of the GRU propagation formula and architecture are adapted from Geragersian et al. [68]. The gate equations of the GRU architecture are presented below:
z_t = σ(W_z X_t + U_z h_{t−1})
r_t = σ(W_r X_t + U_r h_{t−1})
where W_z and W_r are the input weight matrices of the update and reset gates, respectively; U_z and U_r are the hidden-state weight matrices of the update and reset gates, respectively; and σ is the sigmoid activation function.
The GRU model is trained with multiple trajectories containing complex scenarios that facilitate failure modes extracted using FMEA analysis so that it can predict VO error during flight under diverse conditions.
The GRU output equation is formulated by:
P_{VO/GRU} = P_VO − ΔP_VO
where P V O denotes position error from VO, Δ P V O is position error deviations, and P V O / G R U represents the predicted position increments.
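A minimal sketch of the gate computations and the output correction is given below. The candidate and hidden-state updates are not spelled out in the text, so they follow the standard GRU formulation; all weights and names here are illustrative, not the trained model.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step: update/reset gates as in the text, plus the standard
    candidate/hidden-state update (assumed, not quoted from the paper)."""
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev)              # update gate
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev)              # reset gate
    h_tilde = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev))  # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde           # blended hidden state
    return h_t

def corrected_position(P_vo, dP_vo):
    """GRU output equation: subtract the predicted error deviation from VO."""
    return np.asarray(P_vo, float) - np.asarray(dP_vo, float)
```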
The GRU model is shown in Figure 5.
Figure 5. Illustrative diagram of GRU model.
Two data sources, IMU and VO, are used to gather positioning and attitude data for training on each trajectory. The VO position P_VO and orientation φ_VO, covering multiple complex environments flown by the UAV, together with the IMU-measured linear acceleration α_INS and angular velocity ω_INS, are used to form the inputs of the GRU. The output of the GRU model is the positioning error ΔP_VO generated by VO. When the GNSS signal is unavailable, the IMU/VO integration estimates the position of the UAV while the GRU module operates in prediction mode, predicting the position error ΔP_VO to be applied to the measurement vector in the ESKF module. When VO diverges, the GRU block predicts the visual errors for error correction.

3.2.2. EKF Based GNSS/IMU Integration (Sub-Filter 2)

A tightly coupled architecture is implemented in the GNSS/IMU integrated sub-filter of the proposed fault-tolerant multi-sensor navigation system. The GPS-measured position and the IMU-measured acceleration and angular velocity are used to estimate the state vector, including position, velocity, and attitude, using traditional EKF-based fusion filtering. Optimal state estimates from the traditional EKF are obtained through prediction and observation updates, as discussed in Mitchel et al. [69]. The generic observation equation for the EKF can be written as:
z_k^{GNSS/IMU} = H_k^{GNSS/IMU} X_k^{GNSS/IMU} + Λ^{GNSS/IMU}
where H_k^{GNSS/IMU} represents the observation matrix; z_k^{GNSS/IMU} the observation vector; X_k^{GNSS/IMU} the observation state vector; and Λ^{GNSS/IMU} the observation noise.

3.2.3. Federated Filter for Multi-Sensor Fusion

The proposed federated GNSS/IMU/VO multi-sensor navigation system uses the VIO and GNSS/IMU integrated systems as its sub-filters, switching between them as required. The global filter integration is conducted using the EKF approach to fuse the data generated by the sub-filters. The detailed description and state equations are the same as for the GNSS/INS sub-filter; to reduce computational complexity, the state equation of the GNSS/IMU sub-filter is reused for the global filter. The fusion of the federated filter is as follows:
P_{k|global} = ( Σ_{i=1}^{r} P_i^{−1} )^{−1}
X_{k|global} = P_{k|global} Σ_{i=1}^{r} P_i^{−1} X_i
where r is the number of sub-filters; P_i and P_{k|global} are the covariance matrices of the ith sub-filter and the global filter; and X_i and X_{k|global} are the estimated states of the ith sub-filter and the global filter. The global state estimate and covariance are obtained by fusing the sub-filter position estimates, thus yielding a global solution.
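The fusion above is the usual information-form combination of sub-filter estimates. A compact sketch, assuming each sub-filter supplies a covariance matrix and a state estimate, is:

```python
import numpy as np

def federated_fuse(P_list, x_list):
    """Information-form fusion of sub-filter estimates (Eqs. (17)-(18)):
    P_global^{-1} = sum_i P_i^{-1};  x_global = P_global * sum_i P_i^{-1} x_i."""
    info = sum(np.linalg.inv(P) for P in P_list)   # total information matrix
    P_global = np.linalg.inv(info)
    x_global = P_global @ sum(np.linalg.inv(P) @ x
                              for P, x in zip(P_list, x_list))
    return P_global, x_global
```

Note that a sub-filter with a smaller covariance (more information) receives proportionally more weight in the fused state.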
The pseudo-code of the proposed fault-tolerant federated multi-sensor navigation system algorithm is presented in Algorithm 1.
Algorithm 1: Algorithm of GNSS/IMU/VO Multi-Sensor Navigation System
  • Input: P_VO, α_xyz, ω_xyz, P_GNSS // P_VO denotes the VO estimated position; α_xyz the linear acceleration; ω_xyz the angular velocity; and P_GNSS the GNSS estimated position
  • Output: P_VIO, P_GNSS/IMU, P_GNSS/IMU/VO // P_VIO denotes the sub-filter 1 estimated position; P_GNSS/IMU the sub-filter 2 estimated position; and P_GNSS/IMU/VO the global filter estimated position
Initialize:
1. Initialize the values for the VIO and GNSS/IMU sub-filters: x_k = x_0; // x_0 denotes the ground truth from the UAV.
2. Initialize the values for the global filter EKF: x_{k|global} = x_0; // x_0 denotes the ground truth from the UAV.
  • for n = 1, 2, 3, … do // n denotes the number of iterations
Prediction phase:
3. The sub-filter time-update model is derived in Equation (2) and is the same for both sub-filters and the global filter, all being EKF variants.
4. Propagate the covariance: Σ_{k|k−1} = F_{k|k−1} Σ_{k−1} F_{k|k−1}^T + L_{k|k−1} Q_{k−1} L_{k|k−1}^T.
Measurement phase for sub-filters:
5. Generate the P_{VO/GRU} predicted increments and update the measurement step of the VIO sub-filter.
6. Estimate the states using the updated Equations (4)–(9).
7. Calculate the Kalman gain, the updated VIO states, and the covariance matrix using Equations (10)–(12).
8. Corrected X_k^{VIO} = x_{k|k−1}^{VIO} + K_k^{VIO} (z_k^{VIO} − H_k^{VIO} x_{k|k−1}^{VIO}).
9. Repeat from step 7 to estimate the Kalman gain using Equation (10) for the GNSS/IMU integration.
10. Update the observation using Equation (16).
11. Estimate the updated states and covariance matrix of the GNSS/IMU filter using Equations (11) and (12).
12. The proposed VIO sub-filter and the EKF-based GNSS/IMU integration run in parallel, each estimating its own position.
Measurement update for the global filter:
13. Calculate the Kalman gain:
K_global = Σ̂_{k|k−1}^{global} (H_k^{global})^T ( H_k^{global} Σ̂_{k|k−1}^{global} (H_k^{global})^T + Π_global )^{−1}
14. Update the states using the traditional EKF Equation (16).
15. Obtain P_{k|global} and X_{k|global} using Equations (17) and (18).
  • End for

4. Experimental Setup

To verify the performance of the proposed fault-tolerant navigation system in complex environments, a GNSS/IMU/VO simulator is built on Unreal Engine with UAV dynamic models integrated into urban scenarios in MATLAB 2022a. The sensors implemented in the simulator include an IMU block, GNSS receivers, and a front-facing monocular camera model generated by the Navigation Toolbox and UAV Toolbox. The choice of a monocular camera is beneficial for our application because it is less expensive, simpler to implement than a stereo camera, and lightweight enough to fit on a drone. The complex simulated environment is generated using Unreal Engine 5.0.3 with the 'US City' scenario available in the 'Simulation 3D Scene Configuration' block. The simulated scenario used in the experiments is a bright sunny day with 20% floating clouds. The dataset is acquired with these sensors mounted on a quadrotor, as shown in Figure 6.
Figure 6. Experimental simulation setup in MATLAB.
The simulator consists of four blocks in total. The first block is the 3D simulation environment block, which aims to simulate the US city environment with a combination of camera- and UAV-based quadrotor models. The second block is GNSS integration with a quadrotor consisting of GNSS and the quadrotor dynamics. The third block is the IMU block interfaced with the quadrotor block. The fourth block is the ground truth from the quadrotor dynamics to provide true quadrotor trajectories.
The IMU selects the ICM 20649 model with the specifications provided in Table 2. The experimental data are collected with sampling rates of 10 Hz, 100 Hz, and 100 Hz for the camera, IMU, and GNSS, respectively. The random walk error [0.001, 0.001, 0.001] m / s 2 in the IMU accelerometer results in a position error growing at a quadratic rate.
Table 2. Specification of the ICM 20649 IMU model.
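The quadratic growth of position error can be illustrated by double-integrating a constant accelerometer bias. The sketch below treats the stated random-walk magnitude as a constant bias purely for illustration; these are not the paper's simulation values.

```python
def drift_from_accel_bias(b, dt, steps):
    """Euler double-integration of a constant accelerometer bias b (m/s^2):
    velocity error grows linearly, position error grows quadratically."""
    v = 0.0  # accumulated velocity error (m/s)
    p = 0.0  # accumulated position error (m)
    for _ in range(steps):
        v += b * dt  # first integration
        p += v * dt  # second integration
    return p

# b = 0.001 m/s^2 over 10 s is close to the analytic 0.5 * b * t**2 = 0.05 m.
```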
The GNSS model is initialized by injecting two common failure modes, random walk and step error, which are most likely to occur in an urban environment as a result of multipath effects.
The camera model specifications include a focal length of 1109 pixels, an optical center at (640, 360), and an image size of 720 × 1280. Regarding the calibration of the camera and the extraction of the extrinsic and intrinsic parameters of the simulated front-facing camera, the conversion from world coordinates to pixel coordinates is determined by the camera intrinsics matrix K:
K = [ 1109    0    640
         0  1109   360
         0     0     1 ]
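With these intrinsics, a world point expressed in the camera frame projects to pixel coordinates as u = K·p / z (note the bottom-right entry of the pinhole intrinsics matrix is 1 in the standard form):

```python
import numpy as np

# Intrinsics of the simulated front-facing camera:
# focal length 1109 px, principal point (640, 360).
K = np.array([[1109.0,    0.0, 640.0],
              [   0.0, 1109.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project(p_cam):
    """Project a 3D point in the camera frame to pixel coordinates."""
    uvw = K @ np.asarray(p_cam, float)
    return uvw[:2] / uvw[2]  # perspective division by depth
```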
For urban operation scenarios surrounded by buildings, the visual data of tall buildings are captured by the camera so that VIO can provide positioning information; meanwhile, satellite visibility is obstructed by the buildings, causing a GNSS outage. The MATLAB simulator connects to the QGroundControl software to generate real-time flight trajectories for data collection and saves them in text file format. QGroundControl uses a MAVLink communicator to connect to the base station of the UAV block in the simulator [70]. The integration of MAVLink with MATLAB/Simulink is adapted from the UAV package delivery example.
Regarding the training GRU models, 10 trajectories covering more than 100,000 samples from each sensor are used for training. The sensor blocks, including IMU, the GPS provided by the UAV Toolbox, and the Navigation Toolbox, operate in the local frame. To ensure compatibility and effective data fusion using the proposed algorithm, a crucial step in the data pre-processing phase involves converting the data from the local frame to the sensor body frame. This transformation is essential for aligning the sensor data with the algorithm’s requirements and the system’s operational frame reference.
The general performance evaluation uses the root mean square error (RMSE), formulated as:
RMSE_overall = sqrt( (1/N) Σ_{i=1}^{N} [ (P_x^{Pre} − P_x^{GT})² + (P_y^{Pre} − P_y^{GT})² + (P_z^{Pre} − P_z^{GT})² ] )
where P_x^{Pre}, P_y^{Pre}, P_z^{Pre} are the predicted positions generated by the proposed algorithm along the x-, y- and z-axes, respectively; P_x^{GT}, P_y^{GT}, P_z^{GT} are the ground-truth positions from the UAV along the x-, y- and z-axes, respectively; and N is the number of samples.
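The overall RMSE can be computed directly over N predicted/ground-truth position pairs:

```python
import numpy as np

def rmse_overall(P_pre, P_gt):
    """Overall 3D RMSE: P_pre and P_gt are (N, 3) arrays of predicted and
    ground-truth positions; returns sqrt of the mean squared 3D error norm."""
    diff = np.asarray(P_pre, float) - np.asarray(P_gt, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```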

5. Test and Results

To evaluate the performance of the proposed GRU-aided ESKF-based VIO, two trajectories corresponding to Experiments 1 and 2 are selected from the package delivery experiment in an urban environment. Both experiments are carried out under sunlight conditions, introducing common fault scenarios of shadows, lighting variations, motion blur, no texture, and motion variation consistently present throughout the flight duration. For both experiments, a consistent fault condition is injected, i.e., the shadow of tall buildings on a sunny day during the flight. According to the fault types and their number, the flying regions in the experiments are categorized into four distinct zones encompassing single, multiple, and combined faults. The faults arising from environmental sensitivity and dynamic motion transitions, under the two major visual-system failure events identified by the FMEA analysis in Section 3.1, are discussed below.

5.1. Experiment 1—Dense Urban Canyon

The purpose of this experiment is to validate the effectiveness of the proposed GRU-aided VIO in managing specific failure modes in complex conditions. This experimental environment includes a combination of tall and short buildings. During the experiment, a distance of 235 m was covered within a time span of 55 s. Figure 7 shows the accumulated 3D visual position error. Our proposed GRU-aided VIO is able to reduce position error by 86.6% compared with an ESKF-based VIO reference system. The maximum error in Figure 7 reduces from 7.5 m to 1.9 m with the GRU adoption.
Figure 7. Overall visual navigation error in the presence of multiple failure modes for Experiment 1.
When the UAV takes off with rapid and sudden waypoint changes, the Dynamic Motion Transitions failure mode occurs under jerking movements, which consistently result in feature tracking and feature association error failure modes in VO. Moreover, another dynamic motion transition failure mode, turning, is incorporated to replicate complex environmental conditions, thereby emulating real flight scenarios. The UAV take-off speed is set to 20 m/s, increasing to 50 m/s at 7 s.
Figure 8 shows the RMSE position error of the proposed GRU-aided VIO system, benchmarked against two references: the VO- and ESKF-based VIO position errors. The RMSE position error along the x-axis in the NED frame is relatively lower than along the y- and z-axes. It is worth noting that the maximum position errors of the ESKF-based VIO reference system are 3.227 m, 5.6 m, and 4.1 m in the x-, y- and z-axes, respectively.
Figure 8. Visual navigation error along each axis in the presence of multiple failure modes for Experiment 1.
In the y-axis, the position error increases at 22 s because the shadow of another building creates a lighting variation. At 27 s, the UAV encounters a turn facing a plain wall; the lack of texture leads to drift, inaccuracies, and failure of the visual odometry. The ESKF-based VIO showed relatively poor performance along the y-axis during diagonal motion due to cross-axis coupling and multiple failure modes (a featureless plain wall, sunlight variation, and shadows of tall buildings) leading to feature degradation and feature-tracking errors. The loss of visual features leaves the ESKF with insufficient information to estimate the position accurately. Figures 7 and 8 show that the ESKF-based VIO fails to mitigate visual positioning error under non-linear motion, lack of observable features, non-Gaussian noise, and uncertain state estimation, leading to linearization error propagation. However, the proposed solution mitigates position error by 60.44%, 78.13%, and 77.13% in the x-, y- and z-axes, respectively. The maximum position errors decrease to 1.5 m, 1.6 m, and 1.2 m.
When analyzing the z-axis performance in Figure 8, the ESKF-based VIO position error starts increasing at around 37 s because a dark wall, shadowed by another tall building, causes a variation in lighting within the frame and hence a feature association error. ESKF performance degrades under the multiple factors present in this scenario, and the error accumulates over time. With the proposed GRU-aided ESKF, the VIO fusion method reduces the position error at 37 s from 2.1 m to 0.6 m by predicting the VO error, as detailed in Table 3.
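The correction mechanism described above, where the GRU-predicted VO error feeds the ESKF measurement step, can be sketched as below. This is a hedged, minimal sketch: the state layout, matrix names, and the stubbed `gru_delta` input (standing in for the output of the trained GRU over recent sensor windows) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def eskf_update_with_gru(x_nom, P, z_meas, H, R, gru_delta):
    """One ESKF measurement update with a learned correction.

    gru_delta: predicted VO error (e.g. from a trained GRU), removed
    from the visual measurement before the standard Kalman update.
    """
    z_corr = z_meas - gru_delta            # remove predicted VO error
    y = z_corr - H @ x_nom                 # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    dx = K @ y                             # error-state estimate
    x_nom = x_nom + dx                     # inject into nominal state
    P = (np.eye(P.shape[0]) - K @ H) @ P   # covariance update
    return x_nom, P

# Toy 3-state position example: the "GRU" output is stubbed with the
# known VO bias it would ideally have learned to predict.
true_pos = np.array([1.0, 2.0, 3.0])
vo_bias = np.array([0.5, -0.3, 0.8])
x, P = np.zeros(3), np.eye(3)
x, P = eskf_update_with_gru(x, P, true_pos + vo_bias,
                            np.eye(3), 0.01 * np.eye(3), vo_bias)
# x is now close to true_pos, since the bias was removed pre-update
```

With the bias left uncorrected, the same update would converge toward the biased measurement instead, which mirrors the drift reported for the plain ESKF-based VIO.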
Table 3. RMSE Comparison on the performance of two experiments.

5.2. Experiment 2—Semi-Structured Urban Environment

This experiment measures the GRU-aided ESKF VIO performance in an environment of tall buildings with open-space parking areas. The UAV encounters two turns, i.e., two waypoint changes at 60 s and 110 s, and the resulting motion variation causes feature association errors of the 'Dynamic Motion Transitions' type.
Figure 9 shows the accumulated 3D position error of the proposed GRU-aided ESKF VIO system, benchmarked against the two references of VO-based and ESKF-based VIO position errors. In Figure 9, the position error in the first few seconds is negative due to multiple factors, such as shadows of tall buildings, trees on plain surfaces, lighting variations caused by sunlight, motion blur, and rapid motion. The maximum position error is 19.1 m at 110 s owing to combined failure modes, namely a dark wall, rapid motion, and motion blur. The proposed GRU-aided ESKF VIO mitigates 86.62% of the overall position error, and the maximum error is reduced from 19.1 m to 6.8 m.
Figure 9. The 3D position error in the presence of multiple failure modes for Experiment 2.
Figure 10 shows the VIO position RMSE separately for the x-, y- and z-axes. Table 3 indicates that the position RMSEs of the ESKF are 4.7 m, 2.5 m, and 8.6 m in the x-, y- and z-axes, respectively. The proposed GRU-aided ESKF VIO achieves a marked improvement in position error, with values of 0.8 m, 0.9 m, and 0.5 m along the x-, y- and z-axes, respectively. Due to cross-axis coupling, the y-axis exhibits a larger estimated position error than the others. During the interval of 57–65 s, the UAV makes a turn and passes through a parking area where the buildings fall within a limited field of view. In this case, the feature extraction and tracking processes encounter challenges that lead to position estimation errors. The proposed solution shows an excellent performance improvement in the presence of the feature extraction error and feature tracking error failure modes, decreasing the position error by 62.86% compared with the reference systems.
Figure 10. Position error along each axis in the presence of multiple failure modes for Experiment 2.
When analyzing the z-axis performance in Figure 10, the proposed GRU-aided ESKF VIO outperforms the reference ESKF fusion, reducing the z-axis error by 93.45%. According to Table 3, the reference ESKF shows the worst performance, with an RMSE of 8.6 m, compared with 0.5 m for the proposed method. At 104 s, during the landing phase, the UAV turns around and encounters a black wall. This leads to larger errors because the VO system struggles to extract enough features in a complex, poorly lit scene [6,8,9]. The maximum z-axis position error due to the dark scene at 110 s is reduced from 16.2 m to 3.0 m. During the experiment, a distance of 459 m was covered within a time span of 1 min 50 s.
Table 4 shows the maximum position error comparison for the two experiments. By integrating fault-tolerant mechanisms, the proposed approach achieves more accurate position estimation, even in challenging situations with limited visual cues. The fault-tolerant GRU-aided ESKF VIO architecture shows robustness across a range of realistic visual degradation scenarios.
Table 4. Maximum position error comparison for two experiments.

5.3. Performance Evaluation Based on Zone Categories

To further evaluate the success rate in mitigating the failure modes from Experiments 1 and 2, as detailed in Section 3.1, the fault zones are extracted from the two experiments.
Zone 1 covers building shadow as a single fault triggering the feature matching and feature tracking error failure modes within its time interval. In Experiment 1, between 33 s and 49 s, the UAV passes through the shadows of buildings, which distort visual features and cause incorrect matches and tracking errors as features move into or out of shadow, as shown in Figure 11. In addition, such sudden lighting changes may be misinterpreted as IMU acceleration and rotation.
Figure 11. Feature tracking and Feature mismatch error failure modes due to tall-shadowed buildings in experiment 1.
Figure 12a,b depict the Zone 1 region to compare the performance of the proposed algorithm with the reference algorithms in the presence of the two failure modes. The maximum position errors for Experiment 1 in Zone 1 along the x-, y- and z-axes are reduced by 52.38%, 81.57%, and 73.17%, respectively. In Experiment 2, the single fault is encountered twice, during the intervals of 13–44 s and 70–106 s. The maximum position errors in the 13–44 s interval are reduced by 93.33%, 75%, and 85% in the x-, y- and z-axes, respectively. Hence, the proposed solution proves robust against the two failure modes.
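The zone-wise reductions quoted above amount to windowing the error time series by a fault interval and comparing per-zone maxima. A small illustrative helper (not the authors' evaluation code; the timestamps and error series below are synthetic) might look like:

```python
import numpy as np

def zone_max_error_reduction(t, err_ref, err_prop, t0, t1):
    """Percent reduction of the maximum |error| inside a fault zone.

    t: (N,) timestamps in seconds; err_ref, err_prop: (N,) per-axis
    errors of the reference and proposed filters; [t0, t1]: zone bounds.
    """
    mask = (t >= t0) & (t <= t1)
    m_ref = np.max(np.abs(err_ref[mask]))
    m_prop = np.max(np.abs(err_prop[mask]))
    return 100.0 * (m_ref - m_prop) / m_ref

# Synthetic example mimicking a shadow zone between 33 s and 49 s.
t = np.linspace(0, 60, 61)
err_ref = np.where((t >= 33) & (t <= 49), 4.0, 0.5)
err_prop = np.where((t >= 33) & (t <= 49), 1.0, 0.4)
reduction = zone_max_error_reduction(t, err_ref, err_prop, 33, 49)
# in-zone maximum drops from 4.0 m to 1.0 m, i.e. a 75% reduction
print(reduction)  # 75.0
```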
Figure 12. Position errors of Zone 1 (shadowed building) estimated in each axis for experiment 1 (a). Position errors of Zone 1 (shadowed building) estimated in each axis for experiment 2 (b).
Zone 2 includes multiple faults, namely the turning manoeuvre and the shadows of tall buildings, which are present in both experiments. When a UAV makes a turn, the motion dynamics change rapidly, making camera motion and orientation estimation challenging and causing tracking errors. At the same time, visual distortion caused by inconsistent lighting produces feature extraction and feature mismatch errors, as shown in Figure 13. The combination of both conditions adds complexity to the environment, exacerbating the existing challenges for traditional ESKF-based VIO. The proposed algorithm mitigates these failure modes and shows robustness in such complex scenes compared with traditional VIO systems: it mitigates the motion dynamics and feature extraction errors, reducing the error by 20%, 20%, and 50% during the 13–24 s interval in Experiment 1 and by 62.5%, 40%, and 90% during the 45–70 s interval in Experiment 2, with respect to the x-, y- and z-axes, as shown in Figure 14a,b.
Figure 13. Motion dynamics, feature tracking and feature mismatch error failure modes due to the tall, shadowed buildings in experiment 2.
Figure 14. Position errors of Zone 2 (shadowed building and UAV turns) estimated in each axis for experiment 1 (a). Position errors of Zone 2 (shadowed building and UAV turn) estimated in each axis for experiment 2 (b).
In Zone 3, multiple faults are combined, including the turning manoeuvre, shadows from tall buildings, variations in lighting, areas of darkness, and sunlight shadows. Zone 3, one of the most complex conditions, occurs only in Experiment 2. As observed in Zones 1 and 2, the turning behaviour and the shadows of tall buildings introduce changes in motion dynamics that make position estimation and feature tracking challenging for traditional ESKF-based VIO. Additionally, the presence of both dark and well-lit areas within the scene creates abrupt changes in illumination.
Figure 15 presents a sample image from the UAV's front-facing camera while passing through illuminated and shaded areas. The shadows cast by direct sunlight create sharp pixel contrast between illuminated and shaded regions. These sudden lighting changes, combined with multiple fault conditions, amplify the challenges posed by each individual fault, making the overall VIO performance more susceptible to tracking errors, feature mismatches, and feature extraction error failure modes.
Figure 15. Feature tracking, feature extraction error, and feature mismatch failure modes for dark and well-illuminated tall buildings in experiment 2.
Figure 16 shows that the GRU-aided ESKF VIO architecture reduces the maximum position error in Experiment 2 during the 107–114 s interval by 32.6%, 81.33%, and 64.40% in the x-, y- and z-axes, respectively. Therefore, the GRU-aided fusion algorithm can operate without interruption while the UAV navigates through illuminated and shaded areas, and it shows robustness in the presence of multiple failure modes and moving features amidst dynamic lighting.
Figure 16. Position errors in Zone 3 (UAV turn and illuminated tall buildings) estimated in each axis for experiment 2.
Zone 4 consists of a combination of complex faults, including navigation environmental error and data association fault events: turning manoeuvres, building shadows, featureless blank walls, and variations in lighting. In Experiment 1, the UAV encounters a plain wall at 27–32 s of its flight, resulting in a feature extraction error due to the lack of distinctive features on the wall, as shown in Figure 17. The feature extraction process fails, leaving no identifiable features to track and match across consecutive frames. This loss of features causes the VO to lose its frame-to-frame correspondence, preventing accurate estimation of the UAV's motion during this 27–32 s interval. Figure 18 shows the resulting increase in position error; the ESKF is heavily affected, with growing tracking errors and loss of tracking when facing the featureless wall. Figure 18 also shows that the proposed algorithm reduces the maximum position error by 42.1%, 63.3%, and 60.12% in the x-, y- and z-axes, respectively.
Figure 17. Feature extraction error due to the plain wall in experiment 1.
Figure 18. Position errors of Zone 4 (UAV turn, shadowed buildings, and featureless wall buildings) estimated in each axis for experiment 1.
To evaluate the performance of the proposed fault-tolerant federated multi-sensor navigation system, the experiment is conducted on the Experiment 1 dataset under two GNSS conditions (faulted GNSS and fault-free GNSS) adopted from [71]. Table 5 shows the performance of the proposed GRU-aided ESKF VIO with faulted GNSS and with fault-free GNSS. The results indicate that the hybrid approach mitigates the overall position error even under faulted GNSS conditions compared with [11,71]. Therefore, it is confirmed that the FMEA-assisted fault-tolerant multi-sensor navigation system improves positioning performance in the presence of multiple faults, covering all the sensor faults, in diverse, complex environments.
Table 5. The RMSE of 3D position errors of GNSS/faulted-IMU/faulted-VO solution with faulted-GNSS/faulted-IMU/faulted-VO.

5.4. Performance Comparison with Other Datasets

This paper selects the EuRoC dataset, specifically the MH05_difficult sequence, for benchmarking against other algorithms, since EuRoC is widely used by other researchers [12,45]. The sequence was collected in an indoor machine hall with light variation. The MH05_difficult dataset contains black-and-white images, referred to as dark and shadowed environments, captured under rapid motion. The dataset was captured in a customized way that imposes several limitations, including manipulated images, customized blur, and altered brightness.
Figure 19 presents the 3D position error of the proposed GRU-aided ESKF VIO on MH05_difficult. The key finding is that the RMSE position error is reduced by 67.32%, and the maximum error is reduced from 2.81 m to 1.5 m.
Figure 19. The 3D position error estimated on the MH05 sequence of the EuRoC dataset with a motion blur failure mode.
Table 6 highlights the comparison results in terms of the accumulated RMSE against state-of-the-art systems, i.e., End-to-End VIO and Self-Supervised VIO [12,45]. It is worth mentioning that the EuRoC dataset does not provide tight synchronization between the IMU and images, which is a primary requirement for RNN-based VIO.
Table 6. Comparison with state-of-the-art methods on the MH_05 sequence of the EuRoC dataset with a motion blur failure mode.
Cross-checking with the work of Wagstaff et al. [12] confirms the robust improvement of the proposed GRU-aided ESKF VIO in the presence of the motion blur failure mode.

6. Conclusions

Aiming to provide fault-tolerant VIO navigation solutions against complex environments, this study proposed a hybrid federated navigation system framework aided by FMEA for enabling fault tolerance and GRU fused with ESKF-VIO to mitigate visual positioning errors.
Through simulations, the main advantages of the GRU and ESKF hybrid algorithm are summarized as follows: (1) A high-efficiency recurrent neural cell with a simple architecture, the GRU, was chosen to predict the position error during visual degradation. Benefiting from a proper choice of Kalman filter performance enhancement method, i.e., updating the state vector with AI-predicted errors, the proposed algorithm achieves superior navigation accuracy under complex conditions. (2) The FMEA helps to prioritize anticipated failure modes, such as feature extraction error, feature tracking error, and motion dynamics, enabling the mitigation of the position errors they cause before they lead to operational failure. (3) Feature extraction failure modes, which can subsequently lead to feature association errors, are mitigated; the demonstrations show that the proposed approach reduces the impact of failures arising from multiple factors or faults within the navigation environment and the UAV's dynamics.
This approach represents a significant step towards improving the robustness and reliability of VIO, particularly in complex and dynamic environments where feature extraction error, feature tracking error, and feature mismatch are critical for accurate navigation. With the correction of the VIO, the fault-tolerant multi-sensor performance is demonstrated to be improved under diverse, complex urban environments in terms of robustness and accuracy at different time scales, enabling uninterrupted and seamless flight operations.

Author Contributions

Conceptualization, I.P.; methodology, T.E.T.; data curation, T.E.T.; formal analysis, T.E.T.; investigation, T.E.T.; resources, I.P. and T.E.T.; software, T.E.T.; validation, T.E.T.; visualization, T.E.T. and Z.X.; writing-original draft, T.E.T.; writing-review and editing, Z.X., I.P. and Z.A.R.; supervision, I.P. and Z.A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arafat, M.Y.; Alam, M.M.; Moh, S. Vision-Based Navigation Techniques for Unmanned Aerial Vehicles: Review and Challenges. Drones 2023, 7, 89. [Google Scholar] [CrossRef]
  2. Alkendi, Y.; Seneviratne, L.; Zweiri, Y. State of the Art in Vision-Based Localization Techniques for Autonomous Navigation Systems. IEEE Access 2021, 9, 76847–76874. [Google Scholar] [CrossRef]
  3. Afia, A.B.; Escher, A.; Macabiau, C. A Low-cost GNSS/IMU/Visual monoSLAM/WSS Integration Based on Federated Kalman Filtering for Navigation in Urban Environments. In Proceedings of the 28th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2015), Tampa, FL, USA, 14–18 September 2015. [Google Scholar]
  4. Lee, Y.D.; Kim, L.W.; Lee, H.K. A tightly-coupled compressed-state constraint Kalman Filter for integrated visual-inertial-Global Navigation Satellite System navigation in GNSS-Degraded environments. IET Radar Sonar Navig. 2022, 16, 1344–1363. [Google Scholar] [CrossRef]
  5. Liao, J.; Li, X.; Wang, X.; Li, S.; Wang, H. Enhancing navigation performance through visual-inertial odometry in GNSS-degraded environment. GPS Solut. 2021, 25, 50. [Google Scholar] [CrossRef]
  6. Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  7. Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3565–3572. [Google Scholar] [CrossRef]
  8. Campos, C.; Elvira, R.; Rodriguez, J.J.G.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  9. Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A Research Platform for Visual-Inertial Estimation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4666–4672. [Google Scholar] [CrossRef]
  10. Zhu, C.; Meurer, M.; Günther, C. Integrity of Visual Navigation—Developments, Challenges, and Prospects. Navig. J. Inst. Navig. 2022, 69, 518. [Google Scholar] [CrossRef]
  11. Dai, J.; Hao, X.; Liu, S.; Ren, Z. Research on UAV Robust Adaptive Positioning Algorithm Based on IMU/GNSS/VO in Complex Scenes. Sensors 2022, 22, 2832. [Google Scholar] [CrossRef]
  12. Wagstaff, B.; Wise, E.; Kelly, J. A Self-Supervised, Differentiable Kalman Filter for Uncertainty-Aware Visual-Inertial Odometry. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, Sapporo, Japan, 11–15 July 2022; pp. 1388–1395. [Google Scholar] [CrossRef]
  13. Zhai, Y.; Fu, Y.; Wang, S.; Zhan, X. Mechanism Analysis and Mitigation of Visual Navigation System Vulnerability. In China Satellite Navigation Conference (CSNC 2021) Proceedings; Springer: Singapore, 2021; Volume 773, pp. 515–524. [Google Scholar] [CrossRef]
  14. Markovic, L.; Kovac, M.; Milijas, R.; Car, M.; Bogdan, S. Error State Extended Kalman Filter Multi-Sensor Fusion for Unmanned Aerial Vehicle Localization in GPS and Magnetometer Denied Indoor Environments. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems, ICUAS 2022, Dubrovnik, Croatia, 21–24 June 2022; pp. 184–190. [Google Scholar] [CrossRef]
  15. Xiong, X.; Chen, W.; Liu, Z.; Shen, Q. DS-VIO: Robust and Efficient Stereo Visual Inertial Odometry based on Dual Stage EKF. arXiv 2019, arXiv:1905.00684. [Google Scholar]
  16. Fanin, F.; Hong, J.H. Visual Inertial Navigation for a Small UAV Using Sparse and Dense Optical Flow. In Proceedings of the 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS), Cranfield, UK, 25–27 November 2019. [Google Scholar]
  17. Bloesch, M.; Burri, M.; Omari, S.; Hutter, M.; Siegwart, R. Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback. Int. J. Robot. Res. 2017, 36, 1053–1072. [Google Scholar] [CrossRef]
  18. Sun, K.; Mohta, K.; Pfrommer, B.; Watterson, M.; Liu, S.; Mulgaonkar, Y.; Taylor, C.J.; Kumar, V. Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight. IEEE Robot. Autom. Lett. 2018, 3, 965–972. [Google Scholar] [CrossRef]
  19. Li, G.; Yu, L.; Fei, S. A Binocular MSCKF-Based Visual Inertial Odometry System Using LK Optical Flow. J. Intell. Robot. Syst. Theory Appl. 2020, 100, 1179–1194. [Google Scholar] [CrossRef]
  20. Yang, Y.; Geneva, P.; Eckenhoff, K.; Huang, G. Visual-Inertial Odometry with Point and Line Features. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macau, China, 3–8 November 2019; pp. 2447–2454. [Google Scholar] [CrossRef]
  21. Ma, F.; Shi, J.; Yang, Y.; Li, J.; Dai, K. ACK-MSCKF: Tightly-coupled ackermann multi-state constraint kalman filter for autonomous vehicle localization. Sensors 2019, 19, 4816. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, Z.; Pang, B.; Song, Y.; Yuan, X.; Xu, Q.; Li, Y. Robust Visual-Inertial Odometry Based on a Kalman Filter and Factor Graph. IEEE Trans. Intell. Transp. Syst. 2023, 24, 7048–7060. [Google Scholar] [CrossRef]
  23. Omotuyi, O.; Kumar, M. UAV Visual-Inertial Dynamics (VI-D) Odometry using Unscented Kalman Filter. IFAC Pap. 2021, 54, 814–819. [Google Scholar] [CrossRef]
  24. Sang, X.; Li, J.; Yuan, Z.; Yu, X.; Zhang, J.; Zhang, J.; Yang, P. Invariant Cubature Kalman Filtering-Based Visual-Inertial Odometry for Robot Pose Estimation. IEEE Sensors J. 2022, 22, 23413–23422. [Google Scholar] [CrossRef]
  25. Xu, J.; Yu, H.; Teng, R. Visual-inertial odometry using iterated cubature Kalman filter. In Proceedings of the 30th Chinese Control and Decision Conference, CCDC, Shenyang, China, 9–11 June 2018; pp. 3837–3841. [Google Scholar] [CrossRef]
  26. Liu, Y.; Xiong, R.; Wang, Y.; Huang, H.; Xie, X.; Liu, X.; Zhang, G. Stereo Visual-Inertial Odometry with Multiple Kalman Filters Ensemble. IEEE Trans. Ind. Electron. 2016, 63, 6205–6216. [Google Scholar] [CrossRef]
  27. Kim, S.; Petrunin, I.; Shin, H.-S. A Review of Kalman Filter with Artificial Intelligence Techniques. In Proceedings of the Integrated Communications, Navigation and Surveillance Conference, ICNS, Dulles, VA, USA, 5–7 April 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  28. Jwo, D.-J.; Biswal, A.; Mir, I.A. Artificial Neural Networks for Navigation Systems: A Review of Recent Research. Appl. Sci. 2023, 13, 4475. [Google Scholar] [CrossRef]
  29. Shaukat, N.; Ali, A.; Moinuddin, M.; Otero, P. Underwater Vehicle Localization by Hybridization of Indirect Kalman Filter and Neural Network. In Proceedings of the 2021 7th International Conference on Mechatronics and Robotics Engineering, ICMRE 2021, Budapest, Hungary, 3–5 February 2021; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 111–115. [Google Scholar] [CrossRef]
  30. Shaukat, N.; Ali, A.; Iqbal, M.J.; Moinuddin, M.; Otero, P. Multi-sensor fusion for underwater vehicle localization by augmentation of rbf neural network and error-state kalman filter. Sensors 2021, 21, 1149. [Google Scholar] [CrossRef]
  31. Vargas-Meléndez, L.; Boada, B.L.; Boada, M.J.L.; Gauchía, A.; Díaz, V. A sensor fusion method based on an integrated neural network and Kalman Filter for vehicle roll angle estimation. Sensors 2016, 16, 1400. [Google Scholar] [CrossRef]
  32. Jingsen, Z.; Wenjie, Z.; Bo, H.; Yali, W. Integrating Extreme Learning Machine with Kalman Filter to Bridge GPS Outages. In Proceedings of the 2016 3rd International Conference on Information Science and Control Engineering, ICISCE 2016, Beijing, China, 8–10 July 2016; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2016; pp. 420–424. [Google Scholar] [CrossRef]
  33. Zhang, X.; Mu, X.; Liu, H.; He, B.; Yan, T. Application of Modified EKF Based on Intelligent Data Fusion in AUV Navigation. In Proceedings of the 2019 IEEE Underwater Technology (UT), Kaohsiung, Taiwan, 16–19 April 2019. [Google Scholar]
  34. Al Bitar, N.; Gavrilov, A.I. Neural Networks Aided Unscented Kalman Filter for Integrated INS/GNSS Systems. In Proceedings of the 27th Saint Petersburg International Conference on Integrated Navigation Systems (ICINS), St. Petersburg, Russia, 25–27 May 2020; pp. 1–4. [Google Scholar]
  35. Miljković, Z.; Vuković, N.; Mitić, M. Neural extended Kalman filter for monocular SLAM in indoor environment. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 2016, 230, 856–866. [Google Scholar] [CrossRef]
  36. Choi, M.; Sakthivel, R.; Chung, W.K. Neural Network-Aided Extended Kalman Filter for SLAM Problem. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 1686–1690. [Google Scholar]
  37. Kotov, K.Y.; Maltsev, A.S.; Sobolev, M.A. Recurrent neural network and extended Kalman filter in SLAM problem. IFAC Proc. Vol. 2013, 46, 23–26. [Google Scholar] [CrossRef]
  38. Chen, L.; Fang, J. A Hybrid Prediction Method for Bridging GPS Outages in High-Precision POS Application. IEEE Trans. Instrum. Meas. 2014, 63, 1656–1665. [Google Scholar] [CrossRef]
  39. Lee, J.K.; Jekeli, C. Neural Network Aided Adaptive Filtering and Smoothing for an Integrated INS/GPS Unexploded Ordnance Geolocation System. J. Navig. 2010, 63, 251–267. [Google Scholar] [CrossRef]
  40. Bi, S.; Ma, L.; Shen, T.; Xu, Y.; Li, F. Neural network assisted Kalman filter for INS/UWB integrated seamless quadrotor localization. PeerJ Comput. Sci. 2021, 7, e630. [Google Scholar] [CrossRef] [PubMed]
  41. Zhao, S.; Zhou, Y.; Huang, T. A Novel Method for AI-Assisted INS/GNSS Navigation System Based on CNN-GRU and CKF during GNSS Outage. Remote Sens. 2022, 14, 4494. [Google Scholar] [CrossRef]
  42. Xie, D.; Jiang, J.; Wu, J.; Yan, P.; Tang, Y.; Zhang, C.; Liu, J. A Robust GNSS/PDR Integration Scheme with GRU-Based Zero-Velocity Detection for Mass-Pedestrians. Remote Sens. 2022, 14, 300. [Google Scholar] [CrossRef]
  43. Jiang, Y.; Nong, X. A Radar Filtering Model for Aerial Surveillance Base on Kalman Filter and Neural Network. In Proceedings of the IEEE International Conference on Software Engineering and Service Sciences, ICSESS, Beijing, China, 16–18 October 2020; pp. 57–60. [Google Scholar] [CrossRef]
  44. Miao, Z.; Shi, H.; Zhang, Y.; Xu, F. Neural network-aided variational Bayesian adaptive cubature Kalman filtering for nonlinear state estimation. Meas. Sci. Technol. 2017, 28, 10500. [Google Scholar] [CrossRef]
  45. Li, C.; Waslander, S. Towards End-to-end Learning of Visual Inertial Odometry with an EKF. In Proceedings of the 2020 17th Conference on Computer and Robot Vision, CRV 2020, Ottawa, ON, Canada, 13–15 May 2020; pp. 190–197. [Google Scholar] [CrossRef]
  46. Tang, Y.; Jiang, J.; Liu, J.; Yan, P.; Tao, Y.; Liu, J. A GRU and AKF-Based Hybrid Algorithm for Improving INS/GNSS Navigation Accuracy during GNSS Outage. Remote Sens. 2022, 14, 752. [Google Scholar] [CrossRef]
  47. Hosseinyalamdary, S. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study. Sensors 2018, 18, 1316. [Google Scholar] [CrossRef]
  48. Song, L.; Duan, Z.; He, B.; Li, Z. Application of Federal Kalman Filter with Neural Networks in the Velocity and Attitude Matching of Transfer Alignment. Complexity 2018, 2018, 3039061. [Google Scholar] [CrossRef]
  49. Li, D.; Wu, Y.; Zhao, J. Novel Hybrid Algorithm of Improved CKF and GRU for GPS/INS. IEEE Access 2020, 8, 202836–202847. [Google Scholar] [CrossRef]
  50. Gao, X.; Luo, H.; Ning, B.; Zhao, F.; Bao, L.; Gong, Y.; Xiao, Y.; Jiang, J. RL-AKF: An Adaptive Kalman Filter Navigation Algorithm Based on Reinforcement Learning for Ground Vehicles. Remote Sens. 2020, 12, 1704. [Google Scholar] [CrossRef]
  51. Aslan, M.F.; Durdu, A.; Sabanci, K. Visual-Inertial Image-Odometry Network (VIIONet): A Gaussian process regression-based deep architecture proposal for UAV pose estimation. Measurement 2022, 194, 111030. [Google Scholar] [CrossRef]
  52. Chen, C.; Lu, C.X.; Wang, B.; Trigoni, N.; Markham, A. DynaNet: Neural Kalman Dynamical Model for Motion Estimation and Prediction. IEEE Trans. Neural Networks Learn. Syst. 2021, 32, 5479–5491. [Google Scholar] [CrossRef] [PubMed]
  53. Yusefi, A.; Durdu, A.; Aslan, M.F.; Sungur, C. LSTM and Filter Based Comparison Analysis for Indoor Global Localization in UAVs. IEEE Access 2021, 9, 10054–10069. [Google Scholar] [CrossRef]
  54. Zuo, S.; Shen, K.; Zuo, J. Robust Visual-Inertial Odometry Based on Deep Learning and Extended Kalman Filter. In Proceedings of the 2021 China Automation Congress, CAC 2021, Beijing, China, 22–24 October 2021; pp. 1173–1178. [Google Scholar] [CrossRef]
  55. Luo, Y.; Hu, J.; Guo, C. Right Invariant SE2(3)EKF for Relative Navigation in Learning-based Visual Inertial Odometry. In Proceedings of the 2022 5th International Symposium on Autonomous Systems, ISAS 2022, Hangzhou, China, 8–10 April 2022. [Google Scholar] [CrossRef]
  56. Bhatti, U.I.; Ochieng, W.Y. Failure Modes and Models for Integrated GPS/INS Systems. J. Navig. 2007, 60, 327–348. [Google Scholar] [CrossRef]
  57. Du, Y.; Wang, J.; Rizos, C.; El-Mowafy, A. Vulnerabilities and integrity of precise point positioning for intelligent transport systems: Overview and analysis. Satell. Navig. 2021, 2, 3. [Google Scholar] [CrossRef]
  58. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
  59. Gao, B.; Hu, G.; Zhong, Y.; Zhu, X. Cubature Kalman Filter With Both Adaptability and Robustness for Tightly-Coupled GNSS/INS Integration. IEEE Sensors J. 2021, 21, 14997–15011. [Google Scholar] [CrossRef]
  60. Gao, B.; Gao, S.; Zhong, Y.; Hu, G.; Gu, C. Interacting multiple model estimation-based adaptive robust unscented Kalman filter. Int. J. Control Autom. Syst. 2017, 15, 2013–2025. [Google Scholar] [CrossRef]
  61. Gao, G.; Zhong, Y.; Gao, S.; Gao, B. Double-Channel Sequential Probability Ratio Test for Failure Detection in Multisensor Integrated Systems. IEEE Trans. Instrum. Meas. 2021, 70, 1–14. [Google Scholar] [CrossRef]
  62. Gao, G.; Gao, B.; Gao, S.; Hu, G.; Zhong, Y. A Hypothesis Test-Constrained Robust Kalman Filter for INS/GNSS Integration With Abnormal Measurement. IEEE Trans. Veh. Technol. 2022, 72, 1662–1673. [Google Scholar] [CrossRef]
  63. Gao, B.; Li, W.; Hu, G.; Zhong, Y.; Zhu, X. Mahalanobis distance-based fading cubature Kalman filter with augmented mechanism for hypersonic vehicle INS/CNS autonomous integration. Chin. J. Aeronaut. 2022, 35, 114–128. [Google Scholar] [CrossRef]
  64. Gao, B.; Hu, G.; Zhong, Y.; Zhu, X. Cubature rule-based distributed optimal fusion with identification and prediction of kinematic model error for integrated UAV navigation. Aerosp. Sci. Technol. 2021, 109, 106447. [Google Scholar] [CrossRef]
  65. Gao, B.; Hu, G.; Zhu, X.; Zhong, Y. A Robust Cubature Kalman Filter with Abnormal Observations Identification Using the Mahalanobis Distance Criterion for Vehicular INS/GNSS Integration. Sensors 2019, 19, 5149. [Google Scholar] [CrossRef] [PubMed]
  66. Hu, G.; Gao, B.; Zhong, Y.; Ni, L.; Gu, C. Robust Unscented Kalman Filtering With Measurement Error Detection for Tightly Coupled INS/GNSS Integration in Hypersonic Vehicle Navigation. IEEE Access 2019, 7, 151409–151421. [Google Scholar] [CrossRef]
  67. Li, Z.; Zhang, Y. Constrained ESKF for UAV Positioning in Indoor Corridor Environment Based on IMU and WiFi. Sensors 2022, 22, 391. [Google Scholar] [CrossRef]
  68. Geragersian, P.; Petrunin, I.; Guo, W.; Grech, R. An INS/GNSS fusion architecture in GNSS denied environments using gated recurrent units. In Proceedings of the AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022, San Diego, CA, USA, 3–7 January 2022. [Google Scholar] [CrossRef]
  69. Kourabbaslou, S.S.; Zhang, A.; Atia, M.M. A Novel Design Framework for Tightly Coupled IMU/GNSS Sensor Fusion Using Inverse-Kinematics, Symbolic Engines, and Genetic Algorithms. IEEE Sens. J. 2019, 19, 11424–11436. [Google Scholar] [CrossRef]
  70. Ramirez-Atencia, C.; Camacho, D. Extending QGroundControl for Automated Mission Planning of UAVs. Sensors 2018, 18, 2339. [Google Scholar] [CrossRef]
  71. Hernandez, G.E.V.; Petrunin, I.; Shin, H.-S.; Gilmour, J. Robust multi-sensor navigation in GNSS degraded environments. In Proceedings of the AIAA SCITECH 2023 Forum, National Harbor, MD, USA, 23–27 January 2023. [Google Scholar] [CrossRef]
