Article

Noise-Adaptive State Estimators with Change-Point Detection

College of Automation, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4585; https://doi.org/10.3390/s24144585
Submission received: 5 June 2024 / Revised: 10 July 2024 / Accepted: 14 July 2024 / Published: 15 July 2024
(This article belongs to the Section Sensing and Imaging)

Abstract:
Aiming at tracking sharply maneuvering targets, this paper develops novel variational adaptive state estimators for joint target state and process noise parameter estimation for a class of linear state-space models with abruptly changing parameters. By combining variational inference with change-point detection in an online Bayesian fashion, two adaptive estimators—a change-point-based adaptive Kalman filter (CPAKF) and a change-point-based adaptive Kalman smoother (CPAKS)—are proposed in a recursive detection and estimation process. In each iteration, the run-length probability of the current maneuver mode is first calculated, and then the joint posterior of the target state and process noise parameter conditioned on the run length is approximated by variational inference. Compared with existing variational noise-adaptive Kalman filters, the proposed methods are robust to initial iterative value settings, improving their capability of tracking sharply maneuvering targets. Meanwhile, the change-point detection divides the non-stationary time sequence into several stationary segments, allowing for an adaptive sliding length in the CPAKS method. The tracking performance of the proposed methods is investigated using both synthetic and real-world datasets of maneuvering targets.

1. Introduction

State estimation of dynamical systems from noisy observations in real time is one of the most fundamental tasks in localization, tracking, and navigation [1]. The Kalman filter (KF) is an optimal state estimator for a linear Gaussian state-space model requiring known noise statistics [2]. However, in many practical situations, the statistical noise covariances are partially unknown and may abruptly change [3]. The underlying motivating application is maneuvering-target tracking [4], wherein some real-world targets, including ground vehicles, aircraft, and ballistic missiles, are capable of making very sharp, evasive maneuvers to escape from the radar’s tracking and locking. The parameters, such as the process noise covariance accounting for unexpected maneuverability, are unknown and abruptly change between different motion mode segments.
The classical solution to state estimation problems with uncertain parameters is adaptive estimation [5], which estimates the model or noise parameters simultaneously with the dynamic state. Noise-adaptive estimators can be broadly divided into four categories: Bayesian inference, maximum likelihood estimation, covariance matching, and correlation methods [3]. Bayesian inference is the most general approach, and the other methods can often be interpreted as approximations of it [6]. In the Bayesian approach, noise-adaptive estimators must compute the intractable joint posterior probability density function (PDF) of the dynamic state and the unknown parameters. Three primary families of methods approximate this posterior: multiple model (MM) methods, sequential Monte Carlo (SMC) methods, and variational Bayesian (VB) methods.
MM methods [7] regard the underlying dynamics as a system switching among a finite number of models representing different noise levels or system structures. Switching systems involve both continuous noise uncertainties and discrete model uncertainties. MM methods carry out state estimation and model selection recursively and can be divided into static and dynamic MM estimators depending on whether model switching is allowed. By assuming that the model switching process is Markovian, the well-known interacting MM (IMM) estimator [8,9] achieves a trade-off between computation and accuracy and has proven promising for tracking highly maneuvering targets [1]. However, the applicability of IMM estimators depends on the completeness of the model set, and they suffer from the curse of dimensionality; some extended methods can tackle these issues [10].
SMC methods [11] approximate the intractable joint PDF by propagating a set of random particles, drawing from the tractable proposal distribution. For adaptive state estimation with static parameters, one solution is based on artificial parameter evolution, which perturbs particles by adding artificial noise to avoid over-diffuse approximations [12]. An alternative approach is based on particle learning [13], which marginalizes the static parameters out of the posterior distribution and can be implemented in an online fashion by constructing sufficient statistics [14]. Nemeth et al. [15] extended the work of [12] to time-varying parameters by combining SMC approaches with change-point models. Arnold [16] presented two SMC-based adaptive estimators with time-varying parameters using the concept of artificial parameter evolution. SMC methods provide a flexible and accurate Bayesian inference for adaptive estimation but are limited to small-scale state estimation problems due to their demand for massive computational power. Meanwhile, the performance of SMC methods depends on the choice of the proposal distribution. Improper design of the proposal distribution often leads to poor approximation, especially for high-dimensional estimation problems.
VB methods [17] approximate the intractable joint PDF through optimization. VB-based adaptive state estimators have received significant attention due to their computational efficiency compared to SMC methods. Most existing VB methods address adaptive state estimation for linear state-space models with unknown measurement noise covariance (MNC). By modeling the conjugate prior distribution of the state and MNC as a Gaussian inverse-Wishart distribution, the joint posterior PDFs are approximated in factorized free form (the mean-field approximation) and updated via the coordinate ascent method. Särkkä and Nummenmaa [6] presented the first VB-based adaptive Kalman filter with unknown MNC. The extension to unknown process noise covariance (PNC) is not straightforward because the joint prior of the state and PNC is non-conjugate. Huang et al. [18] extended the work of [6] to both unknown MNC and PNC by modeling the state-predicted covariance as an inverse-Wishart distribution. Ma et al. [19] constructed a conjugate prior distribution of the state and PNC by introducing auxiliary latent variables, and also proposed VB-based joint state estimation and model identification for multiple-model systems. Ardeshiri et al. proposed an adaptive smoother with unknown PNC and MNC. Xu et al. [20] proposed adaptive fixed-lag smoothing with unknown MNC. Zhu et al. [21] proposed an outlier-robust variational Kalman filter by leveraging Student-t noise modeling. Yu and Meng [22] proposed robust Kalman filters with multiplicative noise modeling. Xia et al. [23] proposed an adaptive variational Kalman filter with unknown MNC to solve the calibration problem. Zhu et al. [24,25] proposed variational Kalman filters with unknown, time-varying, and non-stationary heavy-tailed process and measurement noises. Huang et al. [26] proposed an adaptive Kalman filter with a Gaussian inverse-Wishart mixture distribution for unknown MNC.
For nonlinear state-space models, it is intractable to directly optimize the objective with the coordinate ascent method. The basic idea is to approximate the intractable expectation of nonlinear expressions, such as adaptive Metropolis sampling [27] and the cubature integration rule [28]. An alternative approach is to employ stochastic gradient methods [29]. Lan et al. [30] proposed a nonlinear adaptive Kalman filter with unknown PNC based on stochastic search VB, achieving high estimation accuracy but suffering from slow iteration convergence.
As stated in [26], existing VB-based adaptive Kalman filters (AKF) are quite sensitive to the initial value setting of PNC. This is mainly because the Kullback–Leibler (KL) divergence is generally a nonconvex objective function, and the coordinate ascent method only guarantees convergence to a local optimum, which can be sensitive to initialization. In the domain of maneuvering-target tracking, the dimension of latent variables (target state, PNC, MNC) is generally larger than the dimension of measurements, making it easy for variational iterations to converge to local minima. As a result, initialization issues hinder the application of existing adaptive Kalman filters to sharply maneuvering-target-tracking problems, where PNC may change abruptly and accurate prior information is unavailable. In order to deal with tracking sharply maneuvering targets, an adaptive initialization strategy should be addressed.
Motivated by the challenges of tracking sharply maneuvering targets, this paper develops novel variational adaptive state estimators for joint target state and process noise parameter estimation for a class of linear state-space models with abruptly changing parameters. By combining variational inference with change-point detection in an online Bayesian fashion, two adaptive estimators—a change-point-based adaptive Kalman filter (CPAKF) and a change-point-based adaptive Kalman smoother (CPAKS)—are proposed in a recursive detection and estimation process. In each step, the run-length probability of the current maneuver mode is first calculated, and then the joint posterior of the target state and process noise parameter conditioned on the run length is approximated by variational inference. Compared with existing variational noise-adaptive Kalman filters, the proposed methods are robust to initial iterative value settings, improving their ability to track sharply maneuvering targets. Meanwhile, the change-point detection divides the non-stationary time sequence into several stationary segments, allowing for an adaptive sliding length in the CPAKS method. Finally, the superior tracking performance of the proposed methods is verified using both synthetic and real-world datasets of maneuvering-target tracking.
The remainder of this paper is organized as follows. Section 2 formulates the problem of adaptive state estimation with unknown process noise covariance. Section 3 presents the proposed VB-based adaptive Kalman filter with change-point detection, and Section 4 extends it to adaptive Kalman smoothing. Section 5 and Section 6 provide performance comparisons on simulated and real data, respectively. Finally, Section 7 concludes this paper.

2. Problem Description

Maneuvering-target tracking can be characterized by the following discrete-time state-space model:
$x_k = F_k x_{k-1} + v_k$ (1)
$y_k = H_k x_k + w_k$ (2)
where $x_k \in \mathbb{R}^{n_x}$ is the target kinematic state of dimension $n_x$ and $y_k \in \mathbb{R}^{n_y}$ is the sensor measurement of dimension $n_y$. The state transition matrix $F_k$ and measurement matrix $H_k$ are assumed known. The process noise vector $v_k \in \mathbb{R}^{n_x}$ and measurement noise vector $w_k \in \mathbb{R}^{n_y}$ are zero-mean Gaussian with covariance matrices $Q_k$ and $R_k$, respectively. Assume that the initial state satisfies $x_0 \sim \mathcal{N}(\hat{x}_{0|0}, P_{0|0})$ and that $x_0$, $v_k$, and $w_k$ are mutually independent.
In modern target tracking, non-cooperative targets are capable of making very sharp, evasive maneuvers to avoid being tracked and locked on. Assume that a hostile aircraft initially travels at a constant velocity (CV) of 200 m/s for 33 s, then enters a constant turn (CT) of 10 deg/s for 33 s, and finally performs constant acceleration (CA), accelerating in a straight line at $3\,\mathrm{m/s^2}$. As shown in the upper panel of Figure 1, when the nominal CV model is used to represent the target motion, the magnitude of the modeling error in the x-axis caused by target maneuvers varies over time and changes significantly when the motion mode switches. As shown in the lower panel of Figure 1, the change points divide the target flight into several non-overlapping motion segments; the run length refers to the length of flight time since the last change point [31], and the model parameters within each run-length segment are assumed to be constant or slowly varying. The objective of this paper is to perform adaptive state estimation by continuously adjusting the process noise level.
Definition 1 ([31]). Define the discrete random variable $r_k \in \{1, \dots, k\}$ as the run length at time $k$. At each time $k$, the run length $r_k$ has only two outcomes: it either continues to grow, $r_k = r_{k-1} + 1$, if no change point occurs, or drops to $r_k = 1$ when a change point occurs.
Remark 1.
The run-length variable $r_k$ describes changes in the kinematic model or process noise level. Taking Figure 1 as an example, during the CV motion segment from 0 to 39 s, the run length continues to grow ($r_k = r_{k-1} + 1$) from the initial value $r_1 = 1$. At time $k = 40$, the CV motion becomes CT motion, so the run-length variable resets to $r_{40} = 1$ and then begins to grow again ($r_k = r_{k-1} + 1$). Similarly, the run-length variable resets to $r_{70} = 1$ and begins to grow again during the CA motion segment.
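As an illustrative sketch (not part of the paper's algorithm), the run-length bookkeeping of Definition 1 and Remark 1 can be simulated directly; the change-point times below are hypothetical values chosen to mirror Figure 1's CV-to-CT and CT-to-CA switches:

```python
def update_run_length(r_prev, change_point):
    """Run length grows by one unless a change point resets it to 1."""
    return 1 if change_point else r_prev + 1

change_points = {40, 70}      # assumed mode-switch times (CV->CT, CT->CA)
history = [1]                 # r_1 = 1
for k in range(2, 81):
    history.append(update_run_length(history[-1], k in change_points))
```

After the loop, `history[k-1]` holds $r_k$: the run length climbs to 39 at $k = 39$, resets to 1 at each assumed switch, and grows again afterward.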
For noncooperative maneuvering-target tracking, the process noise covariance $Q_k$, which accounts for motion model uncertainty, is commonly unknown and time-varying, and it should be estimated along with the target state $x_k$. The VB-based AKF approximates the intractable joint posterior PDF $p(x_k, Q_k \mid y_{1:k})$ through optimization, where the posterior of each latent variable ($x_k$ and $Q_k$) is updated iteratively via the coordinate ascent method. Several variational Bayesian adaptive Kalman filters (VBAKF) have been proposed in [18,26], where the unknown process noise covariance is assumed to vary slowly. However, these methods are quite sensitive to initial values due to the local optimality of VB, limiting their ability to track sharply maneuvering targets [26].
To tackle the iterative initialization problem, we introduce the run length $r_k$ as an auxiliary latent variable to enhance the performance of the AKF. The run length $r_k$ divides the sequence of model parameters $Q_k$ into non-overlapping segments. Within each segment, the parameters $Q_k$ are dependent, whereas across different segments they are independent of each other. Therefore, the initial value of $Q_k$ at each iteration is determined by the specific run length $r_k$.
The objective of the AKF in the presence of unknown $Q_k$ and $r_k$ is to compute the posterior PDF $p(x_k, Q_k, r_k \mid y_{1:k})$. Formally, recursive Bayesian optimal filtering consists of the following prediction-update cycle, starting from the prior PDF $p(x_0, Q_0, r_0)$:
  • Prediction: Following the Chapman-Kolmogorov equation, the predictive PDF of the joint latent variables is
    $p(x_k, Q_k, r_k \mid y_{1:k-1}) = \sum_{r_{k-1}} \iint p(x_k \mid x_{k-1}, Q_k)\, p(Q_k \mid Q_{k-1}, r_k)\, p(r_k \mid r_{k-1})\, p(x_{k-1}, Q_{k-1}, r_{k-1} \mid y_{1:k-1})\, \mathrm{d}x_{k-1}\, \mathrm{d}Q_{k-1}$ (3)
  • Update: Given the measurement $y_k$, the posterior PDF is updated using Bayes' rule:
    $p(x_k, Q_k, r_k \mid y_{1:k}) \propto p(y_k \mid x_k, Q_k, r_k)\, p(x_k, Q_k, r_k \mid y_{1:k-1})$ (4)
Due to the intractable integrations and summations involved in the prediction-update cycle, the general Bayesian filtering solution is not analytically tractable. We solve the above recursive equations using VB, where the intractable posterior PDF $p(x_k, Q_k, r_k \mid y_{1:k})$ is approximated by a variational distribution $q(x_k, Q_k, r_k)$ obtained by minimizing the Kullback-Leibler (KL) divergence between them; if the KL divergence were driven to zero, the variational distribution would exactly match the posterior.

3. Adaptive Kalman Filter with Change-Point Detection

Recall that recursive Bayesian optimal filtering consists of initialization and the recursive prediction-update cycle. The initialization step models the prior distribution of the latent variables at the previous time step. Given the measurements $y_1, \dots, y_{k-1}$, the joint conditional PDF of $x_{k-1}$, $Q_{k-1}$, and $r_{k-1}$ is assumed to factorize as
$p(x_{k-1}, Q_{k-1}, r_{k-1} \mid y_{1:k-1}) = \mathcal{N}(x_{k-1} \mid \hat{x}^r_{k-1|k-1}, P^r_{k-1|k-1})\, \mathrm{IW}(Q_{k-1} \mid \hat{u}^r_{k-1|k-1}, U^r_{k-1|k-1})\, p(r_{k-1} \mid y_{1:k-1})$ (5)
where the priors of $x_{k-1}$ and $Q_{k-1}$ are, respectively, a Gaussian distribution and an inverse-Wishart (IW) distribution with parameters associated with the run length $r_{k-1}$. The notations $\mathcal{N}(z \mid d, D)$ and $\mathrm{IW}(Z \mid a, A)$ denote the Gaussian and IW distributions, with PDFs given by
$\mathcal{N}(z \mid d, D) = \dfrac{\exp\left[-0.5\,(z-d)^{\top} D^{-1} (z-d)\right]}{\sqrt{(2\pi)^n |D|}}$ (6)
$\mathrm{IW}(Z \mid a, A) = \dfrac{|A|^{0.5a}}{2^{0.5na}\,\Gamma_n(0.5a)}\, |Z|^{-0.5(a+n+1)} \exp\left[-0.5\,\mathrm{Tr}(A Z^{-1})\right]$ (7)
where $z \in \mathbb{R}^{n \times 1}$ is a Gaussian random variable with mean $d$ and covariance $D$, and $Z \in \mathbb{R}^{n \times n}$ is an IW random variable with degrees of freedom $a$ and positive-definite scale matrix $A$.

3.1. Time Prediction Step

According to the prediction equation in (3), the joint predictive PDF of the latent variables can be factorized as
$p(x_k, Q_k, r_k \mid y_{1:k-1}) = p(x_k \mid Q_k, r_k, y_{1:k-1})\, p(Q_k \mid r_k, y_{1:k-1})\, p(r_k \mid y_{1:k-1})$ (8)
The state prediction is Gaussian,
$p(x_k \mid Q_k, r_k, y_{1:k-1}) = \mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1})$ (9)
with mean $\hat{x}^r_{k|k-1}$ and covariance $P^r_{k|k-1}$ given by
$\hat{x}^r_{k|k-1} = F_k \hat{x}_{k-1|k-1}, \quad P^r_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + \hat{Q}^r_k$ (10)
where $\hat{Q}^r_k$ is the expectation of $Q_k$, defined by
$\hat{Q}^r_k \triangleq \mathbb{E}_{p(Q_k \mid r_k, y_{1:k-1})}[Q_k] = \dfrac{U^r_{k|k-1}}{\hat{u}^r_{k|k-1} - n_x - 1}$ (11)
The PNC prediction is inverse-Wishart,
$p(Q_k \mid r_k, y_{1:k-1}) = \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k-1}, U^r_{k|k-1})$ (12)
with the parameters conditional on $r_k$ given by
$\hat{u}^r_{k|k-1} = \begin{cases} \beta(\hat{u}^r_{k-1|k-1} - n_x - 1) + n_x + 1, & r_k = r_{k-1} + 1 \\ \beta(u_0 - n_x - 1) + n_x + 1, & r_k = 1 \end{cases}$ (13)
and
$U^r_{k|k-1} = \begin{cases} \beta U^r_{k-1|k-1}, & r_k = r_{k-1} + 1 \\ \beta U_0, & r_k = 1 \end{cases}$ (14)
where $\beta \in (0, 1]$ denotes the forgetting factor of $Q_k$. The predefined parameters $u_0$ and $U_0$ are the initial values of the PNC parameters used when a change point occurs at time $k$.
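As a minimal sketch (with illustrative parameter values, not from the paper), the forgetting-factor prediction above can be checked numerically: discounting $(\hat{u}, U)$ by $\beta$ leaves the implied mean $\mathbb{E}[Q] = U/(\hat{u} - n_x - 1)$ unchanged while reducing the degrees of freedom, i.e., increasing uncertainty:

```python
import numpy as np

def predict_iw(u, U, beta, n_x):
    """Forgetting-factor prediction of the inverse-Wishart parameters."""
    u_pred = beta * (u - n_x - 1) + n_x + 1
    U_pred = beta * U
    return u_pred, U_pred

def iw_mean(u, U, n_x):
    """Mean of IW(Q | u, U): U / (u - n_x - 1), valid for u > n_x + 1."""
    return U / (u - n_x - 1)

n_x = 2
u0, U0 = 10.0, 14.0 * np.eye(n_x)          # assumed prior hyperparameters
u1, U1 = predict_iw(u0, U0, beta=0.95, n_x=n_x)
```

Both numerator and denominator of the mean are scaled by $\beta$, so the mean is invariant while the distribution is flattened.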
Remark 2.
Note that a detailed dynamical model of the PNC, denoted $p(Q_k \mid Q_{k-1})$, is unavailable. Most VB-based AKF methods employ heuristic dynamics that simply propagate the previous approximate posterior. This heuristic is stable during VB iteration when $Q_k$ is stationary or slowly varying; however, it is not applicable to the non-stationary or fast-varying $Q_k$ encountered in sharply maneuvering-target tracking.
The predictive PDF of the run length $r_k$ can be factorized as
$p(r_k \mid y_{1:k-1}) = \sum_{r_{k-1}} p(r_k \mid r_{k-1})\, p(r_{k-1} \mid y_{1:k-1})$ (15)
where $p(r_k \mid r_{k-1})$ is the transition probability of the run length $r_k$. According to the definition of the run length, the transition has only two outcomes: the run length either continues to grow or a change point occurs. We have
$p(r_k \mid r_{k-1}) = \begin{cases} 1 - G(r_{k-1} + 1), & r_k = r_{k-1} + 1 \\ G(r_{k-1} + 1), & r_k = 1 \end{cases}$ (16)
where $G(\cdot)$ denotes the hazard function representing the prior distribution of the change point. For flexibility, more complex hazard models can be utilized; for simplicity, we choose the geometric distribution, whose hazard function is constant [31]:
$G(r_k) = \dfrac{1}{\lambda}$ (17)
where $\lambda$ is the timescale parameter of the change point.
Substituting (16) and (17) into (15) yields
$p(r_k \mid y_{1:k-1}) = \begin{cases} \dfrac{\lambda - 1}{\lambda}\, p(r_{k-1} \mid y_{1:k-1}), & r_k = r_{k-1} + 1 \\ \dfrac{1}{\lambda} \displaystyle\sum_{r_{k-1}} p(r_{k-1} \mid y_{1:k-1}), & r_k = 1 \end{cases}$ (18)
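The run-length prediction above propagates a whole probability vector in one step. A minimal sketch (with an assumed two-point starting distribution) is:

```python
import numpy as np

def predict_run_length(p, lam):
    """One-step run-length prediction; p[i] stores P(r = i+1)."""
    h = 1.0 / lam                        # constant geometric hazard (17)
    p_pred = np.zeros(len(p) + 1)
    p_pred[0] = h * p.sum()              # change point: r_k = 1
    p_pred[1:] = (1.0 - h) * p           # growth: r_k = r_{k-1} + 1
    return p_pred

p_prev = np.array([0.2, 0.8])            # assumed P(r=1), P(r=2) at time k-1
p_pred = predict_run_length(p_prev, lam=10.0)
```

Since every hypothesis either grows or resets, the predicted vector still sums to one; mass $1/\lambda$ is funneled into the change-point hypothesis $r_k = 1$.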

3.2. Posterior Update Step

In the posterior update step, the joint posterior PDF of the latent variables $x_k$, $Q_k$, and $r_k$ can be factorized as
$p(x_k, Q_k, r_k \mid y_{1:k}) = p(r_k \mid y_{1:k})\, p(x_k, Q_k \mid r_k, y_{1:k})$ (19)
The posterior PDF of the run length $r_k$ follows from Bayes' theorem:
$p(r_k \mid y_{1:k}) = \dfrac{p(r_k \mid y_{1:k-1})\, p(y_k \mid r_k, y_{1:k-1})\, p(y_{1:k-1})}{p(y_{1:k})} \propto p(r_k \mid y_{1:k-1})\, \Lambda^r_k$ (20)
where $\Lambda^r_k \triangleq p(y_k \mid r_k, y_{1:k-1})$ is given by
$\Lambda^r_k = \int p(y_k \mid x_k, R_k)\, p(x_k \mid r_k, Q_k, y_{1:k-1})\, \mathrm{d}x_k = \mathcal{N}(y_k \mid H_k \hat{x}^r_{k|k-1}, H_k P^r_{k|k-1} H_k^{\top} + R_k)$ (21)
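The predictive likelihood above is a Gaussian evaluated at the innovation. A minimal sketch (scalar toy numbers, not from the paper) is:

```python
import numpy as np

def predictive_likelihood(y, x_pred, P_pred, H, R):
    """Gaussian predictive likelihood for one run-length hypothesis."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    nu = y - H @ x_pred                      # innovation
    n_y = y.shape[0]
    norm = np.sqrt((2 * np.pi) ** n_y * np.linalg.det(S))
    return float(np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / norm)

lik = predictive_likelihood(
    y=np.array([1.0]), x_pred=np.array([1.0]),
    P_pred=np.array([[1.0]]), H=np.array([[1.0]]), R=np.array([[1.0]]))
```

With a zero innovation and $S = 2$, the value is simply the Gaussian normalization constant $1/\sqrt{4\pi}$.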
Conditioned on the run length $r_k$, the state $x_k$ and PNC $Q_k$ are coupled through the likelihood $p(y_k \mid x_k, Q_k)$, so the exact conditional posterior $p(x_k, Q_k \mid r_k, y_{1:k})$ has no tractable analytic form and one must resort to approximate Bayesian inference. The standard VB method is employed to approximate the exact conditional posterior with a tractable variational distribution $q(x_k, Q_k \mid r_k)$, which takes the factorized free form
$q(x_k, Q_k \mid r_k) \triangleq q(x_k \mid r_k)\, q(Q_k \mid r_k) = \mathcal{N}(x_k \mid \hat{x}^r_{k|k}, P^r_{k|k})\, \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k}, U^r_{k|k})$ (22)
The variational hyperparameters $\lambda_k \triangleq \{\lambda_x^r, \lambda_Q^r\}$, with $\lambda_x^r = \{\hat{x}^r_{k|k}, P^r_{k|k}\}$ and $\lambda_Q^r = \{\hat{u}^r_{k|k}, U^r_{k|k}\}$, are obtained by minimizing the KL divergence between the variational distribution and the exact posterior. Minimizing the KL divergence is equivalent to maximizing the evidence lower bound (ELBO) $\mathcal{B}(\lambda_k)$, defined as
$\mathcal{B}(\lambda_k) = \mathbb{E}_{q(x_k, Q_k \mid r_k)}[\log p(x_k, Q_k, y_k \mid r_k, y_{1:k-1}) - \log q(x_k, Q_k \mid r_k)]$ (23)
where the joint PDF $J_k \triangleq p(x_k, Q_k, y_k \mid r_k, y_{1:k-1})$ can be formulated as
$J_k = p(y_k \mid x_k, R_k)\, p(x_k \mid r_k, Q_k, y_{1:k-1})\, p(Q_k \mid r_k, y_{1:k-1}) = \mathcal{N}(y_k \mid H_k x_k, R_k)\, \mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1})\, \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k-1}, U^r_{k|k-1})$ (24)
Substituting (22) and (24) into (23), the ELBO $\mathcal{B}(\lambda_k)$ can be rewritten as
$\mathcal{B}(\lambda_k) = \mathbb{E}_{q(x_k \mid r_k)}[\log \mathcal{N}(y_k \mid H_k x_k, R_k)] + \mathbb{E}_{q(x_k, Q_k \mid r_k)}[\log \mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1})] - \mathbb{E}_{q(x_k \mid r_k)}[\log \mathcal{N}(x_k \mid \hat{x}^r_{k|k}, P^r_{k|k})] + \mathbb{E}_{q(Q_k \mid r_k)}[\log \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k-1}, U^r_{k|k-1})] - \mathbb{E}_{q(Q_k \mid r_k)}[\log \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k}, U^r_{k|k})]$ (25)
Then, the optimal hyperparameters of $x_k$ and $Q_k$ are obtained as
$\{(\lambda_x^r)^*, (\lambda_Q^r)^*\} = \arg\max_{\lambda_x^r, \lambda_Q^r} \mathcal{B}(\lambda_k)$ (26)
Remark 3.
By maximizing the ELBO in (26), the optimal hyperparameters of $x_k$ and $Q_k$ are derived by setting the gradients of $\mathcal{B}(\lambda_k)$ with respect to the corresponding hyperparameters to zero [32]. The expectation parameters of the posterior distributions are used in the following derivation: for $q(x_k \mid r_k)$ and $q(Q_k \mid r_k)$, they are $\{\mathbb{E}[x_k], \mathbb{E}[x_k x_k^{\top}]\}$ and $\{\mathbb{E}[Q_k^{-1}], \mathbb{E}[\log|Q_k|]\}$, respectively. For simplicity, the expectation $\mathbb{E}_{q(\cdot)}$ is written as $\mathbb{E}$.
Rewrite the ELBO $\mathcal{B}(\lambda_k)$ as a function of $x_k$, omitting the terms independent of $x_k$; the result is denoted $\mathcal{B}_x(\lambda_x^r)$. One has
$\mathcal{B}_x(\lambda_x^r) = -\tfrac{1}{2}\,\mathbb{E}\,\mathrm{tr}[R_k^{-1}(y_k - H_k x_k)(\cdot)^{\top}] - \tfrac{1}{2}\,\mathbb{E}\,\mathrm{tr}[(P^r_{k|k-1})^{-1}(x_k - \hat{x}^r_{k|k-1})(\cdot)^{\top}] + \tfrac{1}{2}\,\mathbb{E}\,\mathrm{tr}[(P^r_{k|k})^{-1}(x_k - \hat{x}^r_{k|k})(\cdot)^{\top}]$ (27)
where $\mathrm{tr}(A)$ denotes the trace of matrix $A$ and $A(\cdot)^{\top} \triangleq A A^{\top}$.
Next, express $\mathcal{B}_x(\lambda_x^r)$ in terms of the expectation parameters $\{\mathbb{E}[x_k], \mathbb{E}[x_k x_k^{\top}]\}$; the remaining terms can be ignored since their gradients with respect to the expectation parameters are zero. One has
$\mathcal{B}_x(\lambda_x^r) = \mathrm{Tr}\{[H_k^{\top} R_k^{-1} y_k + \mathbb{E}[(P^r_{k|k-1})^{-1}]\,\hat{x}^r_{k|k-1} - (P^r_{k|k})^{-1}\hat{x}^r_{k|k}]\,\mathbb{E}[x_k]^{\top}\} - \tfrac{1}{2}\,\mathrm{Tr}\{[H_k^{\top} R_k^{-1} H_k + \mathbb{E}[(P^r_{k|k-1})^{-1}] - (P^r_{k|k})^{-1}]\,\mathbb{E}[x_k x_k^{\top}]\}$ (28)
Setting the gradients with respect to the expectation parameters $\{\mathbb{E}[x_k], \mathbb{E}[x_k x_k^{\top}]\}$ equal to zero yields the optimal hyperparameters:
$(P^r_{k|k})^{-1} = \mathbb{E}[(P^r_{k|k-1})^{-1}] + H_k^{\top} R_k^{-1} H_k$
$(P^r_{k|k})^{-1}\hat{x}^r_{k|k} = \mathbb{E}[(P^r_{k|k-1})^{-1}]\,\hat{x}^r_{k|k-1} + H_k^{\top} R_k^{-1} y_k$ (29)
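The update (29) is the familiar information-form Kalman update. A minimal sketch (scalar toy numbers) is:

```python
import numpy as np

def info_update(x_pred, P_pred_inv, y, H, R):
    """Information-form measurement update: precisions add, and the
    posterior mean solves a linear system in information coordinates."""
    R_inv = np.linalg.inv(R)
    P_post_inv = P_pred_inv + H.T @ R_inv @ H
    P_post = np.linalg.inv(P_post_inv)
    x_post = P_post @ (P_pred_inv @ x_pred + H.T @ R_inv @ y)
    return x_post, P_post

x_post, P_post = info_update(
    x_pred=np.array([0.0]), P_pred_inv=np.array([[1.0]]),
    y=np.array([2.0]), H=np.array([[1.0]]), R=np.array([[1.0]]))
```

With equal prior and measurement precision, the posterior mean lands halfway between prediction and measurement, matching the standard gain-form Kalman update.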
Remark 4.
Since the exact expectation $\mathbb{E}[(P^r_{k|k-1})^{-1}] = \mathbb{E}_{q(Q_k \mid r_k)}[(F_k P_{k-1|k-1} F_k^{\top} + Q_k)^{-1}]$ is intractable, as an alternative approximation, we assume $\mathbb{E}[(P^r_{k|k-1})^{-1}] \approx [F_k P_{k-1|k-1} F_k^{\top} + \mathbb{E}(Q_k)]^{-1}$.
After obtaining the conditional distribution $q(x_k \mid r_k)$, the approximate variational distribution $q(x_k)$ is given by
$q(x_k) = \sum_{r_k} q(x_k \mid r_k)\, p(r_k \mid y_{1:k}) \approx \mathcal{N}(x_k \mid \hat{x}_{k|k}, P_{k|k})$ (30)
where
$P_{k|k}^{-1} = \sum_{r_k} p(r_k \mid y_{1:k})\,(P^r_{k|k})^{-1}$
$P_{k|k}^{-1}\hat{x}_{k|k} = \sum_{r_k} p(r_k \mid y_{1:k})\,(P^r_{k|k})^{-1}\hat{x}^r_{k|k}$ (31)
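The collapse in (31) averages the run-length-conditioned Gaussians in precision (information) form. A minimal sketch with two assumed hypotheses is:

```python
import numpy as np

def collapse_mixture(weights, means, precisions):
    """Collapse conditional Gaussians into one Gaussian in precision form."""
    P_inv = sum(w * Pi for w, Pi in zip(weights, precisions))
    eta = sum(w * Pi @ m for w, m, Pi in zip(weights, means, precisions))
    P = np.linalg.inv(P_inv)
    return P @ eta, P

m, P = collapse_mixture(
    weights=[0.5, 0.5],
    means=[np.array([0.0]), np.array([2.0])],
    precisions=[np.eye(1), np.eye(1)])
```

With equal weights and equal precisions, the collapsed mean is the midpoint of the two component means.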
Similarly, rewrite the ELBO $\mathcal{B}(\lambda_k)$ as a function of $Q_k$, omitting the terms independent of $Q_k$; the result is denoted $\mathcal{B}_Q(\lambda_Q^r)$. One has
$\mathcal{B}_Q(\lambda_Q^r) = \mathbb{E}[\log \mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1})] + \mathbb{E}[\log \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k-1}, U^r_{k|k-1})] - \mathbb{E}[\log \mathrm{IW}(Q_k \mid \hat{u}^r_{k|k}, U^r_{k|k})]$ (32)
The state $x_k$ and PNC $Q_k$ are coupled through the predictive PDF $\mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1})$, so directly optimizing the ELBO objective is intractable due to non-conjugacy. To make the computations tractable, the following approximation is made:
$\log \mathcal{N}(x_k \mid \hat{x}^r_{k|k-1}, P^r_{k|k-1}) = \log \mathbb{E}_{q(x_{k-1})}[\mathcal{N}(x_k \mid F_k x_{k-1}, Q_k)] \approx \mathbb{E}_{q(x_{k-1})}\log \mathcal{N}(x_k \mid F_k x_{k-1}, Q_k)$ (33)
Substituting (33) into (32), one has
$\mathcal{B}_Q(\lambda_Q^r) = \tfrac{1}{2}\,\mathrm{tr}\{(U^r_{k|k} - U^r_{k|k-1} - A_k)\,\mathbb{E}[Q_k^{-1}]\} + \tfrac{1}{2}(\hat{u}^r_{k|k} - \hat{u}^r_{k|k-1} - 1)\,\mathbb{E}[\log|Q_k|] + \mathrm{const}$ (34)
Setting the gradients of $\mathcal{B}_Q(\lambda_Q^r)$ with respect to the expectation parameters $\{\mathbb{E}[Q_k^{-1}], \mathbb{E}[\log|Q_k|]\}$ equal to zero, the optimal hyperparameters of $Q_k$ are obtained as
$\hat{u}^r_{k|k} = \hat{u}^r_{k|k-1} + 1, \quad U^r_{k|k} = U^r_{k|k-1} + A_k$ (35)
where the sufficient statistic $A_k$ can be calculated as [33]
$A_k = \mathbb{E}_{q(x_k \mid r_k)}\{\mathbb{E}_{q(x_{k-1})}[(x_k - F_k x_{k-1})(\cdot)^{\top}]\} = (\hat{x}^r_{k|k} - F_k \hat{x}_{k-1|k-1})(\cdot)^{\top} + F_k P_{k-1|k-1} F_k^{\top} + P^r_{k|k} - P_{k,k-1}^{\top} F_k^{\top} - F_k P_{k,k-1}$ (36)
where the cross-covariance is $P_{k,k-1} = P_{k-1|k-1} F_k^{\top} (P^r_{k|k-1})^{-1} P^r_{k|k}$.
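As a minimal sketch (scalar toy values, and with the cross-covariance passed in directly rather than computed from the filter quantities), the PNC update (35) with its sufficient statistic can be written as:

```python
import numpy as np

def sufficient_statistic(x_post, P_post, x_prev, P_prev, P_cross, F):
    """Sufficient statistic A_k; P_cross plays the role of Cov(x_{k-1}, x_k)."""
    d = x_post - F @ x_prev
    return (np.outer(d, d) + F @ P_prev @ F.T + P_post
            - P_cross.T @ F.T - F @ P_cross)

def update_pnc(u_pred, U_pred, A_k):
    """PNC posterior update (35): one more degree of freedom, A_k added to scale."""
    return u_pred + 1.0, U_pred + A_k

A = sufficient_statistic(
    x_post=np.array([1.0]), P_post=np.array([[1.0]]),
    x_prev=np.array([0.0]), P_prev=np.array([[1.0]]),
    P_cross=np.array([[0.0]]), F=np.array([[1.0]]))
u_new, U_new = update_pnc(5.0, np.array([[2.0]]), A)
```

Each measurement thus nudges the implied mean $\mathbb{E}[Q_k]$ toward the observed squared state-prediction mismatch.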
The proposed AKF based on change-point detection, referred to as CPAKF, is summarized in Algorithm 1.
Algorithm 1: CPAKF (at time k).
[The pseudocode of Algorithm 1 appears as an image in the original article.]
Remark 5. 
CPAKF is distinct from other adaptive estimators. For instance, the IMM estimator assumes several discrete process noise levels and uses a switching rule, whereas CPAKF involves continuous process noise level adjustment through joint estimation. Compared with the existing VBAKF [18], CPAKF improves the VB initialization process by incorporating change-point detection. Unlike traditional maneuver detection methods, CPAKF relies on online Bayesian change-point detection.
Remark 6. 
CPAKF is an adaptive Kalman filter. This algorithm is suitable for sharply maneuvering-target tracking scenarios with unknown noise statistics. The adaptation means that the algorithm can perform noise parameter identification and state estimation simultaneously.
Remark 7. 
The convergence of the proposed CPAKF can be explained as follows. The state estimation and process noise identification of maneuvering-target tracking are formulated as a variational optimization problem, which CPAKF solves in three steps. First, the posterior of the run length $p(r_k \mid y_{1:k})$ is calculated using (20) via Bayes' theorem. Second, the conditional posteriors $q(x_k \mid r_k)$ and $q(Q_k \mid r_k)$ are calculated using (29) and (35) by maximizing the ELBO (26). Finally, the state posterior $q(x_k)$ is obtained using (31). Since the solutions obtained via Bayes' theorem are exact, the convergence of CPAKF depends on the procedure of maximizing the ELBO (26). Recall that maximizing the ELBO is the core of mean-field variational inference [17]. From the theoretical analysis in [34], mean-field variational inference has good convergence behavior, guaranteeing a linear convergence rate. Therefore, the proposed CPAKF gradually converges to a local optimum as the number of iterations increases.

3.3. Implementation Details

Based on the Bayesian filtering framework, CPAKF consists of a recursive prediction-update cycle. To handle state estimation under uncertain PNC, CPAKF carries out the measurement update via VB iteration, while change-point detection is employed to enhance the initialization of the VB iteration. Some implementation details are as follows:
  • Reducing computational complexity: In our algorithm, the run length $r_k \in \{1, 2, \dots, k\}$ is introduced to characterize change points. The number of its probability values depends on $k$, which means the number of posteriors $p(r_k \mid y_{1:k})$ grows with $k$, leading to an explosion in computational complexity. This problem can be mitigated by truncating less probable events. Two main truncation strategies can be employed, as suggested in [31]: one removes events whose run-length probability is below a threshold $\delta_p$; the other retains the $M$ highest-probability events of $r_k$ and renormalizes their posterior probabilities. In this article, the second method is adopted to guarantee robustness across different scenarios.
  • Change-point detection: To determine whether a change point has occurred, the maximum a posteriori (MAP) estimate of the run length is used:
    $r_k^* = \arg\max_{r_k} p(r_k \mid y_{1:k})$ (37)
    where $r_k^*$ represents the optimal run length at time $k$. According to the definition of the run length $r_k$, a change point is detected when $r_k^* = 1$; otherwise, no change point has occurred.
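The two implementation details above can be sketched together (with assumed run lengths and probabilities for illustration): keep the $M$ most probable hypotheses, renormalize, then apply the MAP detection rule (37):

```python
import numpy as np

def prune_top_m(run_lengths, probs, M):
    """Keep the M most probable run-length hypotheses and renormalize."""
    idx = np.sort(np.argsort(probs)[::-1][:M])   # top-M, chronological order
    kept = probs[idx]
    return run_lengths[idx], kept / kept.sum()

def change_point_detected(run_lengths, probs):
    """MAP detection rule (37): a change point corresponds to r* = 1."""
    return bool(run_lengths[np.argmax(probs)] == 1)

r, p = prune_top_m(np.array([1, 5, 6, 7]),
                   np.array([0.55, 0.05, 0.10, 0.30]), M=3)
```

Pruning keeps the per-step cost bounded by $M$ rather than growing linearly with $k$.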

4. Adaptive Kalman Smoother Based on Change-Point Detection

In the previous section, we proposed the CPAKF algorithm, in which the process noise parameters are re-initialized according to the maneuvering scenario whenever a change point is detected. To further improve estimation performance, this section proposes fixed-interval adaptive Kalman smoothing based on change-point detection, built on top of CPAKF.

4.1. Update of the Joint Posterior

Consider the measurement set $y_D$ over the interval $D = [k-l, k]$, where $l$ denotes the length of the interval. The objective of smoothing is to obtain the smoothed posterior of the joint latent variables (i.e., $x_D^s$ and $Q_D^s$) over this interval.
At time $k-l$, the joint PDF $p(x_{k-l}, Q_{k-l})$ can be defined as
$p(x_{k-l}, Q_{k-l}) = q(x_{k-l};\, \hat{x}_{k-l|k-l}, P_{k-l|k-l})\, q(Q_{k-l};\, \hat{u}_{k-l|k-l}, U_{k-l|k-l})$ (38)
Then, the joint posterior of $Z \triangleq \{x_D^s, Q_D^s\}$ can be obtained via Bayes' rule:
$p(Z \mid y_D) \propto p(Z, y_D) = p(x_{k-l}, Q_{k-l})\, p(y_k \mid x_k, R_k) \prod_{n=k-l}^{k-1} p(Q_{n+1} \mid Q_n)\, p(y_n \mid x_n, R_n)\, p(x_{n+1} \mid x_n, Q_{n+1})$ (39)
Since the analytical solution of the joint posterior is intractable, mean-field variational inference is employed to obtain the approximation
$p(Z \mid y_D) \approx q(Z) \triangleq q(x_D^s)\, q(Q_D^s)$ (40)
where $q(x_D^s)$ and $q(Q_D^s)$ denote the approximate posterior distributions. Similar to the work in [33], the optimal solution is obtained via the coordinate ascent method, that is,
$\log q(x_D^s) \overset{c}{=} \mathbb{E}_{q(Q_D^s)}[\log p(Z, y_D)]$ (41)
$\log q(Q_D^s) \overset{c}{=} \mathbb{E}_{q(x_D^s)}[\log p(Z, y_D)]$ (42)
where $\overset{c}{=}$ denotes equality up to an additive constant.
For further derivation of (41), the forward filter from n = k l to n = k can be updated as follows [33]:
(43) x ^ n | n 1 = F n x ^ n 1 | n 1 (44) P n | n 1 = F n P n 1 | n 1 F n + U n | n s / u ^ n | n s (45) K n = P n | n 1 H n ( H n P n | n 1 H n + R n ) 1 (46) x ^ n | n = x ^ n | n 1 + K n ( y n H n x ^ n | n 1 ) (47) P n | n = P n | n 1 K n H n P n | n 1
where U n | n s and u ^ n | n s denote the hyperparameters of q ( Q n s ) .
The backward smoother from $n = k-1$ to $n = k-l$ can be updated as
$$G_n = P_{n|n} F_n^\top P_{n+1|n}^{-1} \tag{48}$$
$$\hat{x}_{n|n}^s = \hat{x}_{n|n} + G_n \left(\hat{x}_{n+1|n+1}^s - \hat{x}_{n+1|n}\right) \tag{49}$$
$$P_{n|n}^s = P_{n|n} + G_n \left(P_{n+1|n+1}^s - P_{n+1|n}\right) G_n^\top \tag{50}$$
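Equations (48)-(50) are a standard Rauch-Tung-Striebel backward recursion; a minimal sketch follows (the function name and argument layout are ours):

```python
import numpy as np

def rts_backward_step(x_f, P_f, x_pred, P_pred, x_s_next, P_s_next, F):
    """One backward smoothing step, Eqs. (48)-(50). x_pred/P_pred are the
    one-step predictions for time n+1 saved during the forward pass."""
    G = P_f @ F.T @ np.linalg.inv(P_pred)           # Eq. (48): smoother gain
    x_s = x_f + G @ (x_s_next - x_pred)             # Eq. (49): smoothed mean
    P_s = P_f + G @ (P_s_next - P_pred) @ G.T       # Eq. (50): smoothed covariance
    return x_s, P_s, G
```

A useful sanity check: when the smoothed quantities at $n+1$ equal the predicted ones, the smoothed estimate at $n$ reduces to the filtered estimate.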
For the derivation of $q(Q_D^s)$, we can also obtain the following forward-filter update from $n = k-l$ to $n = k$:
$$\hat{u}_{n|n-1} = \beta\left(\hat{u}_{n-1|n-1} - n_x - 1\right) + n_x + 1 \tag{51}$$
$$U_{n|n-1} = \beta U_{n-1|n-1} \tag{52}$$
$$\hat{u}_{n|n} = \hat{u}_{n|n-1} + 1 \tag{53}$$
$$U_{n|n} = U_{n|n-1} + A_n^s \tag{54}$$
where
$$A_n^s = \left(\hat{x}_{n+1|n+1}^s - F_n \hat{x}_{n|n}^s\right)\left(\hat{x}_{n+1|n+1}^s - F_n \hat{x}_{n|n}^s\right)^\top - F_n P_{n+1,n} - P_{n+1,n}^\top F_n^\top + P_{n+1|n+1}^s + F_n P_{n|n}^s F_n^\top \tag{55}$$
with $P_{n+1,n} = G_n P_{n+1|n+1}^s$. According to the beta-Bartlett model proposed in [35], the backward smoothing from $n = k-1$ to $n = k-l$ can be derived as
$$\hat{u}_{n|n}^s = (1-\beta)\, \hat{u}_{n|n} + \beta\, \hat{u}_{n+1|n+1}^s \tag{56}$$
$$U_{n|n}^s = \left[(1-\beta)\, U_{n|n}^{-1} + \beta \left(U_{n+1|n+1}^s\right)^{-1}\right]^{-1} \tag{57}$$
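The hyperparameter recursions of Eqs. (51)-(57) can be sketched as follows, treating the degrees-of-freedom parameters as scalars and the scale parameters as matrices. The function names are ours; β and n_x take the values used in this paper's experiments (β = 0.98, state dimension n_x = 4).

```python
import numpy as np

BETA, N_X = 0.98, 4  # decreasing factor and state dimension from the text

def q_forward_step(u, U, A_s):
    """Forward update of the noise hyperparameters, Eqs. (51)-(54)."""
    u_pred = BETA * (u - N_X - 1) + N_X + 1          # Eq. (51)
    U_pred = BETA * U                                # Eq. (52)
    return u_pred + 1, U_pred + A_s                  # Eqs. (53)-(54)

def q_backward_step(u_f, U_f, u_s_next, U_s_next):
    """Backward beta-Bartlett smoothing, Eqs. (56)-(57)."""
    u_s = (1 - BETA) * u_f + BETA * u_s_next         # Eq. (56)
    U_s = np.linalg.inv((1 - BETA) * np.linalg.inv(U_f)
                        + BETA * np.linalg.inv(U_s_next))  # Eq. (57)
    return u_s, U_s
```

Note that Eq. (57) is a precision-domain (harmonic-mean) interpolation: if the filtered and next-step smoothed scale matrices coincide, the smoothed scale matrix is unchanged.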

4.2. Algorithm Details

The startup logic is as follows. Record the end time of the last smoothing interval as $t_c$. According to (37), determine whether a change point exists at time $k$. If a change point is detected, reset the smoothing interval to $D = [t_c + 1, k]$ and start the variational smoothing algorithm. If no change point is detected, a maximum sliding-window length $l_{max}$ is defined, and the smoothing algorithm is executed once $k - t_c \geq l_{max}$. Finally, by embedding the variational interval smoothing algorithm into the CPAKF filtering algorithm proposed in the previous section, we obtain a change-point-based variational smoothing algorithm. The proposed smoothing algorithm, named CPAKS, is summarized in Algorithm 2.
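This startup logic can be sketched as a small helper (the function and variable names are ours, and l_max = 10 matches the experimental settings below):

```python
def smoothing_interval(k, t_c, change_detected, l_max=10):
    """Return the smoothing interval D = [t_c + 1, k] when smoothing should
    run: either a change point was detected at time k, or the window since
    the last smoothing has reached l_max. Otherwise return None."""
    if change_detected or k - t_c >= l_max:
        return (t_c + 1, k)
    return None
```

The caller then runs the forward filter and backward smoother over the returned interval and advances $t_c$ to $k$.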
Remark 8. 
CPAKS is proposed to improve estimation accuracy by embedding the variational interval smoothing algorithm [33] into CPAKF. In the smoothing procedure, the optimization solution for CPAKS is derived from (41) and (42) via the coordinate ascent method. The coordinate ascent method is part of the fixed iteration method in mean-field variational inference [17]. According to the convergence guarantee in [34], CPAKS can converge to the local optimum as the number of iterations increases.
Algorithm 2: CPAKS.

5. Results for Synthetic Data

To verify the effectiveness and superiority of the proposed CPAKF and CPAKS, the following simulated maneuvering-target tracking scenarios are considered.

5.1. Simulation Configuration

The target state is represented as $x_k = [x_k, \dot{x}_k, y_k, \dot{y}_k]^\top$. The target kinematics are assumed to follow a constant-velocity model, and the measurements are obtained from a linear sensor. The corresponding parameters are set as follows:
$$F_k = I_2 \otimes \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \qquad H_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{58}$$
The simulated scenarios of typical aerial maneuvering targets proposed in [36] are considered in this article, which are available in the Benchmark Trajectories for Multi-Object Tracking tool in Matlab. The measurement noise covariance $R_k$ is known, given by $R_k = \mathrm{diag}(10^4, 10^4)$. The process noise covariance is unknown and can be denoted as
$$Q_k = \delta_k^2\, I_2 \otimes \begin{bmatrix} T^3/2 & T^2/2 \\ T^2/2 & T \end{bmatrix} \tag{59}$$
where $\delta_k^2$ denotes the process noise level, $T$ denotes the sampling period, and $I_n$ is an $n \times n$ identity matrix. The simulated scenarios shown in Figure 2 are described as follows:
  • S1: The flight trajectory of a large airplane is shown in Figure 2a. The target first flies at a constant speed of 290 m/s, then makes a gentle turn at twice the gravitational acceleration, after which it resumes constant-speed straight-line motion. Lastly, it performs a turning motion at three times the gravitational acceleration. The true set of change-point times is [ 60 , 79 , 110 , 130 ] (second).
  • S2: The second trajectory is generated by a small aircraft and contains two turning movements, during which several accelerated movements are conducted. The set of change-point times is [ 31 , 51 , 101 , 115 ] (second).
  • S3: The target is a plane moving at high velocity. The flight trajectory consists of two turning movements with accelerations of four times the gravitational acceleration, and the plane slows down in the middle of the second turn. The set of change-point times is [ 31 , 40 , 75 , 91 , 104 ] (second).
  • S4: In this scenario, the target and its flight trajectory are similar to those in S3, but the turning accelerations are 4 g and 6 g , respectively, where g denotes the gravitational acceleration. The set of change-point times is [ 31 , 36 , 70 , 81 , 92 , 142 ] (second).
  • S5: The target is a fast-maneuvering plane. The trajectory contains three turns at constant speed, during which the target accelerates. The set of change-point times is [ 31 , 36 , 63 , 70 , 118 , 128 , 133 , 171 ] (second).
  • S6: The target trajectory contains four turns. After the second turn, the plane decreases its altitude and velocity and begins the third turn; thereafter, it accelerates rapidly and enters the fourth turn. The set of change-point times is [ 31 , 40 , 70 , 83 , 117 , 125 , 151 , 155 ] (second).
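For reference, the model matrices $F_k$, $H_k$, $R_k$, and $Q_k$ defined above can be assembled with NumPy as follows. The values of T and δ² below are placeholders for illustration, not values fixed by the paper:

```python
import numpy as np

T = 1.0        # sampling period (placeholder value)
delta2 = 1.0   # process noise level delta_k^2 (placeholder value)

# State transition for [x, x_dot, y, y_dot]: block-diagonal CV model
F_k = np.kron(np.eye(2), np.array([[1.0, T], [0.0, 1.0]]))
# Linear sensor observing the two position components
H_k = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
R_k = np.diag([1e4, 1e4])
# Process noise covariance as written in the article
Q_k = delta2 * np.kron(np.eye(2), np.array([[T**3 / 2, T**2 / 2],
                                            [T**2 / 2, T]]))
```

The Kronecker product replicates the per-axis 2 × 2 blocks across the x and y channels, matching the state ordering $[x_k, \dot{x}_k, y_k, \dot{y}_k]^\top$.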
The following adaptive Kalman filtering and smoothing methods are compared with the proposed CPAKF and CPAKS. The corresponding parameters are set as follows:
  • IMM [7]: The state transition matrix $F_k$ is assumed to be the same as in (58). The process noise level set satisfies $\delta_i^2 \in \{0.1\delta^2, 1\delta^2, 10\delta^2\}$. The measurement update is carried out via the standard Kalman filter. The model transition probability matrix $P_m$ is set as
$$P_m = \begin{bmatrix} 0.9 & 0.01 & 0.09 \\ 0.025 & 0.75 & 0.225 \\ 0.15 & 0.35 & 0.50 \end{bmatrix}$$
  • VBAKF [33]: This is an adaptive Kalman filter based on variational Bayesian methods. The initial parameters are u 0 | 0 = 7 and U 0 | 0 = 3 Q 0 , and the tuning parameter is τ = 3 . The decreasing factor is β = 0.98 , and the maximum iteration number is I m a x = 50 .
  • CPAKF: This is the adaptive Kalman filter based on change-point detection proposed in this paper. Most parameters are set the same as in VBAKF. In addition, the hazard function is 1/λ = 0.06, and the change-point detection parameter is τ1 = 200.
  • IMMS [37]: This is an interactive multiple-model smoothing algorithm, which consists of a forward filter and a backward smoother. The smoothing interval is fixed, and its length is l m a x = 10 . The other parameters are the same as those used in IMM.
  • VBAKS [33]: This is an adaptive Kalman smoothing method. The smoothing interval is fixed, and its length is l m a x = 10 . The other parameters are set the same as in VBAKF.
  • CPAKS: This is the proposed adaptive Kalman smoothing method based on change-point detection. The maximum smoothing interval length is l m a x = 10 . The other parameters are the same as those of CPAKF.
For each scenario, 100 Monte Carlo runs were carried out on a Legion Y9000P computer equipped with a CPU operating at 2.20 GHz. To evaluate the tracking performance of all the above methods, the Root Mean Square Error (RMSE) and Average Root Mean Square Error (ARMSE) were utilized. Additionally, to characterize the change-point detection abilities of CPAKF and CPAKS, the modified F1 score metric proposed in [38] was used to evaluate detection accuracy. In practical time series, the locations of change points are influenced by stochastic factors (e.g., process noise and measurement noise). Based on the definition of F1, change-point detection is viewed as a classification problem between change points and invariant points. The score is defined as
$$F_1 = \frac{2PR}{P + R} \tag{60}$$
where P denotes precision (the ratio of correctly detected change points to the number of detected change points) and R denotes recall (the ratio of correctly detected change points to the number of true change points). The criterion for correctly detecting a change point needs to be defined. Given the error margin M around a true change point, a detected change point is considered correct if it falls within this margin. To avoid double counting, it must be ensured that only one change point is detected within the error margin of each true change point. On this basis, let X denote the set of change points detected by our algorithms and T denote the set of true change points. The set of correctly detected change points in X is expressed as TP(T, X). For each change point τ in TP(T, X), there is one change point x in X satisfying |τ − x| ≤ M, and only one x can match the true change point τ. Thus, precision P and recall R can be computed as
$$P = \frac{|TP(T, X)|}{|X|} \tag{61}$$
$$R = \frac{|TP(T, X)|}{|T|} \tag{62}$$
where | · | denotes the number of elements in the set. Note that P and R are not well defined if no change point is reported by our algorithms or if no true change point occurs in the scenario. To avoid this, the point at the initial time is considered a generalized change point.
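As an illustration, the margin-based F1 metric can be sketched as follows. The function name is ours, and a greedy nearest-in-order assignment stands in for the one-to-one matching described above (the exact matching in [38] may differ in detail):

```python
def change_point_f1(detected, true_cps, M):
    """Margin-based F1: a detected point is a true positive if it lies
    within margin M of a not-yet-matched true change point (each true
    change point can be matched at most once)."""
    matched, tp = set(), 0
    for x in detected:
        for tau in true_cps:
            if tau not in matched and abs(tau - x) <= M:
                matched.add(tau)
                tp += 1
                break
    if not detected or not true_cps or tp == 0:
        return 0.0
    precision = tp / len(detected)   # |TP(T, X)| / |X|
    recall = tp / len(true_cps)      # |TP(T, X)| / |T|
    return 2 * precision * recall / (precision + recall)
```

With the real-world sets reported in Section 6 (detected {51, 75, 101}, true {13, 34, 65, 81, 100}) and M = 10, this sketch reproduces the reported F1 score of 0.5.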

5.2. Simulation Results

The simulation results are shown in Figure 3, where the gray dotted lines indicate the moments when the target maneuvers. Among the three filtering algorithms (IMM, VBAKF, and CPAKF), IMM shows a clear accuracy advantage in the non-maneuvering segments of all scenarios because it employs multiple models and improves estimation through model probability weighting. The proposed CPAKF ranks second: its RMSE is significantly lower than that of VBAKF, mainly because CPAKF uses run-length probability weighting to obtain the maximum entropy distribution of the state estimate, which outperforms the single-mode estimation used in VBAKF.

In the maneuvering segments, the estimation performance of IMM is not satisfactory; in particular, its RMSE increases significantly after a change point occurs. The two adaptive filtering algorithms are more robust, since adaptive filtering can adjust the process noise in response to target maneuvers, whereas IMM relies on the completeness of its model set; when unknown maneuver patterns appear in the target motion, model mismatch is likely to degrade its performance. Compared with VBAKF, CPAKF achieves better estimation performance in the maneuvering segments because it can adjust the noise parameter when a change point is detected.

The three smoothing algorithms (IMMS, VBAKS, and CPAKS) outperform the three filtering algorithms because they can utilize more measurement information, although their performance still deteriorates after a change point occurs. Among them, CPAKS exhibits the best estimation accuracy: by dividing the smoothing intervals through maneuver detection, it avoids smoothing over intervals that contain different motion modes of the target.
In addition, when a change point is detected, both CPAKF and CPAKS speed up the convergence of parameter estimation by taking decisive action, i.e., re-initializing the noise parameters, which helps avoid falling into a poor local optimum.
The ARMSEs of the simulated scenarios are summarized in Table 1. Across all scenarios, CPAKF significantly outperforms the other filtering algorithms. The proposed CPAKS exhibits better smoothing performance than IMMS and VBAKS in all scenarios except S2. In particular, CPAKS improves most markedly over the other algorithms in S5 and S6, where the target maneuvers take more complex forms and occur more frequently; CPAKS better "captures" these prominent change points and provides adaptive-interval smoothed state estimates.
The time cost of all simulated scenarios is summarized in Table 2. VBAKF and VBAKS achieve the lowest computational cost among the filtering and smoothing algorithms, respectively, across all scenarios, followed by CPAKF and CPAKS; IMM and IMMS incur the highest computational burden. Despite the need to estimate an additional discrete variable (the run length r k ) that grows over time for maneuver detection, the accuracy of the variational approximation and the fast convergence brought by change-point detection result in only a slight increase in computational cost compared to VBAKF and VBAKS.
Among all the algorithms, VBAKF and CPAKF belong to the variational adaptive filtering framework. Table 3 reports their average iteration numbers in the different scenarios. Both algorithms require far fewer iterations than the maximum I m a x = 50 , which indicates good convergence properties; CPAKF converges faster.
Finally, to measure the validity and accuracy of change-point detection in CPAKF and CPAKS, F 1 score metrics were computed; the results are shown in Table 4. Both methods are capable of detecting change points, but CPAKS outperforms CPAKF because it combines forward filtering with backward smoothing.

6. Results for Real-World Data

To further validate the effectiveness of our methods, real-world data of aerial target tracking were used. The aerial target trajectory is shown in Figure 4, which consists of six different segments or maneuver modes. These segments include segment A (CV), segment B (left CT), segment C (CV), segment D (right CT), segment E (CA), and segment F (figure-eight flight pattern). Through calculation and analysis of the segments, the set of true change points is { 13 , 34 , 65 , 81 , 100 } (second).
The parameter settings for all methods in the real-world scenario are similar to those in the simulated scenarios. The results are depicted in Figure 5, and the ARMSE and average time cost for all methods are summarized in Table 5. The performance differences among the algorithms in segments A, B, C, and D are not significant. However, the RMSE peaks of CPAKF in segments E and F, and of CPAKS in segments D, E, and F, are significantly smaller than those of the other algorithms, which is reflected in their better final ARMSE results. The set of change points detected by CPAKF is { 51 , 75 , 101 } (second), and that detected by CPAKS is { 51 , 74 , 100 } (second). For M = 10 , the F 1 score of both CPAKF and CPAKS is 0.5.

7. Conclusions

In this article, novel variational adaptive state estimators for target state and process noise parameter estimation are developed for a class of linear state-space models with abruptly changing parameters. By combining variational inference with change-point detection in an online Bayesian fashion, two adaptive estimators—a change-point-based adaptive Kalman filter (CPAKF) and a change-point-based adaptive Kalman smoother (CPAKS)—are proposed in a recursive detection and estimation process. In each step, the run-length probability of the current maneuver mode is first calculated, and then the joint posterior of the target state and process noise parameter conditioned on the run length is approximated by variational inference. Compared with existing variational noise-adaptive Kalman filters, the proposed methods are robust to initial iterative value settings, improving their ability to track sharply maneuvering targets. Meanwhile, the change-point detection divides the non-stationary time sequence into several stationary segments, allowing for an adaptive sliding length in the CPAKS method. Finally, the performance of the proposed methods is validated using both synthetic and real-world datasets of maneuvering-target tracking.

Author Contributions

Conceptualization, X.H., S.Z., J.H. and H.L.; methodology, H.L. and X.H.; software, S.Z.; validation, J.H.; formal analysis, S.Z. and J.H.; investigation, S.Z. and J.H.; data curation, S.Z. and J.H.; writing—original draft preparation, S.Z., J.H., X.H. and H.L.; writing—review and editing, X.H., J.H. and H.L.; visualization, S.Z. and J.H.; supervision, X.H. and H.L.; project administration, X.H. and H.L.; funding acquisition, X.H. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under grant numbers 61703343 and 62371398.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The benchmark trajectory data used in this paper are available at: https://github.com/xlhou/CPAKF-CPAKS (accessed on 4 June 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

KF: Kalman filter
KL: Kullback–Leibler
CP: Change point
PNC: Process noise covariance
MNC: Measurement noise covariance
PDF: Probability density function
SMC: Sequential Monte Carlo
VB: Variational Bayesian
IMM: Interacting multiple model
AKF: Adaptive Kalman filter
VBAKF: Variational Bayesian adaptive Kalman filter
CPAKF: Change-point-based adaptive Kalman filter
CPAKS: Change-point-based adaptive Kalman smoother

References

  1. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2004.
  2. Kalman, R.E.; Bucy, R.S. New results in linear filtering and prediction theory. J. Basic Eng. 1961, 83, 95–108.
  3. Zhang, L.; Sidoti, D.; Bienkowski, A.; Pattipati, K.R.; Bar-Shalom, Y.; Kleinman, D.L. On the identification of noise covariances and adaptive Kalman filtering: A new look at a 50-year-old problem. IEEE Access 2020, 8, 59362–59388.
  4. Visina, R.; Bar-Shalom, Y.; Willett, P. Multiple-model estimators for tracking sharply maneuvering ground targets. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1404–1414.
  5. Moose, R. An adaptive state estimation solution to the maneuvering target problem. IEEE Trans. Autom. Control 1975, 20, 359–362.
  6. Sarkka, S.; Nummenmaa, A. Recursive noise adaptive Kalman filtering by variational Bayesian approximations. IEEE Trans. Autom. Control 2009, 54, 596–600.
  7. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part V: Multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321.
  8. Mazor, E.; Averbuch, A.; Bar-Shalom, Y.; Dayan, J. Interacting multiple model methods in target tracking: A survey. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 103–123.
  9. Blom, H.A.; Bar-Shalom, Y. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans. Autom. Control 1988, 33, 780–783.
  10. Li, X.-R.; Bar-Shalom, Y. Multiple-model estimation with variable structure. IEEE Trans. Autom. Control 1996, 41, 478–493.
  11. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  12. Liu, J.; West, M. Combined parameter and state estimation in simulation-based filtering. In Sequential Monte Carlo Methods in Practice; Springer: Berlin/Heidelberg, Germany, 2001; pp. 197–223.
  13. Carvalho, C.M.; Johannes, M.S.; Lopes, H.F.; Polson, N.G. Particle learning and smoothing. Stat. Sci. 2010, 25, 88–106.
  14. Storvik, G. Particle filters for state-space models with the presence of unknown static parameters. IEEE Trans. Signal Process. 2002, 50, 281–289.
  15. Nemeth, C.; Fearnhead, P.; Mihaylova, L. Sequential Monte Carlo methods for state and parameter estimation in abruptly changing environments. IEEE Trans. Signal Process. 2014, 62, 1245–1255.
  16. Arnold, A. When artificial parameter evolution gets real: Particle filtering for time-varying parameter estimation in deterministic dynamical systems. Inverse Probl. 2022, 39, 014002.
  17. Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational inference: A review for statisticians. J. Am. Stat. Assoc. 2017, 112, 859–877.
  18. Huang, Y.; Zhang, Y.; Wu, Z.; Li, N.; Chambers, J. A novel adaptive Kalman filter with inaccurate process and measurement noise covariance matrices. IEEE Trans. Autom. Control 2018, 63, 594–601.
  19. Ma, Y.; Zhao, S.; Huang, B. Multiple-model state estimation based on variational Bayesian inference. IEEE Trans. Autom. Control 2018, 64, 1679–1685.
  20. Xu, H.; Duan, K.; Yuan, H.; Xie, W.; Wang, Y. Adaptive fixed-lag smoothing algorithms based on the variational Bayesian method. IEEE Trans. Autom. Control 2020, 66, 4881–4887.
  21. Zhu, F.; Huang, Y.; Xue, C.; Mihaylova, L.; Chambers, J. A sliding window variational outlier-robust Kalman filter based on Student's t-noise modeling. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 4835–4849.
  22. Yu, X.; Meng, Z. Robust Kalman filters with unknown covariance of multiplicative noise. IEEE Trans. Autom. Control 2023, 69, 1171–1178.
  23. Xia, M.; Zhang, T.; Wang, J.; Zhang, L.; Zhu, Y.; Guo, L. The fine calibration of the ultra-short baseline system with inaccurate measurement noise covariance matrix. IEEE Trans. Instrum. Meas. 2021, 71, 1–8.
  24. Zhu, H.; Zhang, G.; Li, Y.; Leung, H. A novel robust Kalman filter with unknown non-stationary heavy-tailed noise. Automatica 2021, 127, 109511.
  25. Zhu, H.; Zhang, G.; Li, Y.; Leung, H. An adaptive Kalman filter with inaccurate noise covariances in the presence of outliers. IEEE Trans. Autom. Control 2021, 67, 374–381.
  26. Huang, Y.; Zhang, Y.; Shi, P.; Chambers, J. Variational adaptive Kalman filter with Gaussian-inverse-Wishart mixture distribution. IEEE Trans. Autom. Control 2020, 66, 1786–1793.
  27. Mbalawata, I.S.; Sarkka, S.; Vihola, M.; Haario, H. Adaptive Metropolis algorithm using variational Bayesian adaptive Kalman filter. Comput. Stat. Data Anal. 2015, 83, 101–115.
  28. Dong, P.; Jing, Z.; Leung, H.; Shen, K. Variational Bayesian adaptive cubature information filter based on Wishart distribution. IEEE Trans. Autom. Control 2017, 62, 6051–6057.
  29. Hoffman, M.D.; Blei, D.M.; Wang, C.; Paisley, J. Stochastic variational inference. J. Mach. Learn. Res. 2013, 14, 1303–1347.
  30. Lan, H.; Hu, J.; Wang, Z.; Cheng, Q. Variational nonlinear Kalman filtering with unknown process noise covariance. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 9177–9190.
  31. Adams, R.P.; MacKay, D.J. Bayesian online changepoint detection. arXiv 2007, arXiv:0710.3742.
  32. Khan, M.E.; Rue, H. The Bayesian learning rule. J. Mach. Learn. Res. 2023, 24, 1–46.
  33. Ardeshiri, T.; Özkan, E.; Orguner, U.; Gustafsson, F. Approximate Bayesian smoothing with unknown process and measurement noise covariances. IEEE Signal Process. Lett. 2015, 22, 2450–2454.
  34. Zhang, A.Y.; Zhou, H.H. Theoretical and computational guarantees of mean field variational inference for community detection. Ann. Stat. 2020, 48, 2575–2598.
  35. Carvalho, C.M.; West, M. Dynamic matrix-variate graphical models. Bayesian Anal. 2007, 2, 69–97.
  36. Blair, W.; Watson, G.; Kirubarajan, T.; Bar-Shalom, Y. Benchmark for radar allocation and tracking in ECM. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 1097–1114.
  37. Nadarajah, N.; Tharmarasa, R.; McDonald, M.; Kirubarajan, T. IMM forward filtering and backward smoothing for maneuvering target tracking. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 2673–2678.
  38. van den Burg, G.J.J.; Williams, C.K.I. An evaluation of change point detection algorithms. arXiv 2020, arXiv:2003.06222.
Figure 1. Illustration of maneuvering-target tracking.
Figure 2. Subfigures (a–f) show the true trajectories of the six simulated scenarios S1–S6.
Figure 3. Subfigures (a–f) show the RMSE comparison results for the six simulated scenarios S1–S6.
Figure 4. Trajectory for the real-world scenario.
Figure 5. RMSE for the real-world scenario.
Table 1. ARMSE for all simulated scenarios.

Method   S1       S2       S3       S4       S5       S6
IMM      115.55   119.41   120.65   114.32   171.13   184.86
VBAKF    109.72   114.70   115.82   116.34   121.19   119.19
CPAKF     97.03   100.08    98.91    99.35   111.32   113.92
IMMS      68.06    70.62    71.11    73.92   100.25   108.14
VBAKS     76.02    66.66    70.13    86.59   112.14   102.24
CPAKS     60.32    67.27    57.44    60.17    78.61    69.77
Table 2. Time cost for simulated scenarios.

Method   S1     S2     S3     S4     S5     S6
IMM      0.80   0.83   0.81   0.79   0.79   0.79
VBAKF    0.04   0.04   0.04   0.04   0.04   0.04
CPAKF    0.45   0.57   0.46   0.46   0.46   0.46
IMMS     1.00   1.02   1.00   0.98   0.98   0.98
VBAKS    0.30   0.31   0.31   0.30   0.30   0.30
CPAKS    0.80   0.95   0.80   0.80   0.82   0.81
Table 3. Average iteration numbers for simulated scenarios.

Method   S1      S2      S3      S4      S5      S6
VBAKF    27.57   28.31   26.80   29.41   26.14   27.87
CPAKF     9.67   10.08   10.10   10.39   10.94   11.76
Table 4. Change-point detection performance.

Method           S1     S2     S3     S4     S5     S6
CPAKF (M = 5)    0.25   0.67   0.40   0.60   0.43   0.80
CPAKS (M = 5)    0.25   0.67   0.40   0.60   0.43   0.57
CPAKF (M = 10)   0.50   0.89   0.60   0.60   0.71   0.93
CPAKS (M = 10)   0.50   0.89   0.60   0.60   0.71   0.57
Table 5. ARMSE and time cost for the real-world scenario.

Method   ARMSE    Time Cost
IMM      556.48   0.58
VBAKF    537.28   0.05
CPAKF    530.77   0.24
IMMS     510.14   0.69
VBAKS    518.50   0.21
CPAKS    507.08   0.46

Hou, X.; Zhao, S.; Hu, J.; Lan, H. Noise-Adaptive State Estimators with Change-Point Detection. Sensors 2024, 24, 4585. https://doi.org/10.3390/s24144585
