Article

Multi-Vehicle Cooperative Target Tracking with Time-Varying Localization Uncertainty via Recursive Variational Bayesian Inference

1
Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China
2
School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang 212013, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6487; https://doi.org/10.3390/s20226487
Submission received: 14 October 2020 / Revised: 6 November 2020 / Accepted: 10 November 2020 / Published: 13 November 2020
(This article belongs to the Section Sensor Networks)

Abstract
Cooperative target tracking by multiple vehicles connected through inter-vehicle communication is a promising way to improve the estimation of the target state. The effectiveness of cooperative tracking depends closely on the accuracy of the relative localization between the host and cooperative vehicles. However, the localization signal, usually provided by a satellite-based navigation system, is rather susceptible to the dynamic driving environment, which in turn degrades the effectiveness of cooperative tracking. In order to implement reliable cooperative tracking, especially when the statistical characteristics of the relative localization noise are time-varying and uncertain, this paper presents a recursive Bayesian framework that jointly estimates the state of the target and the cooperative vehicle as well as the localization noise parameters. An online variational Bayesian inference algorithm is further developed to achieve an efficient recursive estimate. The simulation results verify that our proposed algorithm can effectively boost the accuracy of target tracking when the localization noise changes dynamically over time.

1. Introduction

Intelligent vehicles (IVs) [1] equipped with different types of on-board sensors, such as Lidar, Radar, and cameras, can collect information about the surrounding environment. By analyzing this information, the IVs can achieve reliable situational awareness [2] and thus make correct decisions [3,4]. Among various environment perception tasks, target tracking [5,6] plays a crucial role in many autonomous driving functions, such as forward collision assistance or adaptive cruise control. Traditionally, the state of the target is estimated based on the data collected by on-board sensors. Consequently, the tracking performance is restricted by the limitations of those sensors. For example, mutual occlusion between targets prevents the IVs from recognizing the occluded targets [7]. Recently, owing to the fast development of inter-vehicle and cellular communication, cooperative perception through vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communication has attracted much attention [4,7]. IVs equipped with dedicated short range communication (DSRC) radios [8] or 4G/5G technology [9] are able to exchange their information, including position, speed, heading, etc., with other neighboring IVs or roadside units (RSUs). The vehicle receiving such information can then fuse the transmitted data with its local information to improve the state estimate of the target and extend the perception range. In the foreseeable future, a standardized 5G network can enable vehicular communication with remarkably low power consumption, high peak data rate, and low latency [10].
During the last few years, many studies have been proposed to fuse information from multiple sources in the cooperative intelligent transportation systems (C-ITS) area [11]. In this study, we focus on cooperative target tracking via multiple vehicles, although the developed algorithm is also applicable to other cooperative situations. For convenience, we refer to the vehicle transmitting information as the cooperative vehicle (CV) and the vehicle receiving information as the host vehicle (HV). Depending on the information shared among vehicles, the architecture of cooperative perception can be categorized into three types: data-level, feature-level, and track-level fusion.
In data-level fusion [12], the CV directly forwards the raw measurements, such as point clouds or images, to the HV, where most processing operations, including spatial registration and state estimation, are carried out. In feature-level fusion [13], on the other hand, the raw data are partially processed by each vehicle independently so as to extract low-dimensional features, which are then exchanged among vehicles. Different from the above two types, track-level fusion supposes that the raw measurements are completely processed by each vehicle to obtain a local state estimate of the target. The local estimate is then transmitted to other vehicles for further fusion. Generally, data-level fusion can yield a more accurate estimate because it makes full use of the available information; however, it may impose a large computation and communication burden. In contrast, track-level fusion [14,15] reduces the communication bandwidth but may lose useful information. Feature-level fusion strikes a balance between computation and communication load and can hence be viewed as a compromise between data-level and track-level fusion.
In [16], a cooperative perception approach exchanging raw LiDAR data among vehicles was proposed. In [13], an algorithm that shares features extracted from point cloud data among multiple vehicles was developed. In [17], a collaborative tracking algorithm was proposed for tracking multiple vehicles: a PHD filter [18] was independently executed by the HV and CV to obtain the vehicle states, and the PHD density was then transformed from the coordinate frame of the CV to that of the HV and merged with the local estimate via covariance intersection (CI) fusion [19]. The resulting cooperative perception was further applied to overtaking decisions [20].
To achieve reliable cooperative tracking, the relative localization between the HV and CV is an important factor used to convert the local measurement or track estimate of the target from the frame of one vehicle to that of another. In the literature, most works assume such information is perfectly known when conducting cooperative tracking [21]. Currently, satellite-based navigation systems, such as the global positioning system (GPS) and Beidou, have been widely applied in vehicle localization. However, the localization signal is subject to noise from different sources. The signal can be degraded or even blocked when traveling in dense urban areas close to buildings or vegetation, leading to large localization errors [22,23]. As a result, the relative location between vehicles calculated from the localization signal is expected to be uncertain and time-varying. Few works have investigated the cooperative tracking problem with imperfect localization information. In [24], the high-precision and low-precision positioning situations are characterized by two separate observation likelihood models. In [25], an approach for jointly estimating the states of the vehicle and the targets based on the Poisson multi-Bernoulli mixture (PMBM) filter [26] was developed to account for uncertain vehicle localization; however, the localization noise covariance was assumed to be exactly known. In [27], a cooperative pedestrian tracking algorithm was presented for GPS-denied environments. In [28], a robust cooperative tracking algorithm was proposed for the situation where localization information is not available.
In summary, achieving reliable cooperative target tracking, especially when the relative localization noise is uncertain and time-varying, is a nontrivial task. Therefore, this paper presents a recursive Bayesian framework to address the above problem. Specifically, both the HV and CV collect measurements of the same target, and the information collected by the CV is transmitted to the HV. Under the assumption that the uncertainty of the relative position and orientation between the two vehicles may vary over time, the HV attempts to estimate the joint state of the target and CV as well as the noise parameters based on the measurements. To guarantee the conjugate relationship, a solution algorithm based on variational Bayesian (VB) inference is then developed, such that the system state and the noise parameters can be corrected alternately. VB inference is widely applied in the machine learning community [29,30] and has been successfully introduced into target tracking and sensor fusion [31,32,33] in recent years. Overall, the contributions of this paper can be summarized as follows:
1.
Our algorithm can be used in dynamic environments, in the sense that the statistical characteristics of the localization noise are uncertain and time-varying.
2.
The cooperative tracking problem is formulated in a recursive Bayesian framework, and an efficient VB inference algorithm is proposed.
3.
Simulation shows that the tracking error of cooperative tracking can be reduced by between 9.5% and 18.15% compared with non-cooperative tracking, depending on the uncertainty level of the localization noise.
The remainder of this paper is organized as follows. In Section 2, we give the system description and model definition based on recursive Bayesian framework. In Section 3, the principle of VB inference is briefly explained. Then, the specific solution algorithm for the estimation of system state and the localization noise parameter is presented in Section 4. In Section 5, the proposed approach is evaluated through computer simulation experiments. Finally, Section 6 gives the conclusions of the paper.

2. System Description and Model Definition

The scenario considered in this work is shown in Figure 1. The host and cooperative vehicles collect information about the same target using their individual on-board sensors. The CV then transmits its measurement of the target, along with its own localization information, to the HV. Finally, the HV fuses the received measurement with its own measurement of the same target based on the relative localization between the HV and CV. It should be emphasized that the scenario considered in this work is quite general and can be extended to more complex situations by integrating with other technologies. For example, by introducing data association, such as the global nearest-neighbor method, our model can be immediately used for multiple target tracking.
As shown in Figure 1, taking the coordinate frame of HV as reference frame, we assume that the motion state of both target and CV evolve following linear state space model. Therefore, we can combine their individual motion state together to yield an augmented dynamic model as
$$x_k = F_k x_{k-1} + w_k, \qquad (1)$$
where $k = 1, 2, 3, \ldots$ is the time step and $x_k \in \mathbb{R}^n$ is the system state. Specifically, $x_k = [x_k^t, x_k^c]^T$ consists of two parts: $x_k^t = [p_{x,k}^t, p_{y,k}^t, \dot{p}_{x,k}^t, \dot{p}_{y,k}^t]^T$ represents the motion state of the target at time step $k$, including the position and velocity along the $x$ and $y$ directions, and $x_k^c = [p_{x,k}^c, p_{y,k}^c, \dot{p}_{x,k}^c, \dot{p}_{y,k}^c, \theta_k^c, \dot{\theta}_k^c]^T$ represents the motion state of the cooperative vehicle, which includes the heading angle $\theta_k^c$ and heading angle rate $\dot{\theta}_k^c$ in addition to the position and velocity. $F_k \in \mathbb{R}^{n \times n}$ is the state transition matrix, and $w_k \in \mathbb{R}^n$ is the process noise following the Gaussian distribution $\mathcal{N}(w_k; 0, Q)$ with zero mean and covariance matrix $Q$. We also assume that the system state $x_1$ at the first time step follows a Gaussian distribution with mean vector $\hat{x}_{1|1}$ and covariance matrix $P_{1|1}$, i.e., $\mathcal{N}(x_1; \hat{x}_{1|1}, P_{1|1})$.
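As a concrete illustration, the augmented transition matrix $F_k$ can be assembled from a constant-velocity block for each vehicle plus a heading block for the CV. The following Python sketch assumes a sampling period of 0.1 s (the value used later in the simulation); the helper and variable names are illustrative only.

```python
import numpy as np

dt = 0.1  # sampling period (assumed; matches the simulation in Section 5)

# Constant-velocity block acting on [px, py, vx, vy]
F_cv = np.array([[1.0, 0.0, dt, 0.0],
                 [0.0, 1.0, 0.0, dt],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]])
# Heading block acting on [theta, theta_dot]
F_heading = np.array([[1.0, dt],
                      [0.0, 1.0]])

def block_diag(*blocks):
    """Stack square blocks along the diagonal of a zero matrix."""
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

# x_k = [x_k^t (4 states), x_k^c (6 states)] -> 10x10 transition matrix
F_k = block_diag(F_cv, F_cv, F_heading)
```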
For the cooperative tracking scenario considered in this work, the statistical properties of the relative localization noise of the cooperative vehicle are time-varying and uncertain, while the statistical properties of the observation noise of the target, as measured by the on-board sensors of the HV and CV, are known and stable. Therefore, we have the following measurement equation for the system state
$$\begin{cases} y_k^1 = h(x_k) + v_k^1 \\ y_k^2 = H x_k + v_k^2 \end{cases} \qquad (2)$$
where $y_k^1 = [y_k^t, y_k^{tc}]^T \in \mathbb{R}^{m_1}$ and $y_k^2 = [y_k^c, \theta_k^c]^T \in \mathbb{R}^{m_2}$. Specifically, $y_k^t$ and $y_k^{tc}$ represent the measurements of the same target in the coordinate frames of the HV and CV, respectively, and $v_k^1$ is the corresponding measurement noise. $y_k^c$ and $\theta_k^c$ are the measurements of the CV in the coordinate frame of the HV, namely the observed position in the $x$ and $y$ directions and the heading angle. In this work, we assume that the measurement noise $v_k^1$ obeys the Gaussian distribution $\mathcal{N}(v_k^1; 0, \Sigma_k^1)$ with known covariance matrix $\Sigma_k^1 \in \mathbb{R}^{m_1 \times m_1}$, while the measurement noise $v_k^2$ also follows a Gaussian distribution $\mathcal{N}(v_k^2; 0, \Sigma_k^2)$, but with uncertain and time-varying covariance matrix $\Sigma_k^2 = \mathrm{diag}(\sigma_{k|k,1}^2, \ldots, \sigma_{k|k,m_2}^2)$, where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix.
According to the relationship between the coordinate frames of the HV and CV shown in Figure 1, the measurement function $h(x_k)$ in (2) can be expressed as
$$h(x_k) = \begin{bmatrix} I_2 & 0_{2\times2} & 0_{2\times2} & 0_{2\times2} & 0_{2\times2} \\ R & 0_{2\times2} & -R & 0_{2\times2} & 0_{2\times2} \end{bmatrix} x_k, \qquad (3)$$
where $R = \begin{bmatrix} \cos\theta_k^c & \sin\theta_k^c \\ -\sin\theta_k^c & \cos\theta_k^c \end{bmatrix}$. For the observation $y_k^2$, the measurement matrix in (2) is
$$H = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}. \qquad (4)$$
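The two measurement models can be sketched in Python as follows. This fragment encodes one plausible reading of the nonlinear function $h(\cdot)$ and the linear matrix $H$ above, assuming the state ordering $[p_x^t, p_y^t, \dot{p}_x^t, \dot{p}_y^t, p_x^c, p_y^c, \dot{p}_x^c, \dot{p}_y^c, \theta^c, \dot{\theta}^c]$; it is a sketch, not the authors' implementation.

```python
import numpy as np

def h(x):
    """Nonlinear measurement: target position in the HV frame, followed by
    the target position expressed in the CV frame via the rotation R(theta)."""
    p_t = x[0:2]            # target position
    p_c = x[4:6]            # CV position
    th = x[8]               # CV heading
    R = np.array([[np.cos(th),  np.sin(th)],
                  [-np.sin(th), np.cos(th)]])
    return np.concatenate([p_t, R @ (p_t - p_c)])

# Linear measurement matrix selecting the CV position and heading
H = np.zeros((3, 10))
H[0, 4] = 1.0   # p_x^c
H[1, 5] = 1.0   # p_y^c
H[2, 8] = 1.0   # theta^c
```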
In our online estimation model, the measurement set $y_{1:k} = \{y_1, \ldots, y_k\}$ is observable, while the system state $x_k$ and the localization noise covariance $\Sigma_k^2$ are viewed as hidden variables that need to be estimated based on $y_{1:k}$. The aim of optimal recursive Bayesian filtering is to estimate the posterior probability distribution of $x_k$ and $\Sigma_k^2$ given the observed data $y_{1:k}$, so as to realize the joint estimation of the unknown variables. It generally consists of two steps. First, according to the Chapman–Kolmogorov (CK) equation, the prediction of the unknown variables is given by
$$p(x_k, \Sigma_k^2 \mid y_{1:k-1}) = \iint p(x_k, \Sigma_k^2 \mid x_{k-1}, \Sigma_{k-1}^2)\, p(x_{k-1}, \Sigma_{k-1}^2 \mid y_{1:k-1})\, dx_{k-1}\, d\Sigma_{k-1}^2. \qquad (5)$$
Then, the observed information is incorporated via Bayes' theorem, yielding the following correction equation
$$p(x_k, \Sigma_k^2 \mid y_{1:k}) = \frac{p(y_k \mid x_k, \Sigma_k^2)\, p(x_k, \Sigma_k^2 \mid y_{1:k-1})}{\iint p(y_k \mid x_k, \Sigma_k^2)\, p(x_k, \Sigma_k^2 \mid y_{1:k-1})\, dx_k\, d\Sigma_k^2}. \qquad (6)$$
Given the measurements $y_{1:k-1}$, we assume the joint posterior distribution of $x_{k-1}$ and $\Sigma_{k-1}^2$ at time step $k-1$ can be approximated as the product of a Gaussian distribution and inverse-Gamma distributions, that is,
$$p(x_{k-1}, \Sigma_{k-1}^2 \mid y_{1:k-1}) = p(x_{k-1} \mid y_{1:k-1})\, p(\Sigma_{k-1}^2 \mid y_{1:k-1}), \qquad (7)$$
where we have
$$\begin{cases} p(x_{k-1} \mid y_{1:k-1}) = \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}, P_{k-1|k-1}) \\ p(\Sigma_{k-1}^2 \mid y_{1:k-1}) = \prod_{t=1}^{m_2} \mathrm{IG}(\sigma_{k-1,t}^2; \alpha_{k-1|k-1,t}, \beta_{k-1|k-1,t}) \end{cases} \qquad (8)$$
Here, $\mathrm{IG}(\sigma_{k-1,t}^2; \alpha_{k-1|k-1,t}, \beta_{k-1|k-1,t})$ denotes the inverse-Gamma distribution with shape parameter $\alpha_{k-1|k-1,t}$ and scale parameter $\beta_{k-1|k-1,t}$. In summary, the proposed cooperative tracking model under uncertain localization noise can be represented by the probabilistic graphical model (PGM) [34] shown in Figure 2.

3. Principle of Variational Bayesian Inference

Suppose that we want to infer the posterior distribution $p(\Phi \mid Z)$ of the hidden variables $\Phi$ given the observed data $Z$ under the Bayesian framework. By Bayes' theorem, we have
$$p(\Phi \mid Z) = \frac{p(Z \mid \Phi)\, p(\Phi)}{p(Z)} = \frac{p(Z \mid \Phi)\, p(\Phi)}{\int p(Z \mid \Phi)\, p(\Phi)\, d\Phi}, \qquad (9)$$
where $p(Z \mid \Phi)$ is the likelihood of the observed data $Z$, $p(\Phi)$ is the prior distribution, and $p(Z)$ denotes the marginal likelihood of the data. For many problems, it is infeasible to evaluate the posterior $p(\Phi \mid Z)$ analytically or numerically. To address this problem, VB inference seeks a variational distribution over the hidden variables that approximates the true posterior distribution as closely as possible. Specifically, the logarithm of $p(Z)$ can always be decomposed as [35]
$$\log p(Z) = \mathcal{F}(q(\Phi)) + \mathrm{KL}\big(q(\Phi)\,\|\,p(\Phi \mid Z)\big), \qquad (10)$$
where $q(\Phi)$ is the introduced variational distribution aiming to approximate the intractable posterior distribution $p(\Phi \mid Z)$, and $\mathcal{F}(q(\Phi))$ is the free energy of the following form
$$\mathcal{F}(q(\Phi)) = \int q(\Phi) \log \frac{p(\Phi, Z)}{q(\Phi)}\, d\Phi, \qquad (11)$$
and $\mathrm{KL}(q(\Phi)\,\|\,p(\Phi \mid Z))$ denotes the Kullback–Leibler (KL) divergence between $q(\Phi)$ and $p(\Phi \mid Z)$, which measures the discrepancy between the two probability distributions. That is,
$$\mathrm{KL}\big(q(\Phi)\,\|\,p(\Phi \mid Z)\big) = \int q(\Phi) \log \frac{q(\Phi)}{p(\Phi \mid Z)}\, d\Phi. \qquad (12)$$
Equation (10) indicates that the sum of the free energy and the KL divergence is always equal to the logarithm of the marginal likelihood. In the VB framework, we seek the variational posterior $q(\Phi)$ by minimizing the KL divergence (12), so that the variational distribution approximates the true posterior. However, it is infeasible to minimize (12) directly because $p(\Phi \mid Z)$ is unknown. Instead, we can equivalently maximize the free energy $\mathcal{F}(q(\Phi))$ in (11), since $\log p(Z)$ is constant given the observation $Z$. Furthermore, let $\Phi$ be partitioned into disjoint groups $\Phi_1, \Phi_2, \ldots, \Phi_M$ with $\bigcup_{m=1}^{M} \Phi_m = \Phi$. Then, based on mean-field theory, we suppose that $q(\Phi)$ has the factorized form
$$q(\Phi) = \prod_{m=1}^{M} q(\Phi_m), \qquad (13)$$
which indicates that $\Phi_m$ and $\Phi_l$ ($l \neq m$) are independent of each other. In order to maximize $\mathcal{F}(q(\Phi))$ with the factorized form (13), an iterative approach is adopted in which each $q(\Phi_m)$ is optimized in turn while keeping the other $q(\Phi_l)$, $l \neq m$, fixed. In such a case, the optimal $q(\Phi_m)$ is given by [35]
$$\log q(\Phi_m) \propto \big\langle \log p(\Phi, Z) \big\rangle_{q(\Phi_l),\, l \neq m}, \qquad (14)$$
where $\langle \cdot \rangle_{q(\Phi_l),\, l \neq m}$ denotes the expectation with respect to all $\Phi_l$ except $\Phi_m$. The above iteration continues until some convergence criterion is satisfied.
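To make the alternating update (14) concrete, consider a deliberately simple mean-field example: inferring the unknown mean and precision of a scalar Gaussian from data (a textbook conjugate case, not this paper's tracking model). All prior values and variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(2.0, 1.0, size=500)   # observed data: true mean 2, variance 1
N, zbar = len(z), z.mean()

# Vague Normal-Gamma priors (illustrative values)
mu0, k0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

E_lam = 1.0                          # initial guess for E[lambda]
for _ in range(20):                  # alternate the two updates of Eq. (14)
    # q(mu) = N(mu_n, var_mu), holding q(lambda) fixed
    mu_n = (k0 * mu0 + N * zbar) / (k0 + N)
    var_mu = 1.0 / ((k0 + N) * E_lam)
    # q(lambda) = Gamma(a_n, b_n), holding q(mu) fixed
    a_n = a0 + (N + 1) / 2
    E_mu2 = mu_n**2 + var_mu
    b_n = b0 + 0.5 * (np.sum(z**2) - 2 * mu_n * np.sum(z) + N * E_mu2
                      + k0 * (E_mu2 - 2 * mu0 * mu_n + mu0**2))
    E_lam = a_n / b_n                # posterior mean of the precision
```

After a few iterations, $\mu_n$ approaches the sample mean and $1/\mathbb{E}[\lambda]$ approaches the sample variance. The same fixed-point structure reappears in Section 4, where the Gaussian state update and the inverse-Gamma noise update play the two alternating roles.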

4. Online Variational Bayesian Inference of Parameters

In this section, we concentrate on the solution of the proposed cooperative tracking model. Following the recursive Bayesian framework, the prediction step (5) and correction step (6) are derived to achieve joint estimation of the system state and the noise parameters.

4.1. Prediction

We assume the system state $x_k$ and the noise parameter $\Sigma_k^2$ are independent of each other given their previous estimates. Therefore, we have
$$p(x_k, \Sigma_k^2 \mid x_{k-1}, \Sigma_{k-1}^2) = p(x_k \mid x_{k-1})\, p(\Sigma_k^2 \mid \Sigma_{k-1}^2). \qquad (15)$$
Substituting (7) and (15) into (5) yields
$$p(x_k, \Sigma_k^2 \mid y_{1:k-1}) = p(x_k \mid y_{1:k-1})\, p(\Sigma_k^2 \mid y_{1:k-1}), \qquad (16)$$
where we have
$$\begin{cases} p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1} \\ p(\Sigma_k^2 \mid y_{1:k-1}) = \int p(\Sigma_k^2 \mid \Sigma_{k-1}^2)\, p(\Sigma_{k-1}^2 \mid y_{1:k-1})\, d\Sigma_{k-1}^2 \end{cases} \qquad (17)$$
Considering the state evolution model (1) and the posterior distribution of the system state at time step $k-1$ in (8), we have
$$p(x_k \mid y_{1:k-1}) = \mathcal{N}(x_k; \hat{x}_{k|k-1}, P_{k|k-1}), \qquad (18)$$
where $\hat{x}_{k|k-1}$ and $P_{k|k-1}$ are given by the Kalman filter prediction equations
$$\begin{cases} \hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} \\ P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q \end{cases} \qquad (19)$$
For the prediction of the localization noise covariance, it is difficult to specify an evolution model with the desired conjugacy property. In order to implement recursive estimation while accounting for the time-varying characteristics of the unknown localization noise covariance, a heuristic model [31] is assumed such that
$$p(\Sigma_k^2 \mid y_{1:k-1}) = \prod_{t=1}^{m_2} \mathrm{IG}(\sigma_{k|k-1,t}^2; \alpha_{k|k-1,t}, \beta_{k|k-1,t}), \qquad (20)$$
where the shape and scale parameters are given by
$$\begin{cases} \alpha_{k|k-1,t} = \rho\, \alpha_{k-1|k-1,t} \\ \beta_{k|k-1,t} = \rho\, \beta_{k-1|k-1,t} \end{cases} \qquad (21)$$
for $t = 1, 2, \ldots, m_2$. Here, $\rho$ ($0 < \rho \leq 1$) is called the forgetting factor, reflecting the fluctuation characteristics of the noise statistics. Finally, substituting (18) and (20) into (16), we obtain the predictive distribution of the system state and the localization noise covariance as
$$p(x_k, \Sigma_k^2 \mid y_{1:k-1}) = \mathcal{N}(x_k; \hat{x}_{k|k-1}, P_{k|k-1}) \times \prod_{t=1}^{m_2} \mathrm{IG}(\sigma_{k|k-1,t}^2; \alpha_{k|k-1,t}, \beta_{k|k-1,t}). \qquad (22)$$
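The complete prediction step can thus be sketched as a single function (Python with numpy assumed; the function and argument names are illustrative):

```python
import numpy as np

def predict_step(x_post, P_post, alpha_post, beta_post, F, Q, rho=0.7):
    """Kalman prediction of the state mean and covariance, plus
    forgetting-factor spreading of the inverse-Gamma parameters."""
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    return x_prior, P_prior, rho * alpha_post, rho * beta_post
```

Multiplying both $\alpha$ and $\beta$ by $\rho$ leaves the ratio $\beta/\alpha$ (the working noise estimate) unchanged while flattening the distribution, which is exactly the intended "forgetting" of stale noise statistics.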

4.2. Correction

It can be seen from the correction equation (6) that solving for the joint posterior distribution $p(x_k, \Sigma_k^2 \mid y_{1:k})$ of the system state and the localization noise covariance involves multiple integrals and is thus difficult to calculate directly. Therefore, following the VB inference principle described in Section 3, we construct a variational posterior distribution $q(x_k, \Sigma_k^2)$ that approximates the true posterior $p(x_k, \Sigma_k^2 \mid y_{1:k})$:
$$p(x_k, \Sigma_k^2 \mid y_{1:k}) \approx q(x_k, \Sigma_k^2) = Q_x(x_k)\, Q_\Sigma(\Sigma_k^2), \qquad (23)$$
where $Q_x(x_k)$ and $Q_\Sigma(\Sigma_k^2)$ are the approximate probability densities of the unknown $x_k$ and $\Sigma_k^2$. The KL divergence between the separable approximate distribution and the true posterior distribution is
$$\mathrm{KL}\big(Q_x(x_k) Q_\Sigma(\Sigma_k^2)\,\|\,p(x_k, \Sigma_k^2 \mid y_{1:k})\big) = \iint Q_x(x_k) Q_\Sigma(\Sigma_k^2) \log\!\left(\frac{Q_x(x_k) Q_\Sigma(\Sigma_k^2)}{p(x_k, \Sigma_k^2 \mid y_{1:k})}\right) dx_k\, d\Sigma_k^2. \qquad (24)$$
According to Equation (14), the logarithms of the approximate distributions of $x_k$ and $\Sigma_k^2$ are given by
$$\begin{cases} \log Q_x(x_k) \propto \int \log p(y_k, x_k, \Sigma_k^2 \mid y_{1:k-1})\, Q_\Sigma(\Sigma_k^2)\, d\Sigma_k^2 \\ \log Q_\Sigma(\Sigma_k^2) \propto \int \log p(y_k, x_k, \Sigma_k^2 \mid y_{1:k-1})\, Q_x(x_k)\, dx_k \end{cases} \qquad (25)$$
Furthermore, according to the graphical model shown in Figure 2, the log joint distribution of $y_k^1$, $y_k^2$, $x_k$, and $\Sigma_k^2$ can be expressed as
$$\log p(y_k^1, y_k^2, x_k, \Sigma_k^2 \mid y_{1:k-1}) = \log \mathcal{N}(y_k^1; h(x_k), \Sigma_k^1) + \log \mathcal{N}(y_k^2; H x_k, \Sigma_k^2) + \log \mathcal{N}(x_k; \hat{x}_{k|k-1}, P_{k|k-1}) + \log \prod_{t=1}^{m_2} \mathrm{IG}(\sigma_{k|k-1,t}^2; \alpha_{k|k-1,t}, \beta_{k|k-1,t}). \qquad (26)$$
We now focus on the derivation of the variational posterior distribution of $x_k$. Firstly, as can be seen from (3), the measurement function $h(x_k)$ is nonlinear in the system state $x_k$, which prevents direct recursive estimation. To deal with this problem, we follow the linearization of the extended Kalman filter (EKF) and expand the nonlinear function into a Taylor series, omitting the terms of order two and higher to obtain a linearized model. Specifically, let the Jacobian matrix of the measurement function $h(x_k)$ at the predicted system state $\hat{x}_{k|k-1}$ be
$$J_{H_k} = \left.\frac{\partial h(x)}{\partial x}\right|_{x = \hat{x}_{k|k-1}}. \qquad (27)$$
For convenience, the observations and the measurement matrices can be stacked as follows
$$\begin{cases} y_k = [y_k^1, y_k^2]^T \\ \bar{H}_k = [J_{H_k}, H]^T \end{cases} \qquad (28)$$
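When a closed-form Jacobian of $h(\cdot)$ is inconvenient, a finite-difference approximation can stand in for the analytic derivative above. The following generic sketch (Python, numpy assumed) is not specific to this paper's measurement model:

```python
import numpy as np

def numerical_jacobian(f, x0, h=1e-6):
    """Forward-difference Jacobian of a vector-valued function f at x0."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(f(x0))
    J = np.zeros((f0.size, x0.size))
    for j in range(x0.size):
        xp = x0.copy()
        xp[j] += h           # perturb one coordinate at a time
        J[:, j] = (np.asarray(f(xp)) - f0) / h
    return J
```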
Then, by substituting (26) and (28) into (25), we have
$$\log Q_x(x_k) \propto -\frac{1}{2}(y_k - \bar{H}_k x_k)^T \Sigma_k^{-1} (y_k - \bar{H}_k x_k) - \frac{1}{2}(x_k - \hat{x}_{k|k-1})^T P_{k|k-1}^{-1} (x_k - \hat{x}_{k|k-1}), \qquad (29)$$
where $\Sigma_k = \mathrm{diag}(\Sigma_k^1, \langle \Sigma_k^2 \rangle)$ with $\langle \Sigma_k^2 \rangle = \int \Sigma_k^2\, Q_\Sigma(\Sigma_k^2)\, d\Sigma_k^2$ representing the expectation of $\Sigma_k^2$ with respect to the variational posterior distribution $Q_\Sigma(\Sigma_k^2)$. According to (29), we immediately find that the variational posterior distribution of $x_k$ is Gaussian, $Q_x(x_k) = \mathcal{N}(x_k; \hat{x}_{k|k}, P_{k|k})$, where the parameters $\hat{x}_{k|k}$ and $P_{k|k}$ are given by
$$S_k = \bar{H}_k P_{k|k-1} \bar{H}_k^T + \Sigma_k, \qquad (30)$$
$$K_k = P_{k|k-1} \bar{H}_k^T S_k^{-1}, \qquad (31)$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - \bar{H}_k \hat{x}_{k|k-1}), \qquad (32)$$
$$P_{k|k} = (I - K_k \bar{H}_k) P_{k|k-1}. \qquad (33)$$
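The four correction equations above are a standard Kalman-style update with the expected noise covariance plugged in; a minimal sketch (Python, numpy assumed):

```python
import numpy as np

def gaussian_update(x_prior, P_prior, y, Hbar, Sigma):
    """Variational Gaussian correction of the state estimate."""
    S = Hbar @ P_prior @ Hbar.T + Sigma           # innovation covariance
    K = P_prior @ Hbar.T @ np.linalg.inv(S)       # gain
    x_post = x_prior + K @ (y - Hbar @ x_prior)   # corrected mean
    P_post = (np.eye(len(x_prior)) - K @ Hbar) @ P_prior  # corrected covariance
    return x_post, P_post
```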
In order to derive the variational posterior distribution of $\Sigma_k^2$, we substitute (26) into (25) and obtain
$$\log Q_\Sigma(\Sigma_k^2) \propto \sum_{t=1}^{m_2} -\left(\alpha_{k|k-1,t} + \frac{3}{2}\right) \log \sigma_{k|k,t}^2 - \sum_{t=1}^{m_2} \frac{1}{\sigma_{k|k,t}^2} \left\{ \beta_{k|k-1,t} + \frac{1}{2}\Big( H P_{k|k} H^T + (y_k^2 - H\hat{x}_{k|k})(y_k^2 - H\hat{x}_{k|k})^T \Big)_{tt} \right\}, \qquad (34)$$
As a result, we find that $\Sigma_k^2$ follows an inverse-Gamma distribution, $Q_\Sigma(\Sigma_k^2) = \prod_{t=1}^{m_2} \mathrm{IG}(\sigma_{k|k,t}^2; \alpha_{k|k,t}, \beta_{k|k,t})$, with the parameters $\alpha_{k|k,t}$, $\beta_{k|k,t}$ ($t = 1, 2, \ldots, m_2$) given by
$$\alpha_{k|k,t} = \alpha_{k|k-1,t} + \frac{1}{2}, \qquad (35)$$
$$\beta_{k|k,t} = \beta_{k|k-1,t} + \frac{1}{2}\Big( H P_{k|k} H^T + (y_k^2 - H\hat{x}_{k|k})(y_k^2 - H\hat{x}_{k|k})^T \Big)_{tt}, \qquad (36)$$
$$\langle \Sigma_k^2 \rangle = \mathrm{diag}\!\left( \frac{\beta_{k|k,1}}{\alpha_{k|k,1}}, \ldots, \frac{\beta_{k|k,m_2}}{\alpha_{k|k,m_2}} \right). \qquad (37)$$
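Correspondingly, the inverse-Gamma correction above can be written as (a sketch; Python with numpy assumed, names illustrative):

```python
import numpy as np

def ig_update(alpha_prior, beta_prior, y2, H, x_post, P_post):
    """Inverse-Gamma parameter correction and the resulting expected
    localization-noise covariance (diagonal entries beta/alpha)."""
    r = y2 - H @ x_post                       # localization residual
    M = H @ P_post @ H.T + np.outer(r, r)
    alpha = alpha_prior + 0.5
    beta = beta_prior + 0.5 * np.diag(M)      # only diagonal terms enter
    exp_Sigma2 = np.diag(beta / alpha)
    return alpha, beta, exp_Sigma2
```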
Since the correction equations of $Q_x(x_k)$ and $Q_\Sigma(\Sigma_k^2)$ depend on each other, the posterior distribution parameters need to be calculated alternately so that a more accurate approximation can be achieved. Considering the calculation cost, we adopt two termination conditions.
Condition 1: If the relative state change between two adjacent iterations is less than a threshold, the VB iteration is considered to have converged. The specific criterion is
$$\frac{\| \hat{x}_{k|k}^{i+1} - \hat{x}_{k|k}^{i} \|_2}{\| \hat{x}_{k|k}^{i} \|_2} < \varepsilon, \qquad (38)$$
where $\|\cdot\|_2$ is the Euclidean norm.
Condition 2: If the number of VB iterations reaches the predetermined maximum number $c$, the VB iteration is also terminated.

4.3. Algorithm

In summary, the proposed cooperative tracking algorithm under time-varying localization noise is summarized in Algorithm 1.
Algorithm 1 Multi-Vehicle Cooperative Target Tracking
Initialization: $\hat{x}_{1|1}$, $P_{1|1}$, $\{\alpha_{1|1,t}, \beta_{1|1,t}\}_{t=1}^{m_2}$
Time outer loop: for $k = 2, 3, \ldots$
 Prediction:
  $\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1}$
  $P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q$
  $\alpha_{k|k-1,t} = \rho\,\alpha_{k-1|k-1,t}$, $t = 1, 2, \ldots, m_2$
  $\beta_{k|k-1,t} = \rho\,\beta_{k-1|k-1,t}$, $t = 1, 2, \ldots, m_2$
 Correction:
  $J_{H_k} = \left.\partial h(x)/\partial x\right|_{x=\hat{x}_{k|k-1}}$
  $\bar{H}_k = [J_{H_k}, H]^T$, $y_k = [y_k^1, y_k^2]^T$
  $\alpha_{k|k,t}^0 = \alpha_{k|k-1,t}$, $\beta_{k|k,t}^0 = \beta_{k|k-1,t}$
  Variational inner loop: for $i = 0, 1, 2, \ldots$
   $\Sigma_k^{2,i+1} = \mathrm{diag}\big(\beta_{k|k,1}^i/\alpha_{k|k,1}^i, \ldots, \beta_{k|k,m_2}^i/\alpha_{k|k,m_2}^i\big)$
   $\Sigma_k^{i+1} = \mathrm{diag}\big(\Sigma_k^1, \Sigma_k^{2,i+1}\big)$
   $S_k^{i+1} = \bar{H}_k P_{k|k-1} \bar{H}_k^T + \Sigma_k^{i+1}$
   $K_k^{i+1} = P_{k|k-1} \bar{H}_k^T (S_k^{i+1})^{-1}$
   $\hat{x}_{k|k}^{i+1} = \hat{x}_{k|k-1} + K_k^{i+1}(y_k - \bar{H}_k \hat{x}_{k|k-1})$
   $P_{k|k}^{i+1} = (I - K_k^{i+1}\bar{H}_k) P_{k|k-1}$
   $\alpha_{k|k,t}^{i+1} = \alpha_{k|k-1,t} + \frac{1}{2}$, $t = 1, 2, \ldots, m_2$
   $\beta_{k|k,t}^{i+1} = \beta_{k|k-1,t} + \frac{1}{2}\big(H P_{k|k}^{i+1} H^T + (y_k^2 - H\hat{x}_{k|k}^{i+1})(y_k^2 - H\hat{x}_{k|k}^{i+1})^T\big)_{tt}$, $t = 1, 2, \ldots, m_2$
   Convergence judgment:
    Condition 1: $\|\hat{x}_{k|k}^{i+1} - \hat{x}_{k|k}^{i}\|_2 / \|\hat{x}_{k|k}^{i}\|_2 < \varepsilon$
    Condition 2: $i > c$
  End the inner loop
  $\hat{x}_{k|k} = \hat{x}_{k|k}^{I+1}$, $P_{k|k} = P_{k|k}^{I+1}$, where $I$ is the number of VB iterations
End the outer loop
Outputs: $\{\hat{x}_{k|k}, P_{k|k}\}_{k = 2, 3, \ldots}$
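For reference, the variational inner loop of Algorithm 1 can be condensed into a self-contained Python sketch (numpy assumed; the argument names and the simple convergence test are ours, not from the original implementation):

```python
import numpy as np

def vb_correction(x_prior, P_prior, alpha_prior, beta_prior,
                  y, Hbar, Sigma1, H, y2, eps=5e-6, max_iter=10):
    """Variational inner loop: Sigma1 is the known sensor-noise covariance;
    the localization-noise covariance is replaced by its current
    expectation beta/alpha in every iteration."""
    m1 = Sigma1.shape[0]
    m2 = alpha_prior.shape[0]
    n = x_prior.shape[0]
    alpha = alpha_prior + 0.5            # shape update, fixed over iterations
    beta = beta_prior.copy()
    x_post = x_prior.copy()
    P_post = P_prior.copy()
    for _ in range(max_iter):
        x_old = x_post
        # Expected stacked measurement-noise covariance
        Sigma = np.zeros((m1 + m2, m1 + m2))
        Sigma[:m1, :m1] = Sigma1
        Sigma[m1:, m1:] = np.diag(beta / alpha)
        # Gaussian update of the state
        S = Hbar @ P_prior @ Hbar.T + Sigma
        K = P_prior @ Hbar.T @ np.linalg.inv(S)
        x_post = x_prior + K @ (y - Hbar @ x_prior)
        P_post = (np.eye(n) - K @ Hbar) @ P_prior
        # Inverse-Gamma update of the scale parameters
        r = y2 - H @ x_post
        beta = beta_prior + 0.5 * np.diag(H @ P_post @ H.T + np.outer(r, r))
        # Condition 1: relative state change below the threshold
        if np.linalg.norm(x_post - x_old) < eps * np.linalg.norm(x_old):
            break
    return x_post, P_post, alpha, beta
```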

5. Simulation and Discussion

5.1. Simulation Scene Configuration

Assume that the target and CV move in a 2-D plane according to the constant-velocity model. The position of the target can be observed simultaneously by the HV and CV, and the position of the CV can be obtained by the HV through inter-vehicle communication. Taking the coordinate frame of the HV as the reference frame, the initial states of the target and the cooperative vehicle are $[30, 15, 1, 1]^T$ and $[20, 20, 2, 2]^T$, respectively. The state transition matrix $F_k$ in (1) is given by
$$F_k = \mathrm{diag}(F_1, F_1, F_2),$$
where $F_1 = \begin{bmatrix} 1 & 0 & t & 0 \\ 0 & 1 & 0 & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$, $F_2 = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$, and the sampling period is $t = 0.1$ s. The total number of time steps is 400, so the actual simulation time is 40 s. The heading angle of the cooperative vehicle is simulated according to the following equation:
$$\theta_k^c = \tan^{-1}\!\left( \frac{\dot{p}_{y,k}^c}{\dot{p}_{x,k}^c} \right) + \frac{5}{100} \sin\!\left( \frac{k}{100} \right).$$
The process noise $w_k$ in (1) is assumed to be zero-mean white Gaussian noise with covariance matrix $Q = \mathrm{diag}(Q_1, Q_2)$, where $Q_1 = I_2 \otimes 0.01^2 G G^T$, $I_2$ is the 2-D identity matrix, and $\otimes$ is the Kronecker product. The expressions for $G$ and $Q_2$ are given below [36]:
$$G = \begin{bmatrix} \frac{t^3}{3} & 0 & \frac{t^2}{2} & 0 \\ 0 & \frac{t^3}{3} & 0 & \frac{t^2}{2} \\ \frac{t^2}{2} & 0 & t & 0 \\ 0 & \frac{t^2}{2} & 0 & t \end{bmatrix}, \quad Q_2 = \begin{bmatrix} \frac{t^3}{3} & \frac{t^2}{2} \\ \frac{t^2}{2} & t \end{bmatrix}.$$
Suppose that the observation noises $v_k^1$ and $v_k^2$ in (2) obey zero-mean Gaussian distributions. Specifically, for the on-board sensor measurement noise $v_k^1$, we let the corresponding covariance be $\Sigma_k^1 = \mathrm{diag}(0.5, 0.5, 0.5, 0.5)$. For the localization noise $v_k^2$, we simulate the time-varying covariance $\Sigma_k^2$ at time step $k$ as follows:
$$\Sigma_k^2 = \begin{cases} \Sigma, & \text{if } k \in [1, 100] \\ 5\Sigma, & \text{if } k \in [101, 200] \\ \Sigma, & \text{if } k \in [201, 300] \\ 10\Sigma, & \text{if } k \in [301, 400] \end{cases}$$
where $\Sigma = \mathrm{diag}(\sigma^2, \sigma^2, 0.1\sigma^2)$. In the simulation, we let $\sigma^2$ vary in the set $\{0.1, 0.2, 0.3, 0.4, 0.5\}$ so as to investigate the tracking performance under different levels of localization uncertainty. Taking $\sigma^2 = 0.2$ as an instance, Figure 3 shows the observed position of the target and CV in the coordinate frame of the HV, Figure 4 shows the observed heading angle of the CV in the coordinate frame of the HV, and Figure 5 shows the observed position of the target in the coordinate frame of the CV.
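The piecewise noise schedule above can be reproduced with a small helper (Python with numpy; the function name is ours):

```python
import numpy as np

def sigma2_schedule(k, base):
    """Time-varying localization-noise covariance of the simulation:
    base covariance in [1, 100] and [201, 300], scaled by 5 in
    [101, 200] and by 10 in [301, 400]."""
    if 101 <= k <= 200:
        return 5 * base
    if 301 <= k <= 400:
        return 10 * base
    return base

# Example with sigma^2 = 0.2: Sigma = diag(sigma^2, sigma^2, 0.1 * sigma^2)
base = np.diag([0.2, 0.2, 0.1 * 0.2])
```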
With the above simulated scene, computer experiments are carried out to evaluate the performance of our proposed algorithm, termed variational Bayesian inference-cooperative tracking (VBI-CT). For comparison, we include three related methods: the Kalman filter (KF), EKF-cooperative tracking/static (EKF-CT/S), and EKF-cooperative tracking/dynamic (EKF-CT/D). For KF, we estimate the state of the target based solely on the HV without cooperation. For EKF-CT/S, we assume the localization noise covariance $\Sigma_k^2$ is static and equal to $\Sigma$, so that the states of the target and CV (i.e., (1) and (2)) can be recursively estimated by the EKF. EKF-CT/D is similar to EKF-CT/S, except that perfect knowledge of the dynamic localization noise is assumed (i.e., the exact value of $\Sigma_k^2$ at each time step is known). For all the algorithms, the initial positions of the target and CV are set to the corresponding position observations, while the velocities are simply set to zero since no prior information is available. For VBI-CT, several parameters need to be set before online estimation: the convergence threshold of the VB iteration $\varepsilon = 5 \times 10^{-6}$, the forgetting factor $\rho = 0.7$, the maximum number of iterations $c = 10$, and the initial values of the localization noise parameters $\alpha$ and $\beta$, which are simply set to 1.0.
To eliminate the influence of randomness, we perform a total of 100 Monte Carlo (MC) runs to compare KF, EKF-CT/S, EKF-CT/D, and VBI-CT. To evaluate the tracking performance of different algorithms, we adopt the root mean squared error (RMSE) and averaged root mean squared error (ARMSE), defined as
$$\mathrm{RMSE}(k) = \sqrt{ \frac{1}{M} \sum_{m=1}^{M} \left( (p_{x,k} - \hat{p}_{x,k}^m)^2 + (p_{y,k} - \hat{p}_{y,k}^m)^2 \right) },$$
$$\mathrm{ARMSE} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{RMSE}(k),$$
where $(p_{x,k}, p_{y,k})$ denotes the true target position at time step $k$, while $(\hat{p}_{x,k}^m, \hat{p}_{y,k}^m)$ is the corresponding estimate in the $m$-th MC run. As can be seen, the ARMSE is the RMSE averaged over the whole simulation time.
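The two metrics can be computed directly from the Monte Carlo position estimates; a vectorized sketch (Python with numpy, array shapes as commented, function names ours):

```python
import numpy as np

def rmse_per_step(true_xy, est_xy):
    """RMSE(k) over M runs. true_xy: (K, 2) true positions;
    est_xy: (M, K, 2) estimated positions across M Monte Carlo runs."""
    err2 = np.sum((est_xy - true_xy[None, :, :]) ** 2, axis=2)  # (M, K)
    return np.sqrt(err2.mean(axis=0))                           # (K,)

def armse(rmse_k):
    """ARMSE: time average of RMSE(k)."""
    return rmse_k.mean()
```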

5.2. Tracking Performance under Time-Varying Localization Noise

As mentioned above, we performed a total of 100 MC runs to calculate the tracking errors of the four algorithms. The experimental results obtained by KF, EKF-CT/S, EKF-CT/D, and VBI-CT as the localization noise covariance $\sigma^2$ increases from 0.1 to 0.5 are shown in Table 1, Table 2, Table 3, Table 4 and Table 5, respectively. Figure 6 shows the RMSE curves of the different algorithms when $\sigma^2 = 0.2$.
By analyzing the above experimental results, we find that the overall performance of KF is the worst among the four algorithms. The reason for this poor performance is that the target state is estimated exclusively from the observations of the HV, without considering the information from the CV. EKF-CT/S outperforms KF when the localization noise covariance $\sigma^2$ is small, indicating that the information from the CV has been successfully integrated with the local observations of the HV. However, as $\sigma^2$ increases, the performance of EKF-CT/S degenerates. This is because EKF-CT/S cannot adjust the localization noise covariance automatically and thus lacks adaptation to the dynamic environment. EKF-CT/D and VBI-CT work better than the other two methods, since they not only integrate the information from the CV but also take into account the dynamic variation of the localization noise uncertainty. We observe from the results that the performance of VBI-CT is very close to that of EKF-CT/D, which assumes that the exact time-varying localization noise is known. However, the applicability of EKF-CT/D is restricted in many real applications, where the uncertainty of the localization error is difficult or even impossible to obtain. Our proposed VBI-CT, on the other hand, can estimate such uncertainty automatically from the observed measurements, verifying the effectiveness of VBI-CT in dynamic environments. In addition, we notice that, as the localization noise variance increases, the tracking error of VBI-CT increases correspondingly because of the position uncertainty of the CV. Nevertheless, in comparison with KF, the tracking error of VBI-CT is reduced by about 9.5% to 18.15%, depending on the uncertainty level of the localization noise.

5.3. Run Time Comparison

The average execution time of all algorithms across 100 MC runs is shown in Table 6. Note that the execution time covers the computation over all 400 time steps. KF consumes the least time because it only needs to estimate the state of the target, leading to a very small state space. EKF-CT/S and EKF-CT/D are slower than KF because they both estimate the joint state of the target and the CV, leading to a larger state space. Finally, our proposed VBI-CT algorithm takes the longest time, because it needs to estimate more parameters and iterate the VB inference until convergence. Nevertheless, the time consumed in each time step is still very small (under 1 ms, given a total of about 0.2 s over 400 steps).
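A timing comparison of this kind can be reproduced with a simple wall-clock harness; the sketch below uses `time.perf_counter` and is our own illustrative setup (`average_run_time` and `filter_step` are hypothetical names, not the authors' code), assuming one filter update is invoked per measurement.

```python
import time

def average_run_time(filter_step, measurements, n_runs=100):
    """Wall-clock time of one full filtering pass (all time steps),
    averaged over n_runs Monte Carlo repetitions, in seconds.

    filter_step: callable performing one filter update on a measurement.
    measurements: the sequence of per-step measurements (e.g. 400 steps).
    """
    total = 0.0
    for _ in range(n_runs):
        t0 = time.perf_counter()
        for z in measurements:
            filter_step(z)  # one filter update per time step
        total += time.perf_counter() - t0
    return total / n_runs
```

Dividing the returned value by `len(measurements)` gives the per-step cost quoted in the text.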

5.4. Influence of Parameters

For the proposed VBI-CT, the forgetting factor 0 < ρ ≤ 1 is used to initialize the prior distribution of the localization noise at each time step and thus affects the subsequent VB inference procedure. A large ρ should be chosen when the localization noise varies slowly over time, whereas a small ρ is preferable if the localization noise changes quickly. Hence, we analyze the variation of the tracking error with respect to different values of ρ, increasing it from 0.5 to 1.0 in steps of 0.1. The resulting ARMSE curve is shown in Figure 7. It can be seen that the tracking error gradually decreases as ρ increases at the initial stage. However, when ρ is too large, the tracking error increases due to excessive self-pruning of the parameters by the VB algorithm [32]. Therefore, in the simulation, we set ρ = 0.7 .
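The role of the forgetting factor can be sketched as follows: at each time step the inverse-Gamma parameters of the previous posterior are discounted by ρ to form the new prior, which roughly preserves the estimated noise level while widening the distribution (in the spirit of the forgetting-factor construction in [31]). The function name and exact parameterization here are illustrative assumptions, not the paper's equations.

```python
def forget_prior(alpha, beta, rho):
    """Form the prior inverse-Gamma parameters for the next time step by
    discounting the previous posterior with forgetting factor rho.

    Scaling both parameters by rho (0 < rho <= 1) keeps the implied noise
    level roughly unchanged while inflating its uncertainty, so the filter
    can follow a time-varying noise covariance. rho = 1 corresponds to a
    stationary-noise assumption.
    """
    if not 0.0 < rho <= 1.0:
        raise ValueError("rho must lie in (0, 1]")
    return rho * alpha, rho * beta
```

A smaller ρ discards past evidence faster, which explains the trade-off observed in Figure 7.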
The number of iterations in the VB inference is another important factor affecting the tracking behavior. Generally, with more iterations, the variational distribution approximates the true posterior distribution more precisely, resulting in better tracking performance. However, too many iterations bring a heavier computational burden and reduce the tracking speed. Therefore, we investigate how the tracking error varies as the number of VB iterations increases from 1 to 10. The result is shown in Figure 8, together with variation curves under different values of the forgetting factor. As can be seen, the tracking error decreases rapidly during the first several iterations and converges after about three iterations. Therefore, the proposed cooperative tracking method shows fast convergence with respect to the number of VB iterations.
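This iterate-until-convergence behavior can be illustrated with a generic coordinate-ascent loop and an early-stopping test; `run_vb` and its stopping rule are our own illustrative choices, not the paper's exact update equations.

```python
def run_vb(update_fn, params0, max_iters=10, tol=1e-4):
    """Iterate a VB update until the largest parameter change falls below
    tol or max_iters is reached; returns (params, iterations_used).

    update_fn: maps the current parameter tuple to the next one
    (one full round of coordinate-ascent updates).
    """
    params = tuple(params0)
    for it in range(1, max_iters + 1):
        new_params = tuple(update_fn(params))
        delta = max(abs(a - b) for a, b in zip(new_params, params))
        params = new_params
        if delta < tol:
            break  # successive iterates barely move: treat as converged
    return params, it
```

For a contractive update, the change between iterates shrinks geometrically, which is consistent with the fast convergence seen in Figure 8.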
In the above experiments, we assume the information from the CV is transmitted to the HV immediately, without delay, so that the transmitted information and the local perception of the HV are temporally aligned. In real applications, however, due to limited communication bandwidth or the large number of vehicles involved in cooperation, communication delay is an important factor influencing the performance of cooperative tracking. Therefore, we study the tracking error under different communication delays, increasing the delay from 0.00 to 0.09 s in steps of 0.01 s; the results are shown in Figure 9. As the communication delay increases, the error of cooperative tracking gradually grows, and the impact of the delay is larger when the localization uncertainty is small.

5.5. Tracking Performance under Stationary Localization Noise

In this experiment, we assess the tracking performance of the different approaches when the localization noise is stationary, so as to evaluate the applicability of our proposed algorithm. In this case, EKF-CT/D reduces to EKF-CT/S since the covariance of the localization noise is constant. We gradually increase the value of σ 2 from 0.1 to 0.5 and show the corresponding results in Figure 10. As can be seen, the VBI-CT and EKF-CT/S algorithms still outperform KF in terms of state estimation, which further verifies the advantage of cooperative tracking over non-cooperative tracking. We also observe that, as the relative localization noise increases, the error of the cooperative tracking algorithms grows gradually, emphasizing the importance of relative localization in the cooperative scenario.

6. Conclusions

In this paper, a cooperative target tracking algorithm is developed to integrate the information from multiple vehicles. The method jointly estimates the state of the target and the CV as well as the localization noise parameter, modeled by an inverse-Gamma distribution. It is formulated in a recursive Bayesian framework, where the posterior distribution of the unknown variables is approximated via variational Bayesian inference. The performance of the proposed method is verified through computer simulation. The results over 100 Monte Carlo runs show that cooperative tracking can effectively reduce the tracking error even under time-varying localization uncertainty.

Author Contributions

Methodology, X.C.; Writing—Original Draft Preparation, Y.W.; Software, J.J., and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Key Research and Development Program of China (2018YFB0105000), the National Natural Science Foundation of China (grant nos. 61773184, 51875255, 6187444, U1564201, U1664258, U1762264, and 61601203), Six Talent Peaks project of Jiangsu Province (grant no. 2017-JXQC-007), the Key Research and Development Program of Jiangsu Province (BE2016149), the Natural Science Foundation of Jiangsu Province (BK2017153), the Key Project for the Development of Strategic Emerging Industries of Jiangsu Province (2016-1094, 2015-1084), and the Talent Foundation of Jiangsu University, China (no. 14JDG066).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, H.; Yu, Y.; Cai, Y.; Chen, X.; Chen, L.; Li, Y. Soft-weighted-average ensemble vehicle detection method based on single-stage and two-stage deep learning models. IEEE Trans. Intell. Veh. 2020, 1. [Google Scholar] [CrossRef]
  2. Wang, H.; Yu, Y.; Cai, Y.; Chen, X.; Chen, L.; Liu, Q. A comparative study of state-of-the-art deep learning algorithms for vehicle detection. IEEE Intell. Transp. Syst. Mag. 2019, 11, 82–95. [Google Scholar] [CrossRef]
  3. Kim, S.-W.; Liu, W.; Ang, M.H.; Frazzoli, E.; Rus, D. The impact of cooperative perception on decision making and planning of autonomous vehicles. IEEE Intell. Transp. Syst. Mag. 2015, 7, 39–50. [Google Scholar] [CrossRef]
  4. Kim, S.-W.; Qin, B.; Chong, Z.J.; Shen, X.; Liu, W.; Ang, M.H.; Frazzoli, E.; Rus, D. Multivehicle cooperative driving using cooperative perception: Design and experimental validation. IEEE Trans. Intell. Transp. Syst. 2014, 16, 663–680. [Google Scholar] [CrossRef]
  5. Rangesh, A.; Trivedi, M. No blind spots: Full-surround multi-object tracking for autonomous vehicles using cameras and LiDARs. IEEE Trans. Intell. Veh. 2019, 4, 588–599. [Google Scholar] [CrossRef] [Green Version]
  6. Wang, H.; Fang, J.; Cui, S.; Xu, H.; Xue, J. Multi-3D-object tracking by fusing RGB and 3D-LiDAR data. In Proceedings of the IEEE International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 941–946. [Google Scholar]
  7. Günther, H.-J.; Mennenga, B.; Trauer, O.; Riebl, R.; Wolf, L.C. Realizing collective perception in a vehicle. In Proceedings of the IEEE Vehicular Networking Conference (VNC), Columbus, OH, USA, 8–10 December 2016; pp. 1–8. [Google Scholar]
  8. Kenney, J.B. Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 2011, 99, 1162–1182. [Google Scholar] [CrossRef]
  9. Chen, S.; Hu, J.; Shi, Y.; Zhao, L. LTE-V: A TD-LTE-based V2X solution for future vehicular network. IEEE Internet Things J. 2016, 3, 997–1005. [Google Scholar] [CrossRef]
  10. Sánchez, M.G.; Táboas, M.P.; Cid, E.L. Millimeter wave radio channel characterization for 5G vehicle-to-vehicle communications. Measurement 2017, 95, 223–229. [Google Scholar] [CrossRef]
  11. Javed, M.A.; Zeadally, S.; Ben Hamida, E. Data analytics for cooperative intelligent transport systems. Veh. Commun. 2019, 15, 63–72. [Google Scholar] [CrossRef]
  12. Liggins, M., II; Hall, D.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  13. Marvasti, E.E.; Raftari, A.; Marvasti, A.E.; Fallah, Y.P.; Guo, R.; Lu, H. Cooperative LIDAR object detection via feature sharing in deep networks. arXiv 2020, arXiv:2002.08440. [Google Scholar]
  14. Grime, S.; Durrant-Whyte, H. Data fusion in decentralized sensor networks. Control. Eng. Pract. 1994, 2, 849–863. [Google Scholar] [CrossRef]
  15. Gabb, M.; Digel, H.; Muller, T.; Henn, R.-W. Infrastructure-supported perception and track-level fusion using edge computing. In Proceedings of the IV IEEE Intelligent Vehicles Symposium, Paris, France, 9 June 2019; pp. 1739–1745. [Google Scholar]
  16. Chen, Q.; Tang, S.; Yang, Q.; Fu, S. Cooper: Cooperative perception for connected autonomous vehicles based on 3D point clouds. In Proceedings of the 39th IEEE International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–9 July 2019; pp. 514–524. [Google Scholar]
  17. Vasic, M.; Martinoli, A. A collaborative sensor fusion algorithm for multi-object tracking using a Gaussian mixture probability hypothesis density filter. In Proceedings of the 18th IEEE International Conference on Intelligent Transportation Systems, Las Palmas, Spain, 15–18 September 2015; pp. 491–498. [Google Scholar]
  18. Vo, B.-N.; Ma, W.-K. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 2006, 54, 4091–4104. [Google Scholar] [CrossRef]
  19. Hurley, M.B. An information theoretic justification for covariance intersection and its generalization. In Proceedings of the Fifth International Conference on Information Fusion. FUSION 2002. (IEEE Cat.No.02EX5997); International Society of Information Fusion: Sunnyvale, CA, USA, 2002; Volume 1, pp. 505–511. [Google Scholar]
  20. Vasic, M.; Lederrey, G.; Navarro, I.; Martinoli, A. An overtaking decision algorithm for networked intelligent vehicles based on cooperative perception. In Proceedings of the IV IEEE Intelligent Vehicles Symposium, Gothenburg, Sweden, 19–22 June 2016; pp. 1054–1059. [Google Scholar]
  21. Thomaidis, G.; Vassilis, K.; Lytrivis, P.; Tsogas, M.; Karaseitanidis, G.; Amditis, A. Target tracking and fusion in vehicular networks. In Proceedings of the IV IEEE Intelligent Vehicles Symposium, Baden, Germany, 5–9 June 2011; pp. 1080–1085. [Google Scholar]
  22. Soatti, G.; Nicoli, M.; Garcia, N.; Denis, B.; Raulefs, R.; Wymeersch, H. Enhanced vehicle positioning in cooperative ITS by joint sensing of passive features. In Proceedings of the 20th IEEE International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6. [Google Scholar]
  23. Teunissen, P.J.G.; Odolinski, R.; Odijk, D. Instantaneous BeiDou+GPS RTK positioning with high cut-off elevation angles. J. Geodesy 2014, 88, 335–350. [Google Scholar] [CrossRef]
  24. Huang, C.; Wu, X. Cooperative vehicle tracking using particle filter integrated with interacting multiple models. In Proceedings of the IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6. [Google Scholar]
  25. Fröhle, M.; Lindberg, C.; Granstrom, K.; Wymeersch, H. Multisensor poisson multi-bernoulli filter for joint target–sensor state tracking. IEEE Trans. Intell. Veh. 2019, 4, 609–621. [Google Scholar] [CrossRef] [Green Version]
  26. García-Fernández, Á.F.; Williams, J.L.; Granström, K.; Svensson, L. Poisson multi-bernoulli mixture filter: Direct derivation and implementation. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1883–1901. [Google Scholar] [CrossRef] [Green Version]
  27. Kakinuma, K.; Ozaki, M.; Hashimoto, M.; Takahashi, K. Cooperative pedestrian tracking by multi-vehicles in GPS-denied environments. In Proceedings of the SICE Annual Conference (SICE), Akita, Japan, 20–23 August 2012; pp. 211–214. [Google Scholar]
  28. Chen, X.; Ji, J.; Wang, Y. Robust cooperative multi-vehicle tracking with inaccurate self-localization based on on-board sensors and inter-vehicle communication. Sensors 2020, 20, 3212. [Google Scholar] [CrossRef]
  29. Hu, X.; Sun, Y.; Gao, J.; Hu, Y.; Ju, F.; Yin, B. Probabilistic linear discriminant analysis based on L1-norm and its Bayesian variational inference. IEEE Trans. Cybern. 2020, 1–12. [Google Scholar] [CrossRef]
  30. Zhao, Q.; Meng, D.; Xu, Z.; Zuo, W.; Yan, Y. L1-norm low-rank matrix factorization by variational Bayesian method. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 825–839. [Google Scholar] [CrossRef]
  31. Sarkka, S.; Nummenmaa, A. Recursive noise adaptive Kalman filtering by variational Bayesian approximations. IEEE Trans. Autom. Control. 2009, 54, 596–600. [Google Scholar] [CrossRef]
  32. Zhu, H.; Hu, J.; Leung, H.; Zhang, B. Recursive variational Bayesian inference to simultaneous registration and fusion. Int. J. Adv. Robot. Syst. 2016, 13, 124. [Google Scholar] [CrossRef] [Green Version]
  33. Zhu, H.; Leung, H.; He, Z. A variational Bayesian approach to robust sensor fusion based on Student-t distribution. Inf. Sci. 2013, 221, 201–214. [Google Scholar] [CrossRef]
  34. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  35. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2006. [Google Scholar]
  36. Hosseini, S.S.; Jamali, M.M.; Särkkä, S. Variational Bayesian adaptation of noise covariances in multiple target tracking problems. Measurement 2018, 122, 14–19. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of cooperative tracking scene (HV: host vehicle, CV: cooperative vehicle).
Figure 2. Probabilistic graphical model representation of our model.
Figure 3. The position observation of the target and the cooperative vehicle in HV coordinate frame.
Figure 4. The heading angle observation of the cooperative vehicle in HV coordinate frame.
Figure 5. The position observation of the target in CV coordinate frame.
Figure 6. Variation curves of tracking errors obtained by different algorithms when σ 2 = 0.2 .
Figure 7. Tracking error of the proposed algorithm with different forgetting factor.
Figure 8. Tracking error of the proposed algorithm under different numbers of variational Bayesian (VB) iterations.
Figure 9. Tracking error of the proposed algorithm under different communication delays.
Figure 10. Tracking error under stationary localization noise.
Table 1. Comparison of tracking error of different algorithms with σ 2 = 0.1 .

Time Period   KF       EKF-CT/S   EKF-CT/D   VBI-CT
[1‒100]       0.3456   0.2679     0.2679     0.2739
[100‒200]     0.1622   0.1419     0.1411     0.1390
[200‒300]     0.1370   0.1233     0.1228     0.1122
[300‒400]     0.1281   0.1184     0.1116     0.1073
[1‒400]       0.1928   0.1626     0.1606     0.1578
Table 2. Comparison of tracking error of different algorithms with σ 2 = 0.2 .

Time Period   KF       EKF-CT/S   EKF-CT/D   VBI-CT
[1‒100]       0.3456   0.2816     0.2816     0.2810
[100‒200]     0.1622   0.1509     0.1461     0.1475
[200‒300]     0.1370   0.1280     0.1258     0.1195
[300‒400]     0.1281   0.1249     0.1139     0.1111
[1‒400]       0.1928   0.1711     0.1666     0.1645
Table 3. Comparison of tracking error of different algorithms with σ 2 = 0.3 .

Time Period   KF       EKF-CT/S   EKF-CT/D   VBI-CT
[1‒100]       0.3456   0.2900     0.2900     0.2872
[100‒200]     0.1622   0.1561     0.1490     0.1516
[200‒300]     0.1370   0.1307     0.1274     0.1237
[300‒400]     0.1281   0.1286     0.1154     0.1138
[1‒400]       0.1928   0.1761     0.1701     0.1688
Table 4. Comparison of tracking error of different algorithms with σ 2 = 0.4 .

Time Period   KF       EKF-CT/S   EKF-CT/D   VBI-CT
[1‒100]       0.3456   0.2958     0.2958     0.2926
[100‒200]     0.1622   0.1596     0.1509     0.1541
[200‒300]     0.1370   0.1326     0.1285     0.1266
[300‒400]     0.1281   0.1310     0.1164     0.1158
[1‒400]       0.1928   0.1795     0.1726     0.1720
Table 5. Comparison of tracking error of different algorithms with σ 2 = 0.5 .

Time Period   KF       EKF-CT/S   EKF-CT/D   VBI-CT
[1‒100]       0.3456   0.3001     0.3001     0.2972
[100‒200]     0.1622   0.1622     0.1523     0.1558
[200‒300]     0.1370   0.1340     0.1294     0.1286
[300‒400]     0.1281   0.1325     0.1173     0.1172
[1‒400]       0.1928   0.1819     0.1744     0.1744
Table 6. Total execution time (s) of different algorithms.

σ 2    KF       EKF-CT/S   EKF-CT/D   VBI-CT
0.1    0.0192   0.0371     0.0378     0.1803
0.2    0.0202   0.0380     0.0388     0.1880
0.3    0.0252   0.0477     0.0489     0.2331
0.4    0.0205   0.0390     0.0395     0.1894
0.5    0.0227   0.0462     0.0469     0.2197
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, X.; Wang, Y.; Chen, L.; Ji, J. Multi-Vehicle Cooperative Target Tracking with Time-Varying Localization Uncertainty via Recursive Variational Bayesian Inference. Sensors 2020, 20, 6487. https://doi.org/10.3390/s20226487
