Article

Multi-Target Joint Detection, Tracking and Classification Based on Marginal GLMB Filter and Belief Function Theory

1 School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China
2 China Academy of Launch Vehicle Technology, Beijing 100076, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(15), 4235; https://doi.org/10.3390/s20154235
Submission received: 19 June 2020 / Revised: 21 July 2020 / Accepted: 26 July 2020 / Published: 29 July 2020
(This article belongs to the Section Physical Sensors)

Abstract

This paper proposes a new solution to multi-target joint detection, tracking and classification based on the labeled random finite set (RFS) and belief function theory. A class-dependent multi-model marginal generalized labeled multi-Bernoulli (MGLMB) filter is developed to analytically calculate the multi-target number, state estimates and model probabilities. In addition, a two-level classifier based on the continuous transferable belief model (cTBM) is designed for target classification. To make full use of the kinematic characteristics for classification, both the dynamic modes and the states are considered in the classifier: the model-dependent class beliefs are computed on the continuous state feature subspaces corresponding to the different dynamic modes and then fused. Because the uncertainty about the classes is well described for decision making, the classification results are more reasonable and robust. Moreover, as the estimation and classification problems are jointly solved, both the tracking and classification performance are improved. In the simulation, a scenario containing multiple targets with missed detections and dense clutter is used. The multi-target detection, tracking and classification performance is better than that of traditional methods based on a Bayesian classifier or a single model. Simulation results are illustrated to demonstrate the effectiveness and superiority of the proposed algorithm.

1. Introduction

Multi-target joint detection, tracking and classification is a critical problem in radar systems. It consists of three subproblems: estimating the number of targets, estimating their kinematic states and determining their classes. These three subproblems are usually coupled: tracking provides the kinematic features that distinguish the target type; according to the target class, appropriate dynamic models can be chosen for accurate tracking; in addition, the detection of a target is the prerequisite of accurate multi-target tracking and classification [1,2,3]. Therefore, multi-target detection, tracking and classification need to be solved jointly.
Traditional joint detection, tracking and classification methods are usually developed based on Bayesian theory and density estimation in the case where only position measurements are available. The class probabilities and class-dependent multi-target density are calculated within the Bayesian framework. Recently, belief function theory has been proven effective in dealing with uncertain information, and many credal models have been proposed for target classification and identification, especially the transferable belief model (TBM) proposed by Smets [4,5,6]. Various joint tracking and classification algorithms have also been developed based on the TBM. These methods deal with the identification of objects as members of predefined classification categories. When only position measurements are available, the target classes are mainly judged according to the kinematic features, and the proposed solutions usually contain a tracker and a classifier. In [7], the dynamic behavior is described using a set of maneuver modes with different acceleration magnitudes according to prior knowledge. The dynamic state estimates and model probabilities are first calculated using a bank of Kalman filters in parallel, and the beliefs and plausibilities of the dynamic behaviors are then computed. According to the relationship between the dynamic behaviors and the target classes, the classification results are finally obtained. In [8], the Kalman filter is derived within the TBM framework, and the joint tracking and classification problem is solved within a unified belief-theoretic framework. In [9], the TBM is employed in wireless sensor networks, and terrain information is used to improve the performance. In [10], the particle filter is introduced into the TBM framework to solve the joint tracking and classification problem in a multi-sensor scenario. In [11], a second-order uncertainty model is proposed to describe the uncertain mapping from the dynamic feature space to the target class space, and a practical method based on the TBM is provided to calculate the class likelihood under a relaxed dependence assumption. However, due to the discrete nature of such set-theoretic uncertain reasoning approaches based on the TBM, these algorithms have difficulty modeling continuous signals. Therefore, dynamic features such as the state estimates cannot be directly used by the classifier, and the accuracy of the classification results suffers.
To overcome these inherent defects, belief function theory on continuous frames was recently developed by Smets, and the continuous transferable belief model (cTBM) was proposed in [12,13]. In this theory, the basic belief masses allocated to real values are generalized into belief densities on the whole real axis $\mathbb{R}$. The least committed belief functions are first built according to the prior pignistic probability density. Then the basic belief assignment of the target classes is calculated, and the classification results are finally derived using belief function tools. In these articles, the cTBM is also used for target classification, and the advantages compared to Bayesian methods are analyzed and illustrated. In [14], the joint tracking and classification problem with nonlinear trajectories is solved using the cTBM and a particle filter. As shown in the simulations, the classification results derived with the cTBM are more accurate than those of the method based on the TBM. However, this approach is difficult to apply to an n-dimensional state space $\mathbb{R}^n$ unless the dimensions are independent [13], which is usually not satisfied during the dynamic process. In [15], the belief functions are built on $\mathbb{R}^2$. In [16], a new algorithm is developed for the case where the knowledge of the sensors is an $\alpha$-stable probability density function, and the plausibility functions are calculated using a single Gaussian and an $\alpha$-stable model. However, the computational complexities of these methods are high, and it is difficult to extend these approaches to more complicated scenarios. Therefore, in conventional algorithms, the cTBM-based classification results are usually derived based on the velocity or acceleration alone, and the dynamic characteristics are not fully utilized.
The multi-target joint detection, tracking and classification problem in a cluttered environment is much more complex, especially with target number and observation uncertainty. Inaccurate multi-target state estimates and model probabilities calculated under association errors may lead to incorrect classification results. In [17], the multi-target joint tracking and classification problem is solved based on the global nearest neighbor (GNN) approach and the TBM. As shown in the simulations, when association errors occur during tracking, the classification performance deteriorates. Recently, the random finite set (RFS) theory has been developed, and many multi-target trackers have been proposed, such as the probability hypothesis density (PHD) filter, the cardinalized PHD (CPHD) filter and the cardinality balanced multi-target multi-Bernoulli (CMeMBer) filter [18,19]. These RFS-based trackers are integrated approaches for multi-target joint detection and tracking, and they provide the approximated multi-target density under association uncertainty. Compared to traditional approaches [20,21,22,23], the multi-target joint detection, tracking and classification problem has also been solved using multi-model PHD/CPHD [1,24,25,26,27,28] and CMeMBer filters [29]. However, because track information cannot be obtained directly from RFS-based filters, these algorithms only calculate the class-dependent multi-target density without explicit classification results for each target. In [30], the concept of the labeled RFS is proposed and the conjugate multi-target state distribution is established. Several multi-target state filters are also derived, such as the generalized labeled multi-Bernoulli (GLMB) filter [31], which provides an analytical solution for multi-target tracking and produces track-valued target state estimates. As demonstrated in the simulations, the performance of the GLMB filter is much better than that of trackers based on unlabeled RFSs. In [32], the target birth intensity is adaptively modeled and introduced into the multi-Bernoulli filter to improve the estimation performance while the filtering efficiency is maintained. However, the amount of computation of the GLMB filter increases exponentially with time [33]. In [34], the marginal GLMB (MGLMB) density is proposed to exactly match the posterior multi-target density and the cardinality distribution; this approximate multi-target density is propagated during the estimation process, and the MGLMB filter is developed. The performance of the MGLMB filter is close to that of the GLMB filter, but the computational complexity is largely reduced. In [35,36,37], multi-target detection, tracking and classification problems have also been solved using the GLMB filter; these approaches take advantage of the GLMB filter, and explicit class probabilities and state estimates of each target are produced. However, the estimation and classification results are still calculated within the Bayesian framework.
In this paper, a joint solution is proposed for multi-target detection, tracking and classification based on the labeled RFS and belief function theory in the case where only position measurements are available. A class-dependent multi-model MGLMB filter is developed to calculate the multi-target cardinality, state estimates and model probabilities, and a two-level classifier is designed based on the cTBM to identify the target classes using the kinematic data. According to the different dynamic models, the class beliefs are computed on the corresponding continuous dynamic feature subspaces given the state estimates. The model-dependent class beliefs are then fused to obtain the final classification results. The proposed algorithm has an analytical form, and explicit track-valued state estimates and class probabilities of each target are obtained. In addition, as both the target dynamic state estimates and the dynamic modes are considered, the designed classifier makes full use of the dynamic characteristics to improve the classification performance. Furthermore, because the uncertainty of the classes is well described using the cTBM, and the detection, tracking and classification problems are jointly solved, both the state estimation and the classification performance are improved. The particle implementation of the proposed algorithm is also detailed.
This article is organized as follows. An introduction of the cTBM and the MGLMB filter is presented in Section 2. Section 3 proposes a multi-target joint detection, tracking and classification algorithm. The simulation results are illustrated in Section 4. Finally, conclusions are summarized in Section 5.

2. Theory Foundation

2.1. Classification in the TBM Framework

The transferable belief model aims to represent quantified beliefs based on the belief functions. This model works on two levels: (1) The credal level, in which the beliefs are quantified. (2) The pignistic level, in which the decisions are made [4].
In [8], a joint tracking and classification (JTC) algorithm is proposed in the TBM framework. Compared to probability theory, within the TBM the likelihood of a hypothesis is equated to the conditional plausibility of the observation given the hypothesis. Assume that $B = \{b_i\},\ i = 1, \dots, n$ is the set of target behaviors and $Z_t$ is the measurement; then $pl_i = pl(Z_t \mid b_i)$ denotes the measurement likelihood (conditional plausibility) for behavior $b_i$. Considering a vacuous prior on $B$, the posterior belief can be computed using the Generalized Bayesian Theorem (GBT):
$$m^B(b \mid Z_t) = \prod_{i:\, b_i \in b} pl_i \prod_{j:\, b_j \notin b} \left(1 - pl_j\right), \quad b \subseteq B$$
The class beliefs can then be derived according to the relation between the class set $C$ and the behavior set $B$ as
$$m^C = \bar{M} \cdot m^B$$
where $m^B$ is the vector of basic belief assignments (BBAs) whose elements correspond to the elements of the power set of $B$, and similarly for $m^C$. Therefore, within the TBM framework, the beliefs are allocated to the elements of the class focal set, which is the power set of the classes, and explicit class probabilities are only calculated when the decision is made. On the contrary, the Bayesian classifier computes explicit class probabilities based on the measurement likelihood at each time step.
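As a small numerical illustration of the GBT in Equation (1) and the class-mapping relation above, the sketch below (our own; the behavior set, plausibility values and relation matrix are assumptions rather than values from the paper) builds the mass assignment over the power set of B and maps it to class masses.

```python
from itertools import combinations

import numpy as np

def gbt_mass(pl):
    """Mass over the power set of the behavior set B computed from the
    per-behavior plausibilities pl_i, assuming a vacuous prior on B."""
    n = len(pl)
    subsets, masses = [], []
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            inside = float(np.prod([pl[i] for i in idx]))
            outside = float(np.prod([1.0 - pl[j] for j in range(n) if j not in idx]))
            subsets.append(frozenset(idx))
            masses.append(inside * outside)
    return subsets, np.array(masses)

# Hypothetical behaviors b1 = "non-maneuvering", b2 = "maneuvering".
pl = [0.9, 0.3]                           # assumed pl(Z_t | b_i)
subsets, m_B = gbt_mass(pl)               # focal order: {}, {b1}, {b2}, {b1, b2}

# Class mapping m_C = M_bar * m_B; here a hypothetical one-to-one relation
# (b1 -> c1, b2 -> c2, ambiguity maps to the union of the classes).
M_bar = np.eye(4)
m_C = M_bar @ m_B
print(dict(zip(["empty", "c1", "c2", "c1 u c2"], m_C.round(3))))
```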

2.2. Belief Functions on R and Least Committed Belief Density

In [12], Smets extended belief theory to continuous frames and proposed the continuous transferable belief model (cTBM). Compared with the TBM, the cTBM operates on a continuous domain: the basic belief masses are generalized into basic belief densities (bbds), and the belief functions are computed by integrating density functions over the relevant space.
Consider the construction of belief functions on the real space $\mathbb{R}$, and denote a nonempty interval on the real axis by $[a, b] \subseteq \mathbb{R}$, $a < b$. We assume that masses are only allocated to closed intervals, so that the function $m: \mathcal{A} \to \mathbb{R}$ is a basic belief assignment, where $\mathcal{A}$ is the focal set consisting of the closed intervals $A \subseteq \mathbb{R}$. As illustrated in Figure 1, each interval $[a, b]$ corresponds to a point $K$ lying in the triangle, and the point $K = (a, b)$ inside the triangle $T[0,1]$ uniquely defines the interval $[a, b] \subseteq [0, 1]$. The belief of each point is equal to that of the corresponding interval, $P\{(x, y) = (a, b)\} = m^{\mathcal{A}}([a, b])$. Consequently, the belief, plausibility and commonality of an arbitrary interval $[a, b]$ are defined as the sums of the masses of the intervals $[x, y]$ satisfying $\{x \geq a \text{ AND } y \leq b\}$, $\{a \leq x \leq b \text{ OR } a \leq y \leq b\}$ and $\{x \leq a \text{ AND } y \geq b\}$, respectively.
The commonality denotes the mass that can be assigned to any interval within, straddling, or outside $[a, b]$. The pignistic probability is the result of mapping a belief measure to a probability measure. For the singletons within the triangle space, it can be calculated as
$$Betf(s) = \sum_{s \in A \subseteq [0,1]} \frac{m(A)}{|a^* - a_*|\,(1 - m(\emptyset))}$$
where $a_* = \inf\{a : a \in A\}$, $a^* = \sup\{a : a \in A\}$, and $|a^* - a_*|$ is the interval length. Assume that $f^{T[0,1]}$ is the basic belief density function; when the number of focal sets is infinite, the summation is replaced by the integration
$$Betf(a) = \lim_{\epsilon \to 0} \int_{x=0}^{x=a} \int_{y=a+\epsilon}^{y=1} \frac{f^{T[0,1]}(x, y)}{y - x}\, dy\, dx$$
Because many bbds share the same pignistic transformation in Equation (2), the belief density that maximizes the commonality is defined as the least committed (LC) belief density. As given in [12], the focal interval sets of this density can be represented by a line in the triangle space, which has two properties: (1) for unimodal densities, the line starts at $(\mu, \mu)$, where $\mu = \arg\max_a Betf(a)$ and $pl^T([\mu, \mu]) = 1$; (2) for all symmetrical pignistic densities $Betf$, the line is straight and centered at $\mu$:
$$y = -(x - 2\mu), \quad -\infty < x < \mu$$
The LC belief density $\varphi(a)$ of the interval $A = [a, \bar{a}]$ is
$$\varphi(a) = (\bar{a} - a)\, \frac{d\,Betf(a)}{da}$$
where $\bar{a}$ is defined by the property $Betf(\bar{a}) = Betf(a)$; note that $\bar{a}$ is a function of $a$. The plausibility function $pl(x)$ can be computed by integrating the basic belief density over the focal intervals that have a non-empty intersection with $x$:
$$pl(x) = (x - \bar{x})\, Betf(x) + \int_{x}^{+\infty} \left(1 - \frac{d\bar{a}}{da}\right) Betf(a)\, da$$
Suppose that the pignistic density is Gaussian, $Betf(x) = \mathcal{N}(x; \mu, \sigma)$. In terms of the normalized variable $y = |x - \mu|/\sigma$, the LC belief density and the corresponding plausibility $pl(x)$ are
$$\kappa(y) = \frac{2 y^2}{\sqrt{2\pi}}\, e^{-y^2/2}$$
$$pl(y) = \frac{2 y}{\sqrt{2\pi}}\, e^{-y^2/2} + \operatorname{erfc}\!\left(y / \sqrt{2}\right)$$
where $\operatorname{erfc}(s) = (2/\sqrt{\pi}) \int_{s}^{\infty} e^{-t^2}\, dt$.
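The short sketch below (our own check, not from the paper) evaluates the Gaussian LC belief density and plausibility above and verifies numerically that $pl(y)$ equals the total mass of all focal intervals containing the normalized point $y$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def lc_bbd(y):
    """LC basic belief density for a standard-normal pignistic density:
    mass density attached to the symmetric focal interval [-y, y], y >= 0."""
    return (2.0 * y**2 / np.sqrt(2.0 * np.pi)) * np.exp(-(y**2) / 2.0)

def lc_plausibility(y):
    """Plausibility of the normalized point y under the same density."""
    return (2.0 * y / np.sqrt(2.0 * np.pi)) * np.exp(-(y**2) / 2.0) + erfc(y / np.sqrt(2.0))

# pl(y) should equal the total mass of all focal intervals [-t, t] with t >= y.
y0 = 1.2
integral, _ = quad(lc_bbd, y0, np.inf)
print(lc_plausibility(y0), integral)      # the two values agree up to quadrature error

# Example: plausibility of a velocity estimate under a class-conditional Gaussian
# pignistic density Betf = N(mu, sigma); mu, sigma and the estimate are assumed.
mu, sigma, v_est = 850.0, 60.0, 920.0
print(lc_plausibility(abs(v_est - mu) / sigma))
```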
To solve the JTC problem in the cTBM framework, beliefs are transferred when new information is received. Let $\Theta = \{\theta_1, \dots, \theta_n\}$ be the frame of the target classes. The posterior conditional basic belief assignment $m^\Theta(A \mid Z)$ of $x \in X$ can be calculated using the prior belief $bel^\Theta$ and the plausibility function $pl^\Theta(\cdot \mid x)$ given $x \in X$, based on the GBT given in (1). Then the a posteriori conditional belief function over $\theta \in A$ can be recursively calculated from $m(A \mid Z)$ and the prior belief $m_{k-1}(A)$ based on Dempster's rule:
$$m_k(A) = m(A \mid Z_k) \oplus m_{k-1}(A)$$
The main difference between the TBM and the cTBM is that in the TBM the least committed plausibility is computed for the behavior, whereas in the cTBM the plausibility can be computed directly from the estimates. In a multi-target tracking scenario the dynamic states are continuous variables, so the cTBM is more effective than the TBM.

2.3. Marginal Generalized Labeled Multi-Bernoulli Filter

In [30], Vo et al. introduced the notion of the labeled RFS and constructed a conjugate prior. Suppose that the state vector $x$ in the space $\mathbb{X}$ is augmented with a unique label $\ell \in \mathbb{L}$, where $\mathbb{L}$ is a discrete label space, and $\mathbf{X}$ represents the labeled RFS. Let $\mathcal{L}: \mathbb{X} \times \mathbb{L} \to \mathbb{L}$ be the projection $\mathcal{L}((x, \ell)) = \ell$; $\mathcal{L}(\mathbf{X})$ is then the label set of $\mathbf{X}$. The distinct label indicator $\Delta(\mathbf{X}) = \delta_{|\mathbf{X}|}(|\mathcal{L}(\mathbf{X})|)$ ensures the distinctness of the labels of $\mathbf{X}$. All finite subsets of $\mathbb{L}$ are denoted by $\mathcal{F}(\mathbb{L})$. In a realization of the marginal generalized labeled multi-target state $\mathbf{X}$, the labels of any two state vectors must be distinct. The distribution of a marginal GLMB RFS is
$$\pi(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \omega(I)\, \delta_I(\mathcal{L}(\mathbf{X}))\, \left[p^{(I)}\right]^{\mathbf{X}}$$
where $I \in \mathcal{F}(\mathbb{L})$ denotes a set of track labels, $\omega(I)$ is the corresponding weight, and $p^{(I)}(\cdot, \ell)$ is the state probability density of track $\ell$.
Because the multi-target conjugate prior constructed by introducing the notion of labeled RFSs is closed under the Chapman–Kolmogorov equation, the marginal GLMB density can be propagated within the Bayesian framework. An analytic solution for the multi-target state estimation is thus provided, called the marginal GLMB filter [31]. The marginal GLMB filter consists of the following two steps.
1. Prediction: suppose that the multi-target prior is a GLMB density of form (12); the predicted multi-target density is then also a GLMB RFS with state space $\mathbb{X}$ and label space $\mathbb{L}_+ = \mathbb{L} \cup \mathbb{B}$ ($\mathbb{L} \cap \mathbb{B} = \emptyset$), where $\mathbb{L}$ and $\mathbb{B}$ are the label spaces of the surviving and birth targets. Using the standard inner product notation $\langle f, g \rangle = \int f(x)\, g(x)\, dx$, the predicted multi-target density is given by
$$\pi_+(\mathbf{X}_+) = \Delta(\mathbf{X}_+) \sum_{I \in \mathcal{F}(\mathbb{L}_+)} \omega_+(I)\, \delta_I(\mathcal{L}(\mathbf{X}_+)) \left[p_+^{(I)}\right]^{\mathbf{X}_+}$$
where
$$\omega_+(I) = \omega_B(I \cap \mathbb{B})\, \omega_S^{(I)}(I \cap \mathbb{L})$$
$$p_+^{(I)}(x, \ell) = 1_{\mathbb{L}}(\ell)\, p_S^{(I)}(x, \ell) + \left(1 - 1_{\mathbb{L}}(\ell)\right) p_B(x, \ell)$$
$$p_S^{(I)}(x, \ell) = \frac{\left\langle p_S(\cdot, \ell)\, f(x \mid \cdot, \ell),\; p^{(I)}(\cdot, \ell) \right\rangle}{\eta_S^{(I)}(\ell)}$$
$$\eta_S^{(I)}(\ell) = \int \left\langle p_S(\cdot, \ell)\, f(x \mid \cdot, \ell),\; p^{(I)}(\cdot, \ell) \right\rangle dx$$
$$\omega_S^{(I)}(L) = \left[\eta_S^{(I)}\right]^{L} \sum_{I \supseteq L} 1_I(L) \left[q_S^{(I)}\right]^{I \setminus L} \omega(I)$$
$$q_S^{(I)}(\ell) = \left\langle q_S(\cdot, \ell),\; p^{(I)}(\cdot, \ell) \right\rangle$$
For the predicted multi-target density given a predicted label set $I$, the weight $\omega_+(I)$ is the product of the weights of the new-birth and surviving target labels. The surviving target density is predicted using the transition function $f(x \mid \cdot, \ell)$.
2. Update: if the multi-target predicted density is a GLMB of the form (11), the multi-target posterior density is also a GLMB and has the following form:
$$\pi(\mathbf{X} \mid Z) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \sum_{\theta \in \Theta} \omega^{(I, \theta)}(Z)\, \delta_I(\mathcal{L}(\mathbf{X})) \left[p^{(I, \theta)}(\cdot \mid Z)\right]^{\mathbf{X}}$$
where $\Theta$ is the space of mappings $\theta: \mathbb{L} \to \{0, 1, \dots, |Z|\}$ such that $\theta(i) = \theta(i') > 0$ implies $i = i'$, and
$$\omega^{(I, \theta)}(Z) \propto \omega(I) \left[\eta_Z^{(I, \theta)}\right]^{I}$$
$$p^{(I, \theta)}(x, \ell \mid Z) = \frac{p^{(I)}(x, \ell)\, \psi_Z(x, \ell; \theta)}{\eta_Z^{(I, \theta)}(\ell)}$$
$$\eta_Z^{(I, \theta)}(\ell) = \left\langle p^{(I)}(\cdot, \ell),\; \psi_Z(\cdot, \ell; \theta) \right\rangle$$
$$\psi_Z(x, \ell; \theta) = \delta_0(\theta(\ell)) \left(1 - p_d(x, \ell)\right) + \left(1 - \delta_0(\theta(\ell))\right) \frac{p_d(x, \ell)\, g(z_{\theta(\ell)} \mid x, \ell)}{\kappa(z_{\theta(\ell)})}$$
In the update step, the posterior target density $p^{(I, \theta)}(x, \ell \mid Z)$ given label $\ell$ is explicitly calculated from the predicted multi-target density via Bayes' rule with the likelihood function $\psi_Z$. The updated weight $\omega^{(I, \theta)}(Z)$ is proportional to the prior weight scaled by the product $[\eta_Z^{(I, \theta)}]^I$ of single-target normalizing constants. Here, $p_d(x, \ell)$ is the detection probability of track $\ell$, $g(z \mid x, \ell)$ is the likelihood function for the measurement $z$, and $\kappa(\cdot)$ is the intensity of the Poisson clutter.
Computing the marginalized GLMB density that exactly matches the posterior multi-target density and the cardinality distribution of (18), the approximate multi-target density is
$$\hat{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \delta_I(\mathcal{L}(\mathbf{X}))\, w(I) \left[p^{(I)}\right]^{\mathbf{X}}$$
where
$$w(I) = \sum_{\theta \in \Theta(I)} w^{(I, \theta)}$$
$$p^{(I)}(x, \ell) = 1_I(\ell)\, \frac{1}{w(I)} \sum_{\theta \in \Theta(I)} w^{(I, \theta)}\, p^{(I, \theta)}(x, \ell)$$
As given in (23), a principled GLMB approximation $\hat{\pi}(\mathbf{X})$ to the posterior density $\pi(\mathbf{X})$ is constructed by marginalizing over the association histories. Because the number of components propagated in the Bayesian recursion is drastically reduced, the computational complexity is largely reduced.
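A minimal sketch (our own, with toy data structures; short particle-weight arrays stand in for the single-target densities) of the marginalization above: update components indexed by the pair $(I, \theta)$ are collapsed into a single component per label set $I$ by summing their weights and forming the weight-averaged mixture of their densities.

```python
from collections import defaultdict

import numpy as np

def marginalize_glmb(components):
    """Collapse GLMB update components keyed by (I, theta) into marginal
    components keyed by I alone: weights are summed and the track densities
    are mixed with the component weights.

    components: {(I, theta): (weight, {label: particle_weight_array})}
    where I is a frozenset of labels; the arrays are toy stand-ins for the
    single-target densities.
    """
    w_sum = defaultdict(float)
    mix = defaultdict(dict)
    for (I, _theta), (w, dens) in components.items():
        w_sum[I] += w
        for lbl, p in dens.items():
            mix[I][lbl] = mix[I].get(lbl, 0.0) + w * np.asarray(p)
    return {I: (w, {lbl: p / w for lbl, p in mix[I].items()}) for I, w in w_sum.items()}

# Toy example: one label set, two association maps (all numbers are assumptions).
I = frozenset({1})
comps = {
    (I, "theta_a"): (0.6, {1: np.array([0.7, 0.3])}),
    (I, "theta_b"): (0.4, {1: np.array([0.2, 0.8])}),
}
marg = marginalize_glmb(comps)
print(marg[I][0])        # summed weight: 1.0
print(marg[I][1][1])     # mixed density: 0.6*[0.7, 0.3] + 0.4*[0.2, 0.8] = [0.5, 0.5]
```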

3. Multi-Target Joint Detection, Tracking and Classification Algorithm

This section first presents the mathematical formulation of the problem, then proposes the multi-target joint detection, tracking and classification algorithm based on the labeled RFS and cTBM. The particle implementation of the proposed algorithm is also detailed.

3.1. Problem Formulation

Assume that the class of each target is a time-invariant attribute taking values from a discrete set $C = \{c_1, c_2, \dots, c_n\}$, and that the class-dependent dynamic model set is $M_c = \{o_1, o_2, \dots, o_m\}$. The models switch according to an underlying Markov process with class-dependent model transition probability $f(o \mid o', c)$. The target dynamic and measurement equations at time $k$ are
$$x_k = f_{k|k-1, o}(x_{k-1}) + w_k$$
$$z_k = h_k(x_k) + v_k$$
where $f_{k|k-1, o}(\cdot)$ is the model-dependent dynamic equation and $h_k(\cdot)$ is the observation function of the targets. Both $f_{k|k-1, o}(\cdot)$ and $h_k(\cdot)$ are possibly nonlinear, and $w_k$ and $v_k$ are uncorrelated Gaussian process and measurement noises.
In the multi-target scenario, each target appears and disappears randomly with birth probability $p_{b,k}$ and survival probability $p_{s,k}$. At time $k$, $X_k = \{x_{k,1}, x_{k,2}, \dots, x_{k,n}\}$ is the multi-target state set with a time-varying number of targets, and $Z_k = \{z_{k,1}, z_{k,2}, \dots, z_{k,m}\}$ is the observation set, which consists of measurements from targets and clutter, with detection probability $p_{d,k}(x_k)$ and clutter density $c_k$.

3.2. Multi-Target Joint Detection, Tracking and Classification Algorithm Based on Labeled RFS and cTBM

The proposed algorithm contains a tracker and a classifier. Starting from the prior multi-target density, the class-dependent multi-target posterior density is first calculated using the multi-model marginal GLMB filter. Then, the classification results are derived using the two-level classifier based on the cTBM. In the classifier, the model-dependent class basic belief assignments (BBAs) are computed using the state estimates, and the beliefs are then fused to derive the final classification results. The algorithm diagram is shown in Figure 2. The particle implementation is also provided in this section.
A. Class dependent multi-model MGLMB filter
(1) Prediction: Assume that at time $k-1$ the class-dependent posterior multi-target density is
$$\pi_{k-1}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \sum_{c \in C} \omega_{k-1}(I)\, \delta_I(\mathcal{L}(\mathbf{X})) \left[p_{k-1, \ell}^{(I)}(\cdot, o \mid c)\right]^{\mathbf{X}}$$
where $[p_{k-1, \ell}^{(I)}(\cdot, o \mid c)]^{\mathbf{X}}$ is the model-dependent multi-target density given the classes; the density of each target $\ell$ can be further expressed as $p_{k-1, \ell}(x, o \mid c)$. At time $k$, the predicted multi-target density is
$$\pi_{k+}(\mathbf{X}_+) = \Delta(\mathbf{X}_+) \sum_{I \in \mathcal{F}(\mathbb{L}_+)} \sum_{c \in C} \omega_{k+}(I)\, \delta_I(\mathcal{L}(\mathbf{X}_+)) \left[p_{k+, \ell}^{(I)}(\cdot, o \mid c)\right]^{\mathbf{X}_+}$$
where
$$\omega_{k+}(I) = \omega_B(I \cap \mathbb{B})\, \omega_S^{(I)}(I \cap \mathbb{L})$$
$$p_{k+, \ell}^{(I)}(x, o \mid c) = \sum_{o' \in M_c} p_{k+, \ell}^{(I)}(o \mid c)\, \frac{\left\langle p_S^{(I)}(x')\, f(x \mid x', o, c),\; p_{k-1, \ell}^{(I)}(x' \mid o', c) \right\rangle}{\eta_S^{(I)}(\ell)}$$
$$\eta_S^{(I)}(\ell) = \sum_{o' \in M_c} p_{k-1, \ell}^{(I)}(o' \mid c) \int p_{s, \ell}(x')\, p_{k-1, \ell}^{(I)}(x' \mid o', c)\, dx'$$
$$p_{k+, \ell}^{(I)}(o \mid c) = \sum_{o' \in M_c} p(o \mid o', c)\, p_{k-1, \ell}^{(I)}(o' \mid c)$$
In these equations, $\mathbb{L}$ and $\mathbb{B}$ denote the label spaces of the surviving targets and the new-birth targets, respectively, $f(x \mid x', o, c)$ is the model-dependent dynamic equation, $p(o \mid o', c)$ is the class-dependent model transition probability, and $p_{s, \ell}$ is the survival probability of track $\ell$.
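A short sketch (our own, with assumed transition matrices and model probabilities) of the class-dependent model-probability prediction in (32): for each class $c$, the predicted model probabilities are a Markov step under the class-dependent transition matrix $p(o \mid o', c)$.

```python
import numpy as np

# Class-dependent model transition matrices P_c[o_prev, o_next] (assumed values,
# rows sum to one). Models: 0 = CV, 1 = CA, 2 = CT.
P = {
    "commercial": np.array([[0.90, 0.05, 0.05],
                            [0.05, 0.90, 0.05],
                            [0.05, 0.05, 0.90]]),
    "fighter":    np.array([[0.60, 0.20, 0.20],
                            [0.20, 0.60, 0.20],
                            [0.20, 0.20, 0.60]]),
}

def predict_model_probs(p_prev, cls):
    """p_{k+}(o | c) = sum_{o'} p(o | o', c) * p_{k-1}(o' | c)."""
    return P[cls].T @ p_prev

p_prev = np.array([0.8, 0.1, 0.1])     # posterior model probabilities at time k-1
for cls in P:
    print(cls, predict_model_probs(p_prev, cls))
```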
(2) Update: Assume that the predicted multi-target density is represented as
$$\pi_{k+}(\mathbf{X}_+) = \Delta(\mathbf{X}_+) \sum_{I \in \mathcal{F}(\mathbb{L}_+)} \sum_{c \in C} \omega_{k+}(I)\, \delta_I(\mathcal{L}(\mathbf{X}_+)) \left[p_{k+, \ell}^{(I)}(\cdot, o \mid c)\right]^{\mathbf{X}_+}$$
When the measurement set $Z$ is received, the multi-target posterior density is
$$\pi_k(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{\xi \in \Xi} \sum_{c \in C} \omega_k^{(\xi)}\, \delta_I(\mathcal{L}(\mathbf{X})) \left[p_{k, \ell}^{(\xi)}(\cdot, o \mid c)\right]^{\mathbf{X}}$$
where $\xi \in \Xi$ represents $(I, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta$, $\Theta$ is the space of mappings $\theta: \mathbb{L} \to \{0, 1, \dots, |Z|\}$ such that $\theta(i) = \theta(i') > 0$ implies $i = i'$, and
$$\omega_{c, k}^{(\xi)}(Z) \propto \omega_{c, k+}(I) \left[\eta_Z^{(\xi)}\right]^{I}$$
$$p_{k, \ell}^{(\xi)}(x \mid o, c) = \frac{p_{k+, \ell}^{(I)}(x \mid o, c)\, \psi_Z^{(\xi)}(x \mid o, c)}{\eta_Z^{(\xi)}(\ell \mid o, c)}$$
$$\eta_Z^{(\xi)}(\ell \mid o, c) = \left\langle p_{k+, \ell}^{(I)}(x \mid o, c),\; \psi_Z^{(\xi)}(x \mid o, c) \right\rangle$$
$$\psi_Z^{(\xi)}(x \mid o, c) = \delta_0(\theta(\ell))\, q_d(x \mid o, c) + \left(1 - \delta_0(\theta(\ell))\right) \frac{p_d(x)\, g(z_{\theta(\ell)} \mid x)}{\kappa(z_{\theta(\ell)})}$$
$$p_{k, \ell}^{(\xi)}(o \mid c) = \frac{p_{k+, \ell}^{(I)}(o \mid c)\, \eta_Z^{(\xi)}(\ell \mid o, c)}{\eta_Z^{(\xi)}(\ell \mid c)}$$
$$\eta_Z^{(\xi)}(\ell \mid c) = \sum_{o \in M_c} p_{k+, \ell}^{(I)}(o \mid c)\, \eta_Z^{(\xi)}(\ell \mid o, c)$$
In these equations, $\delta_0(\theta(\ell))$ indicates whether there is a measurement associated with the target, $g(z_{\theta(\ell)} \mid x)$ is the likelihood function and $p_d$ is the detection probability of the target. $[p_{k, \ell}^{(\xi)}(\cdot, o \mid c)]^{\mathbf{X}}$ is the product of the single-target densities $p_{k, \ell}^{(\xi)}(\cdot, o \mid c)$ given the predicted labels $I$ and the association map $\theta$. The class-dependent marginal GLMB density that matches the class-dependent posterior multi-target density and the cardinality distribution is
$$\hat{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} \sum_{c \in C} \omega_k(I)\, \delta_I(\mathcal{L}(\mathbf{X})) \left[p_{k, \ell}^{(I)}(\cdot, o \mid c)\right]^{\mathbf{X}}$$
where
$$\omega_k(I) = \sum_{\theta \in \Theta} \omega_k^{(\xi)}$$
$$p_{k, \ell}^{(I)}(x, o \mid c) = 1_I(\ell)\, \frac{1}{\omega_k(I)} \sum_{\theta \in \Theta} \omega_k^{(\xi)}\, p_{k, \ell}^{(\xi)}(x, o \mid c)$$
and the estimated target number is
$$N = \sum_{I \in \mathcal{F}(\mathbb{L})} |I| \cdot \omega_k(I)$$
where $|I|$ is the number of targets (labels) in the label set $I$.
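As a one-line illustration of the cardinality estimate above (the weights are hypothetical), the expected target number is the weight-averaged cardinality of the hypothesized label sets:

```python
# Hypothetical marginal GLMB weights over label sets I (they sum to one).
weights = {
    frozenset(): 0.05,
    frozenset({1}): 0.15,
    frozenset({1, 2}): 0.60,
    frozenset({1, 2, 3}): 0.20,
}

# Expected target number: N = sum_I |I| * w(I).
n_hat = sum(len(I) * w for I, w in weights.items())
print(n_hat)   # 0*0.05 + 1*0.15 + 2*0.60 + 3*0.20 = 1.95, i.e. about two targets
```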
B. Classification based on cTBM
The dynamic features are associated with both the dynamic states and the motion modes. For example, a target shows higher maneuverability when executing a sharp turn than when flying straight at the same speed, and the cruising speed of a fighter may exceed the speed of sound, which is much faster than most ordinary planes. Obviously, classification based on either the dynamic states or the motion modes alone is partial. Therefore, a two-level classifier is designed here in which both factors are considered. The proposed classifier consists of the following components.
(1) Classification on the state feature subspace
Because the dynamic states of different dimensions, such as the speed and acceleration, are usually jointly distributed, the calculation of the beliefs based on the cTBM on a one-dimensional continuous state feature space is difficult to extend directly to a multi-dimensional feature space. Assume that $x_{k, o, \ell}$ is the posterior state estimate of track $\ell$ given model $o$. In the proposed algorithm, a one-dimensional state feature subspace is first chosen corresponding to each dynamic model, and the conditional LC plausibility $pl(x_{k, o, \ell} \mid c_i)$ is calculated using the state sub-vector $\tilde{x}_{k, o, \ell} \in \mathbb{R}$. Then the beliefs are fused to derive the final classification results. Suppose that $Betf(\tilde{x}_{k, o, \ell} \mid c_i, o_j)$ is the prior pignistic density function of target class $c_i$ and model $o_j$; the LC plausibility can then be computed as
$$pl(\tilde{x}_{k, o, \ell} \mid c_i, o_j) = \left(\tilde{x}_{k, o, \ell} - \phi(\tilde{x}_{k, o, \ell})\right) Betf(\tilde{x}_{k, o, \ell} \mid c_i, o_j) + \int_{\tilde{x}_{k, o, \ell}}^{+\infty} \left(1 - \frac{d\phi(a)}{da}\right) Betf(a \mid c_i, o_j)\, da$$
where $\phi(\tilde{x}_{k, o, \ell})$ satisfies
$$Betf\left(\phi(\tilde{x}_{k, o, \ell}) \mid c_i, o_j\right) = Betf\left(\tilde{x}_{k, o, \ell} \mid c_i, o_j\right)$$
and the state sub-vector $\tilde{x}_{k, o, \ell}$ is selected according to the different modes; for example, the velocity or the acceleration is chosen for classification under the CV or CA model, respectively. Using the Generalized Bayesian Theorem (GBT), the model-dependent BBAs of the classes are
$$m(C \mid \tilde{x}_{k, o, \ell}, o_j) = \prod_{c_i \in C} pl(\tilde{x}_{k, o, \ell} \mid c_i, o_j) \times \prod_{c_i \in \bar{C}} \left[1 - pl(\tilde{x}_{k, o, \ell} \mid c_i, o_j)\right]$$
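The sketch below (our own; the class names, Gaussian pignistic parameters and velocity value are illustrative assumptions) evaluates the LC plausibility of Equation (46) in closed form for Gaussian priors and then forms the model-conditioned class BBAs of Equation (48) with the GBT.

```python
from itertools import combinations

import numpy as np
from scipy.special import erfc

def lc_pl(x, mu, sigma):
    """Closed-form LC plausibility of a scalar feature x under a Gaussian
    pignistic density N(mu, sigma) -- the Gaussian special case of Eq. (46)."""
    y = abs(x - mu) / sigma
    return (2.0 * y / np.sqrt(2.0 * np.pi)) * np.exp(-(y**2) / 2.0) + erfc(y / np.sqrt(2.0))

def gbt_class_bba(pls):
    """Eq. (48): BBA over the power set of the classes, built with the GBT
    from the per-class LC plausibilities."""
    n = len(pls)
    bba = {}
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            inside = float(np.prod([pls[i] for i in idx]))
            outside = float(np.prod([1.0 - pls[j] for j in range(n) if j not in idx]))
            bba[frozenset(idx)] = inside * outside
    return bba

# Hypothetical class-conditional velocity priors (km/h) for the CV mode.
priors = {"commercial": (850.0, 60.0), "bomber": (900.0, 120.0), "fighter": (1100.0, 250.0)}
v_est = 1150.0                                    # velocity estimate from the tracker
pls = [lc_pl(v_est, mu, s) for mu, s in priors.values()]
for focal, mass in gbt_class_bba(pls).items():
    print(sorted(focal), round(mass, 3))
```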
(2) Classification with multi-model
Let $(C, X, O)$ denote the credibility space, where $X$ and $O$ are the state and model frames, respectively, while the beliefs are allocated to the elements of $2^C$. Combining the model-dependent beliefs $m(C \mid \tilde{x}_{k, o, \ell}, o)$ calculated before, the final class BBAs of target $\ell$ are
$$m(C \mid x_{k, \ell}) = \sum_{o_j \in M_c} m(C \mid \tilde{x}_{k, o, \ell}, o_j)\, m(o_j)$$
where $m(o_j)$ is the confidence allocated to the models, which is equal to the model probability [38]. This equation can also be written in matrix form as $m^C = \bar{M} m^o$, where $m^C = [m_{c_1}, m_{c_2}, \dots, m_{c_n}]$ is the vector of BBAs over the power set of the classes and $m^o = [m_{o_1}, m_{o_2}, \dots, m_{o_m}]$ is the vector of model belief masses. The dimensions of these two vectors need not be equal, which reflects the situation in which the dynamic models and the classes are not in one-to-one correspondence. Each column of the matrix $\bar{M}$ contains the model-dependent BBAs of the class focal sets given the selected state sub-vector $\tilde{x}_{k, o, \ell}$.
(3) Time recursion using Generalized Bayesian Theorem
Combining the conditional beliefs with the prior information using (9), the posterior class beliefs $m_k(C \mid x_{k, o, \ell})$ at time $k$ can be calculated. Then, the pignistic class probabilities are
$$BetP_k\{c_i \mid x_{k, o, \ell}\} = \sum_{C:\, c_i \in C} \frac{1}{|C|} \frac{m_k(C \mid x_{k, o, \ell})}{1 - m_k(\emptyset \mid x_{k, o, \ell})}$$
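A compact sketch (our own; the BBAs and model probabilities are assumed values) of the multi-model fusion in (49) followed by the pignistic transformation above, for a two-class frame.

```python
import numpy as np

# Model-dependent class BBAs over the focal sets of C = {c1, c2} (assumed values).
focals = [frozenset(), frozenset({"c1"}), frozenset({"c2"}), frozenset({"c1", "c2"})]
m_cv = np.array([0.02, 0.10, 0.28, 0.60])     # BBA obtained under the CV mode
m_ct = np.array([0.05, 0.05, 0.75, 0.15])     # BBA obtained under the CT mode
model_probs = {"CV": 0.3, "CT": 0.7}          # m(o_j) from the multi-model filter

# Eq. (49): fuse the model-dependent BBAs weighted by the model probabilities.
m_fused = model_probs["CV"] * m_cv + model_probs["CT"] * m_ct

def pignistic(focal_sets, masses):
    """Pignistic transformation: spread each focal mass evenly over its
    singletons and renormalize by (1 - m(empty))."""
    classes = sorted(set().union(*focal_sets))
    betp = dict.fromkeys(classes, 0.0)
    m_empty = sum(m for A, m in zip(focal_sets, masses) if not A)
    for A, m in zip(focal_sets, masses):
        for c in A:
            betp[c] += m / (len(A) * (1.0 - m_empty))
    return betp

print(pignistic(focals, m_fused))    # pignistic class probabilities for the decision
```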
The proposed MGLMB-cTBM algorithm for multi-target joint detection, tracking and classification based on labeled RFS and belief function theory is summarized as follows:
Algorithm 1: MGLMB-cTBM algorithm for multi-target joint detection, tracking and classification
Initialize the prior conditional pignistic density functions of the dynamic states.
(1) Predict the multi-target density $\pi_{k+}(\mathbf{X}_+)$, and the model-dependent density $p_{k+, \ell}^{(I)}(x, o \mid c)$ and model probability $p_{k+, \ell}^{(I)}(o \mid c)$ of each target, using (28)–(32).
(2) Update the multi-target density $\pi_k(\mathbf{X})$, the model-dependent density $p_{k, \ell}^{(\xi)}(x \mid o, c)$ and the model probability $p_{k, \ell}^{(\xi)}(o \mid c)$ using (34)–(41).
(3) Compute the marginal GLMB density $\hat{\pi}(\mathbf{X})$ using (42)–(44).
(4) For each dynamic model, choose the state estimate sub-vector $\tilde{x}_{k, o, \ell} \in \mathbb{R}$ to calculate the conditional LC plausibility $pl(\tilde{x}_{k, o, \ell} \mid c_i, o_j)$, and then compute the model-dependent beliefs $m(C \mid \tilde{x}_{k, o, \ell}, o_j)$ using (46)–(48).
(5) Combine the model-dependent beliefs to compute the final BBAs of the class focal sets $m(C \mid x_{k, \ell})$ as in (49).
(6) Merge the beliefs with the prior using the GBT, and then go to step (1).
Analysis and discussion: 1. In this paper, an analytical algorithm is proposed for multi-target joint detection, tracking and classification in radar systems. In this application, the targets are treated as point targets, so all the information about a target can be represented by its state and category, and both the observations and the dynamic information can be directly denoted by the measurement and state vectors. The posterior state and class probabilities can be analytically updated, and the classification results can be explicitly derived based on the dynamic states and the prior distribution of the class pignistic probability obtained directly from statistical data; the derived results are therefore Bayesian optimal. Compared to the traditional methods in [28,36], the dependence between estimation and classification is considered in this paper. The multi-target density is predicted using the class-dependent multi-model, and the classifier computes the target class according to both the dynamic models and the estimates. The relation between the multi-target classes and the state estimation is reflected in both the prediction and the update process of the proposed algorithm. As given in (36)–(42), the mapping weights are updated based on the class-dependent multi-model likelihood, so the target existence probability (detection) derived in (45) is related to the target class. As given in (43)–(45), the target density is the sum of the weighted densities updated given the target classes, so the multi-target tracking result is also related to the target classes. Therefore, according to the target class, appropriate models are used to improve the detection and tracking performance. Furthermore, more accurate estimation results lead to an improvement of the classification and overall performance.
2. In the cTBM-based classifier, the classification results are explicitly derived based on the dynamic state estimates and the prior distribution of the class pignistic probability obtained directly from statistical data; the classification results provided by the classifier are therefore Bayesian optimal. During the solution process, the LC plausibilities of the target classes are first computed, and the BBAs of the class focal sets are then derived. The target class focal set is the power set of the target classes, and the number of focal elements in the set is $2^n$, where $n$ is the number of possible classes in the frame of discernment. The beliefs are allocated to the elements of the class focal set. For example, when the two categories $c_1$ and $c_2$ cannot be completely distinguished, the belief mass is assigned to the element $\{c_1 \cup c_2\}$. The belief assignments are combined with the prior beliefs using the GBT, and the explicit class probabilities are only calculated when the decision is made. On the contrary, the Bayesian classifier computes the class probabilities based on the measurement likelihood at each time step. Moreover, compared with TBM-based methods, the cTBM-based classifier can derive the classification results directly from the dynamic estimates. When there is no effective information contained in the dynamic model (e.g., the target moves straight at the same velocity all the time), the class can still be judged according to the dynamic estimates. In addition, the models are selected according to the dynamic modes. By contrast, for the TBM the classification result is derived based on the dynamic modes, which are actually intervals in the state space, such as the acceleration $a \in [-g, +g]$. Therefore, the uncertainty about the class is described more precisely, and the cTBM-based classifier outperforms the traditional approaches. Furthermore, as the number of models is largely reduced, the computational complexity is reduced and model competition is avoided.
3. The computational complexity of the proposed MGLMB-cTBM algorithm is generally higher than that of the standard GLMB filter. Assume that $P = \max|\mathcal{L}(\mathbf{X})|$ is the maximum number of targets and $M = |Z|$ is the maximum number of measurements; as given in [33], the computational complexity of the GLMB filter is $O(P^2 M)$, i.e., quadratic in the number of hypothesized labels and linear in the number of measurements. The computational complexity of the marginal GLMB filter is the same as that of the GLMB filter in the worst case. In practical applications, for the classifier of the proposed MGLMB-cTBM algorithm, the integral in (46) can be computed in advance based on the prior pignistic probability density, and the belief assignments given the estimates can be quickly obtained by table lookup. Therefore, the main difference in computational complexity between the MGLMB-cTBM and the MGLMB filter lies in the estimation process over the different classes and models, which involves predicting and updating the target state according to the class-dependent multi-model. Assuming that $C$ and $O$ are the maximum numbers of target class focal sets and models, the computational complexity of MGLMB-cTBM is $O((COP)^2 M)$.
4. With the application of optoelectronic technology and the improvement of radar resolution, target morphological information can also be obtained; such targets are treated as image targets and extended targets, respectively. In [39,40,41], graph neural networks (GNNs) are used for multiple image-target tracking. The graph nodes are composed of all the targets in all frames. The attributes of the nodes are composed of appearance features and geometric features (position and shape) obtained through training, and the distance between nodes is defined using the Euclidean distance, etc. There are connections between nodes across frames, and the weights of the edges are set based on the defined distance metric. The association and tracking results are derived by recursively updating the weights of the edges and the attributes of the nodes. In [42,43], convolutional neural networks (CNNs) and long short-term memory (LSTM) are used to obtain the kinematic and morphological characteristics of the targets, and the dynamic states and association relationships of the targets are calculated using the graph neural network. In [44], the importance of learned reID features for multi-object tracking is shown. These graph-based methods can also be used for multiple extended-target tracking. Moreover, in scenarios where attribute measurements are available, such as signal features, the beliefs of the attribute measurements can be fused with the beliefs calculated from the dynamic estimates based on the D-S evidence theory. Relevant work can be done in the future.

3.3. The Particle Implementation

(1) Prediction: As given in (28), the predicted multi-target density $\pi_{k+}(\mathbf{X}_+)$ consists of the model-dependent density of each target. Using the particle representation, the density $p_{k+, \ell}^{(I)}(x, o \mid c)$ of track $\ell$ can be denoted as $\{x_{k+, o, \ell}^{(n)}, \omega_{k+, o, \ell}^{(n)}\}_{n=1}^{N_{k+, o, \ell}}$, where $x_{k+, o, \ell}^{(n)}$ and $\omega_{k+, o, \ell}^{(n)}$ are the predicted states and weights calculated using the state transition matrix $F_o$ and the survival probability $p_s$:
$$x_{k+, o, \ell}^{(n)} = F_o\, x_{k-1, o, \ell}^{(n)}$$
$$\omega_{k+, o, \ell}^{(n)} = p_s\, \omega_{k-1, o, \ell}^{(n)}$$
The predicted density of a new-birth target can be represented by the particles $\{x_{b, o, \ell}^{(n)}, \omega_{b, o, \ell}^{(n)}\}_{n=1}^{N_{b, o, \ell}}$, where the states $x_{b, o, \ell}^{(n)}$ and weights $\omega_{b, o, \ell}^{(n)}$ conform to the prior.
(2) Update: The posterior density $p_{k, \ell}^{(\xi)}(x \mid o, c)$ of target $\ell$ can be represented by the particles $\{x_{k, o, \ell}^{(n)}, \omega_{k, o, \ell}^{(n)}\}_{n=1}^{N_{k, o, \ell}}$, where the weights are
$$\omega_{k, o, \ell}^{(n)} = \omega_{k+, o, \ell}^{(n)} \left[\delta_0(\theta(\ell))\, (1 - p_d) + \frac{\left(1 - \delta_0(\theta(\ell))\right) p_d\, q\!\left(x_{k+, o, \ell}^{(n)}\right) \delta_{x_k^{(n)},\, x_{k+}^{(n)}}\, \delta_{o_k^{(n)},\, o_{k+}^{(n)}}}{\kappa\!\left(z_{\theta(\ell)}\right) + \sum_{o \in M_c} \sum_{n=1}^{N_{k+, o, \ell}} q\!\left(x_{k+, o, \ell}^{(n)}\right)}\right]$$
$$q\!\left(x_{k+, o, \ell}^{(n)}\right) = \mathcal{N}\!\left(z_{\theta(\ell)};\, H_k x_k^{(n)},\, R_k\right)$$
In these equations, $x_{k, o, \ell}^{(n)}$ and $\omega_{k, o, \ell}^{(n)}$ are the posterior state particles and the associated weights, $\delta_0(\theta(\ell))$ indicates whether there is a measurement associated with the target, $q(x_{k+, o, \ell}^{(n)})$ is the likelihood function, and $p_d$ is the detection probability of the target. The dynamic model probabilities are $p_{c,k}(o) = \sum_{n=1}^{N_{k+, o, \ell}} \omega_{c, k, o, \ell}^{(n)}$.
(3) Resample: To avoid particle impoverishment and to prevent exponential growth of the particle number, a resampling procedure is performed, after which the particles have equal weights:
$$\bar{\omega}_{k, o, \ell}^{(n)} = \frac{\omega_{k, o, \ell}^{(n)}}{\sum_{o \in M_c} \sum_{n=1}^{N_{k+, o, \ell}} \omega_{k, o, \ell}^{(n)}}$$
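A standard systematic resampling step (one common choice; the paper does not specify which resampling scheme is used) that returns an equally weighted particle set after the normalization above:

```python
import numpy as np

def systematic_resample(particles, weights, seed=0):
    """Systematic resampling: returns N equally weighted particles drawn in
    proportion to the (unnormalized) input weights."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # weight normalization
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n    # one stratified draw per slot
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, n - 1)                     # guard against round-off
    return particles[idx], np.full(n, 1.0 / n)

# Toy usage: five 2-D particles with unnormalized weights (assumed values).
parts = np.arange(10.0).reshape(5, 2)
new_parts, new_w = systematic_resample(parts, [0.1, 0.4, 0.2, 0.05, 0.25])
print(new_parts)
print(new_w)
```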
(4) Compute the BBAs: For each particle $\{x_{k, o, \ell}^{(n)}, \omega_{k, o, \ell}^{(n)}\}$, the state sub-vector $\tilde{x}_{k, o, \ell}^{(n)}$ is selected according to the mode $o_j$. Suppose that $Betf(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j)$ is the prior pignistic density function of target class $c_i$ and model $o_j$. Then, the LC plausibility and the BBAs can be computed from the state of each particle as
$$pl\!\left(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j\right) = \left(\tilde{x}_{k, o, \ell}^{(n)} - \phi(\tilde{x}_{k, o, \ell}^{(n)})\right) Betf\!\left(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j\right) + \int_{\tilde{x}_{k, o, \ell}^{(n)}}^{+\infty} \left(1 - \frac{d\phi(a)}{da}\right) Betf(a \mid c_i, o_j)\, da$$
$$m\!\left(C \mid \tilde{x}_{k, o, \ell}^{(n)}, o_j\right) = \prod_{c_i \in C} pl\!\left(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j\right) \times \prod_{c_i \in \bar{C}} \left[1 - pl\!\left(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j\right)\right]$$
Combining the BBAs of all particles for model $o_j$ using Dempster–Shafer theory [14], the fused model-conditioned BBAs are calculated pairwise as
$$m_{1 \oplus 2}\!\left(C \mid \tilde{x}_{k, o, \ell}, o_j\right) = \frac{\sum_{A \cap B = C} m\!\left(A \mid \tilde{x}_{k, o, \ell}^{(1)}, o_j\right) m\!\left(B \mid \tilde{x}_{k, o, \ell}^{(2)}, o_j\right)}{1 - \sum_{A \cap B = \emptyset} m\!\left(A \mid \tilde{x}_{k, o, \ell}^{(1)}, o_j\right) m\!\left(B \mid \tilde{x}_{k, o, \ell}^{(2)}, o_j\right)}$$
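A compact helper (our own sketch; focal sets are represented as Python frozensets and the masses are assumed) for the Dempster combination used in (59) to fuse per-particle BBAs:

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two BBAs (dicts mapping
    frozenset focal elements to masses), normalized by 1 minus the conflict."""
    joint = defaultdict(float)
    conflict = 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            inter = A & B
            if inter:
                joint[inter] += ma * mb
            else:
                conflict += ma * mb
    return {A: m / (1.0 - conflict) for A, m in joint.items()}

# Two per-particle BBAs over the classes {c1, c2} (assumed masses).
m_a = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.4}
m_b = {frozenset({"c2"}): 0.3, frozenset({"c1", "c2"}): 0.7}
print(dempster_combine(m_a, m_b))
```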
(5) Classification with multi-model: Combining the model-dependent beliefs $m(C \mid \tilde{x}_{k, o, \ell}, o)$, the final class BBAs of target $\ell$ are
$$m(C \mid x_{k, \ell}) = \sum_{o_j \in M_c} m(C \mid \tilde{x}_{k, o, \ell}, o_j)\, m(o_j)$$
Fusing the conditional beliefs with the prior information yields the posterior class beliefs $m_k(C \mid x_{k, o, \ell})$. Then, the pignistic class probabilities are
$$BetP_k\{c_i \mid x_{k, o, \ell}\} = \sum_{C:\, c_i \in C} \frac{1}{|C|} \frac{m_k(C \mid x_{k, o, \ell})}{1 - m_k(\emptyset \mid x_{k, o, \ell})}$$
Finally, the weights of the particles belonging to class $c_i$ are proportionally corrected so that $\sum_{o \in M_c} \sum_{n=1}^{N_{k, o, \ell}} \omega_{k, o, \ell}^{(n)} = BetP_k\{c_i \mid x_{k, o, \ell}\}$.
The implementation can be realized using the open source tools (http://ba-tuong.vo-au.com/codes.html), and the pseudo-code of the MGLMB-cTBM algorithm is given as follows:
Algorithm 2: Pseudo-code of the proposed MGLMB-cTBM algorithm
1: function MGLMB-cTBM algorithm

2: Input $\{x_{k+, o, \ell}^{(n)}, \omega_{k+, o, \ell}^{(n)}\}_{n=1}^{N_{k+, o, \ell}}$ given the different classes and models, and input the measurement set $Z_k$.
3: Run the K-shortest path algorithm to generate the K-best birth and surviving hypotheses, predict the target states $x_{k+, o, \ell}^{(n)}$ and weights $\omega_{k+, o, \ell}^{(n)}$ using (52) and (53), and add the birth particles $\{x_{b, o, \ell}^{(n)}, \omega_{b, o, \ell}^{(n)}\}_{n=1}^{N_{b, o, \ell}}$.
5: for $i = 1, \dots, |Z_k|$
6: Compute the likelihood $q(z_i \mid x_{k+, o, \ell}^{(n)})$ for each measurement $z_i \in Z_k$ using (55).
7: end for
8: Calculate the M-best assignment hypotheses/components using Murty's algorithm according to the likelihoods $q(z_i \mid x_k^{(n)})$.
9: Update the weights $\omega_{k, o, \ell}^{(n)}$ using (54).
10: Resample.
11: Look up the table to obtain the LC plausibility $pl(\tilde{x}_{k, o, \ell}^{(n)} \mid c_i, o_j)$ and the BBAs $m(C \mid \tilde{x}_{k, o, \ell}^{(n)}, o_j)$ according to the particle estimates $x_{k+, o, \ell}^{(n)}$. Compute the dynamic model probabilities $p_k(o) = \sum_{n=1}^{N_{k+, o, \ell}} \omega_{k, o, \ell}^{(n)}$.
12: Combine the per-particle BBAs for each model, $m_{1 \oplus 2}(C \mid \tilde{x}_{k, o, \ell}, o_j)$, using (59).
13: Combine the model-dependent beliefs $m(C \mid \tilde{x}_{k, o, \ell}, o)$, and then merge the beliefs with the prior using the GBT to derive the final class BBAs of target $\ell$. Then go to 1.
end function

4. Simulations

In this section, simulations are performed to illustrate the effectiveness of the proposed algorithm. The scenarios under consideration contain several targets from three categories: ordinary targets, medium-maneuvering targets and high-maneuvering targets, exemplified by a commercial plane, a bomber and a fighter, respectively. The model set consists of the constant velocity, constant acceleration and coordinated turn models. For the commercial plane, the transition probability between dynamic models is 0.05, while the transition probabilities for the bomber and the fighter are 0.1 and 0.2, respectively. The prior pignistic probability distributions of the velocity, acceleration and turn rate for each category are Gaussian, i.e., $Betf \sim \mathcal{N}(\bar{x}, \sigma)$, and the parameters are given in Table 1. In Figure 3, the pignistic probability functions of these classes are illustrated using thick, normal and dashed lines, respectively.
In our method, different state feature subspaces are chosen for classification according to the dynamic modes. For CV, CA and CT models, the class beliefs are calculated based on the velocity, acceleration and turn angular rate, respectively. The performance of our algorithm is compared with the traditional methods in terms of the class probability.
In the scenario, five targets are involved: two are high-maneuvering targets such as fighters, two others are ordinary targets such as normal commercial planes, and the last one is a medium-maneuvering target such as a bomber. These targets perform different maneuvers in the $(x, y)$ plane. Target 1 appears at the beginning and first flies with a constant velocity of 870 km/h until 24 s, then executes a coordinated turn from 24 to 32 s at 0.35 rad/s; finally, it performs a normal flight and disappears at 80 s. Target 2 appears at 8 s and disappears at the end; it first flies with a low constant velocity of 850 km/h, then turns from 24 to 30 s with a turning angular rate of 0.2 rad/s, and finally speeds up to 1200 km/h within 8 s and escapes. Targets 3 and 4 appear from 16 s and perform uniform-speed flight at 780 km/h throughout the process. Target 5 appears at 20 s and performs uniform-speed flight at 1080 km/h. The scenario is illustrated in Figure 4. The measurements of the tracks are drawn with black '*', and the clutter is drawn with blue '*'.
In the simulation, the equation of the $i$-th dynamic model is
$$x_k = F_{k,i}\, x_{k-1} + w_{k,i}$$
where $F_{k,i}$ is the model-dependent state transition matrix and $w_{k,i}$ is Gaussian noise with covariance $Q_{k,i}$. The constant velocity (CV) model has the following parameters:
$$F_{k,1} = \mathrm{diag}\!\left(\begin{bmatrix} 1 & T & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},\ \begin{bmatrix} 1 & T & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix},\ 1\right)$$
$$Q_{k,1} = \mathrm{diag}\!\left(\begin{bmatrix} T^2 & T & 0 \\ T & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\sigma_v^2,\ \begin{bmatrix} T^2 & T & 0 \\ T & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\sigma_v^2,\ 0\right)$$
where $\sigma_v$ is the process noise standard deviation, with $\sigma_v^2 = 10\ \mathrm{m^2/s^2}$. The parameters of the CA model are
$$F_{k,2} = \mathrm{diag}\!\left(\begin{bmatrix} 1 & T & \tfrac{1}{2}T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix},\ \begin{bmatrix} 1 & T & \tfrac{1}{2}T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix},\ 1\right)$$
$$Q_{k,2} = \mathrm{diag}\!\left(\begin{bmatrix} \tfrac{1}{4}T^4 & \tfrac{1}{2}T^3 & \tfrac{1}{2}T^2 \\ \tfrac{1}{2}T^3 & T^2 & T \\ \tfrac{1}{2}T^2 & T & 1 \end{bmatrix}\sigma_a^2,\ \begin{bmatrix} \tfrac{1}{4}T^4 & \tfrac{1}{2}T^3 & \tfrac{1}{2}T^2 \\ \tfrac{1}{2}T^3 & T^2 & T \\ \tfrac{1}{2}T^2 & T & 1 \end{bmatrix}\sigma_a^2,\ 0\right)$$
where $\sigma_a$ is the process noise standard deviation, with $\sigma_a^2 = 10\ \mathrm{m^2/s^4}$. The parameters of the CT model are
$$F_{k,3} = \begin{bmatrix} 1 & \frac{\sin(\omega T)}{\omega} & 0 & 0 & -\frac{1-\cos(\omega T)}{\omega} & 0 & 0 \\ 0 & \cos(\omega T) & 0 & 0 & -\sin(\omega T) & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & \frac{1-\cos(\omega T)}{\omega} & 0 & 1 & \frac{\sin(\omega T)}{\omega} & 0 & 0 \\ 0 & \sin(\omega T) & 0 & 0 & \cos(\omega T) & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$
$$Q_{k,3} = \mathrm{diag}\!\left(\begin{bmatrix} \tfrac{T^3 l_1}{3} & \tfrac{T^2 l_1}{2} \\ \tfrac{T^2 l_1}{2} & T l_1 \end{bmatrix},\ 0,\ \begin{bmatrix} \tfrac{T^3 l_1}{3} & \tfrac{T^2 l_1}{2} \\ \tfrac{T^2 l_1}{2} & T l_1 \end{bmatrix},\ 0,\ T l_2\right)$$
where $l_1$ and $l_2$ are the process noise intensities, with $l_1 = 10^{-1}\ \mathrm{rad^2/s^4}$ and $l_2 = 10^{-4}\ \mathrm{rad^2/s^4}$, respectively.
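For concreteness, a small helper (our own sketch; it assumes the seven-dimensional state ordering $[x, v_x, a_x, y, v_y, a_y, \omega]$ implied by the birth state defined below, and the CT matrix follows our reconstruction above) that builds the three transition matrices:

```python
import numpy as np
from scipy.linalg import block_diag

def cv_matrix(T):
    """Constant-velocity block for [pos, vel, acc]; the acceleration is zeroed."""
    a = np.array([[1.0, T, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
    return block_diag(a, a, 1.0)        # full state: [x, vx, ax, y, vy, ay, omega]

def ca_matrix(T):
    """Constant-acceleration block for [pos, vel, acc]."""
    a = np.array([[1.0, T, 0.5 * T**2], [0.0, 1.0, T], [0.0, 0.0, 1.0]])
    return block_diag(a, a, 1.0)

def ct_matrix(T, omega):
    """Coordinated-turn coupling between the x and y velocity components."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    F = np.eye(7)
    F[0, 1], F[0, 4] = s / omega, -(1.0 - c) / omega
    F[1, 1], F[1, 4] = c, -s
    F[3, 1], F[3, 4] = (1.0 - c) / omega, s / omega
    F[4, 1], F[4, 4] = s, c
    return F

T = 2.0                                  # sample time used in the simulation
print(cv_matrix(T).shape, ca_matrix(T).shape, ct_matrix(T, 0.2).shape)
```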
The sensor receives range and bearing measurements
$$z_k = \begin{bmatrix} l_k \\ \theta_k \end{bmatrix} + v_k$$
where $l_k$ and $\theta_k$ are the true range and bearing of a target, given by
$$l_k = \sqrt{(p_x - s_x)^2 + (p_y - s_y)^2}$$
$$\theta_k = \arctan\frac{p_x - s_x}{p_y - s_y}$$
Here $[p_x, p_y]$ is the target position and the sensor $[s_x, s_y]$ is located at $[1000, 3000]$; $v_k$ is the Gaussian measurement noise with covariance $R_k = \mathrm{diag}(\sigma_r^2, \sigma_\theta^2)$, where $\sigma_r = 5\ \mathrm{m}$ and $\sigma_\theta = (\pi/180)\ \mathrm{rad}$. The target detection probability is $p_d = 0.92$, there are on average 35 clutter returns per scan within the multi-target surveillance area (the clutter density is $2.08 \times 10^{-7}$), and the sample time is 2 s. The target survival probability is $p_s = 0.98$, and the target birth probability is $p_b = 0.02$. The density of a new-birth target is $b_k = \mathcal{N}(x; m_b, Q_b)$, where $m_b$ is set from the position of the first measurement of each target: assuming the measurement in Cartesian coordinates is $[x, y]$, the initial state of a birth target is $m_b = [x, 0, 0, y, 0, 0, 0]$, and the state covariance is $Q_b = \mathrm{diag}(100, 10^4, 10, 100, 10^4, 10, 0.1)$.
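A short sketch (our own; the sensor position, noise levels, detection probability and clutter rate are taken from the text, while the surveillance range used for clutter generation is an assumption) of the range-bearing measurement model with missed detections and Poisson clutter:

```python
import numpy as np

rng = np.random.default_rng(1)

SENSOR = np.array([1000.0, 3000.0])          # sensor position [s_x, s_y]
SIGMA_R, SIGMA_TH = 5.0, np.pi / 180.0       # measurement noise standard deviations
P_D = 0.92                                   # detection probability
LAMBDA_C = 35                                # mean number of clutter points per scan
R_MAX = 60000.0                              # assumed surveillance range for clutter

def measure(target_xy):
    """Noisy range/bearing of one target position [p_x, p_y]."""
    d = target_xy - SENSOR
    l_true = np.hypot(d[0], d[1])
    th_true = np.arctan2(d[0], d[1])         # arctan((p_x - s_x) / (p_y - s_y))
    return np.array([l_true + SIGMA_R * rng.standard_normal(),
                     th_true + SIGMA_TH * rng.standard_normal()])

def scan(targets):
    """One measurement set Z_k: detected targets plus Poisson clutter."""
    z = [measure(x) for x in targets if rng.random() < P_D]
    for _ in range(rng.poisson(LAMBDA_C)):
        z.append(np.array([rng.uniform(0.0, R_MAX), rng.uniform(-np.pi, np.pi)]))
    return z

print(len(scan([np.array([12000.0, 8000.0]), np.array([-5000.0, 20000.0])])))
```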
As shown in Figure 5a,b, taking targets 1 and 5 as examples, the dynamic modes are correctly identified by the multi-model MGLMB filter despite the dense clutter and missed detections. The model-dependent class beliefs are calculated from the corresponding dynamic state characteristics and then fused. The final classification results are illustrated in Figure 5c,d. For target 1, the pignistic class probability of the commercial plane is the largest at first, because it performs a normal flight and the classification result is mainly derived on the basis of the velocity. When it turns, the pignistic probability of the fighter rapidly increases, because only this kind of target can perform such a sharp maneuver with a high turning angular rate. For target 5, the object flies at a constant speed from beginning to end, so the classification results are derived from the velocity; the pignistic probabilities of the fighter and the bomber are both approximately equal to 1/2, while the commercial plane is excluded by the velocity. Because the kinematic characteristics are not very distinguishable, the target class cannot be effectively determined from the speed alone, so the classification results still contain much uncertainty between the fighter and the bomber.
The comparison of the classification results for one target between the proposed MGLMB-cTBM algorithm and the Bayesian classifier is shown in Figure 6. Taking target 1 as an example, before the target maneuvers, the probability of the normal commercial plane is larger using the Bayesian approach. In contrast, the classification results of the proposed MGLMB-cTBM algorithm still contain much uncertainty. When the target turns, the probability of the target being a fighter is the highest in both algorithms. After the target completes the maneuver, the probability of the commercial plane obtained by the Bayesian classifier significantly increases again. The reason is that specific classification results are produced by the Bayesian classifier at each moment, and the prior probability of the commercial plane is larger at such a speed. On the contrary, for the classifier based on the cTBM, the uncertainty of the target class is indicated by the confidence assigned to the focal element set $C = \{c_1, c_2, c_{1,2}\}$, and the beliefs are translated into pignistic probabilities only when the decision is made. Therefore, when the evidence is insufficient, the difference among the pignistic class probabilities of different categories is not obvious. This result reflects the reasonableness of the proposed MGLMB-cTBM algorithm.
In Figure 7, the superiority of the two-level classifier is shown. Taking target 2 as an example, as illustrated in Figure 7a, the class probabilities of the three classes are represented by the thick, normal and dashed lines, respectively. Before the target accelerates, the pignistic probability of the commercial plane is larger because the classification results are derived mainly from the velocity. After the medium maneuver performed during 24–32 s, the pignistic probabilities of the bomber and the fighter increase promptly. However, there still exists uncertainty between the fighter and the bomber, because all types of aircraft except the commercial plane can perform this maneuver. Finally, the target class is determined on the basis of the high velocity. The classification results of the method proposed in [7] are shown in Figure 7b, in which the target class is identified only from the acceleration. It can be seen that, because the target makes a medium maneuver from 24 to 32 s, the target class can be preliminarily judged according to the acceleration, but no effective classification result is derived when the acceleration is 0 during the cruise section of the flight. The kinematic characteristics are not fully utilized, and the acceleration alone is insufficient to distinguish the bomber from the fighter. In contrast, our method identifies the motion mode first and classifies using the appropriate dynamic characteristics; the target is finally identified according to its high speed, and the classification performance is improved.
In Figure 8, the overall detection, tracking and classification performance over 50 Monte Carlo runs is compared. The proposed MGLMB-cTBM algorithm is compared with the MGLMB-Bayesian method in terms of the cardinality estimates, the class-dependent optimal subpattern assignment (OSPA)-like distance for tracks, and the average error class probability (the error between the classification results and the truth). In the MGLMB-Bayesian method, the target class is introduced into the conventional multi-model MGLMB filter, and the posterior class probabilities and the class-dependent multi-target density are calculated from the observation likelihood within the Bayesian framework. As shown in Figure 8c, because the classification results derived within the belief function framework are more reasonable and robust, the classification results of the cTBM-based classifier are better. In Figure 8a,b, the detection and overall tracking performance of the proposed method is also better. The reason is that, according to the more accurate classification results, appropriate dynamic modes are chosen during the estimation process, and the overall performance is improved. In Figure 8c, the time scale represents the moving time of the targets. The proposed algorithm has an iterative implementation; the motion state and class probability of each target are recursively calculated at each moment, and the number of iterations is actually equal to the number of time scans. The values of the average error class probability change over time because the conditional beliefs are calculated at the current moment from the dynamic state estimates and then combined with the prior information using the Generalized Bayesian Theorem. As more dynamic information about the targets is obtained, the classes become more definite with the time recursion, and the average error class probability decreases over time.
To further evaluate the overall performance of the proposed algorithm, it is run on 100 randomly generated scenarios. Each scenario contains 5 targets moving within 90 s. The real class, appearance time, initial dynamic state and position of these targets are the same as in scenario 1. For the fighter, the beginning time of the maneuver is random, with a maneuvering probability of 0.02 at each moment during 10–80 s. The covariance of the Gaussian measurement noise is $R_k = \mathrm{diag}(200, 5 \times 10^{-3})$, and there are on average 50 clutter returns per scan within the multi-target moving area, both larger than in scenario 1.
The proposed MGLMB-cTBM algorithm is compared with the MGLMB-Bayesian and PHD-TBM algorithms [28] in terms of the cardinality estimates, the overall tracking performance, and the average error class probability. As illustrated in Figure 9a, although the cardinality estimates of all the algorithms converge to the truth, the estimates of PHD-TBM are worse within several scans after a target appears, because the MGLMB is an analytical multi-target tracker and its dynamic state estimates are more accurate than those of the PHD filter. In Figure 9c, the classification result of PHD-TBM is also worse because its results are derived only from the dynamic modes, whereas the estimates are used directly for classification in the cTBM-based classifier. Moreover, both the classification and the state estimation of the MGLMB-cTBM algorithm are better than those of the MGLMB-Bayesian method. Although the traditional method based on the Bayesian framework is optimal in the usual sense, the performance of the proposed algorithm is better because it takes advantage of the belief function theory: the classification results derived within the belief function framework are more reasonable and robust, and since the target class is considered in the state estimation, appropriate models are used for multi-target tracking. Because the estimation and classification problems are jointly solved, more accurate classification and estimation results both help to improve the overall performance.
The proposed MGLMB-cTBM algorithm is implemented using Matlab and the open source tools. In the simulation, the density of each target is represented using 1000 particles. The operating environment is Windows 10, an i5-5200 dual-core CPU at 2.2 GHz, and 8 GB RAM. The computational times per step of the MGLMB-cTBM, MGLMB-Bayesian and PHD-TBM methods are 15.7 s, 14.5 s and 0.095 s, respectively, as given in Table 2. As shown in the table, the computational time of PHD-TBM is far less than that of the other two methods, because the computational complexity of the PHD-based tracker is much lower than that of the MGLMB filter. The target state estimates of MGLMB-cTBM need to be calculated for the different classes and models, and, compared to the MGLMB-Bayesian filter, the beliefs of the particles also need to be fused; therefore, the computational time of the MGLMB-cTBM algorithm is the largest. As illustrated in Figure 9, the estimation and classification results of MGLMB-cTBM are the best, so the increase in the amount of computation leads to an improvement in performance. A Gaussian mixture implementation can be used under linear Gaussian conditions, and the computational time of the MGLMB-cTBM algorithm can then be further reduced.

5. Conclusions

This paper presents a new solution to multi-target joint detection, tracking and classification under target number and observation uncertainty, based on the marginal GLMB filter and the cTBM. The proposed algorithm produces explicit state estimates and pignistic class probabilities for each target, and a particle implementation is also given. For multi-target tracking, a class dependent multi-model MGLMB filter is developed to calculate the target existence probabilities, dynamic model probabilities and model-dependent state estimates. For target classification, a two-level classifier is designed: the class beliefs are first computed on the corresponding state feature subspaces according to the different models and then fused to derive the final results. Numerical examples demonstrate the effectiveness and robustness of the proposed algorithm. Because both the dynamic modes and the state estimates are considered by the classifier, the kinematic characteristics are fully utilized to improve the classification performance. Moreover, because the explicit pignistic class probability is computed only when the decision is made, the class uncertainty is well described by belief functions during the recursion, and the classification results are more reasonable. The designed classifier also extracts more information from the measurements and kinematic states, which adaptively improves the tracking performance by selecting the proper motion modes. In addition, because the estimation and classification problems are jointly solved, the detection, tracking and classification performance is further improved compared with traditional methods.
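To make the decision step concrete, the following sketch shows the discrete pignistic transformation applied at decision time; the mass-function values in the example are purely hypothetical and only illustrate how BetP is obtained from class beliefs in the TBM.

```python
def pignistic(mass):
    """Pignistic transformation BetP of a mass function.

    `mass` maps frozenset subsets of the class frame to basic belief masses;
    an open-world mass on the empty set is allowed, as in the TBM.
    """
    m_empty = mass.get(frozenset(), 0.0)
    betp = {}
    for A, m in mass.items():
        if not A:
            continue  # mass on the empty set is handled by the normalization term
        for w in A:
            betp[w] = betp.get(w, 0.0) + m / (len(A) * (1.0 - m_empty))
    return betp

# Hypothetical mass function on the frame {commercial, bomber, fighter}
m = {frozenset({"bomber"}): 0.5,
     frozenset({"bomber", "fighter"}): 0.3,
     frozenset({"commercial", "bomber", "fighter"}): 0.2}
print(pignistic(m))  # bomber ~0.72, fighter ~0.22, commercial ~0.07
```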

Author Contributions

J.L. and M.L. wrote the original paper; J.L. and M.L. performed the experiments and analyzed the data; Z.J. and H.P. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work is jointly supported by the National Natural Science Foundation of China (Grant Nos. 61673262 and 61175028), the Leading Foundation of National Defense Technology (Grant No. 18-163-00-TS-004-014-01), and the China Postdoctoral Science Foundation (Grant No. 2019M651498).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, W.; Fu, Y.W.; Long, J.Q.; Li, X. Joint detection, tracking and classification of multiple targets in clutter using the PHD filter. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 3594–3609. [Google Scholar]
  2. Jing, Z.L.; Li, M.Z.; Leung, H. Multi-target joint detection, tracking and classification based on random finite set for aerospace applications. Aerosp. Syst. 2018, 1, 1–12. [Google Scholar] [CrossRef] [Green Version]
  3. Li, M.Z.; Jing, Z.L. Joint DTC based on FISST and generalised Bayesian risk. IET Signal Process. 2017, 11, 796–804. [Google Scholar] [CrossRef]
  4. Delmotte, F.; Smets, P. Target identification based on the Transferable Belief Model interpretation of Dempster-Shafer model. IEEE Trans. Syst. Man Cybern. Part A 2004, 34, 457–471. [Google Scholar] [CrossRef]
  5. Benavoli, A.; Ristic, B. Classification with imprecise likelihoods: A comparison of TBM, random set and imprecise probability approach. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  6. Fiche, A.; Martin, A.; Cexus, J.C.; Khenchaf, A. A comparison between a Bayesian approach and a method based on continuous belief functions for pattern recognition. In Proceedings of the International Conference on Belief Functions, Compiègne, France, 9–12 May 2012. [Google Scholar]
  7. Ristic, B.; Gordon, N.; Bessell, A. On target classification using kinematic data. Inf. Fusion 2004, 5, 15–21. [Google Scholar] [CrossRef]
  8. Smets, P.; Ristic, B. Kalman filter and joint tracking and classification based on belief functions in the TBM framework. Inf. Fusion 2007, 8, 16–27. [Google Scholar] [CrossRef]
  9. Roberts, M.; Marshall, D.; Powell, G. Improving joint tracking and classification with the Transferable Belief Model and terrain information. In Proceedings of the 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010. [Google Scholar]
  10. Liu, X.X.; Leung, H.; Valin, P. Multisensor joint tracking and identification using particle filter and Dempster-Shafer fusion. In Proceedings of the 15th International Conference on Information Fusion, Singapore, 9–12 July 2012. [Google Scholar]
  11. Mei, W.; Shan, G.L.; Wang, Y.F. A second-order uncertainty model for target classification using kinematic data. Inf. Fusion 2011, 12, 105–110. [Google Scholar] [CrossRef]
  12. Ristic, B.; Smets, P. Target classification approach based on the belief function theory. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 574–583. [Google Scholar] [CrossRef]
  13. Ristic, B.; Smets, P. Belief function theory on the continuous space with an application to model based classification. In Modern Information Processes; Elsevier: Amsterdam, The Netherlands, 2006; pp. 11–24. [Google Scholar]
  14. Powell, G.; Marshall, D.; Smets, P.; Ristic, B.; Maskell, S. Joint tracking and classification of airbourne objects using particle filters and the continuous transferable belief model. In Proceedings of the 9th International Conference Information Fusion, Florence, Italy, 10–13 July 2006. [Google Scholar]
  15. Caron, F.; Ristic, B.; Duflos, E. Least committed basic belief density induced by a multivariate Gaussian: Formulation with applications. Int. J. Approx. Reason. 2008, 48, 419–436. [Google Scholar] [CrossRef]
  16. Fiche, A.; Cexus, J.C.; Martin, A. Features modeling with an α-stable distribution: Application to pattern recognition based on continuous belief functions. Inf. Fusion 2013, 14, 504–520. [Google Scholar] [CrossRef] [Green Version]
  17. Hachour, S.; Delmotte, F.; Mercier, D. Object tracking and credal classification with kinematic data in a multi-target context. Inf. Fusion 2014, 20, 174–188. [Google Scholar] [CrossRef]
  18. Mahler, R. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: Norwood, MA, USA, 2014; pp. 81–106. [Google Scholar]
  19. Williams, J.L. Marginal multi-Bernoulli filters: RFS derivation of MHT, JIPDA, and association-based MeMBer. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1664–1687. [Google Scholar] [CrossRef] [Green Version]
  20. He, X.; Tharmarasa, R.; Pelletier, M.; Kirubarajan, T. Two-level automatic multiple target joint tracking and classification. In Proceedings of the SPIE, Signal and Data Processing of Small Targets, Orlando, FL, USA, 5–9 April 2010. [Google Scholar]
  21. Gao, Y.X.; Liu, L.Y.; Li, X.R. Tracking-aided classification of targets using multihypothesis sequential probability ratio test. IEEE Trans. Aerosp. Electron. Syst. 2017, 54, 233–245. [Google Scholar] [CrossRef]
  22. Vercauteren, T.; Guo, D.; Wang, X.D. Joint multiple target tracking and classification in collaborative sensor networks. IEEE J. Sel. Areas Commun. 2005, 23, 714–723. [Google Scholar]
  23. Gini, F.; Muralidhar, R. Knowledge Based Radar Detection, Tracking and Classification; John Wiley & Sons: Hoboken, NJ, USA, 2008; Volume 52. [Google Scholar]
  24. Zajic, T.; Ravichandran, R.B.; Mahler, R.P.S. Joint tracking and identification with robustness against unmodeled targets. In Proceedings of the SPIE, Signal Processing, Sensor Fusion, and Target Recognition XII, Orlando, FL, USA, 21–25 April 2003; pp. 279–290. [Google Scholar]
  25. Yang, W.; Fu, Y.W.; Li, X. Joint detection, tracking and classification of multiple maneuvering targets based on the linear Gaussian jump Markov probability hypothesis density filter. Opt. Eng. 2013, 52, 83–106. [Google Scholar] [CrossRef]
  26. Yang, W.; Wang, Z.; Fu, Y.W.; Pan, X.; Li, X. Joint detection, tracking and classification of a manoeuvring target in the finite set statistics framework. IET Signal Process. 2015, 9, 10–20. [Google Scholar] [CrossRef]
  27. Georgescu, R.; Willett, P. Classification aided cardinalized probability hypothesis density filter. In Proceedings of the SPIE, Signal Processing, Sensor Fusion, and Target Recognition XXI, Baltimore, MD, USA, 23–27 April 2012. [Google Scholar]
  28. Fortin, B.; Hachour, S.; Delmotte, F. Multi-target PHD tracking and classification using imprecise likelihoods. Int. J. Approx. Reason. 2017, 90, 17–36. [Google Scholar] [CrossRef]
  29. Gao, L.; Sun, W.; Wei, P. Extensions of the CBMeMBer filter for joint detection, tracking, and classification of multiple maneuvering targets. Digit. Signal Process. 2016, 56, 35–42. [Google Scholar] [CrossRef]
  30. Vo, B.T.; Vo, B.N. Labeled random finite sets and multi-object conjugate priors. IEEE Trans. Signal Process. 2013, 61, 3460–3475. [Google Scholar] [CrossRef]
  31. Vo, B.N.; Vo, B.T.; Phung, D. Labeled random finite sets and the Bayes multi-target tracking filter. IEEE Trans. Signal Process. 2014, 62, 6554–6567. [Google Scholar] [CrossRef] [Green Version]
  32. Hu, X.; Ji, H.; Liu, L. Adaptive Target Birth Intensity Multi-Bernoulli Filter with Noise-Based Threshold. Sensors 2019, 19, 1120. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Vo, B.N.; Vo, B.T.; Hoang, H.G. An efficient implementation of the generalized labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 2017, 65, 1975–1987. [Google Scholar] [CrossRef] [Green Version]
  34. Fantacci, C. Scalable multisensor multitarget tracking using the marginalized δ-GLMB Density. IEEE Signal Process. Lett. 2016, 23, 863–867. [Google Scholar] [CrossRef] [Green Version]
  35. Chen, D.W.; Li, C.Y.; Ji, H.B. Multi-target joint detection, tracking and classification with merged measurements using generalized labeled multi-Bernoulli filter. In Proceedings of the 20th International Conference on Information Fusion, Xi’an, China, 10–13 July 2017. [Google Scholar]
  36. Vo, B.T.; Vo, B.N. Tracking, identification, and classification with random finite sets. In Proceedings of the SPIE, Signal Processing, Sensor Fusion, and Target Recognition XXII, Baltimore, MD, USA, 29 April–2 May 2013. [Google Scholar]
  37. Li, M.Z.; Jing, Z.L.; Dong, P.; Pan, H. Multi-target joint detection, tracking and classification using generalized labeled multi-Bernoulli filter with bayes risk. In Proceedings of the 19th International Conference Information Fusion, Heidelberg, Germany, 5–8 July 2016. [Google Scholar]
  38. Smets, P.; Kennes, R. The transferable belief model. Artif. Intell. 1994, 66, 191–234. [Google Scholar] [CrossRef]
  39. Tang, S.; Andriluka, M.; Schiele, B. Multi people tracking with lifted multicut and person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  40. Brasó, G.; Leal-Taixé, L. Learning a neural solver for multiple object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  41. Henschel, R.; Zou, Y.; Rosenhahn, B. Multiple people tracking using body and joint detections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  42. Weng, X.; Wang, Y.; Man, Y. GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking with Multi-Feature Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
  43. Gao, J.; Zhang, T.; Xu, C. Graph convolutional tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  44. Ristani, E.; Tomasi, C. Features for multi-target multicamera tracking and re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
Figure 1. Graphical representations of belief, plausibility and commonality.
Figure 2. Algorithm flow diagram.
Figure 3. Prior pignistic class probabilities; (a) pignistic class probabilities for velocity, (b) pignistic class probabilities for acceleration, (c) pignistic class probabilities for turn rate.
Figure 4. The tracks and measurements with miss detection and clutter.
Figure 5. Classification results; (a) model probabilities of target 1, (b) model probabilities of target 5, (c) pignistic class probabilities of target 1, (d) pignistic class probabilities of target 5.
Figure 6. The comparison of the classification results of target 1; (a) continuous transferable belief model (cTBM), (b) Bayesian theory.
Figure 7. The comparison of the classification results of target 2; (a) pignistic class probabilities of target 2 based on multiple models, (b) pignistic class probabilities of target 2 based on a single model.
Figure 8. The comparison of tracking and classification results between multi-model marginal generalized labeled multi-Bernoulli (MGLMB)-Bayesian and MGLMB-cTBM; (a) cardinality estimates, (b) overall tracking performance, (c) average error class probability.
Figure 9. The comparison of tracking and classification results among MGLMB-Bayesian, PHD-TBM and MGLMB-cTBM; (a) cardinality estimates, (b) overall tracking performance, (c) average error class probability.
Table 1. Numerical values for the prior pignistic class probability density.

                           Commercial Plane   Bomber        Fighter
Velocity interval (km/s)   [680, 1020]        [850, 1190]   [720, 1580]
Velocity var (km/s)        120                160           250
Acceleration mean (g)      0                  0             0
Acceleration var (g)       0.8                1.2           1.6
Turn rate mean (rad/s)     0                  0             0
Turn rate var (rad/s)      0.1                0.2           0.4
Table 2. Average computational time.

MGLMB-cTBM   MGLMB-Bayesian   PHD-TBM
15.7 s       14.5 s           0.095 s
