Article

FISST Based Method for Multi-Target Tracking in the Image Plane of Optical Sensors

School of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Sensors 2012, 12(3), 2920-2934; https://doi.org/10.3390/s120302920
Submission received: 3 February 2012 / Revised: 23 February 2012 / Accepted: 1 March 2012 / Published: 2 March 2012
(This article belongs to the Section Physical Sensors)

Abstract

A finite set statistics (FISST)-based method is proposed for multi-target tracking in the image plane of optical sensors. The method uses signal amplitude information in the probability hypothesis density (PHD) filter, which is derived from FISST, to improve multi-target tracking performance. The amplitude of the signals generated by the optical sensor is modeled first, and the amplitude likelihood ratio between target and clutter is derived from this model. An alternative approach is adopted for situations where the signal-to-noise ratio (SNR) of the target is unknown. The PHD recursion equations incorporating amplitude information are then derived, and a Gaussian mixture (GM) implementation of the filter is given. Simulation results demonstrate that the proposed method achieves significantly better performance than the generic PHD filter. Moreover, our method has much lower computational complexity in scenarios with high SNR and dense clutter.

1. Introduction

Optical sensors have been widely applied in both military and civil areas due to their long detection range, high concealment and large coverage area. Since there is usually a long distance between the targets and the sensor, which produces a low-SNR, dense-clutter scenario, multi-target tracking in the image plane of optical sensors is a difficult problem. The aim of multi-target tracking is to estimate the number of targets and the state of each target from the set of received measurements. Owing to the time-varying number of targets in the sensor's field of view and the presence of missed detections and dense clutter, multi-target tracking in the image plane remains challenging.

In addition to the location measurement, amplitude information has been shown to improve tracking performance [1–5]. One of the pioneering techniques was the approach proposed by Colegrove, Lerro and Bar-Shalom [1,2], where a probabilistic data association (PDA) filter utilizing target amplitude was applied in the context of single-target tracking. Target amplitude has also been incorporated in the multiple hypothesis tracking (MHT) framework [3] and the Viterbi data association scheme [4]. More recently, the significance of target amplitude has been explored for data association of closely spaced targets in [5]. Although significant progress has been made, the approaches mentioned above are based on data association, which incurs a high computational cost in most circumstances. The traditional data association approach is therefore not the best option for multi-target tracking in the image plane of an optical sensor in scenarios with a time-varying number of targets, low SNR and dense clutter.

A good candidate is the emerging Bayesian approach in the framework of finite set statistics (FISST) proposed by Mahler [6]. FISST provides a set of mathematical tools that allows direct application of Bayesian inference to multi-target problems. The probability hypothesis density (PHD) filter proposed by Mahler [7] is a computationally tractable approximation of the optimal multi-target Bayes filter based on FISST. Operating on the single-target state space, the PHD filter avoids the combinatorial problem that arises from data association and thus achieves superior performance compared with the traditional MHT algorithm in multi-target tracking [8,9]. These features make the PHD filter extremely attractive to many researchers; representative applications can be found in [10–13]. In [14], Clark et al. incorporated target amplitude into a PHD filter for the first time in multi-target tracking, which improves the tracking performance. However, their amplitude model is designed for radar and sonar applications, and a computational complexity analysis of their method is absent.

In this paper, we propose a FISST-based method that uses signal amplitude information in a PHD filter for multi-target tracking in the image plane of optical sensors. Based on an analysis of the imaging characteristics of the optical sensor, we model the signal amplitude and incorporate it into the PHD recursion in the form of an amplitude likelihood ratio. For the situation where the target SNR is unknown, we present an alternative method based on this amplitude model. Simulation results demonstrate a significant improvement in tracking performance. The computational complexity of our method in scenarios with different clutter densities and SNRs is also discussed.

This paper is organized as follows: Section 2 models the signal amplitude generated by the optical sensor. Section 3 shows how to incorporate the amplitude information (for the known and unknown SNR cases) into a PHD filter and gives the GM implementation of the resulting filter. Section 4 presents simulation results that validate the proposed method. Conclusions are given in Section 5.

2. Amplitude Measurement Model

2.1. Amplitude Likelihood Ratio

In an optical multi-target tracking system, the original images taken by the sensor are usually processed by background suppression before being used for target tracking. Assuming the noise is additive, we define the SNR d of the residual image as [15–17]:

$$d = \frac{s}{\sigma} \qquad (1)$$
where s is the mean signal value of the target and σ is the standard deviation of the residual image. Each measurement from the image consists of a two-dimensional position vector z in the image plane and the corresponding amplitude a ≥ 0; that is, a measurement vector has the form $\tilde{z} := (z^T, a)^T$. For simplicity, we assume that the target signal has no spreading in the image plane. Assuming Gaussian noise, the probability densities $p_0(a)$ and $p_1(a|d)$ of the amplitude of the false alarms and of the target can be written as [18]:
$$p_0(a) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{a^2}{2\sigma^2}\right) \qquad (2)$$
$$p_1(a|d) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(a-s)^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(a-\sigma d)^2}{2\sigma^2}\right) \qquad (3)$$

This leads to the probabilities of false alarm $p_{FA}^\tau$ and detection $p_D^\tau(d)$ for a detection threshold τ:

$$p_{FA}^\tau = \int_\tau^\infty p_0(a)\,da = \operatorname{erf}(\tau/\sigma) \qquad (4)$$
$$p_D^\tau(d) = \int_\tau^\infty p_1(a|d)\,da = \operatorname{erf}(\tau/\sigma - d) \qquad (5)$$
where $\operatorname{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_x^{+\infty} \exp\left(-\frac{y^2}{2}\right) dy$ is the probability error function (note that this definition is the Gaussian tail probability, often written Q(x), rather than the standard error function). According to Equations (4) and (5), we have:
$$\operatorname{erf}^{-1}(p_{FA}^\tau) - \operatorname{erf}^{-1}[p_D^\tau(d)] = d \qquad (6)$$

In target tracking applications, $p_{FA}^\tau$ is usually fixed. Given $p_{FA}^\tau$, the threshold τ can be calculated via the inverse of Equation (4) with the parameter σ, and the probability of detection $p_D^\tau(d)$ can then be calculated via Equation (5) or (6) for a target with SNR d. Table 1 lists the values of $p_D^\tau(d)$ for different SNR values d and specified $p_{FA}^\tau$, with σ set to a normalized value, i.e., σ = 1.
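The mapping from a chosen $p_{FA}^\tau$ to τ (by inverting Equation (4)) and then to $p_D^\tau(d)$ (Equation (5)) can be sketched numerically. The sketch below uses only Python's standard library; the paper's tail-probability "erf" is implemented via `math.erfc`, and the function names are illustrative:

```python
import math

def gauss_tail(x):
    """The paper's erf(x): P(N(0,1) > x), i.e., the Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def threshold_for_pfa(p_fa, sigma=1.0):
    """Invert Equation (4): solve gauss_tail(tau / sigma) = p_fa by bisection."""
    lo, hi = 0.0, 20.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gauss_tail(mid / sigma) > p_fa:
            lo = mid   # tail still too heavy: raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

def detection_prob(tau, d, sigma=1.0):
    """Equation (5): p_D^tau(d) for a target with SNR d."""
    return gauss_tail(tau / sigma - d)

tau = threshold_for_pfa(1e-4)    # ~3.719 for sigma = 1 (cf. Table 2)
p_d = detection_prob(tau, 6.0)   # ~0.9887 (cf. Table 1)
```

Reproducing Table 1 this way is a useful sanity check on the amplitude model.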

When the target SNR d is known, the amplitude likelihood functions for the false alarm and the target are:

$$c_a^\tau(a) = \frac{1}{p_{FA}^\tau}\, p_0(a), \qquad a \geq \tau \qquad (7)$$
$$g_a^\tau(a|d) = \frac{1}{p_D^\tau(d)}\, p_1(a|d), \qquad a \geq \tau \qquad (8)$$

The amplitude likelihood ratio given a threshold τ is defined as [19]:

$$\rho_a^\tau(a|d) = \frac{g_a^\tau(a|d)}{c_a^\tau(a)} \qquad (9)$$

We use the notation $c_a(a)$ and $\rho_a(a|d)$ for the case τ = 0; then:

$$c_a(a) = p_0(a) \qquad (10)$$
$$\rho_a(a|d) = \frac{p_1(a|d)}{p_0(a)} \qquad (11)$$

From Equations (5) and (9) we see that the calculation of $p_D^\tau(d)$ and $\rho_a^\tau(a|d)$ relies on a specified, known target SNR; however, this requirement cannot be satisfied in most practical tracking systems. We next adopt an alternative approach to circumvent this issue.

2.2. Method for Unknown SNR

When the target SNR is unknown, a straightforward approach would be to estimate the unknown parameter d from the measurement amplitudes a. However, this requires a large number of measurements from the target to achieve an accurate estimate of d. Furthermore, because the association between measurements and targets is unknown in a multi-target environment with clutter, such estimation approaches usually fail. Similar to the idea introduced in [14], we adopt an alternative approach in which we do not attempt to estimate d at all. Instead, we marginalize out the parameter d over its range of possible values and obtain a probability of detection $p_D^\tau$ and a likelihood ratio $\rho_a^\tau$ that are not conditional on d.

Since we always have some prior information about the targets being tracked, we assume that p(d), defined on the range of possible SNR values [d1, d2], gives the expected probability distribution of SNR values. Because the amplitude distributions in Equations (2) and (3) are symmetric and thus introduce no bias toward high- or low-SNR targets, a reasonable choice for p(d) is the uniform distribution U[d1, d2]. We then define the probability of detection and the amplitude likelihood ratio for unknown SNR as:

$$p_D^\tau = \int_{d_1}^{d_2} p(\nu)\, p_D^\tau(\nu)\, d\nu \qquad (12)$$
$$\rho_a^\tau(a) = \int_{d_1}^{d_2} p(\nu)\, \rho_a^\tau(a|\nu)\, d\nu \qquad (13)$$

From Equations (5) and (12) we have:

$$p_D^\tau = \frac{1}{d_2 - d_1} \int_{d_1}^{d_2} \operatorname{erf}(\tau/\sigma - \nu)\, d\nu \qquad (14)$$

Note that $p_D^\tau$ over the marginalized region [d1, d2] can be computed offline by numerical integration, since it does not depend on the measurements and therefore need not be recomputed at each iteration. The computation of $\rho_a^\tau(a)$ in Equation (13) can be simplified, as will be shown in Section 3.2.
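As a sketch, the offline evaluation of Equation (14) can be done with a simple trapezoidal rule; the prior range [2, 10] below matches the simulation in Section 4, and the function names are illustrative:

```python
import math

def gauss_tail(x):
    # the paper's erf(x): Gaussian tail probability P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def marginal_pd(tau, d1, d2, sigma=1.0, n=4000):
    """Equation (14): p_D^tau averaged over a uniform SNR prior U[d1, d2],
    computed offline with the trapezoidal rule."""
    h = (d2 - d1) / n
    acc = 0.5 * (gauss_tail(tau / sigma - d1) + gauss_tail(tau / sigma - d2))
    for i in range(1, n):
        acc += gauss_tail(tau / sigma - (d1 + i * h))
    return acc * h / (d2 - d1)
```

With τ ≈ 3.719 (i.e., $p_{FA}^\tau = 10^{-4}$) and d ∈ [2, 10], this gives $p_D^\tau$ ≈ 0.78, noticeably below the known-SNR value 0.9887 at d = 6, reflecting the prior mass placed on low-SNR values.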

3. PHD Filter with Signal Amplitude Information

3.1. Target Dynamic and Measurement Models

Suppose that at time k there are $N_k$ target states $x_{k,1}, \dots, x_{k,N_k}$, each taking values in a state space $\mathcal{X} \subseteq \mathbb{R}^{n_x}$, and $M_k$ measurements (detections) $z_{k,1}, \dots, z_{k,M_k}$, each taking values in the observation space $\mathcal{Z} \subseteq \mathbb{R}^{n_z}$. In the PHD filter, the multi-object state and the multi-object observation are represented by random finite sets (RFSs):

$$X_k = \{x_{k,1}, \dots, x_{k,N_k}\} \in \mathcal{F}(\mathcal{X}), \qquad Z_k = \{z_{k,1}, \dots, z_{k,M_k}\} \in \mathcal{F}(\mathcal{Z}) \qquad (15)$$
where $\mathcal{F}(\mathcal{X})$ and $\mathcal{F}(\mathcal{Z})$ are the collections of finite subsets of $\mathcal{X}$ and $\mathcal{Z}$, respectively. The state $x = (x, y, \dot{x}, \dot{y})^T$ of each target contains the position $(x, y)^T$ and velocity $(\dot{x}, \dot{y})^T$ in the image plane, while the measurement z is defined in Section 2. We assume that each target follows a linear Gaussian dynamical model and the sensor has a linear Gaussian measurement model, i.e.,
$$f_{k|k-1}(x|x') = \mathcal{N}(x;\, F_{k-1} x',\, Q_{k-1}) \qquad (16)$$
$$L_z(x) = \mathcal{N}(z;\, H_k x,\, R_k) \qquad (17)$$
where $\mathcal{N}(\cdot\,; m, P)$ denotes a Gaussian density with mean m and covariance P, $F_{k-1}$ is the state transition matrix, $Q_{k-1}$ is the process noise covariance, $H_k$ is the observation matrix, and $R_k$ is the observation noise covariance.

3.2. The PHD Recursion with Amplitude Information

We abbreviate the PHD filter incorporating amplitude information as the AI-PHD filter. Next we derive the prediction and update equations of the AI-PHD filter based on the amplitude likelihood ratio derived in Section 2. For simplicity, we do not consider target spawning in this paper.

Step 1. Prediction: The prediction equation of the AI-PHD filter is the same as that of the generic PHD filter, since the state vector and state transition model are unchanged, i.e.,

$$D_{k|k-1}(x) = \gamma_k(x) + \int p_{S,k}(x')\, f_{k|k-1}(x|x')\, D_{k-1|k-1}(x')\, dx' \qquad (18)$$
where γk(x) is the birth term for new targets, pS,k(x′) is the probability of target survival, fk|k−1(x|x′) is the transition density and Dk−1|k−1(x′) is the PHD at time k − 1.

Step 2. Update: The update equation changes when amplitude information is incorporated. By analogy with the update equation of the generic PHD filter in [7], the update equation of our AI-PHD filter is

$$D_{k|k}(x) = L_{\tilde{Z}_k}(x)\, D_{k|k-1}(x) \qquad (19)$$
where $L_{\tilde{Z}_k}(x)$ is the pseudo-likelihood function
$$L_{\tilde{Z}_k}(x) = 1 - p_D(x) + p_D(x) \sum_{\tilde{z} \in \tilde{Z}_k} \frac{L_{\tilde{z}}(x)}{\lambda V c(\tilde{z}) + D_{k|k-1}[p_D L_{\tilde{z}}]} \qquad (20)$$
$$D_{k|k-1}[p_D L_{\tilde{z}}] = \int p_D(x)\, L_{\tilde{z}}(x)\, D_{k|k-1}(x)\, dx \qquad (21)$$
where λ and V are the clutter density and the area of the image plane of the optical sensor, respectively. Assuming the amplitude $a_k$ is independent of the target state $x_k$, we can rewrite $L_{\tilde{z}}(x)$ and $c(\tilde{z})$ as
$$L_{\tilde{z}}(x) = L_z(x)\, g_a^\tau(a|d) \qquad (22)$$
$$c(\tilde{z}) = c(z)\, c_a^\tau(a) \qquad (23)$$
where $L_z(x)$ is the measurement location likelihood function and c(z) is the probability density of the false alarm spatial distribution in the image plane. Assuming the targets remain within the surveillance region of the sensor, the probability of detection for a given threshold τ depends only on d:
$$p_D(x) = p_D^\tau(d) \qquad (24)$$

Substituting Equations (5), (8) and (22)–(24) into Equation (20), we obtain the pseudo-likelihood function of the AI-PHD filter as

$$L_{\tilde{Z}_k}(x) = 1 - p_D^\tau(d) + p_D^\tau(d) \sum_{\tilde{z} \in \tilde{Z}_k} \frac{\rho_a^\tau(a|d)\, L_z(x)}{\lambda V c(z) + p_D^\tau(d)\, \rho_a^\tau(a|d)\, D_{k|k-1}[L_z]} \qquad (25)$$

Equations (18) and (25) compose the recursion of the AI-PHD filter. For the unknown SNR case, the probability of detection $p_D^\tau(d)$ and the amplitude likelihood ratio $\rho_a^\tau(a|d)$ are replaced by $p_D^\tau$ and $\rho_a^\tau(a)$, respectively.

We can simplify the computation of $\rho_a^\tau(a)$ by noting that it always appears multiplied by the probability of detection in Equation (25). From Equations (8) and (9) we have

$$p_D^\tau(d)\, \rho_a^\tau(a|d) = \frac{p_1(a|d)}{c_a^\tau(a)} \qquad (26)$$

Hence, instead of computing $\rho_a^\tau(a)$ by Equation (13), we can compute the product $p_D^\tau \rho_a^\tau(a)$ directly using the marginalization introduced in Section 2.2, i.e.,

$$p_D^\tau \rho_a^\tau(a) = \int_{d_1}^{d_2} p(\nu)\, \frac{p_1(a|\nu)}{c_a^\tau(a)}\, d\nu = \frac{1}{\sigma\, c_a^\tau(a)\, (d_2 - d_1)} \left[\Phi\!\left(d_2 - \frac{a}{\sigma}\right) - \Phi\!\left(d_1 - \frac{a}{\sigma}\right)\right] \qquad (27)$$
where $\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{t^2}{2}\right) dt$ is the standard normal distribution function, which can be computed easily. Consequently, our approach incorporates the amplitude information into the PHD filter with only a minor additional computational load.
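A sketch of the closed form in Equation (27), checked against a direct numerical evaluation of the marginal in Equation (13) (illustrative names; σ normalized to 1):

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / SQRT2PI

def phi_cdf(x):
    """Standard normal distribution function Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def c_a(a, p_fa, sigma=1.0):
    """Equation (7): false-alarm amplitude likelihood c_a^tau(a) = p0(a)/p_FA."""
    return norm_pdf(a / sigma) / (sigma * p_fa)

def pd_rho_closed(a, d1, d2, p_fa, sigma=1.0):
    """Equation (27): closed form for p_D^tau * rho_a^tau(a), uniform prior U[d1, d2]."""
    num = phi_cdf(d2 - a / sigma) - phi_cdf(d1 - a / sigma)
    return num / (sigma * c_a(a, p_fa, sigma) * (d2 - d1))
```

Only two Φ evaluations per measurement are needed, which is the minor additional load noted above; a stronger amplitude a yields a larger combined detection/likelihood factor, as expected.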

Finally, we verify the consistency of our AI-PHD filter with the generic PHD filter. If the target SNR is set to d = 0, then $p_D^\tau(0) = p_{FA}^\tau$ and Equations (7)–(9) give $\rho_a^\tau(a|d) \equiv 1$; under this condition the AI-PHD filter degenerates to the generic PHD filter.

3.3. Gaussian Mixture Implementation

An analytic solution to the PHD filter can be found under linear Gaussian assumptions on the dynamic and observation models, as described in Equations (16) and (17) [20]. In this case, both the predicted and updated PHDs are represented by mixtures of Gaussians whose means and covariances are propagated with the Kalman filter and whose weights are found using the PHD filter equations. We use the Gaussian mixture implementation of our filter for its computational simplicity and its convenience for target state extraction compared with the Sequential Monte Carlo (SMC) method [21].

We assume that the survival probability is state independent, i.e., $p_{S,k}(x') = p_{S,k}$, and that the detection probabilities are $p_D^\tau(d)$ and $p_D^\tau$ for the known and unknown SNR cases, respectively. The intensity of the target birth RFS is a Gaussian mixture of the form

$$\gamma_k(x) = \sum_{i=1}^{J_{\gamma,k}} \omega_{\gamma,k}^{(i)}\, \mathcal{N}\big(x;\, m_{\gamma,k}^{(i)},\, P_{\gamma,k}^{(i)}\big) \qquad (28)$$
where $J_{\gamma,k}$, $\omega_{\gamma,k}^{(i)}$, $m_{\gamma,k}^{(i)}$, $P_{\gamma,k}^{(i)}$, $i = 1, \dots, J_{\gamma,k}$, are given model parameters that determine the shape of the birth intensity.

We assume a uniform spatial distribution of clutter in the measurement space, so that the clutter location likelihood depends on neither the state nor the measurement. The clutter location distribution is therefore constant over the measurement space and equals the reciprocal of the area of the image plane of the optical sensor, i.e., c(z) = 1/V. Next we give the prediction and update equations of the Gaussian mixture implementation of our AI-PHD filter.

Prediction: The posterior intensity at time k − 1 is a Gaussian mixture of the form

$$D_{k-1|k-1}(x) = \sum_{i=1}^{J_{k-1}} \omega_{k-1}^{(i)}\, \mathcal{N}\big(x;\, m_{k-1}^{(i)},\, P_{k-1}^{(i)}\big) \qquad (29)$$
where $J_{k-1}$ is the number of Gaussian components with weights $\omega_{k-1}^{(i)}$, means $m_{k-1}^{(i)}$ and covariances $P_{k-1}^{(i)}$. The predicted intensity is then also a Gaussian mixture:
$$D_{k|k-1}(x) = \gamma_k(x) + p_{S,k} \sum_{i=1}^{J_{k-1}} \omega_{k-1}^{(i)}\, \mathcal{N}\big(x;\, m_{S,k|k-1}^{(i)},\, P_{S,k|k-1}^{(i)}\big) \qquad (30)$$
where the birth intensity γk(x) is given by Equation (28) and the means m S , k | k 1 ( i ) and covariances P S , k | k 1 ( i ) are computed with the Kalman filter prediction.

Update: We rewrite the predicted intensity Dk|k−1(x) as a Gaussian mixture of the form

$$D_{k|k-1}(x) = \sum_{i=1}^{J_{k|k-1}} \omega_{k|k-1}^{(i)}\, \mathcal{N}\big(x;\, m_{k|k-1}^{(i)},\, P_{k|k-1}^{(i)}\big) \qquad (31)$$

Substituting Equations (31) and (20)–(25) into Equation (19), we obtain the intensity of our AI-PHD filter updated by the measurement set $\tilde{Z}_k$ in Gaussian mixture form:

$$D_{k|k}(x) = \big[1 - p_D^\tau(d)\big]\, D_{k|k-1}(x) + \sum_{i=1}^{J_{k|k-1}} \sum_{\tilde{z} \in \tilde{Z}_k} \omega_k^{(i)}(\tilde{z})\, \mathcal{N}\big(x;\, m_{k|k}^{(i)}(\tilde{z}),\, P_{k|k}^{(i)}\big) \qquad (32)$$
where the updated means $m_{k|k}^{(i)}(\tilde{z}) = m_{k|k}^{(i)}(z)$ and covariances $P_{k|k}^{(i)}$ are calculated with the Kalman filter update. The updated weights $\omega_k^{(i)}(\tilde{z})$ in Equation (32) are computed as
$$\omega_k^{(i)}(\tilde{z}) = \frac{p_D^\tau(d)\, \rho_a^\tau(a|d)\, \omega_{k|k-1}^{(i)}\, \mathcal{N}\big(z;\, \hat{z}_{k|k-1}^{(i)},\, S_{k|k-1}^{(i)}\big)}{\lambda + p_D^\tau(d)\, \rho_a^\tau(a|d) \sum_{l=1}^{J_{k|k-1}} \omega_{k|k-1}^{(l)}\, \mathcal{N}\big(z;\, \hat{z}_{k|k-1}^{(l)},\, S_{k|k-1}^{(l)}\big)} \qquad (33)$$
where $\hat{z}_{k|k-1}^{(i)} = H_k m_{k|k-1}^{(i)}$ is the predicted measurement and $S_{k|k-1}^{(i)} = H_k P_{k|k-1}^{(i)} H_k^T + R_k$ is the innovation covariance.
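The weight update of Equation (33) for a single augmented measurement can be sketched as below. The isotropic innovation covariance S = s²I and all numbers are illustrative simplifications, not the full Kalman machinery:

```python
import math

def gauss2(z, m, s2):
    """Isotropic 2-D Gaussian N(z; m, s2 * I), standing in for N(z; zhat, S)."""
    dx, dy = z[0] - m[0], z[1] - m[1]
    return math.exp(-0.5 * (dx * dx + dy * dy) / s2) / (2.0 * math.pi * s2)

def updated_weights(w_pred, z_hats, s2, z, pd_rho, clutter_density):
    """Equation (33): component weights updated by one measurement z.
    pd_rho is p_D^tau(d) * rho_a^tau(a|d) evaluated at the measurement amplitude
    (or the marginalized product of Equation (27) when the SNR is unknown)."""
    lik = [w * gauss2(z, m, s2) for w, m in zip(w_pred, z_hats)]
    denom = clutter_density + pd_rho * sum(lik)
    return [pd_rho * l / denom for l in lik]
```

A high-amplitude measurement (large `pd_rho`) drives the total updated weight toward 1, while a clutter-like amplitude leaves most of the mass with the missed-detection term; this is what concentrates the posterior intensity near true targets.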

When the SNR is unknown, the probability of detection $p_D^\tau(d)$ in the PHD update Equation (32) is replaced by $p_D^\tau$ from Equation (14), and the product $p_D^\tau(d)\, \rho_a^\tau(a|d)$ in the weight update Equation (33) is replaced by $p_D^\tau \rho_a^\tau(a)$ from Equation (27).

4. Simulation

In this section, we set up a multi-target tracking simulation in the image plane of an optical sensor to examine the performance and computational complexity of our method for the known and unknown SNR cases, benchmarking against the generic PHD filter under different combinations of probability of false alarm and SNR.

4.1. Simulation Scene and OSPA Metric

Consider a scenario with an unknown and time-varying number of targets in clutter in the image region [−300, 300] × [2,000, 2,600] (pixel). Up to Nk = 6 targets are generated in this region with random birth and death times. Figure 1 shows the true trajectory of each target. All targets in each simulation have the same mean SNR (this is not required by the algorithm but simplifies the presentation of results).

Each target has survival probability pS,k = 0.99 and follows the linear Gaussian dynamics in Equation (16) with:

$$F_{k-1} = \begin{bmatrix} I_2 & \Delta I_2 \\ 0_2 & I_2 \end{bmatrix}, \qquad Q_{k-1} = \sigma_\nu^2 \begin{bmatrix} \frac{\Delta^4}{4} I_2 & \frac{\Delta^3}{2} I_2 \\ \frac{\Delta^3}{2} I_2 & \Delta^2 I_2 \end{bmatrix} \qquad (34)$$
where Δ = 1 s is the sampling period and σv = 0.5 (pixel/s²) is the standard deviation of the process noise. The location measurement follows the observation model in Equation (17) with $H_k = [\,I_2 \;\; 0_2\,]$, $R_k = \sigma_\varepsilon^2 I_2$, where σε = 1 (pixel) is the standard deviation of the measurement noise.
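Concretely, the model matrices of Equation (34) with Δ = 1 s and σv = 0.5 can be built as plain nested lists (a sketch; no linear-algebra library assumed):

```python
def cv_model(dt=1.0, sigma_v=0.5):
    """Constant-velocity F_{k-1} and Q_{k-1} of Equation (34)
    for the state x = (x, y, xdot, ydot)^T."""
    F = [[1.0, 0.0, dt,  0.0],
         [0.0, 1.0, 0.0, dt ],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]
    v2 = sigma_v ** 2
    a, b, c = dt ** 4 / 4.0, dt ** 3 / 2.0, dt ** 2
    Q = [[v2 * a, 0.0,    v2 * b, 0.0   ],
         [0.0,    v2 * a, 0.0,    v2 * b],
         [v2 * b, 0.0,    v2 * c, 0.0   ],
         [0.0,    v2 * b, 0.0,    v2 * c]]
    return F, Q

def predict_state(F, x):
    """One prediction step: x_k = F x_{k-1}."""
    return [sum(F[i][j] * x[j] for j in range(4)) for i in range(4)]
```

Each position coordinate advances by its velocity times Δ, and Q is the standard white-acceleration noise covariance for this state ordering.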

The intensities of the birth RFS are Gaussian mixtures of the form:

$$\gamma_k(x) = \sum_{i=1}^{6} 0.1\, \mathcal{N}\big(x;\, m_\gamma^{(i)},\, P_\gamma^{(i)}\big) \qquad (35)$$
where $m_\gamma^{(i)}$ is chosen around the mean initial state of the i-th target and $P_\gamma^{(i)} = \operatorname{diag}(50, 50, 15, 15)$, i = 1, …, 6. To mitigate the growth in the number of mixture components, at each time step the number of Gaussian components is capped at Jmax = 100, pruning is performed with a weight threshold of T = 10−5, and merging is performed with a threshold of U = 2.

We adopt the optimal subpattern assignment (OSPA) metric [22] for multi-target performance evaluation, since it jointly captures errors in the target state and target number estimates. Given two arbitrary finite sets $X = \{x_1, \dots, x_m\}$ and $Y = \{y_1, \dots, y_n\}$, the OSPA is computed as follows:

$$\bar{d}_p^{(c)}(X, Y) = \begin{cases} 0, & m = n = 0 \\ \left(\dfrac{1}{n}\left(\min_{\pi \in \Pi_n} \displaystyle\sum_{i=1}^{m} d^{(c)}\big(x_i, y_{\pi(i)}\big)^p + c^p (n - m)\right)\right)^{1/p}, & m \leq n \\ \bar{d}_p^{(c)}(Y, X), & m > n \end{cases} \qquad (36)$$
where $\Pi_n$ is the set of permutations of {1, …, n}, $d^{(c)}(x, y) = \min(c, d(x, y))$, p is the order, which penalizes errors in the individual element estimates, and c is the cut-off parameter, which penalizes errors in the cardinality estimate. We chose p = 2 and c = 30 (pixel) in our simulation. Note that the chosen cut-off c is significantly larger than the typical measurement noise but significantly smaller than the maximal distance between targets, thus maintaining a balance between the cardinality and localization components of the OSPA error [22].
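For the small target numbers used here, the OSPA distance of Equation (36) can be sketched with brute-force assignment (a Hungarian-algorithm solver would be the usual choice for larger sets; the function name is illustrative):

```python
import math
from itertools import permutations

def ospa(X, Y, p=2.0, c=30.0):
    """OSPA distance of Equation (36) between finite sets of 2-D points.
    Brute-force over assignments, adequate only for small |X|, |Y|."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:
        return ospa(Y, X, p, c)        # symmetry case of Equation (36)

    def d_c(x, y):                     # cut-off distance d^(c)(x, y)
        return min(c, math.hypot(x[0] - y[0], x[1] - y[1]))

    best = min(sum(d_c(X[i], perm[i]) ** p for i in range(m))
               for perm in permutations(Y, m))
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

For example, a perfect estimate of one of two targets with the second target missed entirely gives $((0 + 30^2)/2)^{1/2} \approx 21.2$ pixel: the cardinality error dominates, as intended.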

4.2. Numerical Results

4.2.1. Filtering Results for Multi-Target Tracking

The effectiveness of our AI-PHD filter for multi-target tracking in the image plane of an optical sensor is verified through simulation. We assume a moderately cluttered scenario with probability of false alarm $p_{FA}^\tau = 1 \times 10^{-4}$, corresponding to a clutter density λ = 1 × 10−4 pixel−2. The SNRs of all targets are set to d = 6, giving a probability of detection $p_D^\tau(d) \approx 0.99$ (see Table 1). For the unknown SNR case, the SNR region is set to [2, 10] and the probability of detection is replaced by $p_D^\tau$, computed by Equation (12). The other filter parameters are as given in Section 4.1. The true trajectories and filter estimates are shown in the x and y coordinates of the image plane versus time for the AI-PHD filter with known and unknown target SNR in Figures 2 and 3, respectively (denoted case 1 and case 2 accordingly).

From the estimates of the AI-PHD filter shown in Figures 2 and 3, we see that for both the known and unknown SNR cases the filter eliminates dense clutter in a scenario with a time-varying number of targets. All targets are detected immediately after birth and tracked accurately, which highlights the track initialization, maintenance and termination capabilities of our algorithm. The results also indicate good performance in estimating the number of targets and the target states; we evaluate this performance by Monte Carlo simulation next.

4.2.2. Monte Carlo Results and Analysis

Average OSPA and computation time per frame are used to evaluate the performance and computational complexity of our AI-PHD filter. To demonstrate the improvements of the proposed method, we benchmarked the results against a generic PHD filter that does not use amplitude information. To make the assessment as fair as possible, the probability of detection for the filter without amplitude information was chosen to be the same as that of the AI-PHD filter in the known SNR case, as in Table 1. Fifty Monte Carlo runs were carried out for each combination of $p_{FA}^\tau$ and d on a computer (Intel quad-core processor, 2.66 GHz, 32-bit operating system, 4 GB RAM) using Matlab (R2009b). Average OSPA values for the AI-PHD filter in the known and unknown SNR cases and for the generic PHD filter are given in Table 2, separated by '/' accordingly.

From Table 2 we see that for all given combinations of $p_{FA}^\tau$ and d (corresponding to different probabilities of detection), our AI-PHD filter, with both known and unknown SNR, outperforms the generic PHD filter. The improvement grows as $p_{FA}^\tau$ or d increases. In the case $p_{FA}^\tau = 1 \times 10^{-3}$ and d = 8, where the amplitude information is most beneficial, our AI-PHD filter achieves an average OSPA (pixel) that is 15.94 and 12.91 lower for the known and unknown SNR cases, respectively. This improvement has two main causes: first, as d increases, the false alarm amplitude distribution represents the target distribution poorly, so the two distributions are well separated; second, as $p_{FA}^\tau$ increases, having more measurements aids the method using amplitude, since less useful information is discarded. Table 2 also shows that the performance of the generic PHD filter without amplitude degrades rapidly as $p_{FA}^\tau$ increases, since there are more false alarm measurements that cannot be distinguished from target measurements. In contrast, we see no deterioration in the performance of the AI-PHD filter in this case; for the known SNR case in particular, performance improves consistently as $p_{FA}^\tau$ increases, which means our method works even better in scenarios with dense clutter. The computational complexity of the AI-PHD filter and the generic PHD filter without amplitude information is compared in Figure 4, which shows the average computation time per frame versus target SNR for different $p_{FA}^\tau$. Since a similar complexity can be achieved for the unknown SNR case by computing Equation (27) with fast algorithms, only the result for the known SNR case is given.

We see that for the different given $p_{FA}^\tau$ and d, the results of the two filters are close, with a maximum difference of no more than 1 s. Figure 4(a) shows that in the low clutter density scenario, the AI-PHD filter has only a minor increase in average computation time over the generic PHD counterpart. Furthermore, the AI-PHD filter achieves an even lower computation time in the high clutter density scenario shown in Figure 4(b). In the case $p_{FA}^\tau = 1 \times 10^{-3}$, d = 8, where this reduction is most pronounced, the average computation time of the AI-PHD filter is reduced by 53.7% relative to the generic PHD counterpart; that is, the AI-PHD filter has even lower computational complexity than the PHD filter without amplitude information in scenarios with dense clutter and high SNR. The primary reason is that in these scenarios, given the same number of targets and measurements, the computation time is dominated by the multi-target state extraction step. With amplitude information incorporated, the update of the AI-PHD filter (see Equation (33)) gives heavier weights to the Gaussian components updated by target-originated measurements, thus placing comparatively higher intensity near the true target positions while suppressing the intensity near clutter positions (see Figure 5). The updated Gaussian components can therefore be pruned and merged quickly and accurately.

5. Conclusions

In this paper, we have proposed a FISST-based method using signal amplitude information in a PHD filter for multi-target tracking in the image plane of optical sensors. We extend the measurement model to include the signal amplitude of the observations and incorporate this information into the PHD recursion in the form of an amplitude likelihood ratio. Based on the assumption that the amplitudes of measurements from true targets are stronger than those from clutter, we show that our method significantly improves performance over the filter without amplitude information. Simulation results also demonstrate that our method has much lower computational complexity in scenarios with high SNR and dense clutter, which favors practical implementation. Future work will address improving tracking performance for targets with much lower SNR.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 61002026).

References

  1. Colegrove, S.B.; Davis, A.W.; Ayliffe, J.K. Track initiation and nearest neighbours incorporated into probabilistic data association. J. Electr. Electron. Eng. Aust 1986, 6, 191–198. [Google Scholar]
  2. Lerro, D.; Bar-Shalom, Y. Automated tracking with target amplitude information. Proceedings of the 1990 American Control Conference, San Diego, CA, USA, 23–25 May 1990.
  3. van Keuk, G. Multihypothesis tracking using incoherent signal-strength information. IEEE Trans. Aerosp. Electron. Syst 1996, 32, 1164–1170. [Google Scholar]
  4. la Scala, B.F. Viterbi data association tracking using amplitude information. Proceedings of the 7th International Conference on Information Fusion (Fusion ’04), Stockholm, Sweden, July 2004; pp. 698–705.
  5. Ehrman, L.M. Comparison of methods for using target amplitude to improve measurement-to-track association in multi-target tracking. Proceedings of the 9th International Conference on Information Fusion, Florence, Italy, 10–13 July 2007.
  6. Mahler, R. Statistical Multisource-Multitarget Information Fusion; Artech House: London, UK, 2007. [Google Scholar]
  7. Mahler, R.P.S. Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst 2003, 39, 1152–1178. [Google Scholar]
  8. Panta, K.; Clark, D.E.; Vo, B. Data association and track management for the Gaussian mixture probability hypothesis density filter. IEEE Trans. Aerosp. Electron. Syst 2009, 45, 1003–1016. [Google Scholar]
  9. Panta, K.; Vo, B.; Singh, S.; Doucet, A. Probability hypothesis density filter versus multiple hypothesis tracking. Proc. SPIE 2004, 5429, 284–295. [Google Scholar]
  10. Ma, W.; Vo, B.; Singh, S.; Baddeley, A. Tracking an unknown time-varying number of speakers using TDOA measurements: A random finite set approach. IEEE Trans. Signal Process 2006, 54, 3291–3303. [Google Scholar]
  11. Tobias, M.; Lanterman, A.D. A probability hypothesis density based multitarget tracking with multiple bistatic range and doppler observations. IEE Proc. Radar Sonar Navig 2005, 152, 195–205. [Google Scholar]
  12. Clark, D.; Vo, B.; Bell, J. GM-PHD filter multitarget tracking in sonar images. Proc. SPIE 2006. [Google Scholar] [CrossRef]
  13. Pollard, E.; Plyer, A.; Pannetier, B.; Champagnat, F.; le Besnerais, G. GM-PHD filters for multi-object tracking in uncalibrated aerial videos. Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, USA, July 2009.
  14. Clark, D.; Ristic, B.; Vo, B.-N.; Vo, B.T. Bayesian multi-object filtering with amplitude feature likelihood for unknown object SNR. IEEE Trans. Signal Process 2010, 58, 26–37. [Google Scholar]
  15. Gao, B. An operational method for estimating signal to noise ratio from data acquired with imaging spectrometers. Remote Sens. Environ 1993, 43, 23–33. [Google Scholar]
  16. Chuang, K.S.; Huang, H.K. Assessment of noise in a digital image using the join-count statistic and moran test. Phys. Med. Biol 1992, 37, 357–369. [Google Scholar]
  17. Lee, J.S.; Hopple, K. Noise modeling and estimation of remotely-sensed images. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, July 1989; pp. 1005–1008.
  18. Gonzalez, R.; Woods, R. Digital Image Processing, 2nd ed; Prentice Hall: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
  19. Lerro, D.; Bar-Shalom, Y. Interacting multiple model tracking with target amplitude feature. IEEE Trans. Aerosp. Electron. Syst 1993, 29, 494–508. [Google Scholar]
  20. Vo, B.; Ma, W. The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process 2006, 54, 4091–4104. [Google Scholar]
  21. Vo, B.; Singh, S.; Doucet, A. Sequential Monte Carlo methods for multi-target filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst 2005, 41, 1224–1245. [Google Scholar]
  22. Schuhmacher, D.; Vo, B.-N.; Vo, B. A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process 2008, 56, 3447–3457. [Google Scholar]
Figure 1. Target trajectories in the pixel plane with start/stop position as O/▵.
Figure 2. Filter estimates with known SNR.
Figure 3. Filter estimates with unknown SNR.
Figure 4. Average computation time per frame for different algorithms with different SNRs d and probabilities of false alarm $p_{FA}^\tau$.
Figure 5. Intensity functions for generic PHD filter (left) and AI-PHD filter (right).
Table 1. $p_D^\tau(d)$ under different SNR and $p_{FA}^\tau$ combinations.

$p_{FA}^\tau$ | d = 4 | d = 5 | d = 6 | d = 7 | d = 8
5 × 10−5 | 0.5436 | 0.8664 | 0.9825 | 0.9991 | 1.0000
1 × 10−4 | 0.6106 | 0.8999 | 0.9887 | 0.9995 | 1.0000
5 × 10−4 | 0.7610 | 0.9563 | 0.9966 | 0.9999 | 1.0000
1 × 10−3 | 0.8185 | 0.9719 | 0.9982 | 1.0000 | 1.0000
Table 2. Average OSPA (pixel) for different algorithms (known SNR / unknown SNR / generic PHD).

$p_{FA}^\tau$ | τ | d = 4 | d = 5 | d = 6 | d = 7 | d = 8
5 × 10−5 | 3.8906 | 17.42/18.88/19.00 | 8.17/8.23/9.65 | 2.93/3.71/6.08 | 1.57/3.53/5.78 | 0.97/2.69/5.27
1 × 10−4 | 3.7190 | 17.82/16.13/19.19 | 7.37/6.44/11.01 | 2.83/3.14/7.97 | 1.43/3.33/8.32 | 1.11/3.41/7.82
5 × 10−4 | 3.2905 | 16.41/13.53/20.21 | 6.65/5.91/15.71 | 2.79/4.14/14.89 | 1.38/4.18/15.15 | 0.95/3.99/14.43
1 × 10−3 | 3.0902 | 14.82/11.83/21.42 | 5.92/5.85/17.81 | 2.26/5.54/17.37 | 1.11/4.38/17.09 | 1.00/4.03/16.94

Xu, Y.; Xu, H.; An, W.; Xu, D. FISST Based Method for Multi-Target Tracking in the Image Plane of Optical Sensors. Sensors 2012, 12, 2920-2934. https://doi.org/10.3390/s120302920