*4.5. Gibbs-GLMB Filter*

Gibbs sampling is a special case of Markov chain Monte Carlo (MCMC), which transforms sampling from a high-dimensional space into a sequence of low-dimensional conditional samplings [63]. Assume the target state is $X\_k = \left(x\_{k,1}, \cdots, x\_{k,N(k)}\right)$, which obeys the probability distribution $\pi$; the joint distribution of the target state is then $\pi\left(x\_{k,1}, \cdots, x\_{k,N(k)}\right)$.


In Algorithm 2, $x\_{k,1:n-1}$ denotes the samples $x\_{k,1}, \cdots, x\_{k,n-1}$ that have already been generated in the current iteration, and $x\_{k-1,n+1:N(k)}$ denotes the samples $x\_{k-1,n+1}, \cdots, x\_{k-1,N(k)}$ from the previous iteration. The Gibbs sampling algorithm reduces the joint density estimation problem to a sequence of conditional probabilities, which lowers the sampling difficulty, and finally updates all components by iterating over each one in turn.
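As a minimal illustration of the conditional-sampling idea (not the tracker itself), the sketch below draws from a standard bivariate normal with correlation $\rho$ by alternating two one-dimensional conditional draws; the target distribution and all parameter values are illustrative assumptions, not part of the filter above.

```python
import math
import random


def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The full conditionals are one-dimensional: x | y ~ N(rho*y, 1-rho^2)
    and y | x ~ N(rho*x, 1-rho^2), so one high-dimensional draw is
    replaced by a sequence of low-dimensional conditional draws.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 - rho ** 2)  # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sigma)  # draw x from pi(x | y)
        y = rng.gauss(rho * x, sigma)  # draw y from pi(y | x)
        samples.append((x, y))
    return samples
```

After a burn-in period the chain's empirical correlation approaches $\rho$, which is the property the low-dimensional conditional updates are meant to preserve.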

In the prediction and update steps of the GLMB filter, the number of weights and association hypotheses grows exponentially over time. Using a ranked optimal assignment and k-shortest-path implementation, the truncation complexity is cubic in the number of measurements [35]. Truncating the GLMB filter by Gibbs sampling instead, with the prediction and update performed jointly, reduces the complexity to linear in the number of measurements.
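To make the per-sweep cost concrete, the following hedged sketch runs Gibbs sampling over association vectors, which is the kind of truncation step described above. The score matrix `eta` and the convention that index 0 encodes death/missed detection are illustrative assumptions; the exact conditionals used in the GLMB truncation of [36] are built from the quantities in (43)-(45).

```python
import random


def gibbs_assignments(eta, n_samples, seed=0):
    """Sketch of Gibbs sampling over track-to-measurement assignments.

    eta[i][j] is an (assumed) nonnegative score for giving track i the
    assignment j, where j = 0 stands for death/missed detection and
    j = 1..m index actual measurements. Each Gibbs step resamples one
    track's assignment from its conditional, forbidding measurements
    already taken by the other tracks, so a full sweep over n tracks
    costs O(n * m): linear in the number of measurements per track.
    """
    rng = random.Random(seed)
    n, m_plus_1 = len(eta), len(eta[0])
    gamma = [0] * n  # initial association: every track unassigned
    samples = set()
    for _ in range(n_samples):
        for i in range(n):  # one full Gibbs sweep
            taken = {gamma[k] for k in range(n) if k != i and gamma[k] > 0}
            weights = [0.0 if (j > 0 and j in taken) else eta[i][j]
                       for j in range(m_plus_1)]
            total = sum(weights)
            u = rng.random() * total  # inverse-CDF draw from conditional
            acc = 0.0
            for j, w in enumerate(weights):
                acc += w
                if u <= acc:
                    gamma[i] = j
                    break
        samples.add(tuple(gamma))  # distinct vectors = retained components
    return samples
```

Because measurements already in use are excluded from each conditional, every sampled vector is a valid association (no two tracks share a measurement), so each distinct sample corresponds to one retained GLMB component.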

Given the GLMB distribution (11) at the current time, the GLMB distribution at the next time is [36]:

$$
\pi\_{Z\_+}\left(\mathbf{X}\right) \approx \Delta\left(\mathbf{X}\right) \sum\_{I,\xi,I\_+,\theta\_+} \omega^{(I,\xi)} \omega\_{Z\_+}^{(I,\xi,I\_+,\theta\_+)} \delta\_{I\_+}\left(\mathcal{L}\left(\mathbf{X}\right)\right) \left[p\_{Z\_+}^{(\xi,\theta\_+)}\right]^{\mathbf{X}} \tag{42}
$$

where $I \in \mathcal{F}\left(\mathbb{L}\right)$, $\xi \in \Xi$, $I\_+ \in \mathcal{F}\left(\mathbb{L}\_+\right)$, $\theta\_+ \in \Theta\_+$ and

$$\omega\_{Z\_+}^{(I,\xi,I\_+,\theta\_+)} = \mathbf{1}\_{\Theta\_+(I\_+)}\left(\theta\_+\right)\left[1-\bar{P}\_S^{(\xi)}\right]^{I-I\_+}\left[\bar{P}\_S^{(\xi)}\right]^{I\cap I\_+}\left[1-r\_{B,+}\right]^{\mathbb{B}\_+-I\_+} r\_{B,+}^{\mathbb{B}\_+\cap I\_+}\left[\bar{\psi}\_{Z\_+}^{(\xi,\theta\_+)}\right]^{I\_+} \tag{43}$$

$$\bar{P}\_S^{(\xi)}\left(l\right) = \left\langle p^{(\xi)}\left(\cdot, l\right), P\_S\left(\cdot, l\right) \right\rangle \tag{44}$$

$$
\bar{\psi}\_{Z\_+}^{(\xi,\theta\_+)}\left(l\_+\right) = \left\langle \bar{p}\_+^{(\xi)}\left(\cdot,l\_+\right), \psi\_{Z\_+}^{(\theta\_+(l\_+))}\left(\cdot,l\_+\right) \right\rangle \tag{45}
$$

$$\bar{p}\_+^{(\xi)}\left(\mathbf{x}\_+,l\_+\right) = \mathbf{1}\_{\mathbb{L}}\left(l\_+\right) \frac{\left\langle P\_S\left(\cdot,l\_+\right)f\_+\left(\mathbf{x}\_+\mid\cdot,l\_+\right),p^{(\xi)}\left(\cdot,l\_+\right)\right\rangle}{\bar{P}\_S^{(\xi)}\left(l\_+\right)} + \mathbf{1}\_{\mathbb{B}\_+}\left(l\_+\right)p\_{B,+}\left(\mathbf{x}\_+,l\_+\right) \tag{46}$$

$$p\_{Z\_+}^{(\xi,\theta\_+)}\left(\mathbf{x}\_+,l\_+\right) = \frac{\bar{p}\_+^{(\xi)}\left(\mathbf{x}\_+,l\_+\right)\psi\_{Z\_+}^{(\theta\_+(l\_+))}\left(\mathbf{x}\_+,l\_+\right)}{\bar{\psi}\_{Z\_+}^{(\xi,\theta\_+)}\left(l\_+\right)}. \tag{47}$$

Note that $r\_{B,+}\left(l\_+\right)$ is the birth probability of the target with label $l\_+$, $p\_{B,+}\left(\mathbf{x}\_+, l\_+\right)$ is its kinematic state distribution, and $f\_+\left(\mathbf{x}\_+ \mid \cdot, l\_+\right)$ is the Markov state transition function.
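The joint weight (43) is a simple product over label sets, which can be checked numerically. The sketch below evaluates it for toy values; all inputs are assumed illustrative, and the indicator term is taken to be 1 for an admissible $\theta\_+$.

```python
def joint_weight(I, I_plus, B_plus, P_S_bar, r_B, psi_bar):
    """Numeric sketch of the unnormalized joint weight in Eq. (43).

    Inputs are toy values: I and I_plus are sets of labels, B_plus the
    birth labels, P_S_bar[l] the predicted survival probability (44),
    r_B[l] the birth probability, and psi_bar[l] the association
    likelihood (45) already evaluated for a fixed theta_plus (the
    indicator on theta_plus is assumed to equal 1 here).
    """
    w = 1.0
    for l in I - I_plus:           # surviving labels that disappear
        w *= 1.0 - P_S_bar[l]
    for l in I & I_plus:           # labels that survive
        w *= P_S_bar[l]
    for l in B_plus - I_plus:      # births that do not occur
        w *= 1.0 - r_B[l]
    for l in B_plus & I_plus:      # births that occur
        w *= r_B[l]
    for l in I_plus:               # association likelihood factor
        w *= psi_bar[l]
    return w
```

For example, with one surviving label, one dying label, and one born label, the weight is the product of the corresponding survival, death, birth, and association-likelihood factors, matching the five bracketed terms of (43).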
