• *FormulaIII*

According to Stroud's invariance theory for cubature integrals, McNamee et al. [13] constructed a family of fully symmetric integration formulas of degree 2*k* + 1 in *n*-dimensional space:

$$G(f) \approx T(f) = \mathcal{W}_0 f(\mathbf{0}) + \mathcal{W}_1 \sum_{FS} f(r, \mathbf{0}) + \mathcal{W}_2 \sum_{FS} f(r, r, \mathbf{0}). \tag{19}$$

This cubature formula is a special form of Equation (10) in the fifth-degree case. Zhang et al. [12] applied it to nonlinear Kalman filtering and formed an ECKF. The fully symmetric quadrature formula of the fifth degree is

$$\begin{split} G(f) &\approx \frac{\left(n^{2} - 7n + 18\right) \pi^{n/2}}{18} f(\mathbf{0}) + \frac{(4 - n)\, \pi^{n/2}}{18} \sum_{FS} f\left(\sqrt{\frac{3}{2}}, \mathbf{0}\right) \\ &+ \frac{\pi^{n/2}}{36} \sum_{FS} f\left(\sqrt{\frac{3}{2}}, \sqrt{\frac{3}{2}}, \mathbf{0}\right). \end{split} \tag{20}$$

In this formula, nonlinear equations are constructed, and the weight coefficients are obtained by solving the fully symmetric cubature criterion. The number of required cubature points is 2*n*<sup>2</sup> + 1, leading to a simple structure and a small number of calculations. In contrast to Equations (16) and (17), obtained using spherical-radial integration, the cubature-point coefficient $\sqrt{3/2}$ is fixed and does not increase with the system dimension *n*, so the problem of cubature points exceeding the integral domain in the spherical integral formula is avoided. However, according to Theorem 2, the stability coefficient is $(2n^2 - 8n + 9)/9$; thus, the numerical stability of this formula is poor. In addition, for sufficiently large values of *n*, the weights $\mathcal{W}_1$ are negative, which further affects the stability.
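As a concrete check, the rule of Equation (20) can be implemented and tested numerically. The sketch below is our own illustration (the helper names are not from the paper): it builds the 2*n*<sup>2</sup> + 1 points for the weight function exp(−**x**<sup>T</sup>**x**), verifies that low-degree monomials are integrated exactly, and confirms that the ratio of absolute-weight sum to weight sum equals the stability coefficient (2*n*<sup>2</sup> − 8*n* + 9)/9 stated above:

```python
# Numerical check of the fifth-degree fully symmetric rule of Equation (20).
# Illustrative sketch: builds 2n^2 + 1 cubature points for the weight
# function exp(-x^T x) and checks exactness on degree <= 5 monomials.
import itertools
import math

import numpy as np

def eckf_rule(n):
    """Points and weights of Eq. (20); weights carry the pi^(n/2) factor."""
    r = math.sqrt(1.5)
    c = math.pi ** (n / 2)
    pts, wts = [np.zeros(n)], [(n * n - 7 * n + 18) / 18 * c]
    for i in range(n):                                 # generator (r, 0): 2n points
        for s in (+1, -1):
            p = np.zeros(n); p[i] = s * r
            pts.append(p); wts.append((4 - n) / 18 * c)
    for i, j in itertools.combinations(range(n), 2):   # (r, r, 0): 2n(n-1) points
        for si, sj in itertools.product((+1, -1), repeat=2):
            p = np.zeros(n); p[i], p[j] = si * r, sj * r
            pts.append(p); wts.append(c / 36)
    return np.array(pts), np.array(wts)

n = 3
pts, wts = eckf_rule(n)
approx = lambda g: float(np.dot(wts, [g(x) for x in pts]))

c = math.pi ** (n / 2)
assert abs(approx(lambda x: 1.0) - c) < 1e-12             # degree 0
assert abs(approx(lambda x: x[0] ** 2) - c / 2) < 1e-12   # x1^2
assert abs(approx(lambda x: x[0] ** 4) - 3 * c / 4) < 1e-12
assert abs(approx(lambda x: x[0] ** 2 * x[1] ** 2) - c / 4) < 1e-12

# Stability coefficient of Theorem 2: sum|w| / sum(w) = (2n^2 - 8n + 9)/9.
n_big = 10
_, w = eckf_rule(n_big)
stab = np.abs(w).sum() / w.sum()
assert abs(stab - (2 * n_big ** 2 - 8 * n_big + 9) / 9) < 1e-9
```

For *n* = 10 the stability coefficient is already 129/9 ≈ 14.3, which illustrates why this rule degrades in higher dimensions.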

#### *2.3. Novel High-Order Cubature Formula Based on Fully Symmetric Cubature Criterion and Divided Difference Formula*

This section describes the construction of a novel fifth-degree cubature formula using a new method to maintain the filtering accuracy and numerical stability while controlling the number of cubature points required for integration.

First, the partial-derivative formula is used to modify the original formula to increase accuracy. Generally, it is difficult to directly calculate the partial derivative of a multivariable continuous function at a certain point. To avoid complex calculations, we use discrete high-order divided difference formulas to approximate the values of the partial derivatives and write them as linear combinations of the original function. Because the weight function of the Gaussian weighted integral is fully symmetric, the integrals of the odd-order partial derivatives are zero, and only the even-order partial derivatives must be considered. From Equation (19), the new cubature formula, modified by the high-order even divided differences of *f*(*x*), is expressed as follows:

$$\hat{G}(f) = T(f) + \sum_{k=1}^{n} \nabla_4^{x_k} f(\mathbf{x})\Big|_{\mathbf{x}=\mathbf{0}} + \sum_{k<l}^{n} \nabla_2^{x_k} \nabla_2^{x_l} f(\mathbf{x})\Big|_{\mathbf{x}=\mathbf{0}}. \tag{21}$$

For example, the first two high-order divided difference formulas can be expressed as follows:

$$\begin{aligned} \nabla_2^{x} f(\mathbf{x}) &= \frac{f(\mathbf{x} - r) - 2f(\mathbf{x}) + f(\mathbf{x} + r)}{r^2}, \\ \nabla_4^{x} f(\mathbf{x}) &= \frac{f(\mathbf{x} - 2r) - 4f(\mathbf{x} - r) + 6f(\mathbf{x}) - 4f(\mathbf{x} + r) + f(\mathbf{x} + 2r)}{r^4}, \ \dots \end{aligned} \tag{22}$$

where *r* represents the selected step size for the divided differences. By combining Equation (21) with the fully symmetric integral formula of Equation (19), a new form of the fifth-degree cubature formula is obtained as follows:

$$\hat{G}(f) = \mathcal{W}_0 f(\mathbf{0}) + \mathcal{W}_1 \sum_{FS} f(r, \mathbf{0}) + \mathcal{W}_2 \sum_{FS} f(2r, \mathbf{0}) + \mathcal{W}_3 \sum_{FS} f(r, r, \mathbf{0}). \tag{23}$$

The definitions of the cubature generators $(r, \mathbf{0})$, $(2r, \mathbf{0})$, and $(r, r, \mathbf{0})$ are identical to those in Equation (15). Compared with Equations (14), (17), and (20), the newly constructed cubature formula uses discrete derivatives and has higher accuracy; however, it uses more cubature points. To determine the weight coefficients of the above equation, it is necessary to construct a set of higher-order algebraic equations. According to the fully symmetric cubature criterion, consider the Gaussian weighted integral of the following even-power monomial function:

$$I_{2s_1, 2s_2, \ldots, 2s_\nu} = \int_{\mathbb{R}^n} (x_1)^{2s_1} (x_2)^{2s_2} \cdots (x_\nu)^{2s_\nu} \exp\left(-\mathbf{x}^T \mathbf{x}\right) d\mathbf{x}, \quad i \le j,\ 0 \le s_i \le s_j,\ 2\sum_{k \le \nu} s_k \le 5. \tag{24}$$

Note the above conditions on the $s_k$ values; in the fifth-degree cubature formula, only the integrals $I_{0,0}$, $I_{0,2}$, $I_{0,4}$, and $I_{2,2}$ must be calculated. The following formula based on the gamma function is used to calculate the integrals:

$$\begin{split} I_{2a,2b} &= \int_{\mathbb{R}^n} (x_1)^{2a} (x_2)^{2b} \exp(-\mathbf{x}^T \mathbf{x})\, d\mathbf{x} \\ &= \left( 2\int_0^\infty (x_1)^{2a} e^{-x_1^2}\, dx_1 \right) \left( 2\int_0^\infty (x_2)^{2b} e^{-x_2^2}\, dx_2 \right) \int_{\mathbb{R}^{n-2}} \exp\left(-\left(\mathbf{x}^{(n-2)}\right)^T \mathbf{x}^{(n-2)}\right) d\mathbf{x}^{(n-2)} \\ &= \Gamma\left(a + \frac{1}{2}\right) \Gamma\left(b + \frac{1}{2}\right) \pi^{n/2 - 1}, \quad a, b \ge 0. \end{split} \tag{25}$$

By combining the coefficients of the above monomial function with the newly constructed cubature formula of Equation (23), the following fully symmetric polynomial can be obtained [13]:

$$M_{2s_1, \ldots, 2s_\nu} = 2^{r_N} \sum_{k_1=1}^{p} \cdots \sum_{k_\nu=1}^{p} \frac{(n-\nu)!\, r_{k_1}^{2s_1} \cdots r_{k_\nu}^{2s_\nu}}{(n-r_N)!} \left(\prod_{j=1}^{p} (p_j - q_j)!\right)^{-1}. \tag{26}$$

This polynomial is used to calculate the coefficient-matrix elements of the higher-order algebraic equations corresponding to the newly constructed cubature formula. Here, $r_N$ represents the number of nonzero coordinates in the generator of the cubature points used in each part of Equation (23), and $p$ represents the number of distinct nonzero coordinate values in each generator of the new formula. Thus, $r_N$ and $p$ can be read directly from each generator. $p_j$ and $q_j$ represent the numbers of times that the coefficient $r$ of the cubature points and the integer $j$ appear in the counting indices $k_1, \ldots, k_\nu$ of the sum, respectively.

According to the different cubature points in Equation (23), different integral trajectories are selected, and the parameters are calculated using Equations (25) and (26). Thus, the coefficient matrix of the higher-order algebraic equation can be obtained:

$$\begin{array}{c} \begin{matrix} \mathbf{0} \quad & (r,\mathbf{0}) \quad & (2r,\mathbf{0}) \quad & (r,r,\mathbf{0}) \end{matrix} \\ \begin{pmatrix} 1 & 2n & 2n & 2n(n-1) \\ 0 & 2r^2 & 2(2r)^2 & 4(n-1)r^2 \\ 0 & 2r^4 & 2(2r)^4 & 4(n-1)r^4 \\ 0 & 0 & 0 & 4r^4 \end{pmatrix}_{M_{4,4}} \end{array} \tag{27}$$

Using Equations (23), (25), and (26), we can construct a higher-order algebraic system for solving weight coefficients:

$$\begin{cases} \mathcal{W}_0 + 2n\mathcal{W}_1 + 2n\mathcal{W}_2 + 2n(n-1)\mathcal{W}_3 = I_{0,0} \\ 2\mathcal{W}_1 r^2 + 8\mathcal{W}_2 r^2 + 4(n-1)\mathcal{W}_3 r^2 = I_{0,2} \\ 2\mathcal{W}_1 r^4 + 32\mathcal{W}_2 r^4 + 4(n-1)\mathcal{W}_3 r^4 = I_{0,4} \\ 4\mathcal{W}_3 r^4 = I_{2,2} \end{cases} \tag{28}$$

To solve the above algebraic equations, the following unique solution of the weight coefficients can be obtained:

$$\begin{aligned} \mathcal{W}_0 &= \frac{\pi^{n/2} \left(2n^2 - 10nr^2 + n + 16r^4\right)}{16r^4}, & \mathcal{W}_1 &= \frac{\pi^{n/2} \left(8r^2 - 3n\right)}{24r^4}, \\ \mathcal{W}_2 &= \frac{\pi^{n/2} \left(3 - 2r^2\right)}{96r^4}, & \mathcal{W}_3 &= \frac{\pi^{n/2}}{16r^4}. \end{aligned} \tag{29}$$

To maintain the approximation accuracy and numerical stability while minimizing the number of cubature points used, we can take $r = \sqrt{3n/8}$ or $r = \sqrt{3/2}$ to obtain a new set of weight coefficients. When $r = \sqrt{3/2}$, we recover the cubature formula of the embedded KF in Equation (20), which has inadequate numerical stability; thus, we take $r = \sqrt{3n/8}$. The weight solution is as follows:

$$\mathcal{W}\_0 = \frac{2(n+2)\pi^{n/2}}{9n}, \mathcal{W}\_1 = 0, \mathcal{W}\_2 = -\frac{(n-4)\pi^{n/2}}{18n^2}, \mathcal{W}\_3 = \frac{4\pi^{n/2}}{9n^2}.\tag{30}$$

Therefore, Equation (23) can be written as

$$\begin{split} \hat{G}(f) &= \frac{2(n+2)\pi^{n/2}}{9n} f(\mathbf{0}) - \frac{(n-4)\pi^{n/2}}{18n^2} \sum_{FS} f\left(\sqrt{\frac{3n}{2}}, \mathbf{0}\right) \\ &+ \frac{4\pi^{n/2}}{9n^2} \sum_{FS} f\left(\sqrt{\frac{3n}{8}}, \sqrt{\frac{3n}{8}}, \mathbf{0}\right). \end{split} \tag{31}$$

The total number of points in the new cubature formula is 2*n*<sup>2</sup> + 1, and the stability coefficient is $(11n-8)/(9n) > 1$ (when *n* > 4). The numerical stability of this formula is superior to that of Equation (20) for the embedded KF, and the number of cubature points used is equal to that of Equation (16). The new formula has a simpler structure and a smaller cubature-point coefficient *r* than Equations (14) and (17), which improves the numerical stability. To ameliorate the non-local sampling problem of point coordinates caused by the increase in system dimension, we adjust the coordinate parameters of the cubature points in the above cubature formula:

$$\begin{split} \hat{G}(f) &= \frac{2(n+2)\pi^{n/2}}{9n} f(\mathbf{0}) - \frac{(n-4)\pi^{n/2}}{18n^2} \sum_{FS} f\left(\sqrt{\frac{3(n-c)}{2}}, \mathbf{0}\right) \\ &+ \frac{4\pi^{n/2}}{9n^2} \sum_{FS} f\left(\sqrt{\frac{3(n-c)}{8}}, \sqrt{\frac{3(n-c)}{8}}, \mathbf{0}\right), \quad 0 < c < 1. \end{split} \tag{32}$$

Fine-tuning the parameters can not only maintain the accuracy of the cubature formula but also reduce the influence of nonlocal sampling problems.
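To make the construction above concrete, the sketch below (our own illustration; helper names are not from the paper) solves the algebraic system of Equation (28) numerically for $r = \sqrt{3n/8}$, checks the solution against the closed-form weights of Equation (30), and verifies that the resulting rule of Equation (31) is exact for degree-≤5 monomials under the weight exp(−**x**<sup>T</sup>**x**):

```python
# Sketch of the new fifth-degree rule of Eqs. (30)-(32). We solve the linear
# system of Eq. (28) and compare against the closed-form weights, then verify
# exactness of the rule on low-degree monomials.
import itertools
import math

import numpy as np

def new_rule(n, c=0.0):
    """Points/weights of Eq. (32); c in [0, 1) fine-tunes point coordinates."""
    r = math.sqrt(3 * (n - c) / 8)
    s = math.pi ** (n / 2)
    w0 = 2 * (n + 2) / (9 * n) * s
    w2 = -(n - 4) / (18 * n * n) * s      # axial points at distance 2r
    w3 = 4 / (9 * n * n) * s              # pairwise points at (r, r)
    pts, wts = [np.zeros(n)], [w0]
    for i in range(n):
        for sg in (+1, -1):
            p = np.zeros(n); p[i] = sg * 2 * r
            pts.append(p); wts.append(w2)
    for i, j in itertools.combinations(range(n), 2):
        for si, sj in itertools.product((+1, -1), repeat=2):
            p = np.zeros(n); p[i], p[j] = si * r, sj * r
            pts.append(p); wts.append(w3)
    return np.array(pts), np.array(wts)

n = 5
r2 = 3 * n / 8                                    # r^2 with c = 0
s = math.pi ** (n / 2)
# Coefficient matrix of Eq. (28) for the unknowns (W0, W1, W2, W3).
A = np.array([[1, 2 * n, 2 * n, 2 * n * (n - 1)],
              [0, 2 * r2, 8 * r2, 4 * (n - 1) * r2],
              [0, 2 * r2 ** 2, 32 * r2 ** 2, 4 * (n - 1) * r2 ** 2],
              [0, 0, 0, 4 * r2 ** 2]])
b = np.array([s, s / 2, 3 * s / 4, s / 4])        # I00, I02, I04, I22
W = np.linalg.solve(A, b)
assert abs(W[1]) < 1e-9                           # W1 = 0 at r = sqrt(3n/8)
assert abs(W[3] - 4 * s / (9 * n * n)) < 1e-9     # matches Eq. (30)

pts, wts = new_rule(n)
approx = lambda g: float(np.dot(wts, [g(x) for x in pts]))
assert abs(approx(lambda x: x[0] ** 4) - 3 * s / 4) < 1e-9
assert abs(approx(lambda x: x[0] ** 2 * x[1] ** 2) - s / 4) < 1e-9
# Stability coefficient (11n - 8)/(9n), far below that of Eq. (20).
assert abs(np.abs(wts).sum() / wts.sum() - (11 * n - 8) / (9 * n)) < 1e-9
```

For *n* = 5 the stability coefficient is 47/45 ≈ 1.04, compared with 19/3 for Equation (20), which illustrates the improvement quantitatively.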

#### **3. Robust KF Based on New Cubature Formula and MCC**

#### *3.1. Cross-Correntropy Formula*

As a statistical measure of the similarity between two random variables, the cross-correntropy formula has been widely used in non-Gaussian noise-signal processing. Cross-correntropy is a generalized variable that characterizes the correlation between a pair of scalar random variables and can measure not only second-order information but also high-order moments of the joint probability density. The cross-correntropy between two scalar random variables can be expressed as the mathematical expectation of the positive-definite kernel function $K_\sigma(x, y)$ [14]:

$$H_\sigma(X,Y) = E\left(K_\sigma(X,Y)\right) = \iint_{\mathbb{R}^2} K_\sigma(x,y)\, f_{XY}(x,y)\, dx\, dy. \tag{33}$$

Here, $f_{XY}(x, y)$ is the joint probability density function of the two random variables. Under normal circumstances, the joint distribution of the random variables cannot be obtained exactly; thus, the correntropy can only be estimated using a discrete approximation formula and a limited number of sample data points:

$$\hat{H}_{\sigma}(X,Y) = \frac{1}{n} \sum_{k=1}^{n} K_\sigma(x_k, y_k). \tag{34}$$

In the above expression, $\{x_k, y_k\}_{k=1}^{n}$ is a sample drawn from the joint distribution function $F_{XY}(x, y)$. We select the Gaussian kernel for the estimation, which is a positive-definite kernel function satisfying Mercer's theorem:

$$K_\sigma(x, y) = G_\sigma(e) = \exp\left(-\frac{e^{2}}{2\sigma^{2}}\right), \tag{35}$$

where *e* = *x* − *y* and *σ* represents the bandwidth of the kernel function. The Gaussian function has a series of advantageous properties: it is positive and bounded, and it attains its maximum when the error is zero. The MCC is obtained using the correntropy estimation formula based on the Gaussian function. Expanding the Gaussian kernel in Equation (33) using the Taylor series:

$$H_\sigma(X,Y) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\frac{1}{2\sigma^2} \right)^k E\left[ (X - Y)^{2k} \right]. \tag{36}$$

According to the above series expansion, the correntropy extracts information from all even-order moments of *X* − *Y* under an appropriate kernel bandwidth, which enables better use of the higher-order moment information of the signal.
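The estimator of Equations (34) and (35) is straightforward to implement. The sketch below (our own illustration, not from the paper) also demonstrates the boundedness that underlies correntropy's robustness: a few gross outliers inflate the mean-squared error by orders of magnitude but barely change the correntropy:

```python
# Sample correntropy estimator of Eqs. (34)-(35) with a Gaussian kernel, and
# a comparison with MSE in the presence of outliers.
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy (1/n) sum G_sigma(x_k - y_k)."""
    e = np.asarray(x, float) - np.asarray(y, float)
    return float(np.mean(np.exp(-e ** 2 / (2 * sigma ** 2))))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
x_clean = y + 0.1 * rng.normal(size=1000)
x_dirty = x_clean.copy()
x_dirty[:10] += 50.0                      # ten gross outliers

# Correntropy is bounded in (0, 1]; the kernel saturates for large errors,
# so a few outliers barely move it, unlike the mean-squared error.
h_clean, h_dirty = correntropy(x_clean, y), correntropy(x_dirty, y)
assert 0.0 < h_dirty <= h_clean <= 1.0
mse_clean = float(np.mean((x_clean - y) ** 2))
mse_dirty = float(np.mean((x_dirty - y) ** 2))
assert (mse_dirty / mse_clean) > 100 * (h_clean / h_dirty)
```

Here the MSE ratio between dirty and clean data is in the thousands, while the correntropy ratio stays near one, which is the behavior the MCC exploits.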

#### *3.2. Parameter Estimation Method Based on Correntropy Criterion*

According to the derivation process of the original Kalman filter, the state update equation of the conventional Kalman filter can be obtained under the assumption that the innovation conforms to a Gaussian distribution. When interference values and outliers exist in the noise of the state or observation information, the hypothesis that innovation conforms to the Gaussian distribution in Equations (4) and (5) is no longer satisfied, resulting in a reduced filtering effect. Therefore, a robust method should be introduced to optimize the original Kalman filter framework.

Izanloo et al. [14] combined the weighted matrix with the cost function of the c-filter and obtained a cost function combining the WLS method and the MCC for a linear system, so that least-variance estimation and high-order moment information were combined in the filtering process; the estimation process was embedded into the conventional KF framework, yielding a KF with robustness. Based on this cost function, we design a new generalized cost function and extend its parameter estimation process to the filtering of nonlinear systems. Using the nonlinear filter framework presented in Section 2.1, together with the WLS method and MCC, we give the generalized cost function as follows:

$$J = G_\sigma\left(\left\| \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}^{2}\right) + \gamma\, G_\sigma\left(\left\| \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^{2}\right), \tag{37}$$

where $\|x\|_R^2 = x^T R x$; $\hat{\mathbf{x}}_{k|k-1}$ and $\hat{\mathbf{z}}_{k|k-1}$ are the predicted state and observation vectors, $\mathbf{x}_k$ is the state vector to be updated, $\mathbf{z}_k$ is the observation vector, and $P_{k|k-1}$ is the predicted state covariance matrix in the KF framework. $\gamma$ is an undetermined constant. We use statistical linearization [25] to solve nonlinear filtering problems. $\overline{R}_k = P_{zz} - \overline{H}_k P_{k|k-1} \overline{H}_k^T$ is the noise covariance matrix of the statistically linearized observation vector, and $\overline{H}_k = P_{xz}^T P_{k|k-1}^{-1}$ is the coefficient matrix of the statistically linearized observation vector. To minimize the above objective function *J*, we first calculate its derivative with respect to $\mathbf{x}_k$:

$$\begin{split} \frac{\partial J}{\partial \mathbf{x}_k} &= \gamma\, G_\sigma\left( \left\| \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^{2} \right) \frac{\overline{H}_k^T \overline{R}_k^{-1} \left( \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right)}{\sigma^2} \\ &- G_\sigma\left( \left\| \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}^{2} \right) \frac{P_{k|k-1}^{-1} \left( \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right)}{\sigma^2}. \end{split} \tag{38}$$

By setting this to 0, the following matrix equation is obtained:

$$\gamma\, G_k \overline{H}_k^T \overline{R}_k^{-1} \left( \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right) = P_{k|k-1}^{-1} \left( \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right),$$

$$G_k = \frac{G_\sigma \left( \left\| \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^2 \right)}{G_\sigma \left( \left\| \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}^2 \right)}. \tag{39}$$

According to this equation, we take the constant parameter *γ* = 1 and simplify the above equation to obtain a simple expression for $\mathbf{x}_k$:

$$\begin{aligned} \left(P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k\right) \mathbf{x}_k &= G_k \overline{H}_k^T \overline{R}_k^{-1} \left(\mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1}\right) + \left(P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k\right) \hat{\mathbf{x}}_{k|k-1}, \\ \mathbf{x}_k &= \left(P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k\right)^{-1} G_k \overline{H}_k^T \overline{R}_k^{-1} \left(\mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1}\right) + \hat{\mathbf{x}}_{k|k-1}. \end{aligned} \tag{40}$$

Then, we obtain the new state-estimation expression for a robust KF:

$$\begin{aligned} \hat{\mathbf{x}}_{k|k} &= \hat{\mathbf{x}}_{k|k-1} + K_k^G \left( \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right), \\ K_k^G &= \left( P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k \right)^{-1} G_k \overline{H}_k^T \overline{R}_k^{-1}. \end{aligned} \tag{41}$$

In the KF framework, we replace $\overline{H}_k \mathbf{x}_k$ with $\overline{H}_k \hat{\mathbf{x}}_{k|k-1}$ and substitute the corresponding state estimate for $\mathbf{x}_k$:

$$G_k = \frac{G_\sigma\left( \left\| \mathbf{z}_k - \overline{H}_k \mathbf{x}_k - \hat{\mathbf{z}}_{k|k-1} + \overline{H}_k \hat{\mathbf{x}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^2 \right)}{G_\sigma\left( \left\| \mathbf{x}_k - \hat{\mathbf{x}}_{k|k-1} \right\|_{P_{k|k-1}^{-1}}^2 \right)} \Rightarrow \frac{G_\sigma\left( \left\| \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^2 \right)}{G_\sigma\left( \left\| \hat{\mathbf{x}}_{k|k-1} - \hat{\mathbf{x}}_{k-1|k-1} \right\|_{P_{k|k-1}^{-1}}^2 \right)}. \tag{42}$$

Then, we obtain the following state-estimation expression for a robust KF:

$$\underbrace{\begin{cases} K_k = P_{k|k-1}^{xz} \left(P_{k|k-1}^{zz}\right)^{-1} \\ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + K_k \left( \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right) \\ P_{k|k} = P_{k|k-1} - K_k P_{k|k-1}^{zz} K_k^T \end{cases}}_{\text{traditional }(a)} \Rightarrow \underbrace{\begin{cases} \overline{H}_k = \left(P_{k|k-1}^{xz}\right)^T P_{k|k-1}^{-1} \\ \overline{R}_k = P_{k|k-1}^{zz} - \overline{H}_k P_{k|k-1} \overline{H}_k^T \\ G_k = \dfrac{G_\sigma\left( \left\| \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^2 \right)}{G_\sigma\left( \left\| \hat{\mathbf{x}}_{k|k-1} - \hat{\mathbf{x}}_{k-1|k-1} \right\|_{P_{k|k-1}^{-1}}^2 \right)} \\ \hat{K}_k^G = \left( P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k \right)^{-1} G_k \overline{H}_k^T \overline{R}_k^{-1} \\ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \hat{K}_k^G \left( \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right) \\ P_{k|k} = \left( E - \hat{K}_k^G \overline{H}_k \right) P_{k|k-1} \left( E - \hat{K}_k^G \overline{H}_k \right)^T + \hat{K}_k^G \overline{R}_k \left( \hat{K}_k^G \right)^T \end{cases}}_{\text{robust }(b)} \tag{43}$$

The above formula is used to replace the gain in the original Kalman filtering process and to update the state vector and covariance estimation. The algorithm flow of robust Kalman filtering can be obtained using the new cubature formula derived in Section 2.3.
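A scalar toy example (our own numbers, not from the paper) illustrates how the correntropy factor $G_k$ of Equation (43b) reshapes the gain: a large innovation shrinks $G_k$, which shrinks the gain and limits the impact of an outlying measurement, while $G_k = 1$ recovers the standard Kalman gain:

```python
# Scalar illustration of the robust gain of Eqs. (41) and (43b).
import math

def robust_gain(P_pred, H, R, innovation, dx_pred, sigma=2.0):
    """Scalar correntropy factor G_k and modified gain K_k^G."""
    g_num = math.exp(-(innovation ** 2 / R) / (2 * sigma ** 2))
    g_den = math.exp(-(dx_pred ** 2 / P_pred) / (2 * sigma ** 2))
    G = g_num / g_den
    K = (1.0 / P_pred + G * H ** 2 / R) ** -1 * G * H / R
    return G, K

P, H, R, dx = 1.0, 1.0, 1.0, 0.5
_, K_nominal = robust_gain(P, H, R, innovation=0.5, dx_pred=dx)
_, K_outlier = robust_gain(P, H, R, innovation=20.0, dx_pred=dx)
assert K_outlier < K_nominal           # outlier => much smaller gain

# With G_k = 1 the gain reduces to the standard Kalman gain P H / (H^2 P + R).
K_standard = P * H / (H ** 2 * P + R)
G1, K1 = robust_gain(P, H, R, innovation=0.0, dx_pred=0.0)
assert abs(G1 - 1.0) < 1e-12
assert abs(K1 - K_standard) < 1e-12
```

This matrix-free sketch mirrors the structure of Equation (41) in one dimension only; the full filter applies the same factor to matrix quantities.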

#### *3.3. Robust KF Based on New Fifth-Degree Cubature Formula and MCC*

In the following algorithm, we use diagonalization transformation to solve the square root of the matrix. With regard to numerical stability, the diagonalization transformation is better than the Cholesky decomposition method used in the general CKF algorithm. Cholesky decomposition requires the matrix to be positive-definite, which may lead to process instability or even to algorithm divergence.
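A minimal sketch of this point, assuming NumPy: the eigendecomposition (diagonalization) square root exists for any positive-semidefinite covariance, whereas Cholesky fails once the matrix loses strict positive-definiteness:

```python
# Contrast between the diagonalization square root and Cholesky decomposition.
import numpy as np

def sqrt_eig(P):
    """Symmetric square root U = V sqrt(Lambda) V^T via eigendecomposition."""
    lam, V = np.linalg.eigh(P)               # P = V diag(lam) V^T
    lam = np.clip(lam, 0.0, None)            # guard tiny negative round-off
    return V @ np.diag(np.sqrt(lam)) @ V.T

P_pd = np.array([[4.0, 1.0], [1.0, 3.0]])
S = sqrt_eig(P_pd)
assert np.allclose(S @ S.T, P_pd)            # valid square root

# Rank-deficient covariance: the eigh-based root still works ...
P_psd = np.array([[1.0, 1.0], [1.0, 1.0]])
S2 = sqrt_eig(P_psd)
assert np.allclose(S2 @ S2.T, P_psd)
# ... while Cholesky raises on a slightly indefinite matrix.
try:
    np.linalg.cholesky(P_psd - 1e-9 * np.eye(2))
    cholesky_ok = True
except np.linalg.LinAlgError:
    cholesky_ok = False
assert not cholesky_ok
```

In a filter, round-off can push an updated covariance to (or just past) the positive-semidefinite boundary, which is precisely where the Cholesky-based square root breaks down.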

We now present the time update and measurement update processes of the proposed robust KF algorithm.

#### 3.3.1. Initialization of Cubature Points and Parameters

The state vector and covariance matrix are initialized as follows:

$$\hat{\mathbf{x}}_{0|0} = E(\mathbf{x}_0), \quad P_{0|0} = E\left[ \left( \mathbf{x}_0 - \hat{\mathbf{x}}_{0|0} \right) \left( \mathbf{x}_0 - \hat{\mathbf{x}}_{0|0} \right)^T \right]. \tag{44}$$

The trajectory nodes of the integration required by the fifth-degree algorithm are generated:

$$\begin{aligned} \mathbf{U}_1 &= \sqrt{\frac{3(n-c)}{4}}\ \underbrace{(\mathbf{e}_1 + \mathbf{e}_2, \ldots, \mathbf{e}_1 + \mathbf{e}_n, \mathbf{e}_2 + \mathbf{e}_3, \ldots, \mathbf{e}_2 + \mathbf{e}_n, \ldots, \mathbf{e}_{n-1} + \mathbf{e}_n)}_{n(n-1)/2}, \\ \mathbf{U}_2 &= \sqrt{\frac{3(n-c)}{4}}\ \underbrace{(\mathbf{e}_1 - \mathbf{e}_2, \ldots, \mathbf{e}_1 - \mathbf{e}_n, \mathbf{e}_2 - \mathbf{e}_3, \ldots, \mathbf{e}_2 - \mathbf{e}_n, \ldots, \mathbf{e}_{n-1} - \mathbf{e}_n)}_{n(n-1)/2}, \end{aligned} \tag{45}$$

where $\mathbf{e}_i$ denotes the *i*th unit coordinate vector. Relative to Equation (32), the point coordinates are scaled by $\sqrt{2}$, which accounts for the change of variables from the weight function $\exp(-\mathbf{x}^T\mathbf{x})$ to the Gaussian density.

Then, we initialize the different weights in the cubature formula and their corresponding cubature points:

$$\begin{cases} N_f = 2n^2 + 1 \\ \mathbf{X}_{(1)} = \mathbf{0} \\ \mathbf{X}_{(2,\ldots,2n+1)} = \sqrt{3(n-c)}\,(\mathbf{E}, -\mathbf{E}) \\ \mathbf{X}_{(2n+2,\ldots,2n^2+1)} = (\mathbf{U}_1, -\mathbf{U}_1, \mathbf{U}_2, -\mathbf{U}_2) \end{cases} \qquad \begin{cases} w_{(1)} = \frac{2(n+2)}{9n} \\ w_{(2,\ldots,2n+1)} = -\frac{(n-4)}{18n^2} \\ w_{(2n+2,\ldots,2n^2+1)} = \frac{4}{9n^2} \end{cases} \tag{46}$$

#### 3.3.2. Time Update

The diagonalization transformation given above is used to calculate the square-root matrix of the covariance matrix [26]:

$$P_{k|k} = \mathbf{U}_I \Lambda_{k|k} \mathbf{U}_I^T, \quad \mathbf{U}_{k|k} = \mathbf{U}_I \sqrt{\Lambda_{k|k}}\ \mathbf{U}_I^T, \quad \sqrt{\Lambda_{k|k}} = \begin{pmatrix} \sqrt{\lambda_1} & 0 & \cdots & 0 \\ 0 & \sqrt{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sqrt{\lambda_n} \end{pmatrix}, \tag{47}$$

where $\mathbf{U}_I$ is the orthogonal eigenvector matrix and $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $P_{k|k}$.

Then, the new cubature points are calculated by substituting the integral variable into Equation (8) and its corresponding state function value:

$$\mathbf{X}_{k-1|k-1}^{l} = \mathbf{U}_{k-1|k-1}\, \mathbf{X}_{(l)} + \hat{\mathbf{x}}_{k-1|k-1}, \quad \mathbf{X}_{k-1|k-1}^{(l)} = f\left(\mathbf{X}_{k-1|k-1}^{l}\right). \tag{48}$$

Next, the state-vector prediction and state error covariance matrix prediction are calculated:

$$\begin{cases} \begin{aligned} \dot{\mathbf{x}}\_{k|k-1} &= \sum\_{l=1}^{N\_f} w\_l f\left(\mathbf{X}\_{k-1|k-1}^l\right) = \sum\_{l=1}^{N\_f} w\_l \mathbf{X}\_{k-1|k-1}^{(l)} \\\\ \mathbf{P}\_{k|k-1} &= \sum\_{l=1}^{N\_f} w\_l \left(\mathbf{X}\_{k-1|k-1}^{(l)} - \hat{\mathbf{x}}\_{k|k-1}\right) \left(\mathbf{X}\_{k-1|k-1}^{(l)} - \hat{\mathbf{x}}\_{k|k-1}\right)^T + \mathbf{Q}\_{k-1} \end{aligned} \end{cases} \tag{49}$$
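The time-update steps of Equations (47)-(49) can be sketched as follows (our own illustration, with *c* = 0 and a toy linear dynamics *f*; for a linear model, the cubature prediction must coincide with the exact Kalman prediction, which the assertions check):

```python
# Sketch of the time update: cubature points of Eq. (46), eigendecomposition
# square root of Eq. (47), propagation of Eq. (48), moments of Eq. (49).
import itertools
import math

import numpy as np

def cubature_points(n, c=0.0):
    """Unit-Gaussian cubature points X_(l) and normalized weights w_l (Eq. 46)."""
    a = math.sqrt(3 * (n - c))            # axial coordinate
    b = math.sqrt(3 * (n - c) / 4)        # pairwise coordinate (Eq. 45)
    pts, wts = [np.zeros(n)], [2 * (n + 2) / (9 * n)]
    for i in range(n):
        for sg in (+1, -1):
            p = np.zeros(n); p[i] = sg * a
            pts.append(p); wts.append(-(n - 4) / (18 * n * n))
    for i, j in itertools.combinations(range(n), 2):
        for si, sj in itertools.product((+1, -1), repeat=2):
            p = np.zeros(n); p[i], p[j] = si * b, sj * b
            pts.append(p); wts.append(4 / (9 * n * n))
    return np.array(pts), np.array(wts)

def time_update(x_hat, P, f, Q):
    n = len(x_hat)
    X, w = cubature_points(n)
    lam, V = np.linalg.eigh(P)                          # Eq. (47) square root
    U = V @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ V.T
    Xi = X @ U.T + x_hat                                # Eq. (48)
    Fi = np.array([f(x) for x in Xi])
    x_pred = w @ Fi                                     # Eq. (49)
    d = Fi - x_pred
    P_pred = (w[:, None] * d).T @ d + Q
    return x_pred, P_pred

# For linear f, the cubature prediction matches the exact KF prediction.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
x0, P0, Q = np.array([1.0, -1.0]), np.diag([0.5, 0.2]), 0.01 * np.eye(2)
x_pred, P_pred = time_update(x0, P0, lambda x: F @ x, Q)
assert np.allclose(x_pred, F @ x0)
assert np.allclose(P_pred, F @ P0 @ F.T + Q)
```

The same point set and weights are reused in the measurement update below, with the measurement function in place of *f*.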

#### 3.3.3. Measurement Update

Similar to Equation (47), a diagonalization transformation is performed on the state prediction covariance matrix to obtain the square-root matrix:

$$P_{k|k-1} = \mathbf{U}_I \Lambda_{k|k-1} \mathbf{U}_I^T, \quad \mathbf{U}_{k|k-1} = \mathbf{U}_I \sqrt{\Lambda_{k|k-1}}\ \mathbf{U}_I^T. \tag{50}$$

The new cubature points are calculated after the Gaussian integral substitution, along with the corresponding measurement-function value:

$$\mathbf{X}_{k|k-1}^{l} = \mathbf{U}_{k|k-1}\, \mathbf{X}_{(l)} + \hat{\mathbf{x}}_{k|k-1}, \quad \mathbf{Z}_{k|k-1}^{l} = h\left( \mathbf{X}_{k|k-1}^{l} \right). \tag{51}$$

The measurement vector estimation, autocorrelation covariance matrix, and cross-covariance matrix are evaluated:

$$\begin{cases} \hat{\mathbf{z}}_{k|k-1} = \sum_{l=1}^{N_f} w_l\, h\left( \mathbf{X}_{k|k-1}^l \right) = \sum_{l=1}^{N_f} w_l \mathbf{Z}_{k|k-1}^l \\ P_{k|k-1}^{zz} = \sum_{l=1}^{N_f} w_l \left( \mathbf{Z}_{k|k-1}^l - \hat{\mathbf{z}}_{k|k-1} \right) \left( \mathbf{Z}_{k|k-1}^l - \hat{\mathbf{z}}_{k|k-1} \right)^T + R_k \\ P_{k|k-1}^{xz} = \sum_{l=1}^{N_f} w_l \left( \mathbf{X}_{k|k-1}^l - \hat{\mathbf{x}}_{k|k-1} \right) \left( \mathbf{Z}_{k|k-1}^l - \hat{\mathbf{z}}_{k|k-1} \right)^T \end{cases} \tag{52}$$

We calculate the statistical linearization update matrix and covariance matrix of the measurement function:

$$\overline{H}_k = \left(P_{k|k-1}^{xz}\right)^T P_{k|k-1}^{-1}, \quad \overline{R}_k = P_{k|k-1}^{zz} - \overline{H}_k P_{k|k-1} \overline{H}_k^T. \tag{53}$$

We then compute the correlation entropy coefficient based on the Gaussian kernel function and weighted norm:

$$G_k = \frac{G_\sigma\left( \left\| \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right\|_{\overline{R}_k^{-1}}^2 \right)}{G_\sigma\left( \left\| \hat{\mathbf{x}}_{k|k-1} - \hat{\mathbf{x}}_{k-1|k-1} \right\|_{P_{k|k-1}^{-1}}^2 \right)}. \tag{54}$$

Finally, we calculate the new gain matrix of the robust KF; then, the updated state vector estimation and updated covariance matrix are computed according to the new gain matrix:

$$\begin{cases} \hat{K}_k^G = \left(P_{k|k-1}^{-1} + G_k \overline{H}_k^T \overline{R}_k^{-1} \overline{H}_k\right)^{-1} G_k \overline{H}_k^T \overline{R}_k^{-1} \\ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \hat{K}_k^G \left( \mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1} \right) \\ P_{k|k} = \left( E - \hat{K}_k^G \overline{H}_k \right) P_{k|k-1} \left( E - \hat{K}_k^G \overline{H}_k \right)^T + \hat{K}_k^G \overline{R}_k \left( \hat{K}_k^G \right)^T \end{cases} \tag{55}$$

#### **4. Simulation Experiment of Target Tracking Based on Proposed Algorithm**

#### *4.1. Filtering Simulation Experiment and Numerical Analysis of Cubature Formulas*

We evaluated the filtering performance of the CKF using the cubature formulas proposed in Section 2 through simulation experiments.

Consider the following nonlinear system equation with trigonometric functions, exponential functions, and power functions [20]:

$$\begin{cases} \text{Process equation}: \\ \mathbf{x}_k = \begin{pmatrix} x_k^{(1)} \\ x_k^{(2)} \\ x_k^{(3)} \end{pmatrix} = \begin{pmatrix} 3\sin^2\left(5 x_{k-1}^{(2)}\right) \\ x_{k-1}^{(1)} + e^{-0.05 x_{k-1}^{(3)}} + 10 \\ 0.2\, x_{k-1}^{(1)}\left(x_{k-1}^{(2)} + x_{k-1}^{(3)}\right) \end{pmatrix} + \mathbf{w}_{k-1} \\ \text{Measurement equation}: \\ z_k = \cos\left(x_k^{(1)}\right) + x_k^{(2)} x_k^{(3)} + v_k \end{cases} \tag{56}$$

The process noise is $\mathbf{w}_k \sim N(\mathbf{0}, Q)$ with $Q = 0.1 E_3$, where $E_n$ is the *n*-dimensional identity matrix. The measurement noise is $v_k \sim N(0, R)$ with $R = 1.0$. In the simulation experiment, the initial vector was set as $\mathbf{x}_0 = (1, 1, 1)^T$, and the initial covariance matrix was set as $P_{0|0} = \mathrm{diag}(0.1, 0.1, 0.1)$. In addition to the proposed 5th-degree DDCKF (Equation (32)), the following HDCKFs were used for comparison:


The initial conditions of all the filters were identical at the beginning of each simulation, and the simulation time was 40 s. The root-mean-square error (*RMSE*) was used as an evaluation index for the filtering performance. The *RMSE* and average *RMSE* (*ARMSE*) were defined as follows:

$$\begin{aligned} RMSE_{(\mathbf{x})}(k) &= \sqrt{\frac{1}{N} \sum_{i=1}^{N} \sum_{l=1}^{3} \left( x_k^{(l),i} - \hat{x}_{k|k}^{(l),i} \right)^2}, \\ ARMSE_{(\mathbf{x})} &= \frac{1}{A} \sum_{k=1}^{A} RMSE_{(\mathbf{x})}(k), \end{aligned} \tag{57}$$

where *N* represents the number of Monte Carlo simulation runs, and *A* represents the timestep (in s) of the simulation experiment. We set *N* = 100 and *A* = 40. Additionally, we set *c* = 0 in Equations (45) and (46). The simulation results are shown in Figure 1.
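The indices of Equation (57) can be computed as follows (a minimal sketch of ours; the synthetic arrays below merely stand in for actual filter output):

```python
# RMSE per timestep and ARMSE over the run, following Eq. (57).
import numpy as np

def rmse_per_step(truth, est):
    """truth, est: arrays of shape (N_runs, A_steps, 3). Returns RMSE(k)."""
    sq = ((truth - est) ** 2).sum(axis=2)       # sum over the 3 state components
    return np.sqrt(sq.mean(axis=0))             # average over Monte Carlo runs

def armse(truth, est):
    return float(rmse_per_step(truth, est).mean())  # average over A timesteps

rng = np.random.default_rng(1)
N, A = 100, 40                                  # runs and timesteps, as in the text
truth = rng.normal(size=(N, A, 3))
est = truth + 0.1 * rng.normal(size=(N, A, 3))  # placeholder "estimates"
r = rmse_per_step(truth, est)
assert r.shape == (A,)
# Each RMSE(k) estimates sqrt(3 * 0.1^2) ~ 0.17 for this synthetic error.
assert 0.1 < armse(truth, est) < 0.25
```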

From the *RMSE* results of the simulation experiment in Figure 1, it can be concluded that the filtering accuracy of the 3rd-degree CKF was the lowest when the system was highly nonlinear: it uses only 2*n* cubature points and a low-accuracy third-degree cubature formula to approximate the probability density, so it could not obtain a good approximation. Among the 5th-degree formulas, the SCKF had a slightly higher average filtering accuracy than the ECKF, and the proposed DDCKF had the highest accuracy. Even in such a low-dimension nonlinear system, the number of cubature points of the 5th-degree CKF is *n*<sup>2</sup> + 3*n* + 3; thus, the number of cubature points required by the 5th-degree CKF exceeded those of the other filters, leading to a larger computational load. According to the experimental results, compared with the other 5th-degree algorithms, the DDCKF has better filtering performance for systems with a high degree of nonlinearity. The *ARMSE* values of the algorithms are presented in Table 2.

**Table 2.** Performance indices of the different filters used in Experiment 4.1.


**Figure 1.** RMSE for different filters in Experiment 4.1.1.

According to the *ARMSE* results, the 3rd-degree CKF requires the smallest number of cubature points but has the lowest filtering accuracy. The proposed DDCKF algorithm achieved the highest filtering accuracy while keeping the number of cubature points moderate.

#### *4.2. Simulation Experiment of Robust Target Tracking Based on DD-MCCKF and Surface Target-Tracking Models*

4.2.1. Simulation Experiment of Robust Filtering Based on the Constant Velocity (CV) Tracking Model of the Surface Target

In this experiment, the following target-tracking model was used to verify the filtering performance of the proposed robust CKF algorithm for a highly maneuverable target under non-Gaussian noise. Consider the following constant velocity (CV) surface target-tracking model:

$$\begin{cases} \text{State equation:} \quad \mathbf{x}\_{k} = \begin{pmatrix} x\_{k} \\ \dot{x}\_{k} \\ y\_{k} \\ \dot{y}\_{k} \end{pmatrix} = \begin{pmatrix} 1 & T & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x\_{k-1} \\ \dot{x}\_{k-1} \\ y\_{k-1} \\ \dot{y}\_{k-1} \end{pmatrix} + \begin{pmatrix} \frac{T^{2}}{2} & 0 \\ T & 0 \\ 0 & \frac{T^{2}}{2} \\ 0 & T \end{pmatrix} \mathbf{w}\_{k-1} \\[1ex] \text{Measurement equation:} \quad \mathbf{z}\_{k} = \begin{pmatrix} \sqrt{x\_{k}^{2} + y\_{k}^{2}} \\ \arctan\left(\frac{y\_{k}}{x\_{k}}\right) \end{pmatrix} + \mathbf{v}\_{k} \end{cases} \tag{58}$$

The CV model is a coordinate-uncoupled model. Such models assume that the target maneuvering processes in three orthogonal directions are uncoupled in three-dimensional space. Target maneuvering arises from acceleration changes induced by external forces; therefore, the difficulty of maneuvering modeling lies in describing the target acceleration. For high-speed surface targets, the CV model is often used to describe the target's movement.

In this model, *w<sub>k</sub>* and *v<sub>k</sub>* are the mutually independent system process noise and measurement noise with covariance matrices *Q<sub>k</sub>* and *R<sub>k</sub>*, respectively, and the sampling period is *T*.
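One propagation and measurement step of the CV model in Equation (58) can be sketched as follows (the helper names are ours, and the sensor is assumed to sit at the origin):

```python
import numpy as np

def cv_step(x, T, w):
    """One step of the CV model in Equation (58); x = (px, vx, py, vy)."""
    F = np.array([[1, T, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, T],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[T**2 / 2, 0],
                  [T,        0],
                  [0, T**2 / 2],
                  [0,        T]], dtype=float)
    return F @ x + G @ w

def range_bearing(x, v):
    """Range-bearing measurement of Equation (58), sensor at the origin."""
    px, _, py, _ = x
    return np.array([np.hypot(px, py), np.arctan2(py, px)]) + v
```

Simulating a trajectory is then just repeated calls to `cv_step` with process-noise draws, followed by `range_bearing` with measurement-noise draws.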

The position *RMSE*, velocity *RMSE*, and *ARMSE* were defined as the filtering-accuracy evaluation criteria:

$$\begin{aligned} RMSE\_{CVpos}(k) &= \sqrt{\frac{1}{N} \sum\_{i=1}^{N} \left[ \left( x\_{k}^{i} - \hat{x}\_{k|k}^{i} \right)^{2} + \left( y\_{k}^{i} - \hat{y}\_{k|k}^{i} \right)^{2} \right]} \\ RMSE\_{CVvel}(k) &= \sqrt{\frac{1}{N} \sum\_{i=1}^{N} \left[ \left( \dot{x}\_{k}^{i} - \hat{\dot{x}}\_{k|k}^{i} \right)^{2} + \left( \dot{y}\_{k}^{i} - \hat{\dot{y}}\_{k|k}^{i} \right)^{2} \right]} \\ ARMSE &= \frac{1}{A} \sum\_{k=1}^{A} RMSE(k) \end{aligned} \tag{59}$$

where *N* represents the number of Monte Carlo simulation runs, and *A* represents the timestep of the simulation experiment. The initial values of the state variable and the error covariance matrix are *x*<sup>0</sup> and *<sup>P</sup>*0|0, respectively. In this experiment, the parameters were initialized as follows:

$$\begin{aligned} \mathbf{Q}\_1 &= q\_1^2 \begin{bmatrix} \mathbf{M}\_1 & 0 \\ 0 & \mathbf{M}\_1 \end{bmatrix}, \quad \mathbf{Q}\_2 = q\_2^2 \begin{bmatrix} \mathbf{M}\_2 & 0 \\ 0 & \mathbf{M}\_2 \end{bmatrix} \\ q\_1 &= 0.2, \; q\_2 = 0.3, \; \mathbf{M}\_k = \begin{bmatrix} T\_k^3/3 & T\_k^2/2 \\ T\_k^2/2 & T\_k \end{bmatrix}, \; T\_1 = 1, \; T\_2 = \frac{1}{2} \\ \mathbf{R}\_1 &= 0.1\,\mathrm{diag}\left\{ (20\text{ m})^2, \left(\frac{6\pi}{180}\text{ rad}\right)^2 \right\}, \quad \mathbf{R}\_2 = 0.1\,\mathrm{diag}\left\{ (30\text{ m})^2, \left(\frac{8\pi}{180}\text{ rad}\right)^2 \right\} \end{aligned} \tag{60}$$

To set up different types of non-Gaussian state and measurement noise environments of the form *w<sub>k</sub>* ∼ (1 − *η*)*N*(**0**, *Q*<sub>1</sub>) + *ηN*(**0**, *Q*<sub>2</sub>), *r<sub>k</sub>* ∼ (1 − *η*)*N*(**0**, *R*<sub>1</sub>) + *ηN*(**0**, *R*<sub>2</sub>), we considered the following conditions:

**Condition 1.** *Under mixture noise, η* = 1/2:

$$w\_{k1} \sim \frac{1}{2}N(\mathbf{0}, \mathbf{Q}\_1) + \frac{1}{2}N(\mathbf{0}, \mathbf{Q}\_2)\,, \quad r\_{k1} \sim \frac{1}{2}N(\mathbf{0}, \mathbf{R}\_1) + \frac{1}{2}N(\mathbf{0}, \mathbf{R}\_2)\,. \tag{61}$$

**Condition 2.** *Under mixture noise, η* = 2/3:

$$w\_{k2} \sim \frac{1}{3}N(\mathbf{0}, \mathbf{Q}\_1) + \frac{2}{3}N(\mathbf{0}, \mathbf{Q}\_2)\,, \quad r\_{k2} \sim \frac{1}{3}N(\mathbf{0}, \mathbf{R}\_1) + \frac{2}{3}N(\mathbf{0}, \mathbf{R}\_2)\,. \tag{62}$$
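Sampling from the two-component Gaussian mixtures above amounts to picking a component with probability *η* and drawing from it; a minimal sketch (the helper name `sample_mixture` is ours):

```python
import numpy as np

def sample_mixture(eta, cov1, cov2, rng):
    """Draw one sample from (1 - eta) * N(0, cov1) + eta * N(0, cov2)."""
    # With probability eta use the second (typically heavier) component.
    cov = cov2 if rng.random() < eta else cov1
    return rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
```

Condition 1 corresponds to `eta = 1/2` with `(Q1, Q2)` and `(R1, R2)`, and Condition 2 to `eta = 2/3`.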

**Experiment 1.** *Comparison of the filtering performance between robust and regular 3rd-degree CKF in the non-Gaussian noise Environment (61).*

First, to verify the strong tracking performance of the proposed algorithm and compare it with the traditional nonlinear Kalman filter in a non-Gaussian noise environment, a target-tracking experiment based on the 3rd-degree CKF was carried out for the above CV tracking model, in which the state-update process of the regular CKF was implemented using Equation (43a) and that of the robust CKF using Equation (43b).

We set the initial state vector and error covariance matrix as *x*<sub>0</sub> = (100 m, 30 m/s, 100 m, 20 m/s)<sup>*T*</sup> and *P*<sub>0|0</sub> = diag(10 m<sup>2</sup>, 1 m<sup>2</sup>/s<sup>2</sup>, 10 m<sup>2</sup>, 1 m<sup>2</sup>/s<sup>2</sup>), respectively. In this experiment, we set *N* = 200 and *A* = 150 s. Additionally, we set *c* = 1/3 in Equations (45) and (46). The other parameters and initial values were set according to Equation (60).

The simulation results are shown in Figures 2 and 3:

The above experiments show that the proposed state-update process is more robust than the traditional nonlinear Kalman filter in a strongly non-Gaussian noise environment, demonstrating that the proposed method is effective and feasible.

**Experiment 2.** *Comparison of the filtering performance of five types of robust nonlinear Kalman filters in a non-Gaussian noise environment, Equations (61) and (62), respectively:*

In this experiment, the robust filters MCUKF proposed in [16] and MCSCKF proposed in [27], along with the E-MCCKF and S-MCCKF (formed by combining Equations (14) and (20), respectively, with the MCC method proposed in this study), were compared with the proposed robust DD-MCCKF algorithm under the non-Gaussian state and measurement noise environments of Equations (61) and (62).

We set the initial state vector and error covariance matrix as *x*<sub>0</sub> = (100 m, 30 m/s, 100 m, 20 m/s)<sup>*T*</sup> and *P*<sub>0|0</sub> = diag(10 m<sup>2</sup>, 1 m<sup>2</sup>/s<sup>2</sup>, 10 m<sup>2</sup>, 1 m<sup>2</sup>/s<sup>2</sup>), respectively. The number of Monte Carlo simulations was *N* = 1 × 10<sup>5</sup>, and *A* = 120 s. Additionally, we set *c* = 1/3 in Equations (45) and (46); the other parameters and initial values were set according to Equation (60).

**Figure 2.** Tracking trajectory of the 3rd-degree CKF and the 3rd-degree MCCKF under non-Gaussian noise conditions in Equation (61).

**Figure 3.** *RMSE* of the position and velocity for different filters under non-Gaussian noise conditions in Equation (61) ((**a**) is the position RMSE and (**b**) is the velocity RMSE).

The simulation results are shown in Figures 4–6:

The simulated tracking time was 120 s. The results show that, consistent with the properties introduced in Section 2.1, the overall numerical performance of the UKF was inferior to that of the 3rd-degree CKF, so the MCUKF had the highest mean square error and the lowest filtering accuracy. In addition, when *n* = 4, the 5th-degree SCKF and ECKF use the same number of cubature points as the proposed DDCKF, and their formula structures are similar; thus, their computational loads were very close. Because the DD-CKF maintains numerical stability for large system dimensions while using fewer cubature points, and Equation (32) shifts the coefficients of the cubature points to reduce the non-local sampling error, the accuracy of the DD-MCCKF was slightly higher than that of the other algorithms in the tracking experiments.

As shown in Table 3, the proposed DD-MCCKF algorithm had the highest estimation accuracy under the premise of using fewer cubature points in a non-Gaussian noise environment.

**Figure 4.** Tracking trajectories of a surface target (unmanned surface vessel) under non-Gaussian noise conditions ((**a**) is the result under Equation (61) and (**b**) is the result under Equation (62)).

**Figure 5.** Position RMSEs of robust target-tracking algorithms under non-Gaussian noise conditions ((**a**) is the result under Equation (61) and (**b**) is the result under Equation (62)).

**Figure 6.** Velocity RMSEs of robust target-tracking algorithms under non-Gaussian noise conditions ((**a**) is the result under Equation (61) and (**b**) is the result under Equation (62)).

Table 3 shows the number of points and ARMSE of each algorithm in Experiment 2 above.


**Table 3.** Number of points used by different filtering algorithms in Experiment 2, along with the *ARMSE* for the position and velocity.

4.2.2. Simulation Experiment of Robust Filtering Based on Cooperative Turning Tracking Model of Surface Targets

The cooperative turning (*CT*) model is a coordinate coupling model. In most cases, the coordinate coupling model refers to a turning motion model. Because this type of model is closely related to the coordinates, it can be divided into two types: two-dimensional and three-dimensional turning models. The two-dimensional turning model is also called the planar turning model, that is, the *CT* model.

The *CT* model is one of the most important maneuvering models in surface target tracking and is commonly used to describe maneuvering targets in USV tracking. The state equation and measurement equation of the *CT* model are as follows:

$$\begin{cases} \mathbf{x}\_{k} = \begin{pmatrix} x\_{k} \\ \dot{x}\_{k} \\ y\_{k} \\ \dot{y}\_{k} \\ \omega\_{k} \end{pmatrix} = \begin{pmatrix} 1 & \frac{\sin(\omega\_{k-1}T)}{\omega\_{k-1}} & 0 & \frac{\cos(\omega\_{k-1}T)-1}{\omega\_{k-1}} & 0 \\ 0 & \cos(\omega\_{k-1}T) & 0 & -\sin(\omega\_{k-1}T) & 0 \\ 0 & \frac{1-\cos(\omega\_{k-1}T)}{\omega\_{k-1}} & 1 & \frac{\sin(\omega\_{k-1}T)}{\omega\_{k-1}} & 0 \\ 0 & \sin(\omega\_{k-1}T) & 0 & \cos(\omega\_{k-1}T) & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x\_{k-1} \\ \dot{x}\_{k-1} \\ y\_{k-1} \\ \dot{y}\_{k-1} \\ \omega\_{k-1} \end{pmatrix} + \mathbf{w}\_{k-1} \\[1ex] \mathbf{z}\_{k} = \begin{pmatrix} \sqrt{x\_{k}^{2} + y\_{k}^{2}} \\ \arctan\left(\frac{y\_{k}}{x\_{k}}\right) \end{pmatrix} + \mathbf{v}\_{k} \end{cases} \tag{63}$$

where *x<sub>k</sub>* is the system state vector; *x<sub>k</sub>*, *y<sub>k</sub>* and *ẋ<sub>k</sub>*, *ẏ<sub>k</sub>* represent the position and velocity of the target in the *x* and *y* directions, respectively; *T* represents the sampling period; and *ω<sub>k</sub>* represents the steering angular velocity. *w<sub>k</sub>* represents the process noise, with covariance matrix *Q<sub>k</sub>*, and *v<sub>k</sub>* represents the measurement noise, with covariance matrix *R<sub>k</sub>*. The initial value of the state variable is *x*<sub>0</sub>, and the corresponding covariance matrix is *P*<sub>0|0</sub>. To enhance the mobility of the surface target, the parameters were initialized as follows:

$$\begin{aligned} \mathbf{Q}\_1 &= q\_1^2 \begin{bmatrix} \mathbf{M}\_1 & 0 & 0 \\ 0 & \mathbf{M}\_1 & 0 \\ 0 & 0 & T\_1^3/3 \end{bmatrix}, \quad \mathbf{Q}\_2 = q\_2^2 \begin{bmatrix} \mathbf{M}\_2 & 0 & 0 \\ 0 & \mathbf{M}\_2 & 0 \\ 0 & 0 & T\_2^3/3 \end{bmatrix} \\ q &= 0.05, \; q\_1 = 0.2, \; q\_2 = 0.3, \; \mathbf{M}\_k = \begin{bmatrix} T\_k^3/3 & T\_k^2/2 \\ T\_k^2/2 & T\_k \end{bmatrix}, \; T\_1 = \frac{1}{4}, \; T\_2 = \frac{1}{6} \\ \mathbf{x}\_0 &= \left(100\text{ m}, 80\text{ m/s}, 100\text{ m}, 120\text{ m/s}, -\frac{8\pi}{180}\text{ rad/s}\right)^T \\ \mathbf{P}\_{0|0} &= \mathrm{diag}\left(10\text{ m}^2, 1\text{ m}^2/\text{s}^2, 10\text{ m}^2, 1\text{ m}^2/\text{s}^2, 0.1\text{ rad}^2/\text{s}^2\right) \\ \mathbf{R}\_1 &= q\,\mathrm{diag}\left\{ (25\text{ m})^2, \left(\frac{3\pi}{180}\text{ rad}\right)^2 \right\}, \quad \mathbf{R}\_2 = q\,\mathrm{diag}\left\{ (30\text{ m})^2, \left(\frac{9\pi}{180}\text{ rad}\right)^2 \right\} \end{aligned} \tag{64}$$
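The turn-rate-dependent transition matrix of Equation (63) can be built as follows (a minimal sketch; the helper name `ct_transition` is ours, and it assumes *ω* ≠ 0):

```python
import numpy as np

def ct_transition(omega, T):
    """State-transition matrix of the CT model in Equation (63)
    for the state (x, vx, y, vy, omega); assumes omega != 0."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1, s / omega,       0, (c - 1) / omega, 0],
        [0, c,               0, -s,              0],
        [0, (1 - c) / omega, 1, s / omega,       0],
        [0, s,               0, c,               0],
        [0, 0,               0, 0,               1]])
```

As *ω* → 0 the matrix degenerates toward the CV model of Equation (58); a production implementation would special-case small *ω* to avoid division issues.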

To set up different types of non-Gaussian state and measurement noise environments of the form *w<sub>k</sub>* ∼ (1 − *η*)*N*(**0**, *Q*<sub>1</sub>) + *ηN*(**0**, *Q*<sub>2</sub>), *r<sub>k</sub>* ∼ (1 − *η*)*N*(**0**, *R*<sub>1</sub>) + *ηN*(**0**, *R*<sub>2</sub>), we considered the following conditions:

**Condition 3.** *Under mixture noise, η* = 1/2:

$$w\_{k3} \sim \frac{1}{2}N(\mathbf{0}, \mathbf{Q}\_1) + \frac{1}{2}N(\mathbf{0}, \mathbf{Q}\_2)\,, \quad r\_{k3} \sim \frac{1}{2}N(\mathbf{0}, \mathbf{R}\_1) + \frac{1}{2}N(\mathbf{0}, \mathbf{R}\_2)\,. \tag{65}$$

**Condition 4.** *Under mixture noise, η* = 2/3:

$$w\_{k4} \sim \frac{1}{3}N(\mathbf{0}, \mathbf{Q}\_1) + \frac{2}{3}N(\mathbf{0}, \mathbf{Q}\_2)\,, \quad r\_{k4} \sim \frac{1}{3}N(\mathbf{0}, \mathbf{R}\_1) + \frac{2}{3}N(\mathbf{0}, \mathbf{R}\_2)\,. \tag{66}$$

In the simulation experiment, we used the *RMSE* and *ARMSE* to evaluate the filtering performance. The *RMSE* and *ARMSE* of the displacement, velocity, and steering angle were defined as follows:

$$\begin{aligned} RMSE\_{CTpos}(k) &= \sqrt{\frac{1}{N} \sum\_{i=1}^{N} \left[ \left( x\_k^i - \hat{x}\_{k|k}^i \right)^2 + \left( y\_k^i - \hat{y}\_{k|k}^i \right)^2 \right]} \\ RMSE\_{CTvel}(k) &= \sqrt{\frac{1}{N} \sum\_{i=1}^{N} \left[ \left( \dot{x}\_k^i - \hat{\dot{x}}\_{k|k}^i \right)^2 + \left( \dot{y}\_k^i - \hat{\dot{y}}\_{k|k}^i \right)^2 \right]} \\ RMSE\_{CTang}(k) &= \sqrt{\frac{1}{N} \sum\_{i=1}^{N} \left( \omega\_k^i - \hat{\omega}\_{k|k}^i \right)^2} \\ ARMSE &= \frac{1}{A} \sum\_{k=1}^{A} RMSE(k)\,. \end{aligned} \tag{67}$$

Here, *N* represents the total number of Monte Carlo simulations, and *A* represents the number of timesteps of each simulation. We set *c* = 1/3 in Equations (45) and (46), *N* = 1 × 10<sup>5</sup>, and *A* = 140 s. The first three lines of Equation (67) give the position, velocity, and angular-velocity *RMSE*s computed from the true and estimated state components, and the last line gives the *ARMSE*.

Because the *CT* model is strongly nonlinear, and the cooperative *CT* model describes the high-speed maneuvering and steering of a USV on the water surface more realistically than the uniform turning model, this model was used for the final robust filtering experiment on a nonlinear system in a non-Gaussian environment.

As in Experiment 2 of Section 4.2.1, the robust filters MCUKF proposed in [16] and MCSCKF proposed in [27], along with the E-MCCKF and S-MCCKF (formed by combining Equations (14) and (20), respectively, with the MCC method proposed in this study), were compared with the proposed robust DD-MCCKF algorithm under the non-Gaussian state and measurement noise environments above. The simulation results for the *CT* target-tracking model are presented in Figures 7–10.

**Figure 7.** Tracking trajectories of surface target (unmanned surface vessel) under non-Gaussian noise conditions ((**a**) is the result under Equation (65) and (**b**) is the result under Equation (66)).

**Figure 8.** Position RMSEs of robust target-tracking algorithms under non-Gaussian noise conditions ((**a**) is the result under Equation (65) and (**b**) is the result under Equation (66)).

**Figure 9.** Velocity RMSEs of robust target-tracking algorithms under non-Gaussian noise conditions ((**a**) is the result under Equation (65) and (**b**) is the result under Equation (66)).

**Figure 10.** Course angular-velocity *RMSE*s of robust target-tracking algorithms under non-Gaussian noise conditions ((**a**) is the result under Equation (65) and (**b**) is the result under Equation (66)).

The simulated tracking time was 140 s. Among the position errors of the simulation results, the accuracy of the MCUKF was the worst owing to numerical instability and the inaccuracy of the unscented transform. Because of the high system dimension, the approximation process of the ECKF was unstable, and its filtering performance was worse than that of the S-MCCKF. In the position *RMSE* index in particular, the ECKF was strongly affected by the numerical instability of its formula, and its filtering accuracy was even lower than that of the 3rd-degree MCCKF.

The DD-MCCKF and S-MCCKF were similar with regard to the structure of the cubature formula and the number of cubature points used; thus, they had similar filtering accuracies. Because the DD-CKF balances numerical stability against the number of cubature points used, it is better at improving accuracy while reducing the computational load. In addition, the smaller coefficient of the cubature points of the DD-CKF further mitigates the non-local sampling problem, which helps improve its filtering accuracy.

By fine-tuning parameter c in Equations (45) and (46), the non-local sampling problem caused by the increase in the system dimension was alleviated, and the filtering accuracy was increased. For the estimation of the velocity and course angular velocity, the UKF and 3rd-degree CKF had the lowest accuracy, whereas the 5th-degree ECKF and SCKF had similar performance, and the DD-MCCKF had the highest accuracy. Furthermore, as indicated in Table 4, the ARMSEs obtained via the proposed algorithm were lower than those for the 5th-degree ECKF and SCKF.

**Table 4.** Number of points and the ARMSEs of the position, velocity and course angular velocity for different robust filtering algorithms in the target tracking simulation of Experiment 4.2.2.


#### **5. Conclusions**

A high-degree robust CKF algorithm, based on a new cubature formula and MCC, was developed. First, according to the construction method of the fully symmetric cubature formula, different cubature-point coordinate generators were used to construct a new high-degree cubature formula and fine-tune its parameters to increase its accuracy. Subsequently, a new robust cost function was constructed by combining the MCC and WLS methods.

As a statistical measure of the similarity between random variables, the MCC can extract information from all even-order moments under an appropriate kernel bandwidth, which allows better use of the higher-order moment information of the signal. Therefore, in filtering applications, the MCC is more robust to non-Gaussian mixture noise than the conventional nonlinear Kalman filter, which uses only second-order information. The fixed-point iterative solution of the resulting equation was embedded in the nonlinear Kalman filtering process to obtain a robust filtering algorithm, and a new high-degree robust CKF algorithm was then obtained by combining this estimation process with the KF framework. The aim was to use as few cubature points as possible while achieving the highest filtering accuracy and stability in a non-Gaussian noise environment. The proposed method was applied to the target tracking of an unmanned surface vessel. The simulation experiments showed that, compared with other algorithms of the same degree, the proposed method requires fewer computations and achieves higher filtering accuracy and better numerical stability (or comparable results).
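The correntropy idea summarized above can be illustrated with a scalar toy problem (this is not the paper's exact Equations (45) and (46); the helper names, the kernel bandwidth `sigma`, and the simple location-estimation setting are our assumptions):

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Correntropy-induced weight for a residual e (illustrative)."""
    return np.exp(-e**2 / (2 * sigma**2))

def mcc_weighted_mean(z, sigma, iters=20):
    """Toy fixed-point iteration: robust location estimate of samples z.
    Outliers receive exponentially small kernel weights, so the estimate
    is pulled far less by heavy-tailed noise than the plain mean used by
    a least-squares update."""
    m = np.mean(z)
    for _ in range(iters):
        w = gaussian_kernel(z - m, sigma)   # reweight residuals
        m = np.sum(w * z) / np.sum(w)       # fixed-point update
    return m
```

In the filter, an analogous reweighting is applied to the innovation inside the state update, which is what makes the MCC-based variants robust to the mixture noise of Conditions 1–4.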

The proposed filter has potential application value in practical scenarios such as course tracking of unmanned surface vessels, tracking of moving maritime targets, and surface rescue in sea areas with strong environmental interference (e.g., wind and waves). Autonomous navigation with obstacle avoidance on the water surface may be achievable even under large errors in sensor distance and angle measurements. However, owing to the uncertainty and complexity of the surface environment, measurement processes based on sensors and satellite positioning may be degraded or temporarily unavailable. Moreover, state estimation problems in more complex environments, such as estimation with bias compensation, nonlinear filtering with state constraints, robust constraints, and state estimation under complex-domain impulsive noise, remain a focus of research, as discussed in References [28–45]. These issues will be studied further in future work.

**Author Contributions:** Writing—original draft preparation, T.W.; writing—review and editing, S.L. and L.Z.; validation and formal analysis, S.L., L.Z. and T.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** Supported by the Fundamental Research Funds for the Central Universities (3072022JC0404).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data can be made available upon request to the authors.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

