**1. Introduction**

The Kalman filter (KF) [1–4] is an optimal estimator, in the minimum mean square error (MMSE) sense, for a filtering problem with linear dynamic and measurement models and additive Gaussian noise. However, many real-world filtering problems are non-linear due to nonlinearity in the dynamic and measurement models. Common real-world non-linear filtering (NLF) problems include bearing-only filtering [5–8], ground moving target indicator (GMTI) filtering [9], and passive angle-only filtering in three dimensions (3D) using an infrared search and track sensor [10–12].

In the early stages of NLF, the extended Kalman filter (EKF) [1–4] was widely used. It was observed in some problems, e.g., a body falling through the earth's atmosphere at high velocity [13,14] and bearing-only filtering [5,7,8], that the EKF performs poorly due to linearization. The high degree of nonlinearity in these problems was cited as the cause of the poor performance, but without a quantitative measure of nonlinearity (MoN) to support the claim. To overcome the poor accuracy and convergence problems of the EKF, a number of improved approximate non-linear filters, such as the unscented Kalman filter (UKF) [14,15], the cubature KF (CKF) [16], and the particle filter (PF) [8,17], have been proposed during the last two decades.

It is important to address the following questions for NLF problems:


**Remark 1.** *In this paper, we consider a parameter estimation problem with polynomial nonlinearity. We hope that the insights and results from this analysis will encourage further study of MoN in NLF problems. Next, we describe some historical developments in the field of parameter estimation and NLF.*

Beale, in his pioneering work [18], proposed four MoNs for the static non-random parameter estimation problem. Two MoNs were empirical and two were theoretical. Guttman and Meeter [19] and Linssen [20] observed that Beale's method yields a low MoN for highly non-linear problems and proposed a modified MoN. Using curvature measures based on differential geometry, Bates and Watts [21,22] and Goldberg et al. [23] extended Beale's work and developed curvature measures of nonlinearity (CMoN) for the static non-random parameter estimation problem. Bates and Watts formulated two CMoNs: the parameter-effects curvature and the intrinsic curvature [21,24–26].

In [27], we first extended the method of Bates and Watts to calculate CMoN for a non-linear filtering problem with an unattended ground sensor (UGS). Next, we computed the parameter-effects curvature and intrinsic curvature for the bearing-only filtering (BOF) problem [28–31], the GMTI filtering problem [30,32,33], a video tracking problem [34], and polynomial nonlinearity [35].

In our previous work [35], we considered a polynomial curve in two dimensions (2D) and calculated CMoN using differential geometry (e.g., extrinsic curvature) [36–38], the Bates and Watts parameter-effects curvature [21,25,26], and the direct parameter-effects curvature [29]. The computation of these curvatures requires the Jacobian and Hessian of the measurement function [2] evaluated at the true or estimated parameter. The extrinsic curvature uses the true parameter, whereas the other two CMoNs use the estimated parameter.
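For a plane curve y = h(x), the extrinsic curvature has the standard closed form kappa(x) = |h''(x)| / (1 + h'(x)^2)^(3/2). The following minimal sketch evaluates it for the polynomial case h(x) = x^n at the true parameter; the helper name and example values are ours, not from [35]:

```python
def extrinsic_curvature(x, n):
    """Extrinsic curvature of the plane curve y = x**n evaluated at the
    true parameter x: kappa = |y''| / (1 + y'**2)**1.5."""
    d1 = n * x ** (n - 1)            # first derivative y'
    d2 = n * (n - 1) * x ** (n - 2)  # second derivative y''
    return abs(d2) / (1.0 + d1 ** 2) ** 1.5

print(extrinsic_curvature(0.0, 2))  # curvature of y = x**2 at the origin -> 2.0
print(extrinsic_curvature(1.0, 3))  # curvature of y = x**3 at x = 1
```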

In [35], we obtained the maximum likelihood (ML) estimate [2,39] of the parameter *x* from a vector measurement by numerical minimization. In [40], we derived analytic expressions for the ML estimator (MLE) [2,39] and its associated variance for a vector measurement. This approach is simple and efficient, since it does not require numerical minimization. We also showed through Monte Carlo simulations in [40] that the variance of the MLE and the Cramér-Rao lower bound (CRLB) [2,41] are nearly the same for different powers of *x*. We also found that the bias error was small and that the mean square error (MSE) [2] was close to the CRLB and to the variance of the MLE. Our numerical results showed that the average normalized estimation error squared (ANEES) [42] was within the 99% confidence interval most of the time. Hence, the variance of the MLE was in agreement with the estimation error.
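To illustrate the vector-measurement setup, consider the scalar model z_i = x^n + w_i with i.i.d. Gaussian noise. The sketch below obtains the MLE in closed form via ML invariance (the sample mean is the MLE of x^n) and compares the Monte Carlo MSE with the CRLB; this is a simplified stand-in for, not a reproduction of, the derivation in [40], and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def ml_estimate(zbar, n):
    """MLE of x > 0 for z_i = x**n + w_i, w_i ~ N(0, sigma**2): the sample
    mean zbar is the MLE of theta = x**n, and by ML invariance (x**n is
    monotone for x > 0) the MLE of x is theta**(1/n)."""
    return zbar ** (1.0 / n)

x_true, n, sigma, M, runs = 1.5, 3, 0.1, 20, 2000
# Scalar CRLB with h(x) = x**n: Fisher information = M * h'(x)**2 / sigma**2.
crlb = sigma ** 2 / (M * (n * x_true ** (n - 1)) ** 2)

est = np.array([ml_estimate(np.mean(x_true ** n + sigma * rng.standard_normal(M)), n)
                for _ in range(runs)])
mse = np.mean((est - x_true) ** 2)
print(f"MSE = {mse:.3e}, CRLB = {crlb:.3e}")  # close at this noise level
```

At this noise level the bias is negligible, so the MSE essentially equals the variance of the MLE, consistent with the Monte Carlo findings described above.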

Li constructed a combined non-linear function from the non-linear time-evolution and measurement functions of a discrete-time nonlinear filtering problem and proposed a global MoN at each measurement time [43]. This MoN is the minimum mean square distance between the combined non-linear function and the set of all affine functions of the same dimension at each measurement time. An un-normalized MoN and a normalized MoN were proposed in [43]; these MoNs can also be unconditional or conditional. The normalized MoN lies in the interval [0, 1]. A journal version of the paper with enhancements was published in [44].
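A normalized MoN of this kind can be approximated by Monte Carlo sampling: fit the best affine function to the non-linear function in the least-squares sense and normalize the residual by the spread of the function's output about its mean (the best constant fit), which confines the ratio to [0, 1]. The scalar sketch below follows this idea but is our own simplification, not the exact conditional/unconditional definitions of [43]:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_mon(g, xs):
    """Monte Carlo sketch of a normalized MoN: RMS residual of the best
    affine fit to g over the samples xs, divided by the RMS deviation of
    g(xs) about its mean (the best constant fit).  The ratio is 0 for an
    affine g and at most 1, since constants are themselves affine."""
    ys = g(xs)
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    affine_mse = np.mean((ys - A @ coef) ** 2)
    total_mse = np.mean((ys - ys.mean()) ** 2)
    return np.sqrt(affine_mse / total_mse)

xs = rng.standard_normal(100_000)
print(normalized_mon(lambda x: 2.0 * x + 1.0, xs))  # affine: ~0
print(normalized_mon(lambda x: x ** 2, xs))         # uncorrelated with x under N(0,1): ~1
print(normalized_mon(lambda x: x ** 3, xs))         # partly linear: between 0 and 1
```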

The normalized MoN proposed in [43] has been calculated for several non-linear filtering problems, including one with nearly constant turn motion and a non-linear measurement model [45], a video tracking problem using a PF [46], and a hypersonic entry vehicle state estimation problem [47]. In these cases, the normalized MoN was rather low. In [33], we compared the normalized MoN for the BOF and GMTI filtering problems. Contrary to our expectation, we found that the GMTI filtering problem had a higher conditional normalized MoN than the BOF problem in the examples that we investigated.

Using the current mean (e.g., the predicted mean) and its associated covariance, Duník et al. [48] generate a number of sample points (e.g., sigma points using the unscented transform [14]) and transform these points using a non-linear function (e.g., the non-linear measurement function or time-evolution function). Subsequently, they approximate the transformed points by a linear transformation whose parameters are estimated using linear weighted least squares (WLS) [39]. They use the WLS cost function evaluated at the estimated parameters as a local MoN.
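The procedure can be sketched as follows. The standard unscented sigma-point construction and the equally weighted least-squares fit below are simplifying assumptions on our part; [48] may use different sample points and weights:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Standard unscented-transform sigma points (2n + 1 points)."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    return np.array([mean]
                    + [mean + S[:, i] for i in range(n)]
                    + [mean - S[:, i] for i in range(n)])

def local_mon(g, mean, cov):
    """Local MoN in the spirit of [48]: transform the sigma points through
    g, fit the best affine map by least squares, and return the residual
    cost; it is zero iff g acts affinely on the sigma points."""
    X = sigma_points(mean, cov)
    Y = np.array([g(x) for x in X])
    A = np.column_stack([X, np.ones(len(X))])  # affine regressors [x, 1]
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return float(np.sum((Y - A @ coef) ** 2))

mean = np.array([1.0, 0.5])
cov = np.diag([0.2, 0.1])
print(local_mon(lambda x: 3.0 * x[0] - x[1] + 2.0, mean, cov))   # affine: ~0
print(local_mon(lambda x: x[0] ** 2 + np.sin(x[1]), mean, cov))  # non-linear: > 0
```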

In [35], we showed analytically and through Monte Carlo simulations that affine mappings with positive slopes exist among the logarithms of the extrinsic curvature, the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, the MSE, and the CRLB. For completeness, we have included these key results from [35] in Section 4. New contributions in this paper include the computation and analysis of the following MoNs:


It is not possible to derive analytically a mapping between the logarithm of the MSE and the logarithms of Beale's MoN, Linssen's MoN, Li's MoN, and the MoN of Straka, Duník, and Šimandl. However, the numerical results from Monte Carlo simulations show that affine mappings with positive slopes exist between the logarithm of the MSE and the logarithms of two of these MoNs.
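Such affine log-log relationships can be checked numerically with an ordinary least-squares line fit. The sketch below uses synthetic placeholder data (not the paper's simulation results) to illustrate the test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic placeholder pairs of (log MoN, log MSE) from a hypothetical study.
log_mon = np.linspace(-4.0, -1.0, 30)
log_mse = 1.8 * log_mon - 0.7 + 0.05 * rng.standard_normal(30)

slope, intercept = np.polyfit(log_mon, log_mse, 1)  # least-squares affine fit
resid = log_mse - (slope * log_mon + intercept)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((log_mse - log_mse.mean()) ** 2)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, R^2 = {r2:.3f}")
# A positive slope with R^2 near 1 supports an affine log-log mapping.
```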

The paper is organized as follows. Section 2.1 describes the measurement model for polynomial nonlinearity. The MLE for parameter estimation and the CRLB for polynomial nonlinearity with a vector measurement are presented in Section 2. Section 3 presents different types of MoN, such as the extrinsic curvature based on differential geometry, Beale's MoN, Linssen's MoN, the Bates and Watts parameter-effects curvature, the direct parameter-effects curvature, Li's MoN, and the MoN of Straka et al. Section 4 discusses mappings among the logarithms of the extrinsic curvature, parameter-effects curvature, CRLB, and MSE. Section 5 presents the numerical simulation and results. Finally, Section 6 summarizes our contributions and concludes with future work.

*Notation Convention:* For clarity, we use italics to denote scalar quantities and boldface for vectors and matrices. A lower or upper case Roman letter represents a name (e.g., "s" for "sensor", "RMS" for "root mean square", etc.). We use ":=" to define a quantity, and **A**^*T* denotes the transpose of the vector or matrix **A**. The *n*-dimensional identity matrix, *m*-dimensional null matrix, and *m* × *n* null matrix are denoted by **I***n*, **0***m*, and **0***m*×*n*, respectively.
