**5. Numerical Simulation and Results**

We follow the same simulation scenario as used in our previous work [35]. We use *a* = 0.6 and *n* = 2, 3, 4, 5 and a number of uniformly spaced *x* coordinates with the spacing of 0.1 in the interval [2, 7]. The measurement noise standard deviation (*σ*) is 0.5. The dimension of the measurement vector is 10 or 20. The results are based on 1000 Monte Carlo runs. Figure 2 shows log10(*h*(*x*)) versus *x*.
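The source defers the scenario details to [35]; from the stated parameters (*a* = 0.6, powers *n* = 2, …, 5, noise σ = 0.5, 10 or 20 scalar measurements), a minimal sketch of one Monte Carlo trial can be written, assuming a monomial measurement function *h*(*x*) = *a xⁿ* (our reading of [35], not stated explicitly here); all variable names are illustrative:

```python
import numpy as np

# Sketch of one Monte Carlo trial, assuming the scenario of [35] uses
# h(x) = a * x**n with m repeated scalar measurements z_j = h(x) + w_j,
# w_j ~ N(0, sigma**2).  All names here are illustrative.
a, n, sigma, m = 0.6, 2, 0.5, 10
rng = np.random.default_rng(1)

x_true = 3.0                                # one grid point in [2, 7]
z = a * x_true**n + sigma * rng.standard_normal(m)

# With i.i.d. Gaussian noise the MLE minimizes the sum of squared residuals;
# for repeated measurements of the same h(x) it reduces to inverting h at
# the sample mean (valid here since z_bar > 0 over this x range).
z_bar = z.mean()
x_hat = (z_bar / a) ** (1.0 / n)
```

Repeating this over the *x* grid and 1000 runs yields the sample statistics reported below.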

**Figure 2.** log10(*h*(*x*)) versus *x*.

To assess the accuracy of the MLE, we compute the sample bias, sample MSE, ANEES [42], and CRLB [2,41,51]. Let $x\_{k,i} = x\_k$, $\hat{x}\_{k,i}$, and $\sigma\_{k,i}^2$ denote the true parameter, the ML estimate, and its associated variance, respectively, at the *k*th point in the *i*th Monte Carlo run. The sample bias in the estimate at the *k*th point is defined by [9]

$$\hat{\mathcal{B}}\_k := \frac{1}{M} \sum\_{i=1}^{M} (\mathbf{x}\_{k,i} - \hat{\mathbf{x}}\_{k,i}), \tag{113}$$

where *M* is the number of Monte Carlo runs. The sample root MSE (RMSE) [9] and ANEES [2,9,42] at the *k*th point are defined, respectively, by

$$\text{RMSE}\_k := \left[ \frac{1}{M} \sum\_{i=1}^{M} (\mathbf{x}\_{k,i} - \hat{\mathbf{x}}\_{k,i})^2 \right]^{1/2}, \tag{114}$$

$$\text{ANEES}\_{k} := \frac{1}{M} \sum\_{i=1}^{M} (\mathbf{x}\_{k,i} - \hat{\mathbf{x}}\_{k,i})^2 / \sigma\_{k,i}^2. \tag{115}$$
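Given the per-run truths, estimates, and variances at one grid point, definitions (113)–(115) translate directly into code (function and variable names are illustrative):

```python
import numpy as np

def mc_stats(x_true, x_hat, var_hat):
    """Sample bias (113), RMSE (114), and ANEES (115) over M Monte Carlo runs."""
    err = x_true - x_hat                    # estimation error per run
    bias = err.mean()                       # (113)
    rmse = np.sqrt(np.mean(err**2))         # (114)
    anees = np.mean(err**2 / var_hat)       # (115)
    return bias, rmse, anees

# Tiny hand-checkable example: symmetric errors of +/-0.1 with variance 0.01
# give bias 0, RMSE 0.1, and ANEES 1.
x_true = np.array([3.0, 3.0])
x_hat = np.array([2.9, 3.1])
bias, rmse, anees = mc_stats(x_true, x_hat, np.array([0.01, 0.01]))
```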

Figure 3 presents the sample bias for different powers of *x*. We observe from Figure 3 that the bias is small compared with the true value of *x* and decreases as the power of *x* increases. In Figure 4, we have plotted the √CRLB and the average of *σ<sub>x</sub>* over the Monte Carlo runs. Figure 4 shows that, for each power of *x*, the √CRLB and the average of *σ<sub>x</sub>* lie on top of each other and cannot be distinguished in the figure.
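The CRLB itself has a simple closed form in this setting: for *m* i.i.d. Gaussian measurements of a scalar function, the Fisher information is *I*(*x*) = *m h*′(*x*)²/σ², so CRLB = σ²/(*m h*′(*x*)²). A sketch under the same *h*(*x*) = *a xⁿ* assumption as above:

```python
import numpy as np

# sqrt(CRLB) for m i.i.d. measurements z = h(x) + w, w ~ N(0, sigma^2),
# assuming h(x) = a * x**n (our reading of the scenario in [35]).
# Fisher information: I(x) = m * h'(x)**2 / sigma**2, CRLB = 1 / I(x).
def sqrt_crlb(x, a, n, sigma, m):
    h_prime = a * n * x ** (n - 1)
    return sigma / (np.sqrt(m) * np.abs(h_prime))

x_grid = np.arange(2.0, 7.0 + 1e-9, 0.1)   # the grid used in the simulations
bound_n2 = sqrt_crlb(x_grid, a=0.6, n=2, sigma=0.5, m=10)
```

Doubling *m* from 10 to 20 divides the CRLB by 2, consistent with the smaller bounds observed for 20 measurements.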

**Figure 3.** (**a**) Sample bias vs. *x* using 10 scalar measurements and (**b**) sample bias vs. *x* using 20 scalar measurements.

**Figure 4.** (**a**) √CRLB or (Avg. *σ<sub>x</sub>*) vs. *x* using 10 scalar measurements and (**b**) √CRLB or (Avg. *σ<sub>x</sub>*) vs. *x* using 20 scalar measurements.

Figure 5 presents the √CRLB and RMSE for each power of *x*, drawn as solid and dashed lines, respectively. We see from Figure 5 that the corresponding values of √CRLB and RMSE are close to each other for each power of *x*. In Figures 3–5, the bias, √CRLB, *σ<sub>x</sub>*, and RMSE for 20 measurements are smaller than the corresponding values for 10 measurements.

**Figure 5.** (**a**) √CRLB or RMSE vs. *x* using 10 scalar measurements and (**b**) √CRLB or RMSE vs. *x* using 20 scalar measurements.

We present the ANEES [42] in Figure 6 for different powers of *x* with 99% confidence bounds. We see from Figure 6 that the ANEES lies within the 99% confidence bounds. This shows that the variance *σ*<sub>*x*</sub><sup>2</sup> calculated using the MLE is consistent with the estimation error.
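The 99% bounds follow from the standard consistency test: for a scalar parameter, *M*·ANEES is χ²-distributed with *M* degrees of freedom when the estimator is consistent. A sketch of the bounds (uses SciPy):

```python
from scipy.stats import chi2

# Two-sided 99% probability region for the ANEES of a scalar parameter:
# under the consistency hypothesis, M * ANEES_k ~ chi-square with M dof.
M = 1000                                    # number of Monte Carlo runs
lo = chi2.ppf(0.005, df=M) / M
hi = chi2.ppf(0.995, df=M) / M
# A consistent estimator keeps ANEES_k inside [lo, hi] at each grid point.
```

For *M* = 1000 the bounds are tight around 1, so the test is fairly demanding.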

**Figure 6.** (**a**) ANEES vs. *x* using 10 scalar measurements and (**b**) ANEES vs. *x* using 20 scalar measurements.

Figure 7 presents the logarithm of the extrinsic curvature log10(*κ*(*x*)) versus *x*. The extrinsic curvature is completely determined by the first and second derivatives of the non-linear function *h* and it is evaluated while using the true *x*.
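The source does not restate the curvature formula; one standard choice consistent with the statement above (it uses only the first and second derivatives of *h*) is the planar-curve curvature *κ*(*x*) = |*h*″(*x*)| / (1 + *h*′(*x*)²)^(3/2). A sketch under that assumption and the same *h*(*x*) = *a xⁿ* reading:

```python
import numpy as np

# Planar-curve (extrinsic) curvature of y = h(x), assuming h(x) = a * x**n.
# This is one standard definition; it depends only on h' and h'', matching
# the statement in the text, but the exact form used in [35] may differ.
def curvature(x, a, n):
    h1 = a * n * x ** (n - 1)               # h'(x)
    h2 = a * n * (n - 1) * x ** (n - 2)     # h''(x)
    return np.abs(h2) / (1.0 + h1**2) ** 1.5

kappa = curvature(3.0, a=0.6, n=2)          # evaluated at the true x
```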

In Figures 8–18, we present results using 10 scalar measurements. We also generated results using 20 scalar measurements but, to limit the number of figures, do not show them. The CRLB, the variance of the estimation error, all MoNs, and the MSE follow the same trends; with 20 measurements, however, the corresponding values are smaller due to improved estimation accuracy.

**Figure 7.** Logarithm of the extrinsic curvature log10(*κ*(*x*)) versus *x*.

**Figure 8.** (**a**) Logarithm of Beale's MoN (log10(Avg. Beale's MoN)) vs. *x* and (**b**) logarithm of Beale's MoN using LS (log10(Avg. Beale's-LS MoN)) vs. *x* with 10 scalar measurements.

**Figure 9.** (**a**) Logarithm of Linssen's MoN (log10(Avg. Linssen's MoN)) vs. *x* and (**b**) logarithm of Linssen's MoN using LS (log10(Avg. Linssen's-LS MoN)) vs. *x* with 10 scalar measurements.

**Figure 10.** (**a**) Logarithm of Bates and Watts parameter-effects curvature (log10(Avg. *K*)) vs. *x* and (**b**) logarithm of direct parameter-effect curvature (log10(Avg. *β*)) vs. *x* using 10 scalar measurements.

**Figure 11.** (**a**) Logarithm of Li's un-normalized MoN (log10(Avg. *J*)) vs. *x* and (**b**) logarithm of Li's normalized MoN (log10(Avg. *ν*)) vs. *x* using 10 scalar measurements.

**Figure 12.** (**a**) Logarithm of MoN of Straka et al. (log10(Avg. *η*)) vs. *x* and (**b**) logarithm of MoN of Straka et al. with UT (log10(Avg. *η*-UT)) vs. *x* using 10 scalar measurements.

**Figure 13.** log10(MSE) vs. logarithm of extrinsic curvature (log10 (*κ*)) using 10 scalar measurements.

**Figure 14.** (**a**) log10(MSE) vs. log10(Avg. Beale's MoN) and (**b**) log10(MSE) vs. log10 (Avg. Beale's MoN using LS) using 10 scalar measurements.

**Figure 15.** (**a**) log10(MSE) vs. log10(Linssen's MoN) and (**b**) log10(MSE) vs. log10 (Linssen's-LS) using 10 scalar measurements.

**Figure 16.** log10(MSE) vs. logarithm of parameter-effects curvatures. (**a**) log10(MSE) vs. log10(Avg. *K*) and (**b**) log10(MSE) vs. log10 (Avg. *β*) using 10 scalar measurements.

**Figure 17.** log10(MSE) vs. logarithm of Li's MoN. (**a**) log10(MSE) vs. log10(Avg. *J*) and (**b**) log10(MSE) vs. log10 (Avg. *ν*) using 10 scalar measurements.

**Figure 18.** log10(MSE) vs. logarithm of MoN of Straka et al. (**a**) log10(MSE) vs. log10(Avg. *η*) and (**b**) log10(MSE) vs. log10 (Avg. *η*-UT) using 10 scalar measurements.

In [35], we showed analytically, and through Monte Carlo simulation, that affine mappings exist among log10(MSE), log10(*κ*), log10(Avg. *K*), and log10(Avg. *β*). In Figures 13–18, we have plotted log10(MSE) versus log10 of the various MoNs using 10 scalar measurements. These figures show that log10(MSE) varies with log10(MoN) according to an affine mapping with a positive slope, which implies that the MSE increases as the MoN increases. We obtain similar results for 20 scalar measurements.
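Such an affine relationship can be checked numerically with an ordinary least-squares line fit in log10 space; the sketch below uses synthetic, exactly affine data in place of the simulated MoN values:

```python
import numpy as np

# Fit log10(MSE) = slope * log10(MoN) + intercept by least squares.
# Synthetic, exactly-affine data stands in for the simulated values here.
log_mon = np.linspace(-3.0, 0.0, 20)
log_mse = 2.0 * log_mon + 1.0
slope, intercept = np.polyfit(log_mon, log_mse, deg=1)
# A positive fitted slope indicates that the MSE grows with the MoN.
```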

The above results demonstrate that, for the polynomial nonlinearity problem considered, each of the seven MoNs analyzed is a suitable metric for quantifying the MSE, which represents the complexity of a parameter estimation problem. Further research is needed to study the applicability of these MoNs to real-world non-linear filtering problems.
