*2.2. ML Estimate of Parameter*

The likelihood function of *x* is [2,50,51]

$$\Lambda(\mathbf{x}; \mathbf{z}) = p(\mathbf{z}|\mathbf{x}) = [(2\pi)^{N}|\mathbf{R}|]^{-1/2} \exp\{- (1/2) [\mathbf{z} - \mathbf{h}(\mathbf{x})]^{\prime} \mathbf{R}^{-1} [\mathbf{z} - \mathbf{h}(\mathbf{x})] \}. \tag{10}$$

The maximization of the likelihood in (10) is equivalent to the minimization of the cost function [2,51]

$$J(\mathbf{x}) = [\mathbf{z} - \mathbf{h}(\mathbf{x})]^\prime \mathbf{R}^{-1} [\mathbf{z} - \mathbf{h}(\mathbf{x})] = [\mathbf{z} - \mathbf{h}(\mathbf{x})]^\prime [\mathbf{z} - \mathbf{h}(\mathbf{x})] / \sigma^2. \tag{11}$$

The maximum likelihood (ML) estimate *x*ˆ of *x* is obtained by setting the derivative of *J*(*x*) to zero [2,51],

$$\frac{dJ(\mathbf{x})}{d\mathbf{x}} = 0.\tag{12}$$

From (11) and (12), we obtain

$$[\mathbf{z} - \mathbf{h}(\hat{\mathbf{x}})]' \frac{d\mathbf{h}(\hat{\mathbf{x}})}{d\mathbf{x}} = 0. \tag{13}$$

Because the derivative of **h**(*x*) with respect to *x* is not zero, we obtain

$$\mathbf{z} - \mathbf{h}(\hat{\mathbf{x}}) = \mathbf{0}\_{N \times 1}. \tag{14}$$

Hence, the ML estimate satisfies

$$h(\hat{\mathbf{x}})\mathbf{d} = \mathbf{z}.\tag{15}$$

Left-multiplying both sides of (15) by **d**′, we obtain

$$h(\hat{\mathbf{x}})\mathbf{d}^{\prime}\mathbf{d} = \mathbf{d}^{\prime}\mathbf{z} = \sum\_{i=1}^{N} z\_{i}.\tag{16}$$

We note that

$$\mathbf{d}'\mathbf{d} = N.\tag{17}$$

Using (1) and (17) in (16), we get

$$a\hat{\mathbf{x}}^n = \bar{z},\tag{18}$$

where *z*¯ is the sample mean of *z*,

$$\bar{z} = \frac{1}{N} \sum\_{i=1}^{N} z\_i. \tag{19}$$

Thus, from (18), the ML estimate of *x* is given by

$$\hat{\mathbf{x}} = (\bar{z} / a)^{1/n}, \quad n = 2, 3, \dots \tag{20}$$
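The closed-form estimate in (20) is straightforward to evaluate numerically. The sketch below assumes the power-law measurement model z\_i = a x^n + w\_i with i.i.d. zero-mean Gaussian noise, as in (1)–(9); the values of a, n, x, σ, and N are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative values (hypothetical, not from the source)
a, n, x_true, sigma, N = 2.0, 3, 1.5, 0.1, 100

rng = np.random.default_rng(0)
# Vector measurement z = h(x) d + w, with h(x) = a * x^n and d = ones(N)
z = a * x_true**n + sigma * rng.standard_normal(N)

z_bar = z.mean()                    # sample mean of z, Eq. (19)
x_hat = (z_bar / a) ** (1.0 / n)    # closed-form ML estimate, Eq. (20)
```

With N = 100 samples, the estimate lands very close to the true value, since the noise on the sample mean shrinks as σ/√N.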

**Remark 3.** *In general, the MLE for a nonlinear measurement model is biased [51]. We can calculate the variance of x*ˆ *under the small error assumption using the linearization approximation. To guarantee the validity of the variance, the bias in the MLE must be calculated. The bias can be numerically calculated using Monte Carlo simulation.*

*The bias in the MLE is defined by [2,51]*

$$b(\mathbf{x}) := E[\hat{\mathbf{x}}] - \mathbf{x}.\tag{21}$$
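The Monte Carlo bias calculation mentioned in Remark 3 can be sketched as follows, again assuming the model z\_i = a x^n + w\_i and illustrative parameter values: repeat the estimation over many independent noise realizations and average the estimates.

```python
import numpy as np

# Illustrative values (hypothetical, not from the source)
a, n, x_true, sigma, N, trials = 2.0, 3, 1.5, 0.5, 25, 20000

rng = np.random.default_rng(1)
estimates = np.empty(trials)
for k in range(trials):
    # Fresh noise realization for each Monte Carlo trial
    z = a * x_true**n + sigma * rng.standard_normal(N)
    estimates[k] = (z.mean() / a) ** (1.0 / n)  # closed-form MLE, Eq. (20)

# Monte Carlo approximation of the bias b(x) = E[x_hat] - x
bias = estimates.mean() - x_true
```

Because the estimator is only mildly nonlinear in the sample mean, the bias here is small compared with the estimator's standard deviation.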

**Remark 4.** *The ML estimate of x in [35] was obtained by minimizing the cost function in (11) numerically. The estimator in (20) provides a simple and efficient way of estimating x from the vector measurement* **z** *without numerical optimization.*

*2.3. Variance of the MLE*

The variance of *x*ˆ is given by [51]

$$
\sigma\_x^2 = (\mathbf{H}' \mathbf{R}^{-1} \mathbf{H})^{-1},
\tag{22}
$$

where

$$\mathbf{H} = \frac{d\mathbf{h}(\mathbf{x})}{d\mathbf{x}}\Big|\_{\mathbf{x}=\hat{\mathbf{x}}}.\tag{23}$$

Using the special form of **R** from (9) in (22), we get

$$
\sigma\_x^2 = \sigma^2 (\mathbf{H}' \mathbf{H})^{-1}.\tag{24}
$$

Using (7) in (23), we get

$$\mathbf{H} = \frac{dh(\mathbf{x})}{d\mathbf{x}}\Big|\_{\mathbf{x}=\hat{\mathbf{x}}} \mathbf{d}.\tag{25}$$

Differentiating (1) with respect to *x*, we obtain

$$\frac{dh(\mathbf{x})}{d\mathbf{x}} = an\mathbf{x}^{n-1}.\tag{26}$$

Using (26) in (25), we get

$$\mathbf{H} = an\hat{\mathbf{x}}^{n-1}\mathbf{d}.\tag{27}$$

From (27), we obtain

$$\mathbf{H}'\mathbf{H} = (an\hat{\mathbf{x}}^{n-1})^2\, \mathbf{d}'\mathbf{d}.\tag{28}$$

Using (28) and (17) in (24), we obtain

$$
\sigma\_x^2 = \sigma^2 (\mathbf{H}'\mathbf{H})^{-1} = \frac{\sigma^2}{N(an\,\hat{\mathbf{x}}^{n-1})^2},\tag{29}
$$

$$
\sigma\_x = \frac{\sigma}{\sqrt{N}\,an\hat{\mathbf{x}}^{n-1}}.\tag{30}
$$
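The analytic standard deviation in (30) can be checked against a Monte Carlo sample standard deviation. The sketch below assumes the same power-law model z\_i = a x^n + w\_i with illustrative parameter values; under the small-error assumption the two should agree closely.

```python
import numpy as np

# Illustrative values (hypothetical, not from the source)
a, n, x_true, sigma, N, trials = 2.0, 3, 1.5, 0.5, 25, 20000

rng = np.random.default_rng(2)
estimates = np.empty(trials)
for k in range(trials):
    z = a * x_true**n + sigma * rng.standard_normal(N)
    estimates[k] = (z.mean() / a) ** (1.0 / n)  # closed-form MLE, Eq. (20)

# Plug-in analytic standard deviation from Eq. (30), evaluated at x_hat
x_hat = estimates.mean()
sigma_x = sigma / (np.sqrt(N) * a * n * x_hat ** (n - 1))

# Empirical spread of the Monte Carlo estimates for comparison
sample_std = estimates.std()
```

A close match between `sigma_x` and `sample_std` indicates that the linearization underlying (22)–(30) is adequate for the chosen noise level.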
