4.1.2. Localization Stage

The localization stage starts by linearizing the BR equations with respect to the unknown target position. First, reorganize (7) as

$$(r\_{m,n} + R\_{\text{t},m,r,n}^{\text{o}}) - R\_{\text{t},m}^{\text{o}} = R\_{\text{r},n}^{\text{o}} + \Delta r\_{m,n} \tag{54}$$
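Equation (54) simply moves the known transmitter–receiver baseline and the transmitter–target range to the measurement side. The following is a minimal numerical sanity check, assuming the BR model $r\_{m,n} = R\_{\mathrm{t},m}^{\mathrm{o}} + R\_{\mathrm{r},n}^{\mathrm{o}} - R\_{\mathrm{t},m,\mathrm{r},n}^{\mathrm{o}} + \Delta r\_{m,n}$ of (7); all coordinates are hypothetical, illustrative values:

```python
import numpy as np

# Hypothetical coordinates; all values are illustrative.
u = np.array([120.0, -45.0, 60.0])           # target position u^o
s_t = np.array([0.0, 0.0, 10.0])             # transmitter position s^o_{t,m}
s_r = np.array([300.0, 100.0, 0.0])          # receiver position s^o_{r,n}

R_t = np.linalg.norm(u - s_t)                # transmitter-target range R^o_{t,m}
R_r = np.linalg.norm(u - s_r)                # receiver-target range R^o_{r,n}
R_tr = np.linalg.norm(s_t - s_r)             # transmitter-receiver baseline R^o_{t,m,r,n}

r = R_t + R_r - R_tr                         # noise-free BR measurement r_{m,n}

# Rearranged form (54): (r_{m,n} + R_tr) - R_t equals R_r (plus noise, zero here).
assert np.isclose((r + R_tr) - R_t, R_r)
```

With $\Delta r\_{m,n} = 0$ the rearranged identity holds exactly; with noise it holds up to $\Delta r\_{m,n}$.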

Since refined estimates of $\mathbf{s}\_{\mathrm{t},m}^{\mathrm{o}}$ and $\mathbf{s}\_{\mathrm{r},n}^{\mathrm{o}}$ are available from the calibration stage, we substitute $\mathbf{s}\_{\mathrm{t},m}^{\mathrm{o}} = \mathbf{\hat{s}}\_{\mathrm{t},m} - (\Delta\mathbf{s}\_{\mathrm{t},m} - \Delta\mathbf{\hat{s}}\_{\mathrm{t},m})$ and $\mathbf{s}\_{\mathrm{r},n}^{\mathrm{o}} = \mathbf{\hat{s}}\_{\mathrm{r},n} - (\Delta\mathbf{s}\_{\mathrm{r},n} - \Delta\mathbf{\hat{s}}\_{\mathrm{r},n})$ into (54), square both sides, and ignore second- and higher-order error terms to obtain

$$\begin{aligned} 2\left(\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},n}\right)^{\mathrm{T}}\mathbf{u}^{\mathrm{o}} + 2\left(r\_{m,n} + R\_{\mathrm{t},m,\mathrm{r},n}\right)R\_{\mathrm{t},m}^{\mathrm{o}} &= \left(r\_{m,n} + R\_{\mathrm{t},m,\mathrm{r},n}\right)^{2} + \mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},n}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{r},n} - 2R\_{\mathrm{r},n}^{\mathrm{o}}\Delta r\_{m,n} \\ &\quad + 2\left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},m} - R\_{\mathrm{r},n}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},n}\right)^{\mathrm{T}}\left(\Delta\mathbf{s}\_{\mathrm{t},m} - \Delta\mathbf{\hat{s}}\_{\mathrm{t},m}\right) - 2\left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{r},n} - R\_{\mathrm{r},n}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},n}\right)^{\mathrm{T}}\left(\Delta\mathbf{s}\_{\mathrm{r},n} - \Delta\mathbf{\hat{s}}\_{\mathrm{r},n}\right) \end{aligned} \tag{55}$$

By forming an auxiliary vector $\boldsymbol{\theta}^{\mathrm{o}} = [(\mathbf{u}^{\mathrm{o}})^{\mathrm{T}}, R\_{\mathrm{t},1}^{\mathrm{o}}, R\_{\mathrm{t},2}^{\mathrm{o}}, \ldots, R\_{\mathrm{t},M}^{\mathrm{o}}]^{\mathrm{T}}$, we can collect (55) for all $m$ and $n$ into the matrix form

$$\mathbf{G}\_1 \boldsymbol{\theta}^{\mathrm{o}} = \mathbf{h}\_1 + \Delta \mathbf{h}\_1 \tag{56}$$

where

$$\mathbf{G}\_1 = \begin{bmatrix} \mathbf{G}\_{1,s} & \mathbf{G}\_{1,r} \end{bmatrix} \tag{57}$$

$$\mathbf{G}\_{1,\mathrm{s}} = 2 \begin{bmatrix} \mathbf{\hat{S}}\_1 \\ \mathbf{\hat{S}}\_2 \\ \vdots \\ \mathbf{\hat{S}}\_M \end{bmatrix}, \quad \mathbf{\hat{S}}\_m = \begin{bmatrix} \left(\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},1}\right)^{\mathrm{T}} \\ \left(\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},2}\right)^{\mathrm{T}} \\ \vdots \\ \left(\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},N}\right)^{\mathrm{T}} \end{bmatrix} \tag{58}$$

$$\mathbf{G}\_{1,\mathrm{r}} = 2 \begin{bmatrix} \mathbf{r}\_1 & \mathbf{0}\_{N\times 1} & \cdots & \mathbf{0}\_{N\times 1} \\ \mathbf{0}\_{N\times 1} & \mathbf{r}\_2 & \cdots & \mathbf{0}\_{N\times 1} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}\_{N\times 1} & \mathbf{0}\_{N\times 1} & \cdots & \mathbf{r}\_M \end{bmatrix}, \quad \mathbf{r}\_m = \begin{bmatrix} r\_{m,1} + R\_{\mathrm{t},m,\mathrm{r},1} \\ r\_{m,2} + R\_{\mathrm{t},m,\mathrm{r},2} \\ \vdots \\ r\_{m,N} + R\_{\mathrm{t},m,\mathrm{r},N} \end{bmatrix} \tag{59}$$

$$\mathbf{h}\_{1} = \begin{bmatrix} \mathbf{h}\_{1,1} \\ \mathbf{h}\_{1,2} \\ \vdots \\ \mathbf{h}\_{1,M} \end{bmatrix}, \quad \mathbf{h}\_{1,m} = \begin{bmatrix} \left(r\_{m,1} + R\_{\mathrm{t},m,\mathrm{r},1}\right)^2 + \mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},1}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{r},1} \\ \left(r\_{m,2} + R\_{\mathrm{t},m,\mathrm{r},2}\right)^2 + \mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},2}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{r},2} \\ \vdots \\ \left(r\_{m,N} + R\_{\mathrm{t},m,\mathrm{r},N}\right)^2 + \mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},m} - \mathbf{\hat{s}}\_{\mathrm{r},N}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{r},N} \end{bmatrix} \tag{60}$$

and the error vector $\Delta\mathbf{h}\_1$ is given by

$$
\Delta \mathbf{h}\_1 = \mathbf{B}\_1 \Delta \mathbf{r} + \mathbf{D}\_1 \left(\Delta \mathbf{s} - \Delta \mathbf{\hat{s}}\right) \tag{61}
$$

where

$$\mathbf{B}\_{1} = \begin{bmatrix} \mathbf{B}\_{1,1} & \mathbf{O}\_{N \times N} & \cdots & \mathbf{O}\_{N \times N} \\ \mathbf{O}\_{N \times N} & \mathbf{B}\_{1,2} & \cdots & \mathbf{O}\_{N \times N} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{O}\_{N \times N} & \mathbf{O}\_{N \times N} & \cdots & \mathbf{B}\_{1,M} \end{bmatrix} \tag{62}$$

$$\mathbf{D}\_{1} = \begin{bmatrix} \mathbf{D}\_{1, \mathrm{t}, 1} & \mathbf{O}\_{\mathrm{N} \times 3} & \cdots & \mathbf{O}\_{\mathrm{N} \times 3} & \mathbf{D}\_{1, \mathrm{r}, 1} \\ \mathbf{O}\_{\mathrm{N} \times 3} & \mathbf{D}\_{1, \mathrm{t}, 2} & \cdots & \mathbf{O}\_{\mathrm{N} \times 3} & \mathbf{D}\_{1, \mathrm{r}, 2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mathbf{O}\_{\mathrm{N} \times 3} & \mathbf{O}\_{\mathrm{N} \times 3} & \cdots & \mathbf{D}\_{1, \mathrm{t}, \mathrm{M}} & \mathbf{D}\_{1, \mathrm{r}, \mathrm{M}} \end{bmatrix} \tag{63}$$

with

$$\mathbf{B}\_{1,m} = -2\,\mathrm{diag}\left(R\_{\mathrm{r},1}^{\mathrm{o}}, R\_{\mathrm{r},2}^{\mathrm{o}}, \ldots, R\_{\mathrm{r},N}^{\mathrm{o}}\right) \tag{64}$$

$$\mathbf{D}\_{1,\mathrm{t},m} = 2 \begin{bmatrix} \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},m} - R\_{\mathrm{r},1}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},1}\right)^{\mathrm{T}} \\ \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},m} - R\_{\mathrm{r},2}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},2}\right)^{\mathrm{T}} \\ \vdots \\ \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},m} - R\_{\mathrm{r},N}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},N}\right)^{\mathrm{T}} \end{bmatrix} \tag{65}$$

$$\mathbf{D}\_{1,\mathrm{r},m} = -2 \begin{bmatrix} \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{r},1} - R\_{\mathrm{r},1}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},1}\right)^{\mathrm{T}} & \mathbf{0}\_{3\times 1}^{\mathrm{T}} & \cdots & \mathbf{0}\_{3\times 1}^{\mathrm{T}} \\ \mathbf{0}\_{3\times 1}^{\mathrm{T}} & \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{r},2} - R\_{\mathrm{r},2}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},2}\right)^{\mathrm{T}} & \cdots & \mathbf{0}\_{3\times 1}^{\mathrm{T}} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}\_{3\times 1}^{\mathrm{T}} & \mathbf{0}\_{3\times 1}^{\mathrm{T}} & \cdots & \left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{r},N} - R\_{\mathrm{r},N}^{\mathrm{o}}\mathbf{\hat{p}}\_{\mathrm{t},m,\mathrm{r},N}\right)^{\mathrm{T}} \end{bmatrix} \tag{66}$$
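To make the stacked construction (56)-(60) concrete, the sketch below builds $\mathbf{G}\_1$ and $\mathbf{h}\_1$ for a hypothetical noise-free scenario with exactly known transmitter/receiver positions, in which case $\Delta\mathbf{h}\_1$ vanishes and $\mathbf{G}\_1\boldsymbol{\theta}^{\mathrm{o}} = \mathbf{h}\_1$ holds exactly (all coordinates are illustrative):

```python
import numpy as np

# Hypothetical noise-free scenario; all coordinates are illustrative.
u = np.array([120.0, -45.0, 60.0])                     # true target u^o
S_t = np.array([[0.0, 0.0, 10.0],                      # transmitter positions s^o_{t,m}
                [300.0, 100.0, 0.0],
                [-250.0, 200.0, 30.0]])
S_r = np.array([[200.0, -150.0, 0.0],                  # receiver positions s^o_{r,n}
                [-300.0, -100.0, 20.0],
                [50.0, 300.0, 0.0],
                [-100.0, -250.0, 10.0]])
M, N = len(S_t), len(S_r)

R_t = np.linalg.norm(u - S_t, axis=1)                  # R^o_{t,m}
R_r = np.linalg.norm(u - S_r, axis=1)                  # R^o_{r,n}
R_tr = np.linalg.norm(S_t[:, None] - S_r[None], axis=2)  # baselines R_{t,m,r,n}
r = R_t[:, None] + R_r[None] - R_tr                    # noise-free BRs r_{m,n}

# Stack (55) over all m, n into G1 * theta^o = h1 as in (56)-(60).
G1 = np.zeros((M * N, 3 + M))
h1 = np.zeros(M * N)
for m in range(M):
    for n in range(N):
        k = m * N + n
        G1[k, :3] = 2 * (S_t[m] - S_r[n])              # rows of G_{1,s}, eq. (58)
        G1[k, 3 + m] = 2 * (r[m, n] + R_tr[m, n])      # block of G_{1,r}, eq. (59)
        h1[k] = (r[m, n] + R_tr[m, n])**2 + S_t[m] @ S_t[m] - S_r[n] @ S_r[n]  # eq. (60)

theta = np.concatenate([u, R_t])                       # theta^o = [u^o; R^o_{t,1..M}]
assert np.allclose(G1 @ theta, h1)                     # (56) with Delta h1 = 0
```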

From the set of linear equations in (56), the WLS estimate of $\boldsymbol{\theta}^{\mathrm{o}}$, denoted by $\boldsymbol{\theta}$, which minimizes $\Delta\mathbf{h}\_1^{\mathrm{T}}\mathbf{W}\_1\Delta\mathbf{h}\_1$, can be produced as

$$\boldsymbol{\theta} = \left(\mathbf{G}\_1^{\mathrm{T}} \mathbf{W}\_1 \mathbf{G}\_1\right)^{-1} \mathbf{G}\_1^{\mathrm{T}} \mathbf{W}\_1 \mathbf{h}\_1 \tag{67}$$

where $\mathbf{W}\_1$ is the weighting matrix, computed as

$$\begin{aligned} \mathbf{W}\_1 &= \left[ \mathrm{E}\left(\Delta\mathbf{h}\_1 \Delta\mathbf{h}\_1^{\mathrm{T}}\right) \right]^{-1} \\ &= \left[ \mathbf{B}\_1 \mathbf{Q}\_{r} \mathbf{B}\_1^{\mathrm{T}} + \mathbf{D}\_1 \mathrm{cov}\left(\Delta\mathbf{s} - \Delta\mathbf{\hat{s}}\right) \mathbf{D}\_1^{\mathrm{T}} \right]^{-1} \end{aligned} \tag{68}$$

However, the unknown target position is required to compute $\mathbf{W}\_1$. To resolve this contradiction, we first set $\mathbf{W}\_1 = \mathbf{I}\_{MN \times MN}$ and use (67) to compute a least-squares estimate of $\boldsymbol{\theta}^{\mathrm{o}}$, and then use this estimate to update $\mathbf{W}\_1$ for another repetition.

Based on the WLS theory, the estimate $\boldsymbol{\theta}$ is approximately unbiased and, given sufficiently small BR measurement noise and transmitter/receiver position errors, its covariance matrix can be obtained as

$$\mathrm{cov}(\boldsymbol{\theta}) = \left(\mathbf{G}\_1^{\mathrm{T}} \mathbf{W}\_1 \mathbf{G}\_1\right)^{-1} \tag{69}$$

Next, the functional relation between the target position $\mathbf{u}^{\mathrm{o}}$ and the introduced nuisance parameters $R\_{\mathrm{t},1}^{\mathrm{o}}, R\_{\mathrm{t},2}^{\mathrm{o}}, \ldots, R\_{\mathrm{t},M}^{\mathrm{o}}$ is explored to compute the final estimate of the target position. To this end, reorganize the functional relation in (4) as

$$2(\mathbf{s}\_{\mathbf{t},m}^{\circ})^{\mathrm{T}}\mathbf{u}^{\mathrm{o}} = (\mathbf{u}^{\mathrm{o}})^{\mathrm{T}}\mathbf{u}^{\mathrm{o}} + \left(\mathbf{s}\_{\mathbf{t},m}^{\mathrm{o}}\right)^{\mathrm{T}}\mathbf{s}\_{\mathbf{t},m}^{\mathrm{o}} - \left(R\_{\mathbf{t},m}^{\mathrm{o}}\right)^{2} \tag{70}$$
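Equation (70) is just the expansion of $\left(R\_{\mathrm{t},m}^{\mathrm{o}}\right)^2 = \lVert\mathbf{u}^{\mathrm{o}} - \mathbf{s}\_{\mathrm{t},m}^{\mathrm{o}}\rVert^2$. A one-line check with hypothetical coordinates:

```python
import numpy as np

u = np.array([120.0, -45.0, 60.0])           # hypothetical target position
s_t = np.array([-80.0, 30.0, 5.0])           # hypothetical transmitter position
R_t = np.linalg.norm(u - s_t)                # R^o_{t,m} = ||u^o - s^o_{t,m}||

# Eq. (70): 2 (s^o)^T u^o = (u^o)^T u^o + (s^o)^T s^o - (R^o_t)^2
assert np.isclose(2 * s_t @ u, u @ u + s_t @ s_t - R_t**2)
```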

Denoting the estimation error of $\boldsymbol{\theta}$ by $\Delta\boldsymbol{\theta}$, we arrive at

$$\mathbf{u}^{\mathrm{o}} = \boldsymbol{\theta}(1:3) - \Delta\boldsymbol{\theta}(1:3) \tag{71}$$

$$R\_{\mathrm{t},m}^{\mathrm{o}} = \boldsymbol{\theta}(3+m) - \Delta\boldsymbol{\theta}(3+m) \tag{72}$$

Putting (71) and (72) into the right-hand side of (70) and $\mathbf{s}\_{\mathrm{t},m}^{\mathrm{o}} = \mathbf{\hat{s}}\_{\mathrm{t},m} - (\Delta\mathbf{s}\_{\mathrm{t},m} - \Delta\mathbf{\hat{s}}\_{\mathrm{t},m})$ into both sides, we have, after ignoring second-order error terms,

$$\begin{aligned} 2\mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{u}^{\mathrm{o}} &= \boldsymbol{\theta}(1:3)^{\mathrm{T}}\boldsymbol{\theta}(1:3) - \boldsymbol{\theta}(m+3)^{2} + \mathbf{\hat{s}}\_{\mathrm{t},m}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},m} \\ &\quad - 2\boldsymbol{\theta}(1:3)^{\mathrm{T}}\Delta\boldsymbol{\theta}(1:3) + 2\boldsymbol{\theta}(m+3)\Delta\boldsymbol{\theta}(m+3) + 2\left(\mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},m}\right)^{\mathrm{T}}\left(\Delta\mathbf{s}\_{\mathrm{t},m} - \Delta\mathbf{\hat{s}}\_{\mathrm{t},m}\right) \end{aligned} \tag{73}$$

The final estimate of the target position should satisfy (73) while remaining as close as possible to the position estimate contained in $\boldsymbol{\theta}$. In line with this principle, one has the following set of equations

$$\mathbf{G}\_2 \mathbf{u}^o = \mathbf{h}\_2 + \Delta \mathbf{h}\_2 \tag{74}$$

where

$$\mathbf{G}\_2 = \begin{bmatrix} \mathbf{I}\_{3 \times 3} \\ 2\mathbf{\hat{s}}\_{\mathrm{t},1}^{\mathrm{T}} \\ 2\mathbf{\hat{s}}\_{\mathrm{t},2}^{\mathrm{T}} \\ \vdots \\ 2\mathbf{\hat{s}}\_{\mathrm{t},M}^{\mathrm{T}} \end{bmatrix} \tag{75}$$

$$\mathbf{h}\_{2} = \begin{bmatrix} \boldsymbol{\theta}(1:3) \\ \boldsymbol{\theta}(1:3)^{\mathrm{T}}\boldsymbol{\theta}(1:3) - \boldsymbol{\theta}(1+3)^{2} + \mathbf{\hat{s}}\_{\mathrm{t},1}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},1} \\ \boldsymbol{\theta}(1:3)^{\mathrm{T}}\boldsymbol{\theta}(1:3) - \boldsymbol{\theta}(2+3)^{2} + \mathbf{\hat{s}}\_{\mathrm{t},2}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},2} \\ \vdots \\ \boldsymbol{\theta}(1:3)^{\mathrm{T}}\boldsymbol{\theta}(1:3) - \boldsymbol{\theta}(M+3)^{2} + \mathbf{\hat{s}}\_{\mathrm{t},M}^{\mathrm{T}}\mathbf{\hat{s}}\_{\mathrm{t},M} \end{bmatrix} \tag{76}$$

$$
\Delta \mathbf{h}\_2 = \mathbf{B}\_2 \Delta\boldsymbol{\theta} + \mathbf{D}\_2 \left(\Delta \mathbf{s} - \Delta \mathbf{\hat{s}}\right) \tag{77}
$$

$$\mathbf{B}\_2 = \begin{bmatrix} -\mathbf{I}\_{3 \times 3} & \mathbf{0}\_{3 \times 1} & \mathbf{0}\_{3 \times 1} & \cdots & \mathbf{0}\_{3 \times 1} \\ -2\boldsymbol{\theta}(1:3)^{\mathrm{T}} & 2\boldsymbol{\theta}(1+3) & 0 & \cdots & 0 \\ -2\boldsymbol{\theta}(1:3)^{\mathrm{T}} & 0 & 2\boldsymbol{\theta}(2+3) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -2\boldsymbol{\theta}(1:3)^{\mathrm{T}} & 0 & 0 & \cdots & 2\boldsymbol{\theta}(M+3) \end{bmatrix} \tag{78}$$

$$\mathbf{D}\_2 = \begin{bmatrix} \mathbf{O}\_{3\times3M} & \mathbf{O}\_{3\times3N} \\ 2\,\mathrm{diag}\left\{ \left( \mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},1} \right)^{\mathrm{T}}, \left( \mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},2} \right)^{\mathrm{T}}, \ldots, \left( \mathbf{u}^{\mathrm{o}} - \mathbf{\hat{s}}\_{\mathrm{t},M} \right)^{\mathrm{T}} \right\} & \mathbf{O}\_{M\times3N} \end{bmatrix} \tag{79}$$
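The second-stage system (74)-(76) can be checked in the same way: with an error-free first-stage vector $\boldsymbol{\theta}$ and exactly known transmitter positions, $\Delta\mathbf{h}\_2 = \mathbf{0}$ and $\mathbf{G}\_2\mathbf{u}^{\mathrm{o}} = \mathbf{h}\_2$ holds exactly (all values illustrative):

```python
import numpy as np

u = np.array([120.0, -45.0, 60.0])                     # true target u^o
S_t = np.array([[0.0, 0.0, 10.0], [300.0, 100.0, 0.0], [-250.0, 200.0, 30.0]])
R_t = np.linalg.norm(u - S_t, axis=1)                  # nuisance ranges R^o_{t,m}

theta = np.concatenate([u, R_t])                       # error-free first-stage vector

G2 = np.vstack([np.eye(3), 2 * S_t])                   # eq. (75)
h2 = np.concatenate([theta[:3],                        # eq. (76)
                     theta[:3] @ theta[:3] - theta[3:]**2 + np.sum(S_t**2, axis=1)])

assert np.allclose(G2 @ u, h2)                         # (74) with Delta h2 = 0
```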

Invoking the WLS theory again, one has the solution for the target position, denoted by $\mathbf{u}$, from (74) as

$$\mathbf{u} = \left(\mathbf{G}\_2^T \mathbf{W}\_2 \mathbf{G}\_2\right)^{-1} \mathbf{G}\_2^T \mathbf{W}\_2 \mathbf{h}\_2 \tag{80}$$

where $\mathbf{W}\_2$ is the weighting matrix, determined by

$$\begin{aligned} \mathbf{W}\_{2} &= \left[\mathrm{E}\left(\Delta\mathbf{h}\_{2}\Delta\mathbf{h}\_{2}^{\mathrm{T}}\right)\right]^{-1} \\ &= \left[\mathbf{B}\_{2}\mathrm{cov}(\boldsymbol{\theta})\mathbf{B}\_{2}^{\mathrm{T}} + \mathbf{D}\_{2}\mathrm{cov}\left(\Delta\mathbf{s} - \Delta\mathbf{\hat{s}}\right)\mathbf{D}\_{2}^{\mathrm{T}} + \mathbf{B}\_{2}\left(\mathbf{G}\_{1}^{\mathrm{T}}\mathbf{W}\_{1}\mathbf{G}\_{1}\right)^{-1}\mathbf{G}\_{1}^{\mathrm{T}}\mathbf{W}\_{1}\mathbf{D}\_{1}\mathrm{cov}\left(\Delta\mathbf{s} - \Delta\mathbf{\hat{s}}\right)\mathbf{D}\_{2}^{\mathrm{T}} \right. \\ &\quad \left. + \mathbf{D}\_{2}\mathrm{cov}\left(\Delta\mathbf{s} - \Delta\mathbf{\hat{s}}\right)\mathbf{D}\_{1}^{\mathrm{T}}\mathbf{W}\_{1}\mathbf{G}\_{1}\left(\mathbf{G}\_{1}^{\mathrm{T}}\mathbf{W}\_{1}\mathbf{G}\_{1}\right)^{-1}\mathbf{B}\_{2}^{\mathrm{T}}\right]^{-1} \end{aligned} \tag{81}$$

However, as shown in (81), the unknown target position is required to compute $\mathbf{W}\_2$. To circumvent this dilemma, we first exploit the target position estimate contained in $\boldsymbol{\theta}$ to form $\mathbf{W}\_2$ and use (80) to estimate the target position; the estimated target position is then used to update $\mathbf{W}\_2$ for another repetition.
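Putting the two stages together, the following end-to-end sketch uses identity weighting in both stages (a simplification of (68) and (81) that affects only efficiency, not consistency) with a hypothetical geometry and noise level:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.05                                           # illustrative BR noise std

u = np.array([120.0, -45.0, 60.0])
S_t = np.array([[0.0, 0.0, 10.0], [300.0, 100.0, 0.0],
                [-250.0, 200.0, 30.0], [150.0, -350.0, 40.0]])
S_r = np.array([[200.0, -150.0, 0.0], [-300.0, -100.0, 20.0],
                [50.0, 300.0, 0.0], [-100.0, -250.0, 10.0], [350.0, 250.0, 15.0]])
M, N = len(S_t), len(S_r)

R_tr = np.linalg.norm(S_t[:, None] - S_r[None], axis=2)
r = (np.linalg.norm(u - S_t, axis=1)[:, None] + np.linalg.norm(u - S_r, axis=1)[None]
     - R_tr + sigma * rng.standard_normal((M, N)))

# Stage 1: estimate theta = [u; R_t,1..M] from (56)-(67), identity weight.
G1 = np.zeros((M * N, 3 + M)); h1 = np.zeros(M * N)
for m in range(M):
    for n in range(N):
        k = m * N + n
        G1[k, :3] = 2 * (S_t[m] - S_r[n])
        G1[k, 3 + m] = 2 * (r[m, n] + R_tr[m, n])
        h1[k] = (r[m, n] + R_tr[m, n])**2 + S_t[m] @ S_t[m] - S_r[n] @ S_r[n]
theta = np.linalg.lstsq(G1, h1, rcond=None)[0]

# Stage 2: fuse theta(1:3) with the nuisance ranges via (74)-(80), identity weight.
G2 = np.vstack([np.eye(3), 2 * S_t])
h2 = np.concatenate([theta[:3],
                     theta[:3] @ theta[:3] - theta[3:]**2 + np.sum(S_t**2, axis=1)])
u_hat = np.linalg.lstsq(G2, h2, rcond=None)[0]         # eq. (80) with W2 = I

assert np.linalg.norm(u_hat - u) < 2.0                 # final position estimate
```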

From the WLS theorem, the covariance matrix of **u** can be approximated, given sufficiently small BR measurement noise and transmitter/receiver position error, as

$$\text{cov}(\mathbf{u}) = \left(\mathbf{G}\_2^T \mathbf{W}\_2 \mathbf{G}\_2\right)^{-1} \tag{82}$$
