*3.2. Analysis and Comparison of Simulation Results*

Figure 8 shows the locations of the transmitter stations and the receiver on the map. To clearly show the influence of (*ϕ*, *λ*) on *F*(**x**), the contour of *F*(*ϕ*, *λ*) is shown in Figure 9, where *δt* is set to a known value. The four black contours in Figure 9 enclose, respectively, the solution sets of the four observation equations. The shape of the contours shows the non-convexity of *F*(*ϕ*, *λ*), which is mainly related to the geometry of the transmitting stations. We take point A as the test point and select the four positions listed in Table 3 as the initial points. Since *δt*<sup>0</sup> has no effect on the optimization process, it is always set to 0 in the subsequent simulations.

**Figure 8.** Distribution of station locations.

**Figure 9.** Contour plot of *F*(*ϕ*, *λ*).
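For illustration, the following minimal Python sketch generates a contour plot of this kind. It assumes *F* is the sum of squared pseudorange residuals with *δt* held fixed; the station coordinates and test point are hypothetical placeholders, not the layout of Figure 8, and a spherical-Earth great-circle range stands in for the actual propagation model.

```python
import numpy as np
import matplotlib.pyplot as plt

C = 299_792.458  # speed of light, km/s

# Hypothetical transmitter coordinates (lat, lon in degrees); the paper's
# actual station layout is the one shown in Figure 8.
stations = np.array([[32.0, 105.0], [30.5, 102.0], [33.5, 103.0], [29.0, 104.5]])

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance (km); a spherical-Earth stand-in for the
    geodesic range used in the paper."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return R * np.arccos(np.clip(c, -1.0, 1.0))

# Simulate pseudoranges for a hypothetical test point A with clock bias dt (s).
lat_A, lon_A, dt = 31.0, 103.5, 1e-5
rho = great_circle_km(lat_A, lon_A, stations[:, 0], stations[:, 1]) + C * dt

def F(lat, lon, dt_fixed=dt):
    """Sum of squared pseudorange residuals, with delta-t held fixed as in Figure 9."""
    d = great_circle_km(lat, lon, stations[:, 0], stations[:, 1])
    return np.sum((rho - d - C * dt_fixed) ** 2)

lats = np.linspace(28.0, 34.0, 200)
lons = np.linspace(100.0, 107.0, 200)
Z = np.array([[F(la, lo) for lo in lons] for la in lats])
plt.contour(lons, lats, np.log10(Z), levels=30)  # log scale compresses the dynamic range
plt.xlabel("longitude (deg)")
plt.ylabel("latitude (deg)")
plt.show()
```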

Since the initialization problem of eLoran has not been studied in the literature, there is a lack of competing algorithms for performance comparison. To this end, we select three commonly used nonlinear least squares methods, namely, the NR algorithm, the Levenberg–Marquardt (LM) algorithm, and the trust region Dogleg algorithm, to compare with the SBB algorithm. The NR algorithm is widely used in various pseudorange positioning scenarios. Its advantages are that the calculation is simple and that it converges quickly when the initial value is suitable; it is also the only algorithm mentioned in the existing literature for solving the eLoran localization problem. The LM algorithm combines the steepest descent method and Newton's method and is currently widely used for nonlinear least squares, offering the stability of the steepest descent method together with the fast convergence of Newton's method. It is the benchmark among derivative-based algorithms for nonlinear least squares and is widely used in a variety of scenarios; we therefore use it to represent the class of methods that solve the eLoran localization problem using first-order and second-order derivative information.
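As a concrete illustration, the sketch below implements the plain Gauss–Newton update that typically realizes the NR method for least squares, with a forward-difference Jacobian; `residual` is assumed to map a state **x** = (*ϕ*, *λ*, *δt*) to the vector of pseudorange residuals, as in the contour sketch above.

```python
import numpy as np

def num_jac(residual, x, h=1e-6):
    """Forward-difference Jacobian; adequate for a sketch."""
    r0 = residual(x)
    J = np.zeros((r0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        J[:, i] = (residual(xp) - r0) / h
    return J

def gauss_newton(residual, x0, tol=1e-8, max_iter=50):
    """Plain Gauss-Newton (the NR-style iteration): no damping and no line
    search, so it converges quickly near a good initial value but can
    diverge, or leave the feasible region, when started far away."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = num_jac(residual, x)
        step = np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```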

The trust region Dogleg algorithm is representative of another large class of methods for nonlinear least squares. Unlike line search algorithms, it first sets the step size (the trust region radius) and then determines the search direction. The advantages of this approach are that it requires no line search when solving complex nonlinear least squares problems and that second-order information about the objective can easily be incorporated even when the objective is ill-conditioned. Together, the above three algorithms represent the three most commonly used strategies for solving nonlinear least squares problems. The results are shown in Table 4.
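An experiment of the kind behind Table 4 can be sketched with off-the-shelf solvers. Below, SciPy's `least_squares` provides the LM implementation (`method="lm"`) and a rectangular-trust-region dogleg variant (`method="dogbox"`) as a stand-in for the classic Dogleg; the stations, test point A, and initial values are illustrative placeholders, not the paper's actual coordinates.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792.458  # speed of light, km/s
# Hypothetical station coordinates (lat, lon in degrees).
stations = np.array([[32.0, 105.0], [30.5, 102.0], [33.5, 103.0], [29.0, 104.5]])

def ranges(lat, lon, R=6371.0):
    """Great-circle ranges (km) from (lat, lon) to every station."""
    p1, p2 = np.radians(lat), np.radians(stations[:, 0])
    dlon = np.radians(stations[:, 1] - lon)
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return R * np.arccos(np.clip(c, -1.0, 1.0))

truth = np.array([31.0, 103.5, 1e-5])            # test point A (hypothetical)
rho = ranges(truth[0], truth[1]) + C * truth[2]  # simulated pseudoranges

def residual(x):
    """Pseudorange residuals for the state x = (lat, lon, dt)."""
    return rho - ranges(x[0], x[1]) - C * x[2]

# One initial point near A and one far away, mimicking the Table 4 setup.
for x0 in [(30.8, 103.3, 0.0), (33.5, 106.5, 0.0)]:
    for method in ("lm", "dogbox"):
        sol = least_squares(residual, x0, method=method)
        print(method, x0, "->", np.round(sol.x[:2], 4))
```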


**Table 4.** Convergence results of conventional algorithms under different initial points.

The data in red are the incorrect results, and the data in black are the correct results.

In Table 4, the results of every algorithm except the SBB algorithm can be incorrect, depending on the selection of the initial value. Both the LM and Dogleg algorithms converge to (31.2167, 103.7164), which is the local minimum L shown in Figure 9. When the initial point is close to point A, both the LM and Dogleg algorithms converge correctly; when the initial point is close to the local minimum L, both converge incorrectly. The erroneous results of the NR algorithm can even fall outside the feasible region *D*, mainly because, lacking a line search, the NR iteration can diverge. These results verify that a global optimization algorithm is needed to solve the eLoran pseudorange positioning problem when a good initial value is not available.

We now analyze why the SBB algorithm always converges to the correct result, regardless of the initial value.

Consider first the shrink step of the SBB algorithm. Without loss of generality, we set *Q* in Equation (20) to 3000; the reduced feasible region *Ds* is shown as the red box in Figure 10. *Ds* is significantly smaller than *D*, which reduces the subsequent computation of the SBB algorithm.
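The shrink test itself is defined by Equation (20); the sketch below shows one plausible realization, under the assumption that a candidate cell is kept exactly when a cheap lower bound on *F* over the cell does not exceed *Q*, and that *Ds* is the bounding box of the surviving cells. The cell-wise bound `F_lower` is left abstract here (interval arithmetic, or the minimum over a few sample points, would serve).

```python
import numpy as np

def shrink(F_lower, D, Q=3000.0, n=20):
    """Shrink the feasible region D = (lat_min, lat_max, lon_min, lon_max):
    split D into an n-by-n grid of cells, discard every cell whose lower
    bound on F exceeds Q, and return the bounding box Ds of the survivors.
    F_lower(cell) must underestimate F on the cell."""
    lat = np.linspace(D[0], D[1], n + 1)
    lon = np.linspace(D[2], D[3], n + 1)
    kept = np.array([(lat[i], lat[i + 1], lon[j], lon[j + 1])
                     for i in range(n) for j in range(n)
                     if F_lower((lat[i], lat[i + 1], lon[j], lon[j + 1])) <= Q])
    # Bounding box of all surviving cells (assumes at least one survives).
    return (kept[:, 0].min(), kept[:, 1].max(), kept[:, 2].min(), kept[:, 3].max())
```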

To observe the global optimization performance of the SBB algorithm more clearly, Tables 5 and 6 show the iterative branch-and-bound process under two different initial points.

**Figure 10.** Contour lines of *F* over the feasible regions *D* and *Ds*.



**Table 6.** Iterative process with initial value (97.4, 40.1, 0).


It can be seen from Tables 5 and 6 that, as the feasible region is successively shrunk and subdivided, the SBB algorithm converges toward the global minimum.
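The branch-and-bound stage producing these iterations can be summarized by the best-first skeleton below. This is a generic sketch, not the paper's exact rules: boxes over (*ϕ*, *λ*) are bisected along their longer edge, pruned against the incumbent upper bound, and the loop stops once the bound gap closes. `F` evaluates the objective at a box center (with *δt* fixed or solved locally), and `F_lower` is the abstract cell-wise lower bound from the previous sketch.

```python
import heapq

def center(box):
    a, b, c, d = box
    return ((a + b) / 2.0, (c + d) / 2.0)

def split(box):
    """Bisect the box along its longer edge (branch step)."""
    a, b, c, d = box
    if b - a >= d - c:
        m = (a + b) / 2.0
        return [(a, m, c, d), (m, b, c, d)]
    m = (c + d) / 2.0
    return [(a, b, c, m), (a, b, m, d)]

def sbb_branch_and_bound(F, F_lower, Ds, tol=1e-6, max_branches=50):
    """Best-first branch and bound over the shrunken region Ds. Returns the
    incumbent position and the number of branch iterations N; Tables 5 and 6
    suggest one or two iterations already suffice in practice."""
    best_val, best_box = F(center(Ds)), Ds
    heap = [(F_lower(Ds), Ds)]
    for n_branches in range(1, max_branches + 1):
        if not heap:
            break
        lb, box = heapq.heappop(heap)
        if best_val - lb < tol:           # bound gap closed: incumbent is global
            return center(best_box), n_branches
        for child in split(box):          # branch
            child_lb = F_lower(child)
            if child_lb <= best_val:      # bound: keep only boxes that can improve
                heapq.heappush(heap, (child_lb, child))
                val = F(center(child))
                if val < best_val:        # update the incumbent
                    best_val, best_box = val, child
    return center(best_box), max_branches
```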

To further verify the performance of the SBB algorithm, we designed the following simulation experiment: we randomly selected 1000 locations within *Ds* as test points and used the above-mentioned algorithms to solve for each of them. These locations were chosen to keep the GDOP as consistent as possible, so that GDOP does not confound the positioning accuracy. For each algorithm, the initial value was drawn randomly from *D* and from *Ds*. A solution was counted as successful when the positioning error fell below a set threshold. The success rates of the algorithms over these 1000 positions are shown in Figure 11.
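The protocol of this experiment can be condensed as follows. `solve`, `sample_test_point`, `sample_initial`, and `position_error_km` are hypothetical stand-ins for one of the solvers above, the uniform draws from *Ds* and from *D* or *Ds*, and a geodesic error measure; the 1 km threshold is likewise illustrative, since the section does not state the value used.

```python
def success_rate(solve, sample_test_point, sample_initial, position_error_km,
                 threshold_km=1.0, trials=1000):
    """Monte Carlo protocol behind Figure 11: draw a test location, simulate
    its pseudoranges, solve from a random initial value, and count a success
    when the position error falls below the threshold."""
    hits = 0
    for _ in range(trials):
        truth = sample_test_point()              # random location within Ds
        estimate = solve(truth, sample_initial())
        hits += position_error_km(estimate, truth) < threshold_km
    return hits / trials
```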

**Figure 11.** Statistical chart of success rate of different algorithms. (**a**) *x*<sup>0</sup> ∈ *D*; (**b**) *x*<sup>0</sup> ∈ *Ds*.

As shown in Figure 11, the LM and Dogleg algorithms both achieve a success rate of 55% in Figure 11b, while in Figure 11a their success rates are only 25% and 30%, respectively, showing that both depend strongly on the choice of initial value. The NR algorithm has the lowest success rate, solving only 5% and 35% of cases, respectively, under the two initial value selection schemes; its poor performance stems mainly from its lack of a line search compared with the LM and Dogleg algorithms. The SBB algorithm succeeds in 99.9% and 99.5% of cases under the two schemes, demonstrating good global optimization performance. The rare SBB failures occur when *x*<sup>0</sup> is selected very close to the local minimum, in which case the algorithm converges to that local minimum. Because points near the local minimum are less likely to be drawn from *D* than from *Ds*, the success rate is higher under *x*<sup>0</sup> ∈ *D* than under *x*<sup>0</sup> ∈ *Ds*. To avoid choosing a point near the local minimum as the initial value for the SBB algorithm, one can pick a point far away from all possible solutions, such as (0, 0).

Computational complexity also affects the performance of an algorithm. The earlier complexity analysis of the SBB algorithm showed that the number of branches, *N*, has an important impact on its complexity. Figure 12 shows the distribution of the number of branch iterations, *N*, the SBB algorithm required to complete each positioning solution in the 1000 simulation experiments. The SBB algorithm needs at most two branch iterations to complete the solution, and in most cases only one. Comparing Figure 12a,b, the probability of requiring two branches is 46% in the latter, much higher than the 24% of the former. This is because, when the initial value is drawn from *Ds*, there is a higher probability of selecting a point close to the local minimum, so that two branches are needed to reach the global optimum.
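Tallying the branch counts from such a run is straightforward, as in the sketch below; `run_sbb()` is a hypothetical one-trial wrapper that draws a test point and an initial value and returns the estimate together with the branch count *N*, as `sbb_branch_and_bound` above does.

```python
from collections import Counter

# Tabulate the shares plotted in Figure 12: how many of the 1000 runs
# needed N = 1, 2, ... branch iterations. run_sbb() is a hypothetical
# one-trial wrapper returning (estimate, N).
counts = Counter(run_sbb()[1] for _ in range(1000))
print({N: f"{100 * c / 1000:.0f}%" for N, c in sorted(counts.items())})
```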

**Figure 12.** Pie chart of number of branch iterations *N*. (**a**) *x*<sup>0</sup> ∈ *D*; (**b**) *x*<sup>0</sup> ∈ *Ds*.
