Article

A Hybrid Bald Eagle Search Algorithm for Time Difference of Arrival Localization

1 Electrical Engineering College, Guizhou University, Guiyang 550025, China
2 Power China Guizhou Engineering Co., Ltd., Guiyang 550001, China
3 Electrical Engineering and Intelligent College, Guizhou University, Guiyang 550025, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 5221; https://doi.org/10.3390/app12105221
Submission received: 18 April 2022 / Revised: 18 May 2022 / Accepted: 19 May 2022 / Published: 21 May 2022
(This article belongs to the Special Issue Local Positioning Systems)

Abstract

The technology of wireless sensor networks (WSNs) is developing rapidly, and it has been applied in diverse fields such as medicine, environmental control, climate prediction, and monitoring. Localization is one of the critical problems in WSNs. Time difference of arrival (TDOA) is widely used to locate targets because its model is simple and easy to implement. To address the large deviation and low accuracy that arise when solving the nonlinear TDOA equations, many metaheuristic algorithms have been proposed. The available literature shows that swarm intelligence metaheuristics have achieved remarkable results in this domain. The aim of this paper is to further improve the solution of the TDOA localization problem. To achieve this goal, we propose a hybrid bald eagle search (HBES) algorithm, which improves the performance of the bald eagle search (BES) algorithm by using strategies such as chaotic mapping, Lévy flight, and opposition-based learning. To evaluate its performance, we compared HBES with the particle swarm optimization algorithm, the butterfly optimization algorithm, the COOT algorithm, the Grey Wolf optimizer, and the sine cosine algorithm on 23 test functions. The comparison results show that the proposed algorithm has better search performance than these reputable metaheuristic algorithms. Additionally, the HBES algorithm was used to solve the TDOA localization problem by simulating deployments with different numbers of base stations under noisy conditions. The results show that the proposed method obtains more consistent and precise locations of unknown target nodes in TDOA localization than the other methods.

1. Introduction

A wireless sensor network (WSN) consists of numerous sensor nodes with signal processing, communication, and other functions deployed in the area to be monitored. These nodes connect, sense, and process information through wireless communication technology to form a multi-hop self-organizing network [1,2,3,4,5], which has the characteristics of low networking cost, self-organizing ability, and environmental adaptability. A wireless sensor node usually consists of sensing units, processing units, transceivers, and power units; its function can be summarized by three keywords: communication, perception, and calculation. WSNs have broad applications in military, medical, transportation, and environmental monitoring [6].
Location information is an integral part of data sensing in most WSN applications, such as monitoring the extent of the enemy's area during military activities, determining the exact location of the fire source during forest fires, and quickly identifying the affected area during earthquake disasters [7,8]. Knowledge of node locations can improve the efficiency of routing protocols and enable automatic configuration of the network topology to improve coverage quality [9]. The location information of nodes can be obtained by deploying some nodes equipped with global positioning system (GPS) or BeiDou navigation satellite system (BDS) modules in the WSN. The more nodes with known locations, the more accurately the location of an information source can be determined, but such devices are energy-intensive, large, and costly, and are not suitable for deployment in all nodes. Therefore, it is necessary to develop a precise and effective localization algorithm for WSNs.
The main contributions of this paper are as follows:
  • To address the shortcomings of the TDOA localization algorithm in WSNs, which is susceptible to non-line-of-sight errors and yields poor localization results, a hybrid bald eagle search (HBES) algorithm is proposed to replace the mathematical analysis method for estimating the unknown node coordinates. It avoids the matrix inversion and differentiation required by analytical methods, which can render the equations unsolvable.
  • To enhance the optimization capability of the bald eagle search (BES) algorithm, we use chaotic mapping, Lévy flight, and opposition-based learning strategies to improve population quality, and we hybridize the sine cosine algorithm into BES to improve its performance.
  • A localization model combining the TDOA algorithm and HBES algorithm is developed, and experiments are conducted to analyze the performance of the algorithm and the localization effect.
The rest of this article is arranged as follows. In Section 2, we introduce some existing positioning technologies in WSNs. In Section 3, we describe our model for TDOA localization in detail. In Section 4, we make a comparison between our algorithm and other swarm intelligence algorithms for assessing the performance of the proposed algorithm, and we use the localization algorithm in localization experiments to analyze the actual performance of the proposed algorithm. Section 5 concludes the paper and predicts trends in applications.

2. Background and Related Work

The positioning of a WSN usually involves two kinds of nodes. The first is the anchor node with known position data; the other is the target node without location information [10,11,12]. The two-dimensional or three-dimensional coordinates of the target node are estimated by localization algorithms that combine the known anchor coordinates with communication between anchors and targets. Range-based localization methods first obtain distance information and then solve for the unknown node coordinates; they include TDOA, time of arrival (TOA), received signal strength indication (RSSI), etc., with the coordinates then computed using methods such as least squares and trilateration. Range-free localization methods generally use connectivity information between unknown nodes and landmarks to obtain estimated distances and then estimate the unknown node coordinates; they include the Distance Vector-Hop (DV-Hop) algorithm, centroid localization algorithms, and Approximate Point-in-Triangulation Test (APIT) localization algorithms [13,14]. Both types of algorithms have their own applications and need to be selected according to the actual situation.
The swarm intelligence algorithm is a kind of metaheuristic algorithm. The concept of "swarm intelligence" was first proposed by Beni and Wang in 1989 [15]. Researchers have proposed many swarm intelligence algorithms to solve practical optimization problems by observing the unique behaviors and survival modes that birds, animals, and other biological groups formed during evolution, and by imitating those group behaviors. The ant colony algorithm [16,17,18] proposed in 1991 and the particle swarm optimization algorithm [19] proposed in 1995 were among the earliest swarm intelligence optimization algorithms. Given the advantages of distributed computing, parallel search, and robustness in solving optimization problems, swarm intelligence has been used to solve problems such as routing, coverage, and localization in sensor networks.
TDOA [20,21] localization methods have simple models and high localization accuracy and are widely applied in WSN localization scenarios. The Taylor-series estimation method [22] is one of the classic TDOA localization methods, but it is prone to non-convergence when the initial value is poorly chosen. In addition, two-step least-squares, weighted least-squares, and constrained least-squares algorithms [23,24] are also common for solving localization problems. These algorithms need no iteration, are computationally light and fast, and can approach the Cramér–Rao Lower Bound (CRLB) when the noise is small, making them among the more established of the commonly used algorithms. However, least-squares algorithms involve a matrix inversion, and more observation stations must be used to keep the matrices nonsingular.
Swarm intelligence optimization algorithms, which simulate the way organisms search for food, are widely applicable to finding optimal solutions of complex functions and provide new ideas for the TDOA localization problem. Kenneth et al. [25] proposed using the Particle Swarm Optimization (PSO) algorithm to solve the TDOA localization problem; it needs no initial values and can attain better localization precision than the least-squares method. Chen Tao et al. [26] proposed the Salp Swarm algorithm for the passive time difference localization problem; it has few control parameters and a simple model, which also suits TDOA localization. Swarm intelligence optimization algorithms omit the complex analytical computation: they initialize a large number of random points in the observation range, evaluate the merit of each point through a fitness function built from the measurements, and then update the point positions according to the algorithm's update formulas to find the position with the best fitness. This avoids matrix inversion and overcomes the disadvantage of least-squares-type algorithms, which require many observation stations.
However, swarm intelligence algorithms converge slowly and tend to fall into local optima, so additional strategies are needed to enhance their optimization capability. The application of improved particle swarm and genetic algorithms to TDOA localization is described in the literature [27,28]. Both improve the original algorithm's search accuracy, for example by introducing chaotic mapping to improve the initial population quality and thereby the global search capability of the particle swarm algorithm, and by using adaptive crossover and mutation rates to optimize the local search capability of the genetic algorithm.
The reported approaches have improved the performance of swarm intelligence algorithms to some extent, and they have been used to solve practical problems. On the other hand, some swarm intelligence algorithms still suffer from slow convergence and low accuracy in finding the optimal individual. In addition, existing work on swarm intelligence for TDOA localization considers only factors such as algorithm accuracy and stability, two-dimensional scenarios, and specific base station deployments; these methods are not closely tied to practical applications.
According to the no-free-lunch (NFL) theorem [29], the results of any swarm intelligence algorithm are uncertain across different optimization problems, and no single swarm intelligence algorithm can optimize all engineering problems. In addition, different swarm intelligence algorithms require different numbers of control parameters, and numerous control parameters increase both the complexity of the algorithm and the workload of tuning. Therefore, it is important to find a swarm intelligence algorithm well suited to the TDOA localization problem.
We address these shortcomings by proposing an effective HBES-based method to improve the accuracy of TDOA localization. A localization model combining TDOA localization with the HBES algorithm is designed. The search strategy of the BES algorithm is improved using strategies such as chaotic mapping, Lévy flight, and opposition-based learning, which help the algorithm avoid local optima. The effects of population size and iteration count on the results are analyzed. In two-dimensional (2D) and three-dimensional (3D) scenarios, we analyze random base station deployment, noise effects, and the impact of the number of base stations on TDOA localization to better address practical problems.

3. Proposed Localization Algorithm

3.1. TDOA Localization Algorithm

This section mainly describes the TDOA positioning method and its model combined with the swarm intelligence algorithm.

3.1.1. TDOA Positioning Principle

In order to achieve localization, nodes in the WSN can be divided into two types: beacon nodes and unknown nodes. Beacon nodes, also called anchor nodes or reference points, have known location information; unknown nodes, also called ordinary nodes, do not. The principle of TDOA localization is shown in Figure 1: an unknown node broadcasts a signal to the surrounding anchor nodes, which receive it at slightly different times, and the distance differences from the unknown node to the anchor nodes are calculated from these time differences. For example, if the distance difference between the unknown node MS and the anchor nodes BS1 and BS2 is known, MS lies on a hyperbola with BS1 and BS2 as focal points; similarly, MS lies on a hyperbola with BS2 and BS3 as focal points. The coordinates of the unknown node can then be determined by solving for the intersection of the hyperbolas using the known conditions.

3.1.2. Location Model Based on Swarm Intelligence Algorithm

Taking a two-dimensional scenario as an example, suppose there are M anchor nodes distributed in the field. The coordinates of anchor node i and the unknown node are $(X_i, Y_i)$ and $(x, y)$, respectively. The measured distance $d_i$ between anchor node i and the unknown node is:

$$d_i = \sqrt{(X_i - x)^2 + (Y_i - y)^2} + \varepsilon_i \tag{1}$$
Taking anchor node 1 as the main reference node, $d_{i,1}$ is the measured difference between the unknown node's distances to anchor node i and to anchor node 1:

$$d_{i,1} = d_i - d_1 + \varepsilon_{i,1} \tag{2}$$

$$d_{i,1} = \sqrt{(X_i - x)^2 + (Y_i - y)^2} - \sqrt{(X_1 - x)^2 + (Y_1 - y)^2} + \varepsilon_{i,1} \tag{3}$$

where $d_{i,1}$ is calculated from the time difference between the two anchor nodes receiving a broadcast from the unknown node, and $\varepsilon_{i,1}$ is the measurement noise. Equation (3) places the unknown node on a particular hyperbola; at least three anchor nodes are required to form the system of equations in the two-dimensional plane, and at least four in three-dimensional space. Converting Equation (3) to matrix form gives:
$$\Delta D = D - D_1 + N \tag{4}$$

where

$$\Delta D_{(m-1)\times 1} = [d_{2,1}, d_{3,1}, \ldots, d_{m,1}]^T, \quad D_{(m-1)\times 1} = [d_2, d_3, \ldots, d_m]^T,$$
$$D_{1,(m-1)\times 1} = [d_1, d_1, \ldots, d_1]^T, \quad N_{(m-1)\times 1} = [\varepsilon_{2,1}, \varepsilon_{3,1}, \ldots, \varepsilon_{m,1}]^T$$
Assuming the $\varepsilon_{i,1}$ are independent Gaussian white noise variables with zero mean and known variance $\delta^2$, the maximum likelihood method is used to determine the location $(x, y)$ of the unknown node. The likelihood function is:

$$\prod_{i=2}^{M} \frac{1}{\sqrt{2\pi}\,\delta} \exp\!\left[-\frac{(d_{i,1} - d_i + d_1)^2}{2\delta^2}\right] = \left[\frac{1}{\sqrt{2\pi}\,\delta}\right]^{M-1} \exp\!\left[-\frac{(\Delta D - D + D_1)^T(\Delta D - D + D_1)}{2\delta^2}\right] \tag{5}$$

Finding the coordinates that maximize the likelihood function of Equation (5) can be transformed into solving the objective function of Equation (6):

$$(x, y) = \arg\min\left[(\Delta D - D + D_1)^T(\Delta D - D + D_1)\right] \tag{6}$$

$$\mathrm{fitness} = (\Delta D - D + D_1)^T(\Delta D - D + D_1) \tag{7}$$
Solving the nonlinear Equation (6) by general methods incurs high computational cost, which is inconvenient in practical problems. After transforming Equation (6) into the objective function of Equation (7), this paper proposes the HBES algorithm, which obtains the optimal solution of the objective function by finding the individual with the best fitness.
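To make the objective concrete, the following is a minimal NumPy sketch (not the authors' code) of the fitness in Equation (7): the squared residual between the measured range differences and those implied by a candidate point. The function name and array layout are our own illustrative choices.

```python
import numpy as np

def tdoa_fitness(point, anchors, measured_diffs):
    """Fitness as in Equation (7): squared residual between measured and
    candidate range differences relative to the reference anchor.

    point          : candidate (x, y) coordinates of the unknown node
    anchors        : (M, 2) array of anchor coordinates; row 0 is anchor 1
    measured_diffs : (M-1,) measured range differences d_{i,1}, i = 2..M
    """
    d = np.linalg.norm(anchors - point, axis=1)  # distances d_i to each anchor
    model_diffs = d[1:] - d[0]                   # candidate d_i - d_1
    residual = measured_diffs - model_diffs      # (ΔD - D + D_1)
    return float(residual @ residual)
```

A swarm intelligence optimizer then simply minimizes `tdoa_fitness` over candidate points inside the deployment area; the minimizer of the noiseless fitness is the true node position.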

3.2. Hybrid Bald Eagle Search Optimization Algorithm

3.2.1. Bald Eagle Search Optimization Algorithm

The bald eagle search algorithm [30] is a new metaheuristic algorithm inspired by the predatory behavior of the bald eagle. Bald eagles use multiple attack methods and assess the energy cost of a hunting action, the energy contained in the prey, and the success rate across habitats. They follow an intelligent strategy of selecting an area rich in prey, searching the selected area, and capturing the prey at the right time, and they also exploit wind speed and stormy air to their advantage. A bald eagle usually chooses a search area spatially close to the previous one, and each of its movements is determined by the previous movements. The BES algorithm can accordingly be divided into three stages: selecting the search space, searching the space for prey, and swooping on the prey.
Step 1: Selecting the search space. The bald eagle randomly searches a large area for prey, deciding whether the area is suitable for hunting according to the concentration of prey and its own experience. The position update equation is:

$$P_{i,\mathrm{new}} = P_{best} + \alpha \times r\,(P_{mean} - P_i) \tag{8}$$

where $\alpha$ models the bald eagle's own experience and is a parameter controlling the position variation, ranging over (1.5, 2); r is a random number in (0, 1); $P_{best}$ is the best-adapted position among the bald eagles; $P_{mean}$ is the average position of the population; and $P_i$ is the position of bald eagle i.
Step 2: Searching the space for prey. The bald eagle searches in and around the center of the selected space, moving in different directions within a spiral to speed up the search, which is expressed as:

$$P_{i,\mathrm{new}} = P_i + y(i) \times (P_i - P_{i+1}) + x(i) \times (P_i - P_{mean}) \tag{9}$$

$$x(i) = \frac{x_r(i)}{\max(|x_r|)}, \quad y(i) = \frac{y_r(i)}{\max(|y_r|)} \tag{10}$$

$$x_r(i) = r(i)\sin(\theta(i)), \quad y_r(i) = r(i)\cos(\theta(i)) \tag{11}$$

$$\theta(i) = a\pi \cdot rand, \quad r(i) = \theta(i) + R \cdot rand \tag{12}$$

where $\theta(i)$ is the polar angle and $r(i)$ the polar radius; a and R are parameters governing the spiral search pattern, varying over (0, 5) and (0.5, 2), respectively; rand is a random number in (0, 1); $x(i)$ and $y(i)$ are the bald eagle's coordinates converted to a plane rectangular coordinate system; and $P_{i+1}$ is the position of bald eagle i + 1 before the population update.
Step 3: Swooping on the prey. The bald eagle finds the prey and launches a swoop toward it, attacking from the optimal position in the search space while all points move toward the best point. The position update is:

$$P_{i,\mathrm{new}} = rand \times P_{best} + x_1(i) \times (P_i - c_1 P_{mean}) + y_1(i) \times (P_i - c_2 P_{best}) \tag{13}$$

$$x_1(i) = \frac{x_r(i)}{\max(|x_r|)}, \quad y_1(i) = \frac{y_r(i)}{\max(|y_r|)} \tag{14}$$

$$x_r(i) = r(i)\sinh(\theta(i)), \quad y_r(i) = r(i)\cosh(\theta(i)) \tag{15}$$

$$\theta(i) = a\pi \cdot rand, \quad r(i) = \theta(i) \tag{16}$$

where $c_1$ and $c_2$ are the motion intensities of the bald eagle toward the best point and the center point during the dive, ranging over (1, 2).
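The three stages above can be sketched in NumPy as follows. This is an illustrative reading of the update rules, not the authors' implementation; the parameter values (alpha = 2.0, a = 5.0, R = 1.5, c1 = c2 = 1.5) are assumptions chosen from the stated ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_stage(P, P_best, alpha=2.0):
    # Stage 1: move around the best point toward/away from the population mean
    r = rng.random()
    return P_best + alpha * r * (P.mean(axis=0) - P)

def spiral_coords(n, a=5.0, R=1.5, hyperbolic=False):
    # Polar/spiral coordinates shared by stages 2 and 3
    theta = a * np.pi * rng.random(n)
    r = theta if hyperbolic else theta + R * rng.random(n)
    fx, fy = (np.sinh, np.cosh) if hyperbolic else (np.sin, np.cos)
    xr, yr = r * fx(theta), r * fy(theta)
    return xr / np.max(np.abs(xr)), yr / np.max(np.abs(yr))

def search_stage(P):
    # Stage 2: spiral search relative to the neighbor and the mean position
    x, y = spiral_coords(len(P))
    P_next = np.roll(P, -1, axis=0)  # P_{i+1}
    return P + y[:, None] * (P - P_next) + x[:, None] * (P - P.mean(axis=0))

def swoop_stage(P, P_best, c1=1.5, c2=1.5):
    # Stage 3: swoop toward the best point using hyperbolic spiral coordinates
    x1, y1 = spiral_coords(len(P), hyperbolic=True)
    return (rng.random() * P_best
            + x1[:, None] * (P - c1 * P.mean(axis=0))
            + y1[:, None] * (P - c2 * P_best))
```

In the full algorithm each stage's candidate positions are kept only when they improve the fitness, as the pseudocode in Section 3.3 shows.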

3.2.2. Lévy Flight Strategy

Lévy flight has its origins in chaos theory [31]; it is essentially a random walk whose step sizes obey the Lévy distribution, a power-law heavy-tailed distribution:

$$L(s) \sim |s|^{-1-\lambda} \tag{17}$$

where $\lambda$ is the shape index, taking values in (0, 2). Research has shown that the foraging trajectories of many creatures fit the pattern of Lévy flight, and swarm intelligence algorithms are likewise inspired by observing biological behavior. The Lévy flight search strategy takes small steps in most cases with occasional larger jumps, and combining it with a swarm intelligence algorithm helps expand the algorithm's global search capability. The most common method for generating the random step size is the algorithm proposed by Mantegna [32], which has been applied with good results:
$$s = \frac{u}{|v|^{1/\beta}} \tag{18}$$

where $\beta = 1.5$, and u and v follow the normal distributions $N(0, \delta_u^2)$ and $N(0, \delta_v^2)$, with variances computed as:

$$\delta_u = \left[\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\,2^{(\beta-1)/2}}\right]^{1/\beta}, \quad \delta_v = 1 \tag{19}$$
Equation (8) describes the first stage of the BES algorithm, in which the bald eagle performs a wide-ranging search that lays the foundation for the subsequent optimization. There is still room for improvement at this stage: combining it with the Lévy flight strategy improves the algorithm's global search capability and its search efficiency. Equation (8) is therefore improved as follows:

$$P_{i,\mathrm{new}} = P_{best} + \alpha \cdot r\,(P_{mean} - P_i) \cdot Levy \tag{20}$$
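Mantegna's step generator described above can be written compactly as follows; this is a generic sketch of the standard method (function name and the `rng` argument are our own), not code from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(size, beta=1.5, rng=None):
    """Mantegna's algorithm for Lévy-distributed step sizes.

    size : output shape, e.g. (n, dim)
    beta : stability index (the paper uses beta = 1.5)
    """
    rng = rng or np.random.default_rng()
    # sigma_u from the Gamma-function expression; sigma_v = 1
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)  # s = u / |v|^{1/beta}
```

Multiplying the stage-1 displacement by such a step (as in Equation (20)) keeps most moves small while occasionally producing long jumps that help escape local optima.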

3.2.3. Chaos Mapping

The initial population of BES is generated randomly; for a D-dimensional space, the initial population is:

$$X_i = L_b + (U_b - L_b) \cdot rand \tag{21}$$

where $L_b$ and $U_b$ are the lower and upper bounds of the population search range, $X_i$ is bald eagle i, and rand is a random number in (0, 1). The initial population affects the algorithm's ability to find the optimal solution: a uniformly distributed initial population and a wide search range help locate the global optimum. Chaotic maps have been applied to swarm intelligence algorithms such as the grey wolf optimizer and the whale optimization algorithm [33,34]; chaotic mapping increases the diversity of the initial population and speeds up the convergence of the algorithm. The bifurcation diagram of the Tent map is shown in Figure 2. In this paper, the population of the bald eagle algorithm is initialized with the Tent map:

$$x_{i+1} = \begin{cases} 2x_i, & 0 \le x_i \le 0.5 \\ 2(1 - x_i), & 0.5 < x_i \le 1 \end{cases} \tag{22}$$
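A minimal sketch of Tent-map initialization follows; it is our illustrative reading, with the seed value `x0 = 0.37` an assumption (in practice a small random perturbation is often added as well, since the pure tent map can fall into short cycles in floating-point arithmetic).

```python
import numpy as np

def tent_map_init(n, dim, lb, ub, x0=0.37):
    """Initialize an (n, dim) population with a tent-map chaotic sequence
    scaled from [0, 1] to the search bounds [lb, ub]."""
    chaos = np.empty((n, dim))
    x = x0
    for i in range(n):
        for j in range(dim):
            # tent map: 2x on [0, 0.5], 2(1 - x) on (0.5, 1]
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            chaos[i, j] = x
    return lb + (ub - lb) * chaos
```

Compared with uniform random initialization, the chaotic sequence tends to spread individuals more evenly over the bounds, which is the stated motivation for using it here.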

3.2.4. Sine Cosine Algorithm

The second phase of the BES algorithm searches the region selected in the first phase with a spiral flight. If this transition phase increases the search diversity and the exploration of the solution space, it can help the algorithm escape local optima and ultimately reach the best value. The sine cosine algorithm proposed by Mirjalili [35] in 2016 is characterized by fast convergence and few adjustable parameters; it replaces the biological food-search process of swarm intelligence algorithms with the oscillation of sine and cosine functions. Its position update formula is:

$$X_i^{t+1} = \begin{cases} X_i^t + r_1 \sin(r_2)\,|r_3 P_j^t - X_i^t|, & r_4 < 0.5 \\ X_i^t + r_1 \cos(r_2)\,|r_3 P_j^t - X_i^t|, & r_4 \ge 0.5 \end{cases} \tag{23}$$

where t is the current iteration number, $X_i^t$ is particle i at iteration t, and $r_2$, $r_3$, $r_4$ are random numbers ranging over (0, 2π), (0, 2), and (0, 1), respectively. $P_j^t$ is the individual with the best fitness in the current population, and $r_1$ is a parameter that decreases linearly with the iteration number and controls the switch between global search and local exploitation:
$$r_1 = a - \frac{t\,a}{T} \tag{24}$$

where T is the total number of iterations. When $|r_1 \sin(r_2)|$ or $|r_1 \cos(r_2)|$ is greater than 1, the algorithm moves away from $P_j^t$, enhancing global search; when it is less than or equal to 1, the algorithm approaches $P_j^t$, facilitating local exploitation toward the optimal solution. This paper incorporates the oscillation mechanism of the sine cosine algorithm into the second stage of the BES algorithm to balance global search and local exploitation and improve the ability to find the optimal solution. Because $r_1$ decreases linearly, we replace it with a nonlinear parameter r that declines slowly in the early stage to favor global search and quickly in the late stage to favor local exploitation. We also introduce a nonlinear weight factor $\omega$ to adjust the algorithm's dependence on individual information; to match the behavior of r, $\omega$ is smaller in the early stage and larger in the late stage, in line with the strategy of early global search and late local exploitation.
$$r = \left(1 - \left(\frac{t}{T}\right)^{\eta}\right)^{1/\eta} \tag{25}$$

$$\omega = \frac{e^{t/T} - 1}{e - 1} \tag{26}$$

$$X_i^{t+1} = \begin{cases} \omega X_i^t + r \sin(r_2)\,|r_3 X_{mean}^t - X_i^t|, & r_4 < 0.5 \\ \omega X_i^t + r \cos(r_2)\,|r_3 X_{mean}^t - X_i^t|, & r_4 \ge 0.5 \end{cases} \tag{27}$$

Equation (25) is the adjusted $r_1$, Equation (26) is the expression for $\omega$, and Equation (27) is the improved update for the second stage of the bald eagle search algorithm integrated with the sine cosine algorithm. The curves of r, $\omega$, and $r_1$ against the iteration number are shown in Figure 3.
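The modified second-stage update with the nonlinear r and weight ω (Equations (25)-(27)) can be sketched as follows. The exponent `eta = 2.0` is an assumption (the paper leaves η as a shape parameter), and the random ranges follow the sine cosine algorithm convention.

```python
import numpy as np

def sca_update(P, t, T, eta=2.0, rng=None):
    """Second-stage HBES update blending the sine cosine oscillation
    with a nonlinear decay r and a nonlinear weight omega."""
    rng = rng or np.random.default_rng()
    r = (1 - (t / T) ** eta) ** (1 / eta)       # nonlinear decay: slow early, fast late
    omega = (np.e ** (t / T) - 1) / (np.e - 1)  # weight: small early, near 1 late
    mean = P.mean(axis=0)                       # X_mean^t
    n = len(P)
    r2 = rng.uniform(0, 2 * np.pi, n)
    r3 = rng.uniform(0, 2, n)
    r4 = rng.random(n)
    osc = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    step = r * osc[:, None] * np.abs(r3[:, None] * mean - P)
    return omega * P + step
```

Early in the run (small t) the large r and small ω produce wide, weakly anchored moves; late in the run the small r and ω near 1 keep individuals close to their current positions, refining the solution locally.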

3.2.5. Opposition-Based Learning

The opposition-based learning strategy is a mathematical method proposed by Tizhoosh [36] in 2005. The idea is to expand the solution space by generating, for each current solution, its corresponding opposite solution, and then selecting the best individuals for the next iteration. This strategy effectively enlarges the population of a swarm intelligence algorithm and extends its search range. In this paper, the population is expanded by incorporating opposition-based learning in the closing stage of the bald eagle algorithm to further improve its optimization ability. The population expansion formula is:

$$X_{back,i}^t = ub + lb - X_i^t \tag{28}$$

Equation (28) generates the reverse population, where ub and lb are the upper and lower bounds of the population search range, and $X_i^t$ and $X_{back,i}^t$ are the current individual and its opposite, respectively.
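Equation (28) and the subsequent selection can be sketched as below; the function names and the elementwise keep-the-better selection are our illustrative reading of the strategy.

```python
import numpy as np

def opposition(P, lb, ub):
    """Opposite population per Equation (28): x_back = ub + lb - x."""
    return ub + lb - P

def obl_select(P, fitness_fn, lb, ub):
    """Keep, for each individual, the better of itself and its opposite."""
    P_back = opposition(P, lb, ub)
    f = np.apply_along_axis(fitness_fn, 1, P)
    f_back = np.apply_along_axis(fitness_fn, 1, P_back)
    return np.where((f_back < f)[:, None], P_back, P)
```

Because the opposite points mirror the population across the center of the search box, this doubles the sampled region at the cost of one extra fitness evaluation per individual.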

3.2.6. Time Complexity Analysis

Suppose the population size of HBES is n and the dimension of the search space is d. The Tent-map initialization of the population has complexity $O(nd)$. In addition, the maximum number of evaluations $T_{max}$ also influences the time complexity. Computing the fitness values has complexity $O(nd)$, the three position-update stages have complexity $O(nd + n^2)$, sorting the fitness values has complexity $O(n^2)$, and updating the control parameters r and $\omega$ has complexity $O(t_1 + t_2)$. Generating and sorting the reverse population has complexity $O(nd + n^2)$. The time complexity of HBES is therefore:

$$O(\mathrm{HBES}) = O(2nd) + O(T_{max})\,O(4nd + 4n^2) + O(t_1 + t_2) \approx O(2nd) + O(T_{max})\,O(4nd + 4n^2) \tag{29}$$

The original BES algorithm has the same population size n, maximum number of evaluations $T_{max}$, and search-space dimension d as the proposed HBES. Its initialization phase has complexity $O(nd)$, calculating the fitness of all agents is $O(nd)$, and the position update is $O(nd + n^2)$. Consequently, the time complexity of the BES algorithm is:

$$O(\mathrm{BES}) = O(2nd) + O(T_{max})\,O(3nd + 3n^2) \tag{30}$$
In summary, the time complexity of the HBES algorithm is slightly higher than that of the BES algorithm for the same number of iterations, but both are of the same order of magnitude, and the optimization capability of the improved algorithm is superior to BES. The optimization speed, optimization capability, and robustness of the proposed method are demonstrated in the following experiments, covering the 23 benchmark functions and the TDOA problem, respectively.

3.3. TDOA Localization Implementation Process and Pseudo Code Based on HBES

This section applies HBES to the WSN localization scenario, solving the nonlinear equations for TDOA localization to derive the estimated coordinates of the unknown nodes. The specific steps and pseudo-code for the implementation of TDOA localization based on HBES are shown below.
Step 1: First, the TDOA localization model is built by setting the numbers of anchor nodes and unknown nodes and randomly generating their coordinates. Localization then starts, and the distance differences between the nodes are calculated from the communication times between them.
Step 2: Initialize the population using Tent mapping and configure the basic parameters of HBES algorithm such as population dimension dim, the maximum number of iterations T, population size N, population search boundary ub, lb, etc.
Step 3: Calculate the search space of the current population and return the individuals that exceed the search space.
Step 4: Calculate the fitness of each individual in the population and rank them; record the current best fitness value.
Step 5: Introduce the Lévy flight to promote the algorithm's global search ability and perform a global search over the initialized population according to Equation (20); the fitness values of the individuals are then ranked to complete one phase of position updating.
Step 6: Then, the search mechanism of the sine cosine algorithm is added for the local search of the HBES algorithm: the search-space prey phase proceeds according to Equation (27), followed by the swoop phase according to Equation (13).
Step 7: Finally, expand a set of reverse populations and calculate the fitness of the reverse populations, selecting the updated populations and the individuals with the highest fitness in the reverse populations.
Step 8: Repeat steps 3–7 until the number of iterations reaches a maximum or the overall extreme accuracy value meets a predetermined minimum accuracy value to obtain the optimal coordinates for the TDOA localization model. Pseudo code of HBES-based TDOA positioning process is shown in Algorithm 1.
Algorithm 1: Pseudocode of the HBES-based TDOA positioning process
Input: population size N, population dimension dim, upper search bound ub, lower search bound lb, maximum iteration T, number of anchor nodes n1 and of unknown nodes n2
Initialization:
  Initialize the population by Tent mapping, randomly generating the coordinates of the anchor nodes and unknown nodes;
  Calculate the fitness of each individual in the initial population by Equation (7);
  Select the individual with the best fitness in the initial population, P_best^t;
While iterations < T
  // Select-space phase with Lévy flight
  for i = 1 : N
    P_(i,new) = P_best + α × r × (P_mean − P_i) × Levy
    if fitness(P_(i,new)) < fitness(P_best^t)
      P_best^t = P_(i,new)
    end if
  end for
  // Search-in-space phase with the sine cosine mechanism
  for i = 1 : N
    if rand < 0.5
      P_(i,new) = ω × P_i^t + r × sin(r2) × |r3 × P_mean^t − P_i^t|
    else
      P_(i,new) = ω × P_i^t + r × cos(r2) × |r3 × P_mean^t − P_i^t|
    end if
    if fitness(P_(i,new)) < fitness(P_best^t)
      P_best^t = P_(i,new)
    end if
  end for
  // Swooping phase
  for i = 1 : N
    P_(i,new) = rand × P_best + x1(i) × (P_i − c1 × P_mean) + y1(i) × (P_i − c2 × P_best)
    if fitness(P_(i,new)) < fitness(P_best^t)
      P_best^t = P_(i,new)
    end if
  end for
  // Opposition-based learning
  Generate the reverse population P_back by Equation (28);
  Calculate the fitness of the individuals in the reverse population by Equation (7);
  Select the individual with the best fitness in the reverse population, P_back^t;
  if fitness(P_back^t) < fitness(P_best^t)
    P_best^t = P_back^t
  end if
  iterations = iterations + 1
end while
Update the final best P_best^t; P_best^t is the unknown node coordinate calculated by the HBES algorithm
Output: unknown node coordinates P_best^t
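The overall loop of Algorithm 1 can be sketched in Python. This is a simplified illustration, not the authors' implementation: Equations (7), (13), (20), (27), and (28) are not reproduced in this excerpt, so the tent-map parameter, the weight decay, and the generic minimization fitness are assumptions, and the swooping phase is folded into the two updates shown.

```python
import numpy as np
from math import gamma, sin, pi

def tent_map_init(n, dim, lb, ub, seed=0):
    """Chaotic initialization (tent parameter 0.7 is an assumed value)."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.01, 0.99, size=(n, dim))
    for _ in range(10):  # iterate the tent map to spread the points
        z = np.where(z < 0.7, z / 0.7, (1 - z) / 0.3)
    return lb + z * (ub - lb)

def levy(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-stable step lengths."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def hbes(fitness, dim, lb, ub, n=30, iters=200, seed=0):
    """Simplified HBES loop: tent-map init, Levy-widened global search,
    sine-cosine local search, opposition-based learning."""
    rng = np.random.default_rng(seed)
    pop = tent_map_init(n, dim, lb, ub, seed)
    fit = np.apply_along_axis(fitness, 1, pop)
    best = pop[fit.argmin()].copy()
    for t in range(iters):
        w = 1 - t / iters              # inertia-like weight (assumed linear decay)
        mean = pop.mean(axis=0)
        for i in range(n):
            # global (select-space) step widened by a Levy flight
            cand = best + 2.0 * rng.random() * (mean - pop[i]) * levy(dim, rng=rng)
            # sine-cosine local search step
            r1, r2, r3 = 2 * np.pi * rng.random(), 2 * rng.random(), rng.random()
            trig = np.sin(r1) if rng.random() < 0.5 else np.cos(r1)
            cand2 = w * pop[i] + r2 * trig * np.abs(r3 * mean - pop[i])
            for c in (cand, cand2):
                c = np.clip(c, lb, ub)       # bring strays back into the search space
                fc = fitness(c)
                if fc < fit[i]:              # greedy acceptance
                    pop[i], fit[i] = c, fc
        # opposition-based learning: keep the better of each pair
        opp = np.clip(lb + ub - pop, lb, ub)
        ofit = np.apply_along_axis(fitness, 1, opp)
        mask = ofit < fit
        pop[mask], fit[mask] = opp[mask], ofit[mask]
        best = pop[fit.argmin()].copy()
    return best, fit.min()
```

A typical call on the sphere function would be `best, f = hbes(lambda x: float(np.sum(x**2)), dim=5, lb=-100.0, ub=100.0)`.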

4. Experimental Design and Analysis

4.1. Experimental Design

First, the HBES algorithm is compared with PSO [37], GWO [38], COOT [39], BOA [40], HPSOBOA [41], and the original BES algorithm on a set of benchmark test functions. The HBES algorithm is then applied to TDOA localization and compared with PSO, GWO, COOT, and BES. The test functions fall into three categories: single-peaked functions, multi-peaked functions, and fixed-dimensional functions. The parameters of the test functions are shown in Table 1. The optimization accuracy was set to 1 × 10−30.
The experimental environment is the Windows 10 operating system, an AMD R7-5800H CPU@3.20 GHz, 16 GB of memory, and the MATLAB 2020a simulation platform.
Definition 1. 
Optimization success rate (SR): when the difference between the obtained optimal value and the theoretical optimal value is less than the optimization accuracy, the run is considered successful; the success rate is the number of successful runs divided by the total number of runs.
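Definition 1 translates directly into a small helper; the solver output below is hypothetical data used only to exercise the formula.

```python
import numpy as np

def success_rate(best_values, theoretical_optimum, accuracy=1e-30):
    """SR per Definition 1: percentage of runs whose best value lies
    within `accuracy` of the theoretical optimum."""
    best_values = np.asarray(best_values, dtype=float)
    hits = np.abs(best_values - theoretical_optimum) < accuracy
    return 100.0 * hits.mean()

# 50 independent runs of a hypothetical solver on a function with optimum 0:
# 38 runs hit the optimum exactly, 12 stop at 1e-5 (outside the 1e-30 tolerance)
runs = [0.0] * 38 + [1e-5] * 12
print(success_rate(runs, 0.0))  # -> 76.0
```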

4.2. Performance Evaluation

We obtained experimental data from the designed experiments and plotted the data in the form of graphs to facilitate performance analysis of the hybrid bald eagle algorithm against other algorithms.

4.2.1. Comparison of Benchmark Test Function Optimization

In order to ensure the fairness of the optimization experiments, the population size of each algorithm is N = 30, the dimensionality of the single-peaked and multi-peaked test functions is dim = 30, the dimensionality of the fixed-dimensional functions is as required by Table 1, and the maximum number of iterations of each algorithm is Tmax = 500. For each test function, every swarm intelligence algorithm is run 50 times independently, and the mean, standard deviation (std), and success rate (SR) are calculated from the resulting statistics. The results of the optimization experiments are shown in Table 2.
As can be seen from Table 2, for the single-peaked functions, the search results on F1–F4 show that BES, HBES, and HPSOBOA outperform the other algorithms, reaching a search success rate of 100%, while on the remaining single-peaked functions the BES and HBES algorithms differ little. For the multi-peaked functions, the HBES algorithm achieves a much better result on F8; F9 to F12 all converge to around the theoretical optimum, and F13 is slightly better than with the original algorithm. Except for F7, the remaining multi-peaked functions are handled better by BES and HBES. Among the fixed-dimensional functions, the HBES algorithm performs better on F21, F22, and F23. A statistical analysis of the search results in Table 2 yields Table 3, from which it can be seen that the HBES algorithm has the best combined results in terms of search accuracy and stability across the 23 test functions and, in most cases, holds the leading position or is tied for first place with other swarm intelligence algorithms.
In order to further compare the convergence speed and convergence accuracy of each algorithm, the curve of the optimization seeking process of the test function is observed, as shown below.
In Figure 4, on the single-peaked functions F1, F2, and F3, the BES and HBES algorithms have the same search accuracy and converge to the theoretical optimum value of 0. However, Figure 4 also shows that the convergence speed on the single-peaked functions has improved, while the performance on F7 is still not as good as that of the original algorithm. In the tests of the multi-peaked functions, F8 shows the largest improvement in convergence performance and accuracy, while F10 and F13 show a more obvious improvement in convergence speed. In the tests of the fixed-dimensional functions, the convergence speed and accuracy of the HBES algorithm remain in the leading position among these algorithms.

4.2.2. HBES-Based Simulation of TDOA Localization

In order to verify the performance of the HBES algorithm for TDOA localization, HBES is compared with other swarm intelligence algorithms (PSO, GWO, COOT, BES, SSA [26], and CPSO [27]).
The simulation scenario and parameters are set as follows: 8 base stations are deployed in a 50 m × 50 m range at coordinates (0, 0), (0, 25), (0, 50), (25, 50), (50, 50), (50, 25), (50, 0), and (25, 0); the upper limit of the 2D plane coordinates is ub = [50, 50] and the lower limit is lb = [0, 0]; and all swarm intelligence algorithms use the same number of iterations. The root mean square error (RMSE) was selected as the evaluation index for localization accuracy.
$$\mathrm{RMSE}=\sqrt{\frac{1}{P}\sum_{p=1}^{P}\left[(x_p-x_p^t)^2+(y_p-y_p^t)^2\right]}$$
where $(x_p, y_p)$ and $(x_p^t, y_p^t)$ are the true and estimated positions of test point p, respectively, and P is the total number of test points. The performance of the localization algorithm is usually assessed by whether the root mean square error is close to the Cramér–Rao lower bound (CRLB). The Cramér–Rao lower bound specifies the lower limit on the standard deviation that can be achieved by an unbiased estimator, representing the theoretically best estimation accuracy, and is calculated as:
$$\mathrm{CRLB}=\operatorname{trace}\left[\left(G^{\mathrm{T}}Q^{-1}G\right)^{-1}\right]$$
$$G=\begin{bmatrix}\dfrac{x-x_2}{r_2}-\dfrac{x-x_1}{r_1} & \dfrac{y-y_2}{r_2}-\dfrac{y-y_1}{r_1}\\[4pt] \dfrac{x-x_3}{r_3}-\dfrac{x-x_1}{r_1} & \dfrac{y-y_3}{r_3}-\dfrac{y-y_1}{r_1}\\ \vdots & \vdots\\ \dfrac{x-x_N}{r_N}-\dfrac{x-x_1}{r_1} & \dfrac{y-y_N}{r_N}-\dfrac{y-y_1}{r_1}\end{bmatrix}$$
$$Q=\begin{bmatrix}2\sigma_r^2 & \sigma_r^2 & \cdots & \sigma_r^2\\ \sigma_r^2 & 2\sigma_r^2 & \cdots & \sigma_r^2\\ \vdots & \vdots & \ddots & \vdots\\ \sigma_r^2 & \sigma_r^2 & \cdots & 2\sigma_r^2\end{bmatrix}$$
where trace(·) denotes the trace of a matrix, $\sigma_r$ is the standard deviation of the distance noise, $(x, y)$ and $(x_i, y_i)$ are the coordinates of the unknown node and of base station i, respectively, $r_i$ is the distance from the unknown node to base station i, and N is the total number of base stations.
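The RMSE and CRLB above can be computed directly. In the sketch below, the square root of the trace is taken so that the bound is on the same scale as the RMSE; this is an interpretation of the formula, since the text describes the CRLB as a bound on the standard deviation.

```python
import numpy as np

def rmse(est, true):
    """Root mean square position error over P test points."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    return np.sqrt(np.mean(np.sum((est - true) ** 2, axis=1)))

def tdoa_crlb(target, stations, sigma_r):
    """2-D TDOA CRLB with station 1 (index 0) as the reference."""
    r = np.linalg.norm(stations - target, axis=1)
    u = (target - stations) / r[:, None]   # rows: ((x-x_i)/r_i, (y-y_i)/r_i)
    G = u[1:] - u[0]                       # (N-1) x 2 matrix G from the text
    m = len(stations) - 1
    # Q: 2*sigma_r^2 on the diagonal, sigma_r^2 off-diagonal
    Q = sigma_r ** 2 * (np.eye(m) + np.ones((m, m)))
    crlb_trace = np.trace(np.linalg.inv(G.T @ np.linalg.inv(Q) @ G))
    return np.sqrt(crlb_trace)  # root of the trace, comparable to the RMSE
```

For example, with the 8-station layout of the simulation scenario and $\sigma_r$ = 0.5, `tdoa_crlb` returns a small positive bound that an unbiased estimator's RMSE cannot undercut.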

4.2.3. Analysis of the Accuracy of TDOA Positioning Based on HBES

The population size M is set to 100, the number of base stations N is set to 8, and the distance noise standard deviation σr is set to 0.2, 0.4, 0.6, 0.8, and 1.0 in turn; the experimental scenario is shown in Figure 5. The comparison of the positioning accuracy for the different noise standard deviations is given in Table 4. It can be seen that the noise immunity of PSO, SSA, and COOT is poor: when the noise standard deviation is 0.2 their RMSE is close to the CRLB, but as the noise standard deviation increases, the gap between RMSE and CRLB widens. CPSO, BES, and GWO have good positioning accuracy under low-noise conditions. The results of CPSO and PSO show that chaotic mapping can promote the optimization ability to some degree. Overall, the HBES algorithm has the best noise immunity, and its RMSE always stays close to the CRLB.
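The noise experiments above rely on generating noisy TDOA measurements for a known target. The Gaussian per-range noise model and the least-squares residual below are common TDOA formulations standing in for the paper's Equation (7), which is not reproduced in this excerpt; the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Base-station layout from the 2-D simulation scenario (metres)
bs = np.array([[0, 0], [0, 25], [0, 50], [25, 50],
               [50, 50], [50, 25], [50, 0], [25, 0]], dtype=float)

def tdoa_measurements(target, stations, sigma_r, rng):
    """Range differences to the reference station (index 0), with each
    range corrupted by Gaussian distance noise of std sigma_r."""
    r = np.linalg.norm(stations - target, axis=1)
    noise = rng.normal(0.0, sigma_r, size=len(stations))
    return (r + noise) - (r[0] + noise[0])  # differences w.r.t. station 1

def residual(candidate, stations, meas):
    """Least-squares residual used as the swarm fitness surrogate."""
    r = np.linalg.norm(stations - candidate, axis=1)
    return float(np.sum((r - r[0] - meas) ** 2))

target = np.array([12.0, 37.0])
meas = tdoa_measurements(target, bs, sigma_r=0.6, rng=rng)
# The residual at the true position is far smaller than at a wrong guess,
# which is what the swarm minimizes to locate the unknown node.
print(residual(target, bs, meas) < residual(np.array([25.0, 25.0]), bs, meas))
```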
The effect of the number of base stations on localization was analyzed with the standard deviation of the distance noise fixed at 0.5 m. Experiments were conducted with 4–8 base stations participating in localization, and the comparison of the localization accuracy for different numbers of base stations is given in Table 5. The RMSE and CRLB of the different algorithms decrease as the number of base stations increases, and the HBES algorithm achieves a lower RMSE, closer to the CRLB, than the other algorithms for every number of base stations. As the number of base stations increases from 4 to 8, the RMSE of the HBES algorithm decreases by 2.34%, 4.17%, 7.8%, 8.24%, and 9.06%, respectively, compared to the BES algorithm. This indicates that the HBES algorithm makes fuller use of the redundancy of the measurements to reduce the positioning error and can perform better in high-precision positioning scenarios with enough base stations. In general, the difference in positioning accuracy between HBES and the other algorithms is not significant when there are 4 base stations; as the number of base stations increases, the RMSE of HBES drops below that of the other algorithms. This indicates that the HBES algorithm achieves good TDOA positioning accuracy when the number of base stations is adequate and that it is stable against noise interference, achieving good positioning accuracy in the presence of interference.

4.2.4. Analysis of the Convergence Performance of TDOA Based on HBES

Swarm intelligence optimization algorithms require a certain population size and number of iterations to complete the search for the global optimal solution. Algorithms with good convergence capability can converge with smaller population sizes or fewer iterations, reducing the running time. Experiments were conducted with 8 base stations and a distance noise standard deviation of 0.6 m while varying the population size M; the relationship between RMSE and population size is shown in Figure 6. The RMSE of SSA, COOT, PSO, and CPSO changes significantly as the population size increases; the RMSE of GWO and BES no longer decreases significantly after the population size reaches 50; and the RMSE of HBES stabilizes after the population size reaches 50. It can be seen that, after using chaotic mapping to initialize the populations, CPSO and HBES have better initial solutions than PSO and BES, so the individuals can quickly concentrate on the global optimal solution and search around it, improving search efficiency. With the population size M set to 30, the convergence of the RMSE of the different algorithms was compared as the number of iterations increased; the relationship between RMSE and the number of iterations is shown in Figure 7. The RMSE curves of SSA, COOT, and PSO decline steeply, indicating that these algorithms are easily affected by the number of iterations. The HBES algorithm enhances the global search capability through opposition-based learning and Lévy flight, allowing it to find an initial value with a smaller RMSE faster and to calculate the target location in fewer iterations. Therefore, applying the HBES algorithm to TDOA localization allows smaller population sizes and fewer iterations, reducing the time taken to compute the results.

4.2.5. Comparison of 3D Scene Positioning

The above experiments apply the swarm intelligence algorithms in a two-dimensional scene, but three-dimensional positioning is also one of the important applications of sensor networks. Therefore, TDOA localization based on the HBES algorithm is also applied to a 3D scene. The experimental scene is a three-dimensional space of 50 m × 50 m × 50 m. With 5, 6, 7, and 8 base stations, each algorithm was run independently 50 times, and the average value and standard deviation of the positioning errors of each algorithm were recorded.
The positioning data and results of each algorithm in the 3D scenes are shown in Figure 8 and Figure 9. Figure 8 depicts the unknown and anchor nodes as points in 3D space; the estimated locations are those found by the TDOA localization method based on each swarm intelligence algorithm. The closer the triangle markers are to the square markers in the figure, the better the localization accuracy. Figure 9 shows the histogram of the average error of the experimental simulation: the average error of all seven algorithms decreases as the number of base stations increases in the 3D scenario. The average errors of CPSO, BES, and HBES remain consistently low as the number of base stations changes. As the number of base stations increases, HBES improves over the BES algorithm by 1.4%, 4.6%, 5.4%, and 7.1%, respectively, and over CPSO by 1.1%, 3.6%, 1.4%, and 11.9%, respectively. The standard deviation reflects the stability of an algorithm in solving the positioning coordinates. The HBES algorithm is less stable when the number of base stations is small, but its standard deviation becomes smaller as the number of base stations increases, ranking first among the seven algorithms when there are 8 base stations. It can be seen that the HBES-based TDOA positioning algorithm for 3D scenes has good positioning accuracy and stability compared with the other algorithms, and the method has a simple model and is easy to implement.

5. Conclusions

The purpose of the research in this article is to further address the problem of poor TDOA localization accuracy when using swarm intelligence algorithms. To achieve this goal, we proposed an improved BES method, named the HBES algorithm, and applied it to the TDOA problem with higher accuracy and precision than similar algorithms. In the basic BES algorithm, chaotic mapping is introduced to initialize the population, a Lévy flight strategy is incorporated to expand the global search capability, and an opposition-based learning strategy is added to help the algorithm step out of local optima; together, these improve the optimization accuracy and convergence speed of BES.
In order to verify the performance of HBES, we first performed experimental simulations against other advanced swarm intelligence algorithms on 23 test functions. Then, in two-dimensional and three-dimensional scenarios, we analyzed the impacts of noise, the number of base stations, the number of algorithm iterations, and the population size on TDOA localization. As a general conclusion, the HBES metaheuristic has better convergence accuracy and faster convergence speed in the test function experiments and shows the best performance, solution quality, and robustness in dealing with the node localization problem.
In this paper, the TDOA localization method has not yet been validated in real scenarios. Therefore, in order to improve the practicality of this research, further work will address TDOA localization in indoor, underwater, and underground scenarios. Since many challenges in the WSN research area belong to the group of NP-hard optimization problems, and the swarm intelligence metaheuristic has proven to be a robust solution method for such problems, we will apply HBES to other problems in WSNs, such as deployment, coverage, and routing. Moreover, there are many opportunities for improvement in swarm intelligence research, and we also plan to improve existing swarm intelligence algorithms so that they work better in the field of WSNs.

Author Contributions

Conceptualization, W.L. and J.Y.; formal analysis, W.L., T.Q. and Y.F.; investigation, F.L.; methodology, W.L. and W.W.; software, W.L.; supervision, J.Z. and J.Y.; validation, W.L.; writing—original draft, W.L.; writing—review and editing, J.Y. and J.Z.; data curation, J.Z.; funding acquisition, J.Z., W.W., T.Q., Y.F., F.L. and J.Y.; project administration, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NNSF of China (No. 61640014, No. 61963009), Industrial Project of Guizhou province (No. Qiankehe Zhicheng [2022]Yiban017, [2019]2152), Innovation group of Guizhou Education Department (No. Qianjiaohe KY[2021]012), Science and Technology Fund of Guizhou Province (No. Qiankehejichu [2020]1Y266), CASE Library of IoT (KCALK201708), and the platform regarding the IoT of Guiyang National High Technology Industry Development Zone (No. 2015).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yaqoob, I.; Hashem, I.A.T.; Ahmed, A.; Kazmi, S.A.; Hong, C.S. Internet of things forensics: Recent advances, taxonomy, requirements, and open challenges. Futur. Gener. Comput. Syst. 2018, 92, 265–275. [Google Scholar] [CrossRef]
  2. Srinivasan, C.R.; Rajesh, B.; Saikalyan, P. A review on the different types of Internet of Things (IoT). J. Adv. Res. Dyn. Control Syst. 2019, 11, 154–158. [Google Scholar]
  3. Jagannath, J.; Polosky, N.; Jagannath, A. Machine learning for wireless communications in the Internet of Things: A comprehensive survey. Ad Hoc Netw. 2019, 93, 101913. [Google Scholar] [CrossRef] [Green Version]
  4. Malik, H.; Alam, M.M.; Pervaiz, H. Radio resource management in NB-IoT systems: Empowered by interference prediction and flexible duplexing. IEEE Netw. 2019, 34, 144–151. [Google Scholar] [CrossRef]
  5. Khan, S.Z.; Malik, H.; Sarmiento, J.L.R.; Alam, M.M.; Le Moullec, Y. DORM: Narrowband IoT Development Platform and Indoor Deployment Coverage Analysis. Procedia Comput. Sci. 2019, 151, 1084–1091. [Google Scholar] [CrossRef]
  6. Rashid, B.; Rehmani, M.H. Applications of wireless sensor networks for urban areas: A survey. J. Netw. Comput. Appl. 2016, 60, 192–219. [Google Scholar] [CrossRef]
  7. Navarro, M.; Davis, T.W.; Liang, Y. ASWP: A long-term WSN deployment for environmental monitoring. In Proceedings of the 12th International Conference on Information Processing in Sensor Networks, Philadelphia, PA, USA, 8–13 April 2013; pp. 351–352. [Google Scholar]
  8. Chiang, S.Y.; Kan, Y.C.; Tu, Y.C. A preliminary activity recognition of WSN data on ubiquitous health care for physical therapy. In Recent Progress in Data Engineering and Internet Technology; Springer: Berlin/Heidelberg, Germany, 2013; pp. 461–467. [Google Scholar]
  9. Shi, L.; Wang, F.B. Range-free localization mechanism and algorithm in wireless sensor networks. Comput. Eng. Appl. 2004, 40, 127–130. [Google Scholar]
  10. Liu, F.; Li, X.; Wang, J.; Zhang, J. An Adaptive UWB/MEMS-IMU Complementary Kalman Filter for Indoor Location in NLOS Environment. Remote Sens. 2019, 11, 2628. [Google Scholar] [CrossRef] [Green Version]
  11. Xiong, H.; Chen, Z.; An, W.; Yang, B. Robust TDOA Localization Algorithm for Asynchronous Wireless Sensor Networks. Int. J. Distrib. Sens. Netw. 2015, 11, 598747. [Google Scholar] [CrossRef]
  12. Meyer, F.; Tesei, A.; Win, M.Z. Localization of multiple sources using time-difference of arrival measurements. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 3151–3155. [Google Scholar]
  13. Tomic, S.; Beko, M.; Camarinha-Matos, L.M.; Oliveira, L.B. Distributed Localization with Complemented RSS and AOA Measurements: Theory and Methods. Appl. Sci. 2019, 10, 272. [Google Scholar] [CrossRef] [Green Version]
  14. Mass-Sanchez, J.; Ruiz-Ibarra, E.; Gonzalez-Sanchez, A.; Espinoza-Ruiz, A.; Cortez-Gonzalez, J. Factorial Design Analysis for Localization Algorithms. Appl. Sci. 2018, 8, 2654. [Google Scholar] [CrossRef] [Green Version]
  15. Beni, G.; Wang, U. Swarm intelligence in cellular robotic systems. In Robots and Biological Systems: Towards a New Bionics? Springer: Berlin/Heidelberg, Germany, 1993. [Google Scholar]
  16. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the 1st European Conference on Artificial Life, Paris, France, 11–13 December 1991; pp. 134–142. [Google Scholar]
  17. Colorni, A.; Dorigo, M.; Maniezzo, V. An investigation of some properties of an ant algorithm. In Proceedings of the Parallel Problem Solving from Nature Conference, Brussels, Belgium, 28–30 September 1992; pp. 509–520. [Google Scholar]
  18. Colorni, A.; Dorigo, M.; Maniezzo, V. Ant system for job shop scheduling. Oper. Res. Stat. Comput. Sci. 1994, 34, 39–53. [Google Scholar]
  19. Smets, P.; Kennes, R. The transferable belief model. Artif. Intell. 1994, 66, 191–234. [Google Scholar] [CrossRef]
  20. Vaghefi, R.M.; Buehrer, R.M. Cooperative Joint Synchronization and Localization in Wireless Sensor Networks. IEEE Trans. Signal Process. 2015, 63, 3615–3627. [Google Scholar] [CrossRef]
  21. Le, T.-K.; Ono, N. Closed-Form and Near Closed-Form Solutions for TDOA-Based Joint Source and Sensor Localization. IEEE Trans. Signal Process. 2016, 65, 1207–1221. [Google Scholar] [CrossRef]
  22. Foy, W.H. Position-location solutions by Taylor-series estimation. IEEE Trans. Aerosp. Electron. Syst. 2007, AES-12, 187–194. [Google Scholar] [CrossRef]
  23. Xiaomei, Q.; Lihua, X. An efficient convex constrained weighted least squares source localization algorithm based on TDOA measurements. Signal Process. 2016, 119, 142–152. [Google Scholar]
  24. Jihao, Y.; Qun, W.; Shiwen, Y. A simple and accurate TDOA-AOA localization method using two stations. IEEE Signal Process. Lett. 2015, 23, 144–148. [Google Scholar]
  25. Kenneth, W.K.; Jun, Z. Particle swarm optimization for time-difference-of-arrival based localization. In Proceedings of the Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 414–417. [Google Scholar]
  26. Tao, C.; Mengxin, W.; Xiangsong, H. Passive time difference location based on salp swarm algorithm. J. Electron. Inf. Technol. 2018, 40, 1591–1597. [Google Scholar]
  27. Yue, Y.; Cao, L.; Hu, J. A novel hybrid location algorithm based on chaotic particle swarm optimization for mobile position estimation. IEEE Access 2019, 7, 58541–58552. [Google Scholar] [CrossRef]
  28. Shengliang, W.; Genyou, L.; Ming, G. Application of improved adaptive genetic algorithm in TDOA localization. Syst. Eng. Electron. 2019, 41, 254–258. [Google Scholar]
  29. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  30. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimization algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264. [Google Scholar] [CrossRef]
  31. Yanhong, F.; Jianqin, L.; Yizhao, H. Dynamic population firefly algorithm based on chaos theory. Comput. Appl. 2013, 33, 796–799. [Google Scholar]
  32. Leccardi, M. Comparison of three algorithms for Levy noise generation. In Proceedings of the fifth EUROMECH Nonlinear Dynamics Conference, Eindhoven, The Netherlands, 7–12 August 2005. [Google Scholar]
  33. Jiakui, Z.; Lijie, C.; Qing, G. Grey Wolf optimization algorithm based on Tent chaotic sequence. Microelectron. Comput. 2018, 3, 11–16. [Google Scholar]
  34. Chunmei, T.; Guobin, C.; Chao, L. Research on chaotic feedback adaptive whale optimization algorithm. Stat. Decis. Mak. 2019, 35, 17–20. [Google Scholar]
  35. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  36. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  37. Shi, Y.; Eberhart, R. A Modified Particle Swarm Optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  39. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
  40. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  41. Mengjian, Z.; Daoyin, L.; Tao, Q.; Jing, Y. A Chaotic Hybrid Butterfly Optimization Algorithm with Particle Swarm Optimization for High-Dimensional Optimization Problems. Symmetry 2020, 12, 1800. [Google Scholar]
Figure 1. TDOA positioning principle.
Figure 2. Tent-Mapping Bifurcation Chart.
Figure 3. Variation curves of r, ω, and r1.
Figure 4. Simulation diagram of the test function: (a) comparison of convergence curves on F 1 ; (b) comparison of convergence curves on F 2 ; (c) comparison of convergence curves on F 3 ; (d) comparison of convergence curves on F 7 ; (e) comparison of convergence curves on F 8 ; (f) comparison of convergence curves on F 10 ; (g) comparison of convergence curves on F 13 ; (h) comparison of convergence curves on F 15 ; (i) comparison of convergence curves on F 21 ; (j) comparison of convergence curves on F 23 .
Figure 5. Positioning accuracy-test simulation diagram.
Figure 6. Relationship between RMSE and population size.
Figure 7. Relationship between RMSE and iteration times.
Figure 8. 3D-scene positioning diagram: (a) 3D five-base stations; (b) 3D six-base stations; (c) 3D seven-base stations; (d) 3D eight-base stations.
Figure 9. Error statistics of base stations in 3D scenes. (a) The average error of the algorithm in the experiment in different base stations; (b) the standard deviation of the algorithm in the experiment in different base stations.
Table 1. Test function parameters.
| Function classification | Test function | Dimension | Range of values | Optimum value |
|---|---|---|---|---|
| Single-peaked functions | $F_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0 |
| | $F_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30 | [−10, 10] | 0 |
| | $F_3(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i}x_j\bigr)^2$ | 30 | [−100, 100] | 0 |
| | $F_4(x)=\max_i\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0 |
| | $F_5(x)=\sum_{i=1}^{n-1}\bigl[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\bigr]$ | 30 | [−30, 30] | 0 |
| | $F_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30 | [−100, 100] | 0 |
| | $F_7(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0 |
| Multi-peaked functions | $F_8(x)=\sum_{i=1}^{n}-x_i\sin\bigl(\sqrt{\lvert x_i\rvert}\bigr)$ | 30 | [−500, 500] | −418.9829 × n |
| | $F_9(x)=\sum_{i=1}^{n}\bigl[x_i^2-10\cos(2\pi x_i)+10\bigr]$ | 30 | [−5.12, 5.12] | 0 |
| | $F_{10}(x)=-20\exp\Bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\Bigr)-\exp\Bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Bigr)+20+e$ | 30 | [−32, 32] | 0 |
| | $F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 30 | [−600, 600] | 0 |
| | $F_{12}(x)=\tfrac{\pi}{n}\bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\bigl[1+10\sin^2(\pi y_{i+1})\bigr]+(y_n-1)^2\bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0 |
| | $F_{13}(x)=0.1\bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\bigl[1+\sin^2(3\pi x_{i+1})\bigr]+(x_n-1)^2\bigl[1+\sin^2(2\pi x_n)\bigr]\bigr\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0 |
| Fixed-dimensional functions | $F_{14}(x)=\Bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\Bigr)^{-1}$ | 2 | [−65, 65] | 1 |
| | $F_{15}(x)=\sum_{i=1}^{11}\Bigl[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\Bigr]^2$ | 4 | [−5, 5] | 0.0003 |
| | $F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| | $F_{17}(x)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
| | $F_{18}(x)=\bigl[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\bigr]\times\bigl[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\bigr]$ | 2 | [−2, 2] | 3 |
| | $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$ | 3 | [1, 3] | −3.86 |
| | $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$ | 6 | [0, 1] | −3.32 |
| | $F_{21}(x)=-\sum_{i=1}^{5}\bigl[(X-a_i)(X-a_i)^{\mathrm T}+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.1532 |
| | $F_{22}(x)=-\sum_{i=1}^{7}\bigl[(X-a_i)(X-a_i)^{\mathrm T}+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.4028 |
| | $F_{23}(x)=-\sum_{i=1}^{10}\bigl[(X-a_i)(X-a_i)^{\mathrm T}+c_i\bigr]^{-1}$ | 4 | [0, 10] | −10.5363 |
Table 2. Comparison of algorithm search results.
| Test Function | Indicators | PSO | BOA | HPSOBOA | SCA | GWO | COOT | BES | HBES |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | mean | 1.15 × 10^−5 | 7.71 × 10^−11 | 1.862 × 10^−290 | 23.2423 | 1.25 × 10^−27 | 2.07 × 10^−13 | 0 | 0 |
| F1 | std | 2.57 × 10^−5 | 6.71 × 10^−12 | 0 | 68.9556 | 2.1 × 10^−27 | 1.47 × 10^−12 | 0 | 0 |
| F1 | SR (%) | 0 | 0 | 100 | 0 | 0 | 76 | 100 | 100 |
| F2 | mean | 0.0063 | 2.33 × 10^−8 | 8.62 × 10^−146 | 0.0306 | 1.07 × 10^−16 | 4.05 × 10^−8 | 0 | 0 |
| F2 | std | 0.0124 | 6.46 × 10^−9 | 5.59 × 10^−146 | 0.0439 | 8.79 × 10^−17 | 2.86 × 10^−7 | 0 | 0 |
| F2 | SR (%) | 0 | 0 | 100 | 0 | 0 | 0 | 100 | 100 |
| F3 | mean | 0.4956 | 6.2 × 10^−11 | 1.096 × 10^−289 | 92.85 | 7.28 × 10^−7 | 2.02 × 10^−10 | 0 | 0 |
| F3 | std | 0.2497 | 7.91 × 10^−12 | 0 | 53.39 | 8.65 × 10^−7 | 1.39 × 10^−9 | 0 | 0 |
| F3 | SR (%) | 0 | 0 | 100 | 0 | 0 | 0 | 100 | 100 |
| F4 | mean | 0.5364 | 3.38 × 10^−8 | 2.52 × 10^−146 | 34.6610 | 8.15 × 10^−7 | 4.04 × 10^−13 | 0 | 0 |
| F4 | std | 0.4226 | 3.38 × 10^−9 | 1.2 × 10^−147 | 11.1013 | 9.05 × 10^−7 | 2.66 × 10^−16 | 0 | 0 |
| F4 | SR (%) | 0 | 0 | 100 | 0 | 0 | 0 | 100 | 100 |
| F5 | mean | 42.62 | 28.9069 | 28.7769 | 53.3394 | 27.2613 | 44.97 | 19.0639 | 20.4183 |
| F5 | std | 26.64 | 0.0263 | 0.0644 | 3.37 × 10^3 | 0.7647 | 44.44 | 1.4906 | 0.2718 |
| F5 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F6 | mean | 1.15 × 10^−6 | 5.4064 | 0.0416 | 40.6717 | 0.6766 | 0.2605 | 1.34 × 10^−18 | 2.13 × 10^−18 |
| F6 | std | 2.04 × 10^−5 | 0.5654 | 0.0318 | 166.8152 | 0.3671 | 0.2008 | 4.29 × 10^−18 | 5.05 × 10^−18 |
| F6 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F7 | mean | 0.0832 | 0.0019 | 2.14 × 10^−5 | 112.9 | 0.0022 | 0.0055 | 3.98 × 10^−5 | 2.47 × 10^−4 |
| F7 | std | 0.0377 | 6.81 × 10^−4 | 1.05 × 10^−4 | 248.6 | 0.0013 | 0.0045 | 3.77 × 10^−5 | 0.0012 |
| F7 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F8 | mean | −2.79 × 10^3 | −4.04 × 10^3 | −9.745 × 10^3 | −3.73 × 10^3 | −5.97 × 10^3 | −7.44 × 10^3 | −7.48 × 10^3 | −1.24 × 10^4 |
| F8 | std | 406.595 | 343.2 | 1.367 × 10^3 | 308.97 | 840.6115 | 1.05 × 10^3 | 1.60 × 10^3 | 1.09 × 10^3 |
| F8 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 78 |
| F9 | mean | 45.8288 | 32.8474 | 0.0643 | 38.6612 | 2.5001 | 1.04 × 10^−12 | 0 | 0 |
| F9 | std | 14.4283 | 69.8399 | 0.1818 | 34.3327 | 3.299 | 6.67 × 10^−12 | 0 | 0 |
| F9 | SR (%) | 0 | 0 | 0 | 0 | 0 | 84 | 100 | 100 |
| F10 | mean | 0.0012 | 2.78 × 10^−8 | 8.88 × 10^−16 | 14.1827 | 1.05 × 10^−13 | 8.39 × 10^−13 | 8.88 × 10^−16 | 8.88 × 10^−16 |
| F10 | std | 5.68 × 10^−4 | 5.61 × 10^−9 | 0 | 8.9091 | 1.85 × 10^−14 | 5.1 × 10^−12 | 0 | 0 |
| F10 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F11 | mean | 43.0227 | 1.18 × 10^−11 | 0 | 0.9740 | 0.0028 | 2.4 × 10^−11 | 0 | 0 |
| F11 | std | 6.8557 | 1.06 × 10^−11 | 0 | 0.4335 | 0.0066 | 1.7 × 10^−10 | 0 | 0 |
| F11 | SR (%) | 0 | 0 | 100 | 0 | 0 | 72 | 100 | 100 |
| F12 | mean | 0.7678 | 0.4866 | 0.0019 | 1.06 × 10^3 | 0.0398 | 0.2583 | 5.79 × 10^−20 | 5.18 × 10^−20 |
| F12 | std | 0.6487 | 0.1274 | 0.0014 | 5.83 × 10^3 | 0.0170 | 0.6319 | 3.68 × 10^−19 | 1.02 × 10^−18 |
| F12 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F13 | mean | 0.0015 | 2.777 | 0.9117 | 88.27 | 0.6138 | 0.4766 | 2.8705 | 0.0059 |
| F13 | std | 0.0044 | 0.2658 | 0.7025 | 180.74 | 0.2386 | 0.4419 | 0.4485 | 0.0144 |
| F13 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F14 | mean | 1.5131 | 1.1505 | 3.1745 | 1.9508 | 4.2161 | 0.9980 | 5.3268 | 2.5651 |
| F14 | std | 1.1153 | 0.3856 | 2.7582 | 1.0009 | 3.7155 | 2.3 × 10^−13 | 5.2344 | 3.2754 |
| F14 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F15 | mean | 5.85 × 10^−4 | 3.72 × 10^−4 | 0.0015 | 0.0011 | 0.0044 | 7.02 × 10^−4 | 3.68 × 10^−4 | 3.07 × 10^−4 |
| F15 | std | 4.37 × 10^−4 | 5.26 × 10^−5 | 7.9276 × 10^−4 | 3.807 × 10^−4 | 0.008 | 2.59 × 10^−4 | 2.22 × 10^−4 | 3.42 × 10^−8 |
| F15 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F16 | mean | −1.0316 | −1.0316 | −0.6435 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| F16 | std | 2.61 × 10^−16 | 5.6 × 10^−5 | 0.0972 | 6.1034 × 10^−5 | 2.14 × 10^−8 | 3.31 × 10^−12 | 1.48 × 10^−7 | 1.53 × 10^−9 |
| F16 | SR (%) | 100 | 64 | 0 | 78 | 100 | 100 | 100 | 100 |
| F17 | mean | 0.3979 | 0.4514 | 0.3979 | 0.4010 | 0.3979 | 0.3979 | 0.3979 | 0.3979 |
| F17 | std | 0 | 0.0078 | 4.39 × 10^−6 | 0.0032 | 3.79 × 10^−6 | 0.0012 | 4.71 × 10^−7 | 2.28 × 10^−8 |
| F17 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F18 | mean | 3 | 3.5621 | 28.0966 | 3.0002 | 3 | 3 | 3 | 3 |
| F18 | std | 1.37 × 10^−15 | 3.6640 | 5.8650 | 3.02 × 10^−4 | 3.74 × 10^−5 | 1.83 × 10^−12 | 6.085 × 10^−16 | 7.4587 × 10^−18 |
| F18 | SR (%) | 100 | 0 | 0 | 0 | 92 | 100 | 100 | 100 |
| F19 | mean | −3.8628 | −3.8568 | −3.1973 | −3.8543 | −3.8615 | −3.8628 | −3.8628 | −3.8628 |
| F19 | std | 1.3 × 10^−15 | 0.0089 | 0.0094 | 0.0027 | 0.0023 | 1.89 × 10^−11 | 1.34 × 10^−15 | 2.34 × 10^−15 |
| F19 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F20 | mean | −3.2649 | −3.0238 | −2.0792 | −2.8994 | −3.2738 | −3.2958 | −3.2602 | −3.2673 |
| F20 | std | 0.06 | 0.1649 | 8.97 × 10^−16 | 0.3445 | 0.0675 | 0.0498 | 0.06 | 0.04 |
| F20 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F21 | mean | −5.4891 | −6.9961 | −1.6612 | −2.4205 | −9.3193 | −9.0522 | −7.505 | −10.1513 |
| F21 | std | 3.2346 | 1.7411 | 0.0272 | 1.9668 | 1.9313 | 2.5766 | 2.5702 | 3.48 × 10^−5 |
| F21 | SR (%) | 32 | 0 | 0 | 0 | 0 | 78 | 48 | 94 |
| F22 | mean | −7.1406 | −7.4507 | −1.6867 | −3.5481 | −10.0364 | −9.4728 | −7.5917 | −10.2694 |
| F22 | std | 3.6318 | 1.6058 | 0.006 | 1.7069 | 1.4839 | 2.3699 | 2.7382 | 0.94 |
| F22 | SR (%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F23 | mean | −5.9766 | −8.0935 | −1.7266 | −3.9590 | −10.2949 | −10.0932 | −8.1434 | −10.4024 |
| F23 | std | 3.5971 | 1.383 | 0.004 | 1.6797 | 0.7514 | 1.5173 | 2.6336 | 1.90 × 10^−5 |
| F23 | SR (%) | 64 | 6 | 0 | 0 | 4 | 88 | 44 | 94 |
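The mean, std, and SR (%) indicators in Table 2 summarize repeated independent runs of each optimizer on a given test function. A minimal sketch of how such per-function statistics can be gathered (the helper name `summarize` and the 1e−8 success tolerance are our assumptions; the paper does not state its exact success criterion here):

```python
import statistics

def summarize(results, optimum, tol=1e-8):
    """Summarize repeated runs of a stochastic optimizer.

    results : best fitness value from each independent run
    optimum : known global optimum of the test function
    tol     : assumed success threshold for the success rate
    """
    mean = statistics.mean(results)
    std = statistics.pstdev(results)
    # Success rate: percentage of runs reaching the optimum within tol
    sr = 100.0 * sum(abs(r - optimum) <= tol for r in results) / len(results)
    return mean, std, sr

# Toy data: three of four runs reach the optimum within tolerance
mean, std, sr = summarize([0.0, 0.0, 3e-9, 0.5], optimum=0.0)
print(sr)  # 75.0
```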
Table 3. Comparison results of benchmark functions.
| Indicators | PSO | BOA | HPSOBOA | SCA | GWO | COOT | BES | HBES |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mean (+/−/=) | 1/19/3 | 0/21/2 | 0/16/7 | 0/22/1 | 1/18/4 | 2/18/3 | 1/8/14 | 4/5/14 |
| Std (+/−/=) | 3/19/1 | 3/13/7 | 1/22/0 | 0/13/10 | 1/22/0 | 0/12/11 | 0/12/11 | 4/9/10 |
| Total (+/−/=) | 4/38/4 | 3/34/9 | 1/38/7 | 0/35/11 | 2/40/4 | 2/30/14 | 1/20/25 | 8/14/24 |
Table 4. Localization accuracy comparison of different distance noise standard deviation.
| Location Accuracy | σr/m | PSO | GWO | COOT | CPSO | SSA | BES | HBES | CRLB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RMSE/m | 1.0 | 1.0532 | 1.0297 | 0.9966 | 1.1256 | 1.1616 | 0.8515 | 0.8396 | 0.7149 |
| RMSE/m | 0.8 | 0.9063 | 0.9486 | 0.8845 | 0.9002 | 1.0776 | 0.7443 | 0.6543 | 0.5719 |
| RMSE/m | 0.6 | 0.7384 | 0.7465 | 0.7840 | 0.7150 | 0.9215 | 0.6435 | 0.5869 | 0.4289 |
| RMSE/m | 0.4 | 0.5133 | 0.4481 | 0.5136 | 0.4563 | 0.6512 | 0.4414 | 0.3385 | 0.2861 |
| RMSE/m | 0.2 | 0.1698 | 0.1921 | 0.1836 | 0.1509 | 0.2463 | 0.1856 | 0.1456 | 0.1438 |
Table 5. Localization accuracy comparison of a different number of base stations.
| Location Accuracy | N | PSO | GWO | COOT | CPSO | SSA | BES | HBES | CRLB |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RMSE/m | 4 | 3.9277 | 3.7517 | 4.0423 | 3.8522 | 3.8588 | 4.0328 | 3.9384 | 1.0084 |
| RMSE/m | 5 | 2.0468 | 1.9646 | 1.9649 | 1.8189 | 2.1965 | 1.9989 | 1.9539 | 0.9451 |
| RMSE/m | 6 | 1.7577 | 1.5178 | 1.7455 | 1.7175 | 1.8377 | 1.5985 | 1.4638 | 0.8799 |
| RMSE/m | 7 | 1.0027 | 1.0496 | 1.2084 | 1.1186 | 1.1047 | 0.9055 | 0.8767 | 0.8187 |
| RMSE/m | 8 | 1.0532 | 1.0297 | 0.9966 | 1.0111 | 1.0742 | 0.8515 | 0.8396 | 0.7149 |
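The RMSE figures in Tables 4 and 5 aggregate the localization error over repeated Monte Carlo trials. A minimal sketch of the metric for a 2-D target position (the function name and toy data are ours for illustration; the paper's trial count and geometry are not reproduced here):

```python
import math

def rmse(estimates, true_pos):
    """Root-mean-square localization error over Monte Carlo trials.

    estimates : list of (x, y) position estimates, one per trial
    true_pos  : the true (x, y) target position
    """
    sq_err = sum((x - true_pos[0])**2 + (y - true_pos[1])**2
                 for x, y in estimates)
    return math.sqrt(sq_err / len(estimates))

# Toy example: two trials, each 1 m off along one axis
print(rmse([(101.0, 200.0), (100.0, 201.0)], (100.0, 200.0)))  # 1.0
```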
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liu, W.; Zhang, J.; Wei, W.; Qin, T.; Fan, Y.; Long, F.; Yang, J. A Hybrid Bald Eagle Search Algorithm for Time Difference of Arrival Localization. Appl. Sci. 2022, 12, 5221. https://doi.org/10.3390/app12105221
