1. Introduction
The wireless sensor network (WSN) is a distributed sensor network in which the number of nodes can reach hundreds or thousands [1]. Nodes form different network topologies and can communicate with each other. The network is composed of anchor nodes and common nodes [2]. A common node senses the external environment through its hardware, collects data, and transmits it to an anchor node. The anchor node filters and fuses the data and sends the sorted data to the network owner through the network. The network owner responds to changes in the environment based on the collected data. However, if the anchor node does not know the location of the node sending the information, the received information is meaningless. Therefore, localization is not only a critical issue in wireless sensor networks but also the basis for the subsequent operation of the network.
There are many algorithms for locating unknown nodes in a WSN, such as the DV-Hop algorithm, the APIT algorithm, and the centroid algorithm. The DV-Hop algorithm was proposed by Niculescu et al. [3]. Its main idea is to use the minimum hop count and average hop distance between an unknown node and an anchor node as a substitute for the distance between the two. Once the distances between the unknown node and at least three anchor nodes are obtained, the maximum likelihood method or the trilateration method can be used to solve for the position of the unknown node. The APIT algorithm was proposed by Tian He et al. [4]. Its main idea is to continuously narrow the area in which the unknown node is located according to whether the node lies inside the triangle formed by three adjacent anchor nodes and to take the centroid of the finally locked area as the coordinates of the unknown node. The centroid algorithm was proposed by Nirupama et al. [5]. This algorithm calculates the positions of unknown nodes through the connectivity of the network: for an unknown node, it uses the surrounding anchor nodes as the vertices of a polygon and takes the centroid of that polygon as the coordinate of the unknown node.
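As an illustrative sketch of the final solving step used by such range-based methods (the setup and names below are our own, not the paper's), the trilateration step can be linearized into a least-squares system by subtracting one range equation from the others:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a node position from >= 3 anchor positions and estimated
    distances by linearizing the range equations |p - a_i|^2 = d_i^2 into
    a least-squares system (illustrative sketch, not the paper's code)."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    # Subtract the last range equation from the others to cancel the
    # quadratic |p|^2 terms, leaving the linear system A @ p = b.
    an, dn = anchors[-1], dists[-1]
    A = 2.0 * (an - anchors[:-1])
    b = (dists[:-1] ** 2 - dn ** 2
         - np.sum(anchors[:-1] ** 2, axis=1) + np.sum(an ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# anchors at known positions, exact distances to the point (1, 1)
p = trilaterate([(0, 0), (4, 0), (0, 4)],
                [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
```

In practice the DV-Hop distance estimates are noisy, so the least-squares solution only approximates the true position.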
With the gradual maturity of mobile terminals and mobile internet technology, the localization of mobile wireless sensor networks (MWSNs) has become a new trend and a hot research field within WSNs [6]. The most significant difference between an MWSN and a WSN is that the nodes constantly move. The mobility of nodes can be used to increase network coverage and improve network scalability and reliability, but it also places new requirements on the localization algorithm. Because the network's topology is constantly changing, an MWSN requires that nodes can be located dynamically; otherwise, the location information of the nodes becomes invalid over time, and the MWSN cannot operate normally [7]. Many mobile node localization algorithms have been proposed, such as the particle filter algorithm and the Monte Carlo localization (MCL) algorithm [8].
The swarm intelligence algorithm [9] simulates the behavior of living things in nature [10]. Swarm intelligence algorithms are a group of algorithms with self-organization and self-learning abilities and are characterized by adaptability and parallelism. They place few requirements on the optimization problem, are simple to use, and are suitable for solving large-scale problems. In 1989, Gerardo Beni and Jing Wang [11] of the University of California proposed the concept of "swarm intelligence". The basic principle is to simulate the behavior of animal groups in nature, exploiting the cooperation and communication within such groups to achieve the optimization goal. Unlike algorithms with complex internal designs, swarm intelligence algorithms are simple and have strong robustness and adaptability. Therefore, once the concept of swarm intelligence was proposed, it attracted widespread attention. Standard swarm intelligence algorithms include brain storm optimization (BSO) [12], particle swarm optimization (PSO) [13], the firefly algorithm (FA) [14], the whale optimization algorithm (WOA) [15], grey wolf optimization (GWO) [16], and the black hole (BH) algorithm [17].
Metaheuristic search algorithms [18] are divided into four main categories: evolutionary algorithms, swarm intelligence algorithms, human-based algorithms, and physics-based algorithms. Evolutionary algorithms are a class of optimization algorithms based on the principles of biological evolution in nature and are used to find optimal or near-optimal solutions in the search space. They mainly include evolutionary programming (EP) [19], the genetic algorithm (GA) [20,21], and genetic programming (GP) [22]. Swarm intelligence algorithms are a class of optimization algorithms based on the behavior of groups in nature, which achieve global search or solve optimization problems by simulating the collaboration and cooperation among individuals in a group. They include particle swarm optimization (PSO) [13,23], grey wolf optimization (GWO) [16,24,25], and the whale optimization algorithm (WOA) [15]. Human-based algorithms are a class of optimization algorithms based on human behavior and cognition, applying human intelligence and experience to the process of problem-solving. They include the Jaya algorithm (JA) [26], the human-inspired algorithm (HIA) [27], and teaching-learning-based optimization (TLBO) [28]. Human-based algorithms can make full use of human intelligence and experience in practical applications and are particularly advantageous for complex, uncertain, and multi-objective problems. Physics-based algorithms are a class of optimization algorithms that mimic the physical phenomena and principles of nature, usually achieving problem-solving or optimization by simulating physical processes, mechanical laws, or energy transfer. By simulating natural physical laws, these algorithms can achieve global optimization and, in some cases, exhibit strong robustness and global search capability. Physics-based algorithms mainly include the gravitational search algorithm (GSA) [29], the multi-verse optimizer (MVO) [30], and the black hole (BH) algorithm [17].
Mobile node localization problems usually involve searching for the locations of target nodes in complex spaces, such as locating moving objects in three-dimensional space. The search space can be very large and complex, and traditional exact search methods are often unsuitable. A metaheuristic search algorithm can search efficiently in a complex search space by simulating optimization processes found in nature. Mobile node localization is usually performed in dynamic environments, where the location of the target node may change with time and other factors. Metaheuristic search algorithms are usually adaptive and can adjust their search strategies in real time in dynamic environments to track changes in target locations. They are also generally robust and can cope with various kinds of uncertainty and noise. In the mobile node localization problem, where the environment and the nodes introduce uncertainties such as noise, sensor error, and communication interruption, metaheuristic search algorithms can usually still find good solutions.
Heuristic algorithms usually search the search space with a population as a substitute for directly solving the problem to be optimized. Individuals in the population represent candidate solutions to the problem, and the quality of a candidate solution corresponds to its fitness. Almost all heuristic algorithms simulate the behavior patterns of creatures or other phenomena in nature. The main idea of the black hole algorithm is to find the optimal solution of the problem in the search space by simulating how a black hole attracts the stars moving around it.
In this paper, we combine the BH algorithm and the MCL algorithm to localize mobile nodes in a 3D WSN. The principle of the BH algorithm is simple, and it has few parameters, so using it to optimize the positioning of wireless sensor networks does not place a large burden on their memory. However, precisely because the BH algorithm only simulates the phenomenon of a black hole attracting stars and does not balance the exploration and exploitation phases, its optimization ability is weak and its convergence speed is slow. To overcome this defect of the BH algorithm, we propose an opposition-based learning black hole (OBH) algorithm. According to the evolution degree of the population, OBH divides the population's evolution into four stages and applies different opposition strategies to the population at different stages. At the same time, this paper uses an adaptive strategy [31] to determine the evolution interval of the algorithm so that the population adopts a more accurate opposition-based learning (OBL) strategy.
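For reference, the core star-movement and event-horizon logic of the basic BH algorithm can be sketched as follows (a minimal sketch following the standard BH formulation; function names and the test problem are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def bh_step(stars, fitness, bounds):
    """One iteration of the basic black hole algorithm (sketch).
    `fitness` is minimized; `bounds` is (low, high) for every dimension."""
    low, high = bounds
    f = np.array([fitness(s) for s in stars])
    best = int(np.argmin(f))
    bh = stars[best].copy()            # the best star acts as the black hole
    radius = f[best] / np.sum(f)       # event horizon radius
    for i in range(len(stars)):
        if i == best:
            continue
        # each star drifts toward the black hole by a random fraction
        stars[i] = stars[i] + rng.random(stars[i].shape) * (bh - stars[i])
        # a star crossing the event horizon is swallowed and re-seeded
        if np.linalg.norm(stars[i] - bh) < radius:
            stars[i] = rng.uniform(low, high, size=stars[i].shape)
    return stars, bh

# usage: minimize the sphere function in 3D
stars = rng.uniform(-5, 5, size=(20, 3))
for _ in range(200):
    stars, bh = bh_step(stars, lambda x: float(np.sum(x ** 2)), (-5, 5))
```

The re-seeding step is what keeps some diversity in the population, but since only the pull toward the black hole drives the search, exploration and exploitation are not explicitly balanced.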
This section describes the organizational structure of the article. Section 2 introduces the basic principles of the BH and MCL algorithms. Section 4 introduces how to combine the opposition-based learning strategy with the BH algorithm. In Section 5, this paper tests the OBH algorithm's performance and compares it with several swarm intelligence algorithms. Section 6 is the experimental simulation part. Finally, the full text is summarized in Section 7.
4. Opposition-Based Learning Black Hole Algorithm
The OBL strategy [34] aims to improve the learning rate. The opposition operation can fully explore the search space, quickly find a promising region, and accelerate the population toward the global optimum.
Define $x$ as an individual in the population, and assume that the lower and upper bounds of $x$ are $a$ and $b$, respectively. The reflective opposition $\bar{x}$ of $x$ is as follows:

$$\bar{x} = a + b - x$$
The relationship between $x$, $\bar{x}$, $a$, and $b$ is shown in Figure 2.
Dividing the interval in Figure 2 yields the other three kinds of opposition of $x$, namely its quasi-opposition, super-opposition, and quasi-reflective opposition, which are calculated by the following formulas. Figure 3 clearly shows the four opposition forms of $x$ on the interval $[a, b]$:
Further observation of Figure 3 shows that no opposition rule is defined on one sub-interval of $[a, b]$. Through experiments, this paper finds that it is necessary to explore this interval further: it may also contain promising points, which can considerably improve individual performance and help the population break away from local optima. Therefore, we propose a new opposition rule called near opposition, which is defined on this remaining sub-interval.
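Assuming the definitions commonly used in the OBL literature (the paper's exact formulas are not reproduced in this excerpt, so the quasi-, super-, and quasi-reflected forms below are our assumptions), the opposition forms can be sketched as:

```python
import random

def oppositions(x, a, b, rng=random.Random(42)):
    """Common opposition-based learning points for x in [a, b] (sketch;
    the quasi/super/quasi-reflected forms follow the usual literature
    definitions and may differ in detail from the paper's)."""
    c = (a + b) / 2.0                    # interval centre
    xo = a + b - x                       # opposite (reflective opposition)
    quasi = rng.uniform(c, xo)           # quasi-opposite: centre .. opposite
    quasi_refl = rng.uniform(c, x)       # quasi-reflected: centre .. x
    boundary = b if x < c else a         # bound on the opposite side of x
    sup = rng.uniform(xo, boundary)      # super-opposite: opposite .. bound
    return {
        "opposite": xo,
        "quasi_opposite": quasi,
        "quasi_reflected": quasi_refl,
        "super_opposite": sup,
    }

pts = oppositions(2.0, 0.0, 10.0)
```

For $x = 2$ on $[0, 10]$ the opposite point is $8$, the quasi-opposite lies between the centre $5$ and $8$, the quasi-reflected point between $2$ and $5$, and the super-opposite between $8$ and the bound $10$, leaving the interval between the lower bound and $x$ uncovered, which motivates the near opposition above.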
In the opposition strategy, the jump rate parameter represents the probability that an individual undergoes opposition. If its value is too large, opposite individuals are generated frequently, which makes the population repeat the opposition operation in the search space and wastes time evaluating the fitness function. If it is too small, too few opposite individuals are generated to fully explore unknown areas. In this paper, the jump rate starts at a relatively large initial value and decreases gradually as the iterations proceed.
The evolution process of the population can be divided into four states, namely exploration [35], exploitation, convergence, and jumping out. Exploration and exploitation are essential parts of a swarm intelligence algorithm and essential conditions for the algorithm to behave intelligently. Exploration refers to the group searching for promising areas in the search space, whereas exploitation [36] refers to fully exploiting a promising area to find the best-performing position within it. Too much exploration increases the randomness of the algorithm, while too much exploitation reduces it. Both are critical to the algorithm, so to achieve better performance we need to balance the two. Convergence means that the current population has found a better global optimal value; at this time, all individuals in the population move close to the new global optimum. Jumping out means that the population had fallen into a local optimum but has found a better-performing position in the latest evolution that is relatively far from the local optimal position; at this time, all individuals move toward the better-performing individual, which constitutes the jumping-out state.
Since the population is divided into four states, determining the state of the population becomes very important. The average distance between individuals in a population is a good criterion for judging the state. The average distance between the $i$-th individual and all other individuals is calculated using the following formula:

$$d_i = \frac{1}{N-1} \sum_{j=1,\, j \neq i}^{N} \sqrt{\sum_{k=1}^{D} \left( x_i^k - x_j^k \right)^2}$$

where $N$ is the population size, $D$ represents the dimension of an individual, and $k$ indexes a particular dimension. Under the convergence state, the average distance between the globally optimal individual and the other individuals is the smallest, because the individuals in the population gather around the global optimum. When jumping out, this average distance is the largest, because the globally optimal individual may be far away from the other individuals. Let $d_g$ represent the average distance between the globally optimal individual and all other individuals, and let $d_{max}$ and $d_{min}$ represent the maximum and minimum of the individuals' average distances, respectively.
Equations (13)–(16) describe the fuzzy membership functions of the exploration, exploitation, convergence, and jumping-out stages, respectively. The evolution factor $f$ used to judge the population state is given by the following formula:

$$f = \frac{d_g - d_{min}}{d_{max} - d_{min}} \in [0, 1]$$
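The evolution-factor computation can be sketched as follows (the formula follows the adaptive PSO literature that the adaptive strategy is drawn from; variable names are illustrative):

```python
import numpy as np

def evolution_factor(pop, best_idx):
    """Evolutionary factor f (sketch): compute each individual's mean
    distance to the others, then normalize the globally best
    individual's mean distance into [0, 1]."""
    diff = pop[:, None, :] - pop[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)      # pairwise distance matrix
    n = len(pop)
    d = dist.sum(axis=1) / (n - 1)            # mean distance per individual
    dg, dmin, dmax = d[best_idx], d.min(), d.max()
    return float((dg - dmin) / (dmax - dmin + 1e-12))

# a tight cluster around the best individual -> f near 0 (convergence)
pop = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
f = evolution_factor(pop, best_idx=0)
```

A large $f$ means the best individual is far from the crowd (exploration or jumping out); a small $f$ means the population has gathered around it (convergence).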
Early in the iteration, the population is in the exploration stage and $f$ takes a large value. As the iteration progresses, the population enters the exploitation stage, and $f$ starts to decrease gradually until the population enters the convergence state, where $f$ reaches its minimum value. Once the algorithm finds a better individual at another location, the population enters the jumping-out state, and $f$ increases sharply until the population enters the convergence state again. Many test results have shown that there is a certain range of overlap between the intervals of adjacent states. Figure 4 clearly describes how the evolution factor $f$ reflects the state of the population.
The transition rule between states follows the cycle exploration → exploitation → convergence → jumping out → exploration. For example, assuming that $f$ currently lies in the overlapping interval of two adjacent states and the previous state is jumping out, it can be concluded that the current population has entered the exploration state. After determining the evolution state of the current population, we can apply the corresponding opposition strategies to the population at the appropriate time. This method of obtaining the population state according to the population evolution factor $f$ and then adopting different strategies for the population is also called an adaptive strategy.
When the algorithm encounters a deceptive location, the globally optimal individual will hover at that location, making the algorithm keep exploring a worthless area and wasting many fitness evaluations. In order to let the OBH algorithm exploit optimization opportunities as much as possible and avoid endless exploration and exploitation in worthless areas, this paper uses the Levy flight to accelerate particles jumping out of local optima. The Levy flight is a special random walk in which an individual can advance by an arbitrary distance. Figure 5 shows the step sizes of the Levy flight; each line segment represents one random walk of an individual. It can be seen that the Levy flight has a significant probability of taking a large step. When an individual falls into a local optimum, the Levy flight can let the individual jump out of its current position and find a new optimal position through a large step. Moreover, the Levy flight not only helps individuals jump out of local optima but also allows them to advance at a significant pace in the exploration stage and to find promising areas in the whole search space as early as possible.
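A common way to draw such heavy-tailed steps is Mantegna's algorithm (a sketch; the paper does not specify its sampling scheme, so this is an assumption):

```python
import math, random

def levy_step(beta=1.5, rng=random.Random(7)):
    """Sample a Levy-distributed step via Mantegna's algorithm (sketch).
    Heavy tail: most steps are small, occasional steps are very large."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)   # numerator: Gaussian with tuned scale
    v = rng.gauss(0, 1)         # denominator: standard Gaussian
    return u / abs(v) ** (1 / beta)

steps = [abs(levy_step()) for _ in range(2000)]
```

Sampling many steps shows the characteristic mix of short moves (local refinement) with rare very long jumps (escaping local optima).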
In order to avoid the profound negative impact on the population caused by elite individuals trapped in local optima, this paper introduces several elite individuals into the OBH algorithm. At the same time, we found that the BH algorithm does not fully use the achievements of previous generations of individuals in the search space, so we introduce an inertia weight into the algorithm to make full use of the current population information. Experiments show that these two improvements raise the algorithm's performance. The following is the position update formula of individuals in the OBH algorithm:
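Since Equation (18) is not reproduced in this excerpt, the following is only a hypothetical sketch of how an inertia weight `omega` could be combined with the usual BH pull toward the black hole; the exact form used in the paper may differ:

```python
import random

rng = random.Random(3)

def obh_move(x, x_bh, omega=0.7):
    """Hypothetical OBH position update (assumption -- not the paper's
    Equation (18)): an inertia weight omega scales the star's own
    position before the usual random pull toward the black hole x_bh."""
    return [omega * xi + rng.random() * (bi - xi)
            for xi, bi in zip(x, x_bh)]

new_x = obh_move([4.0, -2.0], [1.0, 1.0])
```

The inertia term lets each star carry some of its previous-generation information forward instead of being driven purely by the black hole's attraction.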
The flow of the OBH algorithm is described in Algorithm 3. At each iteration, the population is moved according to the evolutionary logic of the black hole algorithm and then sorted according to fitness. In order to retain more useful information in the population, this paper keeps the best $k$ elite individuals. In order to apply opposition operations to the remaining individuals, it is necessary to determine the evolutionary state of the current population. Finally, according to the jump rate and the population's evolution state, the corresponding opposite individuals are generated and their fitness is evaluated.
Algorithm 3 OBH algorithm.
1: Initialization: population size N, dimension D, MaxGeneration, jump rate, number k of elite individuals to be reserved
2: while t < MaxGeneration do
3:   for each individual in the population do
4:     Calculate the fitness of the individual on the test function set
5:     Select the black hole individual according to fitness
6:     Update the population location according to Equation (18)
7:     If the performance of an individual after moving is better than that of the black hole, it becomes the new black hole
8:   end for
9:   Sort the population after one iteration and keep the top k individuals
10:  Calculate the average distances according to Equation (12)
11:  Calculate the population evolution factor f according to Equation (17)
12:  Judge the state of the population according to its previous state and f
13:  for each remaining individual do
14:    Generate a random number p
15:    if p is less than the jump rate then
16:      Generate the corresponding opposite individual according to the current state of the population
17:      Evaluate the individual after opposition; if its performance is better than the global optimal solution, it becomes the new black hole
18:    end if
19:  end for
20:  t = t + 1
21: end while
Algorithm 4 describes how to select the appropriate opposition rule for an individual according to the current evolutionary state of the population. It is worth noting that the probability of applying an opposition rule to an individual is not fixed and needs to be adjusted continuously. At the same time, individuals with low fitness gain more benefit from strong opposition operations than individuals with high fitness.
Algorithm 4 Opposition rules.
1: Initialization: population evolution factor f, individual x
2: When the population is in the exploration state, reduce the probability of part of the opposition forms while increasing the probability of the others.
3: When the population is in the convergence state, reduce the probability of part of the opposition forms while increasing the probability of the others.
4: When the population is in the exploitation state, appropriately increase the probability of certain opposition forms; otherwise, appropriately increase the probability of the complementary forms.
Equation (19) describes the fitness function used when OBH optimizes MCL, where $D$ is the dimension of an individual and $N$ is the number of unknown nodes. The OBH algorithm is used to optimize the approximate positions of the unknown nodes obtained by MCL positioning and finally obtains more accurate positions. Therefore, the approximate positions are taken as the input, and the optimized precise positions are output as the result. Since this is a simulation experiment, the real positions of the unknown nodes are known. When OBH optimizes MCL, the fitness function is the sum of the distances between the approximate position of each node and its real position.
For swarm intelligence algorithms, the number of fitness evaluations is a precious resource. In the OBH algorithm, each individual may undergo multiple fitness evaluations per iteration, because the generated opposite individuals must also be evaluated. Therefore, in order to keep the total number of fitness evaluations unchanged, we reduce the number of iterations of the algorithm.
Fitness evaluation is the most time-consuming operation in the algorithm, so when calculating the time complexity of the algorithm, the number of fitness evaluations is used as the measure. The time complexity of each call to the BH algorithm is $O(N \cdot T)$, where $N$ is the size of the population and $T$ is the number of iterations. The Levy flight added to the OBH algorithm does not increase the number of fitness evaluations, so it does not increase the complexity of the algorithm. Judging the evolutionary state of the population introduces a loop over the population and its $D$ dimensions, but this loop performs no fitness evaluation, so it does not significantly increase the running time. When the opposition operation is performed on an individual, the fitness function is evaluated; in order to keep the number of fitness evaluations of the OBH algorithm consistent with that of the BH algorithm, the total number of iterations of the OBH algorithm is reduced accordingly. Therefore, the time complexity of the OBH algorithm is also $O(C \cdot N \cdot T)$, where $C$ is a constant less than 2. According to the above discussion, compared with the performance improvement brought by the OBH algorithm, its increased running time is acceptable.
6. Application in Monte Carlo Localization
This section applies the OBH algorithm to MCL and replaces the general algorithms in Section 5.2 with several excellent algorithms, such as adaptive particle swarm optimization (APSO) [44], differential evolution (DE) [45], GWO, and PSO, in order to further exploit the potential of the OBH algorithm. In each group of experiments, the total number of nodes remained unchanged at 200, and the number of anchor nodes gradually increased from 5 to 30.
Figure 7 shows the comparison results of the OBH algorithm and the other six intelligent algorithms applied to MCL in a deployment area of 200 m × 200 m × 200 m. The y-axis represents the mean positioning error. As the number of anchor nodes increases, the positioning accuracy of each swarm intelligence algorithm improves significantly. The average positioning accuracy of the seven algorithms optimizing MCL is shown in Table 4. The localization accuracy of the OBH algorithm is always the highest, which shows that its positioning accuracy is the best of the seven algorithms.
To further verify the performance of the OBH algorithm, the deployment area of the 3D experiment was expanded to 400 m × 400 m × 400 m. Figure 8 shows the comparison of the positioning accuracy of the seven algorithms. As seen from Figure 8, OBH is also the best among the seven algorithms in the larger deployment area. Even when there are only five anchor nodes in the deployment area, OBH can still effectively eliminate the interference of erroneous external information, give full play to the power of the algorithm, and further improve positioning accuracy. As the anchor node density increases, the OBH algorithm maintains a considerable advantage: it optimizes the positioning results of the MCL algorithm to the greatest extent and achieves the best results among the seven swarm intelligence algorithms. The average positioning accuracy of the seven algorithms optimizing MCL is shown in Table 5. It can be seen that in the deployment area of 400 m × 400 m × 400 m, the best positioning accuracy is obtained by using OBH to optimize the unknown node positions produced by MCL. When the increase in the deployment area leads to a decrease in the density of anchor nodes, the useful information transmitted by the anchor nodes in the network is very limited, yet OBH can still exploit this small amount of information during optimization in the search space. Thus, when the approximate positions of the unknown nodes obtained by MCL positioning are passed to the seven optimization algorithms, OBH finds the best solutions in the search space.
7. Conclusions
The BH algorithm has a simple structure and few parameters, making it well suited to wireless sensor network localization with limited memory and computing power. However, the traditional BH algorithm performs only moderately and cannot stand out among the many swarm intelligence algorithms. In this paper, we combine the OBL strategy with the BH algorithm and propose the OBH algorithm.
The OBH algorithm reduces the number of duplicate individuals as much as possible while preserving the dominant individuals, fully uses the current population information, and avoids wasting valuable fitness evaluations. At the same time, an adaptive strategy accurately judges the evolution state of the population and selects the best OBL strategy for the algorithm, significantly improving its performance. To verify the improvement in the OBH algorithm, this paper compares five excellent swarm intelligence algorithms with OBH, and the results show that the OBH algorithm performs best among the six algorithms. Finally, this paper applies OBH and the other six algorithms to MCL. The simulation results show that the MCL positioning accuracy optimized by OBH is the highest, which indicates that the performance of the OBH algorithm is excellent.
Compared with the BH algorithm, the performance improvement of the OBH algorithm is very obvious, but it comes at a price. At each iteration, the algorithm calculates the evolutionary state of the population and performs the opposition operations, which increases its runtime. For a certain period of time, the population and the opposite population exist in the algorithm simultaneously, which also increases the demand placed on the memory resources of the sensor nodes to a certain extent.
It is inevitable that the opposition operation increases the runtime of the algorithm, but we can reduce its memory requirements through the compact strategy. The compact strategy uses a probability distribution to model the population, so the algorithm only needs to maintain the parameters of the distribution, and updating the parameters replaces updating the population, which greatly reduces the memory requirements of the algorithm. Therefore, if we want to reduce the burden on sensor node memory, improving the algorithm with the compact strategy will be the focus of future work.