1. Introduction
Over the past decades, metaheuristic algorithms (MAs) have become more prevalent in solving optimization problems in various fields of industry and science [1]. The widespread usage of MAs for solving different optimization problems has verified their ability to solve complex problems with difficulties such as non-linear constraints, multi-modality, and a non-convex search landscape [1,2]. Metaheuristics are a class of general-purpose stochastic algorithms that can be applied to any optimization problem [3]. Unlike exact algorithms, metaheuristics allow tackling complex problems by providing satisfactory solutions in a reasonable time [4]. MAs estimate the approximate optimal solution of a problem by sampling the solution space to find or generate better solutions. Many combinatorial optimization problems have been solved using metaheuristic algorithms in diverse engineering fields such as civil, mechanical, electrical, industrial, and systems engineering. Several new optimization algorithms have been proposed recently due to the no free lunch (NFL) theorem [5], which states that no particular optimization algorithm can solve all problems of all kinds of complexities. It has also been observed that running the same algorithm on the same problem yields varied results depending on the parameter settings.
The inspiration and imitation of creatures' behaviors has led to many effective metaheuristics for finding the optimum solution to different problems. Based on their source of inspiration, MAs can be broadly classified into two main categories: evolutionary and swarm intelligence algorithms. The algorithms that imitate an evolutionary phenomenon in nature are classified as evolutionary algorithms. These algorithms improve a randomly generated population of solutions for a particular optimization problem by employing evolutionary principles. The genetic algorithm (GA) [6], genetic programming (GP) [7], evolution strategy (ES) [8], evolutionary programming (EP) [9], and differential evolution (DE) [10] are the most well-known algorithms in this category. Swarm intelligence algorithms mimic simple behaviors of social creatures in which the individuals cooperate and interact collectively to find promising regions. Some of the best-known and recently proposed swarm intelligence algorithms are particle swarm optimization (PSO) [11], the bat algorithm (BA) [12], krill herd (KH) [13], the grey wolf optimizer (GWO) [14], the whale optimization algorithm (WOA) [15], the salp swarm algorithm (SSA) [16], the squirrel search algorithm (SSA) [17], the African vultures optimization algorithm (AVOA), and the Aquila optimizer (AO) [18].
On the other hand, the optimal power flow (OPF) is a non-linear and non-convex problem that is considered one of the most complex optimization problems in power systems [19]. OPF adjusts both continuous and discrete control variables to optimize specified objective functions while satisfying the operating constraints [20]. From the perspective of industries and power companies, minimizing the operational cost and maximizing the reliability of power systems are two primary objectives. Since a slight modification in power flow can considerably raise the running expense of power systems, the OPF focuses on the economic aspect of operating power systems [21]. Non-linear programming [22], the Newton algorithm [23], and quadratic programming [24] are some of the classical optimization algorithms that have been employed to tackle the OPF problem. Although these algorithms can sometimes find the global optimum solution, they have drawbacks such as getting trapped in local optima, a high sensitivity to initial positions, and the inability to deal with non-differentiable objective functions [25,26,27]. Thus, it is essential to develop effective optimization algorithms to overcome these shortcomings and deal with such challenges efficiently.
Regarding MAs, the whale optimization algorithm (WOA) is a swarm-based algorithm inspired by the hunting behavior of humpback whales in nature. The humpback whales use the bubble-net hunting technique to encircle and catch prey gathered in schools of fish close to the water surface. The whales dive below the surface toward the prey, then swim in a spiral-shaped path while creating bubbles. The radius of the whales' spiral movement narrows as the prey get closer to the surface, enabling the whales to attack. WOA consists of three phases: encircling the prey, bubble-net attacking, and searching for the prey. WOA has been used to solve a wide range of optimization problems in different applications including feature selection [28], software defect prediction [29], clustering [30,31], classification [32,33], disease diagnosis [34], image segmentation [35,36], scheduling [37], forecasting [38,39], parameter estimation [40], global optimization [41], and photovoltaic energy generation systems [42,43]. Even though WOA has been employed to tackle a wide variety of optimization problems, it still has flaws such as premature convergence, an imbalance between exploration and exploitation, and local optima stagnation [44,45].
This paper proposes an effective whale optimization algorithm for solving the optimal power flow problem (EWOA-OPF). The EWOA-OPF improves the movement strategy of whales by introducing two new movement strategies, (1) encircling the prey using Levy motion and (2) searching for prey using Brownian motion, that cooperate with the canonical bubble-net attacking. These changes aim to maintain an appropriate balance between exploration and exploitation and to enhance the exploration ability of the WOA, resulting in more precise solutions. To validate the proposed EWOA-OPF algorithm, a comparison among well-known optimization algorithms is established under single- and multi-objective functions of the OPF. Standard IEEE 6-bus, IEEE 14-bus, IEEE 30-bus, and IEEE 118-bus test systems are used to evaluate the proposed EWOA-OPF and comparative algorithms for solving OPF problems over diverse power system sizes. The results were compared with four state-of-the-art algorithms, particle swarm optimization (PSO) [11], krill herd (KH) [13], the grey wolf optimizer (GWO) [14], and the whale optimization algorithm (WOA) [15], and two recently proposed algorithms, the salp swarm algorithm (SSA) [16] and the Aquila optimizer (AO) [18]. The comparison of results proves that the EWOA-OPF can solve single- and multi-objective OPF problems with better solutions than the other comparative algorithms.
The rest of the paper is organized as follows: the related works are reviewed in Section 2. Section 3 presents the OPF problem formulation and objective functions. Section 4 contains the mathematical model of WOA. Section 5 presents the proposed EWOA-OPF. The experimental evaluation of EWOA-OPF and comparative algorithms on the OPF is presented in Section 6. Finally, the conclusion and future work are given in Section 7.
2. Related Work
The purpose of optimization is to find the global optimum solution among numerous candidate solutions. Traditional optimization methods have several drawbacks when solving complex problems that require considerable time and cost to optimize. Metaheuristic algorithms have been proven capable of handling a variety of continuous and discrete optimization problems [46] in a wide range of applications including engineering [47,48,49], industry [50,51], image processing and segmentation [52,53,54], scheduling [55,56], photovoltaic modeling [57,58], optimal power flow [59,60], power and energy management [61,62], planning and routing problems [63,64,65], intrusion detection [66,67], feature selection [68,69,70,71,72], spam detection [73,74], medical diagnosis [75,76,77], quality monitoring [78], community detection [79], and global optimization [80,81,82]. In the following, some representative metaheuristic algorithms from the swarm intelligence category used in our experiments are described. Then, some metaheuristic algorithms that have been used to solve the OPF are highlighted.
Swarm intelligence algorithms mimic the collective behavior of creatures in nature such as birds, fishes, wolves, and ants. The main principle of these algorithms is to deal with a population of particles that can interact with each other. Eberhart and Kennedy proposed the particle swarm optimization (PSO) [11] method, which simulates bird flocks' foraging and navigation behavior. It is derived from basic laws of interaction among birds, which prefer to retain their flight direction considering their current direction, the local best position gained so far, and the global best position that the swarm has discovered thus far. The PSO algorithm concurrently directs the particles toward the best solutions found by each individual and by the swarm. The krill herd (KH) [13] algorithm is a population-based metaheuristic that models the herding behavior of krill individuals. The position of each krill is determined by three motions, movement induced by other krill, foraging activity, and random diffusion, which the algorithm repeats until it finds the optimum answer.
Another well-known swarm intelligence algorithm is the grey wolf optimizer (GWO) [14], which is inspired by grey wolves in nature that look for the best approach to pursue prey. The GWO algorithm follows the pack hierarchy of grey wolves to organize the various responsibilities in the hunting process. GWO divides pack members into four groups depending on each wolf's involvement in the hunting process: alpha, beta, delta, and omega, with alpha representing the best hunting solution discovered so far. The salp swarm algorithm (SSA) [16] is another recent optimizer that is based on the natural swarming behavior of salps. It creates and evolves a set of random individuals within the problem's search space; the chain's leader and followers then update their position vectors. The leader salp moves in the direction of a food source, while the rest of the salps advance toward the leader. The Aquila optimizer (AO) [18] is one of the latest proposed algorithms in the swarm intelligence category; it simulates the prey-catching behavior of the Aquila in nature. In AO, four methods were used to emulate this behavior: selecting the search space by a high soar with a vertical stoop, exploring within a divergent search space by contour flight with a short glide attack, exploiting within a convergent search space by low flight with a slow descent attack, and swooping by walking and grabbing prey.
Regardless of the nature of the algorithm, the majority of metaheuristics, especially the population-based algorithms, balance two opposing criteria in the search process: the exploration of the search space and the exploitation of the best solutions gained so far. In exploitation, the promising regions are examined more thoroughly to generate solutions similar to, and better than, those previously obtained. In exploration, non-explored regions must be visited to ensure that all regions of the search space are evenly explored and that the search is not limited to a reduced number of regions. Excessive exploitation decreases diversity and leads to premature convergence, whereas excessive exploration leads to slow convergence [83]. Thus, metaheuristic algorithms try to balance exploration and exploitation, which has a crucial impact on the performance of the algorithm and the quality of the obtained solution [84]. Furthermore, real-world problems often require achieving several objectives that are in conflict with one another, such as minimizing risks, maximizing reliability, and minimizing cost. In a single-objective problem, there is only one objective function to be optimized and only one global solution to be found. However, in multi-objective problems, as there is no single best solution, the aim is to find a set of solutions representing the trade-offs among the different objectives [85].
Although metaheuristic algorithms have several merits over classical optimization algorithms, such as their simple structure, independence from the problem structure, gradient-free nature, and ability to find near-global solutions [14], they may encounter premature convergence, local optima entrapment, and loss of diversity. In this regard, improved variants of these algorithms have been proposed, each adapted to tackle such weaknesses [86,87,88]. Additionally, the significant growth of metaheuristic algorithms has resulted in a trend of solving OPF problems using population-based metaheuristic algorithms. In the literature, the OPF has been solved using the black hole (BH) algorithm [89], teaching–learning based optimization (TLBO) [90], the krill herd (KH) algorithm [91], the equilibrium optimizer (EO) algorithm [92], and the slime mould algorithm (SMA) [93]. Additionally, some studies used modified and enhanced versions of the canonical swarm intelligence algorithms for solving OPF with different test systems, such as the modified shuffled frog leaping algorithm (MSFLA) for multi-objective optimal power flow [94], which added a mutation strategy to overcome the problem of being trapped in local optima.
Another work proposed an improved grey wolf optimizer (I-GWO) [95] that improves the GWO search strategy with a dimension learning-based hunting search strategy to deal with exploration and exploitation imbalances and premature convergence weaknesses. In [96], quasi-oppositional teaching–learning based optimization (QOTLBO) was proposed to improve the convergence speed and the quality of the obtained solutions by using quasi-oppositional based learning (QOBL). In [97], the particle swarm optimization with an aging leader and challengers (ALC-PSO) algorithm was applied to solve the OPF problem by using the concept of the leader's age and lifespan. The aging mechanism can avoid the premature convergence of PSO and results in better convergence. An improved artificial bee colony optimization algorithm based on orthogonal learning (IABC) [98] was proposed to adjust exploration and exploitation. In [99], the modified sine–cosine algorithm (MSCA) aimed to reduce the computational time while sufficiently improving the ability to find optimal and feasible solutions. The MSCA combines Levy flights with the strategy of the canonical sine–cosine algorithm to avoid local optima. In the high-performance social spider optimization algorithm (NISSO) [100], the canonical SSO algorithm was modified with two new movement strategies, which resulted in faster convergence to the optimal solution and better solutions in comparison with the comparative algorithms.
4. The Whale Optimization Algorithm (WOA)
The whale optimization algorithm (WOA) [15] is inspired by the hunting behavior of humpback whales in nature. The humpback whales use the bubble-net hunting technique to encircle and catch prey gathered in groups of small fishes. In WOA, the best whale position is considered the prey position X*, and the other whales update their positions according to X*. The three behaviors of whales in WOA, encircling prey, bubble-net attacking (exploitation), and searching for prey (exploration), are modeled as in the following definitions.
Encircling prey: The first step in the whales' hunting process is surrounding the prey. Whales can detect the position of the prey and begin to surround it. Therefore, in WOA, the current best whale X* is considered the prey or close to the prey. All other whales update their position according to X* by Equations (6) and (7):

D = |C · X*(t) − X(t)|, (6)

X(t + 1) = X*(t) − A · D, (7)

where t is the iteration counter and D is the distance between the prey X*(t) and the whale X(t). A and C are coefficient vectors that are calculated by Equations (8) and (9):

A = 2a · r − a, (8)

C = 2r, (9)

where the value of a is linearly decreased from 2 to 0 over the course of the iterations, and r is a random number in [0, 1].
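As an illustration, the encircling update of Equations (6)-(9) can be sketched in Python with NumPy; the function name and the vectorized form are our own, not from the paper:

```python
import numpy as np

def encircle(x, x_best, t, max_iter, rng):
    """One encircling-prey update for a single whale (Eqs. (6)-(9))."""
    a = 2.0 * (1.0 - t / max_iter)   # a decreases linearly from 2 to 0
    r = rng.random(x.shape)          # random vector in [0, 1]
    A = 2.0 * a * r - a              # Eq. (8)
    C = 2.0 * rng.random(x.shape)    # Eq. (9)
    D = np.abs(C * x_best - x)       # Eq. (6): distance to the prey
    return x_best - A * D            # Eq. (7): move toward the prey
```

Here a single whale is updated; in the full algorithm this step is applied to every member of the population. Note that as a (and hence A) shrinks to 0, the whale collapses onto the best position found so far.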
Bubble-net attacking: Whales spin around the prey using a shrinking encircling mechanism or a spiral updating position. This behavior is modeled by Equation (10):

X(t + 1) = X*(t) − A · D if p < 0.5; X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t) if p ≥ 0.5, (10)

where p is a random number in [0, 1] that gives the probability of updating the whales' positions based on the shrinking encircling mechanism (if p < 0.5) or the spiral updating position (if p ≥ 0.5). A is a random value in [−a, a], where a is linearly decreased from 2 to 0 over the course of the iterations. In the spiral updating position, D′ = |X*(t) − X(t)| represents the distance between the current whale X and the prey X*, b represents a constant used to define the shape of the spiral movement, and l is a random number in [−1, 1].
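The spiral branch of Equation (10) can be sketched as follows; this helper is our own illustration, assuming a scalar l per whale:

```python
import numpy as np

def bubble_net(x, x_best, b=1.0, rng=None):
    """Spiral updating position around the prey (the spiral branch of Eq. (10))."""
    rng = rng or np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)        # random number in [-1, 1]
    D_prime = np.abs(x_best - x)      # distance from whale to prey
    # logarithmic spiral toward the prey position
    return D_prime * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best
```

When the whale already sits on the prey (D′ = 0), the update leaves it at X*, which matches the shrinking-spiral intuition.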
Searching for prey: In order to find new prey, whales conduct a global search through the search space. This is performed when the absolute value of vector A is greater than or equal to 1, which leads to exploration; otherwise, exploitation is performed. In the exploration phase, the whales update their position with respect to a random whale Xrand instead of the best whale X*, which is calculated using Equations (11) and (12):

D = |C · Xrand − X|, (11)

X(t + 1) = Xrand − A · D, (12)

where Xrand is a randomly selected whale from the current population.
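The exploration step of Equations (11) and (12) can be sketched as below; the way the random whale is drawn (excluding the current individual) is our assumption, since the paper only states that Xrand is randomly selected:

```python
import numpy as np

def search_prey(x, population, i, a, rng):
    """Exploration step (Eqs. (11)-(12)): move relative to a random whale."""
    # pick a random whale other than the current one (assumed detail)
    idx = rng.integers(len(population))
    while idx == i:
        idx = rng.integers(len(population))
    x_rand = population[idx]
    r = rng.random(x.shape)
    A = 2.0 * a * r - a               # Eq. (8); |A| >= 1 triggers exploration
    C = 2.0 * rng.random(x.shape)     # Eq. (9)
    D = np.abs(C * x_rand - x)        # Eq. (11)
    return x_rand - A * D             # Eq. (12)
```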
5. Effective Whale Optimization Algorithm to Solve Optimal Power Flow (EWOA-OPF)
While WOA is easy to implement and applicable to a wide range of optimization problems, its performance is insufficient for solving complex problems. The algorithm suffers from premature convergence to local optima and an insufficient balance between exploration and exploitation. Such problems lead to inadequate performance of the WOA when it is used to solve complex problems. Motivated by these considerations, an enhanced version of the WOA named the effective whale optimization algorithm (EWOA-OPF) is proposed for solving the optimal power flow problem. Since maintaining an appropriate balance between exploration and exploitation can prevent premature convergence and control the global search ability of the algorithm, the canonical WOA's strategies of encircling the prey and searching for prey are replaced by two new movement strategies. This modification aims to enhance the exploitative and explorative capabilities of WOA, leading to more accurate solutions. In the following, the proposed EWOA-OPF is explained in detail.
Initializing step: N whales are randomly generated and distributed in the search space within the predefined range [LB, UB] using Equation (13):

Xij = LBj + rand × (UBj − LBj), (13)

where Xij is the position of the i-th whale in the j-th dimension, LBj and UBj are the lower and upper bounds of the j-th dimension, and rand is a uniformly distributed random number between 0 and 1. The fitness value of whale Xi in the t-th iteration is calculated by the fitness function f(Xi(t)), and the whale with the best fitness is considered X*, the best solution obtained so far.
Encircling prey using Levy motion: Whales update their position by considering the position of X* and the Levy-based pace scale PSL by Equation (14), where Xj*(t) is the j-th dimension of the best whale, C is a coefficient linearly decreased from 1 to 0 over the course of the iterations, and PSLij is the j-th dimension of the i-th row of the pace scale calculated by Equation (15). RL is a randomly generated number based on Levy movement, which is calculated by Equation (16):

RL = u / |v|^(1/β), (16)

where u and v follow Gaussian distributions defined by Equations (17) and (18):

u ~ N(0, σu²), with σu = [Γ(1 + β) · sin(πβ/2) / (Γ((1 + β)/2) · β · 2^((β−1)/2))]^(1/β), (17)

v ~ N(0, 1), (18)

where Γ is the Gamma function and β = 1.5.
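Equations (16)-(18) correspond to Mantegna's algorithm for generating Levy-distributed step lengths. A sketch of that generator (the function name and vectorization are ours):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(shape, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-distributed steps (Eqs. (16)-(18))."""
    rng = rng or np.random.default_rng()
    # standard deviation of u per Eq. (17)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, shape)   # Eq. (17)
    v = rng.normal(0.0, 1.0, shape)       # Eq. (18)
    return u / np.abs(v) ** (1 / beta)    # Eq. (16)
```

The heavy-tailed steps produced here are what give the Levy-based encircling its occasional long jumps, which help the whales escape local optima.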
Bubble-net attacking: Whales spin around the prey within a shrinking encircling mechanism and a spiral updating position. This behavior is the same as in the canonical WOA and is calculated by Equations (19) and (20):

X(t + 1) = X*(t) − A · D, (19)

X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t), (20)

where D′ is the distance between the current whale X and the prey X*, b represents a constant used to define the shape of the whales' spiral movement, and l is a random number in [−1, 1].
Searching for prey using Brownian motion: Whales update their position by considering the position of X* and the Brownian-based pace scale PSB by Equation (21), where A is a decreasing coefficient calculated by Equation (8), rand is a random number, and PSB is the Brownian-based pace scale calculated by Equation (22), in which RB is a random number drawn from the standard normal distribution, representing Brownian motion.
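Per the description of Equation (22), the Brownian-based pace scale is built from standard normal draws; a minimal sketch (function name is ours):

```python
import numpy as np

def brownian_pace(shape, rng=None):
    """Brownian-based pace scale: standard normal draws (RB in Eq. (22))."""
    rng = rng or np.random.default_rng()
    return rng.normal(0.0, 1.0, shape)
```

Unlike the heavy-tailed Levy steps, these Gaussian steps have a finite variance, producing many moderate moves that drive uniform, controlled exploration of the space.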
After determining the new positions of the whales, their fitness is calculated and the prey position X* is updated. The search process is iterated until the predefined number of iterations (MaxIter) is reached. The pseudo-code of the proposed EWOA-OPF is shown in Algorithm 1.
Algorithm 1 The EWOA-OPF algorithm
Input: N, D, MaxIter
Output: The global optimum (X*)
1:  Begin
2:    iter = 1.
3:    Randomly distribute N whales in the search space.
4:    Evaluate the fitness and set X*.
5:    While iter ≤ MaxIter
6:      For i = 1 : N
7:        Calculate coefficients a, A, C, and l.
8:        For j = 1 : D
9:          If (p < 0.5)
10:           If (|A| < 1)
11-12:          Update the position by encircling the prey using Levy motion (Equations (14) and (15)).
13:           Elseif (|A| ≥ 1) and (iter < MaxIter/3)
14-15:          Update the position by searching for prey using Brownian motion (Equations (21) and (22)).
16:           End if
17:         Elseif (p ≥ 0.5)
18-19:        Update the position by bubble-net attacking (Equations (19) and (20)).
20:         End if
21:       End for
22:     End for
23:     Evaluate fitness and update X*.
24:     iter = iter + 1.
25:   End while
26:   Return the global optimum (X*).
27: End
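To make the control flow of Algorithm 1 concrete, the following Python sketch assembles the steps above. It is not the paper's implementation: the position-update lines marked "assumed" are simplified stand-ins for Equations (14) and (21), whose exact forms are not reproduced here; only the branching, the Levy/Brownian pace terms, and the decreasing coefficients follow the text.

```python
import numpy as np
from math import gamma, sin, pi

def ewoa(f, lb, ub, n=30, max_iter=200, b=1.0, beta=1.5, seed=0):
    """Loop structure of Algorithm 1 (EWOA-OPF), with assumed update forms."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    X = lb + rng.random((n, d)) * (ub - lb)            # Eq. (13): initialization
    fit = np.array([f(x) for x in X])
    k = int(fit.argmin())
    best, best_fit = X[k].copy(), fit[k]
    # sigma_u for Mantegna's Levy generator, Eq. (17)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for it in range(1, max_iter + 1):
        a = 2.0 * (1.0 - it / max_iter)                # a: 2 -> 0
        c = 1.0 - it / max_iter                        # C: 1 -> 0
        for i in range(n):
            p, r = rng.random(), rng.random(d)
            A = 2.0 * a * r - a                        # Eq. (8)
            if p < 0.5:
                if np.abs(A).mean() < 1.0:             # Levy encircling
                    u = rng.normal(0.0, sigma_u, d)
                    v = rng.normal(0.0, 1.0, d)
                    ps = u / np.abs(v) ** (1 / beta)   # Levy pace, Eq. (16)
                    X[i] = best + c * ps * (best - X[i])            # assumed form
                elif it < max_iter / 3:                # Brownian search, early phase
                    ps = rng.normal(0.0, 1.0, d)       # Brownian pace, Eq. (22)
                    X[i] = best - A * rng.random() * ps * np.abs(best - X[i])  # assumed form
            else:                                      # bubble-net spiral, Eq. (20)
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * pi * l) + best
            X[i] = np.clip(X[i], lb, ub)               # keep whales inside [LB, UB]
        fit = np.array([f(x) for x in X])
        k = int(fit.argmin())
        if fit[k] < best_fit:                          # update the prey position X*
            best, best_fit = X[k].copy(), fit[k]
    return best, best_fit
```

For the OPF application, f would be the (penalized) power-flow objective over the control variables; here any bounded black-box function can be passed in.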