1. Introduction and Literature Review
Optimization is the process of selecting the best solution from a given set of alternatives. Optimization processes are evident all around us. For example, to run a generating company, the operator has to manage operating costs and deal with various types of markets to execute financial transactions. The operator has to optimize the fuel purchase cost, sell power at the maximum rate and purchase carbon credits at the minimum cost to earn a profit. Sometimes, optimization processes involve various stochastic variables to model the uncertainty in the process. Such processes are quite difficult to handle and often pose a severe challenge to the optimizer or solution-providing algorithm. The evolution of modern optimizers is the outcome of these complex, combinatorial, multimodal, nonlinear optimization problems. Unlike classical optimizers, where the search starts from an initial guess, modern optimizers are based on stochastic variables and are therefore less vulnerable to entrapment in local minima. These problems became the main driver behind the emergence of metaheuristic algorithms, which are capable of finding a near-optimal solution in less computation time. The popularity of metaheuristic algorithms [
1] has increased exponentially in the last two decades due to their simplicity, derivation-free mechanism, flexibility and ability to provide better results than conventional methods. The main inspiration for these algorithms is nature; hence, they are also known as nature-inspired algorithms [
2].
Social mimicry of nature and living processes, behavioral analysis of animals and cognitive viability are some of the attributes of nature-inspired algorithms. Darwin’s theory of evolution has inspired several nature-inspired algorithms, based on the properties of “inheritance of good traits” and “competition, i.e., survival of the fittest”. These include the Genetic Algorithm [
3], Differential Evolution and Evolutionary Strategies [
4].
The other popular philosophy is to mimic the behavior of animals searching for food. In these approaches, food or prey serves as a metaphor for the global minimum in mathematical terms. Exploration, exploitation and convergence towards the global minimum are mapped onto animal behavior. Most nature-inspired algorithms, also known as population-based algorithms, can be further classified as follows:
Bio-inspired Swarm Intelligence (SI)-based algorithms: This category includes all algorithms inspired by any behavior of swarms or herds of animals or birds. Since most birds and animals live in flocks or groups, there are many algorithms that fall under this category, such as Ant Colony Optimization (ACO) [
5], Artificial Bee Colony [
6], Bat Algorithm [
7], Cuckoo Search Algorithm [
8], Krill herd Algorithm [
9], Firefly Algorithm [
10], Grey Wolf Optimizer [
11], Bacterial Foraging Algorithm [
12], Social Spider Algorithm [
13], Cat Swarm Optimization [
14], Moth Flame Optimization [
15], Ant Lion Optimizer [
16], Crow Search Algorithm [
17] and Grasshopper Optimization Algorithm [
18]. A social interaction-based algorithm named gaining and sharing knowledge was proposed in reference [
19]. References pertaining to the applications of bio-inspired algorithms affirm the suitability of these algorithms for real-world problems [
20,
21,
22,
23]. A timeline of some famous bio-inspired algorithms is presented in
Figure 1.
Physics- or chemistry-based algorithms: Algorithms developed by mimicking any physical or chemical law fall under this category. Some of them are Big bang-big crunch Optimization [
24], Black Hole [
25], Gravitational search Algorithm [
26], Central Force [
27] and Charged system search [
28].
Other than these population-based algorithms, a few different algorithms have also been proposed to solve specific mathematical problems. In [
29,
30], the authors proposed the concept of construction, solution and merging. Another greedy randomized adaptive search-based algorithm, using an improved version of integer linear programming, was proposed in [
31].
The No Free Lunch Theorem proposed by Wolpert et al. [
32] states that no single metaheuristic algorithm can solve all optimization problems; one algorithm may be very effective for certain problems but ineffective for others. Due to the popularity of nature-inspired algorithms in providing reasonable solutions to complex real-life problems, many new nature-inspired optimization techniques are being proposed in the literature. It is interesting to note that all bio-inspired algorithms are subsets of nature-inspired algorithms. Among all of these algorithms, the popularity of bio-inspired algorithms has increased exponentially in recent years. Despite this popularity, these algorithms have also been critically reviewed [
33].
In 2016, Mirjalili et al. [
34] proposed a new nature-inspired algorithm called the Whale Optimization Algorithm (WOA), inspired by the bubble-net hunting behavior of humpback whales. The humpback whale belongs to the rorqual family of whales, known for their huge size: an adult can be 12–16 m long and weigh 25–30 metric tons. They have a distinctive body shape and are known for breaching the water with astonishing gymnastic skill and for the haunting songs sung by males during the migration period. Humpback whales feed on krill and small schools of fish. To hunt their prey, they follow a unique strategy of encircling it along a spiral while gradually shrinking the circles of that spiral. By incorporating this behavior, WOA achieves performance superior to many other nature-inspired algorithms. Recently, in [
35], WOA was used to solve the optimization problem of the truss structure. WOA has also been used to solve the well-known economic dispatch problem in [
36]. The problem of unit commitment from electric power generation was solved through WOA in [
37]. In [
38], the author applied WOA to the long-term optimal operation of a single reservoir and cascade reservoirs. The following are the main reasons for selecting WOA (a rough sketch of the core position update is given after this list):
There are few parameters to control, so it is easy to implement and very flexible.
This algorithm can transition smoothly between the exploration and exploitation phases, as both are governed by a single parameter.
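For orientation, the following is a minimal Python sketch of the canonical WOA position update described by Mirjalili et al. [34] (shrinking encirclement, search for prey, and the logarithmic spiral); the function signature and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

def woa_position_update(X, X_best, t, max_iter, b=1.0, rng=None):
    """One iteration of the canonical WOA position update (illustrative sketch).

    X      : (n_whales, dim) array of current positions
    X_best : (dim,) best solution found so far (the "prey")
    t      : current iteration; max_iter : total iterations
    """
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    a = 2.0 - 2.0 * t / max_iter             # 'a' decreases linearly from 2 to 0
    X_new = np.empty_like(X)
    for i in range(n):
        A = 2.0 * a * rng.random() - a        # controls exploration vs exploitation
        C = 2.0 * rng.random()
        if rng.random() < 0.5:                # encircling / searching mechanism
            if abs(A) < 1:                    # exploit: move towards the best whale
                D = np.abs(C * X_best - X[i])
                X_new[i] = X_best - A * D
            else:                             # explore: move relative to a random whale
                X_rand = X[rng.integers(n)]
                D = np.abs(C * X_rand - X[i])
                X_new[i] = X_rand - A * D
        else:                                 # bubble-net: logarithmic spiral around the prey
            l = rng.uniform(-1.0, 1.0)
            D = np.abs(X_best - X[i])
            X_new[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + X_best
    return X_new
```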
However, WOA sometimes suffers from slow convergence and entrapment in local minima due to its randomly initialized population. To overcome these shortcomings, in this paper, we propose two major modifications to the existing WOA:
The first modification is the incorporation of the opposition-based learning (OBL) concept in the initialization phase of the search process, i.e., the exploratory stage. OBL is a proven tool for enhancing the exploration capabilities of metaheuristic algorithms.
The second modification is to the position-updating phase, where the position vector is updated with the help of Cauchy-distributed random numbers (a minimal sketch of both modifications follows this list).
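As an illustration of the two modifications, the snippet below sketches an opposition-based initialization and a Cauchy-based perturbation in Python; the exact way these components are combined with the WOA update follows the framework of Section 3, and the function and parameter names used here are only illustrative.

```python
import numpy as np

def obl_initialization(obj, n, dim, lb, ub, rng=None):
    """Opposition-based initialization: generate a random population, form the
    opposite population (lb + ub - x) and keep the n fittest points overall
    (minimization assumed)."""
    rng = rng or np.random.default_rng()
    X = lb + (ub - lb) * rng.random((n, dim))
    X_opp = lb + ub - X                        # opposite points
    pool = np.vstack([X, X_opp])
    fitness = np.apply_along_axis(obj, 1, pool)
    return pool[np.argsort(fitness)[:n]]

def cauchy_perturbation(x, scale=1.0, rng=None):
    """Perturb a position vector with heavy-tailed, Cauchy-distributed steps;
    the occasional long jumps help the search escape local minima."""
    rng = rng or np.random.default_rng()
    return x + scale * rng.standard_cauchy(x.shape)
```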
The remaining part of this paper is organized as follows:
Section 2 describes the mathematical details of WOA.
Section 3 presents the proposed variant; an analogy based on the modified position update is also established within the proposed mathematical framework.
Section 4 includes the details of the benchmark functions.
Section 5 and
Section 6 present the results on the benchmark functions and on some real-life problems, along with different statistical analyses. Last but not least, the paper concludes in
Section 7 with a decisive evaluation of the results, and some directions for future work are indicated.
5. Result Analysis
In this section, various analyses are presented to assess the efficacy of the proposed modifications. To judge the optimization performance of the proposed AWOA, we have chosen some recently developed variants of WOA for comparison purposes. These variants are:
Lévy flight trajectory-based whale optimization algorithm (LWOA) [
50].
Improved version of the whale optimization algorithm that uses the opposition-based learning, termed OWOA [
41].
Chaotic Whale Optimization Algorithm (CWOA) [
51].
5.1. Benchmark Suite 1
Table 3 shows the optimization results of AWOA on Benchmark Suite 1 along with the competing WOA variants. The table reports the mean and standard deviation (SD) of the function values over 30 independent runs. The maximum number of function evaluations is set to 15,000. The first four functions in the table are unimodal. Benchmarking an algorithm on unimodal functions provides information about its exploitation capabilities. Inspecting the results of the proposed AWOA on the unimodal functions, it can easily be observed that its mean values are very competitive compared with the other variants of WOA.
For the rest of the functions, the reported mean values are competitive, and the best results are indicated in boldface. From this statistical analysis, we can conclude that the proposed modifications in AWOA are meaningful and have a positive impact on its optimization performance, especially on unimodal functions. Similarly, among the multimodal functions, BF-7, BF-9 to BF-11, BF-15 to BF-19 and BF-22 attain the best mean values. For the remaining functions, the mean values are competitive and the performance of the proposed AWOA does not deteriorate.
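For reference, the per-function statistics reported here (mean and SD over 30 independent runs) correspond to a computation of the following form; the `optimizer(obj, max_fes, seed)` interface returning the best objective value of a run is hypothetical and only for illustration.

```python
import numpy as np

def summarize_runs(optimizer, obj, n_runs=30, max_fes=15_000, seed0=0):
    """Run an optimizer n_runs times independently and report the mean and
    (sample) standard deviation of the best objective values found."""
    best = np.array([optimizer(obj, max_fes=max_fes, seed=seed0 + r)
                     for r in range(n_runs)])
    return best.mean(), best.std(ddof=1)
```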
5.1.1. Convergence Property Analysis
Similarly, the convergence plots for functions BF1 to BF4 have been plotted in
Figure 5 for the sake of clarity. From these convergence curves, it is observed that the proposed variant shows better convergence characteristics, and the proposed modifications help to enhance the convergence and exploration properties of WOA. It can be seen that AWOA converges very swiftly compared with its competitors. It should be noted that BF1–BF4 are unimodal functions, and the performance of AWOA on unimodal functions indicates enhanced exploitation properties. Furthermore, to showcase the optimization capabilities of AWOA on multimodal functions, the convergence plots are exhibited in
Figure 6 for BF9 to BF12. From these results, it can easily be concluded that the proposed AWOA is also competitive on multimodal functions.
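A convergence curve of this kind simply tracks the best objective value found so far at each iteration; a minimal plotting sketch (matplotlib, with illustrative argument names) is given below.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, labels, title):
    """Plot best-so-far convergence curves, one per algorithm, on a log scale.
    `histories` is a list of 1-D arrays of objective values per iteration."""
    for hist, label in zip(histories, labels):
        plt.semilogy(np.minimum.accumulate(hist), label=label)
    plt.xlabel("Iteration")
    plt.ylabel("Best objective value so far")
    plt.title(title)
    plt.legend()
    plt.show()
```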
5.1.2. Wilcoxon Rank Sum Test
A rank sum test analysis has been conducted and the
p-values of the test are indicated in
Table 4. The Wilcoxon rank sum test was performed at a 5% level of significance [
52]. Values indicated in boldface are less than 0.05, which indicates that there is a significant difference between the AWOA results and those of the other opponents.
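As a sketch of how such a test can be computed, the snippet below uses `scipy.stats.ranksums` to compare the objective values of AWOA and one opponent at the 5% level; the function name and arguments are illustrative.

```python
from scipy.stats import ranksums

def significant_vs_awoa(awoa_runs, opponent_runs, alpha=0.05):
    """Two-sided Wilcoxon rank sum test between the final objective values of
    AWOA and an opponent over their independent runs (5% significance level)."""
    _, p_value = ranksums(awoa_runs, opponent_runs)
    return p_value, p_value < alpha   # True -> statistically significant difference
```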
5.1.3. Boxplot Analysis
To present a fair comparison between AWOA and its opponents, we have plotted boxplots and convergence curves for some selected functions.
Figure 7 shows the boxplots for functions BF1–BF12. From the boxplots, it is observed that the widths of the AWOA boxplots are favorable in these cases; hence, it can be concluded that the optimization performance of AWOA is competitive with the other variants of WOA. The mean values shown in the boxplots are also the best for these functions. The performance of AWOA on the remaining functions of this suite is depicted through the boxplots shown in
Figure 8. From these, it can be concluded that the performance of the proposed AWOA is competitive, as the mean values depicted in the plots are the best for most of the functions.
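A boxplot of this kind summarizes the spread of final objective values over the independent runs of each algorithm; a minimal matplotlib sketch (with illustrative names) is shown below.

```python
import matplotlib.pyplot as plt

def boxplot_comparison(results, labels, func_name):
    """Draw one box per algorithm from its final objective values over the
    independent runs; the mean of each algorithm is marked as well."""
    plt.boxplot(results, labels=labels, showmeans=True)
    plt.ylabel("Objective value")
    plt.title(func_name)
    plt.show()
```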
5.2. Benchmark Suite 2
In this section, we report the results of the proposed variant on the CEC17 functions. The details of the CEC17 functions are given in
Table 2. To check the applicability of the proposed variant to both higher- and lower-dimensional functions, 10- and 30-dimensional problems were chosen deliberately. While performing the simulations, we followed the CEC17 criteria; for example, the number of function evaluations was kept as prescribed by the CEC17 guidelines
for AWOA and the other competitors. The results are averaged over 51 independent runs, as specified by the CEC guidelines. The optimization results are expressed as the mean and standard deviation of the objective function values obtained from the independent runs.
Table 5 and
Table 6 show these analyses, and the boldface entries in the tables indicate the best performer.
Table 7 and
Table 8 also report the statistical comparison of the objective function values obtained from the independent runs, using the Wilcoxon rank sum test at a 5% level of significance. These results are
p-values, indicated in each column of the observation table, obtained when an opponent is compared with the proposed AWOA. These values are indicators of statistical significance.
5.3. Results of the Analysis of 10D Problems
For the 10D problems, the results are reported in terms of the mean and standard deviation values obtained from 51 independent runs, indicated for AWOA and each of its opponents. The noteworthy observations from this study are as follows:
From the table, it is observed that the optimization results and their statistical summary show a substantial enhancement in terms of mean and standard deviation values; the best values are shown in boldface. Out of 29 functions, the proposed variant provides the best mean values for 23 functions. Except for CECF16, 17, 18, 23, 24, 26 and CECF29, the mean values of the optimization runs are the best for AWOA. This supports the fact that the proposed modifications help to enhance the optimization performance of the original WOA. Inspecting the other statistical parameter, namely the standard deviation values, also gives a clear insight into the enhanced performance.
We observe that for the unimodal functions, the values obtained by AWOA are better than those of the other versions of WOA; hence, it can be said that AWOA outperforms on unimodal functions. Unimodal functions are useful to test the exploitation capability of an optimizer.
Inspecting the performance of the proposed version of WOA on the multimodal functions CECF4–F10 gives a clear insight into the fact that the proposed modifications are meaningful in terms of enhanced exploration capabilities. Naturally, multimodal functions have more than one minimum, and converging the optimization process to the global minimum can be a troublesome task.
The boldface entries in the table indicate the cases in which AWOA performs best.
5.3.1. Statistical Significance Test by the Wilcoxon Rank Sum Test
The results of the rank sum test are depicted in
Table 7. It is always important to judge the statistical significance of the optimization run in terms of calculated
p-values. For this reason, the proposed AWOA has been compared with all opponents, and the resulting
p-values are depicted. Boldface entries show that there is a significant difference between the optimization runs obtained by AWOA and those of the other opponents. This demonstrates the superior performance of AWOA.
5.3.2. Boxplot Analysis
The boxplot analysis for the 10D functions is performed on the objective function values from 20 independent runs. This analysis is depicted in
Figure 9 and
Figure 10. From these boxplots, it can be seen that the results obtained from the optimization process have a favorable interquartile range (IQR) and low mean values. To showcase the efficacy of the proposed AWOA, the best mean values are marked with an oval in the boxplots.
5.4. Results of the Analysis of 30D Problems
The results of the proposed AWOA, along with the other variants of WOA, are reported in terms of the statistical attributes of 51 independent runs in
Table 6. From the results, it is clearly evident that, except for F24, the proposed AWOA provides the best results compared with the other opponents. The best mean and standard deviation values of the objective functions obtained from the independent runs are shown in boldface.
The results of the rank sum test are depicted in
Table 8. It is always important to judge the statistical significance of the optimization run in terms of calculated
p-values. For this reason, the proposed AWOA was compared with all opponents and the results in terms of
p-values are depicted. Boldface entries show that there is a significant difference between the optimization runs obtained by AWOA and those of the other opponents, as the obtained
p-values are less than 0.05. We observe that for the majority of the functions, calculated
p-values are less than 0.05. Along with the optimal mean and standard deviation values,
p-values indicate that the proposed AWOA outperforms its opponents. In addition to these analyses, a boxplot analysis comparing the proposed AWOA with its opponents was performed, as depicted in
Figure 11 and
Figure 12. From these figures, it is easy to see that the IQR and mean values are very competitive and among the best in almost all cases for the 30-dimensional problems. The convergence curves for some of the functions, such as the unimodal functions F1 and F3 and some other multimodal and hybrid functions, are depicted in
Figure 13.
5.5. Comparison with Other Algorithms
To validate the efficacy of the proposed variant, a fair comparison under the CEC 2017 criteria is carried out. The optimization results of the proposed variant, along with some contemporary and classical optimizers, are reported in
Table 9. The competitive algorithms are Moth flame optimization (MFO) [
15], Sine cosine algorithm [
53], PSO [
54] and Flower pollination Algorithm [
55]. It can be easily observed that the results of our proposed variant are competitive for almost all the functions.