*Article* **Development and Applications of Augmented Whale Optimization Algorithm**

**Khalid Abdulaziz Alnowibet 1, Shalini Shekhawat 2,\*, Akash Saxena 2,\*, Karam M. Sallam 3 and Ali Wagdy Mohamed 4,5,\***


**Abstract:** Metaheuristics are proven solutions for complex optimization problems. Recently, bioinspired metaheuristics have shown their capabilities for solving complex engineering problems. The Whale Optimization Algorithm (WOA) is a popular metaheuristic based on the hunting behavior of whales. For some problems, this algorithm suffers from local minima entrapment. To make WOA compatible with a number of challenging problems, two major modifications are proposed in this paper: the first is opposition-based learning in the initialization phase, while the second is the inculcation of the Cauchy mutation operator in the position updating phase. The proposed variant is named the Augmented Whale Optimization Algorithm (AWOA) and tested over two benchmark suites, i.e., classical benchmark functions and the latest CEC-2017 benchmark functions, for 10-dimension and 30-dimension problems. Various analyses, including convergence property analysis, boxplot analysis and Wilcoxon rank sum test analysis, show that the proposed variant possesses better exploration and exploitation capabilities. Along with this, the application of AWOA is reported for three real-world problems of various disciplines. The results reveal that the proposed variant exhibits better optimization performance.

**Keywords:** metaheuristic algorithms; Whale Optimization Algorithm

**MSC:** 68T01; 68T05; 68T07; 68T09; 68T20; 68T30

#### **1. Introduction and Literature Review**

Optimization is a process to fetch the best alternative solution from a given set of alternatives. Optimization processes are evident everywhere around us. For example, to run a generating company, the operator has to take care of the operating cost and to check and deal with various types of markets to execute financial transactions. The operator has to optimize the fuel purchase cost, sell the power at the maximum rate and purchase carbon credits at the minimum cost to earn a profit. Sometimes, optimization processes involve various stochastic variables to model the uncertainty in the process. Such processes are quite difficult to handle and often pose a severe challenge to the optimizer or solution-provider algorithms. The evolution of modern optimizers is the outcome of these complex combinatorial multimodal nonlinear optimization problems. Unlike classical optimizers, where the search starts with an initial guess, these modern optimizers are based on stochastic variables, and hence, they are less vulnerable to local minima entrapment. These problems

**Citation:** Alnowibet, K.A.; Shekhawat, S.; Saxena, A.; Sallam, K.M.; Mohamed, A.W. Development and Applications of Augmented Whale Optimization Algorithm. *Mathematics* **2022**, *10*, 2076. https:// doi.org/10.3390/math10122076

Academic Editor: Jian Dong

Received: 13 May 2022 Accepted: 6 June 2022 Published: 15 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

become the main source of the emergence of metaheuristic algorithms, which are capable of finding a near-optimal solution in less computation time. The popularity of metaheuristic algorithms [1] has increased exponentially in the last two decades due to their simplicity, derivation-free mechanism, flexibility and capacity to provide better results in comparison with conventional methods. The main inspiration for these algorithms is nature, and hence, they are also known as nature-inspired algorithms [2].

Social mimicry of nature and living processes, behavior analysis of animals and cognitive viability are some of the attributes of nature-inspired algorithms. Darwin's theory of evolution has inspired some nature-inspired algorithms, based on the property of "inheritance of good traits" and "competition, i.e., survival of the fittest". These algorithms are Genetic Algorithm [3], Differential Evolution and Evolutionary Strategies [4].

The other popular philosophy is to mimic the behavior of animals searching for food. In these approaches, food or prey is used as a metaphor for the global minimum in mathematical terms. Exploration, exploitation and convergence towards the global minimum are mapped onto animal behavior. Most of the nature-inspired algorithms, also known as population-based algorithms, can further be classified as:


Other than these population-based algorithms, a few different algorithms have also been proposed to solve specific mathematical problems. In [29,30], the authors proposed the concept of construction, solution and merging. Another greedy randomized adaptive search-based algorithm using an improved version of integer linear programming was proposed in [31].

The No Free Lunch Theorem proposed by Wolpert et al. [32] states that there is no single metaheuristic algorithm that can solve all optimization problems. From this theorem, it can be concluded that no single metaheuristic can provide the best solution for all problems. It is possible that one algorithm may be very effective for solving certain problems but ineffective in solving others. Due to the popularity of nature-inspired algorithms in providing reasonable solutions to complex real-life problems, many new nature-inspired optimization techniques are being proposed in the literature. It is interesting to note that all bio-inspired algorithms are subsets of nature-inspired algorithms. Among all of these algorithms, the popularity of bio-inspired algorithms has increased exponentially in recent years. Despite this popularity, these algorithms have also been critically reviewed [33].

In 2016, Mirjalili et al. [34] proposed a new nature-inspired algorithm called the Whale Optimization Algorithm (WOA), inspired by the bubble-net hunting behavior of humpback whales. The humpback whale belongs to the rorqual family of whales, known for their huge size. An adult can be 12–16 m long and weigh 25–30 metric tons. They have a distinctive body shape and are known for their breaching behavior in water, with astonishing gymnastic skills, and for the haunting songs sung by males during their migration period. Humpback whales eat small schools of fish and krill. For hunting their prey, they follow a unique strategy of encircling the prey spirally while gradually shrinking the size of the circles of this spiral. With the incorporation of this hunting strategy, the performance of WOA is superior to many other nature-inspired algorithms. Recently, in [35], WOA was used to solve the optimization problem of the truss structure. WOA has also been used to solve the well-known economic dispatch problem in [36]. The problem of unit commitment in electric power generation was solved through WOA in [37]. In [38], the author applied WOA to the long-term optimal operation of a single reservoir and cascade reservoirs. The following are the main reasons to select WOA:


Sometimes, it also suffers from a slow convergence speed and local minima entrapment due to the random size of the population. To overcome these shortcomings, in this paper, we propose two major modifications to the existing WOA:


The remaining part of this paper is organized as follows: Section 2 describes the crisp mathematical details of WOA. Section 3 presents the proposed variant; an analogy based on the modified position update is also established with the proposed mathematical framework. Section 4 includes the details of the benchmark functions. Sections 5 and 6 show the results on the benchmark functions and some real-life problems, along with different statistical analyses. Finally, the paper concludes in Section 7 with a decisive evaluation of the results, and some future directions are indicated.

#### **2. Mathematical Framework of WOA**

The mathematical model of WOA can be presented in three steps: prey encircling, exploitation phase through bubble-net and exploration phase, i.e., prey search.

1. Prey encircling: Humpback whales choose their target prey through their capacity to find the location of prey. The best search agent is followed by the other search agents to update their positions, which can be given mathematically as:

$$\vec{P} = \left| \vec{Q} \cdot \vec{Y}^*(s) - \vec{Y}(s) \right| \tag{1}$$

$$\vec{Y}(s+1) = \vec{Y}^*(s) - \vec{R} \cdot \vec{P} \tag{2}$$

where *Y*<sup>∗</sup> denotes the position vector of the best obtained solution, *Y* is the position vector, *s* is the current iteration, | | denotes the absolute value and · denotes element-wise multiplication.

The coefficients *R* and *Q* can be calculated as follows:

$$\vec{R} = 2\vec{p} \cdot \vec{r} - \vec{p} \tag{3}$$

$$\vec{Q} = 2\vec{r} \tag{4}$$

where *p* linearly decreases with every iteration from 2 to 0 and *r* ∈ [0, 1]. By adjusting the values of the vectors *Q* and *R*, the current position of a search agent is shifted towards the best position. This position updating process in the neighborhood also helps in encircling the prey in n dimensions.

2. Exploitation phase through bubble-net: The value of *p* decreases with iterations, so *R* fluctuates in the interval [−*p*, *p*] and represents the shrinking behavior of the search agents. By choosing random values of *R* in the interval [−1, 1], the humpback whale updates its position. In this process, the whale swims towards the prey spirally, and the circles of the spiral slowly shrink in size. This shrinking of the spirals in a helix-shaped movement can be mathematically modeled as:

$$\vec{Y}(s+1) = \vec{Q}' \cdot e^{al} \cdot \cos(2\pi l) + \vec{Y}^*(s) \tag{5}$$

$$\vec{Q}' = \left| \vec{Y}^*(s) - \vec{Y}(s) \right| \tag{6}$$

where *a* is the constant factor responsible for the shape of spirals and *l* randomly belongs to interval [−1, 1].

In the position updating phase, whales can choose either model, i.e., the shrinking mechanism or the spiral mechanism. The probability of this simultaneous behavior is assumed to be 50% during the optimization process. The combined equation of both of these behaviors can be represented as:

$$\vec{Y}(s+1) = \begin{cases} \vec{Y}^*(s) - \vec{R} \cdot \vec{P} & p < 0.5\\ \vec{Q}' \cdot e^{al} \cos(2\pi l) + \vec{Y}^*(s) & p \geq 0.5 \end{cases} \tag{7}$$

3. Prey search (exploration phase):

In this phase, *R* is chosen opposite to the exploitation phase, i.e., the value of *R* must be > 1 or < −1, so that the humpback whales move away from each other, which increases the exploration rate. This phenomenon can be represented mathematically as:

$$\vec{P} = \left| \vec{Q} \cdot \vec{Y}_{rand} - \vec{Y} \right| \tag{8}$$

$$\vec{Y}(s+1) = \vec{Y}_{rand} - \vec{R} \cdot \vec{P} \tag{9}$$

where *Yrand* represents the position of a random whale.

After achieving the termination criterion, the optimization process finishes. The main feature of WOA is the presence of the dual theory of circular shrinking and spiral path, which increases the exploitation process of finding the best position around the prey. Afterwards, the exploration phase provides a larger search area through the random selection of the values of *R*.
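Putting Equations (1)–(9) together, one position update for a single whale can be sketched in Python as follows. This is a minimal illustrative sketch: the function and variable names are our own, and the scalar magnitude check on *R* simplifies the component-wise rule of the original algorithm.

```python
import math
import random

def woa_update(Y, Y_best, p_param, whales):
    """One WOA position update for a single whale (a sketch of Eqs. (1)-(9)).

    Y       -- current position vector of the whale
    Y_best  -- position of the best search agent found so far
    p_param -- the parameter p, decreased linearly from 2 to 0 over iterations
    whales  -- list of all current position vectors (for random selection)
    """
    dim = len(Y)
    r = [random.random() for _ in range(dim)]
    R = [2 * p_param * ri - p_param for ri in r]   # Eq. (3)
    Q = [2 * ri for ri in r]                       # Eq. (4)
    if random.random() < 0.5:                      # shrinking-encircling branch of Eq. (7)
        if max(abs(x) for x in R) < 1:             # exploit: move toward the best agent
            P = [abs(Q[j] * Y_best[j] - Y[j]) for j in range(dim)]   # Eq. (1)
            return [Y_best[j] - R[j] * P[j] for j in range(dim)]     # Eq. (2)
        Y_rand = random.choice(whales)             # explore: move relative to a random whale
        P = [abs(Q[j] * Y_rand[j] - Y[j]) for j in range(dim)]       # Eq. (8)
        return [Y_rand[j] - R[j] * P[j] for j in range(dim)]         # Eq. (9)
    # spiral branch of Eq. (7), using Eqs. (5) and (6)
    a, l = 1.0, random.uniform(-1, 1)              # spiral shape constant and random l
    Qp = [abs(Y_best[j] - Y[j]) for j in range(dim)]
    return [Qp[j] * math.exp(a * l) * math.cos(2 * math.pi * l) + Y_best[j]
            for j in range(dim)]
```

Calling this function once per whale per iteration, while decreasing `p_param` from 2 to 0, reproduces the exploration/exploitation switching described above.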

#### **3. Motivation and Development of the Augmented Whale Optimization Algorithm**

It has been observed in previously reported applications that inserting mutation into population-based schemes can enhance the performance of the optimization. Some noteworthy applications are reported in [39].

#### *3.1. Augmented Whale Optimization Algorithm (AWOA)*

Motivated by the modified position update, we present the development of AWOA and the mathematical steps we have incorporated. To simulate the behavior of whales through the modified position update and their connection to the position update mechanism for mating, we require two mechanisms:


3.1.1. The Opposition-Based Position Update Method

For simulating the first mechanism, we choose the opposition number generation theory proposed by H. R. Tizhoosh. Opposition-based learning is a concept that places the search agents in diverse (rather, opposite) directions so that the search for optima can also be initiated from opposite directions. This theory has been applied in many metaheuristic algorithms, and it is now a proven fact that the search capabilities of an optimizer can be substantially enhanced by the application of this opposition number generation technique. Some recent papers have provided evidence of this [40,41]. With these approaches, the impact of opposition-based learning can easily be seen. Furthermore, a rich review of the techniques related to opposition, their application areas and performance-wise comparisons can be found in [42,43].

The following points can be taken as some positive arguments in favor of the application of the oppositional number generation theory (ONGT) concept:


For the reader's benefit, we incorporate some definitions of opposite points in the search space for one-dimensional and multidimensional spaces.

**Definition 1.** *Let x* ∈ [*a*, *b*] *be a real number, where the opposite number of x is defined by:*

$$\bar{x} = a + b - x \tag{10}$$

*The same concept extends to Q-dimensional space.*

**Definition 2.** *Let A* = (*x*1, *x*2, . . . , *xQ*) *be a point in Q-dimensional space, where x*1, *x*2, . . . , *xQ* ∈ *R and xi* ∈ [*ai*, *bi*], *i* = 1, 2, . . . , *Q. The opposite point is A̅* = (*x̄*1, *x̄*2, . . . , *x̄Q*)*, defined component-wise by:*

$$\bar{x}_i = a_i + b_i - x_i \tag{11}$$

*where ai and bi are the lower limit and upper limit, respectively. Furthermore, Figure 2 illustrates the search process of ONGT, where A1 and B1 are the search boundaries, and it shrinks as the iterative process progresses.*

**Figure 2.** Solving the one-dimensional problem by recursive halving the search interval.
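As a concrete illustration of Equations (10) and (11), the following Python sketch generates an initial population together with its opposite population and keeps the fitter half. The function name, the minimization assumption and the pooling strategy are our own illustrative choices, not a prescription from the original theory.

```python
import random

def opposition_init(pop_size, lower, upper, fitness):
    """Opposition-based initialization (a sketch of Eqs. (10)-(11)).

    pop_size -- number of search agents to return
    lower    -- per-dimension lower bounds a_i
    upper    -- per-dimension upper bounds b_i
    fitness  -- objective function to minimize
    """
    dim = len(lower)
    pop = [[random.uniform(lower[j], upper[j]) for j in range(dim)]
           for _ in range(pop_size)]
    # Opposite of each point: x_bar = a + b - x, per dimension (Eq. (11))
    opp = [[lower[j] + upper[j] - x[j] for j in range(dim)] for x in pop]
    # Keep the fitter half of the combined pool (minimization assumed)
    pool = sorted(pop + opp, key=fitness)
    return pool[:pop_size]
```

For a sphere function, `opposition_init(20, [-5, -5], [5, 5], lambda x: sum(v * v for v in x))` returns 20 points that are, on average, fitter than a purely random sample of the same size, since each random point competes with its opposite.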

3.1.2. Position Updating Mechanism Based on the Cauchy Mutation Operator

For simulating the second mechanism, we require a signal that is a close replica of a whale song. In the literature, a significant amount of work has been done on the application of the Cauchy mutation operator due to the following reasons:


In the proposed AWOA, the position update mechanism is derived from the Cauchy distribution operator. The Cauchy density function of the distribution is given by:

$$f_t(x) = \frac{1}{\pi} \, \frac{t}{t^2 + x^2}, \quad -\infty < x < \infty \tag{12}$$

where *t* is the scaling parameter and the corresponding distribution function can be given as:

$$F\_t(\mathbf{x}) = \frac{1}{2} + \frac{1}{\pi} \arctan(\frac{\mathbf{x}}{t}) \tag{13}$$

First, a random number *y* ∈ (0, 1) is generated, after which a random number *α* is generated by using the following equation:

$$\alpha = x_0 + t \tan(\pi(y - 0.5)) \tag{14}$$

**Figure 3.** Whale position update inspired from Cauchy Distribution.

We assume that *α* is a whale position update signal generated by the search agents, and on the basis of this signal, the position of the whale is updated. Furthermore, we define a position-based weight for the *j*th position component over all whales, which is given as:

$$W(j) = \frac{\sum_{i=1}^{NP} x_{i,j}}{NP} \tag{15}$$

where *W*(*j*) is a weight vector and *NP* is the population size of the whales. Furthermore, the position update equation can be modified as:

$$x'(j) = x(j) + W(j) \cdot \alpha \tag{16}$$

Summarizing the points discussed in this section, we propose two mechanisms for improving the performance of WOA. The first is the opposition-based learning concept, which places whales in diverse directions to explore the search space effectively; the second is the modified position update mechanism built on this whale behavior. To simulate the whale song, we employ Cauchy random numbers. Both of these mechanisms can be beneficial for enhancing the exploration and exploitation capabilities of WOA. In the next section, we evaluate the performance of the proposed variant on some conventional and CEC-17 benchmark functions.
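A sketch of the Cauchy-mutation position update of Equations (14)–(16) is given below. The inverse-transform sampling in Equation (14) generates the Cauchy number; our reading of the weight in Equation (15) as the normalized sum of the *j*-th position component over the population is an assumption, as are the function and parameter names.

```python
import math
import random

def cauchy_mutate(positions, t=1.0, x0=0.0):
    """Cauchy-mutation position update (a sketch of Eqs. (14)-(16)).

    positions -- list of NP position vectors (the whale population)
    t, x0     -- scale and location parameters of the Cauchy distribution
    """
    NP = len(positions)
    dim = len(positions[0])
    new_positions = []
    for x in positions:
        x_new = []
        for j in range(dim):
            y = random.random()                             # y in (0, 1)
            alpha = x0 + t * math.tan(math.pi * (y - 0.5))  # Eq. (14)
            # Position-based weight: normalized sum of the j-th coordinate
            # over the whole population (our reading of Eq. (15))
            W_j = sum(p[j] for p in positions) / NP
            x_new.append(x[j] + W_j * alpha)                # Eq. (16)
        new_positions.append(x_new)
    return new_positions
```

The heavy tails of the Cauchy distribution occasionally produce large values of `alpha`, which is exactly the long-jump behavior that helps mutated whales escape local minima.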

#### **4. Benchmark Test Functions**

Benchmark functions are sets of functions with different known characteristics (separability, modality and dimensionality) that are often used to evaluate the performance of optimization algorithms. In the present paper, we measure the performance of our proposed variant AWOA through two benchmark suites.


Tables 1 and 2 show the details of these functions. For other details, such as optima and mathematical definitions, one can refer to [49].

**Table 1.** Details of Benchmark Functions Suite 1.



**Table 2.** Details of CEC-2017 (Benchmark Suite 2).

**Figure 4.** Benchmark Suite 1.

#### **5. Result Analysis**

In this section, various analyses that can check the efficacy of the proposed modifications are exhibited. For judging the optimization performance of the proposed AWOA, we have chosen some recently developed variants of WOA for comparison purpose. These variants are:


#### *5.1. Benchmark Suite 1*

Table 3 shows the optimization results of AWOA on Benchmark Suite 1 along with the leading variants. The table shows entries of the mean and standard deviation (SD) of the function values over 30 independent runs. The maximum number of function evaluations is set to 15,000. The first four functions in the table are unimodal. Benchmarking an algorithm on unimodal functions gives information about its exploitation capabilities. Inspecting the results of the proposed AWOA on unimodal functions, it can easily be observed that its mean values are very competitive compared with the other variants of WOA.

For the rest of the functions, the indicated mean values are competitive, and the best results are indicated in bold face. From this statistical analysis, we can easily conclude that the proposed modifications in AWOA are meaningful and yield positive implications for the optimization performance of AWOA, especially on unimodal functions. Similarly, among the multimodal functions, BF-7, BF-9 to 11, BF-15 to 19 and BF-22 have optimal values of the mean parameter. We observe that the mean values are competitive for the rest of the functions and the performance of the proposed AWOA has not deteriorated.

#### 5.1.1. Convergence Property Analysis

Similarly, the convergence plots for functions BF1 to BF4 have been plotted in Figure 5 for the sake of clarity. From these convergence curves, it is observed that the proposed variant shows better convergence characteristics, and the proposed modifications are fruitful in enhancing the convergence and exploration properties of WOA. It can be seen that the convergence of AWOA is very swift compared with the other competitors. It is to be noted here that BF1–BF4 are unimodal functions, and the performance of AWOA on unimodal functions indicates enhanced exploitation properties. Furthermore, to showcase the optimization capabilities of AWOA on multimodal functions, the convergence plots for BF9 to BF12 are exhibited in Figure 6. From these results, it can easily be concluded that the results of the proposed AWOA are also competitive.

#### 5.1.2. Wilcoxon Rank Sum Test

A rank sum test analysis has been conducted, and the *p*-values of the test are indicated in Table 4. We show the values of the Wilcoxon rank sum test considering a 5% level of significance [52]. Values indicated in boldface are less than 0.05, which indicates that there is a significant difference between the AWOA results and those of the other opponents.
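In practice, such *p*-values are usually obtained from a statistics library (e.g., `scipy.stats.ranksums` in Python or `ranksum` in MATLAB). Purely for illustration, a stdlib-only sketch of the large-sample normal approximation behind the test is given below; it uses midranks for ties and omits the tie-variance and continuity corrections.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the large-sample normal
    approximation (a sketch: midranks for ties, no tie-variance or
    continuity correction)."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 'a') for v in a] + [(v, 'b') for v in b])
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0          # average of ranks i+1 .. j
        for k in range(i, j):
            if pooled[k][1] == 'a':
                rank_sum_a += midrank
        i = j
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (rank_sum_a - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
```

For two samples of 30 independent run results, a returned value below 0.05 corresponds to the boldface entries in Table 4.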

#### 5.1.3. Boxplot Analysis

To present a fair comparison among the opponents, we have plotted boxplots and convergence curves of some selected functions. Figure 7 shows the boxplots of functions BF1–BF12. From the boxplots, it is observed that the widths of the boxplots of AWOA are optimal in these cases; hence, it can be concluded that the optimization performance of AWOA is competitive with the other variants of WOA. The mean values shown in the boxplots are also optimal for these functions. The performance of AWOA on the remaining functions of this suite is depicted through the boxplots shown in Figure 8. From these, it can be concluded that the performance of the proposed AWOA is competitive, as the mean values depicted in the plots are optimal for most of the functions.


**Table 3.** Results of Benchmark Suite-1.

**Figure 5.** Convergence property analysis of unimodal functions.

**Figure 6.** Convergence property analysis of multimodal functions.


**Table 4.** Results of Wilcoxon rank sum test of AWOA.

**Figure 7.** Boxplot analysis of Benchmark Suite 1.

**Figure 8.** Boxplot analysis of the remaining functions of Benchmark Suite 1.

#### *5.2. Benchmark Suite 2*

In this section, we report the results of the proposed variant on the CEC17 functions. The details of the CEC17 functions are exhibited in Table 2. To check the applicability of the proposed variant on higher- as well as lower-dimension functions, 10- and 30-dimension problems were chosen deliberately. While performing the simulations, we obeyed the CEC17 criteria; for example, the number of function evaluations has been kept at 10<sup>4</sup> × *D* for AWOA and the other competitors. The results are averaged over 51 independent runs, as indicated by the CEC guidelines. The results of the optimization are expressed as the mean and standard deviation of the objective function values obtained from the independent runs. Tables 5 and 6 show these analyses, and the bold face entries in the tables show the best performer. Tables 7 and 8 also report the statistical comparison of the objective function values obtained from the independent runs through the Wilcoxon rank sum test at a 5% level of significance. These results are *p*-values indicated in each column of the observation table when the opponent is compared with the proposed AWOA. These values are indicators of statistical significance.

#### *5.3. Results of the Analysis of 10D Problems*

For the 10D problems, the results are depicted in terms of the mean and standard deviation values obtained from 51 independent runs, indicated for each opponent of AWOA. The following are the noteworthy observations from this study:

• From the table, it is observed that the values obtained from the optimization process and their statistical calculation indicate that a substantial enhancement is evident in terms of mean and standard deviation values. These values are shown in bold face. We observe that, out of 29 functions, the proposed variant provides optimal mean values for 23 functions. Except for CECF16, 17, 18, 23, 24, 26 and CECF29, the mean values of the optimization runs are optimal for AWOA. This supports the fact that the proposed modifications are helpful for enhancing the optimization performance of the original WOA. Inspecting the other statistical parameter, namely the standard deviation values, also gives a clear insight into the enhanced performance.



**Table 5.** Results of Benchmark Suite-2 (10D).


**Table 5.** *Cont.*

5.3.1. Statistical Significance Test by the Wilcoxon Rank Sum Test

The results of the rank sum test are depicted in Table 7. It is always important to judge the statistical significance of the optimization runs in terms of the calculated *p*-values. For this reason, the proposed AWOA has been compared with all opponents and the results in terms of *p*-values are depicted. Bold face entries show that there is a significant difference between the optimization runs obtained with AWOA and the other opponents. This fact demonstrates the superior performance of AWOA.

**Table 6.** Results of Benchmark Suite-2 (30D).



**Table 6.** *Cont.*

#### 5.3.2. Boxplot Analysis

Boxplot analysis for the 10D functions is performed over 20 independent runs of objective function values. This analysis is depicted in Figures 9 and 10. From these boxplots, it is easy to state that the results obtained from the optimization process acquire an optimal interquartile range (IQR) and low mean values. To showcase the efficacy of the proposed AWOA, all the optimal entries of mean values are marked with an oval shape in the boxplots.

**Figure 9.** Boxplot analysis of the 10D functions of Benchmark Suite 2.

**Figure 10.** Boxplot analysis of the remaining 10D functions of Benchmark Suite 2.

#### *5.4. Results of the Analysis of 30D Problems*

The results of the proposed AWOA, along with the other variants of WOA, are depicted in terms of statistical attributes of 51 independent runs in Table 6. From the results, it is clearly evident that, except for F24, the proposed AWOA provides optimal results compared with the other opponents. The mean and standard deviation of the objective function values obtained from the independent runs are shown in bold face.


**Table 7.** Results of the rank sum test on Benchmark Suite-2 (10D).

The results of the rank sum test are depicted in Table 8. It is always important to judge the statistical significance of the optimization runs in terms of the calculated *p*-values. For this reason, the proposed AWOA was compared with all opponents and the results in terms of *p*-values are depicted. Bold face entries show that there is a significant difference between the optimization runs obtained with AWOA and the other opponents, as the obtained *p*-values are less than 0.05. We observe that for the majority of the functions, the calculated *p*-values are less than 0.05. Along with the optimal mean and standard deviation values, the *p*-values indicate that the proposed AWOA outperforms its opponents. In addition to these analyses, a boxplot analysis of the proposed AWOA and the other opponents was performed, as depicted in Figures 11 and 12. From these figures, it is easy to see that the IQR and mean values are very competitive and optimal in almost all cases for the 30-dimension problems. Convergence curves for some of the functions, such as the unimodal functions F1 and F3 and some other multimodal and hybrid functions, are depicted in Figure 13.


**Table 8.** Results of the rank sum test on Benchmark Suite-2 (30D).

**Figure 11.** Boxplot analysis of the 30D functions of Benchmark Suite 2.

**Figure 12.** Boxplot analysis of the remaining 30D functions of Benchmark Suite 2.

**Figure 13.** Convergence property analysis of some 30D functions of Benchmark Suite 2.

#### *5.5. Comparison with Other Algorithms*

To validate the efficacy of the proposed variant, a fair comparison under the CEC 2017 criteria was executed. The optimization results of the proposed variant, along with some contemporary and classical optimizers, are reported in Table 9. The competing algorithms are Moth Flame Optimization (MFO) [15], the Sine Cosine Algorithm [53], PSO [54] and the Flower Pollination Algorithm [55]. It can easily be observed that the results of our proposed variant are competitive for almost all the functions.


**Table 9.** Comparison of AWOA with other algorithms for 30D.

#### **6. Applications of AWOA in Engineering Test Problems**

*6.1. Model Order Reduction*

In control system engineering, most linear time-invariant systems are of a higher order and are thus difficult to analyze. This problem has been addressed using the reduced model order technique, which is easy to use and less complex in comparison with earlier control paradigm techniques. Nature-inspired optimization algorithms have proved to be an efficient tool in this field, as they help to minimize the integral square error of the lower-order systems. This approach was first introduced in [56], followed by [39,57,58] and many more. These works advocate the efficacy of optimization algorithms in solving the reduced model order technique, as they reduce the complexity, computation time and cost of the reduction process. For testing the applicability of AWOA on some real-world problems, we consider the Model Order Reduction (MOR) problem in this section. In MOR, large complex systems with known transfer functions are converted, with the help of an optimization algorithm, to a reduced-order system. The following are the steps of the conversion:


#### 6.1.1. Problem Formulation

In this technique, a transfer function *X*(*t*) : *u* → *v* of a higher order is reduced to a function *X*′(*t*) : *u* → *v*˜ of a lower order, without affecting the input *u*(*x*); the output satisfies *v*˜(*x*) ≈ *v*(*x*). The integral error defined by the following equation is minimized in the process using the optimization algorithm:

$$\text{ISE} = \int_0^\infty \left[ v(x) - \tilde{v}(x) \right]^2 dx \tag{17}$$

where *X*(*t*) is a transfer function of any Single Input and Single Output system defined by:

$$X(t) = \frac{a_0 + a_1 t + a_2 t^2 + \dots + a_m t^m}{b_0 + b_1 t + b_2 t^2 + \dots + b_n t^n} \tag{18}$$

For a reduced-order system, *X*′(*t*) can be given by:

$$X'(t) = \frac{a'_0 + a'_1 t + a'_2 t^2 + \dots + a'_{m_r} t^{m_r}}{b'_0 + b'_1 t + b'_2 t^2 + \dots + b'_{n_r} t^{n_r}} \tag{19}$$

where (*nr* ≥ *mr*; *mr*, *nr* ∈ *I*). In this study, we calculate the values of the coefficients of the numerator and denominator of the reduced-order system defined in Equation (19) while minimizing the error. To establish the efficiency of our proposed variant, we report two numerical examples.
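The objective of Equation (17) can be approximated numerically once the responses of the original and reduced systems are available. The following Python sketch uses the trapezoidal rule with an illustrative pair of step responses; the responses, integration horizon and step count are assumptions for demonstration, not the systems studied below.

```python
import math

def ise(v, v_tilde, t_end=10.0, n=10_000):
    """Approximate the integral square error of Eq. (17) between the
    original response v(x) and the reduced-order response v_tilde(x)
    via the trapezoidal rule on [0, t_end] (a sketch)."""
    h = t_end / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        e2 = (v(x) - v_tilde(x)) ** 2
        total += e2 if 0 < k < n else e2 / 2   # half-weight at the endpoints
    return total * h

# Illustrative step responses: a 2nd-order system and a hypothetical
# 1st-order reduction of it
v = lambda x: 1 - 1.5 * math.exp(-x) + 0.5 * math.exp(-3 * x)
v_tilde = lambda x: 1 - math.exp(-1.2 * x)
print(ise(v, v_tilde))
```

An optimizer such as AWOA would adjust the reduced-order coefficients so that this quantity is driven toward zero.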

6.1.2. Numerical Examples and Discussions


$$X(s) = \frac{(s+4)}{(s^4 + 19s^3 + 113s^2 + 245s + 150)}\tag{21}$$

The results of the optimization process, in terms of the time domain specifications, namely the rise time and settling time for both functions, are exhibited in Table 10. Furthermore, the convergence behavior of the algorithm on both functions is depicted in Figures 14 and 15. Errors in the time domain specifications compared with the original system are depicted in Table 11.


**Table 10.** Simulated models for different WOA variants and time domain specifications.

**Table 11.** Error analysis of MOR results on both functions.


**Figure 14.** Results of MOR for function 1.

**Figure 15.** Results of MOR for function 2.

From these analyses, it is quite evident that the MOR performed by AWOA leads to a configuration of the system that follows the time domain specifications of the original system quite closely. In addition, the error in the objective function values is also optimal in the case of AWOA.

#### *6.2. Frequency-Modulated Sound Wave Parameter Estimation Problem*

This problem has been used in many studies to benchmark the applicability of different optimizers. It was included in the 2011 Congress on Evolutionary Computation competition for testing evolutionary optimization algorithms on real problems [59]. It is a six-dimensional problem, where the parameters of a sound wave are estimated in such a manner that it matches the target wave.

The mathematical representation of this problem can be given as:

$$K = (a\_1, \delta\_1, a\_2, \delta\_2, a\_3, \delta\_3) \tag{22}$$

The equations of the predicted sound wave and target sound wave are as follows:

$$J(t) = a\_1 \sin(\delta\_1 t \theta + a\_2 \sin(\delta\_2 t \theta + a\_3 \sin(\delta\_3 t \theta)))\tag{23}$$

$$J\_0(t) = 1.0 \sin(5.0\,t\,\theta - 1.5 \sin(4.8\,t\,\theta + 2.0 \sin(4.9\,t\,\theta)))\tag{24}$$

$$\min f(\overrightarrow{X}) = \sum\_{t=0}^{100} \left( J(t) - J\_0(t) \right)^2 \tag{25}$$
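Equations (22)–(25) can be implemented directly. The sketch below assumes the sampling constant θ = 2π/100 commonly used for this CEC 2011 benchmark (the text does not state it), and absorbs the minus sign of Equation (24) into the target parameter a<sub>2</sub> = −1.5.

```python
import math

THETA = 2 * math.pi / 100   # assumed sampling constant (not stated in the text)

def fm_wave(params, t):
    """Predicted wave J(t) of Equation (23) for K = (a1, d1, a2, d2, a3, d3)."""
    a1, d1, a2, d2, a3, d3 = params
    return a1 * math.sin(d1 * t * THETA
                         + a2 * math.sin(d2 * t * THETA
                                         + a3 * math.sin(d3 * t * THETA)))

# Target parameters of Equation (24); the minus sign is absorbed into a2.
TARGET = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9)

def fm_fitness(params):
    """Sum of squared errors between predicted and target waves, Equation (25)."""
    return sum((fm_wave(params, t) - fm_wave(TARGET, t)) ** 2
               for t in range(101))
```

An optimizer minimizes `fm_fitness` over the six-dimensional vector K; the global optimum is zero at the target parameters.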

The results of this design problem are presented through different analyses, including boxplot and convergence property analyses obtained from 20 independent runs; Figure 16 shows these analyses. A comparison of performance on the basis of error in the objective function values is depicted in Figure 17. In the boxplot, axis entries 1, 2, 3, 4 and 5 correspond to LWOA, CWOA, the proposed AWOA, OWOA and WOA, respectively.

**Figure 16.** Boxplot and Convergence Property analysis for the FM problem.

#### *6.3. PID Control of DC Motors*

In today's machinery era, DC motors are used in various fields such as the textile industry, rolling mills, electric vehicles and robotics. Among the various controllers available for DC motors, the Proportional Integral Derivative (PID) controller is the most widely used and has proven its efficiency in providing accurate results while keeping the steady state error and overshoot within acceptable limits [60]. Alongside this controller, an efficient tuning method is needed to control the speed and other parameters of DC motors. In recent years, researchers have explored meta-heuristic algorithms for tuning different types of PID controllers. In [61], the authors presented a comparative study between simulated annealing, particle swarm optimization and the genetic algorithm. Stochastic fractal search was applied to the DC motor problem in [62]. The sine cosine algorithm was also used to determine optimal parameters of the PID controller of DC motors in [20]. In [63], the authors proposed chaotic atom search optimization for optimal tuning of a fractional-order PID controller of DC motors. A hybridized version of foraging optimization and simulated annealing to solve the same problem was reported in [64].

**Figure 17.** Comparative results of different statistical measures of independent runs.

6.3.1. Mathematical Model of DC Motors

The DC motor considered here is a specific type of DC motor whose speed is controlled through the input voltage or a change in current. In DC motors, the back electromotive force *f<sub>b</sub>*(*t*) is directly proportional to the angular speed *β*(*t*) = *dα*(*t*)/*dt*, while the flux is constant, i.e.:

$$f\_b(t) = H\_b \frac{d\alpha(t)}{dt} = H\_b \beta(t) \tag{26}$$

The applied armature voltage *f<sub>a</sub>*(*t*) satisfies the following differential equation:

$$f\_a(t) = P\_a \frac{dr\_a(t)}{dt} + K\_a r\_a(t) + f\_b(t) \tag{27}$$

The motor torque developed in the process (accounting for friction and neglecting the disturbance torque) is given by:

$$\pi(t) = L\frac{d\beta(t)}{dt} + T\beta(t) = H\_m r\_a(t) \tag{28}$$

Taking the Laplace transform of these equations and assuming all initial conditions to be zero, we get:

$$F\_b(s) = H\_b X(s) \tag{29}$$

$$F\_a(s) = (P\_a s + K\_a) R\_a(s) + F\_b(s) \tag{30}$$

$$\Omega(s) = (Ls + T)X(s) = H\_m R\_a(s) \tag{31}$$

On simplifying these equations, the open loop transfer function of the DC motor can be given as:

$$\frac{X(s)}{F\_a(s)} = \frac{H\_m}{(P\_a s + K\_a)(Ls + T) + H\_b H\_m} \tag{32}$$
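Equation (32) maps the motor constants directly to the open-loop transfer function coefficients. A minimal sketch of this mapping follows; the parameter values are illustrative only, since the paper's actual values appear in Table 12, which is not reproduced in the text.

```python
# Sketch of Equation (32): mapping motor constants to the open-loop
# transfer function X(s)/F_a(s). Parameter values are illustrative;
# the paper's actual values appear in Table 12.

def dc_motor_tf(H_m, H_b, P_a, K_a, L, T):
    """Return (numerator, denominator) coefficient lists of Equation (32),
    ordered from the highest power of s to the constant term."""
    num = [H_m]
    # Expand (P_a s + K_a)(L s + T) + H_b * H_m:
    den = [P_a * L, P_a * T + K_a * L, K_a * T + H_b * H_m]
    return num, den

num, den = dc_motor_tf(H_m=0.015, H_b=0.015, P_a=0.5, K_a=1.0, L=0.01, T=0.1)
```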

#### 6.3.2. Results and Discussion

All the parameters and constant values considered in this experiment are given in Table 12. The simulation results for tuning the PID controller of the DC motor plant are depicted in Table 13. The first column shows the combined plant and controller realization as a closed-loop system, while the remaining entries report the time domain specifications obtained when the system is subjected to a step input.

After careful observation, it is concluded that the closed loop system realized with the proposed AWOA possesses optimal settling and rise times, which in itself indicates a fast transient response of the system. Although the other algorithms also achieve very competitive values, the response and convergence process of AWOA are swift compared to its opponents. The boxplot analysis and convergence property analysis are shown in Figure 18. The boxplot compares the optimization results over 20 independent runs; the X axis shows the AWOA, CWOA, LWOA, OWOA and WOA algorithms. The optimal entries for settling time and rise time are in bold face to showcase the efficacy of AWOA. The step responses of these controllers are shown in Figure 19.
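The settling and rise times reported in Table 13 can be extracted from a sampled step response. A minimal sketch follows, assuming the common 10–90% rise-time and ±2% settling-band conventions (the text does not state which definitions are used); the synthetic first-order response is for demonstration only.

```python
import math

def rise_time(t, y, final):
    """10%-90% rise time of a sampled step response (t, y) -- an assumed
    convention; the paper does not state its definition."""
    t10 = next(ti for ti, yi in zip(t, y) if yi >= 0.1 * final)
    t90 = next(ti for ti, yi in zip(t, y) if yi >= 0.9 * final)
    return t90 - t10

def settling_time(t, y, final, tol=0.02):
    """Last time the response lies outside the +/- tol band around the
    final value (2% band assumed)."""
    last_out = t[0]
    for ti, yi in zip(t, y):
        if abs(yi - final) > tol * final:
            last_out = ti
    return last_out

# Synthetic first-order response y = 1 - exp(-t) for demonstration.
ts = [0.001 * k for k in range(10001)]
ys = [1.0 - math.exp(-ti) for ti in ts]
r = rise_time(ts, ys, final=1.0)       # close to ln(9)  ~ 2.20 s
s = settling_time(ts, ys, final=1.0)   # close to ln(50) ~ 3.91 s
```

A PID-tuning objective can then penalize these two quantities (plus overshoot) on the closed-loop step response.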


**Table 12.** Various parameters of DC motors.

**Table 13.** Comparison of AWOA with other algorithms for the DC motor controller design problem.


**Figure 18.** Comparative results of different controllers for DC motors.

**Figure 19.** Step Response Analysis of Different Controllers.

#### **7. Conclusions**

This paper proposes a new variant of WOA. The hunting behavior of whales, as modeled in WOA, is augmented with opposition-based learning in the initialization phase and Cauchy mutation in the position update phase. The following are the major conclusions drawn from this study:


• The proposed variant is also tested for challenging engineering design problems. The first problem is the model order reduction of a complex control system into subsequent reduced order realizations. For this problem, AWOA shows promising results as compared to WOA. As a second problem, the frequency-modulated sound wave parameter estimation problem was addressed. The performance of the proposed AWOA is competitive with contemporary variants of WOA. In addition to that, the application of AWOA was reported for tuning the PID controller of the DC motor control system. All these applications indicate that the modifications suggested for AWOA are quite meaningful and help the algorithm find global optima in an effective way.

The proposed AWOA can be applied to various other engineering design problems, such as network reconfiguration, solar cell parameter extraction and regulator design. These problems will be the focus of future research.

**Author Contributions:** Formal analysis, K.A.A.; Funding acquisition, K.A.A.; Investigation, K.A.A.; Methodology, S.S. and A.S.; Project administration, A.W.M.; Software, K.M.S.; Supervision, A.S. and A.W.M.; Validation, K.M.S.; Visualization, K.M.S.; Writing—original draft, S.S. and A.S.; Writing review & editing, S.S. and A.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research is funded by Researchers Supporting Program at King Saud University, (RSP-2021/305).

**Acknowledgments:** The authors present their appreciation to King Saud University for funding the publication of this research through Researchers Supporting Program (RSP-2021/305), King Saud University, Riyadh, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

