Next Article in Journal
Comparative Insights into the Antimicrobial, Antioxidant, and Nutritional Potential of the Solanum nigrum Complex
Previous Article in Journal
Numerical Simulation on Hydraulic Fracture Height Growth across Layered Elastic–Plastic Shale Oil Reservoirs
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

An Improved Aquila Optimizer Based on Search Control Factor and Mutations

Electronic Information School, Wuhan University, Wuhan 430072, China
*
Author to whom correspondence should be addressed.
Processes 2022, 10(8), 1451; https://doi.org/10.3390/pr10081451
Submission received: 22 June 2022 / Revised: 14 July 2022 / Accepted: 21 July 2022 / Published: 25 July 2022

Abstract

:
The Aquila Optimizer (AO) algorithm is a meta-heuristic algorithm with excellent performance, although it may be insufficient or tend to fall into local optima as as the complexity of real-world optimization problems increases. To overcome the shortcomings of AO, we propose an improved Aquila Optimizer algorithm (IAO) which improves the original AO algorithm via three strategies. First, in order to improve the optimization process, we introduce a search control factor (SCF) in which the absolute value decreasing as the iteration progresses, improving the hunting strategies of AO. Second, the random opposition-based learning (ROBL) strategy is added to enhance the algorithm’s exploitation ability. Finally, the Gaussian mutation (GM) strategy is applied to improve the exploration phase. To evaluate the optimization performance, the IAO was estimated on 23 benchmark and CEC2019 test functions. Finally, four real-world engineering problems were used. From the experimental results in comparison with AO and well-known algorithms, the superiority of our proposed IAO is validated.

1. Introduction

The optimization process aims to find the optimal decision variables for a system from all the possible values by minimizing its cost function [1]. In many scientific fields, such as image processing [2], industrial manufacturing [3], wireless sensor networks [4], scheduling problems [5], path planning [6], feature selection [7], and data clustering [8], optimization problems are essential. Real-world engineering problems have the characteristics of non-linearity, computation burden, and wide solution space. Due to the gradient mechanism, traditional optimization methods have the drawbacks of poor flexibility, high computational complexity, and local optima entrapment. In order to overcome these shortcomings, many new optimization methods have been proposed, especially the meta-heuristic (MH) algorithms. MH algorithms perform well in optimization problems, with high flexibility, no gradient mechanism, and high ability to escape the trap of local optima [9].
In general, MH algorithms are classified into four main categories: swarm intelligence algorithms (SI), evolutionary algorithms (EA), physics-based algorithms (PB), and human-based algorithms (HB) [10], as shown in Table 1. Despite the different sources of inspiration for these four categories, they all occupy two important search phases, namely, exploration (diversification) and exploitation (intensification) [11,12]. The exploration phase explores the search space widely and efficiently, and reflects the capability to escape from the local optimal traps, while the exploitation phase reflects the capability to improve the quality locally and tries to find the best optimal solution around the obtained solution. In summary, a great MH algorithm can obtain a balance between these two phases to achieve good performance.
Generally, the optimization process of the MH algorithms begins with a set of candidate solutions generated randomly. These solutions follow an inherent set of optimization rules to update, and they are evaluated by a specific fitness function iteratively; this is the essence of an optimization algorithm. The Aquila Optimizer (AO) [37] was first proposed in 2021 as one of the SI algorithms. It is achieved by simulating the four Aquila hunting strategies, which are high soar with a vertical stoop, walk and grab prey, low flight with slow descent attack, and walk and grab prey. The first two behavioral strategies reflect the exploitation ability, while the latter two reflect the exploration ability. AO has the advantages of fast convergence speed, high search efficiency, and a simple structure. However, this algorithm is insufficient as the complexity of real-world engineering problems increases. The performance of AO depends on the diversity of population and the optimization update rules. Moreover, the algorithm has inadequate exploitation and exploration capabilities, weak ability to jump out of local optima, and poor convergence accuracy.
In [38], Ewees, A. et al. improved the AO algorithm through using the search strategy of the WOA to boost the search process of the AO. They mainly used the operator of WOA to substitute for the exploitation phase of the AO. The proposed method tackled the limitations of the exploitation phase as well as increasing the diversity of solutions in the search stage. Their work attempted to avoid the weakness of using a single search strategy; comparative trials showed that the suggested approach achieved promising outcomes compared to other methods.
Zhang, Y. et al. [39] proposed a new hybridization algorithm of AOA with AO, named AOAAO. This new algorithm mainly utilizes the operators of AOA to replace the exploitation strategies of AO. In addition, inspired by the HHO algorithm, an energy parameter, E, was applied to balance the exploration and exploitation procedures of AOAAO, with a piecewise linear map used to decrease the randomness of the energy parameter. AOAAO is more efficient than AO in optimization, with faster convergence speed and higher convergence accuracy. Experimental results have confirmed that the suggested strategy outperforms the alternative algorithms.
Wang, S. et al. [40] proposed Improved Hybrid Aquila Optimizer and Harris Hawks Algorithm (IHAOHHO) in 2021. This algorithm combines the exploration strategies of AO and the exploitation strategies of HHO. Additionally, the nonlinear escaping energy parameter is introduced to balance the two exploration and exploitation phases. Finally, it uses random opposition-based learning (ROBL) strategy to enhance performance. IHAOHHO has the advantages of high search accuracy, strong ability to escape local optima, and good stability.
The binary variants of the AO mentioned above all improve on the performance of original AO. However, they remain insufficient for specific complex optimization problems, especially multimodal and high-dimensional problems. In order to enhance the optimization capability in both low and high dimensions, we propose an improved AO algorithm, IAO, which uses SCF and mutations. The main drawbacks of the AO consist of two main points:
  • The optimization rules of AO limit the search capability. The second strategy of AO limits the exploration capability, as the effect of the Levy flight function leads to local optima, while the third limits the exploitation capability with the fixed number parameter α , resulting in low convergence accuracy.
  • The population diversity of AO decreases with increasing numbers of iterations [38]. While the candidate solutions are updated according to the specific optimization rules of AO, the randomness of the solutions decreases through the search process, resulting in inferior population diversity.
Therefore, in order to improve the optimization rules of AO, we introduce a search control factor (SCF) to improve these two search methods and allow the Aquila to search widely in the search space and around the best result. The random opposition-based learning (ROBL) and Gaussian mutation (GM) strategies are applied to increase the population diversity of the algorithm. The ROBL strategy is employed to further enhance the exploitation capability, while the GM strategy is utilized to further improve the exploration phase. In addition, the 23 standard benchmark functions were applied to rigorously evaluate the robustness and effectiveness of the IAO algorithm. In this paper, the IAO is compared with AO, IHAOHHO, and several well-known MH algorithms, including PSO, SCA, WOA, GWO, HBA, SMA, and SNS. In the end, CEC2019 test functions and four real-world engineering design problems were utilized to further evaluate the performance of IAO. The experimental results show that the proposed IAO achieves the best performance among all of these algorithms.
The main contribution of this paper is to improve on the performance of the AO algorithm. To improve the exploration and exploitation phases, we define an SCF to change the second and third predation strategies of the Aquila in order to fully search around the solution space and the optimal Aquila, respectively. We apply the ROBL strategy to further enhance the ability to exploit the search space and utilize the GM strategy to further enhance the ability to escape from local optimal traps. The remainder of this paper is organized as follows: Section 2 introduces the AO algorithm; Section 3 illustrates the details of the proposed algorithm; and comparative trials on benchmark functions and real-world engineering experiments are provided in Section 4. Finally, Section 5 concludes the study.

2. Aquila Optimizer (AO)

The AO algorithm is a typical SI algorithm proposed by Abualigah, L. et al. [37] in 2021. The algorithm is optimized by simulating four predator–prey behaviors of Aquila through four strategies: selecting the search space by high soar with a vertical stoop; exploring within a divergent search space by contour flight with a short glide attack; exploiting within a convergent search space via low flight with a slow descent attack; and swooping, walking, and grabbing prey. A brief description of these strategies is provided below.
  • Strategy 1: Expanded exploration ( X 1 )
In this strategy, the Aquila searches the solution space through high soar and uses a vertical stoop to determine the hunting area; the mathematical model is defined as
X 1 t + 1 = X b e s t t × 1 t T + X M t X b e s t t · R 1
X M t = 1 N i = 1 N X i t , j = 1 , 2 , 3 , , D i m
where X b e s t t represents the best position in the current iteration, X M t represents the average value of positions, which is shown in Equation (2), R 1 is a random number between 0 and 1, Dim is the dimension value of the solution space, N represents the number of Aquila population, and t and T are the current iteration and the maximum number of iterations, respectively.
  • Strategy 2: Narrowed exploration ( X 2 )
In the second strategy, the Aquila uses spiral flight above the prey and then attacks through a short glide. The model formula is described as
X 2 t + 1 = X b e s t t × L y D + X R t + y x · R 2
where R 2 is a random number within (0, 1), X R t is a random number selected from the Aquila population, D is the dimension number, and L y D is the Levy flight method function, which is shown as
L y D = s × u × σ υ 1 β σ = Γ 1 + β × s i n e π β 2 Γ 1 + β 2 × β × 2 β 1 2
where s is a constant value equal to 0.01, β is another equal to 1.5, u and υ are random numbers within (0, 1), and y and x indicate the spiral flight trajectory in the search, which are calculated as follows:
y = r × cos θ x = r × sin θ r = R 3 + 0.00565 × D 1 θ = 0.005 × D 1 + 3 π 2
where D 1 is an integer number from 1 to the dimension length (D) and R 3 refers to an integer indicating the search cycles between 1 and 20.
  • Strategy 3: Expanded exploitation ( X 3 )
In this method, when the hunting area is selected the Aquila descends vertically and searches the solution space through low flight, then attacks the prey. The mathematical expression formula is represented as
X 3 t + 1 = X b e s t t X M t × α R 4 + U B L B × R 5 + L B × δ
where α and δ are integer numbers equal to 0.1 which are used to adjust exploitation, R 4 and R 5 are random numbers between 0 and 1, and L B and U B are the upper and lower bounds of the solution space, respectively.
  • Strategy 4: Narrowed exploitation ( X 4 )
In the final method, the Aquila chases the prey in the light of stochastic escape route and attacks the prey on the ground. The mathematical expression of this behavior is
X 4 t + 1 = Q F × X b e s t t G 1 × X t × r a n d G 2 × L y D + r a n d × G 1 Q F t = t 2 × r a n d 1 1 T 2 G 1 = 2 × r a n d 1 G 2 = 2 × 1 t T
where Q F is a quality function parameter which is applied to tune the search strategies, rand is a random value within (0, 1), G 1 is a random number between −1 and 1 indicating the behavior of prey tracking during the elopement, and G 2 represents the flight slope when hunting prey, which decreases from 2 to 0.

3. The Proposed Improved Aquila Optimizer (IAO) Algorithm

Among the four strategies of the original AO algorithm, the effect of the Levy flight function of the second makes Aquila search insufficient in the solution space and tends to fall into local optima. Meanwhile, the third leads to a weak local exploitation ability, as the α parameter is constant. To reduce the search pace of Aquila as iterations proceeds, we create a SCF woth an absolute value (abs) that decreases with the number of iterations to improve on the second and third strategies of the original AO. Furthermore, the ROBL and GM strategies are added to further improve the exploitation and exploration phases, respectively. The IAO is discussed in additional detail below.

3.1. Search Control Factor (SCF)

The search control factor is used to control the basic search step size and direction of Aquila. As iteration progresses, the movement of the Aquila will gradually decrease to increase the search accuracy. Therefore, the abs of SCF decreases with iterations (t), which is described as
S C F t = K v · exp 1 t T × D i r
D i r = 1 , i f r < 0.5 1 , e l s e
where K v is a constant >0 (default = 2). The term exp 1 t T is used to control the Aquila’s flight speed through the number of iterations, r is a random number between 0 and 1, and D i r is the direction control factor described by Equation (9), which is used to control the Aquila’s flight direction.

3.1.1. Improved Narrowed Exploration ( I x 2 ): Full Search with Short Glide Attack

In the second improved method, the Aquila searches the target solution space sufficiently through different directions and speeds, then attacks the prey. The improved method is called full search with short glide attack, and is shown in Figure 1. The mathematical expression formula of the improved strategy is illustrated in Equation (10):
I X 2 t + 1 = X R t + S C F t × X b e s t t X t × R 6 × y x
where S C F t is the search control factor, which is introduced above, I X 2 t + 1 is the solution of the next iteration of t, X R t is a random selected from the Aquila population, R 6 is a random number within (0,1), X b e s t t is the best-obtained solution until t iteration, and y and x are the same as in the original AO algorithm, and describe the spiral flight method in the search as calculated in Equation (5).

3.1.2. Improved Expanded Exploitation ( I x 3 ): Search around Prey and Attack

In the third improved method, the Aquila searches thoroughly around the prey and then attacks it. Figure 2 shows this strategy, called Aquila search around prey and attack. This method aims to improve the exploitation capability, and is mathematically expressed by Equation (11):
I X 3 i , j = L B j + r a n d · U B j L B j + R 7 · X R j X b e s t j · S C F t · 1 t T
where S C F t is the search control factor, R 7 and rand are random numbers between 0 and 1, I X 3 i , j is the solution of the next iteration of t, X b e s t j represents the jth dimension of the best position in this current iteration, j is a random integer between 1 and the dimension length, X R j is the jth dimension of a random one selected from the Aquila population, and L B j and U B j represent the jth dimension of the the upper and lower bounds of the solution space, respectively.

3.2. Random Opposition-Based Learning (ROBL)

On the basis of opposition-based learning (OBL) [41], Long, W. [42] developed a powerful optimization tool called random opposition-based learning (ROBL). The main idea of ROBL is to consider the fitness of a solution and its corresponding random opposite solution at the same time in order to obtain a better candidate solution. ROBL has made a great contribution to improving exploitation ability, as defined by
X L t = L B + U B R 8 × X t ,
where R 8 is a random number between 0 and 1, X L t is the opposite solution, and L B and U B are the upper and lower bounds of the solution space, respectively.

3.3. Gaussian Mutation (GM)

Gaussian mutation (GM) is another commonly employed optimization tool, and performs well in the exploitation phase. Here, we use GM to prevent the IAO from falling into local optima; it is mathematically expressed by
X G t = X t × 1 + G a u s s i a n α
where X ( t ) and X G t are the current and mutated positions, respectively, of a search agent and G a u s s i a n α denotes a uniform random number that following a Gaussian distribution with a value of mean to 0 and standard deviation to 1, shown as
G a u s s i a n α = 1 2 π σ 2 · exp α 2 2 σ 2 .

3.4. Computational Complexity of IAO

In this section, we discuss the computational complexity of the proposed IAO algorithm. In general, the computational complexity of the IAO typically includes three phases: initialization, fitness calculation, and updating of Aquila positions. The computational complexity of the initialization phase is O (N), where N is the population size of Aquila. Assume that T is the total number of iterations and O (T × N) is the computational complexity of the fitness calculation phase. Additionally, assume that Dim is the dimension of the problem; then, for updating positions of Aquila, the computational complexity is O (T × N × Dim + T × N × 2). Consequently, the total computational complexity of IAO is O (N × (T × (Dim + 2) + 1)).

3.5. Pseudo-Code of of IAO

In summary, IAO uses SCF to improve the second and third hunting methods of the original AO. Additionally, GM is applied to further enhance the exploitation capability, while ROBL is utilized to further improve the exploration phase. The target is to increase the solution diversity of the algorithm. The pseudocode and details of the IAO are shown in Algorithm 1 and Figure 3, respectively.
Algorithm 1 Improved Aquila Optimizer
1:
Initialization phase:
2:
Initialize the population X and the parameters of the IAO
3:
while (The end condition is not met) do
4:
     Calculate the fitness function values and X b e s t t : the best obtained solution
5:
     for (i = 1, 2…, N) do
6:
           Update the mean value of the current solution X M t and x, y, G 1 , G 2 , L y D , etc.
7:
           if  t 2 3 T  then
8:
                  if  r 0.5  then
9:
                         Step 1: Expanded exploration ( X 1 )
10:
                       Update the current solution using Equation (1)
11:
                        X n e w t + 1 = X 1 t + 1
12:
                else
13:
                       Step 2: Narrowed exploration ( I X 2 )
14:
                       Update the current solution using Equation (10)
15:
                        X n e w t + 1 = I X 2 t + 1
16:
         else
17:
               if  r 0.5  then
18:
                       Step 3: Expanded exploitation ( I X 3 )
19:
                       Update the current solution using Equation (11)
20:
                        X n e w t + 1 = I X 3 t + 1
21:
               else
22:
                       Step 4: Narrowed exploration ( X 4 )
23:
                       Update the current solution using Equation (7)
24:
                        X n e w t + 1 = X 4 t + 1
25:
         if  F i t n e s s X n e w t + 1 < F i t n e s s X t  then
26:
                X t = X n e w t + 1
27:
         Step 5: Random Opposition-Based Learning Updating ( X L )
28:
         Update the current solution using Equation (12)
29:
         if  F i t n e s s X L t < F i t n e s s X t  then
30:
                X t = X L t
31:
         Step 6: Gaussian-mutation Updating ( X G )
32:
         Update the current solution using Equation (13)
33:
         if  F i t n e s s X G t < F i t n e s s X t  then
34:
                X t = X G t
35:
return The best solution ( X b e s t ).

4. Experimental Results and Discussion

In this section, we desribe the 23 benchmark test functions, CEC2019 benchmark functions, and four real-world engineering design problems that were carried out to evaluate the performance of the IAO algorithm. To ensure the fairness of comparison, all algorithms were implemented for the same iterations and search agents, which were 500 and 30, respectively. All experiments were tested on the MATLAB 2019 platform using a PC with an Intel (R) Core (TM) i7-9750H CPU @ 2.60 GHz and 16 GB of RAM.

4.1. Results of Twenty-Three Benchmark Test Functions

In this section, we describe the 23 benchmark functions which were used to analyze the IAO’s ability in exploring the solution space and exploiting the global solutions. We used three types of benchmark functions: unimodal (F1-F7, Table 2), multimodal (F8-F13, Table 3), and fixed dimension multimodal functions (F14-F23, Table 4). The results of unimodal and multimodal functions used to evaluate the exploitation ability and exploration tendency of IAO, respectively, in ten dimensions are shown in Table 5. In addition, these two tests were employed in 50 and 100 dimensions; the results are shown in Table 6 and Table 7. Finally, the results with fixed dimension multimodal functions are shown in Table 8, indicating the exploration ability in the lower dimensions of the IAO. The results are compared with nine well-known algorithms: AO, PSO, SCA, WOA, GWO, HBA, SMA, IHAOHHO, and SNS. The main control parameters of the compared algorithms are shown in Table 9. All algorithms were evaluated using the mean and standard deviations (STD) from 30 independent runs and Friedman’s mean rank tests. Three lines of tables have been reported in order to display statistical analysis for the test functions, demonstrating the superiority of the IAO. The first line illustrates three symbols, (W|L|T), that denote the number of the functions in which the performance of the IAO was either the best (win), inferior (loss), or indistinguishable (tie) with respect to the compared algorithm. The second line shows the Friedman mean rank values., while the third line refers to the final rank values of the algorithms.

4.1.1. Qualitative Analysis for Convergence of IAO

In this section, we present the results of a series of experiments made to qualitatively analyze the convergence of the IAO. Figure 4 shows the parameter space of the function, search history of the Aquila positions, trajectory, average fitness, and convergence curve from the first to fifth columns, respectively. The search history reflects the way in which IAO explores and exploits the search space. The search history discusses how the Aquila explores and exploits the solution space, indicating the hunting behavior between the eagle and prey. From the last two columns, it can be seen that the values of average fitness and convergence curve are large at first, then gradually decrease as iteration progresses. From the fourth column, presenting the trajectory of the solution, it can be observed that the solution has a large value at the beginning of the iterations and gradually tends towards stability as iteration progresses. The comparison indicates that the IAO has strong exploitation and exploration abilities in the iterative process.
In addition, Figure 5 shows the convergence curve for the IAO and the other compared algorithms on the 23 benchmark functions. For unimodal functions (F1–F7), the convergence rate of the IAO is faster than the other algorithms, and its final convergence accuracy is better as well. This result confirms that the exploitation capacity of IAO is strong and has reliable performance. For the convergence curves of the multimodal functions, the IAO shows very good search performance. It can be seen that the IAO achieves a high balance between the exploration and exploitation phases. For F8–F15, F17, F21, F22, and F23, the IAO approaches positions around the optimal solutions at a very fast speed, and the positions have are exploited efficiently to catch the high precision final solutions. For F16, F18, F19, and F20, the IAO gradually approaches the optimal solutions and updates the Aquila positions in the iterations to confirm the final solution. In addition, three features can be obtained from Figure 5. The first is the fast convergence speed, which can be seen from F1–F4 and F9–F11. Second, from F5–F7, F12, F13, and F15, we can conclude that IAO has the best global optimal approximation compared with its peers. Finally, in examining the strong local optimum escape capability, the behavior of the IAO indicates its ability to escape the local optimum after multiple stagnations in F5–F7, F12, F13, and F20.

4.1.2. Exploitation Capability of IAO

Because the set of unimodal functions (F1-F7) has only one global optimum, it was employed to test the IAO’s exploitation capability, as shown in Table 5. From F1 to F4, the IAO can find the optimal solution. For F5 and F7, the IAO achieves the smallest average and STD values. For F6, the IAO shows the second smallest Average and STD, while HBA has the best. Therefore, as compared with other algorithms the IAO has the best stability and accuracy. It can be concluded that the exploitation capability of the IAO is excellent.

4.1.3. Exploration Capability of IAO

As multimodal functions have several local optima, they were employed to evaluate the IAO’s exploitation capability. The multimodal experiments were divided into two categories, multidimensional (F8–F13) and fixed-dimensional (F14–F23), to test the exploration ability of the IAO in multiple dimensions and low dimensions, respectively. From Table 5, it can be seen that the IAO attains the smallest Average and STD values, excluding F12. For F12, SNS performs best, while IAO achieves the second best position. As for the fixed-dimensional experiments, which are shown in Table 8, IAO achieves the best performance in more than half of the sets. These comparative trials show that IAO’s exploration capability is superior to its competitors.

4.1.4. Stability of IAO

To verify the stability and quality of IAO in dealing with high-dimensional problems, two more experiments were designed with dimensions of 50 and 100. All algorithms (IAO, AO, PSO, SCA, WOA, GWO, HBA, SMA, IHAOHHO, and SNS) were run independently 30 times with 30 search agents and 500 iterations. The results are shown in Table 6 and Table 7. It can be seen that the IAO obtains the best results in 50 and 100 dimensions except for the second performance in F5. For F5, AO obtains the minimum average and STD in both 50 and 100 dimensions. Therefore, we can conclude that the stability and quality of IAO confer a particularly good effect in dealing with high-dimensional problems.

4.1.5. Time Consuming Test of IAO

In this section, the running time of the algorithm is tested to evaluate its efficiency. Each algorithm was run in ten dimensions with variable dimension function (F1–F23) and the time was recorded. From Table 10, it can be seen that the time required by IAO is longer than the original AO, as the GM and ROBL steps increase the complexity of the algorithm and consume excess time. Nevertheless, the elapsed time of IAO on the majority of the benchmark functions is less than that of SMA and IHAOHHO. Considering the excellent performance of the IAO and the increasing requirements of real-world optimization problems, the time consumption of the IAO is acceptable.

4.2. Results of Cec2019 Test Functions

This section analyses the IAO with the CEC2019 test functions, the definitions of which are listed in Table 11. The IAO was implemented with 30 agents and 500 iterations for 30 independent runs. The IAO was compared with AO, AOA, IHAOHHHO, PSO, HBA, WOA, and SSA. As reported in Table 12, the comparison was performed through the average and STD values of the considered algorithms across the course of the functions. In addition, the Friedman mean rank values and final ranks are shown at the bottom of Table 12. The results confirm the proposed IAO’s superiority in dealing with these challenging test functions, as it performed the best in more than half of the functions.

4.3. Real-World Application

To further verify the performance of the algorithm, the IAO was tested in four well-known real-world engineering design problems: the speed reducer design problem, pressure vessel design problem, tension/compression spring design problem, and three-bar truss design problem, and the experimental results with the IAO were compared with classical algorithms proposed in previous studies.

4.3.1. Speed Reducer Design Problem

The optimization goal of this optimization problem is to minimize the weight of the speed reducer. This design problem includes seven optimization parameter variables and eleven constraints. Figure 6 shows the image expression of this design, and Equation (15) provides the mathematical expression. The IAO is compared with IHAOHHO, AO, AOA, PSO, SCA, GA, MDA, MFO, FA, HA, and PAO-DE. From Table 13, it can be seen hat the IAO is superior to all of the other algorithms, and ranks first.
Minimize
f x = 0.7854 x 1 x 2 2 3.3333 x 3 2 + 14.9334 x 3 43.0943 1.508 x 1 x 6 2 + x 7 2 + 7.4777 x 6 3 + x 7 3
Subject to
g 1 x = 27 x 1 x 2 2 x 3 1 0 g 2 x = 397.5 x 1 x 2 2 x 2 2 1 0 g 3 x = 1.93 x 4 3 x 2 x 3 x 6 4 1 0 g 4 x = 1.93 x 5 3 x 2 x 3 x 7 4 1 0 g 5 x = 745 x 4 x 2 x 3 2 + 16.9 × 10 6 110 x 6 3 1 0 g 6 x = 745 x 4 x 2 x 3 2 + 157.5 × 10 6 85 x 6 3 1 0 g 7 x = x 2 x 3 40 1 0 g 8 x = 5 x 2 x 1 1 0 g 9 x = x 1 12 x 2 1 0 g 10 x = 1.5 x 6 + 1.9 x 4 1 0 g 11 x = 1.1 x 7 + 1.9 x 5 1 0
where 2.6 x 1 3.6 , 0.7 x 2 0.8 , 17 x 3 28 , 7.3 x 4 8.3 , 7.8 x 5 8.3 , 2.9 x 6 3.9 , 5 x 7 5.5 .

4.3.2. Pressure Vessel Design Problem

The optimization goal of this design problem is to minimize the manufacturing cost of pressure vessels. As shown in Figure 7, this design problem includes four variables that need to be optimized: the thickness of the shell (Ts), the head (Th), the length of the cylindrical section (L), and the inner radius (R). Meanwhile, there are four constraints, as shown below. From Table 14, it can be seen that the results with the IAO are the best compared to its competitors.
Consider x = x 1 x 2 x 3 x 4 = T s T h R L
Minimize
f x = 0.6224 x 1 x 3 x 4 + 1.7781 x 2 x 3 2 + 3.1661 x 1 2 x 4 + 19.84 x 1 2 x 3
Subject to
g 1 x = x 1 + 0.0193 x 3 0 g 2 x = x 2 + 0.00954 x 3 0 g 3 x = π x 3 2 x 4 4 3 π x 3 3 + 1296000 0 g 4 x = x 4 240 0
where 0 x 1 99 , 0 x 2 99 , 10 x 3 200 , 10 x 4 200

4.3.3. Tension/Compression Spring Design Problem

This design problem seeks to minimize the weight of the tension/compression spring by adjusting three design variables: the wire diameter (d), mean coil diameter (D), and the number of active coils (N), shown in Figure 8. Equation (19) provides the mathematical functions of this problem. The results can be obtained from Table 15, where it can be seen that the IAO is competitive compared with AO, IHAOHHO, WOA, HS, MVO, PSO, ES, and CSCA.
Consider x = x 1 x 2 x 3 = d D N
Minimize
f x = x 3 + x 2 x 2 x 1 2
Subject to
g 1 x = 1 x 2 3 x 3 71785 x 1 4 0 g 2 x = 4 x 2 2 x 1 x 2 12566 x 2 x 1 3 x 1 4 + 1 5108 x 1 2 0 g 3 x = 1 140.45 x 1 x 2 2 x 3 0 g 4 x = x 1 + x 2 1.5 1 0
where 0.05 x 1 2 , 0.25 x 2 1.30 , 2.00 x 3 15 .

4.3.4. Three-Bar Truss Design Problem

This design problem seeks to minimize the weight of the truss by adjusting the two parameters in Figure 9. Equation (21) provides the mathematical functions of the problem. The results are shown in Table 16, from which it is obvious that the IAO can better obtain the minimum weight of a three-bar truss compared with AO, IHAOHHO, AOA, SSA, MBA, PSO-DE, CS, GOA, and MFO.
Minimize
f x = 2 2 x 1 + x 2 × l
Subject to
g 1 x = 2 x 1 + x 2 2 x 1 2 + 2 x 1 x 2 P σ 0 g 2 x = x 2 2 x 1 2 + 2 x 1 x 2 P σ 0 g 3 x = 1 2 x 2 + x 1 P σ 0
where 0 x 1 1 , 0 x 2 1 , l = 100 cm , P = 2 KN / cm 2 , a n d σ = 2 KN / cm 2 .

5. Conclusions

This paper proposes an Improved Aquila Optimizer (IAO) algorithm. A search control factor (SCF) is proposed to improve the second and third search strategies of the original Aquila Optimizer (AO) algorithm. Then, GM and ROBL methods are integrated to further improve the exploration and exploitation ability of the original AO. To evaluate the performance of the IAO, we tested the algorithm with 23 benchmark functions and CEC2019 test functions. From the results, the performance of the IAO is superior to other advanced MH algorithms. Meanwhile, through experimental results on four real-world engineering design problems, it is validated that the IAO has high practicability in solving practical problems.
In future studies, we will consider various methods to balance the exploration and exploitation phases. Different mutation strategies can be taken into account to increase the solution diversity. In addition, the proposed algorithm could be applied to more fields, including deep learning, material scheduling problems, parameter estimation, wireless sensor networks, path planning, signal denoising, image segmentation, model optimization, and more.

Author Contributions

Conceptualization, B.G. and F.X.; methodology, B.G.; software, B.G.; validation, B.G.; formal analysis, B.G.; investigation, B.G. and Y.S.; resources, B.G.; data curation, B.G.; writing—original draft preparation, B.G.; writing—review and editing, F.X. and Y.S.; visualization, B.G. and Y.S.; supervision, F.X. and X.X.; project administration, F.X.; funding acquisition, F.X. and X.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 51975422 and 52175543.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hajipour, V.; Kheirkhah, A.; Tavana, M.; Absi, N. Novel Pareto-based meta-heuristics for solving multi-objective multi-item capacitated lot-sizing problems. Int. J. Adv. Manuf. Technol. 2015, 80, 31–45. [Google Scholar] [CrossRef]
  2. Ameur, M.; Habba, M.; Jabrane, Y. A comparative study of nature inspired optimization algorithms on multilevel thresholding image segmentation. Multimed. Tools Appl. 2019, 78, 34353–34372. [Google Scholar] [CrossRef]
  3. Yildiz, A.R.; Mirjalili, S.; Sait, S.M.; Li, X. The Harris hawks, grasshopper and multi-verse optimization algorithms for the selection of optimal machining parameters in manufacturing operations. Mater. Test. 2019, 8, 1–15. [Google Scholar]
  4. Wang, S.; Yang, X.; Wang, X.; Qian, Z. A virtual vorce algorithm-lévy-mmbedded grey wolf optimization algorithm for wireless sensor network coverage optimization. Sensors 2019, 19, 2735. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Zhang, W.; Wen, J.B.; Zhu, Y.C.; Hu, Y. Multi-objective scheduling simulation of flexible job-shop based on multi-population genetic algorithm. Int. J. Simul. Model. 2017, 16, 313–321. [Google Scholar] [CrossRef]
  6. Liu, A.; Jiang, J. Solving path planning problem based on logistic beetle algorithm search–pigeon-inspired optimisation algorithm. Electron. Lett. 2020, 56, 1105–1108. [Google Scholar] [CrossRef]
  7. Chen, B.; Chen, H.; Li, M. Improvement and optimization of feature selection algorithm in swarm intelligence algorithm based on complexity. Complexity 2021, 2021, 9985185. [Google Scholar] [CrossRef]
  8. Abualigah, L.; Diabat, A.; Zong, W.G. A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 2020, 10, 3827. [Google Scholar] [CrossRef]
  9. Omar, M.B.; Bingi, K.; Prusty, B.R.; Ibrahim, R. Recent advances and applications of spiral dynamics optimization algorithm: A review. Fractal Fract. 2022, 6, 27. [Google Scholar] [CrossRef]
  10. Molina, D.; Poyatos, J.; Ser, J.D.; García, S.; Herrera, F. Comprehensive taxonomies of nature- and bio-inspired optimization: Inspiration versus algorithmic behavior, critical analysis recommendations. Cogn. Comput. 2020, 12, 897–939. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  12. Af, A.; Mh, A.; Bs, A.; Sm, B. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar]
  13. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  14. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  15. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2009, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  16. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence Through Simulated Evolution; Wiley-IEEE Press: New York, NY, USA, 1966. [Google Scholar]
  17. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef]
  18. Kirkpatrick, S.; Gelatt, C.; Vecchi, M. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  19. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  20. Azizi, M. Atomic orbital search: A novel metaheuristic algorithm. Appl. Math. Model. 2021, 93, 657–683. [Google Scholar] [CrossRef]
  21. Bouchekara, H.R.E.H. Electrostatic discharge algorithm: A novel nature-inspired optimisation algorithm and its application to worst-case tolerance analysis of an EMC filter. Iet Sci. Meas. Technol. 2019, 13, 491–499. [Google Scholar] [CrossRef]
  22. Talatahari, S.; Bayzidi, H.; Saraee, M. Social network search for global optimization. IEEE Access 2021, 9, 92815–92863. [Google Scholar] [CrossRef]
  23. Farshchin, M.; Maniat, M.; Camp, C.V.; Pezeshk, S. School based optimization algorithm for design of steel frames. Eng. Struct. 2018, 171, 326–335. [Google Scholar] [CrossRef]
  24. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24. [Google Scholar] [CrossRef]
  25. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95, Proceedings of the 6th International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  26. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. (MATCOM) 2022, 192, 84–110. [Google Scholar] [CrossRef]
  27. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  28. Peraza-Vazquez, H.; Pena-Delgado, A.F.; Echavarria-Castillo, G.; Beatriz Morales-Cepeda, A.; Velasco-Alvarez, J.; Ruiz-Perez, F. A bio-inspired method for engineering design optimization inspired by dingoes hunting strategies. Math. Probl. Eng. 2021, 2021, 9107547. [Google Scholar] [CrossRef]
  29. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput.-Syst. Int. J. Esci. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  30. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  31. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  32. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  33. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  34. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  35. Yazdani, M.; Jolai, F. Lion Optimization Algorithm (LOA): A Nature-Inspired Metaheuristic Algorithm. J. Comput. Des. Eng. 2016, 3, 24–36. [Google Scholar] [CrossRef] [Green Version]
  36. Fathollahi-Fard, A.M.; Hajiaghaei-Keshteli, M.; Tavakkoli-Moghaddam, R. Red deer algorithm (RDA): A new nature-inspired meta-heuristic. Soft Comput. 2020, 24, 14637–14665. [Google Scholar] [CrossRef]
  37. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  38. Ewees, A.A.; Algamal, Z.Y.; Abualigah, L.; Al-qaness, M.A.A.; Yousri, D.; Ghoniem, R.M.; Abd Elaziz, M. A Cox Proportional-Hazards Model Based on an Improved Aquila Optimizer with Whale Optimization Algorithm Operators. Mathematics 2022, 10, 1273. [Google Scholar] [CrossRef]
  39. Zhang, Y.J.; Yan, Y.X.; Zhao, J.; Gao, Z.M. AOAAO: The Hybrid Algorithm of Arithmetic Optimization Algorithm With Aquila Optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
  40. Wang, S.; Jia, H.; Abualigah, L.; Liu, Q.; Zheng, R. An improved hybrid aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021, 9, 1551. [Google Scholar] [CrossRef]
  41. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar] [CrossRef]
  42. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825. [Google Scholar] [CrossRef]
  43. Lu, S.; Kim, H.M. A regularized inexact penalty decomposition algorithm for multidisciplinary design optimization problems with complementarity constraints. J. Mech. Des. 2010, 132, 041005. [Google Scholar] [CrossRef] [Green Version]
  44. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  45. Baykasoglu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  46. Geem, Z.; Kim, J.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  47. Liu, H.; Cai, Z.; Wang, Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl. Soft Comput. 2010, 10, 629–640. [Google Scholar] [CrossRef]
  48. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422. [Google Scholar] [CrossRef]
  49. Mezura-Montes, E.; Coello Coello, C.A. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473. [Google Scholar] [CrossRef]
  50. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  51. Huang, F.z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  52. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  53. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  54. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper gptimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Aquila search around prey and attack.
Figure 1. Aquila search around prey and attack.
Processes 10 01451 g001
Figure 2. Aquila search around prey and attack.
Figure 2. Aquila search around prey and attack.
Processes 10 01451 g002
Figure 3. Flowchart of IAO algorithm.
Figure 3. Flowchart of IAO algorithm.
Processes 10 01451 g003
Figure 4. Qualitative results of IAO.
Figure 4. Qualitative results of IAO.
Processes 10 01451 g004
Figure 5. Qualitative results of IAO.
Figure 5. Qualitative results of IAO.
Processes 10 01451 g005aProcesses 10 01451 g005b
Figure 6. Speed reducer design problem.
Figure 6. Speed reducer design problem.
Processes 10 01451 g006
Figure 7. Pressure vessel design problem.
Figure 7. Pressure vessel design problem.
Processes 10 01451 g007
Figure 8. Tension/compression spring design problem.
Figure 8. Tension/compression spring design problem.
Processes 10 01451 g008
Figure 9. Three-bar truss design problem.
Figure 9. Three-bar truss design problem.
Processes 10 01451 g009
Table 1. Categories of MH algorithms.
Table 1. Categories of MH algorithms.
CategoryAlgorithm Name
EAGenetic Algorithm (GA) [13]
Differential Evolution (DE) [14]
Biogeography-Based Optimization (BBO) [15]
Evolutionary Programming (EP) [16]
Evolution Strategy (ES) [17]
PBSimulated Annealing (SA) [18]
Gravitational Search Algorithm (GSA) [19]
Atomic Orbital Search (AOS) [20]
Electrostatic Discharge Algorithm (ESDA) [21]
HBSocial Network Search (SNS) [22]
School Based Optimization Algorithm (SBO) [23]
Soccer League Competition (SLC) [24]
SIParticle Swarm Optimization (PSO) [25]
Honey Badger Algorithm (HBA) [26]
Slime Mould Algorithm (SMA) [27]
Dingo Optimization Algorithm (DOA) [28]
Harris Hawks Optimization (HHO) [29]
Grey Wolf Optimizer (GWO) [30]
Whale Optimization Algorithm (WOA) [31]
Sparrow Search Algorithm (SSA) [32]
Sine Cosine Algorithm (SCA) [33]
Arithmetic Optimization Algorithm(AOA) [34]
Lion Optimization Algorithm (LOA) [35]
Red deer algorithm (RDA) [36]
Table 2. Unimodal benchmark functions.
Table 2. Unimodal benchmark functions.
FunDimRangeFmin
f 1 x = i = 1 n x i 2 10, 50, 100[−100, 100]0
f 2 x = i = 0 n x i + i = 0 n x i 10, 50, 100[−10, 10]0
f 3 x = i = 1 d j = 1 i x j 2 10, 50, 100[−100, 100]0
f 4 x = max i x i , 1 i n 10, 50, 100[−100, 100]0
f 5 x = i = 1 n 1 100 x i 2 x i + 1 2 + 1 x i 2 10, 50, 100[−30, 30]0
f 6 x = i = 1 n x i + 0.5 2 10, 50, 100[−100, 100]0
f 7 x = i = 0 n i x i 4 + r a n d o m 0 , 1 10, 50, 100[−128, 128]0
Table 3. Multimodal benchmark functions.
Table 3. Multimodal benchmark functions.
FunDimensionsRangeFmin
f 8 x = i = 1 n x i sin x i 10, 50, 100[−500, 500]−418.9829n
f 9 x = i = 1 n x i 2 10 cos 2 π x i + 10 10, 50, 100[−5.12, 5.12]0
f 10 x = 20 exp 0.2 1 n i = 1 n x i 2 exp 1 n i = 1 n cos 2 π x i + 20 + e 10, 50, 100[−32, 32]0
f 11 x = 1 + 1 4000 i = 1 n x i 2 i = 1 n cos x i i 10, 50, 100[−600, 600]0
f 12 x = π n 10 sin π y 1 + i = 1 n 1 y i 1 2 1 + 10 sin 2 π y i + 1 + i = 1 n u x i , 10 , 100 , 4 , 10, 50, 100[−50, 50]0
w h e r e y i = 1 + x i + 1 4 , u x i , a , k , m = { K x i a m , i f x i > a 0 , a x i a K x i a m , a x i 10, 50, 100[−50, 50]0
f 13 x = 0.1 sin 2 3 π x 1 + i = 1 n x i 1 2 1 + sin 2 3 π x 1 + 1 + x n 1 2 1 + 10, 50, 100[−50, 50]0
sin 2 2 π x n + i = 1 n u x i , 5 , 100 , 4
Table 4. Fixed-dimension multimodal benchmark functions.
Table 4. Fixed-dimension multimodal benchmark functions.
FunDimensionsRangeFmin
f 14 x = 1 500 + j = 1 25 1 j + i = 1 2 x i a i j 1 2[−65, 65]1
f 15 x = i = 1 11 a i x 1 b i 2 + b i x 2 b i 2 + b i x 3 + x 4 2 4[−5, 5]0.0003
f 16 x = 4 x 1 2 2.1 x 1 4 + 1 3 x 1 6 + x 1 x 2 4 x 2 2 + 4 x 2 4 2[-5,5]−1.0316
f 17 x = x 2 5.1 4 π 2 x 1 2 + 5 π x 1 6 2 + 10 1 1 8 π cos x 1 + 10 2[−5, 5]0.398
f 18 x = 1 + x 1 + x 2 + 1 2 19 14 x 1 + 3 x 1 2 14 x 2 + 6 x 1 x 2 + 3 x 2 2 × 30 + 2 x 1 3 x 2 2 2[−2, 2]3
× 18 32 x i + 12 x 1 2 + 48 x 2 36 x 1 x 2 + 27 x 2 2
f 19 x = i = 1 4 c i exp i = 1 3 a i j x j p i j 2 3[−1, 2]−3.86
f 20 x = i = 1 4 c i exp i = 1 6 a i j x j p i j 2 6[0, 1]−0.32
f 21 x = i = 1 5 X a i X a i T + c i 1 4[0, 1]−10.1532
f 22 x = i = 1 7 X a i X a i T + c i 1 4[0, 1]−10.4028
f 23 x = i = 1 10 X a i X a i T + c i 1 4[0, 1]−10.5363
Table 5. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 10.
Table 5. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 10.
Fun No.Comparative Methods
MeasureIAOAOPSOSCAWOAGWOHBASMAIHAOHHOSNS
F1
Average0.0000E+002.3224E-1019.8827E-022.9790E-112.8930E-771.3870E-572.4460E-3110.0000E+000.0000E+001.3155E-83
STD0.0000E+001.2688E-1001.7879E-011.6088E-101.3687E-764.3670E-570.0000E+000.0000E+000.0000E+004.0797E-83
F2
Average0.0000E+008.2910E-542.1864E+002.8434E-091.0860E-523.4493E-332.5178E-1653.7251E-1650.0000E+001.9158E-43
STD0.0000E+004.5332E-533.6831E+009.3310E-094.7689E-526.3477E-330.0000E+000.0000E+000.0000E+002.4764E-43
F3
Average0.0000E+005.2357E-1403.9463E+003.2085E-021.1925E+024.9961E-255.3850E-2820.0000E+000.0000E+004.8278E-44
STD0.0000E+002.3204E-1398.2282E+001.1471E-012.0555E+022.2353E-240.0000E+000.0000E+000.0000E+001.1647E-43
F4
Average0.0000E+001.7558E-531.4897E+001.1596E-034.7707E+003.1774E-184.1792E-1517.5797E-1690.0000E+007.0078E-39
STD0.0000E+009.6168E-538.8025E-011.6691E-039.2981E+006.0564E-181.4627E-1500.0000E+000.0000E+001.0665E-38
F5
Average1.0280E-031.5008E-036.0458E+037.4225E+001.5318E+016.7390E+005.3561E+002.3140E+002.3893E-037.3809E+00
STD4.0666E-031.7352E-032.2823E+044.4219E-014.5765E+018.2270E-014.9909E-012.9942E+003.4581E-032.7072E-01
F6
Average3.1782E-071.7953E-056.7767E-024.4686E-013.2444E-038.4110E-039.8927E-094.2659E-052.5668E-066.2835E-06
STD7.0485E-072.9948E-058.9846E-021.4309E-011.2638E-024.6050E-022.1026E-082.8976E-053.5834E-063.2982E-05
F7
Average5.0112E-051.1823E-046.6253E-032.4664E-032.8474E-037.0323E-042.5156E-041.0642E-047.3335E-054.1179E-04
STD3.7042E-058.6859E-055.1460E-032.0900E-032.8026E-034.3698E-043.4840E-041.0564E-048.0041E-052.6762E-04
F8
Average−4.1898E+03−2.7040E+03−3.2063E+03−2.1452E+03−3.2492E+03−2.6436E+03−4.1125E+03−4.1898E+03−3.6604E+03−4.1696E+03
STD1.2752E-024.8263E+024.7583E+021.5916E+026.1201E+022.7699E+022.5316E+023.6702E-037.3765E+024.4751E+11
F9
Average0.0000E+000.0000E+002.1168E+017.6524E-017.2367E-016.8404E-020.0000E+000.0000E+000.0000E+009.0618E-13
STD0.0000E+000.0000E+001.1400E+012.6158E+003.9637E+003.7466E-010.0000E+000.0000E+000.0000E+003.9802E-12
F10
Average8.8818E-168.8818E-162.1813E+004.8481E-074.4409E-157.7568E-158.8818E-168.8818E-168.8818E-168.8818E-16
STD0.0000E+000.0000E+001.0373E+001.3528E-062.6389E-159.0135E-160.0000E+000.0000E+000.0000E+000.0000E+00
F11
Average0.0000E+000.0000E+003.3710E-016.0756E-021.6105E-012.3521E-020.0000E+000.0000E+000.0000E+000.0000E+00
STD0.0000E+000.0000E+001.9268E-019.2313E-021.9162E-012.4736E-020.0000E+000.0000E+000.0000E+000.0000E+00
F12
Average3.0145E-075.8204E-064.2631E-011.2026E-016.7461E-023.3293E-036.2436E-043.9397E-061.3917E-066.3829E-09
STD6.8979E-077.3875E-066.5028E-015.4482E-022.7273E-019.3149E-033.4190E-032.4780E-061.5261E-062.0490E-08
F13
Average6.6465E-076.6356E-062.0431E-013.2588E-012.8072E-021.3021E-021.2726E-017.0433E-045.6927E-063.6630E-04
STD8.8440E-078.7177E-062.0539E-018.8981E-023.6420E-023.3783E-021.1781E-015.4785E-046.5362E-062.0060E-03
(W|L|T) (10|0|3)(13|0|0)(13|0|0)(13|0|0)(13|0|0)(9|1|3)(8|0|4)(6|0|7)(10|1|2)
Mean Rank1.1538E+003.8462E+009.4615E+008.8462E+007.9231E+007.3077E+003.6154E+003.1538E+001.9231E+004.4615E+00
Final Rank15109874326
Table 6. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 50.
Table 6. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 50.
Fun No.Comparative Methods
MeasureIAOAOPSOSCAWOAGWOHBASMAIHAOHHOSNS
F1
Average0.0000E+001.2621E-981.3367E+039.8429E+024.9659E-739.3925E-202.6934E-2683.1046E-3030.0000E+002.2916E-69
STD0.0000E+006.9126E-981.1120E+031.0716E+032.5168E-721.5742E-190.0000E+000.0000E+000.0000E+007.7994E-69
F2
Average0.0000E+004.2713E-517.2861E+016.0782E-011.6283E-502.5761E-126.0605E-1402.7294E-1440.0000E+001.6968E-36
STD0.0000E+002.3395E-502.8472E+015.8515E-017.0749E-502.3422E-123.2825E-1391.4947E-1430.0000E+001.7325E-36
F3
Average0.0000E+006.2530E-991.9677E+044.8973E+042.0134E+056.9354E-012.7237E-2301.3691E-2550.0000E+003.1473E-19
STD0.0000E+003.4230E-981.3525E+041.6184E+044.0176E+041.5062E+000.0000E+000.0000E+000.0000E+008.4013E-19
F4
Average0.0000E+001.3486E-511.5811E+016.8791E+016.5347E+013.4044E-041.3771E-1283.3920E-1230.0000E+001.3513E-31
STD0.0000E+006.1382E-514.2564E+008.6104E+002.9025E+012.5777E-044.9247E-1281.3065E-1220.0000E+001.8882E-31
F5
Average6.2115E-028.7566E-031.6375E+058.0264E+064.8313E+014.7545E+014.7587E+011.5060E+016.7975E-024.8009E+01
STD1.0885E-011.6222E-021.9275E+051.1260E+072.9790E-017.3129E-019.3529E-011.9174E+011.9964E-012.7176E-01
F6
Average3.2391E-063.0053E-041.2981E+031.1480E+031.2141E+002.6191E+003.5202E+001.0294E-012.8225E-061.1414E+00
STD3.7520E-064.6379E-044.6381E+029.0352E+024.7747E-015.3908E-018.0760E-011.1948E-014.0460E-064.0006E-01
F7
Average4.5157E-051.0168E-045.9505E+003.7327E+002.6190E-033.0051E-033.7636E-041.9590E-049.5694E-055.5031E-04
STD3.9634E-051.0872E-049.6585E+004.6774E+002.2822E-031.4242E-033.8930E-041.9448E-041.1017E-043.3177E-04
F8
Average−2.0949E+04−1.0296E+04−1.0334E+04−4.9128E+03−1.8066E+04−8.6995E+03−2.0632E+04−2.0945E+04−2.0080E+04−1.1343E+04
STD1.0924E-016.4117E+031.1183E+033.4755E+022.8841E+031.6886E+034.8682E+023.9174E+002.5957E+037.7881E+02
F9
Average0.0000E+000.0000E+003.2003E+021.0439E+020.0000E+004.5236E+000.0000E+000.0000E+000.0000E+000.0000E+00
STD0.0000E+000.0000E+005.1219E+014.8147E+010.0000E+004.4977E+000.0000E+000.0000E+000.0000E+000.0000E+00
F10
Average8.8818E-168.8818E-161.0035E+011.6811E+013.4935E-154.6984E-118.8818E-168.8818E-168.8818E-164.4409E-15
STD0.0000E+000.0000E+004.7572E+006.7482E+002.6279E-151.8211E-110.0000E+000.0000E+000.0000E+000.0000E+00
F11
Average0.0000E+000.0000E+001.2668E+019.2736E+000.0000E+002.6772E-030.0000E+000.0000E+000.0000E+000.0000E+00
STD0.0000E+000.0000E+006.7658E+001.1111E+010.0000E+006.2169E-030.0000E+000.0000E+000.0000E+000.0000E+00
F12
Average3.5676E-071.3093E-061.0947E+011.6926E+073.0187E-021.4397E-011.1742E-011.2283E-026.6027E-074.5960E-03
STD5.9063E-072.1103E-066.0591E+002.1964E+072.5737E-027.7867E-027.7275E-021.7179E-028.6266E-073.7155E-03
F13
Average3.1679E-062.5696E-057.5024E+022.9400E+071.1223E+002.0360E+004.1355E+002.4300E-021.5121E-053.3356E-01
STD4.5408E-064.4035E-052.1952E+033.7851E+074.3282E-013.2325E-014.4922E-012.5096E-022.6993E-051.8665E-01
(W|L|T) (9|1|3)(13|0|0)(13|0|0)(11|0|2)(13|0|0)(10|0|3)(10|0|3)(6|0|7)(10|0|3)
Mean Rank1.0769E+003.3846E+009.1538E+009.4615E+006.0000E+007.5385E+004.2308E+003.0000E+001.6923E+004.7692E+00
Final Rank14910785326
Table 7. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 100.
Table 7. Results of the comparative methods on classical test functions (F1–F13); the dimension is fixed to 100.
Fun No. / Measure | IAO | AO | PSO | SCA | WOA | GWO | HBA | SMA | IHAOHHO | SNS
F1
Average | 0.0000E+00 | 5.1159E-99 | 5.7117E+03 | 1.1443E+04 | 3.5827E-69 | 1.9290E-12 | 3.2518E-200 | 6.2811E-221 | 0.0000E+00 | 1.1048E-67
STD | 0.0000E+00 | 2.8021E-98 | 3.6373E+03 | 8.2302E+03 | 1.9617E-68 | 1.2824E-12 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 4.1967E-67
F2
Average | 0.0000E+00 | 3.9051E-52 | 1.5064E+02 | 7.2002E+00 | 2.2753E-50 | 4.2169E-08 | 1.3788E-111 | 1.2113E-141 | 0.0000E+00 | 3.2427E-35
STD | 0.0000E+00 | 2.1383E-51 | 3.1696E+01 | 6.2087E+00 | 8.7244E-50 | 1.5009E-08 | 7.5522E-111 | 6.6348E-141 | 0.0000E+00 | 3.7736E-35
F3
Average | 0.0000E+00 | 6.8311E-101 | 9.5308E+04 | 2.4323E+05 | 1.0637E+06 | 4.0890E+02 | 1.0078E-183 | 2.4850E-253 | 0.0000E+00 | 6.6454E-12
STD | 0.0000E+00 | 2.5438E-100 | 6.2759E+04 | 6.0263E+04 | 2.4881E+05 | 3.7435E+02 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 3.6253E-11
F4
Average | 0.0000E+00 | 2.0674E-54 | 2.1773E+01 | 8.8956E+01 | 7.2568E+01 | 1.0282E+00 | 9.8822E-98 | 7.2459E-140 | 0.0000E+00 | 1.9484E-30
STD | 0.0000E+00 | 1.1324E-53 | 4.6161E+00 | 2.9508E+00 | 2.9428E+01 | 1.1412E+00 | 5.4126E-97 | 3.9688E-139 | 0.0000E+00 | 2.1891E-30
F5
Average | 3.3067E-01 | 1.0799E-02 | 1.0524E+06 | 1.0779E+08 | 9.8209E+01 | 9.7923E+01 | 9.8033E+01 | 3.5374E+01 | 7.5137E+01 | 9.8128E+01
STD | 4.1431E-01 | 2.0391E-02 | 1.0923E+06 | 6.9068E+07 | 2.1229E-01 | 6.6181E-01 | 6.8642E-01 | 4.0718E+01 | 4.2148E+01 | 2.5631E-01
F6
Average | 1.7505E-05 | 1.4580E-03 | 4.4654E+03 | 1.3780E+04 | 4.3181E+00 | 1.0034E+01 | 1.3475E+01 | 1.6686E+00 | 7.8806E-05 | 7.7487E+00
STD | 2.8122E-05 | 4.2895E-03 | 1.8531E+03 | 7.7723E+03 | 1.0995E+00 | 1.2448E+00 | 1.6622E+00 | 1.7329E+00 | 1.4201E-04 | 8.0349E-01
F7
Average | 3.8993E-05 | 1.0242E-04 | 5.4957E+01 | 1.3007E+02 | 4.4602E-03 | 7.1806E-03 | 4.3691E-04 | 2.7884E-04 | 8.1182E-05 | 4.7715E-04
STD | 3.3278E-05 | 7.1470E-05 | 1.2389E+02 | 5.2281E+01 | 5.5305E-03 | 4.0150E-03 | 8.6479E-04 | 2.2574E-04 | 5.9169E-05 | 2.8944E-04
F8
Average | −4.1898E+04 | −1.1220E+04 | −1.5662E+04 | −6.9689E+03 | −3.3889E+04 | −1.5512E+04 | −4.0452E+04 | −4.1891E+04 | −4.1647E+04 | −1.8296E+04
STD | 3.7073E-01 | 6.2868E+03 | 2.2964E+03 | 4.6480E+02 | 5.8967E+03 | 3.3937E+03 | 2.9790E+03 | 1.0217E+01 | 1.2765E+03 | 2.6919E+03
F9
Average | 0.0000E+00 | 0.0000E+00 | 8.3584E+02 | 2.3748E+02 | 0.0000E+00 | 9.2482E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
STD | 0.0000E+00 | 0.0000E+00 | 4.4226E+01 | 1.2370E+02 | 0.0000E+00 | 8.1907E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
F10
Average | 8.8818E-16 | 8.8818E-16 | 1.0763E+01 | 2.0216E+01 | 3.8488E-15 | 1.2753E-07 | 8.8818E-16 | 8.8818E-16 | 8.8818E-16 | 4.4409E-15
STD | 0.0000E+00 | 0.0000E+00 | 2.5471E+00 | 2.0674E+00 | 1.8853E-15 | 4.8362E-08 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
F11
Average | 0.0000E+00 | 0.0000E+00 | 4.5743E+01 | 9.5013E+01 | 0.0000E+00 | 3.4526E-03 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
STD | 0.0000E+00 | 0.0000E+00 | 1.5048E+01 | 6.1397E+01 | 0.0000E+00 | 9.1468E-03 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00
F12
Average | 1.1362E-07 | 1.7522E-06 | 1.7321E+02 | 3.6238E+08 | 4.9595E-02 | 3.0686E-01 | 3.5908E-01 | 1.4775E-02 | 4.3243E-07 | 4.7429E-02
STD | 1.3745E-07 | 2.7432E-06 | 8.2035E+02 | 1.5973E+08 | 1.9496E-02 | 7.6135E-02 | 1.0260E-01 | 3.7190E-02 | 6.6715E-07 | 1.2802E-02
F13
Average | 2.8163E-06 | 3.8493E-05 | 1.2104E+05 | 5.7687E+08 | 2.7539E+00 | 6.8028E+00 | 9.3862E+00 | 1.0836E-01 | 1.1256E-05 | 5.1031E+00
STD | 3.2627E-06 | 4.6305E-05 | 2.3444E+05 | 2.6301E+08 | 8.5936E-01 | 4.9165E-01 | 4.6368E-01 | 2.0204E-01 | 2.0715E-05 | 9.5963E-01
(W|L|T) | - | (9|1|3) | (13|0|0) | (13|0|0) | (13|0|0) | (11|0|2) | (13|0|0) | (10|0|3) | (10|0|3) | (11|0|2)
Mean Rank | 1.0769E+00 | 3.4615E+00 | 8.8462E+00 | 9.7692E+00 | 5.7692E+00 | 7.6154E+00 | 4.4615E+00 | 2.7692E+00 | 1.6923E+00 | 5.4615E+00
Final Rank | 1 | 4 | 9 | 10 | 7 | 8 | 5 | 3 | 2 | 6
Table 8. Results of the comparative methods on classical test functions (F14–F23).
Fun No. / Measure | IAO | AO | PSO | SCA | WOA | GWO | HBA | SMA | IHAOHHO | SNS
F14
Average | 9.9800E-01 | 3.3268E+00 | 3.6091E+00 | 1.8685E+00 | 3.0570E+00 | 4.7152E+00 | 1.8474E+00 | 9.9800E-01 | 3.9338E+00 | 9.9800E-01
STD | 1.9485E-07 | 5.2481E+00 | 3.9511E+00 | 1.5933E+00 | 3.4565E+00 | 4.4316E+00 | 2.4975E+00 | 4.2766E-13 | 4.1802E+00 | 3.1721E-17
F15
Average | 3.4953E-04 | 5.2404E-04 | 1.0700E-02 | 1.0491E-03 | 6.0726E-04 | 4.4088E-03 | 3.1558E-03 | 5.2164E-04 | 3.7513E-04 | 3.6095E-04
STD | 1.7826E-04 | 9.7738E-04 | 9.4417E-03 | 3.8370E-04 | 3.1958E-04 | 8.0603E-03 | 6.5715E-03 | 2.2777E-04 | 3.4012E-04 | 1.5460E-04
F16
Average | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00
STD | 2.4221E-06 | 1.6986E-03 | 2.9925E-16 | 6.9673E-05 | 3.1374E-09 | 3.2901E-08 | 9.1327E-08 | 1.6735E-09 | 7.0317E-10 | 2.4977E-16
F17
Average | 3.9789E-01 | 3.9813E-01 | 3.9789E-01 | 3.9996E-01 | 3.9790E-01 | 3.9792E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01 | 3.9789E-01
STD | 2.0640E-05 | 2.6541E-04 | 0.0000E+00 | 2.2873E-03 | 1.7337E-05 | 2.4586E-04 | 7.0378E-08 | 4.7990E-08 | 3.4610E-07 | 3.3645E-16
F18
Average | 3.0000E+00 | 3.0308E+00 | 3.0000E+00 | 3.0001E+00 | 3.5402E+00 | 3.0000E+00 | 3.0000E+00 | 5.7000E+00 | 6.6000E+00 | 3.0000E+00
STD | 2.6593E-04 | 2.0550E-05 | 1.8626E-15 | 1.8739E-04 | 3.8193E+00 | 5.6334E-05 | 1.7120E-15 | 1.1757E-10 | 8.2385E+00 | 1.3292E-15
F19
Average | −3.8626E+00 | −3.8569E+00 | −3.8615E+00 | −3.8538E+00 | −3.8571E+00 | −3.8610E+00 | −3.8628E+00 | −3.8628E+00 | −3.8627E+00 | −3.8628E+00
STD | 1.7812E-04 | 3.6747E-03 | 2.9875E-03 | 2.3179E-03 | 1.1745E-02 | 2.7863E-03 | 1.3914E-07 | 1.5881E-07 | 1.4671E-04 | 2.6684E-15
F20
Average | −3.2762E+00 | −3.2488E+00 | −3.1063E+00 | −2.8254E+00 | −3.1939E+00 | −3.2652E+00 | −3.2721E+00 | −3.2425E+00 | −3.2809E+00 | −3.2911E+00
STD | 5.9227E-02 | 1.0202E-01 | 2.6843E-01 | 4.2368E-01 | 1.7954E-01 | 7.6236E-02 | 5.9276E-02 | 5.7189E-02 | 5.8947E-02 | 5.2680E-02
F21
Average | −1.0153E+01 | −1.0140E+01 | −9.4095E+00 | −2.4271E+00 | −8.3139E+00 | −9.3203E+00 | −9.3280E+00 | −1.0153E+01 | −1.0153E+01 | −1.0003E+01
STD | 3.2810E-04 | 1.4970E-02 | 2.1134E+00 | 1.9289E+00 | 2.6107E+00 | 2.1197E+00 | 2.2797E+00 | 3.0386E-04 | 4.0101E-04 | 1.0639E+00
F22
Average | −1.0403E+01 | −1.0402E+01 | −8.7103E+00 | −3.5346E+00 | −7.4750E+00 | −1.0084E+01 | −9.1451E+00 | −1.0403E+01 | −1.0403E+01 | −1.0297E+01
STD | 4.9822E-04 | 1.8102E-03 | 3.0947E+00 | 1.9151E+00 | 3.0141E+00 | 1.2683E+00 | 2.9284E+00 | 2.3691E-04 | 1.9187E-04 | 7.5169E-01
F23
Average | −1.0536E+01 | −1.0521E+01 | −9.2431E+00 | −3.7748E+00 | −6.8094E+00 | −1.0372E+01 | −8.4938E+00 | −1.0536E+01 | −1.0536E+01 | −1.0428E+01
STD | 2.9928E-04 | 3.3891E-02 | 2.8229E+00 | 1.5647E+00 | 3.3590E+00 | 1.1473E+00 | 3.7874E+00 | 2.7799E-04 | 2.6806E-04 | 7.6480E-01
(W|L|T) | - | (9|0|1) | (6|1|3) | (9|0|1) | (8|1|1) | (6|2|2) | (7|0|3) | (4|0|6) | (4|1|5) | (5|1|4)
Mean Rank | 1.6000E+00 | 5.6000E+00 | 5.5000E+00 | 7.9000E+00 | 6.6000E+00 | 4.8000E+00 | 4.8000E+00 | 3.4000E+00 | 3.5000E+00 | 3.1000E+00
Final Rank | 1 | 8 | 7 | 10 | 9 | 5 | 5 | 3 | 4 | 2
Table 9. Parameters for the compared algorithms.
Algorithm | Parameters
AO | U = 0.00565; R3 = 10; ω = 0.005; α = 0.1; δ = 0.1
IHAOHHO | U = 0.00565; r3 = 10; ω = 0.005; E = [2, 0]
PSO | c1 = 2; c2 = 2; vmax = 6
SCA | -
WOA | a = [2, 0]; b = 2
GWO | a = [2, 0]
HBA | C = 2; β = 6
SMA | z = 0.03
SNS | -
Table 10. Computation time of algorithms on benchmark functions.
Fun | IAO | AO | PSO | SCA | WOA | GWO | HBA | SMA | IHAOHHO | SNS
F1 | 2.30E-01 | 1.11E-01 | 3.38E-02 | 5.18E-02 | 5.42E-02 | 6.23E-02 | 2.23E-01 | 4.09E-01 | 2.18E-01 | 1.93E-01
F2 | 2.22E-01 | 1.14E-01 | 3.56E-02 | 5.44E-02 | 4.87E-02 | 6.58E-02 | 2.01E-01 | 3.69E-01 | 2.17E-01 | 1.66E-01
F3 | 4.44E-01 | 2.25E-01 | 8.61E-02 | 9.06E-02 | 8.19E-02 | 9.94E-02 | 4.70E-01 | 4.49E-01 | 4.22E-01 | 1.92E-01
F4 | 2.09E-01 | 1.16E-01 | 3.47E-02 | 5.27E-02 | 6.77E-02 | 5.94E-02 | 1.85E-01 | 4.42E-01 | 2.10E-01 | 1.69E-01
F5 | 2.76E-01 | 1.42E-01 | 5.12E-02 | 6.90E-02 | 7.45E-02 | 7.68E-02 | 2.17E-01 | 4.17E-01 | 3.29E-01 | 1.80E-01
F6 | 2.04E-01 | 1.10E-01 | 3.50E-02 | 5.65E-02 | 4.53E-02 | 5.98E-02 | 2.21E-01 | 4.49E-01 | 2.21E-01 | 1.52E-01
F7 | 3.82E-01 | 1.64E-01 | 5.97E-02 | 7.48E-02 | 6.90E-02 | 8.45E-02 | 1.59E-01 | 4.27E-01 | 2.96E-01 | 1.74E-01
F8 | 2.52E-01 | 1.32E-01 | 4.73E-02 | 6.61E-02 | 6.04E-02 | 7.61E-02 | 1.16E-01 | 3.91E-01 | 2.77E-01 | 1.73E-01
F9 | 1.99E-01 | 1.13E-01 | 3.78E-02 | 5.59E-02 | 4.78E-02 | 6.65E-02 | 2.23E-01 | 3.85E-01 | 2.28E-01 | 1.71E-01
F10 | 2.43E-01 | 1.32E-01 | 3.98E-02 | 6.13E-02 | 5.19E-02 | 6.66E-02 | 2.55E-01 | 3.76E-01 | 2.29E-01 | 1.61E-01
F11 | 2.53E-01 | 1.37E-01 | 5.20E-02 | 7.12E-02 | 6.35E-02 | 7.87E-02 | 2.02E-01 | 3.96E-01 | 2.79E-01 | 1.71E-01
F12 | 8.22E-01 | 3.02E-01 | 1.43E-01 | 1.52E-01 | 1.42E-01 | 1.55E-01 | 6.28E-01 | 4.78E-01 | 6.55E-01 | 2.63E-01
F13 | 7.99E-01 | 3.21E-01 | 1.54E-01 | 1.46E-01 | 1.44E-01 | 1.60E-01 | 6.37E-01 | 4.48E-01 | 6.69E-01 | 2.90E-01
F14 | 2.28E+00 | 7.02E-01 | 3.78E-01 | 3.56E-01 | 3.80E-01 | 3.92E-01 | 1.64E-01 | 5.45E-01 | 1.65E+00 | 4.80E-01
F15 | 2.12E-01 | 1.08E-01 | 3.59E-02 | 4.70E-02 | 4.31E-02 | 5.16E-02 | 2.16E-01 | 2.70E-01 | 2.21E-01 | 1.80E-01
F16 | 2.52E-01 | 1.06E-01 | 3.51E-02 | 4.21E-02 | 4.29E-02 | 4.32E-02 | 1.57E-01 | 2.40E-01 | 2.12E-01 | 1.51E-01
F17 | 1.89E-01 | 1.17E-01 | 3.60E-02 | 3.66E-02 | 3.87E-02 | 4.11E-02 | 2.06E-01 | 2.73E-01 | 1.93E-01 | 1.49E-01
F18 | 1.69E-01 | 9.46E-02 | 2.91E-02 | 5.19E-02 | 3.58E-02 | 4.37E-02 | 1.89E-01 | 2.38E-01 | 1.86E-01 | 1.46E-01
F19 | 2.24E-01 | 1.17E-01 | 4.27E-02 | 5.42E-02 | 4.97E-02 | 5.31E-02 | 2.41E-01 | 2.62E-01 | 2.39E-01 | 1.65E-01
F20 | 2.60E-01 | 1.22E-01 | 4.12E-02 | 6.44E-02 | 6.28E-02 | 6.07E-02 | 1.82E-01 | 3.45E-01 | 2.45E-01 | 1.69E-01
F21 | 2.86E-01 | 1.43E-01 | 5.27E-02 | 6.58E-02 | 6.29E-02 | 6.79E-02 | 2.94E-01 | 3.60E-01 | 2.90E-01 | 1.83E-01
F22 | 4.03E-01 | 1.51E-01 | 6.33E-02 | 7.31E-02 | 8.77E-02 | 7.89E-02 | 3.26E-01 | 3.77E-01 | 3.19E-01 | 1.90E-01
F23 | 4.62E-01 | 1.68E-01 | 7.04E-02 | 8.54E-02 | 9.06E-02 | 8.67E-02 | 3.87E-01 | 3.00E-01 | 4.06E-01 | 2.33E-01
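Table 10 reports wall-clock cost rather than solution quality. A small harness along the following lines can produce such numbers; `optimizer` and `fun` are hypothetical stand-ins for an algorithm and a benchmark function, and the use of `time.perf_counter` with 30 averaged runs is our assumption about the measurement protocol.

```python
import time

def mean_runtime(optimizer, fun, runs=30):
    """Average wall-clock seconds per run of `optimizer` on `fun`.
    Both arguments are hypothetical callables; the run count is an
    assumption about the protocol, not a figure stated in the paper."""
    start = time.perf_counter()
    for _ in range(runs):
        optimizer(fun)
    return (time.perf_counter() - start) / runs
```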
Table 11. CEC2019 benchmark functions.
No. | Function | Fmin | D | Search Range
1 | Storn's Chebyshev Polynomial Fitting Problem | 1 | 9 | [−8192, 8192]
2 | Inverse Hilbert Matrix Problem | 1 | 16 | [−16384, 16384]
3 | Lennard-Jones Minimum Energy Cluster | 1 | 18 | [−4, 4]
4 | Rastrigin's Function | 1 | 10 | [−100, 100]
5 | Griewangk's Function | 1 | 10 | [−100, 100]
6 | Weierstrass Function | 1 | 10 | [−100, 100]
7 | Modified Schwefel's Function | 1 | 10 | [−100, 100]
8 | Expanded Schaffer's F6 Function | 1 | 10 | [−100, 100]
9 | Happy Cat Function | 1 | 10 | [−100, 100]
10 | Ackley Function | 1 | 10 | [−100, 100]
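For concreteness, the snippet below implements the base (unshifted, unrotated) Ackley function underlying CEC2019 F10; the official CEC2019 version composes this form with shifts, rotations, and a bias so that the tabulated Fmin is 1, so the snippet is illustrative only.

```python
import numpy as np

def ackley(x):
    """Base Ackley function; global minimum 0 at the origin. The
    CEC2019 F10 applies shift/rotation and a bias on top of this form."""
    x = np.asarray(x, dtype=float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x)))
            + 20.0 + np.e)

print(ackley(np.zeros(10)))  # ~0.0 (up to floating-point error)
```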
Table 12. Results of the comparative methods on CEC2019 test functions.
Fun No. / Measure | IAO | AO | AOA | IHAOHHO | PSO | HBA | WOA | SSA
F1
Average | 1.0000E+00 | 1.0000E+00 | 1.0000E+00 | 1.0000E+00 | 9.1426E+02 | 1.0000E+00 | 1.9196E+03 | 1.0000E+00
STD | 0.0000E+00 | 7.0142E-08 | 1.3783E-11 | 0.0000E+00 | 1.2574E+03 | 1.0142E-15 | 1.9316E+03 | 0.0000E+00
F2
Average | 4.9902E+00 | 5.0000E+00 | 4.8151E+00 | 4.9958E+00 | 2.3817E+01 | 4.4369E+00 | 4.7245E+01 | 5.0000E+00
STD | 2.2790E-02 | 0.0000E+00 | 2.2630E-01 | 3.4360E-02 | 1.1027E+01 | 2.0940E-01 | 2.2419E+01 | 1.7967E-06
F3
Average | 3.7124E+00 | 6.9735E+00 | 5.1734E+00 | 4.6995E+00 | 1.2112E+01 | 8.0559E+00 | 6.8170E+00 | 1.1024E+01
STD | 1.1250E+00 | 1.8316E+00 | 1.5342E+00 | 2.7237E+00 | 8.5501E-01 | 2.7655E+00 | 2.7975E+00 | 2.4208E+00
F4
Average | 3.7228E+01 | 3.8164E+01 | 5.8394E+01 | 6.5305E+01 | 5.9952E+01 | 5.5902E+01 | 5.9898E+01 | 8.3837E+01
STD | 9.2662E+00 | 1.1124E+01 | 1.0801E+01 | 1.9889E+01 | 1.9166E+01 | 2.2363E+01 | 2.0615E+01 | 2.1308E+01
F5
Average | 2.7378E+00 | 2.1021E+00 | 8.4194E+01 | 1.9913E+01 | 1.1439E+01 | 3.0982E+00 | 2.6868E+00 | 6.7515E+01
STD | 3.2698E-01 | 1.9713E-01 | 3.1443E+01 | 1.4142E+01 | 9.1522E+00 | 5.7361E+00 | 8.2301E-01 | 3.7276E+01
F6
Average | 6.2261E+00 | 5.4917E+00 | 1.0712E+01 | 1.0866E+01 | 8.9864E+00 | 8.1044E+00 | 9.1670E+00 | 1.1597E+01
STD | 1.2281E+00 | 1.3587E+00 | 1.0793E+00 | 1.3478E+00 | 1.7271E+00 | 1.8066E+00 | 1.5338E+00 | 1.7529E+00
F7
Average | 1.0325E+03 | 1.1475E+03 | 1.3246E+03 | 1.4688E+03 | 1.4088E+03 | 1.3097E+03 | 1.4174E+03 | 1.5311E+03
STD | 2.8196E+02 | 2.9632E+02 | 2.1290E+02 | 3.9995E+02 | 2.5769E+02 | 4.0527E+02 | 2.7771E+02 | 3.5557E+02
F8
Average | 4.3272E+00 | 4.4131E+00 | 4.7220E+00 | 4.8513E+00 | 4.6475E+00 | 4.7121E+00 | 4.8320E+00 | 4.9479E+00
STD | 2.7159E-01 | 2.9467E-01 | 3.3237E-01 | 2.5891E-01 | 3.4308E-01 | 3.4192E-01 | 2.7793E-01 | 2.7781E-01
F9
Average | 1.4783E+00 | 1.4901E+00 | 3.1939E+00 | 1.5746E+00 | 1.5590E+00 | 1.3615E+00 | 1.3896E+00 | 2.4654E+00
STD | 1.4829E-01 | 1.3466E-01 | 6.0448E-01 | 4.3106E-01 | 4.1725E-01 | 1.7083E-01 | 1.9395E-01 | 8.8138E-01
F10
Average | 2.0799E+01 | 2.0870E+01 | 2.1121E+01 | 2.1061E+01 | 2.1161E+01 | 2.1013E+01 | 2.1269E+01 | 2.1037E+01
STD | 2.0220E+00 | 2.8398E+00 | 1.1769E-01 | 9.7112E-02 | 1.3506E-01 | 6.5911E-02 | 9.5167E-02 | 1.4888E-01
(W|L|T) | - | (7|2|1) | (8|1|1) | (9|0|1) | (10|0|0) | (7|2|1) | (8|2|0) | (9|0|1)
Mean Rank | 1.7000E+00 | 3.0000E+00 | 4.7000E+00 | 5.2000E+00 | 5.7000E+00 | 2.9000E+00 | 5.4000E+00 | 6.3000E+00
Final Rank | 1 | 3 | 4 | 5 | 7 | 2 | 6 | 8
Table 13. Results of the compared algorithms for solving the speed reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimum Weight | Ranking
IAO | 3.49711 | 0.7 | 17 | 7.3 | 7.7572 | 3.350613 | 5.286669 | 2995.4747 | 1
IHAOHHO [40] | 3.49924 | 0.7 | 17 | 7.3 | 7.8191 | 3.35006 | 5.28531 | 2996.0935 | 2
AO [37] | 3.5021 | 0.7 | 17 | 7.3099 | 7.7476 | 3.3641 | 5.2994 | 3007.7328 | 6
AOA [34] | 3.50384 | 0.7 | 17 | 7.3 | 7.72933 | 3.35649 | 5.2867 | 2997.9157 | 4
SCA [33] | 3.508755 | 0.7 | 17 | 7.3 | 7.8 | 3.46102 | 5.289213 | 3030.563 | 10
GA [13] | 3.510253 | 0.7 | 17 | 8.35 | 7.8 | 3.362201 | 5.287723 | 3067.561 | 11
MDA [43] | 3.5 | 0.7 | 17 | 7.3 | 7.670396 | 3.542421 | 5.245814 | 3019.583365 | 8
MFO [44] | 3.49745 | 0.7 | 17 | 7.82775 | 7.71245 | 3.35178 | 5.28635 | 2998.9408 | 5
FA [45] | 3.507495 | 0.7001 | 17 | 7.71967 | 8.08085 | 3.35151 | 5.28705 | 3010.137492 | 7
HS [46] | 3.520124 | 0.7 | 17 | 8.37 | 7.8 | 3.36697 | 5.288719 | 3029.002 | 9
PSO-DE [47] | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.35021 | 5.28668 | 2996.3481 | 3
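As a sanity check on Table 13, the widely used closed-form objective for the speed reducer problem is sketched below (taken from the general constrained-optimization literature, not restated in this paper). Evaluating it at IAO's printed design variables gives roughly 2994.4 versus the tabulated 2995.4747, a gap of about 0.04% that is plausibly due to rounding of the printed variables.

```python
def speed_reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Common speed-reducer objective (total gear and shaft weight)."""
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# IAO's design from Table 13; prints roughly 2994.4 (see caveat above).
print(speed_reducer_weight(3.49711, 0.7, 17, 7.3, 7.7572, 3.350613, 5.286669))
```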
Table 14. Results of the compared algorithms for solving the pressure vessel design problem.
Algorithm | Ts | Th | R | L | Optimum Weight | Ranking
IAO | 0.7788354 | 0.3871472 | 40.35416 | 199.5197 | 5892.7534 | 1
AO [37] | 1.054 | 0.182806 | 59.6219 | 38.805 | 5949.2258 | 3
IHAOHHO [40] | 0.8363559 | 0.4127868 | 45.08462 | 142.9202 | 5932.3392 | 2
WOA [31] | 0.8125 | 0.4375 | 42.0982699 | 176.638998 | 6059.741 | 9
SMA [27] | 0.7931 | 0.3932 | 40.6711 | 196.2178 | 5994.1857 | 4
GWO [30] | 0.8125 | 0.4345 | 42.0892 | 176.7587 | 6051.5639 | 6
AOA [34] | 0.8303737 | 0.4162057 | 42.75127 | 169.3454 | 6048.7844 | 5
PSO-SCA [47] | 0.8125 | 0.4375 | 42.098446 | 176.6366 | 6059.71433 | 8
MVO [11] | 0.8125 | 0.4375 | 42.090738 | 176.73869 | 6060.8066 | 12
GA [13] | 0.8125 | 0.4375 | 42.097398 | 176.65405 | 6059.94634 | 11
HPSO [48] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143 | 7
ES [49] | 0.8125 | 0.4375 | 42.098087 | 176.640518 | 6059.7456 | 10
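The pressure vessel result can be checked the same way; under the standard four-term cost model (again an assumption from the general literature), IAO's printed design reproduces the tabulated optimum:

```python
def pressure_vessel_cost(ts, th, r, l):
    """Standard pressure-vessel cost: material, forming, and welding terms."""
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

# IAO's design from Table 14; prints roughly 5892.75, matching the table.
print(pressure_vessel_cost(0.7788354, 0.3871472, 40.35416, 199.5197))
```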
Table 15. Results of the compared algorithms for solving the tension/compression spring design problem.
Algorithm | d | D | N | Optimum Weight | Ranking
IAO | 0.051827941 | 0.364475012 | 10.7068472 | 0.012440375 | 3
AO [37] | 0.0502439 | 0.35262 | 10.5425 | 0.011165 | 2
IHAOHHO [40] | 0.055883 | 0.52784 | 4.7603 | 0.011144 | 1
WOA [31] | 0.051207 | 0.345215 | 12.004032 | 0.0126763 | 7
HS [46] | 0.051154 | 0.349871 | 12.076432 | 0.0126706 | 5
MVO [11] | 0.05251 | 0.37602 | 10.33513 | 0.01279 | 9
PSO [50] | 0.051728 | 0.357644 | 11.244543 | 0.0126747 | 6
ES [49] | 0.051643 | 0.355360 | 11.397926 | 0.012698 | 8
CSCA [51] | 0.051609 | 0.354714 | 11.410831 | 0.0126702 | 4
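For the spring, the standard objective is simply the wire volume (N + 2)·D·d²; assuming that formulation, IAO's printed design reproduces the tabulated weight:

```python
def spring_weight(d, D, N):
    """Standard tension/compression spring objective: (N + 2) * D * d**2."""
    return (N + 2.0) * D * d ** 2

# IAO's design from Table 15; prints roughly 0.0124404, matching the table.
print(spring_weight(0.051827941, 0.364475012, 10.7068472))
```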
Table 16. Results of the compared algorithms for solving the three-bar truss design problem.
Algorithm | x1 | x2 | Optimum Weight | Ranking
IAO | 0.7912217 | 0.40070142 | 263.8614364 | 1
AO [37] | 0.7926 | 0.3966 | 263.8684 | 3
IHAOHHO [40] | 0.79002 | 0.40324 | 263.8622 | 2
AOA [34] | 0.79369 | 0.39426 | 263.9154 | 9
SSA [32] | 0.78866541 | 0.408275784 | 263.89584 | 4
MBA [52] | 0.788565 | 0.4085597 | 263.89585 | 6
PSO-DE [47] | 0.7886751 | 0.4082482 | 263.89584 | 5
CS [53] | 0.78867 | 0.40902 | 263.9716 | 10
GOA [54] | 0.788897556 | 0.40761957 | 263.8958815 | 7
MFO [44] | 0.788244771 | 0.409466906 | 263.8959797 | 8
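Likewise for the three-bar truss, whose standard objective is (2√2·x1 + x2)·L with bar length L = 100 cm (an assumption taken from the usual formulation); IAO's printed design reproduces the tabulated weight:

```python
import math

def truss_weight(x1, x2, L=100.0):
    """Standard three-bar truss objective: (2 * sqrt(2) * x1 + x2) * L."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

# IAO's design from Table 16; prints roughly 263.8614, matching the table.
print(truss_weight(0.7912217, 0.40070142))
```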