Article

Dynamic Random Walk and Dynamic Opposition Learning for Improving Aquila Optimizer: Solving Constrained Engineering Design Problems

1 Rajkiya Engineering College (AKTU, Lucknow), Bijnor 246725, India
2 Department of Basic Sciences, Preparatory Year, King Faisal University, Al Ahsa 31982, Saudi Arabia
3 Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(4), 215; https://doi.org/10.3390/biomimetics9040215
Submission received: 28 February 2024 / Revised: 27 March 2024 / Accepted: 2 April 2024 / Published: 4 April 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract:
One of the most important tasks in handling real-world global optimization problems is to achieve a balance between exploration and exploitation in any nature-inspired optimization method. As a result, the search agents of an algorithm constantly strive to investigate the unexplored regions of a search space. Aquila Optimizer (AO) is a recent addition to the field of metaheuristics that finds the solution to an optimization problem using the hunting behavior of Aquila. However, in some cases, AO skips the true solutions and is trapped at sub-optimal solutions. These problems lead to premature convergence (stagnation), which is harmful in determining the global optima. Therefore, to solve the above-mentioned problem, the present study aims to establish comparatively better synergy between exploration and exploitation and to escape from local stagnation in AO. In this direction, firstly, the exploration ability of AO is improved by integrating Dynamic Random Walk (DRW), and, secondly, the balance between exploration and exploitation is maintained through Dynamic Oppositional Learning (DOL). Due to its dynamic search space and low complexity, the DOL-inspired DRW technique is more computationally efficient and has higher exploration potential for convergence to the best optimum. This allows the algorithm to be improved even further and prevents premature convergence. The proposed algorithm is named DAO. A well-known set of CEC2017 and CEC2019 benchmark functions as well as three engineering problems are used for the performance evaluation. The superior ability of the proposed DAO is demonstrated by the examination of the numerical data produced and its comparison with existing metaheuristic algorithms.

1. Introduction

Global optimization is a term used to characterize many scientific and engineering problems that can be resolved using different optimization techniques. These days, the preferred methods for global optimization are metaheuristic algorithms (MAs), since their stochastic and dynamic nature helps them avoid being trapped in local optima [1]. The Genetic Algorithm (GA) [2], Differential Evolution (DE) [3], Particle Swarm Optimization (PSO) [4], the Reptile Search Algorithm (RSA) [5], the Whale Optimization Algorithm (WOA) [6], Brain Storm Optimization (BSO) [7], and Teaching–Learning-Based Optimization (TLBO) [8] are among the many MAs that have emerged over the past 20 years. One of the better algorithms is the Aquila Optimizer (AO), proposed by Abualigah in 2021 [9], because it is simple to implement, performs consistently, and has few configurable parameters. Its strong optimization capabilities have helped with a variety of global optimization problems, including feature selection [10], vehicle route planning [11], and machine scheduling [12].
The No Free Lunch (NFL) theorem [13] was a significant advancement in the field of nature-inspired algorithms. According to the NFL theorem, it is impossible to develop a single optimization algorithm that solves every optimization problem. To put it simply, even if optimization method "A" is ideally suited to a particular set of problems, there is always another subset of problems on which it performs poorly. As a result, the NFL theorem keeps the field of nature-inspired algorithms alive and enables researchers either to propose new algorithms or to enhance existing ones. One effective approach for improving existing algorithms is hybridization, that is, combining the best aspects of multiple algorithms into a single hybridized algorithm. Inspired by this, the present study combines the AO, DOL, and DRW techniques to obtain better exploration together with an effective balance between exploration and exploitation.
Aquila Optimizer is a recent nature-inspired optimization algorithm (NIOA) that uses the Aquila bird's hunting strategy to discover the best solution to an optimization problem, and it can handle a broad range of optimization problems [14]. Its first drawback is premature convergence, which happens when the algorithm stagnates and is unable to explore the whole search space during the process. Its second drawback is low computational efficiency. These shortcomings yield poor final solutions and also prevent the algorithm from covering the whole search space; the Aquila Optimizer takes longer to converge to the ideal solution than other existing metaheuristic algorithms. Therefore, in the current study, the Aquila Optimizer is enhanced so that it can explore the more promising areas that are left in the population's memory. By combining AO with DRW and DOL, suitable harmony between the exploration and exploitation processes is formed. The DOL [15] method, with its asymmetric and dynamic search space, exhibits a great deal of promise. Meanwhile, the dynamic opposite number, a random candidate, can be computed quickly and easily; this may enhance the algorithm's capacity for exploitation and increase the rate of convergence. The DRW [16] approach focuses on iteratively improving a solution by exploring its close neighborhood, because balancing the search for new promising areas with refining solutions within existing areas is the key to metaheuristics. The following are the paper's contributions:
  • To increase the AO algorithm’s computing effectiveness and capacity for local optimal avoidance, a new DRW technique is put forth.
  • To enhance the algorithm’s performance and balance between exploration and exploitation, the DOL approach is incorporated into AO for the very first time.
  • The performance of DAO is examined on twenty-nine benchmark functions of CEC 2017, ten benchmark functions of CEC 2019, and then on three engineering design problems, and the results are compared with various algorithms.
The rest of the paper is structured as follows: the fundamental ideas of AO, DOL, and DRW are presented in Section 2; the previous work on AO is reviewed in Section 3; the proposed DAO algorithm is explained in Section 4; Section 5 presents the experiments and their findings; Section 6 shows the engineering applications; and the study's conclusion is presented in Section 7.

2. Algorithm Preliminaries

2.1. Aquila Optimizer

The Aquila bird’s hunting strategy served as the inspiration for the Aquila Optimizer (AO) metaheuristic optimization technique [9]. AO mimics the four main prey-hunting strategies, explained as follows:

2.1.1. Expanded Exploration

The expanded exploration x_1 of the Aquila Optimizer mimics the high-soar-with-vertical-stoop hunting strategy observed in Aquila birds. With this strategy, the bird soars to great heights, giving it the opportunity to inspect the whole search area, identify potential prey, and select the ideal hunting place. Equation (1), as given in [9], provides a mathematical illustration of this strategy.
$$x_1(h+1) = x_{best}(h) \times \left(1 - \frac{h}{H}\right) + \left(x_M(h) - x_{best}(h) \times rand\right) \tag{1}$$
In Equation (1), the maximum number of iterations is denoted by H, and h denotes the current iteration. The candidate for the subsequent iteration, indicated as x_1(h+1), is produced by the first search method. The expression x_best(h) represents the best solution obtained so far up to the h-th iteration, and the term (1 − h/H) uses the iteration counter to control the extent of the exploration. Additionally, the average of the positions of the existing solutions at the h-th iteration, denoted x_M(h), is determined using Equation (2), where N represents the population size and D is the dimension size.
$$x_M(h) = \frac{1}{N}\sum_{i=1}^{N} x_i(h), \quad \text{for all } j = 1, 2, \dots, D \tag{2}$$
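As a rough illustration of Equations (1) and (2), the following Python sketch (our own illustrative code with hypothetical names such as `expanded_exploration`; it is not the authors' implementation) computes one expanded-exploration candidate from the current population:

```python
import numpy as np

def expanded_exploration(x_best, X, h, H, rng=np.random.default_rng()):
    """Eq. (1): high-soar exploration step.
    x_best : best solution so far, 1-D array of length D
    X      : current population, shape (N, D)
    h, H   : current and maximum iteration counters."""
    x_mean = X.mean(axis=0)                          # Eq. (2): mean of current solutions
    return x_best * (1 - h / H) + (x_mean - x_best * rng.random())
```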

2.1.2. Narrowed Exploration

In this approach, the Aquila bird hunts by flying in a contour-like pattern and executing swift gliding strikes inside a narrow search region to track its prey. The primary aim of this strategy, x_2(h+1), as expressed mathematically in Equation (3), is to identify a solution for the subsequent iteration.
$$x_2(h+1) = x_{best}(h) \times Levy(D) + x_R(h) + (y - x) \times rand \tag{3}$$
In this approach, Levy(D) is the Levy flight distribution for the D-dimensional space. At the h-th iteration, x_R(h) is a solution selected at random from the population, i.e., from the range [1, N], where N is the population size. The Levy flight distribution is calculated using a fixed constant value of s = 0.01 and two randomly selected parameters, u and v, which take values between 0 and 1. The mathematical expression for this computation is provided by Equation (4).
$$Levy(D) = s \times \frac{u \times \sigma}{|v|^{\frac{1}{a}}} \tag{4}$$
Equation (5) finds the value σ , which is obtained using the constant parameter a = 1.5 .
$$\sigma = \frac{\Gamma(1 + a) \times \sin\left(\frac{\pi a}{2}\right)}{\Gamma\left(\frac{1 + a}{2}\right) \times a \times 2^{\left(\frac{a - 1}{2}\right)}} \tag{5}$$
Equations (6) and (7) depict the spiral form inside the search range, denoted by y and x , respectively. Equation (3) specifies this spiral form.
$$y = \left(r_1 + U \times D_1\right) \times \cos\left(-\omega \times D_1 + \frac{3\pi}{2}\right) \tag{6}$$
$$x = \left(r_1 + U \times D_1\right) \times \sin\left(-\omega \times D_1 + \frac{3\pi}{2}\right) \tag{7}$$
The variable r_1 takes values between 1 and 20 over a predefined number of search iterations. The constant values of ω and U are 0.005 and 0.00565, respectively, and D_1 ∈ ℤ ranges from 1 to the dimension D of the search space.
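A sketch of Equations (3)–(7) follows (illustrative code with hypothetical helper names; u and v are drawn uniformly from (0, 1) exactly as stated above, although Mantegna-style implementations usually draw them from normal distributions):

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(D, a=1.5, s=0.01, rng=np.random.default_rng()):
    """Eqs. (4)-(5): Levy step for a D-dimensional vector."""
    sigma = (gamma(1 + a) * sin(pi * a / 2)
             / (gamma((1 + a) / 2) * a * 2 ** ((a - 1) / 2)))
    u = rng.random(D)                                 # u in (0, 1), as in the text
    v = rng.random(D)                                 # v in (0, 1), as in the text
    return s * u * sigma / np.abs(v) ** (1 / a)

def narrowed_exploration(x_best, x_rand, D, rng=np.random.default_rng(),
                         U=0.00565, omega=0.005):
    """Eq. (3): contour flight with short glide attack; the spiral terms y and x
    of Eqs. (6)-(7) are computed per dimension."""
    D1 = np.arange(1, D + 1)
    r = rng.integers(1, 21) + U * D1                  # r1 in [1, 20] plus U * D1
    theta = -omega * D1 + 3 * pi / 2
    y, x_s = r * np.cos(theta), r * np.sin(theta)
    return x_best * levy_flight(D, rng=rng) + x_rand + (y - x_s) * rng.random()
```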

2.1.3. Expanded Exploitation

During the investigation phase, the Aquila bird meticulously examines the prey area before attacking with a low, slow descent. This strategy, referred to as expanded exploitation x_3, is represented mathematically in Equation (8).
$$x_3(h+1) = \left(x_{best}(h) - x_M(h)\right) \times \theta - rand + \left((ub - lb) \times rand + lb\right) \times \rho \tag{8}$$
The result of Equation (8), x_3(h+1), is the solution for the subsequent iteration. In the h-th iteration, x_best(h) denotes the current best solution obtained, and x_M(h) denotes the average value of the current solutions as determined by Equation (2). The variable rand is a random number within the range (0, 1), while the tuning parameters θ and ρ are typically both set to 0.1. The symbols ub and lb represent the upper and lower bounds, respectively.
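A minimal sketch of Equation (8) is given below (illustrative; lb and ub are taken as per-dimension bound vectors or scalars):

```python
import numpy as np

def expanded_exploitation(x_best, x_mean, lb, ub,
                          theta=0.1, rho=0.1, rng=np.random.default_rng()):
    """Eq. (8): low, slow-descent attack around the prey area."""
    return ((x_best - x_mean) * theta - rng.random()
            + ((ub - lb) * rng.random() + lb) * rho)
```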

2.1.4. Narrowed Exploitation

Aquila birds hunt by taking advantage of their prey's unpredictable ground movement patterns to grab the prey directly. This hunting strategy serves as the basis for the narrowed exploitation technique x_4, whose solution for the next iteration, denoted x_4(h+1), is produced by Equation (9). The quality function J, expressed in Equation (10), was introduced to provide a well-balanced search approach.
$$x_4(h+1) = J \times x_{best}(h) - \left(P_1 \times x(h) \times rand\right) - P_2 \times Levy(D) + rand \times P_1 \tag{9}$$
Equations (11) and (12) are used to determine the motion pattern P_1 of the Aquila while tracking its prey and the flight trajectory P_2 of the attack from the start point to the terminal point of the escape. Both the maximum number of iterations H and the current iteration number h are used in these computations.
$$J(h) = h^{\frac{2 \times rand() - 1}{(1 - H)^2}} \tag{10}$$
$$P_1 = 2 \times rand() - 1 \tag{11}$$
$$P_2 = 2 \times \left(1 - \frac{h}{H}\right) \tag{12}$$
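A sketch of Equations (9)–(12), reusing the `levy_flight` helper sketched earlier (again illustrative, not the authors' code):

```python
import numpy as np

def narrowed_exploitation(x, x_best, h, H, D, rng=np.random.default_rng()):
    """Eqs. (9)-(12): walk-and-seize attack on the ground."""
    J  = h ** ((2 * rng.random() - 1) / (1 - H) ** 2)  # quality function, Eq. (10)
    P1 = 2 * rng.random() - 1                          # prey-tracking motion, Eq. (11)
    P2 = 2 * (1 - h / H)                               # escape-flight trajectory, Eq. (12)
    return (J * x_best - P1 * x * rng.random()
            - P2 * levy_flight(D, rng=rng) + rng.random() * P1)
```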

2.2. Concept of Dynamic Oppositional Learning (DOL)

The objectives of optimization algorithms are to produce solutions, improve approximate solutions, and look for additional solutions inside the domain. When the current solutions cannot meet the needs of a complex problem, a variety of learning techniques are developed to improve the optimization algorithm's performance. Owing to its higher convergence capacity, the opposition-based learning (OBL) technique is the most widely acknowledged among these learning schemes. The definition of OBL [15] is introduced as follows:
OBL considers a real number x ∈ ℝ in the interval x ∈ [a, b]. From it, the opposite number x^OBL is produced as
$$x^{OBL} = a + b - x \tag{13}$$
For the multidimensional case, the definition is as follows: x = (x_1, x_2, ..., x_D) is a point in D-dimensional coordinates with x_1, x_2, ..., x_D ∈ ℝ and x_i ∈ [a_i, b_i]. As the iterations proceed, a_i and b_i denote the associated lower and upper bounds of the population, respectively. The multidimensional opposite point is then defined as
$$x_i^{OBL} = a_i + b_i - x_i \tag{14}$$
Even though the OBL method enhances the algorithm's searching capabilities, it still has certain drawbacks, such as premature convergence. Various variants of OBL have been proposed to enhance its performance. For example, quasi-opposition-based learning (QOBL) employs a quasi-opposite number to expand the domain of the original notion [17]. Similarly, quasi-reflection-based learning (QRBL) introduces a quasi-reflection number in the interval between the present position and the center position [18].
Phase of Dynamic Opposite Learning: In addition to the OBL variations mentioned above, a novel learning approach, the dynamic opposite learning (DOL) operator, is used in this work. Xu et al. originally proposed the DOL method in [15] to enhance the TLBO algorithm's performance. When dealing with complex problems, DOL is included to prevent the algorithm from converging prematurely [19]. Furthermore, with its asymmetric and dynamic search environment, the DOL technique is a new variation of the opposition-based learning (OBL) strategy that helps the population learn from the opposite points [20,21].

2.2.1. Dynamic Population Initialization

In the initialization step, x ∈ [a, b] was defined as the initial population, and x^OBL is produced in the opposing domain. A random opposite candidate x^RO = rand × x^OBL, with rand ∈ (0, 1), is introduced to replace x^OBL in order to expand the searching space and convert the previous symmetric searching space into a dynamic asymmetric domain. The optimizer is then able to prevent prematurity by expanding the searching space. Furthermore, in order to enhance the capacity to overcome local optima, a weighting factor w_d is incorporated. The mathematical model is as follows:
$$x^{DOL} = x + w_d \times r_1 \times \left(r_2 \times x^{OBL} - x\right) \tag{15}$$
where r_1, r_2 ∈ (0, 1) are random parameters. For a multidimensional problem, it takes the following form:
$$x_{ij}^{DOL} = x_{ij} + w_d \times r_1 \times \left(r_2 \times x_{ij}^{OBL} - x_{ij}\right) \tag{16}$$
where i = 1, 2, ..., N indexes the population of size N, j = 1, 2, ..., D indexes the dimensions of an individual, and r_1 and r_2 denote random numbers in (0, 1).
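A minimal NumPy sketch of the DOL initialization of Equations (14) and (16) applied to a whole population (illustrative names; the clipping behavior is our own assumption):

```python
import numpy as np

def dol_initialization(X, lb, ub, w_d=3.0, rng=np.random.default_rng()):
    """Eqs. (14) and (16): dynamic-opposite counterparts of each individual.
    X : population of shape (N, D); lb, ub : per-dimension bounds."""
    x_obl = lb + ub - X                        # opposite points, Eq. (14)
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    x_dol = X + w_d * r1 * (r2 * x_obl - X)    # dynamic opposite points, Eq. (16)
    return np.clip(x_dol, lb, ub)              # keep candidates inside the bounds
```

In practice, the fitter of each original/DOL pair would then be retained, as in the fitness-assessment step of Algorithm 1.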

2.2.2. Dynamic Population Jumping Process

In DOL, a jumping rate J_r is used to update the population, and a positive weighting factor w_d is employed to balance the exploration and exploitation capabilities. The DOL operation is implemented as follows, provided that the selection probability is less than J_r.
$$x_{ij}^{DOL} = x_{ij} + w_d \times r_1 \times \left(r_2 \times \left(a_j + b_j - x_{ij}\right) - x_{ij}\right) \tag{17}$$
where x_ij is a random value produced as the starting population; N is the population size; i denotes the i-th solution; x_ij^DOL is the population created by the DOL technique; j indexes the j-th dimension; r_1 and r_2 are two random parameters in (0, 1); the weighting factor w_d is set to 3; and the jumping rate J_r is set to 1 based on the sensitivity analysis reported in Table 1.
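A corresponding sketch of the generation-jumping step of Equation (17), using the current per-dimension minima and maxima of the population as a_j and b_j (an assumption consistent with the dynamic bounds described above):

```python
import numpy as np

def dol_generation_jumping(X, lb, ub, J_r=1.0, w_d=3.0, rng=np.random.default_rng()):
    """Eq. (17): DOL jumping applied to each individual with probability J_r."""
    a = X.min(axis=0)                          # dynamic lower bound a_j
    b = X.max(axis=0)                          # dynamic upper bound b_j
    jump = rng.random(len(X)) < J_r            # which individuals jump
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    X_dol = X + w_d * r1 * (r2 * (a + b - X) - X)
    X_new = np.where(jump[:, None], X_dol, X)
    return np.clip(X_new, lb, ub)
```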

2.3. Concept of Dynamic Random Walk (DRW)

Dynamic Random Walk (DRW) can be applied to the expanded exploration phase of the Aquila Optimizer metaheuristic algorithm to improve its exploration ability and help it escape local optima by the following equation:
$$x = x_{best} + w \times r_3 \times \left(r_4 \times rwv - x_{best}\right) \tag{18}$$
where the random walk vector rwv is given by rwv = r(1, D) − 0.5, with r(1, D) a 1 × D vector of uniform random numbers, and r_3 and r_4 are two random parameters in (0, 1). DRW is incorporated into AO to improve its exploration ability: in the early stages of the optimization process, DRW allows the search agents to explore a large search space.
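A one-function sketch of the DRW step of Equation (18) (illustrative; r(1, D) is taken to be a vector of D uniform random numbers):

```python
import numpy as np

def dynamic_random_walk(x_best, D, w=0.5, rng=np.random.default_rng()):
    """Eq. (18): dynamic random walk around the current best solution."""
    rwv = rng.random(D) - 0.5                  # random walk vector, r(1, D) - 0.5
    r3, r4 = rng.random(), rng.random()
    return x_best + w * r3 * (r4 * rwv - x_best)
```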

3. Previous Work on AO and DOL

There is always room to enhance an algorithm by increasing and balancing its exploitation and exploration operators, since the NFL theorem rules out the existence of an algorithm that is best suited to all optimization tasks. Plenty of work has been carried out in the literature to improve the search efficiency of AO. These improvements include adjusting the algorithm's parameters, introducing new movement strategies, and merging the algorithm with other optimization methods. The improved versions of AO can handle a large range of difficult real-world optimization problems better than the standard AO. The strategies used in AO include hybridization with NIOAs [22,23], oppositional-based learning [24], chaotic sequences [25], a Levy flight-based strategy [26], a Gauss map and crisscross operator [27], niche thought with a dispersed chaotic swarm [28], a random learning mechanism with Nelder–Mead simplex search [29], wavelet mutation [30], a weighted adaptive searching technique [31], binary AO [32], and multi-objective AO [33].
DOL strategies have also been used in many NIOAs to enhance their performance. They were introduced with Teaching–Learning-Based Optimization [15], the Grey Wolf Optimizer [34], the Whale Optimization Algorithm [35], the Antlion Optimizer [16], Bald Eagle Search optimization [36], and a hybrid of the Aquila Optimizer and the Artificial Rabbits Optimization algorithm [37]; a comprehensive survey covering other algorithms can be found in the literature [14].

4. The Proposed DAO Algorithm

Two new features, DOL and DRW, are added to the original AO by the proposed DAO (Dynamic Random Walk and Dynamic Opposition Learning for Improving Aquila Optimizer) algorithm. The aim of DOL population generation is to provide diverse solutions to escape from stagnation, while DOL generation jumping helps the exploitation ability of the algorithm and accelerates its convergence. On the other hand, DRW helps the algorithm to improve its exploration ability. Together, these changes provide a proper balance between exploration and exploitation and help the algorithm escape from local optima. Let us examine how these improvements work in more detail.
1. Benefits of using DOL population initialization
Compared to random initialization, the use of a dynamic opposition population initialization technique in Aquila Optimizer (AO) has various benefits that result in a more diverse solution pool:
(a) Random initialization limitations: Particularly for complex problems, random initialization might produce a population localized in a particular area of the search space, which restricts exploration and raises the possibility of becoming trapped in local optima.
(b) Initialization based on dynamic opposition: For every randomly selected initial point, this method produces an "opposite" solution. With respect to a predetermined reference point (often the center or the limits), the opposing solution is located on the other side of the search area. This forces investigation in many places and produces a wider initial dispersion of solutions.
The starting population is more diversified when opposition-based generation and random selection are combined. Because of this diversity, AO is able to investigate various regions of the search field right away. To prevent becoming overly biased in favor of the opposing alternatives, the strategy, nevertheless, maintains a healthy balance by retaining some randomly generated solutions. Overall, we can say that introducing DOL population initialization can help AO in the following ways:
(a) Increased exploration: AO can find promising regions throughout the whole search space by distributing the first solutions more widely.
(b) Decreased chance of local optima: AO is less likely to become stuck in solutions that are only effective in a small area because it starts from a variety of sites.
(c) Faster convergence: When multiple regions are investigated concurrently, a well-distributed population can converge more quickly to the global optimum.
2. Benefits of using DOL generation jumping:
(a) Improved exploration: Reintroducing exploration in later phases may result in the identification of more effective solutions.
(b) Escape from local optima: AO is nudged away from regions that would not lead to the global optimum by jumping in opposition to underperforming individuals.
(c) Fine-tuning: By investigating neighboring regions in the opposite direction, the leaps may discover somewhat better choices even if AO converges to a suitable solution.
3. Benefits of using DRW in place of Aquila's expanded exploration phase:
(a) Reduced complexity: By doing away with the necessity to plan and carry out a specific extended exploration phase, DRW simplifies the algorithm as a whole.
(b) Effective exploration: Because of its intrinsic unpredictability, DRW can efficiently explore the search space and perhaps produce outcomes that are comparable to those of Aquila's exploration stage.
In Algorithm 1, DOL population initialization and DOL generation jumping are used, and DRW replaces the expanded exploration of AO. Algorithm 1 illustrates the phases of this algorithm. Throughout the rest of the paper, the parameter values are taken at their best settings: α and β of AO; the weight w_d and jumping rate J_r of DOL; and the weight w of DRW.
Figure 1 also displays the algorithm DAO flowchart visualization.
Algorithm 1 DAO Algorithm
Initialize the values of parameters (nPop, nVar, α, β, w, w_d, J_r, Max_iter, etc.)
Establish a random starting position.
Set the counter t = 1
While (t < Max_iter) do
   Conduct DOL population initialization using Equation (16)
   Assess the fitness of the initial positions.
   Verify boundaries
   For (i = 1 : nPop) do
      Update the mean value of the existing solutions
      Update the variables u, v, P_1, P_2, and Levy(D)
      If t ≤ (2/3) × Max_iter
         If rand ≤ 0.5
            Apply DRW using Equation (18)
         Else
            Apply Narrowed Exploration by Equation (3)
         End If
      Else
         If rand ≤ 0.5
            Apply Expanded Exploitation by Equation (8)
         Else
            Apply Narrowed Exploitation by Equation (9)
         End If
      End If
      Conduct the DOL population jumping process using Equation (17)
      Assess the fitness function.
      Verify boundaries
   End For
   t = t + 1
End While
Record the best solution x_best
This section also presents DAO's overall computational complexity. The computational complexity of DAO is usually determined from three steps: initialization of the solutions, computation of the fitness functions, and updating of the solutions. Let N represent the total number of solutions; the computational complexity of the initialization process is O(N × D), where D is the dimension of the problem. The computational complexity of the updating processes is O(G × N × D) + O(G × N × D), where G is the total number of iterations; these procedures entail updating the positions of all solutions and searching for the best ones. Consequently, the overall computational complexity of the proposed DAO (Dynamic Opposition Learning and Dynamic Random Walk for Improving Aquila Optimizer) is O(N × D) + O(2 × G × N × D) = O(N × D × (1 + 2G)).
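Read together with Algorithm 1, the overall loop might look like the following skeleton, which strings together the helper sketches given in Section 2 (greedy selection and boundary handling are simplified, and the DOL jumping step is applied to the whole population at the end of each iteration; this is an illustrative reading under those assumptions, not the authors' implementation):

```python
import numpy as np

def dao(obj, lb, ub, N=50, D=10, max_iter=500, seed=0):
    """Illustrative DAO skeleton following Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, D)) * (ub - lb)              # random starting positions
    x_best, f_best = None, np.inf
    for t in range(1, max_iter):
        X = dol_initialization(X, lb, ub, rng=rng)       # DOL population step, Eq. (16)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < f_best:
            f_best, x_best = fit.min(), X[fit.argmin()].copy()
        for i in range(N):
            if t <= (2 / 3) * max_iter:                  # exploration phases
                if rng.random() <= 0.5:
                    X[i] = dynamic_random_walk(x_best, D, rng=rng)          # Eq. (18)
                else:
                    x_rand = X[rng.integers(N)]
                    X[i] = narrowed_exploration(x_best, x_rand, D, rng=rng) # Eq. (3)
            else:                                        # exploitation phases
                if rng.random() <= 0.5:
                    X[i] = expanded_exploitation(x_best, X.mean(axis=0),
                                                 lb, ub, rng=rng)           # Eq. (8)
                else:
                    X[i] = narrowed_exploitation(X[i], x_best, t, max_iter,
                                                 D, rng=rng)                # Eq. (9)
        X = dol_generation_jumping(X, lb, ub, rng=rng)   # DOL jumping, Eq. (17)
        X = np.clip(X, lb, ub)                           # verify boundaries
    fit = np.apply_along_axis(obj, 1, X)
    if fit.min() < f_best:
        f_best, x_best = fit.min(), X[fit.argmin()]
    return x_best, f_best
```

Under these assumptions, calling `dao(lambda z: float(np.sum(z**2)), -100.0, 100.0)` would minimize the 10-dimensional sphere function over [−100, 100].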

5. Experimental Settings

The algorithms used in the numerical trials include Aquila Optimizer (AO), Modified Aquila Optimizer (MAO) [38], Whale Optimization Algorithm (WOA) [6], Grasshopper Optimization Algorithm (GOA) [39], Reptile Search Algorithm (RSA) [5], and Brain Storm Optimization (BSO) [7]. On a computer with an Intel(R) Core (TM) i7-9750H processor running at 2.60 GHz and 16 GB of RAM, all algorithms were implemented in MATLAB R2021b.
The following five factors are used to assess DAO’s (Dynamic Opposition Learning and Dynamic Random Walk for Improving Aquila Optimizer) performance:
  • The optimization error between the obtained value and the known real optimal value, together with its average and standard deviation. Since all objective functions are minimization problems, the best values, that is, the lowest mean values, are indicated in bold.
  • Non-parametric statistical tests, such as the Wilcoxon rank sum test [40], comparing the p-value of the suggested algorithm against each compared technique at the significance level of 0.05. There is a significant difference between two techniques when the p-value is less than 0.05. W/T/L indicates how many wins, ties, and losses the algorithm in question records against its opponent.
  • The Friedman test, another non-parametric statistical test [41,42]. The average optimization error values are used as test data. Lower Friedman rank values indicate that a method operates more efficiently; the minimal value is shown in bold. (A minimal SciPy sketch of the Wilcoxon and Friedman tests is given after this list.)
  • Bonferroni–Dunn's diagram, which shows the differences in the rankings obtained by each algorithm at dimension 10 by displaying the pairwise variances in ranks for each approach. Pairwise disparities in rankings are calculated by subtracting the rank of one algorithm from the rank of another. In the Bonferroni–Dunn graphic, each bar denotes the average pairwise difference in ranks for a certain algorithm at a given dimension; typically, different algorithms are represented by color-coded bars.
  • Convergence graphs, which offer a clear visual depiction of the algorithm's accuracy and convergence rate and illustrate whether the improved algorithm escapes local optima.
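For readers who want to reproduce the statistical comparison, the two tests can be run with SciPy as in the minimal sketch below (the arrays are illustrative placeholders, not the experimental data of Tables 3 and 6):

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(1)
# errors[alg]: 30-run optimization errors of one algorithm on one function (placeholder data)
errors = {alg: rng.random(30) for alg in ["DAO", "AO", "MAO", "RSA", "WOA", "BSO"]}

# Wilcoxon rank sum test of DAO against each competitor at alpha = 0.05
for alg in ["AO", "MAO", "RSA", "WOA", "BSO"]:
    z, p = ranksums(errors["DAO"], errors[alg])
    print(f"DAO vs {alg}: z = {z:.3f}, p = {p:.4f}, significant = {p < 0.05}")

# Friedman test over mean errors: rows are functions, columns are algorithms (placeholder data)
mean_errors = rng.random((29, 6))
stat, p = friedmanchisquare(*mean_errors.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```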

5.1. Competitive Algorithms Comparison on CEC2017 Benchmark Functions

Five competing algorithms are compared to gauge DAO's efficiency and search performance: MAO (Modified Aquila Optimizer), AO (Aquila Optimizer), RSA (Reptile Search Algorithm), WOA (Whale Optimization Algorithm), and BSO (Brain Storm Optimization). The comparison is made on 29 benchmark functions from IEEE CEC2017 taken from the literature [43]. The population size (N) was fixed at 50 in each experiment, the maximum number of iterations is 500, and the dimension is 10. The search range was chosen as [−100, 100]. On each function, each algorithm was executed 30 times.
Parameter Settings: The algorithm's performance depends on the parameter settings, particularly for DAO. Accordingly, this part presents a sensitivity analysis of the DOL parameters. Table 1 contains a detailed explanation of each parameter setting; the mean values are used to compare the results.
The weighting factor w and the jumping rate J_r of the DAO algorithm are varied over 1–10 and 0.1–1, respectively. In Table 1, only J_r = 0.3 and J_r = 1 are reported because the values at the other settings were not favorable.
Test functions for this analysis have been chosen from the literature [43]: F3 and F6 are multimodal functions, F18 is a hybrid function, and F23 is a composition function. To assess performance, the means of the outcomes obtained by DAO are shown in Table 1. On F3, F6, and F23, DAO performs better than the other settings when w = 3 and J_r = 1; hence, w = 3 and J_r = 1 are chosen as the best parameter settings, and the DRW weight w = 0.5 is taken from the literature [16]. Table 2 contains the parameter settings of the optimization algorithms used for comparison.

Analysis of IEEE CEC’17 Test Functions

  • Analysis of Unimodal and Multimodal Test Functions
The mean and standard deviation of the algorithms on the twenty-nine unimodal, multimodal, hybrid, and composition functions are displayed in Table 3. The function F1 is unimodal, and the results show that DAO outperforms the other algorithms on this unimodal function. Moreover, it may be said that the DOL approach, which expands the search space, gives the algorithm a higher chance of reaching the global optimum thanks to its exploitation capacity.
Multimodal functions like F3–F9 are used to confirm DAO’s exploring capabilities. The results in Table 3 demonstrate how well DAO performs in comparison to other algorithms, particularly on the F4, F5, F6, and F9 test functions.
  • Analysis of Hybrid and Composition Test Functions
Hybrid functions combine unimodal and multimodal functions in order to mimic real-world challenges and are used to evaluate the algorithms. Such mixed tasks can lead to subpar performance, so balancing exploitation and exploration capability is important for dealing with them. Table 3 clearly illustrates the benefits of DAO on F12–F17, F20–F24, F26–F28, and F30, and the composition functions indicate that DAO is still able to solve the problem to the same degree as the other algorithms. Thus, in many real-world scenarios, DAO can effectively balance the rate of convergence and the quality of the optimization solution.
The last line of Table 3 shows W/L/T (Win/Loss/Tie), the Friedman rank, and the CPU runtime. The W/L/T metric shows that DAO performs well on the functions with 10 dimensions, outperforming AO, MAO, RSA, WOA, and BSO on 24, 29, 28, 28, and 27 functions, respectively. The Friedman rank of DAO is comparatively lower than that of the other MAs, and the CPU runtimes of DAO, AO, MAO, RSA, WOA, and BSO are shown in the third-last line of Table 3. The results show that WOA takes much less time than the other MAs.
  • Analysis of Convergence Graph
Figure 2 displays the convergence graphs of the four functions, F4, F9, F13, and F20, where the mean optimizations generated by six algorithms on the IEEE CEC2017 functions with 10 dimensions are displayed. The vertical axis represents the log value of the mean optimizations, while the horizontal axis represents the number of iterations. Figure 2 makes it clear that the convergence speed is fast and that the DAO curves are the lowest. When compared to the original AO in the convergence graphs, DAO can find a better solution, exit local optimization, avoid premature convergence, improve the quality of the solution, and have high optimization efficiency.
Table 4 represents the Wilcoxon rank sum test results. The totals of ranks for positive and negative differences are represented by R+ and R−, respectively. When compared to the other algorithms, DAO has a greater positive rank sum. Additionally, the corresponding z and p values are provided in the table. The significance threshold is α = 0.05. This table shows that the performance of DAO is better than that of the original AO and the other metaheuristic algorithms.
The Bonferroni–Dunn test [45] is used for the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 4. Among all the algorithms, DAO was found to have the lowest mean rank. The Bonferroni–Dunn graphic in Figure 3 shows the variation in ranks for each method at D = 10. In this figure, a horizontal cut line is drawn, representing the threshold relative to the best-performing algorithm, the one with the lowest ranking bar; the height of this cut line is determined by adding the critical difference (CD) to that algorithm's ranking. The Bonferroni–Dunn technique computed the equivalent CD for α = 0.05 and α = 0.1. Algorithms whose rank bar is higher than this line are deemed to perform worse than the control algorithm. As a result, it is evident from the Bonferroni–Dunn technique that only AO and WOA remain statistically comparable to DAO.

5.2. Competitive Algorithms Comparison on CEC2019 Benchmark Functions

In Table 5, the list of the ten CEC2019 benchmark functions with their dimensions and search ranges is taken from the literature [46].

Analysis of IEEE CEC’19 Test Functions

DAO has been implemented on the 10 CEC 2019 benchmark functions with 500 iterations, a population size of 50, and 30 independent runs. Its results are compared with AO, MAO, WOA, SSA, and GOA. The comparison is performed through the mean and STD (standard deviation) values obtained by the considered algorithms on these functions, as reported in Table 6. Moreover, the Friedman mean rank values and W/L/T are given in the table's last lines (see Table 6). The results confirm the proposed DAO's superiority in dealing with these challenging testbed functions, as it is classified as the best algorithm on half of these functions.
Meanwhile, AO succeeded on three functions, and MAO, WOA, SSA, and GOA on only one function each out of this set. In the overall ranking, DAO is positioned first. The CPU runtime is reported in the last line of Table 6, which shows that WOA takes much less time than the other algorithms. The convergence curves of Figure 4 show the efficiency of DAO in converging to highly qualified solutions with significant convergence speed, as exhibited for F2, F6, F7, and F9.
Figure 4 shows the convergence capacity of the six algorithms on the test functions. The average fitness value is displayed as the "Mean". Because of its exceptional exploration capability, DAO converges quickly under iterative computation, as illustrated in the figures. The gradually convergent trend is due to the exploitation capability of the DOL technique.
Table 7 represents the Wilcoxon rank sum test results. The totals of ranks for positive and negative differences are represented by R+ and R−, respectively. When compared to the other algorithms, DAO has a greater positive rank sum in most cases. Additionally, the corresponding z and p values are provided in the table. The significance threshold is α = 0.05. This table shows that the performance of DAO is at least comparable to that of the other metaheuristic algorithms.
Bonferroni–Dunn's test is used for the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 7. Among all the algorithms, DAO was found to have the lowest mean rank. The Bonferroni–Dunn graphic in Figure 5 shows the variation in ranks for each method at D = 10. In this figure, the smallest bar indicates the best-performing algorithm; algorithms with a higher rank bar are deemed to perform worse than the control algorithm. As a result, it is evident from the Bonferroni–Dunn technique that DAO also performs well when compared with the other metaheuristic algorithms, with SSA giving the worst performance.

6. DAO for Engineering Design Problems

Three relevant engineering benchmarks are used in this section to confirm that DAO provides improvements when tackling real-world problems: the cantilever beam design (CBD) problem, the welded beam design (WBD) problem, and the pressure vessel design (PVD) problem. Thirty independent runs of each problem were carried out in order to examine the statistical features of the outcomes, and all parameters were taken at their best settings.

6.1. CBD Problem

The goal of the CBD problem is to minimize a cantilever beam’s weight while accounting for the vertical displacement constraint. There are five hollow square blocks, and each of the five side length values z 1 ,   z 2 ,   z 3 ,   z 4 ,   z 5 needs to be optimized [47]. The following is an explanation of the mathematical model:
Consider
$$z = [z_1\ z_2\ z_3\ z_4\ z_5],$$
Minimize
$$f(z) = 0.6224\,(z_1 + z_2 + z_3 + z_4 + z_5),$$
Subject to
$$p(z) = \frac{60}{z_1^3} + \frac{27}{z_2^3} + \frac{19}{z_3^3} + \frac{7}{z_4^3} + \frac{1}{z_5^3} - 1 \le 0,$$
Variable range:
$$0.01 \le z_1, z_2, z_3, z_4, z_5 \le 100.$$
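When this formulation is handed to a metaheuristic such as DAO, the constraint is usually folded into the objective with a penalty term; a minimal sketch is given below (the penalty factor is an illustrative choice, not taken from the paper; the coefficients follow the constraint exactly as quoted above):

```python
import numpy as np

def cbd_objective(z, penalty=1e6):
    """Cantilever-beam weight plus a static penalty for the displacement constraint."""
    z = np.asarray(z, dtype=float)
    weight = 0.6224 * z.sum()
    g = 60/z[0]**3 + 27/z[1]**3 + 19/z[2]**3 + 7/z[3]**3 + 1/z[4]**3 - 1
    return weight + penalty * max(0.0, g) ** 2
```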
Table 8 displays the results of the CBD problem compared with six different MAs: COA, AO, GWO, ROA, WOA, and SCA. The results indicate that the proposed DAO algorithm is able to provide better results than the other state-of-the-art algorithms; thus, DAO is the optimal method for addressing the CBD problem. The CPU runtime of the given set of algorithms is also reported and shows that WOA takes very little time to compute the CBD problem.

6.2. WBD Problem

The goal of the WBD problem is to minimize the cost of manufacturing a welded beam [9]. The optimization parameters are the weld thickness (H), the bar height (TT), the length of the clamped bar (L), and the bar thickness (BB). Seven constraints must be taken into account. The optimization model can be stated as follows:
Consider
$$z = [z_1\ z_2\ z_3\ z_4] = [H\ L\ TT\ BB]$$
Minimize
$$f(z) = 1.10471\,z_1^2 z_2 + 0.04811\,z_3 z_4\,(14.0 + z_2)$$
Subject to the constraint,
$$\begin{aligned}
p_1(z) &= \tau(z) - \tau_{max} \le 0, & p_2(z) &= \sigma(z) - \sigma_{max} \le 0,\\
p_3(z) &= \delta(z) - \delta_{max} \le 0, & p_4(z) &= z_1 - z_4 \le 0,\\
p_5(z) &= P - P_c(z) \le 0, & p_6(z) &= 0.125 - z_1 \le 0,\\
p_7(z) &= 1.10471\,z_1^2 z_2 + 0.04811\,z_3 z_4\,(14 + z_2) - 5 \le 0
\end{aligned}$$
Variable range
$$0.1 \le z_1 \le 2,\quad 0.1 \le z_2 \le 10,\quad 0.1 \le z_3 \le 10,\quad 0.1 \le z_4 \le 2$$
where
$$\tau(z) = \sqrt{\tau'^2 + 2\,\tau'\tau''\,\frac{z_2}{2R} + \tau''^2},\quad \tau' = \frac{P}{\sqrt{2}\,z_1 z_2},\quad \tau'' = \frac{MR}{J}$$
$$M = P\left(L + \frac{z_2}{2}\right),\quad R = \sqrt{\frac{z_2^2}{4} + \left(\frac{z_1 + z_3}{2}\right)^2},\quad J = 2\left\{\sqrt{2}\,z_1 z_2\left[\frac{z_2^2}{4} + \left(\frac{z_1 + z_3}{2}\right)^2\right]\right\},$$
$$\sigma(z) = \frac{6PL}{z_4 z_3^2},\quad \delta(z) = \frac{6PL^3}{E z_3^2 z_4},\quad P_c(z) = \frac{4.013\,E\sqrt{\frac{z_3^2 z_4^6}{36}}}{L^2}\left(1 - \frac{z_3}{2L}\sqrt{\frac{E}{4G}}\right),$$
$$P = 6000\ \text{lb},\quad L = 14\ \text{in.},\quad \delta_{max} = 0.25\ \text{in.},\quad E = 30 \times 10^6\ \text{psi},\quad G = 12 \times 10^6\ \text{psi},\quad \tau_{max} = 13{,}600\ \text{psi},\quad \sigma_{max} = 30{,}000\ \text{psi}$$
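As with the CBD problem, a penalty-based objective can be sketched directly from the formulation above (illustrative code; the constants and constraint forms follow the expressions exactly as quoted, and the penalty factor is our assumption):

```python
import numpy as np

def wbd_objective(z, penalty=1e6):
    """Welded-beam fabrication cost with static penalties for the seven constraints."""
    h, l, t, b = z                                   # z1..z4 = H, L, TT, BB
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25
    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
    tau1 = P / (np.sqrt(2) * h * l)                  # tau'
    M = P * (L + l / 2)
    R = np.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * np.sqrt(2) * h * l * (l**2 / 4 + ((h + t) / 2) ** 2)
    tau2 = M * R / J                                 # tau''
    tau = np.sqrt(tau1**2 + 2 * tau1 * tau2 * l / (2 * R) + tau2**2)
    sigma = 6 * P * L / (b * t**2)
    delta = 6 * P * L**3 / (E * t**2 * b)
    Pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36) / L**2) * (1 - t / (2 * L) * np.sqrt(E / (4 * G)))
    g = [tau - tau_max, sigma - sigma_max, delta - delta_max, h - b,
         P - Pc, 0.125 - h, cost - 5.0]
    return cost + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```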
Table 9 reports the outcomes of the WBD problem. DAO does not provide the single best solution among the compared algorithms; however, like the other algorithms except AO, it attains a value very close to the optimal result. This suggests that DAO is a stable and effective solution to the WBD problem. The CPU runtime of the given set of algorithms is also reported and shows that WOA takes very little time to compute the WBD problem.

6.3. PVD Problem

The PVD problem, a classical and representative optimization problem in engineering, is typically employed to verify the efficacy of optimization techniques. Its goal is to minimize the total cost of a cylindrical pressure vessel [41]. The design parameters are the thickness of the shell T_S, the thickness of the head T_H, the inner radius r, and the length of the cylindrical shell L_CS. The mathematical formulation is expressed as follows [47]:
Consider
$$z = [z_1\ z_2\ z_3\ z_4] = [T_S\ T_H\ r\ L_{CS}]$$
Minimize
$$f(z) = 0.6224\,z_1 z_3 z_4 + 1.7781\,z_2 z_3^2 + 3.1661\,z_1^2 z_4 + 19.84\,z_1^2 z_3,$$
Subject to
$$p_1(z) = -z_1 + 0.0193\,z_3 \le 0,\quad p_2(z) = -z_2 + 0.00954\,z_3 \le 0,$$
$$p_3(z) = -\pi z_3^2 z_4 - \frac{4}{3}\pi z_3^3 + 1{,}296{,}000 \le 0,$$
$$p_4(z) = z_4 - 240 \le 0$$
Variable range is
$$0 \le z_1 \le 99,\quad 0 \le z_2 \le 99,\quad 10 \le z_3 \le 200,\quad 10 \le z_4 \le 200$$
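And likewise for the pressure-vessel formulation, a penalty-based objective can be sketched as follows (illustrative; the penalty factor is our assumption):

```python
import numpy as np

def pvd_objective(z, penalty=1e6):
    """Pressure-vessel cost with static penalties for the four constraints."""
    ts, th, r, lcs = z                               # z1..z4 = T_S, T_H, r, L_CS
    cost = (0.6224 * ts * r * lcs + 1.7781 * th * r**2
            + 3.1661 * ts**2 * lcs + 19.84 * ts**2 * r)
    g = [-ts + 0.0193 * r,
         -th + 0.00954 * r,
         -np.pi * r**2 * lcs - (4 / 3) * np.pi * r**3 + 1_296_000,
         lcs - 240]
    return cost + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```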
The results for the PVD problem in Table 10 demonstrate that ROA is the optimal method for solving it, followed by COA and DAO, but it can be said that DAO is a competitive and stable solution. The CPU runtime of the given set of algorithms is also reported and shows that WOA takes very little time to compute the PVD problem.
The outcomes of three classic engineering challenges are shown in this section, demonstrating how well and consistently DAO performs when handling real-world issues. In particular, DAO performs noticeably better than the AO algorithm.

7. Conclusions

In order to replace the expanded exploration phase of AO, this study has proposed a low-complexity DRW method that strikes a fair balance between exploitation and exploration. The aim of this technique is to increase computational efficiency and to avoid stagnation. Moreover, to achieve a balance between exploration and exploitation, the DOL technique is introduced. The CPU runtime results reflect the computational efficiency of the improved Aquila Optimizer. The results obtained on the CEC 2017 and CEC 2019 benchmark functions demonstrate its superiority, and the convergence graphs, the Wilcoxon rank sum tests, the Friedman test, and the Bonferroni–Dunn test confirm its effectiveness. The algorithm is also applied to real-world structural engineering design problems, where it provides better results than AO. All these results show that the DRW and DOL approaches are valuable additions to AO, and that DAO performs far better than AO as well as most of the other MAs.

8. Future Scope

DAO could be applied to additional real-world applications given its strong performance. Additionally, other optimization tasks, including image processing and cloud and fog computing, could use the DAO optimization method.

Author Contributions

Conceptualization, M.V. and P.K.; methodology, M.V. and P.K.; software, M.V. and P.K.; validation, M.A. and Y.G.; formal analysis, M.V. and P.K.; investigation, M.A. and Y.G.; resources, P.K., M.A. and Y.G.; data curation, M.V. and P.K.; writing—original draft preparation, M.V. and P.K.; writing—review and editing, M.A. and Y.G.; visualization, M.V. and P.K.; supervision, M.A., P.K. and Y.G.; project administration, M.A. and P.K.; funding acquisition, M.A. and Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, the Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (GrantA016).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Since no datasets were created or examined in the current investigation, data sharing is not relevant to this topic.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A. Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  2. Goldberg, D.E. Genetic Algorithms; Pearson Education: Bangalore, India, 2006. [Google Scholar]
  3. Storn, R.; Price, K. Differential Evolution- A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  4. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. Proc. IEEE Int. Conf. Neural Netw. 1995, 4, 1942–1948. [Google Scholar]
  5. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A Nature-Inspired Meta-Heuristic Optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  7. Shi, Y. Brain Storm Optimization Algorithm. In International Conference in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309. [Google Scholar]
  8. Rao, R.; Savsani, V.; Vakharia, D. Teaching-learning based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  9. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A Novel MetaHeuristic Optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  10. Li, L.; Pan, J.; Zhuang, Z.; Chu, S. A Novel Feature Selection Algorithm Based on Aquila Optimizer for COVID-19 Classification. In International Conference on Intelligent Information Processing; Springer International Publishing: Cham, Switzerland, 2022; pp. 30–41. [Google Scholar]
  11. Chaudhari, S.V.; Dhipa, M.; Ayoub, S.; Gayathri, B.; Siva, M.; Banupriya, V. Modified Aquila Optimization based Route Planning Model for Unmanned Aerial Vehicles Networks. In Proceedings of the 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 13–15 December 2022; pp. 370–375. [Google Scholar]
  12. Abualigah, L.; Elaziz, M.A.; Khodadadi, N.; Forestiero, A.; Jia, H.; Gandomi, A.H. Aquila Optimizer Based PSO Swarm Intelligence for IoT Task Scheduling Application in Cloud Computing. In Part of the Studies in Computational Intelligence Book Series; Springer International Publishing: Cham, Switzerland, 2022; Volume 1038, pp. 481–497. [Google Scholar]
  13. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  14. Sasmal, B.; Hussien, A.G.; Das, A.; Dhal, K.G. A Comprehensive Survey on Aquila Optimizer. Arch. Comput. Methods Eng. 2023, 30, 4449–4476. [Google Scholar] [CrossRef]
  15. Xu, Y.; Yang, Z.; Li, X.; Kang, H.; Yang, X. Dynamic opposite learning enhanced teaching–learning-based optimization. Knowl. Based Syst. 2020, 104966, 188. [Google Scholar] [CrossRef]
  16. Dong, H.; Xu, Y.; Li, X.; Yang, Z.; Zou, C. An improved antlion optimizer with dynamic random walk and dynamic opposite learning. Knowl. Based Syst. 2021, 106752, 216. [Google Scholar] [CrossRef]
  17. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Quasi-oppositional differential evolution. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2229–2236. [Google Scholar] [CrossRef]
  18. Ergezer, M.; Simon, D.; Du, D. Oppositional biogeography-based optimization. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1009–1014. [Google Scholar] [CrossRef]
  19. Zhou, J.; Zhang, Y.; Guo, Y.; Feng, W.; Menhas, M.; Zhang, Y. Parameters Identification of Battery Model Using a Novel Differential Evolution Algorithm Variant. Front. Energy Res. 2022, 10, 794732. [Google Scholar] [CrossRef]
  20. Liu, Z.H.; Wei, H.L.; Li, X.H.; Liu, K.; Zhong, Q.C. Global identification of electrical and mechanical parameters in PMSM drive based on dynamic self-learning PSO. IEEE Trans. Power Electron. 2018, 33, 10858–10871. [Google Scholar] [CrossRef]
  21. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC 06), Vienna, Austria, 22 May 2006; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 695–701. [Google Scholar]
  22. Mohamed, A.; Abualigah, L.; Alburaikan, A.; Khalifa, H.A.E.-W. AOEHO: A New Hybrid Data Replication Method in Fog Computing for IoT Application. Sensors 2023, 23, 2189. [Google Scholar] [CrossRef]
  23. Nirmalapriya, G.; Agalya, V.; Regunathan, R.; Belsam Jeba Ananth, M. Fractional Aquila Spider Monkey Optimization Based Deep Learning Network for Classification of Brain Tumor. Biomed. Signal Process. Control. 2023, 79, 104017. [Google Scholar] [CrossRef]
  24. Perumalla, S.; Chatterjee, S.; Kumar, A.P.S. Modelling of Oppositional Aquila Optimizer with Machine Learning Enabled Secure Access Control in Internet of Drones Environment. Theor. Comput. Sci. 2023, 941, 39–54. [Google Scholar] [CrossRef]
  25. Duan, J.; Zuo, H.; Bai, Y.; Chang, M.; Chen, X.; Wang, W.; Ma, L.; Chen, B. A Multistep Short-Term Solar Radiation Forecasting Model Using Fully Convolutional Neural Networks and Chaotic Aquila Optimization Combining WRF-Solar Model Results. Energy 2023, 271, 126980. [Google Scholar] [CrossRef]
  26. Ramamoorthy, R.; Ranganathan, R.; Ramu, S. An Improved Aquila Optimization with Fuzzy Model Based Energy Efficient Cluster Routing Protocol for Wireless Sensor Networks. Yanbu J. Eng. Sci. 2022, 19, 51–61. [Google Scholar] [CrossRef]
  27. Huang, C.; Huang, J.; Jia, Y.; Xu, J. A Hybrid Aquila Optimizer and Its K-Means Clustering Optimization. Trans. Inst. Meas. Control 2023, 45, 557–572. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Xu, X.; Zhang, N.; Zhang, K.; Dong, W.; Li, X. Adaptive Aquila Optimizer combining niche thought with dispersed chaotic swarm. Sensors 2023, 23, 755. [Google Scholar] [CrossRef] [PubMed]
  29. Ekinci, S.; Izci, D.; Abualigah, L. A Novel Balanced Aquila Optimizer Using Random Learning and Nelder–Mead Simplex Search Mechanisms for Air–Fuel Ratio System Control. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 68. [Google Scholar] [CrossRef]
  30. Alangari, S.; Obayya, M.; Gaddah, A.; Yafoz, A.; Alsini, R.; Alghushairy, O.; Ashour, A.; Motwakel, A. Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication. Sensors 2022, 22, 8508. [Google Scholar] [CrossRef] [PubMed]
  31. Das, T.; Roy, R.; Mandal, K.K. A Novel Weighted Adaptive Aquila Optimizer Technique for Solving the Optimal Reactive Power Dispatch Problem. Researchsquare, 2022; preprint. [Google Scholar]
  32. Bas, E. Binary Aquila Optimizer for 0–1 Knapsack Problems. Eng. Appl. Artif. Intell. 2023, 118, 105592. [Google Scholar] [CrossRef]
  33. Long, H.; Liu, S.; Chen, T.; Tan, H.; Wei, J.; Zhang, C.; Chen, W. Optimal reactive power dispatch based on multi-strategy improved Aquila optimization algorithm. IAENG Int. J. Comput. Sci. 2022, 49, 4. [Google Scholar]
  34. Wang, Y.; Jin, C.; Li, Q.; Hu, T.; Xu, Y.; Chen, C.; Zhang, Y.; Yang, Z. A Dynamic Opposite Learning-Assisted Grey Wolf Optimizer. Symmetry 2022, 14, 1871. [Google Scholar] [CrossRef]
  35. Cao, D.; Xu, Y.; Yang, Z.; Dong, H.; Li, X. An enhanced whale optimization algorithm with improved dynamic opposite learning and adaptive inertia weight strategy. Complex Intell. Syst. 2023, 9, 767–795. [Google Scholar] [CrossRef]
  36. Sharma, S.; Kaur, M.; Sing, B. A Self-adaptive Bald Eagle Search optimization algorithm with dynamic opposition-based learning for global optimization problems. Expert Syst. 2023, 40, e13170. [Google Scholar] [CrossRef]
  37. Wang, Y.; Xiao, Y.; Guo, Y.; Li, J. Dynamic Chaotic Opposition-Based Learning-Driven Hybrid Aquila Optimizer and Artificial Rabbits Optimization Algorithm: Framework and Applications. Processes 2022, 10, 2703. [Google Scholar] [CrossRef]
  38. Ali, M.H.; Salawudeen, A.T.; Kamel, S.; Salau, H.B.; Habil, M.; Shouran, M. Single- and Multi-Objective Modified Aquila Optimizer for Optimal Multiple Renewable Energy Resources in Distribution Network. Mathematics 2022, 10, 2129. [Google Scholar] [CrossRef]
  39. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and Application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  40. García, S.; Molina, D.; Lozano, M.; Herrera, F. A Study on the Use of Non-Parametric Tests for Analyzing the Evolutionary Algorithms’ Behaviour: A Case Study on the CEC’2005 Special Session on Real Parameter Optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
  41. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced Nonparametric Tests for Multiple Comparisons in the Design of Experiments in Computational Intelligence and Data Mining: Experimental Analysis of Power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  42. Luengo, J.; García, S.; Herrera, F. A Study on the Use of Statistical Tests for Experimentation with Neural Networks: Analysis of Parametric Test Conditions and Non-Parametric Tests. Expert Syst. Appl. 2009, 36, 7798–7808. [Google Scholar] [CrossRef]
  43. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  44. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  45. Ahmadianfar, I.; Asghar Heidari, A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 116516, 195. [Google Scholar] [CrossRef]
  46. Jing-Chang, L.; Qu, B.; Suganthan, P. Problem Definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, Computer science. Mathematics 2014, 635, 2014. [Google Scholar]
  47. Varshney, M.; Kumar, P.; Ali, M.; Gulzar, Y. Using the Grey Wolf Aquila Synergistic Algorithm for Design Problems in structural Engineering. Biomimetics 2024, 9, 54. [Google Scholar] [CrossRef]
  48. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish Optimization Algorithm. Artif. Intell. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  49. Jia, H.; Peng, X.; Lang, C. Remora Optimization Algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  51. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed DAO algorithm.
Figure 2. Convergence graphs of F4, F9, F13, and F20 CEC 2017 benchmark functions.
Figure 3. Bonferroni–Dunn bar chart for D = 10. Each bar represents the rank of the corresponding algorithm, and the horizontal cut lines show the significance levels at 0.1 and 0.05.
Figure 4. Convergence graphs of F1, F2, F4, F6, F9, and F10 CEC 2019 benchmark functions.
Figure 5. Bonferroni–Dunn bar chart for D = 10. Each bar represents the rank of the corresponding algorithm.
Table 1. The sensitivity analysis of w and J_r (mean values).

| w | F3, J_r = 0.3 | F3, J_r = 1 | F6, J_r = 0.3 | F6, J_r = 1 | F18, J_r = 0.3 | F18, J_r = 1 | F23, J_r = 0.3 | F23, J_r = 1 |
|---|---|---|---|---|---|---|---|---|
| 1 | 6.582 × 10^3 | 6.859 × 10^3 | 4.251 × 10^1 | 4.839 × 10^1 | 2.387 × 10^5 | 1.457 × 10^5 | 4.256 × 10^2 | 4.340 × 10^2 |
| 2 | 6.502 × 10^3 | 5.187 × 10^3 | 4.216 × 10^1 | 3.776 × 10^1 | 3.041 × 10^5 | 1.567 × 10^5 | 4.101 × 10^2 | 3.865 × 10^2 |
| 3 | 6.750 × 10^3 | 4.022 × 10^3 | 4.469 × 10^1 | 3.675 × 10^1 | 2.224 × 10^6 | 6.025 × 10^5 | 3.982 × 10^2 | 3.818 × 10^2 |
| 4 | 8.326 × 10^3 | 5.279 × 10^3 | 4.476 × 10^1 | 3.885 × 10^1 | 1.045 × 10^6 | 6.649 × 10^5 | 4.082 × 10^2 | 3.868 × 10^2 |
| 5 | 8.613 × 10^3 | 4.975 × 10^3 | 4.468 × 10^1 | 3.908 × 10^1 | 4.604 × 10^6 | 1.106 × 10^6 | 4.109 × 10^2 | 3.888 × 10^2 |
| 6 | 9.230 × 10^3 | 5.749 × 10^3 | 4.886 × 10^1 | 4.031 × 10^1 | 3.238 × 10^6 | 2.014 × 10^6 | 4.166 × 10^2 | 4.025 × 10^2 |
| 7 | 9.451 × 10^3 | 7.962 × 10^3 | 5.108 × 10^1 | 4.931 × 10^1 | 5.852 × 10^6 | 5.806 × 10^6 | 4.192 × 10^2 | 4.218 × 10^2 |
| 8 | 1.031 × 10^4 | 8.989 × 10^3 | 5.228 × 10^1 | 4.922 × 10^1 | 3.966 × 10^6 | 1.155 × 10^7 | 4.223 × 10^2 | 4.357 × 10^2 |
| 9 | 9.738 × 10^3 | 1.003 × 10^4 | 5.263 × 10^1 | 5.246 × 10^1 | 1.043 × 10^7 | 1.255 × 10^7 | 4.181 × 10^2 | 4.472 × 10^2 |
| 10 | 1.087 × 10^4 | 1.108 × 10^4 | 5.446 × 10^1 | 5.205 × 10^1 | 2.989 × 10^7 | 3.067 × 10^7 | 4.311 × 10^2 | 4.574 × 10^2 |
Note: bold is used to indicate the best result.
Table 2. Parameter settings of optimization algorithms.

| Algorithm | Parameters |
|---|---|
| DAO | U = 0.00565, r = 10, ω = 0.05, α = 0.1, β = 0.1, P_1 ∈ [−1, 1], P_2 ∈ [2, 0], w = 0.5, w_d = 3, J_r = 1 |
| AO [9] | U = 0.00565, r = 10, ω = 0.05, α = 0.1, β = 0.1, P_1 ∈ [−1, 1], P_2 ∈ [2, 0] |
| MAO [38] | U = 0.00565, r = 10, ω = 0.05, α = 0.1, β = 0.1, P_1 ∈ [−1, 1], P_2 ∈ [2, 0] |
| SSA [44] | v = 0 |
| WOA [6] | w_1 ∈ [2, 0], w_2 ∈ [−1, −2], v = 1 |
| RSA [5] | a ∈ [2, 0] |
| GOA [39] | l = 1.5, f = 0.5 |
| BSO [7] | m = 5, p_a = 0.2, p_b = 0.8, p_b1 = 0.4, p_c = 0.5 |
Table 3. Mean and standard deviation (STD) obtained from the objective function by standard AO, the proposed algorithm DAO, and other metaheuristic algorithms for the 10-dimensional CEC 2017 benchmark functions.

Function | Metric | DAO | AO | MAO | RSA | WOA | BSO
F1 | Mean | 8.388 × 10^8 | 9.239 × 10^8 | 2.159 × 10^10 | 5.38 × 10^10 | 9.784 × 10^8 | 9.410 × 10^9
F1 | STD | 4.709 × 10^8 | 6.512 × 10^6 | 4.981 × 10^9 | 9.29 × 10^9 | 7.462 × 10^6 | 2.501 × 10^3
F3 | Mean | 4.291 × 10^3 | 8.809 × 10^2 | 2.363 × 10^5 | 7.42 × 10^4 | 3.663 × 10^3 | 3.001 × 10^1
F3 | STD | 1.381 × 10^3 | 5.485 × 10^2 | 4.971 × 10^4 | 5.50 × 10^3 | 3.232 × 10^3 | 1.710 × 10^2
F4 | Mean | 7.619 × 10^1 | 2.085 × 10^2 | 2.527 × 10^3 | 1.45 × 10^4 | 9.451 × 10^1 | 9.495 × 10^2
F4 | STD | 4.001 × 10^1 | 2.512 × 10^2 | 1.304 × 10^3 | 4.56 × 10^3 | 1.923 × 10^1 | 2.001 × 10^1
F5 | Mean | 6.359 × 10^1 | 7.125 × 10^1 | 1.508 × 10^2 | 3.89 × 10^2 | 8.408 × 10^1 | 2.038 × 10^2
F5 | STD | 1.584 × 10^1 | 1.066 × 10^1 | 2.758 × 10^1 | 3.30 × 10^1 | 2.096 × 10^1 | 4.101 × 10^1
F6 | Mean | 3.523 × 10^1 | 7.745 × 10^1 | 9.271 × 10^1 | 8.63 × 10^1 | 3.627 × 10^1 | 5.316 × 10^1
F6 | STD | 8.702 × 10^0 | 6.053 × 10^0 | 1.745 × 10^1 | 7.46 × 10^0 | 1.012 × 10^1 | 6.414 × 10^0
F7 | Mean | 8.615 × 10^1 | 5.545 × 10^1 | 4.585 × 10^2 | 6.72 × 10^2 | 7.470 × 10^1 | 5.110 × 10^2
F7 | STD | 2.031 × 10^1 | 1.931 × 10^1 | 9.379 × 10^1 | 6.73 × 10^1 | 2.151 × 10^1 | 1.011 × 10^2
F8 | Mean | 3.211 × 10^1 | 2.408 × 10^1 | 1.369 × 10^2 | 3.11 × 10^2 | 4.291 × 10^1 | 1.451 × 10^2
F8 | STD | 6.033 × 10^0 | 6.884 × 10^0 | 1.922 × 10^1 | 2.80 × 10^1 | 1.767 × 10^1 | 3.211 × 10^1
F9 | Mean | 2.773 × 10^2 | 3.135 × 10^2 | 4.114 × 10^3 | 8.53 × 10^3 | 5.919 × 10^2 | 3.411 × 10^3
F9 | STD | 1.571 × 10^2 | 6.321 × 10^1 | 1.082 × 10^3 | 1.19 × 10^3 | 3.820 × 10^2 | 6.754 × 10^2
F10 | Mean | 1.451 × 10^3 | 9.451 × 10^2 | 2.726 × 10^3 | 7.02 × 10^3 | 1.181 × 10^3 | 4.211 × 10^3
F10 | STD | 3.124 × 10^2 | 2.686 × 10^2 | 2.296 × 10^2 | 3.59 × 10^2 | 2.751 × 10^2 | 6.081 × 10^2
F11 | Mean | 4.201 × 10^2 | 1.078 × 10^2 | 2.604 × 10^4 | 7.77 × 10^3 | 1.417 × 10^2 | 1.378 × 10^2
F11 | STD | 4.743 × 10^2 | 5.818 × 10^1 | 2.681 × 10^4 | 2.80 × 10^3 | 8.465 × 10^1 | 4.511 × 10^1
F12 | Mean | 5.697 × 10^6 | 7.862 × 10^6 | 2.784 × 10^9 | 1.70 × 10^10 | 7.279 × 10^6 | 9.614 × 10^7
F12 | STD | 5.285 × 10^6 | 3.363 × 10^6 | 1.640 × 10^9 | 4.36 × 10^9 | 5.117 × 10^6 | 8.094 × 10^5
F13 | Mean | 2.549 × 10^5 | 2.465 × 10^5 | 3.020 × 10^8 | 1.18 × 10^10 | 1.437 × 10^6 | 5.216 × 10^7
F13 | STD | 6.824 × 10^5 | 1.528 × 10^4 | 3.011 × 10^8 | 4.90 × 10^9 | 1.177 × 10^4 | 2.340 × 10^4
F14 | Mean | 5.424 × 10^3 | 6.334 × 10^4 | 7.503 × 10^6 | 3.07 × 10^6 | 7.307 × 10^3 | 4.170 × 10^5
F14 | STD | 8.248 × 10^3 | 8.016 × 10^2 | 1.063 × 10^7 | 3.58 × 10^6 | 1.500 × 10^3 | 3.152 × 10^3
F15 | Mean | 6.293 × 10^3 | 9.332 × 10^3 | 2.148 × 10^7 | 6.73 × 10^8 | 6.416 × 10^3 | 3.112 × 10^4
F15 | STD | 3.375 × 10^3 | 2.839 × 10^3 | 2.908 × 10^7 | 5.74 × 10^8 | 5.063 × 10^3 | 2.122 × 10^4
F16 | Mean | 3.248 × 10^2 | 9.535 × 10^2 | 1.178 × 10^3 | 3.89 × 10^3 | 3.329 × 10^2 | 1.504 × 10^3
F16 | STD | 1.027 × 10^2 | 1.114 × 10^2 | 2.349 × 10^2 | 6.86 × 10^2 | 1.440 × 10^2 | 3.314 × 10^2
F17 | Mean | 8.911 × 10^1 | 9.589 × 10^1 | 6.631 × 10^2 | 5.30 × 10^3 | 1.033 × 10^2 | 8.120 × 10^2
F17 | STD | 2.286 × 10^1 | 1.871 × 10^1 | 1.916 × 10^2 | 6.86 × 10^3 | 5.087 × 10^1 | 2.401 × 10^2
F18 | Mean | 2.407 × 10^5 | 2.153 × 10^4 | 6.274 × 10^8 | 3.27 × 10^7 | 1.946 × 10^4 | 1.120 × 10^5
F18 | STD | 3.197 × 10^5 | 1.184 × 10^4 | 6.430 × 10^8 | 3.07 × 10^7 | 1.111 × 10^4 | 1.001 × 10^5
F19 | Mean | 3.203 × 10^4 | 1.436 × 10^4 | 6.471 × 10^7 | 1.32 × 10^4 | 6.597 × 10^4 | 1.301 × 10^5
F19 | STD | 4.889 × 10^4 | 2.225 × 10^4 | 8.754 × 10^7 | 1.69 × 10^9 | 9.665 × 10^4 | 5.361 × 10^4
F20 | Mean | 1.701 × 10^2 | 2.153 × 10^2 | 5.419 × 10^2 | 8.63 × 10^2 | 1.854 × 10^2 | 7.219 × 10^2
F20 | STD | 5.579 × 10^1 | 4.716 × 10^1 | 1.330 × 10^2 | 1.42 × 10^2 | 7.896 × 10^1 | 2.015 × 10^2
F21 | Mean | 2.299 × 10^2 | 2.967 × 10^2 | 3.375 × 10^2 | 6.43 × 10^2 | 2.310 × 10^2 | 4.004 × 10^2
F21 | STD | 5.293 × 10^1 | 4.681 × 10^1 | 3.148 × 10^1 | 4.26 × 10^1 | 5.171 × 10^1 | 4.051 × 10^1
F22 | Mean | 1.758 × 10^2 | 2.091 × 10^2 | 1.798 × 10^3 | 5.25 × 10^3 | 1.831 × 10^2 | 4.001 × 10^3
F22 | STD | 5.337 × 10^1 | 1.524 × 10^1 | 5.866 × 10^2 | 1.01 × 10^3 | 2.703 × 10^2 | 1.701 × 10^3
F23 | Mean | 3.843 × 10^2 | 5.412 × 10^2 | 5.423 × 10^2 | 1.04 × 10^3 | 3.976 × 10^2 | 9.991 × 10^2
F23 | STD | 2.354 × 10^1 | 1.313 × 10^1 | 6.781 × 10^1 | 1.08 × 10^2 | 2.060 × 10^1 | 1.013 × 10^2
F24 | Mean | 3.145 × 10^2 | 3.437 × 10^2 | 5.939 × 10^2 | 1.17 × 10^3 | 3.870 × 10^2 | 1.004 × 10^3
F24 | STD | 1.416 × 10^1 | 8.266 × 10^1 | 7.436 × 10^1 | 2.45 × 10^2 | 2.521 × 10^1 | 9.711 × 10^1
F25 | Mean | 4.776 × 10^2 | 7.949 × 10^2 | 1.988 × 10^3 | 2.22 × 10^3 | 5.651 × 10^2 | 4.101 × 10^2
F25 | STD | 4.645 × 10^1 | 3.036 × 10^1 | 7.365 × 10^2 | 8.61 × 10^2 | 3.538 × 10^1 | 9.110 × 10^0
F26 | Mean | 6.408 × 10^2 | 9.175 × 10^2 | 2.348 × 10^3 | 7.93 × 10^3 | 9.465 × 10^2 | 5.832 × 10^3
F26 | STD | 3.023 × 10^2 | 1.623 × 10^2 | 3.639 × 10^2 | 1.12 × 10^3 | 6.068 × 10^2 | 1.112 × 10^3
F27 | Mean | 4.467 × 10^2 | 6.041 × 10^2 | 7.303 × 10^2 | 9.41 × 10^2 | 5.379 × 10^2 | 1.204 × 10^3
F27 | STD | 4.845 × 10^1 | 8.332 × 10^0 | 1.222 × 10^2 | 2.31 × 10^2 | 3.300 × 10^1 | 2.510 × 10^2
F28 | Mean | 4.913 × 10^2 | 5.965 × 10^2 | 1.323 × 10^3 | 3.98 × 10^3 | 6.153 × 10^2 | 5.854 × 10^2
F28 | STD | 6.521 × 10^0 | 9.938 × 10^1 | 2.043 × 10^2 | 8.85 × 10^2 | 1.794 × 10^2 | 5.120 × 10^1
F29 | Mean | 4.026 × 10^2 | 3.429 × 10^2 | 1.070 × 10^3 | 4.14 × 10^3 | 4.614 × 10^2 | 1.520 × 10^3
F29 | STD | 6.510 × 10^1 | 5.123 × 10^1 | 2.163 × 10^2 | 1.61 × 10^3 | 8.636 × 10^1 | 3.701 × 10^2
F30 | Mean | 3.891 × 10^4 | 6.647 × 10^5 | 1.451 × 10^8 | 2.24 × 10^8 | 7.597 × 10^6 | 5.371 × 10^5
F30 | STD | 8.456 × 10^4 | 7.482 × 10^4 | 1.052 × 10^8 | 9.25 × 10^7 | 9.042 × 10^5 | 3.104 × 10^5
(W/L/T) |  | 20/9/0 | 5/24/0 | 0/29/0 | 1/28/0 | 1/28/0 | 2/27/0
Rank |  | 1.62 | 2.41 | 4.72 | 5.62 | 2.55 | 4.07
CPU Runtime |  | 3.25 × 10^4 | 2.10 × 10^4 | 1.29 × 10^4 | 5.11 × 10^4 | 4.10 × 10^3 | 1.29 × 10^4
Note: bold is used to indicate the best result.
Table 4. Summary of non-parametric statistical results by the Wilcoxon test and Bonferroni–Dunn test.

Algorithms | R+ | R− | z-Value | p-Value | Sign
DAO vs. AO | 21 | 8 | 2.022 | 0.043 | =
DAO vs. MAO | 29 | 0 | 4.703 | 0.000 | +
DAO vs. RSA | 28 | 1 | 4.249 | 0.000 | +
DAO vs. WOA | 24 | 5 | 2.757 | 0.006 | =
DAO vs. BSO | 25 | 4 | 3.557 | 0.000 | +
CD value at α = 0.1: 1.1428; CD value at α = 0.05: 1.2656
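To illustrate how the pairwise entries in Table 4 can be obtained, the sketch below applies the Wilcoxon signed-rank test to paired per-function results using SciPy, with a win/loss tally in the spirit of the R+/R− columns. The arrays here are random placeholders, not the values from Table 3.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Placeholder per-function mean errors for DAO and one competitor
# (in practice these would be the 29 CEC 2017 means from Table 3).
dao_means = rng.random(29)
rival_means = dao_means + rng.normal(0.1, 0.2, 29)

# Two-sided Wilcoxon signed-rank test on the paired differences.
res = wilcoxon(dao_means, rival_means)

# Win/loss tally for a minimization problem (smaller mean error is better).
wins = int(np.sum(dao_means < rival_means))
losses = int(np.sum(dao_means > rival_means))
print(f"R+ = {wins}, R- = {losses}, p = {res.pvalue:.3f}")
```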
Table 5. List of 10 benchmark functions of CEC 2019 with dimensions and search range.

Func. No. | Function | Dim | Search Range
F1 | Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192]
F2 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384]
F3 | Lennard-Jones Minimum Energy Cluster | 18 | [−4, 4]
F4 | Rastrigin's Function | 10 | [−100, 100]
F5 | Griewangk's Function | 10 | [−100, 100]
F6 | Weierstrass Function | 10 | [−100, 100]
F7 | Modified Schwefel's Function | 10 | [−100, 100]
F8 | Expanded Schaffer's F6 Function | 10 | [−100, 100]
F9 | Happy Cat Function | 10 | [−100, 100]
F10 | Ackley Function | 10 | [−100, 100]
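For orientation, the basic (unshifted, unrotated) forms of two functions from Table 5 are sketched below. The official CEC 2019 versions apply shifts, rotations, and bias values on top of these forms, so this is only an illustrative approximation rather than the exact benchmark code.

```python
import numpy as np

def rastrigin(x):
    """Basic Rastrigin function (F4 in Table 5, before CEC shifting/rotation)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def ackley(x):
    """Basic Ackley function (F10 in Table 5, before CEC shifting/rotation)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

# Both functions have their global minimum value of 0 at the origin.
print(rastrigin(np.zeros(10)), ackley(np.zeros(10)))
```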
Table 6. Mean and standard deviation (STD) obtained from the objective function by standard AO, the proposed algorithm DAO, and other metaheuristic algorithms for the 10-dimensional CEC 2019 benchmark functions.

Function | Metric | DAO | AO | MAO | WOA | SSA | GOA
F1 | Mean | 9.900 × 10^1 | 9.900 × 10^1 | 1.235 × 10^9 | 6.784 × 10^6 | 7.324 × 10^9 | 1.320 × 10^10
F1 | STD | 0.000 × 10^0 | 2.053 × 10^−8 | 7.355 × 10^8 | 7.462 × 10^6 | 3.483 × 10^9 | 1.541 × 10^10
F2 | Mean | 1.950 × 10^2 | 1.950 × 10^2 | 2.825 × 10^4 | 7.663 × 10^2 | 2.001 × 10^2 | 1.739 × 10^3
F2 | STD | 0.000 × 10^0 | 0.000 × 10^0 | 7.301 × 10^3 | 8.7317 × 10^2 | 2.079 × 10^−2 | 4.084 × 10^2
F3 | Mean | 2.948 × 10^2 | 2.937 × 10^2 | 2.862 × 10^2 | 2.951 × 10^2 | 2.970 × 10^2 | 2.270 × 10^2
F3 | STD | 1.321 × 10^0 | 1.863 × 10^0 | 4.426 × 10^−1 | 1.923 × 10^0 | 1.776 × 10^−15 | 8.188 × 10^−12
F4 | Mean | 3.441 × 10^2 | 3.683 × 10^2 | 2.445 × 10^2 | 3.498 × 10^2 | 3.423 × 10^1 | 3.286 × 10^2
F4 | STD | 1.376 × 10^1 | 9.697 × 10^0 | 2.501 × 10^1 | 2.496 × 10^1 | 1.077 × 10^1 | 1.971 × 10^1
F5 | Mean | 4.929 × 10^2 | 4.981 × 10^2 | 3.059 × 10^2 | 4.977 × 10^2 | 5.486 × 10^2 | 8.484 × 10^2
F5 | STD | 4.413 × 10^0 | 1.826 × 10^−1 | 4.894 × 10^1 | 4.591 × 10^−1 | 8.533 × 10^−1 | 8.763 × 10^−1
F6 | Mean | 5.918 × 10^2 | 5.944 × 10^2 | 5.999 × 10^2 | 5.919 × 10^2 | 5.986 × 10^2 | 8.484 × 10^2
F6 | STD | 1.787 × 10^0 | 1.440 × 10^0 | 9.224 × 10^−1 | 1.751 × 10^0 | 8.533 × 10^−1 | 8.763 × 10^1
F7 | Mean | 7.152 × 10^2 | 3.011 × 10^2 | 2.217 × 10^3 | 7.640 × 10^2 | 4.728 × 10^2 | 5.007 × 10^2
F7 | STD | 2.553 × 10^2 | 2.936 × 10^2 | 2.924 × 10^2 | 3.001 × 10^2 | 9.776 × 10^−1 | 2.191 × 10^2
F8 | Mean | 7.953 × 10^2 | 8.957 × 10^2 | 7.644 × 10^2 | 5.953 × 10^2 | 9.088 × 10^2 | 8.587 × 10^2
F8 | STD | 1.998 × 10^−1 | 3.015 × 10^−1 | 2.387 × 10^−1 | 3.216 × 10^−1 | 6.135 × 10^−1 | 4.300 × 10^−1
F9 | Mean | 8.985 × 10^2 | 9.365 × 10^2 | 8.993 × 10^3 | 8.985 × 10^3 | 2.416 × 10^3 | 9.664 × 10^2
F9 | STD | 1.639 × 10^−1 | 1.427 × 10^−1 | 8.679 × 10^−1 | 2.006 × 10^−1 | 5.956 × 10^−1 | 1.827 × 10^−1
F10 | Mean | 9.785 × 10^2 | 9.996 × 10^2 | 9.852 × 10^2 | 9.953 × 10^2 | 2.101 × 10^3 | 9.923 × 10^2
F10 | STD | 7.688 × 10^−1 | 4.637 × 10^0 | 1.350 × 10^−1 | 1.330 × 10^−1 | 3.562 × 10^1 | 3.718 × 10^−4
(W/L/T) |  | 5/5/2 | 3/7/2 | 1/9/0 | 1/9/0 | 1/9/0 | 1/9/0
Rank |  | 2.65 | 3.25 | 3.15 | 3.65 | 4.30 | 4.00
CPU Runtime |  | 3.11 × 10^4 | 3.02 × 10^4 | 2.16 × 10^4 | 5.02 × 10^4 | 4.22 × 10^3 | 1.26 × 10^4
Note: bold is used to indicate the best result.
Table 7. Summary of non-parametric statistical results obtained from the Wilcoxon test and Bonferroni–Dunn test.

Algorithms | ΣR+ | ΣR− | z-Value | p-Value | Sign
DAO vs. AO | 6 | 2 | 1.260 | 0.208 | =
DAO vs. MAO | 5 | 5 | 0.866 | 0.386 | =
DAO vs. WOA | 6 | 3 | 1.599 | 0.110 | =
DAO vs. MPA | 8 | 2 | 1.478 | 0.139 | =
DAO vs. GOA | 7 | 3 | 1.478 | 0.139 | =
Table 8. Comparison of DAO and other algorithms for the CBD problem.

Algorithm | z1 | z2 | z3 | z4 | z5 | Optimum Weight | CPU Runtime (s)
DAO | 6.0112 | 5.1211 | 4.8221 | 3.2114 | 2.1510 | 1.3302 | 1.986
COA [48] | 6.0172 | 5.3071 | 4.4912 | 3.5081 | 2.1499 | 1.3999 | 2.001
AO [9] | 5.8492 | 5.5413 | 4.3778 | 3.5978 | 2.1026 | 1.3596 | 1.926
ROA [49] | 6.0156 | 5.1001 | 4.303 | 3.7365 | 2.3183 | 1.3456 | 1.256
GWO [50] | 5.9956 | 5.4121 | 4.5986 | 3.5689 | 2.3548 | 1.3586 | 1.112
WOA [6] | 5.8393 | 5.1582 | 4.9917 | 3.693 | 2.2275 | 1.3467 | 0.606
SCA [51] | 5.9264 | 5.9285 | 4.5223 | 3.3267 | 1.9923 | 1.3581 | 1.111
Note: bold is used to indicate better result.
Table 9. Comparison of DAO and other algorithms for the WBD problem.

Algorithm | H | L | T | B | Optimum Cost | CPU Runtime (s)
DAO | 0.2138 | 3.2154 | 9.0275 | 0.2052 | 1.6960 | 2.410
COA [48] | 0.2456 | 3.2563 | 9.0403 | 0.2057 | 1.6963 | 2.031
AO [9] | 0.1631 | 3.3652 | 9.0202 | 0.2067 | 1.6566 | 2.399
SSA [44] | 0.2057 | 3.4714 | 9.0366 | 0.2057 | 1.7249 | 2.121
WOA [6] | 0.2054 | 3.4843 | 9.0374 | 0.2062 | 1.7305 | 1.037
Note: bold is used to indicate better result.
Table 10. Comparison of DAO and other algorithms for the PVD problem.

Algorithm | T_S | T_H | r | L_CS | Optimum Cost | CPU Runtime (s)
DAO | 0.7885 | 0.3254 | 42.3275 | 189.892 | 5877.1000 | 2.432
COA [48] | 0.7437 | 0.3705 | 40.3238 | 199.9414 | 5735.2488 | 2.356
AO [9] | 1.0540 | 0.1828 | 59.6219 | 38.8050 | 5949.2258 | 2.222
GWO [50] | 0.8125 | 0.4345 | 42.0891 | 176.7587 | 6051.5639 | 1.345
ROA [49] | 0.7295 | 0.2226 | 40.4323 | 198.5537 | 5311.9175 | 2.252
RSA [5] | 0.8071 | 0.4426 | 43.6335 | 142.5359 | 6213.8317 | 1.125
WOA [6] | 0.8125 | 0.4375 | 42.0982 | 176.6389 | 6059.7410 | 0.872
Note: bold is used to indicate better result.
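To help interpret the PVD results in Table 10, the sketch below evaluates the cost function and constraints of this benchmark in their commonly used textbook formulation, with design variables (T_S, T_H, r, L_CS). This standard formulation is assumed here for illustration only and is not taken from the authors' code.

```python
import math

def pvd_cost(ts, th, r, l):
    """Commonly used pressure-vessel-design cost (material, forming, welding)."""
    return (0.6224 * ts * r * l
            + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l
            + 19.84 * ts**2 * r)

def pvd_constraints(ts, th, r, l):
    """Inequality constraints g_i(x) <= 0 in the usual formulation."""
    g1 = -ts + 0.0193 * r                                                  # shell thickness
    g2 = -th + 0.00954 * r                                                 # head thickness
    g3 = -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3 + 1_296_000.0  # volume requirement
    g4 = l - 240.0                                                         # length limit
    return g1, g2, g3, g4

# Evaluating DAO's reported design from Table 10 reproduces a cost of about 5877.1.
print(pvd_cost(0.7885, 0.3254, 42.3275, 189.892))
```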