Article

Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy

1 School of Mechanical and Automation Engineering, Wuyi University, Jiangmen 529020, China
2 School of Rail Transit, Wuyi University, Jiangmen 529020, China
3 School of Electronic and Information Engineering, Wuyi University, Jiangmen 529020, China
4 Jiangmen Key Laboratory of Kejie Semiconductor Bonding Technology and Control System, Jiangmen 529020, China
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(16), 3329; https://doi.org/10.3390/electronics13163329
Submission received: 13 July 2024 / Revised: 11 August 2024 / Accepted: 12 August 2024 / Published: 22 August 2024
(This article belongs to the Section Artificial Intelligence)

Abstract:
This paper proposes an improved African vulture optimization algorithm (IROAVOA), which integrates a random opposition-based learning strategy and a disturbance factor to address the relatively weak global search capability of the basic algorithm and its poor balance between the exploration and exploitation stages. IROAVOA comprises two parts. First, the random opposition-based learning strategy is introduced in the population initialization stage to improve the diversity of the population, enabling the algorithm to explore the potential solution space more comprehensively and improving its convergence speed. Second, the disturbance factor is introduced in the exploration stage to increase the randomness of the algorithm, effectively avoiding local optima and better balancing the exploration and exploitation stages. To verify the effectiveness of the proposed algorithm, comprehensive tests were conducted on the 23 benchmark test functions, the CEC2019 test suite, and two engineering optimization problems. The algorithm was compared with seven state-of-the-art metaheuristic algorithms in the benchmark experiments and with five algorithms in the engineering optimization experiments. The experimental results indicate that IROAVOA achieved better mean and best values on all test functions and a significant improvement in convergence speed, and that it solves the engineering optimization problems better than the other five algorithms.

1. Introduction

Metaheuristic algorithms are among the most effective methods for solving real-world engineering optimization problems and are widely used in fields such as economics, engineering, politics, and management [1]. Most metaheuristic algorithms are inspired by the survival-of-the-fittest principle of evolutionary algorithms, the collective behavior of swarms, biologically inspired behaviors, or the logic of natural physical processes. Compared with traditional optimization algorithms, metaheuristic algorithms have a simple structure, low computational complexity, require no gradient information, and have a strong ability to escape local optima [2]. Metaheuristic algorithms fall into three main types: those based on biological evolution, those based on physical laws, and those based on population behavior [3]. Algorithms based on biological evolution simulate the evolutionary process of organisms in nature and are widely used in artificial intelligence (AI) to solve highly complex optimization problems [4]. Representative algorithms include the genetic algorithm (GA) [5], differential evolution (DE) [6], and partial differential equation (PDE) models [7]. Physics-based algorithms simulate the physical laws of nature; representative algorithms include the Gravitational Search Algorithm (GSA) [8] and the Multi-Verse Optimizer (MVO) [9]. Swarm-based algorithms simulate the collective behavior of group-living organisms; representative algorithms include Particle Swarm Optimization (PSO) [10], the Grey Wolf Optimizer (GWO) [11], the Whale Optimization Algorithm (WOA) [12], Moth-Flame Optimization (MFO) [13], the Tunicate Swarm Algorithm (TSA) [14], and Harris Hawks Optimization (HHO) [15].
The African vulture optimization algorithm (AVOA) [16] is a metaheuristic inspired by the competition and navigation behaviors of African vultures. It has received widespread attention from researchers due to its simple structure and implementation, as well as its excellent performance in finding optimal solutions [17]. Bagal et al. proposed a modified African vulture optimization algorithm to identify the parameters of empirical SOFC fuel cell voltage and current curves [18]. He et al. proposed an improved African vulture optimization algorithm to solve the dual-resource-constrained flexible job shop scheduling problem with machine and worker constraints [19]. To address rolling bearing defect diagnosis, Vashishtha et al. introduced an ameliorated African vulture optimization algorithm to optimize TVF-EMD filter parameters [20]. Singh et al. applied an improved African vulture optimization algorithm to the traveling salesman (shortest path) problem [21]. Diab et al. introduced the African vulture optimization algorithm to accurately estimate the parameters of various solar photovoltaic units [22].
Although the African vulture optimization algorithm has good search performance, it still has shortcomings, such as a tendency to fall into local optima, limited population diversity, and insufficient exploration capability on multimodal problems. Zheng et al. proposed three strategies, a selection accumulation mechanism, a representative vulture selection strategy, and a rotating flight strategy, to improve the African vulture optimization algorithm and better balance local and global search [23]. Liu et al. introduced quasi-oppositional learning and differential evolution operators to improve the population diversity and exploration ability of the algorithm [24]. Fan et al. introduced tent chaotic mapping and a time-varying mechanism to obtain better solutions and improve the convergence speed [25].
However, although most existing improved AVOA variants achieve better optimization performance and satisfactory results on specific engineering problems, they are not suitable for most optimization problems and still have some limitations and uncertainties, as follows:
  • During the exploration phase, the algorithm relies solely on updates around the optimal position, which diminishes population diversity and slows convergence in the initial stages.
  • With a poor ability to balance exploration and exploitation, AVOA is sensitive to local optima, and it is difficult to obtain ideal solutions.
To solve the above problems, this paper proposes an improved African vulture optimization algorithm. First, a random opposition-based learning strategy is added in the population initialization stage to increase the diversity of the population, allowing the algorithm to search the space more broadly in the early stages. Second, a disturbance factor is introduced in the exploration phase to increase the randomness of the algorithm and improve its ability to escape local optima, thus balancing exploration and exploitation. To evaluate the effectiveness of the algorithm, simulation experiments were conducted on 23 benchmark test functions and the CEC2019 test suite. The performance of IROAVOA is compared with basic AVOA, ROAVOA, IAVOA, and seven other swarm intelligence algorithms. The results show that the proposed IROAVOA has better solution accuracy and convergence speed.
The remainder of this article is organized as follows. The original AVOA is briefly introduced in Section 2. IROAVOA is introduced in Section 3. In Section 4, the performance of IROAVOA is analyzed using classic benchmark test functions and the CEC2019 test suite. Section 5 applies IROAVOA to two engineering design problems. Section 6 summarizes the study and suggests possible future research directions.

2. Original African Vultures Optimization Algorithm

AVOA is a metaheuristic algorithm proposed by Abdollahzadeh et al. [16] in 2021. This approach mimics the competitive and navigational behavior of African vultures. Known for their unique physical characteristics, African vultures are regarded as intelligent and strong creatures. One of their distinguishing characteristics is that they take appropriate actions in different situations depending on their current level of hunger (hunger rate), which is also a defining feature of AVOA. The solution process of the African vulture optimization algorithm is shown in Figure 1. The mathematical model of the hunger rate is given in Equations (1) and (2).
$$F_i(t) = (2 \times rand + 1) \times z \times \left(1 - \frac{t}{T}\right) + d_t \tag{1}$$
$$d_t = h \times \left(\sin^{w}\left(\frac{\pi}{2} \times \frac{t}{T}\right) + \cos\left(\frac{\pi}{2} \times \frac{t}{T}\right) - 1\right) \tag{2}$$
where $F_i(t)$ is the hunger rate of the i-th vulture in the t-th iteration; $d_t$ is given by Equation (2); $t$ is the current iteration; $T$ is the maximum number of iterations; $rand$ is a random number between 0 and 1; $h$ is a random number between −2 and 2; $z$ is a random number between −1 and 1; and $w$ is a fixed value, set to 2.5 in AVOA. When $F_i(t)$ drops below zero, the vulture is hungry; when $F_i(t)$ rises toward zero, the vulture is satiated. To capture the key characteristics of the vultures, the first- or second-best vulture is selected as the leader vulture, and its mathematical model is shown in Equation (3).
$$R_i(t) = \begin{cases} BestVulture_1 & \text{if } p > rand \\ BestVulture_2 & \text{otherwise} \end{cases} \tag{3}$$
where $R_i(t)$ is the leader vulture selected for the i-th vulture; $BestVulture_1$ and $BestVulture_2$ are the first- and second-best vultures, respectively; and p is a selection probability, set to 0.8.
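As a concrete illustration, the following minimal NumPy sketch implements Equations (1)–(3); the function names and the uniform sampling conventions for h and z are illustrative assumptions rather than code from the original implementation.

```python
import numpy as np

def hunger_rate(t, T, w=2.5):
    """Vulture satiation/hunger rate F of Equations (1) and (2)."""
    h = np.random.uniform(-2, 2)               # random number in [-2, 2]
    z = np.random.uniform(-1, 1)               # random number in [-1, 1]
    d_t = h * (np.sin(np.pi / 2 * t / T) ** w
               + np.cos(np.pi / 2 * t / T) - 1)              # Equation (2)
    return (2 * np.random.rand() + 1) * z * (1 - t / T) + d_t  # Equation (1)

def select_leader(best1, best2, p=0.8):
    """Equation (3): choose the first- or second-best vulture as the leader."""
    return best1 if p > np.random.rand() else best2
```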

2.1. Exploration Phase

When $|F_i(t)| \ge 1$, vultures search for food in different areas, and AVOA enters the exploration phase. Based on how vultures move to protect food, the position update follows one of two strategies, selected according to Equation (4):
$$P_i(t+1) = \begin{cases} \text{Eq. (5)} & \text{if } P_1 \ge rand_{P_1} \\ \text{Eq. (6)} & \text{if } P_1 < rand_{P_1} \end{cases} \tag{4}$$
$$P_i(t+1) = R_i(t) - D_i(t) \times F_i(t) \tag{5}$$
$$P_i(t+1) = R_i(t) - F_i(t) + rand \times ((ub - lb) \times rand + lb) \tag{6}$$
$$D_i(t) = |X \times R_i(t) - P_i(t)| \tag{7}$$
where $P_i(t+1)$ is the position of the vulture in the next iteration; $P_1$ is set to 0.6; $rand_{P_1}$ is a random number between 0 and 1; $D_i(t)$ is the distance between the current vulture and the selected leader vulture; $X$ is a random number between −2 and 2; and $ub$ and $lb$ are the upper and lower bounds of the search space.
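A compact sketch of this exploration step (Equations (4)–(7)) follows; treating ub and lb as scalars or NumPy arrays is an assumption for illustration.

```python
import numpy as np

def explore(P_i, R_i, F_i, ub, lb, p1=0.6):
    """One AVOA exploration move, Equations (4)-(7)."""
    if p1 >= np.random.rand():                               # Equation (4) selector
        D_i = np.abs(np.random.uniform(-2, 2) * R_i - P_i)   # Equation (7), X in [-2, 2]
        return R_i - D_i * F_i                               # Equation (5)
    return R_i - F_i + np.random.rand() * ((ub - lb) * np.random.rand() + lb)  # Equation (6)
```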

2.2. Exploitation Phase

When $|F_i(t)| < 1$, AVOA enters the exploitation phase. In the first stage, reached when $0.5 \le |F_i(t)| < 1$, the position update follows one of two strategies, selected according to Equation (8):
$$P_i(t+1) = \begin{cases} \text{Eq. (9)} & \text{if } P_2 \ge rand_{P_2} \\ \text{Eq. (10)} & \text{if } P_2 < rand_{P_2} \end{cases} \tag{8}$$
$$P_i(t+1) = D_i(t) \times (F_i(t) + rand) - d_i(t) \tag{9}$$
$$P_i(t+1) = R_i(t) - (S_1 + S_2) \tag{10}$$
$$d_i(t) = R_i(t) - P_i(t) \tag{11}$$
$$S_1 = R_i(t) \times \left(\frac{rand \times P_i(t)}{2\pi}\right) \times \cos(P_i(t)), \quad S_2 = R_i(t) \times \left(\frac{rand \times P_i(t)}{2\pi}\right) \times \sin(P_i(t)) \tag{12}$$
where $P_2$ is set to 0.4; $R_i(t)$ is one of the two best vultures; and $rand_{P_2}$ is a random number between 0 and 1. The second stage is reached when $|F_i(t)| < 0.5$; it simulates the vultures' accumulation of food and fierce competition for food:
$$P_i(t+1) = \begin{cases} \text{Eq. (14)} & \text{if } P_3 \ge rand_{P_3} \\ \text{Eq. (15)} & \text{if } P_3 < rand_{P_3} \end{cases} \tag{13}$$
$$P_i(t+1) = \frac{A_1 + A_2}{2} \tag{14}$$
$$P_i(t+1) = R_i(t) - |d_i(t)| \times F_i(t) \times Levy(d) \tag{15}$$
$$A_1 = BestVulture_1(t) - \frac{BestVulture_1(t) \times P_i(t)}{BestVulture_1(t) - P_i(t)^2} \times F_i(t), \quad A_2 = BestVulture_2(t) - \frac{BestVulture_2(t) \times P_i(t)}{BestVulture_2(t) - P_i(t)^2} \times F_i(t) \tag{16}$$
$$Levy(d) = 0.01 \times \frac{u}{|v|^{1/\beta}}, \quad u \sim N(0, \sigma_u^2), \; v \sim N(0, \sigma_v^2) \tag{17}$$
$$\sigma_u = \left(\frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}}\right)^{1/\beta} \tag{18}$$
where $P_3$ is set to 0.6 and $rand_{P_3}$ is a random number between 0 and 1. u and v are random numbers following Gaussian distributions; $\sigma_v$ and $\beta$ are set to 1 and 1.5, respectively; and $\Gamma$ is the standard gamma function.
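The Levy flight term can be sketched directly from Equations (17) and (18); drawing u with standard deviation σ_u and v with σ_v = 1 follows the definitions above.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, beta=1.5):
    """Levy(d) of Equation (17) with sigma_u from Equation (18) and sigma_v = 1."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)   # u ~ N(0, sigma_u^2)
    v = np.random.normal(0, 1, dim)         # v ~ N(0, sigma_v^2), sigma_v = 1
    return 0.01 * u / np.abs(v) ** (1 / beta)
```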

3. Improved African Vulture Optimization Algorithm

3.1. Perturbation Operator

During competition and navigation, the African vulture group mainly uses the position information of the best vultures, gradually approaches them, and updates its own position according to Equation (4), so new solutions are generated around the current optimum. However, as the iterations proceed, the population diversity gradually decreases, which may cause the algorithm to fall into a local optimum. To overcome this shortcoming, this paper introduces a perturbation factor w in the exploration stage; its changing trend is shown in Figure 2. In the initial stage of the algorithm, this random factor promotes a wider search and increases the chance of exploring the solution space. In the later stages, the injected randomness helps the population escape local optima and avoid the dilemma of purely local search, making it more likely to find the global optimum. The perturbation factor is defined in Formula (19).
$$w = 0.5 \times 0.1 + rand \times randn(0, 1) \tag{19}$$
where $rand$ is a random number between 0 and 1, and $randn(0, 1)$ is a random number drawn from the normal distribution with mean 0 and standard deviation 1. The vulture position update in the exploration phase is then defined as follows:
$$P_i(t+1) = R_i(t) \times (1 - w) - D_i(t) \times F_i(t) \times w \tag{20}$$
$$P_i(t+1) = R_i(t) \times w - F_i(t) + rand \times ((ub - lb) \times rand + lb) \tag{21}$$
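A minimal sketch of the perturbed exploration update follows; the exact grouping of the terms in Formula (19) and the array-based conventions are our assumptions.

```python
import numpy as np

def perturbation_factor():
    """Perturbation factor w of Formula (19)."""
    return 0.5 * 0.1 + np.random.rand() * np.random.randn()

def explore_with_w(P_i, R_i, F_i, ub, lb, p1=0.6):
    """Exploration update of Equations (20) and (21) with perturbation factor w."""
    w = perturbation_factor()
    if p1 >= np.random.rand():
        D_i = np.abs(np.random.uniform(-2, 2) * R_i - P_i)
        return R_i * (1 - w) - D_i * F_i * w                 # Equation (20)
    return R_i * w - F_i + np.random.rand() * ((ub - lb) * np.random.rand() + lb)  # Equation (21)
```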

3.2. Random Opposition-Based Learning

The literature on movement ecology provides a wealth of insight into the complex patterns and dynamics of animal movement [26,27,28]. Meyer et al. predicted the movement of springboks within the next hour with a certainty of approximately 20%, while the remaining 80% of the movement is stochastic in nature, stemming from factors unaccounted for in the modeling algorithm and from the individual behavioral traits of the springboks [29]. Stochasticity is thus of great significance in ecological and animal behavior research. Opposition-based learning is a method based on the principles of estimation and opposite estimation proposed by Tizhoosh et al. [30]. It is inspired by the concept of opposites in the real world and has been widely used in optimization algorithms, reinforcement learning, artificial neural networks, and fuzzy systems [31]. Optimization usually starts with candidate solutions, and the initial population and parameters are chosen randomly. If an initial candidate solution is close to the optimal solution, the algorithm converges quickly. Conversely, an initial candidate solution may be far from the optimal solution, in which case the algorithm takes longer to converge or, in the worst case, may not converge at all. Opposition-based learning generates the opposite point of each candidate solution, which with a certain probability improves the convergence speed of the algorithm. The opposite point of each candidate solution can therefore be explored as well; if it proves beneficial, it can be used as a candidate solution for the next iteration.
Definition 1.
Assume that x is a real number defined in the interval $[L_1, L_2]$; its opposite point $x^{op}$ is defined as shown in Equations (22) and (23).
$$x^{op} = L_1 + L_2 - x \tag{22}$$
If $L_1 = 0$ and $L_2 = 1$, then:
$$x^{op} = 1 - x \tag{23}$$
Similarly, opposite points can be defined in higher-dimensional spaces.
Definition 2.
Assume that $p = (x_1, \dots, x_D)$ is a point in a D-dimensional coordinate system, where each $x_i$ is a real number in the interval $[L_1^i, L_2^i]$. The opposite point $\tilde{p}$ of $p$ is defined as shown in Equation (24).
$$x_i^{op} = L_1^i + L_2^i - x_i, \quad i \in [1, D] \tag{24}$$
where $x_i^{op}$ is the i-th coordinate of $\tilde{p}$.
The distance between the opposite solution generated by the standard opposition-based learning strategy and the current solution is fixed, which limits its randomness. To enhance population diversity and improve the ability to avoid local optima, Long et al. proposed the random opposition-based learning (ROBL) strategy [32]; its one-dimensional solution space is shown in Figure 3. This strategy further expands the search space and improves both the randomness of the algorithm and the population's ability to avoid local optima. Its definition is shown in Equation (25).
$$x_i^{op} = L_1^i + L_2^i - rand \times x_i, \quad i \in [1, D] \tag{25}$$
where rand is a random number between 0 and 1.
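A minimal sketch of ROBL initialization under Equation (25) is shown below; evaluating both the random population and its random-opposite points and keeping the N fittest candidates is one common selection rule and is our assumption here.

```python
import numpy as np

def robl_init(N, dim, lb, ub, fitness):
    """Random opposition-based learning initialization (Equation (25))."""
    X = lb + (ub - lb) * np.random.rand(N, dim)      # random initial population
    X_op = lb + ub - np.random.rand(N, dim) * X      # random-opposite points, Eq. (25)
    pool = np.vstack([X, X_op])                      # evaluate both sets together
    scores = np.apply_along_axis(fitness, 1, pool)
    return pool[np.argsort(scores)[:N]]              # keep the N fittest candidates
```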

3.3. Summary of the Proposed Method

In the African vulture optimization algorithm, the transition between exploration and exploitation depends on the vulture hunger rate F. In the early exploration stage, the algorithm converges slowly due to the lack of diversity in the population. As the number of iterations increases, the hunger rate F gradually decreases, keeping the algorithm in the exploitation stage. The algorithm therefore has shortcomings such as easily falling into local optima and an imbalance between exploration and exploitation. To solve these problems, this paper first introduces a random opposition-based learning strategy during population initialization to improve the diversity of the initial population. Second, a perturbation factor is introduced in the exploration phase, allowing the algorithm to explore the search space more extensively, effectively improving its ability to avoid local optima, strengthening its global search capability, and better balancing exploration and exploitation. The solution process of the improved African vulture optimization algorithm is shown in Figure 4, and a sketch of the overall flow is given below.
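The following skeleton traces the flow of Figure 4, reusing the helper functions sketched in the previous sections; the exploitation step condenses Equations (9)–(16), and the small epsilon guard and the bound-clipping step are our additions for a runnable illustration.

```python
import numpy as np

def exploit(P_i, R_i, F_i, best1, best2, p2=0.4, p3=0.6):
    """Original AVOA exploitation step, condensing Equations (9)-(16)."""
    if abs(F_i) >= 0.5:                              # first exploitation stage
        if p2 >= np.random.rand():
            D_i = np.abs(np.random.uniform(-2, 2) * R_i - P_i)
            return D_i * (F_i + np.random.rand()) - (R_i - P_i)   # Eq. (9), d_i from Eq. (11)
        S1 = R_i * (np.random.rand() * P_i / (2 * np.pi)) * np.cos(P_i)
        S2 = R_i * (np.random.rand() * P_i / (2 * np.pi)) * np.sin(P_i)
        return R_i - (S1 + S2)                       # Eq. (10), rotating flight
    if p3 >= np.random.rand():                       # second stage: food accumulation, Eq. (14)
        eps = 1e-30                                  # guards against division by zero (our addition)
        A1 = best1 - best1 * P_i / (best1 - P_i ** 2 + eps) * F_i
        A2 = best2 - best2 * P_i / (best2 - P_i ** 2 + eps) * F_i
        return (A1 + A2) / 2
    return R_i - np.abs(R_i - P_i) * F_i * levy_flight(P_i.size)  # Eq. (15)

def iroavoa(fitness, N, dim, lb, ub, T):
    """High-level IROAVOA flow of Figure 4."""
    X = robl_init(N, dim, lb, ub, fitness)           # ROBL initialization (Section 3.2)
    best1 = None
    for t in range(T):
        scores = np.apply_along_axis(fitness, 1, X)
        order = np.argsort(scores)
        best1, best2 = X[order[0]].copy(), X[order[1]].copy()
        for i in range(N):
            F_i = hunger_rate(t, T)
            R_i = select_leader(best1, best2)
            if abs(F_i) >= 1:                        # exploration with perturbation factor
                X[i] = explore_with_w(X[i], R_i, F_i, ub, lb)
            else:                                    # exploitation as in the original AVOA
                X[i] = exploit(X[i], R_i, F_i, best1, best2)
            X[i] = np.clip(X[i], lb, ub)             # keep candidates inside the bounds
    return best1
```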

3.4. Computational Complexity

The computational complexity of AVOA hinges primarily on three key steps: initialization, fitness evaluation, and updating the vulture position vectors. In the initialization phase, assigning initial states to N vultures costs O(N). As the algorithm searches for optimal positions and updates the position vectors of all vultures, the complexity has two main components: O(T × N), from the product of the number of iterations and vultures, and O(T × N × D), which additionally accounts for the dimensionality of the problem. Combining these factors, the overall computational complexity of AVOA is O(N × T × (1 + D)). Turning to IROAVOA, consider the worst case, in which every vulture applies the random opposition-based learning strategy and the perturbation factor in each iteration and thus generates two candidate positions; updating all vulture positions then costs O(2 × T × N × D), so the overall complexity of IROAVOA is O(2 × N × T × (1 + D)).
Compared with the original AVOA, the introduction of random opposition-based learning and the perturbation factor increases the computational complexity to a certain extent. However, this additional time cost improves the search performance of the algorithm, so the extra complexity is acceptable.

4. Discussion

4.1. Experimental Design and Parameter Setting

To verify that IROAVOA has better optimization performance and effectiveness, a comparative experiment was first conducted between AVOA and the three algorithms proposed in this article: IROAVOA, ROAVOA, and IAVOA. IROAVOA was then compared with seven classic and state-of-the-art algorithms: PSO, GWO, MVO, WOA, MFO, TSA, and HHO. The experiments used the 23 benchmark test functions (F1–F23) [33] and the CEC2019 test suite [34], as shown in Table 1, Table 2, Table 3 and Table 4. These benchmark functions are shifted, rotated, expanded, and combined variants of classical functions. The functions in Table 1 are unimodal benchmark functions with a single optimal solution, often used to verify the local exploitation ability of an algorithm. The functions in Table 2 are multimodal benchmark functions, and those in Table 3 are fixed-dimension multimodal functions; multimodal benchmark functions test the global exploration capability of an algorithm. Dim denotes the dimension of the function, Range the search space, and fmin the theoretical optimal value. The descriptions and characteristics of the CEC2019 test suite are shown in Table 4.
The experimental environment is the Windows 11 operating system, an Intel(R) Core(TM) i5–11155G7 CPU @ 2.50 GHz with 16 GB of memory, and MATLAB 2021a. The parameter settings of each algorithm are consistent with their references. The population size is 30, and the maximum number of iterations is 500. Each optimization algorithm was run independently 30 times on each benchmark function, and the test results were compared against the theoretical optimal value of each function. The reported results include the best value (Best), the mean (Mean), and the standard deviation (Std): the mean verifies the optimization ability of the algorithm, and the standard deviation verifies its stability. The parameter settings of all comparison algorithms are shown in Table 5.

4.2. Convergence Analysis

As shown in the convergence curves in Figure 5, Figure 6 and Figure 7, the convergence speed and accuracy of the three improved algorithms on the unimodal test functions F1–F5 are significantly better than those of traditional AVOA. For F6 and F7, even when the algorithm falls into a local optimum, its convergence accuracy is better than that of AVOA. For the multimodal functions F10 and F12 and the fixed-dimension multimodal function F15, IROAVOA has better convergence accuracy. For the multimodal functions F8, F9, F11, and F13 and the fixed-dimension multimodal functions F16–F23, the four algorithms reach the same convergence accuracy, but the three improved algorithms converge faster than traditional AVOA. For both unimodal and multimodal test functions, IROAVOA provides better convergence.

4.3. Analysis of 23 Groups of Benchmark Test Function Results

We conducted a comprehensive evaluation of the proposed improved algorithm on the unimodal and multimodal test functions described previously. Table A4, Table A5 and Table A6 list the best value, mean, and standard deviation obtained by IROAVOA and the other algorithms on each benchmark test function (F1–F23). To ensure a fair comparison, the maximum number of iterations of all algorithms was set to 500 and the population size to 30. The results show that the proposed IROAVOA performs excellently on most tested functions.
For the unimodal test functions (F1–F7), IROAVOA can effectively find the global optimum, and its solution accuracy is significantly better than that of the comparison algorithms. As shown in Table A4, Table A5 and Table A6 and Figure 7, the mean and standard deviation of IROAVOA are small and its convergence is faster than that of the other algorithms, indicating that IROAVOA performs best on these test functions. For the multimodal test functions (F8–F23), IROAVOA is significantly better than the comparison algorithms on F12, F14–F15, and F17–F20. On the benchmark functions F9–F11, its performance is similar to HHO but significantly better than GWO, PSO, MVO, WOA, MFO, and TSA. However, on some benchmark functions IROAVOA does not perform as well as the other optimization algorithms; IROAVOA therefore has certain limitations and requires further testing and application.

4.4. Comparison with Basic AVOA and Two Variants of AVOA

To evaluate the effectiveness of each component, the improved IROAVOA was compared with AVOA, with ROAVOA (improved with the random opposition-based learning strategy only), and with IAVOA (improved with the perturbation factor only). Under the same experimental design, the 23 test functions of Table 1, Table 2 and Table 3 were tested, and the best values, means, and standard deviations obtained are listed in Table A1, Table A2 and Table A3. To demonstrate the dynamic convergence behavior of IROAVOA, the average best-fitness convergence curves for each test function are shown in Figure 5 and Figure 6.
Based on the results, the performance of the three improved optimization algorithms is superior to traditional AVOA. For the unimodal test functions (F1–F7), IROAVOA outperforms the other algorithms on most functions. For F1 and F3, the four algorithms achieve the same best fitness value, but IROAVOA has a slightly better standard deviation than ROAVOA, IAVOA, and AVOA. For F5–F7, none of the four algorithms reaches the theoretical optimum, indicating that the local exploitation of IROAVOA can still be improved. For the multimodal test functions (F8–F23), the three improved algorithms show slight improvements in both the best and mean values. For F10 and F13–F15, the mean value of IROAVOA reaches the theoretical optimum, and the performance of the other two improved algorithms also improves. The three improved algorithms slightly improve the mean values of most test functions, but the search performance does not improve dramatically; the main reason is that traditional AVOA itself has good search ability and can already find the theoretical optima of many multimodal test functions, so the room for improvement is relatively small. These results indicate that applying random opposition-based learning and the perturbation factor improves the performance of AVOA to a certain extent.

4.5. CEC2019 Test Suite Result Analysis

To further illustrate the superiority of the improved algorithm, the IEEE CEC2019 test suite is used to evaluate the performance of IROAVOA on complex numerical problems. The CEC2019 suite contains 10 complex single-objective test functions. To ensure fairness, the proposed improved algorithm and the other eight comparison algorithms were each run independently 30 times on every function. The maximum number of iterations was set to 500 and the population size to 30. Table 6 lists the best values, mean values, and standard deviations of the test results. The last row of the table shows the Friedman test results.
The data in Table 6 show that, on the CEC2019 suite, IROAVOA outperforms the other eight algorithms on the 10 test functions and can effectively find the global optimum. Although the standard deviation of the proposed algorithm is slightly inferior to that of some algorithms, it is consistently smaller than that of traditional AVOA, which shows that the improved IROAVOA has better stability. These results show that the proposed IROAVOA is more competitive and can effectively solve various complex optimization problems.

4.6. Statistical Test Result Analysis

The mean and standard deviation evaluate the overall stability of an algorithm, but they do not fully reflect the results of each run. Because each run is stochastic, it is not sufficient to rely only on mean fitness and standard deviation to evaluate algorithm performance. To verify whether there are significant differences between the proposed algorithm and the others, this section uses the Wilcoxon rank sum test [35] and the Friedman ranking test [36].
The Friedman ranking test ranks IROAVOA and the other algorithms based on the fitness values obtained, using Equation (26), where k is the number of algorithms, Rj is the average rank of the j-th algorithm, and n is the number of test functions. The statistic approximately follows a chi-squared distribution with k − 1 degrees of freedom. The test first ranks each algorithm independently on every function and then averages the ranks to arrive at a final ranking for each algorithm. Table 7 shows the Friedman ranking test results. According to the results, IROAVOA is significantly different from the other algorithms on most benchmark test functions, and the overall ranking shows that IROAVOA is better than the other algorithms.
$$F_f = \frac{12n}{k(k+1)} \left[\sum_{j} R_j^2 - \frac{k(k+1)^2}{4}\right] \tag{26}$$
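For reference, the statistic of Equation (26) can be computed as in the short sketch below, given the average ranks of the k algorithms over n test functions.

```python
def friedman_statistic(avg_ranks, n):
    """Friedman statistic F_f of Equation (26); avg_ranks holds R_j for each algorithm."""
    k = len(avg_ranks)
    return 12 * n / (k * (k + 1)) * (sum(r ** 2 for r in avg_ranks) - k * (k + 1) ** 2 / 4)
```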
The Wilcoxon rank sum test is a non-parametric statistical method with a significance level set at 5%. If the p value is less than 0.05, the null hypothesis is rejected, indicating a statistically significant difference between IROAVOA and the comparison algorithm; if the p value is greater than 0.05, the difference is not significant. When the p value is NaN, the performance of IROAVOA and the comparison algorithm is equivalent on that function. Table 8 lists the Wilcoxon rank sum test results for each benchmark test function. To describe the results more clearly, (W/T/L) symbols in the last row of the table give the numbers of wins, ties, and losses for IROAVOA. According to the results, IROAVOA is better than AVOA on 15 benchmark test functions and better than GWO, PSO, MVO, WOA, MFO, TSA, and HHO on all 23 benchmark test functions. This confirms the significant superiority of the proposed improved algorithm.
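The per-function comparison can be reproduced with SciPy's rank sum test; the fitness samples below are placeholders for illustration, not the paper's recorded runs.

```python
import numpy as np
from scipy.stats import ranksums

iroavoa_runs = np.random.rand(30) * 1e-6   # hypothetical best fitness over 30 runs
avoa_runs = np.random.rand(30) * 1e-4      # hypothetical competitor results

stat, p = ranksums(iroavoa_runs, avoa_runs)
verdict = "significant difference" if p < 0.05 else "no significant difference"
print(f"p = {p:.3e}: {verdict}")
```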

5. Engineering Design Problems

5.1. Pressure Vessel Design Problem

The pressure vessel design problem aims to minimize the manufacturing cost of the vessel; Figure 8 shows the design. The four design variables are the shell thickness Ts (s3), the head thickness Th (s4), the inner radius R (s1), and the length of the cylindrical section L (s2, excluding the head), where Ts and Th are integer multiples of 0.0625 and R and L are continuous variables. The mathematical model is given in Equation (27).
IROAVOA was compared with AVOA and other optimization algorithms (GWO, WOA, HHO, and MVO). The population size and maximum number of iterations were set to 30 and 500, respectively, and each algorithm was run independently 30 times under the same experimental settings; the results are given in Table 9. As can be seen from Table 9, IROAVOA achieves a smaller manufacturing cost than the other algorithms. Therefore, IROAVOA can reach the optimal cost when solving such problems.
$$\begin{aligned} \min \; f(s) = {} & 0.6224 s_1 s_3 s_4 + 1.7781 s_2 s_3^2 + 3.1661 s_1^2 s_4 + 19.84 s_1^2 s_3 \\ \text{s.t.} \quad & g_1 = -s_1 + 0.0193 s_3 \le 0, \quad g_2 = -s_2 + 0.00954 s_3 \le 0, \\ & g_3 = -\pi s_3^2 s_4 - \frac{4}{3}\pi s_3^3 + 1{,}296{,}000 \le 0, \quad g_4 = s_4 - 240 \le 0, \\ & 0 \le s_1, s_2 \le 99, \quad 0 \le s_3, s_4 \le 200. \end{aligned} \tag{27}$$
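To run a bound-constrained metaheuristic on this problem, the constraints of Equation (27) are typically folded into the objective; the static penalty factor below is an illustrative assumption, not the paper's stated constraint handling.

```python
import numpy as np

def pressure_vessel_cost(s):
    """Penalized objective for the pressure vessel model of Equation (27)."""
    s1, s2, s3, s4 = s
    cost = (0.6224 * s1 * s3 * s4 + 1.7781 * s2 * s3 ** 2
            + 3.1661 * s1 ** 2 * s4 + 19.84 * s1 ** 2 * s3)
    g = [-s1 + 0.0193 * s3,                                            # g1
         -s2 + 0.00954 * s3,                                           # g2
         -np.pi * s3 ** 2 * s4 - 4 / 3 * np.pi * s3 ** 3 + 1_296_000,  # g3
         s4 - 240]                                                     # g4
    return cost + 1e6 * sum(max(0.0, gi) for gi in g)                  # static penalty (assumed)
```

A call such as iroavoa(pressure_vessel_cost, 30, 4, np.zeros(4), np.array([99, 99, 200, 200]), 500) would then search the box of Equation (27) directly; rounding the thickness variables to multiples of 0.0625 is left out of this sketch.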

5.2. Welded Beam Design Problem

The welded beam design problem aims to minimize the cost of the welded beam design; Figure 9 shows the design. The optimization is subject to constraints on the shear stress τ, the bending stress σ, the beam buckling load Pc, the end deflection δ, and the side constraints. The four design variables are the weld thickness h (s1), the length of the welded joint l (s2), the beam height t (s3), and the beam thickness b (s4); the goal is to minimize the manufacturing cost of the welded beam. The mathematical model is given in Formulas (28) and (29).
IROAVOA was compared with AVOA and other optimization algorithms (GWO, WOA, HHO, and MVO). The population size and maximum number of iterations were set to 30 and 500, respectively, and each algorithm was run independently 30 times. It can be seen from Table 10 that IROAVOA achieves better results than the other algorithms and can therefore be applied more effectively.
$$\begin{aligned} \min \; f(s) = {} & 1.10471 s_1^2 s_2 + 0.04811 s_3 s_4 (14.0 + s_2) \\ \text{s.t.} \quad & y_1(s) = \tau(s) - 13{,}600 \le 0, \quad y_2(s) = \sigma(s) - 30{,}000 \le 0, \\ & y_3(s) = \delta(s) - 0.25 \le 0, \quad y_4 = s_1 - s_4 \le 0, \\ & y_5 = P - P_c \le 0, \quad y_6 = 0.125 - s_1 \le 0, \\ & y_7 = 1.10471 s_1^2 + 0.04811 s_3 s_4 (14.0 + s_2) - 5.0 \le 0, \\ & 0.1 \le s_1, s_4 \le 2.0, \quad 0.1 \le s_2, s_3 \le 10.0. \end{aligned} \tag{28}$$
$$\begin{aligned} \tau &= \sqrt{\tau_1^2 + 2\tau_1\tau_2\frac{s_2}{2R} + \tau_2^2}, \quad \tau_1 = \frac{P}{\sqrt{2} s_1 s_2}, \quad \tau_2 = \frac{MR}{J}, \\ M &= P\left(L + \frac{s_2}{2}\right), \quad R = \sqrt{\frac{s_2^2}{4} + \left(\frac{s_1 + s_3}{2}\right)^2}, \\ J &= 2\left\{\sqrt{2} s_1 s_2\left[\frac{s_2^2}{12} + \left(\frac{s_1 + s_3}{2}\right)^2\right]\right\}, \\ \sigma &= \frac{6PL}{s_4 s_3^2}, \quad \delta = \frac{6PL^3}{E s_3^2 s_4}, \\ P_c &= \frac{4.013 E \sqrt{s_3^2 s_4^6/36}}{L^2}\left(1 - \frac{s_3}{2L}\sqrt{\frac{E}{4G}}\right), \\ G &= 12 \times 10^6 \text{ psi}, \quad E = 30 \times 10^6 \text{ psi}, \quad P = 6000 \text{ lb}, \quad L = 14 \text{ in}. \end{aligned} \tag{29}$$

6. Conclusions

In view of the shortcomings of AVOA, such as poor global search ability and a poor balance between exploration and exploitation, this paper introduced IROAVOA, an improved algorithm combining random opposition-based learning and a disturbance factor. Opposition-based learning generates opposite points for candidate solutions, and adding randomness removes the fixed distance between the generated opposite solution and the current solution, further expanding the search space of the algorithm. In the initialization stage, random opposition-based learning therefore increases the diversity and randomness of the initial African vulture population and promotes a wider search of the solution space, enhancing the algorithm's ability to probe a broader range of potential solutions. In the exploration stage, the perturbation operator helps the vultures avoid the dilemma of local search during navigation, improves the algorithm's ability to escape local optima, and balances the exploration and exploitation stages. To verify the effectiveness of IROAVOA, simulations were conducted on 23 benchmark test functions and the IEEE CEC2019 test suite, and the exploration and exploitation capabilities, as well as the convergence of the algorithm, were analyzed. The results show that IROAVOA outperforms traditional AVOA, two AVOA variants (IAVOA and ROAVOA), and seven other optimization algorithms. In the ablation experiments, the random opposition-based learning strategy and the perturbation factor effectively improved the exploitation of AVOA and its ability to balance exploration and exploitation. However, adding the disturbance factor increases the time complexity of the algorithm. In subsequent research, we will further reduce the time complexity of the algorithm and apply it to more practical engineering optimization problems.

Author Contributions

Conceptualization, Methodology, Software, Writing—Original Draft, J.H.; Visualization, Investigation, Writing—Reviewing, X.L.; Investigation, Validation, Funding Acquisition, T.W.; Supervision, C.L.; Funding Acquisition, Z.W.; Funding Acquisition, X.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jiangmen Science and Technology Commissioner’s scientific research cooperation project, grant number 2023760300180008278, the Jiangmen Science and Technology Plan Project under Grant 2022JC01021, and, in part, by the domestic development and industrialization of water quality online monitoring instruments and core accessories, grant number 2022ZAZX3034.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Results of unimodal benchmark functions.

| Benchmark Function | Index | AVOA | ROAVOA | IAVOA | IROAVOA |
|---|---|---|---|---|---|
| F1 | Best | 6.45 × 10^−292 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | Mean | 2.18 × 10^−293 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F2 | Best | 5.49 × 10^−145 | 5.04 × 10^−173 | 1.37 × 10^−162 | 2.86 × 10^−192 |
| | Mean | 1.86 × 10^−146 | 8.40 × 10^−175 | 4.55 × 10^−164 | 4.76 × 10^−194 |
| | Std | 1.02 × 10^−145 | 0.00 × 10^0 | 2.33 × 10^−163 | 0.00 × 10^0 |
| F3 | Best | 1.78 × 10^−222 | 1.06 × 10^−280 | 1.08 × 10^−240 | 5.74 × 10^−284 |
| | Mean | 6.02 × 10^−224 | 1.79 × 10^−282 | 3.84 × 10^−242 | 9.85 × 10^−286 |
| | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F4 | Best | 5.19 × 10^−149 | 5.32 × 10^−166 | 4.74 × 10^−163 | 7.36 × 10^−193 |
| | Mean | 1.75 × 10^−150 | 8.86 × 10^−168 | 1.59 × 10^−164 | 1.23 × 10^−194 |
| | Std | 9.60 × 10^−150 | 0.00 × 10^0 | 7.41 × 10^−164 | 0.00 × 10^0 |
| F5 | Best | 5.71 × 10^−5 | 1.06 × 10^−5 | 2.11 × 10^−5 | 3.49 × 10^−6 |
| | Mean | 3.46 × 10^6 | 2.89 × 10^6 | 1.35 × 10^7 | 1.19 × 10^7 |
| | Std | 1.90 × 10^7 | 2.24 × 10^7 | 7.39 × 10^7 | 9.21 × 10^7 |
| F6 | Best | 1.73 × 10^−3 | 8.48 × 10^−9 | 1.70 × 10^−7 | 5.36 × 10^−9 |
| | Mean | 5.98 × 10^−5 | 3.89 × 10^−7 | 9.64 × 10^−7 | 7.69 × 10^−7 |
| | Std | 3.27 × 10^−4 | 3.01 × 10^−6 | 5.28 × 10^−6 | 5.95 × 10^−6 |
| F7 | Best | 1.36 × 10^−4 | 1.06 × 10^−4 | 1.35 × 10^−4 | 6.17 × 10^−5 |
| | Mean | 1.76 × 10^−2 | 8.45 × 10^−3 | 1.87 × 10^−2 | 8.14 × 10^−3 |
| | Std | 9.64 × 10^−2 | 6.55 × 10^−2 | 1.03 × 10^−1 | 6.31 × 10^−2 |
Table A2. Results of multimodal benchmark functions.

| Benchmark Function | Index | AVOA | ROAVOA | IAVOA | IROAVOA |
|---|---|---|---|---|---|
| F8 | Best | −1.23 × 10^4 | −1.25 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 |
| | Mean | −4.10 × 10^2 | −2.09 × 10^2 | −4.19 × 10^2 | −2.09 × 10^2 |
| | Std | 2.25 × 10^3 | 1.62 × 10^3 | 2.29 × 10^3 | 1.62 × 10^3 |
| F9 | Best | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | Mean | 3.79 × 10^−16 | 1.26 × 10^−16 | 3.16 × 10^−16 | 1.58 × 10^−16 |
| | Std | 2.08 × 10^−15 | 9.78 × 10^−16 | 1.73 × 10^−15 | 1.22 × 10^−15 |
| F10 | Best | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 |
| | Mean | 1.01 × 10^−16 | 3.85 × 10^−17 | 1.01 × 10^−16 | 3.85 × 10^−17 |
| | Std | 5.51 × 10^−16 | 2.98 × 10^−16 | 5.51 × 10^−16 | 2.98 × 10^−16 |
| F11 | Best | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | Mean | 6.17 × 10^−19 | 4.32 × 10^−19 | 4.93 × 10^−19 | 2.47 × 10^−19 |
| | Std | 3.38 × 10^−18 | 3.34 × 10^−18 | 2.70 × 10^−18 | 1.91 × 10^−18 |
| F12 | Best | 3.01 × 10^−8 | 6.48 × 10^−10 | 8.90 × 10^−9 | 2.31 × 10^−10 |
| | Mean | 1.93 × 10^−7 | 1.26 × 10^−8 | 2.07 × 10^−7 | 2.78 × 10^−8 |
| | Std | 1.06 × 10^−6 | 9.78 × 10^−8 | 1.13 × 10^−6 | 2.15 × 10^−7 |
| F13 | Best | 7.48 × 10^−8 | 5.85 × 10^−9 | 3.51 × 10^−8 | 2.69 × 10^−9 |
| | Mean | 6.28 × 10^7 | 2.65 × 10^7 | 5.69 × 10^7 | 2.17 × 10^7 |
| | Std | 3.44 × 10^8 | 2.05 × 10^8 | 3.12 × 10^8 | 1.68 × 10^8 |
Table A3. Results of fixed-dimension multimodal benchmark functions.

| Benchmark Function | Index | AVOA | ROAVOA | IAVOA | IROAVOA |
|---|---|---|---|---|---|
| F14 | Best | 1.10 × 10^0 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 |
| | Mean | 3.66 × 10^−2 | 1.66 × 10^−2 | 3.33 × 10^−2 | 1.66 × 10^−2 |
| | Std | 2.00 × 10^−1 | 1.29 × 10^−1 | 1.82 × 10^−1 | 1.29 × 10^−1 |
| F15 | Best | 4.42 × 10^−4 | 3.58 × 10^−4 | 3.84 × 10^−4 | 3.13 × 10^−4 |
| | Mean | 1.48 × 10^−5 | 6.10 × 10^−6 | 1.29 × 10^−5 | 5.37 × 10^−6 |
| | Std | 8.11 × 10^−5 | 4.73 × 10^−5 | 7.07 × 10^−5 | 4.16 × 10^−5 |
| F16 | Best | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 |
| | Mean | −3.44 × 10^−2 | −1.72 × 10^−2 | −3.44 × 10^−2 | −1.72 × 10^−2 |
| | Std | 1.88 × 10^−1 | 1.33 × 10^−1 | 1.88 × 10^−1 | 1.33 × 10^−1 |
| F17 | Best | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 |
| | Mean | 1.33 × 10^−2 | 6.63 × 10^−3 | 1.33 × 10^−2 | 6.63 × 10^−3 |
| | Std | 7.27 × 10^−2 | 5.14 × 10^−2 | 7.27 × 10^−2 | 5.14 × 10^−2 |
| F18 | Best | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 |
| | Mean | 1.00 × 10^−1 | 5.00 × 10^−2 | 1.00 × 10^−1 | 5.00 × 10^−2 |
| | Std | 5.48 × 10^−1 | 3.88 × 10^−1 | 5.48 × 10^−1 | 3.87 × 10^−1 |
| F19 | Best | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 |
| | Mean | −1.27 × 10^−1 | −6.36 × 10^−2 | −1.29 × 10^−1 | −6.34 × 10^−2 |
| | Std | 6.96 × 10^−1 | 4.93 × 10^−1 | 7.05 × 10^−1 | 4.91 × 10^−1 |
| F20 | Best | −3.29 × 10^0 | −3.28 × 10^0 | −3.28 × 10^0 | −3.27 × 10^0 |
| | Mean | −1.09 × 10^−1 | −5.28 × 10^−2 | −1.09 × 10^−1 | −5.45 × 10^−2 |
| | Std | 5.99 × 10^−1 | 4.09 × 10^−1 | 5.98 × 10^−1 | 4.22 × 10^−1 |
| F21 | Best | −1.02 × 10^1 | −1.02 × 10^1 | −1.02 × 10^1 | −1.02 × 10^1 |
| | Mean | −3.38 × 10^−1 | −1.69 × 10^−1 | −3.38 × 10^−1 | −1.69 × 10^−1 |
| | Std | 1.85 × 10^0 | 1.31 × 10^0 | 1.85 × 10^0 | 1.31 × 10^0 |
| F22 | Best | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 |
| | Mean | −3.47 × 10^−1 | −1.73 × 10^−1 | −3.47 × 10^−1 | −1.73 × 10^−1 |
| | Std | 1.90 × 10^0 | 1.34 × 10^0 | 1.90 × 10^0 | 1.34 × 10^0 |
| F23 | Best | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 |
| | Mean | −3.51 × 10^−1 | −1.76 × 10^−1 | −3.51 × 10^−1 | −1.75 × 10^−1 |
| | Std | 1.92 × 10^0 | 1.36 × 10^0 | 1.92 × 10^0 | 1.36 × 10^0 |
Table A4. Unimodal benchmark function results of IROAVOA and other algorithms.

| Function | Index | IROAVOA | GWO | PSO | MVO | WOA | MFO | TSA | HHO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | Best | 0.00 × 10^0 | 2.17 × 10^−27 | 3.79 × 10^2 | 1.34 × 10^0 | 1.70 × 10^−73 | 2.01 × 10^3 | 2.07 × 10^−21 | 6.35 × 10^−95 |
| | Mean | 0.00 × 10^0 | 2.17 × 10^−27 | 1.26 × 10^3 | 1.44 × 10^0 | 1.87 × 10^−73 | 2.01 × 10^3 | 2.29 × 10^−21 | 2.12 × 10^−96 |
| | Std | 0.00 × 10^0 | 6.03 × 10^−31 | 3.01 × 10^2 | 4.48 × 10^−2 | 6.07 × 10^−75 | 3.29 × 10^0 | 1.09 × 10^−22 | 1.16 × 10^−95 |
| F2 | Best | 2.29 × 10^−188 | 1.15 × 10^−16 | 8.62 × 10^0 | 4.56 × 10^0 | 4.93 × 10^−51 | 3.68 × 10^1 | 1.27 × 10^−13 | 1.90 × 10^−50 |
| | Mean | 3.81 × 10^−190 | 1.15 × 10^−16 | 1.59 × 10^1 | 4.58 × 10^0 | 5.05 × 10^−51 | 3.68 × 10^1 | 1.37 × 10^−13 | 6.35 × 10^−52 |
| | Std | 0.00 × 10^0 | 1.69 × 10^−20 | 2.10 × 10^0 | 1.00 × 10^−2 | 4.48 × 10^−53 | 2.69 × 10^−2 | 4.55 × 10^−15 | 3.48 × 10^−51 |
| F3 | Best | 6.02 × 10^−293 | 1.39 × 10^−5 | 1.31 × 10^3 | 2.23 × 10^2 | 4.18 × 10^4 | 2.06 × 10^4 | 1.45 × 10^−4 | 2.34 × 10^−74 |
| | Mean | 1.03 × 10^−294 | 1.39 × 10^−5 | 1.71 × 10^4 | 2.24 × 10^2 | 4.19 × 10^4 | 2.28 × 10^4 | 4.12 × 10^−4 | 7.82 × 10^−76 |
| | Std | 0.00 × 10^0 | 4.75 × 10^−9 | 1.74 × 10^4 | 5.11 × 10^−1 | 2.06 × 10^2 | 4.29 × 10^3 | 3.13 × 10^−4 | 4.28 × 10^−75 |
| F4 | Best | 7.98 × 10^−183 | 9.21 × 10^−7 | 8.22 × 10^0 | 2.11 × 10^0 | 5.10 × 10^1 | 7.00 × 10^1 | 3.71 × 10^−1 | 1.70 × 10^−48 |
| | Mean | 1.33 × 10^−184 | 9.22 × 10^−7 | 1.56 × 10^1 | 2.14 × 10^0 | 5.11 × 10^1 | 8.41 × 10^1 | 4.68 × 10^−1 | 5.67 × 10^−50 |
| | Std | 0.00 × 10^0 | 5.33 × 10^−10 | 2.28 × 10^0 | 1.32 × 10^−2 | 1.05 × 10^−1 | 1.22 × 10^1 | 5.37 × 10^−2 | 3.11 × 10^−49 |
| F5 | Best | 2.96 × 10^−6 | 2.68 × 10^1 | 1.41 × 10^4 | 3.96 × 10^2 | 2.81 × 10^1 | 1.63 × 10^4 | 2.87 × 10^1 | 1.28 × 10^−2 |
| | Mean | 8.36 × 10^6 | 2.68 × 10^1 | 1.29 × 10^5 | 3.97 × 10^2 | 2.81 × 10^1 | 3.43 × 10^4 | 3.03 × 10^1 | 4.28 × 10^−4 |
| | Std | 6.47 × 10^7 | 1.78 × 10^−4 | 5.57 × 10^4 | 6.94 × 10^−1 | 7.88 × 10^−5 | 8.64 × 10^4 | 1.87 × 10^0 | 2.34 × 10^−3 |
| F6 | Best | 3.63 × 10^−9 | 8.19 × 10^−1 | 3.98 × 10^2 | 1.25 × 10^0 | 4.12 × 10^−1 | 1.67 × 10^3 | 3.86 × 10^0 | 9.02 × 10^−5 |
| | Mean | 4.51 × 10^−7 | 8.19 × 10^−1 | 1.26 × 10^3 | 1.34 × 10^0 | 4.12 × 10^−1 | 1.67 × 10^3 | 3.97 × 10^0 | 3.01 × 10^−6 |
| | Std | 3.49 × 10^−6 | 2.09 × 10^−5 | 3.08 × 10^2 | 4.49 × 10^−2 | 1.53 × 10^−4 | 6.33 × 10^0 | 6.39 × 10^−2 | 1.65 × 10^−5 |
| F7 | Best | 6.72 × 10^−5 | 1.79 × 10^−3 | 3.83 × 10^−2 | 3.62 × 10^−2 | 3.43 × 10^−3 | 3.89 × 10^0 | 1.01 × 10^−2 | 1.33 × 10^−4 |
| | Mean | 8.64 × 10^−3 | 5.09 × 10^−1 | 6.29 × 10^−1 | 5.30 × 10^−1 | 5.06 × 10^−1 | 4.92 × 10^0 | 5.01 × 10^−1 | 4.43 × 10^−6 |
| | Std | 6.69 × 10^−2 | 2.88 × 10^−1 | 2.88 × 10^−1 | 2.82 × 10^−1 | 2.87 × 10^−1 | 1.93 × 10^0 | 2.85 × 10^−1 | 2.43 × 10^−5 |
Table A5. Multimodal benchmark function results of IROAVOA and other algorithms.

| Function | Index | IROAVOA | GWO | PSO | MVO | WOA | MFO | TSA | HHO |
|---|---|---|---|---|---|---|---|---|---|
| F8 | Best | −1.26 × 10^4 | −5.98 × 10^3 | −5.35 × 10^3 | −7.82 × 10^3 | −9.90 × 10^3 | −8.72 × 10^3 | −5.64 × 10^3 | −1.26 × 10^4 |
| | Mean | −2.09 × 10^2 | −5.88 × 10^3 | −3.45 × 10^3 | −7.82 × 10^3 | −9.90 × 10^3 | −8.72 × 10^3 | −2.78 × 10^3 | −4.19 × 10^2 |
| | Std | 1.62 × 10^3 | 1.68 × 10^0 | 1.00 × 10^3 | 8.29 × 10^−1 | 1.06 × 10^1 | 8.78 × 10^−2 | 7.47 × 10^2 | 2.29 × 10^3 |
| F9 | Best | 0.00 × 10^0 | 3.17 × 10^0 | 1.62 × 10^2 | 1.15 × 10^2 | 7.58 × 10^−15 | 1.58 × 10^2 | 1.87 × 10^2 | 0.00 × 10^0 |
| | Mean | 1.58 × 10^−16 | 3.17 × 10^0 | 3.17 × 10^2 | 1.15 × 10^2 | 8.91 × 10^−15 | 1.58 × 10^2 | 3.03 × 10^2 | 0.00 × 10^0 |
| | Std | 1.22 × 10^−15 | 5.95 × 10^−4 | 4.08 × 10^1 | 2.13 × 10^−2 | 3.22 × 10^−15 | 5.71 × 10^−2 | 3.10 × 10^1 | 0.00 × 10^0 |
| F10 | Best | 8.88 × 10^−16 | 1.03 × 10^−13 | 5.87 × 10^0 | 1.85 × 10^0 | 4.32 × 10^−15 | 1.39 × 10^1 | 2.30 × 10^0 | 8.88 × 10^−16 |
| | Mean | 4.24 × 10^−17 | 1.07 × 10^−13 | 8.46 × 10^0 | 1.87 × 10^0 | 4.32 × 10^−15 | 1.39 × 10^1 | 2.57 × 10^0 | 2.96 × 10^−17 |
| | Std | 3.29 × 10^−16 | 0.00 × 10^0 | 6.75 × 10^−1 | 6.92 × 10^−3 | 0.00 × 10^0 | 3.88 × 10^−2 | 7.48 × 10^−2 | 1.62 × 10^−16 |
| F11 | Best | 0.00 × 10^0 | 3.09 × 10^−3 | 4.59 × 10^0 | 8.55 × 10^−1 | 0.00 × 10^0 | 2.20 × 10^1 | 8.47 × 10^−3 | 0.00 × 10^0 |
| | Mean | 4.32 × 10^−19 | 3.09 × 10^−3 | 1.24 × 10^1 | 8.81 × 10^−1 | 1.48 × 10^−18 | 2.22 × 10^1 | 1.51 × 10^−1 | 0.00 × 10^0 |
| | Std | 3.34 × 10^−18 | 1.54 × 10^−6 | 2.62 × 10^0 | 1.18 × 10^−2 | 5.23 × 10^−18 | 2.21 × 10^−1 | 1.26 × 10^−1 | 0.00 × 10^0 |
| F12 | Best | 2.42 × 10^−10 | 5.05 × 10^−2 | 3.88 × 10^0 | 2.19 × 10^0 | 2.79 × 10^−2 | 9.79 × 10^0 | 8.08 × 10^0 | 5.81 × 10^−6 |
| | Mean | 5.14 × 10^−8 | 5.05 × 10^−2 | 6.31 × 10^2 | 2.20 × 10^0 | 2.79 × 10^−2 | 6.91 × 10^5 | 1.61 × 10^3 | 1.94 × 10^−7 |
| | Std | 3.98 × 10^−7 | 2.74 × 10^−6 | 2.59 × 10^3 | 3.71 × 10^−3 | 2.01 × 10^−5 | 3.45 × 10^6 | 4.86 × 10^3 | 1.06 × 10^−6 |
| F13 | Best | 3.02 × 10^−9 | 6.37 × 10^−1 | 1.70 × 10^1 | 1.76 × 10^−1 | 5.50 × 10^−1 | 4.62 × 10^3 | 3.17 × 10^0 | 8.71 × 10^−5 |
| | Mean | 1.55 × 10^7 | 6.37 × 10^−1 | 1.44 × 10^4 | 1.84 × 10^−1 | 5.50 × 10^−1 | 1.91 × 10^6 | 3.79 × 10^0 | 2.90 × 10^−6 |
| | Std | 1.20 × 10^8 | 5.20 × 10^−5 | 2.41 × 10^4 | 3.91 × 10^−3 | 1.37 × 10^−3 | 8.59 × 10^6 | 2.13 × 10^−1 | 1.59 × 10^−5 |
Table A6. Fixed-dimension multimodal benchmark function results of IROAVOA and other algorithms.

| Function | Index | IROAVOA | GWO | PSO | MVO | WOA | MFO | TSA | HHO |
|---|---|---|---|---|---|---|---|---|---|
| F14 | Best | 9.98 × 10^−1 | 5.24 × 10^0 | 3.13 × 10^0 | 9.98 × 10^−1 | 2.73 × 10^0 | 2.19 × 10^0 | 8.43 × 10^0 | 1.62 × 10^0 |
| | Mean | 1.66 × 10^−2 | 5.24 × 10^0 | 3.55 × 10^2 | 9.98 × 10^−1 | 2.86 × 10^0 | 2.19 × 10^0 | 1.46 × 10^2 | 5.41 × 10^−2 |
| | Std | 1.29 × 10^−1 | 1.46 × 10^−7 | 1.77 × 10^2 | 2.29 × 10^−9 | 7.04 × 10^−1 | 8.28 × 10^−16 | 1.25 × 10^2 | 2.96 × 10^−1 |
| F15 | Best | 3.49 × 10^−4 | 3.77 × 10^−3 | 1.37 × 10^−3 | 8.11 × 10^−3 | 6.27 × 10^−4 | 1.83 × 10^−3 | 7.08 × 10^−3 | 3.89 × 10^−4 |
| | Mean | 5.87 × 10^−6 | 3.77 × 10^−3 | 4.44 × 10^2 | 8.12 × 10^−3 | 6.29 × 10^−4 | 1.83 × 10^−3 | 9.34 × 10^1 | 1.30 × 10^−5 |
| | Std | 4.54 × 10^−5 | 4.34 × 10^−6 | 2.39 × 10^3 | 1.21 × 10^−5 | 7.52 × 10^−6 | 7.75 × 10^−7 | 5.11 × 10^2 | 7.09 × 10^−5 |
| F16 | Best | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 |
| | Mean | −1.72 × 10^−2 | −1.03 × 10^0 | −9.27 × 10^−2 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −9.15 × 10^−1 | −3.44 × 10^−2 |
| | Std | 1.33 × 10^−1 | 3.10 × 10^−6 | 1.06 × 10^0 | 8.62 × 10^−6 | 1.01 × 10^−4 | 6.37 × 10^−16 | 1.91 × 10^−1 | 1.88 × 10^−1 |
| F17 | Best | 3.98 × 10^−1 | 3.98 × 10^−1 | 1.10 × 10^0 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 |
| | Mean | 6.63 × 10^−3 | 3.98 × 10^−1 | 2.14 × 10^1 | 3.98 × 10^−1 | 4.02 × 10^−1 | 3.98 × 10^−1 | 5.18 × 10^0 | 1.33 × 10^−2 |
| | Std | 5.14 × 10^−2 | 1.72 × 10^−4 | 4.48 × 10^1 | 5.54 × 10^−5 | 1.17 × 10^−2 | 4.27 × 10^−16 | 5.78 × 10^0 | 7.26 × 10^−2 |
| F18 | Best | 3.00 × 10^0 | 3.00 × 10^0 | 3.90 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 1.50 × 10^1 | 3.00 × 10^0 |
| | Mean | 5.00 × 10^−2 | 3.00 × 10^0 | 1.83 × 10^1 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 1.81 × 10^2 | 1.00 × 10^−1 |
| | Std | 3.87 × 10^−1 | 3.57 × 10^−4 | 2.02 × 10^1 | 8.06 × 10^−5 | 6.34 × 10^−3 | 6.52 × 10^−15 | 5.52 × 10^2 | 5.48 × 10^−1 |
| F19 | Best | −3.86 × 10^0 | −3.86 × 10^0 | −3.71 × 10^0 | −3.86 × 10^0 | −3.85 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 |
| | Mean | −6.41 × 10^−2 | −3.86 × 10^0 | −2.29 × 10^0 | −3.86 × 10^0 | −3.85 × 10^0 | −3.86 × 10^0 | −3.05 × 10^0 | −1.29 × 10^−1 |
| | Std | 4.96 × 10^−1 | 5.97 × 10^−5 | 7.57 × 10^−1 | 4.12 × 10^−6 | 1.84 × 10^−3 | 2.68 × 10^−15 | 6.62 × 10^−1 | 7.05 × 10^−1 |
| F20 | Best | −3.27 × 10^0 | −3.25 × 10^0 | −1.96 × 10^0 | −3.27 × 10^0 | −3.19 × 10^0 | −3.23 × 10^0 | −3.22 × 10^0 | −3.10 × 10^0 |
| | Mean | −5.42 × 10^−2 | −3.25 × 10^0 | −3.12 × 10^−1 | −3.27 × 10^0 | −3.19 × 10^0 | −3.23 × 10^0 | −2.59 × 10^0 | −1.03 × 10^−1 |
| | Std | 4.20 × 10^−1 | 2.16 × 10^−5 | 4.51 × 10^−1 | 6.73 × 10^−6 | 3.83 × 10^−4 | 1.89 × 10^−15 | 4.24 × 10^−1 | 5.67 × 10^−1 |
| F21 | Best | −1.02 × 10^1 | −9.48 × 10^0 | −1.73 × 10^0 | −7.96 × 10^0 | −8.93 × 10^0 | −6.46 × 10^0 | −6.78 × 10^0 | −5.36 × 10^0 |
| | Mean | −1.69 × 10^−1 | −9.47 × 10^0 | −3.56 × 10^−1 | −7.96 × 10^0 | −8.88 × 10^0 | −6.46 × 10^0 | −1.52 × 10^0 | −1.79 × 10^−1 |
| | Std | 1.31 × 10^0 | 3.26 × 10^−3 | 1.53 × 10^−1 | 8.92 × 10^−4 | 1.55 × 10^−1 | 3.12 × 10^−15 | 6.74 × 10^−1 | 9.78 × 10^−1 |
| F22 | Best | −1.04 × 10^1 | −1.00 × 10^1 | −1.45 × 10^0 | −7.99 × 10^0 | −6.68 × 10^0 | −6.82 × 10^0 | −5.89 × 10^0 | −5.25 × 10^0 |
| | Mean | −1.73 × 10^−1 | −1.00 × 10^1 | −3.98 × 10^−1 | −7.99 × 10^0 | −6.66 × 10^0 | −6.82 × 10^0 | −1.50 × 10^0 | −1.75 × 10^−1 |
| | Std | 1.34 × 10^0 | 3.24 × 10^−3 | 1.91 × 10^−1 | 1.09 × 10^−3 | 4.12 × 10^−2 | 1.73 × 10^−15 | 7.71 × 10^−1 | 9.59 × 10^−1 |
| F23 | Best | −1.05 × 10^1 | −1.01 × 10^1 | −1.77 × 10^0 | −9.10 × 10^0 | −7.04 × 10^0 | −7.21 × 10^0 | −6.67 × 10^0 | −5.12 × 10^0 |
| | Mean | −1.75 × 10^−1 | −1.01 × 10^1 | −4.89 × 10^−1 | −9.10 × 10^0 | −7.02 × 10^0 | −7.21 × 10^0 | −1.81 × 10^0 | −1.71 × 10^−1 |
| | Std | 1.36 × 10^0 | 3.00 × 10^−3 | 2.17 × 10^−1 | 1.14 × 10^−3 | 5.03 × 10^−2 | 1.93 × 10^−15 | 8.83 × 10^−1 | 9.35 × 10^−1 |

References

1. Kumar, V.; Chhabra, J.K.; Kumar, D. Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems. J. Comput. Sci. 2014, 5, 144–155.
2. Xiao, Y.; Guo, Y.; Cui, H.; Wang, Y.; Li, J.; Zhang, Y. IHAOAVOA: An improved hybrid aquila optimizer and African vultures optimization algorithm for global optimization problems. Math. Biosci. Eng. 2022, 19, 10963–11017.
3. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389.
4. Pršić, D.; Nedić, N.; Stojanović, V. A nature inspired optimal control of pneumatic-driven parallel robot platform. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2017, 231, 59–71.
5. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
6. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31.
7. Columbu, A.; Fuentes, R.D.; Frassu, S. Uniform-in-time boundedness in a class of local and nonlocal nonlinear attraction–repulsion chemotaxis models with logistics. Nonlinear Anal. Real World Appl. 2024, 79, 104135.
8. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
9. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
12. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
13. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
14. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
15. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
16. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
17. Salah, B.; Hasanien, H.M.; Ghali, F.M.A.; Alsayed, Y.M.; Abdel Aleem, S.H.E.; El-Shahat, A. African Vulture Optimization-Based Optimal Control Strategy for Voltage Control of Islanded DC Microgrids. Sustainability 2022, 14, 11800.
18. Bagal, H.A.; Soltanabad, Y.N.; Dadjuo, M.; Wakil, K.; Zare, M.; Mohammed, A.S. SOFC model parameter identification by means of Modified African Vulture Optimization algorithm. Energy Rep. 2021, 7, 7251–7260.
19. He, Z.; Tang, B.; Luan, F. An improved African vulture optimization algorithm for dual-resource constrained multi-objective flexible job shop scheduling problems. Sensors 2022, 23, 90.
20. Vashishtha, G.; Chauhan, S.; Kumar, A.; Kumar, R. An ameliorated African vulture optimization algorithm to diagnose the rolling bearing defects. Meas. Sci. Technol. 2022, 33, 075013.
21. Singh, N.; Houssein, E.H.; Mirjalili, S.; Cao, Y.; Selvachandran, G. An efficient improved African vultures optimization algorithm with dimension learning hunting for traveling salesman and large-scale optimization applications. Int. J. Intell. Syst. 2022, 37, 12367–12421.
22. Diab, A.A.Z.; Tolba, M.A.; El-Rifaie, A.M.; Denis, K.A. Photovoltaic parameter estimation using honey badger algorithm and African vulture optimization algorithm. Energy Rep. 2022, 8, 384–393.
23. Zheng, R.; Hussien, A.G.; Qaddoura, R.; Jia, H.; Abualigah, L.; Wang, S.; Saber, A. A multi-strategy enhanced African vultures optimization algorithm for global optimization problems. J. Comput. Des. Eng. 2023, 10, 329–356.
24. Liu, R.; Wang, T.; Zhou, J.; Hao, X.; Xu, Y.; Qiu, J. Improved African vulture optimization algorithm based on quasi-oppositional differential evolution operator. IEEE Access 2022, 10, 95197–95218.
25. Fan, J.; Li, Y.; Wang, T. An improved African vultures optimization algorithm based on tent chaotic mapping and time-varying mechanism. PLoS ONE 2021, 16, e0260725.
26. Toledo, S.; Shohami, D.; Schiffner, I.; Lourie, E.; Orchan, Y.; Bartan, Y.; Nathan, R. Cognitive map–based navigation in wild bats revealed by a new high-throughput tracking system. Science 2020, 369, 188–193.
27. Pearson, K. VII. Mathematical contributions to the theory of evolution.—III. Regression, heredity, and panmixia. Philos. Trans. R. Soc. London. Ser. A Contain. Pap. A Math. Or Phys. Character 1896, 187, 253–318.
28. Brownlee, J. XIV.—The Mathematical Theory of Random Migration and Epidemic Distribution. Proc. R. Soc. Edinb. 1912, 31, 262–289.
29. Meyer, P.G.; Cherstvy, A.G.; Seckler, H.; Hering, R.; Blaum, N.; Jeltsch, F.; Metzler, R. Directedeness, correlations, and daily cycles in springbok motion: From data via stochastic models to movement prediction. Phys. Rev. Res. 2023, 5, 043129.
30. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
31. Mahdavi, S.; Rahnamayan, S.; Deb, K. Opposition based learning: A literature review. Swarm Evol. Comput. 2018, 39, 1–23.
32. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A random opposition-based learning grey wolf optimizer. IEEE Access 2019, 7, 113810–113825.
33. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
34. Price, K.V.; Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2018.
35. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064.
36. Theodorsson-Norheim, E. Friedman and Quade tests: BASIC computer program to perform nonparametric two-way analysis of variance and multiple comparisons on ranks of several related samples. Comput. Biol. Med. 1987, 17, 85–99.
Figure 1. African vulture optimization algorithm flowchart.
Figure 2. w change trend chart.
Figure 3. ROBL one-dimensional solution space.
Figure 4. Improved African vulture optimization algorithm flowchart.
Figure 5. Convergence curve of unimodal optimization functions.
Figure 6. Convergence curve of multimodal optimization functions.
Figure 7. Convergence curves of 23 sets of benchmark test functions between the IROAVOA and other comparative optimization algorithms.
Figure 8. Pressure vessel design problem.
Figure 9. Welded beam design problem.
Table 1. Unimodal benchmark functions.

| Number | Name | Benchmark | Dim | Range | $f_{\min}$ |
|---|---|---|---|---|---|
| F1 | Sphere | $F_1 = \sum_{i=1}^{d} x_i^2$ | 30 | [−100, 100] | 0 |
| F2 | Schwefel's problem 2.22 | $F_2 = \sum_{i=1}^{d} \lvert x_i \rvert + \prod_{i=1}^{d} \lvert x_i \rvert$ | 30 | [−10, 10] | 0 |
| F3 | Schwefel's problem 1.2 | $F_3 = \sum_{i=1}^{d} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0 |
| F4 | Schwefel's problem 2.21 | $F_4 = \max_i \{ \lvert x_i \rvert,\ 1 \le i \le d \}$ | 30 | [−100, 100] | 0 |
| F5 | Rosenbrock | $F_5 = \sum_{i=1}^{d-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0 |
| F6 | Step | $F_6 = \sum_{i=1}^{d} \left( \lfloor x_i + 0.5 \rfloor \right)^2$ | 30 | [−100, 100] | 0 |
| F7 | Noise | $F_7 = \sum_{i=1}^{d} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0 |
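To make the table concrete, the following minimal Python sketch (our illustration, not the authors' implementation) shows how several of the unimodal benchmarks translate into vectorized NumPy code; `x` is assumed to be a d-dimensional array sampled from the listed range.

```python
# Minimal sketch of selected unimodal benchmarks from Table 1
# (our illustration, not the paper's code); x is a NumPy array.
import numpy as np

def sphere(x):                       # F1: sum of squares
    return np.sum(x ** 2)

def schwefel_2_22(x):                # F2: sum plus product of |x_i|
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def rosenbrock(x):                   # F5: narrow curved valley
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def step(x):                         # F6: plateaus from the floor operator
    return np.sum(np.floor(x + 0.5) ** 2)

# All four reach f_min = 0 (F5 at x = 1, the others at x = 0):
print(sphere(np.zeros(30)), rosenbrock(np.ones(30)), step(np.zeros(30)))
```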
Table 2. Multimodal benchmark functions.

| Number | Name | Benchmark | Dim | Range | $f_{\min}$ |
|---|---|---|---|---|---|
| F8 | Generalized Schwefel's problem | $F_8 = \sum_{i=1}^{d} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | 30 | [−500, 500] | −12,569.5 |
| F9 | Rastrigin | $F_9 = 10d + \sum_{i=1}^{d} \left[ x_i^2 - 10 \cos(2\pi x_i) \right]$ | 30 | [−5.12, 5.12] | 0 |
| F10 | Ackley | $F_{10} = -20 \exp\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0 |
| F11 | Griewank | $F_{11} = \tfrac{1}{4000}\sum_{i=1}^{d} x_i^2 - \prod_{i=1}^{d} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0 |
| F12 | Generalized penalized function 1 | $F_{12} = \tfrac{\pi}{d}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{d-1} (y_i - 1)^2 \left[ 1 + 10\sin^2(\pi y_{i+1}) \right] + (y_d - 1)^2 \right\} + \sum_{i=1}^{d} \mu(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $\mu(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0 |
| F13 | Generalized penalized function 2 | $F_{13} = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{d-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_d - 1)^2 \left[ 1 + \sin^2(2\pi x_d) \right] \right\} + \sum_{i=1}^{d} \mu(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0 |
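A similarly hedged sketch of three of the multimodal benchmarks follows; the defining feature here is the large number of local minima, which the cosine terms in the table's formulas encode.

```python
# Minimal sketch of selected multimodal benchmarks from Table 2
# (our illustration, not the paper's code); x is a NumPy array.
import numpy as np

def rastrigin(x):                    # F9
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def ackley(x):                       # F10
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def griewank(x):                     # F11
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

# Each attains its global minimum of 0 at the origin (up to rounding):
z = np.zeros(30)
print(rastrigin(z), ackley(z), griewank(z))
```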
Table 3. Fixed-dimension multimodal benchmark functions.

| Number | Name | Benchmark | Dim | Range | $f_{\min}$ |
|---|---|---|---|---|---|
| F14 | Shekel's foxholes function | $F_{14} = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65.536, 65.536] | 1 |
| F15 | Kowalik's function | $F_{15} = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.0003 |
| F16 | Six-hump camel-back | $F_{16} = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| F17 | Branin | $F_{17} = \left( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \right)^2 + 10\left( 1 - \tfrac{1}{8\pi} \right)\cos(x_1) + 10$ | 2 | $x_1 \in$ [−5, 10], $x_2 \in$ [0, 15] | 0.398 |
| F18 | Goldstein–Price function | $F_{18} = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]$ | 2 | [−2, 2] | 3 |
| F19 | Hartmann 1 | $F_{19} = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [0, 1] | −3.86 |
| F20 | Hartmann 2 | $F_{20} = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32 |
| F21 | Shekel 1 | $F_{21} = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.1532 |
| F22 | Shekel 2 | $F_{22} = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.4028 |
| F23 | Shekel 3 | $F_{23} = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363 |
Table 4. Descriptions of the CEC2019 test suite.

| Number | Name | D | Range | $F_{\min}$ |
|---|---|---|---|---|
| CEC-01 | Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1 |
| CEC-02 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1 |
| CEC-03 | Lennard–Jones Minimum Energy Cluster | 18 | [−4, 4] | 1 |
| CEC-04 | Rastrigin's Function | 10 | [−100, 100] | 1 |
| CEC-05 | Griewangk's Function | 10 | [−100, 100] | 1 |
| CEC-06 | Weierstrass Function | 10 | [−100, 100] | 1 |
| CEC-07 | Modified Schwefel's Function | 10 | [−100, 100] | 1 |
| CEC-08 | Expanded Schaffer's F6 Function | 10 | [−100, 100] | 1 |
| CEC-09 | Happy Cat Function | 10 | [−100, 100] | 1 |
| CEC-10 | Ackley Function | 10 | [−100, 100] | 1 |
Table 5. Algorithm parameter settings.

| Algorithm | Parameter | Value | Algorithm | Parameter | Value |
|---|---|---|---|---|---|
| IROAVOA | $L_1$ | 0.8 | WOA | $a$ | [0, 2] |
| | $L_2$ | 0.2 | | $b$ | 2 |
| | $w$ | 2.5 | MFO | $a$ | [−2, −1] |
| | $P_1$ | 0.6 | | $b$ | 1 |
| | $P_2$ | 0.4 | TSA | $P_{\min}$ | 1 |
| | $P_3$ | 0.6 | | $P_{\max}$ | 4 |
| GWO | $\alpha$ | [0, 2] | HHO | $E_1$ | [0, 2] |
| PSO | $C_1$ | 2 | | $E_0$ | [−1, 1] |
| | $C_2$ | 2 | MVO | $WEP_{\max}$ | 1 |
| | $w_{\min}$ | 0.2 | | $WEP_{\min}$ | 0.2 |
| | $w_{\max}$ | 0.9 | | | |
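Read as an experiment configuration, Table 5 can be captured in a small data structure. The sketch below is our hedged reconstruction: the names mirror the table, tuples stand for the intervals over which a parameter varies during a run, and any interpretation beyond the printed values is an assumption.

```python
# Hedged reconstruction of Table 5 as plain data (our illustration);
# tuples denote parameter ranges, scalars denote fixed settings.
ALGORITHM_PARAMS = {
    "IROAVOA": {"L1": 0.8, "L2": 0.2, "w": 2.5, "P1": 0.6, "P2": 0.4, "P3": 0.6},
    "GWO": {"alpha": (0, 2)},
    "PSO": {"C1": 2, "C2": 2, "w_min": 0.2, "w_max": 0.9},
    "MVO": {"WEP_max": 1, "WEP_min": 0.2},
    "WOA": {"a": (0, 2), "b": 2},
    "MFO": {"a": (-2, -1), "b": 1},
    "TSA": {"P_min": 1, "P_max": 4},
    "HHO": {"E1": (0, 2), "E0": (-1, 1)},
}
```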
Table 6. CEC2019 experimental results of IROAVOA and the other algorithms.

| Function | Index | IROAVOA | GWO | PSO | MVO | WOA | MFO | TSA | HHO |
|---|---|---|---|---|---|---|---|---|---|
| F1 | Best | 1.00 × 10^0 | 5.11 × 10^1 | 2.58 × 10^10 | 9.20 × 10^3 | 2.19 × 10^3 | 9.11 × 10^4 | 2.72 × 10^1 | 1.00 × 10^0 |
| | Mean | 1.67 × 10^−2 | 1.20 × 10^3 | 7.09 × 10^10 | 1.27 × 10^4 | 3.65 × 10^3 | 9.14 × 10^4 | 9.18 × 10^2 | 3.33 × 10^−2 |
| | Std | 1.29 × 10^−1 | 1.64 × 10^3 | 2.76 × 10^10 | 2.60 × 10^3 | 1.40 × 10^3 | 4.69 × 10^2 | 1.06 × 10^3 | 1.83 × 10^−1 |
| F2 | Best | 4.65 × 10^0 | 7.19 × 10^0 | 6.77 × 10^4 | 8.39 × 10^1 | 5.28 × 10^1 | 8.11 × 10^1 | 5.99 × 10^0 | 5.00 × 10^0 |
| | Mean | 7.75 × 10^−2 | 7.22 × 10^0 | 1.12 × 10^5 | 8.94 × 10^1 | 5.30 × 10^1 | 8.11 × 10^1 | 1.25 × 10^1 | 1.67 × 10^−1 |
| | Std | 6.00 × 10^−1 | 2.18 × 10^−2 | 1.53 × 10^4 | 3.61 × 10^0 | 3.67 × 10^−1 | 3.45 × 10^−2 | 4.55 × 10^0 | 9.13 × 10^−1 |
| F3 | Best | 5.53 × 10^0 | 1.28 × 10^1 | 1.33 × 10^1 | 1.30 × 10^1 | 1.13 × 10^1 | 1.11 × 10^1 | 1.37 × 10^1 | 8.02 × 10^0 |
| | Mean | 9.21 × 10^−2 | 1.39 × 10^6 | 1.37 × 10^1 | 3.89 × 10^7 | 1.00 × 10^18 | 3.09 × 10^1 | 1.37 × 10^1 | 2.67 × 10^−1 |
| | Std | 7.13 × 10^−1 | 4.79 × 10^6 | 7.14 × 10^−6 | 2.13 × 10^8 | 3.05 × 10^18 | 9.98 × 10^1 | 3.12 × 10^−12 | 1.46 × 10^0 |
| F4 | Best | 4.43 × 10^1 | 2.14 × 10^1 | 4.52 × 10^5 | 5.65 × 10^1 | 7.11 × 10^1 | 8.97 × 10^1 | 1.27 × 10^2 | 6.22 × 10^1 |
| | Mean | 7.39 × 10^−1 | 2.39 × 10^1 | 8.21 × 10^5 | 8.05 × 10^1 | 7.13 × 10^1 | 8.97 × 10^1 | 1.88 × 10^2 | 2.07 × 10^0 |
| | Std | 5.72 × 10^0 | 3.85 × 10^−2 | 2.64 × 10^5 | 1.33 × 10^1 | 3.26 × 10^−1 | 8.07 × 10^−14 | 2.43 × 10^1 | 1.14 × 10^1 |
| F5 | Best | 1.45 × 10^0 | 1.81 × 10^0 | 1.56 × 10^6 | 3.81 × 10^0 | 2.90 × 10^0 | 1.17 × 10^2 | 1.36 × 10^2 | 2.02 × 10^0 |
| | Mean | 2.41 × 10^−2 | 2.08 × 10^0 | 2.74 × 10^6 | 5.00 × 10^0 | 3.02 × 10^0 | 1.17 × 10^2 | 1.38 × 10^2 | 6.74 × 10^−2 |
| | Std | 1.87 × 10^−1 | 2.82 × 10^−2 | 8.93 × 10^5 | 6.26 × 10^−1 | 1.57 × 10^−1 | 7.70 × 10^−5 | 2.22 × 10^0 | 3.69 × 10^−1 |
| F6 | Best | 7.02 × 10^0 | 1.09 × 10^1 | 6.63 × 10^0 | 7.94 × 10^0 | 9.35 × 10^0 | 5.45 × 10^0 | 1.11 × 10^1 | 9.30 × 10^0 |
| | Mean | 1.17 × 10^−1 | 2.00 × 10^1 | 1.81 × 10^1 | 8.94 × 10^0 | 1.19 × 10^1 | 5.45 × 10^0 | 2.11 × 10^1 | 3.10 × 10^−1 |
| | Std | 9.09 × 10^−1 | 7.87 × 10^−1 | 4.23 × 10^0 | 5.19 × 10^−1 | 2.42 × 10^0 | 1.75 × 10^−6 | 2.69 × 10^0 | 1.70 × 10^0 |
| F7 | Best | 1.16 × 10^3 | 9.13 × 10^2 | 1.81 × 10^5 | 1.71 × 10^3 | 1.38 × 10^3 | 1.61 × 10^3 | 2.41 × 10^3 | 1.13 × 10^3 |
| | Mean | 1.93 × 10^1 | 1.43 × 10^3 | 3.11 × 10^5 | 2.21 × 10^3 | 1.40 × 10^3 | 1.61 × 10^3 | 4.69 × 10^3 | 3.75 × 10^1 |
| | Std | 1.50 × 10^2 | 7.81 × 10^0 | 1.05 × 10^5 | 2.89 × 10^2 | 3.53 × 10^1 | 5.69 × 10^0 | 6.40 × 10^2 | 2.06 × 10^2 |
| F8 | Best | 4.46 × 10^0 | 5.31 × 10^0 | 5.53 × 10^0 | 5.45 × 10^0 | 5.06 × 10^0 | 5.33 × 10^0 | 5.59 × 10^0 | 4.89 × 10^0 |
| | Mean | 7.44 × 10^−2 | 5.88 × 10^0 | 6.00 × 10^0 | 6.00 × 10^0 | 5.10 × 10^0 | 5.33 × 10^0 | 6.00 × 10^0 | 1.63 × 10^−1 |
| | Std | 5.76 × 10^−1 | 2.77 × 10^−1 | 1.80 × 10^−1 | 3.56 × 10^−1 | 8.22 × 10^−2 | 3.29 × 10^−6 | 6.85 × 10^−3 | 8.94 × 10^−1 |
| F9 | Best | 1.40 × 10^0 | 1.20 × 10^0 | 2.00 × 10^4 | 1.84 × 10^0 | 1.32 × 10^0 | 8.10 × 10^0 | 2.86 × 10^0 | 1.39 × 10^0 |
| | Mean | 2.45 × 10^−2 | 1.44 × 10^0 | 3.66 × 10^4 | 2.34 × 10^0 | 1.61 × 10^0 | 8.10 × 10^0 | 3.53 × 10^0 | 4.63 × 10^−2 |
| | Std | 1.90 × 10^−1 | 5.70 × 10^−2 | 1.32 × 10^4 | 2.14 × 10^−1 | 1.27 × 10^−1 | 1.54 × 10^−5 | 2.80 × 10^−1 | 2.54 × 10^−1 |
| F10 | Best | 2.11 × 10^1 | 2.15 × 10^1 | 2.15 × 10^1 | 2.15 × 10^1 | 2.14 × 10^1 | 2.11 × 10^1 | 2.15 × 10^1 | 2.14 × 10^1 |
| | Mean | 3.52 × 10^−1 | 2.27 × 10^1 | 2.27 × 10^1 | 2.26 × 10^1 | 2.24 × 10^1 | 2.11 × 10^1 | 2.27 × 10^1 | 7.12 × 10^−1 |
| | Std | 2.73 × 10^0 | 2.29 × 10^−1 | 2.30 × 10^−1 | 3.14 × 10^−1 | 4.22 × 10^−1 | 4.45 × 10^−14 | 2.34 × 10^−1 | 3.90 × 10^0 |
Table 7. Friedman's statistical ranking results on the 23 benchmark test functions for IROAVOA and the other algorithms.

| Function | IROAVOA | GWO | PSO | MVO | WOA | MFO | TSA | HHO | AVOA |
|---|---|---|---|---|---|---|---|---|---|
| F1 | 1.18 | 5.00 | 8.77 | 7.17 | 4.00 | 8.07 | 6.00 | 3.00 | 1.82 |
| F2 | 1.00 | 5.00 | 8.03 | 7.10 | 3.47 | 8.87 | 6.00 | 3.53 | 2.00 |
| F3 | 1.00 | 4.27 | 7.03 | 6.00 | 8.97 | 8.00 | 4.73 | 3.00 | 2.00 |
| F4 | 1.00 | 4.00 | 7.13 | 6.07 | 8.07 | 8.70 | 5.03 | 3.00 | 2.00 |
| F5 | 1.03 | 4.23 | 8.90 | 7.13 | 5.00 | 7.97 | 5.77 | 2.93 | 2.03 |
| F6 | 1.00 | 4.93 | 8.90 | 6.10 | 4.30 | 7.20 | 7.57 | 2.97 | 2.03 |
| F7 | 1.80 | 4.27 | 7.60 | 7.37 | 4.67 | 9.00 | 5.80 | 2.13 | 2.37 |
| F8 | 1.00 | 7.67 | 8.43 | 5.70 | 3.97 | 5.07 | 7.83 | 2.60 | 2.73 |
| F9 | 2.48 | 5.00 | 7.93 | 6.33 | 2.55 | 7.57 | 8.17 | 2.48 | 2.48 |
| F10 | 2.10 | 5.00 | 8.10 | 6.33 | 3.70 | 8.80 | 6.77 | 2.10 | 2.10 |
| F11 | 3.10 | 3.90 | 8.77 | 7.13 | 3.22 | 8.10 | 4.58 | 3.10 | 3.10 |
| F12 | 1.00 | 4.87 | 7.17 | 6.30 | 4.13 | 8.37 | 8.17 | 2.97 | 2.03 |
| F13 | 1.03 | 5.73 | 8.43 | 4.20 | 5.07 | 8.53 | 7.03 | 3.00 | 1.97 |
| F14 | 1.90 | 7.00 | 6.77 | 3.93 | 6.03 | 3.35 | 8.47 | 4.70 | 2.85 |
| F15 | 1.43 | 5.10 | 6.27 | 7.07 | 5.97 | 7.67 | 4.87 | 3.50 | 3.13 |
| F16 | 2.33 | 6.07 | 8.83 | 7.67 | 4.73 | 1.10 | 7.40 | 4.27 | 2.60 |
| F17 | 2.05 | 4.87 | 9.00 | 4.80 | 6.70 | 1.95 | 7.83 | 5.80 | 2.00 |
| F18 | 3.27 | 6.93 | 8.70 | 5.10 | 6.33 | 1.03 | 7.03 | 2.73 | 3.87 |
| F19 | 2.03 | 5.80 | 8.93 | 4.00 | 7.37 | 1.27 | 5.97 | 6.77 | 2.87 |
| F20 | 3.20 | 4.57 | 9.00 | 4.47 | 5.40 | 3.60 | 4.37 | 7.63 | 2.77 |
| F21 | 1.48 | 3.97 | 8.93 | 5.17 | 5.47 | 3.92 | 6.50 | 7.17 | 2.40 |
| F22 | 1.40 | 3.97 | 9.00 | 4.13 | 6.07 | 4.55 | 6.77 | 6.73 | 2.38 |
| F23 | 1.68 | 4.23 | 8.70 | 4.77 | 5.90 | 3.40 | 6.83 | 6.90 | 2.58 |
| Avg Rank | 1.72 | 5.06 | 8.23 | 5.83 | 5.26 | 5.92 | 6.50 | 4.04 | 2.44 |
| Overall Rank | 1 | 4 | 9 | 6 | 5 | 7 | 8 | 3 | 2 |
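The mean ranks in Table 7 follow Friedman's procedure: on each independent run, all nine algorithms are ranked by the objective value they reached (rank 1 is best), and the per-algorithm ranks are then averaged over the runs. A hedged sketch of that computation, using random placeholder data rather than the paper's results, could look like this:

```python
# Hedged sketch of Friedman mean ranks (placeholder data, not the
# paper's results): rank 9 algorithms within each of 30 runs, where
# a lower objective value receives a better (smaller) rank.
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

rng = np.random.default_rng(1)
results = rng.random((30, 9))        # 30 runs x 9 algorithms (placeholder)

ranks = np.apply_along_axis(rankdata, 1, results)  # rank within each run
mean_ranks = ranks.mean(axis=0)      # Friedman mean rank per algorithm
print(np.round(mean_ranks, 2))

# Friedman test statistic and p-value over the same samples:
stat, p = friedmanchisquare(*results.T)
print(stat, p)
```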
Table 8. Statistical results of the Wilcoxon rank-sum test on the 23 benchmark test functions; each entry is the p-value of IROAVOA versus the listed algorithm.

| Function | GWO | PSO | MVO | WOA | MFO | TSA | HHO | AVOA |
|---|---|---|---|---|---|---|---|---|
| F1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.45 × 10^−7 |
| F2 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F3 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 | 2.51 × 10^−11 |
| F4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F5 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 6.72 × 10^−10 |
| F6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F7 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.52 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.67 × 10^−1 | 1.09 × 10^−1 |
| F8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F9 | 1.20 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.34 × 10^−1 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | NaN |
| F10 | 1.17 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 9.84 × 10^−10 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | NaN |
| F11 | 2.79 × 10^−3 | 1.21 × 10^−12 | 1.21 × 10^−12 | 3.34 × 10^−1 | 1.21 × 10^−12 | 1.27 × 10^−5 | NaN | NaN |
| F12 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 |
| F13 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.10 × 10^−10 |
| F14 | 2.34 × 10^−11 | 2.34 × 10^−11 | 2.34 × 10^−11 | 2.34 × 10^−11 | 6.83 × 10^−1 | 2.34 × 10^−11 | 2.34 × 10^−11 | 8.89 × 10^−2 |
| F15 | 6.01 × 10^−8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 8.15 × 10^−11 | 3.01 × 10^−11 | 1.61 × 10^−6 | 2.44 × 10^−9 | 7.60 × 10^−7 |
| F16 | 5.14 × 10^−12 | 5.14 × 10^−12 | 5.14 × 10^−12 | 5.14 × 10^−12 | 8.99 × 10^−11 | 5.14 × 10^−12 | 1.56 × 10^−11 | 2.08 × 10^−2 |
| F17 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 2.36 × 10^−12 | 1.61 × 10^−1 | 2.36 × 10^−12 | 2.36 × 10^−12 | 5.70 × 10^−1 |
| F18 | 3.47 × 10^−10 | 3.34 × 10^−11 | 2.02 × 10^−8 | 1.03 × 10^−6 | 2.56 × 10^−11 | 5.49 × 10^−11 | 1.17 × 10^−2 | 2.06 × 10^−1 |
| F19 | 2.15 × 10^−11 | 2.15 × 10^−11 | 2.15 × 10^−11 | 2.15 × 10^−11 | 1.14 × 10^−9 | 2.15 × 10^−11 | 2.15 × 10^−11 | 5.77 × 10^−8 |
| F20 | 2.75 × 10^−3 | 3.02 × 10^−11 | 1.06 × 10^−3 | 5.83 × 10^−3 | 1.56 × 10^−1 | 1.67 × 10^−1 | 1.85 × 10^−9 | 2.34 × 10^−1 |
| F21 | 1.94 × 10^−11 | 1.94 × 10^−11 | 1.94 × 10^−11 | 1.94 × 10^−11 | 3.46 × 10^−1 | 1.94 × 10^−11 | 1.94 × 10^−11 | 1.24 × 10^−9 |
| F22 | 2.15 × 10^−11 | 2.15 × 10^−11 | 2.15 × 10^−11 | 2.15 × 10^−11 | 1.17 × 10^−1 | 2.15 × 10^−11 | 2.15 × 10^−11 | 5.18 × 10^−11 |
| F23 | 2.48 × 10^−11 | 2.48 × 10^−11 | 2.48 × 10^−11 | 2.48 × 10^−11 | 7.26 × 10^−2 | 2.48 × 10^−11 | 2.48 × 10^−11 | 1.53 × 10^−9 |
| (W/T/L) | 23/0/0 | 23/0/0 | 23/0/0 | 23/0/0 | 23/0/0 | 23/0/0 | 20/3/0 | 15/3/5 |
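Each entry in Table 8 compares the run-by-run results of IROAVOA against one competitor with a two-sided Wilcoxon rank-sum test, judged at the usual 5% significance level. A hedged sketch with placeholder data (not the paper's results):

```python
# Hedged sketch of a Wilcoxon rank-sum comparison (placeholder data,
# not the paper's results): 30 independent runs per algorithm.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(2)
iroavoa = rng.normal(0.0, 1.0, 30)       # placeholder run results
competitor = rng.normal(0.5, 1.0, 30)

stat, p = ranksums(iroavoa, competitor)
print(p, "significant" if p < 0.05 else "not significant")
# NaN entries in such tables typically arise when both algorithms hit
# the exact same value in every run, leaving no rank information.
```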
Table 9. Comparison of pressure vessel design problem results of various algorithms.

| Algorithm | Ts | Th | R | L | f(x) | Worst | Mean | Std |
|---|---|---|---|---|---|---|---|---|
| IROAVOA | 0.778271 | 0.384700 | 40.32492 | 199.9262 | 5885.50775 | 1.44 × 10^11 | 2.40 × 10^9 | 1.86 × 10^10 |
| AVOA | 0.780252 | 0.385680 | 40.42754 | 198.5041 | 5888.93379 | 5.89 × 10^3 | 1.96 × 10^2 | 1.08 × 10^3 |
| GWO | 0.779255 | 0.385775 | 40.37216 | 199.4523 | 5893.32353 | 1.82 × 10^20 | 2.06 × 10^19 | 4.56 × 10^19 |
| WOA | 0.851433 | 0.618424 | 43.22549 | 163.1540 | 6788.05596 | 1.66 × 10^25 | 6.19 × 10^20 | 2.17 × 10^21 |
| MVO | 0.866271 | 0.430271 | 44.86706 | 145.4594 | 6072.49998 | 1.37 × 10^20 | 1.70 × 10^19 | 3.68 × 10^19 |
| HHO | 0.947994 | 0.458334 | 47.89274 | 115.9952 | 6331.09005 | 6.33 × 10^3 | 2.11 × 10^2 | 1.16 × 10^3 |
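The column order in Table 9 (shell thickness Ts, head thickness Th, inner radius R, cylinder length L) can be checked against the pressure vessel cost function that is standard in the metaheuristics literature; the formulation below is our assumption, since the paper's equations are not part of this excerpt, but it reproduces the reported IROAVOA objective.

```python
# Sketch of the standard pressure vessel formulation (assumed, not
# copied from the paper); a design is feasible when every g <= 0.
import math

def pressure_vessel_cost(Ts, Th, R, L):
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def pressure_vessel_constraints(Ts, Th, R, L):
    return [
        -Ts + 0.0193 * R,                   # minimum shell thickness
        -Th + 0.00954 * R,                  # minimum head thickness
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1_296_000,
        L - 240.0,                          # length limit
    ]

# The IROAVOA solution from Table 9 evaluates to roughly 5885.5:
print(pressure_vessel_cost(0.778271, 0.384700, 40.32492, 199.9262))
```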
Table 10. Comparison of welded beam design problem results of various algorithms (design variables h, l, t, b).

| Algorithm | h (x1) | l (x2) | t (x3) | b (x4) | f(x) | Worst | Mean | Std |
|---|---|---|---|---|---|---|---|---|
| IROAVOA | 0.20563 | 3.2552 | 9.0356 | 0.20578 | 1.6955 | 9.43 × 10^14 | 1.57 × 10^13 | 1.22 × 10^14 |
| AVOA | 0.20295 | 3.3033 | 9.0370 | 0.20573 | 1.6979 | 1.71 × 10^17 | 5.71 × 10^15 | 3.13 × 10^16 |
| GWO | 0.20538 | 3.2591 | 9.0412 | 0.20581 | 1.6969 | 9.05 × 10^15 | 7.41 × 10^14 | 2.28 × 10^15 |
| WOA | 0.17181 | 3.8612 | 9.4729 | 0.20365 | 1.7836 | 2.55 × 10^30 | 4.58 × 10^16 | 1.53 × 10^17 |
| MVO | 0.20330 | 3.3073 | 9.0390 | 0.20579 | 1.6999 | 7.95 × 10^16 | 9.41 × 10^15 | 1.77 × 10^16 |
| HHO | 0.19829 | 3.4453 | 8.9978 | 0.20751 | 1.7167 | 1.72 × 10^0 | 5.72 × 10^−2 | 3.13 × 10^−1 |
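Likewise for Table 10: interpreting the four columns as the standard welded beam design variables (weld thickness h, weld length l, beam height t, beam thickness b) reproduces the reported costs under the fabrication cost function that is usual in this literature. This is our assumption rather than the paper's stated formulation, and the shear stress, buckling, and deflection constraints are omitted for brevity.

```python
# Sketch of the standard welded beam fabrication cost (assumed, not
# the paper's exact formulation); constraints omitted for brevity.
def welded_beam_cost(h, l, t, b):
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# The IROAVOA solution from Table 10 evaluates to roughly 1.6955:
print(welded_beam_cost(0.20563, 3.2552, 9.0356, 0.20578))
```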