Article

A Multi-Strategy Sparrow Search Algorithm with Selective Ensemble

1 School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(11), 2505; https://doi.org/10.3390/electronics12112505
Submission received: 15 May 2023 / Revised: 29 May 2023 / Accepted: 30 May 2023 / Published: 1 June 2023

Abstract
To address deficiencies of the sparrow search algorithm (SSA), such as susceptibility to local optima and deficient optimization accuracy, a multi-strategy sparrow search algorithm with selective ensemble (MSESSA) is proposed. Firstly, three novel strategies are placed in a strategy pool: variable logarithmic spiral saltation learning enhances global search capability, neighborhood-guided learning accelerates local search convergence, and adaptive Gaussian random walk coordinates exploration and exploitation. Secondly, the idea of selective ensemble is adopted to select an appropriate strategy for the current stage with the aid of the priority roulette selection method. In addition, a modified boundary processing mechanism adjusts the transgressive sparrows’ locations: a random relocation method lets discoverers and alerters conduct global search over a large range, and a relocation method based on the optimal and suboptimal individuals of the population lets scroungers conduct better local search. Finally, MSESSA is tested on the CEC 2017 suite. The function tests, Wilcoxon tests, and ablation experiments show that MSESSA achieves better comprehensive performance than 13 other advanced algorithms. On four engineering optimization problems, the stability, effectiveness, and superiority of MSESSA are systematically verified; it has significant advantages and can reduce design cost.

1. Introduction

Real-world problems in many fields, such as industrial manufacturing [1], medical applications [2], wireless sensor networks [3], etc., are usually modeled as optimization problems. With the continuous development of science and technology, these practical optimization problems are often multi-modal and high-dimensional, with large numbers of local solutions, high nonlinearity, and very strict constraints. One challenge is that it is difficult to obtain the optimal solution with traditional optimization methods on increasingly complex optimization problems: their calculation efficiency is low, their calculation workload is large, and they are largely powerless to deal with such problems [4]. Therefore, many researchers have conducted extensive research to find better ways to solve optimization problems. The metaheuristic algorithm is an emerging evolutionary computing technology. Due to its high precision, simplicity, and flexibility, the metaheuristic algorithm has clear advantages in avoiding local optimal stagnation and shows strong exploration and exploitation abilities on complex optimization problems [5]. It is widely considered an effective optimization technology for solving complex optimization problems [6,7,8,9,10,11,12,13,14].
In recent years, metaheuristic algorithms based on swarm intelligence technology have been widely studied for their advantages. The swarm intelligence algorithm is mainly inspired by group biological behavior, i.e., it imitates the behavior of animals in the form of a group, such as wolves, fishes, birds, etc. Its core idea is to simulate the behavior of group members following the group leader to obtain food or mates. Common swarm intelligence algorithms include particle swarm optimization (PSO) [15], grey wolf optimization (GWO) [16], whale optimization algorithm (WOA) [17], artificial bee colony (ABC) [18], firefly algorithm (FA) [19], etc.
Among the many available metaheuristic algorithms, the sparrow search algorithm (SSA) is an optimization algorithm based on the sparrow population, proposed by Xue et al. [20] in 2020. It simulates the foraging and anti-predation behavior of sparrow populations in nature and models the search for food sources as an optimization process. SSA has the advantages of simple structure, strong flexibility, and few parameters, as well as significant advantages in convergence speed, search accuracy, and stability. It has been applied to various practical problems, such as intrusion detection [21], energy consumption [22], wireless sensor networks [23], feature selection [24], network planning [25], data compression [26], engineering problems [27,28,29,30,31,32,33], etc. However, a second challenge is that SSA is highly susceptible to interference from local solutions, which reduces its search capability and search accuracy. Therefore, studying the improvement of SSA is of great significance.
The idea of ensemble learning was put forward in 2010 [34], and researchers have achieved good results by combining optimization algorithms and ensemble learning to solve complex optimization problems [35,36,37,38,39]. The idea of ensemble learning is to integrate various strategies with different characteristics to achieve better results. In fact, however, not all strategies need to be used, so selective ensemble can be adopted for better performance [40,41].
In recent years, more and more swarm intelligence algorithms have been proposed to deal with complex optimization problems. Our team also proposed some new algorithms and achieved good results in engineering applications [3,10,28,42]. However, the no free lunch (NFL) theorem [43] logically proves that no metaheuristic algorithm is suitable for solving all optimization problems. This motivates us to constantly develop new metaheuristic algorithms to solve different problems.
Based on the advantages of SSA in dealing with optimization problems and the idea of selective ensemble in algorithm optimization, a multi-strategy sparrow search algorithm with selective ensemble (MSESSA) is proposed. The selective ensemble method is adopted to select strategies to avoid resource loss, and the selective ensemble is combined with three novel and effective learning search strategies to achieve better search capability in different search stages of SSA, eliminate existing limitations as far as possible, and finally optimize the search performance of SSA. MSESSA’s major contributions and innovations are as follows:
  • Firstly, according to the search characteristics of SSA in different stages, three search learning strategies with different characteristics are proposed: variable logarithmic spiral saltation learning enhances global search capability, neighborhood-guided learning accelerates the convergence of local search, and adaptive Gaussian random walk coordinates exploration and exploitation. Three strategies can be used in the different search stages of the sparrow foraging search process to improve the performance of the algorithm;
  • Secondly, the idea of selective ensemble is adopted to select the most appropriate search strategy from the strategy pool for the current search stage with the aid of the priority roulette selection method;
  • Thirdly, the boundary processing mechanism of the original algorithm is modified, and a modified boundary processing mechanism is proposed to dynamically adjust the transgressive sparrows’ locations. The random relocation method is adopted for discoverers and alerters to further conduct global search in a large range, and the relocation method based on the optimal and suboptimal of the population is adopted for scroungers to further conduct better local search;
  • Finally, MSESSA is tested on the full set of 2017 IEEE Congress on Evolutionary Computation (CEC 2017) test functions. The CEC 2017 function test, Wilcoxon rank-sum test, and ablation experiment results show that MSESSA achieves better comprehensive performance than 13 other advanced algorithms. The stability, effectiveness, and superiority of MSESSA are systematically verified on four practical engineering optimization problems.
The remainder of this paper is organized as follows: Section 2 reviews related work on SSA. Section 3 briefly describes SSA. Section 4 describes and analyzes the proposed MSESSA in detail. Section 5 compares MSESSA with 13 other advanced algorithms on CEC 2017 and verifies its superior performance. Section 6 applies MSESSA to four classical practical engineering problems. Section 7 summarizes the research and looks forward to future work.

2. Related Work

According to the no free lunch theorem [43], no algorithm can be skilled at solving all optimization problems. SSA is easily disturbed by local optimal solutions and tends to converge so quickly that it enters a stagnation state; it also suffers from deficient optimization accuracy and low search efficiency. Therefore, many researchers have conducted considerable studies on improving SSA. Liu et al. [21] proposed a hybrid strategy improved sparrow search algorithm (HSISSA). They constructed a hybrid circle piecewise mapping method to initialize the population and combined the spiral search method with the Lévy flight formula to update the locations of the discoverers and the alerters, expanding the search range of the population and enhancing the search ability. In addition, the simplex method and pinhole imaging method were used to optimize the locations of sparrows with poor and with optimal fitness values so as to avoid population search stagnation and local optima. Finally, the effectiveness of HSISSA was verified in intrusion detection. Zhang et al. [24] proposed a mayfly sparrow search hybrid algorithm (MASSA). The idea of the mayfly algorithm [44] was added to the optimization process of SSA, and circular chaotic mapping, Lévy flight, and a nonlinear inertia coefficient were added to balance global and local search. Finally, RFID network planning using MASSA achieved remarkable results. Wu et al. [27] proposed a sparrow search algorithm based on quantum computing and multi-strategy enhancement (QMESSA). They adopted an improved circular chaotic mapping theory, combined quantum computing with a quantum gate mutation mechanism, and constructed an adaptive T-distribution and a new position update formula using the enhanced search strategy to accelerate convergence and enhance variability. The superiority of QMESSA was verified on several classical practical application problems.
Ouyang et al. [28] proposed a learning sparrow search algorithm (LSSA). They introduced lens reverse learning and a sine and cosine guidance mechanism, adopted a differential-based local search strategy, expanded the possibility of the solution, and made the algorithm jump out of stagnation. Finally, they verified the practicability of LSSA in robot path planning. Meng et al. [30] proposed a multi-strategy improved sparrow search algorithm (MSSSA). They introduced chaotic mapping to obtain high-quality initial populations, adopted a reverse learning strategy to increase population diversity, designed an adaptive parameter control strategy to accommodate an adequate balance between exploration and exploitation, and embedded a hybrid disturbance mechanism in the individual renewal stage. The advantages of MSSSA were verified in solving engineering optimization problems. Ma et al. [33] proposed an enhanced multi-strategy sparrow search algorithm (EMSSA) based on three strategies. Firstly, in the uniformity-diversification orientation strategy, an adaptive-tent chaos theory was proposed to obtain a more diverse and greatly random initial population. Secondly, in the hazard-aware transfer strategy, a weighted sine and cosine algorithm based on the growth function was constructed to avoid trapping into the state of local optimal stagnation. Thirdly, in the dynamic evolutionary strategy, the similarity perturbation function was designed, and the triangle similarity theory was introduced to improve the exploration ability. The performance of EMSSA in 23 benchmark functions, CEC2014, and CEC2017 was a significant improvement over SSA and other advanced algorithms.
Many scholars have put forward constructive improvement strategies for the shortcomings of SSA in the above studies, and the improved algorithms have also been applied in various fields. These studies have improved the performance of SSA to a certain extent and verified its advancement and effectiveness in application, which provides guidance for subsequent research. However, existing research still has some shortcomings and limited advantages in dealing with optimization problems:
  • The improved algorithms still show limitations and uncertainties in practical application, such as insufficient optimization accuracy, low search efficiency, and unstable performance, and some areas remain to be improved;
  • Existing research mainly improves SSA by introducing ideas or formulas from other algorithms, whereas many machine learning ideas can also yield new improvements.
So far, studies on improving SSA remain few because the algorithm was proposed relatively recently. Although existing research can improve the performance of SSA, the proposed improved algorithms still have limitations and uncertainties in application. Improvement frameworks based on the ideas or formulas of other algorithms are not novel enough, and, according to the no free lunch theorem [43], no algorithm can present advantages in all fields. It is therefore necessary to further study this promising and important metaheuristic algorithm, and this necessity prompts us to develop a metaheuristic algorithm with good comprehensive performance. Therefore, this paper combines the ideas of multi-strategy and selective ensemble learning to design three novel strategies that improve SSA and eliminate the limitations and uncertainties in the application process as far as possible. To our knowledge, no scholars have yet introduced the ideas of multi-strategy and selective ensemble learning to the improvement of SSA.

3. Sparrow Search Algorithm

The design of SSA is inspired by sparrows’ foraging and anti-predation behavior. In the process of foraging, the sparrow population has its own roles, and the division of labor is clear. According to the adaptation to the environment in the foraging process, the sparrow population can be divided into discoverers, scroungers, and alerters. Discoverers usually play a role with high environmental fitness, so they are responsible for searching for food extensively, sharing information about foraging direction with other individuals, and guiding the population to flow in the foraging direction. Scroungers join the team of discoverers with the best fitness value to explore foraging information, while alerters mainly perform reconnaissance and early warning tasks and are responsible for detecting dangers and providing danger information to other individuals in the population.
Therefore, the following hypotheses are further obtained for sparrows’ behavior:
  • Sparrows with high energy reserves are called discoverers, capable enough to forage for food and responsible for finding areas that can provide rich food sources, providing foraging areas or directions for the scroungers. Sparrows with low energy reserves are called scroungers. The level of energy reserve depends on the assessment of individual fitness value;
  • Once the alerters find a predator, the individual will issue an alarm. When the alarm value is greater than the safety threshold, the discoverers need to guide the scroungers to a safe area;
  • Every sparrow can become a discoverer if it finds a better food source, but the proportion of discoverers and scroungers in the whole population remains the same;
  • The scroungers will follow the discoverers who provide the best foraging information to forage. Meanwhile, some scroungers will constantly monitor the discoverers and compete with the discoverers for food to improve their foraging rate;
  • When aware of danger, sparrows at the edge of the population will quickly move to safety to gain a better position, while sparrows in the group will move randomly.
We can model sparrows’ positions as follows:
X = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,D} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ X_{N,1} & X_{N,2} & \cdots & X_{N,D} \end{bmatrix} \quad (1)
N  represents the number of sparrows, and D  represents the dimension of variables to be optimized. The fitness values of all sparrows can be modeled as follows:
F_X = \begin{bmatrix} f([X_{1,1}\; X_{1,2}\; \cdots\; X_{1,D}]) \\ f([X_{2,1}\; X_{2,2}\; \cdots\; X_{2,D}]) \\ \vdots \\ f([X_{N,1}\; X_{N,2}\; \cdots\; X_{N,D}]) \end{bmatrix} \quad (2)
Each row of F_X is the fitness value of the corresponding individual.
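As a concrete illustration, the population matrix X and the fitness vector F_X can be built with NumPy; this is a sketch of ours, and the sphere function below is a stand-in objective, not one used in the paper (the experiments use CEC 2017 functions):

```python
import numpy as np

N, D = 30, 10                    # population size, problem dimension
lb, ub = -100.0, 100.0           # search-space bounds

rng = np.random.default_rng(0)
X = lb + rng.random((N, D)) * (ub - lb)       # N x D position matrix, Eq. (1)

def f(x):
    """Stand-in objective (sphere function)."""
    return float(np.sum(x ** 2))

F_X = np.array([f(X[i]) for i in range(N)])   # fitness vector, Eq. (2)
```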
In the sparrow population, the fitness values of all sparrows will be calculated first, the fitness values will be sorted, and the sparrows with the top 20% fitness values will be the discoverers. When the discoverers are within the safe value range, a large range of search will be conducted. The position update formula of the discoverers is shown in Equation (3):
X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\!\left(\dfrac{-i}{\alpha \cdot T}\right) & \text{if } R_2 < ST \\ X_{i,j}^{t} + Q \cdot L & \text{if } R_2 \ge ST \end{cases} \quad (3)
X_{i,j}^{t+1} represents the position of the i-th sparrow in the j-th dimension at iteration t+1; t is the current iteration number; T is the maximum number of iterations; α is a random number in (0, 1]; Q is a random number obeying the standard normal distribution; L is a 1 × D matrix with every element equal to 1; R_2 ∈ [0, 1] is the warning value; and ST ∈ [0.5, 1] is the safety threshold. When R_2 < ST, there is no danger and the discoverers can search extensively; otherwise, danger is present and the discoverers need to lead the other sparrows away from their current locations to a safe area.
The scroungers will follow the foraging direction provided by the discoverers for extensive search. The position update formula of scroungers is shown in Equation (4):
X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\!\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right) & \text{if } i > N/2 \\ X_{P}^{t+1} + \left| X_{i,j}^{t} - X_{P}^{t+1} \right| \cdot A^{+} \cdot L & \text{otherwise} \end{cases} \quad (4)
X_P is the optimal position occupied by the current discoverers, X_{worst}^t is the global worst position, A is a 1 × D matrix whose elements are randomly 1 or −1, and A^+ = A^T (A A^T)^{-1}. When i > N/2, the scroungers with low fitness values cannot compete for food with higher-fitness individuals and need to fly to a new place to forage; otherwise, the scroungers follow the best discoverer to forage.
The alerters are responsible for danger detection in the sparrow population. When the alerters are aware of a danger, they will transmit the danger signal to all individuals in the sparrow population, and the sparrow population will immediately show anti-predation behavior. The position update formula of the alerters is shown in Equation (5):
X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left| X_{i,j}^{t} - X_{best}^{t} \right| & \text{if } f_i > f_g \\ X_{i,j}^{t} + K \cdot \left( \dfrac{\left| X_{i,j}^{t} - X_{best}^{t} \right|}{(f_i - f_w) + \varepsilon} \right) & \text{if } f_i = f_g \end{cases} \quad (5)
X_{best}^t is the current global optimal position, and β is a random number subject to the standard normal distribution (mean 0, variance 1). K is a random number in [−1, 1], f_i is the fitness value of the current individual, f_g is the current global optimal fitness value, f_w is the current worst fitness value, and ε is an infinitesimal constant to avoid a zero denominator.
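The three update rules of Equations (3)–(5) can be sketched in NumPy as follows; this is an illustrative translation of ours, with defaults such as ST = 0.8 chosen for the example rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def discoverer_update(X, i, T, ST=0.8):
    """Eq. (3): wide search while safe (R2 < ST); otherwise move away with Q*L."""
    R2 = rng.random()                          # warning value in [0, 1]
    if R2 < ST:
        alpha = rng.random() + 1e-12           # random number in (0, 1]
        return X[i] * np.exp(-i / (alpha * T))
    Q = rng.standard_normal()                  # standard normal random number
    return X[i] + Q * np.ones_like(X[i])       # L: 1 x D matrix of ones

def scrounger_update(X, i, N, X_P, X_worst):
    """Eq. (4): follow the best discoverer X_P, or fly elsewhere if i > N/2."""
    if i > N / 2:
        Q = rng.standard_normal()
        return Q * np.exp((X_worst - X[i]) / i ** 2)
    D = X.shape[1]
    A = rng.choice([-1.0, 1.0], size=(1, D))   # elements randomly +1 or -1
    A_plus = A.T @ np.linalg.inv(A @ A.T)      # A+ = A^T (A A^T)^-1
    step = (np.abs(X[i] - X_P) @ A_plus).item()
    return X_P + step * np.ones(D)

def alerter_update(X, i, X_best, f_i, f_g, f_w, eps=1e-50):
    """Eq. (5): move toward X_best if f_i > f_g, else take a K-scaled step."""
    if f_i > f_g:
        beta = rng.standard_normal()
        return X_best + beta * np.abs(X[i] - X_best)
    K = rng.uniform(-1.0, 1.0)
    return X[i] + K * (np.abs(X[i] - X_best) / ((f_i - f_w) + eps))
```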

4. The Proposed Multi-Strategy Sparrow Search Algorithm with Selective Ensemble

In recent years, many scholars have combined swarm intelligence algorithms and machine learning to solve complex optimization problems, and this combination has proven effective [45,46,47]. Ensemble learning is a branch of machine learning that integrates multiple strategies to solve a problem. Algorithms often need to be optimized in more than one stage, so the multi-strategy idea has gained significant advantages in algorithm optimization. However, in multi-strategy algorithms, knowing when to use each strategy is the key to improving performance. Zhou et al. [40] put forward the concept of selective ensemble, which aims to achieve better performance without necessarily using more learning strategies. Selective ensemble can select which strategy to apply and thereby avoid wasting resources. The remaining question is how to select the strategy best suited to each search stage of the algorithm. As detailed in Section 4.4, roulette wheel selection offers a simple and widely used way to do this, so we choose the strategy suitable for the current optimization stage with a priority-based roulette selection method.
In this study, MSESSA combines the idea of selective ensemble with multi-strategy. Firstly, three search learning strategies are proposed to improve the performance of the algorithm from different aspects, which include the variable logarithmic spiral saltation learning strategy, suitable for optimizing global search and enhancing search ability; the neighborhood-guided learning strategy, suitable for optimizing local search and accelerating convergence speed; and the adaptive Gaussian random walk strategy, suitable for balancing global search and local search ability. These three novel strategies can greatly improve the global search capability and the local search capability of SSA. All strategies are set with the same initial priority. As the algorithm iteration progresses, if the strategy is suitable for the current search stage, the probability of the strategy being selected will increase, while if the strategy is not suitable for the current search, the probability of the strategy being selected will decrease. Therefore, a priority roulette selection method is designed to select strategies in different stages so that the algorithm can adjust the selection probability. Finally, MSESSA also modifies the boundary processing mechanism. The modified boundary processing mechanism is proposed to dynamically adjust the transgressive sparrows’ locations. The random relocation method is for discoverers and alerters to further conduct global search in a large range, and the relocation method based on the optimal and suboptimal of the population is for scroungers to further conduct better local search.

4.1. Variable Logarithmic Spiral Saltation Learning

It is widely known that an algorithm with strong search ability in the early stage will show superior performance. In the search process of SSA, the discoverers lead the population in extensive search, but the randomness of the search process, search method, and search trajectory leaves the scroungers blind to the search direction provided by the discoverers and makes it difficult for them to make correct judgments, resulting in a poor convergence effect. The authors of [42,48,49] use the spiral search method to improve the search ability of their algorithms, achieving good results. However, if the algorithm searches along a spiral curve trajectory, there is a certain possibility that it will find poor solutions or remain confined to a certain range, reducing search accuracy. The learning mechanism is a form of information flow that is often used in algorithm optimization: better information and positions are shared with other individuals, and those individuals learn from this information to update their locations. The authors of [50] proposed a saltation learning strategy based on the optimal positions to improve global search ability, and their algorithm showed better performance. Therefore, this study proposes a variable logarithmic spiral saltation learning strategy to change the search trajectory and search mode of the algorithm and improve its global search ability. The logarithmic spiral curve trajectory shown in Figure 1 is used to learn the location information of different dimensions of the optimal, worst, and random sparrows in the population to generate candidate solutions, further expanding the search scope and improving the search ability and accuracy of the algorithm.
As shown in Figure 2, based on the advantages gained from the logarithmic spiral curves and saltation learning mechanism, we assume that the algorithm performs saltation learning under a variable logarithmic spiral curve trajectory. Each sparrow in each generation only randomly updates one dimension around a random dimension of the current optimal sparrows, and the information used to update this dimension comes from other dimensions. The saltation learning between different dimensions can expand the search space and generate candidate solutions to enrich the diversity of solutions. Then, the position update formula under variable logarithmic spiral saltation learning is Equation (6):
X_{i,j}^{t+1} = X_{best,m}^{t} + e^{a l} \cdot \cos(2 \pi l) \cdot \left( X_{r_1,n}^{t} - X_{worst,n}^{t} \right) \quad (6)
X_{best}^t is the optimal solution of the t-th iteration; X_{worst}^t is the worst solution of the t-th iteration; j, m, and n are three mutually unequal integers randomly selected from [1, dim]; dim is the dimension; r_1 is an integer randomly selected from [1, N]; and N is the population size. a is the variable factor that determines the shape of the logarithmic spiral, and l is a parameter decreasing from 1 to −1; a and l are generated by Equations (7) and (8), respectively:
a = e^{5 \cos\left( \pi \left( 1 - t/T \right) \right)} \quad (7)
l = 2 \times \left( 1 - t/T \right) - 1 \quad (8)
The pseudocode of variable logarithmic spiral saltation learning (VLSSL) is described in Algorithm 1.
Algorithm 1: Pseudocode of VLSSL
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the current optimal location X b e s t t , the current worst location X w o r s t t  
2: Output: the current new location X n e w  
3: while  ( t < M )  
4: for  i = 1 : N  
5: Use Equation (6) to update the sparrow’s location
6: Get the current new location X n e w  
7: end for
8: t = t + 1
9: end while
10: return X n e w  
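A minimal sketch of one VLSSL step following Equations (6)–(8); the function and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def vlssl_update(X_best, X_worst, X, t, T):
    """One VLSSL step, Eq. (6): update one random dimension of the best sparrow
    using information from other dimensions along a log-spiral trajectory."""
    N, dim = X.shape
    a = np.exp(5 * np.cos(np.pi * (1 - t / T)))       # Eq. (7): spiral shape factor
    l = 2 * (1 - t / T) - 1                           # Eq. (8): decreases from 1 to -1
    j, m, n = rng.choice(dim, size=3, replace=False)  # three distinct dimensions
    r1 = rng.integers(N)                              # index of a random sparrow
    X_new = X_best.copy()
    X_new[j] = (X_best[m]
                + np.exp(a * l) * np.cos(2 * np.pi * l)
                * (X[r1, n] - X_worst[n]))
    return X_new
```

Note that whenever a is large (late iterations), l is negative, so the factor e^{al} stays bounded over the run.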

4.2. Neighborhood-Guided Learning

Generally, the algorithm is prone to problems such as optimal value stagnation and low search accuracy in the late iteration period. The authors of [13,51,52] use a neighborhood-guided strategy to jump out of stagnation with the help of the neighborhood, which presents certain advantages in improving algorithm performance. Considering the superiority of the learning mechanism on algorithm optimization, a neighborhood-guided learning strategy based on the optimal location information of the sparrow population is proposed in this study. A neighborhood-guided system is constructed in the sparrow population to establish the characteristics of each other’s guiding relationship, and the information of the optimal location is learned under the characteristics of the guiding relationship, thus helping SSA to jump out of stagnation and improve the accuracy of the final result.
In SSA, the discoverers play the role of the “bellwether” of the group, leading the sparrow population to forage, providing foraging directions and foraging information. However, it is easy for the discoverers to fall into stagnation in the late iteration, making it easy for the scroungers who follow the discoverers’ foraging search to become trapped in the state of local optimal stagnation, resulting in the insufficient searching ability of the algorithm in the late iteration. Therefore, considering that sparrows forage in groups, this study assumes that there are neighborhood areas near each sparrow. As shown in Figure 3, the black solid dots represent sparrows in the population, and blue areas represent the neighborhood areas near sparrows. Even if the sparrow is trapped in the state of local optimal stagnation, its neighborhood can guide it to jump out of stagnation and continue to the optimal search.
The neighborhood position generation formula is shown in Equation (9):
neighbor_{i,j}^{t+1} = X_{i,j}^{t} + randn \times nr \quad (9)
nr is the neighborhood factor; each sparrow’s neighborhood is updated under its action. nr is given by Equation (10):
nr = \left( rand \times X_{rand} - rand \times X_{i,j}^{t} \right) \times \left\| X_{best} - X_{i,j}^{t} \right\| \quad (10)
rand is a uniformly distributed random number, X_{best} is the global optimal solution, and X_{rand} is a random position generated by Equation (11):
X_{rand} = lb + rand \times (ub - lb) \quad (11)
ub is the upper bound of the search space, and lb is the lower bound of the search space.
In the local search stage, the sparrow maintains a guiding relationship with its neighborhood to share information. If the location is better than the current location, the location update will be carried out. The position update formula is shown in Equation (12):
X_{i,j}^{t+1} = X_{i,j}^{t} + rand \times \left( X_{best} - GRC \right) \quad (12)
GRC represents the characteristics of the guiding relationship between them, which can be generated by Equation (13):
GRC = \left( X_{i,j}^{t} + neighbor_{i,j}^{t} \right) / 2 \quad (13)
As shown in Figure 4, the black solid dots represent the sparrow individual in the population, the red five-pointed star represents the global optimal solution of the population, and the blue arrows represent the guiding direction. With the increase in iteration times, the sparrow will constantly move in the direction guided by the neighborhood, jump out of the current stagnation, and find the global optimal solution.
The pseudocode of neighborhood-guided learning (NGL) is described in Algorithm 2.
Algorithm 2: Pseudocode of NGL
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the optimal location X b e s t  
2: Output: the current new location X n e w  
3: while  ( t < M )  
4: for  i = 1 : N  
5: Use Equation (12) to update the sparrow’s location
6: Get the current new location X n e w  
7: end for
8: t = t + 1
9: end while
10: return X n e w  
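A minimal sketch of one NGL step following Equations (9)–(13); per the text, the new position would replace the old one only if its fitness is better (names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def ngl_update(X, i, X_best, lb, ub):
    """One NGL step, Eq. (9)-(13): a neighborhood point guides sparrow i
    toward the global best X_best."""
    dim = X.shape[1]
    X_rand = lb + rng.random(dim) * (ub - lb)                 # Eq. (11): random position
    nr = ((rng.random() * X_rand - rng.random() * X[i])
          * np.linalg.norm(X_best - X[i]))                    # Eq. (10): neighborhood factor
    neighbor = X[i] + rng.standard_normal(dim) * nr           # Eq. (9): neighborhood position
    GRC = (X[i] + neighbor) / 2                               # Eq. (13): guiding relationship
    return X[i] + rng.random() * (X_best - GRC)               # Eq. (12): guided update
```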

4.3. Adaptive Gaussian Random Walk

The ideal state of SSA is to conduct global search extensively in the early iteration period and local search in the late iteration period. Researchers have therefore adopted numerous strategies to improve search capability, achieving good results [27,28,29,30]. However, most studies only optimize the global search or the local search, improving the search capability of the corresponding stage, rather than focusing on how to coordinate strategies so that the algorithm transitions smoothly from exploration to exploitation. The Gaussian random walk strategy is a classic random walk strategy with strong exploitation ability [50,53]. Therefore, an adaptive Gaussian random walk strategy based on an adaptive control step size factor is proposed in this study to help SSA coordinate exploration and exploitation and thus obtain better overall search ability.
The position update formula under adaptive Gaussian random walk strategy is shown in Equation (14).
X_{i,j}^{t+1} = Gaussian(X_{i,j}^t, σ) + (r_1 × X_{best}^t − r_2 × X_{i,j}^t)
Gaussian(X_{i,j}^t, σ) is a Gaussian distribution with expectation X_{i,j}^t and standard deviation σ; X_{best}^t is the optimal solution of the t-th iteration; r_1 and r_2 are random numbers in (0, 1); σ is generated by Equation (15):
σ = w × (X_{i,j}^t − X_{worst}^t)
w  is the adaptive control step size factor, which adaptively adjusts the step size. X w o r s t t  is the worst solution of the t-th iteration. In the early stage of the algorithm, extensive global search is conducted with a high probability to try to obtain the maximum solution space, while in the late stage, local search is conducted to narrow the search scope and improve the convergence speed. Its expression is shown in Equation (16):
w = 1 / (1 + e^{5 × cos(π × (1 − i/T))})
The pseudocode of adaptive Gaussian random walk (AGRW) is described in Algorithm 3.
Algorithm 3: Pseudocode of AGRW
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the current optimal location X b e s t t , the current worst location X w o r s t t  
2: Output: the current new location X n e w  
3: while  ( t < M )  
4: for  i = 1 : N  
5: Use Equation (14) to update the sparrow’s location
6: Get the current new location X n e w  
7: end for
8: t = t + 1
9: end while
10: return X n e w  
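Equations (14)–(16) can be sketched as one update step. This is a minimal illustration, not the authors' implementation: `agrw_step` is a hypothetical name, the step-size factor is read as w = 1 / (1 + e^{5 × cos(π × (1 − t/t_max))}), and the absolute value on σ is an added implementation detail, since a Gaussian standard deviation must be nonnegative.

```python
import numpy as np

def agrw_step(x, x_best, x_worst, t, t_max, rng=None):
    """One adaptive Gaussian random walk step (Equations (14)-(16)).

    x, x_best, x_worst: current, best, and worst positions (1-D arrays).
    The abs() on sigma is an implementation detail added here.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    # Equation (16) as read here: close to 1 early (large steps, global
    # search), close to 0 late (small steps, local search)
    w = 1.0 / (1.0 + np.exp(5.0 * np.cos(np.pi * (1.0 - t / t_max))))
    # Equation (15): per-dimension standard deviation
    sigma = np.abs(w * (x - np.asarray(x_worst, dtype=float)))
    r1, r2 = rng.random(), rng.random()
    # Equation (14): Gaussian walk around x, pulled toward the best solution
    return rng.normal(x, sigma) + r1 * np.asarray(x_best, dtype=float) - r2 * x
```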

4.4. Priority Roulette Selection

Roulette wheel selection is the simplest and most commonly used selection method. In this method, the selection probability of each individual is proportional to its fitness value: the greater the fitness, the greater the selection probability. In practice, however, roulette selection is implemented not with each individual's selection probability directly but with the cumulative probability. The function of roulette wheel selection is that good individuals can be selected from the current population; good individuals have a greater chance to be retained and to pass their good information to the next generation, so that the population gradually approaches the optimal solution. Based on these advantages of the roulette wheel selection method [53,54,55,56], we can choose the strategy suitable for the current optimization stage by roulette selection.
Therefore, in this study, based on the idea of selective ensemble and multi-strategy, we design a priority roulette selection method that can dynamically select search strategies. Firstly, the priority of all strategies is set to 1, and the priority of each strategy changes as the search phase changes. According to the priority of each strategy, a suitable strategy is selected through the roulette wheel. The higher the priority of the strategy, the greater the probability of being selected, and vice versa. The probability of strategy selection is described by the following:
fit_i = 1 / s_i
PA_i = (1 / fit_i) / ∑_{i=1}^{3} (1 / fit_i)
PA_i is the probability of strategy i being selected, and s_i is the priority of strategy i.
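Because fit_i = 1/s_i, the selection probability of Equation (18) reduces to s_i / Σ_j s_j, and the pick walks the cumulative distribution as described above. A minimal sketch follows; `roulette_select` is a hypothetical helper, not the authors' code.

```python
import random

def roulette_select(priorities, rng=None):
    """Select a strategy index by priority roulette (Equations (17)-(18)).

    With fit_i = 1/s_i, PA_i reduces to s_i / sum(s_j): the higher the
    priority, the higher the chance of selection.
    """
    if rng is None:
        rng = random.Random()
    pick = rng.uniform(0.0, sum(priorities))
    cumulative = 0.0
    for i, s in enumerate(priorities):
        cumulative += s
        if pick <= cumulative and s > 0:
            return i
    return len(priorities) - 1  # floating-point guard

# Three strategies start at priority 1; a strategy that improves the
# solution is rewarded, raising its future selection odds.
priorities = [1, 1, 1]
chosen = roulette_select(priorities)
priorities[chosen] += 1
```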
Greedy choice is used in priority roulette selection to determine whether the solution is updated. Writing X′_{best} for the candidate location produced by the selected strategy, the greedy choice is described as Equation (19):
X_{best} = X_{best}, if f(X′_{best}) ≥ f(X_{best}); X′_{best}, if f(X′_{best}) < f(X_{best})
The pseudocode of dynamically adjusted strategy priority (DASP) is described in Algorithm 4.
Algorithm 4: Pseudocode of DASP
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the priority of each strategy S_m , the optimal fitness value f(X_{best}), the fitness value of the candidate location f(X′_{best})
2: Output: the current new location X n e w , the priority of each strategy S m  
3: while  ( t < M )  
4: for  i = 1 : N  
5: Use Equation (19) to determine whether the sparrow’s location needs to be updated
6: Get the current new location X n e w  
7: S m = S m + 1  
8: end for
9: t = t + 1
10: end while
11: return X n e w , S m  
As shown in Figure 5, the blue area with black solid dots represents the original population, the blue arrows represent the population strategy selection, the yellow area represents VLSSL, the purple area represents NGL, the green area represents AGRW, and the yellow arrows represent the results under the corresponding strategy. The red arrow represents the choice of strategy during the next iteration. If the original population changes from black solid dots to yellow solid dots through VLSSL, the previous solution is successfully updated. This means that the strategy is suitable for searching in the current period, the priority of VLSSL will increase, and the probability of VLSSL being selected will increase, and in the pie chart shown in Figure 6, the proportion of the yellow part will increase. Similarly, if the original population changes from black solid dots to purple solid dots through NGL, the priority of NGL will increase, the probability of NGL being selected will increase, and the proportion of the purple part in the pie chart will increase. If the original population changes from black solid dots to green solid dots through AGRW, the priority of AGRW will increase, the probability of AGRW being selected will increase, and the proportion of the green part in the pie chart will increase.

4.5. Modified Boundary Processing Mechanism

In SSA, once sparrows cross the boundary, their positions are reset onto the boundary itself, which easily causes aggregation at the boundary, loss of population diversity, and difficulty in carrying out a normal search. Therefore, the boundary processing mechanism of SSA should be modified. QMESSA [27] proposed a new boundary control method based on the location information of the optimal and suboptimal individuals, which achieved significant advantages. Therefore, in this study, considering the characteristics of the different roles in the sparrow population at the search stage, a modified boundary processing mechanism is proposed to dynamically adjust the transgressive sparrows’ locations. The random relocation method is proposed for discoverers and alerters so that they can further conduct global search in a large range, and the relocation method based on the optimal and suboptimal of the population is proposed for scroungers so that they can further conduct better local search.
Since discoverers and alerters need to conduct global search in a large range, the random relocation of transgressive sparrows can improve the possibility of obtaining the optimal solution. Therefore, the random relocation method is adopted for the stage of discoverers and alerters, and the adjustment method is shown in Equation (20):
X_{new} = ub − (ub − lb) × rand
ub is the upper bound of the search space, and lb is the lower bound of the search space.
Moreover, the scroungers easily cross the boundary, and they mainly move to the vicinity of the global optimum. Therefore, the overstepping sparrows are dynamically adjusted based on the optimal and suboptimal sparrows of the population, so that the overstepping sparrows at this stage are replaced with better ones and, in the next iteration, the search proceeds from locations better than the original ones. In this way, the exploitation capability of SSA is improved. The adjustment method is shown in Equation (21):
X_{i,j}^{t+1} = X_{best2}^t + r × (X_{best}^t − X_{best2}^t)
X_{best}^t and X_{best2}^t represent the global optimal solution and the global suboptimal solution in the t-th iteration, respectively, and r is a random number in (0, 1].
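The two relocation rules can be sketched together. Assumptions beyond the text: scalar bounds lb and ub, and relocating only the out-of-bounds dimensions (the paper does not specify whether the whole vector is relocated); `handle_bounds` is a hypothetical name, not the authors' code.

```python
import numpy as np

def handle_bounds(x, lb, ub, role, x_best=None, x_best2=None, rng=None):
    """Relocate a transgressive sparrow (modified boundary mechanism).

    role "discoverer"/"alerter": random relocation, Equation (20);
    role "scrounger": point between the suboptimal solution x_best2 and
    the optimal solution x_best, Equation (21). Only the out-of-bounds
    dimensions are relocated (one reading of the text).
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float).copy()
    out = (x < lb) | (x > ub)
    if not np.any(out):
        return x
    if role in ("discoverer", "alerter"):
        # Equation (20): X_new = ub - (ub - lb) * rand, i.e. uniform in [lb, ub]
        x[out] = ub - (ub - lb) * rng.random(np.count_nonzero(out))
    else:
        # Equation (21): X = X_best2 + r * (X_best - X_best2), r in (0, 1]
        r = rng.random()
        repl = np.asarray(x_best2, dtype=float) + r * (
            np.asarray(x_best, dtype=float) - np.asarray(x_best2, dtype=float))
        x[out] = repl[out]
    return x
```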
The pseudocode of the modified boundary processing mechanism (MBPM) is described in Algorithm 5.
Algorithm 5: Pseudocode of MBPM
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the number of discoverers P D , the number of alerters S D  
2: Output: the current new location X n e w  
3: while  ( t < M )  
4: for  i = 1 : P D  
5: Use Equation (20) to relocate transgressive sparrows
6: Get the current new location X n e w  
7: end for
8: for  i = ( P D + 1 ) : N  
9: Use Equation (21) to relocate transgressive sparrows
10: Get the current new location X n e w  
11: end for
12: for  i = 1 : S D  
13: Use Equation (20) to relocate transgressive sparrows
14: Get the current new location X n e w  
15: end for
16: t = t + 1
17: end while
18: return X n e w  

4.6. The Steps and Pseudocode of MSESSA

The steps of MSESSA are as follows, and the pseudocode of MSESSA is described in Algorithm 6:
  • Set basic parameters, such as population size N = 100, maximum number of iterations M = 500, the number of discoverers P_1 × N with P_1 = 20%, the number of scroungers P_2 × N with P_2 = 80%, the number of alerters P_3 × N with P_3 = 20%, and the objective function dimension;
  • Initialize the population, calculate the fitness values of all sparrows, and sort them to find out the optimal and worst sparrows;
  • Use Equation (3) to update the locations of discoverers;
  • Use Equation (20) to dynamically correct the positions of the transgressive sparrows;
  • Use Equation (4) to update the positions of scroungers;
  • Use Equation (21) to dynamically correct the positions of the transgressive sparrows;
  • Use Equation (5) to update the positions of alerters;
  • Use Equation (20) to dynamically correct the positions of the transgressive sparrows;
  • Use Equations (17) and (18) and the initial priority S m  of each strategy to calculate the selection probability of each strategy;
  • Use priority roulette selection S P m  for strategy selection. If S P m = 1 , Equation (6) is used to update the sparrow’s position. If S P m = 2 , then Equation (12) is used to update the sparrow’s position. If S P m = 3 , the sparrow’s position is updated using Equation (14);
  • Obtain the current position information and determine whether to update the position according to Equation (19). If the current position is better than the previous position, the position update will be carried out, and the priority of the strategy will be updated;
  • At the end of the iteration, the fitness value and position information of the optimal sparrow will be output.
Algorithm 6: Pseudocode of MSESSA
1: Input: the number of sparrows N , the current iteration number t , the maximum number of iterations M , the number of discoverers P D , the number of alerters S D , the priority of each strategy S m  
2: Output: the optimal fitness value f i t b e s t , the optimal location X b e s t  
3: Initialize a population of N sparrows
4: while  ( t < M )  
5: Rank the fitness values and find the current best sparrow and the current worst sparrow;
6: for  i = 1 : P D  
7: Use Equation (3) to update the discoverers;
8: Use Equation (20) to relocate transgressive sparrows;
9: Rank the fitness values and update the current best sparrow
10: end for
11: for  i = ( P D + 1 ) : N  
12: Use Equation (4) to update the scroungers
13: Use Equation (21) to relocate transgressive sparrows
14: Rank the fitness values and update the current best sparrow
15: end for
16: for  i = 1 : S D  
17: Use Equation (5) to update the alerters
18: Use Equation (20) to relocate transgressive sparrows
19: Rank the fitness values and update the current best sparrow
20: end for
21: for  i = 1 : N  
22: Calculate the probability of each strategy being selected from the priorities S_m using Equations (17) and (18)
23: Select a strategy by priority roulette selection S P m  
24: if  S P m = 1  
25: Use Equation (6) to update the sparrow’s location
26: end if
27: if  S P m = 2  
28: Use Equation (12) to update the sparrow’s location
29: end if
30: if  S P m = 3  
31: Use Equation (14) to update the sparrow’s location
32: end if
33: Get the current new location
34: If the new location is better than before, update it, and update the priority of each strategy S m  by Equation (19)
35: end for
36: t = t + 1
37: end while
38: return f i t b e s t , X b e s t  
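Lines 21–35 of Algorithm 6 (roulette selection, candidate generation, greedy acceptance, priority reward) can be sketched as one pass over the population. The three strategy callables stand in for Equations (6), (12), and (14); `dispatch_stage` and the scalar minimization objective are illustrative assumptions, not the authors' code.

```python
import random

def dispatch_stage(population, fitness, strategies, priorities, objective, rng=None):
    """One pass of strategy selection plus greedy update (Algorithm 6, lines 21-35).

    strategies: callables x -> candidate position, standing in for
    Equations (6), (12) and (14). population, fitness, and priorities
    are mutated in place; the objective is minimized.
    """
    if rng is None:
        rng = random.Random()
    for i, x in enumerate(population):
        # priority roulette (Equations (17)-(18) reduce to s_m / sum(s))
        pick = rng.uniform(0.0, sum(priorities))
        cumulative, m = 0.0, 0
        for m, s in enumerate(priorities):
            cumulative += s
            if pick <= cumulative:
                break
        candidate = strategies[m](x)
        f_new = objective(candidate)
        # greedy choice, Equation (19): keep whichever location is better
        if f_new < fitness[i]:
            population[i], fitness[i] = candidate, f_new
            priorities[m] += 1  # reward the strategy that suits this stage
    return population, fitness, priorities
```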

4.7. The Time Complexity Analysis of MSESSA

Time complexity can measure the efficiency of an algorithm. Suppose that the population size is N , the dimension is d i m , and the maximum number of iterations is M . The proportion of discoverers is P 1 ,   P 1 = 20 % , the proportion of scroungers is P 2 ,   P 2 = 80 % , and the proportion of alerters is P 3 ,   P 3 = 20 % .
Macroscopically, the time complexity of SSA is O(N × dim × M). Although MSESSA adds the variable logarithmic spiral saltation learning strategy, the neighborhood-guided learning strategy, the adaptive Gaussian random walk strategy, and the modified boundary processing mechanism, it does not change the algorithm structure. All the added operations are sparrow location updates; they only increase the constant amount of work per iteration and do not change the order of the time complexity. Therefore, the time complexity of MSESSA is O(N × dim × M), the same as that of SSA.
Microscopically, MSESSA adds a certain amount of computation. In the search stage, MSESSA introduces the variable logarithmic spiral saltation learning strategy, the neighborhood-guided learning strategy, and the adaptive Gaussian random walk strategy, and a strategy is selected according to the search characteristics of the current stage. If a strategy is applied in the discoverer stage, it adds O(P_1 × N × dim × M); in the scrounger stage, O(P_2 × N × dim × M); in the alerter stage, O(P_3 × N × dim × M). The same holds for each of the three strategies. In boundary processing, MSESSA introduces the modified boundary processing mechanism: random relocation of transgressive sparrows in the discoverer and alerter stages adds O(P_1 × N × dim × M) and O(P_3 × N × dim × M), and the dynamic adjustment based on the optimal and suboptimal sparrows in the scrounger stage adds O(P_2 × N × dim × M). None of these terms increases the order of magnitude of the algorithm, so the time complexity of MSESSA remains O(N × dim × M), the same as that of SSA.

5. Experiments and Results

In order to test the performance of MSESSA, experiments on MSESSA and 13 other algorithms are carried out on the CEC 2017 test suites (dimension = 30, dimension = 50, and dimension = 100). We choose SSA [20], GWO [16], WOA [17], PSO [15], ABC [18], FA [19], QMESSA [27], LSSA [28], AGWO [7], jWOA [8], VPPSO [9], NABC [13], and MSEFA [41] for comparison with MSESSA. These algorithms contain a variety of excellent search strategies and frameworks, and most of them are highly competitive algorithms proposed in recent years. Comparison with them can show the superiority of the proposed algorithm in dealing with the same problems and thus prove that the proposed algorithm is effective and can contribute to subsequent research. The parameters of the other 13 algorithms are taken from the original literature. For the proposed algorithm, the number of discoverers is P_1 × N with P_1 = 20%, the number of scroungers is P_2 × N with P_2 = 80%, and the number of alerters is P_3 × N with P_3 = 20%. CEC 2017 is a widely used test suite for effectively evaluating algorithm performance. In the CEC 2017 test suites, C01–C03 are unimodal functions, C04–C10 are simple multimodal functions, C11–C20 are mixed functions, and C21–C30 are combined functions. It should be noted that C02 is not tested in this study. CEC 2017 includes classical test functions such as the sphere function, and the available test dimensions are 10, 30, 50, and 100. The CEC 2017 unconstrained test problems become extremely difficult to solve as the dimension increases. For fairness, during the experiment the population size is set to 100, the maximum number of iterations is set to 500, each algorithm independently runs 30 times, all mean values and standard deviations are calculated, and the optimal values are shown in bold. The test results on the CEC 2017 test suites (dimension = 30, dimension = 50, and dimension = 100) are shown in Table A1, Table A2 and Table A3.
The convergence curves and the stacked histogram of ranking under the corresponding test functions are shown in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.

5.1. Comparison of Results on CEC 2017 Functions (Dim = 30)

According to the convergence curves of each function in Figure 7, MSESSA achieves better results in C01, C04, C06, C10, C11, C12, C13, C17, C19, C20, C25, C27, C28, C29, and C30 on the CEC 2017 test suites (dimension = 30). MSESSA presents better performance in terms of unimodal functions, simple multimodal functions, mixed functions, and combined functions. Although the optimization values of unimodal function C03, simple multimodal function C07, mixed function C16, and combined functions C22, C26, C28, and C29 are slightly worse than those of MSEFA, NABC, and ABC, it can be seen from the convergence curves that MSESSA has strong capability in both early global search and late local search, and its performance is stable from the perspective of the overall search process. Figure 8 intuitively shows the stacked histogram of ranking for all algorithms on the CEC 2017 test suites (dimension = 30). It can be seen that MSESSA ranks in the top five on all functions, ranking first on 12 functions, second on 4 functions, third on 4 functions, fourth on 6 functions, and fifth on 3 functions, which demonstrates MSESSA's strong performance. According to Table A1 in the Appendix A section, MSESSA has the largest number of bold (optimal) values. It is worth mentioning that on unimodal function C01, only SSA and NABC are in the same order of magnitude as MSESSA, and on simple multimodal function C06 and combined function C30, the optimal value of MSESSA is far ahead of the other algorithms. In summary, MSESSA has the best comprehensive performance on the CEC 2017 test suites (dimension = 30).
Figure 7. The convergence curves of MSESSA and other 13 algorithms based on CEC 2017 (Dim = 30).
Figure 8. The stacked histogram of ranking for MSESSA and other 13 algorithms based on CEC 2017 (Dim = 30).

5.2. Comparison of Results on CEC 2017 Functions (Dim = 50)

According to the convergence curves of each function in Figure 9, MSESSA achieves better results in C01, C03, C04, C06, C10, C11, C12, C13, C15, C19, C25, C27, C28, C29, and C30 on the CEC 2017 test suites (dimension = 50). MSESSA presents better performance in terms of unimodal functions, simple multimodal functions, mixed functions, and combined functions. Although the optimization values of simple multimodal function C09, mixed functions C16, C17, C18, and C20, and combined functions C23, C26, and C29 are slightly worse than those of MSEFA and NABC, it can be seen from the convergence curves that MSESSA has strong capability in both early global search and late local search, and its performance is stable from the perspective of the overall search process. Figure 10 intuitively shows the stacked histogram of ranking for all algorithms on the CEC 2017 test suites (dimension = 50). It can be seen that MSESSA ranks in the top eight on all functions, ranking first on 12 functions, second on 2 functions, third on 6 functions, fourth on 3 functions, fifth on 4 functions, seventh on 1 function, and eighth on 1 function. This indicates that MSESSA's advantages are reduced compared with the CEC 2017 test suites (dimension = 30); however, compared with the other algorithms in the same dimension, MSESSA still shows strong performance. As can be seen from Table A2 in the Appendix A section, MSEFA has the largest number of bold entries in total, followed by MSESSA, but MSESSA has the largest number of bold mean values: 12 of its 16 bold entries are means, whereas only 5 of MSEFA's 18 bold entries are means. The mean value mainly reflects the central tendency of the experimental data, and the standard deviation mainly reflects the degree of data dispersion; that is, MSESSA has better optimization ability, while MSEFA is more stable.
It is worth mentioning that the optimal value of MSESSA is far ahead of other algorithms in terms of unimodal function C01 and combined function C30. In summary, MSESSA has the best comprehensive performance on CEC 2017 test suites (dimension = 50).
Figure 9. The convergence curves of MSESSA and other 13 algorithms based on CEC 2017 (Dim = 50).
Figure 10. The stacked histogram of ranking for MSESSA and other 13 algorithms based on CEC 2017 (Dim = 50).

5.3. Comparison of Results on CEC 2017 Functions (Dim = 100)

According to the convergence curves of each function in Figure 11, MSESSA achieves better results in C03, C04, C06, C10, C11, C12, C13, C15, C17, C19, C21, C22, C25, C27, C29, and C30 on the CEC 2017 test suites (dimension = 100). In other words, MSESSA presents better performance in terms of unimodal functions, simple multimodal functions, mixed functions, and combined functions. Although the optimization values of simple multimodal function C09, mixed function C16, and combined function C26 are slightly worse than those of MSEFA, NABC, and GWO, it can be seen from the convergence curves that MSESSA has strong capability in both early global search and late local search, and its performance is stable from the perspective of the overall search process. Figure 12 intuitively shows the stacked histogram of ranking for all algorithms on the CEC 2017 test suites (dimension = 100). It can be seen that MSESSA is in the top four on all functions, ranking first on 12 functions, second on 9 functions, third on 5 functions, and fourth on 3 functions, which demonstrates MSESSA's strong performance. As can be seen from Table A3 in the Appendix A section, MSEFA has the largest number of bold entries in total, followed by MSESSA; MSESSA has the largest number of bold mean values, 12 of its 15 bold entries, whereas only 5 of MSEFA's 18 bold entries are means. The mean value mainly reflects the central tendency of the experimental data, and the standard deviation mainly reflects the degree of data dispersion; that is, MSESSA has better optimization ability, while MSEFA is more stable. It is worth mentioning that the optimal value of MSESSA is far ahead of the other algorithms in terms of mixed function C13, mixed function C15, and combined function C30.
In summary, MSESSA has the best comprehensive performance on CEC 2017 test suites (dimension = 100).
Figure 11. The convergence curves of MSESSA and other 13 algorithms based on CEC 2017 (Dim = 100).
Figure 12. The stacked histogram of ranking for MSESSA and other 13 algorithms based on CEC 2017 (Dim = 100).
In conclusion, in the CEC 2017 test suites experiment, compared with other advanced algorithms in dealing with problems of different dimensions, MSESSA has good optimization performance and is competitive among many algorithms. However, with the increase in dimensions, MSESSA’s advantages slightly decrease.

5.4. Wilcoxon Rank-Sum Test

However, it is far from enough to evaluate the performance of MSESSA only through the mean and variance data in CEC 2017 tests, which is one-sided and not persuasive enough. In order to comprehensively evaluate the performance of the algorithm, the Wilcoxon rank-sum test is used in this study to test whether MSESSA has significant differences from other algorithms. During the experiment, the population is set to 100, the maximum number of iterations is set to 500, and each algorithm independently runs 30 times. Table A4, Table A5 and Table A6 show the rank-sum test results on CEC 2017 test suites (dimension = 30, dimension = 50, and dimension = 100) at the significance level of p = 5%. When p < 5%, it is considered that there are significant differences between algorithms; when p > 5%, it is considered that there are no significant differences between algorithms, and the data will be marked with underlines. NaN indicates that the results of the two algorithms are too similar to make a significant judgment.
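For reference, the rank-sum comparison of two sets of 30 runs can be sketched with the normal approximation, which is reasonable at this sample size. `rank_sum_test` is a hypothetical pure-Python helper, not the authors' code; in practice a library routine such as `scipy.stats.ranksums` performs the same test.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (z, p). Ties receive averaged ranks.
    """
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):                 # assign average ranks to ties
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0   # average of ranks i+1 .. j
        i = j
    w = sum(r for r, (_, lab) in zip(ranks, pooled) if lab == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0          # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return z, math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```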
From Table A4, Table A5 and Table A6 in the Appendix A section, it can be seen that NaN does not appear in any of the three tables, and there are not many underlined ones, which indicates that MSESSA has low similarity with its competitors in search results on CEC 2017 test suites (dimension = 30, dimension = 50, and dimension = 100); that is, there are significant differences in optimization results. Therefore, the optimization performance of the proposed MSESSA on CEC2017 test suites is significantly different from other algorithms. Combined with the analysis of CEC2017 test suites results in Table A1, Table A2 and Table A3 in the Appendix A section, it can be seen that the comprehensive performance of MSESSA is the most prominent among many algorithms. Therefore, it can be verified that MSESSA has certain core competitiveness and better performance compared with other algorithms.
In conclusion, in the Wilcoxon rank-sum test experiment, compared with other advanced algorithms in dealing with problems of different dimensions, MSESSA has a good optimization performance and is competitive among many algorithms. However, with the increase in dimensions, MSESSA’s advantages slightly decrease.

5.5. Ablation Experiment Test

In addition, in order to better verify the role of the proposed strategies in the optimization process and the validity of the idea of selective ensemble, an ablation experiment is carried out comparing MSESSA with SSA, the sparrow search algorithm that only uses the variable logarithmic spiral saltation learning strategy (SSA1), the sparrow search algorithm that only uses the neighborhood-guided learning strategy (SSA2), the sparrow search algorithm that only uses the adaptive Gaussian random walk strategy (SSA3), the sparrow search algorithm that only uses the modified boundary processing mechanism (SSA4), and the sparrow search algorithm that ensembles those four strategies in the search process (MESSA). In the experiment, the population is set to 100, the maximum number of iterations is set to 500, and each algorithm independently runs 30 times to calculate the mean value and standard deviation. Table 1, Table 2 and Table 3 list the test results on the CEC 2017 test suites (dimension = 30, dimension = 50, and dimension = 100).
Table 1. Ablation experiment tests on CEC 2017 functions (Dim = 30).
| Function | Index | SSA | SSA1 | SSA2 | SSA3 | SSA4 | MESSA | MSESSA |
|---|---|---|---|---|---|---|---|---|
| C01 | Mean | 8.549 × 10^3 | 9.951 × 10^3 | 7.051 × 10^3 | 7.237 × 10^3 | 9.967 × 10^3 | 6.519 × 10^3 | **1.936 × 10^3** |
| | Std | 5.735 × 10^3 | 6.598 × 10^3 | 4.479 × 10^3 | 6.291 × 10^3 | 8.048 × 10^3 | 5.035 × 10^3 | **1.896 × 10^3** |
| C03 | Mean | **6.769 × 10^3** | 4.134 × 10^4 | 4.719 × 10^4 | 4.213 × 10^4 | 4.885 × 10^4 | 4.816 × 10^4 | 2.157 × 10^4 |
| | Std | 7.913 × 10^3 | 8.848 × 10^3 | 7.832 × 10^3 | 8.374 × 10^3 | 7.468 × 10^3 | 7.826 × 10^3 | **4.454 × 10^3** |
| C04 | Mean | 1.148 × 10^2 | 1.407 × 10^2 | 1.092 × 10^2 | 1.185 × 10^2 | 8.913 × 10^1 | 9.743 × 10^1 | **8.254 × 10^1** |
| | Std | 2.961 × 10^1 | 3.219 × 10^1 | 2.651 × 10^1 | **1.883 × 10^1** | 3.194 × 10^1 | 2.819 × 10^1 | 3.139 × 10^1 |
| C05 | Mean | 2.507 × 10^2 | 2.062 × 10^2 | 2.292 × 10^2 | 2.159 × 10^2 | 2.509 × 10^2 | 2.298 × 10^2 | **1.137 × 10^2** |
| | Std | 4.983 × 10^1 | 4.880 × 10^1 | 5.641 × 10^1 | 5.672 × 10^1 | 4.779 × 10^1 | 5.426 × 10^1 | **3.044 × 10^1** |
| C06 | Mean | 4.847 × 10^1 | 2.151 × 10^1 | 4.725 × 10^1 | 2.071 × 10^1 | 5.144 × 10^1 | 4.653 × 10^1 | **1.166 × 10^−2** |
| | Std | 9.471 × 10^0 | 1.270 × 10^1 | 9.658 × 10^0 | 1.059 × 10^1 | 8.855 × 10^0 | 1.030 × 10^1 | **2.600 × 10^−3** |
| C07 | Mean | 5.360 × 10^2 | 4.327 × 10^2 | 5.474 × 10^2 | 4.681 × 10^2 | 5.696 × 10^2 | 5.159 × 10^2 | **1.904 × 10^2** |
| | Std | 6.095 × 10^1 | 1.309 × 10^2 | 9.250 × 10^1 | 1.405 × 10^2 | 7.161 × 10^1 | 9.828 × 10^1 | **2.539 × 10^1** |
| C08 | Mean | 1.625 × 10^2 | 1.483 × 10^2 | 1.793 × 10^2 | 1.498 × 10^2 | 1.724 × 10^2 | 1.732 × 10^2 | **1.024 × 10^2** |
| | Std | 3.109 × 10^1 | 2.338 × 10^1 | 2.991 × 10^1 | **1.926 × 10^1** | 3.474 × 10^1 | 2.360 × 10^1 | 2.112 × 10^1 |
| C09 | Mean | 4.450 × 10^3 | 4.650 × 10^3 | 4.520 × 10^3 | 4.480 × 10^3 | 4.478 × 10^3 | 4.366 × 10^3 | **1.480 × 10^2** |
| | Std | 1.744 × 10^2 | 5.602 × 10^2 | 1.751 × 10^2 | 7.877 × 10^2 | 3.453 × 10^2 | 4.465 × 10^2 | **1.877 × 10^1** |
| C10 | Mean | 4.409 × 10^3 | 3.990 × 10^3 | 4.435 × 10^3 | 4.118 × 10^3 | 4.160 × 10^3 | 4.331 × 10^3 | **3.119 × 10^3** |
| | Std | 6.839 × 10^2 | 7.098 × 10^2 | 6.395 × 10^2 | 7.584 × 10^2 | 6.188 × 10^2 | 6.381 × 10^2 | **5.770 × 10^2** |
| C11 | Mean | 1.632 × 10^2 | 1.925 × 10^2 | 1.535 × 10^2 | 1.893 × 10^2 | 1.354 × 10^2 | 1.596 × 10^2 | **1.029 × 10^2** |
| | Std | 6.461 × 10^1 | 5.870 × 10^1 | 4.381 × 10^1 | 5.115 × 10^1 | 6.354 × 10^1 | 5.066 × 10^1 | **4.095 × 10^1** |
| C12 | Mean | 1.625 × 10^6 | 7.439 × 10^6 | 1.781 × 10^6 | 6.943 × 10^6 | 2.579 × 10^6 | 1.861 × 10^6 | **1.257 × 10^6** |
| | Std | 1.592 × 10^6 | 5.319 × 10^6 | 1.415 × 10^6 | 7.040 × 10^6 | 1.877 × 10^6 | 1.255 × 10^6 | **5.376 × 10^5** |
| C13 | Mean | 1.621 × 10^4 | 7.841 × 10^4 | 1.539 × 10^4 | 5.261 × 10^4 | 1.008 × 10^4 | 1.776 × 10^4 | **8.717 × 10^3** |
| | Std | 1.694 × 10^4 | 1.575 × 10^5 | 1.773 × 10^4 | 1.401 × 10^5 | **1.139 × 10^4** | 1.516 × 10^4 | 1.250 × 10^4 |
| C14 | Mean | 4.179 × 10^4 | 8.679 × 10^4 | 3.264 × 10^4 | 8.580 × 10^4 | 6.952 × 10^4 | 4.410 × 10^4 | **1.132 × 10^4** |
| | Std | 3.943 × 10^4 | 1.077 × 10^5 | 2.614 × 10^4 | 8.212 × 10^4 | 4.879 × 10^4 | 4.170 × 10^4 | **5.591 × 10^3** |
| C15 | Mean | 7.314 × 10^3 | 2.730 × 10^4 | 1.082 × 10^4 | 1.217 × 10^4 | **4.170 × 10^3** | 7.526 × 10^3 | 6.737 × 10^3 |
| | Std | 9.864 × 10^3 | 1.043 × 10^5 | 1.317 × 10^4 | 2.069 × 10^4 | **4.458 × 10^3** | 1.111 × 10^4 | 7.053 × 10^3 |
| C16 | Mean | 1.359 × 10^3 | 1.311 × 10^3 | 1.403 × 10^3 | 1.215 × 10^3 | 1.269 × 10^3 | 1.333 × 10^3 | **9.145 × 10^2** |
| | Std | 2.553 × 10^2 | 2.827 × 10^2 | 3.696 × 10^2 | 3.555 × 10^2 | 3.642 × 10^2 | 3.388 × 10^2 | **1.222 × 10^2** |
| C17 | Mean | 7.865 × 10^2 | 6.135 × 10^2 | 7.649 × 10^2 | 6.690 × 10^2 | 7.505 × 10^2 | 7.227 × 10^2 | **2.226 × 10^2** |
| | Std | 1.760 × 10^2 | 2.523 × 10^2 | 2.618 × 10^2 | 2.664 × 10^2 | 1.998 × 10^2 | 2.955 × 10^2 | **4.537 × 10^1** |
| C18 | Mean | 5.350 × 10^5 | 5.976 × 10^5 | 5.508 × 10^5 | 1.136 × 10^6 | 7.154 × 10^5 | 5.334 × 10^5 | **1.491 × 10^5** |
| | Std | 5.206 × 10^5 | 6.950 × 10^5 | 5.826 × 10^5 | 9.664 × 10^5 | 1.066 × 10^6 | 6.695 × 10^5 | **6.904 × 10^4** |
| C19 | Mean | 8.556 × 10^3 | 7.216 × 10^3 | 4.577 × 10^3 | 1.446 × 10^4 | 5.219 × 10^3 | 1.456 × 10^4 | **2.820 × 10^3** |
| | Std | 1.369 × 10^4 | 1.000 × 10^4 | 5.575 × 10^3 | 3.023 × 10^4 | 5.345 × 10^3 | 1.701 × 10^4 | **1.887 × 10^3** |
| C20 | Mean | 6.480 × 10^2 | 5.678 × 10^2 | 7.652 × 10^2 | 6.087 × 10^2 | 7.867 × 10^2 | 7.333 × 10^2 | **2.836 × 10^2** |
| | Std | 2.353 × 10^2 | 2.009 × 10^2 | 2.295 × 10^2 | 2.251 × 10^2 | 2.189 × 10^2 | 2.293 × 10^2 | **8.310 × 10^1** |
| C21 | Mean | 4.024 × 10^2 | 3.641 × 10^2 | 3.773 × 10^2 | 3.649 × 10^2 | 4.053 × 10^2 | 4.113 × 10^2 | **3.191 × 10^2** |
| | Std | 4.792 × 10^1 | 3.581 × 10^1 | 5.079 × 10^1 | 4.498 × 10^1 | 3.623 × 10^1 | 3.934 × 10^1 | **3.130 × 10^1** |
| C22 | Mean | 2.885 × 10^3 | 2.635 × 10^3 | 2.133 × 10^3 | 2.843 × 10^3 | 2.868 × 10^3 | 3.772 × 10^3 | **9.891 × 10^2** |
| | Std | 2.207 × 10^3 | 2.290 × 10^3 | 2.417 × 10^3 | 2.470 × 10^3 | 2.230 × 10^3 | 2.143 × 10^3 | **1.682 × 10^3** |
| C23 | Mean | 6.088 × 10^2 | 5.265 × 10^2 | 6.406 × 10^2 | 5.157 × 10^2 | 6.868 × 10^2 | 6.521 × 10^2 | **5.018 × 10^2** |
| | Std | 7.659 × 10^1 | 5.645 × 10^1 | 9.241 × 10^1 | 4.039 × 10^1 | 1.025 × 10^2 | 6.949 × 10^1 | **3.754 × 10^1** |
| C24 | Mean | 6.897 × 10^2 | 6.384 × 10^2 | 6.826 × 10^2 | 6.219 × 10^2 | 8.092 × 10^2 | 7.008 × 10^2 | **5.378 × 10^2** |
| | Std | 8.497 × 10^1 | 7.641 × 10^1 | 9.012 × 10^1 | **5.309 × 10^1** | 8.801 × 10^1 | 9.221 × 10^1 | 6.224 × 10^1 |
| C25 | Mean | 4.021 × 10^2 | 4.102 × 10^2 | 4.003 × 10^2 | 4.188 × 10^2 | **3.866 × 10^2** | 4.013 × 10^2 | 3.893 × 10^2 |
| | Std | 1.444 × 10^1 | 1.842 × 10^1 | 1.728 × 10^1 | 1.957 × 10^1 | **1.286 × 10^1** | 1.798 × 10^1 | 1.612 × 10^1 |
| C26 | Mean | 3.545 × 10^3 | 2.628 × 10^3 | 3.213 × 10^3 | 3.050 × 10^3 | 3.184 × 10^3 | 4.025 × 10^3 | **1.999 × 10^3** |
| | Std | 1.183 × 10^3 | 1.229 × 10^3 | 1.598 × 10^3 | **8.824 × 10^2** | 1.260 × 10^3 | 1.353 × 10^3 | 2.124 × 10^3 |
| C27 | Mean | 5.754 × 10^2 | 5.472 × 10^2 | 5.630 × 10^2 | 5.395 × 10^2 | 5.000066 × 10^2 | 5.750 × 10^2 | **5.000060 × 10^2** |
| | Std | 4.742 × 10^1 | 1.920 × 10^1 | 4.169 × 10^1 | 1.907 × 10^1 | **2.539 × 10^−4** | 3.018 × 10^1 | 3.215 × 10^−4 |
| C28 | Mean | 4.377 × 10^2 | 4.833 × 10^2 | **4.323 × 10^2** | 4.862 × 10^2 | 4.730 × 10^2 | 4.398 × 10^2 | 4.562 × 10^2 |
| | Std | 2.243 × 10^1 | **2.166 × 10^1** | 2.268 × 10^1 | 3.206 × 10^1 | 3.315 × 10^1 | 2.492 × 10^1 | 3.926 × 10^1 |
| C29 | Mean | 1.234 × 10^3 | 1.074 × 10^3 | 1.175 × 10^3 | 1.047 × 10^3 | 1.004 × 10^3 | 1.191 × 10^3 | **7.867 × 10^2** |
| | Std | 2.555 × 10^2 | 2.668 × 10^2 | 2.484 × 10^2 | 2.479 × 10^2 | 2.747 × 10^2 | 2.995 × 10^2 | **2.168 × 10^2** |
| C30 | Mean | 1.431 × 10^4 | 3.537 × 10^4 | 1.533 × 10^4 | 1.361 × 10^4 | 4.575 × 10^3 | 1.488 × 10^4 | **3.068 × 10^3** |
| | Std | 8.584 × 10^3 | 9.373 × 10^4 | 1.089 × 10^4 | 7.351 × 10^3 | 5.438 × 10^3 | 7.138 × 10^3 | **3.586 × 10^3** |
Bold values indicate the optimal results.
Table 2. Ablation experiment tests on CEC 2017 functions (Dim = 50).
Function | Index | SSA | SSA1 | SSA2 | SSA3 | SSA4 | MESSA | MSESSA
C01 | Mean | 1.471×10^6 | 1.479×10^6 | 1.630×10^6 | 1.467×10^6 | 1.715×10^6 | 1.427×10^6 | 6.947×10^3
C01 | Std | 6.635×10^5 | 6.116×10^5 | 6.910×10^5 | 6.493×10^5 | 7.048×10^5 | 4.971×10^5 | 4.534×10^3
C03 | Mean | 2.091×10^5 | 1.615×10^5 | 2.166×10^5 | 1.666×10^5 | 2.242×10^5 | 1.963×10^5 | 4.623×10^4
C03 | Std | 4.565×10^4 | 3.446×10^4 | 4.678×10^4 | 3.336×10^4 | 5.252×10^4 | 3.867×10^4 | 1.484×10^4
C04 | Mean | 1.908×10^2 | 2.803×10^2 | 1.828×10^2 | 2.697×10^2 | 1.826×10^2 | 1.834×10^2 | 1.555×10^2
C04 | Std | 5.517×10^1 | 8.004×10^1 | 4.936×10^1 | 5.777×10^1 | 3.950×10^1 | 4.556×10^1 | 5.302×10^1
C05 | Mean | 3.665×10^2 | 3.600×10^2 | 3.828×10^2 | 3.628×10^2 | 3.750×10^2 | 3.824×10^2 | 2.441×10^2
C05 | Std | 3.133×10^1 | 2.878×10^1 | 1.958×10^1 | 3.840×10^1 | 2.072×10^1 | 3.306×10^1 | 3.842×10^1
C06 | Mean | 6.327×10^1 | 4.514×10^1 | 6.466×10^1 | 4.318×10^1 | 6.314×10^1 | 6.342×10^1 | 2.455×10^0
C06 | Std | 7.087×10^0 | 1.364×10^1 | 5.640×10^0 | 1.232×10^1 | 4.653×10^0 | 4.813×10^0 | 8.202×10^-1
C07 | Mean | 1.044×10^3 | 9.814×10^2 | 1.041×10^3 | 9.440×10^2 | 1.043×10^3 | 1.046×10^3 | 3.668×10^2
C07 | Std | 7.600×10^1 | 9.173×10^1 | 6.760×10^1 | 1.177×10^2 | 6.171×10^1 | 6.215×10^1 | 3.450×10^1
C08 | Mean | 3.955×10^2 | 3.789×10^2 | 3.942×10^2 | 3.904×10^2 | 3.977×10^2 | 3.937×10^2 | 2.288×10^2
C08 | Std | 3.523×10^1 | 4.579×10^1 | 3.007×10^1 | 4.979×10^1 | 3.615×10^1 | 3.739×10^1 | 4.132×10^1
C09 | Mean | 1.244×10^4 | 1.501×10^4 | 1.221×10^4 | 1.453×10^4 | 1.239×10^4 | 1.280×10^4 | 8.450×10^3
C09 | Std | 1.261×10^3 | 2.386×10^3 | 1.282×10^3 | 2.795×10^3 | 1.047×10^3 | 1.254×10^3 | 2.080×10^3
C10 | Mean | 7.636×10^3 | 7.382×10^3 | 7.342×10^3 | 7.753×10^3 | 7.488×10^3 | 7.543×10^3 | 5.484×10^3
C10 | Std | 7.927×10^2 | 1.242×10^3 | 8.090×10^2 | 1.163×10^3 | 7.868×10^2 | 8.922×10^2 | 6.309×10^2
C11 | Mean | 3.043×10^2 | 4.172×10^2 | 3.360×10^2 | 4.442×10^2 | 3.329×10^2 | 3.098×10^2 | 1.798×10^2
C11 | Std | 8.496×10^1 | 1.040×10^2 | 6.493×10^1 | 1.100×10^2 | 8.804×10^1 | 6.662×10^1 | 3.197×10^1
C12 | Mean | 1.313×10^7 | 6.711×10^7 | 1.633×10^7 | 6.467×10^7 | 1.599×10^7 | 1.160×10^7 | 6.261×10^6
C12 | Std | 6.892×10^6 | 5.309×10^7 | 8.226×10^6 | 4.173×10^7 | 1.067×10^7 | 6.360×10^6 | 5.043×10^6
C13 | Mean | 2.721×10^4 | 2.830×10^5 | 1.727×10^4 | 2.857×10^5 | 4.220×10^4 | 1.956×10^4 | 4.363×10^3
C13 | Std | 2.669×10^4 | 8.579×10^5 | 1.260×10^4 | 7.000×10^5 | 3.866×10^4 | 1.360×10^4 | 6.047×10^3
C14 | Mean | 3.155×10^5 | 6.362×10^5 | 4.905×10^5 | 6.475×10^5 | 4.438×10^5 | 4.255×10^5 | 3.385×10^5
C14 | Std | 1.606×10^5 | 4.901×10^5 | 3.233×10^5 | 5.574×10^5 | 3.453×10^5 | 2.565×10^5 | 2.498×10^5
C15 | Mean | 1.410×10^4 | 1.995×10^4 | 1.627×10^4 | 4.292×10^4 | 1.908×10^4 | 1.511×10^4 | 1.469×10^4
C15 | Std | 1.238×10^4 | 1.592×10^4 | 8.102×10^3 | 1.529×10^5 | 1.897×10^4 | 1.328×10^4 | 1.528×10^4
C16 | Mean | 2.333×10^3 | 2.204×10^3 | 2.099×10^3 | 2.197×10^3 | 2.342×10^3 | 2.454×10^3 | 2.179×10^3
C16 | Std | 4.003×10^2 | 5.176×10^2 | 4.334×10^2 | 3.828×10^2 | 5.200×10^2 | 5.927×10^2 | 5.456×10^2
C17 | Mean | 1.875×10^3 | 1.779×10^3 | 1.843×10^3 | 1.693×10^3 | 1.951×10^3 | 1.678×10^3 | 1.649×10^3
C17 | Std | 4.481×10^2 | 3.634×10^2 | 3.989×10^2 | 4.043×10^2 | 4.353×10^2 | 1.678×10^3 | 3.443×10^2
C18 | Mean | 2.191×10^6 | 3.682×10^6 | 2.668×10^6 | 4.334×10^6 | 2.922×10^6 | 2.653×10^6 | 1.961×10^6
C18 | Std | 1.277×10^6 | 2.578×10^6 | 1.385×10^6 | 3.166×10^6 | 1.865×10^6 | 2.348×10^6 | 1.048×10^6
C19 | Mean | 1.684×10^4 | 2.897×10^4 | 1.728×10^4 | 1.934×10^4 | 2.474×10^4 | 1.845×10^4 | 1.099×10^4
C19 | Std | 1.131×10^4 | 3.193×10^4 | 1.162×10^4 | 1.385×10^4 | 2.199×10^4 | 1.238×10^4 | 9.587×10^3
C20 | Mean | 1.532×10^3 | 1.318×10^3 | 1.667×10^3 | 1.432×10^3 | 1.573×10^3 | 1.658×10^3 | 1.297×10^3
C20 | Std | 3.737×10^2 | 2.580×10^2 | 2.815×10^2 | 3.284×10^2 | 3.519×10^2 | 3.268×10^2 | 2.918×10^2
C21 | Mean | 6.386×10^2 | 5.334×10^2 | 6.680×10^2 | 5.252×10^2 | 6.987×10^2 | 6.947×10^2 | 4.561×10^2
C21 | Std | 7.417×10^1 | 5.033×10^1 | 9.858×10^1 | 4.808×10^1 | 9.675×10^1 | 9.788×10^1 | 6.032×10^1
C22 | Mean | 7.932×10^3 | 8.612×10^3 | 8.393×10^3 | 8.492×10^3 | 8.202×10^3 | 8.073×10^3 | 6.188×10^3
C22 | Std | 1.009×10^3 | 1.321×10^3 | 8.335×10^2 | 1.331×10^3 | 1.170×10^3 | 1.177×10^3 | 7.416×10^2
C23 | Mean | 1.093×10^3 | 8.612×10^2 | 1.055×10^3 | 8.447×10^2 | 1.068×10^3 | 1.041×10^3 | 7.713×10^2
C23 | Std | 1.205×10^2 | 8.154×10^1 | 1.414×10^2 | 7.427×10^1 | 1.522×10^2 | 1.234×10^2 | 7.208×10^1
C24 | Mean | 1.137×10^3 | 1.017×10^3 | 1.105×10^3 | 1.003×10^3 | 1.467×10^3 | 1.088×10^3 | 7.302×10^2
C24 | Std | 1.448×10^2 | 1.423×10^2 | 1.148×10^2 | 1.462×10^2 | 1.487×10^2 | 1.384×10^2 | 9.637×10^1
C25 | Mean | 6.141×10^2 | 6.775×10^2 | 6.150×10^2 | 6.756×10^2 | 5.665×10^2 | 6.262×10^2 | 5.499×10^2
C25 | Std | 2.208×10^1 | 4.841×10^1 | 2.884×10^1 | 3.841×10^1 | 3.192×10^1 | 3.064×10^1 | 3.163×10^1
C26 | Mean | 6.515×10^3 | 3.902×10^3 | 6.812×10^3 | 3.916×10^3 | 6.673×10^3 | 6.324×10^3 | 3.902×10^3
C26 | Std | 2.628×10^3 | 3.137×10^3 | 2.835×10^3 | 2.938×10^3 | 3.008×10^3 | 3.374×10^3 | 3.258×10^3
C27 | Mean | 1.034×10^3 | 8.149×10^2 | 9.655×10^2 | 8.402×10^2 | 5.00012×10^2 | 1.076×10^3 | 5.00011×10^2
C27 | Std | 1.281×10^2 | 9.237×10^1 | 1.916×10^2 | 1.321×10^2 | 2.090×10^-4 | 2.044×10^2 | 3.220×10^-4
C28 | Mean | 5.871×10^2 | 6.764×10^2 | 5.848×10^2 | 6.696×10^2 | 5.031×10^2 | 5.921×10^2 | 5.000×10^2
C28 | Std | 4.508×10^1 | 5.407×10^1 | 4.147×10^1 | 5.663×10^1 | 1.199×10^1 | 3.466×10^1 | 3.310×10^-4
C29 | Mean | 2.344×10^3 | 1.779×10^3 | 2.248×10^3 | 1.885×10^3 | 1.954×10^3 | 2.312×10^3 | 1.537×10^3
C29 | Std | 3.690×10^2 | 2.849×10^2 | 3.892×10^2 | 3.093×10^2 | 4.092×10^2 | 3.989×10^2 | 4.108×10^2
C30 | Mean | 1.445×10^6 | 1.461×10^6 | 1.545×10^6 | 1.523×10^6 | 6.387×10^3 | 1.476×10^6 | 6.158×10^3
C30 | Std | 5.693×10^5 | 3.585×10^5 | 6.347×10^5 | 7.248×10^5 | 6.987×10^3 | 4.668×10^5 | 5.932×10^3
Bold values indicate the optimal results.
Table 3. Ablation experiment tests on CEC 2017 functions (Dim = 100).
Function | Index | SSA | SSA1 | SSA2 | SSA3 | SSA4 | MESSA | MSESSA
C01 | Mean | 3.400×10^8 | 3.134×10^8 | 3.167×10^8 | 3.085×10^8 | 3.544×10^8 | 3.080×10^8 | 1.865×10^7
C01 | Std | 1.320×10^8 | 9.889×10^7 | 9.049×10^7 | 1.048×10^8 | 1.084×10^8 | 1.195×10^8 | 8.154×10^6
C03 | Mean | 5.756×10^5 | 4.860×10^5 | 5.582×10^5 | 4.984×10^5 | 5.123×10^5 | 5.239×10^5 | 2.679×10^5
C03 | Std | 6.883×10^4 | 1.119×10^5 | 1.055×10^5 | 8.873×10^4 | 9.471×10^4 | 1.047×10^5 | 2.607×10^4
C04 | Mean | 6.977×10^2 | 8.945×10^2 | 6.757×10^2 | 8.753×10^2 | 6.970×10^2 | 6.982×10^2 | 5.209×10^2
C04 | Std | 9.500×10^1 | 1.404×10^2 | 1.105×10^2 | 1.102×10^2 | 9.570×10^1 | 1.083×10^2 | 7.216×10^1
C05 | Mean | 8.811×10^2 | 8.727×10^2 | 8.873×10^2 | 8.655×10^2 | 8.888×10^2 | 8.808×10^2 | 6.764×10^2
C05 | Std | 4.877×10^1 | 5.506×10^1 | 3.581×10^1 | 4.758×10^1 | 5.217×10^1 | 6.030×10^1 | 6.819×10^1
C06 | Mean | 6.711×10^1 | 6.211×10^1 | 6.703×10^1 | 6.188×10^1 | 6.771×10^1 | 6.670×10^1 | 1.063×10^1
C06 | Std | 2.818×10^0 | 4.376×10^0 | 3.212×10^0 | 4.851×10^0 | 2.823×10^0 | 1.661×10^0 | 1.412×10^0
C07 | Mean | 2.561×10^3 | 2.469×10^3 | 2.572×10^3 | 2.505×10^3 | 2.571×10^3 | 2.583×10^3 | 1.086×10^3
C07 | Std | 1.018×10^2 | 1.525×10^2 | 8.911×10^1 | 1.288×10^2 | 1.402×10^2 | 8.535×10^1 | 1.005×10^2
C08 | Mean | 1.049×10^3 | 1.055×10^3 | 1.067×10^3 | 1.035×10^3 | 1.050×10^3 | 1.056×10^3 | 6.938×10^2
C08 | Std | 5.063×10^1 | 6.933×10^1 | 5.054×10^1 | 5.002×10^1 | 4.331×10^1 | 4.541×10^1 | 6.579×10^1
C09 | Mean | 2.475×10^4 | 2.762×10^4 | 2.467×10^4 | 2.710×10^4 | 2.472×10^4 | 2.465×10^4 | 2.332×10^4
C09 | Std | 1.009×10^3 | 3.582×10^3 | 5.579×10^2 | 3.149×10^3 | 1.096×10^3 | 8.065×10^2 | 1.059×10^3
C10 | Mean | 1.669×10^4 | 1.864×10^4 | 1.629×10^4 | 1.837×10^4 | 1.662×10^4 | 1.695×10^4 | 1.353×10^4
C10 | Std | 1.547×10^3 | 1.882×10^3 | 1.499×10^3 | 2.272×10^3 | 1.547×10^3 | 1.471×10^3 | 1.087×10^3
C11 | Mean | 6.094×10^4 | 5.162×10^4 | 5.474×10^4 | 5.892×10^4 | 6.392×10^4 | 5.076×10^4 | 1.328×10^4
C11 | Std | 1.467×10^4 | 1.536×10^4 | 2.092×10^4 | 1.647×10^4 | 1.931×10^4 | 1.432×10^4 | 5.918×10^3
C12 | Mean | 2.320×10^8 | 4.300×10^8 | 2.355×10^8 | 4.255×10^8 | 3.355×10^8 | 2.441×10^8 | 8.343×10^7
C12 | Std | 1.010×10^8 | 1.475×10^8 | 7.268×10^7 | 1.424×10^8 | 1.438×10^8 | 9.785×10^7 | 8.426×10^7
C13 | Mean | 6.178×10^4 | 5.989×10^4 | 6.107×10^4 | 6.606×10^4 | 6.628×10^4 | 6.045×10^4 | 8.120×10^3
C13 | Std | 1.813×10^4 | 2.154×10^4 | 2.079×10^4 | 2.383×10^4 | 2.731×10^4 | 2.352×10^4 | 6.250×10^3
C14 | Mean | 2.431×10^6 | 3.493×10^6 | 2.086×10^6 | 3.324×10^6 | 2.420×10^6 | 2.209×10^6 | 1.513×10^6
C14 | Std | 1.038×10^6 | 2.268×10^6 | 9.219×10^5 | 3.288×10^6 | 8.656×10^5 | 9.758×10^5 | 5.933×10^5
C15 | Mean | 1.259×10^4 | 1.564×10^5 | 5.131×10^4 | 3.506×10^5 | 1.277×10^5 | 4.509×10^4 | 3.377×10^3
C15 | Std | 7.276×10^3 | 5.827×10^5 | 2.126×10^5 | 1.483×10^6 | 2.440×10^5 | 1.912×10^5 | 4.491×10^3
C16 | Mean | 5.354×10^3 | 4.802×10^3 | 5.020×10^3 | 5.269×10^3 | 5.245×10^3 | 5.272×10^3 | 4.813×10^3
C16 | Std | 5.545×10^2 | 8.611×10^2 | 8.918×10^2 | 5.629×10^2 | 6.878×10^2 | 6.569×10^2 | 8.047×10^2
C17 | Mean | 4.245×10^3 | 4.082×10^3 | 4.299×10^3 | 4.013×10^3 | 4.415×10^3 | 4.317×10^3 | 3.787×10^3
C17 | Std | 5.617×10^2 | 6.148×10^2 | 8.194×10^2 | 6.563×10^2 | 8.196×10^2 | 6.173×10^2 | 6.412×10^2
C18 | Mean | 3.163×10^6 | 5.705×10^6 | 3.523×10^6 | 4.218×10^6 | 3.433×10^6 | 3.269×10^6 | 2.034×10^6
C18 | Std | 1.832×10^6 | 4.089×10^6 | 1.551×10^6 | 2.021×10^6 | 1.544×10^6 | 1.574×10^6 | 9.128×10^5
C19 | Mean | 3.131×10^4 | 5.763×10^4 | 5.749×10^4 | 2.548×10^4 | 1.279×10^4 | 2.603×10^4 | 3.723×10^3
C19 | Std | 2.882×10^4 | 8.447×10^4 | 1.156×10^5 | 2.585×10^4 | 1.504×10^4 | 2.212×10^4 | 3.802×10^3
C20 | Mean | 3.871×10^3 | 3.342×10^3 | 4.124×10^3 | 3.436×10^3 | 3.993×10^3 | 4.001×10^3 | 3.246×10^3
C20 | Std | 5.609×10^2 | 6.617×10^2 | 6.076×10^2 | 6.158×10^2 | 6.377×10^2 | 4.628×10^2 | 5.244×10^2
C21 | Mean | 1.663×10^3 | 1.280×10^3 | 1.679×10^3 | 1.251×10^3 | 1.716×10^3 | 1.648×10^3 | 9.012×10^2
C21 | Std | 1.991×10^2 | 1.207×10^2 | 1.928×10^2 | 1.076×10^2 | 2.000×10^2 | 1.965×10^2 | 8.237×10^1
C22 | Mean | 1.828×10^4 | 2.022×10^4 | 1.805×10^4 | 2.072×10^4 | 1.773×10^4 | 1.758×10^4 | 1.462×10^4
C22 | Std | 1.400×10^3 | 1.736×10^3 | 1.621×10^3 | 1.908×10^3 | 1.780×10^3 | 1.605×10^3 | 1.177×10^3
C23 | Mean | 1.957×10^3 | 1.404×10^3 | 1.982×10^3 | 1.419×10^3 | 2.364×10^3 | 1.958×10^3 | 1.089×10^3
C23 | Std | 1.929×10^2 | 1.053×10^2 | 1.897×10^2 | 1.167×10^2 | 2.312×10^2 | 1.860×10^2 | 8.824×10^1
C24 | Mean | 2.847×10^3 | 2.049×10^3 | 2.723×10^3 | 2.059×10^3 | 3.953×10^3 | 2.807×10^3 | 1.665×10^3
C24 | Std | 2.736×10^2 | 1.556×10^2 | 2.551×10^2 | 1.445×10^2 | 3.417×10^2 | 2.743×10^2 | 1.065×10^2
C25 | Mean | 1.210×10^3 | 1.338×10^3 | 1.217×10^3 | 1.340×10^3 | 1.127×10^3 | 1.197×10^3 | 1.007×10^3
C25 | Std | 7.953×10^1 | 8.471×10^1 | 8.076×10^1 | 1.175×10^2 | 7.236×10^1 | 7.137×10^1 | 5.882×10^1
C26 | Mean | 2.246×10^4 | 1.711×10^4 | 2.284×10^4 | 1.962×10^4 | 2.425×10^4 | 2.198×10^4 | 1.538×10^4
C26 | Std | 4.939×10^3 | 6.911×10^3 | 4.413×10^3 | 2.977×10^3 | 2.319×10^3 | 3.999×10^3 | 5.236×10^3
C27 | Mean | 1.287×10^3 | 1.067×10^3 | 1.328×10^3 | 1.068×10^3 | 5.00024×10^2 | 1.359×10^3 | 5.00022×10^2
C27 | Std | 3.130×10^2 | 1.700×10^2 | 2.574×10^2 | 1.459×10^2 | 4.031×10^-4 | 2.579×10^2 | 3.760×10^-4
C28 | Mean | 1.083×10^3 | 1.226×10^3 | 1.033×10^3 | 1.205×10^3 | 1.060×10^3 | 1.042×10^3 | 8.444×10^2
C28 | Std | 7.551×10^1 | 1.098×10^2 | 7.998×10^1 | 1.262×10^2 | 1.161×10^2 | 7.441×10^1 | 3.894×10^1
C29 | Mean | 5.832×10^3 | 4.636×10^3 | 5.553×10^3 | 4.693×10^3 | 4.261×10^3 | 5.320×10^3 | 3.627×10^3
C29 | Std | 7.132×10^2 | 5.812×10^2 | 6.826×10^2 | 6.410×10^2 | 8.352×10^2 | 5.630×10^2 | 5.149×10^2
C30 | Mean | 2.315×10^6 | 2.128×10^6 | 2.145×10^6 | 2.393×10^6 | 2.595×10^5 | 2.429×10^6 | 7.285×10^3
C30 | Std | 1.148×10^6 | 9.114×10^5 | 1.190×10^6 | 2.061×10^6 | 3.493×10^5 | 1.364×10^6 | 1.160×10^4
Bold values indicate the optimal results.
As can be seen from Table 1, MSESSA obtains the most optimal (bolded) values, showing excellent performance on unimodal functions, simple multimodal functions, mixed functions, and combined functions. Moreover, the results of SSA1, SSA2, SSA3, and SSA4 show significant improvements over SSA on some functions, which verifies that the proposed strategies are effective on the CEC 2017 test suite (dimension = 30). In addition, MSESSA outperforms MESSA, indicating that the selective ensemble contributes more to algorithm optimization than the plain ensemble.
As can be seen from Table 2, MSESSA again obtains the most optimal values, showing excellent performance on unimodal functions, simple multimodal functions, mixed functions, and combined functions, and the results of SSA1, SSA2, SSA3, and SSA4 again show significant improvements over SSA on some functions, verifying the proposed strategies on the CEC 2017 test suite (dimension = 50). However, compared with the 30-dimensional results, MSESSA's advantage narrows somewhat: for example, SSA achieves the best results on the mixed functions C14 and C15, where MSESSA is slightly worse. As in the 30-dimensional tests, MSESSA outperforms MESSA, again indicating that the selective ensemble contributes more to algorithm optimization than the plain ensemble.
As can be seen from Table 3, MSESSA obtains the most optimal values, showing excellent performance on unimodal functions, simple multimodal functions, mixed functions, and combined functions, and the results of SSA1, SSA2, SSA3, and SSA4 show significant improvements over SSA on some functions, verifying the proposed strategies on the CEC 2017 test suite (dimension = 100). In addition, although MSESSA is better than MESSA in general, it is slightly worse than MESSA on the simple multimodal function C07 and the mixed function C20. Overall, however, MSESSA has the best comprehensive performance, so it can again be concluded that the selective ensemble contributes more to algorithm optimization than the plain ensemble.
In conclusion, the ablation experiment compares the selective ensemble method with the plain ensemble method and verifies the superiority of the proposed strategies. Across problems of different dimensions, MSESSA shows good optimization performance and is competitive with many algorithms. The results show that the three novel strategies greatly improve the search capability and optimization performance of SSA. However, as the dimension increases, MSESSA's advantage slightly decreases, so further studies are needed to improve its optimization performance in high dimensions.

6. Engineering Optimization Problem

Although MSESSA performs well on the test functions, the proposed algorithm is ultimately designed to solve practical optimization problems. Unlike the test functions, practical optimization problems usually carry constraints, which poses a new challenge: practical engineering optimization problems test an algorithm's ability to handle constraints. To further verify this ability and to minimize the design parameters and overall cost of the engineering designs, four classical engineering optimization problems are used for experiments: the tension/compression coil spring design problem, the welded beam design problem, the speed reducer design problem, and the pressure vessel design problem [57]. Each problem is formulated as a minimization problem subject to its constraint conditions, and the feasibility rule [58] is adopted for constraint handling. Each experiment is run 30 times independently, and the best value is selected for the result analysis.
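The feasibility rule can be sketched as a pairwise comparison between candidate solutions. The following is an illustrative Python sketch (not the authors' implementation; the function names are my own): a feasible solution always beats an infeasible one, two feasible solutions are compared by objective value, and two infeasible solutions by total constraint violation.

```python
def total_violation(constraints):
    # Sum of the positive parts of the inequality constraints h_i(x) <= 0;
    # zero means the solution is feasible.
    return sum(max(0.0, h) for h in constraints)

def is_better(f_a, viol_a, f_b, viol_b):
    # Feasibility rule: (1) a feasible solution beats an infeasible one;
    # (2) between two feasible solutions, the smaller objective wins;
    # (3) between two infeasible solutions, the smaller violation wins.
    if viol_a == 0.0 and viol_b == 0.0:
        return f_a < f_b
    if viol_a == 0.0 or viol_b == 0.0:
        return viol_a == 0.0
    return viol_a < viol_b
```

With each problem below exposing an objective f(x) and a list of constraint values h_i(x), a population-based algorithm such as MSESSA only needs this comparison to rank candidates.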

6.1. The Tension/Compression Coil Spring Design Problem

The optimization goal of the tension/compression coil spring design problem is to minimize the weight of the spring under constraints including the spring's deflection, the shear stress on the spring cross-section, and the coil diameter. There are three variables in this engineering problem. The problem model is described as follows:
X = (x1, x2, x3)^T = (N, D, d)^T
N  represents the number of spring coils, D  represents the diameter of winding coils, and d  represents the diameter of the wire. The search space range of the three variables is as follows:
2.0 ≤ x1 ≤ 15.0,  0.25 ≤ x2 ≤ 1.3,  0.05 ≤ x3 ≤ 2.0
The objective of the tension/compression coil spring design problem is modeled as follows:
min f(x) = (x1 + 2) x2 x3^2
The constraints of the tension/compression coil spring design problem are described as follows:
h1(x) = 1 − x1 x2^3 / (71785 x3^4) ≤ 0
h2(x) = (4 x2^2 − x2 x3) / (12566 (x2 x3^3 − x3^4)) + 1 / (5108 x3^2) − 1 ≤ 0
h3(x) = 1 − 140.45 x3 / (x1 x2^2) ≤ 0
h4(x) = (x2 + x3) / 1.5 − 1 ≤ 0
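The objective and constraints above translate directly into code. This is an illustrative Python sketch (the function names are my own), using the text's variable order x = (x1, x2, x3) = (N, D, d):

```python
def spring_objective(x):
    # f(x) = (x1 + 2) * x2 * x3^2, i.e., (N + 2) * D * d^2.
    n, D, d = x
    return (n + 2.0) * D * d ** 2

def spring_constraints(x):
    # Constraints h1..h4 from the text, each required to satisfy h(x) <= 0.
    n, D, d = x
    return [
        1.0 - (n * D ** 3) / (71785.0 * d ** 4),
        (4.0 * D ** 2 - D * d) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (n * D ** 2),
        (D + d) / 1.5 - 1.0,
    ]
```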
The schematic diagram of the tension/compression coil spring design problem is shown in Figure 13, and Table 4 records the experimental results. The result obtained by MSESSA is smaller than those of all comparison algorithms, so MSESSA performs best on this problem.
Table 4. Optimization results of the tension/compression coil spring design problem.
Algorithm | x1 | x2 | x3 | f(x) | Rank
SSA | 0.0523666298085758 | 0.373237353973386 | 10.3824049784542 | 0.0126735812987405 | 6
GWO | 0.0500004842678752 | 0.317435197946645 | 14.0397338607618 | 0.0126856156733262 | 12
WOA | 0.0603056433904767 | 0.601629983342943 | 4.35996112223511 | 0.0126752168657288 | 7
PSO | 0.0528028319742126 | 0.384076158460798 | 9.84941696141061 | 0.0126659960868449 | 2
ABC | 0.0535199091525090 | 0.402354828344208 | 9.04387590268979 | 0.0126968346019167 | 14
FA | 0.0524834770549510 | 0.376131910293654 | 10.2353704043966 | 0.0126765916346686 | 9
QMESSA | 0.0627314019728525 | 0.685147062086034 | 3.45638633729696 | 0.0126660475151316 | 3
LSSA | 0.0500000000000000 | 0.317425416225889 | 14.0277697722345 | 0.0126682441972791 | 5
AGWO | 0.0608216494717506 | 0.583211845999853 | 5.09722049852828 | 0.0126920804107295 | 13
jWOA | 0.0525852958062302 | 0.378664207800242 | 10.1094937599152 | 0.0126766344267901 | 10
VPPSO | 0.0513604930070952 | 0.348864581918371 | 11.7646713316583 | 0.0126672136052264 | 4
NABC | 0.0506589246666072 | 0.332393648060670 | 12.8738463107113 | 0.0126821710617301 | 11
MSEFA | 0.0500066001935296 | 0.317573801184861 | 14.0155137259055 | 0.0126756549779927 | 8
MSESSA | 0.0612143141624953 | 0.632121760962813 | 3.99064810248064 | 0.0126657275224173 | 1

6.2. The Welded Beam Design Problem

The optimization objective of the welded beam design problem is to minimize the cost of the welded beam structural design under the constraint conditions of weld stress, buckling load, beam deflection, and beam bending stress. There are four variables in this engineering problem. The problem model is described as follows:
X = (x1, x2, x3, x4)^T = (h, l, t, b)^T
h  represents weld thickness, l  represents weld length, t  represents reinforcement thickness, and b  represents reinforcement width. The search space range of the four variables is as follows:
0.1 ≤ x1 ≤ 2,  0.1 ≤ x2 ≤ 10,  0.1 ≤ x3 ≤ 10,  0.1 ≤ x4 ≤ 2
The objective of the welded beam design problem is modeled as follows:
min f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2)
The constraints of the welded beam design problem are described as follows:
h1(x) = τ(x) − τmax ≤ 0
h2(x) = σ(x) − σmax ≤ 0
h3(x) = x1 − x4 ≤ 0
h4(x) = 0.125 − x1 ≤ 0
h5(x) = δ(x) − δmax ≤ 0
h6(x) = P − Pc(x) ≤ 0
h7(x) = 0.10471 x1^2 + 0.04811 x3 x4 (14 + x2) − 5 ≤ 0
The parameters used in the constraints are defined as follows:
τ(x) = sqrt( (τ′)^2 + 2 τ′ τ″ x2/(2R) + (τ″)^2 )
τ′ = P/(√2 x1 x2),  τ″ = M R / J,  M = P (L + x2/2)
R = sqrt( x2^2/4 + ((x1 + x3)/2)^2 )
J = 2 { √2 x1 x2 [ x2^2/12 + ((x1 + x3)/2)^2 ] }
σ(x) = 6 P L/(x3^2 x4),  δ(x) = 4 P L^3/(E x3^3 x4)
Pc(x) = (4.013 E sqrt(x3^2 x4^6 / 36) / L^2) (1 − (x3/(2L)) sqrt(E/(4G)))
P = 6000 lb,  L = 14 in,  E = 30 × 10^6 psi,  G = 12 × 10^6 psi
τmax = 13,600 psi,  σmax = 30,000 psi,  δmax = 0.25 in
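The cost and the seven constraints can be evaluated as follows; this is an illustrative Python sketch under the constants stated in the text (the function and variable names are my own):

```python
import math

# Constants from the text.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    # f(x) = 1.10471 h^2 l + 0.04811 t b (14 + l)
    h, l, t, b = x
    return 1.10471 * h * h * l + 0.04811 * t * b * (14.0 + l)

def beam_constraints(x):
    # Constraints h1..h7 from the text, each required to satisfy h(x) <= 0.
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)                 # tau'
    M = P * (L + l / 2.0)
    R = math.sqrt(l * l / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * h * l
               * (l * l / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                                   # tau''
    tau = math.sqrt(tau_p ** 2
                    + 2.0 * tau_p * tau_pp * l / (2.0 * R)
                    + tau_pp ** 2)
    sigma = 6.0 * P * L / (t * t * b)
    delta = 4.0 * P * L ** 3 / (E * t ** 3 * b)
    pc = (4.013 * E * math.sqrt(t * t * b ** 6 / 36.0) / L ** 2) \
         * (1.0 - (t / (2.0 * L)) * math.sqrt(E / (4.0 * G)))
    return [
        tau - TAU_MAX,
        sigma - SIGMA_MAX,
        h - b,
        0.125 - h,
        delta - DELTA_MAX,
        P - pc,
        0.10471 * h * h + 0.04811 * t * b * (14.0 + l) - 5.0,
    ]
```

Evaluating the cost near the best solutions in Table 5 (around x ≈ (0.2057, 3.4705, 9.0366, 0.2057)) reproduces the reported value of about 1.7249.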
The schematic diagram of the welded beam design problem is shown in Figure 14, and Table 5 records the experimental results. The results obtained by PSO and MSESSA are smaller than those of the other comparison algorithms; PSO ranks first and MSESSA second.
Table 5. Optimization results of the welded beam design problem.
Algorithm | x1 | x2 | x3 | x4 | f(x) | Rank
SSA | 0.205708465914838 | 3.47087459824411 | 9.03713222066780 | 0.205728359235151 | 1.72490971531307 | 6
GWO | 0.205488873736327 | 3.48072594013017 | 9.03214160931846 | 0.205935476372944 | 1.72580721019410 | 8
WOA | 0.171018474424082 | 4.10274003431836 | 9.69031313241623 | 0.208855510731593 | 1.73405243021590 | 12
PSO | 0.205729639786128 | 3.47048866563090 | 9.03662391035733 | 0.205729639786108 | 1.72485230859840 | 1
ABC | 0.217610725328395 | 3.26778374657345 | 8.93599126809304 | 0.218771941584513 | 1.77030886792790 | 14
FA | 0.205404606226328 | 3.48474028327219 | 9.03882231418500 | 0.205759363347537 | 1.72551708201133 | 7
QMESSA | 0.205527347137187 | 3.47484743074423 | 9.03664642942125 | 0.205730567681104 | 1.72488309920000 | 4
LSSA | 0.205293808421869 | 3.47989598801817 | 9.03662023715667 | 0.205729832333155 | 1.72487192194188 | 3
AGWO | 0.203585334490647 | 3.52771298246734 | 9.03502536893704 | 0.205934061881074 | 1.72827785559907 | 11
jWOA | 0.205071704156438 | 3.49039366970530 | 9.03742182635943 | 0.205762680157920 | 1.72642936588411 | 10
VPPSO | 0.204959611377712 | 3.48777273719061 | 9.03677352671085 | 0.205822050782029 | 1.72624313229532 | 9
NABC | 0.190102994299771 | 3.80436867248480 | 9.12289725594604 | 0.206566424842160 | 1.76607212643020 | 13
MSEFA | 0.205701579814854 | 3.47108693063382 | 9.03663990771033 | 0.205729571364375 | 1.72489176876900 | 5
MSESSA | 0.205526008178823 | 3.47490053735582 | 9.03663387299236 | 0.205732411175300 | 1.72486842992898 | 2

6.3. The Speed Reducer Design Problem

The optimization objective of the speed reducer design problem is to minimize the design weight of the reducer under the constraint conditions of bending stress, surface stress, lateral deflection, and axial stress of the gear teeth. There are seven variables in this engineering problem. The problem model is described as follows:
X = (x1, x2, x3, x4, x5, x6, x7)^T = (b, m, z, l1, l2, d1, d2)^T
b represents the surface width, m represents the tooth modulus, z represents the number of pinion teeth, l1 represents the length of the first shaft between bearings, l2 represents the length of the second shaft between bearings, d1 represents the diameter of the first shaft, and d2 represents the diameter of the second shaft. The search space range of the seven variables is as follows:
2.6 ≤ x1 ≤ 3.6,  0.7 ≤ x2 ≤ 0.8,  17 ≤ x3 ≤ 28,  7.3 ≤ x4 ≤ 8.3,  7.8 ≤ x5 ≤ 8.3,  2.9 ≤ x6 ≤ 3.9,  5.0 ≤ x7 ≤ 5.5
The objective of the speed reducer design problem is modeled as follows:
min f(x) = 0.7854 x1 x2^2 (3.3333 x3^2 + 14.9334 x3 − 43.0934) − 1.508 x1 (x6^2 + x7^2) + 7.4777 (x6^3 + x7^3) + 0.7854 (x4 x6^2 + x5 x7^2)
The constraints of the speed reducer design problem are described as follows:
h1(x) = 27/(x1 x2^2 x3) − 1 ≤ 0
h2(x) = 397.5/(x1 x2^2 x3^2) − 1 ≤ 0
h3(x) = 1.93 x4^3/(x2 x3 x6^4) − 1 ≤ 0
h4(x) = 1.93 x5^3/(x2 x3 x7^4) − 1 ≤ 0
h5(x) = [(745 x4/(x2 x3))^2 + 16.9 × 10^6]^{1/2}/(85 x7^3) − 1 ≤ 0
h6(x) = [(745 x4/(x2 x3))^2 + 157.5 × 10^6]^{1/2}/(85 x7^3) − 1 ≤ 0
h7(x) = x2 x3/40 − 1 ≤ 0
h8(x) = 5 x2/x1 − 1 ≤ 0
h9(x) = x1/(12 x2) − 1 ≤ 0
h10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0
h11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0
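The objective and constraints can be evaluated as follows. This is an illustrative Python sketch (names are my own); h5 and h6 are coded exactly as printed in the text, both divided by 85 x7^3:

```python
import math

def reducer_weight(x):
    # Objective f(x) from the text: weight of the speed reducer.
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def reducer_constraints(x):
    # Constraints h1..h11 as printed in the text (each h(x) <= 0).
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2 ** 2 * x3) - 1.0,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,
        1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1.0,
        1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6)
        / (85.0 * x7 ** 3) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 157.5e6)
        / (85.0 * x7 ** 3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]
```

Evaluating the objective at a near-optimal solution from Table 6, roughly x ≈ (3.5, 0.7, 17, 7.3, 7.7153, 3.3502, 5.2867), gives about 2994.47.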
The schematic diagram of the speed reducer design problem is shown in Figure 15, and Table 6 records the experimental results. The results obtained by PSO, QMESSA, MSESSA, LSSA, and MSEFA are smaller than those of the other algorithms; PSO ranks first, QMESSA second, MSESSA third, LSSA fourth, and MSEFA fifth.
Table 6. Optimization results of the speed reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | f(x) | Rank
SSA | 3.50000018142936 | 0.7 | 17 | 7.30000000000000 | 7.71532122711276 | 3.35021469116873 | 5.28665446955717 | 2994.47112769831 | 7
GWO | 3.50175294776979 | 0.7 | 17 | 8.05950176767419 | 8.01734437933028 | 3.35870288726378 | 5.28944217733008 | 3002.72137278589 | 11
WOA | 3.50483357536842 | 0.7 | 17 | 7.99309634392269 | 7.85471872974540 | 3.35413046409830 | 5.28670201289019 | 3006.58275777147 | 13
PSO | 3.49999999999760 | 0.7 | 17 | 8.30000000000000 | 7.71531991147640 | 3.35220716303759 | 5.28665446497920 | 2994.47106614598 | 1
ABC | 3.50000002749185 | 0.7 | 17 | 7.30000000000000 | 7.71532192521501 | 3.35021468649565 | 5.28665448205006 | 2994.47113719027 | 9
FA | 3.50000010166464 | 0.7 | 17 | 7.30000000000000 | 7.71531998052498 | 3.35021469047177 | 5.28665448904064 | 2994.47112901855 | 8
QMESSA | 3.49999999999782 | 0.7 | 17 | 7.30000000000129 | 7.71531991147854 | 3.35021466609631 | 5.28665446497922 | 2994.47106614604 | 2
LSSA | 3.50000000000709 | 0.7 | 17 | 7.30000000002671 | 7.71531991158134 | 3.35021466612706 | 5.28665446498610 | 2994.47106616364 | 4
AGWO | 3.51258666281783 | 0.7 | 17 | 7.39303318772526 | 8.01208856523891 | 3.35793357972989 | 5.28721004490915 | 3004.20682758568 | 11
jWOA | 3.50000002094892 | 0.7 | 17 | 7.30000000000000 | 7.73461510083725 | 3.37163742204912 | 5.30343740313142 | 3008.62086168547 | 14
VPPSO | 3.50001130091663 | 0.7 | 17 | 7.72411934885208 | 7.81319447592615 | 3.35107684189999 | 5.28669654092469 | 2998.30122739115 | 12
NABC | 3.50000000064656 | 0.7 | 17 | 7.30000000000000 | 7.71532111169111 | 3.35021475540638 | 5.28665447947112 | 2994.47111792616 | 6
MSEFA | 3.50000000002980 | 0.7 | 17 | 7.30000000000000 | 7.71531992579586 | 3.35021466631457 | 5.28665446510989 | 2994.47106630881 | 5
MSESSA | 3.49999999999728 | 0.7 | 17 | 7.30000000001208 | 7.71532011566411 | 3.35021466609631 | 5.28665446504819 | 2994.47106614649 | 3

6.4. The Pressure Vessel Design Problem

The optimization objective of the pressure vessel design problem is to minimize the total costs (including material, forming, and welding costs) under the constraints of shell thickness, head thickness, inner radius, and cylindrical section length of the container. There are four variables in this engineering problem. The problem model is described as follows:
X = (x1, x2, x3, x4)^T = (Ts, Th, R, L)^T
Ts represents the shell thickness, Th represents the head thickness, R represents the inner radius, and L represents the cylindrical section length of the container, excluding the head part. The search space range of the four variables is as follows:
0 ≤ x1 ≤ 99,  0 ≤ x2 ≤ 99,  10 ≤ x3 ≤ 200,  10 ≤ x4 ≤ 200
The objective of the pressure vessel design problem is modeled as follows:
min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3
The constraints of the pressure vessel design problem are described as follows:
h1(x) = 0.0193 x3 − x1 ≤ 0
h2(x) = 0.00954 x3 − x2 ≤ 0
h3(x) = 1296000 − π x3^2 x4 − (4/3) π x3^3 ≤ 0
h4(x) = x4 − 240 ≤ 0
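The cost and constraints of this problem are the simplest of the four and can be sketched in a few lines of Python (names are my own):

```python
import math

def vessel_cost(x):
    # f(x) = 0.6224 Ts R L + 1.7781 Th R^2 + 3.1661 Ts^2 L + 19.84 Ts^2 R
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r * r
            + 3.1661 * ts * ts * l + 19.84 * ts * ts * r)

def vessel_constraints(x):
    # Constraints h1..h4 from the text, each required to satisfy h(x) <= 0.
    ts, th, r, l = x
    return [
        0.0193 * r - ts,
        0.00954 * r - th,
        1296000.0 - math.pi * r * r * l - (4.0 / 3.0) * math.pi * r ** 3,
        l - 240.0,
    ]
```

Evaluating the cost near MSESSA's best solution in Table 7, x ≈ (0.7782, 0.3846, 40.32, 200.0), reproduces the reported value of about 5885.33.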
The schematic diagram of the pressure vessel design problem is shown in Figure 16, and the experimental results are recorded in Table 7. The result obtained by MSESSA is smaller than those of all comparison algorithms, so MSESSA performs best on this problem.
Table 7. Optimization results of the pressure vessel design problem.
Algorithm | x1 | x2 | x3 | x4 | f(x) | Rank
SSA | 0.962332065322078 | 0.475681255760861 | 49.8617616178483 | 99.4457433955205 | 5943.68505919586 | 10
GWO | 0.787609310700020 | 0.389447902442074 | 40.7969538075406 | 193.646128567512 | 5892.84055975077 | 4
WOA | 0.860940787061043 | 0.428152658514939 | 43.5543054764665 | 159.394387058957 | 5898.14994786468 | 6
PSO | 0.830419876970291 | 0.410476975456094 | 43.0269366307679 | 165.461066567591 | 5967.16038077371 | 12
ABC | 0.875201766325771 | 0.432738375590385 | 45.3317698078693 | 140.373129901217 | 6007.95502110635 | 13
FA | 0.882866144889134 | 0.436401192862034 | 45.7443598389798 | 136.149911533561 | 5953.69938325081 | 11
QMESSA | 0.780158825695128 | 0.385632920105304 | 40.4227370405541 | 198.569481862186 | 5885.47698212336 | 2
LSSA | 0.786003917147562 | 0.388540266479950 | 40.7255297056036 | 196.416478364229 | 5897.03929424998 | 5
AGWO | 0.789902913377969 | 0.395132441116204 | 40.8515204067893 | 193.214937401245 | 5918.15052787913 | 7
jWOA | 0.962298734916326 | 0.445726638528270 | 46.7067566234525 | 126.826149413265 | 6056.58081410116 | 14
VPPSO | 0.967418870692699 | 0.478190793398243 | 50.1247957446959 | 98.6356467940842 | 5931.24035284949 | 8
NABC | 0.819744259300257 | 0.404984275262275 | 42.4419026627092 | 172.480803964793 | 5942.54898210429 | 9
MSEFA | 0.881659026807722 | 0.435804550862280 | 45.6818139958276 | 136.773516695582 | 5885.82910168157 | 3
MSESSA | 0.778169162066589 | 0.384649419969531 | 40.3196457000156 | 199.999624499891 | 5885.33366452554 | 1
The results of the four typical engineering optimization problems show that MSESSA is superior in engineering optimization: it has significant advantages in dealing with such problems and can reduce the design cost.

7. Conclusions

In this study, a multi-strategy sparrow search algorithm with selective ensemble (MSESSA) is proposed. MSESSA maintains a pool of three search strategies with different characteristics, each improving the algorithm from a different aspect: the variable logarithmic spiral saltation learning strategy, which strengthens global search; the neighborhood-guided learning strategy, which improves local search and accelerates convergence; and the adaptive Gaussian random walk strategy, which balances global and local search. Together, these three novel strategies greatly improve both the global and the local search capability of SSA. MSESSA combines the idea of selective ensemble with multi-strategy optimization: selectively ensembling strategies avoids wasting resources while achieving superior performance, and a multi-strategy ensemble balances exploration and exploitation better than any single learning strategy. With the aid of the priority roulette selection method, the strategy best suited to the current optimization stage can be chosen. Finally, considering the characteristics of the different roles in the sparrow population during search, the original boundary processing mechanism is modified so that transgressive sparrows' locations are adjusted dynamically.
The random relocation method is proposed so that discoverers and alerters can further conduct a wide-ranging global search, and the relocation method based on the optimal and suboptimal individuals of the population is proposed so that scroungers can further conduct a better local search. To verify its performance, MSESSA is tested on the full CEC 2017 test suite against 13 other advanced algorithms, and the Wilcoxon rank-sum test is used to further examine its performance characteristics. The results show that MSESSA is superior to the 13 other advanced algorithms in solution accuracy, convergence speed, scalability, and stability. In addition, an ablation experiment verifies the effect of the proposed strategies and the advantage of the selective ensemble over the plain ensemble. Finally, MSESSA is applied to four practical engineering optimization problems, demonstrating its strong robustness and wide applicability. It can be concluded that MSESSA has significant advantages in dealing with engineering optimization problems and can reduce the design cost.
However, as the dimension increases, MSESSA's advantages diminish slightly, and it does not show the same significant superiority on high-dimensional optimization problems as on low-dimensional ones. In addition, MSESSA mainly improves the position update within the "exploration–follow–warning" framework of the sparrow population. According to the no-free-lunch (NFL) theorem, such an improvement cannot suit all optimization problems and may encounter bottlenecks on complex problems that are high-dimensional, nonlinear, or multi-objective.
Therefore, future studies should consider improving the algorithm's ability to solve high-dimensional optimization problems. Moreover, as current optimization problems grow more complex, the original low-dimensional, single-objective setting has evolved into high-dimensional, multi-objective optimization, so MSESSA's potential in multi-objective optimization remains to be developed. In the next step, we will conduct more in-depth research on handling high-dimensional and multi-objective optimization, explore further areas of machine learning, and propose a multi-objective version of SSA for multi-objective optimization in new fields to improve the overall performance.

Author Contributions

Conceptualization, Z.W. and J.W.; methodology, Z.W. and J.W.; software, J.W.; validation, J.W. and D.Z.; formal analysis, J.W. and D.Z.; investigation, J.W. and D.Z.; resources, Z.W. and D.Z.; data curation, J.W. and D.Z.; writing—original draft preparation, J.W.; writing—review and editing, Z.W. and D.Z.; visualization, Z.W. and D.Z.; supervision, Z.W. and D.L.; project administration, D.L.; funding acquisition, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (Grant Nos. 62062037, 61562037, and 72261018) and the Natural Science Foundation of Jiangxi Province (Grant Nos. 20212BAB202014 and 20171BAB202026).

Data Availability Statement

All data for this study are available from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison of results on CEC 2017 functions (Dim = 30).
Function | Index | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA | MSESSA
C01Mean8.549 × 1031.006 × 1094.614 × 1083.852 × 1088.705 × 1052.194 × 10101.150 × 1041.709 × 1067.400 × 1094.990 × 1081.273 × 1053.804 × 1033.989 × 1081.936 × 103
Std5.735 × 1036.551 × 1082.030 × 1083.123 × 1083.104 × 1051.907 × 1097.502 × 1031.678 × 1061.821 × 1091.922 × 1086.345 × 1052.795 × 1035.523 × 1071.896 × 103
C03Mean6.769 × 1034.097 × 1042.385 × 1056.610 × 1031.105 × 1056.226 × 1043.159 × 1045.960 × 1046.031 × 1042.150 × 1053.033 × 1044.166 × 1044.009 × 1032.157 × 104
Std7.913 × 1031.417 × 1046.244 × 1043.164 × 1031.793 × 1046.326 × 1036.406 × 1038.528 × 1031.058 × 1045.222 × 1048.465 × 1035.998 × 1038.915 × 1024.454 × 103
C04Mean1.148 × 1021.669 × 1023.105 × 1022.396 × 1028.399 × 1012.964 × 1039.646 × 1011.061 × 1025.022 × 1022.866 × 1021.128 × 1028.686 × 1011.399 × 1028.254 × 101
Std2.961 × 1013.875 × 1011.245 × 1027.514 × 1011.750 × 1014.168 × 1021.969 × 1012.856 × 1011.433 × 1026.635 × 1011.767 × 1011.482 × 1011.035 × 1013.139 × 101
C05Mean2.507 × 1029.080 × 1013.182 × 1023.070 × 1021.251 × 1022.988 × 1021.964 × 1022.409 × 1022.526 × 1023.045 × 1021.417 × 1026.480 × 1011.487 × 1021.137 × 102
Std4.983 × 1013.707 × 1016.743 × 1017.645 × 1001.587 × 1011.326 × 1013.912 × 1015.741 × 1012.697 × 1015.109 × 1013.112 × 1019.171 × 1001.051 × 1013.044 × 101
C06Mean4.847 × 1015.794 × 1007.271 × 1016.874 × 1011.082 × 1005.943 × 1013.981 × 1013.864 × 1015.092 × 1017.522 × 1013.242 × 1019.202 × 10−17.816 × 1001.166 × 10−2
Std9.471 × 1003.735 × 1009.026 × 1001.977 × 1003.209 × 10−12.246 × 1009.403 × 1001.489 × 1017.506 × 1001.131 × 1016.715 × 1003.775 × 10−17.061 × 10−12.600 × 10−3
C07Mean5.360 × 1021.540 × 1025.495 × 1026.312 × 1021.498 × 1027.510 × 1023.301 × 1024.166 × 1023.536 × 1025.508 × 1021.776 × 1021.007 × 1022.340 × 1021.904 × 102
Std6.095 × 1013.639 × 1019.450 × 1012.373 × 1011.184 × 1014.494 × 1017.143 × 1011.663 × 1023.941 × 1018.800 × 1013.856 × 1011.074 × 1019.948 × 1002.539 × 101
C08Mean1.625 × 1028.402 × 1012.175 × 1021.999 × 1021.340 × 1022.803 × 1021.567 × 1021.512 × 1021.929 × 1022.284 × 1021.069 × 1026.549 × 1011.441 × 1021.024 × 102
Std3.109 × 1012.809 × 1015.455 × 1018.023 × 1001.173 × 1011.065 × 1013.050 × 1013.147 × 1012.479 × 1014.977 × 1012.570 × 1019.023 × 1001.145 × 1012.112 × 101
C09Mean4.450 × 1032.820 × 1038.517 × 1034.779 × 1033.321 × 1036.116 × 1034.068 × 1034.585 × 1034.743 × 1039.688 × 1031.755 × 1031.130 × 1027.420 × 1021.480 × 102
Std1.744 × 1029.525 × 1022.360 × 1032.265 × 1027.698 × 1025.634 × 1024.957 × 1021.148 × 1031.172 × 1033.758 × 1039.507 × 1025.476 × 1014.579 × 1021.877 × 101
C10Mean4.409 × 1033.594 × 1035.968 × 1034.731 × 1033.387 × 1036.735 × 1034.071 × 1034.490 × 1035.706 × 1035.860 × 1033.951 × 1033.235 × 1034.352 × 1033.119 × 103
Std6.839 × 1021.136 × 1039.172 × 1023.334 × 1023.558 × 1023.040 × 1024.853 × 1021.083 × 1038.798 × 1028.066 × 1028.595 × 1023.538 × 1024.878 × 1025.770 × 102
C11Mean1.632 × 1024.284 × 1023.391 × 1032.064 × 1029.038 × 1022.304 × 1031.157 × 1023.780 × 1021.655 × 1033.303 × 1032.447 × 1023.134 × 1022.537 × 1021.029 × 102
Std6.461 × 1012.256 × 1022.307 × 1035.084 × 1015.347 × 1025.036 × 1024.001 × 1011.017 × 1021.086 × 1032.197 × 1036.913 × 1011.618 × 1022.854 × 1014.095 × 101
C12Mean1.625 × 1064.630 × 1071.383 × 1083.791 × 1074.765 × 1062.304 × 1092.316 × 1062.990 × 1063.442 × 1081.529 × 1081.988 × 1071.457 × 1064.041 × 1071.257 × 106
Std1.592 × 1062.879 × 1071.264 × 1084.089 × 1071.635 × 1063.442 × 1082.643 × 1062.111 × 1064.034 × 1081.161 × 1081.048 × 1071.091 × 1069.499 × 1065.376 × 105
C13Mean1.621 × 1043.147 × 1064.703 × 1054.429 × 1042.049 × 1067.338 × 1081.110 × 1044.419 × 1042.265 × 1075.873 × 1059.063 × 1047.076 × 1041.892 × 1078.717 × 103
Std1.694 × 1049.100 × 1062.550 × 1052.101 × 1048.498 × 1051.869 × 1081.511 × 1042.273 × 1045.764 × 1074.202 × 1055.747 × 1046.281 × 1044.971 × 1061.250 × 104
C14Mean4.179 × 1042.063 × 1051.419 × 1065.783 × 1034.225 × 1051.459 × 1053.654 × 1048.917 × 1047.384 × 1052.060 × 1064.348 × 1041.133 × 1055.581 × 1041.132 × 104
Std3.943 × 1042.778 × 1051.214 × 1062.520 × 1042.475 × 1056.678 × 1044.007 × 1048.117 × 1049.347 × 1052.685 × 1065.038 × 1046.947 × 1044.587 × 1045.591 × 103
C15Mean7.314 × 1033.574 × 1053.996 × 1054.858 × 1033.216 × 1053.332 × 1071.392 × 1031.603 × 1046.715 × 1051.995 × 1054.321 × 1042.373 × 1042.007 × 1066.737 × 103
Std9.864 × 1036.375 × 1057.207 × 1053.469 × 1031.536 × 1051.663 × 1071.254 × 1031.263 × 1047.981 × 1051.648 × 1053.058 × 1042.262 × 1048.829 × 1057.053 × 103
C16Mean1.359 × 1038.051 × 1022.247 × 1031.939 × 1038.524 × 1022.096 × 1031.271 × 1031.409 × 1031.372 × 1032.473 × 1031.163 × 1031.227 × 1035.020 × 1029.145 × 102
Std2.553 × 1023.354 × 1025.543 × 1022.763 × 1021.400 × 1021.570 × 1022.964 × 1024.349 × 1023.283 × 1026.669 × 1022.853 × 1023.649 × 1021.607 × 1021.222 × 102
C17Mean7.865 × 1023.325 × 1021.027 × 1031.233 × 1033.637 × 1027.850 × 1026.877 × 1027.004 × 1025.583 × 1029.719 × 1024.011 × 1022.509 × 1026.988 × 1022.226 × 102
Std1.760 × 1021.683 × 1023.187 × 1023.296 × 1021.157 × 1021.181 × 1022.937 × 1022.642 × 1022.050 × 1023.450 × 1022.171 × 1029.597 × 1012.895 × 1024.537 × 101
C18Mean5.350 × 1051.192 × 1067.852 × 1061.029 × 1056.069 × 1052.108 × 1063.138 × 1051.477 × 1062.647 × 1064.204 × 1068.595 × 1052.375 × 1052.659 × 1051.491 × 105
Std5.206 × 1051.574 × 1066.631 × 1069.908 × 1042.704 × 1059.336 × 1053.257 × 1051.959 × 1064.323 × 1064.000 × 1067.458 × 1051.736 × 1051.220 × 1056.904 × 104
C19Mean8.556 × 1036.476 × 1051.126 × 1074.168 × 1042.698 × 1055.228 × 1079.548 × 1031.050 × 1041.140 × 1069.593 × 1061.741 × 1065.075 × 1043.132 × 1062.820 × 103
Std1.369 × 1048.857 × 1058.949 × 1068.541 × 1041.512 × 1051.962 × 1079.651 × 1031.532 × 1041.390 × 1067.350 × 1061.371 × 1063.977 × 1049.941 × 1051.887 × 103
C20Mean6.480 × 1023.849 × 1028.338 × 1021.330 × 1034.424 × 1026.037 × 1026.254 × 1026.797 × 1025.606 × 1028.148 × 1024.670 × 1025.867 × 1023.205 × 1022.836 × 102
Std2.353 × 1021.228 × 1022.288 × 1028.981 × 1011.008 × 1026.408 × 1012.368 × 1022.469 × 1021.603 × 1022.356 × 1021.551 × 1022.162 × 1025.024 × 1018.310 × 101
C21Mean4.024 × 1022.821 × 1024.939 × 1026.007 × 1022.690 × 1024.804 × 1023.626 × 1023.543 × 1024.232 × 1024.756 × 1023.163 × 1022.551 × 1023.345 × 1023.191 × 102
Std4.792 × 1012.101 × 1017.085 × 1013.831 × 1017.580 × 1011.270 × 1016.655 × 1016.393 × 1012.805 × 1015.828 × 1013.428 × 1013.958 × 1011.026 × 1013.130 × 101
C22Mean2.885 × 1032.085 × 1035.091 × 1036.006 × 1032.494 × 1022.511 × 1031.849 × 1031.997 × 1033.387 × 1035.119 × 1031.111 × 1031.002 × 1022.034 × 1029.891 × 102
Std2.207 × 1031.859 × 1032.083 × 1032.509 × 1025.105 × 1011.933 × 1021.917 × 1032.518 × 1032.485 × 1032.105 × 1031.858 × 1031.475 × 10−17.044 × 1001.682 × 103
C23Mean6.088 × 1024.424 × 1027.723 × 1022.516 × 1034.634 × 1027.899 × 1025.384 × 1025.410 × 1026.731 × 1027.836 × 1025.027 × 1024.216 × 1024.861 × 1025.018 × 102
Std7.659 × 1013.385 × 1018.525 × 1011.936 × 1023.926 × 1012.743 × 1015.799 × 1014.570 × 1013.832 × 1017.964 × 1014.283 × 1011.291 × 1011.078 × 1013.754 × 101
C24Mean6.897 × 1027.179 × 1028.387 × 1028.555 × 1024.529 × 1028.667 × 1026.319 × 1026.096 × 1027.846 × 1028.529 × 1025.398 × 1025.041 × 1025.471 × 1025.378 × 102
Std8.497 × 1018.740 × 1019.127 × 1011.036 × 1022.281 × 1022.467 × 1016.585 × 1016.053 × 1015.076 × 1019.854 × 1013.674 × 1016.749 × 1019.559 × 1006.224 × 101
C25Mean4.021 × 1024.761 × 1025.543 × 1024.766 × 1024.009 × 1021.472 × 1033.924 × 1024.016 × 1026.303 × 1025.556 × 1024.289 × 1023.910 × 1024.082 × 1023.893 × 102
Std1.444 × 1014.243 × 1014.706 × 1012.795 × 1016.507 × 1001.278 × 1021.122 × 1012.399 × 1016.877 × 1015.401 × 1011.845 × 1011.839 × 1002.399 × 1001.612 × 101
C26Mean3.545 × 1031.909 × 1035.265 × 1035.241 × 1038.509 × 1025.245 × 1032.510 × 1032.789 × 1034.190 × 1034.992 × 1032.196 × 1036.935 × 1022.129 × 1031.999 × 103
Std1.183 × 1032.496 × 1021.226 × 1036.367 × 1021.987 × 1024.088 × 1021.307 × 1031.150 × 1034.054 × 1029.764 × 1021.105 × 1035.211 × 1024.481 × 1022.124 × 103
C27Mean5.754 × 1025.376 × 1027.000 × 1024.244 × 1034.944 × 1029.410 × 1025.598 × 1025.447 × 1026.683 × 1026.948 × 1025.646 × 1025.250 × 1025.392 × 1025.000 × 102
Std4.742 × 1011.617 × 1017.502 × 1011.907 × 1021.024 × 1015.496 × 1011.799 × 1012.829 × 1016.778 × 1018.796 × 1012.583 × 1014.758 × 1007.384 × 1003.215 × 10−4
C28Mean4.377 × 1025.815 × 1026.770 × 1026.250 × 1024.936 × 1021.747 × 1034.316 × 1024.720 × 1029.025 × 1026.739 × 1024.896 × 1024.628 × 1024.490 × 1024.562 × 102
Std2.243 × 1015.419 × 1019.021 × 1015.474 × 1012.652 × 1001.381 × 1022.146 × 1013.095 × 1011.643 × 1029.749 × 1012.596 × 1019.614 × 1008.290 × 1003.926 × 101
C29Mean1.234 × 1038.864 × 1022.307 × 1032.427 × 1035.885 × 1022.013 × 1031.139 × 1031.110 × 1031.492 × 1032.334 × 1031.345 × 1036.491 × 1026.778 × 1027.867 × 102
Std2.555 × 1021.686 × 1024.736 × 1025.186 × 1026.955 × 1011.706 × 1023.357 × 1022.175 × 1022.127 × 1024.071 × 1022.759 × 1027.618 × 1013.611 × 1012.168 × 102
C30Mean1.431 × 1046.189 × 1062.549 × 1071.951 × 1068.173 × 1048.518 × 1079.499 × 1032.862 × 1041.664 × 1073.144 × 1077.012 × 1062.666 × 1044.430 × 1063.068 × 103
Std8.584 × 1034.461 × 1062.143 × 1072.335 × 1064.985 × 1042.307 × 1074.721 × 1033.133 × 1041.198 × 1073.064 × 1072.955 × 1061.064 × 1041.475 × 1063.586 × 103
Bold values indicate the optimal results.
Table A2. Comparison of results on CEC 2017 functions (Dim = 50).
Function | Index | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA | MSESSA
C01Mean1.471 × 1065.920 × 1093.750 × 1097.240 × 1093.219 × 1086.595 × 10105.135 × 1067.272 × 1082.850 × 10103.800 × 1095.661 × 1071.778 × 1051.222 × 1096.947 × 103
Std6.635 × 1052.727 × 1091.210 × 1091.980 × 1091.023 × 1085.037 × 1092.266 × 1064.317 × 1085.200 × 1091.030 × 1096.927 × 1071.200 × 1051.283 × 1084.534 × 103
C03Mean2.091 × 1051.062 × 1052.102 × 1057.945 × 1042.400 × 1051.513 × 1052.050 × 1053.115 × 1051.374 × 1052.097 × 1051.040 × 1051.053 × 1051.060 × 1054.623 × 104
Std4.565 × 1042.344 × 1045.616 × 1041.649 × 1042.103 × 1041.119 × 1046.901 × 1048.136 × 1041.336 × 1046.403 × 1041.704 × 1049.763 × 1031.653 × 1041.484 × 104
C04Mean1.908 × 1025.975 × 1021.155 × 1031.377 × 1034.335 × 1021.147 × 1041.993 × 1023.130 × 1024.095 × 1031.182 × 1033.033 × 1021.950 × 1023.096 × 1021.555 × 102
Std5.517 × 1015.048 × 1022.800 × 1023.761 × 1023.853 × 1011.235 × 1034.962 × 1016.543 × 1019.103 × 1023.081 × 1025.983 × 1012.656 × 1011.985 × 1015.302 × 101
C05Mean3.665 × 1022.020 × 1025.216 × 1023.973 × 1023.224 × 1026.125 × 1023.230 × 1023.558 × 1025.055 × 1025.155 × 1022.642 × 1021.933 × 1023.160 × 1022.441 × 102
Std3.133 × 1012.887 × 1016.868 × 1011.612 × 1012.428 × 1012.390 × 1013.204 × 1013.567 × 1013.125 × 1019.416 × 1014.310 × 1011.876 × 1011.457 × 1013.842 × 101
C06Mean6.327 × 1011.485 × 1018.582 × 1017.580 × 1017.441 × 1007.879 × 1015.241 × 1015.580 × 1017.118 × 1018.956 × 1014.500 × 1015.422 × 10−11.060 × 1012.455 × 100
Std7.087 × 1003.621 × 1001.135 × 1012.189 × 1001.460 × 1003.285 × 1006.263 × 1007.761 × 1004.513 × 1001.037 × 1017.100 × 1008.254 × 10−25.298 × 10−18.202 × 10−1
C07Mean1.044 × 1033.435 × 1021.135 × 1031.126 × 1033.931 × 1021.784 × 1037.136 × 1029.431 × 1027.775 × 1021.105 × 1034.458 × 1022.685 × 1024.485 × 1023.668 × 102
Std7.600 × 1016.985 × 1011.088 × 1022.489 × 1012.096 × 1017.696 × 1011.184 × 1021.474 × 1027.257 × 1019.009 × 1018.073 × 1011.390 × 1011.315 × 1013.450 × 101
C08Mean3.955 × 1022.109 × 1025.016 × 1024.347 × 1023.219 × 1026.119 × 1023.206 × 1023.679 × 1025.111 × 1024.938 × 1022.621 × 1021.974 × 1023.144 × 1022.288 × 102
Std3.523 × 1014.727 × 1016.381 × 1011.424 × 1012.622 × 1011.920 × 1014.650 × 1014.857 × 1013.269 × 1019.522 × 1015.723 × 1011.586 × 1011.149 × 1014.132 × 101
C09Mean1.244 × 1047.005 × 1033.182 × 1041.611 × 1041.895 × 1042.572 × 1041.146 × 1041.877 × 1042.405 × 1042.944 × 1047.293 × 1031.218 × 1034.901 × 1028.450 × 103
Std1.261 × 1033.638 × 1039.381 × 1039.774 × 1022.874 × 1031.708 × 1031.420 × 1038.620 × 1034.597 × 1036.850 × 1032.177 × 1035.399 × 1024.304 × 1012.080 × 103
C10Mean7.636 × 1036.657 × 1031.053 × 1048.753 × 1036.849 × 1031.292 × 1046.730 × 1037.283 × 1031.137 × 1041.090 × 1046.841 × 1036.821 × 1039.291 × 1035.484 × 103
Std7.927 × 1022.031 × 1039.005 × 1025.444 × 1023.204 × 1022.927 × 1021.080 × 1031.155 × 1031.038 × 1031.051 × 1031.027 × 1034.014 × 1024.844 × 1026.309 × 102
C11Mean3.043 × 1022.945 × 1032.199 × 1036.527 × 1021.304 × 1041.170 × 1042.643 × 1021.992 × 1037.451 × 1032.259 × 1038.121 × 1023.172 × 1036.352 × 1021.798 × 102
Std8.496 × 1011.375 × 1034.888 × 1021.448 × 1024.107 × 1031.617 × 1036.412 × 1011.253 × 1032.986 × 1035.718 × 1021.721 × 1021.615 × 1036.666 × 1013.197 × 101
C12Mean1.313 × 1076.052 × 1081.170 × 1091.190 × 1092.648 × 1081.854 × 10101.097 × 1076.255 × 1075.860 × 1091.210 × 1091.440 × 1081.279 × 1073.143 × 1086.261 × 106
Std6.892 × 1065.447 × 1085.630 × 1088.830 × 1085.241 × 1072.679 × 1097.233 × 1063.840 × 1072.630 × 1096.210 × 1088.892 × 1074.591 × 1063.481 × 1075.043 × 106
C13Mean2.721 × 1041.713 × 1083.870 × 1075.287 × 1051.410 × 1085.755 × 1096.114 × 1031.116 × 1053.970 × 1084.829 × 1078.335 × 1043.082 × 1051.136 × 1084.363 × 103
Std2.669 × 1042.494 × 1084.551 × 1074.511 × 1054.756 × 1071.469 × 1094.874 × 1038.961 × 1042.910 × 1084.045 × 1073.843 × 1042.007 × 1052.301 × 1076.047 × 103
C14Mean3.155 × 1059.216 × 1052.807 × 1061.396 × 1054.452 × 1063.191 × 1063.278 × 1057.482 × 1052.378 × 1063.057 × 1063.842 × 1051.098 × 1061.372 × 1053.385 × 105
Std1.606 × 1058.412 × 1052.151 × 1061.739 × 1052.280 × 1061.056 × 1062.218 × 1055.533 × 1052.238 × 1062.386 × 1062.431 × 1056.427 × 1053.732 × 1042.498 × 105
C15Mean1.410 × 1049.179 × 1061.952 × 1062.068 × 1042.630 × 1071.040 × 1091.332 × 1043.240 × 1044.422 × 1073.801 × 1063.802 × 1041.597 × 1053.124 × 1071.469 × 104
Std1.238 × 1041.712 × 1072.729 × 1061.892 × 1041.106 × 1072.637 × 1085.437 × 1031.221 × 1047.594 × 1075.744 × 1061.905 × 1041.434 × 1057.722 × 1061.528 × 104
C16Mean2.333 × 1031.626 × 1034.012 × 1032.911 × 1031.832 × 1034.300 × 1032.059 × 1032.344 × 1032.537 × 1034.571 × 1031.954 × 1031.915 × 1031.265 × 1032.179 × 103
Std4.003 × 1026.166 × 1028.306 × 1027.347 × 1022.034 × 1022.800 × 1023.929 × 1024.871 × 1022.537 × 1038.614 × 1023.947 × 1021.889 × 1021.294 × 1025.456 × 102
C17Mean1.875 × 1031.020 × 1032.428 × 1032.340 × 1031.404 × 1033.216 × 1031.634 × 1031.638 × 1031.826 × 1032.583 × 1031.601 × 1031.431 × 1031.058 × 1031.649 × 103
Std4.481 × 1023.147 × 1024.840 × 1026.533 × 1021.752 × 1022.521 × 1023.379 × 1024.161 × 1023.317 × 1026.384 × 1022.673 × 1021.950 × 1021.673 × 1023.443 × 102
C18Mean2.191 × 1065.657 × 1062.271 × 1075.592 × 1055.958 × 1062.071 × 1071.932 × 1065.215 × 1061.086 × 1072.436 × 1073.360 × 1061.993 × 1061.260 × 1061.961 × 106
Std1.277 × 1064.792 × 1061.376 × 1073.756 × 1052.536 × 1065.822 × 1069.264 × 1053.463 × 1061.501 × 1071.686 × 1073.302 × 1061.042 × 1064.989 × 1051.048 × 106
C19Mean1.684 × 1044.744 × 1066.895 × 1065.925 × 1051.636 × 1064.567 × 1082.437 × 1042.496 × 1041.287 × 1075.882 × 1061.267 × 1061.042 × 1061.422 × 1071.099 × 104
Std1.131 × 1049.317 × 1066.424 × 1069.113 × 1055.609 × 1051.279 × 1088.655 × 1031.815 × 1042.490 × 1076.716 × 1061.755 × 1068.510 × 1043.690 × 1069.587 × 103
C20Mean1.532 × 1038.485 × 1021.797 × 1031.923 × 1031.326 × 1031.654 × 1031.449 × 1031.446 × 1031.340 × 1031.780 × 1031.084 × 1031.153 × 1037.313 × 1021.297 × 103
Std3.737 × 1023.681 × 1023.002 × 1021.174 × 1021.764 × 1027.474 × 1012.277 × 1024.302 × 1023.161 × 1023.592 × 1022.288 × 1022.197 × 1021.066 × 1022.918 × 102
C21Mean6.386 × 1023.865 × 1028.567 × 1027.478 × 1025.435 × 1028.204 × 1025.071 × 1025.890 × 1027.360 × 1028.471 × 1024.610 × 1024.013 × 1025.033 × 1024.561 × 102
Std7.417 × 1012.922 × 1011.056 × 1026.275 × 1013.189 × 1012.410 × 1017.406 × 1016.121 × 1014.083 × 1017.858 × 1016.288 × 1012.059 × 1011.468 × 1016.032 × 101
C22Mean7.932 × 1037.643 × 1031.123 × 1041.194 × 1046.933 × 1031.168 × 1047.463 × 1038.511 × 1031.243 × 1041.117 × 1046.747 × 1035.286 × 1039.696 × 1026.188 × 103
Std1.009 × 1032.285 × 1031.077 × 1035.125 × 1021.644 × 1031.096 × 1038.206 × 1022.401 × 1031.625 × 1031.086 × 1032.003 × 1033.501 × 1032.285 × 1037.416 × 102
C23Mean1.093 × 1036.548 × 1021.410 × 1033.340 × 1037.670 × 1021.409 × 1038.525 × 1028.968 × 1021.208 × 1031.449 × 1037.799 × 1026.698 × 1027.327 × 1027.713 × 102
Std1.205 × 1024.468 × 1011.670 × 1021.988 × 1022.929 × 1015.289 × 1011.056 × 1029.595 × 1017.848 × 1011.626 × 1027.943 × 1011.935 × 1011.317 × 1017.208 × 101
C24Mean1.137 × 1039.053 × 1021.366 × 1031.364 × 1031.237 × 1031.549 × 1031.241 × 1038.813 × 1021.358 × 1031.368 × 1037.827 × 1028.872 × 1027.781 × 1027.302 × 102
Std1.448 × 1021.403 × 1021.513 × 1021.642 × 1022.373 × 1024.148 × 1011.414 × 1029.065 × 1015.525 × 1011.459 × 1026.639 × 1013.414 × 1011.507 × 1019.637 × 101
C25Mean6.141 × 1029.099 × 1021.299 × 1031.168 × 1037.375 × 1028.018 × 1036.143 × 1026.903 × 1022.588 × 1031.359 × 1037.306 × 1026.245 × 1026.245 × 1025.499 × 102
Std2.208 × 1011.556 × 1022.630 × 1021.664 × 1022.761 × 1017.004 × 1026.143 × 1024.378 × 1014.914 × 1022.699 × 1027.072 × 1011.508 × 1011.253 × 1013.163 × 101
C26Mean6.515 × 1033.725 × 1031.155 × 1049.616 × 1033.144 × 1031.109 × 1044.100 × 1033.611 × 1039.089 × 1031.100 × 1045.402 × 1033.416 × 1033.997 × 1033.902 × 103
Std2.628 × 1037.991 × 1021.557 × 1031.048 × 1037.943 × 1024.356 × 1023.527 × 1032.725 × 1039.068 × 1021.086 × 1031.627 × 1031.839 × 1021.680 × 1023.258 × 103
C27Mean1.034 × 1038.450 × 1021.722 × 1039.509 × 1035.000112 × 1022.412 × 1039.093 × 1028.001 × 1021.777 × 1031.817 × 1039.563 × 1027.657 × 1026.422 × 1025.000106 × 102
Std1.281 × 1028.282 × 1014.392 × 1028.212 × 1021.331 × 10−41.454 × 1021.596 × 1028.577 × 1012.045 × 1024.455 × 1021.174 × 1023.238 × 1011.410 × 1013.220 × 10−4
C28Mean5.871 × 1021.344 × 1031.941 × 1032.034 × 1034.996 × 1025.770 × 1035.829 × 1027.477 × 1023.022 × 1031.972 × 1038.005 × 1026.977 × 1025.379 × 1025.000 × 102
Std4.508 × 1013.887 × 1024.551 × 1023.146 × 1021.151 × 1003.075 × 1024.253 × 1012.177 × 1024.093 × 1023.069 × 1021.006 × 1022.072 × 1011.403 × 1013.310 × 10−4
C29Mean2.344 × 1031.626 × 1035.311 × 1034.235 × 1031.404 × 1035.581 × 1031.864 × 1032.038 × 1033.662 × 1035.172 × 1032.503 × 1031.143 × 1031.156 × 1031.537 × 103
Std3.690 × 1022.399 × 1021.186 × 1038.067 × 1021.678 × 1025.236 × 1024.111 × 1025.000 × 1026.000 × 1021.283 × 1034.405 × 1021.468 × 1029.263 × 1014.108 × 102
C30Mean1.445 × 1061.203 × 1082.340 × 1081.180 × 1082.286 × 1071.045 × 1091.058 × 1063.548 × 1062.720 × 1082.340 × 1087.907 × 1071.278 × 1067.651 × 1076.158 × 103
Std5.693 × 1054.878 × 1071.060 × 1083.720 × 1071.246 × 1071.624 × 1081.607 × 1051.732 × 1061.340 × 1081.150 × 1082.423 × 1072.043 × 1051.081 × 1075.932 × 103
Bold values indicate the optimal results.
Table A3. Comparison of results on CEC 2017 functions (Dim = 100).
Function | Index | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA | MSESSA
C01Mean3.400 × 1083.600 × 10104.420 × 10106.560 × 10102.287 × 10102.262 × 10116.660 × 1083.362 × 10101.290 × 10114.380 × 10107.650 × 1095.750 × 1063.800 × 1091.865 × 107
Std1.320 × 1081.140 × 10106.640 × 1097.860 × 1093.819 × 1091.121 × 10102.300 × 1086.685 × 1096.300 × 1098.210 × 1092.250 × 1093.435 × 1062.120 × 1088.154 × 106
C03Mean5.756 × 1053.549 × 1058.639 × 1052.572 × 1055.997 × 1054.140 × 1053.139 × 1053.892 × 1053.153 × 1059.042 × 1053.186 × 1052.762 × 1053.232 × 1052.679 × 105
Std6.883 × 1045.301 × 1041.856 × 1051.580 × 1043.789 × 1042.231 × 1042.208 × 1049.999 × 1041.071 × 1041.360 × 1053.230 × 1041.830 × 1044.179 × 1042.607 × 104
C04Mean6.977 × 1022.432 × 1036.935 × 1031.129 × 1045.117 × 1035.163 × 1046.678 × 1023.096 × 1031.720 × 1046.785 × 1031.524 × 1035.420 × 1026.165 × 1025.209 × 102
Std9.500 × 1016.891 × 1021.370 × 1032.560 × 1036.483 × 1023.198 × 1031.040 × 1028.166 × 1022.346 × 1031.407 × 1033.465 × 1026.453 × 1012.610 × 1017.216 × 101
C05Mean8.811 × 1026.266 × 1021.275 × 1039.579 × 1021.144 × 1031.528 × 1038.223 × 1021.051 × 1031.272 × 1031.292 × 1037.512 × 1027.585 × 1027.715 × 1026.764 × 102
Std4.877 × 1016.505 × 1011.106 × 1024.010 × 1016.917 × 1014.185 × 1015.283 × 1011.316 × 1024.581 × 1011.233 × 1028.422 × 1014.586 × 1012.233 × 1016.819 × 101
C06Mean6.711 × 1013.191 × 1011.000 × 1028.246 × 1013.217 × 1011.001 × 1025.957 × 1016.794 × 1018.876 × 1019.862 × 1016.077 × 1018.674 × 1001.280 × 1011.063 × 101
Std2.818 × 1004.339 × 1001.121 × 1013.458 × 1004.056 × 1002.359 × 1003.301 × 1006.476 × 1003.426 × 1001.176 × 1015.786 × 1005.219 × 10−14.335 × 10−11.412 × 100
C07Mean2.561 × 1031.161 × 1032.879 × 1032.878 × 1031.653 × 1035.114 × 1032.173 × 1032.333 × 1032.360 × 1032.863 × 1031.497 × 1039.714 × 1021.035 × 1031.086 × 103
Std1.018 × 1021.183 × 1021.280 × 1025.156 × 1019.782 × 1011.838 × 1021.751 × 1023.010 × 1021.143 × 1021.618 × 1021.917 × 1025.976 × 1011.934 × 1011.005 × 102
C08Mean1.049 × 1036.404 × 1021.429 × 1031.187 × 1031.162 × 1031.584 × 1038.954 × 1021.145 × 1031.341 × 1031.421 × 1038.149 × 1027.594 × 1027.783 × 1026.938 × 102
Std5.063 × 1011.391 × 1021.192 × 1023.254 × 1016.135 × 1013.973 × 1011.142 × 1021.827 × 1025.424 × 1011.090 × 1021.070 × 1024.124 × 1012.277 × 1016.579 × 101
C09Mean2.475 × 1042.981 × 1046.751 × 1043.972 × 1047.703 × 1048.134 × 1042.603 × 1043.613 × 1046.435 × 1046.929 × 1042.293 × 1042.058 × 1041.396 × 1032.332 × 104
Std1.009 × 1031.061 × 1041.614 × 1042.653 × 1037.490 × 1034.988 × 1031.957 × 1031.126 × 1044.946 × 1031.932 × 1043.298 × 1032.916 × 1039.284 × 1011.059 × 103
C10Mean1.669 × 1041.642 × 1042.556 × 1042.010 × 1041.828 × 1042.958 × 1041.484 × 1042.389 × 1042.819 × 1042.534 × 1041.626 × 1041.901 × 1042.325 × 1041.353 × 104
Std1.547 × 1034.055 × 1032.139 × 1031.322 × 1034.276 × 1024.312 × 1021.306 × 1035.713 × 1031.872 × 1031.725 × 1031.983 × 1035.372 × 1026.995 × 1021.087 × 103
C11Mean6.094 × 1046.102 × 1041.514 × 1053.495 × 1041.057 × 1051.385 × 1055.484 × 1042.532 × 1051.009 × 1051.455 × 1054.914 × 1046.831 × 1041.016 × 1041.328 × 104
Std1.467 × 1041.234 × 1045.146 × 1048.143 × 1031.706 × 1049.068 × 1031.811 × 1047.371 × 1041.440 × 1045.099 × 1049.981 × 1031.529 × 1042.367 × 1035.918 × 103
C12Mean2.320 × 1085.450 × 1096.630 × 1091.950 × 10101.193 × 10108.729 × 10102.500 × 1082.005 × 1094.130 × 10106.940 × 1099.260 × 1081.493 × 1081.477 × 1098.343 × 107
Std1.010 × 1083.220 × 1091.530 × 1095.170 × 1092.038 × 1099.079 × 1091.070 × 1087.850 × 1085.850 × 1091.900 × 1093.400 × 1085.415 × 1071.183 × 1088.426 × 107
C13Mean6.178 × 1043.350 × 1082.000 × 1085.570 × 1081.641 × 1091.684 × 10107.000 × 1041.865 × 1066.080 × 1091.710 × 1085.277 × 1041.560 × 1062.607 × 1088.120 × 103
Std1.813 × 1043.650 × 1089.522 × 1075.580 × 1084.827 × 1081.825 × 1092.759 × 1041.696 × 1061.980 × 1097.666 × 1071.978 × 1041.156 × 1062.265 × 1076.250 × 103
C14Mean2.431 × 1066.416 × 1061.117 × 1077.758 × 1053.099 × 1073.221 × 1071.510 × 1068.338 × 1061.075 × 1071.180 × 1074.633 × 1067.501 × 1062.079 × 1061.513 × 106
Std1.038 × 1063.308 × 1064.628 × 1063.544 × 1059.450 × 1066.743 × 1066.718 × 1054.150 × 1064.912 × 1064.774 × 1061.968 × 1062.549 × 1066.709 × 1055.933 × 105
C15Mean1.259 × 1041.670 × 1082.379 × 1074.824 × 1066.679 × 1085.791 × 1095.529 × 1033.289 × 1051.000 × 1092.086 × 1074.938 × 1041.548 × 1061.015 × 1083.377 × 103
Std7.276 × 1032.310 × 1082.059 × 1071.347 × 1072.199 × 1088.629 × 1083.061 × 1038.365 × 1058.350 × 1081.011 × 1071.586 × 1041.119 × 1061.778 × 1074.491 × 103
C16Mean5.354 × 1034.583 × 1031.268 × 1048.144 × 1036.052 × 1031.274 × 1044.677 × 1035.219 × 1039.398 × 1031.264 × 1046.104 × 1035.506 × 1034.344 × 1034.813 × 103
Std5.545 × 1029.351 × 1021.767 × 1037.943 × 1023.852 × 1025.881 × 1028.522 × 1027.679 × 1027.050 × 1021.818 × 1038.431 × 1024.490 × 1024.556 × 1028.047 × 102
C17Mean4.245 × 1033.090 × 1037.668 × 1035.181 × 1031.412 × 1043.155 × 1043.887 × 1034.826 × 1031.121 × 1047.518 × 1033.884 × 1033.899 × 1033.525 × 1033.787 × 103
Std5.617 × 1025.660 × 1021.411 × 1039.387 × 1027.969 × 1039.845 × 1034.889 × 1025.887 × 1021.145 × 1041.282 × 1036.791 × 1023.667 × 1023.528 × 1026.412 × 102
C18Mean3.163 × 1067.242 × 1061.199 × 1079.230 × 1052.889 × 1075.286 × 1074.422 × 1061.286 × 1071.453 × 1071.014 × 1073.995 × 1066.766 × 1063.349 × 1062.034 × 106
Std1.832 × 1064.604 × 1065.492 × 1063.973 × 1056.845 × 1061.092 × 1073.205 × 1067.808 × 1069.731 × 1064.390 × 1061.825 × 1062.792 × 1066.828 × 1059.128 × 105
C19Mean3.131 × 1041.320 × 1087.867 × 1072.273 × 1076.867 × 1086.090 × 1091.413 × 1041.025 × 1061.120 × 1097.709 × 1077.141 × 1062.220 × 1061.072 × 1083.723 × 103
Std2.882 × 1041.660 × 1084.910 × 1072.679 × 1072.233 × 1081.051 × 1091.494 × 1041.130 × 1067.860 × 1084.465 × 1073.960 × 1061.318 × 1062.257 × 1073.802 × 103
C20Mean3.871 × 1032.902 × 1034.719 × 1034.776 × 1034.343 × 1035.029 × 1033.617 × 1034.063 × 1034.236 × 1034.742 × 1033.349 × 1034.139 × 1033.042 × 1033.246 × 103
Std5.609 × 1028.019 × 1025.799 × 1022.680 × 1022.554 × 1021.798 × 1025.819 × 1028.767 × 1026.178 × 1025.635 × 1025.805 × 1022.558 × 1023.312 × 1025.244 × 102
C21Mean1.663 × 1038.653 × 1022.162 × 1031.735 × 1031.443 × 1031.927 × 1031.170 × 1031.637 × 1031.774 × 1032.141 × 1031.056 × 1031.010 × 1039.959 × 1029.012 × 102
Std1.991 × 1029.962 × 1011.337 × 1021.862 × 1025.934 × 1015.470 × 1011.642 × 1021.533 × 1027.586 × 1011.953 × 1021.440 × 1024.405 × 1012.897 × 1018.237 × 101
C22Mean1.828 × 1041.738 × 1042.709 × 1042.299 × 1041.932 × 1043.102 × 1041.719 × 1042.433 × 1042.983 × 1042.714 × 1041.832 × 1042.038 × 1041.617 × 1041.462 × 104
Std1.400 × 1033.043 × 1031.308 × 1031.024 × 1035.740 × 1025.742 × 1021.488 × 1034.683 × 1031.660 × 1031.699 × 1031.962 × 1035.645 × 1021.121 × 1041.177 × 103
C23Mean1.957 × 1031.184 × 1032.789 × 1036.723 × 1031.500 × 1033.190 × 1031.472 × 1031.674 × 1032.831 × 1032.786 × 1031.619 × 1031.098 × 1031.296 × 1031.089 × 103
Std1.929 × 1025.704 × 1012.757 × 1023.317 × 1026.082 × 1011.027 × 1021.533 × 1021.406 × 1021.402 × 1022.502 × 1021.652 × 1022.611 × 1012.788 × 1018.824 × 101
C24Mean2.847 × 1031.999 × 1033.978 × 1034.617 × 1032.394 × 1035.445 × 1032.055 × 1032.240 × 1034.535 × 1033.995 × 1032.123 × 1031.698 × 1031.619 × 1031.665 × 103
Std2.736 × 1021.825 × 1023.924 × 1024.111 × 1027.212 × 1012.278 × 1022.342 × 1022.701 × 1023.159 × 1024.173 × 1022.041 × 1022.838 × 1012.732 × 1011.065 × 102
C25Mean1.210 × 1033.060 × 1034.228 × 1034.689 × 1035.163 × 1032.298 × 1041.237 × 1033.516 × 1037.946 × 1034.261 × 1032.050 × 1031.457 × 1031.206 × 1031.007 × 103
Std7.953 × 1015.453 × 1025.258 × 1026.129 × 1025.541 × 1022.126 × 1036.421 × 1019.005 × 1028.640 × 1025.906 × 1022.610 × 1026.747 × 1013.432 × 1015.882 × 101
C26Mean2.246 × 1041.105 × 1043.314 × 1042.742 × 1041.819 × 1047.650 × 1041.867 × 1041.607 × 1043.004 × 1043.277 × 1041.736 × 1041.143 × 1041.036 × 1041.538 × 104
Std4.939 × 1039.404 × 1023.633 × 1032.963 × 1038.416 × 1027.443 × 1034.712 × 1034.162 × 1031.062 × 1033.610 × 1033.059 × 1034.108 × 1022.992 × 1025.236 × 103
C27Mean1.287 × 1031.271 × 1032.857 × 1031.559 × 1045.00024 × 1025.00025 × 1021.188 × 1031.005 × 1034.167 × 1032.833 × 1031.589 × 1031.052 × 1038.475 × 1025.00022 × 102
Std3.130 × 1029.177 × 1017.434 × 1029.188 × 1021.694 × 10−46.024 × 10−51.919 × 1021.124 × 1024.359 × 1028.140 × 1022.285 × 1024.235 × 1012.004 × 1013.760 × 10−4
C28Mean1.083 × 1034.420 × 1036.626 × 1039.118 × 1035.000244 × 1025.000242 × 1021.017 × 1034.807 × 1031.304 × 1046.402 × 1032.089 × 1032.652 × 1039.044 × 1028.444 × 102
Std7.551 × 1011.274 × 1038.612 × 1029.637 × 1022.771 × 10−41.107 × 10−47.211 × 1011.684 × 1031.292 × 1039.669 × 1023.028 × 1024.898 × 1023.004 × 1013.894 × 101
C29Mean5.832 × 1034.979 × 1031.366 × 1041.111 × 1041.184 × 1044.924 × 1064.911 × 1035.022 × 1031.196 × 1041.416 × 1047.124 × 1034.573 × 1034.483 × 1033.627 × 103
Std7.132 × 1027.056 × 1022.152 × 1031.624 × 1032.801 × 1033.753 × 1066.704 × 1025.890 × 1021.823 × 1032.491 × 1038.190 × 1024.667 × 1024.134 × 1025.149 × 102
C30Mean2.315 × 1064.960 × 1081.050 × 1099.660 × 1081.357 × 1095.842 × 10102.038 × 1061.217 × 1074.510 × 1099.000 × 1082.270 × 1082.579 × 1061.831 × 1087.285 × 103
Std1.148 × 1063.020 × 1084.950 × 1084.240 × 1082.762 × 1088.629 × 1071.104 × 1067.475 × 1061.290 × 1093.890 × 1081.050 × 1083.183 × 1062.757 × 1071.160 × 104
Bold values indicate the optimal results.
Table A4. p-values of the Wilcoxon rank-sum tests on CEC 2017 functions (Dim = 30).
Function | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA
C011.44 × 10−33.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.91 × 10−23.02 × 10−113.02 × 10−113.02 × 10−112.17 × 10−14.38 × 10−13.02 × 10−11
C033.02 × 10−115.07 × 10−103.02 × 10−113.69 × 10−113.02 × 10−113.02 × 10−114.18 × 10−93.02 × 10−113.02 × 10−113.02 × 10−119.26 × 10−93.02 × 10−113.02 × 10−11
C042.84 × 10−43.02 × 10−113.69 × 10−113.02 × 10−112.51 × 10−23.02 × 10−118.31 × 10−32.39 × 10−43.02 × 10−113.02 × 10−112.02 × 10−86.77 × 10−51.29 × 10−9
C051.09 × 10−104.43 × 10−33.02 × 10−113.02 × 10−115.55 × 10−23.02 × 10−113.82 × 10−104.20 × 10−103.02 × 10−114.50 × 10−114.03 × 10−31.21 × 10−104.42 × 10−6
C063.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−117.06 × 10−13.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
C073.02 × 10−112.84 × 10−43.02 × 10−113.02 × 10−115.57 × 10−103.02 × 10−113.47 × 10−103.81 × 10−73.02 × 10−113.02 × 10−119.93 × 10−28.15 × 10−116.53 × 10−8
C087.38 × 10−101.27 × 10−26.07 × 10−113.02 × 10−111.53 × 10−53.02 × 10−116.52 × 10−92.83 × 10−83.69 × 10−114.50 × 10−113.67 × 10−33.65 × 10−81.21 × 10−10
C093.02 × 10−113.20 × 10−93.02 × 10−113.02 × 10−115.61 × 10−54.98 × 10−113.82 × 10−103.02 × 10−112.23 × 10−93.02 × 10−118.15 × 10−53.02 × 10−113.02 × 10−11
C101.73 × 10−71.26 × 10−16.70 × 10−113.02 × 10−112.58 × 10−13.02 × 10−116.38 × 10−32.57 × 10−74.08 × 10−113.02 × 10−119.88 × 10−32.06 × 10−16.12 × 10−10
C111.17 × 10−54.98 × 10−113.02 × 10−113.96 × 10−88.99 × 10−113.02 × 10−112.77 × 10−13.69 × 10−113.02 × 10−113.02 × 10−113.16 × 10−108.20 × 10−73.02 × 10−11
C122.40 × 10−12.37 × 10−103.02 × 10−111.31 × 10−84.11 × 10−73.02 × 10−111.71 × 10−14.06 × 10−23.02 × 10−113.02 × 10−119.92 × 10−112.51 × 10−23.02 × 10−11
C134.21 × 10−28.99 × 10−114.50 × 10−117.69 × 10−83.02 × 10−113.02 × 10−111.22 × 10−25.53 × 10−85.27 × 10−53.02 × 10−111.46 × 10−102.03 × 10−93.02 × 10−11
C146.38 × 10−31.19 × 10−13.20 × 10−97.69 × 10−82.92 × 10−91.25 × 10−73.27 × 10−27.98 × 10−21.19 × 10−66.52 × 10−92.81 × 10−23.83 × 10−51.43 × 10−8
C159.63 × 10−28.99 × 10−113.02 × 10−112.64 × 10−13.02 × 10−113.02 × 10−114.31 × 10−84.46 × 10−44.08 × 10−113.02 × 10−118.89 × 10−108.84 × 10−73.02 × 10−11
C167.06 × 10−13.96 × 10−85.97 × 10−91.01 × 10−83.32 × 10−61.10 × 10−87.28 × 10−12.64 × 10−13.34 × 10−114.98 × 10−114.23 × 10−31.25 × 10−44.50 × 10−11
C176.95 × 10−14.57 × 10−91.00 × 10−31.20 × 10−84.69 × 10−81.78 × 10−41.81 × 10−14.86 × 10−31.26 × 10−16.57 × 10−24.22 × 10−45.19 × 10−71.17 × 10−9
C183.67 × 10−36.28 × 10−61.21 × 10−106.53 × 10−72.60 × 10−83.02 × 10−115.75 × 10−22.32 × 10−65.00 × 10−92.57 × 10−74.22 × 10−48.07 × 10−18.12 × 10−4
C194.73 × 10−11.33 × 10−103.02 × 10−112.39 × 10−43.02 × 10−113.02 × 10−119.35 × 10−16.73 × 10−13.02 × 10−113.02 × 10−113.02 × 10−111.17 × 10−93.02 × 10−11
C201.09 × 10−51.44 × 10−21.34 × 10−53.02 × 10−111.04 × 10−43.02 × 10−114.92 × 10−19.12 × 10−13.64 × 10−24.03 × 10−31.02 × 10−14.08 × 10−54.74 × 10−6
C211.47 × 10−78.20 × 10−74.08 × 10−113.02 × 10−114.94 × 10−53.02 × 10−112.96 × 10−59.53 × 10−79.92 × 10−114.08 × 10−116.63 × 10−13.20 × 10−91.86 × 10−3
C223.79 × 10−11.09 × 10−12.78 × 10−73.02 × 10−116.63 × 10−13.02 × 10−119.35 × 10−12.58 × 10−12.77 × 10−52.92 × 10−99.59 × 10−17.96 × 10−32.71 × 10−2
C231.39 × 10−61.55 × 10−94.08 × 10−113.02 × 10−112.49 × 10−63.02 × 10−115.59 × 10−17.62 × 10−13.02 × 10−113.02 × 10−117.28 × 10−13.02 × 10−112.15 × 10−2
C249.33 × 10−24.20 × 10−105.19 × 10−25.53 × 10−89.06 × 10−83.02 × 10−111.60 × 10−74.69 × 10−85.46 × 10−65.61 × 10−56.72 × 10−103.69 × 10−113.02 × 10−11
C254.86 × 10−33.69 × 10−113.02 × 10−119.92 × 10−112.38 × 10−73.02 × 10−117.70 × 10−44.42 × 10−63.02 × 10−113.02 × 10−112.92 × 10−93.03 × 10−21.11 × 10−6
C264.44 × 10−71.64 × 10−52.15 × 10−106.53 × 10−88.48 × 10−93.02 × 10−113.18 × 10−17.98 × 10−27.74 × 10−68.15 × 10−117.48 × 10−22.92 × 10−96.97 × 10−3
C273.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.56 × 10−43.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
C282.71 × 10−27.39 × 10−113.02 × 10−118.99 × 10−112.71 × 10−23.02 × 10−111.67 × 10−13.26 × 10−13.02 × 10−113.02 × 10−119.07 × 10−32.23 × 10−14.84 × 10−2
C295.86 × 10−62.28 × 10−16.70 × 10−113.02 × 10−119.26 × 10−93.02 × 10−112.75 × 10−36.57 × 10−24.20 × 10−103.69 × 10−113.26 × 10−74.46 × 10−48.50 × 10−2
C302.83 × 10−83.02 × 10−113.02 × 10−113.02 × 10−119.92 × 10−113.02 × 10−113.08 × 10−83.82 × 10−93.02 × 10−113.02 × 10−113.02 × 10−111.96 × 10−103.02 × 10−11
Underlined values are those with p > 0.05, i.e., no significant difference at the 5% level.
Table A5. p-values of the Wilcoxon rank-sum tests on CEC 2017 functions (Dim = 50).
Function | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA
C01 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C03 | 4.50×10^−11 | 8.68×10^−3 | 6.07×10^−11 | 4.42×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 2.92×10^−9 | 3.02×10^−11 | 2.67×10^−9 | 1.09×10^−10 | 6.41×10^−1 | 2.52×10^−1 | 5.49×10^−11
C04 | 8.88×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.63×10^−2 | 5.00×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 1.46×10^−10 | 4.22×10^−4 | 7.39×10^−11
C05 | 4.98×10^−11 | 9.51×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 1.41×10^−9 | 3.02×10^−11 | 1.17×10^−9 | 6.72×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 8.30×10^−1 | 5.86×10^−6 | 7.77×10^−9
C06 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.69×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C07 | 3.02×10^−11 | 2.40×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 4.22×10^−4 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.01×10^−4 | 3.02×10^−11 | 4.62×10^−10
C08 | 4.50×10^−11 | 6.20×10^−4 | 3.02×10^−11 | 3.02×10^−11 | 1.55×10^−9 | 3.02×10^−11 | 6.53×10^−8 | 4.08×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.81×10^−2 | 9.51×10^−6 | 7.77×10^−9
C09 | 2.03×10^−9 | 7.60×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 3.34×10^−11 | 3.02×10^−11 | 1.11×10^−4 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 5.59×10^−1 | 9.51×10^−6 | 3.02×10^−11
C10 | 2.03×10^−9 | 1.52×10^−3 | 3.02×10^−11 | 3.02×10^−11 | 1.96×10^−10 | 3.02×10^−11 | 1.73×10^−6 | 2.02×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 4.42×10^−6 | 3.02×10^−11 | 3.02×10^−11
C11 | 5.07×10^−10 | 4.35×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.25×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.35×10^−8 | 3.02×10^−11
C12 | 2.28×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 8.31×10^−3 | 1.21×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 8.99×10^−11 | 3.02×10^−11 | 3.02×10^−11
C13 | 1.19×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 6.52×10^−1 | 5.49×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 5.49×10^−11 | 3.02×10^−11
C14 | 9.12×10^−1 | 1.44×10^−3 | 2.23×10^−9 | 1.43×10^−5 | 3.34×10^−11 | 1.64×10^−5 | 3.48×10^−1 | 2.53×10^−4 | 1.07×10^−9 | 1.46×10^−10 | 8.53×10^−1 | 2.37×10^−10 | 3.83×10^−5
C15 | 1.04×10^−4 | 4.98×10^−11 | 3.02×10^−11 | 1.26×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 1.03×10^−2 | 9.26×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 4.74×10^−6 | 4.62×10^−10 | 3.02×10^−11
C16 | 4.46×10^−1 | 1.43×10^−5 | 8.10×10^−10 | 9.07×10^−3 | 5.57×10^−3 | 1.07×10^−9 | 1.05×10^−1 | 6.10×10^−1 | 9.05×10^−2 | 7.39×10^−11 | 1.09×10^−1 | 5.01×10^−2 | 1.78×10^−10
C17 | 9.88×10^−3 | 1.17×10^−5 | 6.72×10^−10 | 1.36×10^−7 | 2.97×10^−1 | 3.52×10^−7 | 5.30×10^−1 | 7.06×10^−1 | 2.07×10^−2 | 6.01×10^−8 | 2.15×10^−2 | 1.91×10^−2 | 2.60×10^−8
C18 | 1.30×10^−3 | 1.73×10^−7 | 3.02×10^−11 | 1.64×10^−5 | 9.92×10^−11 | 5.49×10^−1 | 1.00×10^−3 | 5.86×10^−6 | 3.35×10^−8 | 3.34×10^−11 | 2.84×10^−4 | 6.84×10^−1 | 9.23×10^−1
C19 | 2.64×10^−1 | 3.34×10^−11 | 3.02×10^−11 | 4.50×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.64×10^−3 | 3.33×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 3.34×10^−11 | 1.21×10^−10 | 3.02×10^−11
C20 | 6.67×10^−3 | 2.62×10^−3 | 7.60×10^−7 | 1.61×10^−10 | 2.89×10^−3 | 5.00×10^−9 | 6.73×10^−1 | 2.51×10^−2 | 9.94×10^−1 | 1.01×10^−8 | 2.01×10^−1 | 9.07×10^−3 | 1.61×10^−6
C21 | 8.15×10^−11 | 1.34×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.01×10^−7 | 1.36×10^−7 | 2.43×10^−5 | 2.19×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.03×10^−3 | 4.11×10^−7 | 1.01×10^−8
C22 | 8.10×10^−10 | 7.01×10^−2 | 3.02×10^−11 | 3.02×10^−11 | 3.37×10^−4 | 3.32×10^−6 | 4.08×10^−5 | 1.25×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 1.37×10^−3 | 7.51×10^−1 | 5.57×10^−10
C23 | 6.07×10^−11 | 1.64×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 2.40×10^−1 | 5.00×10^−9 | 1.95×10^−3 | 5.19×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 2.23×10^−1 | 2.00×10^−5 | 1.50×10^−2
C24 | 8.30×10^−1 | 4.98×10^−11 | 9.79×10^−5 | 4.21×10^−2 | 2.60×10^−5 | 4.20×10^−1 | 1.69×10^−9 | 2.19×10^−8 | 4.80×10^−7 | 1.29×10^−6 | 6.07×10^−11 | 4.08×10^−11 | 3.02×10^−11
C25 | 9.26×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.43×10^−5 | 5.27×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.50×10^−11 | 8.99×10^−11
C26 | 1.03×10^−2 | 2.24×10^−2 | 1.61×10^−10 | 5.49×10^−11 | 1.67×10^−1 | 7.96×10^−3 | 4.92×10^−1 | 6.95×10^−1 | 5.07×10^−10 | 3.02×10^−11 | 4.06×10^−2 | 3.79×10^−1 | 7.73×10^−2
C27 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.50×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C28 | 1.96×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.12×10^−1 | 1.95×10^−3 | 6.70×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C29 | 3.35×10^−8 | 6.20×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 4.92×10^−1 | 1.69×10^−9 | 1.76×10^−2 | 1.86×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.47×10^−10 | 2.00×10^−5 | 4.74×10^−6
C30 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.18×10^−3 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
Underlined values are those with p > 0.05, i.e., no significant difference at the 5% level.
Table A6. p-values of the Wilcoxon rank-sum tests on CEC 2017 functions (Dim = 100).
Function | SSA | GWO | WOA | PSO | ABC | FA | QMESSA | LSSA | AGWO | jWOA | VPPSO | NABC | MSEFA
C01 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.69×10^−8 | 3.02×10^−11
C03 | 3.02×10^−11 | 8.99×10^−11 | 3.02×10^−11 | 3.79×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 1.20×10^−8 | 3.02×10^−11 | 2.92×10^−9 | 3.02×10^−11 | 2.15×10^−10 | 2.13×10^−4 | 1.07×10^−7
C04 | 1.61×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.69×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.71×10^−1 | 1.49×10^−6
C05 | 3.02×10^−11 | 3.59×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.20×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.06×10^−3 | 1.73×10^−6 | 1.78×10^−10
C06 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.88×10^−6 | 3.32×10^−6
C07 | 3.02×10^−11 | 4.06×10^−2 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 8.99×10^−11 | 1.17×10^−5 | 6.97×10^−3
C08 | 3.02×10^−11 | 2.53×10^−4 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.32×10^−6 | 1.09×10^−10 | 3.34×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.53×10^−5 | 5.86×10^−6 | 3.20×10^−9
C09 | 3.96×10^−8 | 6.67×10^−3 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.82×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 7.01×10^−2 | 7.60×10^−7 | 3.02×10^−11
C10 | 5.97×10^−9 | 1.87×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.74×10^−6 | 2.77×10^−5 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.03×10^−7 | 3.02×10^−11 | 3.02×10^−11
C11 | 4.50×10^−11 | 7.66×10^−5 | 3.02×10^−11 | 1.61×10^−10 | 3.02×10^−11 | 1.15×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 8.42×10^−1
C12 | 6.72×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.09×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.42×10^−6 | 3.02×10^−11
C13 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C14 | 8.77×10^−2 | 3.50×10^−9 | 5.49×10^−11 | 4.44×10^−7 | 3.02×10^−11 | 6.31×10^−1 | 6.35×10^−2 | 3.47×10^−10 | 3.02×10^−11 | 5.49×10^−11 | 1.25×10^−7 | 3.02×10^−11 | 3.18×10^−1
C15 | 1.31×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.39×10^−2 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.34×10^−11 | 3.02×10^−11 | 3.02×10^−11
C16 | 2.27×10^−3 | 3.78×10^−2 | 3.02×10^−11 | 3.02×10^−11 | 1.55×10^−9 | 1.73×10^−7 | 3.02×10^−11 | 5.08×10^−3 | 3.02×10^−11 | 3.02×10^−11 | 1.87×10^−7 | 8.29×10^−6 | 4.68×10^−2
C17 | 6.55×10^−4 | 4.74×10^−6 | 3.34×10^−11 | 9.76×10^−10 | 3.02×10^−11 | 2.64×10^−1 | 3.63×10^−1 | 4.11×10^−7 | 3.02×10^−11 | 3.34×10^−11 | 9.93×10^−2 | 8.88×10^−1 | 1.44×10^−2
C18 | 8.50×10^−2 | 1.41×10^−9 | 2.03×10^−9 | 2.39×10^−8 | 3.02×10^−11 | 9.33×10^−2 | 9.07×10^−3 | 6.70×10^−11 | 3.02×10^−11 | 3.69×10^−11 | 5.09×10^−6 | 4.98×10^−11 | 1.87×10^−7
C19 | 8.89×10^−10 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 5.09×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C20 | 2.25×10^−4 | 3.03×10^−3 | 1.60×10^−7 | 3.69×10^−11 | 1.86×10^−9 | 9.92×10^−11 | 7.98×10^−2 | 2.16×10^−3 | 1.16×10^−7 | 4.20×10^−10 | 8.65×10^−1 | 1.10×10^−8 | 9.82×10^−1
C21 | 3.02×10^−11 | 1.17×10^−2 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 4.44×10^−7 | 2.44×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 7.04×10^−7 | 4.08×10^−5 | 3.09×10^−6
C22 | 2.38×10^−7 | 3.26×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 7.96×10^−3 | 9.26×10^−9 | 8.99×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.61×10^−10 | 6.12×10^−10 | 2.71×10^−2
C23 | 3.02×10^−11 | 8.20×10^−7 | 3.02×10^−11 | 3.02×10^−11 | 6.07×10^−11 | 3.82×10^−10 | 7.39×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 2.01×10^−1 | 5.57×10^−10
C24 | 4.98×10^−11 | 2.19×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 1.09×10^−10 | 4.06×10^−2 | 7.96×10^−1 | 1.11×10^−6 | 3.02×10^−11 | 3.02×10^−11 | 9.88×10^−3 | 8.10×10^−10 | 3.02×10^−11
C25 | 6.70×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 8.99×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C26 | 1.69×10^−9 | 3.20×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 9.59×10^−1 | 3.34×10^−3 | 2.60×10^−5 | 7.17×10^−1 | 3.02×10^−11 | 3.02×10^−11 | 1.37×10^−1 | 5.57×10^−10 | 8.48×10^−9
C27 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C28 | 8.99×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 1.17×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
C29 | 2.15×10^−10 | 7.12×10^−9 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 6.77×10^−5 | 3.47×10^−10 | 6.53×10^−8 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 7.09×10^−8 | 4.64×10^−5
C30 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11 | 3.02×10^−11
Underlined values are those with p > 0.05, i.e., no significant difference at the 5% level.
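The p-values in Tables A4–A6 come from pairwise Wilcoxon rank-sum tests over the independent runs of each algorithm pair (30 runs per algorithm, hence the floor of 3.02 × 10^−11). As a minimal illustrative sketch (not the authors' code; the function name `rank_sum_p` is our own), the two-sided test with the standard large-sample normal approximation can be computed as follows:

```python
import math
from statistics import NormalDist

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    which is adequate for sample sizes like the 30 runs used here."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find the run of tied values and assign each its average rank.
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

For example, two clearly separated samples yield a p-value far below 0.05, while identical samples yield p = 1.0; in the tables, entries above 0.05 (underlined) indicate the compared algorithm is statistically indistinguishable from MSESSA on that function. In practice, `scipy.stats.ranksums` provides the same test.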

References

  1. Deng, J.; Wang, L. A competitive memetic algorithm for multi-objective distributed permutation flow shop scheduling problem. Swarm Evol. Comput. 2021, 32, 121–131. [Google Scholar] [CrossRef]
  2. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. 2020, 88, 105946. [Google Scholar] [CrossRef]
  3. Wang, Z.; Xie, H. Wireless Sensor Network Deployment of 3D Surface Based on Enhanced Grey Wolf Optimizer. IEEE Access 2020, 8, 57229–57251. [Google Scholar] [CrossRef]
  4. Zelinka, I.; Snášel, V.; Abraham, A. Handbook of Optimization: From Classical to Modern Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 38, pp. II–III. [Google Scholar]
  5. Mavrovouniotis, M.; Muller, F.; Yang, S. Ant Colony Optimization with Local Search for Dynamic Traveling Salesman Problems. IEEE Trans. Cybern. 2017, 47, 1743–1756. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Sun, G.; Han, R.; Deng, L.; Li, C.; Yang, G. Hierarchical structure-based joint operations algorithm for global optimization. Swarm Evol. Comput. 2023, 79, 101311. [Google Scholar] [CrossRef]
  7. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  8. Too, J.; Mafarja, M.; Mirjalili, S. Spatial bound whale optimization algorithm: An efficient high-dimensional feature selection approach. Neural Comput. Appl. 2021, 33, 16229–16250. [Google Scholar] [CrossRef]
  9. Shami, T.; Mirjalili, S.; Al-Eryani, Y.; Daoudi, K.; Izadi, S.; Abualigah, L. Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Comput. Appl. 2023, 35, 1–31. [Google Scholar] [CrossRef]
  10. Zhu, D.; Huang, Z.; Liao, S. Improved Bare Bones Particle Swarm Optimization for DNA Sequence Design. IEEE Trans. NanoBiosci. 2022. [Google Scholar] [CrossRef]
  11. Li, S.; Luo, X.; Wu, L. An improved whale optimization algorithm for locating critical slip surface of slopes. Adv. Eng. Softw. 2021, 157, 103009. [Google Scholar] [CrossRef]
  12. Mortazavi, A. Bayesian Interactive Search Algorithm: A New Probabilistic Swarm Intelligence Tested on Mathematical and Structural Optimization Problems. Adv. Eng. Softw. 2021, 155, 102994. [Google Scholar] [CrossRef]
  13. Peng, H.; Deng, C.; Wu, Z. Best neighbor-guided artificial bee colony algorithm for continuous optimization problems. Soft Comput. A Fusion Found. Methodol. Appl. 2019, 23, 8723–8740. [Google Scholar] [CrossRef]
  14. Peng, H.; Qian, J.; Kong, F.; Fan, D.; Shao, P.; Wu, Z. Enhancing firefly algorithm with sliding window for continuous optimization problems. Neural Comput. Appl. 2022, 34, 13733–13756. [Google Scholar] [CrossRef]
  15. Kennedy, J.; Eberhart, R. Particle swarm optimization. In IEEE International Conference on Neural Networks; IEEE Service Center: Perth, Australia, 1995; pp. 1942–1948. [Google Scholar]
  16. Mirjalili, S.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  17. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  18. Karaboga, D.; Akay, B. A comparative study of Artificial Bee Colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  19. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimization. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  20. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  21. Liu, T.; Meng, X. Hybrid Strategy Improved Sparrow Search Algorithm in the Field of Intrusion Detection. IEEE Access 2023, 11, 32134–32151. [Google Scholar]
  22. Xue, Z.; Yu, J.; Zhao, A.; Zong, Y.; Yang, S.; Wang, M. Optimal chiller loading by improved sparrow search algorithm for saving energy consumption. J. Build. Eng. 2023, 67, 105980. [Google Scholar] [CrossRef]
  23. Panimalar, K.; Kanmani, S. Energy efficient cluster head selection using improved Sparrow Search Algorithm in Wireless Sensor Networks. J. King Saud Univ. Comput. Inf. Sci. 2021, 34, 8564–8575. [Google Scholar]
  24. Zhang, J.; Zheng, J.; Xie, X.; Lin, Z.; Li, H. Mayfly sparrow search hybrid algorithm for RFID Network Planning. IEEE Sens. J. 2022, 22, 16673–16686. [Google Scholar] [CrossRef]
  25. Yu, W.; Kang, H.; Sun, G.; Liang, S.; Li, J. Bio-Inspired Feature Selection in Brain Disease Detection via an Improved Sparrow Search Algorithm. IEEE Trans. Instrum. Meas. 2023, 72, 1–15. [Google Scholar] [CrossRef]
  26. Qiu, S.; Li, A. Application of Chaos Mutation Adaptive Sparrow Search Algorithm in Edge Data Compression. Sensors 2022, 22, 5425. [Google Scholar] [CrossRef] [PubMed]
  27. Wu, R.; Huang, H.; Wei, J.; Ma, C.; Zhu, Y.; Chen, Y.; Fan, Q. An improved sparrow search algorithm based on quantum computations and multi-strategy enhancement. Expert Syst. Appl. 2023, 215, 119421. [Google Scholar] [CrossRef]
  28. Ouyang, C.; Zhu, D.; Wang, F. A Learning Sparrow Search Algorithm. Comput. Intell. Neurosci. 2021, 2021, 3946958. [Google Scholar] [CrossRef] [PubMed]
  29. Wu, C.; Fu, X.; Pei, J.; Dong, Z. A novel sparrow search algorithm for the traveling salesman problem. IEEE Access 2021, 9, 153456–153471. [Google Scholar] [CrossRef]
  30. Meng, K.; Chen, C.; Xin, B. MSSSA: A multi-strategy enhanced sparrow search algorithm for global optimization. Front. Inf. Technol. Electron. Eng. 2022, 23, 1828–1847. [Google Scholar] [CrossRef]
  31. Zhang, H.; Zhang, Y. An Improved Sparrow Search Algorithm for Optimizing Support Vector Machines. IEEE Access 2023, 11, 8199–8206. [Google Scholar] [CrossRef]
  32. Gao, B.; Shen, W.; Guan, H.; Zheng, L.; Zhang, W. Research on Multistrategy Improved Evolutionary Sparrow Search Algorithm and its Application. IEEE Access 2022, 10, 62520–62534. [Google Scholar] [CrossRef]
  33. Ma, J.; Hao, Z.; Sun, W. Enhancing sparrow search algorithm via multi-strategies for continuous optimization problems. Inf. Process. Manag. 2022, 59, 102854. [Google Scholar] [CrossRef]
  34. Mallipeddi, R.; Mallipeddi, P.; Suganthan, N. Ensemble strategies with adaptive evolutionary programming. Inf. Sci. 2010, 180, 1571–1581. [Google Scholar] [CrossRef]
  35. Du, W.; Li, B. Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inf. Sci. 2008, 178, 3096–3109. [Google Scholar] [CrossRef]
  36. Wang, H.; Wu, Z.; Rahnamayan, S.; Sun, H.; Liu, Y.; Pan, J. Multi-strategy ensemble artificial bee colony algorithm. Inf. Sci. 2014, 279, 587–603. [Google Scholar] [CrossRef]
  37. Al-Sulttani, A.; Al-Mukhtar, M.; Roomi, A.; Farooque, A.; Khedher, K.; Yaseen, Z. Proposition of New Ensemble Data-Intelligence Models for Surface Water Quality Prediction. IEEE Access 2021, 9, 2169–3536. [Google Scholar] [CrossRef]
  38. Yan, G.; Yu, C.; Bai, Y. Wind turbine bearing temperature forecasting using a new data-driven ensemble approach. Machines 2021, 9, 248. [Google Scholar] [CrossRef]
  39. Afan, H.A.; Osman, A.I.A.; Essam, Y.; Ahmed, A.N.; Huang, Y.F.; Kisi, O.; Sherif, M.; Sefelnasr, A.; Chau, K.W.; El-Shafie, A. Modeling the fluctuations of groundwater level by employing ensemble deep learning techniques. Eng. Appl. Comput. Fluid Mech. 2021, 15, 1420–1439. [Google Scholar] [CrossRef]
  40. Zhou, Z.; Wu, J.; Wei, T. Ensembling neural networks: Many could be better than all. Artif. Intell. 2002, 137, 239–263. [Google Scholar] [CrossRef] [Green Version]
  41. Peng, H.; Xiao, W.; Han, Y.; Jiang, A.; Xu, Z.; Li, M.; Wu, Z. Multi-strategy firefly algorithm with selective ensemble for complex engineering optimization problems. Appl. Soft Comput. 2022, 120, 108634. [Google Scholar] [CrossRef]
  42. Ouyang, C.; Qiu, Y.; Zhu, D. Adaptive Spiral Flying Sparrow Search Algorithm. Sci. Program. 2021, 2021, 1–16. [Google Scholar] [CrossRef]
  43. Wolpert, D.; Macready, W. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  44. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559. [Google Scholar]
  45. Sadhu, A.; Konar, A.; Bhattacharjee, T.; Das, S. Synergism of Firefly Algorithm and Q-Learning for Robot Arm Path Planning. Swarm Evol. Comput. 2018, 43, 50–68. [Google Scholar] [CrossRef]
  46. Samuel, Y.; Daniel, G.; Sergio, T. A Dimensional Comparison between Evolutionary Algorithm and Deep Reinforcement Learning Methodologies for Autonomous Surface Vehicles with Water Quality Sensors. Sensors 2021, 21, 2862. [Google Scholar]
  47. Mallipeddi, R.; Suganthan, P.; Pan, Q.; Tasgetiren, M. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [Google Scholar] [CrossRef]
  48. Tang, A.; Zhou, H.; Han, T.; Xie, L. A Chaos Sparrow Search Algorithm with Logarithmic Spiral and Adaptive Step for Engineering Problems. CMES-Comput. Model. Eng. Sci. 2022, 130, 331–364. [Google Scholar] [CrossRef]
  49. Davut, I.; Serdar, E.; Erdal, E.; Murat, K. Augmented Hunger Games Search Algorithm Using Logarithmic Spiral Opposition-based Learning for Function Optimization and Controller Design. J. King Saud Univ. Eng. Sci. 2022. [Google Scholar] [CrossRef]
  50. Peng, H.; Zeng, Z.; Deng, C.; Wu, Z. Multi-strategy serial cuckoo search algorithm for global optimization. Knowl. Based Syst. 2021, 214, 106729. [Google Scholar] [CrossRef]
  51. Yang, Q.; Yan, J.; Gao, X.; Xu, D.; Lu, Z.; Zhang, J. Random neighbor elite guided differential evolution for global numerical optimization. Inf. Sci. 2022, 607, 1408–1438. [Google Scholar] [CrossRef]
  52. Funda, K. A novel improved chef-based optimization algorithm with Gaussian random walk-based diffusion process for global optimization and engineering problems. Math. Comput. Simul. 2023, 212, 195–223. [Google Scholar]
  53. Qian, W.; Chai, J.; Xu, Z.; Zhang, Z. Differential evolution algorithm with multiple mutation strategies based on roulette wheel selection. Appl. Intell. 2018, 48, 3612–3629. [Google Scholar] [CrossRef]
  54. Avinash, C.; Ankur, K.; Deep, S. Enhancing sentiment analysis using Roulette wheel selection based cuckoo search clustering method. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 1–29. [Google Scholar]
  55. Beşkirli, M. Solving continuous optimization problems using the tree seed algorithm developed with the roulette wheel strategy. Expert Syst. Appl. 2021, 170, 114579. [Google Scholar] [CrossRef]
  56. Hu, B.; Xiao, H.; Yang, N.; Jin, H.; Wang, L. A hybrid approach based on double roulette wheel selection and quadratic programming for cardinality constrained portfolio optimization. Concurr. Comput. Pract. Exp. 2021, 34, e6818. [Google Scholar] [CrossRef]
  57. Baykasoglu, A.; Ozsoydan, F. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  58. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar]
Figure 1. The logarithmic spiral shape.
Figure 2. The schematic diagram of saltation learning [50].
Figure 3. The neighborhood of sparrows.
Figure 4. The schematic diagram of neighborhood-guided learning.
Figure 5. The strategies selection framework of MSESSA.
Figure 6. The pie chart of strategies selection.
Figure 13. The schematic diagram of the tension/compression coil spring design problem [57].
Figure 14. The schematic diagram of the welded beam design problem [57].
Figure 15. The schematic diagram of the speed reducer design problem [57].
Figure 16. The schematic diagram of the pressure vessel design problem [57].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Wang, Z.; Wang, J.; Li, D.; Zhu, D. A Multi-Strategy Sparrow Search Algorithm with Selective Ensemble. Electronics 2023, 12, 2505. https://doi.org/10.3390/electronics12112505

