Article

Improved Snake Optimizer Using Sobol Sequential Nonlinear Factors and Different Learning Strategies and Its Applications

National Center for Materials Service Safety, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1708; https://doi.org/10.3390/math12111708
Submission received: 22 April 2024 / Revised: 25 May 2024 / Accepted: 28 May 2024 / Published: 30 May 2024
(This article belongs to the Special Issue Intelligence Optimization Algorithms and Applications)

Abstract
The Snake Optimizer (SO) is an advanced metaheuristic algorithm for solving complicated real-world optimization problems. Despite its advantages, however, the SO faces certain challenges, such as susceptibility to local optima and suboptimal convergence performance on discretized, high-dimensional, and multi-constraint problems. To address these problems, this paper presents an improved version of the SO, the Snake Optimizer using Sobol sequential nonlinear factors and different learning strategies (SNDSO). Firstly, using Sobol sequences to generate better-distributed initial populations helps the algorithm locate the global optimum faster. Secondly, nonlinear factors based on the inverse tangent function are used to control the exploration and exploitation phases, effectively improving the exploitation capability of the algorithm. Finally, learning strategies are introduced to improve population diversity and reduce the probability of the algorithm falling into the local optimum trap. The effectiveness of the proposed SNDSO in solving discretized, high-dimensional, and multi-constraint problems is validated through a series of experiments. The performance of the SNDSO on high-dimensional numerical optimization problems is first confirmed using the Congress on Evolutionary Computation (CEC) 2015 and CEC2017 test sets. Twelve feature selection problems are then used to evaluate the effectiveness of the SNDSO in discretized scenarios. Finally, five real-world technical multi-constraint optimization problems are employed to evaluate its performance in high-dimensional and multi-constraint domains. The experiments show that the SNDSO effectively overcomes the challenges of discretization, high dimensionality, and multiple constraints, and outperforms state-of-the-art algorithms.


1. Introduction

Almost all real-world engineering design problems can be converted into optimization problems and solved using metaheuristic techniques or more conventional mathematical optimization methods [1,2]. However, as society advances, engineering design problems increasingly involve high-dimensional and multimodal characteristics [1]. Many of today’s design problems cannot be solved with traditional mathematical optimization techniques, because those techniques require advanced mathematical modeling of the problem [3,4,5]. Metaheuristic techniques do not involve complex mathematical modeling; they are characterized by simplicity and can effectively deal with this challenge [6,7,8].
A metaheuristic is a randomized optimization method that models the behavior and phenomena of organisms in nature [6]. Within a finite number of iterations, the population is randomly initialized in the solution space and guided by the best individuals, gradually approaching the global optimal solution. A metaheuristic treats the optimization problem as a black box, so the researcher does not need to perform complex mathematical modeling [8]; it offers a simple structure, easy implementation, and mechanisms for avoiding local optima. As engineering optimization problems have become increasingly complex in recent years, a large number of metaheuristic algorithms have been proposed to address this challenge. Most researchers classify these algorithms into four categories: evolutionary algorithms, chemical and physical algorithms, human algorithms, and swarm intelligence algorithms [9].
Evolutionary algorithms are population-based metaheuristics inspired by genetic rules and built from operations such as selection, crossover, and mutation. Typical examples include Differential Evolution (DE) [10], the Genetic Algorithm (GA) [11], and Evolution Strategies (ESs) [12]. Chemical and physical algorithms are metaheuristics that model real chemical reactions and physical phenomena. Common examples include Simulated Annealing (SA) [13], the Multi-Verse Optimizer (MVO) [14], and the Big Bang–Big Crunch algorithm (BB-BC) [15]. Human algorithms are metaheuristics inspired by human thinking or social behavior, for example the Teaching–Learning Based Optimization (TLBO) algorithm [16], Search And Rescue (SAR) optimization [17], and the Arithmetic Optimization Algorithm (AOA) [18]. Swarm intelligence algorithms are metaheuristics that model the collective behavior of groups of organisms in nature. Common examples include the Grey Wolf Optimizer (GWO) [19], the Competitive Swarm Optimizer (CSO) [20], Ant Colony Optimization (ACO) [21], the Aquila Optimizer (AO) [22], the Dung Beetle Optimizer (DBO) [23], Harris Hawks Optimization (HHO) [24], and the Snake Optimizer (SO) [9]. Among them, the SO is widely known for its novelty and efficiency.
The SO is a swarm intelligence-based metaheuristic algorithm, introduced by Fatma A. Hashim in 2022, that models the mating behavior of snakes. The snake population is divided into two subpopulations based on gender, and eight position update methods are designed according to the external environment (food and temperature) so that problems are solved efficiently. The optimizer’s convergence performance was evaluated against advanced metaheuristic algorithms on the Congress on Evolutionary Computation (CEC) 2017 [25] and CEC2020 [26] function test sets, and it was applied to four well-known engineering optimization problems to test its ability to solve realistic engineering optimization problems. The experimental results show that it has good convergence performance.
Although the SO has achieved good results in solving optimization problems, it may still suffer from poor convergence performance and fall into local optima when dealing with discretized and multi-constraint optimization problems. Furthermore, by the No Free Lunch (NFL) theorem [27], no single algorithm can solve all problems. These facts motivate us to improve the SO so as to alleviate its poor convergence performance and its tendency to fall into local optima.
This paper uses Sobol sequence initialization, a nonlinear factor, and learning strategies to propose an augmented SO, called the Snake Optimizer using Sobol sequential nonlinear factors and different learning strategies (SNDSO). The proposed SNDSO effectively improves the convergence performance of the algorithm, its ability to escape local optimum traps, and its stability. The main contributions of this paper are as follows:
  • Using Sobol sequences to generate initialized populations with better distributions is useful to better locate the global optimal solution and avoid falling into local optimal traps;
  • Introducing nonlinear factors with inverse tangent properties effectively regulates the algorithm’s exploration and exploitation phases, thereby enhancing the capability for algorithm exploitation;
  • Using learning strategies improves population diversity and reduces the probability of the algorithm falling into a local optimum trap;
  • Using the SNDSO to solve feature selection (FS) problems confirms its advantages in solving discretized optimization problems;
  • The performance of the SNDSO in tackling real-world engineering optimization problems is tested on five realistic engineering optimization problems with multiple constraints.
The rest of this paper is organized as follows: the mathematical model of the SO is presented in Section 2. The implementation details of the SNDSO are described in Section 3. Section 4 first tests the performance of the SNDSO on high-dimensional numerical optimization problems using the CEC2015 [28] and CEC2017 function test sets; it then tests the performance of the SNDSO on high-dimensional discretized problems using FS problems; finally, it tests the performance of the SNDSO on real engineering optimization problems using five multi-constraint realistic engineering optimization problems. Conclusions and future research plans are given in Section 5. To facilitate the presentation of subsequent algorithms, the definitions of the variables, constants, and operators used in this paper are given in the Nomenclature.

2. Snake Optimizer

This section introduces the mathematical model and implementation details of the SO. The SO is a metaheuristic algorithm developed by simulating the mating behavior of snakes. The population is divided into two subpopulations based on gender, and different position update methods are applied depending on the temperature and food conditions, which gives the algorithm its efficiency. The mathematical model and implementation details of the SO are described below.

2.1. Initialization

Initialization places multiple individuals at random in the solution space of the problem so that subsequent iterative updates can be performed. Equation (1) represents the initialization process.
$X_i = X_{min} + r \times (X_{max} - X_{min})$ (1)
where $X_i$ denotes the position of the $i$-th individual, $X_{max}$ and $X_{min}$ denote the upper and lower boundaries of the problem solution space, respectively, and $r$ denotes a random number in the interval [0, 1].
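For illustration, a minimal NumPy sketch of the initialization in Equation (1) follows; the function name and the array-based bounds are assumptions made for this example, not part of the original SO description.

```python
import numpy as np

def init_population(n, dim, x_min, x_max, rng=None):
    """Random initialization per Equation (1): X_i = X_min + r * (X_max - X_min)."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random((n, dim))            # r ~ U[0, 1], drawn per individual and per dimension
    return x_min + r * (x_max - x_min)  # x_min/x_max may be scalars or per-dimension arrays
```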

2.2. Clusters

The population is divided equally into two groups, females and males; Equations (2) and (3) represent this process.
$N_m = N / 2$ (2)
$N_f = N - N_m$ (3)
where $N$ denotes the number of individuals in the population, $N_m$ the number of male individuals, and $N_f$ the number of female individuals.

2.3. Parameter Definition

The fitness value of the optimal male individual ($f_{best,m}$), the fitness value of the optimal female individual ($f_{best,f}$), and the fitness value at the food location ($f_{food}$) are determined. The temperature $Temp$ is expressed using Equation (4), and the food quantity $Q$ using Equation (5).
$Temp = \exp(-t / T)$ (4)
$Q = c_1 \times \exp((t - T) / T)$ (5)
where $t$ denotes the current iteration number and $T$ denotes the maximum number of iterations.
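As a sketch, the two control parameters can be computed as follows, assuming the constant $c_1 = 0.5$ discussed later in Section 4.1; the function names are illustrative.

```python
import numpy as np

def temperature(t, T):
    """Equation (4): Temp = exp(-t / T) decays from ~1 toward exp(-1) over the run."""
    return np.exp(-t / T)

def food_quantity(t, T, c1=0.5):
    """Equation (5): Q = c1 * exp((t - T) / T) grows toward c1 as t approaches T."""
    return c1 * np.exp((t - T) / T)
```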

2.4. Exploration Phase (No Food)

If $Q < Threshold$ ($Threshold = 0.25$), there is no food, and each individual searches for food by selecting a random individual and updating its own position relative to it, so that the solution space is fully explored. For male individuals this process is represented by Equation (6); for female individuals, by Equation (8).
$X_{i,m}(t+1) = X_{rand,m}(t) \pm c_2 \times A_m \times ((X_{max} - X_{min}) \times rand + X_{min})$ (6)
where $X_{i,m}$ denotes the position of the $i$-th male, $X_{rand,m}$ denotes a random individual in the male population, $rand$ denotes a random number in the interval [0, 1], and $A_m$ denotes the male's ability to search for food, expressed using Equation (7).
$A_m = \exp(-f_{rand,m} / f_{i,m})$ (7)
where $f_{rand,m}$ denotes the fitness value of the random male individual and $f_{i,m}$ denotes the fitness value of the $i$-th male individual.
$X_{i,f}(t+1) = X_{rand,f}(t) \pm c_2 \times A_f \times ((X_{max} - X_{min}) \times rand + X_{min})$ (8)
where $X_{i,f}$ denotes the position of the $i$-th female individual and $X_{rand,f}$ denotes a random individual in the female population. $A_f$ denotes the female's ability to find food, expressed using Equation (9).
$A_f = \exp(-f_{rand,f} / f_{i,f})$ (9)
where $f_{rand,f}$ denotes the fitness value of the random female individual and $f_{i,f}$ denotes the fitness value of the $i$-th female individual.
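A sketch of the male exploration update (Equations (6) and (7)) is given below; the small epsilon guarding the division and the assumption of positive fitness values are illustrative choices, not part of the original formulation. The female update (Equations (8) and (9)) is symmetric.

```python
import numpy as np

def explore_males(pop_m, fit_m, x_min, x_max, c2=0.05, rng=None):
    """Exploration update for males per Equations (6) and (7)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop_m.shape
    new_pop = np.empty_like(pop_m)
    for i in range(n):
        j = rng.integers(n)                           # index of a random male
        a_m = np.exp(-fit_m[j] / (fit_m[i] + 1e-12))  # Equation (7); eps avoids division by zero
        sign = rng.choice((-1.0, 1.0))                # the +/- in Equation (6)
        new_pop[i] = pop_m[j] + sign * c2 * a_m * ((x_max - x_min) * rng.random(dim) + x_min)
    return new_pop
```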

2.5. Exploitation Phase (Food Exist)

If $Q > Threshold$ ($Threshold = 0.25$), food is present and the exploitation phase is carried out. In this phase, if $Temp > Threshold$ ($Threshold = 0.6$), the snake is in a warm environment, and its position is updated using Equation (10).
$X_i(t+1) = X_{food} \pm c_3 \times Temp \times rand \times (X_{food} - X_i(t))$ (10)
where $X_i$ denotes the position of an individual in the population and $X_{food}$ denotes the position of the optimal individual.
If $Temp < Threshold$ ($Threshold = 0.6$), the snake is in a cold environment and enters either fight mode or mating mode. In fight mode, the position of a male individual is updated using Equation (11) and that of a female individual using Equation (12). In mating mode, the position of a male is updated using Equation (15) and that of a female using Equation (16).
$X_{i,m}(t+1) = X_{i,m}(t) + c_3 \times FM \times rand \times (Q \times X_{best,f} - X_{i,m}(t))$ (11)
where $X_{best,f}$ denotes the optimal female individual and $FM$ denotes the fighting ability of the male individual.
$X_{i,f}(t+1) = X_{i,f}(t) + c_3 \times FF \times rand \times (Q \times X_{best,m} - X_{i,f}(t))$ (12)
where $X_{best,m}$ denotes the optimal male individual and $FF$ denotes the fighting ability of the female individual. $FM$ and $FF$ are expressed using Equations (13) and (14), respectively.
$FM = \exp(-f_{best,f} / f_i)$ (13)
$FF = \exp(-f_{best,m} / f_i)$ (14)
where $f_{best,f}$ denotes the optimal fitness value among females, $f_{best,m}$ denotes the optimal fitness value among males, and $f_i$ denotes the fitness value of the $i$-th individual.
$X_{i,m}(t+1) = X_{i,m}(t) + c_3 \times M_m \times rand \times (Q \times X_{i,f}(t) - X_{i,m}(t))$ (15)
$X_{i,f}(t+1) = X_{i,f}(t) + c_3 \times M_f \times rand \times (Q \times X_{i,m}(t) - X_{i,f}(t))$ (16)
where $M_m$ and $M_f$ denote the reproductive capacity of male and female individuals, respectively, expressed using Equations (17) and (18).
$M_m = \exp(-f_{i,f} / f_{i,m})$ (17)
$M_f = \exp(-f_{i,m} / f_{i,f})$ (18)
If the eggs hatch, the worst male and the worst female individuals are selected and replaced using Equations (19) and (20).
$X_{worst,m} = X_{min} + rand \times (X_{max} - X_{min})$ (19)
$X_{worst,f} = X_{min} + rand \times (X_{max} - X_{min})$ (20)
where $X_{worst,m}$ and $X_{worst,f}$ denote the worst male and the worst female individuals, respectively.
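The fight-mode updates (Equations (11)-(14)) can be sketched as follows; treating $rand$ as a per-dimension draw and adding a small epsilon in the divisions are assumptions made for this illustration.

```python
import numpy as np

def fight_mode(pop_m, pop_f, fit_m, fit_f, x_best_m, x_best_f,
               f_best_m, f_best_f, q, c3=2.0, rng=None):
    """Fight-mode position updates per Equations (11)-(14)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = 1e-12
    fm = np.exp(-f_best_f / (fit_m + eps))[:, None]  # Equation (13): one factor per male
    ff = np.exp(-f_best_m / (fit_f + eps))[:, None]  # Equation (14): one factor per female
    new_m = pop_m + c3 * fm * rng.random(pop_m.shape) * (q * x_best_f - pop_m)  # Equation (11)
    new_f = pop_f + c3 * ff * rng.random(pop_f.shape) * (q * x_best_m - pop_f)  # Equation (12)
    return new_m, new_f
```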
The pseudo-code for the SO is shown in Algorithm 1. The main execution steps of the SO are as follows:
Step 1: Initialize the problem parameters, including the problem dimension ($Dim$), the upper and lower bounds of the solution space ($UB$ and $LB$), the population size ($N$), the maximum number of iterations ($T$), and the current iteration number ($t$). $Dim$, $UB$, and $LB$ are problem parameters whose settings follow the definition of each test problem; to ensure fairness, the initial $N$ is set with reference to the original SO [9], and $T$ is user-defined. Initialize the population according to Equation (1).
Step 2: Use Equations (2) and (3) to divide the population into two subpopulations.
Step 3: If $t \le T$, perform Step 4; otherwise, jump to Step 7.
Step 4: Find the optimal male and female individuals and define the temperature $Temp$ and food quantity $Q$ using Equations (4) and (5), respectively.
Step 5: If $Q < 0.25$, use Equations (6) and (8) to update the positions of male and female individuals, respectively, and jump to Step 3. If $Q > 0.25$, perform Step 6.
Step 6: If $Temp > 0.6$, use Equation (10) to update the positions of individuals. If $Temp \le 0.6$ and $rand > 0.6$, update the positions of male and female individuals using Equations (11) and (12), respectively. If $Temp \le 0.6$ and $rand \le 0.6$, first use Equations (15) and (16) to update the positions of male and female individuals, respectively; then, if $egg$ equals 1, use Equations (19) and (20) to replace the worst male and worst female individuals, respectively. Jump to Step 3.
Step 7: Return the optimal individual.
Algorithm 1. Snake Optimizer.
1: Initialize problem settings (including $Dim$, $UB$, $LB$, $N$, $T$, $t$).
2: Initialize the population using Equation (1).
3: Divide the population into two equal groups, $N_m$ and $N_f$, using Equations (2) and (3).
4: while ($t \le T$) do
5:   Evaluate each group $N_m$ and $N_f$.
6:   Find the best male $f_{best,m}$ and the best female $f_{best,f}$.
7:   Define temperature $Temp$ using Equation (4) and food quantity $Q$ using Equation (5).
8:   if ($Q < 0.25$) then initiate the exploration phase:
9:     Perform exploration using Equations (6) and (8).
10:  else initiate the exploitation phase:
11:    if ($Temp > 0.6$) then
12:      Perform exploitation using Equation (10).
13:    else
14:      if ($rand > 0.6$) then
15:        Perform fight mode using Equations (11) and (12).
16:      else
17:        Perform mating mode using Equations (15) and (16).
18:        Randomly select $egg$ from the set {0, 1}.
19:        if ($egg == 1$) then
20:          Update the worst male and worst female using Equations (19) and (20).
21:        end if
22:      end if
23:    end if
24:  end if
25: end while
26: return the optimal solution.

3. Proposed SNDSO

The original SO has the advantages of fast convergence and a simple structure; however, when coping with complex real optimization problems, it may suffer from poor convergence performance and easily fall into local optimum traps. To alleviate these problems, this section integrates multiple strategies into the SO to enhance its performance.

3.1. Sobol Sequences

A high-quality initial population ensures population diversity, helps the algorithm avoid local optimum traps, and speeds up locating the globally optimal solution. The literature [29] points out that the Sobol sequence is a low-discrepancy sequence with the advantage of uniform distribution; accordingly, initializing the population with a Sobol sequence, as in [30], improves the initial diversity of the population and in turn the search performance of the algorithm. Inspired by this, in this section we generate initial populations using Sobol sequences. The Sobol sequence is generated as follows:
Assume $m_i$ is a positive odd number less than $2^i$, and consider:
$v_i = m_i / 2^i$ (21)
where $v_i$ is generated by the following primitive polynomial:
$f(z) = z^p + c_1 z^{p-1} + \cdots + c_{p-1} z + c_p$ (22)
For $i > p$, $v_i$ is expressed as follows:
$v_i = b_1 v_{i-1} \oplus b_2 v_{i-2} \oplus \cdots \oplus b_p v_{i-p} \oplus \lfloor v_{i-p} / 2^p \rfloor$ (23)
where $\oplus$ denotes the XOR operation in binary; the equivalent formula for $m_i$ is:
$m_i = 2 b_1 m_{i-1} \oplus 2^2 b_2 m_{i-2} \oplus \cdots \oplus 2^p b_p m_{i-p} \oplus m_{i-p}$ (24)
Any integer $n$ can be uniquely represented in base 2 as follows:
$n = \sum_{j=1}^{k} a_j 2^{j-1}$ (25)
where each $a_j$ is 0 or 1; the $n$-th element $f_z$ of the Sobol sequence is then generated by:
$f_z = a_1 v_1 \oplus a_2 v_2 \oplus \cdots \oplus a_k v_k$ (26)
With a population size of 500, the distribution of the Sobol sequence is shown in Figure 1a and that of a pseudo-random sequence in Figure 1b; the Sobol sequence is more uniformly distributed in space, yielding better diversity in the initial population. The population is initialized using the following formula:
$X_i = X_{min} + f_z \times (X_{max} - X_{min})$ (27)
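In practice, the recurrence above need not be implemented by hand; the sketch below, an assumption of this text rather than the paper's own implementation, uses SciPy's built-in Sobol generator to realize Equation (27).

```python
import numpy as np
from scipy.stats import qmc  # quasi-Monte Carlo samplers, available since SciPy 1.7

def sobol_init(n, dim, x_min, x_max, seed=None):
    """Sobol-sequence initialization per Equation (27)."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    f_z = sampler.random(n)               # n low-discrepancy points in [0, 1)^dim
    return x_min + f_z * (x_max - x_min)  # scale into the solution space
```

Note that SciPy warns when n is not a power of two, since the balance properties of Sobol points hold for power-of-two sample sizes; the points are still generated.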

3.2. Nonlinear Factor

Almost all population-based metaheuristic algorithms contain two phases: exploration and exploitation. The exploration phase ensures that the algorithm searches a larger region, reducing the risk of falling into a local optimum trap. The exploitation phase focuses on the most promising region to improve the convergence accuracy and speed of the algorithm. If these two phases are not well balanced, the iteration process may fall into local optimum traps and lose convergence speed and accuracy.
In the original SO, using the food quantity $Q$ to control the exploration and exploitation phases makes the algorithm strong in exploration but weak in exploitation, which harms its convergence speed and accuracy. The literature [31] has shown that nonlinear factors with an inverse tangent property can strike a good balance between the exploitation and exploration phases and thereby enhance convergence. Inspired by this, this section proposes a nonlinear factor with an inverse tangent property to replace the original food quantity $Q$ in controlling the exploration and exploitation phases; it is expressed in Equation (28).
$Q = d_1 \times (start + end \times \tanh(0.75 \pi t / T))$ (28)
where $d_1 = 0.5$, $start = 0.3$, $end = 0.7$, $t$ denotes the current iteration number, and $T$ denotes the maximum number of iterations.
Figure 2 shows the curves of the original food quantity $Q$ and the improved food quantity $Q$. Relative to the original $Q$, the improved $Q$ grows faster early in the iteration process and more slowly later. This indicates that, compared with the original control factor, the improved factor gives the algorithm a stronger exploitation capability while maintaining some exploration capability, improving both the convergence accuracy and the convergence speed of the algorithm.
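The shapes of the two curves in Figure 2 can be reproduced with a few lines; the iteration budget below is an arbitrary illustrative value.

```python
import numpy as np
import matplotlib.pyplot as plt

T = 500                                                       # illustrative maximum iterations
t = np.arange(1, T + 1)
q_orig = 0.5 * np.exp((t - T) / T)                            # Equation (5)
q_new = 0.5 * (0.3 + 0.7 * np.tanh(0.75 * np.pi * t / T))     # Equation (28)

plt.plot(t, q_orig, label="original Q, Eq. (5)")
plt.plot(t, q_new, label="improved Q, Eq. (28)")
plt.axhline(0.25, linestyle="--", color="gray", label="Threshold = 0.25")
plt.xlabel("iteration t"); plt.ylabel("food quantity Q"); plt.legend(); plt.show()
```

The improved curve crosses the 0.25 threshold much earlier (around 0.13T versus about 0.31T for the original), which is exactly the earlier switch into exploitation described above.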

3.3. Learning Strategy

To address the algorithm's tendency to fall into local optimum traps, its lack of population diversity, and its slow convergence during iteration, this section introduces learning strategies. On the one hand, an individual is enriched by learning from the information differences between other individuals in the population. We mainly consider the information gap between the $i$-th individual and a random individual, and the information gap between two random individuals, expressed using Equations (29) and (30), respectively.
$Gap_{rand1/i} = X_{rand1} - X_i$ (29)
$Gap_{rand2/rand3} = X_{rand2} - X_{rand3}$ (30)
where $Gap_{rand1/i}$ denotes the information gap between the $i$-th individual and a random individual, $Gap_{rand2/rand3}$ denotes the information gap between two random individuals, $X_{rand1}$, $X_{rand2}$, and $X_{rand3}$ denote three different random individuals in the population, and $X_i$ denotes the $i$-th individual.
We focus on the cases where the optimal individual or the $i$-th individual falls into a local optimum trap, learning through Equation (31) and Equation (32), respectively.
$X_{Learn} = X_{best} + rand \times Gap_{rand1/i} + rand \times Gap_{rand2/rand3}$ (31)
$X_{Learn} = X_i + 0.5 \times Gap_{rand1/i} + 0.5 \times Gap_{rand2/rand3}$ (32)
where $X_{Learn}$ denotes the individual generated through learning.
On the other hand, individuals enrich their information by self-learning within a radius $R$; the learning process is expressed in Equation (33).
$X_{Learn} = X_i + (-R + 2R \times rand) \times X_i$ (33)
where the learning radius $R$ is expressed in Equation (34).
$R = 0.02 \times (1 - t / T)$ (34)
Considering the above two aspects together, the learning strategy is expressed in Equation (35). The literature [32] points out that individuals can effectively enhance population diversity and improve the ability to jump out of local optimum traps by learning from other individuals in the population; Equations (31) and (32) are mainly inspired by this idea. Likewise, the literature [33] points out that individuals can effectively enhance an algorithm's global search performance through self-learning within a certain range; Equation (33) is mainly inspired by this idea.
Figure 3 visualizes the learning strategy: the green circles indicate the positions of individuals after updating through the learning strategy, the purple circles indicate the searching members of the population, and the green lines represent the process of individual $X_i$ being updated through Equations (31)-(33). As the figure shows, updating positions in these three ways improves the diversity of the population and enhances the global search capability.
$X_{Learn} = \begin{cases} \text{learning through Eq. (33)}, & \text{if } rand < 0.5 \\ \text{learning through Eq. (31)}, & \text{otherwise, if } rand' < 0.5 \\ \text{learning through Eq. (32)}, & \text{otherwise} \end{cases}$ (35)
where $rand$ and $rand'$ are independent random numbers in [0, 1].
Individuals are retained using an elite retention strategy, which is expressed mathematically as follows:
$X_i = \begin{cases} X_{Learn}, & \text{if } f_{Learn} < f_i \\ X_i, & \text{otherwise} \end{cases}$ (36)
where $f_{Learn}$ denotes the fitness value of $X_{Learn}$.
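A sketch of the full learning strategy (Equations (29)-(36)) follows; treating $rand$ as a per-dimension draw, assuming minimization, and passing the objective function explicitly are illustrative choices, not details fixed by the text.

```python
import numpy as np

def learning_update(pop, fit, x_best, t, T, obj, rng=None):
    """Learning strategy with elite retention per Equations (29)-(36)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice(n, size=3, replace=False)  # three distinct random members
        gap_1i = pop[r1] - pop[i]                          # Equation (29)
        gap_23 = pop[r2] - pop[r3]                         # Equation (30)
        if rng.random() < 0.5:                             # outer branch of Equation (35)
            radius = 0.02 * (1 - t / T)                    # Equation (34)
            x_learn = pop[i] + (-radius + 2 * radius * rng.random(dim)) * pop[i]  # Eq. (33)
        elif rng.random() < 0.5:                           # inner branch of Equation (35)
            x_learn = x_best + rng.random(dim) * gap_1i + rng.random(dim) * gap_23  # Eq. (31)
        else:
            x_learn = pop[i] + 0.5 * gap_1i + 0.5 * gap_23  # Equation (32)
        f_learn = obj(x_learn)
        if f_learn < fit[i]:                                # Equation (36): elite retention
            pop[i], fit[i] = x_learn, f_learn
    return pop, fit
```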

3.4. SNDSO Implementation

In this section, details of the implementation of the SNDSO will be presented. The flowchart of the SNDSO is shown in Figure 4 and the pseudo-code of the SNDSO is shown in Algorithm 2. The main execution steps of the SNDSO are as follows:
Step 1: Initialize $Dim$, $UB$, $LB$, $N$, $T$, and $t$, and initialize the population according to Equation (27).
Step 2: Use Equations (2) and (3) to divide the population into two subpopulations.
Step 3: If $t \le T$, perform Step 4; otherwise, jump to Step 7.
Step 4: Find the optimal male and female individuals and define the temperature $Temp$ and food quantity $Q$ using Equations (4) and (28), respectively.
Step 5: If $Q < 0.25$ and $rand < 0.5$, use Equations (6) and (8) to update the positions of male and female individuals, respectively. If $Q < 0.25$ and $rand \ge 0.5$, use Equations (35) and (36) to update the positions of individuals. Then jump to Step 3. If $Q > 0.25$, perform Step 6.
Step 6: If $Temp > 0.6$, use Equation (10) to update the positions of individuals. If $Temp \le 0.6$ and $rand > 0.6$, update the positions of male and female individuals using Equations (11) and (12), respectively. If $Temp \le 0.6$ and $rand \le 0.6$, first use Equations (15) and (16) to update the positions of male and female individuals, respectively; then, if $egg$ equals 1, use Equations (19) and (20) to replace the worst male and worst female individuals, respectively. Jump to Step 3.
Step 7: Return the optimal individual.
Algorithm 2. SNDSO.
1: Initialize problem settings (including $Dim$, $UB$, $LB$, $N$, $T$, $t$).
2: Initialize the population using Equation (27).
3: Divide the population into two equal groups, $N_m$ and $N_f$, using Equations (2) and (3).
4: while ($t \le T$) do
5:   Evaluate each group $N_m$ and $N_f$.
6:   Find the best male $f_{best,m}$ and the best female $f_{best,f}$.
7:   Define temperature $Temp$ using Equation (4) and food quantity $Q$ using Equation (28).
8:   if ($Q < 0.25$) then initiate the exploration phase:
9:     if ($rand < 0.5$) then
10:      Perform exploration using Equations (6) and (8).
11:    else
12:      Perform exploration using Equations (35) and (36).
13:    end if
14:  else initiate the exploitation phase:
15:    if ($Temp > 0.6$) then
16:      Perform exploitation using Equation (10).
17:    else
18:      if ($rand > 0.6$) then
19:        Perform fight mode using Equations (11) and (12).
20:      else
21:        Perform mating mode using Equations (15) and (16).
22:        Randomly select $egg$ from the set {0, 1}.
23:        if ($egg == 1$) then
24:          Update the worst male and worst female using Equations (19) and (20).
25:        end if
26:      end if
27:    end if
28:  end if
29: end while
30: return the optimal solution.

4. Experimental Results

In this section, the performance of the proposed SNDSO is evaluated through a series of experiments. Firstly, its performance on high-dimensional and multimodal numerical optimization problems is evaluated using the CEC2015 and CEC2017 test function sets; the multimodal problems contain multiple locally optimal solutions and therefore effectively test the algorithm's global search ability. Secondly, its performance on discretized optimization problems is evaluated using twelve FS problems. Finally, the SNDSO is applied to five multi-constraint, high-dimensional real-world engineering optimization problems. Throughout, the SNDSO is compared with fifteen state-of-the-art algorithms, whose information is shown in Table 1. Together, these experiments comprehensively evaluate the performance of the SNDSO on global optimization problems.
For fairness, all experiments were run on an AMD Ryzen 5 3600 six-core 3.60 GHz processor with 8 GB of RAM (UltraVision Semiconductor, Dallas, TX, USA). The operating system was Windows 11, and all code was implemented in MATLAB R2021b.

4.1. SO Parameter Test

In this section, we focus on validating the parameters {$c_1$, $c_2$, $c_3$} of the SO. In the literature [9], {$c_1$, $c_2$, $c_3$} is set to {0.5, 0.05, 2}, but no reason for this choice is given. Therefore, we determine {$c_1$, $c_2$, $c_3$} by testing parameter combinations: $c_1$ takes values in {0.3, 0.5, 0.7} (interval 0.2), $c_2$ in {0.03, 0.05, 0.07} (interval 0.02), and $c_3$ in {1, 2, 3} (interval 1). Combining these values yields 27 parameter combinations, which were compared on the CEC2015 test set to select the parameter values. The population size was 30, the maximum number of function evaluations was 100,000, and the test dimension was 30. Each experiment was run independently 30 times. The metric used for the analysis is the rank based on the mean value.
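The 27 combinations can be enumerated directly with a Cartesian product; a trivial sketch:

```python
import itertools

c1_vals, c2_vals, c3_vals = (0.3, 0.5, 0.7), (0.03, 0.05, 0.07), (1, 2, 3)
combinations = list(itertools.product(c1_vals, c2_vals, c3_vals))
assert len(combinations) == 27  # 3 x 3 x 3 parameter settings to be ranked on CEC2015
```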
The CEC2015 test set is shown in Table 2. As the table shows, it contains unimodal, simple multimodal, hybrid, and composition functions. The unimodal functions contain only one optimum and therefore test the algorithm's local search ability; the simple multimodal, hybrid, and composition functions contain multiple local optima and mainly test the algorithm's global search ability. The last row of the table lists the upper and lower bounds of the test problems, and the last column lists the theoretical optimum of each test function, which is used in the subsequent analysis of the experimental results.
The mean-based rank-filled plot is shown in Figure 5, where the Y-axis represents the test function number and the X-axis represents the 27 parameter combinations; the fill color changes gradually from red to dark blue as the rank increases. The color for the combination {0.5, 0.05, 2} is more reddish overall, indicating better overall performance than the other combinations. Specifically, {0.5, 0.05, 2} has an average rank of 3.60 over the 15 test functions, followed by {0.3, 0.05, 2} with an average rank of 7.33; this clear margin confirms that the solution performance of {0.5, 0.05, 2} is ahead of the other parameter combinations. Moreover, {0.5, 0.05, 2} ranks first on seven of the test functions, a higher winning rate than any other combination. In summary, choosing the combination {0.5, 0.05, 2} is the most reasonable option, and it is used as the parameter setting in the subsequent experiments.

4.2. Running Parameter Test

In this section, we focus on the population size and the maximum number of function evaluations used when the algorithm runs. According to the literature [44], when the maximum number of function evaluations is on the order of $10^5$, an algorithm achieves a smooth convergence result; we therefore provisionally set the maximum number of function evaluations to 100,000 and then analyze the convergence characteristics under this budget to confirm the value. Meanwhile, since the population size is an important factor affecting convergence performance, it is set to 10, 20, 30, 60, and 120 to test the convergence performance under different population sizes. The experiments are conducted on the CEC2015 test set, and each experiment is run independently 30 times to eliminate chance. The evaluation metrics include the mean, the mean-based ranking, and the Wilcoxon rank sum test, where '+' indicates that the current algorithm is significantly better than the algorithm in the last column, '-' indicates that it is significantly weaker, and '=' indicates no significant difference. The experimental results are shown in Table 3, where bold numbers indicate minimum values.
As Table 3 shows, with a population size of 10, performance is weaker than with a population size of 30 on 86.6% of the test functions, and the Wilcoxon rank sum test shows it is significantly weaker on 13 of them. With a population size of 20, performance is weaker than with a population size of 30 on 80% of the test functions, and significantly weaker on 10 of them. The mean ranking shows that a population size of 10 performs worse than a population size of 20, mainly because too small a population gives the algorithm poor population diversity, making it prone to local optimum traps and reducing convergence performance. With a population size of 60, performance is weaker than with a population size of 30 on 86.6% of the test functions, and significantly weaker on 12 of them; with a population size of 120, performance is likewise weaker on 86.6% of the test functions and significantly weaker on 12 of them. This indicates that increasing the population size helps widen the search range of the algorithm, but it consumes a large number of function evaluations, wasting computational resources unnecessarily; it can also increase the disorder within the population, so that the convergence speed and accuracy of the algorithm are not guaranteed. Based on this analysis, a population size of 30 is the most reasonable choice and yields better solution performance, so the population size is set to 30 in the subsequent experiments.
Meanwhile, Figure 6 shows the convergence curves of the SNDSO under different population sizes with a maximum of 100,000 function evaluations, covering both unimodal and multimodal functions. The figure shows that the algorithm reaches a stable convergence state within about 100,000 function evaluations under all population settings. This confirms that setting the maximum number of function evaluations to 100,000 allows the algorithm to converge, which lets us analyze the experimental results correctly and effectively. To avoid unnecessary waste of computational resources, we do not increase this budget further and use 100,000 function evaluations as the experimental condition in the subsequent experiments.

4.3. Effectiveness of Different Strategies

The SNDSO was proposed by integrating the Sobol sequence initialization strategy, the nonlinear factor, and the learning strategy into the SO. This subsection verifies the effectiveness of each strategy. Integrating the Sobol sequence initialization strategy into the SO forms the SSO, integrating the nonlinear factor forms the NSO, and integrating the learning strategy forms the DSO. The SO, SSO, NSO, and DSO are tested on the CEC2015 test set, and the algorithms are ranked using the Friedman mean rank test to check the effectiveness of each strategy. The test dimension is 30, the maximum number of function evaluations is 100,000, and the population size is 30. Each experiment was run independently 30 times.
The results of the Friedman mean rank test are shown in Table 4, where smaller values indicate better algorithm performance. The SSO, NSO, and DSO outperform the original SO on test functions F1 and F2, which indicates that the Sobol sequence initialization strategy, the nonlinear factor, and the learning strategy all contribute to enhancing the local search ability of the SO. The SSO outperforms the SO on 77% of the multimodal functions, while the NSO and DSO outperform the SO on 85% of them and are weaker on only a few; taken together, all three strategies enhance the global search ability of the SO. Moreover, the average ranking of the SNDSO is lower than that of the SO, SSO, NSO, and DSO, showing that integrating the three strategies into the SO simultaneously improves the algorithm's performance even further. In summary, each of the three strategies introduced in this paper contributes to performance enhancement, and integrating them simultaneously enhances the algorithm most effectively.

4.4. Results on CEC2017

In this section, we evaluate the performance of the SNDSO in solving numerical optimization problems by comparing it with seven state-of-the-art algorithms (SO, DBO, SSA, TLBO, FSTDE, AHA, and WOA) on the CEC2017 test function set. The comparison covers three perspectives: novel algorithms, highly cited algorithms, and improved algorithms. The SO, DBO, and AHA are novel algorithms proposed after 2022 and represent the most recent advances in the field of swarm intelligence. The SSA, TLBO, and WOA have each accumulated more than 20,000 citations to date, which demonstrates the strong solution performance and robustness of these algorithms. Finally, the FSTDE is introduced as a high-performance improved differential evolution algorithm. Comparing against these algorithms allows the performance of the SNDSO to be evaluated comprehensively. The CEC2017 test function set is shown in Table 5; like CEC2015, it contains unimodal, simple multimodal, hybrid, and composition functions, so experiments on this set comprehensively evaluate the algorithm's overall ability to find optimal solutions. The last row of the table lists the upper and lower bounds of the test problems, and the last column lists the theoretical optimum of each problem. The test dimensions were 30, 50, and 100, the population size was 30, and the maximum number of function evaluations was 100,000. Running each experiment independently many times and taking the mean effectively avoids chance effects, so each experiment was run independently 30 times. The performance of the SNDSO is evaluated from all angles using population diversity analysis, exploration and exploitation analysis, numerical analysis, convergence and stability analysis, nonparametric test analysis, and runtime analysis.

4.4.1. Population Diversity Analysis

Population diversity directly affects the performance of the algorithm; if it is too low, the algorithm often stagnates locally. In this section, the population diversity of the SO and the SNDSO is tested using the CEC2017 test function set with a test dimension of 30, a population size of 30, and a maximum of 15,000 function evaluations. Since each iteration performs 30 function evaluations, the maximum number of iterations is 15,000/30 = 500. The experimental results are shown in Figure 7. Firstly, the SNDSO possesses higher population diversity at the beginning of the iteration process, because it uses Sobol sequence initialization to produce a more evenly distributed initial population. Secondly, because the learning strategy introduced in the SNDSO improves population diversity, the diversity of the SNDSO during the iterative process remains higher than that of the SO. Higher population diversity reduces the risk of falling into local optimum traps and improves the performance of the algorithm.
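The section does not state the diversity metric behind Figure 7; one common choice, given here purely as an illustrative assumption, is the mean absolute deviation of the population from its centroid.

```python
import numpy as np

def population_diversity(pop):
    """Mean absolute deviation from the population centroid, averaged over dimensions."""
    centroid = pop.mean(axis=0)             # component-wise population mean
    return np.mean(np.abs(pop - centroid))  # larger values mean a more spread-out population
```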

4.4.2. Exploration and Exploitation Analysis

Exploration and exploitation are the two most important phases of metaheuristic algorithms, and performance can be improved by controlling them in a reasonable way. In this section, the exploration/exploitation behavior of the SNDSO is analyzed using the CEC2017 test function set with a test dimension of 30, a population size of 30, and a maximum of 15,000 function evaluations; since each iteration performs 30 function evaluations, the maximum number of iterations is 15,000/30 = 500. The experimental results are shown in Figure 8. Because the SO has a strong exploration capability but a weak exploitation capability, the nonlinear factor is introduced into the SNDSO to control the exploration/exploitation phases, improving the exploitation capability and enhancing convergence accuracy and speed. As Figure 8 shows, the algorithm explores at a high percentage at the beginning of the iteration process, ensuring that the solution space is fully explored; subsequently, it exploits at a high percentage, accelerating convergence. Balancing the exploration/exploitation phases in this way effectively compensates for the SO's insufficient exploitation capability and improves the convergence speed and accuracy of the algorithm.

4.4.3. Numerical Analysis

In this subsection, the SNDSO is experimentally compared with seven state-of-the-art algorithms (SO, DBO, SSA, TLBO, FSTDE, AHA, WOA) on the CEC2017 test function set with test dimensions of 30, 50, and 100. Table 6, Table 7 and Table 8 present the results; the evaluation metrics include the mean, the standard deviation, and the solution accuracy. The average rank of each algorithm was calculated, and the solution accuracy is expressed by the Function Error Value (FEV), defined as follows:
$FEV = f(X_{best}) - f(X_{opt})$ (37)
where $f(X_{best})$ denotes the best fitness value found by the algorithm and $f(X_{opt})$ denotes the theoretical optimum listed in the last column of Table 5.
Table 6 shows the statistical results for a test dimension of 30. The "Rank First", "Mean Rank", and "Final Rank" rows at the bottom of the table indicate, respectively, the number of first ranks obtained by comparing the mean values, the mean rank of the algorithm based on the mean values over all test problems, and the final rank obtained by sorting the mean ranks. Table 6 shows that on the unimodal function F1, the SNDSO ranks first in both mean and FEV, indicating that the strategies introduced in this paper effectively enhance the exploitation capability of the algorithm. On the unimodal function F3, the SNDSO's performance is second only to the SSA, which has a better exploitation strategy; this also indicates that the exploitation phase of the SNDSO has some weaknesses on specific problems and can be further improved. On the multimodal test problems, the SNDSO ranks first in 63% of the cases, an encouraging result compared with the other algorithms. This is because the nonlinear factor provides a good balance between the exploration and exploitation phases, while the Sobol sequence initialization strategy and the learning strategy effectively enhance the population diversity, improving the global search performance of the algorithm. The FEV analysis shows that the SNDSO attains the smallest FEV on 17 multimodal problems, a substantial lead; on F6 and F9 in particular, it almost reaches the theoretical optimum. However, although it performs well on most functions, its performance on some specific multimodal problems is worse than that of existing optimization algorithms, suggesting that there is still room for improvement on specific multimodal test problems. Overall, the SNDSO has an average rank of 1.93 over all test functions, far ahead of the other algorithms, confirming that introducing the Sobol sequence initialization strategy, the nonlinear factor, and the learning strategy into the SO effectively enhances the algorithm's performance.
Table 7 presents the statistical results for a test dimension of 50. Again, the SNDSO possesses a stronger exploitation capability on the unimodal function F1 but performs worse than the SSA on the specific problem F3. On the multimodal test problems, the SNDSO ranks first in 70% of the cases, a 7% improvement over the 30-dimensional results, suggesting that as the test dimension increases, the nonlinear factor and learning strategy proposed in this paper better promote the algorithm's global search performance. Meanwhile, as the test dimension increases, the FEV of all algorithms grows, but the SNDSO has the smallest FEV in 63% of the cases, again showing better global search capability, attributable to the three strategies introduced in this paper. It is undeniable that the SNDSO still performs poorly on some specific multimodal functions, indicating that its performance on specific problems needs improvement. In the comprehensive ranking, the SNDSO's average rank is 1.89, an improvement over the average rank of 1.93 at a test dimension of 30, which indicates that the Sobol sequence initialization strategy, nonlinear factor, and learning strategy further promote the algorithm's performance in high-dimensional environments.
Table 8 presents the statistical results for a test dimension of 100. On the multimodal test problems, the SNDSO ranks first in 89% of the cases, demonstrating strong solution performance, and it attains the smallest FEV in 67% of the cases. Its mean rank at a test dimension of 100 is 1.34, an improvement over the mean ranks at dimensions 30 and 50, demonstrating that the proposed Sobol sequence initialization strategy, nonlinear factor, and learning strategy promote the algorithm more markedly as the problem dimension increases. Figure 9 visualizes how the average ranking changes with the test dimension, where "Mean Rank" on the vertical axis denotes the mean rank of the algorithm based on the mean values over all test problems. The figure shows that the average rank of the SNDSO decreases as the problem dimension increases, confirming that the three proposed strategies promote the algorithm more clearly in high-dimensional test environments and improve its global optimization ability more effectively.

4.4.4. Convergence and Stability Analysis

In addition to convergence accuracy, convergence speed is also important. Figure 10 shows the convergence curves of the different algorithms on the CEC2017 test function set with a test dimension of 30. In most cases, the SNDSO tends to converge after 20,000 function evaluations, giving it a faster convergence speed than the other algorithms. The stability of the solutions also matters; Figure 11 shows boxplots of the algorithms' results, where '+' indicates an outlier. The SNDSO exhibits higher solution stability, mainly because the proposed strategies effectively balance the exploration/exploitation phases and provide the algorithm with higher population diversity.

4.4.5. Nonparametric Test Analysis

In this section, the Wilcoxon rank sum test and Friedman mean rank test are used to analyze the differences between the SNDSO and the comparison algorithms.
Table 9, Table 10 and Table 11 show the Wilcoxon rank sum test results of the SNDSO and the comparison algorithms on the CEC2017 test function sets with test dimensions of 30, 50, and 100, respectively. p < 0.05 indicates that there is a difference between the two algorithms, ‘+’ indicates that the comparison algorithm outperforms the SNDSO, ‘=’ indicates that the comparison algorithm is not significantly different from the SNDSO, and ‘-’ indicates that the comparison algorithm performs weaker than the SNDSO.
As can be seen from Table 9, when the test dimension is 30, the SNDSO significantly outperforms the SO and AHA on 79.3% of the test functions and the DBO on 96.5% of them. We can therefore conclude that, compared with current novel algorithms, the SNDSO holds a clear solution advantage thanks to the introduced improvement strategies. Meanwhile, the SNDSO significantly outperforms the SSA and TLBO on 72.4% of the test functions and the WOA on 96.5%, indicating stronger solution performance than these highly cited algorithms with their strong solution stability. In addition, the SNDSO significantly outperforms the advanced improved FSTDE algorithm on 76% of the test functions. In summary, the global optimization performance of the proposed SNDSO is well improved by the introduction of the Sobol initialization strategy, the nonlinear factor, and the learning strategy.
Meanwhile, Table 10 shows that when the test dimension is 50, the SNDSO significantly outperforms the SO on 83% of the test functions, the DBO on 93%, and the AHA on 76%. This shows that as the test dimension increases, the SNDSO retains a clear advantage over the novel algorithms. The SNDSO also significantly outperforms the SSA on 69% of the test functions, the TLBO on 73%, and the WOA on 93%, indicating that at a test dimension of 50 it likewise retains a clear advantage over the highly cited algorithms. Compared with the advanced improved FSTDE algorithm, the SNDSO significantly outperforms it on 79% of the test functions. This analysis confirms that the proposed improvement strategies continue to promote the algorithm's performance as the test dimension increases.
From Table 11, when the test dimension is 100, the SNDSO significantly outperforms the SO on 86% of the test functions, the DBO on 96.5%, and the AHA on 90%, showing that the SNDSO solves high-dimensional test problems better than the novel algorithms. Compared with the highly cited algorithms, the SNDSO significantly outperforms the SSA on 76% of the test functions, the TLBO on 86%, and the WOA on 100%, showing that it solves high-dimensional test problems more efficiently than these very stable, highly cited algorithms. Compared with the FSTDE algorithm, the SNDSO significantly outperforms it on 93% of the test functions, because the improvement strategies adopted in this paper more effectively enhance the algorithm's ability to solve high-dimensional optimization problems.
Table 12 shows the Friedman mean rank test results of the algorithm on the CEC2017 test functions of different test dimensions, from which it can be seen that the SNDSO is ranked first in all the test dimensions, demonstrating a strong solving performance. This is mainly due to the effective enhancement of the algorithm performance by the Sobol sequential initialization strategy, nonlinear factor, and learning strategy introduced in this paper.

4.4.6. Runtime Analysis

Considering the practicality of the algorithm, the runtime must be weighed alongside solution accuracy and convergence performance when solving realistic problems. Table 13 reports the actual running times on the CEC2017 test functions. The WOA is dominant in running time, mainly because of its simple structure. The SNDSO proposed in this paper ranks fifth overall: the Sobol initialization strategy, nonlinear factor, and learning strategy added to the original algorithm make its structure more complex and increase the number of computational steps. Nevertheless, the SNDSO is slower only than the SSA, TLBO, WOA, and AHA, which are very robust algorithms, and in exchange it obtains better solving performance. Its running time is therefore still within an acceptable range, and it can serve as a practical algorithm for solving realistic optimization problems.

4.5. Results for FS Problems

In this section, the performance of the SNDSO in solving discrete optimization problems is tested on the FS problem. The FS problem is defined as selecting a subset of features from the original dataset so as to improve classification accuracy while using less feature information; it can therefore be cast as a discrete optimization problem. The SNDSO and seven advanced algorithms (HHO, BOA, WOA, ABO, PSO, GWO, SO) are applied to the FS problems, and the performance of the SNDSO in handling discrete optimization is evaluated through a statistical analysis of the experimental results.

4.5.1. Establishment of Optimization Model

The FS problem is a multidimensional discrete optimization problem aimed at achieving a higher classification accuracy by using a smaller number of features. Therefore, the objective function of the FS problem can be formulated as follows:
$$\min f(x) = \lambda_1 \cdot error + \lambda_2 \cdot \frac{R}{n}$$
In this context, $error$ represents the error rate of classifying the dataset using the selected feature subset, $R$ denotes the size of the selected feature subset, and $n$ represents the number of features in the original dataset. $\lambda_1$ and $\lambda_2$ refer to penalty factors, with $\lambda_1 \in [0, 1]$ and $\lambda_2 = 1 - \lambda_1$. In this paper, $\lambda_1 = 0.9$.
In order to apply the SNDSO to solve the FS problem, it is necessary to discretize the SNDSO. Specifically, each individual in the algorithm is represented by a binary vector, for instance $X_i = (x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,D})$, where $x_{i,j}$ is a binary variable and $D$ is the number of features in the original dataset. When $x_{i,j} = 1$, the $j$th feature of the original dataset is selected; when $x_{i,j} = 0$, the $j$th feature is not selected. During the individual initialization phase, individuals are initialized using random numbers within the interval [0, 1], and then binary encoding is performed on the individuals using the following formula:
$$x_{i,j} = \begin{cases} 1, & x_{i,j} > 0.5 \\ 0, & x_{i,j} \le 0.5 \end{cases} \qquad i = 1, 2, \ldots, N;\; j = 1, 2, \ldots, D.$$
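As a concrete illustration of the objective and the encoding above, here is a minimal NumPy sketch. The callable `error_rate` is a hypothetical stand-in for the classifier evaluation described in the next subsection, and the penalty weights follow the paper ($\lambda_1 = 0.9$):

```python
# Minimal sketch of the FS objective and binary encoding described above.
import numpy as np

LAMBDA1 = 0.9
LAMBDA2 = 1.0 - LAMBDA1

def binarize(position: np.ndarray) -> np.ndarray:
    """Threshold a continuous position vector in [0, 1] at 0.5."""
    return (position > 0.5).astype(int)

def fs_fitness(position: np.ndarray, error_rate) -> float:
    """f(x) = lambda1 * error + lambda2 * R / n (smaller is better)."""
    mask = binarize(position)
    n = mask.size
    r = int(mask.sum())
    if r == 0:                      # an empty feature subset is infeasible
        return float("inf")
    return LAMBDA1 * error_rate(mask) + LAMBDA2 * r / n
```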

4.5.2. Experimental Analyses

This section primarily applies the SNDSO and seven advanced algorithms (HHO, BOA, WOA, ABO, PSO, GWO, SO) to deal with twelve FS problems. The performance of the SNDSO in handling FS problems is evaluated through the statistical analysis of experimental results. The twelve FS datasets can be obtained from http://archive.ics.uci.edu/ml/index.php (accessed on 27 May 2024), with specific information shown in Table 14.
For the selected feature subset, the classification accuracy is calculated using the K-Nearest Neighbors (KNN) algorithm with K = 5. Additionally, k-fold cross-validation with k = 5 is used, so 80% of the data are utilized for training and 20% for validation in each fold. K-fold cross-validation allows the classification accuracy of the chosen feature subset to be computed reliably.
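Under the stated protocol (KNN with K = 5, 5-fold cross-validation), the error-rate evaluation can be sketched as follows; scikit-learn is an assumption here, as the paper does not name its implementation:

```python
# Sketch of the error-rate evaluation for a given feature mask.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_error_rate(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Mean 5-fold classification error using only the selected columns."""
    X_sel = X[:, mask.astype(bool)]
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X_sel, y, cv=5, scoring="accuracy").mean()
    return 1.0 - acc
```

Passing `lambda m: knn_error_rate(m, X, y)` as the `error_rate` argument of `fs_fitness` above ties the two pieces together.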
In this section, each experiment was independently conducted 30 times with a population size of 10 and a maximum iteration count of 100. The performance of the SNDSO on FS problems was assessed using the following criteria.
(1) Fitness values: the best, mean, and worst fitness values obtained over the 30 independent runs.
(2) Average classification accuracy: the mean of the classification accuracies obtained over the 30 runs.
(3) Average number of features: the mean of the feature counts selected over the 30 independent runs.
(4) Rank: the ranking of the algorithm on each criterion.
Table 15 presents the fitness values of the SNDSO on the 12 FS problems, including the best, mean, and worst fitness values and the corresponding rankings. In terms of the best fitness value, the SNDSO has an average ranking of 1.5, placing it first among the compared advanced algorithms; it ranks first on 11 of the 12 datasets (91.7%), demonstrating powerful solving performance and scalability. For the mean fitness value, the SNDSO has an average ranking of 1.08, again securing first place, and it likewise ranks first on 11 of the 12 datasets (91.7%), indicating excellent stability when solving FS problems. To further illustrate this stability, Figure 12 shows boxplots of the SNDSO and the comparison algorithms on six FS datasets; the SNDSO exhibits stronger solution stability than the comparison algorithms. For the worst fitness value, the SNDSO has an average ranking of 1.5, again finishing first, and it ranks first on 9 of the 12 datasets (75%), indicating superior fault tolerance when solving FS problems. This analysis shows that the SNDSO is an excellent algorithm for feature selection. It must be acknowledged, however, that it is outperformed by the traditional PSO algorithm on the Zoo dataset, which shows that the performance of the SNDSO can still be improved on specific feature selection problems.
In addition, convergence speed is crucial when solving FS problems. Figure 13 illustrates the convergence curves of the algorithms on the FS problems; the SNDSO exhibits faster convergence and higher convergence accuracy, demonstrating its powerful convergence capability. Furthermore, Table 16 presents the Wilcoxon rank sum test results: the SNDSO significantly outperforms HHO, BOA, and WOA on all 12 datasets, ABO, PSO, and GWO on 10 datasets, and the SO on 11 datasets, showcasing its strong solving performance.
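The pairwise comparison behind Table 16 can be sketched as below; the two arrays are placeholders for 30-run fitness samples of two algorithms, and `scipy.stats.ranksums` is an assumed choice of implementation:

```python
# Sketch of a pairwise Wilcoxon rank-sum comparison at the 0.05 level.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
sndso_runs = rng.normal(0.10, 0.01, 30)   # placeholder fitness values
rival_runs = rng.normal(0.12, 0.01, 30)

stat, p = ranksums(sndso_runs, rival_runs)
better = p < 0.05 and sndso_runs.mean() < rival_runs.mean()
print(f"p = {p:.4g}; significantly better: {better}")
```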
Table 17 presents the average classification accuracy over the 30 independent runs, while Table 18 displays the corresponding average number of selected features. As Table 17 shows, the SNDSO achieves very good classification accuracy, ranking first on 67% of the datasets, mainly because the Sobol initialization strategy, nonlinear factor, and learning strategy enhance its global optimum-seeking ability. However, its classification accuracy is lower than that of the PSO algorithm on the breast cancer dataset and lower than that of the GWO on the vote and vehicle datasets, chiefly because PSO and GWO retain more useful feature information during feature selection on those data. This reflects that the SNDSO is still inferior to traditional optimization algorithms on some specific feature selection datasets, and its global optimization capability needs further enhancement. Taken as a whole, though, the SNDSO achieves the best average classification accuracy across all datasets, showing that the strategies proposed in this paper effectively enhance the algorithm's optimization performance.
Meanwhile, Table 18 shows that every algorithm is able to reduce the feature dimensionality of the datasets. The SNDSO ranks second in average number of features, behind only the GWO. However, the GWO's average classification accuracy is weaker than the SNDSO's: the GWO discards useful feature information while reducing dimensionality, whereas the SNDSO effectively retains it, which is why the SNDSO ranks first in average classification accuracy. This also indicates that the GWO's global optimization ability on the feature selection problem is weaker than the SNDSO's; the SNDSO can therefore be considered a promising feature selection method.
Table 19 presents the statistics of the algorithms' running times. The SNDSO ranks fourth, which is attributable to the three added strategies, but this runtime still falls within an acceptable range. Figure 14 illustrates the final rankings of the algorithms across the different metrics: the SNDSO ranks first in best, mean, and worst fitness value and in average classification accuracy. This demonstrates its powerful solving performance and indicates that the SNDSO can serve as an effective approach to FS problems.

4.6. Results for Constrained Engineering Optimization Problems

In this section, the SNDSO is applied to five constrained engineering optimization problems and compared with seven optimization algorithms (SO, BESD, DBO, MFO, SSA, PSO, EO). Its performance is evaluated through a statistical analysis of the experimental results. The population size is set to 30, the maximum number of function evaluations is limited to 30,000, and each experiment is independently run 30 times. The mean, standard deviation, worst value, and best value of the objective function are calculated, and the corresponding best solution is recorded.

4.6.1. Results on Three-Bar Truss Design Problem

The given optimization problem originates from the field of civil engineering and involves a constrained space with irregular topography; the structure is shown in Figure 15. The primary aim is to minimize the weight of the bar structure. The constraints derive from the stress limits imposed on each individual bar, so the resulting problem has a linear objective function and three nonlinear constraints. Its mathematical representation is provided below.
$$
\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = l\,(x_2 + 2\sqrt{2}\,x_1) \\
\text{subject to:}\quad & g_1(\bar{x}) = \frac{x_2}{2x_2x_1 + \sqrt{2}\,x_1^2}\,P - \sigma \le 0, \\
& g_2(\bar{x}) = \frac{x_2 + \sqrt{2}\,x_1}{2x_2x_1 + \sqrt{2}\,x_1^2}\,P - \sigma \le 0, \\
& g_3(\bar{x}) = \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0, \\
\text{where}\quad & l = 100,\; P = 2,\; \text{and } \sigma = 2, \\
\text{with bounds:}\quad & 0 \le x_1, x_2 \le 1.
\end{aligned}
$$
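To make the constraint handling concrete, here is a minimal static-penalty sketch of this problem. The penalty weight and the use of SciPy's differential evolution as a stand-in solver are assumptions for illustration, not the paper's setup (the paper applies the SNDSO):

```python
# Static-penalty formulation of the three-bar truss problem above.
import numpy as np
from scipy.optimize import differential_evolution

L_CONST, P, SIGMA = 100.0, 2.0, 2.0

def penalized_weight(x: np.ndarray) -> float:
    x1, x2 = x
    f = L_CONST * (x2 + 2.0 * np.sqrt(2.0) * x1)
    denom = 2.0 * x2 * x1 + np.sqrt(2.0) * x1 ** 2 + 1e-12  # guards x1 = 0
    g = [
        x2 / denom * P - SIGMA,
        (x2 + np.sqrt(2.0) * x1) / denom * P - SIGMA,
        P / (x1 + np.sqrt(2.0) * x2 + 1e-12) - SIGMA,
    ]
    violation = sum(max(0.0, gi) ** 2 for gi in g)
    return f + 1e6 * violation           # assumed penalty weight

result = differential_evolution(penalized_weight, bounds=[(0, 1), (0, 1)],
                                seed=0, tol=1e-10)
print(result.x, result.fun)              # should approach f* ~ 263.896
```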
Table 20 presents the optimal solutions of the three-bar truss design problem obtained with the SNDSO and seven other advanced algorithms. Table 21 displays the statistical results, including the best fitness value, worst fitness value, mean fitness value, and standard deviation. From Table 20, it can be observed that the SNDSO ranks first and achieves the optimal solution with a fitness function value of 263.895843, represented by X = (0.788680, 0.408233). Furthermore, Table 21 reveals that the SNDSO ranks first in terms of the best fitness value, second only to EO in the worst fitness value, and second only to BESD in the average fitness value. This indicates that the SNDSO exhibits a certain potential in handling the three-bar truss design problem.

4.6.2. Results on 10-Bar Truss Design

The main goal of this problem is to minimize the weight of the truss structure while guaranteeing adherence to the frequency limitations; the structure is shown in Figure 16. Mathematically, the problem can be defined as follows:
$$
\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = \sum_{i=1}^{10} L_i(x_i)\,\rho_i A_i \\
\text{subject to:}\quad & g_1(\bar{x}) = \frac{7}{\omega_1(\bar{x})} - 1 \le 0,\quad
g_2(\bar{x}) = \frac{15}{\omega_2(\bar{x})} - 1 \le 0,\quad
g_3(\bar{x}) = \frac{20}{\omega_3(\bar{x})} - 1 \le 0, \\
\text{with bounds:}\quad & 6.45 \times 10^{-5} \le A_i \le 5 \times 10^{-3},\; i = 1, 2, \ldots, 10, \\
\text{where}\quad & \bar{x} = \{A_1, A_2, \ldots, A_{10}\},\; \rho = 2770.
\end{aligned}
$$
Table 22 presents the optimal solutions of the 10-bar truss design problem obtained with the SNDSO and seven advanced algorithms. Table 23 shows the statistical results, including the best fitness value, worst fitness value, mean fitness value, and standard deviation. From Table 22, it can be observed that EO ranks first, achieving the optimal solution X = (0.003453, 0.001442, 0.003537, 0.001484, 0.000065, 0.000457, 0.002329, 0.002423, 0.001262, 0.001250) with a fitness function value of 524.551304. The proposed SNDSO ranks second, obtaining the optimal solution X = (0.003475, 0.001514, 0.003552, 0.001428, 0.000065, 0.000456, 0.002461, 0.002281, 0.001221, 0.001264) with a fitness function value of 524.582840. From Table 23, it can be seen that the SNDSO performs slightly worse than EO in terms of the best fitness value but outperforms other algorithms in terms of the worst fitness value and average fitness value. This indicates that the SNDSO exhibits strong solving performance and stability in dealing with the 10-bar truss design problem, making it an effective approach for solving this problem.

4.6.3. Results on Tension/Compression Spring Design

The primary goal of this problem is to minimize the weight of a spring that undergoes tension or compression. Four constraints are involved, and three variables are employed in the weight calculation: $x_1$ represents the wire diameter, $x_2$ the mean coil diameter, and $x_3$ the number of active coils; the structure is shown in Figure 17. The problem is defined as follows.
$$
\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = x_1^2 x_2 (2 + x_3) \\
\text{subject to:}\quad & g_1(\bar{x}) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0, \\
& g_2(\bar{x}) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0, \\
& g_3(\bar{x}) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0, \\
& g_4(\bar{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0, \\
\text{with bounds:}\quad & 0.05 \le x_1 \le 2.00,\; 0.25 \le x_2 \le 1.30,\; 2.00 \le x_3 \le 15.0.
\end{aligned}
$$
Table 24 presents the optimal solutions obtained using the SNDSO and seven advanced algorithms for the tension/compression spring design problem, and Table 25 displays the statistical results, including the best, worst, and average fitness values and the standard deviation. From Table 24, it can be observed that the SNDSO, EO, and DBO jointly rank first, achieving the optimal solution X = (0.051700, 0.356984, 11.273355) with a fitness function value of 0.012665. Furthermore, Table 25 indicates that the SNDSO outperforms the SO, BESD, MFO, SSA, and PSO in terms of the best fitness value, ranks second only to BESD in the worst fitness value, and surpasses all other algorithms in average fitness value. These findings demonstrate the efficiency and potential of the SNDSO in tackling the tension/compression spring design problem.
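The spring-design constraints above can be written as a reusable fitness with the same static-penalty scheme sketched earlier; the penalty weight is again an assumption. Evaluating it at the reported solution recovers the value in Table 24:

```python
# Penalized fitness for the tension/compression spring design problem.
import numpy as np

def spring_fitness(x: np.ndarray) -> float:
    x1, x2, x3 = x
    f = x1 ** 2 * x2 * (2.0 + x3)
    g = [
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
    return f + 1e6 * sum(max(0.0, gi) ** 2 for gi in g)  # assumed weight

# The reported optimum is feasible and recovers the paper's value:
print(spring_fitness(np.array([0.051700, 0.356984, 11.273355])))  # ~0.012665
```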

4.6.4. Results on Weight Minimization of Speed Reducer

The problem at hand pertains to the design of a compact aircraft engine speed reducer; the structure is shown in Figure 18. The optimization problem is expressed mathematically as follows:
$$
\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = 0.7854\,x_2^2 x_1 (14.9334\,x_3 - 43.0934 + 3.3333\,x_3^2) + 0.7854\,(x_5 x_7^2 + x_4 x_6^2) \\
& \qquad\quad - 1.508\,x_1 (x_7^2 + x_6^2) + 7.477\,(x_7^3 + x_6^3) \\
\text{subject to:}\quad
& g_1(\bar{x}) = -x_1 x_2^2 x_3 + 27 \le 0, \\
& g_2(\bar{x}) = -x_1 x_2^2 x_3^2 + 397.5 \le 0, \\
& g_3(\bar{x}) = -x_2 x_6^4 x_3 x_4^{-3} + 1.93 \le 0, \\
& g_4(\bar{x}) = -x_2 x_7^4 x_3 x_5^{-3} + 1.93 \le 0, \\
& g_5(\bar{x}) = \frac{\sqrt{16.91 \times 10^6 + (745\,x_4 x_2^{-1} x_3^{-1})^2}}{0.1\,x_6^3} - 1100 \le 0, \\
& g_6(\bar{x}) = \frac{\sqrt{157.5 \times 10^6 + (745\,x_5 x_2^{-1} x_3^{-1})^2}}{0.1\,x_7^3} - 850 \le 0, \\
& g_7(\bar{x}) = x_2 x_3 - 40 \le 0, \\
& g_8(\bar{x}) = -x_1 x_2^{-1} + 5 \le 0, \\
& g_9(\bar{x}) = x_1 x_2^{-1} - 12 \le 0, \\
& g_{10}(\bar{x}) = 1.5\,x_6 - x_4 + 1.9 \le 0, \\
& g_{11}(\bar{x}) = 1.1\,x_7 - x_5 + 1.9 \le 0, \\
\text{with bounds:}\quad & 2.6 \le x_1 \le 3.6,\; 0.7 \le x_2 \le 0.8,\; 17 \le x_3 \le 28, \\
& 7.3 \le x_4, x_5 \le 8.3,\; 2.9 \le x_6 \le 3.9,\; 5 \le x_7 \le 5.5.
\end{aligned}
$$
Table 26 presents the optimal solutions of the speed reducer design problem obtained with the SNDSO and seven advanced algorithms. Table 27 displays the statistical results. From Table 26, it can be observed that, except for BESD and SSA, the other algorithms share the top rank. Among them, the SNDSO achieves the optimal solution with a fitness function value of 2994.424466, represented as X = (3.500000, 0.700000, 17.000000, 7.300000, 7.715320, 3.350541, 5.286654). Table 27 reveals that the SNDSO shares the first rank with EO in terms of the average fitness value. The SNDSO demonstrates efficient solving performance when dealing with the speed reducer design problem.

4.6.5. Results on Welded Beam Design

The primary aim of this task is to design a welded beam at the lowest possible cost. The task comprises five constraints and employs four variables in the design of the welded beam; the structure is shown in Figure 19. The mathematical formulation is as follows.
$$
\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = 0.04811\,x_3 x_4 (x_2 + 14) + 1.10471\,x_1^2 x_2 \\
\text{subject to:}\quad
& g_1(\bar{x}) = x_1 - x_4 \le 0, \\
& g_2(\bar{x}) = \delta(\bar{x}) - \delta_{max} \le 0, \\
& g_3(\bar{x}) = P - P_c(\bar{x}) \le 0, \\
& g_4(\bar{x}) = \tau(\bar{x}) - \tau_{max} \le 0, \\
& g_5(\bar{x}) = \sigma(\bar{x}) - \sigma_{max} \le 0, \\
\text{where}\quad
& \tau = \sqrt{\tau'^2 + \tau''^2 + 2\tau'\tau''\,\frac{x_2}{2R}},\quad
\tau'' = \frac{RM}{J},\quad \tau' = \frac{P}{\sqrt{2}\,x_2 x_1}, \\
& M = P\left(\frac{x_2}{2} + L\right),\quad
R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \\
& J = 2\left(\sqrt{2}\,x_1 x_2\left(\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right)\right), \\
& \sigma(\bar{x}) = \frac{6PL}{x_4 x_3^2},\quad
\delta(\bar{x}) = \frac{6PL^3}{E x_3^2 x_4},\quad
P_c(\bar{x}) = \frac{4.013\,E x_3 x_4^3}{6L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right), \\
& L = 14\ \text{in},\; P = 6000\ \text{lb},\; E = 30 \times 10^6\ \text{psi},\; \sigma_{max} = 30{,}000\ \text{psi}, \\
& \tau_{max} = 13{,}600\ \text{psi},\; G = 12 \times 10^6\ \text{psi},\; \delta_{max} = 0.25\ \text{in}.
\end{aligned}
$$
Table 28 presents the optimal solutions for the welded beam design problem obtained with the SNDSO and seven advanced algorithms, and Table 29 shows the statistical results, including the best, worst, and mean fitness values and the standard deviation. From Table 28, it can be observed that the SNDSO ranks first alongside the SO, DBO, PSO, and EO; the SNDSO achieves the best solution X = (0.198832, 3.337365, 9.192024, 0.198832) with a fitness function value of 1.670218. From Table 29, it is evident that the SNDSO outperforms the other algorithms in terms of the average fitness value, demonstrating its efficiency and potential in tackling the welded beam design problem and leading to competitive results.
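As a partial numerical check of the formulation above, the sketch below evaluates the cost and the two stress quantities at the SNDSO solution reported in Table 28; it is a sketch for illustration, not the paper's code, and only the cost and the shear/bending stresses are checked here:

```python
# Evaluate the welded-beam cost and stress constraints at one point.
import math

L, P = 14.0, 6000.0

def cost(x1, x2, x3, x4):
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (x2 + 14.0)

def shear_stress(x1, x2, x3, x4):
    tau_p = P / (math.sqrt(2.0) * x2 * x1)                  # tau'
    M = P * (x2 / 2.0 + L)
    half = x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2
    R = math.sqrt(half)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * half
    tau_pp = R * M / J                                      # tau''
    return math.sqrt(tau_p**2 + tau_pp**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R))

x = (0.198832, 3.337365, 9.192024, 0.198832)
print(f"cost  = {cost(*x):.6f}")                           # ~1.670218 (Table 28)
print(f"tau   = {shear_stress(*x):.1f} (limit 13600)")     # sits at the limit
print(f"sigma = {6 * P * L / (x[3] * x[2] ** 2):.1f} (limit 30000)")  # at limit
```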

5. Conclusions and Future Directions

In this study, the population was initialized using the Sobol sequence, a nonlinear factor was used to control the balance between global and local search, and a learning strategy was introduced to improve the algorithm's global search ability and help it escape local optimum traps. These features effectively improve the performance of the SO. A series of experiments was conducted to evaluate the resulting SNDSO. The experimental results show that the proposed SNDSO not only improves on the SO but also holds an advantage over 15 state-of-the-art algorithms, providing an effective scheme for solving realistic problems. It must be conceded, however, that the SNDSO is less effective than some traditional optimization algorithms on specific multimodal optimization and feature selection problems, and that it has not yet been applied to challenging real-world combinatorial optimization problems.
Our future work will therefore focus on two points. First, we will further optimize the SNDSO for multimodal optimization and feature selection, enhancing its global search capability and improving its optimization performance in complex scenarios such as multi-peak functions and discrete feature spaces. Second, we will broaden the application areas of the SNDSO to challenging real-world combinatorial optimization problems, establishing reasonable mathematical models that closely reflect the actual problem background and designing corresponding SNDSO variants to solve them, so as to verify the SNDSO's practical value and potential.

Author Contributions

Conceptualization, W.Z. (Wenda Zheng); methodology, W.Z. (Wenda Zheng); software, W.Z. (Wenda Zheng) and Y.A.; validation, Y.A. and W.Z. (Weidong Zhang); formal analysis, W.Z. (Wenda Zheng) and Y.A.; investigation, W.Z. (Weidong Zhang); resources, W.Z. (Weidong Zhang); data curation, W.Z. (Wenda Zheng); writing—original draft preparation, W.Z. (Wenda Zheng); writing—review and editing, W.Z. (Wenda Zheng) and Y.A.; visualization, W.Z. (Weidong Zhang); supervision, W.Z. (Weidong Zhang); project administration, W.Z. (Weidong Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors thank the reviewers for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

$X_i$: Position of the $i$th individual
$X_{max}$: Problem upper boundary
$X_{min}$: Problem lower boundary
$r$, $rand$: Random numbers in the [0, 1] interval
$N$: Number of individuals
$N_m$: Number of male individuals
$N_f$: Number of female individuals
$X_{best,f}$: Optimal female individual
$X_{best,m}$: Optimal male individual
$X_{best}$: Optimal individual
$f_{best,m}$: Fitness value of $X_{best,m}$
$f_{best,f}$: Fitness value of $X_{best,f}$
$f_{food}$: Fitness value of $X_{best}$
$Temp$: Temperature
$Q$: Food quantity
$t$: Current number of iterations
$T$: Maximum number of iterations
$X_{i,m}$: Location of the $i$th male individual
$X_{rand,m}$: Random individual in the male population
$A_m$: Ability of the male individual to search for food
$f_{rand,m}$: Fitness value of $X_{rand,m}$
$f_{i,m}$: Fitness value of $X_{i,m}$
$X_{i,f}$: Location of the $i$th female individual
$X_{rand,f}$: Random individual in the female population
$A_f$: Ability of the female individual to find food
$f_{rand,f}$: Fitness value of $X_{rand,f}$
$f_{i,f}$: Fitness value of $X_{i,f}$
$FM$: Fighting ability of the male individual
$FF$: Fighting ability of the female individual
$f_i$: Fitness value of $X_i$
$M_m$: Reproductive capacity of the male individual
$M_f$: Reproductive capacity of the female individual
$X_{worst,m}$: Position of the worst male individual
$X_{worst,f}$: Position of the worst female individual
XOR operation in binary
Matrix dot product operation
$c_1$, $c_2$, $c_3$: Set to 0.5, 0.05, and 2, respectively
$d_1$, $start$, $end$: Set to 0.5, 0.3, and 0.7, respectively
$Gap_{rand1/i}$: Gap between the $i$th individual and a random individual
$Gap_{rand2/rand3}$: Gap between two random individuals
$X_{rand1}$, $X_{rand2}$, $X_{rand3}$: Different random individuals
$X_{Learn}$: Individual generated through learning
$R$: Learning radius
$f_{Learn}$: Fitness value of $X_{Learn}$

References

1. Hussien, A.G.; Oliva, D.; Houssein, E.H.; Juan, A.A.; Yu, X. Binary Whale Optimization Algorithm for Dimensionality Reduction. Mathematics 2020, 8, 1821.
2. Hao, Y.; Helo, P.; Shamsuzzoha, A. Virtual Factory System Design and Implementation: Integrated Sustainable Manufacturing. Int. J. Syst. Sci. Oper. Logist. 2018, 5, 116–132.
3. Hussien, A.G.; Amin, M.; Wang, M.; Liang, G.; Alsanad, A.; Gumaei, A.; Chen, H. Crow Search Algorithm: Theory, Recent Advances, and Applications. IEEE Access 2020, 8, 173548–173565.
4. Rabbani, M.; Hosseini-Mokhallesun, S.A.A.; Ordibazar, A.H.; Farrokhi-Asl, H. A Hybrid Robust Possibilistic Approach for a Sustainable Supply Chain Location-Allocation Network Design. Int. J. Syst. Sci. Oper. Logist. 2020, 7, 60–75.
5. Sayyadi, R.; Awasthi, A. A Simulation-Based Optimisation Approach for Identifying Key Determinants for Sustainable Transportation Planning. Int. J. Syst. Sci. Oper. Logist. 2018, 5, 161–174.
6. Abualigah, L.; Gandomi, A.H.; Elaziz, M.A.; Hussien, A.G.; Khasawneh, A.M.; Alshinwan, M.; Houssein, E.H. Nature-Inspired Optimization Algorithms for Text Document Clustering—A Comprehensive Analysis. Algorithms 2020, 13, 345.
7. Hussien, A.G.; Hassanien, A.E.; Houssein, E.H.; Amin, M.; Azar, A.T. New Binary Whale Optimization Algorithm for Discrete Optimization Problems. Eng. Optim. 2020, 52, 945–959.
8. Sayyadi, R.; Awasthi, A. An Integrated Approach Based on System Dynamics and ANP for Evaluating Sustainable Transportation Policies. Int. J. Syst. Sci. Oper. Logist. 2020, 7, 182–191.
9. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowl. Based Syst. 2022, 242, 108320.
10. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
11. Dînsoreanu, M. An Introduction to Genetic Algorithms. Appl. Med. Inform. 1995, 1, 11–18.
12. Rechenberg, I. Evolution Strategy: Optimization of Technical Systems by Means of Biological Evolution. Fromman-Holzboog Stuttg. 1973, 104, 15–16.
13. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
14. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A Nature-Inspired Algorithm for Global Optimization. Neural Comput. Appl. 2016, 27, 495–513.
15. Erol, O.K.; Eksin, I. A New Optimization Method: Big Bang–Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111.
16. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. CAD Comput. Aided Des. 2011, 43, 303–315.
17. Shabani, A.; Asgarian, B.; Salido, M.; Asil Gharebaghi, S. Search and Rescue Optimization Algorithm: A New Optimization Method for Solving Constrained Engineering Optimization Problems. Expert Syst. Appl. 2020, 161, 113698.
18. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
19. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
20. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern. 2015, 45, 191–204.
21. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
22. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250.
23. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336.
24. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
25. Altay, O. Chaotic Slime Mould Optimization Algorithm for Global Optimization. Artif. Intell. Rev. 2022, 55, 3979–4040.
26. Qaraad, M.; Amjad, S.; Hussein, N.K.; Elhosseini, M.A. Large Scale Salp-Based Grey Wolf Optimization for Feature Selection and Global Optimization. Neural Comput. Appl. 2022, 34, 8989–9014.
27. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
28. Cao, Z.; Jia, H.; Wang, Z.; Foh, C.H.; Tian, F. A Differential Evolution with Autonomous Strategy Selection and Its Application in Remote Sensing Image Denoising. Expert Syst. Appl. 2024, 238, 122108.
29. Junaid, M.; Bangyal, W.H.; Ahmad, J. A Novel Bat Algorithm Using Sobol Sequence for the Initialization of Population. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6.
30. Wu, S.; Jiang, J.; Yan, Y.; Bao, W.; Shi, Y. Improved Coyote Algorithm and Application to Optimal Load Forecasting Model. Alex. Eng. J. 2022, 61, 7811–7822.
31. Wang, M.; Wang, J.S.; Li, X.D.; Zhang, M.; Hao, W.K. Harris Hawk Optimization Algorithm Based on Cauchy Distribution Inverse Cumulative Function and Tangent Flight Operator. Appl. Intell. 2022, 52, 10999–11026.
32. Zhang, Q.; Gao, H.; Zhan, Z.H.; Li, J.; Zhang, H. Growth Optimizer: A Powerful Metaheuristic Algorithm for Solving Continuous and Discrete Global Optimization Problems. Knowl.-Based Syst. 2023, 261, 110206.
33. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern Goshawk Optimization: A New Swarm-Based Algorithm for Solving Optimization Problems. IEEE Access 2021, 9, 162059–162080.
34. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191.
35. Tsafarakis, S.; Zervoudakis, K.; Andronikidis, A.; Altsitsiadis, E. Fuzzy Self-Tuning Differential Evolution for Optimal Product Line Design. Eur. J. Oper. Res. 2020, 287, 1161–1169.
36. Zhao, W.; Wang, L.; Mirjalili, S. Artificial Hummingbird Algorithm: A New Bio-Inspired Optimizer with Its Engineering Applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194.
37. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
38. Arora, S.; Singh, S. Butterfly Optimization Algorithm: A Novel Approach for Global Optimization. Soft Comput. 2019, 23, 715–734.
39. Qi, X.; Zhu, Y.; Zhang, H. A New Meta-Heuristic Butterfly-Inspired Algorithm. J. Comput. Sci. 2017, 23, 226–239.
40. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
41. Civicioglu, P.; Besdok, E. Bezier Search Differential Evolution Algorithm for Numerical Function Optimization: A Comparative Study with CRMLSP, MVO, WA, SHADE and LSHADE. Expert Syst. Appl. 2021, 165, 113875.
42. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl. Based Syst. 2015, 89, 228–249.
43. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A Novel Optimization Algorithm. Knowl. Based Syst. 2020, 191, 105190.
44. Abdel-Basset, M.; El-Shahat, D.; Jameel, M.; Abouhawwash, M. Exponential Distribution Optimizer (EDO): A Novel Math-Inspired Algorithm for Global Optimization and Engineering Problems. Artif. Intell. Rev. 2023, 56, 9329–9400.
Figure 1. Ref. [30] (a) Sobol sequential random number distribution. (b) Pseudo-random number sequence distribution.
Figure 2. Comparison between the original Q and the improved Q.
Figure 3. Learning strategy simulation.
Figure 4. The flowchart of the SNDSO.
Figure 5. The mean value-based rank-filled of different parameter combinations.
Figure 6. Convergence plot of the different population size on the CEC2015.
Figure 7. The population diversity of SO and the SNDSO.
Figure 8. The exploration and exploitation of the SNDSO.
Figure 9. Mean ranking of algorithms on different test dimensions.
Figure 10. Convergence plot of different algorithms on the CEC2017 test function set (Dim = 30).
Figure 11. Boxplot of different algorithms on the CEC2017 test function set (Dim = 30).
Figure 12. Box plots of algorithms dealing with FS problems.
Figure 13. Convergence plot of the algorithm dealing with the FS problems.
Figure 14. Ranking of all metrics for algorithms dealing with FS problems.
Figure 15. 3-bar truss design problem.
Figure 16. 10-bar truss design problem.
Figure 17. Tension/compression spring design problem.
Figure 18. Construction of speed reducer.
Figure 19. Welded beam design problem.
Table 1. Parameter information for 16 advanced algorithms.

| Algorithms | Time | Parameters Settings |
| --- | --- | --- |
| SO | 2022 | T1 = 0.25, T2 = 0.6, C1 = 0.5, C2 = 0.05, C3 = 2 |
| DBO | 2022 | k = 0.1, λ = 0.1, b = 0.3, S = 0.5 |
| Salp Swarm Algorithm (SSA) [34] | 2017 | c1 = 2exp(−(4FEs/MaxFEs)^2) |
| TLBO | 2011 | TF = 1 or 2 |
| Fuzzy Self-Tuning Differential Evolution (FSTDE) [35] | 2020 | Δo ∈ {0.2, 0.4, 0.6}, βmin ∈ {0.1, 0.4, 0.7}, βmax ∈ {0.4, 0.7, 0.9}, PCR ∈ {0.01, 0.1, 0.5} |
| Artificial Hummingbird Algorithm (AHA) [36] | 2021 | Migration coefficient = 2n |
| Whale Optimization Algorithm (WOA) [37] | 2016 | b = 1, a1 = 2 − (2FEs/MaxFEs), a2 = −1 − (FEs/MaxFEs) |
| HHO | 2019 | E0 = 2rand − 1, E1 = 2 − 2(FEs/MaxFEs) |
| Butterfly Optimization Algorithm (BOA) [38] | 2019 | p = 0.8, α = 0.1, c = 0.01 |
| Artificial Butterfly Optimization (ABO) [39] | 2017 | ratio_e = 0.2, step_e = 0.05 |
| Particle Swarm Optimization (PSO) [40] | 1995 | w = 1, wp = 0.99, c1 = 1.5, c2 = 2.0 |
| GWO | 2014 | α = 2 − 2(FEs/MaxFEs) |
| Bezier Search Differential Evolution (BESD) [41] | 2020 | K = 5 |
| Moth-Flame Optimization (MFO) [42] | 2015 | r = −1 − (FEs/MaxFEs), t = r + (1 − r)·rand, b = 1 |
| Equilibrium Optimizer (EO) [43] | 2020 | V = 1, a1 = 2, a2 = 1, GP = 0.5 |
| SNDSO | NA | T1 = 0.25, T2 = 0.6, C1 = 0.5, C2 = 0.05, C3 = 2 |
Table 2. The CEC2015 test function set.

| Problem | Types | Name | Optimum |
| --- | --- | --- | --- |
| CEC2015_F1 | Unimodal | Rotated high conditioned elliptic function | 100 |
| CEC2015_F2 | | Rotated cigar function | 200 |
| CEC2015_F3 | Simple Multimodal | Shifted and rotated Ackley's function | 300 |
| CEC2015_F4 | | Shifted and rotated Rastrigin's function | 400 |
| CEC2015_F5 | | Shifted and rotated Schwefel's function | 500 |
| CEC2015_F6 | Hybrid | Hybrid function 1 (N = 3) | 600 |
| CEC2015_F7 | | Hybrid function 2 (N = 4) | 700 |
| CEC2015_F8 | | Hybrid function 3 (N = 5) | 800 |
| CEC2015_F9 | Composition | Composition function 1 (N = 3) | 900 |
| CEC2015_F10 | | Composition function 2 (N = 3) | 1000 |
| CEC2015_F11 | | Composition function 3 (N = 5) | 1100 |
| CEC2015_F12 | | Composition function 4 (N = 5) | 1200 |
| CEC2015_F13 | | Composition function 5 (N = 5) | 1300 |
| CEC2015_F14 | | Composition function 6 (N = 7) | 1400 |
| CEC2015_F15 | | Composition function 7 (N = 10) | 1500 |

Search range: [−100, 100].
Table 3. Results of the different population size on the CEC2015.

| Problem | Metric | N = 10 | N = 20 | N = 60 | N = 120 | N = 30 |
| --- | --- | --- | --- | --- | --- | --- |
| CEC2015_F1 | Mean | 1.1095×10^6 | 8.0399×10^5 | 8.2751×10^5 | 1.5211×10^6 | 6.2595×10^5 |
| | Rank | 4 | 2 | 3 | 5 | 1 |
| | Wilcoxon | = | = | = | − | NA |
| CEC2015_F2 | Mean | 2.7250×10^3 | 2.3829×10^3 | 3.6684×10^3 | 1.7561×10^3 | 7.6041×10^2 |
| | Rank | 4 | 3 | 5 | 2 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F3 | Mean | 3.2103×10^2 | 3.2103×10^2 | 3.2096×10^2 | 3.2097×10^2 | 3.2093×10^2 |
| | Rank | 4 | 4 | 2 | 3 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F4 | Mean | 5.5304×10^2 | 5.0796×10^2 | 5.4226×10^2 | 4.9738×10^2 | 4.6580×10^2 |
| | Rank | 5 | 3 | 4 | 2 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F5 | Mean | 4.7199×10^3 | 5.7387×10^3 | 6.4086×10^3 | 6.7890×10^3 | 5.8907×10^3 |
| | Rank | 1 | 2 | 4 | 5 | 3 |
| | Wilcoxon | + | = | − | − | NA |
| CEC2015_F6 | Mean | 1.5306×10^5 | 1.4430×10^5 | 1.9620×10^5 | 1.3288×10^5 | 5.5288×10^4 |
| | Rank | 4 | 3 | 5 | 2 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F7 | Mean | 7.1553×10^2 | 7.1206×10^2 | 7.1266×10^2 | 7.1128×10^2 | 7.1069×10^2 |
| | Rank | 5 | 3 | 4 | 2 | 1 |
| | Wilcoxon | − | = | − | = | NA |
| CEC2015_F8 | Mean | 8.3496×10^4 | 4.5179×10^4 | 4.1471×10^4 | 6.2157×10^4 | 2.3666×10^4 |
| | Rank | 5 | 3 | 2 | 4 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F9 | Mean | 1.0807×10^3 | 1.0160×10^3 | 1.0034×10^3 | 1.0034×10^3 | 1.0033×10^3 |
| | Rank | 5 | 4 | 2 | 2 | 1 |
| | Wilcoxon | − | − | − | = | NA |
| CEC2015_F10 | Mean | 1.0534×10^5 | 5.9160×10^4 | 1.0865×10^5 | 1.3714×10^5 | 5.7913×10^4 |
| | Rank | 3 | 2 | 4 | 5 | 1 |
| | Wilcoxon | − | = | − | − | NA |
| CEC2015_F11 | Mean | 2.1071×10^3 | 1.8789×10^3 | 1.7977×10^3 | 1.7727×10^3 | 1.7314×10^3 |
| | Rank | 5 | 4 | 3 | 2 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F12 | Mean | 1.3511×10^3 | 1.3543×10^3 | 1.4004×10^3 | 1.4004×10^3 | 1.3350×10^3 |
| | Rank | 2 | 3 | 4 | 4 | 1 |
| | Wilcoxon | − | − | − | − | NA |
| CEC2015_F13 | Mean | 1.3007×10^3 | 1.3000×10^3 | 1.3000×10^3 | 1.3000×10^3 | 1.3000×10^3 |
| | Rank | 5 | 1 | 1 | 1 | 1 |
| | Wilcoxon | − | − | + | + | NA |
| CEC2015_F14 | Mean | 3.7289×10^4 | 3.6974×10^4 | 3.5288×10^4 | 3.5486×10^4 | 3.4832×10^4 |
| | Rank | 5 | 4 | 2 | 3 | 1 |
| | Wilcoxon | − | − | = | − | NA |
| CEC2015_F15 | Mean | 1.6000×10^3 | 1.6000×10^3 | 1.6000×10^3 | 1.6000×10^3 | 1.6000×10^3 |
| | Rank | 1 | 1 | 1 | 1 | 1 |
| | Wilcoxon | − | = | − | − | NA |
| Mean Rank | | 3.87 | 2.80 | 3.07 | 2.87 | 1.13 |
| +/−/= | | 1/13/1 | 0/10/5 | 1/12/2 | 1/12/2 | NA |
Table 4. Friedman mean rank test for CEC2015.

| Problem | SO | SSO | NSO | DSO | SNDSO |
| --- | --- | --- | --- | --- | --- |
| CEC2015_F1 | 5 | 4 | 3 | 2 | 1 |
| CEC2015_F2 | 5 | 4 | 3 | 2 | 1 |
| CEC2015_F3 | 4 | 3 | 2 | 5 | 1 |
| CEC2015_F4 | 4 | 3 | 5 | 2 | 1 |
| CEC2015_F5 | 1 | 2 | 3 | 4 | 5 |
| CEC2015_F6 | 5 | 4 | 3 | 2 | 1 |
| CEC2015_F7 | 4 | 4 | 3 | 2 | 1 |
| CEC2015_F8 | 5 | 3 | 4 | 2 | 1 |
| CEC2015_F9 | 5 | 2 | 4 | 1 | 3 |
| CEC2015_F10 | 5 | 3 | 4 | 2 | 1 |
| CEC2015_F11 | 5 | 3 | 4 | 2 | 1 |
| CEC2015_F12 | 3 | 4 | 1 | 2 | 5 |
| CEC2015_F13 | 5 | 4 | 3 | 1 | 2 |
| CEC2015_F14 | 5 | 3 | 1 | 4 | 2 |
| CEC2015_F15 | 4 | 4 | 2 | 3 | 1 |
| Mean Rank | 4.33 | 3.33 | 3.00 | 2.40 | 1.80 |
| Final Rank | 5 | 4 | 3 | 2 | 1 |
Table 5. The CEC2017 test function set.

| Problem | Types | Name | Optimum |
| --- | --- | --- | --- |
| CEC2017_F1 | Unimodal | Shifted and Rotated Bent Cigar Function | 100 |
| CEC2017_F3 | | Shifted and Rotated Zakharov Function | 300 |
| CEC2017_F4 | Multimodal | Shifted and Rotated Rosenbrock's Function | 400 |
| CEC2017_F5 | | Shifted and Rotated Rastrigin's Function | 500 |
| CEC2017_F6 | | Shifted and Rotated Expanded Scaffer's F6 Function | 600 |
| CEC2017_F7 | | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 |
| CEC2017_F8 | | Shifted and Rotated Non-Continuous Rastrigin's Function | 800 |
| CEC2017_F9 | | Shifted and Rotated Lévy Function | 900 |
| CEC2017_F10 | | Shifted and Rotated Schwefel's Function | 1000 |
| CEC2017_F11 | Hybrid | Hybrid function 1 (N = 3) | 1100 |
| CEC2017_F12 | | Hybrid function 1 (N = 3) | 1200 |
| CEC2017_F13 | | Hybrid function 3 (N = 3) | 1300 |
| CEC2017_F14 | | Hybrid function 4 (N = 4) | 1400 |
| CEC2017_F15 | | Hybrid function 5 (N = 4) | 1500 |
| CEC2017_F16 | | Hybrid function 6 (N = 4) | 1600 |
| CEC2017_F17 | | Hybrid function 6 (N = 5) | 1700 |
| CEC2017_F18 | | Hybrid function 6 (N = 5) | 1800 |
| CEC2017_F19 | | Hybrid function 6 (N = 5) | 1900 |
| CEC2017_F20 | | Hybrid function 6 (N = 6) | 2000 |
| CEC2017_F21 | Composition | Composition function 1 (N = 3) | 2100 |
| CEC2017_F22 | | Composition function 2 (N = 3) | 2200 |
| CEC2017_F23 | | Composition function 3 (N = 4) | 2300 |
| CEC2017_F24 | | Composition function 4 (N = 4) | 2400 |
| CEC2017_F25 | | Composition function 5 (N = 5) | 2500 |
| CEC2017_F26 | | Composition function 6 (N = 5) | 2600 |
| CEC2017_F27 | | Composition function 7 (N = 6) | 2700 |
| CEC2017_F28 | | Composition function 8 (N = 6) | 2800 |
| CEC2017_F29 | | Composition function 9 (N = 3) | 2900 |
| CEC2017_F30 | | Composition function 10 (N = 3) | 3000 |

Search range: [−100, 100].
Table 6. Numerical results of the different algorithms on solving the CEC2017 test function set (Dim = 30).

| Problem | Metric | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA | SNDSO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CEC2017_F1 | Mean | 7.6968×10^3 | 1.7314×10^6 | 4.7140×10^3 | 3.9611×10^3 | 8.2179×10^3 | 5.6786×10^3 | 9.7321×10^7 | 5.1689×10^2 |
| | Std | 6.9515×10^3 | 2.9496×10^6 | 5.6379×10^3 | 4.3817×10^3 | 5.6047×10^3 | 5.9722×10^3 | 5.6634×10^7 | 3.0473×10^2 |
| | FEV | 1.5298×10^2 | 5.8481×10^2 | 5.2639×10^1 | 3.7545×10^−1 | 2.0682×10^3 | 3.4245×10^2 | 1.7628×10^7 | 3.0000×10^−1 |
| CEC2017_F3 | Mean | 2.7926×10^4 | 3.9541×10^4 | 4.2415×10^2 | 4.1228×10^3 | 9.9256×10^4 | 3.1930×10^3 | 2.0375×10^5 | 1.8652×10^3 |
| | Std | 8.3168×10^3 | 1.2925×10^4 | 1.6859×10^2 | 2.5277×10^3 | 1.8787×10^4 | 1.8332×10^3 | 6.1654×10^4 | 4.6252×10^−13 |
| | FEV | 1.1898×10^4 | 2.0305×10^4 | 1.0637×10^−2 | 4.0060×10^2 | 6.8241×10^4 | 5.9081×10^2 | 1.0931×10^5 | 1.5652×10^3 |
| CEC2017_F4 | Mean | 5.0176×10^2 | 5.1928×10^2 | 4.9866×10^2 | 4.9621×10^2 | 4.9478×10^2 | 4.9794×10^2 | 5.8535×10^2 | 4.0437×10^2 |
| | Std | 2.9394×10^1 | 3.2965×10^1 | 1.8208×10^1 | 2.7301×10^1 | 2.8417×10^0 | 2.5441×10^1 | 4.6642×10^1 | 0.0000×10^0 |
| | FEV | 7.1148×10^1 | 5.4783×10^1 | 7.3208×10^1 | 4.3529×10^1 | 9.0041×10^1 | 6.4576×10^1 | 1.0389×10^2 | 4.3721×10^0 |
| CEC2017_F5 | Mean | 6.1882×10^2 | 7.3835×10^2 | 6.4518×10^2 | 6.1376×10^2 | 5.9197×10^2 | 6.4690×10^2 | 8.0220×10^2 | 5.3363×10^2 |
| | Std | 3.6416×10^1 | 5.2716×10^1 | 3.3602×10^1 | 2.6023×10^1 | 9.4389×10^0 | 3.3996×10^1 | 6.2190×10^1 | 4.3588×10^0 |
| | FEV | 6.4804×10^1 | 9.2447×10^1 | 9.4521×10^1 | 5.9699×10^1 | 6.8614×10^1 | 8.2581×10^1 | 2.0024×10^2 | 3.2838×10^1 |
| CEC2017_F6 | Mean | 6.0921×10^2 | 6.4427×10^2 | 6.4042×10^2 | 6.1550×10^2 | 6.0000×10^2 | 6.0229×10^2 | 6.7349×10^2 | 6.0010×10^2 |
| | Std | 6.3719×10^0 | 1.0389×10^1 | 1.0706×10^1 | 7.2252×10^0 | 9.7157×10^−3 | 2.9894×10^0 | 1.2214×10^1 | 7.5138×10^−2 |
| | FEV | 1.5847×10^0 | 2.3438×10^1 | 2.1631×10^1 | 4.7650×10^0 | 0.0000×10^0 | 6.1249×10^−2 | 4.8646×10^1 | 0.0000×10^0 |
| CEC2017_F7 | Mean | 8.3350×10^2 | 9.9429×10^2 | 8.6975×10^2 | 9.0679×10^2 | 8.1462×10^2 | 9.9040×10^2 | 1.2773×10^3 | 8.9628×10^2 |
| | Std | 3.9154×10^1 | 9.0041×10^1 | 3.6262×10^1 | 5.4023×10^1 | 7.8481×10^0 | 8.0980×10^1 | 7.9178×10^1 | 2.8136×10^1 |
| | FEV | 8.3719×10^1 | 1.5847×10^2 | 1.0696×10^2 | 1.1479×10^2 | 9.6365×10^1 | 1.4059×10^2 | 3.5467×10^2 | 1.4163×10^2 |
| CEC2017_F8 | Mean | 9.1339×10^2 | 1.0168×10^3 | 9.2138×10^2 | 8.8497×10^2 | 8.9763×10^2 | 9.2023×10^2 | 1.0050×10^3 | 8.3930×10^2 |
| | Std | 2.9846×10^1 | 5.3370×10^1 | 4.0855×10^1 | 2.2114×10^1 | 1.2430×10^1 | 2.1571×10^1 | 5.6911×10^1 | 4.5539×10^0 |
| | FEV | 5.0871×10^1 | 1.1342×10^2 | 5.0743×10^1 | 5.3728×10^1 | 7.4855×10^1 | 8.8552×10^1 | 1.3946×10^2 | 3.4824×10^1 |
| CEC2017_F9 | Mean | 1.5316×10^3 | 4.7118×10^3 | 3.7953×10^3 | 1.9050×10^3 | 1.2497×10^3 | 3.4366×10^3 | 1.0079×10^4 | 9.0056×10^2 |
| | Std | 4.2244×10^2 | 1.7258×10^3 | 1.4379×10^3 | 6.7509×10^2 | 2.7801×10^2 | 1.0277×10^3 | 4.9202×10^3 | 3.1873×10^−1 |
| | FEV | 1.6041×10^2 | 1.5195×10^3 | 4.3583×10^2 | 2.0039×10^2 | 7.3508×10^1 | 1.0426×10^3 | 3.4726×10^3 | 2.2055×10^−11 |
| CEC2017_F10 | Mean | 3.8240×10^3 | 6.0211×10^3 | 4.8265×10^3 | 7.7712×10^3 | 4.7344×10^3 | 4.3533×10^3 | 6.5874×10^3 | 6.6151×10^3 |
| | Std | 6.4217×10^2 | 9.2260×10^2 | 6.1594×10^2 | 6.6894×10^2 | 2.6319×10^2 | 6.7587×10^2 | 7.7786×10^2 | 4.8435×10^2 |
| | FEV | 1.5094×10^3 | 3.1788×10^3 | 2.7820×10^3 | 3.7052×10^3 | 3.0434×10^3 | 2.2264×10^3 | 3.8403×10^3 | 4.6036×10^3 |
| CEC2017_F11 | Mean | 1.2546×10^3 | 1.4409×10^3 | 1.3000×10^3 | 1.2379×10^3 | 1.7796×10^3 | 1.1910×10^3 | 2.9283×10^3 | 1.1454×10^3 |
| | Std | 5.8729×10^1 | 1.0587×10^2 | 4.6247×10^1 | 4.3927×10^1 | 4.7503×10^2 | 3.4546×10^1 | 1.3831×10^3 | 8.5309×10^−1 |
| | FEV | 3.9386×10^1 | 1.8816×10^2 | 1.0572×10^2 | 4.6258×10^1 | 1.7625×10^2 | 3.9416×10^1 | 4.7664×10^2 | 3.6000×10^1 |
| CEC2017_F12 | Mean | 1.5552×10^7 | 2.9351×10^7 | 6.8690×10^6 | 8.1761×10^4 | 5.2830×10^6 | 1.0133×10^6 | 1.1565×10^8 | 6.4213×10^4 |
| | Std | 1.6079×10^7 | 6.5031×10^7 | 4.0910×10^6 | 9.1597×10^4 | 2.3165×10^6 | 6.4323×10^5 | 6.4869×10^7 | 4.4952×10^4 |
| | FEV | 3.3228×10^4 | 2.9947×10^5 | 1.3244×10^6 | 1.4293×10^4 | 1.9861×10^6 | 2.3776×10^5 | 7.7703×10^6 | 1.3400×10^4 |
| CEC2017_F13 | Mean | 5.8820×10^5 | 1.5221×10^6 | 8.8660×10^4 | 1.7018×10^4 | 4.3172×10^5 | 1.9841×10^4 | 2.4293×10^5 | 2.9001×10^4 |
| | Std | 7.9095×10^5 | 1.8575×10^6 | 4.4136×10^4 | 1.6763×10^4 | 3.8698×10^5 | 1.7751×10^4 | 3.6378×10^5 | 1.9471×10^4 |
| | FEV | 3.2675×10^3 | 2.2544×10^4 | 1.6615×10^4 | 4.5050×10^2 | 6.6059×10^3 | 3.7402×10^2 | 2.9488×10^4 | 1.2620×10^3 |
| CEC2017_F14 | Mean | 2.0610×10^4 | 5.6819×10^4 | 1.4945×10^4 | 1.1986×10^4 | 3.0733×10^5 | 2.8129×10^4 | 1.8931×10^6 | 1.7391×10^3 |
| | Std | 2.5389×10^4 | 5.3657×10^4 | 1.1383×10^4 | 1.2171×10^4 | 1.8644×10^5 | 2.7480×10^4 | 1.9436×10^6 | 1.8178×10^1 |
| | FEV | 7.2600×10^2 | 2.4485×10^3 | 2.2189×10^3 | 8.9801×10^2 | 3.7286×10^4 | 6.7992×10^2 | 4.9313×10^4 | 3.3579×10^2 |
| CEC2017_F15 | Mean | 1.1524×10^5 | 9.4785×10^4 | 7.3017×10^4 | 4.7432×10^3 | 1.0602×10^5 | 4.2974×10^3 | 1.4447×10^5 | 1.5873×10^3 |
| | Std | 1.5235×10^5 | 1.1166×10^5 | 6.7115×10^4 | 4.0869×10^3 | 8.9260×10^4 | 3.3613×10^3 | 1.4741×10^5 | 5.1624×10^1 |
| | FEV | 7.7956×10^2 | 1.4919×10^3 | 2.3426×10^4 | 2.2252×10^2 | 2.0646×10^4 | 6.9976×10^1 | 2.6461×10^4 | 7.7890×10^1 |
| CEC2017_F16 | Mean | 2.7576×10^3 | 3.2647×10^3 | 2.5474×10^3 | 2.4505×10^3 | 2.5264×10^3 | 2.8006×10^3 | 3.8146×10^3 | 1.8520×10^3 |
| | Std | 3.9252×10^2 | 4.4304×10^2 | 3.0154×10^2 | 3.3139×10^2 | 1.5372×10^2 | 2.6103×10^2 | 5.3913×10^2 | 1.3876×10^−12 |
| | FEV | 5.5222×10^2 | 8.4239×10^2 | 4.8915×10^2 | 1.8320×10^2 | 6.2821×10^2 | 5.3819×10^2 | 1.1894×10^3 | 2.5201×10^2 |
| CEC2017_F17 | Mean | 2.2504×10^3 | 2.4617×10^3 | 2.1528×10^3 | 1.9619×10^3 | 2.0370×10^3 | 2.2582×10^3 | 2.5932×10^3 | 1.7633×10^3 |
| | Std | 1.9061×10^2 | 2.5665×10^2 | 1.5217×10^2 | 1.3291×10^2 | 1.0840×10^2 | 1.9367×10^2 | 2.8520×10^2 | 1.6006×10^1 |
| | FEV | 1.4067×10^2 | 1.4015×10^2 | 2.1115×10^2 | 6.5383×10^1 | 1.5264×10^2 | 2.8154×10^2 | 2.9875×10^2 | 3.7609×10^1 |
| CEC2017_F18 | Mean | 8.8315×10^5 | 1.7061×10^6 | 2.0075×10^5 | 3.2319×10^5 | 9.1437×10^5 | 1.5366×10^5 | 5.2409×10^6 | 1.3902×10^5 |
| | Std | 8.4346×10^5 | 2.5172×10^6 | 1.1560×10^5 | 2.5306×10^5 | 5.9038×10^5 | 1.1665×10^5 | 5.0927×10^6 | 9.7676×10^4 |
| | FEV | 7.6403×10^4 | 4.6738×10^4 | 3.2624×10^4 | 7.4136×10^4 | 1.4604×10^5 | 2.7265×10^4 | 9.7192×10^4 | 6.0568×10^3 |
| CEC2017_F19 | Mean | 7.5455×10^5 | 4.5696×10^5 | 7.0117×10^5 | 7.1248×10^3 | 6.4803×10^4 | 9.6477×10^3 | 6.0112×10^6 | 1.9815×10^3 |
| | Std | 1.1022×10^6 | 6.9521×10^5 | 4.3723×10^5 | 5.3588×10^3 | 3.2324×10^4 | 1.0424×10^4 | 6.1383×10^6 | 9.2504×10^−13 |
| | FEV | 4.8615×10^2 | 8.3936×10^2 | 1.1094×10^4 | 7.6690×10^1 | 3.0306×10^3 | 2.5298×10^2 | 2.5268×10^4 | 8.1462×10^1 |
| CEC2017_F20 | Mean | 2.5598×10^3 | 2.5509×10^3 | 2.5201×10^3 | 2.3121×10^3 | 2.3024×10^3 | 2.5035×10^3 | 2.7778×10^3 | 2.1702×10^3 |
| | Std | 2.0656×10^2 | 2.1082×10^2 | 1.6065×10^2 | 1.2509×10^2 | 1.0797×10^2 | 1.6943×10^2 | 2.5342×10^2 | 3.2149×10^0 |
| | FEV | 9.8836×10^1 | 1.8688×10^2 | 2.0014×10^2 | 1.0238×10^2 | 5.2641×10^1 | 1.5974×10^2 | 3.2116×10^2 | 1.6652×10^2 |
| CEC2017_F21 | Mean | 2.4163×10^3 | 2.5492×10^3 | 2.4291×10^3 | 2.3890×10^3 | 2.3918×10^3 | 2.4273×10^3 | 2.5706×10^3 | 2.4378×10^3 |
| | Std | 2.9850×10^1 | 6.2977×10^1 | 3.8786×10^1 | 2.6984×10^1 | 2.5175×10^1 | 3.2751×10^1 | 5.6185×10^1 | 2.6464×10^1 |
| | FEV | 2.5595×10^2 | 3.2172×10^2 | 2.6957×10^2 | 2.3983×10^2 | 1.6685×10^2 | 2.6328×10^2 | 3.6268×10^2 | 1.6500×10^2 |
| CEC2017_F22 | Mean | 4.5892×10^3 | 4.2780×10^3 | 5.1976×10^3 | 2.7059×10^3 | 4.4198×10^3 | 2.3013×10^3 | 6.5914×10^3 | 2.3007×10^3 |
| | Std | 1.7805×10^3 | 2.0568×10^3 | 2.1077×10^3 | 1.5310×10^3 | 1.8632×10^3 | 1.9340×10^0 | 2.1882×10^3 | 1.2356×10^0 |
| | FEV | 1.0013×10^2 | 1.0060×10^2 | 1.0000×10^2 | 1.0000×10^2 | 1.0000×10^2 | 1.0003×10^2 | 2.2039×10^2 | 1.0000×10^2 |
| CEC2017_F23 | Mean | 2.8423×10^3 | 2.9816×10^3 | 2.7687×10^3 | 2.7882×10^3 | 2.7651×10^3 | 2.7923×10^3 | 3.0865×10^3 | 2.7175×10^3 |
| | Std | 5.0849×10^1 | 9.0469×10^1 | 3.6346×10^1 | 3.7569×10^1 | 1.5082×10^1 | 3.0857×10^1 | 1.0749×10^2 | 6.4086×10^0 |
| | FEV | 4.3670×10^2 | 4.7053×10^2 | 4.0921×10^2 | 4.1249×10^2 | 4.2546×10^2 | 4.4529×10^2 | 5.6973×10^2 | 4.1630×10^2 |
| CEC2017_F24 | Mean | 2.9720×10^3 | 3.1301×10^3 | 2.9304×10^3 | 2.9448×10^3 | 2.9926×10^3 | 3.0378×10^3 | 3.1978×10^3 | 2.9477×10^3 |
| | Std | 5.4106×10^1 | 8.7737×10^1 | 3.0603×10^1 | 3.3581×10^1 | 1.7276×10^1 | 5.8788×10^1 | 9.9094×10^1 | 2.2396×10^1 |
| | FEV | 5.0120×10^2 | 5.9157×10^2 | 4.7639×10^2 | 4.9562×10^2 | 5.5719×10^2 | 5.4004×10^2 | 5.9375×10^2 | 4.9662×10^2 |
| CEC2017_F25 | Mean | 2.8880×10^3 | 2.9424×10^3 | 2.9017×10^3 | 2.9065×10^3 | 2.8907×10^3 | 2.9018×10^3 | 3.0148×10^3 | 2.8896×10^3 |
| | Std | 4.0438×10^0 | 4.7463×10^1 | 2.0943×10^1 | 1.9977×10^1 | 1.9521×10^0 | 2.0768×10^1 | 3.8347×10^1 | 6.0523×10^0 |
| | FEV | 3.8363×10^2 | 3.8411×10^2 | 3.8691×10^2 | 3.8475×10^2 | 3.8807×10^2 | 3.8380×10^2 | 4.3809×10^2 | 3.8300×10^2 |
| CEC2017_F26 | Mean | 6.1434×10^3 | 6.2145×10^3 | 4.4528×10^3 | 4.7555×10^3 | 4.5716×10^3 | 4.0319×10^3 | 7.8932×10^3 | 4.8406×10^3 |
| | Std | 6.9520×10^2 | 9.5024×10^2 | 1.2151×10^3 | 1.1539×10^3 | 3.4823×10^2 | 1.7759×10^3 | 1.1293×10^3 | 4.0412×10^2 |
| | FEV | 1.9104×10^3 | 7.7789×10^2 | 2.0000×10^2 | 2.0000×10^2 | 9.2522×10^2 | 2.0008×10^2 | 3.4186×10^3 | 1.4534×10^3 |
| CEC2017_F27 | Mean | 3.2844×10^3 | 3.3124×10^3 | 3.2456×10^3 | 3.2620×10^3 | 3.2136×10^3 | 3.2670×10^3 | 3.4302×10^3 | 3.2306×10^3 |
| | Std | 4.1969×10^1 | 7.1139×10^1 | 2.6213×10^1 | 3.8189×10^1 | 4.6635×10^0 | 2.3967×10^1 | 1.0217×10^2 | 1.4670×10^1 |
| | FEV | 5.2317×10^2 | 5.2360×10^2 | 5.0525×10^2 | 5.2010×10^2 | 5.0656×10^2 | 5.2238×10^2 | 5.5078×10^2 | 4.9882×10^2 |
| CEC2017_F28 | Mean | 3.2402×10^3 | 3.4542×10^3 | 3.2208×10^3 | 3.2205×10^3 | 3.2877×10^3 | 3.2135×10^3 | 3.3762×10^3 | 3.2129×10^3 |
| | Std | 4.5262×10^1 | 4.1670×10^2 | 2.5381×10^1 | 2.0421×10^1 | 2.0831×10^1 | 1.4817×10^1 | 3.9377×10^1 | 1.7401×10^1 |
| | FEV | 4.0248×10^2 | 4.2633×10^2 | 3.7469×10^2 | 3.9105×10^2 | 4.4680×10^2 | 3.8273×10^2 | 5.0563×10^2 | 3.7427×10^2 |
| CEC2017_F29 | Mean | 4.1437×10^3 | 4.3638×10^3 | 4.0972×10^3 | 3.8030×10^3 | 3.6447×10^3 | 3.8094×10^3 | 4.9466×10^3 | 3.6595×10^3 |
| | Std | 3.1847×10^2 | 3.8875×10^2 | 2.4977×10^2 | 2.1535×10^2 | 1.0393×10^2 | 1.9355×10^2 | 4.8903×10^2 | 1.8744×10^2 |
| | FEV | 7.4529×10^2 | 6.9844×10^2 | 7.7266×10^2 | 5.2767×10^2 | 5.1652×10^2 | 5.5320×10^2 | 8.6103×10^2 | 5.0545×10^2 |
| CEC2017_F30 | Mean | 6.8100×10^5 | 2.3906×10^6 | 2.5414×10^6 | 8.1994×10^3 | 5.2204×10^4 | 9.5798×10^3 | 1.9892×10^7 | 6.9760×10^3 |
| | Std | 1.3978×10^6 | 2.6885×10^6 | 1.7516×10^6 | 2.7841×10^3 | 3.0443×10^4 | 2.9485×10^3 | 1.2561×10^7 | 1.1063×10^3 |
| | FEV | 3.2696×10^3 | 2.2775×10^4 | 2.8069×10^5 | 2.2858×10^3 | 1.1279×10^4 | 3.4988×10^3 | 3.4805×10^6 | 2.2809×10^3 |
| Rank First | | 2 | 0 | 2 | 2 | 4 | 1 | 0 | 18 |
| Mean Rank | | 4.7931 | 6.6207 | 4.2414 | 3.1724 | 3.6897 | 3.7931 | 7.7586 | 1.9310 |
| Final Rank | | 6 | 7 | 5 | 2 | 3 | 4 | 8 | 1 |
Table 7. Numerical results of the different algorithms on solving the CEC2017 test function set (Dim = 50).

| Problem | Metric | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA | SNDSO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CEC2017_F1 | Mean | 6.9187×10^6 | 7.4556×10^7 | 8.0054×10^3 | 1.4179×10^6 | 2.9963×10^6 | 1.2886×10^5 | 6.6916×10^8 | 3.5994×10^2 |
| | Std | 2.1993×10^7 | 6.7658×10^7 | 9.6800×10^3 | 4.8212×10^6 | 1.0251×10^7 | 1.0840×10^5 | 4.5185×10^8 | 2.1919×10^2 |
| | FEV | 4.6129×10^4 | 7.0033×10^1 | 1.0451×10^0 | 1.2442×10^3 | 9.8313×10^3 | 1.2159×10^4 | 2.5159×10^8 | 7.4084×10^1 |
| CEC2017_F3 | Mean | 1.2267×10^5 | 1.5214×10^5 | 3.4862×10^4 | 8.1620×10^4 | 2.6856×10^5 | 3.2442×10^4 | 1.7291×10^5 | 7.7868×10^4 |
| | Std | 1.4828×10^4 | 2.1483×10^4 | 1.1200×10^4 | 1.8059×10^4 | 3.3418×10^4 | 8.7966×10^3 | 5.0815×10^4 | 1.4224×10^4 |
| | FEV | 9.0859×10^4 | 7.7104×10^4 | 1.3097×10^4 | 4.1987×10^4 | 2.1575×10^5 | 1.7594×10^4 | 1.0912×10^5 | 1.2900×10^4 |
| CEC2017_F4 | Mean | 6.0746×10^2 | 8.7549×10^2 | 5.8011×10^2 | 5.8780×10^2 | 6.4978×10^2 | 5.5063×10^2 | 1.0277×10^3 | 4.7390×10^2 |
| | Std | 6.9045×10^1 | 6.5721×10^2 | 4.4475×10^1 | 5.3160×10^1 | 2.1518×10^1 | 4.6665×10^1 | 1.5149×10^2 | 1.9662×10^1 |
| | FEV | 7.4978×10^1 | 1.9135×10^2 | 7.5973×10^1 | 5.8647×10^1 | 2.0447×10^2 | 7.9359×10^1 | 3.9024×10^2 | 4.1900×10^1 |
| CEC2017_F5 | Mean | 7.5092×10^2 | 9.7259×10^2 | 8.2475×10^2 | 7.4058×10^2 | 7.9370×10^2 | 8.2279×10^2 | 9.5229×10^2 | 6.2193×10^2 |
| | Std | 5.5773×10^1 | 9.3113×10^1 | 7.8566×10^1 | 2.9907×10^1 | 1.6782×10^1 | 4.0968×10^1 | 5.6026×10^1 | 1.7461×10^1 |
| | FEV | 1.7688×10^2 | 2.6013×10^2 | 1.8116×10^2 | 1.6417×10^2 | 2.6179×10^2 | 2.3185×10^2 | 3.6051×10^2 | 9.7653×10^1 |
| CEC2017_F6 | Mean | 6.2409×10^2 | 6.6478×10^2 | 6.5591×10^2 | 6.3224×10^2 | 6.0011×10^2 | 6.0872×10^2 | 6.8597×10^2 | 6.0086×10^2 |
| | Std | 7.4489×10^0 | 9.9216×10^0 | 8.7479×10^0 | 6.3110×10^0 | 1.7096×10^−1 | 9.2411×10^0 | 1.0239×10^1 | 5.2345×10^−1 |
| | FEV | 9.2075×10^0 | 4.8632×10^1 | 4.0624×10^1 | 2.2247×10^1 | 1.7858×10^−4 | 1.1440×10^0 | 6.2466×10^1 | 3.5901×10^−8 |
| CEC2017_F7 | Mean | 9.2640×10^2 | 1.3833×10^3 | 1.1295×10^3 | 1.2087×10^3 | 1.0142×10^3 | 1.3585×10^3 | 1.7766×10^3 | 1.1296×10^3 |
| | Std | 3.7379×10^1 | 1.3256×10^2 | 8.6389×10^1 | 8.6085×10^1 | 1.3079×10^1 | 1.6460×10^2 | 1.0199×10^2 | 4.4823×10^1 |
| | FEV | 1.7303×10^2 | 4.3420×10^2 | 2.6969×10^2 | 3.9655×10^2 | 2.8788×10^2 | 3.9277×10^2 | 8.6269×10^2 | 3.4330×10^2 |
| CEC2017_F8 | Mean | 1.0545×10^3 | 1.2686×10^3 | 1.0908×10^3 | 1.0483×10^3 | 1.0923×10^3 | 1.1190×10^3 | 1.2704×10^3 | 1.0200×10^3 |
| | Std | 5.5812×10^1 | 8.1043×10^1 | 6.6961×10^1 | 2.9157×10^1 | 1.6373×10^1 | 4.0922×10^1 | 6.8501×10^1 | 6.9378×10^−13 |
| | FEV | 1.3085×10^2 | 3.2021×10^2 | 1.6218×10^2 | 1.9803×10^2 | 2.5740×10^2 | 2.5571×10^2 | 3.5310×10^2 | 2.1996×10^2 |
| CEC2017_F9 | Mean | 3.8962×10^3 | 1.5487×10^4 | 1.2444×10^4 | 1.0414×10^4 | 6.8864×10^3 | 1.0718×10^4 | 2.8280×10^4 | 9.7836×10^2 |
| | Std | 1.4863×10^3 | 6.2322×10^3 | 3.4933×10^3 | 4.5990×10^3 | 1.1708×10^3 | 2.3244×10^3 | 9.1805×10^3 | 2.0102×10^1 |
| | FEV | 1.1897×10^3 | 4.1300×10^3 | 4.7410×10^3 | 1.5465×10^3 | 3.9690×10^3 | 5.5871×10^3 | 1.5885×10^4 | 5.1329×10^1 |
| CEC2017_F10 | Mean | 6.2709×10^3 | 1.0253×10^4 | 7.8386×10^3 | 1.3325×10^4 | 9.4782×10^3 | 6.8469×10^3 | 1.0782×10^4 | 1.1965×10^4 |
| | Std | 1.2427×10^3 | 2.0070×10^3 | 1.0851×10^3 | 1.7078×10^3 | 4.3288×10^2 | 8.3414×10^2 | 1.3334×10^3 | 5.6182×10^2 |
| | FEV | 3.0879×10^3 | 6.5402×10^3 | 4.0211×10^3 | 7.8028×10^3 | 7.3751×10^3 | 4.2084×10^3 | 7.1276×10^3 | 9.7405×10^3 |
| CEC2017_F11 | Mean | 1.4158×10^3 | 1.7496×10^3 | 1.4536×10^3 | 1.3465×10^3 | 4.5340×10^3 | 1.3105×10^3 | 2.4296×10^3 | 1.2678×10^3 |
| | Std | 6.0419×10^1 | 3.5635×10^2 | 9.0311×10^1 | 5.2032×10^1 | 1.3635×10^3 | 1.3283×10^2 | 4.6763×10^2 | 2.7636×10^1 |
| | FEV | 1.7758×10^2 | 1.7585×10^2 | 2.2206×10^2 | 1.5636×10^2 | 9.5835×10^2 | 8.7702×10^1 | 7.0643×10^2 | 1.1480×10^2 |
| CEC2017_F12 | Mean | 4.8320×10^7 | 2.1037×10^8 | 3.5248×10^7 | 1.8175×10^6 | 9.5338×10^7 | 6.3373×10^6 | 5.3407×10^8 | 1.1190×10^6 |
| | Std | 5.1193×10^7 | 1.4152×10^8 | 2.3263×10^7 | 1.5567×10^6 | 4.1975×10^7 | 4.8968×10^6 | 2.8960×10^8 | 6.1463×10^5 |
| | FEV | 3.1048×10^6 | 1.9643×10^7 | 5.9824×10^6 | 2.4000×10^5 | 2.1259×10^7 | 9.8789×10^5 | 1.0417×10^8 | 2.3581×10^5 |
| CEC2017_F13 | Mean | 2.0862×10^6 | 6.8353×10^6 | 1.5866×10^5 | 1.1587×10^4 | 2.9434×10^6 | 1.9045×10^4 | 8.7713×10^6 | 5.4735×10^3 |
| | Std | 3.8336×10^6 | 9.5547×10^6 | 1.3681×10^5 | 8.4234×10^3 | 3.1917×10^6 | 1.3722×10^4 | 1.0043×10^7 | 5.5104×10^3 |
| | FEV | 6.5636×10^3 | 5.3423×10^4 | 3.6804×10^4 | 1.1738×10^3 | 5.2960×10^4 | 2.2740×10^3 | 9.6757×10^5 | 4.0670×10^2 |
| CEC2017_F14 | Mean | 3.2331×10^5 | 1.7161×10^6 | 1.1267×10^5 | 7.0472×10^4 | 1.6706×10^6 | 1.9638×10^5 | 1.9814×10^6 | 6.2322×10^4 |
| | Std | 3.9031×10^5 | 2.0354×10^6 | 9.6405×10^4 | 6.9838×10^4 | 7.6149×10^5 | 1.6106×10^5 | 1.3820×10^6 | 5.6623×10^4 |
| | FEV | 2.1032×10^4 | 7.0635×10^3 | 1.8319×10^4 | 2.8652×10^3 | 3.6429×10^5 | 7.1815×10^3 | 3.8966×10^5 | 2.8123×10^3 |
| CEC2017_F15 | Mean | 5.1368×10^5 | 9.9105×10^6 | 7.3417×10^4 | 1.0053×10^4 | 3.2126×10^5 | 1.0031×10^4 | 8.9389×10^5 | 5.2488×10^3 |
| | Std | 9.5256×10^5 | 4.1805×10^7 | 5.8702×10^4 | 6.3377×10^3 | 5.4247×10^5 | 7.0448×10^3 | 1.2026×10^6 | 2.6713×10^3 |
| | FEV | 1.7157×10^3 | 9.8815×10^3 | 1.7637×10^4 | 6.5671×10^2 | 1.8698×10^4 | 2.9670×10^2 | 4.8331×10^4 | 2.3731×10^2 |
| CEC2017_F16 | Mean | 3.7962×10^3 | 4.6345×10^3 | 3.4240×10^3 | 2.8993×10^3 | 3.4476×10^3 | 3.3279×10^3 | 5.4054×10^3 | 2.9814×10^3 |
| | Std | 5.0250×10^2 | 5.5184×10^2 | 4.2740×10^2 | 4.1400×10^2 | 3.1194×10^2 | 4.5405×10^2 | 9.2631×10^2 | 4.6252×10^−13 |
| | FEV | 1.0685×10^3 | 1.8452×10^3 | 1.1548×10^3 | 6.3845×10^2 | 1.2548×10^3 | 9.3065×10^2 | 2.3727×10^3 | 1.3814×10^3 |
| CEC2017_F17 | Mean | 3.4495×10^3 | 3.9403×10^3 | 3.3833×10^3 | 2.9285×10^3 | 3.2845×10^3 | 3.3346×10^3 | 4.1581×10^3 | 2.7380×10^3 |
| | Std | 3.8626×10^2 | 4.1380×10^2 | 2.6674×10^2 | 2.3148×10^2 | 2.5534×10^2 | 3.3678×10^2 | 5.5396×10^2 | 9.1598×10^1 |
| | FEV | 8.6558×10^2 | 1.0934×10^3 | 1.2407×10^3 | 7.8211×10^2 | 6.5466×10^2 | 1.0820×10^3 | 1.5645×10^3 | 9.7232×10^2 |
| CEC2017_F18 | Mean | 4.1225×10^6 | 5.6941×10^6 | 1.4912×10^6 | 1.3823×10^6 | 6.3645×10^6 | 1.4399×10^6 | 1.4816×10^7 | 7.5714×10^5 |
| | Std | 4.1214×10^6 | 5.5995×10^6 | 1.2275×10^6 | 1.5643×10^6 | 2.8237×10^6 | 9.7924×10^5 | 1.4508×10^7 | 8.5371×10^5 |
| | FEV | 1.8350×10^5 | 2.3902×10^5 | 1.4596×10^5 | 1.5753×10^5 | 1.1319×10^6 | 1.2854×10^5 | 3.0932×10^6 | 4.9942×10^4 |
| CEC2017_F19 | Mean | 7.4091×10^5 | 1.7095×10^6 | 2.2760×10^6 | 1.6291×10^4 | 9.4985×10^4 | 2.0754×10^4 | 4.1310×10^6 | 1.5621×10^4 |
| | Std | 1.5208×10^6 | 2.1557×10^6 | 1.1176×10^6 | 8.8087×10^3 | 7.2886×10^4 | 1.0205×10^4 | 4.5821×10^6 | 1.0684×10^4 |
| | FEV | 8.0289×10^2 | 2.2697×10^4 | 3.0789×10^4 | 2.6735×10^2 | 1.6106×10^4 | 2.1731×10^3 | 3.7980×10^4 | 2.4295×10^2 |
| CEC2017_F20 | Mean | 3.1515×10^3 | 3.5412×10^3 | 3.1829×10^3 | 3.2374×10^3 | 3.1152×10^3 | 3.3052×10^3 | 3.7748×10^3 | 2.8105×10^3 |
| | Std | 2.6731×10^2 | 3.0993×10^2 | 3.3733×10^2 | 3.8443×10^2 | 1.3312×10^2 | 3.1463×10^2 | 4.1470×10^2 | 4.6252×10^−13 |
| | FEV | 5.9017×10^2 | 7.2049×10^2 | 4.0829×10^2 | 4.5669×10^2 | 9.1138×10^2 | 6.9915×10^2 | 8.1249×10^2 | 8.1047×10^2 |
| CEC2017_F21 | Mean | 2.5461×10^3 | 2.8119×10^3 | 2.5766×10^3 | 2.5385×10^3 | 2.6145×10^3 | 2.5708×10^3 | 2.9476×10^3 | 2.5539×10^3 |
| | Std | 5.6417×10^1 | 1.1103×10^2 | 5.9013×10^1 | 4.0144×10^1 | 1.7521×10^1 | 4.5508×10^1 | 1.1614×10^2 | 3.4725×10^0 |
| | FEV | 3.4883×10^2 | 4.0307×10^2 | 3.5726×10^2 | 3.8067×10^2 | 4.7680×10^2 | 3.7283×10^2 | 6.2497×10^2 | 3.4814×10^2 |
| CEC2017_F22 | Mean | 8.4564×10^3 | 1.1198×10^4 | 9.6777×10^3 | 1.2086×10^4 | 1.1192×10^4 | 9.4874×10^3 | 1.3106×10^4 | 1.3641×10^4 |
| | Std | 1.1669×10^3 | 1.4956×10^3 | 9.5175×10^2 | 5.2042×10^3 | 4.0537×10^2 | 9.6525×10^2 | 1.2338×10^3 | 4.5931×10^2 |
| | FEV | 3.6249×10^3 | 6.1686×10^3 | 6.0139×10^3 | 1.1986×10^2 | 8.0207×10^3 | 5.8052×10^3 | 8.5548×10^3 | 1.0371×10^4 |
| CEC2017_F23 | Mean | 3.1523×10^3 | 3.3931×10^3 | 2.9982×10^3 | 3.0791×10^3 | 3.0451×10^3 | 3.0896×10^3 | 3.7187×10^3 | 2.9921×10^3 |
| | Std | 1.0811×10^2 | 1.7441×10^2 | 6.3050×10^1 | 9.7515×10^1 | 1.9855×10^1 | 7.2203×10^1 | 1.9317×10^2 | 1.3876×10^−12 |
| | FEV | 6.6113×10^2 | 7.9823×10^2 | 6.0478×10^2 | 5.9582×10^2 | 7.1553×10^2 | 6.5093×10^2 | 1.1280×10^3 | 6.9206×10^2 |
| CEC2017_F24 | Mean | 3.2843×10^3 | 3.6589×10^3 | 3.1353×10^3 | 3.2079×10^3 | 3.3189×10^3 | 3.4237×10^3 | 3.7508×10^3 | 3.2258×10^3 |
| | Std | 1.0628×10^2 | 1.8788×10^2 | 5.4742×10^1 | 7.5968×10^1 | 2.7426×10^1 | 8.6706×10^1 | 1.4682×10^2 | 4.5243×10^1 |
| | FEV | 7.0727×10^2 | 9.0542×10^2 | 6.3868×10^2 | 6.8689×10^2 | 8.6670×10^2 | 8.4307×10^2 | 1.0048×10^3 | 7.4271×10^2 |
| CEC2017_F25 | Mean | 3.0912×10^3 | 3.1443×10^3 | 3.0614×10^3 | 3.1051×10^3 | 3.0832×10^3 | 3.0959×10^3 | 3.3785×10^3 | 3.0403×10^3 |
| | Std | 3.8128×10^1 | 5.8831×10^1 | 2.7413×10^1 | 2.8687×10^1 | 2.5435×10^1 | 2.1759×10^1 | 1.2905×10^2 | 6.5164×10^0 |
| | FEV | 5.1172×10^2 | 5.5999×10^2 | 5.1491×10^2 | 5.5081×10^2 | 5.4066×10^2 | 5.4804×10^2 | 6.2332×10^2 | 5.1054×10^2 |
| CEC2017_F26 | Mean | 8.5278×10^3 | 8.9769×10^3 | 5.5778×10^3 | 8.8695×10^3 | 6.8610×10^3 | 5.5005×10^3 | 1.3040×10^4 | 4.9156×10^3 |
| | Std | 1.0729×10^3 | 2.0750×10^3 | 2.1942×10^3 | 1.8372×10^3 | 2.8307×10^2 | 3.0981×10^3 | 1.5662×10^3 | 1.2641×10^2 |
| | FEV | 3.8610×10^3 | 1.1398×10^3 | 3.0000×10^2 | 1.6879×10^3 | 3.3537×10^3 | 3.0798×10^2 | 6.8374×10^3 | 2.2925×10^3 |
| CEC2017_F27 | Mean | 3.7602×10^3 | 3.9652×10^3 | 3.5008×10^3 | 3.7192×10^3 | 3.3612×10^3 | 3.6790×10^3 | 4.3759×10^3 | 3.3521×10^3 |
| | Std | 1.2393×10^2 | 1.6747×10^2 | 1.0366×10^2 | 1.5364×10^2 | 3.2691×10^1 | 1.6853×10^2 | 4.6830×10^2 | 6.1003×10^−1 |
| | FEV | 8.4008×10^2 | 9.7285×10^2 | 6.5867×10^2 | 7.2196×10^2 | 5.8008×10^2 | 7.4193×10^2 | 1.0044×10^3 | 5.7849×10^2 |
| CEC2017_F28 | Mean | 3.3211×10^3 | 4.7840×10^3 | 3.3189×10^3 | 3.3785×10^3 | 3.7817×10^3 | 3.3655×10^3 | 4.0351×10^3 | 3.3181×10^3 |
| | Std | 2.8688×10^1 | 1.8489×10^3 | 3.4704×10^1 | 4.3927×10^1 | 8.2740×10^2 | 3.5919×10^1 | 2.3307×10^2 | 6.6993×10^−1 |
| | FEV | 4.6491×10^2 | 5.4597×10^2 | 4.5936×10^2 | 4.9127×10^2 | 5.5535×10^2 | 4.9703×10^2 | 9.0185×10^2 | 4.5730×10^2 |
CEC2017_F29Mean5.1569 × 1036.1511 × 1035.0660 × 1034.8640 × 1034.2899 × 1034.3632 × 1037.8499 × 1034.1893 × 103
Std5.4474 × 1021.1348 × 1033.9181 × 1024.8829 × 1021.9336 × 1022.8144 × 1021.1466 × 1034.4961 × 102
FEV1.3573 × 1031.7673 × 1031.3787 × 1031.2392 × 1038.9380 × 1028.7807 × 1023.3468 × 1036.3319 × 102
CEC2017_F30Mean8.7878 × 1063.1713 × 1076.3653 × 1071.0419 × 1062.3058 × 1061.0052 × 1061.7282 × 1081.1149 × 106
Std1.9232 × 1072.7160 × 1071.2583 × 1071.9536 × 1055.3038 × 1052.9745 × 1055.6393 × 1072.5320 × 105
FEV8.2780 × 1055.1846 × 1063.0958 × 1077.5182 × 1051.2682 × 1067.2445 × 1057.8861 × 1077.0535 × 105
Rank First 301212020
Mean Rank4.2069 6.7931 3.8621 3.5517 4.4483 3.5172 7.7241 1.8966
Final Rank57436281
Table 8. Numerical results of the different algorithms on solving the CEC2017 test function set (Dim = 100).
Problem | Metric | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA | SNDSO
CEC2017_F1 | Mean | 1.4868 × 10⁷ | 1.2012 × 10¹⁰ | 1.7449 × 10⁴ | 4.7467 × 10⁹ | 1.4211 × 10⁹ | 1.6540 × 10⁷ | 1.3190 × 10¹⁰ | 4.1728 × 10⁶
 | Std | 9.8621 × 10⁶ | 3.0708 × 10¹⁰ | 1.8279 × 10⁴ | 3.2922 × 10⁹ | 1.0071 × 10⁹ | 8.0912 × 10⁶ | 2.9395 × 10⁹ | 4.3502 × 10⁶
 | FEV | 4.7215 × 10⁶ | 9.3130 × 10⁸ | 1.4521 × 10² | 5.2721 × 10⁸ | 1.1191 × 10⁸ | 6.2539 × 10⁶ | 8.1442 × 10⁹ | 1.0330 × 10⁶
CEC2017_F3 | Mean | 3.1826 × 10⁵ | 3.5045 × 10⁵ | 3.3968 × 10⁵ | 3.5863 × 10⁵ | 7.5227 × 10⁵ | 1.7160 × 10⁵ | 8.8699 × 10⁵ | 2.8479 × 10⁵
 | Std | 1.6276 × 10⁴ | 2.1552 × 10⁴ | 5.6005 × 10⁴ | 3.1149 × 10⁴ | 6.8323 × 10⁴ | 1.8385 × 10⁴ | 1.4860 × 10⁵ | 3.0252 × 10⁴
 | FEV | 2.7353 × 10⁵ | 3.1973 × 10⁵ | 2.1703 × 10⁵ | 3.1219 × 10⁵ | 6.0402 × 10⁵ | 1.3234 × 10⁵ | 6.0987 × 10⁵ | 1.1546 × 10⁵
CEC2017_F4 | Mean | 8.0983 × 10² | 7.1986 × 10³ | 7.4117 × 10² | 1.5056 × 10³ | 1.4770 × 10³ | 8.8926 × 10² | 3.4724 × 10³ | 7.1327 × 10²
 | Std | 7.7472 × 10¹ | 1.2426 × 10⁴ | 7.0513 × 10¹ | 6.0066 × 10² | 2.7408 × 10² | 6.1654 × 10¹ | 8.1815 × 10² | 5.7007 × 10⁰
 | FEV | 2.7359 × 10² | 6.8363 × 10² | 2.1066 × 10² | 6.3863 × 10² | 7.4973 × 10² | 3.1354 × 10² | 1.8707 × 10³ | 2.1023 × 10²
CEC2017_F5 | Mean | 1.1981 × 10³ | 1.4930 × 10³ | 1.3424 × 10³ | 1.1735 × 10³ | 1.5732 × 10³ | 1.3275 × 10³ | 1.6677 × 10³ | 1.0779 × 10³
 | Std | 1.3490 × 10² | 2.3215 × 10² | 9.8293 × 10¹ | 7.7532 × 10¹ | 3.4885 × 10¹ | 8.0051 × 10¹ | 1.0943 × 10² | 2.9857 × 10⁰
 | FEV | 4.2868 × 10² | 6.7197 × 10² | 6.4373 × 10² | 5.5087 × 10² | 9.7439 × 10² | 5.8016 × 10² | 9.3461 × 10² | 4.2197 × 10²
CEC2017_F6 | Mean | 6.3946 × 10² | 6.6900 × 10² | 6.6458 × 10² | 6.4883 × 10² | 6.1423 × 10² | 6.2281 × 10² | 6.9402 × 10² | 6.1089 × 10²
 | Std | 6.4676 × 10⁰ | 1.3498 × 10¹ | 6.7739 × 10⁰ | 3.5262 × 10⁰ | 3.1049 × 10⁰ | 1.1149 × 10¹ | 9.6397 × 10⁰ | 3.9044 × 10⁰
 | FEV | 2.7805 × 10¹ | 5.1826 × 10¹ | 5.2668 × 10¹ | 4.1098 × 10¹ | 7.5212 × 10⁰ | 5.7378 × 10⁰ | 7.9053 × 10¹ | 4.0310 × 10⁰
CEC2017_F7 | Mean | 1.3392 × 10³ | 2.7088 × 10³ | 2.0119 × 10³ | 2.3081 × 10³ | 1.8157 × 10³ | 2.7683 × 10³ | 3.5326 × 10³ | 2.0046 × 10³
 | Std | 6.6652 × 10¹ | 2.3312 × 10² | 2.1511 × 10² | 2.2947 × 10² | 4.0286 × 10¹ | 2.2595 × 10² | 2.0664 × 10² | 9.0887 × 10¹
 | FEV | 4.9533 × 10² | 1.5441 × 10³ | 9.5752 × 10² | 1.0520 × 10³ | 1.0377 × 10³ | 1.5885 × 10³ | 2.4294 × 10³ | 1.1673 × 10³
CEC2017_F8 | Mean | 1.5331 × 10³ | 1.8716 × 10³ | 1.6860 × 10³ | 1.5646 × 10³ | 1.8610 × 10³ | 1.7292 × 10³ | 2.1121 × 10³ | 1.7468 × 10³
 | Std | 1.1504 × 10² | 2.2745 × 10² | 1.2936 × 10² | 6.2945 × 10¹ | 3.3031 × 10¹ | 1.0320 × 10² | 1.1743 × 10² | 6.3554 × 10¹
 | FEV | 5.0844 × 10² | 7.9423 × 10² | 6.0075 × 10² | 5.8318 × 10² | 9.8365 × 10² | 7.3487 × 10² | 1.1584 × 10³ | 8.2957 × 10²
CEC2017_F9 | Mean | 1.3834 × 10⁴ | 4.8142 × 10⁴ | 2.8214 × 10⁴ | 5.0604 × 10⁴ | 6.0725 × 10⁴ | 2.4006 × 10⁴ | 5.4802 × 10⁴ | 7.5261 × 10³
 | Std | 4.6874 × 10³ | 1.9125 × 10⁴ | 4.0159 × 10³ | 7.3175 × 10³ | 7.3588 × 10³ | 1.1907 × 10³ | 1.4342 × 10⁴ | 4.3529 × 10²
 | FEV | 7.0570 × 10³ | 2.0040 × 10⁴ | 1.9304 × 10⁴ | 3.2202 × 10⁴ | 4.4448 × 10⁴ | 1.8221 × 10⁴ | 3.9125 × 10⁴ | 5.8194 × 10³
CEC2017_F10 | Mean | 1.7802 × 10⁴ | 2.1714 × 10⁴ | 1.5705 × 10⁴ | 3.0993 × 10⁴ | 2.5432 × 10⁴ | 1.4925 × 10⁴ | 2.4489 × 10⁴ | 1.3257 × 10⁴
 | Std | 6.5174 × 10³ | 5.0846 × 10³ | 2.0305 × 10³ | 1.3443 × 10³ | 5.5155 × 10² | 1.2892 × 10³ | 2.1063 × 10³ | 3.0233 × 10²
 | FEV | 9.5836 × 10³ | 1.2903 × 10⁴ | 8.9378 × 10³ | 2.4359 × 10⁴ | 2.3096 × 10⁴ | 1.1515 × 10⁴ | 1.9736 × 10⁴ | 8.8399 × 10³
CEC2017_F11 | Mean | 1.9876 × 10⁴ | 9.4062 × 10⁴ | 4.0115 × 10³ | 5.2462 × 10³ | 7.9567 × 10⁴ | 2.5881 × 10⁴ | 1.1377 × 10⁵ | 3.6060 × 10³
 | Std | 5.3712 × 10³ | 4.3169 × 10⁴ | 4.9910 × 10² | 1.5042 × 10³ | 1.5685 × 10⁴ | 8.2292 × 10³ | 4.3960 × 10⁴ | 8.1875 × 10¹
 | FEV | 8.9681 × 10³ | 2.4329 × 10⁴ | 1.8129 × 10³ | 2.1088 × 10³ | 4.6559 × 10⁴ | 1.2416 × 10⁴ | 5.8445 × 10⁴ | 2.4255 × 10³
CEC2017_F12 | Mean | 2.9106 × 10⁸ | 1.1369 × 10⁹ | 3.5080 × 10⁸ | 2.0413 × 10⁸ | 1.2527 × 10⁹ | 5.2596 × 10⁷ | 2.5511 × 10⁹ | 1.9809 × 10⁷
 | Std | 2.4654 × 10⁸ | 6.7029 × 10⁸ | 1.4736 × 10⁸ | 4.2336 × 10⁸ | 4.9437 × 10⁸ | 2.0206 × 10⁷ | 8.1236 × 10⁸ | 1.0721 × 10⁷
 | FEV | 2.9009 × 10⁷ | 2.1694 × 10⁸ | 1.0297 × 10⁸ | 2.4188 × 10⁷ | 4.7365 × 10⁸ | 1.9081 × 10⁷ | 8.4941 × 10⁸ | 7.3683 × 10⁶
CEC2017_F13 | Mean | 5.9408 × 10⁶ | 7.8630 × 10⁷ | 7.9810 × 10⁴ | 1.8059 × 10⁴ | 1.4897 × 10⁷ | 7.1682 × 10⁴ | 2.1065 × 10⁷ | 9.1063 × 10³
 | Std | 9.2130 × 10⁶ | 1.1189 × 10⁸ | 2.6687 × 10⁴ | 9.1815 × 10³ | 3.7752 × 10⁷ | 1.7234 × 10⁵ | 1.2487 × 10⁷ | 6.0468 × 10³
 | FEV | 1.3747 × 10⁴ | 1.1936 × 10⁵ | 3.9130 × 10⁴ | 5.3202 × 10³ | 4.5138 × 10⁵ | 9.1499 × 10³ | 5.9844 × 10⁶ | 2.1061 × 10³
CEC2017_F14 | Mean | 3.9787 × 10⁶ | 8.2981 × 10⁶ | 1.0560 × 10⁶ | 9.2195 × 10⁵ | 1.8961 × 10⁷ | 1.8705 × 10⁶ | 8.2176 × 10⁶ | 8.6645 × 10⁵
 | Std | 3.4041 × 10⁶ | 7.4345 × 10⁶ | 5.7090 × 10⁵ | 6.9576 × 10⁵ | 6.3840 × 10⁶ | 7.9368 × 10⁵ | 3.2630 × 10⁶ | 6.5123 × 10⁵
 | FEV | 3.2703 × 10⁵ | 1.2417 × 10⁶ | 2.9645 × 10⁵ | 1.0003 × 10⁵ | 3.7998 × 10⁶ | 8.2419 × 10⁵ | 1.6813 × 10⁶ | 1.8694 × 10⁵
CEC2017_F15 | Mean | 2.2059 × 10⁶ | 7.7609 × 10⁶ | 6.5415 × 10⁴ | 4.9141 × 10³ | 1.6483 × 10⁶ | 1.5691 × 10⁴ | 6.4756 × 10⁶ | 4.7736 × 10³
 | Std | 3.7094 × 10⁶ | 2.0048 × 10⁷ | 2.3604 × 10⁴ | 3.1590 × 10³ | 9.4550 × 10⁵ | 1.6832 × 10⁴ | 8.2966 × 10⁶ | 3.0661 × 10³
 | FEV | 3.9203 × 10³ | 8.2628 × 10⁴ | 2.4897 × 10⁴ | 6.9523 × 10² | 1.8201 × 10⁵ | 7.3170 × 10² | 6.5846 × 10⁵ | 6.5191 × 10²
CEC2017_F16 | Mean | 6.2231 × 10³ | 8.2772 × 10³ | 6.6311 × 10³ | 5.5385 × 10³ | 8.0065 × 10³ | 6.0381 × 10³ | 1.2883 × 10⁴ | 4.3093 × 10³
 | Std | 1.0154 × 10³ | 1.5317 × 10³ | 7.9175 × 10² | 6.8389 × 10² | 3.6608 × 10² | 8.1836 × 10² | 2.0352 × 10³ | 2.7394 × 10²
 | FEV | 3.1390 × 10³ | 3.8411 × 10³ | 3.7423 × 10³ | 2.4399 × 10³ | 5.6271 × 10³ | 2.5774 × 10³ | 8.1093 × 10³ | 2.4399 × 10³
CEC2017_F17 | Mean | 5.9066 × 10³ | 7.4802 × 10³ | 5.4078 × 10³ | 5.5310 × 10³ | 7.0543 × 10³ | 5.2076 × 10³ | 8.5219 × 10³ | 4.7031 × 10³
 | Std | 6.1978 × 10² | 9.7309 × 10² | 4.8077 × 10² | 6.7704 × 10² | 3.0935 × 10² | 4.7908 × 10² | 1.4958 × 10³ | 6.8721 × 10¹
 | FEV | 2.9537 × 10³ | 3.9465 × 10³ | 2.7332 × 10³ | 2.4419 × 10³ | 4.7217 × 10³ | 2.6648 × 10³ | 4.7590 × 10³ | 2.9166 × 10³
CEC2017_F18 | Mean | 6.9131 × 10⁶ | 1.0726 × 10⁷ | 2.3631 × 10⁶ | 1.6111 × 10⁶ | 2.3376 × 10⁷ | 2.7110 × 10⁶ | 6.7446 × 10⁶ | 1.1198 × 10⁶
 | Std | 5.3414 × 10⁶ | 8.7218 × 10⁶ | 1.2875 × 10⁶ | 8.5861 × 10⁵ | 7.8307 × 10⁶ | 9.1411 × 10⁵ | 2.4345 × 10⁶ | 2.5593 × 10⁵
 | FEV | 1.1602 × 10⁶ | 8.4044 × 10⁵ | 7.3946 × 10⁵ | 3.3682 × 10⁵ | 1.0564 × 10⁷ | 1.2178 × 10⁶ | 2.0269 × 10⁶ | 3.2756 × 10⁵
CEC2017_F19 | Mean | 7.6261 × 10⁶ | 1.2565 × 10⁷ | 8.7799 × 10⁶ | 6.5990 × 10³ | 4.7735 × 10⁶ | 9.7817 × 10³ | 3.4037 × 10⁷ | 3.4885 × 10³
 | Std | 1.1230 × 10⁷ | 1.2509 × 10⁷ | 4.3892 × 10⁶ | 1.0065 × 10⁴ | 2.4319 × 10⁶ | 8.8265 × 10³ | 2.3245 × 10⁷ | 1.4784 × 10³
 | FEV | 1.0252 × 10³ | 6.4900 × 10⁵ | 1.0673 × 10⁶ | 5.5047 × 10² | 1.8684 × 10⁶ | 4.9506 × 10² | 7.8735 × 10⁶ | 3.7586 × 10²
CEC2017_F20 | Mean | 5.5846 × 10³ | 6.3989 × 10³ | 5.3158 × 10³ | 6.6005 × 10³ | 5.6453 × 10³ | 5.2582 × 10³ | 6.5832 × 10³ | 4.8005 × 10³
 | Std | 1.0286 × 10³ | 5.3245 × 10² | 6.3847 × 10² | 9.7729 × 10² | 2.8089 × 10² | 5.6959 × 10² | 7.2198 × 10² | 4.5339 × 10²
 | FEV | 2.2168 × 10³ | 3.4322 × 10³ | 1.9833 × 10³ | 1.7423 × 10³ | 2.9388 × 10³ | 2.2310 × 10³ | 3.2995 × 10³ | 2.2949 × 10³
CEC2017_F21 | Mean | 3.0570 × 10³ | 3.8122 × 10³ | 3.1587 × 10³ | 3.1650 × 10³ | 3.4220 × 10³ | 3.0887 × 10³ | 4.0694 × 10³ | 3.0026 × 10³
 | Std | 1.3963 × 10² | 2.0786 × 10² | 1.4560 × 10² | 1.3860 × 10² | 3.7380 × 10¹ | 9.5127 × 10¹ | 1.6996 × 10² | 3.6710 × 10¹
 | FEV | 6.8050 × 10² | 1.2228 × 10³ | 7.6721 × 10² | 7.9855 × 10² | 1.2575 × 10³ | 7.7488 × 10² | 1.6789 × 10³ | 6.7067 × 10²
CEC2017_F22 | Mean | 1.8219 × 10⁴ | 2.4529 × 10⁴ | 1.8144 × 10⁴ | 3.3452 × 10⁴ | 2.7358 × 10⁴ | 1.9550 × 10⁴ | 2.8377 × 10⁴ | 1.4152 × 10⁴
 | Std | 5.9709 × 10³ | 4.1212 × 10³ | 1.2555 × 10³ | 1.3092 × 10³ | 6.5514 × 10² | 1.5628 × 10³ | 1.8139 × 10³ | 4.8339 × 10²
 | FEV | 9.8554 × 10³ | 1.5735 × 10⁴ | 1.4081 × 10⁴ | 2.7075 × 10⁴ | 2.3742 × 10⁴ | 1.4398 × 10⁴ | 2.3383 × 10⁴ | 1.1282 × 10⁴
CEC2017_F23 | Mean | 3.7221 × 10³ | 4.5920 × 10³ | 3.5889 × 10³ | 3.8700 × 10³ | 3.6461 × 10³ | 3.4056 × 10³ | 4.8941 × 10³ | 3.3200 × 10³
 | Std | 1.8177 × 10² | 2.6556 × 10² | 1.2188 × 10² | 1.8632 × 10² | 3.3515 × 10¹ | 9.4038 × 10¹ | 2.5252 × 10² | 1.0746 × 10²
 | FEV | 1.1322 × 10³ | 1.6966 × 10³ | 1.0322 × 10³ | 1.2778 × 10³ | 1.2450 × 10³ | 9.1690 × 10² | 2.1345 × 10³ | 8.4364 × 10²
CEC2017_F24 | Mean | 4.7531 × 10³ | 5.6227 × 10³ | 4.1241 × 10³ | 4.7745 × 10³ | 4.2536 × 10³ | 4.3269 × 10³ | 6.1524 × 10³ | 4.1195 × 10³
 | Std | 2.7775 × 10² | 5.3680 × 10² | 1.5204 × 10² | 2.7944 × 10² | 4.5410 × 10¹ | 1.4636 × 10² | 3.7959 × 10² | 5.9064 × 10⁰
 | FEV | 1.8741 × 10³ | 2.3076 × 10³ | 1.4539 × 10³ | 1.8498 × 10³ | 1.7744 × 10³ | 1.6858 × 10³ | 3.1289 × 10³ | 1.3184 × 10³
CEC2017_F25 | Mean | 3.5054 × 10³ | 4.1471 × 10³ | 3.4398 × 10³ | 3.9761 × 10³ | 4.2853 × 10³ | 3.5404 × 10³ | 4.9228 × 10³ | 3.4132 × 10³
 | Std | 9.1402 × 10¹ | 1.9847 × 10³ | 6.3834 × 10¹ | 2.4419 × 10² | 2.0112 × 10² | 9.5560 × 10¹ | 2.9775 × 10² | 1.6637 × 10¹
 | FEV | 8.4263 × 10² | 9.4549 × 10² | 8.3100 × 10² | 1.0874 × 10³ | 1.4336 × 10³ | 8.4758 × 10² | 1.9113 × 10³ | 9.0037 × 10²
CEC2017_F26 | Mean | 1.9256 × 10⁴ | 2.0619 × 10⁴ | 1.4997 × 10⁴ | 2.7735 × 10⁴ | 1.6571 × 10⁴ | 1.9742 × 10⁴ | 3.4396 × 10⁴ | 1.2943 × 10⁴
 | Std | 2.2483 × 10³ | 3.3522 × 10³ | 2.9101 × 10³ | 1.9883 × 10³ | 4.2314 × 10² | 7.2992 × 10³ | 3.3903 × 10³ | 1.0609 × 10³
 | FEV | 1.3375 × 10⁴ | 1.2864 × 10⁴ | 5.1434 × 10² | 2.1666 × 10⁴ | 1.3152 × 10⁴ | 1.4614 × 10³ | 2.5222 × 10⁴ | 9.0576 × 10³
CEC2017_F27 | Mean | 4.0880 × 10³ | 4.5178 × 10³ | 3.7788 × 10³ | 4.4496 × 10³ | 3.6224 × 10³ | 3.8888 × 10³ | 5.4459 × 10³ | 3.6176 × 10³
 | Std | 1.9366 × 10² | 4.7320 × 10² | 1.3347 × 10² | 3.6220 × 10² | 4.2204 × 10¹ | 1.4642 × 10² | 6.8519 × 10² | 8.1102 × 10¹
 | FEV | 1.0197 × 10³ | 1.2296 × 10³ | 8.5464 × 10² | 1.0258 × 10³ | 7.8690 × 10² | 9.3425 × 10² | 1.6822 × 10³ | 7.8548 × 10²
CEC2017_F28 | Mean | 3.5962 × 10³ | 1.3182 × 10⁴ | 3.5170 × 10³ | 4.4893 × 10³ | 1.5828 × 10⁴ | 3.7064 × 10³ | 6.1924 × 10³ | 3.6687 × 10³
 | Std | 5.7846 × 10¹ | 7.8642 × 10³ | 4.3478 × 10¹ | 4.5355 × 10² | 8.4106 × 10¹ | 9.2488 × 10¹ | 5.7373 × 10² | 6.1108 × 10¹
 | FEV | 6.8568 × 10² | 1.1881 × 10³ | 6.2523 × 10² | 9.8239 × 10² | 1.2906 × 10⁴ | 7.5927 × 10² | 2.4250 × 10³ | 6.2190 × 10²
CEC2017_F29 | Mean | 8.6005 × 10³ | 1.0030 × 10⁴ | 9.3258 × 10³ | 8.0419 × 10³ | 7.9940 × 10³ | 6.9411 × 10³ | 1.5746 × 10⁴ | 6.4913 × 10³
 | Std | 1.0500 × 10³ | 1.4044 × 10³ | 6.8766 × 10² | 6.6941 × 10² | 3.1488 × 10² | 5.2665 × 10² | 2.0600 × 10³ | 1.8257 × 10²
 | FEV | 4.0623 × 10³ | 4.9307 × 10³ | 5.0186 × 10³ | 3.9790 × 10³ | 4.2054 × 10³ | 2.9818 × 10³ | 9.3382 × 10³ | 3.3646 × 10³
CEC2017_F30 | Mean | 4.9725 × 10⁵ | 4.7897 × 10⁷ | 7.2061 × 10⁷ | 1.2749 × 10⁵ | 9.4935 × 10⁵ | 3.2649 × 10⁵ | 7.3937 × 10⁸ | 5.9693 × 10⁴
 | Std | 3.2208 × 10⁵ | 3.8380 × 10⁷ | 3.2944 × 10⁷ | 1.1631 × 10⁵ | 4.0375 × 10⁵ | 2.3659 × 10⁵ | 3.3343 × 10⁸ | 5.0741 × 10⁴
 | FEV | 1.1235 × 10⁵ | 7.2025 × 10⁶ | 2.4246 × 10⁷ | 1.3064 × 10⁴ | 2.1795 × 10⁵ | 6.7028 × 10⁴ | 2.1584 × 10⁸ | 1.3063 × 10⁴
Rank First | 2 | 0 | 2 | 0 | 0 | 1 | 0 | 24
Mean Rank | 3.7931 | 6.5862 | 3.5172 | 4.4828 | 5.4138 | 3.3793 | 7.4828 | 1.3448
Final Rank | 4 | 7 | 3 | 5 | 6 | 2 | 8 | 1
Table 9. p-values of CEC2017 test functions (Dim = 30).
Problem | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA
CEC2017_F1 | 3.3068 × 10⁻⁸/- | 9.7373 × 10⁻¹¹/- | 2.6520 × 10⁻⁶/- | 2.7506 × 10⁻⁵/- | 2.8900 × 10⁻¹¹/- | 5.4788 × 10⁻¹⁰/- | 2.9617 × 10⁻¹¹/-
CEC2017_F3 | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/+ | 2.0511 × 10⁻⁵/- | 1.2118 × 10⁻¹²/- | 1.5363 × 10⁻⁴/- | 1.2118 × 10⁻¹²/-
CEC2017_F4 | 3.3593 × 10⁻¹¹/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/-
CEC2017_F5 | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/-
CEC2017_F6 | 3.0180 × 10⁻¹¹/- | 3.0180 × 10⁻¹¹/- | 3.0180 × 10⁻¹¹/- | 3.0180 × 10⁻¹¹/- | 2.1183 × 10⁻¹¹/+ | 1.1731 × 10⁻⁹/- | 3.0180 × 10⁻¹¹/-
CEC2017_F7 | 1.5964 × 10⁻⁷/+ | 1.1674 × 10⁻⁵/- | 2.7548 × 10⁻³/+ | 4.2039 × 10⁻¹/= | 3.0199 × 10⁻¹¹/+ | 3.8349 × 10⁻⁶/- | 3.0199 × 10⁻¹¹/-
CEC2017_F8 | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/-
CEC2017_F9 | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/-
CEC2017_F10 | 3.0199 × 10⁻¹¹/+ | 6.6689 × 10⁻³/+ | 1.7769 × 10⁻¹⁰/+ | 2.4386 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/+ | 4.6427 × 10⁻¹/=
CEC2017_F11 | 1.2354 × 10⁻⁹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 4.9074 × 10⁻⁹/- | 1.4634 × 10⁻¹¹/-
CEC2017_F12 | 2.6695 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 8.4180 × 10⁻¹/= | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/-
CEC2017_F13 | 1.3345 × 10⁻¹/= | 5.5727 × 10⁻¹⁰/- | 4.3106 × 10⁻⁸/- | 6.0971 × 10⁻³/+ | 1.7294 × 10⁻⁷/- | 4.0595 × 10⁻²/+ | 9.9186 × 10⁻¹¹/-
CEC2017_F14 | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/-
CEC2017_F15 | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 3.0159 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 7.7129 × 10⁻¹¹/- | 1.7203 × 10⁻¹²/-
CEC2017_F16 | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 3.3593 × 10⁻¹¹/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/-
CEC2017_F17 | 2.8628 × 10⁻¹¹/- | 2.8628 × 10⁻¹¹/- | 2.8628 × 10⁻¹¹/- | 9.4287 × 10⁻¹¹/- | 2.8628 × 10⁻¹¹/- | 2.8628 × 10⁻¹¹/- | 2.8628 × 10⁻¹¹/-
CEC2017_F18 | 6.0104 × 10⁻⁸/- | 2.0283 × 10⁻⁷/- | 2.9205 × 10⁻²/- | 2.6806 × 10⁻⁴/- | 1.2057 × 10⁻¹⁰/- | 6.6273 × 10⁻¹/= | 6.7220 × 10⁻¹⁰/-
CEC2017_F19 | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 3.3593 × 10⁻¹¹/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/-
CEC2017_F20 | 4.7536 × 10⁻¹⁰/- | 2.5174 × 10⁻¹¹/- | 2.5174 × 10⁻¹¹/- | 1.6724 × 10⁻⁷/- | 7.3944 × 10⁻⁹/- | 4.7536 × 10⁻¹⁰/- | 2.5174 × 10⁻¹¹/-
CEC2017_F21 | 3.8481 × 10⁻³/+ | 3.8249 × 10⁻⁹/- | 2.0621 × 10⁻¹/= | 7.0881 × 10⁻⁸/+ | 4.1825 × 10⁻⁹/+ | 2.4581 × 10⁻¹/= | 5.4941 × 10⁻¹¹/-
CEC2017_F22 | 3.8116 × 10⁻⁹/- | 6.6672 × 10⁻¹¹/- | 7.0948 × 10⁻⁹/- | 3.3622 × 10⁻⁵/- | 2.0265 × 10⁻⁹/- | 1.1652 × 10⁻⁵/- | 3.0066 × 10⁻¹¹/-
CEC2017_F23 | 2.1549 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 2.6318 × 10⁻⁸/- | 6.2561 × 10⁻¹¹/- | 3.7688 × 10⁻¹²/- | 2.1549 × 10⁻¹²/- | 1.7203 × 10⁻¹²/-
CEC2017_F24 | 6.3533 × 10⁻²/= | 3.0199 × 10⁻¹¹/- | 2.6077 × 10⁻²/+ | 3.7904 × 10⁻¹/= | 1.1737 × 10⁻⁹/- | 1.4294 × 10⁻⁸/- | 3.0199 × 10⁻¹¹/-
CEC2017_F25 | 5.5611 × 10⁻⁴/+ | 9.2603 × 10⁻⁹/- | 1.8577 × 10⁻¹/= | 8.2919 × 10⁻⁶/- | 2.1540 × 10⁻⁶/- | 3.5012 × 10⁻³/- | 3.0199 × 10⁻¹¹/-
CEC2017_F26 | 3.8249 × 10⁻⁹/- | 1.4294 × 10⁻⁸/- | 8.7663 × 10⁻¹/= | 3.4783 × 10⁻¹/= | 6.7869 × 10⁻²/= | 7.9590 × 10⁻³/+ | 3.0199 × 10⁻¹¹/-
CEC2017_F27 | 1.4733 × 10⁻⁷/- | 4.1825 × 10⁻⁹/- | 6.3772 × 10⁻³/- | 5.8587 × 10⁻⁶/- | 9.8329 × 10⁻⁸/+ | 7.0881 × 10⁻⁸/- | 3.6897 × 10⁻¹¹/-
CEC2017_F28 | 1.1738 × 10⁻³/- | 6.0658 × 10⁻¹¹/- | 2.5188 × 10⁻¹/= | 1.4945 × 10⁻¹/= | 4.0772 × 10⁻¹¹/- | 5.3951 × 10⁻¹/= | 3.0199 × 10⁻¹¹/-
CEC2017_F29 | 1.1023 × 10⁻⁸/- | 1.5465 × 10⁻⁹/- | 6.5183 × 10⁻⁹/- | 3.6709 × 10⁻³/- | 4.5530 × 10⁻¹/= | 3.5638 × 10⁻⁴/- | 6.6955 × 10⁻¹¹/-
CEC2017_F30 | 2.3738 × 10⁻⁸/- | 2.9916 × 10⁻¹¹/- | 2.9916 × 10⁻¹¹/- | 8.2293 × 10⁻²/= | 2.9916 × 10⁻¹¹/- | 4.7229 × 10⁻⁶/- | 2.9916 × 10⁻¹¹/-
+/-/= | 4/23/2 | 1/28/0 | 4/21/4 | 2/21/6 | 5/22/2 | 3/23/3 | 0/28/1
Table 10. p-values of CEC2017 test functions (Dim = 50).
Problem | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA
CEC2017_F1 | 2.9119 × 10⁻¹¹/- | 5.3982 × 10⁻¹⁰/- | 2.7222 × 10⁻⁷/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/- | 2.9119 × 10⁻¹¹/-
CEC2017_F3 | 1.4643 × 10⁻¹⁰/- | 9.9186 × 10⁻¹¹/- | 6.0658 × 10⁻¹¹/+ | 3.6322 × 10⁻¹/= | 3.0199 × 10⁻¹¹/- | 3.6897 × 10⁻¹¹/+ | 3.3384 × 10⁻¹¹/-
CEC2017_F4 | 2.0107 × 10⁻¹⁰/- | 2.8003 × 10⁻¹¹/- | 2.0107 × 10⁻¹⁰/- | 2.0107 × 10⁻¹⁰/- | 2.8003 × 10⁻¹¹/- | 2.0946 × 10⁻⁹/- | 2.8003 × 10⁻¹¹/-
CEC2017_F5 | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/-
CEC2017_F6 | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 4.1997 × 10⁻¹⁰/+ | 1.2057 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/-
CEC2017_F7 | 3.0199 × 10⁻¹¹/+ | 2.8716 × 10⁻¹⁰/- | 9.7052 × 10⁻¹/= | 2.6806 × 10⁻⁴/- | 3.3384 × 10⁻¹¹/+ | 4.5726 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/-
CEC2017_F8 | 4.5559 × 10⁻³/- | 1.2118 × 10⁻¹²/- | 2.2081 × 10⁻⁶/- | 1.9139 × 10⁻⁷/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/- | 1.2118 × 10⁻¹²/-
CEC2017_F9 | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/-
CEC2017_F10 | 3.0199 × 10⁻¹¹/+ | 4.6390 × 10⁻⁵/+ | 3.0199 × 10⁻¹¹/+ | 3.3681 × 10⁻⁵/- | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/+ | 9.2113 × 10⁻⁵/+
CEC2017_F11 | 9.6911 × 10⁻¹¹/- | 9.6911 × 10⁻¹¹/- | 2.9468 × 10⁻¹¹/- | 1.9801 × 10⁻⁸/- | 2.9468 × 10⁻¹¹/- | 6.4124 × 10⁻¹/= | 2.9468 × 10⁻¹¹/-
CEC2017_F12 | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 4.0595 × 10⁻²/- | 3.0199 × 10⁻¹¹/- | 2.2273 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/-
CEC2017_F13 | 6.1210 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 1.9963 × 10⁻⁵/- | 3.0199 × 10⁻¹¹/- | 3.0811 × 10⁻⁸/- | 3.0199 × 10⁻¹¹/-
CEC2017_F14 | 7.7387 × 10⁻⁶/- | 2.0338 × 10⁻⁹/- | 3.3874 × 10⁻²/- | 6.1001 × 10⁻¹/= | 3.0199 × 10⁻¹¹/- | 5.4620 × 10⁻⁶/- | 3.0199 × 10⁻¹¹/-
CEC2017_F15 | 1.9916 × 10⁻⁷/- | 2.9321 × 10⁻¹¹/- | 2.9321 × 10⁻¹¹/- | 3.1632 × 10⁻³/- | 2.9321 × 10⁻¹¹/- | 4.6104 × 10⁻³/- | 2.9321 × 10⁻¹¹/-
CEC2017_F16 | 7.4716 × 10⁻¹⁰/- | 1.2118 × 10⁻¹²/- | 1.5363 × 10⁻⁴/- | 1.5722 × 10⁻¹/= | 7.4716 × 10⁻¹⁰/- | 9.2965 × 10⁻⁴/- | 1.2118 × 10⁻¹²/-
CEC2017_F17 | 8.7020 × 10⁻⁸/- | 6.0335 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 6.3408 × 10⁻⁴/- | 1.0856 × 10⁻⁹/- | 1.6157 × 10⁻¹⁰/- | 2.2048 × 10⁻¹¹/-
CEC2017_F18 | 1.1747 × 10⁻⁴/- | 3.0103 × 10⁻⁷/- | 2.1566 × 10⁻³/- | 1.1228 × 10⁻²/- | 8.1527 × 10⁻¹¹/- | 3.1573 × 10⁻⁵/- | 4.5043 × 10⁻¹¹/-
CEC2017_F19 | 3.5012 × 10⁻³/- | 6.0658 × 10⁻¹¹/- | 4.0772 × 10⁻¹¹/- | 5.7929 × 10⁻¹/= | 4.1997 × 10⁻¹⁰/- | 4.0595 × 10⁻²/- | 3.6897 × 10⁻¹¹/-
CEC2017_F20 | 7.4716 × 10⁻¹⁰/- | 3.3593 × 10⁻¹¹/- | 1.3341 × 10⁻⁸/- | 2.2081 × 10⁻⁶/- | 1.2118 × 10⁻¹²/- | 7.4716 × 10⁻¹⁰/- | 1.2118 × 10⁻¹²/-
CEC2017_F21 | 2.8839 × 10⁻¹/= | 4.6957 × 10⁻¹⁰/- | 8.3657 × 10⁻²/= | 3.3109 × 10⁻²/+ | 2.4823 × 10⁻¹¹/- | 7.5997 × 10⁻²/= | 2.4823 × 10⁻¹¹/-
CEC2017_F22 | 3.0199 × 10⁻¹¹/+ | 2.3897 × 10⁻⁸/+ | 3.0199 × 10⁻¹¹/+ | 3.3285 × 10⁻¹/= | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/+ | 5.3685 × 10⁻²/=
CEC2017_F23 | 3.3593 × 10⁻¹¹/- | 1.2118 × 10⁻¹²/- | 1.0000 × 10⁰/= | 1.5363 × 10⁻⁴/- | 1.2118 × 10⁻¹²/- | 1.3341 × 10⁻⁸/- | 1.2118 × 10⁻¹²/-
CEC2017_F24 | 7.2884 × 10⁻³/- | 3.3384 × 10⁻¹¹/- | 4.3106 × 10⁻⁸/+ | 2.3399 × 10⁻¹/= | 8.1014 × 10⁻¹⁰/- | 1.2057 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/-
CEC2017_F25 | 4.3222 × 10⁻⁷/- | 2.2048 × 10⁻¹¹/- | 2.9781 × 10⁻³/- | 2.2048 × 10⁻¹¹/- | 4.2336 × 10⁻¹⁰/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/-
CEC2017_F26 | 1.7203 × 10⁻¹²/- | 9.7354 × 10⁻¹⁰/- | 6.0696 × 10⁻²/= | 5.0695 × 10⁻¹¹/- | 1.7203 × 10⁻¹²/- | 6.0696 × 10⁻²/= | 1.7203 × 10⁻¹²/-
CEC2017_F27 | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 7.2672 × 10⁻²/= | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/-
CEC2017_F28 | 8.8753 × 10⁻¹/= | 2.2048 × 10⁻¹¹/- | 1.8273 × 10⁻¹/= | 6.6892 × 10⁻⁹/- | 2.2048 × 10⁻¹¹/- | 4.2336 × 10⁻¹⁰/- | 2.2048 × 10⁻¹¹/-
CEC2017_F29 | 1.8500 × 10⁻⁸/- | 9.9186 × 10⁻¹¹/- | 2.6015 × 10⁻⁸/- | 1.8608 × 10⁻⁶/- | 5.0120 × 10⁻²/= | 2.7086 × 10⁻²/- | 3.0199 × 10⁻¹¹/-
CEC2017_F30 | 5.5329 × 10⁻⁸/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 4.1191 × 10⁻¹/= | 9.9186 × 10⁻¹¹/- | 1.3272 × 10⁻²/+ | 3.0199 × 10⁻¹¹/-
+/-/= | 3/24/2 | 2/27/0 | 4/20/5 | 1/21/7 | 4/23/2 | 4/22/3 | 1/27/1
Table 11. p-values of CEC2017 test functions (Dim = 100).
Problem | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA
CEC2017_F1 | 1.1023 × 10⁻⁸/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 1.8567 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/-
CEC2017_F3 | 4.7445 × 10⁻⁶/- | 1.9568 × 10⁻¹⁰/- | 5.2650 × 10⁻⁵/- | 2.8716 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/-
CEC2017_F4 | 1.5876 × 10⁻⁹/- | 1.7203 × 10⁻¹²/- | 2.6350 × 10⁻²/- | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 1.9256 × 10⁻¹²/- | 1.7203 × 10⁻¹²/-
CEC2017_F5 | 1.6513 × 10⁻³/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.6572 × 10⁻⁸/- | 1.4634 × 10⁻¹¹/- | 6.7467 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/-
CEC2017_F6 | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 9.0307 × 10⁻⁴/- | 5.0912 × 10⁻⁶/- | 3.0199 × 10⁻¹¹/-
CEC2017_F7 | 3.0199 × 10⁻¹¹/+ | 3.0199 × 10⁻¹¹/- | 8.5338 × 10⁻¹/= | 3.5201 × 10⁻⁷/- | 1.4643 × 10⁻¹⁰/+ | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/-
CEC2017_F8 | 2.2273 × 10⁻⁹/+ | 1.2235 × 10⁻¹/= | 6.3533 × 10⁻²/= | 9.9186 × 10⁻¹¹/+ | 1.0702 × 10⁻⁹/- | 5.5923 × 10⁻¹/= | 3.0199 × 10⁻¹¹/-
CEC2017_F9 | 4.9347 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/- | 2.7047 × 10⁻¹¹/-
CEC2017_F10 | 1.8155 × 10⁻³/- | 2.2048 × 10⁻¹¹/- | 1.2245 × 10⁻⁷/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 1.9603 × 10⁻⁷/- | 2.2048 × 10⁻¹¹/-
CEC2017_F11 | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.8032 × 10⁻⁵/- | 6.6759 × 10⁻⁸/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/-
CEC2017_F12 | 8.9934 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 9.7555 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 1.3111 × 10⁻⁸/- | 3.0199 × 10⁻¹¹/-
CEC2017_F13 | 1.3289 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 8.1975 × 10⁻⁷/- | 3.0199 × 10⁻¹¹/- | 7.3803 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/-
CEC2017_F14 | 7.5991 × 10⁻⁷/- | 3.1589 × 10⁻¹⁰/- | 5.7460 × 10⁻²/= | 8.6499 × 10⁻¹/= | 3.0199 × 10⁻¹¹/- | 1.2860 × 10⁻⁶/- | 4.9752 × 10⁻¹¹/-
CEC2017_F15 | 6.1210 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 9.0000 × 10⁻¹/= | 3.0199 × 10⁻¹¹/- | 9.5139 × 10⁻⁶/- | 3.0199 × 10⁻¹¹/-
CEC2017_F16 | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 1.4634 × 10⁻¹¹/- | 2.5707 × 10⁻¹⁰/- | 1.4634 × 10⁻¹¹/- | 2.9596 × 10⁻¹⁰/- | 1.4634 × 10⁻¹¹/-
CEC2017_F17 | 1.6157 × 10⁻¹⁰/- | 2.2048 × 10⁻¹¹/- | 3.7806 × 10⁻⁸/- | 1.9703 × 10⁻⁶/- | 2.2048 × 10⁻¹¹/- | 2.6932 × 10⁻⁶/- | 2.2048 × 10⁻¹¹/-
CEC2017_F18 | 4.5411 × 10⁻¹⁰/- | 4.5411 × 10⁻¹⁰/- | 3.2721 × 10⁻⁵/- | 1.8350 × 10⁻¹/= | 2.3890 × 10⁻¹¹/- | 1.0672 × 10⁻¹⁰/- | 2.3890 × 10⁻¹¹/-
CEC2017_F19 | 1.9568 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 6.9724 × 10⁻³/- | 3.0199 × 10⁻¹¹/- | 3.0939 × 10⁻⁶/- | 3.0199 × 10⁻¹¹/-
CEC2017_F20 | 1.4342 × 10⁻³/- | 2.3890 × 10⁻¹¹/- | 1.4342 × 10⁻³/- | 2.6148 × 10⁻⁸/- | 1.1607 × 10⁻⁹/- | 2.9720 × 10⁻³/- | 6.5151 × 10⁻¹¹/-
CEC2017_F21 | 3.3582 × 10⁻¹/= | 2.2048 × 10⁻¹¹/- | 9.3276 × 10⁻⁷/- | 4.3222 × 10⁻⁷/- | 2.2048 × 10⁻¹¹/- | 1.1190 × 10⁻⁴/- | 2.2048 × 10⁻¹¹/-
CEC2017_F22 | 4.6669 × 10⁻⁴/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/-
CEC2017_F23 | 9.9186 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 2.6695 × 10⁻⁹/- | 3.0199 × 10⁻¹¹/- | 3.3384 × 10⁻¹¹/- | 2.3800 × 10⁻³/- | 3.0199 × 10⁻¹¹/-
CEC2017_F24 | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 6.4334 × 10⁻¹/= | 1.7203 × 10⁻¹²/- | 1.7203 × 10⁻¹²/- | 5.6323 × 10⁻¹¹/- | 1.7203 × 10⁻¹²/-
CEC2017_F25 | 1.6338 × 10⁻⁵/- | 2.2048 × 10⁻¹¹/- | 5.3869 × 10⁻²/= | 2.2048 × 10⁻¹¹/- | 2.2048 × 10⁻¹¹/- | 2.7240 × 10⁻⁹/- | 2.2048 × 10⁻¹¹/-
CEC2017_F26 | 2.3890 × 10⁻¹¹/- | 2.3890 × 10⁻¹¹/- | 4.5278 × 10⁻⁷/- | 2.3890 × 10⁻¹¹/- | 2.3890 × 10⁻¹¹/- | 8.5522 × 10⁻⁶/- | 2.3890 × 10⁻¹¹/-
CEC2017_F27 | 6.6955 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 2.6784 × 10⁻⁶/- | 4.5043 × 10⁻¹¹/- | 6.4142 × 10⁻¹/= | 7.3803 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/-
CEC2017_F28 | 5.2650 × 10⁻⁵/+ | 3.0199 × 10⁻¹¹/- | 7.3891 × 10⁻¹¹/+ | 3.6897 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 1.3732 × 10⁻¹/= | 3.0199 × 10⁻¹¹/-
CEC2017_F29 | 2.5872 × 10⁻¹¹/- | 2.5872 × 10⁻¹¹/- | 2.5872 × 10⁻¹¹/- | 2.5872 × 10⁻¹¹/- | 2.5872 × 10⁻¹¹/- | 2.1447 × 10⁻⁴/- | 2.5872 × 10⁻¹¹/-
CEC2017_F30 | 6.0658 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 3.0199 × 10⁻¹¹/- | 5.0842 × 10⁻³/- | 3.3384 × 10⁻¹¹/- | 8.8910 × 10⁻¹⁰/- | 3.0199 × 10⁻¹¹/-
+/-/= | 3/25/1 | 0/28/1 | 2/22/5 | 1/25/3 | 1/27/1 | 1/26/2 | 0/29/0
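For reference, the +/−/= entries in Tables 9–11 follow the usual convention: a Wilcoxon rank-sum test at the 5% significance level on the per-run final results of each competitor versus the SNDSO, with "+" marking a significantly better competitor, "−" a significantly worse one, and "=" no significant difference. A minimal sketch of this comparison (not the authors' code; all names here are illustrative assumptions) is:

```python
# Sketch of the Wilcoxon rank-sum comparison behind Tables 9-11 (assumed
# convention; the paper's exact implementation may differ).
import numpy as np
from scipy.stats import ranksums

def wilcoxon_mark(rival_runs, sndso_runs, alpha=0.05):
    """Return (p_value, mark): '=' if not significant at level alpha,
    otherwise '+' when the rival's median error is lower (better),
    '-' when it is higher (worse). Lower error is better on CEC2017."""
    _, p = ranksums(rival_runs, sndso_runs)
    if p >= alpha:
        return p, "="
    return p, "+" if np.median(rival_runs) < np.median(sndso_runs) else "-"

rng = np.random.default_rng(1)
print(wilcoxon_mark(rng.normal(12, 1, 30), rng.normal(10, 1, 30)))  # small p, '-'
```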
Table 12. Friedman mean rank test for CEC2017.
Algorithms | Mean Rank (Dim = 30) | Final Rank (Dim = 30) | Mean Rank (Dim = 50) | Final Rank (Dim = 50) | Mean Rank (Dim = 100) | Final Rank (Dim = 100)
SO | 4.3414 | 6 | 3.9678 | 5 | 3.7161 | 4
DBO | 6.2241 | 7 | 6.4046 | 7 | 6.2011 | 7
SSA | 4.3264 | 5 | 3.9460 | 4 | 3.6270 | 3
TLBO | 3.4701 | 2 | 3.6563 | 2 | 4.4057 | 5
FSTDE | 4.1690 | 4 | 4.6552 | 6 | 5.5471 | 6
AHA | 3.8701 | 3 | 3.6839 | 3 | 3.3833 | 2
WOA | 7.5115 | 8 | 7.4287 | 8 | 7.3701 | 8
SNDSO | 2.0874 | 1 | 2.2575 | 1 | 1.7494 | 1
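The mean ranks in Table 12 come from the Friedman procedure: the algorithms are ranked on each function (rank 1 for the lowest error), and the ranks are then averaged over all functions; the final rank orders the algorithms by that average. A short sketch of this computation, assuming a results matrix of shape (functions × algorithms):

```python
# Friedman mean-rank computation (illustrative; names are assumptions).
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(mean_errors):
    # mean_errors: (n_functions, n_algorithms); lower is better.
    # Rank within each function (row); ties receive averaged ranks.
    per_function_ranks = np.apply_along_axis(rankdata, 1, mean_errors)
    return per_function_ranks.mean(axis=0)

demo = np.array([[4.3, 6.2, 2.1],
                 [3.9, 6.4, 2.3],
                 [5.0, 4.8, 1.9]])
print(friedman_mean_ranks(demo))  # [2.3333 2.6667 1.] -> final ranks 2, 3, 1
```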
Table 13. Running time of the algorithm for CEC2017 (Dim = 30).
Problem | SO | DBO | SSA | TLBO | FSTDE | AHA | WOA | SNDSO
CEC2017_F1 | 6.20 × 10⁰ | 5.24 × 10⁰ | 4.22 × 10⁰ | 4.82 × 10⁰ | 1.51 × 10¹ | 4.11 × 10⁰ | 3.95 × 10⁰ | 5.20 × 10⁰
CEC2017_F3 | 4.56 × 10⁰ | 5.26 × 10⁰ | 4.08 × 10⁰ | 4.07 × 10⁰ | 1.50 × 10¹ | 4.38 × 10⁰ | 3.68 × 10⁰ | 4.69 × 10⁰
CEC2017_F4 | 4.78 × 10⁰ | 5.46 × 10⁰ | 3.90 × 10⁰ | 4.53 × 10⁰ | 1.50 × 10¹ | 4.76 × 10⁰ | 3.99 × 10⁰ | 4.68 × 10⁰
CEC2017_F5 | 4.71 × 10⁰ | 5.47 × 10⁰ | 4.33 × 10⁰ | 4.65 × 10⁰ | 1.55 × 10¹ | 4.42 × 10⁰ | 3.76 × 10⁰ | 4.23 × 10⁰
CEC2017_F6 | 5.17 × 10⁰ | 5.86 × 10⁰ | 4.55 × 10⁰ | 4.95 × 10⁰ | 1.56 × 10¹ | 4.90 × 10⁰ | 4.27 × 10⁰ | 6.62 × 10⁰
CEC2017_F7 | 6.23 × 10⁰ | 6.39 × 10⁰ | 4.24 × 10⁰ | 4.50 × 10⁰ | 1.55 × 10¹ | 4.58 × 10⁰ | 3.89 × 10⁰ | 6.22 × 10⁰
CEC2017_F8 | 5.52 × 10⁰ | 5.25 × 10⁰ | 4.13 × 10⁰ | 4.36 × 10⁰ | 1.52 × 10¹ | 4.53 × 10⁰ | 4.02 × 10⁰ | 5.68 × 10⁰
CEC2017_F9 | 6.45 × 10⁰ | 6.75 × 10⁰ | 4.22 × 10⁰ | 4.27 × 10⁰ | 1.53 × 10¹ | 4.67 × 10⁰ | 3.86 × 10⁰ | 6.45 × 10⁰
CEC2017_F10 | 6.32 × 10⁰ | 7.01 × 10⁰ | 4.64 × 10⁰ | 4.32 × 10⁰ | 1.54 × 10¹ | 4.73 × 10⁰ | 3.97 × 10⁰ | 6.10 × 10⁰
CEC2017_F11 | 6.91 × 10⁰ | 6.81 × 10⁰ | 4.38 × 10⁰ | 4.54 × 10⁰ | 1.54 × 10¹ | 4.62 × 10⁰ | 3.90 × 10⁰ | 8.06 × 10⁰
CEC2017_F12 | 7.48 × 10⁰ | 7.25 × 10⁰ | 4.69 × 10⁰ | 4.56 × 10⁰ | 1.54 × 10¹ | 4.76 × 10⁰ | 4.07 × 10⁰ | 7.03 × 10⁰
CEC2017_F13 | 7.33 × 10⁰ | 7.17 × 10⁰ | 4.63 × 10⁰ | 4.70 × 10⁰ | 1.56 × 10¹ | 4.76 × 10⁰ | 4.04 × 10⁰ | 6.89 × 10⁰
CEC2017_F14 | 6.66 × 10⁰ | 8.31 × 10⁰ | 4.70 × 10⁰ | 4.32 × 10⁰ | 1.57 × 10¹ | 5.03 × 10⁰ | 4.07 × 10⁰ | 6.00 × 10⁰
CEC2017_F15 | 7.35 × 10⁰ | 1.02 × 10¹ | 4.52 × 10⁰ | 4.33 × 10⁰ | 1.57 × 10¹ | 4.79 × 10⁰ | 3.86 × 10⁰ | 6.02 × 10⁰
CEC2017_F16 | 6.95 × 10⁰ | 7.65 × 10⁰ | 4.67 × 10⁰ | 4.38 × 10⁰ | 1.58 × 10¹ | 4.68 × 10⁰ | 3.94 × 10⁰ | 6.26 × 10⁰
CEC2017_F17 | 8.09 × 10⁰ | 7.57 × 10⁰ | 5.05 × 10⁰ | 5.07 × 10⁰ | 1.59 × 10¹ | 5.04 × 10⁰ | 4.15 × 10⁰ | 8.06 × 10⁰
CEC2017_F18 | 7.88 × 10⁰ | 7.95 × 10⁰ | 5.41 × 10⁰ | 5.32 × 10⁰ | 1.65 × 10¹ | 5.02 × 10⁰ | 4.29 × 10⁰ | 8.18 × 10⁰
CEC2017_F19 | 1.03 × 10¹ | 9.79 × 10⁰ | 7.48 × 10⁰ | 7.39 × 10⁰ | 1.82 × 10¹ | 6.96 × 10⁰ | 6.24 × 10⁰ | 9.96 × 10⁰
CEC2017_F20 | 9.70 × 10⁰ | 8.11 × 10⁰ | 5.80 × 10⁰ | 5.63 × 10⁰ | 1.63 × 10¹ | 5.54 × 10⁰ | 4.66 × 10⁰ | 8.33 × 10⁰
CEC2017_F21 | 9.60 × 10⁰ | 8.25 × 10⁰ | 6.03 × 10⁰ | 6.17 × 10⁰ | 1.65 × 10¹ | 5.70 × 10⁰ | 5.04 × 10⁰ | 9.13 × 10⁰
CEC2017_F22 | 1.07 × 10¹ | 8.72 × 10⁰ | 6.60 × 10⁰ | 6.85 × 10⁰ | 1.75 × 10¹ | 6.31 × 10⁰ | 5.58 × 10⁰ | 9.67 × 10⁰
CEC2017_F23 | 1.17 × 10¹ | 8.83 × 10⁰ | 6.66 × 10⁰ | 7.28 × 10⁰ | 1.73 × 10¹ | 6.51 × 10⁰ | 5.88 × 10⁰ | 9.92 × 10⁰
CEC2017_F24 | 1.18 × 10¹ | 8.96 × 10⁰ | 7.05 × 10⁰ | 6.80 × 10⁰ | 1.73 × 10¹ | 6.67 × 10⁰ | 6.08 × 10⁰ | 9.94 × 10⁰
CEC2017_F25 | 1.15 × 10¹ | 8.84 × 10⁰ | 6.64 × 10⁰ | 6.93 × 10⁰ | 1.74 × 10¹ | 6.49 × 10⁰ | 5.72 × 10⁰ | 9.87 × 10⁰
CEC2017_F26 | 1.17 × 10¹ | 9.20 × 10⁰ | 9.83 × 10⁰ | 7.21 × 10⁰ | 1.74 × 10¹ | 6.87 × 10⁰ | 6.08 × 10⁰ | 1.02 × 10¹
CEC2017_F27 | 1.21 × 10¹ | 9.46 × 10⁰ | 1.00 × 10¹ | 7.47 × 10⁰ | 1.76 × 10¹ | 7.09 × 10⁰ | 6.44 × 10⁰ | 1.06 × 10¹
CEC2017_F28 | 1.20 × 10¹ | 9.24 × 10⁰ | 1.07 × 10¹ | 7.31 × 10⁰ | 1.75 × 10¹ | 7.81 × 10⁰ | 6.37 × 10⁰ | 1.05 × 10¹
CEC2017_F29 | 1.22 × 10¹ | 8.90 × 10⁰ | 1.04 × 10¹ | 6.83 × 10⁰ | 1.72 × 10¹ | 6.48 × 10⁰ | 5.90 × 10⁰ | 1.01 × 10¹
CEC2017_F30 | 1.34 × 10¹ | 1.07 × 10¹ | 1.20 × 10¹ | 8.43 × 10⁰ | 1.89 × 10¹ | 7.98 × 10⁰ | 7.25 × 10⁰ | 1.17 × 10¹
Mean Rank | 6.45 | 5.62 | 3.31 | 3.07 | 8.00 | 3.00 | 1.03 | 5.52
Final Rank | 7 | 6 | 4 | 3 | 8 | 2 | 1 | 5
Table 14. Information on 12 FS datasets.
Datasets | Number of Features | Number of Classifications | Dataset Size
IonosphereEW | 34 | 2 | 351
Breastcancer | 9 | 2 | 699
BreastEW | 30 | 2 | 569
Congress | 16 | 2 | 435
Wine | 13 | 3 | 178
Vote | 16 | 2 | 435
Vehicle | 18 | 4 | 846
Exactly | 13 | 2 | 1000
Glass | 9 | 7 | 214
HeartEW | 13 | 2 | 270
Zoo | 16 | 7 | 101
SonarEW | 60 | 2 | 208
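For context, the fitness minimized on these datasets is, in most wrapper-based FS studies, a weighted sum of the classification error and the fraction of selected features. The sketch below assumes the common α = 0.99 weighting and a KNN classifier; this is a generic illustration and may differ in detail from the paper's exact setup.

```python
# Hypothetical wrapper FS fitness (assumed form, not taken from the paper):
# alpha * error + (1 - alpha) * (#selected / #total), to be minimized.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():           # selecting no features gets the worst fitness
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size
```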
Table 15. Fitness values for algorithms dealing with FS problems.
Datasets | Metric | HHO | BOA | WOA | ABO | PSO | GWO | SO | SNDSO
IonosphereEW | Best | 0.040420 | 0.063193 | 0.044454 | 0.053277 | 0.056218 | 0.037479 | 0.034538 | 0.017647
 | Mean | 0.086894 | 0.083966 | 0.072720 | 0.074230 | 0.115406 | 0.072258 | 0.049922 | 0.042078
 | Worst | 0.146975 | 0.103613 | 0.098824 | 0.101765 | 0.179664 | 0.106555 | 0.073866 | 0.063950
 | Rank (Best/Mean/Worst) | 4/7/7 | 8/6/5 | 5/4/3 | 6/5/4 | 7/8/8 | 3/3/6 | 2/2/2 | 1/1/1
Breastcancer | Best | 0.054596 | 0.046283 | 0.041647 | 0.046283 | 0.039808 | 0.063869 | 0.052758 | 0.035172
 | Mean | 0.068263 | 0.052510 | 0.056853 | 0.050013 | 0.042366 | 0.066699 | 0.056025 | 0.040357
 | Worst | 0.089768 | 0.059233 | 0.065707 | 0.059233 | 0.052758 | 0.072182 | 0.061071 | 0.046283
 | Rank (Best/Mean/Worst) | 7/8/8 | 4/4/3 | 3/6/6 | 4/3/3 | 2/2/2 | 8/7/7 | 6/5/5 | 1/1/1
BreastEW | Best | 0.047227 | 0.042596 | 0.049263 | 0.064454 | 0.047227 | 0.039263 | 0.050560 | 0.027965
 | Mean | 0.074485 | 0.069774 | 0.070779 | 0.079813 | 0.058561 | 0.050864 | 0.064625 | 0.038916
 | Worst | 0.097050 | 0.083717 | 0.103156 | 0.093717 | 0.073156 | 0.063156 | 0.077050 | 0.051298
 | Rank (Best/Mean/Worst) | 4/7/7 | 3/5/5 | 6/6/8 | 8/8/6 | 4/3/3 | 2/2/2 | 7/4/4 | 1/1/1
Congress | Best | 0.047629 | 0.049784 | 0.045690 | 0.057974 | 0.049784 | 0.047629 | 0.035345 | 0.026940
 | Mean | 0.056379 | 0.069698 | 0.060841 | 0.057974 | 0.061667 | 0.047780 | 0.043434 | 0.027356
 | Worst | 0.093319 | 0.085129 | 0.099569 | 0.057974 | 0.078879 | 0.049784 | 0.047629 | 0.033190
 | Rank (Best/Mean/Worst) | 4/4/7 | 7/8/6 | 3/6/8 | 8/5/4 | 6/7/5 | 5/3/3 | 2/2/2 | 1/1/1
Wine | Best | 0.037284 | 0.056034 | 0.057974 | 0.026940 | 0.023077 | 0.047629 | 0.039440 | 0.023077
 | Mean | 0.045057 | 0.068499 | 0.058671 | 0.026940 | 0.032147 | 0.048118 | 0.045790 | 0.023077
 | Worst | 0.078879 | 0.089224 | 0.064224 | 0.026940 | 0.056484 | 0.056034 | 0.049784 | 0.023077
 | Rank (Best/Mean/Worst) | 4/4/7 | 7/8/8 | 8/7/6 | 3/2/2 | 1/3/5 | 6/6/4 | 5/5/3 | 1/1/1
Vote | Best | 0.039440 | 0.066379 | 0.037284 | 0.026940 | 0.037284 | 0.025000 | 0.037284 | 0.022845
 | Mean | 0.072198 | 0.080747 | 0.038118 | 0.027565 | 0.045014 | 0.029188 | 0.038118 | 0.025194
 | Worst | 0.107974 | 0.101724 | 0.049784 | 0.039440 | 0.066379 | 0.047629 | 0.049784 | 0.039440
 | Rank (Best/Mean/Worst) | 7/7/8 | 8/8/7 | 4/4/4 | 3/2/1 | 4/6/6 | 2/3/3 | 4/4/4 | 1/1/2
Vehicle | Best | 0.288955 | 0.256542 | 0.240565 | 0.257002 | 0.261867 | 0.241026 | 0.257002 | 0.203517
 | Mean | 0.318876 | 0.297731 | 0.307540 | 0.286113 | 0.284400 | 0.257372 | 0.280079 | 0.247686
 | Worst | 0.358876 | 0.331788 | 0.353090 | 0.316042 | 0.332479 | 0.272978 | 0.315812 | 0.268113
 | Rank (Best/Mean/Worst) | 8/8/8 | 4/6/5 | 2/7/7 | 5/5/4 | 7/4/6 | 3/2/2 | 5/3/3 | 1/1/1
Exactly | Best | 0.046154 | 0.267231 | 0.046154 | 0.046154 | 0.046154 | 0.046154 | 0.046154 | 0.046154
 | Mean | 0.252886 | 0.285922 | 0.252954 | 0.178659 | 0.161626 | 0.162836 | 0.137708 | 0.134306
 | Worst | 0.309038 | 0.296846 | 0.291192 | 0.286692 | 0.327038 | 0.291192 | 0.303962 | 0.291192
 | Rank (Best/Mean/Worst) | 1/6/7 | 8/8/5 | 1/7/2 | 1/5/1 | 1/3/8 | 1/4/2 | 1/2/6 | 7/1/4
Glass | Best | 0.279365 | 0.280159 | 0.237302 | 0.311905 | 0.279365 | 0.269048 | 0.236508 | 0.140476
 | Mean | 0.307804 | 0.313148 | 0.258757 | 0.316905 | 0.290397 | 0.277275 | 0.242619 | 0.161217
 | Worst | 0.355556 | 0.365079 | 0.291270 | 0.345238 | 0.300794 | 0.322222 | 0.322222 | 0.204762
 | Rank (Best/Mean/Worst) | 5/6/7 | 7/7/8 | 3/3/2 | 8/8/6 | 5/5/3 | 4/4/5 | 2/2/4 | 1/1/1
HeartEW | Best | 0.137179 | 0.164103 | 0.105128 | 0.088462 | 0.123077 | 0.089744 | 0.105128 | 0.071795
 | Mean | 0.188462 | 0.200812 | 0.176197 | 0.151923 | 0.154915 | 0.102137 | 0.120812 | 0.093803
 | Worst | 0.230769 | 0.230769 | 0.262821 | 0.205128 | 0.197436 | 0.146154 | 0.215385 | 0.124359
 | Rank (Best/Mean/Worst) | 7/7/6 | 8/8/6 | 4/6/8 | 2/4/4 | 6/5/3 | 3/2/2 | 4/3/5 | 1/1/1
Zoo | Best | 0.062500 | 0.043750 | 0.088750 | 0.037500 | 0.031250 | 0.037500 | 0.037500 | 0.025000
 | Mean | 0.106833 | 0.103792 | 0.142292 | 0.055833 | 0.038208 | 0.057833 | 0.063208 | 0.052500
 | Worst | 0.166250 | 0.172500 | 0.185000 | 0.108750 | 0.076250 | 0.088750 | 0.076250 | 0.082500
 | Rank (Best/Mean/Worst) | 7/7/6 | 6/6/7 | 8/8/8 | 3/3/5 | 2/1/1 | 3/4/4 | 3/5/1 | 1/2/3
SonarEW | Best | 0.057236 | 0.089187 | 0.060569 | 0.075854 | 0.055285 | 0.021667 | 0.040285 | 0.015000
 | Mean | 0.120984 | 0.144083 | 0.113477 | 0.121844 | 0.097881 | 0.053760 | 0.074385 | 0.024732
 | Worst | 0.174756 | 0.181992 | 0.159472 | 0.171992 | 0.143089 | 0.089187 | 0.121138 | 0.035285
 | Rank (Best/Mean/Worst) | 5/6/7 | 8/8/8 | 6/5/5 | 7/7/6 | 4/4/4 | 2/2/2 | 3/3/3 | 1/1/1
Mean Rank | Best | 5.250000 | 6.500000 | 4.416667 | 4.833333 | 4.083333 | 3.500000 | 3.666667 | 1.500000
 | Mean | 6.416667 | 6.833333 | 5.750000 | 4.750000 | 4.250000 | 3.500000 | 3.333333 | 1.083333
 | Worst | 7.083333 | 6.083333 | 5.583333 | 3.833333 | 4.500000 | 3.500000 | 3.500000 | 1.500000
Final Rank | Best | 7 | 8 | 5 | 6 | 4 | 2 | 3 | 1
 | Mean | 7 | 8 | 6 | 5 | 4 | 3 | 2 | 1
 | Worst | 8 | 7 | 6 | 4 | 5 | 2 | 2 | 1
Table 16. Wilcoxon rank sum test results for algorithms dealing with FS problems.
Datasets | HHO | BOA | WOA | ABO | PSO | GWO | SO
IonosphereEW | 1.0555 × 10⁻⁹/- | 3.2682 × 10⁻¹¹/- | 5.2685 × 10⁻⁹/- | 8.6348 × 10⁻¹¹/- | 4.4491 × 10⁻¹¹/- | 1.4666 × 10⁻⁸/- | 2.5330 × 10⁻²/-
Breastcancer | 1.3459 × 10⁻¹¹/- | 2.9249 × 10⁻¹⁰/- | 2.1090 × 10⁻¹⁰/- | 4.1276 × 10⁻⁸/- | 3.7672 × 10⁻²/- | 1.0952 × 10⁻¹¹/- | 8.0691 × 10⁻¹²/-
BreastEW | 3.9725 × 10⁻¹¹/- | 6.4721 × 10⁻¹¹/- | 3.2519 × 10⁻¹¹/- | 2.9339 × 10⁻¹¹/- | 5.2284 × 10⁻¹¹/- | 1.9376 × 10⁻⁸/- | 3.2175 × 10⁻¹¹/-
Congress | 1.6140 × 10⁻¹²/- | 2.2370 × 10⁻¹²/- | 9.4316 × 10⁻¹³/- | 4.1574 × 10⁻¹⁴/- | 2.1000 × 10⁻¹²/- | 1.4043 × 10⁻¹³/- | 1.8547 × 10⁻¹²/-
Wine | 5.3349 × 10⁻¹³/- | 1.1537 × 10⁻¹²/- | 8.6958 × 10⁻¹⁴/- | 1.6853 × 10⁻¹⁴/- | 3.9882 × 10⁻⁸/- | 4.1617 × 10⁻¹⁴/- | 2.4891 × 10⁻¹³/-
Vote | 7.0983 × 10⁻¹²/- | 6.2782 × 10⁻¹²/- | 2.0056 × 10⁻¹⁰/- | 8.2478 × 10⁻⁶/- | 5.6734 × 10⁻¹¹/- | 1.1501 × 10⁻⁶/- | 1.9095 × 10⁻¹⁰/-
Vehicle | 2.9229 × 10⁻¹¹/- | 9.5705 × 10⁻¹¹/- | 2.5323 × 10⁻¹⁰/- | 1.0592 × 10⁻¹⁰/- | 7.8783 × 10⁻¹¹/- | 7.1944 × 10⁻³/- | 1.0349 × 10⁻¹⁰/-
Exactly | 2.9875 × 10⁻⁵/- | 5.5831 × 10⁻⁸/- | 1.3384 × 10⁻⁴/- | 8.1567 × 10⁻¹/= | 4.7391 × 10⁻¹/= | 1.6698 × 10⁻¹/= | 4.1160 × 10⁻¹/=
Glass | 2.2623 × 10⁻¹¹/- | 2.2871 × 10⁻¹¹/- | 2.1779 × 10⁻¹¹/- | 1.6678 × 10⁻¹¹/- | 4.7919 × 10⁻¹²/- | 4.7849 × 10⁻¹²/- | 2.3095 × 10⁻¹²/-
HeartEW | 2.8252 × 10⁻¹¹/- | 2.7827 × 10⁻¹¹/- | 2.4068 × 10⁻¹⁰/- | 1.3806 × 10⁻⁷/- | 2.8433 × 10⁻¹⁰/- | 7.4760 × 10⁻⁴/- | 5.2863 × 10⁻⁵/-
Zoo | 2.2076 × 10⁻¹⁰/- | 1.3317 × 10⁻⁹/- | 2.2290 × 10⁻¹¹/- | 7.7129 × 10⁻²/= | 1.6582 × 10⁻¹/= | 1.0894 × 10⁻¹/= | 1.8346 × 10⁻³/-
SonarEW | 2.9321 × 10⁻¹¹/- | 2.9321 × 10⁻¹¹/- | 2.9358 × 10⁻¹¹/- | 2.9321 × 10⁻¹¹/- | 2.9137 × 10⁻¹¹/- | 1.6445 × 10⁻⁹/- | 2.9155 × 10⁻¹¹/-
+/-/= | 0/12/0 | 0/12/0 | 0/12/0 | 0/10/2 | 0/10/2 | 0/10/2 | 0/11/1
Table 17. Average classification accuracy of algorithms dealing with FS problems.
Datasets | HHO | BOA | WOA | ABO | PSO | GWO | SO | SNDSO
IonosphereEW | 92.523810 | 92.761905 | 93.238095 | 93.571429 | 90.238095 | 93.714286 | 96.000000 | 97.666667
Rank | 7 | 6 | 5 | 4 | 8 | 3 | 2 | 1
Breastcancer | 95.707434 | 97.745803 | 96.810552 | 98.105516 | 98.872902 | 96.786571 | 97.314149 | 98.561151
Rank | 8 | 4 | 6 | 3 | 1 | 7 | 5 | 2
BreastEW | 94.896755 | 95.457227 | 96.135693 | 94.218289 | 96.814159 | 96.607670 | 95.634218 | 98.466077
Rank | 7 | 6 | 4 | 8 | 2 | 3 | 5 | 1
Congress | 95.402299 | 95.172414 | 94.559387 | 94.252874 | 95.555556 | 95.593870 | 97.164751 | 97.701149
Rank | 5 | 6 | 7 | 8 | 4 | 3 | 2 | 1
Wine | 96.475096 | 94.865900 | 94.291188 | 97.701149 | 99.619048 | 95.440613 | 95.977011 | 100.000000
Rank | 4 | 7 | 8 | 3 | 2 | 6 | 5 | 1
Vote | 94.061303 | 94.176245 | 96.551724 | 97.701149 | 96.896552 | 99.233716 | 96.551724 | 98.659004
Rank | 8 | 7 | 5 | 3 | 4 | 1 | 5 | 2
Vehicle | 69.013807 | 70.355030 | 69.388560 | 72.386588 | 72.741617 | 75.147929 | 72.583826 | 76.429980
Rank | 8 | 7 | 5 | 3 | 4 | 1 | 5 | 2
Exactly | 74.950000 | 69.883333 | 74.800000 | 83.966667 | 86.600000 | 86.066667 | 89.400000 | 89.550000
Rank | 6 | 8 | 7 | 5 | 3 | 4 | 2 | 1
Glass | 70.079365 | 69.444444 | 76.269841 | 69.603175 | 71.190476 | 72.936508 | 75.634921 | 85.873016
Rank | 6 | 8 | 2 | 7 | 5 | 4 | 3 | 1
HeartEW | 82.222222 | 80.679012 | 83.641975 | 87.222222 | 86.604938 | 91.358025 | 90.308642 | 93.024691
Rank | 7 | 8 | 6 | 4 | 5 | 2 | 3 | 1
Zoo | 93.500000 | 93.166667 | 88.333333 | 98.333333 | 99.666667 | 98.666667 | 96.333333 | 97.500000
Rank | 6 | 7 | 8 | 3 | 1 | 2 | 5 | 4
SonarEW | 89.186992 | 86.910569 | 91.626016 | 89.430894 | 92.926829 | 96.341463 | 94.796748 | 99.918699
Rank | 7 | 8 | 5 | 6 | 4 | 2 | 3 | 1
Mean Rank | 6.583333 | 6.833333 | 5.666667 | 4.750000 | 3.583333 | 3.166667 | 3.750000 | 1.500000
Final Rank | 7 | 8 | 6 | 5 | 3 | 2 | 4 | 1
Table 18. Average number of features for algorithms dealing with FS problems.
Datasets | HHO | BOA | WOA | ABO | PSO | GWO | SO | SNDSO
IonosphereEW | 6.666667 | 6.400000 | 4.033333 | 5.566667 | 9.366667 | 5.333333 | 4.733333 | 7.166667
Rank | 6 | 5 | 1 | 4 | 8 | 3 | 2 | 7
Breastcancer | 2.666667 | 2.900000 | 2.533333 | 2.966667 | 2.900000 | 3.400000 | 2.866667 | 2.466667
Rank | 3 | 5 | 2 | 7 | 5 | 8 | 4 | 1
BreastEW | 8.566667 | 8.666667 | 10.800000 | 8.333333 | 8.966667 | 6.100000 | 7.600000 | 7.533333
Rank | 5 | 6 | 8 | 4 | 7 | 1 | 3 | 2
Congress | 2.400000 | 4.200000 | 1.900000 | 1.000000 | 3.466667 | 1.300000 | 2.866667 | 1.066667
Rank | 5 | 8 | 4 | 1 | 7 | 3 | 6 | 2
Wine | 2.133333 | 3.566667 | 1.166667 | 1.000000 | 3.733333 | 1.133333 | 1.533333 | 3.000000
Rank | 5 | 7 | 3 | 1 | 8 | 2 | 4 | 6
Vote | 3.000000 | 4.533333 | 1.133333 | 1.100000 | 2.733333 | 3.566667 | 1.133333 | 2.100000
Rank | 6 | 8 | 2 | 1 | 5 | 7 | 2 | 4
Vehicle | 7.200000 | 5.566667 | 5.766667 | 6.766667 | 7.033333 | 6.066667 | 6.000000 | 6.400000
Rank | 8 | 1 | 2 | 6 | 7 | 4 | 3 | 5
Exactly | 3.566667 | 1.933333 | 3.400000 | 4.466667 | 5.333333 | 4.866667 | 5.500000 | 5.233333
Rank | 3 | 1 | 2 | 4 | 7 | 5 | 8 | 6
Glass | 3.466667 | 3.433333 | 4.066667 | 3.900000 | 2.800000 | 3.033333 | 2.100000 | 3.066667
Rank | 6 | 5 | 8 | 7 | 2 | 3 | 1 | 4
HeartEW | 3.700000 | 3.500000 | 3.766667 | 4.800000 | 4.466667 | 3.166667 | 4.366667 | 4.033333
Rank | 3 | 2 | 4 | 8 | 7 | 1 | 6 | 5
Zoo | 7.733333 | 6.766667 | 5.966667 | 6.533333 | 5.633333 | 7.333333 | 4.833333 | 4.800000
Rank | 8 | 6 | 4 | 5 | 3 | 7 | 2 | 1
SonarEW | 14.200000 | 15.766667 | 22.866667 | 16.033333 | 20.533333 | 12.500000 | 16.533333 | 14.400000
Rank | 2 | 4 | 8 | 5 | 7 | 1 | 6 | 3
Mean Rank | 5.000000 | 4.833333 | 4.000000 | 4.416667 | 6.083333 | 3.750000 | 3.916667 | 3.833333
Final Rank | 7 | 6 | 4 | 5 | 8 | 1 | 3 | 2
Table 19. Average running time of the algorithm for the FS problem.
Datasets | HHO | BOA | WOA | ABO | PSO | GWO | SO | SNDSO
IonosphereEW | 4.803066 | 3.186414 | 2.911380 | 1.998343 | 3.126366 | 3.216816 | 3.010714 | 3.078643
Rank | 8 | 6 | 2 | 1 | 5 | 7 | 3 | 4
Breastcancer | 5.135761 | 3.348222 | 3.066325 | 2.287341 | 3.405459 | 3.416410 | 3.295708 | 3.331846
Rank | 8 | 5 | 2 | 1 | 6 | 7 | 3 | 4
BreastEW | 5.254400 | 3.366437 | 3.162485 | 2.208694 | 3.348828 | 3.526335 | 3.293445 | 3.321811
Rank | 8 | 6 | 2 | 1 | 5 | 7 | 3 | 4
Congress | 4.509980 | 3.298403 | 2.732547 | 2.033697 | 3.360761 | 3.349987 | 2.994612 | 3.072036
Rank | 8 | 5 | 2 | 1 | 7 | 6 | 3 | 4
Wine | 4.369598 | 3.298717 | 2.593746 | 2.072551 | 3.179409 | 3.554128 | 2.840827 | 3.127230
Rank | 8 | 6 | 2 | 1 | 5 | 7 | 3 | 4
Vote | 4.595980 | 3.393188 | 2.886006 | 2.160125 | 3.421998 | 3.384964 | 2.692785 | 3.178043
Rank | 8 | 6 | 3 | 1 | 7 | 5 | 2 | 4
Vehicle | 5.814580 | 3.750190 | 3.386057 | 2.532804 | 3.804094 | 3.702382 | 3.631501 | 3.663357
Rank | 8 | 6 | 3 | 1 | 7 | 5 | 2 | 4
Exactly | 5.328074 | 3.797175 | 3.304813 | 2.341518 | 3.893987 | 3.748275 | 3.660842 | 3.631808
Rank | 8 | 6 | 2 | 1 | 7 | 5 | 4 | 3
Glass | 5.061052 | 3.168219 | 3.075275 | 2.226809 | 3.274313 | 3.334704 | 2.963263 | 3.125194
Rank | 8 | 5 | 3 | 1 | 6 | 7 | 2 | 4
HeartEW | 4.760409 | 3.242640 | 3.037463 | 2.255885 | 3.366795 | 3.359168 | 3.294718 | 3.134978
Rank | 8 | 4 | 2 | 1 | 7 | 6 | 5 | 3
Zoo | 4.995152 | 3.205886 | 3.017472 | 2.326458 | 3.311088 | 3.318231 | 3.231036 | 3.138697
Rank | 8 | 4 | 2 | 1 | 6 | 7 | 5 | 3
SonarEW | 4.805915 | 3.065338 | 2.900962 | 2.097988 | 3.089534 | 3.081503 | 3.073096 | 3.052341
Rank | 8 | 4 | 2 | 1 | 7 | 6 | 5 | 3
Mean Rank | 8.000000 | 5.250000 | 2.250000 | 1.000000 | 6.250000 | 6.250000 | 3.333333 | 3.666667
Final Rank | 8 | 5 | 2 | 1 | 6 | 6 | 3 | 4
Table 20. Optimal solution of algorithms in three-bar truss design problem.
Algorithms | x1 | x2 | Optimal
SO | 0.788661 | 0.408288 | 263.895844
BESD | 0.788661 | 0.408289 | 263.895845
DBO | 0.788796 | 0.407905 | 263.895854
MFO | 0.789257 | 0.406605 | 263.896092
SSA | 0.788704 | 0.408167 | 263.895845
PSO | 0.788637 | 0.408357 | 263.895844
EO | 0.788755 | 0.408023 | 263.895848
SNDSO | 0.788680 | 0.408233 | 263.895843
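As a quick sanity check, the objective values in Table 20 are reproduced by the standard three-bar truss weight function (2√2·x1 + x2)·l with bar length l = 100 cm, which is assumed here to be the formulation used:

```python
import math

def three_bar_weight(x1, x2, l=100.0):
    # Standard three-bar truss volume/weight: (2*sqrt(2)*x1 + x2) * l.
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

print(three_bar_weight(0.788680, 0.408233))  # ~263.8958, matching the SNDSO row
```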
Table 21. Statistical results of algorithms for the three-bar truss design problem.
Algorithms | Best | Worst | Mean | Std
SO | 263.895844 | 263.896673 | 263.896048 | 2.428072 × 10⁻⁴
BESD | 263.895845 | 263.895882 | 263.895856 | 8.098410 × 10⁻⁶
DBO | 263.895854 | 263.908123 | 263.897815 | 2.689815 × 10⁻³
MFO | 263.896092 | 264.261487 | 263.957287 | 1.003014 × 10⁻¹
SSA | 263.895845 | 263.908875 | 263.897076 | 2.415208 × 10⁻³
PSO | 263.895844 | 263.899023 | 263.896315 | 7.695109 × 10⁻⁴
EO | 263.895848 | 263.896450 | 263.895959 | 1.444199 × 10⁻⁴
SNDSO | 263.895843 | 263.896624 | 263.895952 | 1.703197 × 10⁻⁴
Table 22. Optimal solution of algorithms in 10-bar truss design problem.
Algorithms | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | x10 | Optimal
SO | 0.003616 | 0.001465 | 0.003382 | 0.001563 | 0.000066 | 0.000452 | 0.002236 | 0.002430 | 0.001296 | 0.001237 | 524.985183
BESD | 0.003461 | 0.001457 | 0.003600 | 0.001554 | 0.000079 | 0.000464 | 0.002304 | 0.002392 | 0.001235 | 0.001252 | 526.144065
DBO | 0.003562 | 0.001508 | 0.003504 | 0.001549 | 0.000065 | 0.000450 | 0.002286 | 0.002364 | 0.001228 | 0.001255 | 524.924768
MFO | 0.003756 | 0.001493 | 0.003227 | 0.001385 | 0.000065 | 0.000461 | 0.002517 | 0.002264 | 0.001257 | 0.001303 | 526.035144
SSA | 0.003600 | 0.001535 | 0.003388 | 0.001395 | 0.000065 | 0.000458 | 0.002533 | 0.002214 | 0.001192 | 0.001340 | 525.175775
PSO | 0.003516 | 0.001531 | 0.003523 | 0.001481 | 0.000065 | 0.000453 | 0.002394 | 0.002287 | 0.001236 | 0.001256 | 524.587317
EO | 0.003453 | 0.001442 | 0.003537 | 0.001484 | 0.000065 | 0.000457 | 0.002329 | 0.002423 | 0.001262 | 0.001250 | 524.551304
SNDSO | 0.003475 | 0.001514 | 0.003552 | 0.001428 | 0.000065 | 0.000456 | 0.002461 | 0.002281 | 0.001221 | 0.001264 | 524.582840
Table 23. Statistical results of algorithms for the 10-bar truss design problem.
Algorithms | Best | Worst | Mean | Std
SO | 524.985183 | 538.641398 | 530.132242 | 3.842013 × 10⁰
BESD | 526.144065 | 532.596966 | 529.856669 | 1.327543 × 10⁰
DBO | 524.924768 | 578.222671 | 539.127969 | 1.458350 × 10¹
MFO | 526.035144 | 601.429160 | 539.770295 | 1.661953 × 10¹
SSA | 525.175775 | 540.098010 | 531.300210 | 4.484961 × 10⁰
PSO | 524.587317 | 533.323805 | 528.304913 | 3.072231 × 10⁰
EO | 524.551304 | 533.391502 | 528.730640 | 3.060234 × 10⁰
SNDSO | 524.582840 | 532.572344 | 526.910544 | 2.741038 × 10⁰
Table 24. Optimal solution of algorithms in tension/compression spring design problem.
Algorithms | x1 | x2 | x3 | Optimal
SO | 0.051950 | 0.363029 | 10.928340 | 0.012666
BESD | 0.051193 | 0.344878 | 12.042946 | 0.012692
DBO | 0.051779 | 0.358879 | 11.163347 | 0.012665
MFO | 0.051973 | 0.363587 | 10.897328 | 0.012667
SSA | 0.050000 | 0.316801 | 14.110950 | 0.012760
PSO | 0.052354 | 0.372922 | 10.398588 | 0.012673
EO | 0.051632 | 0.355337 | 11.370360 | 0.012665
SNDSO | 0.051700 | 0.356984 | 11.273355 | 0.012665
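The optima in Table 24 agree with the classical spring-weight objective f(x) = (x3 + 2)·x2·x1², assumed here to be the formulation used (x1 wire diameter, x2 mean coil diameter, x3 number of active coils):

```python
def spring_weight(d, D, N):
    # Classical tension/compression spring objective: (N + 2) * D * d^2.
    return (N + 2.0) * D * d ** 2

print(spring_weight(0.051700, 0.356984, 11.273355))  # ~0.012665 (SNDSO row)
```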
Table 25. Statistical results of algorithms for the tension/compression spring design problem.
Algorithms | Best | Worst | Mean | Std
SO | 0.012666 | 0.013782 | 0.012948 | 3.152255 × 10⁻⁴
BESD | 0.012692 | 0.012831 | 0.012745 | 3.829559 × 10⁻⁵
DBO | 0.012665 | 0.017786 | 0.013765 | 1.833997 × 10⁻³
MFO | 0.012667 | 0.017773 | 0.013680 | 1.577297 × 10⁻³
SSA | 0.012684 | 0.018161 | 0.013190 | 1.024217 × 10⁻³
PSO | 0.012673 | 0.014946 | 0.013269 | 5.539001 × 10⁻⁴
EO | 0.012665 | 0.013788 | 0.012994 | 3.146420 × 10⁻⁴
SNDSO | 0.012665 | 0.012924 | 0.012717 | 5.474825 × 10⁻⁵
Table 26. Optimal solution of algorithms in speed reducer design problem.
Algorithms | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal
SO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
BESD | 3.501806 | 0.700020 | 17.000444 | 7.338166 | 7.769756 | 3.356070 | 5.290624 | 3000.768049
DBO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
MFO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
SSA | 3.510262 | 0.700000 | 17.000000 | 7.814109 | 7.774259 | 3.358179 | 5.286675 | 3006.265463
PSO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
EO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
SNDSO | 3.500000 | 0.700000 | 17.000000 | 7.300000 | 7.715320 | 3.350541 | 5.286654 | 2994.424466
Table 27. Statistical results of algorithms for the speed reducer design problem.
Algorithms | Best | Worst | Mean | Std
SO | 2994.424466 | 2996.912387 | 2994.632981 | 5.146991 × 10⁻¹
BESD | 3000.768049 | 3007.986907 | 3003.622557 | 1.802085 × 10⁰
DBO | 2994.424466 | 3310.500380 | 3056.462973 | 9.058321 × 10¹
MFO | 2994.424466 | 3043.034936 | 3002.578313 | 1.532264 × 10¹
SSA | 3006.265463 | 3113.751959 | 3032.921212 | 2.870079 × 10¹
PSO | 2994.424466 | 2994.424466 | 2994.424466 | 1.818989 × 10⁻¹²
EO | 2994.424466 | 2994.424466 | 2994.424466 | 6.498050 × 10⁻⁸
SNDSO | 2994.424466 | 2994.424472 | 2994.424466 | 1.145042 × 10⁻⁶
Table 28. Optimal solution of algorithms in welded beam design problem.
Algorithms | x1 | x2 | x3 | x4 | Optimal
SO | 0.198832 | 3.337369 | 9.192024 | 0.198832 | 1.670218
BESD | 0.197629 | 3.373626 | 9.205042 | 0.198893 | 1.675844
DBO | 0.198832 | 3.337364 | 9.192022 | 0.198832 | 1.670218
MFO | 0.198831 | 3.337384 | 9.192024 | 0.198832 | 1.670219
SSA | 0.190472 | 3.477217 | 9.282487 | 0.198413 | 1.687969
PSO | 0.198832 | 3.337365 | 9.192024 | 0.198832 | 1.670218
EO | 0.198832 | 3.337365 | 9.192024 | 0.198832 | 1.670218
SNDSO | 0.198832 | 3.337365 | 9.192024 | 0.198832 | 1.670218
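Likewise, the welded beam costs in Table 28 reproduce the textbook fabrication-cost objective f(x) = 1.10471·x1²·x2 + 0.04811·x3·x4·(14 + x2), assumed here to be the formulation used:

```python
def welded_beam_cost(h, l, t, b):
    # Textbook welded beam fabrication cost (h = weld thickness, l = weld
    # length, t = bar height, b = bar thickness).
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

print(welded_beam_cost(0.198832, 3.337365, 9.192024, 0.198832))  # ~1.670218
```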
Table 29. Statistical results of algorithms for the welded beam design problem.
Algorithms | Best | Worst | Mean | Std
SO | 1.670218 | 1.727258 | 1.674821 | 1.130506 × 10⁻²
BESD | 1.675844 | 1.692552 | 1.684660 | 4.536354 × 10⁻³
DBO | 1.670218 | 1.824621 | 1.741124 | 5.629337 × 10⁻²
MFO | 1.670219 | 2.103518 | 1.751748 | 1.114074 × 10⁻¹
SSA | 1.687969 | 1.973487 | 1.750297 | 6.508382 × 10⁻²
PSO | 1.670218 | 2.164451 | 1.714953 | 1.016073 × 10⁻¹
EO | 1.670218 | 1.675576 | 1.670651 | 1.266918 × 10⁻³
SNDSO | 1.670218 | 1.670283 | 1.670224 | 1.478651 × 10⁻⁵