Article

Multi-Strategy-Improved Growth Optimizer and Its Applications

1. College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
2. State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
3. College of Mechanical Engineering, Guizhou University, Guiyang 550025, China
* Authors to whom correspondence should be addressed.
Axioms 2024, 13(6), 361; https://doi.org/10.3390/axioms13060361
Submission received: 5 March 2024 / Revised: 6 May 2024 / Accepted: 23 May 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications)

Abstract

The growth optimizer (GO) is a novel metaheuristic algorithm designed to tackle complex optimization problems. Despite its advantages of simplicity and high efficiency, GO often stagnates in local optima when dealing with discretized, high-dimensional, and multi-constraint problems. To address these issues, this paper proposes an enhanced version of GO called CODGBGO. The algorithm incorporates three strategies to enhance its performance. First, the Circle-OBL initialization strategy is employed to improve the quality of the initial population. Second, an exploration strategy is implemented to improve population diversity and the algorithm's ability to escape local optimum traps. Finally, an exploitation strategy is utilized to improve the convergence speed and accuracy of the algorithm. To validate the performance of CODGBGO, it is applied to the CEC2017 and CEC2020 test suites, 18 feature selection problems, and 4 real engineering optimization problems. The experiments demonstrate that the novel CODGBGO algorithm effectively addresses the challenges posed by complex optimization problems and offers a promising approach.

1. Introduction

There are many realistic optimization problems in the field of scientific research and engineering applications, and they can almost always be converted into optimization problems for solutions [1,2]. There are many challenges and difficulties in solving these problems, such as nonlinearity, discretization, and high complexity [3,4]. Meta-heuristic algorithms are an effective method for solving this type of problem, which is widely used in real-world optimization problems due to their simplicity and ease of implementation [5,6].
In recent years, researchers have proposed a large number of optimization algorithms to solve realistic optimization problems. In this paper, these algorithms are categorized into four groups: evolution-based, population-based, chemistry- and physics-based, and human-based algorithms. Evolution-based algorithms are typically represented by Evolutionary Strategies (ES) [7], Biogeography-Based Optimization (BBO) [8], and the Genetic Algorithm (GA) [9]. Algorithms based on population intelligence include the Fire Hawk Optimizer (FHO) [10], the Bat Algorithm (BA) [11], the fox optimizer [12], and Golden Jackal Optimization (GJO) [13]. Algorithms based on chemistry and physics include the Gravitational Search Algorithm (GSA) [14], the Big Bang-Big Crunch algorithm (BB-BC) [15], Simulated Annealing (SA) [16], Magnetic Optimization Algorithms (MOA) [17], Water Evaporation Optimization (WEO) [18], and Atom Search Optimization (ASO) [19]. Human-based algorithms include search and rescue optimization [20], Human Mental Search (HMS) [21], and the arithmetic optimization algorithm (AOA) [22]. However, as optimization problems become highly complex, these algorithms exhibit drawbacks when solving real-world problems, such as getting stuck in local stagnation and poor convergence performance. Therefore, optimization algorithms with improved strategies are widely used to solve complex real-world optimization problems.
Several researchers have proposed enhanced optimization algorithms to tackle specific engineering design challenges. A.M. Shaheen et al. introduced the enhanced equilibrium optimization algorithm (IEOA) for power distribution network configuration, achieving optimal allocation of distributed generators [23]. Bahaeddin Turkoglu et al. introduced a binary artificial algae algorithm to enhance feature selection and classification accuracy [24]. Gang Hu developed an enhanced variant of the black widow optimization algorithm for feature selection, yielding notable performance outcomes [25]. Min Xu proposed a novel binary arithmetic optimization algorithm (BAOA) that outperformed competitors in benchmark datasets for feature selection [26]. Gang Hu et al. proposed an enhanced hybrid arithmetic optimization algorithm named CSOAOA for solving engineering problems and demonstrated its utility in solving real optimization problems [27].
Although existing heuristic algorithms have made good progress in solving optimization problems, the number of combinations that algorithms need to search increases exponentially as the dimensionality of the optimization problem expands and the complexity of constraints increases. Existing algorithms often face the problem of premature convergence, meaning that they may stop at suboptimal solution sets before reaching the global optimum. The fundamental reason for this is that the exploration and exploitation performance of the algorithms is weak, which limits their effectiveness in capturing the intrinsic patterns and features of the actual optimization problem data. Therefore, it is necessary to explore a new, suitable, and efficient metaheuristic algorithm that can fully explore the search space during the optimization process in order to better mitigate the challenges brought by high-dimensional problems. Fortunately, the inspiration for the growth optimizer (GO) comes from the learning process of individuals in social growth, and it has been proven to be a robust tool with high exploration capability [28]. Relevant studies have shown that GO has strong exploration capability and application scalability. For example, GO has been successfully applied to the parameter identification of solar photovoltaic cells [29,30], multi-level threshold image segmentation and wireless sensor network node deployment [31], and enhancing intrusion detection systems in the internet of things and cloud environments [32].
To the best of our knowledge, existing papers have not attempted to propose an improved GO applicable to both continuous and discrete optimization problems. Considering that GO may fall into local optima when solving high-dimensional complex optimization problems, this paper proposes an enhanced growth optimizer (CODGBGO) that combines three strategies. First, the Circle-OBL initialization strategy is used to generate well-distributed initial solutions. Second, the exploration strategy expands the search space by self-learning within a region of radius R and by learning the differences among individuals, which enhances the exploration ability of the algorithm and its ability to jump out of local optimum traps. Finally, the exploitation strategy greatly improves the convergence speed and convergence accuracy of the algorithm through the guidance of optimal individuals. The main contributions of this paper are summarized as follows:
  • The Circle-OBL initialization strategy initializes a better-distributed population, which improves the performance of the algorithm.
  • The exploration strategy improves the global exploration performance of the algorithm by self-learning over a radius R and improves population diversity by learning the differences between individuals.
  • The exploitation strategy improves the convergence speed and convergence accuracy of the algorithm through the guidance of the optimal individuals.
  • CODGBGO is proposed by combining the above strategies and is confirmed to be a promising optimization method on numerical optimization, feature selection, and engineering optimization problems.
The subsequent work is structured as follows: Section 2 presents the fundamental theory of primitive GO. In Section 3, CODGBGO is introduced as a novel approach. Subsequently, in Section 4, we apply the proposed CODGBGO to solve various problem sets, including the CEC2017 [33], and IEEE CEC2020 [34], as well as addressing 18 feature selection problems and 4 mechanical design problems. Section 5 provides the conclusions and future directions.

2. The Theory of Growth Optimizer

Inspired by the process of personal growth, GO was introduced in 2022 as a novel optimizer [28]. Learning and reflection are two key stages of GO; they complement each other and foster individual growth. Learning is to draw knowledge from the gaps between different individuals, and reflecting is to improve one’s knowledge by summarizing one’s strengths and weaknesses.

2.1. Learning Stage

In the learning stage, growth resistance ($GR$) indicates the amount of knowledge an individual has learned. An individual is represented as $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$, where $x_{i,D}$ denotes the $D$th-dimension knowledge of the $i$th individual. The value of $GR_i$ is the objective function value of the individual $x_i$; a larger $GR_i$ indicates that the individual has learned less knowledge, and vice versa. An individual learns by examining the gaps between other individuals in the population and grows in the process. The learning stage mainly models the gap between the leader and an elite ($Gap_1$), the leader and a bottom individual ($Gap_2$), an elite and a bottom individual ($Gap_3$), and two random individuals ($Gap_4$). Each gap is described by the following equation:

$$Gap_1 = x_{best} - x_{better}, \quad Gap_2 = x_{best} - x_{worse}, \quad Gap_3 = x_{better} - x_{worse}, \quad Gap_4 = x_{L1} - x_{L2} \tag{1}$$

where $x_{best}$ represents the leader of the society, $x_{better}$ represents one of the top $P_1 - 1$ individuals (excluding the leader), referred to as the elite, $x_{worse}$ represents one of the bottom $P_1$ individuals, with $P_1 = 5$, and $x_{L1}$ and $x_{L2}$ denote two different random individuals. $Gap_k\ (k = 1, 2, 3, 4)$ represents the gap between two individuals of different types; learners learn from and benefit from each gap $Gap_k$.
Generally, individuals tend to learn from information with a large knowledge gap. A learning factor ($LF$) is introduced for each group of knowledge gaps; the learning factor $LF_k$ affects an individual's learning efficiency for the $k$th gap:

$$LF_k = \frac{\lVert Gap_k \rVert}{\sum_{k=1}^{4} \lVert Gap_k \rVert}, \quad (k = 1, 2, 3, 4) \tag{2}$$

where $LF_k$ is the normalized $k$th knowledge gap, with values in the range [0, 1]. A larger $LF_k$ indicates that the individual learns more efficiently from the $k$th knowledge gap, and vice versa.
The extent of knowledge learning varies across individuals, where $SF_i$ denotes the learning extent of the $i$th individual:

$$SF_i = \frac{GR_i}{GR_{max}} \tag{3}$$

where $GR_i$ represents the growth resistance of the $i$th individual and $GR_{max}$ represents the maximum growth resistance in the population. A larger $SF_i$ indicates that the individual has a greater range of knowledge to learn and tends to perform exploration; conversely, a smaller $SF_i$ favors exploitation.
Individuals promote their growth by learning from the knowledge gaps. The amount of knowledge acquired by the $i$th individual from the $k$th group of knowledge gaps is denoted $KA_k$:

$$KA_k = SF_i \cdot LF_k \cdot Gap_k, \quad (k = 1, 2, 3, 4) \tag{4}$$
The specific learning process of the $i$th individual is expressed by the following equation:

$$x_i^{new} = x_i^{It} + KA_1 + KA_2 + KA_3 + KA_4 \tag{5}$$

where $It$ represents the current iteration number and $x_i^{new}$ represents the new state of the $i$th individual after learning.
Adjusting individuals during the learning stage may improve their quality, but it may also cause regression. Individuals whose quality improves are retained; individuals whose quality regresses are retained with probability $P_2$ and otherwise discarded:

$$x_i^{It+1} = \begin{cases} x_i^{new} & \text{if } f(x_i^{new}) < f(x_i^{It}) \\ x_i^{new} & \text{else if } r_1 < P_2 \\ x_i^{It} & \text{else} \end{cases} \tag{6}$$

where $r_1$ denotes a random number in the range [0, 1], $P_2 = 0.001$, and $f(x_i^{new})$ denotes the objective function value of the $i$th individual after learning.
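The learning stage described by Equations (1)–(6) can be sketched in NumPy as follows. This is a minimal illustration rather than the authors' implementation: the function name, the `objective` callback, and details such as how elite and bottom individuals are sampled are assumptions.

```python
import numpy as np

def learning_stage(pop, fit, objective, P1=5, P2=0.001, rng=None):
    """One learning-stage pass over the population (Eqs. (1)-(6), sketch).

    pop: (N, D) positions; fit: (N,) objective values (growth resistance GR).
    """
    rng = rng if rng is not None else np.random.default_rng()
    N, D = pop.shape
    order = np.argsort(fit)                      # ascending: best individual first
    gr_max = fit.max()
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(N):
        x_best = pop[order[0]]
        x_better = pop[rng.choice(order[1:P1])]  # one of the P1-1 elites
        x_worse = pop[rng.choice(order[-P1:])]   # one of the bottom P1 individuals
        xl1, xl2 = pop[rng.choice(N, 2, replace=False)]
        gaps = [x_best - x_better, x_best - x_worse,
                x_better - x_worse, xl1 - xl2]                    # Eq. (1)
        norms = np.array([np.linalg.norm(g) for g in gaps])
        lf = norms / (norms.sum() + 1e-30)                        # Eq. (2)
        sf = fit[i] / (gr_max + 1e-30)                            # Eq. (3)
        ka = sum(sf * lf[k] * gaps[k] for k in range(4))          # Eq. (4)
        cand = pop[i] + ka                                        # Eq. (5)
        f_cand = objective(cand)
        # Eq. (6): keep improvements; keep regressions only with probability P2
        if f_cand < fit[i] or rng.random() < P2:
            new_pop[i], new_fit[i] = cand, f_cand
    return new_pop, new_fit
```

With `P2 = 0` the update is strictly elitist per individual; the small default `P2 = 0.001` occasionally accepts a worse state, which is what lets GO escape shallow basins.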

2.2. Reflection Stage

In the reflection stage, an individual retains its good aspects, remedies its deficiencies by learning from good individuals, and, for aspects that cannot be remedied, abandons its previous knowledge and learns systematically anew. The reflection stage of GO is expressed as:

$$x_{i,j}^{It+1} = \begin{cases} \begin{cases} lb + r_4 \times (ub - lb) & \text{if } r_3 < AF \\ x_{i,j}^{It} + r_5 \times (R_j - x_{i,j}^{It}) & \text{else} \end{cases} & \text{if } r_2 < P_3 \\ x_{i,j}^{It} & \text{else} \end{cases} \tag{7}$$

$$AF = 0.01 + 0.99 \times \left(1 - \frac{FEs}{MaxFEs}\right) \tag{8}$$

where $lb$ and $ub$ are the lower and upper bounds of the problem, respectively, $r_2, r_3, r_4, r_5$ are pseudorandom numbers in the range [0, 1], $P_3$ is the reflection probability (0.3 in this paper), $AF$ is the attenuation factor, $FEs$ is the current number of function evaluations, and $MaxFEs$ is the maximal number of function evaluations; $AF$ decreases linearly from 1 to 0.01 as $FEs$ increases. $R_j$ denotes the $j$th-dimension information of the leader and elite individuals, meaning that if the $i$th individual needs to learn in the $j$th dimension from other excellent individuals, an upper-level individual guides it.
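The per-dimension reflection rule of Equations (7) and (8) can be sketched as below. The helper name and the choice to pass a single elite row in as the guiding vector $R$ are illustrative assumptions.

```python
import numpy as np

def reflection_stage(x_i, r_guide, lb, ub, fes, max_fes, P3=0.3, rng=None):
    """Reflection stage for one individual (Eqs. (7)-(8), sketch).

    r_guide plays the role of R: its j-th entry is the j-th dimension of a
    leader/elite individual (passing one elite row is an assumption).
    """
    rng = rng if rng is not None else np.random.default_rng()
    af = 0.01 + 0.99 * (1.0 - fes / max_fes)      # Eq. (8): decays from 1 to 0.01
    x_new = x_i.copy()
    for j in range(x_i.size):
        if rng.random() < P3:                     # reflect on this dimension
            if rng.random() < af:
                # aspect cannot be remedied: relearn systematically
                x_new[j] = lb + rng.random() * (ub - lb)
            else:
                # learn this dimension from the guiding individual
                x_new[j] = x_i[j] + rng.random() * (r_guide[j] - x_i[j])
    return x_new
```

Early in the run ($AF \approx 1$) reflected dimensions are mostly reinitialized; late in the run ($AF \approx 0.01$) they are almost always pulled toward the guiding individual.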

2.3. Implementation of GO for Optimization

In this subsection, a detailed implementation of GO is given; Algorithm 1 gives its pseudo-code. The implementation steps of the GO algorithm are as follows:
Step 1: Initialize the run parameters: the population size ($N$), the dimension of the problem to be solved ($D$), the bounds $ub$ and $lb$, and the parameters $P_1, P_2, P_3$.
Step 2: Initialize the population and calculate the fitness function values based on the initialization parameters. Each individual is a $1 \times D$ vector; the whole population is an $N \times D$ matrix.
Step 3: Enter the main loop. Sort the individuals in the population by $GR$ to obtain the optimal solution $x_{best}$, which is updated in each iteration.
Step 4: Execute the learning stage: first select the elite individual $x_{better}$, the bottom individual $x_{worse}$, and the random individuals $x_{L1}$ and $x_{L2}$, then sequentially use Equations (1)–(6) to update the position of the $i$th individual and update the global optimum $gbestx$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 5: Execute the reflection stage: update the $j$th dimension of the $i$th individual according to Equations (7) and (8), then use Equation (6) to update the $i$th individual's position and update the global optimal solution $gbestx$ in a timely manner. $FEs$ is incremented by 1.
Step 6: If the termination condition is met, return the global optimal solution $gbestx$; otherwise, go to Step 3 and continue execution.
Algorithm 1: The pseudocode of the GO algorithm

3. A Multi-Strategy Enhanced Growth Optimizer

The original GO has the advantages of fast convergence and a simple structure. However, when dealing with complex high-dimensional optimization problems, the population loses diversity during the iterative process, so the algorithm tends to become trapped in local optima; its exploitation capability is also insufficient, which costs convergence accuracy and speed. In this work, the Circle-OBL initialization strategy, the exploration strategy, and the exploitation strategy are introduced into the original GO to form CODGBGO. First, the Circle-OBL initialization strategy generates a well-distributed initial population to improve its overall quality. Second, the exploration strategy is applied in the learning stage to improve population diversity, which improves the algorithm's ability to escape local optimum traps. Finally, the exploitation strategy is employed in the reflection stage to improve exploitation performance, speeding up convergence and improving its accuracy.

3.1. Circle-OBL Initialization Strategy

The literature [35] indicates that improving the initialization scheme with chaotic mapping and OBL strategies can generate initial populations of better solution quality, leading to better optimization results. Chaotic mapping addresses premature convergence by generating initial solutions with higher diversity, while the OBL strategy accelerates convergence by exploring a wider region of the solution space during initialization. Therefore, in this subsection, the population is initialized using the Circle-OBL initialization strategy, where the circle chaotic map is given by Equation (9) [36].
$$z_{k+1} = \mathrm{mod}\left(z_k + a - \frac{b}{2\pi} \sin(2\pi z_k),\ 1\right) \tag{9}$$

Following the literature [36], $a = 0.5$, $b = 0.2$, $z_1 = rand$, and $\mathrm{mod}$ denotes the modulo operation; sequences generated by the circle chaotic map are more diverse. Figure 1a,b show the sequence distributions after 500 iterations for pseudo-random mapping and circle chaotic mapping, respectively, and it can be seen that the circle chaotic sequence traverses the space better. The individuals are then initialized from the generated circle chaotic sequence and simultaneously subjected to opposition-based learning:

$$x_i = z \cdot (ub - lb) + lb, \qquad x_i^* = rand(1, D) \cdot (ub + lb) - x_i, \quad (i = 1, 2, \ldots, N) \tag{10}$$

where $x_i^*$ represents the opposing individual of the $i$th individual. The $N$ chaotic individuals form a chaotic population $P_4$ and the $N$ opposing individuals form an opposing population $P_4^*$:

$$P_4 = (x_1;\ x_2;\ \ldots;\ x_N), \qquad P_4^* = (x_1^*;\ x_2^*;\ \ldots;\ x_N^*) \tag{11}$$

The two populations are combined to form $P = \{P_4 \cup P_4^*\}$, and the top $N$ individuals with the best objective function values form the final initialized population.
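The Circle-OBL initialization of Equations (9)–(11) can be sketched as follows. The clipping of opposite individuals to the bounds and the use of a single chaotic sequence to fill the whole matrix are assumptions not fixed by the text.

```python
import numpy as np

def circle_obl_init(N, D, lb, ub, objective, a=0.5, b=0.2, rng=None):
    """Circle-OBL initialization (Eqs. (9)-(11), sketch): keep the best N of
    N chaotic individuals plus their N random opposites."""
    rng = rng if rng is not None else np.random.default_rng()
    z = np.empty((N, D))
    z_k = rng.random()                            # z_1 = rand
    for idx in np.ndindex(N, D):                  # one chaotic sequence fills z
        z_k = np.mod(z_k + a - (b / (2 * np.pi)) * np.sin(2 * np.pi * z_k), 1.0)
        z[idx] = z_k
    chaotic = z * (ub - lb) + lb                  # Eq. (10): chaotic population P4
    opposite = rng.random((N, D)) * (ub + lb) - chaotic   # opposing population P4*
    opposite = np.clip(opposite, lb, ub)          # keep opposites feasible (assumption)
    pool = np.vstack([chaotic, opposite])         # P = {P4 ∪ P4*}, Eq. (11)
    fit = np.array([objective(x) for x in pool])
    keep = np.argsort(fit)[:N]                    # top N by objective value
    return pool[keep], fit[keep]
```

Note the asymmetry: the chaotic individuals cost nothing extra, but selecting the best $N$ of the $2N$ candidates doubles the number of objective evaluations spent on initialization.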

3.2. Exploration Strategy

The original GO lacks population diversity in the late stage of iteration and is therefore prone to falling into local optimum traps. In this section, an exploration strategy is proposed to enhance population diversity and thereby improve the algorithm's ability to jump out of local optima. The strategy consists of a differential strategy part and a self-search strategy part.
The literature [37] indicates that a differential strategy can effectively enhance an algorithm's global search capability as well as its ability to jump out of local optimum traps. Inspired by this, the differential strategy is introduced into the algorithm in this section. The main consideration is to learn both from the knowledge gap between the current individual and a random individual and from the knowledge gap between two random individuals:

$$DE_{k1/i} = x_{k1} - x_i^{It}, \qquad DE_{k2/k3} = x_{k2} - x_{k3} \tag{12}$$

where $x_{k1}$, $x_{k2}$, $x_{k3}$ denote three mutually distinct individuals in the population, $x_i^{It}$ denotes the $i$th individual, $DE_{k1/i}$ denotes the knowledge gap between the current individual and a random individual, and $DE_{k2/k3}$ denotes the knowledge gap between two random individuals. The $i$th individual then learns equally from both sets of gaps:

$$x_i^{new} = x_i^{It} + 0.5 \cdot DE_{k1/i} + 0.5 \cdot DE_{k2/k3} \tag{13}$$
The literature [38] indicates that individuals can effectively enhance an algorithm's global search performance by learning within a certain range. Inspired by this, a self-search strategy is proposed in this subsection, in which an individual improves itself by learning within a radius $R$. For the choice of $R$, the wide spread of normally distributed random numbers makes them suitable as a learning radius, strengthening the global search ability. However, since a large search radius also slows convergence during the iterations, the normally distributed random number is multiplied by a tail factor to balance the two. Hence, $R$ is:

$$R = N(0, 1) \cdot \pi / 8 \tag{14}$$

where $N(0, 1)$ denotes a random number obeying the standard normal distribution and $\pi/8$ is the tail factor balancing global and local search. The self-search process for the $i$th individual is:

$$x_i^{new} = x_i^{It} \cdot (1 + R) \tag{15}$$
The equal combination of the differential strategy and the self-search strategy forms the exploration strategy; the process is shown in Figure 2, where the green line represents the simulated exploration process. As the figure shows, the self-search strategy lets an individual jump out of a local optimal region by learning within a region of radius $R$, while learning from the differences among other individuals improves its global search ability and enlarges the population's exploration space. This analysis confirms that the exploration strategy enhances the algorithm's global search ability. It is expressed as:

$$x_i^{new} = \begin{cases} x_i^{It} + 0.5 \cdot DE_{k1/i} + 0.5 \cdot DE_{k2/k3} & \text{if } rand < 0.5 \\ x_i^{It} \cdot (1 + R) & \text{else} \end{cases} \tag{16}$$

The new state of the $i$th individual is then retained using elite retention:

$$x_i^{It+1} = \begin{cases} x_i^{new} & \text{if } f(x_i^{new}) < f(x_i^{It}) \\ x_i^{It} & \text{else} \end{cases} \tag{17}$$

where $f(x_i^{new})$ denotes the objective function value of the $i$th individual after the exploration strategy.
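The combined exploration strategy of Equations (12)–(17) can be sketched as follows. Treating $R$ as a per-dimension normal draw rather than a single scalar is an assumption, as is the function name.

```python
import numpy as np

def exploration_step(pop, fit, i, objective, rng=None):
    """Exploration strategy for individual i (Eqs. (12)-(17), sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    N, D = pop.shape
    if rng.random() < 0.5:
        # Differential part: learn equally from two knowledge gaps, Eqs. (12)-(13)
        k1, k2, k3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
        cand = pop[i] + 0.5 * (pop[k1] - pop[i]) + 0.5 * (pop[k2] - pop[k3])
    else:
        # Self-search within radius R = N(0,1) * pi/8, Eqs. (14)-(15)
        R = rng.standard_normal(D) * np.pi / 8
        cand = pop[i] * (1.0 + R)
    f_cand = objective(cand)
    # Eq. (17): elite retention keeps the new state only if it improves
    if f_cand < fit[i]:
        return cand, f_cand
    return pop[i], fit[i]
```

Because of the elite retention in Equation (17), a single exploration step can never worsen an individual's objective value.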

3.3. Exploitation Strategy

The literature [39] states that guidance by the optimal individual can effectively enhance an algorithm's exploitation and improve its convergence speed. Inspired by this, an exploitation strategy based on guidance by the optimal individual is proposed in this section. To guard against falling into a local optimum, a differential term is also retained so as to balance the exploitation and exploration phases while enhancing exploitation as much as possible. The gap is:

$$Gap_{k4/i} = x_{k4} - x_i^{It} \tag{18}$$

where $x_{k4}$ denotes a random individual and $Gap_{k4/i}$ denotes the gap between the current individual and the random individual. To strengthen exploitation, the globally optimal individual guides the position update of the current individual; this process is shown in Figure 3. The individual quickly approaches the globally optimal individual, while the differential term keeps it from settling into a local optimal region, improving both the convergence speed and the practical exploitation ability of the algorithm:

$$x_i^{new} = x_{best} + rand \cdot Gap_{k4/i} \tag{19}$$
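Equations (18) and (19) can be sketched as follows; drawing a separate random number per dimension for the $rand$ factor is an assumption, and in the full algorithm the result would again pass through the retention rule of Equation (17).

```python
import numpy as np

def exploitation_step(pop, i, x_best, rng=None):
    """Exploitation strategy (Eqs. (18)-(19), sketch): move toward the leader
    while a differential gap to a random individual preserves diversity."""
    rng = rng if rng is not None else np.random.default_rng()
    N, D = pop.shape
    k4 = rng.choice([k for k in range(N) if k != i])   # a random other individual
    gap = pop[k4] - pop[i]                             # Eq. (18)
    return x_best + rng.random(D) * gap                # Eq. (19)
```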

3.4. Implementation of CODGBGO for Optimization

In this subsection, a detailed implementation of CODGBGO is given; Algorithm 2 gives its pseudo-code. The implementation steps of the CODGBGO algorithm are as follows:
Step 1: Initialize the run parameters: $N$, $D$, $ub$ and $lb$, and the parameters $P_1, P_2, P_3, \alpha, \beta$.
Step 2: Apply the Circle-OBL initialization strategy. Initialize the population according to Equations (10) and (11) and calculate the objective function values based on the initialization parameters. Each individual is a $1 \times D$ vector; the whole population is an $N \times D$ matrix.
Step 3: Enter the main loop. Sort the individuals in the population by $GR$ to obtain the optimal solution $x_{best}$, which is updated in each iteration.
Step 4: Execute the learning stage. If $rand < \alpha$, first select the elite individual $x_{better}$, the bottom individual $x_{worse}$, and the random individuals $x_{L1}$ and $x_{L2}$, then sequentially use Equations (1)–(6) to update the position of the $i$th individual; otherwise, use Equations (12)–(17). Update the global optimum $gbestx$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 5: Execute the reflection stage. If $rand < \beta$, update the $j$th dimension of the $i$th individual according to Equations (7) and (8) and then use Equation (6) to update the individual's position; otherwise, update the position of the $i$th individual according to Equations (17)–(19). Update the global optimal solution $gbestx$ in a timely manner. $FEs$ is incremented by 1.
Step 6: If the termination condition is met, return the global optimal solution $gbestx$; otherwise, go to Step 3 and continue execution.
Algorithm 2: The pseudocode of the CODGBGO algorithm

3.5. Computational Complexity

This section analyzes the computational complexity of the proposed CODGBGO algorithm. The computational complexity of the original GO initialization is $O(N)$. In each iteration, each member of the population undergoes two stages of updating and evaluation of its objective function, so the update process costs $O(2 \cdot T \cdot N \cdot D)$, where $T$ is the maximum number of iterations. Therefore, the computational complexity of GO is $O(N \cdot (1 + 2T \cdot D))$. Compared to GO, CODGBGO introduces the Circle-OBL initialization strategy, raising the initialization cost to $O(2N)$, while the exploration and exploitation strategies do not change the original update logic, so the update cost remains $O(2 \cdot T \cdot N \cdot D)$. Therefore, the computational complexity of CODGBGO is $O(2N \cdot (1 + T \cdot D))$.

4. Experimental Results

In this section, we will conduct a series of experiments to evaluate the performance of the proposed CODGBGO. First, its performance in solving numerical optimization problems is evaluated using the CEC2017 test set and the CEC2020 test set. Second, its performance in solving discretization problems is evaluated using 18 feature selection problems. Finally, four constrained engineering applications are solved using the proposed CODGBGO to evaluate its performance in solving realistic constrained optimization problems. Meanwhile, in order to comprehensively and objectively evaluate the solution performance of CODGBGO, the proposed CODGBGO is compared with numerous algorithms. The compared algorithms include classical algorithms, improved algorithms, high citation algorithms, popular algorithms, new algorithms, and superior algorithms.
To ensure the fairness of the experiments, the population size was set to 60, and the maximum number of function evaluations was set to 60,000. The test dimensions were set to 30D, 50D, and 100D for CEC2017, 10D, and 20D for CEC2020. All experiments were performed using Windows 11 as the operating system, and the code execution took place within the MATLAB R2021b environment.

4.1. Comparison Algorithm and CEC Benchmark Problems

In order to comprehensively and objectively evaluate the performance of the proposed CODGBGO, this paper compares it with several existing optimization algorithms on several sets of test functions. The selected test function sets are IEEE CEC2017 and IEEE CEC2020, detailed in Table 1 and Table 2, involving unimodal functions, multimodal functions, hybrid functions, and composition functions. A unimodal function contains only one optimum and is mainly used to verify the local search capability of CODGBGO. Multimodal, hybrid, and composition functions are more complex and, because they contain multiple local optima, algorithms easily fall into local optimum traps when solving them. The tests on multimodal, hybrid, and composition functions are therefore mainly used to verify the performance of CODGBGO in solving complex problems, to check its ability to jump out of local optima, and to evaluate the balance between the global search phase and the local search phase. Meanwhile, in order to comprehensively reflect the performance of CODGBGO, 21 comparison algorithms are selected for experimental comparison in this paper, including classical, improved, highly cited, popular, new, and superior algorithms. The specific parameter configurations are shown in Table 3.

4.2. Effectiveness of Strategies and Parameter Settings

4.2.1. Effectiveness of Strategies

In this subsection, the effectiveness of each added strategy is tested. CODGBGO is a new optimization algorithm formed by integrating the Circle-OBL initialization strategy, the exploration strategy, and the exploitation strategy into GO. To verify the effectiveness of each strategy, each one is individually integrated into GO to form a new combination: the Circle-OBL initialization strategy forms COGO, the exploration strategy forms DGGO, and the exploitation strategy forms BGO. These combinations were evaluated on the CEC2017 test function set at 30D, and each experiment was run independently 30 times. The experimental results were ranked using the Friedman mean rank test to verify the effectiveness of the strategies and are shown in Table 4. From Table 4, it can be observed that the rankings of COGO, DGGO, and BGO are all superior to GO, which indicates that each strategy is beneficial for enhancing the performance of GO. Moreover, combining the three strategies into GO yields an even more effective enhancement of its performance.

4.2.2. Parameter {α,β} Settings

In this subsection, the two control parameters of CODGBGO are studied in detail to determine the optimal parameter combination for the best performance of CODGBGO. To avoid experimental redundancy in parameter selection, a preliminary round of testing was performed before selecting {α, β}; parameter α was found to have a significant advantage in the interval [0.7, 0.9], and parameter β was found to have a significant advantage in the interval [0.85, 0.95]. To further determine the parameter values, parameter α is sampled from the set {0.7, 0.8, 0.9} at intervals of 0.1 and parameter β from the set {0.85, 0.9, 0.95} at intervals of 0.05, giving 9 combinations: {0.7, 0.85}, {0.7, 0.9}, {0.7, 0.95}, {0.8, 0.85}, {0.8, 0.9}, {0.8, 0.95}, {0.9, 0.85}, {0.9, 0.9}, and {0.9, 0.95}. To determine the best parameter combination, these nine combinations are tested on the CEC2017 test function set with a test dimension of 30D. Each experiment was run independently 30 times, and the Friedman mean rank test was used to rank the experimental results and determine the optimal parameter combination. The experimental results are shown in Table 5.
From Table 5, it can be found that when the value of β is 0.85, the sum of the final rankings of the combinations {0.7, 0.85}, {0.8, 0.85}, and {0.9, 0.85} is 24. When the value of β is 0.9, the sum of the final rankings of the combinations {0.7, 0.9}, {0.8, 0.9}, and {0.9, 0.9} is 13. When the value of β is 0.95, the sum of the final rankings of the combinations {0.7, 0.95}, {0.8, 0.95}, and {0.9, 0.95} is 8. In summary, the algorithm achieves better results when the parameter β takes the value 0.95. In the same way, when the value of α is 0.7, the sum of the final rankings of the combinations {0.7, 0.85}, {0.7, 0.9}, and {0.7, 0.95} is 14. When the value of α is 0.8, the sum of the final rankings of the combinations {0.8, 0.85}, {0.8, 0.9}, and {0.8, 0.95} is 11. When the value of α is 0.9, the sum of the final rankings of the combinations {0.9, 0.85}, {0.9, 0.9}, and {0.9, 0.95} is 20. In summary, the algorithm achieves better results when the parameter α takes the value 0.8. Therefore, the combination {0.8, 0.95} is chosen as the best parameter combination for the subsequent experiments in this paper.
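The marginal rank-sum analysis above can be sketched as follows. The per-combination Friedman ranks below are illustrative placeholders (the paper reports only the marginal sums), chosen to be consistent with the sums 24/13/8 for β and 14/11/20 for α.

```python
# Sketch of the marginal rank-sum aggregation used to pick {α, β}.
# Ranks are hypothetical but consistent with the sums reported in the text.

ranks = {  # (α, β) -> Friedman rank of that combination (1 = best)
    (0.7, 0.85): 8, (0.7, 0.90): 4, (0.7, 0.95): 2,
    (0.8, 0.85): 7, (0.8, 0.90): 3, (0.8, 0.95): 1,
    (0.9, 0.85): 9, (0.9, 0.90): 6, (0.9, 0.95): 5,
}

def marginal_sums(ranks, axis):
    """Sum ranks over one parameter: axis=0 groups by α, axis=1 by β."""
    sums = {}
    for combo, r in ranks.items():
        sums[combo[axis]] = sums.get(combo[axis], 0) + r
    return sums

print(marginal_sums(ranks, 0))          # {0.7: 14, 0.8: 11, 0.9: 20}
print(marginal_sums(ranks, 1))          # {0.85: 24, 0.9: 13, 0.95: 8}
print(min(ranks, key=ranks.get))        # (0.8, 0.95), the chosen combination
```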

4.3. Experimental Results

In order to further evaluate the performance of CODGBGO, in this subsection, the proposed CODGBGO is experimentally compared with 10 common optimization algorithms on the IEEE CEC2017 test functions, where the compared algorithms include classical, highly cited, improved, and new algorithms. The test functions involve unimodal, multimodal, hybrid, and composition functions. The performance of CODGBGO is comprehensively assessed through population diversity analysis, exploration and exploitation analysis, numerical analysis, convergence and stability analysis, non-parametric analysis, and expanded analysis.

4.3.1. Population Diversity Analysis

Having a good population diversity contributes to the algorithm’s ability to quickly converge to the global optimal solution and avoid getting trapped in local optima. In this section, we mainly analyze the differences in population diversity between GO and CODGBGO. The IEEE CEC2017 test function set is used for diversity experiments, with a test dimension of 30D. The formula for calculating population diversity is shown below:
$$I_c(t) = \sum_{i=1}^{N} \sum_{j=1}^{D} \left( x_{i,j}(t) - c_j(t) \right)^2$$
where $I_c(t)$ denotes the population diversity in generation $t$ and $c_j(t)$ denotes the centroid (centrifugal degree) of the $j$-th dimension in generation $t$, computed as follows:
$$c_j(t) = \frac{1}{N} \sum_{i=1}^{N} x_{i,j}(t)$$
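The diversity measure above can be sketched directly in code: sum, over all individuals and dimensions, the squared deviation of each coordinate from the population centroid. This is a minimal illustration, not the authors' implementation.

```python
# Population diversity I_c(t): squared deviation of every coordinate
# from the per-dimension centroid c_j(t).

def centroid(pop):
    """c_j = mean of the j-th coordinate over the population."""
    n, d = len(pop), len(pop[0])
    return [sum(ind[j] for ind in pop) / n for j in range(d)]

def population_diversity(pop):
    c = centroid(pop)
    return sum((ind[j] - c[j]) ** 2
               for ind in pop for j in range(len(c)))

pop = [[0.0, 0.0], [2.0, 2.0]]       # two individuals in 2-D
print(population_diversity(pop))     # centroid (1, 1): four unit deviations -> 4.0
```

A spread-out population yields a large value; a converged population drives it toward zero, which is how Figure 4 distinguishes GO from CODGBGO.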
The experimental results are shown in Figure 4. As can be seen from Figure 4, the introduction of the Circle-OBL initialization strategy in CODGBGO results in a higher initial population diversity compared to GO. Additionally, the introduction of the exploration strategy ensures that CODGBGO consistently maintains higher population diversity throughout the iteration process compared to GO. In conclusion, CODGBGO is able to more effectively avoid getting trapped in local optima, allowing it to converge to the global optimal solution faster.

4.3.2. Exploration and Exploitation Analysis

Exploration and exploitation are the two most important stages in metaheuristic algorithms, and effectively controlling these stages helps enhance algorithm performance. In this section, an analysis is conducted on the exploration and exploitation stages of CODGBGO using the IEEE CEC2017 test function set with a test dimension of 30D. The exploration ratio is calculated using Equation (22) and the exploitation ratio using Equation (23).
$$Exploration(\%) = \frac{Div(t)}{Div_{max}} \times 100$$
$$Exploitation(\%) = \frac{\left| Div(t) - Div_{max} \right|}{Div_{max}} \times 100$$
where $Div(t)$ denotes the dimension-wise diversity measurement defined in Equation (24) and $Div_{max}$ represents the maximum diversity achieved during the run:
$$Div(t) = \frac{1}{D} \sum_{j=1}^{D} \frac{1}{N} \sum_{i=1}^{N} \left| \mathrm{median}(x_j(t)) - x_{i,j}(t) \right|$$
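The measurement in Equations (22)-(24) can be sketched as follows; a minimal illustration assuming $Div_{max}$ has already been tracked over the run.

```python
# Dimension-wise diversity Div(t) and the derived exploration/exploitation
# percentages: mean absolute deviation from the per-dimension median,
# averaged over dimensions, then normalized by the run's maximum diversity.
from statistics import median

def div_measure(pop):
    n, d = len(pop), len(pop[0])
    total = 0.0
    for j in range(d):
        col = [ind[j] for ind in pop]
        m = median(col)
        total += sum(abs(m - x) for x in col) / n
    return total / d

def exploration_exploitation(div_t, div_max):
    exploration = div_t / div_max * 100
    exploitation = abs(div_t - div_max) / div_max * 100
    return exploration, exploitation

pop = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]
d = div_measure(pop)   # per-dim mean |median - x|: (2/3 + 4/3) / 2 = 1.0
print(exploration_exploitation(d, 2.0))
```

With these definitions the two percentages sum to 100 whenever $Div(t) \le Div_{max}$, which is why Figure 5 shows complementary curves.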
The experimental results are shown in Figure 5. From Figure 5, it can be observed that in the early iterations, CODGBGO explores the search space with a high percentage, indicating its ability to effectively explore the search space. Subsequently, due to the introduction of the exploitation strategy into GO, the exploitation ratio gradually increases, enhancing the algorithm’s convergence speed and precision. Throughout the iteration process, CODGBGO achieves a good balance between exploration and exploitation stages, effectively avoiding the problem of algorithm stagnation in local optima and improving the algorithm’s solving performance.

4.3.3. Numerical Analysis

In this section, the performance of CODGBGO in solving numerical optimization problems is analyzed. The IEEE CEC2017 test functions are used for the experiments, and comparisons are made with 10 other state-of-the-art optimization algorithms. The test dimensions are 30D, 50D, and 100D, and the experimental results are presented in Table 6, Table 7, and Table 8, respectively, where the bold numbers in the table indicate the top rankings.
As can be seen from Table 6, when the test dimension is 30D, CODGBGO ranks first on 24 of the test functions, demonstrating excellent solving performance. On the unimodal function F1, its solution accuracy is lower than that of INFO, mainly because INFO adopts a more refined exploitation technique that guarantees convergence accuracy; on this function, the exploitation strategy introduced in this paper ranks second only to INFO. On the simple multimodal functions F4 to F10, however, CODGBGO holds a clear advantage and ranks first in 85.7% of cases, indicating that the introduced exploration strategy strengthens the algorithm's ability to escape local optimum traps. Meanwhile, on the complex multimodal functions consisting of hybrid and composition functions, CODGBGO obtains a winning percentage of 90%, indicating that the proposed exploitation and exploration strategies keep the two phases of the algorithm well balanced, enhancing its global optimal search performance and allowing it to cope effectively with the challenges posed by complex multimodal problems. From a comprehensive point of view, CODGBGO outperforms the other algorithms in terms of both average ranking and percentage of first-place rankings, which further confirms the role of the proposed strategies in boosting the algorithm's performance.
When the test dimension is 50D, CODGBGO again demonstrates strong solution performance, ranking first on 82.7% of the test functions. It dominates 85.7% of the simple multimodal test functions, showing that the proposed exploration strategy still effectively improves the algorithm's ability to escape local optima as the test dimension increases. On the complex multimodal functions, it retains a 90% winning rate, confirming that the proposed exploitation and exploration strategies remain effective as the dimension grows. The overall first-place ranking likewise shows that the proposed strategies continue to benefit the algorithm as the problem dimension increases.
Finally, when the test dimension is 100D, the solution performance of all algorithms shows a decline. Inevitably, the solving performance of CODGBGO also shows a slight decline, but the overall ranking shows that the proposed CODGBGO still has a certain advantage in solving high-dimensional test problems. Specifically, it achieves a winning rate of 60% on the complex multimodal problem. This also indirectly reflects that the exploration strategy and exploitation strategy still have a certain contribution to the performance of the algorithm when solving high-dimensional problems.
To summarize, the tests on CEC2017 across different dimensions confirm that the proposed exploration and exploitation strategies effectively improve the algorithm. A drawback is that this improvement gradually diminishes as the test dimension rises, but it remains within an acceptable range.

4.3.4. Convergence and Stability Analysis

In addition to solution accuracy, the convergence speed and stability of an algorithm are also important. This subsection analyzes the convergence and stability of the algorithm. Experiments are conducted using the IEEE CEC2017 test function set with a test dimension of 30D. The experimental results are shown in Figure 6 and Figure 7. From Figure 6, it can be seen that in most cases CODGBGO takes the lead after 30,000 function evaluations, with faster convergence speed and higher convergence accuracy. Meanwhile, Figure 7 shows that CODGBGO has higher solution stability.

4.3.5. Nonparametric Analysis

In this subsection, non-parametric tests are used to compare the differences between CODGBGO and the competitive algorithms, with experiments conducted on the IEEE CEC2017 test function set at test dimensions of 30D, 50D, and 100D. First, the Wilcoxon rank sum test is used to compare the differences between CODGBGO and the competitive algorithms, and the results are shown in Table 9, Table 10, and Table 11, respectively. A significance factor p < 0.05 indicates a significant difference between a competitive algorithm and CODGBGO; otherwise, there is no significant difference. When a significant difference exists, the competitive algorithm is judged superior or inferior to CODGBGO by comparing the means, where '+' indicates that the competitive algorithm is significantly better than CODGBGO, '−' indicates that it is significantly weaker, and '=' indicates no significant difference. As can be seen from Table 9, Table 10 and Table 11, CODGBGO significantly outperforms the other competing algorithms in most cases, demonstrating strong comprehensive solution performance. Secondly, the experimental results are ranked using the Friedman mean rank test, and the results are shown in Table 12, from which it can be seen that CODGBGO is ahead of the competing algorithms in terms of average rank at every test dimension. In summary, the CODGBGO proposed in this paper is more competitive than the other algorithms on the IEEE CEC2017 test set.

4.3.6. Expanded Analysis

In this section, the performance differences between CODGBGO and the superior algorithms are primarily tested. The IEEE CEC2020 test function set is used for the experiments, with test dimensions of 10D and 20D.
Firstly, the algorithm's performance is evaluated using the mean and standard deviation metrics, and the results are shown in Table 13. From Table 13, when the test dimension is 10D, CODGBGO ranks first on 80% of the test functions and achieves good solution performance. It is weaker than ALSHADE on test function F8 and weaker than ALSHADE, IMODE, and LSHADE on test function F9, which shows that, compared with the best existing optimization algorithms, CODGBGO still suffers from low convergence performance on specific problems; this is attributed to the remaining room for improvement in the proposed exploration and exploitation strategies. From a comprehensive point of view, however, CODGBGO is undeniably an excellent algorithm. When the test dimension is 20D, consistent with the above analysis, CODGBGO still has the best comprehensive performance, though some room for improvement remains.
Secondly, the Wilcoxon rank sum test is used to compare the differences between CODGBGO and the superior algorithms, and the experimental results are shown in Table 14. From Table 14, it can be observed that for a test dimension of 10D, CODGBGO significantly outperforms ALSHADE in five test functions and significantly outperforms IMODE and LSHADE in four test functions. For a test dimension of 20D, CODGBGO significantly outperforms ALSHADE and LSHADE in seven test functions and significantly outperforms IMODE in six test functions, demonstrating its strong competitiveness. Secondly, the Friedman mean rank test is used to rank the experimental results, and the results are presented in Table 15. From Table 15, it can be seen that CODGBGO achieves the top ranking in all test dimensions, outperforming other superior algorithms and demonstrating higher solution stability and performance.

4.4. Results for Feature Selection Problems

Previous experiments verified that the proposed CODGBGO performs efficiently on numerical optimization problems. In this section, its performance on discretized optimization problems is evaluated by applying CODGBGO to feature selection. Feature selection reduces the dimensionality of the original dataset's features to improve classification accuracy, and it can be viewed as a discrete optimization problem; the CODGBGO algorithm proposed in this paper is applied to solve this challenging problem.

4.4.1. Establishment of an Optimization Model

The feature selection problem is a multidimensional optimization problem where the objective is to obtain the highest classification accuracy using the minimum number of features. The fitness function of the feature selection model is defined as follows:
$$\min f(x) = \lambda_1 \cdot error + \lambda_2 \cdot R / n,$$
where $x$ denotes the selected feature subset, $error$ denotes the classification error of the selected feature subset, $R$ denotes the size of the selected feature subset, $n$ denotes the number of features in the original dataset, $\lambda_1 \in [0, 1]$, and $\lambda_2 = 1 - \lambda_1$. In this paper, $\lambda_1$ takes the value 0.9.
To deal with the feature selection problem, CODGBGO is discretized by encoding each individual in the population as a binary vector. For example, for an individual $X_i = (x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,D})$, $D$ denotes the dimension of the problem to be solved, whose value is the number of features in the original dataset, and $x_{i,j}$ is a binary variable: $x_{i,j} = 1$ indicates that the $j$-th feature is selected; otherwise, it is not selected. During population initialization, $N$ individuals are generated with variables drawn randomly from the interval [0, 1] and then converted to binary values using a threshold of 0.5, defined as follows:
$$x_{i,j} = \begin{cases} 1, & x_{i,j} > 0.5 \\ 0, & x_{i,j} \le 0.5 \end{cases} \qquad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, D.$$
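The binarization and fitness evaluation above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the classification-error function is passed in as a parameter (in the paper it comes from a 5-NN classifier under 5-fold cross-validation), and `toy_error` is a hypothetical stand-in for it.

```python
# Feature-selection fitness: a candidate in [0, 1]^D is binarized at 0.5
# and scored by lam1 * error + lam2 * (R / n), per the model above.

def binarize(individual, threshold=0.5):
    return [1 if x > threshold else 0 for x in individual]

def fitness(individual, error_fn, lam1=0.9):
    mask = binarize(individual)
    r, n = sum(mask), len(mask)
    if r == 0:                 # an empty feature subset is invalid
        return float("inf")
    lam2 = 1.0 - lam1
    return lam1 * error_fn(mask) + lam2 * r / n

# Toy error function: pretend only the first two features matter.
toy_error = lambda mask: 0.10 if mask[0] and mask[1] else 0.40

print(fitness([0.9, 0.8, 0.1, 0.2], toy_error))  # 0.9*0.10 + 0.1*(2/4) = 0.14
```

The weighting ($\lambda_1 = 0.9$) strongly favors accuracy over subset size, so a slightly larger subset is accepted whenever it lowers the classification error.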

4.4.2. Experimental Analyses

UCI is a machine learning repository maintained by the University of California, Irvine. It can be obtained at http://archive.ics.uci.edu/mL/index.php (accessed on 4 March 2024). In this paper, 18 datasets were selected to evaluate the performance of the proposed CODGBGO in dealing with discrete optimization problems. The dataset description information is shown in Table 16. The comparison algorithms are PSO, DE, GWO, MVO, WOA, ABO, BOA, HHO, EO, and GO.
In this paper, the classification accuracy of the selected feature subset is calculated using the K-nearest neighbor (KNN) algorithm, with K set to 5. Performance is then evaluated using 5-fold cross-validation, in which the dataset is divided into five parts: one for testing and four for training. K-fold cross-validation allows the predictive accuracy of a feature subset to be estimated effectively. Meanwhile, 10 algorithms were compared experimentally with CODGBGO to evaluate its performance. The population size was set to 60, the maximum number of iterations to 100, and each experiment was run independently 30 times. The following metrics were used to evaluate the performance of CODGBGO:
Fitness values: obtained from 30 independent runs of the experiment, comprising the best, worst, and mean values.
Classification accuracy: the average classification accuracy over 30 independent experiments, where the accuracy of each experiment is computed by running KNN on the selected feature subset.
Feature subset size: the average feature subset size over 30 independent experiments.
Rank: the ranking under the different evaluation criteria (e.g., mean, best, worst), used to visualize algorithm performance.
Table 17 shows the best fitness value, mean fitness value, worst fitness value, and ranking on each metric for 30 independent runs of the algorithm on different datasets. As can be seen in Table 17, CODGBGO ranked first on all 18 datasets in the best fitness value metric, with a 100% win rate. On the mean fitness value metric, it ranked first on 17 datasets and was weaker only than DE on the Zoo dataset, with an average ranking of 1.0556. On the worst fitness value metric, it ranked first on 16 datasets, weaker than GWO on the IonosphereEW dataset and weaker than GO on the Landsat dataset, with an average ranking of 1.1111. The proposed CODGBGO thus ranks first overall in all three metrics: best, mean, and worst fitness value. The above analysis confirms that, owing to the introduced exploitation and exploration strategies, CODGBGO performs efficiently on feature selection problems. Compared with the comparison algorithms, CODGBGO also shows greater scalability, which suggests it may be highly competitive on other engineering optimization problems as well. However, on specific problems it still occasionally falls into local optima and loses its global optimization capability, indicating that the exploration and exploitation strategies in this paper still leave room for improvement.
Figure 8 shows the distribution of the best fitness values generated by CODGBGO over 30 independently run experiments. As can be seen in Figure 8, the stability of CODGBGO outperforms the comparison algorithms on most of the datasets. This also shows that CODGBGO has higher solution stability in solving the feature selection problem and can be used as an effective method to solve the feature selection problem. In addition to the accuracy and stability of the solution, the speed of convergence of the algorithm in solving real-world problems is crucial. Figure 9 shows the average convergence curve of CODGBGO in solving the feature selection problem. From Figure 9, it can be seen that its convergence speed and accuracy are better than those of the other comparison algorithms on most of the datasets.
Through the above analysis, CODGBGO has higher computational accuracy, better stability, and faster convergence speed in solving the feature selection problem. Table 18 shows the Wilcoxon’s rank sum test results of the algorithm in solving different datasets, where ‘+’, ‘−’, and ‘=’ have the meanings as described earlier with a significance factor of 0.05. As can be seen from Table 18, CODGBGO significantly outperforms PSO, MVO, WOA, ABO, BOA, HHO, and EO on 18 datasets, DE and GWO on 17 datasets, and GO on 15 datasets. This also shows that CODGBGO is an effective method for solving the feature selection problem.
In addition, Table 19 and Table 20 show the average classification accuracy and the average feature subset size of the algorithms on the different datasets, respectively. Figure 10 shows the average accuracy ranking of the algorithms on the different datasets, and Table 21 shows the running time for each dataset. As can be seen in Table 19 and Figure 10, CODGBGO finished first on 14 datasets, second on 3 datasets, and only eighth on the BreastEW dataset, giving an average accuracy ranking of 1.5556 and a first-place overall ranking. As can be seen in Table 20, CODGBGO has an average ranking of 4 in terms of average selected feature subset size, ultimately ranking second behind EO; however, the average classification accuracy of EO is weaker than that of CODGBGO. This suggests that EO removes strongly correlated feature attributes during feature selection, which reduces classification accuracy, because EO's global optimization ability is not as good as CODGBGO's. While CODGBGO is weaker than EO in feature subset size, its classification accuracy is better than that of the other algorithms, confirming that, thanks to the introduced strategies, CODGBGO achieves a better balance between global and local search and possesses stronger global optimization ability.
Finally, Figure 11 shows the combined performance on six evaluation criteria, which are best fitness value, average fitness value, worst fitness value, average accuracy, average running time, and average feature subset size. As can be seen in Figure 11, CODGBGO does not dominate in terms of average runtime, but it is within an acceptable range. CODGBGO is weaker than EO in average feature subset size, but CODGBGO outperforms EO in classification accuracy. CODGBGO outperforms the comparison algorithm in all other metrics. In summary, it can be shown that the comprehensive performance of CODGBGO is better than the comparison algorithm, and it is an effective method to solve the feature selection problem.

4.5. Results for Constrained Engineering Application Problems

The previous experiments verified the ability of CODGBGO to solve numerical and discretized optimization problems. In this section, the proposed CODGBGO is applied to four constrained engineering optimization problems to evaluate its performance on this class of problems. The selected engineering problems are the 10-bar truss design, tension/compression spring design, weight minimization of a speed reducer, and welded beam design. The comparison algorithms are GO, INFO, SSA, TLBO, BESD, DE, PSO, SO, FOPSO, and ICGWO. The population size was set to 60, the maximum number of function evaluations to 60,000, and each experiment was run independently 30 times.

4.5.1. Results on 10-Bar Truss Design [61]

The primary objective of this problem is to minimize the weight of the truss structure while ensuring that certain frequency constraints are met. The structure is shown in Figure 12, and the mathematical formulation for addressing this problem can be described as follows:
$$\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = \sum_{i=1}^{10} L_i(x_i)\,\rho_i A_i \\
\text{subject to:}\quad & g_1(\bar{x}) = \frac{7}{\omega_1(\bar{x})} - 1 \le 0, \\
& g_2(\bar{x}) = \frac{15}{\omega_2(\bar{x})} - 1 \le 0, \\
& g_3(\bar{x}) = \frac{20}{\omega_3(\bar{x})} - 1 \le 0, \\
\text{with bounds:}\quad & 6.45 \times 10^{-5} \le A_i \le 5 \times 10^{-3}, \quad i = 1, 2, \ldots, 10, \\
\text{where}\quad & \bar{x} = \{A_1, A_2, \ldots, A_{10}\}, \quad \rho = 2770.
\end{aligned}$$
The optimal solution of the proposed CODGBGO and comparative algorithms for the 10-bar truss design problem is shown in Table 22, and the statistical results are presented in Table 23. From Table 22, it can be observed that CODGBGO ranks first and obtains the optimal solution X = (0.003515, 0.001471, 0.003511, 0.001472, 0.000065, 0.000456, 0.002368, 0.002371, 0.001244, 0.001241) with a fitness function value of 524.4511. Furthermore, Table 23 demonstrates that CODGBGO outperforms other algorithms in terms of the best fitness function value, worst fitness function value, average fitness function value, and standard deviation, ranking first in all categories. In conclusion, CODGBGO is an effective method for solving the 10-bar truss design problem.

4.5.2. Results on Tension/Compression Spring Design [62]

The primary aim of this problem is to optimize the mass of a tension or compression spring. This problem involves four constraints, and three variables are employed to compute the mass: the wire diameter ( x 1 ), the average coil diameter ( x 2 ), and the number of active coils ( x 3 ). The structure is shown in Figure 13, and the formulation of this problem is as follows:
$$\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = x_1^2 x_2 (2 + x_3) \\
\text{subject to:}\quad & g_1(\bar{x}) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \le 0, \\
& g_2(\bar{x}) = \frac{4 x_2^2 - x_1 x_2}{12566\,( x_2 x_1^3 - x_1^4 )} + \frac{1}{5108\, x_1^2} - 1 \le 0, \\
& g_3(\bar{x}) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \le 0, \\
& g_4(\bar{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0, \\
\text{with bounds:}\quad & 0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2.00 \le x_3 \le 15.0.
\end{aligned}$$
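A minimal sketch of the objective and constraint functions for this problem, following the standard tension/compression spring formulation above; it is not the authors' implementation and omits the constraint-handling mechanism of the optimizer itself.

```python
# Tension/compression spring design: objective and inequality constraints
# (g_i <= 0 means feasible). x1: wire diameter, x2: mean coil diameter,
# x3: number of active coils.

def spring_objective(x1, x2, x3):
    """Spring mass f = x1^2 * x2 * (2 + x3)."""
    return x1 ** 2 * x2 * (2 + x3)

def spring_constraints(x1, x2, x3):
    g1 = 1 - (x2 ** 3 * x3) / (71785 * x1 ** 4)
    g2 = ((4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
          + 1 / (5108 * x1 ** 2) - 1)
    g3 = 1 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1
    return [g1, g2, g3, g4]

# Best solution reported in Table 24:
x = (0.051701, 0.356996, 11.272677)
print(spring_objective(*x))      # ≈ 0.012665
print(spring_constraints(*x))    # g1 and g2 are nearly active (≈ 0)
```

Evaluating the reported optimum this way is a quick sanity check that a reimplementation of the problem matches the paper's tables.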
The optimal solution of the proposed CODGBGO and comparative algorithms for the tension/compression spring design problem are presented in Table 24, while the corresponding statistical results are shown in Table 25. From Table 24, it can be observed that GO, DE, SO, and CODGBGO all rank first simultaneously. Specifically, CODGBGO achieves the best solution with a fitness function value of 0.012665, corresponding to X = (0.051701, 0.356996, 11.272677). Moreover, from Table 25, it is evident that CODGBGO ranks second in terms of the worst fitness function value, being weaker than DE, but ranks first in terms of the average fitness function value. Therefore, based on the above analysis, CODGBGO is considered suitable for solving the tension/compression spring design problem.

4.5.3. Results on Weight Minimization of a Speed Reducer [63]

This task encompasses the development of a speed reduction mechanism for a compact aircraft engine. The structure is shown in Figure 14, and the optimization problem derived from this task can be expressed in the following manner:
$$\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = 0.7854\, x_2^2 x_1 \left( 14.9334\, x_3 - 43.0934 + 3.3333\, x_3^2 \right) \\
& \qquad + 0.7854 \left( x_5 x_7^2 + x_4 x_6^2 \right) - 1.508\, x_1 \left( x_7^2 + x_6^2 \right) + 7.477 \left( x_7^3 + x_6^3 \right) \\
\text{subject to:}\quad & g_1(\bar{x}) = -x_1 x_2^2 x_3 + 27 \le 0, \\
& g_2(\bar{x}) = -x_1 x_2^2 x_3^2 + 397.5 \le 0, \\
& g_3(\bar{x}) = -\frac{x_2 x_6^4 x_3}{x_4^3} + 1.93 \le 0, \\
& g_4(\bar{x}) = -\frac{x_2 x_7^4 x_3}{x_5^3} + 1.93 \le 0, \\
& g_5(\bar{x}) = 10\, x_6^{-3} \sqrt{ 16.91 \times 10^6 + \left( 745\, x_4 x_2^{-1} x_3^{-1} \right)^2 } - 1100 \le 0, \\
& g_6(\bar{x}) = 10\, x_7^{-3} \sqrt{ 157.5 \times 10^6 + \left( 745\, x_5 x_2^{-1} x_3^{-1} \right)^2 } - 850 \le 0, \\
& g_7(\bar{x}) = x_2 x_3 - 40 \le 0, \\
& g_8(\bar{x}) = -x_1 x_2^{-1} + 5 \le 0, \\
& g_9(\bar{x}) = x_1 x_2^{-1} - 12 \le 0, \\
& g_{10}(\bar{x}) = 1.5\, x_6 - x_4 + 1.9 \le 0, \\
& g_{11}(\bar{x}) = 1.1\, x_7 - x_5 + 1.9 \le 0, \\
\text{with bounds:}\quad & 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 2.6 \le x_1 \le 3.6, \\
& 5 \le x_7 \le 5.5, \quad 7.3 \le x_4, x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9.
\end{aligned}$$
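A minimal sketch of the speed reducer objective as formulated above; constraint handling is omitted, and this is an illustration rather than the authors' code. Evaluating the best solution reported in Table 26 recovers the tabulated fitness value.

```python
# Weight of the speed reducer: gear/shaft volume terms per the
# formulation above. x1..x7 follow the paper's variable ordering.

def reducer_objective(x1, x2, x3, x4, x5, x6, x7):
    return (0.7854 * x2 ** 2 * x1 * (14.9334 * x3 - 43.0934 + 3.3333 * x3 ** 2)
            + 0.7854 * (x5 * x7 ** 2 + x4 * x6 ** 2)
            - 1.508 * x1 * (x7 ** 2 + x6 ** 2)
            + 7.477 * (x7 ** 3 + x6 ** 3))

# Best solution reported in Table 26:
x = (3.500000, 0.700000, 17.000000, 7.300000, 7.715320, 3.350541, 5.286654)
print(reducer_objective(*x))   # ≈ 2994.4245, matching Table 26
```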
The optimal solution of the proposed CODGBGO and comparative algorithms for the weight minimization of a speed reducer problem is shown in Table 26, and the statistical results are presented in Table 27. From Table 26, it can be observed that GO, INFO, TLBO, PSO, SO, and CODGBGO all ranked first simultaneously. CODGBGO obtained the best solution with a fitness function value of 2994.424466, where X = (3.500000, 0.700000, 17.000000, 7.300000, 7.715320, 3.350541, 5.286654). Additionally, Table 27 reveals that CODGBGO shares the top rank with GO, INFO, TLBO, and PSO regarding the worst and average fitness function values. Based on these findings, this study concludes that CODGBGO is well-suited for solving the weight minimization problem of a speed reducer.

4.5.4. Results on Welded Beam Design [64]

The primary aim of this problem is to optimize the design of a welded beam while minimizing costs. The problem involves five constraints, and four variables are utilized in the development of the welded beam. The structure is shown in Figure 15, and the mathematical formulation of this problem can be outlined as follows:
$$\begin{aligned}
\text{Minimize:}\quad & f(\bar{x}) = 0.04811\, x_3 x_4 (x_2 + 14) + 1.10471\, x_1^2 x_2 \\
\text{subject to:}\quad & g_1(\bar{x}) = x_1 - x_4 \le 0, \\
& g_2(\bar{x}) = \delta(\bar{x}) - \delta_{max} \le 0, \\
& g_3(\bar{x}) = P - P_c(\bar{x}) \le 0, \\
& g_4(\bar{x}) = \tau(\bar{x}) - \tau_{max} \le 0, \\
& g_5(\bar{x}) = \sigma(\bar{x}) - \sigma_{max} \le 0, \\
\text{where}\quad & \tau = \sqrt{ \tau'^2 + \tau''^2 + \frac{2 \tau' \tau'' x_2}{2R} }, \quad \tau'' = \frac{R M}{J}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \\
& M = P \left( \frac{x_2}{2} + L \right), \quad R = \sqrt{ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 }, \\
& J = 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\}, \\
& \sigma(\bar{x}) = \frac{6 P L}{x_4 x_3^2}, \quad \delta(\bar{x}) = \frac{6 P L^3}{E x_3^2 x_4}, \\
& P_c(\bar{x}) = \frac{4.013\, E\, x_3 x_4^3}{6 L^2} \left( 1 - \frac{x_3}{2L} \sqrt{ \frac{E}{4G} } \right), \\
\text{with}\quad & L = 14\ \text{in}, \quad P = 6000\ \text{lb}, \quad E = 30 \times 10^6\ \text{psi}, \quad \sigma_{max} = 30{,}000\ \text{psi}, \\
& \tau_{max} = 13{,}600\ \text{psi}, \quad G = 12 \times 10^6\ \text{psi}, \quad \delta_{max} = 0.25\ \text{in}.
\end{aligned}$$
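A minimal sketch of the welded beam cost function above; the stress, deflection, and buckling constraints are omitted here for brevity, and this is an illustration rather than the authors' code. Evaluating the best solution reported in Table 28 recovers the tabulated fitness value.

```python
# Welded beam design cost: weld material term + bar material term.
# x1: weld thickness, x2: weld length, x3: bar height, x4: bar thickness.

def beam_objective(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# Best solution reported in Table 28:
x = (0.198832, 3.337365, 9.192024, 0.198832)
print(beam_objective(*x))   # ≈ 1.670218, matching Table 28
```

Note that $x_1 = x_4$ at the reported optimum, which makes the constraint $g_1 = x_1 - x_4 \le 0$ active.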
The optimal solution of the proposed CODGBGO algorithm and the comparative algorithm on the Welded beam design problem is shown in Table 28, and the statistical results are presented in Table 29. From Table 28, it can be observed that GO, INFO, TLBO, DE, PSO, SO, and CODGBGO are all ranked first simultaneously. CODGBGO achieves the best solution with a fitness function value of 1.670218, where X = (0.198832, 3.337365, 9.192024, 0.198832). Additionally, from Table 29, it can be seen that CODGBGO shares the top rank with GO, INFO, and TLBO in terms of the worst fitness function value. Furthermore, CODGBGO is jointly ranked first with GO, INFO, TLBO, and DE in terms of the average fitness function value. In conclusion, based on the above analysis, this study concludes that CODGBGO is suitable for solving the welded beam design problem.

5. Conclusions and Future Directions

In this work, the Circle-OBL initialization strategy, exploration strategy, and exploitation strategy are integrated into GO to form an enhanced version called CODGBGO. In CODGBGO, the Circle-OBL initialization strategy first provides a good initial population, which is instrumental in enhancing search quality. Secondly, the exploration strategy is adopted to enhance population diversity and the algorithm's ability to escape local optima traps. Finally, the exploitation strategy is used to improve the exploitation ability of the algorithm and enhance its convergence performance. Extensive experiments confirm that CODGBGO has good advantages and scalability in continuous optimization, discrete optimization, and engineering optimization. This is specifically reflected in the comprehensive evaluations: the good balance between the exploitation and exploration phases in CODGBGO gives the algorithm better global optimization search performance. However, the analysis of the unimodal problems shows that the exploitation strategy proposed in this paper needs further improvement to enhance exploitation performance. Meanwhile, when dealing with high-dimensional discrete combinatorial problems, the proposed CODGBGO, despite its superior comprehensive performance, is weaker than traditional optimization algorithms in specific cases. Finally, although it shows good results on engineering optimization problems, it needs further testing in prospective applications, such as the field of nonlinear systems.
Therefore, our future work will focus on the following aspects: (1) further improving the proposed CODGBGO to enhance its solution performance; (2) developing algorithms tailored to high-dimensional combinatorial optimization problems; and (3) including complex optimization problems from nonlinear systems in the test suite in order to evaluate the algorithm's performance more comprehensively.

Author Contributions

Conceptualization, R.X. and L.Y.; methodology, R.X. and L.Y.; software, R.X. and S.L.; validation, R.X. and F.W.; formal analysis, F.W.; investigation, T.Z.; resources, S.L.; data curation, R.X. and L.Y.; writing—original draft preparation, R.X.; writing—review and editing, R.X. and P.Y.; visualization, F.W., T.Z. and P.Y.; supervision, L.Y. and S.L.; project administration, L.Y. and S.L.; funding acquisition, L.Y. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Key Research and Development Program of China, grant number 2023YFB3308802; in part by the National Natural Science Foundation of China's Top-Level Program, grant number 52275480; and in part by the Reserve Projects for Centralized Guidance of Local Science and Technology Development Funds, grant number QKHZYD [2023]002.

Data Availability Statement

All data in this paper can be obtained by contacting the corresponding author.

Acknowledgments

We are grateful to the publisher for formatting the paper and to the editor and reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hussien, A.G.; Oliva, D.; Houssein, E.H.; Juan, A.A.; Yu, X. Binary Whale Optimization Algorithm for Dimensionality Reduction. Mathematics 2020, 8, 1821. [Google Scholar] [CrossRef]
  2. Hao, Y.; Helo, P.; Shamsuzzoha, A. Virtual Factory System Design and Implementation: Integrated Sustainable Manufacturing. Int. J. Syst. Sci. Oper. Logist. 2018, 5, 116–132. [Google Scholar] [CrossRef]
  3. Simpson, A.R.; Dandy, G.C.; Murphy, L.J. Genetic algorithms compared to other techniques for pipe optimization. J. Water Resour. Plan. Manag. 1994, 120, 423–443. [Google Scholar] [CrossRef]
  4. Gharaei, A.; Hoseini Shekarabi, S.A.; Karimi, M. Modelling And Optimal Lot-Sizing of the Replenishments in Constrained, Multi-Product and Bi-Objective EPQ Models with Defective Products: Generalised Cross Decomposition. Int. J. Syst. Sci. Oper. Logist. 2020, 7, 262–274. [Google Scholar] [CrossRef]
  5. Hussien, A.G.; Hassanien, A.E.; Houssein, E.H.; Amin, M.; Azar, A.T. New Binary Whale Optimization Algorithm for Discrete Optimization Problems. Eng. Optim. 2020, 52, 945–959. [Google Scholar] [CrossRef]
  6. Sayyadi, R.; Awasthi, A. An Integrated Approach Based on System Dynamics and ANP for Evaluating Sustainable Transportation Policies. Int. J. Syst. Sci. Oper. Logist. 2020, 7, 182–191. [Google Scholar] [CrossRef]
  7. Beyer, H.-G.; Schwefel, H.-P. Evolution Strategies-A Comprehensive Introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar]
  8. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  9. Fleming, P.J.; Fonseca, C.M. Genetic algorithms in control systems engineering. IFAC Proc. Vol. 1993, 26, 605–612. [Google Scholar] [CrossRef]
  10. Azizi, M.; Talatahari, S.; Gandomi, A.H. Fire Hawk Optimizer: A Novel Metaheuristic Algorithm. Artif. Intell. Rev. 2023, 56, 287–363. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  12. Mohammed, H.; Rashid, T. FOX: A FOX-Inspired Optimization Algorithm. Appl. Intell. 2023, 53, 1030–1050. [Google Scholar] [CrossRef]
  13. Chopra, N.; Mohsin Ansari, M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  14. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  15. Erol, O.K.; Eksin, I. A New Optimization Method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  16. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  17. Kushwaha, N.; Pant, M.; Kant, S.; Jain, V.K. Magnetic Optimization Algorithm for Data Clustering. Pattern Recognit. Lett. 2018, 115, 59–65. [Google Scholar] [CrossRef]
  18. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A Novel Physically Inspired Optimization Algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  19. Zhao, W.; Wang, L.; Zhang, Z. Atom Search Optimization and Its Application to Solve a Hydrogeologic Parameter Estimation Problem. Knowl. Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  20. Shabani, A.; Asgarian, B.; Salido, M.; Asil Gharebaghi, S. Search and Rescue Optimization Algorithm: A New Optimization Method for Solving Constrained Engineering Optimization Problems. Expert Syst. Appl. 2020, 161, 113698. [Google Scholar] [CrossRef]
  21. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Human Mental Search: A New Population-Based Metaheuristic Optimization Algorithm. Appl. Intell. 2017, 47, 850–887. [Google Scholar] [CrossRef]
  22. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  23. Shaheen, A.M.; Elsayed, A.M.; El-Sehiemy, R.A.; Abdelaziz, A.Y. Equilibrium Optimization Algorithm for Network Reconfiguration and Distributed Generation Allocation in Power Systems. Appl. Soft Comput. 2021, 98, 106867. [Google Scholar] [CrossRef]
  24. Turkoglu, B.; Uymaz, S.A.; Kaya, E. Binary Artificial Algae Algorithm for Feature Selection. Appl. Soft Comput. 2022, 120, 108630. [Google Scholar] [CrossRef]
  25. Hu, G.; Du, B.; Wang, X.; Wei, G. An Enhanced Black Widow Optimization Algorithm for Feature Selection. Knowl. Based Syst. 2022, 235, 107638. [Google Scholar] [CrossRef]
  26. Xu, M.; Song, Q.; Xi, M.; Zhou, Z. Binary Arithmetic Optimization Algorithm for Feature Selection. Soft Comput. 2023, 27, 11395–11429. [Google Scholar] [CrossRef]
  27. Hu, G.; Zhong, J.; Du, B.; Wei, G. An Enhanced Hybrid Arithmetic Optimization Algorithm for Engineering Applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  28. Zhang, Q.; Gao, H.; Zhan, Z.H.; Li, J.; Zhang, H. Growth Optimizer: A Powerful Metaheuristic Algorithm for Solving Con-tinuous and Discrete Global Optimization Problems. Knowl. Based Syst. 2023, 261, 110206. [Google Scholar] [CrossRef]
  29. Aribia, H.B.; El-Rifaie, A.M.; Tolba, M.A.; Shaheen, A.; Moustafa, G.; Elsayed, F.; Elshahed, M. Growth Optimizer for Parameter Identification of Solar Photovoltaic Cells and Modules. Sustainability 2023, 15, 7896. [Google Scholar] [CrossRef]
  30. Hakmi, S.H.; Alnami, H.; Moustafa, G.; Ginidi, A.R.; Shaheen, A.M. Modified Rime-Ice Growth Optimizer with Polynomial Differential Learning Operator for Single- and Double-Diode PV Parameter Estimation Problem. Electronics 2024, 13, 1611. [Google Scholar] [CrossRef]
  31. Gao, H.; Zhang, Q.; Bu, X.; Zhang, H. Quadruple Parameter Adaptation Growth Optimizer with Integrated Distribution, Confrontation, and Balance Features for Optimization. Expert Syst. Appl. 2024, 235, 121218. [Google Scholar] [CrossRef]
  32. Fatani, A.; Dahou, A.; Abd Elaziz, M.; Al-qaness, M.A.A.; Lu, S.; Alfadhli, S.A.; Alresheedi, S.S. Enhancing Intrusion Detection Systems for IoT and Cloud Environments Using a Growth Optimizer Algorithm and Conventional Neural Networks. Sensors 2023, 23, 4430. [Google Scholar] [CrossRef]
  33. Altay, O. Chaotic Slime Mould Optimization Algorithm for Global Optimization. Artif. Intell. Rev. 2022, 55, 3979–4040. [Google Scholar] [CrossRef]
  34. Qaraad, M.; Amjad, S.; Hussein, N.K.; Elhosseini, M.A. Large Scale Salp-Based Grey Wolf Optimization for Feature Selection and Global Optimization. Neural Comput. Appl. 2022, 34, 8989–9014. [Google Scholar] [CrossRef]
  35. Ahmad, M.F.; Isa, N.A.M.; Lim, W.H.; Ang, K.M. Differential Evolution with Modified Initialization Scheme Using Chaotic Oppositional Based Learning Strategy. Alex. Eng. J. 2022, 61, 11835–11858. [Google Scholar] [CrossRef]
  36. Li, X.D.; Wang, J.S.; Hao, W.K.; Zhang, M.; Wang, M. Chaotic Arithmetic Optimization Algorithm. Appl. Intell. 2022, 52, 16718–16757. [Google Scholar] [CrossRef]
  37. Zhang, H.; Liu, T.; Ye, X.; Heidari, A.A.; Liang, G.; Chen, H.; Pan, Z. Differential Evolution-Assisted Salp Swarm Algorithm with Chaotic Structure for Real-World Problems. Eng. Comput. 2023, 39, 1735–1769. [Google Scholar] [CrossRef]
  38. Dehghani, M.; Hubalovsky, S.; Trojovsky, P. Northern Goshawk Optimization: A New Swarm-Based Algorithm for Solving Optimization Problems. IEEE Access. 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  39. Yang, Q.; Yan, J.Q.; Gao, X.D.; Xu, D.D.; Lu, Z.Y.; Zhang, J. Random Neighbor Elite Guided Differential Evolution for Global Numerical Optimization. Inf. Sci. 2022, 607, 1408–1438. [Google Scholar] [CrossRef]
  40. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  41. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  42. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  43. Civicioglu, P.; Besdok, E. Bezier Search Differential Evolution Algorithm for Numerical Function Optimization: A Comparative Study with CRMLSP, MVO, WA, SHADE and LSHADE. Expert Syst. Appl. 2021, 165, 113875. [Google Scholar] [CrossRef]
  44. Civicioglu, P.; Besdok, E. Bernstein-Levy Differential Evolution Algorithm for Numerical Function Optimization. Neural Comput. Appl. 2023, 35, 6603–6621. [Google Scholar] [CrossRef]
  45. Malik, N.A.; Chang, C.L.; Chaudhary, N.I.; Raja, M.A.Z.; Cheema, K.M.; Shu, C.M.; Alshamrani, S.S. Knacks of Fractional Order Swarming Intelligence for Parameter Estimation of Harmonics in Electrical Systems. Mathematics 2022, 10, 1570. [Google Scholar] [CrossRef]
  46. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z. Variants of Chaotic Grey Wolf Heuristic for Robust Identification of Control Autoregressive Model. Biomimetics 2023, 8, 141. [Google Scholar] [CrossRef] [PubMed]
  47. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. CAD Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  48. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  49. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A Nature-Inspired Algorithm for Global Optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  51. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  52. Qi, X.; Zhu, Y.; Zhang, H. A New Meta-Heuristic Butterfly-Inspired Algorithm. J. Comput. Sci. 2017, 23, 226–239. [Google Scholar] [CrossRef]
  53. Arora, S.; Singh, S. Butterfly Optimization Algorithm: A Novel Approach for Global Optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  54. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  55. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  56. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm Based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  57. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  58. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014. [Google Scholar]
  59. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved Multi-Operator Differential Evolution Algorithm for Solving Unconstrained Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020. [Google Scholar]
  60. Li, Y.; Han, T.; Zhou, H.; Tang, S.; Zhao, H. A Novel Adaptive L-SHADE Algorithm and Its Application in UAV Swarm Resource Configuration Problem. Inf. Sci. 2022, 606, 350–367. [Google Scholar] [CrossRef]
  61. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Shishehgarkhaneh, M.B. Shape and Size Optimization of Truss Structures by Chaos Game Optimization Considering Frequency Constraints. J. Adv. Res. 2022, 41, 89–100. [Google Scholar] [CrossRef] [PubMed]
  62. Tzanetos, A.; Blondin, M. A Qualitative Systematic Review of Metaheuristics Applied to Tension/Compression Spring Design Problem: Current Situation, Recommendations, and Research Direction. Eng. Appl. Artif. Intell. 2023, 118, 105521. [Google Scholar] [CrossRef]
  63. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A Test-Suite of Non-Convex Constrained Optimization Problems from the Real-World and Some Baseline Results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  64. Akay, B.; Karaboga, D. Artificial Bee Colony Algorithm for Large-Scale Problems and Engineering Design Optimization. J. Intell. Manuf. 2012, 23, 1001–1014. [Google Scholar] [CrossRef]
Figure 1. Sequence distribution map. (a) Random mapping; (b) Circle chaotic mapping.
Figure 2. The process of exploration strategy.
Figure 3. The process of exploitation strategy.
Figure 4. Comparison of population diversity between GO and CODGBGO.
Figure 5. The exploration and exploitation ratio of CODGBGO.
Figure 6. Comparison of convergence curves on the IEEE CEC2017 test function set.
Figure 7. Comparison of stability on the IEEE CEC2017 test function set.
Figure 8. The boxplot of comparison algorithms on feature selection.
Figure 9. The convergence curve plots of comparison algorithms on feature selection.
Figure 10. Comparison of average classification accuracy.
Figure 11. Combined performance on six evaluation criteria.
Figure 12. The 10-bar truss design structure.
Figure 13. The tension/compression spring design structure.
Figure 14. The speed reducer design structure.
Figure 15. The welded beam design structure.
Table 1. Information about the IEEE CEC2017 function test set.
Index | Type | Name | Optimum
CEC2017_F1 | Unimodal | Shifted and Rotated Bent Cigar Function | 100
CEC2017_F3 | Unimodal | Shifted and Rotated Zakharov Function | 300
CEC2017_F4 | Multimodal | Shifted and Rotated Rosenbrock’s Function | 400
CEC2017_F5 | Multimodal | Shifted and Rotated Rastrigin’s Function | 500
CEC2017_F6 | Multimodal | Shifted and Rotated Expanded Scaffer’s F6 Function | 600
CEC2017_F7 | Multimodal | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700
CEC2017_F8 | Multimodal | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800
CEC2017_F9 | Multimodal | Shifted and Rotated Lévy Function | 900
CEC2017_F10 | Multimodal | Shifted and Rotated Schwefel’s Function | 1000
CEC2017_F11 | Hybrid | Hybrid function 1 (N = 3) | 1100
CEC2017_F12 | Hybrid | Hybrid function 2 (N = 3) | 1200
CEC2017_F13 | Hybrid | Hybrid function 3 (N = 3) | 1300
CEC2017_F14 | Hybrid | Hybrid function 4 (N = 4) | 1400
CEC2017_F15 | Hybrid | Hybrid function 5 (N = 4) | 1500
CEC2017_F16 | Hybrid | Hybrid function 6 (N = 4) | 1600
CEC2017_F17 | Hybrid | Hybrid function 7 (N = 5) | 1700
CEC2017_F18 | Hybrid | Hybrid function 8 (N = 5) | 1800
CEC2017_F19 | Hybrid | Hybrid function 9 (N = 5) | 1900
CEC2017_F20 | Hybrid | Hybrid function 10 (N = 6) | 2000
CEC2017_F21 | Composition | Composition function 1 (N = 3) | 2100
CEC2017_F22 | Composition | Composition function 2 (N = 3) | 2200
CEC2017_F23 | Composition | Composition function 3 (N = 4) | 2300
CEC2017_F24 | Composition | Composition function 4 (N = 4) | 2400
CEC2017_F25 | Composition | Composition function 5 (N = 5) | 2500
CEC2017_F26 | Composition | Composition function 6 (N = 5) | 2600
CEC2017_F27 | Composition | Composition function 7 (N = 6) | 2700
CEC2017_F28 | Composition | Composition function 8 (N = 6) | 2800
CEC2017_F29 | Composition | Composition function 9 (N = 3) | 2900
CEC2017_F30 | Composition | Composition function 10 (N = 3) | 3000
Search range: [−100, 100]
Table 2. Information about the IEEE CEC2020 function test set.
Index | Type | Name | Optimum
CEC2020_F1 | Unimodal | Shifted and Rotated Bent Cigar Function | 100
CEC2020_F2 | Multimodal | Shifted and Rotated Schwefel’s Function | 1100
CEC2020_F3 | Multimodal | Shifted and Rotated Lunacek bi-Rastrigin Function | 700
CEC2020_F4 | Multimodal | Expanded Rosenbrock’s plus Griewangk’s Function | 1900
CEC2020_F5 | Hybrid | Hybrid Function 1 (N = 3) | 1700
CEC2020_F6 | Hybrid | Hybrid Function 2 (N = 4) | 1600
CEC2020_F7 | Hybrid | Hybrid Function 3 (N = 5) | 2100
CEC2020_F8 | Composition | Composition Function 1 (N = 3) | 2200
CEC2020_F9 | Composition | Composition Function 2 (N = 4) | 2400
CEC2020_F10 | Composition | Composition Function 3 (N = 5) | 2500
Search range: [−100, 100]
Table 3. The comparison algorithm details and parameter settings.
Algorithm | Year Proposed | Parameter Settings | Citations | Type
Particle Swarm Optimization (PSO) [40] | 1995 | w = 1, wp = 0.99, c1 = 1.5, c2 = 2.0 | 81,387 | classical
Differential Evolution (DE) [41] | 1997 | F = 0.5, CR = 0.9 | 33,925 | classical
Comprehensive Learning Particle Swarm Optimizer (CLPSO) [42] | 2006 | wmin = 0.2, wmax = 0.9, c = 1.496 | 2838 | improved
Bezier Search Differential Evolution (BESD) [43] | 2021 | K = 5 | 40 | improved
Bernstein-Levy Differential Evolution (BDE) [44] | 2023 | no parameters | 3 | improved
Fractional Order Particle Swarm Optimization (FOPSO) [45] | 2022 | λ = 0.1 | 18 | improved
Improved Chaotic Grey Wolf Optimizer (ICGWO) [46] | 2023 | no parameters | 15 | improved
Teaching Learning-Based Optimization (TLBO) [47] | 2011 | TF = 1 or 2 | 3438 | high citation
Grey Wolf Optimizer (GWO) [48] | 2014 | α = 2 − 2·(FEs/MaxFEs) | 11,102 | high citation
Multi-Verse Optimizer (MVO) [49] | 2016 | WEPmax = 1, WEPmin = 0.2 | 2094 | high citation
Whale Optimization Algorithm (WOA) [11] | 2016 | b = 1, a1 = 2 − (2·FEs/MaxFEs), a2 = −1 − (FEs/MaxFEs) | 7672 | high citation
Salp Swarm Algorithm (SSA) [50] | 2017 | c1 = 2·exp(−(4·FEs/MaxFEs)²) | 3284 | high citation
Harris Hawks Optimization (HHO) [51] | 2019 | E0 = 2·rand − 1, E1 = 2 − 2·(FEs/MaxFEs) | 3048 | high citation
Artificial Butterfly Optimization (ABO) [52] | 2017 | ratio_e = 0.2, step_e = 0.05 | 67 | popular
Butterfly Optimization Algorithm (BOA) [53] | 2019 | p = 0.8, α = 0.1, c = 0.01 | 813 | popular
Equilibrium Optimizer (EO) [54] | 2020 | V = 1, a1 = 2, a2 = 1, GP = 0.5 | 1177 | popular
Dung Beetle Optimizer (DBO) [55] | 2023 | k = λ = 0.1, b = 0.3, S = 0.5 | 38 | new
INFO [56] | 2022 | I = randi([2, 5]) | 258 | new
Snake Optimizer (SO) [57] | 2022 | T1 = 0.25, T2 = 0.6, C1 = 0.5, C2 = 0.05, C3 = 2 | 198 | new
Growth Optimizer (GO) [28] | 2023 | p1 = 5, p2 = 0.001, p3 = 0.3 | 11 | new
LSHADE [58] | 2014 | NPinit = 18·D, NPmin = 4, |A| = 2.6·NP, p = 0.11, H = 6 | 671 | superior
Improved Multi-Operator Differential Evolution (IMODE) [59] | 2020 | NPinit = 6·D², NPmin = 4, |A| = 2.6, H = 20·D | 106 | superior
Adaptive L-SHADE Algorithm (ALSHADE) [60] | 2022 | NPinit = 18·D, NPmin = 4, |A| = 2.6·NP, p = 0.11, e = 0.5 | 6 | superior
Table 4. Friedman mean rank test results for different combinations of strategies.
Problems | GO | COGO | DGGO | BGO | CODGBGO
CEC2017_F1 | 3.53 | 3.43 | 2.23 | 2.93 | 2.87
CEC2017_F3 | 4.37 | 3.67 | 1.53 | 3.97 | 1.47
CEC2017_F4 | 3.13 | 2.90 | 3.27 | 3.30 | 2.40
CEC2017_F5 | 3.77 | 4.30 | 3.90 | 1.33 | 1.70
CEC2017_F6 | 2.23 | 2.40 | 2.80 | 3.83 | 3.73
CEC2017_F7 | 4.07 | 4.20 | 3.67 | 1.50 | 1.57
CEC2017_F8 | 3.87 | 3.97 | 4.03 | 1.33 | 1.80
CEC2017_F9 | 2.53 | 2.47 | 1.93 | 4.40 | 3.67
CEC2017_F10 | 3.70 | 4.17 | 4.13 | 1.53 | 1.47
CEC2017_F11 | 3.73 | 3.43 | 3.43 | 2.40 | 2.00
CEC2017_F12 | 4.10 | 2.90 | 2.23 | 2.97 | 2.80
CEC2017_F13 | 3.60 | 2.87 | 2.40 | 4.03 | 2.10
CEC2017_F14 | 4.03 | 3.60 | 2.50 | 2.97 | 1.90
CEC2017_F15 | 3.50 | 3.43 | 2.67 | 3.30 | 2.10
CEC2017_F16 | 3.83 | 2.90 | 3.60 | 2.80 | 1.87
CEC2017_F17 | 3.60 | 3.10 | 3.43 | 2.53 | 2.33
CEC2017_F18 | 4.23 | 4.00 | 1.47 | 3.57 | 1.73
CEC2017_F19 | 4.00 | 3.80 | 2.60 | 3.03 | 1.57
CEC2017_F20 | 3.47 | 2.90 | 3.43 | 2.83 | 2.37
CEC2017_F21 | 3.90 | 4.27 | 3.83 | 1.53 | 1.47
CEC2017_F22 | 4.00 | 3.13 | 2.53 | 3.87 | 1.47
CEC2017_F23 | 3.80 | 3.90 | 3.63 | 1.97 | 1.70
CEC2017_F24 | 4.00 | 4.30 | 3.57 | 1.73 | 1.40
CEC2017_F25 | 3.03 | 2.20 | 3.20 | 3.10 | 3.47
CEC2017_F26 | 3.63 | 3.43 | 3.13 | 2.57 | 2.23
CEC2017_F27 | 2.27 | 2.93 | 3.03 | 3.27 | 3.50
CEC2017_F28 | 2.70 | 2.83 | 2.83 | 3.10 | 3.53
CEC2017_F29 | 3.67 | 3.93 | 3.40 | 1.70 | 2.30
CEC2017_F30 | 4.40 | 3.50 | 2.00 | 3.33 | 1.77
Mean Rank | 3.61 | 3.41 | 2.98 | 2.78 | 2.22
Final Rank | 5 | 4 | 3 | 2 | 1
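The Friedman mean ranks reported in Tables 4 and 5 can be reproduced from raw per-problem results by ranking the algorithms on each problem (rank 1 = best, with ties receiving the average of the tied positions) and then averaging each algorithm's ranks over all problems. The sketch below is a minimal pure-Python illustration of this procedure using made-up toy data, not the paper's results.

```python
def ranks_with_ties(row):
    """Rank the values in row (lower = better, rank 1 = best); tied values
    receive the average of the positions they occupy."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with position i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def friedman_mean_ranks(results):
    """results: list of per-problem rows, one value per algorithm (lower = better).
    Returns each algorithm's rank averaged over all problems."""
    per_problem = [ranks_with_ties(row) for row in results]
    n_algos = len(results[0])
    return [sum(r[a] for r in per_problem) / len(results) for a in range(n_algos)]

# Toy data: 3 problems x 3 algorithms (illustrative only)
results = [[1.0, 2.0, 3.0],
           [2.0, 1.0, 3.0],
           [1.0, 3.0, 2.0]]
print([round(r, 2) for r in friedman_mean_ranks(results)])  # [1.33, 2.0, 2.67]
```

The algorithm with the smallest mean rank receives final rank 1, which is how the "Final Rank" rows in Tables 4 and 5 are derived from the "Mean Rank" rows.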
Table 5. Friedman mean rank test results for different parameter combinations.
Problems | {0.7, 0.9} | {0.7, 0.95} | {0.7, 0.85} | {0.8, 0.85} | {0.8, 0.9} | {0.9, 0.85} | {0.9, 0.9} | {0.9, 0.95} | {0.8, 0.95}
CEC2017_F1 | 5.03 | 5.17 | 5.90 | 5.37 | 4.13 | 5.40 | 4.40 | 4.43 | 5.17
CEC2017_F3 | 4.40 | 4.60 | 3.87 | 3.20 | 4.83 | 6.80 | 5.73 | 6.17 | 5.40
CEC2017_F4 | 4.70 | 5.43 | 5.77 | 4.30 | 4.17 | 5.17 | 5.20 | 5.33 | 4.93
CEC2017_F5 | 4.77 | 4.43 | 6.10 | 5.20 | 5.17 | 5.87 | 4.73 | 4.07 | 4.67
CEC2017_F6 | 4.90 | 3.87 | 7.10 | 6.73 | 4.27 | 6.83 | 4.90 | 3.20 | 3.20
CEC2017_F7 | 4.40 | 4.67 | 4.83 | 4.93 | 5.43 | 5.90 | 5.47 | 5.20 | 4.17
CEC2017_F8 | 5.50 | 3.77 | 5.83 | 4.97 | 5.57 | 6.40 | 4.53 | 4.60 | 3.83
CEC2017_F9 | 5.20 | 3.17 | 5.83 | 6.13 | 5.57 | 6.97 | 4.93 | 3.23 | 3.97
CEC2017_F10 | 5.03 | 5.70 | 3.67 | 4.83 | 4.60 | 4.43 | 4.60 | 6.07 | 6.07
CEC2017_F11 | 4.37 | 5.93 | 4.77 | 4.83 | 4.10 | 5.73 | 4.70 | 5.07 | 5.50
CEC2017_F12 | 5.57 | 5.73 | 5.50 | 5.83 | 4.37 | 5.87 | 4.23 | 4.30 | 3.60
CEC2017_F13 | 4.63 | 5.30 | 5.60 | 4.63 | 4.77 | 5.80 | 5.53 | 4.27 | 4.47
CEC2017_F14 | 5.03 | 4.77 | 5.07 | 4.33 | 4.80 | 5.07 | 5.13 | 6.50 | 4.30
CEC2017_F15 | 5.30 | 5.43 | 4.20 | 4.17 | 5.73 | 4.73 | 5.50 | 5.57 | 4.37
CEC2017_F16 | 5.57 | 5.17 | 3.80 | 5.13 | 4.43 | 4.13 | 5.77 | 5.40 | 5.60
CEC2017_F17 | 4.93 | 5.23 | 4.63 | 5.03 | 5.10 | 4.73 | 4.07 | 5.93 | 5.33
CEC2017_F18 | 4.87 | 4.73 | 5.30 | 4.40 | 4.07 | 5.30 | 5.13 | 5.63 | 5.57
CEC2017_F19 | 5.60 | 4.80 | 4.90 | 4.93 | 4.93 | 4.83 | 4.30 | 5.50 | 5.20
CEC2017_F20 | 5.40 | 4.53 | 5.40 | 5.40 | 5.00 | 5.43 | 4.67 | 4.57 | 4.60
CEC2017_F21 | 3.87 | 4.37 | 5.07 | 6.57 | 5.63 | 5.83 | 5.57 | 4.00 | 4.10
CEC2017_F22 | 4.23 | 4.70 | 4.47 | 4.18 | 4.53 | 5.23 | 5.90 | 6.48 | 5.27
CEC2017_F23 | 4.90 | 4.00 | 5.77 | 6.13 | 4.60 | 5.50 | 5.97 | 4.20 | 3.93
CEC2017_F24 | 4.47 | 3.97 | 5.53 | 5.63 | 5.73 | 5.37 | 5.27 | 4.67 | 4.37
CEC2017_F25 | 4.50 | 5.17 | 5.37 | 5.30 | 5.10 | 5.63 | 5.57 | 3.93 | 4.43
CEC2017_F26 | 5.43 | 3.57 | 5.67 | 5.77 | 4.50 | 6.33 | 5.20 | 4.27 | 4.27
CEC2017_F27 | 5.13 | 5.37 | 4.73 | 4.93 | 5.37 | 5.40 | 4.30 | 4.97 | 4.80
CEC2017_F28 | 5.13 | 5.13 | 5.37 | 4.80 | 4.87 | 4.80 | 4.83 | 5.67 | 4.40
CEC2017_F29 | 4.23 | 4.50 | 5.57 | 5.27 | 4.73 | 5.90 | 4.03 | 5.70 | 5.07
CEC2017_F30 | 5.67 | 5.13 | 5.43 | 4.83 | 4.17 | 5.33 | 4.83 | 5.57 | 4.03
Mean Rank | 4.92 | 4.77 | 5.21 | 5.10 | 4.84 | 5.54 | 5.00 | 4.98 | 4.64
Final Rank | 4 | 2 | 8 | 7 | 3 | 9 | 6 | 5 | 1
Table 6. Comparison of numerical results for the IEEE CEC2017 test function set for 30D.
ProblemsMetricTLBOSSAINFODEDBOSOCLPSOBDEBESDGOCODGBGO
CEC2017_F1Mean4.53 × 1033.29 × 1031.05 × 1021.26 × 1057.13 × 1061.63 × 1041.85 × 1094.95 × 1053.07 × 1081.62 × 1021.54 × 102
Std4.89 × 1033.95 × 1037.16 × 1005.43 × 1041.13 × 1071.40 × 1043.86 × 1082.67 × 1059.21 × 1072.01 × 1021.46 × 102
CEC2017_F3Mean4.03 × 1041.14 × 1046.84 × 1024.00 × 1047.03 × 1045.50 × 1049.29 × 1042.70 × 1042.13 × 1047.70 × 1023.09 × 102
Std8.00 × 1034.68 × 1031.53 × 1037.37 × 1031.64 × 1041.07 × 1041.56 × 1046.04 × 1033.76 × 1035.10 × 1028.78 × 100
CEC2017_F4Mean4.97 × 1025.03 × 1024.79 × 1024.91 × 1025.47 × 1025.19 × 1027.74 × 1025.07 × 1025.94 × 1024.41 × 1024.28 × 102
Std3.64 × 1011.82 × 1012.62 × 1018.80 × 1004.33 × 1014.05 × 1014.30 × 1012.19 × 1012.54 × 1013.26 × 1013.10 × 101
CEC2017_F5Mean5.90 × 1026.43 × 1026.36 × 1027.21 × 1027.10 × 1025.71 × 1027.28 × 1026.23 × 1026.81 × 1026.06 × 1025.48 × 102
Std1.73 × 1014.24 × 1013.01 × 1011.28 × 1014.79 × 1012.70 × 1011.77 × 1018.45 × 1001.15 × 1011.84 × 1011.66 × 101
CEC2017_F6Mean6.07 × 1026.47 × 1026.18 × 1026.09 × 1026.30 × 1026.01 × 1026.23 × 1026.01 × 1026.21 × 1026.00 × 1026.00 × 102
Std4.34 × 1001.17 × 1018.90 × 1002.69 × 1008.18 × 1006.00 × 10−12.68 × 1002.06 × 10−12.81 × 1007.81 × 10−32.92 × 10−2
CEC2017_F7Mean8.61 × 1028.98 × 1029.49 × 1029.48 × 1029.13 × 1028.14 × 1021.07 × 1038.77 × 1029.35 × 1028.58 × 1027.91 × 102
Std4.16 × 1015.80 × 1015.35 × 1011.35 × 1017.17 × 1013.18 × 1012.59 × 1011.24 × 1011.42 × 1011.14 × 1011.90 × 101
CEC2017_F8Mean8.68 × 1029.49 × 1029.20 × 1021.01 × 1031.00 × 1038.60 × 1021.03 × 1039.21 × 1029.64 × 1029.03 × 1028.52 × 102
Std1.83 × 1014.41 × 1012.69 × 1011.33 × 1016.23 × 1011.81 × 1011.64 × 1011.21 × 1019.94 × 1002.24 × 1011.27 × 101
CEC2017_F9Mean1.20 × 1034.39 × 1032.80 × 1031.13 × 1035.57 × 1031.06 × 1034.49 × 1039.63 × 1021.92 × 1039.01 × 1029.03 × 102
Std2.76 × 1021.49 × 1036.58 × 1022.97 × 1022.30 × 1031.07 × 1026.86 × 1025.94 × 1013.24 × 1021.19 × 1002.95 × 100
CEC2017_F10Mean8.12 × 1035.01 × 1035.10 × 1038.57 × 1035.48 × 1035.13 × 1037.51 × 1035.26 × 1036.67 × 1037.04 × 1034.54 × 103
Std4.02 × 1027.43 × 1026.12 × 1022.98 × 1027.92 × 1022.04 × 1032.62 × 1021.76 × 1022.46 × 1024.36 × 1026.17 × 102
CEC2017_F11Mean1.25 × 1031.30 × 1031.26 × 1031.23 × 1031.54 × 1031.22 × 1031.80 × 1031.20 × 1031.32 × 1031.17 × 1031.14 × 103
Std4.93 × 1015.10 × 1014.65 × 1012.63 × 1011.28 × 1024.63 × 1011.86 × 1022.13 × 1012.10 × 1012.66 × 1012.80 × 101
CEC2017_F12Mean2.82 × 1051.08 × 1075.73 × 1043.61 × 1054.14 × 1077.01 × 1061.05 × 1081.04 × 1069.41 × 1063.43 × 1041.69 × 104
Std3.43 × 1059.93 × 1063.70 × 1041.30 × 1056.31 × 1071.07 × 1072.78 × 1076.60 × 1053.50 × 1062.79 × 1048.16 × 103
CEC2017_F13Mean1.45 × 1041.09 × 1051.78 × 1042.27 × 1032.81 × 1061.17 × 1062.75 × 1072.16 × 1041.16 × 1054.22 × 1031.90 × 103
Std1.25 × 1046.58 × 1041.37 × 1042.08 × 1025.92 × 1061.36 × 1061.75 × 1071.33 × 1044.89 × 1041.10 × 1041.19 × 103
CEC2017_F14Mean1.56 × 1043.48 × 1042.37 × 1031.49 × 1037.24 × 1043.83 × 1048.63 × 1041.74 × 1032.06 × 1031.48 × 1031.46 × 103
Std1.62 × 1043.23 × 1041.53 × 1037.27 × 1009.64 × 1046.00 × 1044.94 × 1041.91 × 1023.11 × 1021.06 × 1011.47 × 101
CEC2017_F15Mean8.45 × 1037.78 × 1043.98 × 1031.62 × 1037.81 × 1041.49 × 1051.09 × 1065.62 × 1035.12 × 1031.62 × 1031.58 × 103
Std6.89 × 1035.31 × 1044.13 × 1032.34 × 1016.62 × 1042.51 × 1056.83 × 1055.05 × 1039.80 × 1024.24 × 1013.21 × 101
CEC2017_F16Mean2.58 × 1032.76 × 1032.55 × 1033.46 × 1033.10 × 1032.51 × 1033.02 × 1032.67 × 1033.04 × 1032.46 × 1032.22 × 103
Std3.96 × 1023.32 × 1023.19 × 1021.94 × 1024.48 × 1023.06 × 1021.57 × 1021.53 × 1021.07 × 1021.72 × 1021.85 × 102
CEC2017_F17Mean1.92 × 1032.25 × 1032.26 × 1032.38 × 1032.46 × 1032.20 × 1032.15 × 1031.99 × 1032.06 × 1031.90 × 1031.85 × 103
Std9.02 × 1012.45 × 1022.24 × 1021.02 × 1022.06 × 1022.07 × 1029.02 × 1019.75 × 1018.35 × 1015.95 × 1019.11 × 101
CEC2017_F18Mean5.79 × 1055.13 × 1053.28 × 1042.27 × 1031.85 × 1061.23 × 1061.04 × 1064.50 × 1045.47 × 1043.94 × 1031.93 × 103
Std4.43 × 1053.80 × 1052.10 × 1041.82 × 1022.72 × 1061.36 × 1063.76 × 1052.09 × 1041.59 × 1043.41 × 1035.00 × 101
CEC2017_F19Mean8.00 × 1031.84 × 1063.39 × 1031.95 × 1031.06 × 1062.00 × 1051.60 × 1066.07 × 1037.12 × 1031.96 × 1031.93 × 103
Std6.13 × 1031.15 × 1062.99 × 1035.65 × 1001.38 × 1064.62 × 1051.09 × 1064.17 × 1031.85 × 1038.45 × 1017.15 × 100
CEC2017_F20Mean2.36 × 1032.54 × 1032.50 × 1032.83 × 1032.63 × 1032.43 × 1032.50 × 1032.33 × 1032.48 × 1032.25 × 1032.16 × 103
Std1.35 × 1022.00 × 1021.65 × 1021.06 × 1021.80 × 1021.54 × 1029.13 × 1016.95 × 1018.06 × 1011.02 × 1029.22 × 101
CEC2017_F21Mean2.37 × 1032.44 × 1032.43 × 1032.51 × 1032.52 × 1032.37 × 1032.50 × 1032.42 × 1032.47 × 1032.41 × 1032.35 × 103
Std2.03 × 1013.73 × 1012.76 × 1011.57 × 1014.66 × 1011.87 × 1015.21 × 1011.41 × 1018.89 × 1001.55 × 1011.61 × 101
CEC2017_F22Mean2.56 × 1033.32 × 1034.58 × 1033.32 × 1035.22 × 1033.73 × 1033.04 × 1032.31 × 1032.43 × 1032.50 × 1032.30 × 103
Std1.35 × 1031.88 × 1032.10 × 1031.46 × 1031.94 × 1032.03 × 1032.34 × 1028.07 × 1002.53 × 1011.12 × 1037.36 × 10−1
CEC2017_F23Mean2.74 × 1032.79 × 1032.82 × 1032.83 × 1032.91 × 1032.75 × 1032.85 × 1032.77 × 1032.87 × 1032.74 × 1032.69 × 103
Std2.30 × 1013.89 × 1015.39 × 1015.74 × 1017.42 × 1013.19 × 1014.38 × 1011.49 × 1011.50 × 1012.32 × 1011.67 × 101
CEC2017_F24Mean2.91 × 1032.93 × 1032.96 × 1033.04 × 1033.06 × 1032.90 × 1033.04 × 1032.95 × 1033.03 × 1032.95 × 1032.88 × 103
Std2.12 × 1013.20 × 1014.24 × 1011.49 × 1018.10 × 1013.58 × 1011.88 × 1011.69 × 1011.86 × 1011.71 × 1011.60 × 101
CEC2017_F25Mean2.90 × 1032.91 × 1032.90 × 1032.89 × 1032.95 × 1032.89 × 1033.07 × 1032.89 × 1032.99 × 1032.89 × 1032.89 × 103
Std1.66 × 1012.15 × 1011.84 × 1016.11 × 10−14.56 × 1011.48 × 1013.71 × 1017.63 × 1001.63 × 1011.02 × 1001.56 × 100
CEC2017_F26Mean4.36 × 1034.77 × 1035.50 × 1034.13 × 1036.35 × 1034.87 × 1034.74 × 1034.23 × 1034.36 × 1034.36 × 1033.96 × 103
Std1.09 × 1038.63 × 1021.33 × 1032.27 × 1021.12 × 1033.95 × 1026.38 × 1029.30 × 1027.63 × 1023.04 × 1024.04 × 102
CEC2017_F27Mean3.24 × 1033.26 × 1033.26 × 1033.23 × 1033.27 × 1033.26 × 1033.27 × 1033.24 × 1033.37 × 1033.20 × 1033.21 × 103
Std2.11 × 1014.69 × 1012.93 × 1017.35 × 1004.11 × 1011.67 × 1017.69 × 1005.13 × 1001.69 × 1019.68 × 1001.03 × 101
CEC2017_F28Mean3.23 × 1033.26 × 1033.18 × 1033.22 × 1033.38 × 1033.27 × 1033.50 × 1033.24 × 1033.38 × 1033.16 × 1033.19 × 103
Std2.37 × 1013.04 × 1015.01 × 1011.02 × 1017.41 × 1014.35 × 1015.20 × 1011.72 × 1013.07 × 1016.24 × 1016.24 × 101
CEC2017_F29Mean3.69 × 1034.07 × 1034.12 × 1034.39 × 1034.20 × 1033.85 × 1034.14 × 1033.69 × 1034.12 × 1033.62 × 1033.53 × 103
Std1.37 × 1022.83 × 1023.06 × 1021.88 × 1022.84 × 1022.45 × 1021.43 × 1028.32 × 1011.09 × 1028.50 × 1019.22 × 101
CEC2017_F30Mean1.10 × 1045.51 × 1068.87 × 1031.80 × 1042.42 × 1064.18 × 1053.60 × 1062.28 × 1043.63 × 1056.82 × 1035.32 × 103
Std4.44 × 1033.42 × 1063.35 × 1035.35 × 1034.31 × 1061.10 × 1061.51 × 1061.34 × 1041.57 × 1059.44 × 1023.24 × 102
MeanRank 4.97 7.24 5.72 6.38 9.72 5.90 9.66 4.90 7.55 2.76 1.21
FinalRank 4 8 5 7 11 6 10 3 9 2 1
Rank First 0 0 1 0 0 0 0 0 0 4 24
Table 7. Comparison of numerical results for the IEEE CEC2017 test function set for 50D.
Problems Metric TLBO SSA INFO DE DBO SO CLPSO BDE BESD GO CODGBGO
CEC2017_F1Mean3.63 × 1077.02 × 1037.88 × 1044.29 × 1074.41 × 1081.90 × 1071.13 × 10104.35 × 1074.59 × 1094.31 × 1037.58 × 103
Std1.09 × 1088.47 × 1033.99 × 1051.49 × 1072.97 × 1084.48 × 1071.54 × 1091.34 × 1079.72 × 1085.09 × 1038.08 × 103
CEC2017_F3Mean1.60 × 1051.01 × 1051.99 × 1041.97 × 1052.13 × 1051.33 × 1052.27 × 1051.04 × 1058.33 × 1043.73 × 1041.31 × 104
Std2.23 × 1042.97 × 1046.91 × 1032.91 × 1044.47 × 1041.38 × 1042.50 × 1041.75 × 1041.26 × 1041.35 × 1044.70 × 103
CEC2017_F4Mean5.99 × 1025.89 × 1025.38 × 1026.00 × 1028.12 × 1026.60 × 1021.89 × 1036.47 × 1021.23 × 1035.19 × 1025.17 × 102
Std5.49 × 1014.67 × 1015.47 × 1013.84 × 1011.70 × 1026.83 × 1011.74 × 1022.59 × 1011.26 × 1025.87 × 1014.12 × 101
CEC2017_F5Mean6.95 × 1028.38 × 1028.07 × 1029.53 × 1029.13 × 1026.34 × 1029.98 × 1027.90 × 1029.09 × 1027.84 × 1026.32 × 102
Std2.82 × 1015.35 × 1014.61 × 1011.94 × 1017.88 × 1014.36 × 1011.94 × 1011.58 × 1011.94 × 1013.30 × 1013.68 × 101
CEC2017_F6Mean6.21 × 1026.56 × 1026.40 × 1026.26 × 1026.56 × 1026.04 × 1026.40 × 1026.03 × 1026.41 × 1026.00 × 1026.00 × 102
Std4.80 × 1009.77 × 1009.47 × 1005.75 × 1008.02 × 1002.01 × 1003.96 × 1005.95 × 10−13.98 × 1001.01 × 10−11.45 × 10−1
CEC2017_F7Mean1.07 × 1031.14 × 1031.30 × 1031.22 × 1031.16 × 1039.39 × 1021.59 × 1031.08 × 1031.26 × 1031.08 × 1038.98 × 102
Std6.20 × 1011.27 × 1021.07 × 1021.86 × 1011.48 × 1024.11 × 1015.46 × 1012.65 × 1013.16 × 1011.84 × 1014.87 × 101
CEC2017_F8Mean1.01 × 1031.11 × 1031.10 × 1031.25 × 1031.23 × 1039.49 × 1021.30 × 1031.09 × 1031.21 × 1031.09 × 1039.14 × 102
Std2.68 × 1017.77 × 1015.42 × 1012.33 × 1016.39 × 1015.28 × 1013.12 × 1012.01 × 1012.07 × 1012.16 × 1012.99 × 101
CEC2017_F9Mean6.86 × 1031.28 × 1047.97 × 1036.81 × 1031.96 × 1041.77 × 1031.98 × 1042.32 × 1031.19 × 1049.49 × 1021.06 × 103
Std2.92 × 1032.96 × 1031.70 × 1032.72 × 1037.92 × 1034.09 × 1022.77 × 1034.48 × 1021.44 × 1034.57 × 1012.05 × 102
CEC2017_F10Mean1.46 × 1048.04 × 1038.10 × 1031.52 × 1048.85 × 1031.01 × 1041.29 × 1049.85 × 1031.23 × 1041.35 × 1048.23 × 103
Std3.93 × 1021.18 × 1039.15 × 1023.74 × 1021.84 × 1032.57 × 1033.31 × 1024.10 × 1023.38 × 1023.94 × 1029.60 × 102
CEC2017_F11Mean1.35 × 1031.58 × 1031.31 × 1031.41 × 1032.70 × 1031.48 × 1035.43 × 1031.54 × 1032.28 × 1031.28 × 1031.22 × 103
Std6.00 × 1011.07 × 1026.18 × 1013.59 × 1011.78 × 1031.05 × 1021.28 × 1031.19 × 1022.50 × 1022.57 × 1013.47 × 101
CEC2017_F12Mean3.55 × 1067.64 × 1071.94 × 1067.53 × 1064.43 × 1085.73 × 1071.93 × 1091.80 × 1073.49 × 1085.66 × 1053.39 × 105
Std6.30 × 1063.87 × 1071.10 × 1063.68 × 1063.73 × 1086.76 × 1075.54 × 1088.58 × 1069.08 × 1074.14 × 1052.95 × 105
CEC2017_F13Mean8.57 × 1031.75 × 1057.50 × 1036.61 × 1041.95 × 1075.51 × 1064.50 × 1081.34 × 1051.11 × 1079.39 × 1035.66 × 103
Std5.31 × 1031.05 × 1053.80 × 1031.77 × 1042.78 × 1076.05 × 1061.68 × 1082.17 × 1055.75 × 1068.55 × 1036.40 × 103
CEC2017_F14Mean1.36 × 1051.94 × 1051.31 × 1041.67 × 1031.50 × 1064.78 × 1052.15 × 1063.68 × 1046.42 × 1041.71 × 1031.61 × 103
Std1.10 × 1051.35 × 1051.50 × 1042.15 × 1011.94 × 1061.19 × 1069.50 × 1052.53 × 1042.81 × 1047.07 × 1013.67 × 101
CEC2017_F15Mean8.25 × 1035.70 × 1041.09 × 1042.80 × 1035.76 × 1061.01 × 1067.54 × 1072.29 × 1042.77 × 1053.46 × 1033.22 × 103
Std6.30 × 1032.34 × 1047.18 × 1032.63 × 1022.36 × 1071.58 × 1063.16 × 1071.82 × 1041.25 × 1054.41 × 1033.47 × 103
CEC2017_F16Mean3.13 × 1033.69 × 1033.50 × 1035.35 × 1034.40 × 1033.59 × 1034.36 × 1033.86 × 1034.20 × 1033.79 × 1033.08 × 103
Std4.59 × 1025.27 × 1024.75 × 1022.23 × 1025.98 × 1027.99 × 1023.15 × 1021.80 × 1022.69 × 1023.34 × 1024.17 × 102
CEC2017_F17Mean2.92 × 1033.40 × 1033.25 × 1034.08 × 1034.12 × 1033.19 × 1033.89 × 1033.23 × 1033.34 × 1033.05 × 1032.74 × 103
Std2.99 × 1023.74 × 1023.52 × 1021.75 × 1024.02 × 1024.31 × 1022.38 × 1021.93 × 1021.35 × 1021.52 × 1022.29 × 102
CEC2017_F18Mean1.95 × 1062.48 × 1061.25 × 1058.20 × 1045.88 × 1065.31 × 1067.48 × 1065.90 × 1057.89 × 1052.51 × 1047.92 × 103
Std1.32 × 1062.21 × 1069.12 × 1042.83 × 1046.86 × 1064.90 × 1064.05 × 1065.11 × 1053.60 × 1051.51 × 1045.77 × 103
CEC2017_F19Mean1.84 × 1043.30 × 1061.54 × 1042.27 × 1035.29 × 1066.62 × 1051.54 × 1072.05 × 1041.14 × 1052.28 × 1032.08 × 103
Std8.69 × 1032.57 × 1061.08 × 1041.18 × 1027.55 × 1061.23 × 1066.95 × 1067.84 × 1033.26 × 1044.59 × 1027.73 × 101
CEC2017_F20Mean3.73 × 1033.18 × 1033.31 × 1034.19 × 1033.61 × 1033.11 × 1033.40 × 1033.29 × 1033.51 × 1033.22 × 1032.99 × 103
Std2.42 × 1023.99 × 1023.53 × 1021.71 × 1024.12 × 1024.90 × 1021.53 × 1021.59 × 1021.52 × 1021.87 × 1022.10 × 102
CEC2017_F21Mean2.50 × 1032.61 × 1032.58 × 1032.75 × 1032.79 × 1032.45 × 1032.78 × 1032.59 × 1032.69 × 1032.58 × 1032.41 × 103
Std4.04 × 1017.03 × 1015.79 × 1012.42 × 1019.89 × 1014.42 × 1011.79 × 1011.65 × 1011.75 × 1013.64 × 1013.04 × 101
CEC2017_F22Mean1.40 × 1049.51 × 1039.70 × 1031.66 × 1041.09 × 1041.32 × 1041.40 × 1041.11 × 1041.30 × 1041.45 × 1048.96 × 103
Std5.15 × 1031.63 × 1039.94 × 1024.20 × 1021.31 × 1032.56 × 1031.99 × 1032.27 × 1031.68 × 1032.37 × 1032.41 × 103
CEC2017_F23Mean2.98 × 1033.03 × 1033.13 × 1033.19 × 1033.36 × 1032.95 × 1033.24 × 1033.05 × 1033.27 × 1032.99 × 1032.84 × 103
Std4.59 × 1017.99 × 1017.16 × 1012.61 × 1019.04 × 1017.45 × 1012.96 × 1012.07 × 1013.67 × 1015.07 × 1013.13 × 101
CEC2017_F24Mean3.17 × 1033.15 × 1033.27 × 1033.34 × 1033.43 × 1033.09 × 1033.41 × 1033.26 × 1033.42 × 1033.20 × 1033.02 × 103
Std8.19 × 1015.26 × 1019.83 × 1012.52 × 1019.34 × 1018.84 × 1012.29 × 1013.16 × 1013.05 × 1015.69 × 1012.88 × 101
CEC2017_F25Mean3.14 × 1033.08 × 1033.06 × 1033.07 × 1033.51 × 1033.11 × 1034.42 × 1033.14 × 1033.68 × 1033.03 × 1033.03 × 103
Std4.32 × 1014.06 × 1012.90 × 1012.09 × 1011.28 × 1034.18 × 1012.05 × 1022.92 × 1011.13 × 1023.62 × 1013.62 × 101
CEC2017_F26Mean7.81 × 1035.57 × 1039.22 × 1038.14 × 1031.00 × 1046.25 × 1038.30 × 1037.01 × 1038.76 × 1035.63 × 1034.88 × 103
Std1.74 × 1032.36 × 1032.39 × 1036.20 × 1021.61 × 1035.67 × 1021.01 × 1031.92 × 1021.11 × 1036.90 × 1022.95 × 102
CEC2017_F27Mean3.57 × 1033.55 × 1033.65 × 1033.53 × 1033.75 × 1033.59 × 1033.72 × 1033.55 × 1034.20 × 1033.25 × 1033.29 × 103
Std1.45 × 1021.14 × 1021.70 × 1021.23 × 1022.11 × 1028.92 × 1015.44 × 1014.87 × 1018.02 × 1015.61 × 1015.38 × 101
CEC2017_F28Mean3.42 × 1033.35 × 1033.32 × 1033.34 × 1035.39 × 1033.41 × 1035.03 × 1033.47 × 1034.38 × 1033.29 × 1033.30 × 103
Std5.39 × 1012.83 × 1013.43 × 1013.02 × 1012.19 × 1034.49 × 1012.79 × 1024.42 × 1011.57 × 1022.37 × 1012.18 × 101
CEC2017_F29Mean4.48 × 1035.38 × 1034.97 × 1035.83 × 1035.47 × 1034.46 × 1035.74 × 1034.28 × 1035.37 × 1034.02 × 1033.75 × 103
Std2.98 × 1025.12 × 1024.32 × 1022.47 × 1025.22 × 1024.56 × 1023.15 × 1021.61 × 1022.04 × 1022.29 × 1022.08 × 102
CEC2017_F30Mean9.43 × 1059.89 × 1079.06 × 1056.57 × 1062.41 × 1071.59 × 1071.18 × 1083.91 × 1064.65 × 1077.96 × 1057.70 × 105
Std1.42 × 1052.55 × 1072.09 × 1051.49 × 1061.74 × 1072.24 × 1074.28 × 1077.51 × 1058.21 × 1061.00 × 1051.39 × 105
MeanRank 5.17 6.00 4.86 6.83 9.41 5.24 10.03 5.55 8.24 3.34 1.31
FinalRank 4 7 3 8 10 5 11 6 9 2 1
Rank First 0 0 0 0 0 0 0 0 0 5 24
Table 8. Comparison of numerical results for the IEEE CEC2017 test function for 100D.
Problems Metric TLBO SSA INFO DE DBO SO CLPSO BDE BESD GO CODGBGO
CEC2017_F1Mean6.64 × 1091.82 × 1052.06 × 1077.10 × 1094.61 × 10106.02 × 1077.23 × 10102.18 × 1094.57 × 10102.47 × 1072.21 × 107
Std3.12 × 1091.96 × 1056.46 × 1071.84 × 1094.83 × 10101.07 × 1086.83 × 1094.04 × 1085.52 × 1099.71 × 1061.24 × 107
CEC2017_F3Mean4.80 × 1053.96 × 1051.71 × 1056.53 × 1055.70 × 1053.32 × 1056.45 × 1053.25 × 1052.66 × 1053.01 × 1051.70 × 105
Std5.54 × 1041.11 × 1052.44 × 1049.49 × 1042.04 × 1051.93 × 1045.03 × 1043.01 × 1041.55 × 1046.52 × 1042.53 × 104
CEC2017_F4Mean1.65 × 1038.84 × 1028.94 × 1021.48 × 1036.19 × 1031.01 × 1031.04 × 1041.29 × 1036.04 × 1037.74 × 1027.60 × 102
Std3.22 × 1026.50 × 1011.48 × 1021.91 × 1026.09 × 1031.54 × 1021.49 × 1039.55 × 1016.34 × 1024.17 × 1014.72 × 101
CEC2017_F5Mean1.14 × 1031.36 × 1031.26 × 1031.66 × 1031.64 × 1039.94 × 1021.81 × 1031.35 × 1031.64 × 1031.36 × 1031.04 × 103
Std8.68 × 1011.32 × 1026.52 × 1014.38 × 1012.03 × 1021.71 × 1024.11 × 1013.51 × 1012.35 × 1013.42 × 1011.07 × 102
CEC2017_F6Mean6.42 × 1026.67 × 1026.54 × 1026.60 × 1026.68 × 1026.15 × 1026.69 × 1026.14 × 1026.69 × 1026.05 × 1026.07 × 102
Std4.98 × 1006.82 × 1006.68 × 1007.82 × 1007.97 × 1003.02 × 1003.86 × 1001.44 × 1003.38 × 1009.15 × 10−12.01 × 100
CEC2017_F7Mean2.09 × 1032.02 × 1032.61 × 1032.25 × 1032.40 × 1031.37 × 1033.58 × 1031.81 × 1032.42 × 1031.76 × 1031.52 × 103
Std1.66 × 1021.97 × 1022.32 × 1029.08 × 1013.89 × 1025.48 × 1011.59 × 1024.45 × 1011.03 × 1024.03 × 1011.02 × 102
CEC2017_F8Mean1.52 × 1031.65 × 1031.63 × 1031.99 × 1032.02 × 1031.31 × 1032.11 × 1031.66 × 1031.98 × 1031.66 × 1031.36 × 103
Std7.93 × 1011.50 × 1029.69 × 1015.32 × 1011.78 × 1021.75 × 1025.30 × 1013.96 × 1012.88 × 1013.69 × 1011.11 × 102
CEC2017_F9Mean4.52 × 1043.00 × 1042.17 × 1044.69 × 1044.66 × 1046.21 × 1039.05 × 1042.45 × 1044.75 × 1045.21 × 1037.28 × 103
Std8.32 × 1033.55 × 1032.40 × 1038.12 × 1032.03 × 1041.58 × 1038.77 × 1034.25 × 1033.95 × 1031.94 × 1033.42 × 103
CEC2017_F10Mean3.18 × 1041.62 × 1041.68 × 1043.25 × 1041.93 × 1043.15 × 1042.79 × 1042.41 × 1042.82 × 1043.14 × 1042.22 × 104
Std5.55 × 1022.06 × 1031.70 × 1035.48 × 1023.83 × 1039.29 × 1026.95 × 1025.79 × 1024.35 × 1026.53 × 1021.43 × 103
CEC2017_F11Mean4.46 × 1041.88 × 1043.63 × 1033.93 × 1041.43 × 1058.18 × 1041.30 × 1053.13 × 1044.91 × 1044.03 × 1032.62 × 103
Std1.08 × 1046.80 × 1035.47 × 1027.47 × 1034.98 × 1041.99 × 1042.50 × 1047.28 × 1038.33 × 1036.43 × 1021.81 × 102
CEC2017_F12Mean2.33 × 1086.00 × 1082.93 × 1072.19 × 1082.15 × 1092.50 × 1081.89 × 10104.92 × 1086.13 × 1092.16 × 1071.22 × 107
Std2.10 × 1082.91 × 1081.60 × 1078.45 × 1077.57 × 1082.85 × 1083.65 × 1091.22 × 1089.88 × 1086.63 × 1066.70 × 106
CEC2017_F13Mean2.07 × 1048.79 × 1042.19 × 1041.04 × 1061.21 × 1081.28 × 1072.17 × 1091.21 × 1063.06 × 1085.63 × 1038.05 × 103
Std8.46 × 1033.31 × 1043.61 × 1045.75 × 1051.30 × 1081.31 × 1075.39 × 1086.14 × 1057.45 × 1073.36 × 1036.08 × 103
CEC2017_F14Mean1.38 × 1062.92 × 1063.54 × 1052.91 × 1051.01 × 1077.61 × 1062.27 × 1072.33 × 1062.17 × 1062.19 × 1049.00 × 103
Std6.81 × 1051.61 × 1061.80 × 1059.76 × 1049.56 × 1066.51 × 1067.02 × 1061.31 × 1066.00 × 1051.80 × 1041.15 × 104
CEC2017_F15Mean5.68 × 1037.87 × 1044.69 × 1031.74 × 1051.44 × 1072.87 × 1065.02 × 1081.00 × 1052.40 × 1075.38 × 1033.27 × 103
Std3.51 × 1032.97 × 1042.75 × 1036.10 × 1044.14 × 1074.44 × 1061.75 × 1087.34 × 1049.34 × 1065.19 × 1031.69 × 103
CEC2017_F16Mean5.64 × 1036.89 × 1036.00 × 1031.09 × 1048.75 × 1038.44 × 1031.03 × 1048.53 × 1031.04 × 1048.94 × 1035.86 × 103
Std9.37 × 1029.34 × 1026.61 × 1024.20 × 1029.88 × 1022.03 × 1037.67 × 1023.99 × 1023.63 × 1025.64 × 1026.87 × 102
CEC2017_F17Mean4.93 × 1035.75 × 1035.81 × 1037.50 × 1038.46 × 1035.90 × 1039.89 × 1036.37 × 1036.81 × 1036.23 × 1035.11 × 103
Std5.64 × 1026.17 × 1025.49 × 1023.18 × 1021.16 × 1031.47 × 1038.83 × 1022.80 × 1022.65 × 1024.14 × 1026.93 × 102
CEC2017_F18Mean2.75 × 1064.21 × 1064.11 × 1052.61 × 1062.12 × 1071.01 × 1072.90 × 1072.82 × 1062.61 × 1062.32 × 1051.65 × 105
Std1.44 × 1062.26 × 1062.20 × 1056.22 × 1052.05 × 1071.21 × 1077.53 × 1069.69 × 1056.06 × 1059.28 × 1047.46 × 104
CEC2017_F19Mean4.36 × 1031.39 × 1076.72 × 1033.98 × 1052.29 × 1075.97 × 1066.08 × 1089.75 × 1042.93 × 1074.61 × 1033.31 × 103
Std2.54 × 1039.33 × 1064.37 × 1031.92 × 1052.39 × 1077.51 × 1061.75 × 1085.47 × 1049.69 × 1063.62 × 1031.86 × 103
CEC2017_F20Mean7.09 × 1035.40 × 1035.55 × 1037.84 × 1036.06 × 1036.88 × 1036.26 × 1036.36 × 1036.84 × 1036.81 × 1035.57 × 103
Std3.13 × 1026.23 × 1025.44 × 1023.10 × 1027.10 × 1024.56 × 1024.14 × 1022.73 × 1022.81 × 1023.47 × 1025.70 × 102
CEC2017_F21Mean3.06 × 1033.12 × 1033.19 × 1033.46 × 1033.80 × 1032.83 × 1033.64 × 1033.20 × 1033.47 × 1033.16 × 1032.83 × 103
Std1.18 × 1021.38 × 1021.47 × 1025.06 × 1011.35 × 1021.90 × 1023.98 × 1013.65 × 1013.80 × 1014.23 × 1011.16 × 102
CEC2017_F22Mean3.34 × 1041.92 × 1041.98 × 1043.48 × 1042.22 × 1043.17 × 1043.03 × 1042.67 × 1043.09 × 1043.33 × 1042.50 × 104
Std4.79 × 1031.54 × 1031.93 × 1035.24 × 1023.05 × 1032.55 × 1037.53 × 1025.89 × 1023.98 × 1026.27 × 1021.15 × 103
CEC2017_F23Mean3.60 × 1033.59 × 1033.79 × 1034.07 × 1034.35 × 1033.38 × 1034.07 × 1033.65 × 1034.37 × 1033.36 × 1033.21 × 103
Std1.27 × 1021.64 × 1021.28 × 1027.13 × 1011.94 × 1021.16 × 1026.18 × 1013.14 × 1015.86 × 1018.32 × 1014.91 × 101
CEC2017_F24Mean4.35 × 1034.14 × 1034.59 × 1034.52 × 1035.26 × 1034.05 × 1034.79 × 1034.23 × 1035.06 × 1034.03 × 1033.70 × 103
Std1.68 × 1021.59 × 1022.56 × 1026.90 × 1012.41 × 1021.67 × 1025.08 × 1015.11 × 1018.05 × 1011.02 × 1027.94 × 101
CEC2017_F25Mean4.24 × 1033.60 × 1033.51 × 1034.15 × 1036.35 × 1033.73 × 1031.24 × 1044.08 × 1037.09 × 1033.44 × 1033.41 × 103
Std2.25 × 1027.97 × 1015.82 × 1011.43 × 1024.65 × 1031.41 × 1021.15 × 1031.41 × 1023.49 × 1024.57 × 1016.71 × 101
CEC2017_F26Mean2.27 × 1041.44 × 1042.26 × 1041.87 × 1042.47 × 1041.36 × 1042.14 × 1041.57 × 1042.47 × 1041.28 × 1041.00 × 104
Std3.09 × 1033.15 × 1034.12 × 1038.12 × 1022.53 × 1031.73 × 1038.55 × 1025.41 × 1021.14 × 1031.11 × 1038.84 × 102
CEC2017_F27Mean4.22 × 1033.87 × 1033.91 × 1033.86 × 1034.18 × 1033.84 × 1034.51 × 1034.01 × 1035.31 × 1033.42 × 1033.43 × 103
Std2.45 × 1021.40 × 1022.73 × 1022.03 × 1022.15 × 1021.21 × 1021.08 × 1027.65 × 1011.37 × 1023.67 × 1014.26 × 101
CEC2017_F28Mean5.13 × 1033.68 × 1033.62 × 1034.41 × 1031.76 × 1044.31 × 1031.58 × 1045.05 × 1031.04 × 1043.53 × 1033.53 × 103
Std6.61 × 1029.50 × 1017.47 × 1013.36 × 1025.79 × 1033.45 × 1021.33 × 1033.29 × 1025.32 × 1025.10 × 1013.76 × 101
CEC2017_F29Mean7.82 × 1039.56 × 1037.66 × 1031.06 × 1041.03 × 1047.28 × 1031.46 × 1048.35 × 1031.14 × 1048.25 × 1036.72 × 103
Std6.19 × 1029.87 × 1026.62 × 1023.48 × 1021.46 × 1031.07 × 1031.01 × 1032.88 × 1023.46 × 1025.06 × 1027.14 × 102
CEC2017_F30Mean4.26 × 1051.22 × 1089.77 × 1041.80 × 1068.40 × 1071.04 × 1061.06 × 1094.06 × 1063.47 × 1085.73 × 1043.34 × 104
Std3.61 × 1055.26 × 1071.05 × 1055.53 × 1051.33 × 1086.90 × 1052.67 × 1081.17 × 1061.08 × 1082.85 × 1041.15 × 104
MeanRank 5.76 4.90 4.07 7.55 8.72 5.07 9.93 5.86 8.79 3.59 1.76
FinalRank 6 4 3 8 9 5 11 7 10 2 1
Rank First 2 4 0 0 0 4 0 0 0 5 14
Table 9. p-values of the IEEE CEC2017 test function set for 30D.
Problems TLBO SSA INFO DE DBO SO CLPSO BDE BESD GO
CEC2017_F12.67 × 10−9/−4.18 × 10−9/−1.50 × 10−2/+3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.70 × 10−2/−
CEC2017_F33.02 × 10−11/−3.02 × 10−11/−1.22 × 10−2/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F43.82 × 10−9/−3.02 × 10−11/−4.69 × 10−8/−3.02 × 10−11/−3.02 × 10−11/−2.15 × 10−10/−3.02 × 10−11/−7.39 × 10−11/−3.02 × 10−11/−3.27 × 10−2/−
CEC2017_F51.86 × 10−9/−4.50 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.01 × 10−4/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.78 × 10−10/−
CEC2017_F63.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.01 × 10−4/+
CEC2017_F71.55 × 10−9/−7.39 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.37 × 10−10/−4.03 × 10−3/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.08 × 10−11/−
CEC2017_F84.71 × 10−4/−6.70 × 10−11/−2.15 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−8.50 × 10−2/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.57 × 10−10/−
CEC2017_F93.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.08 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−6.55 × 10−4/+
CEC2017_F103.02 × 10−11/−2.51 × 10−2/−1.30 × 10−3/−3.02 × 10−11/−9.51 × 10−6/−5.59 × 10−1/=3.02 × 10−11/−1.61 × 10−6/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F111.61 × 10−10/−3.02 × 10−11/−1.33 × 10−10/−7.39 × 10−11/−3.02 × 10−11/−9.26 × 10−9/−3.02 × 10−11/−2.92 × 10−9/−3.02 × 10−11/−1.64 × 10−5/−
CEC2017_F124.50 × 10−11/−3.02 × 10−11/−4.57 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.62 × 10−3/−
CEC2017_F133.47 × 10−10/−3.02 × 10−11/−2.61 × 10−10/−1.43 × 10−8/−3.02 × 10−11/−6.07 × 10−11/−3.02 × 10−11/−4.50 × 10−11/−3.02 × 10−11/−4.22 × 10−4/−
CEC2017_F143.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.20 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.19 × 10−6/−
CEC2017_F153.02 × 10−11/−3.02 × 10−11/−6.07 × 10−11/−3.57 × 10−6/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.71 × 10−4/−
CEC2017_F161.17 × 10−4/−1.31 × 10−8/−1.34 × 10−5/−3.02 × 10−11/−2.61 × 10−10/−7.20 × 10−5/−3.02 × 10−11/−2.15 × 10−10/−3.02 × 10−11/−1.53 × 10−5/−
CEC2017_F171.24 × 10−3/−7.77 × 10−9/−1.33 × 10−10/−3.02 × 10−11/−4.50 × 10−11/−1.31 × 10−8/−2.15 × 10−10/−3.01 × 10−7/−1.17 × 10−9/−3.67 × 10−3/−
CEC2017_F183.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.08 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.33 × 10−10/−
CEC2017_F193.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−7.39 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.31 × 10−8/−
CEC2017_F202.20 × 10−7/−1.78 × 10−10/−1.69 × 10−9/−3.02 × 10−11/−1.33 × 10−10/−1.85 × 10−8/−5.49 × 10−11/−2.83 × 10−8/−5.49 × 10−11/−2.27 × 10−3/−
CEC2017_F216.53 × 10−7/−3.69 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.25 × 10−5/−6.70 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−6.70 × 10−11/−
CEC2017_F221.29 × 10−9/−1.86 × 10−9/−1.11 × 10−4/−3.02 × 10−11/−5.57 × 10−10/−1.61 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.00 × 10−9/−
CEC2017_F231.20 × 10−8/−3.02 × 10−11/−3.34 × 10−11/−4.98 × 10−11/−3.02 × 10−11/−4.20 × 10−10/−4.08 × 10−11/−4.08 × 10−11/−3.02 × 10−11/−9.26 × 10−9/−
CEC2017_F242.00 × 10−5/−8.48 × 10−9/−3.47 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−1.30 × 10−3/−3.02 × 10−11/−4.50 × 10−11/−3.02 × 10−11/−6.07 × 10−11/−
CEC2017_F259.06 × 10−8/−5.07 × 10−10/−3.57 × 10−6/−1.17 × 10−3/−3.02 × 10−11/−5.60 × 10−7/−3.02 × 10−11/−9.92 × 10−11/−3.02 × 10−11/−1.45 × 10−1/=
CEC2017_F261.03 × 10−2/−9.53 × 10−7/−1.86 × 10−6/−1.58 × 10−1/=3.82 × 10−9/−1.21 × 10−10/−9.06 × 10−8/−2.71 × 10−2/−8.77 × 10−2/=8.15 × 10−5/−
CEC2017_F277.12 × 10−9/−2.87 × 10−10/−1.21 × 10−10/−2.87 × 10−10/−7.39 × 10−11/−6.07 × 10−11/−3.02 × 10−11/−4.98 × 10−11/−3.02 × 10−11/−1.68 × 10−3/+
CEC2017_F284.22 × 10−4/−9.83 × 10−8/−2.77 × 10−1/=7.62 × 10−3/−3.34 × 10−11/−3.08 × 10−8/−3.02 × 10−11/−5.97 × 10−5/−3.02 × 10−11/−3.39 × 10−2/+
CEC2017_F291.25 × 10−5/−8.15 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.08 × 10−11/−2.38 × 10−7/−3.02 × 10−11/−8.35 × 10−8/−3.02 × 10−11/−2.25 × 10−4/−
CEC2017_F304.08 × 10−11/−3.02 × 10−11/−1.61 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−9.92 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.16 × 10−10/−
+/−/= 0/29/0 0/29/0 1/27/1 0/28/1 0/29/0 0/27/2 0/29/0 0/29/0 0/28/1 4/24/1
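The entries in Tables 9, 10, and 11 are pairwise p-values against CODGBGO, with +, −, and = marking a competitor that is significantly better, significantly worse, or statistically equivalent at the 0.05 level. As an illustrative sketch only (not the authors' code; `rank_sum_p` and `marker` are hypothetical helper names), the underlying Wilcoxon rank-sum test can be computed in plain Python with the normal approximation:

```python
import math

def rank_sum_p(x, y):
    """Two-sided p-value for the Wilcoxon rank-sum test
    (normal approximation, no tie correction)."""
    n, m = len(x), len(y)
    allv = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n + m)
    i = 0
    while i < n + m:
        j = i
        while j + 1 < n + m and allv[j + 1][0] == allv[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # tied values share the average rank (1-based)
        for k in range(i, j + 1):
            ranks[allv[k][1]] = avg
        i = j + 1
    w = sum(ranks[:n])                 # rank sum of sample x
    mu = n * (n + m + 1) / 2
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

def marker(rival, codgbgo, alpha=0.05):
    """'+' if the rival is significantly better (lower objective, minimization),
    '-' if significantly worse, '=' if no significant difference."""
    p = rank_sum_p(rival, codgbgo)
    if p >= alpha:
        return "="
    return "+" if sorted(rival)[len(rival) // 2] < sorted(codgbgo)[len(codgbgo) // 2] else "-"
```

With 30 independent runs per algorithm, as in these experiments, each table cell would correspond to one `rank_sum_p` value plus its `marker` symbol.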
Table 10. p-values of the IEEE CEC2017 test function set for 50D.
Problems TLBO SSA INFO DE DBO SO CLPSO BDE BESD GO
CEC2017_F13.02 × 10−11/−4.73 × 10−1/=6.31 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.09 × 10−1/=
CEC2017_F33.02 × 10−11/−3.02 × 10−11/−3.83 × 10−5/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.55 × 10−9/−
CEC2017_F42.78 × 10−7/−8.84 × 10−7/−4.36 × 10−2/−1.01 × 10−8/−3.02 × 10−11/−1.96 × 10−10/−3.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−8.07 × 10−1/=
CEC2017_F57.69 × 10−8/−3.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.69 × 10−11/−9.47 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.08 × 10−11/−
CEC2017_F63.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.00 × 10−3/+
CEC2017_F71.46 × 10−10/−8.15 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.49 × 10−11/−3.77 × 10−4/−3.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F81.21 × 10−10/−5.49 × 10−11/−3.34 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.84 × 10−2/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F93.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.07 × 10−9/−3.02 × 10−11/−4.08 × 10−11/−3.02 × 10−11/−4.46 × 10−4/+
CEC2017_F103.02 × 10−11/−5.20 × 10−1/=5.69 × 10−1/=3.02 × 10−11/−2.46 × 10−1/=6.20 × 10−4/−3.02 × 10−11/−8.48 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F113.82 × 10−10/−3.02 × 10−11/−3.08 × 10−8/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.53 × 10−8/−
CEC2017_F128.10 × 10−10/−3.02 × 10−11/−3.16 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.03 × 10−2/−
CEC2017_F135.56 × 10−4/−3.02 × 10−11/−9.52 × 10−4/−3.02 × 10−11/−3.02 × 10−11/−3.82 × 10−10/−3.02 × 10−11/−9.92 × 10−11/−3.02 × 10−11/−1.03 × 10−2/−
CEC2017_F143.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.16 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.96 × 10−8/−
CEC2017_F151.09 × 10−5/−3.34 × 10−11/−1.73 × 10−7/−2.53 × 10−4/+3.02 × 10−11/−3.82 × 10−10/−3.02 × 10−11/−5.07 × 10−10/−3.02 × 10−11/−3.18 × 10−1/=
CEC2017_F166.31 × 10−1/=1.09 × 10−5/−2.68 × 10−4/−3.02 × 10−11/−8.89 × 10−10/−4.03 × 10−3/−1.33 × 10−10/−5.00 × 10−9/−3.16 × 10−10/−1.47 × 10−7/−
CEC2017_F172.42 × 10−2/−9.76 × 10−10/−6.01 × 10−8/−3.02 × 10−11/−3.02 × 10−11/−2.77 × 10−5/−3.02 × 10−11/−1.17 × 10−9/−4.08 × 10−11/−1.60 × 10−7/−
CEC2017_F183.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.20 × 10−7/−
CEC2017_F195.49 × 10−11/−3.02 × 10−11/−5.49 × 10−11/−6.52 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.89 × 10−4/−
CEC2017_F201.78 × 10−10/−4.36 × 10−2/−3.18 × 10−4/−3.02 × 10−11/−9.83 × 10−8/−3.95 × 10−1/=1.01 × 10−8/−1.29 × 10−6/−3.82 × 10−10/−9.21 × 10−5/−
CEC2017_F211.86 × 10−9/−3.34 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−6.20 × 10−4/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.34 × 10−11/−
CEC2017_F223.32 × 10−6/−4.04 × 10−1/=3.63 × 10−1/=3.02 × 10−11/−1.43 × 10−5/−3.08 × 10−8/−1.07 × 10−9/−9.06 × 10−8/−2.02 × 10−8/−5.07 × 10−10/−
CEC2017_F236.70 × 10−11/−3.34 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.03 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.78 × 10−10/−
CEC2017_F241.78 × 10−10/−6.70 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.04 × 10−4/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.98 × 10−11/−
CEC2017_F251.96 × 10−10/−3.09 × 10−6/−1.06 × 10−3/−4.35 × 10−5/−9.92 × 10−11/−1.29 × 10−9/−3.02 × 10−11/−5.49 × 10−11/−3.02 × 10−11/−7.51 × 10−1/=
CEC2017_F268.48 × 10−9/−1.86 × 10−1/=8.48 × 10−9/−3.02 × 10−11/−5.57 × 10−10/−6.70 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.43 × 10−5/−
CEC2017_F278.15 × 10−11/−5.49 × 10−11/−7.39 × 10−11/−1.46 × 10−10/−4.50 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.68 × 10−4/+
CEC2017_F286.70 × 10−11/−2.83 × 10−8/−4.43 × 10−3/−3.32 × 10−6/−3.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.77 × 10−1/=
CEC2017_F295.49 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.00 × 10−9/−3.02 × 10−11/−7.39 × 10−11/−3.02 × 10−11/−1.25 × 10−4/−
CEC2017_F301.09 × 10−5/−3.02 × 10−11/−1.03 × 10−2/−3.02 × 10−11/−3.02 × 10−11/−4.50 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−8.50 × 10−2/=
+/−/= 0/28/1 0/25/4 0/26/3 1/28/0 0/28/1 0/27/2 0/29/0 0/29/0 0/29/0 3/20/6
Table 11. p-values of the IEEE CEC2017 test function set for 100D.
Problems TLBO SSA INFO DE DBO SO CLPSO BDE BESD GO
CEC2017_F13.02 × 10−11/−3.02 × 10−11/+7.74 × 10−6/+3.02 × 10−11/−3.02 × 10−11/−7.04 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.05 × 10−1/=
CEC2017_F33.02 × 10−11/−3.02 × 10−11/−9.12 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.69 × 10−11/−
CEC2017_F43.02 × 10−11/−1.41 × 10−9/−1.47 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−1.96 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−7.73 × 10−2/=
CEC2017_F56.91 × 10−4/−9.76 × 10−10/−3.47 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−1.33 × 10−1/=3.02 × 10−11/−3.34 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F63.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−7.39 × 10−11/−3.02 × 10−11/−3.34 × 10−11/−3.02 × 10−11/−3.56 × 10−4/+
CEC2017_F73.02 × 10−11/−6.70 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.08 × 10−8/+3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.34 × 10−11/−
CEC2017_F84.11 × 10−7/−2.44 × 10−9/−8.89 × 10−10/−3.02 × 10−11/−3.69 × 10−11/−3.18 × 10−1/=3.02 × 10−11/−5.49 × 10−11/−3.02 × 10−11/−7.39 × 10−11/−
CEC2017_F93.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−5.40 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.32 × 10−2/+
CEC2017_F103.02 × 10−11/−8.99 × 10−11/+4.50 × 10−11/+3.02 × 10−11/−4.31 × 10−8/+3.02 × 10−11/−3.02 × 10−11/−1.87 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F113.02 × 10−11/−3.02 × 10−11/−1.07 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.34 × 10−11/−
CEC2017_F123.02 × 10−11/−3.02 × 10−11/−3.96 × 10−8/−3.02 × 10−11/−3.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.61 × 10−6/−
CEC2017_F132.83 × 10−8/−3.02 × 10−11/−5.61 × 10−5/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.01 × 10−1/=
CEC2017_F143.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−6.28 × 10−6/−
CEC2017_F157.20 × 10−5/−3.02 × 10−11/−3.34 × 10−3/−3.02 × 10−11/−3.02 × 10−11/−7.39 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.06 × 10−2/−
CEC2017_F162.12 × 10−1/=2.43 × 10−5/−4.55 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−2.15 × 10−6/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F172.34 × 10−1/=1.60 × 10−3/−1.11 × 10−4/−3.02 × 10−11/−3.02 × 10−11/−9.05 × 10−2/=3.02 × 10−11/−2.87 × 10−10/−3.34 × 10−11/−1.01 × 10−8/−
CEC2017_F183.02 × 10−11/−3.02 × 10−11/−4.80 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.86 × 10−3/−
CEC2017_F192.61 × 10−2/−3.02 × 10−11/−8.29 × 10−6/−3.02 × 10−11/−3.02 × 10−11/−4.98 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.29 × 10−1/=
CEC2017_F204.98 × 10−11/−3.79 × 10−1/=7.17 × 10−1/=3.02 × 10−11/−6.38 × 10−3/−9.76 × 10−10/−1.34 × 10−5/−1.60 × 10−7/−1.61 × 10−10/−2.15 × 10−10/−
CEC2017_F213.08 × 10−8/−9.76 × 10−10/−2.61 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.63 × 10−1/=3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.34 × 10−11/−
CEC2017_F225.57 × 10−10/−3.02 × 10−11/+1.21 × 10−10/+3.02 × 10−11/−5.53 × 10−8/+3.34 × 10−11/−3.02 × 10−11/−1.20 × 10−8/−3.02 × 10−11/−3.02 × 10−11/−
CEC2017_F233.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.25 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.92 × 10−9/−
CEC2017_F243.02 × 10−11/−3.69 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.61 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.09 × 10−10/−
CEC2017_F253.02 × 10−11/−1.29 × 10−9/−3.81 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−6.07 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−1.50 × 10−2/−
CEC2017_F263.02 × 10−11/−8.48 × 10−9/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.15 × 10−10/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.37 × 10−10/−
CEC2017_F273.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.84 × 10−1/=
CEC2017_F283.02 × 10−11/−6.52 × 10−9/−2.32 × 10−6/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−4.55 × 10−1/=
CEC2017_F294.11 × 10−7/−5.49 × 10−11/−1.64 × 10−5/−3.02 × 10−11/−3.02 × 10−11/−5.19 × 10−2/=3.02 × 10−11/−6.70 × 10−11/−3.02 × 10−11/−1.17 × 10−9/−
CEC2017_F303.34 × 10−11/−3.02 × 10−11/−7.04 × 10−7/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−3.02 × 10−11/−2.84 × 10−4/−
+/−/= 0/27/2 3/25/1 3/23/3 0/29/0 2/27/0 1/22/6 0/29/0 0/29/0 0/29/0 2/21/6
Table 12. Friedman mean rank test results for the IEEE CEC2017 test function set.
Dimensions 30D 50D 100D
Algorithms Mean Rank Final Rank Mean Rank Final Rank Mean Rank Final Rank
TLBO 5.05 3 5.31 5 5.77 6
SSA 7.08 8 6.08 7 5.13 5
INFO 5.51 5 4.82 3 3.99 3
DE 6.51 7 7.02 8 7.67 8
DBO 8.91 10 8.69 10 8.49 9
SO 5.58 6 5.10 4 5.01 4
CLPSO 9.60 11 9.84 11 9.87 11
BDE 5.30 4 5.61 6 5.78 7
BESD 7.79 9 8.39 9 8.77 10
GO 3.03 2 3.38 2 3.63 2
CODGBGO 1.65 1 1.75 1 1.87 1
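The mean ranks in Table 12 come from the Friedman procedure: on each problem the algorithms are ranked by mean error (best = 1, tied entries share the average rank), and the per-problem ranks are then averaged over all functions. A minimal sketch of this ranking step, with `friedman_mean_ranks` as a hypothetical helper name, not the authors' implementation:

```python
def friedman_mean_ranks(scores):
    """scores[p][a]: mean objective value of algorithm a on problem p (lower is better).
    Returns each algorithm's average rank across all problems."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])  # indices, best first
        r = 0
        while r < n_alg:
            s = r
            while s + 1 < n_alg and row[order[s + 1]] == row[order[r]]:
                s += 1                  # extend the group of tied algorithms
            avg = (r + s) / 2 + 1       # 1-based average rank shared by the tie group
            for k in range(r, s + 1):
                totals[order[k]] += avg
            r = s + 1
    return [t / len(scores) for t in totals]
```

The "Final Rank" column then simply orders the algorithms by these averages, which is why CODGBGO's mean rank of 1.65 at 30D yields final rank 1.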
Table 13. Comparison of numerical results on the IEEE CEC2020 test function.
Dimensions 10D 20D
Problems Metric ALSHADE IMODE LSHADE CODGBGO ALSHADE IMODE LSHADE CODGBGO
CEC2020_F1 Mean 1.00 × 102 1.00 × 102 1.00 × 102 1.00 × 102 1.00 × 102 1.00 × 102 1.00 × 102 1.00 × 102
Std 0.00 × 100 2.64 × 10−15 0.00 × 100 1.50 × 10−10 2.04 × 10−14 5.91 × 10−12 1.20 × 10−12 9.49 × 10−5
CEC2020_F2 Mean 1.15 × 103 1.16 × 103 1.12 × 103 1.11 × 103 1.21 × 103 1.30 × 103 1.28 × 103 1.12 × 103
Std 6.99 × 101 5.43 × 101 2.25 × 101 5.19 × 100 4.01 × 101 9.85 × 101 8.29 × 101 8.88 × 100
CEC2020_F3 Mean 7.13 × 102 7.17 × 102 7.12 × 102 7.12 × 102 7.26 × 102 7.29 × 102 7.27 × 102 7.26 × 102
Std 8.45 × 10−1 2.28 × 100 9.45 × 10−1 1.36 × 101 1.43 × 101 1.98 × 100 7.90 × 10−1 3.47 × 10−13
CEC2020_F4 Mean 1.90 × 103 1.90 × 103 1.90 × 103 1.90 × 103 1.90 × 103 1.90 × 103 1.90 × 103 1.90 × 103
Std 1.07 × 10−1 2.25 × 10−1 7.01 × 10−2 7.91 × 10−2 1.89 × 10−1 2.84 × 10−1 2.13 × 10−1 4.63 × 10−13
CEC2020_F5 Mean 1.72 × 103 1.73 × 103 1.70 × 103 1.70 × 103 2.16 × 103 2.07 × 103 2.01 × 103 1.93 × 103
Std 3.07 × 101 4.53 × 101 1.42 × 100 3.33 × 10−1 2.20 × 102 1.02 × 102 2.75 × 100 2.82 × 100
CEC2020_F6 Mean 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103 1.60 × 103
Std 3.00 × 10−1 2.03 × 100 3.00 × 10−1 2.88 × 10−1 2.59 × 10−1 3.44 × 10−1 3.53 × 10−1 0.00 × 100
CEC2020_F7 Mean 2.10 × 103 2.10 × 103 2.10 × 103 2.10 × 103 2.22 × 103 2.16 × 103 2.15 × 103 2.16 × 103
Std 2.60 × 10−1 3.11 × 10−1 3.18 × 101 1.79 × 10−1 1.02 × 102 6.29 × 101 5.23 × 101 3.01 × 100
CEC2020_F8 Mean 2.30 × 103 2.31 × 103 2.31 × 103 2.31 × 103 2.31 × 103 2.31 × 103 2.31 × 103 2.31 × 103
Std 2.50 × 101 4.55 × 10−13 4.31 × 10−13 4.31 × 10−13 0.00 × 100 7.97 × 10−13 6.86 × 10−13 0.00 × 100
CEC2020_F9 Mean 2.72 × 103 2.66 × 103 2.68 × 103 2.74 × 103 2.81 × 103 2.82 × 103 2.81 × 103 2.84 × 103
Std 4.26 × 101 1.13 × 102 9.39 × 101 4.35 × 100 4.83 × 100 6.04 × 101 1.66 × 100 1.33 × 101
CEC2020_F10 Mean 2.92 × 103 2.91 × 103 2.91 × 103 2.90 × 103 2.92 × 103 2.92 × 103 2.91 × 103 2.91 × 103
Std 2.33 × 101 1.72 × 101 1.72 × 101 4.54 × 10−1 1.09 × 101 1.69 × 101 1.72 × 10−2 3.05 × 10−2
Mean Rank 2.4 2.8 2.1 1.4 2.3 2.9 2.2 1.4
Final Rank 3 4 2 1 3 4 2 1
Rank First 2 2 1 8 1 2 4 8
Table 14. p-values of the IEEE CEC2020 test function set.
Dimensions 10D 20D
Problems ALSHADE IMODE LSHADE ALSHADE IMODE LSHADE
CEC2020_F1 1.63 × 10^−11/+, 1.63 × 10^−11/+, 1.63 × 10^−11/+, 1.21 × 10^−12/+, 3.16 × 10^−12/+, 7.82 × 10^−12/+
CEC2020_F2 1.42 × 10^−4/−, 2.36 × 10^−9/−, 7.39 × 10^−4/−, 1.37 × 10^−8/−, 9.26 × 10^−10/−, 2.15 × 10^−12/−
CEC2020_F3 4.87 × 10^−4/−, 7.41 × 10^−11/−, 4.34 × 10^−2/−, 1.22 × 10^−12/−, 1.01 × 10^−12/−, 3.90 × 10^−13/−
CEC2020_F4 2.40 × 10^−11/−, 2.40 × 10^−11/−, 5.93 × 10^−11/−, 7.15 × 10^−9/−, 3.36 × 10^−11/−, 1.21 × 10^−12/−
CEC2020_F5 5.18 × 10^−5/−, 3.87 × 10^−4/−, 2.21 × 10^−3/−, 2.34 × 10^−5/−, 3.73 × 10^−11/−, 4.29 × 10^−14/−
CEC2020_F6 3.47 × 10^−1/=, 7.47 × 10^−2/=, 6.93 × 10^−1/=, 1.81 × 10^−7/−, 2.05 × 10^−5/−, 1.21 × 10^−12/−
CEC2020_F7 1.47 × 10^−1/=, 1.22 × 10^−1/=, 3.53 × 10^−1/=, 2.06 × 10^−2/−, 1.55 × 10^−1/=, 2.34 × 10^−5/+
CEC2020_F8 1.61 × 10^−1/=, 3.34 × 10^−1/=, 3.34 × 10^−1/=, 3.34 × 10^−1/=, 3.34 × 10^−1/=, 3.34 × 10^−1/=
CEC2020_F9 1.55 × 10^−8/+, 1.64 × 10^−6/+, 3.61 × 10^−11/+, 9.92 × 10^−11/+, 1.22 × 10^−2/+, 3.02 × 10^−11/+
CEC2020_F10 6.70 × 10^−3/−, 2.71 × 10^−1/=, 6.25 × 10^−1/=, 1.72 × 10^−12/−, 1.07 × 10^−9/−, 1.35 × 10^−12/−
+/−/= 2/5/3, 2/4/4, 2/4/4, 2/7/1, 2/6/2, 3/7/0
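The entries above are pairwise Wilcoxon rank-sum p-values with a win/tie/loss mark at the 5% significance level. A minimal sketch of how one such entry is typically produced (the per-run error arrays and the sign convention here are illustrative assumptions, not the paper's code):

```python
import numpy as np
from scipy.stats import ranksums  # two-sided Wilcoxon rank-sum test

rng = np.random.default_rng(0)
# Hypothetical final errors over 30 independent runs on one test function
rival = rng.normal(loc=1.2, scale=0.1, size=30)    # comparison algorithm
codgbgo = rng.normal(loc=1.0, scale=0.1, size=30)  # proposed algorithm

_, p = ranksums(rival, codgbgo)
if p >= 0.05:
    mark = "="   # statistically indistinguishable at the 5% level
elif np.median(rival) > np.median(codgbgo):
    mark = "-"   # rival significantly worse on this minimization problem
else:
    mark = "+"   # rival significantly better
print(f"{p:.2e}/{mark}")
```

With a clear gap between the two samples the test reports a tiny p-value and a "-" mark, mirroring the table's notation.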
Table 15. Friedman mean rank test results for IEEE CEC2020 test function set.
Dimensions: 10D, 20D
Algorithms: Mean Rank, Final Rank (10D); Mean Rank, Final Rank (20D)
ALSHADE 2.58, 3; 2.55, 2
IMODE 2.78, 4; 2.87, 4
SHADE 2.35, 2; 2.59, 3
CODGBGO 2.28, 1; 1.99, 1
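Table 15's mean ranks come from a Friedman test over the ten CEC2020 functions. A minimal sketch with a hypothetical error matrix (the data and the column bias are illustrative, not the paper's results):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean-error matrix: rows = 10 test functions, cols = 4 algorithms
rng = np.random.default_rng(1)
errors = rng.random((10, 4)) + np.array([0.3, 0.4, 0.2, 0.0])  # last column biased best

stat, p = friedmanchisquare(*errors.T)              # omnibus test across the algorithms
mean_ranks = rankdata(errors, axis=1).mean(axis=0)  # per-algorithm mean rank (lower = better)
print(f"p = {p:.3f}, mean ranks = {np.round(mean_ranks, 2)}")
```

The "Final Rank" column then simply orders the algorithms by their mean rank.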
Table 16. The information from 18 datasets.
Datasets: Number of Features, Number of Classes, Dataset Size
IonosphereEW 34, 2, 351
Breastcancer 9, 2, 699
BreastEW 30, 2, 569
Congress 16, 2, 435
Wine 13, 3, 178
Vote 16, 2, 435
Vehicle 18, 4, 846
Exactly 13, 2, 1000
Glass 9, 7, 214
HeartEW 13, 2, 270
Landsat 36, 6, 2000
Lymphography 18, 4, 148
Zoo 16, 7, 101
WDBC 30, 2, 569
SonarEW 60, 2, 208
Libras 90, 15, 360
Spectf 44, 2, 267
MUSK 166, 2, 476
Table 17. Comparison of algorithms for feature selection.
DatasetsMetricPSODEGWOMVOWOAABOBOAHHOEOGOCODGBGO
IonosphereEWBest0.0878 0.0562 0.0345 0.0492 0.0573 0.0503 0.0375 0.0503 0.0603 0.0375 0.0246
Mean0.1278 0.0858 0.0545 0.0826 0.0919 0.0865 0.0641 0.1017 0.0714 0.0708 0.0476
Worst0.1797 0.1143 0.0750 0.1025 0.1245 0.1183 0.0977 0.1521 0.0889 0.0977 0.0790
Rank(11/11/11)(8/7/7)(2/2/1)(5/6/6)(9/9/9)(6/8/8)(4/3/5)(6/10/10)(10/5/3)(3/4/4)(1/1/2)
BreastcancerBest0.0592 0.0528 0.0592 0.0592 0.0611 0.0592 0.0639 0.0416 0.0463 0.0546 0.0398
Mean0.0626 0.0547 0.0603 0.0607 0.0684 0.0612 0.0696 0.0672 0.0468 0.0569 0.0422
Worst0.0805 0.0592 0.0722 0.0722 0.0916 0.0740 0.0787 0.0879 0.0481 0.0657 0.0463
Rank(6/8/9)(4/3/3)(6/5/5)(6/6/5)(10/10/11)(6/7/7)(11/11/8)(2/9/10)(3/2/2)(5/4/4)(1/1/1)
BreastEWBest0.0439 0.0333 0.0472 0.0413 0.0419 0.0406 0.0452 0.0333 0.0293 0.0346 0.0246
Mean0.0532 0.0488 0.0535 0.0602 0.0597 0.0574 0.0601 0.0531 0.0415 0.0489 0.0325
Worst0.0672 0.0672 0.0632 0.0752 0.0798 0.0704 0.0719 0.0706 0.0619 0.0719 0.0413
Rank(9/6/4)(3/3/4)(11/7/3)(7/11/10)(8/9/11)(6/8/6)(10/10/8)(3/5/7)(2/2/2)(5/4/8)(1/1/1)
CongressBest0.0166 0.0250 0.0394 0.0601 0.0269 0.0623 0.0582 0.0476 0.0313 0.0353 0.0063
Mean0.0215 0.0389 0.0452 0.0747 0.0275 0.0744 0.0788 0.0551 0.0371 0.0371 0.0063
Worst0.0313 0.0623 0.0519 0.0914 0.0353 0.0787 0.0974 0.0996 0.0373 0.0394 0.0063
Rank(2/2/2)(3/6/7)(7/7/6)(10/10/9)(4/3/3)(11/9/8)(9/11/10)(8/8/11)(5/4/4)(6/5/5)(1/1/1)
WineBest0.0308 0.0231 0.0565 0.0231 0.0411 0.0565 0.0308 0.0231 0.0231 0.0565 0.0231
Mean0.0717 0.0427 0.0711 0.0449 0.0687 0.0606 0.0639 0.0664 0.0420 0.0656 0.0259
Worst0.1336 0.0642 0.1079 0.1053 0.1851 0.0745 0.1079 0.1387 0.0899 0.0899 0.0385
Rank(6/11/9)(1/3/2)(9/10/7)(1/4/6)(8/9/11)(9/5/3)(6/6/7)(1/8/10)(1/2/4)(9/7/4)(1/1/1)
VoteBest0.0332 0.0269 0.0269 0.0228 0.0332 0.0166 0.0746 0.0394 0.0269 0.0250 0.0063
Mean0.0399 0.0334 0.0269 0.0388 0.0367 0.0175 0.0932 0.0557 0.0269 0.0280 0.0067
Worst0.0541 0.0438 0.0269 0.0769 0.0435 0.0435 0.1388 0.0996 0.0269 0.0500 0.0188
Rank(8/9/8)(5/6/6)(5/3/2)(3/8/9)(8/7/4)(2/2/4)(11/11/11)(10/10/10)(5/3/2)(4/5/7)(1/1/1)
VehicleBest0.2575 0.2519 0.2563 0.2468 0.2568 0.2626 0.2621 0.2572 0.2568 0.2514 0.2308
Mean0.2730 0.2710 0.2770 0.2660 0.2933 0.2857 0.2988 0.2966 0.2687 0.2746 0.2463
Worst0.3100 0.2950 0.3047 0.2947 0.3272 0.3105 0.3427 0.3334 0.3052 0.2943 0.2623
Rank(9/5/7)(4/4/4)(5/7/5)(2/2/3)(6/9/9)(11/8/8)(10/11/11)(8/10/10)(6/3/6)(3/6/2)(1/1/1)
ExactlyBest0.0462 0.0462 0.0462 0.0462 0.0462 0.0462 0.2177 0.0628 0.0462 0.0462 0.0462
Mean0.1256 0.1047 0.2058 0.2006 0.2537 0.2183 0.2886 0.2662 0.1711 0.0781 0.0603
Worst0.3040 0.2704 0.2912 0.3143 0.2912 0.2912 0.2912 0.3387 0.2905 0.2912 0.2635
Rank(1/4/9)(1/3/2)(1/7/4)(1/6/10)(1/9/4)(1/8/4)(11/11/4)(10/10/11)(1/5/3)(1/2/4)(1/1/1)
GlassBest0.2373 0.2905 0.2587 0.2698 0.2690 0.2373 0.2373 0.2484 0.2802 0.2690 0.1833
Mean0.2438 0.2982 0.2745 0.2941 0.2952 0.2469 0.2890 0.2799 0.2952 0.2838 0.1894
Worst0.2810 0.3230 0.3016 0.3452 0.3333 0.2802 0.3341 0.3238 0.3119 0.3016 0.2476
Rank(2/2/3)(11/11/7)(6/4/4)(9/8/11)(7/10/9)(2/3/2)(2/7/10)(5/5/8)(10/9/6)(7/6/4)(1/1/1)
HeartEWBest0.1897 0.1641 0.1231 0.1231 0.1038 0.1564 0.1551 0.1808 0.1308 0.1641 0.0974
Mean0.2287 0.2052 0.1332 0.1754 0.1324 0.1928 0.2210 0.2291 0.1421 0.1742 0.1038
Worst0.2808 0.2615 0.1821 0.2218 0.1731 0.2385 0.2654 0.3038 0.1731 0.3308 0.1385
Rank(11/10/9)(8/8/7)(3/3/4)(3/6/5)(2/2/2)(7/7/6)(6/9/8)(10/11/10)(5/4/2)(8/5/11)(1/1/1)
LandsatBest0.1117 0.1194 0.1122 0.1088 0.1061 0.1059 0.1292 0.1183 0.1083 0.1048 0.1010
Mean0.1214 0.1325 0.1196 0.1201 0.1204 0.1160 0.1395 0.1436 0.1160 0.1122 0.1109
Worst0.1343 0.1495 0.1268 0.1339 0.1386 0.1240 0.1498 0.1623 0.1257 0.1228 0.1240
Rank(7/8/7)(10/9/9)(8/5/5)(6/6/6)(4/7/8)(3/4/2)(11/10/10)(9/11/11)(5/3/4)(2/2/1)(1/1/2)
LymphographyBest0.1375 0.0898 0.1042 0.1065 0.0699 0.1264 0.1153 0.0755 0.0644 0.0644 0.0533
Mean0.1890 0.1189 0.1181 0.1581 0.1152 0.1598 0.1741 0.1314 0.0860 0.0929 0.0649
Worst0.2506 0.1741 0.1519 0.2251 0.1885 0.1941 0.2418 0.2052 0.1153 0.1320 0.0954
Rank(11/11/11)(6/6/5)(7/5/4)(8/8/9)(4/4/6)(10/9/7)(9/10/10)(5/7/8)(2/2/2)(2/3/3)(1/1/1)
ZooBest0.0375 0.0438 0.0375 0.0375 0.0375 0.0375 0.0625 0.0500 0.0438 0.0438 0.0375
Mean0.0875 0.0505 0.0667 0.0690 0.0668 0.0607 0.1323 0.0938 0.0746 0.0614 0.0516
Worst0.1475 0.0950 0.1213 0.1525 0.1075 0.0888 0.1788 0.1725 0.0825 0.0825 0.0825
Rank(1/9/8)(7/1/5)(1/5/7)(1/7/9)(1/6/6)(1/3/4)(11/11/11)(10/10/10)(7/8/1)(7/4/1)(1/2/1)
WDBCBest0.0372 0.0791 0.0385 0.0552 0.0465 0.0532 0.0419 0.0545 0.0498 0.0498 0.0372
Mean0.0527 0.0877 0.0457 0.0943 0.0642 0.0601 0.0575 0.0781 0.0528 0.0631 0.0393
Worst0.0778 0.1004 0.0678 0.1269 0.0830 0.0750 0.0758 0.0971 0.0704 0.0771 0.0578
Rank(1/3/7)(11/10/10)(3/2/2)(10/11/11)(5/8/8)(8/6/4)(4/5/5)(9/9/9)(6/4/3)(6/7/6)(1/1/1)
SonarEWBest0.0536 0.0620 0.0167 0.1075 0.0822 0.0789 0.0859 0.0672 0.0320 0.0470 0.0117
Mean0.1225 0.0922 0.0609 0.1457 0.1127 0.1293 0.1111 0.1295 0.0546 0.0893 0.0192
Worst0.1887 0.1209 0.1195 0.1937 0.1584 0.1684 0.1381 0.1867 0.0995 0.1364 0.0300
Rank(5/8/10)(6/5/4)(2/3/3)(11/11/11)(9/7/7)(8/9/8)(10/6/6)(7/10/9)(3/2/2)(4/4/5)(1/1/1)
LibrasBest0.1389 0.1625 0.1269 0.0992 0.1428 0.1644 0.1303 0.1494 0.1156 0.1447 0.0667
Mean0.1770 0.1979 0.1581 0.1376 0.1885 0.2185 0.1727 0.1984 0.1435 0.1783 0.0913
Worst0.2103 0.2122 0.1833 0.1569 0.2439 0.2492 0.1939 0.2411 0.1781 0.2050 0.1164
Rank(6/6/7)(10/9/8)(4/4/4)(2/2/2)(7/8/10)(11/11/11)(5/5/5)(9/10/9)(3/3/3)(8/7/6)(1/1/1)
SpectfBest0.0681 0.0602 0.0544 0.1076 0.0963 0.0985 0.1145 0.1076 0.0669 0.0511 0.0511
Mean0.1073 0.1140 0.0926 0.1416 0.1506 0.1213 0.1484 0.1469 0.0930 0.0817 0.0748
Worst0.1496 0.1496 0.1360 0.1745 0.1913 0.1551 0.1689 0.1936 0.1280 0.1304 0.1111
Rank(6/5/5)(4/6/5)(3/3/4)(9/8/9)(7/11/10)(8/7/7)(11/10/8)(9/9/11)(5/4/2)(1/2/3)(1/1/1)
MUSKBest0.0610 0.0792 0.0342 0.0510 0.0503 0.0692 0.0963 0.0606 0.0328 0.0676 0.0199
Mean0.1020 0.0981 0.0681 0.0843 0.0816 0.0960 0.1424 0.0894 0.0516 0.0913 0.0366
Worst0.1411 0.1272 0.1106 0.1175 0.1138 0.1243 0.1703 0.1234 0.0728 0.1210 0.0591
Rank(7/10/10)(10/9/9)(3/3/3)(5/5/5)(4/4/4)(9/8/8)(11/11/11)(6/6/7)(2/2/2)(8/7/6)(1/1/1)
Mean RankBest6.0556 6.2222 4.7778 5.5000 5.7778 6.6111 8.4444 7.0556 4.5000 4.9444 1.0000
Mean7.1111 6.0556 4.7222 6.9444 7.3333 6.7778 8.7778 8.7778 3.7222 4.6667 1.0556
Worst7.5556 5.7778 4.0556 7.5556 7.3333 5.9444 8.2222 9.5000 2.9444 4.6667 1.1111
Final Rank Best 7, 8, 3, 5, 6, 9, 11, 10, 2, 4, 1
Mean 8, 5, 4, 7, 9, 6, 10, 10, 2, 3, 1
Worst 8, 5, 3, 8, 7, 6, 10, 11, 2, 4, 1
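The Best/Mean/Worst entries in Table 17 are fitness values that trade classification error against subset size. The exact weighting is defined earlier in the paper; a common wrapper-style formulation (assumed here, with weight α = 0.99 and hypothetical numbers) looks like this:

```python
def fs_fitness(error_rate: float, n_selected: int, n_features: int,
               alpha: float = 0.99) -> float:
    """Wrapper feature-selection fitness (assumed common form):
    mostly classification error, plus a small penalty for large subsets."""
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_features)

# Illustrative: 3% error using 5 of IonosphereEW's 34 features
print(round(fs_fitness(0.03, 5, 34), 4))  # → 0.0312
```

Because α is close to 1, accuracy dominates and the subset-size term mainly breaks ties between equally accurate subsets.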
Table 18. p-value of the Wilcoxon rank sum test on different datasets.
DatasetsPSODEGWOMVOWOAABOBOAHHOEOGO
IonosphereEW2.80 × 10−11/−6.88 × 10−10/−1.56 × 10−2/−1.04 × 10−9/−7.07 × 10−11/−1.48 × 10−9/−6.83 × 10−5/−2.42 × 10−10/−1.26 × 10−8/−7.75 × 10−7/−
Breastcancer6.85 × 10−12/−6.83 × 10−12/−1.86 × 10−12/−3.38 × 10−12/−7.62 × 10−12/−6.81 × 10−12/−9.99 × 10−12/−4.77 × 10−8/−5.61 × 10−8/−6.29 × 10−12/−
BreastEW2.21 × 10−11/−7.23 × 10−10/−2.05 × 10−11/−2.78 × 10−11/−2.33 × 10−11/−2.83 × 10−11/−2.30 × 10−11/−3.80 × 10−10/−1.24 × 10−6/−2.67 × 10−10/−
Congress8.56 × 10−13/−1.10 × 10−12/−5.51 × 10−13/−1.10 × 10−12/−6.14 × 10−14/−5.29 × 10−13/−9.74 × 10−13/−7.32 × 10−13/−2.71 × 10−14/−2.03 × 10−13/−
Wine2.85 × 10−10/−5.29 × 10−8/−7.33 × 10−12/−1.99 × 10−6/−1.06 × 10−11/−3.21 × 10−12/−1.99 × 10−11/−2.39 × 10−9/−3.69 × 10−5/−4.43 × 10−12/−
Vote1.33 × 10−12/−1.50 × 10−12/−2.71 × 10−14/−1.57 × 10−12/−3.00 × 10−13/−1.61 × 10−12/−1.63 × 10−12/−1.54 × 10−12/−2.71 × 10−14/−3.86 × 10−13/−
Vehicle8.02 × 10−10/−2.58 × 10−10/−1.28 × 10−10/−5.52 × 10−8/−8.10 × 10−11/−2.71 × 10−11/−3.84 × 10−11/−4.48 × 10−11/−3.56 × 10−7/−5.60 × 10−11/−
Exactly7.22 × 10−5/−4.90 × 10−6/−1.38 × 10−6/−1.68 × 10−7/−2.80 × 10−11/−2.25 × 10−9/−3.54 × 10−13/−1.93 × 10−12/−1.04 × 10−4/−5.87 × 10−1/=
Glass2.42 × 10−10/−1.25 × 10−12/−2.12 × 10−12/−2.34 × 10−12/−2.61 × 10−12/−2.87 × 10−10/−8.94 × 10−12/−2.96 × 10−12/−1.17 × 10−12/−2.01 × 10−12/−
HeartEW1.08 × 10−11/−1.13 × 10−11/−4.53 × 10−11/−1.72 × 10−11/−4.17 × 10−9/−1.11 × 10−11/−1.12 × 10−11/−1.16 × 10−11/−3.47 × 10−11/−5.04 × 10−12/−
Landsat1.52 × 10−7/−4.05 × 10−11/−9.38 × 10−8/−1.86 × 10−5/−9.13 × 10−6/−6.00 × 10−4/−2.99 × 10−11/−3.85 × 10−11/−6.17 × 10−4/−3.55 × 10−1/=
Lymphography2.16 × 10−11/−3.18 × 10−11/−1.91 × 10−11/−2.15 × 10−11/−1.13 × 10−10/−2.12 × 10−11/−2.20 × 10−11/−3.78 × 10−11/−5.95 × 10−6/−3.37 × 10−7/−
Zoo3.01 × 10−7/−6.93 × 10−2/=7.37 × 10−2/=2.38 × 10−3/−9.68 × 10−4/−4.64 × 10−2/−6.34 × 10−11/−9.31 × 10−9/−5.00 × 10−6/−2.52 × 10−2/−
WDBC4.35 × 10−9/−2.99 × 10−12/−7.54 × 10−9/−4.21 × 10−12/−2.89 × 10−11/−2.38 × 10−11/−3.48 × 10−10/−4.37 × 10−12/−5.34 × 10−9/−2.26 × 10−11/−
SonarEW2.88 × 10−11/−2.87 × 10−11/−1.06 × 10−9/−2.88 × 10−11/−2.89 × 10−11/−2.89 × 10−11/−2.89 × 10−11/−2.89 × 10−11/−2.87 × 10−11/−2.84 × 10−11/−
Libras2.99 × 10−11/−2.99 × 10−11/−2.98 × 10−11/−7.31 × 10−11/−3.00 × 10−11/−3.00 × 10−11/−2.97 × 10−11/−3.00 × 10−11/−3.64 × 10−11/−2.99 × 10−11/−
Spectf4.06 × 10−8/−2.26 × 10−8/−3.75 × 10−4/−3.66 × 10−11/−7.28 × 10−11/−6.79 × 10−11/−2.97 × 10−11/−4.01 × 10−11/−1.88 × 10−4/−9.16 × 10−2/=
MUSK3.00 × 10−11/−3.00 × 10−11/−2.38 × 10−8/−4.95 × 10−11/−1.04 × 10−10/−3.00 × 10−11/−3.00 × 10−11/−3.01 × 10−11/−3.43 × 10−6/−3.00 × 10−11/−
(+/−/=)(0/18/0)(0/17/1)(0/17/1)(0/18/0)(0/18/0)(0/18/0)(0/18/0)(0/18/0)(0/18/0)(0/15/3)
Table 19. The mean classification accuracy of the comparison algorithm.
DatasetsPSODEGWOMVOWOAABOBOAHHOEOGOCODGBGO
IonosphereEW89.1429 94.7143 95.5714 94.5714 91.0476 92.4286 94.9524 90.3333 93.0952 93.9524 96.1905
Rank 11, 4, 2, 5, 9, 8, 3, 10, 7, 6, 1
Breastcancer96.5468 97.7458 96.8825 97.2422 95.2038 96.8585 95.5635 96.0671 98.1775 96.5947 99.0168
Rank 8, 3, 5, 4, 11, 6, 10, 9, 2, 7, 1
BreastEW97.4631 99.2035 96.5782 97.6696 96.1357 96.7257 96.2242 97.4631 97.6696 98.1121 99.1150
Rank 6, 1, 9, 4, 11, 8, 10, 6, 4, 3, 2
Congress99.6935 97.9693 96.4368 94.2912 97.7778 93.3716 93.6782 95.3640 96.6667 97.0115 100.0000
Rank 2, 3, 7, 9, 4, 11, 10, 8, 6, 5, 1
Wine95.9048 99.8095 95.5238 98.2857 95.6190 96.5714 96.8571 96.3810 98.0952 96.1905 100.0000
Rank 9, 2, 11, 3, 10, 6, 5, 7, 4, 8, 1
Vote97.8161 99.2720 97.7011 98.0077 96.7816 98.7739 91.9540 96.0536 97.7011 99.6169 100.0000
Rank 6, 3, 7, 5, 9, 4, 11, 10, 7, 2, 1
Vehicle74.3393 75.0099 72.1893 74.9310 71.4596 71.7949 70.3550 71.7357 73.3136 73.8462 76.2919
Rank 4, 2, 7, 3, 10, 8, 11, 9, 6, 5, 1
Exactly90.8833 94.6667 80.1000 82.1000 73.7167 78.6833 69.3333 72.3000 85.4333 96.0000 98.4000
Rank 4, 3, 7, 6, 9, 8, 11, 10, 5, 2, 1
Glass78.1746 71.2698 73.5714 72.3810 70.0794 77.3016 72.6190 74.2063 72.1429 72.4603 82.6984
Rank 2, 10, 5, 8, 11, 3, 6, 4, 9, 7, 1
HeartEW77.8395 81.7284 87.6543 84.0123 89.5679 82.2222 78.5802 77.1605 87.2840 83.8889 91.9136
Rank 10, 8, 3, 5, 2, 7, 9, 11, 4, 6, 1
Landsat90.3833 89.4500 89.3167 90.6083 89.2417 90.3417 87.4333 87.2000 89.5917 91.0417 90.7000
Rank 4, 7, 8, 3, 9, 5, 10, 11, 6, 1, 2
Lymphography83.2184 91.7241 89.0805 86.3218 90.8046 86.3218 84.2529 88.2759 93.7931 93.5632 96.2069
Rank 11, 4, 6, 8, 5, 8, 10, 7, 2, 3, 1
Zoo94.3333 99.6667 96.5000 97.3333 98.0000 97.3333 90.0000 94.6667 95.6667 96.8333 98.5000
Rank 10, 1, 7, 4, 3, 4, 11, 9, 8, 6, 2
WDBC96.5782 93.3038 95.9292 92.2714 94.1003 94.0708 95.9587 93.3628 95.2212 94.3068 97.0796
Rank 2, 10, 4, 11, 7, 8, 3, 9, 5, 6, 1
SonarEW90.2439 95.2846 95.4472 88.0488 89.9187 88.9431 90.8130 88.6992 95.6911 93.0081 100.0000
Rank 7, 4, 3, 11, 8, 9, 6, 10, 2, 5, 1
Libras84.5833 84.3519 85.0463 89.4444 81.5741 78.5648 83.5648 80.5556 85.8333 83.7037 92.4537
Rank 5, 6, 4, 2, 9, 11, 8, 10, 3, 7, 1
Spectf92.8302 93.1447 92.9560 88.9937 84.7170 88.4906 86.2893 85.7862 92.5157 94.8428 95.5975
Rank 5, 3, 4, 7, 11, 8, 9, 10, 6, 2, 1
MUSK93.5088 96.1053 95.0526 95.4737 95.2982 92.6316 86.9474 93.8596 96.8070 94.2807 99.2281
Rank 9, 3, 6, 4, 5, 10, 11, 8, 2, 7, 1
Mean Rank6.3889 4.2778 5.7778 5.6111 7.7778 7.2778 8.4444 8.7222 5.1667 4.9444 1.5556
Final Rank 7, 2, 6, 5, 9, 8, 10, 11, 4, 3, 1
Table 20. The mean size of feature subsets selected by the comparison algorithms.
DatasetsPSODEGWOMVOWOAABOBOAHHOEOGOCODGBGO
IonosphereEW10.2333 13.0000 4.9667 11.4667 3.8667 6.2333 6.3333 5.0000 3.1333 5.5667 4.5333
Rank 9, 11, 4, 10, 2, 7, 8, 5, 1, 6, 3
Breastcancer2.8333 3.1000 2.9000 3.2333 2.2667 2.9667 2.6667 2.8667 2.7333 2.3667 3.0000
Rank 5, 10, 7, 11, 1, 8, 3, 6, 4, 2, 9
BreastEW9.1000 12.5000 6.8000 11.7667 7.4667 8.3667 7.8333 9.0667 6.1667 9.5667 7.3667
Rank 8, 11, 2, 10, 4, 6, 5, 7, 1, 9, 3
Congress3.0000 3.3000 2.1000 3.7333 1.2000 2.3667 3.5000 2.1333 1.1333 1.6333 1.0000
Rank 8, 9, 5, 11, 3, 7, 10, 6, 2, 4, 1
Wine4.5333 5.3333 4.0000 3.8333 3.8000 3.8667 4.6333 4.4000 3.2333 4.0667 3.3667
Rank 9, 11, 6, 4, 3, 5, 10, 8, 1, 7, 2
Vote3.2333 4.3000 1.0000 3.3333 1.2333 1.0333 3.3333 3.2333 1.0000 3.9333 1.0667
Rank 6, 11, 1, 8, 5, 3, 8, 6, 1, 10, 4
Vehicle7.5667 8.3000 4.8000 7.2667 6.5667 5.7333 5.7667 7.6000 5.1333 7.0667 5.9333
Rank 9, 11, 1, 8, 6, 3, 4, 10, 2, 7, 5
Exactly5.6667 7.3667 3.4667 5.1333 2.2333 3.4333 1.6333 2.2000 5.2000 5.4667 5.9667
Rank 9, 11, 5, 6, 3, 4, 1, 2, 7, 8, 10
Glass4.2667 3.5667 3.3000 4.1000 2.3333 3.8333 3.8333 4.3000 4.0000 3.2333 3.0333
Rank 10, 5, 4, 9, 1, 6, 6, 11, 8, 3, 2
HeartEW3.8000 5.3000 2.8667 4.1000 5.0000 4.2667 3.6667 3.0667 3.6000 3.8000 4.0333
Rank 5, 11, 1, 8, 10, 9, 4, 2, 3, 5, 7
Landsat12.5333 13.5000 8.4333 12.8000 8.5000 10.4667 9.5000 10.2333 8.0333 11.3667 9.8000
Rank 9, 11, 2, 10, 3, 7, 4, 6, 1, 8, 5
Lymphography6.8333 8.0000 3.5667 6.3000 5.8333 6.6000 5.8333 4.6667 5.4333 6.3000 5.5333
Rank 10, 11, 1, 7, 5, 9, 5, 2, 3, 7, 4
Zoo5.8333 7.6000 5.6333 7.2000 7.8000 5.8667 6.7667 7.3333 5.7000 5.2667 6.1000
Rank 4, 10, 2, 8, 11, 5, 7, 9, 3, 1, 6
WDBC6.5667 8.2333 2.7333 7.4333 3.3333 2.0333 6.3333 5.5000 2.9333 3.5667 3.9000
Rank 9, 11, 2, 10, 4, 1, 8, 7, 3, 5, 6
SonarEW20.8333 29.8333 11.9333 22.8667 13.2000 17.8667 17.0333 16.7000 9.4667 15.8333 11.5000
Rank 9, 11, 3, 10, 4, 8, 7, 6, 1, 5, 2
Libras34.4000 51.3667 21.1667 38.3667 20.3667 23.0667 22.3333 21.0667 14.3667 28.5000 21.0000
Rank 9, 11, 5, 10, 2, 7, 6, 4, 1, 8, 3
Spectf18.8333 23.0000 12.8333 18.7000 5.7333 7.8000 11.0000 8.3667 11.2667 15.5333 15.5000
Rank 10, 11, 6, 9, 1, 2, 4, 3, 5, 8, 7
MUSK72.2667 104.7333 39.2000 72.3333 65.2000 49.2667 41.3667 56.7000 37.9333 66.0333 49.2000
Rank 9, 11, 2, 10, 7, 5, 3, 6, 1, 8, 4
Mean Rank7.4444 9.3889 4.1667 7.6667 4.1667 4.7778 5.5000 5.2222 2.3889 5.5000 4.0000
Final Rank 9, 11, 3, 10, 3, 5, 7, 6, 1, 7, 2
Table 21. Mean runtime of feature selection based on different algorithms.
DatasetsPSODEGWOMVOWOAABOBOAHHOEOGOCODGBGO
IonosphereEW3.3108 3.1722 3.3264 3.1784 2.9800 2.1202 3.2604 4.9315 3.3270 6.5616 6.6642
Breastcancer3.6709 3.5429 3.4315 3.4680 3.0807 2.2683 3.4019 5.7813 3.4697 6.8237 6.9120
BreastEW3.3533 3.2018 3.5022 3.2243 3.1798 2.2524 3.3439 5.2125 3.4673 6.6059 6.9163
Congress3.3494 3.2961 3.2631 3.3033 2.6290 2.0781 3.2823 4.5191 3.1994 6.5350 6.4676
Wine3.1881 3.1710 3.1694 3.1464 2.9201 2.1440 3.1045 4.6576 3.1517 6.3121 6.3179
Vote3.3267 3.3109 3.2309 3.2795 2.6670 2.0366 3.2974 4.6792 3.3187 6.5316 6.6233
Vehicle3.6556 3.6239 3.5830 3.6427 3.3595 2.4563 3.5264 5.4494 3.5911 7.4301 7.4114
Exactly3.7490 3.7573 3.6516 3.7003 3.0622 2.2741 3.5822 5.0219 3.6353 7.3639 7.6153
Glass3.2112 3.1554 3.1348 3.1633 2.7608 2.1511 2.9145 4.8530 3.1383 6.2397 6.2764
HeartEW3.2382 3.2001 3.2068 3.1889 2.9947 2.1126 3.0886 4.7290 3.2077 6.4120 6.5637
Landsat4.3829 4.4849 4.6258 4.2721 4.2646 2.9833 4.4782 7.0426 4.6445 8.6532 9.0210
Lymphography3.1431 3.1904 3.1270 3.1329 2.8913 2.1470 3.1233 4.5939 3.1329 6.2383 6.2533
Zoo3.2060 3.0961 3.0762 3.0675 2.9377 2.0992 3.1017 4.8683 3.2445 6.1870 6.2997
WDBC3.3966 3.2996 3.3730 3.3750 2.9942 2.0821 3.3059 5.0062 3.3910 6.8395 6.8839
SonarEW3.0436 3.0531 3.0037 3.0109 2.9500 2.1247 3.0362 4.6112 3.0677 6.0004 6.1562
Libras3.2948 3.3294 3.1570 3.2474 3.0417 2.2046 3.1727 4.8582 3.1632 6.3859 6.3510
Spectf3.0568 3.0503 3.0780 3.0571 2.8979 2.0055 3.0874 4.7757 3.1155 5.9824 6.1543
MUSK3.6771 3.8192 3.3378 3.5578 3.3299 2.3570 3.3974 5.3402 3.3292 7.1061 6.8958
Mean time3.4030 3.3753 3.3488 3.3342 3.0523 2.2165 3.3058 5.0517 3.3664 6.6782 6.7657
Mean Rank 8, 7, 5, 4, 2, 1, 3, 9, 6, 10, 11
Table 22. The optimal solution of the proposed CODGBGO and comparative algorithms for the 10-bar truss design problem.
Algorithms X Optimal
x 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9 x 10
GO0.003526 0.001473 0.003511 0.001466 0.000065 0.000456 0.002384 0.002362 0.001224 0.001250 524.4727
INFO0.003560 0.001477 0.003443 0.001485 0.000065 0.000455 0.002361 0.002359 0.001228 0.001283 524.5944
SSA0.003571 0.001596 0.003558 0.001489 0.000065 0.000447 0.002432 0.002200 0.001206 0.001235 525.0305
TLBO0.003493 0.001466 0.003537 0.001454 0.000065 0.000457 0.002394 0.002368 0.001233 0.001243 524.4853
BESD0.003514 0.001515 0.003449 0.001451 0.000115 0.000455 0.002411 0.002330 0.001239 0.001297 526.6039
DE0.003536 0.001470 0.003509 0.001511 0.000065 0.000454 0.002315 0.002394 0.001233 0.001245 524.5510
PSO0.003578 0.001473 0.003455 0.001521 0.000065 0.000453 0.002297 0.002400 0.001267 0.001224 524.5765
SO0.003552 0.001476 0.003445 0.001433 0.000065 0.000458 0.002433 0.002334 0.001236 0.001266 524.5727
FOPSO0.003599 0.001420 0.003452 0.001539 0.000065 0.000455 0.002234 0.002503 0.001292 0.001176 524.7809
ICGWO0.003211 0.001594 0.003734 0.001469 0.000065 0.000495 0.002890 0.002145 0.001243 0.001277 538.3172
CODGBGO0.003515 0.001471 0.003511 0.001472 0.000065 0.000456 0.002368 0.002371 0.001244 0.001241 524.4511
Table 23. Statistical results of the proposed CODGBGO and comparative algorithms for the 10-bar truss design problem.
AlgorithmsBestWorstMeanStd
GO524.472692 530.626625 524.972147 1.205804
INFO524.594363 536.501563 527.520947 3.520433
SSA525.030465 536.997795 529.502118 3.881703
TLBO524.485260 530.858513 525.462996 2.132384
BESD526.603949 531.093229 529.159895 1.163149
DE524.550988 525.206518 524.754413 0.154388
PSO524.576532 533.737282 528.496080 3.450720
SO524.572671 535.307127 527.344681 2.970005
FOPSO 524.780922 537.102061 529.847788 3.819223
ICGWO 541.278051 573.614056 552.856617 7.635963
CODGBGO524.451125 524.602817 524.463803 0.034718
Table 24. The optimal solution of the proposed CODGBGO and comparative algorithms for the tension/compression spring design.
Algorithms X Optimal
x 1 x 2 x 3
GO0.051610 0.354815 11.401536 0.012665
INFO0.051879 0.361303 11.025125 0.012666
SSA0.050000 0.315802 14.245195 0.012826
TLBO0.051520 0.352659 11.531001 0.012666
BESD0.051768 0.358525 11.191061 0.012674
DE0.051696 0.356874 11.279811 0.012665
PSO0.051861 0.360861 11.050136 0.012666
SO0.051636 0.355443 11.364087 0.012665
FOPSO 0.051654 0.355874 11.338620 0.012665
ICGWO 0.050000 0.317212 14.230810 0.012872
CODGBGO0.051701 0.356996 11.272677 0.012665
Table 25. Statistical results of the proposed CODGBGO and comparative algorithms for the tension/compression spring design.
AlgorithmsBestWorstMeanStd
GO0.012665 0.012742 0.012672 1.423895 × 10−5
INFO0.012666 0.012734 0.012686 1.642346 × 10−5
SSA0.012728 0.013977 0.012920 2.356985 × 10−4
TLBO0.012666 0.012732 0.012683 1.585319 × 10−5
BESD0.012674 0.012792 0.012727 2.427097 × 10−5
DE0.012665 0.012666 0.012665 2.013640 × 10−7
PSO0.012666 0.014329 0.013268 5.261813 × 10−4
SO0.012665 0.013907 0.012835 2.417478 × 10−4
FOPSO0.012665 0.014304 0.013264 5.067277 × 10−4
ICGWO0.013111 0.016786 0.014086 1.306659 × 10−3
CODGBGO0.012665 0.012676 0.012666 1.957407 × 10−6
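The tabulated optimum can be sanity-checked against the standard tension/compression spring formulation (x1 = wire diameter d, x2 = mean coil diameter D, x3 = number of active coils N). The objective and g_i(x) ≤ 0 constraints below are the canonical benchmark definition from the literature, not quoted from this paper's text; the loose tolerance on the active constraints absorbs rounding of the tabulated solution:

```python
def spring_weight(d: float, D: float, N: float) -> float:
    # Minimize spring wire volume: (N + 2) * D * d^2
    return (N + 2.0) * D * d**2

def spring_constraints(d: float, D: float, N: float) -> list:
    # Canonical g_i(x) <= 0 set: deflection, shear stress, surge frequency, geometry
    return [
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
        + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ]

x = (0.051701, 0.356996, 11.272677)  # CODGBGO solution from Table 24
print(round(spring_weight(*x), 6), max(spring_constraints(*x)) < 1e-3)
```

Evaluating the objective at the reported solution reproduces the 0.012665 value in Table 24, with the deflection and stress constraints essentially active at the optimum.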
Table 26. The optimal solution of the proposed CODGBGO and comparative algorithms for the weight minimization of a speed reducer.
Algorithms X Optimal
x 1 x 2 x 3 x 4 x 5 x 6 x 7
GO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
INFO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
SSA3.500072 0.700000 17.000000 7.660034 8.087044 3.358114 5.286784 3007.818687
TLBO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
BESD3.503719 0.700002 17.001660 7.340884 7.755843 3.351224 5.288070 2998.504231
DE3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424479
PSO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
SO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
FOPSO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
ICGWO3.600000 0.700000 17.000000 8.300000 8.300000 3.353812 5.500000 3197.929838
CODGBGO3.500000 0.700000 17.000000 7.300000 7.715320 3.350541 5.286654 2994.424466
Table 27. Statistical results of the proposed CODGBGO and comparative algorithms for the weight minimization of a speed reducer.
AlgorithmsBestWorstMeanStd
GO2994.424466 2994.424466 2994.424466 1.850085 × 10−12
INFO2994.424466 2994.424466 2994.424466 1.850085 × 10−12
SSA3007.818687 3124.932806 3035.033900 2.585991 × 101
TLBO2994.424466 2994.424466 2994.424466 1.850085 × 10−12
BESD2998.504231 3005.944928 3002.199777 1.751670 × 100
DE2994.424479 2994.424576 2994.424525 3.035569 × 10−5
PSO2994.424466 2994.424466 2994.424466 1.850085 × 10−12
SO2994.424466 2996.185353 2994.499642 3.245885 × 10−1
FOPSO2994.424466 3033.701596 2995.733703 7.050461 × 100
ICGWO3218.393654 3218.393654 3218.393654 4.547474 × 10−13
CODGBGO2994.424466 2994.424466 2994.424466 1.850085 × 10−12
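The shared optimum in Table 26 can be checked against the Golinski speed-reducer objective. Coefficient conventions vary slightly between papers (7.477 versus 7.4777 in the third term); the 7.477 variant, assumed below, reproduces the 2994.4245 value reported here:

```python
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Golinski speed-reducer objective (gear-train weight).
    The 7.477 coefficient follows the variant whose optimum is ~2994.4245."""
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.477 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

x = (3.5, 0.7, 17.0, 7.3, 7.71532, 3.350541, 5.286654)  # shared optimum from Table 26
print(round(reducer_weight(*x), 4))
```

Most algorithms converge to this same point, which is why the best/worst/mean columns in Table 27 coincide for GO, INFO, TLBO, PSO, and CODGBGO.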
Table 28. The optimal solution of the proposed CODGBGO and comparative algorithms for the welded beam design.
Algorithms X Optimal
x 1 x 2 x 3 x 4
GO 0.198832 3.337365 9.192024 0.198832 1.670218
INFO 0.198832 3.337365 9.192024 0.198832 1.670218
SSA 0.196408 3.378769 9.210326 0.198747 1.674474
TLBO 0.198832 3.337365 9.192024 0.198832 1.670218
BESD 0.198250 3.368697 9.188929 0.199399 1.677316
DE 0.198832 3.337367 9.192024 0.198832 1.670218
PSO 0.198832 3.337365 9.192024 0.198832 1.670218
SO 0.198832 3.337365 9.192024 0.198832 1.670218
FOPSO 0.198832 3.337365 9.192024 0.198832 1.670218
ICGWO 0.198308 3.457363 9.478214 0.201282 1.752510
CODGBGO 0.198832 3.337365 9.192024 0.198832 1.670218
Table 29. Statistical results of the proposed CODGBGO and comparative algorithms for the welded beam design.
AlgorithmsBestWorstMeanStd
GO1.670218 1.670218 1.670218 1.086776 × 10−8
INFO1.670218 1.670218 1.670218 1.129203 × 10−15
SSA1.674474 1.831619 1.749027 5.575776 × 10−2
TLBO1.670218 1.670218 1.670218 1.129203 × 10−15
BESD1.677316 1.691348 1.683443 3.948120 × 10−3
DE1.670218 1.670219 1.670218 2.127735 × 10−7
PSO1.670218 1.711166 1.672234 8.154641 × 10−3
SO1.670218 1.679417 1.670570 1.673049 × 10−3
FOPSO1.670218 2.330363 1.860656 2.038525 × 10−1
ICGWO1.752510 2.140073 1.888851 7.733192 × 10−2
CODGBGO1.670218 1.670218 1.670218 9.433468 × 10−12
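The welded-beam optimum in Table 28 can likewise be verified against the canonical fabrication-cost objective (x1 = weld height h, x2 = weld length l, x3 = bar height t, x4 = bar thickness b); the constraint set is omitted here, and the coefficients come from the standard benchmark, not from this paper's text:

```python
def beam_cost(h: float, l: float, t: float, b: float) -> float:
    """Standard welded-beam cost: weld material term + bar material term."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

x = (0.198832, 3.337365, 9.192024, 0.198832)  # shared optimum from Table 28
print(round(beam_cost(*x), 6))
```

Plugging in the tabulated design variables recovers the 1.670218 objective that most of the compared algorithms reach.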

Xie, R.; Yu, L.; Li, S.; Wu, F.; Zhang, T.; Yuan, P. Multi-Strategy-Improved Growth Optimizer and Its Applications. Axioms 2024, 13, 361. https://doi.org/10.3390/axioms13060361
