Article

Differential Evolution with Group-Based Competitive Control Parameter Setting for Numerical Optimization

School of Sciences, Xi’an Polytechnic University, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3355; https://doi.org/10.3390/math11153355
Submission received: 5 June 2023 / Revised: 16 July 2023 / Accepted: 28 July 2023 / Published: 31 July 2023
(This article belongs to the Special Issue Evolutionary Computation 2022)

Abstract:
Differential evolution (DE) is one of the most popular and widely used optimizers in the evolutionary computation community. Despite numerous works on improving DE performance, some defects remain, such as premature convergence and stagnation. To alleviate them, this paper presents a novel DE variant with a new mutation operator (named “DE/current-to-pbest_id/1”) and a new control parameter setting. In the new operator, the fitness value of each individual determines the range from which its guider is chosen within the population. Meanwhile, a group-based competitive control parameter setting is presented to ensure the varied search potential of the population and the adaptivity of the algorithm. In this setting, the whole population is randomly divided into multiple equal-sized groups, the control parameters for each group are independently generated from its location information, and the worst location information among all groups is competitively updated with the currently successful parameters. Moreover, a piecewise population size reduction mechanism is further devised to enhance the exploration and exploitation of the algorithm at the early and later evolution stages, respectively. Differing from previous DE versions, the proposed method adaptively adjusts the search capability of each individual, simultaneously utilizes multiple pieces of successful parameter information to generate the control parameters, and reduces the population size at different speeds at different search stages. In this way, it achieves a good trade-off between exploration and exploitation. Finally, the performance of the proposed algorithm is evaluated by comparison with five well-known DE variants and five typical non-DE algorithms on the IEEE CEC 2017 test suite. Numerical results show that the proposed method is a more promising optimizer.

1. Introduction

Differential Evolution (DE), proposed by Storn and Price [1] in 1995, is a population-based stochastic search algorithm. Like other evolutionary algorithms, the DE algorithm contains mutation, crossover, and selection operations. Due to its simple structure and excellent performance, the DE algorithm has been widely studied and used in many scientific and engineering fields, such as engineering scheduling problems [2,3,4], resource allocation problems in mobile communication [5], UAV path planning problems [6], biomedical problems [7], and so on. However, the performance of the DE algorithm is still challenged by increasingly complex optimization problems, which usually contain a large number of local optima, so that premature convergence and stagnation easily occur.
As pointed out in Ref. [8], the performance of DE mainly depends on its mutation strategy and control parameter setting. In fact, the mutation strategy plays an important role in the search ability of DE over the search space during the evolution process, and the control parameter setting can adjust its search characteristics. Consequently, a series of works over the past decades have improved DE performance by enhancing the mutation operation [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41] and the control parameter setting [9,28,30,42,43,44,45,46,47,48,49]. In particular, previous mutation improvements have typically been achieved by designing a new mutation operator [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26] or integrating multiple mutation strategies [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. With respect to generating a new mutation operator, one important and popular approach is to exploit the underlying information of the population to construct the search direction, thus enhancing the search effectiveness and efficiency of the algorithm. On the other hand, combined mutation strategies often employ multiple operators with various search characteristics to balance the exploration and exploitation of the algorithm. It should be mentioned that, compared to combined mutation approaches, developing a new operator requires neither determining a candidate mutation operator pool in advance nor considering a selection rule for candidate operators, both of which are often very difficult to decide. Thus, it is desirable to develop a more promising new mutation operator.
Up to now, through fully exploiting the underlying information of the population, many mutation operators have been presented [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. For example, to balance well the exploration and exploitation of the algorithm, based on the scheme of “DE/current-to-best/1”, which makes the algorithm easily fall into the local optimum, Zhang et al. [9] proposed a new mutation operator, named “DE/current-to-pbest/1”, where the top p% individuals of the population are randomly used to guide the search and an external archive is used to store historical inferiority solutions to enhance the diversity of the population. To fully exploit the advantage of individuals, Gong and Cai [10] proportionally selected the parents for the mutation process based on the fitness values of individuals (rank-DE), and Wang et al. [11] simultaneously employed the fitness value and diversity of individuals to determine their probability of participating into the mutation. By employing the comprehensive ranking of individuals based on both fitness and diversity, Cheng et al. [24] proposed a mutation operator (FDDE). In order to promote the optimization ability of DE in solving complicated optimization problems, Liu et al. [25] devised a simple yet effective mutation scheme by using a non-linear selection probability for each individual, while by utilizing the population’s holistic information and the direction of differential vector, an enhanced mutation operator was designed to avoid the issue of local optimal trapping [26]. Although the above methods can effectively improve the performance of DE, the special characteristic of the individual is rarely considered to determine its guider under the framework of DE/current-to-guiding/1, which might be helpful to effectively adjust the search requirements of different individuals. Thus, it is desirable to devise a more adaptive and promising mutation operator for DE.
Moreover, for the existing control parameter settings, one of the most successful and attractive approaches is to design an adaptive control mechanism, which can adaptively adjust the search capability of the algorithm. To do this, numerous adaptive methods have been developed over recent decades [9,28,30,42,43,44,45,46,47,48,49]. For example, to ensure the robustness of the algorithm, Liu et al. [42] used a fuzzy logic controller to tune the control parameters of mutation and crossover operations, and then proposed a fuzzy adaptive differential evolution algorithm. By integrating the scale factor with individuals, Brest et al. [43] proposed another self-adapting parameter control strategy (jDE), where the successful parameter shall be continuously entered into the next generation and the failure one will be randomly recreated. Meanwhile, by making full use of the search information of the population, Zhang et al. [9] presented an adaptive parameter setting, and Tanabe et al. [44] further introduced the historical success information of the population to create the scale factor and crossover rate for each individual, and put forward a history-based adaptive approach (SHADE). On the other hand, for the population size, Tanabe et al. [45] presented a linear reduction mechanism to dynamically adjust the population during the whole evolutionary process. Further, by using the performance of the best individual, Xia et al. [48] developed an adaptive control mechanism to increase or decrease the population. Moreover, to adaptively adjust the population diversity and balance the exploration and exploitation of the algorithm, Zeng et al. [49] further proposed a sawtooth-linear population size adaptive method to maintain the search ability of the algorithm. 
Even though the methods above can effectively strengthen the adaptivity and robustness of the algorithm for different search states and problems, those based on historical search information usually set the control parameters for the whole population; consequently, when biased parameters happen to obtain successful offspring, they can seriously degrade the performance of the algorithm. Meanwhile, despite the many adaptive mechanisms that have been designed for the population size and have promoted the performance of DE, they often adopt a single rule to adjust the population size, and some even require additionally measuring the search state of the population, which may not fully satisfy the varying search needs of the algorithm at different evolutionary stages and may seriously increase the computational burden. Therefore, it is necessary to make further efforts toward developing a more effective parameter control setting.
Based on the considerations above, in this paper, we propose a new DE algorithm, named differential evolution with group-based competitive control parameter setting (GCIDE). Specifically, the main contributions of this paper are as follows.
(1)
In order to make full use of the fitness values of individuals to adjust their search capability, under the framework of “DE/current-to-guiding/1”, a new mutation operator named “DE/current-to-pbest_id/1” is first proposed. In the new operator, each individual’s fitness value is employed to determine the range from which its associated guider is chosen within the whole population, i.e., to decide the parameter p in “DE/current-to-pbest/1” for each target individual. Compared to the existing related methods [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26], the proposed operator adaptively assigns a suitable guider to each individual based on its particular characteristics. This new operator can therefore effectively satisfy the search requirements of different individuals and enhance the search effectiveness of the algorithm.
(2)
To alleviate the degradation of DE performance caused by successful biases, a group-based competitive parameter control method is also proposed to create the scale factors and crossover rates for the population. In this method, the population is first randomly and equally divided into k groups, the control parameters for the individuals in each group are independently generated in an adaptive manner, and the location parameters with the worst performance are competitively updated using the successful parameters of the current population. Differing from the related adaptive methods [42,43,44,45], the proposed setting can simultaneously use multiple pieces of historical successful information to independently generate the control parameters for the population, and the worst piece is dynamically replaced by the current successful information during the whole evolutionary process. The developed setting is therefore capable of not only avoiding being misled by biases, but also maintaining the adaptive ability of the algorithm.
(3)
In order to further meet the different search needs of the algorithm at different search stages, by designing and integrating two different non-linear schemes, a piecewise population reduction mechanism is presented to dynamically adjust the population size during the evolutionary process. Unlike previous approaches [46,47,48,49], this new method adopts non-linear formulas with a slower and a quicker reduction speed at the early and later stages, respectively, and when a smaller population size is generated, the corresponding worst individuals are removed from the current population. Thereby, this mechanism can more effectively enhance the exploration of the algorithm at the early search stage and the exploitation at the later one.
Finally, to verify the performance of the GCIDE, numerical experiments are conducted by comparing with five well-known DE variants and five other non-DE heuristic algorithms on the 30 benchmark functions from IEEE CEC 2017 [50] and three real applications. Numerical results show that the GCIDE has a more competitive performance.
The rest of this paper is organized as follows. In Section 2, the related work is simply described. In Section 3, we present the proposed GCIDE algorithm in detail. In Section 4, experimental results of the proposed algorithm are given. Finally, Section 5 concludes this paper.

2. Related Work

In this section, we shall introduce the classical differential evolution algorithm and the typical previous works, respectively.

2.1. Classical Differential Evolution

Herein, the four basic operations of the classical differential evolution algorithm are described in detail, including initialization, mutation, crossover, and selection [1].
First, a population with NP solutions, $S^G = \{X_i^G : i = 1, 2, \ldots, NP\}$, is randomly generated in the search space, where $X_i^G = (x_{i,1}^G, x_{i,2}^G, \ldots, x_{i,D}^G)$ denotes the i-th individual in $S^G$ and G is the current generation number. In detail, the j-th dimension $x_{i,j}^G$ of $X_i^G$ can be generated by
$$x_{i,j}^G = L_j + (U_j - L_j) \cdot rand. \tag{1}$$
Here, $L_j$ and $U_j$ denote the lower and upper boundaries of the j-th dimension of the decision space, respectively, and rand is a uniformly distributed random number in [0, 1].
After the initialization, for each target individual $X_i^G$, its corresponding mutant vector $V_i^G$ is generated by a specific mutation operation. Herein, four commonly used mutation operations are described as follows:
DE/rand/1:
$$V_i^G = X_{r_1}^G + F\,(X_{r_2}^G - X_{r_3}^G), \tag{2}$$
DE/best/1:
$$V_i^G = X_{best}^G + F\,(X_{r_1}^G - X_{r_2}^G), \tag{3}$$
DE/rand-to-best/1:
$$V_i^G = X_{r_1}^G + F\,(X_{best}^G - X_{r_1}^G) + F\,(X_{r_2}^G - X_{r_3}^G), \tag{4}$$
DE/current-to-best/1:
$$V_i^G = X_i^G + F\,(X_{best}^G - X_i^G) + F\,(X_{r_1}^G - X_{r_2}^G). \tag{5}$$
Herein, F is a scaling factor in [0, 1], $r_1, r_2, r_3 \in \{1, 2, \ldots, NP\}$ are three randomly selected, mutually exclusive indexes satisfying $r_1 \neq r_2 \neq r_3 \neq i$, and $X_{best}^G$ denotes the best individual of the population at generation G as evaluated by its fitness value.
In order to prevent the mutant individuals from exceeding the search space while suitably handling each variable according to its search characteristic, the following constraint-handling technique from [9] is adopted here. Specifically, if $v_{i,j}^G$ is beyond the feasible region $[L_j, U_j]$, it is readjusted into the feasible region by
$$v_{i,j}^G = \begin{cases} \min\{U_j,\; 2L_j - v_{i,j}^G\}, & \text{if } v_{i,j}^G < L_j,\\ \max\{L_j,\; 2U_j - v_{i,j}^G\}, & \text{if } v_{i,j}^G > U_j. \end{cases} \tag{6}$$
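As a sketch, the reflection rule of Equation (6) can be implemented as follows. This is a minimal NumPy version; the function name `repair_bounds` is our own, and `L`/`U` may be scalars or per-dimension arrays via broadcasting.

```python
import numpy as np

def repair_bounds(v, L, U):
    """Reflect out-of-bound mutant components back into [L, U],
    following the constraint-handling rule of Equation (6)."""
    v = np.asarray(v, dtype=float)
    low = v < L                       # components below the lower bound
    high = v > U                      # components above the upper bound
    v = np.where(low, np.minimum(U, 2 * L - v), v)   # reflect off L, cap at U
    v = np.where(high, np.maximum(L, 2 * U - v), v)  # reflect off U, cap at L
    return v
```

For example, with bounds [0, 1], a component at −0.5 is reflected to 0.5, while a far-out component at −3.0 would reflect past the upper bound and is therefore capped at 1.0.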
Subsequently, the trial vector $U_i^G = (u_{i,1}^G, u_{i,2}^G, \ldots, u_{i,D}^G)$ is generated by a crossover operation between the target vector $X_i^G = (x_{i,1}^G, x_{i,2}^G, \ldots, x_{i,D}^G)$ and the mutant vector $V_i^G = (v_{i,1}^G, v_{i,2}^G, \ldots, v_{i,D}^G)$. The binomial crossover operation is as follows:
$$u_{i,j}^G = \begin{cases} v_{i,j}^G, & \text{if } rand \le Cr \text{ or } j = j_{rand},\\ x_{i,j}^G, & \text{otherwise}. \end{cases} \tag{7}$$
Here, $Cr \in [0, 1]$ is the crossover probability, and $j_{rand}$ is a randomly generated index in $[1, D]$.
Finally, the target vector is compared with the trial vector according to their fitness values, and the one with the better fitness value survives to the next generation. The greedy selection operation is as follows:
$$X_i^{G+1} = \begin{cases} U_i^G, & \text{if } f(U_i^G) \le f(X_i^G),\\ X_i^G, & \text{otherwise}. \end{cases} \tag{8}$$
Here, $f(U_i^G)$ and $f(X_i^G)$ are the fitness values of $U_i^G$ and $X_i^G$, respectively.
Noticeably, once the DE algorithm is called, the initialization will be first executed, and then the mutation, crossover, and selection operations are repeated until the pre-given termination criterion is satisfied.
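Putting the four operations together, the classical DE loop can be sketched as below. This is a minimal illustrative implementation (DE/rand/1 with binomial crossover); the sphere objective, the parameter values, and the simple clipping-based bound repair are our own illustrative choices, not prescriptions from the original DE paper.

```python
import numpy as np

def classic_de(f, L, U, D=10, NP=40, F=0.5, Cr=0.9, max_gen=200, seed=1):
    """Minimal classical DE: initialization (Eq. 1), DE/rand/1 mutation
    (Eq. 2), binomial crossover (Eq. 7), greedy selection (Eq. 8)."""
    rng = np.random.default_rng(seed)
    X = L + (U - L) * rng.random((NP, D))       # Eq. (1): random initialization
    fit = np.array([f(x) for x in X])
    for _ in range(max_gen):
        for i in range(NP):
            # three distinct indexes, all different from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    size=3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])     # Eq. (2): DE/rand/1
            v = np.clip(v, L, U)                # simple bound repair (clipping)
            jrand = rng.integers(D)
            mask = rng.random(D) <= Cr
            mask[jrand] = True                  # Eq. (7): binomial crossover
            u = np.where(mask, v, X[i])
            fu = f(u)
            if fu <= fit[i]:                    # Eq. (8): greedy selection
                X[i], fit[i] = u, fu
    return X[fit.argmin()], fit.min()

best_x, best_f = classic_de(lambda x: float(np.sum(x**2)), -5.0, 5.0)
```

With this budget (40 individuals, 200 generations) the sphere function is minimized close to zero, illustrating the basic convergence behavior described above.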

2.2. Related Research

As is well known, the performance of DE is heavily related to its mutation operation and control parameter setting. As a result, various mutation strategies and control parameter settings have been presented to enhance the search ability of DE [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49]. Particularly, with respect to the mutation scheme, the information of individuals or the population has been fully exploited to design improved mutation operators [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. Concretely, by randomly choosing a guider for each individual from the top p% individuals, Zhang et al. [9] proposed the mutation operator “DE/current-to-pbest/1”. Through employing the fitness values of individuals to decide the parents in the mutation process, Gong and Cai [10] presented a rank-based mutation operator. Meanwhile, by simultaneously considering the fitness value and diversity of individuals, Wang et al. [11] put forward an enhanced mutation operation. Similarly, to avoid all individuals in the population having an equal chance of being selected as parents, Cai et al. [13] combined the fitness and position information of the population to choose the parents in the mutation. By utilizing both the fitness value and decision-space information of the population, Sharifi-Noghabi et al. [15] also designed a union-based mutation operator. By randomly selecting three individuals from the population and using the best one as the base vector and the difference vectors as the perturbation term, Mohamed and Suganthan [16] devised a triangular mutation operator. By utilizing both the fitness value and novelty of individuals, Xia et al. [18] also proposed a new parent selection method for the mutation of DE. To obtain a better perception of the objective landscape, Meng et al. [21] introduced a depth information-based external archive and used it to create the difference vector in the mutation process. By using the best individual and a randomly selected individual to generate the base vector, Ma and Bai [22] presented a novel mutation scheme. Moreover, via employing the comprehensive ranking of individuals based on both fitness and diversity, Cheng et al. [24] proposed a new mutation operator (FDDE). Liu et al. [25] devised a simple yet effective mutation scheme by using a non-linear selection probability for each individual, while Li et al. [26] developed an enhanced mutation operator to avoid local optimal trapping by utilizing the population’s holistic information and the direction of the differential vector.
On the other hand, for integrating the benefits of multiple mutation operators, many combined strategies were also designed [27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. For example, by using four different mutation operators as the mutation pool, and dynamically choosing one among them for each individual based on their historical successful ratio, Qin et al. [27] presented a self-adaptive selection mechanism. By simultaneously employing three mutation operators for every individual to generate offspring, Wang et al. [28] developed a composite mutation strategy. Gui et al. [30] randomly divided the whole population into multiple groups and assigned different mutation operators for each individual within one group based on its role. Li et al. [31] introduced reinforcement learning to dynamically provide the selection chance of each candidate mutation operator. By introducing the idea of a cultural algorithm and different mutation strategies into belief space, Deng et al. [35] proposed a new combined mutation strategy to balance the global exploration ability and local optimization ability. Moreover, for constraint optimization problems, Qiao et al. [38] presented a self-adaptive resources allocation-based mutation strategy by assigning a suitable operator to each individual based on its performance feedback. Besides, by hybridizing with an estimation of distribution algorithm, Li et al. [40] designed a hybrid DE variant, and by dividing the whole population into leader-adjoint populations and separately assigning one operator for each subpopulation, Li et al. [41] further proposed a combined way to integrate the advantages of several different operators.
Moreover, with respect to the control parameters involved in the DE algorithm, many adaptive approaches have also been presented over recent decades [9,28,30,42,43,44,45,46,47,48,49]. In particular, for the scale factor and crossover rate, by making full use of the successful parameters, Zhang et al. [9] presented an adaptive parameter setting, and Tanabe et al. [44] further introduced the historical success information of the population and put forward a history-based adaptive approach (SHADE). By introducing a fuzzy logic controller, Liu et al. [42] proposed a fuzzy adaptive parameter control method, and by integrating the scale factor with individuals and adaptively updating it according to the performance of its corresponding offspring, Brest et al. [43] proposed another self-adapting parameter control strategy (jDE). Moreover, for the population size, by linearly reducing the size of the population during the evolution process, Tanabe et al. [45] presented a linear reduction mechanism. Meanwhile, by fully using the diversity of the population to evaluate its search state, Poláková et al. [47] dynamically increased or decreased the population, and by fully using the performance of the best individual to evaluate the search state of the population, Xia et al. [48] designed an adaptive control mechanism to dynamically adjust the size of the population. Moreover, by introducing a sawtooth-linear function to calculate the size of the population, Zeng et al. [49] further proposed a sawtooth-linear population size adaptive method to maintain the search ability of the algorithm.
Notably, as described above, one can find that the special characteristics of each individual are rarely considered when determining its guider under the framework of DE/current-to-guiding/1; adaptive parameter settings based on historical search information are usually used to set the control parameters for the whole population; and existing population size control approaches often adopt a single rule to adjust the population size, sometimes additionally requiring a measurement of the population’s search state. Consequently, biased parameters that happen to obtain successful offspring cannot be effectively handled and can seriously degrade the performance of the algorithm, the varying search needs of the algorithm may not be fully satisfied at different evolutionary stages, and the computational cost of the algorithm might be considerably increased. Therefore, it is necessary to develop a more promising DE variant.

3. Proposed Algorithm

In this section, we provide a detailed description of the proposed algorithm (GCIDE), including a new mutation strategy, named DE/current-to-pbest_id/1, a group-based competitive control parameter setting, and a piecewise population size reduction mechanism.

3.1. New Mutation Strategy

As can be seen from the Introduction, the settings of p in existing versions of DE/current-to-pbest/1 are usually fixed or controlled by a formula related to the number of iterations. These settings may result in inefficient search and a risk of falling into local optima. To address this, a new mutation strategy named DE/current-to-pbest_id/1 is proposed in this paper as follows:
$$V_i^G = X_i^G + F\,(X_{pbest\_id}^G - X_i^G) + F\,(X_{r_1}^G - X_{r_2}^G), \tag{9}$$
where $X_{pbest\_id}^G$ is the guiding individual associated with the target individual, randomly selected from the top $p_i$ fraction of individuals in the current population.
From Equation (9), we can see that the selection of $X_{pbest\_id}^G$ depends on the value of p for every target individual. A smaller p makes the population converge quickly toward the better individuals; this can ensure the convergence speed of the algorithm to some extent, but increases the risk of falling into a local optimum. A larger p can ensure the diversity of the population during the evolutionary process to a certain extent, but suffers from slow convergence. Thus, using the fitness information of individuals, the following equation is designed to set the corresponding p of each individual:
$$p_i = 0.2 \times \frac{f_i - f_{\min}}{(f_{\max} - f_{\min}) + \sigma} + 0.11, \tag{10}$$
where $f_i$ is the fitness value of $X_i^G$, and $f_{\min}$ and $f_{\max}$ are the minimum and maximum fitness values in the population, respectively. $\sigma$ is a very small positive number (set to 0.01 here) that prevents $p_i$ from taking invalid values in later iterations, when $f_{\max} - f_{\min}$ approaches zero. From Equation (10), individuals with smaller fitness values obtain smaller $p_i$ and are thus more inclined to exploitation, while individuals with larger fitness values obtain larger $p_i$ and are more inclined to exploration. This setting can therefore make full use of the characteristics of different individuals and thus balance the exploration and exploitation of the algorithm.
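The operator of Equations (9) and (10) can be sketched as follows. This is an illustrative NumPy version under our own assumptions (e.g., the guider pool size is rounded to at least one individual, and minimization is assumed), not the authors’ reference code.

```python
import numpy as np

def pbest_id_mutation(X, fit, F, rng, sigma=0.01):
    """Sketch of DE/current-to-pbest_id/1: each individual's p is set from
    its normalized fitness (Eq. 10), then a guider is drawn uniformly from
    the top p_i fraction of the population and used in Eq. (9)."""
    NP = len(X)
    order = np.argsort(fit)                          # ascending: best first
    fmin, fmax = fit.min(), fit.max()
    p = 0.2 * (fit - fmin) / ((fmax - fmin) + sigma) + 0.11   # Eq. (10)
    V = np.empty_like(X)
    for i in range(NP):
        top = max(1, int(round(p[i] * NP)))          # size of the guider pool
        pbest = order[rng.integers(top)]             # random guider in top p_i
        r1, r2 = rng.choice([j for j in range(NP) if j != i],
                            size=2, replace=False)
        V[i] = X[i] + F * (X[pbest] - X[i]) + F * (X[r1] - X[r2])  # Eq. (9)
    return V
```

Note how a best-so-far individual ($f_i = f_{\min}$) gets $p_i \approx 0.11$ and therefore a small, exploitative guider pool, while the worst individual gets $p_i \approx 0.31$ and a larger, more explorative pool.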

3.2. Group-Based Competitive Control Parameter Setting

The most widely used methods for adjusting the control parameters are adaptive ones based on historical successful parameters. For example, in references [9,44], all the successful parameters of individuals are employed to update the distributions that generate the parameters of the next population. However, when some biased parameters happen to obtain successful offspring in the last iteration, they may mislead the creation of the next parameters, which are then applied to every individual. This can cause a serious degradation of the performance of the algorithm. To address this, a group-based competitive control parameter setting is proposed in this subsection.
In this new method, the population is first randomly divided into k equal-sized groups, and the parameters, including the crossover rate Cr and scaling factor F, of the individuals in each group are independently generated. Specifically, for the i-th individual $X_{i,k}^G$ in the k-th group, its corresponding crossover rate $Cr_i$ and scaling factor $F_i$ are generated by
$$\begin{cases} Cr_i = randn_i(\mu Cr_k,\, 0.1),\\ F_i = randc_i(\mu F_k,\, 0.1), \end{cases} \tag{11}$$
where $randn_i(\mu Cr_k, 0.1)$ denotes a normal distribution with mean $\mu Cr_k$ and standard deviation 0.1, and $randc_i(\mu F_k, 0.1)$ denotes a Cauchy distribution with location parameter $\mu F_k$ and scale parameter 0.1. At the beginning of the algorithm, we let $\mu Cr_i = \mu F_i = 0.5$ for $i = 1, 2, \ldots, k$. Notably, in order to avoid being misled by the bias occurring in the last iteration while still effectively ensuring the adaptivity of the algorithm, the historical successful parameters are still adopted here, but they are only used to update the values of $\mu Cr$ and $\mu F$ for the group with the worst performance. Specifically, the detailed update procedures of $\mu Cr$ and $\mu F$ can be, respectively, described as
$$\mu Cr_{idx}^{G+1} = \begin{cases} mean_{WL}(S_{Cr}), & \text{if } S_{Cr} \neq \emptyset \text{ and } \max\{S_{Cr}\} > 0,\\ 0, & \text{if } S_{Cr} \neq \emptyset \text{ and } \mu Cr_{idx}^{G} = 0,\\ \mu Cr_{idx}^{G}, & \text{otherwise}, \end{cases} \qquad mean_{WL}(S_{Cr}) = \frac{\sum_{s=1}^{|S_{Cr}|} w_s\, S_{Cr}^2(s)}{\sum_{s=1}^{|S_{Cr}|} w_s\, S_{Cr}(s)}, \quad w_s = \frac{\Delta f_s}{\sum_{s=1}^{|S_{Cr}|} \Delta f_s}, \tag{12}$$
$$\mu F_{idx}^{G+1} = \begin{cases} mean_{WL}(S_F), & \text{if } S_F \neq \emptyset,\\ \mu F_{idx}^{G}, & \text{otherwise}, \end{cases} \qquad mean_{WL}(S_F) = \frac{\sum_{s=1}^{|S_F|} w_s\, S_F^2(s)}{\sum_{s=1}^{|S_F|} w_s\, S_F(s)}, \quad w_s = \frac{\Delta f_s}{\sum_{s=1}^{|S_F|} \Delta f_s}. \tag{13}$$
Herein, $S_{Cr}$ and $S_F$ are the sets of successful Cr and F values, respectively, $\Delta f_s$ is the fitness deviation between the target individual $X_s$ and its offspring $U_s$, and idx denotes the index of the group with the worst performance, which is characterized by the success rate $r_j$ ($j = 1, 2, \ldots, k$) of the individuals in each group as
$$r_j = \begin{cases} \dfrac{ns_j^2}{ns\,(ns_j + nf_j)}, & \text{if } ns_j > 0,\\ \varepsilon, & \text{otherwise}, \end{cases} \qquad ns = \sum_{j=1}^{k} ns_j. \tag{14}$$
Here, $\varepsilon$ is a very small positive number (set to 0.01), $ns_j$ is the number of successful individuals in the j-th group, and $nf_j$ is the number of unsuccessful individuals in the j-th group. Note that if multiple groups share the same minimal success rate, the updated $\mu Cr$ and $\mu F$ are applied to one group randomly selected among them. Obviously, from Equations (11)–(14), one can see that the number of groups k can strongly affect the performance of the proposed setting. In fact, a larger k may cause overly outdated successful parameters to be stored for a long time and used to generate control parameters that no longer suit the search requirements of the current population, thus degrading the search effectiveness of the algorithm. Thereby, the value of k should not be too large, and we set k to 4 in this paper based on a series of tuning experiments, which can be found in Section 4.1.
As can be seen from the above, this proposed setting not only makes full use of the successful parameters to maintain the adaptivity of the algorithm under varying search states, but also avoids being misled by successful biases. Thus, it is capable of further enhancing the optimization ability of the algorithm.
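A minimal sketch of the setting described by Equations (11)–(14) is given below. The function names are our own, the weighted Lehmer mean follows Equations (12) and (13), and for brevity the terminal branch that locks $\mu Cr$ at zero and the random tie-breaking among equally bad groups are omitted; truncating F and Cr into valid ranges is a common DE practice assumed here.

```python
import numpy as np

def weighted_lehmer(S, df):
    """Weighted Lehmer mean of Eqs. (12)-(13), weighted by fitness gains df."""
    w = df / df.sum()
    return np.sum(w * S**2) / np.sum(w * S)

def sample_group_params(mu_Cr, mu_F, group_size, rng):
    """Eq. (11): per-group Cr ~ N(mu_Cr, 0.1), F ~ Cauchy(mu_F, 0.1)."""
    Cr = np.clip(rng.normal(mu_Cr, 0.1, group_size), 0.0, 1.0)
    F = mu_F + 0.1 * rng.standard_cauchy(group_size)
    F = np.clip(F, 1e-8, 1.0)        # assumed truncation into (0, 1]
    return Cr, F

def update_worst_group(mu_Cr, mu_F, S_Cr, S_F, df, rates):
    """Competitive update: only the group with the lowest success rate
    (Eq. 14) receives the weighted means of the currently successful
    parameters. mu_Cr, mu_F, S_Cr, S_F, df, rates are NumPy arrays."""
    idx = int(np.argmin(rates))      # worst-performing group
    if len(S_F) > 0:
        mu_F[idx] = weighted_lehmer(S_F, df)
    if len(S_Cr) > 0 and S_Cr.max() > 0:
        mu_Cr[idx] = weighted_lehmer(S_Cr, df)
    return mu_Cr, mu_F
```

The key point is that the successful parameters overwrite only the location information of the worst group, so the other groups keep their own, independently evolved distributions.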

3.3. A Piecewise Population Reduction Mechanism

In most DE variants, although their parameters are automatically adjusted, the population size often remains constant throughout the search process. In fact, smaller population sizes tend to lead to faster convergence, but increase the risk of convergence to a local optimum. On the other hand, a larger population size encourages a wide search, but tends to converge more slowly.
Based on this consideration, we further propose a deterministic rule-based population reduction mechanism herein, which prefers to fully explore the solution space at the early stage of evolution, while converging quickly to find the best solution at the later stage of evolution. In detail, the specific formula for computing the population size is as follows:
$$NP = \begin{cases} \dfrac{\frac{1}{3}NP_{init} - NP_{init}}{\left(\frac{2}{3}FES_{\max}\right)^2} \times FES^2 + NP_{init}, & \text{if } FES \le \frac{2}{3}FES_{\max},\\[8pt] \dfrac{\frac{1}{3}NP_{init} - NP_{\min}}{\left(\frac{2}{3}FES_{\max} - FES_{\max}\right)^2} \times (FES - FES_{\max})^2 + NP_{\min}, & \text{otherwise}. \end{cases} \tag{15}$$
Herein, $NP_{init}$ and $NP_{\min}$ denote the initial and minimum population sizes, respectively, and $FES_{\max}$ and $FES$ denote the maximum and current number of function evaluations, respectively. From Equation (15), it can be seen that the population size is monotonically reduced, with a slower speed at the early stage of evolution and a faster speed at the later stage. When the new population size is smaller than the current one, the next population consists of the best NP individuals of the current population according to their fitness values. Thus, this mechanism is able to effectively promote the balance between exploration and exploitation.
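The schedule of Equation (15) can be sketched as follows (an illustrative implementation; rounding to the nearest integer is our own assumption):

```python
def pop_size(FES, FES_max, NP_init, NP_min):
    """Piecewise quadratic population-size schedule of Eq. (15):
    slow reduction before 2/3 of the evaluation budget, fast after."""
    split = 2.0 * FES_max / 3.0
    if FES <= split:
        # early stage: parabola from NP_init down to NP_init/3 at the split
        a = (NP_init / 3.0 - NP_init) / split**2
        np_f = a * FES**2 + NP_init
    else:
        # later stage: parabola from NP_init/3 down to NP_min at FES_max
        a = (NP_init / 3.0 - NP_min) / (split - FES_max)**2
        np_f = a * (FES - FES_max)**2 + NP_min
    return max(NP_min, int(round(np_f)))
```

For instance, with $NP_{init} = 90$, $NP_{\min} = 4$, and $FES_{\max} = 30{,}000$, the schedule starts at 90, reaches 30 (one third of the initial size) at two thirds of the budget, and ends at 4; the two branches meet continuously at the split point.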
Specifically, by integrating the proposed mutation strategy, the group-based competitive control parameter setting, and the piecewise population reduction mechanism, the overall procedure of the proposed algorithm GCIDE is summarized in Algorithm 1.
Algorithm 1 The detailed procedure of GCIDE
1: Input the initial parameters, including the population size NP, the number of groups k, the initial values of μ C r and μ F for each group, and the generation counter G = 0.
2: Randomly initialize population S G = { X i G : i = 1 , 2 , , N P } by Equation (1);
3: Evaluate the fitness value of each individual;
4: while the termination condition is not satisfied do
5:   Randomly divide the population into k equal groups;
6:   For each group, create the control parameters for its individuals by Equation (11);
7:   for i = 1:NP
8:      Calculate the p value for each individual by Equation (10);
9:      Perform mutation using Equation (9);
10:    Perform crossover using Equation (7);
11:    Perform selection using Equation (8);
12:  end for
13:  Calculate the success rate r j for each group by Equation (14);
14:  Update the control parameters by Equations (12)–(14);
15:  Calculate the new population size using Equation (15), and create the next population;
16:  G = G + 1
17: end while
From Algorithm 1, one can see that in each generation, the population is first randomly divided into k groups, and the control parameters for the individuals in each group are independently created from the corresponding location parameters. After this, the proposed mutation strategy is employed to create a mutant individual for each target individual, and the binomial crossover operation is used to generate its offspring. Subsequently, the greedy selection strategy updates the current population according to the fitness values, and the presented piecewise population reduction mechanism removes the unpromising individuals from the current population. In particular, the proposed mutation strategy makes full use of the fitness value of each individual to properly choose its guider from the population and thereby create its search direction, which adaptively adjusts the search ability of each individual. Meanwhile, the proposed group-based competitive control parameter setting simultaneously utilizes multiple pieces of successful parameter information to generate the control parameters for the current population, and competitively updates the worst historical location information with the current search records during the whole evolution process. This setting can not only maintain the adaptivity of the algorithm under varying search environments, but also alleviate being misled by success biases. Moreover, the presented piecewise population reduction mechanism reduces the population size slowly at the early stage of evolution and quickly at the later stage, which effectively ensures the exploration of the algorithm at the beginning of the search process and speeds up its convergence at the final evolution stage. Therefore, the proposed GCIDE can properly adjust the search ability of each individual and effectively satisfy the trade-off between exploration and exploitation.
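To illustrate steps 5–6 of Algorithm 1, the sketch below randomly partitions the population into k equal groups and draws (F, Cr) for every individual from its group's location parameters. The Cauchy/normal sampling, the 0.1 scale, and the truncation rules are JADE-style assumptions on our part; the paper's actual generation rule is its Equation (11).

```python
import math
import random

def assign_group_parameters(pop_size, k, mu_f, mu_cr):
    """Randomly split the population into k equal groups (pop_size is
    assumed divisible by k) and sample per-individual control
    parameters from the group's location values (mu_f[j], mu_cr[j])."""
    idx = list(range(pop_size))
    random.shuffle(idx)
    size = pop_size // k
    groups = [idx[j * size:(j + 1) * size] for j in range(k)]
    f = [0.0] * pop_size
    cr = [0.0] * pop_size
    for j, members in enumerate(groups):
        for i in members:
            fi = 0.0
            while fi <= 0.0:  # regenerate non-positive Cauchy samples
                fi = mu_f[j] + 0.1 * math.tan(math.pi * (random.random() - 0.5))
            f[i] = min(fi, 1.0)                   # truncate F to (0, 1]
            cr[i] = min(max(random.gauss(mu_cr[j], 0.1), 0.0), 1.0)
    return groups, f, cr
```

The groups form a partition of the population, so each individual receives exactly one (F, Cr) pair tied to its group.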

3.4. Complexity Analysis

Compared with classic DE, whose complexity is O(Gmax × NP × D), GCIDE incurs additional computational burdens from three parts: the mutation operation, the parameter control setting, and the population reduction mechanism. Herein, Gmax is the maximal number of generations. In one generation, the extra computation of the mutation operation consists only of calculating the value of p_i, which requires finding the maximum and minimum fitness values of the current population; thus, its extra complexity is O(3 × Gmax × NP). Meanwhile, the only extra part of the parameter control setting is calculating the success rate of each group based on the fitness values, so its extra complexity is O(Gmax × NP). Moreover, the extra complexity of the population reduction mechanism is O(Gmax × NP). As a result, the total complexity of GCIDE is O(Gmax × NP × (D + 5)), which simplifies to O(Gmax × NP × D). Therefore, the new algorithm does not incur a serious extra computational burden.

4. Numerical Experimental Results and Analysis

In this section, a series of experiments is performed to verify the advantages of the proposed GCIDE. Firstly, the optimal choice of the newly introduced parameters is determined experimentally. Then, comparative experiments with five typical DE variants and five typical non-DE heuristic algorithms are conducted on the IEEE CEC 2017 test suite [50]. This test suite includes four different types of functions, namely unimodal functions, simple multimodal functions, hybrid functions, and composition functions. To obtain statistical results, each algorithm carries out 30 independent runs on each benchmark function. Moreover, a validity experiment is conducted on the newly proposed strategies of GCIDE. Finally, the performance of GCIDE is verified on three practical problems.

4.1. Sensitivity Analyses of k and NPini

From Section 3, we can find that there are two parameters in the proposed algorithm, the number of groups k and the initial population size NPini. To analyze the effects of k and NPini on the performance of GCIDE, numerous experiments are conducted on eight typical functions by setting k and NPini to different values. These eight chosen functions are unimodal functions F1 and F2, simple multimodal functions F5 and F7, hybrid functions F10 and F16, and composition functions F21 and F26; k is set to 4, 6, 8, and 10, and NPini to 12D, 18D, 23D, and 25D, respectively. Table 1 lists the numerical results of GCIDE with different k and NPini, where the best results for each function are marked in bold (the same below).
According to the results shown in Table 1, on the unimodal functions F1 and F2, GCIDE obtains superior results when the initial population size is 12D and 23D and the group number is six and eight. Meanwhile, on the simple multimodal functions F5 and F7, GCIDE performs excellently when the initial population size is 23D and 25D and the group number is four. Besides, on the hybrid functions F10 and F16, one can see that a smaller initial population size and a larger group number yield better results. Finally, on the composition functions F21 and F26, a larger initial population size and a smaller group number are more advantageous for GCIDE. Moreover, Table 2 further reports the rankings of GCIDE with various parameter values, which are based on the Friedman test [51]. From Table 2, it can be seen that GCIDE has the best ranking when the initial population size is 23D and the group number is four. Therefore, these two parameter values will be used in this paper.
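The rankings in Table 2 are Friedman-style average ranks across the test functions. A minimal, self-contained sketch of how such average ranks are computed (the data layout is hypothetical; ties receive mid-ranks, the usual convention):

```python
def average_ranks(results):
    """results[f][a]: mean error of algorithm a on function f (lower is
    better). Returns the average rank of each algorithm over all
    functions, assigning mid-ranks to tied values."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the block of ties
            mid = (i + j) / 2.0 + 1.0       # mid-rank for the tie block
            for t in range(i, j + 1):
                ranks[order[t]] = mid
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]
```

The configuration with the smallest average rank (here, k = 4 and NPini = 23D) is the one selected.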

4.2. Comparison and Discussion

In this subsection, we evaluate the performance of GCIDE by comparing it with five well-known DE variants (JADE [9], jDE [43], FDDE [24], rank-DE [10], and SHADE [44]) and five other typical heuristic algorithms (EPSO [52], MKE [53], HGSA [54], HLWCA [55], and AOA [56]), respectively.
In particular, among the five other heuristic algorithms chosen above, EPSO is a recent, well-known PSO variant that properly integrates five different top-performing PSO variants. MKE is a memetic evolutionary algorithm that randomly selects some Monkey King particles and then separately transforms them into small groups of monkeys for exploitation. Meanwhile, HGSA and HLWCA are both physics-inspired heuristic optimizers: HGSA is an enhanced gravitational search algorithm that introduces hierarchical structures interacting with the population to strengthen exploration, while HLWCA is a recent variant of the water cycle algorithm that incorporates the idea of hierarchical learning. Moreover, AOA is a new meta-heuristic method that simulates the distribution behavior of the main arithmetic operators in mathematics. These methods are clearly representative, so it is meaningful and reasonable to compare GCIDE with them here.
In these experiments, the parameters involved in the above ten compared algorithms are set to the same settings as in their original papers, and those of our method are set to the mentioned settings in Section 3. For clarity, their detailed parameter settings are further given in Table 3.

4.2.1. Comparison with Five DE Variants

First, five DE variants are selected as the comparison algorithms to verify the performance of GCIDE, including JADE [9], jDE [43], FDDE [24], rank-DE [10], and SHADE [44]. Table 4 and Table 5 provide the comparison results of GCIDE and the five compared algorithms on the IEEE CEC2017 benchmark test suite with D = 30 and D = 50, respectively. Herein, “+”, “−”, and “=” indicate that GCIDE performs better than, worse than, and similarly to the compared algorithm, respectively, and “Ranking” is the average rank based on the Friedman test [51] (the same below).
As can be seen from Table 4, when D = 30, the proposed GCIDE obtains better results on most test functions, except for nine, including F1–F3, F6, F18, F20, F27, and F29–F30. Specifically, compared to JADE, GCIDE wins 25, loses 3, and holds 2. Against jDE, GCIDE wins 23, loses 5, and holds 2. Compared with FDDE, GCIDE wins 20, loses 9, and holds 1. Compared with rank-DE, GCIDE wins 27, loses 2, and holds 1. Compared with SHADE, GCIDE wins 25, loses 3, and holds 2. Meanwhile, when D = 50, from Table 5, the proposed GCIDE has a clear advantage across the unimodal, simple multimodal, hybrid, and composition functions, and loses the advantage only on eight functions, including F1, F3–F4, F6, F11, F25, F28, and F30. Specifically, compared to JADE, GCIDE wins 25, loses 5, and holds 0. Against jDE, GCIDE wins 23, loses 7, and holds 0. Compared with FDDE, GCIDE wins 25, loses 5, and holds 0. Compared with rank-DE, GCIDE wins 26, loses 4, and holds 0. Compared with SHADE, GCIDE wins 25, loses 5, and holds 0.
To verify the significance of GCIDE, Table 6 and Table 7 list the statistical results of GCIDE and its competitors based on Wilcoxon signed-rank tests [51] when D = 30 and D = 50, respectively. From Table 6 and Table 7, it can be seen that GCIDE has more R+ values compared to the other five algorithms in the cases of both D = 30 and D = 50, and the obtained p-values are all much less than 0.05. Thus, there is a significant difference between GCIDE and its comparison algorithms.
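The R+ and R− values in these tables are signed-rank sums over the per-function differences. A self-contained sketch of their computation (dropping zero differences and mid-ranking ties follows the usual Wilcoxon convention, which we assume here):

```python
def wilcoxon_r_plus_minus(errors_a, errors_b):
    """Rank the absolute per-function differences and sum the ranks
    where algorithm A beats B (R+) and where it loses (R-).
    errors_a, errors_b: mean errors per function (lower is better)."""
    diffs = [b - a for a, b in zip(errors_a, errors_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        j = i
        while (j + 1 < len(diffs)
               and abs(diffs[order[j + 1]]) == abs(diffs[order[i]])):
            j += 1                       # extend the block of tied |diffs|
        mid = (i + j) / 2.0 + 1.0        # mid-rank for the tie block
        for t in range(i, j + 1):
            ranks[order[t]] = mid
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return r_plus, r_minus
```

A large R+ relative to R− (with a small p-value) indicates that A is significantly better than B over the suite.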
Moreover, to clearly show the convergence performance of GCIDE, the evolution curves of GCIDE and the five compared algorithms are depicted on eight typical functions, including F1, F2, F5, F7, F16, F17, F21, and F26; Figure 1 and Figure 2 draw these curves for D = 30 and D = 50, respectively. As shown in Figure 1, when D = 30, GCIDE converges more slowly in the early evolutionary stage and rapidly once it enters the late evolutionary stage. In particular, for the simple multimodal functions F5 and F7, GCIDE quickly converges to better solutions while the other algorithms fall into local optima. For the other hybrid and composition functions, GCIDE behaves similarly to the compared algorithms but achieves higher accuracy. Meanwhile, when D = 50, as shown in Figure 2, GCIDE still attains higher convergence accuracy. Therefore, GCIDE has better convergence performance.

4.2.2. Comparison with Five Other Heuristic Algorithms

Five non-DE heuristic algorithms are further selected as comparison algorithms to show the performance of GCIDE, including EPSO [52], MKE [53], HGSA [54], HLWCA [55], and AOA [56]. Table 8 and Table 9 provide the comparison results of GCIDE and these five compared algorithms on the IEEE CEC2017 test suite with D = 30 and D = 50, respectively.
From Table 8, it can be seen that when D = 30, the proposed GCIDE obtains better results on most of the functions, except for eleven test functions, including F1, F4, F6, F9, F21, F22, F24–F26, F28, and F29. Specifically, compared to EPSO, GCIDE wins 23, loses 7, and holds 0. Against MKE, GCIDE wins 27, loses 3, and holds 0. Compared with HGSA, GCIDE wins 24, loses 6, and holds 0. Compared with HLWCA, GCIDE wins 28, loses 2, and holds 0. Compared with AOA, GCIDE is better on all test functions. Meanwhile, when D = 50, from Table 9, the proposed GCIDE is also clearly superior on nineteen functions, including F1, F3, F5, F7, F8, F10–F20, F22, F23, F27, and F29. Specifically, compared to EPSO, GCIDE wins 23, loses 7, and holds 0. Against MKE, GCIDE wins 27, loses 3, and holds 0. Compared with HGSA, GCIDE wins 22, loses 8, and holds 0. Compared with HLWCA, GCIDE wins 25, loses 5, and holds 0. Compared with AOA, GCIDE is better on all test functions.
To verify the significance of GCIDE, Table 10 and Table 11 list the statistical results of GCIDE and its competitors based on Wilcoxon signed-rank tests [51] when D = 30 and D = 50, respectively. From Table 10 and Table 11, it can be seen that GCIDE has more R+ values compared to the other five algorithms in the cases of both D = 30 and D = 50, and the obtained p-values are all much less than 0.05. Thus, there is a significant difference between GCIDE and its comparison algorithms.

4.3. Effectiveness of the Proposed Strategies

In this section, we conduct comparative experiments on the effectiveness of the strategies. For simplicity, we denote the variant of GCIDE using a fixed population size by GCIDE-1 and the variant of GCIDE without grouping the parameter control by GCIDE-2. Also, to investigate the effect of different mutation strategies on the performance of GCIDE, we further choose two additional mutation operators DE/rand/1 and DE/best/1, and denote the GCIDE with them by GCIDE (rand) and GCIDE (best), separately. Table 12 provides the numerical results of GCIDE and its four variants above on the IEEE CEC 2017 test suite with D = 30.
As can be seen from Table 12, the mutation strategy has a large effect on the performance of GCIDE, and neither DE/rand/1 nor DE/best/1 achieves good optimization performance. Besides, GCIDE-2 performs worse than GCIDE on the simple multimodal functions and composition functions, while the overall performance of GCIDE-1 is poorer than that of GCIDE on all test functions. The reason is that the group-based parameter control method effectively reduces the mutual misleading of parameters, and the piecewise population size reduction mechanism inclines the algorithm toward exploration in the early stage and exploitation in the later stage. Thereby, the proposed strategies effectively promote the optimization performance of GCIDE.

4.4. Real Applications

In this subsection, three practical problems are further adopted and tested to show the practicality of GCIDE, including the Design of Tension/Compression Spring [57], the side collision problem of automobiles [58], and the Spread Spectrum Radar Polyphase Code Design [59].

4.4.1. Design of Tension/Compression Spring

First, the tension/compression spring design problem [57] is used to show the performance of GCIDE. This problem is subject to constraints on minimum deflection, shear stress, surge frequency, the outside diameter, and the design variables. The design variables are the wire diameter d(=x1), the mean coil diameter D(=x2), and the number of active coils N(=x3). The problem can be stated as:
$$
\min f(X)=(x_3+2)x_2x_1^{2},
$$
and it is subject to the following constraints:
$$
\begin{cases}
g_1(X)=1-\dfrac{x_2^{3}x_3}{71785x_1^{4}}\le 0,\\[2mm]
g_2(X)=\dfrac{4x_2^{2}-x_1x_2}{12566\left(x_2x_1^{3}-x_1^{4}\right)}+\dfrac{1}{5108x_1^{2}}-1\le 0,\\[2mm]
g_3(X)=1-\dfrac{140.45x_1}{x_2^{2}x_3}\le 0,\\[2mm]
g_4(X)=\dfrac{x_1+x_2}{1.5}-1\le 0,
\end{cases}
$$
while the ranges of these variables are
$$
\begin{cases}
0.05\le x_1\le 2,\\
0.25\le x_2\le 1.3,\\
2\le x_3\le 15.
\end{cases}
$$
The numerical results of GCIDE and five DE variants used in the last subsection on this problem are shown in Table 13, where each algorithm is run 30 times to obtain the mean, standard deviation, the best and worst values. From Table 13, it can be seen that GCIDE, FDDE, rank-DE, and SHADE have the same performance on mean, best and worst values, and GCIDE has the best result on standard deviation. This shows that GCIDE has strong solving ability and better stability than other compared algorithms on this real problem.

4.4.2. Side Collision Problem of Automobile

There are three common forms of vehicle collisions: frontal collisions, side collisions, and rear-end collisions. Among them, side collisions are less frequent than the other two forms, yet they are the most harmful to drivers and passengers. Moreover, with the increasing influence of the C-NCAP management rules on consumers, automakers are paying more attention to the passive safety of cars. More descriptions and discussions of this problem can be found in reference [58].
For clarity, this practical problem can be modelled by the following optimization problem with 11 decision variables:
$$
\min F(X)=f(X)+M\left(\sum_{i=1}^{10}g_i^{2}(X)\right).
$$
Here, $f(X)$ is a function of the weight of the door, $M$ denotes the penalty coefficient and is set to 1000, and $g_i(X)$ denotes the $i$-th constraint function, as given below:
$$
\begin{aligned}
f(X)={}&1.98+4.90x_1+6.67x_2+6.98x_3+4.01x_4+1.78x_5+2.73x_7,\\
g_1(X)={}&1.16-0.3717x_2x_4-0.00931x_2x_{10}-0.484x_3x_9+0.01343x_6x_{10}-1\le 0,\\
g_2(X)={}&46.36-9.9x_2-12.9x_1x_2-0.484x_3x_9+0.1107x_3x_{10}-32\le 0,\\
g_3(X)={}&33.86+2.95x_3+0.1792x_{10}-5.057x_1x_2-11.0x_2x_8-0.0215x_5x_{10}\\
&-9.98x_7x_8+22.0x_8x_9-32\le 0,\\
g_4(X)={}&28.98+3.818x_3-4.2x_1x_2+0.0207x_5x_{10}+6.63x_6x_9-7.7x_7x_8\\
&+0.32x_9x_{10}-32\le 0,\\
g_5(X)={}&0.261-0.0159x_1x_2-0.188x_1x_8-0.019x_2x_7+0.0144x_3x_5\\
&+0.0008757x_5x_{10}+0.08045x_6x_9+0.00139x_8x_{11}+0.00001575x_{10}x_{11}-0.32\le 0,\\
g_6(X)={}&0.214+0.00817x_5-0.131x_1x_8-0.0704x_1x_9+0.03099x_2x_6-0.018x_2x_7\\
&+0.0208x_3x_8+0.121x_3x_9-0.00364x_5x_6+0.0007715x_5x_{10}-0.0005354x_6x_{10}\\
&+0.00121x_8x_{11}+0.00184x_9x_{10}-0.02x_2^{2}-0.32\le 0,\\
g_7(X)={}&0.74-0.62x_2-0.163x_3x_8+0.001232x_3x_{10}-0.166x_7x_9+0.227x_2^{2}-0.32\le 0,\\
g_8(X)={}&4.72-0.5x_4-0.19x_2x_3-0.0122x_4x_{10}+0.009325x_6x_{10}+0.000191x_{11}^{2}-4\le 0,\\
g_9(X)={}&10.58-0.674x_1x_2-1.95x_2x_8+0.02054x_3x_{10}-0.0198x_4x_{10}+0.028x_6x_{10}-9.9\le 0,\\
g_{10}(X)={}&16.45-0.489x_3x_7-0.843x_5x_6+0.0432x_9x_{10}-0.0556x_9x_{11}-0.000786x_{11}^{2}-15.7\le 0.
\end{aligned}
$$
Here, x1 is the B-pillar inner, x2 is the B-pillar reinforcement, x3 is the floor side inner, x4 is the cross member, x5 is the door beam, x6 is the door beltline reinforcement, x7 is the roof rail, x8 and x9 are the materials of the B-pillar inner and the floor side inner, respectively, x10 is the barrier height, and x11 is the hitting position. The ranges of these variables are
$$
\begin{cases}
0.5\le x_1,x_2,x_3,x_4,x_5,x_6,x_7\le 1.5,\\
x_8,x_9\in\{0.192,0.345\},\\
-30\le x_{10},x_{11}\le 30.
\end{cases}
$$
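A sketch of evaluating the penalized objective; the minus signs (lost in the display above) and the 0.1792·x10 term follow the standard car side-impact benchmark formulation, which we assume matches the paper's, and we apply the penalty to violated constraints only, which we take as the intended reading of the penalty term.

```python
def car_weight(x):
    """Door-weight objective f(X); x is an 11-element vector."""
    return (1.98 + 4.90 * x[0] + 6.67 * x[1] + 6.98 * x[2]
            + 4.01 * x[3] + 1.78 * x[4] + 2.73 * x[6])

def car_constraints(x):
    """The ten constraints g1..g10 (feasible when <= 0)."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11 = x
    return [
        1.16 - 0.3717*x2*x4 - 0.00931*x2*x10 - 0.484*x3*x9
        + 0.01343*x6*x10 - 1.0,
        46.36 - 9.9*x2 - 12.9*x1*x2 - 0.484*x3*x9 + 0.1107*x3*x10 - 32.0,
        33.86 + 2.95*x3 + 0.1792*x10 - 5.057*x1*x2 - 11.0*x2*x8
        - 0.0215*x5*x10 - 9.98*x7*x8 + 22.0*x8*x9 - 32.0,
        28.98 + 3.818*x3 - 4.2*x1*x2 + 0.0207*x5*x10 + 6.63*x6*x9
        - 7.7*x7*x8 + 0.32*x9*x10 - 32.0,
        0.261 - 0.0159*x1*x2 - 0.188*x1*x8 - 0.019*x2*x7 + 0.0144*x3*x5
        + 0.0008757*x5*x10 + 0.08045*x6*x9 + 0.00139*x8*x11
        + 0.00001575*x10*x11 - 0.32,
        0.214 + 0.00817*x5 - 0.131*x1*x8 - 0.0704*x1*x9 + 0.03099*x2*x6
        - 0.018*x2*x7 + 0.0208*x3*x8 + 0.121*x3*x9 - 0.00364*x5*x6
        + 0.0007715*x5*x10 - 0.0005354*x6*x10 + 0.00121*x8*x11
        + 0.00184*x9*x10 - 0.02*x2**2 - 0.32,
        0.74 - 0.62*x2 - 0.163*x3*x8 + 0.001232*x3*x10 - 0.166*x7*x9
        + 0.227*x2**2 - 0.32,
        4.72 - 0.5*x4 - 0.19*x2*x3 - 0.0122*x4*x10 + 0.009325*x6*x10
        + 0.000191*x11**2 - 4.0,
        10.58 - 0.674*x1*x2 - 1.95*x2*x8 + 0.02054*x3*x10 - 0.0198*x4*x10
        + 0.028*x6*x10 - 9.9,
        16.45 - 0.489*x3*x7 - 0.843*x5*x6 + 0.0432*x9*x10 - 0.0556*x9*x11
        - 0.000786*x11**2 - 15.7,
    ]

def car_penalized(x, m=1000.0):
    # M = 1000 as stated in the text
    return car_weight(x) + m * sum(max(0.0, g) ** 2
                                   for g in car_constraints(x))
```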
As described above, we have D = 11 in this experiment. Table 14 lists the experimental results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE over 30 independent runs on this problem, giving the mean, standard deviation, best, and worst values of the 30 runs. From Table 14, we can see that GCIDE obtains the best results on all indicators. Thus, GCIDE is a more promising optimizer for this real problem.

4.4.3. Spread Spectrum Radar Polyphase Code Design

Finally, the other real engineering problem, namely Spread Spectrum Radar Polyphase Code Design (SSRPPCD) [59], is also used to demonstrate the effectiveness of GCIDE. In detail, this problem can be mathematically described as:
min X Ω f ( X ) = max { ϕ 1 ( X ) , , ϕ 2 m ( X ) } .
Herein, $\Omega=\{X=(x_1,\ldots,x_D)\in\mathbb{R}^{D}\mid 0\le x_j\le 2\pi\ \text{for}\ j=1,\ldots,D\}$, $m=2D-1$, $\phi_{m+i}(X)=-\phi_i(X)$ for $i=1,\ldots,m$, and
$$
\begin{cases}
\phi_{2i-1}(X)=\sum\limits_{j=i}^{D}\cos\left(\sum\limits_{k=|2i-j-1|+1}^{j}x_k\right), & i=1,\ldots,D,\\[3mm]
\phi_{2i}(X)=0.5+\sum\limits_{j=i+1}^{D}\cos\left(\sum\limits_{k=|2i-j|+1}^{j}x_k\right), & i=1,\ldots,D-1.
\end{cases}
$$
From the description of the problem, it is clear that the problem has a complex structure, which is a great challenge for the performance of the algorithm.
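The objective can be transcribed directly from the definition. In the sketch below, the index ranges (j running from i and from i + 1) follow the standard SSRPPCD formulation, which we assume matches the paper's:

```python
import math

def radar_objective(x):
    """f(X) = max{phi_1, ..., phi_2m} with m = 2D - 1 and
    phi_{m+i} = -phi_i, so the result is the largest |phi| pattern."""
    d = len(x)
    phis = []
    for i in range(1, d + 1):          # odd-indexed phi_{2i-1}, i = 1..D
        phis.append(sum(
            math.cos(sum(x[k - 1]
                         for k in range(abs(2 * i - j - 1) + 1, j + 1)))
            for j in range(i, d + 1)))
    for i in range(1, d):              # even-indexed phi_{2i}, i = 1..D-1
        phis.append(0.5 + sum(
            math.cos(sum(x[k - 1]
                         for k in range(abs(2 * i - j) + 1, j + 1)))
            for j in range(i + 1, d + 1)))
    return max(phis + [-p for p in phis])
```

For D = 1 this reduces to |cos(x1)|, which is a quick sanity check of the index ranges.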
Table 15 lists the numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE when D = 30, giving the mean, standard deviation, best, and worst values of the 30 runs. From Table 15, one can see that GCIDE outperforms all of its counterparts in terms of the mean, standard deviation, best, and worst values. Thus, GCIDE is the better optimizer on this problem.

5. Conclusions

In this paper, an enhanced DE algorithm (GCIDE) was presented for solving global optimization problems. In GCIDE, a new mutation operator named “DE/current-to-pbest_id/1” was first developed to properly adjust the search ability of each individual by using its fitness value to determine the chosen range of its guider. Meanwhile, a group-based competitive parameter control setting was proposed to meet the various search requirements of the population by randomly and equally dividing the whole population into multiple groups, independently creating the control parameters for each group, and competitively updating the worst location parameter information with the current successful parameters. Moreover, a piecewise population size reduction mechanism was further put forward to enhance the exploration and convergence of the population at the early and later search stages, respectively. Compared to existing DE variants, GCIDE adaptively adjusts the chosen range of the guider for each individual, independently generates the control parameters for each group, and competitively replaces the worst location parameter information among all groups with the current successful parameters, while reducing the population size at different speeds during different search periods. Thus, GCIDE is capable of effectively enhancing the adaptivity of the algorithm and balancing well its exploration and exploitation. Finally, the performance of GCIDE was evaluated and discussed through a series of experiments on the benchmark functions of the IEEE CEC 2017 test suite and three real applications. Experimental results indicated the superiority of GCIDE over ten typical or well-known methods.
It should also be mentioned that, in this paper, the fitness value of an individual is merely used to measure its characteristics and then adjust its search behavior, which might not precisely evaluate its search potential. Besides, the special requirements of individuals are not fully taken into consideration during the generation of the control parameters, and only a few real applications are considered in the current paper. Therefore, in future work, we will design a new approach for evaluating the search characteristics of individuals and a more effective and efficient adaptive parameter control method, and apply GCIDE to more practical optimization problems.

Author Contributions

Conceptualization, Y.G. and M.T.; methodology, X.H.; software, Y.G. and Y.M.; validation, Y.G., Y.M. and M.T.; formal analysis, Y.G.; writing—original draft preparation, Y.G. and M.T.; writing—review and editing, Y.G. and M.T.; visualization, Y.G.; supervision, X.H. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 12101477) and the Scientific Research Startup Foundation of Xi’an Polytechnic University (BS202052).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. de Fátima Morais, M.; Ribeiro, M.H.D.M.; da Silva, R.G.; Mariani, V.C.; dos Santos Coelho, L. Discrete differential evolution metaheuristics for permutation flow shop scheduling problems. Comput. Ind. Eng. 2022, 166, 107956. [Google Scholar] [CrossRef]
  3. Zou, D.; Gong, D. Differential evolution based on migrating variables for the combined heat and power dynamic economic dispatch. Energy 2022, 238, 121664. [Google Scholar] [CrossRef]
  4. Wang, G.-G.; Gao, D.; Pedrycz, W. Solving multiobjective fuzzy job-shop scheduling problem by a hybrid adaptive differential evolution algorithm. IEEE Trans. Ind. Inform. 2022, 18, 8519–8528. [Google Scholar] [CrossRef]
  5. Zhang, X.; Zhang, X.; Wu, Z. Spectrum allocation by wave based adaptive differential evolution algorithm. Ad Hoc Netw. 2019, 94, 101969. [Google Scholar] [CrossRef]
  6. Chai, X.; Zheng, Z.; Xiao, J.; Yan, L.; Qu, B.; Wen, P.; Wang, H.; Zhou, Y.; Sun, H. Multi-strategy fusion differential evolution algorithm for UAV path planning in complex environment. Aerosp. Sci. Technol. 2022, 121, 107287. [Google Scholar] [CrossRef]
  7. Kozlov, K.; Ivanisenko, N.; Ivanisenko, V.; Kolchanov, N.; Samsonova, M.; Samsonov, A.M. Enhanced differential evolution entirely parallel method for biomedical applications. In Proceedings of the Parallel Computing Technologies: 12th International Conference, St. Petersburg, Russia, 30 September–4 October 2013; pp. 409–416. [Google Scholar]
  8. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Differential evolution mutations: Taxonomy, comparison and convergence analysis. IEEE Access 2021, 9, 68629–68662. [Google Scholar] [CrossRef]
  9. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution With Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  10. Gong, W.; Cai, Z. Differential evolution with ranking-based mutation operators. IEEE Trans. Cybern. 2013, 43, 2066–2081. [Google Scholar] [CrossRef]
  11. Wang, J.; Liao, J.; Zhou, Y.; Cai, Y. Differential evolution enhanced with multiobjective sorting-based mutation operators. IEEE Trans. Cybern. 2014, 44, 2792–2805. [Google Scholar] [CrossRef] [PubMed]
  12. Gong, W.; Cai, Z.; Liang, D. Adaptive ranking mutation operator based differential evolution for constrained optimization. IEEE Trans. Cybern. 2015, 45, 716–727. [Google Scholar] [CrossRef] [PubMed]
  13. Cai, Y.; Chen, Y.; Wang, T.; Tian, H. Improving differential evolution with a new selection method of parents for mutation. Front. Comput. Sci. 2016, 10, 246–269. [Google Scholar] [CrossRef]
  14. Yi, W.; Zhou, Y.; Gao, L.; Li, X.; Mou, J. An improved adaptive differential evolution algorithm for continuous optimization. Expert Syst. Appl. 2016, 44, 1–12. [Google Scholar] [CrossRef]
  15. Sharifi-Noghabi, H.; Rajabi Mashhadi, H.; Shojaee, K. A novel mutation operator based on the union of fitness and design spaces information for Differential Evolution. Soft Comput. 2016, 21, 6555–6562. [Google Scholar] [CrossRef]
  16. Mohamed, A.W.; Suganthan, P.N. Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Comput. 2017, 22, 3215–3235. [Google Scholar] [CrossRef]
  17. Meng, Z.; Pan, J.-S. HARD-DE: Hierarchical archive based mutation strategy with depth information of evolution for the enhancement of differential evolution on numerical optimization. IEEE Access 2019, 7, 12832–12854. [Google Scholar] [CrossRef]
  18. Xia, X.; Tong, L.; Zhang, Y.; Xu, X.; Yang, H.; Gui, L.; Li, Y.; Li, K. NFDDE: A novelty-hybrid-fitness driving differential evolution algorithm. Inf. Sci. 2021, 579, 33–54. [Google Scholar] [CrossRef]
  19. Wang, M.; Ma, Y.; Wang, P. Parameter and strategy adaptive differential evolution algorithm based on accompanying evolution. Inf. Sci. 2022, 607, 1136–1157. [Google Scholar] [CrossRef]
  20. Sheng, M.; Chen, S.; Liu, W.; Mao, J.; Liu, X. A differential evolution with adaptive neighborhood mutation and local search for multi-modal optimization. Neurocomputing 2022, 489, 309–322. [Google Scholar] [CrossRef]
  21. Meng, Z.; Yang, C.; Li, X.; Chen, Y. Di-DE: Depth Information-Based Differential Evolution With Adaptive Parameter Control for Numerical Optimization. IEEE Access 2020, 8, 40809–40827. [Google Scholar] [CrossRef]
  22. Ma, Y.; Bai, Y. A multi-population differential evolution with best-random mutation strategy for large-scale global optimization. Appl. Intell. 2020, 50, 1510–1526. [Google Scholar] [CrossRef]
  23. Guan, B.; Zhao, Y.; Yin, Y.; Li, Y. A differential evolution based feature combination selection algorithm for high-dimensional data. Inf. Sci. 2021, 547, 870–886. [Google Scholar] [CrossRef]
  24. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm Evol. Comput. 2021, 61, 100816. [Google Scholar] [CrossRef]
  25. Liu, D.; He, H.; Yang, Q. Function value ranking aware differential evolution for global numerical optimization. Swarm Evol. Comput. 2023, 78, 101282. [Google Scholar] [CrossRef]
  26. Li, X.; Wang, K.; Yang, H. PAIDDE: A permutation archive information directed differential evolution algorithm. IEEE Access 2022, 10, 50384–50402. [Google Scholar] [CrossRef]
  27. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  28. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution with Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  29. Wang, Y.; Liu, Z.-Z.; Li, J.; Li, H.-X.; Yen, G.G. Utilizing cumulative population distribution information in differential evolution. Appl. Soft Comput. 2016, 48, 329–346. [Google Scholar] [CrossRef]
  30. Gui, L.; Xia, X.; Yu, F.; Wu, H.; Wu, R.; Wei, B.; Zhang, Y.; Li, X.; He, G. A multi-role based differential evolution. Swarm Evol. Comput. 2019, 50, 100508. [Google Scholar] [CrossRef]
  31. Li, Z.; Shi, L.; Yue, C.; Shang, Z.; Qu, B. Differential evolution based on reinforcement learning with fitness ranking for solving multimodal multiobjective problems. Swarm Evol. Comput. 2019, 49, 234–244. [Google Scholar] [CrossRef]
  32. Meng, Z.; Zhong, Y.; Yang, C. CS-DE: Cooperative strategy based differential evolution with population diversity enhancement. Inf. Sci. 2021, 577, 663–696. [Google Scholar] [CrossRef]
  33. Kumar, A.; Misra, R.K.; Singh, D.; Das, S. Testing a multi-operator based differential evolution algorithm on the 100-digit challenge for single objective numerical optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 34–40. [Google Scholar]
  34. Fachin, J.M.; Reynoso-Meza, G.; Mariani, V.C.; dos Santos Coelho, L. Self-adaptive differential evolution applied to combustion engine calibration. Soft Comput. 2021, 25, 109–135. [Google Scholar] [CrossRef]
  35. Deng, W.; Ni, H.; Liu, Y.; Chen, H.; Zhao, H. An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation. Appl. Soft Comput. 2022, 127, 109419. [Google Scholar] [CrossRef]
  36. Yi, W.; Chen, Y.; Pei, Z.; Lu, J. Adaptive differential evolution with ensembling operators for continuous optimization problems. Swarm Evol. Comput. 2022, 69, 100994. [Google Scholar] [CrossRef]
  37. Tan, Z.; Tang, Y.; Huang, H.; Luo, S. Dynamic fitness landscape-based adaptive mutation strategy selection mechanism for differential evolution. Inf. Sci. 2022, 607, 44–61. [Google Scholar] [CrossRef]
  38. Qiao, K.; Liang, J.; Yu, K.; Yuan, M.; Qu, B.; Yue, C. Self-adaptive resources allocation-based differential evolution for constrained evolutionary optimization. Knowl.-Based Syst. 2022, 235, 107653. [Google Scholar] [CrossRef]
  39. Zhang, H.; Sun, J.; Xu, Z.; Shi, J. Learning unified mutation operator for differential evolution by natural evolution strategies. Inf. Sci. 2023, 632, 594–616. [Google Scholar] [CrossRef]
  40. Li, Y.; Han, T.; Tang, S.; Huang, C.; Zhou, H.; Wang, Y. An improved differential evolution by hybridizing with Estimation of distribution algorithm. Inf. Sci. 2023, 619, 439–456. [Google Scholar] [CrossRef]
  41. Li, Y.; Wang, S.; Yang, H. Enhancing differential evolution algorithm using leader-adjoint populations. Inf. Sci. 2023, 622, 235–268. [Google Scholar] [CrossRef]
  42. Liu, J.; Lampinen, J. A fuzzy adaptive differential evolution algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  43. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  44. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 71–78. [Google Scholar]
  45. Tanabe, R.; Fukunaga, A. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  46. Ali, M.Z.; Awad, N.H.; Suganthan, P.N.; Reynolds, R.G. An Adaptive Multipopulation Differential Evolution With Dynamic Population Reduction. IEEE Trans. Cybern. 2017, 47, 2768–2779. [Google Scholar] [CrossRef] [PubMed]
  47. Poláková, R.; Tvrdík, J.; Bujok, P. Differential evolution with adaptive mechanism of population size according to current population diversity. Swarm Evol. Comput. 2019, 50, 100519. [Google Scholar] [CrossRef]
  48. Xia, X.; Gui, L.; Zhang, Y.; Xu, X.; Yu, F.; Wu, H.; Wei, B.; He, G.; Li, Y.; Li, K. A fitness-based adaptive differential evolution algorithm. Inf. Sci. 2021, 549, 116–141. [Google Scholar] [CrossRef]
  49. Zeng, Z.; Zhang, M.; Zhang, H.; Hong, Z. Improved differential evolution algorithm based on the sawtooth-linear population size adaptive method. Inf. Sci. 2022, 608, 1045–1071. [Google Scholar] [CrossRef]
50. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016; pp. 1–34. [Google Scholar]
  51. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  52. Lynn, N.; Suganthan, P.N. Ensemble particle swarm optimizer. Appl. Soft Comput. 2017, 55, 533–548. [Google Scholar] [CrossRef]
  53. Meng, Z.; Pan, J.S. Monkey king evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl.-Based Syst. 2016, 97, 144–157. [Google Scholar] [CrossRef]
  54. Wang, Y.; Yu, Y.; Gao, S.; Pan, H.; Yang, G. A hierarchical gravitational search algorithm with an effective gravitational constant. Swarm Evol. Comput. 2019, 46, 118–139. [Google Scholar] [CrossRef]
  55. Chen, C.; Wang, P.; Dong, H.; Wang, X. Hierarchical learning water cycle algorithm. Appl. Soft Comput. 2020, 86, 105935. [Google Scholar] [CrossRef]
  56. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  57. Arora, J. Introduction to Optimum Design; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  58. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.-P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639. [Google Scholar] [CrossRef] [PubMed]
59. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Technical Report; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010; pp. 341–359. [Google Scholar]
Figure 1. The evolution curves of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on F1 (a), F2 (b), F5 (c), F7 (d), F16 (e), F17 (f), F21 (g), F26 (h) with D = 30.
Figure 2. The evolution curves of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on F1 (a), F2 (b), F5 (c), F7 (d), F16 (e), F17 (f), F21 (g), F26 (h) with D = 50.
Table 1. Numerical results of GCIDE with various k and NPini.
Each cell shows Mean / Std.

| NPini | Group Number | F1 | F2 | F5 | F7 |
| --- | --- | --- | --- | --- | --- |
| 12D | k = 4 | 1.47E-14 / 2.59E-15 | 2.33E-01 / 6.26E-01 | 8.43E+00 / 1.52E+00 | 3.68E+01 / 1.40E+00 |
| 12D | k = 6 | 1.23E-14 / 4.91E-15 | 1.80E+00 / 9.86E+00 | 9.43E+00 / 1.60E+00 | 3.70E+01 / 7.77E-01 |
| 12D | k = 8 | 1.23E-14 / 4.91E-15 | 0.00E+00 / 0.00E+00 | 9.80E+00 / 1.51E+00 | 3.73E+01 / 1.18E+00 |
| 12D | k = 10 | 1.33E-14 / 3.61E-15 | 6.67E-02 / 2.54E-01 | 9.94E+00 / 1.90E+00 | 3.77E+01 / 1.37E+00 |
| 18D | k = 4 | 1.47E-14 / 4.55E-15 | 1.93E+00 / 1.00E+01 | 8.22E+00 / 1.70E+00 | 3.68E+01 / 1.13E+00 |
| 18D | k = 6 | 1.47E-14 / 2.59E-15 | 1.90E+00 / 9.84E+00 | 8.44E+00 / 1.35E+00 | 3.64E+01 / 1.26E+00 |
| 18D | k = 8 | 1.37E-14 / 2.59E-15 | 3.33E-02 / 1.83E-01 | 9.79E+00 / 1.38E+00 | 3.68E+01 / 1.40E+00 |
| 18D | k = 10 | 1.42E-14 / 0.00E+00 | 1.38E+02 / 7.48E+02 | 8.74E+00 / 1.10E+00 | 3.63E+01 / 1.20E+00 |
| 23D | k = 4 | 1.47E-14 / 4.55E-15 | 1.87E+00 / 1.00E+01 | 7.35E+00 / 1.27E+00 | 3.63E+01 / 1.14E+00 |
| 23D | k = 6 | 1.75E-14 / 6.11E-15 | 6.67E-02 / 2.54E-01 | 8.65E+00 / 1.45E+00 | 3.59E+01 / 1.17E+00 |
| 23D | k = 8 | 1.71E-14 / 5.78E-15 | 0.00E+00 / 0.00E+00 | 9.42E+00 / 1.49E+00 | 3.65E+01 / 1.32E+00 |
| 23D | k = 10 | 1.61E-14 / 4.91E-15 | 3.33E-02 / 1.83E-01 | 9.46E+00 / 1.65E+00 | 3.73E+01 / 1.51E+00 |
| 25D | k = 4 | 1.99E-14 / 1.03E-14 | 1.87E+00 / 1.00E+01 | 7.81E+00 / 1.65E+00 | 3.57E+01 / 1.05E+00 |
| 25D | k = 6 | 1.75E-14 / 7.16E-15 | 6.67E-02 / 2.54E-01 | 8.56E+00 / 1.78E+00 | 3.67E+01 / 1.27E+00 |
| 25D | k = 8 | 2.04E-14 / 9.65E-15 | 1.33E-01 / 3.46E-01 | 8.96E+00 / 1.89E+00 | 3.70E+01 / 1.45E+00 |
| 25D | k = 10 | 1.89E-14 / 7.77E-15 | 0.00E+00 / 0.00E+00 | 9.65E+00 / 1.96E+00 | 3.67E+01 / 1.33E+00 |

| NPini | Group Number | F10 | F16 | F21 | F26 |
| --- | --- | --- | --- | --- | --- |
| 12D | k = 4 | 1.54E+03 / 1.91E+02 | 1.83E+02 / 1.14E+02 | 2.08E+02 / 1.94E+00 | 9.24E+02 / 5.52E+01 |
| 12D | k = 6 | 1.58E+03 / 1.98E+02 | 2.07E+02 / 8.23E+01 | 2.09E+02 / 1.99E+00 | 9.33E+02 / 5.99E+01 |
| 12D | k = 8 | 1.48E+03 / 2.01E+02 | 1.93E+02 / 8.66E+01 | 2.10E+02 / 1.95E+00 | 9.23E+02 / 5.60E+01 |
| 12D | k = 10 | 1.51E+03 / 2.10E+02 | 2.02E+02 / 1.17E+02 | 2.11E+02 / 1.85E+00 | 9.38E+02 / 5.13E+01 |
| 18D | k = 4 | 1.50E+03 / 2.75E+02 | 1.56E+02 / 8.94E+01 | 2.09E+02 / 1.72E+00 | 9.13E+02 / 5.34E+01 |
| 18D | k = 6 | 1.60E+03 / 1.96E+02 | 1.33E+02 / 8.11E+01 | 2.08E+02 / 1.81E+00 | 9.12E+02 / 4.63E+01 |
| 18D | k = 8 | 1.52E+03 / 2.63E+02 | 1.48E+02 / 9.84E+01 | 2.09E+02 / 1.67E+00 | 9.27E+02 / 5.15E+01 |
| 18D | k = 10 | 1.59E+03 / 2.13E+02 | 1.31E+02 / 9.24E+01 | 2.09E+02 / 1.77E+00 | 9.22E+02 / 4.51E+01 |
| 23D | k = 4 | 1.55E+03 / 1.91E+02 | 1.58E+02 / 9.59E+01 | 2.08E+02 / 1.52E+00 | 9.09E+02 / 4.62E+01 |
| 23D | k = 6 | 1.59E+03 / 2.74E+02 | 1.53E+02 / 7.50E+01 | 2.08E+02 / 1.66E+00 | 9.15E+02 / 5.04E+01 |
| 23D | k = 8 | 1.58E+03 / 1.97E+02 | 1.79E+02 / 9.45E+01 | 2.09E+02 / 1.85E+00 | 9.23E+02 / 4.77E+01 |
| 23D | k = 10 | 1.54E+03 / 2.35E+02 | 1.97E+02 / 8.83E+01 | 2.10E+02 / 1.83E+00 | 9.24E+02 / 4.46E+01 |
| 25D | k = 4 | 1.53E+03 / 2.36E+02 | 1.53E+02 / 9.54E+01 | 2.07E+02 / 1.76E+00 | 9.27E+02 / 6.42E+01 |
| 25D | k = 6 | 1.57E+03 / 2.05E+02 | 1.32E+02 / 9.59E+01 | 2.09E+02 / 1.68E+00 | 9.12E+02 / 6.40E+01 |
| 25D | k = 8 | 1.57E+03 / 2.19E+02 | 1.55E+02 / 1.07E+02 | 2.09E+02 / 1.64E+00 | 9.12E+02 / 4.69E+01 |
| 25D | k = 10 | 1.58E+03 / 2.25E+02 | 1.64E+02 / 9.59E+01 | 2.10E+02 / 1.43E+00 | 9.12E+02 / 4.10E+01 |
Table 2. Ranking of GCIDE with various parameter values.
| NPini | k = 4 | k = 6 | k = 8 | k = 10 |
| --- | --- | --- | --- | --- |
| 12D | 7.56 | 7.19 | 7.88 | 8.13 |
| 18D | 8.13 | 11.00 | 8.81 | 11.50 |
| 23D | 5.75 | 7.38 | 8.81 | 10.88 |
| 25D | 6.94 | 7.13 | 9.44 | 9.50 |
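The ranking values in Table 2 (and the Ranking rows of the later comparison tables) are Friedman-style average ranks: on each function the candidates are ranked by mean error, and the ranks are averaged over all functions. A minimal sketch of that computation follows; this is a generic illustration rather than the paper's code, and the average-rank handling of ties is an assumption matching common practice.

```python
def mean_ranks(results):
    """Average rank per algorithm across functions (lower error = better rank).

    results: dict mapping algorithm name -> list of per-function mean errors.
    Ties receive the average of the ranks they span, as is standard for
    Friedman-style rankings.
    """
    algos = list(results)
    n_fun = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n_fun):
        vals = sorted((results[a][f], a) for a in algos)
        i = 0
        while i < len(vals):
            j = i
            while j + 1 < len(vals) and vals[j + 1][0] == vals[i][0]:
                j += 1  # extend over a tie group
            avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
            for k in range(i, j + 1):
                totals[vals[k][1]] += avg
            i = j + 1
    return {a: totals[a] / n_fun for a in algos}

# Toy example: A and B alternate 1st/2nd place, C is always 3rd.
ranks = mean_ranks({"A": [1.0, 2.0], "B": [2.0, 1.0], "C": [3.0, 3.0]})
```

A lower average rank means the method places near the top more often across the benchmark, which is how the best configuration in Table 2 (NPini = 23D, k = 4, with rank 5.75) is identified.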
Table 3. Parameter settings.
| Algorithm | Parameter Settings |
| --- | --- |
| JADE [9] | μF = 0.5, F ~ C(μF, 0.1), Cr ~ N(μCr, 0.1), p = 0.05, c = 0.1 |
| jDE [43] | F = 0.5, Cr = 0.9, τF = τCr = 0.1, Fl = 0.1, Fu = 0.9 |
| FDDE [24] | F = 0.5, Cr = 0.9, τF = τCr = 0.1, Fl = 0.1, Fu = 0.9, w ∈ (0.2, 0.8) |
| rank-DE [10] | F = 0.5, Cr = 0.9, τF = τCr = 0.1, Fl = 0.1, Fu = 0.9 |
| SHADE [44] | μF = 0.5, F ~ C(μF, 0.1), μCr = 0.5, Cr ~ N(μCr, 0.1), p = 0.2, H = 100 |
| EPSO [52] | Npop = 30, g1 = 15, g2 = 25 |
| MKE [53] | FC = 0.7 |
| HGSA [54] | G0 = 100, L = 100, w1(t) = 1 − t^6/T^6, w2(t) = t^6/T^6, K ∈ [n, 2] |
| HLWCA [55] | Npop = 200, Nsr = 30, dmax = 10^-8, c = 2 |
| AOA [56] | Npop = 30, α = 5, μ = 0.5 |
| GCIDE | NPinit = 23D, k = 4, μFi = μCri = 0.5 (i = 1, 2, …, k), NPmin = k |
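The μF and μCr entries in Table 3 parameterize how each new F and Cr pair is drawn. As a hedged sketch of the standard JADE/SHADE-style sampling (not the authors' code): F is drawn from a Cauchy distribution with location μF and scale 0.1, regenerated while non-positive and truncated to 1, while Cr is drawn from a normal distribution with mean μCr and standard deviation 0.1 and clipped to [0, 1].

```python
import math
import random

def sample_params(mu_f, mu_cr, rng):
    """Draw one (F, Cr) control-parameter pair, JADE/SHADE style.

    F ~ Cauchy(mu_f, 0.1): regenerate while F <= 0, then truncate to 1.
    Cr ~ Normal(mu_cr, 0.1): clip into [0, 1].
    """
    f = 0.0
    while f <= 0.0:
        # Cauchy sample via the inverse CDF: mu + gamma * tan(pi * (u - 0.5))
        f = mu_f + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
    f = min(f, 1.0)
    cr = min(max(rng.gauss(mu_cr, 0.1), 0.0), 1.0)
    return f, cr

rng = random.Random(42)
pairs = [sample_params(0.5, 0.5, rng) for _ in range(1000)]
```

Since GCIDE maintains a separate (μFi, μCri) pair per group, each of the k groups would call such a sampler with its own location information.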
Table 4. Numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on IEEE CEC 2017 test suite when D = 30.
Each cell shows Mean / Std; (+), (-), and (=) indicate that GCIDE performed significantly better than, worse than, or comparably to the corresponding algorithm on that function.

| Function | JADE | jDE | FDDE | rank-DE | SHADE | GCIDE |
| --- | --- | --- | --- | --- | --- | --- |
| F01 | 1.42E-14 / 3.73E-15 (-) | 1.14E-14 / 5.78E-15 (-) | 7.11E-15 / 7.23E-15 (-) | 1.23E-14 / 4.91E-15 (-) | 1.18E-14 / 5.39E-15 (-) | 1.52E-14 / 3.61E-15 |
| F02 | 1.04E+01 / 2.16E+01 (+) | 6.70E+02 / 3.37E+03 (+) | 0.00E+00 / 0.00E+00 (-) | 1.84E+04 / 9.70E+04 (+) | 3.85E+02 / 1.49E+03 (+) | 1.00E-01 / 4.03E-01 |
| F03 | 1.03E+04 / 1.65E+04 (+) | 8.43E+04 / 1.58E+03 (+) | 9.54E-08 / 2.68E-07 (+) | 2.08E-09 / 2.89E-09 (+) | 9.66E-14 / 3.70E-14 (-) | 2.01E-13 / 1.08E-13 |
| F04 | 4.60E+01 / 2.50E+01 (+) | 5.83E+01 / 1.11E+01 (+) | 5.36E+01 / 1.83E+01 (+) | 4.62E+01 / 2.63E+01 (+) | 5.08E+01 / 2.02E+01 (+) | 2.94E+01 / 3.10E+01 |
| F05 | 2.53E+01 / 3.93E+00 (+) | 4.03E+01 / 6.78E+00 (+) | 1.68E+02 / 1.31E+01 (+) | 3.42E+01 / 7.77E+00 (+) | 1.46E+01 / 2.81E+00 (+) | 7.96E+00 / 1.58E+00 |
| F06 | 1.67E-13 / 6.50E-14 (-) | 1.14E-13 / 0.00E+00 (-) | 2.79E-07 / 8.94E-07 (-) | 4.14E-05 / 1.64E-04 (+) | 2.66E-05 / 2.89E-05 (+) | 8.48E-06 / 7.50E-06 |
| F07 | 5.41E+01 / 4.73E+00 (+) | 7.53E+01 / 7.25E+00 (+) | 2.04E+02 / 9.66E+00 (+) | 8.95E+01 / 3.33E+01 (+) | 4.46E+01 / 2.92E+00 (+) | 3.58E+01 / 1.06E+00 |
| F08 | 2.52E+01 / 4.18E+00 (+) | 4.40E+01 / 6.69E+00 (+) | 1.60E+02 / 3.64E+01 (+) | 3.71E+01 / 7.12E+00 (+) | 1.55E+01 / 2.43E+00 (+) | 8.41E+00 / 1.40E+00 |
| F09 | 2.98E-03 / 1.63E-02 (+) | 0.00E+00 / 0.00E+00 (-) | 1.51E-02 / 8.29E-02 (+) | 1.33E+00 / 1.27E+00 (+) | 2.98E-03 / 1.63E-02 (+) | 5.31E-14 / 5.77E-14 |
| F10 | 1.86E+03 / 2.54E+02 (+) | 2.63E+03 / 3.31E+02 (+) | 6.51E+03 / 4.19E+02 (+) | 4.38E+03 / 5.05E+02 (+) | 1.74E+03 / 2.25E+02 (+) | 1.53E+03 / 3.10E+02 |
| F11 | 3.64E+01 / 2.88E+01 (+) | 2.33E+01 / 2.42E+01 (+) | 2.12E+01 / 3.07E+01 (+) | 5.69E+01 / 3.96E+01 (+) | 2.97E+01 / 2.36E+01 (+) | 1.84E+01 / 2.35E+01 |
| F12 | 1.04E+03 / 3.47E+02 (+) | 8.55E+03 / 6.79E+03 (+) | 1.53E+04 / 1.85E+04 (+) | 7.14E+03 / 7.39E+03 (+) | 1.21E+03 / 4.28E+02 (+) | 9.93E+02 / 3.87E+02 |
| F13 | 4.56E+01 / 2.71E+01 (+) | 3.92E+01 / 2.55E+01 (+) | 8.16E+01 / 2.09E+01 (+) | 7.92E+01 / 1.48E+02 (+) | 3.98E+01 / 2.99E+01 (+) | 1.48E+01 / 6.60E+00 |
| F14 | 6.03E+03 / 7.93E+03 (+) | 2.77E+01 / 1.23E+01 (+) | 4.94E+01 / 1.72E+01 (+) | 4.42E+01 / 1.74E+01 (+) | 3.14E+01 / 5.12E+00 (+) | 2.24E+01 / 1.22E+00 |
| F15 | 6.34E+02 / 1.35E+03 (+) | 1.34E+01 / 3.92E+00 (+) | 8.77E+00 / 6.65E+00 (+) | 4.27E+01 / 4.71E+01 (+) | 2.43E+01 / 1.56E+01 (+) | 3.48E+00 / 1.31E+00 |
| F16 | 3.76E+02 / 1.65E+02 (+) | 4.36E+02 / 1.46E+02 (+) | 2.49E+02 / 2.57E+02 (+) | 3.77E+02 / 2.37E+02 (+) | 2.72E+02 / 1.43E+02 (+) | 1.40E+02 / 9.45E+01 |
| F17 | 7.78E+01 / 2.26E+01 (+) | 9.33E+01 / 2.37E+01 (+) | 6.74E+01 / 2.80E+01 (+) | 8.32E+01 / 7.23E+01 (+) | 4.87E+01 / 1.09E+01 (+) | 3.14E+01 / 7.54E+00 |
| F18 | 1.68E+04 / 3.89E+04 (+) | 5.38E+01 / 6.29E+01 (+) | 2.14E+01 / 9.55E+00 (-) | 1.99E+02 / 3.29E+02 (+) | 7.97E+01 / 5.60E+01 (+) | 2.19E+01 / 1.28E+00 |
| F19 | 2.02E+03 / 3.30E+03 (+) | 1.12E+01 / 2.82E+00 (+) | 7.54E+00 / 1.64E+00 (+) | 2.37E+01 / 2.34E+01 (+) | 1.53E+01 / 1.49E+01 (+) | 7.03E+00 / 1.59E+00 |
| F20 | 1.08E+02 / 5.61E+01 (+) | 8.25E+01 / 4.18E+01 (+) | 2.66E+01 / 3.36E+01 (-) | 9.47E+01 / 7.52E+01 (+) | 5.89E+01 / 3.77E+01 (+) | 4.07E+01 / 9.98E+00 |
| F21 | 2.26E+02 / 3.67E+00 (+) | 2.44E+02 / 5.25E+00 (+) | 3.61E+02 / 1.13E+01 (+) | 2.39E+02 / 9.26E+00 (+) | 2.16E+02 / 3.02E+00 (+) | 2.08E+02 / 1.43E+00 |
| F22 | 1.00E+02 / 0.00E+00 (=) | 1.00E+02 / 8.30E-14 (=) | 5.55E+02 / 1.73E+03 (+) | 8.17E+02 / 1.67E+03 (+) | 1.00E+02 / 8.30E-14 (=) | 1.00E+02 / 2.27E-13 |
| F23 | 3.72E+02 / 5.52E+00 (+) | 3.91E+02 / 6.99E+00 (+) | 4.94E+02 / 5.22E+01 (+) | 3.89E+02 / 1.07E+01 (+) | 3.64E+02 / 4.17E+00 (+) | 3.45E+02 / 3.29E+00 |
| F24 | 4.41E+02 / 6.29E+00 (+) | 4.61E+02 / 6.39E+00 (+) | 5.63E+02 / 4.79E+01 (+) | 4.63E+02 / 1.34E+01 (+) | 4.38E+02 / 4.69E+00 (+) | 4.22E+02 / 2.62E+00 |
| F25 | 3.87E+02 / 1.56E-01 (=) | 3.87E+02 / 1.38E+01 (=) | 3.87E+02 / 4.51E-02 (=) | 3.87E+02 / 1.36E+00 (=) | 3.87E+02 / 8.66E-02 (=) | 3.87E+02 / 5.06E-02 |
| F26 | 1.15E+03 / 1.71E+02 (+) | 1.36E+03 / 9.09E+01 (+) | 1.35E+03 / 5.28E+02 (+) | 1.42E+03 / 2.50E+02 (+) | 1.09E+03 / 5.47E+01 (+) | 9.06E+02 / 3.35E+01 |
| F27 | 5.05E+02 / 7.78E+00 (-) | 5.01E+02 / 6.19E+00 (-) | 4.96E+02 / 8.53E+00 (-) | 5.12E+02 / 9.69E+00 (+) | 5.07E+02 / 7.34E+00 (+) | 5.05E+02 / 4.19E+00 |
| F28 | 3.17E+02 / 3.92E+01 (+) | 3.45E+02 / 5.66E+01 (+) | 3.36E+02 / 5.17E+01 (+) | 3.43E+02 / 5.88E+01 (+) | 3.16E+02 / 4.15E+01 (+) | 3.14E+02 / 3.67E+01 |
| F29 | 4.88E+02 / 3.48E+01 (+) | 5.07E+02 / 4.09E+01 (+) | 4.06E+02 / 5.10E+01 (-) | 5.26E+02 / 8.43E+01 (+) | 4.71E+02 / 3.80E+01 (+) | 4.37E+02 / 6.35E+00 |
| F30 | 2.33E+03 / 9.02E+02 (+) | 2.15E+03 / 1.47E+02 (+) | 2.04E+03 / 9.40E+01 (-) | 2.20E+03 / 3.34E+02 (+) | 2.10E+03 / 1.61E+02 (+) | 2.06E+03 / 7.02E+01 |
| +/-/= | 25/3/2 | 23/5/2 | 20/9/1 | 27/2/1 | 25/3/2 | |
| Ranking | 3.97 | 3.95 | 3.63 | 4.77 | 2.97 | 1.72 |
Table 5. Numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on IEEE CEC 2017 test suite when D = 50.
Each cell shows Mean / Std; (+), (-), and (=) indicate that GCIDE performed significantly better than, worse than, or comparably to the corresponding algorithm on that function.

| Function | JADE | jDE | FDDE | rank-DE | SHADE | GCIDE |
| --- | --- | --- | --- | --- | --- | --- |
| F01 | 4.69E-14 / 1.94E-14 (-) | 1.04E-07 / 2.73E-07 (-) | 2.61E-02 / 3.95E-02 (+) | 1.91E-07 / 3.97E-07 (-) | 4.88E-14 / 2.13E-14 (-) | 4.61E-07 / 1.20E-06 |
| F02 | 7.32E+11 / 3.27E+12 (+) | 7.56E+08 / 2.59E+09 (+) | 9.99E+11 / 3.63E+12 (+) | 9.07E+14 / 4.90E+15 (+) | 8.27E+12 / 3.23E+13 (+) | 2.15E+06 / 9.52E+06 |
| F03 | 3.60E+04 / 4.89E+04 (+) | 5.82E+02 / 1.03E+03 (+) | 4.68E+03 / 1.84E+03 (+) | 9.18E+00 / 1.23E+01 (+) | 3.39E-13 / 1.01E-13 (-) | 4.99E-09 / 2.13E-08 |
| F04 | 3.33E+01 / 3.80E+01 (-) | 6.71E+01 / 5.03E+01 (-) | 8.32E+01 / 5.31E+01 (+) | 5.07E+01 / 4.03E+01 (-) | 5.72E+01 / 4.79E+01 (-) | 7.98E+01 / 4.07E+01 |
| F05 | 5.27E+01 / 8.71E+00 (+) | 9.42E+01 / 1.05E+01 (+) | 2.28E+02 / 1.58E+01 (+) | 7.53E+01 / 1.60E+01 (+) | 3.23E+01 / 6.26E+00 (+) | 1.80E+01 / 2.04E+00 |
| F06 | 1.36E-13 / 4.63E-14 (-) | 1.14E-13 / 0.00E+00 (-) | 4.73E-08 / 1.42E-07 (-) | 1.34E-02 / 4.16E-02 (+) | 8.99E-04 / 1.27E-03 (+) | 1.03E-04 / 8.19E-05 |
| F07 | 1.03E+02 / 6.48E+00 (+) | 1.48E+02 / 1.29E+01 (+) | 2.93E+02 / 1.14E+01 (+) | 1.69E+02 / 7.23E+01 (+) | 8.20E+01 / 6.31E+00 (+) | 6.11E+01 / 1.26E+00 |
| F08 | 5.38E+01 / 9.40E+00 (+) | 9.24E+01 / 9.22E+00 (+) | 2.28E+02 / 1.75E+01 (+) | 8.41E+01 / 1.48E+01 (+) | 3.32E+01 / 5.25E+00 (+) | 1.86E+01 / 1.85E+00 |
| F09 | 1.18E+00 / 8.46E-01 (+) | 3.01E-02 / 8.70E-02 (+) | 2.75E-01 / 6.71E-01 (+) | 1.76E+01 / 1.88E+01 (+) | 8.50E-01 / 9.37E-01 (+) | 2.98E-03 / 1.63E-02 |
| F10 | 3.75E+03 / 2.99E+02 (+) | 4.99E+03 / 3.26E+02 (+) | 9.19E+03 / 3.92E+02 (+) | 8.93E+03 / 1.13E+03 (+) | 3.40E+03 / 3.07E+02 (+) | 3.05E+03 / 4.18E+02 |
| F11 | 1.17E+02 / 2.93E+01 (+) | 5.14E+01 / 1.26E+01 (-) | 5.28E+01 / 2.09E+01 (-) | 1.32E+02 / 4.77E+01 (+) | 1.23E+02 / 3.57E+01 (+) | 6.53E+01 / 1.29E+01 |
| F12 | 4.43E+03 / 2.35E+03 (+) | 5.38E+04 / 3.78E+04 (+) | 1.18E+05 / 5.25E+04 (+) | 4.42E+04 / 3.01E+04 (+) | 5.75E+03 / 3.40E+03 (+) | 2.94E+03 / 1.01E+03 |
| F13 | 3.29E+02 / 2.35E+02 (+) | 2.61E+03 / 3.06E+03 (+) | 6.25E+03 / 7.19E+03 (+) | 3.33E+02 / 4.26E+02 (+) | 3.10E+02 / 1.97E+02 (+) | 6.43E+01 / 3.31E+01 |
| F14 | 3.10E+04 / 7.14E+04 (+) | 6.32E+01 / 1.81E+01 (+) | 6.47E+01 / 5.17E+01 (+) | 4.22E+02 / 3.84E+02 (+) | 1.96E+02 / 6.75E+01 (+) | 3.13E+01 / 3.12E+00 |
| F15 | 3.45E+02 / 1.62E+02 (+) | 7.45E+01 / 8.25E+01 (+) | 5.70E+01 / 3.47E+01 (+) | 2.36E+02 / 3.67E+02 (+) | 3.60E+02 / 1.28E+02 (+) | 4.16E+01 / 1.26E+01 |
| F16 | 8.46E+02 / 2.16E+02 (+) | 9.81E+02 / 1.90E+02 (+) | 9.05E+02 / 4.87E+02 (+) | 9.64E+02 / 2.98E+02 (+) | 7.98E+02 / 1.83E+02 (+) | 4.05E+02 / 8.51E+01 |
| F17 | 5.75E+02 / 1.31E+02 (+) | 6.89E+02 / 1.14E+02 (+) | 6.28E+02 / 3.01E+02 (+) | 5.65E+02 / 2.17E+02 (+) | 5.05E+02 / 1.52E+02 (+) | 3.38E+02 / 9.46E+01 |
| F18 | 2.18E+04 / 8.69E+04 (+) | 2.23E+03 / 2.16E+03 (+) | 6.79E+03 / 5.09E+03 (+) | 9.66E+03 / 6.45E+03 (+) | 2.04E+02 / 1.40E+02 (+) | 4.20E+01 / 1.18E+01 |
| F19 | 1.31E+02 / 4.25E+01 (+) | 2.76E+01 / 8.63E+00 (+) | 1.02E+02 / 4.07E+02 (+) | 1.00E+02 / 4.04E+01 (+) | 1.54E+02 / 5.24E+01 (+) | 2.51E+01 / 4.08E+00 |
| F20 | 4.56E+02 / 9.56E+01 (+) | 4.99E+02 / 1.25E+02 (+) | 5.13E+02 / 2.46E+02 (+) | 4.09E+02 / 2.37E+02 (+) | 3.25E+02 / 1.14E+02 (+) | 1.38E+02 / 5.32E+01 |
| F21 | 2.53E+02 / 8.19E+00 (+) | 2.95E+02 / 1.04E+01 (+) | 4.34E+02 / 1.06E+01 (+) | 2.85E+02 / 1.49E+01 (+) | 2.34E+02 / 4.61E+00 (+) | 2.18E+02 / 2.49E+00 |
| F22 | 2.97E+03 / 2.05E+03 (+) | 5.23E+03 / 1.44E+03 (+) | 9.79E+03 / 3.32E+02 (+) | 8.97E+03 / 1.57E+03 (+) | 3.43E+03 / 1.39E+03 (+) | 1.01E+02 / 4.45E+00 |
| F23 | 4.77E+02 / 9.49E+00 (+) | 5.15E+02 / 1.28E+01 (+) | 6.36E+02 / 3.42E+01 (+) | 5.16E+02 / 1.88E+01 (+) | 4.57E+02 / 9.96E+00 (+) | 4.31E+02 / 9.23E+00 |
| F24 | 5.42E+02 / 9.53E+00 (+) | 5.79E+02 / 8.05E+00 (+) | 5.65E+02 / 1.32E+01 (+) | 5.84E+02 / 1.86E+01 (+) | 5.32E+02 / 6.70E+00 (+) | 5.09E+02 / 5.85E+00 |
| F25 | 5.17E+02 / 3.61E+01 (-) | 5.09E+02 / 3.99E+01 (-) | 4.89E+02 / 2.15E+01 (-) | 5.19E+02 / 3.90E+01 (-) | 5.22E+02 / 3.77E+01 (-) | 5.39E+02 / 2.39E+01 |
| F26 | 1.60E+03 / 9.58E+01 (+) | 1.99E+03 / 9.92E+01 (+) | 2.47E+03 / 6.61E+02 (+) | 2.14E+03 / 1.80E+02 (+) | 1.39E+03 / 7.11E+01 (+) | 1.23E+03 / 6.37E+01 |
| F27 | 5.61E+02 / 3.56E+01 (+) | 5.51E+02 / 2.72E+01 (+) | 5.42E+02 / 2.22E+01 (+) | 6.15E+02 / 6.31E+01 (+) | 5.48E+02 / 2.15E+01 (+) | 5.39E+02 / 1.01E+01 |
| F28 | 4.83E+02 / 2.27E+01 (-) | 4.81E+02 / 2.32E+01 (-) | 4.72E+02 / 2.18E+01 (-) | 4.93E+02 / 2.57E+01 (-) | 4.81E+02 / 2.45E+01 (-) | 5.00E+02 / 1.85E+01 |
| F29 | 4.84E+02 / 7.34E+01 (+) | 5.28E+02 / 6.15E+01 (+) | 3.85E+02 / 1.02E+02 (+) | 7.53E+02 / 2.40E+02 (+) | 4.63E+02 / 8.44E+01 (+) | 3.79E+02 / 2.08E+01 |
| F30 | 6.83E+05 / 7.15E+04 (+) | 6.11E+05 / 3.54E+04 (-) | 6.24E+05 / 4.12E+04 (-) | 7.31E+05 / 1.40E+05 (+) | 6.60E+05 / 9.10E+04 (+) | 6.29E+05 / 3.63E+04 |
| +/-/= | 25/5/0 | 23/7/0 | 25/5/0 | 26/4/0 | 25/5/0 | |
| Ranking | 3.57 | 3.62 | 4.30 | 4.63 | 3.02 | 1.87 |
Table 6. Comparison results of GCIDE with JADE, jDE, FDDE, rank-DE, and SHADE based on Wilcoxon signed-rank tests when D = 30.
| Algorithm | R+ | R− | p | Significance |
| --- | --- | --- | --- | --- |
| GCIDE vs. JADE | 394 | 41 | 0.000 | + |
| GCIDE vs. jDE | 418 | 17 | 0.000 | + |
| GCIDE vs. FDDE | 354 | 111 | 0.012 | + |
| GCIDE vs. rank-DE | 453 | 12 | 0.000 | + |
| GCIDE vs. SHADE | 376 | 59 | 0.001 | + |
Table 7. Comparison results of GCIDE with JADE, jDE, FDDE, rank-DE, and SHADE based on Wilcoxon signed-rank tests when D = 50.
| Algorithm | R+ | R− | p | Significance |
| --- | --- | --- | --- | --- |
| GCIDE vs. JADE | 439.5 | 25.5 | 0.000 | + |
| GCIDE vs. jDE | 404 | 61 | 0.000 | + |
| GCIDE vs. FDDE | 413 | 52 | 0.000 | + |
| GCIDE vs. rank-DE | 448 | 17 | 0.000 | + |
| GCIDE vs. SHADE | 431 | 34 | 0.000 | + |
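The R+ and R− columns in Tables 6 and 7 (and in Tables 10 and 11 below) are Wilcoxon signed-rank sums over paired per-function results. A minimal pure-Python sketch of how such sums are formed follows; this is an illustration rather than the paper's code. The orientation is a convention: here R+ collects the ranks of pairs where the first sample's value exceeds the second's, so passing the competitor's errors as the first argument makes R+ count GCIDE's wins. Zero differences are dropped and tied absolute differences share average ranks, as is standard.

```python
def signed_rank_sums(a, b):
    """Return (R+, R-): Wilcoxon signed-rank sums for paired samples a, b.

    Differences d_i = a_i - b_i; zero differences are discarded, the |d_i|
    are ranked (average ranks on ties), and ranks are summed separately for
    positive and negative differences.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1  # extend over a group of equal |d| (a tie group)
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus

# Toy example with five paired observations.
r_plus, r_minus = signed_rank_sums([1, 2, 3, 4, 5], [2, 1, 5, 3, 9])
```

With 30 functions and no zero differences, R+ + R− = 30·31/2 = 465, which matches the row sums in Table 7; the p-value is then obtained from the sampling distribution of min(R+, R−).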
Table 8. Numerical results of GCIDE, EPSO, MKE, HGSA, HLWCA, and AOA on IEEE CEC 2017 test suite when D = 30.
Each cell shows Mean / Std; (+), (-), and (=) indicate that GCIDE performed significantly better than, worse than, or comparably to the corresponding algorithm on that function.

| Function | EPSO | MKE | HGSA | HLWCA | AOA | GCIDE |
| --- | --- | --- | --- | --- | --- | --- |
| F01 | 2.82E+00 / 3.33E+00 (+) | 1.18E-14 / 5.39E-15 (-) | 3.52E+02 / 3.57E+02 (+) | 7.77E+02 / 1.22E+03 (+) | 4.23E+10 / 6.20E+09 (+) | 1.52E-14 / 3.61E-15 |
| F02 | 2.67E-01 / 5.21E-01 (+) | 7.33E-01 / 3.66E+00 (+) | 7.11E+01 / 4.89E+01 (+) | 2.48E+20 / 1.34E+21 (+) | 5.73E+51 / 3.14E+52 (+) | 1.00E-01 / 4.03E-01 |
| F03 | 2.02E-05 / 1.33E-05 (+) | 9.08E-04 / 1.32E-03 (+) | 3.19E+04 / 3.19E+03 (+) | 2.14E+03 / 1.60E+03 (+) | 7.42E+04 / 6.92E+03 (+) | 2.01E-13 / 1.08E-13 |
| F04 | 7.52E-01 / 4.39E-01 (-) | 2.48E+01 / 2.99E+01 (-) | 1.32E+02 / 4.03E+00 (+) | 7.31E+01 / 3.21E+01 (+) | 9.20E+03 / 2.19E+03 (+) | 2.94E+01 / 3.10E+01 |
| F05 | 6.25E+01 / 5.41E+00 (+) | 6.19E+01 / 1.85E+01 (+) | 1.12E+02 / 6.99E+00 (+) | 8.25E+01 / 3.01E+01 (+) | 3.22E+02 / 3.43E+01 (+) | 7.96E+00 / 1.58E+00 |
| F06 | 1.37E-12 / 3.86E-13 (-) | 8.14E-06 / 1.83E-05 (-) | 2.92E+00 / 1.84E+00 (+) | 1.54E-03 / 2.77E-03 (+) | 6.91E+01 / 5.76E+00 (+) | 8.48E-06 / 7.50E-06 |
| F07 | 1.17E+02 / 7.47E+00 (+) | 1.06E+02 / 1.56E+01 (+) | 3.85E+01 / 1.32E+00 (+) | 1.42E+02 / 2.19E+01 (+) | 6.00E+02 / 5.54E+01 (+) | 3.58E+01 / 1.06E+00 |
| F08 | 4.12E+01 / 6.93E+00 (+) | 7.32E+01 / 1.94E+01 (+) | 1.24E+02 / 6.70E+00 (+) | 8.00E+01 / 2.70E+01 (+) | 2.48E+02 / 2.73E+01 (+) | 8.41E+00 / 1.40E+00 |
| F09 | 1.48E+01 / 6.30E+00 (+) | 3.30E-01 / 4.20E-01 (+) | 3.41E-14 / 5.30E-14 (-) | 9.00E+01 / 1.41E+02 (+) | 4.95E+03 / 9.29E+02 (+) | 5.31E-14 / 5.77E-14 |
| F10 | 1.65E+03 / 2.50E+02 (+) | 3.31E+03 / 7.32E+02 (+) | 3.18E+03 / 2.65E+02 (+) | 3.37E+03 / 6.61E+02 (+) | 5.27E+03 / 5.50E+02 (+) | 1.53E+03 / 3.10E+02 |
| F11 | 4.85E+01 / 1.25E+01 (+) | 4.14E+01 / 2.72E+01 (+) | 8.90E+01 / 1.12E+01 (+) | 6.69E+01 / 3.31E+01 (+) | 2.63E+03 / 1.01E+03 (+) | 1.84E+01 / 2.35E+01 |
| F12 | 2.18E+03 / 3.99E+02 (+) | 9.88E+03 / 1.19E+04 (+) | 1.93E+04 / 7.47E+03 (+) | 2.76E+05 / 7.12E+05 (+) | 8.03E+09 / 2.02E+09 (+) | 9.93E+02 / 3.87E+02 |
| F13 | 8.49E+01 / 2.33E+01 (+) | 1.15E+02 / 3.55E+02 (+) | 1.78E+02 / 1.22E+02 (+) | 7.12E+02 / 4.21E+02 (+) | 4.05E+04 / 1.34E+04 (+) | 1.48E+01 / 6.60E+00 |
| F14 | 8.89E+01 / 2.03E+01 (+) | 5.53E+01 / 1.51E+01 (+) | 1.96E+02 / 2.44E+01 (+) | 5.55E+02 / 4.25E+02 (+) | 5.17E+04 / 4.80E+04 (+) | 2.24E+01 / 1.22E+00 |
| F15 | 6.77E+01 / 1.92E+01 (+) | 2.60E+01 / 1.38E+01 (+) | 1.56E+02 / 6.48E+01 (+) | 6.60E+02 / 4.54E+02 (+) | 1.82E+04 / 6.10E+03 (+) | 3.48E+00 / 1.31E+00 |
| F16 | 4.14E+02 / 1.46E+02 (+) | 7.95E+02 / 2.32E+02 (+) | 1.34E+03 / 1.57E+02 (+) | 4.13E+02 / 1.50E+02 (+) | 2.84E+03 / 1.33E+03 (+) | 1.40E+02 / 9.45E+01 |
| F17 | 1.27E+02 / 2.54E+01 (+) | 2.31E+02 / 1.33E+02 (+) | 7.12E+02 / 1.09E+02 (+) | 1.88E+02 / 7.29E+01 (+) | 1.10E+03 / 3.54E+02 (+) | 3.14E+01 / 7.54E+00 |
| F18 | 2.50E+04 / 1.26E+04 (+) | 5.34E+01 / 2.24E+01 (+) | 1.72E+04 / 4.37E+03 (+) | 9.90E+04 / 8.30E+04 (+) | 6.59E+05 / 5.18E+05 (+) | 2.19E+01 / 1.28E+00 |
| F19 | 6.74E+01 / 2.47E+01 (+) | 1.45E+01 / 6.56E+00 (+) | 1.13E+02 / 4.60E+01 (+) | 7.38E+02 / 9.86E+02 (+) | 1.07E+06 / 1.08E+05 (+) | 7.03E+00 / 1.59E+00 |
| F20 | 2.56E+02 / 4.56E+01 (+) | 2.52E+02 / 1.21E+02 (+) | 8.43E+02 / 1.33E+02 (+) | 2.71E+02 / 1.18E+02 (+) | 7.26E+02 / 1.81E+02 (+) | 4.07E+01 / 9.98E+00 |
| F21 | 4.18E+01 / 3.06E+01 (-) | 2.69E+02 / 1.70E+01 (+) | 1.55E+02 / 1.07E+01 (-) | 1.50E+02 / 0.00E+00 (-) | 5.09E+02 / 4.82E+01 (+) | 2.08E+02 / 1.43E+00 |
| F22 | 4.20E+01 / 6.06E+00 (-) | 9.52E+02 / 1.48E+03 (+) | 1.47E+02 / 7.87E+00 (+) | 1.50E+02 / 0.00E+00 (+) | 6.02E+03 / 5.71E+02 (+) | 1.00E+02 / 2.27E-13 |
| F23 | 5.39E+02 / 7.77E+00 (+) | 4.14E+02 / 1.42E+01 (+) | 4.02E+02 / 1.13E+02 (+) | 5.67E+02 / 1.41E+01 (+) | 1.19E+03 / 1.80E+02 (+) | 3.45E+02 / 3.29E+00 |
| F24 | 2.00E+02 / 1.10E-12 (-) | 4.81E+02 / 1.45E+01 (+) | 2.00E+02 / 1.41E-09 (-) | 6.62E+02 / 4.03E+02 (+) | 1.50E+03 / 2.31E+02 (+) | 4.22E+02 / 2.62E+00 |
| F25 | 4.01E+02 / 1.23E+00 (+) | 3.87E+02 / 1.23E-01 (+) | 2.00E+02 / 2.67E-08 (-) | 3.71E+02 / 1.27E+02 (-) | 1.85E+03 / 4.54E+02 (+) | 3.87E+02 / 5.06E-02 |
| F26 | 2.00E+02 / 9.62E-11 (-) | 1.60E+03 / 3.18E+02 (+) | 2.00E+02 / 2.40E-08 (-) | 1.64E+03 / 1.21E+03 (+) | 8.02E+03 / 1.18E+03 (+) | 9.06E+02 / 3.35E+01 |
| F27 | 8.59E+02 / 3.96E+01 (+) | 5.10E+02 / 9.15E+00 (+) | 1.16E+03 / 2.05E+02 (+) | 7.79E+02 / 4.60E+01 (+) | 1.80E+03 / 3.48E+02 (+) | 5.05E+02 / 4.19E+00 |
| F28 | 4.42E+02 / 3.24E+01 (+) | 3.55E+02 / 6.15E+01 (+) | 2.00E+02 / 4.00E-08 (-) | 5.23E+02 / 3.49E+02 (+) | 3.40E+03 / 6.67E+02 (+) | 3.14E+02 / 3.67E+01 |
| F29 | 4.34E+02 / 5.15E+01 (-) | 6.18E+02 / 1.72E+02 (+) | 4.72E+02 / 1.89E+01 (+) | 5.02E+02 / 9.81E+01 (+) | 3.26E+03 / 1.12E+03 (+) | 4.37E+02 / 6.35E+00 |
| F30 | 2.22E+03 / 5.64E+02 (+) | 2.54E+03 / 7.90E+02 (+) | 2.60E+04 / 6.35E+03 (+) | 7.73E+03 / 1.23E+04 (+) | 2.70E+08 / 7.74E+08 (+) | 2.06E+03 / 7.02E+01 |
| +/-/= | 23/7/0 | 27/3/0 | 24/6/0 | 28/2/0 | 30/0/0 | |
| Ranking | 2.59 | 2.98 | 3.62 | 4.21 | 5.97 | 1.64 |
Table 9. Numerical results of GCIDE, EPSO, MKE, HGSA, HLWCA, and AOA on IEEE CEC 2017 test suite when D = 50.
Each cell shows Mean / Std; (+), (-), and (=) indicate that GCIDE performed significantly better than, worse than, or comparably to the corresponding algorithm on that function.

| Function | EPSO | MKE | HGSA | HLWCA | AOA | GCIDE |
| --- | --- | --- | --- | --- | --- | --- |
| F01 | 3.92E+03 / 9.21E+01 (+) | 2.80E-04 / 8.09E-04 (+) | 1.16E+03 / 1.05E+03 (+) | 5.89E+03 / 6.69E+03 (+) | 1.05E+11 / 5.21E+09 (+) | 4.61E-07 / 1.20E-06 |
| F02 | 3.35E+01 / 2.14E+01 (-) | 1.45E+08 / 3.43E+08 (+) | 4.62E+13 / 6.49E+13 (+) | 1.07E+35 / 3.92E+35 (+) | 3.46E+84 / 1.28E+85 (+) | 2.15E+06 / 9.52E+06 |
| F03 | 1.33E+01 / 9.63E+00 (+) | 8.54E+01 / 7.81E+01 (+) | 9.37E+04 / 5.95E+03 (+) | 1.58E+04 / 5.78E+03 (+) | 1.68E+05 / 1.56E+04 (+) | 4.99E-09 / 2.13E-08 |
| F04 | 6.26E+01 / 2.90E+01 (-) | 4.10E+01 / 4.89E+01 (-) | 7.14E+01 / 3.07E+01 (-) | 8.34E+01 / 2.23E+01 (+) | 2.70E+04 / 4.55E+03 (+) | 7.98E+01 / 4.07E+01 |
| F05 | 1.45E+02 / 1.60E+01 (+) | 1.53E+02 / 3.25E+01 (+) | 2.58E+02 / 1.24E+01 (+) | 1.72E+02 / 4.40E+01 (+) | 5.85E+02 / 3.43E+01 (+) | 1.80E+01 / 2.04E+00 |
| F06 | 7.74E-11 / 1.06E-10 (-) | 1.03E-02 / 4.78E-02 (+) | 5.93E+00 / 1.30E+00 (+) | 2.32E+00 / 2.52E+00 (+) | 8.66E+01 / 4.67E+00 (+) | 1.03E-04 / 8.19E-05 |
| F07 | 2.56E+02 / 1.55E+01 (+) | 2.38E+02 / 4.86E+01 (+) | 6.72E+01 / 1.81E+00 (+) | 3.28E+02 / 8.08E+01 (+) | 1.13E+03 / 7.15E+01 (+) | 6.11E+01 / 1.26E+00 |
| F08 | 9.33E+01 / 1.17E+01 (+) | 1.51E+02 / 3.18E+01 (+) | 2.99E+02 / 1.55E+01 (+) | 1.85E+02 / 4.19E+01 (+) | 6.21E+02 / 3.68E+01 (+) | 1.86E+01 / 1.85E+00 |
| F09 | 1.67E+02 / 5.47E+01 (+) | 2.71E+01 / 5.98E+01 (+) | 6.82E-14 / 5.66E-14 (-) | 1.16E+03 / 6.98E+02 (+) | 2.21E+04 / 3.02E+03 (+) | 2.98E-03 / 1.63E-02 |
| F10 | 3.90E+03 / 2.82E+02 (+) | 6.99E+03 / 1.12E+03 (+) | 5.06E+03 / 3.36E+02 (+) | 7.14E+03 / 1.04E+03 (+) | 1.13E+04 / 5.33E+02 (+) | 3.05E+03 / 4.18E+02 |
| F11 | 2.29E+02 / 3.15E+01 (+) | 8.82E+01 / 2.57E+01 (+) | 1.56E+02 / 1.71E+01 (+) | 1.82E+02 / 5.69E+01 (+) | 1.52E+04 / 2.46E+03 (+) | 6.53E+01 / 1.29E+01 |
| F12 | 3.50E+03 / 4.96E+02 (+) | 6.88E+04 / 4.40E+04 (+) | 4.23E+04 / 8.61E+03 (+) | 4.86E+05 / 2.13E+06 (+) | 6.55E+10 / 1.22E+10 (+) | 2.94E+03 / 1.01E+03 |
| F13 | 3.01E+03 / 1.89E+03 (+) | 9.50E+02 / 2.15E+03 (+) | 1.66E+04 / 2.47E+03 (+) | 2.68E+04 / 1.76E+04 (+) | 4.52E+09 / 4.07E+09 (+) | 6.43E+01 / 3.31E+01 |
| F14 | 9.63E+01 / 1.95E+01 (+) | 1.58E+02 / 8.80E+01 (+) | 1.75E+02 / 1.34E+01 (+) | 7.87E+02 / 8.89E+02 (+) | 6.91E+05 / 7.37E+05 (+) | 3.13E+01 / 3.12E+00 |
| F15 | 2.89E+03 / 7.75E+01 (+) | 8.63E+01 / 5.12E+01 (+) | 2.32E+03 / 6.67E+02 (+) | 3.90E+03 / 2.98E+03 (+) | 3.03E+04 / 7.94E+03 (+) | 4.16E+01 / 1.26E+01 |
| F16 | 8.81E+02 / 1.84E+02 (+) | 1.43E+03 / 3.41E+02 (+) | 1.76E+03 / 2.89E+02 (+) | 1.23E+03 / 3.61E+02 (+) | 5.40E+03 / 1.36E+03 (+) | 4.05E+02 / 8.51E+01 |
| F17 | 5.70E+02 / 1.31E+02 (+) | 9.54E+02 / 2.91E+02 (+) | 1.24E+03 / 1.23E+02 (+) | 7.75E+02 / 2.78E+02 (+) | 2.42E+03 / 3.33E+02 (+) | 3.38E+02 / 9.46E+01 |
| F18 | 6.85E+04 / 3.04E+04 (+) | 4.99E+03 / 3.45E+03 (+) | 8.91E+04 / 2.11E+04 (+) | 3.42E+05 / 2.49E+05 (+) | 1.67E+07 / 1.23E+07 (+) | 4.20E+01 / 1.18E+01 |
| F19 | 1.34E+02 / 4.23E+01 (+) | 5.70E+01 / 4.20E+01 (+) | 3.97E+03 / 1.10E+03 (+) | 7.42E+03 / 8.31E+03 (+) | 4.66E+05 / 1.33E+04 (+) | 2.51E+01 / 4.08E+00 |
| F20 | 7.22E+02 / 1.17E+02 (+) | 8.72E+02 / 3.12E+02 (+) | 1.67E+03 / 1.87E+02 (+) | 8.32E+02 / 2.92E+02 (+) | 1.51E+03 / 2.71E+02 (+) | 1.38E+02 / 5.32E+01 |
| F21 | 7.15E+01 / 2.55E+01 (-) | 3.52E+02 / 3.20E+01 (+) | 1.03E+02 / 3.32E+01 (-) | 9.83E+01 / 3.96E+01 (-) | 9.86E+02 / 7.80E+01 (+) | 2.18E+02 / 2.49E+00 |
| F22 | 1.10E+02 / 1.83E+01 (+) | 7.60E+03 / 1.18E+03 (+) | 3.19E+02 / 1.03E+01 (+) | 1.94E+02 / 4.93E+01 (+) | 1.29E+04 / 7.10E+02 (+) | 1.01E+02 / 4.45E+00 |
| F23 | 8.42E+02 / 2.58E+01 (+) | 5.83E+02 / 4.95E+01 (+) | 7.27E+02 / 1.83E+02 (+) | 9.33E+02 / 5.04E+01 (+) | 2.23E+03 / 3.01E+02 (+) | 4.31E+02 / 9.23E+00 |
| F24 | 2.00E+02 / 8.64E-11 (-) | 6.41E+02 / 3.63E+01 (+) | 2.00E+02 / 5.32E-13 (-) | 5.33E+02 / 5.62E+02 (+) | 2.46E+03 / 3.87E+02 (+) | 5.09E+02 / 5.85E+00 |
| F25 | 4.84E+02 / 2.33E+01 (-) | 5.05E+02 / 3.93E+01 (-) | 2.00E+02 / 5.09E-12 (-) | 5.14E+02 / 4.20E+01 (-) | 1.13E+04 / 1.61E+03 (+) | 5.39E+02 / 2.39E+01 |
| F26 | 2.00E+02 / 1.15E-09 (-) | 2.57E+03 / 3.87E+02 (+) | 2.00E+02 / 2.29E-12 (-) | 2.00E+02 / 0.00E+00 (-) | 1.35E+04 / 1.08E+03 (+) | 1.23E+03 / 6.37E+01 |
| F27 | 1.37E+03 / 9.08E+01 (+) | 5.95E+02 / 6.29E+01 (+) | 2.44E+03 / 4.49E+02 (+) | 1.00E+03 / 8.10E+01 (+) | 4.30E+03 / 6.60E+02 (+) | 5.39E+02 / 1.01E+01 |
| F28 | 4.39E+02 / 2.43E+01 (-) | 4.89E+02 / 2.29E+01 (-) | 2.00E+02 / 4.15E-12 (-) | 4.99E+02 / 3.60E+01 (-) | 8.56E+03 / 8.79E+02 (+) | 5.00E+02 / 1.85E+01 |
| F29 | 1.02E+03 / 1.47E+02 (+) | 8.77E+02 / 2.53E+02 (+) | 6.40E+02 / 3.26E+01 (+) | 1.13E+03 / 2.82E+02 (+) | 1.71E+04 / 7.45E+03 (+) | 3.79E+02 / 2.08E+01 |
| F30 | 1.78E+04 / 4.01E+03 (-) | 6.54E+05 / 5.71E+04 (+) | 8.74E+04 / 7.57E+03 (-) | 2.30E+04 / 6.34E+03 (-) | 3.11E+08 / 5.75E+07 (+) | 6.29E+05 / 3.63E+04 |
| +/-/= | 21/9/0 | 27/3/0 | 22/8/0 | 25/5/0 | 30/0/0 | |
| Ranking | 2.57 | 3.17 | 3.40 | 4.07 | 5.97 | 1.83 |
Table 10. Comparison results of GCIDE with EPSO, MKE, HGSA, HLWCA, and AOA based on Wilcoxon signed-rank tests when D = 30.
| Algorithm | R+ | R− | p | Significance |
| --- | --- | --- | --- | --- |
| GCIDE vs. EPSO | 339 | 126 | 0.028 | + |
| GCIDE vs. MKE | 447 | 18 | 0.000 | + |
| GCIDE vs. HGSA | 366 | 99 | 0.006 | + |
| GCIDE vs. HLWCA | 403 | 62 | 0.000 | + |
| GCIDE vs. AOA | 435 | 0 | 0.000 | + |
Table 11. Comparison results of GCIDE with EPSO, MKE, HGSA, HLWCA, and AOA based on Wilcoxon signed-rank tests when D = 50.
| Algorithm | R+ | R− | p | Significance |
| --- | --- | --- | --- | --- |
| GCIDE vs. EPSO | 439.5 | 25.5 | 0.000 | + |
| GCIDE vs. MKE | 404 | 61 | 0.000 | + |
| GCIDE vs. HGSA | 413 | 52 | 0.000 | + |
| GCIDE vs. HLWCA | 448 | 17 | 0.000 | + |
| GCIDE vs. AOA | 435 | 0 | 0.000 | + |
Table 12. Numerical results of GCIDE and its four variants above on IEEE CEC 2017 test suite with D = 30.
Entries are Mean (Std); (+), (−), and (=) indicate that GCIDE performs significantly better than, worse than, or comparably to the corresponding variant.
Function | GCIDE-1 | GCIDE-2 | GCIDE (Rand) | GCIDE (Best) | GCIDE
F01 | 1.42 × 10^−14 (0.00 × 10^0) (−) | 1.56 × 10^−14 (4.34 × 10^−15) (+) | 7.87 × 10^0 (2.88 × 10^1) (+) | 9.76 × 10^−14 (1.81 × 10^−13) (+) | 1.52 × 10^−14 (3.61 × 10^−15)
F02 | 1.77 × 10^4 (9.70 × 10^4) (+) | 3.33 × 10^−2 (1.83 × 10^−1) (−) | 3.33 × 10^10 (9.78 × 10^10) (+) | 2.90 × 10^21 (1.59 × 10^22) (+) | 1.00 × 10^−1 (4.03 × 10^−1)
F03 | 3.06 × 10^−5 (1.68 × 10^−4) (+) | 1.31 × 10^−13 (4.76 × 10^−14) (−) | 5.21 × 10^3 (2.00 × 10^4) (+) | 6.76 × 10^−13 (3.92 × 10^−13) (+) | 2.01 × 10^−13 (1.08 × 10^−13)
F04 | 2.03 × 10^1 (2.82 × 10^1) (−) | 3.80 × 10^1 (2.92 × 10^1) (+) | 6.88 × 10^1 (3.26 × 10^1) (+) | 2.31 × 10^1 (2.83 × 10^1) (−) | 2.94 × 10^1 (3.10 × 10^1)
F05 | 1.53 × 10^1 (2.78 × 10^0) (+) | 1.00 × 10^1 (1.79 × 10^0) (+) | 3.50 × 10^1 (8.77 × 10^0) (+) | 5.55 × 10^1 (1.67 × 10^1) (+) | 7.96 × 10^0 (1.58 × 10^0)
F06 | 5.92 × 10^−4 (1.02 × 10^−3) (+) | 4.41 × 10^−6 (3.79 × 10^−6) (−) | 2.15 × 10^−6 (2.51 × 10^−6) (−) | 6.25 × 10^−1 (5.60 × 10^−1) (+) | 8.48 × 10^−6 (7.50 × 10^−6)
F07 | 4.35 × 10^1 (2.08 × 10^0) (+) | 4.09 × 10^1 (2.48 × 10^0) (+) | 6.15 × 10^1 (9.47 × 10^0) (+) | 9.58 × 10^1 (1.76 × 10^1) (+) | 3.58 × 10^1 (1.06 × 10^0)
F08 | 1.73 × 10^1 (3.94 × 10^0) (+) | 1.01 × 10^1 (2.01 × 10^0) (+) | 3.33 × 10^1 (8.69 × 10^0) (+) | 5.59 × 10^1 (1.48 × 10^1) (+) | 8.41 × 10^0 (1.40 × 10^0)
F09 | 1.78 × 10^−1 (3.52 × 10^−1) (+) | 3.79 × 10^−14 (5.45 × 10^−14) (−) | 1.74 × 10^−13 (1.54 × 10^−13) (+) | 5.06 × 10^1 (4.89 × 10^1) (+) | 5.31 × 10^−14 (5.77 × 10^−14)
F10 | 1.75 × 10^3 (2.15 × 10^2) (+) | 1.57 × 10^3 (2.39 × 10^2) (+) | 1.99 × 10^3 (4.13 × 10^2) (+) | 2.89 × 10^3 (8.90 × 10^2) (+) | 1.53 × 10^3 (3.10 × 10^2)
F11 | 5.41 × 10^1 (3.22 × 10^1) (+) | 9.89 × 10^0 (1.22 × 10^1) (−) | 1.26 × 10^1 (4.63 × 10^0) (−) | 9.22 × 10^1 (3.92 × 10^1) (+) | 1.84 × 10^1 (2.35 × 10^1)
F12 | 1.32 × 10^3 (6.92 × 10^2) (+) | 1.09 × 10^3 (3.06 × 10^2) (+) | 1.01 × 10^3 (8.68 × 10^2) (+) | 1.14 × 10^4 (8.07 × 10^3) (+) | 9.93 × 10^2 (3.87 × 10^2)
F13 | 4.34 × 10^1 (3.67 × 10^1) (+) | 1.49 × 10^1 (6.82 × 10^0) (+) | 2.45 × 10^1 (8.06 × 10^0) (+) | 2.45 × 10^2 (2.01 × 10^2) (+) | 1.48 × 10^1 (6.60 × 10^0)
F14 | 3.06 × 10^1 (8.66 × 10^0) (+) | 2.31 × 10^1 (1.49 × 10^0) (+) | 1.14 × 10^1 (7.53 × 10^0) (−) | 1.17 × 10^2 (3.53 × 10^1) (+) | 2.24 × 10^1 (1.22 × 10^0)
F15 | 2.38 × 10^1 (1.57 × 10^1) (+) | 3.80 × 10^0 (1.37 × 10^0) (+) | 1.05 × 10^1 (3.46 × 10^0) (+) | 9.90 × 10^1 (7.76 × 10^1) (+) | 3.48 × 10^0 (1.31 × 10^0)
F16 | 3.71 × 10^2 (1.22 × 10^2) (+) | 1.43 × 10^2 (8.37 × 10^1) (+) | 2.08 × 10^2 (1.19 × 10^2) (+) | 6.72 × 10^2 (2.95 × 10^2) (+) | 1.40 × 10^2 (9.45 × 10^1)
F17 | 5.63 × 10^1 (3.25 × 10^1) (+) | 2.75 × 10^1 (6.92 × 10^0) (−) | 3.53 × 10^1 (1.16 × 10^1) (+) | 1.98 × 10^2 (1.21 × 10^2) (+) | 3.14 × 10^1 (7.54 × 10^0)
F18 | 4.24 × 10^1 (2.20 × 10^1) (+) | 2.17 × 10^1 (1.18 × 10^0) (−) | 2.40 × 10^1 (4.46 × 10^0) (+) | 5.77 × 10^1 (2.33 × 10^1) (+) | 2.19 × 10^1 (1.28 × 10^0)
F19 | 1.45 × 10^1 (8.76 × 10^0) (+) | 7.24 × 10^0 (1.62 × 10^0) (+) | 6.53 × 10^0 (1.66 × 10^0) (−) | 6.40 × 10^1 (3.18 × 10^1) (+) | 7.03 × 10^0 (1.59 × 10^0)
F20 | 9.36 × 10^1 (5.40 × 10^1) (+) | 4.19 × 10^1 (7.19 × 10^0) (+) | 2.98 × 10^1 (1.52 × 10^1) (−) | 1.86 × 10^2 (1.25 × 10^2) (+) | 4.07 × 10^1 (9.98 × 10^0)
F21 | 2.15 × 10^2 (2.83 × 10^0) (+) | 2.08 × 10^2 (2.19 × 10^0) (+) | 2.33 × 10^2 (7.95 × 10^0) (+) | 2.60 × 10^2 (1.59 × 10^1) (+) | 2.08 × 10^2 (1.43 × 10^0)
F22 | 1.00 × 10^2 (1.85 × 10^−13) (=) | 1.00 × 10^2 (2.27 × 10^−13) (=) | 1.00 × 10^2 (2.63 × 10^−10) (=) | 1.00 × 10^2 (5.68 × 10^2) (=) | 1.00 × 10^2 (2.27 × 10^−13)
F23 | 3.59 × 10^2 (5.76 × 10^0) (+) | 3.54 × 10^2 (3.65 × 10^0) (+) | 3.77 × 10^2 (9.51 × 10^0) (+) | 4.08 × 10^2 (1.86 × 10^1) (+) | 3.45 × 10^2 (3.29 × 10^0)
F24 | 4.29 × 10^2 (3.30 × 10^0) (+) | 4.20 × 10^2 (2.91 × 10^0) (−) | 4.49 × 10^2 (1.02 × 10^1) (+) | 4.74 × 10^2 (1.57 × 10^1) (+) | 4.22 × 10^2 (2.62 × 10^0)
F25 | 3.87 × 10^2 (3.67 × 10^−1) (=) | 3.87 × 10^2 (4.04 × 10^−2) (=) | 3.87 × 10^2 (7.57 × 10^−2) (=) | 3.87 × 10^2 (1.59 × 10^0) (=) | 3.87 × 10^2 (5.06 × 10^−2)
F26 | 1.09 × 10^3 (6.72 × 10^1) (+) | 9.83 × 10^2 (3.78 × 10^1) (+) | 1.22 × 10^3 (1.47 × 10^2) (+) | 1.70 × 10^3 (3.78 × 10^2) (+) | 9.06 × 10^2 (3.35 × 10^1)
F27 | 5.11 × 10^2 (6.30 × 10^0) (+) | 5.05 × 10^2 (5.63 × 10^0) (+) | 4.98 × 10^2 (1.02 × 10^1) (−) | 5.29 × 10^2 (1.08 × 10^1) (+) | 5.05 × 10^2 (4.19 × 10^0)
F28 | 3.48 × 10^2 (5.63 × 10^1) (+) | 3.16 × 10^2 (3.26 × 10^1) (+) | 3.25 × 10^2 (4.20 × 10^1) (+) | 3.53 × 10^2 (6.26 × 10^1) (+) | 3.14 × 10^2 (3.67 × 10^1)
F29 | 4.61 × 10^2 (1.45 × 10^1) (+) | 4.30 × 10^2 (1.13 × 10^1) (−) | 4.35 × 10^2 (1.43 × 10^1) (−) | 6.97 × 10^2 (1.23 × 10^2) (+) | 4.37 × 10^2 (6.35 × 10^0)
F30 | 2.22 × 10^3 (1.90 × 10^2) (+) | 2.07 × 10^3 (4.48 × 10^1) (+) | 2.18 × 10^3 (7.99 × 10^1) (+) | 3.36 × 10^3 (1.45 × 10^3) (+) | 2.06 × 10^3 (7.02 × 10^1)
+/−/= | 26/2/2 | 18/10/2 | 21/7/2 | 27/1/2 | —
Ranking | 3.42 | 2.02 | 3.05 | 4.73 | 1.78
Table 13. Numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on the tension/compression spring design problem.
Algorithms | Mean | Std | Best | Worst
GCIDE | 2.11 × 10^−1 | 6.91 × 10^−17 | 2.11 × 10^−1 | 2.11 × 10^−1
JADE | 2.25 × 10^−1 | 9.69 × 10^−1 | 2.20 × 10^−1 | 2.43 × 10^−1
jDE | 2.69 × 10^−1 | 5.19 × 10^−1 | 2.28 × 10^−1 | 2.91 × 10^−1
FDDE | 2.11 × 10^−1 | 8.15 × 10^−17 | 2.11 × 10^−1 | 2.11 × 10^−1
rank-DE | 2.11 × 10^−1 | 7.73 × 10^−17 | 2.11 × 10^−1 | 2.11 × 10^−1
SHADE | 2.11 × 10^−1 | 8.93 × 10^−17 | 2.11 × 10^−1 | 2.11 × 10^−1
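Table 13 uses the classical tension/compression spring design benchmark: minimize the spring weight f(x) = (x3 + 2)·x2·x1² over the wire diameter x1, mean coil diameter x2, and number of active coils x3, subject to four inequality constraints on shear stress, surge frequency, and geometry. The sketch below encodes the formulation common in the literature; the paper's exact variant and constraint handling are not shown here, so the objective scale need not match the table's values.

```python
def spring_design(x):
    """Objective value and constraint values (g_i <= 0 is feasible) for the
    tension/compression spring problem, standard literature formulation."""
    d, D, N = x  # wire diameter, mean coil diameter, number of active coils
    f = (N + 2) * D * d ** 2
    g = [
        1 - D ** 3 * N / (71785 * d ** 4),                   # shear stress
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                           # deflection
        1 - 140.45 * d / (D ** 2 * N),                       # surge frequency
        (d + D) / 1.5 - 1,                                   # outer diameter
    ]
    return f, g
```

Evaluating at the widely reported near-optimal point (0.051689, 0.356718, 11.288966) gives f ≈ 0.012665 with all four constraints satisfied to within the precision of the published digits.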
Table 14. Numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on the side collision problem of automobile.
Algorithms | Mean | Std | Best | Worst
GCIDE | 1.58 × 10^1 | 1.09 × 10^−1 | 1.56 × 10^1 | 1.62 × 10^1
JADE | 2.09 × 10^1 | 1.81 × 10^−1 | 2.07 × 10^1 | 2.11 × 10^1
jDE | 2.82 × 10^1 | 1.39 × 10^0 | 2.55 × 10^1 | 3.12 × 10^1
FDDE | 2.09 × 10^1 | 1.86 × 10^−1 | 2.07 × 10^1 | 2.11 × 10^1
rank-DE | 2.74 × 10^1 | 1.36 × 10^0 | 2.49 × 10^1 | 3.08 × 10^1
SHADE | 2.77 × 10^1 | 1.61 × 10^0 | 2.38 × 10^1 | 3.06 × 10^1
Table 15. Numerical results of GCIDE, JADE, jDE, FDDE, rank-DE, and SHADE on the Spread Spectrum Radar Polyphase Code Design when D = 30.
Algorithms | Mean | Std | Best | Worst
GCIDE | 2.48 × 10^0 | 9.64 × 10^−2 | 2.10 × 10^0 | 2.64 × 10^0
JADE | 2.52 × 10^0 | 1.27 × 10^−1 | 2.20 × 10^0 | 2.74 × 10^0
jDE | 2.79 × 10^0 | 1.60 × 10^−1 | 2.33 × 10^0 | 3.02 × 10^0
FDDE | 2.96 × 10^0 | 1.02 × 10^−1 | 2.74 × 10^0 | 3.16 × 10^0
rank-DE | 2.70 × 10^0 | 3.95 × 10^−1 | 2.33 × 10^0 | 2.93 × 10^0
SHADE | 2.63 × 10^0 | 1.27 × 10^−1 | 2.27 × 10^0 | 2.85 × 10^0

Tian, M.; Gao, Y.; He, X.; Zhang, Q.; Meng, Y. Differential Evolution with Group-Based Competitive Control Parameter Setting for Numerical Optimization. Mathematics 2023, 11, 3355. https://doi.org/10.3390/math11153355
