Article

Differential Evolution Algorithm with Three Mutation Operators for Global Optimization

1 Engineering Training Center, Nanjing University of Information Science & Technology, Nanjing 210044, China
2 School of Management Science and Engineering, Nanjing University of Information Science & Technology, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2311; https://doi.org/10.3390/math12152311
Submission received: 18 June 2024 / Revised: 14 July 2024 / Accepted: 16 July 2024 / Published: 24 July 2024

Abstract: The differential evolution (DE) algorithm is a powerful evolutionary algorithm. Typically, a single mutation operator with predefined parameter values is used to solve an optimization problem, which limits the performance of the algorithm. In this paper, six commonly used mutation operators are divided into three categories according to their features, and a mutation pool is built from the three categories. A parameter pool with three predefined value pairs is also designed. During evolution, three mutation operators are randomly chosen, one from each category, and three parameter pairs are randomly selected from the parameter pool; the three operator-parameter combinations are then employed to produce trial vectors. The proposed algorithm thus makes good use of the complementary strengths of different mutation operators. Comparisons with three recently proposed DE variants and three non-DE algorithms on the 29 test functions of the CEC 2017 suite demonstrate that the proposed algorithm is very competitive. The algorithm is also applied to three real-world problems with superior results.

1. Introduction

Among evolutionary algorithms (EAs), the differential evolution (DE) algorithm, put forward by Storn and Price [1], is elegant and simple. It is effective and efficient for global continuous optimization problems, and many scholars have since studied it and obtained superior results. It has been applied in many industrial fields, for example, power flow [2], product design [3], intrusion detection [4], and image processing [5].
Generally, the DE algorithm initializes a population in the search space. The individuals are evaluated by the fitness function of the (typically single-objective) optimization problem. DE then randomly chooses some individuals as parents and generates offspring by mutation and crossover operators. A greedy selection operator compares the fitness values of each offspring and its parent: if the parent's fitness is worse than the offspring's, the offspring takes the parent's place and enters the next generation. The DE algorithm terminates when a predefined condition is met [5,6].
When DE is used to optimize problems, success relies on the mutation strategies and the associated parameter control methods. In the past two decades, various DE variants have been put forward to improve the search capability. Some focus on mutation strategies and parameter setting methods, while others pursue ensemble or hybrid methods. Accordingly, four kinds of improvements are discussed here.

1.1. Adaptive Parameter Control Methods

Two key parameters, the crossover rate $CR$ and the scale factor $F$, are of great importance: they determine the quality of the generated solutions and directly influence the performance of the algorithm. Choosing suitable values for the two parameters requires considerable prior experience. Adaptive parameter control allows the two parameters to change dynamically according to the evolution process and environment. In fuzzy adaptive DE (FADE), fuzzy logic is introduced to adapt the two parameters for mutation and crossover [7]. jDE adaptively adjusts the two parameters for each individual and regenerates their values in each generation [8]. Self-adaptive DE (SaDE) adjusts the two parameter values and the associated mutation operators simultaneously [9]. Parameter-adaptive DE (PaDE) is put forward to improve parameter adaptation schemes [10]. By using information from individuals to adaptively control the parameters, the search efficiency is improved [11]. In adaptive DE with an aging leader and challengers mechanism (ADE-ALC), the two parameters are updated based on previous experience [12]. The two parameters of DE have also been adjusted by a compound sinusoidal formula [13]. Three adaptive control methods have been combined for different mutation strategies [14]. A double-level adaptive parameter control approach is developed in adaptive DE (ADE) [15]. An effective and simple method of adapting $CR$ and $F$ achieves better performance [16]. In fitness-adaptive DE (FiADE), the two parameters are adjusted according to the dynamics of the fitness values [17]. An individual-dependent setting method is used to tune the two parameters [18]. In another approach, $CR$ and $F$ evolve by their own mechanisms toward near-optimal values [19]. Based on these adaptive control methods, the performance of DE has been greatly improved.

1.2. Improved Mutation Strategy

The DE algorithm provides various mutation strategies, which distinguishes it from conventional EAs. Each mutation strategy has its own features, and several improved strategies have been developed to boost the performance of DE. A novel mutation strategy, DE/current-to-pbest with optional archive, is implemented in JADE [20], and the JADE algorithm with sorting crossover rate (JADE_sort) was later proposed [21]. A less greedy mutation operator with improved exploration capability is implemented to enhance global search and alleviate stagnation of DE [22]. A biological phenomenon is employed to produce a novel mutant vector [23]. The DE/target-to-best/1 operator uses information from the neighborhood of each population member [24]. DE/current-to-gr_best/1 utilizes the best of a group of randomly chosen solutions [16]. A neighborhood mutation strategy is proposed to preserve the multiple optima found during evolution [25]. To improve exploitation, a series of ranking-based mutation operators is proposed, in which a better-ranked solution has a higher chance of being chosen [26]. Proportional, tournament, and rank-based selection are implemented in the mutation stage [27]. The neighborhood- and direction-information-based DE algorithm (NDi-DE) uses two operators: a direction-induced mutation strategy and a neighbor-guided selection scheme [28]. An individual-based selection probability and two neighborhood-based mutation operators are proposed [29]. A novel mutation scheme employs $k$-tournament selection to choose the base vector [30].

1.3. Ensemble Methods

Different optimization problems have different characteristics, and an ensemble of mutation operators can outperform a single one. An ensemble of mutation strategies and parameter values for DE (EPSDE) is developed, integrating DE/rand/1, DE/best/2, and DE/current-to-rand/1 [31]. DE/rand/1, DE/rand/2, and DE/current-to-rand/1 are ensembled in composite DE (CoDE) [32]. JADE, EPSDE, and CoDE are then combined in the ensemble of DE variants (EDEV) [33]. An extended multi-population ensemble DE uses three mutation operators: DE/current-to-rand/1, DE/current-to-pbest/1, and DE/pbad-to-pbest/1 [34]. A stochastic mixed mutation strategy using DE/pbest/1 and DE/current-to-pbest/1 is proposed [35]. DE/best/1 and DE/rand/1 are included in ensemble- and arithmetic-recombination-based DE [36]. A multi-role-based DE is developed by integrating DE/rand/1, DE/current-to-best/1, and DE/current-to-lbest/1 [37].

1.4. Hybrid Methods

Different algorithms can also be hybridized to improve overall performance. As the gravitational search algorithm (GSA) performs well, a hybrid algorithm, SGSA-DE, based on DE and self-adaptive GSA is proposed to solve optimization problems [38]. A modified bat algorithm and the DE algorithm are integrated to form a hybrid system [39].
In addition, more and more machine learning techniques are being integrated into the DE algorithm to boost its performance. Salar et al. developed Gaussian cross-entropy with organizing intelligence, in which a self-organizing map is involved [40]. Hu et al. introduced a reinforcement learning (RL) technique into the DE algorithm and proposed the RLDE algorithm, in which the mutation scale factor is adaptively adjusted based on the evolution environment [41].
Undoubtedly, these methods have greatly improved the performance of the algorithm and are superior to the traditional trial-and-error search, which incurs a heavy computational burden. However, none of the above methods uses all of the commonly used mutation operators, namely DE/rand/1, DE/best/1, DE/rand/2, DE/best/2, DE/rand-to-best/1, and DE/current-to-rand/1, during evolution. These operators have distinct characteristics and can cope with different optimization problems. They can be divided into three categories according to their features: DE/best/1, DE/best/2, and DE/rand-to-best/1 belong to the first category, as all three use the best individual found so far; DE/rand/2 and DE/rand/1 form the second category, as they perform a random search; and DE/current-to-rand/1 belongs to the last category. It is very difficult to decide which mutation strategy is better within each category. Motivated by these observations, a mutation pool containing the six operators and a parameter pool are designed. During evolution, three mutation strategies are randomly chosen, one from each category, and three associated parameter pairs are chosen simultaneously from the parameter pool. Based on the mutation pool and parameter pool, a DE variant is designed in which three mutation strategies and parameter pairs are used in each iteration; we therefore call the proposed algorithm TDE. The performance of TDE is evaluated on a set of CEC test functions and real applications. The results demonstrate that TDE is superior to four DE variants and three non-DE algorithms.
Based on the above analysis, the main contributions of the paper can be outlined as follows:
(1)
Six common mutation operators are divided into three categories. A mutation pool is designed based on the three categories.
(2)
A DE variant is proposed based on the mutation pool and parameter pool.
(3)
The proposed algorithm is applied to the CEC test functions and the economic dispatch (ED) problem. The results show that TDE is competitive.
The rest of the paper is organized as follows. Section 2 briefly introduces the conventional DE algorithm. Section 3 presents the proposed algorithm. Section 4 reports the experiments and performance evaluation. Section 5 concludes the paper, and Section 6 outlines future work.

2. DE Algorithm

Similar to conventional EAs, the DE algorithm consists of four steps: population initialization, mutation, crossover, and selection. Among them, mutation, crossover, and selection are the most important.

2.1. Initialization

In the initialization step, a population $X_i^t = (x_{i,1}^t, x_{i,2}^t, \ldots, x_{i,j}^t, \ldots, x_{i,D}^t)$, $i = 1, 2, \ldots, NP$; $j = 1, 2, \ldots, D$; $t = 1$, is randomly generated in the search space $[L_j, U_j]$, where $D$ and $NP$ are the dimension of the problem and the population size, respectively, $t$ is the generation index, and $L_j$ and $U_j$ are the lower and upper boundaries of the $j$th dimension:

$$x_{i,j}^1 = L_j + (U_j - L_j) \times rand(0, 1) \tag{1}$$
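As an illustration, Equation (1) can be applied to all individuals and dimensions at once with NumPy (a sketch; the bounds and sizes below are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
NP, D = 50, 30          # population size and problem dimension
L, U = -100.0, 100.0    # lower and upper bounds (identical for every dimension here)

# Equation (1), vectorized over every component of every individual
pop = L + rng.random((NP, D)) * (U - L)
```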

2.2. Mutation

At each generation $t$, a mutant vector $V_i^t = (v_{i,1}^t, v_{i,2}^t, \ldots, v_{i,D}^t)$ is generated for each target vector $X_i^t$ by a mutation operator. DE provides various mutation operators; the commonly used ones are listed in Table 1, in which $r_1^i, r_2^i, r_3^i, r_4^i, r_5^i$ are five mutually distinct integers randomly generated between 1 and $NP$, all different from $i$; $X_{best}^t$ is the solution with the best fitness value at generation $t$; $F$ is the scaling factor between 0 and 1; and $K$ is a random number in $[0, 1]$. It should be noted that DE/current-to-rand/1 has no crossover operator; it directly generates a trial vector $U_i^t$.
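The six operators of Table 1 can be written compactly as follows (a sketch following the common textbook formulations, which may differ slightly from Table 1 in details such as the base-vector choice; the helper name `mutants` is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def mutants(pop, best, i, F=0.5, K=0.5):
    """Return the six classic DE mutant vectors for target index i.

    pop: (NP, D) population array; best: current best vector.
    Indices r1..r5 are mutually distinct and differ from i.
    """
    NP = len(pop)
    r1, r2, r3, r4, r5 = rng.choice([k for k in range(NP) if k != i], 5, replace=False)
    x = pop[i]
    return {
        "DE/rand/1": pop[r1] + F * (pop[r2] - pop[r3]),
        "DE/best/1": best + F * (pop[r1] - pop[r2]),
        "DE/rand/2": pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5]),
        "DE/best/2": best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4]),
        "DE/rand-to-best/1": x + K * (best - x) + F * (pop[r1] - pop[r2]),
        "DE/current-to-rand/1": x + K * (pop[r1] - x) + F * (pop[r2] - pop[r3]),
    }
```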

2.3. Crossover

Then, the crossover operator is used to produce trial vectors. Each trial vector $U_i^t = (u_{i,1}^t, u_{i,2}^t, \ldots, u_{i,D}^t)$ is produced from the target vector $X_i^t$ and its mutant vector $V_i^t$ as follows:

$$u_{i,j}^t = \begin{cases} v_{i,j}^t & \text{if } rand_j[0, 1) \le CR \text{ or } j = j_{rand} \\ x_{i,j}^t & \text{otherwise} \end{cases} \tag{2}$$

where $rand_j$ is a random number uniformly distributed in $[0, 1)$, $CR \in [0, 1]$ is the crossover rate, which determines the probability that $u_{i,j}^t$ comes from $v_{i,j}^t$ rather than $x_{i,j}^t$, and $j_{rand}$ is a random integer between 1 and $D$ that guarantees at least one component is inherited from the mutant vector.

It is necessary to check whether $u_{i,j}^t$ lies in the search space $[L_j, U_j]$. If it exceeds the boundaries, it must be reset; a simple reflection method is as follows:

$$u_{i,j}^t = \begin{cases} \min(U_j, 2L_j - u_{i,j}^t) & \text{if } u_{i,j}^t < L_j \\ \max(L_j, 2U_j - u_{i,j}^t) & \text{if } u_{i,j}^t > U_j \end{cases} \tag{3}$$

where $L_j$ and $U_j$ are the lower and upper boundaries of the $j$th dimension [32].
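Equations (2) and (3) can be sketched in a few lines of NumPy (the function names are ours, for illustration only):

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Binomial crossover of Equation (2): each component comes from the
    mutant v with probability CR, and component j_rand always does."""
    D = len(x)
    j_rand = rng.integers(D)
    mask = rng.random(D) < CR
    mask[j_rand] = True
    return np.where(mask, v, x)

def repair(u, L, U):
    """Reflection-based boundary repair of Equation (3)."""
    u = np.where(u < L, np.minimum(U, 2 * L - u), u)
    u = np.where(u > U, np.maximum(L, 2 * U - u), u)
    return u
```

For example, with bounds $[-5, 5]$ a component at $-6$ is reflected to $-4$ and a component at $7$ is reflected to $3$.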

2.4. Selection

The last step of DE is selection, a greedy strategy that compares the fitness values of the trial vector $U_i^t$ and the target vector $X_i^t$. Taking a minimization problem as an instance, if the fitness value of $U_i^t$ is better (smaller) than that of $X_i^t$, then $U_i^t$ becomes $X_i^{t+1}$; otherwise, $X_i^t$ is kept as $X_i^{t+1}$:

$$X_i^{t+1} = \begin{cases} U_i^t & \text{if } f(U_i^t) < f(X_i^t) \\ X_i^t & \text{otherwise} \end{cases} \tag{4}$$

where $f(U_i^t)$ and $f(X_i^t)$ are the fitness values of $U_i^t$ and $X_i^t$, respectively. The standard DE algorithm is listed in Algorithm 1.
Algorithm 1: DE algorithm
Input: the boundaries of the decision variables $[L, U]$, the problem dimension $D$, the population size $NP$, the maximal iteration $it_{max}$
Output: the optimal solution found by the DE algorithm
1: Initialize the population $p$ by Equation (1);
2: $t = 1$;
3: while $t \le it_{max}$
4:   Generate mutant vectors by the operators in Table 1;
5:   Perform the crossover step to obtain trial vectors $U$ by Equation (2);
6:   Check the boundaries of $U$ by Equation (3);
7:   Evaluate the trial vectors $U$;
8:   Implement the selection step to obtain population $p$ by Equation (4);
9:   Update the best solution;
10:  $t = t + 1$;
11: end
12: Output the best solution found by the DE algorithm.
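The steps of Algorithm 1 can be condensed into a runnable sketch (classic DE/rand/1/bin; boundary handling uses a simple clip instead of the reflection of Equation (3), and the parameter values are illustrative defaults, not prescriptions from the paper):

```python
import numpy as np

def de(fitness, lower, upper, dim, np_size=50, F=0.5, CR=0.9, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin loop following Algorithm 1."""
    rng = np.random.default_rng(seed)
    pop = lower + rng.random((np_size, dim)) * (upper - lower)   # Equation (1)
    fit = np.array([fitness(x) for x in pop])
    for _ in range(max_gen):
        for i in range(np_size):
            # three mutually distinct parents, all different from i
            r1, r2, r3 = rng.choice([k for k in range(np_size) if k != i],
                                    3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])                # DE/rand/1 mutation
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < CR                          # crossover, Eq. (2)
            mask[j_rand] = True
            u = np.clip(np.where(mask, v, pop[i]), lower, upper)
            fu = fitness(u)
            if fu < fit[i]:                                      # selection, Eq. (4)
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], float(fit.min())
```

On a simple sphere function the loop converges close to the global optimum at the origin.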

3. The Proposed Algorithm

3.1. Mutation Pool of the Proposed Algorithm

Mutation operators and their associated parameter values play very important roles in the DE algorithm, and different mutation operators have different characteristics. DE/best/2, DE/best/1, and DE/rand-to-best/1 exploit the best solution found so far; their convergence speed is faster than that of other mutation operators, and they perform better on unimodal problems [9]. The exploration of DE/rand/2 and DE/rand/1 is stronger than their exploitation [26]; they have no bias in selecting individuals for mutation. Since these two operators do not use the best-so-far information, their convergence speed is poorer than that of the former operators. DE/current-to-rand/1 is a rotation-invariant mutation operator, which can cope with rotated problems and differs from the former categories [31].
Based on the above observations, these mutation operators can be divided into three categories: the first category uses the best solution found so far and includes DE/best/2, DE/best/1, and DE/rand-to-best/1; the second category lacks this information and comprises DE/rand/2 and DE/rand/1; the third category is DE/current-to-rand/1. Within each category, it is very hard to say which operator is better in real applications, because each application has its own features.
Several researchers have pointed out that combining different mutation operators can yield better performance [31,32]. On the one hand, diverse optimization problems have their own features, and a single mutation operator with fixed parameter values cannot meet the demands of different real applications. On the other hand, different mutation strategies can complement each other. For example, DE/current-to-rand/1 and DE/rand/2 can be combined, as DE/rand/2 has better exploration capacity while DE/current-to-rand/1 is effective for rotated problems [9]. Similar mutation ensembles can be observed in the literature [31,32].
It should be observed that previous ensemble methods used only some of the mutation operators and did not make full use of all of them during evolution. Additionally, they did not classify the mutation operators into categories according to their characteristics. Based on these observations, we design the mutation pool presented in Figure 1, which includes the three categories. In each generation, two mutation operators are randomly chosen from the first and second categories, respectively, and the third mutation operator is DE/current-to-rand/1. This ensemble method can make full use of the different mutation operators.
The associated parameters $F$ and $CR$ are also vital. A larger $F$ drives the population to search across the whole space, enhancing exploration; conversely, a smaller $F$ leads to fast convergence, as it keeps the search around each individual's neighborhood. A larger $CR$ makes the trial vector $U_i^t$ more different from the target vector $X_i^t$, which enhances population diversity, while a smaller $CR$ suits separable problems [32]. Therefore, three parameter pairs $[F, CR]$ are given: [1.0, 0.1], [1.0, 0.9], and [0.5, 0.8]. The pair [1.0, 0.1] suits separable problems, [1.0, 0.9] enhances population diversity and exploration, and [0.5, 0.8] balances exploitation and exploration.
Based on the main steps of the DE algorithm, the pseudocode for generating the offspring is presented in Algorithm 2.
Algorithm 2: MutationCrossover
Input: current population $X^t$, population size $NP$, parameter pool of $(F, CR)$ pairs
Output: the trial vectors $U^t$
1: for $i = 1 : NP$
2:   Randomly select $(F_1, CR_1)$, $(F_2, CR_2)$, and $(F_3, CR_3)$ from the parameter pool;
3:   Randomly select a mutation operator from DE/rand-to-best/1, DE/best/1, and DE/best/2;
4:   Generate $U_{i,1}$ by the selected mutation operator with $(F_1, CR_1)$;
5:   Randomly select a mutation operator from DE/rand/2 and DE/rand/1;
6:   Generate $U_{i,2}$ by the selected mutation operator with $(F_2, CR_2)$;
7:   Generate $U_{i,3}$ by DE/current-to-rand/1 with $(F_3, CR_3)$;
8:   Check the boundaries of $U_{i,1}$, $U_{i,2}$, and $U_{i,3}$ by Equation (3);
9:   $U_i^t = [U_{i,1}; U_{i,2}; U_{i,3}]$;
10: end
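Algorithm 2 can be sketched as follows (an illustrative reading, not the paper's implementation: the helper layout is ours, boundary handling is a simple clip in place of Equation (3), and since DE/current-to-rand/1 skips crossover, its $CR$ value is unused):

```python
import numpy as np

PARAM_POOL = [(1.0, 0.1), (1.0, 0.9), (0.5, 0.8)]   # (F, CR) pairs from Section 3.1

def mutation_crossover(pop, fit, lower, upper, rng):
    """Three trial vectors per target, one from each mutation category,
    each with an (F, CR) pair drawn at random from the pool."""
    NP, D = pop.shape
    best = pop[np.argmin(fit)]
    trials = np.empty((3 * NP, D))
    for i in range(NP):
        (F1, CR1), (F2, CR2), (F3, _) = (PARAM_POOL[k] for k in rng.integers(0, 3, 3))
        r = rng.choice([k for k in range(NP) if k != i], 5, replace=False)
        x = pop[i]
        # Category 1: operators carrying the best-so-far solution
        op1 = rng.integers(3)
        if op1 == 0:    # DE/rand-to-best/1
            v1 = x + rng.random() * (best - x) + F1 * (pop[r[0]] - pop[r[1]])
        elif op1 == 1:  # DE/best/1
            v1 = best + F1 * (pop[r[0]] - pop[r[1]])
        else:           # DE/best/2
            v1 = best + F1 * (pop[r[0]] - pop[r[1]]) + F1 * (pop[r[2]] - pop[r[3]])
        # Category 2: purely random operators
        if rng.random() < 0.5:  # DE/rand/1
            v2 = pop[r[0]] + F2 * (pop[r[1]] - pop[r[2]])
        else:                   # DE/rand/2
            v2 = pop[r[0]] + F2 * (pop[r[1]] - pop[r[2]]) + F2 * (pop[r[3]] - pop[r[4]])
        # Category 3: DE/current-to-rand/1 (no crossover)
        v3 = x + rng.random() * (pop[r[0]] - x) + F3 * (pop[r[1]] - pop[r[2]])
        for k, (v, CR) in enumerate([(v1, CR1), (v2, CR2), (v3, None)]):
            if CR is not None:          # binomial crossover, Equation (2)
                j_rand = rng.integers(D)
                mask = rng.random(D) < CR
                mask[j_rand] = True
                v = np.where(mask, v, x)
            trials[3 * i + k] = np.clip(v, lower, upper)
    return trials
```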

3.2. The Selection Step of the Proposed Algorithm

Then, the selection operator compares the trial vectors $U_i^t$ with the current vector $X_i^t$. The trial vectors are composed of the vectors generated by the three mutation operators, so the trial population is three times the size of $X^t$. The pseudocode for selection is presented in Algorithm 3.
Algorithm 3: Selection
Input: the current population $X^t = (X_1^t, X_2^t, \ldots, X_i^t, \ldots, X_{NP}^t)$, the trial vectors $U^t = (U_1^t, U_2^t, \ldots, U_i^t, \ldots, U_{3 \times NP}^t)$
Output: the population $X^{t+1} = (X_1^{t+1}, X_2^{t+1}, \ldots, X_i^{t+1}, \ldots, X_{NP}^{t+1})$
1: for $i = 1 : NP$
2:   Evaluate the fitness values of the three individuals $U_{3i-2:3i}^t$;
3:   Select the best of the four vectors $X_i^t$ and $U_{3i-2:3i}^t$ based on their fitness values;
4:   Set the best individual to $X_i^{t+1}$;
5: end
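The four-way competition of Algorithm 3 can be sketched as follows (the function name and array layout are our illustrative choices):

```python
import numpy as np

def selection(pop, fit, trials, fitness):
    """Each target competes against its own three trial vectors;
    the best of the four survives into the next generation."""
    NP = len(pop)
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(NP):
        cand = trials[3 * i: 3 * i + 3]
        cand_fit = np.array([fitness(u) for u in cand])
        j = cand_fit.argmin()
        if cand_fit[j] < new_fit[i]:
            new_pop[i], new_fit[i] = cand[j], cand_fit[j]
    return new_pop, new_fit
```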

3.3. The Framework of the Proposed Algorithm

Based on the above analysis, the pseudocode of the TDE is presented in Algorithm 4.
Algorithm 4: The proposed TDE algorithm
Input: the population size $NP$; the parameter pool $[F, CR] = \{(F_1, CR_1); (F_2, CR_2); (F_3, CR_3)\}$; the maximal number of function evaluations $MFEs$
Output: the best solution found
1: $FES = 0$;
2: Generate the initial population $X$;
3: Evaluate the fitness values $f$;
4: $FES = FES + NP$;
5: while $FES \le MFEs$
6:   $U^t$ = MutationCrossover($X^t$, $NP$, $[F, CR]$);
7:   $X^{t+1}$ = Selection($X^t$, $U^t$);
8:   Update the optimal solution found;
9:   $FES = FES + 3 \times NP$;
10: end while
11: Output the optimal solution found.
Firstly, a population is randomly generated in the search space and its fitness values are evaluated. Then, trial vectors are generated according to the designed mutation pool and parameter pool. Through the selection operator, the offspring are produced and enter the next generation. The flowchart of the proposed algorithm is shown in Figure 2.

4. Experiment and Results Analysis

In this section, TDE is evaluated comprehensively on the test functions of CEC 2017; the details of these functions can be found in the CEC 2017 report. The functions can be split into four categories: unimodal functions ($f_1$–$f_3$), basic multimodal functions ($f_4$–$f_{10}$), expanded multimodal functions ($f_{11}$–$f_{20}$), and hybrid composition functions ($f_{21}$–$f_{30}$). Function $f_2$ is excluded because it is unstable, leaving twenty-nine functions for discussion.
According to the CEC 2017 criteria, the maximum number of function evaluations is 500,000, and the dimension of the problems is 30. Figure 3 shows the landscapes of four of the functions; most of them have many local optima, which are challenging for optimization algorithms.

4.1. Comparison Results

The DE algorithm and three recently proposed DE variants are chosen for comparison: CoDE [32], RLDE [41], and tlboDE [42]. CoDE integrates DE/rand/1, DE/rand/2, and DE/current-to-rand/1. tlboDE is a hybrid method in which teaching–learning-based optimization and DE are adaptively combined. RLDE is based on the RL technique, in which the scale factor is adaptively adjusted. Three non-DE algorithms, Comprehensive Learning PSO (CLPSO) [43], the Selective Opposition Grey Wolf Optimizer (SOGWO) [44], and CMAES [45], are also used for comparison. CLPSO employs a new learning method whereby the historical optimal information of all other particles can be used to update a particle's velocity [43]. SOGWO is a variant of the Grey Wolf Optimizer (GWO) that introduces selective opposition learning, a machine learning technique [44]. CMAES is a variant of the evolution strategy in which the population size is increased [45].
All eight algorithms are run thirty times independently to reduce the effect of randomness. In each run, the error $f(x) - f(x^*)$ is recorded, where $x^*$ is the global optimum of the benchmark function and $f(x)$ is the best fitness value found by the algorithm. The mean and standard deviation of the errors are used to assess the performance of each algorithm.
The experimental results are listed in Table 2, with the best results highlighted in bold. Taking the first row as an example, the mean and standard deviation of the DE algorithm on the first test function are 1.42 × 10⁻¹⁴ and 3.84 × 10⁻²⁸, respectively: in each run, an optimal fitness value is found; after the thirty runs, thirty optimal values are obtained, whose mean is 1.42 × 10⁻¹⁴ and whose standard deviation is 3.84 × 10⁻²⁸. We now discuss the results according to the function categories.
(1)
The first group of functions are unimodal. As noted above, $f_2$ is unstable and is not included in Table 2. Unimodal functions have no local optima and are mainly used to test the exploitation ability of an algorithm. Our proposed TDE obtains the best results on both remaining functions, finding their optimal fitness values. This is due to the operators that use the best-so-far information to guide the search direction.
(2)
The second group of functions ($f_4$–$f_{10}$) are basic multimodal functions, whose main feature is that their local optima increase exponentially with the dimension. The proposed TDE algorithm obtains the best results on five functions, i.e., $f_4$, $f_5$, $f_7$, $f_8$, and $f_{10}$, which accounts for 5/7 of this group, while tlboDE, RLDE, and SOGWO do not obtain the best result on any function. The comparison shows that the proposed TDE algorithm is more powerful than these competitive variants.
(3)
The third group of functions ($f_{11}$–$f_{20}$) are expanded multimodal functions, composed of different basic multimodal functions and thus more difficult to optimize than the second group. Our proposed TDE algorithm achieves the best results on $f_{11}$, $f_{13}$, $f_{14}$, $f_{15}$, $f_{17}$, $f_{18}$, and $f_{19}$, the most among the compared algorithms. CoDE and CLPSO obtain superior performances on $f_{12}$ and $f_{16}$, respectively.
(4)
The fourth group of functions ($f_{21}$–$f_{30}$) are composition functions. They remain challenging for most intelligent algorithms because they have a massive number of local optima, so exploration and exploitation are tested simultaneously. According to Table 2, the proposed algorithm obtains the best results on $f_{22}$, $f_{23}$, $f_{25}$, $f_{27}$, $f_{28}$, and $f_{29}$, which accounts for 60% of the functions in this group. The main reason for this advantage is that three mutation operators from three different categories are integrated, so that exploration and exploitation are considered simultaneously.
In summary, the proposed algorithm, DE, CoDE, tlboDE, RLDE, CLPSO, and CMAES obtain 15, 4, 7, 0, 0, 3, and 1 best results, respectively; SOGWO and RLDE do not obtain the best result on any function. To make the comparison more intuitive, the algorithms are ranked on each test function, and radar maps of the rankings are plotted in Figure 4. The area covered by each map reflects the overall performance of the algorithm: the smaller the area, the better the algorithm. The area of the proposed TDE is clearly the smallest, so it is the best of the eight competitive algorithms.
Two representative convergence graphs of these algorithms are shown in Figure 5. TDE converges faster than the other algorithms on most functions. Taking $f_{10}$ as an example, the function has many local optima, so DE is likely to get stuck in them, which results in premature convergence. The proposed algorithm, however, adopts different mutation operators, which improves its exploration ability and helps it jump out of local optima. Therefore, the proposed algorithm has better convergence speed.

4.2. Parameter Sensitivity Analysis

In this subsection, a parameter sensitivity analysis is performed. The parameter pool of the proposed TDE contains three pairs of parameters, which may influence the performance of the algorithm. We change them to $F = [0.9, 0.9, 0.4]$ and $CR = [0.2, 0.8, 0.7]$ and denote the resulting algorithm TDE1. The same experiments as in Section 4.1 are then performed on four representative test functions. The means and standard deviations of TDE and TDE1 are listed in Table 3. The differences between the two algorithms are not significant, which indicates that the choice between the two parameter groups has no significant effect on the final results.

4.3. The Applications

In this subsection, the proposed TDE algorithm is applied to solve three industrial applications so that the generality of the algorithm can be further evaluated.

4.3.1. Economic Dispatch Problem

The proposed algorithm is applied to the ED problem. In recent years, China has achieved rapid social and economic development: its GDP grew from USD 174.98 billion in 1978 to USD 14.44 trillion in 2019, ranking second in the world. At the same time, the large quantity of pollutants emitted by industrial development has brought serious environmental problems. According to the Report on the State of the Ecology and Environment in China 2018, 64.2% of the 338 prefecture-level and above cities in the country failed to meet the ambient air quality standards. Environmental pollution not only destroys the ecological balance and harms people's health but also negatively impacts the sustainable development of the economy. To control pollution, China has formulated a series of policies and regulations, such as the "Thirteenth Five-Year Plan for Ecological Environmental Protection" and the "Opinions on Reinforcing Ecological Environmental Protection to Fight the Pollution Prevention and Control", to limit the discharge of pollutants. With increasingly serious environmental problems, many countries around the world have actively pursued energy conservation and emission reduction and enacted laws to limit harmful gas emissions. The UK government put forward the Energy Market Reform (EMR) and promulgated the Climate Change Act, which aims to reduce emissions by 80% relative to 1990 levels by 2050 [46]. The Paris Agreement on climate change, signed in 2016 by 178 parties, is also legally binding and strives to achieve net-zero global greenhouse gas emissions by the second half of this century. In this situation, the power industry faces the challenge of dispatching generation at minimum cost while limiting pollutant emissions, which is formulated as the ED problem [47].
The ED problem is usually modeled as a quadratic function of the generating capacity of each unit:

$$F(P_{Gi}) = \sum_{i=1}^{N_G} \left[ a_i P_{Gi}^2 + b_i P_{Gi} + c_i \right] \tag{5}$$

where $F(P_{Gi})$ is the total fuel cost, $P_{Gi}$ is the output power of the $i$th thermal power generator, $a_i$, $b_i$, $c_i$ are its fuel cost coefficients, and $N_G$ is the number of thermal power generators.
For thermal power units, the physical characteristics of each unit impose upper and lower limits on the output value, and the unit must operate within these limits:
$P_i^{min} \le P_i \le P_i^{max}$
where $P_i^{min}$ and $P_i^{max}$ are the minimum and maximum power outputs of the $i$th thermal power generator.
The ED problem also needs to meet the power balance constraint; that is, the total generation of the thermal power units should equal the load demand of the power system plus the transmission losses [47]. Neglecting losses, the power balance equation is as follows.
$\sum_{i=1}^{N_G} P_i = P_D$
where P D is the load demand of the power system. The proposed algorithm is employed to solve the six-unit system [48]. The load demand is 500 MW. The parameters of the six-unit system are listed in Table 4.
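As an illustration, the quadratic cost above, with the six-unit coefficients of Table 4 and the 500 MW demand, can be minimized by a bare-bones DE/rand/1 loop. This is only a sketch of the experiment, not the paper's TDE implementation; in particular, the repair step that redistributes any power imbalance evenly across the units is our own assumption, introduced to satisfy the balance constraint.

```python
import numpy as np

# Six-unit coefficients from Table 4: columns are a_i, b_i, c_i, P_min, P_max
UNITS = np.array([
    [0.1525, 38.540,  756.800,  10.0, 125.0],
    [0.1060, 46.160,  451.325,  10.0, 150.0],
    [0.0280, 40.400, 1050.000,  35.0, 225.0],
    [0.0355, 38.310, 1243.530,  35.0, 210.0],
    [0.0211, 36.328, 1658.570, 130.0, 325.0],
    [0.0180, 38.270, 1356.660, 125.0, 315.0],
])
A, B, C = UNITS[:, 0], UNITS[:, 1], UNITS[:, 2]
P_MIN, P_MAX = UNITS[:, 3], UNITS[:, 4]
P_D = 500.0  # load demand (MW); transmission losses neglected

def fuel_cost(p):
    # F(P_G) = sum_i (a_i P_i^2 + b_i P_i + c_i)
    return float(np.sum(A * p ** 2 + B * p + C))

def repair(p):
    # Alternately project onto the box limits and the balance hyperplane
    # (an illustrative repair, not part of the paper's method).
    for _ in range(100):
        p = np.clip(p, P_MIN, P_MAX)
        p = p + (P_D - p.sum()) / p.size
    return np.clip(p, P_MIN, P_MAX)

def solve_ed(pop_size=30, iters=200, F=0.5, cr=0.9, seed=1):
    rng = np.random.default_rng(seed)
    pop = np.array([repair(rng.uniform(P_MIN, P_MAX)) for _ in range(pop_size)])
    cost = np.array([fuel_cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
            mask = rng.random(v.size) < cr          # binomial crossover
            u = repair(np.where(mask, v, pop[i]))
            fu = fuel_cost(u)
            if fu <= cost[i]:                       # greedy selection
                pop[i], cost[i] = u, fu
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

With these settings, a run lands near the best value reported in Table 5 (about 2.70041 × 10^4); the exact figure varies with the seed and budget.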
The population size is 100. Each individual encodes the outputs of the six generators $p_1, p_2, p_3, p_4, p_5, p_6$, which are randomly generated at first, so the population is a $100 \times 6$ matrix. The maximum number of iterations is 300, and each algorithm is run fifty times independently. The above eight algorithms are applied to the problem, and the results are presented in Table 5, which reports the best, median, mean, and worst of the obtained values, with the best results highlighted in bold.
Comparing the algorithms, the differences among their best, median, and mean results are small, but they diverge clearly in the worst values and standard deviations. In terms of worst values, CLPSO and SOGWO rank second-to-last and last, respectively, and CLPSO also has the poorest mean of the eight algorithms. Overall, TDE delivers the most favorable results on every index, indicating superior performance.
As shown in Figure 6, the iteration curves indicate that CLPSO, SOGWO, RLDE, and CoDE converge prematurely in the early stage of the iteration, while tlboDE and TDE converge quickly and find satisfactory optimal values.

4.3.2. Multi-Product Batch Plant

This is a multi-product batch plant problem with $M$ serial processing stages, in which fixed amounts $Q_i$ of $N$ products must be produced [49]. The problem is formulated as follows.
Minimize:
$f(\bar{x}) = \sum_{j=1}^{M} \alpha_j N_j V_j^{\beta_j}$
Subject to:
$g_1(\bar{x}) = S_{ij} B_i - V_j \le 0$
$g_2(\bar{x}) = \sum_{i=1}^{N} \frac{Q_i T_{Li}}{B_i} - H \le 0$
$g_3(\bar{x}) = t_{ij} - N_j T_{Li} \le 0$
with bounds
$1 \le N_j \le N_j^u$
$V_j^l \le V_j \le V_j^u$
$T_{Li}^l \le T_{Li} \le T_{Li}^u$
$B_i^l \le B_i \le B_i^u$
where $N = 2$, $M = 3$, $\alpha_j = 250$, $\beta_j = 0.6$, $H = 6000$, $N_j^u = 3$, $V_j^l = 250$, and $V_j^u = 2500$. The values of the other parameters are calculated by
$T_{Li}^l = \max_j \left( t_{ij} / N_j^u \right)$
$T_{Li}^u = \max_j \left( t_{ij} \right)$
$B_i^l = \frac{Q_i T_{Li}}{H}$
$B_i^u = \min \left( Q_i, \min_j \left( V_j^u / S_{ij} \right) \right)$
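The objective, constraints, and derived bounds above translate directly into vectorized code. The size factors $S_{ij}$, processing times $t_{ij}$, and demands $Q_i$ come from [49] and are not reproduced in the text, so the arrays used in the check below are illustrative assumptions only; the reading of $T_{Li}$ at its lower bound in $B_i^l$ is likewise our interpretation.

```python
import numpy as np

def batch_plant_eval(Nj, Vj, TL, B, S, t, Q, alpha=250.0, beta=0.6, H=6000.0):
    """Objective and constraints; every g must be <= 0 for feasibility."""
    f = float(np.sum(alpha * Nj * Vj ** beta))
    g1 = S * B[:, None] - Vj[None, :]       # batch fits the stage: S_ij B_i - V_j <= 0
    g2 = float(np.sum(Q * TL / B) - H)      # total production time within horizon H
    g3 = t - Nj[None, :] * TL[:, None]      # cycle time: t_ij - N_j T_Li <= 0
    return f, g1, g2, g3

def derived_bounds(S, t, Q, Nj_u=3.0, Vj_u=2500.0, H=6000.0):
    TL_l = np.max(t / Nj_u, axis=1)                  # T_Li^l = max_j(t_ij / N_j^u)
    TL_u = np.max(t, axis=1)                         # T_Li^u = max_j(t_ij)
    B_l = Q * TL_l / H                               # B_i^l, with T_Li at its lower bound
    B_u = np.minimum(Q, np.min(Vj_u / S, axis=1))    # B_i^u = min(Q_i, min_j(V_j^u / S_ij))
    return TL_l, TL_u, B_l, B_u
```

Rows index the $N$ products and columns the $M$ stages, so the two constraint arrays `g1` and `g3` have shape $(N, M)$ and can be penalized elementwise inside any DE run.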
The population size is 100, and the maximal number of iterations is set to 400. The obtained results are listed in Table 6, in which the best results are highlighted in bold. In terms of best values, TDE, CMA-ES, and tlboDE obtain the best result. However, among these three, TDE is superior to CMA-ES and tlboDE in median, mean, and worst values, which indicates that the performance of TDE is satisfactory in both optimization ability and robustness.
From the iteration curves shown in Figure 7, the exploration ability of CMA-ES is poor, while the search strategies of TDE and CoDE are more effective in the initial stage.

4.3.3. Process Synthesis Problem

This problem has nonlinearities in both its real and its binary variables [49].
Minimize:
$f(\bar{x}) = (1 - x_4)^2 + (1 - x_5)^2 + (1 - x_6)^2 - \ln(1 + x_7) + (1 - x_1)^2 + (2 - x_2)^2 + (3 - x_3)^2$
Subject to:
$g_1(\bar{x}) = x_1 + x_2 + x_3 + x_4 + x_5 + x_6 - 5 \le 0$,
$g_2(\bar{x}) = x_6^3 + x_1^2 + x_2^2 + x_3^2 - 5.5 \le 0$,
$g_3(\bar{x}) = x_1 + x_4 - 1.2 \le 0$,
$g_4(\bar{x}) = x_2 + x_5 - 1.8 \le 0$,
$g_5(\bar{x}) = x_3 + x_6 - 2.5 \le 0$,
$g_6(\bar{x}) = x_1 + x_7 - 1.2 \le 0$,
$g_7(\bar{x}) = x_5^2 + x_2^2 - 1.64 \le 0$,
$g_8(\bar{x}) = x_6^2 + x_3^2 - 4.25 \le 0$,
$g_9(\bar{x}) = x_5^2 + x_3^2 - 4.64 \le 0$,
with bounds
$0 \le x_1, x_2, x_3 \le 100$,
$x_4, x_5, x_6, x_7 \in \{0, 1\}$.
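The objective and the nine constraints can be checked directly for any candidate point; the evaluation point used below is arbitrary, chosen only to exercise the function, and is not claimed to be feasible or optimal.

```python
import math

def process_synthesis(x):
    """Objective f and constraints g1..g9; feasible when every g <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    f = ((1 - x4) ** 2 + (1 - x5) ** 2 + (1 - x6) ** 2 - math.log(1 + x7)
         + (1 - x1) ** 2 + (2 - x2) ** 2 + (3 - x3) ** 2)
    g = [
        x1 + x2 + x3 + x4 + x5 + x6 - 5,          # g1
        x6 ** 3 + x1 ** 2 + x2 ** 2 + x3 ** 2 - 5.5,  # g2
        x1 + x4 - 1.2,                            # g3
        x2 + x5 - 1.8,                            # g4
        x3 + x6 - 2.5,                            # g5
        x1 + x7 - 1.2,                            # g6
        x5 ** 2 + x2 ** 2 - 1.64,                 # g7
        x6 ** 2 + x3 ** 2 - 4.25,                 # g8
        x5 ** 2 + x3 ** 2 - 4.64,                 # g9
    ]
    return f, g
```

A constraint-handling scheme (for example, a penalty added to `f` proportional to `sum(max(gi, 0))`) turns this into an unconstrained objective that any of the compared algorithms can minimize, with the binary variables rounded to $\{0, 1\}$ before evaluation.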
The population size is 100, and the maximal number of iterations is set to 100. The performances are listed in Table 7, in which the best results are highlighted in bold. In terms of best values, only CoDE, SOGWO, and CLPSO stop at 2.93, while the other algorithms all attain the better value of 2.92. In terms of medians, RLDE obtains the best result, with TDE, CLPSO, and tlboDE tied for second at 2.93; the median reflects the consistency of an algorithm over multiple runs. From the mean, worst value, and standard deviation, it can be concluded that TDE performs best in terms of stability. Overall, TDE's performance is satisfactory.
The convergence curves are plotted in Figure 8. CMA-ES shows poor exploration ability, while TDE, CLPSO, and CoDE have more effective search strategies in the initial stage; however, CoDE converges more slowly, whereas TDE converges quickly.

5. Conclusions

Mutation operators and their associated parameter settings play an important role in improving the performance of the DE algorithm. Six commonly used mutation operators are divided into three categories according to their features. To make good use of these operators, a mutation pool is designed that contains the six operators from the three categories. Moreover, a parameter pool with three predefined parameter values is established. In each generation, one mutation operator is randomly selected from each of the three categories, and three parameter values are randomly drawn from the parameter pool. Each chosen operator, paired with its parameter value, is used to produce trial vectors. Based on this procedure, a novel DE variant, TDE, is implemented.
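The per-generation procedure just described can be sketched in a few lines. The grouping follows the three categories of Table 1, with one representative operator per category shown for brevity; the three values in `F_POOL` are placeholder assumptions, since the concrete predefined parameter values are given in the algorithm description rather than restated in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
F_POOL = [0.5, 0.8, 1.0]   # placeholder parameter pool (illustrative values)
K = 0.5                    # combination coefficient, also an assumption

def _distinct(n, k, avoid):
    """k distinct random indices from range(n), none equal to avoid."""
    idx = [j for j in range(n) if j != avoid]
    return rng.choice(idx, size=k, replace=False)

def mutants(pop, i, best, F):
    """One mutant from each of the three operator categories."""
    r = _distinct(len(pop), 3, i)
    best_based = best + F * (pop[r[0]] - pop[r[1]])                  # DE/best/1
    rand_based = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])             # DE/rand/1
    mixed = pop[i] + K * (best - pop[i]) + F * (pop[r[0]] - pop[r[1]])  # DE/rand-to-best/1
    return [best_based, rand_based, mixed]

def tde_generation(pop, fit, func, cr=0.9):
    """One generation: three trial vectors per individual, greedy selection."""
    best = pop[np.argmin(fit)].copy()
    dim = pop.shape[1]
    for i in range(len(pop)):
        trials = []
        for v in mutants(pop, i, best, rng.choice(F_POOL)):
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True   # guarantee at least one mutated gene
            trials.append(np.where(mask, v, pop[i]))
        fs = [func(u) for u in trials]
        j = int(np.argmin(fs))
        if fs[j] <= fit[i]:
            pop[i], fit[i] = trials[j], fs[j]
    return pop, fit

sphere = lambda x: float(np.sum(x * x))
```

Running a few dozen such generations on the sphere function shows the intended behavior: each individual competes against the best of three differently mutated trials, so improvements accumulate quickly.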
The proposed algorithm makes good use of the six mutation operators during evolution, which enhances its search capacity. The proposed TDE algorithm is compared with three recent DE variants (CoDE, tlboDE, and RLDE) and three non-DE algorithms (CLPSO, SOGWO, and CMA-ES) on 29 benchmark functions from CEC 2017. The experimental results demonstrate that TDE is superior to these algorithms: it obtains the best results on twenty functions, more than any of the competitors. This superior performance can be attributed to the well-designed mutation pool and parameter pool. TDE is then employed to deal with three optimization problems from real applications, where it also achieves better performance.

6. Future Works

Future work falls into two areas. First, more advanced intelligent algorithms can be developed; in particular, machine learning techniques can be introduced into intelligent algorithms to boost their performance. Second, the proposed algorithm can be applied to more industrial applications so that the generality of this work can be further demonstrated.

Author Contributions

Writing—original draft preparation, X.W.; conceptualization, X.W.; investigation, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Social Science Foundation of the Chinese Ministry of Education (No. 22YJC630144), Social Science Research in Colleges and Universities in Jiangsu Province (2020SJA0182).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Pulluri, H.; Naresh, R.; Sharma, V. An enhanced self-adaptive differential evolution based solution methodology for multiobjective optimal power flow. Appl. Soft Comput. 2017, 54, 229–245. [Google Scholar] [CrossRef]
  3. Tsafarakis, S.; Zervoudakis, K.; Andronikidis, A.; Altsitsiadis, E. Fuzzy self-tuning differential evolution for optimal product line design. Eur. J. Oper. Res. 2020, 287, 1161–1169. [Google Scholar] [CrossRef]
  4. Almasoudy, F.H.; Al-Yaseen, W.L.; Idrees, A.K. Differential Evolution Wrapper Feature Selection for Intrusion Detection System. Procedia Comput. Sci. 2020, 167, 1230–1239. [Google Scholar] [CrossRef]
  5. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar] [CrossRef]
  6. Guo, S.-M.; Yang, C.-C.; Hsu, P.-H.; Tsai, J.S.H. Improving Differential Evolution with a Successful-Parent-Selecting Framework. IEEE Trans. Evol. Comput. 2015, 19, 717–730. [Google Scholar] [CrossRef]
  7. Liu, J.; Lampinen, J. A fuzzy adaptive differential evolution algorithm. Soft Comput. Fusion Found Methodol. Applicat. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  8. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  9. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  10. Meng, Z.; Pan, J.-S.; Tseng, K.-K. PaDE: An enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization. Knowl. Based Syst. 2019, 168, 80–99. [Google Scholar] [CrossRef]
  11. Tian, M.; Gao, X.; Dai, C. Differential evolution with improved individual-based parameter setting and selection strategy. Appl. Soft Comput. 2017, 56, 286–297. [Google Scholar] [CrossRef]
  12. Fu, C.M.; Jiang, C.; Chen, G.S.; Liu, Q.M. An adaptive differential evolution algorithm with an aging leader and challengers mechanism. Appl. Soft Comput. 2017, 57, 60–73. [Google Scholar] [CrossRef]
  13. Draa, A.; Chettah, K.; Talbi, H. A Compound Sinusoidal Differential Evolution algorithm for continuous optimization. Swarm Evol. Comput. 2019, 50, 100450. [Google Scholar] [CrossRef]
  14. Gong, W.; Cai, Z.; Ling, C.X.; Li, H. Enhanced differential evolution with adaptive strategies for numerical optimization. IEEE Trans. Syst. Man Cybern. B Cybern. 2011, 41, 397–413. [Google Scholar] [CrossRef] [PubMed]
  15. Yu, W.J.; Shen, M.; Chen, W.N.; Zhan, Z.H.; Gong, Y.J.; Lin, Y.; Liu, O.; Zhang, J. Differential evolution with two-level parameter adaptation. IEEE Trans. Cybern. 2014, 44, 1080–1099. [Google Scholar] [CrossRef] [PubMed]
  16. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. B Cybern. 2012, 42, 482–500. [Google Scholar] [CrossRef] [PubMed]
  17. Ghosh, A.; Das, S.; Chowdhury, A.; Giri, R. An improved differential evolution algorithm with fitness-based adaptation of the control parameters. Inf. Sci. 2011, 181, 3749–3765. [Google Scholar] [CrossRef]
  18. Tang, L.; Dong, Y.; Liu, J. Differential Evolution with an Individual-Dependent Mechanism. IEEE Trans. Evol. Comput. 2015, 19, 560–574. [Google Scholar] [CrossRef]
  19. Fan, Q.; Yan, X. Self-Adaptive Differential Evolution Algorithm with Zoning Evolution of Control Parameters and Adaptive Mutation Strategies. IEEE Trans. Cybern. 2016, 46, 219–232. [Google Scholar] [CrossRef]
  20. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  21. Zhou, Y.Z.; Yi, W.C.; Gao, L.; Li, X.Y. Adaptive Differential Evolution with Sorting Crossover Rate for Continuous Optimization Problems. IEEE Trans. Cybern. 2017, 47, 2742–2753. [Google Scholar] [CrossRef] [PubMed]
  22. Mohamed, A.W.; Hadi, A.A.; Jambi, K.M. Novel mutation strategy for enhancing SHADE and LSHADE algorithms for global numerical optimization. Swarm Evol. Comput. 2019, 50, 100455. [Google Scholar] [CrossRef]
  23. Prabha, S.; Yadav, R. Differential evolution with biological-based mutation operator. Eng. Sci. Technol. Int. J. 2020, 23, 253–263. [Google Scholar] [CrossRef]
  24. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef]
  25. Qu, B.Y.; Suganthan, P.N.; Liang, J.J. Differential Evolution with Neighborhood Mutation for Multimodal Optimization. IEEE Trans. Evol. Comput. 2012, 16, 601–614. [Google Scholar] [CrossRef]
  26. Gong, W.; Cai, Z. Differential evolution with ranking-based mutation operators. IEEE Trans. Cybern. 2013, 43, 2066–2081. [Google Scholar] [CrossRef] [PubMed]
  27. Stanovov, V.; Akhmedova, S.; Semenkin, E. Selective Pressure Strategy in differential evolution: Exploitation improvement in solving global optimization problems. Swarm Evol. Comput. 2019, 50, 100463. [Google Scholar] [CrossRef]
  28. Cai, Y.; Wang, J. Differential evolution with neighborhood and direction information for numerical optimization. IEEE Trans. Cybern. 2013, 43, 2202–2215. [Google Scholar] [CrossRef]
  29. Tian, M.; Gao, X. Differential evolution with neighborhood-based adaptive evolution mechanism for numerical optimization. Inf. Sci. 2019, 478, 422–448. [Google Scholar] [CrossRef]
  30. Bajer, D. Adaptive k-tournament mutation scheme for differential evolution. Appl. Soft Comput. 2019, 85, 105776. [Google Scholar] [CrossRef]
  31. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [Google Scholar] [CrossRef]
  32. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution with Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  33. Wu, G.; Shen, X.; Li, H.; Chen, H.; Lin, A.; Suganthan, P.N. Ensemble of differential evolution variants. Inf. Sci. 2018, 423, 172–186. [Google Scholar] [CrossRef]
  34. Tong, L.; Dong, M.; Jing, C. An improved multi-population ensemble differential evolution. Neurocomputing 2018, 290, 130–147. [Google Scholar] [CrossRef]
  35. Tian, M.; Gao, X. An improved differential evolution with information intercrossing and sharing mechanism for numerical optimization. Swarm Evol. Comput. 2019, 50, 100341. [Google Scholar] [CrossRef]
  36. Hui, S.; Suganthan, P.N. Ensemble and Arithmetic Recombination-Based Speciation Differential Evolution for Multimodal Optimization. IEEE Trans. Cybern. 2016, 46, 64–74. [Google Scholar] [CrossRef] [PubMed]
  37. Gui, L.; Xia, X.; Yu, F.; Wu, H.; Wu, R.; Wei, B.; Zhang, Y.; Li, X.; He, G. A multi-role based differential evolution. Swarm Evol. Comput. 2019, 50, 100508. [Google Scholar] [CrossRef]
  38. Zhao, F.; Xue, F.; Zhang, Y.; Ma, W.; Zhang, C.; Song, H. A hybrid algorithm based on self-adaptive gravitational search algorithm and differential evolution. Expert Syst. Appl. 2018, 113, 515–530. [Google Scholar] [CrossRef]
  39. Yildizdan, G.; Baykan, Ö.K. A novel modified bat algorithm hybridizing by differential evolution algorithm. Expert Syst. Appl. 2020, 141, 112949. [Google Scholar] [CrossRef]
  40. Farahmand-Tabar, S.; Ashtari, P.; Babaei, M. Gaussian cross-entropy and organizing intelligence for design optimization of the outrigger system with inclined belt truss in real-size tall buildings. Probabilistic Eng. Mech. 2024, 76, 103616. [Google Scholar] [CrossRef]
  41. Hu, Z.; Gong, W.; Li, S. Reinforcement learning-based differential evolution for parameters extraction of photovoltaic models. Energy Rep. 2021, 7, 916–928. [Google Scholar] [CrossRef]
  42. Li, S.; Gong, W.; Wang, L.; Yan, X.; Hu, C. A hybrid adaptive teaching–learning-based optimization and differential evolution for parameter identification of photovoltaic models. Energy Convers. Manag. 2020, 225, 113474. [Google Scholar] [CrossRef]
  43. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  44. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  45. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC-2005), Edinburgh, UK, 2–5 September 2005; pp. 1769–1776. [Google Scholar]
  46. Climate Change Committee. Meeting Carbon Budgets–The Need for a Step Change. Economics. 2009. Available online: https://www.theccc.org.uk/publication/meeting-carbon-budgets-the-need-for-a-step-change-1st-progress-report/ (accessed on 12 October 2009).
  47. Jayabarathi, T.; Raghunathan, T.; Adarsh, B.R.; Suganthan, P.N. Economic dispatch using hybrid grey wolf optimizer. Energy 2016, 111, 630–641. [Google Scholar] [CrossRef]
  48. Balamurugan, R.; Subramanian, S. A Simplified Recursive Approach to Combined Economic Emission Dispatch. Electr. Power Compon. Syst. 2007, 36, 17–27. [Google Scholar] [CrossRef]
  49. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
Figure 1. The mutation pool.
Figure 2. The flowchart of the proposed TDE algorithm.
Figure 3. The 3-D map for four functions.
Figure 4. The radar map of eight algorithms.
Figure 5. Two convergence curves of seven algorithms on test functions.
Figure 6. The convergence curve of eight algorithms on the economic dispatch problem.
Figure 7. The convergence curve of eight algorithms on the multi-product batch plant.
Figure 8. The convergence curve of eight algorithms on the process synthesis problem.
Table 1. Mutation operator.
| Mutation | Operator |
| DE/best/1 | $V_i^t = X_{best}^t + F \times (X_{r1}^t - X_{r2}^t)$ |
| DE/best/2 | $V_i^t = X_{best}^t + F \times (X_{r1}^t - X_{r2}^t) + F \times (X_{r3}^t - X_{r4}^t)$ |
| DE/rand-to-best/1 | $V_i^t = X_i^t + K \times (X_{best}^t - X_i^t) + F \times (X_{r1}^t - X_{r2}^t)$ |
| DE/rand/1 | $V_i^t = X_{r1}^t + F \times (X_{r2}^t - X_{r3}^t)$ |
| DE/rand/2 | $V_i^t = X_{r1}^t + F \times (X_{r2}^t - X_{r3}^t) + F \times (X_{r4}^t - X_{r5}^t)$ |
| DE/current-to-rand/1 | $U_i^t = X_{r1}^t + K \times (X_{r2}^t - X_{r3}^t) + F \times (X_{r4}^t - X_{r5}^t)$ |
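Read as vector arithmetic, each row of Table 1 is a one-liner. In the sketch below the random indices are passed in explicitly so each formula can be checked by hand; the values of F, K, and the sample vectors in the check are arbitrary.

```python
import numpy as np

def de_best_1(Xb, Xr1, Xr2, F):
    return Xb + F * (Xr1 - Xr2)

def de_best_2(Xb, Xr1, Xr2, Xr3, Xr4, F):
    return Xb + F * (Xr1 - Xr2) + F * (Xr3 - Xr4)

def de_rand_to_best_1(Xi, Xb, Xr1, Xr2, F, K):
    return Xi + K * (Xb - Xi) + F * (Xr1 - Xr2)

def de_rand_1(Xr1, Xr2, Xr3, F):
    return Xr1 + F * (Xr2 - Xr3)

def de_rand_2(Xr1, Xr2, Xr3, Xr4, Xr5, F):
    return Xr1 + F * (Xr2 - Xr3) + F * (Xr4 - Xr5)

def de_current_to_rand_1(Xr1, Xr2, Xr3, Xr4, Xr5, F, K):
    return Xr1 + K * (Xr2 - Xr3) + F * (Xr4 - Xr5)
```

In a full DE implementation, the $r_1, \dots, r_5$ indices are drawn without replacement and distinct from the target index $i$, and $X_{best}^t$ is the current best individual of generation $t$.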
Table 2. The mean and standard deviation of seven algorithms on test functions.
| Function | Metric | DE | CoDE | tlboDE | RLDE | SOGWO | CLPSO | CMA-ES | TDE |
| f1 | Mean | 1.42 × 10−14 | 1.50 × 10−4 | 3.67 × 109 | 1.42 × 10−14 | 8.16 × 108 | 1.15 × 101 | 1.42 × 10−14 | 0 |
| f1 | var | 3.84 × 10−28 | 1.56 × 10−8 | 2.38 × 1018 | 9.29 × 10−29 | 5.66 × 1017 | 5.88 × 102 | 2.91 × 10−28 | 0 |
| f3 | Mean | 1.09 × 105 | 5.83 × 10−5 | 1.98 × 104 | 8.36 × 10−12 | 2.81 × 104 | 2.76 × 104 | 5.68 × 10−14 | 0 |
| f3 | var | 5.84 × 108 | 2.89 × 10−9 | 6.07 × 107 | 1.98 × 10−21 | 7.34 × 107 | 3.28 × 107 | 4.39 × 10−27 | 0 |
| f4 | Mean | 6.22 × 101 | 5.89 × 101 | 4.42 × 102 | 5.51 × 101 | 1.56 × 102 | 6.75 × 101 | 3.27 × 101 | 3.17 × 101 |
| f4 | var | 8.10 × 101 | 1.74 × 100 | 4.53 × 104 | 1.68 × 103 | 1.79 × 103 | 3.29 × 102 | 8.43 × 102 | 9.06 × 102 |
| f5 | Mean | 1.76 × 102 | 1.16 × 102 | 1.26 × 102 | 8.44 × 101 | 8.44 × 101 | 5.10 × 101 | 7.42 × 102 | 4.93 × 101 |
| f5 | var | 1.26 × 102 | 1.25 × 102 | 3.27 × 102 | 9.85 × 102 | 3.95 × 102 | 4.78 × 101 | 5.32 × 104 | 1.99 × 102 |
| f6 | Mean | 1.14 × 10−13 | 3.43 × 10−4 | 6.74 × 100 | 3.66 × 100 | 3.64 × 100 | 1.64 × 10−7 | 9.47 × 101 | 1.87 × 10−5 |
| f6 | var | 1.81 × 10−27 | 1.00 × 10−8 | 2.93 × 100 | 1.05 × 101 | 3.93 × 100 | 2.36 × 10−15 | 1.32 × 102 | 1.78 × 10−9 |
| f7 | Mean | 2.09 × 102 | 1.62 × 102 | 1.93 × 102 | 1.34 × 102 | 1.28 × 102 | 9.15 × 101 | 3.16 × 103 | 8.08 × 101 |
| f7 | var | 1.21 × 102 | 8.54 × 101 | 5.16 × 102 | 9.55 × 102 | 8.76 × 102 | 6.29 × 101 | 2.85 × 106 | 2.37 × 102 |
| f8 | Mean | 1.79 × 102 | 1.19 × 102 | 1.27 × 102 | 8.11 × 101 | 7.94 × 101 | 5.80 × 101 | 5.76 × 102 | 5.06 × 101 |
| f8 | var | 1.45 × 102 | 1.06 × 102 | 4.49 × 102 | 6.13 × 102 | 4.38 × 102 | 5.08 × 101 | 2.32 × 104 | 2.49 × 102 |
| f9 | Mean | 0.00 × 100 | 2.58 × 10−7 | 2.64 × 102 | 2.25 × 102 | 4.41 × 102 | 2.83 × 101 | 1.39 × 104 | 1.37 × 101 |
| f9 | var | 2.58 × 10−28 | 9.81 × 10−14 | 1.53 × 104 | 3.44 × 104 | 1.51 × 105 | 1.11 × 102 | 7.90 × 106 | 1.44 × 10 |
| f10 | Mean | 7.38 × 103 | 4.70 × 103 | 5.64 × 103 | 2.85 × 103 | 2.82 × 103 | 2.86 × 103 | 5.18 × 103 | 2.79 × 103 |
| f10 | var | 8.98 × 104 | 8.28 × 104 | 7.65 × 105 | 2.97 × 105 | 4.22 × 105 | 8.35 × 104 | 8.42 × 105 | 3.62 × 105 |
| f11 | Mean | 5.91 × 101 | 2.91 × 101 | 8.24 × 101 | 6.97 × 101 | 3.00 × 102 | 7.54 × 101 | 1.82 × 102 | 3.13 × 101 |
| f11 | var | 3.57 × 101 | 1.83 × 102 | 9.14 × 102 | 7.16 × 102 | 1.14 × 105 | 2.87 × 102 | 3.90 × 103 | 5.49 × 102 |
| f12 | Mean | 2.47 × 104 | 1.53 × 103 | 4.76 × 106 | 1.33 × 104 | 3.76 × 107 | 4.45 × 105 | 1.45 × 103 | 3.72 × 103 |
| f12 | var | 3.76 × 108 | 1.81 × 105 | 7.69 × 1013 | 1.04 × 108 | 1.64 × 1015 | 4.62 × 1010 | 1.65 × 105 | 1.19 × 107 |
| f13 | Mean | 1.05 × 102 | 6.66 × 101 | 8.78 × 102 | 9.10 × 102 | 6.07 × 106 | 8.15 × 102 | 1.73 × 103 | 3.19 × 101 |
| f13 | var | 1.08 × 102 | 7.57 × 101 | 1.45 × 105 | 7.08 × 106 | 1.56 × 1015 | 9.89 × 104 | 8.10 × 105 | 1.01 × 102 |
| f14 | Mean | 6.73 × 101 | 3.26 × 101 | 3.62 × 101 | 5.51 × 101 | 1.39 × 105 | 1.30 × 104 | 1.93 × 102 | 2.77 × 101 |
| f14 | var | 2.45 × 101 | 6.60 × 101 | 4.66 × 101 | 3.62 × 102 | 7.33 × 1010 | 1.31 × 108 | 2.88 × 103 | 1.33 × 102 |
| f15 | Mean | 4.87 × 101 | 1.76 × 101 | 6.94 × 101 | 8.51 × 102 | 2.09 × 105 | 1.36 × 102 | 3.01 × 102 | 1.31 × 101 |
| f15 | var | 2.58 × 101 | 2.51 × 101 | 7.11 × 102 | 6.52 × 106 | 5.21 × 1011 | 1.82 × 103 | 1.19 × 104 | 4.68 × 101 |
| f16 | Mean | 1.54 × 103 | 3.39 × 102 | 7.26 × 102 | 7.05 × 102 | 6.78 × 102 | 4.44 × 102 | 6.00 × 102 | 6.07 × 102 |
| f16 | var | 2.53 × 104 | 2.48 × 104 | 6.33 × 104 | 9.13 × 104 | 5.02 × 104 | 1.57 × 104 | 1.15 × 105 | 6.26 × 104 |
| f17 | Mean | 5.68 × 102 | 9.41 × 101 | 8.20 × 101 | 2.61 × 102 | 2.35 × 102 | 8.34 × 101 | 2.90 × 102 | 7.58 × 101 |
| f17 | var | 1.35 × 104 | 2.17 × 102 | 1.15 × 103 | 2.03 × 104 | 1.36 × 104 | 6.42 × 102 | 4.15 × 104 | 6.18 × 103 |
| f18 | Mean | 5.18 × 101 | 2.71 × 101 | 4.39 × 101 | 2.84 × 104 | 7.20 × 105 | 1.21 × 105 | 2.14 × 102 | 4.53 × 101 |
| f18 | var | 2.10 × 101 | 4.74 × 100 | 3.46 × 102 | 5.41 × 108 | 8.41 × 1011 | 2.42 × 109 | 5.04 × 103 | 1.58 × 103 |
| f19 | Mean | 2.88 × 101 | 1.83 × 101 | 1.02 × 101 | 8.07 × 101 | 4.73 × 105 | 5.04 × 101 | 2.14 × 102 | 8.19 × 10 |
| f19 | var | 5.66 × 100 | 5.61 × 100 | 1.39 × 101 | 1.31 × 103 | 1.87 × 1012 | 4.15 × 102 | 5.33 × 103 | 3.16 × 101 |
| f20 | Mean | 6.29 × 102 | 5.15 × 101 | 1.10 × 102 | 2.88 × 102 | 3.25 × 102 | 1.50 × 102 | 1.35 × 103 | 1.21 × 102 |
| f20 | var | 1.21 × 104 | 1.45 × 102 | 8.68 × 103 | 1.93 × 104 | 1.82 × 104 | 2.98 × 103 | 1.05 × 105 | 1.03 × 104 |
| f21 | Mean | 3.78 × 102 | 3.12 × 102 | 3.15 × 102 | 2.73 × 102 | 2.75 × 102 | 2.20 × 102 | 4.86 × 102 | 2.49 × 102 |
| f21 | var | 1.07 × 102 | 9.50 × 101 | 5.10 × 102 | 4.70 × 102 | 6.01 × 102 | 2.51 × 103 | 7.61 × 104 | 1.97 × 102 |
| f22 | Mean | 7.32 × 103 | 1.00 × 102 | 4.99 × 102 | 1.06 × 103 | 2.04 × 103 | 1.29 × 102 | 5.64 × 103 | 1.00 × 102 |
| f22 | var | 5.99 × 104 | 1.31 × 10−14 | 1.61 × 104 | 2.35 × 106 | 2.19 × 106 | 1.47 × 102 | 1.45 × 106 | 7.38 × 10−15 |
| f23 | Mean | 5.44 × 102 | 4.55 × 102 | 4.66 × 102 | 4.47 × 102 | 4.25 × 102 | 4.05 × 102 | 2.04 × 103 | 3.97 × 102 |
| f23 | var | 1.19 × 102 | 1.07 × 102 | 1.17 × 103 | 1.06 × 103 | 6.71 × 102 | 5.37 × 101 | 4.21 × 105 | 2.10 × 102 |
| f24 | Mean | 5.97 × 102 | 4.69 × 102 | 5.25 × 102 | 5.27 × 102 | 5.10 × 102 | 4.41 × 102 | 4.56 × 102 | 4.66 × 102 |
| f24 | var | 7.61 × 101 | 7.25 × 102 | 8.14 × 102 | 1.11 × 103 | 2.71 × 103 | 1.28 × 104 | 1.47 × 103 | 2.98 × 102 |
| f25 | Mean | 3.79 × 102 | 3.87 × 102 | 5.29 × 102 | 3.94 × 102 | 4.53 × 102 | 3.87 × 102 | 3.87 × 102 | 3.87 × 102 |
| f25 | var | 4.54 × 10−4 | 3.24 × 10−4 | 1.99 × 103 | 1.89 × 102 | 7.93 × 102 | 2.23 × 10−1 | 5.78 × 100 | 9.06 × 10−2 |
| f26 | Mean | 2.56 × 103 | 1.81 × 103 | 1.83 × 103 | 2.10 × 103 | 1.79 × 103 | 7.65 × 102 | 1.07 × 103 | 1.42 × 103 |
| f26 | var | 7.06 × 103 | 2.35 × 105 | 2.54 × 105 | 4.91 × 105 | 1.00 × 105 | 9.82 × 104 | 2.76 × 105 | 4.46 × 104 |
| f27 | Mean | 5.13 × 102 | 4.81 × 102 | 5.17 × 102 | 5.59 × 102 | 5.31 × 102 | 5.13 × 102 | 6.93 × 102 | 5.05 × 102 |
| f27 | var | 7.87 × 100 | 7.99 × 101 | 2.47 × 102 | 4.69 × 102 | 2.45 × 102 | 2.15 × 101 | 8.55 × 105 | 9.67 × 101 |
| f28 | Mean | 5.12 × 102 | 3.09 × 102 | 5.69 × 102 | 3.52 × 102 | 5.42 × 102 | 4.26 × 102 | 3.57 × 102 | 3.44 × 102 |
| f28 | var | 5.09 × 100 | 9.15 × 102 | 3.90 × 103 | 3.14 × 103 | 2.04 × 103 | 4.18 × 101 | 3.36 × 103 | 3.83 × 103 |
| f29 | Mean | 8.81 × 102 | 5.94 × 102 | 5.57 × 102 | 7.65 × 102 | 7.51 × 102 | 5.40 × 102 | 7.86 × 102 | 4.69 × 102 |
| f29 | var | 2.63 × 104 | 2.02 × 103 | 7.09 × 103 | 3.67 × 104 | 1.04 × 104 | 2.70 × 103 | 4.60 × 104 | 7.04 × 103 |
| f30 | Mean | 6.92 × 102 | 1.98 × 103 | 5.86 × 103 | 3.26 × 103 | 3.28 × 106 | 8.07 × 103 | 2.29 × 103 | 2.17 × 103 |
| f30 | var | 2.49 × 104 | 2.67 × 102 | 4.75 × 106 | 8.32 × 105 | 6.84 × 1012 | 3.45 × 106 | 2.67 × 105 | 4.61 × 104 |
Table 3. The results of parameter sensitivity analysis.
| Function | TDE Mean | TDE Std | TDE1 Mean | TDE1 Std |
| f1 | 0 | 0 | 0 | 0 |
| f6 | 4.52 × 10−3 | 1.16 × 10−2 | 1.87 × 10−5 | 1.78 × 10−9 |
| f17 | 7.16 × 101 | 2.73 × 103 | 7.58 × 101 | 6.18 × 103 |
| f22 | 1.00 × 102 | 8.97 × 103 | 1.00 × 102 | 7.38 × 10−15 |
Table 4. The coefficients of the six-unit system.
| Generator | a_i | b_i | c_i | Min Load (MW) | Max Load (MW) |
| 1 | 0.1525 | 38.540 | 756.800 | 10 | 125 |
| 2 | 0.1060 | 46.160 | 451.325 | 10 | 150 |
| 3 | 0.0280 | 40.400 | 1050.000 | 35 | 225 |
| 4 | 0.0355 | 38.310 | 1243.530 | 35 | 210 |
| 5 | 0.0211 | 36.328 | 1658.570 | 130 | 325 |
| 6 | 0.0180 | 38.270 | 1356.660 | 125 | 315 |
Table 5. The comparisons among eight algorithms (MW).
| Algorithm | Best | Median | Mean | Worst | Std | Ranking |
| DE | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 6.81 × 105 | 3 |
| CoDE | 2.70057 × 104 | 2.70127 × 104 | 2.70134 × 104 | 2.70273 × 104 | 26.42 | 5 |
| tlboDE | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 1.51 × 106 | 2 |
| RLDE | 2.70048 × 104 | 2.70280 × 104 | 2.70390 × 104 | 2.71433 × 104 | 1.14 × 103 | 6 |
| SOGWO | 2.70240 × 104 | 2.71312 × 104 | 2.71495 × 104 | 2.74456 × 104 | 8.64 × 103 | 7 |
| CLPSO | 2.70415 × 104 | 2.71370 × 104 | 2.71597 × 104 | 2.72839 × 104 | 4.07 × 103 | 8 |
| CMA-ES | 2.70041 × 104 | 2.70041 × 104 | 2.70050 × 104 | 2.70182 × 104 | 8.37 | 4 |
| TDE | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 2.70041 × 104 | 1.5 × 107 | 1 |
Table 6. The obtained results of eight algorithms on the multi-product batch plant.
| Algorithm | Best | Median | Mean | Worst | Std | Ranking |
| DE | 5.80 × 104 | 5.92 × 104 | 5.92 × 104 | 6.01 × 104 | 2.51 × 105 | 7 |
| CoDE | 5.43 × 104 | 5.61 × 104 | 5.61 × 104 | 5.88 × 104 | 1.07 × 106 | 3 |
| tlboDE | 5.36 × 104 | 5.85 × 104 | 5.73 × 104 | 5.85 × 104 | 4.57 × 106 | 4 |
| RLDE | 5.37 × 104 | 5.86 × 104 | 5.85 × 104 | 5.93 × 104 | 1.57 × 106 | 5 |
| SOGWO | 5.37 × 104 | 5.37 × 104 | 5.38 × 104 | 5.39 × 104 | 1.71 × 103 | 2 |
| CLPSO | 5.69 × 104 | 6.38 × 104 | 6.36 × 104 | 6.89 × 104 | 6.30 × 106 | 8 |
| CMA-ES | 5.36 × 104 | 5.85 × 104 | 5.88 × 104 | 8.07 × 104 | 2.41 × 107 | 6 |
| TDE | 5.36 × 104 | 5.36 × 104 | 5.37 × 104 | 5.39 × 104 | 1.76 × 103 | 1 |
Table 7. The obtained results of eight algorithms on the process synthesis problem.
| Algorithm | Best | Median | Mean | Worst | Std | Ranking |
| DE | 2.92 | 2.94 | 2.95 | 3.10 | 9.55 × 104 | 4 |
| CoDE | 2.93 | 3.06 | 3.08 | 3.33 | 7.03 × 103 | 5 |
| tlboDE | 2.92 | 2.93 | 2.93 | 2.93 | 4.46 × 107 | 2 |
| RLDE | 2.92 | 2.92 | 3.17 | 4.63 | 2.57 × 101 | 6 |
| SOGWO | 2.93 | 3.62 | 3.60 | 5.57 | 5.90 × 101 | 7 |
| CLPSO | 2.93 | 2.93 | 2.94 | 2.96 | 4.72 × 105 | 3 |
| CMA-ES | 2.92 | 4.62 | 4.27 | 4.77 | 4.90 × 101 | 8 |
| TDE | 2.92 | 2.93 | 2.93 | 2.93 | 2.21 × 10−7 | 1 |
