Article

A Modified Cloud Particles Differential Evolution Algorithm for Real-Parameter Optimization

School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
Algorithms 2016, 9(4), 78; https://doi.org/10.3390/a9040078
Submission received: 18 July 2016 / Revised: 2 November 2016 / Accepted: 14 November 2016 / Published: 18 November 2016
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)

Abstract

The issue of exploration versus exploitation remains one of the most challenging tasks within the framework of evolutionary algorithms. To effectively balance exploration and exploitation in the search space, this paper proposes a modified cloud particles differential evolution algorithm (MCPDE) for real-parameter optimization. In contrast to the original Cloud Particles Differential Evolution (CPDE) algorithm, firstly, control parameter adaptation strategies are designed according to the quality of the control parameters. Secondly, an inertia factor is introduced to keep a better balance between exploration and exploitation; this helps maintain the diversity of the population and discourages premature convergence. In addition, the opposition mechanism and orthogonal crossover are used to increase the search ability during the evolutionary process. Finally, the CEC2013 contest benchmark functions are used to verify the feasibility and effectiveness of the proposed algorithm. The experimental results show that MCPDE is an effective method for global optimization problems.

1. Introduction

Many real-world problems are complex optimization problems that are quite difficult to solve. Traditional optimization methods perform poorly on problems that are multi-modal, high-dimensional, discontinuous, multi-objective, or dynamic. Nature-inspired meta-heuristic algorithms, also called artificial evolution (AE) [1], are becoming more and more popular in engineering applications for building feasible solutions. These evolutionary algorithms (EAs), which are known to be capable of finding near-optimum solutions to real-parameter optimization problems, have been successfully applied to many optimization tasks, such as scheduling, economic problems, neural network training, data clustering, large-scale optimization, constrained optimization, forecasting and multi-objective optimization [2,3,4,5,6,7,8,9].
Meta-heuristic algorithms can be grouped into three main categories [10]: evolution-based, physics-based, and swarm intelligence-based methods. The evolutionary algorithms, which are based on the evolutionary process, include the Genetic Algorithm (GA) [11], Genetic Programming (GP) [12], Differential Evolution (DE) [13], the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES) [14], and the Biogeography-Based Optimizer (BBO) [15], among others. DE is a classical global optimization algorithm proposed by Storn and Price. CMA-ES, proposed by Hansen and Ostermeier, adapts the complete covariance matrix of the normal mutation distribution to solve optimization problems. Methods based on physical processes include Simulated Annealing (SA) [16,17], Brain Storm Optimization (BSO) [18], and Chemical Reaction Optimization (CRO) [19]. SA is a heuristic algorithm based on an analog of thermodynamics that describes the way metals cool and anneal [20]. BSO mimics the brainstorming process in which a group of people solves a problem together [21]. CRO is a chemical-reaction-inspired metaheuristic that mimics the characteristics of chemical reactions in solving optimization problems [19]. Moreover, there are swarm intelligence methods based on animal behavior, such as the Artificial Bee Colony (ABC) [22] and Teaching-Learning-Based Optimization (TLBO) [23,24]. ABC, proposed by Karaboga, simulates the foraging behavior of a honeybee swarm and has been applied to many engineering optimization problems [25,26]. TLBO, proposed by Rao, is based on the influence of a teacher on the output of learners in a class [23].
The Cloud Particles Differential Evolution (CPDE) algorithm [27], inspired by cloud formation and state change, is a population-based algorithm. CPDE employs a phase transformation mechanism to promote the superior cloud particles to lead the swarm through the evolution. The evolutionary process in CPDE is divided into three stages: gaseous, liquid and solid. In the gaseous state, the cloud particles explore the search area through a condensation operation. In the liquid state, a liquefaction operation realizes macro-local exploitation. In the solid state, a solidification operation realizes micro-local exploitation. CPDE has been shown to perform well on many optimization problems. However, because the new cloud particles are generated by the superior cloud particles, CPDE may easily be trapped in local optima when solving complex problems containing multiple local optimal solutions, such as the CEC2013 benchmark functions.
This paper proposes a modified cloud particles differential evolution algorithm (MCPDE). Firstly, control parameter adaptation strategies are designed by tuning the movement step and crossover factor used at different evolutionary stages. Secondly, an inertia factor is introduced to effectively balance exploration and exploitation: superior cloud particles, assigned a smaller movement step, guide the search direction and exploit areas where better particles may exist, while inferior cloud particles, assigned a larger movement step, maintain population diversity. In addition, the opposition mechanism and the orthogonal crossover are used to increase the search ability during the evolutionary process. Finally, the population size is gradually decreased during the evolution process to achieve faster convergence.
The rest of the paper is organized as follows. Section 2 reviews the basic differential evolution algorithm and variants of DE. Section 3 describes the modified cloud particles differential evolution algorithm. Section 4 evaluates the performance of MCPDE through experiments on the 28 standard benchmark functions of the CEC2013 contest. The source code of the compared algorithms is available at http://ist.csu.edu.cn/YongWang.htm. Finally, conclusions and possible future research are drawn in Section 5.

2. Background

2.1. Basic Differential Evolution Algorithm

DE is a well-known global optimization algorithm comprising mutation, crossover and selection. During each generation, trial vectors are produced by the mutation and crossover operations; the vectors that survive to the next generation are then determined by the selection operation.

2.1.1. Mutation

With respect to each individual $x_{i,G}$ (called the target vector) at generation $G$, a new individual $v_{i,G} = (v_{i,G}^1, v_{i,G}^2, \ldots, v_{i,G}^D)$, called the mutant vector, is produced by mutation and arithmetic recombination. Many mutation strategies can be found in the literature [28,29]; the classical one is "DE/rand/1":

$$v_{i,G} = x_{r_1,G} + F \times (x_{r_2,G} - x_{r_3,G})$$ (1)

The indices $r_1$, $r_2$, $r_3$ are three mutually distinct, uniformly distributed random integers within the range $[1, N]$, each different from the index $i$. The control parameter $F$, namely the mutation factor, is defined by the user and scales the difference vector.
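As a concrete illustration, the "DE/rand/1" rule of Equation (1) can be sketched in a few lines of NumPy; the population array, function name, and parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def de_rand_1(pop, i, F):
    """DE/rand/1 mutation (Equation (1)): v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 mutually distinct and all different from the target i."""
    N = len(pop)
    # Sample three distinct indices, none equal to the target index i.
    candidates = [j for j in range(N) if j != i]
    r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

pop = np.random.uniform(-100, 100, size=(20, 10))  # N = 20 particles, D = 10
v = de_rand_1(pop, 0, F=0.5)  # mutant vector for target index 0
```

The mutant inherits the dimensionality of the population members; bound handling (clipping or reflecting components outside the search range) is typically applied afterwards.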

2.1.2. Crossover

To increase the population diversity, a crossover operation is generally applied to the target vector $x_{i,G} = (x_{i,G}^1, x_{i,G}^2, \ldots, x_{i,G}^D)$ to generate a trial vector $u_{i,G} = (u_{i,G}^1, u_{i,G}^2, \ldots, u_{i,G}^D)$. Binomial (uniform) crossover and exponential crossover are generally used in DE. The basic version of DE uses binomial crossover, defined as follows:

$$u_{i,G}^j = \begin{cases} v_{i,G}^j, & \text{if } rand_j(0,1) \le Cr \text{ or } j = j_{rand} \\ x_{i,G}^j, & \text{otherwise} \end{cases} \qquad j = 1, 2, \ldots, D$$ (2)

In Equation (2), the crossover rate $Cr \in [0,1]$ is a control parameter, $rand_j(0,1)$ is randomly selected in the range $[0,1]$, and $j_{rand}$ is randomly selected in the range $[1, D]$. The mutant vector $v_{i,G}$ is generated according to Equation (1).
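A minimal sketch of the binomial crossover of Equation (2); the zero/one parent vectors are illustrative so the origin of each trial component is visible.

```python
import numpy as np

def binomial_crossover(x, v, Cr):
    """Binomial crossover (Equation (2)): each trial component is copied
    from the mutant v with probability Cr; the component at the random
    index j_rand is always taken from v."""
    D = len(x)
    j_rand = np.random.randint(D)
    mask = np.random.rand(D) <= Cr
    mask[j_rand] = True  # guarantee at least one mutant component
    return np.where(mask, v, x)

x = np.zeros(8)  # target vector
v = np.ones(8)   # mutant vector
u = binomial_crossover(x, v, Cr=0.9)
```

Forcing index j_rand ensures the trial vector differs from the target in at least one dimension, even when Cr is very small.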

2.1.3. Selection

The selection operator determines which vectors survive to the next generation. If the fitness of $u_{i,G}$ is better than or as good as that of $x_{i,G}$, then $u_{i,G}$ is selected; otherwise, $x_{i,G}$ is retained. The selection operation is defined as follows:

$$x_{i,G+1} = \begin{cases} u_{i,G}, & \text{if } f(u_{i,G}) \le f(x_{i,G}) \\ x_{i,G}, & \text{otherwise} \end{cases}$$ (3)
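The greedy selection of Equation (3) can be sketched as follows for a minimization problem; the sphere function and the sample vectors are illustrative.

```python
import numpy as np

def sphere(x):
    """Illustrative minimization objective: f(x) = sum(x_i^2)."""
    return float(np.sum(x ** 2))

def select(x, u, f):
    """Greedy selection (Equation (3)): the trial vector u replaces the
    target x only if its fitness is at least as good (minimization)."""
    return u if f(u) <= f(x) else x

x = np.array([2.0, 2.0])  # target: f(x) = 8
u = np.array([1.0, 1.0])  # trial:  f(u) = 2, so the trial survives
survivor = select(x, u, sphere)
```

Note the "<=" rather than "<": accepting equally good trial vectors lets the population drift across flat fitness plateaus.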

2.2. Related Works

The performance of DE is directly affected by the control parameters and the associated evolutionary strategies. Therefore, many variants of DE have been proposed to improve the performance of the algorithm.

2.2.1. Adapting Control Parameters of Differential Evolution

In jDE [30], self-adaptation of the control parameters is proposed: F and Cr are encoded into the individuals and updated with some probability so that better control parameters are used in the next generation. In SaDE [28], promising solutions are generated with self-adapted control parameters; the parameter F is generated by N(0.5, 0.3), and the crossover rate Cr is generated by N(Crm, 0.1) with Crm initialized to 0.5. In JADE [29], "DE/current-to-pbest" with an optional external archive is introduced; the external archive stores inferior solutions to provide a promising direction for the search process and improve the population diversity, and the control parameters are automatically updated according to previously successful experience. In success-history based adaptive DE (SHADE) [31], a new parameter adaptation mechanism based on the successful search experience is proposed. Many other parameter control variants, such as FiADE, DMPSADE and DESSA, are available in the literature [32,33,34].

2.2.2. Generation Strategy of Differential Evolution

DE researchers have shown that certain trial vector generation strategies and operations can improve the performance of DE. CoDE [35] combines three well-studied trial vector generation strategies with three random control parameter settings to generate trial vectors. In L-SHADE [36], Linear Population Size Reduction (LPSR) is embedded into SHADE to improve the robustness of the algorithm. Das et al. [37] proposed an improvement of DE using the concept of the neighborhood of each population member. Gong et al. [38] proposed a crossover rate repair technique for adaptive DE algorithms, in which the crossover rate is repaired by its corresponding binary string, which replaces the original crossover rate. In addition, some algorithms [39,40,41,42] are based on population initialization and population tuning strategies.

2.2.3. Hybridized Versions of Differential Evolution

Some useful techniques or different evolutionary algorithms have been combined with DE to improve its performance. DE/EDA [43], proposed by Sun et al., produces new promising solutions with a DE/EDA offspring generation scheme. Adam [44] proposed an adaptive memetic differential evolution algorithm that uses the Nelder-Mead algorithm as a local search method. Zheng [45] combined DE with the fireworks algorithm (FA) to improve the performance of DE. Ali [46] presented a hybrid optimization approach based on DE and the receptor editing property of the immune system. A detailed survey of hybrid DE algorithms can be found in [4,47,48,49,50,51].

3. Modified Cloud Particles Differential Evolution Algorithm

Control parameters and evolutionary strategies can significantly influence the performance of the algorithm. Based on our previous work [27], a modified cloud particles differential evolution algorithm (MCPDE) is proposed.

3.1. The Proposed MCPDE

The relation between exploration and exploitation is an important issue in the framework of EAs, and the performance of an algorithm can be effectively improved by balancing the two in the search. Research results show that an algorithm should start with exploration and then gradually change to exploitation. Based on this analysis, an inertia factor and adaptive control parameter strategies at different stages are designed to keep the balance between exploration and exploitation. The opposition mechanism and the orthogonal crossover are employed to increase the search ability during the evolutionary process. Finally, the population size is gradually decreased during the evolution process to achieve faster convergence.
Like other optimization algorithms, the proposed algorithm starts with an initial population which is composed of the cloud particles. Each cloud particle represents a feasible solution of the problem. An MCPDE population is represented as a set of real parameter vectors which is defined as follows:
$$x_i = (x_1, x_2, \ldots, x_D), \quad i = 1, \ldots, N$$ (4)

where $D$ is the dimensionality of the optimization problem and $N$ is the population size.
At each generation, in order to find better solutions, superior particles exploit the searching area with a smaller step and guide the searching direction, and inferior particles explore promising areas with a relatively large radius and maintain population diversity. The evolutionary strategy, based on DE/current-to-pbest with optional archive, is generated as follows:
$$\omega_1 = 0.85 + 10^{\frac{FES}{MaxFES} - 1.9}$$ (5)

$$\omega_2 = 2 - \omega_1$$ (6)

$$v_i = x_{r_1} + \omega_1 \times F_i \times (x_{best} - x_{r_1}) + \omega_2 \times F_i \times (x_{r_2} - \tilde{x}_{r_3})$$ (7)
where $\omega_1$ and $\omega_2$ are inertia factors, $i \in \{1, \ldots, N\}$, and $r_1$, $r_2$ and $r_3$ are mutually different random integer indices selected in the range $[1, N]$. $FES$ and $MaxFES$ are the current number of function evaluations and the maximum number of function evaluations, respectively. In Equation (5), the constants 0.85 and 1.9 were obtained by experiments. The value of $FES/MaxFES$ gradually increases as the iteration progresses; therefore, the superior particles attract the new particle to exploit better solutions with increasing $\omega_1$. $F_i$ is the mutation factor that controls the speed of the search process; it is used by each cloud particle $x_i$ and is generated anew at each generation. $x_{best}$ is randomly chosen from the top $p$ cloud particles in the current population, where $p$ is 15% of the population size. $\tilde{x}_{r_3}$ is selected from the union of the population and the archive. If the archive size exceeds 150% of the population size, some solutions are randomly removed from the archive so that new cloud particles can be inserted; the archive is the set of archived inferior solutions introduced in JADE [29]. Furthermore, a mathematical proof has shown that opposite numbers are likely to be closer to the optimal solution [52]. Motivated by this, some inferior solutions in the archive are randomly selected and replaced by their opposite solutions. The opposition mechanism [39] applied to these inferior solutions is defined as follows:
$$\tilde{x}_i = a + b - x_i$$ (8)

where $x_i \in [a, b]$, $i = 1, \ldots, D$. $\tilde{x} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D)$ is the opposite of $x = (x_1, x_2, \ldots, x_D)$. The number of interchanged solutions is $N/D$.
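The opposition mechanism of Equation (8) reflects each component across the midpoint of the search interval; the bounds and sample vector below are illustrative for the CEC2013-style range [-100, 100].

```python
import numpy as np

def opposite(x, a, b):
    """Opposite solution of Equation (8): x_tilde_i = a + b - x_i,
    applied component-wise for the search bounds [a, b]."""
    return a + b - x

x = np.array([-60.0, 10.0, 95.0])
x_opp = opposite(x, a=-100.0, b=100.0)  # -> [60., -10., -95.]
```

Since a + b = 0 for a symmetric range, the opposite point is simply the negation of the original; for asymmetric bounds the reflection shifts accordingly.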
Figure 1 shows the curves of ω1 and ω2. It can be seen that ω1 tends to increase continually and ω2 tends to decrease as the iteration progresses. The variation of ω1 and ω2 ensures that the proposed algorithm transitions smoothly between exploration and exploitation. At the early evolution stage, inferior particles search farther areas of the solution space, and a larger ω2 maintains the diversity and exploration capability. Then, as the generations increase, ω2 decreases while ω1 increases. In this way, the new particle is strongly attracted toward the current superior particles and tries to exploit better solutions that may exist in their neighborhoods. Meanwhile, the convergence speed is enhanced.
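The complementary schedule of the two inertia factors can be sketched as below, assuming the exponential reading of Equation (5), ω1 = 0.85 + 10^(FES/MaxFES − 1.9), together with ω2 = 2 − ω1 from Equation (6); the MaxFES value is illustrative.

```python
def inertia_factors(fes, max_fes):
    """Inertia factors of Equations (5)-(6), under the reading
    w1 = 0.85 + 10**(FES/MaxFES - 1.9) and w2 = 2 - w1.
    w1 grows and w2 shrinks as function evaluations accumulate."""
    w1 = 0.85 + 10.0 ** (fes / max_fes - 1.9)
    return w1, 2.0 - w1

w1_start, w2_start = inertia_factors(0, 100000)       # early: exploration
w1_end, w2_end = inertia_factors(100000, 100000)      # late: exploitation
```

Because ω1 + ω2 = 2 at every point, increasing the pull toward the superior particles in Equation (7) automatically decreases the weight on the diversity-preserving difference term, which is how the schedule trades exploration for exploitation.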

3.2. Control Parameters Assignments

In classic DE, control parameters are preset and fixed during the entire iteration process. However, it is impossible to find one constant parameter setting that fits all problems. As pointed out in [53], different parameter settings not only play an important role in the performance of DE, but may also be tailored to specific test problems. Thus, a novel parameter adaptation scheme is presented to adjust the parameters F and Cr at different evolutionary stages.
In the MCPDE algorithm, the parameter settings are divided into three stages according to the successful mutation factors F at the current generation. The initial $F_i$ and $Cr_i$ used by each cloud particle $x_i$ are generated independently and formulated as follows:

$$F_i = r_2 \times \left( r_1 \times \frac{f_0}{5\,MaxFES} + \frac{f_0}{5} \right) + f_0$$ (9)

$$Cr_i = r_2 \times \left( r_1 \times \frac{cr_0}{5\,MaxFES} + \frac{cr_0}{5} \right) + cr_0$$ (10)

where $f_0$ and $cr_0$ are both initialized to 0.5, and $r_1$ and $r_2$ are random numbers in $[0, 1]$.
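A sketch of the initial parameter generation, assuming the fractional reading of Equations (9) and (10) in which the first term inside the parentheses is f0/(5 MaxFES); the population size and budget below are illustrative.

```python
import numpy as np

def initial_params(n, max_fes, f0=0.5, cr0=0.5):
    """Per-particle initial F_i and Cr_i, under the reading
    F_i = r2 * (r1 * f0/(5*MaxFES) + f0/5) + f0 (Equation (9)),
    and likewise for Cr_i with cr0 (Equation (10))."""
    r1, r2 = np.random.rand(n), np.random.rand(n)
    F = r2 * (r1 * f0 / (5.0 * max_fes) + f0 / 5.0) + f0
    r1, r2 = np.random.rand(n), np.random.rand(n)
    Cr = r2 * (r1 * cr0 / (5.0 * max_fes) + cr0 / 5.0) + cr0
    return F, Cr

F, Cr = initial_params(50, 100000)
```

Under this reading, with f0 = cr0 = 0.5 each parameter starts in a narrow band just above 0.5, so the early search uses moderate, similar settings for all particles and diversifies them later through the success-history update.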
In each generation, the set $S_F$ stores all successful mutation factors at the current generation; similarly, the set $S_{Cr}$ stores all successful crossover rates. The size of $S_F$ is denoted $|S_F|$. If $|S_F|$ exceeds the current population size $N$, randomly selected elements are deleted from $S_F$ and $S_{Cr}$, and $S_F$ and $S_{Cr}$ are then preserved for the next generation. When the set $S_F$ is empty, it indicates that F and Cr at the current generation are proper parameters for the algorithm, and they are preserved for the next generation. When $|S_F|$ is less than the current population size $N$, new control parameters $F_1', F_2', \ldots, F_{N-|S_F|}'$ are produced according to Equation (9), and $Cr_1', Cr_2', \ldots, Cr_{N-|S_F|}'$ are defined by
$$Cr_i' = \sigma(S_{Cr}) + r_i, \quad i = 1, \ldots, N - |S_{Cr}|$$ (11)

where $\sigma(S_{Cr})$ is the standard deviation of $S_{Cr}$ and $r_i$ is randomly selected in the range $[0, 1]$.
At the end of each generation, the parameters F and Cr are updated when $|S_F|$ is less than the current population size $N$, as defined by

$$F = S_F \cup F'$$ (12)

$$Cr = S_{Cr} \cup Cr'$$ (13)

where $F' = (F_1', F_2', \ldots, F_{N-|S_F|}')$ and $Cr' = (Cr_1', Cr_2', \ldots, Cr_{N-|S_F|}')$.
In the MCPDE algorithm, different control parameters are chosen at different stages. At the early stage of evolution, the control parameter values lie near f0 and cr0, with randomization according to Equations (9) and (10). The resulting diversity improves the exploration ability, and the cloud particles explore farther areas of the solution space. At each generation, better control parameters are preserved for the next generation; with better control parameters, the population diversity is improved and the convergence speed is accelerated. However, it becomes difficult to find better control parameters as the generations increase, so the algorithm may struggle to escape a local optimum because of fast convergence and poor diversity. To address this, new parameters F are introduced to maintain search efficiency according to Equations (9) and (12), while new parameters Cr are produced to improve population diversity according to Equations (11) and (13). Therefore, the performance of MCPDE is improved by choosing different control parameter strategies at different evolutionary stages.
The size of the population used by EAs plays a significant role in controlling exploration and exploitation. Large population sizes can encourage wider exploration of the search space, while small population sizes may promote exploitation of the search space. Therefore, the population size is gradually decreased as the iteration continues. By the end of each generation, the population size N is updated and is defined by
$$N' = N_0 - \frac{N_0}{MaxFES} \times FES$$ (14)

$$N = \begin{cases} N - 1, & \text{if } N' < N \\ N, & \text{otherwise} \end{cases}$$ (15)

where $N_0$ is the initial population size, $FES$ is the current number of fitness evaluations, and $MaxFES$ is the maximum number of fitness evaluations. If $N' < N$, the worst individual is deleted and the archive is resized accordingly. Because Equation (7) requires at least four particles, the minimum population size $N$ is set to 4.
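One update step of this population size reduction can be sketched as follows; the initial size, budget, and checkpoint spacing are illustrative, and the floor of 4 reflects the minimum stated above.

```python
def lpsr_step(n, n0, fes, max_fes, n_min=4):
    """One population-size update (Equations (14)-(15)): compute the linear
    target N' and shrink the current size N by one whenever N' < N,
    never going below n_min (Equation (7) needs at least four particles)."""
    n_target = n0 - n0 * fes / max_fes   # Equation (14)
    if n_target < n and n > n_min:       # Equation (15): delete the worst
        n -= 1
    return n

# Simulate the schedule at evenly spaced evaluation checkpoints.
n = 100
for fes in range(0, 100001, 1000):
    n = lpsr_step(n, 100, fes, 100000)
```

With one removal per checkpoint, the actual size tracks the linear target from 100 down to the floor of 4 by the end of the run.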

3.3. Orthogonal Crossover

It is well known that crossover helps share better gene segments by exchanging the gene information of the parents. However, the quality of the offspring produced by a crossover operator depends strongly on the characteristics of the target problem, so multiple crossover operators are often employed instead of a single one when solving different optimization problems [54]. As pointed out in [55], orthogonal crossover (OX) operators can conduct an effective search in the region defined by the parents. Hence, we use the QOX (orthogonal crossover with quantization) [55] operator to enhance the search ability of MCPDE. To save computational cost, QOX is applied only to a better particle randomly selected from $P_{best,G}$. The orthogonal array used in the QOX operator is denoted $L_M(Q^K)$, i.e., $K$ factors (variables) with $Q$ levels (values) and $M$ combinations. In MCPDE, $Q = 3$, $M = 9$, and thus $L_9(3^4)$ is used.
$$L_9(3^4) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 3 & 3 & 3 \\ 2 & 1 & 2 & 3 \\ 2 & 2 & 3 & 1 \\ 2 & 3 & 1 & 2 \\ 3 & 1 & 3 & 2 \\ 3 & 2 & 1 & 3 \\ 3 & 3 & 2 & 1 \end{bmatrix}$$ (16)
The $Q$ levels for the cloud particle $P_{i,G}$ are defined as follows:

$$l_{i,j} = \min(C_{i,G}^{best}, C_{i,G}) + \frac{j-1}{Q-1} \times \left( \max(C_{i,G}^{best}, C_{i,G}) - \min(C_{i,G}^{best}, C_{i,G}) \right)$$ (17)

where $j = 1, \ldots, Q$. $C_{i,G}^{best}$ and $C_{i,G}$ are the parents, which define a search range $[\min(C_{i,G}^{best}, C_{i,G}), \max(C_{i,G}^{best}, C_{i,G})]$ for particle $P_{i,G}$. $C_{i,G}^{best}$ is randomly selected from $P_{best,G}$.
The particle $P_{i,G}$ is divided into $K$ subvectors:

$$\begin{cases} H_1 = (p_{i,G}^1, p_{i,G}^2, \ldots, p_{i,G}^{t_1}) \\ H_2 = (p_{i,G}^{t_1+1}, p_{i,G}^{t_1+2}, \ldots, p_{i,G}^{t_2}) \\ \quad \vdots \\ H_K = (p_{i,G}^{t_{K-1}+1}, p_{i,G}^{t_{K-1}+2}, \ldots, p_{i,G}^{D}) \end{cases}$$ (18)

where $t_1, t_2, \ldots, t_{K-1}$ are randomly generated integers with $1 < t_1 < t_2 < \cdots < t_{K-1} < D$.

Each $H_i$ is treated as a factor in the QOX operator, and the $Q$ levels for $H_i$ are defined as follows:

$$\begin{cases} L_{i1} = (l_{t_{i-1}+1,1}, l_{t_{i-1}+2,1}, \ldots, l_{t_i,1}) \\ L_{i2} = (l_{t_{i-1}+1,2}, l_{t_{i-1}+2,2}, \ldots, l_{t_i,2}) \\ \quad \vdots \\ L_{iQ} = (l_{t_{i-1}+1,Q}, l_{t_{i-1}+2,Q}, \ldots, l_{t_i,Q}) \end{cases}$$ (19)
Then, M solutions are constructed from the factors H1, H2, …, HK according to the rows of the orthogonal array.
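A sketch of how the M = 9 candidate solutions of QOX could be assembled from two parents using the L9(3^4) array of Equation (16); the parent vectors, cut points, and function name are illustrative, not from the paper.

```python
import numpy as np

# The L9(3^4) orthogonal array from Equation (16), with 1-indexed levels.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

def qox_offspring(parent_a, parent_b, cut_points, Q=3):
    """QOX sketch: quantize each dimension into Q levels between the two
    parents (Equation (17)), split the vector into K = 4 factors at
    cut_points (Equation (18)), and build the M = 9 candidate solutions
    prescribed by the rows of L9."""
    lo = np.minimum(parent_a, parent_b)
    hi = np.maximum(parent_a, parent_b)
    # levels[j] holds the whole vector quantized at level j+1 (Equation (17)).
    levels = [lo + j / (Q - 1) * (hi - lo) for j in range(Q)]
    segments = np.split(np.arange(len(parent_a)), cut_points)  # K factors
    offspring = []
    for row in L9:
        child = np.empty_like(parent_a)
        for factor, level in zip(segments, row):
            child[factor] = levels[level - 1][factor]
        offspring.append(child)
    return np.array(offspring)

a = np.zeros(8)
b = np.full(8, 2.0)
children = qox_offspring(a, b, cut_points=[2, 4, 6])  # 4 factors of 2 dims
```

Each row of L9 picks one quantization level per factor, so the nine candidates systematically sample the box spanned by the parents instead of relying on random recombination alone.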
The pseudo-code of MCPDE is illustrated in Algorithm 1.
Algorithm 1. MCPDE Algorithm
1: Initialize D (number of dimensions), N (population size), $L_M(Q^K)$; Archive A = φ
2:Initialize population randomly
3: Generate control parameters F and Cr according to Equations (9) and (10)
4:while the termination criteria are not met do
5: Randomly replace N/D inferior solutions by their opposite solutions according to Equation (8)
6: Generate new individuals according to Equations (5)–(7)
7: Randomly select an index i from {1, …, N}
8:  Orthogonal crossover according to Equations (17)–(19)
9:for i = 1 to N do
10:  if f(ui) < f(xi) then
11:  xiA; xi = ui
12:  endif
13:endfor
14: Calculate N for the next generation according to Equations (14) and (15)
15:if |SF| ≥ N then
16:  delete randomly selected elements from SF and SCr so that the parameter sets have size N
17:elseif (|SF| < N and SFφ) then
18:  Update F and Cr according to Equations (11)–(13)
19:elseif SF = φ then
20:  Fg+1 = Fg; Crg+1 = Crg;
21:endif
22:endwhile

4. Experiments and Discussion

4.1. General Experimental Setting

(1) Test Problems and Dimension Setting: For a comprehensive evaluation, all 28 test functions of the CEC2013 [36] benchmark set are used to assess the performance of MCPDE. According to their shape characteristics, these benchmark functions can be broadly classified into three kinds of optimization functions [56]:
  • unimodal problems f1–f5,
  • basic multimodal problems f6–f20, and
  • composition problems f21–f28.
For all of the problems, the search space is [−100, 100]^D. In this paper, the dimension (D) of all functions is set to 10 and 30.
(2) Experimental Platform and Termination Criterion: All experiments, consisting of 30 independent runs per algorithm, are carried out on the same machine with a Celeron 3.40 GHz CPU, 4 GB of memory, and the Windows 7 operating system, using Matlab R2009b. The maximum number of function evaluations (FES) is set to D × 10,000.
(3) Performance Metrics: In our experimental studies, the mean value (Fmean), standard deviation (SD), maximum value (Max) and minimum value (Min) of the solution error measure [57], defined as f(x) − f(x*), are recorded for evaluating the performance of each algorithm, where f(x) is the best fitness value found by an algorithm in a run and f(x*) is the true global optimum of the tested problem. To statistically compare the proposed algorithm with its peers, Wilcoxon's rank-sum test at the 5% significance level is used to evaluate whether the median fitness values of two sets of results are statistically different from each other. The marks "−", "+" and "≈" denote that the performance of MCPDE is better than, worse than, and similar to that of the compared algorithm, respectively.

4.2. Comparison with Nine State-of-the-Art Intelligent Algorithms on 10 and 30 Dimensions

In this part, MCPDE is compared with PSO, PSOcf (PSO with constriction factor) [58], TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE. Appropriate parameters are important for the performance of intelligent optimization algorithms, so the parameter settings of the different algorithms are given in the following:
For MCPDE, the population size N is set to 13 × D and the maximum size of the archive is set to 1.5 × N. For DE, the population size N is set to 100, and F and CR are set to 0.5 and 0.9, respectively. For PSO, the population size N is set to 40, the inertia weight ω decreases linearly from 0.9 to 0.4 over the course of the search, and the acceleration coefficients c1 and c2 are both set to 1.49445. For JADE, the population size N is set to 100, p = 0.05 and c = 0.1. The parameters of the other algorithms are the same as those used in the corresponding references.
The statistical results, in terms of Fmean, SD, Max and Min obtained in 30 independent runs by each algorithm, are reported in Table 1 and Table 2.
(1) Unimodal problems f1–f5: From the statistical results of Table 1 and Table 2, we can see that MCPDE is better than the other compared algorithms on the unimodal problems f1–f5 according to the average rank (Avg-rank) for both 10 and 30 dimensions. Considering f1–f5 with 10 dimensions: for f1, MCPDE, DE, JADE, jDE, CMA-ES and CPDE work well and obtain better results. For f2, MCPDE performs better than the other algorithms except JADE and CMA-ES. For f3, MCPDE performs significantly better than the compared algorithms. For f4, MCPDE, CMA-ES and CPDE beat the other compared algorithms. For f5, MCPDE, DE, JADE, jDE and CPDE are better than the other algorithms. On f1–f5, MCPDE performs better than PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 5, 5, 5, 3, 2, 5, 3, 2 and 2 test problems, respectively. The overall ranking for the unimodal problems is MCPDE, CMA-ES, DE, CPDE, JADE, jDE, TLBO, CoDE, PSO and PSOcf in descending order. When the search space dimension D is set to 30, according to Table 2, MCPDE is much better than PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 5, 5, 5, 3, 3, 5, 3, 3 and 4 test functions, respectively. MCPDE performs better than all compared algorithms on the unimodal problems except f2 and f4, on which it ranks second. The overall ranking for the unimodal problems is MCPDE, jDE, DE, CMA-ES, JADE, CPDE, CoDE, PSO, TLBO and PSOcf in descending order. The outstanding performance of MCPDE may be due to the use of the inertia factors, which help guide the search direction.
(2) Multimodal problems f6–f20: Considering the multimodal functions f6–f20 in Table 3, MCPDE is significantly better than the other algorithms on f6, f7, f9 and f20. On f8, most of the compared algorithms achieve similar results, except CMA-ES. MCPDE beats most of the compared algorithms on f11, although JADE and jDE achieve a similar performance. JADE performs best on f12, f13, f15, f18 and f19. jDE performs best on f14. CMA-ES performs best on f10 and f16. JADE and jDE perform best on f17. On these 15 multimodal problems, MCPDE performs better than PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 14, 12, 14, 14, 4, 14, 9, 13 and 11 problems, respectively. The overall ranking on the multimodal problems is MCPDE, JADE, jDE, CPDE, PSO, TLBO, CoDE, DE, PSOcf and CMA-ES in descending order. When the search space dimension D is set to 30, according to the experimental results on the 15 test problems in Table 4, MCPDE outperforms PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 11, 10, 13, 14, 4, 14, 6, 13 and 13 test problems, respectively. The overall ranking on the multimodal problems is JADE, MCPDE, jDE, CPDE, PSO, CoDE, TLBO, PSOcf, DE and CMA-ES in descending order.
(3) Composition problems f21–f28: As is well known, composition problems are very time consuming for fitness evaluation compared to the others because these functions combine multiple test problems into a complex landscape; it is therefore extremely difficult for state-of-the-art intelligent optimization algorithms to obtain ideal results. Concerning the composition functions f21–f28 in Table 5, MCPDE performs better than PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 7, 8, 7, 6, 3, 5, 7, 7 and 5 out of 8 test problems, respectively. Conversely, PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE surpass MCPDE on 1, 0, 0, 0, 1, 2, 0, 0 and 0 problems, respectively. The overall ranking on the composition problems is MCPDE, JADE, CoDE, CPDE, DE, TLBO, PSO, jDE, PSOcf and CMA-ES in descending order. It can be observed from Table 6 that MCPDE still performs better on these composition functions when the search space dimension D is set to 30: MCPDE performs better than PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on 7, 7, 8, 7, 5, 8, 5, 8 and 7 out of 8 test problems, respectively. The overall ranking on the composition problems is MCPDE, JADE, CPDE, DE, jDE, PSO, TLBO, CoDE, CMA-ES and PSOcf in descending order.
All in all, MCPDE performs better than the compared algorithms on the unimodal, multimodal and composition problems with D = 10 and D = 30. Overall, Table 7 shows that MCPDE has a good performance on the CEC2013 test problems. When D is set to 10, the overall ranking on the CEC2013 test problems is MCPDE, JADE, CPDE, jDE, DE, CoDE, TLBO, PSO, PSOcf and CMA-ES in descending order. The overall ranking on the CEC2013 test functions with D = 30 is MCPDE, JADE, jDE, CPDE, DE, PSO, CoDE, CMA-ES, TLBO and PSOcf in descending order. The convergence graphs of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE on different benchmark functions, in terms of the mean errors (in logarithmic scale) over 30 runs, are plotted in Figure 2 (D = 10) and Figure 3 (D = 30). Sixteen benchmark functions are selected to compare the performance of the different algorithms in Figure 2 and Figure 3. From Figure 2, it can be seen that MCPDE performs better than the other compared algorithms on 9 out of 16 test problems; Figure 3 shows that MCPDE beats the other compared algorithms on 8 out of 16 test problems. The comparison experiments indicate that MCPDE is a competitive method for these functions. Moreover, MCPDE has a higher convergence rate because of its good exploration ability.
The experimental results reveal that MCPDE works well on most benchmark problems. This is due to the effective parameter adaptation approach and the inertia factors used in MCPDE. Better control parameters are preserved to produce new control parameters for the next generation, which increases the probability of finding better solutions and thus improves the performance of the proposed algorithm. The inertia factors change during the evolution process to balance exploration with exploitation. At the beginning of the search, the inertia factor ω1 is less than ω2, which favors exploration. Then ω1 increases continually while ω2 decreases, balancing the search direction. Later, ω1 becomes greater than ω2, so the exploitation ability of the algorithm is strengthened dynamically. In addition, the opposition mechanism and the orthogonal crossover help to increase the search ability during the evolutionary process. Exploration and exploitation therefore proceed in parallel throughout the optimization, so MCPDE not only improves the convergence rate but also reduces the risk of premature convergence as much as possible.
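The opposition mechanism mentioned above is, in the generic opposition-based-learning sense, the construction of the opposite point x̃_j = a_j + b_j − x_j within the search box [a_j, b_j], keeping whichever of the pair is fitter. A minimal sketch under that assumption; the function names and the pairwise selection rule are illustrative, and MCPDE's exact usage of opposition may differ:

```python
import random

def opposite(x, lower, upper):
    """Opposition-based learning: the opposite of x_j in [lower_j, upper_j]
    is lower_j + upper_j - x_j."""
    return [lo + hi - xj for xj, lo, hi in zip(x, lower, upper)]

def opposition_step(pop, lower, upper, fitness):
    """For each individual, evaluate it together with its opposite point and
    keep the better of the two (minimization). A generic sketch only."""
    new_pop = []
    for x in pop:
        ox = opposite(x, lower, upper)
        new_pop.append(x if fitness(x) <= fitness(ox) else ox)
    return new_pop

if __name__ == "__main__":
    # Example on the sphere function over [-5, 5]^3.
    sphere = lambda v: sum(t * t for t in v)
    lower, upper = [-5.0] * 3, [5.0] * 3
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(4)]
    better = opposition_step(pop, lower, upper, sphere)
    # Each kept individual is at least as good as the original.
    assert all(sphere(b) <= sphere(x) for b, x in zip(better, pop))
```

The appeal of this operator is that it costs one extra fitness evaluation per individual but, especially early in the search, often lands in an unexplored half of the domain.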

5. Conclusions

To alleviate the exploration-exploitation dilemma over the whole search space during the evolutionary process, this paper proposes MCPDE, a new meta-heuristic algorithm for solving real-parameter optimization problems over continuous spaces. An effective parameter adaptation approach and the inertia factor are introduced into the modified cloud particles differential evolution algorithm. Moreover, the opposition mechanism and the orthogonal crossover are employed to increase the search ability during the evolutionary process. The proposed algorithm is evaluated on the 28 benchmark functions of the CEC2013 benchmark suite, and the experimental results indicate that MCPDE performs much better than the compared algorithms on most benchmark problems. The proposed algorithm MCPDE is thus effective.
Future work will focus on designing reasonable topological structures to make the algorithm more efficient, and on applying it to constrained and multi-objective optimization problems. Moreover, it is expected that MCPDE will be used to tackle practical engineering problems and real-world applications.

Acknowledgments

The author sincerely thanks the reviewers for their significant contributions to the improvement of the final paper. This research is supported by the Doctoral Foundation of Xi’an University of Technology (112-451116017) and the Scientific Research Program of the Key Laboratory of the Shaanxi Provincial Education Department (15JS078).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Wolfgang, B.; Guillaume, B.; Steffen, C.; James, A.F.; François, K.; Virginie, L.; Julian, F.M.; Miroslav, R.; Jeremy, J.R. From artificial evolution to computational evolution: A research agenda. Nature 2006, 7, 729–735.
  2. Tian, G.D.; Ren, Y.P.; Zhou, M.C. Dual-Objective Scheduling of Rescue Vehicles to Distinguish Forest Fires via Differential Evolution and Particle Swarm Optimization Combined Algorithm. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3009–3021.
  3. Zaman, M.F.; Elsayed, S.M.; Ray, T.; Sarker, R.A. Evolutionary Algorithms for Dynamic Economic Dispatch Problems. IEEE Trans. Power Syst. 2016, 31, 1486–1495.
  4. Mininno, E.; Neri, F.; Cupertino, F.; Naso, D. Compact differential evolution. IEEE Trans. Evolut. Comput. 2011, 15, 32–54.
  5. Das, S.; Abraham, A.; Konar, A. Automatic clustering using an improved differential evolution algorithm. IEEE Trans. Syst. Man Cybern. Part A 2008, 38, 218–236.
  6. Liu, B.; Zhang, Q.F.; Fernandez, F.V.; Gielen, G.G.E. An Efficient Evolutionary Algorithm for Chance-Constrained Bi-Objective Stochastic Optimization. IEEE Trans. Evol. Comput. 2013, 17, 786–796.
  7. Segura, C.; Coello, C.A.C.; Hernández-Díaz, A.G. Improving the vector generation strategy of Differential Evolution for large-scale optimization. Inf. Sci. 2015, 323, 106–129.
  8. Jara, E.C. Multi-Objective Optimization by Using Evolutionary Algorithms: The p-Optimality Criteria. IEEE Trans. Evol. Comput. 2014, 18, 167–179.
  9. Chen, S.-H.; Chen, S.-M.; Jian, W.-S. Fuzzy time series forecasting based on fuzzy logical relationships and similarity measures. Inf. Sci. 2016, 327, 272–287.
  10. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  11. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  12. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992.
  13. Storn, R.; Price, K.V. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  14. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evolut. Comput. 2001, 9, 159–195.
  15. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
  16. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
  17. Černý, V. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Optim. Theory Appl. 1985, 45, 41–51.
  18. Shi, Y.H. Brain Storm Optimization Algorithm. Adv. Swarm Intell. Ser. Lect. Notes Comput. Sci. 2011, 6728, 303–309.
  19. Lam, A.Y.S.; Li, V.O.K. Chemical-Reaction-Inspired Metaheuristic for Optimization. IEEE Trans. Evol. Comput. 2010, 14, 381–399.
  20. Mua, C.H.; Xie, J.; Liu, Y.; Chen, F.; Liu, Y.; Jiao, L.C. Memetic algorithm with simulated annealing strategy and tightness greedy optimization for community detection in networks. Appl. Soft Comput. 2015, 34, 485–501.
  21. Cheng, S.; Shi, Y.H.; Qin, Q.D.; Ting, T.O.; Bai, R.B. Maintaining Population Diversity in Brain Storm Optimization Algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 3230–3237.
  22. Basturk, B.; Karaboga, D. An Artificial Bee Colony (ABC) Algorithm for Numeric Function Optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 12–14 May 2006.
  23. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
  24. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
  25. Pan, Q.K. An effective co-evolutionary artificial bee colony algorithm for steelmaking-continuous casting scheduling. Eur. J. Oper. Res. 2016, 250, 702–714.
  26. José, A.; Osuna, D.; Lozano, M.; García-Martínez, C. An alternative artificial bee colony algorithm with destructive–constructive neighbourhood operator for the problem of composing medical crews. Inf. Sci. 2016, 326, 215–226.
  27. Li, W.; Wang, L.; Yao, Q.Z.; Jiang, Q.Y.; Yu, L.; Wang, B.; Hei, X.H. Cloud Particles Differential Evolution Algorithm: A Novel Optimization Method for Global Numerical Optimization. Math. Probl. Eng. 2015, 2015, 497398.
  28. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evolut. Comput. 2009, 13, 398–417.
  29. Jingqiao, Z.; Arthur, C.S. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evolut. Comput. 2009, 13, 945–957.
  30. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
  31. Tanabe, R.; Fukunaga, A. Success-History Based Parameter Adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
  32. Ghosh, A.; Das, S.; Chowdhury, A.; Giri, R. An improved differential evolution algorithm with fitness-based adaptation of the control parameters. Inf. Sci. 2011, 181, 3749–3765.
  33. Qin, F.Q.; Feng, Y.X. Self-adaptive differential evolution algorithm with discrete mutation control parameters. Expert Syst. Appl. 2015, 42, 1551–1572.
  34. Lu, X.F.; Tang, K.; Bernhard, S.; Yao, X. A new self-adaptation scheme for differential evolution. Neurocomputing 2014, 146, 2–16.
  35. Wang, Y.; Cai, Z.X.; Zhang, Q.F. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
  36. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
  37. Swagatam, D.; Ajith, A.; Uday, K.C.; Amit, K. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553.
  38. Gong, W.Y.; Cai, Z.H.; Wang, Y. Repairing the crossover rate in adaptive differential evolution. Appl. Soft Comput. 2014, 15, 149–168.
  39. Rahnamayan, S.; Hamid, R.T.; Magdy, M.A.S. Opposition-based differential evolution. IEEE Trans. Evolut. Comput. 2008, 12, 64–79.
  40. Zhou, Y.Z.; Li, X.Y.; Gao, L. A differential evolution algorithm with intersect mutation operator. Appl. Soft Comput. 2013, 13, 390–401.
  41. Michael, G.E.; Dimitris, K.T.; Nicos, G.P.; Vassilis, P.P.; Michael, N.V. Enhancing differential evolution utilizing proximity-based mutation operators. IEEE Trans. Evol. Comput. 2011, 15, 99–119.
  42. Zhu, W.; Tang, Y.; Fang, J.A.; Zhang, W.B. Adaptive population tuning scheme for differential evolution. Inf. Sci. 2013, 223, 164–191.
  43. Sun, J.; Zhang, Q.; Tsang, E.P. DE/EDA: A new evolutionary algorithm for global optimization. Inf. Sci. 2005, 169, 249–262.
  44. Adam, P.P. Adaptive Memetic Differential evolution with global and local neighborhood-based mutation operators. Inf. Sci. 2013, 241, 164–194.
  45. Zheng, Y.J.; Xu, X.L.; Ling, H.F.; Chen, S.Y. A hybrid fireworks optimization method with differential evolution operators. Neurocomputing 2015, 148, 75–82.
  46. Ali, R.Y. A new hybrid differential evolution algorithm for the selection of optimal machining parameters in milling operations. Appl. Soft Comput. 2013, 13, 1561–1566.
  47. Ali, R.Y. Hybrid Taguchi-differential evolution algorithm for optimization of multi-pass turning operations. Appl. Soft Comput. 2013, 13, 1433–1439.
  48. Xiang, W.L.; Ma, S.F.; An, M.Q. hABCDE: A hybrid evolutionary algorithm based on artificial bee colony algorithm and differential evolution. Appl. Math. Comput. 2014, 238, 370–386.
  49. Asafuddoula, M.; Tapabrata, R.; Ruhul, S. An adaptive hybrid differential evolution algorithm for single objective optimization. Appl. Math. Comput. 2014, 231, 601–618.
  50. Antonin, P.; Carlos, A.C.C. A hybrid Differential Evolution—Tabu Search algorithm for the solution of Job-Shop Scheduling Problems. Appl. Soft Comput. 2013, 13, 462–474.
  51. Zhang, C.M.; Chen, J.; Xin, B. Distributed memetic differential evolution with the synergy of Lamarckian and Baldwinian learning. Appl. Soft Comput. 2013, 13, 2947–2959.
  52. Rahnamayan, S.; Tizhoosh, H.; Salama, M. Opposition versus randomness in soft computing techniques. Appl. Soft Comput. 2008, 8, 906–918.
  53. Cui, L.; Li, G.; Lin, Q.; Chen, J.; Lu, N. Adaptive differential evolution algorithm with novel mutation strategies in multiple sub-populations. Comput. Oper. Res. 2016, 67, 155–173.
  54. Yoon, H.; Moon, B.R. An empirical study on the synergy of multiple crossover operators. IEEE Trans. Evol. Comput. 2002, 6, 212–223.
  55. Wang, Y.; Cai, Z.X.; Zhang, Q.F. Enhancing the search ability of differential evolution through orthogonal crossover. Inf. Sci. 2012, 185, 153–177.
  56. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, January 2013.
  57. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem Definitions and Evaluation Criteria for the CEC2005 Special Session on Real-Parameter Optimization; Technical Report, Nanyang Technological University: Singapore; KanGAL Report Number 2005005, Kanpur Genetic Algorithms Laboratory: Kanpur, India, May 2005.
  58. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
Figure 1. Variation curves of inertia factors. (a) the variation of ω1; (b) the variation of ω2.
Figure 2. Evolution of the mean function error values derived from PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE versus the number of FES on sixteen test problems with D = 10. (a) f2; (b) f3; (c) f6; (d) f7; (e) f9; (f) f10; (g) f11; (h) f12; (i) f14; (j) f15; (k) f16; (l) f20; (m) f23; (n) f24; (o) f25; (p) f26.
Figure 3. Evolution of the mean function error values derived from PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE versus the number of FES on sixteen test problems with D = 30. (a) f3; (b) f5; (c) f6; (d) f7; (e) f8; (f) f9; (g) f10; (h) f12; (i) f13; (j) f17; (k) f20; (l) f24; (m) f25; (n) f26; (o) f27; (p) f28.
Table 1. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f1–f5 test functions with 10D.
F | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE
f1Fmean5.30 × 10−1431.23.03 × 10−14001.80 × 10−110000
SD9.78 × 10−141717.86 × 10−14007.98 × 10−120000
Max2.27 × 10−149382.27 × 10−13003.66 × 10−110000
Min000005.45 × 10−120000
Compare/Rank−/8−/10−/7≈/1≈/1−/9≈/1≈/1≈/1\/1
f2Fmean9.59 × 1055.00 × 1051.06 × 1055.98 × 10−1201981.61 × 10803.04 × 1069.85 × 10−14
SD9.33 × 1055.85 × 1059.08 × 1044.39 × 10−120897.90 × 10801.61 × 1061.14 × 10−13
Max3.48 × 1062.36 × 1063.87 × 1052.18 × 10−1104254.33 × 10706.71 × 1062.27 × 10−13
Min9.70 × 1043.02 × 1041.58 × 1041.13 × 10−12061.6006.78 × 1070
Compare/Rank−/10−/9−/8−/4+/1−/7−/5+/1−/6\/3
f3Fmean4.66 × 1064.99 × 1084.82 × 1050.13526.81.05 × 1062.154.35 × 1020.1946.34 × 105
SD1.39 × 1078.24 × 1081.98 × 1060.17435.87.04 × 1053.700.2380.2884.82 × 105
Max7.39 × 1073.62 × 1091.07 × 1070.6881162.65 × 10615.11.301.151.42 × 104
Min4.67 × 1038.16 × 1052.29 × 1021.06 × 10909.83 × 1048.73 × 10401.21 × 1052.27 × 1013
Compare/Rank−/9−/10−/7−/3−/6−/8−/5−/2−/4\/1
f4Fmean3.87 × 1032.05 × 1032.98 × 1037.57 × 10143209.83 × 1013.72 × 1012000
SD3.40 × 1033.80 × 1031.23 × 1031.24 × 10131.26 × 1034.45 × 1011.28 × 1011000
Max1.85 × 1042.12 × 1046.90 × 1034.54 × 10136.04 × 1031.896.91 × 1011000
Min3341041.38 × 103002.61 × 1010000
Compare/Rank−/10−/8−/9−/4−/7−/6−/5≈/1≈/1\/1
f5Fmean1.21 × 101318.21.47 × 1013005.17 × 10802.08 × 101300
SD5.92 × 101442.26.08 × 1014001.88 × 10801.12 × 101300
Max2.27 × 10131363.41 × 1013001.04 × 10706.82 × 101300
Min01.13 × 10131.13 × 1013001.45 × 10801.13 × 101300
Compare/Rank−/6−/10−/7≈/1≈/1−/9≈/1−/8≈/1\/1
−/≈/+ | 5/0/0 | 5/0/0 | 5/0/0 | 3/2/0 | 2/2/1 | 5/0/0 | 3/2/0 | 2/2/1 | 2/3/0 | \
Avg-Rank | 8.60 | 9.40 | 7.60 | 2.60 | 3.20 | 7.80 | 3.40 | 2.60 | 2.60 | 1.40
Table 2. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f1–f5 test functions with 30D.
F | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE
f1Fmean6.29 × 10137.90 × 1031.11 × 1012003.63 × 10804.32 × 10132.27 × 10140
SD2.78 × 10133.56 × 1036.49 × 1013001.18 × 10801.92 × 10136.99 × 10140
Max1.81 × 10121.97 × 1043.63 × 1012006.32 × 10809.09 × 10132.27 × 10130
Min2.27 × 10132.70 × 1034.54 × 1013001.26 × 10802.27 × 101300
Compare/Rank−/7−/10−/8≈/1≈/1−/9≈/1−/6≈/1\/1
f2Fmean1.51 × 1073.91 × 1071.23 × 1063.47 × 1056.25 × 1031.24 × 1052.26 × 1054.32 × 10132.97 × 105344
SD1.12 × 1074.06 × 1075.38 × 1052.59 × 1056.93 × 1031.40 × 1051.59 × 1051.61 × 10132.04 × 105252
Max4.32 × 1071.43 × 1082.31 × 1069.68 × 1053.43 × 1047.28 × 1057.46 × 1056.82 × 10137.35 × 105951
Min7.68 × 1052.30 × 1062.41 × 1054.85 × 1045452.42 × 1045.69 × 1042.27 × 10138.34 × 10421.4
Compare/Rank−/9−/10−/8−/7−/3−/4−/5+/1−/6\/2
f3Fmean2.64 × 1085.51 × 10105.44 × 1071.226.46 × 1052.88 × 1071.53 × 10626344.71.31 × 1012
SD5.76 × 1083.91 × 10108.74 × 1075.221.97 × 1061.40 × 1073.19 × 1061.01 × 1031672.61 × 1012
Max2.89 × 1091.51 × 10112.99 × 10828.59.62 × 1067.14 × 1071.28 × 1075.42 × 1037531.11 × 1011
Min3.57 × 1065.18 × 1096.45 × 1052.14 × 1077.50 × 10127.79 × 1062.83 × 1012.04 × 10121.52 × 1020
Compare/Rank−/9−/10−/8−/2−/5−/7−/6−/4−/3\/1
f4Fmean7.34 × 1034.68 × 1038.05 × 1031.32 × 1031.03 × 10417.84.903.94 × 10137763.91 × 103
SD2.77 × 1033.98 × 1032.60 × 1038401.67 × 10414.24.371.57 × 10133035.19 × 103
Max1.60 × 1041.67 × 1041.46 × 1043.61 × 1035.57 × 10460.519.76.82 × 10131.51 × 1031.55 × 102
Min3.13 × 1031.03 × 1033.32 × 1033025.03 × 1083.174.61 × 1012.27 × 10133641.30 × 104
Compare/Rank−/8−/7−/9−/6−/10−/4−/3+/1−/5\/2
f5Fmean7.50 × 10131.29 × 1031.44 × 10129.47 × 10149.09 × 10141.35 × 1059.09 × 10149.32 × 10131.53 × 10139.47 × 1014
SD5.11 × 10139627.16 × 10134.30 × 10144.62 × 10143.73 × 1064.62 × 10141.74 × 10125.56 × 10144.30 × 1014
Max2.27 × 10123.31 × 1034.32 × 10121.13 × 10131.13 × 10132.26 × 1051.13 × 10137.73 × 10122.27 × 10131.13 × 1013
Min3.41 × 10131916.82 × 1013007.04 × 10602.27 × 10131.13 × 10130
Compare/Rank−/6−/10−/8≈/1≈/1−/9≈/1−/7−/5\/1
−/≈/+ | 5/0/0 | 5/0/0 | 5/0/0 | 3/2/0 | 3/2/0 | 5/0/0 | 3/2/0 | 3/0/2 | 4/1/0 | \
Avg-Rank | 7.80 | 9.40 | 8.20 | 3.40 | 4.00 | 6.60 | 3.20 | 3.80 | 4.00 | 1.40
Table 3. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f6–f20 test functions with 10D.
F | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE
f6Fmean16.626.56.642.616.218.43 × 1043.19 × 1027.015.67 × 1030
SD24.927.34.574.414.802.93 × 1044.26 × 1024.412.53 × 1020
Max96.890.89.819.819.811.42 × 1032.04 × 1019.811.13 × 1010
Min2.06 × 1019.812.05 × 103001.75 × 1043.31 × 10402.72 × 10120
Compare/Rank−/9−/10−/7−/5−/6−/2−/4−/8−/3\/1
f7Fmean5.5646.31.073.43 × 1047.97 × 1028.294.96 × 1032.07 × 1081.48 × 1037.58 × 105
SD5.3326.43.172.50 × 1041.17 × 1011.935.68 × 1031.13 × 1091.45 × 1031.00 × 104
Max20.511717.19.58 × 1044.81 × 10112.82.00 × 1026.21 × 1095.79 × 1033.65 × 104
Min3.80 × 1018.061.90 × 1042.98 × 1052.04 × 1084.309.58 × 1051.093.73 × 1046.56 × 108
Compare/Rank−/7−/9−/6−/2−/5−/8−/4−/10−/3\/1
f8Fmean20.320.320.320.320.320.320.320.320.320.3
SD8.88 × 1028.26 × 1025.93 × 1028.49 × 1028.25 × 1027.42 × 1026.74 × 1024.55 × 1017.43 × 1027.12 × 102
Max20.520.420.420.420.520.420.421.620.420.4
Min20.120.120.22020.120.120.12020.220.1
Compare/Rank≈/1≈/1≈/1≈/1≈/1≈/1≈/1−/10≈/1\/1
f9Fmean3.114.212.956.20 × 1013.746.065.7614.15.60 × 1011.66 × 101
SD1.541.658.69 × 1017.40 × 1017.45 × 1016.31 × 1016.96 × 1013.726.41 × 1013.23 × 101
Max6.997.014.352.244.987.047.0420.32.399.78 × 101
Min2.65 × 1018.70 × 1011.196.92 × 1081.894.614.507.712.60 × 1040
Compare/Rank−/5−/7−/4−/3−/6−/9−/8−/10−/2\/1
f10Fmean6.52 × 10120.71.16 × 1013.91 × 1011.95 × 1024.59 × 1014.47 × 1021.83 × 1024.82 × 1012.51 × 102
SD4.56 × 10135.85.60 × 1021.45 × 1011.04 × 1025.96 × 1023.71 × 1023.20 × 1029.18 × 1021.08 × 102
Max1.861652.26 × 1015.57 × 1014.03 × 1025.57 × 1011.48 × 1011.75 × 1016.30 × 1014.18 × 102
Min1.08 × 1011.72 × 1022.27 × 1021.72 × 1022.26 × 1033.32 × 1012.56 × 10903.10 × 1015.68 × 1014
Compare/Rank−/9−/10−/5−/6+/2−/7≈/3+/1−/8\/3
f11Fmean3.788.295.2916.703.41 × 10502863.790
SD2.119.652.333.8102.87 × 10503313.140
Max7.9539.49.9423.801.40 × 10409219.350
Min9.94 × 10101.999.0803.73 × 10603.972.43 × 1080
Compare/Rank−/5−/8−/7−/9≈/1−/4≈/1−/10−/6\/1
f12Fmean13.5258.1826.84.382511.42846.575.36
SD5.23123.644.251.225.153.223274.061.75
Max22.854.114.835.57.0733.4191.37 × 10319.18.66
Min4.976.961.2517.31.83115.205.961.981.52
Compare/Rank−/6−/7−/4−/9≈/1−/7−/5−/10≈/2\/2
f13Fmean22.133.511.624.75.2726.614.83117.926.69
SD7.3511.35.083.852.394.313.844124.502.83
Max40.155.925.631.811.532.622.41.36 × 10316.513.5
Min7.223.453.5516.52.4513.76.7312.62.031.98
Compare/Rank−/6−/9−/4−/7+/1−/8−/5−/10≈/2\/2
f14Fmean2262365989952.28 × 10238.84.74 × 10111.80 × 1032756.97
SD1611572561363.47 × 10-27.971.83 × 10-104231154.94
Max6056671.09 × 1031.17 × 1031.24 × 10-153.29.72 × 10-102.80 × 10350615.1
Min3.473.6032.3472019.3099372.36.24 × 10-2
Compare/Rank−/5−/6−/8−/9−/2−/4+/1−/10−/7\/3
f15Fmean9828471.28 × 1031.31 × 1034261.20 × 1031.15 × 1031.88 × 103535760
SD345231188155109141151438195142
Max1.55 × 1031.27 × 1031.56 × 1031.53 × 1036531.46 × 1031.48 × 1032.75 × 103772983
Min2901877438091899639011.00 × 103113479
Compare/Rank−/5−/4−/8−/9+/1−/7−/6−/10+/2\/3
f16Fmean1.095.56 × 1011.131.041.111.101.072.72 × 1011.109.92 × 101
SD2.88 × 1011.80 × 1012.43 × 1012.39 × 1012.07 × 1012.12 × 1011.82 × 1012.55 × 1012.05 × 1011.04 × 101
Max1.729.09 × 1011.601.501.451.541.391.231.411.14
Min4.49 × 1012.50 × 1016.79 × 1015.48 × 1016.63 × 1016.40 × 1017.12 × 1013.62 × 1025.14 × 1017.16 × 101
Compare/Rank−/6+/2−/10≈/3−/9−/7−/5+/1−/7\/3
f17Fmean14.213.324.727.710.111.410.19562810.3
SD4.681.463.313.281.44 × 10145.07 × 1012.05 × 10104692.803.33 × 101
Max21.717.934.435.710.112.310.11.58 × 10333.911.3
Min4.061118.222.210.19.5610.122.423.610.1
Compare/Rank≈/4≈/4−/7−/8+/1+/4+/1−/10−/9\/3
f18Fmean34.621.832.636.118.842.231.192536.325.9
SD10.77.604.003.852.245.463.324623.304.38
Max54.240.241.542.922.952.236.51.85 × 10342.632.7
Min5.607.2125.626.915.431.823.615.130.815.6
Compare/Rank−/6+/2−/5−/7+/1−/9−/4−/10−/8\/3
f19Fmean6.33 × 1011.999.83 × 1012.173.37 × 1019.24 × 1014.00 × 1011.101.954.44 × 101
SD1.78 × 1015.082.24 × 1013.47 × 1014.32 × 1021.49 × 1019.80 × 1024.74 × 1013.22 × 1011.05 × 101
Max1.0020.91.352.663.94 × 1011.245.74 × 1012.522.346.60 × 101
Min2.99 × 1013.66 × 1015.86 × 1019.78 × 1012.08 × 1015.82 × 1011.48 × 1013.99 × 1011.252.39 × 101
Compare/Rank≈/3−/9−/7−/10+/1−/6≈/2−/7−/8\/2
f20Fmean3.313.392.642.572.273.063.103.922.501.99
SD7.65 × 1014.09 × 1014.20 × 1012.61 × 1014.49 × 1012.37 × 1011.75 × 1014.20 × 1012.77 × 1011.87 × 101
Max5.004.013.323.293.233.393.454.993.072.24
Min1.972.261.831.931.722.582.732.922.091.55
Compare/Rank−/8−/9−/5−/4≈/1−/6−/7−/10−/3\/1
−/≈/+ | 14/1/0 | 12/1/2 | 14/1/0 | 14/1/0 | 4/3/8 | 14/1/0 | 9/4/2 | 13/0/2 | 11/3/1 | \
Avg-Rank | 5.67 | 6.47 | 5.87 | 6.20 | 2.60 | 5.93 | 3.80 | 8.47 | 4.73 | 2.00
Table 4. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f6–f20 test functions with 30D.
F | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE
f6Fmean90.433936.68.888.80 × 10113.518.14.408.051.18 × 107
SD41.225228.26.124.822.518.43 × 101104.391.72 × 107
Max1501.03 × 10380.126.426.426.420.526.426.45.58 × 107
Min6.9050.43.354.18 × 102011.916.41.13 × 10135.631.34 × 1010
Compare/Rank−/9−/10−/8−/5−/2−/6−/7−/3−/4\/1
f7Fmean45.919749.22.40 × 1012.6840.45.9012.13.30 × 1011.07 × 103
SD18.461.5185.52 × 1012.586.675.116.385.47 × 1017.68 × 104
Max9732484.92.8412.45517.727.92.202.99 × 103
Min17.780.225.72.39 × 1031.09 × 10130.34.15 × 1011.856.64 × 1035.42 × 105
Compare/Rank−/8−/10−/9−/2−/4−/7−/5−/6−/3\/1
f8Fmean20.920.920.920.920.920.920.921.420.920.9
SD5.97 × 1026.20 × 1024.74 × 1024.61 × 1021.13 × 1015.74 × 1025.75 × 1028.31 × 1025.46 × 1025.16 × 102
Max2121212121212121.62120.9
Min20.720.620.820.820.420.720.821.220.820.7
Compare/Rank≈/1≈/1≈/1≈/1≈/1≈/1≈/1−/10≈/1\/1
f9Fmean22.825.226.83226.832.529.2416.4822.9
SD3.854.004.3711.11.751.461.8310.12.283.96
Max2.90 × 1032.837.240.129.734.23454.911.228
Min15.418.516.69.4923.529.125.219.82.8615
Compare/Rank≈/2≈/2−/6−/8−/5−/9−/7−/10−/1\/2
f10Fmean1.61 × 1011.01 × 1031.20 × 1017.88 × 1034.54 × 1022.46 × 1013.80 × 1021.78 × 1026.53 × 1030
SD9.72 × 1026.21 × 1027.75 × 1026.85 × 1032.61 × 1021.76 × 1012.03 × 1021.29 × 1024.81 × 1030
Max4.60 × 1012.44 × 1033.40 × 1012.95 × 1021.03 × 1015.98 × 1018.86 × 1025.66 × 1021.47 × 1020
Min2.46 × 1022.13 × 1022.21 × 102002.79 × 1027.39 × 1035.68 × 10145.68 × 10140
Compare/Rank−/8−/10−/7−/3−/6−/9−/5−/4−/2\/1
f11Fmean33.9151105129025.10109712.77
SD8.5244.126.625.802.15033713.41.69
Max58.7261190176028.801.89 × 1031047.59
Min18.974.767.673019026.848.45.68 × 10-14
Compare/Rank−/5−/10−/7−/9+/1−/4+/1−/8−/6\/3
f12Fmean88.220192.118022.916559.648417380.1
SD37.791.224.19.383.3412.18.388287.8722.2
Max22742114719629.819070.62.65 × 103191113
Min41.778.343.715625.714134.325.816144.1
Compare/Rank≈/3−/9≈/3−/8+/1−/6+/2−/10−/7\/3
f13Fmean14025515618050.817589.51.44 × 103173117
SD32.850.432.211.313.514.718.31.41 × 1038.8621.9
Max18637822419876.52011315.06 × 103188140
Min83.417976.714617.912958.879.315565.7
Compare/Rank−/4−/9−/5−/8+/1−/7+/2−/10−/6\/3
f14Fmean1.22 × 1032.65 × 1035.64 × 1036.08 × 1033.12 × 1021.39 × 1038.13 × 1015.27 × 1033.36 × 103292
SD3176561.21 × 1035492.49 × 1021542.11690644118
Max1.84 × 1033.79 × 1037.11 × 1036.87 × 1031.04 × 1011.70 × 1038.897.44 × 1034.43 × 103592
Min6341.56 × 1031.71 × 1034.43 × 1031.81 × 10121.08 × 1035.07 × 1094.07 × 1031.94 × 10391.8
Compare/Rank+/4−/6−/9−/10+/1+/5+/2−/8−/7\/3
f15Fmean6.19 × 1034.34 × 1037.07 × 1037.12 × 1033.20 × 1036.92 × 1035.60 × 1035.16 × 1037.04 × 1036.66 × 103
SD1.25 × 103784331216347369392798288391
Max7.92 × 1036.65 × 1037.62 × 1037.47 × 1033.70 × 1037.43 × 1036.57 × 1036.69 × 1037.37 × 1037.19 × 103
Min3.37 × 1033.07 × 1036.18 × 1036.67 × 1032.37 × 1036.13 × 1034.39 × 1033.79 × 1036.37 × 1036.08 × 103
Compare/Rank≈/5+/2−/9−/10+/1−/7+/4+/3−/8\/5
f16Fmean2.531.872.412.522.002.362.488.14 × 1022.501.92
SD4.26 × 1014.64 × 1012.88 × 1013.76 × 1017.07 × 1012.49 × 1011.69 × 1015.62 × 1022.94 × 1011.69 × 101
Max3.342.662.923.072.962.902.762.85 × 1013.012.06
Min1.467.24 × 1011.641.325.95 × 1011.782.131.99 × 1021.541.23
Compare/Rank−/10≈/2−/6−/9≈/2−/5−/7+/1−/8\/2
f17Fmean74.614210618030.465.130.43.88 × 10318237.5
SD1988.527.116.31.05 × 10143.621.70 × 10666518.42.55
Max10535217321130.471.230.45.00 × 10321342.3
Min35.758.969.815130.456.730.42.54 × 10314633.6
Compare/Rank−/5−/7−/6−/8+/1−/4+/1−/10−/9\/3
f18Fmean20715622021278.32301614.08 × 103206187
SD30.251.715.59.556.439.76169111010.1
Max26825225322994.82481875.97 × 103223199
Min14081.918219365.52111331.76 × 103178155
Compare/Rank−/6+/2−/8−/7+/1−/9+/3−/10−/5\/4
f19Fmean4.431.83 × 10312.4151.448.251.633.4314.62.55
SD1.193.59 × 1035.758.57 × 1011.18 × 1018.60 × 1011.48 × 1018.32 × 1011.154.83 × 101
Max6.691.62 × 10426.216.51.709.601.875.2416.23.26
Min2.315.855.0013.11.116.461.311.6612.41.53
Compare/Rank−/5−/10−/7−/9+/1−/6+/2−/4−/8\/3
f20Fmean1514.51212.210.312.512.612.712.211.7
SD09.53 × 1013.30 × 1012.38 × 1016.17 × 1012.28 × 1013.51 × 1019.28 × 1013.22 × 1013.37 × 101
Max151512.612.611.91313.314.312.712.1
Min1511.511.411.69.0512121011.310.6
Compare/Rank−/10−/9−/3−/4+/1−/6−/7−/8−/5\/2
−/≈/+ | 11/4/0 | 10/3/2 | 13/2/0 | 14/1/0 | 4/2/9 | 14/1/0 | 6/1/8 | 13/0/2 | 13/1/1 | \
Avg-Rank | 5.67 | 6.60 | 6.27 | 6.73 | 1.93 | 6.07 | 3.73 | 7.00 | 5.33 | 2.47
Table 5. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f21–f28 test functions with 10D.
F | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE
f21Fmean360400400373393210400363365393
SD85.52.97 × 10131.58 × 10169.236.554.82.89 × 101396.487.536.5
Max400400400400400400400400400400
Min100400399200200100400100100200
Compare/Rank≈/2−/8−/8≈/2≈/2+/1−/8≈/2≈/2\/2
f22Fmean2954333661.04 × 1034.07232562.31 × 10333627.7
SD1302193071884.9347.818.447515618.1
Max5318761.24 × 1031.36 × 10317.331893.43.11 × 10368.588
Min69.633.941.44914.42 × 10-611122.81.31 × 1031200
Compare/Rank−/5−/8−/7−/9+/1−/4−/3−/10−/6\/2
f23Fmean9811.02 × 1031.29 × 1031.28 × 1034451.27 × 1031.44 × 1032.24 × 103426371
SD369398238134175199212518234157
Max1.65 × 1031.88 × 1031.77 × 1031.61 × 1038971.66 × 1031.82 × 1033.12 × 103793675
Min2462457061.06 × 1031638797661.14 × 10372.933.9
Compare/Rank−/4−/5−/8−/7≈/1−/6−/9−/10≈/1\/1
f24Fmean211216197202201197214327204202
SD4.0918.919.316.66.8228.911.21493.093.31
Max218228219209211215222758209208
Min200119148115168133160107200200
Compare/Rank−/7−/9≈/2−/5≈/2+/1−/8−/10−/6\/2
f25Fmean211218204202200207218247201200
SD5.124.263.652.968.7711.22.1050.52.271.15
Max223227212212209213222350204204
Min201210200200155148213200200200
Compare/Rank−/7−/8−/5−/4−/2−/6−/8−/10−/3\/1
f26Fmean206188151150141136188247158105
SD75.861.834.536.145.34.2029.2120242.82.17
Max321321200200200146200618200109
Min11010510310510212610640.1104100
Compare/Rank−/9−/7−/5−/4−/3−/2−/7−/10−/6\/1
f27Fmean506562359323300344480360315300
SD10472.582.761.34.88 × 10-130.818.462.848.80
Max635652534481302440512520481300
Min300400300300300316435300300300
Compare/Rank−/9−/10−/6−/4−/2−/5−/8−/7−/3\/1
f28Fmean3204033082462931932861.00 × 103270300
SD80.616390.289.936.510150.71.07 × 10373.20
Max6647565793003003003004.00 × 103300300
Min300300100100100100100300100300
Compare/Rank−/8−/9−/7≈/1≈/1≈/1≈/1−/10≈/1\/1
−/≈/+7/0/18/0/07/1/06/2/03/4/15/1/27/1/07/1/05/3/0\
Avg-Rank6.388.006.004.501.753.256.508.633.501.38
Table 6. Experimental results of PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES, CPDE and MCPDE over 30 independent runs on f21–f28 test functions with 30D.

| F | | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f21 | Fmean | 290 | 774 | 318 | 302 | 305 | 330 | 295 | 316 | 269 | 256 |
| | SD | 83.1 | 343 | 70.2 | 83.7 | 64.7 | 101 | 72.3 | 94.2 | 76.9 | 50.4 |
| | Max | 443 | 1.89 × 10³ | 443 | 443 | 443 | 443 | 443 | 443 | 443 | 300 |
| | Min | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 |
| | Compare/Rank | −/3 | −/10 | −/8 | −/5 | −/6 | −/9 | −/4 | −/7 | −/2 | \/1 |
| f22 | Fmean | 1.30 × 10³ | 2.60 × 10³ | 1.92 × 10³ | 6.13 × 10³ | 93.5 | 2.21 × 10³ | 232 | 7.08 × 10³ | 3.56 × 10³ | 353 |
| | SD | 405 | 627 | 1.18 × 10³ | 727 | 30.6 | 268 | 43 | 868 | 760 | 118 |
| | Max | 2.36 × 10³ | 3.74 × 10³ | 6.04 × 10³ | 7.18 × 10³ | 122 | 2.64 × 10³ | 314 | 8.45 × 10³ | 4.58 × 10³ | 678 |
| | Min | 754 | 1.48 × 10³ | 609 | 4.69 × 10³ | 15.3 | 1.66 × 10³ | 160 | 4.66 × 10³ | 1.73 × 10³ | 167 |
| | Compare/Rank | −/4 | −/7 | −/5 | −/9 | +/1 | −/6 | +/2 | −/10 | −/8 | \/3 |
| f23 | Fmean | 6.19 × 10³ | 4.76 × 10³ | 7.06 × 10³ | 7.18 × 10³ | 3.53 × 10³ | 7.24 × 10³ | 6.18 × 10³ | 7.07 × 10³ | 7.15 × 10³ | 5.96 × 10³ |
| | SD | 1.26 × 10³ | 999 | 315 | 203 | 325 | 223 | 418 | 634 | 381 | 483 |
| | Max | 7.77 × 10³ | 7.07 × 10³ | 7.57 × 10³ | 7.66 × 10³ | 4.13 × 10³ | 7.65 × 10³ | 7.52 × 10³ | 8.18 × 10³ | 7.76 × 10³ | 6.68 × 10³ |
| | Min | 2.99 × 10³ | 3.01 × 10³ | 6.44 × 10³ | 6.78 × 10³ | 2.73 × 10³ | 6.70 × 10³ | 5.49 × 10³ | 5.51 × 10³ | 5.98 × 10³ | 4.99 × 10³ |
| | Compare/Rank | ≈/3 | +/2 | −/6 | −/9 | +/1 | −/10 | ≈/3 | −/7 | −/8 | \/3 |
| f24 | Fmean | 272 | 288 | 261 | 200 | 208 | 237 | 284 | 909 | 200 | 200 |
| | SD | 10.5 | 10 | 7.89 | 3.25 | 7.42 | 6.97 | 4.30 | 687 | 2.72 × 10⁻¹ | 6.02 × 10⁻³ |
| | Max | 296 | 303 | 278 | 217 | 228 | 252 | 291 | 2.23 × 10³ | 201 | 200 |
| | Min | 255 | 271 | 242 | 200 | 200 | 222 | 275 | 213 | 200 | 200 |
| | Compare/Rank | −/7 | −/9 | −/6 | −/3 | −/4 | −/5 | −/8 | −/10 | −/2 | \/1 |
| f25 | Fmean | 291 | 296 | 284 | 238 | 271 | 294 | 290 | 254 | 238 | 235 |
| | SD | 10 | 9.61 | 9.88 | 4.99 | 15.1 | 5.42 | 5.21 | 27.7 | 4.12 | 2.57 |
| | Max | 315 | 316 | 305 | 247 | 289 | 303 | 297 | 387 | 244 | 238 |
| | Min | 272 | 278 | 266 | 228 | 239 | 279 | 277 | 201 | 230 | 229 |
| | Compare/Rank | −/8 | −/10 | −/6 | −/2 | −/5 | −/9 | −/7 | −/4 | −/2 | \/1 |
| f26 | Fmean | 333 | 312 | 219 | 207 | 226 | 200 | 260 | 574 | 211 | 200 |
| | SD | 61 | 84.2 | 50 | 27.5 | 54.2 | 5.20 × 10⁻³ | 86.9 | 504 | 35.1 | 2.26 × 10⁻⁴ |
| | Max | 373 | 391 | 352 | 316 | 345 | 200 | 389 | 1.87 × 10³ | 317 | 200 |
| | Min | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 132 | 200 | 200 |
| | Compare/Rank | −/9 | −/8 | −/5 | −/3 | −/6 | −/2 | −/7 | −/10 | −/4 | \/1 |
| f27 | Fmean | 956 | 1.04 × 10³ | 820 | 363 | 691 | 962 | 1.11 × 10³ | 555 | 416 | 300 |
| | SD | 90.5 | 75.9 | 85.5 | 85.4 | 228 | 153 | 32.7 | 123 | 109 | 1.19 × 10⁻¹ |
| | Max | 1.10 × 10³ | 1.20 × 10³ | 961 | 513 | 1.00 × 10³ | 1.17 × 10³ | 1.17 × 10³ | 799 | 617 | 300 |
| | Min | 775 | 861 | 660 | 300 | 309 | 659 | 1.04 × 10³ | 387 | 300 | 300 |
| | Compare/Rank | −/7 | −/9 | −/6 | −/2 | −/5 | −/8 | −/10 | −/4 | −/3 | \/1 |
| f28 | Fmean | 385 | 2.13 × 10³ | 514 | 300 | 300 | 300 | 300 | 300 | 300 | 300 |
| | SD | 325 | 258 | 639 | 2.27 × 10⁻¹³ | 0 | 6.78 × 10⁻³ | 0 | 3.75 × 10³ | 1.84 × 10⁻⁹ | 2.65 × 10⁻¹³ |
| | Max | 1.63 × 10³ | 2.84 × 10³ | 2.69 × 10³ | 300 | 300 | 300 | 300 | 1.34 × 10⁴ | 300 | 300 |
| | Min | 300 | 1.67 × 10³ | 100 | 300 | 300 | 300 | 300 | 100 | 300 | 300 |
| | Compare/Rank | −/7 | −/10 | −/8 | ≈/1 | ≈/1 | −/6 | ≈/1 | −/9 | ≈/1 | \/1 |
| | −/≈/+ | 7/1/0 | 7/0/1 | 8/0/0 | 7/1/0 | 5/1/2 | 8/0/0 | 5/2/1 | 8/0/0 | 7/1/0 | \ |
| | Avg-Rank | 6.00 | 8.13 | 6.25 | 4.25 | 3.63 | 6.88 | 5.25 | 7.63 | 3.75 | 1.50 |
Table 7. Comparison of MCPDE with PSO, PSOcf, TLBO, DE, JADE, CoDE, jDE, CMA-ES and CPDE on the CEC2013 benchmarks (D = 10 and 30 dimensions).

| D | | PSO | PSOcf | TLBO | DE | JADE | CoDE | jDE | CMA-ES | CPDE | MCPDE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | −/≈/+ | 26/2/0 | 25/1/2 | 26/2/0 | 23/5/0 | 9/9/10 | 24/2/2 | 19/7/2 | 22/3/3 | 18/9/1 | \ |
| | Avg-rank | 6.39 | 7.43 | 6.21 | 5.07 | 2.46 | 5.50 | 4.50 | 7.46 | 4.00 | 1.71 |
| 30 | −/≈/+ | 23/5/0 | 22/3/3 | 26/2/0 | 24/4/0 | 12/5/11 | 27/1/0 | 14/5/9 | 24/0/4 | 24/3/1 | \ |
| | Avg-rank | 6.14 | 7.54 | 6.61 | 5.43 | 2.79 | 6.39 | 4.07 | 6.61 | 4.64 | 2.00 |
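The Avg-rank rows in the tables above follow the usual recipe: each algorithm is ranked on every benchmark function (rank 1 = best mean error, ties sharing the average of the tied ranks), and the per-function ranks are then averaged. The sketch below illustrates only this ranking step on placeholder data; it is not the paper's code, and the −/≈/+ entries additionally require Wilcoxon rank-sum tests over the 30 raw runs, which are not reproduced here.

```python
# Illustrative sketch: average ranks across benchmark functions.
# The data in `demo` is made up for demonstration, not taken from the paper.
def average_ranks(results):
    """results: dict of function name -> dict of algorithm -> mean error."""
    algs = sorted(next(iter(results.values())))
    totals = {a: 0.0 for a in algs}
    for scores in results.values():
        ordered = sorted(algs, key=lambda a: scores[a])  # best (lowest) first
        i = 0
        while i < len(ordered):
            # group ties so tied algorithms share the mean of the tied ranks
            j = i
            while j + 1 < len(ordered) and scores[ordered[j + 1]] == scores[ordered[i]]:
                j += 1
            mean_rank = (i + 1 + j + 1) / 2
            for a in ordered[i:j + 1]:
                totals[a] += mean_rank
            i = j + 1
    n = len(results)
    return {a: totals[a] / n for a in algs}

demo = {
    "f21": {"A": 360.0, "B": 400.0, "C": 210.0},
    "f22": {"A": 295.0, "B": 433.0, "C": 27.7},
}
print(average_ranks(demo))  # C is best on both functions, so its average rank is 1.0
```

A lower average rank means the algorithm was consistently closer to the best result across the benchmark suite, which is why MCPDE's 1.38 (10D) and 1.50 (30D) in Tables 5 and 6 summarize its overall advantage.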

Li, W. A Modified Cloud Particles Differential Evolution Algorithm for Real-Parameter Optimization. Algorithms 2016, 9, 78. https://doi.org/10.3390/a9040078

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
