Mathematics
  • Feature Paper
  • Article
  • Open Access

16 November 2022

On Improving Adaptive Problem Decomposition Using Differential Evolution for Large-Scale Optimization Problems

Department of System Analysis and Operations Research, Reshetnev Siberian State University of Science and Technology, 660037 Krasnoyarsk, Russia
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Applied and Computational Mathematics for Digital Environments

Abstract

Modern computational mathematics and informatics for Digital Environments deal with high dimensionality when designing and optimizing models for various real-world phenomena. Large-scale global black-box optimization (LSGO) is still a hard problem for search metaheuristics, including bio-inspired algorithms. Such optimization problems are usually extremely multi-modal and require significant computing resources for discovering and converging to the global optimum. The majority of state-of-the-art LSGO algorithms are based on problem decomposition with the cooperative co-evolution (CC) approach, which divides the search space into a set of lower-dimensional subspaces (or subcomponents) that are expected to be easier to explore independently by an optimization algorithm. The question of the choice of the decomposition method remains open, and adaptive decomposition looks more promising. As the most recent LSGO competitions show, the winning approaches are focused on modifying advanced differential evolution (DE) algorithms by integrating them with local search techniques. In this study, an approach that combines multiple ideas from state-of-the-art algorithms and implements Coordination of Self-adaptive Cooperative Co-evolution algorithms with Local Search (COSACC-LS1) is proposed. The self-adaptation method tunes both the structure of the complete approach and the parameters of each algorithm in the cooperation. The performance of COSACC-LS1 has been investigated using the CEC LSGO 2013 benchmark, and the experimental results have been compared with leading LSGO approaches. The main contribution of this study is a new self-adaptive approach that is preferable for solving hard real-world problems because it is not overfitted to the LSGO benchmark, owing to self-adaptation during the search process instead of manual benchmark-specific fine-tuning.

1. Introduction

Modern numerical continuous global optimization problems deal with high dimensionality, and the number of decision variables keeps increasing because of the need to take into account more internal and external factors when designing and analyzing complex systems. This trend is also facilitated by the development of high-performance hardware and algorithms. “Black-box” large-scale global optimization (LSGO) is one of the most important and hardest classes of optimization problems. The search space of LSGO problems grows exponentially with the number of variables, and many state-of-the-art optimization algorithms, including evolutionary algorithms, lose their efficiency. Moreover, the issue cannot be solved by straightforwardly increasing the number of objective function evaluations.
Many researchers note that the definition of an LSGO problem depends on the nature of the problem and changes over time with the development of optimization approaches. For example, the global optimization of Morse clusters is known as a hard real-world optimization problem. The best-found solutions for Morse clusters are collected in the Cambridge Energy Landscape Database []. At the moment, the largest cluster in the database contains 147 atoms, which corresponds to only 441 continuous decision variables. The most popular LSGO benchmark was proposed within the IEEE Congress on Evolutionary Computation and is used for the evaluation and comparison of new LSGO approaches. The benchmark contains 1000-dimensional LSGO problems. There also exist solutions for real-world problems with many thousands of decision variables.
The general LSGO optimization problem is defined as (1):
$$f(x_1, x_2, \ldots, x_n) \to \min_{x \in \mathbb{R}^n}, \qquad f: \mathbb{R}^n \to \mathbb{R}^1, \tag{1}$$
here $f$ is the objective function and $x_i$ are box-constrained decision variables. We do not impose any restrictions on the type of the objective function, such as linearity, continuity, convexity, or the need to be defined at all points requested by the search algorithm. In the general case, the objective function is defined algorithmically and there is no information about the properties of its landscape; thus, the objective function is a “black-box” model.
As previously mentioned, the performance of many black-box global optimization algorithms cannot be improved by only increasing the budget of function evaluations when solving LSGO problems. One of the challenges for researchers in the LSGO field is the development of new approaches that can deal with high dimensionality. Various LSGO algorithms that use fundamentally different ideas and demonstrate different performance for different classes of LSGO problems have been proposed. When solving a specific LSGO problem, a researcher must choose an appropriate LSGO algorithm and fine-tune its parameters. Moreover, the algorithm can require different settings at different stages of the optimization process (for example, at the exploration and exploitation stages). Thus, the development of self-adaptive approaches for solving hard LSGO problems is a relevant research task.
In this study, an adaptive hybrid approach is proposed that combines three general concepts: problem decomposition using cooperative co-evolution (CC), global search based on differential evolution (DE), and local search. The approach demonstrates performance comparable with LSGO competition winners and outperforms most of them. At the same time, it demonstrates the same high efficiency for different classes of LSGO problems, which makes it preferable for “black-box” LSGO problems when it is not possible to justify the choice of an appropriate search algorithm.
The rest of the paper is organized as follows. Section 2 presents the related works for reviewing state-of-the-art in the field of LSGO and motivates designing a hybrid approach. Section 3 and Section 4 describe the proposed approach, experimental setups, some general top-level settings, and implementation. In Section 5, the experimental results, analysis, and discussion of the algorithm dynamics and convergence, and the comparison of the results with state-of-the-art and competition-winner approaches are presented. In conclusion, the proposed methods and the obtained results are summarized and some further ideas are suggested.

3. The Proposed Approach

As we can see from the review in the previous section, the majority of state-of-the-art approaches use CC. At the same time, many CC approaches apply an additional learning stage before the main subcomponent optimization stage. The learning stage is used for identifying interconnected and independent variables. The identification of non-separable groups of variables usually takes a considerable number of function evaluations (FEVs), which could otherwise be utilized for the main optimization process. Moreover, finding all non-separable groups does not guarantee that the obtained optimization sub-problems will be solved efficiently.
The proposed approach uses an adaptive change of the number of subcomponents, which leads to a dynamic redistribution of computational resources. The set of values of the number of subcomponents is predefined and is a parameter of the algorithm, which can be set based on the FEV budget. As was shown in [,], it is better to use different decompositions at different stages of the optimization process, while exploring different regions of the search space, instead of using a single decomposition, even if it is correct. We have discovered that, in general, an optimizer handles small subcomponents better at the early stages and the whole solution vector at the final stage, when the basin of the global optimum has been discovered. Additionally, the way of adaptively changing the number of subcomponents can vary for different types of LSGO problems.
We will use the following algorithm for adaptive change of the number of subcomponents. We will run many optimizers, which use decompositions with subcomponents of different sizes. Each algorithm uses its number of FEVs based on its success in the previous generations. Thus, we dynamically redistribute resources in favor of a more preferable decomposition variant.
A set of $M$ values of the number of subcomponents is defined as $\{CC_1, CC_2, \ldots, CC_M\}$, where all elements must be different, i.e., $CC_1 \neq CC_2 \neq \ldots \neq CC_M$. At the initialization stage, each of the $M$ algorithms is assigned equal resources, defined as a number of generations ($G_i$, $i = 1, \ldots, M$). After all the algorithms have exhausted their resources, we start a new cycle by redistributing the resources. In each cycle, the algorithms are applied sequentially in random order.
At the end of the run of each algorithm, we evaluate the improvement rate (2):
$$improvement\_rate_i = \frac{best\_found_{before} - best\_found_{after}}{best\_found_{after}}, \tag{2}$$
here $best\_found_{before}$ is the best-found objective value before the run, $best\_found_{after}$ is the best-found objective value after the run, and $i = 1, \ldots, M$.
After each cycle, all algorithms are ranked by their improvement rates. The best algorithm increases its resource by $G_{win}$ generations, which is the sum of the $G_{lose}$ generations subtracted from the resources of all the remaining algorithms. For all algorithms, we define a minimum resource $G_{min}$ to prevent the situation when the current winner takes all resources and eliminates all other participants.
In the proposed approach, we run the MTS-LS1 algorithm after CC-based SHADE. Usually, MTS-LS1 finds a new best solution that lies far from the other individuals in the population. In this case, SHADE cannot improve the best-found solution for a long time but does improve the average fitness of the population. Criterion (2) becomes insensitive to differences in the behavior of the algorithms when the rate is calculated using the best-found solution. To overcome this difficulty, we calculate the improving rate using the median fitness before and after an algorithm run, i.e., using (3) instead of (2).
$$improving\_rate_i = \frac{medianFitness_{before} - medianFitness_{after}}{medianFitness_{after}}, \tag{3}$$
here $medianFitness$ is the median fitness of the individuals in the population, and $i = 1, \ldots, M$.
We will use the following approach for changing the number of generations assigned for each algorithm in the cooperation (4)–(8):
$$IR = \left\{ i : improving\_rate_i = \max_{j=1,\ldots,M} \{ improving\_rate_j \} \right\} \tag{4}$$
$$NI = |IR| \tag{5}$$
$$pool = \sum_{i=1}^{M} \begin{cases} G_{lose}, & \text{if } G_i - G_{lose} \geq G_{min} \text{ and } i \notin IR \\ 0, & \text{otherwise} \end{cases} \tag{6}$$
$$G_{win} = \frac{pool}{NI} \tag{7}$$
$$G_i = \begin{cases} G_i + G_{win}, & \text{if } i \in IR \\ G_i - G_{lose}, & \text{if } G_i - G_{lose} \geq G_{min} \text{ and } i \notin IR \\ G_{min}, & \text{if } G_i - G_{lose} < G_{min} \text{ and } i \notin IR \end{cases} \tag{8}$$
here $IR$ is the set of indexes of algorithms with the best improving rate, $NI$ is the number of algorithms with the best improving rate, $pool$ is the pool of resources for redistribution, and $i = 1, \ldots, M$.
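The redistribution rules (4)–(8) can be sketched as the following Python function. This is a minimal illustration, not the authors' C++ implementation; the integer division used for $G_{win}$ is an assumption made to keep the budgets integral.

```python
def redistribute_generations(G, improving_rate, G_lose, G_min):
    """Redistribute per-algorithm generation budgets after a cycle,
    following Equations (4)-(8)."""
    M = len(G)
    best = max(improving_rate)
    IR = {i for i in range(M) if improving_rate[i] == best}    # Eq. (4)
    NI = len(IR)                                               # Eq. (5)
    # Eq. (6): every non-winner that can stay above G_min donates G_lose.
    pool = sum(G_lose for i in range(M)
               if i not in IR and G[i] - G_lose >= G_min)
    G_win = pool // NI                 # Eq. (7), integer division assumed
    new_G = []
    for i in range(M):                                         # Eq. (8)
        if i in IR:
            new_G.append(G[i] + G_win)
        elif G[i] - G_lose >= G_min:
            new_G.append(G[i] - G_lose)
        else:
            new_G.append(G_min)
    return new_G
```

For example, with budgets [15, 15, 15], improving rates [0.5, 0.1, 0.2], $G_{lose} = 1$, and $G_{min} = 5$, the winner receives the two donated generations, yielding [17, 14, 14].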
We will use SHADE as a core optimizer for subcomponents in CC-based algorithms. In our review, we have shown that almost all competition winners and state-of-the-art algorithms use one of the modifications of DE. The main benefit of the SHADE algorithm in solving “black-box” optimization problems is that it has only two control parameters, which can be automatically tuned during the optimization process [].
The main parameters of DE (the scale factor $F$ and the crossover rate $Cr$) are self-configured in SHADE. SHADE uses a historical memory, which contains $H$ pairs of the parameters from previous generations. A mutant vector is created using a random pair of the parameters from the historical memory. When applying SHADE with CC, we use specific parameters $Cr$ and $F$ for each subcomponent.
SHADE, like many other DE algorithms, uses an external archive for saving some promising solutions from the previous generations. SHADE records the parameter values and the corresponding function increments when a better solution is found. After each generation, SHADE calculates new values of the control parameters using the weighted Lehmer mean []. New calculated values of C r and F are placed in the historical memory.
SHADE uses the current-to-pbest/1 mutation scheme. The archived solutions can be chosen and reused at the mutation stage for maintaining the population diversity.
Our experimental results have shown that the use of independent populations and archives for each of the algorithms does not increase the overall performance of the proposed approach. In this work, all algorithms in the cooperation use the same population and archive.
One of the important control parameters of EA-based algorithms is the population size. A large population is preferable at the exploration stage; as the algorithm converges, the population loses diversity, and the population size can be increased to restore it. We use an adaptive population size based on the analysis of the population diversity. The following diversity measure (9) is used []:
$$DI = \frac{1}{NP} \sum_{i=1}^{NP} \sqrt{ \sum_{j=1}^{n} \left( x_{ij} - \bar{x}_j \right)^2 }, \tag{9}$$
here $NP$ is the population size, $n$ is the dimensionality of the objective function, and $\bar{x}_j$ is the average value of the $j$-th variable over all individuals in the population.
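Under this reading of (9), $DI$ is the mean Euclidean distance from the individuals to the population centroid; a minimal sketch:

```python
import math

def diversity(population):
    """Diversity measure DI (Equation 9): the mean Euclidean distance
    of the individuals to the population centroid."""
    NP = len(population)
    n = len(population[0])
    # Centroid: per-coordinate average over all individuals.
    centroid = [sum(ind[j] for ind in population) / NP for j in range(n)]
    return sum(
        math.sqrt(sum((ind[j] - centroid[j]) ** 2 for j in range(n)))
        for ind in population
    ) / NP
```

For instance, the two points (0, 0) and (2, 0) have centroid (1, 0) and are each at distance 1 from it, so $DI = 1$.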
After each cycle, we define a new population size for each algorithm using (10)–(13).
$$RD = \frac{DI}{DI_{init}} \tag{10}$$
$$RFES = \frac{FEVs}{maxFEVs} \tag{11}$$
$$rRD = 1 - \frac{RFES}{0.9} \tag{12}$$
$$NP = \begin{cases} NP + 1, & \text{if } NP + 1 \leq maxNP \text{ and } RD < 0.9 \cdot rRD \\ NP - 1, & \text{if } NP - 1 \geq minNP \text{ and } RD > 1.1 \cdot rRD \\ NP, & \text{otherwise} \end{cases} \tag{13}$$
here $RD$ is the relative diversity, $RFES$ is the relative spend of the FEV budget ($maxFEVs$), $rRD$ is the required value of $RD$, and $minNP$ and $maxNP$ are the lower and upper bounds for the population size.
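The diversity-based control of Equations (10)–(13) can be sketched as follows; the 0.9/1.1 tolerance band and the default bounds (25, 200, as given later in the experimental settings) are the only constants involved:

```python
def adapt_popsize(NP, DI, DI_init, FEVs, maxFEVs, minNP=25, maxNP=200):
    """Diversity-based population size control, Equations (10)-(13)."""
    RD = DI / DI_init            # Eq. (10): relative diversity
    RFES = FEVs / maxFEVs        # Eq. (11): fraction of the budget spent
    rRD = 1.0 - RFES / 0.9       # Eq. (12): required relative diversity
    if NP + 1 <= maxNP and RD < 0.9 * rRD:   # diversity too low: grow
        return NP + 1
    if NP - 1 >= minNP and RD > 1.1 * rRD:   # diversity too high: shrink
        return NP - 1
    return NP                                # Eq. (13): keep the size
```

At the very start of a run ($RFES = 0$, so $rRD = 1$), a population with low relative diversity is grown by one individual, while an overly dispersed one is shrunk by one.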
The relationship between R D and R F E S is presented in Figure 1.
Figure 1. Diversity-based mechanism of population size adaptation.
If the variance of the coordinates is high (individuals are well distributed in the search space), we reduce the population size by dropping out randomly chosen solutions, except the best one. If the variance is low (individuals are concentrated in some region of the search space), we increase the population size by adding new random solutions. The approach tries to keep the relative diversity close to $rRD$, which linearly decreases as the FEV budget is spent.
The ideas proposed above are implemented in our new algorithm, COSACC-LS1. One of the hyper-parameters of COSACC-LS1 is the number of algorithms with different subcomponents ($M$). Because of the high computational cost of LSGO experiments, in this research we have tried $M = 3$ and the following combinations of the numbers of subcomponents: {1, 2, 4}, {1, 2, 8}, {1, 2, 10}, {1, 4, 8}, {1, 4, 10}, {1, 8, 10}, {2, 4, 8}, and {2, 4, 10}. Hereafter, we use the notation “COSACC-LS1 {x, y, z}”, where x, y, and z stand for the numbers of subcomponents used in the three DE algorithms: CC-SHADE(x), CC-SHADE(y), and CC-SHADE(z).
We have tried different mutation schemes and have obtained that the best performance of COSACC-LS1 is reached using the following scheme (14):
$$u_i = x_i + F_i \cdot (x_{pbest} - x_i) + F_i \cdot (x_t - x_r), \quad i = 1, \ldots, NP, \tag{14}$$
here $u_i$ is a mutant vector, $F_i$ is the scale factor, $x_{pbest}$ is a random solution chosen from the $p$ best solutions, $x_t$ is an individual chosen using tournament selection from the population (the tournament size is 2), $x_r$ is a random solution chosen from the union of the current population and the archive, and all solutions chosen for performing the mutation must be different, i.e., $i \neq pbest \neq t \neq r$.
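Scheme (14) can be sketched in Python as below. This is an illustration only: for brevity it does not enforce the distinctness constraint $i \neq pbest \neq t \neq r$, and the fraction `p` of best solutions is an assumed parameter.

```python
import random

def mutate(pop, archive, fitness, F, p=0.1):
    """Mutation scheme (14):
    u_i = x_i + F_i*(x_pbest - x_i) + F_i*(x_t - x_r)."""
    NP, n = len(pop), len(pop[0])
    order = sorted(range(NP), key=lambda i: fitness[i])  # minimization
    p_best = order[: max(1, int(p * NP))]
    union = pop + archive      # x_r is drawn from population plus archive
    mutants = []
    for i in range(NP):
        pbest = pop[random.choice(p_best)]
        a, b = random.sample(range(NP), 2)               # binary tournament
        t = pop[a] if fitness[a] < fitness[b] else pop[b]
        r = random.choice(union)
        mutants.append([pop[i][j]
                        + F[i] * (pbest[j] - pop[i][j])
                        + F[i] * (t[j] - r[j]) for j in range(n)])
    return mutants
```

Note that when all chosen vectors coincide, both difference terms vanish and the mutant equals the parent, which is why the distinctness constraint matters in practice.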
The size of the archive is set two times larger than the initial population size. The size of historical memory in SHADE is set to 6 (the value is defined using grid search).
We have chosen MTS-LS1 for implementing local search in COSACC because it demonstrates high performance in solving LSGO problems both alone and in combination with a global search algorithm []. We use the following settings for MTS-LS1. The maximum number of FEVs is 25,000 (the value was defined by numerical experiments). MTS-LS1 searches along each $i$-th coordinate using the search range $SR[i]$. The initialization of $SR[i]$ is the same as in the original MTS: $SR[i] = (b - a) \cdot 0.4$, where $a$ and $b$ are the lower and upper bounds of the $i$-th variable. If a better solution is not found using the current value of $SR[i]$, it is halved: $SR[i] = SR[i]/2$. If $SR[i]$ becomes less than 1E−18 (the original threshold was 1E−15), the value is reinitialized.
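A single sweep of an MTS-LS1-style coordinate search can be sketched as follows. This is a simplified illustration of the classic scheme, assuming minimization: each coordinate first tries a step of $-SR[i]$, then $+0.5 \cdot SR[i]$, and on failure the range is halved and eventually reinitialized as described above.

```python
def mts_ls1_pass(x, f, SR, lower, upper, threshold=1e-18):
    """One sweep of MTS-LS1-style coordinate-wise local search."""
    best = f(x)
    for i in range(len(x)):
        old = x[i]
        x[i] = old - SR[i]                  # first trial step
        val = f(x)
        if val < best:
            best = val                      # keep the improving move
        else:
            x[i] = old + 0.5 * SR[i]        # opposite, shorter step
            val = f(x)
            if val < best:
                best = val
            else:
                x[i] = old                  # no improvement: restore
                SR[i] *= 0.5                # halve the search range
                if SR[i] < threshold:       # reinitialize as in the paper
                    SR[i] = 0.4 * (upper[i] - lower[i])
    return x, best, SR
```

On the sphere function starting from (1, 1) with $SR = (0.5, 0.5)$, one sweep moves the point to (0.5, 0.5), improving the objective from 2 to 0.5.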
MTS-LS1 is applied after each main cycle starting with the current best-found solution until maximum FEVs are reached.
The initial number of generations for all algorithms is 15, and the minimum value is 5. After each cycle, we add $(M - 1)$ generations to $G$ for the algorithm with the highest improving rate, while all other algorithms reduce their number of generations by one. The initial population size is 100; the minimum and maximum values are 25 and 200, respectively. After the algorithm has spent 90% of its computational resource, the population size is set to its minimum value, as proposed in [] (in this work, the value is 25).
The whole implementation scheme for the proposed approach is presented using pseudocode in Algorithm 1.
Algorithm 1 The general scheme of COSACC-LS1
Require: the number of algorithms M in CC, the number of subcomponents for each algorithm, n, NP, minNP, maxNP, G_init, G_lose, G_min, maxFEVs
Ensure: best_found
    population ← RandomPopulation(n, NP)
    DI_init ← CalculateDiversity(population)                ▹ using Equation (9)
    for all i = 1, …, M do
        G_i ← G_init
    end for
    while FEVs < maxFEVs do
        for all i ∈ RandomPermutation(1, …, M) do
            medianFitness_before ← GetMedianFitness(population)
            for g ← 1, …, G_i do
                best_found ← CC-SHADE(population, NP, i)
                RD ← EvalRD(DI_init, population, NP)        ▹ Equation (10)
                NP ← EvalPopsize(RD, maxFEVs, NP, maxNP)    ▹ Equation (13)
            end for
            medianFitness_after ← GetMedianFitness(population)
            improving_rate_i ← (medianFitness_before − medianFitness_after) / medianFitness_after
        end for
        for all i = 1, …, M do
            G_i ← EvalNumGenerations(improving_rate_i)      ▹ Equation (8)
        end for
        best_found ← GetBestFound(population)
        best_found ← MTS-LS1(best_found)
    end while

4. Experimental Setups and Implementation

We have investigated the performance of COSACC-LS1 and compared the results with other state-of-the-art approaches using the current LSGO benchmark, proposed at the special session of the IEEE Congress on Evolutionary Computation in 2013 []. The benchmark comprises 15 “black-box” real-valued LSGO problems of 4 types: fully separable functions (F1–F3), partially separable functions (F4–F11), functions with overlapping subcomponents (F12–F14), and fully non-separable functions (F15). The functions have many features that complicate solving the problems using standard EAs and other metaheuristics, including non-uniform subcomponent sizes, imbalance in the contribution of subcomponents, overlapping subcomponents, transformations of the base functions, ill-conditioning, symmetry breaking, and irregularities [,].
The performance measure for LSGO algorithms is the error of the best-found solution averaged over 25 independent runs. The error is the absolute difference between the best-found value and the true value of the global optimum. The maximum number of FEVs in a run is 3.0E+06. Based on the benchmark rules, the following additional data are collected: for each problem, the best-found fitness values averaged over 25 runs are saved after 1.2E+05, 6.0E+05, and 3.0E+06 FEVs. We will also estimate the variance of the results using the best, median, worst, mean, and standard deviation of the results.
The authors of the LSGO CEC 2013 benchmark provide software implementations in the C++, Java, and Python programming languages. For a fair comparison of the results with other state-of-the-art algorithms, the Toolkit for Automatic Comparison of Optimizers (TACO) [,] is used. TACO is an online database that offers automatic comparison of results uploaded by users with the results of selected LSGO algorithms stored in the database. TACO presents reports with the results of ranking the selected algorithms based on the Formula 1 ranking system. The ranking is presented for the whole benchmark and for each of the 4 types of problems.
Experimental analysis of new LSGO approaches is very expensive in terms of computational time. For all computational experiments, the proposed approach has been implemented in C++. C++ usually demonstrates higher computing speed and has wide possibilities for parallelization over many computers with many CPU cores. We have designed and assembled a computational cluster based on 8 AMD Ryzen Pro CPUs, which, in total, supply 128 threads for parallel computing. The MPICH2 (Message Passing Interface Chameleon) framework is used for connecting all PCs in the cluster, with a Master–Slave communication scheme with a queue. The operating system is Ubuntu LTS 20.04. One series of experiments on the LSGO benchmark takes about 2 h on the cluster, compared to 265 h on a single computer with regular sequential computing. The source codes and additional information on our cluster are available at https://github.com/VakhninAleksei/COSACC-LS1 (accessed on 01 September 2022).

5. The Experimental Results

The results of evaluating COSACC-LS1 with the best configuration {1, 2, 4} on the IEEE CEC LSGO benchmark are presented in Table 3. The results contain the best, median, worst, mean, and standard deviation values of the best-found solutions over 25 independent runs after 1.2E+05, 6.0E+05, and 3.0E+06 FEVs (following the benchmark rules).
Table 3. The experimental results on the IEEE CEC 2013 LSGO benchmark.
First, the results of COSACC-LS1 have been compared with those of its component algorithms, COSACC and LS1, to prove the benefits of their cooperation. Both component algorithms have been evaluated using their best settings obtained with grid search. All comparisons have been performed using the median of the best-found solutions in the runs after spending the full FEV budget. Table 4 contains the medians and the results of the Mann–Whitney–Wilcoxon (MWW) tests and ranking. Higher rank values are better. When the difference in the results is not statistically significant, the algorithms share ranks. The average ranks are presented in Figure 2.
Table 4. The comparison of algorithms.
Figure 2. Average ranks of COSACC, LS1, and COSACC-LS1 algorithms.
As we can see from the results, COSACC-LS1 won 8 times, shared first place with a component algorithm 5 times, and took second place 2 times. On easy separable problems (F1–F6), standalone COSACC loses to both other algorithms because it spends the budget on exploration of the search space, while LS1 greedily converges to an optimum. On average, COSACC-LS1 obtains the best ranks; thus, in the case of black-box LSGO problems, the choice of the hybrid approach is preferable.
The following statistical data for each benchmark problem, collected during independent runs of COSACC-LS1, have been visualized: convergence, dynamics of the population size, and redistribution of the computational resources among the algorithms with different numbers of subcomponents. Each plot presents the mean and standard deviation over 25 runs. The whole set of plots is presented in Appendix A (Figures A1–A15).

6. Discussion

In this section, we analyze three general situations in the algorithm's behavior based on the plots for the F3, F8, and F10 problems.
LSGO problems are hard for many search techniques when they optimize the complete solution vector, and problem decomposition can ease this issue. In our previous studies, we discovered that the cooperation of multiple algorithms with different numbers of subcomponents usually demonstrates the following usage of decompositions. At the initial generations, the best performance is obtained using many subcomponents of small sizes; such component-wise search performs the exploration strategy. After that, the approach usually chooses algorithms with a smaller number of subcomponents, and at the final generations it optimizes the complete solution vector. Optimization without decomposition tries to improve the final solution and performs the exploitation strategy []. We can see similar behavior for COSACC-LS1.
Figure A3 (see Appendix A) shows the dynamics of the algorithms on the F3 problem. F3 is a fully separable problem based on the Ackley function. At the same time, the problem is one of the hardest in the benchmark. The basin of global optimum is narrow. F3 has a huge number of local optima with almost the same values, which cover most of the search space.
As we can see in Figure A3a, the algorithm demonstrates fast convergence at the initial generations, after which there are no significant improvements in the best-found value. The population size at the initial generations is large because the population diversity ($DI$) becomes less than the required relative diversity ($rRD$). This is the result of the fast convergence, and the algorithm increases the population size up to the threshold value (Figure A3b). When the algorithm falls into stagnation, the individuals keep their positions and the diversity becomes greater than $rRD$; thus, the population size decreases. As we can see from the low STD values, this situation is repeated in every run. The resource redistribution plots (Figure A3c) show that at the initial generations the algorithm prefers to use many subcomponents, but when it falls into stagnation, the algorithm takes this as the end of the exploration and gives the resources to optimizing the complete solution vector.
Figure A8 shows the dynamics of the algorithms on the F8 problem, which is a combination of 20 non-separable shifted and rotated elliptic functions. The problem is assumed to be a good test function for decomposition-based approaches, but each subcomponent is a hard optimization problem, which is non-separable and has local irregularities.
As we can see in Figure A8a, the proposed approach demonstrates good convergence at the beginning of the optimization process and then stagnates. Figure A8b shows that the fast convergence leads to a loss of diversity ($DI$), and the algorithm increases the population size until 50% of the FEV budget is reached. In the middle of the budget, the individuals have almost the same fitness values and do not improve the best-found value (the plateau area in Figure A8b). Finally, the diversity ($DI$) becomes less than the required relative diversity ($rRD$) and the population size decreases. In contrast with the results on F3, before the algorithm falls into stagnation, the fast improvements in the objective lead to an increase in the population size, preventing premature convergence.
Figure A8c shows that the algorithm distributes computational resources almost in equal portions on average. We can see an example of the true cooperative search when all component algorithms support each other. The standard deviation of the redistribution is high because the algorithm permanently adapts G i values in the run.
Figure A10 presents the convergence on the F10 problem. F10 is a combination of 20 non-separable shifted and rotated Ackley functions. As noted previously, the Ackley function is one of the hardest in the benchmark, and all Ackley-based problems are also very challenging for LSGO approaches.
As we can see in Figure A10a, the algorithm improves the fitness value continually during the run. At the same time, the relative value of the improvements is low, and the coordinates of the individuals remain almost the same. The $DI$ value becomes less than $rRD$ at the early generations, and the algorithm decreases the population size (Figure A10b). As we have mentioned previously, slow convergence and stagnation are usually the result of the end of the exploration stage, and the algorithm prefers to optimize the complete solution vector instead of decomposition-based subcomponents. As we can see in Figure A10c, COSACC-LS1 gives all resources to the component algorithm with no decomposition.
It should be noted here that, in all experiments, all component algorithms have a minimum guaranteed amount of computational resource. Even when one of the algorithms is leading, this can still be the result of the cooperation of multiple decompositions, and their small contributions essentially increase the performance of the leading algorithm.
As we can see from the presented convergence plots, COSACC-LS1 demonstrates the self-configuration capability. The approach can adaptively select the best decomposition option using redistribution of the computational resource. Different behavior for different LSGO problems ensures that COSACC-LS1 adapts to the topology of the given objective function. Another feature of the proposed approach is the adaptive control of the population size that maintains the population diversity and prevents the premature convergence.
Finally, the results of COSACC-LS1 have been compared with state-of-the-art approaches using the TACO online database. For the comparison, we have selected all algorithms, which were winners and prize-winners of all previous IEEE CEC LSGO competitions: CC-CMA-ES, CC-RDG3, IHDELS, MLSHADE-SPA, MOS, MPS, SGCC, and SHADEILS (see Table 2). Additionally, we have added DECC-G as it is used as a baseline in the majority of studies and experimental comparisons. Table 5 and Figure 3 show the results of the comparison. For all algorithms, we can see the sum of scores obtained on all benchmark problems and the sum of scores for each type of LSGO problems. The following notation for classes of LSGO problems is used: non-separable functions (Class 1), overlapping functions (Class 2), functions with no separable subcomponents (Class 3), functions with a separable subcomponent (Class 4), and fully-separable functions (Class 5).
Table 5. Comparison of state-of-the-art algorithms.
Figure 3. Summary scores of state-of-the-art algorithms.
The LSGO benchmark contains only one fully non-separable problem and three problems with overlapping components, which are the hardest problems. At the same time, an algorithm can obtain high summary scores if it has high scores for the type of LSGO problem, which contains many problems.
As we can see from the comparison, CC-RDG3, MLSHADE-SPA, and MOS have average scores for non-separable and overlapping problems but perform well for the other types of LSGO problems. SGCC is the second best at solving non-separable problems but is inferior on all the rest. SHADEILS is still the competition winner, but it demonstrates low performance when solving fully separable problems because it does not use any decomposition approach that could improve the results for this type.
To better investigate the results for each class of LSGO problems, we have adjusted the scores by the number of problems in each class. Table 6 shows the adjusted results, and Figure 4 demonstrates the variance of scores for the five best algorithms. MLSHADE-SPA and MOS obtain high scores for fully separable functions (the outliers in Figure 4); although their results for the other classes have low variance, they lie below the median values of the leading approaches. The median values of SHADEILS and COSACC-LS1 are close, but COSACC-LS1 has lower variance, so its results are more stable. The variance of SHADEILS is skewed towards larger ranks; note, however, that this holds only for this benchmark set, because the approach is fine-tuned to the benchmark.
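The per-class adjustment is a simple normalization of the summary score by the class size. A minimal sketch (the function name is ours; the example class sizes follow the suite structure described in the text, where Class 1 contains one problem and Class 2 contains three):

```python
# Sketch: normalize each class's summary score by the number of
# benchmark problems in that class, so classes containing many
# problems do not dominate the overall comparison.
def adjusted_scores(class_scores, class_sizes):
    return {cls: score / class_sizes[cls] for cls, score in class_scores.items()}
```

For example, a raw score of 25 over the single Class 1 problem and 75 over the three Class 2 problems both adjust to 25 points per problem, making the classes directly comparable.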
Table 6. Comparison of state-of-the-art algorithms (scores adjusted by the number of problems in each class).
Figure 4. Variance of the adjusted scores.
Taking into account the number of problems of each type, we can conclude that the proposed algorithm performs well on all types of LSGO problems. This makes COSACC-LS1 preferable for solving black-box LSGO problems when information on the problem type is unavailable. At the same time, COSACC-LS1 provides a general framework for hybridizing multiple problem decomposition schemes, a global optimizer, and a local search algorithm, so it has great potential for further performance improvements by plugging in other component approaches.

7. Conclusions

In this paper, a framework for solving LSGO problems has been proposed, and a new optimization algorithm based on it, COSACC-LS1, has been designed and investigated. The performance of COSACC-LS1 has been evaluated and compared with state-of-the-art approaches using the IEEE CEC LSGO 2013 benchmark and the TACO database. The proposed approach outperforms all LSGO competition winners except one, SHADEILS. At the same time, COSACC-LS1 performs well on all types of LSGO problems, whereas SHADEILS shows poor results on fully separable problems.
COSACC-LS1 proposes an original hybridization of three main LSGO techniques: CC, DE, and LS. In this work, we have applied SHADE as the DE component, MTS-LS1 as the LS component, and a new approach for the adaptive selection of the problem decomposition (several variants with different subcomponent sizes). The framework does not fix the choice of component algorithms, and the user may plug in any global and local search algorithms. In that sense, the proposed approach has potential for further improvement. In our further research, we will examine the proposed framework with other stochastic population-based metaheuristics.
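The MTS-LS1 component itself admits a compact sketch. The following simplified Python version (parameter values, the bound clamping, and the minimization convention are illustrative assumptions) shows the coordinate-wise search with the adaptive search range SR: each coordinate is perturbed by -SR[i]; if that worsens the fitness, the move is undone and +0.5*SR[i] is tried; if a full pass yields no improvement, the search ranges are halved.

```python
# Simplified sketch of MTS-LS1 (minimization). f is the objective,
# x the current solution, sr the per-coordinate search ranges,
# lower/upper the box constraints (scalars here for brevity).
def mts_ls1(f, x, sr, lower, upper):
    improved = False
    fx = f(x)
    for i in range(len(x)):
        old = x[i]
        # First try a negative step along coordinate i.
        x[i] = max(lower, min(upper, old - sr[i]))
        fy = f(x)
        if fy < fx:
            fx, improved = fy, True
        else:
            # Undo and try a smaller positive step.
            x[i] = max(lower, min(upper, old + 0.5 * sr[i]))
            fy = f(x)
            if fy < fx:
                fx, improved = fy, True
            else:
                x[i] = old  # no improvement along this coordinate
    if not improved:
        sr = [s / 2 for s in sr]  # shrink the search ranges
    return x, fx, sr
```

On a sphere function starting from (1, 1) with SR = 0.4, a single pass moves the point to roughly (0.6, 0.6); started at the optimum, the pass makes no move and halves SR instead.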
The interaction of the three CC-based algorithms demonstrates high performance due to the adaptive redistribution of computational resources. We have visualized the redistribution and found that the approach can adapt to a new environment (a new landscape of an LSGO problem). Instead of committing to a single decomposition variant, the interaction allows even the component algorithm with the smallest resource share to keep participating in the optimization process, and we can see that it still contributes in some regions of the search space.
The well-known "No free lunch" theorem states that no single optimization algorithm can perform well on all types and instances of optimization problems. At the same time, we can relax its practical consequences by introducing self-adaptive control over multiple approaches. Such an approach can adaptively assemble an effective algorithm (by granting more computations to the best component algorithm) for a specific optimization problem, and even for a specific region of the search space within the optimization process.
Even though the LSGO benchmark contains many types of LSGO problems, many real-world optimization problems are not well studied and can require fine adjustment of some COSACC-LS1 parameters. In further work, we will address the issue of developing an approach for online adaptation of the internal parameters of the subcomponent optimizers.

Author Contributions

A.V.: methodology, software, validation, investigation, resources, writing—original draft preparation, visualization; E.S. (Evgenii Sopov): conceptualization, methodology, formal analysis, writing—review and editing, visualization, supervision; E.S. (Eugene Semenkin): conceptualization, formal analysis, writing—review and editing, supervision, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation within limits of state contract No. FEFE-2020-0013.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

The following notation is used in this manuscript:
best_found_before: The best-found solution before an optimization cycle
best_found_after: The best-found solution after an optimization cycle
CalculateDiversity(population): Function for calculating the diversity of the population
CC_i: The number of subcomponents of the i-th algorithm
CCSHADE(population, NP, i): Function for evolving the population using the cooperative co-evolution algorithm with NP individuals and i subcomponents
Cr: Crossover rate
current-to-pbest/1: Mutation scheme in SHADE
DI: Population diversity
EvalNumGenerations: Function for calculating a new number of generations
EvalPopsize: Function for calculating a new value of the population size
EvalRD: Function for calculating the relative diversity of the population
F: Scale factor
FEV: The number of function evaluations
G_i: The number of generations of the i-th algorithm
G_min: The minimal number of generations
G_lose: The number of generations by which the budget of an algorithm is reduced
G_win: The number of generations by which the budget of an algorithm is increased
GetBestFound: Function that returns the best-found solution from the population
GetMedianFitness(population): Function that returns the median fitness value in the population
H: The number of F and Cr pairs in SHADE
improvement_rate_i: The change of the best-found fitness of the i-th algorithm in an optimization cycle
IR: A set of indexes of algorithms with the best improvement rate
M: The number of algorithms
maxFEVs: The maximum number of fitness function evaluations
maxNP: The upper bound for the population size
minNP: The lower bound for the population size
medianFitness_before: The median fitness in the population before an optimization cycle
medianFitness_after: The median fitness in the population after an optimization cycle
n: The number of decision variables
NI: The number of algorithms with the best improvement rate
NP: The population size
pool: The number of generations for redistribution
RandomPermutation: Function that randomly permutes the values of a vector
RandomPopulation(n, NP): Function that generates a random population of NP individuals with n variables
RD: The relative diversity
RFES: The relative share of the FEV budget spent
rRD: The required value of RD
SR[i]: The search range for the i-th coordinate in MTS-LS1
STD: The standard deviation
u: A mutant vector
x_pbest: A random solution chosen from the p best individuals
x_t: An individual chosen using tournament selection
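Several of the functions above concern population diversity. A minimal sketch of how CalculateDiversity and EvalRD could be realized (mean Euclidean distance to the population centroid, normalized by the initial diversity) is given below; this pairing is our assumption for illustration, and the paper's exact formulas may differ:

```python
import math

# Sketch: diversity as the mean Euclidean distance of individuals
# to the population centroid (an illustrative choice).
def calculate_diversity(population):
    n = len(population[0])
    centroid = [sum(ind[j] for ind in population) / len(population) for j in range(n)]
    return sum(math.dist(ind, centroid) for ind in population) / len(population)

# Relative diversity: current diversity divided by the diversity
# of the initial population.
def eval_rd(population, initial_diversity):
    return calculate_diversity(population) / initial_diversity
```

For a two-individual population at (0, 0) and (2, 0), the centroid is (1, 0), so the diversity is 1.0; relative to an initial diversity of 2.0, RD is 0.5.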

Acronyms

The following acronyms are used in this manuscript:
ABBO: Automated Black-Box Optimization
BICCA: Bi-space Interactive Cooperative Co-evolutionary Algorithm
CABC: Cooperative Artificial Bee Colony
CBCC: Contribution-Based Cooperative Co-evolution
CBFO: Cooperative Bacterial Foraging Optimization
CC: Cooperative Co-evolution
CC-CMA-ES: Scaling Up Covariance Matrix Adaptation Evolution Strategy
CCDE: Cooperative Co-evolutionary Differential Evolution
CCEA-AVP: Correlation-based Adaptive Variable Partitioning
CCFR2: Extended Cooperative Co-evolution Framework
CCGA: Cooperative Co-evolutionary Approach for Genetic Algorithm
CC-GDG-CMAES: Competitive Divide-and-Conquer Algorithm with Covariance Matrix Adaptation Evolution Strategy
CCOABC: Cooperative Co-evolution Orthogonal Artificial Bee Colony
CCPSO: Cooperatively Co-evolving Particle Swarms Algorithm
CC-RDG3: Cooperative Co-evolution with Recursive Differential Grouping
CCVIL: Cooperative Co-evolution with Variable Interaction Learning
C-DEEPSO: Canonical Differential Evolutionary Particle Swarm Optimization
CEC: IEEE Congress on Evolutionary Computation
COSACC-LS1: Coordination of Self-adaptive Cooperative Co-evolution Algorithms with Local Search
CPSO: Cooperative Approach to Particle Swarm Optimization
CPSO-Hk: Combination of the Cooperative Approach to Particle Swarm Optimization with k Subcomponents with the Standard Particle Swarm Optimization
CPSO-Sk: Cooperative Approach to Particle Swarm Optimization with k Subcomponents
CPU: Central Processing Unit
DE: Differential Evolution
DECC-DG: Cooperative Co-evolution with Differential Grouping
DECC-DG2: Cooperative Co-evolution with a Faster and More Accurate Differential Grouping
DECC-DML: Cooperative Co-evolution with Delta Grouping
DECC-G: Self-Adaptive Differential Evolution with Neighborhood Search with Cooperative Co-evolution
DECC-ML: Multilevel Cooperative Co-evolution with More Frequent Random Grouping
DECC-XDG: Cooperative Co-evolution with Extended Differential Grouping
DGSC: Differential Grouping with Spectral Clustering
DIMA: Dependency Identification with Memetic Algorithm
DMS-PSO: Dynamic Multi-Swarm Particle Swarm Optimizer
EA: Evolutionary Algorithm
FEPCC: Fast Evolutionary Programming with Cooperative Co-evolution
GA: Genetic Algorithm
IHDELS: Iterative Hybridization of Differential Evolution with Local Search
IPOP-CMA-ES: Restart Covariance Matrix Adaptation Evolution Strategy with Increasing Population Size
IRRG: Incremental Recursive Ranking Grouping
JADE: Adaptive Differential Evolution with Optional External Archive
L-BFGS-B: Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm for bound-constrained optimization
LSGO: Large-Scale Global Optimization
L-SHADE: Success-History-Based Adaptive Differential Evolution with Linear Population Size Reduction
MLCC: Multilevel Cooperative Co-evolution
MLSHADE-SPA: Memetic Framework for Solving Large-Scale Optimization Problems
MOS: Multiple Offspring Sampling
MPICH2: Message Passing Interface Chameleon
MPS-CMA-ES: Hybrid of Minimum Population Search and Covariance Matrix Adaptation Evolution Strategy
MTS: Multiple Trajectory Search
MWW: Mann–Whitney–Wilcoxon
NSGA-2: Non-dominated Sorting Genetic Algorithm
PSO: Particle Swarm Optimization
SACC: Cooperative Co-evolution with Sensitivity Analysis-based Budget Assignment Strategy
SaDE: Self-Adaptive Differential Evolution
SaNSDE: Self-Adaptive Differential Evolution with Neighborhood Search
SGCC: Cooperative Co-evolution with Soft Grouping
SHADE-ILS: Success-History-Based Parameter Adaptation for Differential Evolution with Iterative Local Search
TACO: Toolkit for Automatic Comparison of Optimizers
VMODE: Variable Mesh Optimization Differential Evolution

Appendix A. Plots of Convergence, the Population Size, and Redistribution of the Computational Resources

Figure A1. The dynamics of COSACC-LS1 on the F1 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A2. The dynamics of COSACC-LS1 on the F2 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A3. The dynamics of COSACC-LS1 on the F3 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A4. The dynamics of COSACC-LS1 on the F4 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A5. The dynamics of COSACC-LS1 on the F5 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A6. The dynamics of COSACC-LS1 on the F6 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A7. The dynamics of COSACC-LS1 on the F7 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A8. The dynamics of COSACC-LS1 on the F8 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A9. The dynamics of COSACC-LS1 on the F9 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A10. The dynamics of COSACC-LS1 on the F10 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A11. The dynamics of COSACC-LS1 on the F11 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A12. The dynamics of COSACC-LS1 on the F12 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A13. The dynamics of COSACC-LS1 on the F13 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A14. The dynamics of COSACC-LS1 on the F14 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.
Figure A15. The dynamics of COSACC-LS1 on the F15 problem: (a) Convergence; (b) Population size; (c) Redistribution of resources.

References

  1. The Cambridge Energy Landscape Database. Available online: https://www-wales.ch.cam.ac.uk/CCD.html (accessed on 31 January 2022).
  2. Tang, K.; Yao, X.; Suganthan, P.N.; Macnish, C.; Chen, Y.P.; Chen, C.M.; Yang, Z. Benchmark Functions for the CEC’2008 Special Session and Competition on Large Scale Global Optimization; Technical report; Nature Inspired Computation and Application Laboratory, USTC: Hefei, China, 2007. [Google Scholar]
  3. Tang, K. Summary of Results on CEC’08 Competition on Large Scale Global Optimization; Technical report; Nature Inspired Computation and Application Laboratory, USTC: Hefei, China, 2008. [Google Scholar]
  4. Tang, K.; Li, X.; Suganthan, P.N.; Yang, Z.; Weise, T. Benchmark Functions for the CEC’2010 Special Session and Competition on Large-Scale Global Optimization; Technical report; Nature Inspired Computation and Applications Laboratory; USTC: Hefei, China, 2009. [Google Scholar]
  5. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC’2013 Special Session and Competition on Large-Scale Global Optimization; Technical report; RMIT University: Melbourne, Australia, 2013. [Google Scholar]
  6. Molina, D.; LaTorre, A. Toolkit for the Automatic Comparison of Optimizers: Comparing Large-Scale Global Optimizers Made Easy. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  7. Mahdavi, S.; Shiri, M.E.; Rahnamayan, S. Metaheuristics in large-scale global continues optimization: A survey. Inf. Sci. 2015, 295, 407–428. [Google Scholar] [CrossRef]
  8. Singh, A.; Dulal, N. A Survey on Metaheuristics for Solving Large Scale Optimization Problems. Int. J. Comput. Appl. 2017, 170, 1–7. [Google Scholar] [CrossRef]
  9. Del Ser, J.; Osaba, E.; Molina, D.; Yang, X.S.; Salcedo-Sanz, S.; Camacho, D.; Das, S.; Suganthan, P.N.; Coello Coello, C.A.; Herrera, F. Bio-inspired computation: Where we stand and what’s next. Swarm Evol. Comput. 2019, 48, 220–250. [Google Scholar] [CrossRef]
  10. Omidvar, M.N.; Li, X.; Yao, X. A review of population-based metaheuristics for large-scale black-box global optimization: Part A. IEEE Trans. Evol. Comput. 2021, 26, 1–21. [Google Scholar] [CrossRef]
  11. Omidvar, M.N.; Li, X.; Yao, X. A review of population-based metaheuristics for large-scale black-box global optimization: Part B. IEEE Trans. Evol. Comput. 2021, 26, 823–843. [Google Scholar] [CrossRef]
  12. Sun, Y.; Li, X.; Ernst, A.; Omidvar, M.N. Decomposition for Large-scale Optimization Problems with Overlapping Components. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 326–333. [Google Scholar] [CrossRef]
  13. Molina, D.; LaTorre, A.; Herrera, F. SHADE with Iterative Local Search for Large-Scale Global Optimization. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  14. LaTorre, A.; Muelas, S.; Peña, J.M. Multiple Offspring Sampling in Large Scale Global Optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar] [CrossRef]
  15. Zhao, S.Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with local search for Large Scale Global Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3845–3852. [Google Scholar] [CrossRef]
  16. Liang, J.; Suganthan, P. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 124–129. [Google Scholar] [CrossRef]
  17. Marcelino, C.; Almeida, P.; Pedreira, C.; Caroalha, L.; Wanner, E. Applying C-DEEPSO to Solve Large Scale Global Optimization Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–6. [Google Scholar] [CrossRef]
  18. López, E.; Puris, A.; Bello, R. VMODE: A hybrid metaheuristic for the solution of large scale optimization problems. Investig. Oper. 2015, 36, 232–239. [Google Scholar]
  19. Tseng, L.Y.; Chen, C. Multiple trajectory search for Large Scale Global Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3052–3059. [Google Scholar] [CrossRef]
  20. Molina, D.; Herrera, F. Iterative hybridization of DE with local search for the CEC’2015 special session on large scale global optimization. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1974–1978. [Google Scholar] [CrossRef]
  21. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  22. Morales, J.; Nocedal, J. Remark on “Algorithm 778: L-BFGS-B: Fortran Subroutines for Large-Scale Bound Constrained Optimization”. ACM Trans. Math. Softw. 2011, 38, 1–4. [Google Scholar] [CrossRef]
  23. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation (CEC), Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1769–1776. [Google Scholar] [CrossRef]
  24. Storn, R.; Price, K. Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization Over Continuous Spaces. J. Glob. Optim. 1995, 23, 1–15. [Google Scholar]
  25. LaTorre, A.; Muelas, S.; Peña, J.M. Large scale global optimization: Experimental results with MOS-based hybrid algorithms. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 2742–2749. [Google Scholar] [CrossRef]
  26. Tanabe, R.; Fukunaga, A. Evaluating the performance of SHADE on CEC 2013 benchmark problems. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 1952–1959. [Google Scholar] [CrossRef]
  27. Bolufé-Röhler, A.; Fiol-González, S.; Chen, S. A minimum population search hybrid for large scale global optimization. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1958–1965. [Google Scholar] [CrossRef]
  28. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18. [Google Scholar] [CrossRef]
  29. Meunier, L.; Rakotoarison, H.; Wong, P.K.; Roziere, B.; Rapin, J.; Teytaud, O.; Moreau, A.; Doerr, C. Black-Box Optimization Revisited: Improving Algorithm Selection Wizards Through Massive Benchmarking. IEEE Trans. Evol. Comput. 2022, 26, 490–500. [Google Scholar] [CrossRef]
  30. Potter, M.; De Jong, K. A cooperative coevolutionary approach to function optimisation. In Proceedings of the 3rd Conference on Parallel Problem Solving from Nature, Jerusalem, Israel, 9–14 October 1994; pp. 245–257. [Google Scholar]
  31. Liu, Y.; Yao, X.; Zhao, Q.; Higuchi, T. Scaling up fast evolutionary programming with cooperative coevolution. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–31 May 2001; Volume 2, pp. 1101–1108. [Google Scholar] [CrossRef]
  32. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  33. Shi, Y.; Teng, H.f.; Li, Z.q. Cooperative Co-evolutionary Differential Evolution for Function Optimization. Lect. Notes Comput. Sci. 2005, 3611, 1080–1088. [Google Scholar] [CrossRef]
  34. Bergh, F.; Engelbrecht, A. A Cooperative Approach to Particle Swarm Optimization. Evol. Comput. IEEE Trans. Neural Netw. 2004, 8, 225–239. [Google Scholar] [CrossRef]
  35. Chen, H.; Zhu, Y.; Hu, K.; He, X.; Niu, B. Cooperative Approaches to Bacterial Foraging Optimization. Lect. Notes Comput. Sci. 2008, 5227, 541–548. [Google Scholar] [CrossRef]
  36. El-Abd, M. A cooperative approach to The Artificial Bee Colony algorithm. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–5. [Google Scholar] [CrossRef]
  37. Yang, Z.; Tang, K.; Yao, X. Large scale evolutionary optimization using cooperative coevolution. Inf. Sci. 2008, 178, 2985–2999. [Google Scholar] [CrossRef]
  38. Yang, Z.; Tang, K.; Yao, X. Self-adaptive Differential Evolution with Neighborhood Search. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC), Hong Kong, China, 1–6 June 2008; pp. 1110–1116. [Google Scholar] [CrossRef]
  39. Yang, Z.; Tang, K.; Yao, X. Multilevel Cooperative Coevolution for Large Scale Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC), Hong Kong, China, 1–6 June 2008; pp. 1663–1670. [Google Scholar] [CrossRef]
  40. Omidvar, M.N.; Li, X.; Yang, Z.; Yao, X. Cooperative Co-evolution for large scale optimization through more frequent random grouping. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar] [CrossRef]
  41. Li, X.; Yao, X. Tackling high dimensional nonseparable optimization problems by cooperatively coevolving particle swarms. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation (CEC), Trondheim, Norway, 18–21 May 2009; pp. 1546–1553. [Google Scholar] [CrossRef]
  42. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput. 2012, 16, 210–224. [Google Scholar] [CrossRef]
  43. Ren, Y.; Wu, Y. An efficient algorithm for high-dimensional function optimization. Soft Comput. 2013, 17, 1–10. [Google Scholar] [CrossRef]
  44. Hadi, A.; Wagdy, A.; Jambi, K. LSHADE-SPA memetic framework for solving large-scale optimization problems. Complex Intell. Syst. 2018, 5, 25–40. [Google Scholar] [CrossRef]
  45. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar] [CrossRef]
  46. Ray, T.; Yao, X. A cooperative coevolutionary algorithm with Correlation based Adaptive Variable Partitioning. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation (CEC), Trondheim, Norway, 18–21 May 2009; pp. 983–989. [Google Scholar] [CrossRef]
  47. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  48. Omidvar, M.N.; Li, X.; Yao, X. Smart Use of Computational Resources Based on Contribution for Cooperative Co-evolutionary Algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO’11, Dublin, Ireland, 12–16 July 2011; pp. 1115–1122. [Google Scholar] [CrossRef]
  49. Omidvar, M.N.; Li, X.; Yao, X. Cooperative Co-evolution with delta grouping for large scale non-separable function optimization. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar] [CrossRef]
  50. Chen, W.; Weise, T.; Yang, Z.; Tang, K. Large-Scale Global Optimization Using Cooperative Coevolution with Variable Interaction Learning. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Krakow, Poland, 11–15 September 2010; Ser. Lecture Notes in Computer Science. Volume 6239, pp. 300–309. [Google Scholar]
  51. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution With Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  52. Sayed, E.; Essam, D.; Sarker, R. Dependency Identification technique for large scale optimization problems. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar] [CrossRef]
  53. Molina, D.; Lozano, M.; García-Martínez, C.; Herrera, F. Memetic Algorithms for Continuous Optimisation Based on Local Search Chains. Evol. Comput. 2010, 18, 27–63. [Google Scholar] [CrossRef]
  54. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative Co-Evolution With Differential Grouping for Large Scale Optimization. IEEE Trans. Evol. Comput. 2014, 18, 378–393. [Google Scholar] [CrossRef]
  55. Sun, Y.; Kirley, M.; Halgamuge, S. Extended Differential Grouping for Large Scale Global Optimization with Direct and Indirect Variable Interactions. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; pp. 313–320. [Google Scholar] [CrossRef]
  56. Omidvar, M.N.; Yang, M.; Mei, Y.; Li, X.; Yao, X. DG2: A Faster and More Accurate Differential Grouping for Large-Scale Black-Box Optimization. IEEE Trans. Evol. Comput. 2017, 21, 929–942. [Google Scholar] [CrossRef]
  57. Mei, Y.; Yao, X.; Li, X.; Omidvar, M.N. A Competitive Divide-and-Conquer Algorithm for Unconstrained Large Scale Black-Box Optimization. ACM Trans. Math. Softw. 2015, 42, 1–24. [Google Scholar] [CrossRef]
  58. Li, L.; Fang, W.; Wang, Q.; Sun, J. Differential Grouping with Spectral Clustering for Large Scale Global Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 334–341. [Google Scholar] [CrossRef]
  59. Liu, J.; Tang, K. Scaling Up Covariance Matrix Adaptation Evolution Strategy Using Cooperative Coevolution. In Proceedings of the Intelligent Data Engineering and Automated Learning—IDEAL 2013, Hefei, China, 20–23 October 2013; Lecture Notes in Computer Science. Volume 8206, pp. 350–357. [Google Scholar] [CrossRef]
  60. Mahdavi, S.; Rahnamayan, S.; Shiri, M. Cooperative co-evolution with sensitivity analysis-based budget assignment strategy for large-scale global optimization. Appl. Intell. 2017, 47, 1–26. [Google Scholar] [CrossRef]
  61. Ge, H.; Zhao, M.; Hou, Y.; Kai, Z.; Sun, L.; Tan, G.; Zhang, Q. Bi-space Interactive Cooperative Coevolutionary algorithm for large scale black-box optimization. Appl. Soft Comput. 2020, 97, 1–18. [Google Scholar] [CrossRef]
  62. Liu, W.; Zhou, Y.; Li, B.; Tang, K. Cooperative Co-evolution with Soft Grouping for Large Scale Global Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 318–325. [Google Scholar] [CrossRef]
  63. Sun, Y.; Kirley, M.; Halgamuge, S. A Recursive Decomposition Method for Large Scale Continuous Optimization. IEEE Trans. Evol. Comput. 2017, 22, 647–661. [Google Scholar] [CrossRef]
  64. Komarnicki, M.M.; Przewozniczek, M.W.; Kwasnicka, H. Incremental Recursive Ranking Grouping for Large Scale Global Optimization. IEEE Trans. Evol. Comput. 2022. [Google Scholar] [CrossRef]
  65. Vakhnin, A.; Sopov, E. Investigation of Improved Cooperative Coevolution for Large-Scale Global Optimization Problems. Algorithms 2021, 14, 146. [Google Scholar] [CrossRef]
  66. Vakhnin, A.; Sopov, E. Investigation of the iCC Framework Performance for Solving Constrained LSGO Problems. Algorithms 2020, 13, 108. [Google Scholar] [CrossRef]
  67. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for Differential Evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar] [CrossRef]
  68. Poláková, R.; Bujok, P. Adaptation of Population Size in Differential Evolution Algorithm: An Experimental Comparison. In Proceedings of the 2018 25th International Conference on Systems, Signals and Image Processing (IWSSIP), Maribor, Slovenia, 20–22 June 2018; pp. 1–5. [Google Scholar] [CrossRef]
  69. Hansen, N.; Finck, S.; Ros, R.; Auger, A. Real-Parameter Black-Box Optimization Benchmarking 2009: Noisy Functions Definitions; Technical Report RR-6869; INRIA: Paris, France, 2009. [Google Scholar]
  70. TACO: Toolkit for Automatic Comparison of Optimizers. Available online: https://tacolab.org/ (accessed on 31 January 2022).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
