Article

Knowledge-Based Perturbation LaF-CMA-ES for Multimodal Optimization

1 School of Computer and Communication Technology, Lanzhou University of Technology, Lanzhou 730050, China
2 College of Information Science and Technology, Gansu Agricultural University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 9133; https://doi.org/10.3390/app14199133
Submission received: 26 June 2024 / Revised: 6 August 2024 / Accepted: 13 August 2024 / Published: 9 October 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Multimodal optimization presents a significant challenge in optimization problems due to the existence of multiple attraction basins. Balancing exploration and exploitation is essential for the efficiency of algorithms designed to solve these problems. In this paper, we propose the KbP-LaF-CMAES algorithm to address multimodal optimization problems based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) framework. The Leaders and Followers (LaF) and Knowledge-based Perturbation (KbP) strategies are the primary components of the KbP-LaF-CMAES algorithm. The LaF strategy is utilized to extensively explore the potential local spaces, where two cooperative populations evolve in synergy. The KbP strategy is employed to enhance exploration capabilities. Improved variants of CMA-ES are used to exploit specific domains containing local optima, thereby potentially identifying the global optimum. Simulation results on the test suite demonstrate that KbP-LaF-CMAES significantly outperforms other meta-heuristic algorithms.

1. Introduction

Multimodal continuous optimization problems (MMOPs) are particularly challenging due to the presence of multiple attraction basins, each containing a local optimum. In these landscapes, solutions within a specific basin quickly converge to the local optimum through a local greedy search [1,2]. However, the presence of multiple local optima complicates the direct identification of the global optimum using local search strategies.
In real-world scenarios, optimal results are often unattainable due to physical constraints, such as limited time [3]. Some methods have been introduced to address MMOP [4]. For single-objective continuous optimization problems, evolutionary algorithms (EAs) typically search for one optimal solution. However, for MMOPs, numerous local optimal solutions must be identified to provide different perspectives to the user [5]. Novel strategies, including niching methods and population decomposition, have been introduced to tackle MMOPs.
Niching techniques divide the entire population into species, and subsequent operators, such as mutation and crossover, are applied based on these species divisions. For example, using affinity propagation clustering (APC), Wang et al. [6] introduced an automatic niching technique and developed a niching differential evolution (DE) algorithm to solve MMOPs. Similarly, Hu et al. [7] introduced a niching backtracking search algorithm with adaptive local search to address MMOPs. Various techniques such as speciation, crowding, fitness sharing, clustering, derating, and dynamic cluster size niching have been employed to enhance the performance of niching-based multimodal algorithms, particularly in low-dimensional MMOPs. However, these multimodal algorithms often suffer from performance sensitivity due to multiple parameter-dependent strategies and struggle in higher-dimensional scenarios.
The method of multi-objective evolutionary algorithms (MOEAs) is also applied to solve MMOPs [2]. This approach involves constructing objectives for each subproblem to reformulate the optimization problem [8]. Peng introduced a subset selection framework and multi-objective multimodal algorithms [9]. The effectiveness of these algorithms largely depends on the constructed objectives.
The performance of EAs in solving MMOPs often surpasses that of classical methods [10,11]. EAs simulate natural processes by maintaining a population through selection, mutation, and recombination strategies [12]. This evolutionary path approximates the solution’s evolutionary trajectory. By extracting hidden information from historical solutions, EAs can uncover problem characteristics and identify the optimum accordingly.
Improved EAs have demonstrated strong performance in certain MMOPs [13]. For example, Wang et al. [14] developed an adaptive distributed differential evolution algorithm for MMOPs. Agrawal [15] introduced a distributed mutation framework along with an elite archive mechanism. Sheng et al. [16] proposed an adaptive neighborhood mutation method based on memetic differential evolution for MMOPs. Additionally, Lin et al. [17] developed an enhanced differential evolution algorithm incorporating strategies such as nearest-better clustering (NBC), species balance, and key point-based mutation for MMOPs.
Despite their capabilities, these algorithms often rely on multiple search procedures, leading to a somewhat random search process. Additionally, their ability to self-learn problem knowledge is limited. Consequently, researchers have been working to develop EAs that can identify multiple global optima and reliably reach the global optimum in MMOPs, thereby enhancing EA performance.
CMA-ES is an effective optimization strategy that is widely utilized across various problems. In the context of MMOPs, CMA-ES incorporates several strategies. Ahrari et al. [18] utilized small populations to explore the problem space in parallel, designating identified basins as taboo fields to prevent other subpopulations from converging on the same solution. Li et al. [19] employed opposition learning to investigate unknown regions. Luo et al. [1] proposed an ensemble nearest-better-neighbor clustering method to ensure that solutions are distributed across different basins.
In this paper, we propose an enhanced Leaders and Followers (LaF) approach to mitigate over-exploitation and introduce a CMA-ES-based MMOP solver. The method combines the adaptive evolutionary properties of CMA-ES, the strong exploration capability of LaF, and the Knowledge-based Perturbation (KbP) strategy. This paper builds on some of the algorithmic ideas of our earlier work [20], but further proposes a knowledge-driven perturbation strategy, which significantly improves the convergence speed and solution accuracy of the algorithm. In addition, this paper not only extends and enriches the experimental content but also adds analyses and discussions not previously covered. Specifically, we incorporate more comprehensive experimental results, refine the methodology, and delve into the implications of the research findings. These improvements significantly enhance the robustness and depth of the study.
The remainder of this paper is organized as follows. Section 2 provides an introductory overview of CMA-ES, the LaF strategy, and the KbP method. Section 3 elaborates on the improved algorithm and details the employed strategies. Section 4 presents simulation results and analysis to demonstrate the algorithm’s performance. Finally, the paper concludes in Section 5.

2. Materials and Methods

2.1. Leaders and Followers (LaF)

Multiple attraction basins exist in MMOPs, each varying in size. To effectively solve MMOPs, it is crucial to identify sufficient basins of the function to extract adequate knowledge about the optimization problem. To avoid getting trapped in local optima, some algorithms address MMOPs by controlling parameters [21]. For solving MMOPs, exploration is more important than exploitation. Overemphasis on exploitation can negatively impact algorithm efficiency [22].
The LaF mechanism is a novel approach designed to address multimodal problems [23]. In the LaF mechanism, there are two sub-populations: leaders and followers. The leader population consists of the optimal solutions identified during the optimization process, while the follower population is guided by these leaders. Figure 1 illustrates the LaF strategy’s procedure. In the strategy of LaF, the direct comparison of exploratory solutions with the best-known solutions can be avoided. The leader population is periodically updated with solutions from the follower population over several generations. By repeatedly isolating the accumulated information, the LaF strategy is appropriate for MMOPs.
Firstly, two individuals are selected from the leaders and the followers as $leader_i$ and $follower_i$, respectively. Then, a new individual $trial_i$ is generated by recombination:
$$trial_i = leader_i + \frac{\varepsilon_i}{2}\left(leader_i - follower_i\right)$$
where $\varepsilon_i$ is a random number drawn from $[0, 1]$. The solution $trial_i$ is then compared with the corresponding solution in the followers, and the individual with the better fitness value is retained. In the updating process of the followers, the offspring solutions are not compared with the current optimal solutions (the individuals in the leaders) but with the individuals of the followers. Therefore, the degree of exploitation in the LaF mechanism is relatively low, which improves the success rate of exploration.
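To make the recombination step concrete, the following minimal Python sketch performs one LaF generation as described above. The index-wise pairing of leaders and followers, the scalar draw of $\varepsilon_i$, and the minimization setting are illustrative assumptions rather than details taken from the authors' implementation.

```python
import numpy as np

def laf_generation(leaders, followers, fitness):
    """One LaF generation (illustrative sketch): recombine each follower with its
    paired leader and keep the better of (trial, follower). Trial solutions are
    never compared against the leaders, which keeps exploitation pressure low."""
    rng = np.random.default_rng()
    n, _ = followers.shape
    for i in range(n):
        eps = rng.random()                                    # epsilon_i drawn from [0, 1]
        trial = leaders[i] + 0.5 * eps * (leaders[i] - followers[i])
        if fitness(trial) < fitness(followers[i]):            # minimization assumed
            followers[i] = trial
    return followers
```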

2.2. Knowledge-Based Perturbation

MMOPs are complex and contain numerous local optima, making it challenging for algorithms to find the global optimum. A restart strategy can temporarily address this issue [24]; however, after restarting, the algorithm may quickly stagnate again. Because landscapes vary in dimensionality and complexity, the efficiency of restart-based algorithms is often unstable.
The Knowledge-based Perturbation (KbP) mechanism is a perturbation restart method designed to prevent the algorithm from getting trapped in local optima and to continue exploring the problem space [25]. In the perturbation process, it is crucial to determine the size and timing of the perturbations. The framework of the KbP strategy is illustrated in Figure 2.
If the introduced method is detected to have stalled on the optimization problem, the KbP method is triggered. In the KbP strategy, the size of the local basins is used to guide the exploration process. The basin size $d_{basin}$ of an MMOP is determined using a linear search mechanism that requires $1000d$ function evaluations, where $d$ is the dimension of the MMOP. Specifically, $d$ line searches are performed, each utilizing 1000 function evaluations. In each line search, one dimension spans the entire range of the search space, while the other dimensions vary between random pairs of values. This approach reveals the landscape of the local basins for the subsequent strategies in the algorithm.
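The description above fixes only the evaluation budget of the probe ($1000d$ evaluations, one line search per dimension); it does not spell out how the sampled profiles are converted into a basin-size estimate. The sketch below is one plausible reading, in which the average spacing between consecutive local minima of each one-dimensional profile is taken as the per-dimension estimate $d_{basin}$; fixing the remaining coordinates at a single random point and the minima-spacing heuristic are assumptions made for illustration.

```python
import numpy as np

def estimate_basin_sizes(f, lower, upper, samples_per_dim=1000, rng=None):
    """Estimate a per-dimension basin size from 1000*d line-search evaluations
    (one plausible reading of the probe described above)."""
    rng = rng or np.random.default_rng()
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    d_basin = np.empty(dim)
    for j in range(dim):
        x = rng.uniform(lower, upper)                      # other coordinates: random values
        grid = np.linspace(lower[j], upper[j], samples_per_dim)
        profile = np.empty(samples_per_dim)
        for i, v in enumerate(grid):
            x[j] = v
            profile[i] = f(x)
        # interior local minima of the one-dimensional fitness profile
        is_min = (profile[1:-1] < profile[:-2]) & (profile[1:-1] < profile[2:])
        minima = grid[1:-1][is_min]
        # average spacing between minima approximates the basin width in dimension j
        d_basin[j] = np.mean(np.diff(minima)) if minima.size > 1 else upper[j] - lower[j]
    return d_basin
```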
During the perturbation process, the perturbation size in each dimension is generated based on $d_{basin}$. The perturbation is applied to the individuals of the Leaders:
$$leaders_{new} = leaders_{old} + d_{pert}$$
$$d_{pert} = \frac{3}{4} \cdot rand\left(-d_{basin}, d_{basin}\right)$$
Here, the new individuals in the Leaders population are generated by combining the original individuals $leaders_{old}$ with the perturbation vector $d_{pert}$.
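Under the reconstruction of the two formulas above (a symmetric uniform draw in $[-d_{basin}, d_{basin}]$ scaled by $3/4$), the perturbation step can be sketched as follows; applying one independent draw per dimension and per leader is an assumption for illustration.

```python
import numpy as np

def kbp_perturb(leaders, d_basin, rng=None):
    """Knowledge-based perturbation sketch: shift every leader by a random vector
    whose per-dimension magnitude is tied to the estimated basin size d_basin."""
    rng = rng or np.random.default_rng()
    d_basin = np.asarray(d_basin, dtype=float)          # per-dimension basin-size estimate
    d_pert = 0.75 * rng.uniform(-d_basin, d_basin, size=leaders.shape)
    return leaders + d_pert
```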

2.3. CMA-ES

The CMA-ES algorithm, proposed by Hansen and Ostermeier in 1996 [26], is one of the most advanced algorithms for solving continuous optimization problems. In CMA-ES, a multivariate Gaussian distribution model is hypothesized to describe the previous motion pattern of the search in the problem space. During the iterative process, relevant parameters such as the covariance matrix and the step size are adjusted based on historical information. New individuals are generated by sampling from the adjusted distribution, and the best individuals are selected to update the parameters. CMA-ES is invariant to translation and rotation, making it highly effective for solving ill-conditioned and complex problems.
The CMA-ES algorithm consists of four main processes: sampling, selection, recombination, and parameter updating. The most common variant is the $(\mu, \lambda)$-CMA-ES, where $\mu$ and $\lambda$ are the sizes of the parent and offspring populations, respectively. In generation $g+1$ of the iterative process of $(\mu, \lambda)$-CMA-ES, $\lambda$ individuals are generated by sampling: $x_k^{(g+1)} \sim m^{(g)} + \sigma^{(g)} \mathcal{N}\left(0, C^{(g)}\right)$, $k = 1, 2, \ldots, \lambda$, where $m^{(g)}$ is the mean value of the individuals after the population selection of generation $g$; $\sigma^{(g)}$ is the standard deviation (step size) of generation $g$; $C^{(g)}$ is the covariance matrix; and $\mathcal{N}\left(0, C^{(g)}\right)$ is the multivariate normal distribution with mean $0$ and covariance matrix $C^{(g)}$.
Individuals are selected according to their fitness values. The first $\mu$ individuals are selected as parents to enter the recombination process, in which $m^{(g+1)}$ is updated as $m^{(g+1)} = \sum_{i=1}^{\mu} \omega_i x_{i:\lambda}^{(g+1)}$, where $\omega_i$ is the recombination weight coefficient and $x_{i:\lambda}^{(g+1)}$ denotes the $i$-th best individual among the $\lambda$ individuals of generation $g+1$:
$$f\left(x_{1:\lambda}^{(g+1)}\right) \le f\left(x_{2:\lambda}^{(g+1)}\right) \le \cdots \le f\left(x_{\lambda:\lambda}^{(g+1)}\right)$$
where $f(x)$ is the fitness of the individual $x$.
Finally, the parameters $\sigma^{(g)}$ and $C^{(g)}$ are updated. The step size $\sigma$ is updated according to the evolution path. The evolution path $p_\sigma$ is updated, based on the deviation of the mean value, by a distribution that obeys $\mathcal{N}(0, I)$, as shown in the following formula:
$$p_\sigma^{(g+1)} = \left(1 - c_\sigma\right) p_\sigma^{(g)} + \sqrt{c_\sigma\left(2 - c_\sigma\right)\mu_{eff}}\,\left(C^{(g)}\right)^{-\frac{1}{2}} \frac{m^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$$
where $c_\sigma$ is the learning rate of the cumulative evolution path and $\mu_{eff}$ is the variance effective selection mass.
$$\ln \sigma^{(g+1)} = \ln \sigma^{(g)} + \frac{c_\sigma}{d_\sigma}\left(\frac{\left\|p_\sigma^{(g+1)}\right\|}{E\left\|\mathcal{N}(0, I)\right\|} - 1\right)$$
where $d_\sigma$ is the damping (attenuation) coefficient and $E\left\|\mathcal{N}(0, I)\right\|$ is the expectation of the Euclidean norm of an $\mathcal{N}(0, I)$-distributed vector.
The update of the covariance matrix C is the core part of the CMA-ES algorithm. C is updated with the following formula.
$$C^{(g+1)} = \left(1 - c_1 - c_\mu\right) C^{(g)} + c_1 \, p_c^{(g+1)} \left(p_c^{(g+1)}\right)^T + c_\mu \sum_{i=1}^{\mu} \omega_i \, y_{i:\lambda}^{(g+1)} \left(y_{i:\lambda}^{(g+1)}\right)^T$$
where $c_1$ and $c_\mu$ are the learning rates of the rank-one and rank-$\mu$ updates of the covariance matrix, respectively, and $p_c^{(g+1)}$ is an evolution path analogous to $p_\sigma^{(g+1)}$:
$$y_{i:\lambda}^{(g+1)} = \frac{x_{i:\lambda}^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$$
$$p_c^{(g+1)} = \left(1 - c_c\right) p_c^{(g)} + \sqrt{c_c\left(2 - c_c\right)\mu_{eff}}\; \frac{m^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$$
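For reference, the update equations of this subsection can be collected into a single generation step, as in the sketch below. The constants in `params` (the recombination weights $w$, $\mu_{eff}$, $c_\sigma$, $d_\sigma$, $c_c$, $c_1$, $c_\mu$, and $\chi_n \approx E\|\mathcal{N}(0, I)\|$) are assumed to be set to their usual CMA-ES defaults elsewhere, and refinements such as the $h_\sigma$ stall indicator are omitted, so this is an illustration of the formulas rather than a complete solver.

```python
import numpy as np

def cmaes_generation(f, m, sigma, C, p_sigma, p_c, lam, mu, params, rng):
    """One (mu, lambda)-CMA-ES generation following the equations of Section 2.3."""
    w, mu_eff = params["w"], params["mu_eff"]                 # recombination weights, selection mass
    c_sigma, d_sigma = params["c_sigma"], params["d_sigma"]
    c_c, c_1, c_mu = params["c_c"], params["c_1"], params["c_mu"]
    chi_n = params["chi_n"]                                   # E||N(0, I)||
    n = m.size

    # Eigendecomposition C = B diag(D^2) B^T gives C^{1/2} and C^{-1/2}
    D2, B = np.linalg.eigh(C)
    D = np.sqrt(np.maximum(D2, 1e-20))
    C_sqrt = B @ np.diag(D) @ B.T
    C_inv_sqrt = B @ np.diag(1.0 / D) @ B.T

    # Sampling: x_k ~ m + sigma * N(0, C)
    Z = rng.standard_normal((lam, n))
    X = m + sigma * (Z @ C_sqrt)

    # Selection and recombination: weighted mean of the mu best samples
    order = np.argsort([f(x) for x in X])
    X_sel = X[order[:mu]]
    m_new = w @ X_sel

    # Step-size evolution path and step-size update
    y_w = (m_new - m) / sigma
    p_sigma = (1 - c_sigma) * p_sigma \
        + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * (C_inv_sqrt @ y_w)
    sigma_new = sigma * np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))

    # Covariance evolution path, rank-one and rank-mu updates
    p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * y_w
    Y = (X_sel - m) / sigma
    C_new = (1 - c_1 - c_mu) * C + c_1 * np.outer(p_c, p_c) + c_mu * (w[:, None] * Y).T @ Y
    return m_new, sigma_new, C_new, p_sigma, p_c
```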

3. The KbP-LaF-CMAES Algorithm

The KbP-LaF-CMAES algorithm operates in two distinct stages: identifying sufficient attraction basins and exploiting the identified basins. The LaF method is used to explore local basins by leveraging problem-specific knowledge during the initial search, specifically the size of the attraction basin across various dimensions. The improved variants of CMA-ES are introduced for exploitation.

3.1. Bi-Population Cooperative Evolution Strategy

The goal of methods designed for MMOPs is to discover sufficient global and local optima within the optimization problem’s landscape. This paper proposes a new method: a bi-population cooperative approach with a KbP strategy. This method leverages two distinct populations that evolve cooperatively, as depicted in Figure 3.
In the bi-population method, individuals are updated using the LaF method, as outlined in Section 2. New solutions are juxtaposed with corresponding solutions in the follower population to maintain population diversity. The LaF strategy mitigates comparison bias associated with elitism and effectively identifies promising basins. In instances of stagnation, perturbations based on multimodal knowledge are implemented.
In the KbP strategy, the population is perturbed using a random value determined by the multimodal characteristics of the problem. Subsequently, the search process of the LaF strategy is restarted from the perturbed solutions. The contribution of the KbP strategy in multimodal problems is illustrated in Figure 4. For convenience, the 2D Rastrigin problem serves as an example to demonstrate the influence of KbP. As shown in Figure 4a, a small perturbation value is added to the original solutions, with the new solution remaining in the same basin. This perturbation does not affect the exploitation process. In Figure 4b,c, the perturbation value corresponds to the size of the basin, allowing solutions to explore adjacent basins. These figures indicate that the direction of perturbation is crucial for exploration. In Figure 4d, if the perturbation size is too large, the new solution’s basin becomes random.
In the KbP strategy, the perturbation value is determined by the basin size of the multimodal problem. This knowledge is acquired through the linear search method. A detailed description is provided in Section 2. The minimum basin size in a specific dimension is used to generate the magnitude of the perturbation. During the KbP strategy, solutions are perturbed by a random value related to the basin size.

3.2. CMA-ES with Local Refinement

The CMA-ES is a highly effective algorithm for locating a single global optimum. Therefore, it is employed in the exploitation stage of the proposed algorithm. In an evolutionary algorithm, the population evolves as a whole, and the population’s information is utilized to explore the problem space; however, the information carried by individual solutions has not been fully leveraged [27]. This paper therefore proposes the CMA-ES with local refinement (LR-CMA-ES) to enhance the algorithm’s exploitation capability.
The proposed LR-CMA-ES can be regarded as a synergy of evolutionary learning and self-learning. The evolutionary learning process consists of the evolution operators of the CMA-ES, which are detailed in Section 2.3. Self-learning is achieved through local heuristic methods. The framework of the introduced LR-CMA-ES is illustrated in Figure 5.

3.3. CMA-ES with Population Reduction

To enhance scalability, the CMA-ES with population reduction (PR-CMA-ES) utilizes a relatively small population. Typically, the number of fitness evaluations is fixed. Small populations enable multiple generations and the efficient use of fitness evaluations (FEs) [28]. The optimization of multimodal problems involves two distinct tasks: locating promising attraction basins and identifying the local optimum within these basins. For the first task, the population size should be sufficiently large to account for the number of basins. Conversely, for the second task, the algorithm’s solutions must be iterated through many generations to find the optimum. Therefore, the population size of the CMA-ES is defined as shown in Equation (10).
$$pop\_size_t = initpop \cdot \frac{FEs - k}{FEs}$$
where $pop\_size_t$ is the population size at iteration $t$, $initpop$ is the initial size of the population, $FEs$ is the total fitness-evaluation budget available to the CMA-ES, and $k$ is the number of evaluations used by the CMA-ES so far.
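Read literally, Equation (10) shrinks the CMA-ES population linearly as the evaluation budget is consumed. A one-line helper capturing this schedule is given below; the lower bound of four individuals is an added safeguard, not part of the equation.

```python
def pr_population_size(init_pop, used_evals, total_evals, min_size=4):
    """Population size of PR-CMA-ES: pop_size_t = init_pop * (FEs - k) / FEs,
    clamped below by min_size so the population never vanishes."""
    return max(int(init_pop * (total_evals - used_evals) / total_evals), min_size)
```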

3.4. Pseudocode of KbP-LaF-CMAES

The KbP-LaF-CMAES algorithm comprises two primary procedures: the discovery of local basins and the subsequent optimization within these areas. Algorithm 1 outlines the procedural steps involved.
In the KbP-LaF-CMAES algorithm, a bi-population cooperative strategy is employed for exploratory searches across the problem space, complemented by two CMA-ES variants for exploitation. The pseudocode illustrates the synergy between the novel global search strategy and the refined CMA-ES, providing a seamless transition from exploration to exploitation. The integration is more than a loose combination of strategies; the components are designed to work together to improve the overall efficiency of the algorithm.

3.5. Complexity Analysis

The complexity of an algorithm is generally measured by the number of floating-point operations and the required iterations or recursions. In the context of metaheuristic optimization algorithms, the number of FEs is a critical aspect of complexity. The time complexity of an EA is analyzed based on a set of fitness evaluations and iterations [29]. In this paper, the complexity of KbP-LaF-CMAES is associated with the following operations: LaF, KbP, and the two variants of CMA-ES. In the LaF strategy, the size of each of the two co-evolutionary populations is denoted as $N_1$, so its time complexity is $O(2N_1D)$. The KbP strategy needs to measure the size of the attraction basins in each dimension, so its complexity is $O(D)$. According to the time complexity of CMA-ES, the corresponding complexity of the improved CMA-ES is $O(N_2D)$, where $N_2$ is the population size of the CMA-ES algorithm. Therefore, the time complexity of KbP-LaF-CMAES is $O(2N_1D + N_2D + D)$.
Algorithm 1 KbP-LaF-CMAES
Input: the parameter configuration;
Output: the optimal solution;
Initialize two populations, Leaders and Followers;
while fEval < maxEvals of KbP-LaF do
  Update the Followers population;
  Merge the two populations;
  if stagnation is detected then
    Conduct the KbP strategy;
  end if
  Update X_best;
end while
while the stop criterion is not met do
  Apply k-means clustering to the Leaders;
  Apply LR-CMA-ES to the best_k solutions;
  Update X_best;
  Apply PR-CMA-ES to the Leaders;
  Update X_best;
end while

3.6. Convergence Analysis of KbP-LaF-CMAES

In the proposed algorithm, the solutions found during exploration are not directly fed into the two variants of CMA-ES. Therefore, the adaptability and convergence of CMA-ES are not affected by the KbP and LaF strategies, and the convergence of KbP-LaF-CMAES is equivalent to the convergence of CMA-ES.
The evolutionary process of an evolutionary algorithm can be modeled as an inhomogeneous Markov process [30]. As a population-based stochastic optimization algorithm, the evolutionary process of CMA-ES is considered a resampling process [31]. Two primary strategies are included in CMA-ES: covariance matrix self-adaptation and step-size adaptation [32]. The theoretical derivation of its empirically observed convergence rate is then outlined in this section [33].
In CMA-ES, the state space $(X_t, \sigma_t) \in \mathbb{R}^n \times \mathbb{R}_+$ is constructed, where $X_t \in \mathbb{R}^n$ represents the individual at iteration $t$ and $\sigma_t$ is the overall step size of the underlying sampling distribution at iteration $t$. CMA-ES is then a stochastically recursive sequence on this state space:
$$\left(X_{t+1}, \sigma_{t+1}\right) = \mathcal{F}\left(\left(X_t, \sigma_t\right), U_t\right)$$
where $\mathcal{F}$ is the transition function of the algorithm. In CMA-ES, the transition function $\mathcal{F}$ depends on the benchmark function $f$ only through the comparison of candidate solutions, and $U_t$ denotes the random vectors from which the candidate solutions are sampled.
The $p$ individuals of the population are created by a solution function $Sol$:
$$X_t^i = Sol\left(\left(X_t, \sigma_t\right), U_t^i\right), \quad i = 1, \ldots, p$$
where $Sol\left((x, \sigma), u^i\right) = x + \sigma u^i$.
Then, the $p$ candidate individuals are evaluated on $f$ and ordered by the objective value. The resulting permutation is denoted $S \in \mathfrak{S}_p$, where $\mathfrak{S}_p$ is the set of permutations of $p$ elements:
$$\mathfrak{S}_p \times \mathbb{U}^p \to \mathbb{U}^p, \quad (S, U_t) \mapsto S * U_t = \left(U_t^{S(1)}, \ldots, U_t^{S(p)}\right)$$
The update of $(X_t, \sigma_t)$ is executed according to the selection and mutation strategy. More precisely, a measurable function $G$, called the update function, maps $\mathbb{R}^n \times \mathbb{R}_+ \times \mathbb{U}^p$ onto $\mathbb{R}^n \times \mathbb{R}_+$:
$$\left(X_{t+1}, \sigma_{t+1}\right) = G\left(\left(X_t, \sigma_t\right), S * U_t\right) = G\left(\left(X_t, \sigma_t\right), Y_t\right)$$
where $Y_t$ denotes the ordered coordinates of $U_t$.
CMA-ES is therefore determined by the quadruplet $\left(Sol, G, \mathbb{U}^p, p\right)$.
(1) Invariance properties of CMA-ES
Suppose that $M$ is a strictly monotonic mapping and $f$ is the objective function. Let $\left(X_t', \sigma_t'\right)$ be the Markov chain obtained when optimizing $M \circ f$ with the same random sequence $\left(U_t\right)_{t \in \mathbb{N}}$ and the same initial state $\left(X_0', \sigma_0'\right) = \left(X_0, \sigma_0\right)$.
Proof 1.
Invariance to Strictly Monotonic Transformations of $f$.
Assume $\left(X_t', \sigma_t'\right) = \left(X_t, \sigma_t\right)$ and denote $X_t'^i = Sol\left(\left(X_t', \sigma_t'\right), U_t^i\right)$. Because $M$ is strictly monotonic, $Ord\left(M\left(f\left(X_t'^1\right)\right), \ldots, M\left(f\left(X_t'^p\right)\right)\right) = Ord\left(f\left(X_t^1\right), \ldots, f\left(X_t^p\right)\right) = S$, where $Ord$ is the ordering function that returns the permutation of the $p$ elements, and the following equation is induced:
$$\left(X_{t+1}', \sigma_{t+1}'\right) = G\left(\left(X_t', \sigma_t'\right), S * U_t\right) = G\left(\left(X_t, \sigma_t\right), S * U_t\right) = \left(X_{t+1}, \sigma_{t+1}\right)$$ □
Proof 2.
Invariance to Translation of $f$.
$\Phi_{x_0}$ is defined by $\Phi_{x_0}(x, \sigma) = \left(x + x_0, \sigma\right)$ for all $x_0, x, \sigma$.
Then, by the homomorphism property,
$$G\left(\Phi_{x_0}(x, \sigma), y\right) = \Phi_{x_0}\left(G\left((x, \sigma), y\right)\right)$$
$$G\left(\Phi_{x_0}(x, \sigma), S_{\Phi_{x_0}(x, \sigma)}^{f(x - x_0)} * u\right) = \Phi_{x_0}\left(G\left((x, \sigma), S_{(x, \sigma)}^{f(x)} * u\right)\right)$$
Hence, the CMA-ES algorithm is translation invariant. □
Proof 3.
Invariance to Scaling of $f$.
$\varphi_\alpha$ is defined by $\varphi_\alpha(x, \sigma) = \left(x/\alpha, \sigma/\alpha\right)$ for all $\alpha > 0$, $x$, $\sigma$.
$f\left(Sol\left((x, \sigma), u^i\right)\right) = f\left(\alpha \, Sol\left(\left(x/\alpha, \sigma/\alpha\right), u^i\right)\right)$ implies that the same permutation $S$ is obtained for $(x, \sigma)$ on $f(x)$ and for $(x/\alpha, \sigma/\alpha)$ on $f(\alpha x)$, denoted as $S_{(x, \sigma)}^{f(x)} = S_{(x/\alpha, \sigma/\alpha)}^{f(\alpha x)}$.
$$\mathcal{F}^{f(x)}\left((x, \sigma), u\right) = G\left((x, \sigma), S_{(x, \sigma)}^{f(x)} * u\right)$$
$$\mathcal{F}^{f(\alpha x)}\left(\left(x/\alpha, \sigma/\alpha\right), u\right) = G\left(\left(x/\alpha, \sigma/\alpha\right), S_{(x/\alpha, \sigma/\alpha)}^{f(\alpha x)} * u\right)$$
Then,
$$\mathcal{F}^{f(x)}\left((x, \sigma), u\right) = \varphi_{1/\alpha}\left(\mathcal{F}^{f(\alpha x)}\left(\varphi_\alpha(x, \sigma), u\right)\right)$$
Hence, the scale-invariance property of CMA-ES holds. □
(2) Linear Convergence of CMA-ES
In this part, $Z_t$ is defined as $X_t / \sigma_t$. The log-progress $\ln\left(\left\|X_{t+1}\right\| / \left\|X_t\right\|\right)$ is investigated in the following for the analysis of linear convergence.
$$\ln \frac{\left\|X_{t+1}\right\|}{\left\|X_t\right\|} = \ln \frac{\left\|Z_{t+1}\right\|\, \eta\left(Y\left(Z_t, U_t\right)\right)}{\left\|Z_t\right\|}$$
where $\eta$ is the multiplicative step-size update and $Y\left(Z_t, U_t\right)$ is the ordered vector $S_{\left(Z_t, 1\right)} * U_t$. According to the properties of the logarithm, $\frac{1}{t}\ln\frac{\left\|X_t\right\|}{\left\|X_0\right\|}$ is expressed as follows:
$$\frac{1}{t}\ln\frac{\left\|X_t\right\|}{\left\|X_0\right\|} = \frac{1}{t}\ln\prod_{k=0}^{t-1}\frac{\left\|X_{k+1}\right\|}{\left\|X_k\right\|} = \frac{1}{t}\sum_{k=0}^{t-1}\ln\frac{\left\|Z_{k+1}\right\|}{\left\|Z_k\right\|}\eta\left(Y\left(Z_k, U_k\right)\right) = \frac{1}{t}\sum_{k=0}^{t-1}\ln\left\|Z_{k+1}\right\| - \frac{1}{t}\sum_{k=0}^{t-1}\ln\left\|Z_k\right\| + \frac{1}{t}\sum_{k=0}^{t-1}\ln\eta\left(Y\left(Z_k, U_k\right)\right)$$
$R(z)$ is defined as the expectation of the logarithm of $\eta\left(Y(z, U)\right)$, and $P^t(z, A)$ denotes the transition probabilities of the Markov chain. The $\sigma$-finite measure $\pi$ is invariant if it satisfies
$$\pi(A) = \int \pi(dz)\, P(z, A)$$
$$E_\pi\left[\ln\frac{\left\|X_{t+1}\right\|}{\left\|X_t\right\|}\right] = \int E_{U \sim \mathbb{U}^p}\left[\ln\eta\left(Y(z, U)\right)\right]\pi(dz) = \int R(z)\,\pi(dz)$$
$CR$ is the convergence rate, defined as the negative of the right-hand side (RHS) of the previous equation:
$$CR = -\int E_{U \sim \mathbb{U}^p}\left[\ln\eta\left(Y(z, U)\right)\right]\pi(dz) = -\int R(z)\,\pi(dz)$$
Then,
$$\lim_{t\to\infty}\frac{1}{t}\ln\frac{\left\|X_t\right\|}{\left\|X_0\right\|} = \int\ln\left\|z\right\|\pi(dz) - \int\ln\left\|z\right\|\pi(dz) + \int E\left[\ln\eta\left(Y(z, U)\right)\right]\pi(dz) = -CR$$
Linear convergence can then be proven by applying a Law of Large Numbers (LLN). The drift operator $\Delta$ is defined as follows:
$$\Delta V(z) = \int P(z, dy)\,V(y) - V(z) = E_z\left[V\left(Z_1\right) - V\left(Z_0\right)\right]$$
Then,
$$\left\|P^t(z, \cdot) - \pi\right\|_h \xrightarrow{t \to \infty} 0$$
KbP-LaF-CMAES can reach the global optimum in probability.  □

4. Experiment and Analysis

The optimization capability of the KbP-LaF-CMAES algorithm is evaluated using the CEC2013 benchmark problems. It is compared with several other metaheuristic algorithms, including LaF-CMA-ES [34], EBOwithCMAR [35], IPOP-CMAES [24], CMA-ES, and SPSRDEMMS [36]. These test problems are categorized into three classes based on their landscape properties: unimodal, basic multimodal, and composition functions [37].
The simulation experiments were conducted on the MATLAB 2019 platform using a PC (Lenovo Group Ltd., Beijing, China) equipped with an Intel(R) Xeon(R) W-2123 CPU running at 3.6 GHz and 16 GB of memory under Windows 10 x64. Performance was measured with a predetermined budget of fitness evaluations, namely 10,000 × D. In Section 4.1, the typical parameters of KbP-LaF-CMAES are analyzed through a design of experiments (DOE). Section 4.2 discusses the contributions of the KbP-LaF-CMAES components based on experimental results from the 28 benchmark test functions. Section 4.3 compares KbP-LaF-CMAES with the five other algorithms.

4.1. Parameters Analysis

The configuration of parameters is crucial for achieving good results. However, determining the optimal parameter settings can be challenging. By analyzing statistical data, the impact of parameters on performance can be assessed throughout the optimization process. In this part, the DOE method is applied to tune the parameters of the proposed KbP-LaF-CMAES on the CEC2013 test suite. Three typical parameters mainly influence the capability of KbP-LaF-CMAES: MainEvals (the evaluation budget share of the LaF strategy), PertFac (the perturbation factor), and PopFac (the parent population factor).
A 1/2 fractional factorial DOE was conducted for these parameters. The parameter choices are shown in Table 1: MainEvals ∈ [0.3, 0.5, 0.7, 0.9], PertFac ∈ [0.5, 1, 1.5, 2], and PopFac ∈ [0.1, 0.25, 0.5, 0.75]. A variety of dimensions (D = 30, D = 50, and D = 100) were selected as the dimensions of the test space to enhance the statistical significance. The orthogonal array of the parameter configurations is shown in Table 2. For each parameter group, the algorithm was run 10 times independently on all of the 28 functions of CEC2013.
The ranks of these parameters are presented in Table 3. The MainEvals parameter is the most critical for performance. For each parameter combination in the orthogonal array, the average value (AVE) across all dimensions and the total average value (TAVE) are displayed in Table 4.
According to the experimental results, the performance variation with these parameters is shown in Figure 6. The analysis indicates that the parameter mainEvals is the most influential among the three parameters, with an optimal value of 0.3. The mainEvals parameter determines the proportion of LaF in the algorithm, which is a powerful exploration strategy in KbP-LaF-CMAES. In contrast, PertFac and PopFac are relatively minor parameters but still affect population diversity, which, in turn, influences the quality of the exploitation process. Based on Table 3 and Figure 6, the parameter values for KbP-LaF-CMAES are as follows: mainEvals = 0.3, PertFac = 1.5, and PopFac = 0.5.

4.2. Operator Analysis

In this Section, the effectiveness of the subcomponents of the improved method is analyzed through comparisons with CMA-ES, LaF-CMAES, and LaF. The average results of KbP-LaF-CMAES, CMA-ES, LaF-CMAES, and LaF across three different dimensions are evaluated using the CEC2013 test suite. Figure 7, Figure 8 and Figure 9 illustrate the performance of the four algorithms through boxplots.
The selected test functions are Functions 11, 14, 17, and 22 from the CEC2013 benchmarks. The first three functions are multimodal problems, while Function 22 is classified as a composition function. These functions exemplify the characteristics of multimodality based on their fitness landscapes. The results indicate that all strategies contribute to the effectiveness of KbP-LaF-CMAES.
The results indicate that the introduced method outperforms CMA-ES, LaF-CMAES, and LaF in both high-dimensional and low-dimensional functions. This finding suggests that the combination of multiple strategies yields superior effectiveness and stability compared to single-strategy algorithms.

4.3. Experimental Analysis and Result

Simulations were conducted across three different dimensions. Five statistical indicators—Best, Worst, Median, Mean, and Standard Deviation (Std)—were calculated, as shown at the end of this paper. Each algorithm was run independently 51 times for each instance, and the Mean and Std indicators were derived from the performance of each algorithm on the test functions. The results of the introduced method on the CEC benchmark across the three dimensions are illustrated in Table 5, with the best performance for each function highlighted in bold. The test functions are notably complex due to their high dimensionality. Consequently, comparison experiments were performed on the test problems across these three types of dimensions to assess the robustness of the introduced algorithms.
The statistical differences between the introduced method and the compared algorithms are analyzed using Friedman’s test. The results of Friedman’s test for the improved method and the comparison algorithms in each dimension are presented in Table 5. These results consistently show KbP-LaF-CMAES achieving the best ranking on the test benchmark. Subsequently, the significance levels of the differences between the algorithms are computed using Bonferroni–Dunn’s test.
$$CD = q_\alpha \sqrt{\frac{k(k+1)}{6N}}$$
In Equation (30), $k$ and $N$ represent the number of compared algorithms and test functions, respectively, while $\alpha$ denotes the confidence level, commonly set to 0.05 or 0.1. The critical difference for Bonferroni–Dunn’s test, as given in Equation (30), was calculated with $\alpha = 0.05$ and $\alpha = 0.1$.
In this study, with k = 5 and N = 28, q α = 2.394 for α = 0.05, and q α = 2.128 for α = 0.1. The statistical results are presented in Table 5, demonstrating that the introduced method outperforms the comparison methods. Regardless of the α value and the dimension, a significant difference is detected among the comparison methods.
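For reproducibility, the ranking and critical-difference computation described above can be scripted as follows. The `results` layout (one row per test function, one column per algorithm, smaller error meaning better) is an assumed convention, and `q_alpha = 2.394` is the value quoted in the text for $\alpha = 0.05$.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_and_cd(results, q_alpha=2.394):
    """Friedman mean ranks, Friedman test p-value, and Bonferroni-Dunn critical
    difference CD = q_alpha * sqrt(k * (k + 1) / (6 * N)) for an (N x k) matrix
    of mean errors (assumed layout: rows = functions, columns = algorithms)."""
    n_funcs, k = results.shape
    ranks = np.apply_along_axis(rankdata, 1, results)   # rank 1 = best (smallest error)
    mean_ranks = ranks.mean(axis=0)
    _, p_value = friedmanchisquare(*results.T)          # omnibus Friedman test
    cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * n_funcs))
    return mean_ranks, p_value, cd
```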
The experimental results lead to several conclusions. The performance of the proposed algorithm surpasses that of most comparison algorithms across a wide range of functions in all dimensions. For low-dimensional problems (30D and 50D), the effectiveness of the introduced method is comparable to that of EBOwithCMAR. Subsequently, post hoc tests were conducted to identify the differences between these algorithms [38]. The introduced method served as the control algorithm for these tests. The objective of the post hoc tests is to establish a comparison set of p-values. These p-values are derived from the results, with the Bonferroni–Dunn method employed to compute adjusted p-values. The test results are presented in Table 6.
Table 6 displays both the p-values and the adjusted p-values, highlighting differences among the five algorithms across all three dimensions. Specifically, in the 50D and 100D dimensions, statistical disparities have emerged between the KbP-LaF-CMAES and IPOP-CMAES algorithms. To confirm the directionality of these statistical differences, reference to the raw data is necessary.
Further statistical analysis of the simulation results involves making an all-pairs comparison without a control algorithm. The outcomes of the $n \times n$ comparison using post hoc Friedman tests for the three kinds of dimensions are detailed in Table 7, Table 8 and Table 9, respectively, where $n$ is the number of compared algorithms. From these tables, the $n(n-1)/2$ pairwise comparison results can be derived. Specifically, for the 50D and 100D test functions, the introduced method outperforms the other compared algorithms.
The experimental results demonstrate a significant advantage of the introduced method over the comparison algorithms. The statistical analysis indicates that the introduced method exhibits distinct performance improvements over SPSRDEMMS and the three other variants of CMA-ES, validating its efficiency. The superiority of the introduced method becomes more evident as dimensionality increases. The comparison between KbP-LaF-CMAES and EBOwithCMAR reveals that the performance of KbP-LaF-CMAES is comparable to that of the state-of-the-art algorithm.
Detailed analysis of the experimental results shows that KbP-LaF-CMAES performs well on several typical multimodal functions, such as F7, F8, F14, F15, F18, F19, and F20. Additionally, KbP-LaF-CMAES also excels in the two composite problems F22 and F23. For the remaining multimodal and composite problems, KbP-LaF-CMAES exhibits comparable or slightly inferior performance compared to the other algorithms. Overall, the KbP-LaF-CMAES algorithm has demonstrated preferable performance on most test functions, particularly in high-dimensional problems. The test functions exhibit strong multimodal properties, confirming that the introduced method can effectively solve multimodal problems.
Some of the comparison test results for the 30, 50, and 100 dimensions are shown in Figure 10, Figure 11 and Figure 12, respectively. The statistical analysis validates that the introduced method performs best on the test functions across these dimensions. Although KbP-LaF-CMAES does not always find the global optimal solution, the solutions it produces are comparable to those found by other algorithms.
For multimodal functions, the introduced method achieves the best performance among the compared algorithms. On the remaining functions, KbP-LaF-CMAES maintains a comparable performance. The detailed experimental results for KbP-LaF-CMAES, CMA-ES, LaF-CMAES, IPOP-CMAES, and EBOwithCMAR are presented in Table 10, Table 11 and Table 12 for 30, 50, and 100 dimensions, respectively, where the bolded data in the table indicates the optimal results.
In summary, the introduced method achieves better Friedman’s test results than the five compared algorithms on all three dimensions of the CEC benchmark, which indicates that it is the most robust of the algorithms considered. It outperforms several comparison algorithms, including SPSRDEMMS and three variants of CMA-ES, and its advantage grows as the dimensionality increases. It performs well on many multimodal and composite functions, and its overall performance is preferable in high-dimensional problems. Although it does not always find the global optimal solution, its results are comparable to those of other leading algorithms, and it particularly excels on multimodal functions.

5. Conclusions

In this paper, KbP, LaF strategies, and two variants of CMA-ES are integrated into the introduced method to solve MMOPs. These strategies are employed to balance exploration and exploitation. Simulation results on the CEC problems demonstrate that the performance of the introduced method surpasses that of five other compared algorithms. Statistical analysis indicates that the introduced method significantly differs from the other four compared algorithms. Its performance is comparable to that of EBOwithCMAR. Comparative analysis confirms that the introduced method is an efficient and effective algorithm for MMOPs.
In multimodal problems, the complexity of the landscape increases the difficulty of the problems. The strategy of multi-objective evolutionary algorithms is a promising research direction for solving multimodal problems. Additionally, landscape analysis methods can be integrated with CMA-ES to measure the size of attraction basins. For the application of CMA-ES, its framework can be extended to combinatorial optimization problems, such as distributed production scheduling.

Author Contributions

Conceptualization, H.L.; Methodology, H.L.; Software, H.L.; Validation, L.Q. and Z.Z.; Investigation, L.Q.; Writing—original draft, H.L.; Writing—review & editing, L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Gansu Natural Science Foundation grant number [21JR7RA204, 1506RJZA007] and Gansu Province Higher Education Innovation Foundation [2022B-107, 2019A-056].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Luo, W.; Qiao, Y.; Lin, X.; Xu, P.; Preuss, M. Hybridizing Niching, Particle Swarm Optimization, and Evolution Strategy for Multimodal Optimization. IEEE Trans. Cybern. 2022, 52, 6707–6720.
2. Tanabe, R.; Ishibuchi, H. A Review of Evolutionary Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2020, 24, 193–200.
3. Li, X.; Ren, J. MICQ-IPSO: An effective two-stage hybrid feature selection algorithm for high-dimensional data. Neurocomputing 2022, 501, 328–342.
4. Zhao, F.; He, X.; Wang, L. A Two-Stage Cooperative Evolutionary Algorithm With Problem-Specific Knowledge for Energy-Efficient Scheduling of No-Wait Flow-Shop Problem. IEEE Trans. Cybern. 2021, 51, 5291–5303.
5. Zhao, F.; Ma, R.; Wang, L. A Self-Learning Discrete Jaya Algorithm for Multiobjective Energy-Efficient Distributed No-Idle Flow-Shop Scheduling Problem in Heterogeneous Factory System. IEEE Trans. Cybern. 2021, 52, 12675–12686.
6. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Wang, H.; Kwong, S.; Zhang, J. Automatic Niching Differential Evolution With Contour Prediction Approach for Multimodal Optimization Problems. IEEE Trans. Evol. Comput. 2020, 24, 114–128.
7. Hu, Z.; Zhou, T.; Su, Q.; Liu, M. A niching backtracking search algorithm with adaptive local search for multimodal multiobjective optimization. Swarm Evol. Comput. 2022, 69, 101031.
8. Yazdinejad, A.; Dehghantanha, A.; Parizi, R.M.; Epiphaniou, G. An optimized fuzzy deep learning model for data classification based on NSGA-II. Neurocomputing 2023, 522, 116–128.
9. Peng, Y.; Ishibuchi, H. A Diversity-Enhanced Subset Selection Framework for Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2022, 26, 886–900.
10. Liu, M.; Liu, J.; Hu, Z.; Ge, Y.; Nie, X. Bid optimization using maximum entropy reinforcement learning. Neurocomputing 2022, 501, 529–543.
11. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214.
12. Zhao, F.; Di, S.; Cao, J.; Tang, J.; Jonrinaldi. A novel cooperative multi-stage hyper-heuristic for combination optimization problems. Complex Syst. Model. Simul. 2021, 1, 91–108.
13. Chen, Z.; Zhan, Z.; Wang, H.; Zhang, J. Distributed Individuals for Multiple Peaks: A Novel Differential Evolution for Multimodal Optimization Problems. IEEE Trans. Evol. Comput. 2020, 24, 708–719.
14. Wang, Z.-J.; Zhou, Y.-R.; Zhang, J. Adaptive Estimation Distribution Distributed Differential Evolution for Multimodal Optimization Problems. IEEE Trans. Cybern. 2022, 52, 6059–6070.
15. Agrawal, S.; Tiwari, A. Solving multimodal optimization problems using adaptive differential evolution with archive. Inf. Sci. 2022, 612, 1024–1044.
16. Sheng, M.; Chen, S.; Liu, W.; Mao, J.; Liu, X. A differential evolution with adaptive neighborhood mutation and local search for multi-modal optimization. Neurocomputing 2022, 489, 309–322.
17. Lin, X.; Luo, W.; Xu, P. Differential Evolution for Multimodal Optimization with Species by Nearest-Better Clustering. IEEE Trans. Cybern. 2021, 51, 970–983.
18. Ahrari, A.; Deb, K.; Preuss, M. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations. Evol. Comput. 2017, 25, 439–471.
19. Li, W.; Xu, Q. Covariance Matrix adaptation based on Opposition learning for multimodal optimization. In Proceedings of the Chinese Control and Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 23–26.
20. Liu, H.; Zhang, J. Multimodal LaF-CMA-ES algorithm based on homotopic convex transformation. In Proceedings of the 5th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM 2023), Brussels, Belgium, 20–21 October 2023.
21. Zhao, F.; Qin, S.; Yang, G.; Ma, W.; Zhang, C.; Song, H. A factorial based particle swarm optimization with a population adaptation mechanism for the no-wait flow shop scheduling problem with the makespan objective. Expert Syst. Appl. 2019, 126, 41–53.
22. Zhao, F.; Xue, F.; Zhang, Y.; Ma, W.; Zhang, C.; Song, H. A hybrid algorithm based on self-adaptive gravitational search algorithm and differential evolution. Expert Syst. Appl. 2018, 113, 515–530.
23. Gonzalez-Fernandez, Y.; Chen, S. Leaders and followers—A new metaheuristic to avoid the bias of accumulated information. In Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 776–783.
24. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1769–1776.
25. Chen, S.; Abdulselam, I.; Yadollahpour, N.; Gonzalez Fernandez, Y. Particle Swarm optimization with pbest Perturbations. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
26. Hansen, N.; Ostermeier, A. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 312–317.
27. Quang Huy, N.; Yew-Soon, O.; Meng Hiot, L. A Probabilistic Memetic Framework. IEEE Trans. Evol. Comput. 2009, 13, 604–623.
28. Bolufe Rohler, A.; Fiol Gonzalez, S.; Chen, S. A minimum population search hybrid for large scale global optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 1958–1965.
29. Ren, Z.; Zhang, A.; Wen, C.; Feng, Z. A Scatter Learning Particle Swarm Optimization Algorithm for Multimodal Problems. IEEE Trans. Cybern. 2014, 44, 1127–1140.
30. Beyer, H.-G. The Theory of Evolution Strategies; Springer: Berlin/Heidelberg, Germany, 2001; Volume 39.
31. Zhao, F.; Zhao, L.; Wang, L.; Song, H. An ensemble discrete differential evolution for the distributed blocking flowshop scheduling with minimizing makespan criterion. Expert Syst. Appl. 2020, 160, 113678.
32. Auger, A.; Hansen, N. Linear convergence of comparison-based step-size adaptive randomized search via stability of Markov chains. SIAM J. Optim. 2016, 26, 1589–1624.
33. Hellwig, M.; Beyer, H.-G. On the steady state analysis of covariance matrix self-adaptation evolution strategies on the noisy ellipsoid model. Theor. Comput. Sci. 2020, 832, 98–122.
34. Bolufe Rohler, A.; Tamayo Vera, D.; Chen, S. An LaF-CMAES hybrid for optimization in multi-modal search spaces. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 757–764.
35. Kumar, A.; Misra, R.K.; Singh, D. Improving the local search capability of Effective Butterfly Optimizer using Covariance Matrix Adapted Retreat Phase. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 1835–1842.
36. Zamuda, A.; Brest, J.; Mezura-Montes, E. Structured Population Size Reduction Differential Evolution with Multiple Mutation Strategies on CEC 2013 real parameter optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013.
37. Liang, J.; Qu, B.; Suganthan, P.; Hernández-Díaz, A. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China, 2013.
38. Carrasco, J.; García, S.; Rueda, M.M.; Das, S.; Herrera, F. Recent trends in the use of statistical tests for comparing swarm and evolutionary computing algorithms: Practical guidelines and a critical review. Swarm Evol. Comput. 2020, 54, 100665.
Figure 1. The framework of the LaF strategy.
Figure 2. The procedure of the KbP strategy.
Figure 3. The framework of the bi-population cooperative evolution strategy.
Figure 4. The illustration of the KbP strategy.
Figure 5. The framework of the introduced LR-CMA-ES.
Figure 6. The tendency of the parameters.
Figure 7. Boxplots of benchmark functions for each strategy (30D).
Figure 8. Boxplots of benchmark functions for each strategy (50D).
Figure 9. Boxplots of benchmark functions for each strategy (100D).
Figure 10. Boxplots of some typical benchmark functions (30D).
Figure 11. Boxplots of some typical benchmark functions (50D).
Figure 12. Boxplots of some typical benchmark functions (100D).
Table 1. The choices of the parameter.
Factor      Level 1   Level 2   Level 3   Level 4
mainEvals   0.3       0.5       0.7       0.9
PertFac     0.5       1         1.5       2
PopFac      0.1       0.25      0.5       0.75
Table 2. The combinations of the parameters.
Serial Number   mainEvals   PertFac   PopFac
1               0.3         0.5       0.1
2               0.3         1         0.25
3               0.3         1.5       0.50
4               0.3         2.0       0.75
5               0.5         0.5       0.25
6               0.5         1.0       0.1
7               0.5         1.5       0.75
8               0.5         2.0       0.50
9               0.7         0.5       0.5
10              0.7         1.0       0.75
11              0.7         1.5       0.1
12              0.7         2.0       0.25
13              0.9         0.5       0.75
14              0.9         1.0       0.5
15              0.9         1.5       0.25
16              0.9         2.0       0.1
Table 3. The ranks of parameters.
Level   mainEvals    PertFac      PopFac
1       3.2 × 10^3   6.5 × 10^3   8.1 × 10^3
2       6.1 × 10^3   7.8 × 10^3   6.1 × 10^3
3       1.0 × 10^4   3.3 × 10^3   5.5 × 10^3
4       8.1 × 10^3   8.1 × 10^3   7.7 × 10^3
Std.    2.9 × 10^3   2.2 × 10^3   1.2 × 10^3
Rank    1            2            3
Table 4. Performance of the parameter combination.
No.   mainEvals (level)   PertFac (level)   PopFac (level)   AVE (30D)     AVE (50D)     AVE (100D)    TAVE (Total)
1     1                   1                 1                6.78 × 10^2   1.35 × 10^3   7.87 × 10^3   3.33 × 10^3
2     1                   2                 2                4.12 × 10^2   8.56 × 10^2   6.60 × 10^3   2.62 × 10^3
3     1                   3                 3                2.95 × 10^2   7.48 × 10^2   1.98 × 10^3   1.01 × 10^3
4     1                   4                 4                4.36 × 10^2   8.89 × 10^2   1.62 × 10^4   5.84 × 10^3
5     2                   1                 2                6.58 × 10^2   1.37 × 10^3   2.16 × 10^4   7.87 × 10^3
6     2                   2                 3                3.42 × 10^2   8.58 × 10^2   1.74 × 10^4   6.20 × 10^3
7     2                   3                 4                4.06 × 10^2   8.44 × 10^2   6.47 × 10^3   2.57 × 10^3
8     2                   4                 1                5.90 × 10^2   1.19 × 10^3   2.15 × 10^4   7.76 × 10^3
9     3                   1                 3                4.80 × 10^2   1.24 × 10^3   1.93 × 10^4   7.01 × 10^3
10    3                   2                 4                3.80 × 10^2   3.93 × 10^3   3.96 × 10^4   1.46 × 10^4
11    3                   3                 1                4.87 × 10^2   1.13 × 10^3   1.95 × 10^4   7.07 × 10^3
12    3                   4                 2                6.73 × 10^2   2.55 × 10^3   3.06 × 10^4   1.13 × 10^4
13    4                   1                 4                6.34 × 10^2   1.56 × 10^3   2.12 × 10^4   7.79 × 10^3
14    4                   2                 3                6.78 × 10^2   1.26 × 10^3   2.14 × 10^4   7.78 × 10^3
15    4                   3                 2                3.66 × 10^2   7.43 × 10^2   6.54 × 10^3   2.55 × 10^3
16    4                   4                 1                7.33 × 10^2   4.17 × 10^3   3.77 × 10^4   1.42 × 10^4
Table 5. Friedman’s test rankings of the different algorithms.
Algorithm        Mean Rank (30D)   Mean Rank (50D)   Mean Rank (100D)
KbP-LaF-CMAES    2.75              2.48              2.41
CMA-ES           3.77              4.11              3.86
LaF-CMA-ES       3.75              3.68              3.54
IPOP-CMA-ES      3.59              3.66              3.45
EBOwithCMAR      2.77              2.77              2.70
SPSRDEMMS        4.38              4.30              5.05
Table 6. Results of the post hoc test.
                30D                          50D                          100D
Algorithm       p-Value   Adjusted p-Value   p-Value   Adjusted p-Value   p-Value   Adjusted p-Value
SPSRDEMMS       0.001     0.017              0.000     0.004              0.000     0.000
CMA-ES          0.042     0.627              0.001     0.017              0.004     0.057
LaF-CMAES       0.048     0.683              0.017     0.251              0.024     0.367
IPOP-CMAES      0.093     1.000              0.018     0.276              0.038     0.575
EBOwithCMAR     0.972     1.000              0.568     1.000              0.568     1.000
Table 7. Results of 6 vs. 6 comparisons on the benchmark of 30D.
                 KbP-LaF-CMAES   EBOwithCMAR   IPOP-CMAES   LaF-CMAES   CMA-ES   SPSRDEMMS
KbP-LaF-CMAES    -               0.972         0.093        0.046       0.042    0.001
EBOwithCMAR      0.972           -             0.100        0.049       0.046    0.001
IPOP-CMAES       0.093           0.100         -            0.748       0.721    0.116
LaF-CMAES        0.046           0.049         0.748        -           0.972    0.211
CMA-ES           0.042           0.046         0.721        0.972       -        0.225
SPSRDEMMS        0.001           0.001         0.116        0.211       0.225    -
Table 8. Results of 6 vs. 6 comparisons on the benchmark of 50D.
                 KbP-LaF-CMAES   EBOwithCMAR   IPOP-CMAES   LaF-CMAES   CMA-ES   SPSRDEMMS
KbP-LaF-CMAES    -               0.391         0.018        0.017       0.001    0.000
EBOwithCMAR      0.391           -             0.694        0.069       0.007    0.002
IPOP-CMAES       0.018           0.694         -            0.972       0.372    0.199
LaF-CMAES        0.017           0.069         0.972        -           0.391    0.211
CMA-ES           0.001           0.007         0.372        0.391       -        0.694
SPSRDEMMS        0.000           0.002         0.199        0.211       0.694    -
Table 9. Results of 6 vs. 6 comparisons on the benchmark of 100D.
                 KbP-LaF-CMAES   EBOwithCMAR   IPOP-CMAES   LaF-CMAES   CMA-ES   SPSRDEMMS
KbP-LaF-CMAES    -               0.568         0.038        0.024       0.004    0.000
EBOwithCMAR      0.568           -             0.134        0.093       0.020    0.000
IPOP-CMAES       0.038           0.134         -            0.858       0.411    0.001
LaF-CMAES        0.024           0.093         0.858        -           0.520    0.002
CMA-ES           0.004           0.020         0.411        0.520       -        0.017
SPSRDEMMS        0.000           0.000         0.001        0.002       0.017    -
Table 10. The results of different algorithms for 30-dimensional benchmark functions (each cell: Mean / Std. Dev.).
Function | CMA-ES | LaF-CMA-ES | IPOP-CMAES | EBOwithCMAR | SPSRDEMMS | KbP-LaF-CMAES
1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
2 | 0 / 0 | 0 / 0 | 0 / 0 | 2.28 × 10^-4 / 3.47 × 10^-4 | 1.02 × 10^5 / 5.30 × 10^4 | 0 / 0
3 | 4.39 × 10^1 / 2.92 × 10^2 | 2.80 × 10^1 / 1.50 × 10^2 | 1.22 × 10^2 / 7.42 × 10^2 | 3.78 × 10^7 / 2.40 × 10^-6 | 1.10 × 10^7 / 1.39 × 10^7 | 1.29 × 10^-1 / 6.21 × 10^-1
4 | 0 / 0 | 0 / 0 | 0 / 0 | 5.46 × 10^-5 / 1.17 × 10^-4 | 2.41 / 3.28 | 0 / 0
5 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
6 | 0 / 0 | 1.11 / 5.19 | 3.08 × 10^-1 / 2.20 | 0 / 0 | 1.74 × 10^1 / 1.14 × 10^1 | 1.03 / 5.16
7 | 2.45 × 10^1 / 2.18 × 10^1 | 2.82 / 2.21 | 7.16 × 10^1 / 4.91 × 10^2 | 6.66 × 10^-2 / 1.31 × 10^-1 | 1.10 × 10^1 / 6.18 | 2.47 / 2.16
8 | 2.12 × 10^1 / 5.58 × 10^-2 | 2.10 × 10^1 / 4.39 × 10^2 | 2.12 × 10^1 / 6.65 × 10^-2 | 2.09 × 10^1 / 1.03 | 2.10 × 10^1 / 5.01 × 10^-2 | 2.10 × 10^1 / 5.97 × 10^-2
9 | 2.14 × 10^1 / 3.10 | 1.10 × 10^1 / 2.52 | 6.77 / 2.08 | 2.39 × 10^1 / 1.68 | 2.49 × 10^1 / 3.45 | 1.18 × 10^1 / 2.91
10 | 8.70 × 10^-4 / 2.41 × 10^-3 | 5.12 × 10^-3 / 3.72 × 10^-3 | 0 / 0 | 1.45 × 10^-4 / 1.04 × 10^-3 | 5.40 × 10^-2 / 4.02 × 10^-2 | 7.25 × 10^-4 / 2.22 × 10^-3
11 | 2.92 × 10^1 / 3.89 | 1.98 × 10^1 / 4.98 | 6.05 / 1.72 | 1.00 × 10^1 / 0 | 0 / 0 | 8.15 / 3.02
12 | 2.87 × 10^1 / 4.04 | 2.66 × 10^1 / 5.09 | 5.64 / 2.01 | 3.37 × 10^1 / 2.71 | 4.27 × 10^1 / 1.36 × 10^1 | 2.65 × 10^1 / 6.01
13 | 7.14 × 10^1 / 1.29 × 10^1 | 5.90 × 10^1 / 1.93 × 10^1 | 7.04 / 4.68 | 6.80 × 10^1 / 1.17 × 10^1 | 7.98 × 10^1 / 2.09 × 10^1 | 5.22 × 10^1 / 1.53 × 10^1
14 | 3.71 × 10^3 / 5.11 × 10^2 | 1.49 × 10^3 / 6.81 × 10^2 | 2.70 × 10^3 / 5.13 × 10^2 | 6.00 × 10^2 / 4.56 × 10^2 | 3.26 / 6.13 | 6.16 × 10^2 / 1.92 × 10^2
15 | 3.71 × 10^3 / 4.73 | 2.99 × 10^3 / 5.08 | 2.65 × 10^3 / 4.61 | 2.58 × 10^3 / 3.62 × 10^2 | 4.42 × 10^3 / 6.98 × 10^2 | 2.46 × 10^3 / 4.38 × 10^2
16 | 3.72 × 10^-2 / 1.53 × 10^-2 | 5.40 × 10^-2 / 2.73 × 10^-2 | 1.50 / 2.56 | 1.04 × 10^-1 / 7.35 × 10^-2 | 2.28 / 3.74 × 10^-1 | 5.60 × 10^-2 / 3.40 × 10^-2
17 | 6.04 × 10^1 / 5.20 | 6.16 × 10^1 / 5.97 | 3.95 × 10^1 / 2.82 × 10^1 | 3.04 × 10^1 / 2.44 × 10^1 | 3.04 × 10^1 / 7.58 × 10^-3 | 4.53 × 10^1 / 4.47
18 | 6.72 × 10^1 / 2.83 × 10^1 | 6.92 × 10^1 / 9.31 | 1.14 × 10^2 / 9.57 × 10^1 | 7.40 × 10^1 / 1.26 × 10^1 | 8.93 × 10^1 / 2.07 × 10^1 | 6.22 × 10^1 / 7.10
19 | 2.26 / 3.69 × 10^-1 | 2.52 / 4.67 × 10^-1 | 2.85 / 4.26 × 10^-1 | 1.12 / 1.57 × 10^-1 | 1.16 / 1.79 × 10^-1 | 2.09 / 3.32 × 10^-1
20 | 1.49 × 10^1 / 1.65 × 10^-1 | 1.14 × 10^1 / 6.50 × 10^-1 | 1.46 × 10^1 / 9.82 × 10^-1 | 1.03 × 10^1 / 1.32 | 1.12 × 10^1 / 5.25 × 10^-1 | 1.00 × 10^1 / 6.90 × 10^-1
21 | 1.88 × 10^2 / 3.25 × 10^1 | 3.04 × 10^2 / 8.56 × 10^1 | 2.82 × 10^2 / 3.83 × 10^1 | 2.99 × 10^2 / 5.13 × 10^1 | 2.85 × 10^2 / 6.94 × 10^1 | 2.82 × 10^2 / 6.57 × 10^1
22 | 4.79 × 10^3 / 8.25 × 10^2 | 1.05 × 10^3 / 5.54 × 10^2 | 4.62 × 10^3 / 3.06 × 10^3 | 6.08 × 10^2 / 2.29 × 10^2 | 7.66 × 10^1 / 4.88 × 10^1 | 5.18 × 10^2 / 1.96 × 10^2
23 | 5.16 × 10^3 / 6.31 × 10^2 | 3.00 × 10^3 / 6.55 × 10^2 | 5.24 × 10^3 / 1.33 × 10^3 | 2.19 × 10^3 / 4.10 × 10^2 | 4.77 × 10^3 / 7.75 × 10^2 | 2.59 × 10^3 / 4.86 × 10^2
24 | 1.92 × 10^2 / 4.01 × 10^1 | 2.10 × 10^2 / 7.28 | 2.00 × 10^2 / 0 | 1.97 × 10^2 / 1.08 × 10^1 | 2.53 × 10^2 / 8.98 | 2.08 × 10^2 / 7.63
25 | 2.67 × 10^2 / 6.95 | 2.54 × 10^2 / 6.97 | 2.45 × 10^2 / 8.63 | 2.40 × 10^2 / 3.92 | 2.64 × 10^2 / 8.34 | 2.50 × 10^2 / 1.16 × 10^1
26 | 1.76 × 10^2 / 4.03 × 10^1 | 2.00 × 10^2 / 0 | 3.00 × 10^2 / 1.72 | 2.00 × 10^2 / 1.20 × 10^-5 | 2.00 × 10^2 / 4.94 × 10^-3 | 2.00 × 10^2 / 0
27 | 6.19 × 10^2 / 1.20 × 10^2 | 4.03 × 10^2 / 6.37 × 10^1 | 3.01 × 10^2 / 3.47 | 3.00 × 10^2 / 3.51 × 10^1 | 8.88 × 10^2 / 9.20 × 10^1 | 4.07 × 10^2 / 8.14 × 10^1
28 | 2.22 × 10^2 / 9.86 × 10^1 | 3.00 × 10^2 / 0 | 3.00 × 10^2 / 0 | 2.73 × 10^2 / 6.95 × 10^1 | 3.00 × 10^2 / 3.94 × 10^-13 | 3.00 × 10^2 / 0
Table 11. The results of different algorithms for 50-dimensional benchmark functions.
Each cell reports Mean / Std. Dev.
Function | CMA-ES | LaF-CMA-ES | IPOP-CMAES | EBOwithCMAR | SPSRDEMMS | KbP-LaF-CMAES
1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
2 | 0 / 0 | 0 / 0 | 3.38 × 10^2 / 2.67 × 10^2 | 2.20 × 10^−3 / 2.57 × 10^−3 | 5.65 × 10^5 / 2.56 × 10^5 | 0 / 0
3 | 1.45 × 10^3 / 5.65 × 10^3 | 9.80 × 10^3 / 3.94 × 10^4 | 3.11 × 10^5 / 2.21 × 10^6 | 1.16 × 10^3 / 2.45 × 10^3 | 4.44 × 10^7 / 4.56 × 10^7 | 2.68 / 1.54 × 10^1
4 | 0 / 0 | 0 / 0 | 0 / 0 | 1.97 × 10^−4 / 3.69 × 10^−4 | 5.17 / 4.72 | 0 / 0
5 | 0 / 0 | 0 / 0 | 1.21 / 8.62 | 0 / 0 | 0 / 0 | 0 / 0
6 | 3.54 × 10^1 / 1.66 × 10^1 | 4.34 × 10^1 / 0 | 4.34 × 10^1 / 0 | 4.34 × 10^1 / 4.81 × 10^−14 | 4.37 × 10^1 / 1.11 | 4.34 × 10^1 / 0
7 | 3.91 × 10^1 / 1.81 × 10^1 | 1.34 × 10^1 / 4.50 | 6.08 / 1.86 × 10^1 | 6.57 × 10^−2 / 7.14 × 10^1 | 3.17 × 10^1 / 9.20 | 9.19 / 4.33
8 | 2.14 × 10^1 / 4.56 × 10^−2 | 2.12 × 10^1 / 5.09 × 10^−2 | 2.14 × 10^1 / 5.21 × 10^−2 | 2.11 × 10^1 / 7.59 × 10^−2 | 2.11 × 10^1 / 4.19 × 10^−2 | 2.12 × 10^1 / 4.07 × 10^−2
9 | 4.09 × 10^1 / 3.43 | 2.37 × 10^1 / 3.50 | 1.32 × 10^1 / 2.68 | 4.86 × 10^1 / 3.55 | 5.12 × 10^1 / 4.97 | 2.15 × 10^1 / 4.52
10 | 7.25 × 10^−4 / 2.22 × 10^−3 | 3.38 × 10^−3 / 4.24 × 10^−3 | 0 / 0 | 6.14 × 10^−3 / 6.67 × 10^−3 | 5.86 × 10^−2 / 3.89 × 10^−2 | 1.45 × 10^−4 / 1.04 × 10^−3
11 | 6.85 × 10^1 / 8.76 | 5.24 × 10^1 / 9.48 | 1.33 × 10^1 / 3.10 | 5.00 × 10^1 / 0 | 0 / 0 | 4.52 × 10^1 / 9.79
12 | 6.76 × 10^1 / 8.64 | 6.68 × 10^1 / 8.97 | 1.21 × 10^1 / 2.02 | 6.49 × 10^1 / 5.87 | 8.74 × 10^1 / 2.16 × 10^1 | 5.43 × 10^1 / 9.95
13 | 1.61 × 10^2 / 2.66 × 10^1 | 1.61 × 10^2 / 2.88 × 10^1 | 2.41 × 10^1 / 9.54 | 1.39 × 10^2 / 1.49 × 10^1 | 1.59 × 10^2 / 2.60 × 10^1 | 1.28 × 10^2 / 1.94 × 10^1
14 | 6.39 × 10^3 / 6.72 × 10^2 | 4.76 × 10^3 / 8.66 × 10^2 | 6.21 × 10^3 / 2.70 × 10^3 | 3.01 × 10^3 / 8.72 × 10^2 | 2.07 × 10^1 / 1.35 × 10^1 | 3.72 × 10^3 / 1.21 × 10^3
15 | 7.11 × 10^3 / 7.75 × 10^2 | 5.98 × 10^3 / 8.44 × 10^2 | 8.23 × 10^3 / 1.87 × 10^3 | 6.09 × 10^3 / 8.97 × 10^2 | 8.63 × 10^3 / 8.47 × 10^2 | 5.63 / 6.17
16 | 3.50 × 10^−2 / 1.08 × 10^−2 | 4.40 × 10^−2 / 1.92 × 10^−2 | 1.14 / 2.41 | 5.66 × 10^−2 / 2.61 × 10^−2 | 2.83 / 6.01 × 10^−1 | 2.86 × 10^−2 / 1.33 × 10^−2
17 | 1.20 × 10^2 / 9.68 | 1.34 × 10^2 / 9.43 | 8.51 × 10^1 / 8.90 × 10^1 | 5.16 × 10^1 / 5.56 × 10^−1 | 5.08 × 10^1 / 2.44 × 10^−2 | 1.11 × 10^2 / 7.37
18 | 1.43 × 10^2 / 7.66 × 10^1 | 1.38 × 10^2 / 5.01 × 10^1 | 3.00 × 10^2 / 1.70 × 10^2 | 1.37 × 10^2 / 1.55 × 10^1 | 1.57 × 10^2 / 3.65 × 10^1 | 1.14 × 10^2 / 1.10 × 10^1
19 | 4.69 / 6.32 × 10^−1 | 5.03 / 1.04 | 5.08 / 8.84 × 10^−1 | 2.48 / 3.80 × 10^−1 | 1.96 / 3.13 × 10^−1 | 4.23 / 5.77 × 10^−1
20 | 2.47 × 10^1 / 3.37 × 10^−1 | 2.10 × 10^1 / 7.58 × 10^−1 | 2.26 × 10^1 / 2.26 | 1.98 × 10^1 / 8.57 × 10^−1 | 2.06 × 10^1 / 7.92 × 10^−1 | 1.98 × 10^1 / 7.17 × 10^−1
21 | 1.98 × 10^2 / 1.40 × 10^1 | 6.14 × 10^2 / 4.22 × 10^2 | 3.87 × 10^2 / 3.15 × 10^2 | 6.32 × 10^2 / 3.64 × 10^2 | 6.06 × 10^2 / 4.42 × 10^2 | 5.82 × 10^2 / 4.34 × 10^2
22 | 9.71 × 10^3 / 1.30 × 10^3 | 4.40 × 10^3 / 9.40 × 10^2 | 1.21 × 10^4 / 6.87 × 10^2 | 3.01 × 10^3 / 1.53 × 10^3 | 3.94 × 10^1 / 2.97 × 10^1 | 2.65 × 10^3 / 1.03 × 10^3
23 | 9.56 × 10^3 / 1.37 × 10^3 | 6.45 × 10^3 / 1.08 × 10^3 | 1.13 × 10^4 / 5.40 × 10^2 | 5.04 × 10^3 / 7.19 × 10^2 | 8.91 × 10^3 / 9.56 × 10^2 | 5.79 × 10^3 / 5.83 × 10^2
24 | 2.71 × 10^2 / 1.63 × 10^1 | 2.28 × 10^2 / 1.07 × 10^1 | 2.00 × 10^2 / 3.89 × 10^−1 | 2.00 × 10^2 / 5.11 × 10^−2 | 3.11 × 10^2 / 1.43 × 10^1 | 2.30 × 10^2 / 1.09 × 10^1
25 | 3.40 × 10^2 / 1.15 × 10^1 | 2.93 × 10^2 / 9.15 | 2.83 × 10^2 / 7.87 | 2.71 × 10^2 / 5.27 | 3.35 × 10^2 / 1.29 × 10^1 | 2.90 × 10^2 / 9.93
26 | 2.07 × 10^2 / 3.92 × 10^1 | 2.59 × 10^2 / 6.93 × 10^1 | 2.99 × 10^2 / 1.48 × 10^1 | 2.00 × 10^2 / 2.65 × 10^−4 | 2.87 × 10^2 / 1.10 × 10^2 | 2.50 × 10^2 / 6.58 × 10^1
27 | 1.08 × 10^3 / 9.85 × 10^1 | 7.46 × 10^2 / 1.54 × 10^2 | 4.58 × 10^2 / 1.93 × 10^2 | 3.06 × 10^2 / 7.87 | 1.53 × 10^3 / 1.45 × 10^2 | 7.14 × 10^2 / 1.37 × 10^2
28 | 4.53 × 10^2 / 4.22 × 10^2 | 4.00 × 10^2 / 0 | 4.01 × 10^2 / 5.36 | 4.00 × 10^2 / 2.87 × 10^−13 | 7.54 × 10^2 / 9.78 × 10^2 | 4.00 × 10^2 / 0
Table 12. The results of different algorithms for 100-dimensional benchmark functions.
Each cell reports Mean / Std. Dev.
Function | CMA-ES | LaF-CMA-ES | IPOP-CMAES | EBOwithCMAR | SPSRDEMMS | KbP-LaF-CMAES
1 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
2 | 0 / 0 | 1.00 × 10^−6 / 2.00 × 10^−6 | 2.31 × 10^6 / 8.03 × 10^5 | 4.96 × 10^−3 / 1.80 × 10^−3 | 7.13 × 10^5 / 3.42 × 10^5 | 0 / 0
3 | 1.69 × 10^5 / 3.07 × 10^5 | 8.74 × 10^5 / 9.47 × 10^5 | 1.63 × 10^5 / 2.39 × 10^5 | 6.93 × 10^5 / 1.10 × 10^6 | 5.12 × 10^7 / 4.91 × 10^7 | 1.35 × 10^4 / 2.79 × 10^4
4 | 0 / 0 | 0 / 0 | 0 / 0 | 4.56 × 10^−3 / 9.26 × 10^−3 | 7.14 / 5.26 | 0 / 0
5 | 0 / 0 | 1.00 × 10^−6 / 0 | 1.40 × 10^−4 / 7.6 × 10^−5 | 0 / 0 | 0 / 0 | 0 / 0
6 | 4.13 / 1.73 × 10^1 | 7.43 × 10^1 / 8.61 × 10^1 | 2.06 × 10^2 / 5.45 × 10^1 | 1.58 × 10^2 / 9.09 × 10^1 | 4.71 × 10^1 / 8.15 × 10^1 | 5.84 × 10^1 / 7.77 × 10^1
7 | 5.73 × 10^1 / 1.32 × 10^1 | 4.70 × 10^1 / 8.60 | 3.06 × 10^1 / 1.46 × 10^1 | 1.76 / 7.80 × 10^−1 | 1.37 × 10^2 / 3.22 × 10^1 | 3.92 × 10^1 / 7.79
8 | 2.14 × 10^1 / 4.29 × 10^−2 | 2.13 × 10^1 / 2.74 × 10^−2 | 2.15 × 10^1 / 4.92 × 10^−2 | 2.13 × 10^1 / 3.75 × 10^−2 | 2.13 × 10^1 / 4.19 × 10^−2 | 2.13 × 10^1 / 2.47 × 10^−2
9 | 9.15 × 10^1 / 7.70 | 8.14 × 10^1 / 7.48 | 3.01 × 10^1 / 3.91 | 1.28 × 10^2 / 4.21 | 1.52 × 10^2 / 7.51 | 7.68 × 10^1 / 6.43
10 | 4.35 × 10^−4 / 1.76 × 10^−3 | 3.14 × 10^−3 / 3.82 × 10^−3 | 2.90 × 10^−3 / 4.03 × 10^−3 | 1.23 × 10^−2 / 8.35 × 10^−3 | 1.32 × 10^−1 / 6.91 × 10^−2 | 2.90 × 10^−4 / 1.45 × 10^−3
11 | 1.94 × 10^2 / 1.53 × 10^1 | 1.90 × 10^2 / 2.22 × 10^1 | 3.48 × 10^1 / 5.05 | 1.77 × 10^2 / 7.67 | 0 / 0 | 1.58 × 10^2 / 2.00 × 10^1
12 | 1.83 × 10^2 / 1.43 × 10^1 | 2.31 × 10^2 / 2.50 × 10^1 | 3.33 × 10^1 / 3.64 | 3.13 × 10^2 / 5.65 | 2.69 × 10^2 / 3.63 × 10^1 | 1.99 × 10^2 / 2.23 × 10^1
13 | 3.99 × 10^2 / 4.05 × 10^1 | 4.74 × 10^2 / 3.74 × 10^1 | 6.91 × 10^1 / 1.99 × 10^1 | 9.80 × 10^2 / 3.07 × 10^2 | 4.72 × 10^2 / 5.91 × 10^1 | 3.98 × 10^2 / 3.91 × 10^1
14 | 1.35 × 10^4 / 1.03 × 10^3 | 1.17 × 10^4 / 1.21 × 10^3 | 1.40 × 10^4 / 9.62 × 10^2 | 1.04 × 10^4 / 1.37 × 10^3 | 7.93 × 10^2 / 8.32 × 10^1 | 1.10 × 10^4 / 1.12 × 10^3
15 | 1.27 × 10^4 / 1.27 × 10^3 | 1.13 × 10^4 / 1.13 × 10^3 | 1.20 × 10^4 / 1.39 × 10^3 | 1.36 × 10^4 / 1.29 × 10^3 | 1.82 × 10^4 / 1.79 × 10^3 | 1.11 × 10^4 / 1.10 × 10^3
16 | 3.24 × 10^−2 / 8.70 × 10^−3 | 3.93 × 10^−2 / 1.30 × 10^−2 | 8.59 × 10^−1 / 2.17 | 3.70 × 10^−2 / 1.67 × 10^−2 | 2.19 / 8.11 × 10^−1 | 2.68 × 10^−2 / 5.74 × 10^−3
17 | 2.99 × 10^2 / 1.77 × 10^1 | 3.31 × 10^2 / 2.52 × 10^1 | 1.33 × 10^2 / 4.19 | 1.13 × 10^2 / 1.10 × 10^1 | 1.26 × 10^2 / 5.31 × 10^−1 | 3.01 × 10^2 / 1.43 × 10^1
18 | 4.21 × 10^2 / 2.68 × 10^2 | 3.91 × 10^2 / 1.87 × 10^2 | 6.30 × 10^2 / 3.85 × 10^2 | 3.64 × 10^2 / 2.54 × 10^1 | 3.98 × 10^2 / 7.72 × 10^1 | 3.04 × 10^2 / 2.18 × 10^1
19 | 1.19 × 10^1 / 1.33 | 1.37 × 10^1 / 2.20 | 1.35 × 10^1 / 1.75 × 10^1 | 7.52 / 8.63 × 10^−1 | 5.34 × 10^1 / 7.35 × 10^−1 | 1.20 × 10^1 / 1.30
20 | 5.00 × 10^1 / 0 | 5.00 × 10^1 / 0 | 5.00 × 10^1 / 0 | 5.00 × 10^1 / 1.10 × 10^−12 | 5.87 × 10^1 / 0 | 5.00 × 10^1 / 0
21 | 2.88 × 10^2 / 3.25 × 10^1 | 3.61 × 10^2 / 4.93 × 10^1 | 3.88 × 10^2 / 3.25 × 10^1 | 3.90 × 10^2 / 3.00 × 10^1 | 3.88 × 10^2 / 4.98 × 10^1 | 3.67 × 10^2 / 4.76 × 10^1
22 | 2.10 × 10^4 / 2.37 × 10^3 | 1.22 × 10^4 / 1.52 × 10^3 | 2.43 × 10^4 / 1.90 × 10^3 | 1.19 × 10^4 / 1.80 × 10^3 | 8.31 × 10^1 / 4.09 × 10^1 | 1.16 × 10^4 / 1.27 × 10^3
23 | 1.95 × 10^3 / 1.82 × 10^3 | 1.31 × 10^4 / 1.63 × 10^3 | 2.08 × 10^4 / 8.95 × 10^2 | 1.22 × 10^4 / 1.61 × 10^3 | 1.72 × 10^4 / 1.95 × 10^3 | 1.30 × 10^4 / 1.24 × 10^3
24 | 3.80 × 10^2 / 2.09 × 10^1 | 3.01 × 10^2 / 1.21 × 10^1 | 2.00 × 10^2 / 5.61 × 10^−2 | 2.02 × 10^2 / 8.25 × 10^−1 | 3.58 × 10^2 / 1.92 × 10^1 | 2.89 × 10^2 / 1.11 × 10^1
25 | 5.00 × 10^2 / 1.70 × 10^1 | 4.22 × 10^2 / 1.51 × 10^1 | 3.71 × 10^2 / 1.11 × 10^1 | 3.59 × 10^2 / 1.00 × 10^1 | 5.73 × 10^2 / 2.78 × 10^1 | 4.10 × 10^2 / 1.20 × 10^1
26 | 3.46 × 10^2 / 1.33 × 10^2 | 4.34 × 10^2 / 1.49 × 10^1 | 3.13 × 10^2 / 1.50 × 10^1 | 3.08 × 10^2 / 3.86 | 5.43 × 10^2 / 5.01 × 10^1 | 4.19 × 10^2 / 1.41 × 10^1
27 | 2.11 × 10^3 / 1.48 × 10^2 | 1.56 × 10^3 / 1.96 × 10^2 | 4.33 × 10^2 / 2.44 × 10^2 | 1.53 × 10^3 / 1.93 × 10^2 | 2.71 × 10^3 / 4.12 × 10^2 | 1.41 × 10^3 / 1.54 × 10^2
28 | 2.83 × 10^3 / 6.72 × 10^2 | 2.97 × 10^3 / 8.47 × 10^2 | 3.08 × 10^3 / 9.48 × 10^2 | 2.46 × 10^3 / 2.33 × 10^1 | 5.72 × 10^3 / 1.62 × 10^3 | 3.03 × 10^3 / 8.98 × 10^2
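For readers who want to tabulate comparison results in the same Mean / Std. Dev. form, the snippet below is a minimal sketch of how such summaries are typically aggregated from raw run data. The run count, the synthetic error values, and the algorithm and function labels are illustrative assumptions, not details taken from this paper.

import numpy as np

# Hypothetical setup: errors[a][f] holds the final error values recorded
# over several independent runs of algorithm a on benchmark function f.
# The run count and the synthetic values are placeholders for illustration.
rng = np.random.default_rng(0)
algorithms = ["CMA-ES", "KbP-LaF-CMAES"]        # placeholder algorithm names
functions = [f"F{i}" for i in range(1, 4)]      # placeholder benchmark IDs
n_runs = 51                                     # assumed number of runs

errors = {
    a: {f: rng.lognormal(mean=1.0, sigma=0.5, size=n_runs) for f in functions}
    for a in algorithms
}

# Summarize each (algorithm, function) pair as "Mean / Std. Dev.",
# mirroring the cell layout used in Tables 10-12.
for f in functions:
    row = [f]
    for a in algorithms:
        vals = errors[a][f]
        row.append(f"{vals.mean():.2e} / {vals.std(ddof=1):.2e}")
    print(" | ".join(row))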