Article

An Enhanced Slime Mould Algorithm That Combines Multiple Strategies

1 Information Engineering School, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 College of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China
3 School of Software Engineering, Jiangxi University of Science and Technology, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(10), 907; https://doi.org/10.3390/axioms12100907
Submission received: 18 August 2023 / Revised: 17 September 2023 / Accepted: 19 September 2023 / Published: 24 September 2023

Abstract: In recent years, owing to the growing complexity of real-world problems, researchers have increasingly favored stochastic search algorithms as their method of choice for problem solving. The slime mould algorithm is a high-performance stochastic search algorithm inspired by the foraging behavior of slime moulds. However, it faces challenges such as low population diversity, high randomness, and susceptibility to falling into local optima. Therefore, this paper presents an enhanced slime mould algorithm that combines multiple strategies, called the ESMA. The incorporation of a selective average position and Lévy flights with jumps in the global exploration phase improves the flexibility of the search approach. A dynamic lens learning approach is employed to adjust the position of the optimal slime mould individual, guiding the entire population towards the correct region of the given search space. In the updating method, an improved crisscross strategy is adopted to recombine the slime mould individuals, which makes the search behavior of the slime mould population more refined. Finally, the performance of the ESMA is evaluated using 40 well-known benchmark functions drawn from the CEC2017 and CEC2013 test suites, and Friedman's test confirms that its advantage is statistically significant. The analysis of the results on two real-world engineering problems demonstrates that the ESMA offers a substantial advantage in search capability.

1. Introduction

As human society advances, new problems keep arising in areas such as medical image segmentation [1], parameter identification of photovoltaic cells and modules [2], and DNA sequence design [3]. Optimization is a frequently discussed topic in the expanding field of scientific and engineering applications. Generally, optimization is the process of finding the optimal value of a function, at minimum cost, in problems that can be continuous or discrete, constrained or unconstrained. However, most real-world optimization problems are nonlinear, high-dimensional, multimodal, and non-differentiable [4,5].
Traditional optimization algorithms, such as the gradient descent method, can effectively solve problems involving unimodal continuous functions. The gradient descent method utilizes the gradient information of the function to iteratively update the values of the independent variables, gradually approaching the minimum of the objective function [6]. The diagonal quasi-Newton updating method is based on the quasi-Newton concept: it approximates the inverse of the Hessian matrix of the objective function by a diagonal matrix [7]. The least squares method determines the parameters of a fitted curve by minimizing the squared differences between the observed and fitted values [8]. However, these traditional mathematical methods have significant limitations when dealing with complex optimization problems. Firstly, they are unable to handle non-deterministic parameters. Secondly, they are constrained by the feasible domain, which often leads to issues such as insufficient solution accuracy and low solution efficiency. Stochastic search algorithms (SSAs) are a branch of mathematical algorithms that effectively address these problems by performing matrix computations over multidimensional, diverse populations and by employing random distributions from probability theory, such as the normal distribution [9] and the uniform distribution [10]. These algorithms are advantageous for their gradient-free nature and high adaptability.
SSAs typically involve two phases in their search process: global diversification (exploration) and precise local refinement (exploitation) [11,12]. To adapt to different practical engineering problems, a large number of SSAs that balance exploration and exploitation continue to appear, such as monarch butterfly optimization (MBO) [13], thermal exchange optimization (TEO) [14], the whale optimization algorithm (WOA) [15], the seagull optimization algorithm (SOA) [16], pigeon-inspired optimization (PIO) [17], and Harris hawks optimization (HHO) [18].
Although much evidence shows that SSAs are effective optimization methods, when dealing with extremely complex problems they can still suffer from premature convergence and become easily trapped in local optima. To improve the trajectory of an algorithm's population in the search space, researchers have used methods such as chaos maps [19], matching games [20], or combinations with other stochastic search algorithms [21].
Li et al. [22] proposed a novel stochastic search algorithm, the slime mould algorithm (SMA), which simulates the oscillations of slime moulds. The SMA utilizes weight changes to replicate the positive and negative feedback processes that occur during slime mould foraging, resulting in three distinct stages of foraging morphology.

Motivation and Contribution

The SMA has the advantages of simple principles, few tuning parameters, and high scalability. Although its excellent performance has been strongly confirmed, the SMA also has limitations, such as a high reliance on a single search individual within the group, the absence of a strong global search capability, and a tendency to become trapped in local optima when dealing with complex high-dimensional problems [23].
It should be noted that the "No Free Lunch" (NFL) theorem [24] implies that any performance gain of an algorithm on one class of problems may be offset by losses on another class. Although the SMA has been improved by many scholars, there is still room to further enhance its optimization accuracy. Therefore, proposing an improved algorithm with better optimization performance remains of practical significance.
A single search strategy can limit the optimization capability of a stochastic search algorithm. When a search method has a strong local search capability, its global search capability tends to be weak, and vice versa. In the SMA, a single search strategy can lead to a lack of population diversity, making the algorithm prone to local minima when solving complex optimization problems. This suggests that a multi-strategy algorithm has potential.
In this paper, a new search mechanism is proposed to address the inadequacy of the global exploration capability of the SMA. Selective population-averaged positions are used instead of random positions, and a Lévy flight function with jumping characteristics is used to improve the weights of positive and negative feedback. Then, a learning method based on the dynamic lens learning strategy is proposed for fine-tuning the positions of the best individual. Lastly, a practical update method is proposed which utilizes an improved crisscross technique to reorganize the current individuals, generating a perturbation effect on the SMA population.
Across the 40 functions of the CEC2017 and CEC2013 test suites, the ESMA is compared with the AGSMA [25], AOSMA [26], MSMA [27], DE [28], particle swarm optimization (PSO) [29], the grey wolf optimizer (GWO) [30], the salp swarm algorithm (SSA) [31], HHO, the crow search algorithm (CSA) [32], and the SMA. Additionally, Friedman's test was conducted for statistical analysis. The results demonstrate the feasibility of the ESMA. Its usefulness in solving real-world problems is illustrated through its application to two specific problems: path-finding optimization and pressure vessel design. The contributions of this paper are briefly described as follows:
  • To solve the aimless random search of slime mould individuals and to improve the global exploration capability of the algorithm, a novel approach involving population-averaged position and Lévy flight with conditional selection is proposed.
  • To improve the leadership abilities of the best slime mould individual, a novel technique called dynamic lens mapping learning is proposed.
  • A novel and improved crisscross method is proposed. This method effectively utilizes the valuable dimensional information in the population renewal process, eliminating the influence of local extremes and consequently enhancing the quality of each solution.
  • A total of 40 benchmark functions from the CEC2017 and CEC2013 test suites were employed to evaluate the numerical performance, along with 2 real-world problems to assess the feasibility of optimization. A comparison of the test results between the ESMA and the other participating algorithms clearly demonstrates that the ESMA exhibits significantly superior optimization performance.
The remaining structure of this paper is organized as follows: Section 2 provides an introduction to the standard SMA and its related work. Section 3 explains the improved ESMA and its enhancement strategies in detail. In Section 4, the test results on the CEC2013 and CEC2017 test suites, as well as the analysis of engineering optimization problems, are presented. Finally, Section 5 discusses the conclusions and future development directions.

2. Background

2.1. Slime Mould Algorithm (SMA)

The SMA simulates the behavioral and morphological changes of slime moulds during the foraging phase, where they search for and surround food. The front of the slime mould exhibits a fan-shaped morphology, which is subsequently accompanied by a network of interconnecting veins. As the veins approach the food source, the slime mould’s bio-oscillator produces diffusive waves, which modify the cytoplasmic flow within the veins. This leads to the movement of the slime mould towards the more favorable food source.
Two crucial parameters for maintaining balance in the SMA are the constant z and the transition probability p. If a randomly generated value r is less than z, the algorithm enters the bounded random search stage. Otherwise, whether r is greater or less than p determines the population's basic search direction, leading to either the exploration or the exploitation phase. Accordingly, the mathematical model of the SMA comprises the following three parts.

2.1.1. Approach Food

When the concentration of food in the air is higher, the oscillator amplitude of the slime moulds becomes stronger, the width of their veins increases, and a larger quantity of slime mould accumulates in the region. Conversely, when the concentration of food in the area is low, the slime moulds will change direction and explore alternative areas. The mathematical model for the movement of slime moulds approaching food is represented by Equation (1):
$$X(t+1)=\begin{cases}X_b(t)+v_b\times\left(W\times X_A(t)-X_B(t)\right), & r<p\\ v_c\times X(t), & \text{others}\end{cases}\tag{1}$$
where t is the current iteration number, X_b is the best individual position, X_A and X_B are the positions of two randomly selected individuals, W is the weight coefficient calculated from the fitness, and v_b and v_c are control parameters, with v_b ∈ [−l, l] and v_c decreasing linearly from 1 to 0. r is a random number between 0 and 1, the search stage is determined by the adaptive transition variable p, and l is defined in Equations (2) and (3):
$$p=\tanh\left|S(i)-bF\right|\tag{2}$$
$$l=\operatorname{arctanh}\left(-\frac{t}{max\_iter}+1\right)\tag{3}$$
where i ∈ {1, 2, 3, …, N}, S(i) is the fitness of the current individual, bF is the current best fitness, and max_iter is the maximum number of iterations. W is the weight coefficient; it simulates the magnitude of the positive and negative feedback produced by the slime mould when encountering different concentrations of food, as shown in Equation (4):
$$W(SI(i))=\begin{cases}1+rand\times\log\left(\frac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\ 1-rand\times\log\left(\frac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}\tag{4}$$
$$SI=\operatorname{sort}(S)\tag{5}$$
In Equation (4), condition indicates that S(i) ranks in the first half of the population, rand is a random number following a uniform distribution, bF is the best fitness value, wF is the worst fitness value, and SI is the sorted fitness sequence, indicating the order of food concentrations.

2.1.2. Wrap Food

After locating a food source, the slime mould will still allocate some individuals to randomly explore other unknown areas in search of a higher-quality food source. The updating of the entire population's positions is therefore given by Equations (6)–(8):
$$X(t+1)=rand\times(UB-LB)+LB,\quad r<z\tag{6}$$
$$X(t+1)=X_b(t)+v_b\times\left(W\times X_A(t)-X_B(t)\right),\quad r>z\ \text{and}\ r<p\tag{7}$$
$$X(t+1)=v_c\times X(t),\quad r>z\ \text{and}\ r\geq p\tag{8}$$
where UB and LB are the upper and lower bounds of the dimensions of the problem to be solved, respectively, r denotes a random number obeying a uniform distribution, and z is a fixed parameter.
Equations (7) and (8) reveal that the SMA alters the search direction of individuals by modifying the parameters W , v b , and v c . Additionally, it is demonstrated that during the exploration phase, the population moves around the best individual, while during the exploitation phase, the population moves within itself. Furthermore, when the algorithm tends to stagnate, it is also allowed to search in other directions.

2.1.3. Grabble Food

Slime moulds rely on a biological oscillatory response to regulate the flow of cytoplasm in their veins and thus adjust their position to find the optimal food location. To simulate this oscillatory response, the parameters W, v_b, and v_c are used in the SMA. W represents the behavior of selecting different oscillation patterns based on fitness; it simulates a search pattern where a slime mould chooses its next behavior based on the food situation of individuals at other locations. v_b models the process of dynamically adjusting oscillations by changing the diameter of mucus veins based on information from other individuals. v_c changes linearly and simulates the extent to which the slime mould retains information about its own history.

2.2. Related Work

Currently, many researchers have made a series of improvements to the SMA to make up for the shortcomings of the algorithm itself. They have successfully improved the performance of the SMA at various levels and effectively resolved many real-world engineering problems. The methods for enhancing the performance of the SMA can be broadly categorized into three groups: (1) parameter control, (2) implementation and development of strategies for generating new solutions, and (3) hybridization with other algorithms.
Some important work has been conducted in the literature [33,34,35,36] on the SMA's control parameters. Miao et al. proposed a novel MSMA in the literature [33]. The algorithm employs a tent map to generate a nonlinear parameter z, overcoming the limitation of a constant value. Additionally, it utilizes the randomness of tent-map chaos to improve the distribution of elite individuals within the population, resulting in more reasonable locations. The algorithm was applied to a real terraced reservoir system along the Dadu River in China to simulate the maximization of annual power generation. Altay [34] introduced 10 different chaotic maps, instead of the random number rand, to adjust the parameter W in the CSMA, and evaluated the advantages and disadvantages of the 10 chaotic maps in the SMA through function testing. Tang et al. [35] proposed a new variant, also called MSMA. It uses two adaptive parameter strategies to improve the SMA parameters, employs a chaotic dyadic approach to improve the distribution of the initial population, and jumps out of local optima with the help of a spiral search strategy. In Ref. [36], Gao noted that the standard SMA employs the arctanh function to compute the parameter a, and suggested using the cos function to generate it instead, a variant referred to as ISMA.
Several works have focused on implementing and developing strategies for generating new solutions. Hu et al. [37], inspired by the ABC algorithm [38], introduced a variant that incorporates a dispersed foraging strategy, named DFSMA. The DFSMA successfully balances global exploration and local exploitation during the iterative process, has shown promising results on the CEC2017 test suite, and has achieved higher classification accuracy with a reduced feature count in feature selection tasks. Allah et al. [39] proposed an improved algorithm called CO-SMA, which is based on a logistic-map chaotic sequence search strategy (CSS) and incorporates a crossover-opposition strategy (COS) for opposite learning. CSS and COS are alternated with a certain probability P; the former is intended to preserve the diversity of solutions, while the latter conducts a neighborhood search around the position of the best individual, thereby enhancing the quality of solution discovery. Results show that CO-SMA minimizes the energy consumption cost of wind turbines at high altitudes. Ren et al. [40] proposed an improved MGSMA by employing two strategies. Strategy 1 applies the MVO strategy with a roulette wheel selection mechanism to the foraging process of the slime mould, which partially resolves the issue of getting trapped in local optima. Strategy 2 utilizes the Gaussian kernel probability strategy throughout the entire search process, allowing for the perturbation of newly generated solutions and expanding the search range of the slime mould individuals. MGSMA demonstrated improved performance in multi-threshold image segmentation. Ahmadianfar et al. [41] proposed an improved version of MSMA that incorporated three enhancements. Firstly, they introduced additional parameters, such as the number of iterations, into the mutation operator of the differential evolution (DE) used in MSMA. Secondly, they introduced an effective crossover operator with an adjustable parameter to enhance the diversity of the population. Lastly, they utilized a deviation from the current solution to further improve solution quality. Simulation experiments showed that MSMA significantly improved the generation performance of multi-reservoir hydropower systems. Pawani et al. [42] addressed the issue of insufficient local exploitation in the SMA when dealing with complex searches involving a wide range of neighborhood optima. To tackle this problem, they proposed a method called WMSMA that utilizes wavelet mutation to adapt individuals and prevent them from getting trapped in local minima; it effectively solves the cogeneration scheduling problem with nonlinear and discontinuous constraints. Sun et al. [43] proposed two strategies to enhance the SMA in the BTSMA (Brownian motion and tournament selection mechanism SMA). The first improves global exploration by incorporating Brownian motion and tournament selection, while the second integrates an adaptive hill climbing strategy with the SMA to promote the exploitation trend, thus facilitating the local search. Experimental results on structural engineering design problems and on training multi-layer perceptrons validate the effectiveness of the BTSMA in real-world tasks.
An active research trend is to combine the SMA with other evolutionary computation techniques. Zhong et al. [44] proposed a hybrid algorithm (TLSMA) that integrates the teaching-learning-based optimizer (TLBO) with the SMA. The population of the TLSMA is divided into two dynamically selected subgroups, where the former performs a TLBO search and the latter an SMA search. This division improves the optimization capability of the SMA, and performance gains are achieved on five RBRO problems, including a numerical design optimization problem. Liu et al. [45] proposed a hybrid algorithm called MDE that first runs the DE algorithm; after the mutation, crossover, and selection steps, the SMA searches for the optimum using the improved population. This approach helps mitigate, to some extent, the risk of falling into a local optimum. MDE effectively carries out multi-threshold segmentation of breast cancer images, achieving satisfactory results. Izci [46] proposed a refined algorithm, called OBL-SMA-SA, which introduces SA [47]. The algorithm features a mechanism that accepts suboptimal solutions with a probability p during global exploration, and employs an opposite learning approach for local exploitation. The combination of these two strategies mitigates the risk of getting trapped in a local optimum. OBL-SMA-SA efficiently tunes the parameters of the FOPID controller, ensures the stability of the automatic voltage regulator's terminal voltage, and regulates the speed of the DC motor. Örnek et al. [48], inspired by the SCA [49], introduced a variant of the SMA called the SCSMA. The SCSMA improves the update process by leveraging the reciprocal oscillation property of sine and cosine during each iteration, which effectively advances the algorithm's exploration and exploitation. The efficacy of the SCSMA is validated on real-world problems, including the design of four cantilever beams. Yin et al. [50] proposed an improved algorithm called EOSMA. During the exploration and exploitation phases, EOSMA exchanges the roles of the best individual and the current individual; the random search phase of the SMA is then replaced by the EO [51]. Finally, it is supplemented with a mutation strategy to increase the probability of escaping from local optima. The superior optimization capability of EOSMA is demonstrated on nine problems, including the design of a car for a side-impact collision.
In summary, the SMA variant algorithms and their applications are shown in Table 1.
Figure 1 shows the steps of the whole search process of the SMA.
The pseudo-code of the SMA is as follows in Algorithm 1.
Algorithm 1 SMA
1. Input: the parameters N, max_iter, and the positions of slime moulds X_i (i = 1, 2, …, N);
2. t = 0;
3. while (t ≤ max_iter) do
4.   Calculate the fitness of all slime moulds;
5.   Update the best fitness and best position bF, X_b;
6.   Calculate W by Equation (4);
7.   for i = 1 to N
8.     Update p, v_b, v_c;
9.     if r < z
10.      Update position by Equation (6);
11.    else if r < p
12.      Update position by Equation (7);
13.    else
14.      Update position by Equation (8);
15.    end if
16.  end for
17.  t = t + 1;
18. end while
19. Output: bF, X_b;
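To make Algorithm 1 concrete, the following is a minimal Python sketch of the SMA loop; it is not the authors' implementation (the paper's experiments use MATLAB). The constant z = 0.03, the greedy retention of the best position, and the treatment of v_c as a linear scalar are assumptions based on the description above; reference implementations often draw v_c uniformly from a shrinking interval instead.

```python
import numpy as np

def sma(f, dim, lb, ub, n=30, max_iter=1000, z=0.03):
    """Minimal sketch of the standard SMA (Algorithm 1) minimizing f on [lb, ub]^dim."""
    X = lb + np.random.rand(n, dim) * (ub - lb)            # random initialization
    fit = np.apply_along_axis(f, 1, X)
    Xb, bF = X[fit.argmin()].copy(), fit.min()             # best position / fitness
    for t in range(max_iter):
        order = fit.argsort()                              # SI = sort(S), Eq. (5)
        wF = fit[order[-1]]
        denom = (bF - wF) if bF != wF else -1e-12          # guard against bF == wF
        ratio = np.log((bF - fit[order]) / denom + 1)      # log term of Eq. (4)
        W = np.ones((n, dim))
        half = n // 2                                      # "condition": better half
        W[order[:half]] = 1 + np.random.rand(half, dim) * ratio[:half, None]
        W[order[half:]] = 1 - np.random.rand(n - half, dim) * ratio[half:, None]
        l = np.arctanh(1 - (t + 1) / max_iter)             # Eq. (3); t+1 avoids arctanh(1)
        vc = 1 - (t + 1) / max_iter                        # decreases linearly from 1 to 0
        for i in range(n):
            p = np.tanh(abs(fit[i] - bF))                  # Eq. (2)
            if np.random.rand() < z:                       # Eq. (6): random relocation
                X[i] = lb + np.random.rand(dim) * (ub - lb)
            elif np.random.rand() < p:                     # Eq. (7): move around the best
                A, B = np.random.randint(n, size=2)
                vb = np.random.uniform(-l, l, dim)
                X[i] = Xb + vb * (W[i] * X[A] - X[B])
            else:                                          # Eq. (8): shrink toward itself
                X[i] = vc * X[i]
            X[i] = np.clip(X[i], lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < bF:
            Xb, bF = X[fit.argmin()].copy(), fit.min()
    return Xb, bF
```

For instance, sma(lambda x: float(np.sum(x ** 2)), dim=30, lb=-100.0, ub=100.0) minimizes the sphere function.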

3. Enhanced Slime Mould Algorithm That Combines Multiple Strategies (ESMA)

In order to improve the optimization search capability of the SMA, three different approaches are proposed to address each of its three deficiencies: (1) An effective global search method can be achieved by utilizing a conditional mean position of the population along with a Lévy function that incorporates a non-deterministic random walk mechanism. (2) Efficiently learning the best individual can be accomplished through the implementation of a dynamic lens-based learning method. (3) By employing an enhanced method of updating slime mould individuals using dimensional crossover, the updating process can be improved to prevent falling into local optima.

3.1. A New Global Search Mechanism

In SSAs, search techniques demonstrate superior performance when they exhibit strong exploration ability during the initial and intermediate stages of the global search. For instance, in Tang's study [52], a tabu search technique based on the critical path method was employed to augment the number of teachers in the dynamic teacher group during the global exploration stage.
There are two important features in the early global exploration process of the SMA. (1) When r < p , the step-size portion of the slime mould’s position update is computed by combining the positions of two random individuals using W and v b . During this stage, the search behavior of the slime mould is characterized by aimless and random exploration [25], which partially undermines the effectiveness of the preliminary search of the SMA. As a result, a rational selection of search individuals becomes crucial. (2) The appropriate magnitude of the weighting factor W , used to simulate the positive or negative feedback between the pulse width of the slime mould and the concentration of explored food, restricts the extent of perturbation in the current random individual, thereby influencing the global exploration capability of the algorithm. To solve the aforementioned challenges, the following discussion is conducted.
The updating method for global exploration is thus composed by fusing the positions of other individuals in the population with iteratively superimposed inertia weights and other parameters.
Facilitating the exchange of information regarding the positions of individuals in the current population is a beneficial search strategy in stochastic search algorithms. This idea is inspired by the AO algorithm [12], which utilizes the average position of the population for global exploration. However, the average position of the population does not always play a dominant role in the search. Therefore, in this paper, the random individual weighted by W in the exploration formula (7) of the SMA is replaced with a conditionally selected population average position. This approach allows for better sharing of information and positions, as well as the advantage of learning from the information obtained from other individuals in the population. The average population position is given by Equation (9):
$$X_M(t)=\frac{1}{N}\sum_{i=1}^{N}X_{ij}\tag{9}$$
where N is the population size and i and j are the index and the dimension component of the current individual, respectively.
Next, the proper, conditional use of this position is discussed in detail. Case (1): Typically, individuals in proximity to the best individual have a shorter Euclidean distance to it than other individuals. The quantity d_i in Equation (10) compares the distances from a random individual X_A and from the population average X_M to the leader slime mould X_b. Case (2): Building on this distance condition, if the randomly selected slime mould deviates from the direction leading to the global best position, updating its position using the location of an individual with a better fitness value can enhance its search ability. Equation (11) reflects the relative fitness of the average position X_M and the random individual X_A.
$$d_i=\sqrt{\sum_{j=1}^{Dim}\left(X_{Aj}-X_{bj}\right)^2}-\sqrt{\sum_{j=1}^{Dim}\left(X_{Mj}-X_{bj}\right)^2}\tag{10}$$
$$f_i=f(X_A)-f(X_M)\tag{11}$$
where X A is the random position, X M is the average position, X b is the best position, and f is the problem function to be solved.
Furthermore, determining how to improve the weighting factor W becomes crucial for solving the global exploration problem. The uniformly distributed random numbers r commonly used in SSAs lack a jumping mechanism. Altay [34] has demonstrated in detail how the introduction of a suitable method can effectively upgrade the weighting factor W.
It is well known that Lévy flight is a random walk whose step lengths follow a non-Gaussian, heavy-tailed distribution; it combines small-range wandering with long-distance jumps [53,54] and is widely used to resolve stagnation in SSAs. In this paper, Lévy flight is adopted to improve the feedback of the weight factor on the search of each slime mould individual, to expand the leap capability of the algorithm, and to prevent it from falling into a local optimum. Lévy variables are generated according to Equation (12) and then incorporated into the weighting factor W, as depicted in Equation (14).
$$Levy(\xi)=s\times\frac{u\times\sigma}{|v|^{1/\beta}}\tag{12}$$
where s is a fixed value of 0.01 and u and v are random numbers between 0 and 1. The value of σ is calculated using Equation (13):
$$\sigma=\left(\frac{\Gamma(1+\beta)\times\sin\left(\pi\beta/2\right)}{\Gamma\left(\frac{1+\beta}{2}\right)\times\beta\times2^{\frac{\beta-1}{2}}}\right)^{1/\beta}\tag{13}$$
where β is set to a fixed value of 1.5.
$$W(SI(i))=\begin{cases}1+Levy\times\log\left(\frac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\ 1-Levy\times\log\left(\frac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}\tag{14}$$
where L e v y is the value of the Lévy function produced by Equation (12).
To address both of the aforementioned problems, the proposed global exploration update method is presented in Equation (15). This method combines the average position of the population with the Lévy function as an improvement strategy:
$$X(t+1)=\begin{cases}X_b(t)+v_b\times\left(LW\times X_M(t)-X_B(t)\right), & r<p\ \text{and}\ d_i>0\ \text{and}\ f_i>0\\ X_b(t)+v_b\times\left(LW\times X_A(t)-X_B(t)\right), & \text{others}\end{cases}\tag{15}$$
where LW is the Lévy-enhanced weight of Equation (14); when the distance and fitness conditions hold, the mean position X_M replaces the random individual X_A.
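A small Python sketch of this exploration step follows. The two branches of Equation (15) were garbled in the extracted text, so the sketch assumes the condition selects the mean position X_M and the fallback keeps the random individual X_A, per the surrounding discussion; the helper names levy and explore are ours. Note that the paper draws u and v uniformly in (0, 1), whereas the classical Mantegna scheme uses normal variates.

```python
import math
import numpy as np

def levy(dim, beta=1.5, s=0.01):
    # Lévy step via Eqs. (12)-(13).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = np.random.rand(dim), np.random.rand(dim)
    return s * u * sigma / np.abs(v) ** (1 / beta)

def explore(X, fit, Xb, bF, wF, i, vb, f, in_better_half):
    # Global exploration step of the ESMA, Eqs. (9)-(11), (14), and (15) (sketch).
    n, dim = X.shape
    XM = X.mean(axis=0)                                        # Eq. (9): population mean
    A, B = np.random.randint(n, size=2)
    di = np.linalg.norm(X[A] - Xb) - np.linalg.norm(XM - Xb)   # Eq. (10)
    fi = f(X[A]) - f(XM)                                       # Eq. (11)
    denom = (bF - wF) if bF != wF else -1e-12
    lw_term = levy(dim) * np.log((bF - fit[i]) / denom + 1)
    LW = 1 + lw_term if in_better_half else 1 - lw_term        # Eq. (14)
    partner = XM if (di > 0 and fi > 0) else X[A]              # conditional selection
    return Xb + vb * (LW * partner - X[B])                     # Eq. (15)
```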

3.2. An Effective Learning Method

During the search process of the standard SMA, the leader slime mould guides the population to conduct an extensive search within the specified problem boundaries. This optimization process can be considered an elite strategy, where the leader's search method directly influences the overall search performance. Therefore, it is crucial for the leader slime mould individual to possess a flexible search mechanism. Furthermore, the update of the leader individual in the SMA is determined exclusively by the fitness evaluation. According to Equation (7), as the number of iterations increases, the entire population of slime moulds gradually converges towards the current best position. This convergence tendency poses a challenge for the SMA, as it increases the likelihood of getting trapped in local optima when solving complex multimodal functions.
The susceptibility of algorithms to falling into local optima has been extensively studied. It has been observed that the opposite solution of a population individual is often closer to the best point than the individual itself, which makes it easier to escape local optima; this finding is supported by multiple studies [55,56]. Equation (16) represents the standard opposition learning form, which searches in one fixed direction. However, it still carries the risk of monotonicity and local stagnation in other search directions.
$$\check{x}=UB+LB-x\tag{16}$$
where, U B and L B are the upper and lower boundaries of the given problem, respectively.
Lens learning, based on opposition learning [57], increases the search in various directions within the search space. As a result, the generated opposition individuals exhibit flexible diversity, which is advantageous for exploring unknown domains. The principle is illustrated in Figure 2. In this context, standard lens learning is assumed to operate within a given two-dimensional plane. The height of point P is represented as h, and X_b is the projection of point P onto the transverse coordinate axis. The center of the lens is set at point o, where o = (a + b)/2. Through the imaging operation, a point P′ of height h′ is obtained, with its projection on the transverse coordinate axis denoted as X_ob. Based on the above, the new point X_ob can be generated through the imaging operation, as expressed mathematically in Equation (17).
$$\frac{h}{h'}=\frac{\frac{a+b}{2}-X_b}{X_{ob}-\frac{a+b}{2}}\tag{17}$$
Letting k = h/h′, Equation (17) can be rearranged as shown in Equation (18):
$$X_{ob}=\frac{a+b}{2}+\frac{a+b}{2k}-\frac{X_b}{k}\tag{18}$$
From the transformed Equation (18), it can be observed that the size and direction of the new point, X o b , are determined by the parameter k . The newly generated solutions, based on the lens learning principle, are converted into the perturbation and updating of individual population members in the stochastic search algorithm, while the dynamically scaled parameter, k , helps improve the randomness of the generated population members.
The principle of lens learning in the two-dimensional plane, described above, can produce biased search individuals if the upper and lower bounds are chosen unreasonably. In general, better results are achieved in SSAs by using the upper and lower bounds of the decision variables of the current iteration's individuals. To enhance the leadership ability of the best slime mould individual in the ESMA, a new lens learning strategy based on the aforementioned principle is proposed, which adjusts the current position within the problem area being solved. However, the dimensions of complex functions vary; to accommodate the high-dimensional position vector of the leader slime mould individual, the strategy is generalized to a multidimensional form, as shown in Equation (19).
$$X_{ob}^{\,j}=\frac{\max(X_b^{\,j})+\min(X_b^{\,j})}{2}+\frac{\max(X_b^{\,j})+\min(X_b^{\,j})}{2k}-\frac{X_b^{\,j}}{k}\tag{19}$$
where max(X_b^j) and min(X_b^j) are the maximum and minimum components of X_b, respectively, and X_ob^j is the jth dimensional component of the best individual after lens learning.
Equation (19) produces the best position of the individual after performing multidimensional mapping. Parameter k , which controls the direction and distance of the search, affects the angles and distances between the original best individual and the newly generated best individual. It also enhances the ability of the best individual to move in the best direction within the search space. To improve the applicability of the lens learning method to the ESMA, the parameters are adjusted and a dynamic parameter, k, based on the number of iterations, is proposed. The dynamic parameter, k , is given by Equation (20).
$$k=\left(1+\left(\frac{t}{max\_iter}\right)^{\lambda_1}\right)^{\lambda_2}\tag{20}$$
In Equation (20), λ_1 is set to 0.5 and λ_2 to 10, values determined after several experiments.
Finally, the next iteration is updated by selecting the best individual between X b at the current iteration and the X o b formed by lens learning, as shown in Equation (21).
$$X_b(t)=\begin{cases}X_{ob}(t), & \text{if}\ f\left(X_{ob}(t)\right)<f\left(X_b(t)\right)\\ X_b(t), & \text{others}\end{cases}\tag{21}$$
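A short sketch of this learning step is given below, under the interpretation that max(X_b^j) and min(X_b^j) are the extreme components of the leader's position vector; the helper name lens_learning is ours.

```python
import numpy as np

def lens_learning(Xb, f, t, max_iter, lam1=0.5, lam2=10.0):
    # Dynamic lens-based opposite position for the leader, Eqs. (19)-(21).
    k = (1 + (t / max_iter) ** lam1) ** lam2      # Eq. (20): k grows with iterations
    mid = (Xb.max() + Xb.min()) / 2               # extreme components of the leader
    Xob = mid + mid / k - Xb / k                  # Eq. (19)
    return Xob if f(Xob) < f(Xb) else Xb          # Eq. (21): greedy selection
```

Because k grows from 1 towards 2^λ2 over the iterations, the opposite point X_ob starts far from X_b (strong perturbation early on) and gradually collapses towards the midpoint, turning the operator into a fine-grained local refinement late in the search.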

3.3. A Feasible Update Method

As a population-based stochastic search algorithm, the SMA's search process can be primarily categorized into two phases: global exploration, where the objective is to approach food, and local exploitation, where the focus is on wrapping food. During the exploration stage, the position vectors of two random individuals are used to search around the best individual, while during the local exploitation phase, the update is computed solely from the current individual and the linearly decreasing oscillation factor v_c. As the iterations deepen, individuals become increasingly concentrated, which raises the probability of stagnation during the iterative process and significantly reduces population diversity. In other words, the solutions obtained by the SMA in each search may not always be reliable: the lack of a jump-out mechanism makes the algorithm susceptible to local extremes, leading to premature states. This weakness in seeking the best solution can be addressed by combining the SMA with other stochastic search algorithms [58,59].
The crisscross optimization algorithm (CSO) [60] is a stochastic search algorithm that alternates with a unique double crossover operator. The dimensions of the search individuals are combined and crossed in two different ways, horizontally and vertically. This approach aims to fully utilize the useful information from each dimension in solving the objective function, thereby mitigating the influence of local extremes and enhancing the quality of the solution. In this paper, we leverage the unique capability of CSO to capture dimensional information and propose an improved CSO search strategy that incorporates the ESMA to refine the fusion of information among individuals.

3.3.1. Horizontal Crossover

The horizontal operator in the CSO conducts the crossover operation within the same dimension of two individuals from the population in a randomized and non-repetitive manner, as shown in Equations (22) and (23).
$$MSX_{ij}(t)=r_1\times X_{ij}(t)+\left(1-r_1\right)\times X_{Aj}(t)+c_1\times\left(X_{ij}(t)-X_{Aj}(t)\right)\tag{22}$$
$$MSX_{Aj}(t)=r_2\times X_{Aj}(t)+\left(1-r_2\right)\times X_{ij}(t)+c_2\times\left(X_{Aj}(t)-X_{ij}(t)\right)\tag{23}$$
where r_1 and r_2 are random numbers between 0 and 1; c_1 and c_2 are random numbers selected from the interval [−1, 1]; and MSX_ij and MSX_Aj are the jth dimensional positions generated from slime moulds X_ij and X_Aj, respectively, after horizontal crossover. The better of the two is selected as the dimension for vertical crossover. A is the subscript of the random individual.
In the standard CSO, the random numbers c_1 and c_2, chosen from the interval [−1, 1], influence the jth dimensional distances (X_ij(t) − X_Aj(t)) and (X_Aj(t) − X_ij(t)) between two random individuals. To some extent, this operation leads to an unordered stochastic state during the search process. To change this, sine and cosine functions are used instead of the random numbers c_1 and c_2 to regulate the distance control, as shown in Equations (24) and (25):
$$MSX_{ij}(t)=r_1\times X_{Aj}(t)+\left(1-r_2\right)\times X_{ij}(t)+\sin\left(2\pi r_1\right)\times\left(X_{Aj}(t)-X_{ij}(t)\right)\tag{24}$$
$$MSX_{Aj}(t)=r_2\times X_{Aj}(t)+\left(1-r_2\right)\times X_{ij}(t)+\cos\left(2\pi r_2\right)\times\left(X_{Aj}(t)-X_{ij}(t)\right)\tag{25}$$
The combination of sine and cosine functions in the algorithmic search process exhibits the characteristic of seeking constant oscillations between each other within a given space. This feature enables the algorithm to consistently maintain the population’s diversity among the individuals involved in the horizontal crossover phase. Figure 3 presents a schematic diagram illustrating the operational process for function values within the range of [−1, 1]. Through the horizontal crossover operation, diverse individuals are able to exchange information, facilitating an enhanced global search capability as they learn from each other.

3.3.2. Vertical Crossover

In the later stages, the SMA is susceptible to becoming trapped in local optima. This phenomenon primarily occurs when some individuals within the population succumb to local optima in a particular dimension, causing premature convergence of the entire population. The SMA itself lacks the capability to regulate individuals that have become trapped in local optima, impeding the search process and limiting the potential for discovering the global best solution. Therefore, the advantageous solutions resulting from horizontal crossover are selected as the parent population and subjected to vertical crossover, which operates between two different dimensions within an individual.
Assume that an individual X_i undergoes vertical crossover between its jth and dth dimensions to produce a newborn component MSX_ij. The calculation is depicted in Equation (26):
$$MSX_{ij}(t)=r\times X_{ij}(t)+\left(1-r\right)\times X_{id}(t)\tag{26}$$
where M S X i j is the offspring individual generated from the j t h and d t h dimensions of individual X i j by vertical crossover, and j d .
The competitive mechanism, as shown in Equation (27), incorporates slime mould individuals engaged in crossover, thus minimizing the risk of losing important dimensional information. This mechanism effectively enhances population diversity and consistently improves solution quality. The individuals who have fallen into the local optimum can make full use of the information in each dimension, and thus have the opportunity to jump out of the local optimum.
$$X_i(t)=\begin{cases}MSX_i(t), & \text{if}\ f\left(MSX_i(t)\right)<f\left(X_i(t)\right)\\ X_i(t), & \text{others}\end{cases}\tag{27}$$
In summary, after horizontal crossover between slime mould individuals with oscillatory properties in the preliminary and intermediate phases, and vertical crossover within individuals in the later phases, the degree of aggregation of the population is greatly reduced and the diversity of the population is increased. Figure 4 shows the process of horizontal and vertical crossover.
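The two operators can be sketched as follows. This is a hedged illustration: the pairing of individuals, the per-dimension random numbers, and the greedy acceptance of offspring follow common CSO practice rather than a published reference implementation.

```python
import numpy as np

def crisscross(X, f):
    # Improved crisscross recombination, Eqs. (24)-(27): horizontal crossover
    # between individuals, then vertical crossover between dimensions.
    n, dim = X.shape
    for i in range(n):                                     # horizontal, Eqs. (24)-(25)
        A = np.random.randint(n)
        r1, r2 = np.random.rand(dim), np.random.rand(dim)
        child_i = r1 * X[A] + (1 - r2) * X[i] + np.sin(2 * np.pi * r1) * (X[A] - X[i])
        child_A = r2 * X[A] + (1 - r2) * X[i] + np.cos(2 * np.pi * r2) * (X[A] - X[i])
        for child, idx in ((child_i, i), (child_A, A)):
            if f(child) < f(X[idx]):                       # Eq. (27): keep improvements
                X[idx] = child
    for i in range(n):                                     # vertical, Eq. (26)
        j, d = np.random.choice(dim, size=2, replace=False)
        r = np.random.rand()
        child = X[i].copy()
        child[j] = r * X[i][j] + (1 - r) * X[i][d]
        if f(child) < f(X[i]):                             # Eq. (27) again
            X[i] = child
    return X
```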

3.4. Improved ESMA

The aforementioned improvement approaches are now synthesized into a concise description of the enhanced slime mould algorithm (ESMA) proposed in this paper. The ESMA first uses a new search mechanism, a global search method modified by the population mean position and the Lévy flight function, to strengthen the algorithm's ability to explore unknown space in the early and intermediate periods. Then, a new learning method is introduced, which adopts dynamic parameters to adjust the search angle and distance of lens learning, generating a better leader slime mould individual that guides the whole population more effectively. Finally, a feasible update method is proposed, which adopts horizontal and vertical crossover to reorganize the dimensions of the slime mould individuals; this adds a perturbation mechanism to the SMA, which greatly benefits the algorithm's ability to find the best solution.
Figure 5 shows the algorithmic workflow of the ESMA, which incorporates three improvement strategies.
The pseudo-code of the ESMA is as follows in Algorithm 2.
Algorithm 2 ESMA
1. Input: the parameters N, max_iter, and the positions of slime moulds X_i (i = 1, 2, …, N);
2. t = 0;
3. while (t ≤ max_iter) do
4.   Calculate the fitness of all slime moulds;
5.   Update the best fitness and best position bF, X_b;
6.   Calculate W by Equation (14);
7.   Calculate d_i and f_i by Equations (10) and (11);
8.   for i = 1 to N
9.     Update p, v_b, v_c;
10.    if r < z
11.      Update position by Equation (6);
12.    else if r < p and d_i > 0 and f_i > 0
13.      Update position by Equation (15);
14.    else
15.      Update position by Equation (8);
16.    end if
17.  end for
18.  for i = 1 to N
19.    Calculate horizontal crossover positions by Equations (24) and (25);
20.    Calculate vertical crossover positions by Equation (26);
21.  end for
22.  Calculate the lens learning position X_ob by Equation (19);
23.  Update the best position by Equation (21);
24.  t = t + 1;
25. end while
26. Output: bF, X_b;
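Putting the three strategies together, a high-level sketch of Algorithm 2 could look as follows. Here explore, crisscross, and lens_learning are the hypothetical helpers sketched in Sections 3.1-3.3; folding the d_i/f_i condition into explore() is our reading of Equation (15), since Algorithm 2 also gates the branch on these quantities.

```python
import numpy as np

def esma(f, dim, lb, ub, n=30, max_iter=1000, z=0.03):
    # High-level sketch of Algorithm 2, reusing the helper sketches above.
    X = lb + np.random.rand(n, dim) * (ub - lb)
    Xb, bF = X[0].copy(), np.inf
    for t in range(max_iter):
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < bF:
            Xb, bF = X[fit.argmin()].copy(), fit.min()
        order = fit.argsort()
        wF = fit[order[-1]]
        better_half = set(order[: n // 2])
        l = np.arctanh(1 - (t + 1) / max_iter)
        vc = 1 - (t + 1) / max_iter
        for i in range(n):
            p = np.tanh(abs(fit[i] - bF))                 # Eq. (2)
            if np.random.rand() < z:                      # Eq. (6): random relocation
                X[i] = lb + np.random.rand(dim) * (ub - lb)
            elif np.random.rand() < p:                    # Eq. (15) via explore()
                vb = np.random.uniform(-l, l, dim)
                X[i] = explore(X, fit, Xb, bF, wF, i, vb, f, i in better_half)
            else:                                         # Eq. (8)
                X[i] = vc * X[i]
            X[i] = np.clip(X[i], lb, ub)
        X = crisscross(X, f)                              # Section 3.3, Eqs. (24)-(27)
        Xb = lens_learning(Xb, f, t, max_iter)            # Section 3.2, Eqs. (19)-(21)
        bF = min(bF, f(Xb))
    return Xb, bF
```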
To demonstrate the improved optimization mechanism of the ESMA and validate its scientific rigor and effectiveness, this paper adopts the CEC2017 multimodal Shifted and Rotated Expanded Schaffer's F6 function as the test benchmark, and the individual distribution graphs of the two algorithms during optimization are analyzed. With a maximum of 200 iterations and a population size of 50, the individual distribution plots of the two algorithms are shown in Figure 6 and Figure 7. The ESMA's rapid convergence is evident: individuals are more dispersed from the initial to intermediate stages, reflecting improved population diversity, and most individuals then closely follow the leader individual as it approaches the best value. Conversely, the SMA exhibits slow convergence and remains trapped in local optimal points, as indicated by its concentrated state.

3.5. Complexity Analysis

This paper employs the O notation to compute the time complexity. The improved ESMA mainly includes the following parts: random initialization, fitness evaluation, sorting, position updating, and distance judgment. Assume that the slime mould population size is N, the problem dimension is Dim, and the maximum number of iterations is max_iter. The computational complexity of initialization is O(N × Dim), the cost of sorting is O(N log N) per iteration, the complexity of weight updating is O(N × Dim), the complexity of population position updating is O(N × Dim), the complexity of distance judgment is O(N × Dim), the complexity of the lens learning method is O(Dim), the complexity of parameter tuning is O(1), and the complexity of horizontal and vertical crossover is the same as that of the position update, O(N × Dim). In summary, the total complexity of the ESMA can be estimated as O(N × Dim + max_iter × (N log N + N × Dim)).

4. Experimental Results and Discussion

In this section, the performance of the ESMA is verified by 40 standard test functions and 2 real-world optimization problems. The entire testing procedure is structured as follows: (1) establishment of experimental criteria and setup, (2) comparison of results obtained from CEC2017 functions, (3) comparison of results obtained from CEC2013 test functions, (4) execution time, (5) statistical test, (6) analysis of diversity in the ESMA, (7) analysis of exploration and exploitation in the ESMA, and (8) comparison of the results obtained from real-world engineering problems.

4.1. Experimental Criteria and Setup

There are 29 CEC2017 and 11 CEC2013 test functions used in this test. Generally, unimodal functions, which typically have a single global optimum, are commonly employed to evaluate the efficacy of SSAs. Additionally, the aforementioned multimodal functions are more intricate than unimodal functions, exhibiting numerous local optimum solutions. These multimodal functions can be utilized to examine the exploration capability of optimization techniques. Furthermore, composite and hybrid functions encompass a combination of the aforementioned two types of functions.
The parameter selection is mostly based on the parameters used by the original authors in the respective literature. Owing to the longer development history of PSO and DE, the parameter settings for PSO were obtained from reference [19], while the parameters for DE were determined based on extensive research by various researchers, as described in reference [61], in order to achieve better performance. Detailed parameter settings are presented in Table 2. Table 3 presents the categorization of the functions used in the experiment. Among the CEC2017 test functions, F1–F2 are classified as unimodal functions, F3–F9 as multimodal functions, F10–F19 as hybrid functions, and F20–F29 as composition functions. The CEC2013 test functions consist of F30–F31 as unimodal functions, F32–F38 as multimodal functions, and F39–F40 as composition functions. The experiment incorporates a total of 11 algorithms: three variants of the SMA (AGSMA, AOSMA, and MSMA), six widely recognized stochastic search algorithms (DE, PSO, CSA, HHO, SSA, and GWO), the standard SMA, and the improved ESMA.
To ensure consistency and reliability, each of the aforementioned algorithms was run for 1000 iterations with a population size of 30 on the designated test benchmark problems. Thirty independent runs were conducted, and the mean (Mean), standard deviation (Std), and best solution (Best) of the results are reported. The experiments were carried out on a PC with an Intel Core i5-5800H CPU @ 2.30 GHz running the Windows 10 operating system. MATLAB R2021a (version 9.10.0.1601886) was employed for calculating the test results.
Friedman’s test was utilized to assess the statistical significance of differences by examining the rank of ESMA in the tables below. The Friedman rank, R j , of the algorithm was calculated using the following procedure:
$$R_j=\frac{1}{N}\sum_{i}r_i^{\,j},\quad j=1,2,\ldots,n\tag{28}$$
The statistical distribution, χ F 2 , is shown in Equation (29).
$$\chi_F^2=\frac{12N}{n(n+1)}\left[\sum_{j}R_j^2-\frac{n(n+1)^2}{4}\right]\tag{29}$$
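The ranking step can be reproduced in a few lines of Python; the algorithm count and error values below are made up purely for illustration, and scipy's friedmanchisquare computes the χ²_F statistic of Equation (29).

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical mean errors of 3 algorithms (columns) on 5 functions (rows);
# the real inputs would be the Mean columns of Tables 4-8.
errors = np.array([
    [1.2e3, 4.5e3, 9.8e3],
    [2.1e1, 3.3e1, 7.0e1],
    [5.5e2, 5.9e2, 6.4e2],
    [8.0e0, 1.2e1, 2.5e1],
    [3.3e4, 4.1e4, 3.9e4],
])
# Eq. (28): rank the algorithms within each function (1 = best, ignoring ties),
# then average the ranks over the N functions.
ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1
print("Friedman average ranks:", ranks.mean(axis=0))
# Eq. (29): Friedman chi-square statistic and its p-value.
stat, pval = friedmanchisquare(errors[:, 0], errors[:, 1], errors[:, 2])
print(f"chi2_F = {stat:.3f}, p = {pval:.4f}")
```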

4.2. CEC2017-Based Effectiveness Analysis

This subsection provides a detailed analysis of the test results for each participating algorithm, based on the CEC2017 functions in dimensions of 30 Dim, 50 Dim, and 100 Dim, respectively.
Table 4 presents the experimental data obtained from simulating the ESMA and the other participating algorithms across 29 functions, each with a dimension of 30. The results show that the ESMA delivers excellent performance in terms of the mean value (Mean) compared to the other algorithms. The ESMA achieves the highest rank, with a Friedman rank of 1.7241, placing first in the overall ranking across 19 functions. These functions include the unimodal F1–F2, the multimodal F3 and F5–F6, the hybrid F10–F12, and the composition functions F25–F28. In contrast, the standard SMA, which did not achieve first place on any function, obtains a Friedman rank of 4.3793, ranking fourth overall. The variant algorithms AGSMA, AOSMA, and MSMA have Friedman rank values of 2.5172, 3.5172, and 5.0345, respectively. Among the other six stochastic search algorithms, namely DE, GWO, HHO, PSO, CSA, and SSA, with Friedman rank values of 8.6552, 5.7241, 7.8276, 10.0690, 6.5172, and 10.0345, respectively, only GWO achieved first rank, on F16. In terms of the best value (Best), the ESMA achieved the first-ranked best value on F19, F20, and four other functions, even though its mean value there did not reach first place. This indicates that the ESMA not only exhibits superior performance but also demonstrates better robustness, as evidenced by its lower standard deviation (Std).
The enhanced global exploration approach, which incorporates mean and Lévy flight functions, dynamic lens mapping learning, and dimensional reorganization of search individuals during iterations, significantly enhances its ability to locate the optimum. As a result, it becomes more effective at solving complex problems by reducing the risk of getting trapped in local optima.
Figure 8 shows the convergence curves of 15 representative functions. The ESMA demonstrates rapid convergence and high solution quality for the unimodal functions (F1 and F2), and exhibits significantly superior optimization accuracy for the multimodal functions. It is worth noting that for the eight complex hybrid and composition functions such as F10, the performance of the ESMA remains unaffected; the clustering of the convergence curves is due to the excessively large solution values of PSO, i.e., PSO is trapped in a local optimum. These results demonstrate the effectiveness of the ESMA in searching the unknown space of a given function. Furthermore, the experimental results indicate that employing multiple strategies enhances the SMA by improving population diversity, accelerating convergence, and strengthening the ability to search for the best solutions.
Table 5 shows the effectiveness of the improvement methods employed by the ESMA in a 50-dimensional environment. The three improvement strategies significantly enhance the algorithm's speed in high-dimensional spaces, enabling faster contraction of the solution space and laying the groundwork for local exploitation. The ESMA also exhibits greater stability and robustness than the comparison algorithms. The results indicate that the ESMA demonstrates superior performance, achieving the highest ranking in finding the best value for 19 of the 29 functions (65.5%), with a Friedman average rank of 1.6207, solidifying its dominance in the evaluation. The variant AGSMA achieves the best value on 13.8% of the functions and the AOSMA on 17.2%, while the MSMA and SMA do not achieve the best value on any function; their Friedman average ranks are 2.7241, 3.0690, 5.9655, and 3.8276, respectively. Among the other stochastic search algorithms, DE, HHO, PSO, CSA, and SSA do not win on any function; only GWO secures first place on one function, F15. Since most of the functions on which the ESMA excels fall under the multimodal, hybrid, and composition categories, this confirms its ability to strike a better balance between the exploration and exploitation phases and to effectively handle combinatorial functions with multiple local optima.
Figure 9 presents the boxplots of the 15 CEC2017 functions, showcasing the best solution achieved by the ESMA and each participating algorithm through 30 independent executions. The height of the box represents the fluctuation of the algorithm’s best value, and the bottom of the box indicates the best value of the algorithm. The narrower ESMA box in the functions F2, F5, F6, F9, and F13 represents a small fluctuation of all its best values, i.e., the algorithm converges faster, resulting in a smaller span between best solutions in each generation. Conversely, the participating algorithms, such as AGSMA, AOSMA, etc., exhibit wider boxes, indicating a greater variation in the solutions obtained throughout the search process. This suggests that these algorithms have lower robustness compared to the ESMA, as their solutions exhibit larger fluctuations from the beginning of the search to the end of the iteration. Furthermore, it is obvious that the lower limit of the ESMA box in functions such as F2, F3, F5–F7, F13, etc., is lower than that of the other algorithms. This indicates that the ESMA achieves higher search accuracy. These findings demonstrate that the incorporation of the three strategies in the ESMA aids in escaping local optimal solutions and guides the subsequent search process of the algorithm.
Table 6 shows a comparison of the mean, best, and standard deviation results obtained by each algorithm on the 29 CEC2017 benchmark functions at a dimension of 100. With increasing dimension, the performance of the ESMA remains consistently stable. It achieves first rank on 19 functions, including F1, F3, F6, F10–F14, and F24–F29, and its Friedman rank is 1.5172. These results highlight the ability of the ESMA to effectively tackle high-dimensional problems, showcasing its exceptional stability and robustness. Among the 11 participating stochastic search algorithms, the ESMA proves to be the top-performing method. In contrast, the standard SMA demonstrates weaker performance: it fails to achieve the best solution on any function and obtains a Friedman mean rank of 3.7241, fourth among the eleven algorithms. These results demonstrate that the novel global exploration approach, lens mapping learning with dynamic parameters, and dimensional reorganization using horizontal and vertical crossover effectively overcome the limitations of the standard SMA, particularly in high-dimensional scenarios.
The performance of the other variants of SMA, namely AGSMA, AOSMA, and MSMA, demonstrates a decreasing trend. AGSMA outperforms the other variants in four functions, while AOSMA excels in five functions and MSMA demonstrates superiority in one function. The corresponding Friedman mean rank values are 2.8276, 3.5172, and 6.4138, respectively.
Compared with the six other standard stochastic search algorithms, namely GWO, DE, HHO, PSO, CSA, and SSA, the ESMA exhibits the lowest mean value on all functions; among those six, GWO delivers the best optimization performance. The mean values demonstrate that the ESMA outperforms the other algorithms to varying degrees across the different types of benchmark functions. Furthermore, the ESMA also demonstrates superior robustness and effectiveness, as indicated by the Std and Best values.
In conclusion, the ESMA successfully enhances both convergence speed and optimization accuracy by improving population diversity and avoiding entrapment in local optima.

4.3. CEC2013-Based Effectiveness Analysis

Table 7 shows the benchmark function optimization results of the ESMA and the comparative algorithms on 11 CEC2013-based 30-dimensional problems. From the Mean and Std values in the table, it is evident that the ESMA achieves the highest ranking on the two unimodal functions, outperforming the comparative algorithms in both optimization accuracy and solution stability. These findings show that the ESMA is superior to the standard SMA and the other tested algorithms in unimodal function optimization: the slime mould individuals explore the solution space effectively, yielding improved optimization outcomes. The basic multimodal and composition functions F32–F40, which contain numerous local minima, pose a significant challenge to the optimization performance of the algorithms. Here, too, the ESMA demonstrates superior convergence accuracy and higher algorithmic stability than the comparative algorithms. The ESMA also achieves the best results for the Best value on F30–F34 and F36–F39, surpassing the other SMA variants and the non-SMA algorithms, which suggests that the ESMA performs exceptionally well even in extreme scenarios. Only on F35 is the optimization accuracy of the ESMA slightly lower than that of the AOSMA. Nevertheless, over the whole function test the ESMA achieves a Friedman mean rank of 1.4545, making it the top-performing algorithm on most functions. This shows that the optimization ability of the ESMA can be further enhanced through multi-strategy improvement.
Table 8 presents the results of optimizing functions in 100 dimensions for each algorithm. In the case of all 11 functions, the ESMA consistently achieves superior optimization accuracy as well as better solution stability. These findings suggest that, even with the incorporation of multiple improvement strategies, the optimization ability of the ESMA does not diminish as the dimension of the function increases. Thus, the ESMA consistently delivers good optimization results.

4.4. Execution Time

Table 9 shows the execution time of each participating algorithm for dimensions ranging from 30 to 100. The times are obtained by executing the 29 CEC2017 functions and the 11 CEC2013 functions independently with each algorithm, repeating the process 30 times with the maximum number of iterations set to 1000. The table shows a clear distinction between the improved and unimproved algorithms, the former generally requiring more time than the latter. This discrepancy arises because the improved algorithms modify the formula structure and include additional strategies in order to enhance performance. Among the unimproved algorithms, the HHO takes the longest due to its numerous update methods, while the PSO and CSA require less time because they rely on fewer and simpler update methods. The second-highest time consumption is observed for the SMA, which is attributed to its three update methods involving the calculation of fitness values for the weights W and the associated sorting step. Among the improved algorithms, the MSMA has the highest time consumption, followed by the ESMA, whose time consumption is similar to that of the AOSMA. It is worth noting that the AOSMA incorporates only one contrastive learning method, whereas the ESMA incorporates three methods yet achieves a comparable time cost; compared with the MSMA, which also incorporates three strategies, the ESMA is significantly cheaper. In summary, the multiple strategies used by the ESMA incur a certain amount of execution time but deliver the best optimization search capability among the compared algorithms.
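For context, the timing protocol above can be reproduced with a simple harness; the sketch below is a minimal version using Python's perf_counter, where the optimizer call signature is hypothetical and not taken from the paper's code:

```python
import time

def mean_runtime(optimizer, problem, runs=30, max_iter=1000):
    """Average wall-clock execution time of an optimizer over repeated runs."""
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        optimizer(problem, max_iter=max_iter)   # hypothetical optimizer interface
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)
```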

4.5. Statistical Test

Table 10 displays the p-values, which serve as an essential indicator of whether a significant difference exists between the algorithms. A p-value below 0.01 indicates a significant difference in the data. For the ESMA, the asymptotic significance of the p-value is consistently below 0.01 in the 30 Dim, 50 Dim, and 100 Dim scenarios, indicating a significant performance difference relative to the other algorithms. This difference is likely attributable to the three improvement strategies implemented in the ESMA.
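To illustrate how such p-values, and the Friedman average ranks quoted throughout Section 4, can be obtained, the sketch below applies SciPy to a small matrix of best values taken from Table 4 (three functions, three algorithms). Treating the pairwise comparison as a Wilcoxon signed-rank test is our assumption, since the table itself reports only the resulting p-values:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata, wilcoxon

# results[i, j]: best value of algorithm j on test case i (values from Table 4)
results = np.array([
    [8.80e3, 5.38e4, 2.33e8],   # F1: ESMA, AGSMA, AOSMA
    [7.35e2, 1.89e4, 2.96e4],   # F2
    [4.99e2, 5.13e2, 5.08e2],   # F3
])

# Friedman average rank: rank the algorithms within each case, then average.
avg_rank = rankdata(results, axis=1).mean(axis=0)
print("Friedman average ranks:", avg_rank)

# Omnibus Friedman test across all algorithms.
stat, p = friedmanchisquare(*results.T)
print("Friedman p-value:", p)

# Pairwise Wilcoxon signed-rank test (assumed form of the pairwise comparison).
stat, p = wilcoxon(results[:, 0], results[:, 1])
print("Wilcoxon p-value:", p)
```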

4.6. ESMA Diversity Analysis

The preceding experiments covered the CEC2017 and CEC2013 benchmarks, encompassing a total of 40 functions tested from various perspectives. The experimental results demonstrate that the ESMA exhibits favorable optimization-seeking capabilities. This subsection investigates the algorithm's dimensional diversity during the iterative process. Hussain [62] emphasizes that thoroughly exploring the diversity of an algorithm is as important as studying performance metrics such as the mean and variance; the key distinction is that the former focuses on the behavioral state of each searching individual within the entire population.
In this paper, we adopt Hussain’s proposed method of measuring dimensional diversity to investigate the behavioral state of slime mould individuals in the ESMA. Equations (30) and (31) show this method.
$Div_j = \frac{1}{N} \sum_{i=1}^{N} \left| \mathrm{median}(X^j) - X_i^j \right|$ (30)
$Div = \frac{1}{Dim} \sum_{j=1}^{Dim} Div_j$ (31)
where median(X^j) is the median of dimension j over the whole population, X_i^j is dimension j of slime mould individual i, and N is the size of the population. Div_j is the diversity of dimension j averaged over all searching individuals, and Div is the mean diversity over all dimensions.
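As an illustration, this diversity measure takes only a few lines to compute; the following sketch (the array shape and names are our own, not from the original implementation) evaluates Equations (30) and (31) for one population snapshot:

```python
import numpy as np

def dimension_diversity(population: np.ndarray) -> float:
    """Dimension-wise population diversity, Equations (30)-(31).

    population: (N, Dim) array with one row per slime mould individual.
    Returns Div, the diversity averaged over all dimensions.
    """
    median = np.median(population, axis=0)                 # median of each dimension j
    div_j = np.mean(np.abs(median - population), axis=0)   # Div_j for every dimension
    return float(np.mean(div_j))                           # Div over all Dim dimensions

# Example: a random 30-individual, 30-dimensional population in [-100, 100]
rng = np.random.default_rng(0)
print(dimension_diversity(rng.uniform(-100, 100, size=(30, 30))))
```

Recording this value once per iteration yields diversity curves such as those in Figure 10.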
To comprehensively demonstrate the diversity of the ESMA, Figure 10 presents the dimensional diversity curves obtained by calculating the nine CEC2017 functions, including unimodal, multimodal, hybrid, and composition functions, with a dimension of 30. The figure illustrates that the amalgamation of various improvement strategies enables the algorithm to maintain a favorable population diversity. In the stage of global exploration, each function exhibits significant diversity, facilitating improved exploration of unknown regions in the entire search space. With an increasing number of iterations, the ESMA continues to maintain strong population diversity in functions such as multimodal F3 and composition function F26, enabling it to identify best solutions during local searching.

4.7. ESMA Exploration and Exploitation Analysis

This part of the experiment builds upon the diversity measurements described above. It quantifies the relationship between exploration and exploitation, highlighting the importance of balancing global and local search to achieve effective performance. The specific calculations can be found in Ref. [62] and are detailed in Equations (32) and (33).
$Xpl\% = \frac{Div}{Div_{max}} \times 100$ (32)
$Xpt\% = \frac{\left| Div - Div_{max} \right|}{Div_{max}} \times 100$ (33)
where Xpl% and Xpt% are the percentages of exploration and exploitation, respectively, and Div_max is the maximum diversity of the population.
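Both percentages follow directly from the recorded diversity history; the short sketch below (variable names assumed, building on the dimension_diversity function above) converts one Div value per iteration into the curves plotted in Figure 11:

```python
import numpy as np

def exploration_exploitation(div_history):
    """Per-iteration exploration/exploitation percentages, Equations (32)-(33).

    div_history: sequence with one Div value per iteration.
    """
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()                             # maximum diversity over the run
    xpl = div / div_max * 100.0                     # exploration percentage, Eq. (32)
    xpt = np.abs(div - div_max) / div_max * 100.0   # exploitation percentage, Eq. (33)
    return xpl, xpt

# Example: a diversity curve that decays as the search converges
xpl, xpt = exploration_exploitation([9.0, 7.5, 5.0, 2.5, 1.0])
print(xpl.round(1), xpt.round(1))  # [100. 83.3 55.6 27.8 11.1] [ 0. 16.7 44.4 72.2 88.9]
```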
Figure 11 presents the results of the ESMA’s computation of the exploration and exploitation percentages for the nine functions in CEC2017 under 30 Dim.
From the figure, it can be observed that the algorithm exhibits a high exploitation percentage, which rests on its earlier, extensive search of unknown regions during the global exploration mode.
Specifically, the average exploitation percentages for unimodal F2, multimodal F3, and composition F22 are remarkably high at 94.57%, 93.02%, and 93.32%, respectively. This enables the ESMA to achieve improved approximation results for the problem. Furthermore, when confronted with other complex, multidimensional, and multimodal problems, the ESMA maintains an appropriate exploration percentage, effectively enhancing its ability to prevent premature convergence in the early stage of the algorithm. In conclusion, the ESMA effectively strikes a balance between global exploration and local exploitation.

4.8. Real-World Engineering Problems

All stochastic search algorithms are developed and enhanced with the aim of solving real-world problems. In the previous test, their effectiveness was examined through numerical experiments. In this subsection, the ESMA method is applied to evaluate the exploration and exploitation capabilities in real-world scenarios. To ensure fairness, the experimental environment for these engineering problems remains consistent with the numerical tests.

4.8.1. Robot Path Planning Problem

Robot path planning plays a crucial role in various fields, including agricultural production, underwater operations, air transportation, etc.
Hao et al. [63] utilized a genetic algorithm (GA) to address the obstacle avoidance problem of an autonomous underwater vehicle (AUV). Song et al. [64], on the other hand, applied an ant colony optimization (ACO) algorithm to solve the path planning problem encountered by a coal mining robot.
In this paper, we take the classic two-dimensional path selection problem of a robot as an illustrative example to discuss the application of the ESMA. It is assumed that, in path planning, the movement of each searching individual from the starting point to the end point constitutes a feasible path. The environment is modeled with the grid method as a map matrix G of size m × n, with each element occupying a 1 × 1 cell. The robot must move within the 2D map and is restricted by obstacles, which are encoded in the elements of G: an element of 0 denotes a feasible node, and an element of 1 denotes an obstacle region. The robot can only move across grid cells whose value is 0. The dimension Dim corresponds to the column subscripts of G. The path planning objective is to find the shortest movement distance of an individual, so the path length can be expressed as Equation (34):
$L(x) = \sum_{i=0}^{Dim-1} \sqrt{\left(x_{i+1} - x_i\right)^2 + \left(y_{i+1} - y_i\right)^2}$ (34)
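The grid encoding and the objective in Equation (34) are straightforward to express in code; the sketch below is a minimal illustration, where the 0/1 map and the candidate path are examples of our own rather than the experimental map:

```python
import numpy as np

# 0 = feasible node, 1 = obstacle region, as in the map matrix G
G = np.array([
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
])

def path_length(xs, ys):
    """Euclidean path length over the waypoints, Equation (34)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

def is_feasible(xs, ys, grid):
    """A path is feasible only if every visited cell has value 0."""
    return all(grid[y, x] == 0 for x, y in zip(xs, ys))

xs, ys = [0, 1, 2, 3, 3], [0, 0, 1, 2, 3]   # one candidate path
if is_feasible(xs, ys, G):
    print(path_length(xs, ys))              # 2 straight moves + 2 diagonals, about 4.83
```

In the experiment, each searching individual encodes one such candidate path, and the optimizer minimizes L(x) over feasible paths only.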
In this path planning test, a total of seven algorithms, namely the AOSMA, AOA [65], GWO, DE, PSO, CSA, and SMA, were compared with the ESMA. The parameters for the AOA were set according to the original literature, while the parameters for the other algorithms remained consistent with Table 2. The population size was set to 30, the number of iterations to 20, and the map size to 20 × 20. Each algorithm was run independently 30 times. The best distance and the average distance of each algorithm's results were recorded, and rankings were assigned based on these values.
The statistical data in Table 11 show that the ESMA achieves the shortest best distance and average distance among all algorithms while avoiding obstacles: a best distance of 29.7989 and an average distance of 29.8109, placing the ESMA in the first position. This indicates that the path planned by the ESMA is highly stable. The standard SMA ranks second, with a best distance of 29.9159 and an average distance of 29.8989, both larger than those of the ESMA. The AOSMA, AOA, GWO, DE, PSO, and CSA rank third, sixth, seventh, fifth, fourth, and eighth, respectively, all performing worse than the ESMA. Figure 12 illustrates the trajectory of each participating algorithm; the ESMA exhibits superior obstacle avoidance capabilities and achieves the shortest path.

4.8.2. Pressure Vessel Design Problem

The objective of pressure vessel design (PVD) [66] is to minimize the total cost of manufacturing the pressure vessel (including the costs of material, forming, and welding) while meeting the required specifications. The design, shown in Figure 13, includes covers at both ends of the vessel, with the cover at the head end being hemispherical. L is the length of the cylindrical section without considering the head, R is the inner radius of the cylindrical section, and T_s and T_h denote the wall thickness of the cylindrical part and of the head, respectively. L, R, T_s, and T_h are the four optimization variables of the pressure vessel design problem. The objective function of the problem and the four optimization constraints are shown in Equations (35)–(41).
$x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]$ (35)
$\min f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$ (36)
The constraints are as follows:
$g_1(x) = -x_1 + 0.0193 x_3 \le 0$ (37)
$g_2(x) = -x_2 + 0.00954 x_3 \le 0$ (38)
$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0$ (39)
$g_4(x) = x_4 - 240 \le 0$ (40)
$0 \le x_1 \le 100, \; 0 \le x_2 \le 100, \; 0 \le x_3 \le 100, \; 0 \le x_4 \le 100$ (41)
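For readers reproducing the experiment, the problem is typically handed to each optimizer as a penalized objective; the sketch below is one minimal formulation, where the penalty factor and the sample design point are our own illustrative choices rather than settings from the paper:

```python
import math

def pvd_cost(x, penalty=1e12):
    """Penalized pressure vessel design objective, x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                              # g1: shell thickness
        -x2 + 0.00954 * x3,                                             # g2: head thickness
        -math.pi * x3**2 * x4 - 4.0 / 3.0 * math.pi * x3**3 + 1296000,  # g3: volume
        x4 - 240,                                                       # g4: length limit
    ]
    violation = sum(max(0.0, gi) for gi in g)   # total constraint violation
    return cost + penalty * violation

print(pvd_cost([1.0, 0.5, 45.0, 150.0]))   # an arbitrary feasible design, not the optimum
```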
The ESMA is compared with the AGWO [66], WOA, ALO [67], PSO, PIO, and SMA, each of which is implemented based on the original literature. According to the results presented in Table 12, the ESMA outperforms all other algorithms in solving the pressure vessel design problem, achieving the lowest total cost.

5. Summary and Future Work

This paper introduces a modification to the global exploration phase of the SMA in order to replace its aimless random exploration with search guided by positive and negative feedback. The improved approach, which combines the Lévy flight technique with selective average position and jumping, enhances the global optimality-seeking performance of the ESMA compared to the original operation. The introduction of a dynamic lens mapping learning strategy improves the position of the leader individual and its ability to escape local optima. Additionally, a vertical and horizontal crossover between dimensions rearranges the current individuals during each iteration, promoting population diversity. The performance of the ESMA was assessed on a set of 40 functions from CEC2017 and CEC2013, yielding favorable numerical results across various metrics such as Best, Mean, Standard Deviation, and the Friedman test. Furthermore, the ESMA demonstrates its superiority on two classical real-world problems: path planning and pressure vessel design.
In future research, it is worth exploring the applicability of the ESMA to a wider range of engineering problems, including but not limited to imaging and chemistry, in order to expand its utilization. Furthermore, integrating the ESMA with other approaches can facilitate the development of more valuable and improved versions.

Author Contributions

Conceptualization, W.X. and D.L.; methodology, W.X. and D.L.; software, W.X.; validation, W.X., D.L. and D.Z.; formal analysis, W.X. and D.L.; investigation, W.X. and D.L.; resources, D.L. and D.Z.; data curation, W.X. and D.L.; writing—original draft preparation, W.X.; writing—review and editing, Z.L.; visualization, R.L.; supervision, D.L. and Z.L.; project administration, D.L.; funding acquisition, W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Si, T.; Patra, D.K.; Mondal, S.; Mukherjee, P. Breast DCE-MRI segmentation for lesion detection using Chimp Optimization Algorithm. Expert Syst. Appl. 2022, 204, 117481. [Google Scholar] [CrossRef]
2. Chen, H.; Jiao, S.; Wang, M.; Heidari, A.A.; Zhao, X. Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. J. Clean. Prod. 2020, 244, 118788. [Google Scholar]
  3. Zhu, D.; Huang, Z.; Liao, S.; Zhou, C.; Yan, S.; Chen, G. Improved Bare Bones Particle Swarm Optimization for DNA Sequence Design. IEEE Trans. NanoBiosci. 2023, 22, 603–613. [Google Scholar] [CrossRef]
  4. Yan, Z.; Wang, J.; Li, G.C. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization. Neural Netw. Off. J. Int. Neural Netw. Soc. 2014, 55, 20–29. [Google Scholar] [CrossRef]
  5. Sahoo, S.K.; Saha, A.K. A Hybrid Moth Flame Optimization Algorithm for Global Optimization. J. Bionic Eng. 2022, 19, 1522–1543. [Google Scholar] [CrossRef]
  6. Chen, X.; Tang, B.; Fan, J.; Guo, X. Online gradient descent algorithms for functional data learning. J. Complex. 2022, 70, 101635. [Google Scholar] [CrossRef]
7. Andrei, N. A diagonal quasi-Newton updating method for unconstrained optimization. Numer. Algorithms 2019, 81, 575–590. [Google Scholar]
  8. Bellet, J.-B.; Croisille, J.-P. Least squares spherical harmonics approximation on the Cubed Sphere. J. Comput. Appl. Math. 2023, 429, 115213. [Google Scholar] [CrossRef]
  9. Bader, A.; Randa, A.; Hafez, E.H.; Riad Fathy, H. On the Mixture of Normal and Half-Normal Distributions. Math. Probl. Eng. 2022, 2022, 3755431. [Google Scholar]
  10. Natido, A.; Kozubowski, T.J. A uniform-Laplace mixture distribution. J. Comput. Appl. Math. 2023, 429, 115236. [Google Scholar] [CrossRef]
  11. Laith, A.; Ali, D.; Woo, G.Z. A Comprehensive Survey of the Harmony Search Algorithm in Clustering Applications. Appl. Sci. 2020, 10, 3827. [Google Scholar]
  12. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  13. Wang, G.G.; Deb, S.; Cui, Z.H. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014. [Google Scholar] [CrossRef]
  14. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
15. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar]
16. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar]
  17. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
18. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar]
  19. Baykasoglu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  20. Zhu, D.; Wang, S.; Zhou, C.; Yan, S. Manta ray foraging optimization based on mechanics game and progressive learning for multiple optimization problems. Appl. Soft Comput. 2023, 145, 110561. [Google Scholar] [CrossRef]
  21. Bhargava, G.; Yadav, N.K. Solving combined economic emission dispatch model via hybrid differential evaluation and crow search algorithm. Evol. Intell. 2022, 15, 1161–1169. [Google Scholar] [CrossRef]
  22. Li, S.; Chen, H.; Wang, M.; Asghar, H.A.; Seyedali, M. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  23. Zhou, X.; Chen, Y.; Wu, Z.; Heidari, A.A.; Chen, H.; Alabdulkreem, E.; Escorcia-Gutierrez, J.; Wang, X. Boosted local dimensional mutation and all-dimensional neighborhood slime mould algorithm for feature selection. Neurocomputing 2023, 551, 126467. [Google Scholar] [CrossRef]
  24. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  25. Deng, L.; Liu, S. An enhanced slime mould algorithm based on adaptive grouping technique for global optimization. Expert Syst. Appl. 2023, 222, 119877. [Google Scholar] [CrossRef]
26. Naik, M.K.; Panda, R.; Abraham, A. Adaptive opposition slime mould algorithm. Soft Comput. 2021, 25, 14297–14313. [Google Scholar]
  27. Deng, L.; Liu, S. A multi-strategy improved slime mould algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 404, 116200. [Google Scholar] [CrossRef]
  28. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  29. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks—Conference Proceedings, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
30. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar]
  31. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
32. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar]
  33. Hong, M.; Zhongrui, Q.; Chengbi, Z. Multi-Strategy Improved Slime Mould Algorithm and its Application in Optimal Operation of Cascade Reservoirs. Water Resour. Manag. 2022, 36, 3029–3048. [Google Scholar]
  34. Altay, O. Chaotic slime mould optimization algorithm for global optimization. Artif. Intell. Rev. 2022, 55, 3979–4040. [Google Scholar] [CrossRef]
  35. Tang, A.D.; Tang, S.Q.; Han, T.; Zhou, H.; Xie, L. A Modified Slime Mould Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 2298215. [Google Scholar] [CrossRef]
  36. Gao, Z.-M.; Zhao, J.; Li, S.-R. The Improved Slime Mould Algorithm with Cosine Controlling Parameters. J. Phys. Conf. Ser. 2020, 1631, 012083. [Google Scholar] [CrossRef]
  37. Hu, J.; Gui, W.; Asghar, H.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z. Dispersed foraging slime mould algorithm: Continuous and binary variants for global optimization and wrapper-based feature selection. Knowl.-Based Syst. 2022, 237, 107761. [Google Scholar] [CrossRef]
  38. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC)algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  39. Rizk-Allah, R.M.; Hassanien, A.E.; Song, D. Chaos-opposition-enhanced slime mould algorithm for minimizing the cost of energy for the wind turbines on high-altitude sites. ISA Trans. 2022, 121, 191–205. [Google Scholar] [CrossRef]
  40. Ren, L.; Heidari, A.A.; Cai, Z.; Shao, Q.; Liang, G.; Chen, H.L.; Pan, Z. Gaussian kernel probability-driven slime mould algorithm with new movement mechanism for multi-level image segmentation. Measurement. J. Int. Meas. Confed. 2022, 192, 110884. [Google Scholar] [CrossRef]
  41. Ahmadianfar, I.; Noori, R.M.; Togun, H. Multi-strategy Slime Mould Algorithm for hydropower multi-reservoir systems optimization. Knowl.-Based Syst. 2022, 250, 109048. [Google Scholar] [CrossRef]
  42. Pawani, K.; Singh, M. Combined Heat and Power Dispatch Problem Using Comprehensive Learning Wavelet-Mutated Slime Mould Algorithm. Electr. Power Compon. Syst. 2023, 51, 12–28. [Google Scholar] [CrossRef]
  43. Sun, K.; Jia, H.; Li, Y.; Jiang, Z. Hybrid improved slime mould algorithm with adaptive β hill climbing for numerical optimization. J. Intell. Fuzzy Syst. 2020, 40, 1667–1679. [Google Scholar] [CrossRef]
  44. Zhong, C.; Li, G.; Meng, Z. A hybrid teaching–learning slime mould algorithm for global optimization and reliability-based design optimization problems. Neural Comput. Appl. 2022, 34, 16617–16642. [Google Scholar] [CrossRef]
  45. Liu, L.; Zhao, D.; Yu, F.; Heidari, A.A.; Ru, J.; Chen, H.; Mafarja, M.; Turabieh, H.; Pan, Z. Performance optimization of differential evolution with slime mould algorithm for multilevel breast cancer image segmentation. Comput. Biol. Med. 2021, 138, 104910. [Google Scholar] [CrossRef]
  46. Izci, D.; Ekinci, S.; Zeynelgil, H.L.; Hedley, J. Fractional Order PID Design based on Novel Improved Slime Mould Algorithm. Electr. Power Compon. Syst. 2022, 49, 901–918. [Google Scholar] [CrossRef]
  47. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
48. Örnek, B.N.; Aydemir, S.B.; Düzenli, T.; Özak, B. A novel version of slime mould algorithm for global optimization and real world engineering problems: Enhanced slime mould algorithm. Math. Comput. Simul. 2022, 198, 253–288. [Google Scholar]
  49. Mirjalili, S. SCA: A Sine Cosine algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  50. Yin, S.; Luo, Q.; Zhou, Y. EOSMA: An Equilibrium Optimizer Slime Mould Algorithm for Engineering Design Problems. Arab. J. Sci. Eng. 2022, 47, 10115–10146. [Google Scholar] [CrossRef]
  51. Faramarzi, A.; Heidarinejad, M.; Stephen, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  52. Tang, H.; Fang, B.; Liu, R. A hybrid teaching and learning-based optimization algorithm for distributed sand casting job-shop scheduling problem. Appl. Soft Comput. 2022, 120, 108694. [Google Scholar] [CrossRef]
  53. Joshi, S.K. Levy flight incorporated hybrid learning model for gravitational search algorithm. Knowl.-Based Syst. 2023, 265, 110374. [Google Scholar] [CrossRef]
54. Ewees, A.A.; Mostafa, R.R.; Ghoniem, R.M.; Gaheen, M.A. Improved seagull optimization algorithm using Lévy flight and mutation operator for feature selection. Neural Comput. Appl. 2022, 34, 7437–7472. [Google Scholar] [CrossRef]
  55. Park, S.-Y.; Lee, J.-J. Stochastic opposition-based learning using a beta distribution in differential evolution. IEEE Trans. Cybern. 2016, 46, 2184–2194. [Google Scholar] [CrossRef]
  56. Yu, X.; Xu Wang, Y.; Li, C.-L. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139. [Google Scholar] [CrossRef]
  57. Long, W.; Jiao, J.; Xu, M. Lens-imaging learning Harris hawks optimizer for global optimization and its application to feature selection. Expert Syst. Appl. 2022, 202, 117255. [Google Scholar] [CrossRef]
  58. Han, M.; Du, Z.; Zhu, H.; Li, Y.; Yuan, Q.; Zhu, H. Golden-Sine dynamic marine predator algorithm for addressing engineering design optimization. Expert Syst. Appl. 2022, 210, 118460. [Google Scholar] [CrossRef]
  59. Yue, S.; Zhang, H. A hybrid grasshopper optimization algorithm with bat algorithm for global optimization. Multimed. Tools Appl. 2021, 80, 3863–3884. [Google Scholar] [CrossRef]
  60. Meng, A.B.; Chen, Y.C.; Yin, H.; Chen, S.Z. Crisscross optimization algorithm and its application. Knowl.-Based Syst. 2014, 67, 218–229. [Google Scholar] [CrossRef]
61. Mohamed, A.W.; Mohamed, A.K. Adaptive guided differential evolution algorithm with novel mutation for numerical optimization. Int. J. Mach. Learn. Cybern. 2019, 10, 253–277. [Google Scholar] [CrossRef]
62. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar]
  63. Hao, K.; Zhao, J.; Li, Z.; Liu, Y.; Zhao, L. Dynamic path planning of a three-dimensional underwater AUV based on an adaptive genetic algorithm. Ocean Eng. 2022, 263, 112421. [Google Scholar] [CrossRef]
  64. Song, B.; Miao, H.; Xu, L. Path planning for coal mine robot via improved ant colony optimization algorithm. Syst. Sci. Control Eng. 2021, 9, 283–289. [Google Scholar] [CrossRef]
  65. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  66. Ma, C.; Huang, H.; Fan, Q. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  67. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
Figure 1. Search process of the SMA.
Figure 2. Lens imaging principle.
Figure 3. Crossover of sine and cosine functions.
Figure 4. Process of horizontal and vertical crossover.
Figure 5. Scheme of the ESMA.
Figure 6. Individual distribution of the SMA. (a) The SMA individual initialization map. (b) Individual distribution of the SMA in 100 generations.
Figure 7. Individual distribution of the ESMA. (a) The ESMA individual initialization map. (b) Individual distribution of the ESMA in 100 generations.
Figure 8. Convergence curves for each comparison algorithm.
Figure 9. Fifty-dimensional boxplot of each comparison algorithm.
Figure 10. Diversity measurement for the ESMA.
Figure 11. Curves of exploitation and exploration rates for the ESMA.
Figure 12. Movement paths for each algorithm.
Figure 13. Pressure vessel design.
Table 1. Summary of algorithms.
Algorithm/Literature | Improved Ways | Practical Application
MSMA/[33] | Nonlinear parameter z, and tent map improvement for elite individuals. | Generation of electricity from a system of terraced reservoirs.
CSMA/[34] | Improvements to the weight are made using 10 different chaotic mappings. | Three engineering problems, including tension–compression spring design.
MSMA/[35] | Two adaptive parameters; the initial population is improved using chaotic opposites and a spiral strategy. | Welded beam design and tension–compression spring problems.
ISMA/[36] | Cosine instead of arctanh function to improve the parameter a. | (not reported)
DFSMA/[37] | Dispersed foraging strategy. | Feature selection tasks.
CO-SMA/[39] | Logistic map for population perturbation and opposite learning to improve elite individuals. | High-altitude wind turbines.
MGSMA/[40] | MVO strategy for the foraging process and Gaussian kernel probability strategy to perturb the current individual. | Multi-threshold image segmentation.
MSMA/[41] | Based on the DE mutation operator, adjustable parameters are used to perturb the current solution. | Generating power of hydroelectric multi-reservoir systems.
WMSMA/[42] | Wavelet mutations perturb current individuals. | Cogeneration scheduling issues.
BTSMA/[43] | Brownian motion and tournament selection mechanisms for improved global exploration, and an adaptive hill climbing strategy for localized exploitation. | Structural engineering design, multilayer perceptron.
TLSMA/[44] | Hybrid TLBO algorithm: the population is divided into two subpopulations, the first using TLBO search and the second using SMA search. | Five RBRO problems including numerical design optimization.
MDE/[45] | Mixing with the DE algorithm through its mutation and crossover stages. | Multi-threshold segmentation of breast cancer images.
OBL-SMA-SA/[46] | SA is used to guide global exploration, while adversarial learning improves local exploitation. | Adjusting FOPID controller parameters.
SCSMA/[48] | The exploration and exploitation update mechanism of the SMA is improved using sine and cosine functions. | Four real-world problems, including the design of a cantilever beam.
EOSMA/[50] | The exploration and exploitation formulas are adjusted to update the positions of the best and random individuals; the EO update replaces the simple random search stage of the SMA, and differential mutation is applied to the current individual. | Nine issues, including vehicle side-impact design.
Table 2. Parameter settings for each algorithm.
Algorithm | Parameters
AGSMA/AOSMA/SMA | z = 0.03
MSMA | z = 0.03, E = 100, and N = 10
AGWO | B = 0.8, a = 2, and r1, r2 ∈ (0, 1)
DE | Scaling factor = 0.85 and Crossover probability = 0.8
PSO | c1 = 2, c2 = 2, and Vmax = 6
CSA | Ap = 0.1 and fl = 1.5
HHO | β = 1.5
SSA | c1, c2 ∈ (0, 1)
GWO | a = [2, 0]
AOA | μ = 0.499, MOPmax = 1, and MOPmin = 0.2
ALO | w ∈ [2, 6]
WOA | a and A ∈ [2, 0], l ∈ [−1, 1], and b = 1
Table 3. Standard test functions.
Function | Optimal | Range
CEC2017 Unimodal Functions
F1(x): Shifted and Rotated Bent Cigar Function | 100 | [−100, 100]
F2(x): Shifted and Rotated Zakharov Function | 300 | [−100, 100]
CEC2017 Simple Multimodal Functions
F3(x): Shifted and Rotated Rosenbrock's Function | 400 | [−100, 100]
F4(x): Shifted and Rotated Rastrigin's Function | 500 | [−100, 100]
F5(x): Shifted and Rotated Expanded Scaffer's F6 Function | 600 | [−100, 100]
F6(x): Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 | [−100, 100]
F7(x): Shifted and Rotated Non-Continuous Rastrigin's Function | 800 | [−100, 100]
F8(x): Shifted and Rotated Levy Function | 900 | [−100, 100]
F9(x): Shifted and Rotated Schwefel's Function | 1000 | [−100, 100]
CEC2017 Hybrid Functions
F10(x): Hybrid Function 1 (N = 3) | 1100 | [−100, 100]
F11(x): Hybrid Function 2 (N = 3) | 1200 | [−100, 100]
F12(x): Hybrid Function 3 (N = 3) | 1300 | [−100, 100]
F13(x): Hybrid Function 4 (N = 4) | 1400 | [−100, 100]
F14(x): Hybrid Function 5 (N = 4) | 1500 | [−100, 100]
F15(x): Hybrid Function 6 (N = 4) | 1600 | [−100, 100]
F16(x): Hybrid Function 6 (N = 5) | 1700 | [−100, 100]
F17(x): Hybrid Function 6 (N = 5) | 1800 | [−100, 100]
F18(x): Hybrid Function 6 (N = 5) | 1900 | [−100, 100]
F19(x): Hybrid Function 6 (N = 6) | 2000 | [−100, 100]
CEC2017 Composition Functions
F20(x): Composition Function 1 (N = 3) | 2100 | [−100, 100]
F21(x): Composition Function 2 (N = 3) | 2200 | [−100, 100]
F22(x): Composition Function 3 (N = 4) | 2300 | [−100, 100]
F23(x): Composition Function 4 (N = 4) | 2400 | [−100, 100]
F24(x): Composition Function 5 (N = 5) | 2500 | [−100, 100]
F25(x): Composition Function 6 (N = 5) | 2600 | [−100, 100]
F26(x): Composition Function 7 (N = 6) | 2700 | [−100, 100]
F27(x): Composition Function 8 (N = 6) | 2800 | [−100, 100]
F28(x): Composition Function 9 (N = 3) | 2900 | [−100, 100]
F29(x): Composition Function 10 (N = 3) | 3000 | [−100, 100]
CEC2013 Unimodal Functions
F30(x): Rotated Discus Function | −1100 | [−100, 100]
F31(x): Different Powers Function | −1000 | [−100, 100]
CEC2013 Basic Multimodal Functions
F32(x): Rotated Rosenbrock's Function | −900 | [−100, 100]
F33(x): Rotated Griewank's Function | −500 | [−100, 100]
F34(x): Rastrigin's Function | −400 | [−100, 100]
F35(x): Non-Continuous Rastrigin's Function | −200 | [−100, 100]
F36(x): Schwefel's Function | −100 | [−100, 100]
F37(x): Lunacek Bi-Rastrigin Function | 300 | [−100, 100]
F38(x): Expanded Griewank's plus Rosenbrock's Function | 500 | [−100, 100]
CEC2013 Composition Functions
F39(x): Composition Function 2 (N = 3, Unrotated) | 800 | [−100, 100]
F40(x): Composition Function 4 (N = 3, Rotated) | 1000 | [−100, 100]
Table 4. Result of the ESMA 30 Dim on CEC2017 functions.
F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA
F1Mean5.38 × 1042.33 × 1084.05 × 1082.31 × 1092.77 × 1092.95 × 1078.97 × 1093.31 × 1072.43 × 10101.31 × 1048.80 × 103
Best6.20 × 1021.78 × 1036.34 × 1074.26 × 1064.88 × 1082.13 × 1073.01 × 1097.82 × 1061.48 × 10102.76 × 1032.70 × 102
Std2.43 × 1053.78 × 1087.66 × 1085.45 × 1092.65 × 1099.79 × 1064.68 × 1092.00 × 1076.24 × 1097.85 × 1037.76 × 103
rank3678951041121
F2Mean1.89 × 1042.96 × 1043.21 × 1042.37 × 1055.00 × 1043.71 × 1041.85 × 1053.43 × 1047.28 × 1046.19 × 1037.35 × 102
Best4.50 × 1038.04 × 1032.10 × 1041.23 × 1054.04 × 1042.63 × 1046.56 × 1041.97 × 1044.34 × 1041.20 × 1033.16 × 102
Std6.22 × 1031.13 × 1045.87 × 1038.07 × 1049.77 × 1036.24 × 1039.48 × 1048.56 × 1032.17 × 1044.13 × 1038.85 × 102
rank3451187106921
F3Mean5.13 × 1025.08 × 1025.45 × 1026.98 × 1026.11 × 1025.54 × 1021.67 × 1036.08 × 1023.30 × 1035.15 × 1024.99 × 102
Best4.94 × 1024.76 × 1024.87 × 1024.68 × 1025.30 × 1025.00 × 1028.42 × 1025.28 × 1021.69 × 1034.70 × 1024.64 × 102
Std2.26 × 1011.83 × 1013.22 × 1013.88 × 1028.54 × 1013.78 × 1011.54 × 1035.17 × 1019.47 × 1022.07 × 1012.34 × 101
rank3259861071141
F4Mean6.09 × 1025.87 × 1026.28 × 1026.86 × 1026.20 × 1027.57 × 1027.91 × 1027.06 × 1028.35 × 1026.19 × 1026.13 × 102
Best5.74 × 1025.42 × 1025.91 × 1026.01 × 1025.74 × 1026.97 × 1027.38 × 1026.55 × 1027.64 × 1025.63 × 1025.59 × 102
Std3.21 × 1011.74 × 1012.35 × 1015.85 × 1012.72 × 1013.53 × 1012.19 × 1013.54 × 1012.95 × 1012.74 × 1012.95 × 101
rank2167591081143
F5Mean6.13 × 1026.12 × 1026.15 × 1026.35 × 1026.11 × 1026.65 × 1026.56 × 1026.55 × 1026.67 × 1026.11 × 1026.09 × 102
Best6.08 × 1026.04 × 1026.09 × 1026.18 × 1026.05 × 1026.51 × 1026.32 × 1026.38 × 1026.53 × 1026.03 × 1026.03 × 102
Std4.79 × 1005.16 × 1004.14 × 1001.60 × 1013.83 × 1005.52 × 1001.69 × 1017.90 × 1001.31 × 1018.29 × 1004.15 × 100
rank5467210981131
F6Mean8.57 × 1028.54 × 1029.14 × 1021.22 × 1039.00 × 1021.30 × 1031.19 × 1031.07 × 1031.49 × 1038.66 × 1028.57 × 102
Best8.14 × 1028.21 × 1028.55 × 1021.00 × 1038.26 × 1021.12 × 1031.08 × 1039.29 × 1021.33 × 1037.99 × 1028.17 × 102
Std3.11 × 1013.06 × 1012.59 × 1011.57 × 1026.19 × 1015.63 × 1014.99 × 1019.95 × 1011.29 × 1023.22 × 1012.85 × 101
rank2369510871141
F7Mean9.12 × 1028.79 × 1029.14 × 1021.00 × 1038.89 × 1029.80 × 1021.09 × 1039.41 × 1021.11 × 1039.26 × 1028.99 × 102
Best8.66 × 1028.38 × 1028.81 × 1028.95 × 1028.60 × 1029.11 × 1021.05 × 1039.26 × 1021.06 × 1038.57 × 1028.65 × 102
Std2.94 × 1011.94 × 1012.20 × 1015.98 × 1012.07 × 1012.40 × 1014.01 × 1012.36 × 1012.89 × 1012.55 × 1012.69 × 101
rank4159281071163
F8Mean2.70 × 1032.48 × 1032.55 × 1037.02 × 1032.47 × 1038.41 × 1037.00 × 1034.39 × 1038.58 × 1033.86 × 1032.40 × 103
Best1.14 × 1031.11 × 1031.35 × 1033.06 × 1031.16 × 1036.05 × 1033.12 × 1032.53 × 1035.52 × 1031.73 × 1031.17 × 103
Std1.02 × 1034.46 × 1021.17 × 1033.84 × 1037.95 × 1026.50 × 1023.03 × 1037.84 × 1022.60 × 1031.50 × 1038.50 × 102
rank5349210871161
F9Mean4.56 × 1034.80 × 1038.58 × 1037.68 × 1035.01 × 1036.03 × 1038.06 × 1035.16 × 1038.77 × 1034.75 × 1034.36 × 103
Best3.39 × 1033.50 × 1037.42 × 1035.13 × 1033.01 × 1034.55 × 1036.53 × 1033.47 × 1038.10 × 1033.66 × 1033.62 × 103
Std6.10 × 1028.86 × 1023.83 × 1021.42 × 1031.55 × 1036.34 × 1025.65 × 1027.31 × 1022.75 × 1026.66 × 1025.79 × 102
rank2410857961131
F10Mean1.26 × 1031.28 × 1031.28 × 1036.16 × 1032.24 × 1031.31 × 1035.92 × 1031.44 × 1033.86 × 1031.27 × 1031.25 × 103
Best1.15 × 1031.14 × 1031.19 × 1031.28 × 1031.33 × 1031.25 × 1031.84 × 1031.27 × 1032.35 × 1031.18 × 1031.18 × 103
Std3.53 × 1014.66 × 1013.66 × 1016.46 × 1031.01 × 1036.47 × 1015.87 × 1038.07 × 1019.22 × 1025.38 × 1015.33 × 101
rank2541186107931
F11Mean2.22 × 1063.39 × 1061.48 × 1078.07 × 1079.37 × 1074.23 × 1071.37 × 1098.35 × 1072.17 × 1093.74 × 1062.19 × 106
Best1.17 × 1053.20 × 1053.97 × 1062.78 × 1051.26 × 1077.21 × 1063.26 × 1083.71 × 1061.28 × 1091.17 × 1059.43 × 104
Std1.58 × 1062.52 × 1061.06 × 1072.01 × 1089.10 × 1073.77 × 1071.08 × 1097.16 × 1076.51 × 1082.47 × 1061.23 × 106
rank2357961081141
F12Mean2.25 × 1043.83 × 1045.80 × 1063.51 × 1082.26 × 1072.01 × 1061.27 × 1097.08 × 1047.64 × 1084.57 × 1042.21 × 104
Best1.70 × 1031.02 × 1046.47 × 1041.65 × 1043.48 × 1042.23 × 1053.80 × 1071.88 × 1042.46 × 1081.29 × 1043.08 × 103
Std2.14 × 1042.59 × 1042.52 × 1079.09 × 1084.37 × 1076.63 × 1061.72 × 1094.15 × 1043.97 × 1082.67 × 1041.93 × 104
rank2379861151041
F13Mean5.27 × 1041.07 × 1058.22 × 1041.89 × 1066.38 × 1058.95 × 1052.17 × 1069.41 × 1045.25 × 1051.02 × 1056.84 × 104
Best1.65 × 1031.76 × 1049.89 × 1032.09 × 1032.04 × 1034.41 × 1045.52 × 1043.10 × 1034.59 × 1041.17 × 1046.83 × 103
Std5.16 × 1049.02 × 1047.15 × 1045.46 × 1061.01 × 1067.11 × 1054.32 × 1061.19 × 1054.23 × 1058.32 × 1044.33 × 104
rank1631089114752
F14Mean3.32 × 1043.23 × 1043.75 × 1043.23 × 1062.21 × 1068.91 × 1043.51 × 1072.38 × 1041.33 × 1083.70 × 1042.21 × 104
Best1.58 × 1035.02 × 1038.89 × 1032.41 × 1031.51 × 1042.51 × 1042.93 × 1068.69 × 1034.96 × 1072.24 × 1031.84 × 103
Std8.70 × 1031.52 × 1043.40 × 1041.69 × 1076.79 × 1066.31 × 1042.82 × 1079.45 × 1038.95 × 1071.47 × 1041.53 × 104
rank4369871021151
F15Mean2.46 × 1032.51 × 1032.58 × 1033.39 × 1032.54 × 1033.49 × 1034.11 × 1033.52 × 1033.97 × 1032.66 × 1032.46 × 103
Best1.91 × 1031.86 × 1032.06 × 1032.82 × 1032.09 × 1032.33 × 1033.37 × 1032.75 × 1033.51 × 1031.97 × 1031.76 × 103
Std3.19 × 1022.50 × 1023.45 × 1025.98 × 1023.49 × 1023.85 × 1026.59 × 1024.22 × 1023.21 × 1023.15 × 1023.13 × 102
rank2368591141071
F16Mean2.17 × 1032.18 × 1032.21 × 1032.66 × 1032.03 × 1032.64 × 1032.96 × 1032.52 × 1032.70 × 1032.25 × 1032.19 × 103
Best1.80 × 1031.84 × 1031.92 × 1031.91 × 1031.81 × 1032.09 × 1032.16 × 1032.21 × 1032.55 × 1031.95 × 1031.85 × 103
Std1.87 × 1021.88 × 1022.03 × 1022.93 × 1021.52 × 1022.76 × 1024.64 × 1022.52 × 1022.05 × 1022.02 × 1022.18 × 102
rank2359181171064
F17Mean1.02 × 1061.12 × 1067.87 × 1051.27 × 1071.04 × 1063.57 × 1062.43 × 1071.00 × 1069.46 × 1061.64 × 1066.21 × 105
Best8.12 × 1041.55 × 1053.77 × 1048.96 × 1049.59 × 1041.34 × 1051.65 × 1066.19 × 1041.31 × 1063.13 × 1056.18 × 104
Std6.55 × 1051.40 × 1067.73 × 1052.50 × 1071.17 × 1064.76 × 1068.33 × 1071.01 × 1064.67 × 1061.87 × 1063.88 × 105
rank4621058113971
F18Mean1.28 × 1043.16 × 1044.99 × 1041.03 × 1072.88 × 1068.34 × 1051.08 × 1082.61 × 1062.21 × 1082.57 × 1041.20 × 104
Best1.97 × 1032.19 × 1033.52 × 1032.07 × 1034.37 × 1047.27 × 1041.44 × 1071.53 × 1056.42 × 1072.15 × 1032.06 × 103
Std1.22 × 1042.80 × 1046.17 × 1042.62 × 1076.94 × 1064.04 × 1051.55 × 1082.01 × 1061.02 × 1082.09 × 1041.33 × 104
rank2459861071131
F19Mean2.44 × 1032.51 × 1032.63 × 1033.10 × 1032.44 × 1032.81 × 1032.89 × 1032.58 × 1032.95 × 1032.59 × 1032.51 × 103
Best2.16 × 1032.21 × 1032.19 × 1032.42 × 1032.22 × 1032.40 × 1032.57 × 1032.33 × 1032.62 × 1032.12 × 1032.10 × 103
Std1.66 × 1021.99 × 1021.70 × 1023.19 × 1021.95 × 1022.12 × 1022.22 × 1021.68 × 1021.35 × 1021.88 × 1021.95 × 102
rank1471128951063
F20Mean2.41 × 1032.38 × 1032.41 × 1032.51 × 1032.41 × 1032.59 × 1032.62 × 1032.50 × 1032.60 × 1032.41 × 1032.40 × 103
Best2.37 × 1032.35 × 1032.37 × 1032.40 × 1032.36 × 1032.46 × 1032.56 × 1032.44 × 1032.56 × 1032.38 × 1032.35 × 103
Std2.61 × 1012.24 × 1011.78 × 1016.95 × 1013.85 × 1016.76 × 1014.73 × 1013.04 × 1012.55 × 1012.63 × 1012.72 × 101
rank4138691171052
F21Mean2.32 × 1035.10 × 1032.96 × 1038.89 × 1034.87 × 1037.44 × 1038.14 × 1032.98 × 1038.25 × 1035.97 × 1035.19 × 103
Best2.32 × 1032.34 × 1032.47 × 1032.87 × 1032.62 × 1032.33 × 1033.63 × 1032.32 × 1034.26 × 1032.32 × 1032.30 × 103
Std6.21 × 1001.73 × 1036.61 × 1021.69 × 1031.98 × 1039.24 × 1022.39 × 1031.41 × 1032.57 × 1031.24 × 1031.49 × 103
rank1521148931076
F22Mean2.76 × 1032.76 × 1032.77 × 1032.87 × 1032.78 × 1033.25 × 1033.38 × 1033.20 × 1033.02 × 1032.76 × 1032.75 × 103
Best2.71 × 1032.71 × 1032.73 × 1032.71 × 1032.73 × 1032.86 × 1033.07 × 1032.96 × 1032.93 × 1032.72 × 1032.71 × 103
Std2.69 × 1012.04 × 1012.49 × 1016.53 × 1014.72 × 1011.46 × 1021.54 × 1021.01 × 1023.85 × 1012.41 × 1012.49 × 101
rank4257610119831
F23Mean2.91 × 1032.91 × 1032.97 × 1033.02 × 1032.94 × 1033.50 × 1033.61 × 1033.27 × 1033.16 × 1032.94 × 1032.92 × 103
Best2.87 × 1032.87 × 1032.92 × 1032.91 × 1032.88 × 1033.23 × 1033.43 × 1033.07 × 1033.11 × 1032.88 × 1032.87 × 103
Std2.17 × 1012.00 × 1012.49 × 1016.72 × 1015.05 × 1011.27 × 1021.64 × 1021.64 × 1023.87 × 1013.13 × 1013.39 × 101
rank2167510119843
F24Mean2.89 × 1032.93 × 1032.93 × 1033.02 × 1033.00 × 1032.94 × 1033.42 × 1032.98 × 1034.29 × 1032.94 × 1032.92 × 103
Best2.89 × 1032.89 × 1032.91 × 1032.89 × 1032.96 × 1032.91 × 1033.12 × 1032.95 × 1033.90 × 1032.90 × 1032.88 × 103
Std1.87 × 1012.35 × 1011.58 × 1011.33 × 1025.62 × 1012.12 × 1011.84 × 1023.19 × 1014.33 × 1021.55 × 1011.30 × 101
rank1439861071152
F25Mean4.57 × 1034.66 × 1034.84 × 1036.07 × 1034.85 × 1037.70 × 1037.23 × 1036.42 × 1037.68 × 1034.94 × 1034.51 × 103
Best3.91 × 1034.13 × 1034.38 × 1034.89 × 1034.14 × 1033.00 × 1034.21 × 1032.91 × 1035.46 × 1034.24 × 1033.97 × 103
Std4.33 × 1022.21 × 1023.03 × 1029.51 × 1024.70 × 1021.71 × 1032.37 × 1032.14 × 1038.52 × 1023.53 × 1024.53 × 102
rank2347511108961
F26Mean3.23 × 1033.23 × 1033.24 × 1033.28 × 1033.26 × 1033.51 × 1033.78 × 1033.71 × 1033.48 × 1033.24 × 1033.23 × 103
Best3.21 × 1033.21 × 1033.22 × 1033.22 × 1033.22 × 1033.35 × 1033.37 × 1033.44 × 1033.38 × 1033.22 × 1033.20 × 103
Std1.84 × 1011.75 × 1011.34 × 1016.51 × 1012.60 × 1011.58 × 1023.65 × 1021.74 × 1026.36 × 1011.17 × 1011.68 × 101
rank3257691110841
F27Mean3.31 × 1033.35 × 1033.33 × 1033.75 × 1033.47 × 1033.34 × 1034.06 × 1033.38 × 1034.85 × 1033.29 × 1033.25 × 103
Best3.22 × 1033.27 × 1033.28 × 1033.21 × 1033.33 × 1033.28 × 1033.44 × 1033.29 × 1033.91 × 1033.21 × 1033.20 × 103
Std6.01 × 1012.36 × 1023.56 × 1016.30 × 1021.03 × 1023.02 × 1016.70 × 1024.98 × 1014.56 × 1024.10 × 1012.73 × 101
rank3649851071121
F28Mean3.86 × 1033.97 × 1033.95 × 1034.45 × 1033.88 × 1034.85 × 1035.13 × 1034.76 × 1035.12 × 1033.89 × 1033.84 × 103
Best3.37 × 1033.54 × 1033.55 × 1033.84 × 1033.57 × 1034.34 × 1034.13 × 1034.10 × 1034.60 × 1033.62 × 1033.44 × 103
Std2.11 × 1022.15 × 1022.26 × 1024.82 × 1022.12 × 1024.58 × 1027.23 × 1024.13 × 1023.61 × 1021.65 × 1022.23 × 102
rank2657381191041
F29Mean1.35 × 1042.77 × 1057.43 × 1051.58 × 1078.47 × 1065.56 × 1061.25 × 1081.18 × 1071.76 × 1084.76 × 1041.82 × 104
Best8.11 × 1031.73 × 1041.09 × 1059.60 × 1031.34 × 1061.83 × 1069.79 × 1062.32 × 1063.66 × 1071.27 × 1048.31 × 103
Std4.81 × 1032.81 × 1054.80 × 1058.06 × 1076.00 × 1064.23 × 1062.66 × 1088.97 × 1067.05 × 1073.22 × 1048.04 × 103
rank1459761081132
Count | 5 | 4 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 19
Friedman Rank | 2.5172 | 3.5172 | 5.0345 | 8.6552 | 5.7241 | 7.8276 | 10.0690 | 6.5172 | 10.0345 | 4.3793 | 1.7241
Table 5. Result of the ESMA 50 Dim on CEC2017 functions.
F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA
F1Mean8.34 × 1073.47 × 1094.46 × 1091.15 × 10101.21 × 10102.68 × 1082.68 × 10101.11 × 1097.88 × 10105.49 × 1053.17 × 104
Best9.97 × 1038.00 × 1071.97 × 1091.70 × 1072.38 × 1091.45 × 1081.62 × 10106.11 × 1084.10 × 10102.59 × 1051.03 × 104
Std1.87 × 1082.86 × 1091.96 × 1091.64 × 10104.88 × 1098.24 × 1077.79 × 1093.43 × 1081.44 × 10102.25 × 1051.91 × 104
rank3678941051121
F2Mean9.84 × 1049.19 × 1041.56 × 1055.33 × 1051.28 × 1051.35 × 1054.25 × 1051.12 × 1051.82 × 1058.16 × 1042.86 × 104
Best5.28 × 1045.18 × 1041.02 × 1053.22 × 1058.87 × 1041.01 × 1051.98 × 1057.91 × 1041.37 × 1054.68 × 1041.42 × 104
Std2.19 × 1041.99 × 1043.51 × 1041.84 × 1052.01 × 1042.12 × 1041.72 × 1052.05 × 1044.20 × 1042.94 × 1049.77 × 103
rank4381167105921
F3Mean5.89 × 1027.82 × 1029.32 × 1022.36 × 1031.54 × 1038.89 × 1023.60 × 1031.13 × 1031.64 × 1045.91 × 1025.52 × 102
Best4.95 × 1025.71 × 1026.12 × 1026.24 × 1027.52 × 1027.63 × 1022.00 × 1038.40 × 1026.02 × 1034.89 × 1024.49 × 102
Std7.35 × 1012.30 × 1023.49 × 1023.14 × 1035.68 × 1021.68 × 1021.15 × 1031.97 × 1026.09 × 1036.43 × 1013.81 × 101
rank2469851071131
F4Mean7.61 × 1027.12 × 1027.95 × 1029.09 × 1027.42 × 1029.28 × 1021.11 × 1038.45 × 1021.18 × 1037.88 × 1027.48 × 102
Best6.61 × 1026.36 × 1027.18 × 1027.62 × 1026.51 × 1028.38 × 1021.00 × 1037.67 × 1021.11 × 1036.71 × 1026.58 × 102
Std4.03 × 1013.91 × 1013.35 × 1019.15 × 1014.13 × 1012.88 × 1014.40 × 1013.11 × 1015.33 × 1014.57 × 1015.43 × 101
rank4168291071153
F5Mean6.46 × 1026.27 × 1026.29 × 1026.54 × 1026.29 × 1026.76 × 1026.75 × 1026.66 × 1026.90 × 1026.41 × 1026.23 × 102
Best6.24 × 1026.12 × 1026.19 × 1026.41 × 1026.13 × 1026.68 × 1026.54 × 1026.56 × 1026.69 × 1026.20 × 1026.16 × 102
Std1.32 × 1016.28 × 1004.01 × 1001.36 × 1014.86 × 1005.26 × 1001.07 × 1015.17 × 1008.89 × 1001.27 × 1018.26 × 100
rank6237410981151
F6Mean1.08 × 1031.12 × 1031.19 × 1032.11 × 1031.13 × 1031.85 × 1031.80 × 1031.64 × 1032.41 × 1031.11 × 1031.07 × 103
Best9.78 × 1029.85 × 1021.06 × 1031.23 × 1039.92 × 1021.70 × 1031.61 × 1031.31 × 1032.03 × 1031.00 × 1039.70 × 102
Std6.87 × 1019.01 × 1015.92 × 1012.52 × 1027.09 × 1018.69 × 1017.46 × 1011.26 × 1021.87 × 1028.35 × 1016.01 × 101
rank2461059871131
F7Mean1.08 × 1031.04 × 1031.09 × 1031.24 × 1031.05 × 1031.20 × 1031.42 × 1031.17 × 1031.48 × 1031.07 × 1031.02 × 103
Best9.66 × 1029.48 × 1021.03 × 1031.01 × 1039.85 × 1021.12 × 1031.31 × 1031.08 × 1031.36 × 1031.01 × 1039.45 × 102
Std5.74 × 1013.16 × 1013.25 × 1011.00 × 1023.67 × 1013.06 × 1015.17 × 1014.27 × 1015.09 × 1015.91 × 1014.66 × 101
rank5269381071141
F8Mean1.14 × 1046.83 × 1039.09 × 1032.88 × 1041.10 × 1042.88 × 1042.96 × 1041.35 × 1043.54 × 1041.38 × 1041.05 × 104
Best4.43 × 1033.64 × 1034.65 × 1038.89 × 1034.60 × 1032.28 × 1041.12 × 1041.09 × 1042.33 × 1045.73 × 1034.26 × 103
Std4.97 × 1033.77 × 1032.78 × 1031.43 × 1044.72 × 1033.12 × 1039.08 × 1032.20 × 1037.91 × 1034.70 × 1034.02 × 103
rank5129481061173
F9Mean7.58 × 1037.63 × 1031.50 × 1041.28 × 1047.85 × 1039.65 × 1031.47 × 1048.50 × 1031.53 × 1047.81 × 1037.25 × 103
Best5.32 × 1035.71 × 1031.39 × 1048.25 × 1035.87 × 1037.92 × 1031.22 × 1046.54 × 1031.48 × 1046.02 × 1035.75 × 103
Std8.96 × 1029.11 × 1024.52 × 1021.61 × 1031.54 × 1038.61 × 1028.66 × 1029.90 × 1024.96 × 1021.07 × 1037.49 × 102
rank2310857961141
F10Mean1.53 × 1031.42 × 1036.93 × 1032.16 × 1046.28 × 1031.73 × 1032.09 × 1043.16 × 1031.21 × 1041.43 × 1031.36 × 103
Best1.33 × 1031.24 × 1035.42 × 1032.49 × 1032.88 × 1031.56 × 1035.49 × 1032.33 × 1035.61 × 1031.24 × 1031.23 × 103
Std3.09 × 1028.29 × 1011.83 × 1031.83 × 1041.97 × 1031.54 × 1021.81 × 1046.72 × 1024.21 × 1037.16 × 1015.00 × 101
rank4281175106931
F11Mean1.51 × 1079.07 × 1074.15 × 1081.41 × 1091.31 × 1092.46 × 1081.52 × 10104.69 × 1081.52 × 10102.49 × 1071.25 × 107
Best1.94 × 1069.82 × 1064.19 × 1071.62 × 1071.45 × 1084.75 × 1074.99 × 1091.35 × 1088.19 × 1095.71 × 1067.75 × 105
Std8.64 × 1062.35 × 1087.60 × 1082.40 × 1091.25 × 1092.13 × 1081.11 × 10103.20 × 1083.55 × 1091.17 × 1076.83 × 106
rank2469851171031
F12Mean2.08 × 1044.57 × 1051.34 × 1073.57 × 1083.45 × 1081.35 × 1076.11 × 1097.29 × 1054.49 × 1091.03 × 1053.92 × 104
Best4.19 × 1031.62 × 1041.50 × 1062.35 × 1047.07 × 1051.61 × 1068.22 × 1085.95 × 1042.38 × 1093.64 × 1041.19 × 104
Std1.13 × 1041.95 × 1063.10 × 1078.17 × 1088.59 × 1083.37 × 1076.66 × 1095.77 × 1051.18 × 1093.33 × 1041.75 × 104
rank1469871151032
F13Mean2.67 × 1054.00 × 1051.10 × 1061.89 × 1061.21 × 1063.20 × 1062.78 × 1071.17 × 1064.50 × 1064.69 × 1052.65 × 105
Best8.84 × 1046.31 × 1043.94 × 1041.79 × 1056.35 × 1042.12 × 1059.70 × 1058.45 × 1041.34 × 1068.69 × 1045.53 × 104
Std1.91 × 1052.44 × 1059.82 × 1052.96 × 1061.26 × 1062.42 × 1065.00 × 1079.21 × 1052.33 × 1062.59 × 1051.64 × 105
rank2358791161041
F14Mean1.17 × 1041.10 × 1056.46 × 1052.33 × 1073.93 × 1078.07 × 1053.03 × 1084.56 × 1041.58 × 1093.23 × 1041.65 × 104
Best1.92 × 1031.01 × 1044.55 × 1043.05 × 1033.51 × 1041.18 × 1051.02 × 1088.41 × 1035.68 × 1085.26 × 1034.65 × 103
Std7.44 × 1031.92 × 1051.87 × 1068.23 × 1071.13 × 1083.12 × 1051.34 × 1084.59 × 1045.64 × 1081.22 × 1048.23 × 103
rank1568971041132
F15Mean3.36 × 1033.34 × 1033.45 × 1035.00 × 1033.20 × 1034.40 × 1036.26 × 1034.90 × 1036.20 × 1033.59 × 1033.43 × 103
Best2.21 × 1032.71 × 1032.52 × 1033.19 × 1032.46 × 1032.91 × 1034.87 × 1033.85 × 1035.68 × 1032.66 × 1032.75 × 103
Std3.98 × 1024.59 × 1024.88 × 1021.01 × 1034.09 × 1025.13 × 1021.19 × 1036.09 × 1025.01 × 1024.14 × 1024.35 × 102
rank3259171181064
F16Mean3.04 × 1033.04 × 1033.24 × 1034.04 × 1033.15 × 1033.87 × 1035.12 × 1033.70 × 1035.42 × 1033.32 × 1033.14 × 103
Best2.29 × 1032.19 × 1032.55 × 1033.44 × 1032.46 × 1032.99 × 1034.08 × 1032.90 × 1034.54 × 1032.42 × 1032.21 × 103
Std3.72 × 1022.88 × 1023.99 × 1026.98 × 1024.28 × 1024.04 × 1026.12 × 1023.80 × 1024.82 × 1023.79 × 1023.65 × 102
rank2159481071163
F17Mean4.22 × 1063.68 × 1065.13 × 1062.76 × 1071.19 × 1076.84 × 1063.99 × 1074.86 × 1064.89 × 1074.73 × 1062.74 × 106
Best2.79 × 1055.26 × 1051.12 × 1051.95 × 1061.52 × 1061.13 × 1064.17 × 1064.23 × 1051.09 × 1072.43 × 1057.67 × 105
| F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F17 | Std | 3.21 × 10^6 | 3.22 × 10^6 | 5.04 × 10^6 | 4.20 × 10^7 | 1.71 × 10^7 | 6.77 × 10^6 | 4.31 × 10^7 | 3.24 × 10^6 | 2.57 × 10^7 | 2.74 × 10^6 | 1.71 × 10^6 |
| | rank | 3 | 2 | 6 | 9 | 8 | 7 | 10 | 5 | 11 | 4 | 1 |
| F18 | Mean | 2.28 × 10^4 | 2.11 × 10^4 | 1.55 × 10^5 | 3.18 × 10^8 | 6.66 × 10^6 | 1.41 × 10^6 | 2.25 × 10^8 | 2.90 × 10^6 | 6.62 × 10^8 | 2.34 × 10^4 | 1.49 × 10^4 |
| | Best | 2.13 × 10^3 | 3.14 × 10^3 | 4.29 × 10^4 | 2.35 × 10^3 | 6.79 × 10^4 | 3.39 × 10^5 | 5.08 × 10^7 | 5.54 × 10^4 | 2.46 × 10^8 | 2.53 × 10^3 | 2.43 × 10^3 |
| | Std | 1.63 × 10^4 | 2.18 × 10^4 | 1.27 × 10^5 | 9.35 × 10^8 | 1.00 × 10^7 | 1.30 × 10^6 | 2.07 × 10^8 | 2.99 × 10^6 | 2.05 × 10^8 | 1.78 × 10^4 | 1.27 × 10^4 |
| | rank | 3 | 2 | 5 | 10 | 8 | 6 | 9 | 7 | 11 | 4 | 1 |
| F19 | Mean | 3.23 × 10^3 | 3.22 × 10^3 | 3.79 × 10^3 | 4.50 × 10^3 | 3.34 × 10^3 | 3.45 × 10^3 | 4.24 × 10^3 | 3.29 × 10^3 | 4.33 × 10^3 | 3.35 × 10^3 | 3.16 × 10^3 |
| | Best | 2.50 × 10^3 | 2.47 × 10^3 | 3.42 × 10^3 | 3.34 × 10^3 | 2.47 × 10^3 | 3.07 × 10^3 | 3.81 × 10^3 | 2.72 × 10^3 | 3.82 × 10^3 | 2.86 × 10^3 | 2.56 × 10^3 |
| | Std | 2.76 × 10^2 | 3.44 × 10^2 | 2.29 × 10^2 | 6.11 × 10^2 | 4.69 × 10^2 | 2.51 × 10^2 | 2.80 × 10^2 | 3.39 × 10^2 | 2.06 × 10^2 | 2.43 × 10^2 | 2.84 × 10^2 |
| | rank | 3 | 2 | 8 | 11 | 5 | 7 | 9 | 4 | 10 | 6 | 1 |
| F20 | Mean | 2.54 × 10^3 | 2.52 × 10^3 | 2.59 × 10^3 | 2.76 × 10^3 | 2.54 × 10^3 | 2.89 × 10^3 | 2.95 × 10^3 | 2.74 × 10^3 | 2.94 × 10^3 | 2.55 × 10^3 | 2.48 × 10^3 |
| | Best | 2.47 × 10^3 | 2.44 × 10^3 | 2.53 × 10^3 | 2.53 × 10^3 | 2.46 × 10^3 | 2.75 × 10^3 | 2.84 × 10^3 | 2.53 × 10^3 | 2.83 × 10^3 | 2.47 × 10^3 | 2.47 × 10^3 |
| | Std | 4.56 × 10^1 | 3.15 × 10^1 | 2.78 × 10^1 | 9.95 × 10^1 | 7.53 × 10^1 | 8.33 × 10^1 | 6.88 × 10^1 | 7.80 × 10^1 | 5.70 × 10^1 | 5.72 × 10^1 | 4.14 × 10^1 |
| | rank | 3 | 2 | 6 | 8 | 4 | 9 | 11 | 7 | 10 | 5 | 1 |
| F21 | Mean | 9.35 × 10^3 | 9.46 × 10^3 | 1.64 × 10^4 | 1.44 × 10^4 | 8.81 × 10^3 | 1.19 × 10^4 | 1.60 × 10^4 | 1.09 × 10^4 | 1.67 × 10^4 | 9.37 × 10^3 | 8.70 × 10^3 |
| | Best | 6.96 × 10^3 | 7.38 × 10^3 | 1.63 × 10^4 | 1.09 × 10^4 | 7.87 × 10^3 | 1.02 × 10^4 | 1.39 × 10^4 | 7.21 × 10^3 | 1.07 × 10^4 | 7.06 × 10^3 | 6.58 × 10^3 |
| | Std | 1.16 × 10^3 | 8.74 × 10^2 | 9.38 × 10^2 | 1.94 × 10^3 | 1.53 × 10^3 | 8.20 × 10^2 | 8.64 × 10^2 | 1.91 × 10^3 | 1.10 × 10^3 | 9.86 × 10^2 | 1.51 × 10^3 |
| | rank | 3 | 5 | 10 | 8 | 2 | 7 | 9 | 6 | 11 | 4 | 1 |
| F22 | Mean | 2.98 × 10^3 | 2.97 × 10^3 | 3.05 × 10^3 | 3.19 × 10^3 | 3.04 × 10^3 | 3.94 × 10^3 | 4.36 × 10^3 | 3.82 × 10^3 | 3.57 × 10^3 | 3.00 × 10^3 | 2.95 × 10^3 |
| | Best | 2.88 × 10^3 | 2.85 × 10^3 | 3.00 × 10^3 | 2.99 × 10^3 | 2.93 × 10^3 | 3.52 × 10^3 | 3.86 × 10^3 | 3.40 × 10^3 | 3.41 × 10^3 | 2.88 × 10^3 | 2.91 × 10^3 |
| | Std | 5.20 × 10^1 | 3.85 × 10^1 | 3.90 × 10^1 | 9.37 × 10^1 | 8.17 × 10^1 | 2.30 × 10^2 | 3.72 × 10^2 | 2.06 × 10^2 | 6.32 × 10^1 | 5.28 × 10^1 | 4.18 × 10^1 |
| | rank | 3 | 2 | 6 | 7 | 5 | 10 | 11 | 9 | 8 | 4 | 1 |
| F23 | Mean | 3.11 × 10^3 | 3.10 × 10^3 | 3.29 × 10^3 | 3.36 × 10^3 | 3.21 × 10^3 | 4.43 × 10^3 | 4.68 × 10^3 | 3.97 × 10^3 | 3.62 × 10^3 | 3.15 × 10^3 | 3.13 × 10^3 |
| | Best | 3.02 × 10^3 | 3.01 × 10^3 | 3.21 × 10^3 | 3.14 × 10^3 | 3.03 × 10^3 | 3.74 × 10^3 | 4.16 × 10^3 | 3.60 × 10^3 | 3.51 × 10^3 | 3.04 × 10^3 | 3.05 × 10^3 |
| | Std | 4.71 × 10^1 | 4.15 × 10^1 | 3.87 × 10^1 | 1.02 × 10^2 | 1.17 × 10^2 | 2.23 × 10^2 | 3.09 × 10^2 | 2.74 × 10^2 | 5.67 × 10^1 | 6.25 × 10^1 | 4.26 × 10^1 |
| | rank | 2 | 1 | 6 | 7 | 5 | 10 | 11 | 9 | 8 | 4 | 3 |
| F24 | Mean | 3.16 × 10^3 | 3.32 × 10^3 | 3.46 × 10^3 | 3.76 × 10^3 | 3.84 × 10^3 | 3.27 × 10^3 | 6.19 × 10^3 | 3.50 × 10^3 | 1.38 × 10^4 | 3.18 × 10^3 | 3.08 × 10^3 |
| | Best | 3.08 × 10^3 | 3.09 × 10^3 | 3.27 × 10^3 | 3.11 × 10^3 | 3.39 × 10^3 | 3.14 × 10^3 | 4.14 × 10^3 | 3.26 × 10^3 | 9.45 × 10^3 | 2.97 × 10^3 | 2.99 × 10^3 |
| | Std | 7.06 × 10^1 | 3.30 × 10^2 | 1.46 × 10^2 | 7.74 × 10^2 | 3.51 × 10^2 | 6.40 × 10^1 | 1.13 × 10^3 | 1.24 × 10^2 | 2.03 × 10^3 | 3.92 × 10^1 | 3.25 × 10^1 |
| | rank | 2 | 5 | 6 | 8 | 9 | 4 | 10 | 7 | 11 | 3 | 1 |
| F25 | Mean | 5.49 × 10^3 | 6.03 × 10^3 | 6.64 × 10^3 | 8.98 × 10^3 | 6.98 × 10^3 | 1.10 × 10^4 | 1.28 × 10^4 | 1.13 × 10^4 | 1.45 × 10^4 | 5.60 × 10^3 | 5.22 × 10^3 |
| | Best | 3.02 × 10^3 | 5.10 × 10^3 | 4.81 × 10^3 | 7.33 × 10^3 | 5.80 × 10^3 | 6.95 × 10^3 | 6.84 × 10^3 | 4.65 × 10^3 | 1.09 × 10^4 | 2.91 × 10^3 | 2.90 × 10^3 |
| | Std | 1.39 × 10^3 | 5.50 × 10^2 | 1.10 × 10^3 | 9.62 × 10^2 | 6.86 × 10^2 | 1.63 × 10^3 | 3.62 × 10^3 | 2.51 × 10^3 | 1.77 × 10^3 | 1.81 × 10^3 | 2.21 × 10^3 |
| | rank | 2 | 4 | 5 | 7 | 6 | 8 | 10 | 9 | 11 | 3 | 1 |
| F26 | Mean | 3.51 × 10^3 | 3.46 × 10^3 | 3.54 × 10^3 | 3.75 × 10^3 | 3.71 × 10^3 | 4.62 × 10^3 | 5.22 × 10^3 | 5.38 × 10^3 | 4.44 × 10^3 | 3.49 × 10^3 | 3.53 × 10^3 |
| | Best | 3.41 × 10^3 | 3.36 × 10^3 | 3.40 × 10^3 | 3.44 × 10^3 | 3.44 × 10^3 | 3.84 × 10^3 | 4.28 × 10^3 | 4.59 × 10^3 | 4.13 × 10^3 | 3.36 × 10^3 | 3.25 × 10^3 |
| | Std | 9.09 × 10^1 | 7.00 × 10^1 | 8.62 × 10^1 | 1.72 × 10^2 | 1.29 × 10^2 | 4.15 × 10^2 | 6.15 × 10^2 | 5.08 × 10^2 | 2.00 × 10^2 | 8.61 × 10^1 | 8.98 × 10^1 |
| | rank | 3 | 1 | 5 | 7 | 6 | 9 | 10 | 11 | 8 | 2 | 4 |
| F27 | Mean | 3.35 × 10^3 | 4.54 × 10^3 | 3.98 × 10^3 | 6.19 × 10^3 | 4.38 × 10^3 | 3.89 × 10^3 | 6.47 × 10^3 | 4.21 × 10^3 | 9.20 × 10^3 | 3.48 × 10^3 | 3.41 × 10^3 |
| | Best | 3.32 × 10^3 | 3.54 × 10^3 | 3.66 × 10^3 | 3.62 × 10^3 | 3.93 × 10^3 | 3.49 × 10^3 | 4.75 × 10^3 | 3.73 × 10^3 | 7.80 × 10^3 | 3.39 × 10^3 | 3.30 × 10^3 |
| | Std | 1.17 × 10^2 | 9.13 × 10^2 | 2.48 × 10^2 | 1.96 × 10^3 | 4.03 × 10^2 | 2.08 × 10^2 | 1.37 × 10^3 | 2.24 × 10^2 | 1.24 × 10^3 | 2.62 × 10^1 | 3.60 × 10^1 |
| | rank | 1 | 8 | 5 | 9 | 7 | 4 | 10 | 6 | 11 | 3 | 2 |
| F28 | Mean | 4.36 × 10^3 | 4.71 × 10^3 | 4.80 × 10^3 | 5.52 × 10^3 | 4.90 × 10^3 | 6.38 × 10^3 | 8.79 × 10^3 | 7.41 × 10^3 | 8.53 × 10^3 | 4.67 × 10^3 | 4.38 × 10^3 |
| | Best | 3.94 × 10^3 | 4.26 × 10^3 | 4.21 × 10^3 | 4.57 × 10^3 | 4.04 × 10^3 | 5.20 × 10^3 | 6.13 × 10^3 | 5.92 × 10^3 | 7.54 × 10^3 | 3.99 × 10^3 | 3.93 × 10^3 |
| | Std | 3.25 × 10^2 | 4.38 × 10^2 | 3.62 × 10^2 | 6.92 × 10^2 | 3.59 × 10^2 | 6.60 × 10^2 | 1.55 × 10^3 | 1.00 × 10^3 | 9.89 × 10^2 | 4.47 × 10^2 | 2.70 × 10^2 |
| | rank | 1 | 4 | 5 | 7 | 6 | 8 | 11 | 9 | 10 | 3 | 2 |
| F29 | Mean | 2.51 × 10^6 | 1.96 × 10^7 | 2.94 × 10^7 | 7.00 × 10^7 | 1.29 × 10^8 | 6.66 × 10^7 | 7.40 × 10^8 | 2.19 × 10^8 | 1.41 × 10^9 | 5.41 × 10^6 | 1.56 × 10^6 |
| | Best | 1.02 × 10^6 | 7.37 × 10^6 | 1.23 × 10^7 | 1.45 × 10^6 | 6.48 × 10^7 | 3.92 × 10^7 | 2.53 × 10^8 | 1.20 × 10^8 | 6.77 × 10^8 | 3.50 × 10^6 | 8.90 × 10^5 |
| | Std | 1.09 × 10^6 | 9.32 × 10^6 | 9.54 × 10^6 | 1.96 × 10^8 | 5.47 × 10^7 | 2.47 × 10^7 | 6.37 × 10^8 | 6.19 × 10^7 | 4.94 × 10^8 | 1.69 × 10^6 | 4.86 × 10^5 |
| | rank | 2 | 4 | 5 | 7 | 8 | 6 | 10 | 9 | 11 | 3 | 1 |
| Count | | 4 | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 19 |
| Friedman Rank | | 2.7241 | 3.0690 | 5.9655 | 8.5172 | 5.8276 | 7.2414 | 10.0345 | 6.8621 | 10.3103 | 3.8276 | 1.6207 |
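For readers who want to reproduce the bookkeeping in Tables 5–8, the sketch below (an illustrative reconstruction, not the authors' code) shows how the per-function rank rows, the Count row (number of first-place finishes), and the Friedman Rank row (mean rank over all functions) can be derived from a matrix of mean errors. The two sample rows are the F19 and F22 means from Table 5, and `scipy.stats.rankdata` performs the ranking.

```python
# Illustrative reconstruction (not the authors' code) of the rank bookkeeping
# used in Tables 5-8: per-function ranks, first-place counts, and mean ranks.
import numpy as np
from scipy.stats import rankdata

algorithms = ["AGSMA", "AOSMA", "MSMA", "DE", "GWO", "HHO",
              "PSO", "CSA", "SSA", "SMA", "ESMA"]

# Mean errors for F19 and F22 (50 Dim), copied from Table 5.
means = np.array([
    [3.23e3, 3.22e3, 3.79e3, 4.50e3, 3.34e3, 3.45e3,
     4.24e3, 3.29e3, 4.33e3, 3.35e3, 3.16e3],  # F19
    [2.98e3, 2.97e3, 3.05e3, 3.19e3, 3.04e3, 3.94e3,
     4.36e3, 3.82e3, 3.57e3, 3.00e3, 2.95e3],  # F22
])

# Rank each function row; 1 = smallest mean error (minimization).
ranks = np.vstack([rankdata(row, method="min") for row in means]).astype(int)
print(ranks)  # first row: 3 2 8 11 5 7 9 4 10 6 1 -- matches the F19 rank row

count_first = (ranks == 1).sum(axis=0)  # "Count": number of first places
mean_rank = ranks.mean(axis=0)          # "Friedman Rank" over these functions
for alg, c, mr in zip(algorithms, count_first, mean_rank):
    print(f"{alg:6s} Count = {c}  mean rank = {mr:.4f}")
```

Applied to the full set of 29 functions, the same aggregation yields the Count and Friedman Rank rows reported above.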
Table 6. Results of the ESMA on the CEC2017 functions (100 Dim).
| F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 1.05 × 10^10 | 3.96 × 10^10 | 4.54 × 10^10 | 4.41 × 10^10 | 5.34 × 10^10 | 7.98 × 10^9 | 1.27 × 10^11 | 2.26 × 10^10 | 2.63 × 10^11 | 5.76 × 10^7 | 5.82 × 10^6 |
| | Best | 4.05 × 10^9 | 1.70 × 10^10 | 3.94 × 10^10 | 2.57 × 10^10 | 3.07 × 10^10 | 5.45 × 10^9 | 8.98 × 10^10 | 1.72 × 10^10 | 2.08 × 10^11 | 3.13 × 10^7 | 2.95 × 10^6 |
| | Std | 4.96 × 10^9 | 9.10 × 10^9 | 8.74 × 10^9 | 1.82 × 10^10 | 9.14 × 10^9 | 1.80 × 10^9 | 1.73 × 10^10 | 3.81 × 10^9 | 2.59 × 10^10 | 1.22 × 10^7 | 1.62 × 10^6 |
| | rank | 4 | 6 | 8 | 7 | 9 | 3 | 10 | 5 | 11 | 2 | 1 |
| F2 | Mean | 3.30 × 10^5 | 3.99 × 10^5 | 2.77 × 10^5 | 1.28 × 10^6 | 4.14 × 10^5 | 3.63 × 10^5 | 1.13 × 10^6 | 3.12 × 10^5 | 5.21 × 10^5 | 5.66 × 10^5 | 3.21 × 10^5 |
| | Best | 2.66 × 10^5 | 2.51 × 10^5 | 2.51 × 10^5 | 7.31 × 10^5 | 3.08 × 10^5 | 2.75 × 10^5 | 4.84 × 10^5 | 2.60 × 10^5 | 3.67 × 10^5 | 2.28 × 10^5 | 1.50 × 10^5 |
| | Std | 2.18 × 10^4 | 2.06 × 10^4 | 1.39 × 10^4 | 4.92 × 10^5 | 5.02 × 10^4 | 1.48 × 10^4 | 3.23 × 10^5 | 2.53 × 10^4 | 1.00 × 10^5 | 2.41 × 10^5 | 9.56 × 10^4 |
| | rank | 4 | 6 | 1 | 11 | 7 | 5 | 10 | 2 | 8 | 9 | 3 |
| F3 | Mean | 1.42 × 10^3 | 2.02 × 10^3 | 4.01 × 10^3 | 5.72 × 10^3 | 5.76 × 10^3 | 2.64 × 10^3 | 2.30 × 10^4 | 4.56 × 10^3 | 6.08 × 10^4 | 9.17 × 10^2 | 8.49 × 10^2 |
| | Best | 9.40 × 10^2 | 1.16 × 10^3 | 2.67 × 10^3 | 2.38 × 10^3 | 3.34 × 10^3 | 1.79 × 10^3 | 1.12 × 10^4 | 3.13 × 10^3 | 3.94 × 10^4 | 7.77 × 10^2 | 7.22 × 10^2 |
| | Std | 3.34 × 10^2 | 5.17 × 10^2 | 9.91 × 10^2 | 2.83 × 10^3 | 2.18 × 10^3 | 3.73 × 10^2 | 6.76 × 10^3 | 6.77 × 10^2 | 1.04 × 10^4 | 7.42 × 10^1 | 6.89 × 10^1 |
| | rank | 3 | 4 | 7 | 9 | 10 | 6 | 5 | 8 | 11 | 2 | 1 |
| F4 | Mean | 1.31 × 10^3 | 1.15 × 10^3 | 1.44 × 10^3 | 1.68 × 10^3 | 1.23 × 10^3 | 1.60 × 10^3 | 2.00 × 10^3 | 1.48 × 10^3 | 2.21 × 10^3 | 1.37 × 10^3 | 1.21 × 10^3 |
| | Best | 1.05 × 10^3 | 9.78 × 10^2 | 1.33 × 10^3 | 1.38 × 10^3 | 1.09 × 10^3 | 1.51 × 10^3 | 1.80 × 10^3 | 1.30 × 10^3 | 2.08 × 10^3 | 1.08 × 10^3 | 1.12 × 10^3 |
| | Std | 8.50 × 10^1 | 7.40 × 10^1 | 7.66 × 10^1 | 1.90 × 10^2 | 1.18 × 10^2 | 5.10 × 10^1 | 9.50 × 10^1 | 6.99 × 10^1 | 1.00 × 10^2 | 1.03 × 10^2 | 9.27 × 10^1 |
| | rank | 4 | 1 | 6 | 9 | 3 | 8 | 10 | 7 | 11 | 5 | 2 |
| F5 | Mean | 6.64 × 10^2 | 6.49 × 10^2 | 6.56 × 10^2 | 6.76 × 10^2 | 6.59 × 10^2 | 6.86 × 10^2 | 7.01 × 10^2 | 6.77 × 10^2 | 7.09 × 10^2 | 6.71 × 10^2 | 6.58 × 10^2 |
| | Best | 6.50 × 10^2 | 6.32 × 10^2 | 6.49 × 10^2 | 6.53 × 10^2 | 6.31 × 10^2 | 6.80 × 10^2 | 6.83 × 10^2 | 6.64 × 10^2 | 6.95 × 10^2 | 6.42 × 10^2 | 6.40 × 10^2 |
| | Std | 7.29 × 10^0 | 5.52 × 10^0 | 3.78 × 10^0 | 1.19 × 10^1 | 4.79 × 10^0 | 3.89 × 10^0 | 9.80 × 10^0 | 3.26 × 10^0 | 8.65 × 10^0 | 6.85 × 10^0 | 7.71 × 10^0 |
| | rank | 5 | 1 | 2 | 7 | 4 | 9 | 10 | 8 | 11 | 6 | 3 |
| F6 | Mean | 2.11 × 10^3 | 2.27 × 10^3 | 2.26 × 10^3 | 5.82 × 10^3 | 2.11 × 10^3 | 3.73 × 10^3 | 3.88 × 10^3 | 3.27 × 10^3 | 5.25 × 10^3 | 2.11 × 10^3 | 2.00 × 10^3 |
| | Best | 1.77 × 10^3 | 1.92 × 10^3 | 2.01 × 10^3 | 3.62 × 10^3 | 1.87 × 10^3 | 3.37 × 10^3 | 3.38 × 10^3 | 2.86 × 10^3 | 4.37 × 10^3 | 1.65 × 10^3 | 1.64 × 10^3 |
| | Std | 2.06 × 10^2 | 1.46 × 10^2 | 1.51 × 10^2 | 8.78 × 10^2 | 1.33 × 10^2 | 1.31 × 10^2 | 1.68 × 10^2 | 2.41 × 10^2 | 3.08 × 10^2 | 2.46 × 10^2 | 1.94 × 10^2 |
| | rank | 3 | 6 | 5 | 11 | 2 | 8 | 9 | 7 | 10 | 4 | 1 |
| F7 | Mean | 1.61 × 10^3 | 1.43 × 10^3 | 1.76 × 10^3 | 2.05 × 10^3 | 1.57 × 10^3 | 2.06 × 10^3 | 2.31 × 10^3 | 1.93 × 10^3 | 2.50 × 10^3 | 1.58 × 10^3 | 1.55 × 10^3 |
| | Best | 1.41 × 10^3 | 1.27 × 10^3 | 1.56 × 10^3 | 1.68 × 10^3 | 1.41 × 10^3 | 1.94 × 10^3 | 2.15 × 10^3 | 1.82 × 10^3 | 2.30 × 10^3 | 1.38 × 10^3 | 1.30 × 10^3 |
| | Std | 1.14 × 10^2 | 6.67 × 10^1 | 6.97 × 10^1 | 2.16 × 10^2 | 6.02 × 10^1 | 4.88 × 10^1 | 9.59 × 10^1 | 9.96 × 10^1 | 1.22 × 10^2 | 8.61 × 10^1 | 1.01 × 10^2 |
| | rank | 5 | 1 | 6 | 8 | 3 | 9 | 10 | 7 | 11 | 4 | 2 |
| F8 | Mean | 3.23 × 10^4 | 2.26 × 10^4 | 5.74 × 10^4 | 7.80 × 10^4 | 4.40 × 10^4 | 6.41 × 10^4 | 9.13 × 10^4 | 3.47 × 10^4 | 1.00 × 10^5 | 3.45 × 10^4 | 2.89 × 10^4 |
| | Best | 2.35 × 10^4 | 1.68 × 10^4 | 3.77 × 10^4 | 4.81 × 10^4 | 2.16 × 10^4 | 5.21 × 10^4 | 6.37 × 10^4 | 3.08 × 10^4 | 8.63 × 10^4 | 2.63 × 10^4 | 1.81 × 10^4 |
| | Std | 7.40 × 10^3 | 3.94 × 10^3 | 1.24 × 10^4 | 2.50 × 10^4 | 1.15 × 10^4 | 5.61 × 10^3 | 1.49 × 10^4 | 3.03 × 10^3 | 9.75 × 10^3 | 5.27 × 10^3 | 3.96 × 10^3 |
| | rank | 3 | 1 | 7 | 9 | 6 | 8 | 10 | 5 | 11 | 4 | 2 |
| F9 | Mean | 1.74 × 10^4 | 1.70 × 10^4 | 3.27 × 10^4 | 2.79 × 10^4 | 1.77 × 10^4 | 2.27 × 10^4 | 3.20 × 10^4 | 1.94 × 10^4 | 3.26 × 10^4 | 1.74 × 10^4 | 1.62 × 10^4 |
| | Best | 1.40 × 10^4 | 1.40 × 10^4 | 3.08 × 10^4 | 2.12 × 10^4 | 1.42 × 10^4 | 2.09 × 10^4 | 2.88 × 10^4 | 1.58 × 10^4 | 3.19 × 10^4 | 1.27 × 10^4 | 1.37 × 10^4 |
| | Std | 2.09 × 10^3 | 1.27 × 10^3 | 7.21 × 10^2 | 3.15 × 10^3 | 4.02 × 10^3 | 1.91 × 10^3 | 1.07 × 10^3 | 1.48 × 10^3 | 6.32 × 10^2 | 2.68 × 10^3 | 1.51 × 10^3 |
| | rank | 3 | 2 | 11 | 8 | 5 | 7 | 9 | 6 | 10 | 4 | 1 |
| F10 | Mean | 2.38 × 10^4 | 2.00 × 10^4 | 9.67 × 10^4 | 3.20 × 10^5 | 7.38 × 10^4 | 8.47 × 10^4 | 3.85 × 10^5 | 7.77 × 10^4 | 1.58 × 10^5 | 5.98 × 10^3 | 3.20 × 10^3 |
| | Best | 7.84 × 10^3 | 5.11 × 10^3 | 5.16 × 10^4 | 1.54 × 10^5 | 4.62 × 10^4 | 2.90 × 10^4 | 1.81 × 10^5 | 5.43 × 10^4 | 9.77 × 10^4 | 3.27 × 10^3 | 2.53 × 10^3 |
| | Std | 1.02 × 10^4 | 9.00 × 10^3 | 3.16 × 10^4 | 1.82 × 10^5 | 1.56 × 10^4 | 2.13 × 10^4 | 2.28 × 10^5 | 1.76 × 10^4 | 3.54 × 10^4 | 1.98 × 10^3 | 7.16 × 10^2 |
| | rank | 4 | 3 | 7 | 9 | 11 | 6 | 10 | 5 | 8 | 2 | 1 |
| F11 | Mean | 3.58 × 10^8 | 2.78 × 10^9 | 4.88 × 10^9 | 9.02 × 10^9 | 1.15 × 10^10 | 1.49 × 10^9 | 4.06 × 10^10 | 4.84 × 10^9 | 8.65 × 10^10 | 2.31 × 10^8 | 1.05 × 10^8 |
| | Best | 4.14 × 10^7 | 3.13 × 10^8 | 2.46 × 10^9 | 6.04 × 10^8 | 3.86 × 10^9 | 8.71 × 10^8 | 1.35 × 10^10 | 2.17 × 10^9 | 6.51 × 10^10 | 7.42 × 10^7 | 2.60 × 10^7 |
| | Std | 3.18 × 10^8 | 1.86 × 10^9 | 2.81 × 10^9 | 1.00 × 10^10 | 5.59 × 10^9 | 5.87 × 10^8 | 1.24 × 10^10 | 1.58 × 10^9 | 1.61 × 10^10 | 1.02 × 10^8 | 3.61 × 10^7 |
| | rank | 3 | 5 | 7 | 8 | 10 | 4 | 9 | 6 | 11 | 2 | 1 |
| F12 | Mean | 5.34 × 10^5 | 1.14 × 10^8 | 1.69 × 10^8 | 1.33 × 10^9 | 1.49 × 10^9 | 1.72 × 10^7 | 6.29 × 10^9 | 2.67 × 10^7 | 1.70 × 10^10 | 9.75 × 10^5 | 1.46 × 10^5 |
| | Best | 8.33 × 10^3 | 3.86 × 10^4 | 1.81 × 10^7 | 4.06 × 10^6 | 3.02 × 10^7 | 1.16 × 10^7 | 2.19 × 10^9 | 3.72 × 10^6 | 1.03 × 10^10 | 3.06 × 10^4 | 2.40 × 10^4 |
| | Std | 1.72 × 10^6 | 1.90 × 10^8 | 1.90 × 10^8 | 2.04 × 10^9 | 1.23 × 10^9 | 3.99 × 10^6 | 3.18 × 10^9 | 1.87 × 10^7 | 3.33 × 10^9 | 1.54 × 10^6 | 2.30 × 10^5 |
| | rank | 2 | 6 | 7 | 8 | 9 | 5 | 10 | 4 | 11 | 3 | 1 |
| F13 | Mean | 1.91 × 10^6 | 5.10 × 10^6 | 9.63 × 10^6 | 1.88 × 10^7 | 7.70 × 10^6 | 5.38 × 10^6 | 3.51 × 10^7 | 6.92 × 10^6 | 5.78 × 10^7 | 3.32 × 10^6 | 1.80 × 10^6 |
| | Best | 3.46 × 10^5 | 9.33 × 10^5 | 2.01 × 10^6 | 3.12 × 10^6 | 1.42 × 10^6 | 1.65 × 10^6 | 1.73 × 10^7 | 2.21 × 10^6 | 2.89 × 10^7 | 9.46 × 10^5 | 4.91 × 10^5 |
| | Std | 1.15 × 10^6 | 2.94 × 10^6 | 4.95 × 10^6 | 2.17 × 10^7 | 5.62 × 10^6 | 1.78 × 10^6 | 1.33 × 10^7 | 2.43 × 10^6 | 1.91 × 10^7 | 1.95 × 10^6 | 1.03 × 10^6 |
| | rank | 2 | 4 | 8 | 9 | 7 | 5 | 10 | 6 | 11 | 3 | 1 |
| F14 | Mean | 1.61 × 10^4 | 1.29 × 10^7 | 4.77 × 10^7 | 4.93 × 10^8 | 3.16 × 10^8 | 4.37 × 10^6 | 3.63 × 10^9 | 2.12 × 10^5 | 5.93 × 10^9 | 1.84 × 10^5 | 4.86 × 10^4 |
| | Best | 2.66 × 10^3 | 3.62 × 10^4 | 3.65 × 10^6 | 1.31 × 10^4 | 3.91 × 10^6 | 2.46 × 10^6 | 6.18 × 10^8 | 3.76 × 10^4 | 2.94 × 10^9 | 2.74 × 10^4 | 1.17 × 10^4 |
| | Std | 3.59 × 10^4 | 6.22 × 10^7 | 1.46 × 10^8 | 1.04 × 10^9 | 2.90 × 10^8 | 1.32 × 10^6 | 1.61 × 10^9 | 1.35 × 10^5 | 1.46 × 10^9 | 3.71 × 10^5 | 3.59 × 10^4 |
| | rank | 1 | 6 | 7 | 9 | 8 | 5 | 10 | 4 | 11 | 3 | 2 |
| F15 | Mean | 5.77 × 10^3 | 6.17 × 10^3 | 7.97 × 10^3 | 8.23 × 10^3 | 6.51 × 10^3 | 8.55 × 10^3 | 1.30 × 10^4 | 1.13 × 10^4 | 1.45 × 10^4 | 6.24 × 10^3 | 5.71 × 10^3 |
| | Best | 4.40 × 10^3 | 4.44 × 10^3 | 5.98 × 10^3 | 5.49 × 10^3 | 4.15 × 10^3 | 6.51 × 10^3 | 1.10 × 10^4 | 8.54 × 10^3 | 1.21 × 10^4 | 4.49 × 10^3 | 4.75 × 10^3 |
| | Std | 9.26 × 10^2 | 8.95 × 10^2 | 7.89 × 10^2 | 1.63 × 10^3 | 6.90 × 10^2 | 7.66 × 10^2 | 1.38 × 10^3 | 1.12 × 10^3 | 1.38 × 10^3 | 6.14 × 10^2 | 7.72 × 10^2 |
| | rank | 2 | 3 | 6 | 7 | 5 | 8 | 10 | 9 | 11 | 4 | 1 |
| F16 | Mean | 5.18 × 10^3 | 5.20 × 10^3 | 6.40 × 10^3 | 8.23 × 10^3 | 5.30 × 10^3 | 6.84 × 10^3 | 5.69 × 10^5 | 6.82 × 10^3 | 3.50 × 10^4 | 5.66 × 10^3 | 5.45 × 10^3 |
| | Best | 4.06 × 10^3 | 4.49 × 10^3 | 5.80 × 10^3 | 5.82 × 10^3 | 4.00 × 10^3 | 5.71 × 10^3 | 1.02 × 10^4 | 5.46 × 10^3 | 1.37 × 10^4 | 4.46 × 10^3 | 4.37 × 10^3 |
| | Std | 6.17 × 10^2 | 4.78 × 10^2 | 6.82 × 10^2 | 3.11 × 10^3 | 7.90 × 10^2 | 7.67 × 10^2 | 2.88 × 10^6 | 9.46 × 10^2 | 3.48 × 10^4 | 5.59 × 10^2 | 5.40 × 10^2 |
| | rank | 1 | 2 | 6 | 9 | 3 | 8 | 11 | 7 | 10 | 5 | 4 |
| F17 | Mean | 4.50 × 10^6 | 6.62 × 10^6 | 6.41 × 10^7 | 4.41 × 10^7 | 6.88 × 10^6 | 8.76 × 10^6 | 6.50 × 10^7 | 4.76 × 10^6 | 1.09 × 10^8 | 7.10 × 10^6 | 4.16 × 10^6 |
| | Best | 1.30 × 10^6 | 1.68 × 10^6 | 1.65 × 10^7 | 4.97 × 10^6 | 2.62 × 10^6 | 2.25 × 10^6 | 1.69 × 10^7 | 2.25 × 10^6 | 4.62 × 10^7 | 1.56 × 10^6 | 5.91 × 10^5 |
| | Std | 2.20 × 10^6 | 3.52 × 10^6 | 4.65 × 10^7 | 3.05 × 10^7 | 3.77 × 10^6 | 4.55 × 10^6 | 3.17 × 10^7 | 1.75 × 10^6 | 3.66 × 10^7 | 2.83 × 10^6 | 2.43 × 10^6 |
| | rank | 2 | 4 | 9 | 8 | 5 | 7 | 10 | 3 | 11 | 6 | 1 |
| F18 | Mean | 1.99 × 10^4 | 1.13 × 10^7 | 1.67 × 10^7 | 4.60 × 10^8 | 2.45 × 10^8 | 1.96 × 10^7 | 2.83 × 10^9 | 1.49 × 10^7 | 5.51 × 10^9 | 1.22 × 10^5 | 3.26 × 10^4 |
| | Best | 3.08 × 10^3 | 7.23 × 10^4 | 4.44 × 10^6 | 8.81 × 10^3 | 5.84 × 10^6 | 4.71 × 10^6 | 6.77 × 10^8 | 2.69 × 10^5 | 3.82 × 10^9 | 5.69 × 10^4 | 9.20 × 10^3 |
| | Std | 5.74 × 10^4 | 2.08 × 10^7 | 1.33 × 10^7 | 1.18 × 10^9 | 2.74 × 10^8 | 1.60 × 10^7 | 1.70 × 10^9 | 1.51 × 10^7 | 1.25 × 10^9 | 7.04 × 10^4 | 1.97 × 10^4 |
| | rank | 1 | 4 | 6 | 9 | 8 | 7 | 10 | 5 | 11 | 3 | 2 |
| F19 | Mean | 5.10 × 10^3 | 5.12 × 10^3 | 7.19 × 10^3 | 7.98 × 10^3 | 5.37 × 10^3 | 6.04 × 10^3 | 8.03 × 10^3 | 5.50 × 10^3 | 7.97 × 10^3 | 5.74 × 10^3 | 4.99 × 10^3 |
| | Best | 3.88 × 10^3 | 4.48 × 10^3 | 6.38 × 10^3 | 5.84 × 10^3 | 4.06 × 10^3 | 5.36 × 10^3 | 7.13 × 10^3 | 4.56 × 10^3 | 7.27 × 10^3 | 4.25 × 10^3 | 3.79 × 10^3 |
| | Std | 6.43 × 10^2 | 5.52 × 10^2 | 3.34 × 10^2 | 1.01 × 10^3 | 1.22 × 10^3 | 5.53 × 10^2 | 3.55 × 10^2 | 5.71 × 10^2 | 4.42 × 10^2 | 5.19 × 10^2 | 4.72 × 10^2 |
| | rank | 2 | 3 | 8 | 10 | 4 | 7 | 11 | 5 | 9 | 6 | 1 |
| F20 | Mean | 3.11 × 10^3 | 2.88 × 10^3 | 3.24 × 10^3 | 3.50 × 10^3 | 3.05 × 10^3 | 4.27 × 10^3 | 4.36 × 10^3 | 3.99 × 10^3 | 4.07 × 10^3 | 3.09 × 10^3 | 3.03 × 10^3 |
| | Best | 2.85 × 10^3 | 2.79 × 10^3 | 3.14 × 10^3 | 2.99 × 10^3 | 2.94 × 10^3 | 3.89 × 10^3 | 3.73 × 10^3 | 3.59 × 10^3 | 3.79 × 10^3 | 2.84 × 10^3 | 2.86 × 10^3 |
| | Std | 1.11 × 10^2 | 7.97 × 10^1 | 6.59 × 10^1 | 2.05 × 10^2 | 7.49 × 10^1 | 1.75 × 10^2 | 2.41 × 10^2 | 1.87 × 10^2 | 1.02 × 10^2 | 1.32 × 10^2 | 9.40 × 10^1 |
| | rank | 5 | 1 | 6 | 7 | 3 | 10 | 11 | 8 | 9 | 4 | 2 |
| F21 | Mean | 2.11 × 10^4 | 1.98 × 10^4 | 3.53 × 10^4 | 3.08 × 10^4 | 2.24 × 10^4 | 2.60 × 10^4 | 3.37 × 10^4 | 2.34 × 10^4 | 3.52 × 10^4 | 2.00 × 10^4 | 1.94 × 10^4 |
| | Best | 1.81 × 10^4 | 1.71 × 10^4 | 3.27 × 10^4 | 2.26 × 10^4 | 1.75 × 10^4 | 2.36 × 10^4 | 3.12 × 10^4 | 2.13 × 10^4 | 3.37 × 10^4 | 1.63 × 10^4 | 1.63 × 10^4 |
| | Std | 3.77 × 10^3 | 1.89 × 10^3 | 5.50 × 10^2 | 3.41 × 10^3 | 5.52 × 10^3 | 1.55 × 10^3 | 1.33 × 10^3 | 1.96 × 10^3 | 5.15 × 10^2 | 1.38 × 10^3 | 1.52 × 10^3 |
| | rank | 4 | 2 | 11 | 8 | 5 | 7 | 10 | 6 | 9 | 3 | 1 |
| F22 | Mean | 3.41 × 10^3 | 3.40 × 10^3 | 3.69 × 10^3 | 4.06 × 10^3 | 3.66 × 10^3 | 5.73 × 10^3 | 6.31 × 10^3 | 6.04 × 10^3 | 4.83 × 10^3 | 3.48 × 10^3 | 3.38 × 10^3 |
| | Best | 3.26 × 10^3 | 3.27 × 10^3 | 3.58 × 10^3 | 3.74 × 10^3 | 3.52 × 10^3 | 4.90 × 10^3 | 5.50 × 10^3 | 5.10 × 10^3 | 4.42 × 10^3 | 3.26 × 10^3 | 3.24 × 10^3 |
| | Std | 8.78 × 10^1 | 5.72 × 10^1 | 7.06 × 10^1 | 2.20 × 10^2 | 8.76 × 10^1 | 5.25 × 10^2 | 4.90 × 10^2 | 3.16 × 10^2 | 1.27 × 10^2 | 8.43 × 10^1 | 6.65 × 10^1 |
| | rank | 3 | 2 | 6 | 7 | 5 | 9 | 11 | 10 | 8 | 4 | 1 |
| F23 | Mean | 3.97 × 10^3 | 3.99 × 10^3 | 4.36 × 10^3 | 4.86 × 10^3 | 4.39 × 10^3 | 7.82 × 10^3 | 9.96 × 10^3 | 8.72 × 10^3 | 6.11 × 10^3 | 4.16 × 10^3 | 4.08 × 10^3 |
| | Best | 3.77 × 10^3 | 3.82 × 10^3 | 4.20 × 10^3 | 4.40 × 10^3 | 4.01 × 10^3 | 6.55 × 10^3 | 8.41 × 10^3 | 7.09 × 10^3 | 5.74 × 10^3 | 3.89 × 10^3 | 3.80 × 10^3 |
| | Std | 1.27 × 10^2 | 8.69 × 10^1 | 7.93 × 10^1 | 2.66 × 10^2 | 1.51 × 10^2 | 4.86 × 10^2 | 8.84 × 10^2 | 7.49 × 10^2 | 1.54 × 10^2 | 1.15 × 10^2 | 1.00 × 10^2 |
| | rank | 1 | 2 | 5 | 7 | 6 | 9 | 11 | 10 | 8 | 4 | 3 |
| F24 | Mean | 4.27 × 10^3 | 5.25 × 10^3 | 6.44 × 10^3 | 1.00 × 10^4 | 6.79 × 10^3 | 4.55 × 10^3 | 1.74 × 10^4 | 5.94 × 10^3 | 3.19 × 10^4 | 3.56 × 10^3 | 3.52 × 10^3 |
| | Best | 3.88 × 10^3 | 4.24 × 10^3 | 5.31 × 10^3 | 4.97 × 10^3 | 5.51 × 10^3 | 4.28 × 10^3 | 1.31 × 10^4 | 5.21 × 10^3 | 2.37 × 10^4 | 3.40 × 10^3 | 3.40 × 10^3 |
| | Std | 2.90 × 10^2 | 7.60 × 10^2 | 5.46 × 10^2 | 3.51 × 10^3 | 9.09 × 10^2 | 2.05 × 10^2 | 2.01 × 10^3 | 5.11 × 10^2 | 3.17 × 10^3 | 6.55 × 10^1 | 5.98 × 10^1 |
| | rank | 3 | 5 | 7 | 9 | 8 | 4 | 10 | 6 | 11 | 2 | 1 |
| F25 | Mean | 4.27 × 10^3 | 5.25 × 10^3 | 6.44 × 10^3 | 1.00 × 10^4 | 6.79 × 10^3 | 4.55 × 10^3 | 1.74 × 10^4 | 5.94 × 10^3 | 3.19 × 10^4 | 3.56 × 10^3 | 3.50 × 10^3 |
| | Best | 1.03 × 10^4 | 1.12 × 10^4 | 1.55 × 10^4 | 1.69 × 10^4 | 1.46 × 10^4 | 2.36 × 10^4 | 2.32 × 10^4 | 1.31 × 10^4 | 2.92 × 10^4 | 1.25 × 10^4 | 1.22 × 10^4 |
| | Std | 8.47 × 10^2 | 1.03 × 10^3 | 1.59 × 10^3 | 2.74 × 10^3 | 1.15 × 10^3 | 2.88 × 10^3 | 1.17 × 10^4 | 2.95 × 10^3 | 6.70 × 10^3 | 8.68 × 10^2 | 2.99 × 10^3 |
| | rank | 3 | 5 | 7 | 9 | 8 | 4 | 10 | 6 | 11 | 2 | 1 |
| F26 | Mean | 3.71 × 10^3 | 3.77 × 10^3 | 3.98 × 10^3 | 4.16 × 10^3 | 4.22 × 10^3 | 5.81 × 10^3 | 8.75 × 10^3 | 7.92 × 10^3 | 7.37 × 10^3 | 3.72 × 10^3 | 3.69 × 10^3 |
| | Best | 3.54 × 10^3 | 3.63 × 10^3 | 3.84 × 10^3 | 3.66 × 10^3 | 3.77 × 10^3 | 4.69 × 10^3 | 5.86 × 10^3 | 6.69 × 10^3 | 6.12 × 10^3 | 3.57 × 10^3 | 3.51 × 10^3 |
| | Std | 9.92 × 10^1 | 1.07 × 10^2 | 9.29 × 10^1 | 2.60 × 10^2 | 1.71 × 10^2 | 6.91 × 10^2 | 1.68 × 10^3 | 1.15 × 10^3 | 6.48 × 10^2 | 9.95 × 10^1 | 9.28 × 10^1 |
| | rank | 2 | 4 | 5 | 6 | 7 | 8 | 11 | 10 | 9 | 3 | 1 |
| F27 | Mean | 4.68 × 10^3 | 8.49 × 10^3 | 7.46 × 10^3 | 1.47 × 10^4 | 9.50 × 10^3 | 5.63 × 10^3 | 1.84 × 10^4 | 8.55 × 10^3 | 3.13 × 10^4 | 3.63 × 10^3 | 3.59 × 10^3 |
| | Best | 3.94 × 10^3 | 4.16 × 10^3 | 5.92 × 10^3 | 7.24 × 10^3 | 7.14 × 10^3 | 5.07 × 10^3 | 1.37 × 10^4 | 6.77 × 10^3 | 2.40 × 10^4 | 3.55 × 10^3 | 3.52 × 10^3 |
| | Std | 6.17 × 10^2 | 4.83 × 10^3 | 1.02 × 10^3 | 3.51 × 10^3 | 1.64 × 10^3 | 5.00 × 10^2 | 2.80 × 10^3 | 9.76 × 10^2 | 3.09 × 10^3 | 4.67 × 10^1 | 5.59 × 10^1 |
| | rank | 3 | 6 | 5 | 9 | 8 | 4 | 10 | 7 | 11 | 2 | 1 |
| F28 | Mean | 7.18 × 10^3 | 7.89 × 10^3 | 8.67 × 10^3 | 1.10 × 10^4 | 9.15 × 10^3 | 1.15 × 10^4 | 2.03 × 10^4 | 1.45 × 10^4 | 3.52 × 10^4 | 7.94 × 10^3 | 7.10 × 10^3 |
| | Best | 6.25 × 10^3 | 5.51 × 10^3 | 7.44 × 10^3 | 6.94 × 10^3 | 7.77 × 10^3 | 9.32 × 10^3 | 1.28 × 10^4 | 1.13 × 10^4 | 1.97 × 10^4 | 7.08 × 10^3 | 6.24 × 10^3 |
| | Std | 6.56 × 10^2 | 5.70 × 10^2 | 6.11 × 10^2 | 6.89 × 10^3 | 9.36 × 10^2 | 1.11 × 10^3 | 6.52 × 10^3 | 1.70 × 10^3 | 1.75 × 10^4 | 5.33 × 10^2 | 5.69 × 10^2 |
| | rank | 2 | 3 | 5 | 7 | 6 | 8 | 10 | 9 | 11 | 4 | 1 |
| F29 | Mean | 1.91 × 10^6 | 6.74 × 10^7 | 1.04 × 10^8 | 1.44 × 10^9 | 1.05 × 10^9 | 1.83 × 10^8 | 6.26 × 10^9 | 7.57 × 10^8 | 1.04 × 10^10 | 5.47 × 10^6 | 1.62 × 10^6 |
| | Best | 1.67 × 10^5 | 2.53 × 10^6 | 3.98 × 10^7 | 6.56 × 10^5 | 1.93 × 10^8 | 4.27 × 10^7 | 1.93 × 10^9 | 3.56 × 10^8 | 6.21 × 10^9 | 1.07 × 10^6 | 5.48 × 10^5 |
| | Std | 2.49 × 10^6 | 1.06 × 10^8 | 4.60 × 10^7 | 3.69 × 10^9 | 1.13 × 10^9 | 9.50 × 10^7 | 2.96 × 10^9 | 3.30 × 10^8 | 2.74 × 10^9 | 2.81 × 10^6 | 9.17 × 10^5 |
| | rank | 2 | 4 | 5 | 9 | 8 | 6 | 10 | 7 | 11 | 3 | 1 |
| Count | | 4 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 19 |
| Friedman Rank | | 2.8276 | 3.5172 | 6.4138 | 8.3793 | 6.3103 | 6.6897 | 9.9310 | 6.4828 | 10.2069 | 3.7241 | 1.5172 |
Table 7. Results of the ESMA on the CEC2013 functions (30 Dim).
| F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F30 | Mean | 3.41 × 10^4 | 3.09 × 10^4 | 3.58 × 10^4 | 1.78 × 10^5 | 4.83 × 10^4 | 4.68 × 10^4 | 1.30 × 10^5 | 4.17 × 10^4 | 5.25 × 10^4 | 3.55 × 10^4 | 8.28 × 10^3 |
| | Best | 2.56 × 10^4 | 1.40 × 10^4 | 2.48 × 10^4 | 6.62 × 10^4 | 3.13 × 10^4 | 3.69 × 10^4 | 4.70 × 10^4 | 2.57 × 10^4 | 2.09 × 10^4 | 2.56 × 10^4 | 2.07 × 10^3 |
| | Std | 5.89 × 10^3 | 7.33 × 10^3 | 5.88 × 10^3 | 5.53 × 10^4 | 6.83 × 10^3 | 4.76 × 10^3 | 5.34 × 10^4 | 7.80 × 10^3 | 1.66 × 10^4 | 9.13 × 10^3 | 3.75 × 10^3 |
| | rank | 3 | 2 | 5 | 11 | 8 | 7 | 10 | 6 | 9 | 4 | 1 |
| F31 | Mean | −9.90 × 10^2 | −9.78 × 10^2 | −9.11 × 10^2 | 5.29 × 10^3 | 2.05 × 10^2 | −8.91 × 10^2 | 4.14 × 10^3 | −7.53 × 10^2 | −9.17 × 10^2 | −9.53 × 10^2 | −1.00 × 10^3 |
| | Best | −9.98 × 10^2 | −9.91 × 10^2 | −9.48 × 10^2 | −9.30 × 10^2 | −5.26 × 10^2 | −9.75 × 10^2 | 5.92 × 10^2 | −8.56 × 10^2 | −9.75 × 10^2 | −9.98 × 10^2 | −1.00 × 10^3 |
| | Std | 3.22 × 10^0 | 2.41 × 10^1 | 2.26 × 10^1 | 1.04 × 10^4 | 9.29 × 10^2 | 3.68 × 10^1 | 3.93 × 10^3 | 6.87 × 10^1 | 3.20 × 10^1 | 1.30 × 10^1 | 4.41 × 10^−2 |
| | rank | 2 | 3 | 6 | 11 | 9 | 7 | 10 | 8 | 5 | 4 | 1 |
| F32 | Mean | −8.17 × 10^2 | −8.16 × 10^2 | −7.87 × 10^2 | −7.55 × 10^2 | −7.30 × 10^2 | −7.60 × 10^2 | 2.33 × 10^2 | −7.25 × 10^2 | −8.39 × 10^2 | −8.41 × 10^2 | −8.44 × 10^2 |
| | Best | −8.73 × 10^2 | −8.73 × 10^2 | −8.14 × 10^2 | −8.72 × 10^2 | −7.91 × 10^2 | −8.25 × 10^2 | −6.88 × 10^2 | −8.17 × 10^2 | −8.87 × 10^2 | −8.83 × 10^2 | −8.88 × 10^2 |
| | Std | 2.89 × 10^1 | 2.72 × 10^1 | 1.90 × 10^1 | 9.84 × 10^1 | 5.75 × 10^1 | 3.59 × 10^1 | 8.36 × 10^2 | 5.82 × 10^1 | 2.73 × 10^1 | 2.13 × 10^1 | 2.95 × 10^1 |
| | rank | 4 | 5 | 6 | 8 | 9 | 7 | 11 | 10 | 3 | 2 | 1 |
| F33 | Mean | −4.88 × 10^2 | −3.98 × 10^2 | −3.27 × 10^2 | 1.43 × 10^2 | −9.34 × 10^1 | −3.80 × 10^2 | 1.41 × 10^3 | −3.47 × 10^2 | −4.96 × 10^2 | −4.63 × 10^2 | −4.96 × 10^2 |
| | Best | −4.90 × 10^2 | −4.87 × 10^2 | −4.22 × 10^2 | −4.66 × 10^2 | −4.29 × 10^2 | −4.63 × 10^2 | 1.52 × 10^2 | −4.35 × 10^2 | −4.97 × 10^2 | −4.98 × 10^2 | −4.99 × 10^2 |
| | Std | 2.07 × 10^1 | 7.22 × 10^1 | 7.21 × 10^1 | 4.93 × 10^2 | 2.10 × 10^2 | 6.53 × 10^1 | 9.31 × 10^2 | 5.58 × 10^1 | 3.50 × 10^0 | 1.10 × 10^1 | 1.61 × 10^0 |
| | rank | 3 | 5 | 8 | 11 | 9 | 6 | 10 | 7 | 2 | 4 | 1 |
| F34 | Mean | −3.30 × 10^2 | −3.30 × 10^2 | −2.81 × 10^2 | −1.32 × 10^2 | −2.63 × 10^2 | 1.25 × 10^1 | −1.72 × 10^0 | −1.86 × 10^1 | −1.96 × 10^2 | −2.11 × 10^2 | −3.74 × 10^2 |
| | Best | −3.55 × 10^2 | −3.63 × 10^2 | −3.25 × 10^2 | −2.33 × 10^2 | −3.29 × 10^2 | −1.13 × 10^2 | −1.08 × 10^2 | −1.60 × 10^2 | −3.26 × 10^2 | −2.96 × 10^2 | −3.85 × 10^2 |
| | Std | 1.56 × 10^1 | 2.16 × 10^1 | 2.05 × 10^1 | 7.61 × 10^1 | 4.71 × 10^1 | 6.62 × 10^1 | 8.37 × 10^1 | 8.53 × 10^1 | 8.19 × 10^1 | 4.65 × 10^1 | 6.45 × 10^0 |
| | rank | 2 | 3 | 4 | 10 | 5 | 11 | 9 | 8 | 7 | 6 | 1 |
| F35 | Mean | 2.52 × 10^1 | −9.08 × 10^0 | 4.26 × 10^1 | 2.10 × 10^2 | 2.88 × 10^1 | 4.88 × 10^2 | 1.47 × 10^2 | 2.38 × 10^2 | 1.30 × 10^2 | 7.98 × 10^1 | 4.79 × 10^1 |
| | Best | −4.71 × 10^1 | −7.42 × 10^1 | −3.79 × 10^1 | 4.28 × 10^1 | −6.09 × 10^1 | 2.54 × 10^2 | 8.83 × 10^1 | 6.76 × 10^1 | −1.40 × 10^1 | −1.16 × 10^1 | −7.33 × 10^1 |
| | Std | 4.52 × 10^1 | 3.30 × 10^1 | 2.86 × 10^1 | 8.82 × 10^1 | 5.09 × 10^1 | 1.11 × 10^2 | 3.88 × 10^1 | 8.10 × 10^1 | 6.81 × 10^1 | 4.93 × 10^1 | 4.51 × 10^1 |
| | rank | 3 | 1 | 5 | 2 | 4 | 11 | 8 | 10 | 9 | 7 | 6 |
| F36 | Mean | 1.62 × 10^3 | 2.36 × 10^3 | 3.72 × 10^3 | 4.98 × 10^3 | 3.53 × 10^3 | 3.96 × 10^3 | 7.23 × 10^3 | 4.34 × 10^3 | 4.25 × 10^3 | 3.61 × 10^3 | 6.70 × 10^2 |
| | Best | 7.23 × 10^2 | 1.28 × 10^3 | 2.41 × 10^3 | 3.15 × 10^3 | 1.73 × 10^3 | 2.45 × 10^3 | 6.13 × 10^3 | 3.30 × 10^3 | 3.19 × 10^3 | 2.48 × 10^3 | 6.27 × 10^1 |
| | Std | 5.63 × 10^2 | 6.89 × 10^2 | 8.94 × 10^2 | 1.01 × 10^3 | 1.49 × 10^3 | 7.96 × 10^2 | 5.23 × 10^2 | 6.44 × 10^2 | 6.58 × 10^2 | 6.31 × 10^2 | 2.60 × 10^2 |
| | rank | 2 | 3 | 6 | 10 | 5 | 7 | 11 | 9 | 8 | 4 | 1 |
| F37 | Mean | 4.56 × 10^2 | 4.51 × 10^2 | 5.04 × 10^2 | 8.27 × 10^2 | 5.08 × 10^2 | 1.06 × 10^3 | 7.74 × 10^2 | 7.38 × 10^2 | 5.39 × 10^2 | 5.66 × 10^2 | 4.00 × 10^2 |
| | Best | 3.90 × 10^2 | 3.95 × 10^2 | 4.60 × 10^2 | 5.22 × 10^2 | 4.11 × 10^2 | 8.74 × 10^2 | 6.39 × 10^2 | 5.66 × 10^2 | 4.50 × 10^2 | 4.70 × 10^2 | 3.71 × 10^2 |
| | Std | 3.33 × 10^1 | 3.78 × 10^1 | 3.14 × 10^1 | 2.13 × 10^2 | 5.64 × 10^1 | 1.01 × 10^2 | 6.39 × 10^1 | 1.07 × 10^2 | 6.12 × 10^1 | 5.73 × 10^1 | 1.86 × 10^1 |
| | rank | 2 | 3 | 4 | 10 | 5 | 11 | 9 | 8 | 6 | 7 | 1 |
| F38 | Mean | 5.15 × 10^2 | 5.12 × 10^2 | 1.46 × 10^3 | 1.06 × 10^5 | 7.63 × 10^2 | 5.44 × 10^2 | 2.01 × 10^4 | 5.47 × 10^2 | 5.13 × 10^2 | 5.12 × 10^2 | 5.06 × 10^2 |
| | Best | 5.06 × 10^2 | 5.04 × 10^2 | 5.64 × 10^2 | 5.37 × 10^2 | 5.13 × 10^2 | 5.28 × 10^2 | 5.93 × 10^2 | 5.23 × 10^2 | 5.07 × 10^2 | 5.06 × 10^2 | 5.03 × 10^2 |
| | Std | 5.49 × 10^0 | 1.59 × 10^1 | 1.29 × 10^3 | 3.77 × 10^5 | 4.12 × 10^2 | 1.07 × 10^1 | 5.16 × 10^4 | 1.74 × 10^1 | 4.49 × 10^0 | 4.03 × 10^0 | 1.43 × 10^0 |
| | rank | 5 | 3 | 9 | 11 | 8 | 6 | 10 | 7 | 4 | 2 | 1 |
| F39 | Mean | 2.59 × 10^3 | 3.29 × 10^3 | 5.82 × 10^3 | 6.77 × 10^3 | 4.65 × 10^3 | 6.41 × 10^3 | 8.67 × 10^3 | 6.57 × 10^3 | 5.56 × 10^3 | 5.41 × 10^3 | 1.53 × 10^3 |
| | Best | 1.78 × 10^3 | 2.24 × 10^3 | 2.38 × 10^3 | 3.69 × 10^3 | 3.07 × 10^3 | 4.89 × 10^3 | 7.42 × 10^3 | 4.64 × 10^3 | 4.11 × 10^3 | 3.42 × 10^3 | 1.08 × 10^3 |
| | Std | 4.47 × 10^2 | 5.69 × 10^2 | 1.76 × 10^3 | 1.41 × 10^3 | 8.01 × 10^2 | 9.40 × 10^2 | 6.38 × 10^2 | 8.84 × 10^2 | 9.75 × 10^2 | 8.47 × 10^2 | 2.51 × 10^2 |
| | rank | 2 | 3 | 7 | 10 | 4 | 8 | 11 | 9 | 6 | 5 | 1 |
| F40 | Mean | 1.28 × 10^3 | 1.28 × 10^3 | 1.29 × 10^3 | 1.30 × 10^3 | 1.26 × 10^3 | 1.34 × 10^3 | 1.35 × 10^3 | 1.33 × 10^3 | 1.28 × 10^3 | 1.29 × 10^3 | 1.27 × 10^3 |
| | Best | 1.26 × 10^3 | 1.26 × 10^3 | 1.27 × 10^3 | 1.27 × 10^3 | 1.24 × 10^3 | 1.32 × 10^3 | 1.31 × 10^3 | 1.29 × 10^3 | 1.26 × 10^3 | 1.27 × 10^3 | 1.25 × 10^3 |
| | Std | 9.27 × 10^0 | 9.15 × 10^0 | 1.07 × 10^1 | 1.12 × 10^1 | 1.15 × 10^1 | 1.23 × 10^1 | 2.59 × 10^1 | 1.86 × 10^1 | 1.06 × 10^1 | 1.12 × 10^1 | 1.02 × 10^1 |
| | rank | 4 | 3 | 6 | 8 | 2 | 10 | 11 | 9 | 5 | 7 | 1 |
| Count | | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
| Friedman Rank | | 2.9091 | 3.0909 | 6.0000 | 9.2727 | 6.1818 | 8.2727 | 10.0000 | 8.2727 | 5.8182 | 4.7273 | 1.4545 |
Table 8. Results of the ESMA on the CEC2013 functions (100 Dim).
| F | Result | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F30 | Mean | 1.72 × 10^5 | 1.48 × 10^5 | 1.93 × 10^5 | 6.20 × 10^5 | 1.61 × 10^5 | 2.13 × 10^5 | 5.36 × 10^5 | 1.78 × 10^5 | 2.96 × 10^5 | 1.87 × 10^5 | 7.32 × 10^4 |
| | Best | 1.47 × 10^5 | 1.03 × 10^5 | 1.56 × 10^5 | 3.99 × 10^5 | 1.32 × 10^5 | 1.84 × 10^5 | 2.99 × 10^5 | 1.35 × 10^5 | 2.19 × 10^5 | 1.46 × 10^5 | 5.78 × 10^4 |
| | Std | 1.49 × 10^4 | 1.57 × 10^4 | 1.91 × 10^4 | 1.14 × 10^5 | 1.42 × 10^4 | 1.37 × 10^4 | 1.91 × 10^5 | 1.43 × 10^4 | 4.60 × 10^4 | 2.65 × 10^4 | 9.95 × 10^3 |
| | rank | 4 | 2 | 7 | 11 | 3 | 8 | 10 | 5 | 9 | 6 | 1 |
| F31 | Mean | −5.72 × 10^2 | 9.07 × 10^1 | 3.49 × 10^3 | 2.98 × 10^4 | 5.67 × 10^3 | 6.39 × 10^2 | 2.16 × 10^4 | 1.95 × 10^3 | −8.10 × 10^1 | −7.44 × 10^2 | −9.34 × 10^2 |
| | Best | −8.34 × 10^2 | −5.70 × 10^2 | 1.05 × 10^3 | 3.56 × 10^3 | 2.19 × 10^3 | 1.71 × 10^2 | 1.34 × 10^4 | 8.95 × 10^2 | −5.00 × 10^2 | −7.95 × 10^2 | −9.72 × 10^2 |
| | Std | 2.61 × 10^2 | 4.38 × 10^2 | 1.45 × 10^3 | 3.86 × 10^4 | 2.23 × 10^3 | 3.57 × 10^2 | 7.83 × 10^3 | 6.65 × 10^2 | 2.41 × 10^2 | 3.15 × 10^1 | 1.68 × 10^1 |
| | rank | 4 | 5 | 7 | 10 | 8 | 6 | 9 | 11 | 2 | 3 | 1 |
| F32 | Mean | −5.42 × 10^1 | 9.80 × 10^2 | 2.88 × 10^3 | 2.51 × 10^3 | 2.36 × 10^3 | 3.66 × 10^2 | 1.24 × 10^4 | 1.62 × 10^3 | −3.56 × 10^2 | −4.03 × 10^2 | −4.78 × 10^2 |
| | Best | −4.25 × 10^2 | −2.03 × 10^2 | 1.53 × 10^3 | 2.36 × 10^2 | 1.07 × 10^3 | −1.51 × 10^1 | 7.65 × 10^3 | 1.05 × 10^3 | −4.92 × 10^2 | −5.66 × 10^2 | −5.64 × 10^2 |
| | Std | 1.94 × 10^2 | 7.42 × 10^2 | 1.10 × 10^3 | 2.01 × 10^3 | 8.17 × 10^2 | 2.06 × 10^2 | 2.96 × 10^3 | 4.11 × 10^2 | 8.21 × 10^1 | 8.45 × 10^1 | 5.53 × 10^1 |
| | rank | 4 | 6 | 10 | 8 | 7 | 5 | 11 | 9 | 3 | 2 | 1 |
| F33 | Mean | 7.20 × 10^2 | 1.36 × 10^3 | 2.69 × 10^3 | 4.38 × 10^3 | 3.42 × 10^3 | 1.51 × 10^3 | 1.14 × 10^4 | 2.73 × 10^3 | 3.62 × 10^2 | −4.75 × 10^1 | −2.59 × 10^2 |
| | Best | 3.59 × 10^2 | 1.69 × 10^2 | 2.01 × 10^3 | 2.09 × 10^3 | 2.15 × 10^3 | 9.78 × 10^2 | 7.10 × 10^3 | 1.93 × 10^3 | 6.59 × 10^1 | −2.38 × 10^2 | −3.61 × 10^2 |
| | Std | 2.95 × 10^2 | 5.81 × 10^2 | 4.73 × 10^2 | 1.47 × 10^3 | 7.81 × 10^2 | 3.41 × 10^2 | 2.01 × 10^3 | 5.04 × 10^2 | 1.82 × 10^2 | 1.09 × 10^2 | 5.94 × 10^1 |
| | rank | 4 | 5 | 7 | 10 | 9 | 6 | 11 | 8 | 3 | 2 | 1 |
| F34 | Mean | 2.00 × 10^2 | 4.77 × 10^2 | 5.29 × 10^2 | 2.00 × 10^3 | 5.50 × 10^2 | 1.58 × 10^3 | 1.70 × 10^3 | 1.73 × 10^3 | 9.67 × 10^2 | 8.17 × 10^2 | 5.55 × 10^1 |
| | Best | 6.32 × 10^1 | 2.08 × 10^2 | 3.58 × 10^2 | 1.21 × 10^3 | 4.25 × 10^2 | 1.28 × 10^3 | 1.35 × 10^3 | 1.24 × 10^3 | 5.42 × 10^2 | 4.50 × 10^2 | −9.04 × 10^1 |
| | Std | 9.60 × 10^1 | 1.71 × 10^2 | 7.85 × 10^1 | 5.33 × 10^2 | 8.97 × 10^1 | 1.54 × 10^2 | 1.87 × 10^2 | 2.08 × 10^2 | 3.11 × 10^2 | 2.59 × 10^2 | 7.11 × 10^1 |
| | rank | 2 | 3 | 4 | 11 | 5 | 8 | 9 | 10 | 7 | 6 | 1 |
| F35 | Mean | 1.33 × 10^3 | 1.14 × 10^3 | 1.24 × 10^3 | 2.64 × 10^3 | 1.27 × 10^3 | 2.76 × 10^3 | 1.94 × 10^3 | 2.18 × 10^3 | 1.70 × 10^3 | 1.57 × 10^3 | 1.19 × 10^3 |
| | Best | 9.68 × 10^2 | 9.20 × 10^2 | 1.08 × 10^3 | 1.99 × 10^3 | 8.27 × 10^2 | 2.45 × 10^3 | 1.56 × 10^3 | 1.83 × 10^3 | 1.34 × 10^3 | 1.30 × 10^3 | 9.16 × 10^2 |
| | Std | 1.84 × 10^2 | 1.39 × 10^2 | 1.00 × 10^2 | 4.56 × 10^2 | 1.38 × 10^2 | 1.61 × 10^2 | 2.24 × 10^2 | 2.25 × 10^2 | 2.15 × 10^2 | 1.97 × 10^2 | 1.82 × 10^2 |
| | rank | 5 | 1 | 3 | 10 | 4 | 11 | 8 | 9 | 7 | 6 | 2 |
| F36 | Mean | 1.26 × 10^4 | 1.36 × 10^4 | 2.13 × 10^4 | 2.44 × 10^4 | 1.95 × 10^4 | 2.21 × 10^4 | 3.23 × 10^4 | 2.16 × 10^4 | 1.83 × 10^4 | 1.57 × 10^4 | 9.59 × 10^3 |
| | Best | 9.21 × 10^3 | 9.90 × 10^3 | 1.76 × 10^4 | 2.01 × 10^4 | 1.53 × 10^4 | 1.91 × 10^4 | 2.98 × 10^4 | 1.90 × 10^4 | 1.33 × 10^4 | 1.34 × 10^4 | 6.36 × 10^3 |
| | Std | 2.26 × 10^3 | 1.56 × 10^3 | 1.53 × 10^3 | 2.01 × 10^3 | 5.18 × 10^3 | 1.51 × 10^3 | 1.02 × 10^3 | 1.37 × 10^3 | 2.18 × 10^3 | 1.86 × 10^3 | 1.95 × 10^3 |
| | rank | 2 | 3 | 7 | 11 | 6 | 9 | 10 | 8 | 5 | 4 | 1 |
| F37 | Mean | 1.59 × 10^3 | 1.95 × 10^3 | 1.68 × 10^3 | 5.04 × 10^3 | 1.70 × 10^3 | 3.56 × 10^3 | 3.47 × 10^3 | 2.95 × 10^3 | 2.25 × 10^3 | 2.46 × 10^3 | 1.31 × 10^3 |
| | Best | 1.23 × 10^3 | 1.63 × 10^3 | 1.46 × 10^3 | 3.63 × 10^3 | 1.40 × 10^3 | 3.24 × 10^3 | 3.06 × 10^3 | 2.43 × 10^3 | 1.74 × 10^3 | 1.65 × 10^3 | 9.94 × 10^2 |
| | Std | 1.66 × 10^2 | 2.31 × 10^2 | 1.25 × 10^2 | 7.79 × 10^2 | 1.84 × 10^2 | 1.32 × 10^2 | 1.75 × 10^2 | 2.60 × 10^2 | 2.95 × 10^2 | 4.23 × 10^2 | 1.35 × 10^2 |
| | rank | 2 | 5 | 3 | 11 | 4 | 10 | 9 | 8 | 6 | 7 | 1 |
| F38 | Mean | 2.96 × 10^3 | 1.88 × 10^3 | 1.04 × 10^5 | 5.04 × 10^6 | 9.55 × 10^4 | 1.05 × 10^3 | 9.62 × 10^5 | 1.05 × 10^4 | 6.68 × 10^2 | 7.64 × 10^2 | 5.63 × 10^2 |
| | Best | 8.38 × 10^2 | 6.90 × 10^2 | 5.83 × 10^4 | 3.33 × 10^5 | 1.80 × 10^4 | 7.37 × 10^2 | 3.29 × 10^5 | 2.46 × 10^3 | 6.36 × 10^2 | 5.87 × 10^2 | 5.44 × 10^2 |
| | Std | 2.73 × 10^3 | 1.70 × 10^3 | 2.64 × 10^4 | 4.51 × 10^6 | 9.01 × 10^4 | 2.71 × 10^2 | 4.71 × 10^5 | 5.15 × 10^3 | 9.31 × 10^1 | 3.77 × 10^1 | 8.12 × 10^0 |
| | rank | 6 | 5 | 9 | 11 | 8 | 4 | 10 | 7 | 2 | 3 | 1 |
| F39 | Mean | 1.37 × 10^4 | 1.74 × 10^4 | 2.37 × 10^4 | 2.87 × 10^4 | 2.31 × 10^4 | 2.58 × 10^4 | 3.39 × 10^4 | 2.71 × 10^4 | 2.33 × 10^4 | 1.97 × 10^4 | 1.31 × 10^4 |
| | Best | 1.01 × 10^4 | 1.46 × 10^4 | 2.09 × 10^4 | 2.10 × 10^4 | 2.04 × 10^4 | 2.41 × 10^4 | 3.18 × 10^4 | 2.51 × 10^4 | 1.78 × 10^4 | 1.59 × 10^4 | 1.03 × 10^4 |
| | Std | 2.23 × 10^3 | 1.27 × 10^3 | 1.91 × 10^3 | 3.34 × 10^3 | 2.88 × 10^3 | 1.04 × 10^3 | 1.12 × 10^3 | 1.78 × 10^3 | 2.00 × 10^3 | 2.37 × 10^3 | 1.68 × 10^3 |
| | rank | 2 | 3 | 7 | 10 | 5 | 8 | 11 | 9 | 6 | 4 | 1 |
| F40 | Mean | 1.55 × 10^3 | 1.55 × 10^3 | 1.59 × 10^3 | 1.60 × 10^3 | 1.51 × 10^3 | 2.14 × 10^3 | 2.05 × 10^3 | 2.22 × 10^3 | 1.58 × 10^3 | 1.59 × 10^3 | 1.57 × 10^3 |
| | Best | 1.52 × 10^3 | 1.52 × 10^3 | 1.55 × 10^3 | 1.55 × 10^3 | 1.48 × 10^3 | 1.71 × 10^3 | 1.75 × 10^3 | 1.95 × 10^3 | 1.54 × 10^3 | 1.54 × 10^3 | 1.54 × 10^3 |
| | Std | 1.62 × 10^1 | 1.87 × 10^1 | 1.61 × 10^1 | 3.24 × 10^1 | 1.84 × 10^1 | 8.10 × 10^2 | 2.48 × 10^2 | 2.03 × 10^2 | 2.85 × 10^1 | 2.20 × 10^1 | 1.64 × 10^1 |
| | rank | 2 | 3 | 6 | 8 | 1 | 10 | 9 | 11 | 5 | 7 | 4 |
| Count | | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
| Friedman Rank | | 3.0000 | 3.7273 | 6.3636 | 10.0909 | 5.4545 | 7.7273 | 9.7273 | 8.6364 | 5.0000 | 4.5455 | 1.3636 |
Table 9. Execution time (s) spent by each algorithm.
| Suite | Dim | AGSMA | AOSMA | MSMA | DE | GWO | HHO | PSO | CSA | SSA | SMA | ESMA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CEC2017 | 30 Dim | 1828 | 2050 | 4061 | 669 | 382 | 726 | 280 | 248 | 337 | 676 | 2174 |
| CEC2017 | 50 Dim | 2737 | 3171 | 6792 | 1099 | 661 | 1151 | 430 | 392 | 526 | 997 | 3253 |
| CEC2017 | 100 Dim | 6081 | 7457 | 16572 | 2451 | 1530 | 2639 | 1065 | 989 | 1214 | 2055 | 7563 |
| CEC2013 | 30 Dim | 665 | 779 | 1766 | 238 | 147 | 271 | 111 | 103 | 137 | 203 | 814 |
| CEC2013 | 100 Dim | 2075 | 2616 | 5510 | 857 | 507 | 916 | 389 | 381 | 457 | 546 | 2743 |
Table 10. Asymptotic results of Friedman’s test.
| Suite | Dimension | p-Value |
|---|---|---|
| CEC2017 | 30 Dim | 3.62 × 10^−41 |
| CEC2017 | 50 Dim | 1.60 × 10^−42 |
| CEC2017 | 100 Dim | 2.88 × 10^−41 |
| CEC2013 | 30 Dim | 2.13 × 10^−10 |
| CEC2013 | 100 Dim | 1.66 × 10^−12 |
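The asymptotic p-values in Table 10 are the output of Friedman's chi-square test applied to the per-function results of the eleven algorithms. A minimal sketch of that computation using `scipy.stats.friedmanchisquare` is shown below; the result matrix here is a random placeholder (not the experimental data), and the real mean errors from Tables 5–8 would be substituted in its place.

```python
# A minimal sketch (assumed workflow, not the authors' script) of how the
# asymptotic p-values in Table 10 can be computed with Friedman's test.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(seed=1)

# Placeholder stand-in for the 29-function x 11-algorithm matrix of mean
# errors; substitute the values from Tables 5-6 to reproduce Table 10.
results = rng.random((29, 11)) * np.arange(1, 12)

stat, p = friedmanchisquare(*results.T)  # one 29-long sample per algorithm
print(f"chi-square = {stat:.2f}, asymptotic p-value = {p:.2e}")
```

With p-values as far below 0.05 as those in Table 10, the null hypothesis that all algorithms perform equivalently is rejected at every tested dimension.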
Table 11. Comparison results of the path planning problem.
| Algorithm | Average | Longest | Rank |
|---|---|---|---|
| AOA | 31.7220 | 33.5563 | 6 |
| AOSMA | 29.9161 | 30.9705 | 3 |
| GWO | 32.2190 | 35.5563 | 7 |
| DE | 30.0333 | 30.3847 | 5 |
| PSO | 29.9747 | 30.9705 | 4 |
| CSA | 37.0776 | 39.8994 | 8 |
| SMA | 29.9159 | 29.8989 | 2 |
| ESMA | 29.7989 | 29.8109 | 1 |
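The Average and Longest columns of Table 11 summarize the best path length found in each independent run. The sketch below illustrates that aggregation over hypothetical run data; the run counts and values are placeholders for illustration only, not the experimental records.

```python
# Hypothetical illustration (placeholder run data, not the experiment) of how
# the "Average" and "Longest" columns of Table 11 summarize repeated runs.
import numpy as np

# Best collision-free path length found in each independent run.
runs = {
    "SMA":  np.array([29.90, 29.91, 29.93, 29.92]),
    "ESMA": np.array([29.79, 29.80, 29.81, 29.80]),
}

# Rank algorithms by their mean path length, as in the Rank column.
for alg, lengths in sorted(runs.items(), key=lambda kv: kv[1].mean()):
    print(f"{alg:5s} Average = {lengths.mean():.4f}  Longest = {lengths.max():.4f}")
```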
Table 12. Comparison results of the PVD problem.
| Algorithm | Ts | Th | R | L | Best Cost | Rank |
|---|---|---|---|---|---|---|
| ESMA | 1.3599 | 0.6574 | 67.4205 | 10.0000 | 8.4162 × 10^3 | 1 |
| MSMA | 1.3673 | 0.6848 | 67.6234 | 13.5872 | 8.9391 × 10^3 | 4 |
| AGWO | 1.4598 | 0.7219 | 67.9214 | 10.0000 | 9.4779 × 10^3 | 5 |
| WOA | 1.9397 | 0.8150 | 67.3860 | 22.4523 | 1.3705 × 10^4 | 6 |
| ALO | 1.3008 | 0.6430 | 67.4001 | 23.7021 | 8.8770 × 10^3 | 3 |
| PSO | 1.4234 | 0.6556 | 68.1104 | 10.0000 | 8.8129 × 10^3 | 2 |
| PIO | 1.4506 | 0.9869 | 71.1783 | 28.5520 | 1.3887 × 10^4 | 7 |
| SMA | 1.3375 | 0.7070 | 72.1897 | 27.2172 | 4.7433 × 10^5 | 8 |
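Table 12 is consistent with the widely used four-variable pressure vessel design formulation (shell thickness Ts, head thickness Th, inner radius R, and cylinder length L). Under that standard cost model, sketched below from the common literature formulation (the paper's own code is not shown), the ESMA variables reproduce the reported best cost of 8.4162 × 10^3 and satisfy all four inequality constraints.

```python
# Standard pressure-vessel design (PVD) model from the literature (a sketch,
# not the authors' code). Evaluating the ESMA row of Table 12 under this
# common formulation reproduces the reported best cost of about 8416.2.
import math

def pvd_cost(Ts, Th, R, L):
    """Cost of material, forming, and welding."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def pvd_constraints(Ts, Th, R, L):
    """Inequality constraints; the design is feasible when all are <= 0."""
    return (
        -Ts + 0.0193 * R,                                            # g1: shell thickness
        -Th + 0.00954 * R,                                           # g2: head thickness
        -math.pi * R**2 * L - (4 / 3) * math.pi * R**3 + 1_296_000,  # g3: minimum volume
        L - 240.0,                                                   # g4: length limit
    )

x = (1.3599, 0.6574, 67.4205, 10.0000)  # ESMA solution from Table 12
print(f"best cost = {pvd_cost(*x):.1f}")            # ~8416.2
print("feasible =", all(g <= 0 for g in pvd_constraints(*x)))
```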