Article

Search Patterns Based on Trajectories Extracted from the Response of Second-Order Systems

1 Departamento de Electrónica, Universidad de Guadalajara, CUCEI Av. Revolución 1500, Guadalajara 44430, Mexico
2 Faculty of Science, Al-Azhar University, Cairo 11651, Egypt
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(8), 3430; https://doi.org/10.3390/app11083430
Submission received: 6 March 2021 / Revised: 30 March 2021 / Accepted: 1 April 2021 / Published: 12 April 2021
(This article belongs to the Special Issue Applied (Meta)-Heuristic in Intelligent Systems)

Abstract

Recently, several new metaheuristic schemes have been introduced in the literature. Although all these approaches consider very different phenomena as metaphors, the search patterns used to explore the search space are very similar. On the other hand, second-order systems are models that present different temporal behaviors depending on the value of their parameters. Such temporal behaviors can be conceived as search patterns with multiple behaviors and simple configurations. In this paper, a set of new search patterns is introduced to explore the search space efficiently. They emulate the response of a second-order system. The proposed set of search patterns has been integrated as a complete search strategy, called the Second-Order Algorithm (SOA), to obtain the global solution of complex optimization problems. To analyze the performance of the proposed scheme, it has been compared on a set of representative optimization problems, including multimodal, unimodal, and hybrid benchmark formulations. Numerical results demonstrate that the proposed SOA method exhibits remarkable performance in terms of accuracy and high convergence rates.

1. Introduction

Metaheuristic algorithms are generic optimization schemes that emulate the operation of different natural or social processes. In metaheuristic approaches, the optimization strategy is performed by a set of search agents. Each agent maintains a possible solution to the optimization problem and is initially produced as a random feasible solution. An objective function determines the quality of each agent's solution. Using the values of the objective function, at each iteration, the positions of the search agents are modified by employing a set of search patterns that regulate their movements within the search space. Such search patterns are abstract models inspired by natural or social processes [1]. These steps are repeated until a stop criterion is reached. Metaheuristic schemes have demonstrated their effectiveness in diverse real-world applications in circumstances where classical methods cannot be adopted.
Essentially, a clear classification of metaheuristic methods does not exist. Despite this, several categories have been proposed considering different criteria, such as the source of inspiration, the type of operators, or the cooperation among agents. In relation to inspiration, nature-inspired metaheuristic algorithms are classified into three categories: evolution-based, swarm-based, and physics-based. Evolution-based approaches correspond to the most consolidated search strategies, which use elements of evolution as operators to produce search patterns. Consequently, operations such as reproduction, mutation, recombination, and selection are used to generate search patterns during their operation. The most representative examples of evolution-based techniques include Evolutionary Strategies (ES) [2,3,4], Genetic Algorithms (GA) [5], Differential Evolution (DE) [6] and Self-Adaptive Differential Evolution (JADE) [7]. Swarm-inspired techniques use behavioral schemes extracted from the collaborative interaction of different animals or insect species to produce a search strategy. Recently, a large number of swarm-based approaches have been published in the literature. Among the most popular swarm-inspired approaches are the Crow Search Algorithm (CSA) [8], Artificial Bee Colony (ABC) [9], the Particle Swarm Optimization (PSO) algorithm [10,11,12], the Firefly Algorithm (FA) [13,14], Cuckoo Search (CS) [15], the Bat Algorithm (BA) [16], the Gray Wolf Optimizer (GWO) [17] and the Moth-Flame Optimization algorithm (MFO) [18], to name a few. Metaheuristic algorithms that follow the physics-based scheme use simplified physical models to produce search patterns for their agents.
Some examples of the most representative physics-based techniques involve the States of Matter Search (SMS) [19,20], the Simulated Annealing (SA) algorithm [21,22,23], the Gravitational Search Algorithm (GSA) [24], the Water Cycle Algorithm (WCA) [25], the Big Bang-Big Crunch (BB-BC) [26] and the Electromagnetism-like Mechanism (EM) [27]. Figure 1 visually exhibits the taxonomy of the metaheuristic classification. Although all these approaches consider very different phenomena as metaphors, the search patterns used to explore the search space are based almost exclusively on spiral elements or attraction models [10,11,12,13,14,15,16,17,28]. Under such conditions, the design of many metaheuristic methods amounts to marginally modifying a recycled search pattern that has proven successful in previous approaches in order to generate a new optimization scheme.
On the other hand, the order of a differential equation refers to the highest degree of derivative considered in the model. Therefore, a model whose input-output formulation is a second-order differential equation is known as a second-order system [29]. One of the main elements that make a second-order model important is its ability to present very different behaviors, depending on the configuration of its parameters. Through its different behaviors, such as oscillatory, underdamped, or overdamped, a second-order system can exhibit distinct temporal responses [30]. Such behaviors can be observed as search trajectories under the perspective of metaheuristic schemes. Therefore, with second-order systems, it is possible to produce oscillatory movements within a certain region or build complex search patterns around different points or sections of the search space.
In this paper, a set of new search patterns is introduced to explore the search space efficiently. They emulate the response of a second-order system. The proposed set of search patterns has been integrated as a complete search strategy, called the Second-Order Algorithm (SOA), to obtain the global solution of complex optimization problems. To analyze the performance of the proposed scheme, it has been compared on a set of representative optimization problems, including multimodal, unimodal, and hybrid benchmark formulations. The competitive results demonstrate the promise of the proposed search patterns.
The main contributions of this research can be stated as follows:
  • A new physics-based optimization algorithm, namely SOA, is introduced. It uses search patterns obtained from the response of second-order systems.
  • New search patterns are proposed as an alternative to those known in the literature.
  • The statistical significance, convergence speed and exploitation-exploration ratio of SOA are evaluated against other popular metaheuristic algorithms.
  • SOA outperforms other competitor algorithms on two sets of optimization problems.
The remainder of this paper is structured as follows: A brief introduction of the second-order systems is given in Section 2; in Section 3, the most important search patterns in metaheuristic methods are discussed; in Section 4, the proposed search patterns are defined; in Section 5, the measurement of exploration-exploitation is described; in Section 6, the proposed scheme is introduced; Section 7 presents the numerical results; in Section 8, the main characteristics of the proposed approach are discussed; in Section 9, finally, the conclusions are drawn.

2. Second-Order Systems

A model whose input R(s)–output C(s) relation is a second-order closed-loop transfer function is known as a second-order system. One of the main elements that make a second-order model important is its ability to present very different behaviors depending on the configuration of its parameters. A generic second-order model can be formulated with the following expression [29]:
\[
\frac{C(s)}{R(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2},
\tag{1}
\]
where ζ and ω_n represent the damping ratio and the natural frequency, respectively, while s symbolizes the Laplace variable.
The dynamic behavior of a system is evaluated in terms of the temporal response obtained through a unitary step signal as the input R(s). The dynamic behavior is defined as the way in which the system reacts while trying to reach the value of one as time evolves. The dynamic behavior of the second-order system is described in terms of ζ and ω_n [30]. Depending on these parameters, the second-order system presents three different behaviors: underdamped (0 < ζ < 1), critically damped (ζ = 1), and overdamped (ζ > 1).

2.1. Underdamped Behavior (0 < ζ < 1)

In this behavior, the poles (roots of the denominator) of Equation (1) are complex conjugates located in the left half of the s plane. Under such conditions, the underdamped response of the system, C_U(s), in the Laplace domain can be characterized as follows:
\[
C_U(s) = \frac{\omega_n^2}{s\left[(s + \zeta\omega_n)^2 + \omega_n^2\left(1 - \zeta^2\right)\right]}.
\tag{2}
\]
Applying partial-fraction expansion and the inverse Laplace transform, the temporal response that describes the underdamped behavior, c_U(t), is obtained as indicated in Equation (3):
\[
c_U(t) = 1 - \frac{e^{-\zeta\omega_n t}}{\sqrt{1 - \zeta^2}}\,\sin\!\left(\omega_n\sqrt{1 - \zeta^2}\;t + \tan^{-1}\frac{\sqrt{1 - \zeta^2}}{\zeta}\right).
\tag{3}
\]
If ζ = 0, a special case arises in which the temporal response of the system is purely oscillatory. The output of these behaviors is visualized in Figure 2 for the cases of ζ = 0, ζ = 0.2, ζ = 0.5 and ζ = 0.707. Under the underdamped behavior, the system response starts with high acceleration; therefore, the response produces an overshoot that surpasses the value of one. The size of the overshoot depends inversely on the value of ζ.
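The underdamped response of Equation (3) can be evaluated numerically. Below is a minimal sketch in Python; the sampling grid and the ζ values are illustrative choices, not taken from the paper:

```python
import math

def underdamped_response(t, zeta, omega_n=1.0):
    """Temporal response c_U(t) of Equation (3) for 0 < zeta < 1."""
    wd = omega_n * math.sqrt(1.0 - zeta ** 2)           # damped frequency
    phi = math.atan2(math.sqrt(1.0 - zeta ** 2), zeta)  # phase tan^-1(sqrt(1-z^2)/z)
    return 1.0 - (math.exp(-zeta * omega_n * t)
                  / math.sqrt(1.0 - zeta ** 2)) * math.sin(wd * t + phi)

def peak(zeta):
    """Maximum of the response over an illustrative time window [0, 30)."""
    return max(underdamped_response(0.01 * k, zeta) for k in range(3000))

# The smaller zeta is, the larger the overshoot above the target value of one.
print(peak(0.2) > peak(0.5) > 1.0)  # True
```

The comparison confirms the overshoot behavior described above: both peaks exceed one, and the peak for ζ = 0.2 is larger than for ζ = 0.5.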

2.2. Critically Damped Behavior (ζ = 1)

Under this behavior, the two poles of the transfer function of Equation (1) are real and have the same value. Therefore, the response of the critically damped behavior, C_C(s), in the Laplace domain can be described as follows:
\[
C_C(s) = \frac{\omega_n^2}{s\,(s + \omega_n)^2}.
\tag{4}
\]
Considering the inverse Laplace transform of Equation (4), the temporal response of the critically damped behavior c_C(t) is determined by the following model:
\[
c_C(t) = 1 - e^{-\omega_n t}\left(1 + \omega_n t\right).
\tag{5}
\]
Under the critically damped behavior, the system response presents a temporal pattern similar to that of a first-order system. It reaches the objective value of one without exhibiting an overshoot. The output of the critically damped behavior is visualized in Figure 2.
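The no-overshoot property of Equation (5) can be checked numerically. A small sketch (the time grid is an arbitrary illustrative choice):

```python
import math

def critically_damped_response(t, omega_n=1.0):
    """Temporal response c_C(t) = 1 - e^{-w_n t}(1 + w_n t) of Equation (5)."""
    return 1.0 - math.exp(-omega_n * t) * (1.0 + omega_n * t)

samples = [critically_damped_response(0.05 * k) for k in range(400)]
# The response rises monotonically toward one and never exceeds it.
print(all(b >= a for a, b in zip(samples, samples[1:])) and max(samples) < 1.0)  # True
```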

2.3. Overdamped Behavior (ζ > 1)

In the overdamped case, the two poles of the transfer function of Equation (1) are real but have different values. Its response C_O(s) in the Laplace domain is modeled by the following formulation:
\[
C_O(s) = \frac{\omega_n^2}{s\left(s + \zeta\omega_n + \omega_n\sqrt{\zeta^2 - 1}\right)\left(s + \zeta\omega_n - \omega_n\sqrt{\zeta^2 - 1}\right)}.
\tag{6}
\]
After applying the inverse Laplace transform, the temporal response of the overdamped behavior c_O(t) is obtained, defined as follows:
\[
c_O(t) = 1 - e^{-\left(\zeta - \sqrt{\zeta^2 - 1}\right)\omega_n t}.
\tag{7}
\]
Under the overdamped behavior, the system reacts slowly until it reaches the value of one. The deceleration of the response depends on the value of ζ: the greater the value of ζ, the slower the response. The output of this behavior is visualized in Figure 2 for the case of ζ = 1.67.
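Equation (7) keeps only the dominant (slower) pole of Equation (6), so the response is a single decaying exponential. A quick sketch illustrating the slower rise for larger ζ (the evaluation time t = 5 is an arbitrary illustrative choice):

```python
import math

def overdamped_response(t, zeta, omega_n=1.0):
    """Dominant-pole temporal response c_O(t) of Equation (7) for zeta > 1."""
    return 1.0 - math.exp(-(zeta - math.sqrt(zeta ** 2 - 1.0)) * omega_n * t)

# The greater zeta is, the slower the response approaches the value of one.
print(overdamped_response(5.0, zeta=1.67) > overdamped_response(5.0, zeta=3.0))  # True
```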

3. Search Patterns in Metaheuristics

The generation of efficient search patterns for the correct exploration of a fitness landscape can be complicated, particularly in the presence of ruggedness and multiple local optima. Recently, several new metaheuristic schemes have been introduced in the literature. Although all these approaches consider very different phenomena as metaphors, the search patterns used to explore the search space are very similar. A search pattern is a set of movements produced by a rule or model in order to examine promising solutions in the search space.
Exploration and exploitation correspond to the most important characteristics of a search pattern. Exploration refers to the ability of a search pattern to examine a set of solutions spread in distinct areas of the search space. On the other hand, exploitation represents the capacity of a search pattern to improve the accuracy of the existent solutions through a local examination. The combination of both mechanisms in a search pattern is crucial for attaining success when solving a particular optimization problem.
To solve the optimization formulation from a metaheuristic point of view, a population P^k = {x_1^k, …, x_N^k} of N candidate solutions (individuals) evolves from an initial point (k = 1) to a maximum number of generations (k = Maxgen). In the population, each individual x_i^k (i ∈ {1, …, N}) corresponds to a d-dimensional vector (x_{i,1}^k, …, x_{i,d}^k), which symbolizes the decision variables of the optimization problem. At each generation, search patterns are applied over the individuals of the population P^k to produce the new population P^{k+1}. The quality of each individual x_i^k is evaluated in terms of the objective function J(x_i^k), whose result represents the fitness value of x_i^k. As the metaheuristic method evolves, the best current individual b = (b_1, …, b_d) is maintained, since b represents the best solution seen so far.
In general, a search pattern is applied to each individual x_i^k using the best element b as a reference. Then, following a particular model, a set of movements is produced to modify the position of x_i^k until the location of b has been reached. The idea behind this mechanism is to examine solutions on the trajectory from x_i^k to b with the objective of finding a better solution than the current b. Search patterns differ in the model employed to produce the trajectories from x_i^k to b.
Two of the most popular search models are attraction and spiral trajectories. The attraction model generates attraction movements from x_i^k to b. It is used extensively by several metaheuristic methods, such as PSO [10,11,12], FA [13,14], CS [15], BA [16], GSA [24], EM [27] and DE [6]. On the other hand, the spiral model produces a spiral trajectory that encircles the best element b; it is employed by the recently published WOA and GWO schemes. Trajectories produced by the attraction and spiral models are visualized in Figure 3a,b, respectively.
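The attraction principle can be illustrated with a generic sketch: each agent coordinate moves toward the corresponding coordinate of b by a random fraction of the remaining distance. This is only an illustration of the principle, not the exact update rule of any of the cited algorithms; the step-size factor is an arbitrary choice:

```python
import random

def attraction_step(x, b, strength=0.5):
    """Move each coordinate of x toward the best element b by a random fraction."""
    return [xi + strength * random.random() * (bi - xi) for xi, bi in zip(x, b)]

random.seed(0)
x, b = [0.0, 0.0], [1.0, 1.0]
for _ in range(50):
    x = attraction_step(x, b)  # the agent is repeatedly attracted toward b
print(all(abs(bi - xi) < 0.1 for xi, bi in zip(x, b)))  # True
```

Each step shrinks the remaining distance to b by a random factor, so the trajectory converges toward the best element, as in Figure 3a.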

4. Proposed Search Patterns

In this paper, a set of new search patterns is introduced to explore the search space efficiently. They emulate the response of a second-order system. The proposed set of search patterns has been integrated as a complete search strategy to obtain the global solution of complex optimization problems. Since the proposed scheme is based on the response of second-order systems, it can be considered a physics-based algorithm. In our approach, the temporal response of a second-order system is used to generate the trajectory from the position x_i^k = (x_{i,1}^k, …, x_{i,d}^k) to the location of b = (b_1, …, b_d). With the use of such models, it is possible to produce more complex trajectories that allow a better examination of the search space. Under such conditions, we consider the three different responses of a second-order system to produce three distinct search patterns: underdamped, critically damped and overdamped, modeled by Equations (8)–(10), respectively:
\[
x_{i,j}^{k+1} = \left(1 - \frac{e^{-\zeta\omega_n k}}{\sqrt{1 - \zeta^2}}\,\sin\!\left(\omega_n\sqrt{1 - \zeta^2}\;k + \tan^{-1}\frac{\sqrt{1 - \zeta^2}}{\zeta}\right)\right)\left(b_j - x_{i,j}^k\right);
\tag{8}
\]
\[
x_{i,j}^{k+1} = \left(1 - e^{-\omega_n k}\left(1 + \omega_n k\right)\right)\left(b_j - x_{i,j}^k\right);
\tag{9}
\]
\[
x_{i,j}^{k+1} = \left(1 - e^{-\left(\zeta - \sqrt{\zeta^2 - 1}\right)\omega_n k}\right)\left(b_j - x_{i,j}^k\right);
\tag{10}
\]
where i ∈ {1, …, N} indexes the search agent and j ∈ {1, …, d} the decision variable or dimension. Since the behavior of each search pattern depends on the value of ζ, it is easy to combine elements to produce interesting trajectories. Figure 4 presents some examples of trajectories produced by using different values for ζ. In the figure, a two-dimensional case (d = 2) is assumed, where the initial position of the search agent x_i^k is (0.5, 0.5) and the final (best) location is (1, 1). Figure 4a presents the case of ζ = 0 for x_{i,1}^k and ζ = 1 for x_{i,2}^k. Figure 4b presents the case of ζ = 0.1 for x_{i,1}^k and ζ = 0.5 for x_{i,2}^k. Figure 4c presents the case of ζ = 1 for x_{i,1}^k and ζ = 1.67 for x_{i,2}^k. Finally, Figure 4d presents the case of ζ = 0.5 for x_{i,1}^k and ζ = 1 for x_{i,2}^k. From the figures, it is clear that the second-order responses allow producing several complex trajectories, which include most of the other search patterns known in the literature. In all cases (a)–(d), the value of ω_n has been set to 1.
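Trajectories like those of Figure 4 can be visualized by evaluating the responses of Section 2 along the line from the initial position to b. The interpolation form below (initial position plus response-scaled displacement toward b) is our reading of Equations (8)–(10) and should be treated as an assumption rather than the paper's exact update rule:

```python
import math

def second_order_response(k, zeta, omega_n=1.0):
    """Response at step k for the underdamped, critically damped or overdamped case."""
    if zeta < 1.0:   # underdamped (purely oscillatory when zeta = 0)
        wd = omega_n * math.sqrt(1.0 - zeta ** 2)
        phi = math.atan2(math.sqrt(1.0 - zeta ** 2), zeta)
        return 1.0 - (math.exp(-zeta * omega_n * k)
                      / math.sqrt(1.0 - zeta ** 2)) * math.sin(wd * k + phi)
    if zeta == 1.0:  # critically damped
        return 1.0 - math.exp(-omega_n * k) * (1.0 + omega_n * k)
    # overdamped (dominant pole)
    return 1.0 - math.exp(-(zeta - math.sqrt(zeta ** 2 - 1.0)) * omega_n * k)

def trajectory(start, best, zetas, steps=30):
    """Per-dimension trajectory from start toward best, one zeta per dimension."""
    return [[s + second_order_response(k, z) * (b - s)
             for s, b, z in zip(start, best, zetas)]
            for k in range(steps + 1)]

# Case of Figure 4d: zeta = 0.5 in the first dimension and zeta = 1 in the second.
path = trajectory([0.5, 0.5], [1.0, 1.0], [0.5, 1.0])
print([round(v, 3) for v in path[0]], [round(v, 3) for v in path[-1]])
```

The path starts at (0.5, 0.5), overshoots in the underdamped dimension, and settles near (1, 1), mirroring the qualitative behavior in Figure 4d.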

5. Balance of Exploration and Exploitation

Metaheuristic methods employ a set of search agents to examine the search space with the objective of identifying a satisfactory solution for an optimization formulation. In metaheuristic schemes, search agents that present the best fitness values tend to regulate the search process, producing an attraction toward them. Under such conditions, as the optimization process evolves, the distance among individuals diminishes and the effect of exploitation is highlighted. On the other hand, when the distance among individuals increases, the characteristics of the exploration process are more evident.
To compute the relative distances among individuals (their increase and decrease), a diversity indicator known as the dimension-wise diversity index [31] is used. Under this approach, the diversity is formulated as follows:
\[
Div_j = \frac{1}{N}\sum_{i=1}^{N}\left|\operatorname{median}\!\left(x_j\right) - x_{i,j}\right|, \qquad
Div = \frac{1}{d}\sum_{j=1}^{d} Div_j,
\tag{11}
\]
where median(x_j) symbolizes the median of dimension j over all search agents, x_{i,j} represents decision variable j of individual i, N is the number of individuals in the population P^k, and d corresponds to the number of dimensions of the optimization formulation. The diversity Div_j (of the j-th dimension) evaluates the relative distance between variable j of each individual and its median value. The complete diversity Div (of the entire population) corresponds to the average diversity over the dimensions. Both Div_j and Div are calculated in every iteration.
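The diversity index of Equation (11) can be computed directly. A small sketch with made-up illustrative populations:

```python
import statistics

def diversity(population):
    """Dimension-wise diversity Div_j and population diversity Div (Equation (11))."""
    n, d = len(population), len(population[0])
    div_j = []
    for j in range(d):
        med = statistics.median(x[j] for x in population)
        div_j.append(sum(abs(med - x[j]) for x in population) / n)
    return div_j, sum(div_j) / d

# A spread-out population yields a larger diversity than a concentrated one.
spread = [[0.0, 0.0], [10.0, 10.0], [5.0, 5.0], [8.0, 2.0]]
tight = [[5.0, 5.0], [5.1, 5.0], [4.9, 5.0], [5.0, 5.1]]
print(diversity(spread)[1] > diversity(tight)[1])  # True
```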
Having evaluated the diversity values, the level of exploration and exploitation can be computed as the percentage of the time that a search strategy invests exploring or exploiting in terms of its diversity values. These percentages are calculated in each iteration by means of the following models,
\[
XPL\% = \frac{Div}{Div_{max}} \times 100, \qquad
XPT\% = \frac{\left|Div - Div_{max}\right|}{Div_{max}} \times 100,
\tag{12}
\]
where Div_max symbolizes the maximum diversity value obtained during the optimization process. The percentage of exploration XPL% corresponds to the level of exploration, computed as the ratio between Div and Div_max. On the other hand, the percentage of exploitation XPT% symbolizes the level of exploitation. XPT% is computed as the complementary percentage to XPL%, since the difference between Div_max and Div is generated by the concentration of individuals.
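Given a history of Div values recorded over the iterations, the percentages of Equation (12) follow directly. A minimal sketch (the diversity history is invented for illustration):

```python
def exploration_exploitation(div_history):
    """XPL% and XPT% of Equation (12) for a recorded history of Div values."""
    div_max = max(div_history)
    xpl = [100.0 * div / div_max for div in div_history]
    xpt = [100.0 * (div_max - div) / div_max for div in div_history]
    return xpl, xpt

# A typical run: diversity starts high (exploration) and decays (exploitation).
xpl, xpt = exploration_exploitation([4.0, 3.0, 2.0, 1.0, 0.5])
print(xpl[0], xpt[-1])  # 100.0 87.5
```

By construction, XPL% and XPT% sum to 100 at every iteration, matching the complementary relation described above.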

6. Proposed Metaheuristic Algorithm

The set of search patterns based on second-order systems has been integrated as a complete search strategy to obtain the global solution of complex optimization problems. In this section, the complete metaheuristic method, called the Second-Order Algorithm (SOA), is described.
The scheme considers four different stages: (A) initialization, (B) trajectory generation, (C) reset of bad elements, and (D) a mechanism to avoid premature convergence. Steps (B)–(D) are sequentially executed until a stop criterion has been reached. Figure 5 shows the flowchart of the complete metaheuristic method.

6.1. Initialization

In the first iteration (k = 0), a population P^0 of N agents (x_1^0, …, x_N^0) is randomly produced according to the following equation:
\[
x_{i,j}^0 = \operatorname{rand}\cdot\left(b_j^{high} - b_j^{low}\right) + b_j^{low}, \qquad i = 1, 2, \ldots, N;\; j = 1, \ldots, d,
\tag{13}
\]
where b_j^{high} and b_j^{low} are the upper and lower limits of the j-th decision variable, and rand is a uniformly distributed random number in [0,1].
To each individual x_i of the population, a vector ζ_i = (ζ_{i,1}, …, ζ_{i,d}) is assigned, whose elements ζ_{i,j} determine the trajectory behavior of each j-th dimension. Initially, each element ζ_{i,j} is set to a random value in [0,2]. Within this interval, all the second-order behaviors are possible: underdamped (0 < ζ < 1), critically damped (ζ = 1), and overdamped (ζ > 1).
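The initialization stage can be sketched as follows; the bounds, population size and seed are illustrative choices, not values from the paper:

```python
import random

def initialize(n, d, low, high, seed=None):
    """Random population (Equation (13)) and one zeta vector in [0, 2] per agent."""
    rng = random.Random(seed)
    population = [[rng.random() * (high[j] - low[j]) + low[j] for j in range(d)]
                  for _ in range(n)]
    zetas = [[rng.uniform(0.0, 2.0) for _ in range(d)] for _ in range(n)]
    return population, zetas

population, zetas = initialize(n=50, d=2, low=[-5.0, -5.0], high=[5.0, 5.0], seed=1)
print(all(-5.0 <= v <= 5.0 for x in population for v in x))  # True
```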

6.2. Trajectory Generation

Once the population has been initialized, the best element b of the population is obtained. Then, the new position x_i^{k+1} of each agent x_i^k is computed as a trajectory generated by a second-order system. Once all new positions in the population P^k are determined, the best element b is updated.

6.3. Reset of Bad Elements

Each agent x_i^k is allowed to move along its own trajectory for ten iterations. After ten iterations, if the search agent x_i^k still exhibits the worst performance in terms of the fitness function, it is reinitialized in both its position and its vector ζ_i. Under such conditions, the search agent is placed at another position with the ability to follow another kind of trajectory behavior.

6.4. Mechanism to Avoid Premature Convergence

If the percentage of exploration XPL% is less than 5%, the best value b is replaced by the virtual best value b_v. The element b_v is computed as the average of the best five individuals of the population. The idea behind this mechanism is to identify a new position that generates different trajectories and prevents the search process from getting trapped in a local optimum.
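The virtual best computation can be sketched directly; the toy population and fitness function below are invented for illustration, and minimization is assumed:

```python
def virtual_best(population, fitness, top=5):
    """Average of the `top` best agents, replacing b when XPL% drops below 5%."""
    ranked = sorted(population, key=fitness)[:top]  # minimization assumed
    d = len(population[0])
    return [sum(x[j] for x in ranked) / len(ranked) for j in range(d)]

# Toy example: minimize the squared distance to the origin.
pop = [[1.0, 1.0], [2.0, 0.0], [0.0, 2.0], [3.0, 3.0], [0.5, 0.5], [4.0, 4.0]]
bv = virtual_best(pop, fitness=lambda x: sum(v * v for v in x))
print(bv)  # [1.3, 1.3]
```

Averaging the top agents shifts the reference point away from the single best element, which produces new trajectories for the whole population.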

7. Experimental Results

To evaluate the results of the proposed SOA algorithm, a set of experiments has been conducted. Such results have been compared to those produced by the Artificial Bee Colony (ABC) [9], the Covariance matrix adaptation evolution strategy (CMAES) [4], the Crow Search Algorithm (CSA) [8], the Differential Evolution (DE) [6], the Moth-flame optimization algorithm (MFO) [18] and the Particle Swarm Optimization (PSO) [10], which are considered the most popular metaheuristic schemes in many optimization studies [32].
For the comparison, all methods have been set according to their reported guidelines. Such configurations are described as follows:
  • ABC: Onlooker Bees = 50, acceleration coefficient = 1 [9].
  • DE: crossover probability = 0.2, Beta = 1 [6].
  • CMAES: Lambda = 50, father number = 25, sigma = 60, csigma = 0.32586, dsigma = 1.32586 [4].
  • CSA: Flock = 50, awareness probability = 0.1, flight length = 2 [8].
  • MFO: search agents = 50, “a” linearly decreases from 2 to 0 [18].
  • SOA: the experimental results give the best algorithm performance with the following parameter set: par1 = 0.7, par2 = 0.3 and par3 = 0.05.
In our analysis, the population size N has been set to 50 search agents. The maximum iteration number Maxgen for all functions has been set to 1000. This stop criterion has been chosen to keep compatibility with similar works published in the literature [33,34]. To evaluate the results, three different indicators are considered: the Average Best-so-far (AB) solution, the Median Best-so-far (MB) solution and the Standard Deviation (SD) of the best-so-far solutions. In the analysis, each optimization problem is solved by every algorithm 30 times, producing 30 results per problem. From these values, the mean of all best-found solutions represents the Average Best-so-far (AB) solution. Likewise, the median of the 30 results is computed to generate MB, and the standard deviation of the 30 data points is estimated to obtain SD. Indicators AB and MB reflect the accuracy of the solutions, while SD reflects their dispersion and, thus, the robustness of the algorithm.
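The three indicators reduce to standard summary statistics over the per-run best-so-far values. A minimal sketch (the sample values are invented for illustration):

```python
import statistics

def summarize(best_so_far):
    """AB, MB and SD indicators over the best-so-far results of independent runs."""
    return (statistics.mean(best_so_far),    # Average Best-so-far (AB)
            statistics.median(best_so_far),  # Median Best-so-far (MB)
            statistics.stdev(best_so_far))   # Standard Deviation (SD)

ab, mb, sd = summarize([0.10, 0.20, 0.15, 0.12, 0.30])
print(round(ab, 3), mb)  # 0.174 0.15
```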
The experimental section is divided into five sub-sections. In Section 7.1, the performance of SOA is evaluated with regard to multimodal functions. In Section 7.2, the results of the SOA method in comparison with other similar approaches are analyzed in terms of unimodal functions. In Section 7.3, a comparative study among the algorithms examining hybrid functions is accomplished. In Section 7.4, the ability of all algorithms to converge is analyzed. Finally, in Section 7.5, the performance of the SOA method in solving the CEC 2017 set of functions is analyzed.

7.1. Multimodal Functions

In this sub-section, the SOA approach is evaluated considering 12 multimodal functions (f1(x)–f12(x)) reported in Table A1 of Appendix A. Multimodal functions present optimization surfaces that involve multiple local optima; for this reason, these functions are more complicated to solve. In this analysis, the performance of the SOA method is examined in comparison with ABC, CMAES, CSA, DE, MFO and PSO in terms of the multimodal functions. Multimodal objective functions correspond to functions f1(x) to f12(x) in Table A1 of the Appendix, where the set of local minima grows as the dimension of the function increases. Therefore, the study exhibits the capacity of each metaheuristic scheme to identify the global optimum when the function contains several local optima. In the experiments, objective functions operating in 30 dimensions (d = 30) are assumed. The averaged best (AB) results over 30 independent executions are exhibited in Table 1, which also reports the median values (MB) and the standard deviations (SD).
According to Table 1, the proposed SOA scheme obtains better performance than ABC, CMAES, CSA, DE, MFO and PSO in functions f1(x), f4(x), f5(x), f6(x), f8(x), f9(x), f10(x), f11(x) and f12(x). Nevertheless, the results of SOA are similar to those obtained by DE, CMAES and MFO in functions f2(x), f3(x) and f7(x).
To statistically validate the conclusions from Table 1, a non-parametric study is considered. In this test, the Wilcoxon rank-sum analysis [35] is adopted with the objective of validating the performance results. This statistical test evaluates whether a significant difference exists when two methods are compared. For this reason, the analysis is performed as a pairwise comparison: SOA versus ABC, SOA versus CMAES, SOA versus CSA, SOA versus DE, SOA versus MFO and SOA versus PSO. In the Wilcoxon analysis, the null hypothesis (H0) states that there is no significant difference between the results, while the alternative hypothesis (H1) states that a significant difference exists. For the Wilcoxon analysis, a significance level of 0.05 is assumed, considering 30 independent executions for each test function. Table 2 shows the p-values produced by the Wilcoxon study for the results of Table 1 (where d = 30). For faster visualization, in the table, we use the symbols ▲, ▼ and ►. The symbol ▲ indicates that the SOA algorithm produces significantly better solutions than its competitor, ▼ indicates that SOA obtains worse results than its counterpart, and ► denotes that both compared methods produce similar solutions. A close inspection of Table 2 demonstrates that for functions f1, f4, f5, f6, f8, f9, f10, f11 and f12, the proposed SOA scheme obtains better solutions than the other methods. On the other hand, for functions f2, f3 and f7, the groups SOA versus ABC, SOA versus CMAES, SOA versus DE and SOA versus MFO present similar solutions.

7.2. Unimodal Functions

In this subsection, the performance of SOA is compared with ABC, DE, CMAES, CSA and MFO, considering four unimodal functions with only one optimum. Such functions are represented by functions f13(x) to f16(x) in Table A1. In the test, all functions are considered in 30 dimensions (d = 30). The experimental results, obtained from 30 independent executions, are presented in Table 3 in terms of AB, MB and SD. According to Table 3, the SOA approach provides better performance than ABC, DE, CMAES, CSA and MFO for all functions. In general, this study demonstrates large differences in performance among the metaheuristic schemes, which are directly related to the better trade-off between exploration and exploitation produced by the trajectories of the SOA scheme. Considering the information from Table 3, Table 4 reports the results of the Wilcoxon analysis. From an inspection of the p-values in Table 4, it is clear that the proposed SOA method presents superior performance compared with each metaheuristic algorithm considered in the experimental study.

7.3. Hybrid Functions

In this study, hybrid functions are used to evaluate the optimization solutions of the SOA scheme. Hybrid functions refer to multimodal optimization problems produced by the combination of several multimodal functions. These functions correspond to the formulations from f 17 x to f 20 x , which are shown in Table A1 in Appendix A. In the experiments, the performance of our proposed SOA approach has been compared with other metaheuristic schemes.
The simulation results are reported in Table 5, which exhibits the performance of each algorithm in terms of AB, MB and SD. From Table 5, it can be observed that the SOA method presents superior performance compared with the other techniques in all functions. Table 6 reports the results of the Wilcoxon analysis based on the Average Best-so-far (AB) values of Table 5. Since all elements present the symbol ▲, they validate that the proposed SOA method produces better results than the other methods. The remarkable performance of the proposed SOA scheme for hybrid functions is attributed to a better balance between the exploration and exploitation of its operators, provoked by the properties of the second-order system trajectories. This indicates that the SOA approach generates an appropriate number of promising search agents that allow an adequate exploration of the search space. On the other hand, a balanced number of candidate solutions is also produced that makes it possible to improve the quality of the already-detected solutions in terms of the objective function.

7.4. Convergence Analysis

The evaluation of accuracy in the final solution cannot completely assess the abilities of an optimization algorithm. The convergence of a metaheuristic scheme also represents an important property for assessing its performance, since it determines the speed with which the scheme reaches the optimal solution. In this subsection, a convergence study has been carried out. In the comparisons, for the sake of space, the performance of the best four metaheuristic schemes is considered, adopting a representative set of six functions (two multimodal, two unimodal and two hybrid) operated in 30 dimensions. To generate the convergence graphs, the raw simulation data produced in the different experiments was processed. Since each simulation is executed 30 times for each metaheuristic method, the convergence data of each execution corresponds to the median result. Figure 6, Figure 7 and Figure 8 show the convergence graphs for the four best-performing metaheuristic methods. A close inspection of these figures demonstrates that the proposed SOA scheme presents better convergence than the other algorithms for all functions.

7.5. Performance Evaluation with CEC 2017

In this subsection, the performance of the SOA method in solving the CEC 2017 set of functions is also analyzed. The CEC2017 function set [36] represents one of the most elaborate platforms for benchmarking and comparing search strategies for numerical optimization. The CEC2017 benchmark corresponds to a test environment of 30 different functions with distinct features, identified as F1(x) to F30(x). Most of these functions are similar to those exhibited in Appendix A, but with different translation and/or rotation effects. The average results over 30 independent executions are registered in Table 7. The results are reported in terms of the performance indexes Average Best fitness (AB), Median Best fitness (MB) and the standard deviation of the best fitnesses (SD).
According to Table 7, SOA provides better results than ABC, CMAES, CSA, DE, MFO and PSO for almost all functions. A close inspection of this table reveals that the SOA scheme attained the best performance level, obtaining the best results in 22 of the 30 functions of the CEC2017 set. Likewise, CMAES ranks second on most of the performance indexes, while the DE and CSA techniques reach the third category with slightly lower performance. On the other hand, the MFO and PSO methods produce the worst results. In particular, the results show considerable precision differences, which are directly related to the different search patterns presented by each metaheuristic algorithm. This demonstrates that the trajectories produced by second-order systems provide effective search patterns, and confirms the good performance of the proposed SOA method in terms of accuracy and convergence rate.
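The Wilcoxon analysis reported in Tables 2, 4 and 6 can be approximated with a rank-sum test. Below is a stdlib-only sketch using the normal approximation and ignoring ties (the paper does not detail its exact implementation, so this is a stand-in rather than a reproduction):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test with the normal approximation
    and no tie correction (a simplified stand-in for the analysis
    behind Tables 2, 4 and 6)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # assumes no ties
    r1 = sum(rank[v] for v in a)                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                     # E[R1] under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))            # two-sided p-value
    return z, p

# Clearly separated samples should give a small p-value...
z_sep, p_sep = rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
# ...while interleaved samples should not.
z_mix, p_mix = rank_sum_p([1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
```

In the paper's setting, the two samples would be the 30 best-fitness values of SOA and of a competitor on the same function, with significance declared at the 5% level.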

8. Analysis and Discussion

The extensive experiments performed in the previous sections demonstrate the remarkable characteristics of the proposed SOA algorithm. The experiments included not only standard benchmark functions but also the complex set of optimization functions from CEC2017, all solved in 30 dimensions. Therefore, a total of 50 optimization problems were employed to comparatively evaluate the performance of SOA against other popular metaheuristic approaches, such as ABC, CMAES, CSA, DE, MFO and PSO. From the experiments, important information has been obtained not only by observing the end results, in terms of the means and standard deviations over a number of runs and the convergence graphs, but also through in-depth behavioral evidence in the form of exploration and exploitation measurements.
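Such exploration and exploitation measurements are commonly derived from population diversity, as in the balance study of [31]. The sketch below shows one common formulation (our own reconstruction for illustration, not necessarily the exact metric used in the paper):

```python
from statistics import median

def diversity(population):
    """Dimension-wise population diversity: mean absolute deviation of
    every individual from the per-dimension median."""
    n, d = len(population), len(population[0])
    med = [median(ind[j] for ind in population) for j in range(d)]
    return sum(abs(ind[j] - med[j])
               for ind in population for j in range(d)) / (n * d)

def balance(div_history):
    """Exploration (XPL%) and exploitation (XPT%) percentages computed
    from the diversity recorded at every iteration."""
    dmax = max(div_history)
    xpl = [100.0 * dv / dmax for dv in div_history]
    xpt = [100.0 * abs(dv - dmax) / dmax for dv in div_history]
    return xpl, xpt

# Hypothetical diversity values for three iterations of a run.
xpl, xpt = balance([2.0, 1.0, 0.5])
```

A run that starts with high XPL% and ends with high XPT% exhibits the exploration-to-exploitation transition discussed above.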
The generation of efficient search patterns for the correct exploration of a fitness landscape could be complicated, particularly in the presence of ruggedness and multiple local optima. A search pattern is a set of movements produced by a rule or model that is used to examine promising solutions from the search space. Exploration and exploitation correspond to the most important characteristics of a search strategy. The combination of both mechanisms in a search pattern is crucial for attaining success when solving a particular optimization problem.
In our approach, the temporal response of a second-order system is used to generate the trajectory from the position x_i^k = (x_{i,1}^k, ..., x_{i,d}^k) to the location of the best individual b = (b_1, ..., b_d). Three different search patterns have been considered, based on the three second-order system responses. The proposed search patterns can explore areas of considerable size at a high velocity while, at the same time, refining the solution of the best individual b by exploiting its location. This behavior represents the most important property of the proposed search patterns. According to the results provided by the experiments, the search patterns produce more complex trajectories that allow a better examination of the search space.
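The trajectories themselves follow the textbook unit-step response of a second-order system with natural frequency ω_n and damping ratio ζ. The sketch below evaluates the three responses and uses them to move a candidate towards b; the move_towards composition is our illustrative reading of the scheme, not the paper's exact operator:

```python
import math

def step_response(t, zeta, wn=1.0):
    """Unit-step response of a second-order system with damping ratio
    zeta and natural frequency wn (standard textbook formulas)."""
    if zeta < 1.0:  # underdamped: oscillatory approach with overshoot
        wd = wn * math.sqrt(1.0 - zeta ** 2)
        phi = math.acos(zeta)
        return 1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta ** 2) \
            * math.sin(wd * t + phi)
    if zeta == 1.0:  # critically damped: fastest approach, no overshoot
        return 1.0 - math.exp(-wn * t) * (1.0 + wn * t)
    # overdamped: two real poles, slow monotonic approach
    s1 = wn * (zeta - math.sqrt(zeta ** 2 - 1.0))
    s2 = wn * (zeta + math.sqrt(zeta ** 2 - 1.0))
    return 1.0 - (s2 * math.exp(-s1 * t) - s1 * math.exp(-s2 * t)) / (s2 - s1)

def move_towards(x, b, t, zeta):
    """Candidate position on the trajectory from x towards the best
    individual b, scaled by the second-order response (illustrative)."""
    return [xi + step_response(t, zeta) * (bi - xi) for xi, bi in zip(x, b)]
```

With ζ < 1 the candidate overshoots past b before settling (exploration), whereas ζ ≥ 1 produces a monotonic approach to b (exploitation).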
Similar to other metaheuristic methods, SOA tries to improve its solutions based on its interaction with the objective function or on a ‘trial and error’ scheme through defined stochastic processes. Different from other popular metaheuristic methods such as DE, ABC, GA or CMAES, our proposed approach uses search patterns represented by trajectories to explore and exploit the search space. Since SOA employs search patterns, it presents more similarities with algorithms such as CSA, MFO and GWO. However, the search patterns used in their search strategy are very different. While CSA, MFO and GWO consider only spiral patterns, our proposed method uses complex trajectories produced by the response of second-order systems.

9. Conclusions

A search pattern is a set of movements produced by a rule or model in order to examine promising solutions from the search space. In this paper, a set of new search patterns has been introduced to explore the search space efficiently. They emulate the response of a second-order system. Under such conditions, three different responses of a second-order system are considered to produce three distinct search patterns: underdamped, critically damped and overdamped. This set of search patterns has been integrated into a complete search strategy, called the Second-Order Algorithm (SOA), to obtain the global solution of complex optimization problems.
The form of the search patterns allows balancing the exploration and exploitation abilities by efficiently traversing the search space and avoiding suboptimal regions. The efficiency of the proposed SOA has been evaluated through 20 standard benchmark functions and the 30 functions of the CEC2017 test suite. The results over multimodal functions show remarkable exploration capabilities, while the results over unimodal test functions denote adequate exploitation of the search space. On hybrid functions, the results demonstrate the effectiveness of the search patterns on more difficult formulations. The search efficacy of the proposed approach is also analyzed in terms of the Wilcoxon test results and convergence curves. To compare the performance of the SOA scheme, several other popular optimization techniques, such as the Artificial Bee Colony (ABC), the Covariance Matrix Adaptation Evolution Strategy (CMAES), the Crow Search Algorithm (CSA), Differential Evolution (DE), the Moth-Flame Optimization algorithm (MFO) and Particle Swarm Optimization (PSO), have also been tested in the same experimental environment. Future research directions include topics such as multi-objective capabilities, the incorporation of chaotic maps and the inclusion of acceleration processes to solve large-scale real-world optimization problems.

Author Contributions

Conceptualization, E.C. and M.J.; Data curation, M.P.; Formal analysis, H.E.; Funding acquisition, M.J.; Investigation, H.B. and H.F.E.; Methodology, H.B., A.L.-C. and H.F.E.; Resources, H.E. and M.J.; Supervision, A.L.-C.; Visualization, M.P.; Writing—original draft, E.C. and H.F.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. List of Benchmark Functions.
Name; Function; S (search domain); Minimum. In all cases the dimension is n = 30.

f1(x), Levy: sin²(πω_1) + Σ_{i=1}^{n−1} (ω_i − 1)²[1 + 10sin²(πω_i + 1)] + (ω_n − 1)²[1 + sin²(2πω_n)], with ω_i = 1 + (x_i − 1)/4; S = [−10, 10]^n; f(x*) = 0; x* = (1, ..., 1)
f2(x), Mishra 1: (1 + g)^g, with g = n − Σ_{i=1}^{n−1} x_i; S = [0, 1]^n; f(x*) = 2; x* = (1, ..., 1)
f3(x), Mishra 2: (1 + g)^g, with g = n − Σ_{i=1}^{n−1} (x_i + x_{i+1})/2; S = [0, 1]^n; f(x*) = 2; x* = (1, ..., 1)
f4(x), Mishra 11: [(1/n) Σ_{i=1}^{n} |x_i| − (Π_{i=1}^{n} |x_i|)^{1/n}]²; S = [−10, 10]^n; f(x*) = 0; x* = (0, ..., 0)
f5(x), Penalty 1: (π/30){10sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²[1 + 10sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; S = [−50, 50]^n; f(x*) = 0; x* = (−1, ..., −1)
f6(x), Perm 1: Σ_{k=1}^{n} [Σ_{i=1}^{n} (i^k + 50)((x_i/i)^k − 1)]²; S = [−n, n]^n; f(x*) = 0; x* = (1, 2, ..., n)
f7(x), Plateau: 30 + Σ_{i=1}^{n} ⌊|x_i|⌋; S = [−5.12, 5.12]^n; f(x*) = 30; x* = (0, ..., 0)
f8(x), Step: Σ_{i=1}^{n} ⌊x_i + 0.5⌋²; S = [−100, 100]^n; f(x*) = 0; x* = (0, ..., 0)
f9(x), Styblinski–Tang: (1/2) Σ_{i=1}^{n} (x_i⁴ − 16x_i² + 5x_i); S = [−5, 5]^n; f(x*) = −39.1659n; x* = (−2.90, ..., −2.90)
f10(x), Trid: Σ_{i=1}^{n} (x_i − 1)² − Σ_{i=2}^{n} x_i x_{i−1}; S = [−n², n²]^n; f(x*) = −n(n + 4)(n − 1)/6; x*_i = i(n + 1 − i) for i = 1, ..., n
f11(x), Vincent: −Σ_{i=1}^{n} sin(10 log(x_i)); S = [0.25, 10]^n; f(x*) = −n; x* = (7.70, ..., 7.70)
f12(x), Zakharov: Σ_{i=1}^{n} x_i² + (Σ_{i=1}^{n} 0.5 i x_i)² + (Σ_{i=1}^{n} 0.5 i x_i)⁴; S = [−5, 10]^n; f(x*) = 0; x* = (0, ..., 0)
f13(x), Rothyp: Σ_{i=1}^{n} Σ_{j=1}^{i} x_j²; S = [−65.536, 65.536]^n; f(x*) = 0; x* = (0, ..., 0)
f14(x), Schwefel 2: Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²; S = [−100, 100]^n; f(x*) = 0; x* = (0, ..., 0)
f15(x), Sum2: Σ_{i=1}^{n} Σ_{j=1}^{i} x_j²; S = [−10, 10]^n; f(x*) = 0; x* = (0, ..., 0)
f16(x), Sum of different powers: Σ_{i=1}^{n} |x_i|^{i+1}; S = [−1, 1]^n; f(x*) = 0; x* = (0, ..., 0)
f17(x), Rastrigin + Schwefel 2.22 + Sphere: 10n + Σ_{i=1}^{n} (x_i² − 10cos(2πx_i)) + Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| + Σ_{i=1}^{n} x_i²; S = [−100, 100]^n; f(x*) = 0; x* = (0, ..., 0)
f18(x), Griewank + Rastrigin + Rosenbrock: (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1 + 10n + Σ_{i=1}^{n} (x_i² − 10cos(2πx_i)) + Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]; S = [−100, 100]^n; f(x*) = n − 1; x* = (0, ..., 0)
f19(x), Ackley + Penalty 2 + Rosenbrock + Schwefel 2.22: (−20exp(−0.2√((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e) + (0.1{sin²(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)) + (Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]) + (Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|); S = [−100, 100]^n; f(x*) = 1.1n − 1; x* = (0, ..., 0)
f20(x), Ackley + Griewank + Rastrigin + Rosenbrock + Schwefel 2.22: −20exp(−0.2√((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e + (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1 + 10n + Σ_{i=1}^{n} (x_i² − 10cos(2πx_i)) + Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] + Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|; S = [−100, 100]^n; f(x*) = n − 1; x* = (0, ..., 0)
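A few entries of Table A1 translate directly into code. The implementations below are straightforward renderings of the standard definitions, written for illustration:

```python
import math

def styblinski_tang(x):
    """f9: global minimum approximately -39.166*n at x_i = -2.903534."""
    return 0.5 * sum(xi ** 4 - 16.0 * xi ** 2 + 5.0 * xi for xi in x)

def zakharov(x):
    """f12: global minimum 0 at the origin."""
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * (i + 1) * xi for i, xi in enumerate(x))
    return s1 + s2 ** 2 + s2 ** 4

def step(x):
    """f8: global minimum 0 in the cell around the origin."""
    return sum(math.floor(xi + 0.5) ** 2 for xi in x)
```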

References

1. Cuevas, E.; Gálvez, J.; Avila, K.; Toski, M.; Rafe, V. A new metaheuristic approach based on agent systems principles. J. Comput. Sci. 2020, 47, 101244.
2. Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52.
3. Bäck, T.; Hoffmeister, F.; Schwefel, H.-P. A survey of evolution strategies. In Proceedings of the Fourth International Conference on Genetic Algorithms, San Diego, CA, USA, 13–16 July 1991.
4. Hansen, N. The CMA Evolution Strategy: A Tutorial. arXiv 2016, arXiv:1604.00772.
5. Tang, K.S.; Man, K.F.; Kwong, S.; He, Q. Genetic algorithms and their applications. IEEE Signal Process. Mag. 1996, 13, 22–37.
6. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
7. Zhang, J.; Sanderson, A.C. JADE: Self-adaptive differential evolution with fast and reliable convergence performance. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, 25–28 September 2007.
8. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12.
9. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57.
10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
11. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
12. Marini, F.; Walczak, B. Particle swarm optimization (PSO): A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165.
13. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In Proceedings of the 5th International Conference on Stochastic Algorithms: Foundations and Applications, Sapporo, Japan, 26–28 October 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178.
14. Yang, X.-S. Firefly Algorithm, Lévy Flights and Global Optimization. In Research and Development in Intelligent Systems XXVI; Springer: London, UK, 2010; pp. 209–218.
15. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
16. Yang, X.-S. A new metaheuristic Bat-inspired Algorithm. Stud. Comput. Intell. 2010, 284, 65–74.
17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
18. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249.
19. Cuevas, E.; Echavarría, A.; Ramírez-Ortegón, M.A. An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl. Intell. 2013, 40, 256–272.
20. Valdivia-Gonzalez, A.; Zaldívar, D.; Fausto, F.; Camarena, O.; Cuevas, E.; Perez-Cisneros, M. A States of Matter Search-Based Approach for Solving the Problem of Intelligent Power Allocation in Plug-in Hybrid Electric Vehicles. Energies 2017, 10, 92.
21. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
22. Rutenbar, R.A. Simulated Annealing Algorithms: An Overview. IEEE Circuits Devices Mag. 1989, 5, 19–26.
23. Siddique, N.; Adeli, H. Simulated Annealing, Its Variants and Engineering Applications. Int. J. Artif. Intell. Tools 2016, 25, 1630001.
24. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
25. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166.
26. Erol, O.K.; Eksin, I. A new optimization method: Big Bang–Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111.
27. Birbil, Ş.I.; Fang, S.C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282.
28. Sörensen, K.; Glover, F. Metaheuristics. In Encyclopedia of Operations Research and Management Science; Gass, S.I., Fu, M., Eds.; Springer: New York, NY, USA, 2013; pp. 960–970.
29. Zill, D.G. A First Course in Differential Equations with Modeling Applications; Cengage Learning: Boston, MA, USA, 2012; ISBN 978-1-285-40110-2.
30. Haidekker, M.A. Linear Feedback Controls; Elsevier: Amsterdam, The Netherlands, 2013.
31. Morales-Castañeda, B.; Zaldívar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671.
32. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117.
33. Han, M.; Liu, C.; Xing, J. An evolutionary membrane algorithm for global numerical optimization problems. Inf. Sci. 2014, 276, 219–241.
34. Meng, Z.; Pan, J.-S. Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl. Based Syst. 2016, 97, 144–157.
35. Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1945, 1, 80–83.
36. Wu, G.H.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization. Available online: https://www.researchgate.net/profile/Guohua-Wu-5/publication/317228117_Problem_Definitions_and_Evaluation_Criteria_for_the_CEC_2017_Competition_and_Special_Session_on_Constrained_Single_Objective_Real-Parameter_Optimization/links/5982cdbaa6fdcc8b56f59104/Problem-Definitions-and-Evaluation-Criteria-for-the-CEC-2017-Competition-and-Special-Session-on-Constrained-Single-Objective-Real-Parameter-Optimization.pdf (accessed on 16 September 2019).
Figure 1. Visual taxonomy of the nature-inspired metaheuristic schemes.
Figure 2. Temporal responses of a second-order system considering its different behaviors: underdamped (0 < ζ < 1), critically damped (ζ = 1) and overdamped (ζ > 1).
Figure 3. Trajectories produced by (a) the attraction model and (b) the spiral model.
Figure 4. Some examples of trajectories produced by using different values of ζ for the components x_{i,1}^k and x_{i,2}^k, respectively: (a) ζ = 0 and ζ = 1; (b) ζ = 0.1 and ζ = 0.5; (c) ζ = 1 and ζ = 1.67; (d) ζ = 0.5 and ζ = 1.
Figure 5. Flowchart of the proposed metaheuristic method based on the response of second-order systems.
Figure 6. Convergence graphs in two representative multimodal functions.
Figure 7. Convergence graphs in two representative unimodal functions.
Figure 8. Convergence graphs in two representative hybrid functions.
Table 1. Minimization results of multimodal benchmark functions.
ABC | DE | CMAES | CSA | PSO | MFO | SOA
AB8.91326220.79325352.8976 × 10−1955.9185040.201280327.9832710.1119774
f 1 ( x ) MD8.43927500.79931662.4779 × 10−1957.0382634.1459 × 10−2325.3651991.0714 × 10−10
SD2.67480590.13785381.5343 × 10−196.20325781.102296812.2832540.2272439
AB2221,897,783.327.422
f 2 ( x ) MD22235,691.155222
SD9.9512 × 10−12009,636,632.1113.4349500
AB2223,620,834.434.72312822
f 3 ( x ) MD222676,981.46922
SD2.3308 × 10−11007,265,352.7113.3667200
AB0.13715510.0021.7942 × 10−60.08629192.2285 × 10−85.5194 × 10−101.164 × 10−11
f 4 ( x ) MD0.13490760.0100.0892256007.7118 × 10−12
SD0.03998610.1235.4833 × 10−60.02133321.2206 × 10−73.0231 × 10−91.0995 × 10−11
AB13,781,2911,331,987.722,307.19544,274,76182.53962585.75614971.964984
f 5 ( x ) MD13,876,2631,365,502.172.37751646,153,72881.69848885.66561572.362277
SD3,237,147.7306,385.3450,923.63710,118,1807.19167263.28570880.9929591
AB1.152 × 10855.850 × 10811.812 × 10836.429 × 10831.397 × 10813.0883 × 10811.0051 × 1081
f 6 ( x ) MD4.622 × 10843.405 × 10817.928 × 10822.939 × 10835.977 × 10807.9607 × 10804.901 × 1080
SD1.685 × 10857.33 × 10813.047 × 10837.551 × 10832.197 × 10815.6651 × 10811.9457 × 1081
AB30.033333303058.7666663033.63333330
f 7 ( x ) MD30303059303030
SD0.18257419001.7749858305.45504090
AB9.28.06666661.066666619,543.2660.03333332000.03330
f 8 ( x ) MD98019,797000
SD2.57842501.94640843.19410372077.74590.18257414842.32770
AB−745.05202−1125.4815−1127.8626−725.09353−1071.7869−1031.2617−1146.3478
f 9 ( x ) MD−743.4462−1174.9722−1132.5748−719.10777−1068.9596−1033.6178−1145.2467
SD25.13759378.70625.80999925.81565234.05584734.24418810.928362
AB110,282.54665,278.86−49301,170,939.045,556.260222,833.73−501.79356
f 10 ( x ) MD96,461.061673,449.49−49301,126,234.25076.815271,582.051−332.82466
SD44,933.417129,147.273.7318 × 10−9159,175.6475,990.880305,159.37663.92006
AB−18.26109−26.056561−29.6576−16.504756−28.367666−28.863589−30
f 11 ( x ) MD−18.131984−26.092349−29.9286−16.183447−28.14029−29.070145−30
SD1.63668730.54289060.4661.169171421.54085361.28374150
AB1502.3129369.60375786.36819519.17242196.95838261.5233211.905761
f 12 ( x ) MD1457.6865368.82916778.72369465.93238213.00730252.767470.3841574
SD420.7061135.389136215.93893228.6986086.542862106.5235329.544101
Table 2. Wilcoxon analysis for multimodal benchmark functions.
Function | SOA vs. ABC | SOA vs. CMAES | SOA vs. CSA | SOA vs. DE | SOA vs. MFO | SOA vs. PSO
f 1 ( x ) 7.13 × 10−92.61 × 10−82.40 × 10−119.77 × 10−73.97 × 10−116.87 × 10−8
f 2 ( x ) 1.21 × 10−121►1.21 × 10−121►1►4.13 × 10−9
f 3 ( x ) 1.21 × 10−121►1.21 × 10−121►1►8.33 × 10−7
f 4 ( x ) 6.48 × 10−122.43 × 10−86.48 × 10−121.10 × 10−71.18 × 10−75.79 × 10−8
f 5 ( x ) 3.02 × 10−112.18 × 10−63.02 × 10−113.02 × 10−118.30 × 10−19.94 × 10−8
f 6 ( x ) 3.69 × 10−111.29 × 10−92.87 × 10−106.73 × 10−81.75 × 10−59.52 × 10−4
f 7 ( x ) 3.34 × 10−11►1.57 × 10−123.34 × 10−12.23 × 10−53.34 × 10−7
f 8 ( x ) 3.96 × 10−61.10 × 10−77.87 × 10−123.28 × 10−62.45 × 10−15.58 × 10−7
f 9 ( x ) 2.97 × 10−115.75 × 10−82.97 × 10−117.72 × 10−84.20 × 10−41.83 × 10−5
f 10 ( x ) 3.02 × 10−112.85 × 10−113.02 × 10−113.02 × 10−112.57 × 10−71.86 × 10−3
f 11 ( x ) 2.80 × 10−111.12 × 10−072.80 × 10−113.00 × 10−118.88 × 10−18.86 × 10−6
f 12 ( x ) 3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.78 × 10−109.76 × 10−10
Table 3. Minimization results of unimodal benchmark functions.
ABC | DE | CMAES | CSA | PSO | MFO | SOA
AB25.64889123.2340060.0382940117,864.482433.814816,893.5384.416 × 10−16
f 13 ( x ) MD26.05436422.2187471.503 × 10−23120,885.375.9145 × 10−1010,737.4184.1862 × 10−16
SD8.33056635.51583970.142885614,742.8554322.034119,501.2762.4373 × 10−16
AB0.01366000.01455261.2398 × 10−551.4767354.0467 × 10−137.86432021.5 × 10−20
f 14 ( x ) MD0.01441990.01449211.3237 × 10−2051.7980317.5065 × 10−142.8398 × 10−71.3059 × 10−20
SD0.00414680.00375303.908 × 10−55.93782897.8227 × 10−1314.0242609.6811 × 10−21
AB0.54427770.56961219.7891972466.201720403.333651.2053 × 10−18
f 15 ( x ) MD0.49524160.57339130.22571902478.72468.9126 × 10−12200.000001.025 × 10−18
SD0.19863490.137695539.959487344.5657866.436383520.929746.4946 × 10−19
AB0.00066471.8659 × 10−106.9743 × 10−100.00683425.6588 × 10−249.5555 × 10−190
f 16 ( x ) MD0.00043061.2946 × 10−107.1112 × 10−100.00668583.6687 × 10−292.8339 × 10−220
SD0.00061871.8814 × 10−104.3714 × 10−100.00320373.0973 × 10−233.3608 × 10−180
Table 4. Wilcoxon analysis for unimodal benchmark functions.
Function | SOA vs. ABC | SOA vs. CMAES | SOA vs. CSA | SOA vs. DE | SOA vs. MFO | SOA vs. PSO
f 13 ( x ) 2.80 × 10−113.86 × 10−12.80 × 10−112.80 × 10−111.75 × 10−91.22 × 10−4
f 14 ( x ) 5.51 × 10−92.11 × 10−12.72 × 10−113.22 × 10−91.07 × 10−41.74 × 10−2
f 15 ( x ) 1.58 × 10−13.02 × 10−113.02 × 10−111.81 × 10−15.26 × 10−43.11 × 10−1
f 16 ( x ) 1.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−121.21 × 10−12
Table 5. Minimization results of hybrid benchmark functions.
ABC | DE | CMAES | CSA | PSO | MFO | SOA
AB396.754587.71783663.1526 × 10−920,330.245334.6308223,758.7900.8147792
f 17 ( x ) MD210.311737.77727342.7959 × 10−919,814.6136.9181 × 10−720,077.8494.3564 × 10−11
SD497.028471.31903841.1535 × 10−92074.47421832.848518,166.6032.5336865
AB212.4026675.917033105.99474731.3815165.728942161.3608130.785661
f 18 ( x ) MD212.0926975.57597931.783896741.0863965.904649116.8668328.998449
SD25.80514510.60476184.80972068.75915414.502755107.408203.5251022
AB221,724.731264.086257.57041770,494,77087.95141380.88778331.999808
f 19 ( x ) MD200,128.191278.157932.66101665,110,85984.26795978.04059931.999808
SD114,341.27268.8214554.60214723,890,49425.25657623.1695036.4582 × 10−10
AB319.5759249.74128297.542580867.75158122.92508802.2144430.307556
f 20 ( x ) MD299.5031249.70810329.002196879.0554365.254961685.3460829
SD64.7090104.472054093.393879103.61875127.74698500.431744.0252793
Table 6. Wilcoxon analysis for hybrid benchmark functions.
Function | SOA vs. ABC | SOA vs. CMAES | SOA vs. CSA | SOA vs. DE | SOA vs. MFO | SOA vs. PSO
f 17 ( x ) 4.35 × 10−116.63 × 10−52.92 × 10−118.16 × 10−85.40 × 10−106.55 × 10−2
f 18 ( x ) 1.16 × 10−77.28 × 10−43.02 × 10−116.63 × 10−72.01 × 10−15.20 × 10−6
f 19 ( x ) 3.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.17 × 10−44.94 × 10−5
f 20 ( x ) 4.91 × 10−112.71 × 10−52.98 × 10−116.73 × 10−55.43 × 10−118.11 × 10−5
Table 7. Optimization results from benchmark functions of CEC2017.
ABC | CMAES | CSA | DE | MFO | PSO | SOA
AB31,543,508.9316,816,680,16562,350,211,79592,576,614.719,869,100,2685,754,474,631389,770,985.4
F 1 x MD31,272,497.0710,317,001,63762,180,271,04490,540,094.668,120,831,7865,716,932,117348,397,609.3
SD11,711,445.5518,287,769,8336,701,490,74819,876,885.36,276,129,7683,760,493,838175,590,731.3
AB4.9441 × 10321.9584 × 10429.584 × 10434.6409 × 10321.32094 × 10371.8354 × 10432.4753 × 1019
F 2 x MD8.92 × 10317.2683 × 10415.1378 × 10429.165 × 10315.8224 × 10311.5174 × 10314.8076 × 1017
SD8.1141 × 10323.0961 × 10422.7242 × 10449.8904 × 10325.35797 × 10371.0053 × 10441.2579 × 1020
AB143,585.796206,258.738105,227.778184,983.575141,570.445679,667.25750,153.7766
F 3 x MD143,614.585201,904.141104,066.094187,839.624132,229.111569,068.229947,032.1654
SD17,888.252552,000.298414,953.904827,184.157857,378.8477339,077.804220,560.6136
AB558.758333855.662216,385.5764558.093851026.441012937.069787547.952065
F 4 x MD562.1426773566.5716116,520.3929556.117066856.5465617890.326594544.342148
SD20.29146171429.474473040.6233523.4459834629.655489361.01531720.9881356
AB730.381143825.655628951.170114721.540918692.7089629629.566432631.994737
F 5 x MD732.333214847.872393951.759754720.602523694.7441044634.793518630.746916
SD12.734543266.249778322.73617919.2933764242.1867123129.11080326.6185453
AB603.83984669.750895691.817816604.901193632.8602211612.05324609.329809
F 6 x MD603.786524668.724864691.322797604.969262632.4713596611.116445608.748467
SD0.606308289.33757236.333245960.5508838.8752768895.831306252.23326029
AB977.074304889.8580041868.59237983.8965451085.414574850.625181933.44086
F 7 x MD977.307172899.4160461847.81046988.6198651067.152488837.191947939.402368
SD13.67481839.0420544125.68730417.0217457139.976484143.966677125.6723087
AB1033.747471047.67131181.820551023.57155991.3574401914.911462920.212894
F 8 x MD1035.621631026.792461181.417591023.90313992.7702619916.328125921.260403
SD13.577616186.616931529.004136912.133635943.3464642724.985269321.5473475
AB1926.9647690015,096.70816434.262856487.61522436.077143251.16746
F 9 x MD1839.5533990015,358.90476327.721885976.8019982313.971942702.63581
SD333.20890401637.81695887.5003632250.718589920.3381841462.45439
AB8559.094258026.514168695.922797235.231295260.9020215136.848274524.56624
F 10 x MD8598.361837995.609268710.133647246.563675291.6797884941.915414512.99611
SD327.683849246.346496312.712801234.084094711.3287215845.232077357.978315
AB1594.9777819,382.01227888.110461813.686154011.7549041465.882251248.42954
F 11 x MD1594.0440418,978.04337566.70151774.919242427.2072841465.340721247.83322
SD90.0441559762.537382030.84932246.8450293525.449844125.92059632.5337336
AB22,035,530.64,281,754,7608,661,838,22792,723,516.691,292,958.57354,557,5694,401,381.17
F 12 x MD20,991,516.34,358,279,1858,610,836,60793,208,101.723,663,273.6246,467,7123,700,514.39
SD7,163,344.911,489,535,7372,020,262,53018,696,916.2134,545,916.4411,606,2063,405,855.44
AB19,266.96983,652,661,3826,874,450,8943,979,238.9838,881,437.05111,741,89726,817.0052
F 13 x MD18,527.16463,905,472,9447,021,073,0653,726,551.68186,879.08854,517,059.7516,966.594
SD9421.824941,271,069,3902,248,014,2491659,861.04193,180,811.6371,472,00223,502.6252
AB131,154.7726,816,811.852,093,007.76270,517.596369,042.9999333,813.27438,907.1977
F 14 x MD109,800.3115,506,707.51,692,106.9248,357.925137,669.847999,365.047424,004.7416
SD74,612.91364,656,160.181,375,028.96113,787.245640,038.53331,180,870.4640,344.6637
AB8915.09332519,343,539512,525,748522,995.48358,693.3157486,213.88817888.58976
F 15 x MD5428.6522428,173,625475,300,661514,399.61335,343.8246963,456.34213430.78723
SD12,023.8419350,472,927266,935,668282,908.14574,805.6665461,784.91188715.94813
AB3371.674324820.886865247.630562937.151253160.3169092823.542072647.38414
F 16 x MD3388.421684838.747965230.029922987.836543135.1284722823.764082599.69633
[Table: AB, MD, and SD statistics of the compared algorithms for benchmark functions F17–F30; the individual numerical entries are given in the original published table.]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Cuevas, E.; Becerra, H.; Escobar, H.; Luque-Chang, A.; Pérez, M.; Eid, H.F.; Jiménez, M. Search Patterns Based on Trajectories Extracted from the Response of Second-Order Systems. Appl. Sci. 2021, 11, 3430. https://doi.org/10.3390/app11083430


