Article

Gaussian Guided Self-Adaptive Wolf Search Algorithm Based on Information Entropy Theory

1 Department of Computer and Information Science, University of Macau, Macau 999078, China
2 Decision Sciences and Modelling Program, Victoria University, Melbourne 8001, Australia
3 Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland, 4600 Olten, Switzerland
* Author to whom correspondence should be addressed.
Entropy 2018, 20(1), 37; https://doi.org/10.3390/e20010037
Submission received: 13 October 2017 / Revised: 13 December 2017 / Accepted: 4 January 2018 / Published: 10 January 2018
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Abstract

Nowadays, swarm intelligence algorithms are becoming increasingly popular for solving many optimization problems. The Wolf Search Algorithm (WSA) is a contemporary semi-swarm intelligence algorithm designed to solve complex optimization problems, and it has demonstrated its capability especially for large-scale problems. However, it still inherits a common weakness of other swarm intelligence algorithms: its performance depends heavily on the chosen values of the control parameters. In 2016, we published the Self-Adaptive Wolf Search Algorithm (SAWSA), which offers a simple solution to the adaptation problem. As a very simple schema, the original SAWSA adaptation is based on random guesses, which is unstable and naive. In this paper, building on the SAWSA, we investigate the WSA search behaviour more deeply. A new parameter-guided updater, the Gaussian-guided parameter control mechanism based on information entropy theory, is proposed as an enhancement of the SAWSA. The heuristic updating function is improved. Simulation experiments for the new method, denoted as the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA), validate the increased performance of the improved version of WSA in comparison to its standard version and other prevalent swarm algorithms.

1. Introduction

In computer science, efficient algorithms for optimizing applications ranging from robot control [1] and logistics applications [2] to healthcare management [3] have always evoked great interest. The general aim of an optimization problem is to obtain a solution with a maximum or minimum value that solves the problem. A candidate solution can often be measured as a fitness value from a function f(x), where the search space is too huge for a deterministic algorithm to find the best solution within a given amount of time [4]. Optimization algorithms are usually either deterministic, of which there are many in operations research, or non-deterministic, which iteratively and stochastically refine a solution using heuristics. For example, in data mining, heuristics-based search algorithms optimize the data clustering efficiency [5] and improve the classification accuracy through feature selection [6]. In the clustering case, different candidate formations of clusters are tried until the one with the highest similarity among the data in the same cluster is found. In the classification case, the feature subset most relevant to the prediction target is selected using heuristic means. What these two cases have in common is that the optimization problem is a combinatorial search in nature: the number of possible solutions in the search space is too huge, which contributes to the NP-hardness of the problem. Search algorithms that are guided by stochastic heuristics are known as meta-heuristics, which literally means a tier of logic controlling the heuristic functions. In this paper, we focus on devising a new meta-heuristic that is parameter-free, based on the semi-swarming type of search algorithms. Swarming search algorithms are contemporary population-based algorithms in which the search agents form a swarm that moves according to some nature-inspired or biologically-inspired social behavioural patterns. For example, Particle Swarm Optimization (PSO) is among the most developed population-based metaheuristic algorithms; its search agents swarm as a single group during the search operation. Each search particle in PSO has its own velocity and thereby influences the others; collectively, the agents, known as particles, move as one whole large swarm. There are other types of metaheuristics that mimic animal or insect behaviours, such as the ant colony algorithm [7] and the firefly algorithm [8], and some new nature-inspired methods like the water wave algorithm. These algorithms do not always have the agents glued together moving as one swarm; instead, the agents move independently and are sometimes scattered. Such algorithms are known as loosely-packed or semi-swarm bio-inspired algorithms, and they have certain advantages in some optimization scenarios. Some well-known semi-swarm algorithms are the Bat Algorithm (BA) [9], the polar bear algorithm [10] and the ant lion algorithm, as well as the Wolf Search Algorithm. These algorithms usually embrace search methods that explore the search space both in breadth and in depth and mimic swarm movement patterns of animals, insects or even plants found in nature. Their performance in heuristic optimization has been proven to be on par with that of many classical methods, including tight-swarm or full-swarm algorithms.
However, as optimization problems can be very different from case to case, it is imperative for nature-inspired swarm intelligence algorithms to be adaptive to different situations. The traditional way to solve this kind of problem is to adjust the control parameters manually. This may involve massive trial-and-error to adapt the model behaviour to changing patterns. Once the situation changes, the model may need to be reconfigured for optimal performance. That is why self-adaptive approaches have become more and more attractive for many researchers in recent years.
Inspired by the preying behaviour of a wolf pack, a contemporary heuristic optimization algorithm called the Wolf Search Algorithm (WSA) [11] was proposed. In a wolf swarm, each wolf not only searches for food individually but can also merge with its peers when the latter are in a better situation. With this action model, the search can become more efficient than in other, single-leader swarms. By mimicking the hunting patterns of a wolf pack, the wolf in WSA, as a search agent, can find solutions independently as well as merge with its peers within its visual range. Sometimes, wolves in WSA are simulated to encounter human hunters, from whom they escape to a position far beyond their current one. Human hunters always pose a natural threat to wolves; in the optimization process, this enemy of the wolves, by algorithm design, pushes the search out of local optima and makes it try other parts of the search space in the hope of finding better solutions. As shown in Figure 1, $w_i$ is the current search agent (the wolf) and $w_j$ is its peer within its visual range γ. Δ and δ are locations in the search agent's visual range, s is the step size of its movement, and Γ is the search space of the objective function. The basic movement of an individual search agent is guided by Brownian motion. In most metaheuristic algorithms, the two most popular search methods are Levy search and Brownian search. Levy search is good for exploration [12], and Brownian search is efficient for exploiting the optimal solution [13]. In WSA, both search methods are used: Brownian motion serves as the basic movement, and Levy search is used for pack movement.
As a typical swarm intelligence heuristic optimization algorithm, WSA shares a common structure, and also a common drawback, with other algorithms: its efficacy depends heavily on the chosen parameter values. It is hardly possible to guess the most suitable parameter values for the best algorithm performance. These values are either taken from suggested defaults or adjusted manually. As illustrated in Figure 1 [14], the parameter values remain unchanged during the search operation in the original version of WSA. Quite often, the performance and efficacy of the algorithms for different problems, applications or experiments differ greatly when different parameter values are used. Since there is no golden rule on how the model parameters should be set, and the models are sensitive to the parameter values used, users can only guess the values or find them by trial-and-error. In summary, given that the nature of the swarm search is dynamic, the parameters should be made self-adaptive to the dynamic nature of the problem. Some parameter values may yield the maximum performance at one time, while other values may turn out to be better the next moment.
In order to solve this problem, the Self-Adaptive Wolf Search Algorithm (SAWSA) modifies WSA with a combination of techniques, such as a random selection method and a core-guided (or global-best-guided) method integrated with the Differential Evolution (DE) crossover function as the parameter-updating mechanism [14]. This SAWSA is based on randomization, which clearly is not the best option for the rule-based WSA. Compared with other swarm intelligence algorithms, the most valuable advantage of the original WSA is its stability. However, even though the average results of the published SAWSA are better than those of the WSA, the stability is weakened. To generate a better schema for the algorithm, the implicit relations between the parameters and the performance should be studied. In this paper, we try to find a way to stabilise the performance and to derive a guided self-adaptation component for the algorithm. Furthermore, the coding structure is modified. The new algorithm is denoted as the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA).
The contributions of this paper are summarized as follows. Firstly, the self-adaptive parameter ranges for WSA are carefully improved. Secondly, the parameter updater is no longer embedded in the main algorithm; it evolves as an independent updater in this new version. These two changes yield noticeably better optimization performance than the previous version of SAWSA [14]. To verify the performance of the new model once the changes have been made, the experiments were redesigned with settings that enhance the clarity of the result display. The novelty of this paper is the Gaussian-guided parameter updating method, which is based on information entropy theory. To improve the performance of SAWSA further, we propose treating the search agent behaviour as chaotic behaviour, so that the algorithm can be perceived as a chaotic system. Using chaotic stability theory, we can then use chaotic maps to guide the operation of the system, and the entropy value can be used as a measure of the stability and the inner information communication. In this paper, the feasibility of this new method is analysed, and a suitable map for WSA is found to be the Gaussian map. The advantage of using the Gaussian map is demonstrated via an extensive simulation experiment.
We verify the efficacy of the considered methods with fourteen typical benchmark functions and compare the performance of GSAWSA with the original WSA, the original bat algorithm and the Hybrid Self-Adaptive Bat Algorithm (HSABA) [15]. The self-adaptiveness in HSABA is powered by Differential Evolution (DE). The concept is related to Particle Swarm Optimization (PSO), which moves in a mixture of random order and swarming patterns; PSO is one of the classical swarm search algorithms that often shows superior performance on standard benchmark functions. From our investigations, we expect that parameter control by an entropy function in GSAWSA can offer further improvement. The self-adaptive method is a totally hands-free approach that lets the search evolve by itself; parameter control is a guiding approach that steers the course of parameter changes during runtime.
The remainder of the paper is structured as follows. The original Wolf Search Algorithm and the published SAWSA [14] are briefly introduced in Section 2. The chaotic-system entropy analysis and the Gaussian-guided parameter control method are discussed in Section 3, followed by Section 4, which presents comparison experiments for both the self-adaptive method and the parameter control method with several optional DE functions. The paper ends with concluding remarks in Section 5.

2. Related Works and Background

Researchers have extended and improved metaheuristic optimization algorithms to a large extent during the past few decades. The classic algorithms have become more and more mature, and many new and efficient algorithms have been invented, like those presented in the Introduction. They have been tested and shown to be suitable for real case studies. Researchers from other areas have also started to use metaheuristic algorithms in their studies because of their ease of use. Lately, self-adaptive methods have become popular, and many works in this direction have been published. The purpose of self-adaptive methods for metaheuristic algorithms is to fit the same algorithm to different problems by self-tuning the parameter values. In population-based optimization algorithms, two common parameters are very easy to handle: the search agent population size and the number of search iterations. These two parameters have an almost linear effect on the performance, so the user can choose them based on the expected level of accuracy and the available calculation resources. The other parameters, however, are very different from one another in nature. For the loosely-packed type of algorithms, one typical research direction is self-adaptive methods for PSO, the idea being to introduce a check-and-repair operation into every iterative generation of the search [16]. The idea is suitable for strongly collective swarm algorithms as well: the self-adaptive firefly algorithm [17] and the hybrid self-adaptive bat algorithm [15] have both proven that the parameter control method contributes greatly to the search performance. In this paper, we introduce a new parameter control method for WSA, so we first present the original WSA and some related self-adaptive methods. The method could potentially be applied to all the other strongly collective swarm algorithms as well.

2.1. The Original Wolf Search Algorithm

The WSA is a relatively young but efficient member of the family of swarm intelligence algorithms. The logic of the WSA search agents is inspired by the hunting behaviour of wolf packs. When preying, wolves use both cooperation and individual work [11]: a wolf applies both local search and global communication at the same time. To transfer the wolf-swarm preying behaviour into a computing method, some basic rules of the WSA are formulated below.
  • Each wolf search agent has a specific visual range γ, defined by Equation (1):

    $d(w_i, w_c) = \left( \sum_{k=1}^{n} |w_{i,k} - w_{c,k}|^{\lambda} \right)^{1/\lambda} \leq \gamma$    (1)

    where $w_i$ is the position of the current search agent, $w_c$ is a nearby search agent within the visual range γ, and λ is the order of the hyperspace.
  • The result of the objective function, i.e., the fitness value produced from the benchmark functions, is used as the measurement of the local position of each search agent (wolf). The search agent can move towards a better location by communicating with other agents. Two situations can occur here. One is that the wolf senses a neighbour in a better location within its visual range; then, the wolf moves directly towards that neighbour. The other is that the wolf cannot sense any better peer; then, the wolf tries to find a better location by a random Brownian movement.
  • To avoid local optima, an escape strategy is introduced into the WSA. An enemy is randomly generated in the search space. If a wolf search agent senses the presence of an enemy, it jumps from its current position to a new position far away. The function $escape(p_a)$ requires a user input parameter $p_a$ and generates a new location for an escaped wolf, as shown in Equation (2) [14]:

    $w_i = w_i + \left[ rand \cdot \left( \tfrac{1}{2} \Gamma - \gamma \right) + \gamma \right], \quad \text{if } rand > p_a$    (2)

    where Γ is the measure of the search space range based on the given upper and lower bounds of the variables, and $p_a$ is the escape probability. The behaviour control parameters are listed in Table 1.
An example of the wolves’ preying behaviour is illustrated in Figure 1. The original WSA consists of four main parts, which are shown as the four blocks in Figure 2.
The initialization process is important for the original WSA: this is when all the parameters are set. Once the parameters are set, the algorithm proceeds to search for solutions iteratively according to the parameter values, which are fixed throughout the runtime. For all iterative metaheuristics, the parameters control how the iterative search proceeds, such as how the solution evolves and how new candidate solutions are discovered, with fitter new solutions replacing the old ones according to rules coded in the algorithm. For the evolution part, the wolves' location update function can be summarized by Equation (3).
$w_i = \begin{cases} w_i + \alpha \cdot \gamma \cdot rand, & \text{random movement} \\ w_i + \beta_0 \cdot e^{-\gamma^2} \cdot (w_j - w_i), & w_j \text{ is the result from the local search} \end{cases}$    (3)
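To make these mechanics concrete, the following minimal Python sketch (our own illustration, not the authors' reference code) implements the movement rule of Equation (3) together with the escape jump of Equation (2) for a single wolf; the function and variable names are hypothetical, and the Euclidean case λ = 2 is assumed for the visual range of Equation (1):

```python
import numpy as np

def wsa_step(w_i, peers, fitness, alpha, beta0, gamma, pa, Gamma, rng):
    """One movement of a single wolf (minimization), following Eqs. (1)-(3).

    w_i: current position (1-D array); peers: positions of the other wolves;
    fitness: objective function; Gamma: measure of the search space range.
    """
    # Peers inside the visual range gamma (Eq. (1) with lambda = 2).
    visible = [w_j for w_j in peers if np.linalg.norm(w_i - w_j) <= gamma]
    better = [w_j for w_j in visible if fitness(w_j) < fitness(w_i)]

    if better:
        # Attraction term of Eq. (3): move towards the best visible peer.
        w_j = min(better, key=fitness)
        new = w_i + beta0 * np.exp(-gamma**2) * (w_j - w_i)
    else:
        # Brownian-like random movement within the visual range.
        new = w_i + alpha * gamma * rng.uniform(-1.0, 1.0, size=w_i.shape)

    # Escape jump of Eq. (2), triggered as stated in the text.
    if rng.random() > pa:
        new = new + rng.random() * (0.5 * Gamma - gamma) + gamma

    return new
```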

2.2. The Self-Adaptive Wolf Search Algorithm

Apart from the approach based on preset static parameter values, and since parameter tuning is very important for the performance, three parameter control methods are commonly used. The first is the rule-based parameter setting method. The second is the feedback-adaptive method, where the parameters are made to adapt to feedback from the results of the search algorithm [18]. The third is the free self-adaptive method, where the parameters can change freely during the algorithm's runtime [15]. Obviously, the adaptive and self-adaptive methods can be considered more user friendly, as users do not need to know how to set the parameters or the rules by themselves. Of these two, the self-adaptive method has turned out to be more popular in the research area, because the search agents' situation and the local fitness landscape can differ dynamically from generation to generation during the algorithm run. The three methods are conceptually shown in Figure 3.
In 2014, the original bat algorithm was hybridized with a Differential Evolution (DE) strategy [19], resulting in the Hybrid Self-Adaptive Bat Algorithm (HSABA) [15]. The working logic of HSABA is briefly as follows:
  • Execute the local search using the DE strategy. A self-adaptation rate, r, states the ratio of self-adaptive bats to the whole population of bats; in [15], a ratio of 1:10 was used for the number of self-adaptive bats.
  • Choose four virtual bats randomly from the population, each of which was initialized with a new position.
  • Apply the DE strategy to improve the candidate solution.
The WSA was designed for solving complex problems, and its efficiency advantage becomes more apparent as the search space dimension grows. Distinct from HSABA, our SAWSA takes the calculation cost into account in the evolution. In contrast to HSABA, the SAWSA uses the DE functions instead of embedding the parameters into the search agents; DE functions are well-developed local search and update functions with very low calculation cost.
In the SAWSA, two kinds of self-adaptive methods are used. The first one is the core-guided method, which is related to HSABA. During the parameter updating process, HSABA considers the current global best solution from four selected bats using the DE function DE/best/2/bin [15], which is depicted in Equation (4). In this equation, F > 0 is a real-valued constant that adjusts the amplification of the differential variation.
$b_j = bat_{best} + F \cdot (bat_{r1,j} + bat_{r2,j} - bat_{r3,j} - bat_{r4,j})$    (4)
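As an illustration only (a sketch under our reading of Equation (4), not the HSABA reference implementation; the binomial crossover that gives DE/best/2/bin its name is omitted for brevity), the core-guided update can be written as:

```python
import numpy as np

def de_best_2_bin(population, fitness, F=0.5, rng=None):
    """Mutation part of DE/best/2/bin, Equation (4), for minimization.

    Four distinct agents are drawn at random, and the current global best
    is perturbed by the amplified difference of their positions.
    """
    rng = rng or np.random.default_rng()
    best = min(population, key=fitness)
    r1, r2, r3, r4 = (population[i] for i in
                      rng.choice(len(population), size=4, replace=False))
    return best + F * (r1 + r2 - r3 - r4)
```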
The other approach is fully random and is therefore called the random selection DE method; a simple description follows [14]:
  • Randomly select a sufficient number of search agents.
  • Apply the DE functions on the crossover mechanism.
  • Determine if updating the parameters is needed by looking into the current global best solution.
  • Load new values from some allowable range into the parameters.
The pseudocode of the published SAWSA is shown in Algorithm 1. In this algorithm, $w_i$ ($i = 1, \dots, w$) is the wolf population, $globalfitness_{w_i}$ is the global fitness value of each wolf, $f(x)$ is the objective function, the number of parameters is $n_{par}$ (for WSA, the number is four), the control parameters are $para[1], \dots, para[n_{par}]$, representing γ, s, α and $p_a$, the lower bounds of the parameters are $para_{lowerbound}[1], \dots, para_{lowerbound}[n_{par}]$, the upper bounds of the parameters are $para_{upperbound}[1], \dots, para_{upperbound}[n_{par}]$, the update probability of the parameters is $P_{update}$ and the self-adaptation probability is $P_{selfadapted}$.
Algorithm 1: The self-adaptive wolf search algorithm.
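The full pseudocode is reproduced as an image in the published version. As a rough illustration only, the random-selection self-adaptive parameter update cycle described above could be sketched as follows (the helper name and the overall structure are our assumptions, not the authors' code):

```python
import numpy as np

def self_adapt_parameters(para, lower, upper, p_update, p_selfadapted, rng):
    """Naive random re-guess of the control parameters (gamma, s, alpha, p_a).

    para, lower, upper: arrays holding the current parameter values and the
    boundaries from Table 2; each parameter is re-drawn uniformly within its
    allowed range with probability p_update.
    """
    new = para.copy()
    if rng.random() < p_selfadapted:
        for k in range(len(para)):
            if rng.random() < p_update:
                new[k] = lower[k] + rng.random() * (upper[k] - lower[k])
    return new
```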

3. Gaussian-Guided SAWSA Based on Information Entropy Theory

For the self-adaptive method, the parameter boundaries constitute crucial information [20]. Starting from the parameter definitions, extensive testing was carried out to find the possible values or ranges of the parameters. The validated parameter boundaries for SAWSA are shown in Table 2. In the previously proposed Self-Adaptive Wolf Search Algorithm, the parameter boundaries are the only limitation on the parameter updating. Clearly, this is not good enough, because ideal parameter control should follow consistent patterns such that changing a parameter does not degrade the algorithm's performance.
Just like other classic probabilistic algorithms [21], a mathematical model of WSA is hard to analyse with formal proofs, given its stochastic nature. Therefore, we use extensive experiments to gain insights into WSA parameter control. To examine the effects of each parameter, the static step size s, the velocity factor α, the escape probability $p_a$ and the visual range γ are used as model variables. The results show that even a small change of the parameter limits can noticeably affect the performance, whereas with well-defined parameter boundaries the performance is hardly affected, thereby providing consistent optimization performance. In Figure 4, an example with the Ackley 1 function for D = 2 is shown. Each curve in this figure is the average convergence curve of 20 individual runs (experiment repetitions). Here, the static parameters are s = 1, α = 0.2 and $p_a$ = 0.25, and the parameter γ changes from one to Γ = 70 with a step size of one. As an example, we only evaluate parameter γ here; the other parameters follow a similar pattern.
In Figure 4, most of the 70 curves overlap, because changing the visual range does not bring much improvement; only in the range from γ = 1 to γ = 8 is the improvement obvious. It can be clearly seen that for the Ackley 1 function, updating γ over the whole definitional domain is possible but not necessary. The best solution comes from updating γ within a reasonable range, where a large improvement can be obtained with the fewest attempts. By analysing the distribution of the best fitness values with the corresponding γ values in Figure 5, we can conclude that the best improvement is obtained by approaching from the lower bound. This pattern shows up in our experiments with other benchmark functions as well. Therefore, for parameter γ, we can give a more reasonable updating domain from the experiments:
$0 < \gamma \leq 2 \cdot \ln(\Gamma), \quad \Gamma > 0$    (5)
Another pattern, also visible in Figure 5, is that the best parameter solution is always located within a small range. If we find a sufficiently suitable parameter value, the best way to update it is within a certain range, which means the next generated parameter value should be guided by the previous one. By using this method, we can avoid unstable performance and save the time spent on random guesses. The challenge, then, is to choose a suitable entropy function to guide the parameter control approach.
However, in a probability-based algorithm, the current optimization state influences the results of the next generation, while the future move is highly unpredictable. This model behaviour can be described by the theory of chaos [22], and the parameter update behaviour can be treated as typical chaotic behaviour: it looks like random updating, with uncertain, unrepeatable and unpredictable behaviour within a well-described system [23]. Chaotic phenomena have been taken into account for population control, hybrid control and initialization control in metaheuristic algorithm studies [24] and in other data science topics [25]. In our study, entropy theory is used to control the parameter self-update, for the reasons analysed below.
The individual parameter update behaviour is unpredictable by the updater; however, as for any chaotic system, the stability can be measured. Chaos and entropy theory have been used together in many schemes [26], and the entropy or entropy rate is one of the efficient measures of the stability of a chaotic system.
The logistic map is one of the most famous chaotic maps [27]. It is a polynomial mapping of degree two, also written as a recurrence relation. It is often cited as an archetypal example of how chaotic and complex behaviour can emerge from a very simple non-linear dynamical equation. In this paper, the logistic map is used as an example. The map is defined by Equation (6) [28]:
$f_\mu : [0, 1] \to [0, 1]$, given by $x_{n+1} = f_\mu(x_n)$, where $f_\mu(x) = \mu x (1 - x)$    (6)
This map is one of the simplest and most widely used maps in chaos research, and it is also very popular in swarm intelligence. For example, in water engineering, chaos PSO using the logistic map provides a very stable solution for designing a flood hydrograph [29], and it was also used for a resource allocation problem in 2007 [30]. Equation (6) is the mathematical definition of the logistic map; it gives the possible iteration results of a simple non-linear recurrence. To analyse the stability of this map, we can calculate the entropy using Equation (7):
$H_L = - \sum_{S_L \in A} Pr(S_L) \log_2 \big( Pr(S_L) \big)$    (7)
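A minimal sketch (our own illustration, with an arbitrary initial condition and bin count) of how the entropy of Equation (7) can be estimated from the iterates of the logistic map in Equation (6):

```python
import numpy as np

def logistic_entropy(mu, n_iter=10000, n_burn=1000, bins=64):
    """Estimate H_L for the logistic map by binning its orbit.

    The orbit is partitioned into `bins` symbols over [0, 1], and
    Equation (7) is applied to the empirical symbol probabilities.
    """
    x = 0.3                       # arbitrary initial condition in (0, 1)
    orbit = []
    for n in range(n_burn + n_iter):
        x = mu * x * (1.0 - x)    # Equation (6)
        if n >= n_burn:           # discard the transient
            orbit.append(x)
    counts, _ = np.histogram(orbit, bins=bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# The estimated entropy grows as mu approaches the fully chaotic value 4.
for mu in (3.2, 3.6, 4.0):
    print(mu, logistic_entropy(mu))
```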
The entropy evaluation and the bifurcation diagram of the logistic map are shown in Figure 6. It is clear that the most probable iteration solutions are located near the upper or lower bound of the definition domain. Regarding the entropy value, the larger it is, the larger the range of the distribution for the corresponding μ in the logistic map, or, in terms of information theory, the larger the information communication. The logistic map thus provides a usable way to handle the chaotic iteration problem. However, it is not the best choice for swarm intelligence, because many experiments have shown that the best solution is rarely located near the boundary. Therefore, we consider another chaotic map, which can provide the same stability with a more reasonable iteration outcome.
The Gauss iterated map, popularly known as the Gaussian map or mouse map, is another type of nonlinear one-dimensional iterative map, given by the Gaussian function [31]:

$x_{n+1} = \exp(-\alpha \cdot x_n^2) + \beta$    (8)

With this function, the Gaussian map can be described as:

$G : \mathbb{R} \to \mathbb{R}$ defined by $G(x) = e^{-\alpha x^2} + \beta$    (9)
where α and β are constants. The map is named after the bell-shaped Gaussian function of Johann Carl Friedrich Gauss, and its shape is similar to that of the logistic map. As the Gaussian map has two parameters, it provides more controllability over the iteration outcome, and the definition domain of G can meet more needs than that of the logistic map. The next step is to choose the most stable and suitable model for use in parameter control. The goal of this step is to satisfy the stability requirement of a chaotic system while keeping a higher entropy value for more information communication within the system.
The stability of chaotic maps is characterized by the theorem below [32]:
Theorem 1.
Suppose that the map $f_\mu(x)$ has a fixed point at $x^*$. Then, the fixed point is stable if:

$\left| \frac{d}{dx} f_\mu(x^*) \right| < 1$    (10)

and it is unstable if:

$\left| \frac{d}{dx} f_\mu(x^*) \right| > 1$    (11)
The stability analysis of the Gaussian map is shown below:

$\int G(x)\, dx = \int e^{-\alpha x^2 + \beta}\, dx = \sqrt{\frac{\pi}{\alpha}}$    (12)
Using Theorem 1, we get the following stability result depending on the parameter values:

$x \text{ is stable, if } \sqrt{\frac{\pi}{\alpha}} < 1; \qquad x \text{ is unstable, if } \sqrt{\frac{\pi}{\alpha}} > 1$    (13)
From this analysis, the stability is related only to α: the Gaussian map moves into stable regions when α > π. In Figure 7, we can see the possible iterative outcomes for each β when α = 4 and α = 9, which visualizes the effects of different α values. As shown in Figure 8, this map exhibits period-doubling and period-halving bifurcations. Considering the parameter iteration needs of our program, and based on entropy theory, the final parameters are set to α = 5.4 and β = −0.52, which provide both stability and enough inner information change.
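The stability condition of Theorem 1 can also be checked numerically. The sketch below (our own verification aid; the values α = 5.4 and β = −0.52 are the ones chosen in this paper) locates a fixed point of the Gaussian map by bisection and evaluates the magnitude of the derivative there:

```python
import numpy as np

ALPHA, BETA = 5.4, -0.52          # parameter values chosen in this paper

def G(x):
    """Gaussian (mouse) map, Equation (9)."""
    return np.exp(-ALPHA * x**2) + BETA

def dG(x):
    """Derivative of the Gaussian map."""
    return -2.0 * ALPHA * x * np.exp(-ALPHA * x**2)

def fixed_point(lo=-1.0, hi=1.0, tol=1e-12):
    """Bisection on G(x) - x = 0; G(lo) - lo and G(hi) - hi differ in sign."""
    f = lambda x: G(x) - x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

x_star = fixed_point()
print(x_star, abs(dG(x_star)),
      "stable" if abs(dG(x_star)) < 1 else "unstable")   # Theorem 1 test
```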
Using this Gaussian map, the parameter iteration method is updated from:

$temp_{para} = para_{lowerbound} + rand \cdot (para_{upperbound} - para_{lowerbound})$    (14)

to:

$temp_{para} = \exp(-\alpha \cdot para^2) + \beta$    (15)
where α and β are set as above to generate modified parameter values for WSA within the specified boundaries. If this equation yields negative values in our algorithm, we simply use the absolute value for the parameter update, as all the parameter values used should be positive.
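A minimal sketch of this Gaussian-guided update is given below (our interpretation of the text; clamping the result to the Table 2 boundaries, and the small positive lower bound in the example, are our assumptions):

```python
import numpy as np

ALPHA, BETA = 5.4, -0.52   # Gaussian map constants chosen in this paper

def gaussian_update(para, lower, upper):
    """Generate the next parameter value from the previous one, Eq. (15).

    Negative map outputs are replaced by their absolute value, as stated
    in the text; the result is kept inside the validated boundaries
    (our assumption; the paper only states that the boundaries are used).
    """
    new = abs(np.exp(-ALPHA * para**2) + BETA)
    return min(max(new, lower), upper)

# Example: evolving the visual range gamma within (0, 2 ln(Gamma)], Eq. (5).
Gamma = 70.0
gamma = 1.0
for _ in range(5):
    gamma = gaussian_update(gamma, 1e-6, 2.0 * np.log(Gamma))
    print(gamma)
```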
Our experiments show that this parameter control method offers more stability and better performance than the previously proposed SAWSA. The parameter control mechanism is added to the original random-selection-based SAWSA, as the latter has a much better performance than the core-guided DE, which depends strongly on the global best agent (this experimental result is shown in the next section). With this modification, the final version of the Gaussian-guided SAWSA is generated.
To show the advantage of the entropy-based Gaussian-guided parameter update method, which avoids the randomness, we modified all the optional DE functions into self-adaptive methods. By adapting all of these functions, the results can show whether the improvement is caused by the adaptive function or by the guided parameter update method. We mix two different solution selection methods with DE; the combinations are shown in Table 3, and two representative functions are sketched after this paragraph. These two self-adaptive types of methods are compared in this paper. Hence, we have a list of differential evolution crossover functions, namely the DE1, DE2 and DE3 functions. In these functions, the current global best solution is taken into consideration. This approach is supposed to increase the stability of the system, because the direction of movement is calculated in relation to the location of the current global best. The calculation by RDE1, RDE2, RDE3 and RDE4 is done only at the chosen search agents, so it does not overload the system. The $w_{selectedbest}$ solution is the one with the best current fitness among the chosen agents. In this way, more randomness is added to the algorithm, and wider steps are taken for more varied behaviour. So far, it has been tested and works well with the semi-swarm type of algorithms, such as the bat and wolf algorithms; it is, however, not known whether it may become unstable when coupled with other search algorithms.
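For concreteness, two representative entries of Table 3 transcribed into Python (our own transcription; F is the DE amplification factor from Equation (4)):

```python
import numpy as np

F = 0.5  # DE amplification factor

def rde3(w_selected_best, w_r1, w_r2):
    """RDE3 from Table 3: selected-best base vector plus one difference."""
    return w_selected_best + F * (w_r1 - w_r2)

def rde4(w_r1, w_r2, w_r3):
    """RDE4 from Table 3: purely random base vector plus one difference."""
    return w_r1 + F * (w_r2 - w_r3)

# Example with three 2-D wolf positions.
a, b, c = np.array([1.0, 2.0]), np.array([0.5, 0.5]), np.array([2.0, 1.0])
print(rde3(a, b, c), rde4(a, b, c))
```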

4. Experimental Results

To validate the efficiency of the GSAWSA, we compare it in this paper with other well-established and efficient swarm intelligence algorithms, namely BA, HSABA and PSO. In order to prove that the improvement is not caused by the different DE self-adaptive functions, we also use all the optional DE functions in the experiments and select the most suitable function. Afterwards, we test whether the Gaussian-guided method brings a better performance for the considered problems.

4.1. SAWSA Comparative Experiments

The main purpose of this part is to show that the SAWSA is better than the other algorithms in most cases. We then determine the most suitable DE self-adaptive method for the subsequent Gaussian-guided modification. Furthermore, the outcomes of these experiments determine the comparison group of the next experiment. Fourteen standard benchmark functions that are typically used for benchmarking swarm algorithms are used for a credible comparison [33]; they are shown in Table 4.
The objective of the experiments is to compare the suitability of integrating the DE functions listed in Table 3 into various swarm search algorithms, namely SAWSA, WSA, BA, HSABA and PSO. The algorithm combinations are benchmarked with the typical standard benchmark functions listed in Table 4. The default parameter values used are those suggested in the original papers; the self-adaptive approaches are an exception, as they are designed to be parameter free. Each benchmark function was tested in various dimensions, increasing in complexity from 2, 5, 10, 20 and 30 to 50. The population size is kept at 20, and 10,000 iterations are run for each case. To achieve consistent results, for each function and each dimension, the program was run 50 times for 10,000 cycles, as was done in [34]. The average curves are then computed and shown as the final result.
When the search space dimension is low, all the algorithms can achieve impressive results. The differences are too small to show in both figures and tables. Therefore, for a clearer view, we only present the results for D = 30 in Table 5 and the result for D = 50 in Table 6. In the result tables, the best results are coloured in bold red, and the second best results are coloured in bold black.
Figure 9, Figure 10 and Figure 11 show the box-plots of some standard benchmark functions’ comparisons for the same or different dimensions. The black line in the middle of the box-plots indicates the average baseline; half of the result range is shown by the size of the inner box. For easy visual comparison on the same axis, the ranges for all the data have been normalized.

4.2. Gaussian-Guided Parameter Control Comparative Experiments

In this section, the purpose of the experiment is to prove the efficiency of the entropy-guided parameter control mechanism using the Gaussian function, substantiating the expected benefits for SAWSA. From the previous section, we can see clearly that the random selection method is the more suitable self-adaptive method for WSA. Therefore, in this experiment, to reduce redundancy, we only add the entropy-guided parameter control mechanism using the Gaussian function to the randomization part of the DE self-adaptive WSA with the four functions listed in Table 3, referred to as RDE1, RDE2, RDE3 and RDE4. For comparison, the entropy-guided SAWSA variants with the Gaussian function in the searching part are generated and referred to as GRDE1, GRDE2, GRDE3 and GRDE4 in this experiment. We again use the 14 benchmark functions from Table 4, tested for the dimensions 2, 5, 10, 20, 30 and 50. The maximum number of generations and the population size are set to gen = 10,000 and a static pop = 20, respectively. For consistent results and a fair comparison, each case was run 50 times, and we use the averages for the final data analysis.
When the dimension is low, all algorithms have a similar performance, and the algorithmic enhancement seems unnecessary. However, when D increases to a large number (which means the objective function is very complex), the enhancement provided by this entropy-based Gaussian-guided parameter control method can be clearly observed. Therefore, we only show the experimental results for D = 30 and D = 50, in Table 7 and Table 8. The best results are marked in red.
In Table 7 and Table 8, we can see that most of the best results are obtained by the entropy-based Gaussian-guided SAWSA. The entropy-based Gaussian-guided methods not only enhance the performance, but also stabilize it. In the box charts shown in Figure 12 and Figure 13, the boxes show the location ranges of the middle half of the result data, while the black lines show the averages. For better visual comparison, all the data are normalized. In most cases, the entropy-guided SAWSA produces better performance than the original SAWSA, and every entropy-guided algorithm has a much smaller box in the box chart than its SAWSA comparison algorithm. For a metaheuristic algorithm, this improvement is significant, as stability is a very important attribute that is often very hard to achieve.
For the statistical tests, we use the p-values as the measurement and the WSA as the control method. Due to page limitations, we cannot show the comparisons with all other control methods; therefore, we focus on the improvement over the original WSA. The p-values are shown in Table 9. In line with the "no free lunch" theorem [35], the improvement comes at the cost of more calculation resources than the original WSA requires. In Table 10, the CPU times are listed for all the algorithms with D = 30 and D = 50. The CPU time is defined as the total run time of each algorithm over 1000 generations with 20 search agents.

5. Conclusions

Self-adaptive methods are effective in enabling parameter-free metaheuristic algorithms; they can even improve the performance, because the parameter values are self-tuned to be optimal (hence optimal results). Based on the authors' prior work on hybridizing self-adaptive DE functions into the bat algorithm, a similar but new self-adaptive algorithm called SAWSA was developed. However, due to the lack of stability control and the missing knowledge of the inner connection between the parameters and the performance, the SAWSA was not sufficiently satisfactory for us. In this paper, the self-adaptive method is considered from the perspective of entropy theory for metaheuristic algorithms, and we developed an improved parameter control method called the Gaussian-guided self-adaptive method. After designing this method, denoted as GSAWSA, we configured test experiments with fourteen standard benchmark functions. Based on the results, the following conclusions can be drawn.
Firstly, the self-adaptive approach has proven to be very efficient for metaheuristic algorithms, especially those that entail a large calculation cost. By using this method, the parameter training phase can be removed in real-world use. However, as the self-adaptive modification increases the complexity of the algorithm, the calculation cost of the self-adaptive method must be taken into consideration.
Secondly, comparing all the optional self-adaptive DE methods in the experiments, the random selection type is the better choice for SAWSA. However, while the average outcome improves, the stability clearly decreases as more randomness is introduced into the algorithm. How to balance the random effects and improve the stability is, in general, a difficult challenge in metaheuristic algorithm studies.
Thirdly, the parameter-performance changing pattern can be considered a very important feature of metaheuristic algorithms. Normally, researchers use the performance or the situation feedback as an update reference; however, how the parameters influence the performance usually remains out of consideration. By analysing how the parameters influence the performance, a better self-adaptive updating method could be developed, and a better performance could be achieved with fewer computing resources.
In conclusion, a parameter-free metaheuristic model in which a Gaussian map is fused with the WSA algorithm is proposed. It has the advantage that the parameter values do not need to remain static; the parameters tune themselves as the search operation proceeds. In our experimental results, the new model often shows improved performance compared to the naive version of WSA, as well as to other similar swarm algorithms. As future work, we want to investigate in depth, through computational analysis, how the Gaussian map contributes to refining the performance and preventing the search from converging prematurely to local optima. The analysis should also take the runtime cost into account: it is known from the experiments reported in this paper that there is a certain overhead when the Gaussian map is fused with WSA, which extends its performance, but at the same time, the extra computation consumes additional resources. In the future, the GSAWSA should be enhanced with the capability of balancing the runtime cost against the best possible performance in terms of the obtained solution quality.

Acknowledgments

The authors are thankful for the financial support from the research grants: (1) MYRG2016-00069, titled “Nature-Inspired Computing and Metaheuristics Algorithms for Optimizing Data Mining Performance” offered by RDAO/FST (Research & Development Administration Office/Faculty of Science and Technology), University of Macau and the Macau SAR government; (2) FDCT/126/2014/A3, titled “A Scalable Data Stream Mining Methodology: Stream-based Holistic Analytics and Reasoning in Parallel” offered by FDCT (The Science and Technology Development Fund) of the Macau SAR government.

Author Contributions

Qun Song wrote the source code, implemented the experiments, collected the results and wrote the paper. The main contribution of Simon Fong is the development direction of the metaheuristic algorithms. Suash Deb and Thomas Hanne contributed to the discussion and analysis of the results. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Processor: Intel(R) Core(TM) i7-4790 CPU @3.60 GHz
Installed memory (RAM): 8.00 GB
System: Windows 7 Enterprise Service Pack 1 64-bit
Development environment: MATLAB R2014a 64-bit

References

  1. Brambilla, M.; Ferrante, E.; Birattari, M.; Dorigo, M. Swarm robotics: A review from the swarm engineering perspective. Swarm Intell. 2013, 7, 1–41. [Google Scholar] [CrossRef]
  2. Hanne, T.; Dornberger, R. Computational intelligence. In Computational Intelligence in Logistics and Supply Chain Management; Springer: Cham, Switzerland, 2017; pp. 13–41. [Google Scholar]
  3. Fikar, C.; Hirsch, P. Home health care routing and scheduling: A review. Comput. Oper. Res. 2017, 77, 86–95. [Google Scholar] [CrossRef]
  4. Fister, I., Jr.; Yang, X.S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv, 2013; arXiv:1307.4186. [Google Scholar]
  5. Senthilnath, J.; Omkar, S.; Mani, V.; Tejovanth, N.; Diwakar, P.; Shenoy, A.B. Hierarchical clustering algorithm for land cover mapping using satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 762–768. [Google Scholar] [CrossRef]
  6. Senthilnath, J.; Omkar, S.; Mani, V.; Karnwal, N.; Shreyas, P. Crop stage classification of hyperspectral data using unsupervised techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 861–866. [Google Scholar] [CrossRef]
  7. Liao, T.; Socha, K.; de Oca, M.A.M.; Stützle, T.; Dorigo, M. Ant colony optimization for mixed-variable optimization problems. IEEE Trans. Evol. Comput. 2014, 18, 503–518. [Google Scholar] [CrossRef]
  8. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  9. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  10. Połap, D.; Wozniak, M. Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and Dynamic Birth and Death Mechanism. Symmetry 2017, 9, 203. [Google Scholar] [CrossRef]
  11. Tang, R.; Fong, S.; Yang, X.S.; Deb, S. Wolf search algorithm with ephemeral memory. In Proceedings of the 2012 Seventh International Conference on Digital Information Management (ICDIM), Macau, China, 22–24 August 2012; pp. 165–172. [Google Scholar]
  12. Senthilnath, J.; Das, V.; Omkar, S.; Mani, V. Clustering Using Levy Flight Cuckoo Search; Springer: New Delhi, India, 2013. [Google Scholar]
  13. Senthilnath, J.; Kulkarni, S.; Benediktsson, J.A.; Yang, X.S. A novel approach for multispectral satellite image classification based on the bat algorithm. IEEE Geosci. Remote Sens. Lett. 2016, 13, 599–603. [Google Scholar] [CrossRef]
  14. Song, Q.; Fong, S.; Tang, R. Self-Adaptive Wolf Search Algorithm. In Proceedings of the 2016 5th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI), Kumamoto, Japan, 10–14 July 2016; pp. 576–582. [Google Scholar]
  15. Fister, I.; Fong, S.; Brest, J.; Fister, I. A novel hybrid self-adaptive bat algorithm. Sci. World J. 2014, 2014, 709738. [Google Scholar] [CrossRef] [PubMed]
  16. Chih, M. Self-adaptive check and repair operator-based particle swarm optimization for the multidimensional knapsack problem. Appl. Soft Comput. 2015, 26, 378–389. [Google Scholar] [CrossRef]
  17. Fister, I.; Yang, X.S.; Brest, J.; Fister, I., Jr. Memetic self-adaptive firefly algorithm. In Swarm Intelligence and Bio-Inspired Computation: Theory And Applications; Elsevier: Amsterdam, The Netherlands, 2013; pp. 73–102. [Google Scholar]
  18. Beyer, H.G.; Deb, K. Self-Adaptive Genetic Algorithms with Simulated Binary Crossover; Technical Report; Universität Dortmund: Dortmund, Germany, 2001. [Google Scholar]
  19. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  20. Shi, Y.; Eberhart, R.C. Parameter selection in particle swarm optimization. In International Conference on Evolutionary Programming; Springer: Berlin/Heidelberg, Germany, 1998; pp. 591–600. [Google Scholar]
  21. Fradkov, A.L.; Evans, R.J. Control of chaos: survey 1997–2000. IFAC Proc. Vol. 2002, 35, 131–142. [Google Scholar] [CrossRef]
  22. Devaney, R.L.; Siegel, P.B.; Mallinckrodt, A.J.; McKay, S. A first course in chaotic dynamical systems: Theory and experiment. Comput. Phys. 1993, 7, 416–417. [Google Scholar] [CrossRef]
  23. Gandomi, A.; Yang, X.S.; Talatahari, S.; Alavi, A. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98. [Google Scholar] [CrossRef]
  24. Hu, W.; Liang, H.; Peng, C.; Du, B.; Hu, Q. A hybrid chaos-particle swarm optimization algorithm for the vehicle routing problem with time window. Entropy 2013, 15, 1247–1270. [Google Scholar] [CrossRef]
  25. Hou, L.; Gao, J.; Chen, R. An Information Entropy-Based Animal Migration Optimization Algorithm for Data Clustering. Entropy 2016, 18, 185. [Google Scholar] [CrossRef]
  26. Chen, Y.L.; Yau, H.T.; Yang, G.J. A maximum entropy-based chaotic time-variant fragile watermarking scheme for image tampering detection. Entropy 2013, 15, 3170–3185. [Google Scholar] [CrossRef]
  27. Li, S.; Chen, G.; Mou, X. On the dynamical degradation of digital piecewise linear chaotic maps. Int. J. Bifurc. Chaos 2005, 15, 3119–3151. [Google Scholar] [CrossRef]
  28. Kanso, A.; Smaoui, N. Logistic chaotic maps for binary numbers generations. Chaos Solitons Fractals 2009, 40, 2557–2568. [Google Scholar] [CrossRef]
  29. Dong, S.F.; Dong, Z.C.; Ma, J.J.; Chen, K.N. Improved PSO algorithm based on chaos theory and its application to design flood hydrograph. Water Sci. Eng. 2010, 3, 156–165. [Google Scholar]
  30. Wang, S.; Meng, B. Chaos particle swarm optimization for resource allocation problem. In Proceedings of the 2007 IEEE International Conference on Automation and Logistics, Jinan, China, 18–21 August 2007; pp. 464–467. [Google Scholar]
  31. Bruin, H.; Troubetzkoy, S. The Gauss map on a class of interval translation mappings. Isr. J. Math. 2003, 137, 125–148. [Google Scholar] [CrossRef]
  32. Lynch, S. Nonlinear discrete dynamical systems. In Dynamical Systems with Applications Using Maple; Springer: Basel, Switzerland, 2010; pp. 263–295. [Google Scholar]
  33. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef]
  34. Yang, X.S. Engineering Optimization: An Introduction With Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  35. Yang, X.S. Swarm-based metaheuristic algorithms and no-free-lunch theorems. In Theory and New Applications of Swarm Intelligence; InTech: Rijeka, Croatia, 2012. [Google Scholar]
Figure 1. Movement patterns of wolf preying and the algorithm parameters.
Figure 2. The four main parts of the original Wolf Search Algorithm (WSA).
Figure 3. The processing of the self-adaptive method.
Figure 4. Convergence curves of WSA with γ as the variable.
Figure 5. Best fitness value with each γ value.
Figure 6. Bifurcation diagram (a) and entropy value (b) of the logistic map.
Figure 7. Bifurcation diagram of a Gaussian map when α = 4 (a) and α = 9 (b).
Figure 8. Bifurcation diagram of a Gaussian map when α = 5 (a) and α = 5.4 (b).
Figure 9. Alpine function (a) and Levy 3 test function (b) when D = 50.
Figure 10. Rastrigin function (a) and penalty function (b) when D = 50.
Figure 11. Ackley function when D = 30 (a) and D = 50 (b).
Figure 12. Deb 1 function (a) and Levy 3 test function (b) when D = 50.
Figure 13. Michalewicz test function (a) and Penalty 1 function (b) when D = 50.
Table 1. Behaviour control parameters.
Parameter | Definition
γ | The visual radius of a wolf agent
s | The step size of a wolf agent
α | The velocity of the wolf agent
$p_a$ | The probability of having enemy presence
Table 2. Boundaries of behaviour control parameters.
Parameter | Update Range | Definition
γ | (0, 2·ln(Γ)] by experiment | visual range
s | (0, 1] by definition | step size
α | (0, 1] by both experiment and definition | velocity factor
$p_a$ | [0, 1] by definition | escape probability
Table 3. The names of Differential Evolution (DE) functions that are implemented with various solution selection methods.
Function Name | DE Function
Core-guided DE method
DE1 | $\hat{w} = w_{globalbest} + F \cdot (w_{r1,j} + w_{r2,j} - w_{r3,j} - w_{r4,j})$
DE2 | $\hat{w} = w_{r1,j} + F \cdot (w_{globalbest} - w_{r2,j}) - F \cdot (w_{r3,j} - w_{r4,j})$
DE3 | $\hat{w} = w_{globalbest} + F \cdot (w_{r1,j} - w_{r2,j})$
Random selection DE method
RDE1 | $\hat{w} = w_{selectedbest} + F \cdot (w_{r1,j} + w_{r2,j} - w_{r3,j} - w_{r4,j})$
RDE2 | $\hat{w} = w_{r1,j} + F \cdot (w_{selectedbest} - w_{r2,j}) - F \cdot (w_{r3,j} - w_{r4,j})$
RDE3 | $\hat{w} = w_{selectedbest} + F \cdot (w_{r1,j} - w_{r2,j})$
RDE4 | $\hat{w} = w_{r1,j} + F \cdot (w_{r2,j} - w_{r3,j})$
Table 4. Standard benchmark functions.
f | Function Name | Search Range | Global Best
f1 | Ackley function | [−35, 35] | f(x) = 0
f2 | Alpine function | [−10, 10] | f(x) = 0
f3 | Csendes function | [−1, 1] | f(x) = 0
f4 | Deb 1 function | [−1, 1] | f(x) = 0
f5 | Deflected Corrugated Spring function | [0, 10] | f(x) = 0
f6 | Dixon and Price function | [−10, 10] | f(x) = 0
f7 | Infinity test function | [−1, 1] | f(x) = 0
f8 | Levy 3 test function | [−10, 10] | f(x) = 0
f9 | Michalewicz test function | [0, π] | f(x) = −1.8013
f10 | Mishra 7 test function | [−10, 10] | f(x) = 0
f11 | Moved axis function | [−5.12, 5.12] | f(x) = 0
f12 | Penalty 1 function | [−50, 50] | f(x) = 0
f13 | Rastrigin function | [−15, 15] | f(x) = 0
f14 | Rosenbrock function | [−15, 15] | f(x) = 0
Table 5. Result data when D = 30. HSABA, Hybrid Self-Adaptive Bat Algorithm; SAWSA, Self-Adaptive Wolf Search Algorithm.
Fun. a | Meas. b | PSO | BA | HSABA | WSA | SAWSA: DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4
f1 | Aver. c | 2.00 × 10 01 | 1.89 × 10 01 | 2.00 × 10 01 | 2.02 × 10 01 | 1.36 × 10 01 | 1.22 × 10 01 | 1.45 × 10 01 | 9.30 × 10 00 | 3.60 × 10 00 | 1.00 × 10 01 | 7.06 × 10 01
 | Stdev. d | 5.58 × 10 07 | 1.98 × 10 00 | 1.31 × 10 06 | 5.30 × 10 02 | 8.32 × 10 01 | 5.90 × 10 00 | 1.96 × 10 00 | 9.93 × 10 01 | 1.26 × 10 00 | 6.99 × 10 00 | 5.71 × 10 01
f2 | Aver. | 2.30 × 10 01 | 7.59 × 10 00 | 3.75 × 10 01 | 1.51 × 10 01 | 7.99 × 10 01 | 6.85 × 10 01 | 2.15 × 10 00 | 6.09 × 10 01 | 1.99 × 10 06 | 3.46 × 10 03 | 5.34 × 10 04
 | Stdev. | 3.13 × 10 01 | 1.65 × 10 01 | 1.00 × 10 02 | 2.40 × 10 00 | 2.37 × 10 02 | 1.89 × 10 02 | 1.07 × 10 01 | 8.95 × 10 03 | 7.90 × 10 11 | 1.75 × 10 04 | 1.79 × 10 06
f3 | Aver. | 6.40 × 10 03 | 3.94 × 10 06 | 5.42 × 10 01 | 1.62 × 10 03 | 2.55 × 10 11 | 2.34 × 10 11 | 3.43 × 10 12 | 2.65 × 10 11 | 6.07 × 10 13 | 3.14 × 10 12 | 2.51 × 10 14
 | Stdev. | 1.07 × 10 04 | 1.40 × 10 12 | 3.66 × 10 00 | 1.23 × 10 06 | 1.40 × 10 22 | 7.88 × 10 23 | 1.31 × 10 23 | 9.01 × 10 23 | 9.31 × 10 25 | 2.88 × 10 23 | 7.44 × 10 27
f4 | Aver. | 7.77 × 10 01 | 9.03 × 10 01 | 4.64 × 10 01 | 6.35 × 10 01 | 9.96 × 10 01 | 9.96 × 10 01 | 9.97 × 10 01 | 9.96 × 10 01 | 9.97 × 10 01 | 9.97 × 10 01 | 9.97 × 10 01
 | Stdev. | 2.21 × 10 03 | 3.49 × 10 03 | 1.12 × 10 03 | 1.98 × 10 04 | 3.52 × 10 07 | 1.37 × 10 06 | 4.50 × 10 07 | 4.59 × 10 07 | 7.05 × 10 07 | 4.85 × 10 07 | 6.90 × 10 07
f5 | Aver. | 5.07 × 10 00 | 7.11 × 10 00 | 1.29 × 10 01 | 1.46 × 10 01 | 3.47 × 10 00 | 2.83 × 10 00 | 5.06 × 10 00 | 2.37 × 10 00 | 1.17 × 10 00 | 9.22 × 10 01 | 1.84 × 10 00
 | Stdev. | 8.68 × 10 00 | 1.32 × 10 01 | 3.34 × 10 01 | 5.15 × 10 00 | 3.01 × 10 00 | 4.61 × 10 00 | 4.09 × 10 00 | 1.50 × 10 00 | 5.67 × 10 01 | 2.35 × 10 00 | 2.54 × 10 01
f6 | Aver. | 1.17 × 10 04 | 3.98 × 10 01 | 2.94 × 10 05 | 1.16 × 10 01 | 7.75 × 10 01 | 1.56 × 10 00 | 1.83 × 10 00 | 5.86 × 10 01 | 2.65 × 10 01 | 7.45 × 10 01 | 1.19 × 10 00
 | Stdev. | 4.71 × 10 08 | 1.73 × 10 01 | 1.35 × 10 11 | 8.53 × 10 00 | 3.54 × 10 01 | 6.90 × 10 00 | 3.30 × 10 00 | 2.79 × 10 01 | 6.92 × 10 05 | 8.33 × 10 01 | 9.57 × 10 01
f7 | Aver. | 1.36 × 10 02 | 3.46 × 10 06 | 2.58 × 10 01 | 1.73 × 10 03 | 3.48 × 10 11 | 2.16 × 10 11 | 2.71 × 10 12 | 2.85 × 10 11 | 7.01 × 10 13 | 2.31 × 10 12 | 1.93 × 10 14
 | Stdev. | 2.30 × 10 04 | 1.27 × 10 12 | 4.96 × 10 01 | 1.24 × 10 06 | 2.47 × 10 22 | 1.22 × 10 22 | 9.58 × 10 24 | 1.73 × 10 22 | 1.73 × 10 24 | 1.20 × 10 23 | 2.81 × 10 27
f8 | Aver. | 3.64 × 10 01 | 3.22 × 10 00 | 5.17 × 10 01 | 4.48 × 10 01 | 1.40 × 10 04 | 9.06 × 10 02 | 1.69 × 10 00 | 9.88 × 10 05 | 9.05 × 10 02 | 6.63 × 10 01 | 1.39 × 10 07
 | Stdev. | 3.56 × 10 02 | 1.97 × 10 01 | 1.01 × 10 03 | 4.52 × 10 01 | 1.32 × 10 08 | 7.77 × 10 02 | 6.65 × 10 00 | 2.99 × 10 10 | 7.77 × 10 02 | 1.06 × 10 00 | 3.69 × 10 13
f9 | Aver. | 1.26 × 10 01 | 1.41 × 10 01 | 1.60 × 10 01 | 1.08 × 10 01 | 2.47 × 10 01 | 2.66 × 10 01 | 2.87 × 10 01 | 2.59 × 10 01 | 2.95 × 10 01 | −2.92 × 10 01 | 2.96 × 10 01
 | Stdev. | 1.09 × 10 00 | 4.71 × 10 00 | 5.19 × 10 00 | 3.56 × 10 01 | 4.64 × 10 01 | 1.60 × 10 01 | 2.36 × 10 01 | 3.48 × 10 01 | 9.69 × 10 03 | 9.18 × 10 02 | 1.09 × 10 03
f10 | Aver. | 6.98 × 10 64 | 1.34 × 10 49 | 6.98 × 10 64 | 9.67 × 10 52 | 3.51 × 10 51 | 3.27 × 10 51 | 3.34 × 10 51 | 4.60 × 10 51 | 3.18 × 10 51 | 8.05 × 10 51 | 5.61 × 10 51
 | Stdev. | 5.76 × 10 98 | 5.59 × 10 98 | 5.76 × 10 98 | 2.74 × 10 106 | 2.50 × 10 103 | 3.97 × 10 103 | 6.43 × 10 103 | 4.22 × 10 103 | 1.79 × 10 103 | 3.81 × 10 104 | 1.18 × 10 104
f11 | Aver. | 1.83 × 10 03 | 9.93 × 10 04 | 2.89 × 10 03 | 2.89 × 10 01 | 1.69 × 10 01 | 1.85 × 10 01 | 7.29 × 10 01 | 1.40 × 10 01 | 2.77 × 10 30 | 1.65 × 10 13 | 7.80 × 10 05
 | Stdev. | 7.90 × 10 05 | 5.36 × 10 08 | 5.45 × 10 06 | 2.62 × 10 02 | 1.11 × 10 02 | 8.92 × 10 03 | 9.67 × 10 02 | 2.16 × 10 03 | 6.80 × 10 59 | 5.47 × 10 25 | 7.53 × 10 08
f12 | Aver. | 8.98 × 10 01 | 7.04 × 10 01 | 2.21 × 10 02 | 1.32 × 10 02 | 1.84 × 10 01 | 7.16 × 10 00 | 2.51 × 10 01 | 5.04 × 10 00 | 3.63 × 10 02 | 7.39 × 10 01 | 9.88 × 10 09
 | Stdev. | 2.16 × 10 03 | 1.66 × 10 03 | 1.84 × 10 04 | 1.21 × 10 02 | 1.75 × 10 02 | 5.01 × 10 01 | 6.93 × 10 02 | 3.34 × 10 00 | 1.05 × 10 02 | 3.57 × 10 00 | 6.32 × 10 16
f13 | Aver. | 5.42 × 10 02 | 8.60 × 10 02 | 4.26 × 10 02 | 9.62 × 10 02 | 2.40 × 10 02 | 3.00 × 10 02 | 5.75 × 10 02 | 1.35 × 10 02 | 4.48 × 10 01 | 2.11 × 10 02 | 5.00 × 10 02
 | Stdev. | 1.80 × 10 04 | 1.05 × 10 05 | 6.20 × 10 04 | 1.04 × 10 04 | 5.94 × 10 03 | 2.27 × 10 04 | 2.01 × 10 04 | 2.17 × 10 03 | 1.40 × 10 00 | 1.21 × 10 04 | 4.97 × 10 02
f14 | Aver. | 1.26 × 10 05 | 8.07 × 10 01 | 2.20 × 10 06 | 7.50 × 10 01 | 1.10 × 10 01 | 8.16 × 10 01 | 1.15 × 10 02 | 1.83 × 10 01 | 4.25 × 10 01 | 2.07 × 10 01 | 5.01 × 10 01
 | Stdev. | 2.09 × 10 10 | 8.05 × 10 03 | 1.59 × 10 13 | 1.18 × 10 02 | 7.95 × 10 02 | 1.80 × 10 04 | 1.58 × 10 04 | 6.48 × 10 02 | 1.45 × 10 03 | 7.11 × 10 02 | 2.26 × 10 03
a Function: the benchmark function number. b Measure: two measures are reported, the average (Aver.) and the standard deviation (Stdev.). c The average of the set of best fitness results. d The standard deviation of the set of best fitness results.
Table 6. Result data when D = 50.

Fun. | Meas. | PSO | BA | HSABA | WSA | DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4
f1 | Aver. | 2.00 × 10 01 | 1.89 × 10 01 | 2.05 × 10 01 | 2.00 × 10 01 | 1.49 × 10 01 | 1.27 × 10 01 | 1.49 × 10 01 | 1.20 × 10 01 | 1.32 × 10 00 | 1.22 × 10 01 | 6.21 × 10 00
f1 | Stdev. | 6.80 × 10 07 | 1.22 × 10 00 | 8.55 × 10 03 | 3.49 × 10 07 | 7.00 × 10 01 | 4.90 × 10 00 | 3.15 × 10 00 | 4.93 × 10 01 | 8.02 × 10 01 | 3.62 × 10 00 | 4.56 × 10 00
f2 | Aver. | 4.41 × 10 01 | 1.73 × 10 01 | 3.82 × 10 01 | 6.07 × 10 01 | 2.89 × 10 00 | 2.44 × 10 00 | 6.80 × 10 00 | 2.59 × 10 00 | 4.93 × 10 02 | 2.67 × 10 01 | 4.27 × 10 02
f2 | Stdev. | 1.23 × 10 02 | 6.32 × 10 01 | 4.94 × 10 00 | 3.26 × 10 02 | 2.16 × 10 01 | 2.83 × 10 01 | 8.84 × 10 01 | 1.01 × 10 00 | 9.06 × 10 03 | 3.18 × 10 01 | 3.50 × 10 02
f3 | Aver. | 9.57 × 10 02 | 4.67 × 10 06 | 2.36 × 10 03 | 1.45 × 10 00 | 1.15 × 10 09 | 8.21 × 10 10 | 1.92 × 10 10 | 1.08 × 10 09 | 1.69 × 10 12 | 9.25 × 10 10 | 3.17 × 10 10
f3 | Stdev. | 6.83 × 10 02 | 1.64 × 10 12 | 5.92 × 10 06 | 4.51 × 10 00 | 1.99 × 10 19 | 1.09 × 10 19 | 3.92 × 10 20 | 2.86 × 10 19 | 6.04 × 10 24 | 6.34 × 10 18 | 4.34 × 10 19
f4 | Aver. | −6.63 × 10 01 | −9.12 × 10 01 | −5.60 × 10 01 | −4.45 × 10 01 | −9.87 × 10 01 | −9.87 × 10 01 | −9.88 × 10 01 | −9.88 × 10 01 | −9.89 × 10 01 | −9.90 × 10 01 | −9.89 × 10 01
f4 | Stdev. | 2.46 × 10 03 | 2.05 × 10 03 | 6.75 × 10 05 | 1.22 × 10 03 | 2.13 × 10 06 | 5.99 × 10 06 | 1.86 × 10 06 | 4.22 × 10 06 | 2.76 × 10 06 | 3.79 × 10 06 | 2.61 × 10 06
f5 | Aver. | 8.03 × 10 00 | 1.31 × 10 01 | 2.71 × 10 01 | 2.17 × 10 01 | 1.02 × 10 01 | 8.98 × 10 00 | 1.31 × 10 01 | 7.87 × 10 00 | 1.89 × 10 00 | 3.73 × 10 00 | 5.62 × 10 01
f5 | Stdev. | 6.65 × 10 00 | 2.85 × 10 01 | 6.14 × 10 00 | 3.03 × 10 01 | 3.65 × 10 00 | 3.08 × 10 00 | 1.28 × 10 01 | 4.83 × 10 00 | 8.01 × 10 01 | 8.86 × 10 00 | 1.59 × 10 00
f6 | Aver. | 3.11 × 10 05 | 1.63 × 10 00 | 3.80 × 10 01 | 9.00 × 10 05 | 1.16 × 10 00 | 2.07 × 10 00 | 3.26 × 10 00 | 1.42 × 10 00 | 6.61 × 10 00 | 1.48 × 10 00 | 6.69 × 10 01
f6 | Stdev. | 1.57 × 10 11 | 8.64 × 10 00 | 1.37 × 10 02 | 7.01 × 10 11 | 2.13 × 10 01 | 3.44 × 10 00 | 9.54 × 10 00 | 4.32 × 10 01 | 1.93 × 10 01 | 2.52 × 10 00 | 6.77 × 10 01
f7 | Aver. | 3.03 × 10 02 | 4.37 × 10 06 | 2.08 × 10 03 | 1.25 × 10 00 | 1.24 × 10 09 | 7.58 × 10 10 | 1.61 × 10 10 | 1.21 × 10 09 | 1.32 × 10 12 | 1.97 × 10 10 | 2.13 × 10 09
f7 | Stdev. | 2.09 × 10 03 | 3.45 × 10 12 | 1.52 × 10 06 | 8.03 × 10 00 | 2.91 × 10 19 | 9.67 × 10 20 | 4.87 × 10 20 | 3.46 × 10 19 | 3.08 × 10 24 | 2.66 × 10 19 | 8.12 × 10 17
f8 | Aver. | 7.84 × 10 01 | 5.70 × 10 00 | 8.59 × 10 01 | 1.07 × 10 02 | 1.11 × 10 00 | 1.40 × 10 01 | 1.52 × 10 00 | 1.36 × 10 01 | 8.59 × 10 07 | 5.68 × 10 01 | 1.85 × 10 01
f8 | Stdev. | 8.69 × 10 02 | 4.69 × 10 01 | 8.78 × 10 01 | 2.81 × 10 03 | 6.72 × 10 00 | 2.12 × 10 01 | 6.45 × 10 00 | 3.67 × 10 01 | 1.65 × 10 12 | 1.82 × 10 00 | 2.39 × 10 01
f9 | Aver. | −1.75 × 10 01 | −2.00 × 10 01 | −1.41 × 10 01 | −2.63 × 10 01 | −3.84 × 10 01 | −4.09 × 10 01 | −4.69 × 10 01 | −3.94 × 10 01 | −4.71 × 10 01 | −4.81 × 10 01 | −4.67 × 10 01
f9 | Stdev. | 4.10 × 10 00 | 1.01 × 10 01 | 2.97 × 10 01 | 2.65 × 10 01 | 9.17 × 10 01 | 6.45 × 10 01 | 7.22 × 10 01 | 5.30 × 10 01 | 1.90 × 10 01 | 2.11 × 10 01 | 2.46 × 10 01
f10 | Aver. | 9.25 × 10 128 | 9.94 × 10 128 | 1.01 × 10 117 | 9.55 × 10 119 | 8.11 × 10 115 | 5.04 × 10 115 | 1.09 × 10 116 | 7.82 × 10 115 | 3.60 × 10 115 | 3.47 × 10 115 | 1.07 × 10 116
f10 | Stdev. | 9.97 × 10 226 | 9.97 × 10 226 | 4.10 × 10 226 | 9.97 × 10 226 | 1.71 × 10 226 | 1.25 × 10 226 | 4.52 × 10 226 | 2.19 × 10 226 | 4.84 × 10 226 | 2.19 × 10 226 | 4.92 × 10 226
f11 | Aver. | 8.35 × 10 03 | 9.44 × 10 03 | 8.85 × 10 01 | 1.09 × 10 04 | 1.76 × 10 00 | 2.19 × 10 00 | 5.08 × 10 00 | 1.56 × 10 00 | 8.03 × 10 02 | 3.44 × 10 03 | 3.15 × 10 08
f11 | Stdev. | 1.10 × 10 07 | 4.83 × 10 06 | 1.89 × 10 03 | 3.77 × 10 07 | 1.44 × 10 00 | 1.32 × 10 00 | 8.50 × 10 00 | 6.74 × 10 01 | 5.38 × 10 02 | 1.06 × 10 04 | 1.70 × 10 14
f12 | Aver. | 1.99 × 10 02 | 9.64 × 10 01 | 2.66 × 10 02 | 4.02 × 10 02 | 4.90 × 10 01 | 1.09 × 10 01 | 2.70 × 10 01 | 2.01 × 10 01 | 5.04 × 10 02 | 1.50 × 10 00 | 1.04 × 10 02
f12 | Stdev. | 6.40 × 10 03 | 8.99 × 10 02 | 6.66 × 10 02 | 5.64 × 10 04 | 3.18 × 10 03 | 1.38 × 10 02 | 2.05 × 10 03 | 1.25 × 10 01 | 5.08 × 10 02 | 1.58 × 10 01 | 1.02 × 10 03
f13 | Aver. | 1.27 × 10 03 | 1.52 × 10 03 | 1.58 × 10 03 | 1.05 × 10 03 | 5.02 × 10 02 | 6.35 × 10 02 | 1.21 × 10 03 | 3.10 × 10 02 | 1.24 × 10 00 | 6.22 × 10 02 | 1.85 × 10 01
f13 | Stdev. | 6.42 × 10 04 | 3.04 × 10 05 | 1.51 × 10 04 | 1.68 × 10 05 | 2.96 × 10 04 | 4.63 × 10 04 | 9.37 × 10 04 | 4.30 × 10 03 | 2.16 × 10 01 | 5.52 × 10 04 | 2.82 × 10 03
f14 | Aver. | 7.47 × 10 05 | 7.95 × 10 02 | 1.58 × 10 02 | 5.39 × 10 06 | 2.57 × 10 01 | 6.16 × 10 01 | 1.03 × 10 02 | 3.24 × 10 01 | 1.57 × 10 02 | 6.14 × 10 01 | 7.55 × 10 01
f14 | Stdev. | 3.62 × 10 11 | 3.97 × 10 06 | 7.22 × 10 02 | 4.26 × 10 13 | 5.52 × 10 02 | 3.65 × 10 03 | 8.46 × 10 03 | 2.53 × 10 03 | 1.02 × 10 04 | 1.51 × 10 03 | 1.87 × 10 03
Table 7. Gaussian-guided SAWSA comparison result data when D = 30.

Fun. | Meas. | RDE1 | GRDE1 | RDE2 | GRDE2 | RDE3 | GRDE3 | RDE4 | GRDE4
(All columns are SAWSA variants at D = 30; GRDE denotes the Gaussian-guided counterpart of the corresponding RDE updater.)
f1 | Aver. | 9.30 × 10 00 | 3.60 × 10 00 | 1.52 × 10 01 | 2.12 × 10 00 | 1.00 × 10 01 | 7.06 × 10 01 | 1.24 × 10 01 | 2.18 × 10 00
f1 | Stdev. | 9.93 × 10 01 | 1.26 × 10 00 | 3.13 × 10 00 | 1.36 × 10 00 | 6.99 × 10 00 | 5.71 × 10 01 | 3.24 × 10 00 | 2.11 × 10 00
f2 | Aver. | 6.09 × 10 01 | 1.99 × 10 06 | 1.47 × 10 00 | 2.94 × 10 03 | 3.46 × 10 03 | 5.34 × 10 04 | 9.00 × 10 01 | 4.64 × 10 03
f2 | Stdev. | 8.95 × 10 03 | 7.90 × 10 11 | 3.03 × 10 00 | 4.00 × 10 05 | 1.75 × 10 04 | 1.79 × 10 06 | 8.00 × 10 01 | 1.74 × 10 04
f3 | Aver. | 2.65 × 10 11 | 6.07 × 10 13 | 4.26 × 10 18 | 1.40 × 10 20 | 3.14 × 10 12 | 2.51 × 10 14 | 6.48 × 10 15 | 3.57 × 10 21
f3 | Stdev. | 9.01 × 10 23 | 9.31 × 10 25 | 9.54 × 10 36 | 4.48 × 10 40 | 2.88 × 10 23 | 7.44 × 10 27 | 8.39 × 10 28 | 1.33 × 10 40
f4 | Aver. | −9.96 × 10 01 | −9.97 × 10 01 | −1.00 × 10 00 | −1.00 × 10 00 | −9.97 × 10 01 | −9.97 × 10 01 | −1.00 × 10 00 | −1.00 × 10 00
f4 | Stdev. | 4.59 × 10 07 | 7.05 × 10 07 | 2.02 × 10 10 | 9.41 × 10 10 | 4.85 × 10 07 | 6.90 × 10 07 | 2.24 × 10 10 | 8.77 × 10 10
f5 | Aver. | 2.37 × 10 00 | 1.17 × 10 00 | 3.61 × 10 00 | 2.37 × 10 00 | 9.22 × 10 01 | 1.84 × 10 00 | 7.95 × 10 01 | 2.21 × 10 00
f5 | Stdev. | 1.50 × 10 00 | 5.67 × 10 01 | 1.44 × 10 00 | 9.57 × 10 26 | 2.35 × 10 00 | 2.54 × 10 01 | 4.49 × 10 01 | 1.04 × 10 01
f6 | Aver. | 5.86 × 10 01 | 2.65 × 10 01 | 1.28 × 10 00 | 1.87 × 10 00 | 7.45 × 10 01 | 1.19 × 10 00 | 5.07 × 10 01 | 1.32 × 10 00
f6 | Stdev. | 2.79 × 10 01 | 6.92 × 10 05 | 3.17 × 10 00 | 1.86 × 10 00 | 8.33 × 10 01 | 9.57 × 10 01 | 2.96 × 10 01 | 3.54 × 10 00
f7 | Aver. | 2.85 × 10 11 | 7.01 × 10 13 | 7.14 × 10 18 | 2.19 × 10 20 | 2.31 × 10 12 | 1.93 × 10 14 | 1.94 × 10 19 | 1.45 × 10 21
f7 | Stdev. | 1.73 × 10 22 | 1.73 × 10 24 | 8.83 × 10 35 | 2.66 × 10 39 | 1.20 × 10 23 | 2.81 × 10 27 | 2.14 × 10 37 | 7.08 × 10 42
f8 | Aver. | 9.88 × 10 05 | 9.05 × 10 02 | 1.59 × 10 01 | 8.98 × 10 09 | 6.63 × 10 01 | 1.39 × 10 07 | 2.30 × 10 01 | 1.31 × 10 09
f8 | Stdev. | 2.99 × 10 10 | 7.77 × 10 02 | 1.16 × 10 01 | 8.31 × 10 17 | 1.06 × 10 00 | 3.69 × 10 13 | 2.62 × 10 01 | 2.95 × 10 18
f9 | Aver. | −2.59 × 10 01 | −2.95 × 10 01 | −2.50 × 10 01 | −2.92 × 10 01 | −2.92 × 10 01 | −2.96 × 10 01 | −2.56 × 10 01 | −2.90 × 10 01
f9 | Stdev. | 3.48 × 10 01 | 9.69 × 10 03 | 8.58 × 10 01 | 3.17 × 10 02 | 9.18 × 10 02 | 1.09 × 10 03 | 1.74 × 10 00 | 8.78 × 10 02
f10 | Aver. | 4.60 × 10 51 | 3.18 × 10 51 | 7.05 × 10 48 | 7.28 × 10 48 | 8.05 × 10 51 | 5.61 × 10 51 | 2.45 × 10 49 | 1.01 × 10 49
f10 | Stdev. | 4.22 × 10 103 | 1.79 × 10 103 | 1.66 × 10 98 | 1.49 × 10 98 | 3.81 × 10 104 | 1.18 × 10 104 | 1.04 × 10 99 | 1.77 × 10 98
f11 | Aver. | 1.40 × 10 01 | 2.77 × 10 30 | 7.65 × 10 03 | 2.09 × 10 03 | 1.65 × 10 13 | 7.80 × 10 05 | 7.05 × 10 05 | 5.70 × 10 04
f11 | Stdev. | 2.16 × 10 03 | 6.80 × 10 59 | 1.92 × 10 04 | 2.49 × 10 06 | 5.47 × 10 25 | 7.53 × 10 08 | 1.02 × 10 08 | 1.52 × 10 06
f12 | Aver. | 5.04 × 10 00 | 3.63 × 10 02 | 3.05 × 10 01 | 5.18 × 10 03 | 7.39 × 10 01 | 9.88 × 10 09 | 4.59 × 10 00 | 1.04 × 10 02
f12 | Stdev. | 3.34 × 10 00 | 1.05 × 10 02 | 1.32 × 10 02 | 5.37 × 10 04 | 3.57 × 10 00 | 6.32 × 10 16 | 4.51 × 10 01 | 1.02 × 10 03
f13 | Aver. | 1.35 × 10 02 | 4.48 × 10 01 | 3.88 × 10 02 | 1.47 × 10 01 | 2.11 × 10 02 | 5.00 × 10 02 | 1.78 × 10 02 | 2.06 × 10 01
f13 | Stdev. | 2.17 × 10 03 | 1.40 × 10 00 | 4.78 × 10 03 | 5.63 × 10 01 | 1.21 × 10 04 | 4.97 × 10 02 | 4.00 × 10 03 | 8.82 × 10 01
f14 | Aver. | 1.83 × 10 01 | 4.25 × 10 01 | 4.62 × 10 01 | 5.55 × 10 01 | 2.07 × 10 01 | 5.01 × 10 01 | 4.63 × 10 01 | 4.04 × 10 01
f14 | Stdev. | 6.48 × 10 02 | 1.45 × 10 03 | 1.15 × 10 03 | 5.43 × 10 03 | 7.11 × 10 02 | 2.26 × 10 03 | 1.56 × 10 03 | 2.60 × 10 03
Table 8. Gaussian-guided SAWSA comparison result data when D = 50.

Fun. | Meas. | RDE1 | GRDE1 | RDE2 | GRDE2 | RDE3 | GRDE3 | RDE4 | GRDE4
f1 | Aver. | 1.72 × 10 01 | 1.20 × 10 01 | 5.15 × 10 00 | 1.32 × 10 00 | 1.22 × 10 01 | 1.47 × 10 01 | 6.21 × 10 00 | 5.10 × 10 00
f1 | Stdev. | 1.79 × 10 00 | 4.93 × 10 01 | 3.64 × 10 00 | 8.02 × 10 01 | 3.62 × 10 00 | 1.29 × 10 00 | 4.56 × 10 00 | 1.30 × 10 01
f2 | Aver. | 2.59 × 10 00 | 6.12 × 10 00 | 4.93 × 10 02 | 4.09 × 10 02 | 2.67 × 10 01 | 5.06 × 10 00 | 4.27 × 10 02 | 7.12 × 10 02
f2 | Stdev. | 1.01 × 10 00 | 3.80 × 10 01 | 9.06 × 10 03 | 3.72 × 10 03 | 3.18 × 10 01 | 3.17 × 10 00 | 3.50 × 10 02 | 5.61 × 10 03
f3 | Aver. | 1.08 × 10 09 | 1.28 × 10 09 | 2.41 × 10 12 | 1.69 × 10 12 | 9.25 × 10 10 | 5.01 × 10 09 | 3.17 × 10 10 | 1.29 × 10 19
f3 | Stdev. | 2.86 × 10 19 | 7.54 × 10 18 | 9.79 × 10 23 | 6.04 × 10 24 | 6.34 × 10 18 | 2.38 × 10 16 | 4.34 × 10 19 | 1.58 × 10 38
f4 | Aver. | −9.88 × 10 01 | −9.96 × 10 01 | −9.89 × 10 01 | −1.00 × 10 00 | −9.90 × 10 01 | −1.00 × 10 00 | −9.89 × 10 01 | −1.00 × 10 00
f4 | Stdev. | 4.22 × 10 06 | 5.50 × 10 05 | 2.76 × 10 06 | 2.72 × 10 09 | 3.79 × 10 06 | 3.04 × 10 10 | 2.61 × 10 06 | 3.99 × 10 09
f5 | Aver. | 7.87 × 10 00 | 1.05 × 10 01 | −1.89 × 10 00 | −3.47 × 10 00 | 3.73 × 10 00 | 2.66 × 10 01 | −5.62 × 10 01 | −3.03 × 10 00
f5 | Stdev. | 4.83 × 10 00 | 2.82 × 10 00 | 8.01 × 10 01 | 1.15 × 10 01 | 8.86 × 10 00 | 3.29 × 10 00 | 1.59 × 10 00 | 3.21 × 10 01
f6 | Aver. | 1.42 × 10 00 | 2.95 × 10 00 | 6.61 × 10 00 | 6.48 × 10 00 | 1.48 × 10 00 | 6.69 × 10 01 | 8.28 × 10 01 | 3.77 × 10 00
f6 | Stdev. | 4.32 × 10 01 | 1.11 × 10 01 | 1.93 × 10 01 | 1.07 × 10 02 | 2.52 × 10 00 | 6.77 × 10 01 | 1.29 × 10 00 | 1.19 × 10 01
f7 | Aver. | 1.21 × 10 09 | 1.75 × 10 09 | 1.32 × 10 12 | 1.09 × 10 17 | 1.97 × 10 10 | 2.45 × 10 09 | 2.13 × 10 09 | 8.39 × 10 20
f7 | Stdev. | 3.46 × 10 19 | 4.78 × 10 18 | 3.08 × 10 24 | 2.67 × 10 34 | 2.66 × 10 19 | 4.81 × 10 17 | 8.12 × 10 17 | 6.46 × 10 39
f8 | Aver. | 1.36 × 10 01 | 2.52 × 10 01 | 8.59 × 10 07 | 9.72 × 10 08 | 5.68 × 10 01 | 3.29 × 10 01 | 1.85 × 10 01 | 1.10 × 10 08
f8 | Stdev. | 3.67 × 10 01 | 2.13 × 10 02 | 1.65 × 10 12 | 4.86 × 10 15 | 1.82 × 10 00 | 5.98 × 10 01 | 2.39 × 10 01 | 7.24 × 10 17
f9 | Aver. | −3.94 × 10 01 | −3.92 × 10 01 | −4.71 × 10 01 | −4.86 × 10 01 | −4.81 × 10 01 | −4.00 × 10 01 | −4.67 × 10 01 | −4.86 × 10 01
f9 | Stdev. | 5.30 × 10 01 | 4.85 × 10 00 | 1.90 × 10 01 | 2.05 × 10 01 | 2.11 × 10 01 | 3.47 × 10 00 | 2.46 × 10 01 | 1.49 × 10 01
f10 | Aver. | 7.82 × 10 115 | 5.62 × 10 112 | 3.60 × 10 115 | 9.41 × 10 112 | 3.47 × 10 115 | 1.01 × 10 113 | 1.07 × 10 116 | 7.39 × 10 112
f10 | Stdev. | 2.19 × 10 226 | 3.82 × 10 225 | 4.84 × 10 226 | 3.08 × 10 226 | 2.19 × 10 226 | 2.18 × 10 226 | 4.92 × 10 226 | 1.20 × 10 226
f11 | Aver. | 1.56 × 10 00 | 5.17 × 10 01 | 8.03 × 10 02 | 5.54 × 10 02 | 3.44 × 10 03 | 3.15 × 10 08 | 8.48 × 10 04 | 2.66 × 10 02
f11 | Stdev. | 6.74 × 10 01 | 1.39 × 10 01 | 5.38 × 10 02 | 1.03 × 10 02 | 1.06 × 10 04 | 1.70 × 10 14 | 1.30 × 10 06 | 3.65 × 10 03
f12 | Aver. | 2.01 × 10 01 | 1.04 × 10 02 | 5.04 × 10 02 | 1.04 × 10 02 | 1.50 × 10 00 | 2.48 × 10 01 | 4.66 × 10 02 | 1.04 × 10 02
f12 | Stdev. | 1.25 × 10 01 | 3.16 × 10 03 | 5.08 × 10 02 | 2.15 × 10 03 | 1.58 × 10 01 | 2.94 × 10 02 | 1.07 × 10 02 | 1.02 × 10 03
f13 | Aver. | 3.10 × 10 02 | 8.46 × 10 02 | 4.84 × 10 01 | 1.24 × 10 00 | 6.22 × 10 02 | 5.42 × 10 02 | 4.95 × 10 01 | 1.85 × 10 01
f13 | Stdev. | 4.30 × 10 03 | 8.49 × 10 03 | 2.67 × 10 02 | 2.16 × 10 01 | 5.52 × 10 04 | 1.46 × 10 04 | 2.82 × 10 03 | 5.36 × 10 02
f14 | Aver. | 9.67 × 10 01 | 3.24 × 10 01 | 1.57 × 10 02 | 6.14 × 10 01 | 1.30 × 10 02 | 5.93 × 10 01 | 7.55 × 10 01 | 1.37 × 10 02
f14 | Stdev. | 9.73 × 10 03 | 2.53 × 10 03 | 1.02 × 10 04 | 1.51 × 10 03 | 6.80 × 10 03 | 1.57 × 10 03 | 1.87 × 10 03 | 4.85 × 10 03
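The GRDE columns in Tables 7 and 8 pair each RDE updater with the Gaussian-guided parameter control that gives GSAWSA its name. As a loose sketch of the general idea only (we assume here that the guidance reduces to a zero-mean Gaussian perturbation of the mutant parameter vector; in the paper the control is derived from information entropy theory, and sigma below is a hypothetical spread):

```python
import numpy as np

def gaussian_guided(mutant, sigma=0.1, rng=None):
    """Illustrative Gaussian guidance: perturb a mutant parameter vector
    with a zero-mean Gaussian step of assumed spread sigma."""
    rng = rng or np.random.default_rng()
    return mutant + rng.normal(0.0, sigma, size=np.shape(mutant))
```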
Table 9. h and p-values for the statistical test with WSA as the control method. Each cell reports h / p; h = 1 marks a statistically significant difference from WSA (p < 0.05), h = 0 no significant difference.

D = 30 | DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4 | GRDE1 | GRDE2 | GRDE3 | GRDE4
f1 | 1 / 8.02 × 10^−36 | 1 / 2.00 × 10^−36 | 1 / 1.31 × 10^−27 | 1 / 1.02 × 10^−39 | 1 / 3.10 × 10^−17 | 1 / 1.99 × 10^−44 | 1 / 1.07 × 10^−45 | 1 / 2.53 × 10^−41 | 1 / 9.25 × 10^−26 | 1 / 1.35 × 10^−72 | 1 / 1.31 × 10^−40
f2 | 1 / 1.69 × 10^−56 | 1 / 2.30 × 10^−63 | 1 / 1.14 × 10^−59 | 1 / 1.48 × 10^−62 | 1 / 1.75 × 10^−71 | 1 / 5.04 × 10^−66 | 1 / 6.23 × 10^−58 | 1 / 6.82 × 10^−43 | 1 / 1.17 × 10^−77 | 1 / 2.68 × 10^−77 | 1 / 1.50 × 10^−151
f3 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 2.03 × 10^−208 | 1 / 0.00 × 10^00
f4 | 1 / 1.37 × 10^−124 | 1 / 7.95 × 10^−125 | 1 / 5.99 × 10^−124 | 1 / 1.41 × 10^−124 | 1 / 8.09 × 10^−125 | 1 / 6.01 × 10^−125 | 1 / 4.22 × 10^−124 | 1 / 3.00 × 10^−120 | 1 / 1.75 × 10^−130 | 1 / 3.72 × 10^−103 | 1 / 5.69 × 10^−88
f5 | 1 / 1.76 × 10^−42 | 1 / 3.88 × 10^−39 | 1 / 6.02 × 10^−46 | 1 / 2.60 × 10^−55 | 1 / 5.42 × 10^−76 | 1 / 1.96 × 10^−76 | 1 / 4.87 × 10^−60 | 1 / 5.96 × 10^−45 | 1 / 1.05 × 10^−70 | 1 / 1.96 × 10^−78 | 1 / 7.76 × 10^−142
f6 | 1 / 1.07 × 10^−15 | 1 / 2.91 × 10^−10 | 1 / 4.87 × 10^−10 | 1 / 2.84 × 10^−14 | 1 / 4.34 × 10^−31 | 1 / 5.32 × 10^−32 | 1 / 5.42 × 10^−35 | 1 / 2.88 × 10^−32 | 0 / 2.02 × 10^−01 | 1 / 5.86 × 10^−18 | 0 / 9.16 × 10^−01
f7 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 7.09 × 10^−186 | 1 / 0.00 × 10^00
f8 | 1 / 9.35 × 10^−45 | 1 / 3.60 × 10^−38 | 1 / 3.75 × 10^−42 | 1 / 1.27 × 10^−36 | 1 / 2.22 × 10^−69 | 1 / 6.62 × 10^−73 | 1 / 5.89 × 10^−59 | 1 / 1.07 × 10^−40 | 1 / 1.35 × 10^−75 | 1 / 3.53 × 10^−77 | 1 / 4.59 × 10^−150
f9 | 1 / 9.53 × 10^−82 | 1 / 8.29 × 10^−82 | 1 / 5.97 × 10^−67 | 1 / 5.37 × 10^−76 | 1 / 6.33 × 10^−88 | 1 / 6.11 × 10^−98 | 1 / 2.08 × 10^−94 | 1 / 2.18 × 10^−58 | 1 / 3.69 × 10^−75 | 1 / 3.81 × 10^−76 | 1 / 9.80 × 10^−81
f10 | 0 / 9.40 × 10^−02 | 1 / 1.81 × 10^−02 | 1 / 1.49 × 10^−02 | 0 / 9.15 × 10^−01 | 0 / 8.93 × 10^−02 | 1 / 1.91 × 10^−02 | 1 / 1.62 × 10^−02 | 1 / 9.75 × 10^−03 | 1 / 9.74 × 10^−03 | 1 / 9.74 × 10^−03 | 1 / 7.87 × 10^−06
f11 | 1 / 5.65 × 10^−08 | 1 / 7.67 × 10^−20 | 1 / 2.20 × 10^−06 | 1 / 1.90 × 10^−03 | 1 / 9.16 × 10^−21 | 1 / 9.13 × 10^−21 | 1 / 9.15 × 10^−21 | 1 / 1.09 × 10^−29 | 1 / 1.39 × 10^−20 | 1 / 1.66 × 10^−20 | 1 / 3.90 × 10^−51
f12 | 1 / 2.49 × 10^−42 | 1 / 9.71 × 10^−38 | 1 / 5.12 × 10^−35 | 1 / 5.23 × 10^−35 | 1 / 1.61 × 10^−36 | 1 / 2.61 × 10^−48 | 1 / 3.09 × 10^−49 | 1 / 7.61 × 10^−45 | 1 / 2.21 × 10^−66 | 1 / 1.49 × 10^−78 | 1 / 1.66 × 10^−132
f13 | 1 / 8.46 × 10^−68 | 1 / 2.39 × 10^−45 | 1 / 3.15 × 10^−50 | 1 / 1.28 × 10^−52 | 1 / 8.80 × 10^−97 | 1 / 6.28 × 10^−97 | 1 / 1.37 × 10^−70 | 1 / 5.58 × 10^−55 | 1 / 1.44 × 10^−96 | 1 / 2.44 × 10^−97 | 1 / 3.21 × 10^−155
f14 | 1 / 2.79 × 10^−18 | 1 / 1.94 × 10^−06 | 1 / 4.93 × 10^−14 | 1 / 7.06 × 10^−20 | 0 / 1.46 × 10^−01 | 1 / 7.86 × 10^−04 | 1 / 2.37 × 10^−04 | 1 / 1.47 × 10^−31 | 1 / 2.18 × 10^−05 | 1 / 3.21 × 10^−06 | 1 / 3.67 × 10^−03

D = 50 | DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4 | GRDE1 | GRDE2 | GRDE3 | GRDE4
f1 | 1 / 4.20 × 10^−35 | 1 / 6.44 × 10^−28 | 1 / 4.12 × 10^−35 | 1 / 2.73 × 10^−36 | 1 / 4.00 × 10^−11 | 1 / 1.20 × 10^−45 | 1 / 5.52 × 10^−33 | 1 / 4.54 × 10^−47 | 1 / 5.75 × 10^−17 | 1 / 8.03 × 10^−75 | 1 / 4.34 × 10^−27
f2 | 1 / 9.99 × 10^−43 | 1 / 2.70 × 10^−48 | 1 / 1.79 × 10^−55 | 1 / 2.80 × 10^−52 | 1 / 4.53 × 10^−66 | 1 / 1.43 × 10^−63 | 1 / 7.30 × 10^−44 | 1 / 1.95 × 10^−36 | 1 / 5.95 × 10^−69 | 1 / 1.23 × 10^−69 | 1 / 1.60 × 10^−133
f3 | 1 / 1.82 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 1.81 × 10^−188 | 1 / 1.82 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 1.88 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 1.80 × 10^−188 | 1 / 0.00 × 10^00
f4 | 1 / 5.81 × 10^−125 | 1 / 1.51 × 10^−127 | 1 / 3.78 × 10^−132 | 1 / 5.17 × 10^−131 | 1 / 8.09 × 10^−127 | 1 / 1.54 × 10^−126 | 1 / 6.80 × 10^−126 | 1 / 2.86 × 10^−107 | 1 / 7.01 × 10^−136 | 1 / 4.82 × 10^−123 | 1 / 2.42 × 10^−94
f5 | 1 / 1.07 × 10^−22 | 1 / 6.12 × 10^−37 | 1 / 3.06 × 10^−47 | 1 / 1.70 × 10^−46 | 1 / 8.79 × 10^−78 | 1 / 1.17 × 10^−80 | 1 / 1.02 × 10^−60 | 1 / 8.69 × 10^−45 | 1 / 1.33 × 10^−70 | 1 / 4.51 × 10^−83 | 1 / 2.26 × 10^−135
f6 | 1 / 4.12 × 10^−22 | 0 / 9.42 × 10^−01 | 1 / 1.09 × 10^−18 | 1 / 2.17 × 10^−33 | 0 / 3.83 × 10^−01 | 1 / 2.65 × 10^−20 | 1 / 1.14 × 10^−21 | 1 / 6.92 × 10^−44 | 1 / 3.08 × 10^−05 | 0 / 5.81 × 10^−02 | 1 / 1.06 × 10^−04
f7 | 1 / 3.11 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 3.09 × 10^−178 | 1 / 3.09 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 3.15 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 3.07 × 10^−178 | 1 / 0.00 × 10^00
f8 | 1 / 1.22 × 10^−36 | 1 / 1.05 × 10^−37 | 1 / 5.78 × 10^−36 | 1 / 6.06 × 10^−49 | 1 / 2.13 × 10^−40 | 1 / 3.12 × 10^−57 | 1 / 7.67 × 10^−46 | 1 / 1.56 × 10^−38 | 1 / 2.48 × 10^−66 | 1 / 2.41 × 10^−82 | 1 / 2.02 × 10^−119
f9 | 1 / 4.12 × 10^−74 | 1 / 2.40 × 10^−71 | 1 / 1.59 × 10^−71 | 1 / 3.13 × 10^−76 | 1 / 3.58 × 10^−83 | 1 / 1.47 × 10^−90 | 1 / 2.49 × 10^−83 | 1 / 1.13 × 10^−57 | 1 / 5.19 × 10^−54 | 1 / 2.55 × 10^−57 | 1 / 1.29 × 10^−64
f10 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 1.76 × 10^−01 | 0 / 3.20 × 10^−01 | 0 / 3.12 × 10^−01 | 0 / 7.95 × 10^−01
f11 | 1 / 4.76 × 10^−23 | 1 / 3.98 × 10^−29 | 1 / 2.41 × 10^−16 | 1 / 1.02 × 10^−28 | 1 / 2.53 × 10^−31 | 1 / 2.13 × 10^−31 | 1 / 2.07 × 10^−31 | 1 / 3.70 × 10^−49 | 1 / 2.05 × 10^−29 | 1 / 1.36 × 10^−30 | 1 / 1.18 × 10^−70
f12 | 1 / 1.51 × 10^−35 | 1 / 3.43 × 10^−27 | 1 / 1.75 × 10^−18 | 1 / 2.06 × 10^−29 | 1 / 2.90 × 10^−22 | 1 / 1.64 × 10^−45 | 1 / 1.87 × 10^−27 | 1 / 5.47 × 10^−37 | 1 / 1.42 × 10^−52 | 1 / 1.26 × 10^−85 | 1 / 1.38 × 10^−102
f13 | 1 / 2.62 × 10^−41 | 1 / 4.16 × 10^−42 | 1 / 4.73 × 10^−50 | 1 / 3.14 × 10^−72 | 1 / 6.89 × 10^−99 | 1 / 1.08 × 10^−98 | 1 / 1.06 × 10^−73 | 1 / 4.37 × 10^−55 | 1 / 1.21 × 10^−97 | 1 / 1.30 × 10^−99 | 1 / 1.84 × 10^−136
f14 | 1 / 3.62 × 10^−34 | 1 / 3.04 × 10^−04 | 1 / 1.47 × 10^−22 | 1 / 4.39 × 10^−33 | 1 / 8.42 × 10^−04 | 1 / 8.91 × 10^−05 | 0 / 5.79 × 10^−01 | 1 / 9.15 × 10^−51 | 1 / 2.23 × 10^−08 | 1 / 1.82 × 10^−04 | 1 / 7.67 × 10^−05
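A minimal sketch of how one (h, p) entry of Table 9 could be produced, assuming a two-sample Wilcoxon rank-sum test over the per-run best fitness values of a candidate and the WSA control (the test choice and the name compare_to_control are illustrative assumptions):

```python
from scipy.stats import ranksums

def compare_to_control(control_best, candidate_best, alpha=0.05):
    """Return (h, p): h = 1 if the candidate differs significantly
    from the control at significance level alpha."""
    _, p = ranksums(control_best, candidate_best)
    return int(p < alpha), p
```

Calling compare_to_control(wsa_runs, gsawsa_runs) with two arrays of best fitness values yields one h / p cell in the style of Table 9.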
Table 10. CPU time comparison with 20 search agents.

D = 30 | PSO | BA | HSABA | WSA | DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4 | GRDE1 | GRDE2 | GRDE3 | GRDE4
(The DE1 through GRDE4 columns are SAWSA variants.)
f1 | 4.40 | 1.92 | 9.26 | 8.19 | 11.24 | 11.44 | 11.44 | 11.98 | 12.61 | 12.67 | 12.55 | 13.12 | 13.79 | 13.70 | 13.49
f2 | 4.32 | 2.02 | 9.38 | 7.80 | 10.94 | 11.17 | 11.04 | 11.64 | 12.05 | 12.30 | 11.73 | 12.85 | 12.78 | 12.84 | 12.59
f3 | 7.16 | 2.51 | 13.77 | 10.68 | 14.81 | 15.10 | 15.54 | 15.87 | 16.57 | 16.43 | 16.83 | 16.41 | 16.65 | 16.54 | 16.74
f4 | 7.13 | 2.76 | 13.72 | 10.15 | 14.60 | 14.55 | 14.44 | 15.54 | 15.35 | 15.37 | 15.52 | 15.96 | 15.97 | 16.24 | 15.83
f5 | 5.06 | 2.16 | 10.62 | 8.40 | 11.76 | 11.82 | 11.72 | 12.47 | 12.46 | 12.58 | 12.54 | 13.54 | 13.69 | 13.52 | 13.53
f6 | 3.88 | 1.88 | 8.49 | 7.84 | 10.63 | 10.61 | 11.17 | 11.25 | 12.02 | 12.09 | 11.95 | 12.30 | 12.28 | 12.59 | 12.18
f7 | 6.75 | 2.44 | 13.08 | 10.26 | 14.39 | 14.49 | 15.28 | 15.29 | 16.51 | 15.99 | 16.12 | 15.82 | 15.97 | 16.16 | 16.12
f8 | 6.30 | 2.57 | 12.65 | 9.45 | 13.14 | 13.30 | 14.09 | 13.79 | 14.84 | 14.85 | 14.61 | 15.28 | 15.61 | 15.44 | 15.62
f9 | 7.28 | 2.72 | 14.36 | 10.54 | 14.67 | 14.73 | 15.04 | 15.51 | 15.46 | 15.77 | 15.61 | 16.49 | 16.73 | 16.59 | 16.62
f10 | 10.00 | 3.42 | 18.77 | 12.45 | 17.05 | 17.27 | 17.14 | 17.84 | 18.04 | 17.99 | 18.12 | 19.89 | 20.14 | 19.94 | 20.06
f11 | 3.43 | 1.77 | 7.82 | 7.47 | 10.12 | 10.30 | 10.51 | 10.93 | 11.75 | 11.73 | 11.50 | 11.66 | 12.05 | 11.81 | 11.87
f12 | 9.76 | 3.56 | 19.12 | 12.58 | 16.15 | 15.64 | 17.47 | 16.38 | 16.81 | 16.83 | 16.41 | 19.50 | 17.55 | 17.99 | 17.60
f13 | 4.16 | 1.90 | 9.19 | 7.73 | 11.08 | 10.88 | 11.17 | 11.54 | 12.18 | 12.11 | 12.02 | 12.76 | 12.73 | 12.75 | 12.79
f14 | 3.77 | 1.88 | 8.28 | 7.66 | 10.44 | 10.55 | 10.88 | 11.09 | 11.97 | 11.91 | 11.75 | 12.10 | 12.48 | 12.20 | 12.08

D = 50 | PSO | BA | HSABA | WSA | DE1 | DE2 | DE3 | RDE1 | RDE2 | RDE3 | RDE4 | GRDE1 | GRDE2 | GRDE3 | GRDE4
f1 | 5.09 | 2.25 | 10.75 | 9.56 | 12.82 | 13.06 | 13.43 | 13.56 | 14.33 | 13.49 | 13.91 | 14.90 | 14.89 | 14.86 | 15.03
f2 | 5.05 | 2.36 | 11.08 | 9.22 | 12.59 | 12.46 | 12.59 | 13.45 | 13.76 | 13.44 | 13.14 | 14.40 | 14.23 | 14.52 | 14.44
f3 | 9.82 | 3.21 | 18.21 | 13.52 | 18.45 | 18.44 | 19.31 | 19.39 | 20.59 | 20.80 | 20.55 | 20.29 | 20.61 | 20.40 | 20.66
f4 | 9.54 | 3.42 | 18.58 | 13.26 | 18.05 | 18.05 | 18.00 | 18.89 | 18.96 | 18.88 | 18.96 | 19.71 | 19.86 | 19.83 | 19.79
f5 | 6.21 | 2.58 | 12.84 | 9.97 | 13.69 | 13.58 | 13.72 | 14.25 | 14.29 | 14.33 | 14.53 | 15.70 | 15.95 | 15.69 | 15.72
f6 | 4.31 | 2.27 | 9.60 | 8.76 | 11.54 | 11.69 | 12.28 | 12.21 | 13.20 | 13.32 | 12.99 | 13.52 | 13.93 | 13.70 | 13.56
f7 | 9.15 | 3.05 | 17.07 | 12.87 | 17.83 | 17.72 | 18.56 | 18.58 | 19.82 | 19.47 | 19.51 | 19.65 | 19.73 | 19.39 | 19.64
f8 | 8.14 | 3.27 | 16.46 | 11.70 | 16.02 | 15.88 | 16.98 | 16.50 | 18.08 | 18.05 | 17.59 | 18.57 | 18.70 | 18.73 | 18.61
f9 | 10.34 | 3.62 | 19.72 | 13.77 | 18.57 | 18.43 | 18.51 | 19.05 | 19.33 | 19.17 | 19.21 | 20.89 | 20.75 | 20.76 | 20.64
f10 | 10.23 | 3.84 | 19.77 | 13.69 | 18.14 | 18.12 | 17.98 | 18.76 | 19.02 | 18.95 | 19.06 | 21.17 | 21.30 | 20.96 | 21.26
f11 | 3.76 | 2.03 | 8.82 | 8.41 | 11.38 | 11.21 | 11.55 | 12.02 | 12.65 | 12.70 | 12.43 | 12.95 | 13.15 | 12.88 | 13.17
f12 | 13.46 | 4.81 | 26.55 | 17.04 | 21.09 | 20.65 | 20.80 | 21.36 | 21.46 | 21.44 | 21.11 | 25.42 | 22.31 | 23.89 | 22.22
f13 | 5.09 | 2.26 | 11.01 | 9.29 | 12.32 | 12.44 | 13.03 | 13.03 | 13.48 | 14.08 | 13.39 | 14.48 | 14.47 | 14.35 | 14.41
f14 | 4.31 | 2.31 | 9.53 | 8.60 | 11.55 | 11.53 | 12.13 | 12.24 | 13.16 | 13.30 | 12.98 | 13.58 | 13.85 | 13.76 | 13.31
