Entropy | Article | Open Access | 10 January 2018

Gaussian Guided Self-Adaptive Wolf Search Algorithm Based on Information Entropy Theory

Qun Song, Simon Fong, Suash Deb and Thomas Hanne

1 Department of Computer and Information Science, University of Macau, Macau 999078, China
2 Decision Sciences and Modelling Program, Victoria University, Melbourne 8001, Australia
3 Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland, 4600 Olten, Switzerland
* Author to whom correspondence should be addressed.

Abstract

Nowadays, swarm intelligence algorithms are becoming increasingly popular for solving many optimization problems. The Wolf Search Algorithm (WSA) is a contemporary semi-swarm intelligence algorithm designed to solve complex optimization problems, and it has demonstrated its capability especially for large-scale problems. However, it shares a common weakness with other swarm intelligence algorithms: its performance depends heavily on the chosen values of the control parameters. In 2016, we published the Self-Adaptive Wolf Search Algorithm (SAWSA), which offers a simple solution to this adaptation problem. As a very simple schema, the original SAWSA adaptation is based on random guesses, which is unstable and naive. In this paper, building on the SAWSA, we investigate the WSA search behaviour more deeply. A new parameter-guided updater, the Gaussian-guided parameter control mechanism based on information entropy theory, is proposed as an enhancement of the SAWSA, and the heuristic updating function is improved. Simulation experiments for the new method, denoted as the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA), validate the increased performance of the improved version of WSA in comparison to its standard version and other prevalent swarm algorithms.

1. Introduction

In computer science, efficient algorithms for optimizing applications ranging from robot control [1] and logistics [2] to healthcare management [3] have always evoked great interest. The general aim of an optimization problem is to obtain a solution with a maximum or minimum value that solves the problem. The solution can often be measured as a fitness value from a function f(x) whose search space is too huge for a deterministic algorithm to find the best solution within a given amount of time [4]. Optimization algorithms are usually either deterministic, of which there are many in operations research, or non-deterministic, which iteratively and stochastically refine a solution using heuristics. For example, in data mining, heuristics-based search algorithms optimize the data clustering efficiency [5] and improve the classification accuracy by feature selection [6]. In the clustering case, different candidate formations of clusters are tried until one is found to be most ideal in terms of the highest similarity among the data in the same cluster. In the classification case, the best feature subset that is most relevant to the prediction target is selected using heuristic means. What these two cases have in common is that the optimization problem is combinatorial in nature: the number of candidate solutions in the search space is too huge, which contributes to the NP-hardness of the problem. Search algorithms that are guided by stochastic heuristics are known as meta-heuristics, which literally means a tier of logic controlling the heuristic functions. In this paper, we focus on devising a new meta-heuristic that is parameter-free, based on the semi-swarming type of search algorithms.
Swarming search algorithms are contemporary population-based algorithms in which search agents form a swarm that moves according to some nature-inspired or biologically-inspired social behavioural patterns. For example, Particle Swarm Optimization (PSO) is among the most developed population-based metaheuristic algorithms; its search agents swarm as a single group during the search operation. Each search particle in PSO has its own velocity, thereby influencing one another; collectively, the agents, which are known as particles, move as one whole large swarm. There are other types of metaheuristics that mimic animal or insect behaviours, such as the ant colony algorithm [7] and the firefly algorithm [8], and some newer nature-inspired methods like the water wave algorithm. These algorithms do not always have the agents glued together, moving as one swarm. Instead, the agents move independently, and sometimes they are scattered. Such algorithms are known as loosely-packed or semi-swarm bio-inspired algorithms, and they have certain advantages in some optimization scenarios. Some well-known semi-swarm algorithms are the Bat Algorithm (BA) [9], the polar bear algorithm [10], the ant lion algorithm, as well as the wolf search algorithm. These algorithms usually embrace search methods that explore the search space both in breadth and in depth and mimic the movement patterns of animals, insects or even plants found in nature. Their performance in heuristic optimization has been proven to be on par with that of many classical methods, including tight-swarm or full-swarm algorithms.
However, as optimization problems can be very different from case to case, it is imperative for nature-inspired swarm intelligence algorithms to be adaptive to different situations. The traditional way to solve this kind of problem is to adjust the control parameters manually. This may involve massive trial-and-error to adapt the model behaviour to changing patterns. Once the situation changes, the model may need to be reconfigured for optimal performance. That is why self-adaptive approaches have become more and more attractive for many researchers in recent years.
Inspired by the preying behaviour of a wolf pack, a contemporary heuristic optimization algorithm called the Wolf Search Algorithm (WSA) [11] was proposed. In a wolf swarm, each wolf not only searches for food individually, but can also merge with a peer when the latter is in a better situation. With this action model, the search can become more efficient compared to other single-leader swarms. By mimicking the hunting patterns of a wolf pack, each wolf in WSA acts as a search agent that can find solutions independently, as well as merge with its peers within its visual range. Sometimes, wolves in WSA are simulated to encounter human hunters, from whom they will escape to a position far beyond their current one. Human hunters always pose a natural threat to wolves; in the optimization process, this enemy of the wolves, by algorithm design, keeps the search out of local optima and makes it try other parts of the search space in the hope of finding better solutions. As shown in Figure 1, w_i is the current search agent (the wolf) and w_j is its peer in its visual range γ. Δ and δ are locations in the search agent's visual range; s is the step size of its movement, and Γ is the search space of the objective function. The basic movement of an individual search agent is guided by Brownian motion. In most metaheuristic algorithms, the two most popular search methods are Levy search and Brownian search. Levy search is good for exploration [12], and Brownian search is efficient for exploiting the optimal solution [13]. In WSA, both search methods are used: Brownian motion for the basic movement and Levy search for the pack movement.
Figure 1. Movement patterns of wolf preying and the algorithm parameters.
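To make this search behaviour concrete, the following is a minimal Python sketch of one WSA iteration as described above. It is our illustration, not the authors' original MATLAB implementation: the function name wsa_step, the minimization convention and the exact form of the escape jump are assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def wsa_step(wolves, fitness, s=1.0, alpha=0.2, gamma=8.0, pa=0.25,
             bounds=(-35.0, 35.0)):
    """One illustrative WSA iteration (minimization assumed)."""
    n, d = wolves.shape
    new = wolves.copy()
    for i in range(n):
        # Peers inside the visual range gamma that have better fitness.
        dist = np.linalg.norm(wolves - wolves[i], axis=1)
        peers = [j for j in range(n)
                 if j != i and dist[j] <= gamma and fitness[j] < fitness[i]]
        if peers:
            # Merge towards the best peer in sight (pack movement).
            j = min(peers, key=lambda k: fitness[k])
            new[i] += alpha * rng.random() * (wolves[j] - wolves[i])
        else:
            # Basic movement: Brownian motion with step size s.
            new[i] += s * rng.normal(size=d)
        if rng.random() < pa:
            # Escape from a "hunter": jump far beyond the current position.
            new[i] += rng.uniform(-1.0, 1.0, size=d) * gamma
        new[i] = np.clip(new[i], *bounds)
    return new
```

Here, wolves is an n × d array of agent positions and fitness holds the objective value of each agent; the defaults for s, alpha, gamma and pa mirror the static values used later in Section 3.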
As a typical swarm intelligence heuristic optimization algorithm, WSA shares a common structure, and also a common drawback, with other algorithms: the efficacy of the algorithm depends heavily on the chosen parameter values. It is hardly possible to guess the most suitable parameter values for the best algorithm performance; these values are either taken from suggested defaults or adjusted manually. In Figure 1 [14], the parameter values remain unchanged during the search operation in the original version of WSA. Quite often, the performance and efficacy of the algorithms for different problems, applications or experiments differ greatly when different parameter values are used. Since there is no golden rule for how the model parameters should be set, and the models are sensitive to the parameter values used, users can only guess the values or find them by trial-and-error. In summary, given that the nature of the swarm search is dynamic, the parameters should be made self-adaptive to the dynamic nature of the problem: some parameter values may yield the maximum performance at one moment, while other values may be better in the next.
In order to solve this problem, the Self-Adaptive Wolf Search Algorithm (SAWSA) modified WSA with a combination of techniques, such as a random selection method and a core-guided (or global best-guided) method integrated with the Differential Evolution (DE) crossover function as the parameter-updating mechanism [14]. This SAWSA is based on randomization, which is clearly not the best option for the rule-based WSA. Compared with other swarm intelligence algorithms, the most valuable advantage of the original WSA is its stability. However, even though the average results of the published SAWSA are better than those of the WSA, the stability is weakened. To generate a better schema for the algorithm, the implicit relations between the parameters and the performance should be studied. In this paper, we try to find a way to stabilise the performance and to generate a self-adaptation-guided part for the algorithm. Furthermore, the coding structure is modified. The new algorithm is denoted as the Gaussian-Guided Self-Adaptive Wolf Search Algorithm (GSAWSA).
The contributions of this paper are summarized as follows. Firstly, the self-adaptive parameter ranges for WSA are carefully improved. Secondly, the parameter updater is no longer embedded in the main algorithm; it evolves as an independent updater in this new version. These two changes yield noticeably better optimization performance than the previous version of SAWSA [14]. To verify the performance of the new model once the changes have been made, the experiments are redesigned with settings that enhance the clarity of the result display. The novelty of this paper is the Gaussian-guided parameter updater, a method based on information entropy theory. To improve the performance of SAWSA further, we propose treating the search agent behaviour as chaotic behaviour, so that the algorithm can be perceived as a chaotic system. Using chaotic stability theory, we can use chaotic maps to guide the operation of the system, and the entropy value can be used as a measurement of the stability and of the inner information communication. In this paper, the feasibility of this new method is analysed, and the Gaussian map is found to be a suitable map for WSA. The advantage of using a Gaussian map is demonstrated via an extensive simulation experiment.
We verify the efficacy of the considered methods with fourteen typical benchmark functions and compare the performance of GSAWSA with the original WSA, the original Bat Algorithm and the Hybrid Self-Adaptive Bat Algorithm (HSABA) [15], whose self-adaptiveness is powered by Differential Evolution (DE). We also compare with Particle Swarm Optimization (PSO), which moves in a mixture of random order and swarming patterns; PSO is one of the classical swarm search algorithms that often shows superior performance on standard benchmark functions. From our investigations, it is expected that parameter control by an entropy function in GSAWSA can offer further improvement. The self-adaptive method is a totally hands-free approach that lets the search evolve by itself, while parameter control is a guiding approach that steers the course of parameter changes during runtime.
The remainder of the paper is structured as follows: the original Wolf Search Algorithm and the published SAWSA [14] are briefly introduced in Section 2. The chaotic system entropy analysis and the Gaussian-guided parameter control method are discussed in Section 3, followed by Section 4, which presents the comparison experiments of both the self-adaptive method and the parameter control method with several optional DE functions. The paper ends with concluding remarks in Section 5.

3. Gaussian-Guided SAWSA Based on Information Entropy Theory

For the self-adaptive method, the parameter boundaries constitute crucial information [20]. Based on the parameter definitions, extensive testing was carried out to find the possible values or ranges for the parameters. Some validated parameter boundaries for SAWSA are shown in Table 2. In the previously proposed Self-Adaptive Wolf Search Algorithm, the parameter boundaries are the only limitations on the parameter updating. Clearly, this is not good enough, because ideal parameter control should follow consistent patterns such that changing a parameter does not degrade the algorithm's performance.
Table 2. Boundaries of behaviour control parameters.
Just like other classic probabilistic algorithms [21], a mathematical model of WSA is hard to analyse with formal proofs, given its stochastic nature. Therefore, we use extensive experiments to gain insights into WSA parameter control. To examine the effects of each parameter, the static step size s, the velocity factor α, the escape frequency p_a and the visual range γ are used as model variables. The results show that even a small change of the parameter limits can noticeably affect the performance, whereas when the parameter boundaries are well defined, the performance is hardly affected, thereby providing consistent optimization performance. In Figure 4, an example with the Ackley 1 function for D = 2 is shown. Each curve in this figure is the average convergence curve of 20 individual runs (experiment repetitions). Here, the static parameters are s = 1, α = 0.2 and p_a = 0.25. The parameter γ changes from one to Γ = 70 with a step size of one. As an example, we only evaluate the parameter γ here; the other parameters follow a similar pattern.
Figure 4. Convergence curves of WSA with γ as the variable.
In Figure 4, most of the 70 curves overlap because changing the visual range does not bring much improvement. Only in the range from γ = 1 to γ = 8 is the improvement obvious. It can be clearly seen that for the Ackley 1 function, updating γ over the whole definitional domain can be used but is not necessary. The best solution comes from updating γ within a reasonable range where a large improvement can be obtained with the least number of attempts. By analysing the distribution of the best fitness values with the corresponding γ values in Figure 5, we can conclude that the best improvement is obtained by approaching from the lower bound. This pattern appears in our experiments with other benchmark functions as well. Therefore, for the parameter γ, we can give a more reasonable updating domain from the experiments:
$$0 < \gamma \leq 2 \ln(\Gamma), \qquad \Gamma > 0$$
Figure 5. Best fitness value with each γ value.
Another pattern visible in Figure 5 is that the best parameter solution is always located within a small range. If we find a sufficiently suitable parameter value, the best way to update it is within a certain range, meaning that the next generated parameter should be guided by the previous one. Using this method, we can avoid unstable performance and save the time spent on random guesses. The challenge now is to choose a suitable entropy function to guide the parameter control approach.
However, in a probability-based algorithm, the current optimization state influences the results of the next generation, while the future move remains highly unpredictable. This model behaviour can be described by chaos theory [22], and the parameter update behaviour can be treated as typical chaotic behaviour: it looks like random updating, with uncertain, unrepeatable and unpredictable behaviour within a well-defined system [23]. Chaotic phenomena have been taken into account for population control, hybrid control and initialization control in metaheuristic algorithm studies [24] and other data science topics [25]. In our study, entropy theory is used to control the parameter self-update, for the reasons analysed below.
The individual parameter update behaviour is unpredictable by the updater; however, as for any chaotic system, its stability can be measured. Chaos and entropy theory have been used together in many schemes [26], and the entropy or entropy rate is an efficient measure of the stability of a chaotic system.
The logistic map is one of the most famous chaotic maps [27]. It is a polynomial mapping of degree two, also written as a recurrence relation. It is often cited as an archetypal example of how chaotic and complex a behaviour can emerge from a very simple non-linear dynamical equation. In this paper, the logistic map is used as an example. The map is defined by Equation (6) [28]:
$$f_\mu : [0,1] \to [0,1] \quad \text{given by} \quad x_{n+1} = f_\mu(x_n), \quad \text{where } f_\mu(x) = \mu x (1 - x) \tag{6}$$
This map is one of the simplest and most widely used maps in chaos research, and it is also very popular in swarm intelligence. For example, in water engineering, a chaos PSO using the logistic map provides a very stable solution for designing a flood hydrograph [29], and it was also used for a resource allocation problem in 2007 [30]. Equation (6) is the mathematical definition of the logistic map; it gives the possible iteration outcomes of a simple non-linear relation. To analyse the stability of this map, we can calculate its entropy using Equation (7):
$$H_L = -\sum_{S_L \in A} \Pr(S_L) \log_2 \Pr(S_L) \tag{7}$$
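As a sketch of how Equation (7) can be evaluated numerically, the following Python fragment (our illustration) iterates the logistic map for a given μ, bins the post-transient orbit into a histogram, and computes the Shannon entropy of the resulting distribution; the bin count and burn-in length are assumptions.

```python
import numpy as np

def logistic_entropy(mu, x0=0.3, n_iter=10000, n_burn=1000, bins=64):
    """Shannon entropy (Equation (7)) of a logistic map orbit for a given mu."""
    x = x0
    orbit = []
    for k in range(n_iter):
        x = mu * x * (1.0 - x)           # Equation (6)
        if k >= n_burn:                  # discard transient iterations
            orbit.append(x)
    hist, _ = np.histogram(orbit, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                         # avoid log2(0)
    return -np.sum(p * np.log2(p))

# Entropy rises as mu moves deeper into the chaotic regime.
for mu in (2.9, 3.5, 3.9, 4.0):
    print(mu, round(float(logistic_entropy(mu)), 3))
```

For μ in the periodic regime the orbit visits only a few bins and the entropy is near zero; in the chaotic regime the iterates spread out and the entropy grows, matching the qualitative behaviour in Figure 6b.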
The entropy evaluation for the logistic map and its bifurcation diagram are shown in Figure 6. It is clear that the most probable iteration solutions are located near the upper or lower bound of the definition domain. As for the entropy value, the larger it is, the wider the range of the distribution of iterates for the corresponding μ in the logistic map; in information-theoretic terms, this means greater information communication. The logistic map thus provides a usable way to drive the chaotic iteration. However, it is not the best choice for swarm intelligence, because many experiments show that the best solution is rarely located near the boundary. Therefore, we consider using another chaotic map, which can provide the same stability with a more reasonable iteration outcome.
Figure 6. Bifurcation diagram (a) and entropy value (b) of the logistic map.
The Gauss iterated map, popularly known as the Gaussian map or mouse map, is another type of nonlinear one-dimensional iterative map, given by the Gaussian function [31]:
$$x_{n+1} = \exp(-\alpha \cdot x_n^2) + \beta$$
With this function, the Gaussian map can be described as the equation:
$$G : \mathbb{R} \to \mathbb{R} \quad \text{defined by} \quad G(x) = e^{-\alpha x^2} + \beta$$
where α and β are constants. The bell-shaped Gaussian function is named after Johann Carl Friedrich Gauss, and its shape is similar to that of the logistic map. As the Gaussian map has two parameters, it provides more controllability over the iteration outcome, and its definition domain G : ℝ → ℝ can meet more needs than the logistic map. The next step is to choose the most stable and suitable model for use in parameter control. The goal for this step is to satisfy the stability requirement of a chaotic system while keeping a high entropy value for more information communication in the system.
The stability of the chaotic maps is defined by the theorem below [32]:
Theorem 1.
Suppose that the map $f_\mu(x)$ has a fixed point at $x^*$. Then, the fixed point is stable if:
$$\left| \frac{d}{dx} f_\mu(x^*) \right| < 1$$
and it is unstable if:
$$\left| \frac{d}{dx} f_\mu(x^*) \right| > 1$$
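To see Theorem 1 in action, here is a short worked check (our illustration, not from the paper) for the logistic map of Equation (6):

```latex
% Stability of the non-trivial fixed point of f_mu(x) = mu x (1 - x).
x^* = f_\mu(x^*) \;\Rightarrow\; x^* = 1 - \tfrac{1}{\mu} \quad (\mu > 1),
\qquad
\left|\frac{d}{dx} f_\mu(x^*)\right| = \left|\mu \, (1 - 2x^*)\right| = |2 - \mu|
```

Thus $|2 - \mu| < 1$ gives stability exactly for $1 < \mu < 3$, and the fixed point loses stability at $\mu = 3$, where the first period-doubling appears in the bifurcation diagram of Figure 6a.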
The stability analysis of the Gaussian map is shown below:
$$\int G(x)\,dx = \int \left( e^{-\alpha x^2} + \beta \right) dx = \sqrt{\frac{\pi}{\alpha}}$$
Using Theorem 1, we get the following stability result depending on the parameter values:
$$x^* \text{ is stable if } \sqrt{\frac{\pi}{\alpha}} < 1, \qquad x^* \text{ is unstable if } \sqrt{\frac{\pi}{\alpha}} > 1$$
From this analysis, the stability is related only to α; the Gaussian map moves into stable regions when α > π. Figure 7 shows the possible iterative outcomes for each β when α = 4 and α = 9, visualizing the effects of different α values. As shown in Figure 8, this map exhibits period-doubling and period-undoubling bifurcations. Considering the parameter iteration needs of our program, and based on entropy theory, the final parameters are set to α = 5.4 and β = 0.52, which provide both stability and sufficient inner information change.
Figure 7. Bifurcation diagram of a Gaussian map when α = 4 (a) and α = 9 (b).
Figure 8. Bifurcation diagram of a Gaussian map when α = 5 (a) and α = 5.4 (b).
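Bifurcation diagrams like those in Figures 7 and 8 can be generated with a few lines of Python. The sketch below is ours; the β sweep range, burn-in length and sample count are assumptions made for illustration.

```python
import numpy as np

def gauss_map_orbit(alpha, beta, x0=0.1, n_burn=400, n_keep=100):
    """Post-transient iterates of x_{n+1} = exp(-alpha * x_n**2) + beta."""
    x = x0
    for _ in range(n_burn):                  # discard the transient
        x = np.exp(-alpha * x * x) + beta
    orbit = []
    for _ in range(n_keep):                  # keep the attractor samples
        x = np.exp(-alpha * x * x) + beta
        orbit.append(x)
    return orbit

# Sweep beta at a fixed alpha; scatter-plotting the (beta, x) pairs
# reproduces a bifurcation diagram like Figure 8b.
alpha = 5.4
points = [(beta, x)
          for beta in np.linspace(-1.0, 1.0, 401)
          for x in gauss_map_orbit(alpha, beta)]
```

Repeating the sweep for α = 4 and α = 9 shows how larger α widens the stable window, consistent with the α > π condition derived above.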
Using this Gaussian map, the new parameter iterative method will be updated from:
$$\mathrm{temp}_{\mathrm{para}} = \mathrm{para}_{\mathrm{lowerbound}} + \mathrm{rand} \cdot \left( \mathrm{para}_{\mathrm{upperbound}} - \mathrm{para}_{\mathrm{lowerbound}} \right)$$
to:
$$\mathrm{temp}_{\mathrm{para}} = \exp\left( -\alpha \cdot \mathrm{para}^2 \right) + \beta$$
where α and β are set as above to generate modified parameter values for WSA within the specified boundaries. If this equation yields negative values, we simply use the absolute value for the parameter update, as all the parameters used should be positive.
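Put together, the change from the random updater to the Gaussian-guided updater can be sketched as follows (Python, our illustration; the function names are hypothetical, and the final clipping to the Table 2 boundaries is our addition for safety):

```python
import numpy as np

ALPHA, BETA = 5.4, 0.52        # Gauss map constants chosen via the entropy analysis

def random_update(lower, upper, rng):
    """Old SAWSA rule: a blind uniform guess inside the boundaries."""
    return lower + rng.random() * (upper - lower)

def gaussian_guided_update(para, lower, upper):
    """New rule: the next value is generated from the current one by the
    Gauss map; negative outputs are replaced by their absolute value."""
    new = abs(np.exp(-ALPHA * para * para) + BETA)
    return float(np.clip(new, lower, upper))   # keep within Table 2 bounds
```

Because each new value is derived from the previous one through the Gauss map, successive parameters stay within the small high-quality neighbourhood observed in Figure 5 instead of jumping randomly across the whole domain.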
Our experiments show that this parameter control method offers more stability and better performance than the previously proposed SAWSA. The parameter control mechanism is added to the original random selection-based SAWSA, as it has a much better performance than the core-guided DE, which relies strongly on the global best agent (this experimental result is shown in the next section). With this modification, the final version of the Gaussian-guided SAWSA is obtained.
To show the advantage of the entropy-based Gaussian-guided parameter update method, which avoids pure randomness, we modified all the optional DE functions into self-adaptive methods. By adapting all these functions, the results can show whether the improvement is caused by the adaptive function or by the parameter-update guiding method. We mix two different solution selection methods with DE; the combinations are shown in Table 3, and the two self-adaptive types of methods are compared in this paper. Hence, we have a list of Differential Evolution crossover functions, such as the DE1, DE2 and DE3 functions. In these functions, the current global best solution is taken into consideration; this approach is supposed to increase the stability of the system because the direction of movement is calculated in relation to the location of the current global best. The calculation in RDE1, RDE2, RDE3 and RDE4 is only done on the chosen search agents, so it does not overload the system. The solution best_selected is the one with the best current fitness among the chosen ones. In this way, more randomness is added to the algorithm, and wider steps are taken for more optional behaviour. So far, this has been tested and works well with semi-swarm types of algorithms, such as the bat and wolf algorithms; it is, however, not known whether it may become unstable when coupled with other search algorithms. A sketch of the two selection flavours is given after Table 3.
Table 3. The names of Differential Evolution (DE) functions that are implemented with various solution selection methods.
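To make the naming in Table 3 concrete, here is a sketch (ours; the exact mutation formulas used in the paper are not reproduced here) of the two selection flavours for a DE/1-style donor vector: the core-guided variant builds the donor around the current global best agent, while the random-selection variant (the RDE family) uses only randomly chosen agents, with best_selected being the fittest among those chosen.

```python
import numpy as np

rng = np.random.default_rng()

def de_core_guided(pop, fitness, F=0.5):
    """Core-guided flavour (DE family): the donor vector is built around
    the current global best agent, which tends to stabilise the search."""
    best = pop[np.argmin(fitness)]
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return best + F * (pop[r1] - pop[r2])

def de_random_selected(pop, fitness, F=0.5):
    """Random-selection flavour (RDE family): only randomly chosen agents
    take part; best_selected is the fittest among the chosen ones."""
    chosen = list(rng.choice(len(pop), size=3, replace=False))
    b = min(chosen, key=lambda i: fitness[i])    # best_selected
    chosen.remove(b)
    return pop[b] + F * (pop[chosen[0]] - pop[chosen[1]])
```

The trade-off is visible directly in the code: the core-guided donor always pulls towards the global best, whereas the random-selection donor injects more randomness and takes wider steps.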

4. Experimental Results

To validate the efficiency of the GSAWSA, we compare it with other well-established and efficient swarm intelligence algorithms, namely BA, HSABA and PSO. In order to show that the improvement is not caused merely by the different DE self-adaptive functions, we also use all the optional DE functions in the experiments and select the most suitable one. Afterwards, we test whether the Gaussian-guided method brings better performance for the considered problems.

4.1. SAWSA Comparative Experiments

The main purpose of this part is to show that the SAWSA performs better than the other algorithms in most cases, and then to determine the most suitable DE self-adaptive method for the subsequent Gaussian-guided modification. The outcomes of these experiments also determine the comparison group of the next experiment. Fourteen standard benchmark functions that are typically used for benchmarking swarm algorithms are used for a credible comparison [33]. They are shown in Table 4.
Table 4. Standard benchmark functions.
The objective of the experiments is to compare the suitability of integrating the DE functions listed in Table 3 into various swarm search algorithms, namely SAWSA, WSA, BA, HSABA and PSO. The algorithm combinations are benchmarked with the standard benchmark functions listed in Table 4. The default parameter values used are those suggested in the original papers; the self-adaptive approaches are an exception, as they are designed to be parameter-free. Each benchmark function was tested in dimensions of increasing complexity: 2, 5, 10, 20, 30 and 50. The population size is kept at 20, and 10,000 iterations are run for each case. To achieve consistent results, for each function and each dimension, the program was run 50 times for 10,000 cycles, following [34]. The average curves are then computed and shown as the final result.
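The repetition protocol can be summarized in a short Python sketch (ours; the signature of algorithm is hypothetical, assumed to return the best-so-far fitness recorded at every iteration):

```python
import numpy as np

def average_convergence(algorithm, objective, dim,
                        runs=50, n_iter=10_000, pop=20):
    """Run one (function, dimension) case 50 times and average the
    per-iteration convergence curves, as described above."""
    curves = np.empty((runs, n_iter))
    for r in range(runs):
        curves[r] = algorithm(objective, dim, pop_size=pop, n_iter=n_iter)
    return curves.mean(axis=0)      # the reported average curve
```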
When the search space dimension is low, all the algorithms achieve impressive results; the differences are too small to show in either figures or tables. Therefore, for a clearer view, we only present the results for D = 30 in Table 5 and for D = 50 in Table 6. In the result tables, the best results are coloured in bold red, and the second-best results in bold black.
Table 5. Result data when D = 30. HSABA, Hybrid Self-Adaptive Bat Algorithm; SAWSA, Self-Adaptive Wolf Search Algorithm.
Table 6. Result data when D = 50.
Figure 9, Figure 10 and Figure 11 show box-plots of comparisons on some standard benchmark functions for the same or different dimensions. The black line in the middle of each box-plot indicates the average baseline; the inner box spans the middle half of the result range. For easy visual comparison on the same axis, the ranges of all the data have been normalized.
Figure 9. Alpine function (a) and Levy 3 test function (b) when D = 50.
Figure 10. Rastrigin function (a) and penalty function (b) when D = 50.
Figure 11. Ackley function when D = 30 (a) and D = 50 (b).

4.2. Gaussian-Guided Parameter Control Comparative Experiments

In this section, the purpose of the experiment is to demonstrate the efficiency of the entropy-guided parameter control mechanism using the Gaussian function, substantiating the expected benefits for SAWSA. From the previous section, we can see clearly that the random selection method is more suitable as a self-adaptive method for WSA. Therefore, in this experiment, to reduce redundancy, we only add the entropy-guided parameter control mechanism using the Gaussian function to the randomization part of the DE self-adaptive WSA with the four functions listed in Table 3, referred to as RDE1, RDE2, RDE3 and RDE4. For comparison, the corresponding entropy-guided SAWSA variants with the Gaussian function in the searching part are generated and referred to as GRDE1, GRDE2, GRDE3 and GRDE4 in this experiment. We again use the 14 benchmark functions from Table 4, tested for the dimensions 2, 5, 10, 20, 30 and 50. The maximum number of generations and the population size are set to gen = 10,000 and pop = 20, respectively. For consistent results and a fair comparison, each case was run 50 times, and we use the averages for the final data analysis.
When the dimension is low, all algorithms perform similarly, and the algorithmic enhancement seems unnecessary. However, when D increases to a large number (which means the objective function is very complex), the enhancement provided by this entropy-based Gaussian-guided parameter control method can be clearly observed. Therefore, we only show the experimental results for D = 30 and D = 50 in Table 7 and Table 8. The best results are marked in red.
Table 7. Gaussian-guided SAWSA comparison result data when D = 30.
Table 8. Gaussian-guided SAWSA comparison result data when D = 50.
In Table 7 and Table 8, we can see that most of the best results are obtained by the entropy-based Gaussian-guided SAWSA. The entropy-based Gaussian-guided methods not only enhance the performance but also stabilize it. In the box charts shown in Figure 12 and Figure 13, the boxes show the ranges of the middle half of the result data, while the black lines show the averages. For better visual comparison, all the data are normalized. In most cases, the entropy-guided SAWSA produces better performance than the original SAWSA, and every entropy-guided algorithm has a much smaller box in the box chart than its SAWSA comparison algorithm. For a metaheuristic algorithm, this improvement is significant, as stability is a very important attribute that is often very hard to achieve.
Figure 12. Deb 1 function (a) and Levy 3 test function (b) when D = 50.
Figure 13. Michalewicz test function (a) and Penalty 1 function (b) when D = 50.
For the statistical tests, we use the p-values as the measurement and the WSA as the control method. Due to page limitations, we cannot show all the comparisons with other control methods; therefore, we focus on the improvement over the original WSA. The h and p-values are shown in Table 9. In line with the "no free lunch" theorem [35], the improvement comes at the cost of more computational resources than the original WSA. In Table 10, the CPU times are listed for all the algorithms with D = 30 and D = 50. The CPU time is defined as the total run time for each algorithm over 1000 generations with 20 search agents.
Table 9. h and p-value for the statistical test with WSA as the control method.
Table 10. CPU time comparison with 20 search agents.

5. Conclusions

Self-adaptive methods are effective for enabling parameter-free metaheuristic algorithms; they can even improve the performance because the parameter values are self-tuned towards optimality (hence optimal results). Based on the authors' prior work on hybridizing self-adaptive DE functions into the bat algorithm, a similar but new self-adaptive algorithm called SAWSA was developed. However, due to the lack of stability control and the missing knowledge of the inner connection between the parameters and the performance, the SAWSA was not sufficiently satisfactory for us. In this paper, the self-adaptive method is considered from the perspective of entropy theory for metaheuristic algorithms, and we developed an improved parameter control called the Gaussian-guided self-adaptive method. After designing this method, denoted as GSAWSA, we configured test experiments with fourteen standard benchmark functions. Based on the results, the following conclusions can be drawn.
Firstly, the self-adaptive approach is proven to be very efficient for metaheuristic algorithms, especially those with high computational cost. Using this method, the parameter tuning phase can be removed in real-world usage. However, as the self-adaptive modification increases the complexity of the algorithm, the computational cost of the self-adaptive method must be taken into consideration.
Secondly, comparing all the optional self-adaptive DE methods in the experiment, the random selection type is the better choice for SAWSA. However, although the average outcome is improved, the stability clearly decreases when more randomness is introduced into the algorithm. How to balance the random effects and improve the stability is, in general, a difficult challenge in metaheuristic algorithm studies.
Thirdly, the parameter-performance changing pattern can be considered a very important feature of metaheuristic algorithms. Normally, researchers use the performance or situation feedback as an update reference; however, how the parameters influence the performance usually remains outside of consideration. By analysing this influence, a better self-adaptive updating method can be developed, and better performance can be achieved with fewer computing resources.
In conclusion, a parameter-free metaheuristic model in which a Gaussian map is fused with the WSA algorithm is proposed. It has the advantage that the parameter values need not remain static; the parameters tune themselves as the search operation proceeds. Our experimental results show that the new model often improves on the performance of the naive version of WSA, as well as of other similar swarm algorithms. As future work, we want to investigate in depth the computational analysis of how the Gaussian map contributes to refining the performance and preventing the search from converging prematurely to local optima. This analysis should be done together with the runtime cost as well: the experiments reported in this paper show that there is a certain overhead when the Gaussian map is fused with WSA; it extends the performance, but the extra computation consumes additional resources. In the future, the GSAWSA should be enhanced with the capability of balancing the runtime cost against the best possible solution quality.

Acknowledgments

The authors are thankful for the financial support from the research grants: (1) MYRG2016-00069, titled “Nature-Inspired Computing and Metaheuristics Algorithms for Optimizing Data Mining Performance” offered by RDAO/FST (Research & Development Administration Office/Faculty of Science and Technology), University of Macau and the Macau SAR government; (2) FDCT/126/2014/A3, titled “A Scalable Data Stream Mining Methodology: Stream-based Holistic Analytics and Reasoning in Parallel” offered by FDCT (The Science and Technology Development Fund) of the Macau SAR government.

Author Contributions

Qun Song has written the source codes, implemented the experiments and collected the results. The main contribution of Simon Fong is the development direction of the metaheuristics algorithms. Suash Deb and Thomas Hanne contributed to the discussion and analysis of the results. Qun Song has written the paper. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Processor: Intel(R) Core(TM) i7-4790 CPU @3.60 GHz
Installed memory (RAM): 8.00 GB
System: Windows 7 Enterprise Service Pack 1 64-bit
Development environment: MATLAB R2014a 64-bit

References

  1. Brambilla, M.; Ferrante, E.; Birattari, M.; Dorigo, M. Swarm robotics: A review from the swarm engineering perspective. Swarm Intell. 2013, 7, 1–41.
  2. Hanne, T.; Dornberger, R. Computational intelligence. In Computational Intelligence in Logistics and Supply Chain Management; Springer: Cham, Switzerland, 2017; pp. 13–41.
  3. Fikar, C.; Hirsch, P. Home health care routing and scheduling: A review. Comput. Oper. Res. 2017, 77, 86–95.
  4. Fister, I., Jr.; Yang, X.S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv 2013, arXiv:1307.4186.
  5. Senthilnath, J.; Omkar, S.; Mani, V.; Tejovanth, N.; Diwakar, P.; Shenoy, A.B. Hierarchical clustering algorithm for land cover mapping using satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 762–768.
  6. Senthilnath, J.; Omkar, S.; Mani, V.; Karnwal, N.; Shreyas, P. Crop stage classification of hyperspectral data using unsupervised techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 861–866.
  7. Liao, T.; Socha, K.; de Oca, M.A.M.; Stützle, T.; Dorigo, M. Ant colony optimization for mixed-variable optimization problems. IEEE Trans. Evol. Comput. 2014, 18, 503–518.
  8. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178.
  9. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
  10. Połap, D.; Wozniak, M. Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and Dynamic Birth and Death Mechanism. Symmetry 2017, 9, 203.
  11. Tang, R.; Fong, S.; Yang, X.S.; Deb, S. Wolf search algorithm with ephemeral memory. In Proceedings of the 2012 Seventh International Conference on Digital Information Management (ICDIM), Macau, China, 22–24 August 2012; pp. 165–172.
  12. Senthilnath, J.; Das, V.; Omkar, S.; Mani, V. Clustering Using Levy Flight Cuckoo Search; Springer: New Delhi, India, 2013.
  13. Senthilnath, J.; Kulkarni, S.; Benediktsson, J.A.; Yang, X.S. A novel approach for multispectral satellite image classification based on the bat algorithm. IEEE Geosci. Remote Sens. Lett. 2016, 13, 599–603.
  14. Song, Q.; Fong, S.; Tang, R. Self-Adaptive Wolf Search Algorithm. In Proceedings of the 2016 5th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI), Kumamoto, Japan, 10–14 July 2016; pp. 576–582.
  15. Fister, I.; Fong, S.; Brest, J.; Fister, I. A novel hybrid self-adaptive bat algorithm. Sci. World J. 2014, 2014, 709738.
  16. Chih, M. Self-adaptive check and repair operator-based particle swarm optimization for the multidimensional knapsack problem. Appl. Soft Comput. 2015, 26, 378–389.
  17. Fister, I.; Yang, X.S.; Brest, J.; Fister, I., Jr. Memetic self-adaptive firefly algorithm. In Swarm Intelligence and Bio-Inspired Computation: Theory and Applications; Elsevier: Amsterdam, The Netherlands, 2013; pp. 73–102.
  18. Beyer, H.G.; Deb, K. Self-Adaptive Genetic Algorithms with Simulated Binary Crossover; Technical Report; Universität Dortmund: Dortmund, Germany, 2001.
  19. Storn, R.; Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  20. Shi, Y.; Eberhart, R.C. Parameter selection in particle swarm optimization. In International Conference on Evolutionary Programming; Springer: Berlin/Heidelberg, Germany, 1998; pp. 591–600.
  21. Fradkov, A.L.; Evans, R.J. Control of chaos: Survey 1997–2000. IFAC Proc. Vol. 2002, 35, 131–142.
  22. Devaney, R.L.; Siegel, P.B.; Mallinckrodt, A.J.; McKay, S. A first course in chaotic dynamical systems: Theory and experiment. Comput. Phys. 1993, 7, 416–417.
  23. Gandomi, A.; Yang, X.S.; Talatahari, S.; Alavi, A. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98.
  24. Hu, W.; Liang, H.; Peng, C.; Du, B.; Hu, Q. A hybrid chaos-particle swarm optimization algorithm for the vehicle routing problem with time window. Entropy 2013, 15, 1247–1270.
  25. Hou, L.; Gao, J.; Chen, R. An Information Entropy-Based Animal Migration Optimization Algorithm for Data Clustering. Entropy 2016, 18, 185.
  26. Chen, Y.L.; Yau, H.T.; Yang, G.J. A maximum entropy-based chaotic time-variant fragile watermarking scheme for image tampering detection. Entropy 2013, 15, 3170–3185.
  27. Li, S.; Chen, G.; Mou, X. On the dynamical degradation of digital piecewise linear chaotic maps. Int. J. Bifurc. Chaos 2005, 15, 3119–3151.
  28. Kanso, A.; Smaoui, N. Logistic chaotic maps for binary numbers generations. Chaos Solitons Fractals 2009, 40, 2557–2568.
  29. Dong, S.F.; Dong, Z.C.; Ma, J.J.; Chen, K.N. Improved PSO algorithm based on chaos theory and its application to design flood hydrograph. Water Sci. Eng. 2010, 3, 156–165.
  30. Wang, S.; Meng, B. Chaos particle swarm optimization for resource allocation problem. In Proceedings of the 2007 IEEE International Conference on Automation and Logistics, Jinan, China, 18–21 August 2007; pp. 464–467.
  31. Bruin, H.; Troubetzkoy, S. The Gauss map on a class of interval translation mappings. Isr. J. Math. 2003, 137, 125–148.
  32. Lynch, S. Nonlinear discrete dynamical systems. In Dynamical Systems with Applications Using Maple; Springer: Basel, Switzerland, 2010; pp. 263–295.
  33. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194.
  34. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010.
  35. Yang, X.S. Swarm-based metaheuristic algorithms and no-free-lunch theorems. In Theory and New Applications of Swarm Intelligence; InTech: Rijeka, Croatia, 2012.
