Article

An Improved Squirrel Search Algorithm for Global Function Optimization

School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China
*
Author to whom correspondence should be addressed.
Algorithms 2019, 12(4), 80; https://doi.org/10.3390/a12040080
Submission received: 10 March 2019 / Revised: 11 April 2019 / Accepted: 12 April 2019 / Published: 17 April 2019

Abstract

An improved squirrel search algorithm (ISSA) is proposed in this paper. The proposed algorithm contains two search methods: the jumping search method and the progressive search method. The method used at each point in the evolutionary process is selected automatically through a linear regression selection strategy, which enhances the robustness of the squirrel search algorithm (SSA). In the jumping search method, the ‘escape’ operation develops the search space sufficiently and the ‘death’ operation further explores the developed space, which balances the development and exploration abilities of SSA. In the progressive search method, the mutation operation fully preserves the current evolutionary information and pays more attention to maintaining the population diversity. Twenty-one benchmark functions are selected to test the performance of ISSA. The experimental results show that the proposed algorithm improves convergence accuracy, accelerates convergence speed and maintains population diversity. A statistical test proves that ISSA has significant advantages over SSA. Furthermore, compared with five other intelligence evolutionary algorithms, the experimental results and statistical tests also show that ISSA has obvious advantages in convergence accuracy, convergence speed and robustness.

1. Introduction

Optimization is one of the most common problems in the engineering field. With the development of new technology, the problems that need to be optimized have gradually become large-scale, multi-peak and nonlinear. The intelligence evolutionary algorithm is a mature global optimization method with high robustness and wide applicability. Because the evolutionary process is not constrained by the search space and does not require other auxiliary information, intelligence evolutionary algorithms can deal effectively with complex problems that are too difficult to be solved by traditional optimization algorithms [1,2]. The applications of intelligence evolutionary algorithms cover system control, machine design and engineering planning, for example [3,4,5,6,7].
The intelligence evolutionary algorithms can be divided into the evolutionary heuristic algorithms, the physical heuristic algorithms and the group heuristic algorithms, according to their inspiration. The evolutionary heuristic algorithms originate from the genetic evolution process, with the representative algorithms described as follows: The genetic algorithm imitates Darwin’s theory of natural selection and finds the optimal solution by selection, crossover and mutation [8]. Similarly, the essence of the differential evolutionary algorithm is the genetic algorithm based on real coding; the mutation operation modifies each individual according to the difference vectors of the population [9]. In the covariance-matrix adaptation evolution strategy, the direction of the mutation steps of a population is directly described by the covariance matrix, where the search range of the next generation is increased or decreased adaptively. The individuals produced by sampling are optimized through the iterative loop [10]. The clonal selection algorithm is based on the clonal selection theory; the fitness value corresponds to the cell affinity, and the optimization process imitates the affinity maturation process of cells with low antigen affinity [11]. Given that human beings have higher survivability because they are good at observing and drawing experience from others’ habits, the social cognitive optimization algorithm was proposed, with better solutions being selected by the imitating process and new solutions being produced by the observing process [12]. Imitating the toning process of musicians, the melody search aims at finding the best melody of continuity; the harmony memory considering rate controls the search range of each solution (harmony) and the pitch adjusting rate produces a local perturbation of the new solution [13]. The teaching-learning-based optimization algorithm was proposed by imitating the teachers’ teaching and the students’ learning process.
The teacher is the individual with the best grade (fitness value) and the other individuals in the population are students. In order to improve the grades of the whole class, each student studies from the teacher in the teaching stage and the students learn from each other in the learning stage [14]. There are three relationships among living things: mutualism, commensalism and parasitism; creatures can benefit from any of these relationships. The symbiotic organisms search was proposed according to this phenomenon. Each individual interacts with other individuals during the optimization and the better individuals are retained after each interaction [15]. The mouth brooding fish algorithm simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. The proposed algorithm uses the movement, dispersion and protection behavior of a mouth brooding fish as an update mode, and the individuals in the algorithm are updated after these three stages to find the best possible answer [16].
The physical heuristic algorithms are inspired by physical phenomena, with the representative algorithms as follows: The freedom of molecules increases after a solid melts, and the temperature needs to drop slowly to return to stable solids with minimum energy. Simulated annealing takes the fitness value as the energy of the solid, with the energy decreasing gradually as the optimization proceeds and the optimal solution being found [17]. The gravitational search algorithm is based on the law of universal gravitation: for each individual, the fitness value represents its resultant force produced by all the individuals in the population [18]. The magnetic optimization algorithm is inspired by the theory of the magnetic field, where the resultant forces of individuals are changed by the field strength and the distance among individuals. The acceleration, velocity and positions of individuals are also updated, with the individuals reaching the optimum values gradually [19]. Considering the refraction that occurs when light travels from a light-scattering medium to a denser medium, the ray optimization algorithm was proposed. For each individual, the normal vector is determined by its optimal solution and the global optimal solution; the optimal solution can be found with the exit rays close to the normal [20]. The kinetic gas molecules optimization algorithm takes the energy of the gas as the fitness value. When the pressure remains unchanged and the temperature decreases, the molecules gradually accumulate at the position where the temperature is the lowest and the kinetic energy is the smallest in the container [21]. Inspired by the physical phenomenon of water evaporation, the water evaporation optimization algorithm was proposed. The factors that affect the water evaporation rate are taken as the fitness values. According to the water evaporation rate model, the evaporation probability matrix is considered as the individual renewal probability.
Considering that the aggregated forms of water molecules are different, the algorithm is divided into a monolayer evaporation phase in the early evolutionary stage and a droplet evaporation phase in the later evolutionary stage [22]. The lightning attachment procedure optimization algorithm simulates the lightning formation process, which takes the test points between cloud and ground as individuals and the corresponding electrical fields represent fitness values. The three evolutionary operations—downward pilot, upward pilot and branch fading—imitate the downward leader movement, the upward leader propagation and the discharge of lightning, respectively [23].
The group heuristic algorithms mainly simulate biological habits in nature. The representative algorithms are as follows: The ant colony optimization algorithm was proposed according to the way that ants leave pheromones on their path during movement, with better paths having more pheromones, and thus better paths have greater possibilities to be chosen by ants. As a result, more and more pheromones will be left on those paths, and the optimal solution will be found with the increasing concentration of pheromones [24]. The particle swarm optimization algorithm is inspired by the behavior of birds seeking food. For each individual, the position is updated by its current speed, its optimal position and the global best position [25]. The artificial bee colony algorithm was proposed by imitating honeybee foraging behavior. The whole population is divided into three groups: the leading bees, the following bees and the detecting bees. The leading bees are responsible for producing a new honey source, while the following bees search greedily near better honey sources. If the quality of the honey source remains unchanged after many iterations, the leading bees will change to detecting bees and continue to search for a high quality honey source [26]. The social spider optimization regards the whole search space as the spiders’ attached web, with the spiders’ positions as the possible solutions of the optimization problem, and the corresponding weights representing the fitness values of individuals. Female and male subpopulations produce offspring through their respective cooperation and mating behavior [27]. The selfish herd theory proves that when animals encounter predators, each individual increases its survival possibilities by aggregating with other individuals in the herd, whether this approach affects the survival probability of other individuals or not. 
According to this theory, the selfish herd optimizer was proposed, wherein each individual updates the location in this way to obtain a greater probability of survival [28]. Inspired by the foraging process of hummingbirds, the hummingbirds optimization algorithm was proposed. The hummingbird can search according to its cognitive behavior without interacting with other individuals in a self-searching phase. In addition to searching through experience, hummingbirds can also search by using various dominant individuals as guidance information in a guided-search phase, with the two phases cooperating to promote the population evolution [29].
A large number of experimental results show that the intelligence optimization algorithms can obtain an exact or approximate optimal solution to large-scale optimization problems in a limited time frame. However, there are also disadvantages, such as the convergence speed not being fast enough and the algorithms easily falling into a local optimum. Therefore, scholars have put forward various new intelligence evolutionary algorithms.
In 2018, the squirrel search algorithm (SSA) [30] was proposed by Jain et al. The algorithm imitates the dynamic jumping strategies and the gliding characteristics of flying squirrels. The mathematical model mainly consists of the location of a food source and the appearance of predators. The whole optimization process includes the summer phase and the winter phase. However, similar to other intelligent evolutionary algorithms, SSA also has some shortcomings, such as low convergence accuracy and slow convergence speed [31,32]. In SSA, the global search ability of the single winter search method is insufficient, which makes the algorithm easily fall into a local optimum. Furthermore, the random summer search method decreases the convergence speed, and the convergence precision is also reduced. In order to improve the convergence precision and the convergence speed, this paper proposes an improved squirrel search algorithm (ISSA). The proposed algorithm includes the jumping search method and the progressive search method. When the squirrels meet with predators, the ‘escape’ and ‘death’ operations are introduced into the jumping search method and the ‘mutation’ operation is introduced into the progressive search method. ISSA also chooses the suitable search method through the linear regression selection strategy during the optimization process. Twenty-one benchmark functions are used to evaluate the performance of the proposed algorithm. The experiments contain three parts: the influence of the parameter on ISSA, the comparison of the proposed methods with SSA, and the comparison of ISSA with five other improved evolutionary algorithms.
The remaining sections are arranged as follows: Section 2 reviews the basic SSA. Section 3 presents the proposed ISSA. The experiments and results analysis are reported in Section 4. Section 5 concludes this paper.

2. The Squirrel Search Algorithm

The standard SSA updates the positions of individuals according to the current season, the type of individuals and whether predators appear [30].

2.1. Initialize the Population

Assuming that the size of the population is N and that the upper and lower bounds of the search space are FSU and FSL, N individuals are randomly produced according to Formula (1):
FSi = FSL + rand(1, D) × (FSU − FSL)
FSi represents the i-th individual (i = 1, …, N); rand(1, D) is a vector of D random numbers between 0 and 1; D is the dimension of the problem.
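As an illustration, Formula (1) can be sketched in Python (a minimal sketch; the function name and the NumPy usage are my own, the constants are hypothetical sample values):

```python
import numpy as np

def init_population(n, dim, fs_l, fs_u, rng=None):
    """Randomly produce N individuals inside [FS_L, FS_U], per Formula (1)."""
    rng = np.random.default_rng(rng)
    # Each component: lower bound plus a uniform random fraction of the range.
    return fs_l + rng.random((n, dim)) * (fs_u - fs_l)

# Example: 30 squirrels in a 10-dimensional search space bounded by [-5, 5].
pop = init_population(30, 10, -5.0, 5.0, rng=0)
```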

2.2. Classify the Population

Taking the minimization problem as an example, SSA requires that there is only one squirrel at each tree; assuming the total number of squirrels is N, there are therefore N trees in the forest. The N trees contain one hickory tree and Nfs (1 < Nfs < N) acorn trees; the others are normal trees which have no food. The hickory tree is the best food resource for the squirrels and the acorn tree takes second place. Nfs can differ depending on the problem. Ranking the fitness values of the population in ascending order, the squirrels are divided into three types: individuals located at hickory trees (Fh), individuals located at acorn trees (Fa) and individuals located at normal trees (Fn). Fh refers to the individual with the minimum fitness value, Fa contains the individuals whose fitness values rank from 2 to Nfs + 1 and the remaining individuals are noted as Fn. In order to find the better food resource, the destination of Fa is Fh; the destinations of Fn are randomly determined as either Fa or Fh.
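This classification can be sketched in Python (the function name is mine; argsort realizes the ascending ranking for a minimization problem):

```python
import numpy as np

def classify(fitness, n_fs=3):
    """Split squirrel indices into hickory (best), acorn (next n_fs) and normal trees."""
    order = np.argsort(fitness)  # ascending fitness: best individual first
    f_h = order[0]               # single best individual (hickory tree)
    f_a = order[1:n_fs + 1]      # ranks 2 .. n_fs + 1 (acorn trees)
    f_n = order[n_fs + 1:]       # the rest (normal trees)
    return f_h, f_a, f_n
```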

2.3. Update the Position

The individuals update their positions by gliding to the hickory trees or acorn trees. The specific updating formulas are shown as Formulas (2) and (3), respectively:
FSi^(t+1) = FSi^t + dg × Gc × (Fh^t − FSi^t)   if r > Pdp;   random location otherwise
FSi^(t+1) = FSi^t + dg × Gc × (Fai^t − FSi^t)   if r > Pdp;   random location otherwise
r is a random number between 0 and 1; Pdp, valued at 0.1, represents the predator appearance probability. If r > Pdp, no predator appears, the squirrels glide in the forest to find food and the individuals are safe; if r ≤ Pdp, predators appear, the squirrels are forced to narrow their scope of activities, the individuals are endangered and their positions are relocated randomly (the specific method is introduced in Section 2.4). t represents the current iteration; Gc is a constant with the value of 1.9; Fai (i = 1, 2, …, Nfs) is an individual randomly selected from Fa; dg is the gliding distance, which can be calculated by Formula (4):
dg = hg / (tan(φ) × sf)
hg is a constant valued 8; sf is a constant valued 18; tan(φ) is the tangent of the gliding angle φ, which can be calculated by Formula (5):
tan(φ) = D / L
D is the drag force and L is the lift force, which can be calculated by Formulas (6) and (7), respectively:
D = (1/2) ρ V² S C_D
L = (1/2) ρ V² S C_L
ρ, V, S and C_D are constants equal to 1.204 kg·m⁻³, 5.25 m·s⁻¹, 154 cm² and 0.6, respectively; C_L is a random number between 0.675 and 1.5.
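The gliding-distance computation of Formulas (4)–(7) can be sketched as follows (a minimal Python sketch; the function name is mine and the constants are those listed above — note that S cancels in the drag/lift ratio, so its unit does not matter here):

```python
import math
import random

def gliding_distance(rng=random):
    """Sample one gliding distance d_g via Formulas (4)-(7) of Section 2.3."""
    rho, v, s = 1.204, 5.25, 154.0        # air density, speed, surface area
    c_d = 0.6                             # drag coefficient (constant)
    c_l = rng.uniform(0.675, 1.5)         # lift coefficient (random)
    drag = 0.5 * rho * v ** 2 * s * c_d   # Formula (6)
    lift = 0.5 * rho * v ** 2 * s * c_l   # Formula (7)
    tan_phi = drag / lift                 # Formula (5)
    h_g, s_f = 8.0, 18.0                  # loss height and scaling factor
    return h_g / (tan_phi * s_f)          # Formula (4)
```

With these constants the returned dg always lies between 8/(18 × 0.6/0.675) ≈ 0.5 and 8/(18 × 0.6/1.5) ≈ 1.11.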

2.4. Seasonal Transition Judgement and Random Updating

At the beginning of each iteration, the standard SSA requires that the whole population is in winter, which means all the individuals are updated in the way introduced in Section 2.3. When all the individuals have been updated, whether the season changes is judged according to Formulas (8) and (9):
Sc^t = sqrt( Σ_{k=1..D} (Fai,k^t − Fh,k^t)² ),   i = 1, 2, …, Nfs
Smin = 10e−6 / (365)^(t/(T/2.5))
T is the maximum number of iterations. If Sc^t < Smin, winter is over and the season turns to summer; otherwise, the season is unchanged. When the season turns to summer, all the individuals who glide to Fh stay at the updated location, and all the individuals who glide to Fa and do not meet with predators relocate their positions by Formula (10):
FSi_new^(t+1) = FSL + Lévy(n) × (FSU − FSL)
Lévy is a random walk model whose steps obey the Lévy distribution and can be calculated by Formula (11):
Lévy(x) = 0.01 × (ra × σ) / |rb|^(1/β)
β is a constant valued 1.5; σ can be calculated by Formula (12):
σ = [ Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) ]^(1/β),   where Γ(x) = (x − 1)!
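As a sketch, the seasonal constant of Formula (9) and one Lévy step from Formulas (11) and (12) could be computed as follows (function names are mine; math.gamma stands in for Γ; reading 10e−6 literally as Python's 10e-6 and taking ra, rb as normally distributed are my assumptions):

```python
import math
import numpy as np

def s_min(t, t_max):
    """Seasonal constant Smin of Formula (9); decays as iteration t advances.

    10e-6 is read literally as the Python float 10e-6 (an assumption).
    """
    return 10e-6 / 365.0 ** (t / (t_max / 2.5))

def levy_step(rng=None, beta=1.5):
    """One Levy-flight step per Formulas (11) and (12)."""
    rng = np.random.default_rng(rng)
    sigma = ((math.gamma(1 + beta) * math.sin(math.pi * beta / 2))
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    r_a, r_b = rng.standard_normal(2)  # assumed normal, as in common Levy-flight practice
    return 0.01 * r_a * sigma / abs(r_b) ** (1 / beta)
```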
In conclusion, the procedure of the standard SSA is shown in Figure 1:

3. The Improved Squirrel Search Algorithm

3.1. Motivation

A large number of experiments have proven that different evolutionary strategies are suitable for different problems, and that the requirements also differ as the evolution develops. In the early stage of optimization, individuals are distributed dispersedly in the search space, and there are still large distances among the individuals with better fitness values. Thus it is important to maintain the diversity of the population to develop the search space sufficiently. Meanwhile, the convergence speed should be improved as well. In the later stage of optimization, the differences among the individuals become increasingly small, and thus the main work is to search around the elite individuals to improve the convergence speed. In addition, in order to prevent the algorithm from falling into a local optimum, the diversity of the population also needs to be supplemented.
Considering the analysis above, an improved squirrel search algorithm (ISSA) is proposed in this paper to improve the performance and the robustness of SSA. The proposed algorithm includes the jumping search method and the progressive search method, both of which have an independent winter search strategy for the early evolutionary stage, when Sc^t ≥ Smin, and a summer search strategy for the later evolutionary stage, when Sc^t < Smin. Algorithm 1 shows the detailed steps of ISSA:
Algorithm 1. Pseudo Code of ISSA
Input: pop
Output: fbest (fbest is the best fitness value optimized by the algorithm)
for t = 1 to T (T is the total generation of the algorithm to be executed)
    evaluate the fitness values of the population
    update the population through the jumping search method introduced in Section 3.2
    if t == T/n (n is the total number of substages of the whole optimization, details in Section 3.4)
        calculate the corresponding linear regression equations introduced in Section 3.4
        if two or more calculated slopes are positive
            continue optimizing through the progressive search method introduced in Section 3.3
        else
            continue optimizing through the jumping search method mentioned in Section 3.2
        end
    end
end

3.2. The Jumping Search Method

3.2.1. ‘Escape’ Operation in Winter

According to the winter updating method, a random relocation makes the endangered individuals abandon the current evolutionary direction, which decreases the convergence speed even though it explores new positions to maintain the population diversity. In addition, the safe individuals evolve towards either Fh or Fa based on themselves, which can maintain the current evolutionary information and supplement the population diversity. However, the convergence speed decreases because Fa is ultimately not the best individual.
In order to maintain the population diversity and improve the convergence speed, a new winter search strategy was designed in the jumping search method. The details are as follows:
If r ≥ Pdp, FSi is safe and the position is updated by Formula (13):
FSi^(t+1) = FSi^t + dg × Gc × (Fh^t − FSi^t)
If r < Pdp, FSi is endangered; FSi is considered to be dead and a new individual is generated by Formula (14):
FSi^(t+1) = L + rand(1, D) × (U − L)
In Formula (14), U is the maximum and L is the minimum of the components of FSi.
A subpopulation GS is produced containing all the individuals that have not yet been updated, and the original FSi is considered to be the predator, with the hunting radius calculated by Formula (15). An individual in GS is threatened if the distance between itself and the predator is shorter than the hunting radius. All the threatened individuals ‘escape’ by Formula (16) and continue searching around the new position by Formula (17) after ‘escaping’:
R = (U − L) / 2
FSj_new = FSj^t + dg × Gc × (Fai^t − FSj^t) − dg × Gc × (FSi^t − FSj^t),   i = 1, 2, …, Nfs
FSj^(t+1) = FSj_new × (0.5 + rand)
In the formula above, FSj represents the threatened individual.
The advantages of the new winter search strategy include: (1) Considering that the evolutionary information in the early stage is abundant enough to maintain the population diversity, the safe individuals only evolve towards the best individual Fh, which improves the convergence speed; (2) The endangered individuals are reinitialized in a smaller range, so the current evolutionary information can be retained in a way that avoids a blind search, which improves the searching efficiency. More importantly, the threatened individuals evolve towards Fa, which keeps the individuals from concentrating excessively. The further development after ‘escaping’ supplements the population diversity and prevents the algorithm from falling into a local optimum. In summary, the new winter search strategy maintains the population diversity as well as improving the convergence speed, which satisfies the requirement of the early evolutionary stage.
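As an illustrative sketch of the ‘escape’ step, Formulas (16) and (17) for a threatened individual FSj might be implemented as follows (the function name and the sample dg value are hypothetical; Gc = 1.9 comes from Section 2.3):

```python
import numpy as np

def escape_update(fs_j, fs_i_dead, f_a, d_g=0.8, g_c=1.9, rng=None):
    """'Escape' of a threatened squirrel FSj per Formulas (16) and (17).

    fs_i_dead is the dead squirrel now treated as the predator; f_a is a
    randomly chosen acorn-tree individual; d_g = 0.8 is a hypothetical
    sample gliding distance.
    """
    rng = np.random.default_rng(rng)
    # Formula (16): glide towards Fa while moving away from the predator FSi.
    fs_j_new = fs_j + d_g * g_c * (f_a - fs_j) - d_g * g_c * (fs_i_dead - fs_j)
    # Formula (17): keep searching around the escaped position.
    return fs_j_new * (0.5 + rng.random())
```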

3.2.2. ‘Death’ Operation in Summer

According to the summer updating method, only the safe individuals who evolve towards Fa are randomly relocated; the others stay at their updated positions without any change. Although this supplements the population diversity and retains the current evolutionary information, the blindness of the random relocation decreases the convergence speed, which does not fit the requirement of the later evolutionary stage.
In order to satisfy the corresponding requirement, a new summer search strategy was proposed. The proposed strategy searches around the elite individual carefully and supplements the population diversity to make up the disadvantages introduced above. The details are as follows:
If r ≥ Pdp, FSi is safe and the position is updated by Formula (18):
FSi^(t+1) = Fh^t + dg × Gc × (Fh^t − FSi^t)
If r < Pdp, FSi is endangered; FSi is considered to be dead and a new one is generated by Formula (19). Furthermore, all the threatened individuals are considered to be dead and new ones are generated by Formula (20):
FSi^(t+1) = FSi^t × 0.5^N(0,1)
FSi^(t+1) = Fh^t × (0.5 + rand)
In the formulas above, N(0,1) is a random number which obeys the standard normal distribution.
Fh is the best individual found so far. Formula (18) takes Fh as the base vector and the differential vector between Fh and FSi as the disturbance. Because the differences among the individuals in summer (Sc^t < Smin) are smaller than those in winter (Sc^t ≥ Smin), the essence of Formula (18) is to search finely around Fh and retain the current evolutionary information. In Formula (19), the random factor 0.5^N(0,1) is scattered in [0.5^2, 0.5^−2] but is close to 1 with greater probability, so the individuals generated by Formula (19) are similar to FSi, while the individuals generated by Formula (20) are distributed uniformly in [0.5Fh, 1.5Fh]. Therefore, the search spaces of Formulas (19) and (20) are smaller than that of the random relocation shown in Formula (10). In addition, Formula (19) pays more attention to retaining the current evolutionary information and Formula (20) pays more attention to developing the search space.
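The two summer replacement rules can be sketched as follows (a hedged sketch; the function name and the boolean `threatened` flag are my own framing of the two cases):

```python
import numpy as np

def summer_death(fs_i, f_h, threatened, rng=None):
    """Summer 'death' replacements of Formulas (19) and (20).

    A dead squirrel is regenerated near its old position (Formula (19));
    a threatened one is regenerated around the best individual Fh (Formula (20)).
    """
    rng = np.random.default_rng(rng)
    if threatened:
        return f_h * (0.5 + rng.random())        # Formula (20): uniform in [0.5*Fh, 1.5*Fh)
    return fs_i * 0.5 ** rng.standard_normal()   # Formula (19): factor 0.5^N(0,1), mostly near 1
```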

3.2.3. Characters of the Jumping Search Method

According to the new winter search strategy introduced in Section 3.2.1 and the new summer search strategy introduced in Section 3.2.2: (1) If r ≥ Pdp, the winter search strategy takes FSi as the base vector and the differential vector between Fh and FSi as the disturbance; the summer search strategy takes Fh as the base vector and the differential vector between Fh and FSi as the disturbance. As both of them evolve towards Fh, the difference between the winter search strategy and the summer search strategy is that the former focuses on maintaining the population diversity, while the latter focuses on improving the convergence speed. (2) If r < Pdp, the winter search strategy updates the individuals by changing the learning target and moving away from the current position, while the summer search strategy generates new individuals around FSi or Fh. In summary, the different requirements in different stages are satisfied by the coordination of the two search strategies. The pseudo code of the jumping search method is shown in Algorithm 2:
Algorithm 2. Pseudo Code of the Jumping Search Method
Input: pop
Output: popnew
if Sc^t ≥ Smin
    season = winter
else
    season = summer
end
if season == winter
    for p1 = 1 to popsize (popsize is the total number of squirrels)
        if r ≥ Pdp
            FSi^(t+1) = FSi^t + dg × Gc × (Fh^t − FSi^t)
        end
        if r < Pdp
            FSi^(t+1) = L + rand(1, D) × (U − L)
            for p2 = 1 to nt1 (nt1 is the total number of threatened squirrels)
                FSj_new = FSj^t + dg × Gc × (Fai^t − FSj^t) − dg × Gc × (FSi^t − FSj^t),  i = 1, 2, …, Nfs
                FSj^(t+1) = FSj_new × (0.5 + rand)
            end
        end
    end
end
if season == summer
    for p1 = 1 to popsize
        if r ≥ Pdp
            FSi^(t+1) = Fh^t + dg × Gc × (Fh^t − FSi^t)
        end
        if r < Pdp
            FSi^(t+1) = FSi^t × 0.5^N(0,1)
            for p2 = 1 to nt2 (nt2 is the total number of dead squirrels)
                FSi^(t+1) = Fh^t × (0.5 + rand)
            end
        end
    end
end

3.3. The Progressive Search Method

3.3.1. The Principle of the Progressive Search Method

The progressive search method is designed to improve the robustness of SSA. Compared with the jumping search method, it follows a similar idea but differs in the details.
When Sc^t ≥ Smin, the season is winter; the individuals are updated and mutated as follows:
If r ≥ Pdp, FSi is safe; update the position according to Formula (13).
If r < Pdp, FSi is endangered; select a dimension randomly and mutate it in the range of FSL to FSU, which aims at retaining the current evolutionary direction and information.
When Sc^t < Smin, the season is summer; the individuals are updated and mutated as follows:
If r ≥ Pdp, FSi is safe; update the position according to Formula (21):
FSi^(t+1) = Fh^t + Lévy(x) × (Fh^t − FSi^t)
Lévy(x) is calculated by Formulas (11) and (12), which makes the individuals search over short distances with greater probability and over long distances occasionally.
If r < Pdp, FSi is endangered; select a dimension randomly and mutate it in the range of L to U.
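The random-dimension mutation used in both seasons of the progressive search method can be sketched as follows (the function name is mine; `low`/`high` stand for FSL/FSU in winter or L/U in summer):

```python
import numpy as np

def mutate_one_dimension(fs_i, low, high, rng=None):
    """Progressive-search mutation: redraw one randomly chosen dimension.

    Every other dimension keeps its current value, preserving the
    current evolutionary information.
    """
    rng = np.random.default_rng(rng)
    mutant = fs_i.copy()
    k = rng.integers(fs_i.size)                    # dimension to mutate
    mutant[k] = low + rng.random() * (high - low)  # redraw within the given bounds
    return mutant
```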

3.3.2. The Analysis of the Progressive Search Method

According to the introduction of the progressive search method above, in the winter stage, if the individuals are safe, the search strategy is the same as that in the jumping search method. However, if the individuals are endangered, one dimension of each individual is mutated randomly; the convergence speed slows down but more evolutionary information is retained. In the summer stage, if the individuals are safe, the search strategy is similar to the jumping search method, but the gliding step is replaced by the Lévy flight. If the individuals are endangered, one dimension of each individual is mutated randomly to maintain the diversity of the population. Algorithm 3 shows the pseudo code of the progressive search method:
Algorithm 3. Pseudo Code of the Progressive Search Method
Input: pop
Output: popnew
if Sc^t ≥ Smin
    season = winter
else
    season = summer
end
if season == winter
    for p1 = 1 to popsize
        if r ≥ Pdp
            FSi^(t+1) = FSi^t + dg × Gc × (Fh^t − FSi^t)
        end
        if r < Pdp
            select one dimension randomly and change it between FSL and FSU
        end
    end
end
if season == summer
    for p1 = 1 to popsize
        if r ≥ Pdp
            FSi^(t+1) = Fh^t + Lévy(x) × (Fh^t − FSi^t)
        end
        if r < Pdp
            select one dimension randomly and change it between L and U
        end
    end
end

3.4. Linear Regression Selection Strategy

According to the introduction in Section 3.2 and Section 3.3, because the individuals generated in summer are more similar to Fh, the jumping search method improves the convergence speed in an obvious manner. As the progressive search method retains the evolutionary information efficiently, the population diversity can be better maintained. Meanwhile, during the optimization process of a minimization problem, the best fitness value of the population is supposed to show a downward trend. If the population diversity is not abundant enough, the search space will not be developed sufficiently, meaning the algorithm will not converge efficiently and the best fitness value will fluctuate or even become larger. Considering that different problems are suitable for different evolutionary strategies, this paper proposes a linear regression selection strategy to choose the appropriate updating method from the two methods mentioned above. The details are as follows:
Divide the whole evolutionary process into n substages evenly. The optimization starts with the jumping search method, and the linear regression equations of the best fitness value are calculated when a substage is finished. Three linear regression equations need to be calculated: over the best fitness values of the first half of the substage, the second half of the substage and the whole substage. If two or more equations’ slopes are positive, the best fitness value may fluctuate or become larger, that is, the population diversity needs to be supplemented and the jumping search method is not suitable for the current problem. Therefore, the progressive search method is selected to finish the evolution. Otherwise, the population diversity is abundant enough, and the algorithm continues evolving through the jumping search method to converge faster. Figure 2 is an illustration of the linear regression selection strategy introduced above. Taking n = 10 and t = 0.2T as an example, the linear regression equations of the three regions [0.1T, 0.15T], [0.15T, 0.2T] and [0.1T, 0.2T] are calculated. It can be seen from Figure 2 that the slope of ab is negative and the slopes of bc and ac are positive, which means that the best fitness value will fluctuate or become larger; therefore, the progressive search method is selected to finish the evolution. Figure 3 shows the procedure of ISSA.
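A minimal sketch of this slope test (the function name is mine; np.polyfit fits the first-degree regression lines):

```python
import numpy as np

def should_switch(best_history):
    """Linear regression selection over one substage of best fitness values.

    Fits first-degree regression lines to the first half, the second half and
    the whole substage; returns True (switch to the progressive search method)
    when two or more slopes are positive.
    """
    n = len(best_history)
    x = np.arange(n)
    segments = [(0, n // 2), (n // 2, n), (0, n)]
    slopes = [np.polyfit(x[a:b], best_history[a:b], 1)[0] for a, b in segments]
    return sum(s > 0 for s in slopes) >= 2
```

For a steadily decreasing best-fitness curve all three slopes are negative and the jumping search method is kept.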

4. Analysis of the Experimental Results

4.1. Benchmark Functions

In order to test the performance of the proposed ISSA, a series of experiments was carried out. All the experiments were run on an Intel Core i5-7200M CPU at 2.70 GHz with 4 GB RAM, Windows 10 and Matlab R2016a.
The experiments select 21 benchmark functions from references [15,33,34,35], which include low dimensional unimodal functions (F1, F2), low dimensional multimodal functions (F3–F10), high dimensional unimodal functions (F11–F17) and high dimensional multimodal functions (F18–F21). The specific functions are shown in Table 1:

4.2. The Influence of Parameter on ISSA

ISSA divides the whole evolutionary process into n substages and selects the proper strategy when each substage is completed. To find a value of n that yields good performance quickly, ISSA is tested with n = 0, 5, 10, 15 and 20 on the low dimensional unimodal function F2, the low dimensional multimodal functions F6 and F10, the high dimensional unimodal functions F16 and F17 and the high dimensional multimodal function F18, with the total number of evolutions set to 18,000. Nfs is 3, the same as in reference [30]. The population size is 30, and every function runs 30 times independently to avoid the chance results of a single execution. In the results below, the number before '±' is the mean and the number after '±' is the standard deviation of the obtained best fitness values.
According to Table 2, different n have almost no impact on the low dimensional unimodal function; for the low dimensional multimodal functions, the results with n = 0 are much worse than those with n ≠ 0. According to Section 3.4, ISSA starts with the jumping search method, so with n = 0 ISSA reduces entirely to the jumping search method, whereas with n ≠ 0 ISSA combines the two strategies and may switch to the progressive search method. The results on F2 and F6 show that different strategies fit different problems and that the linear regression selection strategy is effective. Aside from this, the results of F2 and F6 are better with a higher selection frequency. For the high dimensional unimodal functions, the experimental results differ little; however, on the high dimensional multimodal function there are obvious differences between n = 5 and n = 0, 10, 15 and 20. To guarantee both the convergence performance and the operating efficiency of ISSA, n = 10 is used for the remaining experiments in this paper.

4.3. The Efficiency of the Proposed Methods

The proposed ISSA selects the jumping search method or the progressive search method automatically according to the linear regression selection strategy. To verify the efficiency of the proposed methods, the standard SSA, the jumping search method, the progressive search method and ISSA are compared on the 21 benchmark functions in terms of convergence speed, population diversity and convergence precision.

4.3.1. Comparison of Convergence Speed on Four Methods

For each method mentioned above, the total number of evolutions is 30,000, Nfs is 3 and the population size is 30, so that the methods are compared fairly. Every method runs 30 times independently to avoid the randomness of a single execution.
Figure 4a–i shows the convergence curves of SSA, the jumping search method, the progressive search method and ISSA on the low dimensional unimodal function F2, the low dimensional multimodal functions F3 and F5, the high dimensional unimodal functions F11, F13, F14 and F16 and the high dimensional multimodal functions F18 and F21 over 30,000 evaluations. The blue, green, yellow and red curves refer to SSA, the jumping search method, the progressive search method and ISSA, respectively. The abscissa of each subfigure is the iteration number and the ordinate is the best value found so far; the title of each subfigure is the corresponding function name. For the low dimensional functions, the jumping search method, the progressive search method and ISSA all improve the convergence speed considerably compared with SSA; ordered from fastest to slowest, they are the jumping search method, ISSA and the progressive search method. For the high dimensional functions, the convergence speed of the jumping search method and ISSA is still much better than that of SSA, with the jumping search method performing best, but the improvement of the progressive search method is not as obvious as on the low dimensional functions.

4.3.2. Comparison of Population Diversity on Four Methods

In order to compare the population diversity of the four methods intuitively, all the population sizes are 30. Table 3 compares the individuals' distributions on the 2-dimensional unimodal function F2 and the 2-dimensional multimodal function F5 when the convergence accuracy reaches 10−6, 10−8 and 10−10. The points in the figures are the individuals' positions, and the blue, green, black and red points refer to SSA, the jumping search method, the progressive search method and ISSA, respectively. The abscissa and ordinate of each figure span the search range of the function: from −10 to 10 for F2 and from −100 to 100 for F5. The title of each figure is the name of F2 or F5. Table 4 shows the variance of the population's fitness values when the algorithms converge to about 10−4 on the benchmark functions, which include the high dimensional unimodal functions F11 and F12 and the high dimensional multimodal functions F20 and F21. The number before '±' is the variance's mean and the number after '±' is the variance's standard deviation; '/' indicates that the algorithm failed to converge to 10−4 within 30,000 evaluations.
It can be seen from Table 3 that SSA has the most dispersed individual distribution because of the individuals' re-initialization. Among the three strategies proposed in this paper, the individuals of the progressive search method are the most dispersed, ISSA takes second place, and the jumping search method has the most concentrated distribution. The progressive search method mutates an individual on a certain dimension, which places more individuals on the lines x1 = 0 and x2 = 0; as a result, the benchmark functions have a great chance of converging to the optimal value. SSA re-initializes whole individuals, which scatters them irregularly over the search space. From the analysis above, when the population evolves to the fixed accuracy, the progressive search method has the best population diversity while the jumping search method performs worst.
The data in Table 4 show that the variances of SSA and the progressive search method are much larger than those of the jumping search method and ISSA. Excessive population diversity slows down the convergence speed, and some functions cannot converge to the fixed accuracy at all. As for the population diversity of the three proposed strategies, the progressive search method performs best, ISSA takes second place and the jumping search method performs worst.
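The diversity measure reported in Table 4 is straightforward to reproduce; the helper below is our own sketch (the name `fitness_variance` is ours). It computes the variance of the population's fitness values, which would be recorded at the moment a run first reaches the 10−4 accuracy and then averaged over the 30 runs.

```python
import statistics

def fitness_variance(population, f):
    """Population diversity measured as the variance of the
    individuals' fitness values, as in Table 4."""
    values = [f(x) for x in population]
    return statistics.pvariance(values)

# Example with the Sphere function (F12): a spread-out population
# has a much larger fitness variance than a nearly converged one.
sphere = lambda x: sum(xi ** 2 for xi in x)
spread = [[-50.0, 50.0], [30.0, -30.0], [80.0, 10.0], [0.0, 0.0]]
tight = [[0.01, 0.0], [0.0, 0.02], [-0.01, 0.01], [0.0, 0.0]]
print(fitness_variance(spread, sphere) > fitness_variance(tight, sphere))  # True
```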

4.3.3. Comparison of Comprehensive Performance on Four Methods

In order to compare the comprehensive performance of the four methods, Table 5 shows the mean and standard deviation of the best values obtained over 30 independent experiments on the 21 benchmark functions. For each method, the population size is 30, the total number of evolutions is 30,000 and Nfs is 3. The number before '±' is the mean and the number after '±' is the standard deviation of the obtained best fitness values; '+', '−' and '=' indicate that the mean of the corresponding method is better than, worse than and equal to that of ISSA, respectively.
Table 5 shows that the convergence precision of the jumping search method, the progressive search method and ISSA is obviously better than that of SSA. Meanwhile, compared with ISSA, the numbers of benchmark functions on which SSA has a better, worse and equivalent mean are 1, 18 and 2, respectively; the corresponding numbers for the jumping search method are 1, 4 and 16, and for the progressive search method 3, 12 and 6. In order to compare the differences among the methods, a Friedman test was applied to the data in Table 5 [36]. The specific process is shown below:
The Friedman test ranks the algorithms for each data set separately; k is the number of algorithms and n is the number of data sets per algorithm. The results are shown in Table 6, and the statistic is calculated as follows: $\chi_r^2 = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1) = \frac{12}{21 \times 4 \times (4+1)} (75^2 + 44^2 + 51^2 + 40^2) - 3 \times 21 \times (4+1) = 21.0571$. At the 5% significance level, α = 0.05 and df = 4 − 1 = 3, and $\chi_{0.05}^2(3) = 7.81 < 21.0571$ according to the Chi-square distribution table. Therefore, the four methods are considered to have significant differences at the 5% significance level.
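The statistic can be double-checked with a short helper (ours, not the authors' code), using the rank sums of SSA, the jumping search method, the progressive search method and ISSA over the 21 functions:

```python
def friedman_chi2(rank_sums, n):
    """Friedman chi-square statistic from the per-algorithm rank sums
    R_j over n data sets (here, benchmark functions)."""
    k = len(rank_sums)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

chi2 = friedman_chi2([75, 44, 51, 40], n=21)
print(round(chi2, 4))  # 21.0571
```

The same helper reproduces the statistics of the later six-algorithm comparisons (33.7857 and 38.7403) from the rank sums given there.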
To further compare the performance of the four methods, assuming that the convergence performance of ISSA is better than the other three methods, a Holm test was carried out and the results are shown in Table 7:
It can be seen from Table 7 that $P_1 < \alpha/(k-1)$ while $P_2 > \alpha/(k-2)$ and $P_3 > \alpha/(k-3)$, so only the first null hypothesis is rejected at the 5% significance level. Therefore, compared with SSA, ISSA has significantly better performance. ISSA also has the smallest average rank, although its advantage over the progressive search method and the jumping search method is not statistically significant.
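A sketch of the Holm step-down procedure used in Tables 7, 10 and 13 (our helper; the exact p-values of the paper are not reproduced here): the i-th smallest p-value is compared with α/(k−i) and testing stops at the first hypothesis that cannot be rejected.

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down test. The sorted p-values p_1 <= p_2 <= ... are
    compared against alpha/(k-1), alpha/(k-2), ...; at the first
    non-rejection, that hypothesis and all larger p-values are
    retained. Returns rejection flags in sorted order. k counts the
    compared algorithms (number of hypotheses + 1 here)."""
    k = len(p_values) + 1
    rejected = []
    for i, p in enumerate(sorted(p_values), start=1):
        if p < alpha / (k - i):
            rejected.append(True)
        else:
            # Holm stops here: retain the remaining hypotheses.
            rejected.extend([False] * (len(p_values) - len(rejected)))
            break
    return rejected
```

With illustrative p-values 0.001, 0.03 and 0.04 and α = 0.05, only the first hypothesis is rejected, matching the pattern $P_1 < \alpha/(k-1)$, $P_2 > \alpha/(k-2)$, $P_3 > \alpha/(k-3)$ reported for Table 7.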
In conclusion, the jumping search method has the best convergence speed and the progressive search method performs best on maintaining the population diversity. Compared with SSA, both have obvious advantages in convergence accuracy. ISSA combines the two methods together, improves the convergence speed and the convergence accuracy and maintains the population diversity as well. ISSA can find the global optimal of more benchmark functions and has the best comprehensive performance.

4.4. Performance Compared with Other Algorithms

To verify the advantages of ISSA, we compared it with five well-performing improved algorithms: the improved differential evolution algorithm MDE (modified differential evolution with a self-adaptive parameters method) [37], the improved gravitational search algorithm IGSA/PSO (an improved gravitational search algorithm for green partner selection in virtual enterprises) [38], the improved artificial bee colony algorithm distABC (artificial bee colony algorithm with a distribution-based update rule) [39], the improved particle swarm optimization ADN-RSN-PSO (all-dimension neighborhood based particle swarm optimization with randomly selected neighbors) [40] and the improved grey wolf optimization algorithm PSO-GWO (an improved hybrid grey wolf optimization algorithm) [41].
For each algorithm mentioned above, the population size is 30, the total number of evolutions is 24,000 and each method runs 30 times independently to ensure a fair comparison. The relevant parameters are set as follows:
  • ISSA: n = 10; Nfs = 3;
  • MDE: the crossover probability CR = 0.4, the mutation probability F is determined by the random number between 0 and 1.
  • IGSA/PSO: the gravitational constant G0 = 100, α = 20;
  • distABC: limit = (the population number × dimension)/2;
  • ADN-RSN-PSO: the weight factor w = 0.7298, c1 = c2 = 2.05;
  • PSO-GWO: the weight factor c1 = c2 = 2.05; aini = 2, afin = 0; r1, r2, r3, r4 are all the random numbers between 0 and 1.
Table 8 shows the optimization results on the low dimensional functions (F1–F10) and Table 11 shows those on the high dimensional functions (F11–F21). Best, Worst, Mean and SD respectively denote the best fitness value, the worst fitness value, the mean fitness value and the standard deviation over 30 independent executions. R denotes the number of runs in which the algorithm converges to the appointed precision. The appointed precision is 10−8 for the benchmark functions whose optimum is 0; for F1, F7, F9 and F10, whose optima are not 0, the appointed precision is −0.6, −1.6, −3.6 and −8.6, respectively.
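These row entries can be computed from the 30 final best-fitness values of each algorithm; the sketch below uses our own helper name `summarize` and illustrative values, not data from the tables.

```python
import statistics

def summarize(final_values, target):
    """Best/Worst/Mean/SD over independent runs, plus R, the number
    of runs that reached the appointed precision `target`
    (e.g. 1e-8 when the optimum is 0)."""
    return {
        "Best": min(final_values),
        "Worst": max(final_values),
        "Mean": statistics.mean(final_values),
        "SD": statistics.stdev(final_values),
        "R": sum(v <= target for v in final_values),
    }

# Four hypothetical run results: three of them reach 1e-8.
stats = summarize([0.0, 2e-9, 5e-7, 1e-10], target=1e-8)
print(stats["R"])  # 3
```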
It can be seen from Table 8 that MDE can converge to the optimum on F1, F3, F4, F5, F6, F7 and F8, but its performance on F7 and F8 is not stable enough; distABC converges to the optimum stably only on F4 and has a certain probability of converging to the optimum on F3 and F5; IGSA/PSO has no stable convergence on any function but still has a certain probability of converging to the optimum on F1, F3, F4 and F5; ADN-RSN-PSO performs worst, with no function converging to the optimum; PSO-GWO converges stably to the optimum on F3, F4, F5 and F8, and has a certain probability of converging to the optimum on F7. For ISSA, all the low dimensional functions can converge to the optimum, with stable convergence on F2, F3, F4, F5 and F8. Aside from this, ISSA obtains the minimum mean on all functions except F6. In order to compare the differences among the methods, the Friedman test is applied to the data in Table 8. The specific process is shown below, and the results are given in Table 9:
$\chi_r^2 = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1) = \frac{12}{10 \times 6 \times (6+1)} (20.5^2 + 51.5^2 + 39^2 + 53^2 + 29^2 + 17^2) - 3 \times 10 \times (6+1) = 33.7857$. At the 5% significance level, α = 0.05 and df = 6 − 1 = 5, and $\chi_{0.05}^2(5) = 11.07 < 33.7857$ according to the Chi-square distribution table. Therefore, the six algorithms are considered to have significant differences at the 5% significance level.
To further compare the performance of the six algorithms, assume that the convergence performance of ISSA is better than the other five algorithms. A Holm test was carried out and the results are shown in Table 10:
It can be seen from Table 10 that $P_1 < \alpha/(k-1)$, $P_2 < \alpha/(k-2)$, $P_3 < \alpha/(k-3)$, $P_4 > \alpha/(k-4)$ and $P_5 > \alpha/(k-5)$, so the first three null hypotheses are rejected at the 5% significance level. Therefore, compared with ADN-RSN-PSO, distABC and IGSA/PSO, ISSA has significantly better performance. ISSA also has the smallest average rank, although its advantage over PSO-GWO and MDE is not statistically significant. In summary, compared with the five other algorithms, the proposed ISSA performs better on the low dimensional functions.
Table 11 shows the convergence results on the high dimensional functions. MDE can converge to a certain precision on F11, F12, F13, F16, F18 and F20, but the obtained precision lags clearly behind ISSA; distABC shows almost no efficient convergence on the high dimensional functions; IGSA/PSO can converge to a certain precision on F11, F12 and F13, but the results are not good enough; ADN-RSN-PSO can converge to a certain precision on all the high dimensional functions except F18, but the obtained precision is far from that of ISSA; PSO-GWO converges to a better precision except on F18, where the result is not good enough, and on F21, where it cannot converge efficiently, and its precision is still worse than that of ISSA. For ISSA, all the functions converge to the optimum except F15, F16 and F20. Aside from this, the best fitness value, the worst fitness value, the mean fitness value and the standard deviation on F11, F12, F13, F14, F17, F19 and F21 are all equal to zero and remain unchanged even as the dimension grows. The best fitness value on F18 is also zero whether the dimension is 30, 50 or 100. On F15, F16 and F20, which cannot converge to the optimum, ISSA's deviation is much smaller than that of the other five algorithms, and its best, worst and mean fitness values and standard deviation are still much better than theirs. In order to compare the differences among the methods, the Friedman test is applied to the data in Table 11 for dimension 30. The specific process is shown below:
It can be obtained by $\chi_r^2 = \frac{12}{nk(k+1)} \sum_{j=1}^{k} R_j^2 - 3n(k+1) = \frac{12}{11 \times 6 \times (6+1)} (37^2 + 55^2 + 46^2 + 55^2 + 27^2 + 11^2) - 3 \times 11 \times (6+1) = 38.7403$, with the results shown in Table 12. At the 5% significance level, α = 0.05 and df = 6 − 1 = 5, and $\chi_{0.05}^2(5) = 11.07 < 38.7403$ according to the Chi-square distribution table. Therefore, the six algorithms are considered to have significant differences at the 5% significance level.
To further compare the performance of the six algorithms, assume that the convergence performance of ISSA is better than the other five methods. A Holm test is carried out and the results are shown in Table 13:
It can be seen from Table 13 that $P_1 < \alpha/(k-1)$, $P_2 < \alpha/(k-2)$, $P_3 < \alpha/(k-3)$, $P_4 < \alpha/(k-4)$ and $P_5 > \alpha/(k-5)$, so the first four null hypotheses are rejected at the 5% significance level. Therefore, compared with ADN-RSN-PSO, distABC, IGSA/PSO and MDE, ISSA has a significantly better performance. ISSA also has the smallest average rank, although its advantage over PSO-GWO is not statistically significant. In summary, the proposed ISSA has obvious advantages in convergence precision and stability on the high dimensional functions. Figure 5a–u shows the convergence curves of the algorithms mentioned above on F1–F21 for an intuitive comparison. The yellow, cyan, purple, green, black, blue and red curves refer to MDE, distABC, IGSA/PSO, ADN-RSN-PSO, PSO-GWO, SSA and ISSA, respectively. The abscissa of each subfigure is the iteration number and the ordinate is the best value found so far; the title of each subfigure is the corresponding function name F1–F21.

5. Conclusions

This paper proposes an improved squirrel search algorithm. In SSA, the winter searching method cannot develop the search space sufficiently, and the summer searching method is too random to guarantee the convergence speed. ISSA introduces the jumping search method and the progressive search method. For the jumping search method, the 'escape' operation in winter supplements the population diversity and fully develops the search space, while the 'death' operation in summer explores the search space more sufficiently and improves the convergence speed. Aside from this, the mutation in the progressive search method retains the evolutionary information more effectively and maintains the population diversity. ISSA selects the suitable method through the linear regression selection strategy according to the variation tendency of the best fitness value, which improves the robustness of the algorithm. Compared with SSA, ISSA pays more attention to developing the search space in winter and to exploring around the elite individual in summer, which keeps a good balance between development and exploration and improves both the convergence speed and the convergence accuracy. Moreover, ISSA selects a proper search strategy as the optimization proceeds, so ISSA has greater possibilities of finding the optimal solution. The experimental results on 21 benchmark functions and the statistical tests show that the proposed algorithm can accelerate convergence, improve the convergence accuracy and maintain the population diversity at the same time. Furthermore, ISSA has obvious advantages in convergence performance compared with five other intelligent evolutionary algorithms.

Author Contributions

Y.J.W. designed the experiments and revised the paper; T.L.D. shared the ideas and mainly wrote the paper.

Funding

This research was funded by the National Natural Science Foundation of China under grants No. 61501107 and No. 61603073, and by the Project of Scientific and Technological Innovation Development of Jilin under grants No. 201750227 and No. 201750219.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, J.; Lin, G. Average Convergence Rate of Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2016, 20, 316–321. [Google Scholar] [CrossRef]
  2. Chugh, T.; Sindhya, K.; Hakanen, J. A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms. Soft Comput. 2017, 23, 3137–3166. [Google Scholar] [CrossRef]
  3. Bhattacharyya, B.; Raj, S. Swarm intelligence based algorithms for reactive power planning with Flexible AC transmission system devices. Int. J. Electr. Syst. 2016, 78, 158–164. [Google Scholar] [CrossRef]
  4. Laina, R.; Lamzouri, F.E.-Z.; Boufounas, E.-M.; El Amrani, A.; Boumhidi, I. Intelligent control of a DFIG wind turbine using a PSO evolutionary algorithm. Procedia Comput. Sci. 2018, 127, 471–480. [Google Scholar] [CrossRef]
  5. Manjunath, P.G.C.; Krishna, P.; Parappagoudar, M.B.; Vundavilli, P.R. Multi-Objective Optimization of Squeeze Casting Process using Evolutionary Algorithms. IJSIR 2016, 7, 55–74. [Google Scholar]
  6. Ntouni, G.D.; Paschos, A.E.; Kapinas, V.M.; Karagiannidis, G.K.; Hadjileontiadis, L.J. Optimal detector design for molecular communication systems using an improved swarm intelligence algorithm. Micro Nano Lett. 2018, 13, 383–388. [Google Scholar] [CrossRef]
  7. Liu, M.; Zhang, F.; Ma, Y.; Pota, H.R.; Shen, W. Evacuation path optimization based on quantum ant colony algorithm. Adv. Eng. Inform. 2016, 30, 259–267. [Google Scholar] [CrossRef]
  8. Yang, J.H.; Honavar, V. Feature Subset Selection Using a Genetic Algorithm. IEEE Intell. Syst. 1998, 13, 44–49. [Google Scholar] [CrossRef]
  9. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  10. Hansen, N.; Ostermeier, A. Completely Derandomized Self-Adaptation in Evolution Strategies. Evol. Comput. 2001, 9, 159–195. [Google Scholar] [CrossRef] [Green Version]
  11. De Castro, L.N.; Von Zuben, F.J. Learning and Optimization Using the Clonal Selection Principle. IEEE Trans. Evol. Comput. 2002, 6, 239–251. [Google Scholar] [CrossRef]
  12. Xie, X.F.; Zhang, W.J.; Yang, Z.L. Social cognitive optimization for nonlinear programming problems. In Proceedings of the 2002 International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002; pp. 779–783. [Google Scholar]
  13. Ashrafi, S.M.; Dariane, A.B. A novel and effective algorithm for numerical optimization: Melody Search (MS). In Proceedings of the 2011 11th International Conference on Hybrid Intelligent Systems, Malacca, Malaysia, 5–8 December 2011; pp. 109–114. [Google Scholar]
  14. Rao, R.V.; Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 2012, 3, 535–560. [Google Scholar] [CrossRef]
  15. Cheng, M.Y.; Prayogo, D. Symbiotic Organisms Search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  16. Jahani, E.; Chizari, M. Tackling global optimization problems with a novel algorithm—Mouth Brooding Fish algorithm. Appl. Soft Comput. 2018, 62, 987–1002. [Google Scholar] [CrossRef]
  17. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [Green Version]
  18. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Sadiq, A.S. Magnetic Optimization Algorithm for training Multi Layer Perceptron. In Proceedings of the IEEE 3rd International Conference on Communication Software and Networks, Xi’an, China, 27–29 May 2011; pp. 42–46. [Google Scholar]
  20. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  21. Moein, S.; Logeswaran, R. KGMO: A swarm optimization algorithm based on the kinetic energy of gas molecule. Inf. Sci. 2014, 275, 127–144. [Google Scholar] [CrossRef]
  22. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  23. Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl. Soft Comput. 2017, 59, 596–621. [Google Scholar] [CrossRef]
  24. Colorni, A. Distributed Optimization by Ant Colonies. In Proceedings of the 1st European Conference on Artificial Life, Paris, France, 11–13 December 1991; pp. 134–142. [Google Scholar]
  25. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  26. Basturk, B.; Karaboga, D. An artificial bee colony (ABC) algorithm for numeric function optimization. In Proceedings of the Swarm Intelligence Symposium, Indianapolis, IN, USA, 12–14 May 2006; pp. 687–697. [Google Scholar]
  27. Cuevas, E.; Cienfuegos, M. A swarm optimization algorithm inspired in the behavior of the social-spider. Expert. Syst. Appl. 2013, 40, 6374–6384. [Google Scholar] [CrossRef] [Green Version]
  28. Fausto, F.; Cuevas, E.; Valdivia, A. A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 2017, 160, 39–55. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, Z.; Huang, C.; Huang, H.; Tang, S.; Dong, K. An optimization method: Hummingbirds optimization algorithm. J. Syst. Eng. Electron. 2018, 29, 168–186. [Google Scholar] [CrossRef]
  30. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2018, 44, 148–175. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Liu, M. An improved genetic algorithm encoded by adaptive degressive ary number. Soft Comput. 2018, 22, 6861–6875. [Google Scholar] [CrossRef]
  32. Gomes, W.C.; dos Santos Filho, R.C.; de Sales Junior, C.D.S. An Improved Artificial Bee Colony Algorithm with Diversity Control. In Proceedings of the 2018 Brazilian Conference on Intelligent Systems, Sao Paulo, Brazil, 22–25 October 2018; pp. 19–24. [Google Scholar]
  33. Wang, S.; Li, Y.; Yang, H. Self-adaptive differential evolution algorithm with improved mutation strategy. Appl. Intell. 2017, 47, 644–658. [Google Scholar] [CrossRef]
  34. Kiran, M.S.; Hakli, H.; Gunduz, M. Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf. Sci. 2015, 300, 140–157. [Google Scholar] [CrossRef]
  35. Feng, X.; Xu, H.; Wang, Y. The social team building optimization algorithm. Appl. Soft Comput. 2018, 2, 1–22. [Google Scholar] [CrossRef]
  36. Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  37. Li, X.; Yin, M. Modified differential evolution with self-adaptive parameters method. J. Comb. Optim. 2016, 31, 546–576. [Google Scholar] [CrossRef]
  38. Xiao, J.; Niu, Y.; Chen, P. An improved gravitational search algorithm for green partner selection in virtual enterprises. Neurocomputing 2016, 217, 103–109. [Google Scholar] [CrossRef]
  39. Babaoglu, I. Artificial bee colony algorithm with distribution-based update rule. Appl. Soft Comput. 2015, 34, 851–861. [Google Scholar] [CrossRef]
  40. Sun, W.; Lin, A.; Yu, H. All-dimension neighborhood based particle swarm optimization with randomly selected neighbors. Inf. Sci. 2017, 405, 141–156. [Google Scholar] [CrossRef]
  41. Teng, Z.J.; Lv, J.L.; Guo, L.W. An improved hybrid grey wolf optimization algorithm. Soft Comput. 2018, 22, 1–15. [Google Scholar] [CrossRef]
Figure 1. The procedure of the standard squirrel search algorithm (SSA).
Figure 2. The illustration of the linear regression selection strategy.
Figure 3. The procedure of improved squirrel search algorithm (ISSA).
Figure 4. Convergence curves of the proposed methods and SSA.
Figure 5. The convergence curves of ISSA and other algorithms.
Table 1. Benchmark functions.
Name | Function | D | Range | Optimal
F1 Easom | $f(x) = -\cos(x_1)\cos(x_2)\exp(-(x_1-\pi)^2 - (x_2-\pi)^2)$ | 2 | [−100, 100] | −1
F2 Matyas | $f(x) = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2$ | 2 | [−10, 10] | 0
F3 Bohachevsky1 | $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$ | 2 | [−100, 100] | 0
F4 Bohachevsky2 | $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1)\cos(4\pi x_2) + 0.3$ | 2 | [−100, 100] | 0
F5 Bohachevsky3 | $f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1 + 4\pi x_2) + 0.3$ | 2 | [−100, 100] | 0
F6 Booth | $f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$ | 2 | [−10, 10] | 0
F7 Michalewicz2 | $f(x) = -\sum_{i=1}^{D} \sin(x_i)(\sin(i x_i^2/\pi))^{20}$ | 2 | $[0, \pi]$ | −1.8013
F8 Schaffer | $f(x) = 0.5 + \frac{\sin^2(\sqrt{x_1^2 + x_2^2}) - 0.5}{(1 + 0.001(x_1^2 + x_2^2))^2}$ | 2 | [−100, 100] | 0
F9 Michalewicz5 | $f(x) = -\sum_{i=1}^{D} \sin(x_i)(\sin(i x_i^2/\pi))^{20}$ | 5 | $[0, \pi]$ | −4.6877
F10 Michalewicz10 | $f(x) = -\sum_{i=1}^{D} \sin(x_i)(\sin(i x_i^2/\pi))^{20}$ | 10 | $[0, \pi]$ | −9.6602
F11 Zakharov | $f(x) = \sum_{i=1}^{D} x_i^2 + (\sum_{i=1}^{D} 0.5 i x_i)^2 + (\sum_{i=1}^{D} 0.5 i x_i)^4$ | 30/50/100 | [−5, 10] | 0
F12 Sphere | $f(x) = \sum_{i=1}^{D} x_i^2$ | 30/50/100 | [−100, 100] | 0
F13 SumSquares | $f(x) = \sum_{i=1}^{D} i x_i^2$ | 30/50/100 | [−10, 10] | 0
F14 Schwefel 1.2 | $f(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2$ | 30/50/100 | [−100, 100] | 0
F15 Schwefel 2.21 | $f(x) = \max_{1 \le i \le D} |x_i|$ | 30/50/100 | [−100, 100] | 0
F16 Schwefel 2.22 | $f(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | 30/50/100 | [−10, 10] | 0
F17 Elliptic | $f(x) = \sum_{i=1}^{D} (10^6)^{\frac{i-1}{D-1}} x_i^2$ | 30/50/100 | [−100, 100] | 0
F18 Griewank | $f(x) = \frac{1}{4000}\sum_{i=1}^{D}(x_i - 100)^2 - \prod_{i=1}^{D}\cos(\frac{x_i - 100}{\sqrt{i}}) + 1$ | 30/50/100 | [−600, 600] | 0
F19 Salomon | $f(x) = -\cos(2\pi\sqrt{\sum_{i=1}^{D} x_i^2}) + 0.1\sqrt{\sum_{i=1}^{D} x_i^2} + 1$ | 30/50/100 | [−100, 100] | 0
F20 Alpine | $f(x) = \sum_{i=1}^{D} |x_i \sin(x_i) + 0.1 x_i|$ | 30/50/100 | [−10, 10] | 0
F21 Powell | $f(x) = \sum_{i=1}^{D/4} [(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$ | 32/52/100 | [−4, 5] | 0
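For reference, a few of the benchmarks in Table 1 written out in Python (standard textbook definitions in our own transcription; worth checking against references [15,33,34,35] before reuse):

```python
import math

def sphere(x):                       # F12, optimum f(0, ..., 0) = 0
    return sum(xi * xi for xi in x)

def matyas(x):                       # F2, optimum f(0, 0) = 0
    x1, x2 = x
    return 0.26 * (x1 * x1 + x2 * x2) - 0.48 * x1 * x2

def easom(x):                        # F1, optimum f(pi, pi) = -1
    x1, x2 = x
    return (-math.cos(x1) * math.cos(x2)
            * math.exp(-(x1 - math.pi) ** 2 - (x2 - math.pi) ** 2))

def griewank_shifted(x):             # F18, optimum f(100, ..., 100) = 0
    s = sum((xi - 100.0) ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos((xi - 100.0) / math.sqrt(i))
                  for i, xi in enumerate(x, start=1))
    return s - p + 1.0
```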
Table 2. The experimental results of ISSA with different n.
F | n = 0 | n = 5 | n = 10 | n = 15 | n = 20
F2 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0
F6 | 6.2502e−04 ± 1.9021e−04 | 1.7129e−10 ± 9.2634e−11 | 3.5323e−08 ± 1.2626e−09 | 7.7188e−07 ± 2.8698e−08 | 7.5656e−07 ± 1.9151e−08
F10 | −7.6632 ± 0.7985 | −9.4261 ± 0.0680 | −9.2260 ± 0.7780 | −8.1919 ± 1.3054 | −8.0299 ± 1.1781
F16 | 6.2909e−163 ± 0 | 7.4432e−162 ± 0 | 1.0302e−164 ± 0 | 2.0455e−165 ± 0 | 7.1391e−164 ± 0
F17 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0
F18 | 1.8504e−15 ± 1.0135e−15 | 4.4095e−08 ± 1.9301e−07 | 2.1945e−15 ± 8.2481e−16 | 2.9976e−15 ± 6.5204e−15 | 3.1826e−15 ± 7.1639e−15
Table 3. The comparison of the individuals’ distribution.
[Table 3 consists of scatter plots of the individuals' distributions of SSA, the jumping search method, the progressive search method and ISSA on F2 and F5 at accuracies of 10−6, 10−8 and 10−10; the images are omitted here.]
Table 4. The comparison of the population’s fitness value.
Name | SSA | Jumping Search Method | Progressive Search Method | ISSA
F11 | 1.2837e+05 ± 7.3449e+04 | 2.5981e−04 ± 2.4867e−04 | 1.1115e+04 ± 3.3201e+03 | 0.0017 ± 0.0015
F12 | 5.7955e+04 ± 2.1202e+04 | 7.1891e−05 ± 4.6142e−05 | 1.3107e+03 ± 543.9195 | 7.2366e−04 ± 6.0848e−04
F20 | / | 6.0584e−05 ± 2.5350e−05 | 1.2767 ± 0.2946 | 2.2921e−04 ± 1.7366e−04
F21 | / | 8.5775e−05 ± 8.4886e−05 | 7.2621e+04 ± 6.0527e+04 | 6.2753e−04 ± 4.7156e−04
Table 5. The comparison of the convergence precision.

| Name | SSA | Jumping Search Method | Progressive Search Method | ISSA |
| F1 | −1 ± 0 (=) | −1 ± 1.0909e−16 (=) | −1 ± 0 (=) | −1 ± 3.9171e−16 |
| F2 | 4.4212e−21 ± 1.5032e−20 (−) | 0 ± 0 (=) | 9.7986e−149 ± 5.3669e−148 (−) | 0 ± 0 |
| F3 | 7.4015e−18 ± 4.0540e−17 (−) | 0 ± 0 (=) | 0 ± 0 (=) | 0 ± 0 |
| F4 | 1.0191e−20 ± 2.9625e−20 (−) | 0 ± 0 (=) | 0 ± 0 (=) | 0 ± 0 |
| F5 | 1.2019e−20 ± 4.4285e−20 (−) | 0 ± 0 (=) | 0 ± 0 (=) | 0 ± 0 |
| F6 | 1.4402e−18 ± 3.0624e−18 (+) | 0.0202 ± 0.0238 (−) | 2.6295e−33 ± 1.4403e−32 (+) | 4.9331e−12 ± 1.6068e−11 |
| F7 | −1.8468 ± 0.0838 (−) | −1.7911 ± 0.0116 (−) | −1.8013 ± 6.8344e−16 (=) | −1.8013 ± 8.5739e−16 |
| F8 | 0 ± 0 (=) | 0 ± 0 (=) | 0 ± 0 (=) | 0 ± 0 |
| F9 | −4.6459 ± 2.7201e−15 (−) | −4.4543 ± 0.1818 (−) | −4.6808 ± 0.1336 (+) | −4.6617 ± 0.0304 |
| F10 | −9.5333 ± 0.2598 (−) | −8.2037 ± 0.3744 (−) | −9.6602 ± 0 (+) | −9.5882 ± 0.0891 |
| F11 | 2.7472e−13 ± 1.1640e−12 (−) | 0 ± 0 (=) | 2.1318e−16 ± 1.1676e−15 (−) | 0 ± 0 |
| F12 | 8.0478e−13 ± 4.3612e−12 (−) | 0 ± 0 (=) | 1.9301e−15 ± 1.0572e−14 (−) | 0 ± 0 |
| F13 | 2.0435e−10 ± 1.0129e−09 (−) | 0 ± 0 (=) | 3.5232e−11 ± 1.9297e−10 (−) | 0 ± 0 |
| F14 | 0.5472 ± 2.7161 (−) | 0 ± 0 (=) | 0.0865 ± 0.4149 (−) | 0 ± 0 |
| F15 | 7.2194e−08 ± 6.4247e−08 (−) | 0 ± 0 (=) | 0.0241 ± 0.1321 (−) | 0 ± 0 |
| F16 | 0.0978 ± 0.4017 (−) | 0 ± 0 (=) | 1.4322e−11 ± 7.8443e−11 (−) | 0 ± 0 |
| F17 | 9.7812e+03 ± 3.5168e+04 (−) | 0 ± 0 (=) | 2.3545e−15 ± 1.2896e−14 (−) | 0 ± 0 |
| F18 | 0.6323 ± 0.3594 (−) | 0 ± 0 (+) | 0.0129 ± 0.0650 (−) | 1.9614e−16 ± 3.6843e−16 |
| F19 | 0.1099 ± 0.0548 (−) | 0 ± 0 (=) | 0.0233 ± 0.0773 (−) | 0 ± 0 |
| F20 | 0.0025 ± 0.0109 (−) | 0 ± 0 (=) | 9.1222e−13 ± 4.9964e−12 (−) | 0 ± 0 |
| F21 | 8.6605 ± 38.6332 (−) | 0 ± 0 (=) | 7.3093e−05 ± 2.2255e−05 (−) | 0 ± 0 |
Table 6. The results of the Friedman test.

| Function | SSA | Jumping Search | Progressive Search | ISSA |
| F1 | 1.5 | 3 | 1.5 | 4 |
| F2 | 4 | 1.5 | 3 | 1.5 |
| F3 | 4 | 2 | 2 | 2 |
| F4 | 4 | 2 | 2 | 2 |
| F5 | 4 | 2 | 2 | 2 |
| F6 | 2 | 4 | 1 | 3 |
| F7 | 4 | 3 | 1 | 2 |
| F8 | 2.5 | 2.5 | 2.5 | 2.5 |
| F9 | 3 | 4 | 1 | 2 |
| F10 | 3 | 4 | 1 | 2 |
| F11 | 4 | 1.5 | 3 | 1.5 |
| F12 | 4 | 1.5 | 3 | 1.5 |
| F13 | 4 | 1.5 | 3 | 1.5 |
| F14 | 4 | 1.5 | 3 | 1.5 |
| F15 | 3 | 1.5 | 4 | 1.5 |
| F16 | 4 | 1.5 | 3 | 1.5 |
| F17 | 4 | 1.5 | 3 | 1.5 |
| F18 | 4 | 1 | 3 | 2 |
| F19 | 4 | 1.5 | 3 | 1.5 |
| F20 | 4 | 1.5 | 3 | 1.5 |
| F21 | 4 | 1.5 | 3 | 1.5 |
| Total rank | 75 | 44 | 51 | 40 |
| Average rank | 3.5714 | 2.0952 | 2.4286 | 1.9048 |
| Sort | 1 | 3 | 2 | 4 |
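The per-function ranks in Table 6 follow from ranking the four methods' mean results for each function, with tied values sharing the average of the rank positions they occupy. A minimal sketch of this tie-aware ranking (the example values are the F11 means from Table 5):

```python
def tie_ranks(values):
    """Rank values ascending (smaller is better); ties get the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # positions i..j hold 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# F11 means from Table 5: SSA, jumping search, progressive search, ISSA
f11 = [2.7472e-13, 0.0, 2.1318e-16, 0.0]
print(tie_ranks(f11))  # [4.0, 1.5, 3.0, 1.5], matching the F11 row of Table 6
```

Summing these per-function ranks and dividing by the 21 functions gives the "Total rank" and "Average rank" rows.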
Table 7. The results of the Holm test.

| i | Algorithm | z = (Ri − R4)/√(k(k + 1)/(6n)) = (Ri − R4)/√(4 × (4 + 1)/(6 × 21)) = (Ri − R4)/0.3984 | Pi | α/(k − i) |
| 1 | SSA | (3.5714 − 1.9048)/0.3984 = 4.1832 | 2e−05 | 0.0167 |
| 2 | progressive search | (2.4286 − 1.9048)/0.3984 = 1.3148 | 0.1885 | 0.0250 |
| 3 | jumping search | (2.0952 − 1.9048)/0.3984 = 0.4779 | 0.6348 | 0.0500 |
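The z statistics in Table 7 are derived from the Friedman average ranks. The sketch below reproduces the step-down Holm comparison against the control algorithm (the one with the smallest average rank, ISSA here), using the average ranks from Table 6; k is the number of algorithms and n the number of functions:

```python
import math

def holm_comparisons(avg_ranks, n, alpha=0.05):
    """Return (i, name, z, alpha/(k - i)) rows, ordered worst-first."""
    k = len(avg_ranks)
    control = min(avg_ranks, key=avg_ranks.get)  # smallest average rank
    denom = math.sqrt(k * (k + 1) / (6 * n))
    others = sorted((a for a in avg_ranks if a != control),
                    key=lambda a: avg_ranks[a], reverse=True)
    return [(i, name, (avg_ranks[name] - avg_ranks[control]) / denom,
             alpha / (k - i))
            for i, name in enumerate(others, start=1)]

# Average ranks from Table 6 (k = 4 algorithms, n = 21 functions)
ranks = {"SSA": 3.5714, "jumping search": 2.0952,
         "progressive search": 2.4286, "ISSA": 1.9048}
table7 = holm_comparisons(ranks, n=21)
```

Each z is then converted to a p-value Pi and compared against its threshold α/(k − i); the first row reproduces Table 7's z = 4.1832 for SSA with threshold 0.05/3 ≈ 0.0167.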
Table 8. The convergence results of F1–F10.

| Name | Method | Best | Worst | Mean | SD | R |
| F1 | Modified differential evolution (MDE) | −1 | −1 | −1 | 0 | 30 |
| | Improved artificial bee colony (distABC) | −0.9957 | −0.4070 | −0.8023 | 0.1716 | 30 |
| | Improved gravitational search algorithm (IGSA/PSO) | −1 | 0 | −0.8333 | 0.3790 | 30 |
| | All-dimension neighborhood based particle swarm optimization with randomly selected neighbors (ADN-RSN-PSO) | −0.9126 | 0 | −0.0345 | 0.1668 | 30 |
| | Improved grey wolf optimization (PSO-GWO) | −1.0000 | −0.9983 | −0.9997 | 4.3750e−04 | 30 |
| | ISSA | −1 | −1.0000 | −1 | 4.1233e−17 | 30 |
| F2 | MDE | 1.8981e−124 | 6.0392e−111 | 2.0134e−112 | 1.1026e−111 | 30 |
| | distABC | 6.4478e−12 | 1.3143e−04 | 4.5038e−06 | 2.3974e−05 | 21 |
| | IGSA/PSO | 3.5151e−20 | 1.2328e−18 | 4.3235e−19 | 3.5081e−19 | 30 |
| | ADN-RSN-PSO | 1.5194e−14 | 9.3798e−06 | 3.8898e−07 | 1.7203e−06 | 20 |
| | PSO-GWO | 8.0797e−271 | 5.0693e−225 | 1.6898e−226 | 0 | 30 |
| | ISSA | 0 | 0 | 0 | 0 | 30 |
| F3 | MDE | 0 | 0 | 0 | 0 | 30 |
| | distABC | 0 | 6.6613e−16 | 2.2204e−17 | 1.2162e−16 | 30 |
| | IGSA/PSO | 0 | 6.6613e−16 | 5.1810e−17 | 5.1810e−17 | 30 |
| | ADN-RSN-PSO | 1.7529e−09 | 0.3443 | 0.0209 | 0.0670 | 1 |
| | PSO-GWO | 0 | 0 | 0 | 0 | 30 |
| | ISSA | 0 | 0 | 0 | 0 | 30 |
| F4 | MDE | 0 | 0 | 0 | 0 | 30 |
| | distABC | 0 | 0 | 0 | 0 | 30 |
| | IGSA/PSO | 0 | 5.5511e−17 | 1.8504e−18 | 1.0135e−17 | 30 |
| | ADN-RSN-PSO | 2.2204e−16 | 0.0025 | 1.5597e−04 | 4.6084e−04 | 6 |
| | PSO-GWO | 0 | 0 | 0 | 0 | 30 |
| | ISSA | 0 | 0 | 0 | 0 | 30 |
| F5 | MDE | 0 | 0 | 0 | 0 | 30 |
| | distABC | 0 | 2.2128e−10 | 7.3769e−12 | 4.0400e−11 | 30 |
| | IGSA/PSO | 0 | 1.6653e−16 | 2.9606e−17 | 4.3081e−17 | 30 |
| | ADN-RSN-PSO | 4.3178e−12 | 0.5315 | 0.0194 | 0.0968 | 3 |
| | PSO-GWO | 0 | 0 | 0 | 0 | 30 |
| | ISSA | 0 | 0 | 0 | 0 | 30 |
| F6 | MDE | 0 | 0 | 0 | 0 | 30 |
| | distABC | 9.3892e−04 | 0.9114 | 0.1484 | 0.1747 | 0 |
| | IGSA/PSO | 2.3141e−19 | 6.8399e−17 | 1.4187e−17 | 1.5244e−17 | 30 |
| | ADN-RSN-PSO | 4.6791e−13 | 3.3521e−04 | 2.3684e−05 | 7.2443e−05 | 7 |
| | PSO-GWO | 4.7856e−07 | 9.8150e−04 | 1.4743e−04 | 2.1193e−04 | 0 |
| | ISSA | 0 | 1.5479e−12 | 5.1596e−14 | 5.1596e−14 | 30 |
| F7 | MDE | −1.8013 | −1.9996 | −1.8206 | 0.0587 | 30 |
| | distABC | −1.7576 | −1.1419 | −1.4170 | 0.1852 | 12 |
| | IGSA/PSO | −1.8381 | −1.9969 | −1.9554 | 0.0443 | 30 |
| | ADN-RSN-PSO | −1.7829 | −1.4155 | −1.7056 | 0.1385 | 25 |
| | PSO-GWO | −1.8013 | −1.2138 | −1.7697 | 0.1218 | 27 |
| | ISSA | −1.8013 | −1.8013 | −1.8013 | 7.5040e−16 | 30 |
| F8 | MDE | 0 | 0.0097 | 6.4773e−04 | 0.0025 | 28 |
| | distABC | 0.0014 | 0.0103 | 0.0085 | 0.0027 | 0 |
| | IGSA/PSO | 5.7927e−04 | 0.0097 | 0.0071 | 0.0034 | 0 |
| | ADN-RSN-PSO | 0.2325 | 0.4767 | 0.3792 | 0.0769 | 0 |
| | PSO-GWO | 0 | 0 | 0 | 0 | 30 |
| | ISSA | 0 | 0 | 0 | 0 | 30 |
| F9 | MDE | −4.6459 | −4.9833 | −4.7363 | 0.0866 | 30 |
| | distABC | −2.6563 | −1.7364 | −2.2089 | 0.2644 | 0 |
| | IGSA/PSO | −4.3109 | −2.5452 | −3.6519 | 0.4646 | 19 |
| | ADN-RSN-PSO | −3.4226 | −2.1793 | −2.6879 | 0.3210 | 0 |
| | PSO-GWO | −4.4527 | −2.7246 | −3.4183 | 0.4289 | 14 |
| | ISSA | −4.6877 | −4.6459 | −4.6856 | 0.0093 | 30 |
| F10 | MDE | −9.6552 | −9.1153 | −9.5328 | 0.1095 | 30 |
| | distABC | −3.5674 | −2.7672 | −3.1627 | 0.1969 | 0 |
| | IGSA/PSO | −8.6839 | −3.8323 | −5.9454 | 1.2127 | 12 |
| | ADN-RSN-PSO | −4.9021 | −3.3798 | −3.7563 | 0.3143 | 0 |
| | PSO-GWO | −6.7936 | −4.1522 | −5.4923 | 0.6488 | 0 |
| | ISSA | −9.6602 | −9.5403 | −9.6241 | 0.0411 | 30 |
Table 9. The results of the Friedman test.

| Function | MDE | distABC | IGSA/PSO | ADN-RSN-PSO | PSO-GWO | ISSA |
| F1 | 1 | 5 | 4 | 6 | 3 | 2 |
| F2 | 3 | 6 | 4 | 5 | 2 | 1 |
| F3 | 2 | 4 | 5 | 6 | 2 | 2 |
| F4 | 2.5 | 2.5 | 5 | 6 | 2.5 | 2.5 |
| F5 | 2 | 5 | 4 | 6 | 2 | 2 |
| F6 | 1 | 6 | 2 | 4 | 5 | 3 |
| F7 | 2 | 6 | 5 | 4 | 3 | 1 |
| F8 | 3 | 5 | 4 | 6 | 1.5 | 1.5 |
| F9 | 2 | 6 | 3 | 5 | 4 | 1 |
| F10 | 2 | 6 | 3 | 5 | 4 | 1 |
| Total rank | 20.5 | 51.5 | 39 | 53 | 29 | 17 |
| Average rank | 2.05 | 5.15 | 3.9 | 5.3 | 2.9 | 1.7 |
| Sort | 5 | 2 | 3 | 1 | 4 | 6 |
Table 10. The results of the Holm test.

| i | Algorithm | z = (Ri − R6)/√(k(k + 1)/(6n)) = (Ri − R6)/√(6 × (6 + 1)/(6 × 10)) = (Ri − R6)/0.8367 | Pi | α/(k − i) |
| 1 | ADN-RSN-PSO | (5.3 − 1.7)/0.8367 = 4.3026 | 2e−05 | 0.01 |
| 2 | distABC | (5.15 − 1.7)/0.8367 = 4.1233 | 4e−05 | 0.0125 |
| 3 | IGSA/PSO | (3.9 − 1.7)/0.8367 = 2.6294 | 0.0087 | 0.0167 |
| 4 | PSO-GWO | (2.9 − 1.7)/0.8367 = 1.4342 | 0.1513 | 0.025 |
| 5 | MDE | (2.05 − 1.7)/0.8367 = 0.4183 | 0.6781 | 0.05 |
Table 11. The convergence results of F11–F21.

| Name | D | Method | Best | Worst | Mean | SD | R |
| F11 | 30 | MDE | 1.0401e−14 | 1.5722e−12 | 1.9227e−13 | 3.1621e−13 | 30 |
| | | distABC | 0.0366 | 2.2711 | 0.3127 | 0.4293 | 0 |
| | | IGSA/PSO | 7.9175e−07 | 0.0051 | 5.0970e−04 | 0.0011 | 0 |
| | | ADN-RSN-PSO | 5.7203e−17 | 27.6484 | 1.0388 | 5.0631 | 9 |
| | | PSO-GWO | 5.5036e−276 | 7.3227e−204 | 2.4409e−205 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 2.4111e−07 | 7.1216e−06 | 1.3533e−06 | 1.6316e−06 | 0 |
| | | distABC | 5.3447e+03 | 1.5571e+05 | 5.4741e+04 | 3.8374e+04 | 0 |
| | | IGSA/PSO | 0.1664 | 3.5193 | 1.0541 | 0.7896 | 0 |
| | | ADN-RSN-PSO | 1.9193e−13 | 100.5956 | 6.6218 | 19.7265 | 2 |
| | | PSO-GWO | 1.6753e−281 | 1.4189e−221 | 4.7307e−223 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 0.7826 | 24.0248 | 5.3714 | 5.5151 | 0 |
| | | distABC | 1.3689e+06 | 2.7723e+06 | 2.2532e+06 | 3.6327e+05 | 0 |
| | | IGSA/PSO | 637.5637 | 3.1652e+03 | 1.3765e+03 | 659.7935 | 0 |
| | | ADN-RSN-PSO | 1.8339e−12 | 0.4017 | 0.0410 | 0.0877 | 3 |
| | | PSO-GWO | 1.2087e−284 | 2.6558e−214 | 9.3853e−216 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F12 | 30 | MDE | 9.2637e−14 | 7.6003e−11 | 5.8122e−12 | 1.3863e−11 | 30 |
| | | distABC | 0.0086 | 0.2122 | 0.0605 | 0.0567 | 0 |
| | | IGSA/PSO | 1.0212e−08 | 1.0867e−04 | 1.1358e−05 | 2.3307e−05 | 0 |
| | | ADN-RSN-PSO | 9.6397e−26 | 397.2135 | 14.0547 | 72.4037 | 3 |
| | | PSO-GWO | 6.2166e−292 | 6.3326e−231 | 2.1637e−232 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 1.6536e−07 | 9.7707e−06 | 2.9899e−06 | 2.6119e−06 | 0 |
| | | distABC | 449.1668 | 2.6220e+03 | 1.2305e+03 | 564.3590 | 0 |
| | | IGSA/PSO | 0.0041 | 1.5093 | 0.2486 | 0.3447 | 0 |
| | | ADN-RSN-PSO | 8.2423e−17 | 403.1810 | 26.2589 | 82.8892 | 3 |
| | | PSO-GWO | 6.4181e−278 | 4.5968e−223 | 1.5336e−224 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 1.6536e−07 | 9.7707e−06 | 2.9899e−06 | 2.6119e−06 | 0 |
| | | distABC | 2.2131e+05 | 2.8567e+05 | 2.5701e+05 | 1.4272e+04 | 0 |
| | | IGSA/PSO | 405.5180 | 2.5861e+03 | 1.0468e+03 | 448.9013 | 0 |
| | | ADN-RSN-PSO | 1.3639e−26 | 146.5426 | 5.9463 | 26.7914 | 6 |
| | | PSO-GWO | 7.3808e−273 | 3.3942e−226 | 1.1314e−227 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F13 | 30 | MDE | 4.9141e−15 | 2.3773e−12 | 2.3077e−13 | 4.2943e−13 | 30 |
| | | distABC | 0.0019 | 0.0248 | 0.0103 | 0.0069 | 0 |
| | | IGSA/PSO | 1.6976e−05 | 0.0101 | 0.0018 | 0.0026 | 0 |
| | | ADN-RSN-PSO | 8.0743e−29 | 38.4053 | 2.4114 | 7.6095 | 9 |
| | | PSO-GWO | 2.0610e−290 | 2.3985e−230 | 8.0010e−232 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 4.3369e−08 | 2.3226e−06 | 5.8020e−07 | 5.5813e−07 | 0 |
| | | distABC | 60.5866 | 437.6214 | 192.8170 | 90.6097 | 0 |
| | | IGSA/PSO | 0.2174 | 15.5868 | 2.6151 | 3.1442 | 0 |
| | | ADN-RSN-PSO | 2.3824e−19 | 35.9528 | 3.3225 | 9.7473 | 7 |
| | | PSO-GWO | 4.7500e−286 | 4.9925e−230 | 2.9195e−231 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 0.1475 | 4.8333 | 0.8994 | 0.9433 | 0 |
| | | distABC | 7.3635e+04 | 1.2853e+05 | 1.0609e+05 | 1.4043e+04 | 0 |
| | | IGSA/PSO | 215.7341 | 673.2718 | 428.7446 | 119.2857 | 0 |
| | | ADN-RSN-PSO | 2.7413e−20 | 2.0323e+03 | 87.6861 | 369.4296 | 1 |
| | | PSO-GWO | 2.8630e−289 | 4.3016e−227 | 1.4339e−228 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F14 | 30 | MDE | 5.2747e+03 | 3.2657e+04 | 1.3414e+04 | 6.1786e+03 | 0 |
| | | distABC | 1.3735e+04 | 9.6878e+04 | 5.2398e+04 | 1.8624e+04 | 0 |
| | | IGSA/PSO | 4.7593 | 149.2362 | 47.7658 | 37.3198 | 0 |
| | | ADN-RSN-PSO | 2.6758e−36 | 1.4738e+03 | 56.9252 | 268.1544 | 2 |
| | | PSO-GWO | 2.0189e−294 | 5.7304e−239 | 1.9102e−240 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 5.2107e+04 | 1.3519e+05 | 8.0055e+04 | 1.9715e+04 | 0 |
| | | distABC | 1.7554e+05 | 4.6751e+05 | 2.8911e+05 | 6.8753e+04 | 0 |
| | | IGSA/PSO | 472.8938 | 2.5645e+03 | 1.1703e+03 | 534.2655 | 0 |
| | | ADN-RSN-PSO | 4.9794e−13 | 1.9352e+03 | 73.0431 | 352.4516 | 2 |
| | | PSO-GWO | 3.1521e−284 | 1.7900e−240 | 5.9668e−242 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 3.1074e+05 | 6.3577e+05 | 4.2536e+05 | 8.1732e+04 | 0 |
| | | distABC | 9.0218e+05 | 2.2811e+06 | 1.4725e+06 | 3.2973e+05 | 0 |
| | | IGSA/PSO | 5.9127e+03 | 2.2013e+04 | 1.1214e+04 | 3.2236e+03 | 0 |
| | | ADN-RSN-PSO | 3.6628e−11 | 6.1001e+03 | 354.5833 | 1.1445e+03 | 2 |
| | | PSO-GWO | 5.5639e−279 | 3.6892e−238 | 6.3759e−240 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F15 | 30 | MDE | 0.6789 | 15.6594 | 4.6249 | 3.3505 | 0 |
| | | distABC | 29.4236 | 88.1198 | 69.3981 | 14.3107 | 0 |
| | | IGSA/PSO | 0.1421 | 5.0155 | 1.3894 | 1.2303 | 0 |
| | | ADN-RSN-PSO | 3.5252e−06 | 3.2581 | 0.4436 | 0.8071 | 0 |
| | | PSO-GWO | 4.9641e−143 | 4.1240e−106 | 1.3747e−107 | 7.5293e−107 | 30 |
| | | ISSA | 6.8624e−294 | 1.9355e−221 | 6.4516e−223 | 0 | 30 |
| | 50 | MDE | 7.3112 | 26.3407 | 15.3471 | 4.5763 | 0 |
| | | distABC | 86.2687 | 95.1062 | 92.3951 | 2.4449 | 0 |
| | | IGSA/PSO | 5.0050 | 11.4719 | 8.6978 | 1.5023 | 0 |
| | | ADN-RSN-PSO | 5.6962e−09 | 2.5363 | 0.4027 | 0.6049 | 1 |
| | | PSO-GWO | 3.6326e−139 | 5.3333e−111 | 1.7782e−112 | 9.7371e−112 | 30 |
| | | ISSA | 2.2890e−275 | 5.3435e−222 | 1.7813e−223 | 0 | 30 |
| | 100 | MDE | 24.5859 | 46.3713 | 33.8856 | 4.9749 | 0 |
| | | distABC | 92.1930 | 97.9642 | 96.0475 | 1.1959 | 0 |
| | | IGSA/PSO | 14.1813 | 22.4854 | 17.8145 | 1.9772 | 0 |
| | | ADN-RSN-PSO | 3.5409e−13 | 2.3798 | 0.2541 | 0.5142 | 1 |
| | | PSO-GWO | 1.3222e−143 | 3.3096e−116 | 1.2837e−117 | 6.0495e−117 | 30 |
| | | ISSA | 1.3362e−271 | 1.3887e−230 | 4.8821e−232 | 0 | 30 |
| F16 | 30 | MDE | 6.3446e−14 | 2.0800e−11 | 2.1415e−12 | 4.0992e−12 | 30 |
| | | distABC | 0.0036 | 0.1511 | 0.0145 | 0.0262 | 0 |
| | | IGSA/PSO | 0.0016 | 0.2406 | 0.0451 | 0.0495 | 0 |
| | | ADN-RSN-PSO | 9.8349e−06 | 37.3092 | 2.4168 | 7.3907 | 0 |
| | | PSO-GWO | 5.6633e−141 | 4.9625e−113 | 1.6893e−114 | 9.0549e−114 | 30 |
| | | ISSA | 9.3727e−307 | 1.1611e−225 | 3.8704e−227 | 0 | 30 |
| | 50 | MDE | 3.4911e−05 | 1.9115e−04 | 9.2416e−05 | 3.9599e−05 | 0 |
| | | distABC | 3.5850 | 100.0040 | 16.9396 | 16.7889 | 0 |
| | | IGSA/PSO | 0.5307 | 13.3690 | 2.6048 | 2.4739 | 0 |
| | | ADN-RSN-PSO | 6.1043e−08 | 20.3200 | 2.5146 | 4.8767 | 0 |
| | | PSO-GWO | 4.9924e−140 | 1.5073e−107 | 5.0244e−109 | 2.7520e−108 | 30 |
| | | ISSA | 2.1080e−272 | 7.1760e−227 | 2.4359e−228 | 0 | 30 |
| | 100 | MDE | 0.1146 | 2.2932 | 0.5097 | 0.4762 | 0 |
| | | distABC | 3.5037e+13 | 6.5783e+27 | 2.9767e+26 | 1.2607e+27 | 0 |
| | | IGSA/PSO | 16.1544 | 40.1933 | 25.2979 | 5.3738 | 0 |
| | | ADN-RSN-PSO | 1.7105e−04 | 106.2857 | 8.2050 | 21.3386 | 0 |
| | | PSO-GWO | 3.4506e−142 | 4.3916e−113 | 1.5163e−114 | 8.0109e−114 | 30 |
| | | ISSA | 8.5914e−275 | 1.5030e−226 | 5.0101e−228 | 0 | 30 |
| F17 | 30 | MDE | 75.0038 | 75.0038 | 75.0038 | 2.4914e−06 | 0 |
| | | distABC | 14.1555 | 967.8401 | 263.9005 | 227.9731 | 0 |
| | | IGSA/PSO | 6.3987e+03 | 1.3092e+05 | 3.2932e+04 | 2.9068e+04 | 0 |
| | | ADN-RSN-PSO | 9.8556e−09 | 1.2370e+06 | 1.0690e+05 | 2.8807e+05 | 1 |
| | | PSO-GWO | 1.5518e−282 | 1.3001e−226 | 4.3877e−228 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 125.0040 | 125.0089 | 125.0061 | 0.0011 | 0 |
| | | distABC | 8.7542e+05 | 8.0466e+06 | 3.7711e+06 | 1.7995e+06 | 0 |
| | | IGSA/PSO | 5.9214e+04 | 9.9143e+05 | 3.6022e+05 | 2.4168e+05 | 0 |
| | | ADN-RSN-PSO | 1.0970e−10 | 2.1368e+06 | 1.2419e+05 | 4.5201e+05 | 1 |
| | | PSO-GWO | 1.7465e−284 | 5.6730e−226 | 1.9183e−227 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 246.0984 | 254.7257 | 250.2010 | 1.7431 | 0 |
| | | distABC | 7.9086e+08 | 1.7132e+09 | 1.1912e+09 | 2.6043e+08 | 0 |
| | | IGSA/PSO | 2.8580e+06 | 2.8682e+07 | 1.0857e+07 | 5.9353e+06 | 0 |
| | | ADN-RSN-PSO | 1.6856e−10 | 3.7886e+07 | 1.9511e+06 | 7.5422e+06 | 1 |
| | | PSO-GWO | 6.8705e−282 | 1.1504e−207 | 3.8348e−209 | 0 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F18 | 30 | MDE | 5.6399e−14 | 0.0049 | 1.6441e−04 | 9.0052e−04 | 29 |
| | | distABC | 37.4773 | 52.3512 | 44.4924 | 3.8327 | 0 |
| | | IGSA/PSO | 2.2085 | 16.7038 | 7.2453 | 3.6023 | 0 |
| | | ADN-RSN-PSO | 473.9074 | 756.4108 | 618.1881 | 58.6785 | 0 |
| | | PSO-GWO | 1.4714e−04 | 0.5205 | 0.1391 | 0.1364 | 0 |
| | | ISSA | 0 | 1.8646e−11 | 6.2199e−13 | 3.4042e−12 | 30 |
| | 50 | MDE | 2.1559e−07 | 0.0247 | 0.0020 | 0.0050 | 0 |
| | | distABC | 88.2182 | 191.3685 | 142.3998 | 20.9568 | 0 |
| | | IGSA/PSO | 21.8794 | 81.5488 | 42.3904 | 14.2207 | 0 |
| | | ADN-RSN-PSO | 968.5157 | 1.3856e+03 | 1.1812e+03 | 107.6813 | 0 |
| | | PSO-GWO | 0.0012 | 0.5782 | 0.1944 | 0.1966 | 0 |
| | | ISSA | 0 | 2.0095e−14 | 3.8525e−15 | 5.4146e−15 | 30 |
| | 100 | MDE | 0.1950 | 1.1188 | 0.6049 | 0.2438 | 0 |
| | | distABC | 1.7147e+03 | 2.7003e+03 | 2.4415e+03 | 200.0542 | 0 |
| | | IGSA/PSO | 147.2077 | 249.3046 | 191.1295 | 25.7575 | 0 |
| | | ADN-RSN-PSO | 2.3436e+03 | 3.0052e+03 | 2.6587e+03 | 141.5865 | 0 |
| | | PSO-GWO | 0.0148 | 1.0070 | 0.4467 | 0.3082 | 0 |
| | | ISSA | 0 | 2.0095e−14 | 3.8525e−15 | 5.4146e−15 | 30 |
| F19 | 30 | MDE | 0.2008 | 0.3999 | 0.2899 | 0.0382 | 0 |
| | | distABC | 0.5839 | 1.3355 | 0.8775 | 0.1932 | 0 |
| | | IGSA/PSO | 1.0999 | 3.6999 | 1.8435 | 0.5102 | 0 |
| | | ADN-RSN-PSO | 1.8736e−05 | 2.4222 | 0.3108 | 0.5509 | 0 |
| | | PSO-GWO | 9.8614e−142 | 3.7041e−106 | 1.4517e−107 | 6.7762e−107 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 50 | MDE | 0.4999 | 1.1000 | 0.7342 | 0.1439 | 0 |
| | | distABC | 3.2414 | 7.4063 | 5.6732 | 0.9657 | 0 |
| | | IGSA/PSO | 3.4999 | 7.0999 | 5.0539 | 0.7385 | 0 |
| | | ADN-RSN-PSO | 1.5276e−04 | 3.7168 | 0.2954 | 0.7185 | 0 |
| | | PSO-GWO | 1.1372e−147 | 7.0961e−103 | 2.3655e−104 | 1.2956e−103 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 2.2228 | 5.7138 | 3.5648 | 0.7504 | 0 |
| | | distABC | 46.4171 | 55.0417 | 52.0511 | 1.8933 | 0 |
| | | IGSA/PSO | 9.4999 | 14.0488 | 11.1732 | 1.1383 | 0 |
| | | ADN-RSN-PSO | 1.0459e−07 | 3.5730 | 0.5094 | 0.9123 | 0 |
| | | PSO-GWO | 1.1118e−134 | 3.5210e−77 | 1.1737e−78 | 6.4285e−78 | 30 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| F20 | 30 | MDE | 2.0970e−07 | 0.0017 | 2.0093e−04 | 4.2588e−04 | 0 |
| | | distABC | 0.0349 | 0.9800 | 0.1546 | 0.2342 | 0 |
| | | IGSA/PSO | 0.0019 | 0.1676 | 0.0249 | 0.0340 | 0 |
| | | ADN-RSN-PSO | 1.9121e−07 | 18.8459 | 0.7225 | 3.4292 | 0 |
| | | PSO-GWO | 3.1745e−141 | 4.1132e−119 | 1.5246e−120 | 7.5032e−120 | 30 |
| | | ISSA | 1.1472e−268 | 1.7507e−232 | 6.9323e−234 | 0 | 30 |
| | 50 | MDE | 4.9616e−05 | 0.0076 | 0.0017 | 0.0020 | 0 |
| | | distABC | 23.2445 | 64.7154 | 36.0016 | 9.1027 | 0 |
| | | IGSA/PSO | 0.0772 | 3.2993 | 0.7989 | 0.6953 | 0 |
| | | ADN-RSN-PSO | 2.7451e−05 | 20.6995 | 1.0087 | 3.7981 | 0 |
| | | PSO-GWO | 1.1263e−143 | 1.1212e−110 | 3.7456e−112 | 2.0469e−111 | 30 |
| | | ISSA | 2.4273e−279 | 3.4792e−224 | 1.1597e−225 | 0 | 30 |
| | 100 | MDE | 0.0418 | 0.2153 | 0.1025 | 0.0453 | 0 |
| | | distABC | 178.0676 | 239.7760 | 209.6574 | 13.7426 | 0 |
| | | IGSA/PSO | 6.7693 | 16.2261 | 10.7582 | 2.3683 | 0 |
| | | ADN-RSN-PSO | 9.8116e−07 | 30.9275 | 2.5294 | 7.5049 | 0 |
| | | PSO-GWO | 1.7729e−144 | 5.6582e−121 | 2.2664e−122 | 1.0321e−121 | 30 |
| | | ISSA | 9.0375e−288 | 1.1851e−220 | 3.9503e−222 | 0 | 30 |
| F21 | 32 | MDE | 0.0204 | 38.6884 | 5.4340 | 10.5332 | 0 |
| | | distABC | 272.5527 | 8.4642e+04 | 1.9675e+04 | 2.1057e+04 | 0 |
| | | IGSA/PSO | 0.0468 | 20.9792 | 2.5257 | 3.8308 | 0 |
| | | ADN-RSN-PSO | 4.9194e−20 | 2.8246 | 0.1053 | 0.5147 | 9 |
| | | PSO-GWO | 1.2817e+07 | 1.2817e+07 | 1.2817e+07 | 0 | 0 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 52 | MDE | 0.3480 | 360.7218 | 22.4760 | 67.4348 | 0 |
| | | distABC | 4.9677e+05 | 5.6793e+06 | 2.3595e+06 | 1.3184e+06 | 0 |
| | | IGSA/PSO | 1.2118 | 5.0887e+03 | 497.2259 | 1.0764e+03 | 0 |
| | | ADN-RSN-PSO | 3.3860e−31 | 0.0562 | 0.0020 | 0.0102 | 17 |
| | | PSO-GWO | 2.0828e+07 | 2.0828e+07 | 2.0828e+07 | 0 | 0 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
| | 100 | MDE | 61.9570 | 6.2051e+03 | 1.0242e+03 | 1.3454e+03 | 0 |
| | | distABC | 3.7772e+07 | 6.2101e+07 | 5.8466e+07 | 4.7813e+06 | 0 |
| | | IGSA/PSO | 1.0160e+04 | 6.5678e+04 | 3.5827e+04 | 1.3363e+04 | 0 |
| | | ADN-RSN-PSO | 1.3081e−21 | 4.5014 | 0.2386 | 0.8825 | 13 |
| | | PSO-GWO | 4.0055e+07 | 4.0055e+07 | 4.0055e+07 | 0 | 0 |
| | | ISSA | 0 | 0 | 0 | 0 | 30 |
Table 12. The results of the Friedman test.

| Function | MDE | distABC | IGSA/PSO | ADN-RSN-PSO | PSO-GWO | ISSA |
| F11 | 3 | 5 | 4 | 6 | 2 | 1 |
| F12 | 3 | 5 | 4 | 6 | 2 | 1 |
| F13 | 3 | 5 | 4 | 6 | 2 | 1 |
| F14 | 5 | 6 | 3 | 4 | 2 | 1 |
| F15 | 5 | 6 | 4 | 3 | 2 | 1 |
| F16 | 3 | 4 | 5 | 6 | 2 | 1 |
| F17 | 3 | 4 | 5 | 6 | 2 | 1 |
| F18 | 2 | 5 | 4 | 6 | 3 | 1 |
| F19 | 3 | 5 | 6 | 4 | 2 | 1 |
| F20 | 3 | 5 | 4 | 6 | 2 | 1 |
| F21 | 4 | 5 | 3 | 2 | 6 | 1 |
| Total rank | 37 | 55 | 46 | 55 | 27 | 11 |
| Average rank | 3.3636 | 5 | 4.1818 | 5 | 2.4545 | 1 |
| Sort | 4 | 1 | 3 | 1 | 5 | 6 |
Table 13. The results of the Holm test.

| i | Algorithm | z = (Ri − R6)/√(k(k + 1)/(6n)) = (Ri − R6)/√(6 × (6 + 1)/(6 × 11)) = (Ri − R6)/0.7977 | Pi | α/(k − i) |
| 1 | ADN-RSN-PSO | (5 − 1)/0.7977 = 5.0144 | 0 | 0.01 |
| 2 | distABC | (5 − 1)/0.7977 = 5.0144 | 0 | 0.0125 |
| 3 | IGSA/PSO | (4.1818 − 1)/0.7977 = 3.9887 | 6e−05 | 0.0167 |
| 4 | MDE | (3.3636 − 1)/0.7977 = 2.9630 | 0.0030 | 0.025 |
| 5 | PSO-GWO | (2.4545 − 1)/0.7977 = 1.8234 | 0.068 | 0.05 |

MDPI and ACS Style

Wang, Y.; Du, T. An Improved Squirrel Search Algorithm for Global Function Optimization. Algorithms 2019, 12, 80. https://doi.org/10.3390/a12040080
