Article

Analysis and Improvement of Fireworks Algorithm

School of Computer, Shenyang Aerospace University, Shenyang 110136, China
*
Author to whom correspondence should be addressed.
Algorithms 2017, 10(1), 26; https://doi.org/10.3390/a10010026
Submission received: 12 December 2016 / Accepted: 14 February 2017 / Published: 17 February 2017

Abstract

The Fireworks Algorithm (FWA) is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of FWA, this paper improves the algorithm and proves that the improved algorithm converges to the global optimal solution with probability 1. To further boost performance and achieve global optimization, the proposed algorithm mainly includes the following strategies. Firstly, the population is initialized with opposition-based learning. Secondly, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation is applied to the non-optimal individuals and elite opposition-based learning to the optimal individual. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.

1. Introduction

The Fireworks Algorithm (FWA) [1] is a newly developed evolutionary algorithm. Like other evolutionary algorithms, it aims to find the vector with the best (usually minimum) fitness in the search space. Inspired by real fireworks, the main idea of FWA is to use the explosion of fireworks to search the feasible space of the optimization function, which is a novel search manner. At present, the fireworks algorithm has been applied to many practical optimization problems [2]; application areas include the factorization of a non-negative matrix [3], the design of digital filters [4], parameter optimization for the detection of spam [5], the reconfiguration of networks [6], the mass minimization of trusses [7], the parameter estimation of chaotic systems [8], the scheduling of multi-satellite control resources [9], etc.
However, similar to other intelligent optimization algorithms, the fireworks algorithm has disadvantages such as slow convergence speed and low accuracy, so many improved algorithms have been proposed. The enhanced fireworks algorithm (EFWA) was put forward after analyzing the explosion operator, mutation operator, selection strategy and mapping rule of the fireworks algorithm [10]. The adaptive fireworks algorithm (AFWA) performs self-tuning of the explosion amplitude [11]: the explosion amplitude of each firework is determined by the distance between that firework and the individual with the best fitness value in the population. The dynamic search fireworks algorithm (dynFWA) divides the fireworks population into the optimal firework, which has the optimal fitness value, and the non-optimal fireworks; the two groups maintain a good balance during the evolution, and the algorithm shows good performance [12]. However, the fireworks algorithm is still at an early stage of development and needs further study to enhance its performance.
In this paper, firstly, we use the opposition-based learning strategy to initialize the population, to increase its diversity and improve the probability of finding the global optimal solution. Secondly, a new explosion amplitude mechanism for the optimal firework is proposed, based on the population evolution rate and the population aggregation degree, to enhance the optimal firework's ability to search for the global optimal solution. In addition, adaptive t-distribution mutation for the non-optimal fireworks and elite opposition-based learning for the optimal firework are applied, in order to improve the global exploration ability and local development ability and to make FWA jump out of local optima effectively. Finally, a new selection strategy is proposed to reduce the run-time of the algorithm. Based on these changes, an improved fireworks algorithm (IFWA) is proposed to improve the convergence speed and precision and reduce the run-time.
The paper is organized as follows. Section 2 introduces the fireworks algorithm. The IFWA algorithm is presented in Section 3, and its global convergence is analyzed in Section 4. The simulation experiments and analysis of the results are given in detail in Section 5. Finally, Section 6 concludes the paper.

2. Fireworks Algorithm

In FWA, the explosion amplitude of each firework and the number of sparks produced by its explosion are calculated from its fitness value relative to the other fireworks in the population. Assume that the number of fireworks is N and the number of dimensions is d; then the explosion amplitude Ai (Equation (1)) and the number of explosion sparks Si (Equation (2)) for each firework Xi are calculated as follows.
$$A_i = A \times \frac{f(X_i) - y_{\min} + \varepsilon}{\sum_{i=1}^{N}\left(f(X_i) - y_{\min}\right) + \varepsilon} \quad (1)$$
$$S_i = m \times \frac{y_{\max} - f(X_i) + \varepsilon}{\sum_{i=1}^{N}\left(y_{\max} - f(X_i)\right) + \varepsilon} \quad (2)$$
where ymax = max(f(Xi)), ymin = min(f(Xi)), A and m are two constants that control the explosion amplitude and the number of explosion sparks, and ε is the machine epsilon, used to avoid Ai or Si being equal to 0.
To prevent good fireworks from producing too many explosion sparks and poor fireworks from producing too few, the spark count Si is bounded as follows.
$$S_i = \begin{cases} \mathrm{round}(a \times m), & S_i < a \times m \\ \mathrm{round}(b \times m), & S_i > b \times m \\ \mathrm{round}(S_i), & \text{otherwise} \end{cases} \quad (3)$$
where a and b are fixed constant parameters that confine the range of the population size.
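Equations (1)–(3) can be condensed into a few vectorized lines. The following is a minimal NumPy sketch; the function name and the default parameter values (A = 40, m = 50, a = 0.04, b = 0.8) are illustrative assumptions on our part, not prescribed by the text:

```python
import numpy as np

def explosion_params(fitness, A_hat=40.0, m=50, a=0.04, b=0.8):
    """Explosion amplitudes (Eq. 1) and bounded spark counts (Eqs. 2-3)."""
    eps = np.finfo(float).eps              # machine epsilon, as in the text
    y_min, y_max = fitness.min(), fitness.max()
    # Eq. 1: amplitude grows with the fitness value, so worse fireworks
    # explode over a wider range while better ones search locally.
    A = A_hat * (fitness - y_min + eps) / (np.sum(fitness - y_min) + eps)
    # Eq. 2: spark count shrinks with the fitness value, so better
    # fireworks produce more sparks.
    S = m * (y_max - fitness + eps) / (np.sum(y_max - fitness) + eps)
    # Eq. 3: clamp the spark count into [a*m, b*m] before rounding.
    S = np.round(np.clip(S, a * m, b * m)).astype(int)
    return A, S
```

Note how the best firework's amplitude collapses to roughly A·ε/(Σ + ε) ≈ 0 here, which is exactly the defect analyzed in Section 3.2.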
Based on Ai and Si, the explosion operator is performed (confer Algorithm 1). For each of the Si explosion sparks of each firework Xi, Algorithm 1 is performed once. In Algorithm 1, the operator % refers to the modulo operation, and Xmink and Xmaxk refer to the lower and upper bounds of the search space in dimension k.
Algorithm 1: Generating Explosion Sparks
  Initialize the location of the explosion sparks: Xj = Xi
  Calculate the number of explosion sparks Si
  Calculate the explosion amplitude Ai
  Set z = rand(1,d)
  For k = 1:d do
   If kz then
    Xjk = Xjk + rand(0,Ai)
    If Xjk out of bounds
     Xjk = Xmink + |Xjk|% (XmaxkXmink)
    End if
   End if
  End for
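Algorithm 1 can be sketched as follows. This is a hedged NumPy interpretation in which z dimensions are chosen uniformly at random; the helper name and the use of a `Generator` argument are our own choices:

```python
import numpy as np

def explosion_spark(x_i, A_i, x_min, x_max, rng):
    """Generate one explosion spark from firework x_i (Algorithm 1)."""
    d = len(x_i)
    x_j = x_i.copy()
    z = rng.integers(1, d + 1)                   # how many dimensions to shift
    dims = rng.choice(d, size=z, replace=False)  # which dimensions to shift
    x_j[dims] += rng.uniform(0.0, A_i, size=z)   # offset within the amplitude
    # modulo mapping rule for out-of-bounds coordinates
    out = (x_j < x_min) | (x_j > x_max)
    x_j[out] = x_min[out] + np.abs(x_j[out]) % (x_max[out] - x_min[out])
    return x_j
```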
To increase the diversity of the population, the fireworks algorithm introduces a mutation operator that generates mutation sparks, namely Gaussian sparks. A Gaussian spark is produced as follows: first, a firework Xi is randomly selected from the population; then a certain number of its dimensions are randomly selected for the Gaussian mutation operation. Algorithm 2 shows this process; it is performed Ng times, each time with a randomly selected firework Xi (Ng is a constant that controls the number of Gaussian sparks).
Algorithm 2: Generating Gaussian Sparks
  Initialize the location of the Gaussian sparks: Xj = Xi
  Calculate offset displacement: g = Gaussian(1,1)
  Set z = rand(1,d)
  For k = 1:d do
   If kz then
    Xjk = Xjk × g
    If Xjk out of bounds
     Xjk = Xmink + |Xjk| % (XmaxkXmink)
    End if
   End if
  End for
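Algorithm 2 differs from Algorithm 1 only in the offset rule, as the following sketch (same illustrative conventions as before) shows:

```python
import numpy as np

def gaussian_spark(x_i, x_min, x_max, rng):
    """Generate one Gaussian mutation spark from firework x_i (Algorithm 2)."""
    d = len(x_i)
    x_j = x_i.copy()
    g = rng.normal(1.0, 1.0)                     # offset g ~ Gaussian(1, 1)
    z = rng.integers(1, d + 1)
    dims = rng.choice(d, size=z, replace=False)
    x_j[dims] *= g                               # scale the selected dimensions
    # same modulo mapping rule as for explosion sparks
    out = (x_j < x_min) | (x_j > x_max)
    x_j[out] = x_min[out] + np.abs(x_j[out]) % (x_max[out] - x_min[out])
    return x_j
```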
In order to transmit the excellent information in the fireworks population to the next generation population, the algorithm chooses a certain number of individuals (including fireworks, explosion sparks and Gaussian sparks) as the next generation fireworks.
In FWA, the current best location is always kept for the next iterations. In order to keep the diversity, the remaining N − 1 locations are selected based on the method of roulette. For location Xi, the selection probability pi is calculated as follows:
$$p(X_i) = \frac{R(X_i)}{\sum_{j \in K} R(X_j)} \quad (4)$$
$$R(X_i) = \sum_{j \in K} d(X_i, X_j) = \sum_{j \in K} \left\| X_i - X_j \right\| \quad (5)$$
where K is the set of all current locations including original fireworks and both types of sparks (without the best location). As a result, fireworks or sparks in low crowded regions will have a higher probability to be selected for the next iteration than fireworks or sparks in crowded regions.
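Equations (4) and (5) amount to the following distance-based roulette weights (a sketch; note that the pairwise distance matrix is the O(N²) cost that Section 3.4 later removes):

```python
import numpy as np

def selection_probabilities(locations):
    """Distance-based selection probabilities of FWA (Eqs. 4-5)."""
    # R(X_i) = sum_j ||X_i - X_j|| over all current locations
    diff = locations[:, None, :] - locations[None, :, :]
    R = np.linalg.norm(diff, axis=2).sum(axis=1)
    return R / R.sum()                           # Eq. 4
```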
Algorithm 3 summarizes the framework of FWA. Based on Algorithms 1 and 2, explosion sparks and Gaussian sparks are produced, respectively. For explosion sparks, the number of sparks and the amplitude of the explosion depend on the fitness value of the firework. In contrast, Gaussian sparks are generated by the Gaussian mutation process. After that, the best location and N − 1 other locations are selected for the next explosion.
Algorithm 3: Construction of FWA
  Initialize n positions of fireworks randomly
  While Stop criterion is false
   For each firework Xi
    Calculate the explosion amplitude Ai by Equation (1)
    Calculate the number of explosion sparks Si by Equation (2)
    Generating explosion sparks based on Algorithm 1
   End for
   For k = 1:Ng do
    Select a firework Xi randomly
    Generating Gaussian sparks based on Algorithm 2
   End for
   Save the best position to the next explosion
   Based on the probability given by Equation (4), select N − 1 locations randomly from the two kinds of sparks and the current fireworks.
  End while

3. Analysis and Improvement of Fireworks Algorithm

3.1. Opposition-Based Learning Population Initialization

Opposition-based learning (OBL) was first proposed by Tizhoosh [13]. OBL simultaneously considers a solution and its opposite solution; the fitter one is then chosen as a candidate solution in order to accelerate convergence and improve solution accuracy. It has been used to enhance various optimization algorithms, such as differential evolution [14], particle swarm optimization [15], the firefly algorithm [16], the adaptive fireworks algorithm [17] and the quantum fireworks algorithm [18]. Inspired by these, OBL is added to FWA to initialize the population.
Definition 1.
Assume X = (x1,x2,...,xd) is a solution with d dimensions, where x1,x2,...,xdR and xi∈[Li,Ui], i=1,2,...,d. The opposite solution OX = (ox1,ox2,...,oxd) is defined as follows:
$$ox_i = L_i + U_i - x_i \quad (6)$$
In fact, according to probability theory, 50% of the time an opposite solution is better. Therefore, based on a solution and an opposite solution, OBL has the potential to accelerate convergence and improve solution accuracy.
In the population initialization, both a random population P and its opposite population OP are considered to obtain fitter starting candidate solutions.
Algorithm 4 is performed for opposition-based population initialization as follows.
Algorithm 4: Opposition-Based Population
Initialize fireworks P with a size of N randomly
Calculate an opposite fireworks OP based on Equation (6)
Assess 2 × N fireworks’ fitness
Choose the fittest individuals from P and OP as initial fireworks
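Algorithm 4 can be sketched as below; the helper names and the sphere objective used in testing are illustrative assumptions:

```python
import numpy as np

def obl_initialize(f, N, lower, upper, rng):
    """Opposition-based population initialization (Algorithm 4, Eq. 6)."""
    P = rng.uniform(lower, upper, size=(N, len(lower)))
    OP = lower + upper - P                    # opposite population, Eq. 6
    both = np.vstack([P, OP])
    fit = np.apply_along_axis(f, 1, both)     # assess all 2N candidates
    return both[np.argsort(fit)[:N]]          # keep the N fittest
```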

3.2. Analysis and Improvement of Explosion Amplitude

The main purpose of Equation (1) is to make the explosion amplitude of a firework inversely proportional to its fitness value, which enhances the local search ability of the fireworks. However, if we apply the optimal firework to Equation (1), the result is as follows.
$$A_i = A \times \frac{\varepsilon}{\sum_{i=1}^{N}\left(f(X_i) - y_{\min}\right) + \varepsilon} \quad (7)$$
Since the numerator is the smallest constant representable in the computer, the result of Equation (7) is approximately 0, which is obviously inconsistent with the original design intent of the fireworks algorithm: the optimal firework is required to generate the largest number of sparks, yet with an amplitude near 0 these sparks barely search at all and only increase the amount of computation in vain.
To solve this problem, the authors of [10] give linear decreasing (Equation (8)) and non-linear decreasing (Equation (9)) explosion amplitude strategies as follows.
$$A_i = A_{init} - \left(A_{init} - A_{final}\right) \times \frac{t}{T} \quad (8)$$
$$A_i = A_{init} - \frac{A_{init} - A_{final}}{T} \times \sqrt{(2T - t)\,t} \quad (9)$$
where T is the maximum number of iterations, t is the number of iterations, Ainit and Afinal are the initial and final value of the explosion amplitude respectively.
Although the two strategies effectively prevent the explosion amplitude of the optimal firework from approaching 0, they rely heavily on the maximum number of iterations, which must be set manually. Based on this, we propose a new method to calculate the explosion amplitude of the optimal firework. Considering the population evolution rate and the population aggregation degree, this method changes the explosion amplitude dynamically.
Definition 2.
Assume the global optimal value of generation t is denoted Ymin(t), and the global optimal value of generation t − 1 is Ymin(t − 1). The population evolution rate a(t) is defined as follows.
$$0 < a(t) = \frac{Y_{\min}(t) + \varepsilon}{Y_{\min}(t-1) + \varepsilon} \le 1 \quad (10)$$
Because the fireworks algorithm retains the optimal firework in each iteration, the current global optimal value is always better than or equal to the global optimal value of the last iteration. From Equation (10), a large change in a(t) means that the evolution speed is fast. When a(t) stays equal to 1 over several iterations, the algorithm has stagnated or has found the optimal value.
Definition 3.
Assume the global optimal value of generation t is denoted Ymin(t), and the average fitness value of all fireworks in generation t is denoted Yavg(t). The population aggregation degree b(t) is defined as follows.
$$0 < b(t) = \frac{Y_{\min}(t) + \varepsilon}{Y_{avg}(t) + \varepsilon} \le 1 \quad (11)$$
From Equation (11), a larger b(t) indicates that the fireworks of the population are distributed more densely.
Definitions 2 and 3 clearly reflect the optimization process of FWA. Adjusting the explosion amplitude of the optimal firework according to the population evolution rate and population aggregation degree therefore couples the explosion amplitude to the optimization process.
When a(t) is small, the evolution speed is fast, and the algorithm can search in a large space. That is, the optimal firework can be optimized in a large scope; when a(t) is too large, the search is performed in a small scope to find the optimal solution faster.
When b(t) is small, the fireworks are scattered and are less likely to fall into local optima, which is more likely to happen when b(t) assumes greater values. At this time, it is necessary to increase the explosion amplitude to increase the search space and improve the global searching ability of FWA.
To sum up, the explosion amplitude should decrease as the population evolution rate increases, and increase as the population aggregation degree increases. This paper describes this behavior in a simplified way.
$$A_i = \begin{cases} A_i \times up, & a(t) \ne 1 \ \text{or}\ b(t) = 1 \\ A_i \times low, & \text{otherwise} \end{cases} \quad (12)$$
where Ai is the explosion amplitude, whose initial value is set to the size of the search space of the objective function; up is the enlargement factor and low is the reduction factor. Of course, an excessively large up or an excessively small low harms the precision of the search, so up and low should be set to suitable values.
This improvement is discussed in the following section:
  • When a(t) is not equal to 1, the algorithm has found a better solution than the last generation, and the explosion amplitude should be enlarged. We emphasize that increasing the explosion amplitude may speed up the rate of convergence: assume that the current optimal firework is far from the global optimum; increasing the explosion amplitude is a direct and efficient way to help the algorithm move faster towards the global optimum. However, it should also be noted that the probability of finding a better firework decreases as the search space increases (obviously, this depends to a large extent on the optimization function).
  • When b(t) is equal to 1, the algorithm may have fallen into a local optimum and the fireworks are concentrated; increasing the explosion amplitude scatters the fireworks, which helps the algorithm jump out of the local optimum effectively.
  • When a(t) is equal to 1 and b(t) is not equal to 1, the algorithm has not found a better solution than the last generation and the fireworks are scattered. In this case, the explosion amplitude of the optimal firework is reduced to narrow the search to a smaller area, thereby enhancing the local development capability of the optimal firework. In general, the probability of finding a better firework increases as the explosion amplitude decreases.
In this paper, the explosion amplitude of the optimal firework is calculated by Equation (12), while the explosion amplitude of the non-optimal fireworks is still calculated by Equation (1). Algorithm 5 updates the explosion amplitude as follows.
Algorithm 5: Update Explosion Amplitude
Find the optimal firework among all fireworks of generation t: Xbest
Calculate the fitness of the optimal firework of generation t: Ymin(t)
Calculate the fitness of the optimal firework of the last iteration: Ymin(t − 1)
Calculate the average fitness value of all fireworks of generation t: Yavg(t)
Calculate the population evolution rate: a(t)
Calculate the population aggregation degree: b(t)
For the optimal firework:
If a(t) ≠ 1 or b(t) = 1 then
Ai = Ai × up
else
Ai = Ai × low
End if
For the non-optimal fireworks:
Ai is calculated by Equation (1)
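The amplitude update of Algorithm 5 for the optimal firework reduces to a few lines. This is a sketch; the exact-equality tests on a(t) and b(t) follow Equation (12) literally:

```python
import numpy as np

def update_best_amplitude(A, y_min_t, y_min_prev, y_avg_t, up=1.2, low=0.9):
    """Update the optimal firework's explosion amplitude (Eq. 12)."""
    eps = np.finfo(float).eps
    a_t = (y_min_t + eps) / (y_min_prev + eps)   # evolution rate, Eq. 10
    b_t = (y_min_t + eps) / (y_avg_t + eps)      # aggregation degree, Eq. 11
    if a_t != 1.0 or b_t == 1.0:
        # a better solution was found, or the fireworks are too
        # concentrated: enlarge the amplitude
        return A * up
    # stagnation with scattered fireworks: shrink the search area
    return A * low
```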
Figure 1 depicts the process of explosion amplitude enlargement and reduction during the optimization of the Sphere function. An alternating behavior is observed, with reduction performed more often: on the one hand because up and low are set to 1.2 and 0.9, and on the other hand because the initial explosion amplitude is set to the size of the search space, which is a considerable initial value.

3.3. Analysis and Improvement of Gaussian Mutation

Zheng pointed out the shortcomings of the Gaussian mutation in FWA [10] and proposed a new way to generate the locations of Gaussian sparks, calculated as follows.
$$x_i^k = X_i^k + \left(X_b^k - X_i^k\right) \times g \quad (13)$$
where g = Gaussian(0,1) and Xbk is the position of the optimal firework of the current fireworks population in the k-th dimension.
Cauchy mutation has a strong global search ability due to its larger search range, while Gaussian mutation has a strong local development ability with its small search range [19]. Therefore, the advantage of Equation (13) is that it improves the local development capability of the algorithm, but it does not improve the global search ability in the early stage of the algorithm. Zhou pointed out that t-distribution mutation combines the advantages of the Cauchy and Gaussian mutations [20]: it has a strong global search ability in the early stage of the algorithm and a good local development ability in the later stage.
From Equation (13), when the optimal firework of the current population happens to be selected for Gaussian mutation, substituting it into Equation (13) gives
$$x_i^k = X_i^k \quad (14)$$
As we know, the optimal firework carries the best information of the current population, but by Equation (14) the Gaussian mutation has no effect on the optimal firework.
To sum up, adaptive t-distribution mutation is proposed for the non-optimal fireworks to effectively keep a better balance between exploration and exploitation, and elite opposition-based learning is applied to the optimal firework to make FWA jump out of local optima effectively and accelerate the global search.

3.3.1. Adaptive t-Distribution Mutation

The t-distribution, also known as Student's t-distribution, has n degrees of freedom. When n → ∞, it converges to Gaussian(0,1); when n = 1, it equals Cauchy(0,1). That is, the Gaussian distribution and the Cauchy distribution are the two boundary special cases of the t-distribution [20].
Definition 4.
Adaptive t-distribution mutation for the non-optimal fireworks generates the locations of sparks as follows.
$$x_i^k = X_i^k + \left(X_b^k - X_i^k\right) \times t(n) \quad (15)$$
where n is the number of iterations; that is, the iteration counter serves as the degrees of freedom of the t-distribution.
Algorithm 6 is performed for Adaptive t-distribution mutation for non-optimal fireworks to generate location of sparks as follows.
Algorithm 6: Generating t-Distribution Mutation Sparks
  Initialize the location of the mutation sparks: Xj = Xi
  Set z = rand(1,d)
  For k = 1:d do
   If kz then
    Xjk = Xjk + (Xbk − Xjk) × t(n)
    If Xjk out of bounds
     Xjk = Xmink + |Xjk|% (XmaxkXmink)
    End if
   End if
  End for
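Algorithm 6 maps directly onto NumPy's `standard_t` sampler; in this sketch `n_iter` is the iteration counter used as the degrees of freedom, and the helper conventions are ours:

```python
import numpy as np

def t_mutation_spark(x_i, x_best, n_iter, x_min, x_max, rng):
    """Adaptive t-distribution mutation spark for a non-optimal firework (Eq. 15)."""
    d = len(x_i)
    x_j = x_i.copy()
    z = rng.integers(1, d + 1)
    dims = rng.choice(d, size=z, replace=False)
    # df = iteration counter: Cauchy-like early on, Gaussian-like later
    x_j[dims] += (x_best[dims] - x_j[dims]) * rng.standard_t(n_iter, size=z)
    out = (x_j < x_min) | (x_j > x_max)
    x_j[out] = x_min[out] + np.abs(x_j[out]) % (x_max[out] - x_min[out])
    return x_j
```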
In the early stage of the algorithm, the value of n is small and the t-distribution mutation is similar to Cauchy distribution mutation, and it has a good global exploratory ability. In the later stage of the algorithm, the value of n is large, and the t-distribution mutation is similar to Gaussian distribution mutation, and it has a good local development ability. In the mid-run of the algorithm, the t-distribution mutation is between the Cauchy distribution mutation and the Gaussian distribution mutation. Therefore, the t-distribution combines the advantages of Gaussian distribution and Cauchy distribution, balancing the exploration and exploitation.

3.3.2. Elite Opposition-Based Learning

The basic idea of opposition-based learning is as follows: for a feasible solution, we evaluate its opposite solution simultaneously, and the better of the current feasible solution and the opposite solution is selected for the next generation. Opposition-based learning maintains the diversity of the population, but if every firework produced an opposite solution, the search would be blind and the amount of computation would increase. Therefore, here only the optimal individual performs opposition-based learning.
Definition 5.
Assume Xbest = (xbest,1, xbest,2, ..., xbest,d) is the solution of the optimal firework with d dimensions, where xbest,1, xbest,2, ..., xbest,d ∈ R and xbest,i ∈ [mini, maxi], i = 1, 2, ..., d. The opposite solution OXbest = (oxbest,1, oxbest,2, ..., oxbest,d) is defined as follows.

$$ox_{best,i} = rand \times (\min_i + \max_i) - x_{best,i} \quad (16)$$
where rand is a uniform distribution on the interval [0, 1], and mini and maxi are the minimum and maximum values of the current search interval on the i dimension.
By Definition 5, rand is a uniformly distributed random number on [0, 1]. When rand takes different values, the optimal firework of the current population can produce a number of different opposite elite fireworks, which effectively increases the diversity of the population and prevents the algorithm from getting trapped in a local optimal solution.
Algorithm 7 is performed for elite opposition-based learning for optimal firework to generate location of sparks. This algorithm is performed Nop times (Nop is a constant to control the number of elite opposition-based sparks).
Algorithm 7: Generating Elite Opposition-Based Sparks
  Find the location of optimal firework: Xbest = (x1,x2,...,xd)
  For i = 1:d do
   Find mini and maxi of the current search interval on the i dimension
   oxbest,i = rand × (mini + maxi) − xbest,i
   If oxbest,i out of bounds
    oxbest,i = mini + |oxbest,i| % (maxi − mini)
   End if
  End for
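Algorithm 7 is essentially one vectorized line plus the mapping rule, as the following sketch (same notation assumptions as the earlier examples) shows:

```python
import numpy as np

def elite_opposition_spark(x_best, cur_min, cur_max, rng):
    """Elite opposition-based spark for the optimal firework (Eq. 16)."""
    r = rng.uniform(0.0, 1.0, size=len(x_best))
    ox = r * (cur_min + cur_max) - x_best     # Eq. 16, dimension-wise
    # map coordinates falling outside the current interval back inside
    out = (ox < cur_min) | (ox > cur_max)
    ox[out] = cur_min[out] + np.abs(ox[out]) % (cur_max[out] - cur_min[out])
    return ox
```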

3.4. Analysis and Improvement of Selection Strategy

From Equations (4) and (5), the selection strategy of FWA is based on a distance measure. However, this requires computing the Euclidean distance matrix between every pair of points in each generation, which makes the fireworks algorithm time-consuming. Based on this, this paper proposes a new selection strategy: the Elitism-Disruptive selection strategy.
As in FWA, the Elitism-Disruptive selection always keeps the current best location for the next iteration. In order to preserve diversity, the remaining N − 1 locations are selected by the disruptive selection operator. For location Xi, the selection probability pi is calculated as follows [21]:
$$p_i = \frac{Y_i'}{\sum_{n=1}^{SN} Y_n'} \quad (17)$$
$$Y_i' = \left| Y_i - Y_{avg} \right| \quad (18)$$
where Yi is the fitness value of the objective function at Xi, Yavg is the mean of all fitness values of the population in generation t, and SN is the number of all fireworks and sparks.
The selection probabilities determined by this method give both good and poor individuals more chances to be selected for the next iteration, while individuals with mid-range fitness values tend to be eliminated. This method not only maintains the diversity of the population and provides better global search ability, but also greatly reduces the run-time compared with FWA.
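Equations (17) and (18) need only the fitness vector, so the O(N²) distance matrix of Equation (5) disappears, as this sketch shows:

```python
import numpy as np

def disruptive_probabilities(fitness):
    """Elitism-Disruptive selection probabilities (Eqs. 17-18)."""
    dev = np.abs(fitness - fitness.mean())    # Y'_i = |Y_i - Y_avg|, Eq. 18
    return dev / dev.sum()                    # Eq. 17
```

Both the best and the worst individuals receive large weights while mid-range individuals are suppressed, which is what preserves diversity at O(N) cost.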

4. Global Convergence Analysis of IFWA

Tan studied the convergence of the fireworks algorithm, the main conclusions are as follows [2]:
Lemma 1.
The random process $\{\varepsilon(t)\}_{t=0}^{\infty}$ of FWA is an absorbing Markov process [2].
Definition 6.
Given an absorbing Markov process $\{\varepsilon(t)\}_{t=0}^{\infty}$ and an optimal state space $Y^* \subset Y$, $\lambda(t) = P\{\varepsilon(t) \in Y^*\}$ denotes the probability of reaching the optimal state at time t. If $\lim_{t \to \infty} \lambda(t) = 1$, then $\{\varepsilon(t)\}_{t=0}^{\infty}$ converges [2].
Theorem 1.
Given an absorbing Markov process $\{\varepsilon(t)\}_{t=0}^{\infty}$ and an optimal state space $Y^* \subset Y$: if, for all t, $P\{\varepsilon(t) \in Y^* \mid \varepsilon(t-1) \notin Y^*\} \ge d > 0$ and $P\{\varepsilon(t) \in Y^* \mid \varepsilon(t-1) \in Y^*\} = 1$, then $P\{\varepsilon(t) \in Y^*\} \ge 1 - (1 - d)^t$ [2].
Based on this, the global convergence of IFWA is given as follows:
IFWA contains a t-distribution mutation, which is assumed to be a random variation for simplicity.
Theorem 2.
Given the absorbing Markov process $\{\varepsilon(t)\}_{t=0}^{\infty}$ of IFWA and an optimal state space $Y^* \subset Y$, the following holds:

$$\lim_{t \to \infty} \lambda(t) = 1 \quad (19)$$
Proof of Theorem 2.
Assume Pt is the probability that a firework moves from the non-optimal region to the optimal region Rbest in IFWA under the action of the t-distribution mutation:

$$P_t = \frac{\nu(R_{best}) \times N}{\nu(S)} \quad (20)$$
where $\nu(S)$ is the Lebesgue measure of the problem space S and N is the number of fireworks. Obviously, $\nu(R_{best}) > 0$, so $P_t > 0$.
Based on the Markov process of IFWA, we have:

$$\lambda(t) = P\{\varepsilon(t) \in Y^* \mid \varepsilon(t-1) \notin Y^*\} = P_t + P_e \quad (21)$$
where Pe is the probability that a firework moves from the non-optimal region to the optimal region Rbest in IFWA under the action of the explosion.
From Equation (21), the following holds:

$$P\{\varepsilon(t) \in Y^* \mid \varepsilon(t-1) \notin Y^*\} \ge P_t > 0 \quad (22)$$
Because the iterative process of IFWA retains the optimal firework, if the optimal firework is the global solution in the last iteration, it must remain the global solution in the current iteration. Hence:

$$P\{\varepsilon(t) \in Y^* \mid \varepsilon(t-1) \in Y^*\} = 1 \quad (23)$$
Since the Markov process of IFWA is an absorbing Markov process, the conditions of Theorem 1 are satisfied, and we obtain:

$$P\{\varepsilon(t) \in Y^*\} \ge 1 - (1 - P_t)^t \quad (24)$$

that is, $\lim_{t \to \infty} P\{\varepsilon(t) \in Y^*\} \ge \lim_{t \to \infty} \left(1 - (1 - P_t)^t\right) = 1$, and since a probability is at most 1, $\lim_{t \to \infty} P\{\varepsilon(t) \in Y^*\} = 1$.
Based on Definition 6, the Markov process of IFWA will converge to the optimal state.

5. Simulation Results and Analysis

To assess the performance of IFWA, it is compared with FWA [1], EFWA [10], dynFWA [12] and SPSO2011 [22].

5.1. Simulation Settings

Similar to FWA, the number of fireworks in IFWA is set to 5, and the number of elite opposition-based sparks is also set to 5; however, in IFWA the maximum number of sparks per generation is set to 200. The reduction and amplification factors of IFWA are set to 0.9 and 1.2 based on experience. The explosion amplitude is initialized to the size of the search space to keep the exploratory ability high at the beginning of the algorithm. The FWA parameters are set in accordance with [1], the EFWA parameters with [10], the dynFWA parameters with [12], and the SPSO2011 parameters with [22].
In the experiment, each algorithm is run 51 times on each function, and the final results after 300,000 function evaluations are presented. To verify the performance of the proposed algorithm, we use the CEC2013 test set [23], which includes 28 different types of test functions, listed in Table 1. The dimension of all test functions is set to 30 (d = 30).
Finally, we use Matlab R2014a software on a PC with a 3.2 GHz CPU (Intel Core i5-3470), and 4 GB RAM, and Windows 7(64 bit).

5.2. Simulation Results and Analysis

5.2.1. Verify Each Improvement

This paper proposes the following four improvements:
  • The opposition-based learning used to initialize population.
  • A new mechanism to adjust an explosion amplitude of the optimal firework.
  • t-distribution mutation for non-optimal fireworks, and the elite opposition-based learning for optimal firework.
  • A new selection strategy, called disruptive selection, is used to select next generation.
This section verifies each improvement by comparison with FWA; the results are shown in Table 2. FWA is the basic fireworks algorithm, FWA-I is the basic algorithm with improvement 1, FWA-II with improvements 1 and 2, FWA-III with improvements 1–3, and IFWA is FWA with all four improvements.
As Table 2 shows, FWA-I, FWA-II, FWA-III and IFWA achieve different degrees of performance improvement over the fireworks algorithm, and IFWA shows the best performance.

5.2.2. Searching Curves Comparison

Due to limited space, this paper selects eight functions on which the five algorithms differ greatly in evolution speed. Figure 2 shows the searching curves of these eight functions for FWA, EFWA, dynFWA, SPSO2011 and IFWA. Figure A1 shows the searching curves of the remaining 20 functions.
Figure 2 shows that IFWA converges faster than FWA, EFWA and dynFWA on the eight functions. On f2 and f12, SPSO2011 is better, while on the other functions IFWA still converges faster. Thus, IFWA achieves the best solution accuracy on most functions.

5.2.3. Comparison of Average Fitness Value and Average Rank

Table 3 shows comparison of average fitness value and total number of rank 1 for FWA, EFWA, dynFWA, SPSO2011 and IFWA.
The results from Table 3 indicate that the total number of rank 1 of IFWA (17) is the best in the five algorithms.

5.2.4. Comparison of Average Run-Time Cost

Figure 3 and Figure 4 show comparison of average run-time cost in 28 functions for FWA, EFWA, dynFWA, SPSO2011 and IFWA.
The results from Figure 3 and Figure 4 indicate that the average run-time cost of SPSO2011 is the most expensive among the five algorithms, while the time cost of IFWA is the least.

5.2.5. Comparison of Statistical Test

To evaluate whether the IFWA results were significantly different from those of FWA, EFWA, dynFWA and SPSO2011, the IFWA mean results during iteration for each test function were compared with those of the other algorithms. The t-test [24], which is safe and robust, was utilized at the 5% level to detect significant differences between these pairwise samples for each test function.
The ttest2 function in Matlab R2014a was used to run the t-test, as shown in Table 4. The null hypothesis is that the results of IFWA, SPSO2011, FWA, EFWA and dynFWA are drawn from distributions of equal mean. To avoid inflating the type I error, we correct for multiple comparisons using Holm's method: the p-values of the four hypotheses are ordered from smallest to largest, and since there are four t-tests, the significance level 0.05 is adjusted to 0.0125, 0.0167, 0.025 and 0.05, respectively; each p-value is then compared with its adjusted level.
Here the p-value is the result of the t-test. A “+” indicates rejection of the null hypothesis at the 5% significance level, and a “−” indicates acceptance of the null hypothesis at the 5% significance level.
Table 4 indicates that IFWA showed a large improvement over FWA, EFWA, dynFWA and SPSO2011 in most functions.

6. Conclusions

Based on the analysis of the FWA, an improved fireworks algorithm (IFWA) is proposed in this paper. IFWA first introduces opposition-based learning into FWA to initialize the population. Moreover, to address the shortcomings of the explosion amplitude in FWA, a new explosion amplitude mechanism is proposed. Then, an adaptive t-distribution mutation is applied to the non-optimal fireworks, and elite opposition-based learning to the optimal firework. Finally, a new selection mechanism is proposed, which reduces the run-time of the algorithm.
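The opposition-based initialization summarized above can be sketched generically (a minimal sketch of standard opposition-based learning, not the authors' implementation; `obl_init` and its parameters are hypothetical names): generate a random population, form the opposite of each point as lb + ub − x per dimension, and keep the fittest half of the union.

```python
import random

def obl_init(n, dim, lb, ub, fitness):
    """Opposition-based initialization: keep the fittest n of 2n candidates."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    # The opposite of x in [lb, ub] is lb + ub - x, taken per dimension.
    opposite = [[lb + ub - x for x in ind] for ind in pop]
    union = pop + opposite
    union.sort(key=fitness)   # minimization: smaller fitness is better
    return union[:n]

# Example on the Sphere function (f1 in Table 1, shifted so the optimum is 0):
sphere = lambda ind: sum(x * x for x in ind)
pop = obl_init(n=5, dim=2, lb=-100.0, ub=100.0, fitness=sphere)
assert len(pop) == 5
assert all(sphere(a) <= sphere(b) for a, b in zip(pop, pop[1:]))
```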
We apply the CEC2013 standard functions to examine the proposed algorithm and compare IFWA with SPSO2011, FWA, EFWA and dynFWA. The results clearly indicate that IFWA performs significantly better than FWA, EFWA, dynFWA and SPSO2011 in terms of solution accuracy. Overall, the research demonstrates that IFWA performs best in both solution accuracy and run-time cost.

Acknowledgments

The authors are thankful to the anonymous reviewers for their valuable comments to improve the technical content and the presentation of the paper. This paper is supported by the Liaoning Provincial Department of Education Science Foundation (Grant No. L2013064) and AVIC Technology Innovation Fund (basic research) (Grant No. 2013S60109R).

Author Contributions

Xi-Guang Li participated in drafting the manuscript. Shou-Fei Han contributed to the concept and design, performed the experiments, and commented on the manuscript. Chang-Qing Gong collected and analyzed the data.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. The FWA, EFWA, dynFWA, SPSO2011 and IFWA searching curves. (a) f1 function; (b) f3 function; (c) f4 function; (d) f5 function; (e) f6 function; (f) f7 function; (g) f8 function; (h) f9 function; (i) f10 function; (j) f11 function; (k) f13 function; (l) f14 function; (m) f15 function; (n) f16 function; (o) f17 function; (p) f18 function; (q) f19 function; (r) f20 function; (s) f21 function; (t) f24 function.

References

  1. Tan, Y.; Zhu, Y. Fireworks Algorithm for Optimization. In Advances in Swarm Intelligence; Springer: New York, NY, USA, 2010. [Google Scholar]
  2. Tan, Y. Fireworks Algorithm Introduction, 1st ed.; Science Press: Beijing, China, 2015; pp. 13–136. (In Chinese) [Google Scholar]
  3. Andreas, J.; Tan, Y. Using population based algorithms for initializing nonnegative matrix factorization. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2011; pp. 307–316. [Google Scholar]
  4. Gao, H.Y.; Diao, M. Cultural firework algorithm and its application for digital filters design. Int. J. Model. Identif. Control 2011, 4, 324–331. [Google Scholar] [CrossRef]
  5. Wen, R.; Mi, G.Y.; Tan, Y. Parameter optimization of local-concentration model for spam detection by using fireworks algorithm. In Proceedings of the 4th International Conference on Swarm Intelligence, Harbin, China, 12–15 June 2013; pp. 439–450.
  6. Imran, A.M.; Kowsalya, M.; Kothari, D.P. A novel integration technique for optimal network reconfiguration and distributed generation placement in power distribution networks. Int. J. Electr. Power 2014, 63, 461–472. [Google Scholar] [CrossRef]
  7. Nantiwat, P.; Bureerat, S. Comparative performance of meta-heuristic algorithms for mass minimisation of trusses with dynamic constraints. Adv. Eng. Softw. 2014, 75, 1–13. [Google Scholar]
  8. Li, H.; Bai, P.; Xue, J.; Zhu, J.; Zhang, H. Parameter estimation of chaotic systems using fireworks algorithm. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2015; pp. 457–467. [Google Scholar]
  9. Liu, Z.B.; Feng, Z.R.; Ke, L.J. Fireworks algorithm for the multi-satellite control. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 1280–1286.
  10. Zheng, S.; Janecek, A.; Tan, Y. Enhanced fireworks algorithm. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2069–2077.
  11. Zheng, S.; Li, J.; Tan, Y. Adaptive fireworks algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 3214–3221.
  12. Zheng, S.; Tan, Y. Dynamic search in fireworks algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 3222–3229.
  13. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the 2005 International Conference on Computational Intelligence for Modeling, Control and Automation, Vienna, Austria, 28–30 November 2005; pp. 695–701.
  14. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution algorithms. In Proceedings of the 2006 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 7363–7370.
  15. Wang, H.; Liu, Y.; Zeng, S.Y.; Li, H.; Li, C.H. Opposition-based particle swarm algorithm with Cauchy mutation. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4750–4756.
  16. Yu, S.H.; Zhu, S.L.; Ma, Y. Enhancing firefly algorithm using generalized opposition-based learning. Computing 2015, 97, 741–754. [Google Scholar] [CrossRef]
  17. Chibing, G. Opposition-Based Adaptive Fireworks Algorithm. Algorithms 2016, 9, 43. [Google Scholar]
  18. Gao, H.; Li, C. Opposition-based quantum firework algorithm for continuous optimisation problems. Int. J. Comput. Sci. Math. 2015, 6, 256–265. [Google Scholar] [CrossRef]
  19. Lan, K.-T.; Lan, C.-H. Notes on the distinction of Gaussian and Cauchy mutations. In Proceedings of the Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 26–28 November 2008; pp. 272–277.
  20. Zhou, F.J.; Wang, X.J.; Zhang, M. Evolutionary Programming Using Mutations Based on the t Probability Distribution. Acta Electron. Sin. 2008, 36, 121–123. (In Chinese) [Google Scholar]
  21. Kuo, T.; Hwang, S.Y. A genetic algorithm with disruptive selection. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 299–307. [Google Scholar]
  22. Zambrano-Bigiarini, M.; Clerc, M.; Rojas, R. Standard particle swarm optimization 2011 at CEC2013: A baseline for future PSO improvements. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2337–2344.
  23. Liang, J.; Qu, B.; Suganthan, P.; Hernandez-Diaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Zhengzhou University: Zhengzhou, China, January 2013. [Google Scholar]
  24. Teng, S.Z.; Feng, J.H. Mathematical Statistics, 4th ed.; Dalian University of Technology Press: Dalian, China, 2005; pp. 34–35. (In Chinese) [Google Scholar]
Figure 1. Enlargement and reduction of explosion amplitude (on the Sphere function).
Figure 2. The FWA, EFWA, dynFWA, SPSO2011 and IFWA searching curves. (a) f2 function; (b) f12 function; (c) f22 function; (d) f23 function; (e) f25 function; (f) f26 function; (g) f27 function; (h) f28 function.
Figure 3. The FWA, EFWA, dynFWA, SPSO2011 and IFWA run-time cost.
Figure 4. The FWA, EFWA, dynFWA, SPSO2011 and IFWA run-time cost (Continued).
Table 1. CEC2013 test set.
No. | Function Name | Optimal Value
Unimodal Functions
1 | Sphere function | −1400
2 | Rotated high conditioned elliptic function | −1300
3 | Rotated bent cigar function | −1200
4 | Rotated discus function | −1100
5 | Different powers function | −1000
Basic Multimodal Functions
6 | Rotated Rosenbrock’s function | −900
7 | Rotated Schaffer’s F7 function | −800
8 | Rotated Ackley’s function | −700
9 | Rotated Weierstrass function | −600
10 | Rotated Griewank’s function | −500
11 | Rastrigin’s function | −400
12 | Rotated Rastrigin’s function | −300
13 | Non-continuous rotated Rastrigin’s function | −200
14 | Schwefel’s function | −100
15 | Rotated Schwefel’s function | 100
16 | Rotated Katsuura function | 200
17 | Lunacek bi-Rastrigin function | 300
18 | Rotated Lunacek bi-Rastrigin function | 400
19 | Expanded Griewank’s plus Rosenbrock’s function | 500
20 | Expanded Schaffer’s F6 function | 600
Composition Functions
21 | Composition function 1 (N = 5) | 700
22 | Composition function 2 (N = 3) | 800
23 | Composition function 3 (N = 3) | 900
24 | Composition function 4 (N = 3) | 1000
25 | Composition function 5 (N = 3) | 1100
26 | Composition function 6 (N = 5) | 1200
27 | Composition function 7 (N = 5) | 1300
28 | Composition function 8 (N = 5) | 1400
Table 2. Average fitness value and total number of rank 1.
Functions | FWA | FWA-I | FWA-II | FWA-III | IFWA
f1 Mean | −1396.7 | −1396.81 | −1400 | −1400 | −1400
f2 Mean | 2.3 × 10^7 | 2.21 × 10^7 | 3.62 × 10^5 | 3.39 × 10^6 | 4.03 × 10^5
f3 Mean | 7.2 × 10^9 | 7.0 × 10^9 | 6.76 × 10^8 | 4.92 × 10^8 | 1.21 × 10^8
f4 Mean | 2.18 × 10^4 | 2.13 × 10^4 | 1.33 × 10^4 | 1.47 × 10^4 | −1099.89
f5 Mean | −997.58 | −997.71 | −1000 | −1000 | −1000
f6 Mean | −815 | −820.71 | −856.34 | −859 | −872
f7 Mean | −639 | −646.13 | −614.85 | −528 | −709
f8 Mean | −679.06 | −679.07 | −679.07 | −679.08 | −679.13
f9 Mean | −565.52 | −566.13 | −566.46 | −566.78 | −576.12
f10 Mean | −464.8 | −468.12 | −499.63 | −499.70 | −499.978
f11 Mean | −384.10 | −385.32 | −368.52 | −348.19 | −304.89
f12 Mean | 114.19 | 121.19 | 8.43 | 6.78 | −158.02
f13 Mean | 191.23 | 185.04 | 165.74 | 159.10 | −1.124
f14 Mean | 647.11 | 723.65 | 609.74 | 797.02 | 2644.91
f15 Mean | 5014.04 | 5012.96 | 4473.48 | 4601.22 | 3930.46
f16 Mean | 201.73 | 201.64 | 201.48 | 201.41 | 200.377
f17 Mean | 357.08 | 357.16 | 365.53 | 399.63 | 410.71
f18 Mean | 825.03 | 816.93 | 784.66 | 773.59 | 575.27
f19 Mean | 505.4 | 505.85 | 506.20 | 507.29 | 506.6
f20 Mean | 614.76 | 614.57 | 614.15 | 614.06 | 612.38
f21 Mean | 1082.4 | 1081.18 | 1050.15 | 1027.13 | 1008.53
f22 Mean | 1528.44 | 1506.69 | 1488.48 | 1755.95 | 1488.47
f23 Mean | 7009.33 | 6996.97 | 6270.16 | 6217.96 | 3294.51
f24 Mean | 1307.75 | 1307.54 | 1306.38 | 1279.66 | 1266.55
f25 Mean | 1458.45 | 1436.27 | 1431.78 | 1430.13 | 1387.58
f26 Mean | 1419.42 | 1418.34 | 1400.11 | 1415.18 | 1409.01
f27 Mean | 2582.52 | 2580.52 | 2555.85 | 2534.58 | 2224.13
f28 Mean | 4647.6 | 4645.67 | 3627.81 | 3245.98 | 1640.48
Total number of rank 1 | 2 | 1 | 5 | 2 | 22
Table 3. Average fitness value and total number of rank 1.
Functions | | SPSO2011 | FWA | EFWA | dynFWA | IFWA
f1 | Mean | −1400 | −1396.7 | −1399 | −1400 | −1400
f1 | Rank | 1 | 3 | 2 | 1 | 1
f2 | Mean | 3.371 × 10^5 | 2.3 × 10^7 | 6.85 × 10^5 | 8.69 × 10^5 | 4.03 × 10^5
f2 | Rank | 1 | 5 | 3 | 4 | 2
f3 | Mean | 2.88 × 10^8 | 7.2 × 10^9 | 7.76 × 10^7 | 1.23 × 10^8 | 1.21 × 10^8
f3 | Rank | 4 | 5 | 1 | 3 | 2
f4 | Mean | 3.75 × 10^4 | 2.18 × 10^4 | −1098.9 | −1089.6 | −1099.89
f4 | Rank | 5 | 4 | 2 | 3 | 1
f5 | Mean | −1000 | −997.58 | −999.92 | −1000 | −1000
f5 | Rank | 1 | 3 | 2 | 1 | 1
f6 | Mean | −862 | −815 | −850 | −869 | −872
f6 | Rank | 3 | 5 | 4 | 2 | 1
f7 | Mean | −712 | −639 | −627 | −700 | −709
f7 | Rank | 1 | 4 | 5 | 3 | 2
f8 | Mean | −679.08 | −679.06 | −679.07 | −679.10 | −679.13
f8 | Rank | 3 | 5 | 4 | 2 | 1
f9 | Mean | −571.23 | −565.52 | −568.46 | −575.87 | −576.12
f9 | Rank | 3 | 5 | 4 | 2 | 1
f10 | Mean | −499.66 | −464.8 | −499.16 | −499.95 | −499.978
f10 | Rank | 3 | 5 | 4 | 2 | 1
f11 | Mean | −295.04 | −384.10 | 5.8198 | −295.89 | −304.89
f11 | Rank | 4 | 1 | 5 | 3 | 2
f12 | Mean | −196.04 | 114.19 | 399.44 | −142.22 | −158.02
f12 | Rank | 1 | 4 | 5 | 3 | 2
f13 | Mean | −6.1406 | 191.23 | 298.57 | 53.83 | −1.124
f13 | Rank | 1 | 4 | 5 | 3 | 2
f14 | Mean | 3891 | 647.11 | 2724 | 2918 | 2644.91
f14 | Rank | 5 | 1 | 3 | 4 | 2
f15 | Mean | 3909.3 | 5014.04 | 4459.5 | 4022.7 | 3930.46
f15 | Rank | 1 | 5 | 4 | 3 | 2
f16 | Mean | 201.31 | 201.73 | 200.63 | 200.58 | 200.377
f16 | Rank | 4 | 5 | 3 | 2 | 1
f17 | Mean | 416.26 | 357.08 | 624.61 | 442.61 | 410.71
f17 | Rank | 3 | 1 | 5 | 4 | 2
f18 | Mean | 520.63 | 825.03 | 576.61 | 587.82 | 575.27
f18 | Rank | 1 | 5 | 3 | 4 | 2
f19 | Mean | 509.51 | 505.4 | 510.22 | 507.26 | 506.6
f19 | Rank | 4 | 1 | 5 | 3 | 2
f20 | Mean | 613.46 | 614.76 | 614.66 | 613.28 | 612.38
f20 | Rank | 3 | 5 | 4 | 2 | 1
f21 | Mean | 1008.8 | 1082.4 | 1117.8 | 1010.2 | 1008.53
f21 | Rank | 2 | 4 | 5 | 3 | 1
f22 | Mean | 5098.8 | 1528.44 | 6318.1 | 4126.2 | 1488.47
f22 | Rank | 4 | 2 | 5 | 3 | 1
f23 | Mean | 5731.3 | 7009.33 | 7580.9 | 5652.6 | 3294.51
f23 | Rank | 3 | 4 | 5 | 2 | 1
f24 | Mean | 1266.7 | 1307.75 | 1345.2 | 1272.9 | 1266.55
f24 | Rank | 2 | 4 | 5 | 3 | 1
f25 | Mean | 1399.3 | 1458.45 | 1442.6 | 1397 | 1387.58
f25 | Rank | 3 | 5 | 4 | 2 | 1
f26 | Mean | 1486.1 | 1419.42 | 1546.1 | 1460.7 | 1409.01
f26 | Rank | 4 | 2 | 5 | 3 | 1
f27 | Mean | 2304.6 | 2582.52 | 2621 | 2280.4 | 2224.13
f27 | Rank | 3 | 4 | 5 | 2 | 1
f28 | Mean | 1801.3 | 4647.6 | 4765.1 | 1696.1 | 1640.48
f28 | Rank | 3 | 4 | 5 | 2 | 1
Total number of rank 1 | | 8 | 4 | 1 | 2 | 17
Table 4. T test results of IFWA compared with SPSO2011, FWA, EFWA and dynFWA.
Functions | | SPSO2011 | FWA | EFWA | dynFWA
f1 | p-value | NaN | 0 | 0 | NaN
f1 | Significance | | + | + |
f2 | p-value | 0.0045 | 1.102 × 10^−201 | 5.186 × 10^−22 | 1.0556 × 10^−37
f2 | Significance | + | + | + | +
f3 | p-value | 7.9152 × 10^−9 | 1.352 × 10^−145 | 0.0666 | 0.9222
f3 | Significance | + | + | − | −
f4 | p-value | 0 | 0 | 4.251 × 10^−102 | 3.192 × 10^−202
f4 | Significance | + | + | + | +
f5 | p-value | NaN | 0 | 0 | NaN
f5 | Significance | | + | + |
f6 | p-value | 0.0141 | 4.73 × 10^−29 | 4.7756 × 10^−8 | 0.6126
f6 | Significance | + | + | + | −
f7 | p-value | 0.4621 | 6.677 × 10^−35 | 1.9078 × 10^−40 | 0.0140
f7 | Significance | − | + | + | +
f8 | p-value | 0.4081 | 0.2895 | 0.9070 | 0.0076
f8 | Significance | − | − | − | +
f9 | p-value | 3.325 × 10^−13 | 1.9057 × 10^−33 | 1.7148 × 10^−23 | 0.6621
f9 | Significance | + | + | + | −
f10 | p-value | 2.006 × 10^−109 | 3.799 × 10^−314 | 1.247 × 10^−150 | 5.2876 × 10^−15
f10 | Significance | + | + | + | +
f11 | p-value | 0.0108 | 3.1582 × 10^−38 | 1.836 × 10^−93 | 0.0196
f11 | Significance | + | + | + | +
f12 | p-value | 1.5512 × 10^−7 | 9.2258 × 10^−64 | 6.3131 × 10^−94 | 0.0209
f12 | Significance | + | + | + | +
f13 | p-value | 0.3041 | 6.1849 × 10^−63 | 2.018 × 10^−81 | 0.1833
f13 | Significance | − | + | + | −
f14 | p-value | 1.5171 × 10^−33 | 9.1071 × 10^−51 | 0.2502 | 1.2445 × 10^−4
f14 | Significance | + | + | − | +
f15 | p-value | 0.8234 | 6.6992 × 10^−20 | 1.9623 × 10^−7 | 0.3320
f15 | Significance | − | + | + | −
f16 | p-value | 9.3987 × 10^−51 | 1.0955 × 10^−65 | 3.6577 × 10^−12 | 6.7638 × 10^−9
f16 | Significance | + | + | + | +
f17 | p-value | 1.033 × 10^−7 | 6.7524 × 10^−35 | 3.4902 × 10^−64 | 0.8783
f17 | Significance | + | + | + | −
f18 | p-value | 2.8098 × 10^−16 | 2.0435 × 10^−59 | 0.1924 | 0.7002
f18 | Significance | + | + | − | −
f19 | p-value | 9.2738 × 10^−14 | 1.5168 × 10^−4 | 1.6272 × 10^−18 | 0.0824
f19 | Significance | + | + | + | −
f20 | p-value | 4.9539 × 10^−16 | 3.9176 × 10^−39 | 1.3166 × 10^−37 | 1.7149 × 10^−12
f20 | Significance | + | + | + | +
f21 | p-value | 0.9815 | 2.2951 × 10^−9 | 4.1069 × 10^−16 | 0.8828
f21 | Significance | − | + | + | −
f22 | p-value | 4.1687 × 10^−24 | 0.8822 | 4.648 × 10^−33 | 2.6117 × 10^−16
f22 | Significance | + | − | + | +
f23 | p-value | 1.7789 × 10^−9 | 6.0406 × 10^−17 | 2.4782 × 10^−20 | 4.8364 × 10^−9
f23 | Significance | + | + | + | −
f24 | p-value | 0.8823 | 1.6749 × 10^−42 | 3.0488 × 10^−68 | 9.0041 × 10^−4
f24 | Significance | − | + | + | +
f25 | p-value | 1.2705 × 10^−14 | 2.6922 × 10^−76 | 9.3371 × 10^−66 | 8.3714 × 10^−11
f25 | Significance | + | + | + | +
f26 | p-value | 1.1275 × 10^−27 | 0.0433 | 1.1038 × 10^−47 | 4.2343 × 10^−17
f26 | Significance | + | + | + | +
f27 | p-value | 1.0172 × 10^−6 | 4.8226 × 10^−42 | 7.6683 × 10^−46 | 4.2938 × 10^−4
f27 | Significance | + | + | + | +
f28 | p-value | 0.0191 | 9.4714 × 10^−68 | 2.4439 × 10^−69 | 0.4121
f28 | Significance | + | + | + | −
Total number of “+” | | 20 | 26 | 24 | 15
