Applied Sciences
  • Article
  • Open Access

9 November 2022

A Hybrid of Fully Informed Particle Swarm and Self-Adaptive Differential Evolution for Global Optimization

1 Faculty of Art, Computing and Creative Industry, Universiti Pendidikan Sultan Idris (UPSI), Tanjong Malim 35900, Perak, Malaysia
2 Centre of Global Sustainability Studies (CGSS), Level 5, Hamzah Sendut Library, Universiti Sains Malaysia, Minden 11800, Pulau Pinang, Malaysia
3 School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia
4 School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Pulau Pinang, Malaysia

Abstract

Evolutionary computation (EC) algorithms and swarm intelligence have been widely used to solve global optimization problems. The candidate solution for an optimization problem is referred to by different terms in the two fields: an individual in EC and a particle in swarm intelligence. Self-adaptive differential evolution (SaDE) is one of the promising EC variants for solving global optimization problems. Incorporating self-manipulating parameter values into SaDE removes the burden of choosing suitable parameter values to create the next generation's best individuals and achieve optimal convergence. In this paper, a fully informed particle swarm (FIPS) is hybridized with SaDE to enhance SaDE's exploitation capability while maintaining its exploration power so that it does not become trapped in stagnation. The proposed hybrid is called FIPSaDE. FIPS, a variant of particle swarm optimization (PSO), helps solutions jump out of stagnation by gathering knowledge about the solutions in their neighborhoods. Each solution in the FIPS swarm is influenced by a group of solutions in its neighborhood, rather than by the best position it has visited. Indirectly, FIPS increases the diversity of the swarm. The proposed algorithm is tested on benchmark test functions with various properties from the "CEC 2005 Special Session on Real-Parameter Optimization". Experimental results show that, considering the solutions' quality, FIPSaDE is more effective and reasonably competent than its standalone variants, FIPS and SaDE, in solving the test functions.

1. Introduction

Real-life optimization problems are often unpredictable and dynamic, facing many time-varying and unforeseen constraints; examples include routing problems, scheduling problems, prediction and many other broad application areas. The primary purpose of optimization is to find the optimal minimum or maximum value in a search space, where the problem often has more than one maximum or minimum point. This type of problem is considered NP-hard (nondeterministic polynomial time), requiring high computational effort to solve. Currently, evolutionary computation (EC) is widely used in numerous studies to solve this type of problem. EC techniques have proven to provide an effective search in complex search spaces, achieving optimal or near-optimal solutions, and have demonstrated strong global search ability [1].
As a classic heuristic method, particle swarm optimization (PSO) is one of the most used and reliable swarm intelligence techniques [2]. It has been successfully adopted in many practical applications due to its efficiency, fast optimization speed and simplicity of implementation. In traditional PSO, each particle of the population learns from its nearest neighbors, which may cause the swarm to stagnate in a local region due to rapid convergence [1]. Unlike traditional PSO, one of its variants, the fully informed particle swarm (FIPS), introduced by Mendes, Kennedy and Neves in 2004, gives the contributions of all particles the same weight. Particles are influenced by their whole neighborhood in a specific way according to different neighborhood topologies. This technique enables each particle to access the most successful solutions from the whole swarm, not necessarily the best from its nearest neighbor. Accordingly, the performance of FIPS-type algorithms generally depends more on the neighborhood topology [3].
Besides PSO and its variants, differential evolution (DE) is also a widely used algorithm with a remarkable ability to find the optimal solution. DE draws strength from its handling of initial points, where multiple starting points are randomly chosen during the sampling of potential solutions [4,5]. Several adaptive DE versions focus on manipulating DE's parameters by introducing new ways of controlling the values of existing parameters. The adaptive differential evolution (jDE) [6] implements several DE strategies to control the diversity of the population; additionally, it can self-adapt the scaling factor F and the crossover rate Cr. The parameter adaptive differential evolution (JADE) [7] relies on a greedy mutation strategy (DE/current-to-pbest) with an optional external archive, utilizing previously explored inferior solutions. In SaDE [8], suitable learning strategies and parameter settings are gradually self-adapted according to previous learning experiences. Moreover, the choice of learning strategy and the two essential control parameters of DE, F and Cr, do not need to be specified before the evolution phase.
In recent years, the development of adaptive mechanisms has emerged as one of the critical issues in the EC branch. An adaptive mechanism refers to the ability of an algorithm to change its behavior according to information available during its running phase. The number of works in which adaptive mechanisms are successfully used has increased enormously over the past few decades. The applications cover a wide variety of areas, such as routing problems, signal processing, optimization problems and medical fields. Furthermore, the efficiency of an adaptive mechanism mainly depends on the algorithm design and the algorithm used for adaptation.
One of the common drawbacks EC algorithms face in solving optimization problems is a lack of diversity among solutions, which leads to suboptimal results. A potential approach to overcoming this drawback is a hybrid of EC and swarm intelligence. FIPS and SaDE emerge as promising swarm intelligence and EC algorithms, respectively, for solving optimization problems; therefore, they are chosen to form a hybrid in our study. FIPS and SaDE are hybridized owing to the solutions' diversity in FIPS and the optimal solution quality in SaDE. FIPS improves the solutions' diversity by gathering knowledge about each solution's neighborhood: each solution in FIPS receives information from all its neighbors rather than just the best one. SaDE is a simple, powerful and self-adaptive type of DE that finds an optimal solution through exploration and exploitation. SaDE relies minimally on user-specified parameters because it can self-adapt its parameters to solve optimization problems. FIPS improves the solutions through information gathered from their neighbors and maneuvers the group of solutions toward the optimal region, which is explored and exploited by SaDE. Therefore, in the hybrid of FIPS and SaDE, the two complement each other by balancing the exploration of neighbors and the exploitation of the optimal solutions.

3. Methodology

FIPS and SaDE are hybridized owing to the solutions' diversity in FIPS and the quality of the optimal solution in SaDE. One of the common drawbacks of optimization algorithms is a lack of diversity, leading to a suboptimal solution [31]. The use of FIPS could mitigate this drawback: in FIPS, the neighbors around a solution become a source of influence that improves the solutions' diversity. DE is remarkable at finding the optimal solution in a group of solutions. However, choosing suitable control parameter values is tricky, as the values need to vary for each problem. SaDE has the advantage that the user does not need to determine the optimal parameter settings, and time complexity does not increase because the rules for applying SaDE are simple [29]. SaDE performs well in finding the optimal solution in a group of solutions by self-adapting its parameters. Each particle in FIPS receives information from all of its neighbors rather than just the best one. A particle may sometimes become trapped in local optima, and the FIPS topology network can help monitor and maneuver the particle's position. Therefore, the operations in FIPS improve the solutions through the exploration of their neighbors and guide the group of solutions toward the optimal region.
A hybridization of SaDE and FIPS, called FIPSaDE, could improve performance in solving complex optimization problems. The main process of FIPSaDE follows the usual structure of the PSO algorithm. A solution found in FIPSaDE is called an individual or a solution interchangeably. The FIPS parameters are updated each time before the mutation process of the SaDE algorithm and again during the selection process, controlling the quality of the individuals chosen for the next generation. The process is repeated iteratively until the algorithm reaches the optimum value. The FIPS process introduces a perturbation into the population, as FIPS focuses on moving each solution to explore its surroundings. This helps maintain the diversity of the population and produces an excellent optimal solution. The pseudocode of FIPSaDE is given in Algorithm 1.
Algorithm 1: The Pseudocode of the FIPSaDE for Solving a Minimization Problem
****************************** Start of FIPS ******************************
Initialization
Define the swarm size Np
for each individual i ∈ [1, Np] do
  Randomly generate x_i and v_i.
  Evaluate the fitness of x_i, denoting it as f(x_i).
  Set Pbest_i = x_i and f(Pbest_i) = f(x_i).
end for
Set Gbest = Pbest_1 and f(Gbest) = f(Pbest_1).
for each individual i ∈ [1, Np] do
  if f(Pbest_i) < f(Gbest) then
    Gbest = Pbest_i
    f(Gbest) = f(Pbest_i)
  end if
end for
while t < maximum iterations do
  for each individual i ∈ [1, Np] do
    -.-.-.-.-.-.-.-.-.-.- Start of SaDE -.-.-.-.-.-.-.-.-.-.-
    Generate vector [x_i, F_i, Cr_i].
    Update the parameters F_i and Cr_i based on Equations (3) and (4).
    Mutate x_i based on Equation (5).
    Crossover x_i based on Equation (6).
    Select x_i based on Equation (7).
    -.-.-.-.-.-.-.-.-.-.- End of SaDE -.-.-.-.-.-.-.-.-.-.-
    Evaluate its velocity v_i(t+1) based on Equation (1).
    Update the position x_i(t+1) of the particle based on Equation (2).
    if f(x_i(t+1)) < f(Pbest_i) then
      Pbest_i = x_i(t+1)
      f(Pbest_i) = f(x_i(t+1))
    end if
    if f(Pbest_i) < f(Gbest) then
      Gbest = Pbest_i
      f(Gbest) = f(Pbest_i)
    end if
  end for
  t = t + 1
end while
return Gbest
****************************** End of FIPS ******************************
In FIPSaDE, a swarm of individuals flies through a D-dimensional search space seeking an optimal solution. Each individual i possesses a current velocity vector v_i = [v_i1, v_i2, ..., v_iD] and a current position vector x_i = [x_i1, x_i2, ..., x_iD], where D is the number of dimensions. The FIPSaDE process starts by randomly initializing v_i and x_i.
In each iteration t, FIPSaDE uses a self-adapting mechanism for two control parameters, F and Cr. Each individual i is extended with its control parameter values F and Cr as a vector [x_i, F_i, Cr_i]. New control parameters F_{i,t+1} and Cr_{i,t+1} are calculated before the mutation operator based on Equations (3) and (4); these parameters then influence the mutation, crossover and selection operations that produce the new vector x_{i,t+1}.
F_{i,t+1} = F_l + rand_1 · F_u,  if rand_2 < τ_1; otherwise F_{i,t}    (3)
Cr_{i,t+1} = rand_3,  if rand_4 < τ_2; otherwise Cr_{i,t}    (4)
where rand_j, for j ∈ {1, 2, 3, 4}, are uniform random values within the range [0, 1]. The parameters τ_1, τ_2, F_l and F_u are fixed to the values 0.1, 0.1, 0.1 and 0.9, respectively, as in [8].
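For concreteness, the self-adaptation rules of Equations (3) and (4) can be sketched in Python as follows (an illustrative sketch, not the authors' Java implementation; the function name is ours and the constants follow [8]):

```python
import random

# Fixed constants from [8]
TAU1, TAU2 = 0.1, 0.1  # regeneration probabilities for F and Cr
F_L, F_U = 0.1, 0.9    # lower bound and scaling range of F

def update_parameters(F_old, Cr_old):
    """Self-adapt F and Cr according to Equations (3) and (4)."""
    # Equation (3): with probability TAU1, redraw F from [F_L, F_L + F_U]
    F_new = F_L + random.random() * F_U if random.random() < TAU1 else F_old
    # Equation (4): with probability TAU2, redraw Cr from [0, 1]
    Cr_new = random.random() if random.random() < TAU2 else Cr_old
    return F_new, Cr_new
```

Each individual carries its own (F, Cr) pair, so parameter settings that produce surviving individuals propagate into the next generation along with them.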
In the mutation process, for each target vector x_i, a mutant vector q_i is generated according to Equation (5)
q_{i,t+1} = x_{r1,t} + F_{i,t+1} · (x_{r2,t} − x_{r3,t})    (5)
with randomly chosen, mutually distinct indexes r_1, r_2, r_3 ∈ [1, Np], all different from i.
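The DE/rand/1 mutation of Equation (5) can be sketched as follows (a minimal sketch; the function name is ours):

```python
import random

def mutate(population, i, F):
    """DE/rand/1 mutation (Equation (5)): q_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 mutually distinct and different from i."""
    candidates = [r for r in range(len(population)) if r != i]
    r1, r2, r3 = random.sample(candidates, 3)  # distinct by construction
    return [a + F * (b - c)
            for a, b, c in zip(population[r1], population[r2], population[r3])]
```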
Then, a crossover operator forms a trial vector p_i = (p_i1, p_i2, ..., p_iD), in which the target vector is mixed with the mutant vector using Equation (6).
p_{ij,t+1} = q_{ij,t+1},  if (rand_j(0, 1) ≤ Cr_{i,t+1}) or (j = j_rand); otherwise x_{ij,t}    (6)
for i = 1, 2, ..., Np and j = 1, 2, ..., D. The index j_rand ∈ {1, ..., D} is a randomly chosen integer that ensures the trial vector contains at least one component from the mutant vector.
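The binomial crossover of Equation (6) can likewise be sketched (illustrative; the function name is ours):

```python
import random

def crossover(target, mutant, Cr):
    """Binomial crossover (Equation (6)). The forced index j_rand guarantees
    that at least one component is inherited from the mutant vector."""
    D = len(target)
    j_rand = random.randrange(D)
    return [mutant[j] if random.random() <= Cr or j == j_rand else target[j]
            for j in range(D)]
```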
In the selection process, a greedy selection scheme for minimization is used, as shown in Equation (7).
x_{i,t+1} = p_{i,t+1},  if f(p_{i,t+1}) ≤ f(x_{i,t}); otherwise x_{i,t}    (7)
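The greedy selection of Equation (7) admits the trial vector only when it does not worsen the fitness (a sketch; the sphere function below is just an example objective, not one of the CEC 2005 benchmarks):

```python
def select(target, trial, f):
    """Greedy selection for minimization (Equation (7))."""
    return trial if f(trial) <= f(target) else target

def sphere(x):
    """Example objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)
```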
In each iteration t, the best position found by individual i, Pbest_i = [Pbest_i1, Pbest_i2, ..., Pbest_iD], and the best position found by the whole swarm, Gbest = [Gbest_1, Gbest_2, ..., Gbest_D], guide individual i to update its velocity and position by Equations (1) and (2).
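Equations (1) and (2) are not reproduced here; the sketch below follows the standard fully informed velocity and position update from Mendes et al. [1], with the commonly used constriction coefficient χ ≈ 0.7298 and total acceleration φ = 4.1. The function name and the equal weighting of neighbors are illustrative assumptions, not necessarily the authors' exact settings.

```python
import random

CHI, PHI = 0.7298, 4.1  # constriction coefficient and total acceleration

def fips_update(x, v, neighbor_pbests):
    """Fully informed update in the style of [1]: every neighbor's Pbest
    contributes a randomly weighted attraction, averaged over the topology."""
    K = len(neighbor_pbests)
    new_v = []
    for d in range(len(x)):
        # Average the randomly weighted pulls toward all neighbors' Pbests
        social = sum(random.uniform(0, PHI) * (p[d] - x[d])
                     for p in neighbor_pbests) / K
        new_v.append(CHI * (v[d] + social))
    new_x = [xi + vi for xi, vi in zip(x, new_v)]  # position update
    return new_x, new_v
```

Note that an individual sitting exactly on all of its neighbors' best positions, with zero velocity, stays put, which is consistent with FIPS convergence analyses.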
FIPSaDE is effective at traversing the search space while avoiding getting stuck in local optima, using the information collected from each individual's neighborhood. The individuals of each population carry control parameters adapted from SaDE as well as traversal parameters from FIPS, exploiting the search space for the best global point while avoiding being trapped at a local point. If a population suffers from slow or premature convergence, or if it becomes stagnant, the velocity and position parameters of each individual help the population evolve toward a better point. The experimental results validate the effectiveness and strength of our proposed model.

4. Experimental Setup

The performance of FIPSaDE is evaluated on the 25 benchmark functions from the “CEC 2005 Special Session on Real-Parameter Optimization” (CEC 2005) [32], namely F1–F25. The benchmark functions consist of 5 unimodal functions, 7 basic multimodal functions, 2 expanded multimodal functions and 11 composition functions. The details of the functions are described in Table 1. The number of variables for each function is represented by D and the range of the variable search by S. All functions are minimization problems; their best minimum solutions can be found in [32]. All experiments were conducted in Eclipse using the Java programming language, on a laptop featuring an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz processor with 4 GB of RAM.
Table 1. Details on the benchmark functions.
In this experiment, the performance of FIPSaDE is studied by comparing the algorithm with four known algorithms: FIPS by Mendes et al. [1], DE by Storn and Price [10], SaDE by Brest et al. [8] and DE-PSO by Pant et al. [31]. FIPS, DE and SaDE were selected as these algorithms can be considered the parents or ancestors of FIPSaDE. DE-PSO was selected because it is a hybrid algorithm with a structure and concept similar to FIPSaDE. The parameters for the algorithms are shown in Table 2 and Table 3. The optimization test for all algorithms is restricted to a maximum number of function evaluations (MAX_FEs), which is 5 × 10^4 for both the 10D and 50D problems and 3 × 10^4 for the 30D problem. A success threshold of 10^−14 was applied in the experiments: the evolutionary process is terminated if the best fitness reaches f_best < f_target, with f_target = f(x*) + 10^−14; otherwise, the process continues until it reaches MAX_FEs. All algorithms are run until they meet the stopping criterion, and the runs are repeated using the k-th seed numbers, k = 1, 2, ..., K, with K referring to the maximum number of runs.
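The stopping rule described above (success threshold or exhausted evaluation budget) can be expressed as a small predicate (a sketch; the names are ours):

```python
def should_stop(f_best, f_optimum, n_evals, max_fes, eps=1e-14):
    """Terminate when f_best beats the success threshold
    f_target = f_optimum + eps, or when MAX_FEs is exhausted."""
    return f_best < f_optimum + eps or n_evals >= max_fes
```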
Table 2. Parameter settings of DE, PSO, DE-PSO, SaDE and FIPS.
Table 3. Parameter settings of FIPSaDE.

5. Results and Analysis

The difference between the current fitness value and the optimum value, also known as the error value, is used to compare the algorithms' performance. The average errors of the independent runs of all algorithms for the 10D, 30D and 50D problems are summarized in Table 4, Table 5 and Table 6. The last rows of Table 4, Table 5 and Table 6 show each algorithm's frequency of having the best result, h_win. Based on the results in Table 4, Table 5 and Table 6, DE cannot find solutions for functions F15–F17 because its solutions produce undefined numeric results (division by zero) on those functions.
Table 4. Results of average error values for 10D test problem and N = 25 (10D25N).
Table 5. Results of average error values for 30D test problem and N = 30 (30D30N).
Table 6. Results of average error values for 50D test problem and N = 50 (50D50N).
When the setting is 10D with N = 25 (10D25N), SaDE, DE-PSO and FIPSaDE have the highest h_win, which is 6. This means the three algorithms have comparable performance in solving the problems. An interesting finding from the comparison of h_win for SaDE, DE-PSO and FIPSaDE under 10D25N is that they have the lowest error on different functions, which may indicate that they perform well on problems with different characteristics. As the combinations of D and N increase to 30D30N and 50D50N, FIPSaDE shows distinctive performance compared to the other algorithms, having the highest h_win in both settings: h_win = 9 in 30D30N and h_win = 12 in 50D50N. In other words, FIPSaDE's performance improves with the increase in problem dimensionality and population size. In contrast, the performance of FIPS is the worst among the algorithms for all three settings. FIPS, DE and SaDE deteriorate as the settings increase from 10D25N to 50D50N, with the largest deterioration observed in FIPS. DE-PSO, which has characteristics similar to FIPSaDE, shows moderate performance regardless of the settings.
FIPSaDE and its original variants, FIPS and SaDE, are further analyzed based on the best, mean and standard deviation of f_best for different configurations of problem dimensionality and population size across all functions. The f_best values obtained by FIPSaDE, FIPS and SaDE for all functions are shown in Table 7, Table 8 and Table 9, which summarize the best, mean and standard deviation of f_best over a number of runs, either 20 or 30. The best, mean and standard deviation of f_best are denoted best-f_best, mean-f_best and std-f_best, respectively. In each table, the algorithm producing the best result for each function is bolded, and its frequency, h_win, is calculated at the bottom.
Table 7. Best - f b e s t , mean - f b e s t and std - f b e s t obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 10D25N.
Table 8. Best - f b e s t , mean - f b e s t and std - f b e s t obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 30D30N.
Table 9. Best - f b e s t , mean - f b e s t and std - f b e s t obtained by FIPS, SaDE and FIPSaDE algorithms for the settings of 50D50N.
For the setting 10D25N, FIPSaDE has the highest h_win for best-f_best, mean-f_best and std-f_best. FIPS and SaDE have similar h_win values for best-f_best and mean-f_best in the same setting. When the setting increases to 30D30N and 50D50N, FIPSaDE still has the highest h_win for best-f_best, mean-f_best and std-f_best. Therefore, FIPSaDE shows better results for best-f_best, mean-f_best and std-f_best with the increase in problem dimensionality and population size.
FIPS's h_win values for best-f_best and mean-f_best deteriorate from 10D25N to 50D50N. Based on h_win for std-f_best, FIPS has consistent values of 6 to 7, regardless of the problem dimensionality and population size. This finding indicates that the frequency with which FIPS produces the best best-f_best and mean-f_best among the algorithms decreases as problem dimensionality and population size increase, while FIPS's frequency of producing the lowest variation in solutions is consistent across problem dimensionality and population size.
SaDE shows a decreasing trend similar to FIPS for best-f_best and mean-f_best when the settings change from 10D25N to 50D50N; however, the decrease is not as drastic as for FIPS. Based on the comparisons of std-f_best, SaDE and FIPS have similar values across the settings, except for 10D25N, where SaDE's h_win for std-f_best, which is 10, is higher than FIPS's. This finding indicates that SaDE's frequency of producing the lowest variation in solutions is slightly lower at high problem dimensionality and population size.
The overall findings from the comparisons show that FIPSaDE has the highest probability of producing a better solution compared to its original variants, which are FIPS and SaDE, with an increase in problem dimensionality and population size. An EC algorithm commonly uses a large population size to solve an optimization problem with high dimensionality. However, the approach may be more effective for FIPSaDE than FIPS and SaDE. Based on the comparisons of h w i n for best - f b e s t and mean - f b e s t , FIPSaDE has the highest values, followed by FIPS and SaDE. Therefore, FIPSaDE consistently produces a group of better-quality solutions than its original variants when problem dimensionality and population size increase.
For most tested functions, FIPSaDE proves to produce better best - f b e s t and mean - f b e s t in the swarm space compared to the other algorithms as the problem space increases. FIPSaDE, a hybrid of both FIPS and SaDE, can maneuver and manage the swarm solutions more effectively. Therefore, FIPSaDE improves performance in finding the best fitness point as the problem space increases.
The mean-f_best values for FIPSaDE, FIPS and SaDE are denoted μ_FIPSaDE, μ_FIPS and μ_SaDE, respectively. A t-test with a significance level α of 0.05 is performed to evaluate whether μ_FIPSaDE of the proposed algorithm is better (lower) than μ_FIPS and μ_SaDE, with the hypotheses shown below.
H0: μ_FIPSaDE ≥ μ_FIPS   vs.   H1: μ_FIPSaDE < μ_FIPS
H0: μ_FIPSaDE ≥ μ_SaDE   vs.   H1: μ_FIPSaDE < μ_SaDE
The results of the hypothesis tests for the pairs FIPSaDE-FIPS and FIPSaDE-SaDE under different settings of problem dimensionality and population size are shown in Table 10. P-values less than α are bolded, indicating that the associated H0 is rejected and there is enough evidence to support H1. The frequency of H0 being rejected, denoted h(H0), is shown at the bottom of the table for each setting. For the setting 10D25N, μ_FIPSaDE is significantly better than μ_FIPS and μ_SaDE on 10 and 9 functions, respectively. As the setting increases from 10D25N through 30D30N to 50D50N, the h(H0) values for FIPSaDE-FIPS increase from 10 to 18 and then 22; that is, the frequency with which FIPSaDE is significantly better than FIPS increases. A similar, though less pronounced, increasing trend is observed for FIPSaDE-SaDE. The findings on h(H0) show that the performance of FIPSaDE is significantly better than that of FIPS and SaDE as the problem dimensionality and population size increase.
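As an illustration of such a test, a one-sided paired t statistic over per-run mean error values can be computed as follows. The data and the pairing across runs are hypothetical, and the critical value 1.895 is the one-sided t value for α = 0.05 with df = 7, taken from a standard t-table; this is a sketch of the testing idea, not the authors' exact procedure.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic for the paired test of H0: mean(a) >= mean(b)
    against H1: mean(a) < mean(b); a strongly negative t favors H1."""
    diffs = [ai - bi for ai, bi in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical per-run mean error values on one function
fipsade = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.10]
fips    = [0.30, 0.28, 0.35, 0.31, 0.29, 0.33, 0.27, 0.32]

t = paired_t_statistic(fipsade, fips)
T_CRIT = 1.895            # one-sided critical value, alpha = 0.05, df = 7
reject_H0 = t < -T_CRIT   # True here: FIPSaDE's mean is significantly lower
```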
Table 10. Hypothesis test between μ F I P S a D E , μ F I P S and μ S a D E for the settings of 10D25N, 30D30N and 50D50N.

6. Conclusions

In this study, a hybrid of FIPS and SaDE called FIPSaDE is proposed, and its performance is validated by running the algorithm on benchmark functions and comparing it to its respective original versions, SaDE and FIPS, as well as their variants, DE and DE-PSO. The self-adaptation strategy of SaDE is adapted and maneuvered by the FIPS particle swarm, preventing the solutions from being trapped in a local region. The algorithm can adaptively adjust its parameter values while the swarm searches for the best solution. Across different configurations of problem dimensionality and population size, the FIPSaDE algorithm consistently has the highest frequency of achieving the lowest average errors compared with FIPS, DE, SaDE and DE-PSO. The frequency analysis of h_win and the hypothesis tests show that FIPSaDE outperforms its respective original versions in terms of the best and mean of f_best as the problems' dimensionality increases. Future research will investigate the strength of the proposed algorithm on other benchmark test functions. Since the current FIPSaDE is tested on only one topology, Four Clusters, the impact of the other four FIPS topologies, namely All, Ring, Pyramid and Square, should be investigated in the future for their effects on the hybrid's performance. Moreover, various performance metrics could be applied to strengthen the comparison among the algorithms. The convergence profiles and convergence curves of the proposed algorithm compared with other algorithms could also be studied in the future. Furthermore, investigations should be conducted to broaden the comparison among hybridization methods.

Author Contributions

Conceptualization, S.L.W., S.H.A. and T.F.N.; Data curation, S.L.W., S.H.A., H.I. and P.R.; Formal analysis, S.L.W., S.H.A. and P.R.; Funding acquisition, T.F.N.; Investigation, S.L.W. and S.H.A.; Methodology, S.L.W., S.H.A., H.I. and T.F.N.; Project administration, T.F.N.; Resources, S.H.A., H.I. and P.R.; Software, S.H.A., H.I. and P.R.; Supervision, S.L.W., H.I. and T.F.N.; Validation, S.L.W. and T.F.N.; Visualization, S.L.W., S.H.A. and P.R.; Writing—Original draft, S.L.W., S.H.A., H.I. and T.F.N.; Writing—Review and editing, S.L.W., H.I. and T.F.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS) with grant number FRGS/1/2022/ICT02/UPSI/02/1.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABC	Artificial bee colony
ACA	Ant colony algorithm
CGSS	Centre of Global Sustainability Studies
DE	Differential evolution
EC	Evolutionary computation
FIPS	Fully informed particle swarm
FIPSaDE	FIPS-SaDE
GUI	Graphical user interface
NP	Nondeterministic polynomial
PSO	Particle swarm optimization
UPSI	Universiti Pendidikan Sultan Idris
USM	Universiti Sains Malaysia
SaDE	Self-adaptive differential evolution

References

  1. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210.
  2. Zhalechian, M.; Tavakkoli-Moghaddam, R.; Rahimi, Y. A self-adaptive evolutionary algorithm for a fuzzy multi-objective hub location problem: An integration of responsiveness and social responsibility. Eng. Appl. Artif. Intell. 2017, 62, 1–16.
  3. Surekha, P.; Archana, N.; Sumathi, S. Unit commitment and economic load dispatch using self adaptive differential evolution. WSEAS Trans. Power Syst. 2012, 7, 159–171.
  4. Chen, Y.; Mahalex, V.; Chen, Y.; He, R.; Liu, X. Optimal satellite orbit design for prioritized multiple targets with threshold observation time using self-adaptive differential evolution. J. Aerosp. Eng. 2015, 28, 04014066.
  5. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
  6. Al-Anzi, F.S.; Allahverdi, A. A self-adaptive differential evolution heuristic for two-stage assembly scheduling problem to minimize maximum lateness with setup times. Eur. J. Oper. Res. 2007, 182, 80–94.
  7. Ali, M.; Ahn, C.W. An optimized watermarking technique based on self-adaptive DE in DWT-SVD transform domain. Signal Process. 2014, 94, 545–556.
  8. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
  9. Parouha, R.P.; Verma, P. A systematic overview of developments in differential evolution and particle swarm optimization with their advanced suggestion. Appl. Intell. 2022, 52, 10448–10492.
  10. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  11. Bilal; Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479.
  12. Wang, S.L.; Morsidi, F.; Ng, T.F.; Budiman, H.; Neoh, S.C. Insights into the effects of control parameters and mutation strategy on self-adaptive ensemble-based differential evolution. Inf. Sci. 2020, 514, 203–233.
  13. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311.
  14. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
  15. Neri, F.; Tirronen, V. Recent advances in differential evolution: A survey and experimental analysis. Artif. Intell. Rev. 2010, 33, 61–106.
  16. Parouha, R.P.; Verma, P. An innovative hybrid algorithm for bound-unconstrained optimization problems and applications. J. Intell. Manuf. 2022, 33, 1273–1336.
  17. Tao, X.; Guo, W.; Li, Q.; Ren, C.; Liu, R. Multiple scale self-adaptive cooperation mutation strategy-based particle swarm optimization. Appl. Soft Comput. 2020, 89, 106124.
  18. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle Swarm Optimization: A Comprehensive Survey. IEEE Access 2022, 10, 10031–10061.
  19. Jain, M.; Saihjpal, V.; Singh, N.; Singh, S.B. An Overview of Variants and Advancements of PSO Algorithm. Appl. Sci. 2022, 12, 8392.
  20. Chen, Y.; Liang, J.; Wu, Y.; He, B.; Lin, L.; Wang, Y. Self-Regulating and Self-Perception Particle Swarm Optimization with Mutation Mechanism. J. Intell. Robot. Syst. 2022, 105, 1–21.
  21. Wang, S.; Li, Y.; Yang, H. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Appl. Soft Comput. 2019, 81, 105496.
  22. Dash, J.; Dam, B.; Swain, R. Design and implementation of sharp edge FIR filters using hybrid differential evolution particle swarm optimization. AEU Int. J. Electron. Commun. 2020, 114, 153019.
  23. Yang, X.S. Nature-inspired optimization algorithms: Challenges and open problems. J. Comput. Sci. 2020, 46, 101104.
  24. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696.
  25. Ghosh, A.; Das, S.; Das, A.K.; Senkerik, R.; Viktorin, A.; Zelinka, I.; Masegosa, A.D. Using spatial neighborhoods for parameter adaptation: An improved success history based differential evolution. Swarm Evol. Comput. 2022, 71, 101057.
  26. Do, D.T.; Lee, S.; Lee, J. A modified differential evolution algorithm for tensegrity structures. Compos. Struct. 2016, 158, 11–19.
  27. Piotrowski, A.P. Review of differential evolution population size. Swarm Evol. Comput. 2017, 32, 1–24.
  28. Borowska, B. Learning Competitive Swarm Optimization. Entropy 2022, 24, 283.
  29. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417.
  30. Cleghorn, C.W.; Engelbrecht, A. Fully informed particle swarm optimizer: Convergence analysis. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 164–170.
  31. Pant, M.; Thangaraj, R.; Grosan, C.; Abraham, A. Hybrid differential evolution-particle swarm optimization algorithm for solving global optimization problems. In Proceedings of the 2008 Third International Conference on Digital Information Management, London, UK, 13–16 November 2008; pp. 18–24.
  32. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
