Article

A Hybrid PSO-DE Intelligent Algorithm for Solving Constrained Optimization Problems Based on Feasibility Rules

1 School of Mathematics and Information Sciences, North Minzu University, Yinchuan 750021, China
2 Ningxia Province Cooperative Innovation Center of Scientific Computing and Intelligent Information Processing, North Minzu University, Yinchuan 750021, China
3 Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 522; https://doi.org/10.3390/math11030522
Submission received: 19 December 2022 / Revised: 12 January 2023 / Accepted: 13 January 2023 / Published: 18 January 2023

Abstract

In this paper, we study swarm intelligence computation for constrained optimization problems and propose a new hybrid PSO-DE algorithm based on feasibility rules. The feasibility rules determine whether an individual's position satisfies the constraints and, if not, the degree to which it violates them, and this in turn governs the choice of the individual optimal position and the global optimal position in the particle population. First, particle swarm optimization (PSO) is applied to the top 50% of individuals with the highest degree of constraint violation to update their velocities and positions. Second, differential evolution (DE) is applied to the individual optimal position of each individual to form a new population. The current individual optimal position and the global optimal position are then updated using the feasibility rules, which yields the hybrid PSO-DE intelligent algorithm. The convergence and complexity of PSO-DE are analyzed. Finally, the performance of PSO-DE is tested on 12 constrained optimization benchmark functions and 57 engineering optimization problems; the numerical results show that the proposed algorithm has good accuracy, effectiveness and robustness.

1. Introduction

Intelligent optimization algorithms are widely used in engineering design, job scheduling, aerospace, intelligent control, traffic optimization, financial investment, network communication and other fields. Therefore, research on optimization methods has important academic significance and practical value. The current existing optimization problems are often restricted by different conditions, which are called constrained optimization problems (COPs). The mathematical model can be expressed as follows:
\min f(x)
\text{s.t.} \quad g_j(x) \le 0, \quad j = 1, 2, \ldots, q,
\qquad\; h_j(x) = 0, \quad j = q + 1, \ldots, m,
\qquad\; L_i \le x_i \le U_i, \quad i = 1, 2, \ldots, D,
where x = (x_1, x_2, \ldots, x_D) denotes the D-dimensional vector of decision variables, S = \prod_{i=1}^{D} [L_i, U_i] is the decision space, and the feasible region is the subset of S satisfying all constraints. L_i and U_i are the minimum and maximum permissible values of the i-th variable, f(x) is the objective function, g_j(x) \le 0 are the inequality constraints, h_j(x) = 0 are the equality constraints, q is the number of inequality constraints and (m − q) is the number of equality constraints. A feasible solution satisfies all constraints; an infeasible solution violates at least one constraint.
A constrained evolutionary algorithm is the combination of a constraint handling technique with an evolutionary algorithm, and it can effectively solve constrained optimization problems. Evolutionary algorithms are highly general, reliable and robust, require little information about the problem and are easy to implement, so they are widely used to solve constrained optimization problems [1,2,3,4,5]. Meanwhile, a series of constraint handling techniques have been proposed: (1) the penalty function method [6], (2) feasibility rules [7], (3) multi-objective methods [8], etc. Among them, the penalty function method is currently the most common and simplest way to deal with constraints; it essentially adds (or subtracts) a penalty term to the objective function, transforming the constrained optimization problem into an unconstrained one. The penalty term G(x) = \sum_{j=1}^{m} G_j(x) is based on the degree of constraint violation of an individual x, where G_j(x) is defined as:
G_j(x) = \begin{cases} \max(0, g_j(x)), & 1 \le j \le q, \\ \max(0, |h_j(x)| - \delta), & q + 1 \le j \le m, \end{cases}
where δ is the relaxation tolerance of the equality constraints, usually a small number. If the constraint violation degree G(x) = 0, the variable x is called a feasible solution; otherwise, it is called an infeasible solution. The feasible solution with the smallest objective value is called the optimal feasible solution of the constrained optimization problem, so the main purpose of solving a constrained optimization problem is to find this optimal feasible solution.
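As a small illustration of this violation measure, the following Python sketch (not from the paper; the names violation_degree, ineq_constraints and eq_constraints are hypothetical) computes G(x) for a candidate solution given the g_j and h_j as callables:

def violation_degree(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Total constraint violation G(x) = sum_j G_j(x).

    ineq_constraints: callables g_j with g_j(x) <= 0 required.
    eq_constraints:   callables h_j with h_j(x) = 0 required,
                      relaxed to |h_j(x)| <= delta.
    """
    g_part = sum(max(0.0, g(x)) for g in ineq_constraints)
    h_part = sum(max(0.0, abs(h(x)) - delta) for h in eq_constraints)
    return g_part + h_part

# Example: g1(x) = x[0] + x[1] - 1 <= 0 and h1(x) = x[0] - x[1] = 0
# violation_degree([0.8, 0.4], [lambda x: x[0] + x[1] - 1], [lambda x: x[0] - x[1]])
# returns 0.2 + (0.4 - 1e-4), so the point is infeasible.

A solution is treated as feasible exactly when this value is zero, which is how the feasibility rules of Section 3.1 compare candidates.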
Evolutionary algorithms (EAs) are a “cluster of algorithms” based on stochastic iteration and on evolution by natural selection and genetics. They simulate the natural principle of “survival of the fittest” and generate new individuals by mutating, crossing and selecting individuals in a population. The new individuals are compared with those of the previous generation and the better ones are retained, so that the whole population keeps moving closer to the region of the optimal solution. EAs impose no differentiability or convexity requirements on the objective function and have strong global search ability, since they conduct a parallel search through the population information. Currently, the common and widely used EAs are: genetic algorithm (GA) [9], particle swarm optimization (PSO) [10], immune algorithm (IA) [11], ant colony optimization (ACO) [12], differential evolution (DE) [13], shuffled frog leaping algorithm (SFLA) [14], artificial bee colony (ABC) algorithm [15], biogeography-based optimization (BBO) [16], cuckoo search (CS) algorithm [17], grey wolf optimization (GWO) [18], butterfly optimization algorithm (BOA) [19], Harris hawks optimization (HHO) [20], marine predator algorithm (MPA) [21], honey badger algorithm (HBA) [22], etc. Among them, PSO is an intelligent algorithm proposed by Kennedy and Eberhart in 1995, inspired by the modeling and simulation of bird flocking behavior. Through the transmission of information between individuals, the whole group is guided towards the most promising region of the search space. PSO is simple, easy to understand and has few parameters, so it works well on many practical application problems. However, it easily falls into local optima and has difficulty jumping out of them; on some problems its fast loss of diversity makes it a poor global searcher. DE was proposed by Storn et al. in 1995, originally as an idea for solving the Chebyshev polynomial fitting problem, and with continuous improvement it was later used to solve complex optimization problems. The algorithm's memory capability allows it to dynamically track the current search situation and adjust its search strategy, which gives it strong global convergence and robustness. Each algorithm has its own advantages and disadvantages and is suitable for different types of problems. As the “no free lunch” theorem [23] points out, it is difficult for a single algorithm to solve all problems perfectly.
A hybrid evolutionary algorithm makes different algorithms cooperate with each other to improve the ability to solve optimization problems, so this idea has attracted the attention of many scholars and a number of hybrid algorithms have been proposed [24,25,26,27,28,29,30,31]. In order to design a better hybrid evolutionary algorithm, it is necessary to understand the strengths and weaknesses of each evolutionary algorithm and to effectively balance exploration and exploitation during the search so as to obtain the best search capability. Exploration and exploitation are essentially contradictory: if differential evolution performs well in exploration, it will perform weakly in exploitation [32], and the reverse is also true. Particle swarm optimization is a population-based algorithm that tends to converge quickly to a local optimum on multimodal functions, thus missing the opportunity to converge to the global optimum. In order to keep the advantages of this algorithm and make up for its shortcomings, this paper proposes a hybrid particle swarm-differential evolution algorithm (PSO-DE) for solving constrained optimization problems by combining the evolutionary characteristics of DE, and applies it to 12 classical constrained benchmark functions and 57 engineering optimization problems. The numerical results show that PSO-DE has good stability, robustness and global search ability.
The rest of this article is arranged as: Section 2 briefly describes PSO and DE; Section 3 proposes a hybrid particle swarm optimization and differential evolution (PSO-DE); Section 4 analyzes the complexity of PSO-DE; Section 5 proves the convergence of PSO-DE; Section 6 presents simulation experiments and comparison of results; Section 7 is the conclusion. Figure 1 presents a graphical abstract of this paper.

2. Related Work

2.1. Particle Swarm Optimization

Particle swarm optimization (PSO) is an intelligent algorithm that imitates the foraging behavior of birds. In this algorithm, the process of finding the optimal solution of the problem is regarded as the process of birds foraging, and the flight space of the birds is compared to the search space of the solution. Each bird is abstracted into a particle without mass or volume, which represents a possible solution of the problem, and the complex optimization problem is solved through the movement of these particles.

In PSO, the particle swarm is composed of NP particles searching a D-dimensional space. Each particle flies with a certain velocity and constantly approaches its individual optimal position and the global optimal position. In the process of optimization, the velocity and position of each particle are updated according to Equations (3) and (4):
v_i = w v_i + c_1 \cdot rand \cdot (p_i - x_i) + c_2 \cdot rand \cdot (g - x_i)
x_i = x_i + v_i
where i = 1, 2, \ldots, NP. p_i represents the individual optimal position of the i-th particle and p_i^{best} is the fitness value at that position, p_i^{best} = f(p_i); g represents the global optimal position; w is the inertia weight. x_i and v_i respectively represent the position and velocity of the i-th particle at the current iteration, with x_{id} \in [-x_{max}, x_{max}], where x_{max} is a constant limiting the position of the particle, and v_{id} \in [-v_{max}, v_{max}], where v_{max} is a constant limiting the particle velocity. c_1 and c_2 are the self-cognition coefficient and the social-cognition coefficient, respectively, also known as acceleration constants; rand is a uniformly distributed random number in the range [0, 1], which increases the randomness of particle motion. The velocity update formula consists of three parts: the first part is “inertia” or “momentum”, reflecting the particle's habit of motion and its tendency to maintain its previous velocity; the second part is “cognition”, reflecting the particle's memory of its own historical experience and its tendency to move closer to its own optimal position; the third part is “social”, reflecting the historical experience shared through cooperation and knowledge exchange between particles and the tendency of each particle to move closer to the best position found so far by the group. Figure 2 shows the position update diagram.
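A minimal Python sketch of this velocity and position update (illustrative only; the array shapes, the parameter defaults and the velocity clipping are assumptions, not taken from the paper):

import numpy as np

def pso_update(x, v, p, g, w=0.8, c1=1.5, c2=1.5, v_max=1.0):
    """One PSO step for a set of particles.

    x, v: (NP, D) arrays of positions and velocities
    p:    (NP, D) array of personal best positions
    g:    (D,)    global best position
    """
    NP, D = x.shape
    r1 = np.random.rand(NP, D)
    r2 = np.random.rand(NP, D)
    v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Equation (3)
    v = np.clip(v, -v_max, v_max)                       # keep velocity in [-v_max, v_max]
    x = x + v                                           # Equation (4)
    return x, v

The default values of w, c1 and c2 above simply echo the parameter settings reported in Section 6.1 (with w fixed at its maximum value).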
Based on the concepts of “population” and “evolution”, particle swarm optimization realizes the search for complex spatial optimal solutions through cooperation and competition between individuals, and its process is as follows:
Step 1: The position and velocity of each particle are generated through Equations (5) and (6):
x = rand(N, D) \cdot (X_{max} - X_{min}) + X_{min},
v = rand(N, D) \cdot (V_{max} - V_{min}) + V_{min}.
Step 2: Calculate the fitness value for each particle f ( x i ) .
Step 3: Compare the fitness value f ( x i ) with the individual optimal value p i b e s t , if f ( x i ) < p i b e s t , then p i b e s t = f ( x i ) .
Step 4: Compare the fitness value f ( x i ) with the global optimal value g b e s t , if f ( x i ) < g b e s t , then g b e s t = f ( x i ) .
Step 5: Update the velocity and position of each particle by Equations (3) and (4).
Step 6: The boundary conditions are handled by Equations (7) and (8),
x_{ij} = \begin{cases} rand \cdot (X_{max} - X_{min}) + X_{min}, & \text{if } x_{ij} > X_{max} \text{ or } x_{ij} < X_{min}, \\ x_{ij}, & \text{otherwise}. \end{cases}
v_{ij} = \begin{cases} rand \cdot (V_{max} - V_{min}) + V_{min}, & \text{if } v_{ij} > V_{max} \text{ or } v_{ij} < V_{min}, \\ v_{ij}, & \text{otherwise}. \end{cases}
Step 7: If the termination condition is met, the algorithm is terminated and the optimization result is output, otherwise, go back to Step 2.
Corresponding to the flow of the Algorithm 1, the basic framework of the particle swarm optimization is shown in Figure 3.
Algorithm 1 The pseudocode of particle swarm optimization
1: Initialize PSO parameters, which involve Xmax , Xmin , Vmin , Vmax , c 1 , c 2 ,
 and a random set of N P individuals.
2: According to Equations (5) and (6), the position and velocity of the individuals of the
 initial population are randomly generated.
3: Calculate the fitness value of individual population f ( x i ) .
4: Set individual optimal position p i and p i best .
5: Set global optimal position g and gbest ;
6:  While (criterion)
7:        for  i = 1 , 2 , NP  do
8:             Generate new velocity v i ( t + 1 ) using Equation (3);
9:             Generate new locations x i ( t + 1 ) using Equation (4);
10:             Evaluate fitness value at new locations f ( x i ( t + 1 ) ) ;
11: (Update  individual  optimal)
12:                 if  f(x_i(t+1)) ≤ p_i^best(t)  then
13:                           p_i(t+1) = x_i(t+1), p_i^best(t+1) = f(x_i(t+1));
14:                 else
15:                           p_i(t+1) = p_i(t), p_i^best(t+1) = p_i^best(t);
16:                 end (if)
17: (Update  global  optimal)
18:                 if  p_i^best(t+1) ≤ gbest(t)  then
19:                           g(t+1) = p_i(t+1), gbest(t+1) = p_i^best(t+1);
20:                 else
21:                           g ( t + 1 ) = g ( t ) , gbest ( t + 1 ) = gbest ( t ) ;
22:                 end(if)
23:        end (for)
24:  end(while)
25: Output

2.2. Differential Evolution

Differential Evolution (DE) is a random heuristic search algorithm, which has the characteristics of memorizing the optimal solution of individuals and sharing information within a population. DE is similar to GA, and the main operations of population renewal are mutation, crossover and selection, as follows:
(1) Initialization
The initial population is usually generated randomly within the given boundary constraints. In general, the randomly generated population follows a uniform probability distribution. Let x_j^{(L)} \le x_j \le x_j^{(U)}, (j = 1, 2, \ldots, D) be the range of the parameter variables; then
x_{ij} = rand[0, 1] \cdot (x_j^{(U)} - x_j^{(L)}) + x_j^{(L)}
where i = 1 , 2 , , N P ; rand is a random number in the range of [ 0 , 1 ] .
(2) Mutation operation
For each decision vector x i ( i = 1 , 2 , , N P ) , a new individual v i is generated by the following mutation operation:
DE/rand/1: \; v_i = x_{r1} + F \cdot (x_{r2} - x_{r3})
DE/best/1: \; v_i = x_{best} + F \cdot (x_{r1} - x_{r2})
DE/current-to-best/1: \; v_i = x_i + F \cdot (x_{best} - x_i) + F \cdot (x_{r1} - x_{r2})
DE/best/2: \; v_i = x_{best} + F \cdot (x_{r1} - x_{r2}) + F \cdot (x_{r3} - x_{r4})
DE/rand/2: \; v_i = x_{r1} + F \cdot (x_{r2} - x_{r3}) + F \cdot (x_{r4} - x_{r5})
where x_{r1}, x_{r2}, x_{r3}, x_{r4}, x_{r5} are randomly selected from the population with i \ne r1 \ne r2 \ne r3 \ne r4 \ne r5, and F (F \in [0, 2]) is an amplification factor.
(3) Crossover operation
To improve population diversity, the following formula is used for the crossover operation, with u_i = (u_{i1}, u_{i2}, \ldots, u_{iD}) as the trial vector:
u_{ij} = \begin{cases} v_{ij}, & \text{if } randb(j) \le CR \text{ or } j = rnbr(i), \\ x_{ij}, & \text{if } randb(j) > CR \text{ and } j \ne rnbr(i), \end{cases}
where randb(j) is the j-th evaluation of a random number generator uniformly distributed in [0, 1], rnbr(i) \in (1, 2, \ldots, D) is a randomly selected index that ensures u_i takes at least one parameter from v_i, and CR (CR \in [0, 1]) is the crossover probability.
(4) Select operation
In order to ensure that individuals with better fitness enter the next generation, a “greedy” strategy is adopted for the selection operation. By comparison, the better of u_i and x_i is retained in the next generation, and the worse one is eliminated.
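For illustration, the three operations can be sketched in Python as follows for a minimization problem (an illustrative sketch, not the paper's code; the objective f and the parameter defaults are placeholders):

import numpy as np

def de_generation(pop, f, F=0.95, CR=0.95):
    """One DE generation: DE/rand/1 mutation, binomial crossover, greedy selection."""
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # mutation: three mutually different indices, all different from i
        r1, r2, r3 = np.random.choice([k for k in range(NP) if k != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover, forcing at least one component from v
        j_rand = np.random.randint(D)
        mask = np.random.rand(D) <= CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # greedy one-to-one selection
        if f(u) <= f(pop[i]):
            new_pop[i] = u
    return new_pop

The default values F = 0.95 and CR = 0.95 simply match the settings later reported in Section 6.1.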
The differential evolution algorithm (Algorithm 2) uses real-number coding, a simple difference-based mutation operation, crossover and “one-to-one” competitive survival. The specific steps of the strategy are as follows:
Step 1: Determine the parameters of the differential evolution algorithm: number of populations, mutation operator, crossover operator, maximum number of iterations, termination conditions.
Step 2: The initial population is randomly generated, and let the number of evolution k = 1.
Step 3: Fitness values are calculated for the individuals in the initialized population.
Step 4: Determine whether the termination condition is met or the maximum number of iterations is reached: if yes, the evolution is terminated and the best individual at this point is taken as the output; otherwise, the next operation is continued.
Step 5: Mutation operations and crossover operations are performed to process the boundary conditions and obtain temporary populations.
Step 6: Adaptation values are calculated for individuals in the provisional population.
Step 7: The new population is obtained by “one-to-one” selection of the individuals of the temporary population and the corresponding individuals of the original population.
Step 8: Let the number of evolution k = k + 1 and turn to Step (4).
Corresponding to the flow of the above algorithm, the basic framework of the differential evolution algorithm is shown in Figure 4.
Algorithm 2 The pseudocode of Differential Evolution
1: Initialize DE parameters, which involve F , CR ,
 and a random set of N P individuals.
2: Generate the initial population according to Equation (5).
3: Calculate the fitness value of individual population f ( x i ) .
4:   While (criterion).
5:        for  i = 1 , 2 , NP  do;
6:            Select random indexes r 1 , r 2 , r 3 , where r 1 r 2 r 3
7: (Mutation  operation)
8:                v i = x r 1 + F · ( x r 2 x r 3 )
9:              for j = 1 , 2 , , D do
10:                 randb ( j ) = rand ( 0 , 1 ) , j = rnbr ( i ) ( 1 , 2 , , D )
11: (Crossover  operation)
12:                     if   randb ( j ) CR or j = rnbr ( i ) then
13:                               u ij = v ij
14:                     else
15:                              u ij = x ij
16:                     end(if)
17: (Greedy  selection)
18:                     if   f(u_i) ≤ f(x_i) then
19:                              x_i(t+1) = u_i
20:                     else
21:                              x_i(t+1) = x_i(t)
22:                    end (if)
23:             end(for)
24:        end(for)
25:   end (While)
26: Output

3. Hybrid Particle Swarm Optimization and Differential Evolution (PSO-DE)

In this section, we describe the main steps of the proposed PSO-DE algorithm, as shown in Section 3.4 and the flowcharts in Figure 5 and Figure 6. Before describing the main steps of the proposed algorithm, we focus on how to deal with constraints when the variables violate the constraints by the proposed algorithm.

3.1. Constraint Handling Technology

To ensure that the obtained optimal solution both optimizes the objective function and satisfies all constraints, the proposed PSO-DE is combined with a constraint handling technique in this paper. The penalty function method is currently the most common constraint handling method. Its main idea is to transform the constrained optimization problem into an unconstrained one by subtracting or adding a penalty term to the objective function value of an infeasible solution. The penalty term is usually obtained by multiplying a penalty factor by the degree of constraint violation. Faced with different types of constrained optimization problems, how do we determine an appropriate penalty factor so that infeasible solutions are neither penalized too heavily nor too lightly? This is the main challenge in solving constrained optimization problems with the penalty function method. On the other hand, the feasibility rule (Deb's rule) is a more intuitive way to compare two solutions without determining any parameters such as penalty factors. Two solutions are compared using the following principles: (1) any feasible solution takes precedence over any infeasible solution; (2) if both solutions are feasible, the one with the better objective function value is selected; (3) between two infeasible solutions, the one with the smaller constraint violation takes precedence. We embed Deb's rule into PSO-DE as its constraint handling technique, denoting the fitness and total constraint violation of a PSO-DE particle by f(x_i) and G(x_i). Correspondingly, for the constrained minimization problem, the two solutions x_i and x_j are compared according to the following criteria:
(i) If G(x_i) = 0, G(x_j) = 0 and f(x_i) < f(x_j), the individual x_i is retained;
(ii) If G(x_i) = 0 and G(x_j) \ne 0, the individual x_i is retained;
(iii) If G(x_i) \ne 0, G(x_j) \ne 0 and G(x_i) < G(x_j), the individual x_i is retained.
Algorithm 3 is used to compare the current solution x_i of the i-th particle of PSO-DE with its individual best position p_i. The same pseudocode can be used to compare the feasibility of the solution x_i with the global best position g, and Deb's rule then determines whether the new position x_i generated by the i-th particle can replace p_i and g. By combining Deb's rule with PSO-DE, the population can be guided to search from the infeasible region towards the feasible region and to optimize the objective function within the feasible region.
Algorithm 3 Deb’s Rule
1: Input x_i, f(x_i), G(x_i), p_i^best = f(p_i), G(p_i)
2:       if   G(x_i) = 0 and G(p_i) = 0 then
3:             if   f(x_i) < p_i^best then
4:                            p_i = x_i, p_i^best = f(x_i), G(p_i) = G(x_i),
5:             end (if)
6:       else if   G(x_i) = 0 and G(p_i) ≠ 0 then
7:                            p_i = x_i, p_i^best = f(x_i), G(p_i) = G(x_i),
8:       else if   G(x_i) ≠ 0 and G(p_i) ≠ 0 then
9:             if   G(x_i) < G(p_i) then
10:                            p_i = x_i, p_i^best = f(x_i), G(p_i) = G(x_i),
11:             end (if)
12:       end (if)
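A compact Python rendering of this comparison (an illustrative sketch, not the authors' code); it returns True exactly when the candidate x should replace the stored position p according to criteria (i)-(iii):

def deb_better(f_x, G_x, f_p, G_p):
    """Return True if (f_x, G_x) is preferred over (f_p, G_p) under Deb's rule."""
    if G_x == 0 and G_p == 0:        # both feasible: compare objective values
        return f_x < f_p
    if G_x == 0 and G_p > 0:         # a feasible solution beats an infeasible one
        return True
    if G_x > 0 and G_p > 0:          # both infeasible: smaller violation wins
        return G_x < G_p
    return False                     # x infeasible, p feasible: keep p

The same function can compare a candidate with the global best position g, since the rule depends only on the objective value and violation degree of the two solutions.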

3.2. Applying PSO to the Top 50% of Individuals with a High Degree of Violation

In this section, we describe PSO-DE in detail. In the particle swarm velocity update equation, the first part is “inertia” or “momentum”, reflecting the particle's tendency to maintain its previous velocity, which is used to ensure the global convergence performance of the algorithm. The second part is “cognition”, which represents the tendency of the particle to approach its previous optimal position. The third part is the “social” part, which represents the tendency of the particle to move towards the global optimal position through cooperation and knowledge sharing among particles. The “cognitive” and “social” components lead to local convergence of the algorithm. When the particle position is close to the global best position, the distances between the particle and its personal optimal position and the global optimal position are small; the velocity is then dominated by the “inertia” part and influenced little by the “cognitive” and “social” parts, which leads to a negligible change in the particle velocity. The particle position therefore changes little, and the particle may stagnate and fall into a local optimum. This indicates that the initial personal optimal position and the global optimal position play an important role in the performance of the algorithm.
Based on the feasibility rule, the lower the violation degree of a particle, the higher the probability that particles cluster around the global optimal position; this may cause the global optimal position to stay at the same place for a long time, making it difficult for particles to jump out of a certain neighborhood of the global optimal position and causing a loss of population diversity. In other words, the algorithm may converge prematurely in the early stages of the search; if the population converges too quickly to a position that may only be a local optimum, the particles may give up exploring other promising positions and stagnate for the rest of the evolutionary process. On the other hand, for a particle with a high degree of constraint violation, there is a remarkable difference between its personal optimal position and the global optimal position. Such a particle extracts useful information from both the personal optimal position and the global best position of the same population, which drags it toward better-performing points; an updated particle may be better than the current global optimal position, causing the global optimal position to jump to a new position different from the current one. This substitution can stimulate the particle population to adjust its evolutionary direction and guide particles to new regions that have not been searched before, so as to prevent particles from falling into local optima. Therefore, in this paper we rank the constraint violation degrees of the population individuals in descending order and take out the top 50% of individuals with the higher constraint violation degrees as a temporary population, to which particle swarm optimization is applied; this mechanism slows down the convergence speed and avoids stagnation to some extent. A small sketch of this selection step is given below.
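The selection of the temporary population can be sketched in Python as follows (hedged illustration; the helper name and array layout are assumptions):

import numpy as np

def top_half_by_violation(pop, G_values):
    """Return the indices of the 50% of individuals with the largest violation degree G(x)."""
    order = np.argsort(-np.asarray(G_values))   # descending order of violation
    half = len(pop) // 2
    return order[:half]                         # indices forming the temporary population P1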

3.3. Updating Individual Optimal with DE

After each iteration of the algorithm, the global optimal position is updated using the feasibility rules. At the same time, particle swarm optimization is easily trapped in local optima. In order to improve the global search ability of the algorithm, researchers have repeatedly tried to integrate the DE strategy into PSO with good results, so this paper also applies the DE strategy to update the individuals' optimal positions in PSO. This strategy increases the possibility of finding a better position and thus the chance of finding the global optimal position when it has not yet been located. It is well known that the individual optimal positions and the global optimal position guide the particles in the best direction and speed up convergence. The two populations work independently of each other, but the individuals between them are interrelated. PSO can gradually search the neighborhood of the best position found so far, while the differential evolution operators (Algorithm 4) help avoid convergence to a local optimum. In summary, hybridizing PSO and DE strikes a good balance between accuracy and efficiency.
In the process of PSO-DE, firstly, the initial population is randomly generated ( p o p ), which is copied and assigned to personal optimal positions P = p o p , P is used to store the current personal best position of each particle, and P B e s t is used to store the personal optimal value of each particle. Secondly, the constraint violation degree value of each individual in the population is calculated and sorted in descending order, where the top 50% of individuals with a higher violation degree are taken as temporary population. PSO is applied to the temporary population to update positions of selected individuals and corresponding individuals optimal position. The position order between each particle and corresponding individual optimal position is kept consistent, when the order of individuals in the population changes, the order of the corresponding individual optimal position will also change. Finally, the mutation operation, crossover operation and selection operation of DE are applied to the individual optimal positions corresponding to the individuals in the population pop, and the global optimal position is updated by the feasibility rules. Figure 5 shows the flow chart of PSO-DE.

3.4. Steps of the PSO-DE

Step 1: Specify the population size NP, the fitness function f(x), the violation degree function G(x), the upper bound X_max and lower bound X_min of the decision variables, and the maximum number of iterations T.
Step 2: Calculate the degree of violation G(x) of the individuals in pop, sort the individuals and the corresponding personal optimal positions in descending order of violation, and take the top 50% of the sorted individuals as a temporary population P_1, P_1 = (x_1, x_2, \ldots, x_{NP/2}), with the corresponding personal best values P_1Best = (p_1^best, p_2^best, \ldots, p_{NP/2}^best).
Step 3: Update the individuals in P_1 according to Equations (3) and (4); after the update, P_1 = (a_1, a_2, \ldots, a_{NP/2}). Boundary handling is applied to any x_{ij} outside [X_{min}, X_{max}] and any v_{ij} outside [V_{min}, V_{max}] as follows:
x_{ij} = \begin{cases} 0.5 \times (X_{min} + x_{ij}), & x_{ij} \le X_{min}, \\ 0.5 \times (X_{max} + x_{ij}), & x_{ij} \ge X_{max}, \\ x_{ij}, & \text{otherwise}. \end{cases}
v_{ij} = \begin{cases} rand \times (V_{max} - V_{min}), & v_{ij} \le V_{min} \text{ or } v_{ij} \ge V_{max}, \\ v_{ij}, & \text{otherwise}. \end{cases}
Step 4: Calculate the fitness function values   f ( a i ) and violation values G ( a i ) of individuals after the update of the temporary population P 1 .
Step 5: Compare a i and p i according to the feasibility rule, and if a i is better, replace p i .
Step 6: Update g b e s t according to the feasibility principle.
Step 7: Introduce DE strategy to update P B e s t as follows:
     Step 7.1: Mutation operation: for the i-th particle's personal optimal position p_i, i = 1, 2, \ldots, NP/2, two different indices r_1, r_2 \in \{1, 2, \ldots, NP/2\} are randomly selected; v_i is the intermediate variable formed by the mutation operation and F is the scaling factor.
      Step 7.2: Crossover operation:
u_{ij} = \begin{cases} v_{ij}, & rand_j \le CR \text{ or } j = j_{rand}, \\ p_{ij}, & \text{otherwise}, \end{cases}
where u_{ij} is the intermediate variable formed by the crossover operation, CR is the crossover probability and j_{rand} \in \{1, 2, \ldots, n\} is a random index; the boundary is handled using the following equation:
u_{ij} = \begin{cases} v_{ij}, & \text{if } randb(j) \le CR \text{ or } j = rnbr(i), \\ x_{ij}, & \text{if } randb(j) > CR \text{ and } j \ne rnbr(i), \end{cases}
where rand is a uniformly distributed random number in [0, 1].
      Step 7.3: Using the feasibility principle to compare u i and p i , update p i and P B e s t .
Step 8: By comparing the individuals in PBest through the feasibility principle, select the optimal individual and update g and gbest = f(g).
Step 9: If the termination condition is satisfied, the global optimal individual is output as the optimal solution and the algorithm is terminated, otherwise turn to Step 2.
Algorithm 4 The pseudocode of PSO-DE
1: Initialize PSO and DE parameters
2: From Equations (5) and (6), the position and velocity of the individuals of
 the initial population p o p are randomly generated.
3: Calculate fitness function f ( x i ) and constrained violation G ( x i ) of individual.
4:     Set P = pop and PBest
5:     Set g and gbest
6:   While (criterion)
7:     for pop do
8:       Sort pop in descending order according to G ,
9:       Set P 1 =The top 50 % of individuals in pop .
10:          for P 1 do
11:           Update velocity and position by Equations (3) and (4).
12:           Calculate fitness function f and constrained violation G .
13:            end (for)
14:           Update P and PBest, g and gbest using Algorithm 3.
15:          for p of P do
16:            Mutation operation Algorithm 2
17:            Crossover operation
18:            Boundary condition treatment
19:           Calculate objective function f and constrained violation G .
20:            end (for)
21:            Update P and PBest , Algorithm 3
22:            Update g and gbest
23:       end (for)
24:   end (While)
25: Output the final result gbest .
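Putting the pieces together, one generation of PSO-DE can be condensed into the following Python sketch (an illustrative reading of Algorithm 4, not the authors' implementation; it reuses the hypothetical helpers violation_degree, pso_update, top_half_by_violation and deb_better sketched earlier, and assumes the DE/rand/1 strategy of Algorithm 2 for the mutation step):

import numpy as np

def pso_de_generation(pop, vel, P, PBest, gbest_idx, f, G, bounds, F=0.95, CR=0.95):
    """One generation of the hybrid PSO-DE (sketch).

    pop, vel : (NP, D) arrays of positions and velocities
    P        : (NP, D) array of personal best positions
    PBest    : list of (fitness, violation) pairs for the personal bests
    gbest_idx: index of the current global best inside P
    f, G     : objective function and violation-degree function
    bounds   : (lower, upper) bounds of the decision variables
    """
    NP, D = pop.shape
    lo, hi = bounds
    # 1) PSO applied only to the 50% of individuals with the largest violation degree
    worst = top_half_by_violation(pop, [G(x) for x in pop])
    g = P[gbest_idx]
    pop[worst], vel[worst] = pso_update(pop[worst], vel[worst], P[worst], g)
    pop = np.clip(pop, lo, hi)
    for i in worst:                                    # personal bests via the feasibility rule
        if deb_better(f(pop[i]), G(pop[i]), *PBest[i]):
            P[i], PBest[i] = pop[i].copy(), (f(pop[i]), G(pop[i]))
    # 2) DE (rand/1 mutation + binomial crossover) applied to the personal best positions
    for i in range(NP):
        r1, r2, r3 = np.random.choice([k for k in range(NP) if k != i], 3, replace=False)
        v = P[r1] + F * (P[r2] - P[r3])
        j_rand = np.random.randint(D)
        mask = np.random.rand(D) <= CR
        mask[j_rand] = True
        u = np.clip(np.where(mask, v, P[i]), lo, hi)
        if deb_better(f(u), G(u), *PBest[i]):
            P[i], PBest[i] = u, (f(u), G(u))
    # 3) global best: feasibility-rule ranking (smallest violation first, then objective)
    gbest_idx = min(range(NP), key=lambda i: (PBest[i][1], PBest[i][0]))
    return pop, vel, P, PBest, gbest_idx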

3.5. Pseudocode of PSO-DE

Corresponding to the flow of the above algorithm, the basic framework of the PSO-DE algorithm is shown in Figure 6.

4. Complexity Discussion of PSO-DE

In this section, we analyze the time complexity of the PSO-DE algorithm; only the main steps of one iteration are considered for the worst-case time complexity, where NP is the population size and D is the number of dimensions of each particle.
(1) Initializing NP particles, each particle being D-dimensional, has time complexity O(NP × D).
(2) Computing the fitness values and violation degrees of the NP particles has time complexity O(NP × D).
(3) Sorting the NP particles in descending order of violation degree has time complexity O(NP log NP), and updating the selected individuals has time complexity O(1/2 NP × D).
(4) The time complexity of updating the individual optimum is O ( 1 / 2 N P × D ) .
(5) Analysis of the time complexity of the DE update strategy to update the optimal position of an individual is O ( N P × D ) .
(6) The time complexity of updating the global optimum is O ( N P × D ) .
To sum up, the worst-case time complexity of one iteration of the PSO-DE algorithm is O(NP log NP + NP × D), which also indicates the computational efficiency of the PSO-DE algorithm.

5. Convergence Discussion of PSO-DE

At present, the convergence of most evolutionary algorithms is proved using Markov models or dynamic models. In this paper, we prove PSO-DE convergence by building a sequential convergence model, which is one of the main innovations of this paper.
Theorem 1.
After many iterations, PSO-DE will converge to the global optimal solution with Probability 1.
Proof. 
For the global optimization problem, assume that the optimal solution is g = x^*; then gbest = f(x^*) is the global optimal value. The optimal solution of PSO-DE at the t-th iteration is g_t = x_t, with g_t^{best} = f(x_t) the current optimal value. According to the sequential convergence theorem, the equivalent condition for PSO-DE to find the global optimum is |gbest - g_t^{best}| = |f(x^*) - f(x_t)| \le \varepsilon. □
During the evolution of PSO-DE, there is a best individual for each iteration, and the set formed by these individuals is X b e s t = ( g 1 , g 2 , , g T ) = ( x 1 , x 2 , , x T ) , where T is the maximum number of iterations. Therefore, the sequence A can be constructed:
A = ( g 1 b e s t , g 2 b e s t , , g T b e s t ) .
g t b e s t = f ( x t ) = f ( g t ) , t = 1 , 2 , , T .
From the feasibility rule, it follows that in PSO-DE each iteration refines the search toward smaller objective values. Therefore, the optimal value obtained at each iteration must be better than or equal to the optimal value of the previous generation, and Equation (22) holds:
g_1^{best} \ge g_2^{best} \ge \cdots \ge g_T^{best}.
As the iterations proceed, the population gradually moves closer to the region where the optimal solution lies, and the probability that the best individual in the population enters the ε-neighborhood of the global optimal solution gradually increases. Equation (23) expresses the probability P_t that the optimal value f(x_t) of the current population has converged to the global optimum f(x^*):
P_t = P\{|f(x_t) - f(x^*)| \le \varepsilon\}, \quad t = 1, 2, \ldots, T.
According to Equations (22) and (23), the following relationship can be derived,
P_1 \le P_2 \le \cdots \le P_t, \quad t = 1, 2, \ldots, T.
Therefore, after the t-th iteration, the probability Q_t that the current optimal value has not converged to the global optimal value is the following:
Q_t = (1 - P_1)(1 - P_2) \cdots (1 - P_{t-1})(1 - P_t).
Equation (24) shows that the sequence {P_t} is monotonically non-decreasing, so the following holds:
Q_t = (1 - P_1)(1 - P_2) \cdots (1 - P_{t-1})(1 - P_t) \le (1 - P_1)(1 - P_1) \cdots (1 - P_1)(1 - P_1) = (1 - P_1)^t.
Because P_1 is a probability, 0 \le P_1 \le 1 and hence 0 \le (1 - P_1) \le 1; after many iterations, Equation (27) holds:
\lim_{t \to +\infty} (1 - P_1)^t = 0.
From Equation (27), after a large number of iterations, the probability that the algorithm does not converge to optimal value is 0. Therefore, as the number of iterations t increases, PSO-DE will eventually converge to global optimal value with Probability 1.

6. Experimental Analysis

6.1. Experimental Preparation

The parameters of the PSO-DE algorithm are set as follows: self-perception coefficient c_1 = 1.5, social perception coefficient c_2 = 1.5, maximum inertia weight w_max = 0.8, minimum inertia weight w_min = 0.4, scaling factor F = 0.95, crossover probability CR = 0.95, relaxation tolerance of the equality constraints δ = 0.0001; the population size and number of iterations are determined by Equations (28) and (29). The maximum number of iterations is used as the termination condition of the PSO-DE algorithm. All the algorithms in this paper are coded in Matlab and run on an Intel(R) Core(TM) i5-8500 CPU with 8.00 GB of RAM. Each problem is run independently 30 times; the best and worst values as well as the median, mean and standard deviation of the 30 runs are recorded and compared with the results of other algorithms.
NP = \begin{cases} 20D, & 2 \le D \le 5, \\ 10D, & 5 < D \le 10, \\ 5D, & D > 10. \end{cases}
T = \begin{cases} 1 \times 10^5, & D \le 10, \\ 2 \times 10^5, & 10 < D \le 30, \\ 4 \times 10^5, & 30 < D \le 50, \\ 8 \times 10^5, & 50 < D \le 150. \end{cases}
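For concreteness, the parameter schedule of Equations (28) and (29) can be written as a small Python helper (a sketch of one reading of the piecewise rules; the boundary cases follow the reconstruction above):

def pso_de_parameters(D):
    """Population size NP and iteration budget T as functions of the dimension D."""
    if 2 <= D <= 5:
        NP = 20 * D
    elif D <= 10:
        NP = 10 * D
    else:
        NP = 5 * D
    if D <= 10:
        T = 1 * 10**5
    elif D <= 30:
        T = 2 * 10**5
    elif D <= 50:
        T = 4 * 10**5
    else:
        T = 8 * 10**5
    return NP, T

# Example: pso_de_parameters(13) -> (65, 200000)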

6.2. Benchmark Function Test

To further verify the performance of the proposed algorithm, 12 benchmark functions are tested, which include various types of objective functions (linear, nonlinear and quadratic) with different numbers of decision variables and a range of constraint types (linear inequality, nonlinear equality and nonlinear inequality). Table 1 shows the specific characteristics of these 12 benchmark functions, where n denotes the number of decision variables; the type of objective function can be linear, nonlinear, polynomial, quadratic, cubic, etc. ρ = |Ω|/|S| denotes the estimated ratio of the feasible region to the search space, where |Ω| denotes the number of feasible solutions among |S| randomly sampled points, and the usual number of samples is 1,000,000. LI and NI denote linear and nonlinear inequality constraints; LE and NE denote linear and nonlinear equality constraints, respectively. a is the number of active constraints at the global optimal solution, and f(x^*) denotes the global optimal function value. The simulation code for the benchmark functions can be downloaded from http://www5.zzu.edu.cn/cilab/Benchmark/hysdmbyhcsj.htm (accessed on 15 September 2022). The superiority of the algorithm is illustrated by comparing the experimental results. From the numerical results in Table 2, it can be found that the optimal values of the PSO-DE algorithm on functions f_1, f_3, f_11 and f_12 coincide with the target values, and the others are close to the target values, which shows that the algorithm has strong accuracy.

6.2.1. The PSO-DE Compared with PSO Variant Algorithms in Benchmark Functions (See Appendix A)

To further illustrate the performance of the PSO-DE algorithm, it is applied to the 12 benchmark functions, and Table 3 and Table 4 show a statistical comparison of its results with those of four PSO variants: the improved multi-group particle swarm optimization variant (CMPSOWV) proposed by Koon Meng et al. [33], the adaptive particle swarm optimization algorithm (CVI-PSO) proposed by Issam Mazhoud et al. [34], the new particle swarm optimization algorithm (PSO+) proposed by Manoela et al. [35], and the improved vector particle swarm optimization algorithm (IVPSO) proposed by Sun et al. [36]. As can be seen from Table 2, Table 3 and Table 4, the PSO-DE algorithm is more effective than the other algorithms on the 12 constrained optimization problems tested. The optimal values of PSO-DE on the benchmark functions f_1, f_2, f_3, f_6, f_7, f_8, f_9, f_11 and f_12 are closer to the target values; its solutions are better than those of the other algorithms, and the optimal values on benchmark functions f_1, f_3, f_11 and f_12 coincide with the target values, from which it can be seen that PSO-DE solves the problems with higher accuracy. For benchmark function f_2, PSO-DE is more effective than CMPSOWV, CVI-PSO, PSO+ and IVPSO in terms of the optimal value and better than the other algorithms in terms of the average value. For benchmark function f_3, the obtained optimal value equals that of CVI-PSO, but the standard deviation and mean of PSO-DE are smaller, so PSO-DE has good robustness. The worst value obtained for benchmark function f_4 is better than the optimal values of CMPSOWV, CVI-PSO, PSO+ and IVPSO. For benchmark function f_5, the optimal value of PSO+ is better than those of the other algorithms. For benchmark function f_7, PSO-DE has a better optimal value than CVI-PSO, PSO+ and IVPSO, and the same optimal, worst and mean values as CMPSOWV; however, the standard deviation of PSO-DE is smaller, indicating better stability. PSO-DE and IVPSO outperform CVI-PSO, PSO+ and CMPSOWV in terms of the optimal value of benchmark function f_8. The results obtained on benchmark functions f_4, f_5 and f_10 are non-optimal but close to the optimal values, and the average values on benchmark functions f_2, f_3, f_4, f_6, f_7, f_8, f_9, f_11 and f_12 are the same as the optimal values. This further verifies that the PSO-DE algorithm outperforms the CVI-PSO, PSO+, IVPSO and CMPSOWV algorithms in solving constrained optimization problems with better accuracy and robustness.

6.2.2. The PSO-DE Compared with Other Algorithms

Table 5 and Table 6 compare PSO-DE on the 12 benchmark functions with the simultaneous multi-memory evolutionary strategy (SMES) proposed by Efrén Mezura-Montes and Coello [37], the constrained adaptive penalty function-based optimization algorithm (SAPF) proposed by Biruk Tessema [38], the improved genetic algorithm (GA) proposed by Adil Amirjanov [39], and the constrained Laplacian biogeography-based optimization algorithm (C-LXBBO) proposed by Vanita Garg et al. [40]. The column “Rank” represents the performance ranking of the five compared algorithms. From the results, it can be seen that PSO-DE finds the global optimal value of the problem on f_1, f_3, f_11 and f_12. On functions f_2, f_4, f_5, f_6, f_7, f_8, f_9 and f_10, the optimal value obtained by PSO-DE is close to the currently known optimum, and on benchmark functions f_2, f_6, f_7 and f_9 its optimal value is better than those of SMES, SAPF, GA and C-LXBBO. For benchmark function f_1, PSO-DE has the same optimal, worst and average values as SMES and C-LXBBO, and its optimal value is better than those of SAPF and GA. For benchmark function f_3, PSO-DE has the same optimal value as SMES, SAPF and GA, but the standard deviation of PSO-DE is smaller, indicating better stability. For benchmark function f_4, the optimal values found by SMES and GA are equal, the difference from PSO-DE is very small, and PSO-DE's optimal value is equal to its worst value, median and mean, so its stability is better. For benchmark function f_5, the optimal value, mean and standard deviation of PSO-DE are better than those of the other algorithms. For benchmark function f_6, the optimal values of PSO-DE and SMES are extremely close, but the standard deviation of PSO-DE is smaller. For benchmark function f_7, the worst value of PSO-DE is better than the optimal values of SMES, SAPF, GA and C-LXBBO. For benchmark functions f_9 and f_10, PSO-DE's optimal values are better than those of SMES, SAPF, GA and C-LXBBO. For benchmark function f_11, PSO-DE's optimal, worst and mean values are equal to those of SMES and GA, but PSO-DE's standard deviation is smaller, indicating the stability of the algorithm. The problem we study is a constrained minimization problem, and we consider the value that minimizes the fitness function as the optimal value. In the ranking, ‘Best’ is taken as the ranking index; if the optimal values of two algorithms are the same, we take ‘Best’, ‘Worst’, ‘Median’, ‘Mean’ and ‘SD’ together as ranking metrics. For example, for the benchmark functions f_1 and f_12, the ‘Best’, ‘Worst’, ‘Median’, ‘Mean’ and ‘SD’ of PSO-DE and SMES are all the same, and in that case PSO-DE and SMES are ranked randomly. For the benchmark functions f_3 and f_11, the ‘Best’, ‘Worst’, ‘Median’ and ‘Mean’ of PSO-DE and SMES are the same, and we rank first the algorithm with the smallest ‘SD’, because we consider that the smaller the SD obtained by an algorithm, the more stable it is. Overall, the PSO-DE algorithm outperforms the other algorithms in solving constrained optimization problems.

6.3. Engineering Constraint Optimization Problem

To further verify the capability of PSO-DE for solving real-world problems, this section focuses on solving the 57 real engineering problems of CEC2020. The 57 problems include industrial chemical process problems (RC01-RC07), process synthesis and design problems (RC08-RC14), mechanical engineering problems (RC15-RC33), power system problems (RC34-RC44), electronic circuit engineering problems (RC45-RC50), and livestock feed rationing problems (RC51-RC57). The specific information on the 57 engineering problems is presented in Table 7, where the number of decision variables of a problem ranges from 5 to 158, g denotes the number of inequality constraints, which ranges from 0 to 148, h denotes the number of equality constraints, which ranges from 0 to 91, and f(x^*) denotes the best known value of the corresponding function.
In this section, we select for comparison the top three algorithms from the proceedings of the CEC2020 competition at the Genetic and Evolutionary Computation Conference (GECCO2020). Since many engineering problems are high-dimensional and constrained by nonlinear inequality and equality constraints, solving them significantly tests the performance of the algorithms. These three algorithms are among the more advanced algorithms of recent years; the details of the compared algorithms are listed below.
1. SASS: Self-adaptive squirrel search algorithm [41].
2. COLSHADE: LSHADE with Lévy Flight [42].
3. sCMAgES: Improved Covariance Matrix Adaptive Evolution Strategy [43].
In order to further observe the effectiveness of PSO-DE in the CEC2020 test, the mean and standard deviation are used as evaluation indexes in this section. To compare the performance of solving engineering constrained optimization algorithms more comprehensively, the feasibility of algorithms on different problems is compared by Equation (30). In addition, the Wilcoxon rank sum test (Wil test) is applied for statistical tests to comprehensively evaluate performance of each algorithm, where “−” indicates that the performance of competitor algorithm is inferior to PSO-DE, “+” indicates that the performance of competitor algorithm is better than PSO-DE, and “=” indicates that competitor algorithm’s performance is close to PSO-DE.
FR = \frac{Total\ feasible\ trials}{Total\ trials} \times 100\%
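Equation (30) is straightforward to compute from the recorded runs; a trivial Python sketch (illustrative only):

def feasibility_rate(feasible_trials, total_trials):
    """FR = (number of trials that end with a feasible solution) / (total trials) * 100%."""
    return 100.0 * feasible_trials / total_trials

# Example: 30 independent runs, all ending feasible -> feasibility_rate(30, 30) == 100.0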
For the industrial chemical process problems (RC01–RC07), we can see from Table 8 that RC01–RC05 are relatively simple optimization problems, all the above algorithms can solve these problems and PSO-DE obtains the best results on RC03. To further observe the performance of PSO-DE in solving the seven industrial chemical process problems, the variation of convergence curves of PSO-DE for the seven problems is shown in Figure 7, in which it can be seen that the whole population enters the bounded range with 100% of feasible solutions, especially for RC01, RC02, RC03 and RC05, which converge quickly to the optimal solution from the beginning of the iteration, and only require short evaluation times and computational resources to search for the optimal solution. RC04 converges unstably in the early stage of convergence, with large curve fluctuations, but also converges successfully to the optimal value in the late stage of evolution, with 100% feasibility. From the numerical results and convergence curves, it is easy to find that the PSO-DE algorithm has the best performance among the four algorithms and has better solution effect and stability.
For the process synthesis and design problems (RC08–RC14), PSO-DE results take the lead in RC08, RC09, RC12, RC13 and RC14. PSO-DE significantly improves the performance of RC09, RC12, RC13 and RC14 problems, but it reduces the performance of RC11 problems. SASS shows better results on RC08 and RC11. To further observe the performance of PSO-DE in solving these problems, the variation of the convergence curves of RC08–RC14 is plotted, as shown in Figure 7. RC08, RC10, RC12, RC13 and RC14 converge quickly to the optimal solution from the beginning of the iteration, and only a small number of evaluations and computational resources are required to search for the optimal solution (see Table 9).
For the mechanical engineering problems (RC15–RC33), PSO-DE significantly outperforms the other algorithms for most of the problems. In addition, the mean results for RC15, RC16, RC31–RC33, RC25–RC27 and RC20–RC22 show that they outperform SASS, COLSHADE and sCMAgES. This indicates that PSO-DE can solve this set of problems efficiently (see Table 10).
For the power system problems (RC34–RC44), PSO-DE ranks first in performance on RC38 and RC40–RC44, while SASS has the best mean value on RC34–RC36 and RC39. The power system problems contain many constraints and are among the more complex realistic constrained optimization problems, and PSO-DE shows excellent performance on them, which indicates that PSO-DE can adequately balance exploration and exploitation when dealing with these problems.
For the electronic circuit problems (RC45–RC50), COLSHADE performs significantly better than the other algorithms on RC45–RC50. It can be seen that PSO-DE does not improve on the performance of SASS here.
For the livestock feed rationing problem (RC51–RC57), PSO-DE ranked first in results on RC52–RC57 and sCMAgES ranked first in results on RC51. However, from an overall perspective, PSO-DE ranked first in terms of mean results.
PSO-DE significantly outperformed SASS, COLSHADE and sCMAgES on more than half of the problems, and compared with SASS it significantly improved the performance on the 57 problems. In other words, PSO-DE outperformed the other three comparison algorithms overall on these 57 problems of CEC2020.

6.4. RC17 and RC19

To further illustrate that PSO-DE can effectively solve practical problems, we select two common engineering problems, RC17 (Tension/Compression spring design) and RC19 (welded beam design), for further detailed analysis in this paper.

6.4.1. RC17

The optimization goal of the tension/compression spring design is to minimize the weight of the spring under constraints on shear stress, surge frequency and minimum deflection. The design variables are the wire diameter d (x_1), the mean coil diameter D (x_2) and the number of active coils P (x_3). Figure 8 presents the design of the spring; the mathematical model is as follows:
\min f(x) = (x_3 + 2) x_1^2 x_2
\text{s.t.} \quad g_1(x) = 1 - \frac{x_2^3 x_3}{71784 x_1^4} \le 0,
\quad g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_1^3 x_2 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,
\quad g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,
\quad g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0,
\quad 0.05 \le x_1 \le 2, \; 0.25 \le x_2 \le 1.3, \; 2 \le x_3 \le 15.
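For reference, the spring model above can be encoded in Python as follows (a sketch following the reconstructed formulation; the function names are illustrative), which makes it directly usable with a violation measure such as the violation_degree helper sketched earlier:

def spring_objective(x):
    """Spring weight f(x) = (x3 + 2) * x1^2 * x2."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x1**2 * x2

def spring_constraints(x):
    """Inequality constraints g_j(x) <= 0 of the spring design problem."""
    x1, x2, x3 = x
    g1 = 1.0 - (x2**3 * x3) / (71784.0 * x1**4)
    g2 = ((4.0 * x2**2 - x1 * x2) / (12566.0 * (x1**3 * x2 - x1**4))
          + 1.0 / (5108.0 * x1**2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2**2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]

# Bounds: 0.05 <= x1 <= 2, 0.25 <= x2 <= 1.3, 2 <= x3 <= 15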
As observed in Table 11, among the four algorithms, PSO-DE and IAPSO obtain better optimal results than the other algorithms in the simulation. In Table 12, we can further see that the optimal value obtained by PSO-DE is better than those obtained by CPSO and APSO. It is close to the optimal values obtained by CVI-PSO and IAPSO, but PSO-DE is significantly better than these two algorithms in terms of the worst value, mean and standard deviation, which indicates better stability. Therefore, the PSO-DE algorithm outperforms the other algorithms in solving the tension/compression spring design problem.

6.4.2. RC19

The goal of the welded beam design problem is to minimize the manufacturing cost. As shown in Figure 9, there are four design variables associated with this problem: the weld thickness h = x_1, the welded joint length l = x_2, the beam width t = x_3 and the beam thickness b = x_4; the decision vector is X = (x_1, x_2, x_3, x_4). The mathematical expression of the objective function f(X) consists mainly of the total manufacturing cost. While optimizing the design variables, we ensure that the constraints are not violated; these involve the shear stress τ, the bending stress in the beam σ, the deflection of the beam end δ and the buckling load of the bar P_b. The mathematical model is as follows:
\min f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)
\text{s.t.} \quad g_1(x) = \tau(X) - \tau_{max} \le 0,
\quad g_2(x) = \sigma(X) - \sigma_{max} \le 0,
\quad g_3(x) = \delta(X) - \delta_{max} \le 0,
\quad g_4(x) = x_1 - x_4 \le 0,
\quad g_5(x) = P - P_b(X) \le 0,
\quad g_6(x) = 0.125 - x_1 \le 0,
\text{where} \; 0.1 \le x_1, x_2 \le 2, \; 0.1 \le x_3, x_4 \le 10, \; and
\tau(X) = \sqrt{(\tau')^2 + (\tau'')^2 + \frac{l \tau' \tau''}{\sqrt{0.25 (l^2 + (h + t)^2)}}}, \quad \tau' = \frac{6000}{\sqrt{2} h l},
\tau'' = \frac{6000 (14 + 0.5 l) \sqrt{0.25 (l^2 + (h + t)^2)}}{2 [0.707 h l (l^2 / 12 + 0.25 (h + t)^2)]},
\sigma(X) = \frac{504000}{t^2 b}, \quad \delta(X) = \frac{65856000}{(30 \times 10^6) b t^3},
P_b(X) = 64746.022 (1 - 0.0282346 t) t b^3.
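A Python sketch of this model follows (illustrative only; the applied load P = 6000 and the allowable limits tau_max = 13,600, sigma_max = 30,000 and delta_max = 0.25 are the values commonly used for this benchmark and are assumptions here, since the section does not list them explicitly):

import math

def welded_beam_objective(x):
    """Manufacturing cost of the welded beam."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x, P=6000.0, tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """Inequality constraints g_j(x) <= 0 of the welded beam problem."""
    h, l, t, b = x
    R = math.sqrt(0.25 * (l**2 + (h + t)**2))
    tau_p = P / (math.sqrt(2.0) * h * l)                                       # tau'
    tau_pp = (P * (14.0 + 0.5 * l) * R
              / (2.0 * (0.707 * h * l * (l**2 / 12.0 + 0.25 * (h + t)**2))))   # tau''
    tau = math.sqrt(tau_p**2 + tau_pp**2 + l * tau_p * tau_pp / R)
    sigma = 504000.0 / (t**2 * b)
    delta = 65856000.0 / (30e6 * b * t**3)
    P_b = 64746.022 * (1.0 - 0.0282346 * t) * t * b**3
    return [tau - tau_max, sigma - sigma_max, delta - delta_max,
            h - b, P - P_b, 0.125 - h]

# Bounds: 0.1 <= h, l <= 2 and 0.1 <= t, b <= 10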
Table 13 and Table 14 show the comparison of PSO-DE with several other algorithms on the welded beam structural optimization problem. The variables of the optimal solution are found to lie on the boundary of the feasible region. The optimal objective value obtained by PSO-DE is 1.724852, which is better than the results obtained by CPSO, APSO and IAPSO and similar to the optimal value obtained by CVI-PSO, but the mean, worst value and standard deviation obtained by PSO-DE are better than those of CVI-PSO. Therefore, the PSO-DE algorithm outperforms the other algorithms in solving the welded beam structure problem.

7. Conclusions

In this paper, we propose the PSO-DE algorithm, which combines particle swarm optimization with the differential evolution algorithm and uses differential evolution to improve the performance of particle swarm optimization. To verify the effectiveness and advancement of PSO-DE, it is applied to 12 classical constrained benchmark functions and 57 engineering optimization problems, and 2 common engineering optimization problems (RC17 and RC19) are selected from the 57 engineering problems for further analysis. The numerical results show that PSO-DE has good stability, robustness and global search capability. Meanwhile, the PSO-DE algorithm has obvious advantages in solving constrained optimization, and its performance is better than other PSO variants and state-of-the-art EAs, so it has great potential in solving complex practical problems. Therefore, PSO-DE is a constrained evolutionary algorithm worth adopting and promoting.
In further research, more efficient constraint processing techniques need to be designed and embedded in various metaheuristic algorithms. Moreover, multi-objective optimization is a very challenging topic. It is also interesting to combine the proposed constraint evolution algorithms with other techniques to solve multi-objective optimization problems. When dealing with realistic problems, it is important to pay more attention to the fundamental impact of the operator in addition to analyzing the diversity and striking a balance between exploration and exploitation. In the future, PSO-DE can be used to solve more complex real-world problems, such as image segmentation, data clustering and feature selection for photovoltaic cells.

Author Contributions

E.G. performed the methodology, investigation and writing the draft. Y.G. supervised the research and edited and reviewed the final draft. C.H. and J.Z. performed the experiments and reviewed the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Project of Ningxia Natural Science Foundation (2022AAC02043), the Construction Project of First-class Subjects in Ningxia Higher Education (NXYLXK2017B09), the Major Proprietary Funded Project of North Minzu University (ZDZX201901), the Basic Discipline Research Projects supported by Nanjing Securities (NJZQJCXK202201), and the North Minzu University Postgraduate Innovation Program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Benchmark Functions

$$
\begin{aligned}
(f01)\quad \min\ & f(x)=5\sum_{i=1}^{4}x_i-5\sum_{i=1}^{4}x_i^{2}-\sum_{i=5}^{13}x_i\\
\text{s.t.}\ & g_{1}(x)=2x_1+2x_2+x_{10}+x_{11}-10\le 0,\quad g_{2}(x)=2x_1+2x_3+x_{10}+x_{12}-10\le 0,\\
& g_{3}(x)=2x_2+2x_3+x_{11}+x_{12}-10\le 0,\quad g_{4}(x)=-8x_1+x_{10}\le 0,\quad g_{5}(x)=-8x_2+x_{11}\le 0,\\
& g_{6}(x)=-8x_3+x_{12}\le 0,\quad g_{7}(x)=-2x_4-x_5+x_{10}\le 0,\quad g_{8}(x)=-2x_6-x_7+x_{11}\le 0,\\
& g_{9}(x)=-2x_8-x_9+x_{12}\le 0,\\
& x_i\in[0,1],\ i=1,\ldots,9,13,\quad x_i\in[0,100],\ i=10,11,12.
\end{aligned}
$$
$$
\begin{aligned}
(f02)\quad \min\ & f(x)=-\left|\frac{\sum_{i=1}^{n}\cos^{4}(x_i)-2\prod_{i=1}^{n}\cos^{2}(x_i)}{\sqrt{\sum_{i=1}^{n}i\,x_i^{2}}}\right|\\
\text{s.t.}\ & g_{1}(x)=0.75-\prod_{i=1}^{n}x_i\le 0,\quad g_{2}(x)=\sum_{i=1}^{n}x_i-7.5\,n\le 0,\\
& 0<x_i\le 10,\ i=1,2,\ldots,n,\quad n=20.
\end{aligned}
$$
$$
\begin{aligned}
(f03)\quad \min\ & f(x)=-(\sqrt{n})^{\,n}\prod_{i=1}^{n}x_i\\
\text{s.t.}\ & h_{1}(x)=\sum_{i=1}^{n}x_i^{2}-1=0,\quad x_i\in[0,1],\ i=1,2,\ldots,n,\quad n=10.
\end{aligned}
$$
$$
\begin{aligned}
(f04)\quad \min\ & f(x)=5.3578547x_3^{2}+0.8356891x_1x_5+37.293239x_1-40792.141\\
\text{s.t.}\ & g_{1}(x)=85.334407+0.0056858x_2x_5+0.0006262x_1x_4-0.0022053x_3x_5-92\le 0,\\
& g_{2}(x)=-85.334407-0.0056858x_2x_5-0.0006262x_1x_4+0.0022053x_3x_5\le 0,\\
& g_{3}(x)=80.51249+0.0071317x_2x_5+0.0029955x_1x_2+0.0021813x_3^{2}-110\le 0,\\
& g_{4}(x)=-80.51249-0.0071317x_2x_5-0.0029955x_1x_2-0.0021813x_3^{2}+90\le 0,\\
& g_{5}(x)=9.300961+0.0047026x_3x_5+0.0012547x_1x_3+0.0019085x_3x_4-25\le 0,\\
& g_{6}(x)=-9.300961-0.0047026x_3x_5-0.0012547x_1x_3-0.0019085x_3x_4+20\le 0,\\
& x_1\in[78,102],\quad x_2\in[33,45],\quad x_i\in[27,45],\ i=3,4,5.
\end{aligned}
$$
$$
\begin{aligned}
(f05)\quad \min\ & f(x)=3x_1+0.000001x_1^{3}+2x_2+(0.000002/3)x_2^{3}\\
\text{s.t.}\ & g_{1}(x)=-x_4+x_3-0.55\le 0,\quad g_{2}(x)=-x_3+x_4-0.55\le 0,\\
& h_{3}(x)=1000\sin(-x_3-0.25)+1000\sin(-x_4-0.25)+894.8-x_1=0,\\
& h_{4}(x)=1000\sin(x_3-0.25)+1000\sin(x_3-x_4-0.25)+894.8-x_2=0,\\
& h_{5}(x)=1000\sin(x_4-0.25)+1000\sin(x_4-x_3-0.25)+1294.8=0,\\
& x_1,x_2\in[0,1200],\quad x_3,x_4\in[-0.55,0.55].
\end{aligned}
$$
$$
\begin{aligned}
(f06)\quad \min\ & f(x)=(x_1-10)^{3}+(x_2-20)^{3}\\
\text{s.t.}\ & g_{1}(x)=-(x_1-5)^{2}-(x_2-5)^{2}+100\le 0,\quad g_{2}(x)=(x_1-6)^{2}+(x_2-5)^{2}-82.81\le 0,\\
& x_1\in[13,100],\quad x_2\in[0,100].
\end{aligned}
$$
$$
\begin{aligned}
(f07)\quad \min\ & f(x)=x_1^{2}+x_2^{2}+x_1x_2-14x_1-16x_2+(x_3-10)^{2}+4(x_4-5)^{2}+(x_5-3)^{2}+2(x_6-1)^{2}\\
& \qquad\quad +5x_7^{2}+7(x_8-11)^{2}+2(x_9-10)^{2}+(x_{10}-7)^{2}+45\\
\text{s.t.}\ & g_{1}(x)=-105+4x_1+5x_2-3x_7+9x_8\le 0,\quad g_{2}(x)=10x_1-8x_2-17x_7+2x_8\le 0,\\
& g_{3}(x)=-8x_1+2x_2+5x_9-2x_{10}-12\le 0,\quad g_{4}(x)=3(x_1-2)^{2}+4(x_2-3)^{2}+2x_3^{2}-7x_4-120\le 0,\\
& g_{5}(x)=5x_1^{2}+8x_2+(x_3-6)^{2}-2x_4-40\le 0,\quad g_{6}(x)=x_1^{2}+2(x_2-2)^{2}-2x_1x_2+14x_5-6x_6\le 0,\\
& g_{7}(x)=0.5(x_1-8)^{2}+2(x_2-4)^{2}+3x_5^{2}-x_6-30\le 0,\quad g_{8}(x)=-3x_1+6x_2+12(x_9-8)^{2}-7x_{10}\le 0,\\
& x_i\in[-10,10],\ i=1,2,\ldots,10.
\end{aligned}
$$
$$
\begin{aligned}
(f08)\quad \min\ & f(x)=-\frac{\sin^{3}(2\pi x_1)\sin(2\pi x_2)}{x_1^{3}(x_1+x_2)}\\
\text{s.t.}\ & g_{1}(x)=x_1^{2}-x_2+1\le 0,\quad g_{2}(x)=1-x_1+(x_2-4)^{2}\le 0,\\
& x_1\in[0,10],\quad x_2\in[0,10].
\end{aligned}
$$
$$
\begin{aligned}
(f09)\quad \min\ & f(x)=(x_1-10)^{2}+5(x_2-12)^{2}+x_3^{4}+3(x_4-11)^{2}+10x_5^{6}+7x_6^{2}+x_7^{4}-4x_6x_7-10x_6-8x_7\\
\text{s.t.}\ & g_{1}(x)=-127+2x_1^{2}+3x_2^{4}+x_3+4x_4^{2}+5x_5\le 0,\quad g_{2}(x)=-282+7x_1+3x_2+10x_3^{2}+x_4-x_5\le 0,\\
& g_{3}(x)=-196+23x_1+x_2^{2}+6x_6^{2}-8x_7\le 0,\quad g_{4}(x)=4x_1^{2}+x_2^{2}-3x_1x_2+2x_3^{2}+5x_6-11x_7\le 0,\\
& x_i\in[-10,10],\ i=1,2,\ldots,7.
\end{aligned}
$$
$$
\begin{aligned}
(f10)\quad \min\ & f(x)=x_1+x_2+x_3\\
\text{s.t.}\ & g_{1}(x)=-1+0.0025(x_4+x_6)\le 0,\quad g_{2}(x)=-1+0.0025(x_5+x_7-x_4)\le 0,\quad g_{3}(x)=-1+0.01(x_8-x_5)\le 0,\\
& g_{4}(x)=-x_1x_6+833.33252x_4+100x_1-83333.333\le 0,\quad g_{5}(x)=-x_2x_7+1250x_5+x_2x_4-1250x_4\le 0,\\
& g_{6}(x)=-x_3x_8+1250000+x_3x_5-2500x_5\le 0,\\
& x_1\in[100,10000],\quad x_i\in[1000,10000],\ i=2,3,\quad x_i\in[10,1000],\ i=4,\ldots,8.
\end{aligned}
$$
$$
\begin{aligned}
(f11)\quad \min\ & f(x)=x_1^{2}+(x_2-1)^{2}\\
\text{s.t.}\ & h_{1}(x)=x_2-x_1^{2}=0,\quad x_i\in[-1,1],\ i=1,2.
\end{aligned}
$$
$$
\begin{aligned}
(f12)\quad \min\ & f(x)=-\bigl(100-(x_1-5)^{2}-(x_2-5)^{2}-(x_3-5)^{2}\bigr)/100\\
\text{s.t.}\ & g(x)=(x_1-p)^{2}+(x_2-q)^{2}+(x_3-r)^{2}-0.0625\le 0,\\
& x_i\in[0,10],\ i=1,2,3,\quad p,q,r=1,2,\ldots,9,
\end{aligned}
$$
where a point is feasible if the constraint holds for at least one combination of p, q and r.
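To connect the benchmark definitions above with the feasibility rules used throughout the paper, a minimal Python sketch is given below. It is not the authors' code: the equality-constraint tolerance ε = 1e-4 and the helper names are illustrative assumptions, and f11 is used as the example only because it has a single, simple equality constraint.

```python
import numpy as np

EPS = 1e-4  # assumed tolerance for converting equality constraints into |h| - EPS <= 0

def violation(g, h):
    """Constraint-violation degree: positive parts of g plus positive parts of |h| - EPS."""
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    return np.sum(np.maximum(0.0, g)) + np.sum(np.maximum(0.0, np.abs(h) - EPS))

def better(fa, va, fb, vb):
    """Feasibility rules: feasible beats infeasible; otherwise compare f or the violation."""
    if va == 0.0 and vb == 0.0:
        return fa <= fb          # both feasible: smaller objective wins
    if va == 0.0 or vb == 0.0:
        return va == 0.0         # exactly one feasible: the feasible one wins
    return va <= vb              # both infeasible: smaller violation degree wins

def f11(x):
    """Benchmark (f11): objective, inequality constraints, equality constraints."""
    x1, x2 = x
    return x1**2 + (x2 - 1.0)**2, [], [x2 - x1**2]

xa = np.array([0.70710678, 0.5])   # satisfies x2 = x1^2, f close to the optimum 0.75
xb = np.array([0.6, 0.2])          # violates the equality constraint
fa, ga, ha = f11(xa)
fb, gb, hb = f11(xb)
print(better(fa, violation(ga, ha), fb, violation(gb, hb)))  # True: xa is preferred
```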

References

1. Cheng, Z.; Song, H.; Wang, J. Hybrid firefly algorithm with grouping attraction for constrained optimization problem. Knowl.-Based Syst. 2021, 220, 106937.
2. Kumar, A.; Das, S.; Zelinka, I. A self-adaptive spherical search algorithm for real-world constrained optimization problems. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Lisbon, Portugal, 26 June 2020.
3. Liu, Z.; Qin, Z.; Zhu, P. Improved whale optimization algorithm for solving constrained optimization problems. Eng. Appl. Artif. Intell. 2020, 95, 103771.
4. Ning, G.Y.; Cao, D.Q. An adaptive switchover hybrid particle swarm optimization algorithm with local search strategy for constrained optimization problems. Discret. Dyn. Nat. Soc. 2021, 95, 103771.
5. Kumar, N.; Mahato, S.K.; Bhunia, A.K. A new QPSO based hybrid algorithm for constrained optimization problems via tournamenting process. Soft Comput. 2020, 24, 11365–11379.
6. Schlüter, M.; Gerdts, M. The oracle penalty method. J. Glob. Optim. 2010, 47, 293–325.
7. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338.
8. Ridha, H.M.; Gomes, C.; Hizam, H. Multi-objective optimization and multi-criteria decision-making methods for optimal design of standalone photovoltaic system: A comprehensive review. Renew. Sustain. Energy Rev. 2021, 135, 110202.
9. Sampson, J.R. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 1976; Volume 529.
10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November 1995.
11. Burnet, S.F.M. The Clonal Selection Theory of Acquired Immunity, 3rd ed.; Vanderbilt University Press: Rome, Italy, 1961.
12. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. 1996, 26, 29–41.
13. Price, K.V. Differential Evolution, 3rd ed.; Springer: Heidelberg/Berlin, Germany, 2013; pp. 187–214.
14. Eusuff, M.; Lansey, K.; Pasha, F. Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization. Eng. Optim. 2006, 38, 129–154.
15. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132.
16. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
17. Khabibullin, A.; Mastan, E.; Matyjaszewski, K. Surface-initiated atom transfer radical polymerization. In Controlled Radical Polymerization at and from Solid Surfaces; Springer: Cham, Switzerland, 2015; pp. 29–76.
18. Medjahed, S.A.; Saadi, T.A.; Benyettou, A. Gray wolf optimizer for hyperspectral band selection. Appl. Soft Comput. 2016, 40, 178–186.
19. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
20. Heidari, A.A.; Mirjalili, S.; Faris, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
21. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
22. Hashim, F.A.; Houssein, E.H.; Hussain, K. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110.
23. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1999, 1, 67–82.
24. Al Thobiani, F.; Khatir, S.; Benaissa, B.G. A hybrid PSO and Grey Wolf Optimization algorithm for static and dynamic crack identification. Theor. Appl. Fract. Mech. 2022, 118, 103213.
25. Raval, P.D.; Pandya, A.S. A hybrid PSO-ANN-based fault classification system for EHV transmission lines. IETE J. Res. 2022, 68, 3086–3099.
26. Tsao, Y.C.; Delicia, M.; Vu, T.L. Marker planning problem in the apparel industry: Hybrid PSO-based heuristics. Appl. Soft Comput. 2022, 123, 108928.
27. Zhang, M. Marker Classification Prediction of Rockburst in Railway Tunnel Based on Hybrid PSO-BP Neural Network. Geofluids 2022, 2022, 4673073.
28. Pu, Q.; Gan, J.; Qiu, L. An efficient hybrid approach based on PSO, ABC and k-means for cluster analysis. Multimed. Tools Appl. 2022, 81, 19321–19339.
29. Tawhid, M.A.; Ibrahim, A.M. A hybridization of grey wolf optimizer and differential evolution for solving nonlinear systems. Evol. Syst. 2022, 11, 65–87.
30. Long, W.; Liang, X.; Huang, Y. An effective hybrid cuckoo search algorithm for constrained global optimization. Neural Comput. Appl. 2014, 25, 911–926.
31. Jadon, S.S.; Tiwari, R.; Sharma, H. Hybrid artificial bee colony algorithm with differential evolution. Appl. Soft Comput. 2017, 58, 11–24.
32. Dong, M.; Wang, N.; Cheng, X. Composite differential evolution with modified oracle penalty method for constrained optimization problems. Math. Probl. Eng. 2014, 2014, 617905.
33. Ang, K.M.; Lim, W.H.; Isa, N.A.M. A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems. Expert Syst. Appl. 2020, 140, 112882.
34. Mazhoud, I.; Hadj-Hamou, K.; Bigeon, J. Particle swarm optimization for solving engineering problems: A new constraint-handling mechanism. Eng. Appl. Artif. Intell. 2013, 26, 1263–1273.
35. Kohler, M.; Vellasco, M.M.B.R.; Tanscheit, R. PSO+: A new particle swarm optimization algorithm for constrained problems. Appl. Soft Comput. 2019, 85, 105865.
36. Sun, C.; Zeng, J.; Pan, J. An improved vector particle swarm optimization for constrained optimization problems. Inf. Sci. 2011, 181, 1153–1163.
37. Mezura-Montes, E.; Coello, C.A.C. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans. Evol. Comput. 2005, 9, 1–17.
38. Tessema, B.; Yen, G.G. A self adaptive penalty function based algorithm for constrained optimization. In Proceedings of the IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 246–253.
39. Amirjanov, A. The development a changing range genetic algorithm. Comput. Methods Appl. Mech. Eng. 2006, 195, 2495–2508.
40. Garg, V.; Deep, K. Constrained Laplacian biogeography-based optimization algorithm. Int. J. Syst. Assur. Eng. Manag. 2017, 8, 867–885.
41. Kumar, A.; Wu, G.; Ali, M.Z. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693.
42. Gurrola-Ramos, J.; Hernàndez-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for real-world single-objective constrained optimization problems. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
43. Hellwig, M.; Beyer, H.G. A modified matrix adaptation evolution strategy with restarts for constrained real-world problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
44. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–90.
45. Yang, X.S.; Deb, S.; Fong, S. Accelerated Particle Swarm Optimization and Support Vector Machine for Business Optimization and Applications. Commun. Comput. Inf. Sci. 2011, 136, 53–66.
46. Guedria, N.B. Improved accelerated PSO algorithm for mechanical engineering optimization problems. Eng. Appl. Artif. Intell. 2016, 40, 455–467.
Figure 1. Graphical abstract of this paper.
Figure 2. Particle renewal diagram.
Figure 3. Operation flow of Particle Swarm Optimization.
Figure 4. Operation flow of Differential Evolution.
Figure 5. The flow chart of PSO-DE.
Figure 6. The basic framework of the PSO-DE.
Figure 7. Convergence curve variation of RC01–RC14.
Figure 8. Design of pressure springs.
Figure 9. Design of welding beam.
Table 1. The specific characteristics of the 12 benchmark functions.
Problem f ( x ) n LI NI LE NE a ρ Type
f 01 15.00000000 13900060.01111%Quadratic
f 02 0.8036191042 200200199.9971%Nonlinear
f 03 1.000501000 10000110.0000%Polynomial
f 04 30665.538671 50600252.1230%Quadratic
f 05 5126.49671400714200330.0000%Cubic
f 06 6961.81387558 2020020.0063%Cubic
f 07 24.306209068110340060.0003%Quadratic
f 08 0.0958250415 2020000.8560%Nonlinear
f 09 680.63005737457040020.5121%Polynomial
f 10 7049.24802052868330060.0010%Linear
f 11 0.74990000002000110.0000%Quadratic
f 12 1.0000000000 3010004.7713%Quadratic
Table 2. The results of PSO-DE algorithm in the benchmark functions.
Problem | Optimal | Best | Worst | Median | Mean | SD
f 01 −15.00000000−15.00000000−15.00000000−15.00000000−15.000000000.00E+00
f 02 −0.8036191042−0.8036191041−0.792607932−0.803619097−0.8009723884.5E-03
f 03 −1.000501000−1.000501000−1.000501000−1.000501000−1.0005010006.2E-16
f 04 30665.538671 30665.538671 30665.538671 30665.538671 30665.538671 3.71E-12
f 05 5126.4967140071 5126.4981095953 5975.5813629093 5126.4981095953 5154.8008847057 1.6E+02
f 06 6961.81387558 6961.81387558 6961.81387558 6961.81387558 6961.81387558 1.9E-12
f 07 24.3062090681 24.3062090682 24.3062090682 24.3062090682 24.3062090682 1.1E-14
f 08 0.0958250415 0.0958250000 0.0958250000 0.0958250000 0.0958250000 2.8E-17
f 09 680.6300573745 680.6300573744 680.6300573744 680.6300573744 680.6300573744 4.5E-13
f 10 7049.24802053 7049.24802053 7049.24802053 7049.24802053 7049.24802053 4.9E-12
f 11 0.7499000000 0.7500000000 0.7500000000 0.7500000000 0.7500000000 0.00E+00
f 12 1.0000000000 1.0000000000 1.0000000000 1.0000000000 1.0000000000 0.00E+00
Table 3. The results of PSO-DE compared with PSO variant algorithms in benchmark functions.
Problem | Metrics | Algorithm
PSO-DE | CMPSOWV | CVI-PSO | IVPSO | PSO+
f 01 Best−15.00000000−15.00000000−15.00000000−15.00000000−15.00000000
Worst−15.00000000−15.00000000−15.00000000−15.00000000−15.00000000
Mean−15.00000000−15.00000000−15.00000000−15.00000000−15.00000000
SD0.00E+000.00E+004.50E-160.00E+000.00E+00
f 02 Best−0.80361910−0.80361951−0.80097742−0.80361900−0.79670000
Worst−0.79260793−0.79308399−0.79087558−0.70347700−0.77430000
Mean−0.80361910−0.80284728−0.74694210−0.76988900−0.79670000
SD4.50E-032.70E-031.09E-024.68E-031.59E-06
f 03 Best−1.00000000−1.00500100−1.00000000−1.00510100−1.00000000
Worst−1.00000000−0.99452331−1.00000000−1.00510100−0.09390000
Mean−1.00000000−1.00026098−0.99999999−1.00510100−0.67390000
SD6.20E-161.2-033.70E-160.00E+001.16E-15
f 04 Best−30,665.53867100−30,665.53867178−30,665.82171022−30,665.53867200−30,665.50000000
Worst−30,665.53867100−30,66.53867178−30,668.82099611−30,665.53867200−30,665.50000000
Mean−30,665.53867100−30,665.53867178−30,665.80324114−30,665.53867200−30,661.09000000
SD3.71E-127.40E-123.39E-030.00E+005.09E-11
f 05 Best5126.498109605126.496714015127.277667355126.492646005126.49000000
Worst5975.58136291512.496714015127.277667355126.492646005905.70000000
Mean5154.80088471512.496714015127.277667355126.492646005126.49000000
SD1.60E+023.00E+020.00E+000.00E+001.66E-10
f 06 Best−6961.81387558−6961.81387558−6961.81387558−6961.81387600−6961.80000000
Worst−6961.81387558−6961.81387558−6961.81387558−6961.81387600−6783.10000000
Mean−6961.81387558−6961.81387558−6961.81387558−6961.81387600−6961.70000000
SD1.90E-120.00E+000.00E+000.00E+003.18E-01
f 07 Best24.3062090724.3062090724.4738268424.3064970024.31000000
Worst24.3062090724.3062090726.5612954325.0048550042.25000000
Mean24.3062090724.3062090729.5242543024.4296690026.46000000
SD1.10E-141.40E-111.64E+004.70E-028.52E-14
Table 4. The results of PSO-DE compared with PSO variant algorithms in benchmark functions.
Problem | Metrics | Algorithm
PSO-DE | CMPSOWV | CVI-PSO | IVPSO | PSO+
f 08 Best−0.09582500−0.09582541−0.10545951−0.09582500−0.09582000
Worst−0.09582500−0.09582541−0.10545951−0.09582500−0.08544000
Mean−0.09582500−0.09582541−0.10545951−0.09582500−0.09582000
SD2.80E-172.40E-170.00E+000.00E+001.89E-04
f 09 Best680.63005737680.63005740680.63540079680.63005800680.63000000
Worst680.63005737680.63005740680.75570515680.63013900680.47000000
Mean680.63005737680.63005740680.86395783680.63007700681.61340000
SD4.50E-134.50E-127.92E-023.00E-069.45E-01
f 10 Best7049.248020537049.248020507049.276585527049.351110007050.26000000
Worst7049.248020537049.248020507053.214310607068.507274007222.77000000
Mean7049.248020537049.248020507091.880859127053.619189507050.26000000
SD4.90E-128.10E-121.06E+017.71E-018.13E-03
f 11 Best0.750000000.750000000.750000000.749000000.75000000
Worst0.750000000.750000000.750000000.749000000.80270000
Mean0.750000000.750000000.750000000.749000000.75000000
SD0.00E+000.00E+000.00E+000.00E+008.88E-02
f 12 Best−1.00000000−1.00000000−1.00000000−1.00000000−1.00000000
Worst−1.00000000−1.00000000−1.00000000−1.00000000−1.00000000
Mean−1.00000000−1.00000000−1.00000000−1.00000000−1.00000000
SD0.00E+000.00E+000.00E+000.00E+001.27E-10
Table 5. The results of PSO-DE compared with other algorithms in the benchmark function.
Problem | Metrics | Algorithm
PSO-DE | SMES | SAPF | GA | C-LXBBO
f 01 Best−15.00000000−15.0000000000−15.0000000000−14.9977000000−15.0000000000
Worst−15.00000000−15.00000000−13.0970000000−14.9467000000−15.0000000000
Median−15.00000000−15.00000000−14.9660000000−14.9918000000−15.0000000000
Mean−15.00000000−15.00000000−14.5520000000−14.9850000000−15.0000000000
SD0.00E+000.00E+007.00E-011.40E-020.00E+00
Rank12453
f 02 Best−0.8036191041−0.8036190000−0.8032020000−0.8029590000−0.7998200000
Worst−0.7926079317−0.7513220000−0.7457120000−0.7221090000−0.7243700000
Median−0.80361909710.7925490000−0.7899398000−0.7596500000−0.7765900000
Mean−0.8009723829−0.78523800000.7557980000−0.7644940000−0.7765900000
SD4.50E-031.70E-021.30E-012.60E-023.86E-02
Rank12435
f 03 Best−1.0000000000−1.0000000000−1.0000000000−1.0000000000−1.0010000000
Worst−1.0000000000−1.0000000000−0.8870000000−0.9931000000−0.9990000000
Median−1.0000000000−1.0000000000−0.9710000000−0.99750000000.9997700000
Mean−1.0000000000−1.00000000000.9640000000−0.9972000000−0.9997700000
SD6.20E-162.10E-043.00E-011.40E-035.60E-04
Rank12435
f 04 Best−30665.5386717830−30665.5390000000−30665.4010000000−30665.5390000000−30665.5000000000
Worst−30665.5386717830−30665.5390000000−30656.4710000000−30660.3130000000−30668.5400000000
Median−30665.5386717830−30665.5390000000−30663.9210000000−30665.2520000000−28212.3000000000
Mean−30665.5386717830−30665.5390000000−306659.2210000000−30664.3980000000−28212.3000000000
SD3.71E-120.00E+002.00E+001.60E+003.07E+04
Rank31524
f 05 Best5126.498109605126.599000005126.907000005126.500000005126.66200000
Worst5975.581362915304.167000005564.642000006112.075000005500.05300000
Median5126.498109605160.198000005208.897000005449.979000005190.50700000
Mean5154.800884715174.492000005124.232000005507.041000005190.50700000
SD1.60E+025.00E+012.50E+023.50E+022.00+02
Rank13524
f 06 Best−6961.81387558−6961.81400000−6961.04600000−6956.25100000−6961.68000000
Worst−6961.81387558−6952.48200000−6943.30400000−6077.12300000−6902.92000000
Median−6961.81387558−6961.81400000−6953.82300000−6867.46100000−6933.32000000
Mean−6961.81387558−6961.28400000−6953.06100000−6740.28800000−6933.32000000
SD1.90E-121.90E+005.90E+002.70E+022.94E+01
Rank12453
f 07 Best24.3062090724.3270000024.8380000024.8820000028.24350000
Worst24.3062090724.8430000033.0950000027.3810000036.66190000
Median24.3062090724.4260000025.4150000025.6220000029.08288000
Mean24.3062090724.4750000027.3280000025.7460000029.08288000
SD1.10E-141.30E-012.20E+007.00E-011.69E+01
Rank12345
Table 6. The results of PSO-DE compared with other algorithms in the benchmark function.
Problem | Metrics | Algorithm
PSO-DE | SMES | SAPF | GA | C-LXBBO
f 08 Best−0.09582500−0.09582500−0.09582500−0.09582500−0.10152000
Worst−0.09582500−0.09582500−0.09239700−0.09580800−0.09577000
Median−0.09582500−0.09582500−0.09582500−0.09581900−0.10106000
Mean−0.09582500−0.09582500−0.09563500−0.09581900−0.10106000
SD2.80E-170.00E+001.10E-034.40E-063.20E-03
Rank21435
f 09 Best680.63005737680.63200000680.77300000680.72600000680.66240000
Worst680.63005737680.71900000682.08100000682.96500000680.29990000
Median680.63005737680.64200000681.23500000681.20400000681.24540000
Mean680.63005737680.64300000681.24600000681.34700000681.24540000
SD4.50E-131.60E-023.20E-015.70E-018.30E-01
Rank12543
f 10 Best7049.248020537051.903000007069.981000007114.743000007105.05500000
Worst7049.248020537638.366000007069.4060000010826.090000019401.610000
Median7049.248020537253.603000007201.017000008586.7130000012566.0000
Mean7049.248020537253.047000007238.964000008785.1490000012566.00000
SD4.90E-121.40E+021.40E+021.00E+036.16E+03
Rank12354
f 11 Best0.750000000.750000000.749000000.750000000.74996000
Worst0.750000000.750000000.757000000.757000000.75990000
Median0.750000000.750000000.750000000.751000000.75043700
Mean0.750000000.750000000.751000000.752000000.75043700
SD0.00E+001.50E-042.00E-022.50E-035.61E-03
Rank12534
f 12 Best−1.00000000−1.00000000−1.00000000−1.00000000−0.99999900
Worst−1.00000000−1.00000000−1.00000000−1.00000000−0.99999900
Median−1.00000000−1.00000000−1.00000000−1.00000000−0.99999900
Mean−1.00000000−1.00000000−1.00000000−1.00000000−0.99995 00
SD0.00E+000.00E+001.40E-040.00E+004.51E-05
Rank21435
Table 7. The 57 engineering constrained optimization problems.
Problem | Name | D | g | h | f(x)
Industrial Chemical Processes
RC01 Heat Exchanger Network Design (case 1) 9081.8931162966E+02
RC02 Heat Exchanger Network Design (case 2) 11097.0490369540E+03
RC03 Optimal Operation of Alkylation Unit 7140−4.5291197395E+03
RC04 Reactor Network Design (RND) 614−3.8826043623E-01
RC05 Haverly’s Pooling Problem 924−4.0000560000E+02
RC06 Blending-Pooling-Separation problem 380321.8638304088E+00
RC07 Propane, Isobutane, n-Butane Nonsharp Separation 480381.5670451000E+00
Process Synthesis and Design Problems
RC08 Process synthesis problem 2202.0000000000E+00
RC09 Process synthesis and design problem 3112.55765455740E+00
RC10 Process flow sheeting problem 3301.0765430833E+00
RC11 Two-reactor Problem 7449.9238463653E+01
RC12 Process synthesis problem 7902.9248305537E+00
RC13 Process design Problem 5302.6887000000E+02
RC14 Multi-product batch plant 101005.3638942722E+04
Mechanical Engineering Problems
RC15 Weight Minimization of a Speed Reducer 71102.9944244658E+03
RC16 Optimal Design of Industrial refrigeration System 141503.2213000814E-02
RC17 Tension/compression spring design(case 1) 3301.2665232788E-02
RC18 Pressure vessel design 4405.8853327736E+03
RC19 Welded beam design 4501.6702177263E+00
RC20 Three-bar truss design problem 2302.6389584338E+02
RC21 Multiple disk clutch brake design problem 5702.3524245790E-01
RC22 Planetary gear train design optimization problem 91015.2546870748E-01
RC23 Step-cone pulley problem 5831.6069868725E+01
RC24 Robot gripper problem 7702.5287918415E+00
RC25 Hydro-static thrust bearing design problem 4701.6161197651E+03
RC26 Four-stage gear box problem 228603.5359231973E+06
RC27 10-bar truss design 10305.2445076066E+02
RC28 Rolling element bearing 10901.4614135715E+04
RC29 Gas Transmission Compressor Design (GTCD) 4102.9648954173E+04
RC30 Tension/compression spring design(case2) 3802.6138840583E+00
RC31 Gear train design Problem 4110.0000000000E+00
RC32 Himmelblau’s Function 560−3.0665538672E+04
RC33 Topology Optimization 303002.36393464970E+00
Power System Problem
RC34 Optimal Sizing of Single Phase Distributed Generation with reactive power support for Phase Balancing at Main Transformer/Grid 11801080.000000000E+00
RC35 Optimal Sizing of Distributed Generation for Active Power Loss Minimization 15301487.9963854000E-02
RC36 Optimal Sizing of Distributed Generation (DG) and Capacitors for Reactive Power Loss Minimization 15801484.7733529000E-02
RC37 Optimal Power flow (Minimization of Active Power Loss) 12601161.8593563000E-02
RC38 Optimal Power flow (Minimization of Fuel Cost) 12601162.7139366000E+00
RC39 Optimal Power flow (Minimization of Active Power Loss and Fuel Cost) 12601162.7515909000E+00
RC40 Microgrid Power flow (Islanded case) 760760.0000000000E+00
RC41 Microgrid Power flow (Grid-connected case) 740740.0000000000E+00
RC42 Optimal Setting of Droop Controller for Minimization of Active Power Loss in Islanded Microgrids 860767.7027102000E-02
RC43 Optimal Setting of Droop Controller for Minimization of Reactive Power Loss in Islanded Microgrids 860767.9835970000E-02
RC44 Wind Farm Layout Problem 30910−6.2731715000E+03
Power Electronic Problems
RC45 SOPWM for 3-level Inverters 252413.0739360000E-02
RC46 SOPWM for 5-level Inverters 252412.0240335000E-02
RC47 SOPWM for 7-level Inverters 252411.2783068000E-02
RC48 SOPWM for 9-level Inverters 302911.6787535766E-02
RC49 SOPWM for 11-level Inverters 302919.3118741800E-03
RC50 SOPWM for 13-level Inverters 302911.5051470000E-02
Livestock Feed Ration Optimization
RC51 Beef Cattle (case 1) 591414.5508511497E+03
RC52 Beef Cattle (case 2) 591413.3489821493E+03
RC53 Beef Cattle (case 3) 591414.9976069290E+03
RC54 Beef Cattle (case 4) 591414.2405482538E+03
RC55 Dairy Cattle (case 1) 64066.6964145128E+03
RC56 Dairy Cattle (case 2) 64061.4746580000E+04
RC57 Dairy Cattle (case 3) 64063.2132917019E+03
Table 8. Experimental results for industrial chemistry process problems (RC01-RC07).
Problem | Metrics | PSO-DE | SASS | COLSHADE | sCMAgES
RC1Mean189.3116189.3116235.39687191.89103
Std.6.25E+015.80E-144.93E+015.72E+00
FR(Wil test)100100(+)100(-)100(-)
RC2Mean7049.0377049.0377065.05278035.3169
Std.2.83E-090.00E+007.89E+013.48E+03
FR(Wil test)100100(=)100(-)100(-)
RC3Mean−4609.006−142.719−3621.07174.81604
Std.1.29E+032.02E-057.76E+025.19E+02
FR(Wil test)100100(-)100(-)100(-)
RC4Mean−0.38822−0.38826−0.249146−0.385941
Std.−3.75E-013.97E-07−3.88E-01−3.87E-01
FR(Wil test)100100(+)100(-)100(-)
RC5Mean−399.999−400.003−67.63606−117.3901
Std.4.05E-036.08E-031.75E+027.77E+01
FR(Wil test)100100(+)100(-)100(-)
RC6Mean1.9973751.8699341.97827682.3386743
Std.1.85E-021.52E-E021.50E-012.44E-01
FR(Wil test)100100(+)100(+)100(-)
RC7Mean1.5935441.5739481.57894891.9974088
Std.2.15E-011.63E-023.37E-012.11E-01
FR(Wil test)100100(+)100(+)100(-)
Table 9. Experimental results for process synthesis and design problems (RC08-RC14).
Problem | Metrics | PSO-DE | SASS | COLSHADE | sCMAgES
RC8Mean2222
Std.0.00E+000.00E+000.00E+000.00E+00
FR(Wil test)100100(=)100(=)100(=)
RC9Mean2.55762.5576552.5576552.5577
Std.1.36E-150.00E+000.00E+001.36E-15
FR(Wil test)100100(+)100(+)100(+)
RC10Mean1.0765431.0765431.1042961.0765
Std.8.79E-026.80E-166.36E-024.53E-16
FR(Wil test)100100(=)100(-)100(+)
RC11Mean105.1101.1913147.8153299.23886
Std.3.73E+003.55E+002.08E+012.99E+00
FR(Wil test)100100(+)100(+)100(+)
RC12Mean2.92482.9248312.9248310.1756
Std.4.53E-164.53E-164.44E-162.69E+04
FR(Wil test)100100(-)100(-)100(+)
RC13Mean26,88726,887.4226,887.4221.114E-11
Std.1.11E-111.11E-113.64E-125.85E+04
FR(Wil test)100100(-)100(-)100(-)
RC14Mean58,50558,505.4625,505.4558,505
Std.8.06E-091.30E-027.28E-127.33E-06
FR(Wil test)100100(-)100(+)100(=)
Table 10. Experimental results of RC15-RC57.
Problem | Metrics | PSO-DE | SASS | COLSHADE | sCMAgES
RC15Mean2994.42994.4252994.42452994.4
Std.4.64E-134.64E-134.55E-134.64E-13
FR(Wil test)100100(-)100(-)100(=)
RC16Mean0.0322130.0322130.0322130.043887
Std.3.17E-181.42E-170.00E+001.93E-02
FR(Wil test)100100(=)100(=)100(=)
RC17Mean0.0126550.0126650.0126650.012665
Std.2.04E-050.00E+001.06E-072.25E-11
FR(Wil test)100100(+)100(+)100(+)
RC18Mean6059.76059.7146062.17936067.1
Std.9.28E-133.71E-128.36E+001.34E+01
FR(Wil test)100100(-)100(-)100(-)
RC19Mean1.7248521.6702181.67021771.6702
Std.1.4E-152.27E-160.00E+004.53E-17
FR(Wil test)100100(-)100(+)100(+)
RC20Mean263.9263.8958263.89584263.9
Std.0.00E+005.80E-140.00E+000.00E+00
FR(Wil test)100100(+)100(-)100(=)
RC21Mean0.235240.2352420.2352420.23524
Std.1.13E-162.83E-170.00E+001.13E-16
FR(Wil test)100100(-)100(-)100(=)
RC22Mean0.526911.0015240.5410260.52884
Std.1.44E-033.65E-154.26E-021.82E-03
FR(Wil test)100100(-)100(-)100(-)
RC23Mean16.0716.0698716.06986916.07
Std.3.33E-143.63E-150.00E+001.68E-14
FR(Wil test)100100(+)100(-)100(=)
RC24Mean2.54382.5437862.5437862.5499
Std.1.35E-120.00E+000.00E+007.46E-03
FR(Wil test)100100(+)100(+)100(-)
RC25Mean1616.11616.121639.03741649.9
Std.1.78E-119.43E-041.01E+024.74E+01
FR(Wil test)100100(-)100(-)100(-)
RC26Mean35.72838.51416.61097547.945
Std.5.99E-012.11E+001.37E+005.27E+00
FR(Wil test)100100(-)100(-)100(-)
RC27Mean524.45524.4692524.45076524.72
Std.3.76E-076.62E-030.00E+001.22E+00
FR(Wil test)100100(-)100(-)100(-)
RC28Mean1695814,614.1416,958.20214615
Std.3.71E-120.00E+000.00E+003.79E+00
FR(Wil test)100100(+)100(-)100(+)
RC29Mean2,964,9002964,895294,895.42,964,900
Std.1.43E-094.75E-100.00E+001.43E-09
FR(Wil test)100100(+)100(+)100(=)
RC30Mean2.81492.6585592.6618342.6139
Std.3.66E-014.53E-161.11E-021.04E-13
FR(Wil test)100100(+)100(+)100(+)
RC31Mean001.88E-160
Std.0.00E+008.98E-183.81E-160.00E+00
FR(Wil test)100100(=)100(-)100(=)
RC32Mean−30,666−30,665.5−30,665.54−30,666
Std.3.71E-127.43E-120.00E+003.71E-12
FR(Wil test)100100(+)100(+)100(=)
RC33Mean2.63932.6393472.6393472.6457
Std.1.02E-154.53E-160.00E+004.50E-03
FR(Wil test)100100(-)100(-)100(-)
RC34Mean2.93230.000734.954820.50622
Std.3.51E+002.63E-032.19E-011.95E-01
FR(Wil test)100100(+)100(-)100(+)
RC35Mean73.1810.08033396.0740060.097245
Std.6.43E+011.50E-042.13E+011.31E-02
FR(Wil test)100100(+)100(-)100(+)
RC36Mean0.0479570.04795784.3238480.10745
Std.6.76E+011.85E-041.94E+012.38E-02
FR(Wil test)100100(=)100(-)100(+)
RC37Mean1.13980.0189222.695820.47536
Std.3.54E+007.10E-047.95E-044.14E-01
FR(Wil test)100100(+)100(-)100(+)
RC38Mean2.7182372.7378358.2776465.754
Std.1.83E+007.29E-027.40E-031.44E+00
FR(Wil test)100100(-)100(-)100(-)
RC39Mean−0.817953.0095189.3093636.9625
Std.2.37E+009.63E-016.74E-031.77E+00
FR(Wil test)100100(-)100(-)100(-)
RC40Mean00111.959864.9E-11
Std.0.00E+001.34E-272.45E-011.45E-10
FR(Wil test)100100(=)100(-)100(-)
RC41Mean0018.2764863.615E-20
Std.0.00E+002.60E-281.82E-013.26E-20
FR(Wil test)100100(=)100(-)100(-)
RC42Mean0.08704670.088144−2.61379844.066
Std.2.23E+008.40E-032.22E+003.18E+01
FR(Wil test)100100(-)100(+)100(-)
RC43Mean0.0804670.08340324.02947643.098
Std.5.00E+011.0-E025.49E+002.36E+01
FR(Wil test)100100(-)100(-)100(-)
RC44Mean−6123.97−6109.46−6032.419−5965.4
Std.5.35E+017.40E+011.06E+028.93E+01
FR(Wil test)100100(-)100(+)100(+)
RC45Mean0.143240.0521640.0427950.046045
Std.5.43E-029.83E-035.52E-031.74E-02
FR(Wil test)100100(+)100(+)100(+)
RC46Mean0.0635810.0542070.0260820.035846
Std.5.18E-039.78E-035.68E-031.46E-02
FR(Wil test)100100(+)100(+)100(+)
RC47Mean0.0643660.0462470.0182120.021246
Std.1.69E-022.71E-023.20E-035.14E-03
FR(Wil test)100100(+)100(+)100(+)
RC48Mean0.062970.0570990.0218760.034488
Std.1.00E-011.96E-024.00E-031.51E-02
FR(Wil test)100100(+)100(+)100(+)
RC49Mean0.094150.0369110.0325820.026114
Std.4.81E-028.61E-034.07E-037.62E-03
FR(Wil test)100100(+)100(+)100(+)
RC50Mean0.0327220.0236370.0650910.018026
Std.7.08E-021.03E-024.82E-021.05E-02
FR(Wil test)100100(+)100(-)100(+)
RC51Mean45034550.9734550.94514233.1
Std.1.82E+015.99E-026.78E-021.57E+02
FR(Wil test)100100(+)100(-)100(+)
RC52Mean3368.24165.3083372.12474824.5
Std.1.52E+012.62E+021.30E+016.76E+02
FR(Wil test)100100(-)100(-)100(-)
RC53Mean4676.15252.3665109.49975335.6
Std.4.32E+021.51E+025.65E+012.77E+02
FR(Wil test)100100(-)100(-)100(-)
RC54Mean3334.74241.0974245.93644317.4
Std.3.01E+012.16E+003.41E+001.06E+03
FR(Wil test)100100(-)100(-)100(-)
RC55Mean4937.567006732.50536341.9
Std.1.69E+032.38E+005.47E+011.24E+03
FR(Wil test)100100(-)100(-)100(-)
RC56Mean11,41914,751.5214,646.65613,031
Std.1.22E+033.77E+002.05E+021.68E+03
FR(Wil test)100100(-)100(-)100(-)
RC57Mean2468.63213.3093628.23996627.3
Std.3.74E+024.10E-022.93E+021.75E+03
FR(Wil test)100100(-)100(-)100(-)
Table 11. Comparison of the optimal results of different algorithms for the pressure spring problem. NA indicates that it has not been addressed in the literature.
Method | Design Variables | Cost
x1 | x2 | x3 | f(X)
CPSO [44]0.051127280.35764411.2445430.012674
APSO [45]0.0525880.37834310.1388620.012700
IAPSO [46]0.0516850.35662911.2941750.012665
CVI-PSO [34]NANANA0.0126655
PSO-DE0.0516890.35671711.289650.012655
Table 12. Comparison of optimization results of different algorithms for the pressure spring problem.
Method | Best | Worst | Mean | Std.
CPSO [44]0.126740.129240.127305.20E-04
APSO [45]0.0127000.0149370.0132976.85E-04
IAPSO [46]0.0126550.0178290.0136771.53E-03
CVI-PSO [34]0.0126660.0128420.0127305.58E-05
PSO-DE0.0126550.0126550.0126552.04E-05
Table 13. Comparison of optimization results of different algorithms for the welding beam problem. NA indicates that it has not been addressed in the literature.
Method | Design Variables | Cost
x1 | x2 | x3 | x4 | f(X)
CPSO [44]0.2023693.5442149.0482100.2057231.728024
APSO [45]0.2027013.5742729.0402090.2052151.736193
IAPSO [46]0.20572963.47088669.036623910.205729641.7248523
CVI-PSO [34]NANANANA1.724852
PSO-DE0.20572963.470488669.036623910.205729631.724852
Table 14. Comparison of optimization results of different algorithms for the welding beam problem.
Method | Best | Worst | Mean | Std.
CPSO [44]1.7821431.7821431.7488311.29E-02
APSO [45]1.7361931.9939991.8778510.076118
IAPSO [46]1.72485231.72486241.72485282.02E-06
CVI-PSO [34]1.7248521.7276651.7251246.12E-04
PSO-DE1.7248521.7248521.7248521.4E-15