Article

Improving Particle Swarm Optimization Analysis Using Differential Models

1 Department of Information Technology, Takming University of Science and Technology, Taipei City 11451, Taiwan
2 Department of Electrical Engineering, National Chin-Yi University of Technology, Taichung City 411030, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(11), 5505; https://doi.org/10.3390/app12115505
Submission received: 12 April 2022 / Revised: 26 May 2022 / Accepted: 27 May 2022 / Published: 29 May 2022

Abstract

This paper employs a differential model approach to improve the analysis of particle swarm optimization (PSO). A unified model is used to analyze four typical PSO algorithms. On this basis, the proposed approach starts from the conversion between the differential equation model and the difference equation model and proposes a differential evolution PSO model. Simulation results on high-dimensional numerical optimization problems show that the algorithm's performance can be greatly improved by introducing a step size parameter and using different transformation methods. This analytical approach is a feasible way to improve the performance of the PSO algorithm. A simple survey also shows that many AI-related algorithms have been improved by using differential models. The PSO algorithm can be regarded as modeling the social behavior of biological groups, such as birds foraging and fish swimming. Since these behaviors are continuous processes, they are well suited to differential models for improving the analysis of PSO. The experimental simulation results show that the differential evolution PSO algorithm based on the Runge–Kutta method can effectively avoid premature convergence and improve the computational efficiency of the algorithm. This research analyzes the influence of the differential model on the performance of PSO under different differenced conditions. Finally, the analytical results of the differential equation model in this paper also provide a new analytical solution.

1. Introduction

Since the introduction of PSO, many scholars have improved its performance from different viewpoints. From a mathematical point of view, researchers have proposed inertia weights, contraction factors, PSO with time-varying acceleration constants, and comprehensive learning and random PSO variants. From the viewpoint of biology, researchers have proposed PSO with escape behavior, and from the viewpoint of information theory, PSO with Kalman filtering and Bayesian network PSO [1,2]. Many scholars have also proposed quantum PSO from the field of physics. Researchers are therefore faced with many improved algorithms that must be analyzed so that their advantages can be exploited, which can further improve the performance of the algorithm; however, processing and analyzing so many algorithms becomes a difficult problem [3,4].
In 2006, Professor De Jong pointed out that the field of intelligent computing needs to establish a unified model to effectively analyze the various intelligent algorithms that have been proposed; this is a developing research field [5]. Establishing a unified model of intelligent algorithms requires effectively integrating the individual algorithms into a single framework. Because the PSO algorithm was proposed relatively recently, a complete theoretical system has not yet been established, so establishing such a unified model is urgent [6,7,8]. Many researchers believe that a unified intelligent algorithm model is in urgent need of research. Thus, a preliminary unified model of PSO is given from the perspective of software implementation.
Differential equations refer to equations that describe the relationship between the derivatives and arguments of unknown functions; difference equations, also known as recurrence relations, are equations that contain unknown functions and their differences, but do not have derivatives. The solution of a differential equation is a function that satisfies the equation—that is, an analytical solution. In the algebraic equation of elementary mathematics, its solution is a constant value [9,10].
Usually, the differential equation model is more suitable for solving continuous problems, which is its advantage; it is less suitable for dealing with discrete problems, which is its disadvantage. A difference equation replaces the description given by the differential equation: derivatives are avoided, and the equation can be solved by iterative operations. Difference equations are therefore more suitable for discrete problems. However, the accuracy of the difference equation model is slightly lower (a secant line is used instead of a tangent line), which is its disadvantage. Please refer to the description in Figure 1.
Difference equations reflect the values and changing laws of discrete variables; a difference equation can be established from the equilibrium relationship satisfied by the values of one or several discrete variables. Differential equations reflect the values and changing laws of continuous variables [11,12]. The micro-element method considers small unit changes in time or space when building a dynamic model of an object that changes continuously, because the balance relationships among these micro-elements are relatively simple and can be handled with differential calculus. Such models are given in the form of differential equations [13,14].
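As a simple illustration of this relationship (our own example, not taken from the paper), consider the growth equation below: the differential model has an analytical solution, while the corresponding difference model, obtained with Euler's method and step size h, is a recurrence whose iterates approach the analytical solution as h tends to zero.
\[
\frac{dy}{dt} = a\,y \;\Rightarrow\; y(t) = y(0)\,e^{at}, \qquad
y_{n+1} = y_n + h\,a\,y_n = (1 + ah)\,y_n \;\Rightarrow\; y_n = (1 + ah)^n\, y_0 .
\]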
In the recent PSO-related research literature, some excellent papers have been published [15,16]. For example, Pourzangbar et al. proposed using PSO to optimize the design of a brace-viscous damper and a pendulum-tuned mass damper [17]. Cui et al. published a paper on a multi-objective PSO algorithm based on a two-archive mechanism [18]. Xu et al. used the PSO algorithm to train LSTM neural networks for rainfall-runoff simulation [19].
The PSO algorithm is an evolutionary computing technique developed by J. Kennedy and R.C. Eberhart in 1995, derived from an analogy with a simplified social model. The term "swarm" comes from the particle swarm, which conforms to the five basic principles of swarm intelligence proposed by M.M. Millonas when developing models for artificial life. "Particle" is a compromise: the members of the group are described as having no mass and no volume, while still having velocity and acceleration states. The PSO algorithm is based on observations of animal social behavior, in which the social sharing of information within groups provides an evolutionary advantage, and this observation forms the basis of the algorithm. The initial version of PSO was formed by incorporating velocity matching of nearest neighbors and considering multidimensional search and distance-dependent acceleration. After that, the inertia weight ω was introduced to better control exploitation and exploration, forming the standard version. For the detailed PSO algorithm, please refer to Appendix A.

2. Unified Model for PSO

2.1. Unified Model

For the unified model, consider the following system of differential equations, Equation (1):
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = \eta\left[\left(\omega - \dfrac{1}{\eta}\right)v_{jk}(t) + c_1 r_1\left(p_{jk}(t) - x_{jk}(t)\right) + c_2 r_2\left(p_{gk}(t) - x_{jk}(t)\right)\right] \\
\dfrac{dx_{jk}(t)}{dt} = v_{jk}(t+1)
\end{cases}
\]
  • When $\omega = 1,\ \eta = 1$, the Euler method with a step size of 1 can be used to obtain the basic PSO evolution equation.
  • When $\omega \neq 1,\ \eta = 1$, the standard PSO evolution equation can be obtained by using Euler's method with a step size of 1.
  • When $\omega = 1,\ \eta \neq 1$, the Euler method with a step size of 1 can be used to obtain the evolution equation of the PSO with shrinkage factor.
  • When $\omega = 0,\ \eta = 1$, the Euler method with a step size of 1 can be used to obtain the evolution equation of stochastic PSO.
That is to say, when the parameters $\eta$ and $\omega$ take different values, Equation (1) represents different evolution equations of the PSO algorithm [20,21]. Therefore, Equation (1) can be used as a unified description model for the above four typical PSO evolution equations. For the convenience of analysis, define $\varphi_1 = c_1 r_1$, $\varphi_2 = c_2 r_2$, $\varphi = \varphi_1 + \varphi_2$, and $\varphi_p = \varphi_1 p_{jk}(t) + \varphi_2 p_{gk}(t)$; then, the above differential equations can be written as Equation (2):
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = \eta\left[\left(\omega - \dfrac{1}{\eta}\right)v_{jk}(t) - \varphi\, x_{jk}(t) + \varphi_p\right] \\
\dfrac{dx_{jk}(t)}{dt} = v_{jk}(t+1)
\end{cases}
\]

2.2. Evolutionary Behavior Analysis of PSO Algorithm

Taking $v_{jk}(t+1)$ as a first-order approximation, that is, $v_{jk}(t+1) = v_{jk}(t) + \frac{dv_{jk}(t)}{dt}$, we substitute Equation (2) and obtain Equation (3) as:
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = (\omega\eta - 1)\, v_{jk}(t) - \eta\varphi\, x_{jk}(t) + \eta\varphi_p \\
\dfrac{dx_{jk}(t)}{dt} = \omega\eta\, v_{jk}(t) - \eta\varphi\, x_{jk}(t) + \eta\varphi_p
\end{cases}
\]
with the definition:
\[
y(t) = \begin{bmatrix} v_{jk}(t) \\ x_{jk}(t) \end{bmatrix},\quad
u(t) = \begin{bmatrix} p_{jk}(t) \\ p_{gk}(t) \end{bmatrix},\quad
A = \begin{bmatrix} \omega\eta - 1 & -\eta\varphi \\ \omega\eta & -\eta\varphi \end{bmatrix},\quad
B = \begin{bmatrix} \eta\varphi_1 & \eta\varphi_2 \\ \eta\varphi_1 & \eta\varphi_2 \end{bmatrix}
\]
Then, we have Equation (4):
\[
\dot{y}(t) = A\, y(t) + B\, u(t)
\]
Solve to get Equation (5):
\[
y(t) = e^{A(t - t_0)}\, y(t_0) + \int_{t_0}^{t} e^{A(t - \tau)}\, B\, u(\tau)\, d\tau
\]
It can be seen from Equation (5) that when the eigenvalues of the matrix A have negative real parts, Equation (4) converges. The characteristic equation is:
\[
|\lambda I - A| = \lambda^2 + (1 - \omega\eta + \eta\varphi)\lambda + \eta\varphi = 0
\]
The characteristic roots are:
\[
\lambda_{1,2} = \frac{\omega\eta - \eta\varphi - 1 \pm \sqrt{(1 - \omega\eta + \eta\varphi)^2 - 4\eta\varphi}}{2}
\]
Since $\eta, \varphi > 0$, as long as $\omega\eta - \eta\varphi - 1 < 0$, the eigenvalues of matrix A are guaranteed to have negative real parts; that is, as long as $\omega\eta - \eta\varphi - 1 < 0$, the PSO evolution equation described by Equation (3) converges, and:
\[
\lim_{t \to +\infty} y(t) = \lim_{t \to +\infty}\left[ e^{A(t - t_0)}\, y(t_0) + \int_{t_0}^{t} e^{A(t - \tau)}\, B\, u(\tau)\, d\tau \right] = \begin{bmatrix} 0 \\ \dfrac{\varphi_p}{\varphi} \end{bmatrix}
\]
That is:
\[
\lim_{t \to +\infty} v_{jk}(t) = 0, \qquad \lim_{t \to +\infty} x_{jk}(t) = \frac{\varphi_p}{\varphi}
\]
That is:
\[
\lim_{t \to +\infty} (\varphi_1 + \varphi_2)\, x_{jk}(t) = \varphi_1 p_{jk} + \varphi_2 p_{gk}
\]
Since $\varphi_1$ and $\varphi_2$ are random variables, the above formula holds if and only if $\lim_{t \to +\infty} x_{jk}(t) = p_{jk} = p_{gk}$.
It can be seen from the above analysis that when $\omega\eta - \eta\varphi - 1 < 0$, all particles converge to the global optimal position $p_g$; that is, when $p_g$ is fixed, $x_j(t)$ converges to $p_g$. Therefore, in the evolution of the PSO algorithm, as long as the global optimal position $p_g = x^*$ can be found, it is guaranteed that all particles will eventually converge to $p_g = x^*$.
In addition, from Equation (4) and $u(t) = \left[ p_{jk}(t) \;\; p_{gk}(t) \right]^{T}$, the PSO is a linear system under a step signal input, and the amplitude of the input step signal changes with the evolution [22].
The upper bound estimate for the convergence of Equation (4) is given below. Define the Lyapunov function:
\[
V(y) = y^{T} P\, y
\]
Its derivative along the system trajectories is:
\[
\dot{V}(y) = -y^{T} Q\, y
\]
where P is a positive definite matrix and Q is a positive definite symmetric matrix, and they satisfy the following Lyapunov equation:
\[
A^{T} P + P A = -Q
\]
The convergence performance of the system can be characterized by Equation (14):
\[
\delta = -\frac{\dot{V}(y)}{V(y)}
\]
The smaller $V(y)$ is and the larger the absolute value of $\dot{V}(y)$ is, the larger $\delta$ is and the faster the system converges; conversely, the smaller $\delta$ is, the slower the corresponding system converges [23,24].
Further, integrating the above equation, we get Equation (15):
\[
\int_0^t \delta\, dt = -\int_0^t \frac{\dot{V}(y)}{V(y)}\, dt = -\ln\frac{V(y)}{V(y_0)}
\]
Thus, Equation (16):
\[
V(y) = V(y_0)\, e^{-\int_0^t \delta\, dt}
\]
and Equation (17):
\[
\delta_{min} = \min_y \left\{ -\frac{\dot{V}(y)}{V(y)} \right\}
\]
Since Equation (17) is a constant, there is Equation (18):
\[
V(y) \le V(y_0)\, e^{-\int_0^t \delta_{min}\, dt} = V(y_0)\, e^{-\delta_{min} t}
\]
This shows that once δ is determined, an upper bound on the convergence of V ( y ) over time can be determined, and Equation (19) is:
\[
\delta_{min} = \min_y\left\{ -\frac{\dot{V}(y)}{V(y)} \right\} = \min_y\left\{ \frac{y^{T} Q\, y}{y^{T} P\, y} \right\} = \min_y\left\{ y^{T} Q\, y \;\middle|\; y^{T} P\, y = 1 \right\}
\]
When $\eta$, $\omega$, and $\varphi$ are constants, the system (4) is a linear time-invariant system; then, there is Equation (20):
\[
\delta_{min} = \lambda_{min}\left( Q P^{-1} \right)
\]
where λ m i n ( · ) represents the minimum eigenvalue of ( · ) .
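As a quick numerical illustration of Equation (20), the sketch below (a non-authoritative example; the parameter values ω = 0.7, η = 1, φ = 1.5 and the choice Q = I are our own assumptions, not taken from the paper) solves the Lyapunov equation for the matrix A defined above and computes the convergence rate bound δ_min.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

# Illustrative (assumed) parameter values, not taken from the paper.
omega, eta, phi = 0.7, 1.0, 1.5

# System matrix A from Equation (3)/(4).
A = np.array([[omega * eta - 1.0, -eta * phi],
              [omega * eta,       -eta * phi]])

# Choose Q = I and solve the Lyapunov equation A^T P + P A = -Q for P.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# Convergence rate bound delta_min = lambda_min(Q P^{-1}), Equation (20).
delta_min = np.min(eigvals(Q @ np.linalg.inv(P)).real)
print("eigenvalues of A:", eigvals(A))   # should have negative real parts
print("delta_min:", delta_min)
```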

2.3. Convergence Analysis

The convergence conditions of several typical PSO algorithms are given below. According to Equation (7), the condition for the asymptotic convergence of the PSO evolution equation is:
\[
\begin{cases}
\omega\eta - \eta\varphi - 1 < 0 \\
(1 - \omega\eta + \eta\varphi)^2 - 4\eta\varphi \ge 0
\end{cases}
\]
1. For the basic PSO evolution equation, when $\eta = \omega = 1$, substituting into Equation (21) gives the asymptotic convergence condition as Equation (22):
\[
\begin{cases}
-\varphi < 0 \\
\varphi^2 - 4\varphi \ge 0
\end{cases}
\]
That is, when $\varphi \ge 4$, the basic PSO evolution equation converges asymptotically, and when $0 < \varphi < 4$, the system has two complex roots with negative real parts and the basic PSO evolution equation oscillates and converges.
2. For the standard PSO evolution equation, when $\eta = 1$, substituting into Equation (21) gives the asymptotic convergence condition as Equation (23):
\[
\begin{cases}
\omega - \varphi - 1 < 0 \\
(1 - \omega + \varphi)^2 - 4\varphi \ge 0
\end{cases}
\]
3. For the PSO evolution equation with shrinkage factor, substitute $\omega = 1$, $\eta = \dfrac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}$ into Equation (21); then, when $\varphi > 4$, Equation (24) holds:
\[
\begin{cases}
\eta - \eta\varphi - 1 < 0 \\
(1 - \eta + \eta\varphi)^2 - 4\eta\varphi \ge 0
\end{cases}
\]
Thus, the system converges asymptotically.

3. Numerical Algorithm Analysis of Standard PSO

The aforementioned unified model shows that various differenced models can be analyzed using the same differential model. In other words, different discretizations of the same model have different performances, and the existing algorithms are all differenced models that can be regarded as obtained by taking a step size of 1 in the corresponding differential model. Therefore, starting from this section, different differenced methods of the differential model are discussed to observe whether there is a large difference in their performance [25,26].

3.1. Differential Equation Model of Standard PSO

Set $\varphi_1 = c_1 r_1$, $\varphi_2 = c_2 r_2$, and $\varphi = \varphi_1 + \varphi_2$; then the standard PSO can be rewritten as:
\[
\begin{cases}
v_{jk}(t+1) = \omega\, v_{jk}(t) - \varphi\, x_{jk}(t) + \left( \varphi_1 p_{jk}(t) + \varphi_2 p_{gk}(t) \right) \\
x_{jk}(t+1) = \omega\, v_{jk}(t) + (1 - \varphi)\, x_{jk}(t) + \left( \varphi_1 p_{jk}(t) + \varphi_2 p_{gk}(t) \right)
\end{cases}
\]
The evolution of Equation (25) is a difference model, and its corresponding differential model can be obtained by using the unified model (Equation (3)) to obtain Equation (26).
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = (\omega - 1)\, v_{jk}(t) - \varphi\, x_{jk}(t) + \left( \varphi_1 p_{jk}(t) + \varphi_2 p_{gk}(t) \right) \\
\dfrac{dx_{jk}(t)}{dt} = \omega\, v_{jk}(t) - \varphi\, x_{jk}(t) + \left( \varphi_1 p_{jk}(t) + \varphi_2 p_{gk}(t) \right)
\end{cases}
\]
where:
\[
\frac{dv_{jk}(t)}{dt} = v_{jk}(t+1) - v_{jk}(t), \qquad \frac{dx_{jk}(t)}{dt} = x_{jk}(t+1) - x_{jk}(t)
\]
Applying Euler's method with a step size of 1 to the differential model (Equation (26)) yields the standard PSO algorithm.
The differential model (Equation (26)) is then solved using existing numerical methods for solving systems of differential equations. For example, for the system of differential Equation (26), taking the step size as h, Euler’s method can be used to obtain:
\[
\begin{cases}
v_{jk}(t+1) = [1 + h(\omega - 1)]\, v_{jk}(t) - h\varphi\, x_{jk}(t) + h\left( \varphi_1 p_{jk} + \varphi_2 p_{gk} \right) \\
x_{jk}(t+1) = h\omega\, v_{jk}(t) + (1 - h\varphi)\, x_{jk}(t) + h\left( \varphi_1 p_{jk} + \varphi_2 p_{gk} \right)
\end{cases}
\]
The biggest difference between Equations (25) and (27) is the introduction of the step size h. Therefore, when using the evolution of Equation (27) to solve the problem, four parameters need to be determined, namely the inertia weight ω , the cognitive coefficient c 1 , the social coefficient c 2 , and step size h.
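To make Equation (27) concrete, the following sketch (an illustrative, non-authoritative implementation; the function and variable names are ours and the parameter values in the usage example are only placeholders) performs one Euler-discretized velocity and position update for a single particle.

```python
import numpy as np

def euler_dpso_step(v, x, p_best, g_best, omega, c1, c2, h, rng):
    """One Euler step of the differential PSO model, Equation (27).

    v, x    : velocity and position vectors of one particle
    p_best  : the particle's personal best position p_j
    g_best  : the swarm's global best position p_g
    omega   : inertia weight
    c1, c2  : cognitive and social coefficients
    h       : step size (h = 1 recovers the standard PSO update)
    """
    phi1 = c1 * rng.random(x.shape)          # phi_1 = c1 * r1 (component-wise)
    phi2 = c2 * rng.random(x.shape)          # phi_2 = c2 * r2
    phi = phi1 + phi2
    attract = phi1 * p_best + phi2 * g_best  # phi_1 p_jk + phi_2 p_gk

    v_new = (1 + h * (omega - 1)) * v - h * phi * x + h * attract
    x_new = h * omega * v + (1 - h * phi) * x + h * attract
    return v_new, x_new

# Minimal usage example with assumed placeholder values.
rng = np.random.default_rng(0)
v = np.zeros(3); x = rng.uniform(-5, 5, 3)
p_best = x.copy(); g_best = rng.uniform(-5, 5, 3)
v, x = euler_dpso_step(v, x, p_best, g_best,
                       omega=0.7, c1=1.414, c2=1.414, h=0.8, rng=rng)
```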

3.2. Introduction to Common Numerical Methods of Differential Equations

The commonly used numerical calculation methods for solving the initial value problem of ordinary differential equations can be divided into single-step and multi-step methods. Single-step methods mainly include the Euler method and the Runge–Kutta methods, while multi-step methods include the Adams methods. To facilitate their application, the commonly used numerical calculation methods are briefly introduced below.
The initial value problem of the first-order ordinary differential equation can be expressed as:
\[
\begin{cases}
y' = f(x, y) \\
y(x_0) = y_0
\end{cases}
\]
Commonly used numerical methods are as follows. The first item introduced here is Euler's method (Equation (29)):
\[
y_{n+1} = y_n + h\, f(x_n, y_n)
\]
In addition, there are the following methods in Equations (30)–(32):
  • Implicit Euler method:
\[
y_{n+1} = y_n + h\, f(x_{n+1}, y_{n+1})
\]
  • Trapezoidal Euler method:
\[
y_{n+1} = y_n + \frac{h}{2}\left[ f(x_{n+1}, y_{n+1}) + f(x_n, y_n) \right]
\]
  • Improved Euler method:
\[
\begin{cases}
y_p = y_n + h\, f(x_n, y_n) \\
y_c = y_n + h\, f(x_{n+1}, y_p) \\
y_{n+1} = \frac{1}{2}\left( y_p + y_c \right)
\end{cases}
\]
The second item introduced is the Runge–Kutta method. Starting from the improved Euler method, the Runge–Kutta methods can be obtained by evaluating the slope at additional points and taking a weighted average. The common Runge–Kutta methods are given in Equations (33)–(35):
  • Second-order Runge–Kutta method (midpoint method):
\[
\begin{cases}
y_{n+1} = y_n + h K_2 \\
K_1 = f(x_n, y_n) \\
K_2 = f\!\left( x_{n+\frac{1}{2}},\; y_n + \frac{h}{2} K_1 \right)
\end{cases}
\]
  • Third-order Runge–Kutta method:
\[
\begin{cases}
y_{n+1} = y_n + \frac{h}{6}\left( K_1 + 4K_2 + K_3 \right) \\
K_1 = f(x_n, y_n) \\
K_2 = f\!\left( x_{n+\frac{1}{2}},\; y_n + \frac{h}{2} K_1 \right) \\
K_3 = f\!\left( x_{n+1},\; y_n + h(2K_2 - K_1) \right)
\end{cases}
\]
  • Fourth-order Runge-Kutta method:
\[
\begin{cases}
y_{n+1} = y_n + \frac{h}{6}\left( K_1 + 2K_2 + 2K_3 + K_4 \right) \\
K_1 = f(x_n, y_n) \\
K_2 = f\!\left( x_{n+\frac{1}{2}},\; y_n + \frac{h}{2} K_1 \right) \\
K_3 = f\!\left( x_{n+\frac{1}{2}},\; y_n + \frac{h}{2} K_2 \right) \\
K_4 = f\!\left( x_{n+1},\; y_n + h K_3 \right)
\end{cases}
\]
The third item introduced is the Adams method. The Adams method is a typical multi-step method, and there are common second-order and fourth-order forms. For simplicity, only the second-order forms in Equations (36) and (37) are given here.
  • Second-order explicit Adams method:
\[
\begin{cases}
y_{n+1} = y_n + \frac{h}{2}\left( 3 y'_n - y'_{n-1} \right) \\
y'_n = f(x_n, y_n) \\
y'_{n-1} = f(x_{n-1}, y_{n-1})
\end{cases}
\]
  • Second-order implicit Adams method:
\[
\begin{cases}
y_{n+1} = y_n + \dfrac{h}{2}\left( y'_{n+1} + y'_n \right) \\
y'_n = f(x_n, y_n) \\
y'_{n+1} = f(x_{n+1}, y_{n+1})
\end{cases}
\]
For systems of first-order differential equations, the corresponding formulas can be obtained similarly and are not repeated here. In the field of numerical computing, one criterion for judging the relative merits of different numerical algorithms is the local truncation error. Its meaning is as follows: assume that up to step n the computed results $\{ y_k \mid k = 1, 2, \ldots, n \}$ coincide with the theoretical results $\{ y(x_k) \mid k = 1, 2, \ldots, n \}$; then the error at step $n + 1$, $T_{n+1} = y(x_{n+1}) - y_{n+1}$, is the local truncation error. If $T_{n+1} = O(h^{p+1})$, then p is the order of the single-step method, and the term containing $h^{p+1}$ is the principal term of the local truncation error. For the algorithms given above, the orders of their local truncation errors are shown in Table 1.
Note that a k-step multi-step method (such as the Adams method) must first obtain k starting values by some other method before the values of the following points can be calculated according to the given formula. In this way, although it computes only one new value per step, the first k starting values must be recomputed whenever the step size is changed. Therefore, it is not only more complicated to program than a single-step method, but the amount of calculation is also greatly increased, so only single-step methods are considered in this section.
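The sketch below (illustrative only; the test equation y' = -y and the step sizes are our own choices, not from the paper) compares the error of the Euler and fourth-order Runge–Kutta methods on a simple initial value problem, showing the first-order versus fourth-order behavior summarized in Table 1.

```python
import numpy as np

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x += h
    return y

def rk4(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

f = lambda x, y: -y           # test problem y' = -y, y(0) = 1, exact solution e^{-x}
exact = np.exp(-1.0)
for h in (0.1, 0.05, 0.025):
    steps = round(1.0 / h)
    print(f"h={h:<6} Euler error={abs(euler(f, 0, 1, h, steps) - exact):.2e}"
          f"  RK4 error={abs(rk4(f, 0, 1, h, steps) - exact):.2e}")
```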

4. Differential Evolution PSO

This section will mainly discuss the selection method of the step size h and verify the computational efficiency of several differential evolution PSO algorithms proposed above through numerical simulation.

4.1. Differential Evolutionary PSO Based on Different Numerical Computation Methods

The differential model of the standard PSO algorithm (Equation (26)) can be viewed as a system of two first-order differential equations. By applying different numerical calculation methods, different difference forms of PSO can be obtained. Since these PSO algorithms of different difference forms are all obtained by differencing the differential model (Equation (26)), they are called differential evolutionary PSO (DPSO) algorithms. According to the differences in the order of the local truncation error, the following three typical numerical calculation methods are selected as representatives to analyze the corresponding differential evolution PSO algorithms.
The differential model (Equation (26)) can be simply viewed as:
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = f_1(t, v_{jk}, x_{jk}) \\
\dfrac{dx_{jk}(t)}{dt} = f_2(t, v_{jk}, x_{jk})
\end{cases}
\]
1. Euler Method
It is one of the most basic differential numerical calculation methods, and the order of the local truncation error is 1. Applying it to the differential equation system (Equation (38)), the corresponding PSO evolution formula (Equation (27)) can be obtained.
2. Improved Euler Method
It is also a second-order Runge–Kutta method with a local truncation error of order 2. Applying it to the system of differential equations (Equation (38)), the corresponding evolution formula can be obtained as:
\[
\begin{cases}
v_{jk}^{p}(t+1) = [1 + h(\omega - 1)]\, v_{jk}(t) - h\varphi\, x_{jk}(t) + h(\varphi_1 p_{jk} + \varphi_2 p_{gk}) \\
x_{jk}^{p}(t+1) = h\omega\, v_{jk}(t) + (1 - h\varphi)\, x_{jk}(t) + h(\varphi_1 p_{jk} + \varphi_2 p_{gk}) \\
v_{jk}^{c}(t+1) = [1 + h(\omega - 1)]\, v_{jk}^{p}(t+1) - h\varphi\, x_{jk}^{p}(t+1) + h(\varphi_1 p_{jk} + \varphi_2 p_{gk}) \\
x_{jk}^{c}(t+1) = h\omega\, v_{jk}^{p}(t+1) + (1 - h\varphi)\, x_{jk}^{p}(t+1) + h(\varphi_1 p_{jk} + \varphi_2 p_{gk}) \\
v_{jk}(t+1) = \frac{1}{2}\left( v_{jk}^{p}(t+1) + v_{jk}^{c}(t+1) \right) \\
x_{jk}(t+1) = \frac{1}{2}\left( x_{jk}^{p}(t+1) + x_{jk}^{c}(t+1) \right)
\end{cases}
\]
3. Fourth-Order Runge–Kutta Method
Its local truncation error is of order 4. Applying it to the system of differential equations (Equation (38)), the following evolution formula (Equation (40)) can be obtained:
\[
\begin{cases}
v_{jk}(t+1) = v_{jk}(t) + \frac{h}{6}\left( K_1 + 2K_2 + 2K_3 + K_4 \right) \\
x_{jk}(t+1) = x_{jk}(t) + \frac{h}{6}\left( L_1 + 2L_2 + 2L_3 + L_4 \right)
\end{cases}
\]
where Equations (41)–(48) are:
\[
K_1 = (\omega - 1)\, v_{jk}(t) - \varphi\, x_{jk}(t) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
L_1 = \omega\, v_{jk}(t) - \varphi\, x_{jk}(t) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
K_2 = (\omega - 1)\left( v_{jk}(t) + \tfrac{h}{2} K_1 \right) - \varphi\left( x_{jk}(t) + \tfrac{h}{2} L_1 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
L_2 = \omega\left( v_{jk}(t) + \tfrac{h}{2} K_1 \right) - \varphi\left( x_{jk}(t) + \tfrac{h}{2} L_1 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
K_3 = (\omega - 1)\left( v_{jk}(t) + \tfrac{h}{2} K_2 \right) - \varphi\left( x_{jk}(t) + \tfrac{h}{2} L_2 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
L_3 = \omega\left( v_{jk}(t) + \tfrac{h}{2} K_2 \right) - \varphi\left( x_{jk}(t) + \tfrac{h}{2} L_2 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
K_4 = (\omega - 1)\left( v_{jk}(t) + h K_3 \right) - \varphi\left( x_{jk}(t) + h L_3 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
\[
L_4 = \omega\left( v_{jk}(t) + h K_3 \right) - \varphi\left( x_{jk}(t) + h L_3 \right) + (\varphi_1 p_{jk} + \varphi_2 p_{gk})
\]
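The following sketch (an illustrative, non-authoritative implementation; the function and variable names are ours) performs one fourth-order Runge–Kutta update of a particle's velocity and position according to Equations (40)–(48), with the random coefficients held fixed during the four slope evaluations of the step.

```python
import numpy as np

def rk4_dpso_step(v, x, p_best, g_best, omega, phi1, phi2, h):
    """One fourth-order Runge-Kutta update (Equations (40)-(48)).

    phi1, phi2 are the already-sampled random coefficients c1*r1 and c2*r2.
    """
    phi = phi1 + phi2
    attract = phi1 * p_best + phi2 * g_best

    def f1(v_, x_):                      # right-hand side for dv/dt
        return (omega - 1) * v_ - phi * x_ + attract

    def f2(v_, x_):                      # right-hand side for dx/dt
        return omega * v_ - phi * x_ + attract

    K1, L1 = f1(v, x),                       f2(v, x)
    K2, L2 = f1(v + h/2*K1, x + h/2*L1),     f2(v + h/2*K1, x + h/2*L1)
    K3, L3 = f1(v + h/2*K2, x + h/2*L2),     f2(v + h/2*K2, x + h/2*L2)
    K4, L4 = f1(v + h*K3, x + h*L3),         f2(v + h*K3, x + h*L3)

    v_new = v + h/6 * (K1 + 2*K2 + 2*K3 + K4)
    x_new = x + h/6 * (L1 + 2*L2 + 2*L3 + L4)
    return v_new, x_new
```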

4.2. Selection of Parameters

In order to verify the performance of the three different differential evolution PSO algorithms proposed above, it is necessary to give the selection methods of inertia weight ω , cognitive coefficient c 1 , social coefficient c 2 , and step size h. Since the step size h is generated after the differential model (Equation (38)) is differenced, first consider the conditions that need to be satisfied when the differential equation system (Equation (38)) is stable.
To further simplify the model, let:
\[
y_{jk}(t) = x_{jk}(t) - \frac{\varphi_1 p_{jk} + \varphi_2 p_{gk}}{\varphi}
\]
Thus, transforming the differential model (Equation (38)) into:
\[
\begin{cases}
\dfrac{dv_{jk}(t)}{dt} = (\omega - 1)\, v_{jk}(t) - \varphi\, y_{jk}(t) \\
\dfrac{dy_{jk}(t)}{dt} = \omega\, v_{jk}(t) - \varphi\, y_{jk}(t)
\end{cases}
\]
Its singular point satisfies:
\[
(\omega - 1)\, v_{jk}(t) - \varphi\, y_{jk}(t) = 0, \qquad \omega\, v_{jk}(t) - \varphi\, y_{jk}(t) = 0
\]
which gives:
\[
v_{jk}(t) = y_{jk}(t) = 0
\]
The coefficient determinant is:
\[
\begin{vmatrix}
\omega - 1 & -\varphi \\
\omega & -\varphi
\end{vmatrix} = \varphi
\]
Since the random numbers $r_1$ and $r_2$ are selected in (0, 1), we have $\varphi = c_1 r_1 + c_2 r_2 > 0$ in the previous formula. The characteristic equation is:
\[
\begin{vmatrix}
\lambda - (\omega - 1) & \varphi \\
-\omega & \lambda + \varphi
\end{vmatrix} = 0
\]
Rearranging gives:
\[
\lambda^2 + (1 + \varphi - \omega)\lambda + \varphi = 0
\]
The characteristic roots are:
\[
\lambda_{1,2} = \frac{-(1 + \varphi - \omega) \pm \sqrt{(1 + \varphi - \omega)^2 - 4\varphi}}{2}
\]
According to the stability theory of ordinary differential equations, if the real parts of the eigenvalues are negative, the zero solution $(v_{jk}(t), y_{jk}(t)) = (0, 0)$ is asymptotically stable.
According to the usual parameter selection method, the inertia weight satisfies $0 < \omega \le 1$ and $\varphi = c_1 r_1 + c_2 r_2 > 0$, that is, $1 + \varphi - \omega > 0$, so:
  • If $(1 + \varphi - \omega)^2 - 4\varphi \ge 0$, then since $1 + \varphi - \omega > \sqrt{(1 + \varphi - \omega)^2 - 4\varphi}$, we have $\mathrm{Re}(\lambda_{1,2}) < 0$;
  • If $(1 + \varphi - \omega)^2 - 4\varphi < 0$, then $\mathrm{Re}(\lambda_{1,2}) = -\dfrac{1 + \varphi - \omega}{2} < 0$.
Therefore, the zero solution $(v_{jk}(t), y_{jk}(t)) = (0, 0)$ is asymptotically stable, and:
\[
\lim_{t \to +\infty} y_{jk}(t) = \lim_{t \to +\infty}\left( x_{jk}(t) - \frac{\varphi_1 p_{jk} + \varphi_2 p_{gk}}{\varphi} \right) = 0
\quad\Longrightarrow\quad
\lim_{t \to +\infty} x_{jk}(t) = \frac{\varphi_1 p_{jk} + \varphi_2 p_{gk}}{\varphi}
\]
This is the same as the existing theoretical results. The results show that the existing parameter selection method in the standard PSO algorithm can ensure the stability of the differential model (Equation (38)). Therefore, it is only necessary to discuss the selection method of the step size, and the selection methods of other parameters refer to the parameter settings of the standard PSO algorithm.

4.3. Absolute Stability

When using a single-step method to solve the differential model of the initial value problem $\{ y' = f(x, y),\ y(x_0) = y_0 \}$, the error changes continually because of errors in the original data and rounding errors in the calculation process. If the error does not grow as the iteration proceeds, the single-step method is said to be stable; otherwise, it is said to be unstable. Since the stability is related to the method and to $\partial f / \partial y$ in the initial value problem, when studying the stability of an algorithm it is usual to study the model equation $y' = \lambda y$ ($\mathrm{Re}(\lambda) < 0$) directly, where $\lambda = \partial f / \partial y\,(x, y)$. Applying a single-step method to the model equation gives:
\[
y_{n+1} = E(h\lambda)\, y_n
\]
where $E(h\lambda)$ depends on the chosen method. If $|E(h\lambda)| < 1$, the method is said to be stable; the region of the complex plane in which the complex variable $h\lambda$ satisfies $|E(h\lambda)| < 1$ is called the region of absolute stability, and its intersection with the real axis is called the absolute stability interval. The absolute stability intervals of the common single-step methods are as follows:
  • Euler method: $-2 < h\lambda < 0$;
  • Second-order Runge–Kutta method: $-2 < h\lambda < 0$;
  • Third-order Runge–Kutta method: $-2.51 < h\lambda < 0$;
  • Fourth-order Runge–Kutta method: $-2.785 < h\lambda < 0$.
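As a quick check (an illustrative sketch, not part of the original paper), one can scan the amplification factor E(hλ) of a method along the negative real axis; for Euler's method the standard factor is E(z) = 1 + z, and for the classical fourth-order Runge–Kutta method it is E(z) = 1 + z + z²/2 + z³/6 + z⁴/24, whose |E(z)| < 1 intervals end near -2 and -2.785, respectively.

```python
import numpy as np

def stability_boundary(E, z_min=-4.0, samples=400000):
    """Return the most negative real z with |E(z)| < 1 on a fine grid scan."""
    z = np.linspace(z_min, 0.0, samples)
    stable = np.abs(E(z)) < 1.0
    return z[stable].min()

E_euler = lambda z: 1 + z
E_rk4 = lambda z: 1 + z + z**2/2 + z**3/6 + z**4/24

print("Euler stability boundary ~", stability_boundary(E_euler))   # about -2.0
print("RK4 stability boundary   ~", stability_boundary(E_rk4))     # about -2.785
```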

4.4. How to Choose the Step Size h

Consider the following strategy for choosing a step size that satisfies absolute stability.

4.4.1. Step Size Selection Range of the Euler Method

First, consider the conditions that need to be satisfied for the absolute stability of the velocity direction. From the absolute stability of Euler’s method, it can be seen that its absolute stability interval satisfies:
\[
|1 + \lambda h| < 1
\]
where h is the algorithm step size and $\lambda = \partial f_1 / \partial v_{jk}$. From the definition of $f_1(v_{jk}, x_{jk})$, it can be seen that:
\[
\lambda = \frac{\partial f_1}{\partial v_{jk}} = \omega - 1
\]
Substituting into the absolute stability interval shows that the step size h satisfies:
\[
0 < h < \frac{2}{1 - \omega}
\]
According to the definition of absolute stability, the parameter $\lambda = \partial f_1 / \partial v_{jk} < 0$, which means $\omega < 1$. Therefore, for the velocity vector, the condition for absolute stability is:
\[
\omega < 1, \qquad 0 < h < \frac{2}{1 - \omega}
\]
Next, we continue to consider the absolute stability of the position vector defined by:
\[
\frac{dx_{jk}(t)}{dt} = f_2(t, v_{jk}, x_{jk})
\]
In $f_2(t, v_{jk}, x_{jk}) = \omega v_{jk} - \varphi x_{jk} + (\varphi_1 p_{jk} + \varphi_2 p_{gk})$, the parameter $\lambda$ satisfies:
\[
\lambda = \frac{\partial f_2}{\partial x_{jk}} = -\varphi < 0
\]
According to the definition of absolute stability, the absolute stability interval of the step size h is obtained as:
\[
0 < h < \frac{2}{\varphi}
\]
Therefore, for Euler’s method with a step size h, in order to ensure the absolute stability of the evolution equations of the velocity vector and the position vector, the step size h needs to satisfy:
\[
0 < h < \min\left\{ \frac{2}{1 - \omega},\ \frac{2}{\varphi} \right\}
\]

4.4.2. Step Size Selection Range of the Improved Euler Method

Since the improved Euler method is a second-order Runge–Kutta method, its absolute stability interval is the same as that of the Euler method, so the admissible step size h is also given by Equation (64).

4.4.3. Step Size Selection Range of the Fourth-Order Runge–Kutta Method

First, consider the velocity evolution equation. For the fourth-order Runge–Kutta method, its absolute stability interval is:
\[
-2.785 < h\lambda < 0
\]
Substituting $\lambda = \dfrac{\partial f_1}{\partial v_{jk}} = \omega - 1$, we get:
\[
0 < h < \frac{2.785}{1 - \omega}
\]
Similarly, for the position evolution equation, we can obtain:
\[
0 < h < \frac{2.785}{\varphi}
\]
Based on the above results, it can be obtained that the step size h to ensure the absolute stability of the fourth-order Runge–Kutta method should satisfy:
\[
0 < h < \min\left\{ \frac{2.785}{1 - \omega},\ \frac{2.785}{\varphi} \right\}
\]

4.4.4. Limitations of Standard PSOs

The following is an analysis of the case where the step size is fixed at 1 in the standard PSO algorithm. When the step size is 1, the algorithm is absolutely stable if and only if Equation (69) holds,
\[
1 < \frac{2}{1 - \omega} < \frac{2}{\varphi},
\]
or Equation (70) holds,
\[
1 < \frac{2}{\varphi} < \frac{2}{1 - \omega}
\]
For Equation (69), $1 < \frac{2}{1 - \omega}$ implies that:
\[
-1 < \omega < 1
\]
that is, $|\omega| < 1$, and $\frac{2}{1 - \omega} < \frac{2}{\varphi}$ means that:
\[
\omega + \varphi < 1
\]
That is, Equation (69) is equivalent to:
\[
\begin{cases}
\omega + \varphi < 1 \\
0 < \omega < 1
\end{cases}
\]
For Equation (70), $1 < \frac{2}{\varphi} < \frac{2}{1 - \omega}$ is equivalent to:
\[
\begin{cases}
\varphi < 2 \\
\omega + \varphi > 1
\end{cases}
\]
In conclusion, the standard PSO algorithm with step size 1 is absolutely stable if and only if $\varphi < 1 - \omega$ or $1 - \omega < \varphi < 2$. In the usual parameter settings of the standard PSO algorithm, the cognitive coefficient $c_1$ and the social coefficient $c_2$ are set to 1.414 or 2.0, so $\varphi = c_1 r_1 + c_2 r_2$ can exceed 2. In this way, the above conditions can only hold with a certain probability (less than 1). Since the algorithm cannot guarantee absolute stability, the error may increase as the number of iterations increases, and the algorithm may be unable to search for extreme points over the entire region.
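To illustrate this point numerically (our own sketch; the inertia weight ω = 0.7 is an assumption not specified at this point in the paper, while c1 = c2 = 2.0 follows the values mentioned in the text), a simple Monte-Carlo estimate of how often the step-size-1 stability condition holds is:

```python
import numpy as np

rng = np.random.default_rng(42)
omega = 0.7                 # assumed inertia weight (illustrative)
c1 = c2 = 2.0               # coefficient values mentioned in the text
n = 1_000_000

phi = c1 * rng.random(n) + c2 * rng.random(n)
stable = (phi < 1 - omega) | ((phi > 1 - omega) & (phi < 2))
print("fraction of samples satisfying the step-size-1 stability condition:",
      stable.mean())        # roughly 0.5 for these values
```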

4.4.5. The Choice of Step Size in Differential Evolution PSO

In order to establish the framework of the differential evolution particle swarm algorithm, it is necessary to establish the selection method of the step size h. From the previous section, we can see that the step size is equivalent to a dynamic sampling of the foraging behavior of birds. In order to find the real food source as much as possible, the step length needs to be dynamically adjusted with the bird’s foraging behavior, but the bird’s foraging behavior is random, and it is impossible to pre-set when to search for food and when to prey. Therefore, in order to satisfy the random characteristics of this foraging behavior, this section sets the step size as a random number, and the specific selection method is as follows:
1. For the Euler method and the improved Euler method, the step size satisfies:
\[
0 < h < \min\left\{ \frac{2}{1 - \omega},\ \frac{2}{\varphi} \right\}
\]
Rearranging gives:
\[
\begin{cases}
0 < h < \dfrac{2}{1 - \omega}, & \omega + \varphi < 1 \\[2mm]
0 < h < \dfrac{2}{\varphi}, & \text{otherwise}
\end{cases}
\]
Therefore, the step size can be determined as:
\[
h = \begin{cases}
\dfrac{2}{1 - \omega}\cdot \mathrm{rand}(0, 1), & \omega + \varphi < 1 \\[2mm]
\dfrac{2}{\varphi}\cdot \mathrm{rand}(0, 1), & \text{otherwise}
\end{cases}
\]
2. For the fourth-order Runge–Kutta method, the step size h satisfies:
\[
0 < h < \min\left\{ \frac{2.785}{1 - \omega},\ \frac{2.785}{\varphi} \right\}
\]
Therefore, the step size can be chosen as follows:
\[
h = \begin{cases}
\dfrac{2.785}{1 - \omega}\cdot \mathrm{rand}(0, 1), & \omega + \varphi < 1 \\[2mm]
\dfrac{2.785}{\varphi}\cdot \mathrm{rand}(0, 1), & \text{otherwise}
\end{cases}
\]
where $\mathrm{rand}(0, 1)$ represents a uniformly distributed random number between 0 and 1. The step-size formulas, Equations (77) and (79), show that because the step size is a random number, not only is the step size of each particle different, but the step size is also not necessarily the same for different components of the same particle.
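The step-size rule of Equations (77) and (79) can be sketched as follows (illustrative code with our own function name; the constant is 2 for the Euler and improved Euler methods and 2.785 for the fourth-order Runge–Kutta method, and the values in the usage example are assumptions).

```python
import numpy as np

def sample_step_size(omega, phi, bound, rng):
    """Sample a random step size h inside the absolute stability interval.

    bound = 2.0 for the Euler / improved Euler methods (Equation (77)),
    bound = 2.785 for the fourth-order Runge-Kutta method (Equation (79)).
    phi may be an array (one value per component), so each component of a
    particle can receive its own step size.
    """
    phi = np.asarray(phi, dtype=float)
    limit = np.where(omega + phi < 1, bound / (1 - omega), bound / phi)
    return limit * rng.random(phi.shape)

# Example: per-component step sizes for one particle (assumed values).
rng = np.random.default_rng(1)
phi = 1.414 * rng.random(3) + 1.414 * rng.random(3)   # phi = c1*r1 + c2*r2 per component
h = sample_step_size(omega=0.7, phi=phi, bound=2.785, rng=rng)
print(h)
```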

4.5. Instance Simulation

For the performance comparison on typical test functions such as the F8 and F10–F13 functions, please refer to Figure 2. The abscissa of Figure 2 represents the evolutionary generation, and the ordinate represents the average optimal fitness value. The differential PSO algorithm with the Runge–Kutta method (DPSO-RK) is better than the differential PSO algorithm with the Euler method (DPSO-E) and the differential PSO with the modified Euler method (DPSO-ME). This is easy to understand numerically, since the fourth-order Runge–Kutta method has a fourth-order local truncation error and can therefore track the solution of the differential model (Equation (26)) more accurately.
To further verify the performance of the DPSO-RK algorithm, three other algorithms were selected: standard PSO (SPSO) [27,28], modified time-varying acceleration coefficient PSO (MPSO-TVAC) [29,30], and comprehensive learning PSO (CLPSO) [31,32]. When the dimension of the test function is high (100 to 300), the differential evolution PSO based on the Runge–Kutta method works very well, far better than the other three algorithms, as shown in Figure 3. The absolute stability selection of the step size gives the algorithm strong global search ability, allowing it to better jump out of local extreme points. Table 2 and Table 3 present the execution times of the different PSO algorithms on the F8~F13 test functions. The CPU of the computer running the program is an Intel(R) Core(TM) i5-9400F at 2.90 GHz, the operating system is Microsoft Windows 11, the memory is 32 GB, and the MATLAB version is 2021b.

5. Conclusions

With the rapid development of PSO, a large number of improved algorithms have been developed. However, how to analyze their advantages and disadvantages and integrate them to provide a unified idea for further improvement is a problem that needs to be considered. Firstly, a unified differential model including the basic PSO, standard PSO, PSO with shrinkage factor, and random PSO is established, and their stability is discussed. This work shows that the various differenced forms of the same differential model have a great impact on the performance of the algorithm; for example, the standard PSO algorithm is generally better than the basic PSO algorithm.
Starting from this conclusion, this paper discusses the different transformation methods of the standard PSO differential model. The importance of the step size parameter is discussed from the perspective of biology, and the selection method of the step size is then given using absolute stability theory. On this basis, the differential evolution PSO model is established with the help of several different numerical methods: differential evolution PSO algorithms based on the Euler method, the improved Euler method, and the Runge–Kutta method. The simulation results show that the differential evolution PSO algorithm based on the Runge–Kutta method can effectively avoid premature convergence and improve the computational efficiency of the algorithm. This paper first analyzes the influence of various discretizations of the differential model on the performance of PSO and then provides a new idea: the differenced approaches of differential models offer a way to further improve the performance of the numerous improved algorithms that have been published.

Author Contributions

Investigation, W.-T.S.; Writing—original draft, S.-J.H. All authors have read and agreed to the published version of the manuscript.

Funding

The author received no specific funding for this study.

Data Availability Statement

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Acknowledgments

This research was supported by the Department of Information Technology, Takming University of Science and Technology. The authors would like to thank the Takming University of Science and Technology, for financially supporting this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Basic Concepts and Evolution Equations of PSO Algorithm

This paper discusses numerical optimization problems of the following form, Equation (A1):
\[
\min f(x), \qquad x \in [x_{min},\ x_{max}]^D
\]
where D represents the dimension of the problem.
The PSO algorithm was first proposed by the American social psychologist Kennedy and the electrical engineer Eberhart in 1995, and its basic idea was inspired by their earlier work on modeling and simulating the group behavior of bird flocks. Their model and simulation algorithm mainly use the biologist Heppner's model.
Heppner’s bird model shares many similarities with other class models in reflecting flock behavior. The difference is that birds are drawn to their habitat. In the simulation, each bird initially flew without a specific target until a bird flew to the habitat. When setting the expected habitat has a larger fitness value than the expectation to stay in the flock, each bird will leave the flock and fly to the habitat, and then the flock will be formed naturally. Since birds use simple rules to determine their own flight direction and speed (essentially, each bird tries to stop in the flock without colliding with each other). When a bird flies away from the flock to its habitat, it causes other birds around it to also fly to the habitat. Once these birds find a habitat, they will land there, driving more birds to the habitat until the entire flock is in the habitat. Birds searching for habitat is very similar to finding a solution to a specific problem. The way a bird that has found a habitat guides the birds around it to fly to that habitat increases the likelihood that the entire flock will find habitat, and is in line with the social-cognitive view of belief. The particle swarm algorithm regards each individual as a particle without mass and volume in the D-dimensional search space, and flies at a certain speed. The flight speed is dynamically adjusted by individual flight experience and group flight experience. The velocity evolution equation of the standard particle swarm optimization algorithm is:
\[
v_j(t+1) = \omega\, v_j(t) + c_1 r_1\left( p_j(t) - x_j(t) \right) + c_2 r_2\left( p_g(t) - x_j(t) \right)
\]
where:
  • $v_j(t)$: the velocity of particle j at generation t.
  • $\omega$: the inertia weight.
  • $c_1$: the cognitive coefficient.
  • $r_1, r_2$: random numbers that follow a uniform distribution.
  • $p_j(t)$: the individual historical optimal position of particle j.
  • $x_j(t)$: the position of particle j at generation t.
  • $c_2$: the social coefficient.
  • $p_g(t)$: the historical optimal position of the group.
The individual historical optimal position p j ( t ) represents the optimal position searched by particle j in the previous t generations. For minimization problems, smaller objective function values will correspond to better fitness values. Its update rule is:
\[
p_j(t) = \begin{cases}
p_j(t-1), & f(x_j(t)) > f(p_j(t-1)) \\
x_j(t), & \text{otherwise}
\end{cases}
\]
And the group historical optimal position p g ( t ) is defined as:
\[
p_g(t) = \arg\min\left\{ f(p_j(t)) \mid j = 1, 2, \ldots, n \right\}
\]
The n in Equation (A4) represents the number of particles contained in the population. In order to ensure the stability of the algorithm, the algorithm defines a maximum speed upper limit v m a x to limit the size of the particle moving speed, namely Equation (A5):
\[
\left| v_{jk}(t+1) \right| \le v_{max}
\]
Correspondingly, the position evolution equation of particle j is Equation (A6):
\[
x_j(t+1) = v_j(t+1) + x_j(t)
\]

Appendix A.2. Algorithm Flow

The steps of the standard particle swarm algorithm are as follows:
(1) Particle swarm initialization. The initial position of each particle is randomly selected in the domain of definition. The velocity vector is randomly selected in $[-v_{max},\ v_{max}]^D$. The individual historical optimal position $p_j(t)$ is set equal to the initial position of each particle. The group optimal position $p_g(t)$ is the position of the particle with the best fitness value. The evolution generation t is set to 0.
(2) Parameter initialization. Set the inertia weight $\omega$, the cognitive coefficient $c_1$, the social coefficient $c_2$, and the maximum speed limit $v_{max}$.
(3) The velocity of the next generation of particles is calculated by Equation (A2).
(4) Adjust the speed of each particle according to Equation (A5).
(5) The position of the next generation of particles is calculated by Equation (A6).
(6) Calculate the fitness value of each particle.
(7) Update the individual historical optimal position and the group historical optimal position of each particle.
(8) If the end condition is not met, go back to step (3); otherwise, stop the calculation and output the optimal result.
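The sketch below runs the standard PSO loop of steps (1) to (8) above. It is a minimal, non-authoritative illustration: the function name, the default parameter values, and the velocity limit of 0.2 times the search range are our own assumptions, not taken from the paper.

```python
import numpy as np

def standard_pso(f, dim, bounds, n_particles=30, max_gen=1000,
                 omega=0.7, c1=1.414, c2=1.414, seed=0):
    """Standard PSO loop following steps (1)-(8) of Appendix A.2.

    f      : objective function to minimize, f(x) with x of shape (dim,)
    bounds : (x_min, x_max) search range, applied to every dimension
    """
    rng = np.random.default_rng(seed)
    x_min, x_max = bounds
    v_max = 0.2 * (x_max - x_min)          # assumed velocity limit

    # Steps (1)-(2): initialize positions, velocities, and best positions.
    x = rng.uniform(x_min, x_max, (n_particles, dim))
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    p_best = x.copy()
    p_val = np.apply_along_axis(f, 1, x)
    g_best = p_best[np.argmin(p_val)].copy()

    for _ in range(max_gen):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step (3): velocity update, Equation (A2).
        v = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        # Step (4): clamp velocities, Equation (A5).
        v = np.clip(v, -v_max, v_max)
        # Step (5): position update, Equation (A6).
        x = x + v
        # Steps (6)-(7): evaluate fitness and update personal / global bests.
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < p_val
        p_best[improved] = x[improved]
        p_val[improved] = vals[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    # Step (8): return the best solution found.
    return g_best, p_val.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_val = standard_pso(lambda z: np.sum(z**2), dim=10, bounds=(-5.0, 5.0))
print(best_val)
```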
Table A1 presents the factors affecting the PSO algorithm.
Table A1. Factors Affecting the PSO Algorithm.
1. Vmax (maximum speed): Determines the resolution (or precision) of the region between the current position and the best position. If Vmax is too high, particles may fly past good solutions; if Vmax is too small, particles cannot explore sufficiently and may become trapped in local optima.
2. ω (inertia weight): Keeps the particles moving by inertia, giving them a tendency to expand the search space and the ability to explore new areas.
3. c1, c2 (acceleration constants): Represent the weights of the acceleration terms that pull each particle toward the pbest and gbest positions. Low values allow particles to wander far from the target region before being pulled back, while high values cause particles to rush suddenly toward, or overshoot, the target region.
4. c1 = 0, c2 = 0 (the second and third parts of the PSO equation do not exist): Particles keep flying at their current speed until they reach the boundary. Since only a limited region can be searched, it is difficult to find a good solution.
5. ω = 0 (the first part of the PSO equation does not exist): The particle velocity depends only on the particles' current positions and their historical best positions pbest and gbest, and the velocity itself has no memory. A particle located at the global best position remains stationary, while the other particles fly toward the weighted center of their own best position pbest and the global best position gbest. Under this condition, the particle swarm statistically contracts to the current global best position, behaving more like a local search algorithm.
6. ω present: With the first part added, the particles have a tendency to expand the search space, that is, the first part provides global search ability. The role of ω is thus to adjust the balance between the global and local search capabilities of the algorithm for different search problems.
7. c1 = 0 (the second part of the PSO equation does not exist): Particles have no cognitive ability, giving a "social-only" model. Through the interaction of particles, the swarm can still reach new search regions. It converges faster than the standard version, but for complex problems it falls into local optima more easily than the standard version.
8. c2 = 0 (the third part of the PSO equation does not exist): There is no social information sharing between particles, giving a "cognition-only" model. Because there is no interaction between individuals, a particle swarm of size m is equivalent to running m single particles, so the probability of obtaining a good solution is very small.

References

1. Gao, X.; Ren, Z.; Zhai, L.; Jia, Q.; Liu, H. Two-Stage Switching Hybrid Control Method Based on Improved PSO for Planar Three-Link Under-Actuated Manipulator. IEEE Access 2019, 7, 76263–76273.
2. Yun, Y.C.; Oh, S.H.; Lee, J.H.; Choi, K.; Chung, T.K.; Kim, H.S. Optimal Design of a Compact Filter for UWB Applications Using an Improved Particle Swarm Optimization. IEEE Trans. Magn. 2016, 52, 1–4.
3. Ma, H.; Wang, T.; Li, Y.; Meng, Y. A Time Picking Method for Microseismic Data Based on LLE and Improved PSO Clustering Algorithm. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1677–1681.
4. Huang, Y.; Xiang, Y.; Zhao, R.; Cheng, Z. Air Quality Prediction Using Improved PSO-BP Neural Network. IEEE Access 2020, 8, 99346–99353.
5. Jong, D. Evolutionary Computation Theory. In Evolutionary Computation; MIT Press: Cambridge, MA, USA, 2006; pp. 115–211.
6. Zhang, H.; Liang, Y.; Zhang, W.; Xu, N.; Guo, Z.; Wu, G. Improved PSO-Based Method for Leak Detection and Localization in Liquid Pipelines. IEEE Trans. Ind. Inform. 2018, 14, 3143–3154.
7. Ma, Z.; Dong, Y.; Liu, H.; Shao, X.; Wang, C. Forecast of Non-Equal Interval Track Irregularity Based on Improved Grey Model and PSO-SVM. IEEE Access 2018, 6, 34812–34818.
8. Yiyang, L.; Xi, J.; Hongfei, B.; Zhining, W.; Liangliang, S. A General Robot Inverse Kinematics Solution Method Based on Improved PSO Algorithm. IEEE Access 2021, 9, 32341–32350.
9. Sun, G.; Qin, D.; Lan, T.; Ma, L. Research on Clustering Routing Protocol Based on Improved PSO in FANET. IEEE Sens. J. 2021, 21, 27168–27185.
10. Kang, M.S.; Won, Y.J.; Lim, B.G.; Kim, K.T. Efficient Synthesis of Antenna Pattern Using Improved PSO for Spaceborne SAR Performance and Imaging in Presence of Element Failure. IEEE Sens. J. 2018, 18, 6576–6587.
11. Zhou, Y.; Wang, N.; Xiang, W. Clustering Hierarchy Protocol in Wireless Sensor Networks Using an Improved PSO Algorithm. IEEE Access 2017, 5, 2241–2253.
12. Cheng, Z.; Fan, L.; Zhang, Y. Multi-agent decision support system for missile defense based on improved PSO algorithm. J. Syst. Eng. Electron. 2017, 28, 514–525.
13. Zhou, Z.; Li, F.; Abawajy, J.H.; Gao, C. Improved PSO Algorithm Integrated with Opposition-Based Learning and Tentative Perception in Networked Data Centres. IEEE Access 2020, 8, 55872–55880.
14. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117.
15. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040.
16. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233.
17. Pourzangbar, A.; Vaezi, M. Optimal design of brace-viscous damper and pendulum tuned mass damper using Particle Swarm Optimization. Appl. Ocean Res. 2021, 112, 102706.
18. Cui, Y.; Meng, X.; Qiao, J. A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl. Soft Comput. 2022, 119, 108532.
19. Xu, Y.; Hu, C.; Wu, Q.; Jian, S.; Li, Z.; Chen, Y.; Zhang, G.; Zhang, Z.; Wang, S. Research on Particle Swarm Optimization in LSTM Neural Networks for Rainfall-Runoff Simulation. J. Hydrol. 2022, 608, 127553.
20. Zouari, M.; Baklouti, N.; Sanchez, J.; Kammoun, H.M.; Ayed, M.B.; Alimi, A.M. PSO-Based Adaptive Hierarchical Interval Type-2 Fuzzy Knowledge Representation System (PSO-AHIT2FKRS) for Travel Route Guidance. IEEE Trans. Intell. Transp. Syst. 2022, 23, 804–818.
21. Liu, X.; Shi, Q.; Liu, Z.; Yuan, J. Using LSTM Neural Network Based on Improved PSO and Attention Mechanism for Predicting the Effluent COD in a Wastewater Treatment Plant. IEEE Access 2021, 9, 146082–146096.
22. Ghatak, S.R.; Sannigrahi, S.; Acharjee, P. Comparative Performance Analysis of DG and DSTATCOM Using Improved PSO Based on Success Rate for Deregulated Environment. IEEE Syst. J. 2018, 12, 2791–2802.
23. Han, Z.; Li, Y.; Liang, J. Numerical Improvement for the Mechanical Performance of Bikes Based on an Intelligent PSO-ABC Algorithm and WSN Technology. IEEE Access 2018, 6, 32890–32898.
24. Deng, W.; Xu, J.; Zhao, H.; Song, Y. A Novel Gate Resource Allocation Method Using Improved PSO-Based QEA. IEEE Trans. Intell. Transp. Syst. 2022, 23, 1737–1745.
25. Banerjee, S.; Ghosh, A.; Rana, N. An Improved Interleaved Boost Converter With PSO-Based Optimal Type-III Controller. IEEE J. Emerg. Sel. Top. Power Electron. 2017, 5, 323–337.
26. Chen, Z.; Li, W.; Shu, X.; Shen, J.; Zhang, Y.; Shen, S. Operation Efficiency Optimization for Permanent Magnet Synchronous Motor Based on Improved Particle Swarm Optimization. IEEE Access 2021, 9, 777–788.
27. Umar, A.; Shi, Z.; Khlil, A.; Farouk, Z.I.B. Developing a New Robust Swarm-Based Algorithm for Robot Analysis. Mathematics 2020, 8, 158.
28. Wang, Y.; Li, B.; Yin, L.; Wu, J.; Wu, S.; Liu, C. Velocity-Controlled Particle Swarm Optimization (PSO) and Its Application to the Optimization of Transverse Flux Induction Heating Apparatus. Energies 2019, 12, 487.
29. Liu, G.; Zhu, H. Displacement Estimation of Six-Pole Hybrid Magnetic Bearing Using Modified Particle Swarm Optimization Support Vector Machine. Energies 2022, 15, 1610.
30. Sayed, A.; Ebeed, M.; Ali, Z.M.; Abdel-Rahman, A.B.; Ahmed, M.; Abdel Aleem, S.H.E.; El-Shahat, A.; Rihan, M. A Hybrid Optimization Algorithm for Solving of the Unit Commitment Problem Considering Uncertainty of the Load Demand. Energies 2021, 14, 8014.
31. Borowska, B. Learning Competitive Swarm Optimization. Entropy 2022, 24, 283.
32. Lin, A.; Sun, W. Multi-Leader Comprehensive Learning Particle Swarm Optimization with Adaptive Mutation for Economic Load Dispatch Problems. Energies 2019, 12, 116.
Figure 1. Comparison of differential equation models and difference equation models.
Figure 2. Comparison of dynamic performance of three DPSO algorithms in the case of 300 dimensions. (a) F8 function, (b) F10 function, (c) F11 function, (d) F12 function, (e) F13 function, (f) algorithm icon illustration.
Figure 3. Comparison of dynamic performance of four algorithms in the case of 300 dimensions. (a) F8 function, (b) F10 function, (c) F11 function, (d) F12 function, (e) F13 function, (f) algorithm icon illustration.
Table 1. Orders of local truncation errors for different numerical algorithms.

Algorithm | Order of Local Truncation Error
Euler method (Equation (29)) | 1st order
Implicit Euler method (Equation (30)) | 1st order
Trapezoidal Euler method (Equation (31)) | 2nd order
Improved Euler method (Equation (32)) | 2nd order
Second-order Runge–Kutta method (Equation (33)) | 2nd order
Third-order Runge–Kutta method (Equation (34)) | 3rd order
Fourth-order Runge–Kutta method (Equation (35)) | 4th order
Table 2. Algorithm execution times of the three DPSO algorithms on F8~F13.

Test Item | DPSO-E (s) | DPSO-ME (s) | DPSO-RK (s)
F8 (Schwefel function) | 42.56 | 50.23 | 72.45
F10 (Ackley function) | 63.38 | 68.41 | 74.32
F11 (Griewank function) | 81.45 | 85.21 | 90.42
F12 (Penalized function) | 106.23 | 108.32 | 110.45
F13 (Penalized function, different coefficient range from F12) | 124.26 | 126.23 | 128.43
Table 3. Algorithm execution times of the four different PSO algorithms on F8~F13.

Test Item | SPSO (s) | MPSO-TVAC (s) | CLPSO (s) | DPSO-RK (s)
F8 (Schwefel function) | 43.15 | 45.46 | 47.21 | 48.51
F10 (Ackley function) | 65.22 | 68.41 | 71.35 | 73.24
F11 (Griewank function) | 82.46 | 83.11 | 86.13 | 88.03
F12 (Penalized function) | 108.43 | 110.33 | 113.24 | 114.25
F13 (Penalized function, different coefficient range from F12) | 126.16 | 127.42 | 129.15 | 131.54
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

