Article

Toward an Ideal Particle Swarm Optimizer for Multidimensional Functions

by Vasileios Charilogis and Ioannis G. Tsoulos *,†
Department of Informatics and Telecommunications, University of Ioannina, 47150 Ioannina, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Information 2022, 13(5), 217; https://doi.org/10.3390/info13050217
Submission received: 22 March 2022 / Revised: 18 April 2022 / Accepted: 19 April 2022 / Published: 21 April 2022
(This article belongs to the Special Issue Advances in Swarm Intelligence and Evolutionary Computation)

Abstract

The Particle Swarm Optimization (PSO) method is a global optimization technique based on the gradual evolution of a population of solutions called particles. The method evolves the particles using both the best position each of them has visited in the past and the best position of the whole swarm. Due to its simplicity, the method has found application in many scientific areas, and for this reason, many modifications have been presented during the last few years. This paper introduces three modifications to the method that aim to reduce the required number of function calls while maintaining the method's accuracy in locating the global minimum. These modifications affect important components of the method, such as the velocity of the particles or the way the method is terminated. The modifications were tested on a number of well-known optimization problems from the relevant literature, and the results were compared with similar techniques.

1. Introduction

The global optimization problem is usually defined as:
$$x^{*} = \arg\min_{x \in S} f(x)$$
with $S$:
$$S = \left[a_1, b_1\right] \times \left[a_2, b_2\right] \times \cdots \times \left[a_n, b_n\right]$$
The function $f$ is considered continuous and differentiable, formulated as $f: S \to \mathbb{R}$, $S \subset \mathbb{R}^n$. This problem finds application in a variety of real-world objective problems, such as problems of physics [1,2,3], chemistry [4,5,6], economics [7,8], medicine [9,10], etc. Global optimization methods are grouped into two broad categories: deterministic and stochastic methods. The most common methods of the first category are the so-called interval methods [11,12], where the set $S$ is divided iteratively into subregions, and subregions that do not contain the global solution are discarded using some pre-defined criteria. In stochastic methods, the search for the global minimum is guided by randomness. These methods offer no guarantee of finding the global minimum, but they constitute the vast majority of global optimization methods, mainly due to the simplicity of their implementation. Many stochastic methods have been proposed by various researchers, such as Controlled Random Search methods [13,14,15], Simulated Annealing methods [16,17,18], Differential Evolution methods [19,20], Particle Swarm Optimization methods [21,22,23], Ant Colony Optimization [24,25], Genetic Algorithms [26,27,28], etc. In addition, many works have appeared utilizing modern GPU processing units [29,30,31]. The reader can find complete surveys of metaheuristic algorithms in some recent works [32,33,34].
The Particle Swarm Optimization method is based on a population of candidate solutions, also called particles. These particles have two basic characteristics: their position $x$ at any given time and the velocity $u$ at which they move. The new position $x_i$ of every particle is defined as the sum of the current position $x_i$ and the associated velocity $u_i$: $x_i = x_i + u_i$, where the velocity $u_i$ is defined as a combination of the old velocity and the best located values $p_i$ and $p_{\text{best}}$:
$$u_i = \omega u_i + r_1 c_1 \left(p_i - x_i\right) + r_2 c_2 \left(p_{\text{best}} - x_i\right)$$
where $r_1$ and $r_2$ are random numbers and $c_1$, $c_2$ are user-defined constants. The value $\omega$ is called the inertia of the algorithm and is usually calculated using some efficient algorithm. The purpose of the method is to move the particles repetitively; their next position is calculated as a function not only of their current position but also of the best position they have visited in the past as well as the best position of the population. Various researchers have provided analytic reviews of the PSO process, such as the review of Jain et al. [35], where 52 papers are reviewed, and the work of Khare et al. [36], which provides a systematic review of the PSO algorithm along with its application to solar photovoltaic systems. The method has been successfully used in a variety of scientific and practical problems in areas such as physics [37,38], chemistry [39,40], medicine [41,42], economics [43], etc.
Many researchers have proposed a variety of modifications to the PSO method during the last few years; such methods aim at a more efficient calculation of the inertia parameter $\omega$ used in the velocity update [44,45,46]. The majority of these methods treat the inertia as a time-varying parameter, and hence there is a lack of diversity that could be used to better explore the search space of the objective function. This paper introduces a new scheme for the calculation of inertia that takes into account the rate at which the algorithm locates new local minima in the search space of the objective function.
Other modifications of PSO alter the calculation of the factors $c_1$, $c_2$, which are usually called acceleration factors: for example, the work of Ratnaweera et al. [47], where a time-varying mechanism is proposed for the calculation of these factors. In addition, the PSO algorithm has been combined with a mutation mechanism [48,49,50] in order to improve the diversity of the algorithm and to better explore the search space of the objective function. Engelbrecht [51] proposed a novel method for the initialization of the velocity vector, where the velocity is initialized randomly rather than with zero values. In addition, the relevant literature includes hybrid techniques [52,53,54], parallel techniques [55,56,57], methods that aim to improve the velocity calculation [58,59,60], etc.
The PSO method has also been integrated into other optimization techniques. For example, Bogdanova et al. [61] combined Grammatical Evolution [62] with swarm techniques such as PSO. Their work is divided into two phases: during the first phase, the PSO method is decomposed into a list of basic grammatical rules; during the second phase, instances of the PSO method are created using the previous rules. Another interesting combination with an optimization technique is the work of Pan et al. [63], who combined the PSO method with simulated annealing to improve its local search ability. In addition, Mughal et al. [64] used a hybrid technique of PSO and simulated annealing for photovoltaic cell parameter estimation. Similarly, the work of Lin et al. [65] utilized a hybrid method of PSO and differential evolution for numerical optimization problems. In addition, Epitropakis et al. [66] suggested a hybrid PSO method with differential evolution, where at each step of PSO, differential evolution is applied to the best particle of the population. Among others, Wang et al. proposed a new PSO method based on chaotic neighbourhood search [67].
The PSO method is an iterative process during which a series of particles evolve through a procedure that involves updating the position of the particles and computing their fitness, i.e., evaluating the objective function. Variations of PSO that aim to reach the global minimum in a shorter time may include the use of a local optimization method in each iteration of the algorithm. Of course, this process can be extremely time-consuming, and depending on the termination method used and the number of local searches performed, it may require a long execution time.
This text introduces three distinct modifications to the original method, which drastically improve the time required to find the global minimum by reducing the required number of function evaluations. These modifications cover a large part of the method: the velocity calculation, a new method of avoiding unnecessary local searches, and a new adaptive termination rule. The first modification enables the method to explore the search space more efficiently by calculating the inertia parameter with respect to the ability of the PSO to discover the local minima of the objective function. The second modification prevents the method from finding the same local minimum over and over again. This way, on the one hand, the algorithm will not be trapped in local minima, and on the other hand, calls to the objective function will not be wasted on searches that would almost certainly lead to minima that have already been found. The third modification introduces a novel termination rule based on asymptotic calculations. Termination of methods such as PSO is an important process, as efficient termination rules stop the method in time without wasting many calls to the objective function, while still guaranteeing the ability of the method to find the global minimum.
The proposed modifications were applied to a number of problems from the relevant literature and compared with similar techniques, and the results are presented in a separate section. The experiments performed showed that the proposed modifications significantly reduce the required execution time of the method by drastically reducing the number of function calls required to find the global minimum. In addition, these modifications can be applied either alone or in combination, with the same positive results. This means that they are quite general and can be included in other techniques based on PSO.
The rest of this article is divided as follows: in Section 2, the initial method and the proposed modifications are discussed, in Section 3, the experiments are listed, and finally, in Section 4, some conclusions and guidelines for future improvements are presented.

2. Method Description

The base algorithm of PSO and the proposed modifications are outlined in detail in the following subsections. The discussion starts with a new mechanism that calculates the velocity of the particles, continues with a discarding procedure used to minimize the number of local searches performed, and ends with a discussion of the new stopping rule proposed here.

2.1. The Base Algorithm

The base algorithm (Algorithm 1) is listed below, with the periodic application of a local search method in order to enhance the estimation of the global minimum: at every iteration, a decision is made with probability $p_l$ to apply a local search procedure to some of the particles. Usually, this probability is small, for example 0.05 (5%).
Algorithm 1 The base algorithm of PSO and the proposed modifications
    1.  Initialization.
            (a)   Set iter = 0 (iteration counter).
            (b)   Set the number of particles $m$.
            (c)   Set the maximum number of allowed iterations $\text{iter}_{\max}$.
            (d)   Set the local search rate $p_l \in [0, 1]$.
            (e)   Initialize randomly the positions of the $m$ particles $x_1, x_2, \ldots, x_m$, with $x_i \in S \subset \mathbb{R}^n$.
            (f)   Initialize randomly the velocities of the $m$ particles $u_1, u_2, \ldots, u_m$, with $u_i \in S \subset \mathbb{R}^n$.
            (g)   For $i = 1..m$ do $p_i = x_i$. The $p_i$ vectors are the best located positions for every particle $i$.
            (h)   Set $p_{\text{best}} = \arg\min_{i \in 1..m} f(x_i)$.
    2.  Termination Check. Check for termination. If the termination criteria are met, then stop.
    3.  For $i = 1..m$ Do
            (a)   Update the velocity $u_i$ as a function of $u_i$, $p_i$ and $p_{\text{best}}$.
            (b)   Update the position $x_i = x_i + u_i$.
            (c)   Draw a random number $r \in [0, 1]$. If $r \le p_l$, then $x_i = \text{LS}(x_i)$, where $\text{LS}(x)$ is a local search procedure.
            (d)   Evaluate the fitness of particle $i$, $f(x_i)$.
            (e)   If $f(x_i) \le f(p_i)$, then $p_i = x_i$.
    4.  End For
    5.  Set $p_{\text{best}} = \arg\min_{i \in 1..m} f(x_i)$.
    6.  Set iter = iter + 1.
    7.  Goto Step 2.
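To make the flow of Algorithm 1 concrete, the following is a minimal C++ sketch of the base PSO loop. It is an illustration written for this section, not the authors' ANSI C++ code: it minimizes a simple two-dimensional objective, uses a fixed inertia value, omits the local search of Step 3c, and terminates only on the iteration cap, so all names and parameter values are assumptions made for the example.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Example objective (not from the paper): the 2-D sphere function.
static double sphere(const std::vector<double> &x) {
    return x[0] * x[0] + x[1] * x[1];
}

int main() {
    const int n = 2, m = 100, iterMax = 100;
    const double a = -10.0, b = 10.0;           // search bounds S = [a,b]^n
    const double c1 = 1.0, c2 = 1.0, omega = 0.7;

    srand48(1);
    std::vector<std::vector<double>> x(m, std::vector<double>(n));
    std::vector<std::vector<double>> u(m, std::vector<double>(n));
    std::vector<std::vector<double>> p(m, std::vector<double>(n)); // best position per particle
    std::vector<double> fp(m);
    std::vector<double> pbest;
    double fbest = 1e300;

    // Step 1: random initialization of positions and velocities.
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            x[i][j] = a + (b - a) * drand48();
            u[i][j] = a + (b - a) * drand48();
        }
        p[i] = x[i];
        fp[i] = sphere(x[i]);
        if (fp[i] < fbest) { fbest = fp[i]; pbest = x[i]; }
    }

    // Steps 2-7: main loop (termination here is just the iteration cap).
    for (int iter = 0; iter < iterMax; iter++) {
        for (int i = 0; i < m; i++) {
            double r1 = drand48(), r2 = drand48();
            for (int j = 0; j < n; j++) {
                // Step 3a: u_i = w*u_i + r1*c1*(p_i - x_i) + r2*c2*(pbest - x_i)
                u[i][j] = omega * u[i][j]
                        + r1 * c1 * (p[i][j] - x[i][j])
                        + r2 * c2 * (pbest[j] - x[i][j]);
                // Step 3b: x_i = x_i + u_i
                x[i][j] += u[i][j];
            }
            // Step 3c (local search with probability p_l) is omitted in this sketch.
            double f = sphere(x[i]);                      // Step 3d: fitness evaluation
            if (f < fp[i]) { fp[i] = f; p[i] = x[i]; }    // Step 3e: personal best
            if (f < fbest) { fbest = f; pbest = x[i]; }   // Step 5: global best
        }
    }
    printf("best value %.6f at (%.4f, %.4f)\n", fbest, pbest[0], pbest[1]);
    return 0;
}
```

The three modifications described next change exactly these points of the loop: the inertia used in the velocity update, a guard before the local search step, and the termination check.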
The current work modifies the above algorithm in three key points:
1.
In Step 2, a new termination rule based on asymptotic considerations is introduced.
2.
In Step 3b, the algorithm calculates the new position of the particles. The proposed methodology modifies the position of the particles based on the average rate at which the algorithm discovers new minima.
3.
In Step 3c, a method based on gradient calculations will be used to prevent the PSO method from executing unnecessary local searches.

2.2. Velocity Calculation

At every iteration, the algorithm of Section 2.1 computes the new position $x_i$ from the old position $x_i$ and the associated velocity $u_i$ according to the scheme:
$$x_i = x_i + u_i$$
Typically, the velocity is calculated as a combination of the old velocity and the best located values $p_i$ and $p_{\text{best}}$, and may be defined as:
$$u_i = \omega u_i + r_1 c_1 \left(p_i - x_i\right) + r_2 c_2 \left(p_{\text{best}} - x_i\right)$$
where
1.
The parameters $r_1$, $r_2$ are random numbers with $r_1 \in [0, 1]$ and $r_2 \in [0, 1]$.
2.
The constant numbers $c_1$, $c_2$ are in the range $[1, 2]$.
3.
The variable $\omega$ is called inertia, with $\omega \in [0, 1]$.
The inertia was proposed by Shi and Eberhart [21]. They argued that high values of the inertia coefficient cause better exploration of the search area, while small values of this variable are used when we want to achieve a better local search around promising areas for the global minimum. The value of the inertia factor generally starts from large values and decreases with the iterations. This article proposes a new adaptive technique for the inertia parameter, and this mechanism is compared against three others from the relevant literature.

2.2.1. Random Inertia

The inertia calculation is proposed in [68], and it is defined as
$$\omega_{\text{iter}} = 0.5 + \frac{r}{2}$$
where r is a random number with r [ 0 , 1 ] . This inertia calculation will be called I1 in the tables with the experimental results.

2.2.2. Linear Time-Varying Inertia (Min Version)

This inertia scheme has been proposed and used in various studies [68,69,70], and it is defined as:
$$\omega_{\text{iter}} = \frac{\text{iter}_{\max} - \text{iter}}{\text{iter}_{\max}}\left(\omega_{\max} - \omega_{\min}\right) + \omega_{\min}$$
where $\omega_{\min}$ is the minimum value of inertia and $\omega_{\max}$ is the maximum value. This inertia calculation will be called I2 in the tables with the experimental results.

2.2.3. Linear Time-Varying Inertia (Max Version)

This method is proposed in [71,72], and it is defined as:
$$\omega_{\text{iter}} = \frac{\text{iter}_{\max} - \text{iter}}{\text{iter}_{\max}}\left(\omega_{\min} - \omega_{\max}\right) + \omega_{\max}$$
This inertia calculation will be called I3 in the tables with the experimental results.

2.2.4. Proposed Technique

This calculation of inertia takes into account the number of iterations in which the method manages to find a new minimum. In the first iterations, when the method has to explore the search area more, the inertia will be large. When the method should focus on a minimum, the inertia will decrease. For this reason, at every iteration $\text{iter}$, the quantity
$$\delta^{(\text{iter})} = \sum_{i=1}^{m} f_i^{(\text{iter})} - \sum_{i=1}^{m} f_i^{(\text{iter}-1)}$$
is measured. In the first steps of the algorithm, this quantity will change from iteration to iteration at a fast pace; at some point, it will no longer change at the same rate or will become zero.
Hence, we define a metric to model the changes in $\delta^{(\text{iter})}$ as
$$\zeta^{(\text{iter})} = \begin{cases} 1, & \delta^{(\text{iter})} = 0 \\ 0, & \text{otherwise} \end{cases}$$
Using this observation, two additional metrics are created, $S_\delta(\text{iter})$ and $C_\delta(\text{iter})$. The metric $S_\delta(\text{iter})$ is given by
$$S_\delta(\text{iter}) = \sum_{i=1}^{\text{iter}} \zeta^{(i)}$$
and the metric $C_\delta$ is given by:
$$C_\delta(\text{iter}) = \frac{S_\delta(\text{iter})}{\text{iter}}$$
The following definition for the inertia calculation is proposed:
$$\omega_{\text{iter}} = \omega_{\max} - C_\delta(\text{iter})\left(\omega_{\max} - \omega_{\min}\right)$$
This mechanism will be called IP in the tables with the experimental results.
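For illustration, the four inertia schedules can be written as small helper functions, as in the sketch below. These helpers can be dropped into the loop of the earlier PSO sketch in place of the fixed inertia. They are an assumed reading of this section rather than the authors' code: in particular, the bookkeeping of $\delta$, $\zeta$, $S_\delta$, $C_\delta$ and the final expression $\omega_{\text{iter}} = \omega_{\max} - C_\delta(\text{iter})(\omega_{\max} - \omega_{\min})$ for IP (Equation (12)) are the reconstruction used above.

```cpp
#include <cstdlib>
#include <vector>

// Random inertia I1.
double inertiaI1() {
    return 0.5 + drand48() / 2.0;
}

// Linear time-varying inertia, min version (I2).
double inertiaI2(int iter, int iterMax, double wMin, double wMax) {
    return (double)(iterMax - iter) / iterMax * (wMax - wMin) + wMin;
}

// Linear time-varying inertia, max version (I3).
double inertiaI3(int iter, int iterMax, double wMin, double wMax) {
    return (double)(iterMax - iter) / iterMax * (wMin - wMax) + wMax;
}

// Proposed schedule IP (assumed reconstruction): C_delta(iter) is the fraction
// of iterations so far in which delta(iter), the change of the summed particle
// fitness values, was zero, i.e. the swarm found nothing new.
struct IPState {
    double prevSum = 0.0;  // sum of particle fitness values at the previous iteration
    int    sDelta  = 0;    // S_delta(iter): count of iterations with delta = 0
    bool   first   = true;
};

double inertiaIP(IPState &s, const std::vector<double> &fitness,
                 int iter, double wMin, double wMax) {
    double sum = 0.0;
    for (double f : fitness) sum += f;
    if (!s.first && sum == s.prevSum) s.sDelta++;     // zeta(iter) = 1 when delta = 0
    s.prevSum = sum;
    s.first = false;
    double cDelta = (iter > 0) ? (double)s.sDelta / iter : 0.0; // C_delta(iter)
    return wMax - cDelta * (wMax - wMin);             // assumed form of Eq. (12)
}
```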

2.3. The Discarding Procedure

In each iteration, the method conditionally performs a series of local searches. However, these searches will often either lead to local minima that have already been found or locate values far from the global minimum. This means that much of the computing time would be wasted on actions that could have been avoided. In order to group points that would be led by a local search to the same local minimum, we introduce the concept of a cluster, which refers to a set of points that are believed, under some asymptotic considerations, to belong to the same region of attraction of the function. The region of attraction for a local minimum $x^*$ is defined as:
$$A_{x^*} = \left\{ x : x \in S \subset \mathbb{R}^n,\ \text{LS}(x) = x^* \right\}$$
where $\text{LS}(x)$ is a local search procedure that starts from a given point $x$ and terminates when a local minimum is discovered. The discarding procedure suggested here prevents the method from starting a local search from a point $x$ if that point belongs to the same region of attraction as other, already processed points. This procedure is composed of two major parts:
1.
The first part is the so-called typical distance, a measure calculated after every local search, which is given by
$$r_C = \frac{1}{M} \sum_{i=1}^{M} \left\| x_i - x_i^L \right\|$$
where the local search procedure $\text{LS}(x)$ initiates from $x_i$ and $x_i^L$ is the outcome of $\text{LS}(x_i)$. This measure has also been used in [73]. If a point $x$ is close enough to an already discovered local minimum, then it is highly likely that the point belongs to the so-called region of attraction of that minimum.
2.
The second part is a check using the gradient values at a candidate starting point and at an already discovered local minimum. The function value $f(x)$ near some local minimum $z$ can be approximated using:
$$f(x) \approx f(z) + \frac{1}{2}\left(x - z\right)^T B \left(x - z\right)$$
where $B$ is the Hessian matrix at the minimum $z$. By taking gradients on both sides of Equation (15), we obtain:
$$\nabla f(x) \approx B\left(x - z\right)$$
Of course, Equation (16) holds for any other point $y$ near $z$:
$$\nabla f(y) \approx B\left(y - z\right)$$
By subtracting Equation (17) from Equation (16) and multiplying by $\left(x - y\right)^T$, we obtain:
$$\left(x - y\right)^T \left(\nabla f(x) - \nabla f(y)\right) \approx \left(x - y\right)^T B \left(x - y\right) > 0$$
Hence, a candidate start point $x$ can be rejected if
$$\left\| x - z \right\| \le r_C \quad \text{AND} \quad \left(x - z\right)^T \left(\nabla f(x) - \nabla f(z)\right) > 0$$
for any already discovered local minimum $z$.
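A minimal sketch of this discarding test, in the spirit of Equation (19), is given below. The helper names, the gradient callback and the way the located minima are stored are assumptions made for the example, not the paper's implementation.

```cpp
#include <cmath>
#include <vector>

// Euclidean distance between two points of the same dimension.
static double dist(const std::vector<double> &a, const std::vector<double> &b) {
    double s = 0.0;
    for (size_t j = 0; j < a.size(); j++) s += (a[j] - b[j]) * (a[j] - b[j]);
    return std::sqrt(s);
}

// Reject a candidate start point x (i.e. skip the local search) if it lies
// within the typical distance rC of an already located minimum z and the
// gradient condition (x - z)^T (grad f(x) - grad f(z)) > 0 of Eq. (19) holds.
bool rejectStart(const std::vector<double> &x,
                 const std::vector<std::vector<double>> &minima,    // located minima z
                 const std::vector<std::vector<double>> &gradAtMin, // grad f(z) for each minimum
                 double rC,
                 std::vector<double> (*grad)(const std::vector<double> &)) {
    std::vector<double> gx = grad(x);
    for (size_t k = 0; k < minima.size(); k++) {
        if (dist(x, minima[k]) > rC) continue;        // distance part of Eq. (19)
        double dot = 0.0;                             // (x - z)^T (grad f(x) - grad f(z))
        for (size_t j = 0; j < x.size(); j++)
            dot += (x[j] - minima[k][j]) * (gx[j] - gradAtMin[k][j]);
        if (dot > 0.0) return true;                   // x likely lies in A(z): skip local search
    }
    return false;
}
```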

2.4. Stopping Rule

A commonly used way to terminate a global optimization method is to use a maximum number of allowed iterations $\text{iter}_{\max}$, i.e., to stop when $\text{iter} \ge \text{iter}_{\max}$. Although it is a simple criterion, it is not an efficient one: if $\text{iter}_{\max}$ is too small, the algorithm will terminate before locating the global optimum, and if $\text{iter}_{\max}$ is too high, the optimization algorithm will spend computation time on unnecessary function calls. In this paper, a new termination rule is proposed for the PSO process, and it is compared against two other rules used in various optimization methods.

2.4.1. Ali’s Stopping Method

A method is proposed in the work of Ali and Kaelo [74], where at every generation the measure
$$\alpha = f_{\max} - f_{\min}$$
is calculated. Here $f_{\max}$ stands for the maximum function value of the population and $f_{\min}$ represents the lowest function value of the population. The method terminates when
$$\alpha \le \epsilon$$
where $\epsilon$ is a predefined small positive value, for example $\epsilon = 10^{-3}$.

2.4.2. Double Box Method

The second method utilized was initially proposed in [75]. In this method, we denote by $\sigma^{(\text{iter})}$ the variance of $f_{\min}$ calculated at iteration $\text{iter}$. If the algorithm cannot locate a new lower value for $f_{\min}$ for a number of iterations, then the global minimum has probably already been located, and the algorithm should terminate when
$$\sigma^{(\text{iter})} \le \frac{\sigma^{\left(\text{iter}_{\text{last}}\right)}}{2}$$
where $\text{iter}_{\text{last}}$ stands for the last iteration in which a new lower value for $f_{\min}$ was discovered.

2.4.3. Proposed Technique

In the proposed termination technique, at each iteration $k$, the difference between the current best value and the previous best value is measured, i.e., the difference $f_{\min}^{(k)} - f_{\min}^{(k-1)}$. If this difference remains zero for a predefined number of iterations $k_{\max}$, then the method terminates.
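The three stopping rules of this section can be expressed as small predicates over the history of the best and worst fitness values, as in the following sketch. These are illustrative readings of the rules (the variance bookkeeping for the double box rule and the counter for the proposed rule are left to the caller), not the authors' code.

```cpp
// Ali's rule (Equation (21)): stop when fmax - fmin <= eps.
bool stopAli(double fmax, double fmin, double eps) {
    return (fmax - fmin) <= eps;
}

// Double box rule (Equation (22)): stop when the variance of fmin drops to
// half of its value at the last iteration where fmin improved.
bool stopDoubleBox(double sigmaNow, double sigmaAtLastImprovement) {
    return sigmaNow <= sigmaAtLastImprovement / 2.0;
}

// Proposed rule: stop when fmin(k) - fmin(k-1) has been zero for kmax
// iterations in a row.
struct ProposedStop {
    double lastFmin  = 0.0;
    int    unchanged = 0;
    bool   first     = true;
    bool check(double fmin, int kmax) {
        if (!first && fmin == lastFmin) unchanged++;
        else unchanged = 0;
        lastFmin = fmin;
        first = false;
        return unchanged >= kmax;
    }
};
```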

3. Experiments

To measure the effect of the proposed modifications on the original method, a series of experiments was performed on test functions from the relevant literature [76,77] that have been widely used by various researchers [74,78,79,80]. The experiments evaluated the effect of the new inertia calculation, the criterion for avoiding unnecessary local searches, and the new termination rule. The results are recorded in separate tables, so that the effect of each modification can be assessed separately.

3.1. Test Functions

The definitions of the test functions used are given below; a short code sketch of two of them, written for illustration, follows the list.
  • Bf1 (Bohachevsky 1) function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right) - \frac{4}{10}\cos\left(4\pi x_2\right) + \frac{7}{10}$$
    with $x \in [-100, 100]^2$.
  • Bf2 (Bohachevsky 2) function:
    $$f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos\left(3\pi x_1\right)\cos\left(4\pi x_2\right) + \frac{3}{10}$$
    with $x \in [-50, 50]^2$.
  • Branin function:
    $$f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos\left(x_1\right) + 10$$
    with $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$.
  • CM function:
    $$f(x) = \sum_{i=1}^{n} x_i^2 - \frac{1}{10}\sum_{i=1}^{n} \cos\left(5\pi x_i\right)$$
    where $x \in [-1, 1]^n$. In the current experiments, we used $n = 4$.
  • Camel function:
    $$f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4, \quad x \in [-5, 5]^2$$
  • Easom function:
    $$f(x) = -\cos\left(x_1\right)\cos\left(x_2\right)\exp\left(-\left(x_2 - \pi\right)^2 - \left(x_1 - \pi\right)^2\right)$$
    with $x \in [-100, 100]^2$.
  • Exponential function, defined as:
    $$f(x) = -\exp\left(-0.5\sum_{i=1}^{n} x_i^2\right), \quad -1 \le x_i \le 1$$
    In the current experiments, we used this function with $n = 2, 4, 8, 16, 32$.
  • Goldstein and Price function:
    $$f(x) = \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + \left(2x_1 - 3x_2\right)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$$
    with $x \in [-2, 2]^2$.
  • Griewank 2 function:
    $$f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2} \frac{\cos\left(x_i\right)}{\sqrt{i}}, \quad x \in [-100, 100]^2$$
  • Gkls function: $f(x) = \text{Gkls}(x, n, w)$ is a function with $w$ local minima, described in [81], with $x \in [-1, 1]^n$ and $n$ a positive integer between 2 and 100. The value of the global minimum is $-1$, and in our experiments we used $n = 2, 3$ and $w = 50, 100$.
  • Hansen function:
    $$f(x) = \sum_{i=1}^{5} i\cos\left[\left(i - 1\right)x_1 + i\right]\sum_{j=1}^{5} j\cos\left[\left(j + 1\right)x_2 + j\right], \quad x \in [-10, 10]^2$$
  • Hartman 3 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^3$ and
    $$a = \begin{pmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.3689 & 0.117 & 0.2673 \\ 0.4699 & 0.4387 & 0.747 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{pmatrix}$$
  • Hartman 6 function:
    $$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$$
    with $x \in [0, 1]^6$ and
    $$a = \begin{pmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{pmatrix}, \quad p = \begin{pmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{pmatrix}$$
  • Potential function. The molecular conformation corresponding to the global minimum of the energy of $N$ atoms interacting via the Lennard–Jones potential [82] is used as a test function here, and it is defined by:
    $$V_{LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$
    For our experiments, we used $N = 3, 4, 5$.
  • Rastrigin function:
    $$f(x) = x_1^2 + x_2^2 - \cos\left(18x_1\right) - \cos\left(18x_2\right), \quad x \in [-1, 1]^2$$
  • Rosenbrock function:
    $$f(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right], \quad -30 \le x_i \le 30$$
    In our experiments, we used the values $n = 4, 8, 16$.
  • Shekel 7 function:
    $$f(x) = -\sum_{i=1}^{7} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 3 & 5 & 3 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{pmatrix}$$
  • Shekel 5 function:
    $$f(x) = -\sum_{i=1}^{5} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{pmatrix}$$
  • Shekel 10 function:
    $$f(x) = -\sum_{i=1}^{10} \frac{1}{\left(x - a_i\right)\left(x - a_i\right)^T + c_i}$$
    with $x \in [0, 10]^4$ and
    $$a = \begin{pmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{pmatrix}, \quad c = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.6 \end{pmatrix}$$
  • Sinusoidal function:
    $$f(x) = -\left[2.5\prod_{i=1}^{n} \sin\left(x_i - z\right) + \prod_{i=1}^{n} \sin\left(5\left(x_i - z\right)\right)\right], \quad 0 \le x_i \le \pi$$
    The case of $n = 4, 8, 16, 32$ and $z = \frac{\pi}{6}$ was used in the experimental results.
  • Test2N function:
    $$f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right), \quad x_i \in [-5, 5]$$
    The function has $2^n$ local minima in the specified range, and in our experiments we used $n = 4, 5, 6, 7$.
  • Test30N function:
    $$f(x) = \frac{1}{10}\sin^2\left(3\pi x_1\right)\sum_{i=2}^{n-1}\left[\left(x_i - 1\right)^2\left(1 + \sin^2\left(3\pi x_{i+1}\right)\right)\right] + \left(x_n - 1\right)^2\left(1 + \sin^2\left(2\pi x_n\right)\right)$$
    with $x \in [-10, 10]^n$, with $30^n$ local minima in the search space. For our experiments, we used $n = 3, 4$.
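As a small illustration of how such benchmarks look in code, the following sketch implements two of the functions above as plain C++ routines. It is written for reference under the formulas as listed here, not taken from the paper's test harness.

```cpp
#include <cmath>
#include <vector>

// Rastrigin function as listed above, x in [-1,1]^2.
double rastrigin(const std::vector<double> &x) {
    return x[0] * x[0] + x[1] * x[1] - std::cos(18.0 * x[0]) - std::cos(18.0 * x[1]);
}

// Rosenbrock function as listed above, x in [-30,30]^n.
double rosenbrock(const std::vector<double> &x) {
    double s = 0.0;
    for (size_t i = 0; i + 1 < x.size(); i++) {
        double t = x[i + 1] - x[i] * x[i];
        s += 100.0 * t * t + (x[i] - 1.0) * (x[i] - 1.0);
    }
    return s;
}
```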

3.2. Experimental Setup

All the experiments were performed 30 times, using a different seed for the random number generator each time. The code was implemented in ANSI C++, and the well-known function drand48() was used to produce random numbers. The local search method used was the BFGS method [83]. All the parameters used in the conducted experiments are listed in Table 1.
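The experimental protocol of 30 seeded runs can be sketched as below. The driver function runPso() is a hypothetical placeholder standing in for the optimizer, not a routine from the paper's code.

```cpp
#include <cstdio>
#include <cstdlib>

// Placeholder for the optimizer driver: returns the best value found in a run.
static double runPso() {
    return drand48();  // stand-in value; a real driver would run the PSO loop
}

int main() {
    for (long seed = 1; seed <= 30; seed++) {
        srand48(seed);                 // a different random stream per run
        double best = runPso();
        printf("run %2ld: best = %f\n", seed, best);  // values averaged in the tables
    }
    return 0;
}
```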

3.3. Experimental Results

For every stopping rule, two tables are listed: the first contains experiments with the relevant stopping rule without the gradient discarding procedure, and the second contains experiments with the gradient discarding procedure enabled. The numbers in the table cells stand for average function calls. The fraction in parentheses denotes the fraction of runs in which the global optimum was found; the absence of such a fraction means that the global minimum was discovered in every run (100% success). The experimental results using the stopping rule of Equation (21) are listed in Table 2 and Table 3. The experimental results for the double box stopping rule of Equation (22) are listed in Table 4 and Table 5. Finally, for the proposed stopping rule, the results are listed in Table 6 and Table 7. In addition, the boxplots for the proposed stopping rules without and with the gradient check of Equation (19) are illustrated in Figure 1 and Figure 2, respectively. The above results lead to a number of observations:
1.
The PSO method is a robust method, and this is evident by the high success rate in finding the global minimum, although the number of particles used was relatively low (100).
2.
The proposed inertia calculation method as defined in Equation (12) achieves a significant reduction in the number of calls, between 11% and 25% depending on the termination criterion used. However, the presence of the gradient check mechanism of Equation (19) nullifies any gain of the method, as the rejection criterion significantly reduces the number of calls regardless of the inertia calculation mechanism used.
3.
The local optimization avoidance mechanism of the gradient check drastically reduces the required number of calls for each termination criterion while maintaining the success rate of the method at extremely high levels.
4.
The proposed termination criterion is significantly superior to the other two with which the comparison was made. In addition, if the termination criterion is combined with the mechanism for avoiding local optimizations, then the gain in the number of calls grows even more.
To show the effect of the proposed termination method, an additional experiment was performed in which the dimension of the sinusoidal problem was increased from 2 to 32, and in each case, all three termination techniques were tested. The result of this experiment is graphically represented in Figure 3. This graph shows that the double box method is significantly superior to the Ali method, but the newly proposed method further reduces the required number of function calls. In addition, the effect of the application of the gradient-based rejection mechanism of Equation (19) is illustrated graphically in Figure 4, where the proposed termination rule is applied on a series of test functions with and without the gradient check.

4. Conclusions

In the current work, three new modifications of the PSO method for locating the global minimum of continuous and differentiable functions were presented. The first modification alters the velocity calculation in an attempt to cause large changes in velocity when the method is in its early stages and constantly finds new local minima, and small velocity changes when the method should be centered around a promising area of the global minimum. The second modification limits the number of local searches performed by the method through an asymptotic criterion based on derivative computations. The third modification introduces a new termination criterion based on the observation that, from some iteration onwards, the method will no longer be able to detect a new minimum, and therefore its termination should be considered. All of these modifications have low computational requirements.
The proposed modifications were applied to the PSO method either one at a time or all together in combination. The purpose of the method is to find the global minimum of continuous functions using the smallest possible number of function calls. The experimental results showed that the modifications significantly reduce the number of function calls even when not used in combination, which means that they can also be used individually and in other variations of PSO. The reduction in the number of function calls reaches up to 80%. Moreover, the modifications did not reduce the ability of the PSO to find the global minimum of the objective function. Finally, the first modification reduces the number of required calls, but only when the criterion for avoiding unnecessary local searches is not present.

Author Contributions

I.G.T. and V.C. conceived of the idea and methodology and supervised the technical part regarding the software. I.G.T. conducted the experiments, employing several datasets, and provided the comparative experiments. V.C. performed the statistical analysis and prepared the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The experiments of this research work were performed at the high-performance computing system established at Knowledge and Intelligent Computing Laboratory, Department of Informatics and Telecommunications, University of Ioannina, acquired with the project “Educational Laboratory equipment of TEI of Epirus” with MIS 5007094 funded by the Operational Programme “Epirus” 2014–2020, by ERDF and national funds.

Conflicts of Interest

All authors declare that they have no conflict of interest.

References

  1. Yang, L.; Robin, D.; Sannibale, F.; Steier, C.; Wan, W. Global optimization of an accelerator lattice using multiobjective genetic algorithms. Nucl. Instrum. Methods Phys. Res. Accel. Spectrom. Detect. Assoc. Equip. 2009, 609, 50–57. [Google Scholar] [CrossRef]
  2. Iuliano, E. Global optimization of benchmark aerodynamic cases using physics-based surrogate models. Aerosp. Sci. Technol. 2017, 67, 273–286. [Google Scholar] [CrossRef]
  3. Schneider, P.I.; Santiago, X.G.; Soltwisch, V.; Hammerschmidt, M.; Burger, S.; Rockstuhl, C. Benchmarking Five Global Optimization Approaches for Nano-optical Shape Optimization and Parameter Reconstruction. ACS Photonics 2019, 6, 2726–2733. [Google Scholar] [CrossRef]
  4. Heiles, S.; Johnston, R.L. Global optimization of clusters using electronic structure methods. Int. J. Quantum Chem. 2013, 113, 2091–2109. [Google Scholar] [CrossRef]
  5. Shin, W.H.; Kim, J.K.; Kim, D.S.; Seok, C. GalaxyDock2: Protein-ligand docking using beta-complex and global optimization. J. Comput. Chem. 2013, 34, 2647–2656. [Google Scholar] [CrossRef] [PubMed]
  6. Marques, J.M.C.; Pereira, F.B.; Llanio-Trujillo, J.L.; Abreu, P.E.; Albertí, M.; Aguilar, A.; Bartolomei, F.P.F.M. A global optimization perspective on molecular clusters. Phil. Trans. R. Soc. A 2017, 375, 20160198. [Google Scholar] [CrossRef]
  7. Aguilar-Rivera, R.; Valenzuela-Rendón, M.; Rodríguez-Ortiz, J.J. Genetic algorithms and Darwinian approaches in financial applications: A survey. Expert Syst. Appl. 2015, 42, 7684–7697. [Google Scholar] [CrossRef]
  8. Hosseinnezhad, V.; Babaei, E. Economic load dispatch using θ-PSO. Int. J. Electr. Power Energy Syst. 2013, 49, 160–169. [Google Scholar] [CrossRef]
  9. Lee, E.K. Large-Scale Optimization-Based Classification Models in Medicine and Biology. Ann. Biomed. Eng. 2007, 35, 1095–1109. [Google Scholar] [CrossRef] [Green Version]
  10. Boutros, P.; Ewing, A.; Ellrott, K. Global optimization of somatic variant identification in cancer genomes with a global community challenge. Nat. Genet. 2014, 46, 318–319. [Google Scholar] [CrossRef] [Green Version]
  11. Wolfe, M.A. Interval methods for global optimization. Appl. Math. Comput. 1996, 75, 179–206. [Google Scholar]
  12. Reinking, J. GNSS-SNR water level estimation using global optimization based on interval analysis. J. Geod. 2016, 6, 80–92. [Google Scholar] [CrossRef]
  13. Price, W.L. Global optimization by controlled random search. J. Optim. Theory Appl. 1983, 40, 333–348. [Google Scholar] [CrossRef]
  14. Gupta, R.; Chandan, M. Use of “Controlled Random Search Technique for Global Optimization” in Animal Diet Problem. Int. Emerg. Technol. Adv. Eng. 2013, 3, 284–287. [Google Scholar]
  15. Charilogis, V.; Tsoulos, I.; Tzallas, A.; Anastasopoulos, N. An Improved Controlled Random Search Method. Symmetry 2021, 13, 1981. [Google Scholar] [CrossRef]
  16. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  17. Tavares, R.S.; Martins, T.C.; Tsuzuki, M.S.G. Simulated annealing with adaptive neighborhood: A case study in off-line robot path planning. Expert Syst. Appl. 2011, 38, 2951–2965. [Google Scholar] [CrossRef]
  18. Geng, X.; Chen, Z.; Yang, W.; Shi, D.; Zhao, K. Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search. Appl. Soft Comput. 2011, 11, 3680–3689. [Google Scholar] [CrossRef]
  19. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  20. Liu, J.; Lampinen, J. A Fuzzy Adaptive Differential Evolution Algorithm. Soft Comput. 2005, 9, 448–462. [Google Scholar] [CrossRef]
  21. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  22. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  23. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  24. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  25. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173. [Google Scholar] [CrossRef] [Green Version]
  26. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989. [Google Scholar]
  27. Hamblin, S. On the practical usage of genetic algorithms in ecology and evolution. Methods Ecol. Evol. 2013, 4, 184–194. [Google Scholar] [CrossRef]
  28. Grady, S.A.; Hussaini, M.Y.; Abdullah, M.M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270. [Google Scholar] [CrossRef]
  29. Zhou, Y.; Tan, Y. GPU-based parallel particle swarm optimization. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 1493–1500. [Google Scholar]
  30. Dawson, L.; Stewart, I. Improving Ant Colony Optimization performance on the GPU using CUDA. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1901–1908. [Google Scholar]
  31. Barkalov, K.; Gergel, V. Parallel global optimization on GPU. J. Glob. Optim. 2016, 66, 3–20. [Google Scholar] [CrossRef]
  32. Lepagnot, I.B.J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar]
  33. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  34. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef] [Green Version]
  35. Jain, N.K.; Nangia, U.; Jain, J. A Review of Particle Swarm Optimization. J. Inst. Eng. India Ser. B 2018, 99, 407–411. [Google Scholar] [CrossRef]
  36. Khare, l.; Rangnekar, S. A review of particle swarm optimization and its applications in Solar Photovoltaic system. Appl. Soft Comput. 2013, 13, 2997–3306. [Google Scholar] [CrossRef]
  37. Meneses, A.A.D.; Dornellas, M.; Schirru, M.R. Particle Swarm Optimization applied to the nuclear reload problem of a Pressurized Water Reactor. Progress Nucl. Energy 2009, 51, 319–326. [Google Scholar] [CrossRef]
  38. Shaw, R.; Srivastava, S. Particle swarm optimization: A new tool to invert geophysical data. Geophysics 2007, 72, F75–F83. [Google Scholar] [CrossRef]
  39. Ourique, C.O.; Biscaia, E.C.; Pinto, J.C. The use of particle swarm optimization for dynamical analysis in chemical processes. Comput. Chem. Eng. 2002, 26, 1783–1793. [Google Scholar] [CrossRef]
  40. Fang, H.; Zhou, J.; Wang, Z. Hybrid method integrating machine learning and particle swarm optimization for smart chemical process operations. Front. Chem. Sci. Eng. 2022, 16, 274–287. [Google Scholar] [CrossRef]
  41. Wachowiak, M.P.; Smolikova, R.; Zheng, Y.J.M.; Zurada, A.S.E. An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 289–301. [Google Scholar] [CrossRef]
  42. Marinakis, Y.; Marinaki, M.; Dounias, G. Particle swarm optimization for pap-smear diagnosis. Expert Syst. Appl. 2008, 35, 1645–1656. [Google Scholar] [CrossRef]
  43. Park, J.; Jeong, Y.; Shin, J.; Lee, K. An Improved Particle Swarm Optimization for Nonconvex Economic Dispatch Problems. IEEE Trans. Power Syst. 2010, 25, 156–166. [Google Scholar] [CrossRef]
  44. Clerc, M. The swarm and the queen: Towards a deterministic and adaptive particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1951–1957. [Google Scholar]
  45. Juan, H.; Laihang, Y.; Kaiqi, Z. Enhanced Self-Adaptive Search Capability Particle Swarm Optimization. In Proceedings of the 2008 Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 28 November 2008; pp. 49–55. [Google Scholar]
  46. Hou, Z.X. Wiener model identification based on adaptive particle swarm optimization. In Proceedings of the 2008 International Conference on Machine Learning and Cybernetics, Kunming, China, 10 January 2008; pp. 1041–1045. [Google Scholar]
  47. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  48. Stacey, A.; Jancic, M.; Grundy, I. Particle swarm optimization with mutation. In Proceedings of the 2003 Congress on Evolutionary Computation, CEC ’03, Canberra, Australia, 12 December 2004; pp. 1425–1430. [Google Scholar]
  49. Pant, M.; Thangaraj, R.; Abraham, A. Particle Swarm Optimization Using Adaptive Mutation. In Proceedings of the 2008 19th International Workshop on Database and Expert Systems Applications, Turin, Italy, 1–5 September 2008; pp. 519–523. [Google Scholar]
  50. Higashi, N.; Iba, H. Particle swarm optimization with Gaussian mutation. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS’03 (Cat. No.03EX706), Indianapolis, IN, USA, 26–26 April 2003; pp. 72–79. [Google Scholar]
  51. Engelbrecht, A. Particle swarm optimization: Velocity initialization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  52. Liu, B.; Wang, L.; Jin, Y.H.; Tang, F.; Huang, D.X. Improved particle swarm optimization combined with chaos. Chaos Solitions Fract. 2005, 25, 1261–1271. [Google Scholar] [CrossRef]
  53. Shi, X.H.; Liang, Y.C.; Lee, H.P.; Lu, C.; Wang, L.M. An improved GA and a novel PSO-GA based hybrid algorithm. Inf. Proc. Lett. 2005, 93, 255–261. [Google Scholar] [CrossRef]
  54. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  55. Schutte, J.F.; Reinbolt, J.A.; Fregly, B.J.; Haftka, R.T.; George, A.D. Parallel global optimization with the particle swarm algorithm. Int. J. Numer. Meth. Eng. 2004, 61, 2296–2315. [Google Scholar] [CrossRef] [Green Version]
  56. Koh, B.; George, A.D.; Haftka, R.T.; Fregly, B.J. Parallel asynchronous particle swarm optimization. Int. J. Numer. Meth. Eng. 2006, 67, 578–595. [Google Scholar] [CrossRef]
  57. Venter, G.; Sobieszczanski-Sobieski, J. Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations. J. Aerosp. Comput. Inf. Commun. 2006, 3, 123–137. [Google Scholar] [CrossRef] [Green Version]
  58. Gaing, Z.L. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195. [Google Scholar] [CrossRef]
  59. Yang, X.; Yuan, J.; Yuan, J.; Mao, H. A modified particle swarm optimizer with dynamic adaptation. Appl. Math. Comput. 2007, 189, 1205–1213. [Google Scholar] [CrossRef]
  60. Jiang, Y.; Hu, T.; Huang, C.; Wu, X. An improved particle swarm optimization algorithm. Appl. Math. Comput. 2007, 193, 231–239. [Google Scholar] [CrossRef]
  61. Bogdanova, A.; Junior, J.P.; Aranha, C. Franken-Swarm: Grammatical Evolution for the Automatic Generation of Swarm-like Meta-Heuristics. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, New York, NY, USA, 15 July 2018; pp. 411–412. [Google Scholar]
  62. O’Neill, M.; Ryan, C. Grammatical evolution. IEEE Trans. Evol. Comput. 2001, 5, 349–358. [Google Scholar] [CrossRef] [Green Version]
  63. Pan, X.; Xue, L.; Lu, Y. Hybrid particle swarm optimization with simulated annealing. Multimed Tools Appl. 2019, 78, 29921–29936. [Google Scholar] [CrossRef]
  64. Mughal, M.A.; Ma, Q.; Xiao, C. Photovoltaic Cell Parameter Estimation Using Hybrid Particle Swarm Optimization and Simulated Annealing. Energies 2017, 10, 1213. [Google Scholar] [CrossRef] [Green Version]
  65. Lin, G.H.; Zhang, J.; Liu, Z.H. Hybrid particle swarm optimization with differential evolution for numerical and engineering optimization. Int. J. Autom. Comput. 2018, 15, 103–114. [Google Scholar] [CrossRef]
  66. Epitropakis, M.G.; Plagianakos, V.P.; Vrahatis, M.N. Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution: A hybrid approach. Inf. Sci. 2012, 216, 50–92. [Google Scholar] [CrossRef] [Green Version]
  67. Wang, W.; Wu, J.M.; Liu, J.H. A Particle Swarm Optimization Based on Chaotic Neighborhood Search to Avoid Premature Convergence. In Proceedings of the 2009 Third International Conference on Genetic and Evolutionary Computing, Washington, DC, USA, 14 October 2009; pp. 633–636. [Google Scholar]
  68. Eberhart, R.C.; Shi, Y.H. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001. [Google Scholar]
  69. Shi, Y.H.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999. [Google Scholar]
  70. Shi, Y.H.; Eberhart, R.C. Experimental study of particle swarm optimization. In Proceedings of the SCI2000 Conference, Orlando, FL, USA, 23–26 July 2000. [Google Scholar]
  72. Zheng, Y.; Ma, L.; Zhang, L.; Qian, J. On the convergence analysis and parameter selection in particle swarm optimization. In Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi’an, China, 5 November 2003. [Google Scholar]
  72. Zheng, Y.; Ma, L.; Zhang, L.; Qian, J. On the convergence analysis and param- eter selection in particle swarm optimization. In Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi’an, China, 5 November 2003. [Google Scholar]
  73. Tsoulos, I.G.; Lagaris, I.E. MinFinder: Locating all the local minima of a function. Comput. Phys. Commun. 2006, 174, 166–179. [Google Scholar] [CrossRef] [Green Version]
  74. Ali, M.M.; Kaelo, P. Improved particle swarm algorithms for global optimization. Appl. Math. Comput. 2008, 196, 578–593. [Google Scholar] [CrossRef]
  75. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607. [Google Scholar] [CrossRef]
  76. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A Numerical Evaluation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test Problems. J. Glob. Optim. 2005, 31, 635–672. [Google Scholar] [CrossRef]
  77. Floudas, C.A.; Pardalos, P.M.; Adjiman, C.; Esposoto, W.; Gümüs, Z.; Harding, S.; Klepeis, J.; Meyer, C.; Schweiger, C. Handbook of Test Problems in Local and Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1999. [Google Scholar]
  78. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  79. Siarry, P.; Berthiau, G.; François, D.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. 1997, 23, 209–228. [Google Scholar] [CrossRef]
  80. Tsoulos, I.G.; Lagaris, I.E. GenMin: An enhanced genetic algorithm for global optimization. Comput. Phys. Commun. 2008, 178, 843–851. [Google Scholar] [CrossRef]
  81. Gaviano, M.; Ksasov, D.E.; Lera, D.; Sergeyev, Y.D. Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. 2003, 29, 469–480. [Google Scholar] [CrossRef]
  82. Lennard-Jones, J.E. On the Determination of Molecular Fields. Proc. R. Soc. Lond. A 1924, 106, 463–477. [Google Scholar]
  83. Powell, M.J.D. A Tolerant Algorithm for Linearly Constrained Optimization Calculations. Math. Program. 1989, 45, 547–566. [Google Scholar] [CrossRef]
Figure 1. Standard deviation of function calls for the termination rules without the gradient check.
Figure 2. Standard deviation of function calls for the termination rules with the gradient check.
Figure 3. Experiments with the SINU function for a series of problem dimensions from $n = 2$ to $n = 32$.
Figure 4. Experiments with the proposed stopping rule with the gradient check and without the gradient check.
Table 1. Values for the experimental parameters.

Parameter | Value
$m$ | 100
$\text{iter}_{\max}$ | 100
$p_l$ | 0.05
$c_1$ | 1.0
$c_2$ | 1.0
$\omega_{\min}$ | 0.4
$\omega_{\max}$ | 0.9
$\epsilon$ | 0.001
$k_{\max}$ | 15
Table 2. Experiments with the Ali stopping rule without gradient check.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 24,929 | 22,874 | 18,739 | 22,088
BF2 | 24,043 | 22,254 | 17,172 | 20,743
BRANIN | 17,691 | 16,205 | 13,397 | 12,471
CM4 | 20,117 | 22,568 | 26,867 | 14,941
CAMEL | 19,474 | 17,813 | 14,461 | 13,492
EASOM | 13,327 | 13,106 | 9969 | 9212
EXP2 | 6339 | 8243 | 7853 | 3501
EXP4 | 7816 | 10,066 | 10,900 | 4458
EXP8 | 8667 | 10,937 | 13,126 | 4761
EXP16 | 8748 | 11,402 | 15,754 | 5098
EXP32 | 9567 | 12,323 | 18,189 | 5471
GKLS250 | 10,907 | 12,562 | 9673 | 8552
GKLS2100 | 12,960 | 13,403 | 9930 | 9541
GKLS350 | 15,410 (0.97) | 14,722 | 10,542 | 9298
GKLS3100 | 16,639 | 14,495 | 10,412 | 13,075 (0.97)
GOLDSTEIN | 20,437 | 22,877 | 16,410 | 8935
GRIEWANK2 | 27,620 | 24,230 | 18,473 | 20,133
HANSEN | 21,513 | 20,279 | 16,326 | 15,046
HARTMAN3 | 16,233 | 17,152 | 12,305 | 6511
HARTMAN6 | 47,038 | 48,947 | 46,852 | 23,431
POTENTIAL3 | 31,684 | 32,175 | 36,930 | 24,463
POTENTIAL4 | 184,602 | 181,231 | 168,962 | 129,267
POTENTIAL5 | 74,508 | 70,519 | 76,890 | 54,042
RASTRIGIN | 23,574 | 20,865 | 15,596 | 16,198
ROSENBROCK4 | 145,178 | 161,136 | 160,341 | 129,891
ROSENBROCK8 | 95,290 | 97,035 | 96,687 | 80,408
ROSENBROCK16 | 118,614 | 116,454 | 115,122 | 97,004
SHEKEL5 | 27,458 | 27,088 | 25,927 | 18,036
SHEKEL7 | 27,521 | 27,271 | 25,967 | 18,805
SHEKEL10 | 29,699 (0.97) | 28,082 | 25,511 | 20,823
TEST2N4 | 26,740 | 27,050 | 22,905 | 19,495
TEST2N5 | 20,243 (0.97) | 20,290 | 17,729 | 16,024 (0.97)
TEST2N6 | 33,118 | 33,366 | 30,118 | 25,235 (0.93)
TEST2N7 | 23,266 (0.90) | 22,804 | 21,294 | 18,218 (0.90)
SINU4 | 17,035 | 20,487 | 18,971 | 11,079
SINU8 | 22,827 | 27,176 | 27,732 | 12,379
SINU16 | 31,055 | 35,998 | 42,984 | 15,692
SINU32 | 44,736 (0.97) | 51,624 | 82,114 | 25,991
TEST30N3 | 18,733 | 20,119 | 17,803 | 17,543
TEST30N4 | 20,348 | 22,191 | 20,679 | 20,085
TOTAL | 1,365,704 (0.99) | 1,399,419 | 1,367,612 | 1,001,436 (0.99)
Table 3. Experiments with the Ali stopping rule with the gradient check enabled.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 9709 | 8918 | 9531 | 10,932
BF2 | 10,196 | 9588 | 9089 | 10,730
BRANIN | 10,718 | 9597 | 9813 | 9501
CM4 | 6242 | 7503 | 12,531 | 6985
CAMEL | 10,422 | 9306 | 8491 | 9624
EASOM | 11,565 | 11,366 | 8497 | 8196
EXP2 | 3364 | 4443 | 4558 | 1926
EXP4 | 3558 | 4767 | 6023 | 2122
EXP8 | 3716 | 4787 | 7753 | 2186
EXP16 | 3784 | 5076 | 9696 | 2211
EXP32 | 4137 | 5698 | 11,379 | 2323
GKLS250 | 5917 | 7080 | 7517 | 5273
GKLS2100 | 6843 | 8261 | 7449 | 7296
GKLS350 | 6845 | 8076 | 7833 | 5881 (0.97)
GKLS3100 | 10,290 (0.93) | 10,187 | 7828 | 8066 (0.97)
GOLDSTEIN | 7977 | 9035 | 8505 | 4381
GRIEWANK2 | 12,567 | 12,222 | 12,000 | 12,037
HANSEN | 13,441 | 13,360 | 11,876 | 10,818
HARTMAN3 | 9758 | 9548 | 8123 | 4114
HARTMAN6 | 12,893 (0.90) | 12,889 (0.93) | 22,309 | 10,126 (0.93)
POTENTIAL3 | 17,912 | 16,420 | 21,904 | 15,969
POTENTIAL4 | 73,629 | 64,886 | 95,707 | 67,084
POTENTIAL5 | 40,585 | 35,239 | 47,807 | 33,661
RASTRIGIN | 11,305 | 10,101 | 11,141 | 10,046
ROSENBROCK4 | 18,115 | 21,919 | 38,407 | 43,093
ROSENBROCK8 | 12,869 | 14,192 | 31,923 | 25,405
ROSENBROCK16 | 12,096 | 13,023 | 38,486 | 23,165
SHEKEL5 | 10,347 | 11,466 | 14,446 | 11,802
SHEKEL7 | 11,511 | 10,521 | 13,944 | 10,399
SHEKEL10 | 10,834 | 10,842 | 13,785 | 12,253
TEST2N4 | 11,133 | 10,869 | 12,161 | 11,546
TEST2N5 | 10,923 (0.97) | 10,315 | 10,868 | 11,072 (0.97)
TEST2N6 | 12,331 (0.97) | 12,345 | 14,123 | 15,652
TEST2N7 | 11,342 (0.93) | 11,354 | 12,118 | 12,370 (0.93)
SINU4 | 7724 | 9845 | 12,294 | 6575
SINU8 | 8468 | 10,969 | 18,122 | 5382
SINU16 | 9334 | 13,213 | 31,589 | 9294
SINU32 | 13,290 | 17,502 (0.97) | 63,111 (0.97) | 14,959
TEST30N3 | 12,675 | 12,954 | 12,472 | 12,482
TEST30N4 | 13,964 | 14,903 | 14,999 | 15,389
TOTAL | 494,119 (0.99) | 504,585 (0.99) | 720,208 (0.99) | 502,326 (0.99)
Table 4. Experiments with the double box stopping rule without gradient check.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 6807 | 6866 | 6712 | 6757
BF2 | 6102 | 6150 | 6057 | 6207
BRANIN | 4551 | 4596 | 4470 | 4435
CM4 | 9814 | 10,101 | 9580 | 9342
CAMEL | 5055 | 5202 | 4897 | 5004
EASOM | 2975 | 2788 | 3014 | 3000
EXP2 | 4436 | 4541 | 4377 | 4543
EXP4 | 5443 | 5562 | 5331 | 5290
EXP8 | 5682 | 5754 | 5614 | 5504
EXP16 | 5707 | 5799 | 5638 | 5526
EXP32 | 5871 | 5797 | 5769 | 5659
GKLS250 | 3973 | 3906 | 3971 | 3921
GKLS2100 | 4009 | 3862 | 4073 | 3958
GKLS350 | 4558 | 3965 | 4525 (0.97) | 4266
GKLS3100 | 4701 (0.87) | 4266 | 4361 (0.90) | 4465
GOLDSTEIN | 10,259 | 9145 | 7945 | 7625
GRIEWANK2 | 5932 | 6194 | 5700 | 5915
HANSEN | 6386 | 6260 | 5688 (0.97) | 5874
HARTMAN3 | 4681 | 4694 | 4625 | 4675
HARTMAN6 | 14,245 | 14,091 | 13,793 | 13,825
POTENTIAL3 | 7219 | 7206 | 7532 | 7234
POTENTIAL4 | 38,053 | 37,924 | 38,421 | 38,897
POTENTIAL5 | 15,196 | 14,459 | 15,708 | 15,358
RASTRIGIN | 5915 | 5797 | 5944 (0.83) | 5844
ROSENBROCK4 | 91,574 | 101,485 | 117,512 | 76,367
ROSENBROCK8 | 66,648 | 61,974 | 58,831 | 41,591
ROSENBROCK16 | 62,029 | 54,550 | 63,406 | 55,800
SHEKEL5 | 9119 | 10,271 (0.97) | 8975 | 8538
SHEKEL7 | 9197 | 9831 | 9638 | 8732
SHEKEL10 | 10,417 | 10,449 | 9373 | 9721 (0.90)
TEST2N4 | 8512 | 8272 | 8884 | 7992
TEST2N5 | 5793 | 5704 | 5511 (0.90) | 5515
TEST2N6 | 9797 (0.93) | 9731 | 9657 (0.83) | 9666 (0.97)
TEST2N7 | 6435 (0.80) | 6659 | 6713 (0.77) | 5990 (0.87)
SINU4 | 7567 | 7774 | 7334 | 7063
SINU8 | 9882 | 10,083 | 9643 | 9331
SINU16 | 12,750 | 12,947 | 12,569 | 12,207
SINU32 | 20,164 | 21,112 | 19,684 (0.90) | 19,239
TEST30N3 | 6388 | 7942 | 5934 | 5855
TEST30N4 | 7611 | 9251 | 6385 | 8284
TOTAL | 531,453 (0.99) | 532,690 (0.99) | 543,794 (0.98) | 475,015 (0.99)
Table 5. Experiments with the double box stopping rule with the gradient check enabled.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 3296 | 3038 | 3063 | 3003
BF2 | 2922 | 2762 | 2863 | 2845
BRANIN | 2562 | 2641 | 2538 | 2564
CM4 | 3569 | 4277 | 2944 | 3230
CAMEL | 2646 | 2854 | 2467 | 2577
EASOM | 2490 | 2390 | 2479 | 2464
EXP2 | 2377 | 2489 | 2261 | 2315
EXP4 | 2456 | 2669 | 2282 | 2389
EXP8 | 2429 | 2671 | 2268 | 2385
EXP16 | 2358 | 2569 | 2227 | 2326
EXP32 | 2337 | 2533 | 2248 | 2312
GKLS250 | 2394 | 2535 | 2274 | 2321
GKLS2100 | 2384 | 2511 | 2267 | 2333
GKLS350 | 2492 | 2410 | 2212 | 2339
GKLS3100 | 2800 (0.90) | 2708 | 2648 (0.83) | 2571
GOLDSTEIN | 3161 | 3701 | 3166 | 2799
GRIEWANK2 | 3910 | 4520 | 3543 | 3641
HANSEN | 4409 | 4268 | 3755 | 4325
HARTMAN3 | 2423 | 2518 | 2374 | 2425
HARTMAN6 | 3913 | 4390 | 4199 (0.93) | 3700
POTENTIAL3 | 3951 | 4093 | 4482 | 4021
POTENTIAL4 | 18,555 | 19,559 | 19,506 | 18,691
POTENTIAL5 | 8771 | 8397 | 9677 | 9154
RASTRIGIN | 3111 | 3244 (0.97) | 3031 | 3146
ROSENBROCK4 | 9729 | 12,980 | 11,453 | 8587
ROSENBROCK8 | 4987 | 6738 | 4688 | 5512
ROSENBROCK16 | 4410 | 5939 | 4553 | 4002
SHEKEL5 | 3906 | 4095 | 3203 | 3495
SHEKEL7 | 3119 | 3965 | 2950 | 3528
SHEKEL10 | 3497 (0.97) | 4464 | 3142 (0.97) | 3353
TEST2N4 | 3468 (0.97) | 4059 | 4167 | 3881 (0.93)
TEST2N5 | 3318 (0.97) | 3786 | 2926 (0.90) | 3157 (0.97)
TEST2N6 | 4523 (0.93) | 5046 (0.93) | 5537 (0.83) | 4066 (0.97)
TEST2N7 | 3364 (0.80) | 4191 (0.90) | 4183 (0.80) | 3315 (0.87)
SINU4 | 3173 | 3807 | 2610 | 3004
SINU8 | 3055 | 3742 | 2592 | 2857
SINU16 | 3160 | 3746 | 3854 | 3290
SINU32 | 6613 | 7377 | 6327 | 6450
TEST30N3 | 5129 | 6367 | 5605 | 4451
TEST30N4 | 5649 | 6441 | 6074 | 5543
TOTAL | 162,816 (0.99) | 182,490 (0.99) | 164,638 (0.98) | 158,367 (0.99)
Table 6. Experiments with the proposed stopping rule without the gradient check.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 5305 | 5326 | 5240 | 5209
BF2 | 4760 | 4841 | 4750 | 4856
BRANIN | 3599 | 3703 | 3520 | 3443
CM4 | 7674 | 7835 | 7430 | 7057
CAMEL | 3996 | 4131 | 3864 | 3825
EASOM | 2370 | 2292 | 2425 | 2478
EXP2 | 3528 | 3613 | 3455 | 3675
EXP4 | 4292 | 4350 | 4178 | 4020
EXP8 | 4579 | 4632 | 4515 | 4278
EXP16 | 4576 | 4637 | 4505 | 4236
EXP32 | 4692 | 4771 | 4588 | 4296
GKLS250 | 3105 | 3065 | 3115 | 3024
GKLS2100 | 3193 | 3049 | 3193 | 3099
GKLS350 | 3308 | 3000 | 3560 | 3401
GKLS3100 | 2935 (0.97) | 2777 | 3158 (0.83) | 3088
GOLDSTEIN | 5534 | 5595 | 5332 | 5265
GRIEWANK2 | 4225 | 4332 | 4413 | 4489
HANSEN | 3865 | 3762 | 3824 | 3769
HARTMAN3 | 3724 | 3770 | 3714 | 3705
HARTMAN6 | 11,901 (0.97) | 11,829 (0.97) | 11,386 | 10,573
POTENTIAL3 | 5910 | 5850 | 6134 | 6501
POTENTIAL4 | 30,880 | 30,570 | 31,180 | 30,682
POTENTIAL5 | 12,021 | 11,643 | 12,521 | 13,475
RASTRIGIN | 4583 | 4595 | 4625 | 4360
ROSENBROCK4 | 58,299 | 61,266 | 55,759 | 35,517
ROSENBROCK8 | 31,778 | 30,888 | 30,989 | 22,055
ROSENBROCK16 | 32,719 | 30,503 | 30,957 | 24,478
SHEKEL5 | 6806 | 7047 (0.97) | 6636 | 6233
SHEKEL7 | 6807 | 7001 | 6626 | 6270
SHEKEL10 | 6774 | 6987 | 6583 | 6534
TEST2N4 | 6111 | 6127 | 5909 | 5893
TEST2N5 | 4455 (0.97) | 4558 | 4372 (0.97) | 4271 (0.93)
TEST2N6 | 7446 (0.97) | 7419 | 7218 (0.87) | 7122 (0.93)
TEST2N7 | 4992 (0.90) | 5057 | 4888 (0.83) | 4680 (0.90)
SINU4 | 5948 | 6043 | 5750 | 5229
SINU8 | 7965 | 8095 | 7778 | 6963
SINU16 | 10,121 | 10,252 | 9968 | 9219
SINU32 | 16,093 | 16,509 | 15,663 | 14,478
TEST30N3 | 4331 | 4953 | 4230 | 3957
TEST30N4 | 6290 | 6341 | 4288 | 4717
TOTAL | 361,490 (0.99) | 363,013 (0.99) | 352,239 (0.99) | 310,420 (0.99)
Table 7. Experiments with the proposed stopping rule with the gradient check enabled.

FUNCTION | I1 | I2 | I3 | IP
BF1 | 2276 | 2379 | 2266 | 2250
BF2 | 2157 | 2274 | 2098 | 2191
BRANIN | 2132 | 2178 | 2051 | 2170
CM4 | 3098 | 3717 | 2538 | 2791
CAMEL | 2198 | 2335 | 1974 | 2058
EASOM | 2007 | 2011 | 2031 | 2084
EXP2 | 1952 | 2030 | 1842 | 1861
EXP4 | 2046 | 2266 | 1877 | 1909
EXP8 | 1990 | 2240 | 1849 | 1879
EXP16 | 1944 | 2110 | 1828 | 1838
EXP32 | 1953 | 2126 | 1859 | 1867
GKLS250 | 1982 | 2079 | 1850 | 1900
GKLS2100 | 1983 | 2064 | 1859 | 1891
GKLS350 | 1882 | 1944 | 1793 | 1831
GKLS3100 | 1898 | 1909 (0.97) | 1850 (0.83) | 1833 (0.83)
GOLDSTEIN | 2523 | 2670 | 2110 | 2164
GRIEWANK2 | 2893 | 2885 | 2791 | 2681
HANSEN | 2766 | 2879 | 2731 | 2804
HARTMAN3 | 1988 | 2093 | 1949 | 2015
HARTMAN6 | 3366 | 3871 (0.97) | 2767 | 3133
POTENTIAL3 | 3312 | 3487 | 3613 | 3892
POTENTIAL4 | 15,392 | 16,390 | 16,223 | 17,497
POTENTIAL5 | 7109 | 7104 | 7732 | 8477
RASTRIGIN | 2591 | 2648 | 2474 | 2732
ROSENBROCK4 | 8023 | 12,179 | 4433 | 6025
ROSENBROCK8 | 4376 | 6081 | 2721 | 3314
ROSENBROCK16 | 3643 | 4954 | 2746 | 2485
SHEKEL5 | 2849 | 3296 | 2274 | 2390
SHEKEL7 | 2696 | 3294 | 2262 | 2283
SHEKEL10 | 2624 | 3251 | 2338 (0.93) | 2359
TEST2N4 | 2536 | 2637 | 2427 | 2782
TEST2N5 | 2266 (0.97) | 2336 (0.97) | 2163 (0.90) | 2342 (0.90)
TEST2N6 | 2724 (0.93) | 2832 | 2694 (0.80) | 3133 (0.90)
TEST2N7 | 2283 (0.80) | 2370 | 2279 (0.80) | 2585 (0.90)
SINU4 | 2789 | 3245 | 2228 | 2436
SINU8 | 2601 | 3151 | 2233 | 2348
SINU16 | 2721 | 3086 | 2443 | 2624
SINU32 | 4652 | 5135 | 4086 | 4089
TEST30N3 | 3031 | 3349 | 3007 | 2562
TEST30N4 | 3747 | 3797 | 3250 | 3237
TOTAL | 126,999 (0.99) | 142,682 (0.99) | 115,539 (0.98) | 122,742 (0.99)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

