Article

An Enhanced Particle Swarm Optimization (PSO) Algorithm Employing Quasi-Random Numbers

by Shiva Kumar Kannan 1 and Urmila Diwekar 2,*
1 Ridge High School, Basking Ridge, Bernards, NJ 07920, USA
2 Vishwamitra Research Institute, Crystal Lake, IL 60012, USA
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(5), 195; https://doi.org/10.3390/a17050195
Submission received: 6 March 2024 / Revised: 26 April 2024 / Accepted: 30 April 2024 / Published: 6 May 2024
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems)

Abstract: This paper introduces an innovative Particle Swarm Optimization (PSO) algorithm incorporating Sobol and Halton random number samplings. It evaluates the enhanced PSO's performance against conventional PSO employing Monte Carlo random number samplings. The comparison involves assessing the algorithms across nine benchmark problems and the renowned Travelling Salesman Problem (TSP). The results reveal consistent enhancements achieved by the enhanced PSO utilizing Sobol/Halton samplings across the benchmark problems. Particularly noteworthy are the Sobol-based PSO's improvements in iteration counts and computational times for the benchmark problems. These findings underscore the efficacy of employing Sobol and Halton random number generation methods to enhance algorithm efficiency.

1. Introduction

Particle Swarm Optimization (PSO) is a metaheuristic algorithm for optimization problems. PSO was first introduced in 1995 by James Kennedy and Russell Eberhart [1]. The algorithm is based on the concept of social behavior, where particles (potential solutions) move towards the optimal solution through interactions with other particles in the search space. PSO has been widely used in various fields, including engineering, science, and finance, due to its simplicity, robustness, and efficiency. Despite its success, PSO suffers from several limitations. One of the main limitations is its slow convergence rate, which can be attributed to the premature convergence [2] of the particles towards local optima. This issue can be addressed by introducing efficient improvement techniques in PSO. Several enhancement ideas have been proposed in the past to improve the convergence rate of the PSO algorithm; they are summarized below.
First, the inertia weight technique was suggested by Yuhui Shi and Russell Eberhart [3]. The inertia weight is a well-known approach for enhancing the convergence speed of PSO: it controls the movement of particles in the search space, with the aim of maintaining a balance between exploration and exploitation. The inertia weight is updated at each iteration based on a predefined formula, which controls the speed and direction of particle movement. Various formulas have been proposed for updating the inertia weight, such as linear, nonlinear, and adaptive schemes; the choice of formula depends on the optimization problem and the PSO parameters.
Second, the concept of a mutation operator was proposed [4]. A mutation operator is a powerful tool for enhancing the diversity of the PSO population. It randomly modifies the position of a particle to generate a new solution in the search space, which can prevent premature convergence by introducing new solutions that may lead to better ones. The mutation operator can be applied at different stages of the PSO algorithm, such as before or after the velocity update. Similarly, a crossover operator has also been applied to the PSO algorithm [5,6,7]; it mixes the attributes of different solutions to improve exploration and achieve high diversity among the candidate solutions, thereby avoiding premature convergence.
Third, the opposition-based learning technique was suggested [8]. Opposition-based learning (OBL) uses the opposite of the current best solution to generate new solutions. The idea behind OBL is that the opposite of the best solution may represent a good direction for exploration in the search space. OBL can improve the diversity and convergence speed of PSO by generating new solutions that are different from those of the current population.
Fourth, hybridization with other metaheuristics has been proposed [9]. Hybridization is a common approach for improving the efficiency of PSO: the strengths of different metaheuristics are combined to overcome their weaknesses. For example, PSO can be combined with genetic algorithms (GA), simulated annealing (SA), or ant colony optimization (ACO). The hybridization approach can enhance the exploration and exploitation capabilities of PSO, leading to better solutions in less time.
Fifth, dynamic parameter tuning was presented [9].
The PSO parameters, such as the swarm size, maximum velocity, and acceleration coefficients, significantly impact the algorithm's performance. Dynamic parameter tuning is a technique that adjusts the PSO parameters based on the search history during the optimization process. The idea is to adapt the PSO parameters to the problem characteristics and the search progress to improve the convergence speed and solution quality. In conclusion, efficient improvement techniques in PSO can enhance the algorithm's convergence speed and solution quality. The approaches discussed in this paper [9], including the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning, can be used individually or in combination to address the limitations of PSO. Tareq M. Shami and co-workers [10] conducted a comprehensive survey on PSO. The survey discusses techniques such as varying the inertia weight and hybridizations, which are discussed above. The survey also states that the ability of PSO to be hybridized with other optimization algorithms has contributed to its popularity. Another technique described in the survey is velocity clamping [10], a technique introduced by Eberhart and Kennedy. Velocity clamping involves setting bounds on the values of the particle velocities in all dimensions. Another approach to improving the efficiency of the PSO algorithm discussed in that survey [10] is varying the controlling parameters, such as the varying inertia weight technique, in which the inertia weight changes throughout the optimization process, or acceleration coefficient techniques, in which the two constant controlling parameters of PSO other than the inertia are chosen in different ways to yield optimal solutions while evading premature convergence. Many other approaches have been discussed both in the survey and elsewhere. The choice of approach depends on the problem characteristics and the available computational resources. However, most of these approaches provide problem-dependent solution methods.
In this paper, we propose a new approach that replaces the random numbers used in PSO with quasi-random numbers [11,12,13], such as the Halton and Sobol sequences, while maintaining the k-dimensional uniformity of these quasi-random numbers. This not only provides a generalized approach applicable to any optimization problem, but it can also be used in conjunction with the earlier enhancement techniques, such as the inertia weight technique, mutation operator, opposition-based learning, hybridization with other metaheuristics, and dynamic parameter tuning. In this research, two enhanced versions of PSO (one using Sobol random numbers and the other using Halton random numbers) were proposed with the intention of speeding up the convergence of the standard PSO algorithm. To test the efficiency improvement of the two proposed enhancements of the standard PSO algorithm, the number of iterations taken to reach the optimum of the well-known cigar, ellipsoid, and paraboloid functions [14], along with the number of iterations taken to obtain an optimal path for the famous Travelling Salesman Problem (TSP) [15,16,17,18], were recorded.
Following this, the improvement in terms of the optimum of the objective function and the number of iterations needed to reach the global optimum was calculated for both the PSO enhanced with Sobol random number samplings and the PSO enhanced with Halton random number samplings, with respect to the standard PSO, which uses Monte Carlo random number samplings. All the results for each benchmark function and TSP unanimously show efficiency improvement due to the use of the Sobol and Halton sequences. Additionally, we noted that the more decision variables an optimization problem has, the greater the improvement due to the Sobol and Halton sequences. In conclusion, both enhancements of the standard PSO presented in this research, one utilizing Sobol random numbers and the other utilizing Halton random numbers, consistently show efficiency improvement and a better optimum, meaning that they have successfully increased the speed of convergence of the standard PSO algorithm.
In addition, we have also compared our enhanced PSO with the Salp Swarm (SALP) metaheuristic [19], a recent algorithmic approach developed to improve the convergence rate. Our enhancement is compared with SALP to see whether the approach of avoiding clusters in random number generation can make the enhanced PSO algorithm perform better than the SALP algorithm. The algorithms are compared on four of the benchmark problems.
The time-varying inertia factor was introduced to improve the convergence performance of the PSO [20,21]. The authors in [22] introduced a time-varying acceleration coefficient in addition to the time-varying inertia factor. They introduced a PSO concept called a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. For the velocity update in this PSO, only the social and cognitive parts were considered, and to avoid stagnation in the search space, a time-varying mutation step size was used as well.
An adaptive particle swarm optimization (APSO) that features better search efficiency than the classical particle swarm optimization (PSO) is presented in [23]. It was engineered to perform a global search over the entire search space with faster convergence speed. The APSO algorithm was carried out in two main steps. In the first step, the algorithm evaluated the population distribution and particle fitness based on which a real-time evolutionary state was estimated. This enabled the automatic control of inertia weight, acceleration coefficients, and other algorithmic parameters in real-time to improve the search efficiency and convergence speed in the subsequent step.
The multimodal PSO approach proposed in [24] addressed the issues associated with poor local search ability and the requirement of prior knowledge to initialize the PSO algorithm parameters. The authors in [24] proposed an optimizer based on a distance-based locally informed PSO variant. The algorithm eliminated the need to specify the niche parameters in the PSO and enhanced its fine-searching capabilities. To guide the search direction of the particles, each particle used local best information instead of the global best as in the conventional PSO, and the neighborhood of the particle was measured using the Euclidean distance to perform the local best search.
In [25], a hybrid algorithm that combined particle swarm optimization with simulated annealing behavior (SA-PSO) was proposed. The SA-PSO algorithm takes advantage of the good solution quality provided by the simulated annealing and fast searching ability inherent to the particle swarm optimization. It was concluded that the hybrid algorithm could have higher efficiency, better quality and faster convergence speed than conventional PSO variants.
An economic environmental hydrothermal scheduling problem classified as a multi-objective nonlinear constrained optimization was solved using PSO [26]. The algorithm adopted an elite archive set to conserve Pareto optimal solutions and provide multiple evolutionary directions for individuals, while neighborhood searching and chaotic mutation strategies enhance the search capability and diversity of the population. The PSO algorithm also incorporated a constraint handling scheme designed to adjust the constraint violation of hydro and thermal plants.
To avoid parameter selection and overcome the premature convergence problem in PSO optimization, an adaptive fuzzy particle swarm optimization based on a fuzzy inference system, which incorporated a variable neighborhood search strategy and hybrid evolution, was proposed in [27] and applied to the parameter estimation of nonlinear Hydro Turbine Regulation Systems. The results showed that the new algorithm's parameter error and objective function values were significantly smaller than those of the other algorithms. The estimated model could accurately reflect the dynamic characteristics of the real system, proving that the fuzzy PSO variant is an effective and efficient parameter estimation method.
Support vector machine (SVM)-based classifiers need accurate parameter estimation for high-accuracy classification, and in [28], an improved variant of the PSO is used to estimate the SVM parameters. Specifically, dynamic adjustment of inertia is proposed to optimize the parameters of the SVM. The computation of the inertia weight in the PSO proposed in [28] involves the nonlinear reduction in the inertia weight as the number of iterations increases. In particular, the introduction of random function inertia weight in the PSO avoids falling into the local extremum and, at the same time, increases diversity and the global searching ability during the optimization process.
In addition to the above-mentioned PSO algorithm comparisons, we compare the iteration efficiency of the enhanced PSO approach employing the Sobol sequence using the random inertia PSO variant for the benchmark problems.

2. Materials and Methods

Particle Swarm Optimization
There are countless examples of swarms in the real world, ranging from flocking birds to hunting wolves. When searching for nourishment, the individuals of these swarms (called particles in PSO terminology) begin with random exploration and then start gravitating towards the findings of other swarm individuals. While following signs of nourishment and searching randomly in space, these particles move towards their objective through knowledge of their own discoveries and the discoveries of the swarm. Similarly, in the PSO algorithm, proposed decision variable sets gravitate towards the optimal findings of themselves, each other, and the whole swarm together to achieve the globally optimal set of decision variables, yielding the optimal function value. The sets of decision variables are called position vectors. They are updated by vectors called velocity vectors (based on the physics principle that change in position is proportional to velocity), by randomness, and by attraction towards their personal best sets of decision variables and the unanimous global best set of decision variables found by the whole swarm. The global optimum of the function is achieved through gradual movement towards the optimal findings of the swarm. The following are the steps of the PSO algorithm.
  • Initialize Parameters:
    o Define the population size (number of particles), $N_p$;
    o Define the number of decision variables (dimension), $D$;
    o Define the maximum number of iterations, $T$;
    o Define the inertia weight, $\omega$;
    o Define the acceleration constants (cognitive and social), $c_1$ and $c_2$;
    o Initialize the position and velocity of each particle randomly within the search space.
The initially proposed set of solutions is grouped together to form the population matrix, which is written as follows:
$$\begin{bmatrix} x_0^{00} & \cdots & x_0^{0(D-1)} \\ \vdots & \ddots & \vdots \\ x_0^{(N_p-1)0} & \cdots & x_0^{(N_p-1)(D-1)} \end{bmatrix}$$
Each row in this matrix represents a potential decision variable vector intended to optimize the objective function. Each element in a row is the position with respect to a particular dimension of the particle that corresponds with that row. The subscript for each element is 0, as these values are the initial values in the matrix (0th iteration).
The velocity matrix can be represented as follows:
$$\begin{bmatrix} v_0^{00} & \cdots & v_0^{0(D-1)} \\ \vdots & \ddots & \vdots \\ v_0^{(N_p-1)0} & \cdots & v_0^{(N_p-1)(D-1)} \end{bmatrix}$$
This represents the velocity of each particle in all the dimensions. The velocity matrix changes the position matrix, which is inspired by the physics concept that displacement is proportional to velocity. In the notation used here, the superscript $i$ is the index of the particle, the superscript $j$ is the index of the dimension, and the subscript is the iteration index.
  • Set the best-known (personal best) position of each particle $i$, $P_0^{ij}$, to its initial position:
$$P_0^{ij} = x_0^{ij}$$
  • Evaluate Fitness:
Evaluate the fitness (objective function value) for each particle based on its current position.
  • Update the personal best position of each particle, $P_n^{ij}$, if its current fitness is better than its previous best fitness.
  • Update Global Best:
    o Determine the particle with the best fitness among all particles in the swarm.
    o Update the global best position, $G_n^{j}$, with the position of the particle with the best fitness.
  • Update Velocities and Positions:
For each particle, update its velocity and position based on the following formulae.
The velocity of each particle at iteration n is updated according to the equation given below.
$$v_{n+1}^{ij} = \omega \times v_n^{ij} + c_1 \times r_1 \times \left(P_n^{ij} - X_n^{ij}\right) + c_2 \times r_2 \times \left(G_n^{j} - X_n^{ij}\right) \quad (1)$$
where $r_1$ and $r_2$ are random numbers, so we have two random numbers per decision variable.
The corresponding position is updated according to the following equation:
$$x_{n+1}^{ij} = x_n^{ij} + v_{n+1}^{ij} \quad (2)$$
  • Check Stopping Criteria:
    o If the maximum number of iterations is reached or a satisfactory solution is found, stop the algorithm.
    o Otherwise, go back to step 2 and repeat the process.
  • Output:
Return the global best position as the solution to the optimization problem.
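To make the steps above concrete, the following is a minimal Python sketch of the PSO loop described in this section. It is written for illustration rather than taken from the authors' implementation; names and parameter values such as objective, n_particles, c1, and c2 are illustrative assumptions.

```python
# Minimal PSO sketch following the steps above (illustrative, not the authors' code).
import numpy as np

def pso(objective, dim, bounds, n_particles=30, max_iter=200,
        w=0.75, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialize positions and velocities randomly within the search space
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
    p_best = x.copy()                                   # personal best positions
    p_val = np.apply_along_axis(objective, 1, x)        # personal best fitness values
    g_best = p_best[np.argmin(p_val)].copy()            # global best position
    for _ in range(max_iter):
        # Two random numbers per decision variable, as in Equation (1)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)                      # position update, Equation (2)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < p_val                          # update personal bests
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)].copy()        # update global best
    return g_best, p_val.min()

# Example: 5-dimensional paraboloid benchmark on [-3, 3]^5
best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=5, bounds=(-3.0, 3.0))
```

The two uniform draws r1 and r2 per decision variable in this loop are exactly the numbers that the enhanced PSO described below replaces with quasi-random samples.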
Quasi-random sequence enhancements
In this work, we hypothesize that the success of PSO depends on the choice of appropriate random samples. At each iteration of PSO, two random numbers are generated for each decision variable, as shown in Equation (1). These random numbers are computer-generated random numbers and are called pseudorandom numbers or Monte Carlo random numbers. A sequence of random numbers must have two essential properties: uniformity, i.e., they are equally probable everywhere, and independence, i.e., the current value of a random variable has no relation with the previous values. Figure 1 shows two random variables generated using a computer (Monte Carlo or pseudorandom numbers) with 100 samples. As seen in the figure, for uniformity the points should be equally distributed, and that is not the case here. We need more samples to cover the points equally in that 2-dimensional space. This means more iterations of the algorithm. To circumvent this problem and to increase the efficiency of PSO, we present a construct based on quasi-random numbers. Some well-known quasi-random sequences are Halton, Hammersley, Sobol, Faure, Korobov, and Niederreiter [11,12,13]. The choice of an appropriate quasi-Monte Carlo sequence is a function of discrepancy. Discrepancy is a quantitative measure of the deviation of the sequence from the uniform distribution. Therefore, it is desirable to choose a low-discrepancy sequence. The Halton, SOBOL, and Hammersley sequences are examples of low-discrepancy sequences. Here, we work with the two sequences, Halton and SOBOL, as described below.
Halton Sequence Points:
The design of Halton points is given below. Any integer n can be written in radix-R notation (R is an integer) as follows:
$$n \equiv n_m n_{m-1} \cdots n_2 n_1 n_0 \quad (3)$$
$$n = n_0 + n_1 R + n_2 R^2 + \cdots + n_m R^m \quad (4)$$
where $m = [\log_R n] = [\ln n / \ln R]$ (the square brackets denote the integral part). A unique fraction between 0 and 1, called the inverse radix number, can be constructed by reversing the order of the digits of n around the decimal point as follows:
$$\varphi_R(n) = 0.\,n_0 n_1 n_2 \cdots n_m = n_0 R^{-1} + n_1 R^{-2} + \cdots + n_m R^{-(m+1)} \quad (5)$$
The Halton points on a k-dimensional cube are given by the following sequence:
$$z_k(n) = \left(\varphi_{R_1}(n), \varphi_{R_2}(n), \ldots, \varphi_{R_{k-1}}(n)\right), \quad n = 1, 2, \ldots, N+1 \quad (6)$$
where $R_1, R_2, \ldots, R_{k-1}$ are the first $k-1$ prime numbers. The Halton points are $x_k(n) = 1 - z_k(n)$.
Figure 2 shows 2-dimensional Halton points (100 samples), which exhibit better uniformity than the pseudorandom points in Figure 1.
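As an illustration of the construction above, the short sketch below implements the inverse radix (radical inverse) function $\varphi_R(n)$ and assembles k-dimensional points from the first primes. It is a minimal sketch of the Halton construction for this description, not the authors' code, and the helper names are assumptions.

```python
# Sketch of the Halton construction: the inverse radix number phi_R(n), Equation (5),
# and multi-dimensional points built from prime bases, Equation (6). Illustrative only.
def radical_inverse(n: int, base: int) -> float:
    """Return phi_base(n): the digits of n in the given base, mirrored about the point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton_point(n: int, primes=(2, 3, 5, 7, 11)) -> list[float]:
    """n-th point of the sequence; one prime base per dimension."""
    return [radical_inverse(n, p) for p in primes]

# First few 2-dimensional points (bases 2 and 3)
print([halton_point(n, primes=(2, 3)) for n in range(1, 6)])
```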
SOBOL Sequence Points:
Like many other quasi-random sequences, the SOBOL sequence starts from the Van der Corput sequence in base 2. To generate the Van der Corput sequence, consider the k-th point in the Sobol sequence; this integer k can be written as a linear combination of nonnegative powers of base 2 as follows:
$$k = a_0(k) + 2\,a_1(k) + 2^2 a_2(k) + \cdots + 2^r a_r(k) \quad (7)$$
where r is a large number.
Then, the k-th element in the Sobol sequence is given by the following equation:
$$x_k = \frac{1}{2}\,y_1(k) + \frac{1}{2^2}\,y_2(k) + \cdots + \frac{1}{2^r}\,y_r(k) \quad (8)$$
where the coefficients $y_i(k)$ can be obtained using the following expression:
$$\begin{bmatrix} y_1(k) \\ y_2(k) \\ \vdots \\ y_r(k) \end{bmatrix} = V \begin{bmatrix} a_0(k) \\ a_1(k) \\ \vdots \\ a_{r-1}(k) \end{bmatrix} \bmod 2 \quad (9)$$
where V is the generation matrix whose elements are 0 or 1; V is an identity matrix for the Van der Corput sequence.
The operation in Equation (9) can be represented as follows:
$$a_0(k)\,V_1 \oplus a_1(k)\,V_2 \oplus a_2(k)\,V_3 \oplus \cdots \oplus a_{r-1}(k)\,V_r \quad (10)$$
where $V_i$ is an element (column) of V and $\oplus$ denotes binary addition.
The calculation of the generation matrix V involves a primitive polynomial P of degree d, whose coefficients $A_1$ to $A_{d-1}$ are either 1 or 0, as given below:
$$P = X^d + A_1 X^{d-1} + \cdots + A_{d-1} X + 1 \quad (11)$$
Direction vectors $M_i$ are generated by the recursive equation given below for $i > d$, and the initial direction vectors, i.e., for $i \le d$, are generated by selecting an odd integer between 0 and $2^d$:
$$M_i = 2 A_1 M_{i-1} \oplus 2^2 A_2 M_{i-2} \oplus \cdots \oplus 2^{d-1} A_{d-1} M_{i-d+1} \oplus 2^d M_{i-d} \oplus M_{i-d} \quad (12)$$
Then, the generation matrix elements can be generated as shown below:
$$V_{ij} = \frac{M_i}{2^i} \quad (13)$$
Thus, the SOBOL sequence can be generated by generating the V matrix and the Van der Corput sequence in base 2. Figure 3 shows that the SOBOL points exhibit better uniformity than the pseudorandom numbers generated by the computer (Figure 1).
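For readers who prefer library support over hand-coding the generator matrices, the sketch below draws the three kinds of point sets compared in Figures 1 to 3 and computes their discrepancies. It assumes the scipy.stats.qmc module is available; this is an assumption about the reader's toolchain, not a statement about the implementation used in this paper.

```python
# Sketch: pseudorandom vs. Halton vs. SOBOL points in 2 dimensions (cf. Figures 1-3),
# using scipy.stats.qmc as an assumed, readily available low-discrepancy generator.
import numpy as np
from scipy.stats import qmc

n, dim = 100, 2
pseudo = np.random.default_rng(0).random((n, dim))   # Monte Carlo points (Figure 1)
halton = qmc.Halton(d=dim, seed=0).random(n)         # Halton points (Figure 2)
sobol = qmc.Sobol(d=dim, seed=0).random(n)           # SOBOL points (Figure 3)

# Lower discrepancy indicates better uniformity over the unit square
for name, pts in [("pseudorandom", pseudo), ("Halton", halton), ("SOBOL", sobol)]:
    print(name, qmc.discrepancy(pts))
```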
Figure 4 shows where the random numbers are used in the traditional PSO, and Figure 5 shows the corresponding enhanced PSO with Halton or SOBOL sampling. However, to maintain k-dimensional uniformity, the Halton and SOBOL points cannot be generated one at a time; they must be generated together as one whole sample covering all iterations.
It should be noted that all the other efficiency enhancements proposed in the literature are directly applicable to our new algorithms, as we are only changing the random numbers.
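A sketch of how this enhancement can be wired into the PSO loop is shown below: the entire quasi-random sample (two numbers per decision variable, for every particle and every iteration) is drawn up front so that k-dimensional uniformity is preserved, and individual rows are then consumed in place of r1 and r2 in Equation (1). The helper name quasi_random_stream and the use of scipy.stats.qmc are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the enhancement in Figure 5: generate the whole quasi-random sample together,
# then slice one row per particle per iteration instead of calling a pseudorandom generator.
from scipy.stats import qmc

def quasi_random_stream(dim, n_particles, max_iter, engine="sobol", seed=0):
    k = 2 * dim                                   # r1 and r2 for every decision variable
    n = n_particles * max_iter                    # one point per particle per iteration
    sampler = (qmc.Sobol(d=k, seed=seed) if engine == "sobol"
               else qmc.Halton(d=k, seed=seed))
    sample = sampler.random(n)                    # entire sample generated together
    # sample[t, i] holds the k numbers for particle i at iteration t
    return sample.reshape(max_iter, n_particles, k)

stream = quasi_random_stream(dim=5, n_particles=30, max_iter=200)
r1, r2 = stream[0, 0, :5], stream[0, 0, 5:]       # numbers used in Equation (1)
```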

3. Results

Test Cases
To perform the simulations and to estimate the standard deviations and computation times, all of the following algorithms were run on a Windows 10 laptop equipped with an Intel Core i7 processor and 16 GB of RAM. The simulations were repeated 50 times and averaged to estimate the elapsed time. The elapsed time of each run was measured from the start to the end of the algorithm execution, and these times were averaged across the runs to obtain the average elapsed time. The mean numbers of iterations were rounded to give the iteration counts reported in the following tables. All PSO variants implemented the mutation scheme that used uniformly generated random numbers.
To test the efficiency improvements of the enhanced PSO suggested in this paper, both mixed and continuous versions of three well-known benchmark functions, namely the Cigar, Ellipsoid, and Paraboloid functions, along with the famous Travelling Salesman Problem, were taken as test cases (Table 1). Three algorithms, consisting of one PSO code with no enhancement (using Monte Carlo random number samplings), a second PSO code that uses Sobol random number samplings, and a third PSO code that uses Halton random number samplings, were each run with 5, 10, 15, and 20 decision variables to optimize both the mixed and continuous versions of the three benchmark functions and to solve the TSP. The number of iterations taken by each of the three algorithms to reach the global optimum was recorded.
Traveling Salesman Problem Optimization Procedure
The Traveling Salesman Problem is a discrete combinatorial optimization problem [12,13,14,15]. The locations of many cities are given, and an optimal order in which the cities are traversed is calculated. A position vector is the suggested order of cities. The velocity vector is a sequence of two-element tuples in which each tuple consists of two indices of elements to be swapped to make the path more optimal. For example, if there are five cities labeled with indices {1, 2, 3, 4, 5}, the population matrix could consist of different suggested orders in which they are to be traversed, such as {2, 4, 3, 5, 1} and {5, 4, 3, 1, 2}. The velocity vector for the first suggested order of cities could be {(1, 2), (3, 2), (4, 5)}, while for the second, it could be {(5, 1)}. In this case, the first element would be swapped with the second, the third with the second after that, and the fifth with the fourth after that, in the first suggested sequence of cities. For the second, the fifth and first elements are to be swapped.
ω, α, and β are pre-determined constants used in this algorithm. A random number is generated for each swap in the previous velocity vector. For each random number that is less than ω, the corresponding swap is included in the new velocity vector. Similarly, this process is carried out for the second and third terms with α and β instead of ω. These three random numbers are generated together to maintain the k-dimensional uniformity of the quasi-random number sequence when Halton or SOBOL is used. Through the usage of this swapping method, the optimal order in which the cities must be visited is attained.
For the i-th particle at iteration index n, the velocity is updated using the following equation:
$$v_{n+1}^{i} = \{\omega\, v_n^{i}\} \oplus \{\alpha\,(P_n^{i} - X_n^{i})\} \oplus \{\beta\,(G_n - X_n^{i})\}$$
Hence, $v_{n+1}^{i}$ is a set of swaps.
Here, the ⊕ is the merging operator, which merges sequences of swaps into a new swap sequence.
$$\alpha < 1,\ \beta < 1,\ \text{and}\ \omega < 1$$
For the i-th particle at iteration index n, we can apply the new updated velocity $v_{n+1}^{i}$ to the current position $X_n^{i}$ (which is a node sequence, or node list, in TSP) and obtain the updated position $X_{n+1}^{i}$:
$$X_{n+1}^{i} = X_n^{i} \oplus \mathrm{ApplySwaps}\left(v_{n+1}^{i}\right)$$
For $\omega\, v_n$, if $v_n$ is a vector of L elements to begin with, then L random numbers are generated, and for each random number that is less than ω, the corresponding swap in $v_n$ is used. $P_n^{i} - X_n^{i}$ is the swap sequence needed to move from $X_n^{i}$ to $P_n^{i}$. For example, if $P_n^{i}$ is {3, 4, 5, 2, 1} and $X_n^{i}$ is {5, 3, 4, 1, 2}, the set of swaps needed to move from $X_n^{i}$ to $P_n^{i}$ is {(1, 3), (3, 2), (4, 5)} (assuming that indices start from 1).
$\alpha\,(P_n^{i} - X_n^{i})$ is the set of swaps selected from the swap sequence $P_n^{i} - X_n^{i}$ of length N based on the N generated random numbers that are less than α.
During initialization, for each particle i, $X_0^{i}$ is set to a random ordering of the cities, whose IDs are the city node indices. If there are N cities to be visited, then $X_0^{i}$ is a vector of length N.
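The following is an illustrative sketch of one swap-based velocity and position update for a single TSP particle. It assumes 0-based indices and a simple selection-sort style derivation of the swap sequence, which may differ in detail from the paper's procedure; in the enhanced PSO, the uniform draws used to filter the swaps would come from the pre-generated Sobol or Halton stream rather than from Python's random module. The helper names are assumptions, not the paper's code.

```python
# Sketch of the swap-based TSP update: a velocity is a list of index swaps, scaled by
# keeping each swap whose random number falls below the corresponding coefficient.
import random

def swaps_between(current, target):
    """Swap sequence that transforms `current` into `target` (selection-sort style)."""
    work, swaps = list(current), []
    for i in range(len(work)):
        if work[i] != target[i]:
            j = work.index(target[i])
            work[i], work[j] = work[j], work[i]
            swaps.append((i, j))
    return swaps

def scale_swaps(swaps, coeff, rng=random):
    """Keep each swap whose random number is below the coefficient (omega, alpha, or beta)."""
    return [s for s in swaps if rng.random() < coeff]

def apply_swaps(route, swaps):
    route = list(route)
    for i, j in swaps:
        route[i], route[j] = route[j], route[i]
    return route

# One velocity/position update for a single particle (omega, alpha, beta < 1)
route  = [5, 3, 4, 1, 2]            # current position X
p_best = [3, 4, 5, 2, 1]            # personal best P
g_best = [3, 4, 5, 1, 2]            # global best G (illustrative)
v_prev = [(0, 1), (2, 3)]           # previous velocity: a list of swaps
v_next = (scale_swaps(v_prev, 0.7)
          + scale_swaps(swaps_between(route, p_best), 0.6)
          + scale_swaps(swaps_between(route, g_best), 0.6))
route = apply_swaps(route, v_next)  # updated position
```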
The following are tables comparing the results produced by Monte Carlo random number sampling with Sobol random number sampling and Halton random number sampling.
Sobol vs. Monte Carlo Random Numbers
We compare the performance of using SOBOL versus conventional random number approaches based on PSO for the three functions (continuous and mixed variable form) in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. In addition, we also compare the SALP algorithm for the continuous ellipse and paraboloid and mixed variable ellipse and paraboloid function optimizations.
Halton vs. Monte Carlo:
The results of Halton-based enhanced PSO are presented in Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13.
It can be seen from the above tables (Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13) that the quasi-random sequence-based enhanced PSO demonstrates superior performance compared to a conventional random number-based PSO for both continuous and mixed variable problems. The enhancement increases generally with a higher number of variables specifically for continuous variable problems. SOBOL works better for mixed variable functions than Halton, but Halton shows better convergence than SOBOL when considering most of the continuous variable benchmark problems.
Constant Inertia vs. Random Inertia:
As suggested earlier, the adaptive parameter enhancements proposed in new variants in the literature provide improvements. We can also replace the random numbers used in these variants with the quasi-random numbers suggested in our new algorithm, as the two improvements are independent of each other. We illustrate this by comparing the constant inertia algorithm, which is the traditional approach to PSO, with the random inertia weight proposed as an enhancement (Table 14, Table 15, Table 16 and Table 17).
In this section, we compare the performance of the PSO with a constant inertia weight and the PSO with a random inertia weight. The random inertia weight has been used in references [20,21] to improve iteration efficiency. The traditional constant-inertia PSO and our proposed enhanced PSO with quasi-random numbers (also with constant inertia) are compared with the random-inertia variants of both algorithms. We use Sobol random numbers for the enhanced PSO and its random-inertia variant.
The velocity of each particle in the PSO algorithm at iteration n is updated according to the following equation:
$$v_{n+1}^{ij} = \omega\, v_n^{ij} + c_1 r_1 \left(P_n^{ij} - X_n^{ij}\right) + c_2 r_2 \left(G_n^{j} - X_n^{ij}\right)$$
In the above equation, ω is the inertia weight. For constant inertia, ω was set to 0.75.
For the random inertia weight, ω was set to $0.5 + \mathrm{rand}()/2$ as per [21]. The comparison results are provided in the tables below. The improvement percentage in these tables is the comparison between the random inertia weight-aided Sobol variant and the random inertia conventional PSO variant.
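A minimal sketch of the two inertia settings compared here is given below, using the constant value 0.75 and the random inertia weight of 0.5 + rand()/2 as reconstructed above; the function name is an illustrative assumption.

```python
# Minimal sketch of the two inertia-weight settings compared in Tables 14-17.
import random

def inertia_weight(mode: str = "constant") -> float:
    if mode == "constant":
        return 0.75                      # constant inertia used in this comparison
    return 0.5 + random.random() / 2.0   # random inertia weight, 0.5 + rand()/2 [21]

print(inertia_weight("constant"), inertia_weight("random"))
```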
From the above results, we conclude that the Sobol-based PSO variant performs better in terms of iteration efficiency with respect to the conventional PSO as well as random inertia PSO.
TSP Optimization Results
Table 18 and Table 19 provide the TSP optimization comparisons using the PSO variants. TSP is a completely discrete problem, and SOBOL performs better, as can be seen from Figure 6, Figure 7 and Figure 8, as well as Table 18 and Table 19. For TSP, the particles were initialized using standard uniformly distributed random numbers for both the standard and the enhanced PSOs. The velocity and position updates, which are basically sets of swaps, are controlled using the Sobol- and Halton-generated random numbers in the case of the enhanced PSO.

4. Discussion

We have augmented the PSO algorithm by integrating Sobol and Halton random number samplings to achieve superior results. Our rationale behind this enhancement is rooted in the properties of Sobol and Halton numbers, which are quasi-random sequences. These numbers are generated in such a manner that they are uniformly distributed across the multidimensional space, thus mitigating clustering or bias in particle movement. Consequently, this facilitates more extensive exploration, thereby increasing the likelihood of avoiding local optima and accelerating the discovery of the global optimum, as a more significant portion of the space can be explored when biases or clusters are circumvented.
The outcomes indicate that the enhancement applied to the standard PSO through the utilization of quasi-random numbers has consistently improved the number of iterations required for the algorithm to produce the optimal function value across all three benchmark functions. The efficiency gains for continuous functions are more pronounced than those for mixed variable functions.
Additionally, including more decision variables in the problem correlates with more significant improvements in general. As evident from the data, the application of Sobol or Halton sequences to the standard PSO algorithm demonstrates efficiency improvements, suggesting the potential benefits of these sequences to researchers in various optimization fields. Notably, this technique is algorithm-agnostic, as indicated in the introduction, and can thus be employed with other optimization algorithms such as the Genetic Algorithm, Ant Colony Optimization (ACO) algorithm, and other established optimization algorithms.

5. Conclusions

Particle Swarm Optimization harnesses swarm behavior effectively for function optimization but is often hampered by slow convergence. We consistently improved efficiency through two distinct enhancements to the Particle Swarm Optimization algorithm. We proposed the use of quasi-random number sequences to update decision variable values and their rates of change in each iteration to enhance the PSO algorithm. Since quasi-random number sequences do not alter the fundamental algorithm, this concept can be integrated into modified versions of the PSO algorithm suggested previously, as discussed in the introduction. To evaluate whether quasi-random number sequences in PSO yield efficiency improvements, we tested them using continuous and mixed variable Cigar, Ellipsoid, and Paraboloid functions, along with an example of the Traveling Salesman Problem. The results demonstrate that both sequences of quasi-random numbers used to enhance the standard PSO improved efficiency.

Author Contributions

Conceptualization, U.D.; methodology, U.D.; software, S.K.K.; validation, S.K.K.; formal analysis, S.K.K. and U.D.; investigation, S.K.K. and U.D.; resources, U.D.; data curation, S.K.K.; writing—original draft preparation, S.K.K.; writing—review and editing, U.D.; visualization, S.K.K. and U.D.; supervision, U.D.; project administration, U.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data can be obtained from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle Swarm Optimization: A Comprehensive Survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  2. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  3. Bansal, J.C.; Singh, P.K.; Saraswat, M.; Verma, A.; Jadon, S.S.; Abraham, A. Inertia Weight strategies in Particle Swarm Optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 633–640. [Google Scholar] [CrossRef]
  4. Li, N.; Qin, Y.-Q.; Sun, D.-B.; Zou, T. Particle swarm optimization with mutation operator. In Proceedings of the 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.04EX826), Shanghai, China, 26–29 August 2004; Volume 4, pp. 2251–2256. [Google Scholar] [CrossRef]
  5. Clerc, M.; Kennedy, J. The particle swarm–Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef]
  6. Engelbrecht, A.P. Particle swarm optimization with crossover: A review and empirical analysis. Artif. Intell. Rev. 2016, 45, 131–165. [Google Scholar] [CrossRef]
  7. Chen, Y.; Li, L.; Xiao, J.; Yang, Y.; Liang, J.; Li, T. Particle swarm optimizer with crossover operation. Eng. Appl. Artif. Intell. 2018, 70, 159–169. [Google Scholar] [CrossRef]
  8. Zhou, Z.; Li, F.; Abawajy, J.H.; Gao, C. Improved PSO Algorithm Integrated with Opposition-Based Learning and Tentative Perception in Networked Data Centers. IEEE Access 2020, 8, 55872–55880. [Google Scholar] [CrossRef]
  9. Mandal, B.; Roy, P.K.; Mandal, S. Hybridization of Particle Swarm Optimization with Biogeography-Based Optimization for Reactive Power and Voltage Control. In Proceedings of the 2014 Fourth International Conference of Emerging Applications of Information Technology, Kolkata, India, 19–21 December 2014; pp. 34–39. [Google Scholar] [CrossRef]
  10. Cai, Y.; Yang, S.X. An improved PSO-based approach with dynamic parameter tuning for cooperative target searching of multi-robots. In Proceedings of the 2014 World Automation Congress (WAC), Waikoloa, HI, USA, 3–7 August 2014; pp. 616–621. [Google Scholar] [CrossRef]
  11. Morokoff, W.J.; Caflisch, R.E. Quasi-Random Sequences and Their Discrepancies. SIAM J. Sci. Comput. 1994, 15, 1251–1279. [Google Scholar] [CrossRef]
  12. Niederreiter, H. Random Number Generation and Quasi-Monte Carlo Methods; CBMS-NSF Regional Conference Series in Applied Mathematics 63; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992. [Google Scholar]
  13. Sobol, I.M. Uniformly distributed sequences with an additional uniform property. USSR J. Comput. Math. Math. Phys. Engl. Transl. 1976, 16, 1332–1337. [Google Scholar] [CrossRef]
  14. Diwekar, U.M.; Gebreslassie, B.H. Efficient Ant Colony Optimization (EACO) Algorithm for Deterministic Optimization. Int. J. Swarm Intel. Evol. Comput. 2016, 5, 131. [Google Scholar] [CrossRef]
  15. Hadia, S.K.; Joshi, A.H.; Patel, C.K.; Kosta, Y.P. Solving City Routing Issue with Particle Swarm Optimization. Int. J. Comput. Appl. 2012, 47, 15. [Google Scholar]
  16. Hadia, S.K.; Joshi, A.H.; Salman, C.A.; Ahmad, I.; Al-Madani, S. Particle swarm optimization for task assignment problem. Microprocess. Microsyst. 2002, 26, 363–371. [Google Scholar]
  17. Zhong, W.H.; Zhang, J.; Chen, W.N. A novel discrete particle swarm optimization to solve traveling salesman problem. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Singapore, 25–28 September 2007; pp. 3283–3287. [Google Scholar]
  18. Wang, K.P.; Huang, L.; Zhou, C.G.; Pang, W. Particle swarm optimization for solving traveling salesman problem. In Proceedings of the 2003 International Conference on Machine Learning and Cybernetic, Xi’an, China, 5 November 2003; Volume 3, pp. 1583–1585. [Google Scholar]
  19. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  20. Umapathy, P.; Venkataseshaiah, C.; Arumugam, M.S. Particle Swarm Optimization with Various Inertia Weight Variants for Optimal Power Flow Solution. Discret. Dyn. Nat. Soc. 2010, 2010, 462145. [Google Scholar] [CrossRef]
  21. Nikabadi, A.; Ebadzadeh, M. Particle swarm optimization algorithms with adaptive Inertia Weight: A survey of the state of the art and a Novel method. IEEE J. Evol. Comput. 2008, 14, 305–313. [Google Scholar]
  22. Ratnaweera, A.; Halgamuge, S.K.; Watson, H. Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [Google Scholar] [CrossRef]
  23. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.-H. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybernetics. Part B Cybern. 2009, 39, 1362–1381. [Google Scholar] [CrossRef] [PubMed]
  24. Qu, B.Y.; Suganthan, P.N.; Das, S. A Distance-Based Locally Informed Particle Swarm Model for Multimodal Optimization. IEEE Trans. Evol. Comput. 2013, 17, 387–402. [Google Scholar] [CrossRef]
  25. Shieh, H.-L.; Kuo, C.-C.; Chiang, C.-M. Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification. Appl. Math. Comput. 2011, 218, 4365–4383. [Google Scholar] [CrossRef]
  26. Feng, Z.-K.; Niu, W.-J.; Cheng, C.-T. Multi-objective quantum-behaved particle swarm optimization for economic environmental hydrothermal energy system scheduling. Energy 2017, 131, 165–178. [Google Scholar] [CrossRef]
  27. Liu, D.; Xiao, Z.; Li, H.; Liu, D.; Hu, X.; Malik, O.P. Accurate Parameter Estimation of a Hydro-Turbine Regulation System Using Adaptive Fuzzy Particle Swarm Optimization. Energies 2019, 12, 3903. [Google Scholar] [CrossRef]
  28. Wang, J.; Wang, X.; Li, X.; Yi, J. A Hybrid Particle Swarm Optimization Algorithm with Dynamic Adjustment of Inertia Weight Based on a New Feature Selection Method to Optimize SVM Parameters. Entropy 2023, 25, 531. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Two-dimensional pseudorandom numbers (100 points).
Figure 2. Two-dimensional Halton Points (100).
Figure 3. Two-dimensional SOBOL points (100).
Figure 4. Traditional PSO flowchart.
Figure 5. Enhanced PSO flowchart.
Figure 6. TSP with 10 cities—convergence plot.
Figure 7. TSP with 15 cities—convergence plot.
Figure 8. TSP with 20 cities—convergence plot.
Table 1. Test cases for algorithm testing [11].

| No. | Function | Formula | Range |
| --- | --- | --- | --- |
| Continuous Optimization Problems | | | |
| 1 | Cigar | $f(x) = x_1^2 + 10^4 \sum_{i=2}^{N_C} x_i^2$ | $[-3, 3]^{N_C}$ |
| 2 | Parabolic | $f(x) = \sum_{i=1}^{N_C} x_i^2$ | $[-3, 3]^{N_C}$ |
| 3 | Ellipsoid | $f(x) = \sum_{i=1}^{N_C} 5^{(i-1)/(n-1)} x_i^2$ | $[-3, 3]^{N_C}$ |
| Mixed-Variable Combinatorial Optimization Problems | | | |
| 4 | Cigar | $f(x, y) = x_1^2 + 10^4 \sum_{i=2}^{N_C} x_i^2 + y_1^2 + 10^4 \sum_{i=2}^{N_D} y_i^2$ | $[-3, 3]^{N_M}$ |
| 5 | Parabolic | $f(x, y) = \sum_{i=1}^{N_C} x_i^2 + \sum_{i=1}^{N_D} y_i^2$ | $[-3, 3]^{N_M}$ |
| 6 | Ellipsoid | $f(x, y) = \sum_{i=1}^{N_C} 5^{(i-1)/(n-1)} x_i^2 + \sum_{i=1}^{N_D} 5^{(i-1)/(n-1)} y_i^2$ | $[-3, 3]^{N_M}$ |
Table 2. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO for the continuous variable Cigar function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 85 | 107 | 1.9 | 5.1 | 0.31 | 0.37 | 21% |
| 10 | 167 | 193 | 2.4 | 6.3 | 0.46 | 0.68 | 14% |
| 15 | 183 | 312 | 5.2 | 30.5 | 0.69 | 1.1 | 42% |
| 20 | 231 | 447 | 26.5 | 30.1 | 0.87 | 1.6 | 49% |
Table 3. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO for the mixed variable Cigar function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | Salp Swarm Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 93 | 97 | 3.1 | 6.8 | N/A | 0.47 | 0.41 | 4% |
| 10 | 119 | 156 | 6.1 | 9.1 | N/A | 0.54 | 0.71 | 24% |
| 15 | 147 | 209 | 11.2 | 11.8 | N/A | 0.73 | 0.87 | 30% |
| 20 | 195 | 264 | 12.6 | 15.8 | N/A | 0.91 | 1.1 | 27% |
Table 4. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO optimization for the continuous variable Paraboloid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | Salp Swarm (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | Salp Swarm Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 42 | 54 | 43 | 0.9 | 3.2 | 0.8 | 0.17 | 0.21 | 23% |
| 10 | 67 | 91 | 71 | 1.2 | 4.5 | 0.9 | 0.26 | 0.37 | 26% |
| 15 | 82 | 136 | 88 | 1.9 | 17.1 | 0.89 | 0.33 | 0.53 | 40% |
| 20 | 102 | 178 | 120 | 1.95 | 7.9 | 1.1 | 0.42 | 0.59 | 43% |
Table 5. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO for the mixed variable Paraboloid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | Salp Swarm (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | Salp Swarm Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 42 | 47 | 85 | 2.2 | 3.5 | 1.5 | 0.19 | 0.23 | 11% |
| 10 | 62 | 68 | 98 | 3.5 | 4.9 | 1.69 | 0.27 | 0.38 | 20% |
| 15 | 78 | 106 | 352 | 2.5 | 3.5 | 35.1 | 0.38 | 0.53 | 27% |
| 20 | 94 | 151 | 364 | 4.5 | 8.2 | 30.2 | 0.46 | 0.64 | 38% |
Table 6. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO for the continuous variable Ellipsoid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | Salp Swarm (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | Salp Swarm Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 48 | 60 | 44 | 2.2 | 3.7 | 0.875 | 0.20 | 0.24 | 20% |
| 10 | 70 | 96 | 75 | 4.1 | 3.1 | 1.1 | 0.31 | 0.42 | 27% |
| 15 | 92 | 154 | 96 | 3.9 | 5.1 | 2.8 | 0.41 | 0.68 | 40% |
| 20 | 105 | 184 | 132 | 4.2 | 9.2 | 2.9 | 0.52 | 0.88 | 43% |
Table 7. Comparison of performances of both Monte Carlo and Sobol random number samplings in PSO for the mixed variable Ellipsoid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | Salp Swarm (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | Salp Swarm Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 45 | 53 | 90 | 3.5 | 4.6 | 11.9 | 0.26 | 0.26 | 15% |
| 10 | 62 | 89 | 109 | 4.9 | 8.3 | 29.1 | 0.42 | 0.43 | 30% |
| 15 | 84 | 128 | 342 | 4.4 | 6.7 | 54.3 | 0.52 | 0.65 | 35% |
| 20 | 90 | 165 | 350 | 3.2 | 7.8 | 63.6 | 0.63 | 0.88 | 46% |
Table 8. Comparison of performances of both Monte Carlo and (scrambled) Halton random number samplings in PSO for the continuous variable Cigar function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 61 | 101 | 3.2 | 5.1 | 0.51 | 0.35 | 40% |
| 10 | 111 | 189 | 5.1 | 6.3 | 0.77 | 0.64 | 42% |
| 15 | 188 | 298 | 6.8 | 30.5 | 1.1 | 1.03 | 37% |
| 20 | 220 | 451 | 10.1 | 30.1 | 1.3 | 1.51 | 52% |
Table 9. Comparison of performances of both Monte Carlo and (scrambled) Halton random number samplings in PSO for the mixed variable Cigar function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 96 | 95 | 2.87 | 4.9 | 0.67 | 0.40 | 1% |
| 10 | 107 | 156 | 4.12 | 5.8 | 0.81 | 0.65 | 32% |
| 15 | 157 | 204 | 9.75 | 14.5 | 1.03 | 0.88 | 23% |
| 20 | 190 | 265 | 13.3 | 8.31 | 1.30 | 1.16 | 29% |
Table 10. Comparison of performances of both Monte Carlo and Halton random number samplings in PSO for the continuous variable Paraboloid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 38 | 52 | 0.9 | 3.1 | 0.41 | 0.20 | 30% |
| 10 | 64 | 88 | 1.1 | 4.4 | 0.6 | 0.33 | 28% |
| 15 | 79 | 142 | 1.15 | 6.5 | 0.76 | 0.52 | 45% |
| 20 | 103 | 174 | 1.6 | 5.5 | 0.92 | 0.55 | 41% |
Table 11. Comparison of performances of both Monte Carlo and Halton random number samplings in PSO for the mixed variable Paraboloid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 38 | 46 | 1.37 | 2.66 | 0.46 | 0.21 | 18% |
| 10 | 60 | 77 | 3.1 | 4.31 | 0.62 | 0.33 | 23% |
| 15 | 71 | 107 | 3.9 | 5.78 | 0.81 | 0.48 | 35% |
| 20 | 92 | 146 | 6.99 | 7.52 | 0.94 | 0.67 | 38% |
Table 12. Comparison of performances of both Monte Carlo and Halton random number samplings in PSO for the continuous variable Ellipsoid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 38 | 60 | 2.7 | 3.7 | 0.6 | 0.24 | 40% |
| 10 | 75 | 96 | 3.08 | 3.1 | 1.0 | 0.42 | 24% |
| 15 | 86 | 154 | 3.01 | 5.1 | 1.27 | 0.68 | 45% |
| 20 | 152 | 184 | 3.44 | 9.2 | 1.44 | 0.88 | 35% |
Table 13. Comparison of performances of both Monte Carlo and Halton random number samplings in PSO for the mixed variable Ellipsoid function.

| Number of Dimensions | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 5 | 42 | 53 | 2.54 | 4.6 | 0.50 | 0.26 | 21% |
| 10 | 61 | 89 | 2.75 | 8.3 | 0.70 | 0.43 | 32% |
| 15 | 110 | 128 | 6.19 | 6.7 | 0.91 | 0.65 | 14% |
| 20 | 120 | 165 | 7.1 | 7.8 | 1.2 | 0.88 | 27% |
Table 14. Comparison of random inertial weight-aided Sobol and conventional PSO performances for benchmark functions with 5 dimensions.

| Function Type | PSO Sobol Constant Weight Inertia | PSO Conventional Random Weight Inertia | PSO Sobol Random Weight Inertia | Standard Deviation PSO Sobol Constant Weight Inertia | Standard Deviation PSO Conventional Random Weight Inertia | Standard Deviation PSO Sobol Random Weight Inertia | Improvement in Random Inertia (Conventional vs. Sobol) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cigar—Continuous | 85 | 54 | 48 | 1.9 | 1.88 | 3.9 | 11.1% |
| Ellipse—Continuous | 48 | 35 | 30 | 2.2 | 1.2 | 1.8 | 14.3% |
| Parabola—Continuous | 42 | 31 | 29 | 0.9 | 1.7 | 1.3 | 6.45% |
| Cigar—Mixed | 93 | 108 | 95 | 3.1 | 14.3 | 3.6 | 12.03% |
| Ellipse—Mixed | 45 | 57 | 47 | 3.5 | 8.5 | 4.6 | 17.54% |
| Parabola—Mixed | 42 | 52 | 43 | 2.2 | 15.8 | 5.5 | 17.3% |
Table 15. Comparison of performances of random inertial weight-aided Sobol and conventional PSO for the benchmark functions with 10 dimensions.

| Function Type | PSO Sobol Constant Weight Inertia | PSO Conventional Random Weight Inertia | PSO Sobol Random Weight Inertia | Standard Deviation PSO Sobol Constant Weight Inertia | Standard Deviation PSO Conventional Random Weight Inertia | Standard Deviation PSO Sobol Random Weight Inertia | Improvement in Random Inertia (Conventional vs. Sobol) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cigar—Continuous | 167 | 76 | 67 | 2.4 | 2.6 | 1.8 | 11.8% |
| Ellipse—Continuous | 70 | 35 | 30 | 4.1 | 1.8 | 1.7 | 14.3% |
| Parabola—Continuous | 67 | 31 | 29 | 1.2 | 2.3 | 1.6 | 6.5% |
| Cigar—Mixed | 119 | 177 | 136 | 6.1 | 9.2 | 16.1 | 23.2% |
| Ellipse—Mixed | 62 | 99 | 84 | 4.9 | 8.0 | 17.2 | 15.2% |
| Parabola—Mixed | 62 | 89 | 71 | 3.5 | 8.2 | 4.5 | 20.2% |
Table 16. Comparison of performances of random inertial weight-aided Sobol and conventional PSO for the benchmark functions with 15 dimensions.

| Function Type | PSO Sobol Constant Weight Inertia | PSO Conventional Random Weight Inertia | PSO Sobol Random Weight Inertia | Standard Deviation PSO Sobol Constant Weight Inertia | Standard Deviation PSO Conventional Random Weight Inertia | Standard Deviation PSO Sobol Random Weight Inertia | Improvement in Random Inertia (Conventional vs. Sobol) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cigar—Continuous | 183 | 96 | 85 | 5.2 | 2.1 | 2.2 | 11.5% |
| Ellipse—Continuous | 92 | 63 | 56 | 3.9 | 1.2 | 1.6 | 11.1% |
| Parabola—Continuous | 82 | 61 | 52 | 1.9 | 11.3 | 1.9 | 14.8% |
| Cigar—Mixed | 147 | 260 | 166 | 11.2 | 44.8 | 16.4 | 36.2% |
| Ellipse—Mixed | 84 | 140 | 100 | 4.4 | 11.9 | 6.9 | 28.6% |
| Parabola—Mixed | 78 | 133 | 86 | 2.5 | 30.1 | 5.7 | 35.3% |
Table 17. Comparison of performances of random inertial weight-aided Sobol and conventional PSO for the benchmark functions with 20 dimensions.

| Function Type | PSO Sobol Constant Weight Inertia | PSO Conventional Random Weight Inertia | PSO Sobol Random Weight Inertia | Standard Deviation PSO Sobol Constant Weight Inertia | Standard Deviation PSO Conventional Random Weight Inertia | Standard Deviation PSO Sobol Random Weight Inertia | Improvement in Random Inertia (Conventional vs. Sobol) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cigar—Continuous | 231 | 118 | 98 | 26.5 | 5.4 | 3.5 | 16.9% |
| Ellipse—Continuous | 105 | 77 | 66 | 4.2 | 2.8 | 2.7 | 14.3% |
| Parabola—Continuous | 102 | 74 | 62 | 1.95 | 2.1 | 1.9 | 16.2% |
| Cigar—Mixed | 195 | 330 | 234 | 12.6 | 37.1 | 21.2 | 29.1% |
| Ellipse—Mixed | 90 | 213 | 120 | 3.2 | 28.4 | 19.2 | 43.7% |
| Parabola—Mixed | 94 | 193 | 105 | 4.5 | 23.6 | 5.1 | 45.6% |
Table 18. Comparison of performances for Monte Carlo and Sobol random number samplings in PSO for TSP.

| Number of Cities | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 74 | 103 | 16.1 | 28.9 | 0.98 | 1.18 | 28% |
| 15 | 414 | 478 | 88.7 | 130.2 | 7.85 | 8.7 | 13% |
| 20 | 1705 | 1927 | 210.2 | 371.4 | 16.55 | 17.8 | 11.5% |
Table 19. Comparison of performances for Monte Carlo and Halton random number samplings in PSO for TSP.

| Number of Cities | PSO Enhanced (Iterations) | PSO Conventional (Iterations) | PSO Enhanced Standard Deviation | PSO Conventional Standard Deviation | PSO Enhanced Compute Time (secs) | PSO Conventional Compute Time (secs) | Iterations' Improvement Percentage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 78 | 103 | 18.2 | 28.9 | 1.26 | 1.18 | 24% |
| 15 | 431 | 478 | 76.7 | 130.2 | 8.98 | 8.7 | 9.8% |
| 20 | 1760 | 1927 | 212.1 | 371.4 | 17.39 | 17.8 | 8.6% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
