Article

Predominant Cognitive Learning Particle Swarm Optimization for Global Numerical Optimization

1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
3 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(10), 1620; https://doi.org/10.3390/math10101620
Submission received: 2 March 2022 / Revised: 20 April 2022 / Accepted: 7 May 2022 / Published: 10 May 2022
(This article belongs to the Special Issue Natural Computing)

Abstract:
Particle swarm optimization (PSO) has achieved great success in problem optimization. Nevertheless, its optimization performance degrades seriously when coping with optimization problems that have many local optima. To alleviate this issue, this paper designs a predominant cognitive learning particle swarm optimization (PCLPSO) method to effectively tackle complicated optimization problems. Specifically, for each particle, a new promising exemplar is constructed by letting its personal best position cognitively learn from a better personal experience randomly selected from those of the other particles, based on a novel predominant cognitive learning strategy. As a result, different particles preserve different guiding exemplars, and thus both the learning effectiveness and the learning diversity of particles are expectedly improved. To eliminate the sensitivity of PCLPSO to the involved parameters, we propose dynamic adjustment strategies, so that different particles preserve different parameter settings, which further promotes the learning diversity of particles. With the above techniques, the proposed PCLPSO is expected to balance search intensification and diversification well, so that the complex solution space is searched properly to achieve satisfactory performance. Comprehensive experiments were conducted on the commonly adopted CEC 2017 benchmark function set to verify the effectiveness of the devised PCLPSO. Experimental results show that PCLPSO obtains highly competitive or even much more promising performance than several representative and state-of-the-art peer methods.

1. Introduction

Optimization problems emerge commonly and become more and more complicated in many research fields and industrial engineering [1,2], such as object detection and tracking [3,4], automatic design of algorithms for visual attention [5,6], path planning optimization [7,8], and control of pollutant spreading on social networks [9]. In particular, these complicated optimization problems usually are non-differentiable, discontinuous, non-convex, non-linear, or multimodal [10,11,12]. Confronted with such kinds of complicated optimization problems, the optimization effectiveness of traditional optimization methods, such as conjugate gradient methods [13,14], space-filling curve methods [15,16], quasi-Newton methods [17,18], line search methods [19,20,21], and trust-region methods [22,23], deteriorates rapidly. In extreme cases, they are even infeasible for solving these complex problems. As a consequence, effective optimization algorithms are increasingly demanded to solve increasingly emerging complex optimization problems, such that the development of related fields could be boosted.
In recent years, evolutionary algorithms (EAs), such as particle swarm optimization (PSO) [24,25] and differential evolution (DE) [26,27], have presented good optimization ability in problem optimization, especially in solving problems that traditional optimization methods cannot tackle, such as locating multiple global optima of optimization problems [28,29,30,31] and simultaneously optimizing more than one objective [32,33]. Different from traditional mathematical optimization methods [34,35,36], which usually adopt only one feasible solution to iteratively search the solution space, EAs generally employ a population of candidate solutions that undergoes iterative evolution to seek the global optimum. In this manner, compared with traditional mathematical approaches [16,36,37], EAs offer many unique merits. (1) EAs place no requirements on the mathematical properties of the problem to be optimized [38,39] and can even deal with problems without mathematical models. By contrast, most traditional optimization methods [16,17,18], especially gradient-based approaches [17,18], have critical requirements on the properties of optimization problems, such as continuity, differentiability, and convexity. Theoretically, EAs can be adopted to optimize any kind of problem; in the literature [40,41], however, EAs are mainly employed to tackle optimization problems that traditional optimization techniques cannot cope with. (2) EAs usually have strong global search ability since they maintain a population of individuals to explore the search space in different directions [38,41]. Therefore, falling into local regions can be avoided with a high probability. In contrast, traditional mathematical optimization methods [42,43,44] usually employ only one feasible candidate, and thus search the solution space in only one direction to seek the global optimum. As a consequence, they are likely to fall into local areas, especially when dealing with optimization problems with many wide and flat local basins.
As a kind of EA, PSO [45,46] has been successfully employed to cope with various kinds of optimization problems since its first introduction in 1995 by Kennedy and Eberhart [47,48]. During optimization, PSO maintains a population of candidate feasible solutions to search the solution space iteratively. Owing to its great advantages, such as independence from the mathematical properties of the problems to be optimized, fast convergence, and inherent parallelism [49], PSO has attracted extensive research attention. As a result, PSO has not only been widely applied to solve complex problems, such as multimodal optimization [28,29,30,31] and multi-objective optimization [32,33], but has also been commonly adopted to tackle real-world optimization problems, such as vehicle routing [45,50], neural network training [51,52], and task assignment [53,54].
In the literature [55,56,57,58,59], it is widely accepted that the learning strategy in updating the velocity of particles has a significant influence on assisting PSO to obtain good performance because it determines the way of information diffusion within the swarm. As a result, researchers have designed a lot of novel learning schemes for PSO to promote its optimization performance [11,60], such as comprehensive learning strategies [56,61,62], orthogonal learning mechanisms [57,63,64], and hybrid algorithm learning methods [55,65,66]. Roughly, existing learning mechanisms for PSO could be classified into two categories: exemplar construction-based learning methods [55,56,57] and topology-based learning strategies [58,59,67].
Exemplar construction-based learning strategies aim to construct new learning exemplars for particles to learn from [55,56,57]; such exemplars may not be positions actually visited by particles. In most existing studies, the constructed exemplar is generated by some dimension recombination method based on the historically best positions of particles. In this way, it is expected that the constructed exemplars could provide guidance for the evolution of the swarm, so that particles could move to more promising areas. Along this line, the most representative approach is the comprehensive learning PSO (CLPSO) [56]. To improve the construction efficiency of exemplars, researchers have devised many other exemplar-construction methods, such as the orthogonal learning PSO (OLPSO) [57] and the genetic learning PSO (GL-PSO) [55].
Different from exemplar construction-based learning strategies, topology-based learning strategies mainly adopt certain kinds of topologies to select guiding exemplars to update particles [58,59,67,68]. Different topology structures affect the way of information exchange between particles and the speed of information circulation, thereby affecting the performance of PSO. Specifically, in most topology-based learning strategies, each particle cognitively learns from its own historically best position and socially learns from the historically best position among the neighbors connected by the associated topologies. In the classical PSO [47,48], a global topology connecting all particles was utilized to select the best position among the personal best positions of all particles as the learning exemplar for each particle. This global best position brings in overly greedy attraction, such that when dealing with multimodal problems [10,48], the swarm often falls into local regions. To alleviate this dilemma, many different neighborhood topologies have been designed [58], such as ring topology, four-cluster topology, pyramid topology, and square topology. Some researchers even proposed dynamic topologies [59,68] and composited different topologies [67] to select promising exemplars to direct the update of particles, so that the learning abilities of particles are further promoted.
Though PSO has been advanced significantly, and a lot of remarkable PSO variants [55,56,59,62,69,70,71] have shown their great feasibility in coping with optimization problems, their optimization ability encounters great challenges when dealing with complicated problems with a number of interacting variables and a lot of wide and flat local basins. Unfortunately, these complicated problems are ubiquitous in the era of big data and Internet of Things (IoT) [72]. As a result, there is still an increasing and urgent demand for effective and efficient PSOs to tackle the increasingly emerging complex optimization problems.
To further promote the optimization performance of PSO in dealing with complicated optimization problems, this paper designs a predominant cognitive learning particle swarm optimization (PCLPSO) algorithm, which utilizes a predominant cognitive learning strategy to construct guiding exemplars for particles. Specifically, the main components of PCLPSO are summarized as follows:
(1)
A predominant cognitive learning strategy (PCL) is devised to construct guiding exemplars to update particles. Different from existing exemplar construction-based learning PSOs [55,56,57] that construct the guiding exemplars in an elementwise way, the proposed PCL constructs a promising exemplar to guide the update of each particle by letting its personal best position cognitively learn from a predominant one randomly selected from those better than the personal best position of the updated particle. On the one hand, the personal best position of this particle learns from a better one, and thus the constructed exemplar is expectedly more promising. As a result, the learning effectiveness of particles is expectedly promoted. On the other hand, due to the random selection in PCL, different particles generally preserve different guiding exemplars, and thus the learning diversity of particles is expectedly improved. In this way, the proposed PCLPSO could expectedly balance the search diversity and the search convergence well to find satisfactory solutions.
(2)
Dynamic parameter adjustment strategies are further designed to alleviate the predicament that PCLPSO is sensitive to involved parameters. With these dynamic strategies, different particles usually preserve different parameter settings, which is beneficial for further improving the learning diversity of particles.
To validate the optimization effectiveness and efficiency of PCLPSO, comprehensive experiments were carried out on the commonly adopted CEC 2017 benchmark function set [73] with three dimensionalities (namely, 30, 50, and 100) by comparing PCLPSO with seven representative and state-of-the-art PSO variants. At the same time, deep investigations of PCLPSO were also executed to determine what contributes to its promising performance.
The remainder of this paper is arranged as follows. Closely related works on PSOs are briefly reviewed in Section 2. Then, the developed PCLPSO is described in Section 3. In Section 4, comparative experiments are executed to testify the effectiveness of PCLPSO. Finally, Section 5 concludes this paper.

2. Related Work

Without loss of generality, this paper aims to find the global minima of the following defined problems:
$$\text{minimize}\; f(\mathbf{x}), \quad \mathbf{x} = [x_1, \ldots, x_d, \ldots, x_D] \in \mathbb{R}^D \tag{1}$$
where x is the decision variable vector composed of D variables. In this paper, the objective value of the problem is used as the fitness of one particle.
In the literature, there is an ocean of optimization methods, including traditional mathematical optimization methods [15,18,20,21,36,44] and heuristic algorithms, such as evolutionary algorithms [25,27,30,38,55,56]. However, since this paper mainly proposes a PSO variant to solve complicated optimization problems, we mainly review closely related studies on PSO in the following.

2.1. Canonical PSO

Typically, each particle in the classical PSO [47,48] is updated by learning from its own experience and the social experience of the entire swarm. Specifically, in PSO, each particle is represented by one position vector and one velocity vector. Based on these two vectors, each particle is updated in the following way:
$$v_i^{t+1} = \omega \times v_i^t + c_1 \times r_1 \times (pbest_i^t - x_i^t) + c_2 \times r_2 \times (gbest^t - x_i^t) \tag{2}$$
$$x_i^{t+1} = x_i^t + v_i^{t+1} \tag{3}$$
where xi is the position vector of the ith particle, vi is its velocity vector, and pbesti is its personal best position found so far, while gbest is the global best position of the entire swarm found so far. t denotes the generation index. ω denotes the inertia weight. c1 and c2 represent two acceleration coefficients in charge of the effect of pbesti and gbest on the updated particle. r1 and r2 are uniformly and randomly sampled within [0, 1].
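To make the update concrete, the following minimal Python sketch (our illustrative rendering with NumPy, not code from the cited works; the function name and default parameter values are our own assumptions) performs one generation of the canonical update in Equations (2) and (3):

```python
import numpy as np

def canonical_pso_step(x, v, pbest, gbest, omega=0.7, c1=2.0, c2=2.0):
    """One generation of the canonical PSO update (Equations (2) and (3)).

    x, v, pbest: arrays of shape (NP, D); gbest: array of shape (D,).
    The parameter values are common defaults, not prescribed by this paper.
    """
    NP, D = x.shape
    r1 = np.random.rand(NP, 1)  # one random number per particle, as in Eq. (2)
    r2 = np.random.rand(NP, 1)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (2)
    x = x + v  # Eq. (3)
    return x, v
```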
Equation (2) shows that in the canonical PSO, all particles learn from the global best position gbest of the whole swarm found so far. Such an exemplar is overly greedy and thus easily causes the swarm to fall into local basins when coping with optimization problems with many local regions [49,74,75].
To improve the effectiveness of PSO in exploring and exploiting multimodal space, many researchers have developed a lot of novel and effective learning schemes [25,55,56,57,58,59] to guide the update of particles. In a broad sense, existing learning methods are roughly separated into two types: constructive learning strategies [55,56,57] and topology-based learning strategies [58,59,67].

2.2. Constructive Learning Strategies for PSO

Constructive learning strategies [55,56,57] mainly adopt some dimension recombination methods to randomly recombine dimensions of multiple personal best positions of particles to construct promising guiding exemplars to update particles. In the literature, the most representative constructive learning PSOs are the comprehensive learning PSO (CLPSO) [56], the orthogonal learning PSO (OLPSO) [57], and the genetic learning PSO (GLPSO) [55], which will be described in detail next.

2.2.1. CLPSO

CLPSO [56] utilizes the devised comprehensive learning (CL) strategy to recombine dimensions of pbests of different particles to construct new exemplars dimension by dimension. Specifically, the velocity of each particle is updated in the following way:
$$v_{i,d}^{t+1} = \omega \times v_{i,d}^t + c \times r_d \times (pbest_{f_i(d),d}^t - x_{i,d}^t) \tag{4}$$
where xi,d is the dth dimension in the position vector of the ith particle, while vi,d is the dth dimension in the velocity vector of the ith particle. pbestfi(d),d is the dth dimension in the personal best position pbestfi(d), where fi(d) denotes the index of the selected personal best position for the dth dimension in the updated particle; t denotes the generation index; ω denotes the inertia weight; c represents the acceleration coefficient and rd is uniformly and randomly sampled within [0, 1].
With the above CL strategy, each particle learns from multiple historical experiences of particles. Since the personal best positions are all randomly selected, the constructed exemplars for different particles are likely different, and thus the learning diversity of particles is expectedly improved to a large extent. On the other hand, by learning from multiple personal best positions, the potentially useful information embedded in these positions could be integrated to direct the update of particles. As a result, the learning effectiveness of particles is expectedly promoted as well.
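As an illustration of how the indices f_i(d) in Equation (4) can be chosen, the following Python sketch follows the selection scheme of the original CLPSO paper [56] (a per-particle learning probability and a two-way tournament); the function name and array layout are our own assumptions:

```python
import numpy as np

def cl_exemplar_indices(fit_pbest, i, pc, D):
    """Choose f_i(d) for each dimension d of particle i (Equation (4)).

    fit_pbest: (NP,) fitness of all pbests (minimization assumed).
    pc: particle i's learning probability, as defined in CLPSO [56].
    """
    NP = len(fit_pbest)
    f_i = np.full(D, i)  # by default, learn from one's own pbest
    for d in np.where(np.random.rand(D) < pc)[0]:
        a, b = np.random.choice(NP, 2, replace=False)  # two-way tournament
        f_i[d] = a if fit_pbest[a] < fit_pbest[b] else b
    if np.all(f_i == i):  # CLPSO forces at least one dimension to learn from others
        d = np.random.randint(D)
        f_i[d] = np.random.choice([j for j in range(NP) if j != i])
    return f_i
```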
To further promote the optimization performance of CLPSO, some researchers have attempted to introduce or design additional mechanisms [61,62,69,76] to assist the evolution of the swarm. For example, Lynn et al. [62] designed a heterogeneous CLPSO (HCLPSO) by partitioning the swarm into two subpopulations, one responsible for exploration and the other in charge of exploitation. For the exploration subpopulation, the guiding exemplars are constructed using the pbests of particles in this subpopulation, while for the exploitation subpopulation, the guiding exemplars are constructed from the pbests of all particles in the entire swarm. In [76], a heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer (HCLDMS-PSO) was devised by introducing an improved dynamic multi-swarm (DMS) scheme and two mutation operators (a non-uniform mutation operator and a Gaussian mutation operator) into HCLPSO to further promote its optimization performance. In [77], an adaptive strategy for regulating the probability of conducting the CL strategy and a cooperative archive (CA) were devised and then embedded into CLPSO, leading to an adaptive CLPSO with collaborative archiving (ACLPSO-CA). In [69], a novel local search method was developed and introduced into CLPSO, leading to a variant named CLPSO-LS. With the help of this local search operator, the accuracy of the obtained solutions is evidently improved. In [61], a local optima topology (LOT) structure was devised and introduced into CLPSO, leading to CLPSO-LOT. Specifically, such a topology structure comprises the local optima, so that the search space of particles could be enlarged and the convergence of the swarm could be boosted with a certain probability.

2.2.2. OLPSO

Although CLPSO has shown promising performance on complicated problems, such as multimodal problems, it is inefficient at constructing new valid exemplars because the recombination of dimensions is completely random and directionless. To achieve effective recombination of dimensions, OLPSO was devised in [57], which employs orthogonal experimental design to seek useful recombinations of the historical best positions of particles. To be specific, the algorithm selects the dimension combination with the best average fitness value by constructing an orthogonal matrix and then constructs a potentially useful exemplar for each particle. In particular, the velocity of each particle is updated in the following way:
$$v_{i,d}^{t+1} = \omega \times v_{i,d}^t + c \times r_d \times (pbest_{o,d}^t - x_{i,d}^t) \tag{5}$$
where xi,d and vi,d are the dth dimension in the position and the velocity vectors of the ith particle, respectively. pbesto,d is the determined personal best position by the orthogonal matrix for the dth dimension; t denotes the generation index; ω denotes the inertia weight; c represents the acceleration coefficient; rd is uniformly and randomly sampled from [0, 1].
With the orthogonal matrix, the dimension recombination of different personal best positions is effective to generate a good exemplar for each particle with a high probability. Therefore, OLPSO achieves much better performance than CLPSO when dealing with multimodal problems [57,63,78].
After the emergence of OLPSO, researchers introduced a lot of other techniques [63,64,78] into OLPSO to further promote its optimization performance. For example, in [63], a variable relocation strategy (VRS) was combined with the OL strategy to develop a variant of OLPSO, named OLPSO-VRS, to effectively solve dynamic optimization problems. In [64], the Metropolis-based probabilistic acceptance criterion was incorporated into OLPSO, leading to a hybrid OLPSO (HOLPSO), whose searching capability is improved by selecting guiding particles based on the probabilistic acceptance mechanism. In [78], a quadratic interpolation-based OLPSO-G (QIOLPSO-G) was proposed, where a quadratic interpolation-based construction mechanism was applied to the pbests of all particles to construct effective exemplars.
Although the above OLPSO variants can generate more effective learning exemplars with high probability, they consume a lot of fitness evaluations, because the OL strategy requires extensive fitness evaluations in the orthogonal experimental design.

2.2.3. GL-PSO

Different from the above two construction methods, GL-PSO [55] utilized the selection strategy, the crossover scheme, and the mutation mechanism in genetic algorithms (GA) to construct learning exemplars for each particle. At first, GL-PSO constructs an exemplar based on the classical update strategy of PSO as follows:
$$e_{i,d}^{t+1} = \frac{c_1 \times r_{1,d} \times pbest_{i,d}^t + c_2 \times r_{2,d} \times gbest_d^t}{c_1 \times r_{1,d} + c_2 \times r_{2,d}} \tag{6}$$
where pbesti,d is the dth dimension in pbesti of the ith particle; gbestd is the dth dimension in gbest of the entire swarm; ei,d is the dth dimension of the constructed learning exemplar. t denotes the generation index; c1 and c2 represent two acceleration coefficients, and r1,d and r2,d are randomly and uniformly sampled from [0, 1].
Then, GL-PSO constructs another learning exemplar for each particle via utilizing the operators in GA as follows:
(1)
Crossover: First, the crossover operation is used to recombine pbests and gbest to build an offspring oi in the following way:
$$o_{i,d}^{t+1} = \begin{cases} r_d \times pbest_{i,d}^t + (1 - r_d) \times gbest_d^t, & \text{if } f(pbest_i^t) < f(pbest_{k_d}^t) \\ pbest_{k_d,d}^t, & \text{otherwise} \end{cases} \tag{7}$$
where rd is randomly and uniformly sampled in [0, 1], kd is the index of a randomly selected pbest for the dth dimension; f(•) is the fitness function.
From Equation (7), it is found that this crossover operator actually generates the offspring dimension by dimension. Specifically, for each dimension, one pbest is first randomly chosen from all pbests in the current generation, and then the selected pbest is compared with pbesti. If the selected pbest is better, the dimension of the offspring inherits the value from the selected pbest; otherwise, it inherits the value from the linear combination of pbesti and gbest.
(2)
Mutation: After crossover, the generated offspring oi performs the uniform mutation operation with a probability (pm) as follows:
$$o_{i,d}^{t+1} = rand(lb_d, ub_d), \quad \text{if } r_d < p_m \tag{8}$$
where lbd and ubd are the lower bound and the upper bound of the dth dimension, respectively; rd is randomly and uniformly sampled from [0, 1]; pm is the mutation probability.
Such a mutation operation is actually the typical uniform mutation. Specifically, for each dimension, a real number rd is first uniformly sampled within [0, 1]. Then, if the generated rd is lower than pm, the dth dimension of oi is resampled in the search range [55].
(3)
Selection: At last, the selection operation is performed to determine the final learning exemplar for each particle to update. Specifically, the selection operator is conducted between the first learning exemplar constructed by Equation (6) and the one constructed by Equations (7) and (8) as follows:
$$e_i^{t+1} = \begin{cases} o_i^{t+1}, & \text{if } f(o_i^{t+1}) < f(e_i^{t+1}) \\ e_i^{t+1}, & \text{otherwise} \end{cases} \tag{9}$$
After the selection, the velocity of each particle is updated in the following way:
$$v_{i,d}^{t+1} = \omega \times v_{i,d}^t + c \times r_d \times (e_{i,d}^{t+1} - x_{i,d}^t) \tag{10}$$
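Putting Equations (6)-(9) together, the exemplar construction in GL-PSO can be sketched in Python as follows (an illustrative rendering under our own naming and placeholder parameter values, not the authors' code):

```python
import numpy as np

def glpso_exemplar(pbest, fit_pbest, gbest, i, f,
                   c1=2.0, c2=2.0, pm=0.01, lb=-100, ub=100):
    """Illustrative construction of particle i's exemplar in GL-PSO (Eqs. (6)-(9)).

    pbest: (NP, D) personal best positions; fit_pbest: (NP,) their fitness
    (minimization); gbest: (D,) global best; f: the fitness function.
    Parameter values here are placeholders, not those tuned in [55].
    """
    NP, D = pbest.shape
    # Equation (6): PSO-style base exemplar
    r1, r2 = np.random.rand(D), np.random.rand(D)
    e = (c1 * r1 * pbest[i] + c2 * r2 * gbest) / (c1 * r1 + c2 * r2)
    # Equation (7): dimension-wise crossover with a randomly chosen pbest
    k = np.random.randint(NP, size=D)
    rd = np.random.rand(D)
    o = np.where(fit_pbest[i] < fit_pbest[k],
                 rd * pbest[i] + (1 - rd) * gbest,
                 pbest[k, np.arange(D)])
    # Equation (8): uniform mutation with probability pm per dimension
    mut = np.random.rand(D) < pm
    o[mut] = np.random.uniform(lb, ub, size=mut.sum())
    # Equation (9): keep the better candidate (costs two fitness evaluations)
    return o if f(o) < f(e) else e
```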
In GL-PSO, due to the selection operator and the random construction of the learning exemplars, the finally constructed exemplars for different particles are expectedly different. With this exemplar construction strategy, the global search capability and the optimization performance of PSO are expectedly enhanced. Inspired by GL-PSO, many researchers have attempted to employ operators from other evolutionary algorithms to construct learning exemplars based on the pbests of particles and the gbest of the whole swarm [65,66,79]. For instance, in [65], a global GL-PSO with diversity enhancement (GGL-PSOD) was developed by replacing the global topology with a local ring topology during the construction of exemplars to further enhance the search diversity of the algorithm. In [79], a triple archive PSO (TAPSO) maintains three archives to generate a promising guiding exemplar for each particle in a dimension-wise way; specifically, this algorithm constructs a new exemplar based on the operators of GA by randomly selecting two particles from the first archive and the second archive. In [66], a PSO with two differential mutation operators (PSOTD) was devised by designing a topology structure composed of two swarms and two layers. In this algorithm, two different differential mutation operations with two different control parameters are utilized to generate learning exemplars for particles.
The above variants of GL-PSO have shown promising performance in solving optimization problems. However, during the exemplar construction, they all consume many function evaluations to evaluate the constructed exemplars. This definitely reduces the number of fitness evaluations used for evolving the swarm, which may lead to insufficient evolution of the swarm to find high-quality solutions.

2.3. Topology-Based Learning Strategies for PSO

Different from the constructive learning strategies, the topology-based learning strategies [58,80] mainly select learning exemplars from the pbests in the current generation based on certain kinds of topologies. Specifically, the topologies connect particles for information exchange so as to seek proper learning exemplars to guide the update of particles [81,82].
In [58], Mendes and Kennedy designed many different neighborhood topologies, such as the ring topology, the four-cluster topology, the pyramid topology, and the square topology, where each particle is connected to a small number of individuals from a local community. Unlike the greedy global topology, these local topologies select less greedy exemplars, and thus high diversity can be maintained, which helps PSO avoid falling into local basins. In [80], Zou et al. devised a PSO based on a ring neighborhood topology, which is formed on the basis of the calculated Euclidean distance between particles, to solve multimodal optimization problems. Specifically, each particle builds its own topology and evolves in the associated local range. Instead of using fixed topologies, some researchers have employed dynamic topologies [59,83] to further enhance the search diversity of particles. Since different topology-based learning strategies preserve different properties and advantages, some researchers have also attempted to use multiple topology-based learning strategies to update particles, so that different particles may interact with others via different topologies [67].
Though the abovementioned PSO variants have shown good performance on certain kinds of optimization problems, issues such as premature convergence and stagnation still occur when they deal with complicated problems, such as problems with a lot of interacting variables and many wide and flat local basins. With these limitations, the abovementioned algorithms cannot be adopted to tackle multimodal problems that increasingly emerge in real-world applications. To promote the optimization performance of PSO in coping with complicated problems, we design a predominant cognitive learning particle swarm optimization (PCLPSO) in this paper by constructing promising exemplars to improve the learning diversity and the learning effectiveness of particles.

3. Proposed PCLPSO

To elevate the search effectiveness and the search diversity of particles in tackling complicated optimization problems, this paper devises a predominant cognitive learning particle swarm optimization (PCLPSO) via utilizing the devised predominant cognitive learning strategy to construct promising learning exemplars for particles to update. Therefore, the proposed PCLPSO is a constructive-learning-based PSO variant.

3.1. Predominant Cognitive Learning Strategy

Among the three classical constructive learning PSOs (namely CLPSO [56], OLPSO [57], and GL-PSO [55]), the construction efficiency of promising guiding exemplars cannot be guaranteed in CLPSO on account of its random selection of pbests dimension by dimension. Although OLPSO and GL-PSO construct promising exemplars more efficiently than CLPSO, they usually consume a great number of fitness evaluations during the construction, which reduces the number of fitness evaluations left for the swarm evolution and is consequently not beneficial for seeking high-accuracy solutions.
To alleviate the above issues, this paper proposes a predominant cognitive learning strategy (PCL) to construct promising exemplars for particles. Specifically, given that NP particles are maintained in the swarm, we first sort the personal best positions pbests of all particles from the best to the worst. Subsequently, for each particle (xi, 1 ≤ i ≤ NP), we construct a promising guiding exemplar as follows:
$$e_i^{t+1} = \begin{cases} gbest^t, & \text{if } pbest_i^t = gbest^t \\ pbest_i^t + F_i \times (pbest_{rb}^t - pbest_i^t), & \text{otherwise} \end{cases} \tag{11}$$
where pbesti is the personal best position found by the ith particle so far; gbest is the global best position found by the entire swarm so far; pbestrb is the personal best position randomly selected from those which are better than pbesti; ei is the constructed exemplar for the ith particle; Fi is a control parameter within [0, 1], which can be seen as a learning step of pbesti of the ith particle, and t denotes the generation index.
From Equation (11), we can see that the learning exemplar (ei) for each particle (xi) is constructed by letting its cognitive experience (namely, pbesti) learn from a predominant cognitive experience of other particles (namely, a better personal best position pbestrb), which is randomly selected from those pbests with better fitness than pbesti. In this way, the constructed exemplar is expectedly close to promising areas. Note that if the personal best position (pbesti) of one particle is exactly gbest, we do not construct a guiding exemplar for this particle, since there is no better one to learn from. In this situation, we directly use pbesti (namely, gbest) to direct the update of this particle.
After the guiding exemplar of the ith particle is constructed, the particle is updated in the following way:
$$v_i^{t+1} = \omega \times v_i^t + c_i \times r \times (e_i^{t+1} - x_i^t) \tag{12}$$
$$x_i^{t+1} = x_i^t + v_i^{t+1} \tag{13}$$
where xi and vi denote the position and the velocity vectors of the ith particle, respectively; ei is the constructed guiding exemplar for the ith particle; t denotes the generation index; ω denotes the inertia weight; ci represents the acceleration coefficient for the ith particle; r is randomly and uniformly sampled from [0, 1].
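A minimal Python sketch of the PCL update for one particle, covering Equations (11)-(13), is given below (our illustrative rendering, not the authors' code; the per-particle parameters F and c come from the dynamic strategies in Section 3.2):

```python
import numpy as np

def pcl_update(i, x, v, pbest, fit_pbest, omega, F, c):
    """Update particle i in place with the PCL-constructed exemplar (Eqs. (11)-(13)).

    x, v, pbest: (NP, D) arrays; fit_pbest: (NP,) pbest fitness (minimization);
    F, c: the per-particle parameters F_i and c_i from Section 3.2.
    """
    better = np.where(fit_pbest < fit_pbest[i])[0]  # indices of predominant pbests
    if better.size == 0:
        e = pbest[i]  # pbest_i is gbest: use it directly (first branch of Eq. (11))
    else:
        rb = np.random.choice(better)  # randomly selected predominant position
        e = pbest[i] + F * (pbest[rb] - pbest[i])  # second branch of Eq. (11)
    r = np.random.rand()  # one uniform random number per particle
    v[i] = omega * v[i] + c * r * (e - x[i])  # Equation (12)
    x[i] = x[i] + v[i]  # Equation (13)
```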
In-depth observation of Equations (11) and (12) shows that the proposed PCL strategy preserves the following merits:
(1)
The constructed guiding exemplar for each particle is hopefully more promising than its personal best position because the exemplar is generated by letting its personal best position cognitively learn from a randomly selected better one. Therefore, the learning effectiveness of particles is hopefully promoted, which helps particles locate optimal areas fast.
(2)
Due to the random selection of the learning candidates in PCL, the constructed exemplars to guide the update of different particles are likely different. Hence, the learning diversity of particles is also expectedly promoted, which is helpful for the swarm to escape from local basins.
(3)
In particular, we find that different particles have different numbers of learning candidates in PCL to construct exemplars, which results in the pbests of different particles having different learning ranges. Specifically, the better a pbest is, the fewer candidates it has to learn from, and thus the narrower the range in which this position moves. Implicitly, particles with worse personal best positions tend to explore the solution space, while particles with better personal best positions tend to exploit it.
(4)
By means of the above merits, the devised PCL is expected to balance exploration and exploitation well to search the solution space appropriately. Therefore, the proposed PCLPSO is likely to achieve good performance in coping with different kinds of optimization problems.

Remark

Compared with the three classical constructive learning PSOs, namely CLPSO [56], OLPSO [57], and GL-PSO [55], the proposed PCLPSO differs from them in the following aspects:
(1)
As shown in Equations (11) and (12), the proposed PCLPSO constructs the guiding exemplars as a whole, namely, taking all dimensions together, while CLPSO (Equation (4)), OLPSO (Equation (5)), and GL-PSO (Equation (7)) all construct the guiding exemplars in an elementwise way, namely, dimension by dimension. By learning from the predominant experience of other particles as a whole, the proposed PCL could implicitly take variable correlations into consideration to effectively construct promising guiding exemplars. At the same time, it also reduces the computational complexity and time cost of the exemplar construction.
(2)
The proposed PCLPSO constructs a learning exemplar to guide the update of each particle by letting the personal best position of this particle cognitively learn from a randomly selected better personal best position. By contrast, CLPSO constructs a guiding exemplar for each particle dimension by dimension based on the personal best positions of all particles. Since the dimension recombination of the personal best positions is random, the quality of the constructed exemplar in CLPSO is uncertain. In PCLPSO, however, by learning from a better position, the quality of the constructed exemplar is expectedly improved. Therefore, compared with CLPSO, the proposed PCLPSO preserves higher efficiency in constructing more promising exemplars.
(3)
There is no additional consumption of fitness evaluations in the exemplar construction of PCLPSO, while in both OLPSO and GL-PSO, a lot of fitness evaluations are consumed during the construction of the guiding exemplars. In OLPSO, many fitness evaluations are used to construct the orthogonal matrix that seeks an effective recombination of dimensions to generate a more promising exemplar. In GL-PSO, the selection operation takes two fitness evaluations per particle to evaluate the constructed exemplars, so that the better one can be determined as the guiding exemplar of this particle. Different from these two constructive learning PSOs, PCLPSO directly utilizes the constructed exemplars to direct the update of particles. Though it cannot be guaranteed that the constructed exemplar for each particle is definitely better than its personal best position, the constructed exemplars are hopefully better than the associated personal best positions, since they are constructed by letting the latter learn from predominant ones. Therefore, the proposed PCLPSO is expected to achieve more promising performance than OLPSO and GL-PSO.

3.2. Dynamic Strategies for Control Parameters

From Equations (11) and (12), it is found that PCLPSO has three key parameters, namely the inertia weight ω, the acceleration coefficient c, and the control parameter F. With respect to the inertia weight ω, we directly utilize the following linear decay strategy, which is commonly employed in the literature [56,62,67,69,84]:
$$\omega = 0.9 - 0.7 \times \frac{t}{T_{max}} \tag{14}$$
where t is the current generation, while Tmax denotes the preset maximum number of generations.
Observing Equation (14), we can see that the inertia weight ω linearly decreases from 0.9 to 0.2 as the evolution progresses. Therefore, in the early stage, a large ω is maintained to keep the moving inertia of particles, which is profitable for particles to search the solution space with high diversity. In the late stage, a small ω is maintained to decrease the influence of the inertia part. As a result, the swarm expectedly tends to exploit the found promising areas to obtain high-accuracy solutions.
With respect to the control parameter F and the acceleration coefficient c, we devise the following dynamic strategies.

3.2.1. Adaptive Strategy for F

In Equation (11), the control parameter F controls the learning step that the personal best position (pbesti) of the updated particle takes in learning from the randomly selected predominant one (pbestrb). Therefore, it has a great effect on the quality of the constructed exemplars. In particular, an overly large Fi leads to overly greedy learning, so that the constructed exemplar is too close to the randomly selected pbestrb. In this situation, the updated particle may approach promising areas too fast, which risks falling into local basins and premature convergence. By contrast, an overly small Fi results in insufficient learning, so that the constructed exemplar for each particle remains close to its own pbest. In this situation, the learning effectiveness of particles is improved only to a very limited extent, which may slow down the convergence. Moreover, since the learning ranges of different particles are different, the settings of Fi should be different for different particles as well.
Bearing the above considerations in mind, we devise the following adaptive strategy for F:
$$F_i = Gaussian\left(\frac{rank(i)}{NP}, 0.1\right) \tag{15}$$
where rank(i) is the ranking of the personal best position (pbesti) of the ith particle after all pbests are sorted from the best to the worst; Fi is the setting of the control parameter F for the ith particle; NP represents the number of particles in the swarm. Gaussian(rank(i)/NP, 0.1) samples a real random number from the Gaussian distribution with the mean value set as rank(i)/NP and the standard deviation set as 0.1. Here, it should be mentioned that a Gaussian distribution with a small standard deviation (0.1) is utilized because such a distribution has a narrow sampling range and thus generates diverse values around the mean value. This offers slight diversity in the exemplar construction for each particle without damaging the construction efficiency.
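A small Python sketch of this sampling is given below; clipping the samples to [0, 1] is our assumption, since the paper states that F lies within [0, 1] but does not specify how out-of-range Gaussian samples are handled:

```python
import numpy as np

def sample_F(fit_pbest):
    """Sample F_i for every particle via Equation (15).

    fit_pbest: (NP,) fitness of the pbests (minimization); rank 1 = best pbest.
    Clipping to [0, 1] is our assumption, not stated in the paper.
    """
    NP = len(fit_pbest)
    rank = np.empty(NP)
    rank[np.argsort(fit_pbest)] = np.arange(1, NP + 1)  # best pbest gets rank 1
    F = np.random.normal(rank / NP, 0.1)  # mean rank(i)/NP, standard deviation 0.1
    return np.clip(F, 0.0, 1.0)
```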
From Equation (15), we can see that the control parameter Fi for each particle is randomly generated by the Gaussian distribution with the mean value set as the ratio between the rank of its personal best position and the population size NP, and the standard deviation set as a small value, namely 0.1. Such an adaptive strategy brings the following benefits to the proposed PCLPSO:
(1)
Different particles have different settings of Fi. Because the rank of each particle's personal best position differs, the mean value of the Gaussian distribution is different for different particles, and thus the learning step Fi is different during the exemplar construction. This is beneficial for further improving the learning diversity of particles and thus profitable for assisting the swarm to get out of local basins.
(2)
Particles with better pbests preserve small Fi, while those with worse pbests have large Fi during the exemplar construction. Specifically, better pbests usually have small ranks, and thus the mean values of the Gaussian distribution are small. Therefore, the learning step Fi for particles with better pbests is expectedly small. This just matches the expectation that the constructed exemplars for those particles with better pbests should not be too close to the randomly selected better positions. This is because the learning range of those better pbests is narrow due to the small number of learning candidates. By contrast, worse pbests usually have large ranks, leading to the mean value of the Gaussian distribution being large. Therefore, the learning step Fi is expectedly large during the construction of guiding exemplars for those particles with worse pbests. This also matches the expectation that particles with worse pbests should learn more from better ones to accelerate their moving to promising areas.
(3)
As a whole, we can see that the devised adaptive scheme for F could implicitly help PCLPSO balance the search diversity and the search effectiveness of particles well, so that the solution space is searched properly to obtain high-accuracy solutions.
Experiments conducted in Section 4.3 validate the usefulness of the designed adaptive strategy for F in helping PCLPSO achieve good performance.

3.2.2. Dynamic Strategy for c

As for the acceleration coefficient c, instead of using fixed values as in the literature [55,56,59,67,69], we develop the following dynamic strategy to generate different values for different particles:
$$c_i = Cauchy(1.6, 0.2) \tag{16}$$
where Cauchy(1.6, 0.2) generates a real number based on the Cauchy distribution with the location parameter set as 1.6 and the scaling factor set as 0.2; ci is the setting of the acceleration coefficient for the ith particle. It deserves attention that the Cauchy distribution is employed here instead of the Gaussian distribution because the Cauchy distribution has a long fat tail and thus can generate more diversified values than the Gaussian distribution. Moreover, we set the location parameter and the scaling factor of the Cauchy distribution as 1.6 and 0.2, respectively, because the literature [55,56,59,67,69] suggests that the acceleration coefficient c of PSOs is usually set within [1.0, 2.2], and with these parameter settings, the Cauchy distribution mostly generates diversified values in such a range.
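A corresponding Python sketch is shown below; since the Cauchy distribution has unbounded support, the clipping range [1.0, 2.2] is our own assumption, based on the range cited above:

```python
import numpy as np

def sample_c(NP):
    """Sample c_i for every particle via Equation (16).

    standard_cauchy() draws from Cauchy(0, 1); shifting by 1.6 and scaling by
    0.2 yields Cauchy(1.6, 0.2). Clipping to [1.0, 2.2] is our assumption:
    the paper cites this range for c but does not say how extreme samples
    from the heavy-tailed Cauchy distribution are handled.
    """
    c = 1.6 + 0.2 * np.random.standard_cauchy(NP)
    return np.clip(c, 1.0, 2.2)
```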
With this dynamic strategy, different particles have different settings of ci and the difference among the values of different particles is relatively large. This is beneficial for further improving the learning diversity of particles, which is very valuable in solving complicated optimization problems with many local basins.
Experiments conducted in Section 4.3 demonstrate the usefulness of the designed dynamic strategy for c in assisting PCLPSO to obtain promising optimization performance.

3.3. Complete Procedure of PCLPSO

Integrating the abovementioned techniques together, we develop the complete PCLPSO with the overall procedure presented in Algorithm 1. Specifically, the algorithm first randomly initializes NP particles and then evaluates their fitness as shown in Line 1. After the initialization, the algorithm proceeds to the main loops of the optimization (Lines 2~18). Before the update of the swarm (Lines 5~17), pbests of all particles are first sorted from the best to the worst (Line 3) and then the inertia weight ω is computed based on Equation (14) (Line 4). Then, for each particle, the control parameter Fi is first calculated according to Equation (15) (Line 6) and then a promising guiding exemplar is constructed (Lines 7~12). After the construction of the guiding exemplar, the acceleration coefficient ci is sampled from the Cauchy distribution (Line 13) and then the particle is updated (Line 14). Subsequently, the updated particle is reevaluated (Line 15) and its pbest is updated accordingly (Line 16). The main loop (Lines 2~18) continuously iterates until the termination condition is satisfied. Finally, when the algorithm terminates, the best solution among all pbests is obtained as the final output (Line 19).
From Algorithm 1, we can see that except for the time used for evaluating the fitness of particles, at each generation, it takes O(NP×log2NP) to sort all pbests, O(NP) to select random better pbests for all particles, and O(NP×D) to construct new guiding exemplars for all particles. Then, it takes O(NP×D) to update all particles and O(NP×D) to update the pbests. On the whole, the time complexity of PCLPSO is O(NP×D). With respect to the space complexity, the same as the classical PSO, PCLPSO needs O(NP×D), O(NP×D), and O(NP×D) to store the velocity vectors, the position vectors, and the personal best position vectors of all particles, respectively.
In summary, it is concluded that PCLPSO remains as efficient as the classical PSO regarding the time complexity and the space occupation.
Algorithm 1: The Complete Procedure of PCLPSO
Input: Population size NP, Total fitness evaluations FEmax
1:Randomly initialize NP particles and compute their fitness, and fes = NP;
2:While (fes ≤ FEmax) do
3:    Sort pbests from the best to the worst;
4:    Calculate inertia weight ω based on Equation (14);
5:    For i = 1:NP do
6:          Calculate the control parameter Fi according to Equation (15);
7:          If pbesti == gbest then
8:                Use gbest as the learning exemplar ei;
9:          Else
10:                Select a better pbestrb randomly from those which are better than pbesti;
11:                Construct the learning exemplar ei according to Equation (11);
12:          End If
13:          Obtain the acceleration coefficient ci according to Equation (16);
14:          Update particle xi based on Equation (12) and Equation (13);
15:          Compute the fitness of xi: f(xi), and fes++;
16:          Update its pbesti;
17:    End For
18:End While
19:Find the global best solution gbest among all pbests;
Output: f(gbest) and gbest
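For concreteness, the following self-contained Python sketch mirrors Algorithm 1 end to end (an illustrative reimplementation, not the authors' code; the bound handling and the clipping of F and c are our assumptions, and the sphere function in the usage example is a stand-in objective rather than a CEC 2017 problem):

```python
import numpy as np

def pclpso(f, D, lb, ub, NP=40, FEmax=None, seed=0):
    """Illustrative PCLPSO following Algorithm 1 (minimization).

    f: fitness function; D: dimensionality; lb, ub: search bounds.
    Clipping F to [0, 1], c to [1.0, 2.2], and x to the bounds is our assumption.
    """
    rng = np.random.default_rng(seed)
    FEmax = FEmax or 10000 * D       # budget used in the paper's experiments
    Tmax = FEmax // NP               # generations implied by the budget

    # Line 1: random initialization and fitness evaluation
    x = rng.uniform(lb, ub, (NP, D))
    v = np.zeros((NP, D))
    pbest = x.copy()
    fit = np.apply_along_axis(f, 1, x)
    fes, t = NP, 0

    while fes <= FEmax:                                  # Line 2
        order = np.argsort(fit)                          # Line 3: sort pbests
        rank = np.empty(NP); rank[order] = np.arange(1, NP + 1)
        omega = 0.9 - 0.7 * t / Tmax                     # Line 4: Eq. (14)
        for i in range(NP):                              # Line 5
            Fi = np.clip(rng.normal(rank[i] / NP, 0.1), 0, 1)   # Line 6: Eq. (15)
            better = np.where(fit < fit[i])[0]           # Lines 7-12: build e_i
            if better.size == 0:
                e = pbest[i]                             # pbest_i is gbest
            else:
                rb = rng.choice(better)
                e = pbest[i] + Fi * (pbest[rb] - pbest[i])      # Eq. (11)
            ci = np.clip(1.6 + 0.2 * rng.standard_cauchy(), 1.0, 2.2)  # Line 13
            v[i] = omega * v[i] + ci * rng.random() * (e - x[i])       # Eq. (12)
            x[i] = np.clip(x[i] + v[i], lb, ub)          # Eq. (13), bounds assumed
            fx = f(x[i]); fes += 1                       # Line 15
            if fx < fit[i]:                              # Line 16: update pbest_i
                pbest[i], fit[i] = x[i].copy(), fx
        t += 1
    best = np.argmin(fit)                                # Line 19
    return pbest[best], fit[best]

# Usage on the sphere function (a stand-in objective, not a CEC 2017 problem)
if __name__ == "__main__":
    sol, val = pclpso(lambda z: np.sum(z**2), D=10, lb=-100, ub=100, FEmax=50000)
    print(val)
```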

4. Numerical Analysis

This section presents extensive experiments to comprehensively validate the optimization effectiveness of PCLPSO. To be specific, Section 4.1 first briefly introduces the used benchmark functions and the compared methods. Then, extensive comparisons between PCLPSO and the compared approaches are displayed in Section 4.2. At last, to perform a deep analysis of the proposed algorithm, investigative experiments are executed to examine the influence of each component on the developed PCLPSO, so that readers can better understand why the developed method achieves good performance.

4.1. Experimental Settings

In the experiments, we utilize the CEC 2017 benchmark problem set [73], which has been commonly used to validate evolutionary algorithms in the literature [75,85,86], to verify the optimization performance of the proposed PCLPSO. As shown in Table 1, this benchmark set contains 29 optimization problems, including 2 unimodal problems, 7 simple multimodal problems, 10 hybrid problems, and 10 composition problems. More detailed information can be found in [73]. It should be mentioned that, to interpret the optimization results more intuitively, for each particle we use the error, computed as the difference between its function value and the true global optimum, as its fitness on each optimization problem.
Firstly, to comprehensively verify the optimization ability of the proposed PCLPSO, seven state-of-the-art and representative PSOs were selected for comparison. Specifically, the selected seven algorithms are TCSPSO [59], AWPSO [84], PSO-DLP [67], GL-PSO [55], CLPSO [56], HCLPSO [62], and CLPSO-LS [69]. The former three algorithms are topology-based learning PSOs, while the latter four are exemplar construction-based learning PSOs.
Secondly, to comprehensively compare PCLPSO with the selected PSO variants, extensive comparison experiments were conducted on the CEC 2017 benchmark set with three dimensionality settings, namely 30-D, 50-D, and 100-D. To make fair comparisons, we set the total number of fitness evaluations (FEmax) as 10,000 × D for all algorithms.
Thirdly, for fairness, except for the swarm size, the key parameters of the selected seven algorithms were set as suggested in the related papers. With respect to the swarm size, since it is usually problem-dependent, we tuned it on the CEC 2017 benchmark set with the three settings of the dimension size for all algorithms. After the preliminary fine-tuning experiments, the settings of the swarm size, along with the settings of the other key parameters of all algorithms, are presented in Table 2.
Fourthly, to comprehensively measure the optimization performance of each algorithm, the median, the mean, and the standard deviation (Std) values of the fitness of the global best solutions found at the end of the associated algorithms over 30 independent runs were employed as the measurements. Furthermore, to tell whether there is a significant difference between the proposed PCLPSO and the compared methods, the Wilcoxon rank sum test at the significance level of α = 0.05 was conducted to compare PCLPSO with each of the compared methods on each problem. To investigate the overall optimization performance of all algorithms on one whole benchmark set, we carried out the Friedman test at the significance level of α = 0.05 to obtain the average rank of each algorithm. It should be mentioned that the above two tests were executed by directly using the API in Matlab.
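Although the authors report using the Matlab API, the same two tests can be reproduced in Python with SciPy; in the following sketch, the result arrays are hypothetical placeholders:

```python
from scipy import stats
import numpy as np

# Hypothetical final-error arrays over 30 independent runs on one problem
pclpso_runs = np.random.rand(30)   # placeholder for PCLPSO's 30 run results
peer_runs = np.random.rand(30)     # placeholder for a compared PSO's results

# Wilcoxon rank sum test (alpha = 0.05) between PCLPSO and one peer method
stat, p = stats.ranksums(pclpso_runs, peer_runs)
print(f"rank sum p-value: {p:.4f}")

# Friedman test across algorithms: each argument holds one algorithm's
# results over the 29 problems; three hypothetical algorithms shown here
algo_a, algo_b, algo_c = (np.random.rand(29) for _ in range(3))
stat, p = stats.friedmanchisquare(algo_a, algo_b, algo_c)
print(f"Friedman p-value: {p:.4f}")
```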
At last, it should be noted that all algorithms were coded under PyCharm CE and run on a server with an Intel(R) Core(TM) i7-10700T CPU @ 2.90 GHz and 8 GB RAM.

4.2. Comparison with State-of-the-Art PSOs

Table 3, Table 4 and Table 5 exhibit the comparison results in terms of the global best fitness between PCLPSO and the seven selected PSO variants on the CEC 2017 benchmark sets with the three dimension sizes (30-D, 50-D, and 100-D), respectively. In the three tables, the bolded p-values denote that the proposed PCLPSO is significantly better than the compared algorithms on the associated problems. Besides, the symbol “+” above the p-values denotes that PCLPSO obtains significant superiority to the corresponding compared PSO variants on the related problems, the symbol “-” above the p-values denotes that PCLPSO obtains significant inferiority to the corresponding compared PSO variants on the associated problems, and the symbol “=” above the p-values denotes that PCLPSO obtains equivalent performance with the corresponding compared PSO variants on the associated problems. Furthermore, “+/=/−” in the three tables counts the numbers of “+”, “=”, and “−” in the whole benchmark set, respectively. Additionally, the average rank of each algorithm obtained from the Friedman test is also presented in the three tables. To clearly observe the statistical comparison results, Table 6 summarizes the statistical comparison results between PCLPSO and the seven compared peer PSO methods on the CEC 2017 benchmark set with the three different dimension sizes in terms of “+/=/−”.
Observing Table 3, we summarize the comparison results between PCLPSO and the seven compared PSOs on the 30-D CEC 2017 functions as follows:
(1)
With respect to the Friedman test results, PCLPSO obtains the smallest rank, namely 1.97, and this rank value is much smaller than those of the seven compared algorithms (at least 2.66). This substantiates that PCLPSO achieves the best overall performance on the 30-D CEC 2017 benchmark set and shows significant overall dominance to the seven compared PSO variants.
(2)
From the perspective of the Wilcoxon rank sum test results, except for HCLPSO and CLPSO, PCLPSO significantly outperforms the compared PSO algorithms on at least 23 problems and shows worse performance on at most three problems. Compared with HCLPSO and CLPSO, PCLPSO shows significant superiority to both on 17 problems and displays inferiority to them on at most nine problems. In particular, we find that PCLPSO shows significant superiority to PSO-DLP on all 29 problems and significantly outperforms AWPSO on 28 problems.
(3)
In view of the comparison results on different kinds of optimization problems, on the two unimodal problems, PCLPSO dominates TCSPSO, AWPSO, and PSO-DLP on both problems, and achieves competitive performance with the other four algorithms (CLPSO-LS, GL-PSO, HCLPSO, and CLPSO). On the seven simple multimodal problems, PCLPSO is significantly superior to PSO-DLP on all of these problems and significantly dominates both AWPSO and CLPSO-LS on six problems. Competing with GL-PSO, PCLPSO shows significant dominance on five problems and never loses. Compared with the other three algorithms, namely TCSPSO, HCLPSO, and CLPSO, PCLPSO achieves highly competitive performance. When it comes to the 10 hybrid problems, PCLPSO shows significant superiority to TCSPSO, AWPSO, CLPSO-LS, and PSO-DLP on all 10 problems. Compared with GL-PSO, HCLPSO, and CLPSO, PCLPSO outperforms them significantly on nine, eight, and six problems, respectively. In terms of the 10 composition problems, PCLPSO significantly dominates both AWPSO and PSO-DLP on all of these problems and outperforms GL-PSO on nine problems. In comparison with TCSPSO, CLPSO-LS, and CLPSO, PCLPSO shows significant superiority to each of them on seven problems. Compared with HCLPSO, PCLPSO beats it on five problems and is defeated on only two.
Taking a look at Table 4, we obtain the following findings from the comparison results between PCLPSO and the seven compared PSOs on the 50-D CEC 2017 functions:
(1)
In regard to the Friedman test results, PCLPSO still gains the lowest rank (2.34) among all algorithms. Moreover, such a rank value is still much lower than those of the seven PSO algorithms (at least 2.97). This substantiates that PCLPSO still performs the best on the whole 50-D CEC 2017 benchmark set, and its overall performance is significantly better than those of the seven compared PSO variants.
(2)
According to the Wilcoxon rank sum test results, PCLPSO presents significant dominance to AWPSO, PSO-DLP, and GL-PSO on 27, 29, and 24 problems, respectively. Compared with TCSPSO and CLPSO-LS, PCLPSO obtains significantly better performance on 20 problems. In competition with HCLPSO, PCLPSO achieves significant superiority on 13 problems and shows inferiority on 12 problems. This demonstrates that PCLPSO obtains highly competitive performance with HCLPSO on the 50-D CEC 2017 benchmark set.
(3)
Regarding the comparison results on different kinds of benchmark problems, on the two unimodal problems, PCLPSO performs much better than AWPSO, PSO-DLP, and GL-PSO on both problems, and performs competitively with TCSPSO, CLPSO-LS, and CLPSO. On the seven simple multimodal problems, PCLPSO performs significantly better than TCSPSO, AWPSO, CLPSO-LS, and PSO-DLP on at least five problems, and obtains highly competitive performance with HCLPSO and CLPSO. On the 10 hybrid problems, PCLPSO exhibits much better performance than the compared PSO variants on at least five problems, while it performs worse than them on at most three functions. In particular, in comparison with AWPSO, PSO-DLP, and GL-PSO, PCLPSO displays significant superiority to them on 9, 10, and 8 problems, respectively. When tackling the 10 composition problems, PCLPSO is better than TCSPSO, AWPSO, CLPSO-LS, PSO-DLP, GL-PSO, and CLPSO on at least seven problems, and performs similarly to HCLPSO.
Lastly, observing Table 5, we make the following observations from the comparison results between PCLPSO and the seven selected PSO methods on the 100-D CEC 2017 problems:
(1)
In terms of the Friedman test results, PCLPSO still obtains the lowest rank value (2.14) among all algorithms. This demonstrates that PCLPSO consistently obtains the best overall performance on the whole 100-D CEC 2017 benchmark set.
(2)
Regarding the Wilcoxon rank sum test results presented in the second-to-last row, PCLPSO outperforms TCSPSO, AWPSO, CLPSO-LS, PSO-DLP, and GL-PSO on 20, 27, 20, 29, and 24 problems, respectively. Competing with HCLPSO and CLPSO, PCLPSO performs significantly better than them on 13 and 17 problems, respectively.
(3)
Regarding the optimization performance on different kinds of problems, on the two unimodal problems, PCLPSO beats AWPSO, PSO-DLP, and GL-PSO on both problems, and performs competitively with TCSPSO, CLPSO-LS, and CLPSO. On the seven simple multimodal problems, PCLPSO significantly outperforms TCSPSO, AWPSO, CLPSO-LS, PSO-DLP, and GL-PSO on at least five problems, and performs worse than them on at most one problem. In comparison with HCLPSO and CLPSO, PCLPSO achieves very competitive performance. On the 10 hybrid problems, PCLPSO obtains significantly better performance than the seven PSO variants on at least five problems and worse performance on at most three problems. On the 10 composition problems, except for HCLPSO, PCLPSO presents its dominance over the other six compared methods on at least seven problems.
In summary, from Table 6, we can see that PCLPSO consistently performs the best and exhibits significant superiority over the seven compared PSO methods on the CEC 2017 problem set under all three settings of dimensionality. This substantiates that PCLPSO is promising for dealing with optimization problems and scales well across various problem sizes. In particular, PCLPSO performs much better than the compared methods on complex problems, such as the hybrid problems and the composition problems, which verifies its good ability in dealing with complicated optimization problems. (A brief sketch of how the two significance tests used throughout these comparisons are typically computed is given below.)
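For readers who wish to reproduce the statistical comparisons reported above, the following minimal Python sketch shows how the Wilcoxon rank-sum test and the Friedman average ranks are typically computed with SciPy. The fitness arrays below are placeholders rather than the paper's data, and the 5% significance level matches common practice for such comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder final fitness values of two algorithms over 30 independent runs
pclpso_runs = rng.normal(1.0, 0.1, size=30)
peer_runs = rng.normal(1.2, 0.1, size=30)

# Wilcoxon rank-sum test on one problem: "+" if the peer is significantly
# worse, "-" if significantly better, "=" otherwise
stat, p = stats.ranksums(pclpso_runs, peer_runs)
print(f"rank-sum p-value: {p:.2e}, significant at 5%: {p < 0.05}")

# Friedman-style average ranks over a whole benchmark set:
# mean_fit[i, j] = mean fitness of algorithm j on problem i (placeholders)
mean_fit = rng.normal(size=(29, 8))            # 29 problems, 8 algorithms
ranks = np.apply_along_axis(stats.rankdata, 1, mean_fit)
print("average rank per algorithm:", np.round(ranks.mean(axis=0), 2))
```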
The superiority of PCLPSO mainly profits from the devised predominant cognitive learning strategy, which constructs promising and effective guiding exemplars to update particles. In addition, the proposed dynamic parameter strategies also contribute to the good performance of PCLPSO by improving the swarm diversity. Through the cohesive cooperation of these techniques, PCLPSO balances the search diversity and the search effectiveness of particles well while exploring the solution space, thereby obtaining satisfactory performance.

4.3. Deep Investigations on PCLPSO

This section presents experiments that investigate the usefulness of each component of PCLPSO, so as to determine what contributes to its good performance.

4.3.1. Effectiveness of the Predominant Cognitive Learning Strategy

First, we carried out experiments to validate the usefulness of the devised predominant cognitive learning strategy. To achieve this goal, we developed three additional versions of PCLPSO for comparison with the proposed PCLPSO. The first removes the predominant cognitive learning strategy and directly uses the personal best position of the updated particle to guide its update; we name this variant "PCLPSO-WPCL". The second randomly picks a pbest from those of the other particles to generate the exemplar in Equation (11), instead of a randomly selected, strictly better one; we name this variant "PCLPSO-Rand". The third uses gbest to construct the guiding exemplar in Equation (11); we name this variant "PCLPSO-Gbest".
After the above preparation, we executed experiments on the 50-D CEC 2017 problem set to compare PCLPSO with the three variants. Table 7 presents the comparison results among these four versions of PCLPSO, reported as the mean fitness values of the global best solutions found at the end of each algorithm over 30 independent runs.
From Table 7, it is found that, with respect to both the Friedman test results and the number of problems where the associated algorithm performs the best, PCLPSO obtains the best overall performance. In particular, PCLPSO-WPCL achieves the worst performance, which demonstrates that the proposed PCL strategy is effective. Compared with PCLPSO-Gbest, the proposed PCLPSO and PCLPSO-Rand perform much better. This is because PCLPSO-Gbest uses only one predominant position, namely the gbest, to construct the guiding exemplar; the diversity of the constructed exemplars is therefore very limited, the learning diversity of particles is low, and the swarm easily falls into local regions. In competition with PCLPSO-Rand, the proposed PCLPSO achieves the best results on more problems (14 problems) and obtains a smaller rank (1.52). This demonstrates the superiority of using a predominant cognitive best position, rather than a random one, to construct the learning exemplar for each particle.
On the whole, the above comparisons demonstrate the effectiveness of the PCL strategy, which constructs effective exemplars to direct the update of particles. A minimal sketch of the strategy and its three ablated variants is given below.
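To make the ablation concrete, the following Python sketch contrasts the four exemplar-construction rules compared in Table 7. Since the exact form of Equation (11) is not repeated in this section, the difference-based update e_i = pbest_i + F·(pbest_k − pbest_i) and all function names are illustrative assumptions (minimization is assumed).

```python
import numpy as np

def build_exemplar(i, pbest, pbest_fit, F, rng, variant="PCL"):
    """Sketch of the guiding-exemplar construction and its ablated variants.

    Assumed form of Equation (11): e_i = pbest_i + F * (pbest_k - pbest_i),
    where pbest_k is the cognitive position the updated particle learns from.
    """
    if variant == "WPCL":                 # PCLPSO-WPCL: no cognitive learning
        return pbest[i].copy()
    if variant == "Gbest":                # PCLPSO-Gbest: learn from gbest only
        k = int(np.argmin(pbest_fit))
    elif variant == "Rand":               # PCLPSO-Rand: any other pbest
        k = rng.choice([j for j in range(len(pbest)) if j != i])
    else:                                 # PCL: a random strictly better pbest
        better = np.flatnonzero(pbest_fit < pbest_fit[i])
        if better.size == 0:              # the best particle keeps its own pbest
            return pbest[i].copy()
        k = rng.choice(better)
    return pbest[i] + F * (pbest[k] - pbest[i])
```

Note how the "Gbest" branch always selects the same index k for every particle, which is exactly the limited-exemplar-diversity effect discussed above.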

4.3.2. Effectiveness of the Adaptive Strategy for F

Subsequently, we carried out experiments to validate the usefulness of the devised adaptive strategy (Equation (15)) for the learning step F. To this end, we set F to different fixed values ranging from 0.1 to 0.9. Table 8 shows the comparison results between PCLPSO with the adaptive F and the versions with fixed settings of F on the 50-D CEC 2017 benchmark set, in terms of the mean fitness values of the global best solutions found at the end of each algorithm over 30 independent runs. The bolded values in this table indicate that the associated algorithms achieve the best performance on the corresponding problems. In addition, the average rank of each configuration of F attained from the Friedman test is also presented.
Taking a careful look at Table 8, we attain the following observations:
(1)
In view of the Friedman test results, the PCLPSO with the adaptive F achieves the lowest rank, and its rank value is much lower than those of the others. This demonstrates that the PCLPSO with the adaptive strategy obtains the best overall performance and verifies the clear superiority of the adaptive strategy over the fixed settings.
(2)
In-depth observations demonstrate that the PCLPSO with the adaptive strategy performs the best on 10 problems, while those with the fixed values obtain the best results on at most 3 problems. Moreover, the results obtained by the adaptive PCLPSO on the other 19 problems are very close to the best results obtained by PCLPSO with the associated optimal F. In particular, we find that the optimal F for PCLPSO is different on different optimization problems.
In conclusion, the adaptive strategy for F not only helps PCLPSO achieve more promising performance, but also relieves PCLPSO of its sensitivity to the parameter F. The effectiveness of the adaptive strategy mainly stems from the fact that it takes the differences among pbests into consideration when setting the learning step F. In this way, both the search effectiveness and the search diversity of particles can be improved to a large extent, and thus PCLPSO with this strategy achieves good performance.
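Since Equation (15) is not reproduced in this section, the sketch below is only an illustrative stand-in: it assumes the learning step grows with the normalized fitness gap between the updated particle's pbest and the selected predominant pbest, which realizes the "difference among pbests" idea described above but is not the paper's exact formula.

```python
import numpy as np

def adaptive_F(fit_i, fit_k, pbest_fit, F_min=0.1, F_max=0.9):
    """Assumed adaptive learning step (illustrative, not Equation (15) itself).

    fit_i: fitness of the updated particle's pbest; fit_k: fitness of the
    selected predominant pbest (fit_k <= fit_i under minimization).
    """
    span = np.max(pbest_fit) - np.min(pbest_fit) + 1e-12  # avoid division by 0
    gap = (fit_i - fit_k) / span           # normalized gap in [0, 1]
    return F_min + (F_max - F_min) * gap   # the bigger the gap, the bigger the step
```

Under this assumption, badly placed particles take large steps toward their predominant exemplars, while well-placed particles refine their neighborhoods with small steps, so F naturally differs across particles and problems.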

4.3.3. Effectiveness of the Dynamic Strategy for c

Lastly, we executed experiments to validate the usefulness of the dynamic acceleration coefficient strategy (Equation (16)). To achieve this goal, we set c to different fixed values ranging from 0.8 to 2.0 with a step size of 0.2. Table 9 displays the comparison results between PCLPSO with the dynamic strategy for c and the versions with fixed c on the 50-D CEC 2017 benchmark set, in terms of the mean fitness values of the global best solutions found at the end of each algorithm over 30 independent runs.
From Table 9, the following observations can be attained:
(1)
In view of the Friedman test results, the PCLPSO with the proposed dynamic strategy achieves the lowest rank (2.72), which is much smaller than the ranks of the PCLPSO versions with fixed values (at least 3.52). This verifies that the PCLPSO with the proposed dynamic strategy obtains the best overall performance and shows the clear superiority of the dynamic strategy over the fixed settings of c.
(2)
Further observation shows that the PCLPSO with the dynamic strategy obtains the best optimization results on 16 problems, while those with fixed values obtain the best performance on at most 4 problems. Moreover, the results obtained by PCLPSO with the dynamic strategy on the other 13 problems are very close to the best results obtained by PCLPSO with the associated optimal c.
Based on the above experiments, the effectiveness of the proposed dynamic strategy is demonstrated. Such a strategy helps PCLPSO achieve promising performance because it can generate diversified values of c, which is beneficial for further improving the search diversity of particles.
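Table 2 lists c_i = Cauchy(1.6, 0.2) for PCLPSO, which suggests that the dynamic strategy draws a fresh acceleration coefficient per particle from a Cauchy distribution. The following sketch follows that reading; the clipping bounds (matching the fixed-value range 0.8–2.0 tested above) are an added safeguard for this sketch, not taken from the paper.

```python
import numpy as np

def sample_c(rng, loc=1.6, scale=0.2, low=0.8, high=2.0):
    """Draw a per-particle acceleration coefficient from Cauchy(loc, scale).

    The heavy tail occasionally yields unusually small or large c, which
    diversifies the update strengths across particles; clipping keeps the
    rare extreme draws inside the range examined in Table 9.
    """
    c = loc + scale * rng.standard_cauchy()
    return float(np.clip(c, low, high))
```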
To summarize, the abovementioned experiments comprehensively demonstrate the effectiveness of the proposed PCLPSO in solving optimization problems. In particular, PCLPSO performs much better than the compared peer methods in tackling complex problems, such as the multimodal, hybrid, and composition problems. Its superiority mainly profits from the proposed PCL strategy and the devised dynamic strategies for the learning step F and the acceleration coefficient c, whose effectiveness was also verified by the experiments.

5. Conclusions

This paper devised a predominant cognitive learning particle swarm optimization (PCLPSO) to tackle optimization problems. Instead of letting particles learn from their own cognitive experience and the social experience of the entire swarm, the proposed PCLPSO constructs an effective guiding exemplar for each particle via the devised predominant cognitive learning (PCL) strategy. Specifically, the guiding exemplar for each particle is constructed by letting its pbest learn from a predominant pbest randomly selected from those that are better than the pbest of the updated particle. In this way, the constructed exemplar for each particle is expectedly more promising than its pbest, and thus the search effectiveness of particles is expectedly improved. Moreover, due to the random selection of the predominant positions, the guiding exemplars constructed for different particles are likely different, and thus the search diversity of particles is expectedly promoted as well. To further promote the search diversity and remove the sensitivity of PCLPSO to the related parameters, two dynamic strategies are particularly designed for the learning step in the exemplar construction and the acceleration coefficient in the velocity update. The proposed PCL strategy and the devised dynamic strategies collaborate cohesively to help PCLPSO balance the search effectiveness and the search diversity of particles while exploring the solution space, thereby obtaining satisfactory performance. A compact sketch of the resulting per-particle update is given below.
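To tie the pieces together, the following sketch shows one plausible per-particle update. The single-exemplar velocity rule v = ω·v + c·r·(e − x) is an assumption made for illustration only; the paper's exact update follows its own equations, with the exemplar, F, and c produced as sketched in Section 4.3.

```python
import numpy as np

def update_particle(x, v, exemplar, w, c, rng, lb=-100.0, ub=100.0):
    """Assumed single-exemplar velocity and position update (illustration)."""
    r = rng.random(x.size)               # uniform random vector in [0, 1)
    v = w * v + c * r * (exemplar - x)   # learn only from the guiding exemplar
    x = np.clip(x + v, lb, ub)           # keep the particle in the search range
    return x, v
```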
Comparative experiments were carried out on the commonly adopted CEC 2017 benchmark set with three settings of dimensionality (30-D, 50-D, and 100-D) to compare the proposed PCLPSO with seven representative and state-of-the-art PSOs. Experimental results substantiated the great effectiveness of the devised PCLPSO and demonstrated that PCLPSO preserves a good scalability to solve different kinds of optimization problems. In particular, it was verified that the proposed PCLPSO preserves a good ability in tackling complex optimization problems, such as the multimodal problems, the hybrid problems, and the composition problems. To determine what contributes to the good performance of PCLPSO, deep investigations on PCLPSO were also carried out. The experimental results demonstrated that the proposed techniques contribute a lot to assisting PCLPSO to obtain good performance.
In the future, we aim to employ the proposed PCLPSO to tackle real-world optimization problems. Since PCLPSO is mainly designed for low-dimensional continuous optimization and does not depend on the mathematical properties of the problem to be solved, we mainly intend to apply it to continuous optimization problems in academic research and real-world engineering.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. Y.J.: Implementation, formal analysis, and writing—original draft preparation. X.G.: Methodology, and writing—review and editing. D.X.: Methodology, and writing—review and editing. Z.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qiao, K.; Yu, K.; Qu, B.; Liang, J.; Song, H.; Yue, C. An Evolutionary Multitasking Optimization Framework for Constrained Multi-objective Optimization Problems. IEEE Trans. Evol. Comput. 2022, 26, 263–277. [Google Scholar] [CrossRef]
  2. Yang, Q.; Chen, W.-N.; Zhang, J. Probabilistic Multimodal Optimization. In Metaheuristics for Finding Multiple Solutions; Preuss, M., Epitropakis, M.G., Li, X., Fieldsend, J.E., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 191–228. [Google Scholar]
  3. Wan, X.; Cao, J.; Zhou, S.; Wang, J.; Zheng, N. Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking. IEEE Trans. Image Process. 2021, 30, 8222–8235. [Google Scholar] [CrossRef] [PubMed]
  4. Shen, J.; Yu, D.; Deng, L.; Dong, X. Fast Online Tracking with Detection Refinement. IEEE Trans. Intell. Transp. Syst. 2017, 19, 162–173. [Google Scholar] [CrossRef]
  5. Xu, L.; Xu, H.; Li, X.; Pan, M. A Defect Inspection for Explosive Cartridge Using an Improved Visual Attention and Image-Weighted Eigenvalue. IEEE Trans. Instrum. Meas. 2019, 69, 1191–1204. [Google Scholar] [CrossRef]
  6. Zhou, X.; Wang, Y.; Zhu, Q.; Mao, J.; Xiao, C.; Lu, X.; Zhang, H. A Surface Defect Detection Framework for Glass Bottle Bottom Using Visual Attention Model and Wavelet Transform. IEEE Trans. Ind. Informatics 2019, 16, 2189–2201. [Google Scholar] [CrossRef]
  7. Hu, L.; Naeem, W.; Rajabally, E.; Watson, G.; Mills, T.; Bhuiyan, Z.; Raeburn, C.; Salter, I.; Pekcan, C. A Multiobjective Optimization Approach for COLREGs-Compliant Path Planning of Autonomous Surface Vehicles Verified on Networked Bridge Simulators. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1167–1179. [Google Scholar] [CrossRef] [Green Version]
  8. Wang, L.; Liu, L.; Qi, J.; Peng, W. Improved Quantum Particle Swarm Optimization Algorithm for Offline Path Planning in AUVs. IEEE Access 2020, 8, 143397–143411. [Google Scholar] [CrossRef]
  9. Chen, W.-N.; Tan, D.-Z.; Yang, Q.; Gu, T.; Zhang, J. Ant Colony Optimization for the Control of Pollutant Spreading on Social Networks. IEEE Trans. Cybern. 2019, 50, 4053–4065. [Google Scholar] [CrossRef]
  10. Yang, Q.; Chen, W.-N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern. 2022, 52, 1960–1976. [Google Scholar] [CrossRef]
  11. Wei, F.-F.; Chen, W.-N.; Yang, Q.; Deng, J.; Luo, X.-N.; Jin, H.; Zhang, J. A Classifier-Assisted Level-Based Learning Swarm Optimizer for Expensive Optimization. IEEE Trans. Evol. Comput. 2020, 25, 219–233. [Google Scholar] [CrossRef]
  12. Liang, J.J.; Qu, B.; Gong, D.; Yue, C. Problem Definitions and Evaluation Criteria for the CEC 2019 Special Session on Multimodal Multiobjective Optimization. Comput. Intell. Lab. Zhengzhou Univ. 2019. [Google Scholar] [CrossRef]
  13. Bisio, I.; Estatico, C.; Fedeli, A.; Lavagetto, F.; Pastorino, M.; Randazzo, A.; Sciarrone, A. Brain Stroke Microwave Imaging by Means of a Newton-Conjugate-Gradient Method in Lp Banach Spaces. IEEE Trans. Microw. Theory Tech. 2018, 66, 3668–3682. [Google Scholar] [CrossRef]
  14. Jain, P.; Kakade, S.M.; Kidambi, R.; Netrapalli, P.; Sidford, A. Accelerating Stochastic Gradient Descent for Least Squares Regression. In Proceedings of the 31st Conference on Learning Theory, Stockholm, Sweden, 6–9 July 2018; PMLR, Inc.: Brookline, MA, USA; pp. 545–604. [Google Scholar]
  15. Lera, D.; Posypkin, M.; Sergeyev, Y.D. Space-filling curves for numerical approximation and visualization of solutions to systems of nonlinear inequalities with applications in robotics. Appl. Math. Comput. 2020, 390, 125660. [Google Scholar] [CrossRef]
  16. Strongin, R.G.; Sergeyev, Y.D. Global multidimensional optimization on parallel computer. Parallel Comput. 1992, 18, 1259–1273. [Google Scholar] [CrossRef]
  17. Chang, D.; Sun, S.; Zhang, C. An Accelerated Linearly Convergent Stochastic L-BFGS Algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3338–3346. [Google Scholar] [CrossRef]
  18. Karimi, S.; Vavasis, S. IMRO: A Proximal Quasi-Newton Method for Solving ℓ1-Regularized Least Squares Problems. SIAM J. Optim. 2017, 27, 583–615. [Google Scholar] [CrossRef] [Green Version]
  19. Yu, P.; Pong, T.K.; Lu, Z. Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems. SIAM J. Optim. 2021, 31, 2024–2054. [Google Scholar] [CrossRef]
  20. Malik, M.; Mamat, M.; Abas, S.S. Convergence Analysis of a New Coefficient Conjugate Gradient Method under Exact Line Search. Int. J. Adv. Sci. Technol. 2020, 29, 187–198. [Google Scholar]
  21. Vaswani, S.; Mishkin, A.; Laradji, I.; Schmidt, M.; Gidel, G.; Lacoste-Julien, S. Painless Stochastic Gradient: Interpolation, Line-search, and Convergence Rates. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32, pp. 1–14. [Google Scholar]
  22. Babaie-Kafaki, S.; Rezaee, S. A randomized nonmonotone adaptive trust region method based on the simulated annealing strategy for unconstrained optimization. Int. J. Intell. Comput. Cybern. 2019, 12, 389–399. [Google Scholar] [CrossRef]
  23. Shani, L.; Efroni, Y.; Mannor, S. Adaptive Trust Region Policy Optimization: Global Convergence and Faster Rates for Regularized MDPs. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 5668–5675. [Google Scholar]
  24. Yang, Q.; Bian, Y.-W.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. Stochastic Triad Topology Based Particle Swarm Optimization for Global Numerical Optimization. Mathematics 2022, 10, 1032. [Google Scholar] [CrossRef]
  25. Yang, Q.; Hua, L.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems. Mathematics 2022, 10, 761. [Google Scholar] [CrossRef]
  26. Yang, Q.; Xie, H.; Chen, W.; Zhang, J. Multiple Parents Guided Differential Evolution for Large Scale Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 3549–3556. [Google Scholar]
  27. Yu, W.-J.; Ji, J.-Y.; Gong, Y.-J.; Yang, Q.; Zhang, J. A tri-objective differential evolution approach for multimodal optimization. Inf. Sci. 2018, 423, 1–23. [Google Scholar] [CrossRef]
  28. Seo, J.-H.; Im, C.-H.; Heo, C.-G.; Kim, J.-K.; Jung, H.-K.; Lee, C.-G. Multimodal function optimization based on particle swarm optimization. IEEE Trans. Magn. 2006, 42, 1095–1098. [Google Scholar] [CrossRef]
  29. Ji, X.; Zhang, Y.; Gong, D.; Sun, X.; Guo, Y. Multisurrogate-Assisted Multitasking Particle Swarm Optimization for Expensive Multimodal Problems. IEEE Trans. Cybern. 2021, 1–15. [Google Scholar] [CrossRef]
  30. Yang, Q.; Chen, W.-N.; Yu, Z.; Gu, T.; Li, Y.; Zhang, H.; Zhang, J. Adaptive Multimodal Continuous Ant Colony Optimization. IEEE Trans. Evol. Comput. 2016, 21, 191–205. [Google Scholar] [CrossRef] [Green Version]
  31. Yang, Q.; Chen, W.-N.; Li, Y.; Chen, C.L.P.; Xu, X.-M.; Zhang, J. Multimodal Estimation of Distribution Algorithms. IEEE Trans. Cybern. 2016, 47, 636–650. [Google Scholar] [CrossRef] [Green Version]
  32. Yue, C.; Qu, B.; Liang, J. A Multiobjective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multiobjective Problems. IEEE Trans. Evol. Comput. 2017, 22, 805–817. [Google Scholar] [CrossRef]
  33. Qu, B.; Li, G.; Yan, L.; Liang, J.; Yue, C.; Yu, K.; Crisalle, O.D. A grid-guided particle swarm optimizer for multimodal multi-objective problems. Appl. Soft Comput. 2022, 117, 108381. [Google Scholar] [CrossRef]
  34. Jones, D.R. A Taxonomy of Global Optimization Methods Based on Response Surfaces. J. Glob. Optim. 2001, 21, 345–383. [Google Scholar] [CrossRef]
  35. Kvasov, D.E.; Pizzuti, C.; Sergeyev, Y.D. Local tuning and partition strategies for diagonal GO methods. Numer. Math. 2003, 94, 93–106. [Google Scholar] [CrossRef] [Green Version]
  36. Sergeyev, Y.D.; Grishagin, V.A. Parallel Asynchronous Global Search and the Nested Optimization Scheme. J. Comput. Anal. Appl. 2001, 3, 123–145. [Google Scholar] [CrossRef]
  37. Paulavičius, R.; Sergeyev, Y.D.; Kvasov, D.E.; Žilinskas, J. Globally-biased BIRECT Algorithm with Local Accelerators for Expensive Global Optimization. Expert Syst. Appl. 2020, 144, 113052. [Google Scholar]
  38. Vikhar, P.A. Evolutionary Algorithms: A Critical Review and Its Future Prospects. In Proceedings of the International Conference on Global Trends in Signal Processing, Information Computing and Communication, Jalgaon, India, 22–24 December 2016; pp. 261–265. [Google Scholar]
  39. Sloss, A.N.; Gustafson, S. 2019 Evolutionary Algorithms Review. In Genetic Programming Theory and Practice XVII; Banzhaf, W., Goodman, E., Sheneman, L., Trujillo, L., Worzel, B., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 307–344. [Google Scholar]
  40. Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: Berlin, Germany, 2007; Volume 5. [Google Scholar]
  41. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379. [Google Scholar] [CrossRef] [Green Version]
  42. Kvasov, D.; Sergeyev, Y. Deterministic approaches for solving practical black-box global optimization problems. Adv. Eng. Softw. 2015, 80, 58–66. [Google Scholar] [CrossRef] [Green Version]
  43. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453. [Google Scholar] [CrossRef] [Green Version]
  44. Sergeyev, Y.D.; Kvasov, D.E. A deterministic global optimization using smooth diagonal auxiliary functions. Commun. Nonlinear Sci. Numer. Simul. 2014, 21, 99–111. [Google Scholar] [CrossRef] [Green Version]
  45. Zhang, J.; Yang, F.; Weng, X. An Evolutionary Scatter Search Particle Swarm Optimization Algorithm for the Vehicle Routing Problem with Time Windows. IEEE Access 2018, 6, 63468–63485. [Google Scholar] [CrossRef]
  46. Yang, Q.; Chen, W.-N.; Gu, T.; Zhang, H.; Yuan, H.; Kwong, S.; Zhang, J. A Distributed Swarm Optimizer with Adaptive Communication for Large-Scale Optimization. IEEE Trans. Cybern. 2020, 50, 3393–3408. [Google Scholar] [CrossRef]
  47. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  48. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  49. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  50. Okulewicz, M.; Mańdziuk, J. The impact of particular components of the PSO-based algorithm solving the Dynamic Vehicle Routing Problem. Appl. Soft Comput. 2017, 58, 586–604. [Google Scholar] [CrossRef]
  51. Han, H.; Lu, W.; Hou, Y.; Qiao, J. An Adaptive-PSO-Based Self-Organizing RBF Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 104–117. [Google Scholar] [CrossRef]
  52. Guo, L. Research on Anomaly Detection in Massive Multimedia Data Transmission Network Based on Improved PSO Algorithm. IEEE Access 2020, 8, 95368–95377. [Google Scholar] [CrossRef]
  53. Wu, J.; Song, C.; Ma, J.; Wu, J.; Han, G. Reinforcement Learning and Particle Swarm Optimization Supporting Real-Time Rescue Assignments for Multiple Autonomous Underwater Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 1–14. [Google Scholar] [CrossRef]
  54. Wang, P.; Lei, Y.; Agbedanu, P.R.; Zhang, Z. Makespan-Driven Workflow Scheduling in Clouds Using Immune-Based PSO Algorithm. IEEE Access 2020, 8, 29281–29290. [Google Scholar] [CrossRef]
  55. Gong, Y.-J.; Li, J.-J.; Zhou, Y.; Li, Y.; Chung, H.S.-H.; Shi, Y.-H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2015, 46, 2277–2290. [Google Scholar] [CrossRef] [Green Version]
  56. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  57. Zhan, Z.; Zhang, J.; Li, Y.; Shi, Y. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847. [Google Scholar] [CrossRef] [Green Version]
  58. Mendes, R.; Kennedy, J.; Neves, J. The Fully Informed Particle Swarm: Simpler, Maybe Better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  59. Zhang, X.; Liu, H.; Zhang, T.; Wang, Q.; Wang, Y.; Tu, L. Terminal crossover and steering-based particle swarm optimization algorithm with disturbance. Appl. Soft Comput. 2019, 85, 105841. [Google Scholar] [CrossRef]
  60. Yang, Q.; Chen, W.-N.; Gu, T.; Zhang, H.; Deng, J.D.; Li, Y.; Zhang, J. Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern. 2016, 47, 2896–2910. [Google Scholar] [CrossRef] [Green Version]
  61. Zhang, K.; Huang, Q.; Zhang, Y. Enhancing comprehensive learning particle swarm optimization with local optima topology. Inf. Sci. 2018, 471, 1–18. [Google Scholar] [CrossRef]
  62. Lynn, N.; Suganthan, P. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  63. Wang, Z.J.; Zhan, Z.H.; Du, K.J.; Yu, Z.W.; Zhang, J. Orthogonal Learning Particle Swarm Optimization with Variable Relocation for Dynamic Optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 594–600. [Google Scholar]
  64. Guo, Q.; Ba, J.; Luo, C.; Xiao, S. Stability-enhanced prestack seismic inversion using hybrid orthogonal learning particle swarm optimization. J. Pet. Sci. Eng. 2020, 192, 107313. [Google Scholar] [CrossRef]
  65. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Global genetic learning particle swarm optimization with diversity enhancement by ring topology. Swarm Evol. Comput. 2018, 44, 571–583. [Google Scholar] [CrossRef]
  66. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Shi, Y. Particle swarm optimizer with two differential mutation. Appl. Soft Comput. 2017, 61, 314–330. [Google Scholar] [CrossRef]
  67. Shen, Y.; Wei, L.; Zeng, C.; Chen, J. Particle Swarm Optimization with Double Learning Patterns. Comput. Intell. Neurosci. 2016, 2016, 3049632. [Google Scholar] [CrossRef] [Green Version]
  68. Zhang, X.; Wang, X.; Kang, Q.; Cheng, J. Differential mutation and novel social learning particle swarm optimization algorithm. Inf. Sci. 2018, 480, 109–129. [Google Scholar] [CrossRef]
  69. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive Learning Particle Swarm Optimization Algorithm with Local Search for Multimodal Functions. IEEE Trans. Evol. Comput. 2018, 23, 718–731. [Google Scholar] [CrossRef]
  70. Song, G.W.; Yang, Q.; Gao, X.D.; Ma, Y.Y.; Lu, Z.Y.; Zhang, J. An Adaptive Level-Based Learning Swarm Optimizer for Large-Scale Optimization. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 152–159. [Google Scholar]
  71. Xie, H.-Y.; Yang, Q.; Hu, X.-M.; Chen, W.-N. Cross-generation Elites Guided Particle Swarm Optimization for Large Scale Optimization. In Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar]
  72. Yang, Q.; Li, Y.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. An Adaptive Covariance Scaling Estimation of Distribution Algorithm. Mathematics 2021, 9, 3207. [Google Scholar] [CrossRef]
  73. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Nanyang Technological University: Singapore, November 2016. [Google Scholar]
  74. Sennan, S.; Ramasubbareddy, S.; Balasubramaniyam, S.; Nayyar, A.; Abouhawwash, M.; Hikal, N.A. T2FL-PSO: Type-2 Fuzzy Logic-Based Particle Swarm Optimization Algorithm Used to Maximize the Lifetime of Internet of Things. IEEE Access 2021, 9, 63966–63979. [Google Scholar] [CrossRef]
  75. Meng, Z.; Zhong, Y.; Mao, G.; Liang, Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2021, 586, 176–191. [Google Scholar] [CrossRef]
  76. Wang, S.; Liu, G.; Gao, M.; Cao, S.; Guo, A.; Wang, J. Heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators. Inf. Sci. 2020, 540, 175–201. [Google Scholar] [CrossRef]
  77. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Adaptive comprehensive learning particle swarm optimization with cooperative archive. Appl. Soft Comput. 2019, 77, 533–546. [Google Scholar] [CrossRef]
  78. Liu, R.; Wang, L.; Ma, W.; Mu, C.; Jiao, L. Quadratic interpolation based orthogonal learning particle swarm optimization algorithm. Nat. Comput. 2013, 13, 17–37. [Google Scholar] [CrossRef]
  79. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.-L.; Zhan, Z.-H. Triple Archives Particle Swarm Optimization. IEEE Trans. Cybern. 2019, 50, 4862–4875. [Google Scholar] [CrossRef]
  80. Zou, J.; Deng, Q.; Zheng, J.; Yang, S. A close neighbor mobility method using particle swarm optimizer for solving multimodal optimization problems. Inf. Sci. 2020, 519, 332–347. [Google Scholar] [CrossRef]
  81. Blackwell, T.; Kennedy, J. Impact of Communication Topology in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2018, 23, 689–702. [Google Scholar] [CrossRef] [Green Version]
  82. Liu, Q.; Wei, W.; Yuan, H.; Zhan, Z.-H.; Li, Y. Topology selection for particle swarm optimization. Inf. Sci. 2016, 363, 154–173. [Google Scholar] [CrossRef]
  83. Wang, L.; Yang, B.; Orchard, J. Particle swarm optimization using dynamic tournament topology. Appl. Soft Comput. 2016, 48, 584–596. [Google Scholar] [CrossRef]
  84. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A Novel Sigmoid-Function-Based Adaptive Weighted Particle Swarm Optimizer. IEEE Trans. Cybern. 2019, 51, 1085–1093. [Google Scholar] [CrossRef]
  85. Varna, F.T.; Husbands, P. HIDMS-PSO: A New Heterogeneous Improved Dynamic Multi-Swarm PSO Algorithm. In Proceedings of the IEEE Symposium Series on Computational Intelligence, Canberra, Australia, 1–4 December 2020; pp. 473–480. [Google Scholar]
  86. Zhang, H.; Yuan, M.; Liang, Y.; Liao, Q. A novel particle swarm optimization based on prey–predator relationship. Appl. Soft Comput. 2018, 68, 202–218. [Google Scholar] [CrossRef]
Table 1. The summarized characteristics of the CEC 2017 benchmark problems.
Category | F | Function | Fi* = Fi(x*)
Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100
 | F3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Functions | F4 | Shifted and Rotated Rosenbrock's Function | 400
 | F5 | Shifted and Rotated Rastrigin's Function | 500
 | F6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600
 | F7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
 | F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
 | F9 | Shifted and Rotated Levy Function | 900
 | F10 | Shifted and Rotated Schwefel's Function | 1000
Hybrid Functions | F11 | Hybrid Function 1 (N = 3) | 1100
 | F12 | Hybrid Function 2 (N = 3) | 1200
 | F13 | Hybrid Function 3 (N = 3) | 1300
 | F14 | Hybrid Function 4 (N = 4) | 1400
 | F15 | Hybrid Function 5 (N = 4) | 1500
 | F16 | Hybrid Function 6 (N = 4) | 1600
 | F17 | Hybrid Function 6 (N = 5) | 1700
 | F18 | Hybrid Function 6 (N = 5) | 1800
 | F19 | Hybrid Function 6 (N = 5) | 1900
 | F20 | Hybrid Function 6 (N = 6) | 2000
Composition Functions | F21 | Composition Function 1 (N = 3) | 2100
 | F22 | Composition Function 2 (N = 3) | 2200
 | F23 | Composition Function 3 (N = 4) | 2300
 | F24 | Composition Function 4 (N = 4) | 2400
 | F25 | Composition Function 5 (N = 5) | 2500
 | F26 | Composition Function 6 (N = 5) | 2600
 | F27 | Composition Function 7 (N = 6) | 2700
 | F28 | Composition Function 8 (N = 6) | 2800
 | F29 | Composition Function 9 (N = 3) | 2900
 | F30 | Composition Function 10 (N = 3) | 3000
Search Range: [−100, 100]^D
Table 2. Parameter settings of PCLPSO and the seven compared PSO variants.
Algorithm | D | Parameter Settings
PCLPSO | 30 | NP = 80; ω = 0.9~0.2, ci = Cauchy(1.6, 0.2)
PCLPSO | 50 | NP = 80
PCLPSO | 100 | NP = 150
TCSPSO | 30 | NP = 150; ω = 0.9~0.2, c1 = c2 = 2
TCSPSO | 50 | NP = 150
TCSPSO | 100 | NP = 80
AWPSO | 30 | NP = 200; ω = 0.9~0.4, a = 0.000035 m, b = 0.5, c = 0, d = 1.5
AWPSO | 50 | NP = 200
AWPSO | 100 | NP = 200
CLPSO-LS | 30 | NP = 50; ω = 0.9~0.4, c = 1.49445, β = 1/3, θ = 0.94
CLPSO-LS | 50 | NP = 50
CLPSO-LS | 100 | NP = 50
PSO-DLP | 30 | NP = 200; ω = 0.9~0.3, c1s = c2s = c1m = c2m = 2.0, L = 50
PSO-DLP | 50 | NP = 200
PSO-DLP | 100 | NP = 200
GL-PSO | 30 | NP = 100; ω = 0.7298, c = 1.49618, pm = 0.01, sg = 7
GL-PSO | 50 | NP = 30
GL-PSO | 100 | NP = 50
HCLPSO | 30 | NP = 200; ω = 0.99~0.2, c1 = 2.5~0.5, c2 = 0.5~2.5, c = 3~1.5
HCLPSO | 50 | NP = 160
HCLPSO | 100 | NP = 160
CLPSO | 30 | NP = 40; ω = 0.9~0.4, c = 1.49445, m = 7
Table 3. Global fitness comparisons between PCLPSO and the seven selected PSO methods on the CEC 2017 benchmark set with the dimensionality set as 30.
F | Category | Quality | PCLPSO | TCSPSO | AWPSO | CLPSO-LS | PSO-DLP | GL-PSO | HCLPSO | CLPSO
F1Unimodal FunctionsMedian6.17 × 1022.08 × 1037.16 × 1091.42 × 1042.20 × 1092.10 × 1035.49 × 1037.62 × 102
Mean1.33 × 1033.78 × 1037.79 × 1091.67 × 1043.17 × 1092.34 × 1049.72 × 1033.88 × 102
Std1.76 × 1034.43 × 1034.03 × 1097.91 × 1032.53 × 1091.08 × 1058.04 × 1037.43 × 102
p-value-6.58 × 10−3 +3.81 × 10−15 +7.95 × 10−15 +4.93 × 10−9 +2.66 × 10−1 =6.46 × 10−7+9.21 × 10−3 −
F3Median1.78 × 1032.23 × 1041.39 × 1046.80 × 10−117.80 × 1041.22 × 1045.86 × 1011.08 × 102
Mean1.97 × 1032.32 × 1042.03 × 1044.28 × 1037.90 × 1041.25 × 1041.21 × 1024.41 × 104
Std8.25 × 1026.36 × 1031.62 × 1041.64 × 1041.14 × 1046.13 × 1031.50 × 1021.02 × 104
p-value-1.42 × 10−25 +7.13 × 10−8 +4.45 × 10−1 =4.33 × 10−42 +4.21 × 10−13 +1.66 × 10−17 −1.78 × 10−30 +
F1–F3 | +/=/− | – | 2/0/0 | 2/0/0 | 1/1/0 | 2/0/0 | 1/1/0 | 1/0/1 | 1/0/1
F4Simple Multimodal FunctionsMedian1.56 × 1021.29 × 1026.71 × 1028.90 × 1011.10 × 1032.10 × 1028.56 × 1011.72 × 102
Mean1.55 × 1021.28 × 1027.12 × 1028.90 × 1011.26 × 1032.15 × 1028.66 × 1019.12 × 101
Std1.21 × 1015.63 × 1013.40 × 1024.13 × 10−17.70 × 1025.81 × 1018.78 × 1001.59 × 100
p-value-1.17 × 10−2 −1.48 × 10−12 +8.80 × 10−37 −9.96 × 10−11 +8.58 × 10−7 +1.01 × 10−32 −8.45 × 10−36 −
F5Median6.43 × 1017.26 × 1011.47 × 1022.17 × 1023.67 × 1026.50 × 1016.31 × 1012.39 × 101
Mean5.04 × 1017.16 × 1011.44 × 1022.18 × 1023.59 × 1027.33 × 1016.72 × 1017.52 × 101
Std2.51 × 1012.19 × 1013.16 × 1011.23 × 1012.66 × 1013.00 × 1011.66 × 1016.81 × 100
p-value-9.29 × 10−4 +1.83 × 10−18 +3.61 × 10−39 +1.82 × 10−47 +2.25 × 10−3 +3.41 × 10−3 +2.54 × 10−6 +
F6Median5.20 × 10−21.69 × 1021.39 × 1014.32 × 10−17.45 × 1011.24 × 1003.36 × 10−43.87 × 10−1
Mean6.69 × 10−26.97 × 1021.58 × 1019.35 × 10−17.44 × 1011.49 × 1001.11 × 10−32.86 × 10−6
Std6.30 × 10−21.40 × 1016.23 × 1001.06 × 1004.69 × 1009.37 × 10−11.62 × 10−31.94 × 10−6
p-value-9.23 × 10−1 =5.65 × 10−20 +3.70 × 10−5 +4.32 × 10−63 +2.10 × 10−11 +3.90 × 10−7 −2.69 × 10−7 −
F7Median1.10 × 1021.18 × 1021.66 × 1022.37 × 1025.73 × 1021.02 × 1029.32 × 1017.11 × 101
Mean1.10 × 1021.24 × 1021.91 × 1022.33 × 1025.75 × 1021.00 × 1029.38 × 1019.05 × 101
Std1.63 × 1012.07 × 1019.51 × 1011.92 × 1016.30 × 1012.11 × 1011.95 × 1018.02 × 100
p-value-4.83 × 10−3 +2.41 × 10−5 +2.28 × 10−34 +2.09 × 10−43 +5.37 × 10−2 =1.20 × 10−3 −3.21 × 10−7 −
F8Median5.97 × 1017.32 × 1011.37 × 1022.25 × 1023.39 × 1026.17 × 1016.47 × 1011.99 × 101
Mean4.91 × 1017.25 × 1011.36 × 1022.22 × 1023.32 × 1026.39 × 1016.49 × 1018.17 × 101
Std2.72 × 1011.91 × 1013.38 × 1011.17 × 1012.27 × 1011.93 × 1011.87 × 1011.00 × 101
p-value-2.85 × 10−4 +8.32 × 10−16 +1.40 × 10−38 +4.40 × 10−46 +1.83 × 10−2 +1.14 × 10−2 +7.28 × 10−8 +
F9Median5.99 × 1002.86 × 1022.19 × 1031.90 × 1011.17 × 1044.57 × 1015.00 × 1019.31 × 100
Mean6.10 × 1003.87 × 1022.59 × 1032.29 × 1011.13 × 1048.06 × 1018.07 × 1016.76 × 102
Std4.09 × 1004.05 × 1021.70 × 1032.78 × 1011.30 × 1031.15 × 1021.43 × 1022.85 × 102
p-value-3.28 × 10−6 +1.78 × 10−11 +1.83 × 10−3 +3.86 × 10−48 +7.85 × 10−4 +6.02 × 10−3 +1.12 × 10−18 +
F10Median5.79 × 1032.97 × 1033.91 × 1036.42 × 1037.27 × 1035.81 × 1032.83 × 1035.68 × 103
Mean5.83 × 1033.04 × 1033.88 × 1036.26 × 1037.21 × 1035.58 × 1032.88 × 1032.94 × 103
Std3.68 × 1027.37 × 1027.23 × 1026.47 × 1022.65 × 1021.26 × 1035.25 × 1022.82 × 102
p-value-4.55 × 10−26 −4.12 × 10−19 −2.55 × 10−3 +9.14 × 10−24 +2.91 × 10−1 =5.37 × 10−33 −4.10 × 10−40 −
F4–F10 | +/=/− | – | 4/1/2 | 6/0/1 | 6/0/1 | 7/0/0 | 5/2/0 | 3/0/4 | 3/0/4
F11Hybrid FunctionsMedian5.04 × 1018.89 × 1014.20 × 1021.82 × 1022.67 × 1031.34 × 1026.80 × 1017.91 × 101
Mean6.04 × 1019.24 × 1014.84 × 1021.81 × 1022.64 × 1031.55 × 1028.31 × 1011.16 × 102
Std3.43 × 1014.40 × 1012.44 × 1024.10 × 1016.51 × 1028.82 × 1014.40 × 1011.75 × 101
p-value-2.62 × 10−3 +2.66 × 10−13 +6.84 × 10−18 +1.73 × 10−29 +9.07 × 10−7 +2.96 × 10−2 +8.89 × 10−11 +
F12Median8.65 × 1043.68 × 1052.11 × 1084.69 × 1056.18 × 1084.02 × 1061.86 × 1051.59 × 105
Mean2.26 × 1055.85 × 1053.11 × 1089.27 × 1056.23 × 1081.08 × 1072.81 × 1052.01 × 106
Std4.63 × 1057.80 × 1053.57 × 1088.92 × 1053.20 × 1082.90 × 1072.64 × 1051.12 × 106
p-value-3.47 × 10−2 +1.29 × 10−5 +3.29 × 10−4 +2.77 × 10−15 +5.04 × 10−2 =5.80 × 10−1 =5.05 × 10−11 +
F13Median1.35 × 1031.34 × 1044.57 × 1066.55 × 1039.07 × 1071.07 × 1044.02 × 1041.08 × 103
Mean2.01 × 1031.73 × 1041.83 × 1071.74 × 1048.54 × 1076.13 × 1043.36 × 1043.40 × 103
Std1.71 × 1031.67 × 1043.01 × 1072.22 × 1045.47 × 1071.52 × 1052.53 × 1041.45 × 103
p-value-6.12 × 10−6 +1.52 × 10−3 +3.51 × 10−4 +7.28 × 10−12 +3.63 × 10−2 +5.76 × 10−9 +1.22 × 10−3 +
F14Median6.18 × 1028.61 × 1033.66 × 1041.11 × 1053.49 × 1051.43 × 1041.78 × 1041.70 × 103
Mean1.06 × 1033.31 × 1046.71 × 1041.06 × 1053.56 × 1052.92 × 1042.11 × 1044.93 × 104
Std9.76 × 1027.29 × 1046.16 × 1045.41 × 1041.88 × 1054.70 × 1042.04 × 1043.46 × 104
p-value-1.92 × 10−2 +2.22 × 10−7 +2.66 × 10−15 +8.66 × 10−15 +1.76 × 10−3 +1.54 × 10−6 +2.66 × 10−10 +
F15Median1.17 × 1039.37 × 1036.85 × 1044.13 × 1048.31 × 1063.86 × 1031.13 × 1045.20 × 102
Mean1.63 × 1031.30 × 1041.18 × 1053.60 × 1041.00 × 1076.31 × 1031.51 × 1044.59 × 102
Std1.56 × 1031.21 × 1041.30 × 1058.78 × 1038.20 × 1067.20 × 1031.28 × 1042.49 × 102
p-value-3.84 × 10−6 +7.74 × 10−6 +7.12 × 10−29 +9.52 × 10−9 +9.53 × 10−4 +4.57 × 10−7 +1.47 × 10−4 −
F16Median5.61 × 1028.54 × 1021.04 × 1031.29 × 1032.52 × 1031.16 × 1036.92 × 1025.07 × 102
Mean5.22 × 1028.26 × 1021.08 × 1031.14 × 1032.50 × 1031.11 × 1036.74 × 1026.24 × 102
Std2.47 × 1023.71 × 1023.43 × 1024.17 × 1022.25 × 1024.17 × 1022.68 × 1021.67 × 102
p-value-4.28 × 10−4 +1.17 × 10−9 +2.96 × 10−9 +7.86 × 10−39 +9.41 × 10−9 +2.65 × 10−2 +6.66 × 10−2 =
F17Median6.94 × 1012.90 × 1025.78 × 1026.85 × 1021.02 × 1033.35 × 1022.98 × 1028.12 × 101
Mean8.91 × 1012.96 × 1025.58 × 1029.36 × 1029.90 × 1023.25 × 1022.97 × 1021.88 × 102
Std4.72 × 1011.63 × 1021.84 × 1026.50 × 1021.57 × 1021.73 × 1021.56 × 1026.70 × 101
p-value-1.07 × 10−8 +1.40 × 10−19 +1.90 × 10−9 +4.13 × 10−37 +1.23 × 10−9 +3.22 × 10−9 +1.42 × 10−8 +
F18Median7.21 × 1042.75 × 1056.51 × 1056.81 × 1054.18 × 1061.44 × 1051.58 × 1057.96 × 104
Mean8.30 × 1044.08 × 1051.23 × 1062.44 × 1064.66 × 1062.10 × 1052.01 × 1052.46 × 105
Std4.04 × 1043.70 × 1052.05 × 1064.19 × 1062.40 × 1062.01 × 1051.26 × 1051.58 × 105
p-value-1.23 × 10−5 +3.33 × 10−3 +3.20 × 10−3 +5.88 × 10−15 +1.24 × 10−3 +9.00 × 10−6 +1.02 × 10−6 +
F19Median3.71 × 1035.01 × 1033.81 × 1053.51 × 1042.58 × 1075.94 × 1031.07 × 1044.35 × 103
Mean4.24 × 1031.19 × 1047.01 × 1063.51 × 1042.31 × 1079.63 × 1031.77 × 1041.35 × 102
Std3.20 × 1031.39 × 1041.81 × 1071.98 × 1041.28 × 1079.84 × 1032.03 × 1048.51 × 101
p-value-4.76 × 10−3 +3.86 × 10−2 +1.08 × 10−11 +4.67 × 10−14 +6.01 × 10−3 +6.74 × 10−4 +2.74 × 10−9 −
F20Median1.79 × 1022.67 × 1023.76 × 1025.99 × 1027.86 × 1022.07 × 1021.78 × 1021.39 × 102
Mean1.95 × 1022.94 × 1023.83 × 1025.87 × 1027.79 × 1022.56 × 1021.89 × 1021.88 × 102
Std8.82 × 1011.67 × 1021.30 × 1021.57 × 1029.66 × 1011.32 × 1021.20 × 1026.68 × 101
p-value-5.36 × 10−3 +1.37 × 10−8 +3.19 × 10−17 +2.83 × 10−32 +3.92 × 10−2 +8.32 × 10−1 =7.64 × 10−1 =
F11–F20 | +/=/− | – | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 9/1/0 | 8/2/0 | 6/2/2
F21Composition FunctionsMedian2.21 × 1022.78 × 1023.26 × 1024.00 × 1025.36 × 1022.83 × 1022.73 × 1022.25 × 102
Mean2.36 × 1022.78 × 1023.35 × 1024.02 × 1025.38 × 1023.02 × 1022.72 × 1022.83 × 102
Std2.43 × 1011.98 × 1013.09 × 1017.91 × 1002.07 × 1015.42 × 1011.56 × 1012.58 × 101
p-value-8.89 × 10−10 +6.25 × 10−20 +4.39 × 10−41 +3.11 × 10−50 +1.02 × 10−7 +4.30 × 10−9 +7.62 × 10−10 +
F22Median1.00 × 1021.03 × 1024.07 × 1036.99 × 1034.16 × 1031.00 × 1021.00 × 1021.00 × 102
Mean1.02 × 1021.20 × 1033.61 × 1036.95 × 1034.08 × 1031.02 × 1021.88 × 1028.25 × 102
Std1.94 × 1001.63 × 1031.47 × 1033.16 × 1025.78 × 1023.77 × 1004.81 × 1021.20 × 103
p-value-4.92 × 10−4 +6.07 × 10−19 +5.53 × 10−71 +1.74 × 10−42 +3.24 × 10−1 =3.27 × 10−1 =1.68 × 10−3 +
F23Median3.85 × 1024.36 × 1025.46 × 1025.61 × 1029.41 × 1025.75 × 1024.49 × 1023.91 × 102
Mean3.88 × 1024.34 × 1025.50 × 1025.57 × 1029.33 × 1025.56 × 1024.51 × 1024.45 × 102
Std1.67 × 1012.71 × 1014.98 × 1011.42 × 1013.80 × 1014.79 × 1012.01 × 1011.10 × 101
p-value-8.35 × 10−11 +4.33 × 10−24 +2.13 × 10−45 +2.13 × 10−58 +1.30 × 10−25 +3.45 × 10−19 +1.11 × 10−22 +
F24Median4.50 × 1025.02 × 1026.29 × 1026.26 × 1029.61 × 1026.44 × 1025.38 × 1024.61 × 102
Mean4.53 × 1025.19 × 1026.37 × 1026.20 × 1029.64 × 1026.36 × 1025.41 × 1025.60 × 102
Std1.80 × 1014.20 × 1013.88 × 1011.04 × 1016.49 × 1013.39 × 1012.50 × 1011.69 × 101
p-value-1.04 × 10−10 +2.66 × 10−31 +2.95 × 10−46 +7.59 × 10−45 +9.03 × 10−34 +2.16 × 10−22 +1.96 × 10−31 +
F25Median4.09 × 1024.03 × 1026.22 × 1023.88 × 1027.79 × 1024.31 × 1023.89 × 1024.03 × 102
Mean4.10 × 1024.12 × 1026.56 × 1023.88 × 1028.21 × 1024.31 × 1023.89 × 1023.88 × 102
Std1.42 × 1012.08 × 1011.89 × 1024.34 × 10−11.74 × 1022.41 × 1016.70 × 1006.07 × 10−1
p-value-7.26 × 10−1 =1.97 × 10−9 +3.87 × 10−12 −1.14 × 10−18 +1.32 × 10−4 +7.48 × 10−10 −9.15 × 10−12 −
F26Median1.27 × 1032.08 × 1032.98 × 1033.16 × 1035.79 × 1032.85 × 1031.93 × 1031.44 × 103
Mean1.27 × 1031.86 × 1033.06 × 1033.14 × 1035.69 × 1032.67 × 1031.77 × 1031.58 × 103
Std6.53 × 1016.08 × 1026.43 × 1021.10 × 1027.96 × 1029.70 × 1026.17 × 1024.77 × 102
p-value-2.16 × 10−6 +8.97 × 10−22 +4.08 × 10−61 +3.06 × 10−37 +1.09 × 10−10 +4.98 × 10−5 +7.88 × 10−4 +
F27Median5.45 × 1025.74 × 1025.68 × 1025.19 × 1021.04 × 1035.77 × 1025.13 × 1025.52 × 102
Mean5.50 × 1025.76 × 1025.81 × 1025.22 × 1021.04 × 1035.80 × 1025.15 × 1025.10 × 102
Std1.45 × 1012.61 × 1014.60 × 1011.55 × 1017.32 × 1011.96 × 1011.43 × 1014.60 × 100
p-value-1.05 × 10−5 +8.04 × 10−4 +7.41 × 10−10 −3.18 × 10−41 +1.35 × 10−8 +2.71 × 10−13 −5.03 × 10−21 −
F28Median4.45 × 1024.49 × 1027.85 × 1023.51 × 1031.15 × 1035.06 × 1024.45 × 1024.73 × 102
Mean4.50 × 1024.61 × 1029.64 × 1023.07 × 1031.24 × 1035.20 × 1024.49 × 1024.83 × 102
Std3.61 × 1014.81 × 1014.10 × 1029.21 × 1023.87 × 1026.10 × 1013.83 × 1012.94 × 101
p-value-3.32 × 10−1 =5.56 × 10−9 +2.33 × 10−22 +5.37 × 10−16 +1.29 × 10−6 +8.75 × 10−1 =2.97 × 10−4 +
F29Median5.12 × 1027.99 × 1029.78 × 1029.33 × 1022.41 × 1037.81 × 1026.50 × 1025.30 × 102
Mean5.16 × 1028.33 × 1021.02 × 1031.11 × 1032.35 × 1038.29 × 1027.08 × 1026.46 × 102
Std3.50 × 1012.17 × 1022.75 × 1026.12 × 1022.17 × 1021.73 × 1022.04 × 1027.33 × 101
p-value-9.37 × 10−11 +3.97 × 10−14 +1.67 × 10−6 +4.36 × 10−47 +9.05 × 10−14 +4.40 × 10−6 +3.26 × 10−12 +
F30Median9.18 × 1031.21 × 1042.19 × 1061.43 × 1044.35 × 1073.78 × 1046.27 × 1031.30 × 104
Mean1.11 × 1042.26 × 1047.75 × 1061.38 × 1044.69 × 1076.87 × 1048.22 × 1031.37 × 104
Std1.02 × 1043.46 × 1041.52 × 1071.54 × 1032.39 × 1078.22 × 1045.11 × 1034.06 × 103
p-value-8.61 × 10−2 =7.23 × 10−3 +1.54 × 10−1 =1.86 × 10−15 +3.42 × 10−4 +1.72 × 10−1 =1.96 × 10−1 =
F21–F30 | +/=/− | – | 7/3/0 | 10/0/0 | 7/1/2 | 10/0/0 | 9/1/0 | 5/3/2 | 7/1/2
Overall | +/=/− | – | 23/4/2 | 28/0/1 | 24/2/3 | 29/0/0 | 24/5/0 | 17/5/7 | 17/3/9
Rank | 1.97 | 3.93 | 6.31 | 5.62 | 7.90 | 4.62 | 2.66 | 3.00
Table 4. Global fitness comparisons between PCLPSO and the seven selected PSO methods on the CEC 2017 benchmark set with the dimensionality set as 50.
F | Category | Quality | PCLPSO | TCSPSO | AWPSO | CLPSO-LS | PSO-DLP | GL-PSO | HCLPSO | CLPSO
F1Unimodal FunctionsMedian4.39 × 1031.69 × 1037.16 × 1094.59 × 1075.96 × 1093.12 × 1033.72 × 1032.06 × 103
Mean4.19 × 1034.35 × 1033.92 × 1001.29 × 1088.04 × 1091.67 × 1074.06 × 1032.59 × 103
Std2.12 × 1035.68 × 1031.16 × 1003.39 × 1086.22 × 1096.58 × 1071.86 × 1031.91 × 103
p-value-8.85 × 10−1 =3.92 × 10−26 +4.08 × 10−2 +2.11 × 10−9 +2.96 × 10−2 +3.38 × 10−6 −3.80 × 10−3 −
F3Median1.63 × 1047.75 × 1041.39 × 1045.20 × 10−101.85 × 1051.97 × 1041.76 × 1021.31 × 105
Mean1.64 × 1047.86 × 1049.74 × 1042.37 × 1041.82 × 1052.25 × 1041.62 × 1021.31 × 105
Std4.17 × 1031.15 × 1043.70 × 1046.17 × 1042.03 × 1041.49 × 1044.43 × 1012.25 × 104
p-value-8.24 × 10−36 +2.69 × 10−17 +5.17 × 10−1 =2.32 × 10−46 +4.54 × 10−3 +2.61 × 10−26 −5.35 × 10−35 +
F1–F3 | +/=/− | – | 1/1/0 | 2/0/0 | 1/1/0 | 2/0/0 | 2/0/0 | 0/0/2 | 1/0/1
F4Simple Multimodal FunctionsMedian3.53 × 1022.95 × 1026.71 × 1022.38 × 1022.72 × 1034.38 × 1021.77 × 1021.89 × 102
Mean3.53 × 1023.13 × 1025.15 × 1032.40 × 1023.40 × 1034.49 × 1021.78 × 1021.87 × 102
Std4.89 × 1011.05 × 1022.19 × 1033.19 × 1012.48 × 1031.63 × 1023.69 × 1012.04 × 101
p-value-5.87 × 10−2 =2.32 × 10−17 +3.46 × 10−15 −8.39 × 10−9 +1.64 × 10−3 +1.01 × 10−22 −1.82 × 10−24 −
F5Median3.80 × 1011.68 × 1021.47 × 1024.40 × 1026.95 × 1021.45 × 1022.79 × 10−32.02 × 102
Mean5.17 × 1011.76 × 1023.32 × 1024.41 × 1026.99 × 1021.52 × 1024.32 × 10−31.98 × 102
Std6.70 × 1014.28 × 1015.14 × 1012.13 × 1013.26 × 1014.76 × 1016.43 × 10−31.57 × 101
p-value-7.82 × 10−17 +1.82 × 10−31 +4.28 × 10−48 +1.83 × 10−57 +4.06 × 10−14 +1.48 × 10−18 −2.14 × 10−26 +
F6Median8.72 × 10−11.94 × 1001.39 × 1015.71 × 1009.56 × 1019.40 × 10−12.00 × 1021.23 × 10−8
Mean8.50 × 10−12.22 × 1003.78 × 1016.01 × 1009.50 × 1011.07 × 1002.05 × 1022.45 × 10−3
Std2.23 × 10−11.56 × 1001.05 × 1011.24 × 1004.62 × 1001.41 × 1003.08 × 1011.34 × 10−2
p-value-2.34 × 10−5 +7.24 × 10−27 +4.41 × 10−29 +2.69 × 10−69 +9.56 × 10−2 =6.41 × 10−14 +5.98 × 10−14 −
F7Median2.27 × 1022.84 × 1021.66 × 1025.15 × 1021.25 × 1032.49 × 1021.65 × 1022.10 × 102
Mean2.11 × 1022.86 × 1026.93 × 1025.15 × 1021.26 × 1032.54 × 1021.70 × 1022.10 × 102
Std2.07 × 1015.16 × 1012.42 × 1023.55 × 1011.24 × 1024.58 × 1013.26 × 1011.47 × 101
p-value-2.24 × 10−6 +3.76 × 10−15 +5.94 × 10−32 +4.78 × 10−45 +1.74 × 10−3 +6.00 × 10−1 =9.10 × 10−1 =
F8Median3.34 × 1011.83 × 1021.37 × 1024.44 × 1026.93 × 1021.54 × 1021.28 × 1031.96 × 102
Mean5.39 × 1011.79 × 1023.43 × 1024.45 × 1026.87 × 1021.54 × 1021.52 × 1031.97 × 102
Std6.63 × 1013.90 × 1015.32 × 1011.36 × 1014.15 × 1016.49 × 1017.16 × 1021.64 × 101
p-value-6.30 × 10−16 +6.20 × 10−30 +1.23 × 10−45 +1.45 × 10−51 +5.41 × 10−13 +8.58 × 10−16 +3.50 × 10−22 +
F9Median4.82 × 1012.15 × 1032.19 × 1031.10 × 1034.02 × 1041.37 × 1035.26 × 1033.95 × 103
Mean5.26 × 1012.57 × 1031.01 × 1041.13 × 1034.01 × 1041.92 × 1035.34 × 1034.22 × 103
Std1.80 × 1011.71 × 1034.58 × 1033.05 × 1024.04 × 1038.15 × 1026.76 × 1021.22 × 103
p-value-4.85 × 10−11 +2.00 × 10−17 +7.63 × 10−27 +2.20 × 10−51 +9.63 × 10−8 +3.95 × 10−16 +3.36 × 10−26 +
F10Median1.09 × 1045.22 × 1033.91 × 1031.31 × 1041.35 × 1047.54 × 1032.34 × 1026.65 × 103
Mean1.09 × 1045.44 × 1037.54 × 1031.31 × 1041.34 × 1048.22 × 1032.44 × 1026.59 × 103
Std3.87 × 1028.75 × 1029.13 × 1024.42 × 1023.19 × 1022.20 × 1039.16 × 1014.51 × 102
p-value-3.87 × 10−37 −1.91 × 10−25 −1.80 × 10−25 +6.54 × 10−31 +4.41 × 10−8 −5.81 × 10−42 −2.08 × 10−41 −
F4–F10 | +/=/− | – | 5/1/1 | 6/0/1 | 6/0/1 | 7/0/0 | 5/1/1 | 3/1/3 | 3/1/3
F11Hybrid FunctionsMedian1.99 × 1022.02 × 1024.20 × 1022.75 × 1021.35 × 1041.26 × 1032.94 × 1061.93 × 102
Mean1.97 × 1022.41 × 1022.19 × 1031.24 × 1031.27 × 1041.73 × 1034.77 × 1061.92 × 102
Std6.72 × 1011.11 × 1021.84 × 1035.30 × 1032.66 × 1037.59 × 1025.30 × 1064.16 × 101
p-value-5.36 × 10−2 =1.74 × 10−7 +2.84 × 10−1 =1.78 × 10−33 +4.49 × 10−7 +1.58 × 10−2 +6.56 × 10−1 =
F12Median1.89 × 1062.62 × 1062.11 × 1082.56 × 1077.50 × 1085.68 × 1063.13 × 1041.86 × 107
Mean2.66 × 1061.28 × 1077.55 × 1092.70 × 1071.55 × 1095.23 × 1072.80 × 1041.96 × 107
Std1.81 × 1063.91 × 1074.61 × 1091.15 × 1073.13 × 1093.63 × 1081.12 × 1049.65 × 106
p-value-1.63 × 10−1 =1.44 × 10−12 +2.19 × 10−16 +8.95 × 10−3 +3.73 × 10−2 +5.02 × 10−2 =3.29 × 10−13 +
F13Median3.20 × 1036.21 × 1034.57 × 1063.78 × 1044.84 × 1073.45 × 1039.69 × 1041.11 × 104
Mean3.90 × 1037.87 × 1031.77 × 1091.03 × 1061.29 × 1088.10 × 1059.77 × 1041.13 × 104
Std1.75 × 1037.86 × 1031.55 × 1095.43 × 1062.07 × 1089.74 × 1067.55 × 1043.09 × 103
p-value-9.17 × 10−3 +5.29 × 10−8 +3.06 × 10−1 =1.14 × 10−3 +3.14 × 10−1 =7.97 × 10−17 +3.96 × 10−16 +
F14Median5.42 × 1047.69 × 1043.66 × 1043.32 × 1054.62 × 1061.24 × 1051.81 × 1044.64 × 105
Mean5.50 × 1041.19 × 1056.82 × 1053.92 × 1054.56 × 1061.21 × 1061.62 × 1045.29 × 105
Std2.34 × 1041.94 × 1057.16 × 1053.49 × 1052.31 × 1061.82 × 1061.08 × 1042.69 × 105
p-value-7.84 × 10−2 =1.18 × 10−5 +2.10 × 10−6 +2.65 × 10−15 +3.83 × 10−3 +4.65 × 10−3 −1.23 × 10−13 +
F15Median1.09 × 1034.76 × 1036.85 × 1043.16 × 1042.96 × 1072.99 × 1031.70 × 1038.15 × 102
Mean1.43 × 1036.94 × 1033.75 × 1072.35 × 1076.54 × 1074.18 × 1031.66 × 1039.31 × 102
Std1.82 × 1036.93 × 1031.06 × 1087.85 × 1078.03 × 1079.37 × 1034.24 × 1024.56 × 102
p-value-7.08 × 10−5 +5.79 × 10−2 =1.07 × 10−1 =3.82 × 10−5 +1.58 × 10−3 +5.23 × 10−10 +4.69 × 10−2 −
F16Median9.05 × 1021.59 × 1031.04 × 1033.20 × 1034.92 × 1031.70 × 1031.24 × 1031.45 × 103
Mean8.76 × 1021.55 × 1032.40 × 1033.21 × 1034.89 × 1031.67 × 1031.21 × 1031.41 × 103
Std3.89 × 1024.01 × 1025.44 × 1022.00 × 1023.07 × 1027.06 × 1022.71 × 1022.03 × 102
p-value-3.44 × 10−9 +7.98 × 10−19 +1.23 × 10−38 +3.54 × 10−48 +7.80 × 10−10 +9.36 × 10−11 +8.25 × 10−10 +
F17Median8.27 × 1021.17 × 1035.78 × 1022.07 × 1033.09 × 1031.33 × 1032.68 × 1051.05 × 103
Mean8.16 × 1021.16 × 1032.06 × 1032.20 × 1033.05 × 1031.30 × 1033.69 × 1051.04 × 103
Std1.80 × 1022.41 × 1024.49 × 1027.21 × 1023.20 × 1023.35 × 1023.10 × 1051.98 × 102
p-value-4.68 × 10−7 +9.63 × 10−20 +2.63 × 10−14 +3.84 × 10−38 +2.90 × 10−9 +5.80 × 10−8 +1.28 × 10−4 +
F18Median2.04 × 1052.45 × 1066.51 × 1055.68 × 1062.59 × 1075.67 × 1062.02 × 1041.14 × 106
Mean2.20 × 1054.08 × 1065.74 × 1068.35 × 1062.64 × 1077.66 × 1062.12 × 1041.31 × 106
Std1.42 × 1054.41 × 1069.83 × 1066.47 × 1061.17 × 1078.33 × 1061.69 × 1047.76 × 105
p-value-1.22 × 10−5 +3.20 × 10−3 +4.49 × 10−9 +1.19 × 10−17 +1.98 × 10−5 +1.37 × 10−2 −2.73 × 10−10 +
F19Median1.59 × 1046.45 × 1033.81 × 1052.52 × 1032.03 × 1077.66 × 1037.72 × 1023.36 × 102
Mean1.52 × 1041.05 × 1042.02 × 1072.51 × 1032.46 × 1071.52 × 1047.36 × 1025.26 × 102
Std4.55 × 1031.04 × 1044.65 × 1071.75 × 1011.71 × 1071.14 × 1042.19 × 1025.12 × 102
p-value-2.36 × 10−2 −2.08 × 10−2 +7.69 × 10−26 −9.26 × 10−11 +9.88 × 10−1 =6.36 × 10−2 =7.82 × 10−29 −
F20Median6.56 × 1028.66 × 1023.76 × 1021.73 × 1031.92 × 1038.65 × 1023.91 × 1025.92 × 102
Mean6.50 × 1028.43 × 1021.10 × 1031.70 × 1031.93 × 1039.08 × 1023.89 × 1026.14 × 102
Std1.34 × 1022.78 × 1023.20 × 1021.37 × 1021.39 × 1024.28 × 1023.19 × 1011.32 × 102
p-value-2.97 × 10−3 +1.93 × 10−8 +7.31 × 10−32 +2.87 × 10−36 +3.22 × 10−3 +1.16 × 10−1 =4.01 × 10−1 =
F11–F20 | +/=/− | – | 6/3/1 | 9/1/0 | 6/3/1 | 10/0/0 | 8/2/0 | 5/3/2 | 6/1/3
F21Composition FunctionsMedian2.40 × 1023.71 × 1023.26 × 1026.34 × 1029.26 × 1023.67 × 1026.17 × 1034.21 × 102
Mean2.53 × 1023.67 × 1025.58 × 1026.36 × 1029.27 × 1023.68 × 1025.80 × 1034.21 × 102
Std6.37 × 1013.71 × 1015.12 × 1011.67 × 1014.03 × 1015.91 × 1011.74 × 1031.53 × 101
p-value-5.12 × 10−17 +8.32 × 10−34 +1.08 × 10−49 +1.57 × 10−56 +3.82 × 10−16 +1.06 × 10−21 +2.50 × 10−30 +
F22Median1.04 × 1025.83 × 1034.07 × 1031.33 × 1041.39 × 1048.24 × 1036.53 × 1027.13 × 103
Mean2.59 × 1035.78 × 1038.32 × 1031.32 × 1041.38 × 1047.49 × 1036.56 × 1027.14 × 103
Std4.19 × 1031.47 × 1031.02 × 1033.72 × 1024.19 × 1022.89 × 1033.49 × 1012.79 × 102
p-value-5.67 × 10−4 +9.05 × 10−9 +1.91 × 10−18 +2.13 × 10−19 +3.77 × 10−5 +6.51 × 10−4 −1.07 × 10−6 +
F23Median4.98 × 1026.28 × 1025.46 × 1028.58 × 1021.69 × 1038.10 × 1027.30 × 1026.66 × 102
Mean4.98 × 1026.45 × 1029.60 × 1028.56 × 1021.69 × 1038.06 × 1027.24 × 1026.66 × 102
Std1.81 × 1016.32 × 1019.13 × 1011.40 × 1016.44 × 1019.20 × 1013.78 × 1011.93 × 101
p-value-1.17 × 10−17 +1.17 × 10−34 +4.21 × 10−62 +5.77 × 10−66 +5.96 × 10−31 +1.24 × 10−29 +4.80 × 10−40 +
F24Median5.81 × 1027.29 × 1026.29 × 1029.06 × 1021.79 × 1039.00 × 1024.92 × 1028.04 × 102
Mean5.83 × 1027.42 × 1021.03 × 1039.06 × 1021.78 × 1038.67 × 1025.13 × 1028.05 × 102
Std1.59 × 1019.22 × 1017.75 × 1011.24 × 1011.53 × 1023.68 × 1013.99 × 1012.69 × 101
p-value-4.50 × 10−13 +1.69 × 10−37 +8.12 × 10−63 +2.04 × 10−45 +7.03 × 10−25 +2.85 × 10−26 −5.04 × 10−43 +
F25Median6.47 × 1026.66 × 1026.22 × 1025.58 × 1021.52 × 1037.68 × 1023.58 × 1035.31 × 102
Mean6.49 × 1026.76 × 1023.42 × 1035.59 × 1021.77 × 1037.87 × 1023.53 × 1035.30 × 102
Std2.77 × 1016.94 × 1011.61 × 1039.54 × 1008.20 × 1028.76 × 1013.33 × 1026.36 × 100
p-value-4.31 × 10−2 +2.70 × 10−13 +1.54 × 10−30 −3.98 × 10−10 +8.64 × 10−14 −6.06 × 10−24 +2.14 × 10−38 −
F26Median1.75 × 1033.28 × 1032.98 × 1035.62 × 1031.22 × 1043.72 × 1036.49 × 1023.59 × 103
Mean1.77 × 1033.22 × 1037.17 × 1035.60 × 1031.19 × 1043.95 × 1036.55 × 1023.52 × 103
Std1.57 × 1028.61 × 1021.22 × 1031.89 × 1021.30 × 1031.17 × 1036.21 × 1013.43 × 102
p-value-8.61 × 10−13 +5.87 × 10−32 +1.09 × 10−65 +1.44 × 10−45 +1.70 × 10−11 +6.03 × 10−35 −2.93 × 10−34 +
F27Median8.64 × 1029.28 × 1025.68 × 1027.18 × 1022.71 × 1039.41 × 1024.86 × 1026.35 × 102
Mean8.74 × 1029.24 × 1021.10 × 1037.32 × 1022.72 × 1039.65 × 1024.86 × 1026.33 × 102
Std6.67 × 1018.92 × 1011.57 × 1027.97 × 1012.69 × 1029.99 × 1012.67 × 1012.81 × 101
p-value-2.37 × 10−2 +1.88 × 10−9 +4.61 × 10−9 −2.05 × 10−41 +7.54 × 10−4 +4.44 × 10−17 −2.64 × 10−22 −
F28Median6.94 × 1026.53 × 1027.85 × 1025.45 × 1032.46 × 1038.42 × 1021.18 × 1031.70 × 103
Mean6.96 × 1026.61 × 1024.86 × 1035.34 × 1032.50 × 1038.65 × 1021.18 × 1031.79 × 103
Std5.33 × 1016.70 × 1011.72 × 1034.56 × 1028.15 × 1021.59 × 1022.98 × 1024.52 × 102
p-value-1.54 × 10−2 −3.55 × 10−19 +4.77 × 10−52 +1.65 × 10−17 +2.70 × 10−9 +3.66 × 10−33 +3.82 × 10−19 +
F29Median6.25 × 1021.23 × 1039.78 × 1022.10 × 1035.19 × 1031.27 × 1031.13 × 1061.01 × 103
Mean6.39 × 1021.25 × 1032.85 × 1032.24 × 1035.32 × 1031.36 × 1031.26 × 1061.02 × 103
Std9.20 × 1012.82 × 1026.90 × 1025.96 × 1027.66 × 1025.30 × 1025.20 × 1051.51 × 102
p-value-3.25 × 10−16 +1.13 × 10−24 +5.67 × 10−21 +2.02 × 10−39 +3.99 × 10−13 +1.81 × 10−13 +1.26 × 10−16 +
F30Median2.15 × 1062.32 × 1062.19 × 1061.46 × 1062.70 × 1083.01 × 1063.14 × 1067.30 × 105
Mean2.19 × 1062.36 × 1068.22 × 1071.16 × 1072.98 × 1083.08 × 1069.14 × 1037.40 × 105
Std4.13 × 1056.23 × 1051.07 × 1085.47 × 1071.24 × 1082.84 × 1064.83 × 1067.63 × 104
p-value-1.97 × 10−1 =1.28 × 10−4 +3.52 × 10−1 =5.91 × 10−19 +4.37 × 10−4 +1.34 × 10−10 −7.81 × 10−28 −
F21–F30 | +/=/− | – | 8/1/1 | 10/0/0 | 7/1/2 | 10/0/0 | 9/0/1 | 5/0/5 | 7/0/3
Overall | +/=/− | – | 20/6/3 | 27/1/1 | 20/5/4 | 29/0/0 | 24/3/2 | 13/4/12 | 17/2/10
Rank | 2.34 | 3.45 | 6.45 | 5.38 | 4.55 | 3.34 | 2.97 | 3.34
Table 5. Global fitness comparisons between PCLPSO and the seven selected PSO methods on the CEC 2017 benchmark set with the dimensionality set as 100.
F | Category | Quality | PCLPSO | TCSPSO | AWPSO | CLPSO-LS | PSO-DLP | GL-PSO | HCLPSO | CLPSO
F1Unimodal FunctionsMedian3.23 × 1038.51 × 1031.64 × 10111.21 × 10105.11 × 10101.47 × 1041.37 × 1041.63 × 109
Mean4.23 × 1038.15 × 1031.63 × 10111.22 × 10105.39 × 10103.49 × 1042.79 × 1041.78 × 109
Std3.99 × 1031.15 × 1042.13 × 10101.22 × 1091.15 × 10105.48 × 1042.56 × 1041.51 × 109
p-value-8.84 × 10−4 +4.98 × 10−45 +1.18 × 10−51 +2.16 × 10−33 +3.37 × 10−3 +5.59 × 10−6 +2.26 × 10−8 +
F3Median1.68 × 1052.66 × 1054.62 × 1052.68 × 10−84.83 × 1051.93 × 1057.13 × 1045.10 × 105
Mean1.71 × 1052.57 × 1054.65 × 1053.61 × 1044.66 × 1051.98 × 1057.37 × 1045.11 × 105
Std1.22 × 1042.73 × 1047.87 × 1041.38 × 1055.37 × 1042.75 × 1042.21 × 1043.89 × 104
p-value-3.12 × 10−26 +5.99 × 10−28 +1.71 × 10−6 −1.67 × 10−36 +5.44 × 10−6 +8.33 × 10−29 −3.28 × 10−47 +
F1-3+/=/−-2/0/02/0/01/0/12/0/02/0/01/0/12/0/0
F4Simple Multimodal FunctionsMedian6.46 × 1026.33 × 1023.03 × 1041.01 × 1039.72 × 1031.62 × 1032.36 × 1023.17 × 102
Mean6.43 × 1026.36 × 1023.09 × 1041.01 × 1031.12 × 1041.58 × 1032.47 × 1023.26 × 102
Std8.68 × 1012.68 × 1029.08 × 1031.31 × 1024.20 × 1034.10 × 1022.61 × 1014.38 × 101
p-value-1.49 × 10−1 =1.08 × 10−25 +1.08 × 10−18 +7.60 × 10−20 +1.19 × 10−17 +1.02 × 10−31 −3.03 × 10−25 −
F5Median8.81 × 1014.37 × 1029.87 × 1021.06 × 1031.63 × 1034.45 × 1024.81 × 1027.46 × 102
Mean8.83 × 1015.27 × 1021.01 × 1031.06 × 1031.63 × 1034.51 × 1024.89 × 1027.46 × 102
Std1.38 × 1016.06 × 1011.21 × 1022.62 × 1016.02 × 1018.76 × 1017.94 × 1014.21 × 101
p-value-2.04 × 10−38 +8.26 × 10−45 +2.50 × 10−81 +1.61 × 10−74 +3.18 × 10−30 +9.75 × 10−35 +1.79 × 10−61 +
F6Median2.70 × 1001.49 × 1015.97 × 1013.28 × 1011.17 × 1022.42 × 1009.16 × 10−31.07 × 10−2
Mean2.57 × 1001.64 × 1015.98 × 1013.28 × 1011.17 × 1023.03 × 1001.25 × 10−22.64 × 10−2
Std6.18 × 10−16.35 × 1007.97 × 1001.94 × 1003.82 × 1002.00 × 1001.04 × 10−23.26 × 10−2
p-value-5.02 × 10−17 +1.86 × 10−43 +2.09 × 10−61 +1.14 × 10−78 +2.39 × 10−1 =1.58 × 10−30 −2.25 × 10−30 −
F7Median2.89 × 1029.67 × 1023.45 × 1031.45 × 1033.43 × 1037.43 × 1026.66 × 1027.07 × 102
Mean3.91 × 1021.11 × 1033.46 × 1031.53 × 1033.50 × 1037.62 × 1026.57 × 1027.02 × 102
Std1.81 × 1021.35 × 1026.74 × 1021.75 × 1023.46 × 1021.21 × 1021.06 × 1026.45 × 101
p-value-2.01 × 10−20 +6.86 × 10−32 +1.96 × 10−32 +4.89 × 10−46 +3.93 × 10−13 +3.76 × 10−9 +2.11 × 10−12 +
F8Median9.19 × 1014.90 × 1021.07 × 1031.07 × 1031.74 × 1034.68 × 1025.69 × 1027.41 × 102
Mean9.49 × 1015.26 × 1021.07 × 1031.07 × 1031.74 × 1035.10 × 1025.71 × 127.48 × 102
Std1.76 × 1011.04 × 1029.22 × 1012.22 × 1015.03 × 1011.55 × 1027.23 × 1013.27 × 101
p-value-1.35 × 10−28 +1.50 × 10−52 +1.34 × 10−82 +9.45 × 10−80 +5.21 × 10−21 +1.01 × 10−40 +1.07 × 10−65 +
F9Median2.44 × 1021.11 × 1043.37 × 1041.57 × 1041.17 × 1051.28 × 1048.66 × 1032.24 × 104
Mean2.62 × 1021.31 × 1043.56 × 1041.56 × 1041.16 × 1051.41 × 1048.77 × 1032.32 × 104
Std8.93 × 1014.36 × 1037.71 × 1031.83 × 1035.75 × 1037.21 × 1032.87 × 1034.55 × 103
p-value-2.77 × 10−20 +8.13 × 10−33 +3.59 × 10−47 +3.50 × 10−69 +4.18 × 10−15 +3.40 × 10−23 +4.80 × 10−35 +
F10Median2.63 × 1041.33 × 1041.84 × 1043.02 × 1043.01 × 1041.98 × 1041.28 × 1042.17 × 104
Mean2.64 × 1041.33 × 1041.83 × 1043.01 × 1043.00 × 1042.07 × 1041.31 × 1042.18 × 104
Std5.97 × 1023.56 × 1031.69 × 1033.82 × 1026.27 × 1023.75 × 1031.20 × 1035.11 × 102
p-value-2.56 × 10−26 −2.43 × 10−32 −4.10 × 10−36 +4.68 × 10−31 +3.18 × 10−11 −1.74 × 10−51 −2.12 × 10−38 −
F4−10+/=/−-2005/1/16/0/17/0/07/0/02005/1/14/0/34/0/3
F11Hybrid FunctionsMedian2.32 × 1031.28 × 1043.22 × 1041.56 × 1031.66 × 1052.51 × 1048.47 × 1021.34 × 103
Mean2.65 × 1034.95 × 1034.29 × 1046.42 × 1031.67 × 1052.38 × 1048.43 × 1021.34 × 103
Std1.74 × 1033.75 × 1033.51 × 1049.25 × 1032.35 × 1048.97 × 1032.49 × 1021.62 × 102
p-value-2.37 × 10−18 +4.75 × 10−8 +3.20 × 10−2 +9.37 × 10−43 +2.16 × 10−18 +5.55 × 10−7 −1.31 × 10−4 −
F12Median4.16 × 1072.57 × 1074.54 × 10108.74 × 1081.72 × 10102.31 × 1081.32 × 1077.81 × 107
Mean4.57 × 1078.39 × 1074.69 × 10109.29 × 1081.94 × 10103.13 × 1082.77 × 1078.97 × 107
Std3.51 × 1075.12 × 1071.70 × 10103.17 × 1081.17 × 10102.69 × 1087.32 × 1074.15 × 107
p-value-7.98 × 10−1 =9.90 × 10−22 +8.13 × 10−22 +9.09 × 10−13 +1.31 × 10−6 +2.29 × 10−1 =4.26 × 10−5 +
F13Median3.50 × 1034.12 × 1035.19 × 1091.23 × 1041.94 × 1081.18 × 1042.20 × 1043.82 × 104
Mean3.74 × 1035.47 × 1035.72 × 1095.54 × 1062.64 × 1082.56 × 1072.24 × 1044.13 × 104
Std1.80 × 1036.69 × 1032.99 × 1092.10 × 1072.39 × 1081.38 × 1081.29 × 1041.61 × 104
p-value-4.73 × 10−2 +5.27 × 10−15 +1.54 × 10−1 =1.12 × 10−7 +3.15 × 10−1 =1.18 × 10−10 +2.47 × 10−18 +
F14Median3.68 × 1051.05 × 1061.52 × 1074.31 × 1061.84 × 1075.15 × 1056.31 × 1054.85 × 106
Mean4.87 × 1051.61 × 1061.84 × 1075.24 × 1061.94 × 1072.70 × 1069.83 × 1055.01 × 106
Std3.47 × 1051.44 × 1061.53 × 1073.40 × 1069.98 × 1064.02 × 1069.35 × 1051.22 × 106
p-value-6.93 × 10−5 +2.67 × 10−8 +2.68 × 10−10 +8.31 × 10−15 +3.92 × 10−3 +8.47 × 10−3 +3.23 × 10−27 +
F15Median9.50 × 1022.53 × 1031.96 × 1092.77 × 1049.67 × 1052.72 × 1037.93 × 1034.82 × 103
Mean1.08 × 1031.57 × 1051.86 × 1094.53 × 1076.15 × 1061.17 × 1051.16 × 1045.32 × 103
Std6.51 × 1023.13 × 1061.13 × 1092.45 × 1081.67 × 1076.08 × 1051.04 × 1042.74 × 103
p-value-2.42 × 10−1 =1.22 × 10−12 +3.15 × 10−1 =4.88 × 10−2 +2.99 × 10−1 =8.77 × 10−7 +2.42 × 10−11 +
F16Median2.03 × 1033.98 × 1037.73 × 1038.44 × 1031.31 × 1044.64 × 1034.15 × 1034.02 × 103
Mean2.17 × 1033.96 × 1037.73 × 1038.43 × 1031.29 × 1044.96 × 1034.08 × 1034.07 × 103
Std9.82 × 1028.18 × 1021.32 × 1033.39 × 1028.74 × 1021.69 × 1038.40 × 1023.43 × 102
p-value-1.30 × 10−10 +5.52 × 10−26 +2.65 × 10−39 +9.60 × 10−47 +1.20 × 10−10 +4.43 × 10−11 +2.98 × 10−14 +
F17Median2.43 × 1032.90 × 1038.10 × 1035.65 × 1036.80 × 1033.42 × 1033.76 × 1033.23 × 103
Mean2.13 × 1032.91 × 1039.09 × 1035.78 × 1037.15 × 1033.35 × 1033.86 × 1033.21 × 103
Std7.24 × 1024.93 × 1023.14 × 1038.25 × 1022.42 × 1031.03 × 1037.27 × 1022.86 × 102
p-value-4.27 × 10−5 +4.30 × 10−17 +1.08 × 10−25 +1.15 × 10−15 +1.71 × 10−6 +5.38 × 10−13 +2.70 × 10−10 +
F18Median4.35 × 1053.46 × 1061.76 × 1072.66 × 1071.59 × 1076.31 × 1051.10 × 1067.77 × 106
Mean4.30 × 1054.36 × 1062.51 × 1073.23 × 1071.89 × 1071.32 × 1061.97 × 1067.50 × 106
Std8.99 × 1042.32 × 1062.29 × 1072.52 × 1079.71 × 1061.72 × 1061.65 × 1062.45 × 106
p-value-3.35 × 10−11 +2.09 × 10−7 +4.22 × 10−9 +7.29 × 10−15 +6.39 × 10−3 +3.81 × 10−6 +1.25 × 10−22 +
F19Median7.87 × 1024.74 × 1039.56 × 1081.26 × 1041.02 × 171.37 × 1031.28 × 1041.78 × 103
Mean9.54 × 1023.51 × 1031.28 × 1093.76 × 1052.00 × 1079.64 × 1053.40 × 1051.97 × 103
Std7.68 × 1027.98 × 1059.44 × 1081.23 × 1063.72 × 1075.17 × 1061.76 × 1067.25 × 102
F11–20+/=/−-6/3/110/0/07/3/010/0/07/3/06/2/28/0/2
F21Composition FunctionsMedian3.38 × 1027.14 × 1021.39 × 1031.31 × 1032.28 × 1037.46 × 1028.27 × 1029.66 × 102
Mean3.40 × 1027.54 × 1021.39 × 1031.31 × 1032.27 × 1037.89 × 1028.36 × 1029.62 × 102
Std1.80 × 1018.75 × 1011.41 × 1022.82 × 1011.01 × 1021.84 × 1027.27 × 1013.09 × 101
p-value-6.33 × 10−31 +4.09 × 10−44 +3.08 × 10−78 +2.30 × 10−67 +2.61 × 10−19 +1.48 × 10−41 +1.87 × 10−65 +
F22Median2.73 × 1041.50 × 1041.96 × 1043.04 × 1043.19 × 1042.30 × 1041.43 × 1042.25 × 104
Mean2.62 × 1041.53 × 1041.99 × 1043.03 × 1043.17 × 1042.35 × 1041.41 × 1042.23 × 104
Std5.01 × 1031.51 × 1032.08 × 1036.31 × 1024.70 × 1024.30 × 1039.64 × 1026.65 × 102
p-value-5.37 × 10−17 −3.37 × 10−8 −3.01 × 10−5 +1.18 × 10−7 +3.08 × 10−2 −8.06 × 10−19 −9.34 × 10−5 −
F23Median7.76 × 1029.75 × 1022.04 × 1031.52 × 1033.62 × 1031.39 × 1038.91 × 1029.00 × 102
Mean7.71 × 1021.02 × 1032.03 × 1031.52 × 1033.62 × 1031.39 × 1038.80 × 1029.01 × 102
Std3.46 × 1011.22 × 1021.64 × 1023.13 × 1011.57 × 1021.96 × 1023.82 × 1012.70 × 101
p-value-1.04 × 10−13 +1.31 × 10−44 +2.84 × 10−63 +6.65 × 10−66 +2.33 × 10−24 +1.05 × 10−16 +3.86 × 10−23 +
F24Median1.23 × 1031.51 × 1032.80 × 1031.87 × 1036.05 × 1032.05 × 1031.52 × 1031.49 × 103
Mean1.23 × 1031.61 × 1032.87 × 1031.88 × 1035.91 × 1032.03 × 1031.53 × 1031.49 × 103
Std5.27 × 1011.33 × 1022.06 × 1027.30 × 1016.40 × 1021.34 × 1026.56 × 1012.54 × 101
p-value-3.61 × 10−16 +3.29 × 10−45 +1.85 × 10−43 +7.83 × 10−44 +3.84 × 10−37 +1.62 × 10−26 +1.64 × 10−31 +
F25Median1.52 × 1031.31 × 1031.43 × 1043.37 × 1035.16 × 1031.79 × 1037.87 × 1029.02 × 102
Mean1.52 × 1031.31 × 1031.48 × 1043.43 × 1035.08 × 1031.77 × 1037.82 × 1029.07 × 102
Std1.58 × 1022.86 × 1024.22 × 1032.65 × 1021.05 × 1032.80 × 1027.70 × 1014.86 × 101
p-value-1.15 × 10−2 −1.63 × 10−24 +5.68 × 10−40 +6.55 × 10−26 +6.36 × 10−5 +8.41 × 10−31 −5.54 × 10−28 −
F26Median5.61 × 1031.00 × 1042.57 × 1041.41 × 1043.15 × 1041.04 × 1041.11 × 1041.09 × 104
Mean5.60 × 1039.97 × 1032.58 × 1041.40 × 1043.15 × 1041.10 × 1041.10 × 1041.10 × 104
Std4.01 × 1021.23 × 1032.67 × 1033.41 × 1022.73 × 1032.32 × 1038.60 × 1023.14 × 102
p-value-8.98 × 10−26 +1.53 × 10−44 +2.43 × 10−63 +3.75 × 10−50 +2.74 × 10−18 +4.68 × 10−38 +7.82 × 10−53 +
F27Median1.12 × 1031.17 × 1032.09 × 1037.13 × 1026.21 × 1031.23 × 1037.96 × 1027.58 × 100
Mean1.11 × 1031.13 × 1032.09 × 1037.22 × 1026.03 × 1031.26 × 1037.95 × 1027.59 × 102
Std5.83 × 1011.23 × 1023.93 × 1023.75 × 1018.23 × 1029.93 × 1016.38 × 1012.25 × 101
p-value-3.00 × 10−3 +1.72 × 10−19 +1.46 × 10−37 −5.07 × 10−39 +4.61 × 10−9 +1.11 × 10−27 −1.20 × 10−37 −
F28Median1.60 × 1031.27 × 1032.06 × 1041.30 × 1049.66 × 1032.20 × 1035.84 × 1021.28 × 104
Mean1.61 × 1031.27 × 1032.10 × 1041.30 × 1041.02 × 1042.29 × 1033.14 × 1031.28 × 104
Std1.61 × 1024.02 × 1022.02 × 1031.88 × 1022.24 × 1035.73 × 1024.58 × 1034.87 × 101
p-value-3.48 × 10−3 −1.28 × 10−50 +6.31 × 10−90 +1.13 × 10−28 +3.87 × 10−8 +7.28 × 10−2 =4.65 × 10−99 +
F29Median2.21 × 1033.63 × 1038.50 × 1036.16 × 1031.13 × 1044.38 × 1033.95 × 1033.30 × 103
Mean2.20 × 1033.80 × 1038.91 × 1036.16 × 1031.16 × 1044.43 × 1033.93 × 1033.30 × 103
Std3.15 × 1025.72 × 1021.90 × 1031.19 × 1032.46 × 1036.94 × 1025.24 × 1022.75 × 102
p-value-2.80 × 10−18 +1.19 × 10−26 +6.60 × 10−25 +1.32 × 10−28 +5.01 × 10−23 +2.41 × 10−22 +7.36 × 10−21 +
F30Median7.15 × 1058.13 × 1043.86 × 1092.17 × 1047.41 × 1081.83 × 1065.58 × 1035.76 × 104
Mean9.64 × 1051.64 × 1054.41 × 1093.42 × 1071.15 × 1093.54 × 1061.85 × 1047.32 × 104
Std6.34 × 1051.22 × 1052.16 × 1091.19 × 1081.12 × 1093.91 × 1062.63 × 1045.28 × 104
p-value-3.58 × 10−9 −3.81 × 10−16 +1.31 × 10−1 =5.74 × 10−7 +7.21 × 10−4 +3.26 × 10−11 −2.18 × 10−10 −
F21-30+/=/−-6/0/49/0/18/1/110/0/09/0/15/1/46/0/4
+/=/−-19/4/627/0/223/4/229/0/023/4/216/3/1020/0/9
Rank2.143.176.95.837.414.412.593.55
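The "Rank" rows in Tables 4 and 5 are average ranks: on each function the eight algorithms are ranked by result quality (1 = best) and those ranks are averaged over all benchmark functions. Below is a small sketch of this computation; ranking by mean final fitness with averaged ties is an assumption about the procedure, not a detail confirmed by the tables themselves.

```python
# Average-rank sketch: rank algorithms per function (1 = best, ties averaged),
# then average over functions. Ranking by mean fitness is an assumption.
import numpy as np
from scipy.stats import rankdata

def average_ranks(mean_fitness):
    """mean_fitness: (n_functions, n_algorithms) array; smaller is better."""
    return np.vstack([rankdata(row) for row in mean_fitness]).mean(axis=0)

# toy example: 3 functions, 3 hypothetical algorithms
table = np.array([[4.98e2, 6.45e2, 9.60e2],
                  [5.83e2, 7.42e2, 1.03e3],
                  [6.49e2, 6.76e2, 3.42e3]])
print(average_ranks(table))   # -> [1. 2. 3.]
```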
Table 6. Statistical comparison results between PCLPSO and the seven compared PSO methods on the CEC 2017 benchmark problem set with the three dimension sizes in terms of "+/=/−".
| Category | D | TCSPSO | AWPSO | CLPSO-LS | PSO-DLP | GL-PSO | HCLPSO | CLPSO |
|---|---|---|---|---|---|---|---|---|
| Unimodal Functions | 30 | 2/0/0 | 2/0/0 | 1/1/0 | 2/0/0 | 1/1/0 | 1/0/1 | 1/0/1 |
|  | 50 | 1/1/0 | 2/0/0 | 1/1/0 | 2/0/0 | 2/0/0 | 0/0/2 | 1/0/1 |
|  | 100 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0 |
| Simple Multimodal Functions | 30 | 4/1/2 | 6/0/1 | 6/0/1 | 7/0/0 | 5/2/0 | 3/0/4 | 3/0/4 |
|  | 50 | 5/1/1 | 6/0/1 | 6/0/1 | 7/0/0 | 5/1/1 | 3/1/3 | 3/1/3 |
|  | 100 | 5/1/1 | 6/0/1 | 7/0/0 | 7/0/0 | 5/1/1 | 4/0/3 | 4/0/3 |
| Hybrid Functions | 30 | 10/0/0 | 10/0/0 | 10/0/0 | 10/0/0 | 9/1/0 | 8/2/0 | 6/2/2 |
|  | 50 | 6/3/1 | 9/1/0 | 6/3/1 | 10/0/0 | 7/2/1 | 5/3/2 | 6/1/3 |
|  | 100 | 6/3/1 | 10/0/0 | 7/3/0 | 10/0/0 | 8/2/0 | 6/2/2 | 8/0/2 |
| Composition Functions | 30 | 7/3/0 | 10/0/0 | 7/1/2 | 10/0/0 | 9/1/0 | 5/3/2 | 7/1/2 |
|  | 50 | 8/1/1 | 10/0/0 | 7/1/2 | 10/0/0 | 9/0/1 | 5/0/5 | 7/0/3 |
|  | 100 | 6/0/4 | 9/0/1 | 8/1/1 | 10/0/0 | 9/0/1 | 5/1/4 | 6/0/4 |
| Whole Set | 30 | 23/4/2 | 28/0/1 | 24/2/3 | 29/0/0 | 24/5/0 | 17/5/7 | 17/3/9 |
|  | 50 | 20/6/3 | 27/1/1 | 20/5/4 | 29/0/0 | 24/3/2 | 13/4/12 | 17/2/10 |
|  | 100 | 19/4/6 | 27/0/2 | 23/4/2 | 29/0/0 | 23/4/2 | 16/3/10 | 20/0/9 |
Table 7. Comparison results between PCLPSO with different learning strategies on 50-D CEC 2017 benchmark problems.
| F | PCLPSO | PCLPSO-WPCL | PCLPSO-Rand | PCLPSO-Gbest |
|---|---|---|---|---|
| F1 | 4.19×10^3 | 2.25×10^1 | 4.76×10^3 | 1.51×10^4 |
| F3 | 1.64×10^4 | 5.66×10^7 | 3.85×10^4 | 9.12×10^3 |
| F4 | 3.53×10^2 | 8.98×10^4 | 2.94×10^2 | 5.42×10^2 |
| F5 | 5.17×10^1 | 1.12×10^3 | 1.22×10^2 | 1.49×10^2 |
| F6 | 8.50×10^-1 | 1.42×10^2 | 8.20×10^-3 | 9.57×10^0 |
| F7 | 2.11×10^2 | 4.81×10^3 | 2.17×10^2 | 3.01×10^2 |
| F8 | 5.39×10^1 | 1.12×10^3 | 1.27×10^2 | 1.40×10^2 |
| F9 | 5.26×10^1 | 8.61×10^4 | 6.88×10^0 | 1.66×10^3 |
| F10 | 1.09×10^4 | 1.60×10^4 | 1.13×10^4 | 5.66×10^3 |
| F11 | 1.97×10^2 | 8.71×10^4 | 1.36×10^2 | 4.71×10^2 |
| F12 | 2.66×10^6 | 1.13×10^1 | 2.11×10^6 | 3.77×10^7 |
| F13 | 3.90×10^3 | 5.84×10^0 | 6.21×10^3 | 9.59×10^3 |
| F14 | 5.50×10^4 | 1.99×10^8 | 1.90×10^5 | 1.35×10^5 |
| F15 | 1.43×10^3 | 2.21×10^0 | 6.84×10^2 | 3.58×10^3 |
| F16 | 8.76×10^2 | 1.05×10^4 | 1.29×10^3 | 1.70×10^3 |
| F17 | 8.16×10^2 | 3.26×10^5 | 8.47×10^2 | 1.10×10^3 |
| F18 | 2.20×10^5 | 7.66×10^8 | 5.17×10^5 | 2.42×10^6 |
| F19 | 1.52×10^4 | 1.03×10^0 | 1.60×10^4 | 1.17×10^4 |
| F20 | 6.50×10^2 | 3.22×10^3 | 7.89×10^2 | 7.69×10^2 |
| F21 | 2.53×10^2 | 1.35×10^3 | 3.25×10^2 | 3.38×10^2 |
| F22 | 2.59×10^3 | 1.67×10^4 | 3.16×10^3 | 3.50×10^3 |
| F23 | 4.98×10^2 | 2.52×10^3 | 4.82×10^2 | 6.76×10^2 |
| F24 | 5.83×10^2 | 2.88×10^3 | 5.74×10^2 | 7.63×10^2 |
| F25 | 6.49×10^2 | 4.62×10^4 | 6.18×10^2 | 8.24×10^2 |
| F26 | 1.77×10^3 | 2.53×10^4 | 1.56×10^3 | 3.17×10^3 |
| F27 | 8.74×10^2 | 5.24×10^3 | 8.85×10^2 | 1.05×10^3 |
| F28 | 6.96×10^2 | 1.94×10^4 | 6.14×10^2 | 8.91×10^2 |
| F29 | 6.39×10^2 | 4.54×10^5 | 6.43×10^2 | 1.44×10^3 |
| F30 | 2.19×10^6 | 1.31×10^10 | 2.05×10^6 | 9.10×10^6 |
| Rank | 1.52 | 4.00 | 1.76 | 2.72 |
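To make the ablation in Table 7 concrete: PCLPSO-Rand lets a particle's personal best learn from any other randomly chosen personal best regardless of quality, PCLPSO-Gbest always learns from the globally best position, and PCLPSO-WPCL presumably disables the predominant cognitive learning component altogether. The sketch below contrasts the first three exemplar choices under an assumed blending rule, exemplar = pbest_i + F·(guide − pbest_i); both the rule and the helper names are illustrative, not the paper's exact equations.

```python
# Hedged sketch of the exemplar choices behind Table 7; the blending rule
# pbest_i + F * (guide - pbest_i) is assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def build_exemplar(i, pbests, fitness, gbest, F, mode="PCL"):
    """pbests: (NP, D) personal bests; fitness: (NP,) pbest fitness (minimized)."""
    if mode == "PCL":            # predominant cognitive learning: a *better* pbest
        better = np.flatnonzero(fitness < fitness[i])
        if better.size == 0:     # the best particle keeps its own pbest
            return pbests[i].copy()
        guide = pbests[rng.choice(better)]
    elif mode == "Rand":         # any other pbest, better or not
        j = rng.choice([k for k in range(len(pbests)) if k != i])
        guide = pbests[j]
    else:                        # "Gbest": always the globally best position
        guide = gbest
    return pbests[i] + F * (guide - pbests[i])

# toy usage with 5 particles in 3 dimensions
pbests = rng.standard_normal((5, 3))
fitness = rng.random(5)
gbest = pbests[np.argmin(fitness)]
print(build_exemplar(2, pbests, fitness, gbest, F=0.5, mode="PCL"))
```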
Table 8. Comparison results between PCLPSO with and without the adaptive strategy for F on the 50-D CEC 2017 benchmark set.
| F | Adaptive-F | F = 0.1 | F = 0.2 | F = 0.3 | F = 0.4 | F = 0.5 | F = 0.6 | F = 0.7 | F = 0.8 | F = 0.9 |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 4.19×10^3 | 4.07×10^6 | 1.95×10^4 | 3.43×10^3 | 4.97×10^3 | 4.24×10^3 | 5.48×10^3 | 6.85×10^3 | 6.08×10^3 | 5.90×10^3 |
| F3 | 1.64×10^4 | 6.45×10^4 | 4.51×10^4 | 3.19×10^4 | 3.44×10^4 | 2.73×10^4 | 9.85×10^3 | 1.68×10^4 | 1.67×10^4 | 3.07×10^4 |
| F4 | 3.53×10^2 | 2.42×10^2 | 2.82×10^2 | 3.34×10^2 | 3.34×10^2 | 3.77×10^2 | 3.97×10^2 | 3.23×10^2 | 3.64×10^2 | 3.14×10^2 |
| F5 | 5.17×10^1 | 2.86×10^2 | 1.39×10^2 | 6.18×10^1 | 6.66×10^1 | 6.54×10^1 | 6.94×10^1 | 1.87×10^2 | 1.70×10^2 | 1.30×10^2 |
| F6 | 8.50×10^-1 | 1.25×10^1 | 2.23×10^0 | 2.64×10^0 | 2.09×10^0 | 2.01×10^0 | 3.25×10^0 | 9.57×10^-1 | 8.87×10^-1 | 1.12×10^-1 |
| F7 | 2.11×10^2 | 3.67×10^2 | 2.84×10^2 | 2.35×10^2 | 2.69×10^2 | 3.11×10^2 | 2.31×10^2 | 3.01×10^2 | 2.94×10^2 | 3.01×10^2 |
| F8 | 5.39×10^1 | 2.64×10^2 | 1.33×10^2 | 5.58×10^1 | 3.24×10^1 | 4.78×10^1 | 3.48×10^1 | 1.48×10^2 | 1.44×10^2 | 1.27×10^2 |
| F9 | 5.26×10^1 | 2.45×10^3 | 2.81×10^2 | 9.37×10^1 | 6.08×10^1 | 5.87×10^1 | 7.73×10^1 | 3.26×10^1 | 4.25×10^1 | 2.40×10^1 |
| F10 | 1.09×10^4 | 1.17×10^4 | 1.22×10^4 | 1.23×10^4 | 1.23×10^4 | 1.25×10^4 | 1.23×10^4 | 1.27×10^4 | 1.24×10^4 | 1.27×10^4 |
| F11 | 1.97×10^2 | 2.61×10^2 | 2.52×10^2 | 2.55×10^2 | 2.20×10^2 | 2.50×10^2 | 2.28×10^2 | 2.08×10^2 | 2.28×10^2 | 2.08×10^2 |
| F12 | 2.66×10^6 | 4.01×10^6 | 2.59×10^6 | 2.83×10^6 | 2.19×10^6 | 2.01×10^6 | 3.32×10^6 | 4.49×10^6 | 2.88×10^6 | 2.57×10^6 |
| F13 | 3.90×10^3 | 4.72×10^3 | 3.60×10^3 | 3.48×10^3 | 4.28×10^3 | 4.91×10^3 | 4.74×10^3 | 4.52×10^3 | 6.36×10^3 | 7.24×10^3 |
| F14 | 5.50×10^4 | 4.09×10^4 | 3.37×10^4 | 4.34×10^4 | 5.29×10^4 | 5.22×10^4 | 1.98×10^5 | 9.20×10^4 | 2.31×10^5 | 1.22×10^5 |
| F15 | 1.43×10^3 | 5.07×10^3 | 3.78×10^3 | 2.05×10^3 | 3.22×10^3 | 1.40×10^3 | 1.30×10^3 | 2.03×10^3 | 1.50×10^3 | 2.33×10^3 |
| F16 | 8.76×10^2 | 2.08×10^3 | 1.88×10^3 | 1.33×10^3 | 1.13×10^3 | 1.12×10^3 | 9.82×10^2 | 9.45×10^2 | 1.79×10^3 | 2.17×10^3 |
| F17 | 8.16×10^2 | 1.34×10^3 | 1.29×10^3 | 1.23×10^3 | 1.20×10^3 | 1.24×10^3 | 1.08×10^3 | 1.18×10^3 | 1.26×10^3 | 1.43×10^3 |
| F18 | 2.20×10^5 | 7.02×10^5 | 6.30×10^5 | 4.40×10^5 | 4.15×10^5 | 2.30×10^5 | 2.65×10^5 | 3.95×10^5 | 5.76×10^5 | 1.03×10^6 |
| F19 | 1.52×10^4 | 1.59×10^4 | 1.25×10^4 | 1.22×10^4 | 1.42×10^4 | 1.31×10^4 | 1.56×10^4 | 1.67×10^4 | 1.71×10^4 | 1.66×10^4 |
| F20 | 6.50×10^2 | 1.15×10^3 | 1.22×10^3 | 1.24×10^3 | 1.21×10^3 | 1.13×10^3 | 8.33×10^2 | 1.30×10^3 | 1.25×10^3 | 1.37×10^3 |
| F21 | 2.53×10^2 | 4.23×10^2 | 3.10×10^2 | 2.51×10^2 | 2.46×10^2 | 2.53×10^2 | 2.42×10^2 | 3.17×10^2 | 3.37×10^2 | 3.19×10^2 |
| F22 | 2.59×10^3 | 9.76×10^2 | 1.20×10^2 | 9.66×10^2 | 9.41×10^2 | 3.21×10^3 | 5.13×10^3 | 5.53×10^3 | 6.74×10^3 | 4.72×10^3 |
| F23 | 4.98×10^2 | 6.51×10^2 | 5.24×10^2 | 5.16×10^2 | 5.00×10^2 | 4.89×10^2 | 4.93×10^2 | 4.85×10^2 | 5.09×10^2 | 5.49×10^2 |
| F24 | 5.83×10^2 | 6.76×10^2 | 5.92×10^2 | 5.92×10^2 | 5.73×10^2 | 5.71×10^2 | 5.69×10^2 | 5.54×10^2 | 5.90×10^2 | 6.34×10^2 |
| F25 | 6.49×10^2 | 7.65×10^2 | 7.33×10^2 | 6.74×10^2 | 6.61×10^2 | 6.50×10^2 | 6.83×10^2 | 6.55×10^2 | 6.57×10^2 | 6.55×10^2 |
| F26 | 1.77×10^3 | 2.58×10^3 | 1.78×10^3 | 1.80×10^3 | 1.70×10^3 | 1.64×10^3 | 1.86×10^3 | 1.73×10^3 | 2.17×10^3 | 2.71×10^3 |
| F27 | 8.74×10^2 | 8.95×10^2 | 8.32×10^2 | 8.62×10^2 | 8.36×10^2 | 8.43×10^2 | 9.06×10^2 | 8.64×10^2 | 8.63×10^2 | 8.61×10^2 |
| F28 | 6.96×10^2 | 9.06×10^2 | 8.25×10^2 | 7.40×10^2 | 7.15×10^2 | 6.77×10^2 | 6.63×10^2 | 6.57×10^2 | 6.62×10^2 | 6.41×10^2 |
| F29 | 6.39×10^2 | 1.62×10^3 | 1.09×10^3 | 8.34×10^2 | 7.60×10^2 | 6.07×10^2 | 7.08×10^2 | 6.65×10^2 | 8.25×10^2 | 1.01×10^3 |
| F30 | 2.19×10^6 | 7.24×10^6 | 4.40×10^6 | 3.67×10^6 | 3.07×10^6 | 2.39×10^6 | 2.89×10^6 | 2.53×10^6 | 2.37×10^6 | 2.35×10^6 |
| Rank | 3.17 | 8.28 | 6.17 | 5.34 | 4.55 | 4.17 | 5.03 | 5.28 | 6.52 | 6.48 |
Table 9. Comparison results between PCLPSO with and without the dynamic strategy for c on the 50-D CEC 2017 benchmark problems.
| F | Dynamic-c | c = 0.8 | c = 1.0 | c = 1.2 | c = 1.4 | c = 1.6 | c = 1.8 | c = 2.0 |
|---|---|---|---|---|---|---|---|---|
| F1 | 4.19×10^3 | 1.50×10^8 | 2.13×10^8 | 3.67×10^6 | 1.86×10^7 | 1.06×10^3 | 2.50×10^3 | 5.93×10^3 |
| F3 | 1.64×10^4 | 9.44×10^4 | 6.68×10^4 | 7.91×10^4 | 4.96×10^4 | 3.96×10^4 | 3.02×10^4 | 1.35×10^4 |
| F4 | 3.53×10^2 | 4.75×10^2 | 5.85×10^2 | 4.38×10^2 | 4.16×10^2 | 3.69×10^2 | 3.92×10^2 | 4.02×10^2 |
| F5 | 5.17×10^1 | 1.09×10^2 | 9.86×10^1 | 9.60×10^1 | 8.23×10^1 | 6.69×10^1 | 8.42×10^1 | 9.64×10^1 |
| F6 | 8.50×10^-1 | 7.12×10^0 | 4.87×10^0 | 3.37×10^0 | 2.92×10^0 | 2.02×10^0 | 1.58×10^0 | 1.96×10^0 |
| F7 | 2.11×10^2 | 1.96×10^2 | 1.82×10^2 | 1.74×10^2 | 1.55×10^2 | 1.72×10^2 | 2.07×10^2 | 2.19×10^2 |
| F8 | 5.39×10^1 | 1.08×10^2 | 9.07×10^1 | 8.67×10^1 | 7.04×10^1 | 7.45×10^1 | 8.66×10^1 | 1.17×10^2 |
| F9 | 5.26×10^1 | 5.73×10^2 | 2.67×10^2 | 1.41×10^2 | 8.89×10^1 | 6.04×10^1 | 6.05×10^1 | 6.21×10^1 |
| F10 | 1.09×10^4 | 8.61×10^3 | 8.42×10^3 | 1.01×10^4 | 9.94×10^3 | 1.01×10^4 | 1.03×10^4 | 1.04×10^4 |
| F11 | 1.97×10^2 | 4.60×10^2 | 7.06×10^2 | 4.59×10^2 | 4.39×10^2 | 4.38×10^2 | 2.98×10^2 | 2.83×10^2 |
| F12 | 2.66×10^6 | 1.43×10^8 | 6.40×10^7 | 1.06×10^7 | 3.71×10^6 | 4.32×10^6 | 5.01×10^6 | 7.61×10^6 |
| F13 | 3.90×10^3 | 1.41×10^5 | 2.98×10^4 | 1.53×10^4 | 8.39×10^3 | 8.84×10^3 | 4.74×10^3 | 3.66×10^3 |
| F14 | 5.50×10^4 | 1.10×10^5 | 8.56×10^4 | 3.92×10^4 | 4.18×10^4 | 4.85×10^4 | 8.55×10^4 | 7.89×10^4 |
| F15 | 1.43×10^3 | 7.49×10^3 | 5.26×10^3 | 3.34×10^3 | 4.30×10^3 | 4.05×10^3 | 3.83×10^3 | 2.62×10^3 |
| F16 | 8.76×10^2 | 8.37×10^2 | 7.80×10^2 | 7.92×10^2 | 7.63×10^2 | 8.26×10^2 | 9.53×10^2 | 1.15×10^3 |
| F17 | 8.16×10^2 | 7.62×10^2 | 6.27×10^2 | 6.45×10^2 | 6.16×10^2 | 7.06×10^2 | 7.69×10^2 | 8.65×10^2 |
| F18 | 2.20×10^5 | 1.53×10^6 | 1.40×10^6 | 9.63×10^5 | 5.95×10^5 | 3.35×10^5 | 2.72×10^5 | 1.41×10^5 |
| F19 | 1.52×10^4 | 1.57×10^4 | 1.49×10^4 | 1.52×10^4 | 1.35×10^4 | 1.46×10^4 | 1.47×10^4 | 1.43×10^4 |
| F20 | 6.50×10^2 | 4.16×10^2 | 3.49×10^2 | 3.61×10^2 | 3.04×10^2 | 4.59×10^2 | 5.88×10^2 | 6.80×10^2 |
| F21 | 2.53×10^2 | 2.87×10^2 | 2.82×10^2 | 2.66×10^2 | 2.69×10^2 | 2.67×10^2 | 2.77×10^2 | 2.94×10^2 |
| F22 | 2.59×10^3 | 3.44×10^3 | 4.56×10^3 | 2.95×10^3 | 3.97×10^3 | 3.83×10^3 | 4.49×10^3 | 3.30×10^3 |
| F23 | 4.98×10^2 | 5.87×10^2 | 5.49×10^2 | 5.32×10^2 | 5.38×10^2 | 5.33×10^2 | 5.33×10^2 | 5.43×10^2 |
| F24 | 5.83×10^2 | 6.19×10^2 | 6.31×10^2 | 6.05×10^2 | 6.11×10^2 | 6.14×10^2 | 6.13×10^2 | 6.09×10^2 |
| F25 | 6.49×10^2 | 7.81×10^2 | 8.92×10^2 | 8.17×10^2 | 8.04×10^2 | 7.26×10^2 | 6.77×10^2 | 6.61×10^2 |
| F26 | 1.77×10^3 | 3.34×10^3 | 2.07×10^3 | 1.70×10^3 | 1.88×10^3 | 1.91×10^3 | 1.95×10^3 | 2.02×10^3 |
| F27 | 8.74×10^2 | 8.61×10^2 | 8.69×10^2 | 8.22×10^2 | 8.69×10^2 | 8.50×10^2 | 8.92×10^2 | 8.70×10^2 |
| F28 | 6.96×10^2 | 1.08×10^3 | 1.21×10^3 | 1.14×10^3 | 9.82×10^2 | 8.50×10^2 | 7.85×10^2 | 7.21×10^2 |
| F29 | 6.39×10^2 | 1.22×10^3 | 9.32×10^2 | 8.78×10^2 | 7.42×10^2 | 7.15×10^2 | 7.11×10^2 | 7.08×10^2 |
| F30 | 2.19×10^6 | 5.18×10^7 | 1.83×10^7 | 1.22×10^7 | 2.57×10^6 | 2.25×10^6 | 2.29×10^6 | 2.93×10^6 |
| Rank | 2.72 | 6.66 | 6.14 | 4.38 | 3.79 | 3.52 | 4.31 | 4.48 |
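Tables 8 and 9 show that no single fixed F or c dominates across all functions, which is what the dynamic, per-particle settings exploit: each particle keeps its own parameter values instead of one swarm-wide constant. The sketch below is purely illustrative of such per-particle assignment; the uniform sampling ranges merely span the grids tested above, and the paper's actual adjustment rules may differ.

```python
# Illustrative per-particle parameter assignment: each particle gets its own
# F and c, sampled here over the ranges swept in Tables 8 and 9. This is a
# stand-in for PCLPSO's dynamic strategies, not their exact form.
import numpy as np

rng = np.random.default_rng(42)
swarm_size = 40                               # assumed swarm size
F = rng.uniform(0.1, 0.9, size=swarm_size)    # spans Table 8's F grid
c = rng.uniform(0.8, 2.0, size=swarm_size)    # spans Table 9's c grid

# particle i would then build its exemplar with F[i] and weight it with c[i]
for i in range(3):
    print(f"particle {i}: F = {F[i]:.2f}, c = {c[i]:.2f}")
```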
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
