Article

Multiple Learning Strategies and a Modified Dynamic Multiswarm Particle Swarm Optimization Algorithm with a Master Slave Structure

1 School of Computer and Communication Technology, Lanzhou University of Technology, Lanzhou 730050, China
2 School of Rail Transportation, Wuyi University, Jiangmen 529020, China
3 Engineering Research Center of Urban Railway Transportation of Gansu Province, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7035; https://doi.org/10.3390/app14167035
Submission received: 8 July 2024 / Revised: 31 July 2024 / Accepted: 5 August 2024 / Published: 11 August 2024

Abstract

It is a challenge for the particle swarm optimization algorithm to effectively control population diversity and to select and design efficient learning models. To address this, in this paper, we propose multiple learning strategies and a modified dynamic multiswarm particle swarm optimization with a master–slave structure (MLDMS-PSO). First, a dynamic multiswarm strategy with a master–slave structure and a swarm reduction strategy was introduced to dynamically update the subswarms so that the population could maintain better diversity and stronger exploration abilities in the early stage of evolution and achieve better exploitation abilities in the later stage. Second, three different particle updating strategies were introduced: a modified comprehensive learning (MCL) strategy, a united learning (UL) strategy, and a local dimension learning (LDL) strategy. The different learning strategies capture different swarm information and cooperate with each other to obtain richer population information that helps the particles evolve effectively. Finally, a multiple learning model selection mechanism with reward and punishment factors was designed to manage the three learning strategies so that the particles could select more advantageous evolutionary strategies for different fitness landscapes and improve their evolutionary efficiency. In addition, the comparison between MLDMS-PSO and nine other excellent PSOs on the CEC2017 test suite showed that MLDMS-PSO achieved an excellent performance on different types of functions, with higher accuracy and a better overall performance.

1. Introduction

Many real-world optimization problems, such as classification problems, nonlinear processes, path selection, scheduling problems, structural designs, and text mining, may have multimodal, non-convex, nonlinear, discontinuous, and non-differentiable characteristics, which makes them extremely complex and difficult to solve with traditional methods [1]. To solve these problems more effectively, many researchers have developed population-based, nature-inspired search optimization algorithms such as genetic algorithms (GAs) [2], particle swarm optimizations (PSOs) [3], ant colony optimizations (ACOs) [4], gravitational search algorithms (GSAs) [5], differential evolutions (DEs) [6], and estimation of distribution algorithms (EDAs) [7], which propose various population organization patterns and learning mechanisms and cleverly utilize individual historical knowledge to aid population evolution. These population-based algorithms are capable of solving most optimization problems and are easy to implement. Due to the random initialization of the population and the introduction of random numbers into the search mechanism, their dependence on the initial scheme and on gradient information is lessened to a great extent.
Among the population-based algorithms, PSO [3] was proposed by Kennedy and Eberhart in 1995 based on research on the predation behavior of birds. It finds the best solution through information sharing and cooperation among the individuals in the population. Because of its simplicity, the few parameters that need to be adjusted, and its high computational efficiency, PSO has attracted extensive attention from engineers and scholars in recent decades. The classical PSO constructs a learning mechanism in which each particle realizes its own evolution through self-cognition and social cognition. Self-cognition lets each particle retain its own historical experience, and social cognition lets the individuals cooperate and share information. This learning mechanism enables particles to continuously mine better positions from the search space and guide their subsequent search behavior, which gives PSO a strong ability to solve complex problems.
The learning model of PSO, that is, the mode of information sharing among particles, largely determines the ability of the algorithm to solve complex problems. Generally, building an efficient learning model that fully mines and uses the effective information in the population is an effective way to enhance this ability. In the classical PSO, particles perform the learning process by measuring fitness and selecting the global best-so-far particle or their personal best experience as the learning model [8,9]. When optimizing a unimodal function, this learning model can help particles quickly find the global optimum. However, PSO may lose its diversity and fall into premature convergence because this overall learning mode makes particles gradually move toward one dominating candidate in the population until convergence, especially when solving nonseparable, high-dimensional optimization problems or problems with many local optima [10].
For population-based evolutionary algorithms, maintaining the diversity of the population helps gather increasingly effective evolutionary information. According to the characteristics of population evolution, different evolutionary information is selected as the particles' learning exemplars, which balances exploration and exploitation and guides the population to evolve effectively. For instance, dynamic multiswarm PSO (DMS-PSO) [11] allocates all particles into multiple subswarms. The individuals in each subswarm receive different social information so that each subswarm follows a different evolutionary path, and information is shared among the subswarms through population reorganization, which effectively maintains diversity and makes it perform well on complex multimodal problems. However, DMS-PSO adopts the same learning template for all subswarms, which is not conducive to the evolution of some of them. A heterogeneous comprehensive learning and dynamic multiswarm PSO (HCLDMS-PSO) [12] was presented, in which a comprehensive learning (CL) strategy is used as the exploitation subswarm learning model and an improved DMS strategy is designed to build the exploration subswarm learning model. The two subswarms rely on different learning exemplars and update independently to ensure that the population does not lose its diversity.
Exploration and exploitation are two important stages of the evolution process of population-based algorithms [13,14]. In the exploration stage, to seek the global optimal solution, individuals need to reach as many different regions as possible, and better diversity is obtained at the population level. In the exploitation stage, particles search in a small area to obtain a high-precision optimal solution faster, which can effectively improve the local search ability of the algorithm but sacrifices the diversity of the population and increases the risk of premature convergence. Therefore, the probability of premature convergence can be significantly reduced by improving the diversity of the population. The multiswarm strategy divides the entire population into several subswarms. Individuals within a subswarm cooperate and share information with other subswarms to produce new offspring, which makes each subswarm evolve along a different path and effectively maintains the diversity of the population during the evolution. To date, the multiswarm strategy has been widely used in population-based evolutionary algorithms because it has natural advantages in maintaining population diversity. Various population division strategies have been proposed, such as stochastic division, clustering, and fitness sharing, in which the algorithm extends learning beyond the dominant individuals by increasing the probability of poor individuals entering the next generation. Their common starting point is to maintain the diversity of the population.
In addition, considering that all individuals are updated in the same way in an algorithm with a single learning model, the population information cannot be fully used to guide individual evolution, and different complex situations cannot be handled intelligently. In other words, when the algorithm is in different evolutionary stages, or is used to solve different types of optimization problems, the guiding effects of the different learning models are quite different. Therefore, some scholars have begun to explore PSOs based on a variety of different learning models. For instance, in the triple archives PSO (TAPSO) [15], three different learning models, the Diffident Model, the Mild Model, and the Confident Model, are provided to update particles so that the particles in the population show different search behaviors, meeting the needs of the algorithm in its different evolutionary stages. In the multiswarm PSO with a dynamic learning strategy (PSO-DLS) [16], particles take the local optimal position in their subswarm (focusing on exploitation) and a united local best position among subswarms (focusing on exploration) as exemplars. During the search process, the population can maintain diversity without all particles being attracted to the global best position of the dominant particles.
In summary, multiple population structures can offer significant diversity at the population level, and excellent learning strategies can mine population historical knowledge more extensively, both of which are effective in helping find the global optimum, especially for problems with multiple local optima. However, most studies keep the number of subswarms fixed. A large population scale helps with the initial exploration of the algorithm but limits the local exploitation ability in the later stages of evolution, and vice versa. Meanwhile, the rational management of multiple learning strategies to match the evolutionary characteristics of the population is also an effective way to improve the overall performance of the algorithm. Motivated by these observations, we propose multiple learning strategies and a modified dynamic multiswarm PSO with a master–slave structure, called MLDMS-PSO. Specifically, the characteristics of MLDMS-PSO are as follows.
Adopting a multiswarm strategy is an effective means of improving population diversity. In this paper, a modified dynamic multiswarm (MDMS) strategy with a master–slave structure is proposed. Specifically, among the superior particles of the population, we selected the combination of particles with the most uniform spatial distribution as the master particles, and the subswarms were constructed with the master particles as their centers. At the same time, a subswarm number reduction strategy was adopted to strengthen the local exploitation ability of the algorithm in its late evolution. Three learning strategies were adopted to update the particle positions: a modified comprehensive learning (MCL) strategy, which is conducive to population diversity; a local dimension learning (LDL) strategy, which helps with local exploitation; and a united learning (UL) strategy, which balances the MCL and LDL strategies. In addition, to further exert the advantages of the three learning strategies, a probability function with a reward and punishment mechanism was designed to manage them, strengthen the competition within the subswarms and the collaboration between subswarms, and improve the search efficiency of the algorithm. Finally, a large number of experiments were carried out on 29 benchmark functions from the CEC2017 test suite. The experimental results, supported by the nonparametric Wilcoxon rank-sum test and the Friedman test, consistently demonstrated that MLDMS-PSO could effectively balance exploration and exploitation, clearly improve the convergence accuracy and reliability, and provide a better and more consistent performance than all comparison algorithms on the majority of the benchmark functions.
The rest of this paper is arranged as follows. Section 2 reviews the standard PSO and some different types of PSOs. The proposed MLDMS-PSO is detailed in Section 3. In Section 4, extensive experiments are carried out to test MLDMS-PSO, and its performance is analyzed and discussed in depth. Finally, the conclusions are presented in Section 5.

2. Review of the Standard PSO and Its Variants

2.1. Standard PSO

As a population-based intelligent optimization algorithm, PSO is widely used because of its excellent performance in solving continuous optimization problems. It is assumed that the population contains M particles in the D-dimensional search space and that each particle carries two kinds of information, its position and its velocity, in which the position vector $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})^T$ represents a solution in the search space and the velocity vector $v_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,D})^T$ indicates the search direction of the particle. Particle $x_i$ moves at velocity $v_i$ in the search space. Due to the influence of inertia, it tends to maintain its own velocity, and it learns from its personal best experience $Pbest_i$ and the population's best experience $Gbest$ to improve $v_i$. In the initialization phase, each particle is given a random initial position and an initial velocity. After that, $v_i$ and $x_i$ are updated as follows:
$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_{1,j}\left(Pbest_{i,j}(t) - x_{i,j}(t)\right) + c_2 r_{2,j}\left(Gbest_{j}(t) - x_{i,j}(t)\right)$  (1)
$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1)$  (2)
where i = 1, 2, ⋯, M and j = 1, 2, ⋯, D. w is the inertia weight, c1 and c2 are the acceleration coefficients, and r1,j and r2,j are two uniformly distributed random numbers independently generated within [0, 1] for the j-th dimension. t is the iteration number. The velocity and position of $x_i(t)$ are updated with Equations (1) and (2) to obtain the new position $x_i(t+1)$ of particle i. If the fitness $f(x_i(t+1))$ is better than that of $Pbest_i(t)$, $Pbest_i(t)$ is replaced by $x_i(t+1)$; otherwise, $Pbest_i(t)$ remains unchanged. Similarly, Gbest is updated until the termination conditions are met.
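To make the update rule concrete, the following Python sketch applies Equations (1) and (2) to a single particle; the parameter values (w = 0.7, c1 = c2 = 1.5) and the clipping bounds are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(-100.0, 100.0)):
    """One velocity and position update (Equations (1) and (2)) for a single particle.

    x, v, pbest, gbest are D-dimensional vectors; w, c1, c2, and bounds are illustrative.
    """
    D = x.shape[0]
    r1, r2 = np.random.rand(D), np.random.rand(D)   # per-dimension random numbers
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = np.clip(x + v_new, *bounds)             # keep the particle in the search range
    return x_new, v_new
```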

2.2. Some Variants of PSO

In recent decades, the real-world problems that need to be solved have become increasingly complex, and the classical PSO algorithm often shows premature convergence and a poor global search ability. To improve the performance of the classical PSO, researchers and engineers have proposed plenty of PSO variants. Depending on the aspect being addressed, most PSO variants can be divided into four types: parameter tuning, neighborhood topology adjustments, learning strategies, and hybrid strategies, which are briefly reviewed here.
(1) 
Parameter tuning
Generally, the parameter control of a PSO mainly focuses on the inertia weight (w) and the acceleration coefficients (c1 and c2). The inertia weight, used to maintain the direction of particle movement, is a critical parameter of the standard PSO. Reducing the inertia weight over time can significantly improve the performance of the PSO [17]. Many PSOs still use this update strategy, linearly reducing w from 0.9 to 0.4 as the iterations progress [18,19,20,21]. Experiments have shown that with a smaller w, particles move in smaller steps, which is conducive to exploitation, while with a larger w, particles can reach farther positions in each step, which is more helpful for exploration. However, considering the complexity of the PSO search process, some researchers have introduced nonlinearly varying strategies to assign inertia weights, which can endow particles with diverse search behaviors [22,23,24,25,26,27].
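As a minimal illustration of the linearly decreasing schedule cited above (w reduced from 0.9 to 0.4 over the run), one could write the following; the function name and signature are ours, not from the cited papers.

```python
def linear_inertia_weight(iteration, max_iterations, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start to w_end over the run."""
    return w_start - (w_start - w_end) * iteration / max_iterations
```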
Although the above-mentioned inertia weight updating methods improve the performance of the PSO to a certain extent, they do not match the parameter adjustments to the search process well, because evolutionary state information is not appropriately used in the iterative process. To adjust the parameters more satisfactorily, the inertia weight can be updated based on feedback parameters. For instance, in [25], the success rate of the swarm is used as a feedback parameter to adaptively adjust the inertia weight, which configures a more appropriate inertia weight value according to the particles' situation in the search space. In an adaptive PSO (APSO) [28], a systematic adaptation scheme is used to control w, c1, and c2. This scheme is not only based on an evolutionary state estimation but also considers the influence of these parameters in different evolutionary states. In the artificial multiswarm PSO (AMPSO) [29], an adaptive inertia weight strategy based on a hybrid of position diversity is matched to different subswarms to better balance exploration and exploitation and improve the performance of the algorithm. What these methods have in common is that they deeply mine the fitness, velocity, position, and other information of the particles, which can be effectively used to adjust the parameters during the evolution process.
(2) 
Neighborhood topology adjustments
PSO particles with different neighborhood topologies are connected through different structures. In these structures, the particles have different rules for communicating and sharing information, which affects the diffusion speed of information in the population and drives the evolution of the whole population. Therefore, neighborhood topology adjustments can also effectively improve the performance of PSO. Many neighborhood topologies have been proposed, such as the star [3], ring [30], wheel [31], and Von Neumann topologies [32]. These neighborhood topologies are static and their pattern of information acquisition is fixed, which contradicts the distinct requirements of the different evolutionary stages. Therefore, many dynamic topologies have been developed in which the neighborhood can be refreshed and regrouped depending on the particles' parameters during the evolution process. In [33], the introduced dynamic neighborhood contains only a few particles during the initial stages of the optimization. As the generations increase, the number of particles in the neighborhood gradually grows until all particles are included. This enhances the exploration ability of the algorithm in the earlier stage and makes the algorithm focus more on exploitation at the end of the PSO run. Furthermore, the nonparametric PSO (NP-PSO) [26] constructs two different topologies, enabling it to perform global and local search over the search space with a fast convergence speed. In the dynamic neighborhood learning PSO (DNLPSO) [34], the exemplar particle is selected from a neighborhood that is dynamically updated at certain intervals, which helps preserve the diversity of the population and discourage premature convergence.
The multipopulation mechanism has received extensive attention because of its advantages in maintaining population diversity. In the DMS-PSO [11] and the dynamic multiswarm global PSO (DMS-GPSO) [35], the entire population is divided into many small swarms and various regrouping schemes and information interaction methods have been designed to perform dynamic updating of these swarms. In addition, in the multiswarm ensemble PSO (MPEPSO) [36], all particles are divided into four subswarms including three indicator subswarms and one reward subswarm. AMPSO [29] contains three subswarms, namely, an exploration swarm, an artificial exploitation swarm, and an artificial convergence swarm. In the above two algorithms, the subswarms update their velocities through different strategies, with the intention of balancing exploration and exploitation.
(3) 
Learning strategy
In PSO, the learning strategy plays a very important role because it is one of the key factors determining the particle search direction. In the global PSO (GPSO) [17], the best historical experience of the whole population is selected as the learning exemplar of each particle. Similarly, a particle selects its local neighbors' best positions as its learning exemplar in the local PSO (LPSO) [30]. In OLPSO [19], orthogonal experimental design (OED) is used to construct a learning exemplar that combines a particle's historical best experience with its neighborhood's best historical experience and guides the particle toward the globally optimal region. In the comprehensive learning PSO (CLPSO) [21], any dimension of a particle's position can learn from its historical best position or from other particles with better positions through a comprehensive learning strategy. Similarly, in the two-swarm learning PSO (TSLPSO) [20], a novel dimensional learning (NDL) strategy is constructed in which each dimension of a particle's personal best experience learns from the corresponding dimension of the population's best solution to discover and integrate promising information, which focuses more on local search. In CCPSO-ISM [37], an information sharing mechanism (ISM) and a competitive and cooperative (CC) operator are proposed to enhance the PSO performance. The ISM keeps the best information of each particle so that other particles can read the shared information, which realizes full communication among the particles in the population, and the CC operator provides a more efficient way to utilize this shared information. In PSO-DLS [16], a learning strategy focusing on exploration is proposed in which particles can learn from a united local best position and can also obtain information from other subswarms to improve themselves. In [8,28], an elite learning strategy guides the best knowledge of each particle toward the global optimum by effectively sharing the best experience of the dominant particles when the evolution process is in a convergent state. However, in many cases, researchers have preferred to integrate multiple learning strategies to effectively guide particles toward the global optimum, meeting the need for distinct information at the different evolutionary stages.
(4) 
Hybridization of PSO with other algorithms
Some scholars have improved the performance of PSO by hybridizing it with the evolutionary mechanisms of other metaheuristic algorithms such as ACO, GA, DE, and GSA. In the genetic learning PSO (GL-PSO) [38], genetic operators are used to generate exemplars with high quality and rich diversity by performing crossover, mutation, and selection on the historical information of the particles. One of the advantages of DE is that it maintains population diversity; in [9,39,40], the differential evolution strategy was introduced into the particle swarm optimization algorithm, which effectively improves the global optimization ability of PSO. A novel hybrid PSO and GSA (HPSO-GSA) [41] uses coevolutionary technology, updating particle positions simultaneously with the PSO velocity and the GSA acceleration, to seek an effective balance between exploration and exploitation. In the ant particle optimization algorithm (HAP) [42], PSO and ACO work together to generate the best solution, which is taken as the global best solution and used as an important basis for selecting the positions of the next generation of particles and ants. In addition, various other algorithms have been hybridized with PSO to improve the overall performance, such as HPPSO [43] (PSO and POA), HFPSO [44] (firefly algorithm and PSO), PSO-BOA [45], PSOGWO [46], and so on.

3. The Proposed Method

Building on these excellent previous research results, we propose the MLDMS-PSO algorithm to improve the performance of the particle swarm optimization algorithm, especially on more complex multimodal function problems. The MDMS strategy with a master–slave structure helps the population maintain better diversity, and the multimodel learning strategies make better use of the historical knowledge generated during evolution. In addition, an efficient learning strategy management mechanism is established to schedule the learning strategies more reasonably based on the evolutionary stage of the population, thereby significantly enhancing the overall performance of the algorithm. The steps of MLDMS-PSO are shown as a flowchart in Figure 1.

3.1. The Proposed MDMS Strategy

In the process of iteration, PSO continuously discards particles with poor positions and retains particles with better positions so that the population evolves toward better positions and searches for the optimal position. Therefore, letting the superior particles lead the subswarms in a dynamic multiswarm strategy can effectively prevent the population from falling into a local optimum and improve the search efficiency of the population. Based on this, we propose a modified dynamic multiswarm strategy with a master–slave structure in which each subswarm is constructed with a superior particle as its center.
(1)
A multiswarm segmentation scheme with a master–slave structure
Assuming that the population is composed of M particles, it is divided into sn subswarms $\{subpop_1, subpop_2, \ldots, subpop_{sn}\}$, and each subswarm $subpop_{si}$ contains $sm_{si}$ particles, $M = \sum_{si=1}^{sn} sm_{si}$, including one master particle and $(sm_{si} - 1)$ slave particles. The number of subswarms sn decreases linearly with the iterations of the algorithm; at the iter-th generation, it can be calculated by Equation (3).
$sn(iter) = \operatorname{ceil}\left(\frac{M}{2}\left(1 - \frac{iter}{MaxGen + 1}\right)\right)$  (3)
where MaxGen represents the maximum number of iterations of the algorithm. Figure 2 clearly explains the proposed multiswarm segmentation scheme with a master–slave structure, which is divided into three steps.
First, the superior particles are identified. The individuals are sorted according to their fitness values, and the top M/2 particles are selected as the superior particle set $ST = \{st_1, st_2, \ldots, st_g\}$, $g = M/2$, from which the master particles will be selected.
Second, the master particles are selected. The distribution of the subswarms is closely related to their leader particles, and a more balanced distribution is conducive to the particles exploring a wider area. Therefore, to ensure a more balanced distribution of the subswarms, we took the Euclidean distance between particles as the measure and selected the combination of sn particles in the superior particle set ST with the largest sum of pairwise distances as the set of master particles, $Master = (mast_1, mast_2, \ldots, mast_{sn})^T$, $Master \subseteq ST$. The distance sum of a candidate combination $Cs = \{cs_1, \ldots, cs_{sn}\}$, $Cs \subseteq ST$, can be calculated by Equations (4) and (5).
$\max \; Sum\_d = \sum_{i=1}^{sn-1} \sum_{j=i+1}^{sn} dist_{i,j}$  (4)
$dist_{i,j} = \sqrt{\sum_{k=1}^{D} \left(cs_i^k - cs_j^k\right)^2}, \quad (i \neq j)$  (5)
where $dist_{i,j}$ denotes the distance between two different D-dimensional individuals $cs_i$ and $cs_j$, and Sum_d represents the sum of the distances between all individuals in Cs. The combination Cs with the largest Sum_d is then selected as Master by traversing all possible combinations.
Finally, the slave particles are assigned. A master particle $mast_{si}$ in Master leads subswarm $subpop_{si}$, and all other individuals, except those that have been selected as master particles, are assigned to subswarms. When assigning slave particles, the master particle is taken as the center of its subswarm, and the particles closest to it are selected as its slave particles. Each subswarm competes for its own slave particles until the number of its slave particles reaches the upper limit. In detail, a particle $x_i$ is randomly selected from all particles except the master particles; if $x_i$ has the smallest Euclidean distance to $mast_k$, it is assigned to $subpop_k$. Different subswarms are allowed to contain different numbers of particles.
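The segmentation scheme can be summarized in a short Python sketch, under two assumptions of ours: the population is small enough that all master-particle combinations can be enumerated (as nchoosek does in Algorithm 1 below), and each slave particle is simply assigned to its nearest master without the per-subswarm size cap described above.

```python
import numpy as np
from itertools import combinations

def segment_population(pos, fitness, sn):
    """Split the population into sn master-slave subswarms (steps 1-3 of Section 3.1).

    pos: (M, D) particle positions; fitness: (M,) values (smaller is better).
    Returns a list of index arrays, one per subswarm, each led by its master particle.
    """
    M = pos.shape[0]
    superior = np.argsort(fitness)[: M // 2]                 # step 1: superior set ST

    # Step 2: pick the sn superior particles with the largest pairwise distance sum
    best_combo, best_sum = None, -np.inf
    for combo in combinations(superior, sn):
        pts = pos[list(combo)]
        d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
        sum_d = d[np.triu_indices(sn, k=1)].sum()            # Equations (4) and (5)
        if sum_d > best_sum:
            best_combo, best_sum = combo, sum_d
    masters = np.array(best_combo)

    # Step 3: assign every remaining particle to the nearest master particle
    subswarms = [[m] for m in masters]
    for i in range(M):
        if i in masters:
            continue
        k = np.argmin(np.linalg.norm(pos[i] - pos[masters], axis=1))
        subswarms[k].append(i)
    return [np.array(s) for s in subswarms]
```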
(2)
The multiswarm dynamic adjustment strategy
In the process of evolution, due to its small size, the global exploration ability of a subswarm is limited, which makes it easy to lose diversity and fall into a local optimal region. To solve this problem, a multiswarm dynamic adjustment strategy was designed in which the evolutionary ability of each subswarm is evaluated to determine when to update and restructure all the subswarms. When the evolutionary ability of the subswarms weakens and their advantages can no longer be brought into play effectively, the population is reorganized so that the subswarms can exchange information and maintain good diversity. In detail, the evolutionary ability of the population is measured every G iterations, where G is the interval iteration parameter. If a subswarm has found a better position within the last G generations, it still has evolutionary ability; otherwise, it has lost its evolutionary ability. Finally, the average evolutionary ability of the subswarms (SPEAave) is used as a measure of the evolutionary ability of the population and can be calculated by Equations (6) and (7).
$SPEA_k = \begin{cases} 1, & Lbestval_k^{iter+G} < Lbestval_k^{iter} \\ 0, & Lbestval_k^{iter+G} \geq Lbestval_k^{iter} \end{cases}$  (6)
$SPEA_{ave} = \frac{1}{sn} \sum_{k=1}^{sn} SPEA_k$  (7)
where $SPEA_k$ is the evolutionary ability of the k-th subswarm, $k = 1, 2, \ldots, sn$, and $Lbestval_k^{iter}$ and $Lbestval_k^{iter+G}$ represent the fitness values of the best position found by the k-th subswarm at the iter-th and (iter + G)-th generations, respectively. At the same time, a recombination parameter Rc is defined; when the condition $SPEA_{ave} < Rc$ is satisfied, the subswarms are redivided. The values of the interval iteration parameter G and the recombination parameter Rc are investigated in Section 4.1. In summary, the detailed process of the MDMS strategy is shown in Algorithm 1.
Algorithm 1. MDMS_Phase
1:  Calculate sn using Equation (3);
2:  [~, index_f] = sort(fitness);
3:  Index of the superior particles: ST_temp = index_f(1:M/2);
4:  Indices of all potential combinations of master particles: Comb = nchoosek(ST_temp, sn);
5:  Calculate the distance sum of every combination, Sum_d(Pos(Comb)), using Equations (4) and (5);
6:  Index of the selected combination Cs: [~, index_m] = max(Sum_d);
7:  Indices of the master particles: Master = Comb(index_m,:);
8:  for k = 1:sn
9:      subx{k}(1,:) = Pos(Master(k),:); subp{k}(1,:) = Pbest(Master(k),:);
10: end for
11: for l = 1:M
12:     if ~ismember(l, Master)
13:         ms_d = pdist2(Pos(l,:), Pos(Master,:));
14:         [~, index_s] = min(ms_d);
15:         subx{index_s}(end+1,:) = Pos(l,:); subp{index_s}(end+1,:) = Pbest(l,:);
16:     end if
17: end for
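The reorganization test of Equations (6) and (7) reduces to one comparison per subswarm; the sketch below is our own illustration, assuming each subswarm's best fitness is recorded at the start and end of every G-generation window.

```python
import numpy as np

def should_regroup(lbestval_before, lbestval_after, Rc):
    """Return True if the MDMS segmentation should be rerun (Equations (6) and (7)).

    lbestval_before / lbestval_after: per-subswarm best fitness values at iteration
    iter and iter + G; Rc: recombination threshold (0.1 in the tuned setting of Section 4.1).
    """
    spea = (np.asarray(lbestval_after) < np.asarray(lbestval_before)).astype(float)
    return spea.mean() < Rc      # SPEA_ave below Rc means the subswarms have stagnated
```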

3.2. A Multimodel Learning Strategy with Reward and Punishment Mechanisms

In the evolution process of the PSO algorithm, particles find better positions by constantly learning from their own historical experience and the historical experience of their neighbors. These learning characteristics are mainly reflected in the update of the particle velocity v. A good learning exemplar should adapt to the different needs of the algorithm during evolution; that is, in the initial stage, it should maintain better diversity to explore a wider area of the search space, and in the final stage, it should give the algorithm better local search abilities to improve the solution accuracy and reduce the number of evaluations. However, it is difficult for a single learning strategy to satisfy all the needs of the algorithm in the different stages. Based on this, this paper used a reward and punishment mechanism to manage three learning strategies, the MCL, LDL, and UL strategies, so that the different subswarms can dynamically choose the learning strategies conducive to their own evolution.
(1) 
MCL strategy
The CL strategy can obtain more abundant information from the entire population during the evolution of the algorithm to help particles evolve and keep the population exploring a wider area of the search space. In a multiswarm PSO, different subswarms may contain different information elements, so sufficient information sharing among the subswarms is conducive to the evolution of the population. Based on this, this paper adjusted the CL strategy and proposed an MCL strategy: when selecting the learning template, individuals from different subswarms are selected as learning objects to promote information interaction between the subswarms during evolution. The update of the velocity v is shown in Equations (8) and (9).
$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_{1,j}\left(MCLPbest_{i,j}(t) - x_{i,j}(t)\right)$  (8)
$MCLPbest_{i,j}(t) = \begin{cases} Pbest_{i,j}(t), & (rand > Pc_i) \\ Mpbest_{i,j}(t), & (rand \leq Pc_i) \end{cases}$  (9)
where $Mpbest_i$ is the learning exemplar of particle $x_i$. One particle is randomly selected from each subswarm to form a learning set, and $Mpbest_{i,j}$ takes the value of the corresponding dimension of the pbest of the winner of the learning set. Different particles adopt different learning probabilities $Pc_i$, which are related to the number of particles $sm_k$ in the subswarm where the particle is located, as given by Equation (10).
$Pc_i = 0.05 + 0.45 \cdot \dfrac{e^{10(i-1)/(sm_k - 1)} - 1}{e^{10} - 1}$  (10)
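A sketch of how an MCL exemplar could be assembled for one particle, under our reading of Equations (8)-(10): one candidate is drawn from each subswarm, the fittest candidate's pbest supplies the borrowed dimensions, and Pc_i depends on the particle's index i in its subswarm of size sm_k. The helper names are ours.

```python
import numpy as np

def learning_probability(i, sm_k):
    """Pc_i from Equation (10); i is the particle's (1-based) index in its subswarm, sm_k > 1."""
    return 0.05 + 0.45 * (np.exp(10 * (i - 1) / (sm_k - 1)) - 1) / (np.exp(10) - 1)

def mcl_exemplar(pbest_i, subswarms, pbest, pbest_fitness, pc_i, rng=np.random):
    """Build MCLPbest_i dimension by dimension (Equation (9)).

    subswarms: list of particle-index arrays; pbest: (M, D) personal bests;
    pbest_fitness: (M,) fitness of the personal bests (smaller is better).
    """
    candidates = [rng.choice(s) for s in subswarms]           # one particle per subswarm
    winner = min(candidates, key=lambda idx: pbest_fitness[idx])
    mpbest = pbest[winner]
    mask = rng.rand(pbest_i.shape[0]) <= pc_i                 # dims borrowed from the winner
    return np.where(mask, mpbest, pbest_i)
```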
(2) 
LDL strategy
In the process of population evolution, effective local exploitation by the subswarms can better improve the overall performance of the algorithm, and making full use of the local optimal information can speed up the exploitation of local areas. Borrowing the idea of dimension learning, this paper constructed an update template from the historical optimal dimension information of the particle and of the subswarm it belongs to, which is used to update the particle velocity and position, as shown in Equations (11) and (12):
$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_{1,j}\left(LDPbest_{i,j}(t) - x_{i,j}(t)\right)$  (11)
$LDPbest_{i,j}(t) = \begin{cases} Pbest_{i,j}, & (rand > Pl_i) \\ Lbest_{k,j}, & (rand \leq Pl_i) \end{cases}$  (12)
The selection probability Pl manages the probability that the dimension information of particle $x_i$'s historical optimal position $Pbest_i$ or of its subswarm's historical optimal position $Lbest_k$ is selected. In this paper, when updating the particle velocity, we treated $Pbest_i$ and $Lbest_k$ as equally important and therefore set Pl = 0.5. In detail, for each dimension of $x_i$, if the generated random number rand is greater than the selection probability Pl, the information of the corresponding dimension in $LDPbest_i$ is taken from $Pbest_i$; otherwise, it is taken from the $Lbest_k$ of the subswarm to which particle $x_i$ belongs.
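A minimal sketch of the LDL exemplar of Equations (11) and (12), assuming Pl = 0.5 as stated above; the function name is ours.

```python
import numpy as np

def ldl_exemplar(pbest_i, lbest_k, pl=0.5, rng=np.random):
    """Build LDPbest_i: each dimension comes from Pbest_i or from the subswarm best Lbest_k."""
    mask = rng.rand(pbest_i.shape[0]) <= pl    # dimensions drawn from Lbest_k (Equation (12))
    return np.where(mask, lbest_k, pbest_i)
```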
(3) 
UL strategy
The MCL strategy can extract more individual information, maintain better population diversity, and promote particle exploration, but it also limits the exploitation ability. The LDL strategy strengthens the particle's exploitation ability through the information of the particle's pbest and the Lbest of the subswarm to which the particle belongs; however, it also falls into local optima more easily. To balance the advantages and disadvantages of the MCL and LDL strategies, the UL strategy was proposed. H ($H \leq sn$) subswarms are randomly selected, and the Lbest of the selected subswarms are used to construct a united learning exemplar Ubest; that is, the center of the historical optimal positions of all the selected subswarms is used to guide the evolution of the target particles. The particle's velocity update is affected by the combination of the particle's pbest, the best location in the individual's history, and Ubest, and the update formulas are Equations (13) and (14):
$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_{1,j}\left(Pbest_{i,j}(t) - x_{i,j}(t)\right) + c_2 r_{2,j}\left(Ubest_{j}(t) - x_{i,j}(t)\right)$  (13)
$Ubest = \dfrac{1}{H} \sum_{h=1}^{H} Lbest_h$  (14)
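The united exemplar of Equation (14) is simply the centroid of the Lbest of H randomly chosen subswarms; a sketch, with H left as a free parameter:

```python
import numpy as np

def united_exemplar(lbest, H, rng=np.random):
    """Ubest: mean of the Lbest of H randomly selected subswarms (Equation (14)).

    lbest: (sn, D) array of subswarm best positions, with H <= sn.
    """
    chosen = rng.choice(lbest.shape[0], size=H, replace=False)
    return lbest[chosen].mean(axis=0)
```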
At different evolutionary stages, learning exemplars have different effects on population evolution. For example, at the initial stage of evolution, selecting the MCL learning model meets the algorithm's need to fully explore the search space, while at the end of evolution, the LDL strategy is adopted to accelerate convergence. Therefore, to meet the needs of the different evolutionary stages and give better play to the advantages of the multiple learning templates, this paper proposes a learning model selection mechanism with reward and punishment factors. It consists of a baseline function and the reward and punishment factors: the baseline function determines the overall trend of the selection probabilities of the different templates during evolution, and the reward and punishment factors fine-tune these probabilities according to the evolutionary stage. This gives more effective control over the probability that each learning template is selected in an iteration, thereby shaping the evolution of the population and improving the performance of the algorithm. Accordingly, we built two strategy selection functions, Pls1 and Pls2, to manage the three learning strategies; they can be calculated by Equation (15). As shown in Figure 3, Pls1 divides the selection probability between the LDL and MCL strategies, and Pls2 divides it between the MCL and UL strategies. Rand ($Rand \in [0, 1]$) is a random parameter that follows a uniform distribution. If $Rand < Pls_1$, the subswarm selects the LDL strategy to update its individuals. If $Pls_1 \leq Rand < Pls_2$, the subswarm selects the MCL strategy. If $Rand \geq Pls_2$, the subswarm selects the UL strategy. MaxFEs is the maximum number of fitness evaluations.
$\begin{cases} Pls_1 = 0.1 + 0.4 \times 10^{\,4\,(fitcount/MaxFEs - 1)} + rp_1 \\ Pls_2 = 0.66 - rp_2 \end{cases}$  (15)
where rp1 is the reward and punishment factor of the LDL strategy relative to the MCL strategy and rp2 is that of the UL strategy relative to the MCL strategy; they are used to better balance the strategy selection at the different evolutionary stages. The reward and punishment factors are constructed based on how much each learning strategy promotes individual evolution at a given stage, that is, on the improvement rate (Ir) of the fitness. In this paper, the objective f is to be minimized; that is, the smaller the value of f(x), the better the performance of x. Therefore, the Ir of a learning strategy on the selected particles in generation t is defined as Equation (16).
$Ir = \begin{cases} 0, & (ln = 0) \\ \dfrac{1}{ln} \sum_{i=1}^{ln} \dfrac{f(x_i^{t-1}) - f(x_i^{t})}{e^{\|x_i^{t-1} - x_i^{t}\|}}, & (ln > 0) \end{cases}$  (16)
where ln is the number of particles that chose the learning strategy, $\|x_i^{t-1} - x_i^{t}\|$ indicates the Euclidean distance between $x_i^{t-1}$ and $x_i^{t}$, and $f(x_i^{t})$ is the fitness of $x_i^{t}$. Accordingly, the improvement rates of the three learning strategies LDL, MCL, and UL are $Ir_{LD}$, $Ir_{CL}$, and $Ir_{UL}$, respectively.
It should be noted that in a single iteration, the influence of each learning strategy on individual evolution is subject to randomness. Therefore, rp1 and rp2 cannot be used directly to observe how much a learning model promotes individual evolution at a given stage. In this article, we applied a smoothing technique to reduce the influence of the randomness of a single generation and obtain the performance trends of the different learning templates at the different evolution stages. The reward and punishment factors rp1 and rp2 for the learning strategy selection can be calculated by Equation (17).
$\begin{cases} rp_1 = \operatorname{smooth}\left(0.5 - \dfrac{1}{1 + e^{(Ir_{LD} - Ir_{CL})/(Ir_{CL} + Ir_{LD} + \sigma)}}\right) \\ rp_2 = \operatorname{smooth}\left(0.5 - \dfrac{1}{1 + e^{(Ir_{UL} - Ir_{CL})/(Ir_{CL} + Ir_{UL} + \sigma)}}\right) \end{cases}$  (17)
where smooth(·) is a Savitzky–Golay smoothing function and σ is a small positive constant that prevents the denominator from being zero.
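To show how the pieces fit together, the sketch below computes the improvement rate of Equation (16), the reward and punishment factors of Equation (17), and the two thresholds of Equation (15); a simple moving average stands in for the Savitzky–Golay smoothing, and the exact constants follow our reading of Equations (15)-(17). The helper names are ours.

```python
import numpy as np

def improvement_rate(f_prev, f_curr, x_prev, x_curr):
    """Ir of Equation (16) for the particles that used one learning strategy in generation t."""
    if len(f_prev) == 0:
        return 0.0
    dist = np.linalg.norm(np.asarray(x_prev) - np.asarray(x_curr), axis=1)
    return float(np.mean((np.asarray(f_prev) - np.asarray(f_curr)) / np.exp(dist)))

def selection_thresholds(fitcount, max_fes, ir_ld, ir_cl, ir_ul, history, sigma=1e-12):
    """Pls1 and Pls2 of Equation (15) with the reward/punishment factors of Equation (17).

    history: dict of lists used to smooth rp1/rp2 over recent generations
    (a moving average stands in for the Savitzky-Golay filter of the paper).
    """
    rp1_raw = 0.5 - 1.0 / (1.0 + np.exp((ir_ld - ir_cl) / (ir_cl + ir_ld + sigma)))
    rp2_raw = 0.5 - 1.0 / (1.0 + np.exp((ir_ul - ir_cl) / (ir_cl + ir_ul + sigma)))
    history.setdefault("rp1", []).append(rp1_raw)
    history.setdefault("rp2", []).append(rp2_raw)
    rp1 = np.mean(history["rp1"][-5:])            # smoothed reward/punishment factors
    rp2 = np.mean(history["rp2"][-5:])
    pls1 = 0.1 + 0.4 * 10 ** (4 * (fitcount / max_fes - 1)) + rp1
    pls2 = 0.66 - rp2
    return pls1, pls2
```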
The change in the strategy selection function with the reward and punishment factors on benchmark function f1 is shown in Figure 4. The MCL strategy has a better ability to promote the evolution of the individuals in the middle of evolution. Therefore, at this stage, the probability of the strategy being selected is increased.
In MLDMS-PSO, the three learning strategies are properly implemented to enable the different strategies to cooperate with each other, which is conducive to mining potential promising information to help the particles evolve. For the different evolution stages, the strategy selection function is used to effectively control the selection of advantageous particle update strategies, which balances exploration and exploitation and improves the overall performance. The process of MLDMS-PSO is shown in Algorithm 2.
Algorithm 2. MLDMS-PSO_Phase
1:  Set parameters M, G, Rc; SPEAave = 0; delta_t = G;
2:  Initialize the position and velocity of all particles, x and v, respectively;
3:  Evaluate x: fitness = fit(x);
4:  Pbest = x, Gbest = [Pbesti | min(fit(Pbesti))];
5:  MDMS_Phase;
6:  Lbestk = [subp{k}(i,:) | min(fit(subp{k}(i,:)))];
7:  while (iter ≤ MaxFEs)
8:      if delta_t ≥ G
9:          Calculate the average evolutionary ability SPEAave using Equations (6) and (7);
10:         if SPEAave < Rc
11:             MDMS_Phase;
12:         end if
13:     end if
14:     for k = 1, 2, …, sn do
15:         Calculate Pls1 and Pls2 using Equations (15)–(17);
16:         if Rand < Pls1
17:             Update the particles of subpop{k} using Equations (11), (12) and (2);
18:         end if
19:         if Pls1 ≤ Rand < Pls2
20:             Update the particles of subpop{k} using Equations (8), (9) and (2);
21:         end if
22:         if Pls2 ≤ Rand
23:             Update the particles of subpop{k} using Equations (13), (14) and (2);
24:         end if
25:         Evaluate the fitness of the particles of subpop{k};
26:         Update Lbestk;
27:     end for
28:     Update Gbest;
29: end while
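The per-subswarm dispatch in steps 14-27 of Algorithm 2 reduces to a three-way branch; a minimal sketch, with the strategy labels standing for the update rules of Equations (8)-(14):

```python
import numpy as np

def choose_strategy(pls1, pls2, rng=np.random):
    """Map a uniform random draw to one of the three learning strategies (Figure 3)."""
    a = rng.rand()
    if a < pls1:
        return "LDL"      # Equations (11) and (12)
    elif a < pls2:
        return "MCL"      # Equations (8) and (9)
    return "UL"           # Equations (13) and (14)
```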

4. Experimental Studies

To calibrate the parameters and test the performance of the proposed algorithm, we conducted a series of experiments based on 29 benchmark functions from the CEC2017 test suite. These benchmark functions are divided into four categories: two unimodal functions (f1 and f3; f2 was removed because it shows unstable behavior in higher dimensions), seven multimodal functions (f4–f10), ten hybrid functions (f11–f20), and ten composition functions (f21–f30), as shown in Table 1. All the experiments were implemented in MATLAB R2021a on a computer with an Intel Core i7-10700 CPU @ 2.90 GHz (Intel Corporation, Santa Clara, CA, USA) and 32 GB RAM running Windows 10 x64 Edition.

4.1. Parameter Tuning

The values of some parameters may greatly affect the performance of the algorithm. In the proposed MLDMS-PSO, two important parameters, the interval iteration parameter G and the recombination parameter Rc, need to be calibrated. For the parameter tuning, the same settings were used throughout: a population size of 40, a maximum number of fitness evaluations (MaxFEs) of 1.00 × 10^5, 51 independent runs, and 10 dimensions.
A smaller value of G and a larger value of Rc may lead to frequent reorganization of the subswarms in the iterative process, which prevents a subswarm from making full use of its internal information to improve itself and fully exploit the local area where it is located. In contrast, with a larger G and a smaller Rc, the population may risk stagnant evolution and a waste of evaluations. Therefore, to obtain an insightful view of how the interval iteration parameter G and the recombination parameter Rc affect the performance of MLDMS-PSO, this paper tested 48 different combinations, with G ranging from 5 to 20 and Rc set to 0.1, 0.2, and 0.3. The experimental results were ranked by the average rank, with the mean over the 29 benchmark functions as the measurement standard. A smaller average rank value for a combination of G and Rc indicates that the combination performs better overall and exhibits better performance on more functions; in contrast, the larger the average rank value, the weaker its comprehensive performance.
The average rank over the 29 benchmark functions was computed for MLDMS-PSO with the different values of G and Rc and plotted in Figure 5. It can be clearly seen that the algorithm exhibited a similar performance at the different Rc values. The average rank first decreased and then increased with increasing G, indicating that neither larger nor smaller G values helped MLDMS-PSO achieve a better performance; on the other hand, when the value of G was close to the middle of its range, MLDMS-PSO obtained a better ranking on more functions and a better overall performance. When the interval iteration parameter G = 12 and the recombination parameter Rc = 0.1, the average rank was 19.34, lower than that of all other combinations of G and Rc over all test functions, bringing MLDMS-PSO its best performance. Therefore, G = 12 and Rc = 0.1 were selected for the further performance evaluation of the proposed MLDMS-PSO.

4.2. Comparison with Other PSO Variants

To compare and analyze the performance of the proposed MLDMS-PSO, nine other state-of-the-art PSO variations were selected and divided into three types.
There are four PSO variants with different learning strategies, as follows:
Traditional global PSO algorithms with inertia weight (GPSO) [17].
Full information PSO (FIPS) [47]. The FIPS implements the ring topology.
Comprehensive learning PSO (CLPSO) [21]. All particles update the dimension information of their velocity using pbest and learn from the pbest of different particles.
Competitive and cooperative (CC) PSO with ISM (CCPSO-ISM) [37]. The CC operator based on the ISM is designed to use the shared information properly and efficiently.
Two PSOs adopt multiple learning strategies, as follows:
A PSO with the multi-exemplar and forgetting ability (XPSO) [48]. XPSO uses two exemplars to update the velocity and configures distinct forgetting abilities to different particles.
Two-swarm learning PSO (TSLPSO) [20]. In TSLPSO, the dimensional learning strategy and CL strategy are used to guide the local and global search of the particles, respectively.
Three PSOs divide the whole population into multiple swarms, as follows:
Dynamic multiple-swarm PSO (DMS-PSO) [11]. DMS-PSO adopts the multiple-swarm strategy and these swarms can be regrouped frequently.
Multiswarm PSO with a dynamic learning strategy (PSO-DLS) [16]. In PSO-DLS, the particle classification mechanism and the dynamic control mechanism of the strategy promote an information exchange among the subswarms.
Heterogeneous comprehensive learning and dynamic multiswarm PSO with two mutation operators (HCLDMS-PSO) [49]. In HCLDMS-PSO, a CL strategy and a modified DMS strategy are used to construct the exploitation subpopulation exemplar and the exploration subpopulation exemplar, respectively.
The parameter settings of the above-mentioned PSO variants and the proposed MLDMS-PSO are listed in Table 2. To make a fair comparison between the proposed MLDMS-PSO and the other state-of-the-art PSO algorithms, the mean error (mean), standard deviation (std), and mean rank value (rank) of the solutions of the 10 PSOs were calculated as basic performance measures to evaluate the accuracy of the algorithms. The experiments were conducted on dimensions of D = 10, 30, and 50. Each algorithm was run 51 times independently on the 10-D, 30-D, and 50-D problems for the 29 benchmark functions, and the termination condition was set to MaxFEs = 10,000 × D. The results for 10-D, 30-D, and 50-D are shown in Appendix A, Appendix B, and Appendix C, respectively, where the best experimental result for each test function is shown in bold. Based on these results, all algorithms were sorted according to the average rank (Ave rank) over the 29 functions as the final rank (Final rank). The Ave rank is the average of an algorithm's rankings on the individual benchmark functions, and the Final rank is the result of sorting all algorithms by Ave rank. To compare the comprehensive performance of MLDMS-PSO and the nine other state-of-the-art PSO variants across dimensions, as shown in Table 3, we calculated the mean of each algorithm's Ave rank over the different dimensions as the comprehensive mean rank (CMean rank). Based on this CMean rank, the comprehensive ranking (Crank) of each algorithm was obtained.

4.3. Results Analysis and Discussion

4.3.1. Solutions Accuracy

The comparison of the experimental results for the 10-D problems is shown in Appendix A. The proposed MLDMS-PSO achieved the best results on f6, f9, f11, f13, f14, f16, f18, f20, f23, and f29. The second-best solution was obtained on f12, f27, and f30. The results ranked third on f1, f4, f5, f8, f15, f17, f19, f21, f22, f24, f26, and f28 and fourth on f7 and f10, which was better than most comparison algorithms. On f3, although the mean value of its solution ranked lower, the solution accuracy of 6.96 × 10−12 was still satisfactory. In addition, MLDMS-PSO achieved the minimum average rank. Therefore, MLDMS-PSO outperformed the other comparison algorithms on the 10-D problems.
The comparison of the experimental results for the 30-D problems is shown in Appendix B. TSLPSO and XPSO had the best performance on the unimodal functions f1 and f3, respectively. The proposed MLDMS-PSO ranked third and fifth on f1 and f3, respectively. On the simple multimodal, hybrid, and composition functions, the proposed MLDMS-PSO achieved the best solution on f6, f9, f11, f12, f17, f18, f20, f22, f28, and f29 and the second-best solution on f16, f25, and f30. CCPSO-ISM obtained the best performance on f6, f13, f15, f19, and f26. PSO-DLS obtained the best solution on f5, f7, f8, f21, f23, and f24. DMS-PSO found the smallest mean value of all solutions on f4, f14, f16, f25, f27, and f30. HCLDMS-PSO showed the best results on f10. In total, according to the final rank, the MLDMS-PSO algorithm ranked first. Hence, MLDMS-PSO consistently performed well and achieved the best overall performance on the 30-D problems.
As shown in Appendix C, for the 50-D problems, the proposed MLDMS-PSO had the highest performance on the whole test suite. Compared with the other algorithms, it achieved the best results on f6, f11, f13, f14, f16, f17, f20, and f29. According to the final rank, MLDMS-PSO achieved the optimal performance, ranking first on eight problems, second on two problems, and third on eleven problems out of the twenty-nine benchmark problems. In addition, it is worth noting that the average rankings of the four dynamic multiswarm algorithms, DMS-PSO, PSO-DLS, HCLDMS-PSO, and MLDMS-PSO, were to a large extent ahead of those of the other comparison algorithms, which also shows that a dynamic multiswarm algorithm can explore the search space more fully and effectively prevent the population from falling into local optima, playing a positive role in solving larger-scale problems.
Table 3 clearly shows that the CMean rank value of the MLDMS-PSO was the smallest among all of the compared algorithms, at 2.80. This result placed the MLDMS-PSO at the top of Crank, which is consistent with our previous discussion and further confirms the advantages of the MLDMS-PSO.

4.3.2. Convergence Process Analysis

The convergence speed is also an important indicator of the performance of evolutionary algorithms. In this part, we take the experimental results on the 30-D problems as an example to analyze the differences in the convergence processes of the 10 algorithms. The convergence results were classified according to the type of benchmark function in the test set. Due to space limitations, the convergence curves of only some functions are shown in Figure 6, Figure 7, Figure 8 and Figure 9. The shaded area in the figures has been enlarged to more clearly illustrate the convergence process of the algorithms. It should be noted that in this experiment, each convergence curve shows the average results of 51 independent runs.
For the unimodal functions, the convergence processes of the algorithms are shown in Figure 6. TSLPSO and XPSO achieved faster convergence speeds and superior performance on f1 and f3, respectively. MLDMS-PSO was limited by its dynamic population reorganization scheme, which weakened its exploitation ability, although it still showed stable results on the two unimodal functions.
For the multimodal functions, the convergence processes of the algorithms are shown in Figure 7. The classic PSO algorithm converged rapidly on f5, f7, and f8, but its performance was poor. MLDMS-PSO, HCLDMS-PSO, and PSO-DLS had similar features on f5, f6, f7, f8, and f9; that is, they could accelerate the convergence when approaching the late evolutionary stage, which shows that the dynamic multiswarm strategy has a stronger ability to jump out of local optima.
For the hybrid functions, the convergence processes of the algorithms are shown in Figure 8. MLDMS-PSO achieved the best performance on f14, f16, and f17. On most hybrid functions, its convergence was slower; the reason is that the effective management of multiple learning strategies continuously obtains more favorable information, keeping the population evolving. MLDMS-PSO also demonstrated a good exploratory ability on f16 and f20 during the later stages of evolution.
For the composition functions, the convergence processes of the algorithms are shown in Figure 9. MLDMS-PSO obtained competitive results on f22, f23, f27, f29, and f30, but its advantages were not obvious. Further improving the algorithm so that it solves composition functions more accurately, and thereby improving its practicability, is a direction that needs continuous exploration.
In summary, MLDMS-PSO adopts a multiswarm architecture, allowing the various subswarms to fully explore the solution space and demonstrate a strong exploration ability on multimodal, hybrid, and composition functions. However, the cost of a powerful global search capability is the loss of some local exploitation ability, which slows down its local convergence. Compared with the improvement in solution accuracy, the slight loss in convergence speed is acceptable.

4.3.3. Statistical Results of Solutions

(1) Wilcoxon rank sum test: The Wilcoxon rank sum test [50] is a nonparametric test based on the rank order of samples rather than their average values; it compares the distribution positions of the two populations from which two independent samples are drawn. In this paper, the nonparametric Wilcoxon rank sum test was used to measure the statistical significance of the differences between the proposed MLDMS-PSO and the other comparison algorithms. In this experiment, the distributions of the results obtained by the MLDMS-PSO algorithm and the other PSO variants over 51 simulation runs on all benchmark functions for the 10-D, 30-D, and 50-D problems were compared at a significance level of 0.05. The statistical results are recorded in Table 4. The symbols “N+”, “N−”, and “N=” indicate the number of functions on which MLDMS-PSO was significantly better than, significantly worse than, and almost the same as the corresponding competitor algorithm, respectively. The comprehensive performance (CP) is equal to the sum of “N+” minus the sum of “N−”.
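As an illustration of how one “+/−/=” entry of Table 4 could be produced, the sketch below compares the 51-run error samples of two algorithms with scipy's rank sum test; the sign convention (a smaller mean error is better) matches the minimization setting of the benchmark, and the helper name is ours.

```python
from scipy.stats import ranksums

def compare_runs(errors_a, errors_b, alpha=0.05):
    """Wilcoxon rank sum test between the 51-run error samples of two algorithms.

    Returns '+', '-', or '=' depending on whether algorithm A is significantly
    better (smaller error), significantly worse, or not significantly different.
    """
    stat, p = ranksums(errors_a, errors_b)
    if p >= alpha:
        return "="
    mean_a = sum(errors_a) / len(errors_a)
    mean_b = sum(errors_b) / len(errors_b)
    return "+" if mean_a < mean_b else "-"
```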
Table 4 shows that MLDMS-PSO was significantly better than the other PSO variants on most 10-D, 30-D, and 50-D problems. This view is also supported by the fact that all values of “N+” were greater than “N−”. In addition, the CP of all the comparison algorithms was positive, which also showed that the new proposed MLDMS-PSO had the best comprehensive performance. HCLDMS-PSO performed well on the 30-D and 50-D test functions with CP values of 3 and 2, respectively, and its performance was close to that of MLDMS-PSO. However, in the 10-D problems, its performance was significantly weaker than MLDMS-PSO.
(2) Friedman test: The Friedman test was used to compare the overall performance of MLDMS-PSO and the comparison algorithms across the benchmark functions. The Friedman test results for the 10-D, 30-D, and 50-D problems are given in Table 5, which lists each algorithm with its mean rank in ascending order; a comprehensive ranking based on the mean of these results was used to measure the overall performance of all the algorithms.
From Table 5, we can see that the proposed MLDMS-PSO ranked first on the 10-D, 30-D, and 50-D problems and achieved a better overall performance than the comparison algorithms, consistent with the Wilcoxon rank sum test results. In detail, TSLPSO and CCPSO-ISM ranked second and third on the 10-D problems, while HCLDMS-PSO and PSO-DLS ranked second and third, respectively, on both the 30-D and 50-D problems. This is because HCLDMS-PSO and PSO-DLS maintained better population diversity during evolution than TSLPSO and CCPSO-ISM, which strengthened their exploration ability but also weakened their exploitation ability to a certain extent. MLDMS-PSO balanced exploration and exploitation through population contraction and adaptive adjustment of the learning strategies, so it performed better across the 10-D, 30-D, and 50-D problems.
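The Friedman mean ranks reported in Table 5 can likewise be obtained with a short sketch such as the one below; the array layout and the synthetic example data are assumptions made purely for illustration.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_mean_ranks(errors):
    # errors has shape (n_functions, n_algorithms); each entry is, e.g., the
    # mean error of one algorithm on one benchmark function (assumed layout).
    ranks = np.vstack([rankdata(row) for row in errors])   # rank within each function, 1 = best
    mean_ranks = ranks.mean(axis=0)                        # Friedman mean rank per algorithm
    _, p_value = friedmanchisquare(*errors.T)              # one sample per algorithm
    return mean_ranks, p_value

# Tiny synthetic example: 5 functions, 3 algorithms.
errors = np.array([[0.1, 0.5, 0.9],
                   [0.2, 0.4, 0.8],
                   [0.3, 0.6, 0.7],
                   [0.1, 0.3, 0.9],
                   [0.2, 0.5, 0.6]])
mean_ranks, p_value = friedman_mean_ranks(errors)
print(mean_ranks, p_value)    # the algorithm with the smallest mean rank ranks first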

5. Conclusions

A multipopulation metaheuristic algorithm can effectively alleviate the loss of population diversity and the risk of falling into local optima, especially for multimodal optimization problems. Based on this idea and a distance measurement, the MLDMS-PSO algorithm adopts a master–slave structure to divide the entire population into multiple subswarms and, through a linear reduction of the subswarm number, better meets the diversity needs of the population in the different evolutionary stages. MLDMS-PSO further introduces three learning strategies, the LDL strategy, the MCL strategy, and the UL strategy, to guide particle updating. Two probability selection functions manage the three strategies, and a reward and punishment factor constructed from the change in fitness (i.e., the improvement rate) serves as an additional criterion. In the different stages, the selection probability of each learning strategy is dynamically adjusted according to the change in fitness, so particles can choose more advantageous evolution strategies for different fitness landscapes and evolve more efficiently. The choice among multiple learning models also better balances exploration and exploitation. To verify the effectiveness of MLDMS-PSO, the CEC2017 test suite was used for performance comparisons. The results show that, compared with other existing advanced particle swarm optimization algorithms, the proposed MLDMS-PSO performs better on multimodal, hybrid, and composition problems: during the evolution process it better maintains population diversity, drives continuous population evolution, strengthens exploration, and obtains more accurate solutions. To simplify the algorithm, a linear reduction was adopted to control the number of subswarms without considering the impact of the subswarm number on evolutionary efficiency at different stages of the evolution. This sacrifices some exploitation ability, slows the convergence speed, and makes the performance on unimodal problems less accurate than is ideal.
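To make the reward and punishment mechanism concrete, the following minimal sketch shows one way an adaptive strategy-selection probability could be updated from the fitness improvement rate. The strategy labels mirror the paper, but the weights, constants, and update rule below are illustrative assumptions only; they are not the exact selection functions used in MLDMS-PSO.

import numpy as np

rng = np.random.default_rng(1)
weights = np.ones(3)     # one weight per learning strategy: [MCL, UL, LDL] (assumed encoding)

def select_strategy():
    probs = weights / weights.sum()          # selection probability of each strategy
    return rng.choice(3, p=probs)

def update_weights(strategy, old_fit, new_fit, reward=1.1, punish=0.9):
    # Reward the chosen strategy when it improved the particle's fitness
    # (improvement rate > 0), punish it otherwise; clip so that every
    # strategy always keeps a nonzero chance of being selected.
    improvement = (old_fit - new_fit) / max(abs(old_fit), 1e-12)
    weights[strategy] *= reward if improvement > 0 else punish
    weights[strategy] = np.clip(weights[strategy], 0.1, 10.0)

# Usage inside an (omitted) PSO loop: choose a strategy, update the particle,
# evaluate it, then feed the observed fitness change back into the weights.
s = select_strategy()
update_weights(s, old_fit=105.0, new_fit=98.0)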
Good population diversity can enhance the exploration ability of an algorithm but weakens its local exploitation ability, and vice versa. Therefore, exploring an efficient strategy for controlling the number and distribution of subswarms, based on the characteristics of the problem and the stage of evolution, so that the overall population maintains appropriate diversity is worth further research. In addition, the advantages and disadvantages of the three learning strategies and of their management modes on different fitness landscapes are a promising research direction, which can help design algorithms with better reliability and robustness for different problems.

Author Contributions

Conceptualization, L.C. (Ligang Cheng); Methodology, L.C. (Ligang Cheng) and J.C.; Software, L.C. (Ligang Cheng); Validation, L.C. (Linna Cheng); Investigation, W.W.; Data curation, W.W.; Writing—original draft, L.C. (Ligang Cheng); Writing—review & editing, L.C. (Ligang Cheng); Supervision, J.C.; Funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Key Research and Development Plan under grant number 2020YFB1713600 and the National Natural Science Foundation of China under grant number 62063021. It was also supported by the Key Talent Project of Gansu Province (ZZ2021G50700016) and the 2023 Jiangmen Basic and Theoretical Science Research Project (2023JC01001).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison of experimental results for 10-dimensional problems.
GPSO | FIPS | CLPSO | CCPSO-ISM | XPSO | TSLPSO | DMS-PSO | PSO-DLS | HCLDMS-PSO | MLDMS-PSO
f1mean4.86 × 1081.35 × 10101.01 × 1035.22 × 1016.11 × 1021.85 × 1031.49 × 1022.25 × 1035.25 × 1023.04 × 102
std6.59 × 1083.43 × 1091.12 × 1035.61 × 1019.71 × 1022.01 × 1031.88 × 1022.80 × 1037.22 × 1025.60 × 102
rank91061572843
f3mean1.11× 10−151.33 × 1042.90 × 10−145.85 × 1024.24 × 10−143.01 × 10−141.80 × 10−122.90 × 10−141.54 × 10−136.96 × 10−12
std7.88 × 10−154.00 × 1033.06 × 10−143.42 × 1023.87 × 10−142.84 × 10−145.56 × 10−122.84 × 10−141.01 × 10−132.20 × 10−11
rank11029547368
f4mean2.01 × 1011.47 × 1033.53 × 1006.80 × 10−11.64 × 1002.63 × 10−11.33 × 1003.68 × 1002.33 × 1001.32 × 100
std2.62 × 1018.12 × 1029.05 × 10−18.04 × 10−13.11 × 10−12.13 × 10−19.61 × 10−13.21 × 10−14.82 × 10−15.99 × 10−1
rank91072514863
f5mean2.19 × 1011.28 × 1023.29 × 1005.75 × 1002.52 × 1006.98 × 1004.48 × 1004.95 × 1004.19 × 1003.71 × 100
std9.03 × 1001.77 × 1011.61 × 1002.00 × 1001.27 × 1002.90 × 1001.18 × 1001.65 × 1001.53 × 1001.06 × 100
rank91027185643
f6mean1.41 × 1006.22 × 1012.79 × 10−82.76 × 10−92.79 × 10−81.43 × 10−63.21 × 10−102.67 × 10−141.23 × 10−82.23 × 10−14
std2.04 × 1001.00 × 1011.97 × 10−73.49 × 10−91.97 × 10−76.89 × 10−62.27 × 10−94.82 × 10−145.85 × 10−84.51 × 10−14
rank91064783251
f7mean2.20 × 1011.57 × 1021.31 × 1011.66 × 1011.22 × 1011.76 × 1011.54 × 1011.94 × 1011.40 × 1011.50 × 101
std5.81 × 1001.95 × 1012.61 × 1002.01 × 1008.73 × 10−14.11 × 1001.85 × 1003.39 × 1001.45 × 1002.13 × 100
rank91026175834
f8mean1.73 × 1017.35 × 1013.51 × 1007.28 × 1002.34 × 1006.42 × 1005.31 × 1004.50 × 1004.02 × 1003.75 × 100
std7.18 × 1008.81 × 1001.20 × 1002.07 × 1001.24 × 1002.77 × 1001.57 × 1001.64 × 1001.86 × 1001.55 × 100
rank91028176543
f9mean1.35 × 10−11.15 × 1030.00 × 1008.71 × 10−51.56 × 10−141.34 × 10−141.05 × 10−130.00 × 1001.11 × 10−140.00 × 100
std3.08 × 10−13.10 × 1020.00 × 1001.90 × 10−44.51 × 10−143.66 × 10−146.30 × 10−140.00 × 1003.38 × 10−140.00 × 100
rank7816435121
f10mean5.85 × 1022.06 × 1031.25 × 1022.42 × 1021.71 × 1023.49 × 1021.44 × 1021.20 × 1021.73 × 1021.56 × 102
std1.98 × 1022.31 × 1029.88 × 1011.19 × 1021.36 × 1022.19 × 1021.15 × 1021.29 × 1021.13 × 1021.00 × 102
rank91027583164
f11mean2.81 × 1012.23 × 1042.02 × 1002.95 × 1002.75 × 1003.46 × 1002.54 × 1002.47 × 1001.40 × 1008.94 × 10−1
std4.55 × 1017.29 × 1041.21 × 1001.08 × 1001.15 × 1002.03 × 1001.29 × 1001.39 × 1009.57 × 10−18.03 × 10−1
rank91037685421
f12mean2.25 × 1064.23 × 1081.20 × 1048.03 × 1038.20 × 1031.20 × 1049.47 × 1031.37 × 1046.29 × 1037.85 × 103
std5.15 × 1063.83 × 1081.04 × 1046.94 × 1035.05 × 1031.13 × 1048.48 × 1031.37 × 1045.21 × 1035.73 × 103
rank91063475812
f13mean9.06 × 1039.79 × 1071.02 × 1036.32 × 1015.96 × 1035.18 × 1036.03 × 1013.63 × 1032.37 × 1034.07 × 101
std1.06 × 1042.37 × 1082.59 × 1039.82 × 1013.92 × 1034.32 × 1031.05 × 1022.96 × 1033.14 × 1033.04 × 101
rank91043872651
f14mean6.66 × 1019.52 × 1051.99 × 1015.31 × 1016.32 × 1014.15 × 1011.93 × 1013.23 × 1012.44 × 1011.59 × 101
std3.10 × 1016.48 × 1061.19 × 1016.64 × 1014.62 × 1011.50 × 1013.55 × 1019.67 × 1001.30 × 1011.28 × 101
rank91037862541
f15mean2.21 × 1025.23 × 1048.22 × 1002.82 × 1011.87 × 1023.53 × 1019.27 × 1003.87 × 1018.10 × 1011.04 × 101
std1.10 × 1039.51 × 1048.21 × 1003.01 × 1012.83 × 1022.64 × 1018.76 × 1003.68 × 1012.45 × 1026.64 × 100
rank91014852673
f16mean1.08 × 1027.13 × 1028.31 × 1001.79 × 1002.68 × 1007.49 × 1017.06 × 10−18.95 × 1007.02 × 10−15.87 × 10−1
std1.23 × 1021.45 × 1023.84 × 1019.90 × 10−11.36 × 1009.94 × 1014.64 × 10−12.80 × 1012.44 × 10−12.39 × 10−1
rank91064583721
f17mean5.91 × 1012.36 × 1026.34 × 1002.09 × 1001.84 × 1012.52 × 1012.34 × 1001.38 × 1011.14 × 1012.72 × 100
std4.72 × 1018.54 × 1018.85 × 1002.81 × 1008.05 × 1001.89 × 1011.48 × 1001.12 × 1019.41 × 1002.38 × 100
rank91041782653
f18mean2.14 × 1047.31 × 1081.41 × 1036.82 × 1022.55 × 1037.93 × 1035.95 × 1023.80 × 1031.95 × 1033.78 × 102
std1.78 × 1041.36 × 1092.69 × 1035.84 × 1022.24 × 1036.37 × 1035.40 × 1024.12 × 1032.40 × 1033.77 × 102
rank91043682751
f19mean3.26 × 1034.13 × 1073.43 × 1002.44 × 1012.07 × 1025.24 × 1013.47 × 1003.65 × 1016.31 × 1015.89 × 100
std9.18 × 1031.10 × 1083.47 × 1003.79 × 1012.95 × 1021.33 × 1026.49 × 1003.89 × 1011.51 × 1026.64 × 100
rank91014862573
f20mean5.06 × 1013.26 × 1023.39 × 1001.82 × 10−14.15 × 1007.07 × 1013.52 × 10−21.75 × 1012.12 × 1003.06 × 10−2
std4.87 × 1016.62 × 1016.83 × 1002.79 × 10−16.85 × 1006.09 × 1019.63 × 10−23.21 × 1014.03 × 1009.28 × 10−2
rank81053692741
f21mean2.10 × 1023.09 × 1021.59 × 1021.07 × 1021.87 × 1021.85 × 1029.83 × 1011.60 × 1021.23 × 1021.17 × 102
std4.01 × 1011.83 × 1015.28 × 1012.07 × 1013.59 × 1014.45 × 1011.39 × 1015.26 × 1014.36 × 1013.83 × 101
rank91052871643
f22mean1.79 × 1021.11 × 1039.00 × 1016.59 × 1011.00 × 1029.94 × 1016.63 × 1019.70 × 1019.76 × 1016.93 × 101
std2.01 × 1024.48 × 1023.04 × 1013.08 × 1013.20 × 10−11.41 × 1013.33 × 1011.96 × 1011.50 × 1013.71 × 101
rank91041872563
f23mean3.33 × 1024.79 × 1023.07 × 1023.10 × 1023.03 × 1023.04 × 1023.01 × 1023.08 × 1023.05 × 1022.99 × 102
std1.17 × 1014.75 × 1012.74 × 1002.82 × 1002.09 × 1004.31 × 1014.27 × 1012.01 × 1001.85 × 1004.23 × 101
rank91068342751
f24mean3.57 × 1025.03 × 1023.07 × 1021.01 × 1023.07 × 1023.02 × 1021.03 × 1022.72 × 1022.64 × 1021.61 × 102
std5.42 × 1019.03 × 1017.88 × 1012.57 × 1016.32 × 1018.28 × 1013.61 × 1011.02 × 1021.06 × 1021.03 × 102
rank91081762543
f25mean4.57 × 1021.28 × 1034.30 × 1021.98 × 1024.35 × 1024.26 × 1023.67 × 1024.12 × 1024.15 × 1023.99 × 102
std4.53 × 1013.22 × 1022.21 × 1019.96 × 1011.92 × 1012.28 × 1018.76 × 1012.09 × 1012.22 × 1016.29 × 100
rank91071862453
f26mean6.07 × 1021.83 × 1033.15 × 1021.29 × 1022.88 × 1023.15 × 1021.25 × 1023.00 × 1023.00 × 1022.41 × 102
std3.18 × 1023.53 × 1021.26 × 1029.78 × 1013.22 × 1011.26 × 1021.23 × 1020.00 × 1003.58 × 10−131.19 × 102
rank91072481563
f27mean4.16 × 1027.74 × 1023.93 × 1023.93 × 1023.94 × 1023.98 × 1023.92 × 1023.71 × 1023.93 × 1023.91 × 102
std2.87 × 1012.50 × 1026.14 × 1006.60 × 1002.21 × 1001.20 × 1012.02 × 1004.26 × 10−12.66 × 1002.26 × 100
rank91064783152
f28mean5.69 × 1021.06 × 1033.48 × 1022.63 × 1024.62 × 1024.97 × 1022.77 × 1024.37 × 1023.00 × 1022.88 × 102
std1.35 × 1025.91 × 1011.23 × 1029.49 × 1011.44 × 1021.42 × 1028.63 × 1015.83 × 1014.02 × 10−135.82 × 101
rank91051782643
f29mean3.31 × 1026.90 × 1022.53 × 1022.62 × 1022.56 × 1022.65 × 1022.54 × 1022.55 × 1022.47 × 1022.44 × 102
std7.20 × 1011.15 × 1021.68 × 1011.31 × 1019.81 × 1002.06 × 1018.87 × 1001.42 × 1016.90 × 1006.13 × 100
rank91037684521
f30mean8.15 × 1053.15 × 1075.36 × 1047.29 × 1039.08 × 1045.66 × 1054.93 × 1032.38 × 1031.90 × 1042.29 × 103
std9.41 × 1053.27 × 1071.92 × 1059.81 × 1032.64 × 1058.65 × 1054.08 × 1033.52 × 1031.15 × 1051.25 × 103
rank91064783151
Ave rank: 8.62 8.62 9.93 4.28 4.14 5.69 6.72 3.07 5.12 4.41
Final rank: 8 8 10 4 3 7 9 2 6 5
The best experimental result is shown in bold.

Appendix B

Table A2. Comparison of experimental results for 30-dimensional problems.
GPSO | FIPS | CLPSO | CCPSO-ISM | XPSO | TSLPSO | DMS-PSO | PSO-DLS | HCLDMS-PSO | MLDMS-PSO
f1mean8.63 × 1096.48 × 10101.56 × 1033.12 × 1014.13 × 1032.33× 1015.59 × 1032.60 × 1031.31 × 1035.64 × 102
std6.54 × 1097.88 × 1092.54 × 1037.39 × 1014.33 × 1032.66 × 1016.55 × 1032.80 × 1031.72 × 1037.93 × 102
rank91042718643
f3mean2.58 × 1033.91 × 1062.44 × 1031.82 × 1041.94 × 10−32.39 × 1019.26 × 1022.26 × 1013.89 × 1014.86 × 102
std9.73 × 1031.98 × 1071.13 × 1034.66 × 1039.62 × 10−31.02 × 1023.06 × 1022.93 × 1016.68 × 1012.03 × 102
rank81079136245
f4mean1.19 × 1031.66 × 1049.37 × 1014.75 × 1011.20 × 1024.60 × 1012.66 × 1015.43 × 1017.65 × 1019.34 × 101
std1.30 × 1033.77 × 1031.78 × 1012.92 × 1013.25 × 1013.51 × 1017.86 × 1002.98 × 1012.76 × 1012.02 × 101
rank91073821456
f5mean1.45 × 1024.53 × 1023.79 × 1015.09 × 1014.67 × 1014.74 × 1014.13 × 1012.21 × 1012.88 × 1013.28 × 101
std3.99 × 1012.50 × 1011.00 × 1018.18 × 1001.29 × 1011.15 × 1011.08 × 1015.48 × 1006.69 × 1001.38 × 101
rank91048675123
f6mean1.58 × 1019.83 × 1013.01 × 10−81.14 × 10−137.57 × 10−21.41 × 10−85.54 × 10−41.04 × 10−31.04 × 10−32.85 × 10−13
std7.07 × 1006.33 × 1001.54 × 10−70.00 × 1002.35 × 10−19.98 × 10−81.01 × 10−32.40 × 10−31.36 × 10−31.75 × 10−13
rank91041835672
f7mean2.12 × 1028.56 × 1027.79 × 1017.74 × 1019.05 × 1018.46 × 1011.13 × 1025.21 × 1015.59 × 1016.75 × 101
std1.21 × 1024.77 × 1011.21 × 1018.40 × 1001.87 × 1011.06 × 1012.12 × 1015.65 × 1005.99 × 1001.23 × 101
rank91054768123
f8mean1.40 × 1023.78 × 1023.72 × 1016.25 × 1014.43 × 1016.18 × 1014.06 × 1012.27 × 1012.85 × 1013.24 × 101
std3.76 × 1012.55 × 1011.14 × 1019.95 × 1001.57 × 1011.14 × 1019.73 × 1005.39 × 1006.31 × 1001.19 × 101
rank91048675123
f9mean2.25 × 1031.30 × 1042.30 × 10−21.70 × 1028.02 × 1001.12 × 1022.12 × 1001.52 × 1001.24 × 10−21.90 × 10−1
std1.38 × 1031.52 × 1036.91 × 10−21.23 × 1026.27 × 1001.92 × 1021.82 × 1001.24 × 1006.49 × 10−22.34 × 10−1
rank91028675413
f10mean3.87 × 1038.13 × 1032.93 × 1032.43 × 1032.77 × 1032.45 × 1032.77 × 1032.79 × 1032.32 × 1033.05 × 103
std6.39 × 1024.41 × 1026.94 × 1023.56 × 1025.73 × 1025.03 × 1024.98 × 1023.61 × 1023.77 × 1026.07 × 102
rank91072435618
f11mean3.28 × 1028.35 × 1034.72 × 1013.54 × 1019.19 × 1014.89 × 1013.04 × 1013.87 × 1013.80 × 1013.14 × 101
std2.71 × 1023.18 × 1032.70 × 1011.85 × 1014.35 × 1012.59 × 1017.92 × 1002.83 × 1011.57 × 1012.79 × 101
rank91063871542
f12mean6.41 × 1081.73 × 10102.07 × 1052.67 × 1051.07 × 1052.94 × 1042.30 × 1052.11 × 1048.62 × 1043.70 × 104
std9.68 × 1083.62 × 1092.09 × 1051.72 × 1052.49 × 1051.58 × 1041.89 × 1051.05 × 1047.00 × 1041.54 × 104
rank91068527143
f13mean1.44 × 1081.12 × 10101.00 × 1047.23 × 1021.33 × 1043.23 × 1031.80 × 1041.34 × 1044.68 × 1037.81 × 103
std4.62 × 1087.69 × 1099.38 × 1033.61 × 1021.27 × 1044.61 × 1031.99 × 1041.01 × 1044.69 × 1036.37 × 103
rank91051628734
f14mean5.16 × 1041.65 × 1071.26 × 1042.44 × 1044.09 × 1031.10 × 1043.16 × 1035.44 × 1033.41 × 1032.33 × 103
std8.91 × 1041.69 × 1071.13 × 1042.08 × 1044.01 × 1039.77 × 1032.49 × 1034.85 × 1034.84 × 1031.81 × 103
rank91078462531
f15mean5.11 × 1041.89 × 1094.79 × 1021.47 × 1026.09 × 1033.03 × 1025.36 × 1032.63 × 1031.56 × 1031.91 × 103
std5.25 × 1041.23 × 1093.77 × 1029.05 × 1018.20 × 1033.54 × 1027.71 × 1033.48 × 1032.09 × 1032.24 × 103
rank91031827645
f16mean1.28 × 1036.03 × 1033.58 × 1025.17 × 1026.24 × 1025.57 × 1023.25 × 1023.91 × 1023.36 × 1022.45 × 102
std3.87 × 1021.58 × 1031.48 × 1021.65 × 1021.73 × 1021.33 × 1021.94 × 1021.55 × 1021.80 × 1021.36 × 102
rank91046872531
f17mean6.23 × 1021.42 × 1049.75 × 1011.23 × 1021.44 × 1021.37 × 1021.05 × 1029.34 × 1017.01 × 1016.43 × 101
std2.84 × 1023.13 × 1044.25 × 1016.74 × 1017.74 × 1018.21 × 1015.97 × 1014.70 × 1013.55 × 1013.03 × 101
rank91046875321
f18mean5.13 × 1051.30 × 1081.44 × 1051.30 × 1051.35 × 1051.09 × 1051.65 × 1051.65 × 1051.10 × 1051.32 × 105
std1.58 × 1062.88 × 1087.50 × 1044.87 × 1048.81 × 1045.82 × 1049.31 × 1041.36 × 1056.91 × 1047.51 × 104
rank91063517824
f19mean1.19 × 1071.73 × 1092.71 × 1026.98 × 1016.27 × 1031.71 × 1026.13 × 1034.39 × 1033.54 × 1033.45 × 103
std3.60 × 1078.62 × 1084.10 × 1024.58 × 1017.33 × 1033.08 × 1026.60 × 1034.22 × 1035.14 × 1033.86 × 103
rank91031827654
f20mean5.14 × 1021.21 × 1031.52 × 1022.19 × 1021.88 × 1021.70 × 1021.47 × 1021.72 × 1021.50 × 1021.33 × 102
std1.78 × 1021.52 × 1026.63 × 1019.08 × 1017.86 × 1018.09 × 1016.29 × 1015.92 × 1017.07 × 1015.62 × 101
rank91048752631
f21mean3.68 × 1027.08 × 1022.41 × 1022.37 × 1022.43 × 1022.31 × 1022.43 × 1022.23 × 1022.29 × 1022.29 × 102
std5.00 × 1015.24 × 1019.69 × 1004.63 × 1011.08 × 1014.84 × 1011.19 × 1015.16 × 1005.95 × 1009.38 × 100
rank91065748132
f22mean3.20 × 1037.09 × 1031.00 × 1022.19 × 1024.28 × 1023.50 × 1021.59 × 1021.01 × 1021.00 × 1021.00 × 102
std1.55 × 1037.41 × 1021.42 × 10−65.63 × 1029.96 × 1027.65 × 1024.14 × 1021.26 × 1001.71 × 10−103.12 × 10−13
rank91036875421
f23mean6.40 × 1021.78 × 1033.86 × 1024.01 × 1023.98 × 1024.02 × 1024.01 × 1023.66 × 1023.76 × 1023.73 × 102
std7.11 × 1012.49 × 1021.17 × 1011.14 × 1011.54 × 1011.30 × 1011.37 × 1019.98 × 1008.36 × 1001.00 × 101
rank91047586132
f24mean7.64 × 1022.04 × 1034.73 × 1024.59 × 1024.76 × 1024.52 × 1024.73 × 1024.35 × 1024.44 × 1024.40 × 102
std9.78 × 1013.16 × 1021.42 × 1019.87 × 1013.65 × 1011.11 × 1027.24 × 1007.36 × 1008.23 × 1007.74 × 100
rank91065847132
f25mean5.76 × 1023.15 × 1033.87 × 1023.87 × 1023.94 × 1023.87 × 1023.79 × 1023.90 × 1023.87 × 1023.87 × 102
std2.63 × 1027.07 × 1029.06 × 10−11.14 × 1001.02 × 1011.18 × 1002.97 × 10−18.43 × 1006.39 × 10−17.65 × 10−1
rank91062841735
f26mean3.93 × 1039.27 × 1031.27 × 1034.32 × 1028.78 × 1027.38 × 1021.26 × 1038.27 × 1021.17 × 1031.03 × 103
std7.99 × 1029.71 × 1022.79 × 1023.40 × 1025.91 × 1026.39 × 1025.10 × 1024.36 × 1023.00 × 1023.27 × 102
rank91081427365
f27mean6.44 × 1022.60 × 1035.05 × 1025.14 × 1025.28 × 1025.15 × 1024.85 × 1025.19 × 1025.15 × 1025.04 × 102
std6.72 × 1014.08 × 1025.33 × 1004.19 × 1001.51 × 1018.02 × 1002.76 × 1016.63 × 1008.26 × 1005.28 × 100
rank91034861752
f28mean1.15 × 1035.38 × 1034.06 × 1024.05 × 1023.78 × 1023.68 × 1024.37 × 1023.49 × 1023.66 × 1023.77 × 102
std7.55 × 1028.04 × 1022.06 × 1013.40 × 1006.86 × 1016.37 × 1012.54 × 1014.97 × 1014.62 × 1014.37 × 101
rank91076538124
f29mean1.21 × 1031.09 × 1045.42 × 1025.59 × 1026.18 × 1025.87 × 1024.91 × 1025.07 × 1024.92 × 1024.85 × 102
std3.44 × 1021.05 × 1045.84 × 1015.43 × 1011.32 × 1028.19 × 1018.08 × 1015.57 × 1013.16 × 1013.22 × 101
rank91056872431
f30mean2.65 × 1072.29 × 1098.56 × 1035.45 × 1031.04 × 1049.39 × 1033.22 × 1034.06 × 1034.11 × 1033.04 × 103
std1.53 × 1081.34 × 1094.43 × 1031.56 × 1036.84 × 1039.41 × 1034.62 × 1039.37 × 1021.32 × 1038.50 × 102
rank91065872341
Ave rank: 8.97 8.97 10.00 5.03 4.72 6.52 4.55 4.93 3.97 3.28
Final rank: 9 9 10 7 5 8 4 6 3 2
The best experimental result is shown in bold.

Appendix C

Table A3. Comparison of experimental results for 50-dimensional problems.
GPSO | FIPS | CLPSO | CCPSO-ISM | XPSO | TSLPSO | DMS-PSO | PSO-DLS | HCLDMS-PSO | MLDMS-PSO
f1mean2.39 × 10101.22 × 10113.02 × 1031.24E× 1024.85 × 1038.21 × 1026.83 × 1032.35 × 1032.54 × 1031.74 × 103
std1.48 × 10108.22 × 1093.45 × 1031.70 × 1026.09 × 1038.66 × 1029.08 × 1032.39 × 1033.69 × 1031.82 × 103
rank91061728453
f3mean2.36 × 1041.96 × 1051.17 × 1044.99 × 1041.36 × 1028.22 × 1036.83 × 1038.35 × 1035.62 × 1036.22 × 103
std3.28 × 1043.50 × 1042.55 × 1036.55 × 1031.27 × 1025.71 × 1031.48 × 1033.09 × 1032.55 × 1031.34 × 103
rank81079154623
f4mean2.78 × 1034.42 × 1041.08 × 1025.75 × 1012.36 × 1027.33 × 1014.58 × 1011.07 × 1029.90 × 1019.80 × 101
std1.86 × 1036.77 × 1034.17 × 1012.84 × 1014.70 × 1014.72 × 1011.13 × 1005.01 × 1014.52 × 1013.97 × 101
rank91072831654
f5mean3.04 × 1027.56 × 1029.11 × 1011.12 × 1028.87 × 1011.28 × 1021.02 × 1025.84 × 1015.86 × 1011.06 × 102
std8.41 × 1013.99 × 1011.97 × 1011.82 × 1012.32 × 1012.81 × 1011.65 × 1011.19 × 1019.82 × 1005.37 × 101
rank91047385126
f6mean3.33 × 1011.07 × 1022.18 × 10−89.40 × 10−101.28 × 1009.79 × 10−94.33 × 10−11.51 × 10−21.00 × 10−23.36 × 10−13
std8.69 × 1004.40 × 1007.56 × 10−86.65 × 10−91.35 × 1006.28 × 10−84.29 × 10−11.58 × 10−25.22 × 10−31.68 × 10−13
rank91042837651
f7mean6.72 × 1021.48 × 1031.73 × 1021.53 × 1021.88 × 1021.82 × 1022.50 × 1021.15 × 1021.04 × 1021.46 × 102
std2.74 × 1023.08 × 1012.74 × 1011.80 × 1012.46 × 1012.26 × 1015.32 × 1011.30 × 1019.66 × 1003.37 × 101
rank91054768213
f8mean3.22 × 1027.55 × 1028.86 × 1011.11 × 1029.46 × 1011.34 × 1029.81 × 1016.47 × 1016.59 × 1011.07 × 102
std6.47 × 1015.41 × 1011.68 × 1011.96 × 1012.26 × 1012.53 × 1011.96 × 1011.14 × 1011.03 × 1014.40 × 101
rank91037485126
f9mean8.78 × 1034.40 × 1042.88 × 10−11.33 × 1031.43 × 1021.78 × 1037.91 × 1012.04 × 1011.53 × 10−13.08 × 100
std2.60 × 1033.75 × 1032.95 × 10−19.20 × 1029.22 × 1011.61 × 1035.01 × 1011.40 × 1012.52 × 10−11.80 × 100
rank91027685413
f10mean7.05 × 1031.44 × 1045.57 × 1034.91 × 1035.17 × 1035.00 × 1035.47 × 1035.29 × 1034.45 × 1035.47 × 103
std9.68 × 1025.41 × 1029.12 × 1026.76 × 1029.45 × 1027.87 × 1026.30 × 1027.48 × 1026.07 × 1026.69 × 102
rank91082437516
f11mean1.51 × 1032.43 × 1048.64 × 1019.82 × 1012.02 × 1021.33 × 1021.04 × 1028.52 × 1011.07 × 1025.44 × 101
std3.47 × 1032.41 × 1032.40 × 1012.05 × 1014.19 × 1013.85 × 1012.76 × 1014.32 × 1013.29 × 1011.50 × 101
rank91034875261
f12mean8.79 × 1099.58 × 10101.49 × 1061.63 × 1061.71 × 1064.58 × 1051.69 × 1063.13 × 1057.95 × 1056.28 × 105
std6.23 × 1091.34 × 10107.80 × 1056.13 × 1052.53 × 1063.33 × 1051.08 × 1061.42 × 1055.44 × 1052.35 × 105
rank91056827143
f13mean3.23 × 1095.26 × 10103.08 × 1031.63 × 1036.80 × 1035.87 × 1035.62 × 1031.40 × 1031.79 × 1031.08 × 103
std4.88 × 1091.46 × 10103.44 × 1031.21 × 1036.55 × 1036.50 × 1038.32 × 1031.29 × 1032.18 × 1031.19 × 103
rank91053876241
f14mean1.42 × 1062.18 × 1081.06 × 1052.22 × 1053.33 × 1045.52 × 1046.30 × 1043.05 × 1042.60 × 1042.44 × 104
std2.56 × 1061.81 × 1087.23 × 1041.02 × 1053.47 × 1045.21 × 1043.70 × 1041.84 × 1041.88 × 1041.60 × 104
rank91078456321
f15mean4.95 × 1071.27 × 10101.62 × 1034.03 × 1025.22 × 1031.39 × 1037.39 × 1034.22 × 1032.40 × 1032.56 × 103
std1.84 × 1084.95 × 1092.41 × 1031.56 × 1024.76 × 1033.19 × 1037.41 × 1033.10 × 1032.82 × 1031.93 × 103
rank91031728645
f16mean2.39 × 1031.01 × 1049.52 × 1021.06 × 1031.06 × 1031.19 × 1035.98 × 1026.64 × 1027.10 × 1025.72 × 102
std5.33 × 1021.58 × 1032.12 × 1022.15 × 1023.30 × 1022.48 × 1022.74 × 1021.96 × 1022.18 × 1021.81 × 102
rank91056782341
f17mean1.96 × 1032.61 × 1046.40 × 1027.62 × 1029.14 × 1027.46 × 1026.69 × 1025.79 × 1025.74 × 1024.62 × 102
std4.27 × 1022.38 × 1041.51 × 1021.44 × 1022.56 × 1021.66 × 1021.79 × 1021.65 × 1021.75 × 1021.24 × 102
rank91047865321
f18mean4.74 × 1063.16 × 1086.96 × 1055.87 × 1053.45 × 1052.33 × 1055.72 × 1052.60 × 1054.61 × 1055.83 × 105
std1.68 × 1072.10 × 1084.36 × 1053.64 × 1056.40 × 1052.41 × 1053.08 × 1052.43 × 1059.77 × 1052.73 × 105
rank91087315246
f19mean1.21 × 1085.79 × 1092.10 × 1035.52 × 1021.79 × 1041.25 × 1031.48 × 1041.31 × 1046.59 × 1031.24 × 104
std3.54 × 1082.15 × 1092.28 × 1037.02 × 1021.04 × 1042.31 × 1031.07 × 1046.13 × 1036.25 × 1034.97 × 103
rank91031827645
f20mean1.20 × 1032.32 × 1034.52 × 1025.96 × 1024.32 × 1026.20 × 1023.37 × 1023.77 × 1023.67 × 1021.94 × 102
std3.71 × 1022.15 × 1021.46 × 1021.51 × 1022.32 × 1021.50 × 1021.39 × 1021.60 × 1021.35 × 1027.78 × 101
rank91067582431
f21mean5.48 × 1021.22 × 1032.91 × 1023.07 × 1022.90 × 1023.29 × 1023.04 × 1022.61 × 1022.62 × 1022.81 × 102
std7.64 × 1011.14 × 1022.00 × 1012.59 × 1011.88 × 1012.65 × 1011.82 × 1011.29 × 1019.47 × 1002.98 × 101
rank91057486123
f22mean7.73 × 1031.49 × 1045.66 × 1034.77 × 1033.80 × 1034.71 × 1034.08 × 1033.27 × 1033.33 × 1033.78 × 103
std1.36 × 1034.89 × 1021.57 × 1032.23 × 1032.70 × 1032.20 × 1032.64 × 1032.65 × 1032.35 × 1033.12 × 103
rank91087465123
f23mean1.21 × 1032.51 × 1035.24 × 1025.49 × 1025.34 × 1025.69 × 1025.40 × 1024.84 × 1024.98 × 1025.07 × 102
std1.76 × 1022.07 × 1021.89 × 1011.93 × 1012.98 × 1013.69 × 1012.51 × 1011.84 × 1011.92 × 1012.71 × 101
rank91047586123
f24mean1.24 × 1033.52 × 1036.11 × 1026.81 × 1026.47 × 1026.94 × 1026.14 × 1025.41 × 1025.63 × 1025.51 × 102
std1.50 × 1025.02 × 1022.40 × 1012.45 × 1018.87 × 1013.67 × 1011.31 × 1011.39 × 1011.38 × 1011.57 × 101
rank91047685132
f25mean1.63 × 1031.48 × 1045.26 × 1025.41 × 1025.98 × 1025.36 × 1024.31 × 1025.54 × 1025.04 × 1025.22 × 102
std1.22 × 1031.59 × 1033.37 × 1011.84 × 1013.17 × 1013.45 × 1011.34 × 10−13.64 × 1012.99 × 1013.38 × 101
rank91046851723
f26mean8.51 × 1031.61 × 1042.06 × 1032.03 × 1031.41 × 1032.34 × 1032.32 × 1031.57 × 1031.69 × 1031.70 × 103
std1.68 × 1038.30 × 1022.20 × 1026.03 × 1029.26 × 1025.46 × 1023.67 × 1023.01 × 1023.77 × 1024.40 × 102
rank91065187234
f27mean1.22 × 1036.61 × 1035.63 × 1026.02 × 1027.08 × 1027.85 × 1024.98 × 1026.38 × 1026.03 × 1025.49 × 102
std2.42 × 1021.18 × 1032.29 × 1012.33 × 1018.03 × 1018.27 × 1011.03 × 1013.75 × 1013.32 × 1011.76 × 101
rank91034781652
f28mean3.95 × 1031.12 × 1044.89 × 1025.05 × 1025.36 × 1025.06 × 1024.44 × 1024.93 × 1024.78 × 1024.86 × 102
std1.92 × 1031.26 × 1032.23 × 1011.71 × 1013.42 × 1011.48 × 1013.08 × 1001.87 × 1012.35 × 1012.38 × 101
rank91046871523
f29mean2.79 × 1032.05 × 1055.91 × 1026.79 × 1029.61 × 1021.00 × 1036.72 × 1026.35 × 1025.83 × 1024.83 × 102
std7.84 × 1022.22 × 1051.12 × 1021.22 × 1022.90 × 1022.60 × 1021.60 × 1021.46 × 1021.30 × 1025.95 × 101
rank91036785421
f30mean1.89 × 1089.72 × 1098.87 × 1057.95 × 1051.96 × 1062.37 × 1061.76 × 1038.31 × 1058.49 × 1058.04 × 105
std4.32 × 1083.20 × 1091.40 × 1056.55 × 1045.10 × 1052.74 × 1061.73 × 1031.05 × 1059.43 × 1045.71 × 104
rank91062781453
Ave rank: 8.97 8.97 10.00 4.90 5.10 5.90 5.79 4.86 3.41 3.07
Final rank: 9 9 10 5 6 8 7 4 3 2
The best experimental result is shown in bold.

References

  1. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self regulating particle swarm optimization algorithm. Inf. Sci. 2015, 294, 182–202. [Google Scholar] [CrossRef]
  2. Carson, J. Genetic Algorithms: Advances in Research and Applications; Nova Science Publishers, Inc.: New York, NY, USA, 2017. [Google Scholar]
  3. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar] [CrossRef]
  4. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  5. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  6. Zhao, X.; Xu, G.; Rui, L.; Liu, D.; Liu, H.; Yuan, J. A failure remember-driven self-adaptive differential evolution with top-bottom strategy. Swarm Evol. Comput. 2019, 45, 1–14. [Google Scholar] [CrossRef]
  7. Pedro, L.; José, A.L. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation; Springer: New York, NY, USA, 2012; Volume 2002. [Google Scholar]
  8. Lim, W.H.; Mat Isa, N.A. An adaptive two-layer particle swarm optimization with elitist learning strategy. Inf. Sci. 2014, 273, 49–72. [Google Scholar] [CrossRef]
  9. Buba, A.T.; Lee, L.S. Hybrid Differential Evolution-Particle Swarm Optimization Algorithm for Multiobjective Urban Transit Network Design Problem with Homogeneous Buses. Math. Probl. Eng. 2019, 2019, 5963240. [Google Scholar] [CrossRef]
  10. Qin, Z.; Pan, D. Improved Dual-Center Particle Swarm Optimization Algorithm. Mathematics 2024, 12, 1698. [Google Scholar] [CrossRef]
  11. Liang, J.J.; Suganthan, P.N. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 124–129. [Google Scholar] [CrossRef]
  12. Wang, S.; Liu, G.; Gao, M.; Cao, S.; Guo, A.; Wang, J. Heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators. Inf. Sci. 2020, 540, 175–201. [Google Scholar] [CrossRef]
  13. Angelini, M.; Zagaglia, L.; Marabelli, F.; Floris, F. Convergence and Performance Analysis of a Particle Swarm Optimization Algorithm for Optical Tuning of Gold Nanohole Arrays. Materials 2024, 17, 807. [Google Scholar] [CrossRef]
  14. Gao, W.; Peng, X.; Guo, W.; Li, D. A Dual-Competition-Based Particle Swarm Optimizer for Large-Scale Optimization. Mathematics 2024, 12, 1738. [Google Scholar] [CrossRef]
  15. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.L.; Zhan, Z.H. Triple Archives Particle Swarm Optimization. IEEE Trans Cybern 2020, 50, 4862–4875. [Google Scholar] [CrossRef]
  16. Ye, W.; Feng, W.; Fan, S. A novel multi-swarm particle swarm optimization with dynamic learning strategy. Appl. Soft Comput. 2017, 61, 832–843. [Google Scholar] [CrossRef]
  17. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar] [CrossRef]
  18. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; Volume 3, p. 1945. [Google Scholar] [CrossRef]
  19. Zhan, Z.-H.; Zhang, J.; Li, Y.; Shi, Y.-H. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847. [Google Scholar] [CrossRef]
  20. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.-H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51. [Google Scholar] [CrossRef]
  21. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  22. Jiao, B.; Lian, Z.; Gu, X. A dynamic inertia weight particle swarm optimization algorithm. Chaos Solitons Fractals 2008, 37, 698–705. [Google Scholar] [CrossRef]
  23. Chatterjee, A.; Siarry, P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput. Oper. Res. 2006, 33, 859–871. [Google Scholar] [CrossRef]
  24. Liu, H.; Zhang, X.-W.; Tu, L.-P. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020, 152, 113353. [Google Scholar] [CrossRef]
  25. Nickabadi, A.; Ebadzadeh, M.M.; Safabakhsh, R. A novel particle swarm optimization algorithm with adaptive inertia weight. Appl. Soft Comput. 2011, 11, 3658–3670. [Google Scholar] [CrossRef]
  26. Beheshti, Z.; Shamsuddin, S.M. Non-parametric particle swarm optimization for global optimization. Appl. Soft Comput. 2015, 28, 345–359. [Google Scholar] [CrossRef]
  27. Zhang, X.; Liu, H.; Zhang, T.; Wang, Q.; Wang, Y.; Tu, L. Terminal crossover and steering-based particle swarm optimization algorithm with disturbance. Appl. Soft Comput. 2019, 85, 105841. [Google Scholar] [CrossRef]
  28. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S. Adaptive particle swarm optimization. IEEE Trans Syst Man Cybern B Cybern 2009, 39, 1362–1381. [Google Scholar] [CrossRef] [PubMed]
  29. Zhou, H.; Zhan, Z.-H.; Yang, Z.-X.; Wei, X. AMPSO: Artificial Multi-Swarm Particle Swarm Optimization. arXiv Prepr. 2020, arXiv:2004.07561. [Google Scholar]
  30. Kennedy, J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; Volume 3, p. 1931. [Google Scholar] [CrossRef]
  31. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02, Honolulu, HI, USA, 12–17 May 2002; Volume 2, p. 1671. [Google Scholar] [CrossRef]
  32. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle Swarm Optimization: A Comprehensive Survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  33. Suganthan, P.N. Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; Volume 3, p. 1958. [Google Scholar] [CrossRef]
  34. Nasir, M.; Das, S.; Maity, D.; Sengupta, S.; Halder, U.; Suganthan, P.N. A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization. Inf. Sci. 2012, 209, 16–36. [Google Scholar] [CrossRef]
  35. Xia, X.; Tang, Y.; Wei, B.; Zhang, Y.; Gui, L.; Li, X. Dynamic multi-swarm global particle swarm optimization. Computing 2020, 102, 1587–1626. [Google Scholar] [CrossRef]
  36. Liu, Z.; Nishi, T.; Dabnichki, P. Multipopulation Ensemble Particle Swarm Optimizer for Engineering Design Problems. Math. Probl. Eng. 2020, 2020, 1450985. [Google Scholar] [CrossRef]
  37. Li, Y.; Zhan, Z.-H.; Lin, S.; Zhang, J.; Luo, X. Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf. Sci. 2015, 293, 370–382. [Google Scholar] [CrossRef]
  38. Gong, Y.J.; Li, J.J.; Zhou, Y.; Li, Y.; Chung, H.S.; Shi, Y.H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans Cybern 2016, 46, 2277–2290. [Google Scholar] [CrossRef]
  39. Pant, M.; Thangaraj, R.; Grosan, C.; Abraham, A. Hybrid differential evolution—Particle Swarm Optimization algorithm for solving global optimization problems. In Proceedings of the 2008 Third International Conference on Digital Information Management, London, UK, 13–16 November 2008; pp. 18–24. [Google Scholar] [CrossRef]
  40. Tang, B.; Zhu, Z.; Luo, J. Hybridizing particle swarm optimization and differential evolution for the mobile robot global path planning. Int. J. Adv. Robot. Syst. 2016, 13, 86. [Google Scholar] [CrossRef]
  41. Jiang, S.; Ji, Z.; Shen, Y. A novel hybrid particle swarm optimization and gravitational search algorithm for solving economic emission load dispatch problems with various practical constraints. Int. J. Electr. Power Energy Syst. 2014, 55, 628–644. [Google Scholar] [CrossRef]
  42. Kıran, M.S.; Gündüz, M.; Baykan, Ö.K. A novel hybrid algorithm based on particle swarm and ant colony optimization for finding the global minimum. Appl. Math. Comput. 2012, 219, 1515–1521. [Google Scholar] [CrossRef]
  43. Raj, A.; Punia, P.; Kumar, P. A novel hybrid pelican-particle swarm optimization algorithm (HPPSO) for global optimization problem. Int. J. Syst. Assur. Eng. Manag. 2024. [Google Scholar] [CrossRef]
  44. Pawar, A.; Tiwari, N. A Novel Approach of DDOS Attack Classification with Optimizing the Ensemble Classifier Using A Hybrid Firefly and Particle Swarm Optimization (HFPSO). Int. J. Intell. Eng. Syst. 2023, 16, 201–214. [Google Scholar] [CrossRef]
  45. Zhang, M.; Long, D.; Qin, T.; Yang, J. A Chaotic Hybrid Butterfly Optimization Algorithm with Particle Swarm Optimization for High-Dimensional Optimization Problems. Symmetry 2020, 12, 1800. [Google Scholar] [CrossRef]
  46. Rezk, H.; Arfaoui, J.; Gomaa, M.R. Optimal Parameter Estimation of Solar PV Panel Based on Hybrid Particle Swarm and Grey Wolf Optimization Algorithms. Int. J. Interact. Multimed. Artif. Intell. 2021, 6, 145–155. [Google Scholar] [CrossRef]
  47. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  48. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.-H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  49. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  50. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. MLDMS-PSO flowchart.
Figure 2. The multiswarm segmentation scheme with a master–slave structure. The red stars represent master particles and the black dots represent slave particles.
Figure 3. The schematic diagram of the strategy selection functions.
Figure 4. The result of the strategy selection function with the reward and punishment factors on benchmark function f1.
Figure 5. Parameter investigation result in MLDMS-PSO.
Figure 6. Convergence progress on the unimodal functions (f1 and f3).
Figure 7. Convergence progress on the simple multimodal functions (f4–f10).
Figure 8. Convergence progress on the hybrid functions (f11–f20).
Figure 9. Convergence progress on the composition functions (f21–f30).
Table 1. The information of the benchmark function used in this paper.
No. | Function | Range | Category | fopt
f1 | Shifted and Rotated Bent Cigar Function | [−100, 100] | UN | 100
f3 | Shifted and Rotated Zakharov Function | [−100, 100] | UN | 300
f4 | Shifted and Rotated Rosenbrock’s Function | [−100, 100] | MN | 400
f5 | Shifted and Rotated Rastrigin’s Function | [−100, 100] | MN | 500
f6 | Shifted and Rotated Expanded Scaffer’s F6 Function | [−100, 100] | MN | 600
f7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | [−100, 100] | MN | 700
f8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | [−100, 100] | MN | 800
f9 | Shifted and Rotated Levy Function | [−100, 100] | MN | 900
f10 | Shifted and Rotated Schwefel’s Function | [−100, 100] | MN | 1000
f11 | Hybrid Function 1 (N = 3) | [−100, 100] | H | 1100
f12 | Hybrid Function 2 (N = 3) | [−100, 100] | H | 1200
f13 | Hybrid Function 3 (N = 3) | [−100, 100] | H | 1300
f14 | Hybrid Function 4 (N = 4) | [−100, 100] | H | 1400
f15 | Hybrid Function 5 (N = 4) | [−100, 100] | H | 1500
f16 | Hybrid Function 6 (N = 4) | [−100, 100] | H | 1600
f17 | Hybrid Function 7 (N = 5) | [−100, 100] | H | 1700
f18 | Hybrid Function 8 (N = 5) | [−100, 100] | H | 1800
f19 | Hybrid Function 9 (N = 5) | [−100, 100] | H | 1900
f20 | Hybrid Function 10 (N = 6) | [−100, 100] | H | 2000
f21 | Composition Function 1 (N = 3) | [−100, 100] | C | 2100
f22 | Composition Function 2 (N = 3) | [−100, 100] | C | 2200
f23 | Composition Function 3 (N = 4) | [−100, 100] | C | 2300
f24 | Composition Function 4 (N = 4) | [−100, 100] | C | 2400
f25 | Composition Function 5 (N = 5) | [−100, 100] | C | 2500
f26 | Composition Function 6 (N = 5) | [−100, 100] | C | 2600
f27 | Composition Function 7 (N = 6) | [−100, 100] | C | 2700
f28 | Composition Function 8 (N = 6) | [−100, 100] | C | 2800
f29 | Composition Function 9 (N = 3) | [−100, 100] | C | 2900
f30 | Composition Function 10 (N = 3) | [−100, 100] | C | 3000
Table 2. Parameter settings of the ten PSOs.
Algorithm | Parameter Settings | Year
GPSO | w = 0.9 → 0.4, c1 = c2 = 2 | 1998
FIPS | χ = 0.729, ci = 4.1 | 2004
CLPSO | w = 0.9 → 0.4, c = 1.49445 | 2006
CCPSO-ISM | P = 0.05, G = 5, w = 0.6, c = 2 | 2015
TSLPSO | w = 0.9 → 0.4, c1 = c2 = 1.49445, c3 = 0.5 → 2.5 | 2019
XPSO | η = 0.2, Stagmax = 5, p = 0.5 | 2020
DMS-PSO | w = 0.9 → 0.2, c1 = c2 = 1.49445 | 2005
PSO-DLS | w = 0.9 → 0.4, c1 = c2 = 1.49445 | 2017
HCLDMS-PSO | w = 0.99 → 0.29, c1 = 2.5 → 0.5, c2 = 0.5 → 2.5, Pm = 0.1 | 2020
MLDMS-PSO | w = 0.9 → 0.4, c1 = c2 = 1.49445, G = 12, Rc = 0.1 | –
Table 3. The comprehensive rank of MLDMS-PSO and other PSO variations for 10-D, 30-D, and 50-D problems.
Dim | Index | GPSO | FIPS | CLPSO | CCPSO-ISM | XPSO | TSLPSO | DMS-PSO | PSO-DLS | HCLDMS-PSO | MLDMS-PSO
10-D | Ave rank | 8.62 | 9.93 | 4.28 | 4.14 | 5.69 | 6.72 | 3.07 | 5.12 | 4.41 | 2.41
30-D | Ave rank | 8.97 | 10 | 5.03 | 4.72 | 6.52 | 4.55 | 4.93 | 3.97 | 3.28 | 3
50-D | Ave rank | 8.97 | 10 | 4.9 | 5.1 | 5.9 | 5.79 | 4.86 | 3.41 | 3.07 | 3
Comprehensive | Mean rank | 8.85 | 9.98 | 4.74 | 4.65 | 6.04 | 5.69 | 4.29 | 4.17 | 3.59 | 2.80
Comprehensive | Rank | 9 | 10 | 6 | 5 | 8 | 7 | 4 | 3 | 2 | 1
Table 4. Statistical results of the Wilcoxon rank sum test for 10-D, 30-D, and 50-D problems.
Comparison | Dim | N+ | N= | N− | CP
MLDMS-PSO vs. GPSO | 10-D | 28 | 0 | 1 | 27
MLDMS-PSO vs. GPSO | 30-D | 28 | 1 | 0 | 28
MLDMS-PSO vs. GPSO | 50-D | 27 | 2 | 0 | 27
MLDMS-PSO vs. FIPS | 10-D | 29 | 0 | 0 | 29
MLDMS-PSO vs. FIPS | 30-D | 29 | 0 | 0 | 29
MLDMS-PSO vs. FIPS | 50-D | 29 | 0 | 0 | 29
MLDMS-PSO vs. CLPSO | 10-D | 15 | 9 | 5 | 10
MLDMS-PSO vs. CLPSO | 30-D | 19 | 7 | 3 | 16
MLDMS-PSO vs. CLPSO | 50-D | 18 | 8 | 3 | 15
MLDMS-PSO vs. CCPSO-ISM | 10-D | 18 | 3 | 8 | 10
MLDMS-PSO vs. CCPSO-ISM | 30-D | 19 | 1 | 9 | 10
MLDMS-PSO vs. CCPSO-ISM | 50-D | 18 | 6 | 5 | 13
MLDMS-PSO vs. XPSO | 10-D | 26 | 1 | 2 | 24
MLDMS-PSO vs. XPSO | 30-D | 23 | 4 | 2 | 21
MLDMS-PSO vs. XPSO | 50-D | 20 | 7 | 2 | 18
MLDMS-PSO vs. TSLPSO | 10-D | 11 | 14 | 4 | 7
MLDMS-PSO vs. TSLPSO | 30-D | 17 | 4 | 8 | 9
MLDMS-PSO vs. TSLPSO | 50-D | 18 | 5 | 6 | 12
MLDMS-PSO vs. DMS-PSO | 10-D | 22 | 3 | 4 | 18
MLDMS-PSO vs. DMS-PSO | 30-D | 20 | 4 | 5 | 15
MLDMS-PSO vs. DMS-PSO | 50-D | 16 | 8 | 5 | 11
MLDMS-PSO vs. PSO-DLS | 10-D | 22 | 3 | 4 | 18
MLDMS-PSO vs. PSO-DLS | 30-D | 15 | 2 | 12 | 3
MLDMS-PSO vs. PSO-DLS | 50-D | 15 | 5 | 9 | 6
MLDMS-PSO vs. HCLDMS-PSO | 10-D | 21 | 6 | 2 | 19
MLDMS-PSO vs. HCLDMS-PSO | 30-D | 12 | 8 | 9 | 3
MLDMS-PSO vs. HCLDMS-PSO | 50-D | 12 | 7 | 10 | 2
Table 5. Statistical rank of the Friedman test.
Overall Rank | Comprehensive Rank: Algorithm (Mean Rank) | 10-D: Algorithm (Mean Rank) | 30-D: Algorithm (Mean Rank) | 50-D: Algorithm (Mean Rank)
1 | MLDMS-PSO (2.862) | MLDMS-PSO (2.483) | MLDMS-PSO (3.003) | MLDMS-PSO (3.000)
2 | HCLDMS-PSO (3.598) | TSLPSO (3.138) | HCLDMS-PSO (3.281) | HCLDMS-PSO (3.069)
3 | PSO-DLS (4.379) | CCPSO-ISM (4.207) | PSO-DLS (3.966) | PSO-DLS (3.414)
4 | TSLPSO (4.494) | CLPSO (4.328) | TSLPSO (4.552) | DMS-PSO (4.862)
5 | CCPSO-ISM (4.678) | HCLDMS-PSO (4.483) | CCPSO-ISM (4.724) | CLPSO (4.897)
6 | CLPSO (4.753) | DMS-PSO (5.121) | DMS-PSO (4.897) | CCPSO-ISM (5.103)
7 | DMS-PSO (4.960) | PSO-DLS (5.759) | CLPSO (5.034) | TSLPSO (5.793)
8 | XPSO (6.402) | XPSO (6.793) | XPSO (6.517) | XPSO (5.897)
9 | GPSO (8.874) | GPSO (8.690) | GPSO (8.966) | GPSO (8.966)
10 | FIPS (10.000) | FIPS (10.000) | FIPS (10.000) | FIPS (10.000)
p-value: 3.23 × 10−30 (10-D), 1.53 × 10−28 (30-D), 1.93 × 10−29 (50-D)