Article

Learning Competitive Swarm Optimization

Institute of Information Technology, Lodz University of Technology, 93-590 Lodz, Poland
Entropy 2022, 24(2), 283; https://doi.org/10.3390/e24020283
Submission received: 31 December 2021 / Revised: 14 February 2022 / Accepted: 15 February 2022 / Published: 16 February 2022

Abstract

Particle swarm optimization (PSO) is a popular method widely used in solving different optimization problems. Unfortunately, in the case of complex multidimensional problems, PSO encounters difficulties associated with the excessive loss of population diversity and exploration ability. This leads to a deterioration in the effectiveness of the method and to premature convergence. In order to prevent these inconveniences, in this paper, a learning competitive swarm optimization algorithm (LCSO) based on the particle swarm optimization method and the competition mechanism is proposed. In the first phase of LCSO, the swarm is divided into sub-swarms, each of which can work in parallel. In each sub-swarm, particles participate in a tournament. The participants of the tournament update their knowledge by learning from their competitors. In the second phase, information is exchanged between sub-swarms. The new algorithm was examined on a set of test functions. To evaluate the effectiveness of the proposed LCSO, the test results were compared with those achieved through the competitive swarm optimizer (CSO), comprehensive learning particle swarm optimizer (CLPSO), PSO, fully informed particle swarm (FIPS), covariance matrix adaptation evolution strategy (CMA-ES) and heterogeneous comprehensive learning particle swarm optimization (HCLPSO). The experimental results indicate that the proposed approach enhances the entropy of the particle swarm and improves the search process. Moreover, the LCSO algorithm is statistically significantly more efficient than the other tested methods.

1. Introduction

Swarm intelligence (SI) is a branch of artificial intelligence (AI) based on the social behavior of simple organisms occurring in natural environments [1]. The source of its inspiration was observations of the collective behavior of animals such as birds, fish, bees, bacteria, ants, squirrels and others [2,3,4,5,6,7,8]. There are a number of methods based on swarm intelligence. One of them is particle swarm optimization (PSO).
PSO was developed by Kennedy and Eberhart [9] as a stochastic method of optimization. The standard PSO is based on a swarm of particles, each of which wanders through the search space to find better solutions. The key to the success of the method is the ability to share information found by population individuals. Due to its many advantages (simplicity, easy implementation, no need for encoding or special operators) [4], the PSO method has been widely applied in solving various optimization problems, including control systems [10], prediction problems [11], image classification [12], energy management [13], bilevel programming problems [14,15], antenna design [16], scheduling problems [17,18], electromagnetism [19,20] and many others.
Unfortunately, in cases of complex, high-dimensional optimization problems with many local optima, PSO can encounter some difficulties associated with preserving a sufficient diversity of particles and maintaining a balance between exploration and exploitation. This leads to premature convergence and renders the results unsatisfactory. In order to avoid these inconveniences, various improved versions of the PSO have been developed. The improvements rely on [21]:
  • Adjustment of parameters. Shi and Eberhart [22] indicated that the performance of the PSO method depends primarily on the inertia weight; in their opinion, the best results are obtained with an inertia weight that decreases linearly from 0.9 to 0.4. Zhang et al. [23] and Niu et al. [24] proposed using a random inertia weight. In contrast, Clerc [25] suggested that all coefficients should be constant and showed that an inertia weight w = 0.729 with factors c1 = c2 = 1.494 can increase the rate of convergence. Venter and Sobieszczanski-Sobieski [26] found that the PSO method is more effective when the acceleration coefficients are constant but different; according to them, the social coefficient should be greater than the cognitive one, and they showed that c2 = 2.5 and c1 = 1.5 produce superior performance. Nonlinear, dynamically changing coefficients were proposed by Borowska [27]; in this approach, the values of the coefficients were affected by the performance of PSO and the number of iterations. To control the search process more efficiently, Ratnaweera et al. [28] recommended time-varying acceleration coefficients (TVACs). An approach based on fuzzy systems was proposed by Chen [29].
  • Topology. Topological structure has a great influence on the performance of the PSO algorithm. According to Kennedy and Mendes [30], a proper topology significantly improves the exploration ability of PSO. Lin et al. [31] and Borowska [32] indicated that the ring topology can help maintain swarm diversity and improve the algorithm’s adaptability. An approach based on a multi-swarm structure was proposed by Chen et al. [33] and Niu et al. [24]. In turn, a two-layer cascading structure was recommended by Gong et al. [34]. To alleviate premature convergence, Mendes et al. [35] introduced a fully informed swarm in which particles are updated based on the best locations of their neighbors. A PSO variant with adaptive time-varying topology connectivity (PSO-ATVTC) was developed by Lim et al. [36]. Carvalho et al. [37] proposed a particle topology based on the clan structure. The dynamic Clan PSO topology was described by Bastos-Filho et al. [38]. Shen et al. [39] proposed a multi-stage search strategy supplemented by mutual repulsion and attraction among particles; the proposed algorithm increases the entropy of the particle population and leads to a more balanced search process.
  • Combining PSO with other methods. In order to obtain higher-quality solutions and enhance the performance of PSO, many researchers merge two or more different methods or their advantageous elements. Ali et al. [40] and Sharma et al. [41] combined PSO with genetic operators. A modified particle swarm optimization algorithm with a simulated annealing (SA) strategy was proposed by Shieh et al. [42]. A PSO method with ant colony optimization (ACO) was developed by Holden et al. [43]. Cooperation of many swarms with four other methods for improving the efficiency of PSO was applied by Liu and Zhou [44]: the authors combined multi-population particle swarm optimization not only with simulated annealing but also with co-evolution theory, quantum behavior theory and a mutation strategy. A different approach was presented by Cheng et al. [45]. To improve the exploration ability of PSO, they used a multi-swarm framework combining a feedback mechanism with a convergence strategy and a mutation strategy. The proposed approach helps reach a balance between exploration and exploitation and reduces the risk of premature convergence.
  • Adaptation of the learning strategy. This approach allows particles to acquire knowledge from high-quality exemplars. In order to increase the adaptability of PSO, Ye et al. [46] developed a dynamic learning strategy. A comprehensive learning strategy based on historical knowledge about particle positions was recommended by Liang et al. [47] and was further developed by Lin et al. [48]. Instead of individual learning of particles based on their historical knowledge, Cheng and Jin [49] introduced a social learning mechanism using sorting of the swarm and learning from demonstrators (any better particles) of the current swarm; the method turned out to be effective and computationally efficient. A learning strategy with operators of the genetic algorithm (GA) and a modified updating equation based on exemplars was proposed by Gong et al. [34]. To enhance diversity and improve the efficiency of PSO, Lin et al. [31] merged PSO with genetic operators and combined them with a global learning strategy and ring topology. A learning strategy with genetic operators and an interlaced ring topology was also proposed by Borowska [32]. To improve the search process, Niu et al. [50] recommended applying learning multi-swarm PSO based on symbiosis.
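The parameter-adjustment strategies listed above can be sketched in a few lines. The following Python sketch implements a linearly decreasing inertia weight as in [22] and time-varying acceleration coefficients as in [28]; the TVAC boundary values (c1 from 2.5 to 0.5, c2 from 0.5 to 2.5) are commonly cited defaults and are taken here as an assumption, not as the paper's own settings.

```python
def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start to w_end [22]."""
    return w_start - (w_start - w_end) * t / t_max

def tvac(t, t_max, c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Time-varying acceleration coefficients (TVAC) [28]:
    the cognitive coefficient c1 decreases while the social
    coefficient c2 increases over the course of the run."""
    frac = t / t_max
    c1 = c1_start + (c1_end - c1_start) * frac
    c2 = c2_start + (c2_end - c2_start) * frac
    return c1, c2
```

Early in the run, the large inertia weight and cognitive coefficient favor exploration; toward the end, the small inertia weight and large social coefficient favor exploitation around the best-known region.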
In order to improve the performance of the particle swarm optimization method, Cheng et al. [21] introduced a competitive swarm optimizer (CSO) based on PSO. In CSO, neither the personal best position of particles nor the global best position is required. Instead, a simple pairwise competition mechanism within a single swarm was introduced. Particles do not need knowledge about their historical positions, as they learn only from the winner.
Unfortunately, although the CSO method has better search capability than traditional PSO, it does not always perform well or obtain the expected results for complex optimization problems. The difficulties are associated with losing population diversity too quickly and with maintaining a balance between exploration and exploitation. This leads to a deterioration in the effectiveness of the method and to premature convergence.
To reduce these inconveniences, ensure the diversity of particles and limit the risk of getting stuck in local optima, in this paper, a new learning competitive swarm optimization algorithm called LCSO is presented. The proposed approach is based on the particle swarm optimization method (PSO) and a competition concept. In LCSO, particles do not use information about their personal best position or the global best particle in the swarm; instead, a competition mechanism is applied, but in a different way than in CSO. In LCSO, the swarm is divided into sub-swarms, each of which can work independently. In each sub-swarm, three particles participate in a tournament. The participants who lost the tournament learn from their competitors, and the winners take part in a tournament between sub-swarms. The new algorithm was examined on a set of test functions. To evaluate the effectiveness of the proposed LCSO, the test results were compared with those achieved through the competitive swarm optimizer (CSO) [21], comprehensive learning particle swarm optimizer (CLPSO) [47], PSO [51], fully informed particle swarm (FIPS) [35], the covariance matrix adaptation evolution strategy (CMA-ES) [52] and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [53].

2. The PSO Method

As mentioned in the introduction, particle swarm optimization is a method based on swarm intelligence and the collective behavior of animal societies. It was first proposed by Kennedy and Eberhart [9] as a new, simple optimization tool. Owing to its effectiveness, it has come to be regarded as a powerful optimization method and a competitive alternative to genetic algorithms and other artificial intelligence tools [1].
In PSO, the optimization process is performed by a population of individuals called a swarm of particles. The swarm consists of N particles that move in the D-dimensional search space. Each particle within the swarm can be considered a point of the search space determined by two vectors: the position vector Xi = (xi1, xi2, …, xiD) and the velocity vector Vi = (vi1, vi2, …, viD). Both vectors are randomly generated in the first step of the PSO algorithm. Each particle roams through the search space according to its velocity vector Vi and remembers the personal best position pbesti = (pbesti1, pbesti2, …, pbestiD) that it has found along its way, as well as the best position gbest = (gbest1, gbest2, …, gbestD) found in the entire swarm. The particles are also evaluated on their quality, which is measured by the objective function of the optimized task. Each particle's movement through the search space is governed by the velocity equation:
V_i(t+1) = w·V_i(t) + c1·r1·(pbest_i − X_i(t)) + c2·r2·(gbest − X_i(t))   (1)
The change of the particle position is realized by adding the velocity vector to its position vector according to the given equation:
X_i(t+1) = X_i(t) + V_i(t+1)   (2)
where t is the iteration number, w is the inertia weight (which determines the impact of the previous velocity of the particle on its current velocity), c1 and c2 are acceleration coefficients, and r1 and r2 are two random numbers uniformly distributed between 0 and 1. The pseudocode of the standard PSO method is presented in Algorithm 1.
Algorithm 1 Pseudo code of the PSO algorithm.
   Determine the size of the swarm
    for j = 1 to size of the swarm do
     Generate an initial position and velocity of the particle,
     Evaluate the particle;
     Determine the pbest (personal best position) of the particle
    end
    Select the gbest (the best position) found in the swarm
    while termination criterion is not met do
     for j = 1 to size of the swarm do
      Update the velocity and position of each particle according to Equations (1) and (2)
      Evaluate new position of the particle;
      Update the pbest (personal best position) of each particle
     end
     Update the gbest (the best position) found in the swarm
    end
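The pseudocode above can be condensed into a short, runnable sketch. The following Python implementation is a minimal illustration, not the paper's code; the constriction-style parameter values w = 0.729 and c1 = c2 = 1.494 are those attributed to Clerc [25], and the bounds, swarm size and iteration count are illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=30, w=0.729, c1=1.494, c2=1.494,
        bounds=(-5.0, 5.0), iters=200, seed=0):
    """Minimal standard PSO following Equations (1) and (2), minimizing f."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Equation (1): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Equation (2): move the particle
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function, a common unimodal benchmark.
sphere = lambda x: sum(xi * xi for xi in x)
best, val = pso(sphere, dim=5)
```

On the 5-dimensional sphere function, this sketch converges close to the origin within a few hundred iterations.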

3. The Proposed LCSO Algorithm

The proposed LCSO (learning competitive swarm optimization) is a two-stage learning-based method that combines particle swarm optimization with a competition concept. In the first stage, particles take part in tournaments within sub-swarms; in the second stage, tournaments are held between sub-swarms. Although LCSO is based on PSO, the knowledge acquisition mechanism is different: in LCSO, particles store no historical information about either their personal best position or the global best particle in the swarm. Instead, particles derive knowledge from their better competitors.
In the first stage of LCSO, the swarm of N particles is divided into p sub-swarms. In every iteration, each particle within a sub-swarm participates in a tournament; within the sub-swarms, the tournaments are organized independently. Three randomly selected particles participate in each tournament, and a particle takes part in a competition only once per iteration. The best particle of the tournament (the winner) goes to the next iteration without updating. The particle that finished the tournament in second place (the runner-up) learns from the winner according to the equations:
V_s(t+1) = r1·V_s(t) + r2·(X_w(t) − X_s(t))   (3)
X_s(t+1) = X_s(t) + V_s(t+1)   (4)
The particle that took last place (the loser) learns from both of its competitors according to the following formulas:
V_l(t+1) = r1·V_l(t) + r2·(X_w(t) − X_l(t)) + r3·(X_s(t) − X_l(t))   (5)
X_l(t+1) = X_l(t) + V_l(t+1)   (6)
where Xw is the position of the best particle (the winner), Vs and Xs are the velocity and position of the particle that took second place (the runner-up), Vl and Xl are the velocity and position of the particle that took last place (the loser), t is the iteration number, and r1, r2 and r3 are random numbers uniformly distributed in the range [0, 1].
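The single-tournament update of Equations (3)–(6) can be sketched as follows. This is an illustrative Python sketch under the assumption of minimization; the particle representation (a dict with position, velocity and fitness) is a choice made here, not the paper's.

```python
import random

def tournament_update(particles, rng=random):
    """One LCSO tournament among three particles (Equations (3)-(6)).

    Each particle is a dict with 'x' (position), 'v' (velocity) and
    'fit' (fitness, lower is better). The winner is left untouched;
    the runner-up learns from the winner, and the loser learns from
    both the winner and the runner-up's pre-update position."""
    w, s, l = sorted(particles, key=lambda p: p['fit'])
    s_old = s['x'][:]  # Equation (5) uses the runner-up's position X_s(t)
    for d in range(len(w['x'])):
        r1, r2 = rng.random(), rng.random()
        # Equations (3)-(4): runner-up learns from the winner
        s['v'][d] = r1 * s['v'][d] + r2 * (w['x'][d] - s['x'][d])
        s['x'][d] += s['v'][d]
        r1, r2, r3 = rng.random(), rng.random(), rng.random()
        # Equations (5)-(6): loser learns from winner and runner-up
        l['v'][d] = (r1 * l['v'][d] + r2 * (w['x'][d] - l['x'][d])
                     + r3 * (s_old[d] - l['x'][d]))
        l['x'][d] += l['v'][d]
    return w, s, l
```

Because the winner is never modified, the best solution found so far in a tournament cannot be lost, while the two losing particles are pulled toward better regions of the search space.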
Thus, in a sub-swarm of M particles, only two out of every three tournament participants are updated, while one in three particles of each sub-swarm goes to the next stage without updating. The tournaments are then organized between sub-swarms: from each sub-swarm, one particle is selected from the set of winners, and the selected particles compete in tournaments of three. The particle that finished the tournament in second place (the runner-up) learns from the winner according to Equations (3) and (4), and the particle that took last place (the loser) learns from its competitors according to Equations (5) and (6). Next, the tournaments are organized again in the sub-swarms. Figure 1 illustrates the learning concept of LCSO.
This means that, among the particles participating in the competition between sub-swarms, only two out of three are updated, while one-third go to the next iteration without updating.
The proposed sub-swarm topology limits the excessive loss of swarm diversity and helps maintain a balance between exploration and exploitation. The competition strategy within each sub-swarm promotes interaction among its particles, and the learning strategy ensures the exchange of information between them. The tournament among the winners ensures that information is shared between sub-swarms. In this way, a sub-swarm that falls into a local optimum has a chance to jump out of it by learning from the winners of other sub-swarms.
The pseudo code of the LCSO method is summarized in Algorithm 2.
Algorithm 2 Pseudo code of the LCSO algorithm. FEs is the number of fitness evaluations. The termination condition is the maximum number of fitness evaluations (maxFEs).
  Determine the size of the swarm;
  Determine the number of sub-swarms;
  Determine number of particles in the sub-swarms;
  Randomly initialize position and velocity of the particles in sub-swarms;
  t = 0;
  while termination criterion (FEs ≤ maxFEs) is not met do
   Evaluate the particles’ fitness;
   In parallel, in each sub-swarm, organize tournaments independently:
    while sub-swarm ≠ Ø do
     Randomly select three particles, compare their fitness and determine winner Xw, runner-up Xs and loser Xl
     Update runner-up’s position according to (3) and (4)
     Update loser’s position according to (5) and (6)
    end
   Organize tournament between sub-swarms:
   Randomly select one particle from the winners of each sub-swarm
    while set of selected winners ≠ Ø do
     Randomly select three particles
     Compare the fitness of the selected particles to determine winner Xw, runner-up Xs and loser Xl
     Update runner-up’s position according to (3) and (4)
     Update loser’s position according to (5) and (6)
    end
   t = t + 1;
  end
  output the best solution
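The full two-stage loop of Algorithm 2 can be sketched as below. This is a compact Python illustration under stated assumptions (minimization; sub-swarm sizes divisible by three; iteration count as the termination criterion rather than fitness evaluations; the `learn` helper and particle representation are choices made here, not the paper's).

```python
import random

def lcso(f, dim, n_subswarms=3, subswarm_size=24,
         bounds=(-5.0, 5.0), iters=300, seed=1):
    """Sketch of the two-stage LCSO loop (Algorithm 2), minimizing f."""
    rng = random.Random(seed)
    lo, hi = bounds
    swarms = [[{'x': [rng.uniform(lo, hi) for _ in range(dim)],
                'v': [0.0] * dim, 'fit': float('inf')}
               for _ in range(subswarm_size)]
              for _ in range(n_subswarms)]

    def learn(p, teachers):
        # Equations (3)-(6): inertia term plus attraction to each teacher.
        for d in range(dim):
            v = rng.random() * p['v'][d]
            for t in teachers:
                v += rng.random() * (t[d] - p['x'][d])
            p['v'][d] = v
            p['x'][d] += v

    def tournament(group):
        w, s, l = sorted(group, key=lambda p: p['fit'])
        s_old = s['x'][:]
        learn(s, [w['x']])           # runner-up learns from the winner
        learn(l, [w['x'], s_old])    # loser learns from both competitors
        return w                     # winner passes through unchanged

    for _ in range(iters):
        all_winners = []
        for sw in swarms:
            for p in sw:
                p['fit'] = f(p['x'])
            # Stage 1: disjoint random triples within each sub-swarm.
            order = rng.sample(sw, len(sw))
            winners = [tournament(order[i:i + 3])
                       for i in range(0, len(order) - 2, 3)]
            all_winners.append(winners)
        # Stage 2: one random winner per sub-swarm competes across swarms.
        reps = [rng.choice(ws) for ws in all_winners if ws]
        if len(reps) >= 3:
            for p in reps:
                p['fit'] = f(p['x'])
            tournament(rng.sample(reps, 3))
    best = min((p for sw in swarms for p in sw), key=lambda p: p['fit'])
    return best['x'], f(best['x'])

# Example: minimize the sphere function with 3 sub-swarms of 24 particles.
x, val = lcso(lambda v: sum(t * t for t in v), dim=5)
```

Note that winners are re-evaluated before the stage-2 tournament because the stage-1 updates may have changed other particles but not them; in a fitness-evaluation-budgeted implementation, these evaluations would be counted against maxFEs.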

4. Results

The tests of the proposed algorithm were performed on a set of benchmark functions, sixteen of which are presented in this article and described in Table 1.
The first five test functions (f1–f5) are unimodal, the next six (f6–f11) are multimodal, and the remaining five (f12–f16) are rotated multimodal functions. The performance of the presented LCSO algorithm on all functions was compared with the results of the competitive swarm optimizer (CSO) [21], comprehensive learning particle swarm optimizer (CLPSO) [47], particle swarm optimization (PSO) [51], fully informed particle swarm (FIPS) [35], covariance matrix adaptation evolution strategy (CMA-ES) [52] and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [53].
The experiments were conducted on a PC with an Intel Core i7-3632QM 2.2 GHz CPU and Microsoft Windows 10 64-bit operating system. The LCSO was implemented in C with Microsoft Visual Studio 2010 Enterprise.
The parameter settings of the comparison algorithms are listed in Table 2. The details of algorithms used for comparison can be found in [21,35,47,51,53].
For all tested functions, the experiments with dimension size D = 30 were conducted. The population size of all algorithms is N = 72, whereas the maximum number of evaluations is 10,000. The number of sub-swarms p = 3. The exemplary results of the tests are summarized in Table 3, and the presented values were averaged over 32 runs. The best results are shown in bold.
To facilitate understanding of the results, the comparison of the algorithm’s effectiveness on a logarithmic scale is shown in Figure 2 and Figure 3. In order to compare the convergence rate of the tested algorithms, the convergence curves on six representative functions were plotted and presented in Figure 4 and Figure 5. To evaluate the effectiveness of the proposed method, a statistical t-test was conducted. For all comparisons, a confidence level of 0.05 was used. The t-values between LCSO and the other considered algorithms for 30 dimensions are presented in Table 4.
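The t-values in Table 4 come from a two-sample t-test over repeated runs. The exact variant used is not stated in the text, so the following Python sketch assumes the common pooled-variance form for equal sample sizes (here, 32 runs per algorithm), with 2n − 2 degrees of freedom.

```python
from math import sqrt
from statistics import mean, stdev

def t_value(a, b):
    """Two-sample t statistic for equal-size samples a and b,
    using the pooled sample variance. A negative value means
    the mean of a is lower (better, for minimization) than b's."""
    n = len(a)
    pooled_var = (stdev(a) ** 2 + stdev(b) ** 2) / 2.0
    return (mean(a) - mean(b)) / sqrt(2.0 * pooled_var / n)
```

The resulting statistic would then be compared against the critical value of Student's t distribution with 2n − 2 degrees of freedom at the chosen 0.05 significance level.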
The results of the tests indicate that the LCSO method with the proposed approach is effective as it achieved superior performance over the other tested algorithms.
For the unimodal functions, LCSO obtained the best results among all the algorithms (although its standard deviation values were not always the lowest); only for f4 were the results of LCSO and CSO comparable, with LCSO proving more stable. The weakest results were obtained by CLPSO for f1, by FIPS for f2 and f3, and by PSO for f5.
For the multimodal functions f6, f9, f10, f11, f12, f14, f15 and f16, LCSO also achieved the best outcomes and was more stable than the other tested algorithms. For f7 and f13, LCSO achieved higher performance than the other methods but was not as stable: for f7, HCLPSO turned out to be the most stable method, and for f13, CSO was more stable than LCSO. For f8, LCSO performed worse than CSO and HCLPSO but much better than CLPSO, PSO and FIPS. For f16, the results gained by CSO were also much better than those of CLPSO, PSO, FIPS and even HCLPSO, but not as good as those of LCSO. In all the tests, PSO was the least effective, as it obtained the poorest results; its particles moved irregularly and tended to fall into and stagnate in local minima.
The convergence curves presented in Figure 4 and Figure 5 indicate that, for unimodal functions, LCSO converged more slowly in the early stage than most of the compared methods; at this stage, CSO converged at the highest rate. After about 3000 iterations, however, LCSO surpassed CSO and the other tested methods. For multimodal functions, LCSO also converged more slowly than CSO in the early stage, but faster than CLPSO, PSO, FIPS and HCLPSO. In the middle stage, LCSO surpassed CSO and converged faster than the other algorithms (except on f8), although it needed as many as 6000 iterations to do so. For f8, LCSO slowed down in the middle stage and ultimately converged more slowly than CSO and HCLPSO.
The obtained results confirm that the proposed method is effective and efficient. The advantages of the proposed LCSO method include:
  • The use of sub-swarms helps maintain the diversity of the population and keeps the balance between global exploration and local exploitation;
  • Particles learning from the winners can effectively search the space for better positions;
  • Good information found by a sub-swarm is not lost;
  • Particles can learn from useful information found by other sub-swarms;
  • In each iteration, the position and velocity of only two out of three particles are updated, which significantly reduces the cost of computations;
  • Particles do not need to remember their personal best positions; instead, the competition mechanism is applied;
  • LCSO obtains better results and faster convergence than the other tested algorithms.
Based on the t-test results summarized in Table 4, it can be concluded that, at the 95% confidence level, the performance of the proposed learning competitive swarm optimization (LCSO) algorithm is statistically significantly better than that of the other methods.
In order to assess the effectiveness of the LCSO method for a larger dimension of the search space, functions with dimensions of 100, 500 and 1000 were investigated. The tests were performed with a population of N = 99 particles. The maximum number of evaluations was 80,000. The results of the tests were averaged over 10 runs and are presented in Table 5.
It should be noted that, despite the increase in search-space dimensionality, the proposed LCSO method remains effective for functions f1, f6, f7, f8, f9, f12, f13 and f16. An increase in the number of dimensions requires a larger population, which in turn entails higher computation costs.

5. Conclusions

In this paper, a new learning competitive swarm optimization algorithm (LCSO) based on the particle swarm optimization method and a competition mechanism was proposed. In LCSO, particles do not have to remember their personal best positions; instead, they participate in tournaments. The tournaments take place within sub-swarms as well as between them. The participants of a tournament update their positions by learning from their competitors.
The efficiency of the LCSO method was tested on a set of benchmark functions. The test results were then compared with those of six other methods: CSO, CLPSO, PSO, CMA-ES, FIPS and HCLPSO. The obtained results indicate that the proposed LCSO is faster and more effective than the other examined algorithms. The competition mechanism used in LCSO helps maintain particle diversity and improves the swarm's exploration ability.
Future work will focus on the implementation of the LCSO algorithm in real-world optimization problems. The application of the LCSO method in machine learning and image compression is also a promising area of research.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Kennedy, J.; Eberhart, R.C.; Shi, Y. Swarm Intelligence; Morgan Kaufmann Publishers: San Francisco, CA, USA, 2001.
  2. Xu, X.; Hao, J.; Zheng, Y. Multi-objective artificial bee colony algorithm for multi-stage resource leveling problem in sharing logistics network. Comput. Ind. Eng. 2020, 142, 106338.
  3. Lei, D.; Liu, M. An artificial bee colony with division for distributed unrelated parallel machine scheduling with preventive maintenance. Comput. Ind. Eng. 2020, 141, 106320.
  4. Borowska, B. An improved CPSO algorithm. In Proceedings of the International Scientific and Technical Conference Computer Sciences and Information Technologies CSIT, Lviv, Ukraine, 6–10 September 2016; pp. 1–3.
  5. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. 2005, 344, 243–278.
  6. Nanda, S.J.; Panda, G. A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 2014, 16, 1–18.
  7. Ismkhan, H. Effective heuristics for ant colony optimization to handle large-scale problems. Swarm Evol. Comput. 2017, 32, 140–149.
  8. Jain, M.; Singh, V.; Rani, A. A novel nature inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175.
  9. Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  10. You, Z.; Lu, C. A heuristic fault diagnosis approach for electro-hydraulic control system based on hybrid particle swarm optimization and Levenberg–Marquardt algorithm. J. Ambient Intell. Humaniz. Comput. 2018.
  11. Yu, J.; Mo, B.; Tang, D.; Liu, H.; Wan, J. Remaining useful life prediction for lithium-ion batteries using a quantum particle swarm optimization-based particle filter. Qual. Eng. 2017, 29, 536–546.
  12. Fernandes Junior, F.E.; Yen, G. Particle swarm optimization of deep neural networks architectures for image classification. Swarm Evol. Comput. 2019, 49, 62–74.
  13. Ignat, A.; Lazar, E.; Petreus, D. Energy Management for an Islanded Microgrid Based on Particle Swarm Optimization. In Proceedings of the International Symposium for Design and Technology of Electronics Packages, Cluj-Napoca, Romania, 23–26 October 2019.
  14. Abo-Elnaga, Y.; Nasr, S. Modified Evolutionary Algorithm and Chaotic Search for Bilevel Programming Problems. Symmetry 2020, 12, 767.
  15. Goshu, N.N.; Kassa, S.M. A systematic sampling evolutionary (SSE) method for stochastic bilevel programming problems. Comput. Oper. Res. 2020, 120, 104942.
  16. Zhang, X.; Lu, D.; Zhang, X.; Wang, Y. Antenna array design by a contraction adaptive particle swarm optimization algorithm. EURASIP J. Wirel. Commun. Netw. 2019, 57.
  17. Hu, Z.; Chang, J.; Zhou, Z. PSO Scheduling Strategy for Task Load in Cloud Computing. Hunan Daxue Xuebao/J. Hunan Univ. Nat. Sci. 2019, 46, 117–123.
  18. Chen, X.; Xiao, S. Multi-Objective and Parallel Particle Swarm Optimization Algorithm for Container-Based Microservice Scheduling. Sensors 2021, 21, 6212.
  19. Nadolski, S.; Borowska, B. Application of the PSO algorithm with sub-domain approach for the optimization of radio telescope array. J. Appl. Comput. Sci. 2008, 16, 7–14.
  20. Michaloglou, A.; Tsitsas, N.L. Feasible Optimal Solutions of Electromagnetic Cloaking Problems by Chaotic Accelerated Particle Swarm Optimization. Mathematics 2021, 9, 2725.
  21. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2015, 45, 191–204.
  22. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1945–1949.
  23. Zhang, L.; Yu, H.; Hu, S. A new approach to improve particle swarm optimization. In Proceedings of the International Conference on Genetic and Evolutionary Computation, Chicago, IL, USA, 12–16 July 2003; Springer: Berlin, Germany, 2003; pp. 134–139.
  24. Niu, B.; Zhu, Y.; He, X.; Wu, H. MCPSO: A multi-swarm cooperative particle swarm optimizer. Appl. Math. Comput. 2007, 185, 1050–1062.
  25. Clerc, M. The swarm and the queen: Towards a deterministic and adaptive particle swarm optimization. In Proceedings of the ICEC, Washington, DC, USA, 6–9 July 1999; pp. 1951–1957.
  26. Venter, G.; Sobieszczanski-Sobieski, J. Particle swarm optimization. In Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structure, Structure Dynamics and Materials Conference, Denver, CO, USA, 22–25 April 2002; pp. 22–25.
  27. Borowska, B. Social strategy of particles in optimization problems. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2020; Volume 991, pp. 537–546.
  28. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255.
  29. Chen, T.; Shen, Q.; Su, P.; Shang, C. Fuzzy rule weight modification with particle swarm optimization. Soft Comput. 2016, 20, 2923–2937.
  30. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1671–1676.
  31. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Global genetic learning particle swarm optimization with diversity enhanced by ring topology. Swarm Evol. Comput. 2019, 44, 571–583.
  32. Borowska, B. Genetic learning particle swarm optimization with interlaced ring topology. In Lecture Notes in Computer Science, Proceedings of the Computational Science—ICCS 2020, Amsterdam, The Netherlands, 3–5 June 2020; Krzhizhanovskaya, V.V., Závodszky, G., Lees, H., Eds.; Springer: Cham, Switzerland, 2020; Volume 12141, pp. 136–148.
  33. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Wu, Q.T. Dynamic multi-swarm differential learning particle swarm optimizer. Swarm Evol. Comput. 2018, 39, 209–221.
  34. Gong, Y.J.; Li, J.J.; Zhou, Y.C.; Li, Y.; Chung, H.S.H.; Shi, Y.H.; Zhang, J. Genetic learning particle swarm optimization. IEEE Trans. Cybern. 2016, 46, 2277–2290.
  35. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210.
  36. Lim, W.H.; Isa, N.A.M. Particle swarm optimization with adaptive time-varying topology connectivity. Appl. Soft Comput. 2014, 24, 623–642.
  37. Carvalho, D.F.; Bastos-Filho, C.J.A. Clan Particle Swarm Optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 3044–3051.
  38. Bastos-Filho, C.J.A.; Carvalho, D.F.; Figueiredo, E.M.N.; Miranda, P.B.C. Dynamic Clan Particle Swarm Optimization. In Proceedings of the Ninth International Conference on Intelligent Systems Design and Applications, Pisa, Italy, 30 November–2 December 2009; pp. 249–254.
  39. Shen, Y.; Cai, W.; Kang, H.; Sun, X.; Chen, Q.; Zhang, H. A Particle Swarm Algorithm Based on a Multi-Stage Search Strategy. Entropy 2021, 23, 1200.
  40. Ali, A.F.; Tawhid, M.A. A hybrid particle swarm optimization and genetic algorithm with population partitioning for large scale optimization problems. Ain Shams Eng. J. 2017, 8, 191–206.
  41. Sharma, M.; Chhabra, J.K. Sustainable automatic data clustering using hybrid PSO algorithm with mutation. Sustain. Comput. Inform. Syst. 2019, 23, 144–157.
  42. Shieh, H.L.; Kuo, C.C.; Chiang, C.M. Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification. Appl. Math. Comput. 2011, 218, 4365–4383.
  43. Holden, N.; Freitas, A. A hybrid particle swarm/ant colony algorithm for the classification of hierarchical biological data. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 100–107. [Google Scholar]
  44. Liu, F.; Zhou, Z. An improved QPSO algorithm and its application in the high-dimensional complex problems. Chomometrics Intell. Lab. Syst. 2014, 132, 82–90. [Google Scholar] [CrossRef]
  45. Cheng, R.; Sun, C.; Jin, Y. A multi-swarm evolutionary framework based on a feedback mechanism. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 718–724. [Google Scholar] [CrossRef]
  46. Ye, W.; Feng, W.; Fan, S. A novel multi-swarm particle swarm optimization with dynamic learning strategy. Appl. Soft Comput. 2017, 61, 832–843. [Google Scholar] [CrossRef]
  47. Liang, J.; Qin, K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  48. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Adaptive comprehensive learning particle swarm optimization with cooperative archive. Appl. Soft Comput. J. 2019, 77, 533–546. [Google Scholar] [CrossRef]
  49. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar]
  50. Niu, B.; Huang, H.; Tan, L.; Duan, Q. Symbiosis-based alternative learning multi-swarm particle swarm optimization. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 14, 4–14. [Google Scholar] [CrossRef] [PubMed]
  51. Shi, Y.; Eberhart, R. A Modified Particle Swarm Optimizer; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  52. Igel, C.; Hansen, N.; Roth, S. Covariance matrix adaptation for multi-objective optimization. Evol. Comput. 2007, 15, 1–28. [Google Scholar] [CrossRef] [PubMed]
  53. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
Figure 1. Learning LCSO with 3 sub-populations.
Figure 2. Average value of the objective function (f1–f3) for the indicated algorithm.
Figure 3. Average value of the objective function (f4–f16) for the indicated algorithm.
Figure 4. Convergence curve for: (a) f1 function; (b) f2 function; (c) f3 function; (d) f4 function; (e) f5 function; (f) f6 function.
Figure 5. Convergence curve for: (a) f7 function; (b) f8 function; (c) f10 function; (d) f12 function; (e) f13 function; (f) f16 function.
Table 1. Test functions.
| Function | Formula | Fmin | Range |
|---|---|---|---|
| Sphere | $f_1 = \sum_{i=1}^{n} x_i^2$ | 0 | $[-100, 100]^n$ |
| Schwefel 1.2 | $f_2 = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 0 | $[-100, 100]^n$ |
| Quartic | $f_3 = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 0 | $[-100, 100]^n$ |
| Schwefel 2.22 | $f_4 = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 0 | $[-10, 10]^n$ |
| Rosenbrock | $f_5 = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | 0 | $[-30, 30]^n$ |
| Griewank | $f_6 = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 0 | $[-600, 600]^n$ |
| Rastrigin | $f_7 = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 0 | $[-5.12, 5.12]^n$ |
| Ackley | $f_8 = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 0 | $[-32, 32]^n$ |
| Zakharov | $f_9 = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} \tfrac{i}{2} x_i \right)^2 + \left( \sum_{i=1}^{n} \tfrac{i}{2} x_i \right)^4$ | 0 | $[-10, 10]^n$ |
| Schwefel | $f_{10} = 418.9829 \cdot n - \sum_{i=1}^{n} x_i \sin\left( \lvert x_i \rvert^{0.5} \right)$ | 0 | $[-400, 400]^n$ |
| Weierstrass | $f_{11} = \sum_{i=1}^{n} \sum_{k=0}^{k_{max}} \left[ a^k \cos\left( 2\pi b^k (x_i + 0.5) \right) \right] - n \sum_{k=0}^{k_{max}} \left[ a^k \cos\left( 2\pi b^k \cdot 0.5 \right) \right]$, with $a = 0.5$, $b = 3$, $k_{max} = 20$ | 0 | $[-0.5, 0.5]^n$ |
| Rotated Rastrigin | $f_{12} = \sum_{i=1}^{n} \left[ z_i^2 - 10 \cos(2\pi z_i) + 10 \right]$, $z = x \cdot M$ | 0 | $[-5.12, 5.12]^n$ |
| Rotated Griewank | $f_{13} = \frac{1}{4000} \sum_{i=1}^{n} z_i^2 - \prod_{i=1}^{n} \cos\left( \frac{z_i}{\sqrt{i}} \right) + 1$, $z = x \cdot M$ | 0 | $[-600, 600]^n$ |
| Rotated Schwefel | $f_{14} = 418.9829 \cdot n - \sum_{i=1}^{n} y_i$, where $y_i = x_i \sin\left( \lvert x_i \rvert^{0.5} \right)$ if $\lvert x_i \rvert \le 500$ and $y_i = 0.001 \left( \lvert x_i \rvert - 500 \right)^2$ if $\lvert x_i \rvert > 500$, for $i = 1, 2, \ldots, n$; $x = x' + 420.96$, $x' = M z - 420.96$ | 0 | $[-400, 400]^n$ |
| Rotated Weierstrass | $f_{15} = \sum_{i=1}^{n} \sum_{k=0}^{k_{max}} \left[ a^k \cos\left( 2\pi b^k (y_i + 0.5) \right) \right] - n \sum_{k=0}^{k_{max}} \left[ a^k \cos\left( 2\pi b^k \cdot 0.5 \right) \right]$, with $a = 0.5$, $b = 3$, $k_{max} = 20$, $y = M \cdot x$ | 0 | $[-0.5, 0.5]^n$ |
| Rotated Ackley | $f_{16} = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} z_i^2} \right) - \exp\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi z_i) \right) + 20 + e$, $z = x \cdot M$ | 0 | $[-32, 32]^n$ |
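To make the benchmark definitions in Table 1 concrete, the sketch below implements three of them (Sphere f1, Rastrigin f7 and Ackley f8) for an n-dimensional input. The vectorized NumPy form is an illustration, not code from the paper; each function attains its global minimum of 0 at the origin.

```python
import numpy as np

def sphere(x):
    # f1: sum of squares; unimodal, global minimum 0 at x = 0
    return np.sum(x ** 2)

def rastrigin(x):
    # f7: highly multimodal; global minimum 0 at x = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    # f8: multimodal with a nearly flat outer region; global minimum 0 at x = 0
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

x0 = np.zeros(30)
print(sphere(x0), rastrigin(x0), ackley(x0))  # all (numerically) zero at the optimum
```

Evaluating these closed forms at the known optimum is a quick sanity check before plugging them into any of the compared optimizers.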
Table 2. Parameter settings of algorithms.
| Algorithm | Inertia weight w | Acceleration coefficients c1, c2, c | Other parameters |
|---|---|---|---|
| CLPSO | w = 0.9–0.4 | c = 1.496 | – |
| FIPS | – | c = 2.05 | χ = 0.729 |
| PSO | w = 0.9–0.4 | c1 = 2.0, c2 = 2.0 | – |
| HCLPSO | w = 0.99–0.2 | c1 = 2.5–0.5, c2 = 0.5–2.5, c = 3–1.5 | – |
| CSO | w = 0.7298 | c = 1.49618 | pm = 0.01, sg = 7 |
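The w, c1 and c2 values in Table 2 are the coefficients of the canonical PSO velocity update. The following is a minimal sketch of that update rule only, not the benchmarked implementations; the function name `pso_step` and the fixed random seed are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (illustrative choice)

def pso_step(x, v, pbest, gbest, w=0.7298, c1=2.0, c2=2.0):
    """One canonical PSO update for a (particles, dims) swarm:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    r1 = rng.random(x.shape)  # fresh uniform [0, 1) draws per component
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Range entries such as w = 0.9–0.4 in Table 2 denote linear schedules over iterations rather than fixed constants, so they would replace the default arguments with per-iteration values.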
Table 3. Comparison results.
| Function | | CLPSO | PSO | HCLPSO | CSO | CMA-ES | FIPS | LCSO |
|---|---|---|---|---|---|---|---|---|
| f1 | Mean | 4.15E-06 | 2.43E-17 | 8.93E-10 | 3.22E-15 | 4.16E-19 | 1.03E-06 | 5.24E-21 |
| | Std | 3.81E-06 | 3.15E-17 | 7.11E-10 | 3.75E-15 | 4.07E-19 | 1.59E-06 | 2.19E-21 |
| f2 | Mean | 2.43E-01 | 6.54E-03 | 8.93E-02 | 3.22E-02 | 1.78E-03 | 4.88E+00 | 5.63E-05 |
| | Std | 3.15E-01 | 4.81E-03 | 7.11E-02 | 3.75E-02 | 4.02E-04 | 5.37E+00 | 4.39E-05 |
| f3 | Mean | 3.42E-03 | 1.34E-02 | 2.33E-03 | 5.22E-04 | 5.21E-01 | 2.41E+03 | 3.61E-06 |
| | Std | 5.17E-03 | 9.06E-03 | 7.94E-04 | 5.08E-04 | 2.79E-03 | 1.47E+03 | 6.42E-06 |
| f4 | Mean | 5.16E-08 | 3.44E-14 | 4.33E-06 | 4.88E-17 | 3.25E-07 | 3.17E-07 | 4.86E-17 |
| | Std | 1.04E-07 | 2.80E-14 | 1.13E-05 | 4.63E-16 | 2.81E-07 | 4.54E-07 | 3.40E-16 |
| f5 | Mean | 3.78E+01 | 6.14E+01 | 3.53E+01 | 4.20E+01 | 1.02E+01 | 2.82E+01 | 2.54E+00 |
| | Std | 1.86E+01 | 5.80E+01 | 5.14E+00 | 3.15E+01 | 1.61E+00 | 2.14E+01 | 5.72E+00 |
| f6 | Mean | 4.43E-06 | 1.45E-02 | 4.61E-08 | 7.21E-10 | 3.34E-09 | 2.93E-03 | 3.98E-12 |
| | Std | 1.01E-05 | 1.29E-02 | 5.42E-08 | 5.36E-10 | 1.02E-10 | 4.18E-03 | 3.41E-11 |
| f7 | Mean | 1.34E+00 | 4.38E+01 | 4.78E-04 | 3.80E-04 | 1.80E+01 | 7.32E+01 | 6.54E-06 |
| | Std | 6.16E-01 | 8.75E+00 | 7.05E-05 | 2.52E-04 | 7.35E-01 | 2.25E+01 | 3.77E-05 |
| f8 | Mean | 5.12E-01 | 2.24E+00 | 1.15E-09 | 4.60E-10 | 7.35E-12 | 5.74E+00 | 8.12E-08 |
| | Std | 8.39E-01 | 1.46E+00 | 3.67E-10 | 1.88E-09 | 1.14E-12 | 6.18E-01 | 7.93E-09 |
| f9 | Mean | 3.97E-01 | 1.56E+00 | 6.30E-02 | 5.61E-02 | 1.39E-01 | 2.25E-01 | 3.08E-03 |
| | Std | 3.12E-01 | 1.25E+00 | 5.47E-02 | 3.95E-02 | 2.61E-02 | 2.43E-01 | 2.95E-03 |
| f10 | Mean | 1.71E-03 | 1.86E+03 | 2.08E-03 | 6.53E+03 | 1.25E-03 | 6.62E+02 | 3.15E-05 |
| | Std | 1.59E-03 | 2.07E+02 | 1.37E-03 | 3.87E+03 | 1.94E-03 | 4.94E+02 | 4.39E-05 |
| f11 | Mean | 2.39E-05 | 1.89E-03 | 4.38E-04 | 3.15E-03 | 2.32E-03 | 5.29E-04 | 5.37E-07 |
| | Std | 2.12E-05 | 1.67E-03 | 3.65E-04 | 1.97E-03 | 1.85E-03 | 1.92E-04 | 3.71E-07 |
| f12 | Mean | 1.89E+02 | 5.94E+01 | 5.98E+01 | 1.92E+01 | 2.74E+01 | 2.21E+02 | 7.67E+00 |
| | Std | 5.14E+01 | 6.73E+00 | 4.26E+01 | 5.07E+00 | 1.30E-01 | 2.80E+01 | 4.02E+00 |
| f13 | Mean | 3.44E-03 | 7.68E-02 | 7.53E-03 | 5.41E-09 | 2.28E-08 | 6.67E-04 | 9.16E-12 |
| | Std | 4.96E-03 | 8.39E-02 | 8.66E-03 | 7.92E-09 | 3.46E-09 | 1.02E-03 | 7.85E-11 |
| f14 | Mean | 6.23E+03 | 4.66E+03 | 6.83E+03 | 7.78E+03 | 1.15E+03 | 7.65E+03 | 4.59E+02 |
| | Std | 4.18E+03 | 3.19E+03 | 3.99E+03 | 3.54E+03 | 3.29E+03 | 3.61E+03 | 6.34E+02 |
| f15 | Mean | 3.27E+01 | 2.55E+01 | 6.45E+01 | 1.83E-01 | 2.12E+01 | 4.07E+01 | 3.47E-03 |
| | Std | 3.39E+01 | 3.12E+01 | 4.73E+01 | 2.78E-01 | 1.96E+00 | 8.65E+00 | 2.51E-03 |
| f16 | Mean | 8.27E-01 | 2.25E+00 | 2.31E+00 | 5.11E-10 | 1.76E+00 | 2.71E-03 | 5.09E-14 |
| | Std | 6.54E-02 | 6.58E-01 | 5.15E-02 | 5.16E-10 | 5.04E-01 | 2.03E-03 | 6.71E-14 |
Table 4. The t-test results.
Each t-value and two-tailed p-value compares the named algorithm with LCSO.

| Function | t CLPSO | t PSO | t HCLPSO | t CSO | t FIPS | p CLPSO | p PSO | p HCLPSO | p CSO | p FIPS |
|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 6.16E+00 | 4.36E+00 | 7.10E+00 | 4.86E+00 | 3.66E+00 | 1.60E-08 | 3.18E-05 | 1.95E-10 | 4.51E-06 | 4.02E-04 |
| f2 | 4.36E+00 | 7.62E+00 | 7.10E+00 | 4.85E+00 | 5.14E+00 | 3.18E-05 | 1.59E-11 | 1.99E-10 | 4.67E-06 | 1.40E-06 |
| f3 | 3.74E+00 | 8.36E+00 | 1.66E+01 | 5.77E+00 | 9.27E+00 | 3.12E-04 | 4.21E-13 | 3.50E-30 | 9.19E-08 | 4.55E-15 |
| f4 | 2.82E+00 | 6.94E+00 | 2.16E+00 | N/A | 3.95E+00 | 5.89E-03 | 4.28E-10 | 3.30E-02 | N/A | 1.47E-04 |
| f5 | 1.02E+01 | 5.71E+00 | 2.41E+01 | 6.97E+00 | 6.55E+00 | 3.47E-17 | 1.19E-07 | 6.31E-43 | 3.67E-10 | 2.64E-09 |
| f6 | 2.49E+00 | 6.36E+00 | 4.81E+00 | 7.55E+00 | 3.97E+00 | 1.45E-02 | 6.49E-09 | 5.44E-06 | 2.26E-11 | 1.40E-04 |
| f7 | 1.23E+01 | 2.83E+01 | 6.95E+00 | 4.66E+00 | 1.84E+01 | 1.35E-21 | 5.57E-49 | 4.01E-10 | 1.00E-05 | 1.45E-33 |
| f8 | 3.45E+00 | 8.68E+00 | N/A | N/A | 5.25E+01 | 8.23E-04 | 8.84E-14 | N/A | N/A | 1.35E-73 |
| f9 | 7.14E+00 | 7.04E+00 | 6.19E+00 | 7.57E+00 | 5.17E+00 | 1.63E-10 | 2.60E-10 | 1.43E-08 | 2.05E-11 | 1.26E-06 |
| f10 | 5.97E+00 | 5.08E+01 | 8.45E+00 | 9.55E+00 | 7.58E+00 | 3.81E-08 | 3.10E-72 | 2.70E-13 | 1.18E-15 | 1.97E-11 |
| f11 | 6.23E+00 | 6.40E+00 | 6.78E+00 | 9.04E+00 | 1.56E+01 | 1.15E-08 | 5.35E-09 | 9.11E-10 | 1.44E-14 | 3.01E-28 |
| f12 | 1.93E+01 | 3.61E+01 | 6.67E+00 | 9.76E+00 | 4.13E+01 | 4.12E-35 | 1.84E-58 | 1.51E-09 | 4.01E-16 | 8.48E-64 |
| f13 | 3.80E+00 | 5.01E+00 | 4.76E+00 | 3.75E+00 | 3.58E+00 | 2.52E-04 | 2.38E-06 | 6.62E-06 | 3.00E-04 | 5.39E-04 |
| f14 | 7.48E+00 | 7.07E+00 | 8.64E+00 | 1.11E+01 | 1.07E+01 | 3.26E-11 | 2.25E-10 | 1.09E-13 | 3.95E-19 | 2.93E-18 |
| f15 | 5.28E+00 | 4.48E+00 | 7.47E+00 | 3.54E+00 | 2.58E+01 | 7.69E-07 | 2.06E-05 | 3.39E-11 | 6.20E-04 | 1.94E-45 |
| f16 | 6.93E+01 | 1.87E+01 | 2.46E+02 | 5.42E+00 | 7.31E+00 | 4.80E-85 | 3.78E-34 | 1.53E-138 | 4.21E-07 | 7.22E-11 |
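T-values like those in Table 4 can be computed from the per-algorithm means and standard deviations in Table 3 with a two-sample t-test. The sketch below uses Welch's formulation from summary statistics; the number of runs per algorithm (n = 30 in the example) is an assumption not stated in this excerpt, and the two-tailed p-value uses a large-sample normal approximation rather than the exact t-distribution, so the figures it prints are illustrative rather than a reproduction of the paper's values.

```python
import math
from statistics import NormalDist

def welch_t(mean1, std1, n1, mean2, std2, n2):
    """Welch's two-sample t statistic and degrees of freedom from summary stats."""
    se1, se2 = std1 ** 2 / n1, std2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

def two_tailed_p_normal(t):
    """Two-tailed p-value via a normal approximation (adequate for large df)."""
    return 2.0 * (1.0 - NormalDist().cdf(abs(t)))

# Example: CLPSO vs. LCSO on f5, using Table 3's mean/std and an assumed 30 runs
t, df = welch_t(3.78e+01, 1.86e+01, 30, 2.54e+00, 5.72e+00, 30)
print(f"t = {t:.2f}, df = {df:.1f}, p = {two_tailed_p_normal(t):.2e}")
```

With equal run counts and pooled variances the classic Student's t-test gives similar values; the exact choice of test and run count determines how closely this matches Table 4.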
Table 5. Results on 100, 500 and 1000 dimensional functions.
| Function | D = 100 | D = 500 | D = 1000 |
|---|---|---|---|
| f1 | 1.97E-23 | 2.03E-11 | 2.14E-09 |
| f2 | 4.62E-01 | 7.39E+01 | 2.17E+02 |
| f3 | 1.95E-01 | 1.38E+04 | 2.01E+05 |
| f4 | 5.16E-02 | 8.41E+00 | 3.56E+02 |
| f5 | 9.38E+01 | 5.22E+02 | 9.72E+02 |
| f6 | 1.02E-18 | 5.46E-12 | 3.65E-10 |
| f7 | 1.06E-13 | 3.71E-08 | 5.29E-06 |
| f8 | 5.12E-14 | 2.24E-05 | 4.15E-03 |
| f9 | 7.06E-07 | 4.67E-04 | 5.07E-02 |
| f10 | 3.18E+04 | 1.82E+05 | 3.94E+05 |
| f11 | 1.37E-02 | 2.56E+01 | 4.61E+03 |
| f12 | 1.25E-12 | 3.77E-09 | 6.29E-08 |
| f13 | 1.03E-14 | 6.32E-09 | 5.07E-08 |
| f14 | 2.56E+04 | 1.69E+05 | 3.83E+05 |
| f15 | 1.42E+01 | 3.70E+02 | 7.26E+02 |
| f16 | 2.07E-13 | 3.24E-07 | 4.19E-06 |
Borowska, B. Learning Competitive Swarm Optimization. Entropy 2022, 24, 283. https://doi.org/10.3390/e24020283