**3. Experiments**

The performance of CoMPSO has been evaluated on the set of five benchmark functions reported in Table 1. These functions differ from one another in modality, symmetry around the optimum, and regularity.

In each run of CoMPSO, termination conditions have been defined for convergence and for the maximum computational resources. An execution is regarded as convergent if *f*(*x*) − *f*(*x*<sub>opt</sub>) < *ε*. On the other hand, an execution has been considered terminated unsuccessfully if the number of function evaluations (NFES) exceeds the allowed cap of 100,000. The dimensionality, feasible range, and *ε* value used for each benchmark are also reported in Table 1.
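The two termination conditions above can be sketched as follows. This is an illustrative snippet, not code from the paper: the function name is hypothetical, and the value of the convergence threshold *ε* is an assumption, since the paper reports it per benchmark in Table 1.

```python
EPSILON = 1e-4        # convergence threshold (assumed value; the paper lists ε per benchmark)
MAX_NFES = 100_000    # cap on the number of function evaluations

def check_termination(f_best, f_opt, nfes):
    """Return 'success', 'failure', or None (keep running)."""
    if f_best - f_opt < EPSILON:
        return "success"   # convergent execution
    if nfes > MAX_NFES:
        return "failure"   # computational budget exhausted
    return None
```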


**Table 1.** Benchmark functions.

Following the setup of [20], three different swarm sizes have been considered, namely 15, 30, and 60. The other parameters were set to the standard PSO values suggested in [46], i.e., *ω* = 0.7298 and *ϕ*1 = *ϕ*2 = 1.49618. The domain ranges defining the meme space were *b*, *k* ∈ [1, 8] ∩ N, *q* ∈ [1, 16] ∩ N, and *w* ∈ [0.5, 4], while an amplification width *λ* = 4 was used for the evolution of the memes' discrete features. Furthermore, the meme application probability and frequency were set to *γ* = 0.2 and *φ* = 5, respectively.
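For concreteness, the experimental settings above can be collected in one place, together with a single component of the standard PSO velocity update that uses them. This is a minimal sketch, assuming scalar components; the constant names and the `velocity_update` helper are illustrative, not from the paper.

```python
import random

# Standard PSO parameter values suggested in [46].
OMEGA = 0.7298          # inertia weight
PHI1 = PHI2 = 1.49618   # cognitive and social acceleration coefficients

# Meme-space bounds as stated above (b, k, q are integers; w is real-valued).
MEME_BOUNDS = {"b": (1, 8), "k": (1, 8), "q": (1, 16), "w": (0.5, 4.0)}
LAMBDA = 4     # amplification width for the discrete meme features
GAMMA = 0.2    # meme application probability
PHI_FREQ = 5   # meme application frequency

def velocity_update(v, x, p_best, g_best):
    """One scalar component of the standard PSO velocity update."""
    return (OMEGA * v
            + PHI1 * random.random() * (p_best - x)    # cognitive term
            + PHI2 * random.random() * (g_best - x))   # social term
```

When the particle sits at both its personal and global best (`x == p_best == g_best`), only the momentum term `OMEGA * v` survives, which is a quick sanity check on the update.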

For each swarm size configuration, a series of 50 executions was performed in order to increase confidence in the statistical results. For each execution series, the success rate *SR* (i.e., the ratio of convergent executions to the total number of executions), the average NFES over all convergent executions *Cavg*, and the quality measure *Qm* = *Cavg*/*SR* introduced in [47] are recorded.
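The quality measure combines speed and reliability into a single score (lower is better): dividing the average cost of the successful runs by the success rate penalizes unreliable configurations. A small sketch, with a hypothetical function name:

```python
def quality_measure(nfes_convergent, total_runs):
    """Qm = Cavg / SR, as introduced in [47]; lower is better.

    nfes_convergent: list of NFES values for the convergent executions only.
    total_runs: total number of executions in the series (50 in the paper).
    """
    sr = len(nfes_convergent) / total_runs           # success rate SR
    if sr == 0:
        return float("inf")                          # no convergent run at all
    c_avg = sum(nfes_convergent) / len(nfes_convergent)
    return c_avg / sr

# e.g. 45 of 50 runs converge with mean NFES 9000: SR = 0.9, Cavg = 9000, Qm ≈ 10000
```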

All the performance indexes described above are reported in Table 2 for CoMPSO, classical PSO (CPSO), and static memetic PSO (SMPSO), i.e., PSO endowed with an RW local search operator but without meme evolution [20], which is the only comparable memetic PSO algorithm in the literature. The best quality measure in each row of Table 2 is shown in bold.


**Table 2.** Experimental results.

These results clearly show that the CoMPSO approach greatly improves on the success rates of CPSO and SMPSO. In particular, it must be noted that CoMPSO converges in almost every case, with a remarkable worst-case convergence probability of 96%. On the other hand, CPSO in some cases fails to converge at all when a small swarm size is employed. Finally, the convergence speed of CoMPSO is comparable to that of SMPSO although, as expected, in the simpler cases, i.e., *f*1 and *f*5, it is worse than that of CPSO, which is likely due to the NFES overhead introduced by the local search operators.

Figures 1a,b and 2a–c plot the CoMPSO convergence graphs for the different swarm sizes adopted. These graphs show that the convergence behavior of CoMPSO is quite monotonic with respect to swarm size and that a low number of particles seems generally preferable.

**Figure 1.** (**a**,**b**) CoMPSO convergence graphs for *f*1 and *f*2.


**Figure 2.** (**a**–**c**) CoMPSO convergence graphs for *f*3, *f*4, and *f*5.

Finally, a measure of meme convergence is shown in Figure 3, which plots the evolution of the meme standard deviation (meme STD) on benchmark *f*4. Meme convergence is fast in the early stage and, as expected, the memes remain fairly constant, with only small adaptations, during the rest of the computation. The meme convergence curve, together with the quality measures and the success rates, shows the effectiveness of CoMPSO and its ability to adapt its local refinement behavior to the landscape of the problem at hand.

**Figure 3.** Meme convergence graph.

## **4. Discussion**

In this paper, a coevolving memetic PSO (CoMPSO), characterized by two co-evolving populations of particles and memes, has been presented. The main contribution of this work is the meme evolution technique, which enhances the effectiveness of the PSO approach. Memetic algorithms (MAs) have recently received great attention as effective meta-heuristics that improve general evolutionary algorithm (EA) schemes by combining EAs with local search procedures, and they have proven to be a very effective means of performance improvement. However, to the best of our knowledge, this is the first work in which a memetic PSO algorithm with meme co-evolution has been proposed. Since memes are described by one real and three integer parameters, a probabilistic PSO evolution technique for the discrete components of the meme representation has been designed. This technique is inspired by our previous work [45] and preserves the typical PSO dynamics of cognitive, social, and momentum terms.
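One common way to make a PSO update probabilistic for integer components is stochastic rounding of the real-valued position: the continuous update is applied as usual, and the fractional part becomes the probability of rounding up, so the expected move matches the continuous one while the integer grid is respected. The sketch below illustrates this general idea only; it is an assumption for exposition and is not the actual operator of [45], whose details are not reproduced here.

```python
import math
import random

def discrete_pso_step(x, v, p_best, g_best, lo, hi,
                      omega=0.7298, phi1=1.49618, phi2=1.49618):
    """Illustrative probabilistic PSO step for one integer meme component.

    The velocity keeps the usual momentum, cognitive, and social terms;
    the new position is obtained by stochastic rounding and then clamped
    to the feasible range (e.g. b ∈ [1, 8] in the meme space above).
    """
    v = (omega * v
         + phi1 * random.random() * (p_best - x)
         + phi2 * random.random() * (g_best - x))
    x_real = x + v
    frac = x_real - math.floor(x_real)                      # in [0, 1)
    x_new = math.floor(x_real) + (1 if random.random() < frac else 0)
    return max(lo, min(hi, x_new)), v
```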

The algorithm has been tested on some standard benchmark problems, and the results presented here show that CoMPSO outperforms both classical PSO and static memetic PSO [20] in terms of success rate, although its convergence speed is affected by the overhead due to the local search applications. The effectiveness of the method relies on its ability to dynamically adapt the local search operators, i.e., the memes, to the problem landscape at hand. While the experiments have been conducted on a limited set of standard benchmarks, the goal of demonstrating the feasibility of the approach, i.e., providing adaptivity of the memes during the PSO search and improving on the previous results, can be considered achieved.

Future and ongoing work proceeds along different directions. From an experimental point of view, we are currently conducting systematic experiments on larger sets of benchmarks, and we are planning experiments with hybrid continuous/discrete problems [48]. From the theoretical point of view, we are investigating different models and synchronization mechanisms for the meme operators; we are also currently designing a self-regulatory mechanism for the CoMPSO parameters and investigating its application to different classes of problems, such as multiobjective and multimodal problems [49].

**Author Contributions:** The authors contributed equally to this work and thus share first authorship.

**Funding:** The research described in this work has been partially supported by the research grant "Fondi per i progetti di ricerca scientifica di Ateneo 2019" of the University for Foreigners of Perugia under the project "Algoritmi evolutivi per problemi di ottimizzazione e modelli di apprendimento automatico con applicazioni al Natural Language Processing".

**Acknowledgments:** We would like to thank the reviewers for their valuable comments.

**Conflicts of Interest:** The authors declare no conflict of interest.
