*2.2. Particle Swarm Optimization*

On the other hand, the PSO algorithm [40] uses a different search method, based on variable travel speeds and position shifting in the search space. Each individual ('particle') from the population ('swarm') changes its speed in each iteration based on its current distance from two reference points: the best solution found so far by the swarm (the leader) and the best position ever achieved by the particle itself. Compared with the GA, the PSO mechanism, presented in Figure 4, is very simple, requiring for each particle *j*, *j* = 1, ..., *N*, only the computation of its new speed and position:

$$\text{sp}\_{j}^{(it)} = w \cdot \text{sp}\_{j}^{(it-1)} + 2 \cdot rnd\_{1} \cdot (\text{x}\_{j,\text{best}}^{(it)} - \text{x}\_{j,\text{crt}}^{(it)}) + 2 \cdot rnd\_{2} \cdot (leader^{(it)} - \text{x}\_{j,\text{crt}}^{(it)}) \tag{2}$$

$$\mathbf{x}\_{j}^{(it+1)} = \mathbf{x}\_{j}^{(it)} + s\mathbf{p}\_{j}^{(it)} \tag{3}$$

followed by the update of each particle's best position and of the leader position, if better solutions are found. The particle speeds are initialized with small random values, so that they do not bias the initial search direction.

**Figure 4.** The flowchart of a PSO iteration.

In Equations (2) and (3), *sp<sub>j</sub>*<sup>(it)</sup> and *sp<sub>j</sub>*<sup>(it − 1)</sup> are the speeds of particle *x<sub>j</sub>* (*j* = 1, ..., *N*) in the current (*it*) and previous (*it* − 1) iterations, *rnd*<sub>1</sub> and *rnd*<sub>2</sub> are random vectors, *x<sub>j,best</sub>*<sup>(it)</sup> and *x<sub>j,crt</sub>*<sup>(it)</sup> are the best personal and the current position of particle *x<sub>j</sub>*, *leader*<sup>(it)</sup> is the position of the leader in iteration *it*, and *x<sub>j</sub>*<sup>(it + 1)</sup> is the position of particle *x<sub>j</sub>* in the next iteration. The factor *w* in Equation (2) is an inertia term that decreases with the iteration count: larger initial values encourage exploration, while smaller final values enable exploitation, i.e., local search around the best-known solution.
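The update rules of Equations (2) and (3) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the swarm size, bounds, linear inertia schedule, and the toy sphere objective are assumptions made for the example.

```python
import numpy as np

def sphere(x):
    """Toy objective to minimize (illustrative assumption, not from the paper)."""
    return np.sum(x ** 2, axis=-1)

def pso(objective, dim=2, n_particles=20, n_iter=100,
        w_start=0.9, w_end=0.4, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # current positions x_{j,crt}
    sp = rng.uniform(-0.1, 0.1, size=(n_particles, dim))   # small random initial speeds
    x_best = x.copy()                                      # personal bests x_{j,best}
    f_best = objective(x_best)
    leader = x_best[np.argmin(f_best)].copy()              # best position in the swarm

    for it in range(n_iter):
        # Inertia w decreases linearly: exploration first, exploitation later
        w = w_start - (w_start - w_end) * it / (n_iter - 1)
        rnd1 = rng.random((n_particles, dim))
        rnd2 = rng.random((n_particles, dim))
        # Equation (2): new speed of every particle
        sp = w * sp + 2 * rnd1 * (x_best - x) + 2 * rnd2 * (leader - x)
        # Equation (3): new position, kept inside the search bounds
        x = np.clip(x + sp, lo, hi)
        # Update personal bests and the leader if better solutions are found
        f = objective(x)
        improved = f < f_best
        x_best[improved] = x[improved]
        f_best[improved] = f[improved]
        leader = x_best[np.argmin(f_best)].copy()

    return leader, f_best.min()

best_x, best_f = pso(sphere)
```

Each iteration costs only two vectorized updates per particle, which illustrates why PSO is considered simpler than the GA's selection-crossover-mutation cycle.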

It should be noted that the GA explores the search space mainly through crossover, which recombines information already present in the population (the mutation probability being much smaller), whereas PSO randomly changes the speed of each particle component, moving it toward the leader and the personal best position.

The newer metaheuristic methods used in this paper, while sharing the natural inspiration of the GA and PSO, combine elements found in the two algorithms and increase the number of input parameters and the complexity of their mathematical models in order to improve optimization performance. They are the Bat Optimization Algorithm (BOA), the Whale Optimization Algorithm (WOA), and the Sperm-Whale Algorithm (SWA).
