3.1.2. Swarm Intelligence Methods

These methodologies belong to the bio-inspired category because they draw their inspiration from the behaviour observed when several entities (e.g., animals) interact with one another. As stated before, the biological behaviours linked to the interaction between individuals can result in a form of optimization: resources are shared and, thanks to the mutual collaboration, a solution is reached more quickly. The result is an information structure (which would not emerge if the problem were approached by a single entity) that can be translated into mathematical formulations and, hence, exploited in optimization problems.

The difference with respect to EAs is that, in this case, the process is driven by the interactions between the entities (i.e., collective intelligence) rather than controlled by genetic operators. The ability to exchange information and to send or receive feedback from the other individuals in the group is what makes collective intelligence so powerful. In other words, SI methods are the result of a careful observation of real-life interactions between animals. As said before, two SI methods have been adopted in this study: particle swarm optimization (PSO) and grey wolf optimization (GWO). In both cases, each individual is represented as a vector *k* and, hence, as a candidate solution.

**Particle Swarm Optimization.** Being an SI algorithm, PSO [46] draws its inspiration from the movement of bird flocks and fish schools. Starting from a population of candidate solutions (i.e., particles) that move through the search space, this methodology solves optimization problems by applying simple, fixed update rules. As said before, the optimization relies on a distributed, shared intelligence: the movement of each particle, and hence its trajectory, is influenced both by its own best known position and by the best known positions in the search space (the latter are known because the particles share this information). This is precisely why the swarm is able to iteratively identify the optimal solutions through information sharing. Some initial parameters must be defined, such as the population size, the initial positions and velocities of the particles, and the particle inertia. After the initialization, each particle is assigned a random neighborhood and, by travelling, the best overall position is discovered. The position associated with the global optimum is updated so that every particle is aware of it. A thorough exploration of the solution space is possible thanks to the inherent stochastic component of the velocities [47]. The pseudo-code for this algorithm is reported in Figure 4. More information on the algorithm implementation can be found in [41]. The employed routine runs iteratively until 200 iterations are reached or until the error between two successive runs is less than 10<sup>−9</sup>.
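
For reference, in the standard PSO formulation the velocity and position of each particle are typically updated as

$$
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},
$$

where $p_i$ is the best position visited by particle $i$, $g$ is the best position found by the whole swarm, $w$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social acceleration coefficients, and $r_1$, $r_2$ are random numbers in $[0, 1]$. These are the conventional update rules of the method, reported here for clarity; the exact expressions adopted in [41] may differ in their details.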

```
START
\\ Parameters definition
    Set the population dimension N
    Set the vector k dimension D
\\ Initialise
    Initialise the position of each particle (vector k)
    Initialise the speed of each particle (vector k)
    Initialise the best position of each particle (vector k)
    WHILE (stopping criterion)
        Update the best position of each particle (vector k)
        Update the best global position
        Update position and velocity for each particle (vector k)
    END WHILE
END
```
**Figure 4.** Pseudo-code for PSO algorithm, as taken from [41].
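
To make the pseudo-code of Figure 4 concrete, the following minimal Python sketch implements a standard global-best PSO for a generic objective function to be minimized. The iteration limit (200) and the 10<sup>−9</sup> tolerance reproduce the settings quoted above, whereas the swarm size, inertia weight `w`, and acceleration coefficients `c1` and `c2` are common illustrative values and are not taken from [41].

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, max_iter=200, tol=1e-9,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch; parameter defaults are illustrative, not from [41]."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialise the position and speed of each particle (one k-vector per particle)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = rng.uniform(-1.0, 1.0, size=(n_particles, dim)) * 0.1 * (hi - lo)
    # Initialise the best position of each particle and the best global position
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g_idx = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]

    for _ in range(max_iter):
        prev_val = gbest_val
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive (own best) + social (global best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # Update the best position of each particle
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        # Update the best global position, shared with the whole swarm
        g_idx = np.argmin(pbest_val)
        if pbest_val[g_idx] < gbest_val:
            gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
        # Stop when the error between two successive iterations is below the tolerance
        if abs(prev_val - gbest_val) < tol:
            break
    return gbest, gbest_val

# Example usage on a simple sphere function
best_k, best_val = pso(lambda k: float(np.sum(k**2)), dim=5, bounds=(-10.0, 10.0))
```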

**Grey Wolf Optimization.** While PSO is generically inspired by the movement of birds and fish, GWO [48] has a more specific inspiring animal: the wolf. This optimization technique mimics the rigid hierarchy among the members of a grey wolf pack. After choosing the size of the pack and the initial position of each "animal", an initial population hierarchy is defined by looking at the fitness function values. The choice of the number of wolves is crucial, since both the accuracy of the algorithm and the execution time are affected by this parameter. The higher the fitness value, the higher the position in the hierarchy. In this way, the individuals with lower scores are less influential in the optimization process, while the better-ranked animals lead it. In other words, each wolf represents a distinct candidate solution to the problem.
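
For reference, in the standard GWO formulation [48] the three best-ranked wolves (usually denoted α, β, and δ) guide the position update of every other member of the pack:

$$
\vec{D}_{\alpha} = \left|\vec{C}_{1}\,\vec{X}_{\alpha} - \vec{X}\right|, \qquad \vec{X}_{1} = \vec{X}_{\alpha} - \vec{A}_{1}\,\vec{D}_{\alpha},
$$

with analogous expressions for β and δ, and

$$
\vec{X}(t+1) = \frac{\vec{X}_{1} + \vec{X}_{2} + \vec{X}_{3}}{3}, \qquad \vec{A} = 2a\,\vec{r}_{1} - a, \qquad \vec{C} = 2\,\vec{r}_{2},
$$

where $\vec{r}_{1}$ and $\vec{r}_{2}$ are random vectors in $[0, 1]$ and the coefficient $a$ decreases linearly from 2 to 0 over the iterations. These are the conventional equations of the method, reported here for clarity; the exact formulation implemented in [41] may differ in its details.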

The main difference with respect to PSO lies in the fact that, in this case, the wolves cannot communicate their position to the other members of the pack. In order to find the best solution, an adequate population initialization and search method must therefore be adopted [49]. The pseudo-code for this algorithm is reported in Figure 5. The interested reader can find more information on the algorithm implementation in [41]. The size of the population has been set to 50 individuals. The optimization stops when 150 iterations are reached or when the difference between two successive runs falls below a tolerance of 10<sup>−9</sup>.

```
START
\\ Parameters definition
    Set the population dimension N
    Set the vector k dimension D
\\ Initialise
    Initialise the position of each individual
    Score each solution
    Categorize all solutions
    WHILE (stopping criterion)
        Update the position of each individual
        Update the hierarchy
    END WHILE
END
```
**Figure 5.** Pseudo-code for GWO algorithm, as taken from [41].
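
As for PSO, the following minimal Python sketch shows one way to realise the pseudo-code of Figure 5, using the standard update driven by the three best-ranked wolves (written here for minimization, so the lowest objective values rank highest in the hierarchy). The population size (50), iteration limit (150), and 10<sup>−9</sup> tolerance reproduce the settings reported above, while the bound handling and the linear decrease of the coefficient `a` are generic choices rather than those of [41].

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=50, max_iter=150, tol=1e-9, seed=0):
    """Minimal grey wolf optimizer sketch; details other than N, max_iter, tol are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialise the position of each individual (one k-vector per wolf)
    x = rng.uniform(lo, hi, size=(n_wolves, dim))

    def hierarchy(positions):
        # Score each solution and categorize: the three best become alpha, beta, delta
        scores = np.array([objective(p) for p in positions])
        order = np.argsort(scores)
        leaders = positions[order[:3]].copy()
        return leaders[0], leaders[1], leaders[2], scores[order[0]]

    alpha, beta, delta, best_val = hierarchy(x)

    for t in range(max_iter):
        prev_best = best_val
        a = 2.0 * (1.0 - t / max_iter)          # a decreases linearly from 2 to 0
        new_x = np.zeros_like(x)
        # Update the position of each individual, pulled towards the three leaders
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            D = np.abs(C * leader - x)
            new_x += leader - A * D
        x = np.clip(new_x / 3.0, lo, hi)
        # Update the hierarchy with the new scores
        alpha, beta, delta, best_val = hierarchy(x)
        # Stop when the change between two successive iterations is below the tolerance
        if abs(prev_best - best_val) < tol:
            break
    return alpha, best_val

# Example usage on a simple sphere function
best_k, best_val = gwo(lambda k: float(np.sum(k**2)), dim=5, bounds=(-10.0, 10.0))
```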
