3.3.1. Particle Swarm Optimization

PSO is a population-based metaheuristic optimization algorithm developed in 1995 by Kennedy and Eberhart, inspired by the flocking behavior of birds in nature [46]. It is a stochastic, population-based method that is widely preferred for producing solutions to multivariable, multiparameter optimization problems, owing to its high convergence speed and solution quality. In adapting to various environmental conditions, such as avoiding predators or finding a rich food source, many animal swarms, such as fish schools and bird flocks, communicate with one another, increasing both the probability and the speed of finding the true target. The essence of the PSO algorithm is a swarm, and each particle is a member of it. In this swarm-based optimization algorithm, each particle consists of a position and a velocity component, and positions are updated by changing the velocities of the particles. Depending on the optimization problem, the updated positions of the particles are substituted into the objective function [46]. When minimizing the objective function, if a particle's new position yields a smaller objective value than the best value obtained so far, the new solution is stored in memory at each iteration, as shown in Algorithm 2. The position and velocity vectors of the particles are initially set randomly, subject to the problem constraints. The velocities of the randomly generated particles are updated as:

$$V\_i(t+1) = V\_i(t) + c\_1\, rand\, (p\_{best} - X\_i(t)) + c\_2\, rand\, (g\_{best} - X\_i(t)) \tag{30}$$

in the next iteration, where *Xi* is the position of the *i*-th particle; *rand* is a uniformly distributed random number in [0, 1], drawn independently for each term; *pbest* is the best position found so far by the particle itself; *gbest* is the best position found by the whole swarm; and *c*1, *c*2 are two constants that determine the weights of *pbest* and *gbest*, respectively. The position of each particle is obtained by adding the velocity *Vi*(*t* + 1) to the current position *Xi* as:

$$X\_i(t+1) = X\_i(t) + V\_i(t+1) \tag{31}$$

In this optimization process, the position of each particle in the population is updated by changing its velocity vector. This update combines the particle's own experiential knowledge with the knowledge it acquires socially from neighboring particles.
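The update rules in Equations (30) and (31) can be sketched as a short routine. The swarm size, bounds, coefficient values, and sphere test function below are illustrative assumptions, and an inertia weight *w* (a common PSO refinement not shown in Equation (30)) is included to keep velocities bounded:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lb=-5.0, ub=5.0, seed=0):
    """Sketch of Eqs. (30)-(31); parameters are illustrative choices."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, dim))   # random initial positions
    V = np.zeros((n_particles, dim))              # initial velocities
    pbest = X.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, X)
    gbest = pbest[pbest_val.argmin()].copy()      # swarm-wide best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (30)
        X = np.clip(X + V, lb, ub)                                 # Eq. (31)
        vals = np.apply_along_axis(f, 1, X)
        improved = vals < pbest_val               # keep improved personal bests
        pbest[improved] = X[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best_x, best_f = pso(lambda x: float(np.sum(x**2)), dim=2)
```

On this 2-D sphere function the swarm contracts toward the origin, illustrating how the *pbest* and *gbest* terms pull each particle toward known good regions.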


3.3.2. Grey Wolf Optimization

The GWO algorithm, inspired by the hunting hierarchy of grey wolves that live in packs in nature, was proposed by Mirjalili et al. [24]. As illustrated in Figure 3, the alpha wolf makes all hunting decisions in the pack, leads the swarm, and is located at the top of the hunting pyramid. According to the order of the social hierarchy in the pack, the top three wolves are alpha, beta, and delta, respectively. As in other metaheuristic optimization algorithms, the candidate solutions are randomly generated in the optimization process, as shown in Algorithm 3. Among these candidate solutions, the best, the second-best, and the third-best correspond to the alpha (*Xα*), beta (*Xβ*), and delta (*Xδ*) positions, respectively; the remaining candidate solutions correspond to the omega (*ω*) positions. The hunting mechanism of grey wolves consists of tracking the prey and approaching, encircling, and attacking it. In the grey wolf optimization algorithm, the process of encircling the prey is modeled as:

$$\overrightarrow{D} = |\overrightarrow{C}\, \overrightarrow{X\_p}(t) - \overrightarrow{X}(t)| \tag{32}$$

$$
\overrightarrow{X}\,(t+1) = \overrightarrow{X\_p}(t) - \overrightarrow{A}\,\overrightarrow{D} \tag{33}
$$

where *t* denotes the current iteration, $\overrightarrow{A}$ and $\overrightarrow{C}$ are coefficient vectors, $\overrightarrow{X\_p}$ is the position vector of the prey, and $\overrightarrow{X}$ is the position vector of a grey wolf. $\overrightarrow{A}$ and $\overrightarrow{C}$ are calculated by:

$$
\overrightarrow{A} = 2\,\overrightarrow{a}\,\overrightarrow{r\_1} - \,\overrightarrow{a}
\tag{34}
$$

$$
\overrightarrow{C} = 2\,\overrightarrow{r\_2} \tag{35}
$$

where $\overrightarrow{a}$ is linearly decreased from 2 to 0 over the course of the iterations, and $\overrightarrow{r\_1}$ and $\overrightarrow{r\_2}$ are random vectors with components in [0, 1]. The hunting process of grey wolves is expressed as:

$$
\overrightarrow{D\_\alpha} = |\overrightarrow{C\_1}\,\overrightarrow{X\_\alpha} - \overrightarrow{X}| \qquad \overrightarrow{D\_\beta} = |\overrightarrow{C\_2}\,\overrightarrow{X\_\beta} - \overrightarrow{X}| \qquad \overrightarrow{D\_\delta} = |\overrightarrow{C\_3}\,\overrightarrow{X\_\delta} - \overrightarrow{X}| \tag{36}
$$

$$
\overrightarrow{X\_1} = \overrightarrow{X\_\alpha} - \overrightarrow{A\_1}\,\overrightarrow{D\_\alpha} \qquad \overrightarrow{X\_2} = \overrightarrow{X\_\beta} - \overrightarrow{A\_2}\,\overrightarrow{D\_\beta} \qquad \overrightarrow{X\_3} = \overrightarrow{X\_\delta} - \overrightarrow{A\_3}\,\overrightarrow{D\_\delta} \tag{37}
$$

$$
\overrightarrow{X} \left( t+1 \right) = \frac{\overrightarrow{X\_1} + \overrightarrow{X\_2} + \overrightarrow{X\_3}}{3} \tag{38}
$$

where the positions of the three best agents are denoted by $\overrightarrow{X\_\alpha}$, $\overrightarrow{X\_\beta}$, and $\overrightarrow{X\_\delta}$; the distance vectors $\overrightarrow{D\_\alpha}$, $\overrightarrow{D\_\beta}$, and $\overrightarrow{D\_\delta}$ of the candidate solutions are calculated with respect to the three best solutions; $\overrightarrow{X\_1}$, $\overrightarrow{X\_2}$, and $\overrightarrow{X\_3}$ are the updated positions of the search agents; and $\overrightarrow{X}(t+1)$ is the position in the next iteration.

**Figure 3.** The hunting hierarchy of grey wolves.
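As a concrete illustration, Equations (32)–(38) can be combined into a minimal GWO loop. The population size, bounds, and sphere test function here are illustrative assumptions, not values prescribed in the text:

```python
import numpy as np

def gwo(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Sketch of Eqs. (32)-(38) with illustrative parameters."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        order = fitness.argsort()
        # best three wolves lead the pack: alpha, beta, delta
        alpha = X[order[0]].copy()
        beta = X[order[1]].copy()
        delta = X[order[2]].copy()
        a = 2.0 * (1.0 - t / iters)               # a decreases linearly 2 -> 0
        for i in range(n_wolves):
            Xs = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a              # Eq. (34)
                C = 2.0 * r2                      # Eq. (35)
                D = np.abs(C * leader - X[i])     # Eq. (36)
                Xs.append(leader - A * D)         # Eq. (37)
            X[i] = np.clip(sum(Xs) / 3.0, lb, ub) # Eq. (38): average of the three
    fitness = np.apply_along_axis(f, 1, X)
    return X[fitness.argmin()].copy(), float(fitness.min())

best_x, best_f = gwo(lambda x: float(np.sum(x**2)), dim=2)
```

Because *a* shrinks toward 0, |A| falls below 1 in later iterations and the wolves converge on the alpha/beta/delta leaders, which is GWO's built-in shift from exploration to exploitation.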


3.3.3. Harris Hawk Optimization

In this section, the exploration phase, the transition from exploration to exploitation, and the exploitation phase of the HHO component of the hybrid GWO–HHO algorithm proposed in this study are explained. The algorithm imitates the hunting strategy of Harris hawks, among the most intelligent birds in nature. Harris hawks act as a swarm, especially during the rabbit-hunting process. Each swarm has a leader, and the leader and the other members of the swarm primarily make exploration flights. Once the prey is detected, the hunting process begins. HHO is a gradient-free optimization method; hence, it can be applied to many nonlinear engineering problems given a suitable formulation [45]. The main hunting tactic of Harris hawks is called the "surprise pounce". In this clever strategy, several hawks cooperatively attack from different directions and simultaneously converge on prey that has been detected fleeing outside its shelter. The attack can be completed quickly, with the hawks catching their prey in a matter of seconds. All phases of the HHO's exploration and exploitation processes are shown in Figure 4.

**Figure 4.** All phases of Harris hawk optimization algorithm [45].

• **Exploration phase:** Although Harris hawks have keen eyesight, they sometimes cannot detect their prey easily. In this situation, the hawks often wait in the desert area and observe their surroundings, and this process continues in a loop. The hawks in each loop are treated as candidate solutions, and the hawk in the best position relative to the rabbit represents the current optimum solution. The HHO algorithm uses two different strategies in the search process, which can be described by [45]:

$$X(t+1) = \begin{cases} X\_{rand}(t) - r\_1 |X\_{rand}(t) - 2 r\_2 X(t)| & q \ge 0.5\\ \left(X\_{rabbit}(t) - X\_m(t)\right) - r\_3 (LB + r\_4(UB - LB)) & q < 0.5 \end{cases} \tag{39}$$

where *X*(*t* + 1) represents the position of the Harris hawks in the next iteration; *X*(*t*) denotes their current position; *Xrabbit*(*t*) indicates the position of the rabbit; *Xm*(*t*) is the average position of the current population of Harris hawks; *Xrand*(*t*) represents a randomly selected hawk from the current population; *r*1, *r*2, *r*3, *r*4, and *q* are random numbers in [0, 1]; and *UB* and *LB* are the upper and lower bounds of the variables, respectively. The average position of the hawks is determined by:

$$X\_m(t) = \frac{1}{N} \sum\_{i=1}^{N} X\_i(t) \tag{40}$$

where *N* represents the total number of Harris hawks, and *Xi*(*t*) indicates the location of each Harris hawk in iteration *t*.
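One step of the exploration phase in Equations (39)–(40) can be sketched as follows; the population size, bounds, and the choice of rabbit position are illustrative assumptions:

```python
import numpy as np

def hho_exploration_step(X, X_rabbit, lb, ub, rng):
    """One exploration update per Eqs. (39)-(40); not the full HHO loop."""
    n, dim = X.shape
    X_m = X.mean(axis=0)                          # Eq. (40): mean hawk position
    X_new = np.empty_like(X)
    for i in range(n):
        q = rng.random()
        r1, r2, r3, r4 = rng.random(4)
        if q >= 0.5:                              # perch relative to a random hawk
            X_rand = X[rng.integers(n)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2 * r2 * X[i])
        else:                                     # perch relative to rabbit and mean
            X_new[i] = (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)                 # keep hawks inside the bounds

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (10, 2))                   # 10 hawks in 2 dimensions
X_next = hho_exploration_step(X, X_rabbit=X[0], lb=-5.0, ub=5.0, rng=rng)
```

The `q >= 0.5` branch scatters hawks around random swarm members, while the other branch places them relative to the rabbit and the swarm mean, matching the two perching strategies of Equation (39).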

• **Transition from exploration to exploitation phase:** Harris hawks begin the exploitation phase by developing different attack models according to the energy of the prey after the exploration process is completed. This process is modelled in [45] as:

$$E = 2E\_0(1 - \frac{t}{T})\tag{41}$$

where *E*0 is the initial energy of the prey, randomly redrawn in the range (−1, 1) at each iteration; *E* is the escaping energy of the prey; and *T* is the maximum number of iterations. When |*E*| ≥ 1 the hawks continue exploring, and when |*E*| < 1 they switch to the exploitation strategies below.
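The energy schedule of Equation (41) is easy to trace numerically. Redrawing *E*0 each iteration is assumed here, following the reference HHO formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
# Eq. (41): E = 2*E0*(1 - t/T); |E| >= 1 -> exploration, |E| < 1 -> exploitation
E_trace = [2 * rng.uniform(-1, 1) * (1 - t / T) for t in range(T)]
# The envelope |E| <= 2*(1 - t/T) shrinks linearly, so late iterations
# are increasingly dominated by exploitation moves.
```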

**– Soft besiege** (*r* ≥ 0.5 and |*E*| ≥ 0.5)

In this strategy, the Harris hawks make misleading jumps around the prey to deplete its energy. The soft besiege strategy is described mathematically by:

$$X(t+1) = \Delta X(t) - E|J X\_{rabbit}(t) - X(t)| \tag{42}$$

$$
\Delta X(t) = X\_{rabbit}(t) - X(t) \tag{43}
$$

where Δ*X*(*t*) is the difference between the position of the prey and the current position of the hawk in the *t*-th iteration, and *J* = 2(1 − *r*5) is the random jump strength of the prey, with *r*5 a random number in (0, 1); *J* changes at each iteration to simulate the natural motion of the prey.

**– Hard besiege** (*r* ≥ 0.5 and |*E*| < 0.5)

In this strategy, the energy of the prey is very low. The hawks encircle the prey tightly and then perform the surprise pounce. This strategy can be mathematically modeled as:

$$X(t+1) = X\_{rabbit}(t) - E|\Delta X(t)|\tag{44}$$

where *Xrabbit*(*t*) represents the current position of the prey, and Δ*X*(*t*) is the difference between the position of the prey and the current position of the hawk in the *t*-th iteration.

**– Soft besiege with progressive rapid dives** (*r* < 0.5 and |*E*| ≥ 0.5)

In this strategy, the prey still has enough energy to escape, and the Harris hawks perform a soft besiege before the surprise pounce. This process is smarter than the previous strategy. Before the hawks commit to the soft besiege, they evaluate their next move according to:

$$Y = X\_{rabbit}(t) - E|J X\_{rabbit}(t) - X(t)| \tag{45}$$

where *Xrabbit*(*t*) indicates the current position of the prey, and *J* changes with each iteration to simulate the natural motion of the prey. The candidate move *Y* is compared with the previous dive to decide whether it would be a good dive. If it is found to be unfavorable, the hawks instead perform sudden, irregular dives toward the prey, using a Levy-flight-based movement pattern. This is defined by:

$$Z = Y + S \times LF(D) \tag{46}$$

where *Z* is the candidate position that decides whether the hawk makes a move on its prey, *Y* indicates the position based on the decreasing energy of the prey, *D* is the dimension of the problem, *S* is a random vector of size 1 × *D*, and *LF* is the Levy flight function, defined by:

$$LF(x) = 0.01 \frac{u \times \sigma}{|v|^{\frac{1}{\beta}}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\frac{\pi\beta}{2})}{\Gamma(\frac{1+\beta}{2}) \times \beta \times 2^{(\frac{\beta-1}{2})}} \right)^{\frac{1}{\beta}} \tag{47}$$

where *u* and *v* are random numbers in (0, 1), and *β* is set to 1.5. Note that the Levy flight is added to the exploitation phase to ensure that the local search process can continue without becoming stuck at local optima. The positions of the hawks in the soft besiege phase are updated by:

$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \tag{48}$$

where *Y* and *Z* are obtained using Equations (45) and (46).
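The Levy flight of Equation (47) can be sketched as below. Implementations commonly draw *u* and *v* from normal distributions (Mantegna's algorithm, with *u* scaled by *σ*); that convention is assumed here:

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """LF(D) per Eq. (47), using Mantegna-style normal draws for u and v."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)                     # sigma from Eq. (47)
    u = rng.normal(0.0, sigma, dim)              # numerator sample
    v = rng.normal(0.0, 1.0, dim)                # denominator sample
    return 0.01 * u / np.abs(v) ** (1 / beta)    # heavy-tailed step vector

step = levy_flight(5, rng=np.random.default_rng(0))
```

Because the step distribution is heavy-tailed, most steps are small but occasional large jumps occur, which is what lets the dive phase escape local optima.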

**– Hard besiege with progressive rapid dives** (*r* < 0.5 and |*E*| < 0.5)

In this strategy, the prey does not have enough energy to escape, and the Harris hawks perform a hard besiege before the surprise pounce to catch the prey. The hard besiege situation is expressed by:

$$X(t+1) = \begin{cases} Y' & \text{if } F(Y') < F(X(t)) \\ Z' & \text{if } F(Z') < F(X(t)) \end{cases} \tag{49}$$

where *Y*′ and *Z*′ are defined as:

$$Y' = X\_{rabbit}(t) - E|J X\_{rabbit}(t) - X\_m(t)| \tag{50}$$

$$Z' = Y' + S \times LF(D) \tag{51}$$
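The greedy acceptance rule in Equations (49)–(51) can be sketched as follows; the fitness function, positions, and the fixed values of *E*, *J*, and the Levy step are illustrative assumptions, not values from the paper:

```python
import numpy as np

def hard_besiege_dive(X_i, X_rabbit, X_m, E, J, F, levy_step):
    """Hard besiege with progressive rapid dives, Eqs. (49)-(51)."""
    Y = X_rabbit - E * np.abs(J * X_rabbit - X_m)   # Eq. (50)
    Z = Y + levy_step                               # Eq. (51): levy_step = S x LF(D)
    if F(Y) < F(X_i):                               # Eq. (49): accept improving dive
        return Y
    if F(Z) < F(X_i):
        return Z
    return X_i                                      # neither dive improved: stay

F = lambda x: float(np.sum(x**2))                   # hypothetical sphere fitness
X_i = np.array([2.0, 2.0])                          # current hawk position
X_rabbit = np.array([0.1, 0.1])                     # prey position
X_m = np.array([1.0, 1.0])                          # mean hawk position
new_X = hard_besiege_dive(X_i, X_rabbit, X_m, E=0.3, J=1.5, F=F,
                          levy_step=np.array([0.05, -0.05]))
```

Here the plain dive *Y*′ already improves on the current fitness, so it is accepted and the Levy-augmented dive *Z*′ is never needed; a position is only replaced when one of the two dives strictly improves the objective.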
