#### *3.3. Adaptive Weight*

Inspired by the predation process of the Harris's hawk hunting strategy, we added an adaptive weight to the Harris's hawk position update that changed with the number of iterations. In the early stage of the exploration phase of HHO, the influence of the optimal Harris's hawk position on the adjustment of the current individual position was weakened, which improved the global search ability of the algorithm at this stage. As the number of iterations increased, the influence of the optimal Harris's hawk position gradually grew, so that the other Harris's hawks could quickly converge to the optimal Harris's hawk position, improving the convergence speed of the whole algorithm. According to the variation of the number of updates in the HHO algorithm, an adaptive weight composed of the iteration number *t* was chosen as follows:

$$w(t) = 0.2 \cos\left(\frac{\pi}{2} \cdot \left(1 - \frac{t}{t\_{\text{max}}}\right)\right) \tag{23}$$

Such an adaptive weight *w*(*t*) varied nonlinearly, because the *cos* term increases from 0 to 1 as its argument moves over [0, *π*/2]. The weights were therefore small at the beginning of the exploration phase but changed relatively quickly, while toward the end of the exploration phase their values were larger but changed more slowly, so that the convergence of the algorithm was fully guaranteed. The improved HHO algorithm position update formula is:

$$X(t+1) = \begin{cases} w(t)X\_{\text{rand}}\left(t\right) - r\_1|X\_{\text{rand}}\left(t\right) - 2r\_2X(t)| & q \ge 0.5\\ w(t)(X\_{\text{rabbit}}\left(t\right) - X\_{\text{m}}\left(t\right)) - r\_3(lb + r\_4(ub-lb)) & q < 0.5 \end{cases} \tag{24}$$

After the introduction of adaptive weights, the position update dynamically adjusted the weight size as the number of iterations increased, so that the randomly selected Harris's hawk position *Xrand* (*t*) and the optimal average Harris's hawk position *Xrabbit* (*t*) − *Xm*(*t*) in the population guided the individual Harris's hawks differently at different times. As the number of iterations increased, the Harris's hawk population moved closer to the optimal position, and the larger weights sped up the movement of the Harris's hawk positions, which accelerated the convergence of the algorithm.
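As a concrete sketch, the weight schedule of Equation (23) and the weighted exploration step of Equation (24) can be written in a few lines of Python. The function names, the NumPy vector layout, and the random-number generator are our own illustration, not part of the original algorithm description:

```python
import numpy as np

def adaptive_weight(t, t_max):
    # Equation (23): w(t) grows nonlinearly from 0 (at t = 0)
    # to 0.2 (at t = t_max), slowly at first and faster later.
    return 0.2 * np.cos(np.pi / 2 * (1 - t / t_max))

def weighted_exploration(X, X_rand, X_rabbit, X_mean, t, t_max, lb, ub, rng):
    # Equation (24): exploration step with the adaptive weight w(t).
    # X is the current hawk, X_rand a randomly selected hawk,
    # X_rabbit the best (prey) position, X_mean the population mean.
    w = adaptive_weight(t, t_max)
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch based on a randomly selected hawk's position.
        return w * X_rand - r1 * np.abs(X_rand - 2 * r2 * X)
    # Perch based on the best position relative to the population mean.
    return w * (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
```

Because *w*(*t*) multiplies the guiding positions, early iterations (small *w*) largely ignore the current best, while late iterations (larger *w*) pull the population toward it.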

#### *3.4. Variable Spiral Position Update*

In the search phase of the HHO algorithm, the Harris's hawk randomly searched for prey using two equal-probability strategies based on the target location and its own location. However, in nature, Harris's hawks generally hover in a spiral to search for prey. To simulate this real prey-search process, we introduced a variable spiral position update strategy into the search phase of the HHO algorithm, so that the Harris's hawk adjusted the distance of each position update according to the spiral shape between the target position and its own position (see Figure 3).

**Figure 3.** Spiral position update.

In the exploration phase of the HHO algorithm, Equation (1), a constant *b* was introduced to control the shape of the spiral. If *b* was set to a constant value, however, the spiral movement of the Harris's hawk when searching for prey would be too uniform: it would follow a fixed spiral line to approach the target every time, which made it easy to fall into a local optimum and weakened the global exploration ability of the algorithm. To address this, we introduced the idea of a variable spiral search, which gave the Harris's hawk more diverse search paths for its position update. We designed the parameter *b* as a variable that changed with the number of iterations, dynamically adjusting the shape of the spiral during exploration so as to increase the Harris's hawk's ability to explore unknown areas, thereby improving the global search capability of the algorithm. After combining the adaptive weights, the new spiral position update was created (see Equation (25)).

The *b* parameter was designed based on the mathematical model of the spiral, and the spiral shape was dynamically adjusted by introducing the number of iterations into the original spiral model. The *b* parameter was designed so that the spiral shape changed from large to small as the number of iterations increased. Early in the exploration phase of the HHO algorithm, the Harris's hawk searched for the target with a larger spiral shape and explored the global optimal solution as widely as possible, improving the global search capability of the algorithm; later in the exploration phase, the Harris's hawk searched for the target with a small spiral shape to improve the algorithm's search accuracy.

$$X(t+1) = \begin{cases} w(t)X\_{\text{rand}}\left(t\right) - b|X\_{\text{rand}}\left(t\right) - 2r\_2X(t)| & q \ge 0.5\\ w(t)(X\_{\text{rabbit}}\left(t\right) - X\_{\text{m}}\left(t\right)) - b(lb + r\_4(ub - lb)) & q < 0.5 \end{cases}, \quad b = e^{5\cos\left(\pi\left(1 - \frac{t}{t\_{\text{max}}}\right)\right)} \tag{25}$$
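The variable spiral step can be sketched as follows. Note that the exponent of *b* is garbled in the source; we assume the form *b* = *e*^(5 cos(*π*(1 − *t*/*t*max))) used by variable spiral search strategies in the related literature, and the helper names are our own:

```python
import numpy as np

def spiral_b(t, t_max):
    # Assumed form of the variable spiral parameter in Equation (25):
    # the exponent 5*cos(pi*(1 - t/t_max)) sweeps from -5 to +5 as t
    # goes from 0 to t_max, so b varies over roughly [e^-5, e^5].
    return np.exp(5 * np.cos(np.pi * (1 - t / t_max)))

def spiral_exploration(X, X_rand, X_rabbit, X_mean, t, t_max, lb, ub, rng):
    # Equation (25): the random step sizes r1, r3 of Equation (24)
    # are replaced by the iteration-dependent spiral parameter b.
    w = 0.2 * np.cos(np.pi / 2 * (1 - t / t_max))  # Equation (23)
    b = spiral_b(t, t_max)
    q, r2, r4 = rng.random(3)
    if q >= 0.5:
        return w * X_rand - b * np.abs(X_rand - 2 * r2 * X)
    return w * (X_rabbit - X_mean) - b * (lb + r4 * (ub - lb))
```

Under this assumed form, *b* changes monotonically with *t*, so each hawk follows a differently shaped spiral at each stage of the run instead of a fixed arc.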

#### *3.5. Optimal Neighborhood Disturbance*

When updating its position, the Harris's hawk generally took the current optimal position as the target of the iteration. Over the whole run, the optimal position was updated only when a better position was found; thus, the total number of updates was small, which made the algorithm's search inefficient. In this regard, an optimal neighborhood disturbance strategy was introduced to randomly search the neighborhood of the optimal position for a better global value, which not only improved the convergence speed of the algorithm but also prevented premature convergence. A random disturbance was applied to the optimal position to increase the search of the nearby space, and the neighborhood disturbance formula was:

$$\tilde{X}(t) = \begin{cases} X^\*(t) + 0.5 \cdot h \cdot X^\*(t), & g < 0.5\\ X^\*(t), & g \ge 0.5 \end{cases} \tag{26}$$

where *h* and *g* were random numbers uniformly generated in [0, 1], and *X̃*(*t*) was the newly generated position. For the generated neighborhood positions, a greedy strategy was used to decide whether to keep them, and the formula was:

$$X^\*(t) = \begin{cases} \tilde{X}(t), & f(\tilde{X}(t)) < f(X^\*(t)) \\ X^\*(t), & f(X^\*(t)) \le f(\tilde{X}(t)) \end{cases} \tag{27}$$

where *f*(*x*) was the fitness value of position *x*. If the generated position was better than the original position, it replaced the original position and became the global optimum; otherwise, the optimal position remained unchanged.
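The disturbance-plus-greedy-selection step of Equations (26) and (27) can be sketched as follows, assuming minimization over a vector-valued position; `sphere` is a toy fitness of our own, used only for illustration:

```python
import numpy as np

def neighborhood_disturbance(X_best, f, rng):
    # Equation (26): with probability 0.5 (g < 0.5), perturb the best
    # position by up to +50% of itself; h, g are uniform in [0, 1].
    h, g = rng.random(2)
    X_new = X_best + 0.5 * h * X_best if g < 0.5 else X_best.copy()
    # Equation (27): greedy selection -- keep the perturbed position
    # only if its fitness value is strictly better (smaller).
    return X_new if f(X_new) < f(X_best) else X_best

# Toy fitness (our own example): squared distance from the origin.
sphere = lambda x: float(np.sum(x ** 2))
```

With the sphere fitness, scaling a nonzero position outward can only worsen it, so the greedy step keeps the incumbent; on fitness landscapes where the perturbed point is better, the incumbent is replaced instead.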
