#### **1. Introduction**

Along with the significant increase in the processing power of computer hardware and software, a large number of excellent meta-heuristics have been developed in the field of intelligent computing [1–5]. Meta-heuristics form a large class of algorithms developed in contrast with exact optimization and problem-specific heuristics. Exact optimization algorithms are dedicated to finding the optimal solution to a problem, but they are often difficult to apply because many problems are intractable [6,7]. Heuristic algorithms are tailored to a problem through intuition, experience, and problem-specific information, but are often difficult to generalize because of this specialization. Compared with these two classes, meta-heuristic algorithms are more general and do not require deep adaptation to the problem; although they do not guarantee optimal solutions, they can usually obtain near-optimal solutions within acceptable space and time, even though the degree of deviation from the optimum is difficult to estimate [8–12].


The main optimization strategies of meta-heuristic algorithms can be summarized as follows: (1) diversification, i.e., exploration over a wide range of the search space so that the global optimum is not missed; and (2) intensification, i.e., exploitation within a local range to obtain a solution as close to the optimum as possible [13]. The main difference between the various meta-heuristic algorithms lies in how they strike a balance between the two. Almost all meta-heuristic algorithms share the following characteristics: (1) they are inspired by phenomena in nature, such as physical processes, biology, and biological behavior; (2) they use stochastic strategies; (3) they do not require gradient information of the objective function; (4) they have several parameters that need to be adapted to the problem; and (5) they offer good parallelism and autonomous exploration. Meta-heuristic algorithms have been widely used in many aspects of social production and life. Many related research papers are published every year in production scheduling [14,15], engineering computing [16,17], management decision-making [18,19], machine learning (ML) [20,21], system control [22], and many other disciplines.

Population-based meta-heuristics are categorized into four main groups [23,24]: (1) algorithms that simulate physical processes, (2) evolutionary algorithms, (3) swarm intelligence algorithms, and (4) algorithms that simulate human behavior [25–27]. Algorithms simulating physical processes include simulated annealing (SA) [28], the gravitational search algorithm, which simulates Earth's gravity [29], the artificial chemical reaction optimization algorithm [30], heat transfer search, which simulates the heat transfer process in thermodynamics [31], Gases Brownian motion optimization, which simulates Brownian motion in physics [32], and Henry gas solubility optimization, which simulates the Henry gas solubility process [33]. Evolutionary algorithms include the genetic algorithm (GA) [34], proposed in 1975 by the American professor Holland on the basis of Darwinian evolutionary theory and the survival-of-the-fittest mechanism in nature, as well as evolution strategies [35], differential evolution [36], genetic programming [37], and the Biogeography-Based Optimizer [38]. Swarm intelligence algorithms include the Artificial Bee Colony (ABC) algorithm, based on the honey harvesting mechanism of bees [39], the Firefly Algorithm, based on the flickering behavior of fireflies [40], the Beetle Antennae Search algorithm, based on the foraging principle of beetles [41], the Grey Wolf Optimization (GWO) algorithm, inspired by the hierarchy and predatory behavior of grey wolf packs [42], and the Virus Colony Search algorithm [43], which is based on the proliferation and infection strategies that viruses use to survive and reproduce in the cellular environment through host cells. Algorithms simulating human behavior include Tabu Search [44], Socio Evolution and Learning Optimization [45], Teaching-Learning-Based Optimization [46], and the Imperialist Competitive Algorithm [47].

Harris Hawks Optimization (HHO) [24] is a swarm intelligence optimization algorithm proposed by Heidari et al. in 2019 that simulates the prey-hunting process of Harris's hawks in nature. The algorithm is inspired by three phases of the hawks' predatory behavior: search, the conversion from search to exploitation, and exploitation. It has a simple principle, few parameters, and good global search capability, and has therefore been applied to image segmentation [48], neural network training [49], motor control [50], and other fields. However, like other swarm intelligence optimization algorithms, HHO suffers from slow convergence, low optimization accuracy, and a tendency to fall into local optima when solving complex optimization problems. Several improvements have been proposed. The HHO algorithm with information exchange (IEHHO) [51] uses an information exchange mechanism to enhance population diversity and thereby improve convergence speed; its limitation lies in how to set its parameters. Zhang et al. [52] introduced an exponentially decreasing strategy to update the energy factor, increasing the exploration and exploitation capability obtained from relatively higher values of the escaping energy. Elgamal et al. [53] made two improvements: (1) they applied chaotic mapping in the initialization phase of HHO, and (2) they used the SA algorithm on the current best solution to improve HHO's exploitation. Shiming Song et al. [54] applied Gaussian mutation and the dimension-decision strategy of the cuckoo search method to increase HHO's performance: the cuckoo search mechanism improved the convergence speed of the search agents and supported sufficient excavation of the solutions in the search area, while the Gaussian mutation strategy increased accuracy and helped the algorithm jump out of local optima.

However, according to the no free lunch theorem [55], no single meta-heuristic algorithm can perform best on all optimization problems. The original HHO method cannot fully balance the exploration and exploitation phases, which results in insufficient global search capability and slow convergence. To alleviate these adverse effects, we propose an improved algorithm called chaotic multi-strategy search HHO (CSHHO), which introduces chaotic mapping and a global search strategy to solve single-objective optimization problems efficiently.

Here, the initialization phase of HHO is replaced by chaotic mapping, which distributes the initial population evenly within the upper and lower bounds to enhance population diversity and, at the same time, allows the population to approach the prey location faster, accelerating the convergence of the algorithm. Adaptive weights are added to the position update formula in the exploration phase of HHO to dynamically adjust the influence of the global optimal solution. In the update phase of HHO, the optimal neighborhood perturbation strategy is introduced to prevent the algorithm from falling into a local optimum and to counter premature convergence.
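For illustration, a minimal sketch of chaotic population initialization is given below, using the Gauss (mouse) map that the experiments later identify as the most effective for HHO; the seed value and the direct scaling of the chaotic sequence onto [lb, ub] are assumptions for this sketch, not the exact parameterization used in CSHHO.

```python
import numpy as np

def gauss_map_init(pop_size, dim, lb, ub, z0=0.7):
    """Chaotic initialization sketch using the Gauss (mouse) map.
    z0 and the scaling of the chaotic values onto [lb, ub] are
    illustrative assumptions, not the CSHHO settings."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    seq = np.empty((pop_size, dim))
    z = z0
    for i in range(pop_size):
        for j in range(dim):
            # Gauss/mouse map: z_{k+1} = (1 / z_k) mod 1, with 0 mapped to 0
            z = 0.0 if z == 0.0 else (1.0 / z) % 1.0
            seq[i, j] = z
    return lb + seq * (ub - lb)   # scale the chaotic sequence into the bounds

# usage: 30 hawks in 10 dimensions within [-100, 100]
population = gauss_map_init(pop_size=30, dim=10, lb=-100, ub=100)
```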

To verify the performance of the CSHHO algorithm, the experiments first test the effect of common chaotic mappings on the performance of the HHO algorithm. The selected chaotic mappings are Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle; the experimental results show that Gauss chaotic mapping improves the accuracy of the HHO algorithm to the greatest extent. Second, the HHO algorithm based on Gauss chaotic mapping with multi-strategy search is tested. It is then compared with other classic and state-of-the-art algorithms on 23 classic test functions and 30 IEEE CEC2017 competition functions, and the significant superiority of the proposed paradigm over the other algorithms is verified with the Friedman test and the Bonferroni–Holm-corrected Wilcoxon signed-rank test. Finally, CSHHO is applied to model the reactive power output problem of a synchronous condenser based on LSSVM. The complete results show that the proposed optimizer is more effective than the other models in the experiment.
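As a side note, a statistical comparison of this kind can be reproduced along the following lines with SciPy; the data below are random placeholders rather than the paper's results, and the Holm step-down correction of the pairwise p-values is written out explicitly.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Placeholder per-benchmark mean errors for three optimizers (NOT the paper's data)
rng = np.random.default_rng(1)
cshho = rng.random(23)
hho = cshho + rng.random(23) * 0.2
gwo = cshho + rng.random(23) * 0.3

# Friedman test over all algorithms on the same benchmark set
stat, p_friedman = friedmanchisquare(cshho, hho, gwo)
print(f"Friedman: statistic={stat:.3f}, p={p_friedman:.4f}")

# Pairwise Wilcoxon signed-rank tests of CSHHO against each competitor
pvals = np.array([wilcoxon(cshho, other).pvalue for other in (hho, gwo)])

# Holm (step-down) adjustment of the pairwise p-values
m = len(pvals)
order = np.argsort(pvals)
adjusted = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (m - rank) * pvals[idx])
    adjusted[idx] = min(running_max, 1.0)
print("Holm-adjusted p-values:", adjusted)
```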

The remainder of this paper is organized as follows: Section 2 introduces the basic theory and structure of the original HHO algorithm. Section 3 introduces the chaotic operator and the global search strategies and integrates them into the original optimizer. Section 4 conducts a full range of experiments on the proposed method, presents the results, and discusses the method in light of them. Section 5 applies the proposed method to the LSSVM-based synchronous condenser reactive power output problem. Finally, Section 6 summarizes the study and proposes research ideas for the future.

#### **2. Harris Hawks Optimization Algorithm**

The HHO algorithm is a swarm intelligence optimization algorithm that is widely used for solving optimization problems. Its main idea is derived from the cooperative behavior and chasing strategy of Harris's hawks when catching prey in nature [24]. During prey capture, the HHO algorithm is divided into two phases according to the escaping energy *E* of the prey: the exploration phase and the exploitation phase, as shown in Figure 1. During the exploration phase, Harris's hawks randomly select a perching location to observe and monitor their prey.

$$X\_i(t+1) = \begin{cases} X\_{rand}(t) - r\_1 \left| X\_{rand}(t) - 2r\_2 X\_i(t) \right|, & q \ge 0.5\\ \left( X\_{rabbit}(t) - X\_m(t) \right) - r\_3 \left[ \mathbf{lb} + r\_4(\mathbf{ub} - \mathbf{lb}) \right], & q < 0.5 \end{cases} \tag{1}$$

where *Xrabbit*(*t*) denotes the position of the prey, *Xrand*(*t*) is the position of a randomly selected individual, and *Xi*(*t*) is the position of the *i*-th individual at iteration *t*; *r*1, *r*2, *r*3, *r*4, and *q* are random numbers in (0, 1); **lb** and **ub** are the lower and upper bounds of the variables; *N* is the population size; and *Xm*(*t*) is the average individual position:

$$X\_m(t) = \frac{1}{N} \sum\_{i=1}^{N} X\_i(t) \tag{2}$$
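Read together, Equations (1) and (2) amount to the following exploration step. This is a sketch that follows the formulas above; clipping the new positions back to the bounds is an added assumption.

```python
import numpy as np

def exploration_step(X, X_rabbit, lb, ub, rng=np.random.default_rng()):
    """One exploration-phase update following Eqs. (1)-(2).
    X: (N, dim) hawk positions, X_rabbit: (dim,) prey position,
    lb/ub: scalar or (dim,) bounds. Clipping to the bounds is an assumption."""
    N, _ = X.shape
    X_m = X.mean(axis=0)                       # Eq. (2): mean hawk position
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        X_rand = X[rng.integers(N)]            # a randomly selected hawk
        if q >= 0.5:                           # perch relative to a random hawk
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:                                  # perch relative to prey and mean position
            X_new[i] = (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)
```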

**Figure 1.** Description of each stage of HHO.

As the physical energy of the prey decreases, the exploration phase changes to the exploitation phase. The prey's escaping energy *E* is defined by its initial energy *E*0, the current iteration *t*, and the maximum number of iterations *T* as follows:

$$E = 2E\_0 \left( 1 - \frac{t}{T} \right) \tag{3}$$
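In code, the schedule of Equation (3) is simply the function below; the conventions that *E*0 is redrawn from (−1, 1) at every iteration and that |*E*| ≥ 1 selects exploration while |*E*| < 1 selects exploitation follow the original HHO paper [24].

```python
def escaping_energy(t: int, T: int, E0: float) -> float:
    """Eq. (3): the prey's escaping energy decays linearly over the run.
    In the original HHO, E0 is redrawn from (-1, 1) at every iteration;
    |E| >= 1 triggers the exploration phase and |E| < 1 the exploitation phase."""
    return 2.0 * E0 * (1.0 - t / T)
```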

In the exploitation phase, the Harris's hawks launch a surprise attack on the target prey found in the exploration phase, and the prey tries to escape when it senses danger. Let the randomly generated escape chance of the prey be *r*: when *r* < 0.5 the prey escapes successfully, and when *r* ≥ 0.5 it fails to escape. According to the magnitudes of *r* and *|E|*, four different position update strategies are used in the exploitation phase (see Table 1).
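The dispatch among the four strategies of Table 1 can be written compactly as below; the strategy names and thresholds follow the original HHO paper [24], while the position-update formulas themselves are omitted here.

```python
def select_exploitation_strategy(r: float, E: float) -> str:
    """Choose one of the four exploitation strategies of Table 1
    from the escape chance r and the escaping energy E (original HHO rules)."""
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"
    if r >= 0.5 and abs(E) < 0.5:
        return "hard besiege"
    if r < 0.5 and abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"
    return "hard besiege with progressive rapid dives"
```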

According to these position update conditions, the position of each Harris's hawk is updated continuously and its fitness value is calculated. If the fitness threshold is reached, the algorithm terminates; otherwise, it continues to execute until the maximum number of iterations is reached, at which point the algorithm terminates and the optimal solution is returned (see Figure 2).
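Putting the pieces together, a compact skeleton of the flow in Figure 2 might look as follows. For brevity, only the soft and hard besiege updates of Table 1 are included (the rapid-dive variants and the CSHHO improvements are omitted), so this is an illustrative baseline rather than the full algorithm.

```python
import numpy as np

def hho_sketch(obj, dim, lb, ub, N=30, T=200, seed=0):
    """Compact HHO main loop (baseline, simplified exploitation)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, dim)) * (ub - lb)         # random initial population
    fitness = np.array([obj(x) for x in X])
    best_idx = int(fitness.argmin())
    best, best_f = X[best_idx].copy(), float(fitness[best_idx])
    for t in range(T):
        X_m = X.mean(axis=0)                          # Eq. (2)
        for i in range(N):
            E0 = 2.0 * rng.random() - 1.0
            E = 2.0 * E0 * (1.0 - t / T)              # Eq. (3)
            if abs(E) >= 1.0:                         # exploration, Eq. (1)
                q, r1, r2, r3, r4 = rng.random(5)
                X_rand = X[rng.integers(N)]
                if q >= 0.5:
                    X[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
                else:
                    X[i] = (best - X_m) - r3 * (lb + r4 * (ub - lb))
            else:                                     # exploitation (Table 1, simplified)
                r = rng.random()
                if r >= 0.5 and abs(E) >= 0.5:        # soft besiege
                    J = 2.0 * (1.0 - rng.random())
                    X[i] = (best - X[i]) - E * np.abs(J * best - X[i])
                else:                                 # hard besiege used as fallback here
                    X[i] = best - E * np.abs(best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_f:                            # greedy update of the prey position
                best, best_f = X[i].copy(), float(f)
    return best, best_f

# usage: minimize the sphere function in 10 dimensions
best, best_f = hho_sketch(lambda x: float(np.sum(x ** 2)), dim=10, lb=-100.0, ub=100.0)
```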


**Table 1.** Exploitation phase of HHO algorithm.

**Figure 2.** Flow chart of HHO algorithm.

#### **3. HHO Algorithm Based on Multi-Search Strategy**

*3.1. Reasons for Improving the Basic HHO Algorithm*

Harris's hawks generally gather high in trees to search for prey. While hunting, they often hover in a spiral; when approaching the prey, they rush toward it at high speed until the distance to the prey is small, then slow down and adjust their body posture to increase the probability of catching it [56–58]. This mechanism is important for the HHO algorithm. The exploration phase of the basic HHO algorithm uses Equations (1)–(3), in which the optimal solution from the previous iteration affects the current solution and can cause the algorithm to fall into a local optimum, while searching for prey in a purely linear way leads to a single, limited search pattern. Over all iterations, the optimal position is updated only when the algorithm finds a solution better than the current one, so the overall number of updates of the optimal position is low, which reduces the efficiency of the search. In reality, when a Harris's hawk chases its prey, it hovers and descends in a spiral to catch the prey adaptively, showing better agility when hunting.

Here, the optimal neighborhood disturbance strategy is introduced to enhance the convergence speed of the algorithm and avoid premature convergence. The adaptive weighting and variable spiral position update strategies are introduced to enhance the global search capability of the algorithm by simulating the predation process of Harris's hawks in nature. To make the initial solutions generated in the population initialization phase of the HHO algorithm cover the solution space as widely as possible, we compared seven commonly used chaotic mapping initialization methods and selected the one best suited to HHO as the population initialization method of the improved algorithm. Together, these four methods improve the global search capability of the HHO algorithm and increase the speed with which the Harris's hawks find the optimal solution; a rough sketch of the first strategy is given below.
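The sketch below illustrates the general idea of an optimal-neighborhood disturbance: perturb the current best solution within a small neighborhood and keep the perturbed point only if it improves the fitness. The perturbation radius and the uniform sampling are assumptions for this illustration, not the exact operator defined for CSHHO.

```python
import numpy as np

def neighborhood_perturbation(best, best_f, obj, lb, ub, radius=0.1,
                              rng=np.random.default_rng()):
    """Illustrative optimal-neighborhood disturbance with greedy acceptance.
    The radius and sampling distribution are assumptions, not the CSHHO operator."""
    trial = best + radius * (ub - lb) * (2.0 * rng.random(best.shape) - 1.0)
    trial = np.clip(trial, lb, ub)        # keep the trial point inside the bounds
    f = obj(trial)
    # keep the perturbed point only if it improves the current best fitness
    return (trial, f) if f < best_f else (best, best_f)
```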
