Article

Three Chaotic Strategies for Enhancing the Self-Adaptive Harris Hawk Optimization Algorithm for Global Optimization

by Sultan Almotairi 1,2,*, Elsayed Badr 3,4,*, Mustafa Abdul Salam 5,6 and Alshimaa Dawood 3

1 Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
2 Department of Computer Science, Faculty of Computer and Information Systems, Islamic University of Madinah, Medinah 42351, Saudi Arabia
3 Scientific Computing Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
4 Data Science Department, Faculty of Computers and Information Systems, Egyptian Chinese University, Cairo 11786, Egypt
5 Artificial Intelligence Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
6 Faculty of Computer Studies, Arab Open University, Cairo 11211, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4181; https://doi.org/10.3390/math11194181
Submission received: 23 August 2023 / Revised: 16 September 2023 / Accepted: 22 September 2023 / Published: 6 October 2023
(This article belongs to the Special Issue Advanced Optimization Methods and Applications, 2nd Edition)

Abstract

Harris Hawk Optimization (HHO) is a well-known nature-inspired metaheuristic model inspired by the distinctive foraging strategy and cooperative behavior of Harris Hawks. As with numerous other algorithms, HHO is susceptible to getting stuck in local optima and has a sluggish convergence rate. Several techniques have been proposed in the literature to improve the performance of metaheuristic algorithms (MAs) and to tackle their limitations, and chaos optimization strategies have been used for many years to enhance MAs. There are four distinct categories of chaos strategies: chaotic-mapped initialization, randomness, iterations, and controlled parameters. This paper introduces SHHOIRC, Self-adaptive Harris Hawk Optimization using three chaotic optimization methods, a novel hybrid algorithm designed to enhance the efficiency of HHO. The proposed hybrid algorithm, authentic HHO, and five HHO variants are evaluated on 16 well-known benchmark functions. The computational results and statistical analysis demonstrate that SHHOIRC offers notable improvements over the previously published algorithms: it outperformed them on 81.25% of the benchmark functions, compared to 18.75% for the prior algorithms, by obtaining the best average solutions for 13 benchmark functions. Furthermore, the proposed algorithm is tested on a real-life problem, the maximum coverage problem of Wireless Sensor Networks (WSNs), and compared with pure HHO and two well-known algorithms, Grey Wolf Optimization (GWO) and the Whale Optimization Algorithm (WOA). For the maximum coverage experiments, the proposed algorithm demonstrated superior performance, surpassing the other algorithms by obtaining the best coverage rates of 95.4375% and 97.125% for Experiments 1 and 2, respectively.

1. Introduction

In recent years, numerous different real-world optimization challenges, such as engineering and scientific experiments, have seen the widespread adoption of nature-inspired algorithms. The optimization problem involves locating the optimal solution (maximum or minimum) among all feasible alternatives. Due to the increasing complexity of optimization problems, traditional mathematical methods such as gradient descent have become insufficient because they are time-consuming and confined to local optima. To tackle such issues, metaheuristic algorithms have gained popularity due to their efficiency, simplicity, easy implementation, and low computational costs. Metaheuristic algorithms exhibit competitive performance and outperform other solvers in solving problems since they mimic biological and physical behavior in nature. Some examples are Particle Swarm Optimization (PSO) [1], the Genetic Algorithm (GA) [2], the Grey Wolf Optimizer (GWO) [3], the Whale Optimization Algorithm (WOA) [4], the Flower Pollination Algorithm (FPA) [5], the Salp Swarm Algorithm (SSA) [6], Moth–Flame Optimization (MFO) [7], Artificial Bee Colony Optimization (ABC) [8], the Bat Algorithm (BA) [9], the Wild Horse Optimizer (WHO) [10], the Lightning Search Algorithm (LSA) [11], and Ant Colony Optimization (ACO) [12].
The No Free Lunch theorem (NFL) [13] states that there is no single algorithm capable of optimally solving all optimization problems. Consequently, a significant number of metaheuristic algorithms have been devised over the past two decades, and it has been demonstrated that metaheuristic techniques can efficiently and effectively find solutions to complex problems. In other words, if an algorithm performs well in one optimization task, it may not perform well in other tasks; the fact that a metaheuristic algorithm works well for one type of problem does not guarantee that it will work just as well for another. HHO is one of the most effective metaheuristic algorithms; it is inspired by the cooperative behavior and chasing manner of Harris Hawks, as well as their predation and breeding behaviors in nature [14]. HHO outperforms many well-known Swarm Intelligence (SI) approaches, such as PSO, GWO, CS, MFO, DE, BA, and WOA, based on the outcomes of tests conducted on benchmark functions and multiple engineering optimization problems. Moreover, the results demonstrate that HHO achieves a good balance between exploration and exploitation, thereby enhancing its scalability and capacity to produce high-quality solutions [15]. As with many other algorithms, HHO has disadvantages, including the possibility of becoming trapped in local optima and a slow convergence rate. Owing to the stochastic nature of the optimization process, achieving a balance between exploration and exploitation is one of the most difficult aspects of devising a metaheuristic algorithm [4]. The NFL theorem has led to the development of HHO variants through a variety of techniques, including binary versions, evolutionary update structures, chaotic processes, and hybrid HHO [16].
In the literature, various studies have proposed enhancements to the HHO algorithm by incorporating different mathematical operators. For instance, Binary HHO (BHHO) and Quadratic Binary HHO (QBHHO) were introduced as binary versions of HHO [17]. Opposite HHO (OHHO) [18] was applied for feature selection in breast cancer classification. Chaotic HHO (CHHO) and a variant chaotic HHO were proposed by Menesy et al. [19] and Basha et al. [20], respectively, while Hussien and Amin [21] enhanced a new HHO version based on chaotic local search, opposition-based learning, and self-adaption. The Multi-Objective Harris Hawk Optimization algorithm (MOHHO) [22] employed Pareto dominance and crowding distance mechanisms to handle multiple objectives and improve diversity in the search space. Shi et al. [23] proposed a framework, SGLHHOSVM, that combines HHO with the Support Vector Machine (SVM). Adaptive relative reflection HHO (ARHHO) was proposed by Zou and Wang [24]. A hybrid Harris Hawk Optimization algorithm with Differential Evolution (HHO-DE) is proposed in [25], and a Modified Harris Hawk Optimization algorithm (MHHO) in [26]. In [27], various HHO techniques are proposed, including Improved HHO with Opposition-Based Learning (IHHOOBL), Improved HHO with Lévy Flight (IHHOLF), and Improved HHO with a Chaotic Map (IHHOCM), which aim to improve search efficiency and exploration and exploitation capabilities. In [28], the authors introduced an improved version of HHO (IHHO) by incorporating elite opposition-based learning and a new search mechanism.
The theory of nonlinear dynamics chaos has found widespread application, and several metaheuristic algorithms have used chaos to enhance performance [29]. Chaos theory involves the study of chaotic dynamical systems, which are highly sensitive to initial conditions and comprise an infinite number of unstable periodic motions. In particular, the use of chaotic mapping techniques was developed to determine random parameters in metaheuristic algorithms, leading to faster convergence and reduced chance of being trapped in local optima [30,31]. In this context, a new variant of HHO is presented in this paper to further enhance its performance. This variant incorporates chaos theory and a self-adaptive technique to improve the algorithm’s exploration and exploitation capabilities.
In recent years, wireless sensor networks (WSNs) have spread rapidly as a consequence of their ubiquitous daily usage in several fields. Applications of WSNs include environmental monitoring, surveillance systems, healthcare, the Internet of Things (IoT), and public safety [32,33]. Power scarcity is an important issue that negatively affects WSN performance. The coverage problem of WSNs is a fundamental problem that directly affects their energy consumption [34]. The coverage problem for a specific region refers to finding the minimum number of sensors that cover the maximum area [32]. Designing a WSN with a high coverage rate and a reasonable number of sensors leads to efficient energy consumption and a longer lifetime. The maximum coverage problem for wireless sensor networks is an NP-hard optimization problem.
There has been considerable interest in studying the coverage problem of WSNs, and several research results have been published in recent years. In the literature, Hu et al. [35] proposed a method to optimize the deployment of large-radius sensors using binary PSO (binary-PSO). Aziz et al. [36] used a PSO-Voronoi algorithm to minimize coverage holes. Ngatchou et al. [37] used a modified version of PSO, sequential PSO. In [38], Wang et al. proposed virtual force-directed co-evolutionary PSO to deploy WSNs with good efficiency. Wang et al. [39] proposed a novel head wolf mutation strategy to enhance the Wolf Pack Algorithm (WPA) for coverage optimization. In [40], Alshaqqawi et al. proposed an optimization approach that combines the PSO algorithm with a greedy method. Liang et al. [41] presented the MTCLM problem to maximize the number of covered targets. In [42], Wang proposed a positioning method based on a random PSO algorithm (SPSO). By integrating the locally convergent PSO with the gravity search algorithm, Mehta and Malik [43] introduced a new hybrid optimization algorithm that turns PSO into a global optimization algorithm. Tarnaris et al. [44] used GA, PSO, grid-based PSO, and a Voronoi-based PSO method to maximize area coverage and area k-coverage. Bonnah et al. [45] proposed a combination of the computed minimal exposed path and the PSO algorithm, using the ratio of covered to uncovered grids as a fitness function. Li and Liu [46] proposed an optimization algorithm for monitoring-area coverage based on controlling the node positions, which can rapidly improve the coverage effect. Zhang [47] simulated the coverage optimization of WSNs based on an improved PSO algorithm. Gökalp [48] used the self-adaptive DE algorithm (SADE) to optimize connected target coverage for the first time in the literature. In [49], Zhang and Zhang used DBSCAN and TDADA-II. A Mobile Assisted Coverage Hole Patching Scheme (MACHPS) was introduced by Wang et al. [50], who compared it with two other algorithms, CMPA and CHHA. Mnasri et al. [51] proposed a hybrid approach relying on a novel bird's accent-based many-objective PSO algorithm (named acMaPSO) and another variant (named acMaMaPSO), and compared their performance with standard PSO and the NSGA-III algorithm. The Weighted Salp Swarm Algorithm (WSSA) is a technique proposed by M. A. Syed and R. Syed in [52] to improve the performance and convergence rate of the SSA; its performance was compared with PSO, MFO, GWO, WOA, MVO, and MGWO.
In this paper, we propose a new version of HHO. The proposed algorithm (SHHOIRC) is a hybrid algorithm consisting of Harris Hawk Optimization, three chaos optimization methods (chaotic-mapped initialization, chaotic-mapped randomness, and chaotic-mapped controlled parameter), and a self-adaptive method. SHHOIRC is tested and compared with genuine HHO, SHHO [21], and four versions of HHO proposed in [31]. In [31], Gezici and Livatyalı proposed forty HHO variants in which HHO is hybridized with ten distinct chaotic maps to modify its critical parameters, using four distinct hybridization techniques. These four methods used combinations of two chaos optimization techniques: chaotic-mapped randomness and chaotic-mapped controlled parameter. Here, SHHOIRC is compared with the four variants that use quite comparable processes. The proposed algorithm and the other six HHO versions are applied to 16 benchmark functions. Furthermore, the proposed algorithm is tested on a real-life problem, the maximum coverage problem, and compared with pure HHO and two well-known algorithms, GWO [3] and WOA [4].
This document is structured as follows. The second section describes Harris Hawk optimization. The SHHOIRC algorithm and the used strategies are described in Section 3. Section 4 contains the results of the experiment, their simulation, and statistical analysis. The fifth section contains the conclusion.

2. Harris Hawk Optimization

HHO is a population-based metaheuristic technique inspired by the cooperative behavior and hunting style (surprise pounce) of Harris Hawks, as well as their natural predation and reproductive behaviors [14]. Harris Hawks forage with other family members, which distinguishes their foraging behavior from that of other birds. The algorithm has three phases: the Exploration phase, the Transportation phase, and the Exploitation phase. In the Exploration phase, the hawks perch randomly according to the positions of other members or on a random tall tree until they detect prey. In the next phase, the Transportation phase, the hawks determine whether to chase the prey according to the prey's energy. In the Exploitation phase, depending on the dynamic character of situations and the evasion strategies of their prey, Harris Hawks pursue their prey in a variety of ways. Algorithm 1 presents the pseudocode of HHO. The HHO phases can be modeled as follows:

2.1. Exploration Phase

In HHO, Harris Hawks represent candidate solutions, and the best solution in each phase is regarded as the intended prey or as close to the optimal solution. Based on two strategies, Harris Hawks settle at random in various locations and wait to detect prey. Considering an equal probability $q$ for each perching technique: if $q < 0.5$, the position of the hawks is determined with regard to the other hawks in the population and the rabbit position; if $q \ge 0.5$, the hawks perch on a random tall tree inside the range. The position of each hawk can be modeled as in Equation (1), considering the stated perching strategies:
$$X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right| & \text{if } q \ge 0.5 \\ \left( X_{rabbit}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right) & \text{if } q < 0.5 \end{cases} \tag{1}$$
where $X(t+1)$ is the position vector of the hawks in the next iteration, $X_{rabbit}(t)$ is the rabbit position in the current iteration $t$, $X(t)$ is the current position vector of the hawks, $X_{rand}(t)$ is a randomly selected hawk from the current population, $r_1, r_2, r_3,$ and $r_4$ are random numbers in the interval $(0, 1)$, $LB$ and $UB$ are the lower and upper bounds of the problem, and $X_m(t)$ is the average position of the current population of $N$ hawks (Equation (2)):
$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{2}$$

2.2. Transportation from Exploration to Exploitation

The transition from exploration to exploitation is dependent on the prey's escaping energy. During its fleeing behavior, the prey's energy decreases drastically. Exploration occurs when $|E| \ge 1$, whereas exploitation occurs in subsequent phases when $|E| < 1$. The prey's energy can be represented by Equation (3):
$$E = 2 E_0 \left( 1 - \frac{t}{T} \right) \tag{3}$$
where $E$ indicates the escaping energy of the prey, $T$ represents the maximum number of iterations, and $E_0$ is the initial state of the rabbit's energy, which randomly changes inside the interval $(-1, 1)$ at each iteration.

2.3. Exploitation Phase

In this phase, the Harris Hawks perform a surprise pounce (seven kills) by attacking the intended prey detected in the previous phase. In the attacking stage, there are four possible strategies, proposed with regard to the escaping behavior of the prey and the chasing strategies of the hawks. It is supposed that $r$ is the chance of the prey successfully escaping ($r < 0.5$) or not successfully escaping ($r \ge 0.5$) before the surprise pounce. The four strategies are as follows:

2.3.1. Soft Besiege

If $r \ge 0.5$ and $|E| \ge 0.5$, then the prey still has energy but cannot escape successfully. In this case, the hawks proceed with a soft besiege. The position update formula can be modeled as Equations (4) and (5):
$$X(t+1) = \Delta X(t) - E \left| J X_{rabbit}(t) - X(t) \right| \tag{4}$$
$$\Delta X(t) = X_{rabbit}(t) - X(t) \tag{5}$$
where $\Delta X(t)$ represents the difference between the rabbit's position vector and the hawk's current location in iteration $t$, and $J$ is the random jump strength of the rabbit throughout the escaping procedure. The $J$ value changes randomly in each iteration to simulate the rabbit's motion. The jump strength can be modeled as in Equation (6), where $r_5$ is a random number inside $(0, 1)$:
$$J = 2 (1 - r_5) \tag{6}$$

2.3.2. Hard Besiege

The prey lacks sufficient energy and cannot effectively flee when $r \ge 0.5$ and $|E| < 0.5$. In this case, the hawks proceed to a hard besiege. The position update formula can be modeled as Equation (7):
$$X(t+1) = X_{rabbit}(t) - E \left| \Delta X(t) \right| \tag{7}$$

2.3.3. Soft Besiege with Progressive Rapid Dives

If $r < 0.5$ and $|E| \ge 0.5$, then the prey still has enough energy and can escape successfully. This case is inspired by the actual behavior of hawks: it is assumed that, in competitive situations, they can choose the optimal dive toward the prey in order to capture it. Consequently, the hawks can evaluate their next move based on the following rule, represented by Equation (8):
$$Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X(t) \right| \tag{8}$$
If the hawks dive following Lévy-flight (LF)-based patterns, the next move proceeds using Equation (9):
$$Z = Y + S \times LF(D) \tag{9}$$
where $D$ is the dimension of the problem, $S$ is a random vector of size $1 \times D$, and $LF$ is the Lévy flight function (Equation (10)), where $u, v$ are random values inside $(0, 1)$ and $\beta$ is a default constant set to 1.5:
$$LF(D) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \tag{10}$$
In this phase, the next location is set to the better of $Y$ and $Z$. Hence, the position formula of this phase can be modeled as Equation (11):
$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \tag{11}$$
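As a concrete illustration of Equations (9) and (10), the following Python sketch generates one Lévy-flight step. It is a minimal sketch rather than the authors' code, and it follows the text's statement that $u$ and $v$ are drawn from $(0, 1)$; the placeholder position $Y$ is only there to show the shape of Equation (9).

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5):
    # sigma of Equation (10)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.uniform(0.0, 1.0, dim)  # u, v inside (0, 1), as stated above
    v = np.random.uniform(0.0, 1.0, dim)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

# Equation (9): a candidate dive Z = Y + S * LF(D)
D = 30
Y = np.zeros(D)                             # placeholder position vector
Z = Y + np.random.rand(D) * levy_flight(D)  # S is a random 1 x D vector
```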

2.3.4. Hard Besiege with Progressive Rapid Dives

If $r < 0.5$ and $|E| < 0.5$, then the prey does not have enough energy and cannot escape successfully. In this case, the hawks proceed to a hard besiege with progressive rapid dives. The position update formula can be modeled as Equation (12):
$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \tag{12}$$
where $Y$ and $Z$ are obtained using the new rules in Equations (13) and (14):
$$Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X_m(t) \right| \tag{13}$$
$$Z = Y + S \times LF(D) \tag{14}$$
Algorithm 1 Pseudo-code of HHO Algorithm
1: Initialize the parameters (Popsize (N), MaxIter (T), LB, UB, and Dim)
2: Initialize a population X0
3: While (iter ≤ MaxIter) do
4:     Compute the fitness function for each hawk xi
5:     Xrabbit = the best search agent
6:     for each hawk (xi) do
7:         Update the escaping energy E using Equation (3)
8:         if (|E| ≥ 1) then
9:             Update hawk position using Equation (1)
10:        end if
11:        if (|E| < 1) then
12:            if (r ≥ 0.5 and |E| ≥ 0.5) then
13:                Update hawk position using Equation (4)
14:            else if (r ≥ 0.5 and |E| < 0.5) then
15:                Update hawk position using Equation (7)
16:            else if (r < 0.5 and |E| ≥ 0.5) then
17:                Update hawk position using Equation (11)
18:            else if (r < 0.5 and |E| < 0.5) then
19:                Update hawk position using Equation (12)
20:            end if
21:        end if
22:    end for
23: End While
24: Return Xrabbit
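For readers who prefer executable code to pseudocode, the loop of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch for a minimization problem with scalar bounds, reusing the levy_flight sketch given earlier; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def hho(fitness, dim, lb, ub, n_hawks=30, max_iter=500):
    # Population of hawks; each row is a candidate solution.
    X = np.random.uniform(lb, ub, (n_hawks, dim))
    best = min(X, key=fitness).copy()                              # X_rabbit
    for t in range(max_iter):
        for i in range(n_hawks):
            E = 2 * np.random.uniform(-1, 1) * (1 - t / max_iter)  # Equation (3)
            r, J = np.random.rand(), 2 * (1 - np.random.rand())    # Equation (6)
            if abs(E) >= 1:                                        # exploration, Equation (1)
                if np.random.rand() >= 0.5:
                    Xr = X[np.random.randint(n_hawks)]
                    X[i] = Xr - np.random.rand() * np.abs(Xr - 2 * np.random.rand() * X[i])
                else:
                    X[i] = (best - X.mean(axis=0)) \
                           - np.random.rand() * (lb + np.random.rand() * (ub - lb))
            elif r >= 0.5 and abs(E) >= 0.5:                       # soft besiege, Equation (4)
                X[i] = (best - X[i]) - E * np.abs(J * best - X[i])
            elif r >= 0.5:                                         # hard besiege, Equation (7)
                X[i] = best - E * np.abs(best - X[i])
            else:                                                  # rapid dives, Equations (8)-(14)
                base = X[i] if abs(E) >= 0.5 else X.mean(axis=0)
                Y = best - E * np.abs(J * best - base)
                Z = Y + np.random.rand(dim) * levy_flight(dim)
                if fitness(Y) < fitness(X[i]):
                    X[i] = Y
                elif fitness(Z) < fitness(X[i]):
                    X[i] = Z
            X[i] = np.clip(X[i], lb, ub)
        cand = min(X, key=fitness)                                 # greedy update of X_rabbit
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best

# usage: hho(lambda x: float(np.sum(x ** 2)), dim=30, lb=-100.0, ub=100.0)
```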

3. SHHOIRC Algorithm

The proposed algorithm, SHHOIRC, is a Self-adaptive Harris Hawk Optimization that combines HHO with three chaotic optimization methods: chaotic-mapped initialization, chaotic-mapped randomness, and chaotic-mapped controlled parameter.

3.1. Chaos Optimization

Chaos is a random-like common phenomenon that occurs in nonlinear systems and is considered a source of stochasticity [53]. A chaotic map can generate a huge number of non-repetitive sequences instead of random sequences by only changing its initial values. Examples of chaotic maps that can be used to generate non-repetitive random numbers within specified limits are detailed in Table 1, where $z_k$ and $z_{k+1}$ are the chaotic variables of the $k$th and $(k+1)$th iterations, respectively. For the initial conditions of the chaotic maps, $z_k$ is set to a random number in $(0, 1)$. Each chaotic variable is generated from the previous chaotic variable in the sequence. In the proposed algorithm, the Circle map is used to generate all the chaotic variables. Chaos theory is applied to HHO in order to enhance convergence speed and prevent it from falling into local minimum traps by utilizing a chaotic map to explore unvisited regions of the search space.
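As an illustration, one iterate of the Circle map can be written as below. The constants a = 0.5 and b = 0.2 are commonly used values and are an assumption here; the exact parameters are those defined in Table 1.

```python
import numpy as np

def circle_map(z, a=0.5, b=0.2):
    # One iterate: z_{k+1} = mod(z_k + b - (a / 2*pi) * sin(2*pi*z_k), 1)
    return np.mod(z + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * z), 1.0)

z = np.random.rand()          # random initial condition in (0, 1)
sequence = []
for _ in range(10):           # a short non-repetitive chaotic sequence
    z = circle_map(z)
    sequence.append(z)
```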
The chaotic maps have been introduced to improve nature-inspired algorithms for many years. As stated in [54], there are four ways for chaotic applications to improve the performance of the existing nature-inspired algorithms. The chaotic improvements can be categorized as follows: Chaotic-mapped initialization, Chaotic-mapped randomness, Chaotic-mapped iterations, and Chaotic-mapped controlled parameters. In the literature, chaotic-mapped initialization [55] enhances PSO, chaotic-mapped randomness [56] improves the Bat algorithm, chaotic-mapped iterations [53] enhance the Grey Wolf optimizer, and chaotic-mapped controlled parameters [57] optimize GWO algorithm performance. These approaches significantly contribute to their respective algorithms. In the proposed algorithm, three types of strategies are used.
The first way is chaotic-mapped initialization; in this method, the chaotic variables are used to generate the initial population of HHO. This modification is applied by generating random initial positions for the hawks using the chaotic variables. The generated positions are then limited to the upper and lower boundaries of the search space according to the optimization problem. The initial population positions $X_{initial}$ can be modeled as follows (Equation (15)):
$$X_{initial} = LB + X_0 \, z_{k+1} \times (UB - LB) \tag{15}$$
where $X_0$ denotes the randomly generated positions, $z_{k+1}$ is the chaotic variable generated using the Circle chaotic map, and $UB$ and $LB$ are the upper and lower limits of the problem.
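A minimal sketch of this initialization, reusing the circle_map function from the previous snippet and assuming element-wise products, is:

```python
import numpy as np

def chaotic_init(n_hawks, dim, lb, ub):
    z = circle_map(np.random.rand(n_hawks, dim))  # z_{k+1}, one per coordinate
    X0 = np.random.rand(n_hawks, dim)             # random base positions
    return lb + X0 * z * (ub - lb)                # Equation (15)
```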
The second method is chaotic-mapped randomness; in this method, the chaotic variables are used to generate the random numbers used in Equation (1) and to choose $X_{rand}(t)$. This modification can be modeled as follows (Equation (16)):
$$X(t+1) = \begin{cases} X_{rand}(t) - z_{k+1} r_1 \left| X_{rand}(t) - 2 z_{k+1} r_2 X(t) \right| & \text{if } q \ge 0.5 \\ \left( X_{rabbit}(t) - X_m(t) \right) - z_{k+1} r_3 \left( LB + z_{k+1} r_4 (UB - LB) \right) & \text{if } q < 0.5 \end{cases} \tag{16}$$
The third is the chaotic-mapped controlled parameter; in this method, the chaotic variables are used to generate a random number that modifies the control parameter $E$ in Equation (3). This modification can be modeled as follows (Equation (17)):
$$E = 2 z_{k+1} E_0 \left( 1 - \frac{t}{T} \right) \tag{17}$$

3.2. Self-Adaptive Technique

To improve the diversity and exploration capabilities of HHO, a self-adaptive technique is used. The self-adaptive technique is integrated with HHO, and a new solution is generated using Equation (18):
$$v_i(t+1) = X_i(t+1) + SR \left( X_{CurrBest} - X_i(t) \right) \tag{18}$$
Equation (18) updates the state of solution $X_i(t+1)$, the solution obtained in iteration $t$ and prepared for use in the upcoming iteration $t+1$, using a proposed search equation based on self-adaptation. $X_i(t)$ is the $i$th hawk's solution before iteration $t$ starts, and $X_{CurrBest}$ is the best solution found so far for $X_i(t)$; it can be described as the local best for the $i$th hawk after iteration $t$ is performed. $SR$ is a self-adaptive rate, a random number between 0 and 1.
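A minimal sketch of this step, combined with the greedy selection described in Section 3.3 and assuming a minimization fitness function, is:

```python
import numpy as np

def self_adaptive_step(x_new, x_old, x_curr_best, fitness):
    SR = np.random.rand()                   # self-adaptive rate in (0, 1)
    v = x_new + SR * (x_curr_best - x_old)  # Equation (18)
    # keep the better of the HHO solution and the self-adaptive solution
    return v if fitness(v) < fitness(x_new) else x_new
```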

3.3. Proposed Algorithm

The proposed algorithm integrates the three introduced chaotic strategies and a self-adaptive technique with HHO. The first chaotic strategy enhances HHO by allowing individuals to spread throughout the domain of the given problem as evenly as possible; in this step, some individuals have a greater chance of landing near the global or a local optimum. The second strategy enhances HHO by replacing pseudo-random numbers with a non-repetitive sequence of random numbers generated by the Circle chaotic map, which enhances the diversity of the exploration phase and exploits all previously visited promising areas. The third strategy is used to balance the ratio of exploration and exploitation for individuals during the iterations. The self-adaptive method helps control the balance between exploration and exploitation, and the proposed search equation helps the algorithm explore previously unvisited promising areas. The pseudocode of SHHOIRC is shown in Algorithm 2.
In SHHOIRC, the three strategies and the self-adaptive method are integrated with HHO using Equations (15)–(18), as in the flowchart in Figure 1. The initial population is generated using Equation (15), considering the problem limits. At each iteration, the positions of the hawks are updated using Equation (16), and the control parameter E of each hawk is updated using Equation (17). After generating the hawks' solutions and obtaining HHO's best solution for the current iteration, a new solution is generated using Equation (18), and the better of the HHO solution and the self-adaptive solution is selected as the new solution for the current iteration.
Algorithm 2 Pseudo-code of SHHOIRC Algorithm
1: Initialize the parameters (Popsize (N), MaxIter (T), LB, UB, and Dim)
2: Initialize a population X0
3: Update Xinitial using Equation (15)
4: While (iter ≤ MaxIter) do
5:     Compute the fitness function for each hawk xi
6:     Xrabbit = the best search agent
7:     for each hawk (xi) do
8:         Update the escaping energy E using Equation (17)
9:         if (|E| ≥ 1) then
10:            Update hawk position using Equation (16)
11:        end if
12:        if (|E| < 1) then
13:            if (r ≥ 0.5 and |E| ≥ 0.5) then
14:                Update hawk position using Equation (4)
15:            else if (r ≥ 0.5 and |E| < 0.5) then
16:                Update hawk position using Equation (7)
17:            else if (r < 0.5 and |E| ≥ 0.5) then
18:                Update hawk position using Equation (11)
19:            else if (r < 0.5 and |E| < 0.5) then
20:                Update hawk position using Equation (12)
21:            end if
22:        end if
23:        Update hawk using Equation (18)
24:    end for
25: End While
26: Return Xrabbit

4. Simulation Experiments

In this article, the proposed SHHOIRC, the four CHHO variants proposed by Gezici and Livatyalı [31], Self-adaptive HHO (SHHO), and genuine HHO are applied to 16 benchmark functions. Furthermore, SHHOIRC is tested on a real-life problem, the maximum coverage problem of WSNs, and compared with pure HHO and two well-known algorithms, GWO and WOA. For the maximum coverage problem, two experiments are performed. All the experiments are run on a PC with the following specifications: Intel Core i5-10400 CPU @ 2.90 GHz, 16.0 GB RAM. Table 2 shows the experimental parameter settings.

4.1. Measuring Performance

  • Average is the mean of the values obtained over different runs (Equation (19)):
$$AVG = \frac{1}{N} \sum_{i=1}^{N} F_i \tag{19}$$
where $N$ and $F_i$ are the number of runs and the value obtained in the $i$th run, respectively.
  • Standard deviation is calculated as follows (Equation (20)):
$$STD = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left( F_i - AVG \right)^2} \tag{20}$$
  • Best is the best (minimum or maximum, according to the optimization problem) solution obtained over different runs.
  • Worst is the worst (maximum or minimum, according to the optimization problem) solution obtained over different runs. These metrics can be computed as in the sketch below.
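A small NumPy illustration of these four metrics; the run values are placeholders, not results from the paper's tables.

```python
import numpy as np

# fitness values from N = 5 independent runs (illustrative numbers)
runs = np.array([1.2e-8, 3.4e-9, 7.7e-9, 2.1e-8, 5.0e-9])

avg = runs.mean()                     # Equation (19)
std = runs.std(ddof=1)                # Equation (20), sample standard deviation
best, worst = runs.min(), runs.max()  # for a minimization problem
```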

4.2. Testing HHO Versions on 16 Benchmark Functions

To evaluate the efficiency of the proposed metaheuristic SHHOIRC method, 16 well-known benchmark functions are subjected to a series of experiments. These functions are separated into three Unimodal functions and 13 Multimodal functions to evaluate the algorithm’s capacity to avoid local optima and locate the global optimum. The expressions of each test function are presented in Table 3, and specific parameters for some functions are shown in Table 4, Table 5 and Table 6.
Each algorithm is run independently five times with 500 iterations per run and a population size of 30 agents. Table 7 compares the proposed HHO version with six other HHO versions (genuine HHO, SHHO, and four CHHO versions proposed in [31]) in terms of the average solution, standard deviation values, best solutions obtained, worst solutions obtained, and average time consumed (seconds). The four CHHO versions are categorized into two types, PM1 and PM2, each using two different chaotic maps (Circle and Sinusoidal, which are presented in Table 1). This study aims to demonstrate the superiority of the proposed SHHOIRC method over other HHO variants and compare its performance on various benchmark functions.
The results presented in Table 7 demonstrate that the proposed HHO version (SHHOIRC) outperformed the other six algorithms in terms of average solutions obtained for 13 out of the 16 benchmark functions. HHO outperformed the other algorithms for four functions, while CHHO PM1C2 and SHHO each performed best for three functions. All seven HHO versions obtained the same best average solution for F5 and F7. In terms of the best average solutions with the lowest standard deviation, SHHOIRC outperformed the other algorithms for nine benchmark functions, while CHHO PM1C2, SHHO, and pure HHO each performed best for three functions. All seven HHO versions obtained the same best average solution with the same lowest standard deviation values for F5 and F7.
HHO consumed the least time on all 16 benchmark functions. Regarding the best solutions obtained, SHHOIRC outperformed the other algorithms for 6 benchmark functions, while genuine HHO performed best for 5 and SHHO for 10 functions. All seven HHO versions obtained the same best solution for F5 and F7. Finally, in terms of the lowest worst solutions obtained, SHHOIRC outperformed the other algorithms for 10 benchmark functions, CHHO PM2C9 for 5, and genuine HHO, CHHO PM1C9, and SHHO for 3 functions. All seven HHO versions obtained the same lowest worst solution for F5 and F7. According to the results presented in Table 7, the proposed hybrid algorithm, SHHOIRC, demonstrated superior performance compared with the other six algorithms tested. The study concludes that SHHOIRC is a competitive algorithm. Additionally, when compared to the genuine HHO algorithm, SHHOIRC outperformed it by obtaining better average solutions on 14 benchmark functions. When compared to the two CHHO PM1 versions, SHHOIRC outperformed both of them for 13 benchmark functions in terms of average solutions achieved. Similarly, when compared with the two CHHO PM2 versions, SHHOIRC outperformed both of them for 15 benchmark functions. Compared with the SHHO algorithm, SHHOIRC obtained better average solutions on 15 benchmark functions.
Figure 2 displays the convergence curve of the average solutions obtained by the 7 HHO versions for 16 benchmark functions (F1–F16) in 500 iterations. Upon comparing the 7 HHO versions, the proposed algorithm outperformed the other HHO versions in terms of obtaining the best average solution for 13 benchmark functions out of the 16 tested. Therefore, the study concludes that the proposed SHHOIRC algorithm outperforms both the genuine HHO and other HHO versions.
Based on the number of best average solutions obtained in the benchmark-function experiments, it can be concluded that SHHOIRC outperformed the other algorithms on 81.25 percent of the functions, compared to 18.75 percent for the prior algorithms.

Statistical Analysis

Statistical tests should be conducted to evaluate the performance of metaheuristic algorithms [58]. Statistical analysis is required to determine the best algorithm among the proposed algorithm and the other six HHO versions. Evaluating an algorithm's performance based only on the number of test functions on which it performs better is not an appropriate method for selecting the best algorithm. Instead, the success order of the proposed algorithms should be considered across all test functions. The well-known non-parametric Friedman test is used to evaluate the efficiency of the proposed algorithm. The rank values of the proposed algorithms derived from the Friedman test are shown in Table 8. The rank of algorithm $j$ on problem $i$ is denoted $r_i^j$, with $1 \le i \le n$ and $1 \le j \le k$, where $n$ is the number of benchmark problems and $k$ is the number of tested algorithms. The average values are used to evaluate the performance of the algorithms more precisely. The average rank of each algorithm is denoted $R_j$, the sum of an algorithm's ranks across all benchmark functions divided by the number of benchmark functions, calculated using Equation (21):
$$R_j = \frac{1}{n} \sum_{i=1}^{n} r_i^j \tag{21}$$
To determine which HHO variant is more successful, Table 8 provides the success scores of the HHO variants. During success scoring, for example, the HHO version that solves the F1 test function best in terms of the mean-value metric receives one point, while the version that solves it worst receives seven points. With 33 total points, the smallest total, the proposed SHHOIRC algorithm is the most successful. To apply the Friedman test, the average rank of each algorithm is calculated; the average ranks are shown in the last row of Table 8. The best average rank, 4.2539, is obtained by SHHOIRC.
The Friedman test is used to determine whether there are significant differences between the performances of multiple algorithms on multiple tasks. In this case, the test compares the performance of $k = 7$ algorithms on $n = 16$ benchmark functions. We can then use the Friedman test formula (Equation (22)) to calculate the test statistic:
$$F = \frac{12 n}{k (k+1)} \left[ \sum_{j} R_j^2 - \frac{k (k+1)^2}{4} \right] \tag{22}$$
Friedman’s measure is obtained with a value of 21.2076 .
To determine whether the calculated test statistic is significant, we need to compare it to the critical value from the χ 2 -distribution with d f degree of freedom. The degrees of freedom d f for the Friedman test are k 1 , where k is the number of algorithms being compared. In this case, k = 7 , so d f = 6 . Using a significance level of 0.05, the critical value for d f = 6 is 12.59 . Since the calculated test statistic ( F = 21.2076 ) is greater than the critical value ( 12.59 ), we can reject the null hypothesis that all algorithms perform equally well. We can conclude that there is at least one algorithm that performs significantly better or worse than the others.
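The computation behind this decision can be sketched as follows; the rank matrix here is a random placeholder, not the ranks of Table 8.

```python
import numpy as np
from scipy.stats import chi2

def friedman_statistic(ranks):
    # ranks: n x k matrix, ranks[i, j] = rank of algorithm j on problem i
    n, k = ranks.shape
    R = ranks.mean(axis=0)                                                # Equation (21)
    F = 12 * n / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)  # Equation (22)
    critical = chi2.ppf(0.95, df=k - 1)                                   # 12.59 for k = 7
    return F, F > critical

# illustrative: random rankings of k = 7 algorithms on n = 16 problems
rng = np.random.default_rng(0)
ranks = np.array([rng.permutation(7) + 1 for _ in range(16)])
F, significant = friedman_statistic(ranks)
```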
To determine the best algorithm, the average ranks obtained in Table 8 are examined. The algorithm with the lowest average rank is considered the best. In this case, the algorithm with the lowest average rank is SHHOIRC, with an average rank of 4.2539. Therefore, SHHOIRC is considered the best algorithm based on the Friedman test.
In conclusion, the Friedman test indicates that there are significant differences in the performances of the seven algorithms on the 16 benchmark functions. The best-performing algorithm on average is SHHOIRC, followed by SHHO, HHO, CHHO PM1C2, CHHO PM2C2, CHHO PM1C9, and CHHO PM2C9.

4.3. Wireless Sensor Network Coverage Optimization

The maximum coverage problem of a WSN can be modeled as follows. It is assumed that $m$ sensors are used to cover the monitoring region, with the sensor set $S = \{s_1, s_2, \dots, s_i, \dots, s_m\}$, where $s_i$ is the $i$th sensor node with coordinates $(x_i, y_i)$. The coordinates of a target point $t$ in the monitoring region are $(x_t, y_t)$.
In this paper, we used the Boolean (deterministic) sensing model, which is the most commonly used sensing model. In this model, if a point (target) $t$ in the monitoring area is located within the sensing range $r_s$ of sensor node $s_i$, then $t$ is assumed to be covered by $s_i$. Therefore, the coverage function $C(s_i, t)$ of sensor $s_i$ and point $t$ can be modeled as in Equation (23):
$$C(s_i, t) = \begin{cases} 1 & \text{if } d(s_i, t) \le r_s \\ 0 & \text{otherwise} \end{cases} \tag{23}$$
where $d(s_i, t)$ is the Euclidean distance between sensor node $s_i$ and target point $t$, and $r_s$ is the sensing range.
The coverage of a single target point $t$ in the monitoring region by all the sensor nodes of the WSN is given in Equation (24):
$$C(S, t) = \bigvee_{i=1}^{m} C(s_i, t) \tag{24}$$
To calculate the network's net coverage, we divide the monitored area into $g$ equal square grid cells. We determine the number of grid points per row, $g_h$, and per column, $g_w$; therefore, the number of grid points is $g = g_w \times g_h$. The grid points are set as $G = \{t_1, t_2, \dots, t_j, \dots, t_g\}$, and the coordinates of target point $t_j$ are $(x_{t_j}, y_{t_j})$. Consequently, the WSN coverage rate is defined as the ratio of the number of covered grid points to the number of all grid points, modeled in Equation (25):
$$C(S, G) = \frac{\sum_{x_t=1}^{g_w} \sum_{y_t=1}^{g_h} C(S, t)}{g_w \times g_h} \tag{25}$$
For the sensor node set $S = \{s_1, s_2, \dots, s_i, \dots, s_m\}$ in the WSN, with $(x_i, y_i)$ representing the coordinates of $s_i$, each hawk in HHO represents a solution (a sensor distribution) for the WSN maximum coverage problem. The encoding of each hawk is:
$$X = \left[ x_1, y_1, x_2, y_2, \dots, x_i, y_i, \dots, x_m, y_m \right] \tag{26}$$
The objective/fitness function used to evaluate each sensor distribution is modeled as Equation (27):
$$f(X) = C = \frac{\sum_{x_t=1}^{g_w} \sum_{y_t=1}^{g_h} C(S, t)}{g_w \times g_h} \tag{27}$$
Therefore, the coverage optimization problem can be defined as follows: the optimization variable is $X$, and we search for the best solution $X$ that maximizes $f(X)$, i.e., that achieves the maximal coverage.
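A minimal sketch of this fitness function under the Boolean sensing model is given below; the flat encoding of Equation (26) is assumed, and the radius, region size, and grid density follow the experimental values stated in the next paragraph.

```python
import numpy as np

def coverage_rate(X, r_s=2.5, side=20.0, g=40):
    sensors = np.asarray(X).reshape(-1, 2)            # Equation (26): [x1, y1, ..., xm, ym]
    ticks = np.linspace(0.0, side, g)
    gx, gy = np.meshgrid(ticks, ticks)
    grid = np.column_stack([gx.ravel(), gy.ravel()])  # g * g target points
    # Euclidean distance of every grid point to every sensor
    d = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=2)
    covered = (d <= r_s).any(axis=1)                  # Equation (24): OR over sensors
    return covered.mean()                             # Equations (25)/(27)

# usage with 30 sensors placed uniformly at random in the region
rate = coverage_rate(np.random.uniform(0, 20, 60))
```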
Herein, we used a WSN that includes 30 sensor nodes with sensing radius $r_s = 2.5$ m. These sensors are distributed in a $20 \times 20$ m$^2$ monitoring region. The HHO population size is $PS = 30$. The number of grid points is set to $40^2$ for a more accurate coverage rate. In this paper, we tested SHHOIRC, genuine HHO, GWO, and WOA on two maximum coverage experiments. In the first experiment, the maximum number of iterations per run is 500, with a maximum of 10 runs. In the second, the maximum number of iterations per run is 1000, with a maximum of 10 runs. The results of the first and second experiments are shown in Table 9 and Table 10, respectively.
The two experiments compare the proposed HHO version (SHHOIRC) with pure HHO and two well-known swarm intelligence algorithms (GWO and WOA) in terms of average solution, standard deviation values, best solutions obtained, worst solutions obtained, and average time consumed. The four algorithms are used to solve the stated maximum coverage problem for wireless sensor networks.

4.3.1. Experiment 1 (500 Iterations)

Table 9 shows the results obtained by the four algorithms for the stated maximum coverage problem (500 iterations), comparing the solutions obtained in terms of average, best, and worst solutions, standard deviation values, and average time.
In terms of average solutions obtained by the proposed algorithm SHHOIRC, HHO, GWO, and WOA, it is shown that the proposed algorithm SHHOIRC outperformed the other three algorithms. Based on this metric, SHHOIRC achieved the highest average score of 92.45%, followed by HHO (90.06%). The worst-performing algorithms in terms of average score were GWO (87.11%) and WOA (87.74%).
In terms of the best solution, the proposed algorithm SHHOIRC achieved the highest best solution with a value of 95.44%, followed by GWO (94.5%), and the lowest best solution was obtained by HHO with a value of 91.44%. The highest worst solution was obtained by SHHOIRC with a value of 90.625%, and the lowest worst solution was obtained by GWO with a value of 78.94%.
The standard deviation metric shows the variability in the fitness values achieved by each algorithm across multiple runs. Based on this metric, GWO had the highest standard deviation of 6.03, followed by WOA (2.64). The algorithms with the lowest standard deviations were HHO (1.13) and SHHOIRC (1.72). In terms of average time consumed, SHHOIRC had the longest runtime of 1726.56 s, while the algorithms with the shortest runtimes were WOA (403.24 s) and GWO (425.06 s).
Figure 3a shows the convergence curve of average solutions obtained so far by the four algorithms, and Figure 3b shows the convergence curve of the best solutions obtained so far by the four algorithms. As shown in Figure 3, the highest solutions are obtained by the proposed algorithm SHHOIRC.
Figure 4a shows the initial distribution of sensor nodes for the best solution obtained by SHHOIRC, and Figure 4b shows the final distribution of sensor nodes for the best solution obtained by SHHOIRC with a coverage rate of 95.4375%.

4.3.2. Experiment 2 (1000 Iterations)

Table 10 shows the results obtained by SHHOIRC, HHO, GWO, and WOA for the stated maximum coverage problem (1000 iterations). Table 10 shows comparisons between the solutions obtained in terms of average, best, and worst solutions, standard deviation values, and average consumed time.
In terms of average solutions obtained by the proposed algorithm SHHOIRC and the three algorithms, it can be seen that the proposed algorithm SHHOIRC outperformed the other algorithms. In terms of the average solution, SHHOIRC obtained the best value of 93.81%. The worst average solution was obtained by WOA with a value of 89.2%. In terms of the best solution, the proposed algorithm SHHOIRC achieved the highest best solution with a value of 97.13%, and the lowest best solution was obtained by WOA with a value of 92%. On the other hand, the highest worst solution was obtained by SHHOIRC with a value of 91.94%, and the lowest worst solution was obtained by GWO with a value of 79.13%.
In terms of average time consumed, the highest time consumption was obtained by SHHOIRC with a value of 4183.29 s, and the lowest time consumption was obtained by GWO with a value of 781.64 s. Based on the STD metric, the lowest standard deviation value of 0.92 was obtained by the HHO algorithm, and the highest standard deviation value of 4.63 was obtained by GWO.
Figure 5a shows the convergence curve of average solutions obtained so far by the four algorithms, and Figure 5b shows the convergence curve of best solutions obtained so far by the four algorithms. As shown in Figure 5, the highest solutions are obtained by the proposed algorithm SHHOIRC. Figure 6a shows the initial distribution of sensor nodes for the best solution obtained by SHHOIRC, and Figure 6b shows the final distribution of sensor nodes for the best solution obtained by SHHOIRC with a coverage rate of 97.125%.

4.3.3. Statistical Analysis

For Experiment 1, we performed a one-way analysis of variance (ANOVA) to compare the mean performance of the algorithms. The results of the one-way ANOVA for Experiment 1 are shown in Table 11. The one-way ANOVA test conducted on the data from Experiment 1 yielded an F-statistic of 4.98. To determine whether the calculated test statistic is significant, we need to compare it with the critical value from the $F$-distribution with $df_1$ and $df_2$ degrees of freedom. For the one-way ANOVA test, $df_1 = k - 1$ and $df_2 = k(n - 1)$, where $k$ is the number of algorithms being compared and $n$ is the number of runs. In this case, $k = 4$ and $n = 10$, so $df_1 = 3$ and $df_2 = 36$. Using a significance level of 0.05, the critical value for the stated $df_1$ and $df_2$ is 2.87. Since the calculated test statistic ($F = 4.98$) is greater than the critical value (2.87), we can reject the null hypothesis that all algorithms perform equally well and conclude that at least one algorithm performs significantly better or worse than the others. The corresponding p-value is 0.0054, which is less than 0.05, suggesting that the observed differences in mean performance among the algorithms in Experiment 1 are statistically significant.
Moving on to Experiment 2, the results of the one-way ANOVA are shown in Table 12. The test conducted on the data from Experiment 2 also yielded an F-statistic of 4.98, which is notably greater than the critical value (2.87) for the stated $df_1$ and $df_2$ at a significance level of 0.05. Therefore, we can again reject the null hypothesis that all algorithms perform equally well and conclude that at least one algorithm performs significantly better or worse than the others. The corresponding p-value is again 0.0054; this low p-value (less than 0.05) indicates that the observed differences in mean performance among the algorithms in Experiment 2 are statistically significant.
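With SciPy, the test itself is a one-liner. The samples below are synthetic draws matching the reported means and standard deviations for Experiment 1, not the actual run data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# synthetic coverage rates for 10 runs per algorithm (illustrative only)
shhoirc = rng.normal(92.45, 1.72, 10)
hho     = rng.normal(90.06, 1.13, 10)
gwo     = rng.normal(87.11, 6.03, 10)
woa     = rng.normal(87.74, 2.64, 10)

F, p = f_oneway(shhoirc, hho, gwo, woa)  # reject H0 when p < 0.05
```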
From the boxplot of Experiments 1 and 2 shown in Figure 7a,b, it can be observed that the SHHOIRC boxplot is located higher than the other algorithms. The ‘+’ marks represent the outliers.
Based on the statistical analysis performed on the data from Experiments 1 and 2, we can conclude that SHHOIRC has a significantly higher mean than the other algorithms tested. Specifically, SHHOIRC showed a significantly higher mean performance than pure HHO, GWO, and WOA. The significant differences in mean performances, supported by the calculated p-values, suggest that SHHOIRC is the best algorithm among the four algorithms in these experiments.

5. Conclusions

The proposed novel hybrid algorithm, SHHOIRC, incorporates self-adaptation and combines the HHO algorithm with three chaotic optimization techniques. The proposed algorithm was evaluated on 16 well-known benchmark functions, and its performance was compared with that of authentic HHO and five other HHO variants. SHHOIRC significantly improves the performance of HHO: it obtained the best average solutions for 13 benchmark functions, with the lowest standard deviation on 9 of them. In addition, statistical analyses employing the Friedman test and success scores confirmed that SHHOIRC is the most effective algorithm among the seven HHO versions evaluated, outperforming the four CHHO variants, SHHO, and HHO. These results indicate that the proposed hybridization approach, which combines HHO with three chaotic optimization methods and incorporates self-adaptation, could also be used to enhance the performance of other metaheuristic algorithms. Furthermore, the proposed algorithm was tested on a real-life problem, the maximum coverage problem of WSNs, and compared with genuine HHO, GWO, and WOA. The results indicate that SHHOIRC performs better than the other tested algorithms in solving the maximum coverage problem: in both Experiments 1 and 2, it achieved the highest average solution and best solution values while maintaining a relatively low standard deviation. The statistical analysis of the results from both experiments shows that SHHOIRC has a significantly higher mean performance than the other algorithms tested. This indicates that SHHOIRC is a promising algorithm for solving maximum coverage problems in WSNs and could be further developed and optimized for other related problems.
In the future, we intend to apply SHHOIRC to more real-world problems, and we anticipate achieving new breakthroughs in their resolution. In addition, we intend to introduce novel integration techniques to assist metaheuristic algorithms in escaping each local minimum.

Author Contributions

Conceptualization, S.A.; methodology, S.A.; software, E.B.; validation, E.B.; formal analysis, M.A.S.; investigation, M.A.S.; resources, A.D.; data curation, A.D.; writing—original draft preparation, A.D.; writing—review and editing, E.B.; visualization, A.D.; supervision, E.B.; project administration, M.A.S.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2023-629.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2023-629. We also thank the referees for their suggestions to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science IEEE, MHS’95, Nagoya, Japan, 4–6 October 1995. [Google Scholar]
  2. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: New York, NY, USA, 1992. [Google Scholar]
  3. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  4. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  5. Yang, X.S. Flower Pollination Algorithm for Global Optimization. In Unconventional Computation and Natural Computation, Proceedings of the International Conference UCNC 2012, Orléans, France, 3–7 September 2012; Durand-Lose, J., Jonoska, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  6. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  7. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  8. Karaboga, D.; Basturk, B.; Melin, P.; Castillo, O.; Aguilar, L.T.; Kacprzyk, J.; Pedrycz, W. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In Foundations of Fuzzy Logic and Soft Computing, Proceedings of the 12th International Fuzzy Systems Association World Congress, IFSA 2007, Cancun, Mexico, 18–21 June 2007; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  9. Yang, X.S.; González, J.R.; Pelta, D.A.; Cruz, C.; Terrazas, G.; Krasnogor, N. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  10. Naruei, I.; Keynia, F. Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems. Eng. Comput. 2022, 38, 3025–3056. [Google Scholar] [CrossRef]
  11. Abualigah, L.; Elaziz, M.A.; Hussien, A.G.; Alsalibi, B.; Jalali, S.M.J.; Gandomi, A.H. Lightning search algorithm: A comprehensive survey. Appl. Intell. 2021, 51, 2353–2376. [Google Scholar] [CrossRef]
  12. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999. [Google Scholar]
  13. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  14. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  15. Tripathy, B.K.; Maddikunta, P.K.R.; Pham, Q.V.; Gadekallu, T.R.; Dev, K.; Pandya, S.; ElHalawany, B.M. Harris Hawk Optimization: A Survey on Variants and Applications. Comput. Intell. Neurosci. 2022, 2022, 2218594. [Google Scholar] [CrossRef]
  16. Hussien, A.G.; Abualigah, L.; Abu Zitar, R.; Hashim, F.A.; Amin, M.; Saber, A.; Almotairi, K.H.; Gandomi, A.H. Recent Advances in Harris Hawks Optimization: A Comparative Study and Applications. Electronics 2022, 11, 1919. [Google Scholar] [CrossRef]
  17. Too, J.; Abdullah, A.; Mohd Saad, N. A new quadratic binary harris hawk optimization for feature selection. Electronics 2019, 8, 1130. [Google Scholar] [CrossRef]
  18. Hans, R.; Kaur, H.; Kaur, N. Opposition-based Harris Hawks optimization algorithm for feature selection in breast mass. J. Interdiscip. Math. 2020, 23, 97–106. [Google Scholar] [CrossRef]
  19. Menesy, A.S.; Sultan, H.M.; Selim, A.; Ashmawy, M.G.; Kamel, S. Developing and applying chaotic harris hawks optimization Technique for Extracting Parameters of Several Proton Exchange Membrane Fuel Cell Stacks. IEEE Access 2020, 8, 1146–1159. [Google Scholar] [CrossRef]
  20. Basha, J.; Bacanin, N.; Vukobrat, N.; Zivkovic, M.; Venkatachalam, K.; Hubálovský, S.; Trojovský, P. Chaotic Harris Hawks Optimization with Quasi-Reflection-Based Learning: An Application to Enhance CNN Design. Sensors 2021, 21, 6654. [Google Scholar] [CrossRef] [PubMed]
  21. Hussien, A.G.; Amin, M. A self-adaptive Harris Hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 2022, 13, 1–28. [Google Scholar] [CrossRef]
  22. Mirjalili, S.R. Multi-Objective Harris hawks optimization: A novel algorithm for multi-objective engineering design problems. Eng. Comput. 2020, 36, 879–891. [Google Scholar]
  23. Shi, B.; Heidari, A.A.; Chen, C.; Wang, M.; Huang, C.; Chen, H.; Zhu, J. Predicting Di-2-Ethylhexyl Phthalate Toxicity: Hybrid Integrated Harris Hawks Optimization with Support Vector Machines. IEEE Access 2020, 8, 161188–161202. [Google Scholar] [CrossRef]
  24. Zou, T.; Wang, C. Adaptive Relative Reflection Harris Hawks Optimization for Global Optimization. Mathematics 2022, 10, 1145. [Google Scholar] [CrossRef]
  25. Birogul, S. Hybrid Harris Hawk Optimization Based on Differential Evolution (HHODE) Algorithm for Optimal Power Flow Problem. IEEE Access 2019, 7, 184468–184488. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Zhou, X.; Shih, P.-C. Modified Harris Hawks Optimization Algorithm for Global Optimization Problems. Arab. J. Sci. Eng. 2020, 45, 10949–10974. [Google Scholar] [CrossRef]
  27. Gharehchopogh, F.S. An Improved Harris Hawks Optimization Algorithm with Multi-strategy for Community Detection in Social Network, J. Bionic Eng. 2023, 20, 1175–1197. [Google Scholar] [CrossRef]
  28. Sihwail, R.; Omar, K.; Ariffin, K.A.Z.; Tubishat, M. Improved Harris Hawks Optimization Using Elite Opposition-Based Learning and Novel Search Mechanism for Feature Selection. IEEE Access 2020, 8, 121127–121145. [Google Scholar] [CrossRef]
  29. Kaur, G.; Arora, S. Chaotic whale optimization algorithm. J. Comput. Des. Eng. 2018, 5, 275–284. [Google Scholar] [CrossRef]
  30. Yang, D.; Li, G.; Cheng, G. On the efficiency of chaos optimization algorithms for global optimization. Chaos Solitons Fractals 2007, 34, 1366–1375. [Google Scholar] [CrossRef]
  31. Harun, G.; Haydar, L. Chaotic Harris hawks optimization algorithm. J. Comput. Des. Eng. 2022, 9, 216–245. [Google Scholar]
  32. Xia, J. Coverage Optimization Strategy of Wireless Sensor Network Based on Swarm Intelligence Algorithm. In Proceedings of the 2016 International Conference on Smart City and Systems Engineering (ICSCSE), Hunan, China, 25–26 November 2016. [Google Scholar]
  33. Yang, S.H. Wireless Sensor Networks; Springer: London, UK, 2014. [Google Scholar]
  34. Elhabyan, R.; Shi, W.; St-Hilaire, M. Coverage Protocols for Wireless Sensor Networks: Review and Future Directions. J. Commun. Netw. 2019, 21, 45–60. [Google Scholar] [CrossRef]
  35. Hu, J.; Song Jingyan, J.; Zhang, M.; Kang, X. Topology Optimization for Urban Traffic Sensor Network. Tsinghua Sci. Technol. 2008, 13, 2. [Google Scholar] [CrossRef]
  36. Aziz, M.; Tayarani, N.M.-H.; Meybodi, M.R. A two-objective memetic approach for the node localization problem in wireless sensor networks. Genet. Program. Evolvable Mach. 2016, 17, 4. [Google Scholar] [CrossRef]
  37. Ngatchou, P.N.; Fox, W.L.J.; El-Sharkawi, M.A. Distributed sensor placement with sequential particle swarm optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, SIS 2005, Pasadena, CA, USA, 8–10 June 2005. [Google Scholar]
  38. Wang, X.; Wang, S.; Ma, J.-J. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment. Sensors 2007, 7, 354–370. [Google Scholar] [CrossRef]
  39. Wang, S.; You, H.; Yue, Y.; Cao, L. A novel topology optimization of coverage-oriented strategy for wireless sensor networks. Int. J. Distrib. Sens. Netw. 2021, 17, 15501477219. [Google Scholar] [CrossRef]
  40. Alshaqqawi, B.; Haque, S.A.; Alreshoodi, M.; Alsukayti, I. Enhanced Particle Swarm Optimization for Effective Relay Nodes Deployment in Wireless Sensor Networks. Int. J. Comput. Netw. Commun. 2021, 13, 53–73. [Google Scholar] [CrossRef]
  41. Liang, D.; Shen, H.; Chen, L. Maximum Target Coverage Problem in Mobile Wireless Sensor Networks. Sensors 2021, 21, 184. [Google Scholar] [CrossRef] [PubMed]
  42. Wang, G. Research on Wireless Sensor Network Technology Based on Particle Swarm Algorithm. J. Phys. Conf. Ser. 2021, 1865, 4. [Google Scholar] [CrossRef]
  43. Mehta, S.; Malik, A. A Swarm Intelligence Based Coverage Hole Healing Approach for Wireless Sensor Networks. EAI Endorsed Trans. Scalable Inf. Syst. 2020, 7, 26. [Google Scholar] [CrossRef]
  44. Tarnaris, K.; Preka, I.; Kandris, D.; Alexandridis, A. Coverage and k-Coverage Optimization in Wireless Sensor Networks Using Computational Intelligence Methods: A Comparative Study. Electronics 2020, 9, 675. [Google Scholar] [CrossRef]
  45. Bonnah, E.; Ju, S.; Cai, W. Coverage Maximization in Wireless Sensor Networks Using Minimal Exposure Path and Particle Swarm Optimization. Sens. Imaging 2020, 21, 1–16. [Google Scholar] [CrossRef]
  46. Li, Q.; Liu, N. Coverage optimization algorithm based on control nodes position in wireless sensor networks. Int. J. Commun. Syst. 2020, 35, e4599. [Google Scholar] [CrossRef]
  47. Zhang, Y. Coverage Optimization and Simulation of Wireless Sensor Networks Based on Particle Swarm Optimization. Int. J. Wirel. Inf. Netw. 2020, 27, 307–316. [Google Scholar] [CrossRef]
  48. Gökalp, O. Optimizing Connected Target Coverage in Wireless Sensor Networks Using Self-Adaptive Differential Evolution. Balk. J. Electr. Comput. Eng. 2020, 8, 325–330. [Google Scholar] [CrossRef]
  49. Zhang, Y.; Zhang, Z. Wireless Sensor Network Node Deployment Based On Regular Tetrahedron. J. Phys. Conf. Ser. 2020, 1453, 012113. [Google Scholar] [CrossRef]
  50. Wang, J.; Ju, C.; Kim, H.-J.; Sherratt, R.S.; Lee, S. A Mobile Assisted Coverage Hole Patching Scheme based on Particle Swarm Optimization for WSNs. Clust. Comput. 2019, 22, 1. [Google Scholar] [CrossRef]
  51. Mnasri, S.; Nasri, N.; Van den Bossche, A.; Val, T. A new multi-agent particle swarm algorithm based on birds accents for the 3D indoor deployment problem. ISA Trans. 2019, 91, 262–280. [Google Scholar] [CrossRef] [PubMed]
  52. Syed, M.A.; Syed, R. Weighted Salp Swarm Algorithm and its applications towards optimal sensor deployment. J. King Saud Univ. Comput. Inf. Sci. 2019, 34, 4. [Google Scholar] [CrossRef]
  53. Kohli, M.; Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 2017, 5, 458–472. [Google Scholar] [CrossRef]
  54. Gao, Z.M.; Zhao, J.; Zhang, Y.J. Review of chaotic mapping enabled nature-inspired algorithms. Math. Biosci. Eng. MBE 2022, 19, 8215–8258. [Google Scholar] [PubMed]
  55. Xiang, T.; Liao, X.; Wong, K.W. An improved particle swarm optimization algorithm combined with piecewise linear chaotic map. Appl. Math. Comput. 2007, 190, 1637–1645. [Google Scholar] [CrossRef]
  56. Gandomi, A.H.; Yang, X.-S. Chaotic bat algorithm. J. Comput. Sci. 2014, 5, 224–232. [Google Scholar] [CrossRef]
  57. Köker, R.; Gao, Z.-M.; Zhao, J. An Improved Grey Wolf Optimization Algorithm with Variable Weights. Comput. Intell. Neurosci. 2019, 2019, 2981282. [Google Scholar]
  58. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. The flowchart of SHHOIRC.
Figure 2. Average convergence curves obtained by the 7 HHO versions for the 16 benchmark functions (500 iterations).
Figure 3. Maximum coverage convergence curve of average fitness obtained by the 4 algorithms (500 iterations): (a) for the average solutions and (b) for the best solutions.
Figure 4. (a,b) Initial and final distribution of the best solution obtained by SHHOIRC (500 iterations).
Figure 5. Maximum coverage convergence curve of average fitness obtained by the 4 algorithms (1000 iterations): (a) for the average solutions and (b) for the best solutions.
Figure 6. (a,b) Initial and final distribution of the best solution obtained by SHHOIRC (1000 iterations).
Figure 7. (a,b) Boxplots of Experiments 1 and 2.
Table 1. Chaotic maps' details.

No. | Map Name | Map Equation | Parameters
1 | Circle map | $z_{k+1} = \left(z_k + \varphi - \frac{K}{2\pi}\sin(2\pi z_k)\right) \bmod 1$ | $\varphi = 2.5$, $K = 5$
2 | Logistic map | $z_{k+1} = \mu z_k (1 - z_k)$ | $\mu = 4$
3 | Gaussian map | $z_{k+1} = \begin{cases} 0, & z_k = 0 \\ \frac{1}{z_k} \bmod 1, & z_k \neq 0 \end{cases}$ | none
4 | Chebyshev map | $z_{k+1} = \cos\left(a \cos^{-1} z_k\right)$ | $a = 5$
5 | Sinusoidal map | $z_{k+1} = \sin(\pi z_k)$ | none
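For illustration, the five maps in Table 1 translate directly into code. The following Python sketch is an illustrative transcription, not the authors' implementation; the function names and the example seed are our own choices. Each function advances the chaotic state z by one step per call, using the parameter values from the table:

```python
import math

def circle_map(z, phi=2.5, K=5.0):
    # z_{k+1} = (z_k + phi - (K / (2*pi)) * sin(2*pi*z_k)) mod 1
    return (z + phi - (K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * z)) % 1.0

def logistic_map(z, mu=4.0):
    # z_{k+1} = mu * z_k * (1 - z_k)
    return mu * z * (1.0 - z)

def gaussian_map(z):
    # z_{k+1} = 0 if z_k = 0, otherwise (1 / z_k) mod 1
    return 0.0 if z == 0.0 else (1.0 / z) % 1.0

def chebyshev_map(z, a=5.0):
    # z_{k+1} = cos(a * arccos(z_k)); this map's state lives in [-1, 1]
    return math.cos(a * math.acos(max(-1.0, min(1.0, z))))

def sinusoidal_map(z):
    # z_{k+1} = sin(pi * z_k)
    return math.sin(math.pi * z)

# Example: a short chaotic sequence from the logistic map, seeded at 0.7.
z, seq = 0.7, []
for _ in range(5):
    z = logistic_map(z)
    seq.append(round(z, 4))
print(seq)  # deterministic, yet highly sensitive to the seed value
```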
Table 2. Experimental parameter setting.

No. | Parameter Name | Benchmark Functions Experiments | Maximum Coverage Experiments
1 | Population size | 30 | 30
2 | Number of dimensions | According to function (Table 3) | 30
3 | Max iterations and runs | 500 iterations; 5 runs | Exp1: 500 and Exp2: 1000 iterations; 10 runs each
Table 3. List of benchmark functions (UM = unimodal, MM = multimodal).

No. | Name | Function Expression | Dimension (d) | Initial Range [LB, UB] | Type | Optimal Solution $f_{min}$
F1 | Schwefel 2.22 | $F(x) = \sum_{i=1}^{d} |x_i| + \prod_{i=1}^{d} |x_i|$ | 30 | [−10, 10] | UM | 0
F2 | Schwefel 2.21 | $F(x) = \max_{1 \le i \le d} |x_i|$ | 30 | [−100, 100] | UM | 0
F3 | Generalized Rosenbrock | $F(x) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$ | 30 | [−30, 30] | UM | 0
F4 | Quartic with Noise | $F(x) = \sum_{i=1}^{d} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | MM | 0
F5 | Generalized Rastrigin | $F(x) = \sum_{i=1}^{d}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | [−5.12, 5.12] | MM | 0
F6 | Ackley 1 | $F(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | MM | 0
F7 | Generalized Griewank | $F(x) = \frac{1}{4000}\sum_{i=1}^{d} x_i^2 - \prod_{i=1}^{d}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | MM | 0
F8 | Generalized Penalized 1 | $F(x) = \frac{\pi}{d}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{d-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_d - 1)^2\right\} + \sum_{i=1}^{d} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50] | MM | 0
F9 | Generalized Penalized 2 | $F(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{d-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_d - 1)^2\left[1 + \sin^2(2\pi x_d)\right]\right\} + \sum_{i=1}^{d} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | MM | 0
F10 | Six-Hump Camel-Back | $F(x) = \left(4 - 2.1 x_1^2 + \frac{x_1^4}{3}\right) x_1^2 + x_1 x_2 + \left(-4 + 4 x_2^2\right) x_2^2$ | 2 | [−5, 5] | MM | −1.0316285
F11 | Goldstein–Price | $F(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$ | 2 | [−5, 5] | MM | 3
F12 | Hartman 3 | $F(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 3 | [0, 1] | MM | −3.86278
F13 | Hartman 6 | $F(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 6 | [0, 1] | MM | −3.32237
F14 | Shekel 5 | $F(x) = -\sum_{i=1}^{5}\left[\left(x - a_i\right)\left(x - a_i\right)^T + c_i\right]^{-1}$ | 4 | [0, 10] | MM | −10.1532
F15 | Shekel 7 | $F(x) = -\sum_{i=1}^{7}\left[\left(x - a_i\right)\left(x - a_i\right)^T + c_i\right]^{-1}$ | 4 | [0, 10] | MM | −10.4029
F16 | Shekel 10 | $F(x) = -\sum_{i=1}^{10}\left[\left(x - a_i\right)\left(x - a_i\right)^T + c_i\right]^{-1}$ | 4 | [0, 10] | MM | −10.5364
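To show how the expressions in Table 3 are evaluated in practice, a short NumPy sketch of three representative benchmarks (F1, F5, and F6) follows. It is our own illustrative transcription of the formulas, not code from the study:

```python
import numpy as np

def schwefel_2_22(x):
    # F1: sum of |x_i| plus product of |x_i|
    ax = np.abs(x)
    return float(ax.sum() + ax.prod())

def rastrigin(x):
    # F5: sum of x_i^2 - 10*cos(2*pi*x_i) + 10
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    # F6: two exponential terms plus the constants 20 and e
    d = x.size
    t1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
    t2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
    return float(t1 + t2 + 20.0 + np.e)

# Each of these functions attains its optimum f_min = 0 at the origin.
x0 = np.zeros(30)
print(schwefel_2_22(x0), rastrigin(x0), ackley(x0))  # all ~ 0
```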
Table 4. Parameters of F13 (Hartman 6-dimensional function).

i | $a_{ij}$, j = 1, …, 6 | $c_i$ | $p_{ij}$, j = 1, …, 6
1 | 10, 3, 17, 3.5, 1.7, 8 | 1 | 0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886
2 | 0.05, 10, 17, 0.1, 8, 14 | 1.2 | 0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991
3 | 3, 3.5, 1.7, 10, 17, 8 | 3 | 0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650
4 | 17, 8, 0.05, 10, 0.1, 14 | 3.2 | 0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381
Table 5. Parameters of Shekel functions: F14 (Shekel-5), F15 (Shekel-7), and F16 (Shekel-10).

i | $a_{ij}$, j = 1, …, 4 | $c_i$
1 | 4, 4, 4, 4 | 0.1
2 | 1, 1, 1, 1 | 0.2
3 | 8, 8, 8, 8 | 0.2
4 | 6, 6, 6, 6 | 0.4
5 | 3, 7, 3, 7 | 0.4
6 | 2, 9, 2, 9 | 0.6
7 | 5, 5, 3, 3 | 0.3
8 | 8, 1, 8, 1 | 0.7
9 | 6, 2, 6, 2 | 0.5
10 | 7, 3.6, 7, 3.6 | 0.5
Table 6. Parameters of F12 (Hartman 3-dimensional function).

i | $a_{ij}$, j = 1, …, 3 | $c_i$ | $p_{ij}$, j = 1, …, 3
1 | 3.0, 10, 30 | 1.0 | 0.3689, 0.1170, 0.2673
2 | 0.1, 10, 35 | 1.2 | 0.4699, 0.4387, 0.7470
3 | 3.0, 10, 30 | 3.0 | 0.1091, 0.8732, 0.5547
4 | 0.1, 10, 35 | 3.2 | 0.0381, 0.5743, 0.8828
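Tables 4–6 plug directly into the F12–F16 definitions of Table 3. As a sketch (our own illustration; the array names and evaluation points are not from the paper), F12 (Hartman 3) and F14 (Shekel 5) can be evaluated from the tabulated parameters as follows, where the Hartman 3 test point is the minimizer commonly reported in the literature:

```python
import numpy as np

# Table 6: a_ij, c_i, and p_ij for Hartman 3 (F12).
A3 = np.array([[3.0, 10, 30], [0.1, 10, 35], [3.0, 10, 30], [0.1, 10, 35]])
C3 = np.array([1.0, 1.2, 3.0, 3.2])
P3 = np.array([[0.3689, 0.1170, 0.2673],
               [0.4699, 0.4387, 0.7470],
               [0.1091, 0.8732, 0.5547],
               [0.0381, 0.5743, 0.8828]])

def hartman3(x):
    # F(x) = -sum_i c_i * exp(-sum_j a_ij * (x_j - p_ij)^2)
    return float(-np.sum(C3 * np.exp(-np.sum(A3 * (x - P3) ** 2, axis=1))))

# Table 5, first five rows: a_ij and c_i for Shekel 5 (F14).
A5 = np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8],
               [6, 6, 6, 6], [3, 7, 3, 7]], dtype=float)
C5 = np.array([0.1, 0.2, 0.2, 0.4, 0.4])

def shekel5(x):
    # F(x) = -sum_i [ (x - a_i)·(x - a_i)^T + c_i ]^(-1)
    return float(-np.sum(1.0 / (np.sum((x - A5) ** 2, axis=1) + C5)))

print(hartman3(np.array([0.114614, 0.555649, 0.852547])))  # ~ -3.86278
print(shekel5(np.array([4.0, 4.0, 4.0, 4.0])))             # ~ -10.15
```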
Table 7. HHO versions' results (500 iterations, 30 agents).

Function | Metric | HHO | SHHOIRC | CHHO PM1C2 | CHHO PM1C9 | CHHO PM2C2 | CHHO PM2C9 | SHHO
F1 | AVG | 9.86 × 10^−52 | 1.04 × 10^−59 | 1.18 × 10^−49 | 1.19 × 10^−44 | 2.69 × 10^−52 | 9.04 × 10^−45 | 1.03 × 10^−58
F1 | STD | 1.53 × 10^−51 | 2.18 × 10^−59 | 2.64 × 10^−49 | 2.66 × 10^−44 | 2.77 × 10^−52 | 2.00 × 10^−44 | 1.31 × 10^−58
F1 | Best | 8.06 × 10^−55 | 4.95 × 10^−59 | 5.90 × 10^−49 | 5.94 × 10^−44 | 6.64 × 10^−52 | 4.47 × 10^−44 | 2.82 × 10^−64
F1 | Worst | 3.66 × 10^−51 | 7.47 × 10^−63 | 4.58 × 10^−54 | 3.76 × 10^−52 | 1.33 × 10^−54 | 8.85 × 10^−50 | 3.06 × 10^−58
F1 | AVG Time | 1.19 × 10^−1 | 2.30 × 10^0 | 5.54 × 10^−1 | 5.21 × 10^−1 | 1.35 × 10^0 | 1.43 × 10^0 | 7.08 × 10^−1
F2 | AVG | 5.57 × 10^−48 | 1.73 × 10^−49 | 1.75 × 10^−50 | 1.01 × 10^−41 | 8.54 × 10^−48 | 9.84 × 10^−48 | 3.62 × 10^−48
F2 | STD | 1.24 × 10^−47 | 3.84 × 10^−49 | 3.59 × 10^−50 | 1.56 × 10^−41 | 1.91 × 10^−47 | 1.94 × 10^−47 | 8.08 × 10^−48
F2 | Best | 1.12 × 10^−57 | 8.59 × 10^−49 | 8.16 × 10^−50 | 3.53 × 10^−41 | 4.27 × 10^−47 | 4.43 × 10^−47 | 7.04 × 10^−55
F2 | Worst | 2.78 × 10^−47 | 3.13 × 10^−54 | 6.30 × 10^−57 | 6.18 × 10^−48 | 8.31 × 10^−55 | 6.47 × 10^−54 | 1.81 × 10^−47
F2 | AVG Time | 1.36 × 10^−1 | 1.98 × 10^0 | 5.51 × 10^−1 | 5.38 × 10^−1 | 1.39 × 10^0 | 1.43 × 10^0 | 5.67 × 10^−1
F3 | AVG | 2.43 × 10^−2 | 8.35 × 10^−4 | 8.59 × 10^−3 | 6.02 × 10^−3 | 3.97 × 10^−3 | 1.33 × 10^−3 | 8.59 × 10^−4
F3 | STD | 3.19 × 10^−2 | 5.21 × 10^−4 | 8.12 × 10^−3 | 4.37 × 10^−3 | 5.16 × 10^−3 | 2.80 × 10^−3 | 5.94 × 10^−4
F3 | Best | 3.21 × 10^−4 | 1.58 × 10^−3 | 2.02 × 10^−2 | 1.13 × 10^−2 | 1.29 × 10^−2 | 6.34 × 10^−3 | 2.99 × 10^−4
F3 | Worst | 7.78 × 10^−2 | 4.09 × 10^−4 | 6.35 × 10^−4 | 1.74 × 10^−3 | 9.27 × 10^−5 | 3.42 × 10^−5 | 1.82 × 10^−3
F3 | AVG Time | 2.12 × 10^−1 | 1.72 × 10^0 | 6.39 × 10^−1 | 6.04 × 10^−1 | 1.53 × 10^0 | 1.56 × 10^0 | 6.59 × 10^−1
F4 | AVG | 5.34 × 10^−5 | 2.09 × 10^−4 | 1.16 × 10^−4 | 9.04 × 10^−5 | 1.42 × 10^−4 | 1.77 × 10^−4 | 9.11 × 10^−5
F4 | STD | 4.20 × 10^−5 | 2.47 × 10^−5 | 7.69 × 10^−5 | 1.56 × 10^−4 | 5.80 × 10^−5 | 2.58 × 10^−4 | 5.78 × 10^−5
F4 | Best | 5.68 × 10^−6 | 2.45 × 10^−4 | 1.79 × 10^−4 | 3.68 × 10^−4 | 1.97 × 10^−4 | 6.23 × 10^−4 | 5.00 × 10^−5
F4 | Worst | 1.21 × 10^−4 | 1.82 × 10^−4 | 8.17 × 10^−6 | 5.62 × 10^−6 | 7.19 × 10^−5 | 9.03 × 10^−6 | 1.93 × 10^−4
F4 | AVG Time | 3.35 × 10^−1 | 2.00 × 10^0 | 6.95 × 10^−1 | 6.74 × 10^−1 | 1.61 × 10^0 | 1.67 × 10^0 | 1.10 × 10^0
F5 | AVG | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F5 | STD | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F5 | Best | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F5 | Worst | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F5 | AVG Time | 1.77 × 10^−1 | 1.51 × 10^0 | 6.16 × 10^−1 | 6.30 × 10^−1 | 1.51 × 10^0 | 1.53 × 10^0 | 4.62 × 10^−1
F6 | AVG | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16
F6 | STD | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F6 | Best | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16
F6 | Worst | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16
F6 | AVG Time | 1.89 × 10^−1 | 1.92 × 10^0 | 6.40 × 10^−1 | 6.53 × 10^−1 | 1.49 × 10^0 | 1.53 × 10^0 | 4.69 × 10^−1
F7 | AVG | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F7 | STD | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F7 | Best | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F7 | Worst | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F7 | AVG Time | 2.29 × 10^−1 | 1.80 × 10^0 | 7.01 × 10^−1 | 6.21 × 10^−1 | 9.42 × 10^−1 | 9.90 × 10^−1 | 5.06 × 10^−1
F8 | AVG | 1.18 × 10^−5 | 2.33 × 10^−7 | 4.76 × 10^−6 | 2.16 × 10^−5 | 6.84 × 10^−6 | 2.16 × 10^−6 | 3.17 × 10^−7
F8 | STD | 1.18 × 10^−5 | 2.18 × 10^−7 | 4.06 × 10^−6 | 2.89 × 10^−5 | 7.02 × 10^−6 | 2.55 × 10^−6 | 3.22 × 10^−7
F8 | Best | 8.71 × 10^−7 | 5.96 × 10^−7 | 8.60 × 10^−6 | 6.92 × 10^−5 | 1.87 × 10^−5 | 6.29 × 10^−6 | 3.53 × 10^−8
F8 | Worst | 3.04 × 10^−5 | 6.17 × 10^−8 | 3.30 × 10^−7 | 2.02 × 10^−8 | 4.20 × 10^−7 | 6.61 × 10^−9 | 8.48 × 10^−7
F8 | AVG Time | 7.67 × 10^−1 | 2.60 × 10^0 | 1.02 × 10^0 | 9.72 × 10^−1 | 1.31 × 10^0 | 1.26 × 10^0 | 1.51 × 10^0
F9 | AVG | 7.51 × 10^−5 | 1.71 × 10^−6 | 7.58 × 10^−5 | 5.53 × 10^−5 | 1.83 × 10^−4 | 6.89 × 10^−6 | 4.54 × 10^−6
F9 | STD | 7.61 × 10^−5 | 4.64 × 10^−7 | 1.36 × 10^−4 | 8.37 × 10^−5 | 2.65 × 10^−4 | 7.49 × 10^−6 | 4.46 × 10^−6
F9 | Best | 2.14 × 10^−6 | 2.15 × 10^−6 | 3.18 × 10^−4 | 1.97 × 10^−4 | 6.28 × 10^−4 | 1.84 × 10^−5 | 9.91 × 10^−7
F9 | Worst | 1.84 × 10^−4 | 1.10 × 10^−6 | 6.82 × 10^−6 | 5.34 × 10^−7 | 1.92 × 10^−6 | 3.68 × 10^−9 | 9.90 × 10^−6
F9 | AVG Time | 7.73 × 10^−1 | 2.95 × 10^0 | 1.05 × 10^0 | 1.03 × 10^0 | 1.32 × 10^0 | 1.30 × 10^0 | 1.52 × 10^0
F10 | AVG | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0
F10 | STD | 1.03 × 10^−9 | 1.57 × 10^−16 | 5.84 × 10^−10 | 1.51 × 10^−8 | 7.08 × 10^−11 | 5.53 × 10^−9 | 1.92 × 10^−16
F10 | Best | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0
F10 | Worst | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0
F10 | AVG Time | 1.30 × 10^−1 | 1.95 × 10^0 | 7.23 × 10^−1 | 7.12 × 10^−1 | 9.67 × 10^−1 | 9.82 × 10^−1 | 6.79 × 10^−1
F11 | AVG | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0
F11 | STD | 1.63 × 10^−8 | 6.08 × 10^−15 | 8.88 × 10^−8 | 7.16 × 10^−7 | 4.69 × 10^−7 | 8.83 × 10^−7 | 4.32 × 10^−15
F11 | Best | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0
F11 | Worst | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0
F11 | AVG Time | 1.10 × 10^−1 | 1.89 × 10^0 | 7.04 × 10^−1 | 6.94 × 10^−1 | 9.84 × 10^−1 | 1.02 × 10^0 | 6.96 × 10^−1
F12 | AVG | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0
F12 | STD | 1.11 × 10^−3 | 3.85 × 10^−16 | 4.39 × 10^−3 | 4.73 × 10^−4 | 5.61 × 10^−4 | 3.31 × 10^−3 | 5.44 × 10^−16
F12 | Best | −3.86 × 10^0 | −3.86 × 10^0 | −3.85 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0
F12 | Worst | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0
F12 | AVG Time | 1.54 × 10^−1 | 2.50 × 10^0 | 7.39 × 10^−1 | 7.66 × 10^−1 | 1.02 × 10^0 | 1.06 × 10^0 | 7.14 × 10^−1
F13 | AVG | −3.15 × 10^0 | −3.32 × 10^0 | −3.17 × 10^0 | −3.11 × 10^0 | −3.14 × 10^0 | −3.16 × 10^0 | −3.27 × 10^0
F13 | STD | 9.45 × 10^−2 | 2.17 × 10^−14 | 8.44 × 10^−2 | 1.54 × 10^−1 | 7.01 × 10^−2 | 3.50 × 10^−2 | 6.51 × 10^−2
F13 | Best | −3.22 × 10^0 | −3.32 × 10^0 | −3.06 × 10^0 | −2.89 × 10^0 | −3.03 × 10^0 | −3.12 × 10^0 | −3.32 × 10^0
F13 | Worst | −2.99 × 10^0 | −3.32 × 10^0 | −3.27 × 10^0 | −3.30 × 10^0 | −3.22 × 10^0 | −3.22 × 10^0 | −3.20 × 10^0
F13 | AVG Time | 1.57 × 10^−1 | 1.82 × 10^0 | 7.45 × 10^−1 | 7.31 × 10^−1 | 1.02 × 10^0 | 1.08 × 10^0 | 7.13 × 10^−1
F14 | AVG | −5.05 × 10^0 | −6.07 × 10^0 | −5.05 × 10^0 | −5.05 × 10^0 | −5.05 × 10^0 | −5.04 × 10^0 | −5.06 × 10^0
F14 | STD | 3.06 × 10^−3 | 2.28 × 10^0 | 3.28 × 10^−3 | 4.27 × 10^−4 | 1.28 × 10^−3 | 1.50 × 10^−2 | 7.69 × 10^−16
F14 | Best | −5.05 × 10^0 | −5.06 × 10^0 | −5.05 × 10^0 | −5.05 × 10^0 | −5.05 × 10^0 | −5.02 × 10^0 | −5.06 × 10^0
F14 | Worst | −5.05 × 10^0 | −1.02 × 10^1 | −5.05 × 10^0 | −5.06 × 10^0 | −5.05 × 10^0 | −5.05 × 10^0 | −5.06 × 10^0
F14 | AVG Time | 1.91 × 10^−1 | 2.27 × 10^0 | 7.60 × 10^−1 | 7.49 × 10^−1 | 1.07 × 10^0 | 1.06 × 10^0 | 7.74 × 10^−1
F15 | AVG | −5.09 × 10^0 | −6.15 × 10^0 | −5.08 × 10^0 | −5.09 × 10^0 | −5.09 × 10^0 | −4.41 × 10^0 | −6.15 × 10^0
F15 | STD | 2.40 × 10^−3 | 2.38 × 10^0 | 4.47 × 10^−3 | 2.60 × 10^−4 | 2.67 × 10^−3 | 1.46 × 10^0 | 2.38 × 10^0
F15 | Best | −5.09 × 10^0 | −5.09 × 10^0 | −5.08 × 10^0 | −5.09 × 10^0 | −5.08 × 10^0 | −1.79 × 10^0 | −1.04 × 10^1
F15 | Worst | −5.08 × 10^0 | −1.04 × 10^1 | −5.09 × 10^0 | −5.09 × 10^0 | −5.09 × 10^0 | −5.08 × 10^0 | −5.09 × 10^0
F15 | AVG Time | 2.17 × 10^−1 | 2.13 × 10^0 | 7.98 × 10^−1 | 7.61 × 10^−1 | 1.07 × 10^0 | 1.08 × 10^0 | 8.10 × 10^−1
F16 | AVG | −5.12 × 10^0 | −8.37 × 10^0 | −6.17 × 10^0 | −7.29 × 10^0 | −5.12 × 10^0 | −4.57 × 10^0 | −8.37 × 10^0
F16 | STD | 5.29 × 10^−3 | 2.96 × 10^0 | 2.33 × 10^0 | 2.96 × 10^0 | 4.07 × 10^−3 | 1.21 × 10^0 | 2.96 × 10^0
F16 | Best | −5.13 × 10^0 | −5.13 × 10^0 | −5.12 × 10^0 | −5.13 × 10^0 | −5.12 × 10^0 | −2.41 × 10^0 | −1.05 × 10^1
F16 | Worst | −5.12 × 10^0 | −1.05 × 10^1 | −1.03 × 10^1 | −1.05 × 10^1 | −5.13 × 10^0 | −5.11 × 10^0 | −5.13 × 10^0
F16 | AVG Time | 2.62 × 10^−1 | 2.08 × 10^0 | 8.02 × 10^−1 | 7.85 × 10^−1 | 1.10 × 10^0 | 1.13 × 10^0 | 8.30 × 10^−1
Table 8. Ranks of the proposed algorithm and the other HHO versions.

Function | HHO | SHHOIRC | CHHO PM1C2 | CHHO PM1C9 | CHHO PM2C2 | CHHO PM2C9 | SHHO
F1 | 4 | 1 | 2 | 5 | 7 | 3 | 6
F2 | 4 | 2 | 3 | 1 | 7 | 5 | 6
F3 | 7 | 1 | 2 | 6 | 5 | 4 | 3
F4 | 1 | 7 | 3 | 4 | 2 | 5 | 6
F5 | 4 | 4 | 4 | 4 | 4 | 4 | 4
F6 | 1.5 | 5 | 1.5 | 5 | 5 | 5 | 5
F7 | 4 | 4 | 4 | 4 | 4 | 4 | 4
F8 | 6 | 1 | 2 | 4 | 7 | 5 | 3
F9 | 5 | 1 | 2 | 6 | 4 | 7 | 3
F10 | 6 | 1 | 6 | 3 | 5 | 2 | 4
F11 | 2 | 1 | 2 | 4 | 6 | 5 | 7
F12 | 4 | 1 | 4 | 7 | 3 | 2 | 6
F13 | 5 | 1 | 2 | 3 | 7 | 6 | 4
F14 | 6 | 1 | 2 | 5 | 3 | 4 | 7
F15 | 3 | 1 | 2 | 6 | 4 | 5 | 7
F16 | 6 | 1 | 2 | 4 | 3 | 5 | 7
Total | 69 | 33 | 44 | 71 | 76 | 71 | 82
Average rank | 18.3291 | 4.2539 | 7.3916 | 19.6914 | 22.5625 | 19.6914 | 26.2656
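The totals in Table 8 are obtained by ranking the seven algorithms on each benchmark and summing the per-function ranks, with tied results sharing an average rank (hence the fractional entries in the F6 row). A generic sketch of this aggregation step is given below; the toy scores are invented for illustration, and the exact ranking criterion used to produce Table 8 may combine several of the reported metrics:

```python
from scipy.stats import rankdata  # assigns average ranks to tied values

# Hypothetical minimization study: each row holds one benchmark's
# scores for the competing algorithms (lower is better).
algorithms = ["A", "B", "C", "D"]
scores = [
    [0.40, 0.10, 0.20, 0.30],  # benchmark 1 (invented values)
    [0.00, 0.00, 0.50, 0.25],  # benchmark 2: A and B tie for rank 1.5
]

totals = [0.0] * len(algorithms)
for row in scores:
    # rankdata gives rank 1 to the smallest score and averages ties.
    for j, r in enumerate(rankdata(row)):
        totals[j] += r

for name, total in zip(algorithms, totals):
    print(f"{name}: total rank {total}")  # A: 5.5, B: 2.5, C: 6.0, D: 6.0
```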
Table 9. Results obtained for the maximum coverage problem by the four algorithms (500 iterations).

Algorithm | AVG | Best | Worst | STD | AVG Time
SHHOIRC | 92.45 | 95.4375 | 90.625 | 1.7194 | 1726.5590
HHO | 90.0625 | 91.4375 | 88.625 | 1.1273 | 949.8674
GWO | 87.1063 | 94.5 | 78.9375 | 6.0276 | 425.0602
WOA | 87.7375 | 92.375 | 82.5625 | 2.6411 | 403.2416
Table 10. Results obtained for the maximum coverage problem by the four algorithms (1000 iterations).

Algorithm | AVG | Best | Worst | STD | AVG Time
SHHOIRC | 93.8064 | 97.125 | 91.9375 | 1.6246 | 4183.2866
HHO | 91.6313 | 93.125 | 90 | 0.9183 | 1947.2500
GWO | 90.6313 | 96.125 | 79.125 | 4.6276 | 781.6393
WOA | 89.2928 | n/a | n/a | 2.2783 | 864.4674
Table 11. Results obtained by one-way ANOVA for Experiment 1.

Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Sum of Squares (MSS) | F-Computed (Fc) | p-Value
Columns | SSTr = 177.517 | df1 = 3 | MSSTr = 59.1725 | 4.98 | 0.0054
Error | SSE = 427.81 | df2 = 36 | MSSE = 11.8836 | |
Total | SST = 605.327 | 39 | | |
Table 12. Results obtained by one-way ANOVA for Experiment 2.

Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Sum of Squares (MSS) | F-Computed (Fc) | p-Value
Columns | SSTr = 112.47 | df1 = 3 | MSSTr = 37.4902 | 4.98 | 0.0054
Error | SSE = 270.79 | df2 = 36 | MSSE = 7.522 | |
Total | SST = 383.261 | 39 | | |
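The ANOVA entries in Tables 11 and 12 follow from the sums of squares alone: each mean square is the corresponding sum of squares divided by its degrees of freedom, Fc is their ratio, and the p-value is the upper tail of the F(df1, df2) distribution. A small verification sketch (assuming SciPy is available; not the authors' code):

```python
from scipy.stats import f

def one_way_anova(ss_tr, ss_e, df1, df2):
    # Mean squares, F statistic, and upper-tail p-value.
    mss_tr, mss_e = ss_tr / df1, ss_e / df2
    fc = mss_tr / mss_e
    return mss_tr, mss_e, fc, f.sf(fc, df1, df2)  # sf = 1 - CDF

print(one_way_anova(177.517, 427.81, 3, 36))  # Experiment 1: Fc ~ 4.98, p ~ 0.0054
print(one_way_anova(112.47, 270.79, 3, 36))   # Experiment 2: Fc ~ 4.98, p ~ 0.0054
```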