Article

MSI-HHO: Multi-Strategy Improved HHO Algorithm for Global Optimization

College of Systems Engineering, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 415; https://doi.org/10.3390/math12030415
Submission received: 4 January 2024 / Revised: 24 January 2024 / Accepted: 24 January 2024 / Published: 27 January 2024

Abstract

The Harris Hawks Optimization (HHO) algorithm is a sophisticated metaheuristic technique, inspired by the hunting process of Harris hawks, that has gained attention in recent years. However, despite its promising features, the algorithm exhibits certain limitations, including a tendency to converge to local optima and a relatively slow convergence speed. In this paper, we propose the multi-strategy improved HHO algorithm (MSI-HHO) as an enhancement to the standard HHO algorithm, adopting three strategies to improve its performance, namely, inverted S-shaped escape energy, a stochastic learning mechanism based on Gaussian mutation, and refracted opposition-based learning. We then conduct a comprehensive comparison of the proposed MSI-HHO algorithm with the standard HHO algorithm and five other well-known metaheuristic optimization algorithms. Extensive simulation experiments are conducted on both the 23 classical benchmark functions and the IEEE CEC 2020 benchmark functions. The results of the non-parametric tests indicate that the MSI-HHO algorithm outperforms the six comparative algorithms at a significance level of 0.05. Additionally, the visualization analysis demonstrates the superior convergence speed and accuracy of the MSI-HHO algorithm, providing evidence of its robust performance.

1. Introduction

Optimization is a crucial process involved in identifying the most favorable value among all feasible options for a particular problem. In the domain of traditional optimization methods, it is imperative to establish a precise mathematical model of the problem; determine its constraints, objective function, and decision variables; and subsequently leverage the gradient information of the objective function to effectively solve the problem.
However, practical engineering optimization problems have become increasingly complicated, exhibiting characteristics such as multiple variables, objectives, constraints, and extremes, as well as non-linearity and non-analyticity; such problems are commonly referred to as NP-hard problems in mathematics [1]. Establishing mathematical models for these problems is often challenging due to discontinuous and non-differentiable objective functions, as well as the high dimensionality of the variables. Consequently, traditional optimization methods are inadequate for solving such problems [2].
In recent years, metaheuristic optimization algorithms have gained prominence due to their advantages of not relying on mathematical models or the gradient information of the objective functions. Furthermore, they exhibit the ability to identify satisfactory solutions and have a low dependence on specific optimization problems. As a result, they have been widely applied in engineering fields such as artificial intelligence, robotics, system control, data analysis, and image processing, playing an increasingly vital role [3].
Two different criteria are widely used to categorize metaheuristic algorithms according to the current literature. The first criterion refers to the number of agents searching within the algorithm, while the second criterion pertains to the underlying sources of inspiration employed to develop the algorithm [4].
Based on the number of agents, metaheuristic algorithms can be classified into two categories, i.e., Trajectory-based and Population-based algorithms. Trajectory-based algorithms usually begin with a randomly generated solution and progressively refine it over iterations; examples include Simulated Annealing (SA) [5] and Iterated Local Search (ILS). In contrast, a considerable number of prominent metaheuristic algorithms belong to the category of Population-based algorithms, e.g., Particle Swarm Optimization (PSO) [6], the Genetic Algorithm (GA) [7], and the Grey Wolf Optimizer (GWO) [8]. These algorithms start with a randomly generated population of solutions and iteratively improve them. Population-based algorithms offer the distinctive advantages of effectively exploring the search space, sharing information among individuals, and increasing the likelihood of finding the global optimal solution.
According to the second criterion, we divide metaheuristic algorithms into four primary categories based on their sources of inspiration, i.e., evolutionary phenomena, physical rules, human-related concepts, and the swarm intelligence of creatures. Algorithms based on evolutionary phenomena can be mainly categorized into four branches, including the extensively employed GA [7] and Differential Evolution (DE) [9], as well as Evolution Strategies (ES) [10] and Evolutionary Programming (EP) [11]. Physics-based algorithms comprise renowned techniques like SA [5], the Gravitational Search Algorithm (GSA) [12], and the state-of-the-art Artificial Electric Field Algorithm (AEFA) [13], which has been proposed in recent years, among others. Algorithms based on human-related concepts include Harmony Search (HS) [14], Teaching–learning-based Optimization (TLBO) [15], Student Psychology-Based Optimization (SPBO) [16], etc. The final category encompasses algorithms inspired by the swarm intelligence observed in creatures, constituting the largest and most extensively utilized branch. Representative algorithms within this domain include PSO [6], GWO [8], the Seagull Optimization Algorithm (SOA) [17], the Whale Optimization Algorithm (WOA) [18], and Artificial Bee Colony (ABC) [19], among others.
The HHO algorithm is an advanced metaheuristic optimization algorithm based on the swarm intelligence of creatures, which draws inspiration from the prey hunting behavior of Harris hawks [20]. The HHO algorithm comprises two distinct stages, i.e., the exploration phase and the exploitation phase, which correspond to global exploration and local exploitation of the search space, respectively. The random component of prey escape energy facilitates the seamless transition between the exploration and exploitation phases, ensuring a balanced relationship between them. Additionally, the Lévy flight is utilized in the exploitation phase to help agents escape from local optimal solutions, enhancing the global search capability of the algorithm. HHO offers several advantages such as few control parameters, high search efficiency and accuracy, and so on [21]. It has found widespread applications in various fields, including neural networks [22], image segmentation [23], parameter identification of photovoltaic cells and modules [24], satellite image denoising [25], and others.
However, the HHO algorithm still exhibits certain limitations, such as a tendency to converge towards local optima [21] and a relatively slow convergence speed [26]. To address these shortcomings, we propose a novel multi-strategy improved HHO algorithm, named MSI-HHO. Firstly, an inverted S-shaped function is utilized to characterize the dynamics of the prey escape energy, achieving a better balance between exploration and exploitation. Secondly, a stochastic learning mechanism based on Gaussian mutation is incorporated to enhance the search accuracy of the algorithm. Additionally, the refracted opposition-based learning technique is employed to assist the agents in escaping from local optimal solutions, thereby improving the global search capacity of the algorithm. Subsequently, we conduct comprehensive experiments with the proposed MSI-HHO on the 23 classical benchmark functions and the CEC 2020 benchmark functions. The assessment outcomes are analyzed using two non-parametric test methods, i.e., the Wilcoxon test and the Friedman test, as well as a visualization method, i.e., convergence graphs. Remarkably, the results clearly demonstrate the superior performance of MSI-HHO over both the standard HHO and the other algorithms used for comparison.
The subsequent sections of this paper are structured as follows: Section 2 offers a comprehensive review of the standard HHO algorithm. Section 3 provides a detailed exposition of the proposed MSI-HHO as well as the three enhancement strategies. In Section 4, we elaborate on the simulation experiments, along with the corresponding test results. Finally, Section 5 presents the conclusions drawn from our study.

2. Harris Hawks Optimization

The escape energy of prey determines whether a hawk is in the exploration or exploitation phase in the HHO algorithm. Distinct search strategies are associated with different values of escape energy. Consequently, it is imperative to calculate the prey’s escape energy prior to updating the population in each iteration and subsequently determine the appropriate next stage to pursue. The escape energy of prey is mathematically defined as follows [20].
E = 2 E_0 \left( 1 - \frac{t}{T_{\max}} \right) \quad (1)

where E_0 represents the initial escape energy of the prey, which is a random number between −1 and 1; t represents the current iteration number; and T_{\max} represents the maximum number of iterations.
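As a concrete illustration, the linear energy schedule of Equation (1) can be sketched in Python; the paper's experiments use MATLAB, so this translation is only illustrative.

```python
def escape_energy_linear(E0, t, T_max):
    """Escape energy of Eq. (1): decays linearly from 2*E0 at t = 0 to 0 at t = T_max."""
    return 2.0 * E0 * (1.0 - t / T_max)
```

Since E_0 is drawn uniformly from (−1, 1) at each iteration, |E| shrinks from at most 2 toward 0 over the course of the run, which is what drives the transition from exploration to exploitation.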

2.1. Exploration Phase

If the escape energy of the prey satisfies |E| \ge 1, the prey is considered to be energetic, and the distance between the hawk and the prey remains significant. Under these circumstances, the hawk becomes less purposeful and searches for the prey across a large area. Two distinct strategies are employed by the hawk to update its position, depending on whether it has successfully determined the location of the prey. The HHO algorithm generates a random number, denoted as q, to determine which strategy to use, as below [20].

X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|, & \text{if } q \ge 0.5 \\ \left( X_{best}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & \text{if } q < 0.5 \end{cases} \quad (2)

where r_1, r_2, r_3, r_4, q are random numbers between 0 and 1. X(t) and X(t+1) represent the current and next positions of the hawk, respectively. X_{rand}(t) represents the position of a hawk randomly chosen from the current population. X_{best}(t) is the current position of the prey, i.e., the globally fittest solution. UB and LB are the upper and lower bounds of the dimension, respectively. X_m(t) is the current average position of the population, which is defined as below [20].
X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \quad (3)

where N represents the population size and X_i(t) represents the current position of the i-th hawk.
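A minimal Python sketch of the exploration update of Equations (2) and (3) is given below; the function name, array layout, and `rng` argument are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def exploration_step(X, i, X_best, LB, UB, rng):
    """One exploration-phase update of hawk i per Eq. (2), using the
    population mean of Eq. (3). A sketch of the published update rule."""
    N, D = X.shape
    q, r1, r2, r3, r4 = rng.random(5)
    X_rand = X[rng.integers(N)]   # position of a randomly chosen hawk
    X_m = X.mean(axis=0)          # Eq. (3): average position of the population
    if q >= 0.5:                  # perch based on a random hawk's position
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # otherwise perch relative to the best hawk and the population mean
    return (X_best - X_m) - r3 * (LB + r4 * (UB - LB))
```

Note that the second branch can place the hawk anywhere in the box [LB, UB] relative to the best solution, which is what gives the exploration phase its wide coverage of the search space.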

2.2. Exploitation Phase

If the escape energy of the prey satisfies |E| < 1, the algorithm transitions into the exploitation phase. During this stage, the hawk endeavors to encircle the prey. However, the prey often manages to evade the hawk due to its residual energy and deceptive maneuvers. In response to this scenario, the hawk has evolved four distinct strategies, i.e., soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives. The selection of the optimal strategy for a successful hunt depends on the prey's available energy and the effectiveness of its escape attempts. The HHO algorithm utilizes a random number, denoted as r, to simulate the prey's likelihood of escaping capture. If r \ge 0.5, the prey has failed to escape; conversely, if r < 0.5, the prey successfully evades capture. Simultaneously, the escape energy of the prey is employed to assess its current state of vitality. If |E| \ge 0.5, the prey remains energetic; on the contrary, if |E| < 0.5, the prey is deemed exhausted.

2.2.1. Soft Besiege

If | E | 0.5 and r 0.5 , the prey fails to escape but still remains energetic. The hawk opts to persist in hovering over the prey, thereby contributing to further energy depletion in the prey and augmenting the likelihood of a successful hunt. The position update formula, delineated as follows, incorporates these considerations [20].
X(t+1) = \left( X_{best}(t) - X(t) \right) - E \left| J \cdot X_{best}(t) - X(t) \right| \quad (4)

where J = 2(1 - r_5) is the jump strength of the prey, employed to simulate the distance that the prey is capable of jumping, and r_5 is a random number between 0 and 1.

2.2.2. Hard Besiege

If |E| < 0.5 and r \ge 0.5, the prey fails to escape and becomes exhausted. At this time, the hawk launches a swift and forceful attack, employing a blitz tactic and rapidly closing in on the prey. The position update formula is presented as follows [20].

X(t+1) = X_{best}(t) - E \left| X_{best}(t) - X(t) \right| \quad (5)

2.2.3. Soft Besiege with Progressive Rapid Dives

If |E| \ge 0.5 and r < 0.5, the prey successfully escapes from capture and remains energetic. Under these circumstances, the hawks make a series of rapid group dives around the prey, adjusting their position and orientation based on the path taken by the prey during its escape for the subsequent attack. The position update formula is depicted below [20].

Y = X_{best}(t) - E \left| J \cdot X_{best}(t) - X(t) \right| \quad (6)
The variable Y represents the newly obtained position derived from the given equation above. Then, the fitness values associated with the new position and the previous position are compared to assess the efficacy of the strategy employed by the hawks. If F i t ( Y ) < F i t ( X ( t ) ) , it indicates the successful implementation of the strategy, and the previous position X ( t ) will be replaced with the new position Y . Otherwise, the strategy fails, prompting the hawks to initiate a rapid and irregular dive to attain a new position, denoted as Z , which is calculated as below [20].
Z = Y + S \times LF(D) \quad (7)

where S is a D-dimensional vector whose components are uniform random numbers in the range 0 to 1, and LF(D), called the Lévy flight, is also a D-dimensional vector, whose components can be obtained using the following procedure [20].

LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\left( \frac{1+\beta}{2} \right) \times \beta \times 2^{\frac{\beta - 1}{2}}} \right)^{1/\beta} \quad (8)
where variables u and v are random numbers uniformly distributed between 0 and 1. The variable β represents a constant value, which is typically set to 1.5 as a default.
Based on the previous discussion, the strategy of soft besiege with progressive rapid dives can be summarized as follows [20].
X(t+1) = \begin{cases} Y, & \text{if } Fit(Y) < Fit(X(t)) \\ Z, & \text{if } Fit(Z) < Fit(X(t)) \end{cases} \quad (9)
where Y and Z are given by Equation (6) and Equation (7), respectively.
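The dive-and-check logic above can be sketched in Python as follows; the objective callable `fit` and the helper names are our own illustrative assumptions, not the authors' code, and u and v are drawn uniformly from (0, 1) as stated in the text.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight vector of Eq. (8)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def soft_besiege_dives(X_i, X_best, E, fit, rng=None):
    """Soft besiege with progressive rapid dives, Eqs. (6)-(9)."""
    rng = rng or np.random.default_rng()
    D = X_i.size
    J = 2.0 * (1.0 - rng.random())                    # jump strength of the prey
    Y = X_best - E * np.abs(J * X_best - X_i)         # Eq. (6): first candidate dive
    if fit(Y) < fit(X_i):
        return Y                                      # Eq. (9): accept Y
    Z = Y + rng.random(D) * levy_flight(D, rng=rng)   # Eq. (7): irregular Levy dive
    return Z if fit(Z) < fit(X_i) else X_i            # Eq. (9): accept Z or keep X(t)
```

By construction, the returned position is never worse than the hawk's current one, since each candidate dive is accepted only if it improves the fitness value.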

2.2.4. Hard Besiege with Progressive Rapid Dives

If |E| < 0.5 and r < 0.5, the prey has managed to escape, albeit with a significant depletion of its escape energy. In this scenario, the hawk initiates a gradual dive, with the primary objective of approaching the prey as closely as possible. The position update formula is as follows [20].

Y = X_{best}(t) - E \left| J \cdot X_{best}(t) - X_m(t) \right| \quad (10)
where Y is the newly obtained position. Similar to the strategy of soft besiege with progressive rapid dives, if F i t ( Y ) < F i t ( X ( t ) ) , the previous position X ( t ) will be replaced with the new position Y . Otherwise, a new position Z will be assigned according to Equation (7), and X ( t + 1 ) will be determined based on Equation (9).

3. The Proposed Algorithm

3.1. Inverted S-Shaped Escape Energy

The search phase of the hawks is determined by the escape energy of the prey in the HHO algorithm. Typically, when the absolute value of the escape energy, denoted as |E|, is large, the hawks are more inclined to enter the exploration phase; conversely, when |E| is small, they are more likely to enter the exploitation phase. However, the escape energy of the prey in the standard HHO exhibits a linearly decreasing trend, implying a constant rate of energy depletion. Consequently, the rapid decline in escape energy during the early exploration phase results in a loss of population diversity and premature convergence in the subsequent exploitation phase.
In this case, we utilize an inverted S-shaped function to characterize the dynamics of the prey escape energy, which capitalizes on the characteristics of slower decline rates in the early and late stages and a faster decline rate in the middle stage. The formula defining the inverted S-shaped escape energy is presented below.
E = 2 E_0 \times \left( 1 - \frac{1}{1 + e^{a - b t}} \right) \quad (11)

where a and b are the parameters that control the shape of the inverted S-shaped function, set to 5 and 15 / T_{\max}, respectively, and t is the current iteration number.
Figure 1 illustrates the dynamics of the proposed inverted S-shaped escape energy. As shown in Figure 1, this strategy enables the escape energy of the prey to maintain a large value for an extended period in the early stage, thereby facilitating global exploration. Simultaneously, it ensures that the escape energy remains small for a prolonged time in the later stage, thereby promoting thorough local exploitation.
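Equation (11) can be sketched in Python as below, with the paper's stated settings a = 5 and b = 15 / T_max; this is an illustrative translation rather than the authors' MATLAB code.

```python
import math

def escape_energy_s(E0, t, T_max, a=5.0):
    """Inverted S-shaped escape energy of Eq. (11), with a = 5 and b = 15 / T_max."""
    b = 15.0 / T_max
    return 2.0 * E0 * (1.0 - 1.0 / (1.0 + math.exp(a - b * t)))
```

With these settings the energy stays close to 2·E_0 for the first part of the run, drops quickly around the middle, and flattens near 0 toward the end, matching the slow-fast-slow decline described above.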

3.2. Stochastic Learning Mechanism Based on Gaussian Mutation

The HHO algorithm still faces challenges pertaining to low convergence accuracy and limited local search capabilities. In this case, we propose the stochastic learning mechanism based on Gaussian mutation to enhance the hawks’ exploitation capacities. The fundamental concept of the strategy involves applying the stochastic learning mechanism, driven by Gaussian mutation, to each hawk according to the mutation probability P m within each iteration. Then, a specific method, i.e., greedy selection, is utilized to determine the acceptance of the new solutions obtained. We refer to this process as “researching”. The formula of the stochastic learning mechanism based on Gaussian mutation is defined as below.
X_i^{k+1}(t) = X_i^k(t) + \left( X_{rand}(t) - X_i^k(t) \right) \odot Gaussian(1 \times D, \sigma, \mu) \quad (12)

where X_i^k(t) represents the position of the i-th hawk after performing k researchings in the t-th iteration, with k = 0, 1, 2, \ldots, n-1 and X_i^0(t) = X_i(t). The parameter n signifies the number of researchings to be conducted. Moreover, X_{rand}(t) represents the position of a hawk randomly selected from the population in the t-th iteration. The term Gaussian(1 \times D, \sigma, \mu) refers to a D-dimensional random vector, wherein each component is a random number following a Gaussian distribution with a standard deviation of \sigma and a mean of \mu. Additionally, the operator '\odot', known as the Hadamard product, signifies the element-wise multiplication of corresponding components of two vectors.

As illustrated in Figure 2, the hawk integrates the positional information of a random individual, denoted as X_{rand}(t), with the result of the Gaussian mutation to reach a new position, denoted as X_i^{k+1}(t). Subsequently, the acceptance of this new position is determined by the greedy selection strategy. When the objective is to minimize the optimization function, the greedy selection strategy is implemented as follows.

X_i^{k+1}(t) = \begin{cases} X_i^{k+1}(t), & \text{if } Fit\left( X_i^{k+1}(t) \right) < Fit\left( X_i^k(t) \right) \\ X_i^k(t), & \text{if } Fit\left( X_i^{k+1}(t) \right) \ge Fit\left( X_i^k(t) \right) \end{cases} \quad (13)
The position of the hawk X_i(t) will be replaced with the final result X_i^n(t) obtained by conducting n researchings. For the case n = 3, the specific process of the researchings is shown in Figure 3. The arrows depict the path followed by the hawk during the researchings, with the adjacent numbers representing the sequential order of the researchings. Additionally, according to the greedy selection strategy, the red arrows signify the rejection of a new solution, while the green arrows indicate acceptance. Notably, the researchings enable hawks to thoroughly explore the immediate vicinity, thereby significantly enhancing the local exploitation capability of the algorithm.
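The researching loop of Equations (12) and (13) can be sketched as follows; the default values for `sigma` and `mu` are placeholders of our own (the paper lists them among the algorithm's input parameters without fixing them in this section), and the objective callable `fit` is likewise an illustrative assumption.

```python
import numpy as np

def researching(X_i, X, fit, n=3, sigma=1.0, mu=0.0, rng=None):
    """n rounds of the Gaussian-mutation stochastic learning of Eq. (12),
    each accepted or rejected by the greedy selection of Eq. (13)."""
    rng = rng or np.random.default_rng()
    D = X_i.size
    cur = X_i.copy()
    for _ in range(n):
        X_rand = X[rng.integers(len(X))]   # random hawk from the population
        g = rng.normal(mu, sigma, D)       # Gaussian(1 x D, sigma, mu)
        cand = cur + (X_rand - cur) * g    # Eq. (12); '*' is the Hadamard product
        if fit(cand) < fit(cur):           # Eq. (13): greedy selection
            cur = cand
    return cur
```

Because every intermediate step is filtered by greedy selection, the final position X_i^n(t) is never worse than the starting position X_i(t).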

3.3. Refracted Opposition-Based Learning

Opposition-based learning (OBL) is a valuable algorithm enhancement mechanism, which extends the search space by taking the opposition solutions into account, leading to the discovery of improved solutions for the problem [27]. OBL has significantly contributed to enhancing the performance of various metaheuristic algorithms [28,29,30,31,32,33]. Building upon OBL, refracted opposition-based learning (ROBL) leverages the principles of both OBL and the refraction law of light to identify superior solutions, which has been successfully applied to improve GWO [34], WOA [35], AEFA [36], and other algorithms. We attempt to use the mechanism to improve the performance of HHO. The basic principle of ROBL is shown in Figure 4.
In Figure 4, x_{i,j} denotes the position of the i-th individual of the population in the j-th dimension. The refracted opposition solution of x_{i,j} is represented as x_{i,j}^*. The quantities ub, lb, and (ub + lb)/2 correspond to the upper bound, lower bound, and their midpoint on this dimension, respectively. The angles \alpha and \beta denote the angles of incidence and refraction, while the lengths of the incident and refracted rays are represented by l and l^*, respectively. Based on the definition of the trigonometric functions, Equation (14) can be derived.

\sin \alpha = \left( \frac{ub + lb}{2} - x_{i,j} \right) / l, \quad \sin \beta = \left( x_{i,j}^* - \frac{ub + lb}{2} \right) / l^* \quad (14)
Based on the definition of the refractive index, i.e., n = \sin \alpha / \sin \beta, Equation (15) can be derived by combining this definition with Equation (14).

n = \frac{l^* \left( \frac{ub + lb}{2} - x_{i,j} \right)}{l \left( x_{i,j}^* - \frac{ub + lb}{2} \right)} \quad (15)
Equation (16) can be obtained from Equation (15) by setting k = l / l^*.

x_{i,j}^* = \frac{ub + lb}{2} + \frac{ub + lb}{2 k n} - \frac{x_{i,j}}{k n} \quad (16)
Referring to Equation (16), it can be observed that opposition-based learning is actually a specific case of refracted opposition-based learning. Specifically, when k n = 1, Equation (16) reduces to the standard form of OBL as follows [27].

x_{i,j}^* = ub + lb - x_{i,j} \quad (17)
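Equation (16), and its reduction to Equation (17), can be sketched as a one-dimensional helper in Python; the function name is our own.

```python
def robl(x, lb, ub, k=1.0, n=1.0):
    """Refracted opposition solution of Eq. (16) for one dimension.
    With k * n == 1 it reduces to the standard OBL of Eq. (17)."""
    mid = (lb + ub) / 2.0
    return mid + mid / (k * n) - x / (k * n)
```

Varying k·n away from 1 moves the refracted opposition point closer to (or farther from) the midpoint of the search interval, which is what lets ROBL generate a richer set of candidate points than plain OBL.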
At the end of each iteration, the population is sorted in descending order of fitness value. The top N_{ROP} individuals are selected to undergo the ROBL operation, and the greedy selection strategy is employed to determine the acceptance of the new solutions. Subsequently, the updated N_{ROP} individuals are merged with the remaining N - N_{ROP} individuals to form a new population for the subsequent iteration. The formula for determining N_{ROP} is as follows.

N_{ROP} = \mathrm{Floor}\left( N - t \times \frac{N - 1}{T_{\max}} \right) \quad (18)

where the operator 'Floor' truncates the decimal portion of a numerical value, leaving only the integer part. N represents the size of the population, while t and T_{\max} denote the current and maximum iteration counts, respectively.

As evident from Equation (18), N_{ROP} decreases gradually from the population size N to 1 as the current iteration count t increases during the search process. This deliberate design enables the algorithm to explore the search space extensively during the early stages, while reducing the computation required in the later stages.
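The decay schedule of Equation (18) is a single line of code; the sketch below uses Python's `math.floor` for the 'Floor' operator.

```python
import math

def n_rop(N, t, T_max):
    """Eq. (18): number of top-ranked hawks that undergo ROBL, decaying from N to 1."""
    return math.floor(N - t * (N - 1) / T_max)
```

For example, with N = 50 and T_max = 1000, all 50 hawks are refracted at t = 0, about half at mid-run, and only one at the final iteration.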

3.4. The Proposed Algorithm Combining Three Improved Strategies

The proposed MSI-HHO algorithm is developed by integrating the above three strategies, i.e., the inverted S-shaped escape energy, the stochastic learning mechanism based on Gaussian mutation, and the refracted opposition-based learning, as well as an additional strategy called greedy selection. The specific process of the algorithm is shown in Algorithm 1.
As depicted in Algorithm 1, the three proposed improvement strategies are highlighted using bold black borderlines to indicate their respective positions and implementation processes within the MSI-HHO algorithm. Specifically, the first strategy is applied at line 6, where hawks update the escape energy of the prey using the proposed Equation (11). The second strategy operates on lines 26–33: after each hawk updates its position in each iteration, it undergoes the operations defined by the second strategy, i.e., the stochastic learning mechanism based on Gaussian mutation, performing n rounds of researchings, and the final result is used in the subsequent operations. The third strategy is applied at lines 35–39: once the population completes the update for the current iteration, the hawks are sorted in descending order of fitness value, and the refracted opposition-based learning is applied to the top N_{ROP} hawks. Finally, the updated N_{ROP} hawks are merged with the remaining N - N_{ROP} hawks, forming a new population for the next iteration. The three proposed strategies improve the standard HHO algorithm from the perspectives of the prey, the hawks, and the population, respectively. Firstly, the inverted S-shaped escape energy strategy optimizes the utilization of limited computational resources. Secondly, the stochastic learning mechanism based on Gaussian mutation enhances the algorithm's local exploitation capability. Lastly, the refracted opposition-based learning strategy primarily enhances the global exploration capability of the algorithm.
Algorithm 1. The multi-strategy improved HHO algorithm (MSI-HHO)
Input: Population size N; maximum iterations T_{\max}; mutation probability P_m; number of researchings n; standard deviation and mean of the Gaussian distribution \sigma, \mu; refractive index k n
Output: Global best hawk X_{best}(T_{\max}); global best fitness value Fit(X_{best}(T_{\max}))
(The full pseudocode of Algorithm 1 is presented as a figure in the original article.)

4. Experimental Results and Discussion

To assess the efficacy of our proposed algorithm, we employ it to address the optimization problems of the 23 classical benchmark functions [12] along with the latest IEEE CEC 2020 benchmark functions [37]. These benchmark functions are elaborated in detail in Table 1 and Table 2. In Section 4.2, we conduct a comprehensive comparative analysis between our MSI-HHO algorithm and six other state-of-the-art search algorithms, namely, GSA [12], DE [9], SOA [17], WOA [18], ABC [19], and the standard HHO [20], and present the corresponding experimental results. Non-parametric tests follow in Section 4.3. Section 4.4 provides graphical representations of the iterative curves of all algorithms, facilitating a more intuitive observation of their convergence.
All the experiments are implemented in the MATLAB 2021b development environment. The computations are performed on a 64-bit Lenovo computer equipped with an AMD processor, 16 GB of RAM, and the Windows 10 operating system.

4.1. Benchmark Functions

Firstly, the 23 widely used classical benchmark functions are utilized in the simulation experiments. A detailed description of these functions, including their type, definition, variable range (S), dimension (d), and optimal value (F_{opt}), is provided in Table 1. Among these functions, f_1–f_13 represent high-dimensional problems with d = 30. Specifically, f_1–f_7 are categorized as unimodal functions, while f_8–f_13 are multimodal functions, for which the number of local optimal solutions increases exponentially as the problem dimension expands. The functions f_14–f_23 represent low-dimensional problems characterized by having only a few local optimal solutions. To enhance the visual understanding of the characteristics of these functions, we generated three-dimensional surface graphs along with contour lines, shown in Figure 5a–c for functions f_7, f_8, and f_16, respectively.
Additionally, we include the latest IEEE CEC 2020 benchmark functions to evaluate the performance of the algorithms, facilitating a more comprehensive comparison. The specific details of these functions are presented in Table 2. Similarly, Figure 5d–f correspond to functions F_2, F_6, and F_9, respectively.

4.2. Comprehensive Comparison

The proposed MSI-HHO algorithm is compared with the standard HHO algorithm along with five other state-of-the-art algorithms, i.e., GSA, DE, SOA, WOA, and ABC. These algorithms have been extensively validated for their effectiveness and robustness in solving various engineering optimization problems. Among them, GSA is a well-known physics-based algorithm that draws inspiration from the law of universal gravitation. Furthermore, DE is also a celebrated algorithm widely recognized for its effectiveness in problem-solving, with its mechanisms mimicking evolutionary phenomena. Additionally, three widely used algorithms based on swarm intelligence, i.e., SOA, WOA, and ABC, are chosen to facilitate a comprehensive comparison with the proposed algorithm. The parameter settings for these algorithms remain consistent with those described in their respective literature. Simultaneously, Table 3 displays the parameter configurations of the MSI-HHO algorithm, used for both the 23 classical benchmark functions and the CEC 2020 benchmark functions.
As widely recognized, the initial state of the population significantly impacts the final results of an algorithm. Therefore, in each round of the experiment, every algorithm commences from the same pre-defined initial population state and is run 25 times, with its performance determined by the average result of these runs. To ensure fairness and impartiality, a total of 10 rounds are conducted, involving 10 distinct initial population states. Regarding the parameter settings, the maximum number of iterations for each algorithm is set to 1000 × d, where d represents the dimensionality of the optimization problem. The population size is set to 50 for the 23 classical benchmark functions and 60 for the IEEE CEC 2020 benchmark functions.
Firstly, we conduct further analysis of the experimental results obtained on the 23 classical benchmark functions. The minimum error value (denoted as "best") and the standard deviation of the error (denoted as "std") are calculated for each algorithm. The results are presented in Table 4, with the best results highlighted in bold. The symbols "+", "=", and "−" are employed to indicate whether the performance of the corresponding algorithm is superior, similar, or inferior to that of the MSI-HHO algorithm, respectively.
As depicted in Table 4, when compared to physics-based algorithms, MSI-HHO demonstrates superiority over the GSA on twelve functions and slightly inferior performance on three functions. For algorithms based on evolutionary phenomena, MSI-HHO outperforms DE on sixteen functions and exhibits negligible differences on seven functions, showing its significant advantages. Likewise, regarding algorithms inspired by swarm intelligence, MSI-HHO outperforms SOA, WOA, and ABC on fifteen, fifteen, and twenty functions, respectively, while demonstrating a similar performance to them on eight, eight, and three functions. Additionally, it is noteworthy that MSI-HHO did not exhibit an inferior performance to them on any of the tested functions, emphasizing the superiority of MSI-HHO in terms of its solving capabilities. In comparison to the standard HHO algorithm, the MSI-HHO algorithm demonstrates superior performance on a total of 11 functions, with a slight decrement observed in only one function. Notably, MSI-HHO exhibits obvious advantages over HHO when addressing low-dimensional problems, i.e., functions f 14 f 23 .
Subsequently, Table 5 presents the experimental results of the CEC 2020 benchmark functions, building upon the basic format displayed in Table 4. We further calculate a series of additional metrics to provide a comprehensive and detailed representation of these outcomes. Specifically, we determine the minimum error (denoted as “Best”), maximum error (denoted as “Worst”), average error (denoted as “Mean”), median error (denoted as “Median”), and standard deviation of error (denoted as “STD”). Similarly, the best results for each test function, achieved by the algorithms, are marked in bold.
As depicted in Table 5, the outcomes of our MSI-HHO algorithm demonstrate superior performance compared to those of the other six algorithms. Specifically, among these functions, MSI-HHO outperforms GSA, DE, WOA, and HHO on six, nine, eight, and nine functions, respectively. Furthermore, MSI-HHO demonstrates superiority over the SOA and ABC on all 10 test functions. Based on the aforementioned results and analysis, it can be concluded that MSI-HHO exhibits a favorable and comprehensive performance, surpassing the other six advanced optimization algorithms across the majority of functions.
The time cost of each algorithm is the average over 25 repetitions of the experiment, denoted as the "Mean time cost", with the outcomes presented at the bottom of Table 4 and Table 5. For the 23 classical benchmark functions, the algorithms ranked in ascending order of time cost are WOA, SOA, HHO, ABC, DE, MSI-HHO, and GSA; for the IEEE CEC 2020 benchmark functions, the order is SOA, WOA, HHO, ABC, DE, MSI-HHO, and GSA. The relative ranking is thus largely consistent across the two datasets, with only SOA and WOA exchanging positions. Notably, SOA and WOA consistently exhibit the lowest time costs, while GSA incurs the highest. Furthermore, the differences in time cost among the algorithms are relatively small, with the maximum disparity within one order of magnitude.
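The mean time cost can be measured with a simple timing harness; this sketch assumes each algorithm is wrapped in a zero-argument callable and uses the standard library's monotonic clock.

```python
import time

def mean_time_cost(run, repetitions=25):
    """Average wall-clock time of run() over repeated executions,
    mirroring the 25-repetition "Mean time cost" rows of
    Tables 4 and 5."""
    total = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        run()                    # one complete optimization run
        total += time.perf_counter() - start
    return total / repetitions
```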

4.3. Statistical Test

Two widely used non-parametric test methods, i.e., the Wilcoxon signed-rank test and Friedman test, are utilized to conduct a more comprehensive analysis of the experimental results. Specifically, the Wilcoxon signed-rank test is frequently employed to determine whether there is a significant difference between two paired samples, whereas the Friedman test is commonly utilized to detect significant differences among multiple samples [38].
For the Wilcoxon signed-rank test, the experimental results of all algorithms on the 23 classical benchmark functions in Table 4 are first combined in a pairwise manner. The Wilcoxon signed-rank test is then conducted, with the outcomes presented in Table 6. The positive rank sum, denoted as "R+", is the cumulative sum of ranks over the functions on which our proposed algorithm outperforms the compared algorithm, while the negative rank sum, denoted as "R−", is the cumulative sum of ranks in the opposite case. From the p-values, the MSI-HHO algorithm shows a significant improvement over GSA at a significance level of α = 0.05, and it outperforms the other five algorithms at a significance level of α = 0.01.
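The construction of R+ and R− can be sketched as follows. This illustrative implementation follows the usual convention of discarding zero differences and averaging ranks over ties in |d|; the paired samples in the test case are hypothetical.

```python
import numpy as np

def signed_rank_sums(err_a, err_b):
    """Wilcoxon signed-rank sums for paired error samples: R+ sums the
    ranks where algorithm A beats B (smaller error), R- the opposite.
    Zero differences are dropped; tied |d| values share average ranks."""
    d = np.asarray(err_b, float) - np.asarray(err_a, float)
    d = d[d != 0]                        # discard exact ties
    order = np.abs(d).argsort(kind="stable")
    ranks = np.empty(len(d))
    ranks[order] = np.arange(1, len(d) + 1)
    for v in np.unique(np.abs(d)):       # average ranks over ties
        m = np.abs(d) == v
        ranks[m] = ranks[m].mean()
    r_plus = ranks[d > 0].sum()          # A better (B's error larger)
    r_minus = ranks[d < 0].sum()
    return r_plus, r_minus
```

By construction, R+ + R− equals n(n + 1)/2 for the n non-tied pairs, which is a useful sanity check on tables like Table 6.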
For the Friedman test, we first rank the algorithms by their performance on each test function, so that each algorithm receives a ranking for every test function. The rankings across all 23 test functions are then aggregated and the Friedman test is conducted, with the results presented in Table 7. The sum of ranks across all test functions is denoted as the "Rank sum", and the average rank as the "Friedman rank". Additionally, we assign a general rank to each algorithm based on its Friedman rank, which reflects the overall performance of the algorithms on the test functions. The resulting performance order is MSI-HHO, HHO, WOA, GSA, SOA, DE, and ABC. MSI-HHO achieves the highest rank, further affirming its superior and comprehensive performance compared with that of the other algorithms.
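The per-function ranking step above can be sketched as follows; this is a minimal illustration (lower error means better rank, and ties share average ranks, which is why fractional ranks such as 1.5 appear in Table 7), not the authors' exact code.

```python
import numpy as np

def friedman_ranks(errors):
    """Rank k algorithms on each of n test functions (rows of
    `errors`, shape n x k), then return the "Rank sum" and the
    "Friedman rank" (mean rank) as in Tables 7 and 9."""
    errors = np.asarray(errors, float)
    n, k = errors.shape
    ranks = np.empty_like(errors)
    for i in range(n):
        row = errors[i]
        order = row.argsort(kind="stable")
        r = np.empty(k)
        r[order] = np.arange(1, k + 1)
        for v in np.unique(row):         # share ranks among ties
            m = row == v
            r[m] = r[m].mean()
        ranks[i] = r
    rank_sum = ranks.sum(axis=0)
    return rank_sum, rank_sum / n
```

Each row of ranks sums to k(k + 1)/2, so the rank sums of all algorithms in Table 7 must total 23 × 28 = 644, which they do.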
Similarly, both non-parametric statistical tests are applied to the results of the CEC 2020 benchmark functions in Table 5, with the outcomes presented in Table 8 and Table 9, respectively. As shown in Table 8, MSI-HHO performs significantly better than DE, SOA, ABC, and HHO at a significance level of α = 0.1. Furthermore, the Friedman test results in Table 9 show that MSI-HHO ranks first in both the Friedman rank and the general rank, illustrating its substantial superiority over the other algorithms.

4.4. Iterative Curves

To further compare the convergence performance of MSI-HHO with that of the other algorithms, we select six and four functions from the two datasets, respectively, and plot their convergence curves in Figure 6 and Figure 7. Given the diverse types of test functions, the selections are drawn from each type, ensuring representative coverage of the different function types within each dataset.
As shown in Figure 6 and Figure 7, the convergence speed and search accuracy of different algorithms show significant variations. Throughout the 23 classical benchmark functions and the CEC 2020 benchmark functions, MSI-HHO consistently achieves convergence within the fewest number of iterations while successfully searching for the global optimal solution, exhibiting noteworthy exploration and exploitation capabilities. These findings provide further evidence for the effectiveness of the three enhancement strategies proposed in this study, which significantly improve the performance of HHO.
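The curves in Figures 6 and 7 plot the best objective value found so far against the iteration count, i.e., the running minimum over iterations. A minimal sketch of that transformation, assuming the per-iteration best values have been recorded:

```python
import numpy as np

def best_so_far(per_iter_best):
    """Turn the best objective value found at each iteration into the
    monotone convergence curve plotted in Figures 6 and 7: the
    running minimum over iterations."""
    return np.minimum.accumulate(np.asarray(per_iter_best, float))

# Hypothetical per-iteration bests from one run:
curve = best_so_far([9.0, 4.0, 6.0, 2.0, 3.0, 1.0])
```

Because the curve is non-increasing, algorithms can be compared fairly at any iteration budget along the horizontal axis.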

5. Conclusions

In this paper, we propose the MSI-HHO algorithm, which adopts three strategies to improve the performance of the standard HHO algorithm, i.e., inverted S-shaped escape energy, a stochastic learning mechanism based on Gaussian mutation, and refracted opposition-based learning. To assess its effectiveness, a comprehensive evaluation is conducted against the standard HHO algorithm and five other well-known search algorithms, i.e., GSA, DE, SOA, WOA, and ABC. Extensive simulation experiments are carried out on both the 23 classical benchmark functions and the IEEE CEC 2020 benchmark functions, and the results are analyzed with two widely used non-parametric tests, the Wilcoxon signed-rank test and the Friedman test. Finally, the convergence curves of the algorithms are examined to provide a comprehensive analysis of their convergence and optimization capabilities. Based on the aforementioned results and analysis, the proposed MSI-HHO achieves a significant performance improvement over HHO and the other five algorithms.
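For reference, the refracted opposition-based learning strategy can be sketched with the generic refraction-learning formula found in the cited literature [34,35]. The exact variant and the refractive index $k_n$ adopted by MSI-HHO (see Table 3) are defined earlier in the paper, so this is an illustrative sketch rather than the authors' exact operator.

```python
def refracted_opposite(x, a, b, k):
    """A common form of refracted opposition-based learning: map a
    candidate x in the search interval [a, b] to a point on the
    opposite side of the interval midpoint, scaled by the refractive
    index k. With k = 1 this reduces to standard opposition-based
    learning, x* = a + b - x."""
    return (a + b) / 2 + (a + b) / (2 * k) - x / k
```

Evaluating both a candidate and its refracted opposite lets the algorithm probe the unexplored side of the search space, which is the intuition behind this strategy.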
In the future, we intend to include recently proposed metaheuristic optimization algorithms in the comparative experiments to further evaluate the performance of our algorithm. Beyond numerical experiments, we will also apply it to a wider range of real-world engineering optimization problems to assess its effectiveness and robustness more comprehensively. While our method achieves satisfactory solution accuracy, it incurs a relatively higher computational cost; we will therefore continue to optimize its specific procedures in order to strike a better balance between search accuracy and computational efficiency.

Author Contributions

Conceptualization, H.W.; Funding acquisition, J.T.; Methodology, H.W. and Q.P.; Project administration, J.T.; Supervision, J.T. and Q.P.; Validation, H.W. and Q.P.; Visualization, H.W.; Writing—original draft, H.W.; Writing—review and editing, J.T. and Q.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China, grant number 62073330.

Data Availability Statement

All data generated or analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Yaşa, E.; Aksu, D.T.; Özdamar, L. Metaheuristics for the Stochastic Post-Disaster Debris Clearance Problem. IISE Trans. 2022, 54, 1004–1017. [Google Scholar] [CrossRef]
  2. Simpson, A.R.; Dandy, G.C.; Murphy, L.J. Genetic Algorithms Compared to Other Techniques for Pipe Optimization. J. Water Resour. Plann. Manag. 1994, 120, 423–443. [Google Scholar] [CrossRef]
  3. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  4. Ezugwu, A.E.; Shukla, A.K.; Nath, R.; Akinyelu, A.A.; Agushaka, J.O.; Chiroma, H.; Muhuri, P.K. Metaheuristics: A Comprehensive Overview and Classification along with Bibliometric Analysis. Artif. Intell. Rev. 2021, 54, 4237–4316. [Google Scholar] [CrossRef]
  5. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  6. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  7. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, 1st ed.; Complex adaptive systems; MIT Press: Cambridge, MA, USA, 1992; ISBN 978-0-262-08213-6. [Google Scholar]
  8. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  9. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  10. Beyer, H.-G.; Schwefel, H.-P. Evolution Strategies—A Comprehensive Introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  11. Fogel, L.J.; Owens, A.J.; Walsh, M.J. Artificial Intelligence through Simulated Evolution. In Evolutionary Computation; IEEE: Piscataway, NJ, USA, 2009; ISBN 978-0-470-54460-0. [Google Scholar]
  12. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  13. Yadav, A. AEFA: Artificial Electric Field Algorithm for Global Optimization. Swarm Evol. Comput. 2019, 48, 93–108. [Google Scholar] [CrossRef]
  14. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  15. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  16. Das, B.; Mukherjee, V.; Das, D. Student Psychology Based Optimization Algorithm: A New Population Based Optimization Algorithm for Solving Optimization Problems. Adv. Eng. Softw. 2020, 146, 102804. [Google Scholar] [CrossRef]
  17. Dhiman, G.; Kumar, V. Seagull Optimization Algorithm: Theory and Its Applications for Large-Scale Industrial Engineering Problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  19. Karaboga, D.; Basturk, B. A Powerful and Efficient Algorithm for Numerical Function Optimization: Artificial Bee Colony (ABC) Algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  20. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  21. Zou, T.; Wang, C. Adaptive Relative Reflection Harris Hawks Optimization for Global Optimization. Mathematics 2022, 10, 1145. [Google Scholar] [CrossRef]
  22. Shehabeldeen, T.A.; Elaziz, M.A.; Elsheikh, A.H.; Zhou, J. Modeling of Friction Stir Welding Process Using Adaptive Neuro-Fuzzy Inference System Integrated with Harris Hawks Optimizer. J. Mater. Res. Technol. 2019, 8, 5882–5892. [Google Scholar] [CrossRef]
  23. Rodríguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Pérez-Cisneros, M.; Foong, L.K. An Efficient Harris Hawks-Inspired Image Segmentation Method. Expert Syst. Appl. 2020, 155, 113428. [Google Scholar] [CrossRef]
  24. Chen, H.; Jiao, S.; Wang, M.; Heidari, A.A.; Zhao, X. Parameters Identification of Photovoltaic Cells and Modules Using Diversification-Enriched Harris Hawks Optimization with Chaotic Drifts. J. Clean. Prod. 2020, 244, 118778. [Google Scholar] [CrossRef]
  25. Amiri Golilarz, N.; Gao, H.; Demirel, H. Satellite Image De-Noising With Harris Hawks Meta Heuristic Optimization Algorithm and Improved Adaptive Generalized Gaussian Distribution Threshold Function. IEEE Access 2019, 7, 57459–57468. [Google Scholar] [CrossRef]
  26. Gezici, H.; Livatyalı, H. Chaotic Harris Hawks Optimization Algorithm. J. Comput. Des. Eng. 2022, 9, 216–245. [Google Scholar] [CrossRef]
  27. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 695–701. [Google Scholar]
  28. Sahoo, S.K.; Saha, A.K.; Nama, S.; Masdari, M. An Improved Moth Flame Optimization Algorithm Based on Modified Dynamic Opposite Learning Strategy. Artif. Intell. Rev. 2023, 56, 2811–2869. [Google Scholar] [CrossRef]
  29. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-Based Differential Evolution. IEEE Trans. Evol. Computat. 2008, 12, 64–79. [Google Scholar] [CrossRef]
  30. Xu, Y.; Yang, Z.; Li, X.; Kang, H.; Yang, X. Dynamic Opposite Learning Enhanced Teaching–Learning-Based Optimization. Knowl.-Based Syst. 2020, 188, 104966. [Google Scholar] [CrossRef]
  31. Cao, D.; Xu, Y.; Yang, Z.; Dong, H.; Li, X. An Enhanced Whale Optimization Algorithm with Improved Dynamic Opposite Learning and Adaptive Inertia Weight Strategy. Complex Intell. Syst. 2023, 9, 767–795. [Google Scholar] [CrossRef]
  32. Wang, Y.; Jin, C.; Li, Q.; Hu, T.; Xu, Y.; Chen, C.; Zhang, Y.; Yang, Z. A Dynamic Opposite Learning-Assisted Grey Wolf Optimizer. Symmetry 2022, 14, 1871. [Google Scholar] [CrossRef]
  33. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing Particle Swarm Optimization Using Generalized Opposition-Based Learning. Inf. Sci. 2011, 181, 4699–4714. [Google Scholar] [CrossRef]
  34. Long, W.; Wu, T.; Cai, S.; Liang, X.; Jiao, J.; Xu, M. A Novel Grey Wolf Optimizer Algorithm with Refraction Learning. IEEE Access 2019, 7, 57805–57819. [Google Scholar] [CrossRef]
  35. Long, W.; Wu, T.; Jiao, J.; Tang, M.; Xu, M. Refraction-Learning-Based Whale Optimization Algorithm for High-Dimensional Problems and Parameter Estimation of PV Model. Eng. Appl. Artif. Intell. 2020, 89, 103457. [Google Scholar] [CrossRef]
  36. Adegboye, O.R.; Deniz Ülker, E. Hybrid Artificial Electric Field Employing Cuckoo Search Algorithm with Refraction Learning for Engineering Optimization Problems. Sci. Rep. 2023, 13, 4098. [Google Scholar] [CrossRef] [PubMed]
  37. Pan, Q.; Tang, J.; Lao, S. EDOA: An Elastic Deformation Optimization Algorithm. Appl. Intell. 2022, 52, 17580–17599. [Google Scholar] [CrossRef]
  38. Derrac, J.; García, S.; Molina, D.; Herrera, F. A Practical Tutorial on the Use of Nonparametric Statistical Tests as a Methodology for Comparing Evolutionary and Swarm Intelligence Algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. The dynamics of the inverted S-shaped escape energy.
Figure 2. The stochastic learning mechanism based on Gaussian mutation.
Figure 3. The specific process of the researchings.
Figure 4. The diagram of the refracted opposition-based learning.
Figure 5. The 3D surface graphs of the example test functions.
Figure 6. The convergence graphs of the 23 classical benchmark functions.
Figure 7. The convergence graphs of the IEEE CEC 2020 benchmark functions.
Table 1. The 23 classical benchmark functions.
Type | Definition | $S$ | $d$ | $F_{opt}$
Unimodal Test Functions | $f_1(X)=\sum_{i=1}^{d} x_i^2$ | $[-100,\,100]^d$ | 30 | 0
 | $f_2(X)=\sum_{i=1}^{d}\lvert x_i\rvert+\prod_{i=1}^{d}\lvert x_i\rvert$ | $[-10,\,10]^d$ | 30 | 0
 | $f_3(X)=\sum_{i=1}^{d}\bigl(\sum_{j=1}^{i} x_j\bigr)^2$ | $[-100,\,100]^d$ | 30 | 0
 | $f_4(X)=\max_i\{\lvert x_i\rvert,\ 1\le i\le d\}$ | $[-100,\,100]^d$ | 30 | 0
 | $f_5(X)=\sum_{i=1}^{d-1}\bigl[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\bigr]$ | $[-30,\,30]^d$ | 30 | 0
 | $f_6(X)=\sum_{i=1}^{d}(\lfloor x_i+0.5\rfloor)^2$ | $[-100,\,100]^d$ | 30 | 0
 | $f_7(X)=\sum_{i=1}^{d} i x_i^4+\mathrm{random}[0,1)$ | $[-1.28,\,1.28]^d$ | 30 | 0
Multimodal Test Functions | $f_8(X)=\sum_{i=1}^{d}-x_i\sin\bigl(\sqrt{\lvert x_i\rvert}\bigr)$ | $[-500,\,500]^d$ | 30 | $-418.9829\times d$
 | $f_9(X)=\sum_{i=1}^{d}\bigl[x_i^2-10\cos(2\pi x_i)+10\bigr]$ | $[-5.12,\,5.12]^d$ | 30 | 0
 | $f_{10}(X)=-20\exp\Bigl(-0.2\sqrt{\tfrac{1}{30}\sum_{i=1}^{d}x_i^2}\Bigr)-\exp\Bigl(\tfrac{1}{30}\sum_{i=1}^{d}\cos(2\pi x_i)\Bigr)+20+e$ | $[-32,\,32]^d$ | 30 | 0
 | $f_{11}(X)=\tfrac{1}{4000}\sum_{i=1}^{d}x_i^2-\prod_{i=1}^{d}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | $[-600,\,600]^d$ | 30 | 0
 | $f_{12}(X)=\tfrac{\pi}{d}\Bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{d-1}(y_i-1)^2\bigl[1+10\sin^2(\pi y_{i+1})\bigr]+(y_d-1)^2\Bigr\}+\sum_{i=1}^{d}u(x_i,10,100,4)$, $y_i=1+\tfrac{x_i+1}{4}$ | $[-50,\,50]^d$ | 30 | 0
 | $f_{13}(X)=0.1\Bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{d-1}(x_i-1)^2\bigl[1+\sin^2(3\pi x_{i+1})\bigr]+(x_d-1)^2\bigl[1+\sin^2(2\pi x_d)\bigr]\Bigr\}+\sum_{i=1}^{d}u(x_i,5,100,4)$ | $[-50,\,50]^d$ | 30 | 0
Multimodal Test Functions with Fixed Dimension | $f_{14}(X)=\Bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\Bigr)^{-1}$ | $[-65.53,\,65.53]^d$ | 2 | 1
 | $f_{15}(X)=\sum_{i=1}^{11}\Bigl[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\Bigr]^2$ | $[-5,\,5]^d$ | 4 | 0.0003
 | $f_{16}(X)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1 x_2-4x_2^2+4x_2^4$ | $[-5,\,5]^d$ | 2 | $-1.0316$
 | $f_{17}(X)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | $[-5,\,10]\times[0,\,15]$ | 2 | 0.398
 | $f_{18}(X)=\bigl[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\bigr]\times\bigl[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\bigr]$ | $[-5,\,5]^d$ | 2 | 3
 | $f_{19}(X)=-\sum_{i=1}^{4}c_i\exp\Bigl(-\sum_{j=1}^{d}a_{ij}(x_j-p_{ij})^2\Bigr)$ | $[0,\,1]^d$ | 3 | $-3.86$
 | $f_{20}(X)=-\sum_{i=1}^{4}c_i\exp\Bigl(-\sum_{j=1}^{d}a_{ij}(x_j-p_{ij})^2\Bigr)$ | $[0,\,1]^d$ | 6 | $-3.32$
 | $f_{21}(X)=-\sum_{i=1}^{5}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | $[0,\,10]^d$ | 4 | $-10.1532$
 | $f_{22}(X)=-\sum_{i=1}^{7}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | $[0,\,10]^d$ | 4 | $-10.4028$
 | $f_{23}(X)=-\sum_{i=1}^{10}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$ | $[0,\,10]^d$ | 4 | $-10.5363$
Table 2. The IEEE CEC 2020 benchmark functions.
Type | Symbol | Function Name | $S$ | $F_{opt}$
Unimodal Functions | $F_1$ | Shifted and Rotated Bent Cigar Function | $[-100,\,100]^d$ | 100
Basic Functions | $F_2$ | Shifted and Rotated Schwefel's Function | $[-100,\,100]^d$ | 1100
 | $F_3$ | Shifted and Rotated Lunacek bi-Rastrigin Function | $[-100,\,100]^d$ | 700
 | $F_4$ | Expanded Rosenbrock's plus Griewangk's Function | $[-100,\,100]^d$ | 1900
Hybrid Functions | $F_5$ | Hybrid Function 1 ($N=3$) | $[-100,\,100]^d$ | 1700
 | $F_6$ | Hybrid Function 2 ($N=4$) | $[-100,\,100]^d$ | 1600
 | $F_7$ | Hybrid Function 3 ($N=5$) | $[-100,\,100]^d$ | 2100
Composition Functions | $F_8$ | Composition Function 1 ($N=3$) | $[-100,\,100]^d$ | 2200
 | $F_9$ | Composition Function 2 ($N=4$) | $[-100,\,100]^d$ | 2400
 | $F_{10}$ | Composition Function 3 ($N=5$) | $[-100,\,100]^d$ | 2500
Table 3. The parameter settings of MSI-HHO.
Parameter Name | Parameter Symbol | 23 Classical Functions | IEEE CEC 2020 Functions
Mutation probability | $p_m$ | 1 | 0.7
Number of researchings | $n$ | 6 | 6
Mean of Gaussian distribution | $\mu$ | 1 | 1
STD of Gaussian distribution | $\sigma$ | 0.5 | 0.4
Refractive index | $k_n$ | 3.5 | 0.6
Table 4. The experimental results of the 23 classical benchmark functions.
Functions | Metric | GSA | DE | SOA | WOA | ABC | HHO | MSI-HHO
f 1 best2.1663 × 10−180001.1545 × 10200 *
std8.0505 × 10−19004.3530 × 1033.8134 × 10100
f 2 best7.2933 × 10−90003.5110 × 10000
std1.2135 × 10−9001.2290 × 1035.8342 × 10−100
f 3 best7.3487 × 10−185.6621 × 1037.7878 × 10−213.6275 × 10−111.4743 × 10400
std4.5585 × 10−186.7469 × 1037.5259 × 10−43.7058 × 1022.2542 × 10300
f 4 best8.6309 × 10−10003.9540 × 10−653.3299 × 10100
std1.3420 × 10−10008.3809 × 1023.5879 × 10000
f 5 best9.4979 × 10−142.8771 × 1017.5078 × 10−42.1133 × 1014.8574 × 1037.1889 × 10−101.7910 × 10−9
std2.9191 × 10−147.7532 × 10−21.4440 × 1012.2760 × 1053.8975 × 1033.7545 × 10−75.7432 × 10−8
+ +
f 6 best2.7785 × 10−186.7830 × 1001.3141 × 10−21.6489 × 10−95.2036 × 1012.4492 × 10−115.2957 × 10−14
std7.9649 × 10−191.2116 × 10−12.2525 × 10−18.3293 × 1024.5092 × 1017.8445 × 10−92.5557 × 10−12
+
f 7 best3.0408 × 10−32.8232 × 10−88.4267 × 10−81.4995 × 10−62.3710 × 10−12.8281 × 10−82.7201 × 10−8
std1.2968 × 10−34.6625 × 10−72.1441 × 10−62.3645 × 1051.6949 × 10−19.7979 × 10−77.2272 × 10−7
f 8 best8.9181 × 1037.7194 × 1033.8183 × 10−44.0727 × 10−42.4841 × 1033.8183 × 10−43.8183 × 10−4
std4.3870 × 1023.7041 × 1022.5048 × 10−76.6199 × 1031.9027 × 1024.8243 × 10−59.0892 × 10−11
f 9 best3.9798 × 1000003.4096 × 10100
std2.6022 × 100001.2443 × 1037.2091 × 10000
f 10 best1.2031 × 10−98.8818 × 10−168.8818 × 10−168.8818 × 10−165.0069 × 1008.8818 × 10−168.8818 × 10−16
std1.4709 × 10−10001.3490 × 1035.7356 × 10−100
f 11 best00001.6739 × 10000
std3.3453 × 10−30004.2819 × 10−100
f 12 best1.7715 × 10−209.5627 × 10−11.7949 × 10−44.4589 × 10−104.5812 × 10−12.6631 × 10−134.0182 × 10−14
std4.1967 × 10−211.9320 × 10−11.3598 × 10−25.3233 × 10−104.0680 × 10−19.4014 × 10−104.4993 × 10−13
+
f 13 best2.3447 × 10−192.8126 × 1001.5226 × 10−31.0464 × 10−82.7463 × 1001.9310 × 10−127.6478 × 10−13
std8.1704 × 10−202.7897 × 10−24.0115 × 10−27.2755 × 10−91.9305 × 1004.5732 × 10−98.5172 × 10−12
+
f 14 best3.8742 × 10−69.9431 × 10−13.8378 × 10−63.8378 × 10−63.8378 × 10−63.8378 × 10−63.8378 × 10−6
std1.2126 × 1003.2853 × 1004.5517 × 1002.0367 × 10−136.4073 × 10−164.7203 × 10−132.9458 × 10−16
f 15 best9.6812 × 10−41.8337 × 10−22.5106 × 10−59.6928 × 10−68.2069 × 10−57.4910 × 10−67.4862 × 10−6
std6.9165 × 10−43.3202 × 10−25.5961 × 10−48.2550 × 10−68.9288 × 10−51.9848 × 10−61.8259 × 10−7
f 16 best4.6510 × 10−83.1629 × 10−24.9261 × 10−84.6510 × 10−84.6510 × 10−84.6510 × 10−84.6510 × 10−8
std1.7104 × 10−167.1979 × 10−28.8371 × 10−81.8244 × 10−141.0265 × 10−158.9067 × 10−161.0571 × 10−16
f 17 best3.5773 × 10−73.5773 × 10−74.1391 × 10−73.5773 × 10−73.5773 × 10−73.5773 × 10−73.5773 × 10−7
std01.2796 × 10−148.6196 × 10−65.1759 × 10−93.4609 × 10−102.1514 × 10−103.3679 × 10−10
f 18 best1.3323 × 10−155.1215 × 10−31.1356 × 10−91.1954 × 10−91.2398 × 10−62.6645 × 10−151.3323 × 10−15
std1.2889 × 10−154.1432 × 1011.1254 × 1013.7920 × 10−73.1667 × 10−43.1841 × 10−112.0492 × 10−11
f 19 best1.7924 × 10−101.3910 × 10−21.5399 × 10−62.6202 × 10−71.7929 × 10−101.7926 × 10−101.7924 × 10−10
std2.8556 × 10−12.9142 × 10−19.2239 × 10−23.1764 × 10−31.7896 × 10−123.2756 × 10−64.4632 × 10−10
f 20 best3.8991 × 10−13.6173 × 10−14.0555 × 10−21.3331 × 10−81.2662 × 10−78.5138 × 10−61.5761 × 10−11
std5.7099 × 10−13.7214 × 10−11.1503 × 10−15.3179 × 10−21.3695 × 10−66.2794 × 10−25.8246 × 10−2
f 21 best5.0980 × 1009.0550 × 1001.1273 × 10−16.7876 × 10−71.8791 × 10−51.1224 × 10−53.2094 × 10−7
std9.0649 × 10−161.4581 × 10−13.1684 × 1002.7660 × 10−67.3683 × 10−41.6907 × 1007.9255 × 10−13
f 22 best1.3340 × 10−128.3837 × 1001.4427 × 10−32.3814 × 10−72.5690 × 10−51.3601 × 10−51.3376 × 10−12
std2.6576 × 1003.1831 × 10−12.7502 × 1004.1971 × 10−63.6355 × 10−42.1699 × 1002.6357 × 10−13
+
f 23 best3.0796 × 10−107.5041 × 1008.3103 × 10−32.6009 × 10−71.4752 × 10−59.9230 × 10−63.0796 × 10−10
std8.8818 × 10−165.0624 × 10−13.4986 × 1007.8253 × 10−75.2648 × 10−41.7936 × 1001.9674 × 10−12
Mean time cost (s) | 8.7509 × 10^2 | 5.6206 × 10^2 | 1.9537 × 10^2 | 4.3847 × 10^1 | 4.3891 × 10^2 | 3.8060 × 10^2 | 7.6101 × 10^2
+ | 3 | 0 | 0 | 0 | 0 | 1
− | 12 | 16 | 15 | 15 | 20 | 11
≈ | 8 | 7 | 8 | 8 | 3 | 11
* The best results are highlighted in bold.
Table 5. The experimental results of the IEEE CEC 2020 benchmark functions.
Functions | Metric | GSA | DE | SOA | WOA | ABC | HHO | MSI-HHO
F 1 Best5.5988 × 1017.3509 × 1096.8047 × 1081.0096 × 1021.0033 × 1065.7539 × 1041.0575 × 101
Median3.0493 × 1031.2064 × 10101.1507 × 10101.1338 × 1035.4545 × 1062.9392 × 1054.7133 × 103
Mean3.5691 × 1031.2413 × 10101.1295 × 10106.0608 × 1035.3996 × 1063.0048 × 1057.8990 × 103
Worst8.1534 × 1031.6424 × 10101.9877 × 10102.6579 × 1049.1827 × 1066.1484 × 1052.6267 × 104
STD2.3956 × 1032.0794 × 1094.6156 × 1098.5480 × 1032.1790 × 1061.5090 × 1058.7253 × 103
F 2 Best6.6981 × 1022.2479 × 1031.4935 × 1035.1135 × 1023.5075 × 1024.6776 × 1022.7102 × 102
Median1.2823 × 1033.0599 × 1032.4841 × 1031.5239 × 1035.2103 × 1027.6539 × 1029.6634 × 102
Mean1.2065 × 1033.0451 × 1032.4324 × 1031.5510 × 1035.0268 × 1027.9812 × 1029.4318 × 102
Worst1.6942 × 1033.4889 × 1033.5122 × 1032.5810 × 1035.8770 × 1021.4017 × 1031.6516 × 103
STD2.7194 × 1022.8658 × 1025.3348 × 1025.1268 × 1027.3681 × 1012.3444 × 1023.2384 × 102
F 3 Best1.6157 × 1012.2403 × 1021.6647 × 1027.0456 × 1014.1040 × 1016.8793 × 1012.9487 × 101
Median1.8899 × 1012.5489 × 1022.3748 × 1021.3547 × 1025.2135 × 1019.5479 × 1015.5181 × 101
Mean1.9177 × 1012.5229 × 1022.4167 × 1021.4528 × 1025.1621 × 1019.4575 × 1015.5738 × 101
Worst2.3194 × 1012.8456 × 1023.3526 × 1022.5283 × 1026.2170 × 1011.2609 × 1029.1986 × 101
STD1.9613 × 1001.6545 × 1013.9385 × 1014.3026 × 1014.8802 × 1001.6380 × 1011.3143 × 101
+
F 4 Best1.3659 × 1003.2797 × 1032.8125 × 1021.0779 × 1014.9237 × 1005.1484 × 1001.1967 × 100
Median2.1800 × 1001.1466 × 1052.7657 × 1042.0325 × 1016.4668 × 1001.0829 × 1012.3761 × 100
Mean2.1238 × 1001.6292 × 1051.0169 × 1052.2161 × 1016.4713 × 1001.1962 × 1012.5347 × 100
Worst3.0916 × 1004.6534 × 1057.8701 × 1054.8286 × 1018.0445 × 1002.1662 × 1015.2632 × 100
STD4.5315 × 10−11.3511 × 1051.6494 × 1058.5941 × 1007.8202 × 10−13.8459 × 1008.9963 × 10−1
F 5 Best1.5303 × 1048.6617 × 1053.3272 × 1041.2443 × 1041.8117 × 1044.3957 × 1031.0891 × 103
Median2.8899 × 1041.4678 × 1073.7275 × 1063.4682 × 1051.2264 × 1053.6159 × 1048.8460 × 104
Mean3.8180 × 1041.8857 × 1076.7618 × 1063.4007 × 1051.2871 × 1054.5072 × 1041.0390 × 105
Worst8.2838 × 1046.0133 × 1073.2028 × 1078.1066 × 1052.9435 × 1051.3118 × 1053.1202 × 105
STD2.1075 × 1041.5005 × 1077.8370 × 1062.2850 × 1057.4350 × 1043.3436 × 1049.2549 × 104
F 6 Best2.2897 × 10−18.1139 × 10−13.1238 × 1008.2728 × 10−17.2963 × 1001.4182 × 1002.3456 × 100
Median1.8768 × 1021.1425 × 1004.1412 × 1022.8091 × 1022.1242 × 1023.0688 × 1022.7791 × 102
Mean2.5674 × 1021.6453 × 1013.4454 × 1023.1365 × 1022.6529 × 1022.8833 × 1023.0586 × 102
Worst8.2812 × 1023.6210 × 1028.1601 × 1028.3667 × 1026.7793 × 1027.1999 × 1028.3753 × 102
STD2.1717 × 1027.2105 × 1012.0516 × 1022.2834 × 1022.2105 × 1022.1721 × 1022.5097 × 102
+ + + +
F 7 Best5.3783 × 1032.2291 × 1065.3934 × 1041.2860 × 1041.9904 × 1041.0888 × 1037.3490 × 102
Median1.1419 × 1048.4098 × 1065.1032 × 1061.6106 × 1055.7226 × 1041.4781 × 1042.3625 × 104
Mean1.2375 × 1049.5788 × 1067.6309 × 1062.9009 × 1055.5705 × 1042.0908 × 1044.0370 × 104
Worst2.3654 × 1042.1848 × 1071.7426 × 1072.0167 × 1069.5273 × 1047.2776 × 1042.2392 × 105
STD5.3778 × 1035.2526 × 1066.6471 × 1064.1988 × 1052.0233 × 1041.8667 × 1045.1839 × 104
F 8 Best1.0000 × 1026.9290 × 1028.6976 × 1028.6908 × 1014.7240 × 1011.0453 × 1022.8266 × 101
Median1.0000 × 1021.1264 × 1031.4628 × 1031.0263 × 1028.3694 × 1011.0894 × 1021.0110 × 102
Mean1.0000 × 1021.2246 × 1031.5876 × 1033.6438 × 1028.0989 × 1011.3925 × 1029.8460 × 101
Worst1.0000 × 1021.8587 × 1032.8844 × 1032.5218 × 1039.6042 × 1018.7791 × 1021.0391 × 102
STD7.0829 × 10−112.8082 × 1025.1542 × 1027.2974 × 1021.1983 × 1011.5390 × 1021.4664 × 101
F 9 Best1.0000 × 1026.2937 × 1024.0308 × 1021.0011 × 1021.8625 × 1021.0164 × 1021.0063 × 102
Median3.0000 × 1028.0849 × 1026.1034 × 1024.6517 × 1022.0930 × 1024.2768 × 1024.1919 × 102
Mean2.8512 × 1028.1348 × 1026.3587 × 1024.5668 × 1022.0947 × 1024.1660 × 1024.0783 × 102
Worst4.3607 × 1029.9991 × 1029.9721 × 1025.5090 × 1022.3270 × 1024.5792 × 1024.4910 × 102
STD1.3774 × 1029.8935 × 1011.3481 × 1027.9980 × 1011.2814 × 1016.7068 × 1016.5395 × 101
+ +
F 10 Best4.0000 × 1021.1409 × 1038.4520 × 1024.0030 × 1024.6424 × 1024.0162 × 1024.0000 × 102
Median4.0000 × 1021.5750 × 1031.5890 × 1036.2216 × 1025.3277 × 1024.0277 × 1025.5217 × 102
Mean4.0000 × 1021.6922 × 1031.6215 × 1035.9767 × 1025.3221 × 1024.6107 × 1025.1171 × 102
Worst4.0000 × 1022.6111 × 1032.7046 × 1036.8228 × 1025.9776 × 1026.2396 × 1026.7400 × 102
STD4.5306 × 10−103.8039 × 1024.1198 × 1027.5617 × 1012.8090 × 1019.6270 × 1011.0515 × 102
+
Mean time cost (s) | 2.2682 × 10^2 | 1.9170 × 10^2 | 5.9314 × 10^1 | 6.1687 × 10^1 | 1.8615 × 10^2 | 1.3610 × 10^2 | 2.0677 × 10^2
+ | 4 | 1 | 0 | 2 | 0 | 1
− | 6 | 9 | 10 | 8 | 10 | 9
≈ | 0 | 0 | 0 | 0 | 0 | 0
The best results are highlighted in bold.
Table 6. The Wilcoxon signed-rank test results of the 23 classical benchmark functions.
Comparison | R+ | R− | p-Value
MSI-HHO vs. GSA | 146 | 44 | 0.023907
MSI-HHO vs. DE | 136 | 0 | 0.000321
MSI-HHO vs. SOA | 150 | 3 | 0.000355
MSI-HHO vs. WOA | 171 | 0 | 0.000143
MSI-HHO vs. ABC | 276 | 0 | 0.000091
MSI-HHO vs. HHO | 97 | 8 | 0.003445
Table 7. The Friedman test results of the 23 classical benchmark functions.
Functions | GSA | DE | SOA | WOA | ABC | HHO | MSI-HHO
$f_1$ | 6 | 3 | 3 | 3 | 7 | 3 | 3
$f_2$ | 6 | 3 | 3 | 3 | 7 | 3 | 3
$f_3$ | 4 | 6 | 3 | 5 | 7 | 1.5 | 1.5
$f_4$ | 6 | 2.5 | 2.5 | 5 | 7 | 2.5 | 2.5
$f_5$ | 1 | 6 | 4 | 5 | 7 | 2 | 3
$f_6$ | 1 | 6 | 5 | 4 | 7 | 3 | 2
$f_7$ | 6 | 2 | 4 | 5 | 7 | 3 | 1
$f_8$ | 7 | 6 | 1 | 4 | 5 | 3 | 2
$f_9$ | 6 | 3 | 3 | 3 | 7 | 3 | 3
$f_{10}$ | 6 | 3 | 3 | 3 | 7 | 3 | 3
$f_{11}$ | 3.5 | 3.5 | 3.5 | 3.5 | 7 | 3.5 | 3.5
$f_{12}$ | 1 | 7 | 5 | 4 | 6 | 3 | 2
$f_{13}$ | 1 | 7 | 5 | 4 | 6 | 3 | 2
$f_{14}$ | 6 | 7 | 5 | 4 | 3 | 2 | 1
$f_{15}$ | 6 | 7 | 4 | 3 | 5 | 2 | 1
$f_{16}$ | 2 | 7 | 6 | 4.5 | 4.5 | 2 | 2
$f_{17}$ | 2.5 | 2.5 | 7 | 5 | 6 | 2.5 | 2.5
$f_{18}$ | 1.5 | 7 | 4 | 5 | 6 | 3 | 1.5
$f_{19}$ | 1 | 7 | 6 | 5 | 4 | 3 | 2
$f_{20}$ | 7 | 6 | 5 | 2 | 3 | 4 | 1
$f_{21}$ | 6 | 7 | 5 | 2 | 4 | 3 | 1
$f_{22}$ | 1 | 7 | 6 | 3 | 5 | 4 | 2
$f_{23}$ | 1 | 7 | 6 | 3 | 5 | 4 | 2
Rank sum | 88.5 | 122.5 | 99 | 88 | 132.5 | 66 | 47.5
Friedman rank | 3.85 | 5.33 | 4.30 | 3.83 | 5.76 | 2.87 | 2.07
General rank | 4 | 6 | 5 | 3 | 7 | 2 | 1
Table 8. The Wilcoxon signed-rank test results of the IEEE CEC 2020 benchmark functions.
Comparison | R+ | R− | p-Value
MSI-HHO vs. GSA | 42 | 13 | 0.465
MSI-HHO vs. DE | 54 | 1 | 0.067
MSI-HHO vs. SOA | 55 | 0 | 0.054
MSI-HHO vs. WOA | 50 | 5 | 0.147
MSI-HHO vs. ABC | 55 | 0 | 0.054
MSI-HHO vs. HHO | 54 | 1 | 0.067
Table 9. The Friedman test results of the IEEE CEC 2020 benchmark functions.
Functions | GSA | DE | SOA | WOA | ABC | HHO | MSI-HHO
$F_1$ | 2 | 7 | 6 | 3 | 5 | 4 | 1
$F_2$ | 5 | 7 | 6 | 4 | 2 | 3 | 1
$F_3$ | 1 | 7 | 6 | 5 | 3 | 4 | 2
$F_4$ | 2 | 7 | 6 | 5 | 3 | 4 | 1
$F_5$ | 4 | 7 | 6 | 3 | 5 | 2 | 1
$F_6$ | 1 | 2 | 6 | 3 | 7 | 4 | 5
$F_7$ | 3 | 7 | 6 | 4 | 5 | 2 | 1
$F_8$ | 4 | 6 | 7 | 3 | 2 | 5 | 1
$F_9$ | 1 | 7 | 6 | 2 | 5 | 4 | 3
$F_{10}$ | 1 | 7 | 6 | 3 | 5 | 4 | 2
Rank sum | 24 | 64 | 61 | 35 | 42 | 36 | 18
Friedman rank | 2.4 | 6.4 | 6.1 | 3.5 | 4.2 | 3.6 | 1.8
General rank | 2 | 7 | 6 | 3 | 5 | 4 | 1