Article

Adaptive Relative Reflection Harris Hawks Optimization for Global Optimization

Information Science and Technology College, Dalian Maritime University, Dalian 116026, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1145; https://doi.org/10.3390/math10071145
Submission received: 28 February 2022 / Revised: 24 March 2022 / Accepted: 30 March 2022 / Published: 2 April 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

The Harris Hawks optimization (HHO) algorithm is a population-based metaheuristic; however, it suffers from low diversity and premature convergence on certain problems. This paper proposes an adaptive relative reflection HHO (ARHHO), which increases the diversity of standard HHO, alleviates stagnation in local optimal solutions, and improves the search accuracy of the algorithm. The main features of the algorithm are a nonlinear escape energy, an adaptive weight, and the combination of adaptive relative reflection with the HHO algorithm. Furthermore, we analyze the computational complexity of the ARHHO algorithm. Finally, the performance of our algorithm is evaluated by comparison with other well-known metaheuristic algorithms on 23 benchmark problems. Experimental results show that our algorithm performs better than the compared algorithms on most of the benchmark functions.

1. Introduction

Optimization is the process of finding the best solution from the available set of possible solutions [1]. Recently, optimization problems have arisen widely in many areas, such as engineering [2], supply chains [3,4], scheduling [5], and so on [6]. Traditional optimization methods require accurate mathematical models of the problems to be solved. However, as the complexity of optimization problems increases, it becomes difficult to provide such models. Therefore, many researchers have focused on metaheuristic methods, whose advantages of few parameters, strong robustness, simplicity, high performance, and low computing time have attracted the attention of the research community. According to their nature, optimization algorithms can be categorized into deterministic and stochastic methods [7,8]. Deterministic methods search for local minima according to a fixed deterministic strategy and try to escape the obtained local minima to reach the global optimum. In contrast to deterministic methods, stochastic methods typically use uniformly random positions to guide their search for globally optimal solutions. Metaheuristics are stochastic methods [9].
Most metaheuristic algorithms are inspired by the real world and have been widely used in engineering applications [10], resource allocation [11], feature selection [12], cluster analysis [13], and many other fields. Metaheuristic algorithms can be divided into three categories: swarm intelligence algorithms, evolution-related algorithms, and physics-based algorithms. Firstly, swarm intelligence algorithms are inspired by natural animal and plant populations and mimic the competition and cooperation among members of a population during foraging. Each individual in the population represents a candidate solution of the optimization problem, and each individual moves toward the prey, that is, the optimal solution, based on the information obtained by itself and its companions during foraging. Some of the well-known swarm intelligence algorithms are the particle swarm optimization algorithm (PSO) [14], grey wolf optimizer (GWO) [15], whale optimization algorithm (WOA) [16], cuckoo search (CS) [17], slime mould algorithm (SMA) [18], ant colony optimization (ACO) [19], salp swarm algorithm (SSA) [20], artificial tree (AT) algorithm [21], sine cosine algorithm (SCA) [22], and moth-flame optimization algorithm (MFO) [23]. Secondly, evolution-related algorithms are inspired by biological evolution in nature and Darwin's survival of the fittest, and mimic the mutation and crossover of organisms during evolution. Typical algorithms are the genetic algorithm (GA) [24], differential evolution (DE) [25], and genetic programming (GP) [26]. Thirdly, physics-based algorithms are inspired by physical phenomena in the real world and are formed by simulating the corresponding physical processes. Such algorithms are grounded in physical laws, can easily jump out of local optima, and can quickly converge to the optimal solution. There are many related algorithms, such as the gravitational search algorithm (GSA) [27], simulated annealing (SA) [28], multi-verse optimization algorithm (MVO) [29], atom search optimization (ASO) [30], water wave optimization (WWO) [31], charged system search (CSS) [32], and heat transfer search (HTS) [33].
The Harris Hawks optimization algorithm (HHO) [34] was proposed by Heidari et al. in 2019 and was inspired by the cooperative exploration and hunting of prey by Harris hawk populations in nature. Because HHO has few control parameters together with high search efficiency and accuracy, it has attracted many researchers' attention. In recent years, HHO has been widely applied to neural networks [35], water distribution system optimization [36], photovoltaic model parameter identification [37], parameter estimation of the single-diode PV model [38], optimization of controller parameters in power systems [39], image segmentation [40], and so on. Recently, many researchers have focused on modifying HHO to improve its performance. Qu, He, and Peng introduced an information exchange strategy into the HHO algorithm [41]; individuals of the population can exchange and share information, which improves the convergence speed of the algorithm. Fan, Chen, and Xia proposed a new quasi-reflection strategy [42], which randomly updates between the current position of an individual and the center position of the search space and helps HHO jump out of local optimal solutions. Kaveh and Rahmani proposed a hybrid algorithm based on HHO and the imperialist competitive algorithm (ICA) [43]; the ICA helps HHO perform effective exploration, while HHO quickly converges to the optimal solution. Gupta et al. introduced four strategies into HHO: nonlinear escape energy, differential rapid dives, greedy selection, and opposition-based learning [44]. Zhang and Zhou studied the escape energy parameter in HHO and proposed six different improvement schemes [45]. Al-Betar et al. combined the survival-of-the-fittest strategy of evolutionary algorithms with HHO and introduced three improved selection strategies (i.e., tournament, proportional, and linear rank-based methods) into the exploration phase of HHO [46]. Kamboj and Nandi combined HHO and SCA to help HHO jump out of local optima and improve its convergence speed [47].
However, the HHO algorithm can sometimes fall into local optima or converge prematurely on certain problems. Therefore, we further improve HHO and propose an adaptive relative reflection Harris Hawks optimization algorithm (ARHHO) to better balance exploitation and exploration. Firstly, a nonlinear escape energy is proposed to increase the diversity of the algorithm. Secondly, we present an adaptive weight in the exploitation phase, which improves the search accuracy of the algorithm. Moreover, we propose adaptive relative reflection to help the algorithm jump out of local optima. Finally, we present the pseudo code of our algorithm and analyze its computational complexity. To validate the performance of our algorithm, we conduct intensive experiments on commonly used benchmark problems. The results show that our algorithm is superior to the compared algorithms.
The rest of this paper is organized as follows: Section 2 reviews the standard HHO algorithm. Section 3 describes the proposed ARHHO algorithm in detail. Section 4 gives the comparative experimental results and analysis. Finally, Section 5 summarizes the overall work and outlines future research directions.

2. Harris Hawks Optimization

The transition from exploration to exploitation in the HHO algorithm is mainly controlled by the escape energy E, a key parameter computed as shown in Equation (1):
E = 2E_0 \left( 1 - \frac{k}{K_{max}} \right) \quad (1)
where E represents the escape energy of the prey, E_0 is a random number in the interval (−1, 1), and k and K_max represent the current iteration number and the maximum number of iterations, respectively.

2.1. Exploration Phase

In this phase of HHO, each Harris hawk either randomly perches on a tall tree or chooses a location based on the positions of other members of the population, and searches for prey according to these two strategies. The formula is shown in Equation (2):
X(k+1) = \begin{cases} X_{rand}(k) - r_1 \left| X_{rand}(k) - 2 r_2 X(k) \right|, & q \ge 0.5 \\ \left( X_{best}(k) - X_m(k) \right) - r_3 \left( L_b + r_4 (U_b - L_b) \right), & q < 0.5 \end{cases} \quad (2)
where X(k+1) is the position vector of the hawk at the (k+1)-th iteration, X(k) is the position vector of the hawk at the k-th iteration, X_rand(k) is the position vector of a hawk randomly chosen from the current population, X_best(k) is the position vector of the prey (the fittest solution), L_b and U_b denote the lower and upper bounds of the variables, respectively, r_1, r_2, r_3, r_4, and q are random numbers in the interval (0, 1), and X_m(k) is the average position of the hawk population at the k-th iteration, calculated as follows:
X_m(k) = \frac{1}{N} \sum_{i=1}^{N} X_i(k) \quad (3)
where X_i(k) is the position vector of the i-th hawk at the k-th iteration, and N is the total number of hawks.
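As an illustrative sketch (not the authors' implementation), the exploration update of Equations (2) and (3) can be written in a few lines of NumPy; the array shapes, the helper name exploration_step, and the bound values in the usage example are assumptions.

import numpy as np

def exploration_step(X, X_best, lb, ub, i, rng):
    """One exploration move for hawk i, following Equations (2) and (3); sketch only."""
    N, D = X.shape
    q, r1, r2, r3, r4 = rng.random(5)
    X_rand = X[rng.integers(N)]        # position of a randomly chosen hawk
    X_mean = X.mean(axis=0)            # Equation (3): average position of the flock
    if q >= 0.5:
        # perch relative to a random member of the family
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # perch relative to the best hawk and the mean position of the flock
    return (X_best - X_mean) - r3 * (lb + r4 * (ub - lb))

# usage with an assumed population size and problem bounds
rng = np.random.default_rng(0)
X = rng.uniform(-100.0, 100.0, size=(30, 10))
new_position = exploration_step(X, X[0], -100.0, 100.0, i=3, rng=rng)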

2.2. Exploitation Phase

In the exploitation stage, the HHO algorithm simulates the hawks' behavior when preying on the rabbit and selects among four besiege strategies according to two variables: the probability r and the escape energy E.

2.2.1. Soft Besiege

When r ≥ 0.5 and |E| ≥ 0.5, the prey has sufficient energy but no chance to escape. In this case, the Harris hawks perform a soft besiege, described as follows:
X(k+1) = \Delta X(k) - E \left| J X_{best}(k) - X(k) \right| \quad (4)
\Delta X(k) = X_{best}(k) - X(k) \quad (5)
where ΔX(k) is the difference between the position of the prey and the hawk at the k-th iteration, J = 2(1 − r_5) is the jump strength of the prey during the escape, and r_5 is a random number in the interval (0, 1).
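A minimal sketch of the soft besiege move in Equations (4) and (5), again with illustrative names rather than the authors' code:

import numpy as np

def soft_besiege(x, x_best, E, rng):
    """Soft besiege of Equations (4) and (5): the prey keeps enough energy, so the hawk encircles it softly."""
    J = 2.0 * (1.0 - rng.random())              # jump strength of the escaping prey, J = 2(1 - r5)
    delta = x_best - x                          # Equation (5): difference between prey and hawk
    return delta - E * np.abs(J * x_best - x)   # Equation (4)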

2.2.2. Hard Besiege

When r ≥ 0.5 and |E| < 0.5, the prey has insufficient energy and no chance to escape successfully. In this case, the Harris hawks perform a hard besiege, based on Equation (6):
X(k+1) = X_{best}(k) - E \left| \Delta X(k) \right| \quad (6)

2.2.3. Soft Besiege with Progressive Rapid Dives

When r < 0.5 and |E| ≥ 0.5, the prey has sufficient energy and may escape successfully, performing deceptive maneuvers during its escape. The Harris hawks therefore correct their positions with respect to the prey and make their next move according to Equations (7) and (8):
Y = X_{best}(k) - E \left| J X_{best}(k) - X(k) \right| \quad (7)
Z = Y + S \times LF(D) \quad (8)
where S is a random vector of size 1 × D, D is the dimension of the problem, and LF is the Lévy flight function, described as follows:
LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \quad (9)
where u and v are random numbers in the interval (0, 1), and β is a constant set to 1.5 by default.
Therefore, the final strategy for updating the position of the Hawks is as follows:
X(k+1) = \begin{cases} Y, & \text{if } f(Y) < f(X(k)) \\ Z, & \text{if } f(Z) < f(X(k)) \end{cases} \quad (10)
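The Lévy flight of Equation (9) and the dive selection of Equation (10) can be sketched as follows. Equation (9) is transcribed literally, with u and v uniform in (0, 1) as stated above; note that many HHO implementations instead draw u and v from normal distributions (Mantegna's algorithm). All function names are illustrative assumptions.

import numpy as np
from math import gamma, pi, sin

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight vector of length D, following Equation (9)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)

def soft_besiege_rapid_dives(x, x_best, E, J, f, rng):
    """Soft besiege with progressive rapid dives (Equations (7), (8) and (10)); sketch only."""
    D = x.size
    Y = x_best - E * np.abs(J * x_best - x)           # Equation (7): first tentative dive
    Z = Y + rng.random(D) * levy_flight(D, rng=rng)   # Equation (8): Levy-flight-based dive
    if f(Y) < f(x):                                   # Equation (10): greedy selection of the better dive
        return Y
    if f(Z) < f(x):
        return Z
    return x                                          # keep the current position if neither dive improves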

2.2.4. Hard Besiege with Progressive Rapid Dives

When r < 0.5 and |E| < 0.5, the prey has a chance to escape but insufficient escape energy. The Harris hawks attempt to decrease the distance to the prey through progressive rapid dives and construct a hard besiege before the surprise pounce to kill the prey. This behavior is performed based on Equation (11):
X(k+1) = \begin{cases} Y, & \text{if } f(Y) < f(X(k)) \\ Z, & \text{if } f(Z) < f(X(k)) \end{cases} \quad (11)
where Y and Z are obtained by the following Equations (12) and (13):
Y = X_{best}(k) - E \left| J X_{best}(k) - X_m(k) \right| \quad (12)
Z = Y + S \times LF(D) \quad (13)
The pseudo-code containing all the steps of the standard HHO algorithm is shown in Algorithm 1, and the flow of the search process is presented in Figure 1.
Algorithm 1 The standard HHO algorithm
Initialize the population X_i (i = 1, 2, …, N)
While k < K_max
  Calculate the fitness values of the hawks
  Set X_best as the location of the best solution
  For i = 1 to N
    Update the initial energy E_0, the escape energy E and the jump strength J
    If (|E| ≥ 1)
      Update the location vector of the current search agent using Equation (2)
    If (|E| < 1)
      If (r ≥ 0.5 and |E| ≥ 0.5)
        Update the location vector of the current search agent using Equation (4)
      elseif (r ≥ 0.5 and |E| < 0.5)
        Update the location vector of the current search agent using Equation (6)
      elseif (r < 0.5 and |E| ≥ 0.5)
        Update the location vector of the current search agent using Equation (10)
      elseif (r < 0.5 and |E| < 0.5)
        Update the location vector of the current search agent using Equation (11)
  k = k + 1
end while
Return X_best

3. The Proposed ARHHO

In this section, we propose the adaptive relative reflection Harris Hawks optimization algorithm (ARHHO). Firstly, a nonlinear escape energy is introduced to better reflect the actual search process. Secondly, we present an adaptive weight in the exploitation stage to improve the search accuracy of the algorithm. Then we propose adaptive relative reflection to help the algorithm jump out of local optima. Finally, we analyze the complexity of the algorithm.

3.1. Nonlinear Escape Energy

Escape energy usually guides the algorithm towards exploitation as the iteration value increases. If the value of escape energy is large, then the algorithm is beneficial to global exploration. If the value of escape energy is small, then the algorithm is beneficial to local exploitation. Therefore, choosing a suitable escape energy value is important to the algorithm, which creates a balance between exploitation and exploration.
The escape energy in the HHO algorithm is a linear function: in the early stage the algorithm is in the exploration phase, and in the middle and later stages it is in the exploitation phase. However, HHO has low diversity and easily falls into local optima. Therefore, we improve the escape energy and propose a nonlinear escape energy, as shown in Equation (14):
E = 2E_0 \left( 1 - \left( \frac{k}{K_{max}} \right)^2 \right) \quad (14)
where E_0 indicates a random number from the interval (−1, 1), k represents the current iteration number, and K_max represents the maximum number of iterations.
The nonlinear escape energy enhances diversity more than the linear escape energy, as shown in Figure 2. The linear escape energy guides the algorithm to search the whole solution space before roughly 150 iterations and to fully exploit afterwards, so it can cause the algorithm to fall into local optima and converge prematurely. In contrast, the nonlinear escape energy guides the algorithm to search the whole solution space before roughly 350 iterations and to fully exploit afterwards. Therefore, the nonlinear escape energy is beneficial for promoting diversity and jumping out of local optima.
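The difference between the two schedules can be illustrated with a few lines of Python; K_max = 500 mirrors the experimental setup in Section 4, and the counts printed here describe the |E| envelope (the factor multiplying the random E_0), so they need not coincide exactly with the sampled curves in Figure 2.

import numpy as np

K_max = 500
k = np.arange(K_max + 1)

# Envelope of |E|, i.e. the factor that multiplies the random E0 drawn from (-1, 1).
env_linear = 2 * (1 - k / K_max)              # Equation (1): standard HHO
env_nonlinear = 2 * (1 - (k / K_max) ** 2)    # Equation (14): ARHHO

# Exploration (|E| >= 1) remains reachable while the envelope is >= 1;
# the nonlinear schedule keeps this window open for more iterations.
print("linear exploration window:   ", int((env_linear >= 1).sum()), "iterations")
print("nonlinear exploration window:", int((env_nonlinear >= 1).sum()), "iterations")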

3.2. Adaptive Weight

In the exploitation phase of the standard HHO algorithm, the position of the search agent is updated according to the globally optimal solution found so far by the population. However, it is difficult to estimate the distance between the individual and the optimal solution. Therefore, we propose an adaptive weight, as shown in Equation (15):
\omega = \omega_{max} - (\omega_{max} - \omega_{min}) \times \left( \frac{k}{K_{max}} \right)^{e} \quad (15)
where ω_max and ω_min are the initial and final values of the weight ω, k represents the current iteration number, and K_max represents the maximum number of iterations.
Therefore, the exploitation stage of the ARHHO algorithm can be expressed as follows:
1. Soft besiege
X(k+1) = \omega X_{best}(k) - X(k) - E \left| J X_{best}(k) - X(k) \right| \quad (16)
2. Hard besiege
X(k+1) = \omega X_{best}(k) - E \left| X_{best}(k) - X(k) \right| \quad (17)
3. Soft besiege with progressive rapid dives
Y_{new} = \omega X_{best}(k) - E \left| J X_{best}(k) - X(k) \right| \quad (18)
Z_{new} = Y_{new} + S \times LF(D) \quad (19)
Therefore, the final strategy for updating the position of the hawks during the soft besiege with progressive rapid dives is as follows.
X(k+1) = \begin{cases} Y_{new}, & \text{if } f(Y_{new}) < f(X(k)) \\ Z_{new}, & \text{if } f(Z_{new}) < f(X(k)) \end{cases} \quad (20)
4. Hard besiege with progressive rapid dives
Y_{new} = \omega X_{best}(k) - E \left| J X_{best}(k) - X_m(k) \right| \quad (21)
Z_{new} = Y_{new} + S \times LF(D) \quad (22)
Therefore, the final strategy for updating the position of the hawks during the hard besiege with progressive rapid dives is as follows.
X(k+1) = \begin{cases} Y_{new}, & \text{if } f(Y_{new}) < f(X(k)) \\ Z_{new}, & \text{if } f(Z_{new}) < f(X(k)) \end{cases} \quad (23)
where ω is the adaptive inertia weight, X_best(k) is the position of the optimal solution found so far at the k-th iteration, X(k) is the position of the search agent in the population, E is the escape energy, J is the jump strength of the prey during escape, S is a random vector of size 1 × D, X_m(k) is the average position of all individuals in the current hawk population calculated by Equation (3), and LF is the Lévy flight function calculated by Equation (9).
In the early stage of the iteration, the algorithm tends toward exploration; a larger weight moves the updated position of an individual closer to the position of the current optimal solution, so the algorithm retains a certain exploitation ability. In the later stage, the algorithm tends toward exploitation; a smaller weight moves the updated position of an individual farther from the current optimal solution, so the algorithm retains a certain exploration ability. Therefore, the adaptive weight helps the algorithm to balance exploration and exploitation.
Figure 3 shows the change of the adaptive weight in the iterative process. In the early stage of the iteration, the adaptive weight is large. As the iteration progresses, the adaptive weight gradually decreases, which can well balance the exploration and exploitation phases of the algorithm and make the transition between exploration and exploitation smoother. Therefore, the adaptive weight effectively improves the search ability of the ARHHO algorithm, which has theoretical and practical significance.
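A short sketch of the weight schedule in Equation (15) and its use in the weighted hard besiege of Equation (17). The exponent is taken to be Euler's number e, as in the reconstruction of Equation (15) above, and ω_max = 0.9, ω_min = 0.4 follow Table 1; function names are illustrative.

import numpy as np

def adaptive_weight(k, K_max, w_max=0.9, w_min=0.4):
    """Adaptive inertia weight of Equation (15); decreases from w_max to w_min over the run."""
    return w_max - (w_max - w_min) * (k / K_max) ** np.e

def weighted_hard_besiege(x, x_best, E, w):
    """Hard besiege with adaptive weight (Equation (17)); sketch only."""
    return w * x_best - E * np.abs(x_best - x)

# usage: the weight shrinks as the iteration counter k grows
for k in (0, 250, 500):
    print(k, round(adaptive_weight(k, 500), 3))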

3.3. Adaptive Relative Reflection

If the optimal solution remains unchanged during the iteration process, the algorithm may have fallen into a local optimum. Therefore, we propose an adaptive relative reflection to jump out of local optima. Firstly, we take the prey, that is, the currently found optimal solution, as the fulcrum; then each individual reflects around the fulcrum position according to its current position to obtain a new position, as shown in Equation (24):
X_{new,i}(k+1) = \gamma_3 X_i(k) + P \cdot \left( \gamma_1 \cdot X_{best}(k) - \gamma_2 \cdot X_i(k) \right) \quad (24)
P = (P_{max} - P_{min}) \times \frac{k}{K_{max}} + P_{min} \quad (25)
\gamma(k+1) = \frac{a}{4} \sin\left( \pi \gamma(k) \right) \quad (26)
where X_new,i(k+1) is the new position obtained by reflecting the search agent relative to the optimal solution, X_best(k) is the position of the current optimal solution, X_i(k) is the position of the search agent at the k-th iteration, P is the adaptive relative reflection factor, P_max and P_min are the maximum and minimum values of the relative reflection factor, and γ_1, γ_2, and γ_3 are random numbers in [0, 1] generated by the chaotic map in Equation (26).
The adaptive relative reflection can help an individual jump out of a local optimum, as shown in Figure 4, where X_i is an individual, X_best is the current optimal solution, and LB and UB are the lower and upper bounds of the search space. The new individual generated from X_i lies at a position within the circle, which is obtained according to the adaptive relative reflection. If the current iteration number k is small, the circle is small and the new individual is close to X_best; if k is large, the circle is large and the new individual is far away from X_best.
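The reflection step of Equations (24)–(26) might be sketched as follows. The sine chaotic map of Equation (26) is used with the assumed setting a = 4, the schedule for P uses P_max = 5 and P_min = 2 from Table 1, and the greedy acceptance of each new position (from Algorithm 2) is left to the caller; all names are illustrative.

import numpy as np

def sine_chaos(g, a=4.0):
    """Sine chaotic map of Equation (26); a = 4 is an assumed value giving a fully chaotic sequence in (0, 1]."""
    return (a / 4.0) * np.sin(np.pi * g)

def relative_reflection(X, X_best, k, K_max, g=0.7, P_max=5.0, P_min=2.0):
    """Adaptive relative reflection (Equations (24) and (25)) applied to the whole flock; sketch only."""
    P = (P_max - P_min) * k / K_max + P_min                    # Equation (25): factor grows with k
    X_new = np.empty_like(X)
    for i in range(X.shape[0]):
        g1 = g = sine_chaos(g)                                 # three chaotic numbers per hawk (Equation (26))
        g2 = g = sine_chaos(g)
        g3 = g = sine_chaos(g)
        X_new[i] = g3 * X[i] + P * (g1 * X_best - g2 * X[i])   # Equation (24)
    return X_new

# usage with assumed sizes; in ARHHO this step runs only after the best solution has
# stagnated for h = 6 iterations, and a new position is kept only if it improves the fitness
rng = np.random.default_rng(1)
X = rng.uniform(-10.0, 10.0, size=(30, 5))
X_new = relative_reflection(X, X[0], k=100, K_max=500)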
The pseudo code of the proposed ARHHO algorithm is presented in Algorithm 2, and the flow of the search process is presented in Figure 5. First, the algorithm initializes the population, calculates the fitness value of each individual, and sets the individual with the best fitness as the current optimal solution. Secondly, all individuals are updated using the exploration and exploitation phases in each iteration. If the optimal value of ARHHO has not changed for six iterations, adaptive relative reflection is used to update all individuals. Finally, the global optimal solution is output.
Algorithm 2 The ARHHO algorithm
Initialize the random population X_i (i = 1, 2, …, N)
Calculate the fitness values of the hawks
Set X_best as the location of the rabbit (best location)
While k < K_max
  For i = 1 to N
    Update the initial energy E_0, the jump strength J and the weight ω
    Update E using Equation (14)
    If (|E| ≥ 1)
      Update the location vector using Equation (2)
    If (|E| < 1)
      If (r ≥ 0.5 and |E| ≥ 0.5)
        Update the location vector using Equation (16)
      elseif (r ≥ 0.5 and |E| < 0.5)
        Update the location vector using Equation (17)
      elseif (r < 0.5 and |E| ≥ 0.5)
        Update the location vector using Equation (20)
      elseif (r < 0.5 and |E| < 0.5)
        Update the location vector using Equation (23)
  Calculate the fitness values of the hawks
  Set X_best as the location of the rabbit (best location)
  k = k + 1
  If the best solution has not improved for h = 6 consecutive iterations and k < (K_max − 1)
    For each hawk X_i do
      Update the new location vector X_new_i using Equation (24)
      Calculate the fitness value of X_new_i
      If f(X_new_i) < f(X_i)
        X_i = X_new_i
    Set X_best as the location of the rabbit (best location)
    k = k + 1
end while
Return X_best

3.4. Complexity Analysis

The computational complexity of the ARHHO algorithm mainly includes the initialization of the population, the evaluation of the fitness value of the search agent, and the location update of the search agent. It should be pointed out that N is the size of the population, D is the dimension of the problem to be solved, and k is the number of iterations. Therefore, the computational complexity is analyzed as follows.
1. The computational complexity of the population initialization stage is O(N × D).
2. The computational complexity of evaluating the fitness values of the search agents and finding the optimal solution in each iteration is O(N).
3. The computational complexity of the position update of the search agents in each iteration is O(N × D).
4. The computational complexity of updating the positions of the search agents according to the adaptive relative reflection in each iteration is O(N × D).
In summary, the computational complexity of the entire optimization process of the ARHHO algorithm is O(N × D × k). Thus, the computational complexity of the ARHHO algorithm is the same as that of the standard HHO algorithm; the complexity is not increased.

4. Experiments and Results

In this section, we use a series of benchmark functions to evaluate the performance of our algorithm. We first describe the twenty-three standard benchmark test functions and the experimental setup used in the study, and then compare the experimental results with those of other algorithms.

4.1. Benchmark Test Functions

To demonstrate the efficiency of the proposed algorithm, we selected 23 classic benchmark functions [48]. F1~F7 are unimodal functions, which have only one optimal solution in the search space. F8~F13 are multimodal functions, which have many local optimal solutions in the search space. F14~F23 are fixed-dimensional multimodal functions, which also have multiple local optimal solutions but low dimensionality. The unimodal functions are used to test the exploitation ability of the algorithm, the multimodal functions are used to test its exploration ability and its ability to jump out of local optima, and the fixed-dimensional multimodal functions are used to test the stability of the algorithm.
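For reference, two representative members of this suite are shown below, assuming the usual definitions from Yao et al. [48]: F1 (the unimodal Sphere function) and F9 (the multimodal Rastrigin function).

import numpy as np

def f1_sphere(x):
    """F1: unimodal Sphere function, global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    """F9: multimodal Rastrigin function, global minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

x = np.zeros(30)
print(f1_sphere(x), f9_rastrigin(x))   # both evaluate to 0 at the global optimum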

4.2. Experimental Setup

The experiments were run on a PC with the Windows 10 Ultimate 64-bit operating system, an Intel(R) Core(TM) i5-5200U CPU, and 8 GB of memory. We implemented all the algorithms in the Python programming language.
The parameter values of the proposed algorithm and the other algorithms, namely DE [25], SSA [20], GWO [15], MFO [23], PSO [14], SCA [22], WOA [16], and HHO [34], are given in Table 1. To make the experimental results reasonable and credible, the population size of all algorithms is set to 30 and the maximum number of iterations is set to 500. For each benchmark function, the average fitness value (mean) and standard deviation (std) over 30 runs are reported as the evaluation criteria.
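The evaluation protocol (30 independent runs per function, reporting the mean and standard deviation of the best fitness found) could be scripted along the following lines; run_arhho is a hypothetical driver function, not part of the paper.

import numpy as np

def evaluate(optimizer, objective, dim, runs=30, pop_size=30, K_max=500):
    """Run an optimizer several times on one objective and report the mean and std of the best fitness."""
    best_values = [optimizer(objective, dim, pop_size, K_max) for _ in range(runs)]
    return float(np.mean(best_values)), float(np.std(best_values))

# usage (with the hypothetical driver run_arhho):
# mean_fitness, std_fitness = evaluate(run_arhho, f1_sphere, dim=30)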

4.3. Comparison with Other Algorithms

In order to validate the performance of the proposed algorithm, we compared the experimental results with eight well-known metaheuristic algorithms.
Table 2 shows the mean and standard deviation obtained by the nine algorithms on thirteen benchmark test functions with D = 30. It can be observed that on 12 of the 13 test problems the proposed ARHHO significantly outperformed the other eight algorithms; on the remaining problem, SSA was significantly better than ARHHO. Moreover, ARHHO shares the optimal result with HHO on F9, F10, and F11. On the F11 function, ARHHO, HHO, and WOA all found the optimal solution, outperforming the other algorithms.
The results presented in Table 3, Table 4 and Table 5 clearly show the better performance of the proposed ARHHO compared with the other algorithms at higher dimensions (100, 500, and 1000). On the unimodal functions F1 to F7, ARHHO is superior to the other algorithms, and its values are closest to the theoretical optima. In particular, ARHHO outperforms the SSA algorithm on the F6 function. Thus, the results show that the nonlinear escape energy and the adaptive weight still play a key role, and ARHHO maintains good exploitation ability on high-dimensional problems. On the multimodal functions F8 to F13, the results of ARHHO are better than those of the other compared methods. Moreover, ARHHO shares the optimal result with at least one of the compared algorithms on functions F9 and F10. The experiments on the multimodal functions show that the adaptive relative reflection strategy can also help ARHHO overcome local optimal stagnation on high-dimensional problems. It can be concluded that the ARHHO algorithm has good scalability compared with the other algorithms.
The experimental results of the ARHHO algorithm on the fixed-dimensional multimodal functions F14 to F23 are given in Table 6, which are used to test the stability and exploratory ability of the algorithm. ARHHO works better than the other compared algorithms on functions F15, F21, F22, and F23, and works similarly on functions F14, F16, and F17. Additionally, it performs worse than DE, SSA, GWO, MFO, and PSO on functions F18 and F19, and worse than HHO on function F20.
The performance validates that ARHHO is superior in searching for the optimal solution on both unimodal and multimodal functions. ARHHO has better stability, exploitation, and exploration capabilities than the other algorithms.

4.4. Convergence Analysis

In this section, convergence curves are plotted corresponding to the median value of the objective function obtained in 30 runs for different problems. In Figure 6 and Figure 7, the convergence curves are plotted for the unimodal as well as the multimodal test functions. The vertical coordinate represents the logarithm of the mean fitness value of all the algorithms, and the horizontal coordinate denotes the number of iterations.
Figure 6 shows the convergence curves of ARHHO and the compared algorithms on some of the 30-dimensional benchmark problems. It can be seen that ARHHO converges to the optimal value at the fastest speed for both types of functions. Moreover, the accuracy of ARHHO is better than that of the other algorithms, except that the accuracy of SSA is significantly better than that of ARHHO on the F6 function. In particular, the convergence curves for F1, F2, F3, F4, F5, F7, F8, F12, and F13 show outstanding optimization performance with rapid convergence and better accuracy.
Figure 7 reflects the convergence of ARHHO and the compared algorithms on the fixed-dimensional functions. Compared with the other algorithms, the convergence of the ARHHO algorithm is faster on the F21, F22, and F23 functions, similar on the F14 to F18 functions, and slower on the F19 and F20 functions. From the perspective of convergence accuracy, ARHHO performs better than the other algorithms on functions F14, F15, F21, F22, and F23, and worse on functions F18 and F19. Moreover, ARHHO achieves the same convergence accuracy as at least one of the compared algorithms on functions F16 and F17. The adaptive relative reflection strategy plays a key role in helping ARHHO jump out of local optima and achieve satisfactory accuracy.

5. Conclusions and Future Works

In this paper, an improvement to Harris Hawks optimization, namely the ARHHO algorithm, is proposed. First, we changed the key escape energy parameter from a linear function to a nonlinear function to improve the diversity of the algorithm. Secondly, we proposed an adaptive weight in the exploitation phase to maintain a better balance between exploration and exploitation. Moreover, we presented an adaptive relative reflection method to help the algorithm jump out of local optima. To evaluate the performance of the ARHHO algorithm, this paper compares ARHHO with HHO and seven other popular metaheuristic algorithms on 23 commonly used benchmark functions. Experimental results show that the proposed ARHHO algorithm achieves better search accuracy on most functions and has strong competitiveness and superiority.
The major remaining challenge is the sensitivity of the ARHHO algorithm to its parameters; even though we have conducted extensive experiments, we cannot ensure that the selected parameters are optimal. In future work, we will first further verify the effectiveness of ARHHO by applying it to practical engineering optimization problems. Secondly, a binary version of ARHHO will be proposed for feature selection. Third, the proposed method may be combined with other metaheuristic algorithms to further improve performance. Finally, multi-objective optimization is also one of the future directions of interest.

Author Contributions

Conceptualization, C.W.; methodology, T.Z. and C.W.; software, C.W.; writing—original draft preparation, C.W.; writing—review and editing, T.Z. and C.W.; supervision, T.Z. and C.W.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 61972063 and No. 12175028) and the Fundamental Research Funds for the Central Universities of China (No. 3132022260).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used during the study appear in the submitted article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yang, X. Introduction to Mathematical Optimization—From Linear Programming to Metaheuristics, 1st ed.; Cambridge International Science Publishing: Great Abington, UK, 2008.
2. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A balanced whale optimization algorithm for constrained engineering design problems. Appl. Math. Model. 2019, 71, 45–59.
3. Gharaei, A.; Pasandideh, S.H.R.; Arshadi Khamseh, A. Inventory model in a four-echelon integrated supply chain: Modeling and optimization. J. Modell. Manag. 2017, 12, 739–762.
4. Sawik, T. Balancing cybersecurity in a supply chain under direct and indirect cyber risks. Int. J. Prod. Res. 2022, 60, 766–782.
5. Gharaei, A.; Naderi, B.; Mohammadi, M. Optimization of rewards in single machine scheduling in the rewards-driven systems. Manag. Sci. Lett. 2015, 5, 629–638.
6. Ben-Tal, A.; Nemirovski, A. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, 1st ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2001.
7. Brownlee, J. Clever Algorithms—Nature-Inspired Programming Recipes, 1st ed.; Lulu: North Carolina, UK, 2011.
8. Arora, S.; Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 2019, 31, 4385–4405.
9. Gao, W.; Liu, S.; Huang, L. A global best artificial bee colony algorithm for global optimization. J. Comput. Appl. Math. 2012, 236, 2741–2753.
10. Wang, H.; Wu, F.; Zhang, L. Application of variational mode decomposition optimized with improved whale optimization algorithm in bearing failure diagnosis. Alex. Eng. J. 2021, 60, 4689–4699.
11. Gong, Y.; Zhang, J.; Chung, H.S.; Chen, W.; Zhan, Z.; Li, Y.; Shi, Y. An Efficient Resource Allocation Scheme Using Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2012, 16, 801–816.
12. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimisation for feature selection in classification: Novel initialisation and updating mechanisms. Appl. Soft. Comput. 2014, 18, 261–276.
13. Liu, W.; Wang, Z.; Liu, X.; Zeng, N.; Bell, D. A Novel Particle Swarm Optimization Approach for Patient Clustering From Emergency Departments. IEEE Trans. Evol. Comput. 2019, 23, 632–644.
14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
16. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
17. Yang, X.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
18. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comp. Syst. 2020, 111, 300–323.
19. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. 2005, 344, 243–278.
20. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
21. Li, Q.; Song, K.; He, Z.; Li, E.; Cheng, A.; Chen, T. The artificial tree (AT) algorithm. Eng. Appl. Artif. Intell. 2017, 62, 99–110.
22. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
23. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
24. Holland, J.H. Adaptation in Natural and Artificial Systems, 2nd ed.; MIT Press: Cambridge, MA, USA, 1992.
25. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
26. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection, 1st ed.; MIT Press: Cambridge, MA, USA, 1992.
27. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
28. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
29. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
30. Zhao, W.; Wang, L.; Zhang, Z. A novel atom search optimization for dispersion coefficient estimation in groundwater. Future Gener. Comp. Syst. 2019, 91, 601–610.
31. Zheng, Y. Water wave optimization: A new nature-inspired metaheuristic. Comput. Oper. Res. 2015, 55, 1–11.
32. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289.
33. Patel, V.K.; Savsani, V.J. Heat transfer search (HTS): A novel optimization algorithm. Inf. Sci. 2015, 324, 217–246.
34. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, L.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comp. Syst. 2019, 97, 849–872.
35. Shehabeldeen, T.A.; Elaziz, M.A.; Elsheikh, A.H.; Zhou, J. Modeling of friction stir welding process using adaptive neuro-fuzzy inference system integrated with harris hawks optimizer. J. Mater. Res. Technol.-JMRT 2019, 8, 5882–5892.
36. Khalifeh, S.; Akbarifard, S.; Khalifeh, V.; Zallaghi, E. Optimization of water distribution of network systems using the Harris Hawks optimization algorithm (Case study: Homashahr city). MethodsX 2020, 7, 100948.
37. Chen, H.; Jiao, S.; Wang, M.; Heidari, A.A.; Zhao, X. Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. J. Clean. Prod. 2020, 244, 118778.
38. Ridha, H.M.; Heidari, A.A.; Wang, M.; Chen, H. Boosted mutation-based Harris hawks optimizer for parameters identification of single-diode solar cell models. Energy Convers. Manag. 2020, 209, 112660.
39. Elkasem, A.H.; Kamel, S.; Korashy, A.; Jurado, F. Application of Harris Hawks Algorithm for Frequency Response Enhancement of Two-Area Interconnected Power System with DFIG Based Wind Turbine. In Proceedings of the 21st International Middle East Power Systems Conference, Cairo, Egypt, 17–19 December 2019; pp. 568–574.
40. Rodriguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Perez-Cisnerosa, M.; Foong, L.K. An efficient Harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428.
41. Qu, C.; He, W.; Peng, X.; Peng, X.N. Harris Hawks optimization with information exchange. Appl. Math. Model. 2020, 84, 52–75.
42. Fan, Q.; Chen, Z.; Xia, Z. A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems. Soft Comput. 2020, 24, 14825–14843.
43. Kaveh, A.; Rahmani, P.; Eslamlou, A.D. An efficient hybrid approach based on Harris Hawks optimization and imperialist competitive algorithm for structural optimization. Eng. Comput. 2021, 1–29.
44. Gupta, S.; Deep, K.; Heidari, A.A.; Moayedi, H.; Wang, M. Opposition-based learning Harris hawks optimization with advanced transition rules: Principles and analysis. Expert Syst. Appl. 2020, 158, 113510.
45. Zhang, Y.; Zhou, X.; Shih, P. Modified Harris Hawks Optimization Algorithm for Global Optimization Problems. Arab. J. Sci. Eng. 2020, 45, 10949–10974.
46. Al-Betar, M.A.; Awadallah, M.A.; Heidari, A.A.; Chen, H.; Al-khraisat, H.; Li, C. Survival exploration strategies for Harris Hawks Optimizer. Expert Syst. Appl. 2021, 168, 114243.
47. Kamboj, V.K.; Nandi, A.; Bhadoria, A.; Sehgal, S. An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Appl. Soft. Comput. 2020, 89, 106018.
48. Yao, X.; Liu, Y.; Lin, G. Evolutionary Programming Made Faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
Figure 1. Flowchart of HHO.
Figure 2. Escape energy.
Figure 3. Adaptive weights.
Figure 4. Adaptive relative reflection.
Figure 5. Flowchart of ARHHO.
Figure 6. Convergence curves of the benchmark problems.
Figure 7. Convergence curves of fixed-dimensional problems.
Table 1. Parameter setting.
Algorithm   Parameter Configurations
DE: F = 0.5, CR = 0.5
SSA: c_1 ∈ (0, 1), c_2 ∈ (0, 1)
MFO: a ∈ [−2, −1], b = 1
PSO: c_1 = c_2 = 2, ω_max = 0.9, ω_min = 0.2
SCA: a = 2, r_1 ∈ (0, a), r_2 ∈ (0, 2π), r_3 ∈ (0, 2), r_4 ∈ (0, 1)
ARHHO: h = 6, ω_max = 0.9, ω_min = 0.4, P_max = 5, P_min = 2, U ∈ (2, 5)
Table 2. Experimental results of benchmark functions F1~F13 in 30 dimensions.
Benchmark   DE   SSA   GWO   MFO   PSO   SCA   WOA   HHO   ARHHO
F1mean1.39E−041.99E−079.85E−313.02E+032.30E−041.70E+012.74E−726.96E−953.27E−137
std8.92E−052.46E−072.60E−305.95E+033.74E−043.99E+011.47E−713.80E−941.76E−136
F2mean4.80E−032.69E+001.15E−184.27E+015.71E+001.36E−021.76E−515.82E−491.62E−71
std2.54E−031.91E+001.11E−182.05E+016.78E+001.23E−025.10E−513.10E−488.85E−71
F3mean1.87E+041.63E+039.72E−051.80E+048.89E+011.01E+044.66E+042.48E−724.58E−117
std5.03E+038.32E+022.54E−041.19E+043.35E+016.20E+031.38E+049.04E−722.50E−116
F4mean1.14E+011.20E+015.99E−067.07E+011.03E+003.71E+014.58E+011.07E−491.75E−65
std5.91E+003.89E+001.76E−058.05E+002.25E−011.05E+012.86E+014.14E−497.45E−65
F5mean5.93E+012.92E+022.69E+012.47E+047.57E+013.66E+042.80E+011.85E−022.25E−03
std7.26E+014.70E+028.11E−014.02E+046.35E+017.18E+044.55E−012.97E−022.61E−03
F6mean1.65E−041.94E−076.54E−012.66E+031.26E−041.90E+014.30E−011.24E−041.38E−05
std1.24E−043.85E−073.11E−015.20E+031.69E−041.92E+012.47E−011.56E−043.34E−05
F7mean4.09E−021.58E−012.98E−036.46E+005.59E+001.43E−013.56E−032.16E−041.10E−04
std1.08E−026.07E−021.97E−031.02E+017.60E+001.76E−013.73E−032.17E−041.15E−04
F8mean−6.03E+03−7.60E+03−5.78E+03−8.02E+03−5.06E+03−3.77E+03−1.08E+04−1.25E+04−1.26E+04
std3.79E+027.20E+021.16E+037.51E+021.11E+032.88E+021.78E+036.42E+025.14E−01
F9mean1.79E+025.56E+011.22E+011.82E+021.14E+022.94E+011.89E−150.00E+000.00E+00
std1.47E+011.80E+011.07E+013.86E+012.88E+012.35E+011.04E−140.00E+000.00E+00
F10mean4.18E−032.54E+006.27E−141.67E+012.17E−011.36E+013.64E−154.44E−164.44E−16
std1.96E−035.67E−017.79E−156.42E+004.30E−018.92E+002.16E−151.50E−315.01E−32
F11mean1.29E−021.49E−022.77E−031.25E+018.45E−039.26E−010.00E+000.00E+000.00E+00
std3.36E−021.01E−026.33E−033.12E+011.19E−023.45E−010.00E+000.00E+000.00E+00
F12mean9.32E−047.73E+004.30E−021.12E+056.92E−035.39E+042.18E−027.55E−068.55E−07
std3.55E−033.13E+002.04E−026.07E+052.63E−021.39E+051.91E−029.43E−061.07E−06
F13mean8.92E−029.99E+006.05E−011.30E+036.64E−031.86E+055.36E−015.03E−059.59E−06
std4.29E−011.24E+012.85E−016.94E+038.94E−036.59E+052.24E−017.73E−051.03E−05
w/t/l13/0/012/0/113/0/013/0/013/0/013/0/012/1/010/3/0-
Table 3. Experimental results of benchmark functions F1~F13 in 100 dimensions.
Benchmark   DE   SSA   GWO   MFO   PSO   SCA   WOA   HHO   ARHHO
F1mean7.03E+021.44E+033.93E−145.24E+042.08E+011.36E+042.20E−718.81E−935.69E−138
std4.43E+023.83E+023.63E−142.13E+045.99E+008.22E+031.20E−704.83E−922.54E−137
F2mean3.68E+014.68E+014.49E−092.30E+021.27E+028.96E+005.90E−501.29E−495.14E−74
std1.80E+017.55E+002.11E−095.06E+012.34E+016.72E+001.57E−495.97E−492.68E−73
F3mean4.19E+055.40E+043.65E+032.52E+051.67E+042.21E+059.52E+055.16E−522.52E−114
std3.59E+042.43E+043.08E+037.10E+044.61E+034.96E+042.59E+052.83E−511.38E−113
F4mean9.61E+012.79E+013.13E+009.37E+011.23E+018.96E+017.19E+011.04E−499.38E−68
std1.26E+003.57E+002.04E+001.79E+001.73E+003.03E+002.46E+012.91E−492.95E−67
F5mean6.19E+051.80E+059.79E+011.42E+081.85E+041.25E+089.82E+014.18E−021.02E−02
std6.94E+058.09E+046.44E−015.90E+071.81E+045.79E+072.23E−014.52E−022.40E−02
F6mean5.63E+021.47E+039.29E+005.46E+042.01E+011.04E+044.72E+005.43E−046.90E−05
std2.20E+024.68E+021.06E+001.62E+046.06E+006.67E+031.13E+001.15E−031.08E−04
F7mean1.57E+002.81E+009.57E−032.19E+022.78E+021.18E+024.75E−033.08E−041.36E−04
std4.82E−017.14E−013.06E−031.05E+021.22E+027.14E+015.12E−032.56E−041.38E−04
F8mean−1.06E+04−2.16E+04−1.63E+04−2.23E+04−1.05E+04−6.82E+03−3.53E+04−4.19E+04−4.19E+04
std7.91E+021.59E+033.74E+032.08E+034.22E+034.84E+026.44E+033.42E+001.21E+00
F9mean9.56E+022.42E+023.06E+018.79E+027.87E+022.23E+020.00E+000.00E+000.00E+00
std4.01E+014.76E+012.86E+016.29E+018.42E+019.86E+010.00E+000.00E+000.00E+00
F10mean7.14E+009.99E+002.35E−081.99E+013.74E+001.88E+014.12E−154.44E−164.44E−16
std4.52E+001.21E+001.51E−089.75E−023.12E−014.04E+002.72E−151.50E−311.50E−31
F11mean8.06E+001.32E+018.23E−045.11E+024.31E−011.00E+021.37E−020.00E+000.00E+00
std3.60E+003.05E+004.51E−031.41E+021.46E−017.80E+017.50E−020.00E+000.00E+00
F12mean1.40E+053.62E+012.81E−012.41E+084.91E+003.13E+084.23E−024.06E−065.59E−07
std1.37E+051.35E+016.41E−021.51E+081.73E+001.74E+082.18E−025.78E−068.07E−07
F13mean1.40E+068.83E+036.82E+005.15E+086.20E+015.49E+082.61E+002.36E−042.95E−05
std1.13E+061.76E+045.07E−012.56E+081.79E+013.08E+081.01E+003.04E−044.25E−05
w/t/l13/0/013/0/013/0/013/0/013/0/013/0/012/1/010/3/0-
Table 4. Experimental results of benchmark functions F1~F13 in 500 dimensions.
Benchmark   DE   SSA   GWO   MFO   PSO   SCA   WOA   HHO   ARHHO
F1mean2.77E+059.33E+045.11E−041.09E+065.93E+032.10E+053.57E−671.96E−921.35E−135
std1.78E+046.35E+031.72E−044.48E+044.24E+026.50E+041.95E−661.07E−917.37E−135
F2mean1.26E+035.39E+024.23E−031.13E+951.60E+461.01E+025.68E−491.23E−471.14E−72
std1.39E+021.86E+017.24E−046.17E+958.78E+465.04E+012.84E−486.05E−475.98E−72
F3mean1.04E+071.15E+065.82E+055.19E+066.18E+057.03E+063.03E+071.69E−352.16E−118
std1.16E+065.26E+051.01E+051.01E+061.62E+051.35E+061.28E+079.17E−357.07E−118
F4mean9.92E+014.12E+016.39E+019.91E+012.74E+019.91E+018.00E+011.15E−481.30E−66
std2.77E−013.03E+003.89E+003.03E−011.15E+002.73E−012.09E+013.61E−485.07E−66
F5mean5.44E+083.72E+074.98E+024.78E+092.98E+071.83E+094.96E+021.14E−017.19E−02
std7.59E+075.73E+063.82E−012.27E+083.93E+065.13E+083.89E−011.30E−011.12E−01
F6mean2.65E+059.45E+048.96E+011.10E+065.83E+032.05E+053.13E+011.82E−033.33E−04
std2.30E+045.93E+031.93E+004.54E+044.73E+027.98E+049.36E+002.34E−036.44E−04
F7mean3.56E+032.86E+026.64E−023.63E+044.44E+041.44E+043.73E−031.82E−041.30E−04
std6.02E+024.18E+011.65E−022.39E+035.68E+033.74E+034.58E−032.24E−041.06E−04
F8mean−2.37E+04−6.00E+04−5.85E+04−6.23E+04−2.49E+04−1.53E+04−1.64E+05−2.09E+05−2.09E+05
std1.63E+033.86E+031.29E+043.39E+039.12E+031.16E+032.96E+042.18E+017.33E+00
F9mean6.05E+033.16E+031.70E+026.87E+036.34E+031.12E+030.00E+000.00E+000.00E+00
std1.25E+028.74E+016.80E+011.57E+022.30E+024.41E+020.00E+000.00E+000.00E+00
F10mean1.83E+011.43E+011.09E−032.02E+011.21E+011.90E+013.76E−154.44E−164.44E−16
std1.08E+002.47E−011.95E−041.56E−015.11E−013.31E+002.27E−151.50E−311.50E−31
F11mean2.42E+038.59E+021.22E−029.96E+037.34E+011.91E+030.00E+000.00E+000.00E+00
std1.96E+026.62E+013.42E−022.89E+021.00E+017.22E+020.00E+000.00E+000.00E+00
F12mean6.52E+081.82E+069.46E−011.14E+102.68E+055.91E+091.15E−012.44E−062.52E−07
std1.74E+081.05E+062.27E−017.04E+081.06E+051.32E+096.60E−025.60E−063.43E−07
F13mean1.79E+093.64E+075.68E+012.10E+104.52E+069.84E+091.89E+013.33E−043.98E−05
std2.94E+088.37E+063.66E+001.34E+091.20E+061.85E+096.58E+004.18E−045.08E−05
w/t/l13/0/013/0/013/0/013/0/013/0/013/0/011/2/010/3/0-
Table 5. Experimental results of benchmark functions F1~F13 in 1000 dimensions.
Benchmark   DE   SSA   GWO   MFO   PSO   SCA   WOA   HHO   ARHHO
F1mean9.01E+052.32E+051.22E−012.67E+064.09E+045.07E+052.56E−675.81E−952.91E−134
std4.96E+041.11E+043.39E−025.52E+041.41E+031.23E+051.33E−663.13E−941.55E−133
F2meaninf1.19E+035.46E−01inf1.41E+032.19E+023.48E−491.27E−472.80E−75
std-2.83E+014.06E−01-4.51E+011.06E+021.59E−486.33E−478.66E−75
F3mean4.04E+075.79E+062.56E+061.88E+072.45E+062.84E+071.10E+081.29E−289.23E−112
std4.72E+062.31E+063.88E+053.93E+066.09E+056.78E+063.58E+077.04E−283.54E−111
F4mean9.96E+014.42E+017.60E+019.95E+013.29E+019.96E+017.89E+017.80E−461.80E−64
std1.21E−012.47E+002.68E+001.63E−011.45E+001.50E−012.02E+014.23E−459.28E−64
F5mean2.14E+091.16E+081.12E+031.22E+102.85E+084.34E+099.94E+024.01E−011.41E−01
std2.43E+081.02E+075.05E+013.59E+082.17E+078.17E+089.21E−018.76E−011.69E−01
F6mean9.17E+052.31E+052.00E+022.65E+064.09E+045.27E+056.61E+015.89E−034.22E−04
std4.90E+041.11E+042.11E+005.31E+042.05E+031.58E+051.80E+018.54E−038.59E−04
F7mean3.01E+041.80E+032.37E−011.90E+052.41E+056.99E+044.49E−031.55E−041.24E−04
std3.21E+031.89E+025.02E−026.26E+036.95E+031.64E+044.23E−031.11E−041.39E−04
F8mean−3.26E+04−8.81E+04−8.63E+04−9.18E+04−3.52E+04−2.21E+04−3.39E+05−4.19E+05−4.19E+05
std1.58E+039.02E+032.33E+046.77E+031.39E+041.47E+035.72E+041.93E+025.25E+01
F9mean1.28E+047.62E+033.98E+021.53E+041.41E+041.88E+030.00E+000.00E+000.00E+00
std2.64E+021.82E+029.51E+011.99E+023.24E+027.78E+020.00E+000.00E+000.00E+00
F10mean1.92E+011.45E+011.19E−022.02E+011.60E+011.95E+014.35E−154.44E−164.44E−16
std4.66E−012.22E−011.36E−032.24E−012.70E−013.37E+003.00E−151.50E−311.50E−31
F11mean8.32E+032.12E+033.88E−022.40E+042.78E+024.29E+037.40E−180.00E+000.00E+00
std4.22E+021.03E+027.85E−025.36E+021.84E+011.24E+032.82E−170.00E+000.00E+00
F12mean3.15E+091.05E+073.34E+002.97E+109.92E+061.33E+109.64E−022.58E−063.02E−07
std4.80E+082.93E+061.30E+009.88E+082.47E+061.78E+093.97E−023.44E−064.00E−07
F13mean7.78E+091.51E+082.04E+025.47E+108.53E+072.19E+103.73E+017.06E−041.37E−04
std7.48E+082.53E+075.71E+011.59E+091.19E+074.11E+091.34E+018.11E−041.47E−04
w/t/l13/0/013/0/013/0/013/0/013/0/013/0/012/1/010/3/0-
Table 6. Experimental results of benchmark functions F14~F23 with fixed dimensions.
Benchmark   DE   SSA   GWO   MFO   PSO   SCA   WOA   HHO   ARHHO
F14mean9.98E−011.20E+005.01E+001.76E+002.51E+001.59E+002.12E+001.39E+009.98E−01
std3.39E−165.47E−014.38E+001.53E+001.86E+009.24E−011.97E+001.26E+008.84E−08
F15mean4.22E−049.32E−043.08E−033.15E−034.33E−031.10E−036.53E−043.75E−043.42E−04
std2.44E−042.93E−046.90E−035.85E−037.31E−033.57E−043.27E−042.44E−043.56E−05
F16mean−1.03E+00−1.03E+00−1.03E+00−1.03E+00−1.03E+00−1.03E+00−1.03E+00−1.03E+00−1.03E+00
std6.78E−162.49E−143.62E−086.78E−160.00E+005.11E−055.88E−106.40E−104.27E−04
F17mean3.98E−013.98E−013.98E−013.98E−013.98E−014.01E−013.98E−013.98E−013.98E−01
std1.13E−161.51E−141.29E−041.13E−161.13E−164.01E−032.19E−051.17E−069.20E−05
F18mean3.00E+003.00E+003.00E+003.00E+003.00E+003.00E+003.00E+007.50E+004.80E+00
std2.67E−152.18E−133.27E−051.95E−150.00E+001.61E−042.46E−041.02E+016.85E+00
F19mean−3.86E+00−3.86E+00−3.86E+00−3.86E+00−3.86E+00−3.85E+00−3.85E+00−3.84E+00−3.80E+00
std1.36E−157.15E−122.57E−031.36E−151.44E−032.61E−031.80E−021.41E−016.71E−02
F20mean−3.25E+00−3.23E+00−3.27E+00−3.22E+00−3.23E+00−2.96E+00−3.25E+00−3.28E+00−3.04E+00
std5.83E−025.89E−027.04E−028.89E−021.35E−013.00E−018.31E−025.87E−021.32E−01
F21mean−9.1554−7.6469−8.9695−6.3029−7.5201−2.9647−8.4270−5.0551−10.0942
std2.58743.21812.17893.33333.09651.94582.70197.24E−050.0751
F22mean−9.9253−8.5214−10.1466−8.3319−8.9354−3.1454−7.2835−5.4418−10.3098
std1.82232.97081.39403.26942.76641.98753.22921.34820.1853
F23mean−10.2809−7.6608−10.0838−7.4863−9.7002−3.5671−7.8585−5.1284−10.4397
std1.39953.64611.75143.82542.19841.15323.36988.86E−050.1178
w/t/l4/3/35/2/35/2/35/2/35/2/36/1/35/2/36/2/2-
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
