*3.7. Algorithm Procedure*

Algorithm 1 shows the procedure of the CSHHO optimization algorithm.

#### **Algorithm 1:** CSHHO algorithm

**Input:** The population size N and the maximum number of iterations T.
**Output:** The location of the rabbit and its fitness value.
Initialize the population using the seven chaotic maps: generate the chaotic variables *y<sub>ki</sub>* ∈ [0, 1], *k* = 1, 2, . . . , M, where M indicates the initial population dimension, and generate the initial chaotic vectors through Equations (14)–(21);
Obtain the initial population in the corresponding solution space by inverse mapping through Equation (22);
**While** the stopping condition is not met **do**
- Calculate the fitness values of the hawks;
- Set *X<sub>rabbit</sub>* as the best location of the rabbit;
- **For** each *X<sub>i</sub>* **do**
  - Update *E* using Equation (3);
  - **If** |*E*| ≥ 1 **then**
    - Update the vector *X<sub>i</sub>* using Equations (25) and (2);
    - Randomly generate the parameters *r*<sub>1</sub>, *r*<sub>2</sub>;
    - *b* = *e*<sup>5cos(*π*(1 − *t*/*t*<sub>max</sub>))</sup>;
    - **If** *q* ≥ 0.5 **then** *X<sub>i</sub>*(*t* + 1) = *ω*(*t*) × *X<sub>rand</sub>*(*t*) − *b* × |*X<sub>rand</sub>*(*t*) − 2 × *r*<sub>2</sub> × *X<sub>i</sub>*(*t*)|;
    - **Else** (*q* < 0.5) *X<sub>i</sub>*(*t* + 1) = *ω*(*t*) × (*X<sub>rabbit</sub>*(*t*) − *X<sub>m</sub>*(*t*)) − *b* × |*lb* + *r*<sub>4</sub> × (*ub* − *lb*)|;
    - **End if**
  - **Else if** |*E*| < 1 **then**
    - **If** *r* ≥ 0.5 and |*E*| ≥ 0.5 **then** update the vector *X<sub>i</sub>* using Equations (4)–(6);
    - **Else if** *r* ≥ 0.5 and |*E*| < 0.5 **then** update the vector *X<sub>i</sub>* using Equation (7);
    - **Else if** *r* < 0.5 and |*E*| ≥ 0.5 **then** update the vector *X<sub>i</sub>* using Equations (8)–(11);
    - **Else if** *r* < 0.5 and |*E*| < 0.5 **then** update the vector *X<sub>i</sub>* using Equations (12) and (13);
    - **End if**
  - **End if**
- **End for**
- Apply the optimal neighborhood disturbance using Equations (27) and (28):

$$\tilde{X}(t)=\begin{cases}X^{*}(t)+0.5\times h\times X^{*}(t), & g<0.5\\ X^{*}(t), & g>0.5\end{cases}$$

$$X^{*}(t)=\begin{cases}\tilde{X}(t), & f(\tilde{X}(t))<f(X^{*}(t))\\ X^{*}(t), & \text{otherwise}\end{cases}$$

**End while**
**Return** *X<sub>rabbit</sub>*;
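The exploration branch of the procedure (the |E| ≥ 1 case) can be sketched in Python. This is a minimal sketch, not the authors' implementation: the adaptive weight *ω*(*t*) from Equation (25) is passed in as a precomputed number rather than derived, the exploitation branches are omitted, and the spiral coefficient is assumed to follow b = e^(5·cos(π(1 − t/t_max))) as reconstructed above.

```python
import numpy as np

def exploration_update(X_i, X_rand, X_rabbit, X_m, t, t_max,
                       lb, ub, omega, rng):
    """One exploration-phase position update of CSHHO (|E| >= 1 branch).

    omega is the adaptive weight omega(t); its exact form comes from
    Equation (25) of the paper and is supplied here as a plain number.
    The spiral coefficient b is assumed to follow the variable-spiral
    schedule b = exp(5 * cos(pi * (1 - t / t_max))).
    """
    r2, r4, q = rng.random(3)
    b = np.exp(5.0 * np.cos(np.pi * (1.0 - t / t_max)))
    if q >= 0.5:
        # perch based on a random hawk's position
        return omega * X_rand - b * np.abs(X_rand - 2.0 * r2 * X_i)
    # perch relative to the rabbit and the mean hawk position
    return omega * (X_rabbit - X_m) - b * np.abs(lb + r4 * (ub - lb))
```
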

#### **4. Experiments and Discussion**

In this section, to test and verify the performance of our proposed optimizer, CSHHO, different categories of experiments were designed. Given the randomness of the HHO algorithm, this section used an adequate set of test functions to ensure that the superior results of the CSHHO algorithm did not happen by accident. Therefore, two different benchmark test suites were used: the classical set of 23 well-known benchmark functions [70,71] and the standard IEEE CEC 2017 suite [72]. The experiments were as follows:

Experiment 1: First, seven chaotic mappings were used as the initialization method of the HHO population and tested separately. Second, the seven data sets were analyzed, and the optimal chaotic mapping was selected as the population initialization method of the improved algorithm.

Experiment 2: First, on the basis of Experiment 1, a combination test of adaptive weighting mechanism, variable spiral position update and optimal neighborhood disturbance mechanism was executed. Second, we analyzed and compared the CSHHO algorithm with other recently proposed meta-heuristic algorithms such as HHO [24], WOA [38], SCA [73], and Chicken Swarm Optimization (CSO) [74]. Third, we analyzed and compared the CSHHO algorithm with developed advanced variants such as HHO with dimension decision logic and Gaussian mutation (GCHHO) [54] and Hybrid PSO Algorithm with Adaptive Step Search (DEPSOASS) [75] and Gravitational search algorithm with linearly decreasing gravitational constant (Improved GSA) [76] and Dynamic Generalized Opposition-based Learning Fruit Fly Algorithm (DGOBLFOA) [77]. Fourth, based on the third step, the IEEE CEC 2017 was used to perform an algorithm-based accuracy scalability test with test dimensions D = 50, D = 100.

To ensure the fairness of the experiments, all algorithms were evaluated with the same parameters: the population size N was set to 30, the dimension D was set to 30, and each algorithm was run 50 times independently on each test instance. In each run, the function error value was recorded as log(*F*(*x*) − *F*(*x*<sup>∗</sup>)), where *F*(*x*) was the mean value found over all runs and *F*(*x*<sup>∗</sup>) was the optimal value recorded for the 23 benchmark functions. The average error (Mean) and standard deviation (Std) of the function error values were used as the two performance metrics for evaluating the algorithms over all runs. The experimental environment was: CPU Intel(R) Xeon(R) E5-2680 v3 (2.50 GHz), RAM 16.00 GB, MATLAB R2019b.
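The evaluation protocol above reduces each benchmark to two numbers per algorithm. A minimal sketch of that step, assuming `best_values` holds the best objective value found in each of the 50 independent runs and `f_star` is the known optimum of the benchmark:

```python
import numpy as np

def error_metrics(best_values, f_star):
    """Mean and Std of the function error F(x) - F(x*) over independent runs.

    best_values: best objective value found in each run.
    f_star: the recorded optimum of the benchmark function.
    """
    errors = np.asarray(best_values, dtype=float) - f_star
    return errors.mean(), errors.std()
```
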

#### *4.1. Benchmark Functions Verification*

All experiments were performed using the classical 23 test functions [70,71] to test the performance of each algorithm in terms of convergence speed and search accuracy. These benchmark functions were divided into two categories: unimodal (UM) and multimodal (MM). F1–F7 were the UM functions, which had a unique global optimum and were used to test the exploitation performance of the optimization algorithms. F8–F23 were the MM functions, which were used to test the exploration performance and local optimum (LO) avoidance potential of the optimization algorithms. As the complexity of the test functions increased, the tested algorithms became more likely to fall into local optima, so the full set of test functions evaluated the performance of the tested algorithms in various aspects. The convergence curves and test values of the corresponding test functions are given. Appendix A shows the classical 23 test functions.
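To make the UM/MM distinction concrete, here are two representative benchmarks of each kind. In the classical 23-function suite, F1 is typically the Sphere function (unimodal) and F9 the Rastrigin function (multimodal); the assignments here are illustrative, not a transcription of Appendix A:

```python
import numpy as np

def sphere(x):
    """Unimodal: a single global optimum f(0) = 0 (typically F1)."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def rastrigin(x):
    """Multimodal: many local optima, global optimum f(0) = 0 (typically F9)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```
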

IEEE CEC 2017 functions were also used in Experiment 2 to evaluate the scalability of CSHHO, the other meta-heuristic algorithms, and the developed advanced HHO variants. The IEEE CEC 2017 benchmark functions were classified into four categories, consisting of three UM functions (F1–F3), seven MM functions (F4–F10), 10 hybrid functions (F11–F20), and 10 composite functions (F21–F30). To evaluate the scalability of each algorithm more comprehensively, the dimensions of the benchmark functions were set to D = 50 and D = 100. Table 2 records the information and function formulas of IEEE CEC 2017.

In addition, to compare the performance of the various algorithms, the mean values of all algorithms in the simulation experiments were ranked from lowest to highest. The lower the rank, the better the algorithm performed relative to the others; conversely, the higher the rank, the worse. The Wilcoxon signed-rank test [78] was used to detect whether there were significant performance differences among the algorithms, with the *p*-values corrected by the Bonferroni–Holm correction [79]; moreover, the Friedman test [80] was used to rank the superiority of all the algorithms.
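The Bonferroni–Holm correction applied to the Wilcoxon *p*-values is a standard step-down procedure; a minimal sketch (the raw *p*-values themselves would come from a signed-rank test such as `scipy.stats.wilcoxon`):

```python
import numpy as np

def holm_correction(pvalues):
    """Bonferroni-Holm step-down correction for a family of p-values.

    Sorts the raw p-values ascending, multiplies the i-th smallest by
    (m - i), enforces monotonicity with a running maximum, and clips at 1.
    """
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(running_max, 1.0)
    return adjusted
```
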


#### **Table 2.** The information of IEEE CEC2017.

We used the values of the Friedman test to rank all the algorithms involved in the comparison; if the Friedman test values were the same, the rankings were averaged. Here, the Friedman test was performed on the classical 23 test functions, and the test values were recorded in the average ranking values (ARV) column.
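The ranking step behind the ARV column can be sketched as follows. This is a sketch of the per-function ranking with averaged ties only, not the full Friedman test statistic:

```python
import numpy as np

def average_ranking_values(results):
    """Average ranking values (ARV) over a set of benchmark functions.

    results: (n_functions x n_algorithms) array of mean errors. On each
    function, algorithms are ranked 1 (best, lowest error) upward; tied
    values receive the average of the tied ranks. ARV is each algorithm's
    mean rank across all functions.
    """
    results = np.asarray(results, dtype=float)
    n_funcs, n_algs = results.shape
    ranks = np.empty_like(results)
    for i in range(n_funcs):
        row = results[i]
        for j in range(n_algs):
            smaller = np.sum(row < row[j])
            equal = np.sum(row == row[j])
            ranks[i, j] = smaller + (equal + 1) / 2.0  # average rank for ties
    return ranks.mean(axis=0)
```
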

#### *4.2. Efficiency Analysis of the Improvement Strategy*

First, in the population initialization phase, we selected Gauss mapping, which had the highest impact on the accuracy of the HHO algorithm, as the population initialization method of CSHHO from among seven commonly used chaotic mappings. Second, a global optimization strategy was used to optimize the HHO algorithm, consisting of three components: the adaptive weight strategy, the variable spiral update strategy, and the optimal neighborhood disturbance strategy. To verify the performance improvement of the HHO algorithm by these two improvements, six algorithms were used for comparison.


#### 4.2.1. Influence of Seven Common Chaotic Mappings on HHO Algorithm

In order to select the most effective of seven well-known chaotic mapping methods, that is, the one that yields the best initial solution positions and speeds up the convergence of the Harris hawk population, the Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle chaotic mappings were each used to initialize the population of the HHO algorithm. This produced Sinusoidal-HHO, Tent-HHO, Kent-HHO, Cubic-HHO, Logistic-HHO, Gauss-HHO, and Circle-HHO, and we compared the accuracy of these seven algorithms.
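The shared pattern behind all seven variants is: iterate a chaotic map to fill a matrix of values in [0, 1], then map those values into the search bounds. A minimal sketch using the Logistic map as a stand-in for any of the seven maps of Equations (14)–(21), and assuming the inverse mapping of Equation (22) is the affine map X = lb + y·(ub − lb):

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, seed=0.7):
    """Chaotic population initialization, sketched with the Logistic map.

    The Logistic map y_{k+1} = 4 * y_k * (1 - y_k) generates chaotic
    variables in [0, 1]; the affine inverse mapping (assumed form of
    Equation (22)) places them in the solution space [lb, ub].
    """
    y = np.empty((pop_size, dim))
    value = seed
    for i in range(pop_size):
        for j in range(dim):
            value = 4.0 * value * (1.0 - value)  # chaotic variable in [0, 1]
            y[i, j] = value
    return lb + y * (ub - lb)  # inverse mapping into the solution space
```

Swapping in the Tent, Gauss, or any other of the seven maps only changes the one-line recurrence.
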

Table 3 presents the results of the seven algorithms for 23 classical test functions. The results included Best, Worst, Mean, Rank, and Std for each algorithm run 50 times independently.

Table 4 shows the Bonferroni–Holm corrected probability values *p* obtained from the Wilcoxon signed-rank test for the seven chaotic-mapping HHO algorithms. The symbols "+/=/−" represent the number of cases in which an algorithm was better than, similar to, or worse than Gauss-HHO. The ARV column in Table 5 gives the Friedman test values for the seven chaotic-mapping HHO algorithms.

In Table 3, the better experimental results are shown in bold. Analyzing the experimental results, we concluded that under the UM functions (F1–F7), Sinusoidal chaotic mapping achieved optimal results on the F3, F5, and F7 test functions and Circle chaotic mapping on the F1 and F4 test functions; Sinusoidal chaotic mapping had the greatest influence on the HHO algorithm, followed by Gauss and Circle chaotic mapping. Under the MM functions (F8–F23), Gauss chaotic mapping had the greatest influence on the HHO algorithm, while Circle, Sinusoidal, Tent, and Kent chaotic mapping obtained the best results on the F21, F15, F20, F13, and F23 test functions, respectively. Comparing the results of the seven chaotic mappings over the 23 test functions, Gauss chaotic mapping obtained the most optimal solutions.

Table 4 shows the Bonferroni–Holm corrected *p*-values of the Wilcoxon signed-rank test at the 5% significance level; "+/=/−" indicates whether Gauss-HHO was worse than, consistent with, or better than Circle-HHO, Sinusoidal-HHO, Tent-HHO, Kent-HHO, Cubic-HHO, and Logistic-HHO. Analyzing the corrected *p*-values and the "+/=/−" counts in each row of the table, Gauss-HHO obtained better results across the 23 tested functions. The results of the seven chaotic-mapping HHO algorithms were then evaluated comprehensively with the Friedman test in Table 5: compared with the other six chaotic-mapping population initialization methods, Gauss-HHO obtained the best average ranking. This indicates that, for the HHO optimization algorithm, Gauss chaotic mapping not only retains the randomness, ergodicity, and initial-value sensitivity of chaotic mappings in general, but also, when used to initialize the population of the HHO optimization algorithm, yields better optimization accuracy.


**Table 3.** Results of a comparison with seven chaotic mappings on HHO.





**Table 4.** The Bonferroni–Holm corrected *p*-values of Wilcoxon's signed-rank test.


**Table 5.** Average ranks obtained by each method in the Friedman test.

#### 4.2.2. Comparison with Conventional Techniques

The Gauss mapping was used for population initialization; then the adaptive weight mechanism, variable spiral position update mechanism, and optimal neighborhood disturbance mechanism were introduced to form the CSHHO algorithm. To verify the effectiveness of the CSHHO algorithm against the swarm intelligence optimization algorithms that have emerged in recent years, this subsection compares the CSHHO algorithm with recently published meta-heuristics, including HHO [24], WOA [38], SCA [73], and CSO [74], and calculates the average precision (Mean) and stability (Std) of each algorithm. The performance of CSHHO was tested against the other optimization algorithms using a nonparametric test, the Bonferroni–Holm corrected Wilcoxon signed-rank test. Finally, the Friedman test was used to calculate the ARV values of all the participating algorithms and rank them together. As in Experiment 1, this experiment was based on the set of 23 classical test functions (see Table 1). The details of the experiment were consistent with the description at the beginning of this section, and Table 6 shows the detailed experimental results. Additionally, Table 7 gives the corrected Wilcoxon signed-rank test at the 5% significance level, where the "+/=/−" values count how often CSHHO was worse than, similar to, or better than the comparison algorithm over the 50 runs of each test function; the Friedman test results are given in Table 8.

The optimal results of the tested algorithms under each benchmark function are marked in bold. Analyzing the data in Table 6, CSHHO had a strong optimization capability compared to the traditional optimization algorithms. Under the MM functions (F8–F23), CSHHO obtained the best optimization results on the F9–F13, F15–F19, and F21–F23 benchmarks; CSHHO explored the optimal regions of these benchmarks and outperformed the other compared optimization algorithms in exploration performance, tying for first place on benchmarks F9 and F11. This showed that CSHHO had strong exploration ability and LO avoidance potential.

Table 7 was analyzed to determine whether there was a significant difference between the other algorithms and CSHHO. The "+/=/−" column indicates the number of results that were worse than, similar to, or better than CSHHO for each of the HHO, WOA, SCA, and CSO algorithms over the 50 runs of each test function. CSHHO obtained better results than HHO on 21 test functions, better than WOA on 22, better than SCA on 23, and better than CSO on 22. The results of the corrected Wilcoxon signed-rank test at the 5% significance level were then analyzed: if the *p*-value was greater than 0.05, the algorithm was considered the same as CSHHO; otherwise, it was considered significantly different. In most cases the *p*-values of the Wilcoxon signed-rank test were below 0.05, indicating that the CSHHO algorithm was significantly different from the other compared algorithms; all results were corrected by the Bonferroni–Holm correction. In Table 8, analysis of the Friedman test values showed that the value for CSHHO was 2.57, lower than those of the traditional optimization algorithms, so the CSHHO algorithm's performance was better than that of the other meta-heuristic algorithms.
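The "+/=/−" bookkeeping described above can be sketched as a simple counting rule. This is an illustrative sketch, assuming a function counts as "=" when the corrected *p*-value exceeds the 0.05 level and that the sign of the mean-error gap otherwise decides the win or loss:

```python
def win_tie_loss(challenger_means, baseline_means, p_adjusted, alpha=0.05):
    """Count "+/=/-" outcomes of a challenger vs. a baseline per function.

    A function counts as "=" when the Holm-adjusted p-value exceeds alpha
    (no significant difference); otherwise the lower mean error wins.
    Returns (wins, ties, losses) from the challenger's point of view.
    """
    wins = ties = losses = 0
    for c, b, p in zip(challenger_means, baseline_means, p_adjusted):
        if p > alpha:
            ties += 1
        elif c < b:
            wins += 1
        else:
            losses += 1
    return wins, ties, losses
```
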


**Table 6.** Results of a comparison with classic meta-heuristic algorithms.






**Table 8.** Average rankings obtained by classic meta-heuristic in the Friedman test, and the best result is shown in boldface.

Figure 4 shows the convergence curves of CSHHO and the traditional optimization algorithms HHO, WOA, SCA, and CSO under the 23 benchmark functions, including the performance under the UM functions (F1–F7) and the MM functions (F8–F23). The function error value was defined as log(*F*(*x*) − *F*(*x*<sup>∗</sup>)), where *F*(*x*) was the mean value found over all iterations and *F*(*x*<sup>∗</sup>) was the optimal value recorded for the 23 benchmark functions. Under the UM functions (F1–F7), the CSHHO algorithm converged with higher accuracy and faster than the other algorithms, indicating that the exploitation performance of the CSHHO algorithm was improved; under the MM functions (F8–F23), the CSHHO algorithm likewise converged with higher accuracy and faster than the other algorithms, indicating that its exploration performance was also improved. From F8–F23, the CSHHO algorithm did not fall into a local optimum region from which it could not escape. CSHHO converged faster in F9, F11, and F16; in F12, F13, F17, F19, and F21–F23, although the convergence curve of the CSHHO algorithm was smoother and converged more slowly in the initial iterations, it overtook the other algorithms as the search progressed. In F10 and F15, the CSHHO algorithm not only explored the dominant region with a faster convergence speed but also led the remaining algorithms in search accuracy. Therefore, the CSHHO algorithm benefits from the Gauss chaotic mapping, which enhances the population initialization, as well as from the adaptive weighting mechanism, variable spiral position updating mechanism, and optimal neighborhood disturbance mechanism, which enhance the exploration and exploitation ability of the algorithm; CSHHO is less likely to stagnate in the current search region and has an increased ability to jump out of local optimal regions.


**Figure 4.** Convergence curves of CSHHO and the classic meta-heuristic algorithms.

#### 4.2.3. Comparison with HHO Variants

In order to verify the effectiveness of the CSHHO algorithm against HHO variants from recent years, this subsection compares the CSHHO algorithm with recently published advanced variants, GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA, observing the mean accuracy (Mean) and stability (Std) of each algorithm. Table 9 presents the results of the experiments. Next, nonparametric tests, the Wilcoxon signed-rank test and the Friedman test, were used to comprehensively assess the performance differences between CSHHO and GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA. The configuration of the experiments is the same as in Section 4.2.2; "+/=/−" counts the number of CSHHO results over the 50 runs of each test function that are worse than, similar to, or better than the comparison algorithm. Table 10 presents the results of the Bonferroni–Holm corrected Wilcoxon signed-rank tests, and Table 11 presents the results of the Friedman test.

Analyzing the data in Table 9, CSHHO has some advantages over the advanced HHO variants: under the UM functions (F1–F7), CSHHO outperforms the other algorithms in F1–F4, F6, F7, which indicates that CSHHO has further enhanced global best-finding capability compared to the advanced HHO variants. Under the MM functions (F8–F23) the CSHHO's performance outperforms the remaining algorithms in F9, F10, F11, F16, F17, F18, F19, F21, which indicates that CSHHO can explore the peak-to-peak information deeply and effectively to avoid the algorithm from entering the local optimum. In summary, CSHHO has a good ability to develop and explore and avoid local optima.

Table 10 shows the results of the corrected Wilcoxon signed-rank test at the 5% significance level for CSHHO against GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA. The *p*-values indicate whether the numerical results of the algorithms involved in the comparison differ significantly from CSHHO; if the *p*-value is greater than 0.05, the numerical results of the algorithm are considered the same as CSHHO; otherwise, they are considered significantly different. Analyzing the Bonferroni–Holm corrected *p*-values in Table 10, under the 23 classical test functions only GCHHO shows little difference from CSHHO, on the F9–F11 and F16–F18 test functions; on the rest of the test functions the differences are significant. Moreover, analyzing the results of the Friedman test in Table 11, the value of CSHHO is 1.70, which is lower than the others. These results indicate that CSHHO has an advantage in optimization over the above algorithms.

Figure 5 shows the convergence curves of CSHHO and the advanced variants GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA under the 23 benchmark functions, including the performance under the UM functions (F1–F7) and the MM functions (F8–F23). The function error value is defined as log(*F*(*x*) − *F*(*x*<sup>∗</sup>)), where *F*(*x*) is the mean value found over all iterations and *F*(*x*<sup>∗</sup>) is the optimal value recorded for the 23 benchmark functions. Under the UM functions (F1–F7), the CSHHO algorithm improves the convergence accuracy in F1–F4, F6, and F7 compared to the other algorithms, and its convergence speed is better in F1–F7. Under the MM functions (F8–F23), the CSHHO algorithm does not fall into a local optimal region from which it cannot escape: in F9, F10, F11, F16, F17, F18, F19, and F21, CSHHO explores the dominant region well and is ahead of the other algorithms in search accuracy. In F9, F10, F11, F12, F13, F21, F22, and F23, CSHHO has smooth convergence curves and a faster convergence speed. In F16–F20, although the convergence speed of the CSHHO algorithm is slow, the convergence curve does not fluctuate greatly, which indicates that the CSHHO algorithm has good search ability and does not become trapped in a local optimum it cannot escape. In summary, the performance of the CSHHO algorithm is further improved compared to the GCHHO algorithm and the other advanced algorithms; compared to these advanced variants, the CSHHO algorithm is effective.


**Table 9.** Results of a comparison with the advanced meta-heuristic algorithms.






**Figure 5.** Convergence curves of CSHHO and the advanced meta-heuristic algorithms.

#### 4.2.4. Scalability Test on CSHHO

Dimensional data are an important basis for analyzing how the number of factors to be optimized influences an algorithm, and the purpose of the scalability test is to further verify the overall performance and stability of the optimization model. The experimental subjects in this section are CSHHO and HHO, and 29 CEC 2017 functions [72] in 50 and 100 dimensions, respectively, are used for the scalability experiments. The experimental parameters and the experimental environment are consistent with the previous experiments except for the dimensionality settings, and Table 12 shows the experimental results using Mean and Std.

The best numerical results for CSHHO and HHO are set in bold, and results where the two are equivalent are both set in bold. Under the UM functions (F1–F2), the numerical results of CSHHO are overall better than those of HHO, and CSHHO continues to maintain some advantage as the number of dimensions increases. Under the MM function tests (F3–F9), CSHHO's performance in 50 and 100 dimensions was generally better than that of HHO, except for F6 and F7 in 50 dimensions and F4 in 100 dimensions. In the hybrid functions (F10–F19), CSHHO performs better on the rest of the test sets, except for F11 and F12, where it performs worse than HHO. In the composition functions (F20–F29), it still performs well: its accuracy in functions F21, F22, and F26 is higher than HHO's, its effect in functions F22, F23, F24, and F26 is the same as HHO's, and its effect in functions F23 (50 dimensions) and F26 (100 dimensions) is lower than HHO's. In general, compared with HHO, CSHHO better balances the exploration and exploitation processes as the number of dimensions increases.


