1. Introduction
Metaheuristic optimization methods are widely used to solve problems in several fields of science and engineering. Population-based metaheuristics iteratively generate new populations to increase the diversity of the current generation, which increases the probability of reaching the optimum of the considered problem. These algorithms are proposed as a replacement for exact optimization algorithms when the latter cannot reach an acceptable solution, whether because of the characteristics of the objective function or because of a search space so wide that an exhaustive search becomes impractical. In addition, classical optimization methods, such as greedy-based algorithms, rely on several assumptions that make it hard to solve the considered problem.
Metaheuristic methods, on the one hand, place no restrictions on the objective function; on the other hand, each optimization method proposes its own rules for evolving the population towards the optimum. These algorithms are suitable for general problems, but each one has different skills in global exploration and local exploitation.
Some of the proposed algorithms that have proven effective in several areas of science and engineering are the following: the mine blast algorithm (MBA) [1], based on the mine bomb explosion concept; the manta ray foraging optimization method (MRFO) [2], based on the intelligent behaviors of manta rays; the crow search algorithm (CSA) [3], based on the behavior of crows; the ant colony optimization (ACO) algorithm [4], which imitates the foraging behavior of ant colonies; the biogeography-based optimization (BBO) algorithm [5], which improves solutions stochastically and iteratively; the grenade explosion method (GEM) [6], based on the characteristics of a grenade explosion; the particle swarm optimization (PSO) algorithm [7], based on the social behavior of fish schooling or bird flocking; the firefly (FF) algorithm [8], inspired by the flashing behavior of fireflies; the artificial bee colony (ABC) algorithm [9], inspired by the foraging behavior of honey bees; the gravitational search algorithm (GSA) [10], based on Newton's law of gravity; and the shuffled frog leaping (SFL) algorithm [11], which imitates the collaborative behavior of frogs; among others. Many of these algorithms require configuration parameters that must be correctly tuned for the problem to be solved (see, for example, [12]); otherwise, their exploitation and exploration capabilities can degrade. If the exploitation capacity degrades, the number of generated populations must be increased, while if the exploration capacity deteriorates, the quality of the solution may worsen.
Other proposed algorithms that have also been shown to be effective in various areas of science and engineering, but that have no algorithm-specific parameters, include: the sine cosine algorithm (SCA) [13], based on the sine and cosine trigonometric functions; the teaching-learning-based optimization (TLBO) algorithm [14], based on the processes of teaching and learning; the supply-demand-based optimization method (SDO) [15], based on both the demand relation of consumers and the supply relation of producers; the Jaya algorithm [16], which randomly moves candidate solutions toward the best solution and away from the worst one; the Harris hawks optimization method (HHO) [17], based on the cooperative behavior and chasing style of Harris' hawks; and the Rao optimization algorithms [18]; among others.
One of the widely used techniques to improve optimization algorithms is chaos theory, which studies nonlinear dynamic systems characterized by a high sensitivity to their initial conditions [19,20]. Chaotic systems can be applied to replace pseudo-random number generators (PRNGs) when producing the control parameters or when performing local searches [21,22,23,24,25,26,27,28,29,30,31,32,33,34]. However, the improvement obtained by using chaotic systems instead of PRNGs may be restricted to the problem under consideration or to a set of problems with similar characteristics.
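To illustrate the idea, the following minimal C sketch replaces calls to a uniform PRNG with values drawn from a chaotic map. The logistic map is used here purely as a representative example; the works cited above employ a variety of maps, so this is a generic pattern rather than the specific scheme of any of [21-34].

#include <stdio.h>

/* Illustrative state; any value in (0,1) away from the map's fixed points. */
static double chaos_state = 0.7;

/* Logistic map x_{n+1} = 4 x_n (1 - x_n); returns values in (0,1). */
static double chaotic_rand(void)
{
    chaos_state = 4.0 * chaos_state * (1.0 - chaos_state);
    return chaos_state;
}

int main(void)
{
    /* used wherever rand()/RAND_MAX would appear in the optimizer */
    for (int i = 0; i < 5; i++)
        printf("%f\n", chaotic_rand());
    return 0;
}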
Hybridization is a well-known strategy for boosting the capabilities of optimization algorithms. Since no single metaheuristic outperforms all others on every problem, hybridization can merge the capabilities of different algorithms into one system [35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52]. Many of these algorithms require the correct setting of control parameters, and merging several of them into a single solution increases the complexity of accurately adjusting those parameters. Furthermore, some hybridization techniques can become complicated if the management and replacement strategies for the individuals in the populations are not similar. Moreover, when chaos is applied, hybrid algorithms may provide excellent performance for only a limited number of applications.
The algorithms proposed in this work are hybridizations of seven high-performing optimization algorithms that satisfy two requirements: (i) they must be free of algorithm-specific control parameters, and (ii) their population management should allow hybridization not only at the population level but also at the individual level.
The remainder of this paper is organized as follows. Section 2 presents a brief description of the optimization algorithms used for the hybridizations. Section 3 describes the hybrid algorithms in detail, which are analyzed in Section 4. Finally, concluding remarks are drawn in Section 5.
2. Preliminaries
As mentioned above, among the best parameter-free algorithms are the Jaya algorithm [16], the SCA algorithm [13], the supply-demand-based optimization method (SDO) [15], Rao's optimization algorithms [18], the Harris hawks optimization method (HHO) [17], and the teaching-learning-based optimization (TLBO) algorithm [14]. Among these proposals, the HHO algorithm is the most complex. It consists of two phases; during the first one, the elements of the population are replaced without comparing the fitness of the associated solutions, which is an undesirable strategy for hybrid algorithms. In addition, the SDO algorithm, despite its impressive initial results, works with two populations, which prevents its integration into our hybrid proposals. For these reasons, HHO and SDO are not considered further.
The Jaya optimization algorithm and the three new Rao's optimization algorithms (i.e., RAO1, RAO2, and RAO3) are described in Algorithm 1. The Jaya optimization algorithm has been successfully used for solving a large number of large-scale industrial problems [53,54,55,56,57,58,59]. The three new Rao's optimization algorithms are metaphor-less algorithms based on the best and worst solutions obtained during the optimization process and the random interactions between the candidate solutions [60,61,62]. In Algorithms 1–3 and 5–9, $G$ is the number of generations; $N$ is the number of individuals in population $P$; $n$ is the number of variables of the objective function $F$; $x_m$ is the $m$th individual in the current population; $l_k$ and $h_k$ are the low and high bounds of the $k$th variable of $F$, respectively; $x_{best}$ and $x_{worst}$ are the best and worst individuals of the current population $P$, respectively; $x'_m$ is the $m$th new individual that can replace the current $m$th individual of population $P$; and $r$ is a uniformly distributed random number in $[0, 1]$.
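For concreteness, the notation above can be mapped onto a C representation such as the following sketch; the names, the constant sizes, and the sample objective (a sphere function) are our own illustrative choices, not the authors' implementation. The initialization corresponds to lines 4–10 of Algorithm 1.

#include <stdlib.h>

#define N    30   /* population size (illustrative value) */
#define NVAR 10   /* number of design variables n (illustrative value) */

static double P[N][NVAR];        /* current population: x_m, m = 0..N-1 */
static double fit[N];            /* stored fitness F(x_m) */
static double l[NVAR], h[NVAR];  /* low/high bounds l_k and h_k */

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Sample objective (sphere function); any cost F can be plugged in here. */
static double F(const double *x)
{
    double s = 0.0;
    for (int k = 0; k < NVAR; k++) s += x[k] * x[k];
    return s;
}

/* Lines 4-10 of Algorithm 1: random initialization within the bounds. */
static void init_population(void)
{
    for (int m = 0; m < N; m++) {
        for (int k = 0; k < NVAR; k++)
            P[m][k] = l[k] + rand01() * (h[k] - l[k]);
        fit[m] = F(P[m]);  /* compute and store function fitness */
    }
}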
The SCA algorithm, presented in Algorithm 2, has been proven to be efficient in several applications [63,64,65,66,67,68,69].
The TLBO algorithm, described in Algorithm 3, is a two-phase algorithm consisting of a teacher phase and a learner phase. It has been proven effective in solving various engineering problems [70,71,72,73,74,75,76].
As mentioned earlier, the use of chaotic maps can improve the behavior of some metaheuristic methods. The 2D chaotic map reported in [33] has significantly improved the convergence rate of the Jaya algorithm [33,77]. The generation of the 2D chaotic map is shown in Algorithm 4, where the initial conditions are $x_0$, $y_0$, $\mu$, and $k$, and the computed values $x_i$ and $y_i$ lie in $[-1, 1]$. The chaotic Jaya algorithm (CJaya, for short) is shown in Algorithm 5, where the chaotic values used by the update rules are randomly extracted from the 2D chaotic map. Other chaotic maps have been applied to Jaya in [32,78]; however, they do not surpass the chaotic behavior of the aforementioned 2D map.
Because they present a similar structure, Algorithms 1–5 are used to design our hybrid algorithms.
Algorithm 1 Jaya and Rao algorithms |
- 1: Set $G$ and the population size $N$ (iterator over individuals: $m$)
- 2: Define the function cost $F$ (iterator over design variables: $k$)
- 3: Generate the initial population $P$
- 4: for $m = 1$ to $N$ do
- 5: for $k = 1$ to $n$ do
- 6: $r = rand[0, 1]$
- 7: $x_{m,k} = l_k + r\,(h_k - l_k)$
- 8: end for
- 9: Compute and store function fitness $F(x_m)$
- 10: end for
- 11: for $g = 1$ to $G$ do
- 12: Search for the current $x_{best}$ and $x_{worst}$
- 13: for $m = 1$ to $N$ do
- 14: Select the random individual $x_j$ {Only for RAO2 and RAO3}
- 15: for $k = 1$ to $n$ do
- 16: if Jaya then
- 17: $r_1 = rand[0, 1]$; $r_2 = rand[0, 1]$
- 18: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - |x_{m,k}|) - r_2 (x_{worst,k} - |x_{m,k}|)$
- 19: end if
- 20: if RAO1 then
- 21: $r_1 = rand[0, 1]$
- 22: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - x_{worst,k})$
- 23: end if
- 24: if RAO2 then
- 25: $r_1 = rand[0, 1]$; $r_2 = rand[0, 1]$
- 26: if $F(x_m)$ is better than $F(x_j)$ then
- 27: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - x_{worst,k}) + r_2 (|x_{m,k}| - |x_{j,k}|)$
- 28: else
- 29: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - x_{worst,k}) + r_2 (|x_{j,k}| - |x_{m,k}|)$
- 30: end if
- 31: end if
- 32: if RAO3 then
- 33: $r_1 = rand[0, 1]$; $r_2 = rand[0, 1]$
- 34: if $F(x_m)$ is better than $F(x_j)$ then
- 35: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - |x_{worst,k}|) + r_2 (|x_{m,k}| - x_{j,k})$
- 36: else
- 37: $x'_{m,k} = x_{m,k} + r_1 (x_{best,k} - |x_{worst,k}|) + r_2 (|x_{j,k}| - x_{m,k})$
- 38: end if
- 39: end if
- 40: if $x'_{m,k} < l_k$ then
- 41: $x'_{m,k} = l_k$
- 42: end if
- 43: if $x'_{m,k} > h_k$ then
- 44: $x'_{m,k} = h_k$
- 45: end if
- 46: end for
- 47: if $F(x'_m)$ is better than $F(x_m)$ then
- 48: $x_m = x'_m$ {replace the current population}
- 49: end if
- 50: end for
- 51: end for
- 52: Search for the current $x_{best}$
|
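As a minimal C sketch of the Jaya update (lines 16–19 of Algorithm 1, together with the bound check of lines 40–45), assuming the standard Jaya formulation; all names are illustrative:

#include <math.h>
#include <stdlib.h>

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Jaya update for one individual: move toward the best solution and away
 * from the worst one, then clamp each variable to its bounds. */
void jaya_update(const double *x, const double *best, const double *worst,
                 const double *l, const double *h, double *xnew, int n)
{
    for (int k = 0; k < n; k++) {
        double r1 = rand01(), r2 = rand01();
        xnew[k] = x[k] + r1 * (best[k]  - fabs(x[k]))
                       - r2 * (worst[k] - fabs(x[k]));
        if (xnew[k] < l[k]) xnew[k] = l[k];  /* lines 40-42 */
        if (xnew[k] > h[k]) xnew[k] = h[k];  /* lines 43-45 */
    }
}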
Algorithm 2 SCA optimization algorithm |
- 1: Set $a$
- 2: Set $G$ and the population size $N$ (iterator over individuals: $m$)
- 3: Define the function cost $F$ (iterator over design variables: $k$)
- 4: Generate the initial population $P$ {lines 4–10 of Algorithm 1}
- 5: for $g = 1$ to $G$ do
- 6: Search for the current $x_{best}$
- 7: $r_1 = a - g\,a/G$
- 8: for $m = 1$ to $N$ do
- 9: for $k = 1$ to $n$ do
- 10: $r_2 = 2\pi\,rand[0, 1]$; $r_3 = 2\,rand[0, 1]$; $r_4 = rand[0, 1]$
- 11: if $r_4 < 0.5$ then
- 12: $x'_{m,k} = x_{m,k} + r_1 \sin(r_2)\,|r_3 x_{best,k} - x_{m,k}|$
- 13: else
- 14: $x'_{m,k} = x_{m,k} + r_1 \cos(r_2)\,|r_3 x_{best,k} - x_{m,k}|$
- 15: end if
- 16: Check the bounds of $x'_{m,k}$ {lines 40–45 of Algorithm 1}
- 17: end for
- 18: if $F(x'_m)$ is better than $F(x_m)$ then
- 19: $x_m = x'_m$ {replace the current population}
- 20: end if
- 21: end for
- 22: end for
- 23: Search for the current $x_{best}$
|
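The SCA update of lines 7–15 of Algorithm 2 can be sketched in C as follows, under the standard SCA equations; the amplitude constant a = 2 is the value commonly used in the literature and is assumed here:

#include <math.h>
#include <stdlib.h>

#define PI 3.14159265358979323846

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* SCA update for one individual at generation g of G total generations. */
void sca_update(const double *x, const double *best, double *xnew,
                int n, int g, int G)
{
    double r1 = 2.0 - (double)g * (2.0 / G); /* linearly decreasing amplitude */
    for (int k = 0; k < n; k++) {
        double r2 = 2.0 * PI * rand01();
        double r3 = 2.0 * rand01();
        double r4 = rand01();
        if (r4 < 0.5)
            xnew[k] = x[k] + r1 * sin(r2) * fabs(r3 * best[k] - x[k]);
        else
            xnew[k] = x[k] + r1 * cos(r2) * fabs(r3 * best[k] - x[k]);
    }
}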
Algorithm 3 TLBO algorithm |
- 1: Set $phase = teacher$
- 2: Set $G$ and the population size $N$ (iterator over individuals: $m$)
- 3: Define the function cost $F$ (iterator over design variables: $k$)
- 4: Generate the initial population $P$ {lines 4–10 of Algorithm 1}
- 5: Set $M_k = 0$, $k = 1, \dots, n$
- 6: for $g = 1$ to $G$ do
- 7: Search for the current $x_{best}$
- 8: Set the teaching factor $T_F$ (an integer random value in $\{1, 2\}$)
- 9: for $k = 1$ to $n$ do
- 10: $M_k = \frac{1}{N}\sum_{m=1}^{N} x_{m,k}$ {mean of the $k$th variable}
- 11: end for
- 12: for $m = 1$ to $N$ do
- 13: Select the random individual $x_j$
- 14: for $k = 1$ to $n$ do
- 15: if $phase = teacher$ then
- 16: $r = rand[0, 1]$
- 17: $x'_{m,k} = x_{m,k} + r\,(x_{best,k} - T_F M_k)$
- 18: end if
- 19: if $phase = learner$ then
- 20: $r = rand[0, 1]$
- 21: if $F(x_m)$ is better than $F(x_j)$ then
- 22: $x'_{m,k} = x_{m,k} + r\,(x_{m,k} - x_{j,k})$
- 23: else
- 24: $x'_{m,k} = x_{m,k} + r\,(x_{j,k} - x_{m,k})$
- 25: end if
- 26: end if
- 27: Check the bounds of $x'_{m,k}$ {lines 40–45 of Algorithm 1}
- 28: end for
- 29: if $F(x'_m)$ is better than $F(x_m)$ then
- 30: $x_m = x'_m$ {replace the current population}
- 31: end if
- 32: end for
- 33: if $phase = teacher$ then
- 34: $phase = learner$
- 35: else
- 36: $phase = teacher$
- 37: end if
- 38: end for
- 39: Search for the current $x_{best}$
|
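A compact C sketch of the two TLBO update rules (lines 15–26 of Algorithm 3), following the standard TLBO equations for minimization; mean[] holds the per-variable population mean of line 10, TF is the teaching factor of line 8, and all names are illustrative:

#include <stdlib.h>

static double rand01(void) { return (double)rand() / RAND_MAX; }

/* Teacher phase: pull the learner toward the teacher (best individual). */
void tlbo_teacher(const double *x, const double *best, const double *mean,
                  double *xnew, int n, int TF)
{
    for (int k = 0; k < n; k++)
        xnew[k] = x[k] + rand01() * (best[k] - TF * mean[k]);
}

/* Learner phase: learn from a random partner xj; fm and fj are the
 * fitnesses of x and xj (lower is better for minimization). */
void tlbo_learner(const double *x, const double *xj, double fm, double fj,
                  double *xnew, int n)
{
    for (int k = 0; k < n; k++) {
        double r = rand01();
        xnew[k] = (fm < fj) ? x[k] + r * (x[k] - xj[k])
                            : x[k] + r * (xj[k] - x[k]);
    }
}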
Algorithm 4 2D chaotic map |
- 1: Set $x_0$, $y_0$, $\mu$, and $k$
- 2: for $i = 0$ to $CGen - 1$ do
- 3: $x_{i+1} = 1 - \mu\,y_i^2$
- 4: $y_{i+1} = \cos(k \arccos(x_i))$
- 5: end for
|
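Assuming the map of [33] has the 2D cross form shown above (a logistic-type equation coupled with a Chebyshev-type equation), which is consistent with the four initial conditions and the [-1, 1] range mentioned in the text, Algorithm 4 could be coded in C as the following sketch; the sequence length is an illustrative choice:

#include <math.h>

#define CGEN 10000 /* length of the chaotic sequence (assumed value) */

static double cx[CGEN], cy[CGEN];

/* 2D cross chaotic map: x_{i+1} = 1 - mu*y_i^2 (logistic-type),
 * y_{i+1} = cos(k*acos(x_i)) (Chebyshev-type). For mu in (0, 2] the
 * generated values stay in [-1, 1], so acos() remains well defined. */
void chaotic_map(double x0, double y0, double mu, double k)
{
    cx[0] = x0;
    cy[0] = y0;
    for (int i = 0; i < CGEN - 1; i++) {
        cx[i + 1] = 1.0 - mu * cy[i] * cy[i];
        cy[i + 1] = cos(k * acos(cx[i]));
    }
}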
Algorithm 5 Chaotic 2D Jaya algorithm |
- 1: Set the 2D chaotic map values {Algorithm 4}
- 2: Set $G$ and the population size $N$ (iterator over individuals: $m$)
- 3: Define the function cost $F$ (iterator over design variables: $k$)
- 4: Generate the initial population $P$ {lines 4–10 of Algorithm 1}
- 5: for $g = 1$ to $G$ do
- 6: Search for the current $x_{best}$
- 7: Search for the current $x_{worst}$
- 8: Set the scaling factor $SF$ (an integer random value)
- 9: for $m = 1$ to $N$ do
- 10: Select the random individual $x_j$
- 11–13: Set the coefficients of the chaotic update rules [33]
- 14: for $k = 1$ to $n$ do
- 15: Extract the chaotic values from the 2D map
- 16–26: Compute $x'_{m,k}$ with the chaotic Jaya update rules of [33], which replace the uniform random numbers of Algorithm 1 with the extracted chaotic values
- 27: Check the bounds of $x'_{m,k}$ {lines 40–45 of Algorithm 1}
- 28: end for
- 29: if $F(x'_m)$ is better than $F(x_m)$ then
- 30: $x_m = x'_m$ {replace the current population}
- 31: end if
- 32: end for
- 33: end for
- 34: Search for the current $x_{best}$
|
3. Hybrid Algorithms
The proposed hybrid algorithms are designed using the seven algorithms described in Section 2. These algorithms have been selected because of their performance in solving constrained and unconstrained functions, but also because they share a similar structure that allows the implementation of different hybridization strategies.
Algorithm 6 shows the skeleton of the proposed hybrid algorithms, which includes all common and algorithm-specific tasks but omits the procedure for updating the current population. Since the TLBO algorithm is a two-phase algorithm, the proposed hybrid algorithms apply these two phases consecutively to each individual. In contrast to the other algorithms, where a single phase is executed, a control flag causes the same individual to be processed twice when the TLBO algorithm is used (see lines 24–29 of Algorithm 6). In Algorithms 6–9, the variable $alg$ determines the algorithm responsible for producing a new individual (see line 17 of Algorithm 6).
Given that only algorithms free of control parameters have been considered, proposals that would require the inclusion of control parameters have been discarded. Following these guidelines, we have designed three hybrid algorithms, an analysis of which is provided in Section 4. The first proposed hybrid algorithm, shown in Algorithm 7 and referred to as HYBPOP, processes the entire population in each iteration using the same algorithm. This is the most straightforward hybridization technique, in which the requirement to follow the structure given by Algorithm 6 is not mandatory for all algorithms. In Algorithms 7–9, $NoA$ is the number of parameter-free algorithms involved in the hybrid proposals.
Algorithm 6 Skeleton of hybrid algorithms |
- 1: Set $G$ and the population size $N$ (iterator over individuals: $m$)
- 2: Define the function cost $F$ (iterator over design variables: $k$)
- 3: Set $NoA$; $alg$; $phase = teacher$
- 4: Generate the initial population $P$ {lines 4–10 of Algorithm 1}
- 5: for $g = 1$ to $G$ do
- 6: Search for the current $x_{best}$ and $x_{worst}$
- 7: Set the scaling factor $SF$ and the teaching factor $T_F$ (integer random values)
- 8: for $k = 1$ to $n$ do
- 9: $M_k = \frac{1}{N}\sum_{m=1}^{N} x_{m,k}$ {mean value, used by TLBO}
- 10: end for
- 11: $phase = teacher$
- 12: for $m = 1$ to $N$ do
- 13: Select random individual $x_j$
- 14–15: Set $alg$ and its associated state {the selection is defined by each hybrid strategy, Algorithms 7–9}
- 16: for $k = 1$ to $n$ do
- 17: ⇒ ($alg$) Compute $x'_{m,k}$ using $alg$ (one from Algorithms 1–5)
- 18: Check the bounds of $x'_{m,k}$ {lines 40–45 of Algorithm 1}
- 19: end for
- 20: if $F(x'_m)$ is better than $F(x_m)$ then
- 21: $x_m = x'_m$ {Replace the current population}
- 22: end if
- 23: if $alg$ = TLBO then
- 24: if $phase = teacher$ then
- 25: $phase = learner$; $m = m - 1$ {process the same individual again}
- 26: else
- 27: $phase = teacher$
- 28: end if
- 29: end if
- 30: end for
- 31: end for
- 32: Search for the current $x_{best}$
|
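One natural C realization of line 17 of Algorithm 6 is a function-pointer table: every parameter-free algorithm exposes the same update signature, and the skeleton dispatches on alg. The context structure and all names below are our own sketch, not the authors' code; the per-algorithm steps would be the updates sketched in Section 2.

/* Read-only context shared by all update rules. */
typedef struct {
    const double *x, *best, *worst, *xj, *mean, *l, *h;
    int n, g, G;
} ctx_t;

typedef void (*update_fn)(const ctx_t *c, double *xnew);

/* One update function per parameter-free algorithm (bodies elsewhere). */
void jaya_step(const ctx_t *c, double *xnew);
void cjaya_step(const ctx_t *c, double *xnew);
void sca_step(const ctx_t *c, double *xnew);
void rao1_step(const ctx_t *c, double *xnew);
void rao2_step(const ctx_t *c, double *xnew);
void rao3_step(const ctx_t *c, double *xnew);
void tlbo_step(const ctx_t *c, double *xnew);

static update_fn table[] = { jaya_step, cjaya_step, sca_step, rao1_step,
                             rao2_step, rao3_step, tlbo_step };
enum { NoA = sizeof table / sizeof table[0] };

/* Line 17 of Algorithm 6: the algorithm selected by 'alg' produces x'_m. */
static void compute_new_individual(int alg, const ctx_t *c, double *xnew)
{
    table[alg % NoA](c, xnew);
}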
Algorithm 7 HYBPOP: Hybrid algorithm based on population |
- 1: Set $NoA$ {number of parameter-free algorithms}
- 2: Set $a = 1$ {index of the active algorithm}
- 3: Perform the common initialization tasks of Algorithm 6
- 4: for $g = 1$ to $G$ do
- 5: Search for the current $x_{best}$ and $x_{worst}$
- 6: for $m = 1$ to $N$ do
- 7: for $k = 1$ to $n$ do
- 8: Compute $x'_{m,k}$ using $algorithm(a)$
- 9: end for
- 10: end for
- 11: $a = a + 1$
- 12: if $a > NoA$ then
- 13: $a = 1$
- 14: end if
- 15: end for
- 16: Search for the current $x_{best}$
|
The second algorithm, named HYBSUBPOP, is described in Algorithm 8. It logically splits the population into $NoA$ sub-populations; during the optimization process, each sub-population is processed by one of the seven algorithms mentioned previously.
Algorithm 8 HYBSUBPOP: Hybrid algorithm based on sub-populations |
- 1: Set $NoA$ {number of parameter-free algorithms}
- 2: Split $P$ into $NoA$ sub-populations
- 3: Perform the common initialization tasks of Algorithm 6
- 4: for $g = 1$ to $G$ do
- 5: for $m = 1$ to $N$ do
- 6: $s$ = sub-population index of individual $m$
- 7: $alg = algorithm(s)$
- 8: for $k = 1$ to $n$ do
- 9: Compute $x'_{m,k}$ using $alg$
- 10: end for
- 11: end for
- 12: end for
- 13: Search for the current $x_{best}$
|
Algorithm 9 shows the third proposed hybrid algorithm, dubbed HYBIND, in which each individual of the population is handled by a different algorithm in each iteration.
Algorithm 9 HYBIND: Hybrid algorithm based on individuals |
- 1: Set $NoA$ {number of parameter-free algorithms}
- 2: Set $a = 1$ {index of the active algorithm}
- 3: Perform the common initialization tasks of Algorithm 6
- 4: for $g = 1$ to $G$ do
- 5: Search for the current $x_{best}$ and $x_{worst}$
- 6: for $m = 1$ to $N$ do
- 7: $alg = algorithm(a)$
- 8: for $k = 1$ to $n$ do
- 9: Compute $x'_{m,k}$ using $alg$
- 10: end for
- 11: $a = a + 1$
- 12: if $a > NoA$ then
- 13: $a = 1$
- 14: end if
- 15: end for
- 16: end for
- 17: Search for the current $x_{best}$
|
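The difference between the three schedulers reduces to which algorithm index handles individual m at generation g. Under our reading of Algorithms 7–9, minimal C sketches of the three policies are:

enum { NoA = 7 };  /* number of parameter-free algorithms */

/* HYBPOP: one algorithm per generation, rotated as generations advance. */
static int alg_hybpop(int g, int m) { (void)m; return g % NoA; }

/* HYBSUBPOP: a fixed algorithm per sub-population for the whole run
 * (N is the population size, giving N/NoA individuals per sub-population). */
static int alg_hybsubpop(int m, int N) { return (m / (N / NoA)) % NoA; }

/* HYBIND: the algorithm changes from one individual to the next. */
static int alg_hybind(int g, int m) { return (g + m) % NoA; }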
It is worth noting that the aim of the proposed hybrid algorithms is neither to improve the convergence rate of each constituent algorithm taken separately nor to perform optimally for one particular problem, but rather to deliver outstanding performance across a large number of problems without adjusting any control parameters of the considered algorithms.
4. Numerical Experiments
In this section, the performance of the proposed hybrid algorithms is analyzed by solving 28 well-known unconstrained functions (see Table 1), whose definitions can be found in [77]. Both the hybrid proposals and the original algorithms were implemented in the C language, compiled with GCC v.4.4.7 [79], and run on an Intel Xeon E5-2620 v2 processor at 2.1 GHz. C implementations of the original algorithms are not available on the Internet, although their Java/Matlab implementations are commonly available.
The data collected from the experimental analysis are as follows:
NoR-AI: the total number of replacements of any individual.
NoR-BI: the total number of replacements of the current best individual.
NoR-BwT: the total number of replacements of the current best individual with an error below the prescribed tolerance.
LtI-AI: the last iteration in which a replacement of any individual occurs.
LtI-BI: the last iteration in which a replacement of the best individual occurs.
Three of the five collected metrics (the NoR- metrics) count the number of times the current individual $x_m$ is replaced by a new individual $x'_m$ with a better fitness (see line 21 of Algorithm 6), while the remaining two (the LtI- metrics) record the last generation (iteration) in which at least one individual was replaced.
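These metrics can be gathered with a few counters placed at the replacement point (line 21 of Algorithm 6), as in the following sketch; tol stands for the prescribed tolerance, whose exact value is not reproduced here, and optimum is the known optimum of the benchmark function:

#include <math.h>

static long nor_ai, nor_bi, nor_bwt;  /* replacement counters */
static int  lti_ai, lti_bi;           /* last improving iterations */

/* Called whenever x_m is replaced by x'_m at generation g. */
static void on_replacement(int g, double new_fit, double best_fit,
                           double optimum, double tol)
{
    nor_ai++;                 /* NoR-AI: any individual replaced */
    lti_ai = g;               /* LtI-AI: last iteration with a replacement */
    if (new_fit < best_fit) {
        nor_bi++;             /* NoR-BI: current best improved */
        lti_bi = g;           /* LtI-BI */
        if (fabs(new_fit - optimum) < tol)
            nor_bwt++;        /* NoR-BwT: best within tolerance */
    }
}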
All data reported below were obtained from 50 runs, a fixed number of iterations $G$, and two population sizes, the larger being 210. The maximum values of the analyzed data are listed in Table 2.
Table 3, Table 4 and Table 5 show the data of all the considered algorithms run independently, i.e., without hybridization. As expected, the behavior of the different algorithms does not follow a common pattern; moreover, it depends on the objective function. Regarding global convergence, both TLBO and CJaya behave better, but at a higher order of complexity (see [77,80]). Note also that TLBO generates two new individuals in each iteration: one in the teacher phase and another in the learner phase. The values in brackets in Table 3, Table 4 and Table 5 are the standard deviations of the data over 50 runs. Since heuristic optimization algorithms are partially based on randomness, the standard deviations are high: on average, approximately 16%, 22%, 15%, 30%, 23%, 23%, and 22% for Jaya, CJaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively.
An important aspect not shown in Table 3, Table 4 and Table 5 is whether the solution obtained by each algorithm is acceptable. In particular, the original algorithms fail to reach the prescribed solution tolerance for 3, 8, 2, 4, 7, 5, and 2 functions for Jaya, CJaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively. Therefore, among the original algorithms there is none whose behavior is always the best, which justifies the development of a generalist hybrid system able to solve a large number of benchmark functions and engineering problems.
Comparing the quality of the solutions obtained by the proposed hybrid algorithms, HYBSUBPOP is the worst one, because the same pure algorithm is always applied to the same sub-population, which degrades performance for small populations. In contrast, the HYBPOP and HYBIND algorithms apply the selected algorithms to all individuals, which exploits the hybridization better. The HYBSUBPOP algorithm fails to reach the prescribed solution tolerance for 3 functions (F11, F23, and F27), whereas HYBPOP and HYBIND fail for only one function each (F27 and F11, respectively). If the population size is increased to 210, the HYBIND algorithm succeeds on all functions; thus, HYBIND performs slightly better than HYBPOP.
As stated above, local exploration improves in the HYBPOP method and especially in the HYBIND method. Figure 1 and Figure 2 show the convergence curves of all the individual methods and of the three proposed hybrid methods for the first 1000 and 100 iterations, respectively, for functions F1, F8, F11, and F18. Each point in both figures is the average of the data obtained from 10 runs. As shown in these figures, the curves of the three hybrid methods are close to those of the best single algorithm for each function. Therefore, although global exploitation does not improve on all methods, the hybrid methods behave similarly to the best individual method for each function, which is not always the same one.
Table 6 sorts the algorithms according to the number of iterations required to obtain an error below the prescribed tolerance; if an algorithm is missing from a row, it did not reach an acceptable solution. As seen from this table, no algorithm outperforms all the others. Moreover, a computational cost analysis is necessary to classify them correctly.
Table 7 exhibits the computational cost of the different algorithms. It reveals that the hybrid algorithms are mid-ranked in terms of computational cost, and that HYBIND is computationally less expensive than HYBPOP.
An analysis of the contribution of each algorithm within the HYBPOP and HYBIND algorithms is presented in Table 8, Table 9 and Table 10. Table 8 indicates the number of times an individual has been replaced by each algorithm; a replacement is accepted when the new individual improves the fitness of the current solution. As seen from Table 8, the HYBIND algorithm performs more replacements of individuals. In addition, the numbers of replacements per individual are nearly equal across the contributing algorithms, except for the RAO1 algorithm, whose contribution to the replacements is limited. The standard deviations (over 50 runs) are given in brackets; on average, they are equal to 14% for both HYBPOP and HYBIND.
Table 9 shows the last iteration in which each optimization algorithm replaces an individual in the population, i.e., the point beyond which it no longer improves the hybrid algorithm. As can be seen from Table 9, the optimization algorithms, except RAO1, work efficiently within the hybrid algorithms. It is also revealed that the considered algorithms contribute over more generations in the HYBIND algorithm. The mean standard deviation rises to 28% and 23% for HYBPOP and HYBIND, respectively, owing to randomness and to the lower LtI-AI values.
Finally, Table 10 shows the last iteration in which each algorithm obtains a new optimum. A careful analysis of the results in Table 10 reveals that, in the HYBPOP algorithm, the seven algorithms contribute similarly to reaching a better solution as new populations are produced. By contrast, in the HYBIND algorithm, the most powerful algorithms are CJaya and TLBO. It should be noted that the CJaya algorithm extracts random individuals from the population to generate new individuals, while the TLBO algorithm uses all the individuals of the population to obtain new ones. Therefore, these two algorithms exploit the results obtained by the rest of the algorithms to converge towards the optimum; this is due to their nature, in which the best solution correctly guides the individuals. The mean standard deviation is high because LtI-BI is strongly affected by randomness.
It has been found that the HYBSUBPOP algorithm does not reach excellent optimization performance because of the lack of harmony between the original algorithms, so it is left without further analysis. On the other hand, the exploration phases of the HYBPOP and HYBIND algorithms are similar, whereas HYBIND outperforms HYBPOP in terms of exploitation. In the HYBIND algorithm, the hybridization of the original algorithms is implemented at the individual level, contrary to the HYBPOP algorithm, in which hybridization is performed at the population level. Finally, the HYBPOP algorithm may include algorithms that update the population without analyzing the fitness of the associated solutions, while this fitness check is mandatory in the HYBIND algorithm.