1. Introduction
Retaining walls are civil engineering structures designed to restrain the lateral movement of a soil mass and provide free space in front of them. Retaining walls may be built of different materials, and the required specifications and properties of the design project determine the appropriate wall type. These properties include the soil profile at the project site, the magnitude of the lateral loads, construction time, equipment mobilization, construction area boundaries, the immediate environment, neighboring structures, and drainage conditions. Gravity walls, reinforced concrete walls, cantilever soldier pile walls, sheet piles, bulkheads, anchored piles, and diaphragm walls are all types of retaining walls used in practice. Retaining wall designs must satisfy both geotechnical and structural limit states. According to the geotechnical limit states, retaining structures must safely support the loads that result from the backfill and must satisfy the safety requirements for overturning, sliding, and the bearing capacity of the soil [
1,
2]. Due to the complexity of both limit states, the optimum design of the retaining wall system can be solved via metaheuristic methods.
Metaheuristic algorithms, which formalize a process, event, natural phenomenon, or theory through several phases of numerical iterations, are widely used in the design of structures. These algorithms are especially effective for the optimum cost design of reinforced concrete (RC) members, since RC involves two different materials whose complementary design eliminates the disadvantages of concrete and steel.
The processes and phenomena used in metaheuristic algorithms have a final goal, as in optimum design. In these processes, the best combination of options is sought as the design variables of the engineering problem, while the maximum gain is pursued. This maximum gain is the objective function of the optimum design problem, and in engineering the minimization of cost is essential. Several metaheuristic algorithms imitate different mechanisms: the process of natural selection in genetic algorithms (GA) [
3,
4], the behavior of swarms in particle swarm optimization (PSO) [
5], the generation of the universe in big bang–big crunch algorithm (BB–BC) [
6], the process of music in harmony search algorithm (HS) [
7], flashing behavior of fireflies in firefly algorithm (FA) [
8], and heat treatment method in metallurgy in simulated annealing (SA) [
9]. As a recent example used for the optimal placement of triaxial accelerometers for automated monitoring of high-rise buildings, natural spiral phenomena are imitated in hypotrochoid spiral optimization algorithm [
10].
A good balance between safety and cost is needed in the optimum design of RC structures, which combine two materials with different costs. For this reason, the optimum design of retaining walls is one of the major applications of structural optimization, with studies dating back to the 1980s [
11]. Many recent studies also employ metaheuristic algorithms; the major metaheuristic-based optimum design approaches for RC retaining walls are summarized in this section.
Ceranic et al. [
12] developed an SA-based method for cost optimization. Then, also by using SA, a parametric study was conducted by Yepes et al. [
13]. PSO was employed by Ahmadi-Nedushan and Varaee [
14] for the design of optimum variables of RC retaining walls. HS is another metaheuristic method that was employed by Kaveh and Abadi [
15] for optimization of RC retaining walls. A metaheuristic algorithm, the bacterial foraging optimization, was used by Ghazavi and Salvati [
16] for the problem. Camp and Akin [
17] investigated the cases of surcharge load, backfill slope, and internal friction angle of the retained soil by employing the BB–BC algorithm. Multi-objective optimization for both cost and constructability was conducted by Kaveh et al. [
18] based on non-dominated sorting GA. Khajehzadeh et al. [
19] adopted the gravitational search algorithm to the optimum RC retaining wall problem. Sheikholeslami et al. [
20] evaluated the performance of FA on the optimum design of RC retaining walls. Gandomi et al. [
21] compared accelerated PSO, FA, and cuckoo search for optimization design variables of RC retaining walls. Kaveh and Soleimani [
22] optimized RC retaining walls utilizing colliding bodies optimization (CBO) and democratic particle swarm optimization (DPSO) according to static loading using Coulomb and Rankine theory, and dynamic loading using the Mononobe-Okabe method. Sheikholeslami et al. [
23] used the hybrid combination of FA and HS for RC retaining wall optimization. Aydoğdu [
24] combined biogeography-based optimization with Levy flight distribution to solve optimum variables. One of the most recent studies on the optimum design employing the flower pollination algorithm (FPA) [
25] proves the popularity of the subject. Kalemci et al. [
26] employed grey wolf optimization (GWO) for optimum design of cantilever type RC retaining walls with a shear key. TLBO and Jaya algorithm (JA) were employed for RC counterfort retaining walls by Öztürk et al. [
27]. JA was also employed for optimum design of statically loaded cantilever retaining walls with toe projection restriction [
28] and optimum design of dynamically loaded RC retaining walls [
29].
In the present study, a hybrid algorithm combining adaptive HS and JA is proposed to improve the global search phase of HS with the single phase of JA, which uses both the best and worst solutions in one equation. Via this modification, the convergence and robustness of HS are improved; at the same time, the single-phase JA is improved by adding a second phase, namely the local search part of HS. To present and evaluate the performance of this algorithm, several cases of RC retaining walls are investigated. Ten different metaheuristic algorithms are used in the comparison, including the most classical ones such as GA, differential evolution (DE), and PSO; proven metaheuristic algorithms such as HS, artificial bee colony (ABC), and FA; and recently proposed new-generation algorithms such as TLBO, FPA, GWO, and JA. The research cases include 30 multiple cycles of the optimization methodology for the evaluation of the algorithms based on minimum cost, average cost, and standard deviation.
After this introduction section, the structure of the paper continues with the short descriptions of employed metaheuristic algorithms and the newly proposed hybrid algorithm in
Section 2. Then,
Section 3 includes the application of the methods to RC retaining wall cases. In the first case, which considers a shorter wall than Case 2, the optimum value of the toe slab/front encasement width of the retaining wall is found to be zero, and the optimum RC retaining wall is L-shaped. In the second case, an optimum design with a T-shaped wall is presented by using 30 multiple cycles of optimization. Also, the best population and iteration numbers are evaluated. Finally, a multiple-cycle evaluation is performed for different wall parameters defined as design constants by using HS and its modified versions. In
Section 4, the conclusion is given by separately considering the results of all cases.
2. Employed Metaheuristic Algorithms
In this section, 10 metaheuristic algorithms are briefly summarized together with the modifications adapted for the optimization problem. Also, the adaptive version of HS and its hybridization with JA are given.
2.1. Genetic Algorithm (GA)
Genetic algorithm (GA) is a metaheuristic algorithm developed at the beginning of the 1970s by J. Holland, who was inspired by biological systems, which adapt to their environment to become highly successful organisms. The algorithm is arranged around five biological processes, namely mating, reproduction, cloning, crossover, and mutation, intended for optimization applications [
4,
30].
New candidate solutions (child members) are generated following the mentioned biological processes during the evolution of the population (the optimization of design variables) and are grown from appropriate candidates (parent members) selected from the initial population. Next, crossover is applied between these members; whether crossover occurs is determined by the crossover probability [
31]. The mutation process is necessary because members with similar features may render crossover ineffective (Equation (1)). Finally, better solutions are selected by comparing new members with old ones and are transferred to the new generation; for this, the fitness values of the new solutions are considered.
In Equation (1), the mutation rate is used; q is a gene (design variable) randomly selected from the total set of design variables; and the new value of the q-th variable is generated between its lower and upper limit values. A random number between 0 and 1 is shown as rand( ).
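The uniform mutation of Equation (1) can be sketched in Python; the function name and the mutation-rate default below are illustrative, not taken from the paper.

```python
import random

def mutate(x, x_min, x_max, mr=0.05):
    """Uniform mutation sketch (cf. Equation (1)): with probability mr,
    a randomly chosen gene q is reset to a random value within its
    bounds. Names are illustrative, not the paper's notation."""
    child = list(x)
    if random.random() < mr:
        q = random.randrange(len(child))  # random gene (design variable)
        child[q] = x_min[q] + random.random() * (x_max[q] - x_min[q])
    return child
```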
2.2. Differential Evolution (DE)
Differential evolution (DE) is an evolutionary algorithm, which was developed by R. Storn and K. Price [
32], and can be considered an advanced version of GA. Like GA, DE applies mutation, crossover, and selection operations. Two parameters are used during the process: the crossover probability (CR) and the weighting factor (F). CR is a probability value that determines whether crossover will occur between the new solution and the solution handled at the beginning [33,34].
In the optimization process, a new solution is generated for each candidate solution by randomly selecting three different solutions. In mutation, all variables of the chromosome (candidate solution) are changed by using these three chromosomes (Equation (2)) [34,35].
After the mutation process, crossover (assignment from another randomly selected existing solution) is applied according to the CR parameter; otherwise, the current candidate solution keeps the value of the randomly selected chromosome (Equation (3)). Finally, the optimization of the design variables is completed by considering the fitness value (objective function), as in GA [34].
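The mutation and crossover steps can be illustrated with a minimal Python sketch of the widely used DE/rand/1/bin scheme; the paper's exact variant may differ, and all names are illustrative.

```python
import random

def de_trial(pop, j, F=0.8, CR=0.9):
    """DE mutation + binomial crossover sketch (cf. Equations (2)-(3)).
    Three distinct random members a, b, c form a donor vector; each gene
    of the trial comes from the donor with probability CR."""
    idx = [i for i in range(len(pop)) if i != j]
    a, b, c = random.sample(idx, 3)
    n = len(pop[j])
    j_rand = random.randrange(n)  # guarantees at least one donor gene
    trial = []
    for p in range(n):
        if random.random() < CR or p == j_rand:
            trial.append(pop[a][p] + F * (pop[b][p] - pop[c][p]))  # donor
        else:
            trial.append(pop[j][p])  # keep the current gene
    return trial
```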
2.3. Particle Swarm Optimization (PSO)
Kennedy and Eberhart [
5] presented the particle swarm optimization (PSO) algorithm in 1995, inspired by the natural behavior of insect swarms, bird flocks, and fish schools [
36]. Each member of the swarm is a particle that has a specific velocity and position in the search space. Additionally, a value called the inertia weight parameter (w) is used to combine local and global search and to balance them against each other (a difference from classic PSO). The new velocity and position are formalized in Equations (4) and (5), respectively. In the equations, the new velocity and position are computed for the i-th design variable; the best global position (among all particles) and the best local position (obtained in each iteration) are selected in terms of the objective function; the current position and velocity of the corresponding j-th particle are used; and two positive constants control the flying velocities [37,38].
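One PSO update for a single particle can be sketched as follows, using the standard inertia-weight form of Equations (4) and (5); the constants and names are illustrative assumptions.

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0):
    """One PSO velocity/position update sketch (cf. Equations (4)-(5));
    w is the inertia weight, c1/c2 the cognitive and social constants."""
    v_new = [w * vi
             + c1 * random.random() * (pb - xi)   # pull toward own best
             + c2 * random.random() * (gb - xi)   # pull toward swarm best
             for xi, vi, pb, gb in zip(x, v, p_best, g_best)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```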
2.4. Artificial Bee Colony (ABC) Algorithm
Karaboğa [
39] developed an algorithm known as the artificial bee colony (ABC) based on the natural food-searching behavior of bee colonies. The colony contains different bee groups, namely employed/worker, onlooker, and scout bees, which aim at the improvement of nectar quality. In the development of the algorithm, some rules are considered: the numbers of employed and onlooker bees equal the number of food-source locations, and an employed bee becomes a scout when the nectar of its source is exhausted. First comes the employed bee stage of the optimization, in which a random food source is selected and a new value of the p-th parameter of the j-th food source is defined; this step and the selection probability are formalized via Equations (6) and (7), respectively. The second stage is the onlooker bee stage, carried out via Equation (8); a comparison of food qualities (fitness values) is needed for this stage. Equation (8) is applied for all of the fn food sources, using the j-th and k-th food-source positions of the p-th parameter [39,40,41]:
Finally, employed bees whose sources are abandoned become scout bees. To find new food sources, the condition indicated in Equation (9) is applied, where a trial counter takes its value according to whether the design variables improve, and a limit parameter bounds this counter. The upper and lower limits of the i-th design variable are also used in the equation [40,42].
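The employed-bee neighborhood search of Equation (6) can be sketched in Python; the function name and the uniform range for the random factor are assumptions based on the standard ABC formulation.

```python
import random

def employed_bee_move(foods, j):
    """Employed-bee search sketch (cf. Equation (6)): one parameter p of
    the j-th food source is perturbed toward/away from a random other
    source k by a factor phi drawn uniformly from [-1, 1]."""
    k = random.choice([i for i in range(len(foods)) if i != j])
    p = random.randrange(len(foods[j]))
    phi = random.uniform(-1.0, 1.0)
    candidate = list(foods[j])
    candidate[p] = foods[j][p] + phi * (foods[j][p] - foods[k][p])
    return candidate
```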
2.5. Firefly Algorithm (FA)
Firefly algorithm was developed by Yang [
8] and is one of the population-based methods. The algorithm is based on the flashing ability of fireflies, a natural feature used to perform activities such as foraging and communicating with other fireflies. For the usage of this algorithm in optimization problems, the following assumptions are made [21,34,43]:
Fireflies are hermaphrodites (unisex) and attract one another in every condition.
The brighter k-th firefly attracts the less bright j-th firefly, because attractiveness (β) increases as brightness (I) increases. However, the k-th firefly continues to fly randomly when no firefly less bright than itself is found (for minimization problems) (Equation (10)).
The brightness of a firefly is determined by the objective function. Therefore, the brightness of the k-th firefly (I(xk)) at position x is proportional to the objective function (f(xk)).
In Equations (11) and (12), the attractiveness of the k-th firefly with respect to the j-th one and the minimum attractiveness (ranging between 0 and 1) are used; the new position is computed from any p-th design variable value belonging to the k-th and j-th fireflies, respectively; the distance between the j-th and k-th fireflies is also used, together with the total number of design variables.
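The movement of a dimmer firefly toward a brighter one can be sketched as below, assuming the standard FA formulation where attractiveness decays exponentially with squared distance; the parameter defaults are illustrative.

```python
import math
import random

def firefly_move(x_k, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Firefly movement sketch (cf. Equations (11)-(12)): firefly k moves
    toward the brighter firefly j with attractiveness
    beta0 * exp(-gamma * r^2), plus an alpha-scaled random step."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_k, x_j))  # squared distance
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(x_k, x_j)]
```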
2.6. Teaching–Learning-Based Optimization (TLBO)
Rao et al. developed an algorithm called teaching–learning-based optimization (TLBO) in 2011 [
44]. The basic idea comes from the principle of teaching students by a teacher and self-learning by themselves in a class.
The task of the teacher is to improve the knowledge level of the students, influence them, and produce higher grades; in doing so, he/she optimizes the class average. Moreover, students improve not only their knowledge but also their grades through communication, sharing knowledge, and investigation.
Optimization is therefore performed through teacher and learner phases. In the teacher phase, the teacher is chosen as the student with the best grades, and he/she improves the knowledge of the other students using the teaching factor (TF) (Equation (13)); this operation is formalized in Equation (14). In the learner phase, two random solutions, "a" and "b", are selected among all students whose grades were updated in the teacher phase, and the grades are updated again depending on which solution is better in terms of the objective function, as expressed in Equation (15):
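The teacher phase described above can be sketched as follows for a minimization problem; the function layout is an illustrative assumption consistent with the standard TLBO teacher-phase update.

```python
import random

def teacher_phase(students, scores):
    """TLBO teacher-phase sketch (cf. Equations (13)-(14)): every student
    moves toward the best student (teacher) and away from TF times the
    class mean; TF is randomly 1 or 2."""
    n = len(students[0])
    teacher = students[min(range(len(students)), key=lambda i: scores[i])]
    mean = [sum(s[p] for s in students) / len(students) for p in range(n)]
    tf = random.randint(1, 2)  # teaching factor, Equation (13)
    return [[s[p] + random.random() * (teacher[p] - tf * mean[p])
             for p in range(n)] for s in students]
```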
2.7. Grey Wolf Optimization (GWO)
Grey wolf optimization (GWO) was developed with inspiration from the conception of leadership hierarchy with hunting behavior of grey wolves in nature [
45]. In the hierarchy, three wolves are defined as alpha (α) (first leader), beta (β) (helper), and delta (δ) (transferring orders coming from the beta), and these control the progression of the pack. The remaining wolves are called omega wolves (ω) and are the weakest ones in the pack. To characterize the algorithm as optimization, the group hunt performed by the wolves becomes prominent. First, the group leaders are determined. After the wolves identify their prey, they follow and finally encircle it. In the meantime, the distance between the prey and an encircling wolf is indicated via Equation (16). Additionally, each wolf can change position around the prey randomly, as in Equation (17). Here, the new value, the position of the prey, and the position of the j-th (ω) candidate solution for the i-th design variable are used; a coefficient vector defines whether the wolf attacks the prey; another vector affects the prey–grey wolf distance; and the current iteration number is also used [46,47].
After encirclement, the attack takes place; its success relies on the knowledge of the leader wolves. The distances of the α, β, and δ wolves to the prey and the new positions of each one can be defined via Equations (21)–(23) and (24)–(26), respectively. The final updated position of the current solution is expressed via Equation (27) [
45]:
In the end, the grey wolf that found the new position attacks the prey. A control parameter is decreased so that the wolves approach the prey; correspondingly, the attack vector also changes, and the rule expressing the realization of the attack is given in Equation (28) [45].
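The position update steered by the three leader wolves can be sketched as below, assuming the standard GWO formulation; the helper-function structure is illustrative.

```python
import random

def gwo_update(x, alpha, beta, delta, a):
    """GWO update sketch (cf. Equations (21)-(27)): the new position is
    the mean of three estimates steered by the alpha, beta, and delta
    wolves; the parameter a decreases over the iterations to force the
    attack (Equation (28))."""
    def steer(leader, xi):
        A = 2 * a * random.random() - a   # attack/exploration vector
        C = 2 * random.random()           # distance-weighting vector
        D = abs(C * leader - xi)          # distance to the leader
        return leader - A * D
    return [(steer(al, xi) + steer(be, xi) + steer(de, xi)) / 3.0
            for xi, al, be, de in zip(x, alpha, beta, delta)]
```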
2.8. Flower Pollination Algorithm (FPA)
Flower pollination algorithm (FPA) is a metaheuristic algorithm suggested by Yang [
48]. Pollination, an ability essential to the continuity of flowering plant species, underlies the working structure of this algorithm.
In the optimization process, the algorithm performs two stages known as self-pollination and cross-pollination, applied as local (Equation (29)) and global (Equation (30)) search, respectively. In global search, a Lévy flight function is used under the assumption that pollinators keep searching by flying randomly. The Lévy function is expressed in Equation (31) [
43]:
where sp is the switch probability (the value that switches between the two search types). For the i-th design variable, the new value is generated from the current value of the j-th candidate solution; the best solution in terms of the objective function is used in global search; and two randomly selected solutions (the k-th and m-th) are used in local search.
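The global (cross-pollination) step can be sketched as below. Mantegna's algorithm is one common way to draw a Lévy-distributed step; the paper's exact implementation of Equation (31) may differ, and all names are illustrative.

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's approximation of a Levy-distributed step size (one
    common realization of Equation (31); an assumption here)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def global_pollination(x, g_best):
    """Cross-pollination sketch (cf. Equation (30)): move toward the
    best solution with a Levy-flight step."""
    return [xi + levy_step() * (gb - xi) for xi, gb in zip(x, g_best)]
```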
2.9. Jaya Algorithm (JA)
In 2016, Rao proposed a method called the Jaya algorithm (JA); the name comes from the Sanskrit word jaya, which means victory [
49]. This method searches for the best solution by moving toward the best existing value and diverging from the worst one, and it requires only common control parameters such as the population size. All of the solutions are generated for each variable via Equation (32), using the best and worst values in terms of the objective function together with the current value of the i-th design variable to compute the new value [50,51].
When the current best solution and the other solutions are close to each other, JA may become trapped in a local optimum, although the algorithm has good convergence ability. This is a drawback of its single-phase formulation.
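The single-phase Jaya update of Equation (32) can be sketched directly; only the function name is an assumption.

```python
import random

def jaya_update(x, best, worst):
    """Jaya update (Equation (32)): move toward the best solution and
    away from the worst one, with random factors r1 and r2."""
    r1, r2 = random.random(), random.random()
    return [xi + r1 * (b - abs(xi)) - r2 * (w - abs(xi))
            for xi, b, w in zip(x, best, worst)]
```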
2.10. Harmony Search (HS)
Harmony search (HS) is a metaheuristic algorithm inspired by the way a musician enhances a musical performance to achieve better harmonies. In the working principle of this method, musical performances are improved by enhancing the harmonies so as to gain the approval of audiences. Geem et al. [
7] considered HS as an optimization tool. To imitate the natural musical process, HS operates with two different choices: the use of memory and the generation of random notes. These are formalized in Equations (33) and (34), respectively. The classical equations of HS, together with their evolution history, were presented by Zhang and Geem [
52] and Geem [
53]. The modified equations of HS were used in this study.
HMCR is the harmony memory consideration rate and FW is the fret width. Also, the new value, the lower limit, and the upper limit of the i-th design variable and the k-th randomly selected candidate vector appear in the equations, together with a random number between −1/2 and 1/2.
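One HS improvisation can be sketched as below in a simplified two-choice form following the description above; the classical HS also uses a separate pitch-adjusting rate, so this is an illustrative sketch rather than the paper's exact formulation.

```python
import random

def improvise(memory, x_min, x_max, hmcr=0.8, fw=0.1):
    """HS improvisation sketch (cf. Equations (33)-(34)): each variable
    is taken from a random memory vector and pitch-adjusted within
    +-FW/2, or else generated randomly within its bounds."""
    new = []
    for i in range(len(x_min)):
        if random.random() < hmcr:           # memory consideration
            k = random.randrange(len(memory))
            new.append(memory[k][i] + fw * (random.random() - 0.5))
        else:                                # random note generation
            new.append(x_min[i] + random.random() * (x_max[i] - x_min[i]))
    return new
```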
2.11. Adaptive Harmony Search (AHS)
In this modification of HS, the algorithm adjusts the HMCR (harmony memory consideration rate) and FW (fret width) values as the iterations proceed; these parameters are varied according to the iteration number, as formalized in Equations (35) and (36). The initial HMCR and FW values are the classical parameters chosen by the user, while the current iteration step and the total iteration number also enter the equations. In the numerical evaluations, the initial values of HMCR and FW are taken randomly so that the algorithm does not depend on specific parameter settings. Via this modification, the convergence ability is increased, as can be seen in the numerical examples. Besides, it is not required to calibrate the algorithm for the parameters, since different parameter combinations are used during the iterations.
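The idea of iteration-dependent parameters can be illustrated with a hypothetical linear schedule; the paper's exact expressions (Equations (35) and (36)) are not reproduced here, so both formulas below are assumptions that only convey the trend (more exploitation and finer steps as iterations progress).

```python
def adaptive_params(hmcr0, fw0, t, t_max):
    """Hypothetical iteration-dependent schedule illustrating AHS:
    HMCR grows toward 1 (more memory consideration) while FW shrinks
    (finer local steps). Equations (35)-(36) may use different forms."""
    frac = t / t_max
    hmcr = hmcr0 + (1.0 - hmcr0) * frac  # increase memory consideration
    fw = fw0 * (1.0 - frac)              # decrease fret width
    return hmcr, fw
```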
2.12. Adaptive-Hybrid Harmony Search (AHHS)
HS does not use the best or worst solutions in its formulations, while JA uses both of them in a single equation. On the other hand, JA has only a single phase and always uses the best and worst solutions without considering the others. This difference between the two algorithms leads to the idea of hybridizing them.
For the hybrid modification, HS is combined with JA by using the general optimization equation of JA (Equation (32)) instead of the global search phase of HS (Equation (33)), together with the modified versions of the HMCR and FW parameters. Through this modification, the effective feature of JA, using both the best and worst solutions in a single equation, is exploited, while the disadvantages of a single-phase algorithm are avoided by the hybridization. Also, the user-defined initial values of HMCR and FW are assigned randomly at the start of the optimization to propose an algorithm that, like JA, is free of user-defined parameters. By using a two-phase algorithm, trapping in a local optimum is also prevented.
As shown in the numerical examples, this algorithm has good convergence, fast computational time, and robustness.
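The hybridization can be sketched as a two-branch candidate generator: the HS memory-consideration branch acts as the local search, while the JA equation replaces the global search branch. This is a simplified illustration of the scheme described above; all names and the bound-clamping detail are assumptions.

```python
import random

def ahhs_candidate(memory, best, worst, x_min, x_max, hmcr, fw):
    """AHHS sketch: the HS global-search branch is replaced by the Jaya
    update (Equation (32)); memory consideration with pitch adjustment
    is kept as the local-search branch."""
    new = []
    for i in range(len(x_min)):
        k = random.randrange(len(memory))
        if random.random() < hmcr:               # HS local search
            new.append(memory[k][i] + fw * (random.random() - 0.5))
        else:                                    # JA global search
            xi = memory[k][i]
            r1, r2 = random.random(), random.random()
            new.append(xi + r1 * (best[i] - abs(xi))
                       - r2 * (worst[i] - abs(xi)))
        new[i] = min(max(new[i], x_min[i]), x_max[i])  # keep in bounds
    return new
```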
3. Investigation and Optimization of Reinforced Concrete (RC) Retaining Walls
The cross-section of a T-shaped retaining wall is shown in
Figure 1 including the design variables and design constants. The definitions of the symbols are listed in
Table 1. Active and passive stresses due to the earth pressures of the soil and the external loads are calculated using Rankine earth pressure theory [
54]. The objective function is the total material cost of the RC retaining wall, and the aim is to minimize it concerning the design constraints listed in
Table 2. Also, some coefficients, which are handled to provide the safety of retaining wall, can be seen in
Table 3. These coefficients and constraints are calculated according to ACI 318: Building Code Requirements for Structural Concrete [
55]. If one of the design constraints is violated, the objective function is penalized with a large value. A tension-controlled design is performed for the RC retaining wall; therefore, all sections must provide a net tensile strain of at least 0.005 in the extreme tension steel at nominal strength.
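The penalization of infeasible designs can be sketched with a simple static penalty; the penalty magnitude and the aggregation of violations below are assumptions, since the paper does not detail its exact penalty scheme.

```python
def penalized_cost(cost, violations, penalty=1.0e6):
    """Static penalty sketch: if any design constraint is violated
    (g > 0), the objective is inflated by a large value so the
    optimizer discards the design."""
    total = sum(max(0.0, g) for g in violations)  # total violation
    return cost if total == 0.0 else cost + penalty * (1.0 + total)
```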
In this section, the optimization applications of 10 metaheuristic algorithms are explained: GA, DE, PSO, HS (classic (HS), adaptive (AHS), and adaptive-hybridized (AHHS) versions), FA, ABC, TLBO, FPA, GWO, and JA, for the minimization of the total material cost (concrete and steel reinforcement) of cantilever retaining wall designs. Four different cases are considered for the wall models:
Case 1: Thirty multiple cycles of optimization of design data within
Table 1.
Case 2: Thirty multiple cycles of optimization of design data of Case 1, but for different H of the wall.
Case 3: Determination of the best iteration number and population number combination by using different values concerning data in
Table 1.
Case 4: Optimization with 20 multiple cycles for different wall parameter combinations given in
Table 2.
3.1. Optimization for T-Shaped Wall Designs via Multiple Cycles (Case 1)
In Case 1, optimization processes were carried out for 30 cycles with a population of 20 and 5000 iterations to generate optimum wall designs by considering the design values stated in
Table 1. The termination criterion is to reach the value of the defined iteration number. Via that criterion, it is also possible to check the number of iterations needed to reach the optimum result. Optimum values of design parameters and objective function with statistical measurements attained in the result of 30 cycles with the usage of 10 metaheuristic algorithms are shown in
Table 4. Also, optimization results of HS, AHS, and AHHS where HMCR and FW values are handled as 0.5–0.1, 0.1–0.1 together with random-determined (rand( )), can be seen in
Table 5. From the results, the optimum design has a zero value of the X2 variable; the front encasement is not required in the optimum design obtained with the design constants given in
Table 1. Only the GA and GWO results have a small non-zero value for X2, but these algorithms are not effective in finding the best optimum result.
On the other hand, according to
Table 5, among all of the HMCR–FW combinations, the best results were obtained with the random-valued arrangements for each modification of HS. Therefore, only random-valued arrangements will be evaluated in the following cases. Additionally, the most effective modification is the random-valued combination of AHHS, since it reaches the minimum cost as the objective function with an extremely small deviation compared with all the HS and AHS arrangements.
3.2. Optimization for Wall Designs via Multiple Cycles for H = 10 m (Case 2)
The optimum design of the RC retaining wall is conducted for Case 2 to find an optimum design with a front encasement. The H value of the wall is taken as 10 m, while all other design properties of the wall and parameters of the optimization process are the same as in Case 1, except for the iteration number, which is 40,000. The optimum results are given in
Table 6 and
Table 7. It can be understood from
Table 6 that DE, TLBO, FPA, and JA could reach the minimum cost, but DE deviated from this value considerably across the cycles. TLBO and especially JA converge to this value with an extremely small standard deviation; both algorithms can be considered the best options for determining the optimum design.
On the other hand, HS and AHS are not as effective in reaching the minimum cost. However, AHHS converges to the minimum cost with a small deviation, almost matching the best algorithms (TLBO and JA).
Also,
Figure 2 shows the convergence behavior with respect to the minimum total cost in the cycle where the best results are obtained with HS and both modified versions of HS. As can be seen from this figure, AHHS, which finds the optimum results and the best cost, reaches the minimum cost level earlier than the other two methods.
3.3. Best Population and Iteration Numbers for Optimization Processes (Case 3)
In Case 3, the best iteration and population numbers are sought for determining the optimum design parameters and minimum cost values of the retaining wall structures. Accordingly, for all mentioned metaheuristics, the population number is varied as 3, 5, 10, 15, 20, 25, and 30, and the iteration number is increased from 1 to 5000 in steps of 499, for a specific design comprising the values expressed in
Table 1. In this way, the optimum design parameters providing the minimum material cost are found by determining the most convenient population and iteration numbers for each algorithm. These results can be seen in
Table 8 and
Table 9.
As shown in the tables above, the minimum cost can be obtained via DE, TLBO, FPA, JA, and AHHS. However, the standard deviation of the AHHS results is extremely low when this cost value is reached, and its population number is smaller than that of the other most effective algorithm, JA. AHHS reaches the optimum result with a smaller number of analyses, calculated by multiplying the population number by the iteration number for all algorithms except TLBO, for which this value is doubled since TLBO employs two phases per iteration.
3.4. Optimum Analysis for Different Wall Structure Variations with Multiple Cycles (Case 4)
Finally, numerous retaining wall models are generated employing different ranges of three design constants: stem height (H), unit weight of the backfill soil (γz), and surcharge load at the top elevation of the soil (qa). These constants and their properties are summarized in
Table 10.
Also, for both modifications of HS together with its classical version, 20 cycles are performed with the same population and iteration numbers as in Case 1. In
Figure 3,
Figure 4,
Figure 5,
Figure 6,
Figure 7 and
Figure 8, the analysis results are presented for HS, AHS, and AHHS, respectively. These are generated according to the minimum and maximum values of γz (16 and 22 kN/m3) and three different H values (3, 7, and 10 m) for each qa, to understand the deviation of the cost values in each cycle.
According to the results of HS, noticeable fluctuations in the minimum cost results occur for γz = 16 kN/m3 with qa values of 5 and 10 kN/m2 at a 7 m wall height. There are no great changes in cost for the other wall heights and qa values as the cycles progress.
On the other hand, for AHS, the minimum costs obtained in successive cycles for γz = 16 kN/m3 with a 7 m wall height and qa values of 0 and 10 kN/m2 are unstable. In the AHHS results, only one design parameter combination (γz = 16 kN/m3, H = 10 m, qa = 0 kN/m2) shows a small change in minimum cost. In general, classical HS and its modified versions are effective in this evaluation.