Abstract
This paper introduces a novel physics-inspired metaheuristic algorithm called the "Light Spectrum Optimizer (LSO)" for continuous optimization problems. The proposed algorithm is inspired by the dispersion of light at different angles while passing through rain droplets, which causes the meteorological phenomenon of the colorful rainbow spectrum. To validate the proposed algorithm, three different experiments are conducted. First, LSO is tested on the CEC 2005 benchmark, and the obtained results are compared with those of a wide range of well-regarded metaheuristics. In the second experiment, LSO is used to solve four CEC single-objective optimization benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), and its results are compared with eleven well-established and recently-published optimizers: the grey wolf optimizer (GWO), whale optimization algorithm (WOA), salp swarm algorithm (SSA), differential evolution (DE), gradient-based optimizer (GBO), artificial gorilla troops optimizer (GTO), Runge–Kutta method (RUN) beyond the metaphor, African vultures optimization algorithm (AVOA), equilibrium optimizer (EO), Reptile Search Algorithm (RSA), and slime mold algorithm (SMA). In addition, several engineering design problems are solved, and the results are compared with those of many algorithms from the literature. The experimental results, together with the statistical analysis, demonstrate the merits and highly superior performance of the proposed LSO algorithm.
MSC:
68-04; 68Q25; 68T20; 68W25; 68W40; 68W50
1. Introduction
Metaheuristic algorithms have seen widespread practical use, especially in the last few years. The reason is the speed, high-quality solutions, and problem-independent characteristics of metaheuristics [1,2,3,4,5]. Unfortunately, no metaheuristic can efficiently solve all types of optimization problems. Consequently, a significant number of metaheuristics have been proposed over time, aiming to find efficient metaheuristics suited to various types of optimization problems. In particular, a metaheuristic depends on the progress or movement behavior of a specified phenomenon or creature. By simulating such progress or movement, a metaheuristic can traverse the search space of a problem as the environment of the simulated phenomenon or creature.
Metaheuristics rely on two search mechanisms while trying to find the best solution to the given problem. The first mechanism is exploration, which visits unexplored areas of the search space. The second mechanism is exploitation, which searches around the best solution found so far [6]. The main factor in any metaheuristic's success is balancing these two mechanisms. In particular, overusing exploration prevents the algorithm from refining solutions toward the global best, while overusing exploitation may lead to trapping in local optima. In general, metaheuristics' search mechanisms stem from natural phenomena or the behavior of creatures.
Metaheuristics can be categorized based on their metaphors into seven main categories [7,8,9,10,11]: evolution-based, swarm-based, physics-based, human-based, chemistry-based, math-based, and others (see Figure 1). Evolution-based metaheuristics mimic the natural evolution process, which consists of selection, crossover, and mutation, such as the genetic algorithm (GA) [12], genetic programming (GP) [13], evolution strategy (ES) [14], probability-based incremental learning (PBIL) [15], and differential evolution (DE) [16].
Figure 1.
Categories of metaheuristic algorithms.
The second category, referred to as swarm-based algorithms, imitates the social behavior of swarms, birds, insects, and animal groups [17]. Some of the well-established and recently-published algorithms in this category are particle swarm optimization (PSO) [18], the cuckoo search (CS) algorithm [19], flower pollination algorithm (FPA) [20], marine predators algorithm (MPA) [21], Harris hawks optimization (HHO) [22], salp swarm algorithm (SSA) [23], red fox optimizer (RFO) [24], duck swarm algorithm [25], chameleon swarm algorithm [26], artificial gorilla troops optimizer [27], cat optimization algorithm [28], donkey and smuggler optimization algorithm [29], krill herd algorithm [30], elephant herding optimization [31], wolf pack search algorithm [32], hunting search [33], monkey search [34], chicken swarm optimization [35], horse herd optimization algorithm (HOA) [36], moth search (MS) algorithm [37], earthworm optimization algorithm (EWA) [38], monarch butterfly optimization (MBO) [39], slime mold algorithm (SMA) [40], and whale optimization algorithm (WOA) [41]. In general, swarm-based metaheuristics have some advantages over evolution-based ones. In particular, swarm-based metaheuristics search cumulatively, preserving information across subsequent search iterations, whereas evolution-based metaheuristics discard previous search information once a new population is generated. Additionally, evolution-based metaheuristics usually need more parameters than swarm-based ones. This makes swarm-based metaheuristics more applicable in most cases.
The third category of metaheuristics is human-based algorithms, which mimic human behaviors and human interactions in societies. The most popular algorithms belonging to this category are teaching–learning-based Optimization (TLBO) [42], harmony search (HS) [43], past present future (PPF) [44], political optimizer (PO) [45], brain storm optimization (BSO) [46], exchange market algorithm (EMA) [47], league championship algorithm (LCA) [48], poor and rich optimization algorithm [49], driving training-based optimization [50], gaining–sharing knowledge-based algorithm (GSK) [51], imperialist competitive algorithm (ICA) [52], and soccer league competition (SLC) [53].
The fourth category is physics-based algorithms, which are inspired by physical laws, such as inertia, electromagnetic force, gravitational force, and so on. In this category, the algorithms are based on physical principles to enable the search agents to interact and navigate the optimization problems’ search space to reach the near-optimal solution. This category includes several algorithms like simulated annealing (SA) [54], gravitational search algorithm (GSA) [55], charged system search (CSS) [56], big-bang big-crunch (BBBC) [57], artificial physics algorithm (APA) [58], galaxy-based search algorithm (GbSA) [59], black hole (BH) algorithm [60], river formation dynamics (RFD) algorithm [61], henry gas solubility optimization (HGSO) algorithm [62], curved space optimization (CSO) [63], central force optimization (CFO) [64], water cycle algorithm (WCA) [65], water waves optimization (WWO) [66], ray optimization (RO) algorithm [67], gravitational local search algorithm (GLSA) [68], small-world optimization algorithm (SWOA) [69], multi-verse optimizer (MVO) [70], intelligent water drops (IWD) algorithm [71], integrated radiation algorithm (IRA) [72], space gravitational algorithm (SGA) [73], ion motion algorithm (IMA) [74], electromagnetism-like algorithm (EMA) [75], equilibrium optimizer (EO) [76], light ray optimization (LRO) [77], and Archimedes optimization algorithm (AOA) [78]. Both light ray optimization (LRO) [77] and ray optimization (RO) [67] simulate the reflection and refraction of the light rays, respectively, when transferred from a medium to a darker one, which is completely different from the proposed algorithm, referred to as Light Spectrum Optimizer (LSO), as illustrated later.
The chemistry-based metaheuristic algorithms in the fifth category are inspired by certain chemical laws; examples include gases Brownian motion optimization (GBMO) [79], the artificial chemical reaction optimization algorithm (ACROA) [80], and several others [81]. The sixth category, called math-based metaheuristics, comprises algorithms inspired by mathematical functions, such as the golden sine algorithm (GSA) [82], base optimization algorithm (BOA) [83], and sine–cosine algorithm [84]. Table 1 presents the category and inspiration of some recently-published metaheuristic algorithms, specifically those published over the last three years.
The last category (others) includes all the metaheuristic algorithms that are not inspired by the behaviors of creatures or natural phenomena, such as the adaptive large neighborhood search (ALNS) technique [85], large neighborhood search (LNS) [86,87], and the greedy randomized adaptive search procedure (GRASP) [88,89]. For example, the large neighborhood search technique is a metaheuristic based on improving an initial solution using destroy and repair operators.
Table 1.
Classification and inspiration of some recently-published metaheuristic algorithms.
| Algorithm | Inspiration | Category | Year |
|---|---|---|---|
| Starling murmuration optimizer (SMO) [90] | Starlings’ behaviors | Swarm-based | 2022 |
| Snake optimizer (SO) [91] | Mating behavior of snakes | Swarm-based | 2022 |
| Reptile Search Algorithm (RSA) [92] | Hunting behavior of Crocodiles | Swarm-based | 2022 |
| Archerfish hunting optimizer (AHO) [93] | Jumping behaviors of the archerfish | Swarm-based | 2022 |
| Water optimization algorithm (WAO) [94] | Chemical and physical properties of water molecules | Physics-based Chemistry-based | 2022 |
| Ebola optimization search algorithm (EOSA) [95] | Propagation mechanism of the Ebola virus disease | Others | 2022 |
| Beluga whale optimization (BWO) [96] | Behaviors of beluga whales | Swarm-based | 2022 |
| White Shark Optimizer (WSO) | Behaviors of great white sharks | Swarm-based | 2022 |
| Aphid–Ant Mutualism (AAM) [97] | The mutualistic relationship between aphid and ant species | Swarm-based | 2022 |
| Circle Search Algorithm (CSA) [98] | Geometrical features of circles | Math-based | 2022 |
| Pelican optimization algorithm (POA) [99] | The behavior of pelicans during hunting | Swarm-based | 2022 |
| Sheep flock optimization algorithm (SFOA) [100] | Shepherd and sheep behaviors in the pasture | Swarm-based | 2022 |
| Gannet optimization algorithm (GOA) [101] | Behaviors of gannets during foraging | Swarm-based | 2022 |
| Prairie dog optimization (PDO) [102] | The behavior of the prairie dogs | Swarm-based | 2022 |
| Driving Training-Based Optimization (DTBO) [50] | The human activity of driving training | Human-based | 2022 |
| Stock exchange trading optimization (SETO) [103] | The behavior of traders and stock price changes | Human-based | 2022 |
| Archimedes optimization algorithm (AOA) [78] | Archimedes law | Physics-based | 2021 |
| Golden eagle optimizer (GEO) [104] | Golden eagles’ hunting process | Swarm-based | 2021 |
| Heap-based optimizer (HBO) [105] | Corporate rank hierarchy | Human-based | 2021 |
| African vultures optimization algorithm (AVOA) [106] | African vultures’ lifestyle | Swarm-based | 2021 |
| Artificial gorilla troops optimizer (GTO) [27] | Gorilla troops’ social intelligence | Swarm-based | 2021 |
| Quantum-based avian navigation optimizer algorithm (QANA) [107] | Migratory birds’ navigation behaviors | Evolution-based (DE-based) | 2021 |
| Colony predation algorithm (CPA) [108] | Corporate predation of animals | Swarm-based | 2021 |
| Lévy flight distribution (LFD) [42] | Lévy flight random walk | Physics-based | 2020 |
| Political Optimizer (PO) [45] | Multi-phased process of politics | Human-based | 2020 |
| Marine predators algorithm (MPA) [21] | Foraging strategy in the ocean between predators and prey | Swarm-based | 2020 |
| Equilibrium optimizer (EO) [76] | Mass balance models | Physics-based | 2020 |
Over the last few decades, several metaheuristic algorithms have been proposed, but unfortunately, most of them cannot adapt themselves when tackling optimization problems with various characteristics. Therefore, this paper proposes a novel physics-based metaheuristic algorithm called the Light Spectrum Optimizer (LSO) for global optimization over a continuous search space. This novel metaheuristic is inspired by the dispersion of sunlight rays as they pass through rain droplets, causing the sparkling rainbow phenomenon. In particular, the mathematical formulation of the sunlight rays’ reflection, refraction, and dispersion can be efficiently utilized to introduce variety into the updating process, preserving population diversity and accelerating convergence on different optimization problems. Experimentally, LSO is extensively assessed on several mathematical benchmarks, namely CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022, to reveal its performance compared to several well-established metaheuristic algorithms. In addition, LSO is employed to solve some engineering design problems to further affirm its efficiency. The main advantages of the proposed metaheuristic are:
- Simple representation.
- Robustness.
- Balance between exploration and exploitation.
- High-quality solutions.
- Swarm-intelligence power.
- Low computational complexity.
- High scalability.
These advantages are demonstrated through three different validation experiments that include several optimization problems with various characteristics. In addition, LSO is compared with many other optimization algorithms, and the results are analyzed with the appropriate statistical tests. The experimental findings affirm the superiority of LSO compared to all the other rival algorithms. Finally, the main contributions of this study are listed as follows:
- Proposing a novel physics-based metaheuristic algorithm called the Light Spectrum Optimizer (LSO), inspired by the sparkling rainbow phenomenon caused by sunlight rays passing through rain droplets.
- Validating LSO using four challenging mathematical benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), as well as several engineering design problems.
- The experimental findings, along with the Wilcoxon rank-sum test as a statistical test, illustrate the merits and highly superior performance of the proposed LSO algorithm.
The remainder of this work is organized as follows. Section 2 gives the background of the inspiration and the mathematical modelling of the rainbow phenomenon. Section 3 explains the mathematical formulation and the search procedure of LSO. In Section 4, various experiments are conducted on the CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022 benchmarks, and their results are analyzed with the proper statistical tools. Additionally, the sensitivity of LSO is presented. In Section 5, popular engineering design problems are solved with LSO.
2. Background
The rainbow is one of the most fabulous meteorological wonders. From the physical perspective, it is a half-circle of spectral colors created by the dispersion and internal reflection of sunlight rays that hit spherical rain droplets [109]. When a white ray hits a water droplet, it changes its direction by refracting and reflecting inside and outside the droplet (sometimes more than once) [110]. In other words, the rainbow is formed by the refraction, reflection, and dispersion of light rays through water droplets.
According to Descartes’s laws [111,112], refraction occurs when light rays travel from one material to another with a different refractive index. When light rays hit the outer surface of a droplet, some reflect away from the droplet while the others are refracted. The refracted light rays hit the inner surface of the droplet, causing another reflection, and then refract away from the droplet at different angles, which causes the white sunlight to be dispersed into its seven spectral colors: red, orange, yellow, green, blue, indigo, and violet, as depicted in Figure 2. These spectral colors differ according to their angles of deviation, which range from 40° (violet) to 42° (red) [113,114].
Figure 2.
Light dispersion and colors of rainbow.
Mathematically, the refraction and reflection producing the rainbow spectrum are described by Snell’s law. Snell’s law states that the ratio between the sines of the incident and refracted angles is equal to the ratio between the refractive indices of water and air, as [115]:

sin(θ₁) / sin(θ₂) = n_w / n_a

where θ₁ is the incident angle, θ₂ is the refracted angle, n_w is the refractive index of water, and n_a is the refractive index of air.
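Snell's law above can be checked numerically. The following Python sketch computes the refracted angle for a ray entering water from air; the refractive indices 1.0 (air) and 1.333 (water) are standard approximate values, and the function name is illustrative.

```python
import math

def refraction_angle(theta_incident_deg, n_air=1.0, n_water=1.333):
    """Refracted angle (degrees) from Snell's law: n_air*sin(t1) = n_water*sin(t2)."""
    sin_t2 = n_air * math.sin(math.radians(theta_incident_deg)) / n_water
    return math.degrees(math.asin(sin_t2))

# A ray hitting a droplet surface at 45 degrees refracts to roughly 32 degrees.
print(round(refraction_angle(45.0), 1))  # prints 32.0
```

Because n_water varies slightly with wavelength, each color refracts at a marginally different angle, which is exactly the dispersion that separates the rainbow colors.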
In this work, Snell’s law is used in its vector form. As shown in Figure 3, the normal, incident, refracted, and reflected rays are all represented as vectors. The mathematical formulation of the refracted ray can be expressed as [116]:

T₁ = (1/k) I + ((1/k) cos(θ₁) − cos(θ₂)) N₁

where T₁ is the refracted light ray, k is the refractive index of the droplet, I is the incident light ray, and N₁ is the normal line at the point of incidence. Meanwhile, the inner reflected ray can be formulated as:

R = T₁ − 2(N₂ · T₁) N₂

where R is the inner reflected light ray and N₂ is the normal line at the point of inner reflection. Finally, the outer refracted ray is expressed as:

T₂ = k R + (k cos(θ₃) − cos(θ₄)) N₃

where T₂ is the outer refracted light ray and N₃ is the normal line at the point of outer refraction.
Figure 3.
Light dispersion and colors of rainbow and vector form of refraction and reflection in rainbow.
3. Light Spectrum Optimizer (LSO)
As discussed before, the rainbow spectrum rays are caused by colorful light dispersion. In this paper, the proposed algorithm takes its inspiration from this meteorological phenomenon. In particular, LSO is based on the following assumptions:
- (1)
- Each colorful ray represents a candidate solution.
- (2)
- The dispersion of light rays ranges from 40° to 42°, or equivalently, the refractive index varies between its minimum and maximum values.
- (3)
- The population of light rays has a global best solution, which is the best dispersion reached so far.
- (4)
- The refraction and reflection (inner or outer) are randomly controlled.
- (5)
- The fitness value of the current solution, compared with that of the best-so-far solution, controls the choice between the first and second scattering phases of a colorful rainbow curve. If their fitness values are very close, the algorithm applies the first scattering phase to exploit the regions around the current solution, because it might be very close to the near-optimal solution. Otherwise, the second phase is applied to help the proposed algorithm avoid getting stuck in the regions around the best-so-far solution, because it might be a local minimum.
Next, the detailed mathematical formulation of LSO will be discussed.
3.1. Initialization Step
The search process of LSO begins with the random initialization of the initial population of white light rays as:

X_i = LB + r × (UB − LB), i = 1, 2, …, n

where X_i is the ith initial solution, r is a vector of uniform random numbers generated between 0 and 1 with a length equal to the given problem dimension (d), and LB and UB are the lower and upper bounds of the search space, respectively. After that, the generated initial solutions are evaluated in order to determine the global and personal best solutions.
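The initialization described above can be sketched in a few lines of Python. The function name and the use of the standard-library random generator are illustrative choices, not part of the paper.

```python
import random

def initialize_population(n, d, lb, ub, seed=0):
    """Uniformly sample n initial light rays within [lb, ub]^d,
    following the described lb + r*(ub - lb) initialization."""
    rng = random.Random(seed)
    return [[lb + rng.random() * (ub - lb) for _ in range(d)] for _ in range(n)]

# Example: 30 rays in a 10-dimensional search space bounded by [-100, 100].
pop = initialize_population(n=30, d=10, lb=-100.0, ub=100.0)
```

After this step, each ray would be evaluated with the objective function to identify the global and personal best solutions.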
3.2. Colorful Dispersion of Light Rays
In this subsection, we discuss the mathematical formulation of rainbow spectrum directions, colorful rays scattering, and the exploration and exploitation mechanisms of LSO.
3.2.1. The Direction of Rainbow Spectrums
After the initialization, the normal vector of inner refraction , inner reflection , and outer refraction are calculated as:
where the first solution is randomly selected from the current population at the current iteration, the second is the current solution, the third is the global best solution found so far, and norm(·) indicates the normalized value of a vector, computed according to the following formula:

norm(X) = X / sqrt(x₁² + x₂² + … + x_d²)

where d stands for the number of dimensions of the optimization problem, X is the input vector to be normalized, and x_j is the jth dimension of the input vector X. For the incident light ray, it is calculated as follows:
where I is the incident light ray, X_mean is the mean of the current population of solutions, and n is the population size.
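The vector normalization, and one plausible construction of the incident ray from the population mean, can be sketched as follows. The normalization is exactly the Euclidean form described above; the incident-ray direction (from the population mean toward the global best) is a hypothetical reading, since the paper's exact formula is not reproduced here, and both function names are illustrative.

```python
import math

def normalize(v):
    """Scale a vector to unit Euclidean length, as in the norm formula above."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm > 0 else list(v)

def incident_ray(population, gbest):
    """Hypothetical sketch: incident direction from the population mean toward
    the global best solution, normalized to unit length."""
    d = len(gbest)
    n = len(population)
    mean = [sum(sol[j] for sol in population) / n for j in range(d)]
    return normalize([gbest[j] - mean[j] for j in range(d)])
```

The same `normalize` helper would serve for the three normal vectors, which must be unit vectors for the vector form of Snell's law to apply.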
Then, the vectors of inner and outer refracted and reflected light rays are calculated as:
where , , and are the inner refracted, inner reflected, and outer refracted light rays, respectively. stands for the refractive index, which is updated randomly between and to define a random spectrum color as:
where is a uniform random number generated between 0 and 1.
Table 2 presents a numerical example illustrating the six-dimensional vectors generated by the previously described equations, noting that the three input vectors presented in the same table are randomly generated between −100 and 100. The values of the inner refracted and inner reflected rays are evident from this table. The outer refracted vectors could not be employed alone to update the individuals, which must remain within [−100, 100], because the rate of change in the updated solutions would be too low; hence, many function evaluations would be consumed to reach better solutions. Therefore, the equations described in the next section are adapted to deal with this problem by strongly encouraging the exploration operator of the newly-proposed algorithm.
Table 2.
An illustrative numerical example to the results generated according to Equations (6)–(14).
3.2.2. Generating New Colorful Ray: Exploration Mechanism
After the calculation of the rays’ directions, we calculate the candidate solutions according to the value of a probability generated randomly between 0 and 1. In particular, if this probability is lower than a number generated randomly between 0 and 1, then the new candidate solution will be calculated as:
Otherwise, the new candidate solution will be calculated as:
where is the newly generated candidate solution, is the current candidate solution at iteration . , , , and are indices of four solutions selected randomly from the current population. and are vectors of uniform random numbers that are generated between . is a scaling factor that is calculated using (18). is an adaptive control factor based on the inverse incomplete gamma function and computed according to (19).
where the first term is a vector of normally distributed random numbers with a mean equal to zero and a standard deviation equal to one, and a is an adaptive parameter that can be calculated using (20).
The first term is an adaptive control factor. The uniform random number between 0 and 1 is inverted to promote the exploration operator throughout the optimization process, because inverting any random value in (0, 1) generates a number greater than 1, which might take the current solution to far-away regions within the search space to find a better solution. The last term is the inverse incomplete gamma function for the corresponding input value.
where is the current iteration number, is a scalar uniform random number generated between 0 and 1, and is the maximum number of function evaluations.
When the input numbers are greater than 0.5, this inverse incomplete gamma function generates high values, starting from almost 0.8 and reaching nearly 5.5, as described in Figure 4; otherwise, it generates decimal values down to 0. High input values encourage the exploration operator. However, a very high generated value might push the updated solutions outside the search boundary, in which case the algorithm would degenerate into a randomization process, because the boundary-checking method moves those infeasible solutions back into the search space. Therefore, the factor a, described in (20), is combined with the inverse incomplete gamma values to reduce their magnitude and avoid this randomization when the input values are high. Both the inverse function and the factor a decrease gradually as the iteration number increases, so the optimization process gradually shifts from exploration to exploitation, which might lead to falling into local minima. Therefore, to support the exploration operator throughout the optimization process, the inverse of a number generated randomly between 0 and 1 is combined with both the inverse function and the factor a, as defined in (19).
Figure 4.
The behavior of the inverse incomplete gamma function with respect to its input values.
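To make the adaptive scaling concrete, the following Python sketch evaluates the inverse of the regularized lower incomplete gamma function. The shape parameter is not stated here, so the sketch assumes shape 1, for which the function has the closed form −ln(1 − y); the function name is illustrative.

```python
import math

def inv_incomplete_gamma_shape1(y):
    """Inverse of the regularized lower incomplete gamma function P(1, x) = 1 - exp(-x).
    For shape parameter 1 (an assumption here) this is the closed form -ln(1 - y)."""
    if not 0.0 <= y < 1.0:
        raise ValueError("y must lie in [0, 1)")
    return -math.log(1.0 - y)

# Inputs above 0.5 yield values rising from roughly 0.8 upward, which promotes
# exploration; inputs below 0.5 yield small values, which shrink the step size.
print(round(inv_incomplete_gamma_shape1(0.55), 2))  # prints 0.8
print(round(inv_incomplete_gamma_shape1(0.2), 2))   # prints 0.22
```

This matches the qualitative behavior described above: the output grows steeply as the input approaches 1, so combining it with a decaying factor such as a keeps the step magnitude bounded.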
3.2.3. Colorful Rays Scattering: Exploitation Mechanism
This phase scatters the rays in the direction of the best-so-far solution, the current solution, and a solution selected randomly from the current population to improve the exploitation operator. At the start, the algorithm scatters the rays around the current solution to exploit the region around it in the hope of reaching better outcomes. However, this might reduce the convergence speed of LSO, so an additional step, applied with a predefined probability, is integrated to move the current solution in the direction of the best-so-far solution. The mathematical model of scattering around the current solution is as follows:
where is the best-so-far solution, and and are two solutions selected randomly from the current population; the first random factor is a number selected randomly in the interval [0, 1], and the second is a vector of numbers generated randomly between 0 and 1. The second scattering phase generates rays in a new position based on the best-so-far solution and the current solution according to the following formula:
where is a numerical value generated randomly in the interval [0, 1], and π indicates the ratio of a circle’s circumference to its diameter. Exchanging between the first and second scattering phases is achieved based on a predefined probability Pe, as shown in the following formula:
where is a number generated randomly between 0 and 1. The last scattering phase is based on generating a new solution according to a solution selected randomly from the population and the current solution according to the following formula:
where is a scalar value drawn from a normal distribution with a mean equal to zero and a standard deviation equal to one, and is a vector of random values of 0 and 1. is the absolute-value symbol, which converts negative values into positive ones and returns positive numbers unchanged. Exchanging between Equations (23) and (24) is based on computing the difference between the fitness value of each solution and that of the best-so-far solution and normalizing this difference between 0 and 1 according to (25). If this difference is less than a threshold value generated randomly between 0 and 1, (23) will be applied; otherwise, (24) is applied. Our hypothesis herein is based on computing the probabilistic fitness value using (25) to determine how close the current light ray is to the best-so-far light ray. If the probabilistic fitness value for the ith light ray is smaller than the random threshold R1, it is preferable to scatter this light ray in the direction of the best-so-far solution. Our proposed algorithm adopts this hypothesis to maximize its performance when dealing with any optimization problem that needs a high-exploitation operator to accelerate the convergence speed and save computational costs.
where the three values indicate the fitness values of the current solution, the best-so-far solution, and the worst solution, respectively. However, the probability of applying (23) when the value of F′ is high is small. For example, Figure 5 tracks the values of F′ for an agent and the random number R1 during the optimization process of a test function; the figure shows that the F′ values are greater than R1 for most of the optimization process, as shown by the red points, which far outnumber the blue points in the 9 subgraphs depicted in Figure 5. Thus, the chance of firing the first and second scattering stages is low when relying solely on the factor F′. Therefore, exchanging between (23) and (24) is also applied with a predefined probability Ps to further promote the first and second scattering stages for accelerating convergence toward the best-so-far solution. Finally, exchanging between these two equations is formulated in the following formula:
where and are numbers generated randomly between 0 and 1.
Figure 5.
Tracing the values of F′ versus R1 for an individual over 9 independent runs: the red points indicate that F′ > R1, and the blue points indicate that F′ ≤ R1.
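The normalized fitness difference and the probabilistic switch between the scattering equations can be sketched as follows. The function names and the default Ps = 0.5 are illustrative assumptions, not values taken from the paper; minimization is assumed.

```python
import random

def probabilistic_fitness(f_i, f_best, f_worst):
    """Normalized distance of the current fitness from the best-so-far fitness,
    mapped into [0, 1] as described in the text (minimization assumed)."""
    if f_worst == f_best:
        return 0.0
    return (f_i - f_best) / (f_worst - f_best)

def use_best_directed_scatter(f_i, f_best, f_worst, Ps=0.5, rng=random):
    """Fire the best-directed (first/second) scattering stage either when the ray
    is close to the best-so-far solution (F' below a random threshold R1) or,
    additionally, with a predefined probability Ps. Ps = 0.5 is a placeholder."""
    Fp = probabilistic_fitness(f_i, f_best, f_worst)
    return Fp < rng.random() or rng.random() < Ps
```

Setting Ps high forces the best-directed stages even when F′ is large, which mirrors the motivation given above for introducing Ps.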
3.3. LSO Pseudocode
The pseudocode of the proposed algorithm is stated in Algorithm 1, and the same steps are depicted in Figure 6. Some solutions might go outside the problem’s search space; thus, they have to be returned to the search space to remain feasible. There are two common ways to convert infeasible solutions that leave the search space into feasible ones: the first sets dimensions below the lower bound to the lower bound and dimensions above the upper bound to the upper bound, while the second generates new random values within the search boundaries for the dimensions that left the search space. In our proposed algorithm, we hybridize these two methods, using the first to improve the convergence rate and the second to improve exploration. This hybridization is controlled by a predefined probability Ph, which is estimated in the experiments section.
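The hybrid boundary handling just described can be sketched as below. The function name and the default Ph = 0.5 are illustrative; the paper estimates Ph experimentally.

```python
import random

def repair_bounds(solution, lb, ub, Ph=0.5, rng=random):
    """Hybrid boundary handling: with probability Ph, clamp an out-of-range
    dimension to the nearest bound (favors convergence); otherwise re-sample it
    uniformly inside [lb, ub] (favors exploration). Ph = 0.5 is a placeholder."""
    repaired = []
    for x in solution:
        if lb <= x <= ub:
            repaired.append(x)            # feasible dimension, keep as-is
        elif rng.random() < Ph:
            repaired.append(lb if x < lb else ub)  # clamp to violated bound
        else:
            repaired.append(lb + rng.random() * (ub - lb))  # random re-sample
    return repaired
```

With Ph = 1.0 this reduces to pure clamping; with Ph = 0.0 it reduces to pure random re-initialization of the violating dimensions.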
| Algorithm 1: LSO Pseudo-Code | |
| Input: Population size of light rays, problem dimension, and the number of iterations | |
| Output: The best light dispersion and its fitness Generate initial random population of light rays | |
| t = 0 | |
| 1 | While () |
| 2 | for each light ray |
| 3 | evaluate the fitness value |
| 4 | t = t + 1 |
| 5 | keep the current global best |
| 6 | Update the current solution if the updated solution is better. |
| 7 | determine normal lines , , & |
| 8 | determine direction vectors , , , & |
| 9 | update the refractive index |
| 10 | update , , and |
| 11 | Generate two random numbers: , between 0 and 1 |
| %%%%Generating new Colorful ray: Exploration phase | |
| 12 | if |
| 13 | update the next light dispersion using Equation (16) |
| 14 | Else |
| 15 | update the next light dispersion using Equation (17) |
| 16 | end if |
| 17 | evaluate the fitness value |
| 18 | t = t + 1 |
| 19 | keep the current global best |
| 20 | Update the current solution if the updated solution is better. |
| %%%%Scattering phase: exploitation phase | |
| 21 | Update the next light dispersion using Equation (26) |
| 22 | end for |
| 23 | end while |
| 24 | Return |
Figure 6.
Flowchart of LSO.
3.4. Searching Behavior and Complexity of LSO
In this section, we will discuss the searching schema of LSO and its computational complexity.
- A.
- Searching behavior of LSO
As discussed before, LSO alternates among the methods of finding the next solution through the use of the three normal vectors. In other words, the first is calculated according to a randomly selected solution, which ensures the exploration of the search space. Meanwhile, the calculation of the other two depends on the global and personal best solutions, respectively; this preserves the exploitation of the search space. Another exploitation consolidation is provided by the use of the inverse incomplete gamma function [117], which can be expressed as follows:
where is a scaling factor that is greater than or equal to 0.
Figure 7 depicts the LSO’s exploration and exploitation operators to illustrate the behavior of LSO experimentally. This figure is plotted by displaying the search core of an individual during the exploration and exploitation phases for the first two dimensions. From this figure, specifically Figure 7a, which depicts the LSO’s exploitation operator, it is obvious that this operator focuses its search on a specific region, often the best-so-far region, exploring the solutions around and inside it in the hope of reaching better solutions in a lower number of function evaluations. On the other side, Figure 7b pictures the exploration behavior of LSO to show how far LSO can reach; it shows that the individuals try to visit different regions, far from the current one, within the search space in order to reach the most promising region, which is then attacked using the exploitation operator discussed formerly.
Figure 7.
Depiction of exploration and exploitation stages of the proposed algorithm (LSO). (a) Exploitation phase. (b) Exploration phase.
- B.
- Space and Time Complexity
- (1)
- LSO Space Complexity: The space complexity of any metaheuristic can be defined as the maximum space required during the search process. The big-O notation of LSO space complexity can be stated as O(N × D), where N is the number of search agents and D is the dimension of the given optimization problem.
- (2)
- LSO Time Complexity: The time complexity of LSO is analyzed in this study using asymptotic analysis, which characterizes the running time of an algorithm as a function of the input size. Other than the input, all other operations, such as the exploration and exploitation operators, are considered constant. There are three common asymptotic notations: big-O, omega, and theta. The big-O notation is adopted in this study because it expresses the upper bound of the running time required by LSO to reach its outcomes.
The time complexity of any metaheuristic depends on the required time for each step of the algorithm, like generating the initial population, updating candidate solutions, etc. Thus, the total time complexity is the sum of all such time measures. The time complexity of LSO results from three main algorithm steps:
- (1)
- Generation of the initial population.
- (2)
- Calculation of candidate solutions.
- (3)
- Evaluation of candidate solutions.
The first initialization step has a time complexity of O(N × D). The candidate-solution calculation has a time complexity of O(T × N × D), which includes the evaluation of the generated solutions and the updating of the current best solution, where T is the maximum number of search iterations. So, the total time complexity of LSO in big-O notation is O(N × D + T × N × D) = O(T × N × D), which is confirmed in detail in Table 3.
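The counting behind this bound can be sketched with a hypothetical loop skeleton (the structure below is illustrative, not LSO's actual pseudocode):

```python
def count_basic_operations(N, D, T):
    """Hypothetical operation count mirroring the analysis in the text:
    O(N*D) initialization plus O(T*N*D) per-iteration work."""
    ops = N * D                # (1) generate the initial population
    for _ in range(T):
        ops += N * D           # (2) calculate candidate solutions
        ops += N * D           # (3) evaluate candidate solutions
    return ops
```

With N = 20, D = 10, and T = 100, the dominant term is the T × N × D iteration work, matching the O(T × N × D) total.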
Table 3.
Execution time of each line in the proposed algorithm according to using the asymptotic analysis.
3.5. Difference between LSO, RO, and LRO
This section compares the proposed algorithm to two other metaheuristic algorithms inspired by light reflection and refraction to demonstrate that LSO is completely different from those algorithms in terms of inspiration, formulation of candidate solutions, and the variation of the updating process, as illustrated in Table 4.
Table 4.
Comparison between LSO, RO, and LRO.
4. Experimental Results and Discussions
In this section, we investigate the efficiency of LSO on different benchmarks, including CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022. In addition, sensitivity and scalability analyses of the proposed algorithm are introduced in this section.
4.1. Benchmarks and Compared Optimizers
We first validate the efficiency of LSO by solving 20 classical CEC2005 benchmarks selected from [118,119,120]. The selected benchmarks consist of three classes: unimodal, multimodal, and fixed-dimension multimodal. Both the unimodal and multimodal functions of CEC2005 are solved in 100 dimensions. Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 show the characteristics of these three classes: the mathematical formulations of the benchmarks, the dimension (D), the boundaries of the search space (B), and the global optimal solution (OS). Furthermore, the proposed algorithm is tested on challenging benchmarks, namely CEC2014, CEC2017, CEC2020, and CEC2022, which are also described in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7. The dimensions of these challenging benchmarks are set to 10. In addition, the Wilcoxon test [121] is performed to analyze the algorithms' performance over the independent runs with a 5% significance level.
The experimental results of LSO are compared with highly-cited state-of-the-art optimization algorithms like grey wolf optimizer (GWO) [122], whale optimization algorithm (WOA) [123], and salp swarm algorithm (SSA) [23], evolutionary algorithms like differential evolution (DE), and recently-published optimizers including gradient-based optimizer (GBO) [124], artificial gorilla troops optimizer (GTO) [27], Runge–Kutta method (RUN) beyond the metaphor [125], African vultures optimization algorithm (AVOA) [106], equilibrium optimizer (EO) [76], reptile search algorithm (RSA) [92], and slime mold algorithm (SMA) [40]. Both comparisons are based on the standard deviation (SD), the average of fitness values (Avr), and the rank. All the algorithms are coded in MATLAB© 2019. All experiments are performed on a 64-bit operating system with a 2.60 GHz CPU and 32 GB RAM. For a fair comparison, each algorithm is run 25 independent times; the maximum number of function evaluations and the population size are set to 50,000 and 20, respectively (these parameters are held constant across all validated benchmarks). The other algorithms' parameters are kept at their standard values. The used parameters are given in Table 5.
Table 5.
Parameters of the compared algorithms.
4.2. Sensitivity Analysis of LSO
Extensive experiments have been conducted to perform a sensitivity analysis of the four controlling parameters of LSO: Pe, Ps, Ph, and β. For each of these parameters, experiments were performed with different values on two test functions, F57 and F58, and the obtained outcomes are depicted in Figure 8. This figure shows that the most effective values of the four parameters Pe, Ps, Ph, and β for the two observed test functions are 0.9, 0.05, 0.4, and 0.05, respectively.

Figure 8.
Sensitivity analysis of LSO. (a) Tuning the parameter Ph over F58. (b) Tuning the parameter Ph over F57. (c) Tuning the parameter β over F58. (d) Tuning the parameter β over F57. (e) Tuning the parameter β over F58 in terms of convergence speed. (f) Tuning the parameter Pe over F58. (g) Tuning the parameter Pe over F57. (h) Adjusting the parameter Ps over F58. (i) Adjusting the parameter Ps over F57.
The first investigated parameter is Ph (responsible for the tradeoff between two boundary-checking methods to improve the LSO's searchability), which is analyzed in Figure 8a,b using various randomly picked values between 0 and 1.0. These figures show that LSO could reach the optimal value of test function F58 when Ph = 0.4. Based on that, this value is assigned to Ph in the experiments conducted in this study.
For the parameter β (responsible for improving the convergence speed of LSO), Figure 8c,d depict the performance of LSO under various randomly picked values between 0 and 0.6 on the two test problems F57 and F58. According to these figures, on F57 the performance of LSO improves substantially as the value of this parameter increases up to 0.3, after which it deteriorates again. On F58, by contrast, LSO performs poorly as the value of this parameter increases. Therefore, we found that the best value of β, suitable for most test functions, is 0.05, since LSO under this value could reach the optimal value of test problem F58. It is worth mentioning that this parameter is responsible for accelerating the convergence of LSO toward the near-optimal solution in as few function evaluations as possible. Therefore, an additional experiment has been conducted in this section to depict the convergence speed of LSO under various values of β on F58 (see Figure 8e). Figure 8e further affirms that the best value for this parameter is 0.05.
The third parameter is Pe, employed in LSO to switch between the first and second scattering phases. Figure 8f,g compare the influence of various values of this parameter on the test functions F57 and F58. According to these figures, the best value of this parameter is 0.9, since LSO under this value could reach 900 and 805.9 for F58 and F57, respectively. Regarding the parameter Ps, which is employed to further promote the first and second scattering stages and thereby improve the exploitation operator of LSO, Figure 8h,i report the influence of various values of this parameter ranging between 0 and 0.6. Inspecting these figures shows that LSO reaches its top performance on both investigated test functions, F57 and F58, when Ps has a value of 0.05.
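A sensitivity sweep of this kind can be sketched as follows. Since LSO itself is not reproduced here, a simple seeded random-search stand-in plays its role, and `sphere`, `toy_optimizer`, and `sweep` are illustrative names, not the paper's code:

```python
import random

def sphere(x):
    """Simple convex test objective."""
    return sum(v * v for v in x)

def toy_optimizer(obj, dim, step, evals, seed=0):
    """Stand-in stochastic optimizer; `step` plays the role of a tunable
    parameter such as Ps or beta in the paper."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = obj(best)
    for _ in range(evals):
        cand = [v + rng.gauss(0, step) for v in best]
        f = obj(cand)
        if f < best_f:
            best, best_f = cand, f
    return best_f

def sweep(values, runs=5):
    """Average best fitness per parameter value over independent seeded runs."""
    return {p: sum(toy_optimizer(sphere, 5, p, 2000, seed=r)
                   for r in range(runs)) / runs
            for p in values}
```

Plotting the dictionary returned by `sweep` against the candidate parameter values reproduces the kind of curves shown in Figure 8.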
4.3. Evaluation of Exploitation and Exploration Operators
The class of unimodal benchmarks has only one global optimal solution. This feature allows testing and validating a metaheuristic's exploitation capability. Table 6 and Table 7 show that LSO is competitive with the other comparators for F1, F2, and F3. For F3 and F5, LSO has inferior performance compared to some of the recently published rival algorithms. In general, LSO proves that it has a competitive exploitation operator. Multimodal functions, which have many local optima, can effectively assess the exploration capability of metaheuristics. As observed in Table 6, LSO is able to reach the optimal solution for 13 benchmarks, especially fixed-dimension ones, including F11–F20. In addition, LSO is competitive with the other comparators for F6–F8. To affirm the difference between the outcomes produced by LSO and those of the rival algorithms, the Wilcoxon rank-sum test is employed to compute the p-values, which indicate a significant difference when the value is less than 5%; otherwise, there is no significant difference. Table 7 introduces the p-values between LSO and each rival algorithm on the unimodal and multimodal test functions. These values clarify that there are differences between the outcomes of LSO and most rival algorithms on most test functions. NaN in this table indicates that the independent outcomes of LSO and the corresponding optimizer are identical. As a result, the results and discussion given herein confirm LSO's exploration and exploitation capabilities.
Table 6.
Avr and SD of CEC2005 for 25 independent runs (The bolded value is the best overall).
Table 7.
p-values of LSO with each rival optimizer on CEC2005 test suite (F1–F20).
4.4. LSO for the Challenging CEC2014
Additional validation is carried out on the CEC2014 test suite to ensure that the proposed and rival methods perform as expected. This collection of test functions makes it possible to determine whether an algorithm has the ability to explore, escape from local minima, and exploit. The test functions are divided into four categories: unimodal, multimodal, hybrid, and composition. The test suite's characteristics are described in greater detail in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7. All functions are considered in ten dimensions within our experiments. Table 8 shows the average and standard deviation values and the rank metric obtained by the proposed and rival optimizers on CEC2014. Inspecting this table demonstrates that LSO ranks first on 23 out of 30 test functions, while its performance on the other test functions is competitive with some of the other optimizers. The average of the rank values presented in Table 8 for each test function, showing the best-performance order of the algorithms, is computed and displayed in Figure 9. This figure shows the superior performance of LSO: it comes first with a value of 1.7, followed by DE with a value of 4.4, while RSA is the worst-performing algorithm.
Table 8.
Avr and SD of CEC2014 for 25 independent runs (The bolded value is the best overall).
Figure 9.
Average rank of each optimizer on all CEC2014.
In terms of the standard deviation, Figure 10 displays the average SD values of the 25 independent runs for each CEC2014 test function; this figure discloses that the outcomes obtained by LSO within these independent runs are substantially similar, since it attains the lowest average SD with a value of 23, while RSA has the worst average SD. Finally, the Wilcoxon rank-sum statistical test is employed to show the difference between the outcomes of LSO and each rival algorithm. This test relies on two hypotheses: the null hypothesis, indicating that there is no difference, and the alternative hypothesis, indicating that there is a difference between the outcomes of each pair of algorithms. The test determines the accepted hypothesis based on the confidence level and the p-value returned after comparing the outcomes of each pair of algorithms. Within our experiments, the confidence level is 0.05. The obtained p-values between LSO and each rival algorithm are presented in Table 9. The majority of the p-values presented in this table are less than 5%, which indicates that the alternative hypothesis is accepted; hence, the outcomes of LSO differ from those of the other compared algorithms.
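The rank-sum test used throughout these comparisons can be sketched in pure Python with the usual normal approximation (libraries such as `scipy.stats.ranksums` implement the same idea); `rank_sum_p` below is an illustrative helper, not the authors' code:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) p-value using the
    normal approximation; suitable for 25-run samples as in the paper."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    vals = [v for v, _ in combined]
    # assign average ranks (ties get the mean of their rank positions)
    ranks, i = {}, 0
    while i < len(vals):
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0   # ranks are 1-based
        i = j
    r_a = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r_a - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

A p-value below the 0.05 confidence level rejects the null hypothesis, exactly as described in the text.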
Figure 10.
Average SD of each optimizer on all CEC2014.
Table 9.
p-values of LSO with each rival optimizer on CEC-2014 test suite (F21–F50).
4.5. LSO for the Challenging CEC2017
This section compares the performance of LSO and the other optimizers on the CEC2017 test suite to further validate LSO against the comparators on more challenging mathematical test functions [126]. CEC2017 comprises four families of functions: unimodal (F51–F52), multimodal (F53–F59), composition (F60–F69), and hybrid (F70–F79). As previously described, unimodal test functions are preferable for evaluating the exploitation operator of optimization algorithms because they involve only one global best solution, while multimodal test functions contain multiple local optima, which makes them particularly well-suited for evaluating the exploration operator of newly proposed optimizers; composition and hybrid test functions are designed to evaluate an algorithm's ability to escape local optima. The dimension of this benchmark is set to 10 in the experiments conducted in this section. Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7 contain the characteristics of the CEC2017 benchmark.
Table 10 shows the Avr, SD, and rank values of the 25 independent findings obtained by the proposed and rival optimizers on this suite. According to this table, LSO ranks first among all optimizers for the unimodal, multimodal, composition, and hybrid test functions, since it achieves better Avr and SD on all test functions. Figure 11 and Figure 12 display the averages of the rank and standard deviation values presented in Table 10 over all test functions for each algorithm. According to these figures, LSO is the best, since it occupies the first rank with a value of 1 and has the lowest standard deviation of 32, while RSA is the worst.
Table 10.
Avr and SD of CEC2017 for 25 independent runs (The bolded value is the best overall).
Figure 11.
Average rank on all CEC2017 test functions.
Figure 12.
Average SD of each optimizer on all CEC2017.
The Wilcoxon rank-sum test is used to determine the difference between the outcomes of LSO and those of each rival optimizer on the CEC2017 test functions. The test demonstrates a significant difference between the outcomes of LSO and the rival algorithms, as the p-values in Table 11 support the alternative hypothesis. Ultimately, LSO is a strong optimizer, as demonstrated by its ability to outperform GBO, RUN, GTO, AVOA, SMA, RSA, and EO, which are among the most recently published optimizers, as well as four highly-cited metaheuristics: WOA, GWO, SSA, and DE.
Table 11.
p-values of LSO with each rival optimizer on CEC-2017 test suite (F51–F79).
4.6. LSO for the Challenging CEC2020
In this section, additional testing is carried out on the CEC2020 test suite to determine whether the proposed algorithm has stable performance on more challenging test functions. An algorithm's ability to explore, exploit, and escape local minima can be evaluated using this suite, which consists of ten test functions divided into four categories: unimodal, multimodal, hybrid, and composition. The characteristics of this suite are shown in Appendix A Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7. LSO has superior performance on all test functions of the CEC2020 test suite except F83, as evidenced by the Avr, rank, and SD values presented in Table 12 and obtained after analyzing the results of 25 independent runs. Figure 13 and Figure 14 show the averages of the rank values and SD on all test functions of CEC2020. According to these figures, LSO is the best, because it is ranked first with a value of 1.7 and has the lowest standard deviation of 38, whereas RSA is the worst, ranked last with a value of 12. Finally, the Wilcoxon rank-sum test is used to determine the difference between the results of LSO and those of each rival optimizer on this suite. The test results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms for most test functions, as evidenced by the p-values in Table 13, which support the alternative hypothesis. This section thus provides additional experimental evidence that LSO belongs among the strong optimizers.
Table 12.
Avr and SD of CEC2020 for 25 independent runs (The bolded value is the best overall).
Figure 13.
Average rank of each optimizer on all CEC2020.
Figure 14.
Average SD of each optimizer on all CEC2020.
Table 13.
p-values of LSO with each rival optimizer on CEC-2020 test suite (F80–F89).
4.7. LSO for the Challenging CEC2022
The proposed and rival methods are tested again on the CEC2022 test suite. This test suite contains 12 test functions divided into unimodal, multimodal, hybrid, and composition categories. The properties of this test suite are also listed in Appendix A (Table A1, Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7), and their dimensions in the conducted experiments are 10. Table 14 shows the Avr, rank, and SD for 25 independent runs, demonstrating LSO's superior performance on 9 out of 12 test functions of the CEC2022 test suite. The averages of the rank values and standard deviations on all test functions of CEC2022 are depicted in Figure 15 and Figure 16. These figures show that LSO is the best, because it is ranked first with a value of 1.6 and has the lowest standard deviation of 12, whereas RSA is the worst, because it is ranked last with a value of 12 and has the highest standard deviation. Finally, the Wilcoxon rank-sum test is used to determine whether there is a significant difference between the results of LSO and those of each rival optimizer on this suite. For most test functions, the results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms, as demonstrated by the p-values in Table 15. This section further affirms that LSO belongs to the category of highly performing optimizers.
Table 14.
Avr and SD of CEC2022 for 25 independent runs (The bolded value is the best overall).
Figure 15.
Average rank of each optimizer on all CEC2022.
Figure 16.
Average SD of each optimizer on all CEC2022.
Table 15.
p-values of LSO with each rival optimizer on CEC-2022 test suite (F90–F101).
4.8. The Overall Effectiveness of the Proposed Algorithm
In the previous sections, LSO was separately assessed using five mathematical benchmarks (CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022) and compared with twenty-two well-established metaheuristic algorithms, but its overall performance across all benchmarks has yet to be elaborated. Therefore, this section compares the overall performance of LSO and the other algorithms over all test functions of each benchmark and over all benchmarks together. The average rank values and SD values for each benchmark are computed and reported in Table 16. This table also indicates the overall effectiveness of the proposed algorithm and the rival algorithms using an additional metric, known as the overall effectiveness (OE), computed according to the following formula [127]:
OE_i = ((N − L_i) / N) × 100%, where N denotes the total number of test functions, and L_i denotes the number of test functions on which the i-th algorithm is a loser. Inspecting this table reveals that the proposed algorithm is superior in terms of SD, rank, and OE on four challenging benchmarks, and competitive in rank while superior in SD and OE on the remaining benchmark. The averages of the rank values, OE values, and SD values of each algorithm across all benchmarks are computed and reported in the last rows of Table 16 to measure the overall effectiveness of each algorithm across all benchmarks. According to those rows, LSO ranks first on all indicators, with a significant difference from the nearest well-performing algorithm. LSO's strong performance is due to the variation of its search process, which gives the algorithm strong exploration and exploitation operators during optimization, helping preserve population diversity, avoid getting stuck in local optima, and accelerate convergence toward the best-so-far solution. It is worth noting that both the inverse incomplete gamma function and the inverse random number are capable of preserving population diversity and avoiding stagnation in local minima, owing to their ability to generate large numbers that help jump the current solution to far-away regions within the search space throughout the optimization process. On the other hand, the three different scattering stages provide variation in the exploitation operator, allowing LSO to rapidly reach near-optimal solutions for optimization problems of varying difficulty.
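As a worked example of the OE metric described above (`overall_effectiveness` is an illustrative helper name):

```python
def overall_effectiveness(total_functions, losses):
    """OE (%) as described in the text: the share of test functions on
    which an algorithm is not a loser [127]."""
    return (total_functions - losses) / total_functions * 100.0
```

For instance, an algorithm that loses on 3 of 30 test functions attains OE = 90%.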
Table 16.
Overall effectiveness over five investigated benchmarks (The bolded and red value is the best overall).
4.9. Convergence Curve
Figure 17 compares the convergence rates of LSO and the competing algorithms to show how they differ in reaching the near-optimal solution in fewer function evaluations. This figure illustrates that all LSO convergence curves exhibit an accelerated decreasing pattern throughout the various stages of the optimization process for the four families of test functions (unimodal, multimodal, composition, and hybrid). The LSO optimizer converges significantly faster than any of the competing algorithms, as shown by the convergence curves in Figure 17. The exploration and exploitation operators of LSO work together in harmony, which prevents stagnation in local minima and speeds up convergence toward the most promising regions.



Figure 17.
Depiction of averaged convergence speed among algorithms on some test functions. (a) F51 (Unimodal), (b) F52 (Unimodal), (c) F53 (Multimodal), (d) F54 (Multimodal), (e) F55 (Multimodal), (f) F56 (Multimodal), (g) F57 (Multimodal), (h) F59 (Multimodal), (i) F60 (Multimodal), (j) F61 (Multimodal), (k) F62 (Hybrid), (l) F63 (Hybrid), (m) F65 (Hybrid), (n) F64 (Hybrid), (o) F73 (Composition), (p) F75 (Composition), (q) F77 (Composition), and (r) F78 (Composition).
4.10. Qualitative Analysis
The following metrics are used to evaluate LSO's performance during optimization: diversity, convergence curve, average fitness value, trajectory in the first dimension, and search history. The diversity metric shows how far apart an individual is, on average, from the other individuals in the population; the convergence curve depicts the best fitness value obtained at each iteration; the average fitness value represents the average of all individuals' fitness values at each iteration; the trajectory curve shows how a solution's first dimension changes as it progresses through the optimization process; and the search history shows how a solution's position changed during the optimization process.
The diversity metric shown in the second column of Figure 18 is computed by averaging the mean coordinate-wise distance between every pair of solutions in the population according to the following formula:

Div = (1 / (N(N − 1))) Σ_{i=1..N} Σ_{j=1..N, j≠i} (1/d) Σ_{k=1..d} |x_{i,k} − x_{j,k}|

where d indicates the number of dimensions, N stands for the population size, and x_{i,k} and x_{j,k} indicate the kth dimension of the ith and jth solutions such that i ≠ j. Observing Figure 18 shows that the diversity metric of LSO decreases over time, indicating that LSO's optimization process gradually shifts from exploration to exploitation. LSO initially explores most regions of the search space to avoid stagnation in local minima and then shifts gradually to the exploitation operator, quickly reducing diversity during the second half of the optimization process to accelerate convergence toward the most promising region discovered thus far.
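A minimal sketch of this diversity measure, assuming the mean pairwise coordinate-wise distance form described above (`population_diversity` is an illustrative name):

```python
def population_diversity(pop):
    """Mean pairwise distance between solutions, averaged over dimensions,
    for a population given as a list of coordinate lists (i != j)."""
    n, d = len(pop), len(pop[0])
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total += sum(abs(pop[i][k] - pop[j][k]) for k in range(d)) / d
    return total / (n * (n - 1))
```

Plotting this quantity per iteration produces the decreasing diversity curves in the second column of Figure 18.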

Figure 18.
Diversity, convergence curve, average fitness history, trajectory, and search history.
The LSO convergence curves show an accelerated decreasing pattern on a variety of test functions during the latter half of the optimization process, when population diversity is reduced and the exploratory phase has largely given way to the exploitative phase, as illustrated in the third column of Figure 18. At the beginning of the optimization process, LSO converges slowly to avoid becoming stuck in local minima; convergence then improves greatly in the second half of the process.
The depiction of the average fitness history in Figure 18 shows that it decreases over time as all solutions focus on exploiting the regions around the best-so-far solution; as a result, the fitness values of all solutions move toward nearly the same region, which contains the best-so-far solution. Figure 18 also depicts the trajectory of LSO in the first dimension as it gradually explores the search space. Because of the need to find a better solution in a short time, the exploratory approach is gradually replaced by an exploitative approach that restricts the scope of the search. As can be seen from LSO's trajectory curve, the optimization process begins with an exploratory trend before moving to an exploitation trend in search of better outcomes.
In the final column of Figure 18, the history of LSO positions is depicted. The search history is investigated by depicting the search core followed by LSO's solutions throughout the whole optimization process for the first two dimensions of an optimization problem; the same pattern is followed for the other dimensions. As can be seen in this column, LSO does not follow a consistent pattern across all test functions. Consider F21 as an example: to find an optimal solution for this problem, LSO first explores the entire search space before narrowing its focus to the range 0–50. The search history graph shows that LSO's positions are more dispersed for the multimodal and composition test functions, while for the unimodal test functions they are more concentrated around the optimum points.
4.11. Computational Cost
The average computational cost consumed by each algorithm on the investigated test functions is shown in Figure 19. The graph shows that the CPU time of all algorithms is nearly the same, except for RSA and SMA, which take considerably longer, and WOA, which takes less than half the time required by the rest. LSO is thus far superior in convergence speed and solution quality, with a negligible difference in CPU time.
Figure 19.
Comparison among algorithms in terms of CPU time.
5. LSO for Engineering Design Problems
In this section, LSO is applied to solve three constrained engineering benchmarks: the tension/compression spring design optimization problem, the welded beam design problem, and the pressure vessel design problem. The best values found by LSO over 25 runs are compared with those of many optimization algorithms. For the compared algorithms' parameter settings, all parameters are left at the defaults suggested by their authors. The parameters of LSO are kept as listed in Table 5, except for Ps, which substantially affects the performance of LSO depending on the nature of the solved problem. Therefore, an extensive experiment has been conducted with various values of this parameter, and the obtained outcomes are depicted in Figure 20. This figure shows that the performance of LSO is maximized when Ps = 0.6. In addition to all rival algorithms used in the previous comparisons, five recently-published metaheuristic algorithms, namely political optimizer (PO) [45], continuous-state cellular automata algorithm (CCAA) [128], snake optimizer (SO) [91], beluga whale optimization (BWO) [96], and driving training-based optimization (DTBO) [50], are added to the next experiments to further show the superiority of LSO when tackling real-world optimization problems such as engineering design problems. Additionally, LSO is compared with state-of-the-art optimizers proposed for each constrained engineering benchmark according to the cited results.
Figure 20.
Tuning the parameter Ps over tension spring design.
Engineering design problems are characterized by many different constraints. To handle these constraints, we employ a penalty-based constraint-handling technique with LSO. Several penalty-function methods exist for handling the constraints of optimization problems. In this work, we implement the death penalty method (the rejection-of-infeasible-solutions method) [129], in which infeasible solutions are rejected and regenerated, so an infeasible solution is automatically omitted from the candidate solutions. The main advantages of the death penalty method are its simple implementation and low computational complexity.
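The death penalty scheme can be sketched as a rejection loop; the function below is an illustrative stand-in, not the paper's implementation:

```python
import random

def death_penalty_sample(objective, constraints, bounds, rng, max_tries=1000):
    """Death-penalty handling [129]: infeasible candidates are rejected
    and regenerated until a feasible one appears.
    constraints: list of functions g with feasibility meaning g(x) <= 0."""
    for _ in range(max_tries):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(g(x) <= 0.0 for g in constraints):
            return x, objective(x)
    raise RuntimeError("no feasible solution found")
```

In a full optimizer the regeneration would apply to updated candidates rather than fresh uniform samples, but the rejection logic is the same.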
5.1. Tension/Compression Spring Design Optimization Problem
The main objective of the tension/compression spring design optimization problem is to find the minimum volume of a coil spring under compression, subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and design variables (see Figure 21a). Mathematically, the problem can be formulated as [130]:

minimize f(x) = (x1 + 2) ... rewritten in the standard form:
minimize f(x) = (x3 + 2) x2 x1^2
subject to:
g1(x) = 1 − (x2^3 x3) / (71785 x1^4) ≤ 0
g2(x) = (4 x2^2 − x1 x2) / (12566 (x2 x1^3 − x1^4)) + 1 / (5108 x1^2) − 1 ≤ 0
g3(x) = 1 − (140.45 x1) / (x2^2 x3) ≤ 0
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0
0.05 ≤ x1 ≤ 2.0, 0.25 ≤ x2 ≤ 1.3, 2.0 ≤ x3 ≤ 15.0

where x1 is the wire diameter, x2 is the mean coil diameter, and x3 is the number of active coils.
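As a sanity check of the standard formulation [130], the snippet below evaluates the objective and constraints at a near-optimal design widely reported in the literature (roughly x ≈ (0.05169, 0.35675, 11.2871), f ≈ 0.012665); the function names are illustrative:

```python
def spring_objective(x):
    """Coil volume; x = (wire diameter, mean coil diameter, active coils)."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1 ** 2

def spring_constraints(x):
    """Standard constraint set g1..g4; feasible when all values <= 0."""
    x1, x2, x3 = x
    return [
        1 - (x2 ** 3 * x3) / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
        + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]
```

At the reference design, the active constraints g1 and g2 sit at (numerically) zero, which is why near-optimal designs hug the constraint boundary.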
Figure 21.
Tension/compression spring design problem. (a) Structure. (b) Convergence curve.
LSO is compared with additional algorithms (mathematical techniques or metaheuristics) selected from the literature, including evolution strategies (ES) [131], gravitational search algorithm (GSA) [55], Tabu search (TS) [132], swarm strategy [133], unified particle swarm optimization (UPSO) [134], cultural algorithms (CA) [132], two-point adaptive nonlinear approximations-3 (TANA-3) [135], particle swarm optimization (PSO) [136], ant colony optimization (ACO) [137], genetic algorithm (GA) [138], quantum evolutionary algorithm (QEA) [137], ray optimization (RO) [67], probability collectives (PC) [139], social interaction genetic algorithm (SIGA) [140], and parallel genetic algorithm with social interaction (PSIGA) [138]. Table 17 shows the results obtained by LSO for the tension/compression spring design optimization problem. As shown, LSO reaches lower minimum values than the others. Figure 21b shows the convergence speed of LSO.
Table 17.
Comparison of the results for tension/compression spring design optimization problem.
5.2. Welded Beam Design Problem
The problem of designing welded beams can be defined as finding the feasible dimensions of a welded beam , , , and (which are the thickness of weld, length of the clamped bar, the height of the bar, and thickness of the bar, respectively) that minimize the total manufacturing cost subject to a set of constraints. Figure 22a shows a representation of the weld beam design problem. Mathematically, the problem can be formulated as the following [130]:
where τ is the shear stress in the weld, σ is the bending stress in the beam, Pc is the buckling load on the bar, and δ is the end deflection of the beam.
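The cost function and the seven constraints were lost with the equation images. A minimal sketch of the standard formulation from the literature is given below; the constants (P = 6000 lb, L = 14 in, E = 30 Mpsi, G = 12 Mpsi) and the reference design are assumptions taken from the common benchmark statement, not from this paper, and the stress and load constraints are normalized by their limits for readability:

```python
import math

def welded_beam(x):
    """Welded beam design (standard literature formulation).

    x = (h, l, t, b): weld thickness, clamped-bar length, bar height,
    bar thickness. Returns (cost, constraints); feasible when all <= 0.
    """
    h, l, t, b = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25

    cost = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

    # Weld shear stress: primary, torsional, and combined components.
    tau_p = P / (math.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J
    tau = math.sqrt(tau_p**2 + tau_p * tau_pp * l / R + tau_pp**2)

    sigma = 6.0 * P * L / (b * t**2)                  # bending stress
    delta = 4.0 * P * L**3 / (E * t**3 * b)           # end deflection
    Pc = (4.013 * E * t * b**3 / (6.0 * L**2)) * (
        1.0 - t / (2.0 * L) * math.sqrt(E / (4.0 * G))
    )                                                 # buckling load

    g = [
        tau / tau_max - 1.0,                          # weld shear limit
        sigma / sigma_max - 1.0,                      # bending stress limit
        h - b,                                        # geometric consistency
        0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,  # material cost cap
        0.125 - h,                                    # minimum weld size
        delta / delta_max - 1.0,                      # deflection limit
        1.0 - Pc / P,                                 # buckling safety
    ]
    return cost, g

cost, g = welded_beam((0.205730, 3.470489, 9.036624, 0.205730))
```

The reference design is the widely cited near-optimum with cost ≈ 1.7249; several constraints are active there, so small positive residuals within numerical tolerance are expected.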
Figure 22.
Welded beam design problem. (a) Structure. (b) Convergence curve.
The proposed algorithm is compared with nine additional algorithms from the literature, including hybrid and improved ones: RO [67], WOA [41], HS [141], hybrid charged system search and particle swarm optimization (PSO) algorithms I (CSS&PSO I) [136] and II (CSS&PSO II) [136], particle swarm optimization with struggle selection (PSOStr) [140], firefly algorithm (FA) [142], differential evolution with dynamic stochastic selection (DE) [143], and a hybrid artificial immune system and genetic algorithm (AIS-GA). As shown in Table 18, on one side, LSO achieves results competitive with GBO, GTO, EO, and DE, and is superior in terms of convergence speed, as shown in Figure 22b. On the other side, LSO is superior to all the other optimizers.
Table 18.
Comparison of the results for the welded beam design problem.
5.3. Pressure Vessel Design Problem
The pressure vessel design problem can be described as minimizing the total fabrication cost of a cylindrical pressure vessel subject to its design constraints. The mathematical formulation of the problem can be expressed as [41]:
where Ts is the shell thickness, Th is the head thickness, R is the inner radius, and L is the length of the cylindrical section (without the head), as shown in Figure 23a.
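As with the previous problems, the cost function and constraints were rendered as images in the original. A sketch of the standard formulation from the literature follows; the cost coefficients, the 1,296,000 in³ working-volume requirement, and the reference design are assumptions from the common benchmark statement rather than values quoted from this paper (the volume constraint is normalized by its bound):

```python
import math

def pressure_vessel(x):
    """Pressure vessel design (standard literature formulation).

    x = (Ts, Th, R, L): shell thickness, head thickness, inner radius,
    cylindrical-section length. Returns (cost, constraints); feasible
    when all constraint values are <= 0.
    """
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)
    g = [
        -Ts + 0.0193 * R,                  # shell thickness vs. pressure
        -Th + 0.00954 * R,                 # head thickness vs. pressure
        (1296000.0 - math.pi * R**2 * L
         - 4.0 / 3.0 * math.pi * R**3) / 1296000.0,  # working volume
        L - 240.0,                         # length limit
    ]
    return cost, g

# Widely reported near-optimal continuous design, cost about 6059.7.
cost, g = pressure_vessel((0.8125, 0.4375, 42.098446, 176.636596))
```

At this design the thickness and volume constraints are active, which is why solvers that cannot track the constraint boundary closely report visibly higher costs on this problem.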
Figure 23.
Pressure vessel design problem. (a) Structure. (b) Convergence curve.
For the pressure vessel design problem, the proposed algorithm is compared with 16 metaheuristics and mathematical methods, including Harris Hawks optimization (HHO) [22], grey wolf optimizer (GWO) [122], hybrid particle swarm optimization with a feasibility-based rule (HPSO) [144], Gaussian quantum-behaved particle swarm optimization (G-QPSO) [145], water evaporation optimization (WEO) [146], bat algorithm (BA) [147], MFO [148], charged system search (CSS) [56], multimembered evolution strategy (ESs) [149], genetic algorithm for design and optimization of composite laminates (BIANCA) [150], modified differential evolution (MDDE) [151], differential evolution with level comparison (DELC) [152], WOA [41], niched-pareto genetic algorithm (NPGA) [153], Lagrangian multiplier [41], and branch-and-bound [41]. As observed from Table 19, LSO reaches the best results for the given problem compared to the other algorithms, except for DE and GTO, which achieve competitive best results; however, LSO is superior in average convergence speed, as shown in Figure 23b.
Table 19.
Comparison of the results for pressure vessel design optimization problem.
6. Conclusions
In this work, a novel LSO metaheuristic algorithm is introduced that is inspired by the dispersion of sunlight through a water droplet, which causes the rainbow phenomenon. The proposed algorithm is tested on several selected benchmarks. For the CEC2005 benchmarks, LSO performs significantly well, especially on the fixed-dimension multi-modal functions, which indicates that LSO has high exploration capability. In addition, the sensitivity analysis of LSO's parameters shows that the selected parameter values perform best. Finally, for CEC2014, CEC2017, CEC2020, and CEC2022, LSO shows superior performance compared to several well-established and recently published metaheuristic algorithms, namely DE, WOA, SMA, EO, GWO, GTO, GBO, RSA, SSA, RUN, and AVOA, which were selected for comparison due to their stability and recency relative to other optimization algorithms such as MBO, EWA, EHO, MS, HGS, CPA, and HHO proposed for tackling the same benchmarks. This indicates that LSO strikes a good balance between exploration and exploitation. LSO also achieves competitive results on engineering design problems compared to other algorithms, even improved and hybrid metaheuristics. For future work, we suggest developing binary and multi-objective versions of LSO. In addition, LSO could be enhanced by using fuzzy controllers or chaotic maps to define its controlling probabilities, or by hybridization with other algorithms. Finally, we suggest applying LSO to recent real-life optimization problems such as sensor allocation, smart management of the power grid, and smart routing of vehicles.
Author Contributions
Conceptualization, M.A.-B. and R.M.; methodology, M.A.-B., R.M. and K.M.S.; software, M.A.-B. and R.M.; validation, M.A.-B., R.M., R.K.C. and K.M.S.; formal analysis, M.A.-B., R.M., R.K.C. and K.M.S.; investigation, M.A.-B., R.M., R.K.C. and K.M.S.; data curation, M.A.-B., R.M., R.K.C. and K.M.S.; writing—original draft preparation, M.A.-B. and R.M.; writing—review and editing M.A.-B., R.M., R.K.C. and K.M.S.; visualization, M.A.-B., R.M., R.K.C. and K.M.S.; supervision, M.A.-B. and R.K.C.; funding acquisition, R.K.C. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The code used in this paper can be obtained from this publicly accessible platform: https://drive.matlab.com/sharing/633c724c-9b52-4145-beba-c9e49f9df42e (accessed on 9 January 2022).
Acknowledgments
We would like to thank the Editor and anonymous reviewers for their thorough comments, which have helped us improve the manuscript—also, special thanks to Seyedali Mirjalili for his excellent suggestions and comments.
Conflicts of Interest
The authors declare no conflict of interest.
Nomenclature
| Nomenclature of symbols used in this study | |
| Angle of reflection or refraction | |
| Refractive index of a medium | |
| Refracted or reflected light ray | |
| Normal line at a point | |
| Controlling probability of inner and outer reflection and refraction | |
| Controlling probability of the first scattering phase | |
| Controlling probability of the second scattering phase | |
| Iteration number | |
| Initial candidate solution | |
| Population size | |
| Problem dimension | |
| Lower bound of the search space | |
| Upper bound of the search space | |
| Vector of uniform random numbers | |
| Candidate solution at iteration | |
| Scaling factor | |
| Scaling factor | |
| Scaling factor | |
| Inverse incomplete gamma function | |
Appendix A
Table A1.
Description of uni-modal benchmark functions.
| ID | Benchmark | D | Domain | Global Opt. |
|---|---|---|---|---|
| F1 | 100 | |||
| F2 | 100 | |||
| F3 | 100 | |||
| F4 | 100 | |||
| F5 | 100 |
Table A2.
Description of multi-modal benchmark functions.
| ID | Benchmark | D | Domain | Global Opt. |
|---|---|---|---|---|
| F6 | 100 | |||
| F7 | 100 | |||
| F8 | 100 | |||
| F9 | 100 | |||
| F10 | 100 |
Table A3.
Fixed-dimension multi-modal benchmark.
| ID | Benchmark | D | Domain | Global Opt. |
|---|---|---|---|---|
| F11 | 2 | |||
| F12 | 4 | |||
| F13 | 2 | |||
| F14 | 2 | |||
| F15 | 2 | |||
| F16 | 3 | |||
| F17 | 6 | |||
| F18 | 4 | |||
| F19 | 4 | |||
| F20 | 4 |
Table A4.
Description of CEC-2014 benchmark.
| Type | ID | Functions | Global Opt. | Domain |
|---|---|---|---|---|
| Unimodal function | F21 (CF1) | Rotated High Conditioned Elliptic Function | 100 | [−100,100] |
| F22 (CF2) | Rotated Bent Cigar Function | 200 | [−100,100] | |
| F23 (CF3) | Rotated Discus Function | 300 | [−100,100] | |
| Simple multimodal Test functions | F24 (CF4) | Shifted and Rotated Rosenbrock’s Function | 400 | [−100,100] |
| F25 (CF5) | Shifted and Rotated Ackley’s Function | 500 | [−100,100] | |
| F26 (CF6) | Shifted and Rotated Weierstrass Function | 600 | [−100,100] | |
| F27 (CF7) | Shifted and Rotated Griewank’s Function | 700 | [−100,100] | |
| F28 (CF8) | Shifted Rastrigin’s Function | 800 | [−100,100] | |
| F29 (CF9) | Shifted and Rotated Rastrigin’s Function | 900 | [−100,100] | |
| F30 (CF10) | Shifted Schwefel’s Function | 1000 | [−100,100] | |
| F31 (CF11) | Shifted and Rotated Schwefel’s Function | 1100 | [−100,100] | |
| F32 (CF12) | Shifted and Rotated Katsuura Function | 1200 | [−100,100] | |
| F33 (CF13) | Shifted and Rotated HappyCat Function | 1300 | [−100,100] | |
| F34 (CF14) | Shifted and Rotated HGBat Function | 1400 | [−100,100] | |
| F35 (CF15) | Shifted and Rotated Expanded Griewank’s plus Rosenbrock’s Function | 1500 | [−100,100] | |
| F36 (CF16) | Shifted and Rotated Expanded Scaffer’s F6 Function | 1600 | [−100,100] | |
| Hybrid test functions | F37 (CF17) | Hybrid Function 1 | 1700 | [−100,100] |
| F38 (CF18) | Hybrid Function 2 | 1800 | [−100,100] | |
| F39 (CF19) | Hybrid Function 3 | 1900 | [−100,100] | |
| F40 (CF20) | Hybrid Function 4 | 2000 | [−100,100] | |
| F41 (CF21) | Hybrid Function 5 | 2100 | [−100,100] | 
| F42 (CF22) | Hybrid Function 6 | 2200 | [−100,100] | 
| Composition test functions | F43 (CF23) | Composition Function 1 | 2300 | [−100,100] |
| F44 (CF24) | Composition Function 2 | 2400 | [−100,100] | |
| F45 (CF25) | Composition Function 3 | 2500 | [−100,100] | |
| F46 (CF26) | Composition Function 4 | 2600 | [−100,100] | |
| F47 (CF27) | Composition Function 5 | 2700 | [−100,100] | |
| F48 (CF28) | Composition Function 6 | 2800 | [−100,100] | |
| F49 (CF29) | Composition Function 7 | 2900 | [−100,100] | |
| F50 (CF30) | Composition Function 8 | 3000 | [−100,100] |
Table A5.
Description of CEC-2017 benchmark.
| Type | ID | Functions | Global Opt. | Domain |
|---|---|---|---|---|
| Unimodal function | F51 (CF1) | Shifted and Rotated Bent Cigar Function | 100 | [−100,100] |
| F52 (CF3) | Shifted and Rotated Zakharov Function | 300 | [−100,100] | |
| Simple multimodal Test functions | F53 (CF4) | Shifted and Rotated Rosenbrock’s Function | 400 | [−100,100] |
| F54 (CF5) | Shifted and Rotated Rastrigin’s Function | 500 | [−100,100] | |
| F55 (CF6) | Shifted and Rotated Expanded Scaffer’s Function | 600 | [−100,100] | |
| F56 (CF7) | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700 | [−100,100] | |
| F57 (CF8) | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800 | [−100,100] | |
| F58 (CF9) | Shifted and Rotated Levy Function | 900 | [−100,100] | |
| F59 (CF10) | Shifted and Rotated Schwefel’s Function | 1000 | [−100,100] | |
| Hybrid test functions | F60 (CF11) | Hybrid Function 1 | 1100 | [−100,100] |
| F61 (CF12) | Hybrid Function 2 | 1200 | [−100,100] | |
| F62 (CF13) | Hybrid Function 3 | 1300 | [−100,100] | |
| F63 (CF14) | Hybrid Function 4 | 1400 | [−100,100] | |
| F64 (CF15) | Hybrid Function 5 | 1500 | [−100,100] | |
| F65 (CF16) | Hybrid Function 6 | 1600 | [−100,100] | |
| F66 (CF17) | Hybrid Function 7 | 1700 | [−100,100] | |
| F67 (CF18) | Hybrid Function 8 | 1800 | [−100,100] | |
| F68 (CF19) | Hybrid Function 9 | 1900 | [−100,100] | |
| F69 (CF20) | Hybrid Function 10 | 2000 | [−100,100] | |
| Composition test functions | F70 (CF21) | Composition Function 1 | 2100 | [−100,100] |
| F71 (CF22) | Composition Function 2 | 2200 | [−100,100] | |
| F72 (CF23) | Composition Function 3 | 2300 | [−100,100] | |
| F73 (CF24) | Composition Function 4 | 2400 | [−100,100] | |
| F74 (CF25) | Composition Function 5 | 2500 | [−100,100] | |
| F75 (CF26) | Composition Function 6 | 2600 | [−100,100] | |
| F76 (CF27) | Composition Function 7 | 2700 | [−100,100] | |
| F77 (CF28) | Composition Function 8 | 2800 | [−100,100] | |
| F78 (CF29) | Composition Function 9 | 2900 | [−100,100] | |
| F79 (CF30) | Composition Function 10 | 3000 | [−100,100] |
Table A6.
Description of CEC2020 benchmarks.
| Type | ID | Functions | Global Opt. | Domain |
|---|---|---|---|---|
| Unimodal | F80 (CF1) | Shifted and Rotated Bent Cigar Function | 100 | [−100,100] |
| multimodal | F81 (CF2) | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700 | [−100,100] |
| F82 (CF3) | Hybrid Function 1 | 1100 | [−100,100] | |
| Hybrid | F83 (CF4) | Hybrid Function 2 | 1700 | [−100,100] |
| F84 (CF5) | Hybrid Function 3 | 1900 | [−100,100] | |
| F85 (CF6) | Hybrid Function 4 | 2100 | [−100,100] | |
| F86 (CF7) | Hybrid Function 5 | 1600 | [−100,100] | |
| Composition | F87 (CF8) | Composition Function 1 | 2200 | [−100,100] |
| F88 (CF9) | Composition Function 2 | 2400 | [−100,100] | |
| F89 (CF10) | Composition Function 3 | 2500 | [−100,100] |
Table A7.
Description of CEC2022 benchmark.
| Type | No. | Functions | Global Opt. | Domain |
|---|---|---|---|---|
| Unimodal function | F90 | Shifted and full Rotated Zakharov Function | 300 | [−100,100] |
| Basic functions | F91 | Shifted and full Rotated Rosenbrock’s Function | 400 | [−100,100] |
| F92 | Shifted and full Rotated Expanded Schaffer’s f6 Function | 600 | [−100,100] | |
| F93 | Shifted and full Rotated Non-continuous Rastrigin’s Function | 800 | [−100,100] | |
| F94 | Shifted and Rotated Levy Function | 900 | [−100,100] | |
| Hybrid functions | F95 | Hybrid function 1 (N = 3) | 1800 | [−100,100] |
| F96 | Hybrid function 2 (N = 6) | 2000 | [−100,100] | |
| F97 | Hybrid function 3 (N = 5) | 2200 | [−100,100] | |
| Composite functions | F98 | Composite function 1 (N = 5) | 2300 | [−100,100] |
| F99 | Composite function 2 (N = 4) | 2400 | [−100,100] | |
| F100 | Composite function 3 (N = 5) | 2600 | [−100,100] | |
| F101 | Composite function 4 (N = 6) | 2700 | [−100,100] |
References
- Fausto, F.; Reyna-Orta, A.; Cuevas, E.; Andrade, Á.G.; Perez-Cisneros, M. From ants to whales: Metaheuristics for all tastes. Artif. Intell. Rev. 2020, 53, 753–810. [Google Scholar] [CrossRef]
- Ross, O.H.M. A review of quantum-inspired metaheuristics: Going from classical computers to real quantum computers. IEEE Access 2019, 8, 814–838. [Google Scholar] [CrossRef]
- Guo, Y.-N.; Zhang, X.; Gong, D.-W.; Zhang, Z.; Yang, J.-J. Novel interactive preference-based multiobjective evolutionary optimization for bolt supporting networks. IEEE Trans. Evol. Comput. 2019, 24, 750–764. [Google Scholar] [CrossRef]
- Guo, Y.; Yang, H.; Chen, M.; Cheng, J.; Gong, D. Ensemble prediction-based dynamic robust multi-objective optimization methods. Swarm Evol. Comput. 2019, 48, 156–171. [Google Scholar] [CrossRef]
- Ji, J.-J.; Guo, Y.-N.; Gao, X.-Z.; Gong, D.-W.; Wang, Y.-P. Q-Learning-Based Hyperheuristic Evolutionary Algorithm for Dynamic Task Allocation of Crowdsensing. IEEE Trans. Cybern. 2021, 34606469. [Google Scholar] [CrossRef]
- Loubière, P.; Jourdan, A.; Siarry, P.; Chelouah, R. A sensitivity analysis indicator to adapt the shift length in a metaheuristic. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–6. [Google Scholar]
- Gao, K.-Z.; He, Z.M.; Huang, Y.; Duan, P.-Y.; Suganthan, P.N. A survey on meta-heuristics for solving disassembly line balancing, planning and scheduling problems in remanufacturing. Swarm Evol. Comput. 2020, 57, 100719. [Google Scholar] [CrossRef]
- Li, J.; Lei, H.; Alavi, A.H.; Wang, G.-G. Elephant herding optimization: Variants, hybrids, and applications. Mathematics 2020, 8, 1415. [Google Scholar] [CrossRef]
- Feng, Y.; Deb, S.; Wang, G.-G.; Alavi, A.H. Monarch butterfly optimization: A comprehensive review. Expert Syst. Appl. 2021, 168, 114418. [Google Scholar] [CrossRef]
- Li, M.; Wang, G.-G. A Review of Green Shop Scheduling Problem. Inf. Sci. 2022, 589, 478–496. [Google Scholar] [CrossRef]
- Li, W.; Wang, G.-G.; Gandomi, A.H. A survey of learning-based intelligent optimization algorithms. Arch. Comput. Methods Eng. 2021, 28, 3781–3799. [Google Scholar] [CrossRef]
- Han, J.-H.; Choi, D.-J.; Park, S.-U.; Hong, S.-K. Hyperparameter optimization for multi-layer data input using genetic algorithm. In Proceedings of the 2020 IEEE 7th International Conference on Industrial Engineering and Applications (ICIEA), Bangkok, Thailand, 16–21 April 2020; pp. 701–704. [Google Scholar]
- Tang, Y.; Jia, H.; Verma, N. Reducing energy of approximate feature extraction in heterogeneous architectures for sensor inference via energy-aware genetic programming. IEEE Trans. Circuits Syst. I Regul. Pap. 2020, 67, 1576–1587. [Google Scholar] [CrossRef]
- Rechenberg, I. Evolutionsstrategien. In Simulationsmethoden in der Medizin und Biologie; Springer: New York, NY, USA, 1978; pp. 83–114. [Google Scholar]
- Dasgupta, D.; Michalewicz, Z. Evolutionary Algorithms in Engineering Applications; Springer Science & Business Media: Berlin, Germany, 2013. [Google Scholar]
- Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558. [Google Scholar] [CrossRef]
- Sharif, M.; Amin, J.; Raza, M.; Yasmin, M.; Satapathy, S.C. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit. Lett. 2020, 129, 150–157. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
- Tsipianitis, A.; Tsompanakis, Y. Improved Cuckoo Search algorithmic variants for constrained nonlinear optimization. Adv. Eng. Softw. 2020, 149, 102865. [Google Scholar] [CrossRef]
- Adithiyaa, T.; Chandramohan, D.; Sathish, T. Flower Pollination Algorithm for the Optimization of Stair casting parameter for the preparation of AMC. Mater. Today Proc. 2020, 21, 882–886. [Google Scholar] [CrossRef]
- Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
- Połap, D.; Woźniak, M. Red fox optimization algorithm. Expert Syst. Appl. 2021, 166, 114107. [Google Scholar] [CrossRef]
- Zhang, M.; Wen, G.; Yang, J. Duck swarm algorithm: A novel swarm intelligence algorithm. arXiv 2021, arXiv:2112.13508. [Google Scholar]
- Braik, M.S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685. [Google Scholar] [CrossRef]
- Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
- Chu, S.-C.; Tsai, P.-W.; Pan, J.-S. Cat swarm optimization. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Guilin, China, 7–11 August 2006; pp. 854–858. [Google Scholar]
- Shamsaldin, A.S.; Rashid, T.A.; Al-Rashid Agha, R.A.; Al-Salihi, N.K.; Mohammadi, M. Donkey and smuggler optimization algorithm: A collaborative working approach to path finding. J. Comput. Des. Eng. 2019, 6, 562–583. [Google Scholar] [CrossRef]
- Bolaji, A.L.; Al-Betar, M.A.; Awadallah, M.A.; Khader, A.T.; Abualigah, L. A comprehensive review: Krill Herd algorithm (KH) and its applications. Appl. Soft Comput. 2016, 49, 437–446. [Google Scholar] [CrossRef]
- Wang, G.-G.; Deb, S.; Coelho, L.d.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar]
- Yang, C.; Tu, X.; Chen, J. Algorithm of marriage in honey bees optimization based on the wolf pack search. In Proceedings of the 2007 International Conference on Intelligent Pervasive Computing (IPC 2007), Jeju, Korea, 11–13 October 2007; pp. 462–467. [Google Scholar]
- Oftadeh, R.; Mahjoob, M.J.; Shariatpanahi, M. A novel meta-heuristic optimization algorithm inspired by group hunting of animals: Hunting search. Comput. Math. Appl. 2010, 60, 2087–2098. [Google Scholar] [CrossRef]
- Mucherino, A.; Seref, O. Monkey search: A novel metaheuristic search for global optimization. In Proceedings of the Conference on Data Mining, System Analysis and Optimization in Biomedicine, Gainesvile, FL, USA, 28–30 March 2007; pp. 162–173. [Google Scholar]
- Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014; pp. 86–94. [Google Scholar]
- MiarNaeimi, F.; Azizyan, G.; Rashki, M. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 2021, 213, 106711. [Google Scholar] [CrossRef]
- Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
- Wang, G.G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
- Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014. [Google Scholar] [CrossRef]
- Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
- Zhang, J.; Wang, J.S. Improved whale optimization algorithm based on nonlinear adaptive weight and golden sine operator. IEEE Access 2020, 8, 77013–77048. [Google Scholar] [CrossRef]
- Houssein, E.H.; Saad, M.R.; Hashim, F.A.; Shaban, H.; Hassaballah, M. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 94, 103731. [Google Scholar] [CrossRef]
- Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
- Naik, A.; Satapathy, S.C. Past present future: A new human-based algorithm for stochastic optimization. Soft Comput. 2021, 25, 12915–12976. [Google Scholar] [CrossRef]
- Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
- Shi, Y. Brain storm optimization algorithm. In Proceedings of the International Conference in Swarm Intelligence, Chongqing, China, 12–15 June 2011; pp. 303–309. [Google Scholar]
- Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187. [Google Scholar] [CrossRef]
- Kashan, A.H. League Championship Algorithm (LCA): An algorithm for global optimization inspired by sport championships. Appl. Soft Comput. 2014, 16, 171–200. [Google Scholar] [CrossRef]
- Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181. [Google Scholar] [CrossRef]
- Dehghani, M.; Trojovská, E.; Trojovský, P. Driving Training-Based Optimization: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems. Res. Square 2022. [Google Scholar] [CrossRef]
- Mohamed, A.W.; Hadi, A.A.; Mohamed, A. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar]
- Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
- Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24. [Google Scholar] [CrossRef]
- Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
- Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
- Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
- Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
- Xie, L.; Zeng, J.; Cui, Z. General framework of artificial physics optimization algorithm. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 1321–1326. [Google Scholar]
- Shah-Hosseini, H. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132–140. [Google Scholar]
- Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
- Rabanal, P.; Rodríguez, I.; Rubio, F. Using river formation dynamics to design heuristic algorithms. In Proceedings of the International Conference on Unconventional Computation, Kingston, ON, Canada, 13–17 August 2007; pp. 163–177. [Google Scholar]
- Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
- Moghaddam, F.F.; Moghaddam, R.F.; Cheriet, M. Curved space optimization: A random search based on general relativity theory. arXiv 2012, arXiv:1208.2214. [Google Scholar]
- Formato, R. Central force optimization. Prog. Electromagn. Res. 2007, 77, 425–491. [Google Scholar] [CrossRef]
- Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
- Zheng, Y.-J. Water wave optimization: A new nature-inspired metaheuristic. Comput. Oper. Res. 2015, 55, 1–11. [Google Scholar] [CrossRef]
- Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
- Webster, B.; Bernhard, P.J. A Local Search Optimization Algorithm Based on Natural Principles of Gravitation; Florida Institute of Technology: Melbourne, FL, USA, 2003. [Google Scholar]
- Du, H.; Wu, X.; Zhuang, J. Small-world optimization algorithm for function optimization. In Proceedings of the International Conference on Natural Computation, Xi’an, China, 24–28 September 2006; pp. 264–273. [Google Scholar]
- Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
- Shah-Hosseini, H. The intelligent water drops algorithm: A nature-inspired swarm-based optimization algorithm. Int. J. Bio-Inspired Comput. 2009, 1, 71–79. [Google Scholar] [CrossRef]
- Chuang, C.-L.; Jiang, J.-A. Integrated radiation optimization: Inspired by the gravitational radiation in the curvature of space-time. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 3157–3164. [Google Scholar]
- Hsiao, Y.-T.; Chuang, C.-L.; Jiang, J.-A.; Chien, C.-C. A novel optimization algorithm: Space gravitational optimization. In Proceedings of the 2005 IEEE international Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 10–12 October 2005; pp. 2323–2328. [Google Scholar]
- Javidy, B.; Hatamlou, A.; Mirjalili, S. Ions motion algorithm for solving optimization problems. Appl. Soft Comput. 2015, 32, 72–79. [Google Scholar] [CrossRef]
- Birbil, Ş.İ.; Fang, S.-C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282. [Google Scholar] [CrossRef]
- Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
- Shen, J.; Li, J. The principle analysis of light ray optimization algorithm. In Proceedings of the 2010 Second International Conference on Computational Intelligence and Natural Computing, Washington, DC, USA, 23–24 October 2010; pp. 154–157. [Google Scholar]
- Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
- Abdechiri, M.; Meybodi, M.R.; Bahrami, H. Gases Brownian motion optimization: An algorithm for optimization (GBMO). Appl. Soft Comput. 2013, 13, 2932–2946. [Google Scholar] [CrossRef]
- Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
- Siddique, N.; Adeli, H. Nature-Inspired Computing: Physics-and Chemistry-Based Algorithms; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
- Tanyildizi, E.; Demir, G. Golden sine algorithm: A novel math-inspired algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
- Salem, S.A. BOA: A novel optimization algorithm. In Proceedings of the 2012 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–5. [Google Scholar]
- Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
- Mara, S.T.W.; Norcahyo, R.; Jodiawan, P.; Lusiantoro, L.; Rifai, A.P. A survey of adaptive large neighborhood search algorithms and applications. Comput. Oper. Res. 2022, 146, 105903.
- Pisinger, D.; Ropke, S. Large neighborhood search. In Handbook of Metaheuristics; Springer: New York, NY, USA, 2010; pp. 399–419.
- Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102.
- Feo, T.A.; Resende, M. Greedy randomized adaptive search procedures. J. Glob. Optim. 1995, 6, 109–133.
- Lee, D.-H.; Chen, J.H.; Cao, J.X. The continuous berth allocation problem: A greedy randomized adaptive search solution. Transp. Res. Part E Logist. Transp. Rev. 2010, 46, 1017–1029.
- Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616.
- Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
- Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
- Zitouni, F.; Harous, S.; Belkeram, A.; Hammou, L.E.B. The archerfish hunting optimizer: A novel metaheuristic algorithm for global optimization. Arab. J. Sci. Eng. 2022, 47, 2513–2553.
- Daliri, A.; Asghari, A.; Azgomi, H.; Alimoradi, M. The water optimization algorithm: A novel metaheuristic for solving optimization problems. Appl. Intell. 2022, 1–40.
- Oyelade, O.N.; Ezugwu, A.E.-S.; Mohamed, T.I.A.; Abualigah, L. Ebola optimization search algorithm: A new nature-inspired metaheuristic optimization algorithm. IEEE Access 2022, 10, 16150–16177.
- Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215.
- Eslami, N.; Yazdani, S.; Mirzaei, M.; Hadavandi, E. Aphid-Ant Mutualism: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 201, 362–395.
- Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626.
- Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855.
- Kivi, M.E.; Majidnezhad, V. A novel swarm intelligence algorithm inspired by the grazing of sheep. J. Ambient Intell. Humaniz. Comput. 2022, 13, 1201–1213.
- Pan, J.-S.; Zhang, L.-G.; Wang, R.-B.; Snášel, V.; Chu, S.-C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373.
- Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie dog optimization algorithm. Neural Comput. Appl. 2022, 1–49.
- Emami, H. Stock exchange trading optimization algorithm: A human-inspired method for global optimization. J. Supercomput. 2022, 78, 2125–2174.
- Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050.
- Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702.
- Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
- Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
- Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The colony predation algorithm. J. Bionic Eng. 2021, 18, 674–710.
- Davies, O.; Wannell, J.; Inglesfield, J. The Rainbow; Cambridge University Press: Cambridge, UK, 2006; Volume 37, pp. 17–21.
- Adam, J.A. The mathematical physics of rainbows and glories. Phys. Rep. 2002, 356, 229–365.
- Suzuki, M.; Suzuki, I.S. Physics of rainbow. Phys. Teach. 2010, 12, 283–286.
- Buchwald, J.Z. Descartes’s experimental journey past the prism and through the invisible world to the rainbow. Ann. Sci. 2008, 65, 1–46.
- Zhou, J.; Fang, Y.; Wang, J.; Zhu, L.; Wriedt, T. Rainbow pattern analysis of a multilayered sphere for optical diagnostic of a heating droplet. Opt. Commun. 2019, 441, 113–120.
- Wu, Y.; Li, H.; Wu, X.; Gréhan, G.; Mädler, L.; Crua, C. Change of evaporation rate of single monocomponent droplet with temperature using time-resolved phase rainbow refractometry. Proc. Combust. Inst. 2019, 37, 3211–3218.
- Wu, Y.; Li, C.; Crua, C.; Wu, X.; Saengkaew, S.; Chen, L.; Gréhan, G.; Cen, K. Primary rainbow of high refractive index particle (1.547 < n < 2) has refraction ripples. Opt. Commun. 2018, 426, 237–241.
- Yu, H.; Xu, F.; Tropea, C. Optical caustics associated with the primary rainbow of oblate droplets: Simulation and application in non-sphericity measurement. Opt. Express 2013, 21, 25761–25771.
- Greengard, P.; Rokhlin, V. An algorithm for the evaluation of the incomplete gamma function. Adv. Comput. Math. 2019, 45, 23–49.
- Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84.
- Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506.
- Molga, M.; Smutnicki, C. Test functions for optimization needs. Test Funct. Optim. Needs 2005, 101, 48.
- De Barros, R.S.M.; Hidalgo, J.I.G.; de Lima Cabral, D.R. Wilcoxon rank sum test drift detector. Neurocomputing 2018, 275, 1954–1963.
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
- Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
- Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159.
- Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079.
- Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 372–379.
- Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895.
- Seck-Tuoh-Mora, J.C.; Hernandez-Romero, N.; Lagos-Eulogio, P.; Medina-Marin, J.; Zuñiga-Peña, N.S. A continuous-state cellular automata algorithm for global optimization. Expert Syst. Appl. 2021, 177, 114930.
- Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287.
- Rao, R.V.; Waghmare, G.G. A new optimization algorithm for solving complex constrained design optimization problems. Eng. Optim. 2017, 49, 60–83.
- Mezura-Montes, E.; Coello, C.A.C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473.
- Ali, M.Z.; Reynolds, R.G. Cultural algorithms: A Tabu search approach for the optimization of engineering design problems. Soft Comput. 2014, 18, 1631–1644.
- Ray, T.; Saini, P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng. Optim. 2001, 33, 735–748.
- Parsopoulos, K.E.; Vrahatis, M.N. Unified particle swarm optimization for solving constrained engineering optimization problems. In International Conference on Natural Computation; Springer: Berlin/Heidelberg, Germany, 2005; pp. 582–591.
- Kim, M.-S.; Kim, J.-R.; Jeon, J.-Y.; Choi, D.-H. Efficient mechanical system optimization using two-point diagonal quadratic approximation in the nonlinear intervening variable space. KSME Int. J. 2001, 15, 1257–1267.
- Kaveh, A.; Talatahari, S. Hybrid charged system search and particle swarm optimization for engineering design problems. Eng. Comput. 2011, 28, 423–440.
- Mani, A.; Patvardhan, C. An adaptive quantum evolutionary algorithm for engineering optimization problems. Int. J. Comput. Appl. 2010, 1, 43–48.
- Lisbôa, R.; Yasojima, E.K.; de Oliveira, R.M.S.; Mollinetti, M.A.F.; Teixeira, O.N.; de Oliveira, R.C.L. Parallel genetic algorithm with social interaction for solving constrained global optimization problems. In Proceedings of the 2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), Fukuoka, Japan, 13–15 November 2015; pp. 351–356.
- Kulkarni, A.J.; Tai, K. Solving constrained optimization problems using probability collectives and a penalty function approach. Int. J. Comput. Intell. Appl. 2011, 10, 445–470.
- Dimopoulos, G.G. Mixed-variable engineering optimization based on evolutionary and social metaphors. Comput. Methods Appl. Mech. Eng. 2007, 196, 803–817.
- Sedhom, B.E.; El-Saadawi, M.M.; Hatata, A.Y.; Alsayyari, A.S. Hierarchical control technique-based harmony search optimization algorithm versus model predictive control for autonomous smart microgrids. Int. J. Electr. Power Energy Syst. 2020, 115, 105511.
- Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336.
- Zhang, M.; Luo, W.; Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inf. Sci. 2008, 178, 3043–3074.
- He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422.
- Dos Santos Coelho, L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683.
- Kaveh, A.; Bakhshpoori, T. Water evaporation optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85.
- Gandomi, A.H.; Yang, X.-S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255.
- Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
- Mezura-Montes, E.; Coello, C.A. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans. Evol. Comput. 2005, 9, 1–17.
- Montemurro, M.; Vincenti, A.; Vannucci, P. The automatic dynamic penalisation method (ADP) for handling constraints with genetic algorithms. Comput. Methods Appl. Mech. Eng. 2013, 256, 70–87.
- Mezura-Montes, E.; Coello Coello, C.A.; Velázquez-Reyes, J.; Munoz-Dávila, L. Multiple trial vectors in differential evolution for engineering design. Eng. Optim. 2007, 39, 567–589.
- Wang, L.; Li, L.-P. An effective differential evolution with level comparison for constrained engineering design. Struct. Multidiscip. Optim. 2010, 41, 947–963.
- Coello, C.A.C.; Montes, E.M. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).