1. Introduction
In real life, the optimization problem [
1] can be normally classified as a “black box” model which consists of three main components: input, model, and output as shown in
Figure 1. If any component is unknown, a new problem type arises. When the input and output are known but the model is unknown, the problem is called a modeling problem, where the solution is to find the function that maps the input to the output. This type of problem is commonly encountered in the data mining and machine learning domains, especially in prediction and classification problems.
When some inputs and the model are known, and the target is to enter these input conditions into the model to determine the output, the problem is known as a simulation problem, which arises in engineering design, especially for forecasting. Finally, when the model and the desired output are known and the target is to find the input, the problem is known as an optimization problem. Examples of optimization problems include creating an optimal image quality evaluator [
2], feature selection [
3], and scheduling [
4,
5], additive manufacturing [
6], renewable energy system [
7], etc.
In general, optimization problems normally include a set of decision variables as an input to the objective function as a model, where the desired output is known or can be measured. The main target is to find the optimal values of the decision variables that result in the minimum or maximum value of the objective function [
8]. Based on their value ranges, the alternative combinations of decision variables form a huge search space. The definition of the search space depends solely on the problem characteristics. Problems have either a unimodal or a multimodal search space, as shown in
Figure 2. The complexity of a problem is normally judged based on the ruggedness of its search space and the dimensionality of its solutions. The ruggedness of the search space can be related to the problem constraints, while the solution dimensionality can be related to the problem size.
The constraints of the optimization problem determine the movement through the search space. The earliest methods are based on mathematical theories such as integer/linear programming, the Simplex Method, etc. The main advantage of these methods is that they can find the exact solution by conducting an exhaustive search; however, they are normally impractical for optimization problems in the NP-hard and NP-complete classes, where the required running time grows exponentially with problem size. As a result, heuristic-based approaches emerged to find an approximate solution for the optimization problem in a reasonable time. Heuristic-based techniques are normally problem-dependent, embedding problem-specific knowledge. Examples of such heuristics in the traveling salesman problem are the 2-opt and 3-opt moves. These heuristics can indeed solve optimization problems in a small amount of time, although they work as constraint satisfaction methods with less concern for solution quality.
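As a concrete illustration, a 2-opt move deletes two edges of a tour and reconnects it by reversing the intermediate segment. A minimal Python sketch (the function name is illustrative, not from any standard library):

```python
def two_opt_move(tour, i, k):
    """Return a new tour with the segment tour[i..k] reversed (a 2-opt move)."""
    return tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]

# Reversing the segment between positions 1 and 3 of a 5-city tour:
print(two_opt_move([0, 1, 2, 3, 4], 1, 3))  # [0, 3, 2, 1, 4]
```

A local search repeatedly applies such moves, keeping any reversal that shortens the tour.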
Evolutionary algorithms (EAs) introduced a new metaphor in computation for dealing with optimization problems. EAs imitate a population of individuals competing in an environment with limited resources in order to survive. In EAs, the fitter individuals have a better chance to pass their strong attributes on to the next generations. This is known as the Darwinian survival-of-the-fittest principle of natural selection. EAs are general optimization templates that can be applied to a variety of optimization problems through efficient knowledge-sharing operators such as recombination, mutation, and selection, controlled by carefully initialized parameters, until a “good-enough” individual is obtained. The first-generation EAs are the Genetic Algorithm (GA) [
9], Genetic Programming [
10], Evolution Strategy [
11], and Biogeography-Based Optimizer [
12]. These methods were designed by taking into consideration the type of the problem (i.e., binary, discrete, continuous, permutation, or structured). In the second generation of EAs, a single optimization template is available for almost all optimization problems. These include Differential Evolution (DE) [
13], Particle Swarm Optimization (PSO) [
14].
Nowadays, a plethora of EAs is inspired by natural phenomena related to human behavior, physical wisdom, animal swarm survival strategies, and chemical principles. These methods are surveyed in the literature review section. Their common features can be summarized as follows: (i) they are population-based, (ii) they have an iterative improvement process, (iii) they move toward the optimal solution through special operator(s), (iv) they can explore several search space niches and exploit each niche, (v) they embed problem-specific knowledge by mapping the genotype to the phenotype, and (vi) they can provide a suitable balance between diversification and intensification of the problem search space. The main difference between them is the way they move through the search space via their operators, where their definitions of the search space niches vary.
Due to the complex nature of optimization problems, there is, up to now, no superior EA that can ultimately tackle all optimization problems and excel over all other EAs. This is stated in a pioneering theorem in optimization named No Free Lunch (NFL): there is no single algorithm that performs better than the others in every case, or even for the same problem across different instances [
15]. Therefore, NFL opens the door for new innovations that stem from other natural phenomena and propose other intelligent EAs with the ability to efficiently tackle optimization problems with rugged and huge search spaces.
Although there are many studies in the optimization field, there are still various behaviors in nature that have not been studied yet [
16,
17]. One of these behaviors is the behavior of lemurs when moving to search for food or escape from predators. Lemurs can only be found in Madagascar and the neighboring Comoro Islands, which are located off the coast of Mozambique, Africa.
In this paper, the Lemurs Optimizer (LO) is proposed as a new evolutionary algorithm. LO stems from two lemur locomotor behaviors: leap up and dance-hup. These behaviors are formalized in an optimization context. LO is evaluated using twenty-three standard optimization functions that are widely used in the literature. In addition, some real engineering problems are also used. Initially, the effect of some parameters in LO is studied to decide on the best configuration. Thereafter, a comparative evaluation is conducted against six well-established methods. The results prove the superiority of LO over the other comparative algorithms. For statistical evaluation, the Wilcoxon signed-rank test shows the significance of the LO results. In conclusion, LO is a new optimization algorithm that can be applied efficiently to a large variety of global optimization problems.
This paper’s remaining sections are organized as follows: the literature review of previous nature-inspired evolutionary algorithms is summarized in
Section 2. The inspiration and procedural steps of the LO algorithm are proposed and described in
Section 3. The evaluation of the proposed LO is conducted and the experimental results are analyzed and compared in
Section 4. Finally, in
Section 5, the conclusion is presented, as well as various scenarios for future development.
2. Literature Review
Evolutionary Algorithms (EAs) are random search algorithms inspired by the concept of natural evolution, where this inspiration is re-formulated into a set of optimization operators that combine to form an optimization algorithm. The population of an EA is a random set of solutions used as the starting point of the algorithm. As the generations grow, the genes of the parent individuals are subjected to change or alteration in the process of producing new offspring individuals through recombination and mutation, where the natural selection method utilizes the survival-of-the-fittest principle to select among these offspring. The first natural-evolution-inspired EA is the GA, introduced in the 1960s by John Henry Holland [
18].
Swarm-based algorithms imitate the swarm collaboration behavior of animals. Particle Swarm Optimization (PSO) [
14] is the most popular swarm-based algorithm; it imitates the social behavior of birds. The optimization framework of the PSO algorithm is developed based on the following assumptions. The particles (solutions) fly randomly to explore their environment (search space) and iteratively adjust their positions according to the PSO operators to locate the optimal solution (global best). The best positions located during the flying process toward the optimal position are stored. Other well-known swarm-based optimizers are Ant Colony Optimization (ACO) [
19] and Artificial Bee Colony (ABC) [
20].
Table 1 lists many other swarm optimization algorithms.
Physical-based algorithms imitate the physical phenomena that regulate the universe. Many algorithms fall under this category; for example, Simulated Annealing (SA) is inspired by the annealing process of metallurgy, in which a metal is heated and then slowly cooled to approach the best solution. Other examples of physical-based algorithms are listed in
Table 1.
Finally, the human-based algorithm is another type of optimization algorithm that mimics human behaviors and interactions in societies. An example of a human-based algorithm is the Harmony Search Algorithm (HSA), which is inspired by music players’ interactions with the notes of their instruments as they apply best practices to approach the desired harmony (optimal solution) [
21]. The Fireworks algorithm is another example of human-based algorithms [
22].
To date, there is a wide range of nature-inspired algorithms that offer promising solutions for a diverse range of optimization problems. As mentioned before, there is no single super optimization algorithm that can solve all classes of optimization problems efficiently [
15]. Furthermore, optimization problems characterized by non-linearity and multi-modality are arduous for deterministic algorithms to solve. Therefore, researchers put great effort into developing metaheuristic algorithms with diverse intelligence features from a wide variety of inspiration sources, to deliver algorithms with robust optimization capabilities that can successfully solve complex optimization problems.
Table 1.
Nature-inspired Metaheuristics.
| Optimization Algorithm Type | Algorithms |
|---|---|
| Evolution-based algorithms | Biogeography-Based Optimizer [12], Genetic Programming [10], Evolution Strategy [11], and Genetic Algorithm (GA) [9]. |
| Chemical-based algorithms | Chemical reaction optimization [23]. |
| Human-based algorithms | Hill Climbing (HC) [24], Coronavirus herd immunity optimizer (CHIO) [25], Fireworks algorithm [22], Group search optimizer [26], Harmony Search Algorithm (HSA) [21], Mine blast algorithm [27], Seeker optimization algorithm (SOA) [28], Social-based algorithm (SBA) [29], Tabu search (TS) [30], and Wisdom of artificial crowds (WAC) [31]. |
| Physical-based algorithms | Big bang-big crunch (BBBC) [32], Charged system search (CSS) [33], Electromagnetism-like mechanism (EM) [34], Equilibrium optimizer (EO) [35], Gravitational search algorithm (GSA) [36], Henry gas solubility optimization (HGSO) [37], Water cycle algorithm (WCA) [38], Multi-verse optimizer (MVO) [39], and Sine cosine algorithm (SCA) [40]. |
| Swarm-based algorithms | Ant colony optimization (ACO) [19], Ant lion optimizer (ALO) [41], Artificial bee colony (ABC) [20], Artificial fish-swarm algorithm (AFSA) [42], Bat algorithm (BA) [43], Bird mating optimizer (BMO) [44], Butterfly optimization algorithm (BOA) [45], Cat swarm optimization algorithm (CSOA) [46], Crow search algorithm (CSA) [47], Cuckoo search (CS) [48], Chicken swarm optimization (CSO) [49], Dragonfly algorithm (DA) [50], Elephant search algorithm (ESA) [51], Firefly algorithm [52], Flower pollination algorithm (FPA) [53], Salp Swarm Algorithm (SSA) [54], Moth-flame optimization algorithm (MFO) [55], Monarch butterfly optimization (MBO) [56], Grey wolf optimizer (GWO) [57], Fruit fly optimization algorithm (FOA) [58], Glowworm swarm optimization [59], Harris hawks optimization [60], Krill herd algorithm (KHA) [61], PSO [14], Red deer algorithm [62], Pelican optimization algorithm [63], Enhanced marine predators algorithm (LEO-MPA) [64], and Whale optimization algorithm (WOA) [65]. |
3. Lemurs Optimizer (LO)
In this section, the inspiration for the LO algorithm is first presented. After that, the mathematical model and the LO algorithm are discussed in detail.
3.1. Inspiration
Lemurs are classified as prosimian primates, a group that includes all primates that are neither monkeys nor apes [
66]. Lemurs come in a wide diversity of species, but there are just a few individuals of each species. Just a small portion of the world is home to these primates, and many species have small, dwindling populations. Lemurs can only be found in Madagascar and the neighboring Comoro Islands, which are located off the coast of Mozambique, Africa. They live in a variety of environments: mountains, wetlands, rain forests, spiny forests, and dry deciduous forests. The indri is the largest lemur species: it can reach a weight of 15.5 to 22 pounds (7 to 10 kg) and a length of 24 to 35 inches (60 to 90 cm). Madame Berthe’s mouse lemur is the tiniest of the lemurs, measuring 3.5 to 4 inches (9 to 11 cm) in length (not including the tail) [
67].
Lemurs are highly social animals that live in groups known as troops. According to National Geographic, the ring-tailed lemur’s troop is led by a dominant female and can consist of six to thirty individuals. The majority of lemurs spend their waking hours in trees. Lemurs groom each other when they are not feeding. Lemurs communicate in two different ways: by vocalization and by scent marking. Lemurs interact by emitting low growls; sometimes this is a warning to flee, and sometimes a warm welcome. Soft purrs are used by mothers to communicate with their offspring, which also aids in the formation of strong bonds.
The pitch of a Lemur’s shrill scream is extremely high. This is a warning signal that can be received from a long distance. This may be a territorial symbol, warning other Lemurs to stay away. Other times, it’s a way of alerting the family that they are in danger and should seek shelter.
Lemurs have been observed meowing like cats. This form of sound is used to summon the family to a central position or to flee from predators such as the fossa (when the risk is very high). If they have spread out to search for food, this might be a way to get them all together for nesting.
Lemurs use their scent glands to convey their location. To locate food, the family groups may disperse. This may also assist dominant females in determining whether or not an alien has entered their family group and poses a challenge.
Lemurs have a wide range of locomotor behaviors. For the LO algorithm, we used two of them as inspiration: leap up and dance-hup. In a leap up, the lemurs jump into the air and sit upright on a nearby branch, both hands and feet grasping the trunk closely. They can jump up to 10 m (33 ft) from tree trunk to tree trunk in a matter of seconds. The dance-hup occurs when the space between trees becomes too great: lemurs descend to the ground and cross distances of more than 100 m (330 ft) by standing upright and jumping horizontally with arms extended to the side, waving up and down from chest to head height, ostensibly for balance [
68].
Figure 3 illustrates conceptual models of these two key lemur locomotor behaviors. The two stages of optimization using metaheuristics, exploration and exploitation, are very similar to these two locomotor behaviors. The primary objective of the exploration phase is for lemurs to dance-hup across various areas to locate the best nearby lemur location in the search space. In contrast, lemurs in the leap up move toward the global best lemur location in one direction, which is useful during the exploitation phase. The following subsection explains how the LO algorithm works conceptually and mathematically.
3.2. Mathematical Model of the Lemur Optimizer Algorithm
The search process is divided into two phases in a population-based algorithm, as described in the previous section: exploration versus exploitation. In the exploration phase, we utilize the dance-hup behavior. The leap-up behavior, on the other hand, aids LO in exploiting the search space. We consider each solution to be a lemur, with each element of the vector representing one of the lemur’s coordinates. We also assign to each solution a best location that is related to the solution’s fitness value. As a result, the lemurs change their position vectors and either dance-hup toward the best nearest lemur or leap up toward the global best lemur.
Figure 4 illustrates the conceptual model of dance-hup and leap-up for the proposed algorithm.
The set of lemurs is represented in a matrix since the LO algorithm is a population-based algorithm. To do this, the following procedures are carried out. Assuming that we have the population defined as the following matrix:

$$
T=\begin{bmatrix}
x_1^1 & x_1^2 & \cdots & x_1^d \\
x_2^1 & x_2^2 & \cdots & x_2^d \\
\vdots & \vdots & \ddots & \vdots \\
x_s^1 & x_s^2 & \cdots & x_s^d
\end{bmatrix}
\tag{1}
$$

where T denotes the set of lemurs in a population matrix of size s × d, d denotes the number of decision variables, and s denotes the number of candidate solutions.
Typically, the decision variable j in solution i is randomly generated as follows:

$$
x_i^j = lb_j + \big(ub_j - lb_j\big) \times rand(0,1)
\tag{2}
$$

where the function rand(0,1) produces a uniformly distributed random number in the range [0, 1], and the lower and upper bound limits of variable j are denoted by lb_j and ub_j, respectively.
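Following Equation (2), the population matrix can be filled by drawing each decision variable uniformly within its bounds. A minimal Python sketch of this initialization step (the paper’s reference code is in MATLAB; names here are illustrative):

```python
import random

def init_population(s, d, lb, ub):
    """Build an s-by-d population matrix: decision variable j of each
    candidate solution is drawn uniformly from [lb[j], ub[j]], as in Equation (2)."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(s)]

# 30 candidate lemurs with 5 decision variables each, bounded by [-100, 100]:
population = init_population(s=30, d=5, lb=[-100] * 5, ub=[100] * 5)
```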
A lemur with a worse fitness value tends to adopt decision variables from a lemur with a better fitness value. This means that the overall fitness of the lemur population improves over the iterations. Lemurs are sorted based on their fitness values in each iteration, with one chosen as the global best lemur (gbl) and, for each lemur, one chosen as its best nearest lemur (bnl).
In this direction, the decision variable j in solution i is assigned a value at each iteration using two options: (a) the value is moved toward the best nearest lemur, or (b) the value is moved toward the global best lemur. This is formulated as shown in Equation (3):

$$
x_i^j(t+1)=
\begin{cases}
x_i^j(t)+\left|x_i^j(t)-x_{bnl}^j(t)\right|\times(rand-0.5)\times 2 & rand < FRR \\
x_i^j(t)+\left|x_i^j(t)-x_{gbl}^j(t)\right|\times(rand-0.5)\times 2 & rand \geq FRR
\end{cases}
\tag{3}
$$

where x_i^j(t) indicates the j-th decision variable value of the current lemur i, x_{bnl}^j(t) indicates the j-th value of the best nearest lemur to the current lemur i, x_{gbl}^j(t) indicates the j-th value of the global best lemur, the free risk rate (FRR) indicates the risk rate of all the lemurs in the troop, and rand represents a random number in [0, 1]. Based on this formulation, it can be concluded that the probability FRR is the main coefficient of the LO algorithm. The formula of this coefficient is given in:
$$
FRR = HRR - t \times \frac{HRR - LRR}{Max_{iter}}
\tag{4}
$$

where LRR and HRR represent constant pre-defined low-risk-rate and high-risk-rate values, Max_iter is the maximum number of iterations, and t denotes the current iteration.
Figure 5 illustrates the conceptual model of the high-risk rate and low-risk rate for the proposed algorithm. Note that the purpose of LRR and HRR is to determine the minimum and the maximum value for FRR.
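Equations (3) and (4) can be sketched together in Python: the free risk rate decays linearly from HRR down to LRR, and each decision variable moves toward either the best nearest lemur (case one) or the global best lemur (case two). This is a hedged sketch consistent with the formulation above, not the authors’ MATLAB implementation:

```python
import random

def free_risk_rate(t, max_iter, lrr=0.5, hrr=0.7):
    """Equation (4): FRR decays linearly from HRR at t = 0 to LRR at t = max_iter."""
    return hrr - t * (hrr - lrr) / max_iter

def update_variable(x_ij, bnl_j, gbl_j, frr):
    """Equation (3): move decision variable j of lemur i toward the best nearest
    lemur (dance-hup, case one) or the global best lemur (leap up, case two)."""
    if random.random() < frr:  # case one: exploration via dance-hup
        return x_ij + abs(x_ij - bnl_j) * (random.random() - 0.5) * 2
    return x_ij + abs(x_ij - gbl_j) * (random.random() - 0.5) * 2  # case two: leap up
```

Early in the run FRR is close to HRR, so case one dominates; as FRR decays toward LRR, more updates fall into case two.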
Algorithm 1 shows the LO algorithm’s pseudocode, while the general steps are presented in
Figure 6.
Algorithm 1: The LO algorithm’s pseudocode.
- 1: Set up the LO parameters: number of iterations, number of dimensions (Dim), number of solutions, Lower Bound (LB), Upper Bound (UB), Low-Risk Rate (LRR), and High-Risk Rate (HRR).
- 2: Generate the lemurs population randomly.
- 3: while the current iteration does not equal the number of iterations do
- 4:   Evaluate the objective function for all lemurs.
- 5:   Calculate the free risk rate (FRR) using Equation (4).
- 6:   Update the global best lemur (gbl).
- 7:   for each lemur indexed by i do
- 8:     Update the best nearest lemur (bnl).
- 9:     for each decision variable in lemur i indexed by j do
- 10:      Set rand to a random number in [0, 1].
- 11:      if rand < FRR then
- 12:        Use Equation (3) case number one to update decision variable j.
- 13:      else
- 14:        Use Equation (3) case number two to update decision variable j.
- 15:      end if
- 16:    end for
- 17:  end for
- 18: end while
- 19: Return the global best lemur.
The LO algorithm begins by generating a swarm of lemurs randomly. At each iteration, the decision variables of the best nearest lemur, which has a better fitness value, are transferred toward the lemurs with worse fitness values via dance-hup.
In the LO algorithm, creating a set of random lemurs is the first step in the optimization process. The FRR value starts close to HRR, which means that a lemur tends to move toward its best nearest lemur via dance-hup. During the LO execution, the FRR value decreases toward LRR, which means that a lemur tends to move toward the global best lemur via leap up. This procedure is iterated until an end condition is met.
The number of lemurs, iterations, and decision variables affects the algorithm’s computational complexity. To summarize, the proposed algorithm has a computational complexity of O(Max_iter × s × d), where s is the number of lemurs and d is the number of decision variables.
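The pseudocode of Algorithm 1 can be condensed into a short Python sketch. This is a hedged simplification (the official implementation is in MATLAB): the best nearest lemur is approximated here by the better-ranked neighbor in the sorted population, and boundary clamping is assumed:

```python
import random

def lemurs_optimizer(obj, d, lb, ub, n=30, max_iter=500, lrr=0.5, hrr=0.7):
    """Minimize obj over [lb, ub]^d following Algorithm 1 (illustrative sketch)."""
    pop = [[random.uniform(lb, ub) for _ in range(d)] for _ in range(n)]
    for t in range(max_iter):
        frr = hrr - t * (hrr - lrr) / max_iter           # Equation (4)
        pop.sort(key=obj)                                # ascending objective (minimization)
        gbl = pop[0][:]                                  # global best lemur
        for i in range(n):
            bnl = pop[max(i - 1, 0)][:]                  # best nearest lemur (better neighbor)
            for j in range(d):
                if random.random() < frr:                # case one: dance-hup
                    pop[i][j] += abs(pop[i][j] - bnl[j]) * (random.random() - 0.5) * 2
                else:                                    # case two: leap up
                    pop[i][j] += abs(pop[i][j] - gbl[j]) * (random.random() - 0.5) * 2
                pop[i][j] = min(max(pop[i][j], lb), ub)  # clamp to the bounds
    return min(pop, key=obj)

# Example: minimizing the sphere function; the result is typically close to the origin.
best = lemurs_optimizer(lambda x: sum(v * v for v in x), d=5, lb=-10.0, ub=10.0)
```

Note that the top-ranked lemur’s updates are zero (its bnl and gbl are itself), so the best solution found so far is never lost.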
4. Experiments and Results
The proposed LO’s performance is evaluated in this section using 23 standard test functions. These test cases are minimization problems with various complexities and dimensional search spaces. The test functions are divided into three categories: unimodal, multimodal, and fixed-dimension multimodal functions. The primary features of the three categories are presented in
Table 2,
Table 3 and
Table 4. The tables show the function name, the function’s mathematical model, the range of the search space’s boundaries, the function’s dimension (n), and the function’s optimum solution. The programming language MATLAB (version 9.12.0) is used to conduct the experiments, and the code is available in the “Lemurs-Optimizer” GitHub repository, available online:
https://github.com/ammarabbasi/Lemurs-Optimizer, accessed on 28 September 2022.
As mentioned previously, the test functions used in this section are divided into unimodal, multimodal, and fixed-dimension multimodal functions. The multimodal and fixed-dimension multimodal categories are similar but differ in how the number of decision variables is defined. The fixed-dimension test functions provide various search spaces compared with the multimodal test functions; however, the number of decision variables cannot be tuned in their mathematical models. In this evaluation section, the unimodal functions are utilized to investigate the proposed LO’s exploitation ability, whereas the multimodal functions are utilized to examine and evaluate the exploration side of LO [
37].
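For reference, two representative members of these categories can be written directly from their standard definitions: the unimodal sphere function (commonly F1 in this suite) and the multimodal Rastrigin function (commonly F9), both with a global minimum of 0 at the origin:

```python
import math

def sphere(x):
    """Unimodal: f(x) = sum(x_j^2); minimum 0 at x = 0."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: f(x) = sum(x_j^2 - 10*cos(2*pi*x_j) + 10); minimum 0 at x = 0."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

print(sphere([0.0] * 3), rastrigin([0.0] * 3))  # 0.0 0.0
```

The cosine term gives Rastrigin a regular grid of local minima, which is exactly what makes it a useful probe of exploration capability.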
4.1. Comparative Analysis with the Swarm-Based Optimization Algorithms
The LO is compared with six robust optimization algorithms, namely ABC, SSA, SCA, BA, FPA, and JAYA, to investigate and prove the proposed LO’s robust performance. The population size and number of iterations used for all compared algorithms are 30 and 100,000, respectively. In addition, the low-risk rate and high-risk rate in LO are set to 0.5 and 0.7, respectively.
4.2. Evaluation of Exploitation Capability (Functions F1–F7)
Given that one global optimum exists for each of the unimodal functions (F1–F7), they are used to investigate the compared algorithms’ exploitation capability.
Table S1 demonstrates that the proposed LO shows high performance in optimizing these functions and reducing their values. In particular, the proposed LO obtains the best results for F1 and F2 in terms of best, worst, and mean values, and achieves the second-best results on most of the other functions (F3–F7). Accordingly, it is notable that the proposed LO achieves the best exploitation capability.
4.3. Evaluation of Exploration Capability (Functions F8–F23)
The multimodal functions have several local optima, where the number of local optima increases with the problem size (i.e., the number of decision variables). Accordingly, the multimodal functions play the primary role in evaluating an optimization algorithm’s exploration capability. The results presented in
Table S2 demonstrate the superiority of the proposed LO over the compared algorithms: LO achieves the best Best values in ten functions, including F8, F10, F11, F14, F21, and F23, and the best Worst and Mean values in ten functions, including F10, F14, F16–F19, and F23. These results prove LO’s robust performance in managing its exploration capability, which significantly leads LO towards the global optimum. The detailed results can be found in the
Supplemental Information in Tables S1 and S2.
4.4. Analysis of Convergence Behavior
In the optimization process of LO, the search agents share their information and exploit the best lemur to scan the search space effectively and reach the most promising regions. In the early stages of optimization, the search agents change positions abruptly and then gradually converge. Researchers in [
69] stated that such behavior in other population-based algorithms can lead to achieving the desired convergence. Convergence curves of LO, ABC, SSA, BAT, FPA, and JAYA are plotted in
Figure 7, based on the average best-so-far in each iteration over 30 runs, for some of the unimodal and multimodal benchmark functions in this study. It can be observed that the convergence trend of LO is competitive with other state-of-the-art meta-heuristic algorithms.
Moreover, the convergence rate of LO based on best and average fitness is plotted in
Figure 8 and
Figure 9. There is a descending pattern in the average fitness as the iterations grow in the second and fourth columns of these figures. The successful convergence of LO in optimizing the benchmark functions is owing to its search capabilities in terms of leap up (exploitation) and dance-hup (exploration), which attractively improve the trajectory of the lemurs toward optimality.
Further, the convergence curves in
Figure 8 and
Figure 9 demonstrate that the LO algorithm produces better quality solutions throughout the optimization iterations.
Figure 10 illustrates the average rankings of the proposed algorithm and the other comparative algorithms according to the averages of the results. These rankings are calculated using Friedman’s test. It should be noted that the lower the ranking value, the better the performance. It can be seen that LEO-MPA is placed first by obtaining the lowest ranking, while the proposed LO is ranked third.
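The average rankings behind Friedman’s test are obtained by ranking the algorithms separately on each benchmark function and then averaging each algorithm’s ranks across all functions. A minimal Python sketch with illustrative data (not the paper’s results; rank ties are ignored for simplicity):

```python
def average_ranks(scores):
    """scores[f][a] = mean result of algorithm a on function f (lower is better).
    Returns the average rank of each algorithm across all functions."""
    n_funcs, n_algs = len(scores), len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])  # best algorithm first
        for rank, a in enumerate(order, start=1):
            totals[a] += rank
    return [t / n_funcs for t in totals]

# Three algorithms on two functions; the first algorithm wins both:
print(average_ranks([[0.1, 0.5, 0.3], [0.2, 0.9, 0.4]]))  # [1.0, 3.0, 2.0]
```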
By following [
55], the signed Wilcoxon statistical test [
70] was used to determine if there is a significant improvement between the LO algorithm and the other comparative algorithms. For each algorithm, the best results of 30 runs are used in the Wilcoxon signed rank with
p value equal to 0.05. The proposed LO’s improvements were tested using this test to see whether they happened by chance or were statistically significant. The
p value was determined using the Wilcoxon signed-rank test. Two hypotheses, the null hypothesis and the alternative hypothesis, were considered in our experiment. The null hypothesis states that the mean values of LO and the other algorithms do not differ significantly (i.e., “−”). The alternative hypothesis, on the other hand, states that the mean values of LO and the other algorithms differ significantly (i.e., “+”). The pair-wise comparisons of LO with all other algorithms are shown in
Table 5 and
Table 6, indicating whether the null or the alternative hypothesis is accepted. The proposed algorithm outperformed the other algorithms with the smallest
p value. The detailed results can be found in the
Supplemental Information in Table S3.
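The Wilcoxon signed-rank statistic itself is straightforward to compute: rank the absolute paired differences and sum the ranks of the positive and negative ones. A minimal Python sketch (zero differences are dropped and rank ties are not averaged; in practice a library routine such as SciPy’s scipy.stats.wilcoxon would also supply the p value):

```python
def wilcoxon_signed_rank(a, b):
    """Return (W+, W-): sums of ranks of positive and negative differences
    between the paired samples a and b (zero differences are dropped)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(diffs, key=abs)  # rank by absolute difference, smallest first
    w_plus = sum(r for r, d in enumerate(ranked, start=1) if d > 0)
    w_minus = sum(r for r, d in enumerate(ranked, start=1) if d < 0)
    return w_plus, w_minus

print(wilcoxon_signed_rank([1.2, 0.7, 1.5, 2.0], [1.0, 1.0, 1.0, 1.0]))  # (8, 2)
```

A strongly lopsided split between W+ and W− (relative to n(n+1)/2) is what yields a small p value.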
Figure 11 shows the time-based average rankings of the proposed LO and the other comparative algorithms according to the average runtime over all 30 runs. This figure clearly shows that LEO-MPA takes more time to finish than the other algorithms. In contrast, the proposed LO is ranked first.
The results of this section revealed various characteristics of the proposed LO algorithm. The location updating mechanism of lemurs using Equation (
3) case number one is responsible for LO’s high exploration ability. During the initial steps of the iterations, this equation allows lemurs to move around the best nearest lemur. The rest of the iterations emphasize high exploitation and convergence, which come from Equation (
3) case number two. This equation enables the lemurs to quickly reposition themselves around, or step towards, the current global best lemur. It is worth mentioning here that the exploration and exploitation phases are completed separately, and LO exhibits fast convergence and high local-optima avoidance at the same time. In addition, LO utilizes one formula to manage these two phases and update the positions of the lemurs. The performance of LO on real engineering problems is verified in the following section.
4.5. Engineering Optimization Problems in the Real World
In this section, three well-known real-world problems, presented at the 2011 IEEE Congress on Evolutionary Computation (IEEE CEC 2011), are addressed to evaluate the efficiency of the proposed LO algorithm [
71]. In this regard, the transmission network expansion planning (TNEP), bifunctional catalyst blend optimal control (BCBOC), and parameter estimation for frequency-modulated (FM) sound waves (PEFMSW) problems are utilized.
Table 7 shows the characteristics of these real-world problems in terms of the problem dimension and the value range of decision variables.
It should be noted that the parameter settings for the proposed LO algorithm are as follows: the number of runs is 30, the number of iterations is 150,000, and the population size is 30. These settings follow the IEEE CEC 2011 rules [
71]. These settings are suggested to make a fair comparison with thirteen other comparative algorithms. These comparative algorithms include CHIO (i.e., coronavirus herd immunity optimizer) [
25], APS (i.e., adaptive population-based simplex algorithm) [
72], ADE (i.e., adaptive differential evolution algorithm) [
73], CDASA (i.e., continuous differential ant-stigmergy Algorithm) [
74], DE (i.e., differential evolution) [
75], DE-RHC [
69], GA-MPC [
76], HDE [
77], HMA (i.e., hybrid EA-DE memetic algorithm) [
78], IMO (i.e., intellects-masses optimizer) [
79], ABC (i.e., artificial bee colony) [
80], AABC (i.e., accelerated artificial Bee colony algorithm) [
80], and KHABC [
81].
4.6. Transmission Network Expansion Planning (TNEP) Problem
The TNEP problem entails finding the lowest cost transmission assets that can be installed in a power system to meet predicted demand over a specified time horizon [
82]. Because TNEP has a long-term effect on system operation, it is a key strategic decision in power systems. Additionally, TNEP is a non-linear, non-convex, and multi-modal optimization problem that is classified as NP-hard in terms of computational complexity. Different models and strategies for solving the TNEP problem have been developed in the existing literature. To address the TNEP problem in its various manifestations, heuristic and metaheuristic techniques have been developed. While heuristic techniques are simple to use, they typically become stuck in locally optimal solutions. Metaheuristic techniques are more efficient search algorithms that are capable of finding better solutions than conventional heuristic techniques, at the cost of increased processing time.
Table 8 compares the performance of the proposed LO algorithm with eleven different comparison algorithms. In this table, the experimental results of each comparative algorithm are summarized in terms of the best, mean, worst, median, and standard deviation across 30 runs. From
Table 8, it is clear that the proposed LO algorithm performance is similar to all other algorithms by obtaining the same results (i.e., it reached the optimal solution).
4.7. The Bifunctional Catalyst Blend Optimal Control Problem
The experimental results of the proposed LO algorithm, as well as the results of ten of the comparative algorithms, are recorded in
Table 9. It can be observed from the results in
Table 9 that the performance of the LO algorithm is similar to other algorithms by obtaining the same best results.
4.8. Parameter Estimation for Frequency-Modulated (FM) Sound Waves
The experimental results of running the proposed LO algorithm are recorded in
Table 10. In the same table, these findings are compared to those of thirteen different comparative algorithms. From
Table 10, it can be seen that the LO algorithm performs similarly to five of the other compared algorithms in terms of attaining optimal outcomes when solving such problems. Furthermore, the LO algorithm obtained the same best results in all runs, which is similar to the GA-MPC algorithm. Thus, the effectiveness of the proposed LO in handling complex optimization problems is demonstrated.
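For reference, the PEFMSW objective fits a six-parameter frequency-modulated synthesizer to a target wave. The sketch below assumes the standard CEC 2011 definition of this benchmark (θ = 2π/100, target parameter vector (1, 5, 1.5, 4.8, 2, 4.9), t = 0, …, 100); these specifics come from the benchmark suite, not from this paper:

```python
import math

THETA = 2 * math.pi / 100

def fm_wave(p, t):
    """Nested frequency-modulated wave with parameters (a1, w1, a2, w2, a3, w3)."""
    a1, w1, a2, w2, a3, w3 = p
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

TARGET = (1.0, 5.0, 1.5, 4.8, 2.0, 4.9)

def pefmsw(p):
    """Sum of squared errors between the candidate and the target FM waves."""
    return sum((fm_wave(p, t) - fm_wave(TARGET, t)) ** 2 for t in range(101))

print(pefmsw(TARGET))  # 0.0
```

The global optimum is 0, attained only when the candidate parameters reproduce the target wave exactly, which makes this a highly multimodal test for any optimizer.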
All of the previous experiments and observations support the proposed algorithm’s ability to solve complex problems with unknown search spaces. As a result, this efficient optimization algorithm is provided to be utilized for optimization problems in various fields. It is worth mentioning that other well-defined optimization problems, such as text document clustering [
83,
84], EEG signals denoising [
85,
86,
87], feature selection [
88,
89,
90,
91], and scheduling problems in smart home [
92,
93,
94,
95] can be handled by the proposed algorithm.