Article

An Enhanced Slime Mould Optimizer That Uses Chaotic Behavior and an Elitist Group for Solving Engineering Problems

by Shahenda Sarhan 1,2,*, Abdullah Mohamed Shaheen 3, Ragab A. El-Sehiemy 4 and Mona Gafar 5,6
1 Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Faculty of Computers and Information Sciences, Mansoura University, Mansoura 35516, Egypt
3 Department of Electrical Engineering, Faculty of Engineering, Suez University, Suez 43533, Egypt
4 Department of Electrical Engineering, Faculty of Engineering, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
5 Department of Computer Science, College of Science and Humanities in Al-Sulail, Prince Sattam Bin Abdulaziz University, Kharj 16278, Saudi Arabia
6 Machine Learning and Information Retrieval Department, Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(12), 1991; https://doi.org/10.3390/math10121991
Submission received: 5 May 2022 / Revised: 24 May 2022 / Accepted: 25 May 2022 / Published: 9 June 2022

Abstract:
This article suggests a novel enhanced slime mould optimizer (ESMO) that incorporates a chaotic strategy and an elitist group for handling various mathematical optimization benchmark functions and engineering problems. In the newly suggested solver, a chaotic strategy was integrated into the movement updating rule of the basic SMO, whereas the exploitation mechanism was enhanced via searching around an elitist group instead of only the global best dependence. To handle the mathematical optimization problems, 13 benchmark functions were utilized. To handle the engineering optimization problems, the optimal power flow (OPF) was handled first, where three studied cases were considered. The suggested scheme was scrutinized on a typical IEEE test grid, and the simulation results were compared with the results given in the former publications and found to be competitive in terms of the quality of the solution. The suggested ESMO outperformed the basic SMO in terms of the convergence rate, standard deviation, and solution merit. Furthermore, a test was executed to authenticate the statistical efficacy of the suggested ESMO-inspired scheme. The suggested ESMO provided a robust and straightforward solution for the OPF problem under diverse goal functions. Furthermore, the combined heat and electrical power dispatch problem was handled by considering a large-scale test case of 84 diverse units. Similar findings were drawn, where the suggested ESMO showed high superiority compared with the basic SMO and other recent techniques in minimizing the total production costs of heat and electrical energies.

1. Introduction

The term “optimization process” relates to the procedure of determining the optimal settings for certain system characteristics in order to complete the design, operation, or planning tasks at the lowest possible cost [1]. Practical implementations and issues in artificial intelligence and machine learning are, in general, unconstrained or discrete [2]. As a result, finding optimal alternatives using standard mathematically based programming approaches is difficult [3]. Therefore, numerous optimization techniques were created in recent years to enhance the efficiency of several systems and minimize computing costs. Conventional optimization techniques have several flaws and restrictions, such as convergence to local optima and an undefined search space. Furthermore, they only provide a single-based solution [4].
On the other hand, effective optimization techniques must be applied when solving real-world optimization problems. In electrical power systems, they are necessary for the effective integration [5], analysis, control, and administration of modern design network processes [6]. The optimal power flow (OPF) is a multi-modal, non-linear, non-differentiable, non-convex, and constrained minimization problem that involves fulfilling a combination of operating, technological, and security constraints, as well as picking optimal control variable values. OPF aims to lower energy generation and distribution operating costs by adjusting the control variables while keeping economic, technical, and environmental considerations in view [7].
Moreover, from the perspective of economic and environmental conservation, the combined heat and electrical power dispatch (CHEPD) problem has piqued the attention of several scholars. Numerous approaches to the CHEPD issue have been developed over time, including computational approaches and meta-heuristic methods. Thermal power plants use fossil fuels, such as gas, coal, or oil, to generate electricity. During the production of electricity, high-temperature heat is used to create steam power, whereas low-temperature heat is wasted via cooling systems, flue gas, and other means. As a result, a thermal power plant’s efficiency is reduced to 50–60%. Furthermore, several forms of pollution, such as sulfur and nitrogen oxides and carbon dioxide, are produced during combustion, contributing to global warming and harming the ecological landscape. Dispatching cogeneration systems, in which heat and power producers generate energy at the lowest possible cost while minimizing pollutants, is referred to as the CHEPD problem [8].
Meta-heuristic algorithms have received a lot of interest and have been used to manage a wide range of optimization issues. They have common aspects, including the search strategy, which comprises two stages [9], the first of which is termed diversification (exploration), and the second is called intensification (exploitation). The meta-heuristic method creates randomized operations in the first stage to investigate various searching space areas. The optimization approach then attempts to find the best solution in the searching area in the second stage. To prevent entrapment at an optimum, an efficient meta-heuristic optimization technique must strike a balance between the exploration and exploitation phases.
Physics-based algorithms, evolutionary algorithms, swarm intelligence algorithms, and human-based algorithms are the four main types of meta-heuristic algorithms. Physics-based algorithms are motivated by physical rules, such as the Henry gas solubility algorithm [4] and the equilibrium algorithm (EA) [10,11,12,13]. Evolutionary algorithms were developed by modeling biological evolutionary characteristics, such as mutations, crossovers, and selections, as described in [14,15] regarding the genetic algorithm and in [16] regarding the evolution strategy. Swarm intelligence algorithms, such as jellyfish search optimization (JFSO) [17], the grasshopper optimizer (GO) [18], the heap-based technique (HT) [19], the whale optimization algorithm (WOA) [20], manta rays foraging optimization (MRFO) [21], the marine predators algorithm (MPA) [22], particle swarm optimization [23], and the artificial bee colony [24], are a series of techniques influenced by swarming and animal group behavior.
A slime mould optimizer (SMO) is a novel technique that was developed by considering the spreading and foraging behavior of slime mould and presented in 2020 by Li et al. [25]. The basic SMO has a unique mathematical model and very competitive results, along with a simple code structure. The gradient-free SMO method simulates the positive and negative feedback of the propagation wave of slime mould. It has been used to address various real engineering optimization issues because of its high global searching ability and resilience, such as economic emission dispatch [26], optimal power flow [27], the operation of cascade hydropower stations [28], demand estimation of urban water resources [29], and design optimization problems [30]. In addition, other recent versions of the SMO have been presented, such as the leader SMO (LSMO) [31], equilibrium SMO (EQSMO) [32], adaptive opposition SMO (AOSMO) [33], and fitness-distance-balance SMO (FDBSMO) [34]. Additionally, different statistical comparisons of competing meta-heuristic optimizers were reviewed [35,36], where numerous runs of different optimizers can be effectively compared. Both studies stated the importance of the statistical comparison of stochastic optimizers and displayed the significance of the Friedman ranking test, the Wilcoxon rank-sum test, and the convergence rates in terms of the average, median, and best obtained results.
However, the SMO still has several drawbacks, such as low computational precision and premature convergence on specific benchmark problems [29]. Hence, in this study, an enhanced slime mould optimizer (ESMO) that uses chaotic behavior and an elitist group was proposed for solving engineering problems. The proposed ESMO provided two modifications to the standard SMO to enhance its performance. First, to enhance the exploitation searching feature, an elitist group was created and updated to store the best individuals in each iteration. Second, to enhance the exploration searching feature, a logistic map that uses chaotic behavior was designed to introduce a highly stochastic character into the search. The main contributions proposed in this study are listed as follows:
  • A chaotic logistic mapping and an elitist group were combined with SMO to formulate a novel ESMO with better performance.
  • The standard SMO and the proposed ESMO were applied to several benchmark functions and different practical engineering problems, including the OPF in power systems and the CHEPD combined heat and power systems.
  • When handling different uni-modal and multi-modal functions, the proposed ESMO provided better performance than the original SMO and miscellaneous recent algorithms.
  • When handling the OPF, the proposed ESMO demonstrated superiority over several reported techniques in minimizing the fuel costs, the losses, and the pollutant emissions.
  • When handling the CHEPD problem, the proposed ESMO achieved the minimum total production costs against several reported techniques.
  • Moreover, better robustness and stability were demonstrated by the proposed ESMO compared with different recent SMO versions.

2. Enhanced Slime Mould Optimizer

2.1. Standard Slime Mould Optimizer

The slime mould optimizer (SMO) is a novel optimizer that relies on the oscillation pattern of slime mould in nature. It has a distinctive computational framework that uses dynamic weights to imitate the positive and negative feedback of the slime mould propagation wave, which forms the optimal route for reaching food [25]. An initial SMO population of n individuals is used for every d-dimensional optimizing task. Equation (1) initializes each member in the population as a vector with d entries.
$Y_j(0) = Y_{\min} + \mathrm{rand}(0,1) \cdot \left[ Y_{\max} - Y_{\min} \right], \quad j = 1:n$  (1)
where Ymin and Ymax are the solutions representing the control variables’ minimum and maximum bounds.
In the standard SMO, there are two stages, namely, approaching food and wrapping food [37]. In the first stage, slime mould may pursue food based on the scent in the air; this behavior can be represented using the formula below:
$Y_j(It+1) = \begin{cases} Y_b(It) + \upsilon_1 \cdot \left( W \cdot Y_{r1}(It) - Y_{r2}(It) \right), & \Pr > r \\ \upsilon_2 \cdot Y_j(It), & \Pr \le r \end{cases}, \quad j = 1:n$  (2)
where It is the present iteration, Yj is the slime mould position, Yb is the position with the greatest odor concentration, and Yr1 and Yr2 are two solutions chosen at random from the population. The slime mould selection behavior is replicated by two components, namely, υ1 and υ2, where υ2 decreases linearly from 1 to 0. W is the weight of the search agent, while r is a random value between [0, 1]. The Pr formula is written as follows:
$\Pr = \tanh \left| S(j) - OBF \right|, \quad j = 1:n$  (3)
where S(j) denotes the present individual’s fitness score and OBF denotes the overall best fitness score over all the iterations. The following is the formula for υ1:
$\upsilon_1 = \left[ -\operatorname{arctanh}\!\left( 1 - \frac{2}{\max It} \right), \; \operatorname{arctanh}\!\left( 1 - \frac{2}{\max It} \right) \right]$  (4)
where the maximum number of iterations is represented by maxIt. The following is the weight W [38,39]:
$W\left( Index_{smell}(j) \right) = \begin{cases} 1 + r \cdot \log\left( \dfrac{BF - S(j)}{BF - WF} + 1 \right), & condition \\ 1 - r \cdot \log\left( \dfrac{BF - S(j)}{BF - WF} + 1 \right), & \text{others} \end{cases}, \quad j = 1:n$  (5)
The condition refers to the individuals ranked in the first half of the population, and r is a randomized value within [0, 1]. The optimal and worst fitness values acquired in the current iteration are denoted by BF and WF, respectively, and Indexsmell represents the sorted series of fitness ratings:
$Index_{smell} = \operatorname{sort}(S)$  (6)
When searching, the second stage computationally models the contraction mechanism of slime mould’s venous tissue arrangement. The slime mould may change the searching behaviors based on the food quality that it eats. The slime mould’s exact model for adjusting its location is as follows:
$Y_j(It+1) = \begin{cases} Y_{\min} + \mathrm{rand}(0,1) \cdot \left[ Y_{\max} - Y_{\min} \right], & \mathrm{rand} < z \\ Y_b(It) + \upsilon_1 \cdot \left( W \cdot Y_{r1}(It) - Y_{r2}(It) \right), & \Pr > r \\ \upsilon_2 \cdot Y_j(It), & \Pr \le r \end{cases}$  (7)
where rand and r are randomized values within [0, 1]. z is a parameter that determines how well a balancing process can explore and utilize data, and distinct values may be used depending on the situation. Figure 1 explains the steps of the SMO.
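To make the above update rule concrete, the following is a minimal Python sketch of one SMO iteration built around Equations (3)–(7). It is an illustration rather than the authors' implementation: the function and variable names are hypothetical, the value z = 0.03 is an assumed default, and the shrinking schedules used for υ1 and υ2 and the log base for the weights are assumptions where the paper leaves them implicit.

```python
import numpy as np

def smo_update(positions, fitness, best_pos, it, max_it, lb, ub, z=0.03, rng=None):
    """One iteration of the standard SMO position update (a simplified sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    obf = fitness.min()                                 # best fitness value in this iteration
    a = np.arctanh(1.0 - (it + 1) / (max_it + 1))       # assumed shrinking bound for v1
    v2_bound = 1.0 - it / max_it                        # v2 decreases linearly from 1 to 0

    # Dynamic weights W from the sorted fitness values (Equations (5) and (6))
    order = np.argsort(fitness)                         # Index_smell: best individual first
    bf, wf = fitness[order[0]], fitness[order[-1]]
    ratio = (bf - fitness) / (bf - wf - 1e-12) + 1.0    # argument of the log term
    r_w = rng.random((n, d))
    w = np.empty((n, d))
    w[order[: n // 2]] = 1.0 + r_w[order[: n // 2]] * np.log10(ratio[order[: n // 2], None])
    w[order[n // 2:]] = 1.0 - r_w[order[n // 2:]] * np.log10(ratio[order[n // 2:], None])

    new_pos = positions.copy()
    for j in range(n):
        if rng.random() < z:                            # random restart branch of Equation (7)
            new_pos[j] = lb + rng.random(d) * (ub - lb)
            continue
        pr = np.tanh(abs(fitness[j] - obf))             # Equation (3)
        r1, r2 = rng.integers(0, n, size=2)
        v1 = rng.uniform(-a, a, d)
        v2 = rng.uniform(-v2_bound, v2_bound, d)
        if pr > rng.random():                           # approach-food branch
            new_pos[j] = best_pos + v1 * (w[j] * positions[r1] - positions[r2])
        else:                                           # contraction branch
            new_pos[j] = v2 * positions[j]
    return np.clip(new_pos, lb, ub)
```

A full optimizer would simply loop this update, re-evaluate the fitness of the new positions, and keep track of the best position Yb across iterations.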

2.2. Proposed ESMO

In this section, an enhanced slime mould optimizer (ESMO) that uses chaotic behavior and an elitist group algorithm is presented to improve the performance of the standard SMO.
The SMO’s performance is improved by two changes. To enhance the exploitation searching feature, an elitist group, with a size of five individuals, is created and updated to store the best four individuals in each iteration besides the mean individual, just like the equilibrium pool in EA [9], as follows:
$Y_j(It+1) = Y_{Elitist}(It) + \upsilon_1 \cdot \left( W \cdot Y_{r1}(It) - Y_{r2}(It) \right) \quad \text{if } \Pr > r$  (8)
$Y_{Elitist}(It) = \mathrm{rand}\left( \left[ Y_{b1};\, Y_{b2};\, Y_{b3};\, Y_{b4};\, Y_{Avg} \right] \right)$  (9)
where YElitist is the elitist group with a size of five individuals, Yb is the position with the greatest odor concentrations, and YAvg is the mean position over the first four greatest concentrations. Therefore, the first four positions are stored in the elitist group besides the average position. In each iteration, the best four individuals are updated and the mean individual over them is calculated.
Based on this, exploitation searching is supported in various preferred directions. Furthermore, to enhance the exploration searching feature, a logistic map that uses chaotic behavior is designed to introduce a highly stochastic character into the search [40]. Based on that, a vector (Cm) is produced via the chaotic logistic map as follows:
$Cm_j(It+1) = 4\, Cm_j(It) \left( 1 - Cm_j(It) \right)$  (10)
$Cm(0) = \mathrm{rand}(1, \dim)$  (11)
Using Equations (10) and (11), a chaotic vector is generated in each iteration for each dimensional variable, as described in Figure 2. As shown, the chaotic logistic map provides a highly stochastic sequence, which supports the exploration searching feature.
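As a small illustration of Equations (10) and (11), the chaotic vector can be generated as follows; this is a sketch with hypothetical names rather than the authors' code.

```python
import numpy as np

def chaotic_logistic_sequence(dim, iterations, seed=None):
    """Yield one chaotic vector per iteration using the logistic map Cm <- 4*Cm*(1 - Cm)."""
    rng = np.random.default_rng(seed)
    cm = rng.random(dim)             # Cm(0): random start in (0, 1), Equation (11)
    for _ in range(iterations):
        cm = 4.0 * cm * (1.0 - cm)   # Equation (10): fully chaotic regime of the logistic map
        yield cm.copy()

# Example: three chaotic vectors for a five-dimensional problem
for vec in chaotic_logistic_sequence(dim=5, iterations=3, seed=1):
    print(np.round(vec, 3))
```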
As a result, the standard SMO’s updating process is adjusted, and the slime mould’s new positions are adjusted as follows:
$Y_j(It+1) = \begin{cases} Y_{\min} + \mathrm{rand}(0,1) \cdot \left[ Y_{\max} - Y_{\min} \right], & \mathrm{rand} < z \\ Y_{Elitist}(It) + \upsilon_1 \cdot \left( W \cdot Y_{r1}(It) - Y_{r2}(It) \right), & \Pr > r \\ \upsilon_2 \cdot Cm_j \cdot Y_j(It), & \Pr \le r \end{cases}$  (12)
Based on the chaotic behavior and elitist group algorithm, the proposed ESMO’s main steps are depicted in Figure 3.
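For clarity, the fragment below sketches how the elitist pool of Equation (9) and the chaotic factor of Equation (12) could plug into the SMO update sketched earlier; the helper names are hypothetical, and the quantities W, υ1, υ2, Pr, r, and z are assumed to be computed exactly as in the standard SMO.

```python
import numpy as np

def build_elitist_pool(positions, fitness):
    """Elitist group: the four best individuals plus their mean position (Equation (9))."""
    best4 = positions[np.argsort(fitness)[:4]]
    return np.vstack([best4, best4.mean(axis=0)])        # shape (5, d)

def esmo_move(y_j, pool, y_r1, y_r2, w_j, v1, v2, cm_j, pr, r, z, lb, ub, rng):
    """New position of one individual following Equation (12) (sketch)."""
    if rng.random() < z:                                  # random restart
        return lb + rng.random(y_j.size) * (ub - lb)
    if pr > r:                                            # exploit around a random elitist member
        y_elitist = pool[rng.integers(0, pool.shape[0])]
        return y_elitist + v1 * (w_j * y_r1 - y_r2)
    return v2 * cm_j * y_j                                # chaotic-scaled contraction
```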

3. Application for Benchmark Optimization Functions

The results of the effectiveness and functionality evaluations of the suggested ESMO and SMO are presented in this section. They were examined using seven uni-modal and six multi-modal benchmark functions. Their detailed data are tabulated in Table 1 and Table 2 in terms of their mathematical models, variable dimensions, and the considered ranges.
The suggested ESMO was assessed in comparison with the standard SMO, sine cosine algorithm (SCA) [41], salp swarm algorithm (SSA) [42], whale optimization algorithm (WOA) [20], multiverse optimizer (MVO) [43], PSO [44], and DE [45], as depicted in [25]. The parameters were chosen depending on those employed by the original source in the study or those generally utilized by other researchers. The detailed data of the parameter settings of these implemented techniques are described in Appendix A (Table A1).
To ensure fairness and consistency during the comparison, the methods were run under similar conditions. The numbers of solution individuals and iterations were assigned to be 30 and 1000, respectively. To minimize the effects of randomness in the algorithms, thirty independent runs were performed for each function and the mean outcome was used. The means, standard deviations (STds), and medians were used to analyze and quantify the outcomes. Their comparisons for the uni-modal and multi-modal optimization functions are tabulated in Table 3 and Table 4, respectively. As illustrated, the suggested ESMO had stronger resilience in terms of obtaining the smallest mean, STd, and median in more than 50% of the benchmark functions. For the uni-modal functions, the suggested ESMO always provided the capability to find the minimum median fitnesses of 0, 0, 0, 0, 0.057554, 0.000436, and 5.98 × 10−5, respectively, for the seven tested functions. Furthermore, it showed great performance in terms of the means and STds compared with SMO, SCA, SSA, WOA, MVO, PSO, and DE. Similar findings were obtained for the multi-modal benchmark functions. The suggested ESMO always provided the capability to find the minimum median fitnesses of −12,569.5, 0, 8.88 × 10−16, 0, 0.000204, and 0.00047, respectively, for the six tested functions, with great performance in terms of the means and STds compared with the others.
Additionally, a Friedman ranking test of the mean obtained fitness was executed for the uni-modal and multi-modal benchmark functions for the suggested ESMO, SMO, SCA, SSA, WOA, MVO, PSO, and DE, as depicted in Table 5.
As shown, the suggested ESMO had higher robustness, as it occupied the first rank, with a mean rank of 1.607. On the other hand, the standard SMO occupied the second rank, with a mean rank of 1.75. In ascending order, the other algorithms were DE, WOA, MVO, SSA, SCA, and PSO, with mean ranks of 3.5, 4.2142857, 4.7857143, 4.8571429, 5.7857143, and 6.7142857, respectively.
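The mean ranks above can, in principle, be reproduced by ranking the algorithms on every benchmark function and averaging the ranks; the snippet below illustrates this computation on a hypothetical results matrix (the values are placeholders, not the paper's data).

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical mean-fitness matrix: rows = benchmark functions, columns = algorithms
algorithms = ["ESMO", "SMO", "SCA", "SSA", "WOA", "MVO", "PSO", "DE"]
mean_fitness = np.array([
    [1e-30, 2e-28, 1e-3, 5e-2, 1e-20, 3e-1, 2e-1, 1e-10],   # placeholder function 1
    [0.0,   1e-25, 4e-2, 1e-1, 2e-15, 5e-1, 4e-1, 1e-8],    # placeholder function 2
])

# Rank the algorithms on each function (1 = best, i.e., smallest mean fitness), then average
ranks = np.apply_along_axis(rankdata, 1, mean_fitness)
mean_ranks = ranks.mean(axis=0)
for name, mr in sorted(zip(algorithms, mean_ranks), key=lambda t: t[1]):
    print(f"{name}: mean rank = {mr:.3f}")
```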

4. Application for Engineering Optimization Problems

4.1. Optimal Power Flow in Electric Power Systems

Regarding the OPF issue, the control variables can be seen as follows:
  • (Pgen1, Pgen2, …, PgenNgen) denote the active output powers of the generators.
  • (Qcap1, Qcap2, …, QcapNq) denote the absorbing or injecting reactive powers via switched reactors and capacitors, respectively.
  • (Vgen1, Vgen2, …, VgenNgen) denote the generator voltages.
  • (Ta1, Ta2, …, TaNt) denote the transformer tap settings.
where Ngen, Nq, and Nt reflect the number of generators, reactive power sources, and tap changers, respectively.
Additionally, the dependent variables can be seen as follows:
  • (VLoad1, …, VLoadNPQ) denote the load bus voltage magnitudes.
  • (Qgen1, Qgen2, …, QgenNgen) denote the reactive power of the generators.
  • (S1, …, SNF) denote the transmission line loadings.

4.1.1. Minimization of the Fuel Costs

The OPF problem can be mathematically solved to minimize the fuel generation costs (F1), as described in Equation (13):
$F_1 = \sum_{k=1}^{Ngen} a_k\, Pgen_k^2 + b_k\, Pgen_k + c_k$  (13)
where ak, bk, and ck are the cost coefficients of the generator k.
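As a quick illustration of Equation (13), the fuel cost can be evaluated directly from the generator outputs; the function below is a sketch, and the numbers in the example are placeholders rather than the IEEE 30-bus coefficients.

```python
import numpy as np

def fuel_cost(p_gen, a, b, c):
    """Total quadratic fuel cost F1 = sum(a*P^2 + b*P + c) in USD/h (Equation (13))."""
    p_gen, a, b, c = map(np.asarray, (p_gen, a, b, c))
    return float(np.sum(a * p_gen**2 + b * p_gen + c))

# Placeholder example with three generators (MW outputs and arbitrary coefficients)
print(fuel_cost([50.0, 80.0, 20.0], a=[0.02, 0.0175, 0.0625], b=[2.0, 1.75, 1.0], c=[0.0, 0.0, 0.0]))
```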
This minimization target should be handled by maintaining different inequality constraints, as described in Equations (14)–(20), and equality constraints, as described in Equations (21) and (22).
$Pgen_k^{\min} \le Pgen_k \le Pgen_k^{\max}, \quad k = 1:Ngen$  (14)
$Vgen_k^{\min} \le Vgen_k \le Vgen_k^{\max}, \quad k = 1:Ngen$  (15)
$Qgen_k^{\min} \le Qgen_k \le Qgen_k^{\max}, \quad k = 1:Ngen$  (16)
$Ta_L^{\min} \le Ta_L \le Ta_L^{\max}, \quad L = 1:Nt$  (17)
$-Qcap_{var}^{\max} \le Qcap_{var} \le Qcap_{var}^{\max}, \quad var = 1:Nq$  (18)
$VLoad_j^{\min} \le VLoad_j \le VLoad_j^{\max}, \quad j = 1:NPQ$  (19)
$\left| S_{Line} \right| \le S_{Line}^{\max}, \quad Line = 1:Nf$  (20)
$Pgen_k - PLoad_k - V_k \sum_{j=1}^{Nb} V_j \left( G_{kj} \cos\theta_{kj} + B_{kj} \sin\theta_{kj} \right) = 0, \quad k = 1:Nb$  (21)
$Qgen_k - QLoad_k + Qcap_k - V_k \sum_{j=1}^{Nb} V_j \left( G_{kj} \sin\theta_{kj} - B_{kj} \cos\theta_{kj} \right) = 0, \quad k = 1:Nb$  (22)
where PLoad and QLoad denote the active and reactive power demands, respectively; θkj is the phase angle difference between bus k and j; and Bkj is the mutual susceptance between bus k and j.
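The paper does not spell out how violations of the dependent-variable limits in Equations (16), (19), and (20) are handled inside the optimizer; a common choice, sketched below purely as an assumption, is to add a quadratic penalty to the objective whenever a box limit is violated.

```python
import numpy as np

def limit_penalty(values, v_min, v_max, rho=1e6):
    """Quadratic penalty for violating box limits (an assumed constraint-handling scheme)."""
    values, v_min, v_max = map(np.asarray, (values, v_min, v_max))
    below = np.clip(v_min - values, 0.0, None)   # amount under the lower bound
    above = np.clip(values - v_max, 0.0, None)   # amount over the upper bound
    return rho * float(np.sum(below**2 + above**2))

# Example: one load-bus voltage slightly below its 0.95 p.u. limit
print(limit_penalty([0.94, 1.02], v_min=[0.95, 0.95], v_max=[1.05, 1.05]))
```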
In the standard IEEE 30-bus system, the proposed ESMO and SMO were used. Thirty simulated tests were conducted for both the proposed ESMO and SMO, with a maximal number of iterations of 300 and a population number of 50. As illustrated in Figure 4, the basic IEEE 30-bus system consisted of 30 buses, 4 on-load tap changers, 9 capacitive sources, 6 generators, and 41 lines. The statistics for the allowable boundaries of reactive power production, buses, and transmission lines were derived from [46]. The allowable generator voltages were 0.9500 and 1.1000 p.u. for the minimum and maximum, respectively.
To minimize the fuel costs (case 1), the proposed ESMO, the standard SMO, and other recent SMO versions, namely, LSMO [31], EQSMO [32], AOSMO [33], and FDBSMO [34], were applied. Table 6 describes the parameter settings of each applied algorithm for solving the OPF issue. As shown, the same number of function evaluations was maintained at 15,000 and the same number of independent runs was maintained at 30. These considerations guarantee a fair comparison with equivalent fitness functions between the applied methods.
The attained outputs of the proposed ESMO and other recent versions of the SMO, i.e., LSMO [31], EQSMO [32], AOSMO [33], and FDBSMO [34], are displayed in Table 7. In addition, Figure 5 depicts their average, median, and best convergence characteristics. The proposed ESMO clearly beat the other versions of the SMO in terms of reducing fuel costs. The proposed ESMO achieved the best value of 799.1134 USD/h, while the SMO achieved 799.202 USD/h vs. 901.96 USD/h in the initial condition. Furthermore, LSMO, EQSMO, AOSMO and FDBSMO achieved 799.2048692, 799.1730514, 799.1745189, and 799.12964 USD/h, respectively.
In particular, after utilizing the suggested ESMO and other SMO versions, Figure 6 depicts the box plot of the thirty obtained fitnesses of the derived fuel costs. As shown, the suggested ESMO was effective at producing the lowest fuel cost values. In terms of the mean fuel costs, the suggested ESMO achieved a value of 799.2483 USD/h, whereas the SMO obtained a value of 799.437 USD/h. In terms of the maximum fuel costs, the suggested ESMO achieved a value of 799.2483 USD/h, whereas the SMO obtained a value of 799.5056 USD/h. Moreover, the ESMO provided a lower STd of 0.074835 compared with 0.085524 for the SMO.
In addition, Table 8 compares the outcomes of reducing the fuel costs (case 1) with numerous different methods, including MCSO [49], the improved electromagnetism-like algorithm (IEOA) [50], NBO [51], CSO [52], the black-hole-based optimization approach (BHBOA) [53], adaptive GO (AGO) [54], improved moth–flame optimization (IMFO) [55], the teaching–learning algorithm (TLA) [56], the developed grey wolf algorithm (DGWA) [57], the moth swarm algorithm (MSA) [58], the grasshopper optimizer (GO) [54], symbiotic organisms search (SOS) [59], the imperialist competitive algorithm (ICA) [60], the differential harmony search algorithm (DHSA) [61], and GA [62]. As shown, the proposed ESMO and the SMO obtained minimum fuel costs of 799.1134 USD/h and 799.202 USD/h, respectively, which were lower than those of the other techniques.

4.1.2. Minimization of the Power Losses

Based on the preferences of power system operators, the OPF problem can be mathematically solved to minimize the power losses (F2), as described in Equation (23) [63]:
$F_2 = GLs = \sum_{k=1}^{Nb} \sum_{j=1}^{Nb} G_{kj} \left( V_k^2 + V_j^2 - 2\, V_k V_j \cos\theta_{kj} \right)$  (23)
where Gkj indicates the conductance of the line connected between buses k and j.
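As an illustration, the quantity in Equation (23) can be accumulated line by line once a load-flow solution provides the bus voltages and angles; the sketch below uses placeholder data and a line-wise reading of the double sum, with hypothetical function and variable names.

```python
import numpy as np

def total_losses(v, theta, lines):
    """Active power losses: sum over lines of G*(Vk^2 + Vj^2 - 2*Vk*Vj*cos(theta_kj)),
    a line-wise reading of Equation (23)."""
    v, theta = np.asarray(v), np.asarray(theta)
    loss = 0.0
    for k, j, g_kj in lines:  # each line: (from bus, to bus, series conductance in p.u.)
        loss += g_kj * (v[k]**2 + v[j]**2 - 2.0 * v[k] * v[j] * np.cos(theta[k] - theta[j]))
    return loss

# Placeholder two-line example (per-unit voltages and angles in radians)
print(total_losses(v=[1.05, 1.02, 0.99], theta=[0.0, -0.02, -0.05],
                   lines=[(0, 1, 4.0), (1, 2, 3.5)]))
```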
To minimize the power losses, the proposed ESMO and other SMO versions were performed, and their attained outputs are displayed in Table 9. In addition, Figure 7 depicts the convergence characteristics of the suggested ESMO and other SMO versions. The proposed ESMO clearly beat the other SMO versions in terms of reducing the power losses, as the proposed ESMO achieved the lowest value of 2.852 MW vs. 5.83 MW in the initial condition. Furthermore, the SMO, LSMO, EQSMO, AOSMO, and FDBSMO achieved 2.873, 2.8789, 2.87089, 2.869748, and 2.86545 MW, respectively. In particular, after utilizing the suggested ESMO and other SMO versions, Figure 8 depicts the box plot of the thirty obtained fitnesses of the derived power losses.
As shown, the suggested ESMO was the most effective version at producing the lowest power losses. In terms of mean power losses, the suggested ESMO achieved a value of 2.8677 MW, whereas the SMO, LSMO, EQSMO, AOSMO, and FDBSMO obtained values of 2.99839, 2.942707, 2.970576, 2.910941, and 2.933065 MW, respectively. In terms of the maximum power losses, the suggested ESMO achieved a value of 2.914 MW, whereas the SMO, LSMO, EQSMO, AOSMO, and FDBSMO obtained values of 3.3243, 3.148561, 3.155098, 3.073784, and 3.118624 MW, respectively. Moreover, the ESMO provided the lowest STd of 0.01269 relative to 0.113264, 0.071125, 0.100712, 0.051942, and 0.069051 for the SMO, LSMO, EQSMO, AOSMO, and FDBSMO, respectively.

4.1.3. Minimization of the Total Produced Emissions

Nowadays, there is great concern worldwide about the production of pollutant gases. Thus, the OPF problem can be mathematically solved to minimize the total produced emissions (F3), as described in Equation (24):
$F_3 = \sum_{k=1}^{Ngen} \left[ \left( A_k\, Pgen_k^2 + B_k\, Pgen_k + C_k \right)/100 + D_k\, e^{E_k\, Pgen_k} \right]$  (24)
where Ak, Bk, Ck, Dk, and Ek are the atmospheric coefficients of the produced emissions of each generator k.
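Equation (24) can be evaluated in the same spirit as the fuel cost; the coefficients and outputs below are placeholders, and the generator powers are assumed to be expressed in the same units as the coefficient data.

```python
import numpy as np

def total_emissions(p_gen, A, B, C, D, E):
    """Total emissions per Equation (24): sum of (A*P^2 + B*P + C)/100 + D*exp(E*P) over generators."""
    p_gen, A, B, C, D, E = map(np.asarray, (p_gen, A, B, C, D, E))
    return float(np.sum((A * p_gen**2 + B * p_gen + C) / 100.0 + D * np.exp(E * p_gen)))

# Placeholder example with two generators
print(total_emissions([0.5, 0.8], A=[4.091, 2.543], B=[-5.554, -6.047],
                      C=[6.49, 5.638], D=[2.0e-4, 5.0e-4], E=[2.857, 3.333]))
```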
For minimizing the emissions, the proposed ESMO and SMO were used, and their attained outputs are displayed in Table 10.
In addition, Figure 9 depicts the convergence characteristics of the suggested ESMO and other SMO versions. The proposed ESMO clearly beat the other SMO versions in terms of reducing the total produced emissions, as the proposed ESMO achieved the lowest value of 0.20469247 ton/h vs. 0.23909633 ton/h in the initial condition, while the SMO, LSMO, EQSMO, AOSMO, and FDBSMO achieved 0.204700981, 0.20470874, 0.20470045, 0.204713264, and 0.204707036 ton/h, respectively.
For this case, Table 11 shows a comparison with other meta-heuristic optimizers. As shown, the suggested ESMO attained the minimum emissions of 0.20469247 ton/h. It outperformed other meta-heuristics, namely, adaptive real coded biogeography-based optimization (ARBO) [64], the Jaya algorithm [65], the stud krill herd algorithm (KHA) [66], AGO [54], MCSO [49], KHA [66], GO [54], the modified TLA [67], NBO [49], and CSO [49].
Additionally, after utilizing the suggested ESMO and other SMO versions, Figure 10 depicts the box plot of the thirty obtained fitnesses of the derived emissions. As shown, the suggested ESMO was effective at producing the lowest emissions. In terms of mean emissions, the suggested ESMO achieved a value of 0.20471 ton/h, whereas the SMO, LSMO, EQSMO, AOSMO, and FDBSMO obtained values of 0.20482, 0.204835, 0.204798, 0.204789, and 0.204778 ton/h, respectively. In terms of the maximum emissions, the suggested ESMO achieved a value of 0.204893 ton/h, whereas the SMO, LSMO, EQSMO, AOSMO, and FDBSMO obtained values of 0.204949, 0.204991, 0.204919, 0.204953, and 0.204862 ton/h, respectively. Furthermore, the ESMO provided the lowest STd of 3.54 × 10−5 relative to 8.12 × 10−5, 8.56 × 10−5, 7.14 × 10−5, 7.18 × 10−5, and 5.43 × 10−5 for the SMO, LSMO, EQSMO, AOSMO, and FDBSMO, respectively.

4.1.4. Wilcoxon Rank-Sum Test of the Implemented SMO Versions

In this part, the two-sided Wilcoxon rank-sum test was used, which is equivalent to a Mann–Whitney U-test, in order to return the p-values. The proposed ESMO was considered against each SMO version for the three cases studied and the test was executed. Table 12 displays the p-values that were found for each OPF fitness minimization. For the costs minimization, the majority of the p-values were less than 0.05, which indicated that the test rejected the null hypothesis at the default 5% significance level. For the losses minimization, the p-values, which were 7.39 × 10−11, 2.37 × 10−10, 3.82 × 10−10, 5.0 × 10−9, and 2.02 × 10−8 for SMO, LSMO, EQSMO, AOSMO, and FDBSMO, respectively, were less than 0.05. Similar findings were attained for the emissions minimization, where the p-values were 3.82 × 10−09, 7.38 × 10−10, 5.53 × 10−8, 8.1 × 10−10, and 1.43 × 10−8 for SMO, LSMO, EQSMO, AOSMO, and FDBSMO. Therefore, the Wilcoxon rank-sum test rejected the null hypothesis between the implemented SMO versions.
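For reference, the same two-sided rank-sum comparison can be reproduced with SciPy; the two samples below are placeholders standing in for the thirty fitness values of two algorithms, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
esmo_runs = rng.normal(799.25, 0.07, size=30)   # placeholder: 30 ESMO fuel-cost results
smo_runs = rng.normal(799.44, 0.09, size=30)    # placeholder: 30 SMO fuel-cost results

stat, p_value = ranksums(esmo_runs, smo_runs)   # two-sided Wilcoxon rank-sum test
print(f"p-value = {p_value:.3e}; reject H0 at the 5% level: {p_value < 0.05}")
```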

4.1.5. Evaluation of the Chaotic Strategy and Elitist Group with the ESMO

In this part, the two proposed modifications in the ESMO (the chaotic strategy and the elitist group) were evaluated independently and together in order to study the influence of these mechanisms on the behavior of the ESMO versus the standard SMO. The SMO with only the elitist group (SMO_Elitist), the SMO with only the chaotic strategy (SMO_Chaotic), and the proposed ESMO, which involved both the chaotic strategy and the elitist group, were assessed. For the three cases studied, SMO_Elitist, SMO_Chaotic, and the proposed ESMO were run with the same parameter settings that were previously defined in Table 6. Their obtained best, mean, worst, and STd values are recorded in Table 13. As shown, great enhancements were illustrated in the three cases studied, especially in the robustness behavior. For the costs minimization, the STd improvement was found to be 6.6379 and 74.9000% relative to SMO_Elitist and SMO_Chaotic, respectively. For the losses minimization, the STd improvement was found to be 87.4008 and 84.2415% relative to SMO_Elitist and SMO_Chaotic, respectively. For the emissions minimization, the STd improvement was found to be 50.4891 and 54.5761% relative to SMO_Elitist and SMO_Chaotic, respectively.

4.2. Optimal Combined Heat and Electrical Power Dispatch Problem

The combined heat and electrical power dispatch (CHEPD) problem was handled by considering a large-scale test case of 84 diverse units. The CHEPD's main purpose is to identify the optimal heat and electrical power outputs of the heat-only units, power-only units, and co-generation units in order to keep fuel costs low while exactly meeting the heat and electrical power demands and limitations. Its objective is to minimize the system's total production costs. As a result, the generation cost reduction goal (F) may be expressed as:
$F = \sum_{k=1}^{N_G} C_k(P_k) + \sum_{j=1}^{N_H} C_j(H_j) + \sum_{i=1}^{N_{CHP}} C_i(P_i, H_i)$  (25)
where NG, NH, and NCHP are the numbers of the power-only, heat-only, and co-generator units, respectively, while Ck(Pk) [68], Cj(Hj), and Ci(Pi, Hi) are, respectively, the cost functions for the power-only, heat-only, and co-generator units, as follows:
$C_k(P_k) = \alpha_{1k} P_k^2 + \alpha_{2k} P_k + \alpha_{3k} + \left| \alpha_{4k} \sin\left( \alpha_{5k} \left( P_{k,\min} - P_k \right) \right) \right|$  (26)
$C_j(H_j) = \varphi_{1j} H_j^2 + \varphi_{2j} H_j + \varphi_{3j}$  (27)
$C_i(P_i, H_i) = \beta_{1i} P_i^2 + \beta_{2i} P_i + \beta_{3i} + \beta_{4i} H_i^2 + \beta_{5i} H_i + \beta_{6i} H_i P_i$  (28)
where α1, α2, α3, α4, and α5 are the cost coefficients of the power units; φ1, φ2, and φ3 are the cost coefficients of the heat units; and β1, β2, β3, β4, β5, and β6 are the cost coefficients for the co-generator units.
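The three unit cost models in Equations (26)–(28) and the total cost in Equation (25) translate directly into code; the sketch below uses hypothetical coefficient values that are not taken from the 84-unit test case.

```python
import numpy as np

def power_unit_cost(p, a1, a2, a3, a4, a5, p_min):
    """Power-only unit cost with the valve-point effect (Equation (26))."""
    return a1 * p**2 + a2 * p + a3 + abs(a4 * np.sin(a5 * (p_min - p)))

def heat_unit_cost(h, f1, f2, f3):
    """Heat-only unit cost (Equation (27))."""
    return f1 * h**2 + f2 * h + f3

def chp_unit_cost(p, h, b1, b2, b3, b4, b5, b6):
    """Co-generation unit cost (Equation (28))."""
    return b1 * p**2 + b2 * p + b3 + b4 * h**2 + b5 * h + b6 * h * p

# Total production cost of Equation (25): sum over all power-only, heat-only, and CHP units
total = (power_unit_cost(100.0, 0.0015, 7.7, 150.0, 120.0, 0.035, 36.0)
         + heat_unit_cost(40.0, 0.038, 2.0, 950.0)
         + chp_unit_cost(125.0, 32.0, 0.0345, 14.5, 2650.0, 0.03, 4.2, 0.031))
print(round(total, 2))
```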
Added to this, inequality constraints of this issue must be satisfied in terms of the capacity of the power-only, heat-only, and co-generator units, as follows:
$P_k^{\min} \le P_k \le P_k^{\max}, \quad k = 1:N_G$  (29)
$H_j^{\min} \le H_j \le H_j^{\max}, \quad j = 1:N_H$  (30)
$P_i^{\min} \le P_i \le P_i^{\max}, \quad i = 1:N_{CHP}$  (31)
$H_i^{\min} \le H_i \le H_i^{\max}, \quad i = 1:N_{CHP}$  (32)
where the superscripts “min” and “max” indicate the minimum and maximum limits.
Moreover, equality constraints of this issue must be satisfied in terms of the power and heat balance, respectively, as follows:
$\sum_{k=1}^{N_G} P_k + \sum_{i=1}^{N_{CHP}} P_i = P_{demand}$  (33)
$\sum_{j=1}^{N_H} H_j + \sum_{i=1}^{N_{CHP}} H_i = H_{demand}$  (34)
where Hdemand and Pdemand are the system heat demand and electric demand, respectively.
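A candidate dispatch can be screened against Equations (33) and (34) by checking the residuals of the two balance equations, as in the sketch below; the dispatch values are placeholders chosen only to satisfy the stated demands.

```python
import numpy as np

def balance_residuals(p_power_only, p_chp, h_heat_only, h_chp, p_demand, h_demand):
    """Residuals of the power and heat balance equalities (Equations (33) and (34))."""
    power_residual = np.sum(p_power_only) + np.sum(p_chp) - p_demand
    heat_residual = np.sum(h_heat_only) + np.sum(h_chp) - h_demand
    return power_residual, heat_residual

# Placeholder check against the 84-unit case demands (12,700 MW electrical and 5000 MWth heat)
print(balance_residuals(p_power_only=[12000.0], p_chp=[700.0],
                        h_heat_only=[4300.0], h_chp=[700.0],
                        p_demand=12700.0, h_demand=5000.0))
```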
For the CHEPD problem, a sizable case study with 84 different units was addressed. An electrical power load of 12,700 MW and a heat load of 5000 MWth were maintained for this system, which included twenty-four co-generation units, forty power-only units, and twenty heat-only units. Ref. [69] contains the complete data of the considered system and Table 14 tabulates the power and heat outputs from the power-only, heat-only, and co-generator units depending on the proposed ESMO and SMO algorithms. In addition, Figure 11 depicts the convergence characteristics of the suggested ESMO and SMO. The proposed ESMO clearly beat the SMO in terms of reducing the total production costs, as a lower fuel cost in the CHEPD system of USD 289,498.2 was obtained using the proposed ESMO compared to USD 290,362.8 using the conventional SMO.
Table 15 contrasts the effectiveness of the suggested ESMO, which gave the optimal operating costs, with other current optimization methods, such as JFSO [70], WOA [69], MPA [71], IMPA [71], and the hybrid HT and JFSO (HT-JFSO) [70]. The suggested ESMO had the lowest costs and achieved the highest performance among the various optimizers, as shown in the table. This comparison validated the suggested ESMO's efficacy and superiority. Furthermore, Figure 12 depicts the box plot of the thirty obtained fitnesses of the derived production costs. As shown, the suggested ESMO was effective at producing lower fuel cost values. In terms of the mean fuel costs, the suggested ESMO achieved a value of 290,894.1 USD/h, whereas the SMO obtained a value of 291,812.6 USD/h. In terms of the maximum fuel costs, the suggested ESMO achieved a value of 293,371.5 USD/h, whereas the SMO obtained a value of 293,884.7 USD/h.

4.3. Friedman Ranking Test for Engineering Optimization Problems

Additionally, a Friedman ranking test of the best, mean, worst, and standard deviation of the obtained fitnesses was executed for the considered optimization cases of the OPF engineering problems for the suggested ESMO, SMO, LSMO, EQSMO, AOSMO, and FDBSMO, as depicted in Table 16. This table clearly shows the great ability of the proposed ESMO to attain the first rank compared with the others.

4.4. Friedman and Post Hoc Tests for Engineering Optimization Problems

Moreover, Friedman and accompanying post hoc tests were implemented, where each method's statistical distribution was based on the outcomes of its independent executions. For the OPF results, the related results are described in Table 17 by means of Friedman's ANOVA table in MATLAB. Moreover, the distribution of the outcomes for each case study is displayed in Appendix A. From this table, the null hypothesis was always rejected for all cases studied, where the p-value was always very small. For the first case regarding costs minimization, the recorded p-value was 9.5818 × 10−7. For the second case regarding losses minimization, the recorded p-value was 3.65118 × 10−14. For the third case regarding emissions minimization, the recorded p-value was 7.18362 × 10−12.
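The paper reports these p-values from Friedman's ANOVA table in MATLAB; an analogous check can be sketched with SciPy on placeholder run data, as below (the sampled values are illustrative, not the paper's results).

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Placeholder: 30 independent-run fitness values per SMO version for one OPF case
runs = {name: rng.normal(mu, 0.08, size=30)
        for name, mu in [("ESMO", 799.25), ("SMO", 799.44), ("LSMO", 799.40),
                         ("EQSMO", 799.41), ("AOSMO", 799.38), ("FDBSMO", 799.36)]}

stat, p_value = friedmanchisquare(*runs.values())
print(f"Friedman statistic = {stat:.2f}, p-value = {p_value:.3e}")
```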
Added to this, the accompanying post hoc test is tabulated in Table 18. The second column shows the difference between the estimated group means (DGM). The first and third columns show the lower and upper limits of the 95% confidence intervals for the true mean difference, which are denoted by "LCI" and "UCI", respectively. The last column contains the p-value for a hypothesis test that the corresponding mean difference is equal to zero. The majority of the p-values were very small, which indicated that the results of the proposed ESMO differed significantly across all three minimization tasks.
In a similar manner, Friedman and accompanying post hoc tests were implemented, as described in Table 19 and Table 20, respectively. Moreover, the distribution of the outcomes for each case study is displayed in Appendix A. From both tables, the null hypothesis was completely rejected, where the p-values based on Friedman's ANOVA and the accompanying post hoc tests were 0.0003 and 0.000261, respectively.

5. Conclusions

In the current study, an enhanced slime mould optimizer (ESMO) was proposed. The proposed ESMO was tested on 13 benchmark functions. In this study, the proposed ESMO incorporated a chaotic strategy and an elitist group to handle well-known engineering optimization problems, namely, the optimal power flow (OPF) and the combined heat and electrical power dispatch (CHEPD) problems. A chaotic strategy was integrated into the movement updating rule of the basic SMO, whereas the exploitation mechanism was enhanced via searching around an elitist group instead of only the global best dependence. For the OPF problem, three cases were considered, and the applications were scrutinized on a typical IEEE test grid. The simulation results were compared with the results given in former publications and were found to be competitive in terms of the quality of the solution. The second engineering application was the combined heat and electrical power dispatch problem, which was handled by considering a large-scale test case of 84 diverse units. Competitive findings were achieved using the suggested ESMO, which surpassed the basic SMO and other recent techniques regarding minimizing the total production costs of heat and electrical energies. Moreover, the suggested ESMO outperformed the other optimization methods examined in terms of convergence rate, as well as solution merits. Furthermore, the statistical efficacy authenticated the quality of the suggested ESMO.
Considering the high efficacy of the suggested ESMO in the above-mentioned studies, the proposed method should be tested in the future for its sufficiency when solving the OPF issue with the increasing penetration of renewable energies in electrical power networks. It may also be extended to AC–DC power grids with the incorporation of modern voltage source converters. The limitation of the adopted methodology, like that of other meta-heuristic techniques, is its dependence on the parameter settings. Fortunately, only two parameter settings are required for the proposed ESMO, which are the numbers of individuals and iterations.

Author Contributions

Conceptualization, S.S., R.A.E.-S. and A.M.S.; methodology, A.M.S.; software, A.M.S.; validation, R.A.E.-S. and M.G.; formal analysis, A.M.S. and R.A.E.-S.; investigation, A.M.S. and M.G.; resources, M.G. and R.A.E.-S.; data curation, S.S.; writing—original draft preparation, S.S. and A.M.S.; writing—review and editing, R.A.E.-S.; visualization, M.G.; supervision, R.A.E.-S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Ministry of Education and King Abdulaziz University, Jeddah, Saudi Arabia.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research work was funded by Institutional Fund Projects under grant no. (IFPFP-263-22). Therefore, the authors gratefully acknowledge the technical and financial support from the Ministry of Education and the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), Jeddah, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 describes the parameter settings of the implemented techniques related to Table 3 and Table 4.
Table A1. Parameter settings of the implemented techniques.
Algorithm | Parameter Settings
ESMO | Adaptive parameters
SMO | Adaptive parameters
SCA | A = 2
SSA | c1 = c2 ∈ [0, 1]
WOA | a1 = [2, 0]; a2 = [−2, −1]; b = 1
MVO | travelling distance rate ∈ [0.6, 1]; existence probability ∈ [0.2, 1]
PSO | c1 = c2 = 2; vMax = 6
DE | crossover probability = 0.5; scaling factor = 0.5
Figure A1, Figure A2 and Figure A3 show the distribution of the outcomes for each case study in the OPF problem for different objectives.
Figure A1. Distribution of the outcomes of the compared algorithms in order to minimize the costs in the OPF problem.
Figure A2. Distribution of the outcomes of the compared algorithms in order to minimize the losses in the OPF problem.
Figure A3. Distribution of the outcomes of the compared algorithms in order to minimize the emissions in the OPF problem.

References

  1. Hajipour, V.; Kheirkhah, A.S.; Tavana, M.; Absi, N. Novel Pareto-based meta-heuristics for solving multi-objective multi-item capacitated lot-sizing problems. Int. J. Adv. Manuf. Technol. 2015, 80, 31–45. [Google Scholar] [CrossRef]
  2. Hajipour, V.; Mehdizadeh, E.; Tavakkoli-Moghaddam, R. A novel Pareto-based multi-objective vibration damping optimization algorithm to solve multi-objective optimization problems. Sci. Iran. 2014, 21, 2368. [Google Scholar]
  3. Wu, G. Across neighborhood search for numerical optimization. Inf. Sci. 2016, 329, 597–618. [Google Scholar] [CrossRef] [Green Version]
  4. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  5. Shaheen, A.M.; Elattar, E.E.; El Sehiemy, R.A.; Elsayed, A.M. An Improved Sunflower Optimization Algorithm Based-Monte Carlo Simulation for Efficiency Improvement of Radial Distribution Systems Considering Wind Power Uncertainty. IEEE Access 2020, 9, 2332–2344. [Google Scholar] [CrossRef]
  6. Bentouati, B.; Javaid, M.S.; Bouchekara, H.R.E.H.; El-Fergany, A.A. Optimizing performance attributes of electric power systems using chaotic salp swarm optimizer. Int. J. Manag. Sci. Eng. Manag. 2020, 15, 165–175. [Google Scholar] [CrossRef]
  7. Li, S.; Gong, W.; Wang, L.; Yan, X.; Hu, C. Optimal power flow by means of improved adaptive differential evolution. Energy 2020, 198, 117314. [Google Scholar] [CrossRef]
  8. Chen, X.; Li, K.; Xu, B.; Yang, Z. Biogeography-based learning particle swarm optimization for combined heat and power economic dispatch problem. Knowl.-Based Syst. 2020, 208, 106463. [Google Scholar] [CrossRef]
  9. Abualigah, L.; Diabat, A.; Geem, Z.W. A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 2020, 10, 3827. [Google Scholar] [CrossRef]
  10. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2019, 191, 105190. [Google Scholar] [CrossRef]
  11. Elgamal, Z.M.; Yasin, N.M.; Sabri, A.Q.M.; Sihwail, R.; Tubishat, M.; Jarrah, H. Improved equilibrium optimization algorithm using elite opposition-based learning and new local search strategy for feature selection in medical datasets. Computation 2021, 9, 68. [Google Scholar] [CrossRef]
  12. Sun, F.; Yu, J.; Zhao, A.; Zhou, M. Optimizing multi-chiller dispatch in HVAC system using equilibrium optimization algorithm. Energy Rep. 2021, 7, 5997–6013. [Google Scholar] [CrossRef]
  13. Lan, P.; Xia, K.; Pan, Y.; Fan, S. An improved equilibrium optimizer algorithm and its application in LSTM neural network. Symmetry 2021, 13, 1706. [Google Scholar] [CrossRef]
  14. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning, 1989th ed.; Addison-Wesley Publishing Company, INC.: Redwood City, CA, USA, 1989. [Google Scholar]
  15. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  16. Das, D.B.; Patvardhan, C. A new hybrid evolutionary strategy for reactive power dispatch. Electr. Power Syst. Res. 2003, 65, 83–90. [Google Scholar]
  17. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2020, 389, 125535. [Google Scholar] [CrossRef]
  18. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  19. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702. [Google Scholar] [CrossRef]
  20. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  21. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2019, 87, 103300. [Google Scholar] [CrossRef]
  22. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  23. Abou-El-Ela, A.A.; El-Sehiemy, R.A. Optimized generation costs using a modified particle swarm optimization version. In Proceedings of the 2008 12th International Middle East Power System Conference, MEPCON 2008, Aswan, Egypt, 12–15 March 2008; pp. 420–424. [Google Scholar] [CrossRef] [Green Version]
  24. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  25. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Futur. Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  26. Hassan, M.H.; Kamel, S.; Abualigah, L.; Eid, A. Development and application of slime mould algorithm for optimal economic emission dispatch. Expert Syst. Appl. 2021, 182, 115205. [Google Scholar] [CrossRef]
  27. Khunkitti, S.; Siritaratiwat, A.; Premrudeepreechacharn, S. Multi-objective optimal power flow problems based on slime mould algorithm. Sustainability 2021, 13, 7448. [Google Scholar] [CrossRef]
  28. Nguyen, T.T.; Wang, H.J.; Dao, T.K.; Pan, J.S.; Liu, J.H.; Weng, S. An Improved Slime Mold Algorithm and its Application for Optimal Operation of Cascade Hydropower Stations. IEEE Access 2020, 8, 226754–226772. [Google Scholar] [CrossRef]
  29. Yu, K.; Liu, L.; Chen, Z. An improved slime mould algorithm for demand estimation of urban water resources. Mathematics 2021, 9, 1316. [Google Scholar] [CrossRef]
  30. Dhawale, D.; Kamboj, V.K.; Anand, P. An effective solution to numerical and multi-disciplinary design optimization problems using chaotic slime mold algorithm. Eng. Comput. 2021. [Google Scholar] [CrossRef]
  31. Naik, M.K.; Panda, R.; Abraham, A. Normalized square difference based multilevel thresholding technique for multispectral images using leader slime mould algorithm. J. King Saud Univ.-Comput. Inf. Sci. 2020. [Google Scholar] [CrossRef]
  32. Naik, M.K.; Panda, R.; Abraham, A. Adaptive opposition slime mould algorithm. Soft Comput. 2021, 25, 14297–14313. [Google Scholar] [CrossRef]
  33. Naik, M.K.; Panda, R.; Abraham, A. An entropy minimization based multilevel colour thresholding technique for analysis of breast thermograms using equilibrium slime mould algorithm. Appl. Soft Comput. 2021, 113, 107955. [Google Scholar] [CrossRef]
  34. Suiçmez, Ç.; Kahraman, H.; Yilmaz, C.; Işik, M.F.; Cengiz, E. Improved Slime-Mould-Algorithm with Fitness Distance Balance-based Guiding Mechanism for Global Optimization Problems. Duzce Univ. J. Sci. Technol. 2021, 9, 40–54. [Google Scholar] [CrossRef]
  35. Eftimov, T.; Korošec, P.; Seljak, B.K. Disadvantages of Statistical Comparison of Stochastic Optimization Algorithms. In Proceedings of the 7th International Conference on Bioinspired Optimization Methods and their Applications, Bled, Slovenia, 18–20 May 2016. [Google Scholar]
  36. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2020, 54, 2323–2409. [Google Scholar] [CrossRef]
  37. Liu, M.; Li, Y.; Huo, Q.; Li, A.; Zhu, M.; Qu, N.; Chen, L.; Xia, M. A two-way parallel slime mold algorithm by flow and distance for the travelling salesman problem. Appl. Sci. 2020, 10, 6180. [Google Scholar] [CrossRef]
  38. Premkumar, M.; Jangir, P.; Sowmya, R.; Alhelou, H.H.; Heidari, A.A.; Chen, H. MOSMA: Multi-Objective Slime Mould Algorithm Based on Elitist Non-Dominated Sorting. IEEE Access 2020, 9, 3229–3248. [Google Scholar] [CrossRef]
  39. İzci, D.; Ekinci, S. Comparative performance analysis of slime mould algorithm for efficient design of proportional–integral–derivative controller. Electrica 2021, 21, 151–159. [Google Scholar] [CrossRef]
  40. Kumari, S.; Chugh, R. A novel four-step feedback procedure for rapid control of chaotic behavior of the logistic map and unstable traffic on the road. Chaos 2020, 30, 123115. [Google Scholar] [CrossRef]
  41. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  42. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  43. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  44. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Washington, DC, USA, 1995. [Google Scholar]
  45. Storn, R.; Price, K. Differential Evolution–A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  46. Liu, Y.; Gong, D.; Sun, J.; Jin, Y. A Many-Objective Evolutionary Algorithm Using a One-by-One Selection Strategy. IEEE Trans. Cybern. 2017, 47, 2689–2702. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. El-Sehiemy, R.A.; El Ela, A.A.A.; Shaheen, A. A multi-objective fuzzy-based procedure for reactive power-based preventive emergency strategy. Int. J. Eng. Res. Afr. 2014, 13, 91–102. [Google Scholar] [CrossRef]
  48. Shaheen, A.M.; El-Sehiemy, R.A. Application of Multi-Verse Optimizer for Transmission Network Expansion Planning in Power Systems. In Proceedings of the 2019 International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 2–4 February 2019. [Google Scholar]
  49. Shaheen, A.M.; El-Sehiemy, R.A.; Elattar, E.E.; Abd-Elrazek, A.S. A Modified Crow Search Optimizer for Solving Non-Linear OPF Problem with Emissions. IEEE Access 2021, 9, 43107–43120. [Google Scholar] [CrossRef]
  50. Jeddi, B.; Einaddin, A.H.; Kazemzadeh, R. A novel multi-objective approach based on improved electromagnetism-like algorithm to solve optimal power flow problem considering the detailed model of thermal generators. Int. Trans. Electr. Energy Syst. 2016, 27, e2293. [Google Scholar] [CrossRef]
  51. Yang, X.S. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef] [Green Version]
  52. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  53. Bouchekara, H.R.E.H. Optimal power flow using black-hole-based optimization approach. Appl. Soft Comput. J. 2014, 24, 879–888. [Google Scholar] [CrossRef]
  54. Alhejji, A.; Ebeed Hussein, M.; Kamel, S.; Alyami, S. Optimal Power Flow Solution with an Embedded Center-Node Unified Power Flow Controller Using an Adaptive Grasshopper Optimization Algorithm. IEEE Access 2020, 8, 119020–119037. [Google Scholar] [CrossRef]
  55. Taher, M.A.; Kamel, S.; Jurado, F.; Ebeed, M. An improved moth-flame optimization algorithm for solving optimal power flow problem. Int. Trans. Electr. Energy Syst. 2019, 29, e2743. [Google Scholar] [CrossRef]
  56. Ghasemi, M.; Ghavidel, S.; Gitizadeh, M.; Akbari, E. An improved teaching-learning-based optimization algorithm using Lévy mutation strategy for non-smooth optimal power flow. Int. J. Electr. Power Energy Syst. 2015, 65, 375–384. [Google Scholar] [CrossRef]
  57. Abdo, M.; Kamel, S.; Ebeed, M.; Yu, J.; Jurado, F. Solving non-smooth optimal power flow problems using a developed grey wolf optimizer. Energies 2018, 11, 1692. [Google Scholar] [CrossRef] [Green Version]
  58. Mohamed, A.A.A.; Mohamed, Y.S.; El-Gaafary, A.A.M.; Hemeida, A.M. Optimal power flow using moth swarm algorithm. Electr. Power Syst. Res. 2017, 142, 190–206. [Google Scholar] [CrossRef]
  59. Duman, S. Symbiotic organisms search algorithm for optimal power flow problem based on valve-point effect and prohibited zones. Neural Comput. Appl. 2016, 28, 3571–3585. [Google Scholar] [CrossRef]
  60. Ghanizadeh, A.J.; Mokhtari, M.; Abedi, M.; Gharehpetian, G.B. Optimal power flow based on imperialist competitive algorithm. Int. Rev. Electr. Eng. 2011, 6, 1847–1852. [Google Scholar]
  61. Arul, R.; Ravi, G.; Velusami, S. Solving optimal power flow problems using chaotic self-adaptive differential harmony search algorithm. Electr. Power Compon. Syst. 2013, 41, 782–805. [Google Scholar] [CrossRef]
  62. Zhang, J.; Wang, S.; Tang, Q.; Zhou, Y.; Zeng, T. An improved NSGA-III integrating adaptive elimination strategy to solution of many-objective optimal power flow problems. Energy 2019, 172, 945–957. [Google Scholar] [CrossRef]
  63. Shaheen, A.; Ginidi, A.; El-Sehiemy, R.; Elsayed, A.; Elattar, E.; Dorrah, H.T. Developed Gorilla Troops Technique for Optimal Power Flow Problem in Electrical Power Systems. Mathematics 2022, 10, 1636. [Google Scholar] [CrossRef]
  64. Ramesh Kumar, A.; Premalatha, L. Optimal power flow for a deregulated power system using adaptive real coded biogeography-based optimization. Int. J. Electr. Power Energy Syst. 2015, 73, 393–399. [Google Scholar] [CrossRef]
  65. El-Sattar, S.A.; Kamel, S.; El Sehiemy, R.A.; Jurado, F.; Yu, J. Single- and multi-objective optimal power flow frameworks using Jaya optimization technique. Neural Comput. Appl. 2019, 31, 8787–8806. [Google Scholar] [CrossRef]
  66. Pulluri, H.; Naresh, R.; Sharma, V. A solution network based on stud krill herd algorithm for optimal power flow problems. Soft Comput. 2018, 22, 159–176. [Google Scholar] [CrossRef]
  67. Shabanpour-Haghighi, A.; Seifi, A.R.; Niknam, T. A modified teaching-learning based optimization for multi-objective optimal power flow problem. Energy Convers. Manag. 2014, 77, 597–607. [Google Scholar] [CrossRef]
  68. Shaheen, A.M.; El-Sehiemy, R.A.; Elattar, E.; Ginidi, A.R. An Amalgamated Heap and Jellyfish Optimizer for economic dispatch in Combined heat and power systems including N-1 Unit outages. Energy 2022, 246, 123351. [Google Scholar] [CrossRef]
  69. Nazari-Heris, M.; Mehdinejad, M.; Mohammadi-Ivatloo, B.; Babamalek-Gharehpetian, G. Combined heat and power economic dispatch problem solution by implementation of whale optimization method. Neural Comput. Appl. 2019, 31, 421–436. [Google Scholar] [CrossRef]
  70. Ginidi, A.; Elsayed, A.; Shaheen, A.; Elattar, E.; El-Sehiemy, R. An Innovative Hybrid Heap-Based and Jellyfish Search Algorithm for Combined Heat and Power Economic Dispatch in Electrical Grids. Mathematics 2021, 9, 2053. [Google Scholar] [CrossRef]
  71. Shaheen, A.M.; Elsayed, A.M.; Ginidi, A.R.; EL-Sehiemy, R.A.; Alharthi, M.M.; Ghoneim, S.S.M. A novel improved marine predators algorithm for combined heat and power economic dispatch problem. Alex. Eng. J. 2021, 61, 1834–1851. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the SMO.
Figure 2. Logistic map based on chaotic behavior.
Figure 3. Proposed ESMO flowchart.
Figure 4. IEEE 30-bus system [47,48].
Figure 5. Convergences of the proposed ESMO and other SMO versions when minimizing the costs. (a) Average convergence; (b) Median convergence; (c) Best convergence.
Figure 6. Box plot of the acquired fuel costs via the proposed ESMO and other SMO versions.
Figure 7. Convergences of the proposed ESMO and other recent SMO versions when minimizing the losses. (a) Average convergence; (b) Median convergence; (c) Best convergence.
Figure 8. Box plot of the acquired losses via the proposed ESMO and other recent SMO versions.
Figure 9. Convergences of the proposed ESMO and other recent SMO versions when minimizing the emissions. (a) Average convergence; (b) Median convergence; (c) Best convergence.
Figure 10. Box plot of the acquired emissions via the proposed ESMO and other recent SMO versions.
Figure 11. Convergences of the best obtained results using ESMO and SMO when minimizing the CHEPD production costs.
Figure 12. Box plot of the acquired CHEPD production costs via the proposed ESMO and SMO.
Table 1. Data of the tested uni-modal benchmark functions.
Function | Range | Dimension (d) | Minimum
$Fu_1 = \sum_{j=1}^{d} x_j^2$ | [−100, 100] | 30 | 0
$Fu_2 = \sum_{j=1}^{d} |x_j| + \prod_{j=1}^{d} |x_j|$ | [−10, 10] | 30 | 0
$Fu_3 = \sum_{j=1}^{d} \big( \sum_{i=1}^{j} x_i \big)^2$ | [−100, 100] | 30 | 0
$Fu_4 = \max_j \{ |x_j|,\; 1 \le j \le d \}$ | [−100, 100] | 30 | 0
$Fu_5 = \sum_{j=1}^{d-1} \big[ 100\,(x_{j+1} - x_j^2)^2 + (x_j - 1)^2 \big]$ | [−30, 30] | 30 | 0
$Fu_6 = \sum_{j=1}^{d} ([x_j + 0.5])^2$ | [−100, 100] | 30 | 0
$Fu_7 = \sum_{j=1}^{d} j x_j^4 + \mathrm{random}[0, 1)$ | [−128, 128] | 30 | 0
Table 2. Data of the tested multi-modal benchmark functions.
Function | Range | d | Min.
$Fm_1 = \sum_{j=1}^{d} \big( -x_j \sin\!\big(\sqrt{|x_j|}\big) \big)$ | [−500, 500] | 30 | −418.9829 × d
$Fm_2 = \sum_{j=1}^{d} \big( x_j^2 - 10 \cos(2\pi x_j) + 10 \big)$ | [−5.12, 5.12] | 30 | 0
$Fm_3 = -20 \exp\!\big( -0.2 \sqrt{\tfrac{1}{d} \sum_{j=1}^{d} x_j^2} \big) - \exp\!\big( \tfrac{1}{d} \sum_{j=1}^{d} \cos(2\pi x_j) \big) + 20 + e$ | [−32, 32] | 30 | 0
$Fm_4 = \tfrac{1}{4000} \sum_{j=1}^{d} x_j^2 - \prod_{j=1}^{d} \cos\!\big( \tfrac{x_j}{\sqrt{j}} \big) + 1$ | [−600, 600] | 30 | 0
$Fm_5 = \tfrac{\pi}{d} \big\{ 10 \sin^2(\pi z_1) + \sum_{j=1}^{d-1} (z_j - 1)^2 \big[ 1 + 10 \sin^2(\pi z_{j+1}) \big] + (z_d - 1)^2 \big\} + \sum_{j=1}^{d} u(x_j, 10, 100, 4)$, where $z_j = 1 + \tfrac{x_j + 1}{4}$ and $u(x_j, \alpha, \beta, \gamma) = \begin{cases} \beta (x_j - \alpha)^{\gamma} & \text{if } x_j > \alpha \\ 0 & \text{if } -\alpha \le x_j \le \alpha \\ \beta (-x_j - \alpha)^{\gamma} & \text{if } x_j < -\alpha \end{cases}$ | [−50, 50] | 30 | 0
$Fm_6 = 0.1 \big\{ \sin^2(3\pi x_1) + \sum_{j=1}^{d} (x_j - 1)^2 \big[ 1 + \sin^2(3\pi x_j + 1) \big] + (x_d - 1)^2 \big[ 1 + \sin^2(3\pi x_d) \big] \big\} + \sum_{j=1}^{d} u(x_j, 5, 100, 4)$ | [−50, 50] | 30 | 0
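For reference, the following minimal NumPy sketch implements three of the listed benchmarks (the uni-modal sphere Fu1 and the multi-modal Rastrigin Fm2 and Ackley Fm3) in vectorized form. The function names and the zero-vector check are illustrative additions of this edit, not code from the study.

```python
import numpy as np

def fu1_sphere(x):
    # Fu1: sum of squares (uni-modal), global minimum 0 at x = 0
    return np.sum(x ** 2)

def fm2_rastrigin(x):
    # Fm2: Rastrigin (multi-modal), global minimum 0 at x = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def fm3_ackley(x):
    # Fm3: Ackley (multi-modal), global minimum 0 at x = 0
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

if __name__ == "__main__":
    x = np.zeros(30)  # d = 30, as in Tables 1 and 2
    print(fu1_sphere(x), fm2_rastrigin(x), fm3_ackley(x))  # all ~0 at the optimum
```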
Table 3. Comparisons of the mean, STd, and median fitnesses for the uni-modal benchmark functions.
Function | Index | ESMO | SMO | SCA | SSA | WOA | MVO | PSO | DE
Fu1Mean000.0152441.23 × 10−84.32 × 10−1530.318998128.80373.03 × 10−12
STd000.0299893.54 × 10−92.28 × 10−1520.1120615.368383.45 × 10−12
Median01.08 × 10−6493.61832.34 × 10−549401420.000401
Fu2Mean05.33 × 10−2071.15 × 10−50.8481465.03 × 10−1040.3889386.075433.72 × 10−8
STd002.74 × 10−50.9415181.59 × 10−1030.13783465.298811.2 × 10−8
Median05.93 × 10−580.008068.93.42 × 10−3413.91120.00224
Fu3Mean003261.997236.621920,802.2848.11246406.962624,230.57
STd002935.038155.547110,554.3921.7752671.309264174.379
Median00.082227,500294053,000461060630,000
Fu4Mean02.30 × 10−19720.532498.25460245.706341.0769684.4981581.965929
STd0011.046643.28796626.935040.3108840.3293390.430531
Median01.31 × 10−2575.316.246.1144.7913.2
Fu5Mean2.2868480.42779532.7126135.569827.26543407.946515473646.12942
STd6.9366660.6371907.446174.12130.57447615.32936,03927.29727
Median0.0575549.891,580,000777027.386,300185,000140
Fu6Mean0.0004770.0008794.55012100.1005570.323756132.7793.1 × 10−12
STd0.0002810.0004150.35704900.1105250.09739415.1891.46 × 10−12
Median0.0004360.59733.72040.1019341450.000411
Fu7Mean7 × 10−58.84 × 10−50.0243820.0955410.0009860.020859111.00680.026937
STd6.31 × 10−57.12 × 10−50.0207320.050530.0011470.00958421.53780.006322
Median5.98 × 10−50.0004080.6040.1590.002660.1421110.0544
Table 4. Comparisons of the mean, STd, and median fitnesses for the multi-modal benchmark functions.
Function | Index | ESMO | SMO | SCA | SSA | WOA | MVO | PSO | DE
Fm1Mean−12,569.5−12,569.4−3886.1−7816.8−11,630.6−7744.9−6728.1−12,409.8
STd0.0181690.1225.6842.31277.5693.4650.2149.2
Median−12,569.5−12,600−3820−6980−11,500−5590−6720−9930
Fm2Mean0018.3552156.613070112.7184369.244659.28367
STd0021.4369312.89967024.5718918.682616.07679
Median00.99672.2138023337386
Fm3Mean8.88 × 10−168.88 × 10−1611.323082.256883.97 × 10−151.145728.415084.64 × 10−7
STd009.661010.720682.03 × 10−150.703410.410511.38 × 10−7
Median8.88 × 10−168.88 × 10−1614.25.034.09 × 10−157.78.750.00566
Fm4Mean000.235340.0100900.575431.032289.76 × 10−11
STd000.22480.0106700.087470.004892.13 × 10−10
Median001.292.7508.981.040.00756
Fm5Mean0.0008130.0011952.2901945.5425450.0052051.2945244.803223.63 × 10−13
STd0.0014820.0014222.9588653.1222470.0035121.1034710.86673.4 × 10−13
Median0.0002040.014234,800,00021.70.0052112.75.165.03 × 10−5
Fm6Mean0.0015890.001577518.68691.0104730.1811970.08128623.191581.69 × 10−12
STd0.0033590.0032782.8454.7010960.1669550.0431824.1956131.16 × 10−12
Median0.000470.14517,800,00095.10.181178028.80.000244
Table 5. Friedman ranking test results of the mean obtained fitness for the uni-modal and multi-modal benchmark functions.
Function | ESMO | SMO | SCA | SSA | WOA | MVO | PSO | DE
Fu1 | 1.5 | 1.5 | 5 | 4 | 7 | 6 | 8 | 3
Fu2 | 1 | 2 | 4 | 6 | 7 | 5 | 8 | 3
Fu3 | 1.5 | 1.5 | 6 | 4 | 7 | 3 | 5 | 8
Fu4 | 1 | 2 | 7 | 6 | 8 | 3 | 5 | 4
Fu5 | 2 | 1 | 7 | 5 | 3 | 6 | 8 | 4
Fu6 | 3 | 4 | 7 | 1 | 5 | 6 | 8 | 2
Fu7 | 1 | 2 | 5 | 7 | 3 | 4 | 8 | 6
Fm1 | 2 | 1 | 8 | 5 | 4 | 6 | 7 | 3
Fm2 | 1.5 | 1.5 | 4 | 5 | 1.5 | 7 | 8 | 6
Fm3 | 1.5 | 1.5 | 8 | 6 | 3 | 5 | 7 | 4
Fm4 | 1.5 | 1.5 | 6 | 5 | 1.5 | 7 | 8 | 4
Fm5 | 2 | 3 | 6 | 8 | 4 | 5 | 7 | 1
Fm6 | 3 | 2 | 8 | 6 | 5 | 4 | 7 | 1
Summation | 22.5 | 24.5 | 81 | 68 | 59 | 67 | 94 | 49
Mean rank | 1.6071429 | 1.75 | 5.7857143 | 4.8571429 | 4.2142857 | 4.7857143 | 6.7142857 | 3.5
Final ranking | 1 | 2 | 7 | 6 | 4 | 5 | 8 | 3
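The ranks in Table 5 are obtained by ranking the algorithms on each function (ties receive the average rank) and then summing and averaging per column. The sketch below only illustrates that ranking step; the SciPy-based helper and the toy score matrix are illustrative assumptions of this edit, not the authors' code.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(scores):
    """scores: (n_functions, n_algorithms) array of mean fitness values (lower is better).
    Returns per-algorithm rank sums and mean ranks, with average ranks assigned to ties."""
    ranks = np.vstack([rankdata(row) for row in scores])  # rank 1 = best on each function
    return ranks.sum(axis=0), ranks.mean(axis=0)

# Toy usage with 3 hypothetical algorithms on 2 functions
sums, means = friedman_mean_ranks(np.array([[0.0, 0.0, 1.2],
                                            [2.0, 3.0, 1.0]]))
print(sums, means)  # ties share the average rank, e.g., 1.5 for the two zeros
```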
Table 6. Parameter settings of the proposed ESMO and other recent SMO versions when minimizing the costs.
Variables | SMO | Proposed ESMO | LSMO [31] | EQSMO [32] | AOSMO [33] | FDBSMO [34]
Number of individuals | 50 | 50 | 50 | 50 | 25 | 50
Number of iterations | 300 | 300 | 300 | 300 | 300 | 300
Number of function evaluations per individual | 1 | 1 | 1 | 1 | 2 | 1
Total number of runs | 30 (same for all solvers)
Total number of function evaluations | 15,000 (same for all solvers)
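These settings keep the computational budget identical across solvers: population size × iterations × evaluations per individual equals 15,000 for every variant (e.g., 25 × 300 × 2 for AOSMO). The trivial check below is written only as an illustration of that accounting; the dictionary keys are our shorthand for the solvers in Table 6.

```python
# Illustrative check that each configuration in Table 6 spends the same budget:
# population size * iterations * evaluations per individual = 15,000
configs = {
    "SMO":    (50, 300, 1),
    "ESMO":   (50, 300, 1),
    "LSMO":   (50, 300, 1),
    "EQSMO":  (50, 300, 1),
    "AOSMO":  (25, 300, 2),
    "FDBSMO": (50, 300, 1),
}
for name, (pop, iters, evals) in configs.items():
    assert pop * iters * evals == 15_000, name
print("all solvers use 15,000 function evaluations per run")
```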
Table 7. Optimal results of the proposed ESMO and other recent SMO versions when minimizing the costs.
Variables | Initial | SMO | Proposed ESMO | LSMO | EQSMO | AOSMO | FDBSMO
Vgen 11.05001.0999696021.11.11.11.11.1
Vgen 21.04001.0883439991.0876395281.0881066051.0880339241.0876064971.087707536
Vgen 51.01001.0618158671.0615134321.0622833841.0621013611.0617002941.060850352
Vgen 81.01001.0695800171.0703298051.0704016661.0695337771.0692576721.06859214
Vgen 111.05001.0999981441.11.11.11.0999426321.1
Vgen 131.05001.11.11.11.11.11.1
Ta 6–91.07801.0456359181.0443698461.0636433741.0513765671.0049893631.064422161
Ta 6–101.06900.9317595650.9116124390.90078770.9298056660.9744570820.9
Ta 4–121.03201.0078027440.9911225261.0080465571.0175518230.9976831620.995096784
Ta 28–271.06800.9710187980.9647079940.9638763320.9799263650.9778986690.972079256
Qcap 1004.2086184584.9015664324.1368228691.4980538824.9769712434.327592577
Qcap 1201.4517968194.3831485864.944828420.6961422164.8841543334.667399693
Qcap 1501.3068755092.4832689554.7866813934.7927435334.7285497570.045181547
Qcap 17054.9981647862.13029574854.9583791394.998203992
Qcap 2001.9611006744.9989409613.3438587244.8042780294.9848210124.839667485
Qcap 2104.8039111734.9849631324.60275671854.6026191084.998527988
Qcap 2304.9670763444.9298967731.4653124431.1689759944.6265282433.897113281
Qcap 2404.99607791954.8366185844.9272081774.6412781155
Qcap 2900.6692995152.0038155440.0860945761.7510110244.7319562372.870308756
Pgen 199.2400177.0369177.054177.4292091177.0980781177.3830955177.2475961
Pgen 28048.658888948.5472526848.3356001448.5121605948.6917299348.5077348
Pgen 55021.3367204321.3858304521.284505621.2271281821.352033921.23838842
Pgen 82021.1622719321.3622286221.4394188921.0313423920.6157811521.13968009
Pgen 112011.920041411.9381668811.5845465112.1808162411.9516929711.91470514
Pgen 132012.0101399412.00255919121212.0821930312.00039502
F1901.9600799.202799.1134799.2048692799.1730514799.1745189799.12964
Table 8. Comparisons of the ESMO and other reported algorithms when minimizing the costs.
Method | F1
Proposed ESMO | 799.1134
SMO | 799.202
LSMO | 799.2048692
EQSMO | 799.1730514
AOSMO | 799.1745189
FDBSMO | 799.12964
MCSO [49] | 799.3332
IEOA [50] | 799.688
NBO [51] | 799.7516
CSO [52] | 799.8266
BHBOA [53] | 799.9217
AGO [54] | 800.0212
IMFO [55] | 800.3848
TLA [56] | 800.4212
DGWA [57] | 800.433
MSA [58] | 800.5099
GO [54] | 800.9728
SOS [59] | 801.5733
ICA [60] | 801.843
DHSA [61] | 802.2966
GA [62] | 802.1962
Table 9. Optimal results of the proposed ESMO and other recent SMO versions when minimizing the losses.
Variables | Initial | SMO | Proposed ESMO | LSMO | EQSMO | AOSMO | FDBSMO
Vgen 11.05001.11.11.11.11.11.099989573
Vgen 21.04001.0978791.0980381.0975019431.0963196771.0972857161.09804085
Vgen 51.01001.0792291.0801191.0791995191.0800091441.0788527191.081198314
Vgen 81.01001.0881951.0874021.0875969061.084756691.0870536781.088314909
Vgen 111.05001.11.0997671.0985984891.0994552441.11.1
Vgen 131.05001.0996761.11.11.11.11.1
Ta 6–91.07801.0635331.0707521.0494708181.0654528221.070561851.030097886
Ta 6–101.06900.9055390.90.9159314880.9059327130.9027531970.936000592
Ta 4–121.03200.9853920.9892561.0000103560.9854379870.9928732370.987993017
Ta 28–271.06800.9780310.9739710.98448770.9709034020.9781529070.974034656
Qcap 1004.9778534.93642.8389362394.9939222573.9503660740.706305597
Qcap 1201.4556354.8320764.973598540.8896253351.6609062624.999986635
Qcap 1501.52585851.0207069243.0628498924.6499821564.118956228
Qcap 1703.5783624.9963914.5254600064.9992005784.6503988364.986290979
Qcap 2004.6803364.7069863.433938874.83402011353.660338716
Qcap 2104.9917453.1727619864.9919193574.9961405884.999935911
Qcap 2303.5256622.9015744.3551457222.7593360094.9623445854.11599159
Qcap 2404.8658554.9732774.9749771924.8436786064.7026381385
Qcap 2901.4723072.1710792.7329495740.8630152433.0224897892.035194665
Pgen 199.240051.3351.2651.2918217951.2708986351.271909151.28320001
Pgen 28079.9352379.99628808079.997839279.9824479
Pgen 550505050505050
Pgen 820353535353535
Pgen 1120303029.99833311303029.99994255
Pgen 1320404039.98873995404039.99986133
F25.8324002.8735332.8520832.878894862.8708986342.8697482992.865451787
Table 10. Optimal results of the proposed ESMO and other recent SMO versions for minimizing the emissions.
Variables | Initial | SMO | Proposed ESMO | LSMO | EQSMO | AOSMO | FDBSMO
Vgen 11.05001.11.11.11.11.11.1
Vgen 21.04001.0963541.0963571.0979965781.0961804091.0988071221.096206734
Vgen 51.01001.0795031.079411.0815971251.0793460541.0829419381.078506486
Vgen 81.01001.086811.0871661.0910320821.0859899261.094000011.087012582
Vgen 111.05001.0997381.11.0990856561.0982699931.0617972051.1
Vgen 131.05001.0992071.0998341.0999500611.0994858851.11.099966593
Ta 6–91.07801.0156151.05261.0320798551.0236553221.07312661.019289849
Ta 6–101.06900.9373880.9167710.9593512110.9300010040.9160262620.931165942
Ta 4–121.03200.9999440.9906580.985336890.9908006870.9938673391.020575999
Ta 28–271.06800.9815440.981870.9945976260.9752058871.0075831680.995172232
Qcap 1000.0001114.5496161.9278014671.7973722344.7820561822.358931836
Qcap 1201.885484.99994.5482287621.3097955233.7164234294.15318608
Qcap 1503.0203880.3948474.8047256941.0501361291.2554226974.866707435
Qcap 1703.0677370.7763261.879543921.4064711744.8208775580.180389178
Qcap 2003.6553524.9999031.0412185864.9842953464.918907481.612733824
Qcap 2102.0240774.6297594.13176334.6049578174.9997650240.554657733
Qcap 2302.3030444.5973084.76347985151.048769133.612030106
Qcap 2404.1414124.8846613.6037197223.72480263.7289623394.94272429
Qcap 2902.6485563.4674234.6364321240.4775827744.9387232331.611635229
Pgen 199.240063.979463.982463.9370860363.9024825363.9378215364.09423067
Pgen 28067.45080567.448020467.5282481167.5359705167.5418294767.36693912
Pgen 5505049.99984589505049.9995780350
Pgen 8203534.9997521635353534.99988911
Pgen 112029.999769343030303030
Pgen 1320404039.9999523404040
F30.239096330.2047009810.204692470.204708740.204700450.2047132640.204707036
Table 11. Comparisons of the ESMO and other reported algorithms for minimizing the emissions.
Algorithm | F3
Proposed ESMO | 0.20469247
SMO | 0.204700981
Stud KHA [66] | 0.2048
ARBO [64] | 0.2048
Jaya [65] | 0.204834
AGO [54] | 0.20484
MCSO [49] | 0.2048911
KHA [66] | 0.2049
GO [54] | 0.20492
Modified TLA [67] | 0.20493
CSO [49] | 0.2051355
NBO [49] | 0.2052063
Table 12. Wilcoxon rank-sum test of the proposed ESMO against the other SMO versions.
Case Study (p-value) | SMO | LSMO | EQSMO | AOSMO | FDBSMO
Costs minimization | 2.13 × 10^−4 | 5.6 × 10^−7 | 0.0251 | 0.0468 | 0.5997
Losses minimization | 7.39 × 10^−11 | 2.37 × 10^−10 | 3.82 × 10^−10 | 5.00 × 10^−9 | 2.02 × 10^−8
Emissions minimization | 3.82 × 10^−9 | 7.38 × 10^−10 | 5.53 × 10^−8 | 8.10 × 10^−10 | 1.43 × 10^−8
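Each p-value in Table 12 compares the 30 per-run objective values of the proposed ESMO against one SMO variant. A minimal sketch of such a test with SciPy's rank-sum routine is shown below; the synthetic samples are placeholders standing in for the actual run results, which are not reproduced here.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder samples standing in for 30 per-run best objective values of two solvers
esmo_runs = rng.normal(799.25, 0.07, size=30)
other_runs = rng.normal(799.40, 0.10, size=30)

stat, p_value = ranksums(esmo_runs, other_runs)
print(f"rank-sum statistic = {stat:.3f}, p-value = {p_value:.3e}")
# A p-value below 0.05 would indicate a statistically significant difference
```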
Table 13. Evaluation of the chaotic strategy and elitist group with the ESMO.
Case Study | Index | Proposed ESMO | SMO_Elitist | SMO_Chaotic
Costs minimization | Best | 799.1133854 | 799.1730514 | 799.1932399
Costs minimization | Mean | 799.248253 | 799.2955193 | 799.3774219
Costs minimization | Worst | 799.4369998 | 799.4791913 | 799.694656
Costs minimization | STd | 0.074834721 | 0.079802162 | 0.1308863
Costs minimization | STd improvement | - | 6.6379% | 74.9000%
Losses minimization | Best | 2.852083219 | 2.870898634 | 2.876550431
Losses minimization | Mean | 2.867769783 | 2.970575502 | 2.971105614
Losses minimization | Worst | 2.914049709 | 3.155098093 | 3.18800886
Losses minimization | STd | 0.012689716 | 0.100712161 | 0.080521342
Losses minimization | STd improvement | - | 87.4008% | 84.2415%
Emissions minimization | Best | 0.20469247 | 0.20470045 | 0.204710214
Emissions minimization | Mean | 0.204710428 | 0.204798429 | 0.20481888
Emissions minimization | Worst | 0.204893097 | 0.204919011 | 0.204947592
Emissions minimization | STd | 3.53586 × 10^−5 | 7.14083 × 10^−5 | 7.78415 × 10^−5
Emissions minimization | STd improvement | - | 50.4891% | 54.5761%
Table 14. Results of the 84-unit CHEPD system from the production costs minimization using SMO and ESMO.
Unit | SMO | ESMO | Unit | SMO | ESMO | Unit | SMO | ESMO
P1113.997473.44459P38109.974890.6213H51125.9833107.7456
P274.05953113.2413P39109.9961109.9997H52113.3157123.7505
P392.0498999.33216P40511.3279511.2792H5377.7947277.54494
P4133.6151180.4777P4186.87628108.6506H54116.001582.17562
P589.3900294.273P42113.3483152.9563H5596.6079481.74176
P6106.2412107.4841P43116.9203108.1153H5694.5944682.75072
P7261.9628186.3668P44152.2478123.2221H5742.3253440.50032
P8292.9954297.196P4591.8912692.14206H5844.506441.83042
P9299.9997287.3499P4641.6699862.4102H5948.6077847.76195
P10204.7936204.7983P4753.3652874.27693H6047.0002440.429
P11243.5362168.7863P4861.4471180.61822H6141.2613220.30929
P12318.5471318.4253P49107.9406105.2862H6222.2805430.79636
P13304.6454394.3084P50140.602126.0701H6332.2035320.44812
P14304.6668484.0242P51119.013886.26312H6422.9164733.83084
P15484.0212484.0372P5296.18743114.7725H65374.0721385.4842
P16483.8587304.5231P5343.2587742.94842H66372.5447382.6649
P17489.4774489.294P5487.4965748.31214H67375.9403385.8598
P18489.3831489.3403P5565.0512747.80923H68377.0608382.8618
P19511.3923511.334P5662.7052148.98129H6959.9680359.99895
P20512.2124511.3195P5715.4448611.2051H706059.99983
P21530.2838545.5427P5820.5301314.27382H7159.9994560
P22532.1481523.2856P5930.1178228.11363H7259.9998460
P23523.3675524.2476P6026.3487411.00413H7359.9921759.99981
P24526.3741523.6606P6181.7766435.68019H7459.98760
P25523.7434523.3108P6240.4384158.77151H756059.99856
P26523.3641523.5737P6361.9513636.01009H766059.99951
P271010P6441.424765.42881H77120120
P2810.0088310.00009H41108.0767120.3123H78119.9998119.9986
P2910.0006810H42122.9207145.1812H79119.9982119.9995
P3096.9777795.47952H43124.954120.0116H80120119.9997
P31162.9052189.9992H44144.741128.4933H81120120
P32189.9974189.9996H45119.7918120.0113H82119.9875120
P33162.2703190H4676.4410594.34085H83120119.9995
P34168.6657171.2658H4786.52864104.5896H84120120
P35 | 199.9998 | 199.827 | H48 | 93.48256 | 110.0641 | Sum (Hg) | 5000.0000 | 5000.0000
P36 | 168.4265 | 171.566 | H49 | 119.885 | 118.4221 | Sum (Pg) | 12,700.0000 | 12,700.0000
P3761.2683103.6635H50138.2295130.0936F (USD)290,362.8289,498.2
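The dispatch totals in Table 14 match the system demands of the 84-unit case (12,700 MW of electric power and 5000 MWth of heat) for both solvers. A small illustrative feasibility check of this balance is sketched below; the placeholder dispatch vectors and the helper name are assumptions of this edit, not the tabulated solution.

```python
import numpy as np

def check_chepd_balance(p_gen, h_gen, p_demand=12_700.0, h_demand=5_000.0, tol=1e-3):
    """Return True when total generated power and heat match the demands within tol.
    p_gen/h_gen stand for the power and heat dispatch vectors of a candidate solution."""
    return (abs(np.sum(p_gen) - p_demand) <= tol and
            abs(np.sum(h_gen) - h_demand) <= tol)

# Placeholder example: any dispatch whose sums equal the demands passes the check
p_gen = np.full(64, 12_700.0 / 64)  # assumed: 64 units supply electric power (P1-P64)
h_gen = np.full(44, 5_000.0 / 44)   # assumed: 44 units supply heat (H41-H84)
print(check_chepd_balance(p_gen, h_gen))  # True
```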
Table 15. Comparison of ESMO, SMO, and other reported techniques for the CHEPD problem.
Optimizer | F (USD)
Proposed ESMO | 289,498.2
SMO | 290,362.8
JFSO [70] | 290,323.8
WOA [69] | 290,123.97
MPA [71] | 294,717.7
IMPA [71] | 289,903.8
Table 16. Friedman ranking test for the OPF engineering problems.
Case Study | Index | Proposed ESMO | SMO | LSMO | EQSMO | AOSMO | FDBSMO
Costs minimization | Best | 1 | 5 | 6 | 3 | 4 | 2
Costs minimization | Mean | 2 | 5 | 6 | 4 | 3 | 1
Costs minimization | Worst | 3 | 5 | 6 | 4 | 2 | 1
Costs minimization | STd | 2 | 5 | 6 | 3 | 4 | 1
Losses minimization | Best | 1 | 5 | 6 | 4 | 3 | 2
Losses minimization | Mean | 1 | 6 | 4 | 5 | 2 | 3
Losses minimization | Worst | 1 | 6 | 4 | 5 | 2 | 3
Losses minimization | STd | 1 | 6 | 4 | 5 | 2 | 3
Emissions minimization | Best | 1 | 3 | 5 | 2 | 6 | 4
Emissions minimization | Mean | 1 | 5 | 6 | 4 | 3 | 2
Emissions minimization | Worst | 2 | 4 | 6 | 3 | 5 | 1
Emissions minimization | STd | 1 | 5 | 6 | 3 | 4 | 2
Summation | | 17 | 60 | 65 | 45 | 40 | 25
Mean rank | | 1.416667 | 5 | 5.416667 | 3.75 | 3.333333 | 2.083333
Final ranking | | 1 | 5 | 6 | 4 | 3 | 2
Table 17. Friedman’s ANOVA table (MATLAB) for the OPF results.
Case Study | Source | SS | df | MS | Chi-sq | Prob > Chi-sq
Costs minimization | Columns | 125.933 | 5 | 25.1867 | 35.98 | 9.5818 × 10^−7
Costs minimization | Error | 399.067 | 145 | 2.7522 | |
Costs minimization | Total | 525 | 179 | | |
Losses minimization | Columns | 252.533 | 5 | 50.5067 | 72.15 | 3.6512 × 10^−14
Losses minimization | Error | 272.467 | 145 | 1.8791 | |
Losses minimization | Total | 525 | 179 | | |
Emissions minimization | Columns | 213.867 | 5 | 42.7733 | 61.1 | 7.1836 × 10^−12
Emissions minimization | Error | 311.133 | 145 | 2.1457 | |
Emissions minimization | Total | 525 | 179 | | |
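Assuming the usual tie-free relation between the column sum of squares of the ranks and the Friedman statistic, with k = 6 solvers and n = 30 runs, the chi-square entries in Table 17 can be recovered directly from the SS values; for the cost case, for example,

$$\chi^2 \;=\; \frac{12\,SS_{\mathrm{columns}}}{k\,(k+1)} \;=\; \frac{12 \times 125.933}{6 \times 7} \;\approx\; 35.98.$$

The same relation reproduces 72.15 and 61.1 for the loss and emission cases, and, with k = 2, 13.33 for the CHEPD case in Table 19.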
Table 18. Post hoc test of the compared methods (MATLAB).
Costs Minimization
Compared Methods | LCI | DGM | UCI | p-Value
ESMO vs. SMO | −2.8432 | −1.4667 | −0.0901 | 0.0289
ESMO vs. LSMO | −3.6099 | −2.2333 | −0.8568 | 0.0001
ESMO vs. EQSMO | −2.3099 | −0.9333 | 0.4432 | 0.3823
ESMO vs. AOSMO | −2.1765 | −0.8 | 0.5765 | 0.561
ESMO vs. FDBSMO | −1.1432 | 0.2333 | 1.6099 | 0.9968
SMO vs. LSMO | −2.1432 | −0.7667 | 0.6099 | 0.6071
SMO vs. EQSMO | −0.8432 | 0.5333 | 1.9099 | 0.8799
SMO vs. AOSMO | −0.7099 | 0.6667 | 2.0432 | 0.7391
SMO vs. FDBSMO | 0.3235 | 1.7 | 3.0765 | 0.0058
LSMO vs. EQSMO | −0.0765 | 1.3 | 2.6765 | 0.0769
LSMO vs. AOSMO | 0.0568 | 1.4333 | 2.8099 | 0.0356
LSMO vs. FDBSMO | 1.0901 | 2.4667 | 3.8432 | 0
EQSMO vs. AOSMO | −1.2432 | 0.1333 | 1.5099 | 0.9998
EQSMO vs. FDBSMO | −0.2099 | 1.1667 | 2.5432 | 0.1508
AOSMO vs. FDBSMO | −0.3432 | 1.0333 | 2.4099 | 0.267
Losses Minimization
Compared Methods | LCI | DGM | UCI | p-Value
ESMO vs. SMO | −5.1099 | −3.7333 | −2.3568 | 0
ESMO vs. LSMO | −4.1432 | −2.7667 | −1.3901 | 0
ESMO vs. EQSMO | −4.3432 | −2.9667 | −1.5901 | 0
ESMO vs. AOSMO | −3.1099 | −1.7333 | −0.3568 | 0.0045
ESMO vs. FDBSMO | −3.9765 | −2.6 | −1.2235 | 0
SMO vs. LSMO | −0.4099 | 0.9667 | 2.3432 | 0.3415
SMO vs. EQSMO | −0.6099 | 0.7667 | 2.1432 | 0.6071
SMO vs. AOSMO | 0.6235 | 2 | 3.3765 | 0.0005
SMO vs. FDBSMO | −0.2432 | 1.1333 | 2.5099 | 0.1757
LSMO vs. EQSMO | −1.5765 | −0.2 | 1.1765 | 0.9985
LSMO vs. AOSMO | −0.3432 | 1.0333 | 2.4099 | 0.267
LSMO vs. FDBSMO | −1.2099 | 0.1667 | 1.5432 | 0.9994
EQSMO vs. AOSMO | −0.1432 | 1.2333 | 2.6099 | 0.1091
EQSMO vs. FDBSMO | −1.0099 | 0.3667 | 1.7432 | 0.9742
AOSMO vs. FDBSMO | −2.2432 | −0.8667 | 0.5099 | 0.4695
Emissions Minimization
Compared Methods | LCI | DGM | UCI | p-Value
ESMO vs. SMO | 1.7235 | 3.1 | 4.4765 | 0
ESMO vs. LSMO | −1.6099 | −0.2333 | 1.1432 | 0.9968
ESMO vs. EQSMO | −0.7432 | 0.6333 | 2.0099 | 0.779
ESMO vs. AOSMO | −0.7765 | 0.6 | 1.9765 | 0.8161
ESMO vs. FDBSMO | −0.2765 | 1.1 | 2.4765 | 0.2033
SMO vs. LSMO | −4.7099 | −3.3333 | −1.9568 | 0
SMO vs. EQSMO | −3.8432 | −2.4667 | −1.0901 | 0
SMO vs. AOSMO | −3.8765 | −2.5 | −1.1235 | 0
SMO vs. FDBSMO | −3.3765 | −2 | −0.6235 | 0.0005
LSMO vs. EQSMO | −0.5099 | 0.8667 | 2.2432 | 0.4695
LSMO vs. AOSMO | −0.5432 | 0.8333 | 2.2099 | 0.515
LSMO vs. FDBSMO | −0.0432 | 1.3333 | 2.7099 | 0.064
EQSMO vs. AOSMO | −1.4099 | −0.0333 | 1.3432 | 1
EQSMO vs. FDBSMO | −0.9099 | 0.4667 | 1.8432 | 0.9287
AOSMO vs. FDBSMO | −0.8765 | 0.5 | 1.8765 | 0.9062
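Because all solvers use the same number of runs, the simultaneous confidence intervals in Table 18 are symmetric about the difference of group mean ranks (DGM), with a common half-width of roughly 1.3765; for instance, for ESMO vs. SMO in the cost case,

$$-1.4667 \pm 1.3765 \;\Rightarrow\; (-2.8432,\; -0.0901).$$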
Table 19. Friedman’s ANOVA table (MATLAB) for the CHEPD results.
Case Study | Source | SS | df | MS | Chi-sq | Prob > Chi-sq
Costs minimization | Columns | 6.66667 | 1 | 6.66667 | 13.33 | 0.0003
Costs minimization | Error | 8.33333 | 29 | 0.28736 | |
Costs minimization | Total | 15 | 59 | | |
Table 20. Post hoc test of the compared methods (MATLAB) for the CHEPD results.
Costs Minimization
Compared Methods | LCI | DGM | UCI | p-Value
ESMO vs. SMO | 0.308831 | 0.666667 | 1.024502 | 0.000261
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
