Article

An Enhanced Dwarf Mongoose Optimization Algorithm for Solving Engineering Problems

1
Electrical Engineering Department, Jazan University, Jazan 45142, Saudi Arabia
2
Electrical Engineering Department, Suez Canal University, Ismailia 41522, Egypt
3
College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
4
Department of Electrical Power Engineering, Faculty of Engineering, Suez University, Suez 43533, Egypt
5
Reactors Department, Nuclear Research Center, Egyptian Atomic Energy Authority, Cairo 11787, Egypt
*
Authors to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3297; https://doi.org/10.3390/math11153297
Submission received: 14 June 2023 / Revised: 12 July 2023 / Accepted: 24 July 2023 / Published: 26 July 2023

Abstract
This paper proposes a new Enhanced Dwarf Mongoose Optimization Algorithm (EDMOA) with an alpha-directed Learning Strategy (LS) for dealing with different mathematical benchmarking functions and engineering challenges. The DMOA’s core concept is inspired by the dwarf mongoose’s foraging behavior. The suggested algorithm employs three DM social categories: the alpha group, babysitters, and scouts. The family forages as a team, with the alpha female initiating foraging and determining the foraging course, distance traversed, and sleeping mounds. An enhanced LS is included in the novel proposed algorithm to improve the searching capabilities, and its updating process is partially guided by the updated alpha. In this paper, the proposed EDMOA and DMOA were tested on seven unimodal and six multimodal benchmarking tasks. Additionally, the proposed EDMOA was compared against the traditional DMOA for the CEC 2017 single-objective optimization benchmarks. Moreover, their applicability was validated on an important engineering optimization problem regarding the optimal dispatch of combined power and heat. For all applications, the proposed EDMOA and DMOA were compared to several recent and well-known algorithms. The simulation results show that the suggested EDMOA outperforms not only the regular DMOA but also numerous other recent strategies in terms of effectiveness and efficacy.

1. Introduction

In many branches of research, optimization is a broad idea. An optimization problem is a type of challenge with multiple workable solutions, and the purpose of optimization is to find the best of these workable options in order to fulfill design, planning, or operation activities at the lowest possible cost. Optimization problems in artificial intelligence applications are typically unconstrained or discrete [1]. Moreover, finding the best options using conventional mathematically based programming algorithms is challenging. In addition, conventional optimization strategies offer only a single-based solution [2]. Hence, several optimization methods have been developed in recent years to improve the effectiveness of various systems and reduce computational expenses. Convergence to local optima and an indeterminate search space are two drawbacks and limitations of conventional optimization strategies [3].
An optimization issue is described mathematically in terms of three components: decision variables, constraints, and an objective function [4]. The methods used to solve problems in optimization research can be divided into two categories: deterministic algorithms and stochastic algorithms [5]. The two categories of deterministic algorithms, gradient-based and non-gradient-based, are useful for resolving linear, convex, straightforward, low-dimensional, continuous, and differentiable optimization issues [6]. However, when these optimization issues become more complex, the performance of deterministic algorithms is disrupted, and these approaches become stuck in poor local optima. In contrast, many optimization issues in research and real-world applications have traits such as high complexity, large dimensionality, a non-continuous and unknown search space, and a non-convex, non-differentiable, nonlinear objective function [7]. Researchers have developed novel algorithms, known as stochastic algorithms, as a result of these optimization task characteristics and the shortcomings of deterministic algorithms.
One of the most popular stochastic approaches for solving challenging optimization problems is the employment of metaheuristic algorithms [8]. They are effective in resolving high-dimensional, NP-hard, non-differentiable, non-convex optimization issues [9]. Among the benefits that have contributed to the success of metaheuristic algorithms are their effectiveness in handling unknown search spaces and nonlinear or discrete problems, the simplicity of their principles, their independence from the nature of the issue, and their ease of implementation [10].
These metaheuristic algorithms are based on the application of random operators and random search in the domain of the problem solution. Candidate solutions are initially produced at random. The positions of the candidate solutions in the problem-solving space are then modified through a repetition-based process, depending on the stages of the algorithm, to increase the quality of these initial solutions. Ultimately, the problem is solved with the best potential solution found. Because random search is used during the optimization process, a metaheuristic algorithm is not guaranteed to reach the global optimum. Accordingly, solutions produced by metaheuristic algorithms are referred to as pseudo-optimal [11].
Metaheuristic algorithms should be able to provide and manage search operations well, at both global and local levels, in order to organize an efficient search in the problem-solving domain. A global search with the idea of exploration results in a thorough search in the space of problem-solving and a diversion from the best local areas [12]. A local search combined with the idea of exploitation triggers a thorough investigation of the most promising answers in order to converge on potentially superior ones. Given that exploration and exploitation have opposing objectives, striking a balance between them during the search process is essential for metaheuristic algorithms to succeed [13]. Researchers have created a large number of metaheuristic algorithms as a result of the notions of the random search process and quasi-optimal solutions as well as their goal to find better quasi-optimal solutions for these optimization challenges.
Metaheuristic algorithms fall into four primary categories: evolutionary algorithms, physics-based algorithms, human-based algorithms, and swarm intelligence algorithms. Evolutionary algorithms, such as the genetic algorithm [14,15] and the evolution strategy [16], have been built by modelling biological evolutionary traits such as crossover, mutation, and selection. Physics-based algorithms are motivated by physical laws, such as the equilibrium algorithm (EA) [10,13] and the Henry gas solubility algorithm [12]. Swarm intelligence algorithms, such as the whale optimization algorithm (WOA) [17], jellyfish search optimization (JFSO) [18], heap-based technique (HT) [19], grasshopper optimization (GO) [20], particle swarm optimization [21], manta rays foraging optimization (MRFO) [22], artificial bee colony [23], and the marine predators algorithm (MPA) [24], are a family of algorithms influenced by swarming and animal group behavior.
Numerous optimization applications in science use metaheuristic algorithms, including energy [25] and electrical engineering [26]. Additionally, one of these optimization issues is the dispatch of combined power and heat (DCPH) problem. A DCPH is a type of cogeneration system where the heat and power producers generate energy at the lowest possible cost while reducing pollutants [27]. The DCPH issue has been approached in a variety of ways over time, including computational and metaheuristic methods. Thermal power plants produce energy using fossil fuels such as coal, gas, or oil; high-temperature heat is utilized to generate steam, which is then used to produce electricity. As a result, the thermal power plant’s efficiency is limited to between 50 and 60%. Moreover, various pollutants, including oxides of sulphur, carbon, and nitrogen, are created during the heating process, contributing to the warming effect and endangering the biological landscape.
A variety of optimization issues have been handled using metaheuristic algorithms, which have attracted a lot of attention. They share several characteristics, such as the two-stage search method [28] consisting of the intensification (exploitation) and diversification (exploration) stages, respectively. To study diverse areas of the search space, the metaheuristic algorithm first generates randomized operations. In the second stage, the optimization strategy looks for the optimum solution inside the search space. An effective metaheuristic optimization strategy must balance the exploitation and exploration stages in order to avoid entrapment in local optima.
Distributed beetle antennae search (DBAS) has been employed to enhance multiportfolio selection without violating the privacy of individual portfolios [29]. In addition, the egret swarm optimization algorithm (ESOA) has been illustrated in [30] and assessed on 36 benchmark functions, using an aggressive strategy with encirclement mechanisms and random wandering to provide optimal solution exploration. In [31], an online solution to the static nonlinear programming problem with time-varying characteristics has been provided using the beetle antennae search (BAS). A multilayer perceptron neural network model has been identified in [32], and the corresponding weights for the prediction of polymeric-based composite materials under changing amplitude loading were optimized using a genetic algorithm. In [33], a gradient-based algorithm has been implemented and designed around Newton’s gradient-based concept, combined with a crossover operator to increase the variety of solutions provided. In order to analyze material life span using neural networks, a system identification framework based on nonlinear autoregressive exogenous inputs (NARX) has been established, as manifested in [34]. Also, the fatigue lifetimes of composite materials under multiaxial and multivariable loadings have been estimated using radial basis function (RBFNN)-NARX models and the multilayer perceptron-NARX. The optimization of fatigue-life forecasting for polymeric-based composites over varied amplitude loading under diverse stress ratio circumstances has been investigated [35] in connection with the availability of restricted fatigue data. The behavior of a beetle has been combined with deep convolutional neural networks (CNNs) in [36] for image classification tasks and applied to the ResNet and LeNet-5 architectures.
In [37], human intellectual and reflective capabilities in socially growing actions have been used to formulate a growth optimizer which has been developed for parameter identification of solar photovoltaic cells and modules.
The Dwarf Mongoose Optimization Algorithm (DMOA) is a novel technique inspired by the dwarf mongoose’s foraging behavior, presented in 2022 by J.O. Agushaka et al. [38,39]. It has been used to address various real engineering optimization issues because of its high global searching ability and resilience [40,41,42,43,44,45,46,47].
This paper proposes a new Enhanced Dwarf Mongoose Optimization Algorithm (EDMOA) with an alpha-directed Learning Strategy (LS) for dealing with different mathematical benchmarking functions and engineering challenges. First, the proposed EDMOA and DMOA were tested on seven unimodal and six multimodal benchmarking tasks. Second, the proposed EDMOA was compared against the traditional DMOA for the CEC 2017 single-objective optimization benchmarks. Further, their applicability was validated on an important engineering optimization problem regarding the optimal dispatch of combined power and heat. For all applications, the proposed EDMOA and DMOA were compared to several other recent and well-known algorithms. The simulation results show that the suggested EDMOA outperforms not only the regular DMOA but also numerous other recent strategies in terms of effectiveness and efficacy.

2. EDMOA Version with Alpha-Directed LS

2.1. Standard DMOA

In the DMOA, the population of the dwarf mongoose (DM) employs three social categories: the alpha group, babysitters, and scouts. The family forages as a team, with the alpha female initiating foraging and determining the foraging course, distance traversed, and sleeping mounds. A subset of the DM population, which is generally a combination of male and female types, provides babysitters. They stay behind with the children until the rest of the party arrives in the afternoon. The babysitters are first swapped to begin foraging alongside the group (exploitation phase). The DM group does not create a place to nest to house the young; instead, they always change the sleeping mound, seeking a new foraged spot. The DMs have established a seminomadic manner of existence in an area sufficiently large to accommodate everyone in the party (exploration phase). Nomadic behavior avoids excessive use of a certain region. It additionally assures that the whole terrain is explored to guarantee that no previously visited sleeping mounds are returned to [38].
In the DMOA, the initial DM populations of the candidate solutions of NDM individuals are randomly generated as follows:
$$D_{j,d}(0) = D_{\min,d} + \mathrm{rand}(0,1)\cdot\left(D_{\max,d} - D_{\min,d}\right),\qquad j = 1{:}N_{DM},\; d = 1{:}Dim \tag{1}$$
where Dj,d represents the location of each searching DM individual (j) in each control-variable dimension (d); Dmin,d and Dmax,d denote the minimum and maximum bounds of each control variable (d); and Dim refers to the number of decision variables of the optimizing task.
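As a concrete sketch of Equation (1), the random initialization can be written in Python with NumPy (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def initialize_population(n_dm, dim, d_min, d_max, rng=None):
    """Eq. (1): scatter the NDM searching individuals uniformly at random
    between the lower and upper bounds of each control variable."""
    rng = np.random.default_rng() if rng is None else rng
    d_min = np.asarray(d_min, dtype=float)
    d_max = np.asarray(d_max, dtype=float)
    return d_min + rng.random((n_dm, dim)) * (d_max - d_min)
```

With scalar bounds, NumPy broadcasting applies the same range to every dimension; per-variable bounds can instead be passed as length-Dim arrays.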
The fitness of every solution is calculated once the population has been initialized. Equation (2) calculates the probability worth of every group’s fitness, and the alpha female (α) is chosen according to this probability [38].
$$\alpha = \frac{F_j}{\sum_{j=1}^{N_{DM}} F_j} \tag{2}$$
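Equation (2) amounts to a fitness-proportional (roulette-wheel) pick of the alpha. A minimal sketch, assuming non-negative fitness scores (for minimization tasks, the raw objective would first be mapped to such a score):

```python
import numpy as np

def select_alpha(fitness, rng=None):
    """Eq. (2): choose the alpha female with probability proportional to
    each mongoose's share of the total fitness. Returns the index of the
    chosen individual; assumes non-negative fitness scores."""
    rng = np.random.default_rng() if rng is None else rng
    fitness = np.asarray(fitness, dtype=float)
    return rng.choice(len(fitness), p=fitness / fitness.sum())
```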
In the alpha group, the number of DMs corresponds to the difference between the whole group number (NDM) and the number of babysitters (Bst). Peep represents the alpha female’s vocalization, which keeps the DM family on track. Each DM sleeps in the first sleeping mound, which has been assigned to ∅. The DMOA employs the formula provided in Equation (3) to generate a potential food position.
$$D_{k,d}(i+1) = D_{k,d}(i) + \mathrm{rand}(0,1)\times peep,\qquad k = 1{:}N_{DM}-B_{st},\; d = 1{:}Dim \tag{3}$$
where i refers to the existing iteration. The sleeping mound may be formulated after each iteration as follows:
$$SM_j = \frac{F_{j+1} - F_j}{\max\left|F_{j+1} - F_j\right|},\qquad j = 1{:}N_{DM}-B_{st} \tag{4}$$
where j refers to each DM in the scout group, whose size equals the difference between the whole group number (NDM) and the number of babysitters (Bst).
According to that, Equation (5) gives the mean value (ψ) of the sleeping mound discovered.
$$\psi = \frac{\sum_{j=1}^{N_{DM}} SM_j}{N_{DM}},\qquad j = 1{:}N_{DM}-B_{st} \tag{5}$$
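Equations (4) and (5) can be sketched as follows; the signed fitness change normalized by the largest absolute change is one reading of Equation (4), so this is illustrative rather than definitive:

```python
import numpy as np

def sleeping_mound_values(fit_new, fit_old):
    """Eq. (4): per-forager change in fitness, normalized by the largest
    absolute change; Eq. (5): the mean sleeping-mound value psi."""
    diff = np.asarray(fit_new, dtype=float) - np.asarray(fit_old, dtype=float)
    scale = np.max(np.abs(diff))
    sm = diff / scale if scale > 0 else np.zeros_like(diff)
    return sm, sm.mean()
```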
The DMOA technique advances to the scouting stage, in which a subsequent source of food or sleeping mound is determined once the babysitting exchange criterion is reached. In DMOA, scouting occurs concurrently with foraging, where the scouts search for an alternative sleeping mound, ensuring exploration. The resulting motion is modelled as a successful or unsuccessful attempt at establishing a fresh mound, based on the mongooses’ total performance. The scout mongoose can be simulated by Equation (6).
$$D_{k,d}(i+1) = \begin{cases} D_{k,d}(i) - CF\times\mathrm{rand}(0,1)\times\left(D_{k,d}(i) - M\right) & \text{if } \psi_{j+1} > \psi_j \\ D_{k,d}(i) + CF\times\mathrm{rand}(0,1)\times\left(D_{k,d}(i) - M\right) & \text{else} \end{cases}\qquad k = 1{:}N_{DM},\; d = 1{:}Dim \tag{6}$$
where CF is decreasing linearly as iterations progress, as represented in Equation (7), and M indicates a vector that decides the mongoose’s migration to the next sleeping mound, as represented in Equation (8). The CF parameter signifies the parameter that regulates the mongoose group’s collective–volitive motion.
$$CF = \left(1 - \frac{i}{Iter_{\max}}\right)^{2\times\frac{i}{Iter_{\max}}} \tag{7}$$
$$M = \sum_{j=1}^{N_{DM}} \frac{D_j\times SM_j}{D_j} \tag{8}$$
where Itermax refers to the highest number of iterations.
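The scout phase of Equations (6)–(8) can be sketched as below. The migration vector M is computed under one plausible reading of Equation (8), and all names are illustrative:

```python
import numpy as np

def scout_update(pop, sm, psi_new, psi_old, it, iter_max, rng=None):
    """Scout-phase move of Eqs. (6)-(8). CF (Eq. (7)) decays as iterations
    progress; M (Eq. (8), one plausible reading) aggregates positions
    weighted by the sleeping-mound values; each scout then moves relative
    to M depending on whether the mean sleeping-mound value improved."""
    rng = np.random.default_rng() if rng is None else rng
    cf = (1.0 - it / iter_max) ** (2.0 * it / iter_max)            # Eq. (7)
    m = np.sum(pop * sm[:, None], axis=0) / np.sum(pop, axis=0)    # Eq. (8)
    r = rng.random(pop.shape)
    if psi_new > psi_old:
        return pop - cf * r * (pop - m)   # mound improved: move away
    return pop + cf * r * (pop - m)       # otherwise: move toward M
```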

2.2. Proposed EDMOA

In this section, a new EDMOA with an alpha-directed Learning Strategy (LS) for dealing with different mathematical benchmarking functions and engineering challenges is proposed. An enhanced LS is included in the novel proposed solver to improve the searching capabilities, and its updating process is partially guided by the updated alpha. The alpha-directed LS is merged with the formula provided in Equation (3) to generate a potential food position in an effort to enhance the searching capability. Therefore, the change in position of every searching solution within the search space is upgraded as follows:
$$D_{k,d}(i+1) = \begin{cases} BestDM_d(i) + \mathrm{rand}(0,1)\times\left(D_{k,d}(i) - D_{R,d}(i)\right) & \text{if } \mathrm{rand} < CP \\ D_{k,d}(i) + \mathrm{rand}(0,1)\times peep & \text{else} \end{cases}\qquad k = 1{:}N_{DM}-B_{st},\; d = 1{:}Dim \tag{9}$$
where BestDMd represents the location of the alpha, i.e., the searching individual with the best fitness score; DR,d is a randomly picked searching individual from the DM population; and CP denotes the choice probability. CP is set to 50% to establish a balance between the exploration features given in Equation (3) and the increased exploitation qualities of Equation (9). With this structure, the exploitation features are considerable and strong, while the exploratory seeking qualities are preserved via the usual update at the same time. Figure 1 depicts the critical phases of the proposed EDMOA.
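A minimal sketch of the alpha-directed update of Equation (9), with the standard move of Equation (3) as the fallback branch; the peep value shown is an assumed placeholder, not a value from the paper:

```python
import numpy as np

def edmoa_food_position(pop, best, peep=2.0, cp=0.5, rng=None):
    """Alpha-directed learning strategy, Eq. (9). With probability CP a
    forager learns from the alpha (best-so-far position) and a randomly
    picked peer; otherwise it keeps the standard DMOA move of Eq. (3)."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    new_pop = np.empty_like(pop)
    for k in range(n):
        if rng.random() < cp:                # exploitation branch
            peer = rng.integers(n)           # random individual D_R
            new_pop[k] = best + rng.random(dim) * (pop[k] - pop[peer])
        else:                                # exploration branch, Eq. (3)
            new_pop[k] = pop[k] + rng.random(dim) * peep
    return new_pop
```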

3. Application Assessment for Benchmarking Models

This part assesses the efficacy and functioning of the prescribed EDMOA and DMOA. They are tested on seven unimodal and six multimodal benchmarking tasks. Comprehensive information regarding their mathematical formulations, variable dimensions, and acceptable ranges is outlined in Table 1 and Table 2.
The proposed EDMOA is evaluated against the conventional DMOA. Both strategies are used in comparable circumstances: the population size and the number of iterations were set to 30 and 1000, respectively. To quantify the results, the best, average, worst, and standard deviation (STd) metrics are utilized. Table 3 shows the corresponding outcomes based on both EDMOA and DMOA for the unimodal and multimodal tasks, accordingly. As shown, the proposed EDMOA has greater strength with regard to obtaining the least mean, STd, and median in more than 80% of the benchmark functions.
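The reported statistics (best, average, worst, and STd over repeated independent runs) can be computed with a helper such as the following sketch:

```python
import numpy as np

def summarize_runs(best_scores):
    """Best, average, worst, and standard deviation (STd) of the final
    objective values over repeated independent runs of one algorithm."""
    s = np.asarray(best_scores, dtype=float)
    return {"best": s.min(), "average": s.mean(),
            "worst": s.max(), "std": s.std()}
```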
Also, Figure 2 displays the converging properties of the proposed EDMOA and DMOA for the 13 benchmarking tasks. As shown, the proposed EDMOA records average improvement over the standard DMOA of 100%, 100%, 99.4%, 88.5%, 82.2%, 100%, 8.9%, 69.8%, 87.2%, 96.1%, and 97.3% for the optimization benchmarks Fn1, Fn2, Fn3, Fn4, Fn5, Fn6, Fn7, Fn9, Fn11, Fn12, and Fn13, respectively. On the other side, the standard DMOA records average improvement over the EDMOA of 34.1% and 48.4% for only two optimization benchmarks: Fn8 and Fn10, respectively.
The proposed EDMOA is contrasted to other contemporary and well-known applied optimization strategies for benchmarking task functions including the basic DMOA, PSO [21], DE [49], Multi-Verse Optimizer (MVO) [50], salp swarm algorithm (SSA) [51], and sine–cosine algorithm (SCA) [52].
In order to judge the advantage of the proposed algorithm over the well-known ones, the size of the population, the number of iterations, and the number of calculations of the criterion function in these algorithms are displayed in Table 4. Not only that, but this table provides the specified metaparameters of the compared algorithms. To avoid the impact of randomness, thirty different implementations of each method for each benchmark were tested. Based on these circumstances, Table 5 displays the regarding obtained results of the mean and STd of the objective value for the 13 benchmarking tasks under study.
Additionally, Table 6 shows the results of a Friedman ranking test for the implemented benchmark functions. As demonstrated, the presented EDMOA successfully acquires the best performance by recording the first rank, achieving the smallest mean rank of 2.12. Secondly, the DE comes with a mean rank of 2.27, while MVO achieves the third rank with 4.04. After that, the SSA, DMOA, SCA, and PSO come subsequently with mean ranks of 4.27, 4.54, 5.35, and 5.42, respectively.

4. Application Assessment for CEC 2017 Benchmarking Models

Typically, it can be difficult to measure how “good” an optimization technique is because of the lack of a mathematical proof; hence, benchmark functions play a significant part in assessing the efficacy of these techniques. As a result, the performance of the proposed EDMOA and DMOA algorithms is assessed in this study considering 12 benchmark functions from the CEC 2017 competition [53]. The functions considered include unimodal, multimodal, hybrid, and composite types. Table 7 depicts those unconstrained benchmarking functions.
The proposed EDMOA is compared against the traditional DMOA for the CEC 2017 single-objective optimization benchmarks presented in Table 7. Both methods are employed in similar situations. Table 8 displays the best, average, and worst outcomes based on both EDMOA and DMOA. As demonstrated, the suggested EDMOA has a higher strength in terms of obtaining the lowest mean, STd, and median in more than 85% of benchmark functions. Figure 3 further shows the convergence features of the proposed EDMOA and DMOA for twelve benchmarks considered for the CEC 2017 benchmarking tasks. As demonstrated, in acquiring the best fitness score, the proposed EDMOA records average improvement over the standard DMOA of approximately 15%. Based on the derived average fitness score, the suggested EDMOA shows a 20.5% improvement over the traditional DMOA. Based on the derived worst fitness score, the suggested EDMOA shows a 21% improvement over the traditional DMOA.
To discuss the convergence properties of the proposed algorithm towards the optimum fitness, Figure 4 displays the fitness obtained by the proposed EDMOA, the optimum value, and the percentage difference between the two. As shown, the proposed algorithm is certified to be convergent due to the close coincidence between the fitness obtained by the proposed EDMOA and the optimum value for the considered benchmarks. Likewise, a very small percentage difference is illustrated: there is no difference for the functions Fn15, Fn16, Fn18, Fn21, and Fn23, while the maximum difference does not exceed 1.5% for Fn24.
The proposed EDMOA is compared to various commonly used optimization approaches for benchmarking task functions, such as the basic DMOA, PSO [21], circle search algorithm [54], gray wolf algorithm (GWO) [55,56], slime mould algorithm (SMA) [48,57], BAS [31], and Eagle Perching Optimizer (EPO) [58].
Table 9 displays the size of the population, the number of iterations, and the number of computations of the criterion function in the aforementioned algorithms to determine the benefit of the proposed method over the comparative algorithms. Furthermore, this table contains the defined metaparameters of the compared methods. To avoid the influence of randomization, thirty distinct implementations of each method were examined for each benchmark.
Table 10 shows the findings of the average objective value for the CEC 2017 benchmarking tasks. Table 11 also displays the results of a Friedman ranking test for the implemented benchmark functions. As demonstrated, the provided EDMOA achieves the best performance by recording the top rank with the smallest mean rank of 1.36. Secondly, the SMA comes with a mean rank of 2.46, while the basic DMOA achieves the third rank with 2.714. After that, the GWO, Circle, PSO, EPO, and BAS come subsequently with mean ranks of 4.14, 4.39, 5.89, 6.83, and 7.67, respectively.
To assess the computational complexity and run time of EDMOA and DMOA in solving the considered benchmarking models, “Big O notation” is considered [59]. First, Dim, NDM, and Itermax are specified for each considered benchmark. For both the DMOA and EDMOA, the computational complexity of the search can be estimated as O(Itermax × NDM × Dim) × O(F(x)). Therefore, Table 12 provides the computational complexity for each considered benchmark and the corresponding computing run time. As shown, the standard DMOA records slightly lower computing time than the EDMOA with its additional learning strategy.

5. Application for Engineering Optimization Problem: Optimal Dispatch of Combined Power and Heat

In this section, the application validity is demonstrated for one of the important engineering optimization problems: the optimal Dispatch of Combined Power and Heat (DCPH) [60,61,62,63,64]. Figure 5 depicts the important stakeholders in the DCPH’s attempt to supply the power and heat requirements of facilities and structures. The primary goal of the DCPH framework is to determine the economical prospective rates for heat produced by heat units, electricity produced by power units, and the heat and power produced by CPH units, in order to keep fuel costs low while meeting heat and power demands and limits.
In this context, the DCPH concern is addressed using a large-scale testing scenario involving 48 different units. Its goal is to reduce the system’s overall production expenses. As a consequence, the production costs as a minimization target (Fn) can be stated as follows:
$$F_n = \sum_{a=1}^{N_{PG}} C_a(PG_a) + \sum_{b=1}^{N_{HT}} C_b(HT_b) + \sum_{c=1}^{N_{CPH}} C_c(PG_c, HT_c) \tag{10}$$
where NPG, NHT, and NCPH symbolize the numbers of power, heat, and CPH co-facilities, correspondingly, and Ca(PGa) [65], Cb(HTb), and Cc(PGc,HTc) represent the cost functions for the power, heat, and CPH components, correspondingly, which can be mathematically modelled in Equations (11)–(13):
$$C_a(PG_a) = A_{1,a}\times PG_a^2 + A_{2,a}\times PG_a + A_{3,a} + \left|A_{4,a}\times\sin\left(A_{5,a}\times\left(PG_{a,\min} - PG_a\right)\right)\right| \tag{11}$$
$$C_b(HT_b) = B_{1,b}\times HT_b^2 + B_{2,b}\times HT_b + B_{3,b} \tag{12}$$
$$C_c(PG_c, HT_c) = C_{1,c}\times PG_c^2 + C_{2,c}\times PG_c + C_{3,c} + C_{4,c}\times HT_c^2 + C_{5,c}\times HT_c + C_{6,c}\times HT_c\times PG_c \tag{13}$$
where A1:A5 constitute the parameters of the costs regarding power units; B1:B3 are the parameters of the costs regarding heat units; C1:C6 illustrate the parameters of the costs regarding CPH units.
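The DCPH objective of Equations (10)–(13) can be sketched as simple cost functions. The absolute value around the valve-point sine term in Equation (11) is assumed (as is conventional for valve-point loading models), and all names are illustrative:

```python
import numpy as np

def power_unit_cost(pg, pg_min, a):
    """Eq. (11): quadratic fuel cost of a power unit plus valve-point term."""
    a1, a2, a3, a4, a5 = a
    return a1 * pg**2 + a2 * pg + a3 + abs(a4 * np.sin(a5 * (pg_min - pg)))

def heat_unit_cost(ht, b):
    """Eq. (12): quadratic cost of a heat-only unit."""
    b1, b2, b3 = b
    return b1 * ht**2 + b2 * ht + b3

def cph_unit_cost(pg, ht, c):
    """Eq. (13): CPH unit cost, including the coupling term C6 * HT * PG."""
    c1, c2, c3, c4, c5, c6 = c
    return c1 * pg**2 + c2 * pg + c3 + c4 * ht**2 + c5 * ht + c6 * ht * pg

def total_production_cost(power_units, heat_units, cph_units):
    """Eq. (10): the DCPH objective Fn summed over all three unit types.
    Each argument is a list of (dispatch values, parameter tuple) entries."""
    return (sum(power_unit_cost(pg, pg_min, a) for pg, pg_min, a in power_units)
            + sum(heat_unit_cost(ht, b) for ht, b in heat_units)
            + sum(cph_unit_cost(pg, ht, c) for pg, ht, c in cph_units))
```

A metaheuristic such as the EDMOA would then minimize `total_production_cost` subject to the demand and capacity limits of the units.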
A large case investigation comprising 48 separate units is examined for the DCPH problem. This system, which consists of twelve CPH facilities, twenty-six power facilities, and ten heat facilities, has an electricity burden of 4700 MW and a heating demand of 2500 MWth. The entire data for the investigated system can be found in [66]. Table 13 displays both the electricity and heating productions from the power, heat, and CPH units based on both the proposed EDMOA and DMOA approaches. Figure 6 also depicts the convergence characteristics of the developed EDMOA and DMOA approaches. In terms of overall production costs, the suggested EDMOA clearly outperforms the DMOA, with a reduced fuel cost in the DCPH system of $116,406.3772 compared to $121,431.6763 by the standard DMOA. The percentage decrease of 4.13% is considerable.
Added to that, Figure 7 shows the distribution of the obtained costs over the thirty different executions for both the proposed EDMOA and the standard DMOA techniques for the DCPH system. Likewise, Figure 8 shows the corresponding improvement percentages of EDMOA over DMOA considering the thirty different executions for the DCPH system.
As demonstrated, the proposed EDMOA outperforms the basic DMOA by obtaining lower costs in all completed runs. The suggested EDMOA achieves a maximum reduction of 5.78% in the fifth run and a minimum reduction of 3.3% in the second run, and it records minimum, mean, maximum, and standard deviation values of $116,897.89, $118,004.35, $119,424.03, and $597.05 per hour, with improvements of 1.69%, 1.7%, 4%, and 46.03%, respectively.
Also, comparisons are made with the results of powerful optimization algorithms used in solving ED problems in the literature, as shown in Table 14. As shown, the suggested EDMOA provides superior performance features compared to the others. It achieves the smallest worst, average, and best production costs of $117,973.7, $117,331.3, and $116,406.4 per hour, respectively, compared to all other previously reported results. These findings demonstrate the great effectiveness and efficacy of the proposed EDMOA not only over the standard DMOA but also over several other recent techniques.
In this regard, Table 15 displays the specified parameters and several successful applications of the compared algorithms. To avoid the influence of randomness, fifty separate implementations have been examined for each algorithm for each benchmark.

6. Conclusions

For coping with various mathematical benchmarking functions and engineering issues, this work introduces a novel EDMOA with an alpha-directed LS. The proposed method makes use of three DM social groups: the alpha group, babysitters, and scouts. The family forages as a group, with the alpha female starting foraging and deciding on the foraging route, distance travelled, and sleeping mounds. The innovative suggested approach includes an enhanced LS to boost searching capabilities, and its updating process is partially led by the revised alpha. First, the proposed EDMOA and DMOA are evaluated on seven unimodal benchmarking tasks and six multimodal benchmarking tasks. Second, using the CEC 2017 single-objective benchmarks, the proposed EDMOA is compared to the standard DMOA. Furthermore, their applicability is investigated for the DCPH engineering problem. The suggested EDMOA and DMOA are compared to numerous other contemporary and well-known algorithms for all applications. Quantitatively, the following can be highlighted:
  • For the first 13 benchmarks, the proposed EDMOA outperforms the regular DMOA in terms of obtaining the lowest mean, STd, and median in more than 80% of benchmark functions. Furthermore, the provided EDMOA achieves the greatest performance by ranking first among DE, MVO, SSA, DMOA, SCA, and PSO.
  • For the CEC 2017 benchmarks, the suggested EDMOA has a higher strength compared to the standard DMOA in terms of obtaining the lowest mean, STd, and median in more than 85% of benchmark functions. Also, the provided EDMOA achieves the best performance by recording the top rank compared to SMA, DMOA, GWO, Circle, and PSO.
  • For the DCPH engineering problem, the proposed EDMOA outperforms the basic DMOA by obtaining lower costs in all completed runs, with savings ranging from 3.3% to 5.78%. Furthermore, the suggested EDMOA surpasses not only the standard DMOA but also a number of other recent methods.
Given the great efficacy of the proposed novel technique in the preceding investigations, it is advised that the proposed method be evaluated for sufficiency in future attempts to address the optimal integration of photovoltaic energies in electrical power networks. Also, it can be developed for optimal mathematical representations of these sources and others such as fuel cells and batteries.

Author Contributions

Conceptualization, G.M. and A.G.; methodology, A.M.S., I.H.S. and G.M.; software, A.M.S. and A.G.; validation, A.M.E.-R., A.M.S., A.F.Y. and M.A.T.; formal analysis, I.H.S., A.M.E.-R., A.F.Y. and M.A.T.; investigation, A.M.E.-R., A.G., G.M. and M.A.T.; resources, I.H.S. and A.F.Y.; data curation, A.M.S. and M.A.T.; writing—original draft preparation, A.M.S., G.M. and A.G.; writing—review and editing, A.M.E.-R., A.F.Y. and M.A.T.; visualization, A.M.E.-R., A.M.S. and I.H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hajipour, V.; Mehdizadeh, E.; Tavakkoli-Moghaddam, R. A novel Pareto-based multi-objective vibration damping optimization algorithm to solve multi-objective optimization problems. Sci. Iran. 2014, 21, 2368–2378.
  2. Hajipour, V.; Kheirkhah, A.S.; Tavana, M.; Absi, N. Novel Pareto-based meta-heuristics for solving multi-objective multi-item capacitated lot-sizing problems. Int. J. Adv. Manuf. Technol. 2015, 80, 31–45.
  3. Wu, G. Across neighborhood search for numerical optimization. Inf. Sci. 2016, 329, 597–618.
  4. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453.
  5. Alshamrani, A.M.; Alrasheedi, A.F.; Alnowibet, K.A.; Mahdi, S.; Mohamed, A.W. A Hybrid Stochastic Deterministic Algorithm for Solving Unconstrained Optimization Problems. Mathematics 2022, 10, 3032.
  6. Koc, I.; Atay, Y.; Babaoglu, I. Discrete tree seed algorithm for urban land readjustment. Eng. Appl. Artif. Intell. 2022, 112, 104783.
  7. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924.
  8. Agrawal, P.; Alnowibet, K.; Mohamed, A.W. Gaining-sharing knowledge based algorithm for solving stochastic programming problems. Comput. Mater. Contin. 2022, 71, 2847–2868.
  9. Bertsimas, D.; Mundru, N. Optimization-Based Scenario Reduction for Data-Driven Two-Stage Stochastic Optimization. Oper. Res. 2022.
  10. Liu, Q.; Liu, M.; Wang, F.; Xiao, W. A dynamic stochastic search algorithm for high-dimensional optimization problems and its application to feature selection. Knowl.-Based Syst. 2022, 244, 108517.
  11. de Armas, J.; Lalla-Ruiz, E.; Tilahun, S.L.; Voß, S. Similarity in metaheuristics: A gentle step towards a comparison methodology. Nat. Comput. 2022, 21, 265–287.
  12. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comput. Syst. 2019, 101, 646–667.
  13. Dehghani, M.; Hubalovsky, S.; Trojovsky, P. Tasmanian Devil Optimization: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 19599–19620.
  14. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning, 1989th ed.; Addison-Wesley Publishing Company, Inc.: Redwood City, CA, USA, 1989.
  15. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992.
  16. Das, D.; Patvardhan, C. A new hybrid evolutionary strategy for reactive power dispatch. Electr. Power Syst. Res. 2003, 65, 83–90.
  17. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  18. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535.
  19. Askari, Q.; Saeed, M.; Younas, I. Heap-based optimizer inspired by corporate rank hierarchy for global optimization. Expert Syst. Appl. 2020, 161, 113702.
  20. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
  21. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995.
  22. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300.
  23. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  24. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
  25. Dehghani, M.; Mardaneh, M.; Malik, O.P.; Guerrero, J.M.; Sotelo, C.; Sotelo, D.; Nazari-Heris, M.; Al-Haddad, K.; Ramirez-Mendoza, R.A. Genetic algorithm for energy commitment in a power system supplied by multiple energy carriers. Sustainability 2020, 12, 10053.
  26. Montazeri, Z.; Niknam, T. Optimal utilization of electrical energy from power plants based on final energy consumption using gravitational search algorithm. Electr. Eng. Electromech. 2018, 70–73.
  27. Chen, X.; Li, K.; Xu, B.; Yang, Z. Biogeography-based learning particle swarm optimization for combined heat and power economic dispatch problem. Knowl.-Based Syst. 2020, 208, 106463.
  28. Abualigah, L.; Diabat, A.; Geem, Z.W. A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 2020, 10, 3827.
  29. Khan, A.T.; Cao, X.; Liao, B.; Francis, A. Bio-inspired Machine Learning for Distributed Confidential Multi-Portfolio Selection Problem. Biomimetics 2022, 7, 124.
  30. Chen, Z.; Francis, A.; Li, S.; Liao, B.; Xiao, D.; Ha, T.T.; Li, J.; Ding, L.; Cao, X. Egret Swarm Optimization Algorithm: An Evolutionary Computation Approach for Model Free Optimization. Biomimetics 2022, 7, 144.
  31. Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Li, S.; Cao, X. Time-varying minimum-cost portfolio insurance under transaction costs problem via Beetle Antennae Search Algorithm (BAS). Appl. Math. Comput. 2020, 385, 125453.
  32. Rohman, M.N.; Hidayat, M.I.P.; Purniawan, A. Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm. AIP Conf. Proc. 2018, 1945, 020019.
  33. Moustafa, G.; Elshahed, M.; Ginidi, A.R.; Shaheen, A.M.; Mansour, H.S.E. A Gradient-Based Optimizer with a Crossover Operator for Distribution Static VAR Compensator (D-SVC) Sizing and Placement in Electrical Systems. Mathematics 2023, 11, 1077.
  34. Hidayat, M.I.P. System Identification Technique and Neural Networks for Material Lifetime Assessment Application. Stud. Fuzziness Soft Comput. 2015, 319, 773–806.
  35. Hidayat, M.I.P.; Yusoff, P.S.M.M. Optimizing neural network prediction of composite fatigue life under variable amplitude loading using bayesian regularization. In Composite Materials Technology: Neural Network Applications; CRC Press: Boca Raton, FL, USA, 2009; pp. 221–250.
  36. Khan, A.H.; Cao, X.; Xu, B.; Li, S. Beetle Antennae Search: Using Biomimetic Foraging Behaviour of Beetles to Fool a Well-Trained Neuro-Intelligent System. Biomimetics 2022, 7, 84.
  37. Aribia, H.B.; El-Rifaie, A.M.; Tolba, M.A.; Shaheen, A.; Moustafa, G.; Elsayed, F.; Elshahed, M. Growth Optimizer for Parameter Identification of Solar Photovoltaic Cells and Modules. Sustainability 2023, 15, 7896.
  38. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570.
  39. Akinola, O.A.; Ezugwu, A.E.; Oyelade, O.N.; Agushaka, J.O. A hybrid binary dwarf mongoose optimization algorithm with simulated annealing for feature selection on high dimensional multi-class datasets. Sci. Rep. 2022, 12, 14945.
  40. Singh, B.; Bishnoi, S.K.; Sharma, M. Frequency Regulation Scheme for PV integrated Power System using Energy Storage Device. In Proceedings of the 2022 International Conference on Intelligent Controller and Computing for Smart Power, ICICCSP 2022, Chengdu, China, 19–22 August 2022.
  41. Sadoun, A.M.; Najjar, I.R.; Alsoruji, G.S.; Wagih, A.; Elaziz, M.A. Utilizing a Long Short-Term Memory Algorithm Modified by Dwarf Mongoose Optimization to Predict Thermal Expansion of Cu-Al2O3 Nanocomposites. Mathematics 2022, 10, 1050.
  42. Abirami, A.; Kavitha, R. An efficient early detection of diabetic retinopathy using dwarf mongoose optimization based deep belief network. Concurr. Comput. Pract. Exp. 2022, 34, e7364.
  43. Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Alshathri, S.; Ibrahim, R.A. Feature Selection for High Dimensional Datasets Based on Quantum-Based Dwarf Mongoose Optimization. Mathematics 2022, 10, 4565.
  44. Balasubramaniam, S.; Satheesh Kumar, K.; Kavitha, V.; Prasanth, A.; Sivakumar, T.A. Feature Selection and Dwarf Mongoose Optimization Enabled Deep Learning for Heart Disease Detection. Comput. Intell. Neurosci. 2022, 2022, 2819378.
  45. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821.
  46. Dora, B.K.; Bhat, S.; Halder, S.; Srivastava, I. A Solution to the Techno-Economic Generation Expansion Planning Using Enhanced Dwarf Mongoose Optimization Algorithm. In Proceedings of the IBSSC 2022—IEEE Bombay Section Signature Conference, Mumbai, India, 8–10 December 2022.
  47. Aldosari, F.; Abualigah, L.; Almotairi, K.H. A Normal Distributed Dwarf Mongoose Optimization Algorithm for Global Optimization and Data Clustering Applications. Symmetry 2022, 14, 1021.
  48. Sarhan, S.; Shaheen, A.M.; El-Sehiemy, R.A.; Gafar, M. An Enhanced Slime Mould Optimizer That Uses Chaotic Behavior and an Elitist Group for Solving Engineering Problems. Mathematics 2022, 10, 1991.
  49. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  50. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
  51. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  52. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
  53. Shen, Y.; Liang, Z.; Kang, H.; Sun, X.; Chen, Q. A modified jso algorithm for solving constrained engineering problems. Symmetry 2021, 13, 63.
  54. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626.
  55. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  56. Mahdy, A.; Shaheen, A.; El-Sehiemy, R.; Ginidi, A.; Al-Gahtani, S.F. Single- and Multi-Objective Optimization Frameworks of Shape Design of Tubular Linear Synchronous Motor. Energies 2023, 16, 2409.
  57. Abid, S.; El-Rifaie, A.M.; Elshahed, M.; Ginidi, A.R.; Shaheen, A.M.; Moustafa, G.; Tolba, M.A. Development of Slime Mold Optimizer with Application for Tuning Cascaded PD-PI Controller to Enhance Frequency Stability in Power Systems. Mathematics 2023, 11, 1796.
  58. Khan, A.T.; Senior, S.L.; Stanimirovic, P.S. Model-Free Optimization Using Eagle Perching Optimizer. Available online: https://www.mathworks.com/matlabcentral/fileexchange/67978-model-free-optimization-using-eagle-perching-optimizer (accessed on 20 June 2023).
  59. Sarhan, S.; El-Sehiemy, R.A.; Shaheen, A.M.; Gafar, M. TLBO merged with studying effect for Economic Environmental Energy Management in High Voltage AC Networks Hybridized with Multi-Terminal DC Lines. Appl. Soft Comput. 2023, 143, 110426.
  60. Shaheen, A.M.; Elsayed, A.M.; Elattar, E.E.; El-Sehiemy, R.A.; Ginidi, A.R. An Intelligent Heap-Based Technique With Enhanced Discriminatory Attribute for Large-Scale Combined Heat and Power Economic Dispatch. IEEE Access 2022, 10, 64325–64338.
  61. Kaur, P.; Chaturvedi, K.T.; Kolhe, M.L. Techno-Economic Power Dispatching of Combined Heat and Power Plant Considering Prohibited Operating Zones and Valve Point Loading. Processes 2022, 10, 817.
  62. El-Sehiemy, R.; Shaheen, A.; Ginidi, A.; Elhosseini, M. A Honey Badger Optimization for Minimizing the Pollutant Environmental Emissions-Based Economic Dispatch Model Integrating Combined Heat and Power Units. Energies 2022, 15, 7603.
  63. Sarhan, S.; Shaheen, A.; El-Sehiemy, R.; Gafar, M. A Multi-Objective Teaching-Learning Studying-Based Algorithm for Large-Scale Dispatching of Combined Electrical Power and Heat Energies. Mathematics 2022, 10, 2278.
  64. Ginidi, A.R.; Elsayed, A.M.; Shaheen, A.M.; Elattar, E.E.; El-Sehiemy, R.A. A Novel Heap based Optimizer for Scheduling of Large-scale Combined Heat and Power Economic Dispatch. IEEE Access 2021, 9, 83695–83708.
  65. Shaheen, A.M.; El-Sehiemy, R.A.; Elattar, E.; Ginidi, A.R. An Amalgamated Heap and Jellyfish Optimizer for economic dispatch in Combined heat and power systems including N-1 Unit outages. Energy 2022, 246, 123351.
  66. Nazari-Heris, M.; Mehdinejad, M.; Mohammadi-Ivatloo, B.; Babamalek-Gharehpetian, G. Combined heat and power economic dispatch problem solution by implementation of whale optimization method. Neural Comput. Appl. 2019, 31, 421–436.
  67. Mahdy, A.; El-Sehiemy, R.; Shaheen, A.; Ginidi, A.; Elbarbary, Z.M.S. An Improved Artificial Ecosystem Algorithm for Economic Dispatch with Combined Heat and Power Units. Appl. Sci. 2022, 12, 11773.
  68. Ginidi, A.; Elsayed, A.; Shaheen, A.; Elattar, E.; El-Sehiemy, R. An Innovative Hybrid Heap-Based and Jellyfish Search Algorithm for Combined Heat and Power Economic Dispatch in Electrical Grids. Mathematics 2021, 9, 2053.
  69. Beigvand, S.D.; Abdi, H.; La Scala, M. Combined heat and power economic dispatch problem using gravitational search algorithm. Electr. Power Syst. Res. 2016, 133, 160–172.
  70. Shaheen, A.M.; Ginidi, A.R.; El-Sehiemy, R.A.; Ghoneim, S.S.M. Economic Power and Heat Dispatch in Cogeneration Energy Systems Using Manta Ray Foraging Optimizer. IEEE Access 2020, 8, 208281–208295.
  71. Mohammadi-Ivatloo, B.; Moradi-Dalvand, M.; Rabiee, A. Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr. Power Syst. Res. 2013, 95, 9–18.
Figure 1. Proposed EDMOA flowchart.
Figure 2. Converging properties of proposed EDMOA and DMOA for benchmarking tasks.
Figure 3. Converging properties of proposed EDMOA and DMOA for some samples of the CEC 2017 benchmarking tasks.
Figure 4. Converging evaluation of the proposed EDMOA and the optimum solution for some samples of the CEC 2017 benchmarking tasks.
Figure 5. DCPH operation framework [62].
Figure 6. Convergences for minimizing the DCPH costs using DMOA and EDMOA.
Figure 7. Distribution of the obtained costs over the different thirty executions for EDMOA and DMOA for the DCPH system.
Figure 8. Improvement percentages of EDMOA over DMOA considering the different thirty executions for the DCPH system.
Table 1. Test data for unimodal benchmarking tasks.

| Function | Minimum | Dim | Bounds |
|---|---|---|---|
| $Fn_1 = \sum_{j=1}^{Dim} y_j^2$ | 0 | 30 | [−100, 100] |
| $Fn_2 = \sum_{j=1}^{Dim} \lvert y_j \rvert + \prod_{j=1}^{Dim} \lvert y_j \rvert$ | 0 | 30 | [−10, 10] |
| $Fn_3 = \sum_{j=1}^{Dim} \big( \sum_{k=1}^{j} y_k \big)^2$ | 0 | 30 | [−100, 100] |
| $Fn_4 = \max_j \{ \lvert y_j \rvert,\ 1 \le j \le Dim \}$ | 0 | 30 | [−100, 100] |
| $Fn_5 = \sum_{j=1}^{Dim-1} \big[ 100\,(y_{j+1} - y_j^2)^2 + (y_j - 1)^2 \big]$ | 0 | 30 | [−30, 30] |
| $Fn_6 = \sum_{j=1}^{Dim} \big( \lfloor y_j + 0.5 \rfloor \big)^2$ | 0 | 30 | [−100, 100] |
| $Fn_7 = \sum_{k=1}^{Dim} k\, y_k^4 + random[0, 1)$ | 0 | 30 | [−1.28, 1.28] |
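As a quick sanity check on the definitions in Table 1 (an illustrative transcription of the standard formulas, not the authors' code), the sphere ($Fn_1$) and Rosenbrock ($Fn_5$) functions evaluate to their listed minimum of 0 at the known optima:

```python
import numpy as np

def fn1_sphere(y):
    """Fn1: sum of squares; global minimum 0 at y = 0."""
    y = np.asarray(y, dtype=float)
    return float(np.sum(y**2))

def fn5_rosenbrock(y):
    """Fn5: Rosenbrock valley; global minimum 0 at y = (1, ..., 1)."""
    y = np.asarray(y, dtype=float)
    return float(np.sum(100.0 * (y[1:] - y[:-1]**2)**2 + (y[:-1] - 1.0)**2))

print(fn1_sphere(np.zeros(30)))      # prints 0.0 at the optimum
print(fn5_rosenbrock(np.ones(30)))   # prints 0.0 at the optimum
```

Both functions are evaluated with Dim = 30 in the experiments, matching the table.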
Table 2. Test data for multimodal benchmarking tasks [48].

| Function | Min. | Dim | Bounds |
|---|---|---|---|
| $Fn_8 = \sum_{j=1}^{Dim} -y_j \sin\big(\sqrt{\lvert y_j \rvert}\big)$ | −418.9829 × Dim | 30 | [−500, 500] |
| $Fn_9 = \sum_{j=1}^{Dim} \big[ y_j^2 - 10\cos(2\pi y_j) + 10 \big]$ | 0 | 30 | [−5.12, 5.12] |
| $Fn_{10} = -20\exp\Big(-0.2\sqrt{\tfrac{1}{Dim}\sum_{j=1}^{Dim} y_j^2}\Big) - \exp\Big(\tfrac{1}{Dim}\sum_{j=1}^{Dim}\cos(2\pi y_j)\Big) + 20 + e$ | 0 | 30 | [−32, 32] |
| $Fn_{11} = 1 + \tfrac{1}{4000}\sum_{j=1}^{Dim} y_j^2 - \prod_{k=1}^{Dim} \cos\big(\tfrac{y_k}{\sqrt{k}}\big)$ | 0 | 30 | [−600, 600] |
| $Fn_{12} = \tfrac{\pi}{Dim}\big\{ 10\sin^2(\pi x_1) + \sum_{j=1}^{Dim-1} (x_j - 1)^2 [1 + 10\sin^2(\pi x_{j+1})] + (x_{Dim} - 1)^2 \big\} + \sum_{j=1}^{Dim} Z(y_j, 10, 100, 4)$, where $x_j = 1 + \tfrac{y_j + 1}{4}$, and $Z(y_k, \alpha, \beta, \gamma) = \beta (y_k - \alpha)^{\gamma}$ if $y_k > \alpha$; $0$ if $-\alpha \le y_k \le \alpha$; $\beta (-y_k - \alpha)^{\gamma}$ if $y_k < -\alpha$ | 0 | 30 | [−50, 50] |
| $Fn_{13} = 0.1 \big\{ \sin^2(3\pi y_1) + \sum_{j=1}^{Dim} (y_j - 1)^2 [1 + \sin^2(3\pi y_j + 1)] + (y_{Dim} - 1)^2 [1 + \sin^2(2\pi y_{Dim})] \big\} + \sum_{j=1}^{Dim} Z(y_j, 5, 100, 4)$ | 0 | 30 | [−50, 50] |
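Two of the multimodal tasks in Table 2 can likewise be spot-checked at their known optima (an illustrative transcription of the standard Rastrigin and Ackley formulas, not code from the paper):

```python
import numpy as np

def fn9_rastrigin(y):
    """Fn9: Rastrigin; global minimum 0 at y = 0."""
    y = np.asarray(y, dtype=float)
    return float(np.sum(y**2 - 10.0 * np.cos(2.0 * np.pi * y) + 10.0))

def fn10_ackley(y):
    """Fn10: Ackley; global minimum 0 at y = 0."""
    y = np.asarray(y, dtype=float)
    n = y.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(y**2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * y)) / n)
                 + 20.0 + np.e)

print(fn9_rastrigin(np.zeros(30)))                 # prints 0.0
print(round(abs(fn10_ackley(np.zeros(30))), 12))   # prints 0.0 (up to floating-point error)
```

Away from the origin both functions are highly multimodal, which is exactly why they are used to probe an optimizer's ability to escape local minima.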
Table 3. Statistical outcomes of proposed EDMOA and DMOA for benchmarking tasks.

| Task | DMOA Best | DMOA Average | DMOA Worst | DMOA Std | EDMOA Best | EDMOA Average | EDMOA Worst | EDMOA Std | Average Improvement |
|---|---|---|---|---|---|---|---|---|---|
| Fn1 | 1.79 × 10−5 | 8.76 × 10−5 | 0.000296 | 5.90 × 10−5 | 6.61 × 10−27 | 1.17 × 10−24 | 9.46 × 10−24 | 2.28 × 10−24 | 100% |
| Fn2 | 1.03 × 10−5 | 2.00 × 10−5 | 3.54 × 10−5 | 6.11 × 10−6 | 7.96 × 10−17 | 1.06 × 10−14 | 1.06 × 10−13 | 2.03 × 10−14 | 100% |
| Fn3 | 5743.5 | 8781.6 | 12,966 | 1668.8 | 2.897 | 127.36 | 883.87 | 21.48 | 99.40% |
| Fn4 | 14.815 | 20.841 | 27.163 | 3.221 | 0.33587 | 1.5053 | 3.4605 | 0.77068 | 88.50% |
| Fn5 | 77.956 | 237.03 | 393.41 | 83.866 | 0.78762 | 32.275 | 88.215 | 28.605 | 82.20% |
| Fn6 | 1.65 × 10−5 | 7.08 × 10−5 | 0.000165 | 3.21 × 10−5 | 9.37 × 10−27 | 1.09 × 10−24 | 1.13 × 10−23 | 2.30 × 10−24 | 100% |
| Fn7 | 0.11238 | 0.5481 | 1.0747 | 0.27023 | 0.084286 | 0.52282 | 0.93326 | 0.28925 | 8.90% |
| Fn8 | −7211.4 | −5642.5 | −4571.6 | 681.27 | −11,207 | −9752.5 | −7496.1 | 846.28 | −34.1% |
| Fn9 | 104.94 | 153.31 | 197.11 | 26.503 | 27.902 | 40.152 | 66.662 | 9.0548 | 69.80% |
| Fn10 | 0.001129 | 0.002626 | 0.004452 | 0.000796 | 3.29 × 10−14 | 0.044681 | 1.3404 | 0.24473 | −48.4% |
| Fn11 | 0.000112 | 0.032583 | 0.40399 | 0.079176 | 0 | 0.009682 | 0.032006 | 0.010785 | 87.20% |
| Fn12 | 2.719 | 7.6682 | 12.87 | 2.4431 | 3.43 × 10−25 | 0.096802 | 0.82919 | 0.19629 | 96.10% |
| Fn13 | 1.2364 | 9.3512 | 24.069 | 5.0609 | 2.30 × 10−25 | 0.078774 | 1.2225 | 0.25351 | 97.30% |
Table 4. Compared algorithms for benchmarking tasks (Fn1:Fn13): circumstances and metaparameters.

| Algorithm | Specified Metaparameters | Population Size | Number of Iterations | Number of Fitness Calculations |
|---|---|---|---|---|
| SCA | Parameter constant (a) = 2 | 30 | 1000 | 30,000 |
| SSA | Parameters c1 = c2 are random numbers in [0, 1] | 30 | 1000 | 30,000 |
| MVO | Travelling distance rate is a random number in [0.6, 1]; existence probability is a random number in [0.2, 1] | 30 | 1000 | 30,000 |
| PSO | Parameters c1 = c2 = 2; maximum velocity (vMax) = 6 | 30 | 1000 | 30,000 |
| DE | Crossover probability = 0.5; scaling factor = 0.5 | 30 | 1000 | 30,000 |
| DMOA | Number of babysitters = 3; alpha female vocalization (peep = 2) | 30 | 1000 | 30,000 |
| EDMOA | Number of babysitters = 3; alpha female vocalization (peep = 2) | 30 | 1000 | 30,000 |
Table 5. Comparisons of mean and STd of the compared algorithms for benchmarking tasks.

| Task | Index | SCA | SSA | MVO | PSO | DE | DMOA | EDMOA |
|---|---|---|---|---|---|---|---|---|
| Fn1 | Mean | 1.52 × 10−2 | 1.23 × 10−8 | 3.19 × 10−1 | 1.29 × 102 | 3.03 × 10−12 | 1.79 × 10−5 | 6.61 × 10−27 |
| Fn1 | STd | 3.00 × 10−2 | 3.54 × 10−9 | 1.12 × 10−1 | 1.54 × 101 | 3.45 × 10−12 | 5.90 × 10−5 | 2.28 × 10−24 |
| Fn2 | Mean | 1.15 × 10−5 | 8.48 × 10−1 | 3.89 × 10−1 | 8.61 × 101 | 3.72 × 10−8 | 1.03 × 10−5 | 7.96 × 10−17 |
| Fn2 | STd | 2.74 × 10−5 | 9.42 × 10−1 | 1.38 × 10−1 | 6.53 × 101 | 1.20 × 10−8 | 6.11 × 10−6 | 2.03 × 10−14 |
| Fn3 | Mean | 3.26 × 103 | 2.37 × 102 | 4.81 × 101 | 4.07 × 102 | 2.42 × 104 | 5.74 × 103 | 2.9 |
| Fn3 | STd | 2.94 × 103 | 1.56 × 102 | 2.18 × 101 | 7.13 × 101 | 4.17 × 103 | 1.67 × 103 | 2.15 × 101 |
| Fn4 | Mean | 2.05 × 101 | 8.25 | 1.08 | 4.5 | 1.97 | 1.48 × 101 | 3.36 × 10−1 |
| Fn4 | STd | 1.10 × 101 | 3.29 | 3.11 × 10−1 | 3.29 × 10−1 | 4.31 × 10−1 | 3.22 | 7.71 × 10−1 |
| Fn5 | Mean | 5.33 × 102 | 1.36 × 102 | 4.08 × 102 | 1.55 × 105 | 4.61 × 101 | 7.80 × 101 | 7.88 × 10−1 |
| Fn5 | STd | 1.91 × 103 | 1.74 × 102 | 6.15 × 102 | 3.60 × 104 | 2.73 × 101 | 8.39 × 101 | 2.66 × 101 |
| Fn6 | Mean | 4.55 | 0 | 3.24 × 10−1 | 1.33 × 102 | 3.10 × 10−12 | 1.65 × 10−5 | 9.37 × 10−27 |
| Fn6 | STd | 3.57 × 10−1 | 0 | 9.74 × 10−2 | 1.52 × 101 | 1.46 × 10−12 | 3.21 × 10−5 | 2.30 × 10−24 |
| Fn7 | Mean | 2.44 × 10−2 | 9.55 × 10−2 | 2.09 × 10−2 | 1.11 × 102 | 2.69 × 10−2 | 1.12 × 10−1 | 8.43 × 10−2 |
| Fn7 | STd | 2.07 × 10−2 | 5.05 × 10−2 | 9.58 × 10−3 | 2.15 × 101 | 6.32 × 10−3 | 2.70 × 10−1 | 2.89 × 10−1 |
| Fn8 | Mean | −3.89 × 103 | −7.82 × 103 | −7.74 × 103 | −6.73 × 103 | −1.24 × 104 | −7.21 × 103 | −1.12 × 104 |
| Fn8 | STd | 2.26 × 102 | 8.42 × 102 | 6.93 × 102 | 6.50 × 102 | 1.49 × 102 | 6.81 × 102 | 8.46 × 102 |
| Fn9 | Mean | 1.84 × 101 | 5.66 × 101 | 1.13 × 102 | 3.69 × 102 | 5.93 × 101 | 1.05 × 102 | 2.79 × 101 |
| Fn9 | STd | 2.14 × 101 | 1.29 × 101 | 2.46 × 101 | 1.87 × 101 | 6.08 | 2.65 × 101 | 9.05 |
| Fn10 | Mean | 1.13 × 101 | 2.26 | 1.15 | 8.42 | 4.64 × 10−7 | 1.13 × 10−3 | 3.29 × 10−14 |
| Fn10 | STd | 9.66 | 7.21 × 10−1 | 7.03 × 10−1 | 4.11 × 10−1 | 1.38 × 10−7 | 7.96 × 10−4 | 2.45 × 10−1 |
| Fn11 | Mean | 2.35 × 10−1 | 1.01 × 10−2 | 5.75 × 10−1 | 1.03 | 9.76 × 10−11 | 1.12 × 10−4 | 0 |
| Fn11 | STd | 2.25 × 10−1 | 1.07 × 10−2 | 8.75 × 10−2 | 4.89 × 10−3 | 2.13 × 10−10 | 7.92 × 10−2 | 1.08 × 10−2 |
| Fn12 | Mean | 2.29 | 5.54 | 1.29 | 4.8 | 3.63 × 10−13 | 2.72 | 3.43 × 10−25 |
| Fn12 | STd | 2.96 | 3.12 | 1.1 | 8.67 × 10−1 | 3.40 × 10−13 | 2.44 | 1.96 × 10−1 |
| Fn13 | Mean | 5.19 × 102 | 1.01 | 8.13 × 10−2 | 2.32 × 101 | 1.69 × 10−12 | 1.24 | 2.30 × 10−25 |
| Fn13 | STd | 2.78 × 103 | 4.7 | 4.32 × 10−2 | 4.2 | 1.16 × 10−12 | 5.06 | 2.54 × 10−1 |
Table 6. Friedman ranking of the compared algorithms for benchmarking tasks.

| Task | Index | SCA | SSA | MVO | PSO | DE | DMOA | EDMOA |
|---|---|---|---|---|---|---|---|---|
| Fn1 | Mean | 5 | 3 | 6 | 7 | 2 | 4 | 1 |
| Fn1 | STd | 5 | 3 | 6 | 7 | 2 | 4 | 1 |
| Fn2 | Mean | 4 | 6 | 5 | 7 | 2 | 3 | 1 |
| Fn2 | STd | 4 | 6 | 5 | 7 | 2 | 3 | 1 |
| Fn3 | Mean | 7 | 3 | 2 | 4 | 5 | 6 | 1 |
| Fn3 | STd | 6 | 4 | 2 | 3 | 7 | 5 | 1 |
| Fn4 | Mean | 7 | 5 | 2 | 4 | 3 | 6 | 1 |
| Fn4 | STd | 7 | 6 | 1 | 2 | 3 | 5 | 4 |
| Fn5 | Mean | 6 | 4 | 5 | 7 | 2 | 3 | 1 |
| Fn5 | STd | 6 | 4 | 5 | 7 | 2 | 3 | 1 |
| Fn6 | Mean | 6 | 1 | 5 | 7 | 3 | 4 | 2 |
| Fn6 | STd | 6 | 1 | 5 | 7 | 3 | 4 | 2 |
| Fn7 | Mean | 2 | 6 | 1 | 7 | 3 | 5 | 4 |
| Fn7 | STd | 3 | 4 | 2 | 7 | 1 | 5 | 6 |
| Fn8 | Mean | 7 | 3 | 4 | 6 | 1 | 5 | 2 |
| Fn8 | STd | 2 | 5 | 4 | 3 | 1 | 7 | 6 |
| Fn9 | Mean | 1 | 3 | 6 | 7 | 4 | 5 | 2 |
| Fn9 | STd | 5 | 3 | 6 | 4 | 1 | 7 | 2 |
| Fn10 | Mean | 7 | 5 | 4 | 6 | 2 | 3 | 1 |
| Fn10 | STd | 7 | 6 | 5 | 4 | 1 | 2 | 3 |
| Fn11 | Mean | 5 | 4 | 6 | 7 | 2 | 3 | 1 |
| Fn11 | STd | 7 | 3 | 6 | 2 | 1 | 5 | 4 |
| Fn12 | Mean | 4 | 7 | 3 | 6 | 2 | 5 | 1 |
| Fn12 | STd | 6 | 7 | 4 | 3 | 1 | 5 | 2 |
| Fn13 | Mean | 7 | 4 | 3 | 6 | 2 | 5 | 1 |
| Fn13 | STd | 7 | 5 | 2 | 4 | 1 | 6 | 3 |
| Summation | | 139 | 111 | 105 | 141 | 59 | 118 | 55 |
| Mean Rank | | 5.35 | 4.27 | 4.04 | 5.42 | 2.27 | 4.54 | 2.12 |
| Final Ranking | | 6 | 4 | 3 | 7 | 2 | 5 | 1 |
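Mean ranks of the kind reported in Table 6 can be reproduced from a score matrix with a few lines of code. This sketch assumes lower objective values are better; it uses ordinal ranking (ties broken by column order), whereas a full Friedman procedure assigns averaged ranks to ties — the paper does not state its tie-handling rule.

```python
import numpy as np

def friedman_mean_ranks(scores):
    """scores: (n_tasks, n_algorithms) array where lower is better.
    Returns per-task ranks (1 = best) and each algorithm's mean rank."""
    scores = np.asarray(scores, dtype=float)
    # argsort twice converts values into 0-based ordinal ranks per row
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    return ranks, ranks.mean(axis=0)

# Toy example: 3 tasks, 3 algorithms; the middle algorithm wins every task
ranks, mean_rank = friedman_mean_ranks([[0.9, 0.1, 0.5],
                                        [0.8, 0.2, 0.4],
                                        [0.3, 0.1, 0.2]])
print(mean_rank)  # prints [3. 1. 2.]
```

The algorithm with the smallest mean rank receives the final rank of 1, which is how the "Final Ranking" row is derived from the "Mean Rank" row.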
Table 7. Mathematical information of the CEC 2017 single-objective optimization benchmarks being considered.

| Function No. | Function | Minimum | Dim | Bounds |
|---|---|---|---|---|
| Fn14 | Shifted and Rotated Bent Cigar Function | 100 | 30 | [−100, 100] |
| Fn15 | Shifted and Rotated Zakharov Function | 300 | 30 | [−100, 100] |
| Fn16 | Shifted and Rotated Rosenbrock's Function | 400 | 30 | [−100, 100] |
| Fn17 | Shifted and Rotated Rastrigin's Function | 500 | 30 | [−100, 100] |
| Fn18 | Shifted and Rotated Expanded Scaffer's F6 Function | 600 | 30 | [−100, 100] |
| Fn19 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 | 30 | [−100, 100] |
| Fn20 | Shifted and Rotated Noncontinuous Rastrigin's Function | 800 | 30 | [−100, 100] |
| Fn21 | Shifted and Rotated Levy Function | 900 | 30 | [−100, 100] |
| Fn22 | Shifted and Rotated Schwefel's Function | 1000 | 30 | [−100, 100] |
| Fn23 | Hybrid Function 1 (N = 3) | 1100 | 30 | [−100, 100] |
| Fn24 | Hybrid Function 2 (N = 3) | 1200 | 30 | [−100, 100] |
| Fn25 | Hybrid Function 3 (N = 3) | 1300 | 30 | [−100, 100] |
Table 8. Statistical outcomes of proposed EDMOA and DMOA for CEC 2017 benchmarking tasks.

| Task | Best: DMOA | Best: EDMOA | Improve | Average: DMOA | Average: EDMOA | Improve | Worst: DMOA | Worst: EDMOA | Improve |
|---|---|---|---|---|---|---|---|---|---|
| Fn14 | 103.10 | 100.79 | 2% | 4947.71 | 1833.43 | 63% | 63,834.88 | 10,627.66 | 83% |
| Fn15 | 426.95 | 300.00 | 30% | 1075.82 | 300.00 | 72% | 2487.44 | 300.07 | 88% |
| Fn16 | 400.56 | 400.01 | 0% | 405.29 | 403.16 | 1% | 406.86 | 405.07 | 0% |
| Fn17 | 518.01 | 502.98 | 3% | 530.04 | 510.00 | 4% | 539.60 | 521.85 | 3% |
| Fn18 | 600.00 | 600.00 | 0% | 600.00 | 600.00 | 0% | 600.00 | 600.00 | 0% |
| Fn19 | 728.61 | 701.99 | 4% | 743.07 | 724.95 | 2% | 753.99 | 750.16 | 1% |
| Fn20 | 811.29 | 804.97 | 1% | 829.65 | 811.34 | 2% | 841.71 | 826.86 | 2% |
| Fn21 | 900.00 | 900.00 | 0% | 900.00 | 900.02 | 0% | 900.00 | 900.45 | 0% |
| Fn22 | 1615.59 | 1003.66 | 38% | 2276.48 | 1528.55 | 33% | 3534.00 | 2498.38 | 29% |
| Fn23 | 1103.03 | 1100.35 | 0% | 1107.10 | 1104.63 | 0% | 1112.53 | 1113.66 | 0% |
| Fn24 | 7581.32 | 1215.06 | 84% | 1.48 × 105 | 1.76 × 104 | 88% | 8.49 × 105 | 5.58 × 105 | 93% |
| Fn25 | 1471.09 | 1309.99 | 11% | 4806.37 | 5704.65 | −19% | 14,058.03 | 2.06 × 105 | −47% |
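The improvement percentages in Table 8 are consistent, for most rows, with the usual relative-gap formula. The helper below is an illustrative check, not code from the paper:

```python
def improvement_percent(dmoa_value, edmoa_value):
    """Relative improvement of EDMOA over DMOA, in percent."""
    return 100.0 * (dmoa_value - edmoa_value) / dmoa_value

# Best-cost entries for Fn22 and Fn15 from Table 8
print(round(improvement_percent(1615.59, 1003.66)))  # prints 38, matching the table
print(round(improvement_percent(426.95, 300.00)))    # prints 30, matching the table
```

A negative value, as for Fn25's average and worst columns, indicates that the EDMOA run was costlier than the DMOA run on that statistic.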
Table 9. Compared algorithms for CEC 2017 benchmarking tasks (Fn14:Fn25): circumstances and metaparameters.

| Algorithm | Parameters | Population Size | Number of Iterations | Number of Fitness Calculations |
|---|---|---|---|---|
| Circle | Parameter (p) decreases linearly from 1 to 0.05; parameter c = 0.8 | 30 | 500 | 15,000 |
| GWO | Parameter (a) decreases linearly from 2 to 0 | 30 | 500 | 15,000 |
| PSO | Parameters c1 = c2 = 2; maximum velocity (vMax) = 6 | 30 | 500 | 15,000 |
| SMA | z parameter = 0.03 | 30 | 500 | 15,000 |
| DMOA | Number of babysitters = 3; alpha female vocalization (peep = 2) | 30 | 500 | 15,000 |
| EDMOA | Number of babysitters = 3; alpha female vocalization (peep = 2) | 30 | 500 | 15,000 |
| EPO | Area to search: l_scale = 1000; resolution range: res = 0.05; shrinking coefficient: eta = (res/l_scale)^(1/MaxIt) | 30 | 500 | 15,000 |
| BAS | Antenna distance: d0 = 0.001, d1 = 3, d = d1, eta_d = 0.95; random walk: l0 = 0.0, l1 = 0.0, l = l1, eta_l = 0.95; step length: step = 0.8, eta_step = 0.95 | 30 | 500 | 15,000 |
Table 10. Comparisons of the average objective value for CEC 2017 benchmarking tasks.

| Task | Circle | GWO | PSO | SMA | DMOA | EDMOA | EPO | BAS |
|---|---|---|---|---|---|---|---|---|
| Fn14 | 1.33 × 109 | 94,119,626 | 1.12 × 109 | 4047.105 | 4947.71 | 1833.43 | 8,814,768 | 2.3 × 1010 |
| Fn15 | 14,241.48 | 3268.186 | 1578.377 | 300.0529 | 1075.82 | 300 | 16,333,666 | 783,773.90 |
| Fn16 | 506.2511 | 420.6218 | 458.378 | 407.543 | 405.29 | 403.16 | 2353.768 | 3390.97 |
| Fn17 | 561.921 | 519.452 | 527.4131 | 514.8857 | 530.04 | 510 | 528.8325 | 658.1569 |
| Fn18 | 645.2603 | 601.8139 | 603.4523 | 600.1722 | 600 | 600 | 645.6735 | 686.2571 |
| Fn19 | 789.0027 | 735.5862 | 726.2797 | 725.503 | 743.07 | 724.95 | 737.4825 | 1131.222 |
| Fn20 | 846.8571 | 815.9346 | 819.8598 | 816.6209 | 829.65 | 811.34 | 823.9785 | 934.9389 |
| Fn21 | 1631.005 | 915.9064 | 906.119 | 900.2505 | 900 | 900.02 | 1738.447 | 3657.722 |
| Fn22 | 2610.173 | 1763.644 | 1725.071 | 1511.829 | 2276.48 | 1528.55 | 2691.075 | 3454.588 |
| Fn23 | 2005.97 | 1157.378 | 1205.218 | 1137.776 | 1107.1 | 1104.63 | 1,003,627 | 25,751.55 |
| Fn24 | 2.49 × 108 | 1.14 × 106 | 1.18 × 107 | 2.08 × 105 | 1.48 × 105 | 1.76 × 104 | 2.75 × 109 | 2.6 × 109 |
| Fn25 | 10,842,625 | 12,488.38 | 15,339.44 | 8132.892 | 4806.37 | 5704.65 | 4.64 × 108 | 3.02 × 108 |
Table 11. Friedman ranking of the compared algorithms for the obtained average objective value for CEC 2017 benchmarking tasks.

| Task | Circle | GWO | PSO | SMA | DMOA | EDMOA | EPO | BAS |
|---|---|---|---|---|---|---|---|---|
| Fn14 | 7 | 4 | 6 | 3 | 2 | 1 | 5 | 8 |
| Fn15 | 4 | 6 | 5 | 2 | 3 | 1 | 7 | 8 |
| Fn16 | 6 | 4 | 5 | 3 | 2 | 1 | 7 | 8 |
| Fn17 | 8 | 3 | 4 | 2 | 5 | 1 | 6 | 7 |
| Fn18 | 6 | 4 | 5 | 3 | 1 | 1 | 7 | 8 |
| Fn19 | 8 | 4 | 3 | 2 | 5 | 1 | 6 | 8 |
| Fn20 | 7 | 2 | 4 | 3 | 5 | 1 | 6 | 8 |
| Fn21 | 6 | 5 | 4 | 3 | 1 | 2 | 7 | 8 |
| Fn22 | 6 | 4 | 3 | 1 | 5 | 2 | 7 | 8 |
| Fn23 | 6 | 4 | 5 | 3 | 2 | 1 | 8 | 7 |
| Fn24 | 6 | 4 | 5 | 3 | 2 | 1 | 8 | 7 |
| Fn25 | 6 | 4 | 5 | 3 | 1 | 2 | 8 | 7 |
| Summation | 70 | 47 | 53 | 31 | 34 | 15 | 82 | 92 |
| Mean Rank | 5.892857 | 4.142857 | 4.392857 | 2.464286 | 2.714286 | 1.357143 | 6.8333 | 7.6667 |
| Final Ranking | 6 | 4 | 5 | 2 | 3 | 1 | 7 | 8 |
Table 12. Computational complexity and run time of DMOA and EDMOA.

| Task | Computational Complexity | DMOA Average Run Time (s)/Iteration | EDMOA Average Run Time (s)/Iteration |
|---|---|---|---|
| Fn1 | O(30 × 30 × 1000) | 0.001763 | 0.001772 |
| Fn2 | O(30 × 30 × 1000) | 0.001851 | 0.001798 |
| Fn3 | O(30 × 30 × 1000) | 0.001592 | 0.001608 |
| Fn4 | O(30 × 30 × 1000) | 0.001723 | 0.001739 |
| Fn5 | O(30 × 30 × 1000) | 0.001577 | 0.001528 |
| Fn6 | O(30 × 30 × 1000) | 0.001414 | 0.001464 |
| Fn7 | O(30 × 30 × 1000) | 0.001695 | 0.001691 |
| Fn8 | O(30 × 30 × 1000) | 0.001823 | 0.001854 |
| Fn9 | O(30 × 30 × 1000) | 0.002011 | 0.001902 |
| Fn10 | O(30 × 30 × 1000) | 0.001886 | 0.001935 |
| Fn11 | O(30 × 30 × 1000) | 0.001698 | 0.001725 |
| Fn12 | O(30 × 30 × 1000) | 0.001598 | 0.001554 |
| Fn13 | O(30 × 30 × 1000) | 0.001821 | 0.001875 |
| Fn14 | O(30 × 30 × 500) | 0.0023906 | 0.002345 |
| Fn15 | O(30 × 30 × 500) | 0.0027801 | 0.002817 |
| Fn16 | O(30 × 30 × 500) | 0.0029567 | 0.002966 |
| Fn17 | O(30 × 30 × 500) | 0.0021253 | 0.002111 |
| Fn18 | O(30 × 30 × 500) | 0.0024131 | 0.002387 |
| Fn19 | O(30 × 30 × 500) | 0.0026615 | 0.0026914 |
| Fn20 | O(30 × 30 × 500) | 0.0027411 | 0.002802 |
| Fn21 | O(30 × 30 × 500) | 0.0030125 | 0.002929 |
| Fn22 | O(30 × 30 × 500) | 0.0022145 | 0.0022941 |
| Fn23 | O(30 × 30 × 500) | 0.0023897 | 0.0023014 |
| Fn24 | O(30 × 30 × 500) | 0.0023187 | 0.002408 |
| Fn25 | O(30 × 30 × 500) | 0.0024519 | 0.002415 |
| DCPH | O(60 × 100 × 3000) | 0.4152 | 0.4211 |
Table 13. Outcomes of costs minimization for 48-unit DCPH system by DMOA and proposed EDMOA.

| Variable | DMOA | EDMOA | Variable | DMOA | EDMOA | Variable | DMOA | EDMOA |
|---|---|---|---|---|---|---|---|---|
| PG1 | 410.919 | 538.715 | PG22 | 91.269 | 159.899 | HT31 | 42.388 | 40.211 |
| PG2 | 236.598 | 225.129 | PG23 | 75.045 | 78.120 | HT32 | 17.572 | 22.192 |
| PG3 | 320.459 | 150.068 | PG24 | 108.805 | 77.390 | HT33 | 108.227 | 105.055 |
| PG4 | 70.230 | 159.774 | PG25 | 94.984 | 94.666 | HT34 | 88.715 | 79.077 |
| PG5 | 174.784 | 161.645 | PG26 | 115.530 | 92.638 | HT35 | 111.902 | 105.339 |
| PG6 | 159.518 | 109.808 | PG27 | 98.847 | 97.820 | HT36 | 82.122 | 89.306 |
| PG7 | 110.100 | 110.431 | PG28 | 61.626 | 40.246 | HT37 | 34.690 | 41.207 |
| PG8 | 130.571 | 112.748 | PG29 | 99.781 | 85.629 | HT38 | 21.411 | 25.351 |
| PG9 | 105.985 | 109.874 | PG30 | 54.100 | 53.532 | HT39 | 479.032 | 442.594 |
| PG10 | 88.365 | 77.377 | PG31 | 16.333 | 10.502 | HT40 | 54.581 | 59.883 |
| PG11 | 82.701 | 77.516 | PG32 | 42.676 | 39.931 | H41 | 56.888 | 59.970 |
| PG12 | 95.346 | 92.383 | PG33 | 92.902 | 81.538 | H42 | 103.343 | 119.627 |
| PG13 | 79.876 | 92.440 | PG34 | 56.262 | 44.747 | H43 | 117.539 | 119.798 |
| PG14 | 533.413 | 450.161 | PG35 | 105.643 | 82.097 | H44 | 459.364 | 447.482 |
| PG15 | 12.329 | 224.017 | PG36 | 49.405 | 56.616 | H45 | 54.980 | 59.976 |
| PG16 | 292.225 | 150.102 | PG37 | 15.492 | 12.834 | H46 | 57.474 | 59.996 |
| PG17 | 109.793 | 161.726 | PG38 | 43.820 | 46.780 | H47 | 110.577 | 119.681 |
| PG18 | 65.965 | 159.664 | HT27 | 112.829 | 114.220 | H48 | 108.102 | 119.984 |
| PG19 | 158.946 | 110.029 | HT28 | 89.529 | 75.174 | Sum (PG) | 4700.0000 | 4700.0000 |
| PG20 | 139.222 | 111.665 | HT29 | 108.118 | 107.347 | Sum (HT) | 2500.0000 | 2500.0000 |
| PG21 | 100.136 | 159.743 | HT30 | 80.615 | 86.532 | Fn ($) | 121,431.6763 | 116,406.3772 |
Table 14. Comparative results for 48-unit DCPH system.

| Optimizer | Worst Fn ($/h) | Average Fn ($/h) | Best Fn ($/h) |
|---|---|---|---|
| Proposed EDMOA | 117,973.7 | 117,331.3 | 116,406.4 |
| Standard DMOA | 123,551.6 | 122,238.4 | 121,431.7 |
| Improved artificial ecosystem algorithm [67] | 119,424.033 | 118,004.349 | 116,897.888 |
| Artificial ecosystem algorithm [67] | 124,396.472 | 120,045.695 | 118,881.447 |
| Jellyfish search algorithm [68] | - | - | 117,365.09 |
| Gravitational search algorithm [69] | - | - | 119,775.9 |
| Manta Ray Foraging Optimizer [70] | 118,217.5 | 117,875.4 | 117,336.9 |
| CPSO [71] | - | - | 120,918.9 |
| TVAC-PSO [71] | - | - | 118,962.5 |
| MVO [70] | 119,249.3 | 118,724 | 117,657.9 |
| SSA [70] | 122,636.8 | 121,110.2 | 120,174.1 |
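The per-run savings quoted in the conclusion (3.3% to 5.78%) can be spot-checked against the best-run costs in Table 14. This is an illustrative calculation, not code from the paper:

```python
def saving_percent(base_cost, new_cost):
    """Relative cost reduction of new_cost versus base_cost, in percent."""
    return 100.0 * (base_cost - new_cost) / base_cost

# Best-run costs for the 48-unit DCPH system from Table 14 ($/h)
dmoa_best, edmoa_best = 121431.7, 116406.4
print(round(saving_percent(dmoa_best, edmoa_best), 2))  # prints 4.14
```

The best-run saving of about 4.14% falls within the 3.3-5.78% range reported across the thirty executions.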
Table 15. Compared algorithms for the DCPH problem: settings and metaparameters.

Algorithm | Parameters | Population Size | No. of Iterations | No. of Fitness Calculations
DMOA | Number of babysitters = 3; alpha female vocalization peep = 2 | 100 | 3000 | 300,000
EDMOA | Number of babysitters = 3; alpha female vocalization peep = 2 | 100 | 3000 | 300,000
Improved artificial ecosystem algorithm [67] | Adaptive parameter a = (1 − iter/MaxIt) × r1 | 100 | 3000 | 300,000
Artificial ecosystem algorithm [67] | Adaptive parameter a = (1 − iter/MaxIt) × r1 | 100 | 3000 | 300,000
Jellyfish search algorithm [68] | Adaptive parameter Ar = (1 − iter/MaxIt) × (2 × rand − 1) | 100 | 3000 | 300,000
Gravitational search algorithm [69] | G0 = 400, δ = 0.01 | 100 | 500 | 50,000
Manta Ray Foraging Optimizer [70] | Adaptive parameter Coef = iter/MaxIt | 100 | 3000 | 300,000
CPSO [71] | ωmax = 0.9, ωmin = 0.4; C1f = C2i = 0.5, C1i = C2f = 2.5; Rmin = 5, Rmax = 10 | 500 | 300 | 150,000
TVAC-PSO [71] | ωmax = 0.9, ωmin = 0.4; C1f = C2i = 0.5, C1i = C2f = 2.5; Rmin = 5, Rmax = 10 | 500 | 300 | 150,000
MVO [70] | Travelling distance rate: random in [0.6, 1]; existence probability: random in [0.2, 1] | 100 | 3000 | 300,000
SSA [70] | c1 = c2: random in [0, 1] | 100 | 3000 | 300,000
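The fitness budgets in Table 15 are simply population size multiplied by the number of iterations, so most competitors were compared under the same 300,000-evaluation budget; the gravitational search settings reported in [69] and the PSO variants from [71] used smaller budgets. A quick consistency check of the table's last column (algorithm names abbreviated for compactness):

```python
# Evaluation budget in Table 15 = population size * number of iterations.
settings = {                      # name: (population, iterations)
    "DMOA": (100, 3000), "EDMOA": (100, 3000),
    "IAEO [67]": (100, 3000), "AEO [67]": (100, 3000),
    "JSA [68]": (100, 3000), "GSA [69]": (100, 500),
    "MRFO [70]": (100, 3000), "CPSO [71]": (500, 300),
    "TVAC-PSO [71]": (500, 300), "MVO [70]": (100, 3000),
    "SSA [70]": (100, 3000),
}
budgets = {name: pop * iters for name, (pop, iters) in settings.items()}

# The products reproduce the "No. of Fitness Calculations" column.
assert budgets["DMOA"] == budgets["EDMOA"] == 300_000
assert budgets["GSA [69]"] == 50_000      # smaller budget than the rest
assert budgets["CPSO [71]"] == budgets["TVAC-PSO [71]"] == 150_000
```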
Moustafa, G.; El-Rifaie, A.M.; Smaili, I.H.; Ginidi, A.; Shaheen, A.M.; Youssef, A.F.; Tolba, M.A. An Enhanced Dwarf Mongoose Optimization Algorithm for Solving Engineering Problems. Mathematics 2023, 11, 3297. https://doi.org/10.3390/math11153297