Article

DM: Dehghani Method for Modifying Optimization Algorithms

1 Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz 71557-13876, Iran
2 Department of Civil Engineering, Islamic Azad University of Estahban, Estahban, Fars 74518-64747, Iran
3 Department of Power and Control Engineering, School of Electrical and Computer Engineering, Shiraz University, Shiraz 71557-13876, Iran
4 School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey 64849, NL, Mexico
5 Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
6 CROM Center for Research on Microgrids, Department of Energy Technology, Aalborg University, 9220 Aalborg, Denmark
7 Department of Computer Science, Government Bikram College of Commerce, Patiala 147004, Punjab, India
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(21), 7683; https://doi.org/10.3390/app10217683
Submission received: 21 September 2020 / Revised: 13 October 2020 / Accepted: 15 October 2020 / Published: 30 October 2020

Abstract

In recent decades, researchers have proposed many optimization algorithms to solve optimization problems in various branches of science. Optimization algorithms are designed based on various phenomena in nature, the laws of physics, the rules of individual and group games, and the behaviors of animals, plants, and other living things. Applying optimization algorithms to some objective functions has been successful, while for others it has led to failure. Improving the optimization process and adding modification phases to optimization algorithms can lead to more acceptable and appropriate solutions. In this paper, a new method called the Dehghani method (DM) is introduced to improve optimization algorithms. DM modifies the location of the best member of the population using information about the locations of the population members. In fact, DM shows that all members of a population, even the worst one, can contribute to the development of the population. DM has been mathematically modeled, and its effect has been investigated on several optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the gravitational search algorithm (GSA), teaching-learning-based optimization (TLBO), and the grey wolf optimizer (GWO). To evaluate the ability of the proposed method to improve the performance of optimization algorithms, these algorithms have been implemented, in both their original and DM-modified versions, on a set of twenty-three standard objective functions. The simulation results show that the optimization algorithms modified with DM provide more acceptable and competitive performance than the original versions in solving optimization problems.

1. Introduction

The purpose of optimization is to determine the best solution among the available solutions of an optimization problem, subject to the constraints of the problem [1]. Each optimization problem is defined by three parts: constraints, objective functions, and decision variables [2]. There are many optimization problems in different sciences that should be optimized using an appropriate method. Stochastic search-based optimization algorithms have always been of interest to researchers for solving optimization problems [3]. Optimization algorithms are able to provide a quasi-optimal solution based on a random scan of the search space instead of a full scan. The quasi-optimal solution is not the best solution, but it is close to the global optimal solution of the problem [1]. In this regard, optimization algorithms have been applied by scientists in various fields such as energy [4,5,6], protection [7], electrical engineering [8,9,10,11,12,13], topology optimization [14], and energy carriers [15,16,17] to achieve quasi-optimal solutions. Table 1 shows the optimization algorithms grouped according to the main design idea.
Each optimization problem has a definite solution called the global solution. Optimization algorithms provide a solution based on a random search of the search space; this solution is not necessarily the global solution, but because it is close to it, it is acceptable. The solution provided by an optimization algorithm is called a quasi-optimal solution. Therefore, an optimization algorithm that offers a better quasi-optimal solution than another algorithm is the better optimizer. In this regard, many optimization algorithms have been proposed by researchers to solve optimization problems and achieve better quasi-optimal solutions.
Although optimization algorithms have been successful in solving many optimization problems, improving the equations of optimization algorithms and adding modification phases to them can lead to better quasi-optimal solutions. In fact, the purpose of improving an optimization algorithm is to increase its ability to scan the problem search space more accurately and thus provide a more appropriate quasi-optimal solution that is closer to the global optimal solution.
In this paper, a new modification method called the Dehghani method (DM) is proposed to improve the performance of optimization algorithms. DM is designed based on the use of information from the algorithm's population members. In the proposed DM, the information of each population member can improve the position of the new generation. The main idea of DM is to strengthen the best population member of an optimization algorithm using the information of the other population members. The proposed method is fully described in the next section.
The remainder of this article is organized as follows. In Section 2, the DM is fully explained and modeled. Section 3 explains how to implement the proposed method on several algorithms. The simulation of the proposed method for solving optimization problems is presented in Section 4. Finally, conclusions and several suggestions for future studies are presented in Section 5.

2. Dehghani Method (DM)

In this section, the DM is first explained and then its mathematical modeling is presented. DM shows that all population members of an optimization algorithm, even the worst one, can contribute to the development of the algorithm's population.
Each population-based optimization algorithm has a matrix called the population matrix, in which each row represents a population member. Each member of the population is in fact a vector that represents the values of the problem variables. Given that each member of the population is a random vector in the problem search space, it is a suggested solution (SS) to the problem. After the formation of the population matrix, the values proposed by each population member for the problem variables are evaluated in the objective function (OF). The population matrix and the values of the objective function are defined in Equation (1).
$$
SS = X = \begin{bmatrix} SS_1 = X_1 \\ \vdots \\ SS_i = X_i \\ \vdots \\ SS_N = X_N \end{bmatrix} = \begin{bmatrix} x_1^1 & \cdots & x_1^d & \cdots & x_1^m \\ \vdots & & \vdots & & \vdots \\ x_i^1 & \cdots & x_i^d & \cdots & x_i^m \\ \vdots & & \vdots & & \vdots \\ x_N^1 & \cdots & x_N^d & \cdots & x_N^m \end{bmatrix}, \qquad OF = \begin{bmatrix} OF_1 \\ \vdots \\ OF_i \\ \vdots \\ OF_N \end{bmatrix}, \tag{1}
$$
where $SS$ is the suggested solutions matrix, $X$ is the population matrix, $SS_i$ is the i'th suggested solution, $X_i$ is the i'th population member, $x_i^d$ is the value of the d'th variable of the optimization problem suggested by the i'th population member, $N$ is the number of population members or suggested solutions, $m$ is the number of variables, and $OF_i$ is the value of the objective function for the i'th suggested solution.
Different values for the objective function are obtained based on the values proposed for the variables by the population members. The member that offers the best-suggested solution (BSS) to the optimization problem plays an important role in improving the algorithm population. The row number of this member in the population matrix is determined using Equation (2).
$$
best = \begin{cases} \text{the row number of the } X \text{ matrix with } \min(OF), & \text{for minimization problems,} \\ \text{the row number of the } X \text{ matrix with } \max(OF), & \text{for maximization problems,} \end{cases} \tag{2}
$$
where $best$ is the row number of the member with the BSS. The BSS and its OF are specified by Equations (3) and (4).
$$
X_{best}: \text{the variable values of the } \min(\text{objective function}), \tag{3}
$$
$$
F_{best}: \text{the value of the } \min(\text{objective function}), \tag{4}
$$
where $X_{best}$ is the BSS or best population member and $F_{best}$ is the value of the OF for the BSS.
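As a concrete illustration of Equations (1)–(4), the short Python sketch below (not from the original paper; the helper name init_population and the NumPy-based layout are assumptions of this sketch) builds a random population matrix within given bounds, evaluates the objective function for every suggested solution, and selects the best member for a minimization problem.

```python
import numpy as np

def init_population(objective, lower, upper, N, m, rng=None):
    """Build the population matrix X of Equation (1) and evaluate OF for each row.

    objective    : callable mapping a length-m vector to a scalar OF value
    lower, upper : per-variable bounds of the search space (scalars or length-m arrays)
    N, m         : number of suggested solutions and number of problem variables
    """
    rng = np.random.default_rng() if rng is None else rng
    lower = np.broadcast_to(np.asarray(lower, dtype=float), (m,))
    upper = np.broadcast_to(np.asarray(upper, dtype=float), (m,))
    # Each row X[i] is one suggested solution SS_i drawn uniformly within the bounds.
    X = lower + rng.random((N, m)) * (upper - lower)
    OF = np.array([objective(x) for x in X])  # OF_i for every SS_i
    return X, OF

# Example usage with the sphere function F1 from Appendix A (minimization).
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    X, OF = init_population(sphere, -100, 100, N=30, m=5)
    best = int(np.argmin(OF))           # Equation (2) for a minimization problem
    X_best, F_best = X[best], OF[best]  # Equations (3) and (4)
    print(X_best, F_best)
```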
As mentioned, the best member of the population plays an important role in improving the population of the algorithm and thus the performance of the optimization algorithm. An optimization algorithm updates the status of its population members according to its own process to achieve a quasi-optimal solution. Accordingly, by improving the best member of the population, it is expected that the population is updated more effectively and that the performance of the algorithm in solving the optimization problem improves.
DM is designed with the idea of modifying the best population member with the aim of improving the performance of an optimization algorithm.
In DM, just as the best member influences the updating of the population members, the population members, even the worst one, can influence the modification of the best member. The measurement criterion for suggested solutions is the value of the objective function. However, a suggested solution that is not the best solution may still provide appropriate values for some problem variables. The proposed DM modifies the best member with this in mind, using the values suggested by the other population members. This concept is mathematically modeled in Equations (5) and (6).
$$
X_{DM} = X_{DM}^{i,d} = \left( x_{best}^1, \ldots, x_i^d, \ldots, x_{best}^m \right), \tag{5}
$$
$$
X_{best}^{new} = \begin{cases} X_{DM}, & OF(X_{DM}^{i,d}) < F_{best}, \\ X_{best}, & \text{else,} \end{cases} \tag{6}
$$
where $X_{DM}$ is the best member modified by DM, $X_{DM}^{i,d}$ is the best member modified based on the value suggested for the d'th variable by the i'th SS, $X_{best}^{new}$ is the new status of the best member based on DM, and $OF(X_{DM}^{i,d})$ is the objective function value of the best member modified by DM.
The pseudo code of DM is presented in Algorithm 1. In addition, the different stages of the proposed method with the aim of improving the best member are shown as a flowchart in Figure 1.
Algorithm 1. Pseudo code of DM
1.  For i = 1:Npopulation             Npopulation: number of population members.
2.   For d = 1:m             m: number of variables.
3.    Update $X_{DM}$ using Equation (5).
4.    Calculate $OF(X_{DM}^{i,d})$.
5.    Update $X_{best}^{new}$ using Equation (6):
6.    If $OF(X_{DM}^{i,d}) < F_{best}$
7.      $X_{best}^{new} = X_{DM}$
8.    End if
9.   End for d
10.  End for i
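A minimal Python sketch of Algorithm 1 is given below (illustrative only, assuming a minimization problem and the NumPy population layout used in the earlier sketch). Each trial solution copies the best member and replaces only its d'th variable with the value suggested by the i'th population member, and the best member is updated whenever the trial lowers the objective function, as in Equations (5) and (6).

```python
import numpy as np

def dm_modify_best(objective, X, X_best, F_best):
    """Dehghani method (Algorithm 1) for a minimization problem.

    X      : (N, m) population matrix of Equation (1)
    X_best : current best member; F_best is its objective value
    Returns the (possibly improved) best member and its objective value.
    """
    N, m = X.shape
    X_best = X_best.copy()
    for i in range(N):
        for d in range(m):
            X_dm = X_best.copy()
            X_dm[d] = X[i, d]     # Equation (5): borrow the d'th variable of member i
            f_dm = objective(X_dm)
            if f_dm < F_best:     # Equation (6): keep the trial only if it improves F_best
                X_best, F_best = X_dm, f_dm
    return X_best, F_best
```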

3. DM Implementation on Optimization Algorithms

This section describes how to implement the proposed DM on optimization algorithms. The proposed DM is applicable to modifying population-based optimization algorithms. Although the design ideas behind different optimization algorithms vary, their overall procedure is the same: these algorithms provide a quasi-optimal solution by starting from a random initial population and following an iterative process in which the population is updated in each iteration.
The pseudo code of the implementation of the DM for modifying optimization algorithms is presented in Algorithm 2. The steps of the modified version of the optimization algorithms using DM are shown in Figure 2.
Algorithm 2. Pseudo code of the implementation of the DM for modifying optimization algorithms
Start.
1. Set parameters.
2. Input: m, OF, constraints.
3. Create initial population.
4. Create other matrices (if any).
5. For t = 1:iterationmax         iterationmax: maximum number of iterations.
6.  Calculate OF.
7.  Find $X_{best}$.
8.  DM toolbox:
9.   Update $X_{best}^{new}$ based on DM.
10. Continue the processes of the optimization algorithm.
11. Update population.
12. End for t
13. Output: BSS.
End.
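The Python sketch below (illustrative, not taken from the paper) shows where the DM step of Algorithm 1 would sit inside a generic population-based optimizer, reusing the init_population and dm_modify_best helpers sketched earlier. The callable update_population stands in for the algorithm-specific movement rule (GA crossover and mutation, PSO velocity updates, and so on) and is an assumption of this sketch.

```python
import numpy as np

def optimize_with_dm(objective, lower, upper, N, m, max_iter, update_population):
    """Generic population-based loop with the DM toolbox inserted (Algorithm 2)."""
    X, OF = init_population(objective, lower, upper, N, m)        # step 3
    best = int(np.argmin(OF))                                      # Equation (2), minimization
    X_best, F_best = X[best].copy(), float(OF[best])
    for t in range(max_iter):                                      # step 5
        # DM toolbox: strengthen the best member before the main update (steps 8-9).
        X_best, F_best = dm_modify_best(objective, X, X_best, F_best)
        # Continue the processes of the original algorithm (steps 10-11).
        X = update_population(X, OF, X_best, t)
        X = np.clip(X, lower, upper)
        OF = np.array([objective(x) for x in X])
        best = int(np.argmin(OF))
        if OF[best] < F_best:
            X_best, F_best = X[best].copy(), float(OF[best])
    return X_best, F_best                                          # output: the BSS
```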

4. Simulation and Discussion

In this section, the performance of the proposed DM in improving optimization algorithms is evaluated. The present work and the optimization algorithms described in [18,29,32,50,53] are developed on the same computational platform: Matlab R2014a (8.3.0.532) running on 64-bit Microsoft Windows 10 with a Core i7 processor at 2.40 GHz and 6 GB of memory. To generate and report the results, for each objective function, each optimization algorithm performs 20 independent runs, where each run employs 1000 iterations.

4.1. Algorithms Used for Comparisons and Benchmark Test Functions

To evaluate the performance of the proposed DM, the following methodology is applied:
(1) Select from the literature five well-known optimization algorithms: the genetic algorithm (GA) [50], particle swarm optimization (PSO) [18], the gravitational search algorithm (GSA) [53], teaching-learning-based optimization (TLBO) [29], and the grey wolf optimizer (GWO) [32].
(2) Modify the optimization algorithms by implementing the proposed DM.
(3) Define the set of twenty-three objective functions and divide it into three main categories: unimodal [53,54], multimodal [31,54], and fixed-dimension multimodal [54] functions (see Appendix A).
(4) Implement the present work and the optimization algorithms on the same computational platform.
(5) Compare the performance of the modified and the original optimization algorithms using the following metrics: the average and the standard deviation of the best solution obtained up to the last iteration (a minimal computation sketch is given after this list).
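The following Python helper is a sketch of step (5) only (not from the paper); it assumes a callable that performs one complete run of 1000 iterations and returns the best objective value reached, following the run protocol of Section 4.

```python
import numpy as np

def benchmark(run_once, runs=20):
    """Average and standard deviation of the best value found over independent runs.

    run_once : callable taking a seed and returning the best objective value
               reached by one complete optimization run (e.g., 1000 iterations).
    """
    finals = np.array([run_once(seed) for seed in range(runs)])
    return float(finals.mean()), float(finals.std())
```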

4.2. Results

Optimization algorithms in the original version and the modified version, using the proposed DM, are implemented on the objective functions. The simulation results are presented from Table 2, Table 3, Table 4, Table 5 and Table 6 for three different categories: unimodal, multimodal, and fixed-dimension multimodal functions. The first category consists of seven objective functions, F1 to F7, the second category consists of six objective functions, F8 to F13, and the third category consists of ten objective functions, F14 to F23.
To further analyze the simulation results, the convergence curves of the optimization algorithms for the twenty-three objective functions are shown from Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. In these figures, the convergence curves for the original and the modified versions are plotted simultaneously.
Computational time analysis of accessing the optimal solution is presented in Table 7, Table 8 and Table 9. This analysis reports the computational time per iteration, per complete run, and the overall time required for the original and the modified algorithm to achieve a similar objective function value. In these tables, P.I. means per iteration, P.C.R. means per complete run, and O.T.S. means the overall time required for the original and the modified algorithm to achieve a similar objective function value; the O.T.S. value is reported once per original/modified algorithm pair.
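As an illustration of how the per-iteration and per-complete-run times could be collected (a sketch only; the paper does not provide its timing code), the main loop can simply be wrapped with a wall-clock timer:

```python
import time

def timed_run(step, iterations=1000):
    """Measure the average time per iteration (P.I.) and the time per complete run (P.C.R.).

    step : callable executing one iteration of an optimization algorithm.
    """
    start = time.perf_counter()
    for t in range(iterations):
        step(t)
    pcr = time.perf_counter() - start   # time for the complete run
    return pcr / iterations, pcr        # (P.I., P.C.R.)
```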

4.3. Discussion

Exploitation and exploration abilities are two important indicators for evaluating optimization algorithms. The exploitation ability of an optimization algorithm means its power to provide a quasi-optimal solution. An algorithm that offers a better quasi-optimal solution than another algorithm has a higher exploitation ability. The unimodal objective functions F1 to F7, which have only one global optimal solution and no local solutions, are applied to analyze the exploitation ability of optimization algorithms. The results presented in Table 2, Table 3, Table 4, Table 5 and Table 6 show that, by modifying the optimization algorithms, the proposed DM is able to increase their exploitation ability; as a result, more suitable quasi-optimal solutions are provided by the modified versions.
The exploration ability means the power of the optimization algorithm to scan the search space of an optimization problem. Given that the basis of optimization algorithms is random scanning of the search space, an algorithm that scans the search space more accurately is able to move towards a quasi-optimal solution by escaping from local optimal solutions. In the second and third categories of objective functions, F8 to F23, there are multiple local solutions besides the global optimum, which makes them useful for analyzing the local optima avoidance and the explorative ability of an algorithm. Table 2, Table 3, Table 4, Table 5 and Table 6 show that the DM-modified versions of the optimization algorithms have a higher exploration ability than the original versions.
The convergence curves shown in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 visually show the effect of the proposed DM on the optimization algorithms. In these figures, it is clear that the modified versions converge more rapidly towards the quasi-optimal solution.
The simulation results of the optimization algorithms in solving the optimization problems show that the DM-modified versions of the optimization algorithms are much more competitive than their original versions. Therefore, the proposed method can be implemented on a variety of optimization algorithms in order to solve various optimization problems.
The results of the computational time analysis for both the original and the DM-modified versions are presented in Table 7, Table 8 and Table 9. In these tables, three time criteria are reported: the average time per iteration (P.I.), the average time per complete run (P.C.R.), and the overall time required for the original and the modified algorithm to achieve a similar objective function value (O.T.S.). Due to the addition of a modification phase based on the proposed DM, P.I. and P.C.R. increase compared to the original versions. Table 7 shows that, except for four cases (TLBO: F3; GWO: F3, F5, and F7), in all unimodal objective functions the modified version reaches the final solution of the original version in less time. Table 8 and Table 9 show that, for all objective functions F8 to F23, the modified versions of the studied algorithms reach the final solution of the original versions in less time.

5. Conclusions

There are various optimization problems in different sciences that should be optimized using an appropriate method. An optimization algorithm is one such method, and it can provide a quasi-optimal solution by randomly scanning the search space. Many optimization algorithms have been proposed by researchers and applied by scientists to solve optimization problems. The performance of optimization algorithms in achieving quasi-optimal solutions can be improved by modifying them. In this paper, a new modification method for optimization algorithms, called the Dehghani method (DM), has been presented. The main idea of the proposed DM is to improve and strengthen the best member of the population using the information of the population members. In DM, all members of a population, even the worst one, can contribute to the development of the population. The various stages of DM have been described and then mathematically modeled. The DM has been implemented on five different optimization algorithms: GA, PSO, GSA, TLBO, and GWO. The effect of the proposed method on the performance of the optimization algorithms in solving optimization problems has been evaluated on a set of twenty-three standard objective functions. In this evaluation, the results of optimizing the objective function set have been presented for both the original and the DM-modified versions of the optimization algorithms. The results of simulating and implementing DM on the mentioned optimization algorithms show that the proposed method improves their performance. The optimization of different objective functions in the three groups, unimodal, multimodal, and fixed-dimension multimodal functions, indicates that the modified versions with the proposed method are much more competitive than the original versions. Moreover, the convergence curves visually show that the modified versions converge more rapidly towards the quasi-optimal solution.
The authors suggest several ideas for future studies and perspectives on this work. The main potential lies in modifying various other optimization algorithms using DM. DM may also be applied to many-objective real-life optimization problems as well as multi-objective problems.

Author Contributions

Conceptualization, M.D., Z.M., A.D., H.S., O.P.M. and J.M.G.; methodology, M.D., Z.M. and A.D.; software, M.D., Z.M. and H.S.; validation, J.M.G., G.D., A.D., H.S., O.P.M., R.A.R.-M., C.S., D.S. and A.E.; formal analysis, A.D., A.E.; investigation, M.D. and Z.M.; resources, A.D., A.E. and J.M.G.; data curation, G.D.; writing—original draft preparation, M.D. and Z.M.; writing—review and editing, A.D., R.A.R.-M., D.S., C.S., A.E., O.P.M., G.D., and J.M.G.; visualization, M.D.; supervision, M.D., Z.M. and A.D.; project administration, A.D. and Z.M.; funding acquisition, C.S., D.S. and R.A.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

The current project was funded by Tecnológico de Monterrey and FEMSA Foundation (grant: CAMPUSCITY Project).

Conflicts of Interest

The authors declare no conflict of interest. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

| Acronym | Definition |
| ABC | Artificial Bee Colony |
| ACO | Ant Colony Optimization |
| ACROA | Artificial Chemical Reaction Optimization Algorithm |
| BA | Bat-inspired Algorithm |
| BBO | Biogeography-Based Optimizer |
| BH | Black Hole |
| BSS | Best-Suggested Solution |
| CS | Cuckoo Search |
| CSO | Curved Space Optimization |
| DM | Dehghani Method |
| DGO | Dice Game Optimizer |
| DGO | Darts Game Optimizer |
| DPO | Doctor and Patient Optimization |
| DE | Differential Evolution |
| DTO | Donkey Theorem Optimization |
| ES | Evolution Strategy |
| EPO | Emperor Penguin Optimizer |
| FOA | Following Optimization Algorithm |
| FGBO | Football Game Based Optimization |
| GA | Genetic Algorithm |
| GP | Genetic Programming |
| GO | Group Optimization |
| GOA | Grasshopper Optimization Algorithm |
| GSA | Gravitational Search Algorithm |
| GbSA | Galaxy-based Search Algorithm |
| GWO | Grey Wolf Optimizer |
| HOGO | Hide Objects Game Optimization |
| MLO | Multi Leader Optimizer |
| OSA | Orientation Search Algorithm |
| PSO | Particle Swarm Optimization |
| RSO | Rat Swarm Optimizer |
| RO | Ray Optimization |
| SHO | Spotted Hyena Optimizer |
| SGO | Shell Game Optimization |
| SWOA | Small World Optimization Algorithm |
| SS | Suggested Solution |
| TLBO | Teaching-Learning-Based Optimization |
| OF | Objective Function |

Appendix A

Table A1. Unimodal objective functions.
| Objective Function | Range |
| $F_1(x) = \sum_{i=1}^{m} x_i^2$ | $[-100, 100]^m$ |
| $F_2(x) = \sum_{i=1}^{m} \lvert x_i \rvert + \prod_{i=1}^{m} \lvert x_i \rvert$ | $[-10, 10]^m$ |
| $F_3(x) = \sum_{i=1}^{m} \left( \sum_{j=1}^{i} x_i \right)^2$ | $[-100, 100]^m$ |
| $F_4(x) = \max \{ \lvert x_i \rvert, \ 1 \le i \le m \}$ | $[-100, 100]^m$ |
| $F_5(x) = \sum_{i=1}^{m-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$ | $[-30, 30]^m$ |
| $F_6(x) = \sum_{i=1}^{m} \left( [x_i + 0.5] \right)^2$ | $[-100, 100]^m$ |
| $F_7(x) = \sum_{i=1}^{m} i x_i^4 + \mathrm{random}(0, 1)$ | $[-1.28, 1.28]^m$ |
Table A2. Multimodal objective functions.
| Objective Function | Range |
| $F_8(x) = \sum_{i=1}^{m} -x_i \sin\left( \sqrt{\lvert x_i \rvert} \right)$ | $[-500, 500]^m$ |
| $F_9(x) = \sum_{i=1}^{m} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]^m$ |
| $F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{m} \sum_{i=1}^{m} x_i^2} \right) - \exp\left( \tfrac{1}{m} \sum_{i=1}^{m} \cos(2\pi x_i) \right) + 20 + e$ | $[-3.2, 3.2]^m$ |
| $F_{11}(x) = \tfrac{1}{4000} \sum_{i=1}^{m} x_i^2 - \prod_{i=1}^{m} \cos\left( \tfrac{x_i}{\sqrt{i}} \right) + 1$ | $[-600, 600]^m$ |
| $F_{12}(x) = \tfrac{\pi}{m} \left\{ 10 \sin(\pi y_1) + \sum_{i=1}^{m-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_m - 1)^2 \right\} + \sum_{i=1}^{m} u(x_i, 10, 100, 4)$ | $[-50, 50]^m$ |
| $u(x_i, a, k, n) = \begin{cases} k (x_i - a)^n, & x_i > a \\ 0, & -a < x_i < a \\ k (-x_i - a)^n, & x_i < -a \end{cases}$ | |
| $F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{m-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_m - 1)^2 \left[ 1 + \sin^2(2\pi x_m) \right] \right\} + \sum_{i=1}^{m} u(x_i, 5, 100, 4)$ | $[-50, 50]^m$ |
Table A3. Multimodal objective functions with fixed dimension.
| Objective Function | Range |
| $F_{14}(x) = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | $[-65.53, 65.53]^2$ |
| $F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | $[-5, 5]^4$ |
| $F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | $[-5, 5]^2$ |
| $F_{17}(x) = \left( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \tfrac{1}{8\pi} \right) \cos x_1 + 10$ | $[-5, 10] \times [0, 15]$ |
| $F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | $[-5, 5]^2$ |
| $F_{19}(x) = - \sum_{i=1}^{4} c_i \exp\left( - \sum_{j=1}^{3} a_{ij} (x_j - P_{ij})^2 \right)$ | $[0, 1]^3$ |
| $F_{20}(x) = - \sum_{i=1}^{4} c_i \exp\left( - \sum_{j=1}^{6} a_{ij} (x_j - P_{ij})^2 \right)$ | $[0, 1]^6$ |
| $F_{21}(x) = - \sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]^4$ |
| $F_{22}(x) = - \sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]^4$ |
| $F_{23}(x) = - \sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | $[0, 10]^4$ |
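For reference, two of the simpler benchmark functions above coded in Python (a sketch only; x is assumed to be a NumPy array of length m):

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares, search range [-100, 100]^m, global minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    """F9: Rastrigin function, search range [-5.12, 5.12]^m, global minimum 0 at x = 0."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```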

References

  1. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Ramirez-Mendoza, R.A.; Samet, H.; Guerrero, J.M.; Dhiman, G. MLO: Multi leader optimizer. Int. J. Intell. Eng. Syst. 2020, 13, 364–373. [Google Scholar]
  2. Dhiman, G.; Garg, M.; Nagar, A.; Kumar, V.; Dehghani, M. A novel algorithm for global optimization: Rat swarm optimizer. J. Ambient Intell. Humaniz. Comput. 2020. [Google Scholar] [CrossRef]
  3. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  4. Dehghani, M.; Montazeri, Z.; Malik, O.P. Energy commitment: A planning of energy carrier based on energy consumption. Электрoтехника Электрoмеханика 2019, 4, 69–72. [Google Scholar] [CrossRef]
  5. Liu, J.; Dong, H.; Jin, T.; Liu, L.; Manouchehrinia, B.; Dong, Z. Optimization of hybrid energy storage systems for vehicles with dynamic on-off power loads using a nested formulation. Energies 2018, 11, 2699. [Google Scholar] [CrossRef] [Green Version]
  6. Carpinelli, G.; Mottola, F.; Proto, D.; Russo, A.; Varilone, P. A hybrid method for optimal siting and sizing of battery energy storage systems in unbalanced low voltage microgrids. Appl. Sci. 2018, 8, 455. [Google Scholar] [CrossRef] [Green Version]
  7. Ehsanifar, A.; Dehghani, M.; Allahbakhshi, M. Calculating the leakage inductance for transformer inter-turn fault detection using finite element method. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 1372–1377. [Google Scholar]
  8. Dehghani, M.; Montazeri, Z.; Malik, O.P. Optimal sizing and placement of capacitor banks and distributed generation in distribution systems using spring search algorithm. Int. J. Emerg. Electr. Power Syst. 2020, 21, 20190217. [Google Scholar] [CrossRef] [Green Version]
  9. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M.; Dhiman, G. A new methodology called dice game optimizer for capacitor placement in distribution systems. Электрoтехника Электрoмеханика 2020. [Google Scholar] [CrossRef] [Green Version]
  10. Dehbozorgi, S.; Ehsanifar, A.; Montazeri, Z.; Dehghani, M.; Seifi, A. Line loss reduction and voltage profile improvement in radial distribution networks using battery energy storage system. In Proceedings of the IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 215–219. [Google Scholar]
  11. Montazeri, Z.; Niknam, T. Optimal utilization of electrical energy from power plants based on final energy consumption using gravitational search algorithm. Электрoтехника Электрoмеханика 2018, 4, 70–73. [Google Scholar] [CrossRef] [Green Version]
  12. Dehghani, M.; Mardaneh, M.; Montazeri, Z.; Ehsanifar, A.; Ebadi, M.; Grechko, O. Spring search algorithm for simultaneous placement of distributed generation and capacitors. Електрoтехніка Електрoмеханіка 2018, 6, 68–73. [Google Scholar] [CrossRef]
  13. Yu, J.; Kim, C.-H.; Wadood, A.; Khurshiad, T.; Rhee, S.-B. A novel multi-population based chaotic JAYA algorithm with application in solving economic load dispatch problems. Energies 2018, 11, 1946. [Google Scholar] [CrossRef] [Green Version]
  14. Sleesongsom, S.; Bureerat, S. Topology optimisation using MPBILs and multi-grid ground element. Appl. Sci. 2018, 8, 271. [Google Scholar] [CrossRef] [Green Version]
  15. Dehghani, M.; Montazeri, Z.; Ehsanifar, A.; Seifi, A.; Ebadi, M.; Grechko, O. Planning of energy carriers based on final energy consumption using dynamic programming and particle swarm optimization. Электрoтехника Электрoмеханика 2018, 5, 62–71. [Google Scholar] [CrossRef] [Green Version]
  16. Montazeri, Z.; Niknam, T. Energy carriers management based on energy consumption. In Proceedings of the IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 539–543. [Google Scholar]
  17. Dehghani, M.; Mardaneh, M.; Malik, O.P.; Guerrero, J.M.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Matas, J.; Abusorrah, A. Energy commitment for a power system supplied by a multiple energy carriers system using following optimization algorithm. Appl. Sci. 2020, 10, 5862. [Google Scholar] [CrossRef]
  18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 1995; IEEE Service Center: Piscataway, NJ, USA, 1995; pp. 1942–1948. [Google Scholar]
  19. Dorigo, M.; Stützle, T. Ant colony optimization: Overview and recent advances. In Handbook of Metaheuristics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–351. [Google Scholar]
  20. Ning, J.; Zhang, C.; Sun, P.; Feng, Y. Comparative study of ant colony algorithms for multi-objective optimization. Information 2019, 10, 11. [Google Scholar] [CrossRef] [Green Version]
  21. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  22. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Malik, O.P. GO: Group optimization. Gazi Univ. J. Sci. 2020, 33, 381–392. [Google Scholar] [CrossRef]
  23. Karaboga, D.; Basturk, B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In International Fuzzy Systems Association World Congress; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798. [Google Scholar]
  24. Dehghani, M.; Mardaneh, M.; Malik, O. FOA: ‘Following’ optimization algorithm for solving power engineering optimization problems. J. Oper. Autom. Power Eng. 2020, 8, 57–64. [Google Scholar]
  25. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  26. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl. Based Syst. 2018, 159, 20–50. [Google Scholar] [CrossRef]
  27. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  28. Dehghani, M.; Mardaneh, M.; Malik, O.P.; NouraeiPour, S.M. DTO: Donkey Theorem Optimization. In Proceedings of the 27th Iranian Conference on Electrical Engineering (ICEE), Yazd, Iran, 30 April–2 May 2019; pp. 1855–1859. [Google Scholar]
  29. Sarzaeim, P.; Bozorg-Haddad, O.; Chu, X. Teaching-learning-based optimization (TLBO) algorithm. In Advanced Optimization by Nature-Inspired Algorithms; Springer: Berlin/Heidelberg, Germany, 2018; pp. 51–58. [Google Scholar]
  30. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  31. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Ramirez-Mendoza, R.A.; Matas, J.; Vasquez, J.C.; Parra-Arroyo, L. A new “Doctor and Patient” optimization algorithm: An application to energy commitment problem. Appl. Sci. 2020, 10, 5791. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  33. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M. HOGO: Hide objects game optimization. Int. J. Intell. Eng. Syst. 2020, 13, 216–225. [Google Scholar]
  34. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Ehsanifar, A.; Dehghani, A. OSA: Orientation search algorithm. Int. J. Ind. Electr. Control Optim. 2019, 2, 99–112. [Google Scholar]
  35. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Dhiman, G.; Kumar, V. BOSA: Binary orientation search algorithm. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 5306–5310. [Google Scholar]
  36. Dehghani, M.; Montazeri, Z.; Malik, O.P. DGO: Dice game optimizer. Gazi Univ. J. Sci. 2019, 32, 871–882. [Google Scholar] [CrossRef] [Green Version]
  37. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Givi, H.; Guerrero, J.M. Shell game optimization: A novel game-based algorithm. Int. J. Intell. Eng. Syst. 2020, 13, 246–255. [Google Scholar] [CrossRef]
  38. Dehghani, M.; Montazeri, Z.; Givi, H.; Guerrero, J.M.; Dhiman, G. Darts game optimizer: A new optimization technique based on darts game. Int. J. Intell. Eng. Syst. 2020, 13, 286–294. [Google Scholar] [CrossRef]
  39. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Seifi, A. Spring search algorithm: A new meta-heuristic optimization algorithm inspired by Hooke’s law. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 210–214. [Google Scholar]
  40. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Nouri, N.; Seifi, A. BSSA: Binary spring search algorithm. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 220–224. [Google Scholar]
  41. Moghaddam, F.F.; Moghaddam, R.F.; Cheriet, M. Curved space optimization: A random search based on general relativity theory. arXiv 2012, arXiv:1208.2214. [Google Scholar]
  42. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  43. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294. [Google Scholar] [CrossRef]
  44. Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
  45. Shah-Hosseini, H. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimization. Int. J. Comput. Sci. Eng. 2011, 6, 132–140. [Google Scholar]
  46. Du, H.; Wu, X.; Zhuang, J. Small-world optimization algorithm for function optimization. In International Conference on Natural Computation; Springer: Berlin/Heidelberg, Germany, 2006; pp. 264–273. [Google Scholar] [CrossRef]
  47. Karkalos, N.E.; Markopoulos, A.P.; Davim, J.P. Evolutionary-based methods. In Computational Methods for Application in Industry 4.0; Springer: Berlin/Heidelberg, Germany, 2019; pp. 11–31. [Google Scholar] [CrossRef]
  48. Mirjalili, S. Biogeography-based optimisation. In Evolutionary Algorithms and Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 57–72. [Google Scholar] [CrossRef] [Green Version]
  49. Storn, R.; Price, K. Differential evolution-A simple and efficient adaptive scheme for global optimization over continuous spaces. Berkeley ICSI 1995, 11, 341–359. [Google Scholar] [CrossRef]
  50. Tang, K.-S.; Man, K.-F.; Kwong, S.; He, Q. Genetic algorithms and their applications. IEEE Signal Process. Mag. 1996, 13, 22–37. [Google Scholar] [CrossRef]
  51. Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  52. Koza, J.R. Genetic Programming: A Paradigm for Genetically Breeding Populations of Computer Programs to Solve Problems. PhD Thesis, Stanford University, Stanford, CA, USA, 1990. [Google Scholar]
  53. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  54. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.P.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the Dehghani method (DM).
Figure 2. Flowchart of the implementation of the DM for modifying optimization algorithms.
Figure 3. The convergence curves of GA for the original and "modified by DM" versions.
Figure 4. The convergence curves of PSO for the original and "modified by DM" versions.
Figure 5. The convergence curves of GSA for the original and "modified by DM" versions.
Figure 6. The convergence curves of TLBO for the original and "modified by DM" versions.
Figure 7. The convergence curves of GWO for the original and "modified by DM" versions.
Table 1. Optimization algorithms.

Swarm-based. General description: designed based on the simulation of the behavior of living things such as animals and plants, and of other swarm-based phenomena.
  • Particle Swarm Optimization (PSO) [18], Ant Colony Optimization (ACO) [19,20], Spotted Hyena Optimizer (SHO) [21], Group Optimization (GO) [22], Artificial Bee Colony (ABC) [23], Following Optimization Algorithm (FOA) [24], Rat Swarm Optimizer (RSO) [2], Multi Leader Optimizer (MLO) [1], Bat-inspired Algorithm (BA) [25], Emperor Penguin Optimizer (EPO) [26], Cuckoo Search (CS) [27], Donkey Theorem Optimization (DTO) [28], Teaching-Learning-Based Optimization (TLBO) Algorithm [29], Grasshopper Optimization Algorithm (GOA) [30], Doctor and Patient Optimization (DPO) [31], Grey Wolf Optimizer (GWO) [32]
  Ref. [18]: the most widely used algorithm in this group, designed based on modeling the movement of birds.
  Refs. [19,20]: based on modeling the discovery of the shortest path by ants.
  Ref. [29]: has gained wide acceptance among optimization researchers. This algorithm is inspired by the teaching-learning process and is based on the influence of a teacher on the output of learners in a class.
  Ref. [31]: designed by simulating the process of treating patients by a physician. The treatment process has three phases: vaccination, drug administration, and surgery.
  Ref. [32]: mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves, alpha, beta, delta, and omega, are employed to simulate the leadership hierarchy.

Game-based. General description: designed based on the simulation of different processes and rules of individual and group games.
  • Football Game-Based Optimization (FGBO) [3], Hide Objects Game Optimization (HOGO) [33], Orientation Search Algorithm (OSA) [34,35], Dice Game Optimizer (DGO) [36], Shell Game Optimization (SGO) [37], Darts Game Optimizer (DGO) [38]
  Ref. [3]: designed based on mathematical modeling of football league rules and the behaviors of football players and clubs.
  Ref. [33]: based on the simulation of the players' behaviors and their attempts to find a hidden object in the hide-object game.

Physics-based. General description: designed based on the ideation of various laws of physics.
  • Spring Search Algorithm (SSA) [39,40], Curved Space Optimization (CSO) [41], Black Hole (BH) [42], Ray Optimization (RO) [43] algorithm, Artificial Chemical Reaction Optimization Algorithm (ACROA) [44], Galaxy-based Search Algorithm (GbSA) [45], and Small World Optimization Algorithm (SWOA) [46]
  Refs. [39,40]: the simulation of Hooke's law between a number of weights and springs is used.

Evolutionary-based. General description: involve the evolution of a population in order to create new generations of genetically superior individuals [47].
  • Biogeography-Based Optimizer (BBO) [48], Differential Evolution (DE) [49], Genetic Algorithm (GA) [50], Evolution Strategy (ES) [51], and Genetic Programming (GP) [52].
  Ref. [50]: has found wide acceptance in many disciplines, with applications to environmental science problems. This algorithm is an optimization tool that mimics natural selection and genetics.
Table 2. Optimization results of genetic algorithm (GA) for original and "modified by DM" versions.
| Function | Avg (Original) | Avg (DM) | std (Original) | std (DM) |
| F1 | 13.2405 | 1.6151 × 10−11 | 4.7664 × 10−15 | 1.1560 × 10−26 |
| F2 | 2.4794 | 4.5161 × 10−35 | 2.2342 × 10−15 | 7.1717 × 10−51 |
| F3 | 1536.8963 | 2.0620 × 10−16 | 6.6095 × 10−13 | 6.6148 × 10−32 |
| F4 | 2.0942 | 6.6058 × 10−15 | 2.2342 × 10−15 | 8.1141 × 10−30 |
| F5 | 310.4273 | 5.9802 | 2.0972 × 10−13 | 6.3552 × 10−25 |
| F6 | 14.55 | 0 | 3.1776 × 10−15 | 0 |
| F7 | 5.6799 × 10−03 | 2.6009 × 10−5 | 7.7579 × 10−19 | 1.6364 × 10−29 |
| F8 | −8184.4142 | −12569.4850 | 8.3381 × 10−12 | 1.2202 × 10−21 |
| F9 | 62.4114 | 0.0019 | 2.5421 × 10−14 | 3.8789 × 10−19 |
| F10 | 3.2218 | 0.0127 | 5.1636 × 10−15 | 0 |
| F11 | 1.2302 | 0.0272 | 8.4406 × 10−16 | 3.1031 × 10−18 |
| F12 | 0.0470 | 8.2396 × 10−7 | 4.6547 × 10−18 | 3.3145 × 10−22 |
| F13 | 1.2085 | 2.8065 × 10−5 | 3.2272 × 10−16 | 2.1213 × 10−20 |
| F14 | 0.9986 | 0.9980 | 1.5640 × 10−15 | 3.4755 × 10−16 |
| F15 | 5.3952 × 10−2 | 3.3882 × 10−03 | 7.0791 × 10−18 | 2.0364 × 10−18 |
| F16 | −1.0316 | −1.0316 | 7.9441 × 10−16 | 5.9580 × 10−16 |
| F17 | 0.4369 | 0.3984 | 4.9650 × 10−17 | 4.9650 × 10−17 |
| F18 | 4.3592 | 3.0000 | 5.9580 × 10−16 | 2.0853 × 10−15 |
| F19 | −3.85434 | −3.8627 | 9.9301 × 10−17 | 9.9301 × 10−16 |
| F20 | −2.8239 | −3.2387 | 3.97205 × 10−16 | 1.7874 × 10−15 |
| F21 | −4.3040 | −10.1522 | 1.5888 × 10−15 | 1.5888 × 10−15 |
| F22 | −5.1174 | −10.4016 | 1.2909 × 10−15 | 1.6682 × 10−14 |
| F23 | −6.5621 | −10.5362 | 3.8727 × 10−15 | 7.5469 × 10−15 |
Table 3. Optimization results of particle swarm optimization (PSO) for original and "modified by DM" versions.
| Function | Avg (Original) | Avg (DM) | std (Original) | std (DM) |
| F1 | 1.7740 × 10−5 | 6.746 × 10−218 | 6.4396 × 10−21 | 0 |
| F2 | 0.3411 | 2.9565 × 10−111 | 7.4476 × 10−17 | 1.0736 × 10−126 |
| F3 | 589.4920 | 1.9627 × 10−70 | 7.1179 × 10−13 | 5.0357 × 10−86 |
| F4 | 3.9634 | 1.5239 × 10−97 | 1.9860 × 10−16 | 5.8112 × 10−113 |
| F5 | 50.26245 | 2.3706 × 10−13 | 1.5888 × 10−14 | 1.9191 × 10−28 |
| F6 | 20.25 | 0 | 0 | 0 |
| F7 | 0.1134 | 0.0110 | 4.3444 × 10−17 | 1.2412 × 10−27 |
| F8 | −6908.6558 | −12516.1893 | 1.0168 × 10−12 | 9.3549 × 10−19 |
| F9 | 57.0613 | 1.5631 × 10−13 | 6.3552 × 10−15 | 0 |
| F10 | 2.1546 | 4.7784 × 10−14 | 7.9441 × 10−16 | 1.1289 × 10−29 |
| F11 | 0.0462 | 0.0214 | 3.1031 × 10−18 | 3.1031 × 10−23 |
| F12 | 0.4806 | 1.5705 × 10−32 | 1.8619 × 10−16 | 1.2239 × 10−47 |
| F13 | 0.5084 | 1.3497 × 10−32 | 4.9650 × 10−17 | 1.2239 × 10−47 |
| F14 | 2.1735 | 0.9980 | 7.9441 × 10−16 | 5.4615 × 10−19 |
| F15 | 0.0535 | 0.0034 | 3.8789 × 10−19 | 2.0849 × 10−18 |
| F16 | −1.0316 | −1.0316 | 3.4755 × 10−16 | 4.4685 × 10−21 |
| F17 | 0.7854 | 0.4076 | 4.9650 × 10−17 | 0 |
| F18 | 3 | 3 | 3.6741 × 10−15 | 2.5818 × 10−19 |
| F19 | −3.8627 | −3.8627 | 8.9371 × 10−15 | 9.0364 × 10−21 |
| F20 | −3.2619 | −3.2744 | 2.9790 × 10−16 | 3.9720 × 10−21 |
| F21 | −5.3891 | −10.1532 | 1.4895 × 10−15 | 3.4755 × 10−15 |
| F22 | −7.6323 | −10.4029 | 1.5888 × 10−15 | 1.2909 × 10−15 |
| F23 | −6.1648 | −10.5364 | 2.7804 × 10−15 | 7.6462 × 10−15 |
Table 4. Optimization results of gravitational search algorithm (GSA) for original and "modified by DM" versions.
| Function | Avg (Original) | Avg (DM) | std (Original) | std (DM) |
| F1 | 2.0255 × 10−17 | 1.6060 × 10−157 | 1.1369 × 10−32 | 0 |
| F2 | 2.3702 × 10−08 | 2.4203 × 10−80 | 5.1789 × 10−24 | 8.3748 × 10−96 |
| F3 | 279.3439 | 1.4902 × 10−30 | 1.2075 × 10−13 | 5.4834 × 10−46 |
| F4 | 3.2547 × 10−9 | 7.1301 × 10−71 | 2.0346 × 10−24 | 3.5969 × 10−87 |
| F5 | 36.10695 | 23.1212 | 3.0982 × 10−14 | 5.5608 × 10−15 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.0206 | 7.4501 × 10−4 | 2.7152 × 10−18 | 1.9394 × 10−29 |
| F8 | −2849.0724 | −12532.65497 | 1.0168 × 10−12 | 2.84717 × 10−12 |
| F9 | 16.2675 | 0 | 3.1776 × 10−15 | 0 |
| F10 | 3.5673 × 10−09 | 4.3396 × 10−13 | 3.6992 × 10−25 | 9.0314 × 10−29 |
| F11 | 3.7375 | 0.0311 | 2.7804 × 10−15 | 0 |
| F12 | 0.0362 | 2.319 × 10−27 | 6.2063 × 10−18 | 1.0829 × 10−42 |
| F13 | 0.0020 | 2.5758 × 10−26 | 4.2617 × 10−14 | 7.7006 × 10−42 |
| F14 | 3.5913 | 0.9980 | 7.9441 × 10−16 | 4.2203 × 10−16 |
| F15 | 0.0024 | 8.0939 × 10−4 | 2.9092 × 10−19 | 3.5153 × 10−28 |
| F16 | −1.0316 | −1.0316 | 5.9580 × 10−16 | 6.4545 × 10−34 |
| F17 | 0.3978 | 0.3978 | 9.9301 × 10−17 | 0 |
| F18 | 3 | 3 | 6.9511 × 10−16 | 1.2909 × 10−35 |
| F19 | −3.8627 | −3.8627 | 8.3413 × 10−15 | 6.3248 × 10−29 |
| F20 | −3.0396 | −3.3219 | 2.1846 × 10−14 | 1.9860 × 10−25 |
| F21 | −5.1486 | −10.1531 | 2.9790 × 10−16 | 1.1916 × 10−24 |
| F22 | −9.0239 | −10.4029 | 1.6484 × 10−12 | 1.3505 × 10−34 |
| F23 | −8.9045 | −10.5364 | 7.1497 × 10−14 | 5.95808 × 10−45 |
Table 5. Optimization results of teaching-learning-based optimization (TLBO) for original and "modified by DM" versions.
| Function | Avg (Original) | Avg (DM) | std (Original) | std (DM) |
| F1 | 8.3373 × 10−60 | 1.1627 × 10−157 | 4.9436 × 10−76 | 0 |
| F2 | 7.1704 × 10−35 | 1.9426 × 10−80 | 6.6936 × 10−50 | 1.2562 × 10−96 |
| F3 | 2.7531 × 10−15 | 1.0076 × 10−30 | 2.6459 × 10−31 | 5.8751 × 10−46 |
| F4 | 9.4199 × 10−15 | 5.6754 × 10−71 | 2.1167 × 10−30 | 2.8775 × 10−86 |
| F5 | 146.4564 | 21.4361 | 1.9065 × 10−14 | 2.0654 × 10−21 |
| F6 | 0.4435 | 0 | 4.2203 × 10−16 | 0 |
| F7 | 0.0017 | 3.4102 × 10−4 | 3.87896 × 10−19 | 2.4849 × 10−27 |
| F8 | −7408.6107 | −12569.4866 | 3.0505 × 10−12 | 1.6269 × 10−11 |
| F9 | 10.2485 | 0 | 5.5608 × 10−15 | 0 |
| F10 | 0.2757 | 4.7961 × 10−15 | 2.5641 × 10−15 | 1.4111 × 10−30 |
| F11 | 0.6082 | 0.0182 | 1.9860 × 10−16 | 6.2063 × 10−18 |
| F12 | 0.0203 | 1.5705 × 10−32 | 7.7579 × 10−19 | 1.2239 × 10−47 |
| F13 | 0.3293 | 1.3497 × 10−32 | 2.1101 × 10−16 | 1.2239 × 10−47 |
| F14 | 2.2721 | 0.9980 | 1.9860 × 10−16 | 0 |
| F15 | 0.0033 | 3.9560 × 10−4 | 1.2218 × 10−17 | 0 |
| F16 | −1.0316 | −1.0316 | 1.4398 × 10−15 | 9.9301 × 10−19 |
| F17 | 0.3978 | 0.4085 | 7.4476 × 10−17 | 9.9301 × 10−17 |
| F18 | 3.0009 | 3 | 1.5888 × 10−15 | 1.3902 × 10−26 |
| F19 | −3.8609 | −3.8627 | 7.3483 × 10−15 | 9.9301 × 10−45 |
| F20 | −3.2014 | −3.3104 | 1.7874 × 10−15 | 9.9301 × 10−18 |
| F21 | −9.1746 | −10.1531 | 8.5399 × 10−15 | 0 |
| F22 | −10.0389 | −10.4029 | 1.5292 × 10−14 | 0 |
| F23 | −9.2905 | −10.5364 | 1.1916 × 10−15 | 0 |
Table 6. Optimization results of grey wolf optimization (GWO) for original and "modified by DM" versions.
| Function | Avg (Original) | Avg (DM) | std (Original) | std (DM) |
| F1 | 1.09 × 10−58 | 2.84 × 10−278 | 5.1413 × 10−74 | 0 |
| F2 | 1.2952 × 10−34 | 1.6523 × 10−137 | 1.9127 × 10−50 | 1.2275 × 10−152 |
| F3 | 7.4091 × 10−15 | 1.0362 × 10−30 | 5.6446 × 10−30 | 1.2533 × 10−45 |
| F4 | 1.2599 × 10−14 | 2.5914 × 10−47 | 1.0583 × 10−29 | 2.1742 × 10−63 |
| F5 | 26.8607 | 4.9282 | 0 | 0 |
| F6 | 0.6423 | 9.6762 × 10−09 | 6.2063 × 10−17 | 7.3985 × 10−24 |
| F7 | 0.0008 | 0.0005 | 7.2730 × 10−20 | 1.9394 × 10−29 |
| F8 | −5885.1172 | −11901.9832 | 2.0336 × 10−12 | 4.8808 × 10−14 |
| F9 | 8.5265 × 10−15 | 0 | 5.6446 × 10−30 | 0 |
| F10 | 1.7053 × 10−14 | 1.0835 × 10−14 | 2.7517 × 10−29 | 2.8223 × 10−30 |
| F11 | 0.0037 | 0.0014 | 1.2606 × 10−18 | 0 |
| F12 | 0.0372 | 1.0468 × 10−09 | 4.3444 × 10−17 | 3.2368 × 10−25 |
| F13 | 0.5763 | 9.4403 × 10−09 | 2.4825 × 10−16 | 3.6992 × 10−24 |
| F14 | 3.7408 | 1.3948 | 6.4545 × 10−15 | 8.44062 × 10−16 |
| F15 | 0.0063 | 0.0043 | 1.1636 × 10−18 | 3.10317 × 10−28 |
| F16 | −1.0316 | −1.0316 | 3.9720 × 10−16 | 5.9580 × 10−26 |
| F17 | 0.3978 | 0.3978 | 8.6888 × 10−17 | 1.2412 × 10−19 |
| F18 | 3.0000 | 3.0000 | 2.0853 × 10−15 | 2.0853 × 10−18 |
| F19 | −3.8621 | −3.8627 | 2.4825 × 10−15 | 0 |
| F20 | −3.2523 | −3.2982 | 2.1846 × 10−15 | 1.8867 × 10−30 |
| F21 | −9.6452 | −10.1531 | 6.5538 × 10−15 | 1.1916 × 10−32 |
| F22 | −10.4025 | −10.4029 | 1.9860 × 10−15 | 1.1519 × 10−25 |
| F23 | −10.1302 | −10.5364 | 4.5678 × 10−15 | 2.7804 × 10−20 |
Table 7. Computational time analysis on unimodal objective functions (second).
| Function | Metric | GA | MGA | PSO | MPSO | GSA | MGSA | TLBO | MTLBO | GWO | MGWO |
| F1 | P.I. | 0.0025 | 0.0079 | 0.0011 | 0.0039 | 0.0105 | 0.0129 | 0.0055 | 0.0063 | 0.0025 | 0.0042 |
| F1 | P.C.R. | 2.5218 | 7.9613 | 1.1309 | 3.9634 | 10.5790 | 12.9160 | 5.5465 | 6.3235 | 2.5969 | 4.2516 |
| F1 | O.T.S. | 0.8699 | | 0.2407 | | 2.5412 | | 2.3713 | | 1.8153 | |
| F2 | P.I. | 0.0024 | 0.0103 | 0.0019 | 0.0050 | 0.0107 | 0.0138 | 0.0056 | 0.0057 | 0.0027 | 0.0031 |
| F2 | P.C.R. | 2.4125 | 10.3794 | 1.1945 | 5.0867 | 10.7194 | 13.8475 | 5.6229 | 5.7781 | 2.7101 | 3.1541 |
| F2 | O.T.S. | 0.1031 | | 0.2148 | | 4.2400 | | 2.1668 | | 1.1421 | |
| F3 | P.I. | 0.0071 | 0.1630 | 0.0035 | 0.0784 | 0.0126 | 0.0912 | 0.0115 | 0.0851 | 0.0075 | 0.0918 |
| F3 | P.C.R. | 7.1174 | 163.0316 | 3.5810 | 78.4169 | 12.5987 | 91.2295 | 11.5333 | 85.1981 | 7.5661 | 91.8338 |
| F3 | O.T.S. | 3.7623 | | 3.5364 | | 5.5923 | | 44.5162 | | 37.2164 | |
| F4 | P.I. | 0.0023 | 0.0082 | 0.0090 | 0.0038 | 0.0102 | 0.0133 | 0.0035 | 0.0065 | 0.0026 | 0.0075 |
| F4 | P.C.R. | 2.3254 | 8.2181 | 0.9058 | 3.8641 | 10.2537 | 13.3495 | 3.5392 | 6.5216 | 2.6966 | 7.5614 |
| F4 | O.T.S. | 0.2793 | | 0.0721 | | 3.5859 | | 1.5757 | | 1.2614 | |
| F5 | P.I. | 0.0031 | 0.0300 | 0.0012 | 0.0138 | 0.0106 | 0.0218 | 0.0041 | 0.0177 | 0.0035 | 0.0015 |
| F5 | P.C.R. | 3.1018 | 30.0027 | 1.2819 | 13.8921 | 10.6810 | 21.8228 | 4.1720 | 17.7407 | 3.5823 | 15.2196 |
| F5 | O.T.S. | 0.2677 | | 0.2105 | | 9.8639 | | 0.2629 | | 6.1547 | |
| F6 | P.I. | 0.0024 | 0.0114 | 0.0007 | 0.0048 | 0.0101 | 0.0146 | 0.0032 | 0.0069 | 0.0027 | 0.0114 |
| F6 | P.C.R. | 2.4156 | 11.4786 | 0.7947 | 4.8365 | 10.1832 | 14.6262 | 3.2010 | 6.9231 | 2.7229 | 11.4774 |
| F6 | O.T.S. | 0.1558 | | 0.0870 | | 0.5940 | | 0.8843 | | 0.2376 | |
| F7 | P.I. | 0.0047 | 0.0785 | 0.0020 | 0.0381 | 0.0107 | 0.0493 | 0.0067 | 0.0428 | 0.0049 | 0.0390 |
| F7 | P.C.R. | 4.7185 | 78.5709 | 2.0728 | 38.1957 | 10.7416 | 49.3055 | 6.7193 | 42.8670 | 4.9646 | 39.0686 |
| F7 | O.T.S. | 4.3080 | | 0.1156 | | 0.9553 | | 6.5309 | | 25.3156 | |
Table 8. Computational time analysis on multimodal objective functions (second).
| Function | Metric | GA | MGA | PSO | MPSO | GSA | MGSA | TLBO | MTLBO | GWO | MGWO |
| F8 | P.I. | 0.0037 | 0.0320 | 0.0015 | 0.0145 | 0.0108 | 0.0261 | 0.0049 | 0.0175 | 0.0034 | 0.0260 |
| F8 | P.C.R. | 3.7004 | 32.0411 | 1.5194 | 14.5251 | 10.8230 | 26.1593 | 4.9084 | 17.5273 | 3.4282 | 26.0069 |
| F8 | O.T.S. | 0 | | 0 | | 0 | | 0 | | 0 | |
| F9 | P.I. | 0.0029 | 0.0128 | 0.0011 | 0.0057 | 0.0103 | 0.0160 | 0.0041 | 0.0084 | 0.0031 | 0.0154 |
| F9 | P.C.R. | 2.9477 | 12.8883 | 1.1813 | 5.7547 | 10.3758 | 16.0688 | 4.1855 | 8.4233 | 3.1391 | 15.3918 |
| F9 | O.T.S. | 0 | | 0.0404 | | 0.4220 | | 0.1464 | | 0.3783 | |
| F10 | P.I. | 0.0028 | 0.0149 | 0.0014 | 0.0069 | 0.0105 | 0.0168 | 0.0036 | 0.0092 | 0.0031 | 0.0145 |
| F10 | P.C.R. | 2.8721 | 14.9436 | 1.4455 | 6.9011 | 10.5083 | 16.8567 | 3.6440 | 9.2561 | 3.1046 | 14.5384 |
| F10 | O.T.S. | 0.1703 | | 0.0755 | | 5.2755 | | 0.4661 | | 1.3942 | |
| F11 | P.I. | 0.0038 | 0.0403 | 0.0015 | 0.0196 | 0.0110 | 0.0288 | 0.0049 | 0.0212 | 0.0039 | 0.0289 |
| F11 | P.C.R. | 3.8680 | 40.3218 | 1.5616 | 19.6877 | 11.0021 | 28.8871 | 4.9292 | 21.2885 | 3.9092 | 28.9343 |
| F11 | O.T.S. | 0.3821 | | 0.6098 | | 0.6825 | | 0.3316 | | 1.7435 | |
| F12 | P.I. | 0.0102 | 0.2506 | 0.0047 | 0.1188 | 0.0135 | 0.1307 | 0.0148 | 0.1294 | 0.0111 | 0.1383 |
| F12 | P.C.R. | 10.2150 | 250.5924 | 4.7914 | 118.8262 | 13.5497 | 130.7448 | 14.8773 | 129.4573 | 11.1936 | 138.3059 |
| F12 | O.T.S. | 0.8169 | | 0.1446 | | 3.1468 | | 1.3416 | | 1.5073 | |
| F13 | P.I. | 0.0099 | 0.2358 | 0.0046 | 0.1201 | 0.0141 | 0.1276 | 0.0145 | 0.1311 | 0.0104 | 0.1414 |
| F13 | P.C.R. | 9.9011 | 235.8891 | 4.6563 | 120.1492 | 14.1304 | 127.6587 | 14.5221 | 131.1560 | 10.4009 | 141.4475 |
| F13 | O.T.S. | 1.0468 | | 0.3086 | | 2.0328 | | 0.9692 | | 1.2553 | |
Table 9. Computational time analysis on fixed-dimension multimodal objective functions (second).
| Function | Metric | GA | MGA | PSO | MPSO | GSA | MGSA | TLBO | MTLBO | GWO | MGWO |
| F14 | P.I. | 0.0010 | 0.0472 | 0.0082 | 0.0249 | 0.0104 | 0.0264 | 0.0269 | 0.0434 | 0.0161 | 0.0328 |
| F14 | P.C.R. | 1.0468 | 47.2794 | 8.2528 | 24.9162 | 10.4771 | 26.4766 | 26.9827 | 43.4908 | 16.1510 | 32.8053 |
| F14 | O.T.S. | 0.6287 | | 0.1399 | | 0.4855 | | 0.2652 | | 0.3096 | |
| F15 | P.I. | 0.0023 | 0.0034 | 0.0007 | 0.0013 | 0.0038 | 0.0047 | 0.0038 | 0.0039 | 0.0014 | 0.0029 |
| F15 | P.C.R. | 2.3853 | 3.4227 | 0.7764 | 1.3148 | 3.8712 | 4.7457 | 3.8933 | 3.9249 | 1.4375 | 2.9109 |
| F15 | O.T.S. | 0.1309 | | 0 | | 0.9101 | | 0.3985 | | 0.1430 | |
| F16 | P.I. | 0.0020 | 0.0031 | 0.0006 | 0.0008 | 0.0035 | 0.0038 | 0.0029 | 0.0033 | 0.0011 | 0.0018 |
| F16 | P.C.R. | 2.0042 | 3.1025 | 0.6528 | 0.8631 | 3.5103 | 3.8281 | 2.9861 | 3.3501 | 1.1685 | 1.8361 |
| F16 | O.T.S. | 0.2672 | | 0.0442 | | 0.5542 | | 0.1176 | | 0.1103 | |
| F17 | P.I. | 0.0019 | 0.0021 | 0.0005 | 0.0006 | 0.0033 | 0.0038 | 0.0027 | 0.0029 | 0.0010 | 0.0014 |
| F17 | P.C.R. | 1.9120 | 2.1799 | 0.5358 | 0.5988 | 3.3251 | 3.8600 | 2.7069 | 2.9407 | 1.0611 | 1.4350 |
| F17 | O.T.S. | 0.1188 | | 0 | | 0.6116 | | 0.2421 | | 0.2306 | |
| F18 | P.I. | 0.0018 | 0.0025 | 0.0004 | 0.0006 | 0.0035 | 0.0037 | 0.0029 | 0.0030 | 0.0010 | 0.0015 |
| F18 | P.C.R. | 1.8757 | 2.5166 | 0.4211 | 0.6029 | 3.4917 | 3.7297 | 2.9081 | 3.0689 | 1.0903 | 1.5060 |
| F18 | O.T.S. | 0.0979 | | 0.0576 | | 1.1963 | | 0.1772 | | 0.2609 | |
| F19 | P.I. | 0.0025 | 0.0037 | 0.0008 | 0.0014 | 0.0037 | 0.0048 | 0.0035 | 0.0041 | 0.0016 | 0.0026 |
| F19 | P.C.R. | 2.5233 | 3.7541 | 0.8540 | 1.4255 | 3.7397 | 4.8479 | 3.5075 | 4.1393 | 1.6122 | 2.6632 |
| F19 | O.T.S. | 0.4878 | | 0.0811 | | 1.4218 | | 0.0983 | | 0.4146 | |
| F20 | P.I. | 0.0028 | 0.0051 | 0.0009 | 0.0021 | 0.0045 | 0.0056 | 0.0036 | 0.0083 | 0.0017 | 0.0042 |
| F20 | P.C.R. | 2.8411 | 5.1640 | 0.9246 | 2.1016 | 4.5773 | 5.6775 | 3.6700 | 4.8358 | 1.7797 | 4.2165 |
| F20 | O.T.S. | 0 | | 0.0846 | | 0 | | 0.1673 | | 0.1526 | |
| F21 | P.I. | 0.0027 | 0.0056 | 0.0010 | 0.0023 | 0.0044 | 0.0057 | 0.0041 | 0.0054 | 0.0020 | 0.0045 |
| F21 | P.C.R. | 2.7466 | 5.6009 | 1.0173 | 2.3777 | 4.4041 | 5.7909 | 4.1728 | 5.4967 | 2.0670 | 4.5858 |
| F21 | O.T.S. | 0.1158 | | 0.0913 | | 0 | | 2.1696 | | 0.5408 | |
| F22 | P.I. | 0.0030 | 0.0071 | 0.0010 | 0.0030 | 0.0041 | 0.0064 | 0.0048 | 0.0065 | 0.0022 | 0.0050 |
| F22 | P.C.R. | 3.0287 | 7.1416 | 1.0454 | 3.0463 | 4.1996 | 6.4820 | 4.8034 | 6.5908 | 2.2729 | 5.0265 |
| F22 | O.T.S. | 0.1123 | | 0.3930 | | 1.2738 | | 0.4395 | | 0.6946 | |
| F23 | P.I. | 0.0035 | 0.0089 | 0.0012 | 0.0039 | 0.0045 | 0.0075 | 0.0051 | 0.0080 | 0.0026 | 0.0065 |
| F23 | P.C.R. | 3.5326 | 8.9343 | 1.2184 | 3.9863 | 4.5094 | 7.5026 | 5.1003 | 8.0755 | 2.6291 | 6.5630 |
| F23 | O.T.S. | 2.0295 | | 0.0757 | | 1.2981 | | 0.2980 | | 0.7469 | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Dehghani, M.; Montazeri, Z.; Dehghani, A.; Samet, H.; Sotelo, C.; Sotelo, D.; Ehsanifar, A.; Malik, O.P.; Guerrero, J.M.; Dhiman, G.; et al. DM: Dehghani Method for Modifying Optimization Algorithms. Appl. Sci. 2020, 10, 7683. https://doi.org/10.3390/app10217683
