*Article* **MFO-SFR: An Enhanced Moth-Flame Optimization Algorithm Using an Effective Stagnation Finding and Replacing Strategy**

**Mohammad H. Nadimi-Shahraki 1,2,\*, Hoda Zamani 1,2, Ali Fatahi 1,2 and Seyedali Mirjalili 3,4,\***


**Abstract:** Moth-flame optimization (MFO) is a prominent problem solver with a simple structure that is widely used to solve different optimization problems. However, MFO and its variants inherently suffer from poor population diversity, leading to premature convergence to local optima and a loss of solution quality. To overcome these limitations, an enhanced moth-flame optimization algorithm named MFO-SFR was developed to solve global optimization problems. The MFO-SFR algorithm introduces an effective stagnation finding and replacing (SFR) strategy to maintain population diversity throughout the optimization process. The SFR strategy finds stagnant solutions using a distance-based technique and replaces them with a solution selected from an archive constructed from previous solutions. The effectiveness of the proposed MFO-SFR algorithm was extensively assessed in 30 and 50 dimensions using the CEC 2018 benchmark functions, which simulate unimodal, multimodal, hybrid, and composition problems. The obtained results were then compared with two sets of competitors. In the first comparative set, the MFO algorithm and its well-known variants, specifically LMFO, WCMFO, CMFO, ODSFMFO, SMFO, and WMFO, were considered. Five state-of-the-art metaheuristic algorithms, including PSO, KH, GWO, CSA, and HOA, were considered in the second comparative set. The results were then statistically analyzed through the Friedman test. Ultimately, the capacity of the proposed algorithm to solve mechanical engineering problems was evaluated with two problems from the latest CEC 2020 test suite. The experimental results and statistical analysis confirmed that the proposed MFO-SFR algorithm was superior to the MFO variants and state-of-the-art metaheuristic algorithms for solving complex global optimization problems, with 91.38% effectiveness.

**Keywords:** global optimization problems; metaheuristic algorithms; moth-flame optimization; premature convergence; population diversity

**MSC:** 68T20

**Citation:** Nadimi-Shahraki, M.H.; Zamani, H.; Fatahi, A.; Mirjalili, S. MFO-SFR: An Enhanced Moth-Flame Optimization Algorithm Using an Effective Stagnation Finding and Replacing Strategy. *Mathematics* **2023**, *11*, 862. https://doi.org/10.3390/math11040862

Academic Editor: Jian Dong

Received: 2 December 2022; Revised: 22 January 2023; Accepted: 3 February 2023; Published: 8 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

Global optimization problems are complex and characterized by various properties: they can be non-linear, non-separable, symmetric or asymmetric, smooth with narrow ridges, unimodal or multimodal, and can involve non-differentiable functions and high dimensionality [1,2]. These properties create challenges for existing optimization algorithms, and finding the global optimum is one of the long-standing goals in this area of study. To overcome such challenges, a series of metaheuristic algorithms have been introduced using various innovative approaches. Metaheuristic algorithms have exhibited impressive performance in exploring the problem space and approximating the promising regions in reasonable timeframes. They have been widely improved upon and adapted to solve optimization problems in diverse fields such as computer science [3,4], engineering [5,6], and medicine [7–9]. Metaheuristic algorithms can be classified into two groups: single-solution-based and population-based algorithms [10,11]. Single-solution-based metaheuristic algorithms are more oriented towards exploitation searches; they manipulate a single solution during the optimization process, which increases their potential to become stuck in local optima [12]. To address this challenge, population-based metaheuristic algorithms were developed to be more exploration-oriented and to share information in order to promote significant diversification in the search space [13,14]. Based on their source of inspiration, these algorithms can be classified as evolutionary-based, physics-based, human-based, and swarm intelligence-based algorithms [15,16].

Evolutionary-based algorithms involve a heuristic approach inspired by the biological evolution of species, such as animals, insects, and plants in nature [17,18]. Some prominent optimizers in this group are genetic algorithms [19], differential evolution [20], and the evolution strategy [21]. Physics-based algorithms are defined based on the main concepts of mathematics and physics, such as quantum physics [22–24], gravity [25,26], and optics [27], with the aim of performing a meaningful search in the problem space. Human-based algorithms simulate various human activities in order to generate innovative solutions for optimization problems. The imperialist competitive algorithm [28], the harmony search algorithm [29], teaching learning-based optimization [30], brain storm optimization (BSO) [31], the soccer league competition algorithm [32], the volleyball premier league algorithm [33], poor and rich optimization (PRO) [34], and past present future (PPF) [35] are some of the state-of-the-art optimizers in this group. Swarm intelligence-based optimization algorithms originated from the collective and self-organized behavior of unsophisticated agents such as insects, terrestrial animals, fish, and birds [36,37]. Ant colony optimization [38] and particle swarm optimization [39] were the most successful swarm intelligence-based optimization algorithms proposed in the 1990s. From the 21st century onwards, some new algorithms have been put forward in this group, such as artificial bee colony (ABC) [40], cuckoo search (CS) [41], the whale optimization algorithm (WOA) [42], elephant herding optimization (EHO) [43], moth-flame optimization (MFO) [44], the horse herd optimization algorithm (HOA) [45], the quantum-based avian navigation optimizer algorithm (QANA) [46], the African vultures optimization algorithm [47], farmland fertility [48], dwarf mongoose optimization (DMO) [49], the starling murmuration optimizer (SMO) [50], and the artificial gorilla troops optimizer [51].

Most population-based metaheuristic algorithms lack mechanisms for maintaining population diversity, which leads to an imbalance between search strategies and to premature convergence. Hence, many effective mechanisms have been proposed to alleviate the weaknesses of these algorithms [52,53]. The artificial bee colony (ABC) algorithm is a prominent population-based metaheuristic algorithm that suffers from poor local search performance. Hence, Zhu et al. [54] proposed the Gbest-guided ABC (GABC) algorithm, which incorporates information on the global best solution into the search strategy in order to improve the algorithm's exploitation ability. Other algorithms that have achieved significant improvements in local search ability are the quick artificial bee colony (qABC), best-so-far ABC [55], and grey artificial bee colony (GABC) [56] algorithms. Nadimi-Shahraki et al. [57] introduced a diversity-maintained multi-trial vector differential evolution algorithm to increase population diversity and reduce the risk of premature convergence during the evolutionary process.

The moth-flame optimization (MFO) algorithm was inspired by the navigation behavior of moths toward a light source in nature and is used to solve global optimization problems. The MFO algorithm benefits from having a straightforward structure and a small number of control parameters, which increases its versatility. However, the MFO algorithm suffers from problems related to low population diversity [58], which leads it to become stuck in unpromising regions and to achieve low-quality solutions. Many MFO variants have been developed by introducing and hybridizing different search strategies and operators to overcome such challenges. Kaur et al. [59] proposed an enhanced moth-flame optimization (E-MFO) method to solve global optimization problems. The E-MFO algorithm applied a Cauchy distribution function and the influence of the best flame parameter to enhance its exploration and exploitation capabilities, respectively. Moreover, an adaptive step size and a division of iterations were proposed to balance the search strategies. Li et al. [60] presented the Lévy-flight moth-flame optimization (LMFO) algorithm to prevent premature convergence to local optima and enable a trade-off between the algorithm's exploration and exploitation abilities during the search process. Khalilpourazari et al. [61] introduced the WCMFO algorithm, a hybrid of the water cycle and moth-flame optimization algorithms, combining the exploitation ability of MFO with the exploration ability of the water cycle algorithm. To cope with the weaknesses of MFO, Hongwei et al. [62] proposed chaos-enhanced moth-flame optimization (CMFO) using ten chaotic maps. The chaotic maps are applied in population initialization, boundary handling, and the tuning of the distance parameter. Other variants of MFO are sine-cosine moth-flame optimization (SMFO) [63], the combination of MFO with Gaussian, Cauchy, and Lévy mutations (LGCMFO) [64], the enhancement of the local search mechanism based on shuffled frog leaping and a death mechanism with MFO (ODSFMFO) [65], and the chaotic local search and Gaussian mutation-enhanced MFO (CLSGMFO) approach [66].

Although the mentioned MFO variants have achieved effective performance improvements, they may still suffer from poor population diversity, which leads to premature convergence to local optima and a decrease in solution quality when tackling complex optimization problems. Moreover, due to the approximate nature of metaheuristic algorithms, there is always room for improvement in their search strategies. Therefore, this study was devoted to proposing an enhanced moth-flame optimization algorithm named MFO-SFR with the aim of solving global optimization problems. The proposed MFO-SFR algorithm is equipped with an effective stagnation finding and replacing (SFR) strategy to maintain diversity throughout the search process and overcome the drawbacks of previous MFO approaches. Moreover, the boundary handling of the MFO algorithm is rectified by generating new random solutions in the range of the problem space. Overall, the main contributions of this study can be summarized as follows.


The performance of the proposed MFO-SFR algorithm was assessed with the CEC 2018 test functions [67] in 30 and 50 dimensions. Then, the MFO-SFR algorithm was compared with two sets of MFO variants and well-known optimizers. In the first set of contender algorithms, the canonical MFO [44] and its variants—Lévy-flight moth-flame optimization (LMFO) [60], an efficient hybrid algorithm based on the water cycle and moth-flame algorithms (WCMFO) [61], chaos-enhanced moth-flame optimization (CMFO) [62], death mechanism-based moth-flame optimization (ODSFMFO) [65], the synthesis of the moth-flame optimizer with sine cosine mechanisms (SMFO) [63], and the hybrid of whale and moth-flame optimization (WMFO) [68]—were selected. In the second set, particle swarm optimization (PSO) [39], krill herd (KH) [69], grey wolf optimization (GWO) [70], the crow search algorithm (CSA) [71], and the horse herd optimization algorithm (HOA) [45] were considered. Furthermore, the results obtained using the proposed and contender algorithms were statistically analyzed using the Friedman test. Ultimately, two well-known mechanical engineering problems from the CEC 2020 test suite [72] were considered to assess the applicability of MFO-SFR in solving real-world optimization problems. The experimental results indicated that the proposed MFO-SFR algorithm boosted the performance of the canonical MFO by using an effective stagnation finding and replacing (SFR) strategy and an archive construction mechanism. Moreover, the statistical analysis revealed that the performance of the proposed MFO-SFR algorithm was superior to that of the contender algorithms.

The structure of the paper is as follows. Section 2 contains a review of the related literature. Section 3 presents the MFO algorithm. In Section 4, the proposed MFO-SFR algorithm is explained in detail. Section 5 thoroughly evaluates the performance of MFO-SFR in addressing the CEC 2018 benchmark test functions. Section 6 evaluates the applicability of the proposed MFO-SFR using two real-world mechanical engineering problems from the latest CEC 2020 test suite. Finally, Section 7 summarizes the results and outlines possible future directions of research.

#### **2. Related Works**

MFO variants used to solve different optimization problems are reviewed in this section.

Li et al. [60] boosted the performance of the canonical MFO by using the Lévy-flight strategy. Nadimi-Shahraki et al. [73] proved that the canonical MFO suffers from premature convergence, low population diversity, and an imbalance between search strategies when solving global optimization problems. Therefore, they proposed an improved moth-flame optimization (I-MFO) algorithm to cope with the abovementioned deficiencies. The I-MFO algorithm is equipped with an adapted wandering-around search strategy to maintain population diversity and escape from local optima. The chaos-enhanced MFO (CMFO) [62] algorithm was proposed to improve the performance of the MFO algorithm by incorporating chaotic maps into population initialization, boundary handling, and parameter tuning. Pelusi et al. [74] proposed the improved moth-flame optimization (IMFO) algorithm using a hybrid phase, a dynamic crossover mechanism, and a fitness-dependent weight factor. The hybrid phase achieved a good trade-off between the exploration and exploitation phases, the dynamic crossover mechanism enhanced the population diversity, and the fitness-dependent weight factor improved the exploitation phase.

Xu et al. [64] proposed a series of MFO variants by combining the standard MFO algorithm with Gaussian mutation, Cauchy mutation, and Lévy mutation. Gaussian mutation was employed to improve its neighborhood-informed capability, Cauchy mutation was used to enhance its global exploration ability, and Lévy mutation was employed to increase the randomness in the search process. Li et al. [65] proposed the ODSFMFO algorithm, which consists of an improved flame generation mechanism based on opposition-based learning and the differential evolution algorithm, an enhanced local search mechanism based on the shuffled frog leaping algorithm, and a death mechanism. This algorithm maintained population quality through opposition-based learning, preserved population diversity using the differential evolution algorithm, improved the global search ability through the shuffled frog leaping algorithm, and provided an escape from local optima via the death mechanism. Nadimi-Shahraki et al. [75] proposed a migration-based moth-flame optimization (M-MFO) algorithm with a random migration operator, a guided migration operator, and a guiding archive to alleviate the low population diversity and poor exploration ability of MFO.

Ma et al. [76] developed an improved moth-flame optimization algorithm to prevent premature convergence to local minima. This algorithm uses the inertia weight of diversity feedback control to strike a balance between search strategies and maintain population diversity. Moreover, a mutation probability was added to improve the optimization performance. To enhance the diversity of the flame positions and the search strategy used for moths, Zhao et al. [77] developed an improved MFO (IMFO) algorithm. In this algorithm, the flames are generated through orthogonal opposition-based learning, and their positions are updated using a linear search and a mutation operator. Sapre et al. [78] introduced an opposition-based moth-flame optimization method with Cauchy mutation and evolutionary boundary constraint handling (OMFO) to bypass local optima and accelerate the convergence speed toward promising areas. Sahoo et al. [79] proposed a modified dynamic-opposite-learning-based MFO algorithm named m-MFO, using a modified dynamic-opposite learning strategy to enrich the performance of MFO in solving optimization problems. Other MFO variants include the double-evolutionary learning MFO algorithm (DELMFO) [80], the improved moth-flame optimization algorithm (IMFO) [81], the hybrid MFO and hill climbing (MFOHC) method [82], an enhanced MFO algorithm integrated with orthogonal learning and the Broyden–Fletcher–Goldfarb–Shanno (BFGSOLMFO) method [83], and quantum-behaved simulated annealing algorithm-based moth-flame optimization (QSMFO) [84].

Due to the simple structure of MFO and its low number of control parameters, it has great potential in real-world applications. However, the canonical MFO critically suffers from local optimum trapping and premature convergence during the optimization process, which results in low-quality solutions [85–87]. Therefore, many improved and hybrid variants have been developed to overcome these challenges. Sayed et al. [88] presented the SA-MFO algorithm, a hybrid of the MFO approach and the simulated annealing (SA) algorithm, which escapes from local optima using SA and accelerates the search process using MFO. Many researchers have applied the MFO algorithm to solve the optimal power flow (OPF) problem [89–91]. An effective hybridization of the whale optimization algorithm and a modified moth-flame optimization algorithm named WMFO [68] was proposed to solve diverse scales of the OPF problem. Sahoo et al. [92] proposed a hybrid MFO and butterfly optimization algorithm (h-MFOBOA) to overcome shortcomings such as a slow convergence speed and poor exploitation ability in both optimizers. Sattar Khan et al. [93] adapted the MFO algorithm for an integrated power plant system containing stochastic wind. MFO has also been applied to problems related to fuel cells in a renewable active distribution network [94], the identification of parameters for photovoltaic modules [95], and fuel consumption in variable-cycle engines [96], with promising results.

#### **3. Moth-Flame Optimization (MFO) Algorithm**

Nocturnal moths use celestial light sources to navigate accurately over long distances. They fly in a straight line at a constant angle toward the Moon or stars, a behavior called transverse orientation. However, when a moth flies toward a nearby artificial light, it mistakes it for the Moon or a star. The moth therefore continually changes its flight angle to keep going in a straight line toward the light, resulting in a spiral motion around the artificial light. In 2015, this behavior was mathematically modeled in the moth-flame optimization algorithm [44] developed by Mirjalili to solve global optimization problems, as described in detail below.

In this approach to solving an optimization problem, the positions of the moths evolve over a predefined number of iterations. In the first iteration, moths are randomly distributed in the problem space using Equation (1), where *X<sub>id</sub>* denotes the *d*th dimension of the *i*th moth's position and the parameters *Ub<sub>d</sub>* and *Lb<sub>d</sub>* are the upper and lower boundaries of the *d*th dimension, respectively.

$$X_{id} = rand_{i,d} \times (Ub_d - Lb_d) + Lb_d, \qquad 1 \le d \le D \tag{1}$$
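As a concrete illustration, the initialization in Equation (1) can be sketched in Python as follows; the function and variable names (`init_moths`, `lb`, `ub`) are ours, not from the paper.

```python
import numpy as np

def init_moths(n_moths: int, dim: int, lb: float, ub: float, seed=None) -> np.ndarray:
    """Equation (1): uniform random initialization of the moth population."""
    rng = np.random.default_rng(seed)
    # X_id = rand_{i,d} * (Ub_d - Lb_d) + Lb_d for each moth i and dimension d
    return rng.random((n_moths, dim)) * (ub - lb) + lb

# Example: 5 moths in a 3-dimensional space bounded by [-100, 100]
X = init_moths(5, 3, -100.0, 100.0, seed=42)
```

Each row of `X` is one candidate solution drawn uniformly inside the box constraints.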

For the rest of the iterations, the new positions of the moths are updated based on the positions of the flames. First, the flame number (*R*) is computed using Equation (2), where the parameters *N* and *MaxIterations* denote the number of moths and the maximum number of iterations, respectively. Then, the positions of the flames are determined based on the stepwise procedure denoted in Table 1.

$$R = round\left(N - t \times \frac{N - 1}{MaxIterations}\right) \tag{2}$$

Ultimately, given the flame number (*R*), each moth can update its position using one of the two trials denoted in Equation (3), where *X<sub>i</sub>*(*t* + 1) is the new position of the *i*th moth, *D*′*<sub>i</sub>*(*t*) is computed using Equation (4), *b* is a constant, *k* is calculated using Equations (5) and (6), and *F<sub>i</sub>*(*t*) denotes the *i*th flame.

$$X_i(t+1) = \begin{cases} D_i'(t) \times e^{bk} \times \cos(2\pi k) + F_i(t) & i \le R \\ D_i''(t) \times e^{bk} \times \cos(2\pi k) + F_R(t) & i > R \end{cases} \tag{3}$$

$$D_i'(t) = |F_i(t) - X_i(t)| \tag{4}$$

$$k = (a - 1) \times rand(0, 1) + 1\tag{5}$$

$$a = -1 + t \times \left(\frac{-1}{\text{MaxIterations}}\right) \tag{6}$$

In the second trial (when *i* > *R*), the parameter *D*″*<sub>i</sub>*(*t*) is computed using Equation (7), and *F<sub>R</sub>*(*t*) is the current position of the *R*th flame.

$$D_i''(t) = |F_R(t) - X_i(t)| \tag{7}$$
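Putting Equations (2)–(7) together, one iteration of the canonical MFO position update can be sketched as below. This is an illustrative reading of the formulas under our own naming (`flame_number`, `mfo_update`), not the authors' reference implementation.

```python
import numpy as np

def flame_number(n: int, t: int, max_iter: int) -> int:
    # Equation (2): the flame count decreases linearly from N to 1 over the run
    return round(n - t * (n - 1) / max_iter)

def mfo_update(X, F, t, max_iter, b=1.0, seed=None):
    """One position update per Equation (3); F holds the sorted flame positions."""
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    R = flame_number(n, t, max_iter)
    a = -1.0 + t * (-1.0 / max_iter)            # Equation (6)
    X_new = np.empty_like(X)
    for i in range(n):
        k = (a - 1.0) * rng.random(dim) + 1.0   # Equation (5), drawn per dimension
        # First trial (i <= R): spiral around flame i; second trial: around flame R
        target = F[i] if i < R else F[R - 1]
        D = np.abs(target - X[i])               # Equations (4) and (7)
        X_new[i] = D * np.exp(b * k) * np.cos(2.0 * np.pi * k) + target  # Equation (3)
    return X_new
```

As `t` grows, both the flame count `R` and the spiral parameter `a` shrink, so moths increasingly converge around the few best flames.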

**Table 1.** Flame construction procedure.

Input: *X*: the positions of moths, Fit: the fitness values of moths, F: the position of the flame, and *OF*: the fitness values of flames.

Flame construction in the first iteration when *t* = 1.


Flame construction for the rest of the iterations, when *t* > 1.


#### **4. The Proposed MFO-SFR Algorithm**

According to the literature, the canonical MFO lacks an efficient operator to maintain population diversity. The search process may be biased by the best solutions obtained in each iteration [97]. This deficiency leads to premature convergence to unpromising regions, local optimum stagnation, and a decrease in solution quality when solving complex problems. Hence, in this study, we were motivated to propose an enhanced moth-flame optimization algorithm named MFO-SFR to effectively maintain population diversity and mitigate the deficiencies mentioned above by introducing an effective stagnation finding and replacing (SFR) strategy.

**Stagnation finding and replacing (SFR) strategy:** Suppose that the matrix $X(t) = \{X_{1D}(t), \dots, X_{iD}(t), \dots, X_{ND}(t)\}$ denotes the moth population at the current iteration *t* in a *D*-dimensional search space, where each vector $X_{iD}(t)$ denotes the position of the *i*th moth in the problem space. The matrix $X(t)$ is initialized in the first iteration using a uniform random distribution. For the rest of the iterations (when $t \ge 2$), the new positions of the moths are determined using Equation (8), where $D_i^{\alpha}(t)$ and $D_i^{\beta}(t)$ are the main elements of the SFR strategy, computed using Equations (9) and (10), respectively. The constant $b$ expresses the shape of the logarithmic spiral, and $\tau$ is a random number in the interval $[-1, 1]$. $F_j(t)$ and $F_R(t)$ are the positions of the *j*th and *R*th flames, where the parameter *R* is computed using Equation (2). In Equation (9), the vector $M_i(t)$ is determined using Definition 1. To find the stagnant solutions, the mean distance $\varphi_i$ is calculated using Equation (11), where $X_{iq}$ is the *q*th dimension of the *i*th moth and $F_{jq}$ is the *q*th dimension of the *j*th flame. The index *j* is determined by Equation (12), which sorts the results obtained from Equation (11) in descending order and applies the resulting indexes as flame indexes in Equation (10).

$$X_i(t+1) = \begin{cases} D_i^{\alpha}(t) \times e^{b\tau} \times \cos(2\pi\tau) + F_j(t) & \text{if } i \le R(t) \\ D_i^{\beta}(t) \times e^{b\tau} \times \cos(2\pi\tau) + F_R(t) & \text{otherwise} \end{cases} \tag{8}$$

$$D_i^{\alpha}(t) = |F_j(t) - M_i(t)| \tag{9}$$

$$D_i^{\beta}(t) = \begin{cases} |F_j(t) - X_i(t)| & \varphi_i > 0 \\ \text{a random position selected from the archive } Arc & \varphi_i = 0 \end{cases} \tag{10}$$

$$\{\varphi_1, \dots, \varphi_i, \dots, \varphi_N\} \leftarrow \varphi_i = \frac{1}{D} \times \sum_{q=1}^{D} |F_{jq}(t) - X_{iq}(t)| \tag{11}$$

$$\{\varphi_1, \dots, \varphi_j, \dots, \varphi_N\} \leftarrow \mathrm{Sort}(\varphi_1, \dots, \varphi_i, \dots, \varphi_N) \tag{12}$$
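The stagnation-finding step in Equations (11) and (12) can be sketched as follows: each moth is paired with a flame, the mean per-dimension distance φᵢ is computed, the distances are sorted in descending order to produce flame indexes, and moths whose distance has collapsed to zero are flagged as stagnant. All names here are illustrative, not from the paper.

```python
import numpy as np

def mean_distances(X, F):
    """Equation (11): phi_i = (1/D) * sum_q |F_jq - X_iq| for each moth."""
    return np.mean(np.abs(F - X), axis=1)

# Toy example: the first moth coincides with its flame and is therefore stagnant
X = np.array([[0.0, 0.0], [1.0, 3.0]])
F = np.array([[0.0, 0.0], [0.0, 1.0]])
phi = mean_distances(X, F)
order = np.argsort(-phi)            # Equation (12): descending sort -> flame indexes
stagnant = np.where(phi == 0.0)[0]  # these moths are replaced from the archive
```

A zero mean distance means the moth sits exactly on its flame and can no longer move under the spiral update, which is precisely the stagnation the SFR strategy targets.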

**Definition 1.** (*Archive construction*): *The main idea behind archive construction is to enrich the population diversity by preserving the generated representative flame and to boost the convergence of solutions toward promising areas by preserving the best solutions in each iteration. To construct the archive Arc, consider the matrix $M = \{M_1, \dots, M_i, \dots, M_\kappa\}$ as the memory of the Arc with a predefined size $\kappa$. Each $M_i = [m_{i1}, m_{i2}, \dots, m_{iD}]$ denotes a vector position in this memory, which is generated using Algorithm 1. First, dualPop and dualFit are created based on the flame construction process described in Table 1. Then, the representative flame (RF) is computed as the average of the flames' positions using Equation (13), where C is the total number of considered moths and $F_{id}$ denotes the dth dimension of the ith flame. Finally, the global best flame and the RF position are archived as two new entries in the memory M. When these new entries are inserted, they randomly replace two existing entries if the memory is full.*

$$RF_d(t) = \frac{1}{C} \sum_{i=1}^{C} F_{id}(t) \tag{13}$$
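A minimal sketch of the archive construction in Definition 1 and Equation (13), assuming a plain Python list as the memory M; the function name `update_archive` and its signature are ours.

```python
import numpy as np

def update_archive(archive, flames, best_flame, capacity, seed=None):
    """Insert the global best flame and the representative flame (RF) into the archive."""
    rng = np.random.default_rng(seed)
    rf = flames.mean(axis=0)                    # Equation (13): RF_d = (1/C) * sum_i F_id
    for entry in (best_flame, rf):
        if len(archive) < capacity:
            archive.append(entry.copy())
        else:
            # Memory full: the new entry randomly replaces an existing one
            archive[rng.integers(len(archive))] = entry.copy()
    return archive

flames = np.array([[1.0, 2.0], [3.0, 4.0]])
arc = update_archive([], flames, best_flame=np.array([0.0, 0.0]), capacity=4)
```

The archive thus mixes elite positions (the best flame) with an averaged "representative" position, which is what Equation (10) draws from when a stagnant moth must be replaced.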

In addition, MFO-SFR checks the feasibility of the new moths' positions and returns those that have violated the problem space boundaries by generating random positions within the range of the problem space.
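This boundary handling can be sketched as follows: rather than clipping, moths that leave the feasible region are re-sampled uniformly inside it (the name `repair_bounds` is ours).

```python
import numpy as np

def repair_bounds(X, lb, ub, seed=None):
    """Replace out-of-range moths with new uniform random positions in [lb, ub]."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    violated = np.any((X < lb) | (X > ub), axis=1)
    X[violated] = rng.random((int(violated.sum()), X.shape[1])) * (ub - lb) + lb
    return X

X = np.array([[0.5, 0.5], [2.0, 0.1]])   # the second moth violates the upper bound
X_repaired = repair_bounds(X, 0.0, 1.0, seed=0)
```

Re-sampling, unlike clipping to the bounds, avoids piling solutions up on the boundary and thus contributes to the diversity the SFR strategy aims to preserve.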





#### *Complexity Analysis*

Regarding the pseudocode of MFO-SFR shown in Algorithm 2, the MFO-SFR algorithm consists of six distinct phases: initialization, flame construction, archive construction, movement, correcting the violated positions, and updating the positions. In the initialization phase, *N* moths are randomly distributed in a *D*-dimensional search space with O(*ND*) computational complexity. In the flame construction phase, flames are constructed with a computational complexity of O(*N*<sup>2</sup>), considering the worst case for the quicksort algorithm. The computational complexity of the archive construction phase using Algorithm 1 is O(*N*<sup>2</sup> + *ND*), because lines 2−4 have a complexity of O(*N*<sup>2</sup>) with respect to the original paper's definition of MFO, and Equation (13) has O(*ND*) in the worst case. The cost of the movement phase is O(*ND*), using either Equations (8) and (9) when *i* ≤ *R* or Equations (8) and (10) when *i* > *R*. Then, the feasibility of the new positions is checked to correct the violated positions with a computational complexity of O(*ND*). Finally, the updating phase is performed with O(*ND*) computational complexity. Therefore, considering *T* iterations, the computational complexity of MFO-SFR is O(*ND* + *N*<sup>2</sup> + *T*(2*N*<sup>2</sup> + 4*ND*)), or O(*TN*<sup>2</sup> + *TND*). In the same fashion, the space complexity is O(*N* + *ND* + *κ*), considering that the memory is reusable and its size is *κ*; this simplifies to O(*ND* + *κ*).

**Algorithm 2.** The pseudocode of the proposed MFO-SFR algorithm.

Input: *N*: Number of moths, *MaxIterations*: Maximum iterations, and *D*: Dimension size.

Output: Returns the position of the global best flame and its fitness value.


#### **5. Evaluation of the Proposed MFO-SFR Algorithm**

In this section, we present our evaluation of the performance of the proposed MFO-SFR algorithm in solving global optimization problems from the CEC 2018 benchmark test suite [67]. This test suite is suitable for evaluating the proposed algorithm in terms of its local optimum avoidance ability and the diversity of solutions, as it consists of 29 test functions with different characteristics, such as unimodal, multimodal, and hybrid functions, as well as composition functions, with various dimensions (*D*), specifically, 30 and 50 dimensions. Moreover, in this section, we also present two separate sets of experiments conducted to extensively assess and compare the performance of the proposed MFO-SFR algorithm with several well-known optimization methods. The proposed algorithm was compared to the original MFO and its variants in the first set, and then, in the second experimental set, it was compared to other prominent and recent optimizers. In both experiment sets, all of the comparative algorithms' control parameter values were adjusted to match those in their original articles, as depicted in Table 2. All of the algorithms were executed 20 times on a laptop with an Intel Core i7-10750H CPU (2.60 GHz), 24 GB of memory, and MATLAB R2022a, with a maximum of (*D* × 10<sup>4</sup>)/*N* iterations, where *D* represents the dimension size of the problem and *N* is the population size, which was set to 100 in this study.

**Table 2.** Parameter values for the optimization algorithms.


To investigate the impact of the archive introduced in Equation (10), a numerical pre-test was performed on the canonical MFO algorithm using the CEC 2018 benchmark test suite in 30 dimensions. In this pre-test, the percentage of situations in which the parameter *ϕ<sub>i</sub>* was equal to zero was computed and reported in Table A1 of Appendix A. The results showed that for some test functions, especially the hybrid and composition ones, the percentage of stagnant solutions was high enough to affect the quality of the generated solutions.
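The pre-test statistic reduces to a simple fraction: over all recorded (iteration, moth) updates, the percentage with φᵢ = 0. A hypothetical sketch, where `phi_history` is an assumed record of the φᵢ values and not data from the paper:

```python
import numpy as np

# phi_history: hypothetical (iterations x moths) record of the phi_i values
phi_history = np.array([[0.0, 1.2, 0.4],
                        [0.0, 0.0, 0.7]])
# Percentage of updates in which the moth was exactly on its flame (stagnant)
stagnation_pct = 100.0 * np.count_nonzero(phi_history == 0.0) / phi_history.size
```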

#### *5.1. Comparing the Proposed MFO-SFR Algorithm with MFO Variants*

In this set of experiments, we compared the proposed MFO-SFR algorithm with moth-flame optimization (MFO) [44] and its variants, including LMFO [60], WCMFO [61], CMFO [62], ODSFMFO [65], SMFO [63], and WMFO [68]. Table 3 compares the results of the proposed MFO-SFR algorithm with those of MFO and its variants in solving the CEC 2018 test functions with 30 dimensions. The results acquired from the unimodal test functions F1 and F3 demonstrated that MFO-SFR had an acceptable exploitation potential compared to the other algorithms. The results from multimodal test functions F4–F10 indicated that the proposed algorithm was able to efficiently search the problem space and find the unvisited areas by maintaining its population diversity throughout the optimization process. The overall results of the hybrid and composition functions F11–F30 confirmed that MFO-SFR avoided local optimum solutions by striking a balance between exploration and exploitation abilities. Moreover, the final rows of Tables 3 and 4 reveal that according to the Friedman test [98], the proposed MFO-SFR algorithm ranked first among the algorithms, including MFO and the other investigated variants.

**Table 3.** Comparison of MFO-SFR with MFO variants for CEC 2018 test functions with D = 30.


**Table 3.** *Cont*.



**Table 3.** *Cont*.

**Table 4.** Comparison of MFO-SFR with MFO variants for CEC 2018 test functions with D = 50.



**Table 4.** *Cont*.

Table 4 presents the average and minimum fitness values obtained from the proposed MFO-SFR algorithm, MFO, and its six variants in solving the CEC 2018 benchmark test functions with 50 dimensions. Overall, the results showed that the proposed MFO-SFR algorithm provided competitive results for most test functions, and it ranked first according to the Friedman test results, which are reported in the final row of the table. Additionally, an exploratory data analysis is depicted in Figure 1 to show the ranking of the algorithms for each function. Overall, it can be seen that the proposed MFO-SFR algorithm surrounds the center of the radar chart for most test functions in 30 and 50 dimensions. For instance, for F1, the proposed MFO-SFR algorithm was ranked first in 30 dimensions and third in 50 dimensions, whereas the WMFO and canonical MFO algorithms were ranked second and seventh for 30 and 50 dimensions, respectively. For F12, it can be seen that MFO-SFR was ranked second, MFO was ranked seventh, and WMFO was ranked first for 30 dimensions, and these three algorithms were ranked second, seventh, and first, respectively, for 50 dimensions. For F27, MFO-SFR and WMFO were ranked first and sixth in both 30 and 50 dimensions, whereas the canonical MFO algorithm was ranked fourth in 30 dimensions and fifth in 50 dimensions.

The convergence comparison of the proposed MFO-SFR algorithm and the other studied algorithms is shown in Figure 2. For F1 in 30 dimensions, it can be seen that although MFO-SFR exhibited prolonged convergence, it provided the best solution compared to the other algorithms. In 50 dimensions, however, it ranked second after WMFO. For the multimodal functions F5 and F7, the convergence trend of MFO-SFR continued up to the final iterations, whereas most of the competitors flattened out in local optimum zones. As evidence of an adequate balance between exploration and exploitation, for functions F10 and F16, MFO-SFR exhibited sharp movements in the first half of the iterations and relatively modest fluctuations in the second half. Ultimately, for the composition test functions F21, F26, and F30, MFO-SFR exhibited a gradual trend toward the optimum solutions after beginning its convergence with a sharply descending slope. This behavior indicates the capacity of MFO-SFR to bypass local optima and avoid premature convergence.

**Figure 1.** Exploratory data analysis of MFO-SFR, MFO, and its variants on CEC 2018 with 30 and 50 dimensions.

**Figure 2.** Convergence comparison of MFO-SFR, MFO, and its variants on CEC 2018 with D = 30 and 50.

#### *5.2. Comparing the Proposed MFO-SFR Algorithm with Other Well-Known Optimization Algorithms*

In the second set of experiments, we compared the performance of the proposed MFO-SFR algorithm with well-known representative metaheuristic algorithms presented in the literature, including particle swarm optimization (PSO) [39], krill herd (KH) [69], grey wolf optimization (GWO) [70], the crow search algorithm (CSA) [71], and the horse herd optimization algorithm (HOA) [45]. The algorithms' source codes were gathered from publicly available resources, and their parameter values were the same ones considered in the original papers, as reported in Table 2. Tables 5 and 6 compare the average and minimum fitness values produced by the proposed MFO-SFR algorithm and the other algorithms for 30 and 50 dimensions. The results of the test functions F1 and F3–F10 for both numbers of dimensions demonstrated that MFO-SFR exhibited impressive exploitation and exploration capabilities and generated better solutions for the unimodal and multimodal test functions. The results of the test functions F11–F30 demonstrated that MFO-SFR avoided local optimum trapping and balanced the trade-off between exploration and exploitation abilities. Furthermore, the final two rows present the results of the Friedman test for each algorithm, in which MFO-SFR ranked first among the comparative algorithms for both 30 and 50 dimensions.

**Table 5.** Comparison of MFO-SFR with well-known algorithms for CEC 2018 test functions with D = 30.




**Table 6.** Comparison of MFO-SFR with well-known algorithms for CEC 2018 test functions with D = 50.



The exploratory data analysis shown in Figure 3 was conducted to investigate the ranking of algorithms for each test function. Overall, it can be noted that the proposed MFO-SFR algorithm was ranked first among the other compared algorithms for all test functions, except for F10 in 30 dimensions. For 50 dimensions, it is notable that MFO-SFR was ranked first for all test functions except for F4, F10, F22, and F28.

**Figure 3.** Exploratory data analysis of MFO-SFR and other well-known MFO algorithms on CEC 2018 with 30 and 50 dimensions.

As shown in Figure 4, we analyzed MFO-SFR's convergence behavior and compared it with that of the other algorithms. Overall, it can be seen that the proposed MFO-SFR algorithm was able to converge toward more accurate solutions by avoiding local optimum solutions and striking a balance between its search abilities. It is also notable that the proposed MFO-SFR algorithm maintained its solution accuracy as the number of dimensions increased, which demonstrates the scalability of the proposed algorithm.

**Figure 4.** Comparison of the convergence behavior of MFO-SFR and well-known algorithms for CEC 2018 test functions with 30 and 50 dimensions.

#### *5.3. Population Diversity Analysis*

Maintaining population diversity is essential in metaheuristic algorithms since low diversity among search agents may cause the algorithm to become stuck in local optimum areas. In this experiment, the population diversity of MFO-SFR and five representative comparative algorithms was investigated on several CEC 2018 benchmark test functions with 30 and 50 dimensions. The population diversity curves presented in Figure 5 were calculated by measuring the moment of inertia (*Ic*) [99], where *Ic* denotes the spread of the individuals around their centroid, as determined by Equation (14), and the centroid *ci* for i = 1, 2, ..., *D* was calculated using Equation (15). Comparing the population diversity curves with the convergence curves plotted in Figures 2 and 4, it can be noted that the proposed MFO-SFR algorithm effectively maintained diversification among solutions until the near-optimal solution was reached. This behavior occurred mainly because of the introduced SFR strategy, which identified stagnant solutions using a distance-based technique and replaced them with a solution selected from the archive constructed from the previous solutions. The introduced archive was able to maintain not only the diversification of solutions by preserving the generated representative flame but also the convergence of solutions toward promising areas by preserving the best solutions in each iteration.

$$I\_C = \sum\_{i=1}^{D} \sum\_{j=1}^{N} \left( M\_{ij} - c\_i \right)^2 \tag{14}$$

$$c\_i = \frac{1}{N} \sum\_{j=1}^{N} M\_{ij} \tag{15}$$
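Equations (14) and (15) can be sketched in a few lines of code. The following is our own minimal reading of the diversity measure, not the authors' implementation; `M[j][i]` is assumed to hold dimension `i` of individual `j`.

```python
# Minimal sketch of the moment-of-inertia diversity measure, Eqs. (14)-(15).

def moment_of_inertia(M):
    """M: list of N individuals, each a list of D coordinates.
    Returns I_C, the summed squared deviation from the centroid."""
    N, D = len(M), len(M[0])
    # centroid c_i, Equation (15)
    centroid = [sum(ind[i] for ind in M) / N for i in range(D)]
    # I_C, Equation (14)
    return sum((ind[i] - centroid[i]) ** 2 for ind in M for i in range(D))
```

A tight cluster yields a small *Ic*, so a diversity curve that stays above zero for longer indicates a population that keeps exploring.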

**Figure 5.** Population diversity of MFO-SFR and comparative algorithms on CEC 2018 test functions.

#### *5.4. The Overall Effectiveness of MFO-SFR*

The overall effectiveness (OE) achieved by the proposed MFO-SFR algorithm in solving the test functions with 30 and 50 dimensions was computed using Equation (16), and the results are reported in Tables 7 and 8. In Equation (16), *OEi* indicates the overall effectiveness of the *i*-th algorithm, *Li* is the total number of test functions for which the *i*-th algorithm lost, and *TF* is the total number of test functions. Table 7 compares the OE achieved by the proposed MFO-SFR with that of the other MFO variants, showing that MFO-SFR attained the highest OE value, equal to 74.14%. Moreover, Table 8 shows that MFO-SFR achieved an even higher OE value of 91.38% compared to the other well-known optimization algorithms.

$$OE\_i(\%) = \frac{TF - L\_i}{TF} \tag{16}$$
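Equation (16) is a one-line computation. In the sketch below (ours, not the authors' code), the example values are chosen to be consistent with the reported percentages, assuming *TF* = 58 (the 29 CEC 2018 functions in both dimension settings); the exact loss counts are the authors'.

```python
# Sketch of Equation (16): overall effectiveness of algorithm i.

def overall_effectiveness(losses, total_functions):
    """OE_i(%) = (TF - L_i) / TF * 100."""
    return 100.0 * (total_functions - losses) / total_functions
```

For instance, 5 losses out of 58 functions gives 91.38%, and 15 losses gives 74.14%, matching the rounded values quoted in the text under this assumed *TF*.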


**Table 7.** The overall effectiveness of MFO-SFR and MFO variants.


**Table 8.** The overall effectiveness of MFO-SFR and contender algorithms.

#### **6. Applicability of MFO-SFR to Solving Mechanical Engineering Problems**

There is growing interest in using optimization algorithms in mechanical and engineering systems to improve performance, cost, and product lifespan [50,100]. Therefore, in this section we assessed the applicability of MFO-SFR using two challenging real-world optimization problems from the most recent CEC 2020 test suite [72]. The constraints of the problems were handled using a death penalty function. The maximum number of iterations for MFO-SFR and the MFO variants was (*D* × 10<sup>4</sup>)/*N*, where *D* is the number of decision variables and *N* is the number of search agents, which was set to 20.
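The two experimental settings above can be sketched as follows. This is an illustrative implementation with names of our own choosing (the penalty constant in particular is an assumption; the source only states that a death penalty was used).

```python
# Illustrative sketch of the constraint handling and iteration budget.

DEATH_PENALTY = 1e20  # assumed "very large" fitness for infeasible solutions


def penalized_fitness(fitness, constraint_values):
    """Death penalty: a solution is feasible when every g(x) <= 0;
    infeasible solutions are effectively discarded by assigning a
    prohibitively large fitness value."""
    feasible = all(g <= 0 for g in constraint_values)
    return fitness if feasible else DEATH_PENALTY


def max_iterations(D, N=20):
    """Iteration budget (D * 10^4) / N with N = 20 search agents."""
    return (D * 10**4) // N
```

For a four-variable problem such as the welded beam design, this budget amounts to (4 × 10<sup>4</sup>)/20 = 2000 iterations.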

#### *6.1. Welded Beam Design (WBD) Problem*

The WBD [101], stated in Equation (17), is a well-known constrained engineering optimization problem. The primary goal of this task, as indicated in Figure 6, is to minimize the total fabrication cost of a welded beam by determining the best design parameters for the clamped bar length (*l*), weld thickness (*h*), bar thickness (*b*), and bar height (*t*). The results tabulated in Table 9 indicate that the proposed MFO-SFR exhibited superior performance compared with the other algorithms.
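Since Equation (17) is not reproduced in this excerpt, the cost function below is our transcription of the WBD objective as it is commonly formulated in the literature; treat the coefficients as an assumption rather than the paper's exact statement.

```python
# Sketch of the welded beam design (WBD) cost function as commonly stated:
# weld material cost plus bar material cost.

def wbd_cost(h, l, t, b):
    """h: weld thickness, l: clamped bar length,
    t: bar height, b: bar thickness."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
```

Evaluating it at a design vector of the kind reported in Table 9 (values near h = b ≈ 0.2, l ≈ 3.5, t ≈ 9) yields costs around 1.7, in line with the magnitudes typically reported for this problem.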


**Figure 6.** Schematic of the welded beam design problem [101].


**Table 9.** Comparison of the results obtained for the welded beam design problem.

#### *6.2. The Four-Stage Gearbox Problem*

The design of a four-stage gearbox [102] was the second engineering design optimization problem examined in this study. To reduce the weight of the gearbox, the mathematical model specified in Equation (18) was used, together with 86 non-linear constraints and 22 discrete decision variables. According to the results reported in Table 10, the MFO-SFR algorithm outperformed the other algorithms in terms of the quality of its solution.


$$\text{Minimize}: F\left(\bar{x}\right) = \left(\frac{\pi}{1000}\right) \sum\_{i=1}^{4} \frac{b\_i c\_i^2 \left(N\_{pi}^2 + N\_{gi}^2\right)}{\left(N\_{pi} + N\_{gi}\right)^2}, \; i = (1, 2, 3, 4) \tag{18}$$

**Table 10.** Comparison of the results obtained for the four-stage gearbox problem.


Subject to:

$$\begin{array}{l}
g\_{1}(\bar{x}) = \left(\frac{366000}{\pi \omega\_1} + \frac{2 c\_1 N\_{p1}}{N\_{p1}+N\_{g1}}\right) \frac{\left(N\_{p1}+N\_{g1}\right)^2}{4 b\_1 c\_1^2 N\_{p1}} - \frac{\sigma\_N J\_R}{0.0167\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{2}(\bar{x}) = \left(\frac{366000\, N\_{g1}}{\pi \omega\_1 N\_{p1}} + \frac{2 c\_2 N\_{p2}}{N\_{p2}+N\_{g2}}\right) \frac{\left(N\_{p2}+N\_{g2}\right)^2}{4 b\_2 c\_2^2 N\_{p2}} - \frac{\sigma\_N J\_R}{0.0167\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{3}(\bar{x}) = \left(\frac{366000\, N\_{g1} N\_{g2}}{\pi \omega\_1 N\_{p1} N\_{p2}} + \frac{2 c\_3 N\_{p3}}{N\_{p3}+N\_{g3}}\right) \frac{\left(N\_{p3}+N\_{g3}\right)^2}{4 b\_3 c\_3^2 N\_{p3}} - \frac{\sigma\_N J\_R}{0.0167\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{4}(\bar{x}) = \left(\frac{366000\, N\_{g1} N\_{g2} N\_{g3}}{\pi \omega\_1 N\_{p1} N\_{p2} N\_{p3}} + \frac{2 c\_4 N\_{p4}}{N\_{p4}+N\_{g4}}\right) \frac{\left(N\_{p4}+N\_{g4}\right)^2}{4 b\_4 c\_4^2 N\_{p4}} - \frac{\sigma\_N J\_R}{0.0167\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{5}(\bar{x}) = \left(\frac{366000}{\pi \omega\_1} + \frac{2 c\_1 N\_{p1}}{N\_{p1}+N\_{g1}}\right) \frac{\left(N\_{p1}+N\_{g1}\right)^3}{4 b\_1 c\_1^2 N\_{g1} N\_{p1}^2} - \left(\frac{\sigma\_H}{C\_p}\right)^2 \frac{\sin(\phi)\cos(\phi)}{0.0334\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{6}(\bar{x}) = \left(\frac{366000\, N\_{g1}}{\pi \omega\_1 N\_{p1}} + \frac{2 c\_2 N\_{p2}}{N\_{p2}+N\_{g2}}\right) \frac{\left(N\_{p2}+N\_{g2}\right)^3}{4 b\_2 c\_2^2 N\_{g2} N\_{p2}^2} - \left(\frac{\sigma\_H}{C\_p}\right)^2 \frac{\sin(\phi)\cos(\phi)}{0.0334\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{7}(\bar{x}) = \left(\frac{366000\, N\_{g1} N\_{g2}}{\pi \omega\_1 N\_{p1} N\_{p2}} + \frac{2 c\_3 N\_{p3}}{N\_{p3}+N\_{g3}}\right) \frac{\left(N\_{p3}+N\_{g3}\right)^3}{4 b\_3 c\_3^2 N\_{g3} N\_{p3}^2} - \left(\frac{\sigma\_H}{C\_p}\right)^2 \frac{\sin(\phi)\cos(\phi)}{0.0334\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{8}(\bar{x}) = \left(\frac{366000\, N\_{g1} N\_{g2} N\_{g3}}{\pi \omega\_1 N\_{p1} N\_{p2} N\_{p3}} + \frac{2 c\_4 N\_{p4}}{N\_{p4}+N\_{g4}}\right) \frac{\left(N\_{p4}+N\_{g4}\right)^3}{4 b\_4 c\_4^2 N\_{g4} N\_{p4}^2} - \left(\frac{\sigma\_H}{C\_p}\right)^2 \frac{\sin(\phi)\cos(\phi)}{0.0334\, W K\_0 K\_m} \le 0 \\[2mm]
g\_{9\text{--}12}(\bar{x}) = -N\_{pi}\sqrt{\frac{\sin^2(\phi)}{4} - \frac{1}{N\_{pi}} + \left(\frac{1}{N\_{pi}}\right)^2} - N\_{gi}\sqrt{\frac{\sin^2(\phi)}{4} - \frac{1}{N\_{gi}} + \left(\frac{1}{N\_{gi}}\right)^2} + \frac{\sin(\phi)\left(N\_{pi}+N\_{gi}\right)}{2} + C\_{R\min}\, \pi \cos(\phi) \le 0 \\[2mm]
g\_{13\text{--}16}(\bar{x}) = d\_{\min} - \frac{2 c\_i N\_{pi}}{N\_{pi}+N\_{gi}} \le 0 \\[2mm]
g\_{17\text{--}20}(\bar{x}) = d\_{\min} - \frac{2 c\_i N\_{gi}}{N\_{pi}+N\_{gi}} \le 0 \\[2mm]
g\_{21}(\bar{x}) = x\_{p1} + \frac{\left(N\_{p1}+2\right) c\_1}{N\_{p1}+N\_{g1}} - L\_{\max} \le 0 \\[2mm]
g\_{22\text{--}24}(\bar{x}) = -L\_{\max} + \frac{\left(N\_{pi}+2\right) c\_i}{N\_{pi}+N\_{gi}} + x\_{g(i-1)} \le 0, \; i = 2, 3, 4 \\[2mm]
g\_{25}(\bar{x}) = -x\_{p1} + \frac{\left(N\_{p1}+2\right) c\_1}{N\_{p1}+N\_{g1}} \le 0 \\[2mm]
g\_{26\text{--}28}(\bar{x}) = \frac{\left(N\_{pi}+2\right) c\_i}{N\_{pi}+N\_{gi}} - x\_{g(i-1)} \le 0, \; i = 2, 3, 4 \\[2mm]
g\_{29}(\bar{x}) = y\_{p1} + \frac{\left(N\_{p1}+2\right) c\_1}{N\_{p1}+N\_{g1}} - L\_{\max} \le 0 \\[2mm]
g\_{30\text{--}32}(\bar{x}) = -L\_{\max} + \frac{c\_i\left(2+N\_{pi}\right)}{N\_{pi}+N\_{gi}} + y\_{g(i-1)} \le 0, \; i = 2, 3, 4 \\[2mm]
g\_{33}(\bar{x}) = \frac{\left(2+N\_{p1}\right) c\_1}{N\_{p1}+N\_{g1}} - y\_{p1} \le 0 \\[2mm]
g\_{34\text{--}36}(\bar{x}) = \frac{c\_i\left(2+N\_{pi}\right)}{N\_{pi}+N\_{gi}} - y\_{g(i-1)} \le 0, \; i = 2, 3, 4 \\[2mm]
g\_{37\text{--}40}(\bar{x}) = -L\_{\max} + \frac{\left(2+N\_{gi}\right) c\_i}{N\_{pi}+N\_{gi}} + x\_{gi} \le 0 \\[2mm]
g\_{41\text{--}44}(\bar{x}) = -x\_{gi} + \frac{\left(N\_{gi}+2\right) c\_i}{N\_{pi}+N\_{gi}} \le 0 \\[2mm]
g\_{45\text{--}48}(\bar{x}) = y\_{gi} + \frac{\left(N\_{gi}+2\right) c\_i}{N\_{pi}+N\_{gi}} - L\_{\max} \le 0 \\[2mm]
g\_{49\text{--}52}(\bar{x}) = -y\_{gi} + \frac{\left(N\_{gi}+2\right) c\_i}{N\_{pi}+N\_{gi}} \le 0 \\[2mm]
g\_{53\text{--}56}(\bar{x}) = -\left(b\_i - 8.255\right)\left(b\_i - 5.715\right)\left(b\_i - 12.70\right)\left(-N\_{pi} + 0.945 c\_i - N\_{gi}\right) \le 0 \\[2mm]
g\_{57\text{--}60}(\bar{x}) = \left(b\_i - 8.255\right)\left(b\_i - 3.175\right)\left(b\_i - 12.70\right)\left(-N\_{pi} + 0.646 c\_i - N\_{gi}\right) \le 0 \\[2mm]
g\_{61\text{--}64}(\bar{x}) = \left(b\_i - 5.715\right)\left(b\_i - 3.175\right)\left(b\_i - 12.70\right)\left(-N\_{pi} + 0.504 c\_i - N\_{gi}\right) \le 0 \\[2mm]
g\_{65\text{--}68}(\bar{x}) = \left(b\_i - 5.715\right)\left(b\_i - 3.175\right)\left(b\_i - 8.255\right)\left(0.0\, c\_i - N\_{gi} - N\_{pi}\right) \le 0 \\[2mm]
g\_{69\text{--}72}(\bar{x}) = \left(b\_i - 8.255\right)\left(b\_i - 5.715\right)\left(b\_i - 12.70\right)\left(N\_{gi} + N\_{pi} - 1.812 c\_i\right) \le 0 \\[2mm]
g\_{73\text{--}76}(\bar{x}) = \left(b\_i - 8.255\right)\left(b\_i - 3.175\right)\left(b\_i - 12.70\right)\left(-0.945 c\_i + N\_{pi} + N\_{gi}\right) \le 0 \\[2mm]
g\_{77\text{--}80}(\bar{x}) = -\left(b\_i - 5.715\right)\left(b\_i - 3.175\right)\left(b\_i - 12.70\right)\left(-0.646 c\_i + N\_{pi} + N\_{gi}\right) \le 0 \\[2mm]
g\_{81\text{--}84}(\bar{x}) = \left(b\_i - 5.715\right)\left(b\_i - 3.175\right)\left(b\_i - 8.255\right)\left(N\_{pi} + N\_{gi} - 0.504 c\_i\right) \le 0 \\[2mm]
g\_{85}(\bar{x}) = \omega\_{\min} - \frac{\omega\_1\left(N\_{p1} N\_{p2} N\_{p3} N\_{p4}\right)}{N\_{g1} N\_{g2} N\_{g3} N\_{g4}} \le 0 \\[2mm]
g\_{86}(\bar{x}) = \frac{\omega\_1\left(N\_{p1} N\_{p2} N\_{p3} N\_{p4}\right)}{N\_{g1} N\_{g2} N\_{g3} N\_{g4}} - \omega\_{\max} \le 0
\end{array}$$

where

$$\begin{cases}
\bar{x} = \left\{ N\_{p1}, N\_{g1}, N\_{p2}, N\_{g2}, \dots, b\_1, b\_2, \dots, x\_{p1}, x\_{g1}, x\_{g2}, \dots, y\_{p1}, y\_{g1}, y\_{g2}, \dots, y\_{g4} \right\} \\
c\_i = \sqrt{\left(y\_{gi} - y\_{pi}\right)^2 + \left(x\_{gi} - x\_{pi}\right)^2}, \quad K\_0 = 1.5, \; d\_{\min} = 25, \; J\_R = 0.2, \; \phi = 120^\circ, \; W = 55.9, \; \dots
\end{cases}$$

with bounds:

$$\begin{cases}
b\_i \in \left\{3.175,\; 5.715,\; 8.255,\; 12.7\right\} \\
y\_{p1},\, x\_{p1},\, y\_{gi},\, x\_{gi} \in \left\{12.7,\; 25.4,\; 38.1,\; 50.8,\; 63.5,\; 76.2,\; 88.9,\; 101.6,\; 114.3\right\} \\
7 \le N\_{gi},\, N\_{pi} \le 76, \quad N\_{gi},\, N\_{pi} \in \mathbb{Z}
\end{cases}$$
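The objective in Equation (18) can be sketched directly. The argument layout below is our own (the benchmark encodes all 22 variables in a single vector), and the values used in the usage note are illustrative only, not a feasible gearbox design.

```python
import math

# Hedged sketch of the four-stage gearbox objective, Equation (18):
# a weight proxy summed over the four stages.

def gearbox_objective(b, c, Np, Ng):
    """b, c, Np, Ng: per-stage face widths, center distances, and
    pinion/gear tooth counts (length-4 sequences)."""
    total = sum(
        b[i] * c[i] ** 2 * (Np[i] ** 2 + Ng[i] ** 2) / (Np[i] + Ng[i]) ** 2
        for i in range(4)
    )
    return (math.pi / 1000.0) * total
```

With all inputs set to 1, each stage contributes 1 · 1 · 2/4 = 0.5, so the objective evaluates to (π/1000) · 2; in the actual benchmark the 86 constraints listed above and the discrete bounds restrict which inputs are admissible.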

#### **7. Conclusions and Future Works**

MFO is a prominent metaheuristic algorithm, inspired by the night-time behavior of moths converging toward a light source. A large part of MFO's popularity in recent years has been attributed to its straightforward construction. However, due to its rapid loss of population diversity and inadequate exploration ability, the MFO algorithm often encounters local optimum entrapment and premature convergence. In this study, an enhanced moth-flame optimization (MFO-SFR) algorithm was proposed to tackle these weaknesses. MFO-SFR introduces an effective stagnation finding and replacing (SFR) strategy to maintain population diversity by finding stagnant solutions using a distance-based technique and replacing them with a solution selected from the archive constructed from the previous solutions.

The performance of the proposed MFO-SFR algorithm was evaluated on global optimization problems using the CEC 2018 benchmark test suite in two different sets of experiments. In the first set of experiments, MFO-SFR was benchmarked on the CEC 2018 benchmark functions with 30 and 50 dimensions. The obtained results were compared with those of MFO and its six recent variants, including Lévy-flight moth-flame optimization (LMFO), an efficient hybrid algorithm based on the water cycle and moth-flame algorithms (WCMFO), chaos-enhanced moth-flame optimization (CMFO), death mechanism-based moth-flame optimization (ODSFMFO), the synthesis of the moth-flame optimizer with sine cosine mechanisms (SMFO), and the hybrid of whale and moth-flame optimization (WMFO). In the second set of experiments, the results obtained using MFO-SFR were compared with those of five well-known swarm intelligence algorithms, including particle swarm optimization (PSO), krill herd (KH), the grey wolf optimizer (GWO), the crow search algorithm (CSA), and the horse herd optimization algorithm (HOA), in 30 and 50 dimensions. Furthermore, the results of the two sets of experiments were statistically analyzed and ranked based on their average fitness values. To further analyze the performance of the proposed algorithm, its convergence and population diversity curves were plotted and compared with those of the other studied algorithms. The plotted curves showed that MFO-SFR could avoid premature convergence and local optimum solutions by maintaining its population diversity throughout the optimization process. To verify the viability of MFO-SFR in solving real-world optimization problems, two well-known mechanical engineering problems from the CEC 2020 test suite were considered. For future studies, improving the exploitation ability of MFO-SFR without degrading its exploration ability is a worthwhile direction of research.
Furthermore, the SFR strategy could serve as a reference for addressing low population diversity in other metaheuristic algorithms that suffer from this problem. Moreover, alternative methods of constructing the archive, such as the history-based method used in SHADE [103], could be investigated in future studies.

**Author Contributions:** Conceptualization, M.H.N.-S., A.F. and H.Z.; methodology, M.H.N.-S., A.F. and H.Z.; software, M.H.N.-S., A.F. and H.Z.; validation, M.H.N.-S., H.Z. and S.M.; formal analysis, M.H.N.-S., H.Z., A.F. and S.M.; investigation, M.H.N.-S., A.F. and H.Z.; resources, M.H.N.-S., H.Z. and S.M.; data curation, M.H.N.-S., A.F. and H.Z.; writing, M.H.N.-S., H.Z. and A.F.; original draft preparation, M.H.N.-S., A.F. and H.Z.; writing—review and editing, M.H.N.-S., H.Z., A.F. and S.M.; visualization, M.H.N.-S., A.F. and H.Z.; supervision, M.H.N.-S., H.Z. and S.M.; project administration, M.H.N.-S. and S.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data and code used in the research may be obtained from the corresponding author upon request.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A**

Table A1 provides the results of the pretest conducted on the canonical MFO in 30 dimensions to investigate the average and maximum percentages of situations when *ϕ<sup>i</sup>* was equal to 0.


**Table A1.** The analysis of situations where *ϕ<sup>i</sup>* was equal to zero with D = 30.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
