Article

SRIFA: Stochastic Ranking with Improved-Firefly-Algorithm for Constrained Optimization Engineering Design Problems

Department of CSE, Visvesvaraya National Institute of Technology, Nagpur 440010, India
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 250; https://doi.org/10.3390/math7030250
Submission received: 2 February 2019 / Revised: 1 March 2019 / Accepted: 5 March 2019 / Published: 11 March 2019
(This article belongs to the Special Issue Evolutionary Computation)

Abstract
The Firefly Algorithm (FA) is an eminent nature-inspired swarm-based technique for solving numerous real-world global optimization problems. This paper presents an overview of constraint-handling techniques and proposes a hybrid algorithm, the Stochastic Ranking with Improved Firefly Algorithm (SRIFA), for solving constrained real-world engineering optimization problems. The stochastic ranking approach is broadly used to maintain balance between penalty and fitness functions, while FA is extensively used due to its faster convergence compared with other metaheuristic algorithms. The basic FA is modified by incorporating opposition-based learning and a random scale factor to improve diversity and performance. Furthermore, SRIFA uses feasibility-based rules to maintain balance between penalty and objective functions. SRIFA is applied to the 24 CEC 2006 standard functions and five well-known constrained engineering design problems from the literature to evaluate and analyze its effectiveness. The overall computational results of SRIFA are better than those of the basic FA, and its statistical outcomes are significantly superior to those of other evolutionary algorithms on both the benchmark functions and the engineering design problems in terms of performance, quality and efficiency.

1. Introduction

Nature-Inspired Algorithms (NIAs) are very popular for solving real-life optimization problems; hence, designing efficient NIAs is rapidly developing as an interesting research area. Evolutionary algorithms (EAs) and swarm intelligence (SI) algorithms together are commonly known as NIAs, and their use is popular and efficient for solving optimization problems in the research field [1]. EAs are inspired by Darwinian theory; the most popular EAs are the genetic algorithm [2], evolutionary programming [3], evolutionary strategies [4] and genetic programming [5]. The term SI was coined by Gerardo Beni [6]; SI mimics the behavior of biological agents such as birds, fish and bees. The most popular SI algorithms are particle swarm optimization [7], the firefly algorithm [8], ant colony optimization [9], cuckoo search [10] and the bat algorithm [11]. Recently, many new population-based algorithms have been developed to solve various complex optimization problems, such as the killer whale algorithm [12], the water evaporation algorithm [13] and the crow search algorithm [14]. The No-Free-Lunch (NFL) theorem states that no single NIA is appropriate for solving all optimization problems. Consequently, choosing a relevant NIA for a particular optimization problem involves a lot of trial and error, and many NIAs are studied and modified to make them more powerful with regard to efficiency and convergence rate for specific optimization problems. The primary factors of NIAs are intensification (exploitation) and diversification (exploration) [15]. Exploitation refers to finding a good solution in local search regions, whereas exploration refers to exploring the global search space to generate diverse solutions [16].
Optimization algorithms can be classified in different ways. NIAs can be simply divided into two types: stochastic and deterministic [17]. Stochastic (in particular, metaheuristic) algorithms always include some randomness; for example, the firefly algorithm has "α" as a randomness parameter. This approach provides only a probabilistic guarantee of convergence for a global optimization problem, usually finding a global minimum or maximum in infinite time. A deterministic approach ensures that the global optimal solution will be found after a finite time; it follows a detailed procedure, and the path and values of both the problem variables and the function are repeatable. Hill climbing is a good example of a deterministic algorithm: it follows the same path (starting point and ending point) whenever the program is executed [18].
Real-world engineering optimization problems contain a number of equality and inequality constraints, which alter the search space. These problems are termed Constrained-Optimization Problems (COPs). A minimization COP is defined as:

$$\text{Minimize: } f(z), \quad z = (z_1, z_2, \ldots, z_n) \in S,$$

subject to

$$g_j(z) \le 0, \quad j = 1, 2, 3, \ldots, m;$$

$$h_j(z) = 0, \quad j = m+1, \ldots, q;$$

$$l_x \le z_x \le u_x, \quad x = 1, 2, \ldots, n,$$

where $f(z)$ is the objective function given in Equation (1), $z = (z_1, z_2, z_3, \ldots, z_n)$ is the vector of $n$-dimensional design variables, $l_x$ and $u_x$ are the lower and upper bounds of the $x$-th variable, the $g_j(z)$ are the $m$ inequality constraints, and the $h_j(z)$ are the $q-m$ equality constraints.
The feasible search space $F \subseteq S$ is the region that satisfies the $m$ inequality and $q-m$ equality constraints. Any point $z \in S$ is either feasible or infeasible. An inequality constraint is called active at a point $z \in F$ when it holds with equality, i.e., $g_j(z) = 0$ for some $j \in \{1, 2, 3, \ldots, m\}$. All equality constraints are regarded as active at every point of $F$.
In NIAs, most constraint-handling techniques deal with inequality constraints. Hence, we transform each equality constraint into an inequality constraint using a tolerance value ($\varepsilon$):

$$|h_j(z)| - \varepsilon \le 0,$$
where $j \in \{m+1, \ldots, q\}$ and $\varepsilon$ is the allowed tolerance. After applying the tolerance $\varepsilon$ to the equality constraints of a given optimization problem, the constraint violation $CV_j(z)$ of an individual for the $j$-th constraint is calculated by

$$CV_j(z) = \begin{cases} \max\{g_j(z), 0\}, & 1 \le j \le m, \\ \max\{|h_j(z)| - \varepsilon, 0\}, & m+1 \le j \le q. \end{cases}$$
The total constraint violation of an individual $z$ over all constraints is given as:

$$CV(z) = \sum_{j=1}^{q} CV_j(z).$$
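As a sketch, the violation measures of Equations (5) and (6) can be computed as follows (the constraint functions in the example are illustrative, not from the paper):

```python
def constraint_violation(z, ineq, eq, eps=1e-4):
    """Total constraint violation CV(z): sum of max{g_j(z), 0} over the
    inequality constraints plus max{|h_j(z)| - eps, 0} over the equalities."""
    cv = sum(max(g(z), 0.0) for g in ineq)
    cv += sum(max(abs(h(z)) - eps, 0.0) for h in eq)
    return cv

# Illustrative constraints: g(z) = z1 + z2 - 1 <= 0 and h(z) = z1 - z2 = 0
cv = constraint_violation([0.9, 0.3],
                          ineq=[lambda z: z[0] + z[1] - 1.0],
                          eq=[lambda z: z[0] - z[1]])
```

A point with CV(z) = 0 is feasible; larger values indicate greater violation and are used by the ranking procedures described below.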
With this background, the rest of the paper is organized as follows: Section 2 explains the classification of Constraint-Handling Techniques (CHTs); Section 3 gives an overview of constrained FA; Section 4 outlines the SR and OBL approaches; Section 5 describes the proposed SRIFA with OBL; the experimental setup and computational outcomes of SRIFA on the 24 CEC 2006 benchmark test functions are illustrated in Section 6, where SRIFA is also compared with existing metaheuristic algorithms with respect to performance and effectiveness; the computational results of SRIFA on engineering design problems are examined in Section 7; finally, Section 8 concludes the paper.

2. Constrained-Handling Techniques (CHT)

Classification of CHT

In this section, we provide a literature survey of the various CHT approaches that are adapted into NIAs to solve COPs. The classification of constraint-handling approaches is shown in Figure 1. Over the past few decades, various CHTs have been developed, particularly for EAs. Mezura-Montes and Coello conducted a comprehensive survey of NIAs [19].
  • Feasibility Rules Approach: The most effective CHT was proposed by Deb [20]. When any two solutions A i and A j are compared, A i is considered better than A j under the following conditions:
    (a)
    A i is a feasible solution and A j is not;
    (b)
    both A i and A j are feasible, and A i has a better objective value than A j ;
    (c)
    both A i and A j are infeasible, and A i has a lower sum of constraint violations than A j .
    Wang and Li [21] integrated a Feasibility Rule with Objective Function Information (FROFI), where Differential Evolution (DE) is used as the search algorithm along with the feasibility rule.
  • Penalty Function Method: COPs can be transformed into unconstrained problems using a penalty function. This approach includes various techniques such as the static-penalty, dynamic-penalty [22], adaptive-penalty [23], death-penalty [24], oracle-penalty and exact-penalty methods.
  • Special Representation Scheme: This method includes decoders, locating the boundary of a feasible region [25] and repair methods [26]. The newer repair methods are classified into three types: Constrained Optimization by Radial Basis Function Approximation (COBRA) [27], the centroid method and the no-pair method.
  • Multi-objective Methods (MO), also called vector optimization or Pareto optimization: these treat optimization problems that have two or more objectives [28]. There are roughly two types of MO methods: bi-objective and many-objective.
  • Split-up Objective and Constraints: There are many techniques that handle the objective and the constraints separately, such as co-evolution, the Powell and Skolnick technique, Deb's rule and ranking methods. There are different types of ranking methods, such as stochastic ranking, the Tessema and Yen method, multiple ranking and the balanced ranking method.
  • Hybrid Method: NIAs combined with a classical constrained method or a heuristic method are called hybrid methods. Hybrid methods include Lagrangian multipliers, constrained optimization by random evolution and fuzzy logic [25].
  • Miscellaneous Methods: These include ensemble [29], ϵ-constrained [30], dynamic constrained, hyper-heuristic, parent-centric and inverse parabolic methods [31].
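As an illustration, Deb's three feasibility rules from the first approach above can be sketched as a single pairwise comparison (a minimal sketch; the function name and the zero-violation feasibility test are our own):

```python
def deb_better(f_i, cv_i, f_j, cv_j):
    """Return True if solution i is preferred over solution j by Deb's rules:
    (a) a feasible solution beats an infeasible one,
    (b) among feasible solutions, the lower objective value wins,
    (c) among infeasible solutions, the lower total constraint violation wins."""
    feas_i, feas_j = cv_i == 0, cv_j == 0
    if feas_i and not feas_j:
        return True
    if feas_j and not feas_i:
        return False
    if feas_i and feas_j:
        return f_i < f_j
    return cv_i < cv_j
```

Such a comparator can replace ordinary objective comparison inside any population-based search, which is how FROFI-style methods couple it with DE.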

3. Overview of Constrained FA

3.1. Basic FA

FA is a swarm-based NIA proposed by Xin-She Yang [8]. Fister et al. [32] carried out a detailed, comprehensive review of FA. The mathematical formulation of the basic FA is as follows (Figure 2):

Let us assume that the attractiveness of a firefly corresponds to its brightness (i.e., its fitness value). The light intensity perceived between two fireflies (say $u$ and $v$) at distance $r_{uv}$ is given as:
$$I = I_0\, e^{-\gamma r_{uv}^2}, \quad r_{uv} = \lVert z_u - z_v \rVert,$$

where $I$ is the light intensity, $\gamma$ is the absorption coefficient, and $r_{uv}$ is the distance between the two fireflies $u$ and $v$; $I_0$ is the light intensity at $r = 0$. The attractiveness between two fireflies $u$ and $v$ is defined as:

$$\beta = \beta_0\, e^{-\gamma r_{uv}^2},$$

where $\beta_0$ is the attractiveness at $r = 0$.
The movement of fireflies is based on attractiveness: when firefly $u$ is less bright than firefly $v$, firefly $u$ moves towards firefly $v$, as determined by Equation (10):

$$z_u = z_u + \beta_0\, e^{-\gamma r_{uv}^2} (z_v - z_u) + \alpha \left(\mathrm{rand} - \tfrac{1}{2}\right),$$

where the second term is the attraction term, the third term is the randomization term, and $\mathrm{rand}$ is a vector of random numbers generated from a uniform distribution between 0 and 1.
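A minimal sketch of the movement rule in Equation (10), assuming NumPy and treating each firefly as a real-valued vector (the function name and default parameter values are illustrative):

```python
import numpy as np

def firefly_move(z_u, z_v, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Move firefly u toward the brighter firefly v (Equation (10)):
    z_u <- z_u + beta0*exp(-gamma*r^2)*(z_v - z_u) + alpha*(rand - 1/2)."""
    rng = rng or np.random.default_rng()
    r2 = np.sum((z_u - z_v) ** 2)           # squared distance r_uv^2
    beta = beta0 * np.exp(-gamma * r2)      # attractiveness, Equation (9)
    return z_u + beta * (z_v - z_u) + alpha * (rng.random(z_u.shape) - 0.5)
```

With $\gamma = 0$ and $\alpha = 0$ the move collapses to $z_u \leftarrow z_v$, which makes the role of the absorption coefficient in damping long-range attraction easy to see.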

3.2. Constrained FA

FA combined with CHTs has been widely used for solving COPs. Some typical constrained FA (CFA) variants are briefly discussed below.

To solve engineering optimization problems, an adaptive FA was designed, as discussed in [33]. Costa et al. [34] used penalty-based techniques with FA to evaluate different test functions for global optimization. Brajevic et al. [35] developed a feasibility-rule-based FA for COPs. Kulkarni et al. [36] proposed a modified feasibility-rule-based approach for solving COPs using probability. The upgraded FA (UFA) was proposed to solve mechanical engineering optimization problems [37]. Chou and Ngo [38] designed a multidimensional optimization structure with a modified FA (MFA).
Algorithm 1 Stochastic Ranking Approach (SRA)
1: N: number of sweeps; P f : probability balancing dominance of the objective f ( z ) over the sum of constraint violations C V k ( z ) ; m: number of individuals to be ranked
2: Initialize the individuals z k , k ∈ { 1 , 2 , 3 , … , m } , where z k is a candidate solution of f ( z )
3: for i = 1 to N do
4:  for k = 1 to m − 1 do
5:    R = U(0, 1) */ random number generator
6:    if ( C V k ( z k ) = C V k ( z k + 1 ) = 0) or R < P f then
7:      if f( z k ) > f( z k + 1 ) then
8:        swap ( z k , z k + 1 )
9:      end if
10:    else if C V k ( z k ) > C V k ( z k + 1 ) then
11:      swap ( z k , z k + 1 )
12:    end if
13:  end for
14:  if no swapping occurred then break
15:  end if
16: end for

4. Stochastic Ranking and Opposite-Based Learning (OBL)

This section presents an overview of SR and OBL.

4.1. Stochastic Ranking Approach (SRA)

This approach, introduced by Runarsson and Yao [39], balances the fitness (objective) function and the dominance of a penalty approach. The SRA uses a simple bubble-sort technique to rank the individuals. To rank the individuals, a probability P f is introduced, which determines how often the fitness function is used for comparison in the infeasible region of the search space. Normally, when any two individuals are compared, three cases are possible.

(a) If both individuals are feasible, the one with the smaller fitness value is given the higher priority; (b) if both individuals are infeasible, the one with the smaller constraint violation ( C V k ) is given the higher priority; and (c) if one individual is feasible and the other infeasible, the feasible individual is given the higher priority. The pseudo-code of the SRA is given in Algorithm 1.
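The stochastic bubble sort behind the SRA can be sketched as follows (a minimal sketch; function names are our own, and the break-on-no-swap behavior follows Algorithm 1):

```python
import random

def stochastic_rank(pop, f, cv, pf=0.45, rng=random):
    """Rank individuals best-first with stochastic ranking (Algorithm 1).

    pop: list of solutions; f: objective function; cv: total constraint
    violation; pf: probability of comparing by objective when at least one
    individual is infeasible."""
    z = list(pop)
    n = len(z)
    for _ in range(n):                          # bubble-sort sweeps
        swapped = False
        for k in range(n - 1):
            u = rng.random()
            if (cv(z[k]) == cv(z[k + 1]) == 0) or u < pf:
                if f(z[k]) > f(z[k + 1]):       # compare by objective
                    z[k], z[k + 1] = z[k + 1], z[k]
                    swapped = True
            elif cv(z[k]) > cv(z[k + 1]):       # compare by violation
                z[k], z[k + 1] = z[k + 1], z[k]
                swapped = True
        if not swapped:
            break
    return z
```

When every individual is feasible the procedure reduces to an ordinary sort by objective value; with pf = 0 it sorts infeasible individuals purely by violation.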

4.2. Opposition-Based Learning (OBL)

OBL was suggested by Tizhoosh and is inspired by the relationship between a candidate solution and its opposite. The main aim of OBL is to reach a better solution for a fitness function and enhance the performance of the algorithm [40]. Let $z \in [x, y]$ be any real number; the opposite of $z$, denoted $\acute{z}$, is defined as

$$\acute{z} = x + y - z.$$

Let $Z = (z_1, z_2, z_3, \ldots, z_n)$ be an $n$-dimensional decision vector, in which $z_i \in [x_i, y_i]$ and $i = 1, 2, \ldots, n$. The opposite vector is defined as $\acute{Z} = (\acute{z}_1, \acute{z}_2, \acute{z}_3, \ldots, \acute{z}_n)$, where $\acute{z}_i = x_i + y_i - z_i$.
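The opposite point of Equation (11) can be sketched as below (assuming NumPy, with the bounds given as per-variable arrays; the function name is illustrative):

```python
import numpy as np

def opposite_point(z, lb, ub):
    """Opposite solution per Equation (11): z'_i = x_i + y_i - z_i,
    where [x_i, y_i] = [lb_i, ub_i] are the bounds of the i-th variable."""
    z, lb, ub = np.asarray(z, float), np.asarray(lb, float), np.asarray(ub, float)
    return lb + ub - z
```

In OBL-based initialization, both the random population and its opposite population are evaluated, and the fitter half is retained.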

5. The Proposed Algorithm

The most important factor in NIAs is maintaining the diversity of the population in the search space to avoid premature convergence. From the intensification and diversification viewpoints, an increase in population diversity indicates that an NIA is in the diversification (exploration) phase, while a decrease in population diversity indicates that it is in the intensification (exploitation) phase. An adequate balance between exploration and exploitation is achieved by maintaining a diverse population. To maintain this balance, different approaches have been proposed, such as diversity maintenance, diversity learning, diversity control and direct approaches [16]. Diversity maintenance can be performed using a varying population size, duplicate removal and selection of a randomness parameter.

On the other hand, when the basic FA is run with insufficient diversification (exploration), its solutions may become stuck in a local optimum or a suboptimal region. Considering these issues, a new hybrid algorithm is proposed by improving the basic FA.

5.1. Varying Size of Population

A very common and simple technique for maintaining population diversity in NIAs is to increase the population size. However, an increased population size also increases the computation time required for the execution of NIAs. To overcome this problem, the OBL concept is applied at the initialization phase to improve the efficiency and performance of the basic FA.

5.2. Improved FA with OBL

In population-based algorithms, premature convergence to a local optimum is a common problem. In the basic FA, every firefly moves towards a brighter one. Initially, population diversity is high; after some generations, the diversity decreases due to a lack of selection pressure, and this leads to solutions trapped at local optima. The diversification of FA is thus reduced by premature convergence. To overcome this problem, OBL is applied at the initial phase of FA in order to increase the diversity of the firefly individuals.
In the proposed Improved Firefly Algorithm (IFA), we have to balance intensification and diversification for better performance and efficiency. To perform exploration, a randomization parameter is used to escape local optima and explore the global search space. To balance intensification and diversification, a random-scale factor (R) is applied when generating random populations. Das et al. [41] used a similar approach in DE:

$$R_{u,v} = lb_v + 0.5\,(1 + \mathrm{rand}(0,1))\,(ub_v - lb_v),$$

where $R_{u,v}$ is the $v$-th parameter of the $u$-th firefly, $ub_v$ and $lb_v$ are the upper and lower bounds of the $v$-th variable, and $\mathrm{rand}(0,1)$ is a uniformly distributed random number.

The movement of fireflies in Equation (10) is then modified as

$$z_u = z_u + \beta_0\, e^{-\gamma r_{uv}^2} (z_v - z_u) + R_{u,v}.$$
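A sketch of the modified movement of Equation (13), replacing the α-randomness term of Equation (10) with the random-scale factor R of Equation (12) (assuming NumPy; the function name and defaults are illustrative):

```python
import numpy as np

def improved_move(z_u, z_v, lb, ub, beta0=1.0, gamma=1.0, rng=None):
    """IFA movement (Equation (13)): attraction toward the brighter firefly
    plus the random-scale factor R of Equation (12) in place of the
    alpha-randomness term of Equation (10)."""
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    R = lb + 0.5 * (1.0 + rng.random(lb.shape)) * (ub - lb)   # Equation (12)
    r2 = np.sum((z_u - z_v) ** 2)
    return z_u + beta0 * np.exp(-gamma * r2) * (z_v - z_u) + R
```

The perturbation R scales with the width of each variable's bounds, so wide variables receive proportionally larger random moves than narrow ones.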

5.3. Stochastic Ranking with an Improved Firefly Algorithm (SRIFA)

Many studies have been published in the literature on solving COPs using EAs and FA. However, it is quite challenging to apply these approaches to handle constraints effectively in optimization problems. FA produces admirable outcomes on COPs and is well known for its quick convergence rate [42]. Owing to the quick convergence rate of FA and the popularity of stochastic ranking as a CHT, we propose a hybridized technique for constrained optimization problems, known as Stochastic Ranking with an Improved Firefly Algorithm (SRIFA). The flowchart of SRIFA is shown in Figure 3.

5.4. Duplicate Removal in SRIFA

Duplicate individuals in the population should be eliminated, and new individuals should be generated randomly and inserted into SRIFA. Figure 4 represents duplicate removal in SRIFA.
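A minimal sketch of duplicate removal, assuming duplicates are detected by a small Euclidean distance and replaced with uniformly random individuals within the bounds (the tolerance and the detection rule are our assumptions, not from the paper):

```python
import numpy as np

def remove_duplicates(pop, lb, ub, tol=1e-12, rng=None):
    """Replace duplicate individuals with fresh random ones.

    pop: (n, d) array of individuals; two rows closer than tol (Euclidean
    distance) are treated as duplicates, and the later row is re-sampled
    uniformly within [lb, ub]."""
    rng = rng or np.random.default_rng()
    pop = np.asarray(pop, float).copy()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            if np.linalg.norm(pop[i] - pop[j]) < tol:
                pop[j] = lb + rng.random(pop.shape[1]) * (ub - lb)
    return pop
```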

6. Experimental Results and Discussions

To examine the performance of SRIFA against existing NIAs, the proposed algorithm is applied to the 24 numerical benchmark test functions given in CEC 2006 [43]. These benchmark functions have been thoroughly studied before by various authors.

In Table 1, the main characteristics of the 24 test functions are given: the fitness function (f(z)), the number of variables or dimensions (D), the feasibility ratio $\rho = Feas/SeaR$ between the feasible region (Feas) and the search region (SeaR), the numbers of Linear Inequality (LI), Nonlinear Inequality (NI), Linear Equality (LE) and Nonlinear Equality (NE) constraints, the number of active constraints (a) and the optimal value of the fitness function (OPT). For convenience, all equality constraints $h_j(z)$ are transformed into inequality constraints $|h_j(z)| - \varepsilon \le 0$, where $\varepsilon = 10^{-4}$ is the tolerance value, whose purpose is to make feasible solutions attainable [43].

6.1. Experimental Design

To investigate the performance and effectiveness of SRIFA, it is tested over 24 standard functions and five well-known engineering design problems. All experiments on COPs were run on an Intel Core(TM) i5-3570 processor @ 3.40 GHz with 8 GB RAM, and the SRIFA algorithm was programmed in Matlab 8.4 (R2014b) under Windows 7 (x64). Table 2 shows the parameters used to conduct the computational experiments of SRIFA. For all experiments, 30 independent runs were performed for each problem. To investigate the efficiency and effectiveness of SRIFA, various statistical measures were used, such as the best, worst, mean, global optimum and standard deviation (Std). Results in bold indicate the best results obtained.

6.2. Calibration of SRIFA Parameters

In this section, we calibrate the parameters of SRIFA. According to the strategy of SRIFA described in Figure 2, SRIFA contains eight parameters: population size ($NP$), initial randomization value ($\alpha_0$), initial attractiveness value ($\beta_0$), absorption coefficient ($\gamma$), maximum number of generations (G), total number of function evaluations ($NFEs$), probability $P_f$ and $\phi$. To derive suitable parameters, we performed detailed fine-tuning by varying the parameters of SRIFA. The ranges explored were as follows: $NP \in$ (5 to 100 with an interval of 5), $\alpha_0 \in$ (0.10 to 1.00 with an interval of 0.10), $\beta_0 \in$ (0.10 to 1.00 with an interval of 0.10), $\gamma \in$ (0.01 to 1 with an interval of 0.01, then 5 to 100), G $\in$ (1000 to 10,000 with an interval of 1000), $NFEs \in$ (1000 to 240,000), $P_f \in$ (0.1 to 0.9 with an interval of 0.1) and $\phi \in$ (0.1 to 1.0 with an interval of 0.1). The best parameter values, obtained from these experiments over the various test functions, are given in Table 2.

6.3. Experimental Results of SRIFA Using a GKLS (GAVIANO, KVASOV, LERA and SERGEYEV) Generator

In this experiment, we have compared the proposed SRIFA with two novel approaches: Operational Characteristic and Aggregated Operational Zone. An operational characteristic approach is used for comparing deterministic algorithms, whereas an aggregated operational zone approach is used by extending the idea of operational characteristics to compare metaheuristic algorithms.
The proposed algorithm is compared with some widely used NIAs (such as DE, PSO and FA) and well-known deterministic algorithms such as DIRECT, DIRECT-L (locally biased version) and ADC (adaptive diagonal curves). The GKLS test-class generator is used in our experiments. The generator allows us to randomly generate classes of 100 test instances with known local minima and dimension. In this experiment, eight classes (small and hard) are used, with dimensions n = 2, 3, 4 and 5 [45]. The control parameters of the GKLS generator required for each class of 100 functions are the design variable or problem dimension (N), the radius of the convergence region ($\rho$), the distance from the paraboloid vertex to the global minimum (r) and the tolerance ($\delta$). The values of the control parameters are given in Table 3.

From Table 4, we can see the mean number of generations required for the computation of the 100 instances, reported for each deterministic and metaheuristic algorithm using the GKLS generator. The values ">m(i)" indicate that the given algorithm did not solve the global optimization problem i times over the 100 × 100 instances (i.e., 1000 runs for deterministic and 10,000 runs for metaheuristic algorithms). The maximum number of generations is set to $10^6$. The mean number of generations required by the proposed algorithm is less than that of the other algorithms, indicating that the performance of SRIFA is better than that of the given deterministic and metaheuristic algorithms.

6.4. Experimental Results FA and SRIFA

In our computational experiments, the proposed SRIFA is compared with the basic FA. SRIFA differs from the basic FA in the following respects. In SRIFA, the OBL technique is used to enhance the initial population, while the basic FA searches for optimal solutions from a fixed random initialization. In SRIFA, a chaotic (logistic) map is used to tune the absorption coefficient $\gamma$, while the basic FA keeps it fixed across iterations when exploring for the global solution. The random scale factor (R) is used to enhance performance in SRIFA. In addition, SRIFA uses Deb's rules in the form of the stochastic ranking method.

The experimental results of SRIFA and the basic FA are shown in Table 5. The comparison between SRIFA and FA is conducted on the 24 CEC (Congress on Evolutionary Computation) 2006 benchmark test functions [43]. The global optimum of each CEC 2006 function and the best, worst, mean and standard deviation (Std) outcomes produced by SRIFA and FA over 25 runs are reported in Table 5.

From Table 5, it is clearly observed that SRIFA provides promising results compared to the basic FA on all benchmark test functions. The proposed algorithm found optimal or best solutions on all test functions over 25 runs, except for two functions (G20 and G22), for which no feasible solution could be found. Note that 'N-F' means that no feasible result was found.

6.5. Comparison of SRIFA with Other NIAs

To investigate the performance and effectiveness of SRIFA, its results are compared with five metaheuristic algorithms: stochastic ranking with particle swarm optimization (SRPSO) [46], self-adaptive mix of particle swarm optimization (SAMO-PSO) [47], the upgraded firefly algorithm (UFA) [37], an ensemble of constraint-handling techniques for evolutionary programming (ECHT-EP2) [48] and a novel differential evolution algorithm (NDE) [49]. To ensure a fair comparison of these algorithms, the same number of function evaluations (NFEs = 240,000) was chosen.

The statistical outcomes achieved by SRPSO, SAMO-PSO, UFA, ECHT-EP2 and NDE for the 24 standard functions are listed in Table 6. Outcomes given in bold indicate the best or optimal solution; N-A denotes "Not Available". The benchmark functions G20 and G22 are discarded from the analysis because no feasible results were obtained for them.

Comparing SRIFA with SRPSO over the 22 functions described in Table 6, it is clearly seen that the statistical outcomes of SRIFA are better in most cases. SRIFA obtained the best or the same optimal values among the five metaheuristic algorithms. In terms of mean outcomes, SRIFA is better on test functions G02, G14, G17, G21 and G23 than all four other metaheuristic algorithms (i.e., SAMO-PSO, ECHT-EP2, UFA and NDE). SRIFA obtained a worse mean outcome than NDE on test function G19. On all remaining test functions, SRIFA was superior to all compared metaheuristic algorithms.

6.6. Statistical Analysis with Wilcoxon’s and Friedman Test

Statistical tests can be classified as parametric and non-parametric (also known as distribution-free) tests. Parametric tests make assumptions about the data parameters, while non-parametric tests do not. We performed statistical analysis of the data using non-parametric tests, mainly the Wilcoxon test (pair-wise comparison) and the Friedman test (multiple comparisons) [50].

The outcomes of the Wilcoxon test between SRIFA and the other five metaheuristic algorithms are shown in Table 7. The $R^+$ value indicates that the first algorithm is significantly superior to the second, whereas $R^-$ indicates that the second algorithm performs better than the first. In Table 7, it is observed that the $R^+$ values are higher than the $R^-$ values in all cases. Thus, we can conclude that SRIFA significantly outperforms all compared metaheuristic algorithms.

The outcomes of the Friedman test are shown in Table 8. We ranked the given metaheuristic algorithms according to their mean values. From Table 8, SRIFA obtained the first rank (the lowest value gets the first rank) among all metaheuristic algorithms over the 22 test functions. The average ranking of the SRIFA algorithm based on the Friedman test is depicted in Figure 5.

6.7. Computational Complexity of SRIFA

In order to reduce the complexity of the given problem, constraints are normalized. Let $n$ be the population size and $t$ the number of iterations. Generally, in NIAs, the complexity is $O(n \cdot FEs + C_{of} \cdot FEs)$, where $FEs$ is the maximum number of function evaluations allowed and $C_{of}$ is the cost of the objective function. At the initialization phase of SRIFA, the computational complexity of the population generated randomly by the OBL technique is $O(nt)$. In the searching and termination phases, the computational complexity of the two inner loops of FA and of the stochastic ranking using a bubble sort is $O(n^2 t + n \log n) + O(nt)$. The total computational complexity of SRIFA is $O(nt) + O(n^2 t + n \log n) + O(nt) \approx O(n^2 t)$.

7. SRIFA for Constrained Engineering Design Problems

In this section, we evaluate the efficiency and performance of SRIFA by solving five widely used constrained engineering design problems: (i) the tension/compression spring design [51]; (ii) the welded-beam problem [52]; (iii) the pressure-vessel problem [53]; (iv) the three-bar truss problem [51]; and (v) the speed-reducer problem [53]. For every engineering design problem, statistical outcomes were calculated by executing 25 independent runs. The mathematical formulations of all five constrained engineering design problems are given in Appendix A.
Every engineering problem has unique characteristics. The best value of constraints, parameter and objective values obtained by SRIFA for all five engineering problems are listed in Table 9. The statistical outcomes and number of function-evaluations (NFEs) of SRIFA for all five engineering design problems are listed in Table 10. These results were obtained by SRIFA over 25 independent runs.

7.1. Tension/Compression Spring Design

A tension/compression spring design problem is formulated to minimize weight subject to four constraints: shear stress, deflection, surge frequency and outside diameter. There are three design variables, namely the mean coil diameter (D), the wire diameter (d) and the number of active coils (N).

The proposed SRIFA approach is compared with SRPSO [46], MVDE [54], BA [44], MBA [55], JAYA [56], PVS [57], UABC [58], IPSO [59] and AFA [33]. The comparative results of SRIFA and the nine NIAs are given in Table 11. It is clearly observed that SRIFA provides the best results among the nine metaheuristic algorithms. The mean, worst and SD values obtained by SRIFA are superior to those of the other algorithms. Hence, we can conclude that SRIFA performs better in terms of statistical values. The comparison of the number of function evaluations (NFEs) with various NIAs is plotted in Figure 6.

7.2. Welded-Beam Problem

The main objective of the welded-beam problem is to minimize fabrication cost subject to seven constraints: the bending stress in the beam ($\sigma$), the shear stress ($\tau$), the deflection of the beam ($\delta$), the buckling load on the bar ($P_c$), side constraints, weld thickness and member thickness (L).

Many researchers have attempted to solve the welded-beam design problem. SRIFA was compared with SRPSO, MVDE, BA, MBA, JAYA, MFA, FA, IPSO and AFA. The statistical results of SRIFA compared with these nine metaheuristic algorithms are described in Table 12, where it can be seen that the statistical results obtained by SRIFA are better than those of all the compared algorithms.

The best optimum value obtained by SRIFA is superior to those of seven algorithms (i.e., SRPSO, MVDE, BA, MBA, MFA, FA and IPSO) and almost the same as the optimum values of JAYA and AFA. In terms of mean results, SRIFA performs better than all metaheuristic algorithms except AFA, which attains the same mean value. The standard deviation (SD) obtained by SRIFA is slightly worse than that of MBA but, as Table 12 shows, superior to those of all remaining algorithms. SRIFA required the smallest NFEs of all the metaheuristic algorithms. The comparison of NFEs with all NIAs is shown in Figure 7.

7.3. Pressure-Vessel Problem

The main purpose of the pressure-vessel problem is to minimize the manufacturing cost of a cylindrical vessel subject to four constraints. The four design variables are the shell thickness ( T s ), the head thickness ( T h ), the inner radius of the vessel (R) and the length of the vessel without the head (L).
The SRIFA is compared with SRPSO, MVDE, BA, EBA [60], FA [44], PVS, UABC, IPSO and AFA. The statistical results obtained by SRIFA and the nine metaheuristic algorithms are listed in Table 13. It is clearly seen that SRIFA attains the same best optimum value as six algorithms (MVDE, BA, EBA, PVS, UABC and IPSO). The mean, worst, SD and NFE results obtained by SRIFA are superior to those of all the NIAs. The comparisons of NFEs with all NIAs are shown in Figure 8.

7.4. Three-Bar-Truss Problem

The main purpose of the three-bar-truss problem is to minimize the volume of a three-bar truss subject to three stress constraints.
The SRIFA is compared with SRPSO, MVDE, NDE [49], MAL-FA [61], UABC, WCA [62] and UFA. The statistical results obtained by SRIFA and the seven NIAs are described in Table 14. It is clearly seen that SRIFA attains almost the same best optimum value as all the compared algorithms except UABC. In terms of the mean and worst results, SRIFA performed better than all the metaheuristic algorithms except NDE and UFA, which attain the same mean and worst values. The standard deviation (SD) obtained by SRIFA is superior to those of all the metaheuristic algorithms, and SRIFA requires the smallest NFEs. The comparisons of NFEs with all the NIAs are shown in Figure 9.

7.5. Speed-Reducer Problem

The goal of this problem is to minimize the weight of the speed reducer subject to eleven constraints. The problem has seven design variables: the gear face width, number of teeth on the pinion, teeth module, length of the first shaft between bearings, diameter of the first shaft, length of the second shaft between bearings, and diameter of the second shaft.
The proposed SRIFA approach is compared with SRPSO, MVDE, NDE, MBA, JAYA, UABC, PVS, IPSO and AFA. The statistical results obtained by SRIFA and the nine metaheuristic algorithms are listed in Table 15. It can be observed that SRIFA provides the best optimum value among all the compared algorithms, with JAYA attaining the same value. The statistical (best, mean and worst) values obtained by SRIFA and JAYA are almost the same, while SRIFA requires fewer NFEs. Hence, we can conclude that SRIFA performed better in terms of statistical values. Comparisons of the number of function evaluations (NFEs) with the various metaheuristic algorithms are plotted in Figure 10.

8. Conclusions

This paper presented a review of constraint-handling techniques and a new hybrid algorithm, the Stochastic Ranking with Improved Firefly Algorithm (SRIFA), for solving constrained optimization problems. In population-based algorithms, stagnation and premature convergence occur due to an imbalance between exploration and exploitation during the search process, which traps the solution in a local optimum. To overcome this problem, the Opposite-Based Learning (OBL) approach was applied to the basic FA. The OBL technique was used for the initial population, which increases the diversity of the population and improves the performance of the proposed algorithm.
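The OBL initialization step can be sketched as follows. This is an illustrative Python sketch under our own naming conventions, not the authors' implementation: a random population and its opposite population (the opposite of x_j in [lb_j, ub_j] is lb_j + ub_j − x_j) are generated, and the NP fittest individuals of the union are retained.

```python
import random

def obl_init(np_, dim, lb, ub, objective):
    """Opposition-based initialization (Tizhoosh [40]): keep the np_
    fittest individuals from a random population and its opposite."""
    pop = [[random.uniform(lb[j], ub[j]) for j in range(dim)]
           for _ in range(np_)]
    # The opposite of x_j in [lb_j, ub_j] is lb_j + ub_j - x_j,
    # which always stays inside the same bounds.
    opp = [[lb[j] + ub[j] - x[j] for j in range(dim)] for x in pop]
    union = pop + opp
    union.sort(key=objective)   # minimization: best individual first
    return union[:np_]
```

Since the opposite of a feasible-bounded point stays within the bounds, the enlarged candidate set costs only NP extra objective evaluations at startup.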
The random scale factor was incorporated into the basic FA to balance intensification and diversification; it helps to overcome premature convergence and increases the performance of the proposed algorithm. The SRIFA was applied to 24 CEC 2006 benchmark test functions and five constrained engineering design problems. Various computational experiments were conducted to check the effectiveness and quality of the proposed algorithm. The statistical results obtained by SRIFA, when compared with those of the basic FA, clearly indicate that SRIFA is superior in terms of statistical values.
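The stochastic ranking approach at the core of SRIFA can be illustrated as follows. This sketch is ours, following Runarsson and Yao [39], and the paper's implementation details may differ: a bubble-sort-like pass compares adjacent solutions by objective value with probability Pf (0.45 in Table 2) when at least one of them is infeasible, and by the sum of constraint violations otherwise.

```python
import random

def stochastic_rank(f, viol, pf=0.45):
    """Return indices sorted by stochastic ranking: with probability pf
    (or always, when both solutions are feasible) adjacent pairs are
    compared by objective value f, otherwise by constraint violation."""
    idx = list(range(len(f)))
    for _ in range(len(f)):
        swapped = False
        for i in range(len(f) - 1):
            a, b = idx[i], idx[i + 1]
            use_objective = ((viol[a] == 0 and viol[b] == 0)
                             or random.random() < pf)
            key = f if use_objective else viol
            if key[a] > key[b]:             # minimization: smaller is better
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

When every solution is feasible the procedure reduces to an ordinary sort on the objective; the probabilistic comparison only matters in infeasible regions, which is how it balances the penalty and objective functions.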
Furthermore, the computational experiments demonstrated that the performance of SRIFA was better than that of five NIAs. The performance and efficiency of the proposed algorithm were significantly superior to those of the other metaheuristic algorithms reported in the literature. The statistical analysis of SRIFA was conducted using the Wilcoxon and Friedman tests. The results obtained prove that the efficiency, quality and performance of SRIFA are statistically superior to those of the other NIAs. Moreover, SRIFA was also applied efficiently to the five constrained engineering design problems. In the future, SRIFA can be modified and extended to solve multi-objective problems.

Author Contributions

Conceptualization, U.B.; methodology, U.B. and D.S.; software, U.B. and D.S.; validation, U.B. and D.S.; formal analysis, U.B. and D.S.; investigation, U.B.; resources, D.S.; data curation, D.S.; writing—original draft preparation, U.B.; writing—review and editing, U.B. and D.S.; supervision, D.S.

Acknowledgments

The authors would like to thank Dr. Yaroslav Sergeyev for sharing the GKLS generator. The authors would also like to thank Mr. Rohit for his valuable suggestions, which contributed a lot to the technical improvement of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
COPs	Constrained Optimization Problems
CHT	Constrained Handling Techniques
EAs	Evolutionary Algorithms
FA	Firefly Algorithm
OBL	Opposite-Based Learning
NIAs	Nature Inspired Algorithms
NFEs	Number of Function Evaluations
SRA	Stochastic Ranking Approach
SRIFA	Stochastic Ranking with Improved Firefly Algorithm

Appendix A

Appendix A.1. Tension/Compression Spring Design Problem

$$
\begin{aligned}
\min\quad & f(z) = (z_3 + 2)\,z_2 z_1^2,\\
\text{subject to:}\quad
& G_1(z) = 1 - \frac{z_2^3 z_3}{71{,}785\,z_1^4} \le 0,\\
& G_2(z) = \frac{4z_2^2 - z_1 z_2}{12{,}566\,(z_2 z_1^3 - z_1^4)} + \frac{1}{5108\,z_1^2} - 1 \le 0,\\
& G_3(z) = 1 - \frac{140.45\,z_1}{z_2^2 z_3} \le 0,\qquad
  G_4(z) = \frac{z_2 + z_1}{1.5} - 1 \le 0,\\
& 0.05 \le z_1 \le 2,\quad 0.25 \le z_2 \le 1.3,\quad 2 \le z_3 \le 15.
\end{aligned}
$$
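The formulation above can be evaluated directly. The following is our illustrative sketch (not the authors' code); the best-known solution quoted in the test below is the one commonly reported in the literature for this problem.

```python
def spring(z):
    """Tension/compression spring design of Appendix A.1:
    returns the objective f(z) and the constraints G_i(z) <= 0."""
    z1, z2, z3 = z
    f = (z3 + 2.0) * z2 * z1 ** 2
    g = [
        1.0 - z2 ** 3 * z3 / (71785.0 * z1 ** 4),
        (4.0 * z2 ** 2 - z1 * z2) / (12566.0 * (z2 * z1 ** 3 - z1 ** 4))
        + 1.0 / (5108.0 * z1 ** 2) - 1.0,
        1.0 - 140.45 * z1 / (z2 ** 2 * z3),
        (z2 + z1) / 1.5 - 1.0,
    ]
    return f, g
```

At the widely reported best solution z ≈ (0.051689, 0.356718, 11.288966), the objective is f ≈ 0.012665 and all four constraints are satisfied to within round-off.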

Appendix A.2. Welded-Beam-Design Problem

$$
\begin{aligned}
\min\quad & f(z) = 1.10471\,z_1^2 z_2 + 0.04811\,z_3 z_4\,(14 + z_2),\\
\text{subject to:}\quad
& G_1(z) = \tau(z) - \tau_{\max} \le 0,\qquad
  G_2(z) = \sigma(z) - \sigma_{\max} \le 0,\qquad
  G_3(z) = z_1 - z_4 \le 0,\\
& G_4(z) = 0.10471\,z_1^2 + 0.04811\,z_3 z_4\,(14 + z_2) - 5 \le 0,\\
& G_5(z) = 0.125 - z_1 \le 0,\qquad
  G_6(z) = \delta(z) - \delta_{\max} \le 0,\qquad
  G_7(z) = P - P_c(z) \le 0,\\
& 0.1 \le z_i \le 2\ (i = 1, 4),\qquad 0.1 \le z_i \le 10\ (i = 2, 3),
\end{aligned}
$$
where
$$
\begin{aligned}
& \tau(z) = \sqrt{(\tau')^2 + 2\tau'\tau''\,\frac{z_2}{2R} + (\tau'')^2},\qquad
  \tau' = \frac{P}{\sqrt{2}\,z_1 z_2},\qquad
  \tau'' = \frac{MR}{J},\qquad
  M = P\left(L + \frac{z_2}{2}\right),\\
& R = \sqrt{\frac{z_2^2}{4} + \left(\frac{z_1 + z_3}{2}\right)^2},\qquad
  J = 2\left\{\sqrt{2}\,z_1 z_2\left[\frac{z_2^2}{12} + \left(\frac{z_1 + z_3}{2}\right)^2\right]\right\},\\
& \sigma(z) = \frac{6PL}{z_4 z_3^2},\qquad
  \delta(z) = \frac{4PL^3}{E z_3^3 z_4},\qquad
  P_c(z) = \frac{4.013\,E\sqrt{z_3^2 z_4^6/36}}{L^2}\left(1 - \frac{z_3}{2L}\sqrt{\frac{E}{4G}}\right),\\
& P = 6000\ \text{lb},\quad L = 14\ \text{in},\quad E = 30\times 10^6\ \text{psi},\quad G = 12\times 10^6\ \text{psi},\\
& \tau_{\max} = 13{,}600\ \text{psi},\quad \sigma_{\max} = 30{,}000\ \text{psi},\quad \delta_{\max} = 0.25\ \text{in}.
\end{aligned}
$$
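A direct evaluation of the welded-beam model can be sketched as below. This is our illustrative code for the standard formulation of the problem (the paper's own evaluation routine is not given); the check point is the best-known solution widely quoted in the literature, at which the shear-stress and bending-stress constraints are binding, so they evaluate to roughly zero within the rounding of the quoted decision variables.

```python
import math

P, L, E, G_MOD = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(z):
    """Welded-beam design of Appendix A.2: objective and G_i(z) <= 0."""
    z1, z2, z3, z4 = z
    tau_p = P / (math.sqrt(2.0) * z1 * z2)             # primary shear
    M = P * (L + z2 / 2.0)                              # bending moment
    R = math.sqrt(z2 ** 2 / 4.0 + ((z1 + z3) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * z1 * z2
               * (z2 ** 2 / 12.0 + ((z1 + z3) / 2.0) ** 2))
    tau_pp = M * R / J                                  # torsional shear
    tau = math.sqrt(tau_p ** 2
                    + 2.0 * tau_p * tau_pp * z2 / (2.0 * R)
                    + tau_pp ** 2)
    sigma = 6.0 * P * L / (z4 * z3 ** 2)                # bending stress
    delta = 4.0 * P * L ** 3 / (E * z3 ** 3 * z4)       # deflection
    pc = (4.013 * E * math.sqrt(z3 ** 2 * z4 ** 6 / 36.0) / L ** 2) \
         * (1.0 - (z3 / (2.0 * L)) * math.sqrt(E / (4.0 * G_MOD)))
    f = 1.10471 * z1 ** 2 * z2 + 0.04811 * z3 * z4 * (14.0 + z2)
    g = [tau - TAU_MAX, sigma - SIGMA_MAX, z1 - z4,
         0.10471 * z1 ** 2 + 0.04811 * z3 * z4 * (14.0 + z2) - 5.0,
         0.125 - z1, delta - DELTA_MAX, P - pc]
    return f, g
```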

Appendix A.3. Pressure-Vessel Design Problem

$$
\begin{aligned}
\min\quad & f(z) = 0.6224\,z_1 z_3 z_4 + 1.7781\,z_2 z_3^2 + 3.1661\,z_1^2 z_4 + 19.84\,z_1^2 z_3,\\
\text{subject to:}\quad
& G_1(z) = -z_1 + 0.0193\,z_3 \le 0,\qquad
  G_2(z) = -z_2 + 0.00954\,z_3 \le 0,\\
& G_3(z) = -\pi z_3^2 z_4 - \frac{4}{3}\pi z_3^3 + 1{,}296{,}000 \le 0,\qquad
  G_4(z) = z_4 - 240 \le 0,\\
& 0 \le z_i \le 100\ (i = 1, 2),\qquad 10 \le z_i \le 200\ (i = 3, 4).
\end{aligned}
$$
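The pressure-vessel model is simple enough to evaluate in a few lines. The sketch below is ours; the check uses the best-known solution commonly cited in the literature, where the thickness and volume constraints are binding, so a small slack is allowed for the rounding of the quoted decision variables.

```python
import math

def pressure_vessel(z):
    """Pressure-vessel design of Appendix A.3: objective and G_i(z) <= 0."""
    z1, z2, z3, z4 = z
    f = (0.6224 * z1 * z3 * z4 + 1.7781 * z2 * z3 ** 2
         + 3.1661 * z1 ** 2 * z4 + 19.84 * z1 ** 2 * z3)
    g = [
        -z1 + 0.0193 * z3,                     # shell thickness
        -z2 + 0.00954 * z3,                    # head thickness
        -math.pi * z3 ** 2 * z4                # minimum enclosed volume
        - (4.0 / 3.0) * math.pi * z3 ** 3 + 1296000.0,
        z4 - 240.0,                            # maximum length
    ]
    return f, g
```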

Appendix A.4. Three-Bar-Truss Design Problem

$$
\begin{aligned}
\min\quad & f(z) = \left(2\sqrt{2}\,z_1 + z_2\right) \times l,\\
\text{subject to:}\quad
& G_1(z) = \frac{\sqrt{2}\,z_1 + z_2}{\sqrt{2}\,z_1^2 + 2 z_1 z_2}\,P - \sigma \le 0,\qquad
  G_2(z) = \frac{z_2}{\sqrt{2}\,z_1^2 + 2 z_1 z_2}\,P - \sigma \le 0,\\
& G_3(z) = \frac{1}{z_1 + \sqrt{2}\,z_2}\,P - \sigma \le 0,\\
& 0 \le z_i \le 1\ (i = 1, 2),\qquad l = 100\ \text{cm},\quad P = 2\ \text{kN/cm}^2,\quad \sigma = 2\ \text{kN/cm}^2.
\end{aligned}
$$
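The three-bar-truss model can likewise be evaluated directly. The following is our illustrative sketch; the check point is the best-known solution commonly reported in the literature, at which the first stress constraint is binding.

```python
import math

L_TRUSS, P_LOAD, SIGMA = 100.0, 2.0, 2.0

def three_bar_truss(z):
    """Three-bar-truss design of Appendix A.4: objective and G_i(z) <= 0."""
    z1, z2 = z
    f = (2.0 * math.sqrt(2.0) * z1 + z2) * L_TRUSS
    den = math.sqrt(2.0) * z1 ** 2 + 2.0 * z1 * z2
    g = [
        ((math.sqrt(2.0) * z1 + z2) / den) * P_LOAD - SIGMA,
        (z2 / den) * P_LOAD - SIGMA,
        (1.0 / (z1 + math.sqrt(2.0) * z2)) * P_LOAD - SIGMA,
    ]
    return f, g
```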

Appendix A.5. Speed-Reducer-Design Problem

$$
\begin{aligned}
\min\quad & f(z) = 0.7854\,z_1 z_2^2\left(3.3333\,z_3^2 + 14.9334\,z_3 - 43.0934\right)
 - 1.508\,z_1\left(z_6^2 + z_7^2\right)\\
&\qquad + 7.4777\left(z_6^3 + z_7^3\right) + 0.7854\left(z_4 z_6^2 + z_5 z_7^2\right),\\
\text{subject to:}\quad
& G_1(z) = \frac{27}{z_1 z_2^2 z_3} - 1 \le 0,\qquad
  G_2(z) = \frac{397.5}{z_1 z_2^2 z_3^2} - 1 \le 0,\\
& G_3(z) = \frac{1.93\,z_4^3}{z_2 z_6^4 z_3} - 1 \le 0,\qquad
  G_4(z) = \frac{1.93\,z_5^3}{z_2 z_7^4 z_3} - 1 \le 0,\\
& G_5(z) = \frac{\left[\left(745\,z_4/(z_2 z_3)\right)^2 + 16.9\times 10^6\right]^{1/2}}{110\,z_6^3} - 1 \le 0,\qquad
  G_6(z) = \frac{\left[\left(745\,z_5/(z_2 z_3)\right)^2 + 157.5\times 10^6\right]^{1/2}}{85\,z_7^3} - 1 \le 0,\\
& G_7(z) = \frac{z_2 z_3}{40} - 1 \le 0,\qquad
  G_8(z) = \frac{5\,z_2}{z_1} - 1 \le 0,\qquad
  G_9(z) = \frac{z_1}{12\,z_2} - 1 \le 0,\\
& G_{10}(z) = \frac{1.5\,z_6 + 1.9}{z_4} - 1 \le 0,\qquad
  G_{11}(z) = \frac{1.1\,z_7 + 1.9}{z_5} - 1 \le 0,\\
& 2.6 \le z_1 \le 3.6,\quad 0.7 \le z_2 \le 0.8,\quad 17 \le z_3 \le 28,\quad 7.3 \le z_4, z_5 \le 8.3,\\
& 2.9 \le z_6 \le 3.9,\quad 5 \le z_7 \le 5.5.
\end{aligned}
$$
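The eleven-constraint speed-reducer model can be sketched as follows. This is our illustrative code for the standard formulation; the check point is the best-known solution commonly quoted in the literature, where constraints G5, G6, G8 and G11 are binding.

```python
import math

def speed_reducer(z):
    """Speed-reducer design of Appendix A.5: objective and G_i(z) <= 0."""
    z1, z2, z3, z4, z5, z6, z7 = z
    f = (0.7854 * z1 * z2 ** 2 * (3.3333 * z3 ** 2 + 14.9334 * z3 - 43.0934)
         - 1.508 * z1 * (z6 ** 2 + z7 ** 2)
         + 7.4777 * (z6 ** 3 + z7 ** 3)
         + 0.7854 * (z4 * z6 ** 2 + z5 * z7 ** 2))
    g = [
        27.0 / (z1 * z2 ** 2 * z3) - 1.0,
        397.5 / (z1 * z2 ** 2 * z3 ** 2) - 1.0,
        1.93 * z4 ** 3 / (z2 * z6 ** 4 * z3) - 1.0,
        1.93 * z5 ** 3 / (z2 * z7 ** 4 * z3) - 1.0,
        math.sqrt((745.0 * z4 / (z2 * z3)) ** 2 + 16.9e6)
        / (110.0 * z6 ** 3) - 1.0,
        math.sqrt((745.0 * z5 / (z2 * z3)) ** 2 + 157.5e6)
        / (85.0 * z7 ** 3) - 1.0,
        z2 * z3 / 40.0 - 1.0,
        5.0 * z2 / z1 - 1.0,
        z1 / (12.0 * z2) - 1.0,
        (1.5 * z6 + 1.9) / z4 - 1.0,
        (1.1 * z7 + 1.9) / z5 - 1.0,
    ]
    return f, g
```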

References

  1. Slowik, A.; Kwasnicka, H. Nature Inspired Methods and Their Industry Applications—Swarm Intelligence Algorithms. IEEE Trans. Ind. Inform. 2018, 14, 1004–1015. [Google Scholar] [CrossRef]
  2. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef] [Green Version]
  3. Fogel, D.B. An introduction to simulated evolutionary optimization. IEEE Trans. Neural Netw. 1994, 5, 3–14. [Google Scholar] [CrossRef] [PubMed]
  4. Beyer, H.G.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  5. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  6. Beni, G. From Swarm Intelligence to Swarm Robotics. In Swarm Robotics; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–9. [Google Scholar]
  7. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  8. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  9. Dorigo, M.; Birattari, M. Ant Colony Optimization. In Encyclopedia of Machine Learning; Springer: Boston, MA, USA, 2010; pp. 36–39. [Google Scholar]
  10. Yang, X.; Deb, S. Cuckoo Search via Lévy flights. In Proceedings of the 2009 World Congress on Nature Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  11. Yang, X.S. A New Metaheuristic Bat-Inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  12. Biyanto, T.R.; Irawan, S.; Febrianto, H.Y.; Afdanny, N.; Rahman, A.H.; Gunawan, K.S.; Pratama, J.A.; Bethiana, T.N. Killer Whale Algorithm: An Algorithm Inspired by the Life of Killer Whale. Procedia Comput. Sci. 2017, 124, 151–157. [Google Scholar] [CrossRef]
  13. Saha, A.; Das, P.; Chakraborty, A.K. Water evaporation algorithm: A new metaheuristic algorithm towards the solution of optimal power flow. Eng. Sci. Technol. Int. J. 2017, 20, 1540–1552. [Google Scholar] [CrossRef]
  14. Abdelaziz, A.Y.; Fathy, A. A novel approach based on crow search algorithm for optimal selection of conductor size in radial distribution networks. Eng. Sci. Technol. Int. J. 2017, 20, 391–402. [Google Scholar] [CrossRef]
  15. Blum, C.; Roli, A. Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  16. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and Exploitation in Evolutionary Algorithms: A Survey. ACM Comput. Surv. 2013, 45, 35:1–35:33. [Google Scholar] [CrossRef]
  17. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. Emmental-Type GKLS-Based Multiextremal Smooth Test Problems with Non-linear Constraints. In Learning and Intelligent Optimization; Battiti, R., Kvasov, D.E., Sergeyev, Y.D., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 383–388. [Google Scholar]
  18. Kvasov, D.E.; Mukhametzhanov, M.S. Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Appl. Math. Comput. 2018, 318, 245–259. [Google Scholar] [CrossRef]
  19. Mezura-Montes, E.; Coello, C.A.C. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  20. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, Y.; Wang, B.; Li, H.; Yen, G.G. Incorporating Objective Function Information Into the Feasibility Rule for Constrained Evolutionary Optimization. IEEE Trans. Cybern. 2016, 46, 2938–2952. [Google Scholar] [CrossRef] [PubMed]
  22. Tasgetiren, M.F.; Suganthan, P.N. A Multi-Populated Differential Evolution Algorithm for Solving Constrained Optimization Problem. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 33–40. [Google Scholar]
  23. Farmani, R.; Wright, J.A. Self-adaptive fitness formulation for constrained optimization. IEEE Trans. Evol. Comput. 2003, 7, 445–455. [Google Scholar] [CrossRef] [Green Version]
  24. Kramer, O.; Schwefel, H.P. On three new approaches to handle constraints within evolution strategies. Natural Comput. 2006, 5, 363–385. [Google Scholar] [CrossRef]
  25. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  26. Chootinan, P.; Chen, A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput. Oper. Res. 2006, 33, 2263–2281. [Google Scholar] [CrossRef]
  27. Regis, R.G. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Eng. Optim. 2014, 46, 218–243. [Google Scholar] [CrossRef]
  28. Mezura-Montes, E.; Reyes-Sierra, M.; Coello, C.A.C. Multi-objective Optimization Using Differential Evolution: A Survey of the State-of-the-Art. In Advances in Differential Evolution; Springer: Berlin/Heidelberg, Germany, 2008; pp. 173–196. [Google Scholar]
  29. Mallipeddi, R.; Das, S.; Suganthan, P.N. Ensemble of Constraint Handling Techniques for Single Objective Constrained Optimization. In Evolutionary Constrained Optimization; Springer: New Delhi, India, 2015; pp. 231–248. [Google Scholar]
  30. Takahama, T.; Sakai, S.; Iwane, N. Solving Nonlinear Constrained Optimization Problems by the e Constrained Differential Evolution. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; Volume 3, pp. 2322–2327. [Google Scholar]
  31. Padhye, N.; Mittal, P.; Deb, K. Feasibility Preserving Constraint-handling Strategies for Real Parameter Evolutionary Optimization. Comput. Optim. Appl. 2015, 62, 851–890. [Google Scholar] [CrossRef]
  32. Fister, I.; Yang, X.S.; Fister, D. Firefly Algorithm: A Brief Review of the Expanding Literature. In Cuckoo Search and Firefly Algorithm: Theory and Applications; Springer International Publishing: Cham, Switzerland, 2014; pp. 347–360. [Google Scholar]
  33. Baykasoğlu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  34. Costa, M.F.P.; Rocha, A.M.A.C.; Francisco, R.B.; Fernandes, E.M.G.P. Firefly penalty-based algorithm for bound constrained mixed-integer nonlinear programming. Optimization 2016, 65, 1085–1104. [Google Scholar] [CrossRef]
  35. Brajevic, I.; Tuba, M.; Bacanin, N. Firefly Algorithm with a Feasibility-Based Rules for Constrained Optimization. In Proceedings of the 6th WSEAS European Computing Conference, Prague, Czech Republic, 24–26 September 2012; pp. 163–168. [Google Scholar]
  36. Deshpande, A.M.; Phatnani, G.M.; Kulkarni, A.J. Constraint handling in Firefly Algorithm. In Proceedings of the 2013 IEEE International Conference on Cybernetics (CYBCO), Lausanne, Switzerland, 13–15 July 2013; pp. 186–190. [Google Scholar]
  37. Brajević, I.; Ignjatović, J. An upgraded firefly algorithm with feasibility-based rules for constrained engineering optimization problems. J. Intell. Manuf. 2018. [Google Scholar] [CrossRef]
  38. Chou, J.S.; Ngo, N.T. Modified Firefly Algorithm for Multidimensional Optimization in Structural Design Problems. Struct. Multidiscip. Optim. 2017, 55, 2013–2028. [Google Scholar] [CrossRef]
  39. Runarsson, T.P.; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 2000, 4, 284–294. [Google Scholar] [CrossRef] [Green Version]
  40. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701. [Google Scholar]
  41. Das, S.; Konar, A.; Chakraborty, U.K. Two Improved Differential Evolution Schemes for Faster Global Search. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998. [Google Scholar]
  42. Ismail, M.M.; Othman, M.A.; Sulaiman, H.A.; Misran, M.H.; Ramlee, R.H.; Abidin, A.F.Z.; Nordin, N.A.; Zakaria, M.I.; Ayob, M.N.; Yakop, F. Firefly algorithm for path optimization in PCB holes drilling process. In Proceedings of the 2012 International Conference on Green and Ubiquitous Technology, Jakarta, Indonesia, 30 June–1 July 2012; pp. 110–113. [Google Scholar]
  43. Liang, J.; Runarsson, T.P.; Mezura-Montes, E.; Clerc, M.; Suganthan, P.N.; Coello, C.C.; Deb, K. Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J. Appl. Mech. 2006, 41, 8–31. [Google Scholar]
  44. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Mixed variable structural optimization using Firefly Algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  45. Sergeyev, Y.D.; Kvasov, D.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Ali, L.; Sabat, S.L.; Udgata, S.K. Particle Swarm Optimisation with Stochastic Ranking for Constrained Numerical and Engineering Benchmark Problems. Int. J. Bio-Inspired Comput. 2012, 4, 155–166. [Google Scholar] [CrossRef]
  47. Elsayed, S.M.; Sarker, R.A.; Mezura-Montes, E. Self-adaptive mix of particle swarm methodologies for constrained optimization. Inf. Sci. 2014, 277, 216–233. [Google Scholar] [CrossRef]
  48. Mallipeddi, R.; Suganthan, P.N. Ensemble of Constraint Handling Techniques. IEEE Trans. Evol. Comput. 2010, 14, 561–579. [Google Scholar] [CrossRef]
  49. Mohamed, A.W. A novel differential evolution algorithm for solving constrained engineering optimization problems. J. Intell. Manuf. 2018, 29, 659–692. [Google Scholar] [CrossRef]
  50. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  51. Ray, T.; Liew, K.M. Society and civilization: An optimization algorithm based on the simulation of social behavior. IEEE Trans. Evol. Comput. 2003, 7, 386–396. [Google Scholar] [CrossRef]
  52. zhuo Huang, F.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  53. Lee, K.S.; Geem, Z.W. A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Comput. Methods Appl. Mech. Eng. 2005, 194, 3902–3933. [Google Scholar] [CrossRef]
  54. de Melo, V.V.; Carosio, G.L. Investigating Multi-View Differential Evolution for solving constrained engineering design problems. Expert Syst. Appl. 2013, 40, 3370–3377. [Google Scholar] [CrossRef]
  55. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  56. Rao, R.V.; Waghmare, G. A new optimization algorithm for solving complex constrained design optimization problems. Eng. Optim. 2017, 49, 60–83. [Google Scholar] [CrossRef]
  57. Savsani, P.; Savsani, V. Passing vehicle search (PVS): A novel metaheuristic algorithm. Appl. Math. Model. 2016, 40, 3951–3978. [Google Scholar] [CrossRef]
  58. Brajevic, I.; Tuba, M. An upgraded artificial bee colony (ABC) algorithm for constrained optimization problems. J. Intell. Manuf. 2013, 24, 729–740. [Google Scholar] [CrossRef]
  59. Guedria, N.B. Improved accelerated PSO algorithm for mechanical engineering optimization problems. Appl. Soft Comput. 2016, 40, 455–467. [Google Scholar] [CrossRef]
  60. Yılmaz, S.; Küçüksille, E.U. A new modification approach on bat algorithm for solving optimization problems. Appl. Soft Comput. 2015, 28, 259–275. [Google Scholar] [CrossRef]
  61. Balande, U.; Shrimankar, D. An oracle penalty and modified augmented Lagrangian methods with firefly algorithm for constrained optimization problems. Oper. Res. 2017. [Google Scholar] [CrossRef]
  62. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166. [Google Scholar] [CrossRef]
Figure 1. The classification of Constrained-Handling Techniques.
Figure 2. Basic Firefly Algorithm (BFA).
Figure 3. The flowchart of the SRIFA algorithm.
Figure 4. The flowchart of duplication removal in SRIFA.
Figure 5. Average ranking of the proposed algorithm with various metaheuristic algorithms.
Figure 6. NIAs with NFEs for the tension/compression problem.
Figure 7. NIAs with NFEs for welded-beam problem.
Figure 8. NIAs with NFEs for pressure-vessel problem.
Figure 9. NIAs with NFEs for three-bar-truss problem.
Figure 10. NIAs with NFEs for the speed-reducer problem.
Table 1. Characteristic of 24 standard-functions.
Problem | Dimension | Type of Function | ρ (%) | L-I | N-I | L-E | N-E | a | Opt
G01 | 13 | Quadratic | 0.0003 | 9 | 0 | 0 | 0 | 6 | −15.0000
G02 | 20 | Non-linear | 99.9962 | 1 | 1 | 0 | 0 | 1 | −0.8036
G03 | 10 | Non-linear | 0.0002 | 0 | 0 | 0 | 1 | 1 | −1.0000
G04 | 5 | Quadratic | 26.9089 | 0 | 6 | 0 | 0 | 2 | −30,665.5390
G05 | 4 | Non-linear | 0.0000 | 2 | 0 | 0 | 3 | 3 | 5126.4970
G06 | 2 | Non-linear | 0.0065 | 0 | 2 | 0 | 0 | 2 | −6961.8140
G07 | 10 | Quadratic | 0.0010 | 3 | 5 | 0 | 0 | 6 | 24.3060
G08 | 2 | Non-linear | 0.8488 | 0 | 2 | 0 | 0 | 0 | −0.0958
G09 | 7 | Non-linear | 0.5319 | 0 | 4 | 0 | 0 | 2 | 680.6300
G10 | 8 | Linear | 0.0005 | 3 | 3 | 0 | 0 | 6 | 7049.2480
G11 | 2 | Quadratic | 0.0099 | 0 | 0 | 0 | 1 | 1 | 0.7499
G12 | 3 | Quadratic | 4.7452 | 0 | 9 | 0 | 0 | 0 | −1.0000
G13 | 5 | Non-linear | 0.0000 | 0 | 0 | 1 | 2 | 3 | 0.0539
G14 | 10 | Non-linear | 0.0000 | 0 | 0 | 3 | 0 | 3 | −47.7650
G15 | 3 | Quadratic | 0.0000 | 0 | 0 | 1 | 1 | 2 | 961.7150
G16 | 5 | Non-linear | 0.0204 | 4 | 34 | 0 | 0 | 4 | −1.9050
G17 | 6 | Non-linear | 0.0000 | 0 | 0 | 0 | 4 | 4 | 8853.5397
G18 | 9 | Quadratic | 0.0000 | 0 | 13 | 0 | 0 | 6 | −0.8660
G19 | 15 | Non-linear | 33.4761 | 0 | 5 | 0 | 0 | 0 | 32.6560
G20 | 24 | Linear | 0.0000 | 0 | 6 | 2 | 12 | 16 | 0.0205
G21 | 7 | Linear | 0.0000 | 0 | 1 | 0 | 5 | 6 | 193.7250
G22 | 22 | Linear | 0.0000 | 0 | 1 | 8 | 11 | 19 | 236.4310
G23 | 9 | Linear | 0.0000 | 0 | 2 | 3 | 1 | 6 | −400.0050
G24 | 2 | Linear | 79.6556 | 0 | 2 | 0 | 0 | 2 | −5.5080
Table 2. Experimental parameters for SRIFA.
Parameters | Value | Significance
Size of population (NP) | 50 | Gandomi [44] suggested that 50 fireflies are adequate to perform experiments for any application. If we increase the population size, the computational time of the proposed algorithm increases.
Initial randomization value ( α 0 ) | 0.5 | In the literature, many authors suggest that the randomness parameter must lie in the range (0, 1). In our experiments, we used a value of 0.5.
Initial attractiveness value ( β 0 ) | 0.2 | The attractiveness parameter for our experiments is 0.2.
Absorption coefficient ( γ ) | 4 | The absorption value is crucial in our experiments: it determines the convergence speed of the algorithm. In most applications, γ lies in the range (0.001, 100).
Number of iterations or generations (G) | 4800 | Total number of iterations.
Total number of function evaluations (NFEs) | 240,000 | The total number of objective-function evaluations (50 × 4800 = 240,000).
Constrained-handling values | — | Initial tolerance value: 0.5 (for equality); final tolerance: 1 × 10⁻⁴ (for equality).
Probability P f | 0.45 | Used to rank solutions; P f governs comparison by the fitness (objective) function in infeasible areas of the search space.
Varphi ( ϕ ) | 1 | Sum of constraint violations.
Table 3. Control parameter of the GKLS generator.
N | Class | r | ρ | δ
2 | Simple | 0.9 | 0.2 | 10⁻⁴
2 | Hard | 0.9 | 0.1 | 10⁻⁴
3 | Simple | 0.66 | 0.2 | 10⁻⁵
3 | Hard | 0.9 | 0.2 | 10⁻⁵
4 | Simple | 0.66 | 0.2 | 10⁻⁶
4 | Hard | 0.9 | 0.2 | 10⁻⁶
5 | Simple | 0.66 | 0.3 | 10⁻⁷
5 | Hard | 0.9 | 0.2 | 10⁻⁷
Table 4. Statistical results obtained by deterministic and metaheuristic algorithms using GKLS generator.
N | Class | DIRECT | DIRECT-L | ADC | DE | PSO | FA | SRIFA
(DIRECT, DIRECT-L and ADC are deterministic algorithms, 100 runs for each algorithm and class; DE, PSO, FA and SRIFA are metaheuristic algorithms, 10,000 runs for each algorithm and class.)
2 | Simple | 198.9 | 292.8 | 176.3 | >52,910.38 (511) | >110,102.74 (1046) | 1190.3 | 1008
2 | Hard | 1063.8 | 1267.1 | 675.7 | >357,467.49 (3556) | >247,232.35 (2282) | >4299.6 (3) | >3457.6 (3)
3 | Simple | 1117.7 | 1785.7 | 735.8 | >165,125.02 (1515) | >170,320.10 (1489) | 15,269.2 | 14,987
3 | Hard | >42,322.7 (4) | 4858.9 | 2006.8 | >476,251.20 (4603) | >285,499.04 (2501) | >21,986.3 (1) | 20,989
4 | Simple | >47,282.9 (4) | 18,983.6 | 5014.1 | >462,401.52 (4546) | >303,436.36 (2785) | 23,166.7 | 22,752.4
4 | Hard | >95,708.3 (7) | 68,754 | 16,473 | >773,481.03 (7676) | >456,996.08 (4157) | 40,380.7 | 38,123.2
5 | Simple | >16,057.5 (1) | 16,758.4 | 5129.9 | >294,839.01 (2815) | >181,805.17 (1561) | >47,203.1 (16) | >45,892.8 (15)
5 | Hard | >217,215.6 (16) | >269,064.4 (4) | 30,471.8 | >751,930.00 (7473) | >250,462.63 (2109) | >79,555.2 (38) | >76,564 (34)
Table 5. Statistical results obtained by SRIFA and FA on 24 benchmark functions over 25 runs.
Algo. | Function | Global Opt | Best | Worst | Mean | SD
FA | G01 | −15.000 | −14.420072 | −11.281250 | −13.840104 | 1.16 × 10⁰
SRIFA | G01 | | −15.000 | −15.000 | −15.000 | 7.86 × 10⁻¹³
FA | G02 | −0.8036191 | −0.8036191 | −0.5205742 | −0.7458475 | 6.49 × 10⁻²
SRIFA | G02 | | −0.8036191 | −0.800909 | −0.80251 | 8.95 × 10⁻⁴
FA | G03 | −1.000 | −1.0005 | −1.0005 | −1.0005 | 9.80 × 10⁻⁷
SRIFA | G03 | | −1.0005 | −1.0005 | −1.0005 | 6.54 × 10⁻⁶
FA | G04 | −30,665.539 | −30,665.539 | −30,665.539 | −30,665.539 | 2.37 × 10⁻⁹
SRIFA | G04 | | −30,665.539 | −30,665.54 | −30,665.54 | 6.74 × 10⁻¹¹
FA | G05 | 5126.49671 | 5126.49671 | 5144.3028 | 5233.2377 | 2.92 × 10¹
SRIFA | G05 | | 5126.4967 | 5126.4967 | 5126.4967 | 1.94 × 10⁻⁹
FA | G06 | −6961.8138 | −6961.81388 | −6961.81388 | −6961.81388 | 1.76 × 10⁻⁷
SRIFA | G06 | | −6961.814 | −6961.814 | −6961.814 | 4.26 × 10⁻⁸
FA | G07 | 24.306 | 24.306283 | 24.310614 | 24.32652 | 3.80 × 10⁻³
SRIFA | G07 | | 24.306 | 24.306 | 24.306 | 2.65 × 10⁻⁸
FA | G08 | −0.09582 | −0.09582504 | −0.09582504 | −0.09582504 | 1.83 × 10⁻¹⁷
SRIFA | G08 | | −0.09582 | −0.09582 | −0.09582 | 5.40 × 10⁻²⁰
FA | G09 | 680.63 | 680.630058 | 680.630063 | 680.630082 | 7.11 × 10⁻⁶
SRIFA | G09 | | 680.6334 | 680.6334 | 680.6334 | 5.64 × 10⁻⁷
FA | G10 | 7049.248 | 7071.757586 | 7181.02714 | 7111.54937 | 3.00 × 10¹
SRIFA | G10 | | 7049.2484 | 7049.2484 | 7049.2484 | 5.48 × 10⁻⁴
FA | G11 | 0.7499 | 0.7499 | 0.7499 | 0.7499 | 5.64 × 10⁻⁹
SRIFA | G11 | | 0.7499 | 0.7499 | 0.7499 | 8.76 × 10⁻¹⁵
FA | G12 | −1.000 | −1.000 | −1.000 | −1.000 | 5.00 × 10⁻²
SRIFA | G12 | | −1.000 | −1.000 | −1.000 | 6.00 × 10⁻³
FA | G13 | 0.053942 | 0.054 | 0.439 | 0.131 | 1.54 × 10⁻¹
SRIFA | G13 | | 0.053943 | 0.053943 | 0.053943 | 0.00 × 10⁰
FA | G14 | −47.765 | −47.764879 | −47.764563 | −47.762878 | 3.82 × 10⁻⁴
SRIFA | G14 | | −47.7658 | −47.7658 | −47.7658 | 5.68 × 10⁻⁶
FA | G15 | 961.715 | 961.715 | 961.715 | 961.715 | 8.67 × 10⁻⁹
SRIFA | G15 | | 961.7155 | 961.7155 | 961.7155 | 6.34 × 10⁻¹¹
FA | G16 | −1.9050 | −1.90515 | −1.90386 | −1.90239 | 8.76 × 10⁻⁵
SRIFA | G16 | | −1.9050 | −1.9050 | −1.9050 | 2.55 × 10⁻¹⁰
FA | G17 | 8853.5397 | 8853.5339 | 8900.0831 | 9131.5849 | 5.52 × 10¹
SRIFA | G17 | | 8853.5339 | 8853.5339 | 8853.5339 | 5.80 × 10⁻³
FA | G18 | −0.8660 | −0.8660 | −0.8660 | −0.8660 | 7.60 × 10⁻⁵
SRIFA | G18 | | −0.8660 | −0.8660 | −0.8660 | 6.54 × 10⁻¹⁰
FA | G19 | 32.6560 | 32.7789 | 34.6224 | 38.3827 | 1.65 × 10⁰
SRIFA | G19 | | 32.6560 | 32.6560 | 32.6560 | 2.22 × 10⁻⁶
FA | G20 | 30.0967 | N-F | N-F | N-F | N-F
SRIFA | G20 | | N-F | N-F | N-F | N-F
FA | G21 | 193.7250 | 193.7245 | 683.1906 | 350.9696 | 5.41 × 10²
SRIFA | G21 | | 193.7240 | 193.7240 | 193.7240 | 4.26 × 10⁻⁴
FA | G22 | 236.4310 | N-F | N-F | N-F | N-F
SRIFA | G22 | | N-F | N-F | N-F | N-F
FA | G23 | −400.0050 | −347.917268 | −347.9345669 | −347.923470 | 7.54 × 10⁻³
SRIFA | G23 | | −400.0050 | −400.0052 | −400.0050 | 5.65 × 10⁻⁴
FA | G24 | −5.5080 | −5.5081 | −5.5080 | −5.5080 | 1.11 × 10⁻⁵
SRIFA | G24 | | −5.5081 | −5.5080 | −5.5080 | 1.21 × 10⁻¹³
Table 6. Statistical outcomes achieved by SRPSO, SAMO-PSO, ECHT-EP2, UFA, NDE AND SRIFA.
Fun | Features | SRPSO | SAMO-PSO | ECHT-EP2 | UFA | NDE | SRIFA
G01 | Best | −15.00 | −15.00 | −15.000 | −15.000 | −15.000 | −15.000
G01 | Mean | −15.00 | −15.00 | −15.000 | −15.000 | −15.000 | −15.000
G01 | Worst | −15.00 | N-A | −15.000001 | −15.000001 | −15.000001 | −15.000
G01 | SD | 5.27 × 10⁻¹² | 0.00 × 10⁰ | 0.00 × 10⁰ | 8.95 × 10⁻¹⁰ | 0.00 × 10⁰ | 7.86 × 10⁻¹³
G02 | Best | −0.80346805 | −0.8036191 | −0.8036191 | −0.8036191 | −0.803480 | −0.8036191
G02 | Mean | −0.788615 | −0.79606 | −0.7998220 | −0.7961871 | −0.801809 | −0.80251
G02 | Worst | −0.7572932 | N-A | −0.7851820 | −0.7851820 | −0.800495 | −0.800909
G02 | SD | 1.31 × 10⁻³ | 5.3420 × 10⁻³ | 6.29 × 10⁻³ | 7.48 × 10⁻³ | 5.10 × 10⁻⁴ | 8.95 × 10⁻⁴
G03 | Best | −0.9997 | −1.0005 | −1.0005 | −1.0005 | −1.0005001 | −1.0005
G03 | Mean | −0.9985 | −1.0005001 | −1.0005 | −1.0005 | −1.0005001 | −1.0005
G03 | Worst | −0.996532 | N-A | −1.0005 | −1.0005 | −1.0005001 | −1.0005
G03 | SD | 8.18 × 10⁻⁵ | 0.02 × 10⁰ | 0.02 × 10⁰ | 1.75 × 10⁻⁶ | 0.00 × 10⁰ | 6.54 × 10⁻⁶
G04 | Best | −30,665.538 | −30,665.539 | −30,665.53867 | −30,665.539 | −30,665.539 | −30,665.539
G04 | Mean | −30,665.5386 | −30,665.539 | −30,665.53867 | −30,665.539 | −30,665.539 | −30,665.539
G04 | Worst | −30,665.536 | N-A | −30,665.538 | −30,665.539 | −30,665.539 | −30,665.539
G04 | SD | 4.05 × 10⁻⁵ | 0.00 × 10⁰ | 0.00 × 10⁰ | 6.11 × 10⁻⁹ | 0.00 × 10⁰ | 6.74 × 10⁻¹¹
G05 | Best | 5126.4985 | 5126.4967 | 5126.4967 | 5126.4967 | 5126.4967 | 5126.4967
G05 | Mean | 5129.9010 | 5126.496 | 5126.496 | 5126.496 | 5126.496 | 5126.496
G05 | Worst | 5145.93 | N-A | 5126.496 | 5126.496 | 5126.496 | 5126.496
G05 | SD | 5.11 | 1.3169 × 10⁻¹⁰ | 0.00 × 10⁰ | 1.11 × 10⁻⁸ | 0.00 × 10⁰ | 1.94 × 10⁻⁹
G06 | Best | −6961.8139 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138
G06 | Mean | −6916.1370 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138
G06 | Worst | −6323.3140 | N-A | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138
G06 | SD | 138.331 | 0.00 × 10⁰ | 0.00 × 10⁰ | 3.87 × 10⁻⁸ | 0.00 × 10⁰ | 4.26 × 10⁻⁸
G07 | Best | 24.312803 | 24.306209 | 24.3062 | 24.306209 | 24.306209 | 24.3062
G07 | Mean | 24.38 | 24.306209 | 24.3063 | 24.306209 | 24.306209 | 24.306
G07 | Worst | 24.885038 | N-A | 24.3063 | 24.306209 | 24.306209 | 24.306
G07 | SD | 1.13 × 10⁻² | 1.9289 × 10⁻⁸ | 3.19 × 10⁻⁵ | 1.97 × 10⁻⁹ | 1.35 × 10⁻¹⁴ | 2.65 × 10⁻⁸
G08 | Best | −0.09582 | −0.095825 | −0.09582504 | −0.09582504 | −0.095825 | −0.09582
G08 | Mean | −0.095823 | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.09582
G08 | Worst | −0.095825 | N-A | −0.09582504 | −0.09582504 | −0.095825 | −0.09582
G08 | SD | 2.80 × 10⁻¹¹ | 0.00 × 10⁰ | 0.00 × 10⁰ | 1.70 × 10⁻¹⁷ | 0.00 × 10⁰ | 5.40 × 10⁻²⁰
G09 | Best | 680.63004 | 680.630057 | 680.630057 | 680.630057 | 680.630057 | 680.6334
G09 | Mean | 680.66052 | 680.6300 | 680.6300 | 680.6300 | 680.6300 | 680.6300
G09 | Worst | 680.766 | N-A | 680.6300 | 680.6300 | 680.6300 | 680.6300
G09 | SD | 3.33 × 10⁻³ | 0.00 × 10⁰ | 2.61 × 10⁻⁸ | 5.84 × 10⁻¹⁰ | 0.00 × 10⁰ | 5.64 × 10⁻⁷
G10 | Best | 7076.397 | 7049.24802 | 7049.2483 | 7049.24802 | 7049.24802 | 7049.2484
G10 | Mean | 7340.6964 | 7049.2480 | 7049.249 | 7049.2480 | 7049.2480 | 7049.2480
G10 | Worst | 8075.92 | N-A | 7049.2501 | 7049.24802 | 7049.24802 | 7049.2484
G10 | SD | 255.37 | 1.5064 × 10⁻⁵ | 6.60 × 10⁻⁴ | 2.26 × 10⁻⁷ | 3.41 × 10⁻⁹ | 5.48 × 10⁻⁴
G11 | Best | 0.75 | 0.749999 | 0.749999 | 0.7499 | 0.749999 | 0.7499
G11 | Mean | 0.75 | 0.749999 | 0.749999 | 0.7499 | 0.749999 | 0.7499
G11 | Worst | 0.75 | N-A | 0.749999 | 0.7499 | 0.749999 | 0.7499
G11 | SD | 9.44 × 10⁻⁵ | 0.00 × 10⁰ | 3.40 × 10⁻¹⁶ | 9.26 × 10⁻¹⁶ | 0.00 × 10⁰ | 8.76 × 10⁻¹⁵
G12 | Best | −1 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000
G12 | Mean | −1 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000
G12 | Worst | −1 | N-A | −1.000 | −1.000 | −1.000 | −1.000
G12 | SD | 2.62 × 10⁻¹¹ | 0.00 × 10⁰ | 0.00 × 10⁰ | 0.00 × 10⁰ | 0.00 × 10⁰ | 6.00 × 10⁻³
G13 | Best | N-A | 0.053941 | 0.053941 | 0.053941 | 0.053941 | 0.053941
G13 | Mean | N-A | 0.0539415 | 0.0539415 | 0.0539415 | 0.0539415 | 0.053943
G13 | Worst | N-A | N-A | 0.0539415 | 0.0539415 | 0.0539415 | 0.053943
G13 | SD | N-A | 0.00 × 10⁰ | 1.30 × 10⁻¹² | 1.43 × 10⁻¹² | 0.00 × 10⁰ | 0.00 × 10⁰
G14 | Best | N-A | −47.7648 | −47.7649 | −47.76489 | −47.7648 | −47.7658
G14 | Mean | N-A | −47.7648 | −47.7648 | −47.76489 | −47.7648 | −47.7658
G14 | Worst | N-A | N-A | N-A | −47.76489 | −47.7648 | −47.7658
G14 | SD | N-A | 4.043 × 10⁻² | N-A | 2.34 × 10⁻⁶ | 5.14 × 10⁻¹⁵ | 5.68 × 10⁻⁶
G15 | Best | 961.7151 | 961.7150 | 961.7150 | 961.7150 | 961.7150 | 961.7150
G15 | Mean | 961.7207 | 961.7150 | 961.7150 | 961.7150 | 961.7150 | 961.7150
G15 | Worst | 961.7712 | N-A | N-A | 961.7150 | 961.7150 | 961.7155
G15 | SD | 1.12 × 10⁻² | 0.00 × 10⁰ | N-A | 1.46 × 10⁻¹¹ | 0.00 × 10⁰ | 6.34 × 10⁻¹¹
G16 | Best | −1.9051 | −1.9051 | −1.9051 | −1.9051 | −1.9051 | −1.9050
G16 | Mean | −1.9050 | −1.9051 | −1.9050 | −1.9050 | −1.9050 | −1.9050
G16 | Worst | −1.9051 | N-A | N-A | −1.9051 | −1.9051 | −1.9050
G16 | SD | 1.12 × 10⁻¹¹ | 1.15 × 10⁻⁵ | N-A | 1.58 × 10⁻¹¹ | 0.00 × 10⁰ | 2.55 × 10⁻¹⁰
G17 | Best | N-A | 8853.5338 | 8853.5397 | 8853.5338 | 8853.5338 | 8853.5338
G17 | Mean | N-A | 8853.5338 | 8853.8871 | 8853.5338 | 8853.5338 | 8853.5338
G17 | Worst | N-A | N-A | N-A | 8853.5338 | 8853.5338 | 8853.5338
G17 | SD | N-A | 0.00 × 10⁰ | N-A | 2.18 × 10⁻⁸ | 0.00 × 10⁰ | 5.80 × 10⁻³
G18 | Best | N-A | −0.8660 | −0.8660 | −0.8660 | −0.8660 | −0.8660
G18 | Mean | N-A | −0.8660 | −0.8660 | −0.8660 | −0.8660 | −0.8660
G18 | Worst | N-A | N-A | N-A | −0.8660 | −0.8660 | −0.8660
G18 | SD | N-A | 7.0436 × 10⁻⁷ | N-A | 3.39 × 10⁻¹⁰ | 0.00 × 10⁰ | 6.54 × 10⁻¹⁰
G19 | Best | N-A | 32.6555 | 32.6555 | 32.6555 | 32.6555 | 32.6560
G19 | Mean | N-A | 32.6556 | 36.4274 | 32.6555 | 32.6556 | 32.6560
G19 | Worst | N-A | N-A | N-A | 32.6555 | 32.6557 | 32.6560
G19 | SD | N-A | 6.145 × 10⁻² | N-A | 1.37 × 10⁻⁸ | 3.73 × 10⁻⁵ | 2.22 × 10⁻⁶
G20 | Best | N-A | N-A | N-A | N-A | N-A | N-A
G20 | Mean | N-A | N-A | N-A | N-A | N-A | N-A
G20 | Worst | N-A | N-A | N-A | N-A | N-A | N-A
G20 | SD | N-A | N-A | N-A | N-A | N-A | N-A
G21 | Best | N-A | 193.7255 | 193.7251 | 266.5 | 193.72451 | 193.7250
G21 | Mean | N-A | 193.7251 | 246.0915 | 255.5590 | 193.7251 | 193.7250
G21 | Worst | N-A | N-A | N-A | 520.1656 | 193.724 | 193.7260
G21 | SD | N-A | 1.9643 × 10⁻² | N-A | 9.13 × 10¹ | 6.26 × 10⁻¹ | 4.26 × 10⁻⁴
G22 | Best | N-A | N-A | N-A | N-A | N-A | N-A
G22 | Mean | N-A | N-A | N-A | N-A | N-A | N-A
G22 | Worst | N-A | N-A | N-A | N-A | N-A | N-A
G22 | SD | N-A | N-A | N-A | N-A | N-A | N-A
G23 | Best | N-A | −400.0551 | −355.661 | −400.0551 | −400.0551 | −400.005
G23 | Mean | N-A | −400.0551 | −194.7603 | −400.0551 | −400.0551 | −400.0050
G23 | Worst | N-A | N-A | N-A | −400.0551 | −400.0551 | −400.0052
G23 | SD | N-A | 1.96 × 10⁻¹ | N-A | 5.08 × 10⁻⁸ | 3.45 × 10⁻⁹ | 5.65 × 10⁻⁴
G24 | Best | −5.5080 | −5.5080 | −5.5080 | −5.5080 | −5.5080 | −5.5080
G24 | Mean | −5.5080 | −5.5080 | −5.5080 | −5.5080 | −5.5080 | −5.5080
G24 | Worst | −5.5080 | N-A | N-A | −5.5080 | −5.5080 | −5.5080
G24 | SD | 2.69 × 10⁻¹¹ | 0.00 × 10⁰ | N-A | 5.37 × 10⁻¹³ | 0.00 × 10⁰ | 1.21 × 10⁻¹³
Table 7. Results obtained by a Wilcoxon test for SRIFA against SRPSO, SAMO-PSO, ECHT-EP2, UFA and NDE.
| Algorithms | R+ | R− | p-value | Best | Equal | Worst | Decision |
|---|---|---|---|---|---|---|---|
| SRIFA versus SRPSO | 176 | 3 | 0.465 | 17 | 3 | 2 | + |
| SRIFA versus SAMO-PSO | 167 | 38 | 0.363 | 14 | 7 | 1 | + |
| SRIFA versus ECHT-EP2 | 142 | 17 | 0.002 | 15 | 3 | 4 | + |
| SRIFA versus UFA | 45 | 19 | 0.016 | 10 | 8 | 4 | |
| SRIFA versus NDE | 67 | 25 | 0.691 | 14 | 4 | 6 | |
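The R+ and R− columns in Table 7 are Wilcoxon signed-rank sums: SRIFA's result on each benchmark function is paired with the competitor's, the absolute differences are ranked, and the ranks of positive and negative differences are summed separately. A minimal pure-Python sketch of that rank-sum step (the sample data below are illustrative, not values from the paper):

```python
def wilcoxon_rank_sums(a, b):
    """Return (R+, R-) signed-rank sums for paired samples a and b.

    Zero differences are dropped; tied absolute differences share the
    average of the ranks they occupy, as in the standard Wilcoxon test.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 2) / 2.0  # positions i..j hold 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus

# Illustrative example: four paired results, one zero difference dropped
print(wilcoxon_rank_sums([3, 5, 2, 6], [1, 5, 4, 2]))  # -> (4.5, 1.5)
```

The p-values in the table then come from the null distribution of the smaller rank sum; in practice a library routine such as SciPy's `scipy.stats.wilcoxon` performs the full test.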
Table 8. Results obtained by the Friedman test for all metaheuristic algorithms.
| Functions | SRPSO | SAMO-PSO | ECHT-EP2 | UFA | NDE | SRIFA |
|---|---|---|---|---|---|---|
| G01 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 |
| G02 | 6 | 7 | 5 | 4.5 | 3 | 3 |
| G03 | 4 | 4.5 | 6 | 6 | 4 | 4 |
| G04 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 |
| G05 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 3 |
| G06 | 4.5 | 6 | 6 | 4.5 | 4.5 | 4.5 |
| G07 | 3.5 | 4.5 | 3.5 | 3.5 | 3.5 | 3.5 |
| G08 | 4.5 | 7 | 4.5 | 4.5 | 4.5 | 4.5 |
| G09 | 6 | 4.5 | 6 | 6 | 4.5 | 4.5 |
| G10 | 6 | 4.5 | 4.5 | 4.5 | 3 | 3 |
| G11 | 4.5 | 7 | 3.5 | 4.5 | 4.5 | 3 |
| G12 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 |
| G13 | 4 | 4.5 | 4 | 4 | 4 | 4 |
| G14 | 3.5 | 4 | 3.5 | 3.5 | 3.5 | 3.5 |
| G15 | 3 | 7 | 4.5 | 4.5 | 4.5 | 4.5 |
| G16 | 4.5 | 8 | 4.5 | 4.5 | 4.5 | 4.5 |
| G17 | 4.5 | 4.5 | 4.5 | 2 | 2 | 2 |
| G18 | 4.5 | 3.5 | 3 | 3.5 | 3 | 3 |
| G19 | 5.5 | 8 | 4 | 5 | 5 | 4 |
| G20 | N-A | N-A | N-A | N-A | N-A | N-A |
| G21 | 6 | 6 | 4.5 | 2.5 | 2.5 | 2.5 |
| G22 | N-A | N-A | N-A | N-A | N-A | N-A |
| G23 | 8 | 7 | 6 | 3.5 | 1.5 | 1.5 |
| G24 | 6 | 8 | 4.5 | 4.5 | 4.5 | 3.5 |
| Average rank | 4.8409091 | 5.613636364 | 4.5454545 | 4.25 | 3.8409091 | 3.6136 |
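Each row of Table 8 ranks the algorithms on one test function (tied results share the average of the ranks they span), and the final row averages each column. A small self-contained sketch of that ranking-and-averaging step, using made-up scores where lower is better:

```python
def rank_with_ties(row):
    """Rank one row of scores (lower is better); tied scores share the average rank."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of equal scores
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg_rank = (i + j + 2) / 2.0  # average of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def friedman_average_ranks(score_table):
    """score_table[f][a] = score of algorithm a on function f.
    Returns the average rank of each algorithm across all functions."""
    per_function = [rank_with_ties(row) for row in score_table]
    n = len(score_table)
    return [sum(col) / n for col in zip(*per_function)]

# Two functions, three algorithms (illustrative numbers only)
scores = [[1.0, 2.0, 2.0],
          [3.0, 1.0, 2.0]]
print(friedman_average_ranks(scores))  # -> [2.0, 1.75, 2.25]
```

The Friedman test statistic itself is computed from these average ranks; a library routine such as SciPy's `scipy.stats.friedmanchisquare` returns the statistic and p-value directly.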
Table 9. Best outcomes of parameter, objective and constraint values obtained by SRIFA for the five engineering problems.
| Parameter | Tension/Compression | Welded-Beam | Pressure-Vessel | Three-Bar Truss | Speed-Reducer |
|---|---|---|---|---|---|
| x1 | 0.0516776638592 | 0.205729638946844 | 0.8125 | 0.788675145296995 | 3.50000000002504 |
| x2 | 0.3567324816961 | 3.47048866663245 | 0.4375 | 0.40824826019360 | 0.70000000000023 |
| x3 | 11.2881015418157 | 9.03662391025916 | 42.0984455958043 | - | 17 |
| x4 | - | 0.205729639792844 | 176.63659584313 | - | 7.30000000000014 |
| x5 | - | - | - | - | 7.71531991152672 |
| x6 | - | - | - | - | 3.35021466610421 |
| x7 | - | - | - | - | 5.28665446498064 |
| F(x) | 0.012665232805563 | 1.72485231254328 | 6059.714335 | 263.895843376515 | 2994.47106614799 |
| G1(x) | 0 | −0.000063371885873 | −0.0000000000000873 | −0.070525402833398 | −0.073915280394101 |
| G2(x) | −1.216754326628263 | −0.000002714066983 | −0.00035880820872 | −1.467936135628140 | −0.197998527141053 |
| G3(x) | −4.0521785529112 | −0.000000000839532 | −0.000000016701007 | −0.602589267205258 | −0.499172248101033 |
| G4(x) | −0.727728835000534 | −3.432983781912125 | −0.633634041562312 | - | −0.904643904554311 |
| G5(x) | - | −0.080729638942761 | - | - | −0.000000000000654 |
| G6(x) | - | −0.235540322583421 | - | - | −0.000000000000212 |
| G7(x) | - | −0.000000209274321 | - | - | −0.702499999999991 |
| G8(x) | - | - | - | - | −0.000000000000209 |
| G9(x) | - | - | - | - | −0.795833333333279 |
| G10(x) | - | - | - | - | −0.051325753542591 |
| G11(x) | - | - | - | - | −0.000000000001243 |
Table 10. Statistical outcomes achieved by SRIFA for all five engineering problems over 25 independent runs.
| Problems | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| Tension/Compression | 0.0126652328 | 0.0126652329 | 0.0126652333 | 6.54 × 10^−10 | 2000 |
| Welded-beam | 1.7248523087 | 1.7248523087 | 1.7248523089 | 8.940 × 10^−12 | 2000 |
| Pressure-vessel | 6059.7143350561 | 6059.7143351 | 6059.7143352069 | 6.87 × 10^−8 | 2000 |
| Three-bar truss | 263.8958433765 | 263.8958433768 | 263.8958433770 | 6.21 × 10^−11 | 1500 |
| Speed-reducer | 2996.348165 | 2996.348165 | 2996.348165 | 8.95 × 10^−12 | 3000 |
Table 11. Statistical results of comparison between SRIFA and NIAs for the tension/compression spring design problem.
| Algorithm | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| SRPSO | 0.012668 | 0.012678 | 0.012685 | 7.05 × 10^−6 | 20,000 |
| MVDE | 0.012665272 | 0.012667324 | 0.012719055 | 2.45 × 10^−6 | 10,000 |
| BA | 0.01266522 | 0.01350052 | 0.0168954 | 3.09 × 10^−6 | 24,000 |
| MBA | 0.012665 | 0.012713 | 0.0129 | 6.30 × 10^−5 | 7650 |
| Jaya | 0.012665 | 0.012666 | 0.012679 | 4.90 × 10^−4 | 10,000 |
| PVS | 0.01267 | 0.012838 | 0.013141 | N-A | 10,000 |
| UABC | 0.012665 | 0.012683 | N-A | 3.31 × 10^−5 | 15,000 |
| IPSO | 0.01266523 | 0.013676527 | 0.01782864 | 1.57 × 10^−3 | 4000 |
| AFA | 0.012665305 | 0.126770446 | 0.000128058 | 0.012711688 | 50,000 |
| SRIFA | 0.0126652328 | 0.0126652329 | 0.0126652333 | 6.54 × 10^−10 | 2000 |
Table 12. Statistical results of comparison between SRIFA and NIAs for the welded-beam problem.
| Algorithm | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| SRPSO | 1.72486658 | 1.72489934 | 1.72542212 | 1.12 × 10^−6 | 20,000 |
| MVDE | 1.7248527 | 1.7248621 | 1.7249215 | 7.88 × 10^−6 | 15,000 |
| BA | 1.7312065 | 1.878656 | 2.3455793 | 0.2677989 | 50,000 |
| MBA | 1.724853 | 1.724853 | 1.724853 | 6.94 × 10^−19 | 47,340 |
| JAYA | 1.724852 | 1.724852 | 1.724853 | 3.30 × 10^−2 | 10,000 |
| MFA | 1.7249 | 1.7277 | 1.7327 | 2.40 × 10^−3 | 50,000 |
| FA | 1.7312065 | 1.878665 | 2.3455793 | 2.68 × 10^−1 | 50,000 |
| IPSO | 1.7248624 | 1.7248528 | 1.7248523 | 2.02 × 10^−6 | 12,500 |
| AFA | 1.724853 | 1.724853 | 1.724853 | 0.00 × 10^0 | 50,000 |
| SRIFA | 1.7248523087 | 1.7248523087 | 1.7248523089 | 8.940 × 10^−12 | 2000 |
Table 13. Statistical results of comparison between SRIFA and NIAs for the pressure-vessel problem.
| Algorithm | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| SRPSO | 6086.20 | 6042.84 | 6315.01 | 8.04 × 10^1 | 20,000 |
| MVDE | 6059.714387 | 6059.997236 | 6090.533528 | 2.91 × 10^0 | 15,000 |
| BA | 6059.71 | 6179.13 | 6318.95 | 1.37 × 10^2 | 15,000 |
| EBA | 6059.71 | 6173.67 | 6370.77 | 1.42 × 10^2 | 15,000 |
| FA | 5890.383 | 5937.3379 | 6258.96825 | 1.65 × 10^2 | 25,000 |
| PVS | 6059.714 | 6063.643 | 6090.526 | N-A | 42,100 |
| UABC | 6059.714335 | 6192.116211 | N-A | 2.04 × 10^2 | 15,000 |
| IPSO | 6059.7143 | 6068.7539 | 6090.5314 | 1.40 × 10^1 | 7500 |
| AFA | 6059.71427196 | 6090.52614259 | 6064.33605261 | 1.13 × 10^1 | 50,000 |
| SRIFA | 6059.7143350561 | 6059.7143351 | 6059.7143352069 | 6.87 × 10^−8 | 2000 |
Table 14. Statistical results of comparison between SRIFA and NIAs for the three-bar truss problem.
| Algorithm | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| SRPSO | 263.8958440 | 263.8977800 | 263.9079550 | 3.02 × 10^−5 | 20,000 |
| MVDE | 263.8958434 | 263.8958434 | 263.8958548 | 2.55 × 10^−6 | 7000 |
| NDE | 263.8958434 | 263.8958434 | 263.8958434 | 0.00 × 10^0 | 4000 |
| MAL-FA | 263.895843 | 263.896101 | 263.895847 | 9.70 × 10^−7 | 4000 |
| UABC | 263.895843 | 263.895843 | N-A | 0.00 × 10^0 | 12,000 |
| WCA | 263.895843 | 263.896201 | 263.895903 | 8.71 × 10^−5 | 5250 |
| UFA | 263.8958433765 | 263.8958433768 | 263.8958433770 | 1.92 × 10^−10 | 4500 |
| SRIFA | 263.8958433765 | 263.8958433768 | 263.8958433770 | 6.21 × 10^−11 | 1500 |
Table 15. Statistical results of comparison between SRIFA and NIAs for the speed-reducer problem.
| Algorithm | Best-Value | Mean-Value | Worst-Value | SD | NFEs |
|---|---|---|---|---|---|
| SRPSO | 2514.97 | 2700.10 | 2860.13 | 8.73 × 10^1 | 20,000 |
| MVDE | 2994.471066 | 2994.471066 | 2994.471069 | 2.819316 × 10^−7 | 30,000 |
| NDE | 2994.471066 | 2994.471066 | 2994.471066 | 4.17 × 10^−12 | 18,000 |
| MBA | 2994.482453 | 2996.769019 | 2999.652444 | 1.56 × 10^0 | 6300 |
| JAYA | 2996.348 | 2996.348 | 2996.348 | 0.00 × 10^0 | 10,000 |
| UABC | 2994.471066 | 2994.471072 | N-A | 5.98 × 10^−6 | 15,000 |
| PVS | 2994.47326 | 2994.7253 | 2994.8327 | N-A | 6000 |
| IPSO | 2994.471066 | 2994.471066 | 2994.471066 | 2.65 × 10^−9 | 5000 |
| AFA | 2996.372698 | 2996.514874 | 2996.514874 | 9.00 × 10^−2 | 50,000 |
| SRIFA | 2996.348165 | 2996.348165 | 2996.348165 | 8.95 × 10^−12 | 3000 |

Balande, U.; Shrimankar, D. SRIFA: Stochastic Ranking with Improved-Firefly-Algorithm for Constrained Optimization Engineering Design Problems. Mathematics 2019, 7, 250. https://doi.org/10.3390/math7030250