Article

Enhancing Elephant Herding Optimization with Novel Individual Updating Strategies for Large-Scale Optimization Problems

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 395; https://doi.org/10.3390/math7050395
Submission received: 16 February 2019 / Revised: 26 April 2019 / Accepted: 27 April 2019 / Published: 30 April 2019
(This article belongs to the Special Issue Evolutionary Computation)

Abstract

Inspired by the behavior of elephant herds in nature, elephant herding optimization (EHO) was recently proposed for global optimization. Like most other metaheuristic algorithms, EHO does not use the individuals from previous iterations in the later updating process. If the useful information carried by these previous individuals were fully exploited in the later optimization process, the quality of the solutions could be improved significantly. In this paper, we propose several new updating strategies for EHO, in which one, two, or three individuals are selected from the previous iterations and their useful information is incorporated into the updating process. Accordingly, the final individual at the current iteration is generated as a weighted sum of the elephant generated by the basic EHO and the selected previous elephants. The weights are determined by a random number and the fitness of the elephant individuals at the previous iterations. We incorporated each of the six individual updating strategies into the basic EHO, creating six improved variants of EHO. We benchmarked these proposed methods using sixteen test functions. Our experimental results demonstrate that the proposed improved methods significantly outperformed the basic EHO.

1. Introduction

Inspired by nature, a large variety of metaheuristic algorithms [1] have been proposed that provide optimal or near-optimal solutions to various complex large-scale problems that are difficult to solve using traditional techniques. Some of the many successful metaheuristic approaches include particle swarm optimization (PSO) [2,3], cooperative coevolution [4,5,6], seagull optimization algorithm [7], GRASP [8], clustering algorithm [9], and differential evolution (DE) [10,11], among others.
In 2015, Wang et al. [12,13] proposed a new metaheuristic algorithm, called elephant herding optimization (EHO), for finding optimal or near-optimal function values. Although EHO exhibits good performance on benchmark evaluations [12,13], like most other metaheuristic methods it does not utilize the information from previous elephant individuals to guide the current and later searches. We address this gap here, because the previous individuals can provide a variety of useful information. If such information were fully exploited and applied in the later updating process, the performance of EHO could be improved significantly, without adding unnecessary operations and fitness evaluations.
In the research presented in this paper, we extended and improved the performance of the original EHO (which we call “the basic EHO”) by fully exploiting the information in the previous elephant individuals. We designed six strategies to update the individuals. For each strategy, we first selected a certain number of elephants from the previous iterations; this selection could be made in either a fixed or a random way, with one, two, or three individuals selected. Next, we used the information from these selected previous elephant individuals to update the individuals, so that the information from the previous iterations could be fully reused. The final elephant individual at the current iteration was generated as a weighted sum of the individual generated by the basic EHO at the current iteration and the selected previous elephants. While there are many ways to determine the weights, in our current work they were determined by a random number and, when two or three previous individuals were selected, by their fitness values. Last, by combining the six individual updating strategies with EHO, we developed six improved variants of EHO. To verify our work, we benchmarked these variants using sixteen cases involving large-scale complex functions. Our experimental results showed that the proposed variants of EHO significantly outperformed the original EHO.
The remainder of this paper is organized as follows. Section 2 reviews related work on swarm intelligence. Section 3 describes the main steps of the basic EHO. In Section 4, we present the proposed individual updating strategies and their incorporation into EHO. Section 5 provides details of our experiments on sixteen large-scale functions and fourteen constrained problems. Lastly, Section 6 offers our conclusions and suggestions for future work.

2. Related Work

As EHO [12,13] is a newly proposed swarm intelligence-based algorithm, in this section some of the most representative work on swarm intelligence, including EHO, is summarized and reviewed.
Meena et al. [14] proposed an improved EHO algorithm, combined with the technique for order of preference by similarity to ideal solution (TOPSIS), to solve the multi-objective distributed energy resources (DER) accommodation problem of distribution systems. The proposed technique was successfully implemented on three small- to large-scale benchmark test distribution systems of 33, 118, and 880 buses.
When the spectral resolution of satellite imagery is increased, the higher within-class variability reduces the statistical separability between land-use/land-cover (LU/LC) classes in spectral space and tends to diminish the classification accuracy of traditional classifiers, which are mostly per-pixel and parametric in nature. Jayanth et al. [15] used EHO to address these problems. Their experimental results revealed that EHO shows an improvement of 10.7% on Arsikere Taluk and 6.63% on the NITK campus over the support vector machine.
Rashwan et al. [16] carried out a series of experiments on a standard test bench, as well as on engineering and real-world problems, in order to understand the impact of the control parameters; the main aim of their work was to propose different approaches to enhance the performance of EHO. Case studies ranging from the recent test bench problems of the Congress on Evolutionary Computation (CEC) 2016 to popular engineering problems, including the gear train, welded beam, and three-bar truss design problems, the continuous stirred tank reactor, and the fed-batch fermentor, were used to validate and test the performance of the proposed EHOs against existing techniques.
Correia et al. [17] were the first to apply a metaheuristic algorithm, namely EHO, to the energy-based source localization problem in wireless sensor networks. Through extensive simulations, the key parameters of the EHO algorithm were optimized so that they match the energy decay model between two sensor nodes. The simulation results show that the new approach significantly outperforms existing solutions in noisy environments, encouraging further improvement and testing of metaheuristic methods.
Jafari et al. [18] proposed a new hybrid algorithm based on EHO and the cultural algorithm (CA), known as the elephant herding optimization cultural (EHOC) algorithm. In this approach, the belief space defined by the cultural algorithm is used to improve the standard EHO. In EHOC, a separating operator based on the belief space is defined, which can create new local optima in the search space, so as to improve the search ability of the algorithm and to achieve an optimal exploration–exploitation balance. The CA, EHO, and EHOC algorithms were applied to eight mathematical optimization problems and four truss weight minimization problems, and the results were compared to assess the performance of the proposed algorithm. The results clearly indicate that EHOC can accelerate the convergence rate effectively and can develop better solutions than CA and EHO.
Hassanien et al. [19] combined support vector regression (SVR) with EHO in order to predict the values of three emotional scales as continuous variables. Multiple experiments were conducted to evaluate the prediction performance. EHO was applied in two stages of the optimization: first, to fine-tune the regression parameters of the SVR, and second, to select the most relevant features extracted from all 40 EEG channels and to eliminate ineffective and redundant features. The results proved the ability of EHO-SVR to achieve an enhanced performance, with a regression accuracy of 98.64%.
Besides EHO, many other swarm intelligence-based algorithms have been proposed, and some of the most representative ones are summarized and reviewed as follows.
Monarch butterfly optimization (MBO) [20], inspired by the migration behavior of monarch butterflies, was proposed for global optimization problems. Yi et al. [21] proposed a novel quantum-inspired MBO methodology, called QMBO, by incorporating quantum computation into MBO, which was further used to solve the uninhabited combat air vehicle (UCAV) path planning problem [22,23]. In addition, Feng et al. proposed various variants of MBO to solve knapsack problems [24,25,26,27,28]. Wang et al. also improved the performance of the MBO algorithm from various aspects [29,30,31,32]; a variant of the MBO method combined with two optimization strategies, namely GCMBO, was also put forward.
Inspired by the phototaxis and Lévy flights of moths, Wang developed a new metaheuristic algorithm called the moth search (MS) algorithm [33]. Feng et al. [34] divided twelve transfer functions into three families and combined them with MS, proposing twelve discrete versions of the MS algorithm for solving the set-union knapsack problem (SUKP). Based on improving the moth search algorithm (MSA) with differential evolution (DE), Elaziz et al. [35] proposed a new method for the cloud task scheduling problem. In addition, Feng and Wang [36] examined the influence of the Lévy flight operator and the straight flight operator in MS; nine new mutation operators based on global harmony search were specially devised to replace the Lévy flight operator.
Inspired by the herding behavior of krill, Gandomi and Alavi proposed the krill herd (KH) algorithm [37]. Subsequently, Wang et al. improved the KH algorithm through different optimization strategies [38,39,40,41,42,43,44,45,46]. A comprehensive review of the KH algorithm can be found in [47].
The artificial bee colony (ABC) algorithm [48] is a swarm-based metaheuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. Wang and Yi [49] presented a robust optimization algorithm, namely KHABC, based on the hybridization of the KH and ABC methods and the information exchange concept. In addition, Liu et al. [50] presented an ABC algorithm based on a dynamic penalty function and Lévy flight (DPLABC) for constrained optimization problems.
Also, many other researchers have proposed other state-of-the-art metaheuristic algorithms, such as particle swarm optimization (PSO) [51,52,53,54,55,56], cuckoo search (CS) [57,58,59,60,61], probability-based incremental learning (PBIL) [62], differential evolution (DE) [63,64,65,66], evolutionary strategy (ES) [67,68], monarch butterfly optimization (MBO) [20], firefly algorithm (FA) [69,70,71,72], earthworm optimization algorithm (EWA) [73], genetic algorithms (GAs) [74,75,76], ant colony optimization (ACO) [77,78,79], krill herd (KH) [37,80,81], invasive weed optimization [82,83,84], stud GA (SGA) [85], biogeography-based optimization (BBO) [86,87], harmony search (HS) [88,89,90], and the bat algorithm (BA) [91,92], among others.
Besides benchmark evaluations [93,94], these proposed state-of-the-art metaheuristic algorithms are also used to solve various practical engineering problems, like test-sheet composition [95], scheduling [96,97], clustering [98,99,100], cyber-physical social systems [101], economic load dispatch [102,103], fault diagnosis [104], flowshop [105], big data optimization [106,107], gesture segmentation [108], target recognition [109,110], prediction of pupylation sites [111], system identification [112], shape design [113], multi-objective optimization [114], and many-objective optimization [115,116,117].

3. Elephant Herding Optimization

The basic EHO can be described using the following simplified rules [12]:
(1)
The elephant population is divided into clans, and each clan is led by a matriarch. For the purposes of modelling, we assume that each clan consists of an equal, fixed number of elephants.
(2)
The positions of the elephants in a clan are updated based on their relationship to the matriarch. EHO models this behavior through an updating operator.
(3)
Mature male elephants leave their family groups to live alone. We assume that during each generation, a fixed number of male elephants leave their clans. Accordingly, EHO models the updating process using a separating operator.
(4)
Generally, the matriarch in each clan is the eldest female elephant. For the purposes of modelling and solving the optimization problems, the matriarch is considered the fittest elephant individual in the clan.
As this paper is focused on improving the EHO updating process, in the following subsection, we provide further details of the EHO updating operator as it was originally presented. For details regarding the EHO separating operator, see the literature [12].

3.1. Clan Updating Operator

The updating strategy of the basic EHO was described by the authors of [12] as follows. Assume that an elephant clan is denoted by ci. The next position of any elephant j in the clan is updated using Equation (1):
x_{new,ci,j} = x_{ci,j} + \alpha \times (x_{best,ci} - x_{ci,j}) \times r,        (1)
where xnew,ci,j is the updated position and xci,j is the prior position of elephant j in clan ci. xbest,ci denotes the matriarch of clan ci; she is the fittest elephant individual in the clan. The scale factor α ∈ [0, 1] determines the influence of the matriarch of ci on xci,j. r ∈ [0, 1] is a random number drawn from a stochastic distribution, which can provide a significant improvement to the diversity of the population in the later search phase. For the present work, a uniform distribution was used.
It should be noted that when xci,j = xbest,ci, the matriarch (fittest elephant) in the clan cannot be updated by Equation (1). To avoid this situation, we update the fittest elephant using the following equation:
x_{new,ci,j} = \beta \times x_{center,ci},        (2)
where the influence of xcenter,ci on xnew,ci,j is regulated by β ∈ [0, 1].
In Equation (2), the information from all of the individuals in clan ci is used to create the new individual xnew,ci,j. The center of clan ci, xcenter,ci, is calculated dimension by dimension, for d = 1, …, D, where D is the total number of dimensions, as follows:
x_{center,ci,d} = \frac{1}{n_{ci}} \times \sum_{j=1}^{n_{ci}} x_{ci,j,d}        (3)
Here, 1 ≤ dD represents the d-th dimension, nci is the number of individuals in ci, and xci,j,d is the d-th dimension of the individual xci,j.
Algorithm 1 provides the pseudocode for the updating operator.
Algorithm 1: Clan updating operator [12]
Begin
  for ci = 1 to nClan (for all clans in elephant population) do
    for j = 1 to nci (for all elephant individuals in clan ci) do
      Update xci,j and generate xnew,ci,j according to (1).
      if xci,j = xbest,ci then
        Update xci,j and generate xnew,ci,j according to (2).
      end if
    end for j
  end for ci
End.
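To make the clan updating operator concrete, the following Python sketch implements Equations (1)–(3). The data layout is an illustrative assumption rather than part of the original pseudocode: clans are stored as a list of NumPy arrays with one elephant position per row, a parallel list holds their fitness values, and lower fitness is taken to be better.

import numpy as np

def clan_updating(clans, fitness, alpha=0.5, beta=0.1):
    # Clan updating operator, Equations (1)-(3); returns the updated clans.
    new_clans = []
    for positions, fit in zip(clans, fitness):
        best_idx = int(np.argmin(fit))          # index of the matriarch x_best,ci
        best = positions[best_idx]
        center = positions.mean(axis=0)         # x_center,ci, Equation (3)
        updated = np.empty_like(positions)
        for j, x in enumerate(positions):
            if j == best_idx:
                updated[j] = beta * center      # Equation (2): move the matriarch
            else:
                r = np.random.uniform(size=x.shape)
                updated[j] = x + alpha * (best - x) * r   # Equation (1)
        new_clans.append(updated)
    return new_clans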

3.2. Separating Operator

In groups of elephants, male elephants leave their family group and live alone upon reaching puberty. This process of separation can be modeled as a separating operator when solving optimization problems. In order to further improve the search ability of the EHO method, we assume that the individual elephants with the worst fitness apply the separating operator at each generation, as shown in Equation (4):
x_{worst,ci} = x_{min} + (x_{max} - x_{min} + 1) \times rand        (4)
where xmax and xmin are the upper and lower bounds, respectively, of the position of an individual elephant, and xworst,ci is the worst individual elephant in clan ci. rand ∈ [0, 1] is a random number drawn from a stochastic distribution; in our current work, the uniform distribution in the range [0, 1] is used.
Accordingly, the separating operator can be formed, as shown in Algorithm 2.
Algorithm 2: Separating operator
Begin
  for ci = 1 to nClan (all of the clans in the elephant population) do
    Replace the worst elephant individual in clan ci using (4).
  end for ci
End.
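Under the same assumed data layout (a list of clan position arrays, a parallel list of fitness arrays, minimization), the separating operator of Equation (4) might be sketched as follows; x_min and x_max may be scalars or per-dimension arrays.

import numpy as np

def separating(clans, fitness, x_min, x_max):
    # Separating operator, Equation (4): re-initialize the worst elephant of each clan.
    for positions, fit in zip(clans, fitness):
        worst_idx = int(np.argmax(fit))                      # worst individual in clan ci
        rand = np.random.uniform(size=positions.shape[1])
        positions[worst_idx] = x_min + (x_max - x_min + 1) * rand   # Equation (4)
    return clans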

3.3. Schematic Presentation of the Basic EHO Algorithm

In EHO, as in other metaheuristic algorithms, an elitism strategy is used to protect the best elephant individuals from being ruined by the clan updating and separating operators. At the beginning of each generation, the best elephant individuals are saved, and at the end of the generation the worst individuals are replaced by the saved best ones. This elitism ensures that the later elephant population is never worse than the former one. The schematic description is summarized in Algorithm 3.
Algorithm 3: Elephant Herding Optimization (EHO) [12]
Begin
  Step 1: Initialization.
       Set the generation counter t = 1.
       Initialize the population P of NP elephant individuals randomly, with uniform distribution in the search space.
       Set the number of kept elephants nKEL, the maximum generation MaxGen, the scale factors α and β, the number of clans nClan, and the number of elephants in the ci-th clan, nci.
  Step 2: Fitness evaluation.
       Evaluate each elephant individual according to its position.
  Step 3: While t < MaxGen do the following:
       Sort all of the elephant individuals according to their fitness.
       Save the nKEL elephant individuals.
         Implement the clan updating operator as shown in Algorithm 1.
         Implement the separating operator as shown in Algorithm 2.
         Evaluate the population according to the newly updated positions.
         Replace the worst elephant individuals with the nKEL saved ones.
         Update the generation counter, t = t + 1.
  Step 4: End while
  Step 5: Output the best solution.
End.
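The following Python skeleton sketches the overall loop of Algorithm 3, reusing the clan_updating and separating sketches given above. The function name eho, the argument objective (a function to be minimized), and the choice to reuse the pre-update fitness values when selecting the worst elephant for separation are illustrative assumptions, not prescriptions of the original pseudocode.

import numpy as np

def eho(objective, dim, x_min, x_max, n_clan=5, n_ci=20,
        alpha=0.5, beta=0.1, n_kel=2, max_gen=100):
    # Step 1: initialize nClan clans of nci elephants uniformly in the search space.
    clans = [np.random.uniform(x_min, x_max, size=(n_ci, dim)) for _ in range(n_clan)]
    for t in range(1, max_gen):
        # Steps 2/3: evaluate, then save the nKEL best elephants (elitism).
        fitness = [np.apply_along_axis(objective, 1, c) for c in clans]
        pop, fit = np.vstack(clans), np.concatenate(fitness)
        elite = pop[np.argsort(fit)[:n_kel]].copy()
        # Clan updating and separating operators (Algorithms 1 and 2).
        clans = clan_updating(clans, fitness, alpha, beta)
        clans = separating(clans, fitness, x_min, x_max)   # simplification: uses pre-update fitness
        # Re-evaluate and replace the worst individuals with the saved elites.
        pop = np.vstack(clans)
        fit = np.apply_along_axis(objective, 1, pop)
        pop[np.argsort(fit)[-n_kel:]] = elite
        clans = [pop[c * n_ci:(c + 1) * n_ci] for c in range(n_clan)]
    pop = np.vstack(clans)
    return pop[np.argmin(np.apply_along_axis(objective, 1, pop))]

# Example (assumed usage): best = eho(lambda x: np.sum(x ** 2), dim=50, x_min=-100.0, x_max=100.0)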
As described before, the basic EHO algorithm does not use the best available information from the previous groups of individual elephants to guide the current and later searches. This may lead to slow convergence when solving certain complex, large-scale optimization problems. In our current work, some of the information from the previous individual elephants is reused, with the aim of improving the search ability of the basic EHO algorithm.

4. Improving EHO with Individual Updating Strategies

In this research, we propose six new versions of EHO based on individual updating strategies. In theory, k (k ≥ 1) previous elephant individuals can be selected, but as more individuals (k ≥ 4) are chosen, the calculation of the weights becomes more complex. Therefore, in this paper we investigate k ∈ {1, 2, 3}.
Suppose that x_i^t is the ith individual at iteration t, and x_i and f_i^t are its position and fitness values, respectively. Here, t is the current iteration, 1 ≤ i ≤ NP is an integer number, and NP is the population size. y_i^{t+1} is the individual generated by the basic EHO, and f_i^{t+1} is its fitness. The framework of our proposed method is given through the individuals at the (t − 2)th, (t − 1)th, tth, and (t + 1)th iterations.

4.1. Case of k = 1

The simplest case is when k = 1. The ith individual x_i^{t+1} can be generated as follows:
x_i^{t+1} = \theta y_i^{t+1} + \omega x_j^t,        (5)
where x_j^t is the position for individual j (j ∈ {1, 2, …, NP}) at iteration t, and f_j^t is its fitness. θ and ω are weighting factors satisfying θ + ω = 1. They can be given as follows:
\theta = r, \quad \omega = 1 - r        (6)
Here, r is a random number that is drawn from the uniform distribution in [0, 1]. The individual j can be determined in the following ways:
(1)
j = i;
(2)
j = r1, where r1 is an integer between 1 and NP that is selected randomly.
The individual generated by the second method has more population diversity than the individual generated the first way. We refer to these updating strategies as R1 and RR1, respectively. Their incorporation into the basic EHO results in EHOR1 and EHORR1, respectively.
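As a concrete illustration, the following sketch implements Equations (5) and (6). The function name update_k1 and the flag randomized (False for R1, True for RR1) are assumptions made for illustration; y_next is the position produced by the basic EHO for elephant i at iteration t + 1, and prev_pop holds the positions of the whole population at iteration t.

import numpy as np

def update_k1(y_next, prev_pop, i, randomized=False):
    # Strategy R1/RR1, Equations (5)-(6): blend the basic-EHO result with one previous individual.
    j = np.random.randint(len(prev_pop)) if randomized else i
    r = np.random.uniform()                       # theta = r, omega = 1 - r, Equation (6)
    return r * y_next + (1.0 - r) * prev_pop[j]   # Equation (5)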

4.2. Case of k = 2

Two individuals at two previous iterations are collected and used to generate elephant i. For this case, the ith individual x_i^{t+1} can be generated as follows:
x_i^{t+1} = \theta y_i^{t+1} + \omega_1 x_{j_1}^t + \omega_2 x_{j_2}^{t-1},        (7)
where x_{j1}^t and x_{j2}^{t-1} are the positions for individuals j1 and j2 (j1, j2 ∈ {1, 2, …, NP}) at iterations t and t − 1, and f_{j1}^t and f_{j2}^{t-1} are their fitness values, respectively. θ, ω1, and ω2 are weighting factors satisfying θ + ω1 + ω2 = 1. They can be calculated as follows:
\theta = r, \quad \omega_1 = (1 - r) \times \frac{f_{j_2}^{t-1}}{f_{j_2}^{t-1} + f_{j_1}^{t}}, \quad \omega_2 = (1 - r) \times \frac{f_{j_1}^{t}}{f_{j_2}^{t-1} + f_{j_1}^{t}}.        (8)
Here, r is a random number that is drawn from the uniform distribution in [0, 1]. Individuals j1 and j2 in (8) can be determined in several different ways, but in this paper, we focus on the following two approaches:
(1)
j1 = j2 = i;
(2)
j1 = r1, and j2 = r2, where r1 and r2 are integers between 1 and NP selected randomly.
As in the previous case, the individuals generated by the second method have more population diversity than the individuals generated the first way. We refer to these updating strategies as R2 and RR2, respectively. Their incorporation into EHO yields EHOR2 and EHORR2, respectively.
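A sketch of the k = 2 case, implementing Equations (7) and (8), is given below. The names update_k2, pop_t and fit_t (positions and fitness values at iteration t), and pop_t1 and fit_t1 (at iteration t − 1) are illustrative assumptions; the flag randomized selects R2 (False) or RR2 (True).

import numpy as np

def update_k2(y_next, pop_t, fit_t, pop_t1, fit_t1, i, randomized=False):
    # Strategy R2/RR2, Equations (7)-(8): blend with two individuals from iterations t and t-1.
    n = len(pop_t)
    j1, j2 = (np.random.randint(n), np.random.randint(n)) if randomized else (i, i)
    r = np.random.uniform()
    denom = fit_t1[j2] + fit_t[j1]
    w1 = (1.0 - r) * fit_t1[j2] / denom           # omega_1, Equation (8)
    w2 = (1.0 - r) * fit_t[j1] / denom            # omega_2, Equation (8)
    return r * y_next + w1 * pop_t[j1] + w2 * pop_t1[j2]   # Equation (7)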

4.3. Case of k = 3

Three individuals at three previous iterations are collected and used to generate individual i. For this case, the ith individual x_i^{t+1} can be generated as follows:
x_i^{t+1} = \theta y_i^{t+1} + \omega_1 x_{j_1}^t + \omega_2 x_{j_2}^{t-1} + \omega_3 x_{j_3}^{t-2},        (9)
where x_{j1}^t, x_{j2}^{t-1}, and x_{j3}^{t-2} are the positions of individuals j1, j2, and j3 (j1, j2, j3 ∈ {1, 2, …, NP}) at iterations t, t − 1, and t − 2, and f_{j1}^t, f_{j2}^{t-1}, and f_{j3}^{t-2} are their fitness values, respectively. Their weighting factors are θ, ω1, ω2, and ω3, with θ + ω1 + ω2 + ω3 = 1. The calculation can be given as follows:
\theta = r, \quad \omega_1 = \frac{1}{2} \times (1 - r) \times \frac{f_{j_2}^{t-1} + f_{j_3}^{t-2}}{f_{j_1}^{t} + f_{j_2}^{t-1} + f_{j_3}^{t-2}}, \quad \omega_2 = \frac{1}{2} \times (1 - r) \times \frac{f_{j_1}^{t} + f_{j_3}^{t-2}}{f_{j_1}^{t} + f_{j_2}^{t-1} + f_{j_3}^{t-2}}, \quad \omega_3 = \frac{1}{2} \times (1 - r) \times \frac{f_{j_1}^{t} + f_{j_2}^{t-1}}{f_{j_1}^{t} + f_{j_2}^{t-1} + f_{j_3}^{t-2}}.        (10)
Although j1j3 can be determined in several ways, in this work, we adopt the following two methods:
(1)
j1 = j2 = j3 = i;
(2)
j1 = r1, j2 = r2, and j3 = r3, where r1r3 are integer numbers between 1 and NP selected at random.
As in the previous two cases, the individuals generated using the second method have more population diversity. We refer to these updating strategies as R3 and RR3, respectively. Their incorporation into EHO leads to EHOR3 and EHORR3, respectively.
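Similarly, the k = 3 case of Equations (9) and (10) can be sketched as follows. The containers pops and fits are assumed to hold the populations and fitness arrays of the last three iterations, ordered from iteration t down to t − 2, and randomized selects R3 (False) or RR3 (True).

import numpy as np

def update_k3(y_next, pops, fits, i, randomized=False):
    # Strategy R3/RR3, Equations (9)-(10): blend with individuals from iterations t, t-1, and t-2.
    n = len(pops[0])
    idx = [np.random.randint(n) if randomized else i for _ in range(3)]
    f = [fits[k][idx[k]] for k in range(3)]       # f_{j1}^t, f_{j2}^{t-1}, f_{j3}^{t-2}
    r = np.random.uniform()
    total = sum(f)
    # Each omega_k is proportional to the sum of the other two fitness values, Equation (10).
    w = [0.5 * (1.0 - r) * (total - f[k]) / total for k in range(3)]
    return r * y_next + sum(w[k] * pops[k][idx[k]] for k in range(3))   # Equation (9)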

5. Simulation Results

As discussed in Section 4, in the experimental part of our work, the six individual updating strategies (R1, RR1, R2, RR2, R3, and RR3) were incorporated separately into the basic EHO. Accordingly, we proposed six improved versions of EHO, namely EHOR1, EHORR1, EHOR2, EHORR2, EHOR3, and EHORR3. For the sake of clarity, the basic EHO is also identified as EHOR0, and we refer to the updating strategies as R0, R1, RR1, R2, RR2, R3, and RR3 for short. To provide a full assessment of the performance of each of the proposed individual updating strategies, we compared the six improved EHOs with each other and with the basic EHO. Through this comparison, we could determine whether the six updating strategies were able to improve the performance of EHO.
The six variants of EHO were investigated fully, in various respects, through a series of experiments using sixteen large-scale benchmarks with dimensions D = 50, 100, 200, 500, and 1000. These complicated large-scale benchmarks are listed in Table 1. More information about all of the benchmarks can be found in the literature [86,118,119].
As all metaheuristic algorithms involve stochastic components, different runs generate different results. With the aim of obtaining representative statistical results, we performed 30 independent runs under the same conditions, as in the literature [120].
For all of the methods studied in this paper, the parameters were set as follows: the scale factors α = 0.5 and β = 0.1, the number of kept elephants nKEL = 2, and the number of clans nClan = 5. In the simplest form, all of the clans have an equal number of elephants; in our current work, each clan has nci = 20 elephants. Except for the number of elephants in each clan, the parameters are the same as in the basic EHO, which can be found in the literature [12,13]. The best function values found are shown in bold font.

5.1. Unconstrained Optimization

5.1.1. D = 50

In this section of our work, seven kinds of EHOs (the basic EHO plus the six proposed improved variants) were evaluated on the 16 benchmarks mentioned previously, with dimension D = 50. The mean and standard deviation values obtained from thirty runs are recorded in Table 2 and Table 3.
From Table 2, we can see that in terms of the mean function values, R2 performed the best, at a level far better than the other methods. As for the other methods, R1 and RR1 provided similar performance to each other, and each could find the smallest fitness values on only one of the benchmark functions. From Table 3, R2 clearly performed in the most stable way, while among the other algorithms, the basic EHO had a significant advantage.

5.1.2. D = 100

As above, the same seven kinds of EHOs were evaluated on the sixteen benchmarks mentioned previously, with dimension D = 100. The mean and standard deviation values obtained from 30 runs are recorded in Table 4 and Table 5.
Regarding the mean function values, Table 4 shows that R2 performed much better than the other algorithms, providing the smallest function values on 13 out of the 16 functions. As for the other algorithms, R0, R1, and RR1 gave similar performance to each other, each performing the best on only one function (F10, F06, and F05, respectively). From Table 5, R2 clearly performed in the most stable way, while among the other algorithms, the basic EHO had a significant advantage.

5.1.3. D = 200

Next, the seven types of EHOs were evaluated on the 16 benchmarks mentioned previously, with dimension D = 200. The mean and standard deviation values obtained from 30 runs are recorded in Table 6 and Table 7.
From Table 6, we can see that in terms of the mean function values, R2 performed much better than the other algorithms, providing the smallest function values on 13 out of the 16 benchmark functions. As for the other methods, R1 ranked second, having performed the best on two of the benchmark functions, and RR1 ranked third, giving the best result on one function. From Table 7, R2 clearly performed in the most stable way, while among the other algorithms, the basic EHO had a significant advantage.

5.1.4. D = 500

The seven kinds of EHOs were also evaluated on the same 16 benchmarks mentioned previously, with dimension D = 500. The mean and standard deviation values obtained from 30 runs are recorded in Table 8 and Table 9.
In terms of the mean function values, Table 8 shows that R2 performed much better than the other methods, providing the smallest function values on 13 out of the 16 functions. In comparison, R0, RR1, and RR3 gave similar performances, each performing the best on only one function. From Table 9, R2 clearly performed in the most stable way, while among the other algorithms, the basic EHO had a significant advantage.

5.1.5. D = 1000

Finally, the same seven types of EHOs were evaluated on the 16 benchmarks mentioned previously, with dimension D = 1000. The mean and standard deviation values obtained from 30 runs are recorded in Table 10 and Table 11.
In terms of the mean function values, Table 10 shows that R2 had an absolute advantage over the other metaheuristic algorithms, finding the smallest function values on 12 out of the 16 functions. Among the other algorithms, R0 ranked second, having performed the best on three of the benchmark functions, and RR1 found the best function value on one function. From Table 11, R2 clearly performed in the most stable way, while among the other algorithms, the basic EHO had a significant advantage.

5.1.6. Summary of Function Values Obtained by Seven Variants of EHOs

In Section 5.1.1, Section 5.1.2, Section 5.1.3, Section 5.1.4 and Section 5.1.5, the mean function values obtained from 30 runs were collected and analyzed. In addition, the best, mean, worst, and standard deviation (STD) function values obtained from the 30 runs are summarized in Table 12.
From Table 12, we can see that, in general, R2 performed far better than the six other algorithms. R1 and R0 were second and third in performance, respectively, among all of the seven tested methods. Except for R0, R1, and R2, the other metaheuristic algorithms provided similar performances, which were highly inferior to R0, R1, and R2. Looking carefully at Table 12 for the best function values, R1 provided the best performance among the seven metaheuristic algorithms, which was far better than R2. This indicates that finding a means to improve the best performance of R2 further is a challenging question in EHO studies.
To provide a clear demonstration of the effectiveness of the different individual updating strategies, we randomly selected five functions from the 16 large-scale complex functions and plotted their convergence histories for dimensions D = 50, 100, 200, 500, and 1000. From Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5, we can see that among the seven metaheuristic algorithms, R2 succeeded in finding the best function values at the end of the search for each of these five large-scale complicated functions. This trend coincides with our previous analysis.

5.2. Constrained Optimization

Besides the standard benchmark evaluation, in this section fourteen constrained optimization problems originating from CEC 2017 [121] are selected in order to further verify the performance of the six improved versions of EHO, namely EHOR1, EHORR1, EHOR2, EHORR2, EHOR3, and EHORR3. The six variants of EHO were investigated fully through a series of experiments using these fourteen large-scale constrained benchmarks with dimensions D = 50 and 100. The benchmarks are listed in Table 13, and more information about them can be found in the literature [86,118,119]. As before, we performed 30 independent runs under the same conditions, as in the literature [120], and all of the parameters were the same as before.

5.2.1. D = 50

In this section of our work, the seven kinds of EHOs were evaluated on the 14 constrained benchmarks mentioned previously, with dimension D = 50. The mean and standard deviation values obtained from thirty runs are recorded in Table 14 and Table 15.
From Table 14, we can see that in terms of the mean function values, RR1 performed the best, at a level far better than the other methods. As for the other methods, R1 and R2 provided similar performance to each other, each finding the smallest fitness values on two constrained functions. From Table 15, it can be observed that RR3 performed in the most stable way, while the other algorithms showed similarly stable performance.

5.2.2. D = 100

As above, the same seven kinds of EHOs were evaluated on the fourteen constrained benchmarks mentioned previously, with dimension D = 100. The mean and standard deviation values obtained from 30 runs are recorded in Table 16 and Table 17.
Regarding the mean function values, Table 16 shows that RR1 and R2 performed equally well and much better than the other algorithms, each providing the smallest function values on 4 out of the 14 functions. As for the other algorithms, R1 ranked next, while EHO, RR2, R3, and RR3 gave similar performance to each other, performing the best on only one function each. From Table 17, it can be observed that RR3 performed in the most stable way, while among the other algorithms, R1 had a significant advantage.

6. Conclusions

In optimization research, few metaheuristic algorithms reuse previous information to guide the later updating process. In our proposed improvement to the basic elephant herding optimization (EHO) algorithm, the information in the previous populations is extracted to guide the later search process. We select one, two, or three elephant individuals from the previous iterations in either a fixed or random manner. Using the information from the selected previous elephant individuals, we designed six individual updating strategies (R1, RR1, R2, RR2, R3, and RR3), which are then incorporated into the basic EHO in order to generate six variants of EHO. The final individual at each iteration is generated as a weighted sum of the individual generated by the basic EHO at the current iteration and the selected previous individuals. The weights are determined by a random number and the fitness values of the elephant individuals at the previous iterations.
We tested our six proposed algorithms against 16 large-scale test cases. Among the six individual updating strategies, R2 performed much better than the others on most benchmarks. The experimental results demonstrated that the proposed EHO variations significantly outperformed the basic EHO.
In future research, we will propose more individual updating strategies to further improve the performance of EHO. In addition, the proposed individual updating strategies will be incorporated into other metaheuristic algorithms. We believe they will generate promising results on large-scale test functions and practical engineering cases.

Author Contributions

Methodology, J.L. and Y.L.; software, L.G. and Y.L.; validation, Y.L. and C.L.; formal analysis, J.L., Y.L., and C.L.; supervision, L.G.

Acknowledgments

This work was supported by the Overall Design and Application Framework Technology of Photoelectric System (no. 315090501) and the Early Warning and Laser Jamming System for Low and Slow Targets (no. 20170203015GX).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, G.-G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555. [Google Scholar] [CrossRef] [PubMed]
  2. Saleh, H.; Nashaat, H.; Saber, W.; Harb, H.M. IPSO task scheduling algorithm for large scale data in cloud computing environment. IEEE Access 2019, 7, 5412–5420. [Google Scholar] [CrossRef]
  3. Zhang, Y.F.; Chiang, H.D. A novel consensus-based particle swarm optimization-assisted trust-tech methodology for large-scale global optimization. IEEE Trans. Cybern. 2017, 47, 2717–2729. [Google Scholar] [PubMed]
  4. Kazimipour, B.; Omidvar, M.N.; Qin, A.K.; Li, X.; Yao, X. Bandit-based cooperative coevolution for tackling contribution imbalance in large-scale optimization problems. Appl. Soft Compt. 2019, 76, 265–281. [Google Scholar] [CrossRef]
  5. Jia, Y.-H.; Zhou, Y.-R.; Lin, Y.; Yu, W.-J.; Gao, Y.; Lu, L. A Distributed Cooperative Co-evolutionary CMA Evolution Strategy for Global Optimization of Large-Scale Overlapping Problems. IEEE Access 2019, 7, 19821–19834. [Google Scholar] [CrossRef]
  6. De Falco, I.; Della Cioppa, A.; Trunfio, G.A. Investigating surrogate-assisted cooperative coevolution for large-Scale global optimization. Inf. Sci. 2019, 482, 1–26. [Google Scholar] [CrossRef]
  7. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  8. Cravo, G.L.; Amaral, A.R.S. A GRASP algorithm for solving large-scale single row facility layout problems. Comput. Oper. Res. 2019, 106, 49–61. [Google Scholar] [CrossRef]
  9. Zhao, X.; Liang, J.; Dang, C. A stratified sampling based clustering algorithm for large-scale data. Knowl.-Based Syst. 2019, 163, 416–428. [Google Scholar] [CrossRef]
  10. Yildiz, Y.E.; Topal, A.O. Large Scale Continuous Global Optimization based on micro Differential Evolution with Local Directional Search. Inf. Sci. 2019, 477, 533–544. [Google Scholar] [CrossRef]
  11. Ge, Y.F.; Yu, W.J.; Lin, Y.; Gong, Y.J.; Zhan, Z.H.; Chen, W.N.; Zhang, J. Distributed differential evolution based on adaptive mergence and split for large-scale optimization. IEEE Trans. Cybern. 2017. [Google Scholar] [CrossRef]
  12. Wang, G.-G.; Deb, S.; Coelho, L.d.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI 2015), Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar]
  13. Wang, G.-G.; Deb, S.; Gao, X.-Z.; Coelho, L.d.S. A new metaheuristic optimization algorithm motivated by elephant herding behavior. Int. J. Bio-Inspired Comput. 2016, 8, 394–409. [Google Scholar] [CrossRef]
  14. Meena, N.K.; Parashar, S.; Swarnkar, A.; Gupta, N.; Niazi, K.R. Improved elephant herding optimization for multiobjective DER accommodation in distribution systems. IEEE Trans. Ind. Inform. 2018, 14, 1029–1039. [Google Scholar] [CrossRef]
  15. Jayanth, J.; Shalini, V.S.; Ashok Kumar, T.; Koliwad, S. Land-Use/Land-Cover Classification Using Elephant Herding Algorithm. J. Indian Soc. Remote Sens. 2019. [Google Scholar] [CrossRef]
  16. Rashwan, Y.I.; Elhosseini, M.A.; El Sehiemy, R.A.; Gao, X.Z. On the performance improvement of elephant herding optimization algorithm. Knowl.-Based Syst. 2019. [Google Scholar] [CrossRef]
  17. Correia, S.D.; Beko, M.; da Silva Cruz, L.A.; Tomic, S. Elephant Herding Optimization for Energy-Based Localization. Sensors 2018, 18, 2849. [Google Scholar]
  18. Jafari, M.; Salajegheh, E.; Salajegheh, J. An efficient hybrid of elephant herding optimization and cultural algorithm for optimal design of trusses. Eng. Comput.-Ger. 2018. [Google Scholar] [CrossRef]
  19. Hassanien, A.E.; Kilany, M.; Houssein, E.H.; AlQaheri, H. Intelligent human emotion recognition based on elephant herding optimization tuned support vector regression. Biomed. Signal Process. Control 2018, 45, 182–191. [Google Scholar] [CrossRef]
  20. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2015. [Google Scholar] [CrossRef]
  21. Yi, J.-H.; Lu, M.; Zhao, X.-J. Quantum inspired monarch butterfly optimization for UCAV path planning navigation problem. Int. J. Bio-Inspired Comput. 2017. Available online: http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijbic (accessed on 30 March 2019).
  22. Wang, G.-G.; Chu, H.E.; Mirjalili, S. Three-dimensional path planning for UCAV using an improved bat algorithm. Aerosp. Sci. Technol. 2016, 49, 231–238. [Google Scholar] [CrossRef]
  23. Wang, G.; Guo, L.; Duan, H.; Liu, L.; Wang, H.; Shao, M. Path planning for uninhabited combat aerial vehicle using hybrid meta-heuristic DE/BBO algorithm. Adv. Sci. Eng. Med. 2012, 4, 550–564. [Google Scholar] [CrossRef]
  24. Feng, Y.; Wang, G.-G.; Deb, S.; Lu, M.; Zhao, X. Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2017, 28, 1619–1634. [Google Scholar] [CrossRef]
  25. Feng, Y.; Yang, J.; Wu, C.; Lu, M.; Zhao, X.-J. Solving 0-1 knapsack problems by chaotic monarch butterfly optimization algorithm. Memetic Comput. 2018, 10, 135–150. [Google Scholar] [CrossRef]
  26. Feng, Y.; Wang, G.-G.; Li, W.; Li, N. Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem. Neural Comput. Appl. 2018, 30, 3019–3036. [Google Scholar] [CrossRef]
  27. Feng, Y.; Yang, J.; He, Y.; Wang, G.-G. Monarch butterfly optimization algorithm with differential evolution for the discounted {0-1} knapsack problem. Acta Electron. Sin. 2018, 46, 1343–1350. [Google Scholar]
  28. Feng, Y.; Wang, G.-G.; Dong, J.; Wang, L. Opposition-based learning monarch butterfly optimization with Gaussian perturbation for large-scale 0-1 knapsack problem. Comput. Electr. Eng. 2018, 67, 454–468. [Google Scholar] [CrossRef]
  29. Wang, G.-G.; Zhao, X.; Deb, S. A novel monarch butterfly optimization with greedy strategy and self-adaptive crossover operator. In Proceedings of the 2015 2nd International Conference on Soft Computing & Machine Intelligence (ISCMI 2015), Hong Kong, 23–24 November 2015; pp. 45–50. [Google Scholar]
  30. Wang, G.-G.; Deb, S.; Zhao, X.; Cui, Z. A new monarch butterfly optimization with an improved crossover operator. Oper. Res. Int. J. 2018, 18, 731–755. [Google Scholar] [CrossRef]
  31. Wang, G.-G.; Hao, G.-S.; Cheng, S.; Qin, Q. A discrete monarch butterfly optimization for Chinese TSP problem. In Proceedings of the Advances in Swarm Intelligence: 7th International Conference, ICSI 2016, Part I, Bali, Indonesia, 25–30 June 2016; Tan, Y., Shi, Y., Niu, B., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 9712, pp. 165–173. [Google Scholar]
  32. Wang, G.-G.; Hao, G.-S.; Cheng, S.; Cui, Z. An improved monarch butterfly optimization with equal partition and F/T mutation. In Proceedings of the Eight International Conference on Swarm Intelligence (ICSI 2017), Fukuoka, Japan, 27 July–1 August 2017; pp. 106–115. [Google Scholar]
  33. Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  34. Feng, Y.; An, H.; Gao, X. The importance of transfer function in solving set-union knapsack problem based on discrete moth search algorithm. Mathematics 2019, 7, 17. [Google Scholar] [CrossRef]
  35. Elaziz, M.A.; Xiong, S.; Jayasena, K.P.N.; Li, L. Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowl.-Based Syst. 2019. [Google Scholar] [CrossRef]
  36. Feng, Y.; Wang, G.-G. Binary moth search algorithm for discounted {0-1} knapsack problem. IEEE Access 2018, 6, 10708–10719. [Google Scholar] [CrossRef]
  37. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simulat. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  38. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Hao, G.-S. Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput. Appl. 2014, 25, 297–308. [Google Scholar] [CrossRef]
  39. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  40. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  41. Guo, L.; Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Duan, H. A new improved krill herd algorithm for global numerical optimization. Neurocomputing 2014, 138, 392–402. [Google Scholar] [CrossRef]
  42. Wang, G.-G.; Deb, S.; Gandomi, A.H.; Alavi, A.H. Opposition-based krill herd algorithm with Cauchy mutation and position clamping. Neurocomputing 2016, 177, 147–157. [Google Scholar] [CrossRef]
  43. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Deb, S. A hybrid method based on krill herd and quantum-behaved particle swarm optimization. Neural Comput. Appl. 2016, 27, 989–1006. [Google Scholar] [CrossRef]
  44. Wang, G.-G.; Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. A new hybrid method based on krill herd and cuckoo search for global optimization tasks. Int. J. Bio-Inspired Comput. 2016, 8, 286–299. [Google Scholar] [CrossRef]
  45. Abdel-Basset, M.; Wang, G.-G.; Sangaiah, A.K.; Rushdy, E. Krill herd algorithm based on cuckoo search for solving engineering optimization problems. Multimed. Tools Appl. 2017. [Google Scholar] [CrossRef]
  46. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Deb, S. A multi-stage krill herd algorithm for global numerical optimization. Int. J. Artif. Intell. Tools 2016, 25, 1550030. [Google Scholar] [CrossRef]
  47. Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Gong, D. A comprehensive review of krill herd algorithm: Variants, hybrids and applications. Artif. Intell. Rev. 2019, 51, 119–148. [Google Scholar] [CrossRef]
  48. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  49. Wang, H.; Yi, J.-H. An improved optimization method based on krill herd and artificial bee colony with information exchange. Memetic Comput. 2018, 10, 177–198. [Google Scholar] [CrossRef]
  50. Liu, F.; Sun, Y.; Wang, G.-G.; Wu, T. An artificial bee colony algorithm based on dynamic penalty and chaos search for constrained optimization problems. Arab. J. Sci. Eng. 2018, 43, 7189–7208. [Google Scholar] [CrossRef]
  51. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  52. Helwig, S.; Branke, J.; Mostaghim, S. Experimental analysis of bound handling techniques in particle swarm optimization. IEEE Trans. Evol. Comput. 2012, 17, 259–271. [Google Scholar] [CrossRef]
  53. Li, J.; Zhang, J.; Jiang, C.; Zhou, M. Composite particle swarm optimizer with historical memory for function optimization. IEEE Trans. Cybern. 2015, 45, 2350–2363. [Google Scholar] [CrossRef]
  54. Gong, M.; Cai, Q.; Chen, X.; Ma, L. Complex network clustering by multiobjective discrete particle swarm optimization based on decomposition. IEEE Trans. Evol. Comput. 2014, 18, 82–97. [Google Scholar] [CrossRef]
  55. Yuan, Y.; Ji, B.; Yuan, X.; Huang, Y. Lockage scheduling of Three Gorges-Gezhouba dams by hybrid of chaotic particle swarm optimization and heuristic-adjusted strategies. Appl. Math. Comput. 2015, 270, 74–89. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Gong, D.W.; Cheng, J. Multi-objective particle swarm optimization approach for cost-based feature selection in classification. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 14, 64–75. [Google Scholar] [CrossRef]
  57. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  58. Wang, G.-G.; Gandomi, A.H.; Zhao, X.; Chu, H.E. Hybridizing harmony search algorithm with cuckoo search for global numerical optimization. Soft Comput. 2016, 20, 273–285. [Google Scholar] [CrossRef]
  59. Wang, G.-G.; Deb, S.; Gandomi, A.H.; Zhang, Z.; Alavi, A.H. Chaotic cuckoo search. Soft Comput. 2016, 20, 3349–3362. [Google Scholar] [CrossRef]
  60. Cui, Z.; Sun, B.; Wang, G.-G.; Xue, Y.; Chen, J. A novel oriented cuckoo search algorithm to improve DV-Hop performance for cyber-physical systems. J. Parallel Distrib. Comput. 2017, 103, 42–52. [Google Scholar] [CrossRef]
  61. Li, J.; Li, Y.-X.; Tian, S.-S.; Zou, J. Dynamic cuckoo search algorithm based on Taguchi opposition-based search. Int. J. Bio-Inspired Comput. 2019, 13, 59–69. [Google Scholar] [CrossRef]
  62. Baluja, S. Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning; CMU-CS-94-163; Carnegie Mellon University: Pittsburgh, PA, USA, 1994. [Google Scholar]
  63. Storn, R.; Price, K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  64. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  65. Li, Y.-L.; Zhan, Z.-H.; Gong, Y.-J.; Chen, W.-N.; Zhang, J.; Li, Y. Differential evolution with an evolution path: A deep evolutionary algorithm. IEEE Trans. Cybern. 2015, 45, 1798–1810. [Google Scholar] [CrossRef] [PubMed]
  66. Teoh, B.E.; Ponnambalam, S.G.; Kanagaraj, G. Differential evolution algorithm with local search for capacitated vehicle routing problem. Int. J. Bio-Inspired Comput. 2015, 7, 321–342. [Google Scholar] [CrossRef]
  67. Beyer, H.; Schwefel, H. Natural Computing; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2002. [Google Scholar]
  68. Reddy, S.S.; Panigrahi, B.; Debchoudhury, S.; Kundu, R.; Mukherjee, R. Short-term hydro-thermal scheduling using CMA-ES with directed target to best perturbation scheme. Int. J. Bio-Inspired Comput. 2015, 7, 195–208. [Google Scholar] [CrossRef]
  69. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  70. Yang, X.S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  71. Wang, G.-G.; Guo, L.; Duan, H.; Wang, H. A new improved firefly algorithm for global numerical optimization. J. Comput. Theor. Nanosci. 2014, 11, 477–485. [Google Scholar] [CrossRef]
  72. Zhang, Y.; Song, X.-F.; Gong, D.-W. A return-cost-based binary firefly algorithm for feature selection. Inf. Sci. 2017, 418–419, 561–574. [Google Scholar] [CrossRef]
  73. Wang, G.-G.; Deb, S.; Coelho, L.d.S. Earthworm optimization algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–23. [Google Scholar] [CrossRef]
  74. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: New York, NY, USA, 1998. [Google Scholar]
  75. Sun, X.; Gong, D.; Jin, Y.; Chen, S. A new surrogate-assisted interactive genetic algorithm with weighted semisupervised learning. IEEE Trans. Cybern. 2013, 43, 685–698. [Google Scholar] [PubMed]
  76. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  77. Dorigo, M.; Stutzle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
  78. Ciornei, I.; Kyriakides, E. Hybrid ant colony-genetic algorithm (GAAPI) for global continuous optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 234–245. [Google Scholar] [CrossRef] [PubMed]
  79. Sun, X.; Zhang, Y.; Ren, X.; Chen, K. Optimization deployment of wireless sensor networks based on culture-ant colony algorithm. Appl. Math. Comput. 2015, 250, 58–70. [Google Scholar] [CrossRef]
  80. Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic Krill Herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
  81. Li, J.; Tang, Y.; Hua, C.; Guan, X. An improved krill herd algorithm: Krill herd with linear decreasing step. Appl. Math. Comput. 2014, 234, 356–367. [Google Scholar] [CrossRef]
  82. Mehrabian, A.R.; Lucas, C. A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 2006, 1, 355–366. [Google Scholar] [CrossRef]
  83. Sang, H.-Y.; Duan, P.-Y.; Li, J.-Q. An effective invasive weed optimization algorithm for scheduling semiconductor final testing problem. Swarm Evol. Comput. 2018, 38, 42–53. [Google Scholar] [CrossRef]
  84. Sang, H.-Y.; Pan, Q.-K.; Duan, P.-Y.; Li, J.-Q. An effective discrete invasive weed optimization algorithm for lot-streaming flowshop scheduling problems. J. Intell. Manuf. 2015, 29, 1337–1349. [Google Scholar] [CrossRef]
  85. Khatib, W.; Fleming, P. The stud GA: A mini revolution? In Proceedings of the 5th International Conference on Parallel Problem Solving from Nature, New York, NY, USA, 27–30 September 1998; pp. 683–691. [Google Scholar]
  86. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  87. Simon, D.; Ergezer, M.; Du, D.; Rarick, R. Markov models for biogeography-based optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2011, 41, 299–306. [Google Scholar] [CrossRef]
  88. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  89. Bilbao, M.N.; Ser, J.D.; Salcedo-Sanz, S.; Casanova-Mateo, C. On the application of multi-objective harmony search heuristics to the predictive deployment of firefighting aircrafts: A realistic case study. Int. J. Bio-Inspired Comput. 2015, 7, 270–284. [Google Scholar] [CrossRef]
  90. Amaya, I.; Correa, R. Finding resonant frequencies of microwave cavities through a modified harmony search algorithm. Int. J. Bio-Inspired Comput. 2015, 7, 285–295. [Google Scholar] [CrossRef]
  91. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  92. Xue, F.; Cai, Y.; Cao, Y.; Cui, Z.; Li, F. Optimal parameter settings for bat algorithm. Int. J. Bio-Inspired Comput. 2015, 7, 125–128. [Google Scholar] [CrossRef]
  93. Wu, G.; Shen, X.; Li, H.; Chen, H.; Lin, A.; Suganthan, P.N. Ensemble of differential evolution variants. Inf. Sci. 2018, 423, 172–186. [Google Scholar] [CrossRef]
  94. Wang, G.-G.; Lu, M.; Zhao, X.-J. An improved bat algorithm with variable neighborhood search for global optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (IEEE CEC 2016), Vancouver, BC, Canada, 25–29 July 2016; pp. 1773–1778. [Google Scholar]
  95. Duan, H.; Zhao, W.; Wang, G.; Feng, X. Test-sheet composition using analytic hierarchy process and hybrid metaheuristic algorithm TS/BBO. Math. Probl. Eng. 2012, 2012, 712752. [Google Scholar] [CrossRef]
  96. Pan, Q.-K.; Gao, L.; Wang, L.; Liang, J.; Li, X.-Y. Effective heuristics and metaheuristics to minimize total flowtime for the distributed permutation flowshop problem. Expert Syst. Appl. 2019. [Google Scholar] [CrossRef]
  97. Peng, K.; Pan, Q.-K.; Gao, L.; Li, X.; Das, S.; Zhang, B. A multi-start variable neighbourhood descent algorithm for hybrid flowshop rescheduling. Swarm Evol. Comput. 2019, 45, 92–112. [Google Scholar] [CrossRef]
  98. Zhang, Y.; Gong, D.; Hu, Y.; Zhang, W. Feature selection algorithm based on bare bones particle swarm optimization. Neurocomputing 2015, 148, 150–157. [Google Scholar] [CrossRef]
  99. Zhang, Y.; Gong, D.-W.; Sun, J.-Y.; Qu, B.-Y. A decomposition-based archiving approach for multi-objective evolutionary optimization. Inf. Sci. 2018, 430–431, 397–413. [Google Scholar] [CrossRef]
  100. Logesh, R.; Subramaniyaswamy, V.; Vijayakumar, V.; Gao, X.-Z.; Wang, G.-G. Hybrid bio-inspired user clustering for the generation of diversified recommendations. Neural Comput. Appl. 2019. [Google Scholar] [CrossRef]
  101. Wang, G.-G.; Cai, X.; Cui, Z.; Min, G.; Chen, J. High performance computing for cyber physical social systems by using evolutionary multi-objective optimization algorithm. IEEE Trans. Emerg. Top. Comput. 2017. [Google Scholar] [CrossRef]
  102. Zou, D.; Li, S.; Wang, G.-G.; Li, Z.; Ouyang, H. An improved differential evolution algorithm for the economic load dispatch problems with or without valve-point effects. Appl. Energ. 2016, 181, 375–390. [Google Scholar] [CrossRef]
  103. Rizk-Allah, R.M.; El-Sehiemy, R.A.; Wang, G.-G. A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution. Appl. Soft Compt. 2018, 63, 206–222. [Google Scholar] [CrossRef]
  104. Yi, J.-H.; Wang, J.; Wang, G.-G. Improved probabilistic neural networks with self-adaptive strategies for transformer fault diagnosis problem. Adv. Mech. Eng. 2016, 8, 1687814015624832. [Google Scholar] [CrossRef]
  105. Sang, H.-Y.; Pan, Q.-K.; Li, J.-Q.; Wang, P.; Han, Y.-Y.; Gao, K.-Z.; Duan, P. Effective invasive weed optimization algorithms for distributed assembly permutation flowshop problem with total flowtime criterion. Swarm Evol. Comput. 2019, 44, 64–73. [Google Scholar] [CrossRef]
  106. Yi, J.-H.; Xing, L.-N.; Wang, G.-G.; Dong, J.; Vasilakos, A.V.; Alavi, A.H.; Wang, L. Behavior of crossover operators in NSGA-III for large-scale optimization problems. Inf. Sci. 2018. [Google Scholar] [CrossRef]
  107. Yi, J.-H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.-G. An improved NSGA-III Algorithm with adaptive mutation operator for big data optimization problems. Future Gener. Comput. Syst. 2018, 88, 571–585. [Google Scholar] [CrossRef]
  108. Liu, K.; Gong, D.; Meng, F.; Chen, H.; Wang, G.-G. Gesture segmentation based on a two-phase estimation of distribution algorithm. Inf. Sci. 2017, 394–395, 88–105. [Google Scholar] [CrossRef]
  109. Wang, G.-G.; Guo, L.; Duan, H.; Liu, L.; Wang, H. The model and algorithm for the target threat assessment based on Elman_AdaBoost strong predictor. Acta Electron. Sin. 2012, 40, 901–906. [Google Scholar]
  110. Wang, G.; Guo, L.; Duan, H. Wavelet neural network using multiple wavelet functions in target threat assessment. Sci. World J. 2013, 2013, 632437. [Google Scholar] [CrossRef] [PubMed]
  111. Nan, X.; Bao, L.; Zhao, X.; Zhao, X.; Sangaiah, A.K.; Wang, G.-G.; Ma, Z. EPuL: An enhanced positive-unlabeled learning algorithm for the prediction of pupylation sites. Molecules 2017, 22, 1463. [Google Scholar] [CrossRef] [PubMed]
  112. Zou, D.-X.; Deb, S.; Wang, G.-G. Solving IIR system identification by a variant of particle swarm optimization. Neural Comput. Appl. 2018, 30, 685–698. [Google Scholar] [CrossRef]
  113. Rizk-Allah, R.M.; El-Sehiemy, R.A.; Deb, S.; Wang, G.-G. A novel fruit fly framework for multi-objective shape design of tubular linear synchronous motor. J. Supercomput. 2017, 73, 1235–1256. [Google Scholar] [CrossRef]
  114. Sun, J.; Gong, D.; Li, J.; Wang, G.-G.; Zeng, X.-J. Interval multi-objective optimization with memetic algorithms. IEEE Trans. Cybern. 2019. [Google Scholar] [CrossRef] [PubMed]
115. Liu, Y.; Gong, D.; Sun, X.; Zhang, Y. Many-objective evolutionary optimization based on reference points. Appl. Soft Comput. 2017, 50, 344–355. [Google Scholar] [CrossRef]
  116. Gong, D.; Liu, Y.; Yen, G.G. A Meta-Objective Approach for Many-Objective Evolutionary Optimization. Evol. Comput. 2018. [Google Scholar] [CrossRef]
  117. Gong, D.; Sun, J.; Miao, Z. A set-based genetic algorithm for interval many-objective optimization problems. IEEE Trans. Evol. Comput. 2018, 22, 47–60. [Google Scholar] [CrossRef]
  118. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  119. Yang, X.-S.; Cui, Z.; Xiao, R.; Gandomi, A.H.; Karamanoglu, M. Swarm Intelligence and Bio-Inspired Computation; Elsevier: Waltham, MA, USA, 2013. [Google Scholar]
  120. Wang, G.; Guo, L.; Wang, H.; Duan, H.; Liu, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  121. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
Figure 1. Optimization process of seven algorithms on five functions with D = 50. EHO—elephant herd optimization.
Figure 2. Optimization process of seven algorithms on five functions with D = 100.
Figure 3. Optimization process of seven algorithms on five functions with D = 200.
Figure 4. Optimization process of seven algorithms on five functions with D = 500.
Figure 5. Optimization process of seven algorithms on five functions with D = 1000.
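Figures 1–5 trace the optimization process of the seven algorithms on five representative functions. A minimal sketch of how such convergence curves are usually produced is given below, assuming the curves show the best objective value found so far at each iteration; matplotlib and the log-scaled vertical axis are choices made here for illustration, not details taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, labels, title):
    """Plot convergence curves like those in Figures 1-5.

    histories: list of 1-D arrays, each holding the best objective value
    found so far at every iteration of one algorithm (an assumed logging
    scheme, not restated from the paper). A log scale is used because the
    final values in Tables 2-11 span many orders of magnitude.
    """
    for history, label in zip(histories, labels):
        plt.semilogy(np.asarray(history, dtype=float), label=label)
    plt.xlabel("Iteration")
    plt.ylabel("Best objective value so far")
    plt.title(title)
    plt.legend()
    plt.show()
```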
Table 1. Sixteen benchmark functions.
No. | Name | No. | Name
F01 | Ackley | F09 | Rastrigin
F02 | Alpine | F10 | Schwefel 2.26
F03 | Brown | F11 | Schwefel 1.2
F04 | Holzman 2 function | F12 | Schwefel 2.22
F05 | Levy | F13 | Schwefel 2.21
F06 | Penalty #1 | F14 | Sphere
F07 | Powell | F15 | Sum function
F08 | Quartic with noise | F16 | Zakharov
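Table 1 only lists the benchmark names. For reference, a minimal sketch of three of them in their standard textbook form is given below (F01 Ackley, F09 Rastrigin, F14 Sphere); the constants, bounds, and use of NumPy follow the usual conventions and are assumptions here, since the paper's exact definitions are not restated in this section.

```python
import numpy as np

def ackley(x):
    """F01: Ackley function, standard form with a = 20, b = 0.2, c = 2*pi."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def rastrigin(x):
    """F09: Rastrigin function, 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def sphere(x):
    """F14: Sphere function, sum of squares."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

if __name__ == "__main__":
    # All three have their global minimum value 0 at the origin.
    print(ackley(np.zeros(50)), rastrigin(np.zeros(50)), sphere(np.zeros(50)))
```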
Table 2. Mean function values obtained by elephant herd optimization (EHO) and six improved methods with D = 50.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 2.57 × 10^−4 | 7.11 × 10^−4 | 0.05 | 8.38 × 10^−5 | 1.57 | 1.53 × 10^−4 | 0.01
F02 | 1.04 × 10^−4 | 2.69 × 10^−4 | 0.01 | 2.74 × 10^−5 | 0.23 | 5.13 × 10^−5 | 2.38 × 10^−3
F03 | 4.41 × 10^−7 | 9.16 × 10^−6 | 3.37 × 10^−3 | 6.14 × 10^−9 | 0.76 | 4.32 × 10^−8 | 8.25 × 10^−5
F04 | 1.50 × 10^−15 | 4.97 × 10^−11 | 3.58 × 10^−6 | 2.27 × 10^−16 | 0.03 | 3.38 × 10^−16 | 1.82 × 10^−9
F05 | 4.49 | 4.25 | 3.95 | 4.43 | 4.89 | 4.44 | 4.50
F06 | 1.22 | 1.06 | 1.62 | 1.72 | 2.01 | 1.76 | 1.79
F07 | 5.13 × 10^−7 | 2.55 × 10^−6 | 0.02 | 3.50 × 10^−8 | 2.25 | 1.51 × 10^−7 | 4.85 × 10^−4
F08 | 2.57 × 10^−16 | 1.24 × 10^−15 | 7.23 × 10^−9 | 2.21 × 10^−16 | 1.72 × 10^−5 | 2.21 × 10^−16 | 4.03 × 10^−13
F09 | 2.59 × 10^−6 | 6.86 × 10^−5 | 0.03 | 9.83 × 10^−8 | 9.28 | 5.06 × 10^−7 | 3.92 × 10^−3
F10 | 1.65 × 10^4 | 1.64 × 10^4 | 1.63 × 10^4 | 1.65 × 10^4 | 1.61 × 10^4 | 1.64 × 10^4 | 1.64 × 10^4
F11 | 1.44 × 10^−5 | 4.47 × 10^−4 | 0.36 | 1.02 × 10^−6 | 49.04 | 3.52 × 10^−6 | 2.18
F12 | 1.07 × 10^−3 | 3.81 × 10^−3 | 0.14 | 2.96 × 10^−4 | 2.37 | 5.31 × 10^−4 | 0.02
F13 | 6.69 × 10^−4 | 1.34 × 10^−3 | 0.07 | 1.52 × 10^−4 | 1.21 | 3.05 × 10^−4 | 0.01
F14 | 1.27 × 10^−8 | 1.63 × 10^−7 | 3.85 × 10^−4 | 7.00 × 10^−10 | 0.04 | 2.50 × 10^−9 | 3.02 × 10^−6
F15 | 6.70 × 10^−7 | 1.40 × 10^−5 | 7.03 × 10^−3 | 4.76 × 10^−8 | 3.61 | 1.77 × 10^−7 | 9.87 × 10^−4
F16 | 1.28 × 10^−3 | 0.30 | 512.90 | 3.00 × 10^−5 | 3.87 × 10^7 | 1.71 × 10^−4 | 0.56
TOTAL | 0 | 1 | 1 | 14 | 0 | 0 | 0
Table 3. Standard deviation of the function values obtained by EHO and the six improved methods with D = 50.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 2.39 × 10^−5 | 9.89 × 10^−4 | 0.06 | 3.60 × 10^−5 | 0.22 | 4.89 × 10^−5 | 0.01
F02 | 1.37 × 10^−5 | 3.32 × 10^−4 | 0.01 | 1.14 × 10^−5 | 0.04 | 1.38 × 10^−5 | 2.17 × 10^−3
F03 | 1.55 × 10^−8 | 3.34 × 10^−5 | 6.16 × 10^−3 | 3.72 × 10^−9 | 0.07 | 3.52 × 10^−8 | 7.80 × 10^−5
F04 | 4.50 × 10^−16 | 2.49 × 10^−10 | 1.08 × 10^−5 | 6.72 × 10^−18 | 0.01 | 2.92 × 10^−16 | 5.64 × 10^−9
F05 | 0.16 | 0.26 | 0.52 | 0.30 | 0.20 | 0.30 | 0.33
F06 | 0.23 | 0.18 | 0.41 | 0.28 | 0.32 | 0.25 | 0.27
F07 | 1.40 × 10^−7 | 5.25 × 10^−6 | 0.04 | 2.71 × 10^−8 | 0.76 | 7.78 × 10^−8 | 1.58 × 10^−3
F08 | 6.86 × 10^−18 | 3.99 × 10^−15 | 1.96 × 10^−8 | 1.68 × 10^−20 | 6.10 × 10^−6 | 1.26 × 10^−19 | 1.17 × 10^−12
F09 | 4.61 × 10^−7 | 1.75 × 10^−4 | 0.09 | 6.36 × 10^−8 | 1.60 | 3.11 × 10^−7 | 0.02
F10 | 444.40 | 591.40 | 502.50 | 506.70 | 486.40 | 454.70 | 349.00
F11 | 3.73 × 10^−6 | 1.98 × 10^−3 | 0.71 | 8.92 × 10^−7 | 29.12 | 2.30 × 10^−6 | 9.49
F12 | 1.03 × 10^−4 | 5.54 × 10^−3 | 0.15 | 1.03 × 10^−4 | 0.28 | 1.21 × 10^−4 | 0.01
F13 | 9.58 × 10^−5 | 1.97 × 10^−3 | 0.07 | 5.39 × 10^−5 | 0.15 | 6.84 × 10^−5 | 6.05 × 10^−3
F14 | 2.07 × 10^−9 | 5.02 × 10^−7 | 1.09 × 10^−3 | 6.10 × 10^−10 | 0.01 | 2.04 × 10^−9 | 3.90 × 10^−6
F15 | 1.00 × 10^−7 | 4.76 × 10^−5 | 0.02 | 3.41 × 10^−8 | 0.73 | 8.84 × 10^−8 | 3.40 × 10^−3
F16 | 8.21 × 10^−4 | 1.43 | 1.23 × 10^3 | 2.06 × 10^−5 | 1.92 × 10^7 | 1.21 × 10^−4 | 0.44
TOTAL | 3 | 1 | 0 | 11 | 0 | 0 | 1
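Tables 2 and 3 report, respectively, the mean and the standard deviation of the final function values over repeated independent runs, and the same pattern is used for the higher-dimensional tables that follow. A minimal sketch of how such statistics can be collected is shown below; the run count of 30 and the optimizer interface are placeholders, not values taken from the paper.

```python
import numpy as np

def summarize_runs(optimizer, problem, dim, num_runs=30, seed=0):
    """Run an optimizer several times and report the statistics tabulated in
    Tables 2-12: best, mean, worst, and standard deviation of the final
    objective values. `optimizer(problem, dim, rng)` is an assumed interface
    returning the best objective value found in one independent run."""
    rng = np.random.default_rng(seed)
    finals = np.array([optimizer(problem, dim, rng) for _ in range(num_runs)])
    return {"BEST": finals.min(), "MEAN": finals.mean(),
            "WORST": finals.max(), "STD": finals.std(ddof=1)}
```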
Table 4. Mean function values obtained by EHO and six improved methods with D = 100.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 3.22 × 10^−4 | 6.27 × 10^−4 | 0.12 | 6.92 × 10^−5 | 1.70 | 1.78 × 10^−4 | 0.01
F02 | 2.55 × 10^−4 | 6.86 × 10^−4 | 0.04 | 6.28 × 10^−5 | 0.48 | 1.08 × 10^−4 | 4.78 × 10^−3
F03 | 9.83 × 10^−7 | 2.65 × 10^−5 | 1.27 × 10^−3 | 1.55 × 10^−8 | 1.50 | 9.96 × 10^−8 | 2.01 × 10^−4
F04 | 1.18 × 10^−14 | 1.85 × 10^−9 | 7.43 × 10^−6 | 2.55 × 10^−16 | 0.13 | 7.25 × 10^−16 | 1.04 × 10^−9
F05 | 9.19 | 9.01 | 8.41 | 9.22 | 9.51 | 9.07 | 9.26
F06 | 3.11 | 2.91 | 3.89 | 3.74 | 4.30 | 3.80 | 3.80
F07 | 2.56 × 10^−6 | 4.65 × 10^−5 | 0.02 | 7.34 × 10^−8 | 5.33 | 3.31 × 10^−7 | 3.35 × 10^−3
F08 | 4.18 × 10^−16 | 1.13 × 10^−13 | 1.38 × 10^−7 | 2.21 × 10^−16 | 8.16 × 10^−5 | 2.24 × 10^−16 | 3.03 × 10^−12
F09 | 8.20 × 10^−6 | 4.58 × 10^−5 | 0.08 | 2.47 × 10^−7 | 19.19 | 1.05 × 10^−6 | 1.30 × 10^−3
F10 | 3.58 × 10^4 | 3.56 × 10^4 | 3.50 × 10^4 | 3.55 × 10^4 | 3.59 × 10^4 | 3.56 × 10^4 | 3.68 × 10^4
F11 | 6.15 × 10^−5 | 4.45 × 10^−3 | 3.54 | 5.51 × 10^−6 | 200.20 | 1.74 × 10^−5 | 0.07
F12 | 2.53 × 10^−3 | 5.37 × 10^−3 | 0.26 | 5.94 × 10^−4 | 5.27 | 1.04 × 10^−3 | 0.05
F13 | 8.12 × 10^−4 | 1.50 × 10^−3 | 0.06 | 1.72 × 10^−4 | 1.46 | 3.52 × 10^−4 | 0.02
F14 | 3.99 × 10^−8 | 1.09 × 10^−6 | 5.59 × 10^−4 | 1.23 × 10^−9 | 0.09 | 4.99 × 10^−9 | 8.68 × 10^−6
F15 | 4.45 × 10^−6 | 5.90 × 10^−4 | 0.11 | 2.52 × 10^−7 | 15.00 | 8.77 × 10^−7 | 1.83 × 10^−3
F16 | 0.04 | 2.20 | 7.85 × 10^5 | 7.09 × 10^−4 | 1.58 × 10^10 | 3.31 × 10^−3 | 987.40
TOTAL | 1 | 1 | 1 | 13 | 0 | 0 | 0
Table 5. Standard deviation of the function values obtained by EHO and the six improved methods with D = 100.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 1.85 × 10^−5 | 8.00 × 10^−4 | 0.38 | 2.47 × 10^−5 | 0.12 | 4.83 × 10^−5 | 0.01
F02 | 1.24 × 10^−5 | 1.10 × 10^−3 | 0.08 | 2.65 × 10^−5 | 0.08 | 3.99 × 10^−5 | 3.78 × 10^−3
F03 | 2.22 × 10^−8 | 9.04 × 10^−5 | 1.77 × 10^−3 | 8.47 × 10^−9 | 0.21 | 4.81 × 10^−8 | 1.77 × 10^−4
F04 | 2.03 × 10^−15 | 1.02 × 10^−8 | 2.98 × 10^−5 | 3.90 × 10^−17 | 0.05 | 6.76 × 10^−16 | 4.03 × 10^−9
F05 | 0.11 | 0.29 | 0.76 | 0.25 | 0.10 | 0.31 | 0.25
F06 | 0.31 | 0.33 | 0.38 | 0.29 | 0.41 | 0.25 | 0.29
F07 | 3.85 × 10^−7 | 1.44 × 10^−4 | 0.04 | 6.16 × 10^−8 | 0.95 | 1.70 × 10^−7 | 0.01
F08 | 2.21 × 10^−17 | 5.63 × 10^−13 | 4.37 × 10^−7 | 4.23 × 10^−20 | 3.09 × 10^−5 | 3.81 × 10^−19 | 9.87 × 10^−12
F09 | 8.05 × 10^−7 | 1.53 × 10^−4 | 0.18 | 1.46 × 10^−7 | 3.21 | 6.43 × 10^−7 | 5.79 × 10^−4
F10 | 733.60 | 769.20 | 834.60 | 606.00 | 547.10 | 725.00 | 621.20
F11 | 1.18 × 10^−5 | 0.01 | 13.78 | 4.34 × 10^−6 | 114.20 | 9.54 × 10^−6 | 0.25
F12 | 1.23 × 10^−4 | 0.01 | 0.32 | 2.23 × 10^−4 | 0.50 | 2.95 × 10^−4 | 0.03
F13 | 6.13 × 10^−5 | 3.10 × 10^−3 | 0.08 | 4.52 × 10^−5 | 0.17 | 1.02 × 10^−4 | 0.01
F14 | 3.74 × 10^−9 | 3.58 × 10^−6 | 1.84 × 10^−3 | 8.08 × 10^−10 | 0.01 | 2.98 × 10^−9 | 8.97 × 10^−6
F15 | 5.46 × 10^−7 | 2.88 × 10^−3 | 0.23 | 2.59 × 10^−7 | 2.76 | 6.25 × 10^−7 | 4.04 × 10^−3
F16 | 3.18 × 10^−3 | 8.63 | 3.03 × 10^6 | 4.19 × 10^−4 | 3.52 × 10^9 | 1.74 × 10^−3 | 4.76 × 10^3
TOTAL | 3 | 0 | 0 | 10 | 2 | 1 | 0
Table 6. Mean function values obtained by EHO and six improved methods with D = 200.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 3.68 × 10^−4 | 8.01 × 10^−4 | 0.03 | 8.94 × 10^−5 | 1.71 | 1.78 × 10^−4 | 9.60 × 10^−3
F02 | 5.65 × 10^−4 | 7.10 × 10^−4 | 0.06 | 1.08 × 10^−4 | 1.06 | 2.19 × 10^−4 | 0.01
F03 | 2.17 × 10^−6 | 2.23 × 10^−5 | 8.24 × 10^−3 | 2.72 × 10^−8 | 3.12 | 1.83 × 10^−7 | 7.55 × 10^−4
F04 | 7.43 × 10^−14 | 1.46 × 10^−9 | 1.82 × 10^−5 | 3.25 × 10^−16 | 0.56 | 3.46 × 10^−15 | 2.37 × 10^−8
F05 | 18.45 | 18.39 | 17.57 | 18.53 | 18.70 | 18.50 | 18.50
F06 | 6.90 | 6.89 | 8.03 | 7.60 | 8.67 | 7.72 | 7.82
F07 | 7.24 × 10^−6 | 2.87 × 10^−5 | 0.04 | 1.76 × 10^−7 | 11.34 | 6.97 × 10^−7 | 2.21 × 10^−3
F08 | 1.20 × 10^−15 | 2.20 × 10^−12 | 1.05 × 10^−8 | 2.22 × 10^−16 | 3.94 × 10^−4 | 2.21 × 10^−16 | 8.55 × 10^−11
F09 | 2.16 × 10^−5 | 1.46 × 10^−4 | 0.14 | 6.14 × 10^−7 | 40.87 | 2.38 × 10^−6 | 0.01
F10 | 7.58 × 10^4 | 7.43 × 10^4 | 7.59 × 10^4 | 7.58 × 10^4 | 7.50 × 10^4 | 7.58 × 10^4 | 7.58 × 10^4
F11 | 2.41 × 10^−4 | 2.46 × 10^−3 | 7.16 | 2.72 × 10^−5 | 699.60 | 7.04 × 10^−5 | 0.14
F12 | 5.71 × 10^−3 | 0.02 | 0.37 | 1.29 × 10^−3 | 11.26 | 2.41 × 10^−3 | 0.12
F13 | 9.65 × 10^−4 | 2.89 × 10^−3 | 0.06 | 1.93 × 10^−4 | 1.57 | 3.64 × 10^−4 | 0.02
F14 | 1.08 × 10^−7 | 1.82 × 10^−6 | 6.24 × 10^−4 | 2.74 × 10^−9 | 0.19 | 1.24 × 10^−8 | 1.46 × 10^−5
F15 | 2.25 × 10^−5 | 1.90 × 10^−4 | 0.96 | 1.02 × 10^−6 | 63.15 | 4.19 × 10^−6 | 0.01
F16 | 1.52 | 1.53 × 10^3 | 4.00 × 10^8 | 0.01 | 5.17 × 10^12 | 0.08 | 4.61 × 10^5
TOTAL | 0 | 2 | 1 | 13 | 0 | 0 | 0
Table 7. Standard deviation of the function values obtained by EHO and the six improved methods with D = 200.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 1.10 × 10^−5 | 1.35 × 10^−3 | 0.04 | 4.38 × 10^−5 | 0.13 | 5.87 × 10^−5 | 4.78 × 10^−3
F02 | 2.70 × 10^−5 | 8.25 × 10^−4 | 0.10 | 3.76 × 10^−5 | 0.12 | 4.61 × 10^−5 | 0.01
F03 | 3.50 × 10^−8 | 7.44 × 10^−5 | 0.02 | 1.65 × 10^−8 | 0.27 | 1.10 × 10^−7 | 1.55 × 10^−3
F04 | 1.35 × 10^−14 | 5.94 × 10^−9 | 4.19 × 10^−5 | 1.21 × 10^−16 | 0.14 | 9.76 × 10^−15 | 1.22 × 10^−7
F05 | 0.12 | 0.16 | 0.76 | 0.17 | 0.05 | 0.18 | 0.18
F06 | 0.34 | 0.40 | 0.28 | 0.30 | 0.58 | 0.22 | 0.29
F07 | 6.49 × 10^−7 | 6.58 × 10^−5 | 0.04 | 1.11 × 10^−7 | 2.00 | 4.21 × 10^−7 | 5.56 × 10^−3
F08 | 7.08 × 10^−17 | 1.13 × 10^−11 | 1.80 × 10^−8 | 1.61 × 10^−19 | 1.28 × 10^−4 | 1.05 × 10^−18 | 4.33 × 10^−10
F09 | 1.44 × 10^−6 | 3.78 × 10^−4 | 0.23 | 5.66 × 10^−7 | 3.79 | 1.43 × 10^−6 | 0.05
F10 | 897.80 | 853.40 | 971.60 | 1.12 × 10^3 | 833.10 | 1.02 × 10^3 | 907.60
F11 | 4.63 × 10^−5 | 5.96 × 10^−3 | 14.39 | 2.64 × 10^−5 | 353.00 | 5.11 × 10^−5 | 0.27
F12 | 2.72 × 10^−4 | 0.03 | 0.36 | 4.03 × 10^−4 | 1.09 | 8.39 × 10^−4 | 0.13
F13 | 7.38 × 10^−5 | 4.15 × 10^−3 | 0.08 | 7.99 × 10^−5 | 0.16 | 9.51 × 10^−5 | 0.01
F14 | 7.41 × 10^−9 | 7.42 × 10^−6 | 1.10 × 10^−3 | 1.96 × 10^−9 | 0.02 | 1.03 × 10^−8 | 1.03 × 10^−5
F15 | 1.88 × 10^−6 | 3.50 × 10^−4 | 2.13 | 8.43 × 10^−7 | 9.40 | 2.12 × 10^−6 | 0.04
F16 | 0.14 | 4.85 × 10^3 | 1.74 × 10^9 | 8.45 × 10^−3 | 7.73 × 10^11 | 0.04 | 2.00 × 10^6
TOTAL | 4 | 0 | 0 | 9 | 2 | 1 | 0
Table 8. Mean function values obtained by EHO and six improved methods with D = 500.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 3.92 × 10^−4 | 7.03 × 10^−4 | 0.03 | 8.75 × 10^−5 | 1.76 | 1.72 × 10^−4 | 8.52 × 10^−3
F02 | 1.53 × 10^−3 | 3.40 × 10^−3 | 0.11 | 2.99 × 10^−4 | 2.58 | 5.47 × 10^−4 | 0.02
F03 | 5.60 × 10^−6 | 8.70 × 10^−5 | 0.04 | 5.67 × 10^−8 | 7.91 | 4.15 × 10^−7 | 2.14 × 10^−3
F04 | 6.51 × 10^−13 | 4.40 × 10^−9 | 7.69 × 10^−3 | 1.48 × 10^−15 | 3.94 | 1.45 × 10^−14 | 7.70 × 10^−8
F05 | 45.80 | 45.90 | 45.44 | 45.92 | 45.99 | 45.91 | 45.93
F06 | 18.55 | 19.23 | 19.93 | 19.44 | 21.13 | 19.43 | 19.62
F07 | 2.35 × 10^−5 | 6.78 × 10^−5 | 0.24 | 4.42 × 10^−7 | 27.03 | 1.59 × 10^−6 | 8.63 × 10^−3
F08 | 8.52 × 10^−15 | 3.89 × 10^−13 | 1.60 × 10^−7 | 2.23 × 10^−16 | 2.42 × 10^−3 | 2.39 × 10^−16 | 1.51 × 10^−10
F09 | 6.13 × 10^−5 | 2.15 × 10^−3 | 0.87 | 1.31 × 10^−6 | 107.70 | 6.53 × 10^−6 | 0.02
F10 | 1.96 × 10^5 | 1.97 × 10^5 | 1.97 × 10^5 | 1.99 × 10^5 | 1.99 × 10^5 | 1.92 × 10^5 | 1.95 × 10^5
F11 | 1.61 × 10^−3 | 0.17 | 10.43 | 1.40 × 10^−4 | 5.30 × 10^3 | 4.16 × 10^−4 | 0.71
F12 | 0.02 | 0.03 | 1.30 | 3.15 × 10^−3 | 30.31 | 6.04 × 10^−3 | 0.57
F13 | 1.07 × 10^−3 | 3.01 × 10^−3 | 0.06 | 2.07 × 10^−4 | 1.72 | 4.22 × 10^−4 | 0.02
F14 | 3.12 × 10^−7 | 4.56 × 10^−6 | 5.26 × 10^−3 | 6.15 × 10^−9 | 0.48 | 2.74 × 10^−8 | 6.08 × 10^−5
F15 | 1.71 × 10^−4 | 7.41 × 10^−4 | 1.85 | 7.59 × 10^−6 | 427.30 | 2.17 × 10^−5 | 0.07
F16 | 1.60 × 10^3 | 3.60 × 10^7 | 1.33 × 10^12 | 1.20 | 8.83 × 10^15 | 16.25 | 2.37 × 10^9
TOTAL | 1 | 0 | 1 | 13 | 0 | 0 | 1
Table 9. Standard deviation of the function values obtained by EHO and the six improved methods with D = 500.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 7.22 × 10^−6 | 8.01 × 10^−4 | 0.02 | 2.66 × 10^−5 | 0.08 | 4.32 × 10^−5 | 4.04 × 10^−3
F02 | 4.67 × 10^−5 | 4.09 × 10^−3 | 0.11 | 1.19 × 10^−4 | 0.20 | 1.49 × 10^−4 | 0.01
F03 | 5.65 × 10^−8 | 2.10 × 10^−4 | 0.13 | 2.62 × 10^−8 | 0.51 | 2.91 × 10^−7 | 5.94 × 10^−3
F04 | 6.17 × 10^−14 | 1.83 × 10^−8 | 0.04 | 3.12 × 10^−15 | 0.98 | 2.53 × 10^−14 | 2.84 × 10^−7
F05 | 0.09 | 0.02 | 0.75 | 0.03 | 0.05 | 0.04 | 0.02
F06 | 0.31 | 0.32 | 0.18 | 0.21 | 0.85 | 0.15 | 0.25
F07 | 1.05 × 10^−6 | 1.35 × 10^−4 | 0.78 | 3.05 × 10^−7 | 3.86 | 9.25 × 10^−7 | 0.02
F08 | 3.61 × 10^−16 | 1.72 × 10^−12 | 4.03 × 10^−7 | 2.01 × 10^−18 | 5.40 × 10^−4 | 1.31 × 10^−17 | 6.30 × 10^−10
F09 | 2.15 × 10^−6 | 0.01 | 1.71 | 9.91 × 10^−7 | 12.30 | 4.69 × 10^−6 | 0.04
F10 | 1.54 × 10^3 | 1.43 × 10^3 | 1.30 × 10^3 | 1.28 × 10^3 | 1.38 × 10^3 | 1.45 × 10^3 | 1.65 × 10^3
F11 | 3.44 × 10^−4 | 0.70 | 12.21 | 9.50 × 10^−5 | 2.00 × 10^3 | 2.38 × 10^−4 | 1.51
F12 | 3.52 × 10^−4 | 0.04 | 1.93 | 1.23 × 10^−3 | 2.11 | 2.09 × 10^−3 | 1.24
F13 | 5.90 × 10^−5 | 4.79 × 10^−3 | 0.06 | 6.75 × 10^−5 | 0.12 | 1.69 × 10^−4 | 0.01
F14 | 1.75 × 10^−8 | 1.60 × 10^−5 | 0.02 | 3.39 × 10^−9 | 0.05 | 1.32 × 10^−8 | 1.16 × 10^−4
F15 | 8.50 × 10^−6 | 1.04 × 10^−3 | 3.96 | 6.20 × 10^−6 | 50.28 | 1.15 × 10^−5 | 0.19
F16 | 113.00 | 1.90 × 10^8 | 4.50 × 10^12 | 2.37 | 1.61 × 10^15 | 17.62 | 1.27 × 10^10
TOTAL | 4 | 0 | 0 | 9 | 0 | 1 | 2
Table 10. Mean function values obtained by EHO and six improved methods with D = 1000.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 4.04 × 10^−4 | 1.29 × 10^−3 | 0.03 | 8.66 × 10^−5 | 1.79 | 1.90 × 10^−4 | 0.01
F02 | 3.14 × 10^−3 | 5.68 × 10^−3 | 0.20 | 6.20 × 10^−4 | 5.45 | 1.29 × 10^−3 | 0.04
F03 | 1.11 × 10^−5 | 5.10 × 10^−5 | 0.07 | 1.41 × 10^−7 | 16.18 | 8.78 × 10^−7 | 1.97 × 10^−3
F04 | 3.05 × 10^−12 | 1.17 × 10^−9 | 7.48 × 10^−4 | 4.48 × 10^−15 | 16.77 | 3.01 × 10^−14 | 4.97 × 10^−7
F05 | 91.29 | 91.36 | 91.19 | 91.36 | 91.53 | 91.36 | 91.38
F06 | 38.23 | 38.90 | 39.79 | 39.11 | 42.71 | 39.08 | 39.26
F07 | 5.15 × 10^−5 | 1.20 × 10^−3 | 0.39 | 1.01 × 10^−6 | 55.16 | 3.93 × 10^−6 | 0.01
F08 | 3.73 × 10^−14 | 1.00 × 10^−11 | 3.77 × 10^−6 | 2.28 × 10^−16 | 0.01 | 2.73 × 10^−16 | 1.66 × 10^−8
F09 | 1.33 × 10^−4 | 1.91 × 10^−4 | 0.91 | 3.21 × 10^−6 | 221.00 | 1.00 × 10^−5 | 0.05
F10 | 3.94 × 10^5 | 3.94 × 10^5 | 4.01 × 10^5 | 4.00 × 10^5 | 3.98 × 10^5 | 3.97 × 10^5 | 3.98 × 10^5
F11 | 5.88 × 10^−3 | 0.39 | 1.18 × 10^3 | 4.79 × 10^−4 | 2.06 × 10^4 | 1.45 × 10^−3 | 7.41
F12 | 0.03 | 0.09 | 1.94 | 66.74 | 72.28 | 54.85 | 56.84
F13 | 1.16 × 10^−3 | 1.77 × 10^−3 | 0.12 | 2.40 × 10^−4 | 1.87 | 4.74 × 10^−4 | 0.02
F14 | 6.68 × 10^−7 | 4.59 × 10^−6 | 0.03 | 1.79 × 10^−8 | 1.00 | 5.22 × 10^−8 | 1.46 × 10^−4
F15 | 7.43 × 10^−4 | 0.02 | 14.34 | 2.45 × 10^−5 | 1.73 × 10^3 | 9.52 × 10^−5 | 0.57
F16 | 4.71 × 10^5 | 8.97 × 10^8 | 1.28 × 10^14 | 77.47 | 2.51 × 10^18 | 3.89 × 10^3 | 4.76 × 10^12
TOTAL | 3 | 0 | 1 | 12 | 0 | 0 | 0
Table 11. Standard deviation of the function values obtained by EHO and the six improved methods with D = 1000.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 8.50 × 10^−6 | 2.69 × 10^−3 | 0.04 | 2.81 × 10^−5 | 0.07 | 6.05 × 10^−5 | 0.01
F02 | 6.45 × 10^−5 | 6.76 × 10^−3 | 0.24 | 3.09 × 10^−4 | 0.39 | 4.28 × 10^−4 | 0.03
F03 | 7.82 × 10^−8 | 1.24 × 10^−4 | 0.21 | 1.02 × 10^−7 | 1.70 | 4.10 × 10^−7 | 2.21 × 10^−3
F04 | 2.22 × 10^−13 | 3.61 × 10^−9 | 1.86 × 10^−3 | 8.75 × 10^−15 | 2.65 | 3.01 × 10^−14 | 1.80 × 10^−6
F05 | 0.06 | 0.02 | 0.49 | 0.01 | 0.06 | 0.01 | 0.01
F06 | 0.22 | 0.39 | 0.15 | 0.22 | 1.08 | 0.18 | 0.22
F07 | 1.06 × 10^−6 | 4.96 × 10^−3 | 0.58 | 6.86 × 10^−7 | 5.68 | 1.98 × 10^−6 | 0.01
F08 | 9.68 × 10^−16 | 2.77 × 10^−11 | 1.55 × 10^−5 | 3.36 × 10^−18 | 2.65 × 10^−3 | 6.47 × 10^−17 | 9.11 × 10^−8
F09 | 4.43 × 10^−6 | 3.58 × 10^−4 | 1.64 | 2.11 × 10^−6 | 20.66 | 7.28 × 10^−6 | 0.09
F10 | 2.19 × 10^3 | 2.04 × 10^3 | 2.11 × 10^3 | 1.98 × 10^3 | 1.60 × 10^3 | 1.93 × 10^3 | 2.75 × 10^3
F11 | 9.97 × 10^−4 | 1.46 | 4.01 × 10^3 | 4.03 × 10^−4 | 9.58 × 10^3 | 8.04 × 10^−4 | 24.87
F12 | 5.39 × 10^−4 | 0.15 | 1.76 | 4.90 | 1.61 | 2.31 | 1.48
F13 | 6.61 × 10^−5 | 2.68 × 10^−3 | 0.15 | 9.41 × 10^−5 | 0.15 | 1.45 × 10^−4 | 0.01
F14 | 2.21 × 10^−8 | 1.05 × 10^−5 | 0.10 | 1.05 × 10^−8 | 0.08 | 2.83 × 10^−8 | 2.89 × 10^−4
F15 | 2.62 × 10^−5 | 0.04 | 52.24 | 2.14 × 10^−5 | 213.10 | 4.96 × 10^−5 | 1.43
F16 | 2.34 × 10^4 | 3.39 × 10^9 | 4.12 × 10^14 | 93.37 | 4.52 × 10^17 | 3.42 × 10^3 | 2.67 × 10^13
TOTAL | 5 | 0 | 1 | 8 | 1 | 1 | 0
Table 12. Summary of the optimization results obtained by EHO and the six improved methods on the sixteen benchmark functions: for each dimension D, the number of functions on which each method obtained the best BEST, MEAN, WORST, and STD value. STD—standard deviation.
D | Criterion | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
50 | BEST | 0 | 13 | 2 | 1 | 0 | 0 | 0
50 | MEAN | 0 | 1 | 1 | 14 | 0 | 0 | 0
50 | WORST | 0 | 2 | 0 | 13 | 0 | 0 | 1
50 | STD | 3 | 1 | 0 | 11 | 0 | 0 | 1
50 | TOTAL | 3 | 17 | 3 | 39 | 0 | 0 | 2
100 | BEST | 0 | 13 | 1 | 2 | 0 | 0 | 0
100 | MEAN | 1 | 1 | 1 | 13 | 0 | 0 | 0
100 | WORST | 1 | 2 | 0 | 13 | 0 | 0 | 0
100 | STD | 3 | 0 | 0 | 10 | 2 | 1 | 0
100 | TOTAL | 5 | 16 | 2 | 38 | 2 | 1 | 0
200 | BEST | 1 | 10 | 1 | 3 | 0 | 1 | 0
200 | MEAN | 0 | 2 | 1 | 13 | 0 | 0 | 0
200 | WORST | 1 | 2 | 0 | 13 | 0 | 0 | 0
200 | STD | 4 | 0 | 0 | 9 | 2 | 1 | 0
200 | TOTAL | 6 | 14 | 2 | 38 | 2 | 2 | 0
500 | BEST | 1 | 11 | 1 | 2 | 0 | 0 | 1
500 | MEAN | 1 | 0 | 1 | 13 | 0 | 0 | 1
500 | WORST | 2 | 0 | 0 | 14 | 0 | 0 | 0
500 | STD | 4 | 0 | 0 | 9 | 0 | 1 | 2
500 | TOTAL | 8 | 11 | 2 | 38 | 0 | 1 | 4
1000 | BEST | 0 | 11 | 1 | 3 | 0 | 0 | 1
1000 | MEAN | 3 | 0 | 1 | 12 | 0 | 0 | 0
1000 | WORST | 3 | 1 | 0 | 12 | 0 | 0 | 0
1000 | STD | 5 | 0 | 1 | 8 | 1 | 1 | 0
1000 | TOTAL | 11 | 12 | 3 | 35 | 1 | 1 | 1
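Each row of Table 12 counts, for one criterion (best, mean, worst, or standard deviation), the number of benchmark functions on which a method obtained the most favourable value, and the TOTAL row sums the four criteria. A minimal sketch of that tally is given below; it assumes minimization and credits a tie to a single method, since the paper's exact tie-breaking rule is not restated in this section.

```python
import numpy as np

METHODS = ["EHO", "R1", "RR1", "R2", "RR2", "R3", "RR3"]

def count_wins(values):
    """values: array of shape (num_functions, 7) holding, for one criterion,
    the value of each method on each benchmark (minimization assumed).
    Returns how many functions each method wins, i.e. one row of Table 12.
    argmin credits a tie to the first method with the minimal value, which
    is an assumption made for this sketch."""
    values = np.asarray(values, dtype=float)
    winners = values.argmin(axis=1)
    return {m: int(np.sum(winners == i)) for i, m in enumerate(METHODS)}
```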
Table 13. Details of 14 Congress on Evolutionary Computation (CEC) 2017 constrained functions. D is the number of decision variables, I is the number of inequality constraints, and E is the number of equality constraints.
No. | Problem | Search Range | Type of Objective | E | I | Type of Constraints
F01 | C05 | [−10, 10]D | Non-Separable | 0 | 2 | Non-Separable, Rotated
F02 | C06 | [−20, 20]D | Separable | 6 | 0 | Separable
F03 | C07 | [−50, 50]D | Separable | 2 | 0 | Separable
F04 | C08 | [−100, 100]D | Separable | 2 | 0 | Non-Separable
F05 | C09 | [−10, 10]D | Separable | 2 | 0 | Non-Separable
F06 | C10 | [−100, 100]D | Separable | 2 | 0 | Non-Separable
F07 | C12 | [−100, 100]D | Separable | 0 | 2 | Separable
F08 | C13 | [−100, 100]D | Non-Separable | 0 | 3 | Separable
F09 | C15 | [−100, 100]D | Separable | 1 | 1 |
F10 | C16 | [−100, 100]D | Separable | 1 | 1 | Non-Separable (E), Separable (I)
F11 | C17 | [−100, 100]D | Non-Separable | 1 | 1 | Non-Separable (E), Separable (I)
F12 | C18 | [−100, 100]D | Separable | 1 | 2 | Non-Separable
F13 | C25 | [−100, 100]D | Rotated | 1 | 1 | Rotated (E), Rotated (I)
F14 | C26 | [−100, 100]D | Rotated | 1 | 1 | Rotated (E), Rotated (I)
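The E and I columns of Table 13 give the number of equality and inequality constraints of each problem. In CEC-style constrained benchmarking these are usually folded into a single constraint-violation measure; a minimal sketch is shown below, where the equality tolerance eps = 1e−4 is the customary choice in the CEC competitions and is an assumption here rather than a value quoted from the paper.

```python
import numpy as np

def constraint_violation(g_values, h_values, eps=1e-4):
    """Mean constraint violation of one solution for a problem with
    inequality constraints g_i(x) <= 0 and equality constraints h_j(x) = 0,
    as counted in the I and E columns of Table 13. Equalities are relaxed
    to |h_j(x)| <= eps before measuring the violation."""
    g = np.maximum(0.0, np.asarray(g_values, dtype=float))          # inequality violations
    h = np.maximum(0.0, np.abs(np.asarray(h_values, dtype=float)) - eps)  # relaxed equality violations
    m = g.size + h.size
    return float((g.sum() + h.sum()) / m) if m else 0.0
```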
Table 14. Mean function values obtained by EHO and six improved methods on fourteen CEC 2017 constrained optimization functions with D = 50.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 8.50 × 10^−6 | 2.69 × 10^−3 | 0.04 | 2.81 × 10^−5 | 0.07 | 6.05 × 10^−5 | 0.01
F02 | 6.45 × 10^−5 | 6.76 × 10^−3 | 0.24 | 3.09 × 10^−4 | 0.39 | 4.28 × 10^−4 | 0.03
F03 | 7.82 × 10^−8 | 1.24 × 10^−4 | 0.21 | 1.02 × 10^−7 | 1.70 | 4.10 × 10^−7 | 2.21 × 10^−3
F04 | 2.22 × 10^−13 | 3.61 × 10^−9 | 1.86 × 10^−3 | 8.75 × 10^−15 | 2.65 | 3.01 × 10^−14 | 1.80 × 10^−6
F05 | 0.06 | 0.02 | 0.49 | 0.01 | 0.06 | 0.01 | 0.01
F06 | 0.22 | 0.39 | 0.15 | 0.22 | 1.08 | 0.18 | 0.22
F07 | 1.06 × 10^−6 | 4.96 × 10^−3 | 0.58 | 6.86 × 10^−7 | 5.68 | 1.98 × 10^−6 | 0.01
F08 | 9.68 × 10^−16 | 2.77 × 10^−11 | 1.55 × 10^−5 | 3.36 × 10^−18 | 2.65 × 10^−3 | 6.47 × 10^−17 | 9.11 × 10^−8
F09 | 4.43 × 10^−6 | 3.58 × 10^−4 | 1.64 | 2.11 × 10^−6 | 20.66 | 7.28 × 10^−6 | 0.09
F10 | 2.19 × 10^3 | 2.04 × 10^3 | 2.11 × 10^3 | 1.98 × 10^3 | 1.60 × 10^3 | 1.93 × 10^3 | 2.75 × 10^3
F11 | 9.97 × 10^−4 | 1.46 | 4.01 × 10^3 | 4.03 × 10^−4 | 9.58 × 10^3 | 8.04 × 10^−4 | 24.87
F12 | 5.39 × 10^−4 | 0.15 | 1.76 | 4.90 | 1.61 | 2.31 | 1.48
F13 | 6.61 × 10^−5 | 2.68 × 10^−3 | 0.15 | 9.41 × 10^−5 | 0.15 | 1.45 × 10^−4 | 0.01
F14 | 2.21 × 10^−8 | 1.05 × 10^−5 | 0.10 | 1.05 × 10^−8 | 0.08 | 2.83 × 10^−8 | 2.89 × 10^−4
TOTAL | 1 | 2 | 9 | 2 | 0 | 0 | 0
Table 15. Standard deviation of the function values obtained by EHO and the six improved methods on fourteen CEC 2017 constrained optimization functions with D = 50.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 4.06 × 10^4 | 3.58 × 10^4 | 4.08 × 10^4 | 5.75 × 10^4 | 6.26 × 10^4 | 5.65 × 10^4 | 4.16 × 10^4
F02 | 127.50 | 123.10 | 100.70 | 86.83 | 102.00 | 122.10 | 59.11
F03 | 65.68 | 46.15 | 48.94 | 50.84 | 38.43 | 42.04 | 51.71
F04 | 0.37 | 0.61 | 0.72 | 0.72 | 0.34 | 0.70 | 0.76
F05 | 0.36 | 0.35 | 0.31 | 0.37 | 0.27 | 0.29 | 0.22
F06 | 2.75 | 2.56 | 1.91 | 2.73 | 2.66 | 2.15 | 1.19
F07 | 1.73 × 10^3 | 1.42 × 10^3 | 1.44 × 10^3 | 1.79 × 10^3 | 1.67 × 10^3 | 1.61 × 10^3 | 1.37 × 10^3
F08 | 2.51 × 10^8 | 2.07 × 10^8 | 2.54 × 10^8 | 2.49 × 10^8 | 3.87 × 10^8 | 2.38 × 10^8 | 2.63 × 10^8
F09 | 1.39 | 1.77 | 1.44 | 2.32 | 2.31 | 2.01 | 1.44
F10 | 41.32 | 40.76 | 33.77 | 39.29 | 41.69 | 36.24 | 34.50
F11 | 0.45 | 0.37 | 0.49 | 0.36 | 0.52 | 0.43 | 0.37
F12 | 1.81 × 10^3 | 1.48 × 10^3 | 1.65 × 10^3 | 1.84 × 10^3 | 1.85 × 10^3 | 1.75 × 10^3 | 1.63 × 10^3
F13 | 80.20 | 67.86 | 75.43 | 75.82 | 72.32 | 69.12 | 61.46
F14 | 1.39 | 1.54 | 2.28 | 1.66 | 2.11 | 1.88 | 1.76
TOTAL | 2 | 3 | 1 | 1 | 2 | 0 | 5
Table 16. Mean function values obtained by EHO and six improved methods on fourteen CEC 2017 constrained optimization functions with D = 100.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 1.09 × 10^6 | 9.71 × 10^5 | 9.48 × 10^5 | 9.49 × 10^5 | 1.13 × 10^6 | 9.71 × 10^5 | 1.01 × 10^6
F02 | 3.88 × 10^3 | 3.76 × 10^3 | 3.80 × 10^3 | 3.77 × 10^3 | 4.00 × 10^3 | 3.77 × 10^3 | 3.88 × 10^3
F03 | 9.35 × 10^3 | 9.35 × 10^3 | 9.35 × 10^3 | 9.35 × 10^3 | 9.34 × 10^3 | 9.35 × 10^3 | 9.37 × 10^3
F04 | 1.01 × 10^3 | 1.01 × 10^3 | 1.01 × 10^3 | 1.01 × 10^3 | 1.01 × 10^3 | 1.01 × 10^3 | 1.01 × 10^3
F05 | 9.39 | 9.37 | 9.37 | 9.36 | 9.76 | 9.35 | 9.43
F06 | 1.04 × 10^3 | 1.04 × 10^3 | 1.04 × 10^3 | 1.04 × 10^3 | 1.05 × 10^3 | 1.04 × 10^3 | 1.04 × 10^3
F07 | 6.99 × 10^4 | 6.79 × 10^4 | 6.58 × 10^4 | 6.50 × 10^4 | 7.20 × 10^4 | 6.68 × 10^4 | 6.96 × 10^4
F08 | 8.77 × 10^9 | 8.02 × 10^9 | 7.80 × 10^9 | 8.19 × 10^9 | 8.95 × 10^9 | 8.14 × 10^9 | 8.62 × 10^9
F09 | 47.52 | 47.20 | 47.73 | 47.98 | 49.02 | 47.56 | 46.86
F10 | 2.18 × 10^3 | 2.16 × 10^3 | 2.13 × 10^3 | 2.14 × 10^3 | 2.22 × 10^3 | 2.17 × 10^3 | 2.16 × 10^3
F11 | 17.99 | 17.63 | 17.18 | 17.22 | 18.60 | 17.51 | 17.96
F12 | 6.82 × 10^4 | 6.60 × 10^4 | 6.50 × 10^4 | 6.52 × 10^4 | 7.01 × 10^4 | 6.63 × 10^4 | 6.87 × 10^4
F13 | 3.60 × 10^3 | 3.55 × 10^3 | 3.58 × 10^3 | 3.57 × 10^3 | 3.78 × 10^3 | 3.59 × 10^3 | 3.66 × 10^3
F14 | 55.89 | 55.19 | 53.99 | 53.88 | 60.16 | 54.61 | 56.37
TOTAL | 1 | 2 | 4 | 4 | 1 | 1 | 1
Table 17. Standard deviation of the function values obtained by EHO and the six improved methods on fourteen CEC 2017 constrained optimization functions with D = 100.
Function | EHO | R1 | RR1 | R2 | RR2 | R3 | RR3
F01 | 7.04 × 10^4 | 6.49 × 10^4 | 7.85 × 10^4 | 8.34 × 10^4 | 9.27 × 10^4 | 6.51 × 10^4 | 7.03 × 10^4
F02 | 122.00 | 125.20 | 106.10 | 111.90 | 137.30 | 113.80 | 102.50
F03 | 73.82 | 74.87 | 72.61 | 66.75 | 96.82 | 80.98 | 65.38
F04 | 0.25 | 0.23 | 0.39 | 0.25 | 0.31 | 0.24 | 0.36
F05 | 0.14 | 0.14 | 0.14 | 0.19 | 0.13 | 0.14 | 0.09
F06 | 1.20 | 1.23 | 1.39 | 1.46 | 1.73 | 1.22 | 1.27
F07 | 2.93 × 10^3 | 2.42 × 10^3 | 3.14 × 10^3 | 3.02 × 10^3 | 3.25 × 10^3 | 2.73 × 10^3 | 2.52 × 10^3
F08 | 5.94 × 10^8 | 6.11 × 10^8 | 7.07 × 10^8 | 7.24 × 10^8 | 1.01 × 10^9 | 6.43 × 10^8 | 5.77 × 10^8
F09 | 0.77 | 0.64 | 1.25 | 0.93 | 0.57 | 0.95 | 1.06
F10 | 52.32 | 60.00 | 53.11 | 63.71 | 52.42 | 56.61 | 51.64
F11 | 0.67 | 0.72 | 0.73 | 0.72 | 0.84 | 0.57 | 0.59
F12 | 2.35 × 10^3 | 2.77 × 10^3 | 3.13 × 10^3 | 2.71 × 10^3 | 3.00 × 10^3 | 3.00 × 10^3 | 2.34 × 10^3
F13 | 80.54 | 77.54 | 125.50 | 116.70 | 119.80 | 108.50 | 72.75
F14 | 3.02 | 2.93 | 3.44 | 3.27 | 3.05 | 3.38 | 2.56
TOTAL | 1 | 3 | 0 | 0 | 1 | 1 | 8

