Article

An Improved Artificial Bee Colony Algorithm Based on Elite Strategy and Dimension Learning

1
School of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, China
2
Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang 330099, China
3
School of Business Administration, Nanchang Institute of Technology, Nanchang 330099, China
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 289; https://doi.org/10.3390/math7030289
Submission received: 19 February 2019 / Revised: 12 March 2019 / Accepted: 13 March 2019 / Published: 21 March 2019
(This article belongs to the Special Issue Evolutionary Computation)

Abstract:
Artificial bee colony (ABC) is a powerful optimization method with strong search abilities for solving many optimization problems. However, several studies have shown that ABC exhibits poor exploitation on complex optimization problems. To overcome this issue, an improved ABC variant based on an elite strategy and dimension learning (called ABC-ESDL) is proposed in this paper. The elite strategy selects better solutions to accelerate the search of ABC. Dimension learning uses the difference between two random dimensions to generate a large jump. In the experiments, a classical benchmark set and the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set are tested. Computational results show that the proposed ABC-ESDL achieves more accurate solutions than ABC and five other improved ABC variants.

1. Introduction

Various optimization problems arise in many real-world applications; they aim to select optimal parameters (variables) to maximize (or minimize) performance indicators. In general, a minimization problem can be defined by:
min f(X),  (1)
where X is the vector of the decision variables.
To effectively solve optimization problems, intelligent optimization methods have been presented. Some representative algorithms are particle swarm optimization [1,2,3,4,5], artificial bee colony (ABC) [6,7], differential evolution [8,9], firefly algorithm [10,11,12,13], earthworm optimization algorithm [14], cuckoo search [15,16], moth search [17], pigeon inspired optimization [18], bat algorithm [19,20,21,22,23], krill herd algorithm [24,25,26,27], and social network optimization [28]. Among these algorithms, ABC has few control parameters and strong exploration abilities [29,30].
ABC simulates the foraging behaviors of bees in nature [6]. The processes of bees finding food sources are analogous to the processes of searching for candidate solutions to a given problem. Although ABC is effective on many problems, it suffers from poor exploitation and slow convergence rates [31,32]. Two possible reasons can be identified: (1) offspring lie in the neighborhoods of their corresponding parent solutions and are near to each other, and (2) offspring and their parents are similar because of the one-dimension perturbation.
In this work, a new ABC variant based on elite strategy and dimension learning (ESDL), called ABC-ESDL, is presented to enhance the performance of ABC. For the elite strategy, better solutions are chosen to guide the search. Moreover, the differences between different dimensions are used to generate candidate solutions with large dissimilarities. In the experiments, a classical benchmark set (with dimensions 30 and 100) and the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set are tested. Results of ABC-ESDL are compared with ABC and five other modified ABCs.
The remainder of this work is organized as follows. In Section 2, the concept and definitions of ABC are introduced. Some recent work on ABC is given in Section 3. The proposed ABC-ESDL is described in Section 4. Test problems, results, and discussions are presented in Section 5. Finally, this work is summarized in Section 6.

2. Artificial Bee Colony

Like other bio-inspired algorithms, ABC is also a population-based stochastic method. Bees in the population try to find new food sources (candidate solutions). According to the species of bees, ABC consists of three types of bees: employed bees, onlooker bees, and scouts. The employed bees search the neighborhood of solutions in the current population, and they share their search experiences with the onlooker bees. Then, the onlooker bees choose better solutions and re-search their neighborhoods to find new candidate solutions. When solutions cannot be improved during the search, the scouts randomly initialize them [33].
Let Xi = (xi1, xi2, …, xiD) be the i-th solution in the population at the t-th iteration. An employed bee randomly selects a different solution Xk from the current population and chooses a random dimension index j. Then, a new solution Vi is obtained by [33]:
v_ij = x_ij + φ_ij (x_ij − x_kj),  (2)
where i = 1, 2, …, N, and φij is randomly chosen from [−1.0, 1.0]. As seen, the new solution Vi is similar to its parent solution Xi, and their differences are only on the j-th dimension. If Vi is better than Xi, Xi is updated by Vi. This means that bees find better solutions during the current search. However, this search process is slow, because the similarities between Xi and Vi are very large.
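The one-dimension perturbation of Equation (2) and the greedy selection can be sketched as follows (an illustrative Python sketch, not the authors' implementation; function and variable names are my own):

```python
import random

def employed_bee_step(pop, f):
    """One employed-bee pass: perturb each solution on a single random
    dimension (Equation (2)) and keep the better of parent and offspring."""
    n, d = len(pop), len(pop[0])
    for i in range(n):
        k = random.choice([s for s in range(n) if s != i])  # a different solution X_k
        j = random.randrange(d)                             # one random dimension
        phi = random.uniform(-1.0, 1.0)
        v = pop[i][:]                                       # offspring starts as a copy of X_i
        v[j] = pop[i][j] + phi * (pop[i][j] - pop[k][j])    # Equation (2)
        if f(v) < f(pop[i]):                                # greedy selection (minimization)
            pop[i] = v
    return pop
```

Note that each offspring differs from its parent in at most one dimension, which is exactly the source of the high similarity mentioned above.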
When the employed bees complete their neighborhood search, all solutions are updated by comparing each pair {Xi, Vi}. Then, the selection probability pi for each Xi is defined as follows [33]:
p_i = fit_i / Σ_{j=1}^{N} fit_j,  (3)
where fit_i is the fitness value of Xi, calculated by:
fit_i = { 1/(1 + f_i), if f_i ≥ 0; 1 + abs(f_i), otherwise,  (4)
where fi is the function value of Xi. It is obvious that a better solution will have a larger selection probability. So, the onlooker bees focus on searching the neighborhoods for better solutions. This may accelerate the convergence.
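Equations (3) and (4) can be sketched as follows (an illustrative Python sketch; function names are mine, not from the paper):

```python
def fitness(f_val):
    """Equation (4): map an objective value f_i to a fitness value fit_i."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

def selection_probabilities(f_values):
    """Equation (3): probability of each solution being chosen by an onlooker."""
    fits = [fitness(f) for f in f_values]
    total = sum(fits)
    return [fit / total for fit in fits]
```

For minimization, a smaller objective value yields a larger fitness and hence a larger selection probability, which is what biases the onlooker bees toward better solutions.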
For a specific solution X, if the employed bees and onlooker bees cannot find any new solution in its neighborhood to replace it, the solution may be trapped in a local minimum. Then, a scout re-initializes it as follows [33]:
x_j = L_j + rand_j (U_j − L_j),  (5)
where j = 1, 2, …, D, [Lj, Uj] is the search range of the j-th dimension, and randj is randomly chosen from [0, 1.0] for the j-th dimension.
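Equation (5) amounts to a uniform re-sampling of the whole solution, as in the following sketch (illustrative code; the function name is my own):

```python
import random

def scout_reinitialize(lower, upper):
    """Equation (5): re-initialize a stagnant solution component-wise,
    uniformly within the search range [L_j, U_j] of each dimension."""
    return [lj + random.random() * (uj - lj) for lj, uj in zip(lower, upper)]
```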

3. Related Work

Since the introduction of ABC, many different ABC variants and applications have been proposed. Some recent work on ABC is presented as follows.
Zhu and Kwong [31] modified the search model by introducing the global best solution (Gbest). Experiments confirmed that the modifications could improve the search efficiency. Karaboga and Gorkemli [34] presented a quick ABC (qABC) by employing a new solution search equation for the onlooker bees. Moreover, the neighborhood of Gbest was used to help the search. Gao and Liu [35] used the mutation operator of differential evolution (DE) to modify the solution search equation of ABC. Wang et al. [32] integrated multiple solution search strategies into ABC. It was expected that the multi-strategy mechanism could balance exploration and exploitation abilities. Cui et al. [36] proposed a new ABC with a depth-first search framework and elite-guided search equation (DFSABC-elite), which assigned more computational resources to the better solutions. In addition, elite solutions were incorporated to modify the solution search equation. Li et al. [37] embedded a crossover operator into ABC to obtain a good performance. Yao et al. [38] used a multi-population technique in ABC. The entire population consisted of three subgroups, and each one used different evolutionary operators to play different roles in the search. Kumar and Mishra [39] introduced covariance matrices into ABC. Experiments on the comparing continuous optimisers (COCO) benchmarks showed the approach was robust and effective. Yang et al. [40] designed an adaptive encoding learning based on covariance matrix learning. Furthermore, the selection was also adaptive according to the success rate of candidate solutions. Chen et al. [41] first employed multiple different solution search models in ABC; then, an adaptive method was designed to determine the chosen rate of each model.
In [42], a binary ABC was used to solve the spanning tree construction problem. Compared to the traditional Kruskal algorithm, the binary ABC could find sub-optimal spanning trees. In [43], a hybrid ABC was employed to tackle the effects of over-fitting in high dimensional datasets. In [44], chaos and quantum theory were used to improve the performance of ABC. Dokeroglu et al. [45] used a parallel ABC variant to optimize the quadratic assignment problem. Kishor et al. [46] presented a multi-objective ABC based on non-dominated sorting. A new method was used for the employed bees to achieve convergence and diversity, while the onlooker bees use operations similar to those of the standard ABC. Research on wireless sensor networks (WSNs) has attracted much attention [47,48,49]. Hashim et al. [50] proposed a new energy-efficient optimal deployment strategy based on ABC in WSNs, in which ABC was used to optimize the network parameters.

4. Proposed Approach

In this section, a new ABC variant based on elite strategy and dimension learning (ABC-ESDL) is proposed. The proposed strategies and algorithm framework are described in the following subsections.

4.1. Elite Strategy

Many scholars have noticed that the original ABC was not good at exploitation during the search. To tackle this issue, several elite strategies were proposed. It is expected that elite solutions could help the search and save computational resources.
Zhu and Kwong used Gbest to modify the solution search model as below [31]:
v_ij = x_ij + φ_ij (x_ij − x_kj) + ψ_ij (Gbest_j − x_ij),  (6)
where φ_ij and ψ_ij are two random values between −1.0 and 1.0.
Motivated by the mutation strategy of DE, new search equations were designed as follows [32,35]:
v_ij = Gbest_j + φ_ij (x_rj − x_kj),  (7)
v_ij = Gbest_j + φ_ij (Gbest_j − x_kj),  (8)
where Xr and Xk are two different solutions.
In our previous work [51], an external archive was constructed to store Gbests during the iterations. Then, these Gbests are used to guide the search:
v_ij = Ã_j + φ_ij (x_rj − x_kj),  (9)
where Ã is randomly chosen from the external archive.
Similar to [51], Cui et al. [36] designed an elite set E, which stores the best ρ*N solutions in the current population, where ρ ∈ (0,1). Based on the elite set, two modified search equations are defined as below:
v_ij = E_lj + φ_ij (E_lj − x_kj),  (10)
v_ij = (E_lj + Gbest_j)/2 + φ_ij (Gbest_j − x_kj),  (11)
where El is randomly chosen from the set E.
Inspired by the above work, a new search model for the employed bees is designed:
v_ij = (E_lj + Gbest_j)/2 + φ_ij (x_ij − E_lj) + ψ_ij (x_ij − Gbest_j),  (12)
where E_l is randomly chosen from the elite set E, φ_ij is a random value between −0.5 and 0.5, and ψ_ij is a random value between 0 and 1.0.
As mentioned before, the onlooker bees re-search the neighborhoods of good solutions to find potentially better solutions. Therefore, further searching by the onlooker bees can be regarded as the exploitation phase. How to improve the effectiveness of the onlooker bees is important to the quality of exploitation. Thus, a different method is designed for the onlooker bees:
v_ij = (E_mj + Gbest_j)/2 + φ_ij (x_ij − E_lj) + ψ_ij (x_ij − Gbest_j),  (13)
where m = 1, 2, ..., M; M is the elite set size; and El is randomly chosen from the set E. If a solution Xi is selected based on the probability pi, an onlooker bee generates M candidate solutions according to Equation (13). Each candidate solution is compared with Xi, and the better one is used as the new Xi. The size of the elite set should be small, because a large M will result in high computational time complexity.
To maintain the size of the elite set E, a simple replacement method is used. Initially, the best M solutions in the population are selected into E. During the search, if the offspring Vi is better than the worst solution Ew in the elite set E, we replace Ew with Vi. Then, the size of E will be M in the whole search.
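The elite-guided candidate generation of Equation (12) and the elite-set replacement rule can be sketched as follows (an illustrative Python sketch under my own naming; like Equation (2), only one random dimension j is assumed to be perturbed):

```python
import random

def elite_employed_candidate(x_i, elite_set, gbest):
    """Equation (12): candidate guided by a random elite solution E_l and
    the global best. phi is drawn from [-0.5, 0.5], psi from [0, 1]."""
    e_l = random.choice(elite_set)
    j = random.randrange(len(x_i))
    phi = random.uniform(-0.5, 0.5)
    psi = random.uniform(0.0, 1.0)
    v = x_i[:]
    v[j] = (0.5 * (e_l[j] + gbest[j])
            + phi * (x_i[j] - e_l[j])
            + psi * (x_i[j] - gbest[j]))
    return v

def update_elite_set(elite_set, v, f):
    """Replace the worst elite member E_w with offspring v if v is better,
    so the set keeps exactly M members throughout the search."""
    w = max(range(len(elite_set)), key=lambda idx: f(elite_set[idx]))
    if f(v) < f(elite_set[w]):
        elite_set[w] = v
    return elite_set
```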

4.2. Dimension Learning

In ABC, a random dimension j is selected for conducting the solution search equation. Under this dimension, if the component values are similar, the difference (x_ij − x_kj) will be very small. This means that the step size (x_ij − x_kj) cannot help Xi jump to a far position. If the solution is trapped in a local minimum, it can hardly escape. In [52], the concept of dimension learning was proposed: the difference (x_ij − x_kh) between two different dimensions is used as the step size, where j and h are two randomly selected dimension indices and j ≠ h. In general, the difference between two different dimensions is large. A large step size may help trapped solutions jump to better positions.
Based on the above analysis, dimension learning is embedded into Equations (12) and (13). Then, the new search models are rewritten as below:
v_ij = (E_lh + Gbest_j)/2 + φ_ij (x_ih − E_lj) + ψ_ij (x_ih − Gbest_j),  (14)
v_ij = (E_mj + Gbest_h)/2 + φ_ij (x_ij − E_lh) + ψ_ij (x_ij − Gbest_h),  (15)
where h is a random dimension and j ≠ h.
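The effect of dimension learning on the employed-bee model can be sketched as below (an illustrative Python sketch mirroring Equation (14); names are my own, and as before only dimension j is assumed to be updated):

```python
import random

def dl_employed_candidate(x_i, elite_set, gbest):
    """Equation (14): Equation (12) with dimension learning -- the guiding
    terms mix in a second random dimension h (h != j) to enlarge the step."""
    d = len(x_i)
    j = random.randrange(d)
    h = random.choice([dim for dim in range(d) if dim != j])
    e_l = random.choice(elite_set)
    phi = random.uniform(-0.5, 0.5)
    psi = random.uniform(0.0, 1.0)
    v = x_i[:]
    v[j] = (0.5 * (e_l[h] + gbest[j])
            + phi * (x_i[h] - e_l[j])
            + psi * (x_i[h] - gbest[j]))
    return v
```

Because components on different dimensions are generally far apart, the cross-dimension differences above tend to be much larger than the same-dimension differences in Equation (12).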

4.3. Framework of Artificial Bee Colony-Elite Strategy and Dimension Learning

Our approach, ABC-ESDL, consists of four main operations: elite set updating, the employed bee phase, the onlooker bee phase, and the scout bee phase. The first operation occurs within the employed and onlooker bee phases, so only the latter three are presented.
In the employed bee phase, for each Xi, a new candidate solution Vi is created by Equation (14). The better of Vi and Xi is chosen as the new Xi. If Vi is better than the worst elite solution Ew, Ew is replaced by Vi. The procedure of the employed bee phase is presented in Algorithm 1, where FEs is the number of function evaluations.
Algorithm 1: Framework of the Employed bee phase
Begin
for i = 1 to N do
  Generate Vi by Equation (14);
  Compute f(Vi) and FEs = FEs + 1;
  if f(Vi) < f(Xi) then
    Update Xi by Vi, and set triali = 0;
    Update Ew, if possible;
  else
    triali = triali + 1;
  end if
end for
End
The onlooker bee phase is described in Algorithm 2, where rand(0,1) is a random value in the range [0, 1]. Compared to the employed bees, a different search model is employed for the onlooker bees. In Algorithm 1, an elite solution El is chosen from E at random and used to generate a new Vi. In Algorithm 2, all M elite solutions in E are used, so M new candidate solutions are generated. Each of them is compared with the original Xi, and the best one is taken as the new Xi.
Algorithm 2: Framework of the Onlooker bee phase
Begin
 Calculate the probability pi by Equation (3);
i = 1, t = 1;
while t ≤ N do
  if rand(0,1) < pi then
   for m = 1 to M do
    Generate Vi by Equation (15);
    Compute f(Vi) and FEs = FEs + 1;
    if f(Vi) < f(Xi) then
     Update Xi by Vi, and set triali = 0;
     Update Ew, if possible;
    else
     triali = triali + 1;
    end if
   end for
   t = t + 1;
  end if
  i = (i % N) + 1;
end while
End
When triali is set to 0, it means that the solution Xi has been improved. If the value of triali exceeds a predefined value limit, the solution Xi may have fallen into a local minimum, and the current Xi should be re-initialized. The main steps of the scout bee phase are given in Algorithm 3.
Algorithm 3: Framework of the Scout bee phase
Begin
if triali > limit then
  Initialize Xi by Equation (5);
  Compute f(Xi) and FEs = FEs + 1;
end if
 Update the global best solution;
End
The framework of our approach, ABC-ESDL, is presented in Algorithm 4, where N represents the population size, M is the elite set size, and MaxFEs is the maximum value of FEs. To clearly illustrate the proposed ABC-ESDL, Figure 1 gives its flowchart.
Algorithm 4: Framework of ABC-ESDL
Begin
 Initialize N solutions in the population;
 Initialize the elite set E;
 Set triali = 0, i = 1, 2, ..., N;
while FEs ≤ MaxFEs do
  Execute Algorithm 1;
  Execute Algorithm 2;
  Execute Algorithm 3;
  Update the global best solution;
end while
End

5. Experimental Study

5.1. Test Problems

To verify the performance of ABC-ESDL, 12 benchmark functions with dimensions 30 and 100 were utilized in the following experiments. These functions have been widely employed to test optimization algorithms [53,54,55,56,57,58]. Table 1 presents the descriptions of the benchmark set, where D is the dimension size; the global optimum is listed in the last column.

5.2. Parameter Settings

In the experiments, ABC-ESDL was tested on the benchmark set with D = 30 and 100, respectively. Results of ABC-ESDL were compared with several other ABCs. The involved ABCs are listed as follows:
  • ABC;
  • Gbest guided ABC (GABC) [31];
  • Improved ABC (IABC) [51];
  • Modified ABC (MABC) [35];
  • ABC with variable search strategy (ABCVSS) [59];
  • ABC with depth-first search framework and elite-guided search equation (DFSABC-elite) [36];
  • Our approach, ABC-ESDL.
To attain a fair comparison, the same parameter settings were used. For both D = 30 and 100, N and limit were set to 100. For D = 30, MaxFEs was set to 1.5 × 10^5; for D = 100, MaxFEs was set to 5.0 × 10^5. The constant C = 1.5 was used in GABC [31]. In MABC, the parameter p = 0.7 was used [35]. The archive size m was set to 5 in IABC [51]. The number of solution search equations used in ABCVSS was 5 [59]. In DFSABC-elite, p and r were set to 0.1 and 10, respectively [36]. In ABC-ESDL, the size M of the elite set was set to 5. All algorithms were run 100 times for each problem. The computing platform was a PC with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz, 4 GB RAM, and Microsoft Visual Studio 2010.

5.3. Comparison between ABC-ESDL and Other ABC Variants

Table 2 shows the results of ABC-ESDL and six other ABCs for D = 30, where "Mean" indicates the mean function value and "Std Dev" represents the standard deviation. The term "w/t/l" summarizes the comparison between ABC-ESDL and the six competitors: ABC-ESDL outperformed a compared algorithm on w functions, obtained the same results on t functions, and was worse on l functions. As shown, ABC-ESDL was better than ABC on all functions except f6, on which all ABCs converged to the global minimum. Compared to GABC, ABC-ESDL performed better on nine functions, and both attained similar results on three functions. ABC-ESDL, IABC, and ABCVSS achieved the same performances on four functions, and ABC-ESDL found more accurate solutions than IABC and ABCVSS on the remaining eight functions. DFSABC-elite outperformed ABC-ESDL on only one function, f4, while ABC-ESDL was better than DFSABC-elite on seven functions.
Table 3 lists the results of ABC-ESDL and six other ABCs for D = 100. From the results, ABC-ESDL surpassed ABC on all problems. ABC-ESDL, GABC, and IABC obtained the same results on f6 and f8, and ABC-ESDL obtained better solutions on the remaining ten functions. Compared to MABC and ABCVSS, ABC-ESDL was better on seven functions, and the three algorithms had the same performance on five functions. DFSABC-elite outperformed ABC-ESDL on two functions, while ABC-ESDL was better than DFSABC-elite on five functions; both obtained similar performances on the other five functions.
Figure 2 presents the convergence processes of ABC-ESDL, DFSABC-elite, MABC, and ABC on selected problems with D = 30. As seen, ABC-ESDL was faster than DFSABC-elite, MABC, and ABC. For f1, f2, f10, and f12, DFSABC-elite converged faster than MABC and ABC. For f5, DFSABC-elite was the slowest algorithm, and ABC was faster than DFSABC-elite on f7. For f10, ABC-ESDL was slower than DFSABC-elite in the early search stage, but faster in the final stage.
Following the suggestions of [53,56], a nonparametric statistical test was used to compare the overall performances of the seven ABCs. The mean rank of each algorithm on the whole benchmark set was calculated by the Friedman test. Table 4 gives the mean rank values of the seven ABCs for D = 30 and 100; the smallest rank value indicates the best performance. For both D = 30 and 100, ABC-ESDL achieved the best performance, and DFSABC-elite was in second place. For D = 30, MABC and ABCVSS had the same rank; when the dimension increased to 100, ABCVSS obtained a better rank than MABC.

5.4. Effects of Different Strategies

There are two modifications in ABC-ESDL: elite strategy (ES) and dimension learning (DL). To investigate the effects of different strategies (ES and DL), we tested different combinations between ABC, ES, and DL on the benchmark set. The involved combinations are listed as below:
  • ABC without ES or DL;
  • ABC-ES: ABC with elite strategy;
  • ABC-DL: ABC with dimension learning;
  • ABC-ESDL: ABC with elite strategy and dimension learning.
For the above four ABC algorithms, the parameter settings were kept the same as in Section 5.3. The parameters MaxFEs, N, limit, and M were set to 5000*D, 100, 100, and 5, respectively. All algorithms ran 100 times for each problem for D = 30 and 100.
Table 5 presents the comparison of ABC-ESDL, ABC-ES, ABC-DL, and ABC for D = 30. The best result for each function is shown in boldface. From the results, all four algorithms obtained the same results on f6. ABC was worse than ABC-ES on eight problems, while ABC-ES obtained worse results on three problems. ABC-DL outperformed ABC on ten problems and was worse than ABC on only one problem. ABC-ESDL outperformed ABC-DL and ABC on 11 problems. Compared to ABC-ES, ABC-ESDL was better on ten problems, and both had the same performance on the remaining two problems.
Table 6 gives the results of ABC-ESDL, ABC-ES, ABC-DL, and ABC for D = 100. The best result for each function is shown in boldface. The conclusions are similar to those for D = 30: ABC-ESDL performed better than ABC, ABC-ES, and ABC-DL; ABC-ES was better than ABC-DL on most test problems; and both outperformed the original ABC.
From the above analysis, ABC with a single strategy (ES or DL) achieved better results than the original ABC. By introducing both ES and DL into ABC, the performance of ABC-ESDL was further enhanced, and it outperformed ABC and ABC with a single strategy. This demonstrates that both ES and DL are helpful in strengthening the performance of ABC.

5.5. Results of the CEC 2013 Benchmark Set

In Section 5.3, ABC-ESDL was tested on several classical benchmark functions. To verify the performance of ABC-ESDL on difficult functions, the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set was utilized in this section [60].
In the experiments, ABC-ESDL was compared with ABC, GABC, MABC, ABCVSS, and DFSABC-elite on the CEC 2013 benchmark set with D = 30. Following the suggestions of [60], MaxFEs was set to 10,000*D. For the other parameters, the same settings were used as described in Section 5.3. For each test function, each algorithm was run 51 times. Throughout the experiments, the mean function error value (f(X) − f(X*)) was reported, where X is the best solution found by the algorithm in a run, and X* is the global optimum of the test function [60].
Table 7 presents the computational results of ABC-ESDL, DFSABC-elite, ABCVSS, MABC, GABC, and ABC on the CEC 2013 benchmark set, where "Mean" indicates the mean function error value and "Std Dev" represents the standard deviation. The best result for each function is shown in boldface. From the results, ABC-ESDL outperformed ABC and GABC on 25 functions, but was worse on the remaining three. Compared to MABC, ABC-ESDL achieved better results on 20 functions, while MABC was better on the remaining eight. ABC-ESDL performed better than ABCVSS and DFSABC-elite on 21 and 22 functions, respectively. From the above analysis, even on difficult functions, ABC-ESDL still obtained better performances than the compared algorithms.

6. Conclusions

To balance exploration and exploitation, an improved version of ABC, called ABC-ESDL, is proposed in this paper. ABC-ESDL contains two modifications: an elite strategy (ES) and dimension learning (DL). The elite strategy guides the search: good solutions are selected into the elite set and used to modify the search model, and a simple replacement method maintains the size of the elite set. In dimension learning, the difference between different dimensions can produce a large jump to help trapped solutions escape from local minima. The performance of ABC-ESDL is verified on twelve classical benchmark functions (with dimensions 30 and 100) and the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) benchmark set.
Computational results of ABC-ESDL are compared with ABC, GABC, IABC, MABC, ABCVSS, and DFSABC-elite. For D = 30 and 100, ABC-ESDL is never worse than ABCVSS, MABC, IABC, GABC, or ABC. DFSABC-elite is better than ABC-ESDL on only one problem for D = 30 and two problems for D = 100; on the remaining problems, ABC-ESDL outperforms DFSABC-elite. On the CEC 2013 benchmark set, ABC-ESDL also achieves better performances than the compared algorithms.
Another experiment investigates the effectiveness of ES and DL. Results show that ES or DL can achieve improvements. ABC with two strategies (both ES and DL) surpasses ABC and ABC with a single strategy (ES or DL). It confirms the effectiveness of our proposed strategies.
For the onlooker bees, an offspring is generated for each elite solution in the elite set, so an onlooker bee generates M new solutions when a parent solution Xi is selected. This increases the computational time, and a small parameter M is used to limit this effort. In future work, other strategies will be considered to replace the current method. In addition, more test functions [61] will be considered to further verify the performance of our approach.

Author Contributions

Writing—original draft preparation, S.X. and H.W.; writing—review and editing, W.W.; visualization, S.X.; supervision, H.W., D.T., Y.W., X.Y., and R.W.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 61663028, 61703199), the Distinguished Young Talents Plan of Jiangxi Province (No. 20171BCB23075), the Natural Science Foundation of Jiangxi Province (No. 20171BAB202035), the Science and Technology Plan Project of Jiangxi Provincial Education Department (Nos. GJJ170994, GJJ180940), and the Open Research Fund of Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing (No. 2016WICSIP015).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kennedy, J. Particle Swarm Optimization. In Proceedings of the 1995 International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  2. Wang, F.; Zhang, H.; Li, K.S.; Lin, Z.Y.; Yang, J.; Shen, X.L. A hybrid particle swarm optimization algorithm using adaptive learning strategy. Inf. Sci. 2018, 436–437, 162–177. [Google Scholar] [CrossRef]
  3. Souza, T.A.; Vieira, V.J.D.; Souza, M.A.; Correia, S.E.N.; Costa, S.L.N.C.; Costa, W.C.A. Feature selection based on binary particle swarm optimisation and neural networks for pathological voice detection. Int. J. Bio-Inspir. Comput. 2018, 11, 91–101. [Google Scholar] [CrossRef]
  4. Sun, C.L.; Jin, Y.C.; Chen, R.; Ding, J.L.; Zeng, J.C. Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems. IEEE Trans. Evol. Comput. 2017, 21, 644–660. [Google Scholar] [CrossRef]
  5. Wang, H.; Wu, Z.J.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714. [Google Scholar]
  6. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-tr06; Engineering Faculty, Computer Engineering Department, Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  7. Amiri, E.; Dehkordi, M.N. Dynamic data clustering by combining improved discrete artificial bee colony algorithm with fuzzy logic. Int. J. Bio-Inspir. Comput. 2018, 12, 164–172. [Google Scholar] [CrossRef]
  8. Meang, Z.; Pan, J.S. HARD-DE: Hierarchical archive based mutation strategy with depth information of evolution for the enhancement of differential evolution on numerical optimization. IEEE Access 2019, 7, 12832–12854. [Google Scholar] [CrossRef]
  9. Meang, Z.; Pan, J.S.; Kong, L.P. Parameters with adaptive learning mechanism (PALM) for the enhancement of differential evolution. Knowl.-Based Syst. 2018, 141, 92–112. [Google Scholar] [CrossRef]
  10. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Etobicoke, ON, Canada, 2010. [Google Scholar]
  11. Wang, H.; Wang, W.; Sun, H.; Rahnamayan, S. Firefly algorithm with random attraction. Int. J. Bio-Inspir. Comput. 2016, 8, 33–41. [Google Scholar] [CrossRef]
  12. Wang, H.; Wang, W.J.; Cui, Z.H.; Zhou, X.Y.; Zhao, J.; Li, Y. A new dynamic firefly algorithm for demand estimation of water resources. Inf. Sci. 2018, 438, 95–106. [Google Scholar] [CrossRef]
  13. Wang, H.; Wang, W.J.; Cui, L.Z.; Sun, H.; Zhao, J.; Wang, Y.; Xue, Y. A hybrid multi-objective firefly algorithm for big data optimization. Appl. Soft Comput. 2018, 69, 806–815. [Google Scholar] [CrossRef]
  14. Wang, G.G.; Deb, S.; Coelho, L.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspir. Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
  15. Yang, X.S.; Deb, S. Cuckoo Search via Levy Flights. Mathematics 2010, 1, 210–214. [Google Scholar]
  16. Zhang, M.; Wang, H.; Cui, Z.; Chen, J. Hybrid multi-objective cuckoo search with dynamical local search. Memet. Comput. 2018, 10, 199–208. [Google Scholar] [CrossRef]
  17. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memet. Comput. 2016, 10, 1–14. [Google Scholar] [CrossRef]
  18. Cui, Z.; Wang, Y.; Cai, X. A pigeon-inspired optimization algorithm for many-objective optimization problems. Sci. China Inf. Sci. 2019, 62, 070212. [Google Scholar] [CrossRef]
  19. Yang, X.S. A new metaheuristic bat-inspired algorithm. Comput. Knowl. Technol. 2010, 284, 65–74. [Google Scholar]
  20. Wang, Y.; Wang, P.; Zhang, J.; Cui, Z.; Cai, X.; Zhang, W.; Chen, J. A novel bat algorithm with multiple strategies coupling for numerical optimization. Mathematics 2019, 7, 135. [Google Scholar] [CrossRef]
  21. Cai, X.J.; Gao, X.Z.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Bio-Inspir. Comput. 2016, 8, 205–214. [Google Scholar] [CrossRef]
  22. Cui, Z.H.; Xue, F.; Cai, X.J.; Gao, Y.; Wang, G.G.; Chen, J.J. Detection of malicious code variants based on deep learning. IEEE Trans. Ind. Inf. 2018, 14, 3187–3196. [Google Scholar] [CrossRef]
  23. Cai, X.; Wang, H.; Cui, Z.; Cai, J.; Xue, Y.; Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybern. 2018, 9, 199–215. [Google Scholar] [CrossRef]
  24. Wang, G.G.; Guo, L.H.; Gandomi, A.H.; Hao, G.S.; Wang, H.Q. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
  25. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  26. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  27. Wang, G.G.; Guo, L.H.; Wang, H.Q.; Duan, H.; Luo, L.; Li, J. Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Comput. Appl. 2014, 24, 853–871. [Google Scholar] [CrossRef]
  28. Grimaccia, F.; Gruosso, G.; Mussetta, M.; Niccolai, A.; Zich, R.E. Design of tubular permanent magnet generators for vehicle energy harvesting by means of social network optimization. IEEE Trans. Ind. Electron. 2018, 65, 1884–1892. [Google Scholar] [CrossRef]
  29. Karaboga, D.; Akay, B. A survey: Algorithms simulating bee swarm intelligence. Artif. Intell. Rev. 2009, 31, 61–85. [Google Scholar] [CrossRef]
  30. Kumar, A.; Kumar, D.; Jarial, S.K. A review on artificial bee colony algorithms and their applications to data clustering. Cybern. Inf. Technol. 2017, 17, 3–28. [Google Scholar] [CrossRef]
  31. Zhu, G.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 2010, 217, 3166–3173. [Google Scholar] [CrossRef]
  32. Wang, H.; Wu, Z.J.; Rahnamayan, S.; Sun, H.; Liu, Y.; Pan, J.S. Multi-strategy ensemble artificial bee colony algorithm. Inf. Sci. 2014, 279, 587–603. [Google Scholar] [CrossRef]
  33. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  34. Karaboga, D.; Gorkemli, B. A quick artificial bee colony (qABC) algorithm and its performance on optimization problems. Appl. Soft Comput. 2014, 23, 227–238. [Google Scholar] [CrossRef]
  35. Gao, W.; Liu, S. A modified artificial bee colony algorithm. Comput. Oper. Res. 2012, 39, 687–697. [Google Scholar] [CrossRef]
  36. Cui, L.Z.; Li, G.H.; Lin, Q.Z.; Du, Z.H.; Gao, W.F.; Chen, J.Y.; Lu, N. A novel artificial bee colony algorithm with depth-first search framework and elite-guided search equation. Inf. Sci. 2016, 367–368, 1012–1044. [Google Scholar] [CrossRef]
  37. Li, G.H.; Cui, L.Z.; Fu, X.H.; Wen, Z.K.; Lu, N.; Lu, J. Artificial bee colony algorithm with gene recombination for numerical function optimization. Appl. Soft Comput. 2017, 52, 146–159. [Google Scholar] [CrossRef]
  38. Yao, X.; Chan, F.T.S.; Lin, Y.; Jin, H.; Gao, L.; Wang, X.; Zhou, J. An individual dependent multi-colony artificial bee colony algorithm. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  39. Kumar, D.; Mishra, K.K. Co-variance guided artificial bee colony. Appl. Soft Comput. 2018, 70, 86–107. [Google Scholar] [CrossRef]
  40. Yang, J.; Jiang, Q.; Wang, L.; Liu, S.; Zhang, Y.; Li, W.; Wang, B. An adaptive encoding learning for artificial bee colony algorithms. J. Comput. Sci. 2019, 30, 11–27. [Google Scholar] [CrossRef]
  41. Chen, X.; Tianfield, H.; Li, K. Self-adaptive differential artificial bee colony algorithm for global optimization problems. Swarm Evol. Comput. 2019, 45, 70–91. [Google Scholar] [CrossRef]
  42. Zhang, X.; Zhang, X. A binary artificial bee colony algorithm for constructing spanning trees in vehicular ad hoc networks. Ad Hoc Netw. 2017, 58, 198–204. [Google Scholar] [CrossRef]
  43. Zorarpacı, E.; Özel, S.A. A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Syst. Appl. 2016, 62, 91–103. [Google Scholar] [CrossRef]
  44. Yuan, X.; Wang, P.; Yuan, Y.; Huang, Y.; Zhang, X. A new quantum inspired chaotic artificial bee colony algorithm for optimal power flow problem. Energy Convers. Manag. 2015, 100, 1–9. [Google Scholar] [CrossRef]
  45. Dokeroglu, T.; Sevinc, E.; Cosar, A. Artificial bee colony optimization for the quadratic assignment problem. Appl. Soft Comput. 2019, 76, 595–606. [Google Scholar] [CrossRef]
  46. Kishor, A.; Singh, P.K.; Prakash, J. NSABC: Non-dominated sorting based multi-objective artificial bee colony algorithm and its application in data clustering. Neurocomputing 2016, 216, 514–533. [Google Scholar] [CrossRef]
  47. Wang, P.; Xue, F.; Li, H.; Cui, Z.; Xie, L.; Chen, J. A multi-objective DV-Hop localization algorithm based on NSGA-II in internet of things. Mathematics 2019, 7, 184. [Google Scholar] [CrossRef]
  48. Pan, J.S.; Kong, L.P.; Sung, T.W.; Tsai, P.W.; Snasel, V. α-Fraction first strategy for hierarchical wireless sensor networks. J. Internet Technol. 2018, 19, 1717–1726. [Google Scholar]
  49. Xue, X.S.; Pan, J.S. A compact co-evolutionary algorithm for sensor ontology meta-matching. Knowl. Inf. Syst. 2018, 56, 335–353. [Google Scholar] [CrossRef]
  50. Hashim, H.A.; Ayinde, B.O.; Abido, M.A. Optimal placement of relay nodes in wireless sensor network using artificial bee colony algorithm. J. Netw. Comput. Appl. 2016, 64, 239–248. [Google Scholar] [CrossRef] [Green Version]
  51. Wang, H.; Wu, Z.J.; Zhou, X.Y.; Rahnamayan, S. Accelerating artificial bee colony algorithm by using an external archive. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 517–521. [Google Scholar]
  52. Li, B.; Sun, H.; Zhao, J.; Wang, H.; Wu, R.X. Artificial bee colony algorithm with different dimensional learning. Appl. Res. Comput. 2016, 33, 1028–1033. [Google Scholar]
  53. Wang, H.; Wang, W.; Zhou, X.; Sun, H.; Zhao, J.; Yu, X.; Cui, Z. Firefly algorithm with neighborhood attraction. Inf. Sci. 2017, 382, 374–387. [Google Scholar] [CrossRef]
  54. Wang, H.; Sun, H.; Li, C.H.; Rahnamayan, S.; Pan, J.S. Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 2013, 223, 119–135. [Google Scholar] [CrossRef]
  55. Wang, G.G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555. [Google Scholar] [CrossRef]
  56. Wang, H.; Rahnamayan, S.; Sun, H.; Omran, M.G.H. Gaussian bare-bones differential evolution. IEEE Trans. Cybern. 2013, 43, 634–647. [Google Scholar] [CrossRef]
  57. Wang, H.; Cui, Z.H.; Sun, H.; Rahnamayan, S.; Yang, X.S. Randomly attracted firefly algorithm with neighborhood search and dynamic parameter adjustment mechanism. Soft Comput. 2017, 21, 5325–5339. [Google Scholar] [CrossRef]
  58. Sun, C.L.; Zeng, J.C.; Pan, J.S.; Xue, S.D.; Jin, Y.C. A new fitness estimation strategy for particle swarm optimization. Inf. Sci. 2013, 221, 355–370. [Google Scholar] [CrossRef]
  59. Kiran, M.S.; Hakli, H.; Gunduz, M.; Uguz, H. Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf. Sci. 2015, 300, 140–157. [Google Scholar] [CrossRef]
  60. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session and Competition on Real-Parameter Optimization; Tech. Rep. 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  61. Serani, A.; Leotardi, C.; Iemma, U.; Campana, E.F.; Fasano, G.; Diez, M. Parameter selection in synchronous and asynchronous deterministic particle swarm optimization for ship hydrodynamics problems. Appl. Soft Comput. 2016, 49, 313–334. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The flowchart of the proposed artificial bee colony-elite strategy and dimension learning (ABC-ESDL) algorithm.
Figure 2. The convergence curves of ABC-ESDL, DFSABC-elite, MABC, and ABC on selected functions. (a) Sphere; (b) Schwefel 2.22; (c) Rosenbrock; (d) Quartic; (e) Ackley; and (f) Penalized.
Table 1. Benchmark problems.
Name | Function | Global Optimum
Sphere | f1(X) = Σ_{i=1..D} x_i² | 0
Schwefel 2.22 | f2(X) = Σ_{i=1..D} |x_i| + Π_{i=1..D} |x_i| | 0
Schwefel 1.2 | f3(X) = Σ_{i=1..D} (Σ_{j=1..i} x_j)² | 0
Schwefel 2.21 | f4(X) = max{|x_i|, 1 ≤ i ≤ D} | 0
Rosenbrock | f5(X) = Σ_{i=1..D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | 0
Step | f6(X) = Σ_{i=1..D} (⌊x_i + 0.5⌋)² | 0
Quartic | f7(X) = Σ_{i=1..D} i·x_i⁴ + rand[0, 1) | 0
Schwefel 2.26 | f8(X) = −Σ_{i=1..D} x_i·sin(√|x_i|) | −418.98·D
Rastrigin | f9(X) = Σ_{i=1..D} [x_i² − 10cos(2πx_i) + 10] | 0
Ackley | f10(X) = −20·exp(−0.2·√((1/D)·Σ_{i=1..D} x_i²)) − exp((1/D)·Σ_{i=1..D} cos(2πx_i)) + 20 + e | 0
Griewank | f11(X) = (1/4000)·Σ_{i=1..D} x_i² − Π_{i=1..D} cos(x_i/√i) + 1 | 0
Penalized | f12(X) = (π/D)·{10sin²(πy_1) + Σ_{i=1..D−1} (y_i − 1)²·[1 + 10sin²(πy_{i+1})] + (y_D − 1)²} + Σ_{i=1..D} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | 0
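These definitions translate directly into code. A minimal NumPy sketch of three of the benchmark functions from Table 1 (Sphere f1, Rastrigin f9, and Ackley f10), each evaluated on a D-dimensional point:

```python
import numpy as np

def sphere(x):
    """f1: Sphere. Global optimum 0 at x = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """f9: Rastrigin. Global optimum 0 at x = 0."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):
    """f10: Ackley. Global optimum 0 at x = 0."""
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

x0 = np.zeros(30)  # D = 30, as in Table 2
print(sphere(x0), rastrigin(x0), ackley(x0))  # all three are ≈ 0 at the optimum
```

Evaluating such a function on each food source is the fitness step inside every ABC variant compared in the tables below.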
Table 2. Results of ABC-ESDL and six other ABC algorithms for D = 30.
Functions | ABC | GABC | IABC | MABC | ABCVSS | DFSABC-elite | ABC-ESDL
(each cell: Mean (Std Dev))
f1 | 1.14×10^−15 (3.58×10^−16) | 4.52×10^−16 (2.79×10^−16) | 1.67×10^−35 (6.29×10^−36) | 9.63×10^−42 (6.67×10^−41) | 1.10×10^−36 (3.92×10^−36) | 4.72×10^−75 (3.17×10^−74) | 2.30×10^−82 (1.13×10^−80)
f2 | 1.49×10^−10 (2.34×10^−10) | 1.43×10^−15 (3.56×10^−15) | 3.09×10^−19 (3.84×10^−19) | 1.5×10^−21 (6.64×10^−22) | 8.39×10^−20 (1.6×10^−19) | 6.01×10^−38 (2.25×10^−38) | 3.13×10^−41 (6.81×10^−40)
f3 | 1.05×10^4 (3.37×10^3) | 4.26×10^3 (2.17×10^3) | 5.54×10^3 (2.71×10^3) | 1.48×10^4 (1.44×10^4) | 9.92×10^3 (9.36×10^3) | 4.90×10^3 (9.80×10^3) | 3.61×10^3 (1.28×10^3)
f4 | 4.07×10^1 (1.72×10^1) | 1.16×10^1 (6.32×10^0) | 1.06×10^1 (4.26×10^0) | 5.54×10^−1 (4.50×10^−1) | 4.36×10^−1 (3.72×10^−1) | 2.60×10^−2 (2.99×10^−2) | 2.11×10^−1 (7.20×10^−1)
f5 | 1.28×10^0 (1.05×10^0) | 2.30×10^−1 (3.72×10^−1) | 2.36×10^−1 (3.94×10^−1) | 1.10×10^0 (3.45×10^0) | 1.20×10^0 (1.03×10^1) | 1.58×10^1 (1.00×10^2) | 1.16×10^−3 (2.08×10^−2)
f6 | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)
f7 | 1.54×10^−1 (2.93×10^−1) | 5.63×10^−2 (3.66×10^−2) | 4.23×10^−2 (3.02×10^−2) | 2.77×10^−2 (6.36×10^−3) | 3.25×10^−2 (4.72×10^−2) | 1.64×10^−2 (2.42×10^−2) | 1.46×10^−2 (2.64×10^−2)
f8 | −12,490.5 (5.87×10^1) | −12,569.5 (3.25×10^−10) | −12,569.5 (1.31×10^−10) | −12,569.5 (1.97×10^−13) | −12,569.5 (1.94×10^−11) | −12,569.5 (1.97×10^−11) | −12,569.5 (4.65×10^−11)
f9 | 7.11×10^−15 (2.28×10^−15) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)
f10 | 1.60×10^−9 (4.32×10^−9) | 3.97×10^−14 (2.83×10^−14) | 3.61×10^−14 (1.76×10^−14) | 7.07×10^−14 (2.36×10^−14) | 3.02×10^−14 (2.04×10^−14) | 2.87×10^−14 (1.46×10^−14) | 2.82×10^−14 (2.00×10^−14)
f11 | 1.04×10^−13 (3.56×10^−13) | 1.12×10^−16 (2.53×10^−16) | 0 (0) | 0 (0) | 1.85×10^−17 (3.87×10^−16) | 2.05×10^−11 (6.04×10^−10) | 0 (0)
f12 | 5.46×10^−16 (3.46×10^−16) | 4.03×10^−16 (2.39×10^−16) | 3.02×10^−17 (0) | 1.57×10^−32 (4.50×10^−47) | 1.57×10^−32 (4.50×10^−47) | 1.57×10^−32 (4.50×10^−47) | 1.57×10^−32 (5.81×10^−47)
w/t/l | 11/1/0 | 9/3/0 | 8/4/0 | 7/5/0 | 8/4/0 | 7/4/1 | -
* The best result for each function is shown in boldface.
Table 3. Results of ABC-ESDL and six other ABC algorithms for D = 100.
Functions | ABC | GABC | IABC | MABC | ABCVSS | DFSABC-elite | ABC-ESDL
(each cell: Mean (Std Dev))
f1 | 7.42×10^−15 (5.89×10^−15) | 3.37×10^−15 (7.52×10^−16) | 3.23×10^−33 (1.45×10^−34) | 7.98×10^−38 (2.17×10^−37) | 6.18×10^−35 (1.84×10^−34) | 1.04×10^−73 (1.09×10^−72) | 6.82×10^−85 (3.06×10^−83)
f2 | 1.09×10^−9 (4.56×10^−9) | 6.54×10^−15 (2.86×10^−15) | 4.82×10^−18 (3.53×10^−18) | 2.68×10^−20 (3.49×10^−20) | 1.18×10^−18 (1.47×10^−18) | 2.80×10^−37 (5.35×10^−37) | 1.02×10^−52 (1.76×10^−51)
f3 | 1.13×10^5 (2.62×10^4) | 9.28×10^4 (2.71×10^4) | 9.76×10^4 (2.81×10^4) | 1.58×10^5 (9.39×10^4) | 1.10×10^5 (5.12×10^4) | 6.42×10^4 (5.17×10^4) | 7.82×10^4 (8.55×10^4)
f4 | 8.91×10^1 (4.37×10^1) | 8.37×10^1 (3.68×10^1) | 8.29×10^1 (1.28×10^1) | 3.88×10^1 (3.70×10^0) | 3.82×10^0 (1.09×10^0) | 7.32×10^−1 (1.01×10^0) | 2.66×10^1 (1.32×10^1)
f5 | 3.46×10^0 (4.29×10^0) | 2.08×10^1 (3.46×10^0) | 2.97×10^0 (2.72×10^0) | 2.31×10^0 (2.62×10^0) | 1.29×10^1 (1.23×10^2) | 2.07×10^1 (8.46×10^1) | 1.92×10^−3 (3.22×10^−2)
f6 | 1.58×10^0 (1.68×10^0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0)
f7 | 1.96×10^0 (2.57×10^0) | 9.70×10^−1 (7.32×10^−1) | 7.45×10^−1 (2.27×10^−1) | 1.75×10^−1 (1.67×10^−2) | 1.44×10^−1 (1.72×10^−1) | 1.44×10^−1 (8.16×10^−2) | 8.34×10^−2 (9.15×10^−2)
f8 | −40,947.5 (7.34×10^2) | −41,898.3 (5.68×10^−10) | −41,898.3 (3.21×10^−10) | −41,898.3 (2.91×10^−12) | −41,898.3 (1.60×10^−10) | −41,898.3 (7.02×10^−11) | −41,898.3 (1.63×10^−10)
f9 | 1.83×10^−11 (2.27×10^−11) | 1.95×10^−14 (3.53×10^−14) | 1.42×10^−14 (2.63×10^−14) | 0 (0) | 0 (0) | 0 (0) | 0 (0)
f10 | 3.54×10^−9 (7.28×10^−10) | 1.78×10^−13 (5.39×10^−13) | 1.50×10^−13 (4.87×10^−13) | 3.58×10^−11 (2.91×10^−12) | 1.32×10^−13 (3.64×10^−14) | 1.25×10^−13 (5.36×10^−14) | 1.25×10^−13 (5.61×10^−14)
f11 | 1.12×10^−14 (9.52×10^−15) | 1.44×10^−15 (3.42×10^−15) | 7.78×10^−16 (5.24×10^−16) | 0 (0) | 0 (0) | 1.81×10^−16 (3.42×10^−15) | 0 (0)
f12 | 4.96×10^−15 (3.29×10^−15) | 2.99×10^−15 (4.37×10^−15) | 9.05×10^−18 (0) | 4.71×10^−33 (0) | 4.71×10^−33 (0) | 4.71×10^−33 (0) | 4.71×10^−33 (0)
w/t/l | 12/0/0 | 10/2/0 | 10/2/0 | 7/5/0 | 7/5/0 | 5/5/2 | -
* The best result for each function is shown in boldface.
Table 4. Mean ranks achieved by the Friedman test for D = 30 and 100.
Algorithms | Mean Rank (D = 30) | Mean Rank (D = 100)
ABC | 6.50 | 6.67
GABC | 4.58 | 5.33
IABC | 4.08 | 4.42
MABC | 3.79 | 3.58
ABCVSS | 3.79 | 3.29
DFSABC-elite | 3.29 | 2.67
ABC-ESDL | 1.96 | 2.04
* The best rank for each dimension is shown in boldface.
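Table 4's mean ranks follow the usual Friedman procedure: rank the algorithms on each problem (1 = best), then average each algorithm's ranks over all problems. A minimal NumPy sketch with made-up error values (hypothetical numbers, not the paper's data; ties are ignored for brevity):

```python
import numpy as np

# Hypothetical mean errors for 3 algorithms (columns) on 4 benchmark
# functions (rows); the real per-function values are those in Tables 2 and 3.
errors = np.array([
    [1e-15, 1e-35, 1e-82],
    [1e-10, 1e-19, 1e-41],
    [1e4,   5e3,   3e3],
    [4e1,   1e1,   2e-1],
])

# Double argsort converts each row of errors into ranks 0..n-1 (smallest
# error -> rank 0); +1 shifts to the conventional 1-based ranks.
ranks = errors.argsort(axis=1).argsort(axis=1) + 1

# Average the ranks down each column; the smallest mean rank wins,
# which is how the boldface entries in Table 4 are determined.
mean_ranks = ranks.mean(axis=0)
print(mean_ranks)  # → [3. 2. 1.]: the third algorithm ranks best everywhere
```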
Table 5. Comparison of ABC with different strategies (D = 30).
Problems | ABC | ABC-ES | ABC-DL | ABC-ESDL
(each cell: Mean (Std Dev))
f1 | 1.14×10^−15 (3.58×10^−16) | 1.37×10^−33 (2.51×10^−34) | 4.67×10^−17 (4.78×10^−17) | 2.30×10^−82 (1.13×10^−80)
f2 | 1.49×10^−10 (2.34×10^−10) | 2.82×10^−21 (3.23×10^−21) | 1.02×10^−10 (3.46×10^−11) | 3.13×10^−41 (6.81×10^−40)
f3 | 1.05×10^4 (3.37×10^3) | 6.71×10^3 (2.94×10^3) | 7.62×10^3 (3.27×10^3) | 3.61×10^3 (1.28×10^3)
f4 | 4.07×10^1 (1.72×10^1) | 2.21×10^0 (2.06×10^0) | 3.82×10^1 (1.24×10^1) | 2.11×10^−1 (7.20×10^−1)
f5 | 1.28×10^0 (1.05×10^0) | 3.88×10^1 (1.65×10^1) | 9.63×10^−2 (1.09×10^−2) | 1.16×10^−3 (2.08×10^−2)
f6 | 0 (0) | 0 (0) | 0 (0) | 0 (0)
f7 | 1.54×10^−1 (2.93×10^−1) | 9.40×10^−2 (1.77×10^−2) | 2.82×10^−1 (2.51×10^−2) | 1.46×10^−2 (2.64×10^−2)
f8 | −12,490.5 (5.87×10^1) | −12,557.8 (1.62×10^1) | −12,533.1 (1.93×10^2) | −12,569.5 (4.65×10^−11)
f9 | 7.11×10^−15 (2.28×10^−15) | 7.94×10^−14 (2.58×10^−15) | 2.43×10^−15 (0) | 0 (0)
f10 | 1.60×10^−9 (4.32×10^−9) | 3.49×10^−14 (1.87×10^−14) | 6.45×10^−10 (1.99×10^−14) | 2.82×10^−14 (2.00×10^−14)
f11 | 1.04×10^−13 (3.56×10^−13) | 7.55×10^−3 (6.38×10^−3) | 2.49×10^−15 (1.52×10^−15) | 0 (0)
f12 | 5.46×10^−16 (3.46×10^−16) | 1.57×10^−32 (0) | 1.56×10^−19 (0) | 1.57×10^−32 (5.81×10^−47)
w/t/l | 11/1/0 | 10/2/0 | 11/1/0 | -
* The best result for each function is shown in boldface.
Table 6. Comparison of ABC with different strategies (D = 100).
Problems | ABC | ABC-ES | ABC-DL | ABC-ESDL
(each cell: Mean (Std Dev))
f1 | 7.42×10^−15 (5.89×10^−15) | 1.53×10^−27 (2.87×10^−26) | 3.96×10^−15 (1.91×10^−14) | 6.82×10^−85 (3.06×10^−83)
f2 | 1.09×10^−9 (4.56×10^−9) | 1.11×10^−16 (1.18×10^−15) | 9.41×10^−10 (1.86×10^−9) | 1.02×10^−52 (1.76×10^−51)
f3 | 1.13×10^5 (2.62×10^4) | 9.39×10^4 (2.63×10^4) | 1.08×10^5 (3.91×10^4) | 7.82×10^4 (8.55×10^4)
f4 | 8.91×10^1 (4.37×10^1) | 3.44×10^1 (3.44×10^1) | 8.73×10^1 (7.92×10^0) | 2.66×10^1 (1.32×10^1)
f5 | 3.46×10^0 (4.29×10^0) | 1.19×10^2 (1.19×10^2) | 2.21×10^−1 (1.83×10^0) | 1.92×10^−3 (3.22×10^−2)
f6 | 1.58×10^0 (1.68×10^0) | 0 (0) | 3.13×10^0 (6.59×10^0) | 0 (0)
f7 | 1.96×10^0 (2.57×10^0) | 1.87×10^−1 (1.35×10^−1) | 1.43×10^0 (1.04×10^0) | 8.34×10^−2 (9.15×10^−2)
f8 | −40,947.5 (7.34×10^2) | −41,762.1 (5.95×10^2) | −41,240.7 (8.02×10^2) | −41,898.3 (1.63×10^−10)
f9 | 1.83×10^−11 (2.27×10^−11) | 1.29×10^−9 (3.78×10^−8) | 2.07×10^−6 (5.32×10^−5) | 0 (0)
f10 | 3.54×10^−9 (7.28×10^−10) | 1.57×10^−13 (4.35×10^−14) | 2.17×10^−9 (5.06×10^−9) | 1.25×10^−13 (5.61×10^−14)
f11 | 1.12×10^−14 (9.52×10^−15) | 9.13×10^−4 (1.49×10^−2) | 1.89×10^−15 (7.52×10^−15) | 0 (0)
f12 | 4.96×10^−15 (3.29×10^−15) | 4.29×10^−28 (8.78×10^−27) | 3.21×10^−18 (2.19×10^−17) | 4.71×10^−33 (7.50×10^−48)
w/t/l | 12/0/0 | 11/1/0 | 12/0/0 | -
* The best result for each function is shown in boldface.
Table 7. Results on the CEC 2013 benchmark set.
Problems | ABC | GABC | MABC | ABCVSS | DFSABC-elite | ABC-ESDL
(each cell: Mean (Std Dev))
f1 | 6.82×10^−14 (2.18×10^−13) | 5.71×10^−13 (9.44×10^−14) | 4.55×10^−14 (1.33×10^−12) | 6.82×10^−14 (1.88×10^−12) | 4.55×10^−14 (1.19×10^−12) | 7.29×10^−5 (1.66×10^−3)
f2 | 1.05×10^5 (5.86×10^6) | 3.43×10^7 (5.63×10^6) | 1.88×10^7 (4.34×10^7) | 2.54×10^5 (7.24×10^5) | 1.98×10^7 (4.55×10^7) | 2.55×10^6 (1.16×10^6)
f3 | 2.49×10^9 (6.24×10^9) | 1.05×10^10 (1.19×10^9) | 5.58×10^7 (2.94×10^7) | 1.89×10^8 (5.60×10^8) | 7.83×10^7 (2.26×10^7) | 1.95×10^6 (3.08×10^6)
f4 | 6.81×10^3 (2.09×10^3) | 3.51×10^5 (1.48×10^4) | 9.83×10^3 (2.56×10^3) | 8.18×10^3 (2.05×10^3) | 6.58×10^3 (1.79×10^3) | 5.94×10^3 (1.81×10^5)
f5 | 4.97×10^−10 (8.53×10^−9) | 4.70×10^−13 (5.65×10^−14) | 5.68×10^−14 (1.52×10^−12) | 1.71×10^−13 (6.98×10^−12) | 1.02×10^−13 (2.50×10^−12) | 1.07×10^−3 (3.00×10^−3)
f6 | 1.73×10^0 (5.07×10^0) | 1.77×10^2 (1.82×10^0) | 1.76×10^0 (6.81×10^0) | 2.25×10^0 (7.12×10^0) | 1.67×10^0 (1.14×10^0) | 8.58×10^−1 (3.57×10^−1)
f7 | 1.29×10^2 (3.58×10^1) | 4.09×10^2 (1.29×10^1) | 1.06×10^1 (3.52×10^1) | 1.26×10^1 (3.96×10^1) | 9.27×10^0 (2.53×10^0) | 7.16×10^0 (2.15×10^0)
f8 | 2.10×10^0 (5.96×10^0) | 2.13×10^1 (3.51×10^−2) | 2.09×10^0 (5.97×10^0) | 2.11×10^0 (5.97×10^0) | 2.10×10^0 (5.96×10^0) | 2.08×10^0 (5.95×10^0)
f9 | 3.02×10^1 (8.69×10^0) | 1.40×10^2 (2.43×10^1) | 2.79×10^1 (8.69×10^0) | 3.05×10^1 (8.81×10^0) | 3.04×10^1 (8.47×10^0) | 2.97×10^1 (8.08×10^0)
f10 | 3.40×10^−1 (8.22×10^−1) | 1.43×10^0 (8.48×10^−1) | 1.62×10^−1 (4.60×10^−1) | 3.05×10^−1 (1.32×10^−1) | 2.46×10^−1 (5.59×10^−1) | 2.51×10^−2 (6.91×10^−2)
f11 | 3.30×10^−13 (4.73×10^−13) | 1.54×10^−13 (2.86×10^−14) | 1.14×10^−14 (3.24×10^−14) | 1.71×10^−14 (4.30×10^−14) | 5.68×10^−15 (2.50×10^−15) | 6.81×10^−4 (1.91×10^−4)
f12 | 3.14×10^1 (8.42×10^1) | 1.60×10^3 (5.64×10^1) | 1.57×10^1 (5.52×10^0) | 2.46×10^1 (6.01×10^1) | 2.20×10^1 (5.61×10^0) | 1.51×10^1 (4.91×10^0)
f13 | 3.14×10^1 (9.36×10^0) | 1.81×10^3 (5.60×10^1) | 2.63×10^1 (7.29×10^0) | 2.27×10^1 (7.64×10^0) | 2.13×10^1 (6.65×10^0) | 2.70×10^1 (7.51×10^0)
f14 | 1.09×10^0 (4.05×10^0) | 2.85×10^0 (1.28×10^0) | 2.48×10^−1 (5.23×10^−1) | 6.25×10^−3 (1.14×10^−2) | 2.10×10^−2 (1.80×10^−2) | 7.47×10^−1 (3.05×10^−1)
f15 | 3.49×10^3 (1.22×10^2) | 1.57×10^4 (6.11×10^2) | 3.21×10^3 (1.06×10^2) | 2.65×10^3 (1.28×10^2) | 5.09×10^3 (1.40×10^2) | 2.62×10^3 (1.10×10^2)
f16 | 1.65×10^0 (5.11×10^0) | 2.07×10^0 (2.57×10^−1) | 1.57×10^0 (3.98×10^0) | 2.15×10^0 (5.66×10^0) | 2.49×10^0 (5.97×10^0) | 8.49×10^−1 (3.75×10^−1)
f17 | 3.11×10^0 (8.81×10^1) | 1.07×10^2 (1.06×10^2) | 3.04×10^0 (8.67×10^0) | 3.04×10^0 (8.66×10^0) | 3.27×10^0 (8.66×10^0) | 3.09×10^0 (8.76×10^0)
f18 | 3.88×10^2 (1.01×10^2) | 1.76×10^3 (5.02×10^2) | 1.90×10^2 (6.59×10^1) | 3.45×10^2 (9.27×10^1) | 2.84×10^2 (7.53×10^1) | 1.44×10^2 (4.99×10^1)
f19 | 1.07×10^−1 (3.67×10^−1) | 2.25×10^0 (2.94×10^−1) | 6.81×10^−2 (2.26×10^−2) | 1.54×10^−1 (6.58×10^−1) | 4.56×10^−2 (2.29×10^−2) | 4.49×10^−2 (9.31×10^−2)
f20 | 1.54×10^1 (4.17×10^0) | 5.00×10^1 (6.93×10^0) | 1.46×10^1 (4.11×10^0) | 1.48×10^1 (4.11×10^0) | 1.46×10^1 (4.02×10^0) | 1.41×10^1 (3.82×10^0)
f21 | 2.01×10^2 (5.59×10^1) | 3.67×10^2 (9.04×10^1) | 2.06×10^2 (5.77×10^1) | 2.18×10^2 (6.27×10^1) | 2.00×10^2 (9.20×10^1) | 1.02×10^2 (5.31×10^1)
f22 | 1.16×10^2 (3.71×10^1) | 7.47×10^1 (2.64×10^1) | 1.05×10^2 (3.06×10^1) | 1.15×10^2 (4.13×10^1) | 1.19×10^2 (3.20×10^1) | 1.47×10^1 (2.04×10^0)
f23 | 5.53×10^3 (1.52×10^2) | 2.16×10^4 (8.78×10^3) | 4.11×10^3 (1.35×10^2) | 5.57×10^3 (1.63×10^2) | 5.87×10^3 (1.74×10^2) | 3.18×10^3 (1.26×10^2)
f24 | 3.02×10^2 (8.33×10^1) | 6.00×10^2 (7.56×10^1) | 2.88×10^2 (8.13×10^1) | 2.86×10^2 (8.27×10^1) | 2.86×10^2 (8.09×10^1) | 2.80×10^2 (8.07×10^1)
f25 | 3.15×10^2 (8.88×10^1) | 7.16×10^2 (9.05×10^0) | 2.96×10^2 (8.53×10^1) | 3.02×10^2 (8.67×10^1) | 3.00×10^2 (8.53×10^1) | 3.00×10^2 (8.56×10^1)
f26 | 2.01×10^2 (5.73×10^1) | 2.07×10^2 (3.95×10^−1) | 2.01×10^2 (5.72×10^1) | 2.01×10^2 (5.73×10^1) | 2.01×10^2 (5.72×10^1) | 2.00×10^2 (5.71×10^1)
f27 | 4.02×10^2 (1.34×10^1) | 3.81×10^3 (6.27×10^2) | 1.11×10^3 (3.00×10^2) | 4.02×10^2 (1.90×10^1) | 4.02×10^2 (1.14×10^1) | 4.00×10^2 (1.26×10^1)
f28 | 1.64×10^2 (7.10×10^1) | 4.27×10^3 (5.67×10^2) | 3.00×10^2 (8.54×10^1) | 3.13×10^2 (9.29×10^1) | 3.00×10^2 (8.70×10^1) | 1.04×10^2 (8.45×10^1)
w/t/l | 25/0/3 | 25/0/3 | 20/0/8 | 21/0/7 | 22/1/5 | -
* The best result for each function is shown in boldface.

Xiao, S.; Wang, W.; Wang, H.; Tan, D.; Wang, Y.; Yu, X.; Wu, R. An Improved Artificial Bee Colony Algorithm Based on Elite Strategy and Dimension Learning. Mathematics 2019, 7, 289. https://doi.org/10.3390/math7030289
